The ChatGPT approach to Artificial General Intelligence (AGI)

 

By Kristo Ivanov, prof.em., Umeå University

July 2023 (rev. 240428-2135)

 

<https://ia802703.us.archive.org/13/items/chat-gpt-agi/ChatGPT-AGI.html>

<https://www8.informatik.umu.se/~kivanov/ChatGPT-AGI.html>

 

 

CONTENTS

 

1. Link to a general disclaimer

2. Historical background

3. Initial overall considerations

4. Advancing toward nowhere?

AI in The Design of Inquiring Systems

On Statistics and Experimental Inference

Apparent Obsolescence and Desperate Logic

A Faustian Bargain

A Cultural Lobotomy

The Human Psyche

Driving Forces behind Technology

A new AI military “Gospel”?

Ego Inflation or Philosophy behind “Powerful Experiences”

David Noble and “The religion of technology”

Autism, or “Mathematical computer-oriented Minds”?

Artificially intelligent artistic production?

Return to The Design of Inquiring Systems

5. Temporary conclusions

6. References to ChatGPT in other essays of mine

From: Information and Theology

From: Reason and Gender

From: Computerization as Design of Logic Acrobatics

From: Information and Debate

From: Computers as Embodied Mathematics and Logic

From: The Russia-NATO-Ukraine Information Crisis

7. The open letter: "Pause Giant AI Experiments"

8. Comment on the open letter (proposal for moratorium)

9. Case study: “Creating safe AGI that benefits all of humanity”

10. Conclusion: the meaning of hype

11. Conclusion: beyond the hype

12. Concluding humor

 

2. Historical background

 

The Wikipedia article on artificial intelligence (AI) as of July 2023, read here as an introduction to artificial general intelligence (AGI), observes in the last paragraph of its introductory section that the term artificial intelligence has been criticized for overhyping AI's true technological capabilities. The present text is intended to specify a few details of my criticism. The second paragraph of the same introductory section writes about AI's "several waves of optimism":

 

Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.

 

I understand that we are experiencing its last, or unfortunately only its latest, wave of optimism. I suggest that the cause of these waves of optimism is a basic misunderstanding of what intelligence, and consequently artificiality, are or should be, allied to the forecast marketing power and hoped-for economic profitability (and for whom?) of the hype around AI and now AGI. This is related to a misunderstanding of the essence of logic and mathematics, and consequently a misunderstanding of statistics, which is reduced to big data and to a controversially and problematically described mathematical statistics, as I tried to show in discussing how they are embodied in computers. The result suggests an ongoing cultural crisis as a societal analogy to the scientific scandal of lobotomy: a cutting off of the emotional, valuational and intuitional dimensions of thought and psyche in a great deal of the population. It portrays difficulties that were in part discreetly addressed a long time ago in the articles The Artificiality of Science and The Metaphysics of Design: A Simon-Churchman "Debate" (in Interfaces, Vol. 10, No. 2, April 1980).

 

The first-mentioned article, on the artificiality of science, is Churchman's review of Herbert Simon's book The Sciences of the Artificial (1969), which academically gave the latter prestige in the field of artificial intelligence, impressing the academic community with his being a "genius" in the most disparate fields of knowledge, and concluding with his being awarded the Nobel Memorial Prize in Economics. All of this was based on a misunderstanding of the essence and function of logic and mathematics, as mentioned above.

 

The latter article refers to the work of Herbert Simon, a pioneer of AI and laureate of the Nobel Memorial Prize in Economic Sciences, and of prof. West Churchman, philosopher and systems scientist. It also raises the relevance of Churchman's timely article "An analysis of the concept of simulation" (in Symposium on Simulation Models, A.C. Hoggatt & F.R. Balderston, eds., Cincinnati: South-Western, 1963, pp. 1-13), as well as its application by Ian I. Mitroff, "Fundamental issues in the simulation of human behavior" (Management Science, Vol. 15, No. 12, August 1969). The main issue is also covered in Churchman's "Real time systems and public information" (in Proceedings of the AFIPS/ACM 1968 Fall Joint Computer Conference, pp. 1467-1468, among others in pp. 1467-1498). I am sure that very few people will recognize that the same issue is partly covered in "The researcher and the manager: A dialectic of implementation", in Management Science, Vol. 11, No. 4, Feb. 1965 (more on this below). This is relevant if we conceive the manager as analogous to the questioner and user of the answer, and the researcher as analogous to the producer of the answer, based on his database and elaboration of the data.

 

It is finally symptomatic that the whole of the ongoing discussion about AI and AGI apparently ignores completely the violent hype that started in the (now "obsolete"?) seventies around Hubert Dreyfus's views on artificial intelligence. There is no interest in What Computers Can't Do. His views will not be treated here because they were based on "modern" continental philosophers such as Merleau-Ponty and Martin Heidegger, and on phenomenology, which I perceive as arising from a misunderstanding or abuse of the philosophy of Immanuel Kant, as discussed in my essay on the ongoing computerization of society. The reason that the earlier hype around Dreyfus is ignored in the present renewed hype of AI-AGI is related to why humanity continues with divorces and wars, and why ChatGPT has not been adduced to help in the solution of the Russia-Ukraine conflict or the 2023 Israel-Hamas war. More on this below, but before that it is necessary to orient readers who do not yet know what the acronym ChatGPT stands for, by referring them to Wikipedia's explanation of the term. The difficulty that even educated people have in understanding that explanation is part of the "mystery" that surrounds the hype of artificial intelligence, as of most modern mathematized science and technology, isolating them from democratic control.

 

 

3. Initial overall considerations

 

In the above section it can be seen that the referenced texts are dated up to the year 1980. I am well aware that they may be considered by some as outdated because of the great and rapid technological development in these more than 40 years. Nevertheless, as I wrote in the latest revision of my general disclaimer: on the occasion of my retirement I intended to carry out in full scale the research that had been neglected because of managerial duties at my university, and to summarize my experiences and reflections in one or several publications. The more I studied and reflected upon my experiences, the more I became convinced that the main problem was not the lack of debates, books, publications, experiences, or reflections, but rather the lack of wish, will, and courage, that is, the "lack of time" to select readings, to read, to think, or to understand in depth and to act upon knowledge that is still valid and already available.

 

This means that I perceive the present attitude to AGI as revealing a deep cultural crisis, first of all in the western world, with certain affinities with the message of Oswald Spengler's The Decline of the West, precisely because the western world is or was the most "advanced" (towards nowhere?). The decline is properly illustrated by the breakdown of social communication, which in science I exemplified by the reciprocal ignoring of scientific insights within increasingly narrow fields, where the concept of "field" itself is being obliterated, promoting both the narrowing and the consequent speed of perceived advancement. The process of narrowing, and of reciprocally ignoring the wholeness of reality, also implies a loss of democratic social control over scientific activities that are often driven by narrow and often secretive disciplinary, economic and military interests. Scientists themselves who are active as e.g. physicists in fields that earlier were related under the general denomination "physics", not to mention the general educated public, do not understand and cannot evaluate their former colleagues' work in many highly specialized "sub-fields", despite the assumed existence of one "scientific method", which bestows on all of them the prestigious name of science. All this while, paradoxically, Democracy with a capital D as the final "guarantor" has come to be a substitute for the earlier perception of God, as I explain in my essay on the conflict between Russia and Ukraine.

 

The loss of democratic control, which in its essence is seen as "general" for implying the whole society, includes a loss also within the community of scientists, both in science in general and within each field and subfield resulting from the progressive specialization that is misunderstood as a sign of progress. In his book review mentioned above, Churchman observes that "the debate about artificial intelligence has been raging ever since the introduction of that rather unfortunate terminology". He sees Simon's introduction of the term sciences of the artificial as one more "benign" sign of his "unremitting enthusiasm", which is matched by the community of scientists sharing his attitude of being "strongly positivistic in his thinking". I myself have met scientists who do not even know that positivism in this context means logical positivism, and who justify themselves by boldly acknowledging that, yes, they are proud of being positive in their attitude to their work (without even differentiating between positivistic and positive), and logical in their thinking. I have heard similarly improbable accounts of AI enthusiasts who meet the criticism that the system does not account for induction with the question "what is it?"

 

Churchman goes further in explaining why databases with statements in the indicative mood do not take into account the goals implied in their application to particular uses related to unforeseen (whose?) goals. And he observes that it is not perceived that natural science itself is artificial. This artificiality of science, in its gradual mathematization and logification leading to "Newton's syndrome" (the belief in the possibility of going from the simple to the complex, or "more of the same"), is something that was already affirmed in the far-reaching insights of Jan Brouwer about the foundations of logic and mathematics, as outlined in my essay on Computers as Embodied Mathematics and Logic. It should be seen as background knowledge required for understanding the whole text that follows. I have come to the conclusion, supported by the text below, that the basic problem of the use (not the design) of AI and AGI is the lack of understanding that their essence is formal science as represented by mathematics and logic. The latter's function and limitations are ignored when they are further equated with intelligence, and further with a misunderstood human intelligence, which plunges us into a hodgepodge of philosophical and psychological controversies that are also ignored and soon forgotten in the face of the possibility of making money and obtaining power over nature and other humans.

 

Quite often this is done under the mantle of an appeal to "pragmatism", but then there is no reference to what pragmatism is or should be, since that would lead to a discussion of the philosophy of pragmatism, its history and its variants. I may be repeating myself if I claim that one problem, if not the basic problem, in understanding AI, and the more so in understanding AGI, is that most common, even educated, citizens, including scientists, feel no need to understand the foundations of logic in relation to mathematics, or how a theory relates to the abused word model and the even more abused term conceptual framework. For instance, in mathematized science, best exemplified by a physics that is mathematical physics, mathematics and logic would be meaningless without a supporting theory and its concepts with closely associated sensory presence (e.g. force, mass, speed, acceleration), which have been painfully developed over hundreds of years. Today in quantum physics (cf. my problematization of it) new concepts have been derived mathematically in order to support empirical observations in complex experiments that have no direct bodily sensory presence, e.g. "particles" that are not particles but rather mathematical constructs. On the other hand, AI and AGI are pure logical and mathematical manipulations that are theoretically supported only for application in well-defined and limited fields that have a theory; instead they are mixed with the manipulation of common language and the use or abuse of "tools" of mathematical statistics divorced from the controversies about the foundations of statistics. More on this below. Nothing of this kind of knowledge is available to the common educated citizen, who is now going to be drowned in the hype of advertising on AI, nor even to many scientists who work in well-defined and theoretically supported fields, sometimes unaware that the concept of field itself is put in question.

 

All this is ignored or forgotten as a source of debates about AI, and even more so about AGI, in their "waves of optimism" culminating in the ongoing latest hypes. But it does not detract from the possibility of legitimate profitable applications, legitimate only when (if) one (who?) knows the consequences of what one is doing (to oneself and others) in the short and long run, and whether it is ethical (and what that means with and without religion) besides being profitable (in the short and long run, and for whom?).

 

 

4. Advancing toward nowhere?

 

AI in The Design of Inquiring Systems

 

I do not know a better illustration of the meaning of AI and AGI than the seminal work of the earlier-mentioned West Churchman, with his background in pragmatism, in his book The Design of Inquiring Systems (1971), to which I will usually refer henceforth by its initials DIS, and which I extended with a general DIS Word and Subject Index. I remember seeing, some years ago, in the "Talk" section of the Wikipedia article on artificial intelligence, somebody referring to this book. Older Wikipedia "talks" are not archived in Wikipedia, but I remember that it was swiftly disposed of because the author was seen as an "outsider", without the primary Wikipedia editors of the article displaying an understanding of what it all was about. This is a little-known attitude of censorship by primary groups of editors with unconscious "vested interests" in a particular Wikipedia subject, as I illustrate in an article on Wikipedia-democracy. I summarize the reasons for the impossibility of "talks" or debates about such issues at the end of the conclusions of my special essay about Information and Debate.

 

In chapter 4 (p. 79) of the above-mentioned DIS book the author presents an elaborated version of an earlier paper, On the Design of Inductive Systems (1969), written in co-authorship with Bruce Buchanan, with material based on a research project conducted by E.A. Feigenbaum, Joshua Lederberg, Bruce Buchanan and others, together with the author, at Stanford University. Bruce Buchanan, however, does not seem to have taken his co-authorship with Churchman very seriously, since in his later works mentioned below he clearly joins the logical-empiricist track of Herbert Simon. The latter, in the philosophy and methodology of science, stands on the lucrative opposite side of West Churchman, as suggested in the only text I know that indirectly and in a rhetorically elegant way spells out this opposition, the earlier-mentioned paper by W. Ulrich, The Metaphysics of Design: A Simon-Churchman "Debate".

 

What is happening today is that AGI, or AI in the spirit of ChatGPT, seems to consist of smart elaborations of the chapter 4 mentioned above, which in turn is a modernization of what Churchman, in a book published as early as 1948, had already called Theory of Experimental Inference, and which Buchanan, without daring to enter into the problems of the philosophy of science, tries to extend from MYCIN (cf. also DENDRAL; both are considered by Churchman in DIS, pp. 89, 93, 98) further in "Artificial intelligence as an experimental science", in James H. Fetzer (ed.), Aspects of Artificial Intelligence (1988, pp. 195, 207, 209-250). In fact, much of the AI hype today is based on a simplification, if not outright exploitation, of misunderstood induction and experimental inference, thereby ignoring the reason and motivation for Churchman's progress from 1948 up to 1971 and 1979, from the above-mentioned Theory of Experimental Inference up to DIS and The Systems Approach and its Enemies. It is also an exploitation of misunderstood logic and mathematics equated with "analytical-abstract thought", equated further with reason confounded with "intellect", whatever that is or should be, forgetting the subtle Kantian differentiations between reason and understanding, and between pure and practical reason, and their relation to the synthesis, in his Third Critique, of judgment and aesthetics, which today is reduced to "design". And the whole is alternatively reduced to the "intelligence quotient" (IQ) vs. the "emotional quotient" (EQ) (see below), seasoned with the expectation that computers in the future will take care of IQ, and humans of EQ, whatever they are or should be. We can wonder whether capital-intensive industry and the associated governmental economy will take care of computers and IQ, leaving EQ to churches and agencies for social welfare.

 

On Statistics and Experimental Inference

 

The problems are akin to those relating to the critiques of "Big Data", and even those ignore the earlier criticism of the abuse of statistics, such as Edgar S. Dunn's Social Information Processing and Statistical Systems: Change and Reform (1974), whose author honored me with a positive review of my dissertation on Quality-Control of Information. Swedish readers have the opportunity to read Olle Sjöström's work, starting with a PhD dissertation (1980, which, as usual in the computer context, may be considered "obsolete") on Svensk Samhällsstatistik: Etik, Policy och Planering [Swedish Social Statistics: Ethics, Policy and Planning]. Its English summary (p. 154) starts by asking rhetorically:

 

Has the growth in the “production” of statistical data, together with the associated publications, press releases and so forth, led to improved information for decision makers, better informed citizens and increased knowledge about current social conditions?

 

The matter is developed further in the book (1980, recalling why "history and critique are not obsolete") with the title Svensk Statistikhistoria: En undanskymd Kritisk Tradition [History of Swedish Statistics: A Hidden Critical Tradition], with an extremely rich multi-language bibliography (pp. 240-245) and an initial English abstract that states:

 

This essay delineates a critical tradition of statistics in Europe and Sweden, lost or ignored during the last decades. European statistics develops from philosophical empiricism and optimistic endeavours in the 17th century to the exploration of social conditions (John Graunt) and to comparisons among European national states (Hermann Conring). […] This book claims that it is necessary to awake and renew such a critical statistics tradition in order to benefit from modern statistical techniques and computer resources in the future.

 

One of the consequences of the loss of a critical tradition in dealing inductively with big data is the reduction of statistics to mathematical statistics only, and the loss of the distinction and relation between production (or rather measurement) and consumption (or rather use and implementation) of data, or rather of information and knowledge. Consequently, one uses standard statistical "tools" on "data" that is fed into the computer without needing to know the presuppositions for the "consumption of data" by the automated use of the "tools" (more on "tools" below). Never mind the foundations of statistics as discussed, for instance, in West Churchman's treatment of "objective probability" in Prediction and Optimal Decision (chap. 6). It is only a question of the automated, computerized application of ready-made formulas to available data, as in the sketch below. One further consequence and example may be the unquestioned new AI hype of "biotechnology" in How Artificial Intelligence is Revolutionizing Drug Discovery (March 20, 2023), raising "hundreds of millions of dollars to pursue their AI-driven drug discovery pipelines". Which outsiders and laymen would or could question such specialist "scientific" affirmations, when it is obvious that "hundreds of millions of dollars" can always produce discoveries, the more so when there can be no cost-benefit analysis (get a taste here of the extent of the difficulties), and forgetting that the relevant concept of cost here is opportunity cost.
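
As a minimal illustration, not drawn from any of the cited works, of what such "automated application of ready-made formulas on available data" can look like, here is a sketch in Python (the data and the choice of test are hypothetical). Nothing in the code asks where the numbers come from, whether the test's presuppositions hold, or what the resulting number would mean for any user:

```python
# A minimal sketch (hypothetical data) of ready-made statistical "tools"
# applied mechanically to whatever "data" is fed into the computer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)  # provenance unknown to the "tool"
group_b = rng.normal(loc=0.5, scale=1.0, size=30)

# The t-test formula is applied as-is; no step checks its presuppositions
# (independent samples, roughly normal distributions, comparable groups).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # a "result", whatever it may mean
```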

 

Intelligence, however, is much more than (misunderstood) experimental inference, and the very same Fetzer keeps returning cyclically in time to an endless inconclusive struggle about the ineffable "much more", as in Computers and Cognition: Why Minds are not Machines (2000) and Artificial Intelligence: Its Scope and Limits (2012). Until Fetzer, like myself, gets older and older and can no longer incorporate the latest hype-revival of ChatGPT in a new book. All the less so when experimental inference, statistics and mathematics are melted down, as in Q-learning (sketched below), into an apparently prestigious mixture that ignores the meaning of "inquiring system" and is impossible for the average well-educated democratic citizen to understand.
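
For readers who wonder what Q-learning amounts to, here is a minimal sketch (with a hypothetical toy state space) of its tabular update rule. The point is that a sampled reward (statistics), an iterated update formula (mathematics) and an implicit inference about the best action are melted into one numerical recipe, with no theory of the domain entering anywhere:

```python
# A minimal sketch of the tabular Q-learning update rule; the state space,
# actions and rewards below are hypothetical toy values.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))  # the agent's entire "knowledge" of the world
alpha, gamma = 0.1, 0.9              # learning rate and discount factor

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# One simulated transition: a sampled reward updates the table, and the
# "best" action is simply whatever has the largest number so far.
update(state=0, action=1, reward=1.0, next_state=2)
print(Q[0])
```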

 

Apparent Obsolescence and Desperate Logic

 

Of course, it is characteristic of the AI field that "older" works from the 1960s are in practice ignored or dismissed for not portraying later modern advances. But not only those from the sixties: those from the eighties are also considered old. By whom? By those whose own opinions and work will therefore also be considered obsolete after a couple of years. This is the more so when there is no consciousness of the foundations, and thereby the meaning, of logic and mathematics, which were discussed up to the beginning of the past century, as I survey in my Computers as Embodied Mathematics and Logic. Not to mention the basis of mathematical or formal logic in the work of Gottfried Leibniz (1646-1716). And in my later essay on the consequent problems of the computerization or digitalization of society, represented here by the AI hype, I have a section on a "Case study of desperate logic".

 

It is a "desperate logic" which could include William J. Rapaport's section of the book Aspects of Artificial Intelligence (p. 81) with the desperate title of "Syntactic semantics: Foundations of computational natural-language understanding", that revives my text on "Symptoms: Syntax. Semantics, Pragmatics". It reminds of older desperate logic as Sheldon Klein’s Culture, Mysticism & Social Structure and the Calculation of Behavior (European Conference on AI, ECAI, 1982, pp. 141-142). It also reminds of the present hype of ChatGPT in terms of its use of large language models LLM, which may be seen as a hodgepodge of syntax, semantics and pragmatics that tends to invalidate all these categories, as they would also be invalidated by the hype-attempt to create a Data Science akin to an Information Science if one could unravel the meaning and difference between them, in oblivion of earlier mentioned The Design of Inquiring Systems - DIS. It is also a "desperate logic" as suggested in the nowadays mostly dimmed but formerly hyped book Gödel, Escher, Bach: an Eternal Golden Braid (1979). It is a subject (whatever it is) that is constantly and inconsequentially revived, as lately "mystically" on the Internet as About the "Uncalculable World" and "Mizutani Portfolio". My impression is that the latest modern example of desperate logic is found in the mindblowing undefinable field (whatever field means) of prompt engineering as a “process of structuring text that can be interpreted and understood by a generative AI mode”, mentioned also later in this essay.

 

I myself feel tempted to take pride in having devised an analogy that, despite my exceptionally numerous links, I do not remember having read or heard anywhere. Today's view of the "old obsolete knowledge" of 400, 100, 40 or 10 years ago, of the account of the painstaking work of, say, Leibniz, Galilei and Newton, not to mention the world's sacred books and the philosophies of, say, Plato and Aristotle: all this invites us to conceive a bold, if ultimately and necessarily imperfect, analogy:

 

It is as if those pioneers were like hunters who, risking their own lives and deepening their skills, had succeeded in imprisoning and domesticating a big wild beast, or putting it in a cage. Later we come and, in relative comfort, examine its "capabilities", kill and dismember its body, gradually eating the flesh and exploiting the remains, until arriving at the bones and analyzing their material composition. Until realizing that the beast is dead, and we know nothing about its origin, the effects of its life on the environment, or its offspring and their effects: "desperate hunting".

 

Or one of the latest logical-mathematical geniuses who, in "neuroscience" and based on experiments with masses of neurons in monkeys' brains, finds that "the human brain would be the center of the universe and tries to explain the history, the culture and the human civilization based on recently discovered principles regarding human brain function". Finally, this brain science or neuroscience is converging with AI in the latest hype of worldwide European and USA projects: the USA National Institutes of Health's Brain Research Through Advancing Innovative Neurotechnologies (BRAIN, here and here, with the Wikipedia summary here). It goes from an understanding of the "structure of the mouse brain" over to "the structure of the human brain" and "Developing and applying brain-inspired computing technology". What is mentioned is "brain imaging data structure (BIDS) conversion" and "neuroimaging data and metadata", without worries about the differences between data, metadata, information, and system, or between structure and function (DIS, chap. 3). But guess whether "To date, more than 1,100 awards totaling approximately $2.4 billion have been made by the BRAIN Initiative" will not produce some interesting and successful results and applications. Faced with this perspective, it is a small consolation to read the insightful and benevolent reflections of Abraham Kaplan about what seems to be "brain mythology" in The Conduct of Inquiry (1964, p. 325, my italics):

 

Suppose it were definitely established that the direct cause of schizophrenia is the concentration of a certain substance in the blood, or the formation of a certain structure in nerve cells or their connections. The effect of this cause, so far as it concerns psychiatry, would still be statable only in terms of macro behavior (for instance, that resulting from delusions of persecution). Nor would it follow that treatment necessarily requires the laying on of hands, whether by medicine or surgery. For the possibility would still be open that the substance or structure itself is produced or destroyed by various patterns of interpersonal behavior or communication (as in the familiar case of ulcers, so far as producing them is concerned, at any rate). Psychophysical dualism always encounters the problem of interaction between the two realms. What I find as exciting about such development as cybernetics and related disciplines is precisely that they have "helped to pull down the wall between the great world of physics and the ghetto of the mind". [Kaplan's ending quotation is from Philipp G. Frank, The Validation of Scientific Theories, N.Y., 1961/2018, p. 155.]

 

The hype of cybernetics in the sixties is today, in 2023, the hype of AI and AGI. Neither Frank nor Kaplan seems to have been able to imagine that instead of "pulling down the wall", what would happen is that the "world of physics" incorporates or outright swallows the "ghetto of the mind", all the more so for not considering a ghetto sufficiently dignified.

 

A Faustian Bargain

 

The important and serious thing seems to be the necessity of a serious study, probably possible only in terms of analytical psychology, of what is happening nowadays to the mind or psyche of modern man in the shadow of computerization, digitalization or logification, to the point of disrupting the basis of human relationships, as illustrated further below with reference to Reason and Gender. It is then more than fair, however, to remark that this kind of analytical thinking, typical of formal science such as mathematics and logic, has been important for the particular development of the West. It may be seen as rooted in the Judeo-Christian tradition, as noted in controversial studies of Ashkenazi Jewish intelligence (cf. here and here) and its performance in the West (cf. the "archetypal" Albert Einstein). This happens, however, in parallel with the gradual secularization or loss of its religious basis and Christian influence. Here is also the place to acknowledge, of course, the immense importance of the formal way of thinking in science, technology and industry. It stands behind the West's mainly material wellbeing, with the remark, however, that it may have been what Goethe meant by what came to be called a "Faustian bargain". An explanation of the meaning of such a Faustian bargain can be sought in Carl Jung's insight (Collected Works, Vol. 6, p. 77, §113), in a quotation that I already reproduced in an essay on Information and Theology:

 

Through the shifting of interest from the inner to the outer world our knowledge of nature was increased a thousandfold in comparison with earlier ages, but knowledge and experience of the inner world were correspondingly reduced. The religious interest, which ought normally to be the greatest and most decisive factor, turned away from the inner world, and the great figures of dogma dwindled to strange and incomprehensible vestiges, a prey to every sort of criticism. [...] Modern rationalism is a process of sham enlightenment and even prides itself morally on its iconoclastic tendencies.

 

So, Leibniz is not even mentioned in e.g. James Fetzer's edited book on Aspects of Artificial Intelligence, not even in Clark Glymour's contribution with the title "Artificial Intelligence is philosophy", despite his writing (p. 205) that "The trouble with artificial intelligence work is that it can use the senses to overwhelm the intellect".

 

A Cultural Lobotomy

 

I propose that it is much more than this: the advent or incursion of the latest, if not last, AI hype represents the misunderstanding and abuse of mathematics and logic in the dissociation of the human mind, a process that I present in my text on the meaning of human-computer interaction (HCI). In this process can be seen (by those who know it) an oblivion of the historical issue of the Geisteswissenschaften or "sciences of the spirit", related mainly to the name of Wilhelm Dilthey, an issue which later seems to have reverberated in Rudolf Steiner's Anthroposophy. The dissociation of the human mind can rhetorically be seen as a tragic ongoing analogy to the historical scientific and political scandal of lobotomy, which also happened to dissociate the brain from its so-called emotional side, as formal science does with the undefined thought or "cognition" in the undefined mind that eschews all relation with analytical psychology (cf. again The Metaphysics of Design: A Simon-Churchman "Debate"). I guess that many readers will think it too farfetched and controversial to be reminded that a "cultural lobotomy" was also the background of the Holocaust (cf. the identification numbers of inmates in Nazi concentration camps, which later became the worldwide personal identification number), based on the "intelligent" superiority of a "race". It was all accompanied by eugenics in several "advanced" western countries, with controversial interpretations of the stand taken on the issue by the Swedish national intellectual heroes, Nobel Prize winners Gunnar and Alva Myrdal.

 

It is a cultural lobotomy that may also stand in the background of the ongoing equalization of humans and animals, with humans reduced to being only animals, while we believe that we are heightening animals up to humans. This is related to the fact that the undervaluation of humans increases the incapacity to see the difference between humans and animals, but also between humans and machines, as evidenced in the apparently unproblematic frequent reference to the famous "Turing Test". It reminds us that the less one understands what a human being is, the easier it is to equate him with a machine or with "nature", including seeing, as the Nobel laureate in economics Herbert Simon did, "behaving human beings as simple as ants" (more on this below). All this is in turn related also to the philosophy of veganism, as well as to the heightening of "Mother Nature" into a new goddess for a pantheistic solution to the issue of pollution and climate warming, up to the point of youngsters declaring that they relinquish having polluting offspring.

 

It is a short step from there to equating humans and machines, with the supposed final attainment of machines that are not machines, with a "consciousness" if not "spirit" of their own, whatever that is perceived to be, and a superhuman if not godly intelligence. But if computers do not become superhuman, humans can be perceived as, and behave and be treated as, slow computers, and not only as animals, as in the Holocaust of the Second World War. This is also the rationale for the expectation that not only many manual workers but even so-called intellectuals and scientists, who nowadays already think as slow computers in the computerized West, will in the future be replaced by advanced AI/AGI/ChatGPT. The greater the number of people at work, even "intellectual" work, with tasks that are designed to require acting only as slow computers, the easier it will be to replace them with slow or faster "AI" computers. Not only that: it is the same cultural lobotomy, the separation (and not only differentiation) between logic and psyche, that enables democratic societies and especially politicians to believe that it is possible to counter criminality with only law, police and prisons, instead of relying honestly on "love thy neighbor". Or to believe in countering credit card fraud with a progressive build-up of computer "security" and of card security codes. Or to substitute computer security for quality of information as defined in my doctoral dissertation on Quality-Control of Information, and originally suggested in my early paper on the concept of security, which today reappears under the avatar of AI safety.

 

The Human Psyche

 

In the absence of an understanding of the human psyche, which would require study of the history of philosophy and of e.g. analytical psychology together with an understanding of theology, there is a hodgepodge of "models" to support the ad-hoc concept of "emotional intelligence" (EI), possibly measured by a corresponding emotional quotient (EQ), analogous to the old, abused IQ. This even forgets that the problems mentioned above may lie behind the reportedly increasing frequency of diagnoses of autism, including Asperger's. It is interesting to note the vagaries of people, especially when discussing the arcane modern question of whether AI will be able to "feel". The flood of mass-media speculation is exemplified in a 44-minute program on Swedish Radio on March 15, 2024 with the title Kan AI ha känslor? [Can AI have feelings?], with the subtitle "Today, humans are increasingly interacting with AI. AI can make us humans feel joy and even love, but can AI have emotions and what would it mean if AI had real emotions?" The program, with two guest researchers, Fredrik Svenaeus and Karim Jebari, shows the practical impossibility of expressing and sharing psychological and philosophical knowledge when philosophy, which historically included psychology, is reduced to simple reference to "phenomenology" (appreciated in universities for its theology without God), and theology is programmatically and silently excluded from the conversation. This is ultimately also the reason why those who happen to study The Design of Inquiring Systems are not able to figure out how an application of e.g. the ignored philosopher Hegel's and others' philosophical thought (p. 180ff., see below) can be related to Carl Jung's analytical psychology as it appears in the discussion of "progress" (pp. 203-205) and of "the mind" (p. 261ff.), but the more so in the conclusion that (p. 272):

 

[I]t may be safe to say that at the present time inquiry has become part of the unconscious life of most people: they are unaware of the ways in which they function as inquiring systems. Nor is there a strong inclination for them to give expression to this function so that its nature appears at the conscious level. As a consequence we are suffering now the most dangerous symptoms of an inability to bring to the conscious level an important human function. We assign to the experts and the politicians the roles of designing and creating the environment in which we live because we can see no way in which we can play any role whatsoever in these activities. Appalled as we may be at the events that are occurring in the world about us as the output of blind technology and politics, we each in our own frustrated way feel that we can do nothing about it.

 

Medical brain lobotomy, seen here as an archetypal analogy to the reduction of intelligence to logic, mathematics and formal science divorced from emotions, feelings and intuition, was deemed to deserve the highest level of international official scientific recognition by means of the Nobel Prize in Physiology or Medicine in 1949. This also raises the memory of the Nobel Memorial Prize in Economics awarded to Herbert Simon, while West Churchman arguably decreased his chances of a similar official recognition because of his opposition to Simon's view of intelligence in the artificiality of science and to his view of "behaving human beings as simple as ants" (probably not intended to be applied to the author himself). As with the case of the Turing Test mentioned above, the step may be short to considering certain "non-intelligent" humans as machines or ants, to the point of allowing for attitudes such as those in the Holocaust. Churchman also opposed some tenets of an earlier recipient of the prize in Economic Sciences, Kenneth Arrow. It belongs to this story that a colleague of mine, a member of the institution that awards the prize in physics, the Royal Academy of Sciences, confessed that an internal report he had written about a case of this kind of problem in his discipline had been classified as secret for the next 50 years. Talk about democratic freedom of expression and openness in research.

 

There is the symptomatic case of a Swedish doctor and author, a grand old man and champion of intellectual debate in journalistic contexts, mainly in social criticism and satire, during more than 60 years: P. C. Jersild. One of his best-known books, A Living Soul, symptomatically relevant to our "lobotomized brains", deals with a living, thinking "and feeling" (?) human brain floating in a container of liquid. Also symptomatic is his answer at the end of a long interview on Swedish Radio, broadcast on December 9, 2023, about what he thought of the ongoing AI-AGI hype. After his many well-formulated answers and thoughts about "everything" during the interview, including about his atheism and his commitment to voluntary euthanasia despite being a physician, he finally, with a certain hesitancy, confessed that it was a difficult matter and that he did not yet have a formed opinion. Guess why. It is not easy to know what a computer really is, the less so as related to formal science and the human psyche, that is, again: human-computer interaction, and "intelligence".

 

Driving Forces behind Technology

 

So much for democratic freedom of expression at the highest levels of the highest sciences, recalling the barely advertised insight that "Democracy is a myth" because of the neglect of its presuppositions, as I explore in a text on the information about the conflict between Russia and Ukraine. Science as scientific "fields" in general, computer science in particular, and artificial intelligence more particularly, nowadays eschews all democratic control because it is controlled only by narrow, isolated, self-controlled communities of adepts who, as in universities and in supposedly democratic Wikipedia, are the only ones seen as entitled to understand what it is all about. And in the meantime the ongoing new wave of AI hype is advertised by means of a plethora of "visionary" programs of doubtful quality in mass media and in discussions in social media: a reprise of the dot-com bubble, possibly a revival of the historically archetypal tulip mania. Let us remark that many people made big profits from all those "visions", as they are now making from cryptocurrency.

 

Ultimately, however, the question may boil down to which motivating forces drive the modern scientific, technical and industrial view of the western world, and already also of parts of the eastern one, the latter's antiquity having already been the object of my reflections in the text on Logic and Rape as related to Science and Civilization in China. It is a view that may be related to a peculiarity of the Judeo-Christian civilization as related to "smartness" in the controversy around "Ashkenazi Jewish intelligence" (see here, here published here, and here). It is a view driven by the mathematics and logic of western mathematized technoscience, as synthesized in mathematical logic and embodied in computers. It is a matter that has been studied in the philosophy of science and of technology, touched upon in an essay of mine on Trends in philosophy of technology. I believe that theologically this field was best surveyed in Carl Mitcham's and Jim Grote's edited Theology and Technology (volumes 1 and 2). My own experience, in witnessing that even deeply committed and educated scientists and philosophers adore the power of mathematics and logic (a power as suggested by Jan Brouwer, cf. my Computers as Embodied Mathematics and Logic), is that they are seduced by the feeling of power (supposedly for good and for bad) over nature and humans, to the detriment of the Christian primacy of charity and love. It may be a case of analytical psychology's conception of ego inflation, in believing that they are high priests gifted with the capacity of understanding the mathematical-logical language (synthesized in mathematical logic) in which God himself is supposed to have described the created universe. Or, as I saw in a letter written by a particularly gifted and educated physicist:

 

I particularly remember an experience when I was taking a PhD course in quantum mechanics and going through a particular proof using group theory to predict atomic states. It was like a lightning strike and I actually started crying with emotion. It was a powerful experience.

 

It is interesting and symptomatic that this is a powerful experience, to the point of convincing if not compelling humans to a religious conversion, one that could have been a sign of the Biblical message that humans have a divine spark in themselves because they are created in the "image of God". Nevertheless, it can easily be a symptom of the proud "ego inflation" mentioned above. It is therefore also the source of powerful speculations raised around the names of Fibonacci and Mandelbrot, as already considered in a couple of other essays, on Computers as Embodied Mathematics, Information on Christianism or Atheism, and Quantum Physics, Computers & Psychology. A well-known Swedish professor of theoretical physics tells, in an interview on Swedish Radio, about a likewise life-changing (but not religiously converting) powerful childhood experience of witnessing in the sky the announced apparition of a comet. Other theoretical physicists witness similar experiences but remain equally insensitive to religious conversion, such as e.g. the famous Nobel laureate Peter Higgs, about whom Wikipedia reports:

 

Higgs was an atheist. He described Richard Dawkins as having adopted a "fundamentalist" view of non-atheists. Higgs expressed displeasure with the nickname the "God particle". Although it has been reported that he believed the term "might offend people who are religious", Higgs stated that this is not the case, lamenting the letters he has received which claim the God particle was predicted in the Torah, the Qur'an and Buddhist scriptures. In a 2013 interview with Decca Aitkenhead, Higgs was quoted as saying:

 

I'm not a believer. Some people get confused between the science and the theology. They claim that what happened at Cern proves the existence of God. The church in Spain has also been guilty of using that name as evidence for what they want to prove. [It] reinforces confused thinking in the heads of people who are already thinking in a confused way. If they believe that story about creation in seven days, are they being intelligent?

— The Guardian, 6 December 2013

 

But I have not seen any speculation about parallels with our "powerful" experiences of feelings of passionate love, as considered in the essay on Reason and Gender. There are people who infer from their own feelings of love, love for children, spouses and parents, that they have to ask where this powerful feeling of love comes from, as well as from where and why tears come when listening to certain kinds of music that ultimately can be religious, as I recall in my text on "Intuitions in Music" in the paper Information and Theology. There must be a "powerful" feeling of love in them, instilled from outside, from above, or from the "inside" (Genesis 1:27). That is, love up to the point of being able to sacrifice their own life in saving, or upon losing, their loved ones.

 

If, as in analytical psychology, it is a question of "archetypes" or, worse, "mechanisms" rooted in the human mind, then the answer will be that psychology only affirms that they are there, not claiming to know from where or by whom they were placed there. Up to now I have never heard people confess to having "started crying with emotion", or sobbing and getting tears in their eyes, upon the intuition that their feeling of loving or being loved by their children (but unhappily sometimes less by spouses divorcing spouses) is the same feeling of godly love for us (not "intelligence"), making them intuit how much we can be loved, and wonder where this feeling of love comes from. But many if not most of them seem readier to praise mathematics for their own supposedly godly mind, which in fact navigates the world as the proudly engineered Titanic did the ocean. Perhaps fantasy in science fiction has a presage of decadence in imagining, as in the film Zardoz, that the immortal "Eternals" leading a luxurious but aimless existence "are overseen and protected from death by the Tabernacle, an artificial intelligence". And this is consistent with the hypothesis that it is ultimately mathematical logic that raises disoriented powerful feelings.

 

Support for this hypothesis comes from the simultaneous tendency to argue only or mainly in logical terms (and/or with the help of ChatGPT!), without reference to historical sources or "philosophical" thinkers. Whenever other sources are mentioned, as here, the accusation is raised that this is illegitimate "psychologizing". But see an example in my bold attempt to suggest, at the risk of the accusation of psychologizing, that the enforcing of pure logic can be experienced as a spiritual "rape". Or the accusation is raised that references beyond pure logical argumentation are an abuse of "academic-philosophical-historical" speculations, while the question should be seen "pragmatically": whether it is convincing and "it works" in the physical world, often combined with "it sells", which in turn is reduced to coarse usability, utility and economic profit, or "it is profitable". One should instead say: "As quantum physics works, until a coming nuclear third world war". The appeal to pragmatism, however, is then made without questioning the history and essence of philosophical pragmatism, which is downplayed to being of only "philosophical-academic" interest.

 

And talking about forgotten love, there comes to my mind what Oswald Spengler writes in his amazing great work The Decline of the West, building, as he claims, mainly upon Goethe and Nietzsche. It is a book that I bought somewhat more than 40 years ago but do not claim to have been able to study and digest before reaching old age. Nevertheless, in the present context I cannot avoid remarking on what he writes in vol. 2, pp. 218f.:

 

Religion is metaphysics and nothing else – Credo quia absurdum – and this metaphysics is not the metaphysics of knowledge, argument, proof (which is mere philosophy or learnedness) but lived and experienced metaphysics – that is, the unthinkable as a certainty, the supernatural as a fact, life as existence in a world that is non-actual, but true. Jesus never lived one moment in any other world but this. […]

 

“His” teachings, as they had flowed from his mind and noble nature – his inner feeling of the relation between God and man and of the high meaning of the times, and were exhaustively comprised in and defined by the word “love” – fell into the background, and their place was taken by the teaching of him.

 

A new AI military “Gospel”?

 

And today, when for many there is not much left of His teachings, but only "powerful experiences" possibly seasoned with some theological erudition and reference-dropping, this can be taken as support for my view that powerful mathematical experiences are an expression of the inflationary power of the Ego-Mind, and not of human or godly love. It is the "super-intelligent" power of mathematics and logic over nature (and climate), and power over other human beings, the "enemies". It is a "science" applied to wars with high-tech weapons, including "lethal autonomous weapons" (see here, here and here). Application to wars, however, is more complex than the manufacturing of products. AI will be claimed to be, and will also be, applied to the general use of weapons such as more or less conventional bombs, as illustrated in the case of the 2023 Israel-Hamas war in an article by Jonathan Cook (see also here in the necessarily highly controversial Unz Review, December 5, 2023). Here follows my excerpt from the article, including a couple of readers' comments:

 

The whistleblowers confirm that, given new, generous parameters of who and what can be attacked, the artificial intelligence system, called “Gospel”, is generating lists of targets so rapidly the military cannot keep up.

In a report last Friday [December 1, 2023], the Guardian [in an article with the title ‘The Gospel’: how Israel uses AI to select bombing targets in Gaza, with repercussions as in Le Monde, Dec. 6, 2023] corroborated Israel’s reliance on the Gospel computing system. The paper quoted a former White House official familiar with the Pentagon’s development of autonomous offensive systems as stating that Israel’s no-holds-barred AI war on Gaza was “an important moment”.

The Israeli military is now using an artificial intelligence system, Habsora or Gospel, to identify targets.

Speaking of the military’s new reliance on Gospel, Aviv Kochavi, the former head of the Israeli military, told the Israeli Ynet website earlier this year [June 23, 2023]: “In the past, we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets in a single day, with 50 per cent of them being attacked.”

A former intelligence officer said that the Targets Administrative Division that runs Gospel had been turned into a “mass assassination factory”. Tens of thousands of people had been listed as “junior Hamas operatives” and were therefore treated as targets. The officer added that the “emphasis is on quantity and not on quality”.

 

COMMENTS [two selected readers' comments on the article]

 

The REAL value of “AI” is as lying sockpuppet to take the blame:
• Health Insurance companies use “AI” to deny claims. (Don’t blame us)
• Military uses “AI” to kill civilians. (Don’t blame us)
• Hedge Funds use “AI” to insider trade. (Don’t blame us)
• Corporations use “AI” for sales targets during down quarters. (Don’t blame us)

This is the other face of Sam Altman’s empire!

 

The Ego's inflationary power also offers an important advantage to logic (and consequently to AI/AGI) in the politics of academic careers: it dispenses with painful references, not to mention with charitable attention to the contemporaneous and historical work of the "neighbors" whom we are supposed to respect, if not to love. A logically structured text is the only way of claiming attention without a single reference to, and (worse) dependence upon, prior knowledge other than the technical and analytical. That is: unless one questions the essence of mathematized science, or of logic and mathematics in inquiry. So, in my experience, logically oriented scholars, now with the support of heavy AI/AGI technology, can do well without any academic "fatherhood", let alone an ex-post legitimately criticized fatherhood. The more so by adducing that rapid technological development invalidates the import of earlier insights, even those 5 or 10 years old. This can be related to youngsters' lack of respect for the knowledge and experience of their parents, the more so of their grandparents, not to mention of the generations of people who with struggles and sacrifices have built our accumulated knowledge and wellbeing. Their implicit fatherhood is the pseudo-religious Logos that is felt as implicit in the abused, assumed etymology of the word "logic" as used today.

 

It is also symptomatic that there is a widespread rumor (search an Internet browser for <mathematics god language>), as if it were a quotation from Galileo Galilei, that he had written that mathematics is the language of God. There is, however, only one documented source, The Assayer, in which he really only claimed that it is the language of science (natural science), without mentioning God. Further theological aspects of the issue can be found by searching for the keyword <mathematics> in my essay on Information and Theology.

 

The contempt for philosophy, and not only for theology, based on ignorance and secularism, is also what guides smart authors of smart articles in prestigious journals such as The Economist. In an ambitious article, noted in one main Swedish newspaper, Svenska Dagbladet (Nov. 19, 2023), about the economic-political events of November 2023 in the OpenAI organization, it writes, for instance, referring to the events detailed below relating to the board and CEO of OpenAI:

 

The events at OpenAI are a dramatic manifestation of a wider divide in Silicon Valley. On one side are the “doomers”, who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulations. Opposing them are “boomers”, who play down fears of an AI apocalypse and stress its potential to turbocharge progress. The split reflects in part philosophical differences. Many in the doomer camp are influenced by “effective altruism”, a movement worried that AI might wipe out humanity. Boomers espouse a worldview called “effective accelerationism”, which counters that the development of AI should be speeded up.

 

So, the difference between doomers, among whom many readers would like to see me included, and boomers would reflect "in part" philosophical differences. Both doomers and boomers could be influenced by "effective altruism" (which I have already considered in a section of my text on Information and Theology). What is not said is that both of them may be projecting their desperation, and their hopes, respectively, onto the lesser or greater capabilities of "democracy". What is not spelled out is that effective altruism, as represented by the outspoken atheist "moral philosopher" Peter Singer, is a failed substitute for religion. Boomers do not seem to be influenced by anything. Once upon a time they might have been denominated simply optimists, while the further discussion in The Economist indicates that psychologically and politically they might rather be seen as entrepreneurs, or "individuals who create and/or invest in one or more businesses, bearing most of the risks and enjoying most of the rewards" in this life, in view of the absence of a future one. It was the strong concern of investors that drove the dramatic development of the relation between the board and the CEO of the OpenAI organization. So much for philosophy and money, forgetting also the philosophy of technology, besides economics and politics.

 

Ego Inflation or Philosophy behind “Powerful Experiences”

 

I know of a particular case, besides one documented on the Internet, in which ego inflation may have contributed to the total conversion to Christianity of a thoughtful, educated and intelligent man, a process that I characterize as a serendipitous, paradoxical "right decision based on wrong premises". He found that since the decision was right, it did not matter whether the premises were wrong. But it discloses a defective understanding of the “system” and of the interpretation of Christianity as illustrated in the essay on Conscience and Truth by the former cardinal (and later pope) Joseph Ratzinger. The same premises may lead to other related wrong decisions, including in applications or implementations of the original decision, such as in interpretations of Christianity that advocate and justify the USA’s use of nuclear bombs on Hiroshima and Nagasaki (justified as avoiding “worse” consequences), or the handling of the conflict between Israel and Palestine, ultimately illustrated by the 2023 Israel-Hamas war, or, as I show, between Russia and Ukraine. The Israel-Hamas war offers a good example of the defective natural intelligence that obviously undermines the development and evaluation of AGI, as illustrated by a (here slightly edited) communication I received from an avowedly Christian engineer and scientist who felt strong sympathies for Israel:

 

[N]ow we touch on a question that cannot be solved by philosophy. Regardless of all philosophies, there is a reality that cannot be philosophized away. For example, in some cases you have to make decisions which, regardless of which decision you make, have negative consequences for people and society. And making no decision at all can lead to even worse consequences.

[...]

It is easy to sit in your best armchair with a cup of Darjeeling tea and a slice of toast with Cooper's Oxford Marmalade and comment on [others'] actions. And weigh the different options against each other. When you are in her situation, it is not so easy to play wise and good. Then one is forced to show one's true colors.

 

[R]eal life shows how we humans can be faced with terrible decision situations where whatever we decide leads to negative consequences. Still, a decision must be made. Not making a decision at all may lead to even worse consequences. Trying to play well in situations like this usually doesn't work.

 

Sitting in one's armchair and thinking beautiful philosophical thoughts that have nice Latin names (that make oneself appear wise and good, etc.) can certainly be done. But reality is sometimes more complicated than our theories about it (the kind of philosophy I now call "mental masturbation"). Over the years, I have become increasingly negative towards intellectual people, which led me to define the term "stupid-intelligence".

 

It is a similar coarse conception of “philosophy”, and an a-philosophical pragmatism or common sense, or a process of conversion or decision by unconscious emotion in the face of powerful experiences, that has "divinized" violence and wars, including the nuclear bombs on Hiroshima and Nagasaki in order to avoid supposed “worse consequences” (for me and/or for others than the bombed?). It is also a way to relativize a defective understanding of love, as when stating, for instance (transl. from Swedish): “Without truth, love risks being reduced to a syrupy drug. And without love, the truth risks becoming an icy regulation”. This statement, which per se may legitimately state the Christian dialectic between charity and justice, is understandable in view of the present daily tendency to use and abuse the word truth as mathematical formal truth, and love as covering everything sexual and “nice”, while ignoring the problem of Reason and Gender. But, besides the typical logicist (logical positivist) assumption that regulations or logical rules only need to be complemented with a separate vague “love”, it is also a simultaneous denial of the Christian primacy and essence of love (Matthew 22:37-40, esp. Galatians 5:14) coupled to a subjectivist, implicit perception of the meaning of truth. It is a “truth” that, opposite to the Catholic need of a Church, relies only upon a Protestant personal reading and interpretation of the Bible, eschewing the supposedly logical positivist “psychologism” of relating the whole question to depth psychology, as does the former cardinal and later pope Joseph Ratzinger in his essay Conscience and Truth.

 

It is also this contemptuous attitude to philosophy that allows the ignorance of the meaning of “powerful” as already painstakingly discussed in the context of “the sublime”. It is also this kind of ignorance that allows for the accusation of “psychologizing” when one does not know that psychology originated in philosophy, and further, that historically philosophy appropriated the realm of theology, while the latter is offset by the cheap acknowledgment of being a pious Christian believer who quotes the Bible. This, when in fact one is a logical positivist who not only legitimately distinguishes but also separates the facts of science from values and religion, which is soon reduced to politics, as I show in the essay on Information and Theology.

 

A separation of the facts of science from values is paradoxically also a disclaimer of the engineers’ responsibility for the consequences of their work and possible failures (so long as they are on the payroll and get paid by somebody). This is so because the engineered products, including “tools”, can be said to have always been “right” but used in the wrong way, which is equivalent to saying that they should not have been used (before being tested in the right way), or should not have been given to people who could use them wrongly (e.g. in the case of weapons). An example I have personal experience of is the context of the repeated groundings of the aircraft Boeing 737 MAX 9, the latest in January 2024 (historical overview here). A knowledgeable aircraft wizard assured me that the problems were not really technical but rather managerial, being a consequence of Boeing’s management in later times having prioritized economic interests over technical ones. It is easy to imagine how disclaimers are going to work with engineering products like AI/AGI that are less “concrete” than aircraft.

 

The separation we talk about implies a contempt for philosophy that fits the protest against, and rejection of, being labeled at all; as, say, a logical positivist or logical empiricist. The answer, then, is “I am not a logical positivist, I am only my name, (say) John Smith”, picking an opinion here and an idea there. This happens without realizing that this too has a labeling name, eclecticism, and that by the (father’s) family name one is labeled as a member of a family, which in turn is labeled by the name of the father, as Christians acknowledge having a common Father that makes them brothers. Others may unconsciously assume that their family is only their Mother (and half-brothers) who loves all her children obtained (“eclectically”?) from many fathers (many cultures having a special contemptuous and cursing denomination for such behavior), according to the conceptions that I illustrate in my introduction to Reason and Gender.

 

It is only gods, geniuses or “logicists” who, without any outer references, have an innate skill (often related to high IQ) to connect logical Leibnizian networks of empirical data (logical empiricists), and who dare believe that they can create their own individual a-historical “-isms” and an own logically consistent “Leibnizian” system of thought and unconscious feelings. They live without a family and without a history, since the renunciation of the evaluation of “-isms” is also an implicit, contemptuous renunciation of the supposedly “obsolete” thoughts of our forefathers, whose errors and achievements are the base of our ongoing life. They do not foresee that their own present work and conclusions will be contemptuously rejected or outright ignored by others within a few years. Unconsciously they rely on other “-isms” such as the mentioned eclecticism and state individualism, where the state, often confounded with society or one of its scientific subcultures, is the family of those without family. They do not realize that in relying on their own logical networks without references to earlier authorities, it is as if they were gods editing their own “bible”, forgetting that the Bible relies on God (and historical accounts). And they forget that typical logicists, who do not understand the meaning and limitations of the language from which logic is extracted (and not the other way round), reject the Bible itself because of logical contradictions, or abuse the Bible by ignoring them, reading the Bible as the devil does. These considerations lie behind the insight, highly relevant for understanding the present essay in the face of the issue of AI/AGI, that the crushing amount of references and the apparent complication of its argumentation are not necessary for a reader who is guided by the Bible instead of treatises on mathematical logic. This fits a key statement of Blaise Pascal in a relevant section of his famous Pensées (§ 251) which, in order to be properly understood, must be read and studied in its context, readable on the net (preferably in the original French because of orthographic misspellings in translation):

 

Other religions, as the pagan, are more popular, for they consist in externals. But they are not for educated people. A purely intellectual religion would be more suited to the learned, but it would be of no use to the common people. The Christian religion alone is adapted to all, being composed of externals and internals. It raises the common people to the internal, and humbles the proud to the external; it is not perfect without the two, for the people must understand the spirit of the letter, and the learned must submit their spirit to the letter.

 

Otherwise, it requires advanced secular ingeniousness to conceive tests of AI-AGI products beyond the example of my essay on the Ukraine conflict. I mean tests sufficiently pedagogical to put into evidence their possibly dangerous limitations, exemplified by the AI creation of original jokes, overviewed e.g. in an article in Time magazine in January 2022, an issue that expands on the Internet, as can be perceived by browsing with strings like <artificial intelligence AI humor humour>. One such test of AI limitations was illustrated by Anders Q. Björkman in the Swedish newspaper Svenska Dagbladet, December 2-3, 2023, “Jag hetsar AI:n – och då händer något” [I provoke the AI – and then something happens]: it is the request that the AI tell a joke of its own. It fails, but the journalist refers to the Swedish National Encyclopedia, which defines humor as arising from

 

"the collision between (at least) two human rule systems. To the extent that the collision turns out to be apparent or explicable from a third rule system, there is a case of humor"

 

What is interesting here is that humor is conceived as depending upon rule systems. As such, if rule systems are synonyms for logical Leibnizian networks, it also illustrates the spontaneous psychological relief experienced with the insight into the relativity and insufficiency of framing statements in only logical terms. I read somewhere that the German philosopher and theologian Ernst Troeltsch indicated that humor may have an influence similar to religion’s in fostering humility, by means of a downplaying of the importance of the big Ego (which is inflated by logic). And yet we are far from understanding humor if we consider the difficulty of understanding a standard work on humor in the world’s literature: Henri Bergson’s book Laughter. I claim that the process of understanding and evaluating it exceeds the scope of this essay, but it returns us to the broad philosophical (including psychological) and theological issues suggested in The Design of Inquiring Systems. A meaningful curiosity that can be mentioned here, however, is that Bergson emphasizes that it is easier to laugh collectively, which can be seen as implied in humor’s downplaying of the individualizing Ego.

 

It is the aforementioned separation of the facts of science from the values of religion (nowadays profane democratic “core values”) that Churchman struggled to counter in his whole work, at the (academic survival’s) cost of renaming God (in his The Design of Inquiring Systems) with the acronym GOD - Guarantor of Destiny, which in turn allowed an academically acceptable, watered-down further (mis)use of the term, while “divinizing” a priestly role for, and responsibility of, the Designer. That is, a divinized “Design” which in turn is a misunderstanding and abuse of the Kantian role of aesthetics in his Critique of Judgment, as addressed in my text on the computerization of society, and in a commentary to Gunnela Ivanov’s doctoral dissertation relating to the democratization of design.

 

David Noble and “The religion of technology”

 

In 1998 the historian and “critic of technology” David F. Noble published the book The Religion of Technology, whose subject, from a (for me) controversial “leftist” perspective, recalls the more politically neutral idea of Richard Stivers’ book Technology as Magic: The triumph of the irrational. What is of immediate interest here is Noble’s historical account of the reports and thoughts about AI up to about the year of publication of his book, found under the title “The immortal mind: Artificial Intelligence” on pages 143-171. In Noble’s account, the “powerful experiences” surveyed in the above section are not a revelation but only a triviality that accompanies the development of science and technology. Nevertheless, he does not linger on the reason for the main phenomenon involved, which he expresses as follows (p. 145, my italics):

 

Descartes’ peculiar obsession [with geometry and arithmetic, closer to God; my note] became the principal philosophical preoccupation for three centuries, as diverse thinkers sought to comprehend the mechanisms of human understanding, the categories of reason, the phenomenology of mind. Moreover, in the mid-nineteenth century, mathematics became not just a model for pure thinking but the means of describing the process of thought itself.

 

But why did it become just that? Thinking and feeling like the historian he was, Noble does not dwell on a philosophical criticism of Descartes as related to theology, nor on why it would have prevailed. In other words, Noble’s argument allows him to dispense with expanding his thought in the direction of the philosophy of science, and with the painstaking work of placing Descartes in a wider context as in Churchman’s DIS (pp. 18, 22, 37, 61-62, 218, 263, 274, 276). It would also have spared him from developing, or enveloping, his thought further in the direction of Marxist thought, which, as he does not perceive and foresee, would in its lack of theological insight in turn reinforce faith in, and the commitment of capital to, technoscience in business and industry. As when reading his concluding affirmative quotations from other authors (p. 208):

 

The religion of technology, in the end, “rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill…By striving for the impossible, [we] run the risk of destroying the good life that is possible.” Put simply, the technological pursuit of salvation has become a threat to our survival…

[…]

Transcendence…means escape from the earth-bound and the repetitive, climbing above the everyday. It means putting men on the moon before feeding and housing the world’s poor…The revolutionary step would be to bring men down to earth…But respite from our transcendent faith in the religious machine requires that we “alter the ideological basis of the whole system.” Such an undertaking demands defiance of the divine pretensions of the few in the interest of securing the mortal necessities of the many, and presupposes that we disabuse ourselves of our inherited other-worldly propensities in order to embrace anew our one and only earthly existence.

 

Despite my admiration for Noble’s historical accounts, his conclusions are a temptation toward an understanding, despite it all, of the motives for the USA’s McCarthyism or Second Red Scare. It can be seen as a reenactment of the Inquisition and its problems, except for Democracy (with a capital D) being seen as a substitute for God, as explained by Tage Lindbom in his book The Myth of Democracy. The same question reappears in Noble’s apparent “Marxist cultural analysis” that would be concretized in, and lead to, Cultural Marxism. What is interesting for the argument here is not Noble’s “quoted conclusions” above, but his gathering of the historical arguments that he witnessed during the formative years of AI development. Among others:

 

The reduction of human thought to mathematical representation made imaginable the mechanical simulation or replication of the human thought process. For once, the mysteries of the immortal mind were rendered transparent and comprehensible… The thinking person might then be joined by the thinking machine modeled upon the patterns of human thought but independent of the thinking person. [p. 148.]

 

Confronted by the limitations of mechanical analog computers while overseeing the operations of MIT’s Differential Analyzer, the most advanced computation machine of its day, [Claude] Shannon suggested speeding up and simplifying the system by substituting electromagnetic relays for machined parts, using Boole’s binary arithmetic to describe the electrical network. By using the Boolean system, invented to describe the laws of thought, to describe the operation of electric circuits, Shannon laid the groundwork for the electrical simulation of thought – the foundation of electronic computers. [p. 149.]

 

With his minimalist definition of machine intelligence, [Alan] Turing had deftly sidestepped philosophical discussions about the actual meaning of mind and thought; his materialist approach dismissed at the outset any discussion of the existence of an autonomous mind or a soul, which had preoccupied Descartes and Boole. (Turing had by this time become an avowed atheist). [p. 151.]

 

Nearly all of the theoretical developments that made possible the design of computers and the advance of Artificial Intelligence stemmed from military-related experience. [p.152.]

 

[Allen] Newell and [Herbert] Simon wrote: “The vagueness that has plagued the theory of higher mental processes and other parts of psychology disappears when the phenomena are described as programs.” [p. 155.]

 

The generally recognized father of what became A-Life [Artificial Life] was the mathematician John von Neumann, the “main scientific voice in the country’s nuclear weapon establishment.” Toward the early end of his life, suffering from terminal cancer, von Neumann earnestly devoted himself to weapons development, advocating the use of nuclear weapons and favoring a preventive nuclear war. At the same time, he began to ponder the fundamental logical similarities between life and machines…[p.165.]

 

Buttressed by government funding and institutional support, A-Life advocates shared with their Artificial Intelligence colleagues an arrogant impatience with criticism. [p.169.]

 

In short, Noble purports to show how religious and dangerous fantasies invaded the imagination of many people who were the pioneers of computer applications related to AI. In order to appreciate them properly, without underestimating their human dignity, we may see them as particularly gifted in one specific aspect of their personality, recalling the controversies and discussions of “Ashkenazi Jewish Intelligence” (see e.g. here, here, here, here, and here). Regarding the AI colleagues showing “arrogant impatience with criticism”, I may confirm or debunk this affirmation depending upon possible reactions to the present text of mine. Arrogance is the psychological sign of what psychologists have indicated as “Ego inflation”, as mentioned above and below in my text, and more provokingly in other texts such as on Logic as Rape.

 

 

Autism, or “Mathematical computer-oriented Minds”?

 

The following is a digression along the lines of the previous sections on cultural lobotomy, the human psyche, driving forces behind technology, and ego inflation behind powerful experiences.

 

It deals with the kind of decisions based on logic and “facts” and unconscious emotion that in the future can be taken more easily with an apparently undebatable ChatGPT support, and without having an idea of what systems thinking is about. This has also divinized relativity theory, quantum physics (“it works”), music, and luminaries such as the “divinized genius” Albert Einstein, and others such as Richard Wagner, all of them often with their own theological musings (here, here, here and here). There are many logical-mathematical manipulators whose thought process is described in the study of the foundations of mathematics by Jan Brouwer. There are also information and computer geniuses, or more generally engineering geniuses like the historical Vannevar Bush (whom I quote and comment upon elsewhere, without relating him, as I could, to the mention of Claude Shannon above), who could just as well be characterized as particularly gifted humans, like great writers, musicians, dancers, poets, artists, sorcerers, or now computer wizards or "mental acrobats". What is opportunely disregarded are sometimes meaningful aspects, if not troubles, of their personal lives. They can also have read the Bible very carefully, being able to quote selected pieces of text that either support their own interpretation of its message, or show perceived contradictions which supposedly (logically) demonstrate its irrelevance (cf. “Even the Devil Quotes Scripture”). They display capabilities that recall the “savant syndrome”. It is worth noting that this latter denomination is a "politically correct" shortening of an earlier one, idiot savant, which can be misinterpreted when ignoring that etymologically idiot, as in the word idiographic, derives from the root idio-, with a dictionary meaning of "only one's own": personal, separate, distinct, unique. The term is also sometimes qualified, respectively in dictionary boxes on Google and in Wikipedia, as really meaning (if one does not deepen the meaning of “wisdom”):

 

"A person who has an exceptional aptitude in one particular field, such as music or mathematics, despite having significant impairment in other areas of intellectual or social functioning; or a person who is extremely unworldly but displays natural wisdom and insight."

 

“Savant syndrome is a phenomenon where someone demonstrates exceptional aptitude in one domain, such as art or mathematics, despite social or intellectual impairment.”

 

It is interesting to note the relation to lobotomy in the case of the savant example of Kim Peek, who is reported to have had a “condition in which the bundle of nerves that connects the two hemispheres of the brain is missing”. I will propose another analogy to the “cultural lobotomy” and “savant syndrome”: the likewise complex but very modern term autism. It is qualified in Wikipedia as a neurodevelopmental disorder, soon argumentatively adjusted to be seen as a particular neurodevelopmental “order”, as in a broadcast program (March 12, 2024) on the Swedish public television SVT, in a series on “Personality”, where both autism and ADHD were seen as human diversities, or just different capabilities. I see this language manipulation as postmodern relativism, as when dysfunction or disability tend to be renamed as a (functional or ability) diversity. All this while the American Psychological Association, APA, still defines dysfunction (updated April 19, 2018) as “any impairment, disturbance, or deficiency in behavior or operation”, and disability (updated November 15, 2023) as “a lasting physical or mental impairment that significantly interferes with an individual’s ability to function in one or more central life activities, such as self-care, ambulation, communication, social interaction, sexual expression, or employment. For example, an individual with low vision or blindness has a visual disability.”

 

My point in this section will be to show how this sort of autism or diversity finds a legitimate expression in a book by Paul Hoffman, Archimedes’ revenge: The joys and perils of mathematics, published in 1988 during one of the hype periods of artificial intelligence, which was not yet hyped with the addition of neologisms such as artificial general intelligence. The title itself refers to the frustrating “Archimedes’ cattle problem”. The point I want to make may also be enhanced by reading and meditating upon at least the Wikipedia account of a later book by Hoffman, with the revealing title The man who loved only numbers, published in 1998.

 

Keeping to Hoffman’s first-mentioned book, it is interesting to note that, besides the understandable lack of reference to G. Leibniz in his extensive 11-page word index, he makes the following kinds of disclaimers, which position his understanding of mathematics and logic (pp. 2-4, 159):

 

Many books have been written about the philosophical underpinnings of mathematics, about the extent to which it is the science of certainty, in that its conclusions are logically unassailable. Many other works have rhapsodized at length about the nature of infinity and the beauty of higher dimensions. Such philosophical and poetic excursions have their place, but they are far from the concerns of most working mathematicians. In this book I give a glimpse of some of the things that mathematicians, pure and applied, actually do. […]

 

With fundamental questions about number and shape still unsettled, it is no wonder that there is much disagreement and confusion about what the computer – a very complex mathematical tool – can and cannot do. I have tried to stay clear of mushy metaphysical issues about the nature of man and machine in favor of presenting what little is known about the theoretical limits on computing. […]

 

Much ink has been spilled in the philosophical literature on what the absence of judgments means in terms of a machine’s ability to think, but the pursuit of such intriguing speculations would take us too far afield.

 

This does not prevent Hoffman from, deep in the book (p. 168), speculating about using Boolean algebra for e.g. representing the state of two people’s being friends and not being friends, irrespective of the meaning and measurement of friendship. The problem is extended to his discussion of democracy when he states that, indeed, mathematics demonstrates the theoretical futility of creating a perfectly democratic voting system. He refers (pp. 5, 223) to the “wrestling with the mechanics of setting up a democratic nation”, and to the “Nobel prize-winning work of the American economist Kenneth Arrow” that shows that achieving the ideals of a perfect democracy is a mathematical impossibility. Never mind the problem of Arrow’s conception (the “impossibility theorem”) indicated by e.g. West Churchman, and the myth of democracy, while the attention is focused on its mathematics. And Hoffman spends pages (e.g. 237f.) referring to books applying game theory to conflicts in the Old Testament between God and human beings, including game-theoretic implications of omniscience, omnipotence, immortality, and incomprehensibility. Not to mention the pondering (pp. 144, 211) that “a machine state is like a mental state” and about the difference between electronic components in a computer and neurons in the human brain.
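
For a concrete handle on the kind of result Hoffman invokes, the classic Condorcet voting paradox, the standard textbook example motivating Arrow’s theorem (my own illustration, not Hoffman’s text), can be sketched in a few lines of Python with three hypothetical voters:

from itertools import combinations

# Each hypothetical voter ranks candidates A, B, C from most to least preferred.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def prefers(ranking, x, y):
    # True if this voter ranks candidate x above candidate y.
    return ranking.index(x) < ranking.index(y)

# Pairwise majority contests between all candidate pairs.
for x, y in combinations("ABC", 2):
    x_votes = sum(prefers(v, x, y) for v in voters)
    winner = x if x_votes > len(voters) / 2 else y
    print(f"{x} vs {y}: majority prefers {winner}")

# Output: A beats B, and B beats C, yet C beats A. The collective
# preference is cyclic, so majority voting yields no stable winner.

Three individually rational rankings aggregate into a collective cycle; Arrow’s theorem generalizes this failure to every aggregation rule satisfying a few innocuous-looking axioms, which is the mathematics Hoffman gestures at, whatever one then concludes about democracy.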

 

All this shows how the hubris of artificial intelligence, even long before the success of today’s so-called general artificial intelligence, grew from the apparent self-imposed limitations of reasonable people who were certainly not mentally impaired, if yet possibly living and thriving in a sub-culture. The problem then is not that they were autistic in a simple, raw medical-clinical sense. In analytical psychology there is the conception of Psychological Types, which may be seen as vulgarized in the culturally and historically uprooted Five-Factor Model of personality. A better-known variant is popularized in the book Surrounded by Idiots, and others by the author and management consultant Thomas Erikson, with the DISC assessment tools. In these popularizations of types the question is the interplay between variants of mathematical statistics, especially cluster analysis, in mathematical approaches to personality psychology, in oblivion of their “philosophical assumptions” (see the sketch after the Bible quotation below). Otherwise, the question in analytical psychology is of the interplay between thinking, feeling, sensation and intuition in relation to consciousness and the unconscious, and psychic development is the capacity to achieve a balance among them in the so-called process of individuation. Such conceptual constructions are no more absurd than the mathematical ones in, say, quantum physics, if philosophical pragmatism is taken seriously. They have a theological counterpart in the Bible’s famous 1 Corinthians 12:7ff.:

 

“Now to each one the manifestation of the Spirit is given for the common good. To one there is given through the Spirit a message of wisdom, to another a message of knowledge by means of the same Spirit. […]”
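
As for the sketch promised above: the following is a minimal illustration, my own construction rather than anyone’s actual assessment method, of how cluster analysis manufactures discrete personality “types” out of continuous scores; the trait scores and the choice of exactly two starting centers are hypothetical assumptions:

import random

random.seed(7)
# Hypothetical trait scores for 20 people on one continuous dimension.
scores = sorted(random.uniform(0.0, 10.0) for _ in range(20))

centers = [2.0, 8.0]  # the number of "types" is decided in advance
for _ in range(10):   # a few iterations of one-dimensional k-means
    groups = [[], []]
    for s in scores:
        nearest = min(range(len(centers)), key=lambda i: abs(s - centers[i]))
        groups[nearest].append(s)
    centers = [sum(g) / len(g) if g else centers[i]
               for i, g in enumerate(groups)]

print("resulting 'type' centers:", [round(c, 2) for c in centers])
# The algorithm always returns exactly as many "types" as it was given
# centers for: the number of types is an assumption fed in, not a discovery.

Whatever the underlying continuum of scores looks like, the procedure partitions it into the requested number of “types”, which is precisely the kind of unexamined philosophical assumption at issue.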

 

The cultural problem appears when a specific constellation of aptitudes is awarded improperly high value by society at large, such as by industry, finance and academia in a secular technical-industrial culture developed after the scientific revolution. As in Wikipedia’s quotation from William Whewell’s Philosophy of the Inductive Sciences: it is (my italics) “the transition from an implicit trust in the internal powers of man's mind to a professed dependence upon external observation”. It is the external observation of empirical experiments that is assumed to verify the results of mathematically conceived relations to earlier external observations. Society and its politics begin to reward mainly the thinking seen as equivalent to mathematical-logical ability, while mathematics and logic undergo changes in themselves and in their functions, subservient only to external observation. And this can be seen as inflating the self-confidence (“ego inflation”) of seduced mathematically and logically gifted scientists who are overvalued by “democratically” selected political leaders and managers striving for profitability and economic-industrial-military power in a society in a cultural crisis driven by the Western world.

 

Artificially intelligent artistic production?

 

Extending the question of powerful experiences, we come to the problem of how to frame the question of AI/AGI influence and use in artistic production, where art stands for the production of powerful experiences. The matter has already been noted in the cultural world, and even mentioned above, but here it will be inspired by an article on (my transl.) “Artificial music is the future’s melody” published in the major morning newspaper Svenska Dagbladet (30 December 2023), written by Björn Ulvaeus, presented as a Swedish musician, singer, songwriter, and producer best known as a member of the musical group ABBA. The hype about this issue is illustrated also by the fact that the following day another author, Ebba Bonde, wrote an article dealing with the AI manipulation of pictures/images, having, however, the typically noncommittal and advertising title “The use of AI is as much a danger as an opportunity”.

 

Ulvaeus starts by acknowledging that we all are influenced by what we have listened to in our lives, that he does not know where his inspiration comes from, and he expresses a deep gratitude for the miraculous mystery of music’s mere existence, without referring to his otherwise well-publicized atheism (which prevents thanking God). He assures the readers, however, that “we do not need to feel shame for having difficulty with the definition of what music is”. He notes that sounds have a remarkable ability to influence our “feelings”, and to penetrate into our “unconscious”. In this way he proceeds without feeling shame for having difficulty with definitions, this time not only of music but of feelings or the unconscious, which happen to have been main matters of struggle for theologians, philosophers and psychologists for centuries. Giving up these struggles, the ongoing drive for the computerization of society, focused here on AI, is combined with an aestheticizing postmodern abuse of Kantian aesthetics that has been renamed DESIGN. In this perspective Ulvaeus, as musician and songwriter, is a designer. It all unconsciously follows the steps of the likewise abused post-Kantians Nietzsche and the academically often referred-to, if not namedropped, Heidegger. I have written about this in the essay on Computerization as abuse of formal science, with the alternative earlier title of “computerization as design of logic acrobatics”. It deals now with the acrobatic use of AI, whose users admire the circus acrobatics of computer scientists turned into craftsmen.

 

Therefore, the text proceeds swiftly to state that up to now the songs that we love have come from humans who have dug deep into their psyches, experiences, sorrows, triumphs, fears, losses and endeavors. “But now has appeared a new type of composer, eager to learn by means of ‘deep learning’…with a neural network that loosely resembles the human’s, but not yet.” Ulvaeus tells about being introduced to Google DeepMind’s “Music AI incubator”, which he felt was an overwhelming experience that seems to have made him mainly, if not only, aware of the importance of finding solutions for protection, in “AI training” or “prompting”, from infringements of artistic copyright related to sound, style, genre, voice, instrumentation, melody and lyrics.

 

But the reader of his article senses some sort of insight in what follows:

 

Songwriting is an intuitive and deeply human activity, and the big question is whether an AI model can create something that reflects the humanity behind a song, and whether that actually matters to the listener. Is there a depth to what a human can create that an AI cannot replicate? If so, can we appreciate that depth? Do we care? Or will our brains fill in the blanks and make us think we're hearing something that isn't there? - Musical experience is subjective. And isn't the only thing that matters what comes out of the speakers and the effect it has on the listener? Whether it was created by AI or a human? Or?

[...]

The Abba story made our "Voyage" avatar concert in London possible. It is a human story of hard work, triumph, disappointment, joy, love and sorrow. [...] It is so organic and full of contradictions and crossroads where we could have taken a different path that I cannot help but cling, perhaps vainly, to the hope that it could not have been artificially created.

[...]

If AI, through the lens of language, can learn everything a human can know about the innermost nature of the world, can the creative process itself be replicated by AI? The process by which ideas born of curiosity and imagination are realized and push the boundaries of the human art world.

[...]

Will AI help us create great new work or make us obsolete? Personally, I think we will learn to coexist, as we have always done with other technological inventions in the world of music.

 

To start from the end: “we will learn to coexist”? So, it is not AI that will teach us to do it? Just as I claim that it will not write my essay on the Russia-NATO-Ukraine conflict? Nor will it make labor unions superfluous by teaching “us” what to do about conflicts of interest, greed, hate and unemployment? And Ulvaeus refers to the assumedly prophetic Yuval Noah Harari, who claims that AI has “hacked” the operating system of human civilization. So much for prophecy and for knowledge about operating systems, as transpires in Harari’s Can mindfulness save us from the menace of artificial intelligence? An unthought prayer to an imaginary God apparently cannot compete with “mindfulness”, since it is reported that Harari is “derisive about the great religions having anything to offer” and

 

“to the best of our scientific knowledge, all these sacred texts were written by imaginative Homo sapiens. They are just stories invented by our ancestors in order to legitimize social norms and political structures.”

 

That is, “to the best of his knowledge”, which also fits what has been called Sweden’s official religion of atheism, also practiced in the philosopher Åsa Wikforss’ analysis of so-called fake news (while ignoring The Myth of Democracy), which are related to the consequences of AI/AGI. Harari’s “mindfulness” promisingly should have meant that “we better understand our minds before the algorithms make our minds up for us”, but it seems to either ignore or counter everything that the most mindful students of the mind have come up with, beginning with the analytical psychology that e.g. Jordan Peterson adduces, as I myself have done in my writings. The problem, however, is defectively posed to begin with, since it does not address the question of the meaning and function of art as necessarily done in philosophy, as suggested in my essay on the influence of Immanuel Kant in the conception of societal computerization as an abuse of formal science. And regarding music, Ulvaeus’ understanding misses the primary meaning of art as human communication, communication between humans closely related to love, whose corruption, becoming visible in this discussion of AI in art, is surveyed in my preface to Reason and Gender. Not to mention the missed history and meaning of music in religion and theology, as indicated in my essay on Information and Theology.

 

Concerning “avatars”, as in avatar concerts, it is possible to identify the same tendencies of dehumanization of human relationships as outlined about 2D-Lovers in “The technological turn” of my Reason and Gender.

 

For the rest, I claim that the above approach to artificially intelligent artistic production exemplifies the same dehumanization process that I noted above in the context of the hyped Turing Test, about which I state that the less one understands and feels what a human being is, the more easily he will be equated to a machine, an avatar, an ant or whatever, as in the Holocaust. The decline of culture and civilization, coupled to an undervaluation of philosophy and theology, will necessarily become visible in the discussion of more abstract issues such as art. Consequently one will be able to understand neither what Plato writes about the effects and importance of music (Republic, 398b-403c, but esp. 424b-c) nor music’s role and effect in the paganism, atheism, and future of our youth.

 

More generally, one will not be able to understand the meaning of the difference between an original work of art and a copy of it, which stands at the heart of (the infringement of) copyright. Original means that something is (problematically established to be) not consciously and openly influenced by, or rather not generally recognized as similar to, a prior human creation. In terms of analytical psychology, however, two or more works of art can be influenced or inspired by an archetype that belongs to the collective unconscious and as such is legitimately common to humans. This problematizes the concept of copyright itself at a more basic level than the erudite and politically oriented famous work of Walter Benjamin, The work of art in the age of mechanical reproduction. It puts in evidence the economic-political core of the creation of copyright, while it can be questioned in spiritual terms as it was in earlier times, e.g. among musicians who freely “borrowed” and lent from each other and from themselves, justifying it by seeing their own imagination and creativity as a gift from God. Or, as I once heard the world-famous tenor Plácido Domingo respond in a broadcast interview to somebody who praised his voice: that it was not his own merit but a loan from God, who could at any time take it back.

 

I myself come to think of my own indignation and feeling of revolt at the fact that my mentor West Churchman’s mentioned book The Design of Inquiring Systems (1971) is not being reprinted and is not made available on the Internet, while a direct request for it had not even been answered (as of January 2024) by the owners of its copyright. It is an eloquent example of the unethical disinterest in disclosing truth, while also negating the supposedly “holy” democratic right of freedom of expression, all under the silent mantle of narrow political-economic (dis)interest.

 

Return to The Design of Inquiring Systems

 

Well, the definition of wisdom is controversial. By way of summary: it is all about chapters 2 and 5 of part 1 of DIS, while ignoring or not understanding the whole of its 16 chapters in both parts 1 and 2 of the book. In particular, it is worth mentioning that in AI and AGI the emphasis is on chapter 2, the building of fact nets, and not on chapter 5, since the AI community is not specifically designing and performing scientific experiments and creating or establishing facts, but only selecting them, in oblivion of the intellectual challenge illustrated in Steven Shapin's famous A Social History of Truth. This is a phenomenon that I observed during many years of teaching at universities: most readers understand the book up to chapter 4 or chapter 5, discussing empiricism, but later they seem to get gradually lost, up to the point of confessing that they cannot see the necessity and meaning of the second part of the book with the last 7 of its 16 chapters (where the references to Carl Jung and analytical psychology appear). For many the final result is that when discussing the book’s content they testify that yes, they have read it, and that they already “know” it, assuming that it does not influence their preconceptions on the AI issues, and their conclusions.

 

This is not to “divinize” West Churchman who, in my opinion, was limited by his need to survive academically by fitting into USA academia (also here), and for reasons of the problematic relation between science, theology and politics that I survey in my text on Information and Theology. I think that he basically did it by repeatedly relating to the academically divinized philosopher Immanuel Kant, whom I comment upon in my essay on the ongoing computerization of society. Kant also deserves an important place in Churchman’s latest major book, The Systems Approach and its Enemies, in chap. 4 on “Logic: A Theory of Reality, or Kant for Planners”. In my view, Kant, along with his numerous followers such as Jürgen Habermas, diverted the development of many scholars, including one of Churchman’s most knowledgeable and faithful students, Werner Ulrich, who, by the way, authored the best biographical documentation of his thesis advisor, accounted for in the summary of Churchman’s influence on me.

 

 

5. Temporary conclusions

 

On the basis of the above I wish, for the time being, to emphasize one basic conclusion. The possible if not probable result of the use of AGI is that it will suffocate debate by cementing the past and what (and where) has been written or said up to now, both in terms of the choice of, or availability of, data (measured and created, or chosen by, and available to, or affordable for, whom?), of the imposed way of thinking (as illustrated in my Logic as Rape), and of the affordable technological base. An alternative simple formulation, found in an article in The Economist (Nov 9th, 2023) in the context of the Hollywood actors’ strike reflecting a fear of the technology, is:

 

The process of ingesting everything and then spitting out an average may lead to a stylistic and conceptual regression to the mean, says a literary agent, who sees similarities with the algorithms on social media that help propagate the most pedestrian views.
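
The literary agent’s “regression to the mean” can be given a minimal numerical illustration, my own sketch and not The Economist’s, in which every “work” is reduced to a single hypothetical style score and each new generation can only average what it has ingested:

import random
import statistics

random.seed(42)
# A varied population of human "works", each reduced to one style score.
works = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(5):
    spread = statistics.stdev(works)
    print(f"generation {generation}: stylistic spread = {spread:.3f}")
    # Each new work is the average of a random sample of existing works,
    # plus only a small individual deviation.
    works = [
        statistics.mean(random.sample(works, 50)) + random.gauss(0.0, 0.1)
        for _ in range(1000)
    ]

The printed spread shrinks generation by generation: variety collapses toward a single pedestrian mean, which is the agent’s fear stated numerically.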

 

AGI may work for contexts of basic natural science isolated from human interaction, as is the case of established classical physics, such as in astronomy (but even then, not necessarily in quantum physics). It will cement what has existed up to now around us, or around those who supply or have control over the environment that has produced the data for unknown uses, and who have control over the equipment. It will even include logic manipulations (deductions and inductions), in e.g. large language models, LLM (soon "very large language models", VLLM?), of what is electronically recorded as having been said, written or heard up to a certain moment about "something" with no agreed-upon definitions, finally selected by unknown GPT "trainers" for known or indeterminate uses. It will keep lots of people busy in a difficult rebutting of what has been easily said or written by anybody with the help of AGI-GPT. It may be seen as a loudspeaker or broadcaster of it all to all of us, or to selected audiences. All this is coupled to the difficulties, if not impossibility, of genuine debates, as I try to survey in a text on Information and Debate. Against the teachings of the most serious research on the history and sociology of science, one is neglecting every insight into the fact that most of the ongoing scientific work is based on confidence in the truth of others, which from now on will not be known, as masterfully illustrated by the earlier mentioned Steven Shapin in A Social History of Truth. It is an image that revives the mind-blowing concerns raised already in 1967 by Russell Ackoff’s article on Management Misinformation Systems, which also clarifies the pitfalls of offering ready-made "recipes" on how to solve known problems using known methods:

 

Five assumptions commonly made by designers of management information systems are identified. It is argued that these are not justified in many (if not most) cases and hence lead to major deficiencies in the resulting systems. These assumptions are: (1) the critical deficiency under which most managers operate is the lack of relevant information, (2) the manager needs the information he wants, (3) if a manager has the information he needs his decision making will improve, (4) better communication between managers improves organizational performance, and (5) a manager does not have to understand how his information system works, only how to use it. To overcome these assumptions and the deficiencies which result from them, a management information system should be imbedded in a management control system.

 

DIS deals just with what is, or should be, management, control and system. In terms of DIS, AGI, which in the best case may work as a profitable "user manual", implies increased dependence upon available data that unknowingly may be fake data, misinformation and disinformation, and upon computer technology, including dependence upon those who own and can afford the data and technology. In other words: DIS chapters 2 and 4 on Leibnizian Inquiring Systems: Fact Nets, and chapters 3 and 5 on Lockean Inquiring Systems: Consensus, the latter probably being the last chapter understood by most readers, and the coupling to their blind reliance upon the “myth of democracy“ (more on this myth below and in my essay on the Russia-Ukraine conflict). This will be to the detriment of, in part, chapter 6 on Kantian Inquiring Systems: Representations, which is the dogma of most contemporary philosophers; chapter 8 on Hegelian Inquiring Systems: Dialectic, the last chapter that is usually understood by most labor unions, Marxists and leftists; and chapter 9 on Singerian Inquiring Systems: Measurement and Progress, unconsciously accepted, if yet not understood in detail, by most scientists. Plus the whole part 2 with its 7 last chapters, starting with chapter 10 on the three basic models or imageries of inquiring systems. It implies emphasis on the Democritean Imagery or Mechanism, and the Carneadean Imagery: Probability (Statistics), to the detriment of the Aristotelian Imagery: Teleology or goals, which is already applied earlier in chapter 3 on the teleology of The Anatomy of Goal Seeking, and which I would outright relate to Aquinian theology with the consequent teleology and (analytical) psychology. The rest of part 2 introduces the problem of implementation, recalling the question of The Researcher and the Manager, plus the question of religion, theology, psychology and faith, the challenges to teleology, and the dimensions of inquiry that are developed in the next published book, The Systems Approach and its Enemies.

 

In still other words: it is the problem that I try to approach in my earlier essays on the computerization of society and human-computer interaction. One of the latest suggestive reports is the case of an insurance company that trained advanced text chatbots to initially reject the insured clients’ written requests for compensation based on their particular insurance policy. It is thought-provoking to imagine what this way of thinking means for the medical safety of patients to be served in the future by commercial medical AGI chatbots. This trend has already started and has been commented upon in Is ChatGPT Smarter than a Primary Care Physician?, without a visible understanding of what the basic problem is, besides moral and juridical “responsibility”, beyond an initial comparison between the performance of ChatGPT and human clinical practice. Or in a medical report on This Drug works, but Wait till You Hear What’s in it, without an understanding of the difference between statistics and a scientific theoretical “Why”, as explained by Churchman in the book Prediction and Optimal Decision (chap. 6 on “Objective Probability”, relative to the statistical population or group). I know of statisticians who practically made their living by writing physicians’ doctoral theses, since such theses consisted mainly of statistical work with their available data. This is as medicine tends to become more and more the result of statistics on big data, with correlation a substitute for problematic causation, when the concept of cause itself is put in question as in advanced quantum physics and philosophy of science. The whole is further complicated by physicians themselves, e.g. in health care centers, giving in to smart AI-sellers’ offers of “amplification” or computer “support” (which I comment upon in the context of human-computer interaction). They will delude themselves that they retain a responsibility which in fact can no longer be their own, being simply victims of the good old Hawthorne effect, seduced as they are by being the focus of attention and by the novelty of being research subjects of fashionable AI. All this while the first symptoms of AI abuse in health care are being reported (in Medscape, November 20, 2023) in Accused of Using AI to Skirt Doctor’s Orders, Deny Claims.
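
The point about correlation substituting for causation admits a deliberately crude sketch, my own construction and not taken from any of the cited reports: two clinical-looking measurements driven by a hidden common factor correlate strongly although neither causes the other, and only a theory of the common cause supplies the missing “Why”:

import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)
# A hidden confounder, e.g. an unmeasured disease severity.
severity = [random.gauss(0.0, 1.0) for _ in range(500)]

# Two observed "clinical" markers, each driven only by the confounder.
marker_a = [s + random.gauss(0.0, 0.3) for s in severity]
marker_b = [s + random.gauss(0.0, 0.3) for s in severity]

r = statistics.correlation(marker_a, marker_b)
print(f"correlation between the two markers: {r:.2f}")
# Prints a correlation around 0.9: big data would "discover" a strong
# link, yet neither marker causes the other.

Statistics on big data will happily reward such a “discovery”; only a theory of the common cause, the scientific “Why”, explains it.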

 

It is obvious that in today’s scientific and cultural milieu, readers of these lines feel frustration in the face of the perception of an “enormous, impossible” task of understanding all the above references as related to the presently “dogmatic” scientific and technical bases of western civilization. I claim that this stands at the heart of what I sense as being a western cultural crisis: the crisis of understanding the bases of our own western culture, in analogy to the task of understanding Christianity as compared to, say, Hinduism, Islam or Confucianism. Nevertheless, there are people who do not shy away from such challenges. An example is the Swedish intellectual Ellen Key who, in launching in Sweden a sort of “enlightened feminism”, did not shy away from embarking on a study of Theosophy related to Buddhism and Anthroposophy. I have also heard of students who meet problems analogous to the just mentioned reference to Theosophy and Buddhism when envisioning critical hospital care by computer-aided dying of patients, through the study of the relationship between Heidegger, death, and technology in Japanese philosophy and engineering. More specifically in a field known as kansei engineering, accounting for “emotions, affects, sensibility or perception” (whatever they happen to be or however they happen to interrelate) in line with traditional Buddhist and Shinto philosophy, while in fact ignoring the whole body of analytical psychology, as I explain in the above-mentioned criticism of Key’s enlightened feminism.

 

I do not assume that e.g. feminism needs to be a much more complex phenomenon than the ongoing computerization of society. But I have already tried to explain (cf. above) why writing about and discussing serious matters is in general a hopeless task today, regardless of the paradox of my writing this down just now, thereby repeating the error committed by Churchman in writing and publishing the DIS to which I referred above. Because of all this I rely upon some excerpts from my earlier writings, starting from my article on The Russia-NATO-Ukraine Information Crisis, where I quote what Plato, before or "as if" presaging the advent of Christianity and of the Bible, wrote (as in my available translations) in his famous Seventh Letter (341d and 344c):

 

 

If I thought it possible to deal adequately with the subject in a treatise or a lecture for the general public, what finer achievement would there have been in my life than to write a work of great benefit to mankind and to bring the nature of things to light for all men? I do not, however, think the attempt to tell mankind of these matters a good thing, except in the case of some few who are capable of discovering the truth for themselves with a little guidance. In the case of the rest to do so would excite in some an unjustified contempt in a thoroughly offensive fashion, in others certain lofty and vain hopes, as if they had acquired some awesome lore. […]

 

 

For this reason no serious man will ever think of writing about serious realities for the general public so as to make them a prey to envy and perplexity.

 

 

This insight is summarized by the Latin locution Intelligenti pauca (“few words suffice for him who understands”), as well as by the point in my essay Information and Debate on the difficulty, if not meaninglessness, of a supposedly democratic argumentation on deeper issues, in view of the Quality of Information. Its meaning, however, goes deeper in the Bible, as can be seen in the Bible Gateway by typing in the search field e.g. “whoever has ears”, and “ears eyes”, my own main choices being Matthew 13:1-23, Luke 8:4-15, Mark 8:18, and Acts 28:27. I realize that my writing this text is then a paradox, and it all is an answer to the question of what the conclusion of this paper is for all those who do not want to, or cannot, read and understand the linked references. They might have been superfluous if the population practiced some of the great religions, for us in the West beginning with Christianity. It has been said that if people had followed at least five or six of the ten Commandments, the world’s situation would have substantially improved, with or without AI-AGI.

 

The reader who feels somewhat impatient because of the length of this paper may now jump over the following chapter 6, which contains occasionally commented (in bold font) further explanations and examples of problems of AI in earlier papers of mine, going directly to chapter 7 on The open letter: "Pause Giant AI Experiments".

 

 

6. References to ChatGPT in other essays

 

In what follows below I try to offset the hopelessness of discussing matters and references that are difficult to read and understand, the more difficult in the lack of motivation. I do this by means of (sometimes heavily edited) excerpts from texts that I have written as of July 2023, which when necessary are interpolated with my specific comments in bold text style. They will be completed with my comments to answers obtained from ChatGPT to some specific questions of mine. But let me start with a somewhat "lofty" section out of Information and Theology that matches the probable theological meaning of the Plato quotation above:

 

 

From: Information and Theology

 

"Myths" of Artificial Intelligence. The explaining away of religion and Chesterton's "believing in anything" also opens the way for believing in the future interaction with other planetary worlds in outer space, and in artificial intelligence, AI, computers with self-consciousness, whatever it is, will overpower the human mind, or that AI-robots should have human rights, or that we will ultimately create the superintelligent robotical paradise on Earth, or whatever wherever. Or believing in discussions about, say - machine consciousness and machine ethics ending in so called technological singularity.

 

All this without the possibility of "debating" the matter, because of myriads of misunderstandings arising in part from faulty or difficult definitions and presuppositions, to begin with about the meaning of intelligence. Buchanan's initial consideration says it all for those who can understand: "In calling AI an experimental science, we presuppose the ability to perform controlled experiments involving intelligent behavior." I repeat, for emphasis: "perform controlled experiments involving intelligent behavior". I add: where control is possible and advisable, as discussed by West Churchman in the book that followed DIS, The Systems Approach and its Enemies, especially the chapter on "Logic: General", followed by "Logic: A Theory of Reality, or Kant for Planners". But who cares? "Please give me a few lines' summary because I have not the time to read all this stuff"…


This phenomenon of misunderstandings and phantasies about AI is revealed and, in a sense, also "solved" in all its complexity by what I regard as a fundamental work by Churchman from the time we elders met the first wave of hype-enthusiasm about AI in the seventies. It was The Design of Inquiring Systems: Basic Concepts of Systems and Organization, whose reading I myself tried to expand and facilitate by means of a Word and Issue Index, followed by a sort of contextual evaluation in The Systems Approach to Design and Inquiring Information Systems.


I think that with this kind of understanding it is not, anymore, a question of whether AI in its many forms will be applied in modern society. It is rather a question of forecasting the consequences and the possibilities of counteracting the dangerous ones, a problem that I considered in my Trends in Philosophy of Technology, and ultimately a theological problem that motivates the present text. The difficulties will be enormous, not only because of the academic devaluation of theology, and even of philosophy, in technical and political contexts. Even when a professor of computer science warns about overconfidence in AI in Swedish mass media (Dagens Nyheter, October 7th 2018), he relies upon exhortations about the need to be conscious of the system's limitations. The warnings are based upon appeals to understand and to be conscious that we are still far from creating (an undefined) intelligence at a human level with the ability to feel and reason, evaluate, make moral evaluations, and explain what it is doing and planning to do. All italicized concepts remain undefined, presupposing political, philosophical and theological competence: understanding why we are "still" far from "creating" artificial life and paradise on earth, understanding why not to "believe in anything".

 

Even a most sophisticated Italian mathematician, Paolo Zellini, who dedicated much of his life to writing about the philosophy of mathematics including computer science, concludes his work with a rather inconsequential book, so far only in the Italian language, that vaguely warns about the Dictatorship of calculus. His barely outspoken warnings are supported by reference to the extremely explicit ones by the more popular sort of polymath Jaron Lanier. Lanier's limitations appear most clearly when he also introduces provocative thoughts on virtual reality that challenge the earlier elaborate condemnation by others of virtual reality as deleterious gnosticism in computer science. Despite positive ambitious reviews, Zellini's neglect of theology, particularly of Christianity, leads him to miss the most relevant historical aspects of the contribution by Jan Brouwer to the understanding of the problems considered here. 

 

It is symptomatic that when we humans no longer believe in God, we happen to believe that we (Nietzschean superhumans) are so godlike as to be able to create machines, in the track of ChatGPT, that will transcend human intelligence and be substitutes for God.

 

My comment: This excerpt illustrates how the hype of AI and AGI may nourish itself upon the secular abandonment of the idea of GOD and related theology.

 

 

From: Reason and Gender

All this while the modern technological mindset can be envisaged as promoting the deconstruction or destruction of the “You-Thou” (cf. Martin Buber) by reducing it to a divinized capitalized “It”, as in the hyped “dialogue” of humans with a ChatGPT using Large Language Model tools – LLMs (see also here). Language models are basically logic and, further, mathematical logic, which forgets the observation by the Jan Brouwer mentioned above (treated in my essay here), in his study of the foundations of mathematics and logic, that logic was to begin with extracted from language. This opened the way to the tautological consequence of the ambition to mathematize logic into "mathematical logic" so that it could finally be used as a substitute for the very language conversation between humans from which logic was originally extracted. Humans then gradually leave the scene as unemployed, and the original language conversations and negotiations turn into authoritarian LLM declarative sentences sent through computer networks by a few to the many, plunging everything into the problematic "human-computer interaction".

It is the kind of chat that produces instances in Japan where individuals are reported to have fallen in love with, or to have developed so-called romantic feelings for, (holograms of) contrasexual (cf. LGBT), virtual or fictional characters, sometimes referred to as "2D lovers" or "waifu culture" (commented in depth by The New York Times, July 26, 2009). All this is to be related to the concepts of “3D lovers” and Real Doll. This phenomenon gained some attention in the media, particularly with the emergence of vocaloid characters like Hatsune Miku in a perhaps more permissive Japanese culture permeated by Shintoism. More information about the topic can be obtained in connection with the Nijikon and Hentai phenomena as well as fictosexuality and pansexuality, which paradoxically indicate the deep gender-psychological and thereby theological implications of human-computer interaction.

Such detours of so-called human “sexuality” stand at the basis of the undermining of the fundamental “I – You” relation whose “archetype” is the gender relation between humans represented by man and woman. This undermining implies the sliding of the You into an It, and a consequent confusion of pronouns, illustrated by what has lately happened in the computer field of human-computer interaction or HCI with the appearance of the device ChatGPT mentioned above. In fact, when I inputted into the machine the question “Please explain which are the basic databanks, whether including the whole Internet, which are used in order to find and elaborate the answers in ChatGPT”, I initiated my question with a “please” felt to be directed to those They (or a He or She?!) who are (or were?!) “responsible” for its operation. But I got the following answer from something that could have been a We but was an It, as follows (my emphasis):

 

As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I don't have direct access to databases or retrieve information from specific sources in real-time. I have been trained on a diverse range of data, including books, websites, and other texts, to develop a broad understanding of human language. My purpose is to assist and provide information to the best of my abilities, regardless of the specific sources of my training data.

 

This is, of course, the problem of responsibility in human interaction, which today is exacerbated also by the introduction of the “autonomous” technology represented by the ramblings of so-called artificial general intelligence, AGI, where it is no longer questioned what intelligence is or should be.

 

My comment: All this highlights the question of “WHO” is ChatGPT or whatever future actor in the expanded field of Artificial General Intelligence. It is closely related to the question of "whose responsibility", which in turn tends to be easily transformed into an essentially juridical law-question (as Portuguese-speaking readers can hear in an interview with the Brazilian computer scientist Silvio Meira, here and here), while the source of the problem is found at the basis of what fact, logic, and intelligence are all about as related to ethics. Today there are (too) many brilliant minds (as I show in my essay on computerization) who are tempted to discuss, in an extremely logically structured way, matters that they should understand better, recalling again Plato's quote above. In general, the problems created by AI and AGI are translated into assumed future solutions designed by the process of an assumed ideal democracy. For the rest, it is unclear to me what the last sentence means with “regardless of the specific sources of my training data”. This considering what was reported by Reuters (Oct 17, 2023) regarding China’s Ernie 4.0 generative AI model (China’s ChatGPT): that the prior week Beijing had published proposed security requirements for firms offering services powered by the technology, including a blacklist of sources that cannot be used to train AI models, beyond copyrighted material. I have seen nothing said about that kind of thing regarding either ChatGPT or its contenders, Google’s chat-AIs Bard and Gemini. Symptomatically, Wikipedia’s article on Gemini quotes a couple of professors who, in the context of comparing these AI-products, “cautioned that it was difficult to interpret benchmark scores without insight into the training data used”. All this while extensive censorship was exerted on the news about e.g. the Russia-Ukraine conflict, as remarked in my paper on it. But Fei-Fei Li (author of The worlds I see – curiosity, exploration and discovery at the dawn of AI) at the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) is reported in a newspaper (Svenska Dagbladet, December 9, 2023) to “emphasize the importance of an ethical AI development”. I am sure that many perceive such entrepreneurial wording as well-meant uncommitted buzzwords, like the title of the newspaper article, “AI is controlled by humans - for better or worse”, identical to similar stuff found on the net. The dazzling successful personalities involved, however, as well as Jensen (Jen-Hsun) Huang, co-founder and president of Nvidia Corporation, certainly deserve to be listed among the computer geniuses I introduce in my above-mentioned paper on Computerization.

 

 

From: Computerization as Design of Logic Acrobatics

 

And today the ultimate consequence of this thinking is the recent and by now famous ChatGPT, which probably few computer scientists as yet understand in terms of what they are doing with logical fact nets or Leibnizian Inquiring Systems. These are basically logical operations performed on the contents of assumed “facts”, including facts about functions of devices, that are stored in and retrievable from the total Internet. 

 

I think that those who feel compelled by their conscience to unravel this tragic mental confusion can only do so by relating computer science to mathematical logic and empiricism, or logical empiricism, along the guidelines outlined by West Churchman in his The Design of Inquiring Systems, repeatedly mentioned here for good reasons. In particular it is a question of what follows from the conceptions of chap. 2 on Leibnizian Inquiring Systems: Fact nets (logical networks), and chap. 5 on Lockean Inquiring Systems: Consensus. That is, consensus within the community of Pre-Trainers (the PT in GPT, people selected by somebody), which establishes the sources and selection of facts to be or not to be networked and Generatively Transformed. The former fact nets are intertwined linguistic manipulations, and the latter consensus is the manipulated factual sentences: declarative knowledge that in turn will trigger physical devices whenever it happens to be politically trusted wherever found on the Internet by those who can afford the search for their own entertainment, profit or purposes. And the first purposes, unfortunately and ultimately, will be the purposes of automated warfare, and of “killer robots” or lethal autonomous weapons (see also here) that will certainly be called "defensive". That is the core of GPT & Co, which is supposed to revolutionize and save or at least win the world, all seasoned with more or less intellectually and politically naïve warnings about “Existential risk from artificial general intelligence”. Whatever existential means in its relation to religious.
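Since the mechanism at issue can be stated briefly, a minimal toy sketch in Python (my own illustration, not OpenAI's method, and radically simplified: real LLMs use neural networks over sub-word tokens, not word counts) may make concrete what "Generative Pre-training" amounts to. The "fact net" is here nothing but co-occurrence statistics gathered from whatever texts the trainers selected, and "generation" is sampling continuations from those statistics:

import random
from collections import defaultdict, Counter

# Toy corpus standing in for the "collected facts": in a real system these
# would be the pre-training texts selected by the (unknown) trainers.
corpus = ("the conflict has historical causes . "
          "the conflict has political causes . "
          "the causes are complex .").split()

# "Pre-training": count which word followed which in the past texts.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(word, steps=8):
    # "Generation": sample each next word from past co-occurrence counts.
    out = [word]
    for _ in range(steps):
        followers = model.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the conflict has political causes . the causes are complex"

The point the sketch is meant to carry is that nothing in the procedure understands or judges the "facts": it can only recombine, with probabilities, what the selected corpus already contained.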

 

Since it is not known or understood what intelligence is or should be, the door is open for limitless fantasies about the next if not ultimate hype of Artificial General Intelligence, including the construction of the above-mentioned Theory of everything (and its philosophical counterpart here). Therefore, many will start hoping not only to be able to ask any question and get the answer, but also to ask the computer to do anything and have it done, such as trusting self-driving vehicles without even having understood the problems of auto-pilots or the problems of scientific experimentation and theory-building, or technocratic planning being replaced by a sort of ex-post pre-training as in the elaborations of GP - Generative Pre-Training. Whose training, and whose responsibility? In terms of the conflict between Russia and Ukraine it is like asking ChatGPT or Bing what to do in order to solve and stop the conflict, applying a thinking that I have provocatively and rhetorically called Logic as Rape, in the form of a ChatGPT instead of an inquiring system. Cf. West Churchman’s “Logic: A Theory of Reality, or Kant for Planners” in his earlier mentioned book The Systems Approach and its Enemies (chap. IV). In other words, a revival of the fantasies about the HAL 9000 fictional AI “character” or of the older idea of Frankenstein's monster.

 

All this without understanding what it all is about, as illustrated in Jordan Peterson's short video on ChatGPT with the title The dark side of AI. If one has the time it is possible to extend the experience by comparing it with Chris Hrapsky's video on The limits of ChatGPT and discovering a dark side. It is not a question of increasingly faulty intelligence, as suggested in an ambitious essay, The age of stupidity (written in Portuguese, but with one main reference in English about The negative Flynn effect). It is more than that: it is a gradually increasing cultural crisis, affecting the population's intellect, becoming more visible in the affluent western world where it is not masked by material privations. Ultimately, we can consider interaction in terms of the sort of “archetype” of interaction mentioned in my essays on Reason and Gender and Computerization. A CNN news report on a related event, The man who married a hologram, related to the phenomenon of Nijikon, may give a hint of what is to come, stating that “Researchers say such events are indicative of broader technological trends and social phenomena”.

 

The late and perhaps ultimate consequence of short-circuiting the human and social element, together with the consequent debate, in the increasingly inclusive logical processes is the expanding phenomenon of polarization of parties, misunderstandings, conflict and violence. These could follow from ChatGPT's socio-psychological choking of debate by forcing the strong secularized Lockean consensus that was mentioned above. In its bland, realistic and immediate form it may be disinformation and ransomware and, in particular, Ryuk, under the general labels of Cyberattacks and their counterpart Computer security.

 

My comment: Still worse may it be if the information is perceived as overwhelming, the more so since it is much more difficult to question and demonstrate that a declarative sentence is wrong (cost of measurement and gathering of data, hidden definitions and environmental systemic conditions) than to just formulate a supposed factual reality upon the basis of available collected data. I illustrate something related to this in my essay on Information and Debate with the rhetorical questions of “Why Not?” (exemplified with divorce, homosexuality, polygamy, etc.).

 

 

From: Information and Debate

 

The “apocalypse” of debate is pre-announced in the development of artificially intelligent “chat-bots” that are envisaged to allow anyone, in a sick society, to converse with them, as announced by META-Facebook on August 5, 2022 regarding “BlenderBot 3: An AI Chatbot that Improves Through Conversation”.

 

What is most symptomatic is its failure to mention the possibility of religion being a necessary basis of meaningful consensus: charity and respect for the human being, where respect does not mean “tolerance” in the sense that it does not matter what somebody thinks so long as it does not bother me or "us". It means that we all want to help each other to reach a common truth and good. One opportunity for getting convinced of this is to study very carefully the most ingenious analysis I have seen of the breakdown of debate vs. political correctness, or whatever you want to call it, in Ian Buruma's bewildering article related to the #MeToo issue, "Editing in an age of outrage", in Financial Times, March 29th 2019 (Swedish translation "Det ska fan vara publicist när Twitterstormen viner" in Dagens Nyheter, April 14th 2019). Ultimately, in the future, in analogy to the problem of self-driving vehicles, people may have to discuss mindless texts that are generated by artificially intelligent agents such as an improved GPT-2:

 

"GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing..."

 

and will be drained of their intellectual energy, as human drivers will be drained of their skill and intelligence in order to avoid incidents with the self-driving vehicles. [Test a simplified GPT-2 here, linked from here.]
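For readers who want to repeat the experiment locally instead of through the linked demo, a few lines of Python using the publicly available Hugging Face transformers library (my own choice of tool, not necessarily the one behind the linked demo) reproduce the quoted behaviour of GPT-2:

# Requires: pip install transformers torch
from transformers import pipeline

# Load the publicly released (smallest) GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

# The model continues an arbitrary "conditioning text", as the quotation
# says, adapting to its style and topic without understanding either.
prompt = "In the future, people may have to discuss texts that"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])

Whatever continuation comes out will be fluent and stylistically apt, which is precisely what makes such mindless text worth discussing, and draining to discuss.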

 

In order to limit the volume of text in this article, this case study is presented in its original form, as a separate article on Information on Christianism and Atheism. In it I make the observation that discussions on religion vs. rationality can be seen as a poor reenactment and analogy of the misunderstood and endless theological debate on the Filioque, subject of great controversy between Eastern and Western Christianity, that serendipitously deals with the meaning of Spirit, where today the most analytically or rhetorically gifted and persevering party seems to win the game. Here I can specify that my choice of the adverb serendipitously, referring to the discussion of the meaning of Spirit, can be relevant for the ongoing debate on Artificial Intelligence and especially Artificial General Intelligence – AI/AGI, if intelligence is related, as it should be, to the discussion of the human intellect. It seems to me absurd to imagine that a deep-going “world-wide” discussion on the Filioque among educated intellectuals should have been going on for centuries without having a bearing on other intellectual questions and on how to think and behave.

 

My comment: What will happen with the lawful responsibilities in using self-driving vehicles or automated weapons, and with the labor market upheavals following the introduction of AI and AGI in industry, commerce and government, is making or will make its appearance in the lame warnings gathered under the Wikipedia title of Existential Risks from Artificial General Intelligence.

 

 

From: Computers as Embodied Mathematics and Logic

 

The reports in The Economist and Wikipedia (should) say it all, recalling also cases of analogous analytical logical-mathematical giftedness such as those of Adam Neumann and the related business WeWork, and of Sam Altman as related to general artificial intelligence (OpenAI, ChatGPT, etc. as per April 2023) and its financing, or the sheer analytical financial genius of a Bernard Madoff.

 

[plus the following that were included in the essay’s later versions]

 

My own annotations on my copy of Carl Jung's book Memories, Dreams, Reflections, recorded by his associate Aniela Jaffé, indicate that I first bought and read it in April 1975, but happened to definitely relate it to this essay of mine only in July 2023. The book is interesting especially because it tells about the author's psychological interpretation of his understanding and feelings in meeting mathematics. Such feelings, when experienced by common educated people, can easily be dismissed as caused by lack of giftedness or sheer stupidity. The complexity of Jung's account requires literal quotations (cf. pp. 43 ff., 340 ff.):

 

[…] I felt a downright fear of the mathematics class. The teacher pretended that algebra was a perfectly natural affair, to be taken for granted, whereas I didn't even know what numbers really were […] they were nothing that could be imagined, mere quantities that resulted from counting. To my confusion these quantities were now represented by letters, which signified sounds, so that it became possible to hear them, so to speak. […] No one could tell me what numbers were, and I was unable even to formulate the question. […] But the thing that exasperated me most of all was the proposition: If a=b and b=c, then a=c, even though by definition a meant something other than b, and, being different, could therefore not be equated with b, let alone with c. Whenever it was a question of equivalence, then it was said that a=a, b=b, and so on. This I could accept, whereas a=b seemed to me a downright lie or a fraud. I was equally outraged when the teacher stated in the teeth of his own definition of parallel lines that they met at infinity. […] My intellectual morality fought against these whimsical inconsistencies, which have forever debarred me from understanding mathematics. Right into old age I have had the incorrigible feeling that if, like my schoolmates, I could have accepted without struggle the proposition that a=b, or that sun=moon, dog=cat, then mathematics might have fooled me endlessly – just how much I only began to realize at the age of eighty-four. All my life it remained a puzzle to me why it was that I never managed to get my bearings in mathematics when there was no doubt that I could calculate properly. Least of all did I understand my own moral doubts concerning mathematics.

 

In a later section of the book, in the chapter "On life after death", Jung introduced some exceedingly complex ideas about the relation of human death to the realm of consciousness vs. the unconscious or "the intermediate stage between unconscious and conscious cognition". It would take us too far in our present context, but the following extracted comment (p. 341) may give a taste of the relation to mathematics:

 

Ideas of this sort are, of course, inaccurate, and give a strong picture, like a body projected on a plane or, conversely, like the construction of a four-dimensional model out of a three-dimensional body. They use the terms of a three-dimensional world in order to represent themselves to us. Mathematics goes to great pains to create expressions for relationships which pass empirical comprehension. In much the same way, it is all-important for a discipline to build up images of intangibles by logical principles and on the basis of empirical data, that is, on the evidence of dreams. The method employed is what I have called "the method of the necessary statement". It represents the principle of amplification in the interpretation of dreams, but can most easily be demonstrated by the statements implicit in simple whole numbers.

 

My point with these quotations is to suggest that mathematics, because of its very nature, introduces unconscious processes into the human mind while surreptitiously merging them with ordinary conscious ones. It all recalls what has been considered in this essay about both the aforementioned Brouwer and Zellini, as well as giving a hint of the ultimate mysteries of quantum physics (more on this above and below) in the relation between psyche and matter. An alternative equivalent conception presented in my Information and Theology is the balance between the inner knowledge of the human psyche vs. natural science and knowledge of the outer world. This role of the unconscious and of the inner knowledge of the human psyche related to (the aesthetics of) mathematics and logic also explains the problems of addiction (not only of children, but also of “childish” adults), including computer addiction, the overhype of virtual reality, and the success of mathematical-logical minds in all sorts of modern endeavors including mathematized technoscience or white collar crime. The latter is most brilliantly exemplified by the genius of Bernard Madoff, and by “Internet AI personalities” such as Tom Bilyeu at Impact Theory and Emad Mostaque (see here a logical conversation between them with the ambitiously alarming title How AI will disrupt the entire world in 3 years - Prepare now while others panic). Such personalities have strong opinions about AI where there is a penchant for bold entrepreneurship and/or an interest in “volatile” activities, financial and others, such as hedge funds. And much entrepreneurship with empty talk about AI may be a conscious or unconscious attempt to commit “white collar crime”. Who is capable of discussing the degree of consciousness vs. unconsciousness in order to determine culpability, and whether it will be considered as financial crime?

 

The technical mind has been colorfully exposed in studies of the philosophy of science in the tradition of Jacques Ellul. Some of the deep approaches in the latter’s “anarchist” tradition (examples here, here, and here), however, have sometimes missed the point when they lose the connection to broad Christian theology, philosophy and its derivative, (Carl Jung’s analytical) psychology. This appears in the following de-contextualized excerpt from one of the deep approaches to Cybergnosticism (a term further explained here). The text of the excerpt below is sometimes followed by my own notes in italics in square brackets, in order not to leave everything to the general “Comment” at the end:

 

[J]ust like alcoholics and drug addicts, game-addicted children are now regularly treated by psychologists and psychiatrists in order to get rid of their addictive behaviour. [Yes, but how effective is this treatment, on what grounds, and why, since it may then be used for treating all “addicts”; and how should these be defined and identified, since they may even be scientists?]

[M]any videogames take advantage of techniques similar to those used by the military to harden people emotionally. [Yes, but what kind of treatment of these hardened people would be allowed and recommended by the military leadership vs. ex-post by e.g. the “Veteran benefits for PTSD” or equivalent organizations?]

The driving force of la Technique is, according to Ellul, the craving for absolute efficiency in all human endeavours. [Ellul himself is quoted in Wikipedia, from his book “The Technological Society”, as writing: “Not even the moral conversion of the technicians could make a difference. At best, they would cease to be good technicians. In the end, technique has only one principle, efficient ordering.” What about the difficulty, to begin with, in defining “efficiency” vs. “effectiveness”, the more so when it is not even recognized that the difference lies in the conceptualization of the systemic boundaries of (whose?) cravings?]

It was Eric Voegelin’s intriguing and much-debated thesis that there is a deep-seated disorder in our civilisation rooted in a ‘gnostic’ sentiment of alienation and discontent with reality perceived as evil, in the consequential ‘gnostic’ turn away from this reality […] [Psychologically, or how else, is a “sentiment” to be defined, compared with, say, a thought, intuition or perception?]

Arguably, we cannot get at the real motives and ideas behind the computer phenomenon in general, and the cyberspace and virtual reality sub-phenomena in particular, nor arrive at a proper understanding of their roots and future direction of growth, unless we take into account these mighty metaphysical driving forces and motivations […] [How about metaphysical forces and motivations, if not within the frame of a religion’s theology?]

It is my thesis that the roots of cyberspace and cybergaming must be investigated in a much wider context than is done in these and other similar works, so as to clarify and make comprehensible the motive background and worldviews of the key personages of the field. [What about the difference between (here) “roots” vs. “forces and motivations” in the previous paragraphs?]

 

These approaches rest ultimately only upon the criticism of Gnosticism, with overworked erudition and rationalism (paradoxically, an apparent “gnosticism” of its own), by the one “political philosopher” Eric Voegelin. Wikipedia summarizes as follows some of his critics’ opinions, which match my own impression from my readings and comments on Voegelin in another context of Information and Theology:

 

[C]ritics have proposed that Voegelin's concept of Gnosis lacks theoretical precision. Therefore, Voegelin's gnosis can, according to the critics, hardly serve as a scientific basis for an analysis of political movements. Rather, the term "Gnosticism" as used by Voegelin is more of an invective just as "when on the lowest level of propaganda, those who do not conform with one's own opinion are smeared as communists.”

 

That is, “smeared as gnostics”, as I heard one of his followers smear Carl Jung, based on Voegelin’s readings of second-hand texts on him (as contrasted to e.g. Jordan Peterson’s practice as a first-hand reader of original texts). Until further notice I assume that Jung's attitude mentioned above is motivated by his psychic inclination to watch over the unique (psyche) instead of "playing God", or suffering an “Ego inflation” by imagining oneself as understanding mathematics seen as God’s language, and searching for general "laws" of nature where the human being himself tends to be “abstracted from” or is regarded as only an object of an undefined or pantheistic "Nature". A concretization of these thoughts is suggested by the problems and history of debates on the world’s first societal introduction of the personal identification number - PIN - in Sweden (as a number = “John Smith”), followed by national identification numbers all over the world, as I explain in my essay on The Meaning of Man-Machine Interaction. It is meaningful to remember the practice of tattooing a number on the skin of prisoners in Nazi concentration camps, followed today by the increasing use of PINs, not least for (whose?) control of the population by means of face recognition systems and biometric identifiers. All facilitated by alleging a struggle against “terrorism”. And all this while never reflecting upon the meaning of or relation between identification and individuation, and never reflecting upon the phenomenon, in certain corrupt political regimes, of governmental production of false “identification” documents, including vehicle registration plates. When faith and allegiance are rested on an assumed Democracy and technology instead of on God, then the unavoidable failures of democracy imply a dependence upon anonymous human power whose policing of citizens offsets the loss of judgment caused by the computerized technological logical bypass of the network of multiple human judgments.

 

The extension of these problems reaches the import of the whole issue of my text on the consequences of the computerization of society, which follows the conclusions of the present essay and which I rhetorically named Computerization as Design of Logic Acrobatics, but which somebody else could have called On the Digitalization of Society. A concrete manifestation of the latest problems of computerization, which at the same time illustrates the psychic derangements of mathematical thought as suggested in Jung's account, is the open letter Pause Giant AI Experiments: We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. By August 2nd, 2023, when I accessed it, it had gathered more than 33,000 signatures, based on what I consider a legitimate fear that is paradoxically founded on wrong premises. It recalls an analogous "moratorium" that I remember in my essay on Trends in the Philosophy of Technology. They are wrong "Leibnizian" premises, as I explain in my paper on Artificial General Intelligence and ChatGPT.

 

My comment: these name-related cases illustrate the destructive power of minds that are powerful in terms of (unbridled) rationalist analytical logical-mathematical capability, which reaches its psychic apex in the field of quantum physics and, most recently, in the field of computer science, which embodies and rewards mathematics and logic. Jung's difficulties illustrate a mind's struggle in accepting a way of mathematical thinking that opens the road for the abuse of human "Leibnizian" thought when applied to matters that require a whole human mind. It is an abuse that was not feared at the time of the polymath Leibniz himself (1646-1716), before the breakthrough of the misunderstood, all-encompassing and mathematized natural science. The mathematization of natural science was later extended to all science, forgetting the purpose and meaning of divorcing thought from associated feeling and intuition, and partly even from sensation, by means of mathematical notations as historically conceived in the work of Florian Cajori. In common life, as knowledge of “God’s language”, it can lie behind tendencies toward Narcissistic personality disorder, especially the empirically verified subtype High-Functioning/Exhibitionistic.

 

 

From: The Russia-NATO-Ukraine Information Crisis

 

“[G]eneral artificial intelligence” (GAI), in the form of the latest hype of ChatGPT, is seen as a super-human (again, “godly”) intelligence to which human democracy will politically have to submit itself, despite cheap warnings about the need for (paradoxically) “democratic control” in the face of “existential risks of artificial intelligence”.

 

Or the assumed brilliant future of ChatGPT (Chat Generative Pre-trained Transformer) that “was tricked to endorse the 2022 Russian invasion of Ukraine”. 

 

I myself wanted to complement such possible criticism by using the latest (as per June 18, 2023) hyped computerized tool ChatGPT – OpenAI (see also here, version at the same date).

 

On June 17, 2023 I put the following two questions to the available ChatGPT, and got the two subsequent answers (below) from the site https://chat.openai.com. They make evident the disregard of the core issues in my text and its references. Elsewhere I mention why ChatGPT itself could not have written my text. Its creators, unless they only wanted to market a profitable product, should understand that neither AI nor AGI could have written this text, since they should instead have been trying to apply the earlier mentioned Design of Inquiring Systems.

 

My two questions and respective answers were:

(1)  Which are the causes of the conflict between Russia, NATO and Ukraine?

(2)  How to solve the conflict and achieve peace between Russia, NATO and Ukraine?

 

(1)  Which are the causes of the conflict between Russia, NATO and Ukraine?

 

ChatGPT:


The conflict between Russia, NATO, and Ukraine has its roots in a combination of historical, political, and security factors. While it is challenging to summarize all the causes comprehensively, here are some key factors that have contributed to the conflict:

Historical context: The historical relationship between Russia and Ukraine has been complex and intertwined. Ukraine was part of the Soviet Union until its dissolution in 1991, and there are historical and cultural ties between the two countries. The legacy of this shared history has shaped their interactions and perceptions.

Ukrainian independence and alignment: After gaining independence, Ukraine pursued closer ties with the West, including aspirations for European integration and possible NATO membership. This shift in alignment has been viewed by Russia as a threat to its sphere of influence, particularly in the context of its historical ties to Ukraine.

Crimea annexation: In 2014, Russia annexed Crimea, a region that was previously part of Ukraine. The move was widely condemned by the international community, as it violated Ukraine's sovereignty and territorial integrity. This event significantly escalated tensions between Russia and Ukraine, as well as between Russia and NATO.

Eastern Ukraine conflict: Following the annexation of Crimea, pro-Russian separatist movements emerged in eastern Ukraine, leading to an armed conflict between Ukrainian government forces and separatist groups. The conflict has resulted in thousands of deaths and a humanitarian crisis in the affected regions.

Geopolitical competition: The conflict can also be seen within the broader context of geopolitical competition between Russia and NATO. NATO's expansion eastward, closer to Russia's borders, has raised concerns in Moscow about encroachment and a perceived threat to its security. Russia has expressed opposition to NATO's presence in countries it considers part of its sphere of influence.

Ethnic and linguistic divisions: Ukraine is a diverse country with different ethnic and linguistic groups. The divisions between ethnic Ukrainians and ethnic Russians, particularly in eastern Ukraine, have played a role in exacerbating tensions and fueling the conflict.

Energy dependence: The issue of energy dependence has added another layer of complexity to the conflict. Ukraine serves as a transit route for Russian gas supplies to Europe, and disputes over energy pricing and supply have at times strained relations between the two countries.

It's important to note that the causes of the conflict are multifaceted and intertwined, and different perspectives exist on each side. The complexity of the conflict underscores the need for a comprehensive and nuanced approach to finding a peaceful resolution.

 

(2)  How to solve the conflict and achieve peace between Russia, NATO and Ukraine?

 

ChatGPT:


The conflict between Russia, NATO, and Ukraine is a complex geopolitical issue with no simple solution. However, I can provide you with some general principles and approaches that can contribute to the resolution of conflicts and the achievement of peace. Please note that these suggestions are not specific to this particular situation and may not address all the intricacies of the conflict at hand. Here are some steps that could be taken:

Diplomacy and dialogue: Encouraging diplomatic efforts and open dialogue between all parties involved is crucial. Diplomatic negotiations can help foster understanding, address concerns, and find common ground.

Mediation and international involvement: Engaging neutral mediators or international organizations, such as the United Nations or the Organization for Security and Cooperation in Europe (OSCE), can facilitate negotiations and provide a platform for discussions.

Confidence-building measures: Implementing confidence-building measures can help create an atmosphere of trust and reduce tensions. These can include the withdrawal of troops from certain areas, establishing demilitarized zones, and implementing arms control agreements.

Respect for international law and agreements: Upholding international law and respecting existing agreements, such as the Minsk Protocol and the Budapest Memorandum, is essential. All parties should be committed to honoring their obligations and finding peaceful solutions within the framework of international law.

Addressing underlying grievances: Understanding and addressing the underlying grievances of all parties involved is important for long-term peace. This may involve addressing historical, cultural, economic, and security concerns through negotiation and compromise.

Economic and humanitarian assistance: Providing economic and humanitarian aid to the affected regions can help alleviate the suffering of the local population and create conditions for stability. This can include support for reconstruction, development projects, and assistance in meeting basic needs.

Public diplomacy and communication: Engaging in public diplomacy efforts can help shape public opinion and build support for peaceful resolutions. Promoting accurate information, fostering understanding, and countering misinformation are essential components of this approach.

Long-term conflict resolution: Developing a comprehensive and sustainable plan for long-term conflict resolution is crucial. This may involve addressing broader regional security concerns, promoting economic integration, and fostering people-to-people exchanges.

It is important to recognize that the conflict is complex and deeply rooted, and the path to peace will require sustained efforts from all parties involved. The suggestions provided here serve as general guidelines and should be adapted to the specific dynamics and challenges of the Russia-NATO-Ukraine conflict.

My comment: this use and response of ChatGPT illustrates some of the reasons why ChatGPT could not have written the kind of text in my essay on the Russia-NATO-Ukraine conflict. Even less so when it is not designed with the intention of relating to the "Hegelian Inquiring Systems" mentioned above. That is: it is not intended to use as much as possible the same data or "facts" in order to also support a counter-story with different if not opposite conclusions to those of the initial main story.

 

 

From a first specific question put to ChatGPT

 

My question:

Please explain which are the basic databanks, whether including the whole Internet, which is used in order to find and elaborate the answers in GPT.

 

ChatGPT


As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I don't have direct access to databases or retrieve information from specific sources in real-time. I have been trained on a diverse range of data, including books, websites, and other texts, to develop a broad understanding of human language. My purpose is to assist and provide information to the best of my abilities, regardless of the specific sources of my training data.

It's important to note that while I strive to provide accurate and up-to-date information, my responses may not always reflect the most current news events or developments. News is constantly evolving, and new information can emerge that may change the context or understanding of a given topic.

 

 

My comment:

This says it all. The “human trainers” are particular, unknown people with their own (or the producing company’s) unknown ethical and political convictions based on unknown facts. What they are implementing is a sort of inductive process as described in Churchman’s Theory of Experimental Inference, but renamed “Machine learning”. Its basic assumption, which underlies my observation of “cementation”, is “that whatever worked in the past (i.e. strategies, algorithms, and inferences) will most likely continue to work in the future.” Unsaid: working for whom. See also Wikipedia’s list of “See also” with reference to, among others, “Automated machine learning” and “Big Data”. See also “Large Language models” and “Artificial Neural Network” including its extensive “See also”.
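The inductive assumption quoted above can be made concrete with a deliberately naive Python sketch (my own illustration, not a component of any actual system): a model fitted on past data can only project the past forward, which is what I mean by "cementation":

# Machine learning as induction: whatever worked in the past
# is assumed to keep working in the future.
past_years = [2015, 2016, 2017, 2018, 2019]
past_values = [10.0, 12.0, 14.0, 16.0, 18.0]  # a steady past trend

# "Learning": fit the straight line that worked on the past data.
n = len(past_years)
mean_x = sum(past_years) / n
mean_y = sum(past_values) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(past_years, past_values))
         / sum((x - mean_x) ** 2 for x in past_years))
intercept = mean_y - slope * mean_x

# "Prediction": project the fitted past into the future.
for year in (2020, 2021, 2022):
    print(year, intercept + slope * year)  # prints 20.0, 22.0, 24.0

The model cements the old trend and is blind to any rupture (a war, a crisis, a change of mind) that the past data did not contain; and which past data were collected, by whom and for whom, remains the unstated question.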

 

 

From a second specific question put to ChatGPT

 

My question:

What can the NATO do for achieving peace in the Russia-Ukraine conflict?

 

ChatGPT


NATO (North Atlantic Treaty Organization) plays a significant role in addressing conflicts and promoting peace and stability among its member states. While I can provide some general suggestions, please note that the situation in the Russia-Ukraine conflict is constantly evolving, and specific actions may depend on the current circumstances. Here are some measures NATO can take to contribute to achieving peace in the Russia-Ukraine conflict:

1. Collective Defense: NATO's core principle is collective defense, stating that an armed attack against one member is considered an attack against all. NATO can continue to reaffirm its commitment to the security of its member states, including Ukraine, and maintain a strong deterrent posture to discourage any further aggression.

2. Political Diplomacy: NATO can engage in diplomatic efforts to encourage dialogue and negotiation between Russia and Ukraine. This could involve providing a platform for discussions, offering mediation services, or supporting international organizations and diplomatic initiatives focused on finding a peaceful resolution.

3. Support for Ukraine: NATO can provide political, economic, and humanitarian support to Ukraine. This assistance can include military aid, capacity-building programs, training and advising Ukrainian armed forces, enhancing cybersecurity capabilities, and facilitating defense reforms to strengthen Ukraine's ability to defend itself.

4. Monitoring and Reporting: NATO can continue monitoring the situation on the ground through intelligence gathering, satellite imagery, and other means. By providing accurate and timely information about the conflict, NATO can contribute to international understanding and help counter disinformation campaigns.

5. Economic Sanctions: NATO member countries, in coordination with other international partners, can impose economic sanctions on Russia to exert pressure and encourage a peaceful resolution. These sanctions can target specific sectors of the Russian economy or individuals involved in destabilizing actions.

6. Confidence-Building Measures: NATO can promote confidence-building measures between Russia and Ukraine, such as establishing channels of communication, implementing arms control agreements, and facilitating exchanges and cooperation in non-military areas like culture, education, and business.

7. Cooperative Security Initiatives: NATO can explore opportunities for cooperative security initiatives with Russia, aimed at enhancing trust and reducing tensions. These initiatives can include joint military exercises, information sharing on military activities, and dialogue on shared security concerns.

It's important to note that NATO's actions are often carried out in coordination with other international organizations, such as the United Nations and the Organization for Security and Co-operation in Europe (OSCE). Resolving the Russia-Ukraine conflict requires a comprehensive and multifaceted approach involving diplomatic efforts, political will, and engagement from all relevant parties.

 

My comment: This example, compared to the ChatGPT answer given to the earlier question (above), "How to solve the conflict and achieve peace between Russia, NATO and Ukraine?", shows how the ChatGPT machinery is biased in the sense that items 3, 4 and 5 imply direct support of Ukraine. It disregards the main source of the conflict: that NATO, being a creation under the political and military influence of the USA, would not in its charter guarantee the exclusion or non-membership of countries neighboring Russia or its “sphere of influence”. All this under the assumption of national sovereignty and other considerations in my essay on The Russia-NATO-Ukraine Information Crisis, which for these same reasons could never have been produced by ChatGPT. This is related to neglecting, to begin with, the philosophy of the earlier mentioned Hegelian inquiring system (IS) and abusing the philosophy of the Leibnizian and naïve Lockean IS.

 

This illustrates what was mentioned above: that the possible if not probable result of the use of AGI is that it will cement the past and suffocate future questioning and debate, also because of the waves of unemployment, combined with automation, that restrict the range of opinions. This happens already today in that terminological neologisms and prestigious, “mind-blowing pseudo-philosophical” words and acronyms of all sorts are being used and abused in reference to AI and AGI in the press and in social media. See Swedish TV 18 Oct and 1 Nov. 2022, The Conference 2023: “Förstärkt Mänsklighet” [Enhanced Humanity], covering e.g. “artistic practices involving emerging technologies” and “organize the world’s information to make it universally accessible and useful”. The idea is illustrated in The Economist’s article (Nov 9th 2023) “Now AI can write, sing and act, is it still possible to be a star?”, with the subtitle “The deal that ended the Hollywood actors’ strike reflects a fear of the technology”, which indeed is fear of infringement of artistic copyright (cf. the Swedish event Tosbot, commented on in Svenska Dagbladet 1-2 December 2023; on Tosbot in English here).

 

Not only politically correct journalists, who do not even need to be educated in political science, foreign relations and diplomacy, but even school children will be able, without understanding it, to use ChatGPT in order to publish a logically structured and credible essay with facts about the salvific power of AI, e.g. on "How to solve the conflict and achieve peace between Russia, NATO and Ukraine?". But it will be much more difficult, if not impossible, to search for or finance the research of dissenters who on their own would be able to afford and formulate a counter-story like the above-mentioned The Russia-NATO-Ukraine Information Crisis or a text like the present one.

 

Furthermore: there are many of those problems surveyed in the above-mentioned outline on Existential risks from artificial general intelligence, starting with the upheavals of the labor market. A few of them had already been noticed in research during and prior to earlier "waves of optimism", exacerbating the social games of reduction of ethical, intellectual, and theological dimensions to the politics of liberalism vs. Marxism. They were outlined in the leftist approach of a timely doctoral dissertation at my university on Work-Oriented Design of Computer Artifacts (1988), as I comment in my essay on Information and Theology, where it can be seen as an example of the reduction of religion and theology to science and politics, or directly from theology to politics.

 

 

7. The open letter: "Pause Giant AI Experiments"

 

This document (cf. “See also” in Wikipedia), published on March 22, 2023 by a community related to the above-mentioned "Existential risks from artificial general intelligence", was accessed on August 3, 2023 on the net with the subtitle "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". My earlier references above were to the versions available to the general public: the free ChatGPT and the simplified GPT-2 demo linked earlier. The text of the “Open Letter”, in my adapted layout and in a smaller font size, is the following, followed at the end by my own comments:

 

-----

 

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

 

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models”. We agree: that some point is NOW.

 

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

 

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

 

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

 

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

 

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

 

----

 

Notes and references

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.

Christian, B. (2020). The Alignment Problem: Machine Learning and human values. Norton & Company.

Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).

Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.

Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weidinger, L. et al (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

 

[2] Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.

Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

 

[3] Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.

OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

 

[4] Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".

 

[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

 

----

 

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.

In addition to this open letter, we have published a set of policy recommendations ("Policymaking in the Pause"), which can be found here and which has the following title and contents:

 

 

Policymaking in the Pause:
What can policymakers do now to combat risks from advanced AI systems?

Contents:

Introduction
Policy recommendations:
            Mandate robust third-party auditing and certification for specific AI systems
            Regulate organizations’ access to computational power
            Establish capable AI agencies at national level
            Establish liability for AI-caused harm
            Introduce measures to prevent and track AI model leaks
            Expand technical AI safety research funding
            Develop standards for identifying and managing AI-generated content and recommendations
Conclusion

 

 

8. Comment to the Open Letter (Proposal for moratorium)

 

If it is not disrespectful, I would like to start by asking the idiomatic "What are we to make out of this?" My spontaneous reaction, if not response, is to think of a book that left a deep impression on me when I read it more than forty years ago: William E. Akin's Technocracy and the American Dream: The Technocrat Movement, 1900-1941 (1977). I perceive it as the painstakingly researched, passionate life-work of an apparently forgotten "Associate Professor of History in the Loyola Campus of Concordia University, Montreal".

 

Reflecting upon my impression, I drew the conclusion that in my present situation I cannot afford to do what I should: write an analog of that book focused on the ongoing AI and AGI hype. In doing so I would be writing more than a doctoral dissertation on these matters, including a connection to the history of bureaucracy seen, like AI, as a system of logical rules working on collected data, and related to technocracy as indicated in Wikipedia’s “See also” section.

 

On the other hand, as I write in my general disclaimer in the link that initiates the list of “Contents” at the beginning of the present essay, such an endeavor should not be necessary, because "it all" is already written in Akin’s above-mentioned book, the problem being what Plato writes in my quotation above, and what is repeated on later occasions in such books as, say, those by Richard Stivers. Let us see some excerpts from Akin regarding "technological unemployment", which a doctoral dissertation at my university department translated into a Marxist analysis of the role of labor unions under the title Work-Oriented Design of Computer Artifacts (published as a book here and commented upon by me in the context of Information and Theology). The political reason for the rise and establishment of labor unions is today forgotten in the facile wishful thinking that technological unemployment caused by AI-AGI will be avoided by adherence to “ethical standards” or offset by a “universal basic income”.

 

The key issue is that the Open Letter requesting a moratorium surreptitiously subsumes a political democratic process, while assuming that its implicit technocratic view, expressed and supported by computer scientists, can be reconciled with a misunderstood or ignored mythological democracy. It is a democracy that, according to the technocratic "policy recommendations", mandates auditing and certifications, regulates access to computational power, establishes agencies and liabilities, introduces measures to prevent AI misuse, and expands funding for AI safety research.

 

Related to mythological democracy’s lame “tautological voluntary and non-binding reliance on experts”, it is interesting to note what The Economist (Nov 23rd, 2023) writes with reference to the above-mentioned “boomers”, in the context of Sam Altman’s return marking a new phase for OpenAI:

[Boomers] will worry politicians, who are scrambling to show that they take the risks seriously. In July President Joe Biden’s administration nudged seven leading model-makers, including Google, Meta, Microsoft and OpenAI, to make “voluntary commitments” to have their AI products inspected by experts before releasing them to the public. On November 1st the British government got a similar group to sign another non-binding agreement that allowed regulators to test their AIs for trustworthiness and harmful capabilities, such as endangering national security.

And now over to Akin on technological unemployment (pp. 156-164):

 

The charge of technological unemployment was the most relevant economic issue. Far more serious, lengthy, and passionate discussion took place over this question than any other. It was the most difficult to resolve and perhaps of most lasting significance. […]

 

Somewhat surprisingly, the spokesmen for business were among those unable to present a solid front when the issue first arose. Businessmen have never wished to admit the existence of technological unemployment. Aside from the serious economic problems involved, to do so raised equally important questions regarding the social and moral values of capitalism, which both entrepreneurs and corporations preferred to leave unmasked. It cut through, and could potentially destroy, their most cherished notions: their easy identification of technological change and progress, their assumption that change was compatible with stability, their belief in the social and moral value of work, the idea of the self-made man and the theme of individualism, and the necessity of laissez-faire. […]

 

The vehemence with which leading scientists and engineers repudiated the concept of technological unemployment reflected their fear that it constituted a frontal attack on their professional and social roles. Their disquiet was even greater than that of businessmen. To them the benevolence of science and technology was indisputable. […]

 

The assurances of scientists and engineers, that technology would continue to create new products and industries, "each demanding an army of workers," also lent support to the optimistic common-sense view despite the fact that it required the same kind of blind faith. […]

 

Most businessmen […] followed a mechanistic theory which held that technology reduced costs, thereby creating greater product demand, which increased production, which in turn necessitated higher employment. […]

 

To avoid serious repercussions, the economy's well-being ultimately depended on constantly increasing the level of consumption. […] The logic of those who argued the case dictated that over the long run increasing purchasing power was linked to economic growth. To insure minimum dislocations, economic growth must match technological improvement. […]

 

As important as the question of technological unemployment was, the broad implications of technology for society was of even greater significance for some. One of the most pressing issues raised was the effect of technology on social organization. Was there a cultural lag that had to be lessened or bridged? […] Did it require a technical elite to engineer society as well as the machine? […]

 

One of the shortcomings of the proponents of technocracy was their failure to reconcile the technocratic view with democracy.

 

 

In other words: the Open Letter is a replay or reenactment of the "philosophy" of the technocratic movement with its problematically related social responsibility (cf. also here and, on Walter Rautenstrauch, here), the more so when computerization in general and AI in particular can seldom if ever be analyzed for profitability in terms of cost-benefit analysis, but are valued in terms of saving the costs of human labor. And it is more than social responsibility; it is a matter of human love in the sense of Thou shalt love thy neighbour as thyself (Matt. 22:39). It is not only a question of "unemployment". A modern technocratic movement in its computer-artificial-intelligence dress needs neither the opinion, nor a dialogue, and even less an "impossible debate" with an untrained, not technically gifted and supposedly unintelligent workforce that meets the impossibilities of a "human-computer interaction", that is, a substitute for human language. It needs only the ultimate political legitimation of a more or less mythological Democracy. It is a Democracy that today does not display references to biblical quotations when expecting that new computer technology will create enough new influential jobs, and that will need neither the Bible nor labor unions in order to allow a universal basic income to an increasing world population. And it is a Democracy that is used as a dumping place for personal responsibility, as I write in another theological context, and as is done even by children in the context of global climate warming, where the “researcher” is the scientist and the “manager” is the political body of democracy:

 

Politics, for instance, is also often explained away by scientists and engineers who assume that all would be alright if only politicians followed the scientists' recommendations, as denounced in the famous paper by Churchman & Schainblatt, The researcher and the manager: A dialectic of implementation, and its Commentaries.

 

More than that, the Open Letter ignores not only the paradox of computerization not saving manpower and time, and not allowing more time for culture and relaxation, as surveyed in Staffan Burenstam Linder’s The Harried Leisure Class and in my text on the Meaning of Human-Computer interaction. It also ignores a main motivating force, the commitments of the military-industrial complex, as in the latest conflict between Russia and Ukraine, and the sociology as well as the political science lying behind the clash between duly secularized socialism and liberal capitalism: a socialism with roots in the absence of labor unions for countering the advent of atheist industrialization. Today it is like being able to ask ChatGPT-2 (or the later, paid GPT-4) the questions quoted above in this essay about the causes of and the solution to the Russia-Ukraine conflict, while one main force of the ongoing USA research may be to complement the logical-mathematical rape of mind and nature by a quantum physics for nuclear weapons with the human-computer interacting AI rape of the human mind for the design and implementation of self-driving unmanned weapons as analogs to drone warfare (as here, in Ukraine).

 

This Open Letter or proposal for a moratorium will obviously not stand alone. It was, for instance, followed by a related AI Safety Summit at Bletchley Park on 1-2 November 2023, leading to the so-called Bletchley Declaration (published by the government of the UK, officially represented by the Department for Science, Innovation & Technology, the Foreign, Commonwealth & Development Office, and The Prime Minister’s Office, 10 Downing Street). It contained, among other things, the following typical policy thoughts, which can be expected to recur in a plethora of coming AI conferences around the world (my italics):

 

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.

 

Yes, indeed: “These issues are in part because those capabilities are not fully understood and are therefore hard to predict”. “Capabilities”, or especially risks, not fully understood, or not understood at all? This was the reason for the upheaval in the episode of the worldwide diffused news in November 2023 of the sudden sacking and reinstatement of genius-wizard Sam Altman from the position of CEO of OpenAI, the creator of the (excessively?) successful ChatGPT. The news agencies repeated that the explicit reason given for his sacking had been “unclear-vague” shortcomings in his communications with the board. They did not dare to advance the hypothesis that the OpenAI investors, employees and prospective clients were welcoming Altman’s optimism in downplaying the “existential risks” of AI (see above), in contrast to the OpenAI board’s eventual mounting political-ethical concerns. This may be an illustration of the coming ethical and political struggles in the development and applications of AGI, which may be hopeless if one considers the risk that today political Democracy is a myth. It is a myth hidden behind non-binding assurances that a vague, undefined and problematic “productivity-efficiency-effectiveness” (see the “theoretical-conceptual framework” in the hodgepodge here) will be “responsibly” managed with expertise, following democratically decided governmental rules after a rich dialogue with all affected parties, avoiding the production of fake information or information implying “existential risks”, controlling for protection from access by extraneous irresponsible influence from non-authorized personnel, all of it guaranteed by security-safety measures, etc., etc.

 

I repeat now from the beginning of the previous paragraph: “These issues are in part because those capabilities are not fully understood and are therefore hard to predict”. “Capabilities”, or especially risks, not fully understood, or not understood at all:

 

For instance, the technocratically avoided issue of the unemployment of people who are simply sacked and replaced by AI-AGI. Even in so-called intellectual work, mediocre professionals may be replaced by AGI/ChatGPT, but they may also improve their productivity with its help, while the best top professionals may not improve their performance because they, to begin with, were not reasoning only “logically-empirically”, like slow computers, without even taking a stand in the debates about logical empiricism; the consequence is that they will not continue to be rated as “best”.

 

Paradoxically, this defective understanding is hoped to be remedied by a long future series of conferences. Cf. the above-mentioned AI Safety Summit (never mind what the concept of safety or security is or should be, as suggested above), which is only one conference in the first waves of the “conference industry” of the increased AI hype in the years 2022-2023, immediately followed by, e.g., the London conferences Generative AI Summit on November 9, 2023, AI World Congress 2023 on November 27-28, and Delivering AI and Big Data for a Smarter Future (also AI & Big Data Expo) on November 30-December 1, 2023. In general, risks are not understood, and AI/AGI is the latest reminder of that ongoing societal computerization, which may soon be followed by news on applications of mind-blowing computational neuroscience, brain-computer interfaces and the like, employing, among others, neurophysiologists with skills exemplified by the president of the Allen Institute for Brain Science, and others who are introduced as being, e.g., a “neurophysiologist and computational neuroscientist” working on the “neural basis of consciousness”. That is: whatever that means for even a highly educated, democratically minded citizen who will exercise his democratic duties in future elections that are supposed to direct the desirable future of national scientific efforts. It can all be seen as a gigantic irreversible global experiment on humanity, as illustrated also in my already mentioned paper on the meaning of human-computer interaction. Let me complete this section by replicating a quotation from the epilogue of my paper on computerization and logic, which I hope may convey an image of the increasing scope of techno-science and of computer-logical dreaminess, from the atomic bombs on Hiroshima and Nagasaki up to today’s super-human, divine intelligence. It can be seen either as a hymn to a fantastic, promising, limitless technological progress (towards…) or as a document of recurring naïve technocratic hype.

 

As an illustrative tribute to the genius and naivety of the celebrated great engineer Vannevar Bush, I will close with a quotation from his most famous, prophetic article “As we may think”, in The Atlantic Monthly issue of July 1945, which I recommend to readers for a time perspective on the drive toward computerization:

 

It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.
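
 

To make concrete what Bush describes, here is a minimal sketch in Python (my own illustration, not Bush’s design; the premises and rules are hypothetical): repeated application of modus ponens to a set of premises, passing out “conclusion after conclusion” until nothing new follows.

# A sketch of Bush's logic machine: turn the crank until no new
# conclusions appear. Facts and rules are purely illustrative.
def turn_the_crank(facts, rules):
    """facts: set of accepted propositions.
    rules: list of (premises, conclusion) pairs, read as
    'if all premises hold, the conclusion holds'."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # one more turn of the crank
                changed = True
    return derived

facts = {"it_rains"}
rules = [({"it_rains"}, "street_is_wet"),
         ({"street_is_wet"}, "shoes_get_wet")]
print(turn_the_crank(facts, rules))  # all three propositions are derived

The point, and the limit, is that such a machine manipulates only the form of the premises; everything that matters lies in who chooses the premises and the rules.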

 

 

9. Case study: “Creating safe AGI that benefits all of humanity”

 

On December 24, 2023 I accessed OpenAI’s policies and terms of use (effective December 23, 2023; previous versions here) for the European Economic Area (EEA), Switzerland, or the UK (for those living outside the EEA, Switzerland or the UK, see the other terms of use). From the contents I made the following core selection as material for the case study, in the form of subsequent comments along the above text in this essay, keeping most of the original layout except for the font type:

 

Who We Are

OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity. For more information about OpenAI, please visit https://openai.com/about. Our Services are provided to you by:

.        OpenAI Ireland Ltd, a company incorporated in the Republic of Ireland with its registered office at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland and company number 737350, if you are resident in the EEA or Switzerland. 

.        OpenAI, L.L.C., a Delaware company with its registered office at 3180 18th Street, San Francisco, California 94110, United States and company number 7063675, if you are resident in the UK.

Additional Service-Specific Terms

Depending on the specific Service or features you use, additional Service-specific terms and policies may apply to your use of our Services. The key ones to be aware of, and which form part of these Terms, are described below: 

.        Usage Policies: these policies explain how you may use our Services and Content.

.        Service Terms: these terms apply when you use certain Services or features;

.        Sharing & Publication Policy: this policy sets out rules for when you share Content;

Using Our Services

What You Can Do. Subject to your compliance with these Terms, you may access and use our Services. In using our Services, you must comply with all applicable laws as well as the Service-specific terms and policies listed above.

What You Cannot Do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you are prohibited from:

.        Using our Services in a way that infringes, misappropriates or violates anyone’s rights.

.        Modifying, copying, leasing, selling or distributing any of our Services.

.        Attempting to or assisting anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law).

.        Automatically or programmatically extracting data or Output (defined below).

.        Representing that Output was human-generated when it was not.

.        Interfering with or disrupting our Services, including circumventing any rate limits or restrictions or bypassing any protective measures or safety mitigations we put on our Services.

.        Using Output to develop models that compete with OpenAI.

Content

Your Content. You may provide input to the Services (“Input”), and receive output from the Services based on the Input (“Output”). Input and Output are collectively “Content”. You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licences, and permissions needed to provide Input to our Services.

Ownership of Content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output. 

Similarity of Content. Due to the nature of our Services and artificial intelligence generally, Output may not be unique and other users may receive similar output from our Services. Our assignment above does not extend to other users’ output or any Third Party Output. 

Our Use of Content. We can use your Content worldwide to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies and keep our Services safe. 

Opt Out. If you do not want us to use your Content to train our models, you have the option to opt out by updating your account settings. Further information can be found in this Help Center article. Please note that in some cases this may limit the ability of our Services to better address your specific use case.

Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe, and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in Output that does not accurately reflect real people, places, or facts. 

When you use our Services you understand and agree:

.        Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. 

.        You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.

.        You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. 

.        Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

 

 

My comments:

 

I see the policies and terms of use partly as an expression of goodwill, but mainly as a complex juridical disclaimer built on the two main concepts of service and user. I myself wonder whether OpenAI, without any notice, has deleted my account (as of January 2024) and started refusing my IP address (I can no longer log in, nor sign up, from my regular IP address) because I have used its services for comments criticizing AI, as shown above in the present text, comments that were perhaps judged to infringe the last paragraph above on “When you use our Services you understand and agree”.

 

These policies and terms of use can and must be seen in relation to the pairs production-consumption, means-goal, and producer-consuming client, or scientist as producer of a tool and politician as user of the tool. I have already dwelt on the problems of this conception focused on the “tool” in my paper on The meaning of human-computer interaction, and for reasons of space and time I must refer the reader to that source, in its section dealing with the tool.

 

The core of the problem, however, can be seen in the definition of producer, product and user in terms of the explanation of the meaning of the so-called morphological or structural, functional, and teleological classes, as introduced in the earlier-mentioned DIS. It includes an overview of the complications: for instance, that one cause produces one effect, but that another cause, or several other causes, can also produce the very same effect; and that some of these productions, it is later realized, can happen only under previously unknown external conditions, conditions under the control of others who happen to be interested in and motivated by the desire of seeing or having the result, or who are sometimes motivated to prevent it, to counter the happening of such a result. And so on. In the middle of all this, mathematics and its abuse have a special simplifying function, as I consider in the section of my essay on the famous rejected parts of Jan Brouwer’s dissertation about the foundations of mathematics, where I write the following:

 

In presenting The rejected parts of Brouwer's dissertation, [Walter Peter] Van Stigt refers to Brouwer's interpretation of causality as essentially mathematical: the ability to link events in the mind, to see sequences and repetition of sequences in time, to link sensations as the immediate source of awareness of time and discreteness. It is the source of man's power to predict the future and interfere in the course of events. This "intellectual or mathematical" way of looking at the world is not only a one-sided concentration and interpretation of reality: by ignoring and willfully removing aspects which deviate from the expected course of events, man supplements and creates more regularity than exists in nature; he makes the world linear or "one-sided". The regularity observed in nature is due to the nature of the measuring instruments, and physical science has value only as a weapon, not concerning life. It is clearly inferior and has nothing to do with religion or wisdom. More in detail, in Brouwer's own words:

 

"Man has the faculty, accompanying all his interactions with nature, of objectifying the world, of seeing in the world causal systems in time. The primordial phenomenon is simply the intuition of time in which repetition of "thing in time and again thing" is possible, but in which (and this is a phenomenon outside mathematics) a sensation can fall apart in component qualities, so that a single moment can be lived through a sequence of qualitatively different things. One can, however, restrict oneself to the mere sensation of theses sequences as such, independent of the various degrees to which objects are perceived in the world outside are to be feared or desired. (The attention is reduced to an intellectual observation.) The human tactics of "acting purposively" then consists in replacing the end by the means (a later occurrence in the intellectually observed sequence by an earlier occurrence) when the human instinct feels that chance favours the means."

 

What happens, then, is that the reader may find the explanation tedious and will ask, as I have many times asked, for the whole thing to be summarized in a few words. So what happens is what I have already tried to explain in the context of Information and Debate: people feel that they have no time, no motivation, or no capability to read such texts. If they hear that the explanation is, in summary, partly also found in Russell Ackoff’s and Fred Emery’s chapter on Structure, Function and Purpose in their book On Purposeful Systems (pp. 13-32), they may wish to get a summary of the summary, in words that allow them to grasp the whole in a few hours, or minutes.

 

But the summary, which appears in DIS (p. 59), is not easily understood and accepted when it states:

 

Thus in the broader viewpoint one cannot distinguish between science and its politics; it makes no sense to the designer to say that science is a body of knowledge and politics is people, and therefore the two must be different. For the designer it is impossible to optimize the system of acquiring basic knowledge without considering the political problems that such a system generates. The boundaries of “basic research” expand into the area of national policy making, and the client becomes larger than the scientific community.

 

And, in fact, returning to the text starting some lines above the last paragraph of the selection out of the “terms of service”:

 

.       1. Given the probabilistic nature of machine learning, use of our Services may in some situations result in Output that does not accurately reflect real people, places, or facts.

.       2. Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. 

.       3. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.

.       4. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. 

.       5. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

That is an attempt to express a juridical disclaimer: smart lawyers editing a text that relieves the producer from risks, if sued. For me it is analogous to a manufacturer of weapons who advises the buyer/client about the use of the weapons. There are national laws that regulate such industry and commerce, but there is also a world of arms trafficking and criminal possession of weapons related to international politics and conflicts. In our case it is a disclaimer that seeks an assurance that weapons will not be given to, and used by, children, but will be used only by “us”, or by democratic and friendly people in friendly countries, and only for self-defense, as only by Ukraine in the conflict with Russia, disregarding all the complications that are suggested in my essay on the conflict. And now, regarding the numbered “terms of service” repeated above:

 

1. The output may not accurately reflect real people, places, or facts. [Who should and can do what about its possibly not being truth? On the “probabilistic nature” behind this, see the sketch after this list.]

2. They say that you should [or must?] not rely on output from our services as a sole source of truth [but does it claim that it is truth?].

3. You must evaluate.  [But how? – this may mean understanding and applying the ignored DIS].

4. You must not use any output that could have a legal or material impact. [Who are the “we” who “must not use”, and how are we to know whether it “could” have an impact?].

5. Our services may be offensive. [But it probably intends to claim that “we” are not offensive].
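
 

The “probabilistic nature of machine learning” invoked in point 1 can be made concrete. Here is a minimal sketch in Python, with hypothetical numbers and no claim whatsoever about OpenAI’s actual implementation: a generative model scores candidate next words, and one word is drawn at random in proportion to those scores, so the very same prompt can yield different, and possibly false, continuations.

# Why generative output varies: random sampling from scored candidates.
# All scores below are hypothetical; this is an illustration only.
import math, random

def sample_next(scores, temperature=1.0):
    """scores: dict mapping a candidate word to a model score (logit)."""
    # Softmax-style weights; higher temperature flattens the distribution.
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # guard against floating-point rounding

# Hypothetical scores for the word after "The capital of Australia is".
scores = {"Canberra": 2.0, "Sydney": 1.5, "Melbourne": 0.5}
print([sample_next(scores) for _ in range(5)])  # varies from run to run

No appeal to “accuracy” changes the mechanism: the plausible wrong answer (“Sydney”) remains a live option every time the crank is turned, which is why the disclaimer shifts the whole burden of evaluation onto the user.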

 

And who are “we” and who are “you”? Among the “we”, I can claim that I myself understand and follow the “musts”, but who is responsible for the whole of “us” and of “you”, and what does it imply to be “responsible”, for what, and to whom? To the state, in state individualism, which is often equated with “society”? All guaranteed by the “myth of democracy”?

 

 

10. Conclusion: The meaning of hype

 

All this raises the question of what “hype” means in our context. Today it is not general popular knowledge to know about and relate to the historical issue of Gustave Le Bon’s The Crowd: A Study of the Popular Mind, the beginning of crowd psychology, at the edge of Groupthink, with its relation to the wiki-concept and to my own doctoral thesis on Quality-control of Information.

 

Regarding the criticism and influence of the book The Crowd, Wikipedia mentions that feelings of power and security allow the individual not only to act as part of the mass, but also to feel safety in numbers. Today the safety in numbers may lie in the number of fellows on the net who share the misunderstanding or belief, the number of nodes and hits on the Internet, or the volume of big data.

 

For all its relations to Freudian psychology, I think that the message is better understood in the light of analytical psychology and Carl Jung’s reflections in his Civilization in Transition (Collected Works, vol. 10). I dare to mention my own association to the particular chapter of the book (§§ 589-824) where Jung (mentioned by addicts of the New Age movement) analyzes “Flying Saucers: A Modern Myth of Things Seen in the Skies”. In terms of philosophy of science, this, as well as the case of the technical hubris of the Titanic, may very well be a fruitful, if apparently farfetched, analogy, recalling George Lakoff and Mark Johnson’s Metaphors We Live By. A thoughtful book by the professor of sociology Richard Stivers says it all: Technology as Magic: The Triumph of the Irrational, including the article on “The Case of the University”. It is a serious alternative to Arthur C. Clarke’s perhaps better-known phrase “Any sufficiently advanced technology is indistinguishable from magic” (see also here). Never mind that the complexities of the philosophy of technology hide the basic role of the psychic abuse of formal thinking. A reminder of the tragedy behind it all is the historic analogy of the Holocaust, based on a whole culturally advanced nation’s deep popular, historic, and scientific “knowledge” of the minds, behavior, and history of mainly the Jews as an ethnic group. So many references and so much, in part misleading, reading would not be necessary if, as Blaise Pascal remarks in his Pensées (1955, §251), humans would “fall in love with”, or accept and follow, Christianity, which unites psychological exteriority with interiority, or follow at least some of the few great world religions.

 

I think that the situation is very serious, the more so because the consequences, grave as they can be, will not be perceived except in the long run, in the same way as cultures may degenerate and conflictual relations between people and nations develop until they explode in murders and wars, historically even two world wars. More recently we have the examples of the Russia-Ukraine conflict and the Israel-Hamas war. We also have cultural “civil wars” like feminism (cf. our boomers vs. doomers mentioned earlier), where the gregariousness of a nationalist, sex-gender, or analytical leaning attracts individuals who do not have a fully developed psychic identity (individuation) and therefore easily achieve “meaning and identity” by melting into a collective. Even with no visible conflict, the consequence can be an oppressive passivation of especially old, weak, ill or handicapped people, who will not be able to further their own opinions and interests, as in my illustration of the difficulties and limitations of the already mentioned human-computer interaction.

 

In the meantime, the hype is invading our daily environment. I picked up a concrete, simple example in Sweden, though I would have preferred to wait for another text that I saw much later in the Swedish design magazine Form, No. 2, 2024, p. 71, written by Björn Nordin, with the title “Människan och AI” [Humans and AI]. I propose that Swedish readers also consult it after reading my analysis below, because it illustrates what is coming to drain or drown us intellectually. By then I had already, painfully, worked out the comments that follow, on the first-mentioned text: an advertising brochure enclosed in and distributed together with the newspaper Svenska Dagbladet in mid-December 2023, with the title (my trans.) AI or HI. I did not see an explanation of what HI is; probably human intelligence. It was produced by the Swedish branch of the Universum Employer Branding Agency. In two pages (pp. 4-5) it offers a series of affirmations about AI. I will begin by displaying a short summary of the text (my trans.), followed by the same text interspersed with my own commentaries [with my emphasis in italics and in square brackets], based on arguments presented earlier in this essay. The brochure begins by telling that:

 

Already in the year 1589 Queen Elizabeth I rejected a patent application for a knitting machine for fear that it would put the population out of work, so concerns about the impact of new technology on jobs are neither new nor particularly surprising. Already today many types of basic counselling and customer service are carried out without human contact. The CEO of a firm, Futurion, calls AI “the super colleague who never sleeps”, who knows most languages and has a number of different degrees, and at the same time notes that many of the skills we were able to study in the past, AI is now able to handle perfectly well on its own: only when it comes to managing human creativity and critical thinking, however, is it not doing so well. The CEO also finds that the demanded skills of the future will be very much about social and creative skills such as dealing with people, the ability to put together broad solutions and to come up with original and unusual solutions. New roles and responsibilities will emerge. We will therefore have to learn and re-learn, probably several times over. It is too early to say what these new roles will look like, but there is already a growing demand for technical specialists in machine learning, prompt designers and AI developers.

 

Generative AI such as ChatGPT cannot replace the creativity and imagination of writers, designers and other creative professionals. It requires a level of originality, emotion and expression that machines cannot replicate, at least not yet. What do we do with the time left over? We don’t know yet. But if we can avoid spending evenings reading meeting documents, get help to tidy up the statistics for the presentation, and get an editable first draft of a document in a couple of minutes, we should reasonably have a lot of time to spare. The transition from an eight- to a six-hour working day is possible with increased use of AI; however, it is important to note that it is not only a technological issue but also a socio-economic and cultural one. Future changes in working habits are likely to be influenced by a combination of technological advances, policies, and changes in societal attitudes towards work and leisure. Whatever the outcome, we can say that AI is here to stay, and if you don’t want to be left behind, you might as well jump on the bandwagon.

 

 

WITH MY COMMENTS:

Already in the year 1589 Queen Elizabeth I [but, please, mind the difference between the worlds of 1589 and 2023, and why not mention also the better studied Luddites, among whom I myself may be classified] rejected a patent application for a knitting machine for fear that it would put the population out of work, so concerns about the impact of new technology on jobs are neither new nor particularly surprising. Already today many types of basic counselling and customer service are carried out without human contact. [Already commented upon earlier in this text]. The CEO of a firm, Futurion, calls AI “the super colleague who never sleeps”, who knows most languages and has a number of different degrees, and at the same time notes that many of the skills we were able to study in the past, AI is now able to handle perfectly well on its own: only when it comes to managing human creativity and critical thinking [how to recognize that?], however, is it not doing so well. The CEO also finds that the demanded skills of the future will be very much about social and creative skills such as dealing with people [not in chat bots!], the ability to put together broad solutions [a synonym for the earlier-mentioned DIS “systems thinking”?] and to come up with original and unusual solutions [how to recognize and classify them?]. New roles and responsibilities will emerge. We will therefore have to learn and re-learn, probably several times over [equivalent to outweighing and repairing short-term truths and solutions?]. It is too early to say what these new roles will look like, but there is already a growing demand for technical specialists in machine learning, prompt designers and AI developers [guess: what else?].

 

Generative AI such as ChatGPT cannot replace the creativity and imagination [what are they and their relation?] of writers, designers and other creative professionals [but cf. the earlier mentioned infringement of artistic copyright]. It requires a level of originality, emotion and expression [what are they and their relation?] that machines cannot replicate, at least not yet [remark this not yet]. What do we do with the time left over? [Left over by whom to whom? More automation in supermarkets leads to fewer employees, transfer of tasks to customers, and the same length of queues]. We don’t know yet. But if we can avoid spending evenings reading meeting documents [why?], get help to tidy up the statistics for the presentation, and get an editable first draft of a document in a couple of minutes, we should reasonably have a lot of time to spare [again, who are “we”: employers or fewer employees?]. The transition from an eight- to a six-hour working day is possible [with the same amount of manpower?] with increased use of AI; however, it is important to note that it is not only a technological issue but also a socio-economic and cultural one [and a religious one, about greed, and a political one, cf. labor unions?]. Future changes in working habits are likely to be influenced by a combination of technological advances, policies, and changes in societal attitudes towards work and leisure [typical ChatGPT-text, and what attitudes?]. Whatever the outcome, we can say that AI is here to stay, and if you [cf. “we” vs. “you”] don’t want to be left behind, you might as well jump on the [whose?] bandwagon.

 

 

11. Conclusion: Beyond the hype

 

ENOUGH WITH (TOO MANY) COMMENTS. What are we to make out of this nice, streamlined text, streamlined except for my own emphasis and commentaries? It must have been partly and cheaply edited automatically with ChatGPT, like many other coming published texts that can be easily read and accepted but would require attentive reading and critical examination. It is like reading the ChatGPT answers to my questions about the causes of the conflict between Russia and Ukraine, and comparing them with what I write about in my paper on the issue. In contrast, we can expect the whole of society to be drowned in paying attention to an immense amount of such ChatGPT/AGI advertising. The advertising will include apparently more ambitious texts, such as the above-mentioned artificially intelligent artistic production. The stuff will be produced and broadcast worldwide almost daily, also in documentaries on AI, in the coming explosive future of the AI industry.

 

In this perspective, the obvious real productive capabilities of AI/AGI may cause a collective schizophrenia, or a cultural crisis or lobotomy of the “crowd”. It will keep a few in the crowd busy trying to respond to “automatic” ChatGPT-produced texts, including continuous AI advertising, produced by lots of people, businesses and employees who have vested interests in continuing to foster the AI/AGI hype that guarantees their own employment. Problems that would have required a broad social-systemic understanding of their causes will instead lead to discussions of whether and which particular, continuously arising AI/AGI “tools”, developed by a few businesses, may help to solve the problems. A high rate of unemployment caused by AI/AGI will at the same time disclose the high fraction of present job opportunities that do not require humanity for their production, but mainly or only routine computer-like calculation, mechanical movements, and the consumption of demanded but unnecessary products (including future AI/AGI) that keep the economy going. Promises of the above-mentioned “universal basic income”, and the financing of wars irrespective of global warming, will ignore economy, social psychology and politics, the problem of human greed, and the difficulty of practicing the goodness of Christianity and of the major religions, as evidenced by wars and by the rise of labor unions related to communism and socialism.

 

The Faustian bargain reminds us that a good portion of science and technology could have been seen as legitimate in facilitating everyday life at unknown costs, but for whom? Credible long-term cost-benefit analysis will not be possible, as little as it will be possible to check the accuracy of statistical predictions of the future effects of new AI applications. When facilitation and increased productivity lead to more unemployment, to the production of nuclear weapons and the increased production of more lethal weapons for ongoing wars, and cause continuous worldwide pollution, this will raise claims of dangers and of climate warming, leading desperate souls to feel and write analogies to Industrial Society and Its Future (see below). The counterparts will claim to strive for and rely on future solutions by means of “more of the same”: godly super-human intelligence that makes many humans obsolete, confirming that it was Faustian hubris. It recalls the title of Oswald Spengler’s famous book The Decline of the West, which may also be seen as a secular interpretation of, or one more step in the understanding of, the Book of Revelation.

 

Atheists and those who cannot surmount the difficulties of reading and interpreting the Book of Revelation can instead consider the possibility of having in their minds an Icarus complex, despite vague doubts that have been expressed as to the therapeutic value of such a diagnosis. It displays, however, similarities with other, better-known alternative concepts, including the already mentioned autism or other diagnoses such as narcissistic personality disorder. The description of the Icarus complex does refer to the prominent American psychologist Henry Murray, whose life story has in turn been related to the famous case of the American mathematician Ted Kaczynski, who is further described as

 

“a mathematician who went on to be known as the 'Unabomber'. Between 1978 and 1995, Kaczynski murdered three individuals and injured 23 others in a nationwide mail bombing campaign against people he believed to be advancing modern technology and the destruction of the natural environment”.

 

More sophisticated than simple “luddites” and even than activists in the crusade against global warming, he must have felt an overwhelming, crushing discomfort in contact with the technical-industrial society of the time, a feeling similar to the one felt by many contemporary citizens who today live in complex, febrile megacity milieus but long for a simple life in contact with nature. He authored his own description and interpretation of the Apocalypse in the thoughtful book Industrial Society and Its Future (Wikipedia report here, printed book here). It is a manifesto and social critique opposing industrialization, rejecting leftism, and advocating a nature-centered form of anarchism that today would have a good deal in common with the climate and environmental movements. I treat those commonalities, with emphasis on climate, in another essay on Climate and apocalyptic global warming.

 

My point here, however, is directed to those who have enough of a mathematical-logical leaning to have been lured into an Icarus complex, pathological narcissism, or denial of the possibilities of avoiding the Apocalypse. Even if they do not have the reported high mathematical genius and intelligence of Ted Kaczynski, with an IQ of 167, they had better choose a career in the AI industry, like the above-mentioned boomers who are opposed to doomers. In doing so, becoming boomers, they may avoid Kaczynski’s tragedy, which fits doomers, his being a good example of what even a pure mathematical-logical genius can lead to. It can lead to an additional “logical” tragedy of crime and war, such as the one I recently described for Ukraine, which can be seen as a logically motivated war between boomers and doomers. But one may guess that the best solution is to upgrade the desperation of doomers and the exhilaration of boomers to a better understanding of how to avoid computerization as an abuse of formal science, allied in the driving West to an active Christian attitude. It would prevent the vain hope that “democracy” will allow us the “freedom” of setting up more laws, more police and an extended judicial system for the forced control of the future use and abuse of AI/AGI. It would also prevent us from seeing a coming AI/AGI as the saving Messiah that leads us towards paradise, or, alternatively, from expecting the Apocalypse as an unavoidable tragedy.

 

For the rest, it is symptomatic that I never found in the Swedish (and barely in the international) media an in-depth interview or discussion of the matters related to this text of mine. That is: until February 2024, when on the Al Jazeera English (Europe) news channel, in “The AI Series”, Nobel Peace Prize laureate Maria Ressa interviewed and discussed with Urvashi Aneja (also here) in Goa, technology policy researcher and founding director of the Digital Futures Lab, the subject How current AI developments impact the Global South (also here, plus additional text here), introduced with the following text, where “the Global South” refers to the view from Goa, in India:

 

While many of today’s headline-grabbing artificial intelligence (AI) tools are designed in Silicon Valley, much of the labour that fuels the boom is based in the Global South, raising questions over who stands to gain from the technology and at what cost.

Nobel Peace Prize laureate Maria Ressa and the director of the Digital Futures Lab, Urvashi Aneja, explore the impact AI is already having on communities in the Global South – from labour conditions to democracy and the environment – and why countries need to move beyond “catching up with the North” when deciding what role AI can and should play in their societies.

 

In another text, the very same Urvashi Aneja relates to the same thought I had expressed above about the “cementing” of human thinking by people in industry, business, finance and academia, living in sub-cultures or societies in cultural crisis: ”Artificial intelligence is a status quo technology, as it reproduces the future based on the past. […] The current model of AI development concentrates power in the hands of a few. What we need is a new model that puts public interest at the centre.”

 

I mean, it is dramatically symptomatic that such a finally welcomed serious discussion of AI/AGI should come from the “Global South”. It can also be seen as a part of the “conclusion”, except that a new model that puts public interest at the center must be seen primarily as the “old”, forgotten model, an ethical and religious rather than political question, as I suggest in my essay on Information and Theology.

 

 

12. Concluding humor

 

To Fellow Engineers

 

Over lunch at a Swedish embassy, the hostess asks an old engineer if he knows the difference between a lady, a diplomat and an engineer:

 

- Do you know the difference between a lady, a diplomat and an engineer?

 

The old man calmly says yes and explains:

 

- The lady, when she says NO, means MAYBE; when she says MAYBE, she means YES; and when she says YES she is not a lady.

 

- The diplomat, when he says YES, means MAYBE; when he says MAYBE, he means NO; when he says NO, he is not a diplomat.

 

- The engineer, when he says YES, means YES; when he says NO, he means NO; and when he says MAYBE... he's not an engineer.

 

That's it!

 

-------------

 

[Forwarded by a Brazilian colleague, from unknown source, translated from the Portuguese]

 

Spoiler suggestion: The engineer is the only one who is supposed to deal mainly with inanimate nature, and he is supposed here to ignore statistical probability and measurement theory, which introduce the MAYBE. The only obligatory party in the engineer’s dialogue is the one who, YES, pays the monthly salary. And it has already been recalled above that the German philosopher and theologian Ernst Troeltsch observed that humor may have a similar influence on atheists as religion does, fostering humility by downplaying the importance of the big Ego that is inflated by the logic of YES and NO.