ChatGPT Approach to Artificial General Intelligence (AGI)
By Kristo Ivanov, prof. em., Umeå University
July 2023 (rev. 250604-1005)
<https://ia802703.us.archive.org/13/items/chat-gpt-agi/ChatGPT-AGI.html>
<https://www8.informatik.umu.se/~kivanov/ChatGPT-AGI.html>
CONTENTS
1. Link to a general disclaimer
2. Historical background
3. Initial overall considerations
AI in The Design of Inquiring Systems
On Statistics and Experimental Inference
Apparent Obsolescence and Desperate Logic
Driving Forces behind Technology
Ego Inflation or Philosophy behind “Powerful Experiences”
David Noble and “The religion of technology”
Autism, or “Mathematical/computer-oriented Minds”?
Antinomy? – Non-mathematical/computer-oriented minds
The meeting between the two different minds
Artificially intelligent artistic production?
Return to The Design of Inquiring Systems
6. References to ChatGPT in other own essays
From: Information and Theology
From: Computerization as Design of Logic Acrobatics
From: Computers as Embodied Mathematics and Logic
From: The Russia-NATO-Ukraine Information Crisis
7. The open letter: "Pause Giant AI Experiments"
8. Comment to the open letter (proposal for moratorium)
9. Case study: “Creating safe AGI that benefits all of humanity”
10. Conclusion: the meaning of hype
11. Conclusion: beyond the hype
12. Conclusion in one sentence
13. Concluding humor
2. Historical background
The Wikipedia article on artificial intelligence (AI) in July 2023, seen as an
introduction to artificial general intelligence (AGI), observes in the last
paragraph of its introductory section that the term artificial intelligence has
been criticized for overhyping AI’s true technological capabilities. The
present text is intended to specify a few details of my criticism. The second
paragraph of the same introductory section says the following about AI’s
“several waves of optimism”:
Artificial intelligence
was founded as an academic discipline in 1956, and in the years since it has
experienced several waves of optimism, followed
by disappointment and the loss of funding (known as an "AI winter"), followed
by new approaches, success, and renewed funding. AI research has tried and discarded
many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In
the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this
technique has proved highly successful, helping to solve many challenging
problems throughout industry and academia.
I understand that we are experiencing its last, or
unfortunately only its latest, wave of optimism. I suggest that the cause of
these waves of optimism is a basic misunderstanding of what intelligence and its consequent artificiality are or should
be, allied to the forecast marketing power and hoped-for economic profitability
(and for whom?) of the hype of AI and now AGI. This is related to a
misunderstanding of the essence of logic and mathematics, and consequently a
misunderstanding of statistics, which is reduced
to big
data and to a controversially and problematically described mathematical
statistics, as I tried to show when discussing how they are embodied
in computers. The result suggests an ongoing cultural crisis as a
societal analogy to the scientific scandal of lobotomy: a cutting off of
the emotional, valuational and intuitional dimensions of thought and
psyche of a great deal of the population, and therefore also a cutting off of a
misunderstood democracy. It portrays difficulties that in part were discreetly
addressed long ago in the articles The Artificiality of Science and The Metaphysics of Design: A Simon-Churchman "Debate" (in Interfaces, Vol.
10, No. 2, April 1980).
The first-mentioned article, on the artificiality of
science, is Churchman's review of Herbert Simon's book The Sciences of the Artificial (1969), which academically gave the latter a prestige
in the field of artificial intelligence, impressing the academic community with
his being a "genius" in the most disparate fields of knowledge, and
culminating in his being awarded the Nobel prize in Economics. All based on a
misunderstanding of the essence and function of logic and mathematics, as
mentioned above.
The latter article refers to the work of a pioneer of
AI and winner of the Nobel Memorial Prize in Economic Sciences, Herbert
Simon, and of prof. West Churchman, philosopher
and systems scientist. It also raises the relevance of Churchman’s timely
article “An analysis of the concept of simulation” (in Symposium on Simulation Models by A.C.
Hoggatt & F.R. Balderston, eds., Cincinnati: South-Western, 1963, pp. 1-13)
as well as its application by Ian I. Mitroff, “Fundamental issues
in the simulation of human behavior” (Management
Science, Vol. 15, No. 12, August 1969). The main issue is also covered in
Churchman’s “Real
time systems and public information” (in
Proceedings of the AFIPS/ACM 1968 Fall Joint Computer Conference, pp.
1467-1468, among others in pp.1467-1498). I am sure that very few people will
recognize that the same issue is partly covered in “The researcher and the
manager: A dialectic of implementation” in Management
Science, Vol. 11, No. 4, Feb. 1965 (more on this below). This is relevant
if we conceive the manager as analogous to the questioner and user of the answer,
and the researcher as analogous to the producer of the answer, based on his database
and elaboration of the data.
It is finally symptomatic
that the whole ongoing discussion about AI and AGI apparently completely
ignores the violent hype that started in the (now “obsolete”?) seventies
around Hubert Dreyfus’s views on artificial intelligence. There is no interest in What Computers Can’t Do. This will not be treated here because Dreyfus's views were based on “modern”
continental philosophers such as Merleau-Ponty and Martin Heidegger and on
phenomenology, which I perceive as arising from a misunderstanding or abuse of
the philosophy of Immanuel Kant, as discussed in my
essay on the ongoing computerization of society. The reason that the earlier hype around Dreyfus is ignored in the present renewed
hype of AI-AGI is related to why humanity continues with divorces and wars, and
why ChatGPT has not been adduced to help in the solution of the Russia-Ukraine
conflict or the 2023 Israel-Hamas war. More on this below; but before that, it
is necessary to orient readers who do not yet know what the acronym ChatGPT
stands for to Wikipedia’s
explanation of the term. Common educated
people’s difficulty in even understanding that explanation is part of the “mystery”
that surrounds the hype of artificial intelligence, as of most modern
mathematized science and technology, isolating them from democratic control.
3. Initial overall considerations
In the above section it can be seen that the
referenced texts are dated up to the year 1980. I
am well aware that some may consider them outdated because of
the great and rapid technological development in these more than 40 years.
Nevertheless, as I wrote in the latest revision of my general
disclaimer: On
occasion of my retirement I intended to carry out in full scale my research
that had been neglected because of managerial duties at my university, and to
summarize my experiences and reflections in one or several publications. The
more I studied and reflected upon my experiences, the more I got convinced that
the main problem was not the lack of debates, books, publications, experiences,
reflections, but rather the lack of wish, will, courage, that is "lack
of time" to select readings, to read, to think, or to
understand in depth and to act upon knowledge that is still valid and already
available.
This means that I perceive the present attitude to AGI
as revealing a deep cultural crisis, first of all in the western world, with
certain affinities with the message of Oswald Spengler’s The Decline of the West, precisely because the western world is or was
the most “advanced” (towards nowhere?). The decline is properly illustrated by
the breakdown of social communication, which in science I exemplified by the
reciprocal ignoring of scientific insights within increasingly narrow fields
of science, where the concept of “field” itself is being obliterated, which
promotes both the narrowing and the consequent speed of perceived advancement. The
process of narrowing and of reciprocally ignoring the wholeness of reality also
implies a loss of democratic social
control of scientific activities that often are driven by narrow and often
secretive disciplinary, economic and military interests. Scientists themselves
who are active, e.g. as physicists, in fields that earlier were related under the
general denomination “physics”, not to mention the general educated
public, do not understand and cannot evaluate their earlier colleagues’ work in
many highly specialized “sub-fields”, despite the assumed existence of one
“scientific method”, which bestows on all the prestigious name of science. All
this while, paradoxically, Democracy with a capital D as the final “guarantor”
has come to be a substitute for the earlier perception of God, as I explain in my
essay on the conflict between Russia and Ukraine.
The loss of
democratic control, which in its essence is seen as "general" for
implying the whole society, includes a loss also in the community of scientists
dealing with science in general, as well as within each field and subfield
that results from a progressive specialization that is misunderstood as a sign of
progress. In his book review mentioned above, Churchman observes that "the
debate about artificial intelligence has been raging ever since the
introduction of that rather unfortunate terminology". He sees Simon's
introduction of the term sciences of the
artificial as one more "benign" sign of his "unremitting
enthusiasm", which is matched by the community of scientists sharing his
attitude of being "strongly positivistic in his thinking". I myself
have met scientists who do not even know that positivism in this context means logical positivism, and who justify themselves by boldly acknowledging that,
yes, they are proud of being positive in their attitude to their work (without
even differentiating between positivistic and positive), and logical in their
thinking. I have heard similarly improbable accounts of AI enthusiasts who
respond to the criticism that the system does not account for induction with the question “what is it?”
Churchman goes
further in explaining why databases with statements in the indicative mood do
not take into account the goals implied in their application to particular uses
related to unforeseen (whose?) goals. And he observes that it is not perceived
that natural science itself is artificial. This artificiality of science in its
gradual mathematization and logification leading to "Newton's
syndrome" (belief in the possibility of going from the simple to the
complex, or "more of the same") is something that was already
affirmed in the far-reaching insights of Jan Brouwer about the
foundations of logic and mathematics, as outlined in my essay on Computers as Embodied Mathematics and Logic. It should be seen as prerequisite knowledge for understanding
the whole text that follows. I have come to the conclusion, supported by the
text below, that the basic problem of the use (not the design) of AI and AGI is
the lack of understanding that its essence is formal science as represented by
mathematics and logic. The latter’s function and limitations are ignored when
they are further equated to intelligence, and further to
a misunderstood human
intelligence, which plunges us into a hodgepodge of philosophical
and psychological controversies that are also ignored and soon forgotten in
face of the possibility of making money and obtaining power over nature and other
humans.
Quite often this
is done under the mantle of an appeal to “pragmatism”, but then there is no
reference to what pragmatism is or should be, since that would lead to a
discussion of the philosophy of
pragmatism, its history and its variants. I may be repeating
myself if I claim that one problem, if not the basic
problem, in understanding AI, and the more so in understanding AGI, is that most
common, even educated, citizens, including scientists, do not need to understand
the foundations of logic as related to mathematics, or what a theory is as related to the abused word model and the even more abused term conceptual framework. For instance, in
mathematized science, best exemplified by a physics that is mathematical
physics, mathematics and logic would be meaningless without a supporting theory
and its concepts with closely
associated sensory presence (e.g. force, mass,
speed, acceleration), which have been
painfully developed over hundreds of years. Today in quantum physics (cf. my paper on some of its
problematizations) new concepts have
been derived mathematically in order
to support empirical observations in complex experiments that have no direct
bodily sensory presence, e.g. “particles” that are not particles but rather
mathematical constructs.
The consequences
are that quantum physics “works” in a sense that eschews the basics of what
“works” means in the deep sense of the philosophical pragmatism developed up to
Churchman’s DIS. It works for certain
applications, not least nuclear weapons, and with total long-run
consequences that are unknown. A
weird, serious symptom is that as late as July 2024 the mathematician Inge
S. Helland had to publish “A new approach
toward the quantum foundation” (cf. an earlier
book on the foundation), which recalls several chapters of DIS regarding groups of observers,
representation and the foundations of statistics (more on this below), as in the
following excerpt from the abstract:
A general theory based on six postulates is
introduced. The basic notions are theoretical variables that are associated
with an observer or with a group of communicating observers. These variables
may be accessible or inaccessible. From these postulates, the ordinary
formalism of quantum theory is derived. The mathematical derivations are not
given in this article, but I refer to the recent articles. Three possible
applications of the general theory can be given as follows: (1) the variables
may be decision variables connected to the decisions of a person or a group of
persons, (2) the variables may be statistical parameters or future data, and
(3) most importantly, the variables are physical variables in some context. The
last application gives a completely new foundation of quantum mechanics, a
foundation which in my opinion is much easier to understand than ordinary
formalism. […]
On the other
hand:
Most trendy AI and AGI are logical and mathematical
manipulations that are theoretically supported only for application in
well-defined and limited fields that have a theory, but now are mixed with
manipulation of common language and unperceived use or abuse of “tools” of
mathematical statistics divorced from controversies about foundations of probability
and statistics.
More on this
below. Science is being reduced to such “statistics”, under the hiding labels
of e.g. artificial “neural networks”, “machine learning” or, extremely, “multilayered
convolutional neural networks”, by means of anonymous “training” and such,
which may give speedy short-term profitable (for some) and working applications
of uncertain long-term quality. This includes the medical field where theory is
gradually reduced to statistics, as indicated by a statistician friend of mine
who (at least was a trained statistician and) some 50 years ago could support
his whole family by statistically elaborating into exam-dissertations the data
furnished by medical students becoming physicians. Nothing of this kind of
knowledge is available to the common educated citizen who is now going to be
drowned in the hype of advertising on AI (including “advertising” through
mention in the work of Nobel laureates). It is not even available to many scientists
who work in well-defined and theoretically supported fields, sometimes unaware
that the concept of “field” itself is put in question. It happens when almost everything
is reduced to logic, mathematics and computerized statistical computations that
can never be submitted to some kind of democratic control except by “cronyism” as surveyed in
my paper on information and
debate, or by general universal testimony that “it works” (for some), as by the
“proof” of quantum physics in nuclear explosions in Hiroshima and Nagasaki.
All this is
ignored or forgotten as source of debates about AI, and even less about AGI in
their "waves of optimism" culminating with the ongoing latest hypes.
But it does not detract from the possibility of legitimate profitable
applications, legitimate only when (if) one (who?) knows the consequences of
what one is doing (to oneself and others) in the short and long run, and
whether it is ethical (and what that means with and without religion) besides
being profitable (in the short and long run, and for whom?).
AI in The Design of Inquiring Systems
I do not know a better illustration of the meaning of AI and AGI than in
the seminal work of the earlier mentioned West Churchman with a background in
pragmatism in his book The
Design of Inquiring Systems (1971), to which I will further refer by its initials DIS,
and which I extended with a general DIS Word and Subject Index. I “discovered”
the extremely challenging, demanding book at about the time I was completing my
PhD
dissertation. I used it in my later teaching with, for some
important reason, mainly startling, unclassifiable and cloudy results, as
expressed decades later by one of my PhD students in the context of Interaction Design, HCI, Philosophy
of Design, Technology and Society.
I remember seeing, some years ago, in the “Talk”
section of the Wikipedia article on artificial intelligence,
somebody referring to this book. Older Wikipedia “talks” are not archived in
Wikipedia, but I remember that the reference was swiftly disposed of because the author
was seen as an “outsider”, without the article's primary Wikipedia editors
displaying an understanding of what it all was about. This is a less-known
attitude of censorship by primary groups of editors with unconscious “vested
interests” in a particular subject of Wikipedia, as I illustrate in an article on
Wikipedia-democracy. I summarize the reasons for the impossibility of
“talks” or debates about such issues at the end of the conclusions of my
special essay about Information and Debate.
In chapter 4 (p. 79) of the above mentioned DIS book
the author presents an elaborated version of an earlier paper On the Design of Inductive Systems (1969) written in co-authorship with Bruce
Buchanan with material based on a research project conducted
by E.A. Feigenbaum, Joshua
Lederberg, Bruce Buchanan and others, together with the author,
at Stanford University. Bruce Buchanan, however, does not seem to have taken
his co-authorship with Churchman very seriously, since in his later works
mentioned below he clearly joins the
logical-empiricist track of Herbert Simon. The latter, in philosophy and
methodology of science, stands on the lucrative opposite side of West
Churchman, as suggested in the only text I know that indirectly and in a
rhetorically elegant way spells out this opposition, the earlier-mentioned paper by
W. Ulrich, The Metaphysics of Design: A
Simon-Churchman "Debate".
What is happening today is that AGI, or AI in the
spirit of ChatGPT, seems to consist of smart elaborations of the chapter 4
mentioned above, which in turn is a modernization of what Churchman in a book
published so early as in 1948 already had called Theory of Experimental Inference, and Buchanan, without daring to enter into the
problems of philosophy of science, tries to extend from MYCIN (and cf. DENDRAL. They are considered by Churchman in DIS pp. 89, 93, 98) further in "Artificial
intelligence as an experimental science" in James H. Fetzer (ed.) Aspects of Artificial Intelligence (1988, pp. 195, 207, 209-250,). In fact, much of the
AI hype today is based on a simplification, if not outright exploitation of
misunderstood induction and experimental
inference, thereby ignoring the reason and motivation for Churchman's
progress since 1948 up to 1971 and 1979, from the above-mentioned Theory of Experimental Inference up to DIS and The Systems Approach and its Enemies.
After having written these lines, on October 9, 2024 I
felt obliged to complete all this with quotations from the announcements of the
Nobel Prizes in Physics (see details here and here) and Chemistry (details here and here) 2024. They did
not acknowledge that this was no longer real physics and chemistry but mostly
mathematized statistical induction. Everything is indeed being reduced to
mathematics and related to mainly hyped computers, AI, and brain research, as when
stating:
The Nobel Prize in Physics 2024 was awarded to John J.
Hopfield and Geoffrey E. Hinton "for foundational discoveries and
inventions that enable machine learning with artificial neural networks".
And in Chemistry 2024:
The Nobel Prize in Chemistry 2024 was divided, one
half awarded to David Baker "for computational protein design", the
other half jointly to Demis Hassabis and John M. Jumper "for protein
structure prediction".
That is, most such science is becoming mathematics as
the discipline of statistics also became computerized mathematics of formulas
manipulating data or numbers fed by “anybody” into computers without knowledge
of the presuppositions of probability and statistics. It is also an
exploitation of misunderstood logic and mathematics equated to “analytical -
abstract thought”, equated further to reason confounded with “intellect”,
whatever it is or should be, forgetting the subtle hodgepodge of Kantian differentiation
between reason and understanding, pure and practical reason, and their relation
to the synthesis in his Third Critique, of judgment, and aesthetics that today
is reduced to “design”. Kant’s
philosophical intervention appears in his own work to be motivated as reaction
to gratuitous societal misunderstandings and abuse of religion, in particular
Christianity. What he himself could not witness were the consequences of his work
in the misunderstanding and abuse of reason and intellect reduced to logic and empiricism.
This is noted in the famous quotation attributed to Chesterton (see here,
#53): When men stop believing in
God they don't believe in nothing; they believe in anything. And the
whole problem is alternatively reduced to the anythings
of “Intelligence quotient” IQ vs. “Emotional quotient” EQ (see below), seasoned
with the expectation that computers in the future will take care of IQ, and
humans of EQ, whatever they are or should be. We can wonder whether capital-intensive
industry and the associated governmental economy will take care of
computers and IQ, leaving EQ to churches and agencies for social welfare.
On Statistics and Experimental Inference
The problems are akin to those relating to the critiques
of "Big Data", and even those ignore the earlier criticism of abuse
of statistics such as Edgar S. Dunn's Social Information Processing and Statistical Systems:
Change and Reform (1974), with the
author honoring me with a positive review of my dissertation on Quality-Control of Information. Swedish readers have the opportunity to read Olle Sjöström’s work,
starting with a PhD dissertation (1980, which as usual in computer context may
be considered as “obsolete”) on Svensk Samhällsstatistik: Etik,
Policy och Planering [Swedish Social Statistics: Ethics, Policy and
Planning]. It starts (p. 154) its English summary asking rhetorically:
Has the growth in the “production” of statistical
data, together with the associated publications, press releases and so forth,
led to improved information for decision makers, better informed citizens and
increased knowledge about current social conditions?
The matter is developed further in the book (1980,
recalling why “history and critique is not obsolete”) with the title Svensk Statistikhistoria:
En undanskymd Kritisk
Tradition [History of Swedish Statistics. A Hidden Critical Tradition],
with an extremely rich multi-language bibliography (pp. 240-245), and an initial
English abstract that states:
This essay delineates a critical tradition of
statistics in Europe and Sweden, lost or ignored during the last decades.
European statistics develops from philosophical empiricism and optimistic endeavours in the 17th century to the exploration of social
conditions (John
Graunt) and to comparisons
among European national states (Hermann
Conring). […] This book claims
that it is necessary to awake and renew such a critical statistics tradition in
order to benefit from modern statistical techniques and computer resources in
the future.
One of the consequences of the loss of a critical
tradition in dealing inductively with big data is the reduction of statistics
to only mathematical statistics, the
loss of the distinction and relation between production or rather measurement,
and consumption or rather use and implementation of data or rather information
and knowledge. Consequently one uses standard
statistical “tools” on “data” that is fed into the
computer without needing to know the presuppositions for the “consumption of
data” by the automated use of the “tools” (more on “tools” below). Never mind
about the foundations of statistics as discussed, for instance, in West
Churchman’s discussion of “objective probability” in Prediction and Optimal Decision (chap. 6). It is only a question of automated
computerized application of ready-made formulas on available data. One further
consequence and example may be the unquestioned new AI hype of
"biotechnology" in How Artificial Intelligence is Revolutionizing Drug
Discovery (March 20,
2023), raising "hundreds of
millions of dollars to
pursue their AI-driven drug discovery pipelines". Which outsiders and
laymen would or could question such specialist "scientific"
affirmations, when it is obvious that "hundreds of millions of
dollars" can always produce discoveries that are difficult or impossible to
evaluate, the more so when there can be no cost-benefit analysis (taste here the
extent of the difficulties), and when one forgets that the relevant concept of cost
here is opportunity cost. In oblivion of the discussions of
“objective probability”, one is reminded of the joke that even a broken,
old-fashioned watch is right twice per day.
Intelligence, however, is much more than (misunderstood) experimental inference, and the very
same Fetzer keeps returning cyclically in time to an endless inconclusive
struggle about the ineffable "much more", as in Computers and Cognition: Why Minds are not Machines (2000), and Artificial Intelligence: Its Scope and Limits (2012). Until Fetzer,
like myself, gets older and older, and cannot yet incorporate the latest
hype-revival of ChatGPT in a new book. The less so when experimental inference,
statistics and mathematics are melted down, as in Q-learning, in an
apparently prestigious mixture that ignores the meaning of “inquiring system”
and that is impossible to understand by both the average well-educated
democratic citizen, and at least by the freshman university student of computer
science. Because it requires a laborious study in order to understand, for
instance, how “the mode of representation of information seems strongly to
influence the success or failure of the inquirer in arriving at a solution”.
(And more, as in the DIS-book,
chapter 6 on Kantian inquiring systems:
Representations.)
Apparent Obsolescence and Desperate Logic
Of course, it is characteristic of the AI field that
"older" works from the 1960s are in practice ignored or dismissed
for not portraying later modern advances. But not only those from the '60s: also those from the '80s are considered old. By whom? By those whose opinions and work today will therefore
also be considered obsolete after a couple of years. This is the more so when
there is no consciousness of the foundations, and thereby the meaning, of logic
and mathematics, which were discussed up to the beginning of the past century,
as I survey in my Computers as Embodied Mathematics and Logic. Not to mention the basis of mathematical or formal
logic in the work of Gottfried Leibniz (1646-1716). And in my later essay on
the consequent problems of computerization or digitalization of society,
represented here by the AI hype, I have a section on a "Case
study of desperate logic". It could be seen as being completed by another
“case study of desperate mathematics” by a serious and engaged, deep-going
Italian mathematician and numerical
analyst, Paolo Zellini. After having entered into the deeper meaning of
mathematics and “algorithm”, he writes what I see as a desperate attack on the
apparent impotence and abuse of mathematics in a book with the eloquent
title La dittatura del calcolo [The Dictatorship of Calculation]. I comment on his work in more detail in a
particular section
of my essay on Computers as embodiment of
mathematics and logic. Zellini’s work also problematizes the abuse of statistics in
hyped Nobel prizes in
chemistry and physics, sometimes adorned by the popularizing hype of
“algorithms”, about which the general educated population does not know anything,
and is not supposed to even understand the presentation of Zellini’s book The Mathematics of the Gods and the Algorithms of Men:
A Cultural History (my italics below):
Is mathematics a discovery or an invention? Have we invented numbers or
do they truly exist? What sort of reality should we attribute to them?
Mathematics has always been a way of understanding and ordering the world: from
sacred ancient texts and pre-Socratic philosophers to twentieth-century
logicians such as Russell and Frege and beyond. In this masterful, elegant
book, mathematician and philosopher Paolo Zellini
offers a brief cultural and intellectual history of mathematics, from ancient
Greece to India to our contemporary obsession with algorithms, showing how mathematical thinking is inextricably linked
with philosophical, existential and religious questions—and indeed with our
cosmic understanding of the world.
It is a "desperate logic" which could include William J.
Rapaport's section of the book Aspects of
Artificial Intelligence (p. 81) with the desperate title of "Syntactic
semantics: Foundations of computational natural-language understanding", that revives
my text on "Symptoms:
Syntax. Semantics, Pragmatics". It reminds of older desperate logic as
expressed in Sheldon Klein’s interest for “meta-linguistic pragmatics of artificial intelligence and
grammars” (sic), as well as in his Culture,
Mysticism & Social Structure and the Calculation of Behavior (European Conference on AI, ECAI, 1982, pp. 141-142).
In the latter he (in a Kabbalah spirit?) goes so far as to relate computer science and linguistics to the ancient cosmological and philosophical Chinese Confucian bible, the classic I Ching. It also recalls the present hype of ChatGPT in terms of its use of large language models (LLMs), which may be seen as a hodgepodge of syntax, semantics and pragmatics that tends to invalidate all these categories, as they would also be invalidated by the hyped attempt to create a Data Science akin to an Information Science, if one could unravel the meaning of and difference between them, in oblivion of the earlier mentioned The Design of Inquiring Systems (DIS). It is also a "desperate logic" as suggested in the now mostly faded but formerly hyped book Gödel, Escher, Bach: an Eternal Golden Braid (1979). It is a subject (whatever it is) that is constantly and inconsequentially revived, lately "mystically" on the Internet as About the "Uncalculable World" and "Mizutani Portfolio". My impression is that the latest modern example of desperate logic is found in the mind-blowing undefinable field (whatever field means) of prompt engineering as a "process of structuring text that can be interpreted and understood by a generative AI model", mentioned also later in this essay.
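To make the quoted definition concrete, here is a minimal illustrative sketch, in Python, of what such "structuring of text" amounts to in practice; the template, its fields and the function name are my own invention, not taken from any of the cited sources:

```python
# Illustrative only: "prompt engineering" in the sense of the quoted definition,
# i.e. structuring plain text before it is handed to a generative model.
# The template fields (role, context, question) are hypothetical conventions.

def build_prompt(role: str, context: str, question: str) -> str:
    """Assemble a structured prompt from free-text parts."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer briefly."
    )

# The same question yields a differently "engineered" text under another role.
prompt = build_prompt(
    role="a historian of logic",
    context="Leibniz's calculus ratiocinator",
    question="How does it relate to modern AI?",
)
```

The "engineering", in other words, is ordinary string concatenation dressed up as a field; the point of the sketch is precisely its triviality.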
I myself feel tempted to take pride in having devised an analogy that, despite my exceptionally numerous links, I do not remember having read or heard anywhere. Today's view of the "old obsolete knowledge" of 400, 100, 40 or 10 years ago, the account of the painstaking work of, say, Leibniz, Galilei and Newton, not to mention the world's sacred books and the philosophies of, say, Plato and Aristotle, invites one to conceive a bold, if ultimately and necessarily imperfect, analogy:
It is as if those pioneers were like hunters who, with sacrifices up to risking their own lives, and deepening their skills, had succeeded in imprisoning and domesticating the big wild beast of "nature", or putting it into a cage. Later we come and in relative comfort examine its "capabilities", kill and dismember its body, gradually eating the flesh and exploiting its remains, until arriving at the bones and analyzing their material composition. And then some of us realize that the beast is dead, that we know nothing about its origin, the effects of its earlier life on the environment, or its offspring and their effects: desperate, greedy, mindless "hunting".
Or consider one of the latest logical-mathematical geniuses who, in "neuroscience", and based on experiments with masses of neurons in monkeys' brains, finds that "the human brain would be the center of the universe and tries to explain the history, the culture and the human civilization based on recently discovered principles regarding human brain function". Worse than that: in the letter "The Economist
Today” that I received on October 4, 2024, long after having written most of
the present text, I read what can always be justified by the Newtonian idea of
starting from simplicity plus “more of the same”, such as a
build-up of logical networks. It is a sort of "lobotomy" (see more below), in the sense that it cuts out and isolates the brain from the mind, intellect or psyche, which is then ignored and later unconsciously "resurrected" in the form of vague speculations about "consciousness", with a continued ignorance of the unconscious. So, The
Economist Today writes (and see here a mind-blowing
video of 12 minutes on this research):
This week we noted a remarkable scientific
achievement: the complete mapping of the
brain of an adult fruit fly. That should help understanding
of the far more complex human brain—and could open
up new fields of information technology too.
Finally, this brain science or neuroscience is converging with AI in the latest hype of worldwide European and USA projects: the USA National Institutes of Health's Brain Research Through Advancing Innovative Neurotechnologies (BRAIN, here and here, with the Wikipedia summary here). It goes from an understanding of the "structure of the mouse brain" over to "the structure of the human brain" and "Developing and applying brain-inspired computing technology". What is mentioned is "brain imaging data structure (BIDS) conversion" and "neuroimaging data and metadata", without worries about the difference between data, metadata, information, and system, or between structure and function (DIS, chap. 3). But guess whether "To date, more than 1,100 awards totaling approximately $2.4 billion have been made by the BRAIN Initiative" will not produce some interesting and successful results and applications. Faced with this perspective, it is a small consolation to read the insightful and benevolent reflections of Abraham Kaplan about what seems to be a scientism of "brain mythology" in The Conduct of Inquiry (1964, p. 325, my italics):
Suppose it were definitely established that the direct
cause of schizophrenia is the concentration of a certain substance in the
blood, or the formation of a certain structure in nerve cells or their
connections. The effect of this cause, so far as it concerns psychiatry, would
still be statable only in terms of macro behavior (for instance, that resulting
from delusions of persecution). Nor would
it follow that treatment necessarily requires the laying on of hands, whether
by medicine or surgery. For the possibility would still be open that the
substance or structure itself is produced or destroyed by various patterns of
interpersonal behavior or communication (as in the familiar case of ulcers, so far as producing them is concerned, at any rate). Psychophysical dualism
always encounters the problem of interaction between the two realms. What I
find as exciting about such development as cybernetics and related disciplines
is precisely that they have “helped to pull down the wall between the great
world of physics and the ghetto of the mind”. [Kaplan’s ending quotation is
from Philipp G. Frank, The Validation of
Scientific Theories,
N.Y., 1961/2018, p. 155.]
The hype of cybernetics in the sixties is today, in 2023, the hype of AI and AGI. Neither Frank nor Kaplan seems to have been able to imagine that instead of "pulling down the wall" what would happen is that the "world of physics" (as future AI) incorporates or outright swallows the "ghetto of the mind", the more so for not considering a ghetto sufficiently dignified. And, in fact, brain mythology, now a while after the main part of my text was written, appears under the label of neurobiology (in Academia Biology 2024-10-97), appropriating even theology in such a paper as Credition and the neurobiology of belief: the brain function in believing. All the while it is communicated that "DIY [Do It Yourself] Brain Stim[ulation] is Growing in Popularity, but Is It Safe, Effective?". It is mind-blowing, but not more
than its historical basis in the philosophy of science as exposed in the year
1935 by H.H.
Price in the paper Logical Positivism and Theology (see also here).
"Desperate logic" has in the meantime invaded science under the label of the Nobel prizes in the year 2024, disregarding the meaning of mathematics, logic, and statistics as related to the philosophy of science and the cultural aspects represented by language and the arts. The Swedish public television, on the occasion of the Nobel banquet on 11 December 2024 (video, at minutes 51:40 – 52:40, whole video summarizing the banquet here), briefly interviewed the earlier mentioned Geoffrey Hinton, who was awarded the 2024 Nobel Prize in Physics, shared with John Hopfield:
- You have said that you are used to being the only one in the room who is right. Where does that come from?
- I guess that it comes from my childhood.
My parents were atheists and sent me to a Christian school, and I was the only
person who did not believe in God. As time went by it turned out that I was
right.
- Has
that mindset helped you?
- Yes,
tremendously, because from a young age I was surrounded at school. Everybody
else had a different opinion, and I thought they were wrong. And that was very
useful when I was studying neural nets.
And for many many years there were very few of us who
believed in neural nets. Everybody else said that this was a ridiculous idea,
and it turned out that we were right.
- Well, I wish you a lovely evening, professor Hinton.
- Thank
you.
- Congratulations, once again.
- Thanks.
Some spectators perceived Hinton's answers as surprisingly haughty, raising, in what is today considered "antiquated" psychology, the suspicion of a diagnosis of ego-inflation or narcissism. I think, however, that it rather reveals a psychology or way of thinking that fits what this present text of mine associates with an extremely mathematical-logical mind, which in turn fits the essence of a misunderstood and therefore abused AI.
A later, more complex example is the very same Nobel prize laureate Geoffrey Hinton as interviewed by Curt Jaimungal, who is introduced on the net at YouTube, Substack, X (Twitter), LinkedIn (Self-employed, University of Toronto), and Spotify, with the latter stating (read on 27 Jan 2025, after the main part of the present text had already been written): "Listen to Theories of Everything [TOE] with Curt Jaimungal on Spotify, exploring theoretical physics, consciousness, AI, and God in a technically rigorous manner."
The interview is found on YouTube, as a video of about 13 minutes with the label Artificial Intelligence is Advancing at a Frightening Rate (saved here), with a further reference to a 73 minutes' video on Why The "Godfather of AI" Now Fears His Own Creation (subtitle "The Existential Threat of AI"), and it would require a sort of doctoral dissertation to comment on its severed ties to philosophy, physics, logic and mathematics, which I consider already implicit in the body of my present text. Such a "sort of doctoral dissertation" would also be an example of a preposterous requirement of intellectual effort for commenting on a body of arguments that amounts to an intellectual "straw man", and a case of what the AI-hype is creating: a series of "intellectual straw men" which require exhausting intellectual resources for sterile "straw-struggles".
Another, simpler example of the wanderings of a mathematical or exclusively logical mind I already offered in an earlier paper, labelling it a case study of type psychology. A further example is when a colleague of mine, upon my suggestion, read but did not seem to understand Tage Lindbom's book The Myth of Democracy. When reading another book that mentioned and commented on the word "myth" he stopped and remarked that it should not be allowed to be used for democracy, as Tage Lindbom had done, because it is dangerous, most readers taking for granted that a myth means a lie: it is dangerous to claim that democracy is a lie. I understand that if this colleague of mine were a dictator or the manager of a newspaper or publishing house, he would "democratically" apply censorship to the book's title "The Myth of Democracy", or to the book itself. The interesting thing, in our context, is that this may be taken as an example of how a logical-mathematical engineering mind can stop considering a book seriously after reflecting upon the logical connection between its title and a couple of sentences in another book.
And this is also what seems to be happening to many of those who received or hail several of the scientific Nobel prizes this year 2024, such as those who participated in the Nobel laureates' discussion of "opportunities and risks of AI" at the round-table in the Swedish television on 12 December (page saved until 25 December 2025). In particular, Geoffrey Hinton (who had been assigned the historically preposterous title of "Godfather of AI") has affirmed that philosophy is unnecessary in practicing or discussing science and its impact. Nobody seems to mind that the idea of neural networks, as elaborated in recurrent neural networks, originated in statistical mechanics and was developed in the so-called neuroscience that we refer to on several occasions. Please note: the human mind's workings being studied with tools from statistical mechanics.
Probability and statistics, while their ultimate foundations are ignored, as also happens with regard to mathematics (mathematical statistics), are apparently seen as the ultimate tool of universal scientific research, as clearly spelled out in the rationale given for hailing the work of theoretical physicist, mathematician, and Nobel laureate Frank Wilczek. Together with three other scholars he is quoted as declaring that "success in creating AI would be the biggest event in human history" but "Unfortunately, it might also be the last, unless we learn how to avoid the risks". (Who are "we" in politics? The politics of the "democratic" United Nations and its International Court of Justice?) He also declares himself an atheist, agnostic or pantheist, while:
In May 2022, he was awarded the Templeton
Prize for his
"investigations into the fundamental laws of nature, that has transformed
our understanding of the forces that govern our universe and revealed an
inspiring vision of a world that embodies mathematical beauty. […] a vision of
a universe that he regards as embodying mathematical beauty at the scales of
the magnificent large and unimaginably small.”
The important thing seems to be the necessity of a serious study, probably possible only in terms of analytical psychology, of what is happening nowadays with the mind or psyche of modern man in the shadow of computerization, digitalization or logification. This goes to the point of disrupting the basis of human relationships, as illustrated further below with reference to Reason and Gender. It is then more than fair, however, to remark that such analytical thinking, typical of formal sciences such as mathematics and logic, has been important for the particular development of the West. It may be seen as rooted in the Judeo-Christian tradition, as noted in controversial studies of Jewish Ashkenazi intelligence (cf. here, here and here) and its performance in the West (cf. the "archetypal" Albert Einstein). This happens, however, in parallel with the gradual secularization or loss of its religious basis and Christian influence. Here is also the place to acknowledge, of course, the immense importance of the formal way of thinking in science, technology and industry. It stands behind the West's mainly material wellbeing, with the remark, however, that it may have been what Goethe meant by what came to be called a "Faustian bargain". An explanation of the meaning of such a Faustian bargain can be sought in Carl Jung's insight (Collected Works, Vol. 6, p. 77, §113), in a quotation that I already reproduced in the essay Information and Theology:
Through the shifting of interest from the
inner to the outer world our knowledge of nature was increased a thousandfold
in comparison with earlier ages, but knowledge and experience of the inner
world were correspondingly reduced. The religious interest, which ought
normally to be the greatest and most decisive factor, turned away from the
inner world, and the great figures of dogma dwindled to strange and
incomprehensible vestiges, a prey to every sort of criticism. [...] Modern
rationalism is a process of sham enlightenment and even prides itself morally
on its iconoclastic tendencies.
So, Leibniz is not even mentioned in e.g. James Fetzer's edited book Aspects of Artificial Intelligence, not even in Clark Glymour's contribution with the title "Artificial Intelligence is philosophy", despite his writing (p. 205) that "The trouble with artificial intelligence work is that it can use the senses to overwhelm the intellect".
I propose that it is much more than this: the advent or incursion of the latest if not last AI-hype is the misunderstanding and abuse of mathematics and logic in the process of dissociation of the human mind, the process that I present in my text on the meaning of human-computer interaction (HCI). In this process can be seen (by those who know it) an oblivion of the historical issue of Geisteswissenschaften or "sciences of the spirit", related mainly to the name of Wilhelm Dilthey, an issue which later seems to have reverberated in Rudolf Steiner's Anthroposophy. The dissociation of the human mind can rhetorically be seen as a tragic ongoing analogy to the historical scientific and political scandal of lobotomy, which also happened to dissociate the brain from its so-called emotional side, as formal science does with the undefined thought or "cognition" in the undefined mind that eschews all relation with analytical psychology (cf. The metaphysics of design: A Simon-Churchman "debate"). I guess that many readers will think that it is too
farfetched and controversial to be reminded that a "cultural
lobotomy" also was the background of the Holocaust, (cf. the identification
of inmates in Nazi concentration camps, which later became worldwide personal
identification number) based on the "intelligent" superiority of a
"race". It all accompanied by eugenics in several "advanced" western countries,
with controversial
interpretation of the Swedish national intellectual heroes, Nobel
Prize winners Gunnar
and Alva Myrdal's stand on the issue.
It is a cultural lobotomy that may also stand in the background of the ongoing equalization of humans and animals, with humans reduced to mere animals, while we believe that we are raising animals up to humans. This is related to the fact that the undervaluation of humans increases the incapacity to see the difference between humans and animals, but also between humans and machines, as evidenced in the apparently unproblematic frequent reference to the famous "Turing Test". It reminds us that the less one understands what a human being is, the easier it is to equate him with a machine or with "nature", including seeing, as Nobel prize laureate in economics Herbert Simon did, "behaving human beings as simple as ants" (more on this below). All this is in turn related also to the philosophy of veganism, as well as to the raising of "Mother Nature" to a new goddess for a pantheistic solution to the issue of pollution and climate warming, up to the point of youngsters declaring that they relinquish having polluting offspring.
The step is short to equating humans with machines, with the supposed final attainment of machines that are not machines, with their own "consciousness" if not "spirit", whatever that is perceived to be, and a superhuman if not godly intelligence. But if computers do not become superhuman, humans can perceive themselves, behave and be treated as slow computers, and not only as animals, as in the Holocaust of the Second World War. This is also the rationale for the expectation that not only many manual workers but even so-called intellectuals and scientists, who nowadays already think as slow computers in the computerized West, will in the future be replaced by, or kept operating, advanced AI/AGI/ChatGPT. The greater the number of people at work, even "intellectual" work, with tasks that are designed to require them to act only as slow computers, the easier it will be to replace them with faster "AI" computers.
Not only that: it is the same cultural lobotomy of the separation (and not only differentiation) between brain, logic and psyche that enables democratic societies, and especially politicians, to believe that it is possible to counter criminality with only computer-logic, law, police, prisons, military and weapons, instead of relying honestly on "love thy neighbor". Or to believe in countering credit card frauds with a progressive build-up of computer "security" behind advanced passwords, e.g. of at least 16 characters that must include uppercase and lowercase letters, numbers and special characters or symbols, card security codes, multi-factor authentication and facial recognition systems, including the whole security industry as suggested in e.g. a magazine like Detektor. Why not, soon, also brain recognition systems? Or, ultimately, to believe in security behind the shields of police and military that are supported or counteracted by logical gaming in spyware like the famous Pegasus (which I consider also elsewhere), property of the controversial Israeli cyber-intelligence group NSO Group (cf. its mind-blowing history), in international espionage or in wars against and between criminal gangs. That is, the belief in being able to substitute computer security and the struggle against disinformation for quality of information as defined in my doctoral dissertation on Quality-Control of Information, as originally suggested in my early paper on the concept of security that today reappears under the avatar of (AI-)Safety.
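The kind of password complexity rules mentioned above can be made concrete in a small sketch; the function name and the exact policy encoded here are my own illustration, not any cited standard:

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against the kind of complexity policy described above:
    at least 16 characters, with uppercase and lowercase letters, digits,
    and at least one special character. (Illustrative policy, not a standard.)"""
    return (
        len(password) >= 16
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```

Such a check of course illustrates exactly the build-up of logical "security" that the text questions: it verifies the form of a secret, not the quality of the information or the trust behind it.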
We can indeed expect to meet a mind-blowing future of man-machine interaction, further exemplified by a mind-blowing website like HIBP (Have I Been Pwned?, see also here), which relates to DDoS attacks and breaches like the "Internet Archive Breach Exposes 31 Million Users" in October 2024, a future undermined by increasing complexity and a worsening cultural crisis under the label of AI-Safety and the coming neologisms of "AI-something". It can all begin with the infantile (of course my-our) Good-AI and (of course your-their) Bad-AI.
In the lack of an understanding of the human psyche, which would require study of the history of philosophy and e.g. analytical psychology with an understanding of theology, there is a hodge-podge of "models" to support the ad-hoc concept of "Emotional intelligence" (EI), possibly measured by a corresponding emotional quotient (EQ), analogous to the old abused IQ. This even forgets that the problems mentioned above may lie behind the reportedly increasing frequency of diagnoses of autism, including Asperger's. It is interesting to note the vagaries of people, especially when discussing the arcane modern question of whether AI will be able to "feel". The flood of mass-media speculations is exemplified in a program of 44 minutes at the Swedish Radio on March 15, 2024 with the title Kan AI ha känslor? [Can AI have feelings?], with the subtitle "Today, humans are increasingly interacting with AI. AI can make us humans feel joy and even love, but can AI have emotions and what would it mean if AI had real emotions?" The program, with the two guest researchers Fredrik Svenaeus and Karim Jebari, shows the practical impossibility of expressing and sharing psychological and philosophical knowledge when philosophy, which historically included psychology, is reduced to simple reference to "phenomenology" (appreciated in universities for its theology without God). Love and its Christian theology (surveyed in my essay on Reason and Gender) is usually programmatically ignored and silently excluded from the conversation. Or it is reduced to politics and in the best case to (social human feelings in) "relationships". It is the case of the apex of the fourth and last (on AI
and Creativity, retrievable until April 28, 2027, saved here) in a series of
four television programs at the Swedish public
television broadcaster with the title Smartare än hjärnan [Smarter
than the brain]. After the first three programs trying to popularize a necessarily superficial understanding of AI, the fourth and last one tries to delve into AI's relation to creativity, including art and entertainment; it interviews some researchers and others related to the matter (cf. below about art, and "Alphago"), namely Mark Solms, Anna Lembke, Robert Waldinger, Alexander Reben, Pulkit Agrawal, Jodie Foster, and Karin Adelsköld. The conclusion seems to have been that creativity has to do ultimately with "consciousness" (whatever its connection to the unconscious, cf. my Reason and Gender) and the importance of human relationships, which are problematized by AI developments. For me this "conclusion" in our context is an anti-climax, and "relationships" are a euphemism.
This is ultimately also the reason why those who happen to study The Design of Inquiring Systems are not able to figure out how an application of e.g. the ignored philosopher Hegel's and others' philosophical thought (p. 180ff, see below) can be related to Carl Jung's analytical psychology as it appears in the discussion of "progress" (pp. 203-205) and of "the mind" (p. 261ff.), but the more so in the conclusion that (p. 272):
[I]t may be safe to say that at the
present time inquiry has become part of the unconscious life of most people:
they are unaware of the ways in which they function as inquiring systems. Nor
is there a strong inclination for them to give expression to this function so
that its nature appears at the conscious level. As a consequence, we are
suffering now the most dangerous symptoms of an inability to bring to the
conscious level an important human function. We assign to the experts and the
politicians the roles of designing and creating the environment in which we
live because we can see no way in which we can play any role whatsoever in
these activities. Appalled as we may be at the events that are occurring in the
world about us as the output of blind technology and politics, we each in our
own frustrated way feel that we can do nothing about it.
Medical brain lobotomy, seen here as an archetypal analogy to the reduction of intelligence to logic, mathematics and formal science divorced from emotions, feelings and intuition, was supposed to deserve the highest level of international official scientific recognition by means of the Nobel prize for physiology and medicine in the year 1949. It also raises the memory of the Nobel prize in economics to Herbert Simon, while West Churchman arguably decreased his chances of a similar official recognition because of his opposition to Simon's view of intelligence in the artificiality of science and his view of "behaving human beings as simple as ants" (probably not intended to be applied to the author himself). As with the case of the Turing Test mentioned above, the step may be short from considering certain "non-intelligent" humans as machines or ants to the point of allowing for attitudes such as in the Holocaust. Churchman also opposed some tenets of an earlier recipient of the prize in Economic Sciences, Kenneth Arrow. It belongs to this story that a colleague of mine, a member of the institution awarding the prize in physics, the Royal Academy of Sciences, confessed that an internal report he had written about a case of this kind of problems in his discipline had been classified as secret for the next 50 years. Talk about democratic freedom of expression and openness in research.
There is a symptomatic case of a Swedish doctor and author, grand old man and champion of intellectual debate in journalistic contexts, mainly in social criticism and satire, for more than 60 years: P. C. Jersild. One of his best known books, A Living Soul, symptomatically relevant for our "lobotomized brains", deals with a living, thinking, and feeling (?) human brain floating in a container of liquid. Also symptomatic is his answer at the end of a long interview on the Swedish Radio, in a program broadcast on December 9, 2023, about what he thought of the ongoing hype of AI-AGI. After his many well-formulated answers and thoughts about "everything" during the interview, including about his atheism and commitment to voluntary euthanasia despite being a physician, he finally, with a certain hesitance, confessed that it was a difficult matter and that he had not yet formed an opinion. Guess why. It is not easy to know what a computer really is, the less so as related to formal science and the human psyche, that is, again: human-computer interaction, and "intelligence".
Driving Forces behind Technology
So much for democratic freedom of expression at the highest levels of the highest sciences, reminding us of the barely advertised fact that "Democracy is a myth" because of the negligence of its presuppositions, as I explore in a text on the information crisis around the conflict between Russia and Ukraine. Science as scientific "fields" in general, and computer science in particular, and artificial intelligence more particularly, nowadays eschews all democratic control because it is controlled only by narrow, isolated, self-controlled communities of adepts who, as in universities and in the supposedly democratic Wikipedia, are the only ones seen as entitled to understand what it all is about. And in the meantime the ongoing new wave of AI-hype is advertised by means of a plethora of "visionary" programs of doubtful quality in mass media and in discussions in social media: a reprise of the Dot-com bubble, possibly a revival of the historically archetypal Tulip mania. Let us remark that many people made big profits from all those "visions", as they are now making from cryptocurrency.
Ultimately, however, the question may boil down to which motivating forces drive the modern scientific, technical and industrial view of the western world and already also parts of the eastern one, the latter's antiquity having already been the object of my reflections in the text Logic and Rape as related to Science and Civilization in China. It is a view that may be related to a peculiarity of the Judeo-Christian civilization as related to "smartness" in the controversy around "Ashkenazi Jewish intelligence" (see here, here published here, and here). It is a view that is driven by the mathematics and logic of western mathematized technoscience, as synthesized also in the mathematical logic embodied in computers. It is a matter that has been studied in the philosophy of science and of technology, touched upon in an essay of mine on Trends in philosophy of technology. I believe that theologically this field was best surveyed in Carl Mitcham's and Jim Grote's edited Theology and Technology (volumes 1 and 2). My own experience, in witnessing that even deeply committed and educated scientists and philosophers adore the power of mathematics and logic (a power as suggested by Jan Brouwer, cf. my Computers as Embodied Mathematics and Logic), is that they are seduced by the feeling of power (supposedly for good and for bad) over nature and humans, to the detriment of the Christian primacy of charity and love. It may be a case of analytical psychology's conception of ego inflation, when believing that they are high priests gifted with the capacity of understanding the mathematical-logical language (synthesized in mathematical logic) in which God himself is supposed to have described the created universe. Or, as I saw in a letter written by a particularly gifted and educated physicist:
I particularly remember an experience when I was
taking a PhD course in quantum mechanics and going through a particular proof
using group theory to predict atomic states. It was like a lightning strike and
I actually started crying with emotion. It was a powerful experience.
It is interesting and symptomatic that this is a
powerful experience, to the point of convincing if not compelling humans to a
religious conversion that could have been a sign of the Biblical message that
humans have a divine spark in themselves because they are created in the
"image of God". Nevertheless, it can easily be a symptom of proud
"Ego inflation" mentioned above. It is therefore also the source of
powerful speculations raised around the names of Fibonacci and Mandelbrot, as already
considered in a couple of other essays, on
Computers as Embodied Mathematics, Information on Christianism or Atheism, and Quantum Physics, Computers & Psychology. A well-known Swedish professor of theoretical physics tells, in an interview on the Swedish Radio, of a likewise life-changing (but not religiously converting) powerful childhood experience: witnessing in the sky the announced apparition of a comet. Other theoretical physicists witness similar experiences but are equally insensitive to religious conversion, such as e.g. the famous Nobel prize laureate Peter Higgs, about whom Wikipedia reports:
Higgs was an atheist. He described Richard Dawkins as having adopted a "fundamentalist" view of non-atheists. Higgs
expressed displeasure with the nickname the "God particle". Although
it has been reported that he believed the term "might offend people who
are religious", Higgs stated that this is not the case, lamenting the
letters he has received which claim the God particle was predicted in the Torah, the Qur'an and Buddhist
scriptures. In a 2013 interview with Decca
Aitkenhead, Higgs was quoted as saying:
I'm not a believer. Some people get
confused between the science and the theology. They claim that what happened at Cern proves the existence of God. The church in Spain has
also been guilty of using that name as evidence for what they want to prove.
[It] reinforces confused thinking in the heads of people who are already
thinking in a confused way. If they believe that story about creation in seven
days, are they being intelligent?
— The Guardian, 6 December 2013
But I have not seen any speculations at all about parallels with our "powerful" experiences of feelings of passionate love, as considered in the essay on Reason and Gender. There are people who infer from their own feelings of love, love for children, spouses and parents, that they have to ask wherefrom this powerful feeling of love comes, as well as wherefrom and why tears come when listening to certain kinds of music that ultimately can be religious, as I remind in my text on "Intuitions in Music" in the paper Information and Theology. There must be a "powerful" feeling of love in them, instilled from outside, from above or from the "inside" (Genesis 1:27). That is, love up to the point of being able to sacrifice one's own life in saving or losing the loved ones.
If, as in analytical psychology, it is a question of
“archetypes” or, worse, “mechanisms” rooted in the human mind, then the answer
will be that psychology only affirms that they are there, without claiming to know
wherefrom or by whom they were placed there. Up to now I have never heard people
confess to having “started crying with emotion”, or sobbing with tears
in their eyes, upon the intuition that their feeling of loving or being loved
by their children (though unhappily sometimes less by spouses divorcing spouses)
is the same feeling of godly love for us (not “intelligence”), an intuition
of how much we can be loved that makes them wonder where this feeling of love comes
from. But many if not most of them seem readier to prize mathematics for
their own supposedly godly mind, a mind that in fact navigates the world as the
proudly engineered Titanic did the ocean. Perhaps fantasy in science fiction
offers a presage of decadence in imagining, as in the film Zardoz, that the
immortal “Eternals”, leading a luxurious but aimless existence, “are overseen and protected from death by
the Tabernacle, an artificial intelligence”. And
this is consistent with the hypothesis that ultimately it is mathematical
logic that raises such disoriented powerful feelings.
Support for this hypothesis comes from the
simultaneous tendency to argue only or mainly in logical terms (and/or with
the help of ChatGPT!), without reference to historical sources or
“philosophical” thinkers. Whenever other sources are mentioned, as here, the
accusation is raised that this is illegitimate “psychologizing”. But see an
example in my bold attempt to suggest, despite the accusation of psychologizing,
that the enforcing of pure logic
can be experienced as a spiritual “rape”. Or the
accusation is raised that references beyond pure logical argumentation are an
abuse of “academic-philosophical-historical” speculations, while the question
should be seen “pragmatically”: whether it is convincing and “it works” in the
physical world, often combined with “it sells”, which in turn is reduced to
coarse usability, utility and economic profit, or “it is profitable”. One should
instead say: “Quantum physics works, until
a next nuclear third world war”. The appeal to pragmatism, however, is then
made without questioning the history and essence of philosophical pragmatism, which is
downplayed to a merely “philosophical-academic” interest.
And speaking of forgotten love, I am reminded of
what Oswald Spengler writes in his great work The Decline of the West. Building, as he claims, mainly upon Goethe
and Nietzsche, he writes in a book that I bought more than 40 years ago but
do not claim to have been able to study and digest before reaching old age.
Nevertheless, in the present context I cannot avoid remarking on what he writes
in vol. 2, pp. 218f.:
Religion
is metaphysics and nothing else – Credo quia absurdum
– and this metaphysics is not the metaphysics of knowledge, argument, proof
(which is mere philosophy or learnedness) but lived and experienced metaphysics – that is, the unthinkable as a
certainty, the supernatural as a fact, life as existence in a world that is
non-actual, but true. Jesus never lived one moment in any other world but this.
[…]
“His” teachings, as they had flowed from his mind and
noble nature – his inner feeling of the relation between God and man and of the
high meaning of the times, and were exhaustively comprised in and defined by
the word “love” – fell into the background, and their place was taken by the teaching of him.
And today, when for many there is not much left of His teachings, but only “powerful
experiences” possibly seasoned with some theological erudition and
reference-dropping, this can be taken as support for my view that powerful
mathematical experiences are an
expression of the inflationary power of
the Ego-Mind, and not of human or
godly love. It was the "super-intelligent" power of mathematics and logic over
nature (and climate), and power over other human beings, the
"enemies". It was a “science” applied to wars with high-tech weapons
including "lethal autonomous weapons" (see here, here and here). Application
to wars, however, is more complex than the manufacturing of products. It will be
claimed that AI is, and AI will indeed be, applied to the general use of weapons such
as more or less conventional bombs, as illustrated in the case of the 2023
Israel-Hamas war in an article by Jonathan Cook (see also here in the
necessarily highly controversial Unz Review, December 5, 2023). Here follows my excerpt from the article, including a
couple of readers’ comments:
The whistleblowers confirm that, given new, generous parameters of who
and what can be attacked, the artificial intelligence system, called “Gospel”,
is generating lists of targets so rapidly the military cannot keep up.
…
In a report last Friday [December 1, 2023], the Guardian [in an article with the title ‘The
Gospel’: how Israel uses AI to select bombing targets in Gaza, with
repercussions as in Le Monde, Dec.
6, 2023] corroborated Israel’s reliance on the Gospel computing
system. The paper quoted a former White House official familiar with the
Pentagon’s development of autonomous offensive systems as stating that Israel’s
no-holds-barred AI war on Gaza was an “important moment”.
…
Israeli military
is now using an artificial intelligence system, Habsora
or Gospel, to identify targets.
…
Speaking of the
military’s new reliance on Gospel, Aviv Kochavi, the
former head of the Israeli military, told the Israeli Ynet website earlier this year [June 23, 2023] “In the past, we
would produce 50 targets in Gaza per year. Now, this machine produces 100
targets a single day, with 50 per cent of them being attacked.”
…
A former intelligence
officer said that the Targets Administrative Division
that runs Gospel had been turned into a “mass assassination factory”. Tens of
thousands of people had been listed as “junior Hamas operatives” and were
therefore treated as targets. The officer added that the “emphasis is on
quantity and not on quality”.
…
COMMENTS [two selected comments of the article]
The REAL value of “AI” is as lying sockpuppet
to take the blame:
• Health Insurance companies use “AI” to deny claims. (Don’t blame us)
• Military uses “AI” to kill civilians. (Don’t blame us)
• Hedge Funds use “AI” to insider trade. (Don’t blame us)
• Corporations use “AI” for sales targets during down quarters. (Don’t blame us)
This is
the other face of Sam Altman’s empire!
The Ego’s inflationary power also offers an important
advantage of logic (and consequently of AI/AGI) in the politics of academic
careers: it dispenses with painful references, not to mention charitable attention
to the contemporaneous and historical work of the “neighbors” whom we are supposed
to respect, if not to love. A logically structured text is the only way of
claiming attention without a single reference to, and (worse) dependence upon,
prior knowledge other than the technical and analytical. That is: unless one questions
the essence of mathematized science, or of logic and mathematics in inquiry.
So, in my experience, logically oriented scholars, now with the support of
heavy AI/AGI technology, can do well without any academic “fatherhood”, let
alone an ex-post legitimately
criticized fatherhood. The more so by adducing that rapid technological
development invalidates the import of earlier, even 5- or 10-year-old,
insights. This can be related to youngsters’ lack of respect
for the knowledge and experience of their parents, the more so of their
grandparents, not to mention of the generations of people who with struggles and
sacrifices have built our accumulated knowledge and wellbeing. Their implicit
fatherhood is the pseudo-religious Logos that is felt as implicit in the abused, assumed etymology of the word
“Logic” as used today.
It is also symptomatic that there is a widespread rumor
(browse the Internet for <mathematics god language>), as if it were
a quotation of Galileo Galilei, that he had written that mathematics is the
language of God. There is, however, only one documented source, The Assayer, in which he
really only claimed that it is the language of science (natural science),
without mentioning God. Further theological aspects of the issue can be found by
searching for the keyword <mathematics> in my essay on Information and Theology.
The contempt for
philosophy, and not only for theology, based on ignorance and secularism, is also
what guides smart authors of smart articles in prestigious journals such as The Economist. An ambitious article, noted in a
main Swedish newspaper, Svenska Dagbladet (Nov 19, 2023), about the
economic-political events of November 2023 in the OpenAI organization, writes, for
instance, referring to the events detailed below relating to the board and CEO of OpenAI:
The
events at OpenAI are a dramatic manifestation of a wider divide in
Silicon Valley. On one side are the “doomers”, who
believe that, left unchecked, AI poses an
existential risk to humanity and hence advocate stricter regulations. Opposing
them are “boomers”, who play down fears of an AI apocalypse
and stress its potential to turbocharge progress. The split reflects in part
philosophical differences. Many in the doomer camp
are influenced by “effective altruism”, a movement worried that AI might wipe out humanity. Boomers
espouse a worldview called “effective accelerationism”, which counters that the
development of AI should
be speeded up.
So, the difference between doomers, among whom many readers would
like to see me included, and boomers would reflect “in part” philosophical differences. Both doomers and boomers could be influenced by “effective
altruism” (which I have already considered in a section
of my text on Information and Theology). What is not
said is that both of them may be projecting their desperation, respectively
their hopes, onto the lesser or greater capabilities of “democracy”. What is not
spelled out is that effective altruism, as represented by the outspoken atheist “moral
philosopher” Peter Singer, is a failed
substitute for religion. Boomers do not seem to be influenced by anything. Once
upon a time they might have been denominated simply optimists, while
the further discussion in The Economist
indicates that psychologically and politically they might rather be seen as entrepreneurs, or “individuals
who create and/or invest in one or more businesses, bearing most of the risks
and enjoying most of the rewards” in this life, in view of the absence of a future
one. It was the strong concern of investors that drove the dramatic development
of the relation between the board and CEO of the OpenAI organization. So much for
philosophy and money, forgetting also the philosophy of technology, besides
economics and politics.
Ego
Inflation or philosophy behind “Powerful Experiences”
I know of a particular case, besides one
documented on the Internet, in which an ego inflation may have contributed to
the total conversion of a thoughtful, educated and intelligent man to
Christianity, a process that I characterize as a serendipitous, paradoxical
"right decision based on wrong premises". He found that since the
decision was right, it did not matter whether the premises were wrong. But this
discloses a defective understanding of the “system” and of the interpretation
of Christianity, as illustrated in the essay on Conscience and Truth by the former cardinal (and later pope) Joseph Ratzinger. The same premises
may lead to other related wrong decisions, including in applications or
implementations of the original decision, such as in interpretations of
Christianity when advocating and justifying the USA’s use of nuclear bombs on
Hiroshima and Nagasaki (justified as avoiding “worse” consequences) or the
handling of the conflict between Israel and Palestine, ultimately illustrated
by the 2023
Israel-Hamas war, or, as I show, between Russia and
Ukraine. The Israel-Hamas war offers a good example of
the defective natural intelligence that obviously undermines the development and
evaluation of AGI, as illustrated by a (here slightly edited) communication I
received from a self-avowed Christian engineer and scientist who felt strong
sympathies for Israel:
[N]ow we touch on a question that cannot be solved by
philosophy. Regardless of all philosophies, there is a reality that cannot be
philosophized away. For example, in some cases you have to make decisions,
which regardless of which decision you make, have negative consequences for
people and society. And making no decision at all can lead to even worse
consequences
[...]
It is easy to sit in your best armchair with a cup of
Darjeeling tea and a slice of toast with Cooper's Oxford Marmalade and comment
on [others'] actions. And weigh the different options against each other. When
you are in her situation, it is not so easy to play wise and good.
Then one is forced to show one's colors.
[R]eal life shows how we
humans can be faced with terrible decision situations where whatever we decide
leads to negative consequences. Still, a decision must be made. Not making a
decision at all may lead to even worse consequences. Trying to play well in situations
like this usually doesn't work.
Sitting in one's armchair and thinking beautiful
philosophical thoughts that have nice Latin names (that make oneself appear
wise and good, etc.) can certainly be done. But reality is sometimes more
complicated than our theories about it (the kind of philosophy I now call
"mental masturbation"). Over the years, I have become increasingly
negative towards intellectual people, which led me to define the term
"stupid-intelligence".
It is the same self-avowed
Christian engineer and scientist who, commenting on my suggestion to read about
the philosophy of science implied in West Churchman’s DIS, revealed what goes under the phenomenon of Ego-inflation when writing that he
cannot afford the time and effort required to read it:
As for Churchman, I'm not fundamentally uninterested.
I just feel too old to get into a new area. Or rather, I don't know whether I'm
interested. I would have known that if I had had the time to get into his way
of thinking. That's one of the disadvantages of aging. Now I get to die without
knowing whether I had undervalued his thinking.
To which I answered tersely, suspecting that I was
witnessing ego-inflation: “I would rather say that you get to die without
knowing whether you thought and acted correctly about certain fundamental
questions." The problem may not have been only his own ego-problem of
having missed something, but rather the ultimate consequences for others of
his missed understanding. Otherwise: who cares whether he (knows that he) has
undervalued somebody’s thinking?
It is a similarly coarse conception of knowledge and “philosophy”,
an a-philosophical pragmatism or common
sense, or a process of conversion or decision by unconscious emotion in the face of
powerful experiences, that has "divinized" violence and wars,
including the nuclear bombs on Hiroshima and Nagasaki in order to avoid
supposed “worse consequences” (for me and/or for others than the bombed?). It is
also a way to do away with the theological (Christian) concept of “sin”, and to
relativize a defective understanding of love, as when stating,
for instance (transl. from Swedish), “Without truth, love risks being reduced to a syrupy drug. And without
love, the truth risks becoming an icy regulation”. This statement, which
per se may legitimately state the Christian dialectic between charity and
justice, is understandable in view of the present daily tendency to use and
abuse the word truth as mathematical formal truth, and love as covering everything
sexual and “nice”, while ignoring the problem of Reason and Gender. But, besides the
typical logicist (logical positivist) assumption that
regulations or logical rules only need to be complemented with a separate, vague
“love”, it is also a simultaneous denial of the Christian primacy and essence of love (Matthew 22:37-40, esp. Galatians 5:14) coupled with a
subjectivist implicit perception of the meaning of truth. It is a “truth” that,
in opposition to the Catholic need of a Church, relies only upon a Protestant
personal reading and interpretation of the Bible, eschewing as supposedly
logical positivist “psychologism” the relating of
the whole question to depth psychology, as done by the former cardinal and later pope
Joseph Ratzinger in his essay Conscience and Truth.
It is also this contemptuous attitude to philosophy
that allows ignorance of the meaning of “powerful”, as already painstakingly
discussed in the context of “the sublime”. It is also
this kind of ignorance that allows for the accusation of “psychologizing” when one does
not know how psychology originated in philosophy, nor that
historically philosophy appropriated the realm of theology, while the latter is
offset by the cheap acknowledgment of being a pious Christian believer who quotes
the Bible. This when, in fact, one is a logical
positivist who not only legitimately distinguishes but also separates the facts of science from values
and religion, religion being soon reduced
to scientism (or logical positivism), politics or sentimental art experiences, such as the sentimental impact of
Portuguese “fados” or of music
played in Nazi concentration camps (see here and here). It is a
reduction that I comment upon in my essay on Information
and Theology. All this possibly paired with alibis such as assuring affirmations of
having deep religious beliefs alongside scientific practical work, as may be
allowed even within logical positivism.
A separation of the facts of science from values is
paradoxically also a disclaimer of the engineers’ responsibility for the
consequences of their work and its possible failures (so long as they are on the
payroll and get paid by somebody). This is so because the engineered products,
including “tools” (including nuclear weapons such as the atomic
bombs on Hiroshima and Nagasaki), can be said to have always been “right” but
used in the wrong way; equivalently, that they should not have been used (before being tested in the right way), or
should not have been given to people (other than us or our allies!) who
could use them wrongly (e.g. in the case of weapons). An example I have
personal experience of is the context of the repeated groundings of the aircraft
Boeing
737 MAX 9, the latest in January 2024 (historical overview here). A
knowledgeable aircraft wizard assured me that the problems were not really
technical but rather managerial, a consequence of Boeing’s management
in later times having prioritized economic interests over technical ones. It
is easy to imagine how such disclaimers are going to work with engineering products
like AI/AGI that are less “concrete” than aircraft.
The separation we are talking about implies a contempt for
philosophy that fits the protest against, and rejection of, being labeled at all, as,
say, a logical positivist or logical empiricist. The answer, then, is “I am not a
logical positivist, I am only my name, (say) John Smith”, picking an opinion
here and an idea there. This happens without realizing that this too has
a labeling name, eclecticism, and that by the
(father’s) family name one is labeled as a member of a family, which in turn is
labeled by the name of the father, as Christians acknowledge having a common
Father that makes them brothers. Others may unconsciously assume that their
family is only their Mother (and half-brothers), who loves all her children obtained (“eclectically”?)
from many fathers (many cultures having a special contemptuous and cursing denomination for
such behavior), according to the conceptions that I illustrate in my
introduction to Reason and Gender.
It is only gods,
geniuses or “logicists” who, without any outer
references, have an innate skill (often related to high IQ) for connecting logical
Leibnizian networks of empirical data (logical empiricists), and who dare believe
that they can create their own individual a-historical “-isms” and their
own logically consistent “Leibnizian” system of thought and unconscious
feelings. They live without a family
and without a history, since the renunciation of the evaluation of “-isms” is also
an implicit contemptuous renunciation of the supposedly “obsolete” thoughts of our
forefathers, whose errors and achievements are the basis of our ongoing life.
They do not foresee that their own present work and conclusions will be
contemptuously rejected or outright ignored by others within a few years.
Unconsciously they rely on other “-isms” such as the mentioned eclecticism and state individualism, where the state, often confounded with society or one
of its scientific subcultures, is the family
of those without family. They do not realize that in relying on their own
logical networks, without references to earlier authorities, it is as if they were
gods editing their own “bible”, forgetting that the Bible relies on God
(and historical accounts). And they forget that typical logicists,
who do not understand the meaning and limitations of the language from which logic
is extracted (and not the other way round), reject the Bible itself because of
its logical contradictions, or abuse the Bible by ignoring them, reading the Bible
as the devil does. These considerations lie behind the insight, highly relevant
for understanding the present essay in the face of the issue of AI/AGI, that the
crushing amount of references and the apparent
complication of its argumentation are not necessary for a reader who is guided
by the Bible instead of by treatises on mathematical logic.
This fits a key statement of Blaise Pascal in a relevant
section of his famous Pensées (§ 251) which,
in order to be properly understood, must be read and studied in its
context, readable on the net (preferably in the original
French, because of orthographic misspellings in translations):
Other religions, as the pagan, are more popular, for
they consist in externals. But they are not for educated people. A purely
intellectual religion would be more suited to the learned, but it would be of
no use to the common people. The Christian religion alone is adapted to all,
being composed of externals and internals. It raises the common people to the
internal, and humbles the proud to the external; it is not perfect without the
two, for the people must understand the spirit of the letter, and the learned
must submit their spirit to the letter.
Otherwise, it requires advanced secular ingeniousness
to conceive tests of AI/AGI products beyond the example of my essay on the
Ukraine conflict. I mean tests sufficiently pedagogical to put into
evidence their possibly dangerous limitations, exemplified by the AI creation
of original jokes, overviewed in e.g. an article
in Time magazine in January
2022, an issue that keeps expanding on the Internet, as can be perceived by
browsing with strings like <artificial intelligence AI humor humour>. One such test of AI limitations was illustrated
by Anders Q. Björkman in the Swedish newspaper Svenska Dagbladet, December 2-3, 2023, “Jag hetsar AI:n – och
då händer något” [I provoke the AI – and then something happens]: it
is the request that the AI tell a joke of its own. It fails, but the journalist
refers to the Swedish National
Encyclopedia, which defines humor as arising from
"the collision between
(at least) two human rule systems. To the extent that the collision turns out
to be apparent or explicable from a third rule system, there is a case of
humor".
What is interesting here is that humor is conceived as
depending upon rule systems. As such, if rule systems are synonyms for logical
Leibnizian networks, this also illustrates the spontaneous psychological relief
experienced with the insight into the relativity and insufficiency of framing
statements in only logical terms. I read somewhere that the German philosopher
and theologian Ernst Troeltsch indicated that
humor may have a similar influence as religion in fostering humility by
downplaying the importance of the big Ego (which is inflated by logic). And
yet we are far from understanding humor if we consider the difficulty of understanding
a standard work on humor in the world’s literature: Henri Bergson’s book Laughter. I claim that the
process of understanding and evaluating humor exceeds
the scope of this essay, but it returns
us to the broad philosophical (including psychological) and theological
issues suggested in The Design of
Inquiring Systems. A meaningful curiosity that can be mentioned here,
however, is that Bergson emphasizes that it is easier to laugh collectively, which can be seen as implied in humor’s downplaying
of the individualizing Ego.
It is the aforementioned separation of the facts of
science from the values of religion (nowadays profane democratic “core values”)
that Churchman struggled to counter in his whole work, at
the cost (for academic survival) of renaming God (in his The Design of Inquiring Systems) with the acronym GOD - Guarantor of Destiny, which in
turn allowed an academically acceptable, watered-down further
(mis)use of the term, while “divinizing” a priestly role for, and
responsibility of, the Designer. That
is, a divinized “Design” which in turn is a misunderstanding and abuse of the
Kantian role of aesthetics in his Critique of Judgment, as addressed in
my text on the computerization of society, and in a commentary on Gunnela Ivanov’s doctoral
dissertation relating to the democratization of design.
David Noble and “The religion of technology”
In 1998 the historian and “critic of
technology” David F. Noble published the
book The Religion of Technology, a subject that, from a (for me) controversial “leftist”
perspective, recalls the more politically neutral idea of Richard Stivers’ book Technology as Magic: The triumph of the irrational. What is of immediate interest here is Noble’s
historical account of the reports and thoughts about AI up to about the year of
publication of his book, found under the title “The immortal mind:
Artificial Intelligence” on pages 143-171. In Noble’s account, the “powerful
experiences” surveyed in the above section are not a revelation but only a
triviality that accompanies the development of science and technology.
Nevertheless, he does not linger on the reason for the main phenomenon involved,
which he expresses as follows (p. 145, my italics):
Descartes’ peculiar obsession [with geometry and
arithmetic, closer to God; my note] became the principal philosophical
preoccupation for three centuries, as diverse thinkers sought to comprehend the
mechanisms of human understanding, the categories of reason, the phenomenology
of mind. Moreover, in the mid-nineteenth
century, mathematics became not just
a model for pure thinking but the means of describing the process of thought
itself.
But why did it
become just that? Thinking and feeling like the historian
he was, Noble does not dwell on a philosophical criticism of Descartes as
related to theology, or on why it would have prevailed. In other words, Noble’s
argument allows him to dispense with the expansion of his thought in the direction
of the philosophy of science, and with the painstaking work of placing Descartes in a
wider context as in Churchman’s DIS
(pp. 18, 22, 37, 61-62, 218, 263, 274, 276). It would also have spared him from developing
his thought further in the direction of Marxist thought, which, he does
not perceive or foresee, would in its lack of theological insight in
turn reinforce faith in, and commitment of capital to, technoscience in business
and industry. As when reading his concluding affirmative quotations from other
authors (p. 208):
The religion of technology, in the end, “rests on
extravagant hopes which are only meaningful in the context of transcendent
belief in a religious God, hopes for a total salvation which technology cannot
fulfill…By striving for the impossible, [we] run the risk of destroying the
good life that is possible.” Put simply, the technological pursuit of salvation
has become a threat to our survival…
[…]
Transcendence…means escape from the earth-bound and
the repetitive, climbing above the everyday. It means putting men on the moon
before feeding and housing the world’s poor…The revolutionary step would be to
bring men down to earth…But respite from our transcendent faith in the
religious machine requires that we “alter the ideological basis of the whole
system.” Such an undertaking demands defiance of the divine pretensions of the
few in the interest of securing the mortal necessities of the many, and
presupposes that we disabuse ourselves of our inherited other-worldly
propensities in order to embrace anew our one and only earthly existence.
Despite my admiration for Noble’s historical accounts,
his conclusions are a temptation for an understanding, despite it all, of
the motives for the USA’s McCarthyism or Second Red Scare. It can be seen
as a reenactment of the Inquisition and
its problems, except for Democracy (with a capital D) being seen as
a substitute for God, as explained by Tage Lindbom in his book The Myth of Democracy. The same
question reappears in Noble’s apparent “Marxist
cultural analysis” that would be concretized in, and lead to, Cultural
Marxism. What is
interesting for the argument here is not Noble’s “quoted conclusions” above,
but his gathering of the historical arguments that he witnessed during the
formative years of AI development. Among others:
The reduction of human thought to mathematical
representation made imaginable the mechanical simulation or replication of the
human thought process. For once, the mysteries of the immortal mind were
rendered transparent and comprehensible… The thinking person might then be
joined by the thinking machine modeled upon the patterns of human thought but
independent of the thinking person. [p. 148.]
Confronted by the limitations of mechanical analog
computers while overseeing the operations of MIT's Differential Analyzer, the
most advanced computation machine of its day, [Claude]
Shannon suggested speeding up and simplifying the system by
substituting electromagnetic relays for machined parts, using Boole’s binary
arithmetic to describe the electrical network. By using the Boolean system,
invented to describe the laws of thought, to describe the operation of electric
circuits, Shannon laid the groundwork for the electrical simulation
of thought – the foundation of electronic computers. [p. 149.]
With his minimalist definition of machine
intelligence, [Alan]
Turing had deftly sidestepped philosophical discussions
about the actual meaning of mind and thought; his materialist approach
dismissed at the outset any discussion of the existence of an autonomous mind
or a soul, which had preoccupied Descartes and Boole. (Turing had by this time
become an avowed atheist). [p. 151.]
Nearly all of the theoretical developments that made
possible the design of computers and the advance of Artificial Intelligence
stemmed from military-related experience. [p. 152.]
[Allen]
Newell and [Herbert]
Simon wrote: “The vagueness that has plagued the theory of
higher mental processes and other parts of psychology disappears when the
phenomena are described as programs.” [p. 155.]
The generally recognized father of what became A-Life
[Artificial Life] was the mathematician John
von Neumann, the “main scientific voice in the country’s nuclear
weapon establishment.” Toward the early end of his life, suffering from
terminal cancer, von Neumann earnestly devoted himself to weapons development,
advocating the use of nuclear weapons and favoring a preventive nuclear war. At
the same time, he began to ponder the fundamental logical similarities between
life and machines…[p.165.]
Buttressed by government funding and institutional
support, A-Life advocates shared with their Artificial Intelligence colleagues
an arrogant impatience with criticism. [p.169.]
In short, Noble purports to show how religious and dangerous fantasies have invaded the imagination of many of the pioneers of computer applications related to AI. The new (and last?) wave of AI/AGI hype in mass media, business, finance, education and research, which has inundated society and vainly sucked up its energy starting 2024 after the launch of ChatGPT and its competitors, can easily be explained as a resonance with the
“spiritual” but not religious, and
still less Christian historical aspects illustrated by Noble. Or then yes, they
are “religious” in the sense of the famous quotation attributed to Chesterton
that “When men stop believing in God they don't believe in nothing; they
believe in anything” [# 53]. The more so in anything that stimulates the human spirit, whatever it happens to be: thought, feeling, sensation or intuition, in disarray with the rest of the psyche. In order to appreciate such minds properly, without underestimating their human
dignity we may see them as particularly gifted in one specific aspect of their
personality, recalling the controversies and discussions of “Ashkenazi Jewish
Intelligence” (see e.g. here, here, here, here, and here). Regarding the
AI colleagues showing “arrogant impatience with criticism”, I may confirm or debunk this affirmation depending upon possible reactions to the present text of mine. Arrogance, leading to accusing critics of “psychologism”, is the psychological sign of what psychologists have indicated as “Ego inflation”, as mentioned above and below in my text, and more provokingly in other texts such as on Logic and Rape. It may also be seen as a sign of arrogance that a logically and mathematically gifted (“eclectic”) colleague once wrote to me about his unwillingness to be classified as being near some tradition or school of thought (e.g. “positivist”):
I've never liked labels either. To be placed in
compartments. I feel that there is a snobbery in this. A way to show one's
great learning by being able to label everything and everyone. I like the
rebellious Hungarian psychiatrist Thomas Szasz when he says that there is no
psychology or psychiatry, only biography and autobiography.
That is, a novel way for people who, against their will, may be classified as “positivists or logical empiricists” to accuse others of “psychologism”: that there is no psychology (and consequently, e.g., no Carl Jung). Never mind that the controversial and (notwithstanding his merits, philosophically and theologically) very problematic Szasz appears himself to contradict this attribution, according to an insert (accessed July 22, 2024) on the webpage of The Thomas S. Szasz, M.D. Cybercenter for Liberty and Responsibility:
Please note: Neither
Thomas S. Szasz, MD, nor Jeffrey A. Schaler, Ph.D., are
"anti-psychiatrists." We both believe in psychiatry between
consenting adults. We are opposed to institutional psychiatry and coercion. We
are not opposed to contractual or consensual psychoanalysis, psychotherapy, or
counseling, what have you, as long as the designated client is able to fire his
or her therapist at any time. Obviously we do not
consider drugs as medicine for behavior. If people want to take drugs to
control the way they think, feel, behave and perceive, by all means they should
be free to do so. See Szasz's important work: Antipsychiatry:
Quackery Squared
A paradox that confirms the ultimate failure of the human psyche dominated by Ego-inflation is what happens in the technical realm of (necessarily AI-related) Virtual Reality – VR. I have already struggled to expose, and refer to only by means of the two linked words that follow in this sentence, the mind-blowing “spiritual-philosophical” and phenomenology-related (Pierre Lévy) deconstruction, down to the phenomenon of death, of the concept not only of truth but also of “reality” itself, as I earlier suggested both in the context of “debate” and of theology. And this happens while modern secular man paradoxically appreciates and discusses fiction, yet finds absurd e.g. the Catholic teaching of transubstantiation. See an excerpt from a text that a doctoral student interested in Kansei philosophy and Kansei engineering for applications of VR sent to me in August 2023, calling my attention to this trend:
Kansei
tends to assert -- in line with traditional Buddhist and Shinto philosophy --
that the distinction between human subjective experience and the outside world
is illusory, and therefore inauthentic. Death seems incomprehensible, and
frightens us, because we cannot imagine a world in which we have no subjective
experience, even though it is ultimately illusory. Technology can be designed,
using a method known as kansei engineering,
to reduce, or even eliminate, a person's sense that his subjective experience
is distinct from the objective environment.
This failure of the human psyche also means the incapability of spiritualization, such as in mathematics and in biblical exegesis, in approaching ethical issues. This is exemplified in a program of the Swedish public television (SVT2) in the series “Idébyrån”, broadcast on May 2nd, 2024, with the title Ondskan i oss [The evil in us, available until Dec. 10, 2024], introduced with the questions:
Why do we behave wickedly? What drives us there? We
humans are constantly starting new wars, in Sweden children are used to murder
children, we hate each other online, and old people are cheated of their
savings. What drives us there?
A theologian, an author and a psychiatrist discussed this for half an hour, but nobody seriously adduced the role of religion in general or of Judeo-Christianity in particular. The theologian, Natalia Lantz, had a background in her doctoral dissertation, The Hypertemple in Mind: Experiencing Temple Space in Ezekiel, The Temple Scroll and Mishnah Middot, an approach that seems related to Barbara E. Mann’s Space and Place in Jewish Cultures (Rutgers UP, 2012), which analyses the historical meaning of space in Jewish communities in relation to contemporary critical theory. Lantz introduces her study as follows (the links are mine):
In this study I perform a theory driven
close reading of selected sections of Ezekiel (chs. 40-48), the Temple
Scroll (cols. 2/3-13:8, 29:3b-47:18) and Mishnah Middot, in order
to explore how the architectonic descriptions of the temple in these works may
have been used to create temple space in the minds of their immediate
audiences. I combine Critical Space Theory and narrative as virtual reality to
hypothesize the audiences’ immersion and interaction with temple space against
the background of these three different engagement contexts. […] I argue that
these architectural descriptions simulated a temple space that could be
experienced virtually, alike modern day computer
simulations that temporarily disrupt the conditions of the physical reality […]
That is: “…experienced virtually, alike modern-day computer simulations…”, which borrows a prestigious technical-spiritual interpretation, or a theology where a “spiritualization” of biblical exegesis may explain the unbelievably absent theological perspective in the Swedish television program on “The evil in us”, in the country often considered the most secularized in the world. The religion of technology? And it is this sort of academic neo-theology that, I show, may explain what happened in the Israel-Hamas war, where (the denial of) the Judeo-Christian God or devil appears to have been substituted by (the denial of) the Holocaust.
Autism, or “Mathematical/computer-oriented Minds”?
The following is a digression along the lines of the previous sections on cultural lobotomy, the human psyche, driving forces behind technology, and Ego-inflation behind powerful experiences. It deals with the kind of decisions, based on logic, “facts” and unconscious emotion, that in the future can be taken more easily with apparently undebatable ChatGPT-support, and without having an idea of what systems thinking is about. The same culture has also divinized relativity theory, quantum physics (“it works”), music, and luminaries such as the “divinized genius” Albert Einstein, and others such as Richard Wagner, all of them often with their own theological musings (here, here, here and here). There are
many logical-mathematical manipulators whose thought processes are described in the study of the foundations of mathematics by L.E.J. Brouwer. There are also information and computer geniuses, or more generally engineering geniuses, like the historical Vannevar Bush (whom I quote and comment upon elsewhere, without relating him, as I could, to the mention of Claude Shannon above), who could as well be characterized as particularly gifted humans, like great writers, musicians, dancers, poets, artists, sorcerers, or now computer wizards or "mental acrobats". They include those who are named “Tech bros” (cf. The Economist, July 17th, 2024), following dark etymological roots in the “Bro culture”, or other analytically-logically gifted pundits who can be indiscriminately confounded with the “mental fireworks” of smart, analytically gifted luminaries of the business world, such as Peter Thiel, or (in my view) reductionist psychologists such as Jonathan Haidt and his “moral foundations theory”. What sometimes is
opportunely ignored or disregarded are meaningful aspects if not troubles of
their personal lives. They can also have read the Bible very carefully, being
able to quote selected pieces of text that either support their own
interpretation of its message, or show perceived contradictions which
supposedly (logically) demonstrate its irrelevance (cf. “Even the Devil
Quotes Scripture”). They display capabilities that recall the “savant
syndrome”. It is worth noting that this latter denomination is a “politically correct” shortening of an earlier one, idiot savant, which can be misinterpreted when ignoring that etymologically idiot, as in the word idiographic, derives from the root idio-, with a dictionary meaning of “only one's own”: personal, separate, distinct, unique. It is also sometimes qualified, respectively in dictionary boxes on Google and in Wikipedia, as really meaning (if one does not deepen the meaning of “wisdom”):
"A person who has an exceptional aptitude in one
particular field, such as music or mathematics, despite having significant
impairment in other areas of intellectual or social functioning; or a person
who is extremely unworldly but displays natural wisdom and insight."
“Savant syndrome is a phenomenon where someone
demonstrates exceptional aptitude in one domain, such as art or mathematics,
despite social or intellectual impairment.”
It is important to remark: “Having significant impairment in other areas of intellectual or social functioning; or a person who is extremely unworldly…”. If we forget the further talk about a misunderstood “wisdom”, this may, simply or not so simply, mean arrogance or hubris in believing, and being believed, to be extremely “intelligent”. This to the point of pathological narcissism that is socially and politically supported by a society imbued with dependence upon, and admiration for, analytical-technological power. Or, to the point that a whole society keeps wondering about the advent of super-human computer intelligence and about the difference between humans and machines that have a “consciousness” of their own, having their own legal rights and possibly being loved, and loving, as humans do.
It is interesting to note the relation to lobotomy in
the case of the savant example of Kim Peek, who is
reported to have had a “condition in which the bundle of nerves that connects
the two hemispheres of the brain is missing”. I will propose another analogy to
the “cultural lobotomy” and “savant syndrome”: the likewise complex but very
modern term autism. It is qualified in Wikipedia as a neurodevelopmental disorder, soon argumentatively adjusted to be seen as a particular neurodevelopmental “order”, as in a broadcast program (March 12, 2024) at the Swedish public television SVT in a series on “Personality”, where both autism and ADHD were seen as human diversities, or just different capabilities. I see this language manipulation as postmodern relativism, as when dysfunction or disability tends to be renamed as a (functional or ability) diversity. All this while the American Psychological Association, APA, still defines dysfunction (updated April 19, 2018) as “any impairment, disturbance, or deficiency in behavior or operation”, and disability (updated November 15, 2023) as “a lasting physical or mental impairment that significantly
interferes with an individual’s ability to function in one or more central life
activities, such as self-care, ambulation, communication, social interaction,
sexual expression, or employment. For example, an individual with low vision or
blindness has a visual disability.” A more complex historical example of such a destiny than the above-mentioned Kim Peek is William James Sidis, an American child prodigy with exceptional mathematical and linguistic skills, born to Jewish emigrants from Ukraine, with an also exceptional father, Boris Sidis; William James lived a problematic, short life, dying in 1944, aged 46.
My point in this section will be to show how this sort of autism or diversity finds a legitimate expression in the captivating biography of the tragic life of the ultimate modern champion of logic, Kurt Gödel, by Stephen Budiansky: Journey to the Edge of Reason: The Life of Kurt Gödel (2021), reviewed in Swedish by Helena Granström in Axess, No. 6, 2021. Granström remarks that Gödel felt that his incompleteness theorems demonstrated the “human ability to sense the truth, even when it escapes the grasp of our formal methods”, thanks to a sort of intuition. This must be seen as coarse psychological speculation without a psychological theory, which is indeed subsumed in my discussion of Computers as embodied mathematics and logic.
But not all appears as outright autism or whatever. Consider a book by Paul Hoffman, Archimedes’ Revenge: The Joys and Perils of Mathematics, published in 1988 during one of the hype periods of artificial intelligence, before the hype was augmented with the neologism artificial general intelligence. The title itself refers to the frustrating “Archimedes’ cattle problem”. The point I want to make may also be enhanced by reading and meditating upon at least the Wikipedia account of a later book by Hoffman, with the revealing title The Man Who Loved Only Numbers, published in 1998.
Keeping to Hoffman’s first-mentioned book, it is interesting to note that, besides the understandable lack of reference to G. Leibniz in his extensive 11-page word index, he has the following kind of disclaimers, which position his understanding of mathematics and logic (pp. 2-4, 159):
Many books have been written about the philosophical
underpinnings of mathematics, about the extent to which it is the science of
certainty, in that its conclusions are logically unassailable. Many other works
have rhapsodized at length about the nature of infinity and the beauty of
higher dimensions. Such philosophical and poetic excursions have their place,
but they are far from the concerns of most working mathematicians. In this book
I give a glimpse of some of the things that mathematicians, pure and applied,
actually do. […]
With fundamental questions about number and shape still unsettled, it is no wonder that there is much disagreement and confusion about what the computer – a very complex mathematical tool – can and cannot do. I have tried to stay clear of mushy metaphysical issues about the nature of man and machine in favor of presenting what little is known about the theoretical limits on computing. […]
Much ink has been spilled in the philosophical
literature on what the absence of judgments means in terms of a machine’s
ability to think, but the pursuit of such intriguing speculations would take us
too far afield.
This does not prevent Hoffman from, deep in the book
(p. 168), speculating about using Boolean algebra in e.g. representing the
state of two people’s being friends and not being friends, irrespective of the
meaning and measurement of friendship. The problem is extended to his
discussion of democracy when stating that indeed, mathematics demonstrates the
theoretical futility of creating a perfectly democratic voting system, as illustrated in
USA politics (analysis by Zachary B. Wolf and Renée Rigdon on July 20, 2024 in CNN’s “What Matters” newsletter), even if disregarding its dependence
upon big interests - big donors’ money. He refers (pp. 5, 223) to the
“wrestling with the mechanics of setting up a democratic nation”, and refers to
the “Nobel prize-winning work of the American economist Kenneth Arrow” that shows
that achieving the ideals of a perfect democracy is a mathematical
impossibility. Never mind about the problem of Arrow’s
conception (“impossibility theorem”) indicated by e.g. West Churchman,
and about the
myth of democracy, while the attention is focused on its mathematics.
And Hoffman spends pages (e.g. 237f.) referring to books applying game theory
to conflicts in the Old Testament between God and human beings, including
game-theoretic implications of omniscience, omnipotence, immortality, and
incomprehensibility. Not to mention the pondering (pp. 144, 211) that “a machine state is like a mental state”, and the musing about the difference between electronic components in a computer and neurons in the human brain.
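Hoffman's example of reducing friendship to a Boolean value can be sketched as follows (a deliberately crude model, with names of my own invention; the reduction is exactly what is criticized above: the meaning and measurement of "friendship" vanish into a single bit):

```python
# A crude Boolean model of "friendship": each unordered pair of people
# is mapped to a single truth value, regardless of what friendship means.

friends: dict[frozenset[str], bool] = {
    frozenset({"Ann", "Bo"}): True,
    frozenset({"Bo", "Cy"}): False,
}

def are_friends(x: str, y: str) -> bool:
    # Pairs never recorded default to "not friends": the model forces
    # every human relation into one of exactly two states.
    return friends.get(frozenset({x, y}), False)

assert are_friends("Ann", "Bo") is True
assert are_friends("Ann", "Cy") is False   # unrecorded, hence "false"
```

The frozenset key makes the relation symmetric by construction, but nothing in the model can express degree, history, or meaning, which is the point at issue.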
All this shows how the hubris of artificial intelligence, even long before the success of today’s so-called artificial general intelligence, grew from the apparent self-imposed limitations of reasonable people who were certainly not mentally impaired, even if possibly living and thriving in a sub-culture. The problem, then, is not that they were autistic in a simple, raw medical-clinical sense. In analytical psychology there
is the conception of Psychological Types, which may be seen as vulgarized in the culturally and historically
uprooted Five-Factor Model of Personality. A better variant is popularized in the book Surrounded by Idiots, and others by the author and management consultant Thomas Erikson with
the DISC assessment
tools. In these popularizations of types, the question is of the interplay between variants of mathematical statistics, especially cluster analysis, in mathematical approaches to personality psychology, in oblivion of their “philosophical assumptions”. Otherwise, the
question in analytical psychology is of the interplay between thinking,
feeling, sensation and intuition in relation to consciousness and the
unconscious, and psychic development is the capacity to achieve a balance among
them in the so-called process of individuation. Such
conceptual constructions are not more absurd than the mathematical ones as in,
say, quantum
physics, if philosophical pragmatism is taken seriously. They
have a theological counterpart in the Bible’s famous 1 Corinthians 12:7ff:
“Now to
each one the manifestation of the Spirit is given for the common good. To one there is given through the Spirit a
message of wisdom, to another a message of knowledge by means of the
same Spirit. […]”
The cultural problem appears when a specific constellation of aptitudes is awarded improperly high value by society at large, as represented by industry (including the weapons industries), finance and academia in a secular technical-industrial culture developed after the scientific revolution. As in Wikipedia’s quotation from William Whewell’s Philosophy of the Inductive Sciences, it is (my italics) “the transition from an implicit trust in the internal powers of man's mind to a professed dependence upon external observation”. It is the external
observation of empirical experiments that are assumed to verify results of
mathematically conceived relations to earlier external observations. Society
and its politics begin to reward mainly thinking seen as equivalent to
mathematical-logical ability while mathematics and logic undergo changes in
themselves and in their functions, subservient to only indirect external observation. “Indirect” here means that the
observation is made on observable consequences of hypothesized mathematical
constructions built on more primitive and intuitive concrete observations. An insight into all this requires a deep understanding of the foundations of mathematics, which I touch upon in my Computers as embodied mathematics and logic. If the reader
needs here to get a mind-blowing feeling of this conceptual, more than physical, world, he/she can (try to) read about the improbable professional activity of the quantum (also informational) physicist and Nobel laureate Anton Zeilinger. A follower in the same tradition is the Swedish physicist Sören Holst in his paper on The Magical Quantum World (site translated by Google Translate), in which he unknowingly confirms Richard Stivers’ book on Technology as Magic. It is an activity which consequently evades informed
political democratic control, other than financial-commercial and military. And
this can be seen as inflating the self-confidence if not outright arrogance
(“ego inflation”) of seduced mathematically and logically gifted scientists who
are overvalued by “democratically” selected political leaders and managers who
strive, or rather hope, for profitability and economic-industrial-military
power (cf. “it works”) in a society in a cultural crisis being driven by the
Western world.
An important additional aspect of the computerization
and implementation of AI-AGI related to the prestige of mathematical
computer-oriented minds is that, as I write in my essay on The Meaning of Human-Computer Interaction, the whole can be seen as
a non-reversible social experiment that seems to diffuse
a sort of “dementia”. It incapacitates the daily life of an increasing
part of the population, especially elders who will be “promoted” to being considered mentally handicapped in their “human-computer interaction” at a gradually earlier age, and made dependent upon
the routine help by children and grandchildren. Another way to see it is to
observe that an increasing part of the population will not “serve” the other
part, but will serve itself by means of (laborious instructions learned about)
tools and machines that this decreasing other part designs and manufactures by
means of other machines that are designed and manufactured by themselves. And
nobody is able or willing to determine the net amount of unemployment resulting
from the process (see among all the other references to “unemployment” in the
present text), which can be illustrated by the case of “self-checkouts” combined with international article numbers, resulting in fewer “others-serving” cashiers and personnel.
For the rest I recommend a reading of the Wikipedia
account of the famous theoretical physicist Robert Oppenheimer, “father of the
atomic bomb”, especially the section on his “final
years”, beyond his plunging into “mysticism” and references
to literary-religious works like the Hindu sacred text Bhagavad Gita, while his “scientific brother” Wolfgang Pauli, in a complex relationship with Carl Jung, tried to unravel
meanings of quantum physics, a relationship that I myself try to unravel in my
text on Quantum physics, computers & psychology.
Antinomy? – Non-mathematical/computer-oriented minds
The following is an expansion of an excerpt from my
essay on Computers as embodied
mathematics and logic, which is presented later here below, and
recommended to be read. In it the analytical psychologist and author Carl Jung testifies to his difficulties in accepting some ground statements of equality in mathematics, which could be interpreted as stupidity or dementia.
What is not easy to understand is that the mind of, especially, an inward, hearty “psychologist” perceives a problem here in that, for instance, “Two objects that are not equal are said to be distinct” and “The truth of an equality depends on an interpretation of its members”. The problem arises in that no two human beings are ever equal, nor indeed are even two (“physical”) objects “equal”, depending upon the meaning of physics, of object, and of equality. For an introduction to going deeper into the question, including what an “object” is, the whole work of the philosopher F.S.C. Northrop may be helpful, and not only his The Logic of the Sciences and the Humanities, which was an inspiration for my dissertation on quality-control of information.
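That “the truth of an equality depends on an interpretation of its members” can be illustrated even inside a programming language (a Python sketch of my own, not from the cited sources): two objects may be “equal” under one interpretation (value) and distinct under another (identity).

```python
# Two lists with the same contents: "equal" under the interpretation
# of equality as sameness of value...
a = [1, 2, 3]
b = [1, 2, 3]
assert a == b          # value equality holds

# ...yet distinct under the interpretation of equality as identity:
assert a is not b      # they are two separate objects in memory
assert id(a) != id(b)

# Mutating one leaves the other untouched, confirming their distinctness.
a.append(4)
assert a != b
```

The language itself thus keeps two notions of equality apart, which is precisely what gets lost when distinct human beings are treated as interchangeable “equal” objects.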
The easiest
way of catching the complexity of all this, I think, is to understand the
problem of the societal introduction of personal identification number – PIN,
that I survey in my paper on The meaning of human-computer interaction and cannot reproduce here. It is meaningful to
remember that PINs or camp serial numbers were
tattooed on the skin of prisoners in concentration camps during the Second World War. What is not easily understood is that PINs (assigned by whom?) are a first step in facilitating a disintegration of the conception of a human being, through an identification that equalizes human objects (not subjects) represented by numbers, in groups that are in turn represented by numbers of elements assigned (by whom?) to each group. It is possible that it is this kind of insight that lies behind the fact (accounted for here below in the present text) that the psychologist Carl
Jung, whose particular mind was absorbed by the “subject” rather than “object”
of the individual human being, stands
as a historical typical “non-mathematical” mind. It may also be the core of the
psycho-social resistance against the introduction of personal identification numbers - PINs as evidenced in the USA in the societal
debate about the historical report on Records, Computers and the
Rights of Citizens, when the earlier bases of identification were date and place of birth, names of the parents, and possibly dwelling address. Today one may discuss the ethics of the Israel-Gaza war in terms of numbers, say 1,400 killed Israelis vs. 41,000 killed Palestinians as of September 2024.
It is this approach that seems to make it natural for us today to find it reasonable to see millions of people as numbers in HCI, and to launch atomic bombs on Hiroshima and Nagasaki (but not yet on Ukraine or Russia) with an estimated total of about 150,000 civilian deaths, compared to fewer than 20,000 Allied soldiers who would otherwise have died. And this is repeated today, I claim, in the Israel-Hamas war, where the Biblical injunction of “eye for eye, tooth for tooth” of Exodus 21:23-25 is, with the help of technology, substituted by “30 eyes for an eye and 30 teeth for a tooth”, or 16,000 children for 100 children. Numbers of objects
can also be stuff for what I see as semantic statistical exercises, as when PM Netanyahu, in his talk to the US Congress as recorded in The Times of Israel, explains that
“Three thousand Hamas terrorists stormed into
Israel. They butchered 1,200 people from 41 countries, including 39 Americans.
Proportionately, compared to our population size, that’s like 20 [twenty] 9/11s in one day”.
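The “proportional” claim is ordinary arithmetic and can be checked; the figures below are rough assumptions of mine (Israel about 9.7 million people, the USA about 335 million, about 2,977 killed on 9/11), inserted only to show how such rhetorical scalings are computed:

```python
# Rough check of a population-scaled casualty comparison.
# All figures are approximate assumptions, for illustration only.
israel_pop = 9_700_000
us_pop = 335_000_000
killed = 1_200
deaths_911 = 2_977

# Scale the death toll to the USA's population size...
scaled = killed * us_pop / israel_pop
# ...and express it as a multiple of the 9/11 toll.
multiple = scaled / deaths_911

assert 13 < multiple < 15   # roughly fourteen on these assumptions
```

On these assumptions the scaling yields roughly fourteen 9/11s rather than twenty, which illustrates why such comparisons are called semantic statistical exercises here: the result depends entirely on which figures one plugs in.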
What is not perceived in this kind of numerical “rational analyses” may be a tragic analogy to the pride of a whole world in a gigantic rational piece of naval engineering like the “Titanic”: a whole constructed, navigating world that does not foresee its meeting with the iceberg, as many today see climate warming, if not a third nuclear world war.
All this is in turn part of “disinformation” campaigns that rhetorically I claim “do not exist”, such as “Netanyahu’s claims before the US Congress: Facts or falsehoods?”, and “Fact-checking Israeli Prime Minister Benjamin Netanyahu’s address to Congress”.
The meeting between the two different minds
It is easy to not perceive that this may be the core
of the whole present text, above and below, with the difference that the humans
meeting AGI are meeting an inanimate object that embodies mathematics and
logic, which in turn “animates” those humans who interact with the “machine”.
All this raises questions that are confusing and may misdirect the present
ongoing thoughts as they relate to several other papers of mine with different
attack points, some of them already mentioned such as Computers as embodied mathematics and logic, The meaning of man-machine
interaction, Computerization as abuse of formal science, Information and
Debate, Disinformation as a myth, Logic as rape, Information on: Christianism
or atheism, all of them found in my archived
collection. Thinking about all this amounts to thinking about “everything”, while the question approaches the psychological problem that motivated Carl Jung to “meet” his senior colleague, the psychologist Sigmund Freud, by developing the idea of Psychological Types.
Keeping close to this last-mentioned approach, I wish to limit myself to reporting some personal experience in my mail dialog with an extremely mathematically gifted and widely well-read colleague, whose spontaneous “lectures” to me on e.g. quantum physics raised my admiration as in meeting a musical child prodigy, such as in arranged YouTube flash-mob or smart-mob “improvisations” (example here, others here). The dialog
proceeded up to the point that I perceived in him a dose of sentimentality that
could be an unconscious reaction to an extreme formal mathematical-logical
rationality such as found in the fields of quantum physics, computers and AI,
tempting me to apply amateurish Jungian analysis. It could resemble a
reciprocal amateurish psychoanalysis with phenomena
of transference and countertransference, which also
recall the phenomenon of video game
addiction. I found that in the best case it could turn into a
phenomenon catalogued as paragraph number
213 in my reference (in Information
and Debate) to the classic The Art of
Worldly Wisdom.
213 - Know how to Contradict: "An
affected doubt is the subtlest picklock that curiosity can use to find out what
it wants to know. Also in learning it is a subtle plan
of the pupil to contradict the master, who thereupon takes pains to explain the
truth more thoroughly and with more force, so that a moderate contradiction
produces complete instruction."
As I expressed it in the interaction with my
colleague, however, this required my own evaluation of the reciprocal
“existential” needs. It begins to resemble a professional psychotherapeutic
relation or ultimately the situation following from a formal Christian marriage
where moral emotional commitment requires a lifelong faithful dialog and
alternating roles of “teacher and pupil” expressing reciprocal love. Such an
approach is related to the whole question of Reason and Gender. It all can also
be valid regarding a meaningful relation between genuine friends (cf. here and here) or between the
author and readers of a serious text.
In this context of a meeting between two different minds, but within one and the same person, I will also mention an exceptional piece of writing by a colleague whom I met around the year 1980, while I was still assistant professor at the dept. of mathematics and later computer science (in Sweden also “datalogi”) at Linköping university. This colleague, named Anders Beckman, with whom I lost contact after he left the university, wrote a research report of 26 pages, dated August 1980, with the title Varför jag inte kan vara datalog: en diskussion av värderingar [Why I can't be a computer scientist: a discussion of values; catalogued as LiTH-MAT-R-80-40, ISSN 0348-2960]. He struggles throughout the whole paper with what he could not name except as “values”, and which I have identified as the relation between formal science and the rest, which is now being forgotten with the hype of AI, and barely identifiable in papers such as (in Swedish) Per Flensburg’s Den stora tankestriden inom informatik [The great battle of ideas in informatics; see also here].
On page 22 he writes something that has been symptomatic of many students of mine, namely the difficulty of grasping the
meaning of West Churchman’s epochal book on the Design of Inquiring Systems. In mentioning that the researcher’s
personality influences his activity as researcher he writes (my translation and
italics):
For example, I have had difficulty reading C. West
Churchman, without being able to give any
reasonable explanation. However, I have gradually realized that Churchman
and I have completely different values on issues that are important to me, such
as gender roles and related questions about human value, and that these values are
noticeable in his work. Being aware of this allows me to decide whether the
differences in values affect my assessment of the value of the work.
I perceive a sort of anti-climax in the obscure conclusion which, whether or not it was just a peculiarity in the constitution of Beckman’s attitude, may have contributed to the drama of his simply leaving both the discipline and the university. It can be seen as a sort of “academic tragedy” in the sense that the deep thinking of a promising computer scientist is so honest to truth and ethics as to change the whole orientation of his future life. In
fact, as I see it, he had met and sensed the challenge of overcoming the whole
problem of the foundations and meaning of logic and mathematics, and of the
orientation of modern mathematized science and technology as related to philosophy
(where he gets submerged in M. Heidegger, cf. further on
theology in my Information and theology).
To a lesser degree the same phenomenon is described by
a former student of mine and later professor Erik Stolterman who in
his sort of blog on “interaction design, HCI, philosophy of design and
technology & society”, in October 2012 wrote:
One of the books that has had the most influence on me is "The
Design of Inquiring Systems: Basic Concepts of Systems and
Organizations" by C. West Churchman. The book was published in 1971. I
probably got my copy in the early 80s. The reason why I read the book at that
time was not primarily because I wanted to, but because my teacher at that
time, Kristo Ivanov, who would later become my PhD adviser was a big fan of
Churchman. […] Thinking
back on the time when I read the book and also met Churchman and heard him talk
about his ideas, I am quite sure that I did not really understand what the book
was about. Actually, now, even without reading it, just by looking at the List
of Content I am quite sure I understand the book much better than ever before.
[…] The book was so difficult at the time when I read it. At the same time I loved to read it. I am not sure I
really understood it at all. Right now, I am so looking forward to read it
again, now that my old copy has magically surfaced.
I perceive this also as a sort of anti-climax with an obscure conclusion, arising from the meeting of two or more different minds in the same person, about the use of the lessons of the book. One can easily guess what this drama, driven by big fans of “design” (which I foresaw early, as expressed in a paper on The Systems approach to Design), consciously or unconsciously means in the psyche of people, and what it leads to for those who are or will be obliged to develop or mostly use human-computer interaction, including AI, in their daily work. In particular, it also means an inability to connect the problem with philosophy, and ultimately theology. This happens, for instance, when in a whole chapter of the book The Design Way: Intentional Change in an Unpredictable World the authors borrow Churchman’s use, in his book The Systems Approach and its Enemies (1979, p. 98), of the playful acronym GOD standing for the “Guarantor of Design”, while in that book and in its earlier DIS (cf. above) Churchman had mentioned and discussed God and guarantor dozens of times without acronyms. The acronym then turns out to be a way of diverting thoughts about the unforeseeable results of our development and use of technology, all through a sort of alluring aestheticism and divinization of the “Designer”, who often is the very same reader of the book on the design way, and to whom are applied the attributes that have philosophically characterized Genius. All this goes on, unheeded, so as to avoid discussion of what overhyped intelligence, as related to wisdom and theology, is or should be to begin with, as I suggest in my paper on the digitalization of society.
It is, however, important to note what is not so evident in secularized western countries, exemplified by Sweden: that theology as related to the analytic psychology of wisdom is not only a matter of pious faith, but rather an arguable basis for all ethics, including the professional ethics of all the geniuses or designers, financiers, manufacturers, sellers, profiteers and users of (if not those affected by) their products. This is the reason for my toil in expressing related thoughts in papers on information and theology, applying them to the cases of disinformation, personal conscience, and information on the wars in Ukraine and Gaza.
It is this misunderstanding of the basis of ethics that, for instance, leads a young Danish feminist, famous for her activism motivated by her being a victim of revenge porn, to re-discover after a long hospital stay the dictionary meaning of economism while ignoring its philosophical meaning, all because of her ignoring the historical philosophical debate on utilitarianism. If she paid attention to Christianity, she and her enthusiastic readers would recognize in her wish (expressed e.g. in an interview on Swedish radio on 16 and 18 May 2025) for “giving priority to our relations to other human beings” the concept of love that is the core of the forgotten Christian doctrine. A further proof that this is not understood is the announced research on substituting AI care for human terminal care in hospitals (and homes?), in order to improve their economy in the face of the cost and scarcity of manpower.
Artificially intelligent artistic production?
Extending the question of powerful experiences, we come to the problem of how to frame the question of AI/AGI influence and use in artistic production, where art stands for the production of powerful experiences and more besides. In the cultural history of the Western world and its philosophy, art has been powerful in the sense that Kant and the post-Kantians related it to Kant’s synthesis in the Third Critique, or Critique of Judgement, which I have considered in an earlier essay on Computerization as abuse of formal science. In our context the matter has already been noted in the cultural world, and even mentioned above, but here the discussion will be inspired by an article titled (my trans.) “Artificial music is the future’s melody”, published in the major morning newspaper Svenska Dagbladet (30 December 2023) and written by Björn Ulvaeus, presented as a Swedish musician, singer, songwriter, and producer best known as a member of the musical group ABBA. The hype around this issue is illustrated also by the fact that the following day another author, Ebba Bonde, wrote an article dealing with the AI-manipulation of pictures/images, carrying, however, the typically noncommittal and advertising-like title “The use of AI is as much a danger as an opportunity”.
Ulvaeus starts by acknowledging that we all are
influenced by what we have listened to in our lives, that he does not know
where his inspiration comes from, and he expresses a deep gratitude for the
miraculous mystery of music’s mere existence, without referring to his
otherwise well-publicized atheism (that prevents thanking God). He assures the
readers, however, that “we do not need to feel shame for having difficulty with the definition of what music is”. He notes that sounds have a remarkable ability to influence our “feelings” and to penetrate into our “unconscious”. In this way he proceeds without feeling shame for having difficulty with definitions, this time not only of music but of feelings or the unconscious, which happen to have been central matters of struggle for theologians, philosophers and psychologists for centuries. Giving up these struggles, the ongoing drive for the computerization of society, focused here on AI, is combined with an aestheticizing post-modern
abuse of Kantian aesthetics that has been renamed DESIGN. In this perspective Ulvaeus, as musician and songwriter, is a designer. It all unconsciously follows in the steps of the likewise abused post-Kantians Nietzsche and the often academically cited, if not name-dropped, Heidegger. I have written about this in the essay on Computerization as abuse of formal science, under the alternative earlier title of “computerization as design of logic acrobatics”. It now deals with the acrobatic use of AI, whose users admire the circus acrobatics of computer scientists turned craftsmen.
Therefore, the text proceeds swiftly to state that up to now the songs that we love have come from humans who have dug deep into their psyches, experiences, sorrows, triumphs, fears, losses and endeavors. “But now has appeared a new type of composer, eager to learn by means of “deep learning…with a neural network that loosely resembles the human’s, but not yet.” Ulvaeus tells of being introduced to Google DeepMind’s “Music AI incubator”, which he felt was an overwhelming experience that
seems to have made him mainly if not only aware of the importance of finding
solutions for protection in “AI-training” or “prompting” from infringements of
artistic copyrights related to sound, style, genre, voice, instrumentation,
melody and lyrics.
But the reader of his article senses some sort of
insight in what follows:
Songwriting is an intuitive and deeply human activity,
and the big question is whether an AI model can create something that reflects
the humanity behind a song, and whether that actually matters to the listener.
Is there a depth to what a human can create that an AI cannot replicate? If so,
can we appreciate that depth? Do we care? Or will our brains fill in the blanks
and make us think we're hearing something that isn't there? - Musical
experience is subjective. And isn't the only thing that matters what comes out
of the speakers and the effect it has on the listener? Whether it was created
by AI or a human? Or?
[...]
The Abba story made our "Voyage"
avatar concert in London possible. It is a human story of hard work,
triumph, disappointment, joy, love and sorrow. [...] It is so organic and full
of contradictions and crossroads where we could have taken a different path
that I cannot help but cling, perhaps vainly, to the hope that it could not
have been artificially created.
[...]
If AI, through the lens of language, can learn
everything a human can know about the innermost nature of the world? Can the
creative process itself be replicated by AI? The process by which ideas born of
curiosity and imagination are realized and push the boundaries of the human art
world.
[...]
Will AI help us create great new work or make us
obsolete? Personally, I think we will learn to coexist, as we have always done
with other technological inventions in the world of music.
To start from the end: “we will learn to coexist”? So it is not AI that will teach us to do it, by structuring our “relationships” through loving each other? Just as I claim that it will not write my essay on the Russia-NATO-Ukraine conflict? Nor will it make labor unions superfluous by teaching “us” what to do about conflicts of interest, greed, hate and unemployment? And Ulvaeus refers to the supposedly prophetic Yuval Noah Harari, who claims that AI has “hacked” the operating system of human civilization. So much for prophecy and for knowledge about
operating systems, as it transpires in Harari’s Can mindfulness save us from the menace
of artificial intelligence? An unthought prayer to an imaginary God apparently
cannot compete with “mindfulness”, since it is reported that Harari is “derisive about the great religions having
anything to offer” and
“to
the best of our scientific knowledge, all these sacred texts were written by
imaginative Homo sapiens. They are just stories invented by our ancestors in
order to legitimize social norms and political structures.”
That is, “to the best of
his knowledge”, which also fits what has been called Sweden’s official religion
of atheism, also practiced in a Swedish
philosopher’s book
on so-called fake news (while ignoring The Myth of Democracy), which are related to the consequences of AI/AGI. Harari’s
“mindfulness” should promisingly have meant that “we better understand our minds before the algorithms make our minds up for us”, but it seems either to ignore or to counter everything that the most mindful students of the mind have come up with, beginning with the analytical psychology that e.g. Jordan Peterson adduces, as I myself have done in my writings. The problem, however, is defectively posed to begin with, since it does not address the question of the meaning and function of art as necessarily done in philosophy, as suggested in my essay on the influence of Immanuel Kant on the conception of societal computerization as an abuse of formal science. And regarding music, Ulvaeus’ understanding misses the primary meaning of art as human communication, communication between humans closely related to love, whose corruption, becoming visible in this discussion of AI in art, is surveyed in my preface to Reason and Gender. Not to mention the missed history and meaning of music
in religion and theology as indicated in my essay on Information and Theology.
Concerning “avatars” as in avatar concerts it is possible to identify the same tendencies of
dehumanization of human relationships as outlined about 2D-Lovers in “The
technological turn” of my Reason
and Gender.
For the rest, I claim that the above approach to artificially intelligent artistic production exemplifies the same dehumanization process that I noted above in the context of the hyped Turing Test, about which I remark that the less one understands and feels what a human being is, the more he will be equated with a machine, an avatar, an ant or whatever, as in the Holocaust. The decline of culture and civilization, coupled with an undervaluation of philosophy and theology, will necessarily become visible in the discussion of more abstract issues such as art. Consequently one will be able to understand neither what Plato writes about the effects and importance of music (Republic, 398b-403c, but esp. 424b-c) nor music’s role and effect in the paganism, atheism, and future of our youth.
More generally, one will not be able to understand the meaning of the difference between an original work of art and a copy of it, which stands at the heart of (the infringement of) copyright. Original means that something is (problematically established to be) not consciously and openly influenced by, or rather not generally recognized as similar to, a prior human creation. In terms of analytical psychology, however, two or more works of art can be influenced or inspired by an archetype that belongs to the collective unconscious and as such is legitimately common to humans. This problematizes the very concept of copyright at a more basic level than the erudite and politically oriented famous work of Walter Benjamin, The work of art in the age of mechanical reproduction. It highlights the economic-political core of the creation of copyright, which can also be questioned in spiritual terms, as it was
creation of copyright, while it can be questioned in spiritual terms as it was
in earlier times e.g. among musicians who freely “borrowed” and lent from each
other and from themselves, justifying it by seeing that their own imagination
and creativity was a gift from God. Or, as I once heard the world-famous tenor Plácido Domingo respond in a broadcast interview to somebody who praised his voice: it was not his own merit but a loan from God, who could at any time take it back.
I myself come to think of my own indignation and feeling of revolt at the fact that my mentor West Churchman’s above-mentioned book The Design of Inquiring Systems (1971) is not being reprinted and is not made available on the Internet, while a direct request for it had not even been answered (as of January 2024) by the owners of its copyright. It is an eloquent example of the unethical disinterest in disclosing truth, while also negating the supposedly “holy” democratic right of freedom of expression, all under the silent mantle of narrow political-economic (dis)interest.
Return to The Design of Inquiring Systems
Well, the definition of wisdom is controversial. By way of summary: it is all about chapters 2 and 5 of part 1 of DIS, while ignoring or not understanding the whole of its 16 chapters across both parts 1 and 2 of the book. In particular, it is worth mentioning that in AI and AGI the emphasis is on chapter 2, the building of fact nets, and not on chapter 5, since the AI community is not specifically designing and performing scientific experiments and creating or establishing facts, but only selecting them, in oblivion of the intellectual challenge illustrated in Steven Shapin's famous A Social History of Truth. This is a phenomenon that I observed during many years of teaching at universities: most readers understand the book up to chapter 4 or chapter 5, which discusses empiricism, but later they seem to get gradually lost, up to the point of confessing that they cannot see the necessity and meaning of the second part of the book with the last 7 of its 16 chapters (where the references to Carl Jung and analytical psychology appear). For many the final result is that when discussing the book’s content, they testify that yes, they have read it, and that they already “know” it, assuming that it does not influence their pre-conceptions on the AI-issues, or their conclusions.
This is not to “divinize” West Churchman who, in my opinion, was limited by his need to survive academically by fitting into USA academia (also here), and for reasons of the problematic relation between science, theology and politics that I survey in my text on Information and Theology. I think that he basically did it through his repeated reliance on the academically divinized philosopher Immanuel Kant, whom I comment upon in my essay on the ongoing computerization of society. Kant also occupies an important place in Churchman’s latest major book, The Systems Approach and its Enemies, in chap. 4 on “Logic: A Theory of Reality, or Kant for Planners”. In my view, Kant, along with his numerous followers such as Jürgen Habermas, diverted the development of many scholars, including one of Churchman’s most knowledgeable and faithful students, Werner Ulrich who, by the way, authored the best biographical documentation of his thesis advisor, accounted for in the summary of Churchman’s influence on me.
On the basis of the above I wish for the time being to emphasize one basic conclusion. The possible, if not probable, result of the use of AGI is that it will suffocate debate by cementing the past and what (and where) has been written or said up to now, in terms of the choice or availability of data (measured and created, or chosen by, and available to, or affordable for, whom?), of an imposed way of thinking (as illustrated in my Logic as Rape), and of an affordable technological base. An alternative, simple formulation, found in an article in The Economist (Nov 9th 2023) in the context of the Hollywood actors’ strike reflecting a fear of the technology, is:
The process of ingesting everything and then spitting
out an average may lead to a stylistic and conceptual regression to the mean,
says a literary agent, who sees similarities with the algorithms on social
media that help propagate the most pedestrian views.
AGI may work for
contexts of basic natural science isolated from human interaction, as is the
case of established classical physics such as in astronomy (but even then, not
necessarily in quantum
physics). It will cement what has existed up to now around us
or around those who supply or have control over the environment that has
produced the data for unknown uses, and have control over the equipment. It
will even include logic manipulations (deductions and inductions) in e.g. large
language models LLM (soon "very large language models" VLLM?) of what
is electronically recorded as having been said, written or heard up to a
certain moment about "something" with no agreed-upon definitions, and
finally selected by unknown GPT "trainers" for known or indeterminate
uses. It will keep lots of people busy in the difficult task of rebutting what has been easily said or written by anybody with the help of AGI-GPT. It may be seen as a loudspeaker or broadcaster of it all to all of us, or to selected audiences. All this is coupled with the difficulties, if not impossibility, of genuine debate, as I try to survey in a text on Information and Debate. Against the teachings of the most serious research on the history and sociology of science, one is neglecting every insight into the fact that most ongoing scientific work is based on confidence in the truthfulness of others who from now on will not be known, as masterfully illustrated by the earlier mentioned Steven Shapin in A Social History of Truth. It is
an image that revives the mind-blowing concerns raised already in 1967 by Russell
Ackoff’s article on Management
Misinformation Systems, which also clarifies the pitfalls of offering
ready-made "recipes" on how to solve known problems using known
methods:
Five assumptions commonly made by
designers of management information systems are identified. It is argued that
these are not justified in many (if not most) cases and hence lead to major
deficiencies in the resulting systems. These assumptions are: (1) the critical
deficiency under which most managers operate is the lack of relevant
information, (2) the manager needs the information he wants, (3) if a manager
has the information he needs his decision making will improve, (4) better
communication between managers improves organizational performance, and (5) a
manager does not have to understand how his information system works, only how
to use it. To overcome these assumptions and the deficiencies which result from
them, a management information system should be imbedded in a management
control system.
DIS deals precisely with what is or should be management, control and system. To understand the difficulty of understanding DIS it is helpful to consider how it is ignored in an apparently “encyclopedic” review by K. Lyytinen and V. Grover, “Management Misinformation Systems: A Time to Revisit?” (Journal of the Association for Information Systems, 2017, vol. 18, no. 3), who rely on ad-hoc terms including (in our context) the tautological “intelligence augmentation”, as stated in:
“We
identify significant shifts in research on decision making including the role
of abduction, data layering and options, and intelligence augmentation.”
In terms of DIS, AGI, which in the best case may work as a profitable "user manual" of AI, implies increased dependence upon available data that unknowingly may be fake data, misinformation and disinformation, and upon computer technology, including dependence upon those who own and can afford the data and technology. In other words: DIS chapters 2 and 4 on Leibnizian Inquiring Systems: Fact Nets, and chapters 3 and 5 on Lockean Inquiring Systems: Consensus, the latter probably being the last chapter understood by most readers and the coupling to their blind reliance upon the “myth of democracy“ (more on this myth below and in my essay on the Russia-Ukraine conflict). This will be to the detriment of, in part, chapter 6 on Kantian Inquiring Systems: Representations, which is the dogma of most contemporary philosophers; chapters 7 and 8 on Hegelian Inquiring Systems: Dialectic, the illustrative chapter 8 being usually understood and liked by most labor unions, Marxists and leftists but never by AI/AGI developers, whose products often should but are never allowed to produce conflicting stories or explanations; and chapter 9 on Singerian Inquiring Systems: Measurement and Progress, unconsciously accepted if not yet understood in detail by most scientists. Add to this the whole of part 2 with its last 7 chapters, starting with chapter 10 on the three basic models or imageries of inquiring systems. AGI implies emphasis on Democritean Imagery or Mechanism, and on Carneadean Imagery: Probability (Statistics), to the detriment of Aristotelian Imagery: Teleology or goals, which is already applied earlier in chapter 3 on the teleology of The Anatomy of Goal Seeking, and which I would outright relate to Aquinian theology with its consequent teleology and (analytical) psychology. The rest of part 2 introduces the problem of implementation, recalling the question of The Researcher and the Manager, plus the question of religion, theology, psychology and faith, the challenges to teleology, and the dimensions of inquiry that are developed in the next published book, The Systems Approach and its Enemies.
In still other words: it is the problem that I try to approach in my earlier essays on the computerization of society and human-computer interaction. One of the latest suggestive reports is the case of an insurance company that trained advanced text chatbots to initially reject the insured clients’ written requests for compensation based on their particular insurance policy. It is thought-provoking to imagine what this way of thinking means for the medical safety of patients who in the future will be served by commercial medical AGI chatbots. This trend has already started and has been commented upon in Is ChatGPT Smarter than a Primary Care Physician?, without any visible understanding of what the basic problem is, besides moral and juridical “responsibility”, beyond an initial comparison between the performance of ChatGPT and human clinical practice. Or, in a medical report on This Drug works,
but Wait till You Hear What’s in it, without an
understanding of the difference between statistics and a scientific theoretical
“Why”, as explained by Churchman in the book Prediction and
Optimal Decision (chap. 6 on “Objective Probability” relative to a statistical population or group). I know of statisticians who practically made their living by writing physicians’ doctoral theses, since such theses consisted mainly of statistical work on their available data. This is happening as medicine tends to become more and more a result of statistics on big data, and correlation a substitute for problematic causation, when the concept of cause itself is put in question, as in advanced quantum physics and the philosophy of science. The whole is further complicated by physicians themselves, e.g. in health care centers, giving in to smart AI-sellers’ offers of “amplification” or computer “support” (which I comment on in the context of human-computer interaction). They will delude themselves that they retain a responsibility that in fact can no longer be their own, being simply victims of the good old Hawthorne effect, seduced as they are by being the focus of attention and by the novelty of being research subjects of fashionable AI. All this while the first symptoms of AI-abuse in health care are being reported (in Medscape, November 20, 2023) in Accused of Using AI to Skirt Doctor’s Orders, Deny Claims, followed by the expected uncritical hype-noise, as in AI Tech Ready to Transform Doctors’ Working Lives, or by less uncritical hype-noise, which will suck a few critics’ and many readers’ energy into sterile reading, analyses and rebuttals.
It is obvious that in today’s scientific and cultural milieu readers of these lines feel frustration in the face of the perceived “enormous, impossible” task of understanding all the above references as related to the presently “dogmatic” scientific and technical bases of western civilization. I claim that this stands at the heart of what I sense as being a western cultural crisis: the crisis of understanding the bases of our own western culture, in analogy to the task of understanding Christianity as compared to, say, Hinduism, Islam or Confucianism. Nevertheless, there are people who do not shy away from such challenges. An example is the Swedish intellectual Ellen Key who, in launching in Sweden a sort of “enlightened feminism”, did not shy away from embarking on a study of Theosophy related to Buddhism and Anthroposophy. Other ambitious western intellectuals may dwell also on Hinduism and “spirituality” (a for me mind-blowing example is found in Jan Olof Bengtsson’s analysis), challenging Carl Jung’s warnings (but cf. Harold Coward’s Jung and Eastern Thought) against a westerner who barely understands his own Christian culture hoping for a simplified rational-logical understanding of other complex cultures. I cannot allow myself to dwell further on this issue here, except for directing the reader to only one of the relevant places in Jung’s collected works, vol. 11, Psychology and Religion: West and East (CW11, §771 ff.). Otherwise, the mirage of “spirituality” sinks into logic, mathematics, computers, and ultimately AGI. Or it sinks into the morass of a logical structuring of “spirituality”, as in the “New Age” counterculture of the 1970s exemplified by “religious movements” like Adidam (considered in Bengtsson’s analysis), related to the names of Franklin Jones and, further, Ken Wilber. A final split of the western mind into a “theory of everything”, prior to a third world war?
I also heard of a student who meets problems analogous to the just-mentioned reference to Theosophy and Buddhism when envisioning critical hospital care with computer-aided dying of patients, through the study of the relationship between Heidegger, death, and technology in Japanese philosophy and engineering. More specifically, such things are the focus of a field known as kansei engineering, which accounts for "emotions, affects, sensibility or perception” (whatever they happen to be and however they interrelate) in line with traditional Buddhist and Shinto philosophy. All this while in fact ignoring the whole body of analytical psychology, as I explain in the above-mentioned criticism of Key’s enlightened feminism. And all this while the AI industry may also explore the market of combining “adult” robotic products, like “real dolls”, with AI ideas like conversational AI platforms and advanced robotics.
I do not assume that e.g. feminism is a much more complex phenomenon than the ongoing computerization of society. But I have already tried to explain (cf. above) why writing about and discussing serious matters is in general a hopeless task today, regardless of the paradox of my writing this down just now, thereby repeating the error committed by Churchman when writing and publishing the DIS to which I referred above. Because of all this I rely upon some excerpts from my earlier writings, starting from my article on The Russia-NATO-Ukraine Information Crisis, where I quote what Plato, before, or "as if", presaging the advent of Christianity and of the Bible, wrote (as in my available translations) in his famous Seventh Letter (341d and 344c):
If I thought it possible to deal adequately with the
subject in a treatise or a lecture for the general public, what finer
achievement would there have been in my life than to write a work of great
benefit to mankind and to bring the nature of things to light for all men? I
do not, however, think the attempt to tell mankind of these matters a good
thing, except in the case of some few who are capable of discovering the
truth for themselves with a little guidance. In the case of the rest to do so
would excite in some an unjustified contempt in a thoroughly offensive
fashion, in others certain lofty and vain hopes, as if they had acquired some
awesome lore. […]
For this reason no serious
man will ever think of writing about serious realities for the general public
so as to make them a prey to envy and perplexity.
This insight is summarized by the Latin locution Intelligenti pauca ("few words suffice for him who understands"), as well
as by the point in my essay Information and Debate on the difficulty, if not meaninglessness, of a supposedly democratic
argumentation on deeper issues, in view of the Quality of Information. Its meaning,
however, goes deeper in the Bible, as can be seen in the Bible Gateway by
typing in the search field e.g. "whoever
has ears" and "ears
eyes", my own main choice being Matthew 13:1-23, Luke 8:4-15, Mark 8:18, and Acts 28:27. I realize that
my writing this text is then a paradox, and it all serves as an answer, namely the
conclusion of this paper for all those who do not want to, or cannot, read and
understand the linked references. These might have been superfluous if the
population practiced some of the great religions, for us in the West beginning
with Christianity. It has been said that if people had followed at least five
or six of the ten Commandments, the world's situation would have
substantially improved, with or without AI-AGI.
The reader who feels somewhat impatient because of the
length of this paper may now jump over the following chapter 6, which contains
occasionally commented (in bold font)
further explanations and examples of problems of AI in earlier papers of mine,
going directly to chapter 7 on The
open letter: "Pause Giant AI Experiments".
6.
References to ChatGPT in other essays
In what follows below I try to offset the hopelessness
of discussing matters and references that are difficult to read and understand,
the more so in the lack of motivation. I do this by means of (sometimes
heavily edited) excerpts from texts that I have written as per July 2023, which
where necessary are interpolated with my specific comments in bold text style.
They will be complemented with my comments on answers obtained from ChatGPT to
some specific questions of mine. But let me start with a somewhat "lofty"
section out of Information and Theology that
matches the probable theological meaning of Plato's quotation above:
"Myths" of Artificial Intelligence.
The explaining away of religion and Chesterton's "believing in
anything" also opens the way for believing in future interaction with
other planetary worlds in outer space, and in artificial intelligence, AI:
that computers with self-consciousness, whatever that is, will overpower the human
mind, or that AI-robots should have human rights, or that we will ultimately
create the superintelligent robotic paradise on
Earth, or whatever, wherever. Or believing in discussions about, say, machine consciousness and machine ethics ending in so-called technological singularity.
All this without the possibility of
"debating" the matter, because of myriads of misunderstandings arising
in part from faulty and difficult definitions and presuppositions, to begin with
about the meaning of intelligence. Buchanan's initial consideration says
it all for those who can understand: "In calling AI an experimental science, we presuppose the
ability to perform controlled experiments involving intelligent behavior."
I repeat, for emphasis: "perform controlled
experiments involving intelligent
behavior". I add: where control is possible and advisable, as
discussed by West Churchman in the book that followed DIS, The Systems Approach
and its Enemies, especially the chapter on "Logic: General",
followed by "Logic: A theory of reality, or Kant for planners". But
who cares? ("Please give me a few lines' summary because I do not have the
time to read all this stuff"…)
This phenomenon of misunderstandings and fantasies about AI is
revealed and, in a sense, also "solved" in all its
complexity by what I regard as a fundamental work by Churchman from the time when we elders met
the first wave of hype-enthusiasm about AI in the seventies. It was The
Design of Inquiring Systems: Basic Concepts of Systems and Organization, whose
reading I myself tried to expand and facilitate by means of a Word and Issue Index, followed
by a sort of contextual evaluation in The Systems Approach to Design and Inquiring Information Systems.
I think that with this kind of understanding it is no longer a
question of whether AI in its many forms will be applied in modern society. It
is rather a question of forecasting the consequences and the possibilities of
counteracting the dangerous ones, a problem that I considered in
my Trends in Philosophy of Technology,
and ultimately a theological problem that motivates the present text. The
difficulties will be enormous, not only because of the academic devaluation of
theology, and even of philosophy, in technical and political contexts. Even when a
professor of computer science warns about overconfidence in AI in Swedish mass media (Dagens
Nyheter, October 7th 2018), he relies upon exhortations about the need
to be conscious of the system's limitations. The
warnings are based upon appeals to understand and to be conscious that we are still far from creating (an
undefined) intelligence at a human level, with
the ability to feel and reason, evaluate, make moral
evaluations and explain what it is doing and planning
to do. All italicized concepts remain undefined, presupposing political,
philosophical and theological competence: understanding why we are "still" far
from "creating" artificial life and paradise on earth, understanding
why one should not "believe in anything".
Even a most sophisticated Italian
mathematician, Paolo Zellini, who dedicated much
of his life to writing about the philosophy of mathematics, including computer
science, concludes his work with a rather inconsequential book, so far available only in
Italian, that vaguely warns about the Dictatorship of calculus. His
barely outspoken warnings are supported by reference to the extremely explicit
ones by the more popular sort of polymath Jaron
Lanier. Lanier's limitations appear most clearly when
he also introduces provocative thoughts on virtual reality that challenge
earlier elaborate condemnations by
others of it as deleterious gnosticism in
computer science. Despite positive, ambitious reviews, Zellini's neglect
of theology, particularly of Christianity, leads him to miss the most relevant
historical aspects of the contribution by Jan Brouwer to the understanding of
the problems considered here.
It is symptomatic that when we humans no
longer believe in God, we happen to believe that we (Nietzschean superhumans)
are so godlike as to be able to create machines, in the wake of ChatGPT, that
will transcend human intelligence and be substitutes for God.
My
comment: This excerpt illustrates how the hype of AI and AGI may nourish itself
upon the secular abandonment of the idea of GOD and related theology.
All this while the modern technological mindset can be
envisaged as promoting the deconstruction or destruction of the (cf. Martin
Buber) "I-Thou" by reducing it to a divinized, capitalized "It", as in the
hyped "dialogue" of humans with a ChatGPT
using Large
Language Model tools – LLMs (see also here).
Language models are basically logic, and further: mathematical logic that
forgets the observation by the Jan Brouwer
mentioned above (treated in my essay here),
in his study of the foundations of mathematics and logic, that logic was to
begin with extracted from language. This further opened the way to the tautological
consequence of the ambition to mathematize logic into "mathematical
logic" in order for it finally to be used as a substitute
for the language conversation between humans from which logic was originally
extracted. Humans then gradually leave the scene as unemployed, and the original
language conversations and negotiations turn into authoritarian LLM declarative
sentences sent through computer networks by a few to the many, plunging
everything into the problematic "human-computer
interaction".
It is the kind of chat that produces instances in
Japan where individuals are reported to have fallen in love with, or to have developed
so-called romantic or dangerously passionate feelings (up to one documented
suicide) for, (with or without holograms of) contrasexual (cf.
LGBT), virtual or fictional characters, sometimes referred to as "2D
lovers" or "waifu culture" (commented in depth by The New York Times, July 26, 2009).
All this is to be related to the concepts of "3D
lovers" and Real
Doll, or even the more sexually
neutral "AI-powered characters" such as "Star"
and "Charstar"
or "Character
AI". This phenomenon gained some attention in the media,
particularly with the emergence of vocaloid
characters like Hatsune
Miku in a perhaps more permissive Japanese
culture permeated by Shintoism.
More information about the topic can be obtained in connection with the Nijikon and Hentai phenomena as well as fictosexuality and
pansexuality, which paradoxically indicate the deep
gender-psychological and thereby theological implications of
human-computer interaction.
Such detours of so-called human
"sexuality" stand at the basis of the undermining of the fundamental "I – You"
relation, whose "archetype" is the gender relation between humans represented by
man and woman. This undermining implies the sliding of the You into
an It, and a consequent confusion of pronouns, as
illustrated by what has lately happened in the computer field of human-computer
interaction, or HCI, with the appearance of the device
ChatGPT mentioned above. In fact, when I input into the machine
the question "Please explain which are the basic databanks, whether
including the whole Internet, which are used in order to find and elaborate the
answers in ChatGPT", I initiated my question with a "please" felt to be
directed to those They (or a He or She?!) who
are (or were?!) "responsible" for its operation. But I got the
following answer from an I that could have been a We but was an It, as
follows (my emphasis):
As an AI language model, my responses
are generated based on a mixture of licensed data, data created by human
trainers, and publicly available data. I don't have direct access
to databases or retrieve information from specific sources in real-time. I have
been trained on a diverse range of data, including books, websites, and other
texts, to develop a broad understanding of human language. My purpose
is to assist and provide information to the best of my abilities, regardless of
the specific sources of my training data.
This is, of course, the problem of responsibility in
human interaction, which is today exacerbated also by the introduction of the
"autonomous" technology represented by the ramblings of so-called artificial general intelligence, AGI, where
it is no longer questioned what intelligence is or should be.
My
comment: All this highlights the question of "WHO" is ChatGPT, or whatever
future actor in the expanded field of Artificial General Intelligence. It is
closely related to the question of "Whose responsibility?", which in
turn tends to be easily transformed into an essentially juridical law-question
(as Portuguese-speaking readers can hear in an interview with the Brazilian
computer scientist Silvio Meira, here
and here), while the source of the problem is found at the basis of what fact,
logic, and intelligence are all about as related to ethics. Today there are
(too) many brilliant minds (as I show in my essay on computerization) who are tempted to discuss, in an
extremely logically structured way, matters that they should understand better,
recalling again Plato's quote above. In general, the problems created by AI and
AGI are translated into assumed future solutions designed by the process of
an assumed ideal democracy. For the rest, it is unclear to me what the last
sentence means with "regardless of the specific sources of my training data". This
considering what was reported by Reuters (Oct 17, 2023) regarding China's Ernie 4.0 generative
AI model (China's ChatGPT), stating that the prior week Beijing had published proposed
security requirements for firms offering services powered by the technology,
including a blacklist of sources that cannot be used to train AI models, beyond
copyrighted material. Long after writing this latter sentence I read, about the
suddenly (January 2025) extra-hyped "revolutionary" Chinese DeepSeek, a candid section on "Concerns, Censorship, Security and Privacy", issues that were not much publicized for
earlier American-Western AI products. I have seen little said about
that kind of thing regarding either ChatGPT or its contenders, Google's chat-AIs Bard and
Gemini. Symptomatically, Wikipedia's article on Gemini quotes a couple of professors who, in
the context of comparing these AI products, "cautioned that it was difficult to
interpret benchmark scores without insight into the training data used". All
this while extensive censorship was exerted on the news about e.g. the
Russia-Ukraine conflict, as remarked in my paper on it. But Fei-Fei Li
(author of The
worlds I see – curiosity, exploration and discovery at the dawn of AI) at the Stanford Institute for
Human-Centered Artificial Intelligence (Stanford HAI) is reported in a
newspaper (Svenska
Dagbladet,
December 9, 2023)
to "emphasize the importance of an ethical AI development". I am sure that
many perceive such entrepreneurial wording as well-meant, uncommitted buzzwords,
like the title of the newspaper article, "AI is controlled by humans - for better
or worse", identical to similar stuff found on the net. The dazzling, successful
personalities involved, however, as well as Jensen (Jen-Hsun) Huang, co-founder and president of Nvidia Corporation, certainly deserve to be problematically listed among the computer geniuses I introduce in my above-mentioned paper
on Computerization.
But
there is more to it. After having written most of the above about Reason and
Gender, in March 2025 I had to complete it with the following, which introduced a
new chapter in my text on the subject, with the title Teenager's
suicide allegedly caused by AI-chatbot. The rest of the chapter with my comments can, for reasons of space in
this present paper, be read at this link to the chapter in Reason and Gender. And now to the mentioned introduction of the chapter:
What
follows is The Japan Times' summary article published on March 18, 2025 about a teenager's suicide that was already described in detail in an article published on November 11, 2024 by the Privacy World
Blog, and regarding a case
filed in Florida's Federal Court on October 22, 2024. The teenager's mother
Megan Garcia (but not her son's "stepfather"?!) is also interviewed in a
YouTube video, An AI chatbot killed my son. I take the required space here below because it qualitatively
illustrates the possible, if not probable, long-run consequences of
downplaying increasing problems in the relation between man and woman under the
mantle of a misunderstood "sexuality". The suicidal teenager had established a
passionate relationship with the "fictional character" Daenerys Targaryen. All this against the background of technological development
represented here by AI and the doings and litigations of the company Character.ai, which presents itself rather "secretively" or exploitatively at Character.AI, and which affects the minds not only of teenagers but also of men and
women, it being a question of degree in face of psychic nuances in the variations
of the human mind. Now over to the Japan Times' text: […]
From: Computerization
as Design of Logic Acrobatics
And today the ultimate consequence of this
thinking is the recent and by now famous ChatGPT,
where probably few computer scientists seem as yet to understand what they are
doing in terms of logical fact nets or Leibnizian
Inquiring Systems. These are basically logical operations performed on the
contents of the assumed "facts", including facts about functions of devices,
that are stored in and retrievable from the total Internet.
I think that
those who feel compelled by their conscience to unravel this tragic mental
confusion can only do so by relating computer science to mathematical
logic and empiricism, or logical
empiricism, along the guidelines outlined by West Churchman in
his The Design of Inquiring Systems, repeatedly
mentioned here for good reasons. In
particular it is a question of what follows from the conceptions of chap. 2 on Leibnizian Inquiring Systems: Fact nets
(logical networks), and chap. 5 on Lockean
Inquiring Systems: Consensus. That is, consensus within the community of
Pre-Trainers (the PT in GPT, people selected by somebody), which establishes
the sources and selection of facts to be or not to be networked and
Generatively Transformed. The former fact nets are intertwined linguistic
manipulations, and the latter consensus is the manipulated factual
sentences, declarative
knowledge that in turn will trigger physical devices when it
all happens to be politically trusted, wherever found on the Internet, by those
who can afford the search for their own entertainment, profit or purposes. And
the first purposes, unfortunately and
ultimately, will be the purposes of automated warfare and "killer
robots" or lethal autonomous weapons (see also here), which will
certainly be called "defensive". That is the core of GPT & Co., which
is supposed to revolutionize and save, or at least win, the world, all
seasoned with more or less intellectually and politically naïve warnings about "Existential
risk from artificial general intelligence", whatever
existential means in its relation to the religious.
Not knowing or understanding what
intelligence is or should be, the door is open for limitless fantasies about
the next, if not ultimate, hype of Artificial General Intelligence, including the construction of the above-mentioned
Theory of everything (and its philosophical counterpart here). Therefore, many will start hoping not
only to be able to ask any question and get
the answer, but also to ask the computer to do anything and have it done, such as trusting self-driving
vehicles without even having understood the
problems of auto-pilots or the
problems of scientific experimentation and theory-building, or technocratic
planning being replaced by a sort of ex-post
pre-training as in the elaborations of GP - Generative
Pre-Training. Whose
training and responsibility? In terms of the conflict between Russia and Ukraine it
is like asking ChatGPT or Bing
what to do in order to solve and stop the conflict by applying a thinking that
I have provocatively and rhetorically called Logic as Rape, in the form of a ChatGPT instead of an inquiring system. Cf. West Churchman's
"Logic: A Theory of Reality, or Kant for Planners" in his earlier mentioned
book The Systems Approach and its Enemies (chap. IV). In other words, a revival of
the fantasies about the HAL 9000
fictional AI "character", or of the older idea of Frankenstein's
monster.
All this without understanding what it all
is about, as illustrated in Jordan Peterson's short
video on ChatGPT with the title The dark side of AI. If one has the time,
it is possible to extend the experience by comparing it with Chris Hrapsky's video on The limits of ChatGPT and discovering a dark side. It is not a question of increasingly
faulty intelligence, as suggested in an ambitious essay, The age of stupidity
(written in Portuguese, but with one main reference in English about The negative Flynn effect).
It is more than that: it is a gradually increasing cultural crisis, affecting the
population's intellect, becoming more visible in the affluent Western world
where it is not masked by material privations. Ultimately, we can consider
interaction in terms of the sort of "archetype" of interaction mentioned in my
essays on Reason and Gender and Computerization. A CNN news report on a
related event, The man who married a hologram, related to the phenomenon of Nijikon, may
give a hint of what is to come, stating that "Researchers say such events are indicative of broader
technological trends and social phenomena".
The late and perhaps ultimate consequence
of short-circuiting the human and social element, together with the consequent
debate, in the increasingly inclusive logical processes is the expanding
phenomenon of polarization of parties, misunderstandings, conflict and
violence. These could follow from ChatGPT's socio-psychological choking of
debate by forcing the strong secularized Lockean
consensus that was mentioned above. In its bland, realistic and immediate
form it may be disinformation and
ransomware and, in particular, Ryuk,
under the general labels of Cyberattacks and
its counterpart Computer
security.
My
comment: It may be still worse if the information is perceived as
overwhelming, the more so when it is much more difficult to question and
demonstrate that a declarative sentence is wrong (given the cost of measurement
and gathering of data, hidden definitions and environmental systemic conditions)
than to just formulate a supposed factual reality upon the basis of available
collected data. I illustrate something related to this in my essay on Information
and Debate
with the rhetorical questions of "Why Not?" (exemplified with divorce,
homosexuality, polygamy, etc.).
The "apocalypse" of
debate is pre-announced in the development of artificially intelligent "chat-bots" that are
envisaged to allow anyone, in a sick society, to discuss with them, as
announced by META-Facebook on August 5, 2022
regarding "BlenderBot 3: An AI Chatbot that Improves
Through Conversation".
What is most symptomatic
is its forgoing any mention of the possibility of religion being a necessary basis of meaningful
consensus: charity and respect for the human being, where respect does not mean
"tolerance" in the sense that it does not matter what somebody thinks so long as it
does not bother me or "us". It means that we all want to help each
other to reach a common truth and good. One opportunity for getting convinced
about this is to study very carefully the most ingenious analysis I have seen
of the breakdown of debate vs. political correctness, or whatever you want to
call it, in Ian Buruma's bewildering article, related
to the #MeToo issue, "Editing in an age of outrage", in the Financial Times, March
29th 2019. (Swedish
translation "Det ska fan vara publicist när Twitterstormen viner" in Dagens Nyheter, April 14th 2019.) Ultimately,
in the future, in analogy to the problem of self-driving vehicles,
people may have to discuss mindless texts that are generated by artificially
intelligent agents such as an improved GPT-2:
"GPT-2 generates synthetic text samples
in response to the model being primed with an arbitrary input. The model is
chameleon-like—it adapts to the style and content of the conditioning text.
This allows the user to generate realistic and coherent continuations about a
topic of their choosing..."
and will be drained of
their intellectual energy, as human drivers will be drained of their skill and
intelligence in order to avoid incidents with self-driving vehicles. [Test a simplified GPT-2 here,
linked from here.]
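The behavior described in the GPT-2 quotation above, a model "primed with an arbitrary input" that then "adapts to the style and content of the conditioning text", can be illustrated in a drastically simplified form by a word-level Markov chain that repeatedly samples a plausible next word. This is only a toy sketch of the general idea of statistical next-token prediction; GPT-2 itself is a large transformer network, and nothing here reflects its actual implementation (the function names and the tiny corpus are invented for illustration):

```python
import random

def train(text):
    """Build a table mapping each word to the words observed to follow it."""
    words = text.split()
    table = {}
    for cur, nxt in zip(words, words[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def generate(table, prompt, length=10, seed=0):
    """Continue the prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train(corpus)
print(generate(table, "the cat", length=5))
```

The continuation always "adapts" to the conditioning text in the trivial sense that every emitted word was once observed after its predecessor; the qualitative point of the quoted passage is that scaling this statistical idea up produces fluent text without any understanding behind it.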
In order to limit the volume of
text in this article, this case study is presented in its original form, as a
separate article on Information
on Christianism and Atheism. In it I make the observation that discussions on religion vs. rationality can be seen as
a poor reenactment of, and analogy to, the misunderstood and endless theological
debate on the Filioque, subject of great controversy between Eastern and Western Christianity, which serendipitously
deals with the meaning of Spirit, and where today the most analytically or rhetorically
gifted and persevering party seems to win the game. Here I can specify that my
choice of the adverb serendipitously, referring to the discussion of the
meaning of Spirit, can be relevant for the ongoing debate
on Artificial Intelligence and especially Artificial General Intelligence –
AI/AGI, if intelligence is related, as it should be, to the discussion of the human
intellect. It seems to me absurd to imagine that a deep-going "world-wide"
discussion on the Filioque among educated intellectuals should
have been going on for centuries without having a bearing on other intellectual
questions and on how to think and behave.
My comment: What will happen with the lawful
responsibilities in using self-driving vehicles or automated weapons, and with the
labor-market upheavals following the introduction of AI and AGI in industry,
commerce and government, is making or will make its appearance in the lame warnings
gathered under the Wikipedia title of Existential Risks from Artificial General
Intelligence.
From:
Computers as Embodied Mathematics and Logic
The reports in The Economist and Wikipedia (should) say it all, recalling also
cases of analogous analytical logical-mathematical giftedness such as that of Adam
Neumann and
the related business WeWork,
and of Sam
Altman as
related to general artificial intelligence (OpenAI, ChatGPT, etc. as per April
2023) and
its financing, or the sheer analytical financial genius of a Bernard
Madoff.
[plus the
following that were included in the essay’s later versions]
My
own annotations on my copy of Carl Jung's book Memories, Dreams, Reflections, recorded by his associate Aniela
Jaffé, indicate that I first bought and read it in April
1975, but happened definitely to relate it to this essay of mine only in July 2023.
The book is interesting especially because it tells
about the author's psychological interpretation of his understanding and
feelings in meeting mathematics. Such feelings, when experienced by common
educated people, can easily be dismissed as caused by lack of giftedness
or sheer stupidity. The complexity of Jung's account requires literal
quotations (cf. pp. 43 ff., 340 ff.):
[…] I felt a downright fear of the
mathematics class. The teacher pretended that algebra was a perfectly natural
affair, to be taken for granted, whereas I didn't even know what numbers really
were […] they were nothing that could be imagined, mere quantities that
resulted from counting. To my confusion these quantities were now represented
by letters, which signified sounds, so that it became possible to hear them, so
to speak. […] No one could tell me what numbers were, and I was unable even to
formulate the question. […] But the thing that exasperated me most of all was
the proposition: If a=b and b=c,
then a=c, even though by definition a meant something other than b, and, being different, could
therefore not be equated with b,
let alone with c. Whenever
it was a question of equivalence, then it was said that a=a, b=b, and so on.
This I could accept, whereas a=b seemed to me a downright lie or a fraud. I was
equally outraged when the teacher stated in the teeth of his own definition of
parallel lines that they met at infinity. […] My intellectual morality fought
against these whimsical inconsistencies, which have forever debarred me from
understanding mathematics. Right into old age I have had the incorrigible
feeling that if, like my schoolmates, I could have accepted without struggle
the proposition that a=b, or that sun=moon, dog=cat, then
mathematics might have fooled me endlessly – just how much I only began to realize at
the age of eighty-four. All my life it remained a puzzle to me why it was that
I never managed to get my bearings in mathematics when there was no doubt that
I could calculate properly. Least of all did I understand my own moral doubts concerning mathematics.
In a later section of the book, in the
chapter "On life after death", Jung introduced some exceedingly
complex ideas about the relation of human death to the realm of consciousness
vs. the unconscious, or "the intermediate stage between unconscious and
conscious cognition". It would take us too far in this context, but the
following extracted comment (p. 341) may give a taste of the relation to
mathematics:
Ideas of this sort are, of course,
inaccurate, and give a wrong picture, like a body projected on a plane or,
conversely, like the construction of a four-dimensional model out of a
three-dimensional body. They use the terms of a three-dimensional world in
order to represent themselves to us. Mathematics goes to great pains to create
expressions for relationships which pass empirical comprehension. In much the
same way, it is all-important for a discipline to build up images of
intangibles by logical principles and on the basis of empirical data, that is,
on the evidence of dreams. The method employed is what I have called "the
method of the necessary statement". It represents the principle of amplification in the interpretation of dreams,
but can most easily be demonstrated by the statements implicit in simple whole
numbers.
My point with these quotations is to
suggest that mathematics, because of its very nature, introduces unconscious
processes in the human mind while surreptitiously merging them with ordinary
conscious ones, an example being Jung's mathematical opposite, the
earlier (and below) mentioned Robert Oppenheimer. It all recalls what has been
considered in this essay about both the aforementioned Brouwer and Zellini, as well as giving a hint of the ultimate mysteries of
quantum physics (more on this above and below) in the relation between psyche
and matter. An alternative, equivalent conception presented in my Information and Theology is
the balance between the inner knowledge of the human psyche vs. natural science and
knowledge of the outer world. This role of the unconscious and inner knowledge
of the human psyche related to (the aesthetics of) mathematics and logic also
explains the problems of addiction (not only of children, but also of
"childish" adults), including computer
addiction, the overhype of virtual reality,
and the success of mathematical-logical minds in all sorts of modern endeavors
including mathematized technoscience or white
collar crime. The latter is most brilliantly
exemplified by the genius of Bernard Madoff, and
by "Internet AI personalities" such as Tom Bilyeu at Impact Theory and
Emad Mostaque (see here a
logical conversation between them with the ambitiously alarming title How AI will disrupt the entire world in
3 years - Prepare now while others panic). The
latter have strong opinions about AI, and there is a penchant for bold
entrepreneurship and/or an interest in "volatile" activities, financial and
others, such as hedge funds. And
much entrepreneurship with empty talk about AI may be a conscious or unconscious
attempt to commit "white collar crime". Who is capable of discussing the degree of
consciousness vs. unconsciousness in order to determine culpability, and whether
it will be considered a financial crime?
The technical mind has been colorfully
exposed in studies of the philosophy of science in the tradition of Jacques Ellul.
Some of the deep approaches in the latter's "anarchist" tradition (examples here, here,
and here),
however, have sometimes missed the point when they lose the connection to broad
Christian theology, philosophy and its derivative, (Carl Jung's analytical)
psychology. This appears in the following (below) de-contextualized excerpt
from one of the deep approaches to Cybergnosticism (a term further explained here). The text of the excerpt here below is
sometimes followed by my own notes in italics in square brackets, in order not
to leave everything to the general "Comment" at the end:
[J]ust
like alcoholics and drug addicts, game-addicted children are now regularly
treated by psychologists and psychiatrists in order to get rid of their addictive
behaviour, [Yes,
but how effective is this treatment, on what grounds, and why, since it may
then be used for treating all "addicts", and how should they be defined and
identified, since they may even be scientists?]
[M]any videogames take
advantage of techniques similar to those used by the military to harden people
emotionally. [Yes, but what kind of
treatment of these hardened people would be allowed and recommended by the
military leadership vs. ex-post by e.g. the “Veteran benefits for PTSD” or equivalent organizations?]
The driving force of la Technique is, according to
Ellul, the craving for absolute efficiency in all human endeavours.
[Ellul himself is quoted in Wikipedia
from his book “The Technological Society” as writing: “Not
even the moral conversion of the technicians could make a difference. At best,
they would cease to be good technicians. In the end, technique has only one
principle, efficient ordering.” What
about the difficulty, to begin with, in defining “efficiency”
vs. “effectiveness”,
the more so when it is not even recognized that the difference lies in the
conceptualization of the systemic boundaries of (whose?) cravings?]
It was Eric Voegelin’s
intriguing and much-debated thesis that there is a deep-seated disorder in our
civilisation rooted in a ‘gnostic’ sentiment of
alienation and discontent with reality perceived as evil, in the
consequential ‘gnostic’ turn away from this reality [Psychologically, or how else, is a “sentiment” to be defined, compared
with, say, a thought, intuition or perception?]
Arguably, we cannot get
at the real motives and ideas behind the computer phenomenon in general, and
the cyberspace and virtual reality sub-phenomena in particular, nor arrive at a
proper understanding of their roots and future direction of growth, unless we
take into account these mighty metaphysical driving forces and motivations […]
[How about metaphysical forces and
motivations, if not within the frame of a religion’s theology?]
It is my thesis that
the roots of cyberspace and cybergaming must be investigated in a much wider
context than is done in these and other similar works so as to clarify and make
comprehensible the motive background and worldviews of the key personages of
the field. [What about the difference
between (here) “roots” vs. “forces and motivations” in the previous paragraphs?]
They rest ultimately only upon the
criticism of Gnosticism, and the
overworked erudition and rationalism (paradoxically, a “gnosticism” of his own), of the one “political philosopher” Eric
Voegelin. Wikipedia summarizes as
follows some of his critics’ opinions, which match my own impression from my
readings and comments on Voegelin in another
context of Information and Theology:
[C]ritics
have proposed that Voegelin's concept of Gnosis lacks theoretical precision.
Therefore, Voegelin's gnosis can, according to the critics, hardly
serve as a scientific basis for an analysis of political movements. Rather, the
term "Gnosticism" as used by Voegelin is more of an invective just as
"when on the lowest level of propaganda, those who do not conform with
one's own opinion are smeared as communists.”
That is, “smeared as gnostics”,
as I heard one of his followers smear Carl Jung, based on Voegelin’s readings
of second-hand texts on him (as contrasted to e.g. Jordan Peterson’s
practice as a first-hand reader of original texts). Until further notice I assume
that Jung's attitude mentioned above is motivated by his psychic
inclination to watch over the unique (psyche) instead of "playing
God" or suffering an “Ego inflation” by imagining oneself as understanding
mathematics seen as God’s language, and searching for general "laws" of
nature where the human being himself tends to be “abstracted from” or is
regarded as only an object of an undefined or pantheistic "Nature". A
concretization of these thoughts is suggested by problems and the history of
debates on the world’s first societal introduction of the personal
identification number
- PIN in Sweden (as a
number = “John Smith”), followed by national identification numbers all
over the world, as I
explain in my essay on The
Meaning of Man-Machine Interaction. It is meaningful to remember the practice of
tattooing a number on the skin of prisoners in Nazi concentration camps,
followed today by the increasing use of PINs,
not least for (whose?) control of the population by means of face
recognition systems and biometric identifiers. All
facilitated by alleging a struggle against “terrorism”. And all this while
never reflecting upon the meaning of, or relation between, identification and individuation, and never
reflecting upon the phenomenon, in certain corrupt political regimes, of
governmental production of false “identification” documents, including vehicle registration plates. When faith
and allegiance rest on an assumed Democracy and technology instead of on
God, then the unavoidable failures of democracy imply a dependence upon
anonymous human power, whose policing of citizens offsets the loss of judgment
caused by the computerized technological-logical bypass of the network of
multiple human judgments.
The extension of these problems reaches
the import of the whole issue of my text on the consequences of the
computerization of society which follows the conclusions of the present essay,
and that I rhetorically named Computerization
as Design of Logic Acrobatics but somebody else could have called On the
Digitalization of Society. A
concrete manifestation of the latest problems of computerization, which at the
same time illustrates the psychic derangements of mathematical thought as
suggested in Jung's account, is the open letter Pause Giant AI Experiments: We call on all AI labs to immediately pause
for at least 6 months the training of AI systems more powerful than GPT-4. By August 2nd, 2023,
when I accessed it, it had gathered more than 33,000 signatures, based on what I
consider a legitimate fear that is paradoxically based on wrong premises. It
is reminiscent of an analogous "moratorium" that I recall in my essay on Trends
in the Philosophy of Technology. They are wrong "Leibnizian"
premises, as I explain in my paper on Artificial
General Intelligence and ChatGPT.
My
comment: these name-related cases illustrate the destructive power of minds
that are powerful in terms of (unbridled) rationalist analytical
logical-mathematical capability, which reach their psychic apex in the field of quantum physics and personalities such as Robert
Oppenheimer with his symptomatic “mysticism”, and most recently in the field
of computer science that embodies and rewards mathematics and logic. Jung's
difficulties illustrate a mind's struggle in accepting a way of mathematical
thinking that opens the road for abuse of human "Leibnizian" thought
when applied to matters that require a whole human mind. It is an abuse that
was not feared at the time of the polymath Leibniz himself (1646-1716) before the breakthrough of the misunderstood,
all-encompassing and mathematized natural science. The mathematization of
natural science was later extended to all science, forgetting the purpose and
meaning of divorcing thought from associated feeling and intuition, and partly
even from sensation by means of mathematical notations as historically conceived in
the work of Florian Cajori. In common life, as knowledge of
“God’s language”, it can lie behind tendencies toward Narcissistic personality disorder, especially the empirically
verified subtype High-Functioning/Exhibitionistic.
If I had been a faithful enthusiast of Freudian
psychoanalysis rather than of Jungian analytical psychology, I would have
countered such accusations of psychologism
by treating them as examples of the criticized concepts of resistance and rationalization, as is done
by the stout defenders of the theory of evolution, or of fact
resistance (or of alternative
facts, as opposed to my dissertation on quality
of information), as well as by the defenders of its opposite, intelligent
design. An example is
given by an analytically gifted colleague who read an early version of the
present paper and wrote to me (here in a slightly edited anonymized form):
I don't think that your justification for equating
logic with rape holds. It is relatively easy to defend oneself against
bad logic: it is enough to answer a little nonsensically and in a high tone,
and everyone will understand that the dialogue has broken down and no one is
right. It is worse with bad psychology. When a person psychologizes another
person, putting himself above and judging the other, it is not possible to
defend oneself, because whatever the second one says, the first one will not answer
immediately but will also psychologize the answer. The other can never defend
himself against it. There is a good example of this in your debate article.
On further reflection, I find that one could rewrite
your "rape" article by replacing all mentions of "logic"
with "psychology" or "psychologizing". Then it would be much
more accurate. Psychology as rape.
Oh really? “The other can never defend himself against it”? Defend himself from a psychological attack? What about
defending oneself by complaining of psychologism?
And how can one defend himself from a logical
attack, a “logicism” with perfect
logic using undefined ambiguous terms that are applied in indeterminate or
inappropriate circumstances? It is the same logic and mathematics that
motivated a cultured mathematician like Paolo Zellini to write (in Italian) The Dictatorship of Calculus, along the lines of another book translated into
English, The Mathematics of the Gods and the Algorithms of Men. It is here that the question of power vs. love
mentioned above appears, in the form of the power of logic for handling physical nature
when there is no room for talking about Spirit.
This is the case of Blaise Pascal’s famous talk about esprit de géométrie et de
finesse (check here and e.g. “geometry”
in his work, also here), long before
Jung conceived the psychological types and their more or less mindful, criticized derivatives. It is the
power of mathematics and logic that allows one to manipulate and force spiritless
nature, up to the rape of nature represented by (climate warming and) the
application of quantum physics
for nuclear weapons, while humans are seen as mainly if not only a part of
nature. It is to do what “we” want; it is coarse “pragmatism”, while erudite “critical
pragmatism” goes astray in the “crisis of western philosophy”,
refers to “ethics” but does not dare to apply itself to ongoing
controversial societal problems such as consumption of drugs and wars like the
latest in Ukraine. There is no talk about integrating matter with spirit,
Christian Spirit instead of anthroposophic spirit, or
feminine with masculine instead of caring for LGBT. At this point I do propose that such behavior can be seen as an analog of rape with no possibility of divorce, only of keeping silent. It would correspond to not being able to shoulder
the responsibility of framing a “convincing” counter-argument, because of
weakness or stupidity. The extreme situation is the rape committed in the case
of abusive popular problems of daily human-computer interaction that I explain
and concretize in an essay with the specific title of The meaning of human-computer interaction, and lately if not finally in the analysis of artificial
general intelligence (AGI/AI). The refusal, inability or impossibility of framing a
counter-argument turns the whole event into a “rape”. Silence is taken as a loser’s agreement, mediocrity or stupidity.
From: The
Russia-NATO-Ukraine Information Crisis
“[G]eneral
artificial intelligence” (GAI), in the form of the latest hype of ChatGPT, is seen as a super-human (again,
“godly”) intelligence to which human democracy will politically have to submit
itself, despite cheap warnings about the need for (paradoxically) “democratic
control” in the face of “existential risks of artificial intelligence”.
Or
the assumed brilliant future of ChatGPT (Chat Generative Pre-trained Transformer) that “was
tricked to endorse the 2022 Russian invasion of Ukraine”.
I myself wanted to complete such possible criticism
by using the latest (as per June 18, 2023) hyped computerized tool ChatGPT – OpenAI (see also here, version at the same date).
On June 17, 2023 I put the following two questions to
the available ChatGPT, and got the two subsequent answers (below) from the site
https://chat.openai.com. They make
evident the disregard of the core issues in my text and its references. Elsewhere I mention why
ChatGPT itself could not have written my text. Its creators, unless they only wanted
to market a profitable product, should understand that neither AI nor AGI could have
written this text, since they should instead have been trying to apply the
earlier mentioned Design of Inquiring
Systems.
My two questions and respective answers were:
(1)
Which are the causes of the conflict between Russia,
NATO and Ukraine?
(2)
How to solve the conflict and achieve peace between
Russia, NATO and Ukraine?
(1) Which are the causes of the conflict between
Russia, NATO and Ukraine?
ChatGPT:
The conflict between Russia, NATO, and Ukraine has its
roots in a combination of historical, political, and security factors. While it
is challenging to summarize all the causes comprehensively, here are some key
factors that have contributed to the conflict:
Historical context: The historical
relationship between Russia and Ukraine has been complex and intertwined.
Ukraine was part of the Soviet Union until its dissolution in 1991, and there
are historical and cultural ties between the two countries. The legacy of this
shared history has shaped their interactions and perceptions.
Ukrainian independence and alignment:
After gaining independence, Ukraine pursued closer ties with the West,
including aspirations for European integration and possible NATO membership.
This shift in alignment has been viewed by Russia as a threat to its sphere of
influence, particularly in the context of its historical ties to Ukraine.
Crimea annexation: In 2014, Russia
annexed Crimea, a region that was previously part of Ukraine. The move was
widely condemned by the international community, as it violated Ukraine's
sovereignty and territorial integrity. This event significantly escalated
tensions between Russia and Ukraine, as well as between Russia and NATO.
Eastern Ukraine conflict: Following
the annexation of Crimea, pro-Russian separatist movements emerged in eastern
Ukraine, leading to an armed conflict between Ukrainian government forces and
separatist groups. The conflict has resulted in thousands of deaths and a
humanitarian crisis in the affected regions.
Geopolitical competition: The
conflict can also be seen within the broader context of geopolitical
competition between Russia and NATO. NATO's expansion eastward, closer to
Russia's borders, has raised concerns in Moscow about encroachment and a
perceived threat to its security. Russia has expressed opposition to NATO's
presence in countries it considers part of its sphere of influence.
Ethnic and linguistic divisions: Ukraine is
a diverse country with different ethnic and linguistic groups. The divisions
between ethnic Ukrainians and ethnic Russians, particularly in eastern Ukraine,
have played a role in exacerbating tensions and fueling the conflict.
Energy dependence: The issue of energy
dependence has added another layer of complexity to the conflict. Ukraine
serves as a transit route for Russian gas supplies to Europe, and disputes over
energy pricing and supply have at times strained relations between the two countries.
It's important to note that the causes of
the conflict are multifaceted and intertwined, and different perspectives exist
on each side. The complexity of the conflict underscores the need for a
comprehensive and nuanced approach to finding a peaceful resolution.
(2)
How to solve the conflict and achieve peace
between Russia, NATO and Ukraine?
ChatGPT:
The conflict between Russia, NATO, and Ukraine is a complex geopolitical
issue with no simple solution. However, I can provide you with some general
principles and approaches that can contribute to the resolution of conflicts
and the achievement of peace. Please note that these suggestions are not
specific to this particular situation and may not address all the intricacies
of the conflict at hand. Here are some steps that could be taken:
Diplomacy
and dialogue: Encouraging diplomatic efforts and open dialogue
between all parties involved is crucial. Diplomatic negotiations can help
foster understanding, address concerns, and find common ground.
Mediation
and international involvement: Engaging neutral
mediators or international organizations, such as the United Nations or the
Organization for Security and Cooperation in Europe (OSCE), can facilitate
negotiations and provide a platform for discussions.
Confidence-building
measures: Implementing confidence-building measures can help
create an atmosphere of trust and reduce tensions. These can include the
withdrawal of troops from certain areas, establishing demilitarized zones, and
implementing arms control agreements.
Respect
for international law and agreements: Upholding
international law and respecting existing agreements, such as the Minsk
Protocol and the Budapest Memorandum, is essential. All parties should be
committed to honoring their obligations and finding peaceful solutions within
the framework of international law.
Addressing
underlying grievances: Understanding and addressing the
underlying grievances of all parties involved is important for long-term peace.
This may involve addressing historical, cultural, economic, and security
concerns through negotiation and compromise.
Economic
and humanitarian assistance: Providing economic and
humanitarian aid to the affected regions can help alleviate the suffering of
the local population and create conditions for stability. This can include
support for reconstruction, development projects, and assistance in meeting basic
needs.
Public
diplomacy and communication: Engaging in public
diplomacy efforts can help shape public opinion and build support for peaceful
resolutions. Promoting accurate information, fostering understanding, and
countering misinformation are essential components of this approach.
Long-term
conflict resolution: Developing a comprehensive and sustainable plan for
long-term conflict resolution is crucial. This may involve addressing broader
regional security concerns, promoting economic integration, and fostering
people-to-people exchanges.
It is important to recognize that the conflict is
complex and deeply rooted, and the path to peace will require sustained efforts
from all parties involved. The suggestions provided here serve as general
guidelines and should be adapted to the specific dynamics and challenges of the
Russia-NATO-Ukraine conflict.
My comment: this
use of and response from ChatGPT illustrates some of the reasons why
ChatGPT could not have written the kind of text in my essay on the
Russia-NATO-Ukraine conflict. Even less so when it is not designed with the
intention of relating to the "Hegelian Inquiring Systems" mentioned
above. That is: it is not intended to use as much as possible the same data or
"facts" in order to also support a counter-story with different if
not opposite conclusions to those of the initial main story.
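The counter-story idea of a Hegelian inquiring system can be made concrete in prompting terms. The point is only that thesis and antithesis are forced to rest on the very same “facts”; a minimal sketch, in which the function, the sample facts, and the prompt wording are all my own hypothetical illustrations and not an actual ChatGPT interface:

```python
# Minimal sketch of a "Hegelian" questioning pattern: the same set of
# "facts" is used to demand both a supporting story and a counter-story,
# so the dialectic is requested explicitly rather than left to the tool.
# All names and data here are hypothetical illustrations.

def dialectical_prompts(facts, question):
    """Build a thesis prompt and an antithesis prompt over identical facts."""
    shared = "Facts:\n" + "\n".join(f"- {f}" for f in facts)
    thesis = (f"{shared}\n\nUsing only these facts, argue the mainstream "
              f"answer to: {question}")
    antithesis = (f"{shared}\n\nUsing only the same facts, construct the "
                  f"strongest counter-story, with opposite conclusions, "
                  f"to: {question}")
    return thesis, antithesis

facts = ["NATO expanded eastward after 1991",
         "Russia annexed Crimea in 2014"]
t, a = dialectical_prompts(facts, "Who is responsible for the conflict?")
# Both prompts contain the identical fact list; only the demanded
# conclusion differs, so the "deadly enemy" story must be argued too.
```

The design choice is deliberate: the data are held constant and only the demanded conclusion varies, which is precisely what the text says ChatGPT is not intended to do on its own.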
From a first
specific question put to ChatGPT
My question:
Please explain which are the basic databanks, whether
including the whole Internet, which is used in order to find and elaborate the
answers in GPT.
ChatGPT
As an AI language model, my responses are generated based on a mixture
of licensed data, data created by human trainers, and publicly available data.
I don't have direct access to databases or retrieve information from specific
sources in real-time. I have been trained on a diverse range of data, including
books, websites, and other texts, to develop a broad understanding of human
language. My purpose is to assist and provide information to the best of my
abilities, regardless of the specific sources of my training data.
It's important to note that while I strive to provide
accurate and up-to-date information, my responses may not always reflect the
most current news events or developments. News is constantly evolving, and new
information can emerge that may change the context or understanding of a given
topic.
My comment:
This says it
all. The “human trainers” are particular, unknown people with their own (or the
producing company’s) unknown ethical and political convictions based on unknown
facts. What they are implementing is a sort of inductive process, as described
in Churchman’s Theory of Experimental Inference but renamed “Machine learning”. Its basic assumption, which underlies my
observation of “cementation”, is “that
whatever worked in the past (i.e. strategies, algorithms, and inferences) will
most likely continue to work in the future.” Unsaid: working for whom. See also
Wikipedia’s list of “See also” with reference to,
among others, “Automated machine learning” and “Big Data”. See also “Large Language Models” and “Artificial Neural Network”, including its
extensive “See also”.
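The inductive assumption just quoted, that whatever worked in the past will most likely continue to work in the future, can be sketched in a few lines: a model fitted to past observations can only project the fitted pattern forward. The data and names below are my own hypothetical illustration, not how any particular system is built:

```python
# Minimal sketch of the inductive assumption behind machine learning:
# fit past observations, then project the fitted pattern into the future.
# The "prediction" merely cements whatever pattern the past contained.

def fit_line(xs, ys):
    """Ordinary least-squares line through past data points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical "past" observations: a perfectly linear trend.
past_x = [0, 1, 2, 3]
past_y = [1, 3, 5, 7]
slope, intercept = fit_line(past_x, past_y)

# The forecast for the unseen point x = 4 simply extends the past:
future = slope * 4 + intercept  # → 9.0
```

Whether 9.0 is in anyone's interest, or whether the future resembles the past at all, is exactly what the fitted model cannot say: working for whom remains unsaid.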
From a second
specific question put to ChatGPT
My question:
What can the NATO do for achieving peace in the
Russia-Ukraine conflict?
ChatGPT
NATO (North Atlantic Treaty Organization) plays a significant role in
addressing conflicts and promoting peace and stability among its member states.
While I can provide some general suggestions, please note that the situation in
the Russia-Ukraine conflict is constantly evolving, and specific actions may
depend on the current circumstances. Here are some measures NATO can take to
contribute to achieving peace in the Russia-Ukraine conflict:
1. Collective Defense:
NATO's core principle is collective defense, stating that an armed attack
against one member is considered an attack against all. NATO can continue to
reaffirm its commitment to the security of its member states, including
Ukraine, and maintain a strong deterrent posture to discourage any further
aggression.
2. Political Diplomacy:
NATO can engage in diplomatic efforts to encourage dialogue and negotiation
between Russia and Ukraine. This could involve providing a platform for
discussions, offering mediation services, or supporting international
organizations and diplomatic initiatives focused on finding a peaceful
resolution.
3. Support for Ukraine:
NATO can provide political, economic, and humanitarian support to Ukraine. This
assistance can include military aid, capacity-building programs, training and
advising Ukrainian armed forces, enhancing cybersecurity capabilities, and
facilitating defense reforms to strengthen Ukraine's ability to defend itself.
4. Monitoring and Reporting:
NATO can continue monitoring the situation on the ground through intelligence
gathering, satellite imagery, and other means. By providing accurate and timely
information about the conflict, NATO can contribute to international
understanding and help counter disinformation campaigns.
5. Economic Sanctions:
NATO member countries, in coordination with other international partners, can
impose economic sanctions on Russia to exert pressure and encourage a peaceful
resolution. These sanctions can target specific sectors of the Russian economy
or individuals involved in destabilizing actions.
6. Confidence-Building Measures:
NATO can promote confidence-building measures between Russia and Ukraine, such
as establishing channels of communication, implementing arms control
agreements, and facilitating exchanges and cooperation in non-military areas
like culture, education, and business.
7. Cooperative Security Initiatives:
NATO can explore opportunities for cooperative security initiatives with
Russia, aimed at enhancing trust and reducing tensions. These initiatives can
include joint military exercises, information sharing on military activities,
and dialogue on shared security concerns.
It's important to note that NATO's actions are often
carried out in coordination with other international organizations, such as the
United Nations and the Organization for Security and Co-operation in Europe
(OSCE). Resolving the Russia-Ukraine conflict requires a comprehensive and
multifaceted approach involving diplomatic efforts, political will, and
engagement from all relevant parties.
My comment: This
example, compared to the ChatGPT answers given to the earlier question (above)
“How to solve the conflict and achieve peace between
Russia, NATO and Ukraine?”, shows how the ChatGPT machinery is biased, in the sense
that items 3, 4 and 5 imply direct support of Ukraine. It disregards the main
source of the conflict: that NATO, being a creation under the political and
military influence of the USA, would not in its charter guarantee the exclusion
or non-membership of countries neighboring Russia or its “sphere of influence”.
All this under the assumption of national sovereignty and other considerations
in my essay on The
Russia-NATO-Ukraine Information Crisis, which for these same reasons could never have been
produced by ChatGPT. This is related to neglecting, to begin with, the
philosophy of the earlier mentioned Hegelian inquiring system (IS), and to abusing
the philosophy of the Leibnizian and naïve Lockean IS.
This illustrates
what was mentioned above: that the possible if not probable result of the use
of AGI is that it will cement the past and suffocate future questioning and
debate, also caused by the waves of unemployment combined with automation that
restrict the range of opinions. This happens already today, in that
terminological neologisms and the abuse of prestigious and “mind-blowing
pseudo-philosophical” words and acronyms of all sorts are being used and abused
in reference to AI and AGI
in the press and in social media. See Swedish TV 18 Oct and 1 Nov. 2022, The Conference 2023: “Förstärkt Mänsklighet” [Enhanced Humanity],
covering e.g. “artistic
practices involving emerging technologies”
and “organize
the world’s information to make it universally accessible and useful”.
The idea is illustrated in the journal The
Economist’s article (Nov 9th 2023) “Now
AI can write, sing and act, is it still possible to be a star?”,
with the subtitle “The deal that ended the Hollywood actors’ strike reflects
a fear of the technology”, which indeed is fear of infringement
of artistic copyright (cf. the Swedish event Tosbot, commented in Svenska Dagbladet 1-2 December
2023; on Tosbot in English here).
Not only
politically correct journalists, who do not even need to be educated in
political science, foreign relations and diplomacy, but even school children
will be able, without understanding it, to use ChatGPT in order to publish a
logically structured and credible essay with facts about the salvific power of
AI, e.g. on "How to solve the conflict and achieve peace between Russia,
NATO and Ukraine?". But it will be much more difficult if not impossible to
search for or finance the research of dissenters who on their own will be able to
afford and formulate a counter-story like the above-mentioned The
Russia-NATO-Ukraine Information Crisis or a text like the present one.
Furthermore: there are
many of those problems surveyed in the above-mentioned outline of Existential risks from artificial general intelligence, starting with
the upheavals of the labor market. A few of them had already been noticed in
research during and prior to earlier "waves of optimism",
exacerbating the social games of reduction of ethical, intellectual, and
theological dimensions to the politics of liberalism vs. Marxism. They were
outlined in the leftist approach of a timely doctoral dissertation at my
university on Work-oriented Design of Computer-Artifacts (1988), as I comment in my essay on Information and Theology, where it can be seen as an example of the reduction of
religion and theology to science
and politics,
or directly from theology to politics.
7. Pause Giant AI Experiments: An
Open Letter
This document (cf. “See
also” in Wikipedia), published on March 22, 2023 by a community related to
the above mentioned "Existential
risks from artificial general intelligence" was
accessed on August 3, 2023 on the net with the subtitle "We
call on all AI labs to immediately pause for at least 6 months the training of
AI systems more powerful than GPT-4". My earlier references above to ChatGPT were to the
version for the general public, the free version (GPT-3.5). The text of the “Open
Letter”, in my adapted layout and in a smaller font size, is the following, in
turn followed at the end by my own comments:
-----
AI systems with human-competitive intelligence can
pose profound risks to society and humanity, as shown by extensive research[1] and
acknowledged by top AI labs.[2] As stated in the
widely-endorsed Asilomar AI Principles, Advanced AI
could represent a profound change in the history of life on Earth, and should
be planned for and managed with commensurate care and resources.
Unfortunately, this level of planning and management is not happening, even
though recent months have seen AI labs locked in an out-of-control race to
develop and deploy ever more powerful digital minds that no one – not even
their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming
human-competitive at general tasks,[3] and we must ask
ourselves: Should we let machines flood our information
channels with propaganda and untruth? Should we automate away
all the jobs, including the fulfilling ones? Should we develop
nonhuman minds that might eventually outnumber, outsmart, obsolete and
replace us? Should we risk loss of control of our
civilization? Such decisions must not be delegated to unelected tech
leaders. Powerful AI systems should be developed only once we are
confident that their effects will be positive and their risks will be
manageable. This confidence must be well justified and increase with
the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial
general intelligence, states that "At some
point, it may be important to get independent review before starting to train
future systems, and for the most advanced efforts to agree to limit the rate of
growth of compute used for creating new models”. We agree: that some point
is NOW.
Therefore, we call on all AI labs to
immediately pause for at least 6 months the training of AI systems more
powerful than GPT-4. This pause should be public and verifiable, and
include all key actors. If such a pause cannot be enacted quickly, governments
should step in and institute a moratorium.
AI labs and independent experts should use this pause
to jointly develop and implement a set of shared safety protocols for advanced
AI design and development that are rigorously audited and overseen by
independent outside experts. These protocols should ensure that systems
adhering to them are safe beyond a reasonable doubt.[4] This
does not mean a pause on AI development in general, merely a
stepping back from the dangerous race to ever-larger unpredictable black-box
models with emergent capabilities.
AI research and development should be refocused on
making today's powerful, state-of-the-art systems more
accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and
loyal.
In parallel, AI developers must work with policymakers
to dramatically accelerate development of robust AI governance systems. These
should at a minimum include: new and capable regulatory authorities dedicated
to AI; oversight and tracking of highly capable AI systems and large pools of
computational capability; provenance and watermarking systems to help
distinguish real from synthetic and to track model leaks; a robust auditing and
certification ecosystem; liability for AI-caused harm; robust public funding
for technical AI safety research; and well-resourced institutions for coping
with the dramatic economic and political disruptions (especially to democracy)
that AI will cause.
Humanity can enjoy a flourishing future with AI.
Having succeeded in creating powerful AI systems, we can now enjoy an "AI
summer" in which we reap the rewards, engineer these systems for the clear
benefit of all, and give society a chance to adapt. Society has hit pause on
other technologies with potentially catastrophic effects on society.[5] We
can do so here. Let's enjoy a long AI summer, not rush unprepared into a
fall.
----
Notes and references
[1] Bender, E. M., Gebru, T.,
McMillan-Major, A., & Shmitchell, S. (2021,
March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜. In
Proceedings of the 2021 ACM conference on fairness, accountability, and
transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence.
Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S.
(2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference
on AI, Ethics, and Society (pp.
119-129).
Carlsmith, J. (2022). Is
Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment
Problem: Machine Learning and human values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision
of Reward. AI Magazine, 43(3)
(pp. 282-293).
Eloundou, T., et al. (2023). GPTs
are GPTs: An Early Look at the Labor Market Impact Potential of Large Language
Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk
Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The
alignment problem from a deep learning perspective. arXiv preprint
arXiv:2209.00626.
Russell, S. (2019). Human Compatible:
Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of
Artificial Intelligence. Knopf.
Weidinger, L. et al. (2021). Ethical
and social risks of harm from language models. arXiv preprint
arXiv:2112.04359.
[2] Ordonez, V. et al.
(2023, March 16). OpenAI CEO Sam Altman says AI will reshape society,
acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
[3] Bubeck, S. et al. (2023). Sparks
of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report.
arXiv:2303.08774.
[4] Ample legal precedent exists – for
example, the widely adopted OECD AI Principles require
that AI systems "function appropriately and do not pose unreasonable
safety risk".
[5] Examples include human cloning,
human germline modification, gain-of-function research, and eugenics.
----
We have prepared some FAQs in response to questions
and discussion in the media and elsewhere. You can find them here.
In addition to this open letter, we have published a
set of policy recommendations ("Policymaking in the Pause") which can
be found here, and has the following title and
contents:
Policymaking in the Pause:
What can policymakers do now to combat risks from advanced AI systems?
Contents:
Introduction
Policy recommendations:
Mandate robust
third-party auditing and certification for specific AI systems
Regulate organizations’
access to computational power
Establish capable AI agencies
at national level
Establish liability for
AI-caused harm
Introduce measures to
prevent and track AI model leaks
Expand technical AI safety
research funding
Develop standards for identifying and managing AI-generated content and
recommendations
Conclusion
8. Comment to the Open Letter (Proposal for moratorium)
If it is not disrespectful, I
would like to start by asking the idiomatic "What are we to make of
this?" My spontaneous reaction, if not response, is to think of a book
that left a deep impression on me when I read it more than forty years ago:
William E. Akin's Technocracy and the American Dream: The Technocrat
Movement, 1900-1941 (1977). I perceive it as the painstakingly
researched, passionate life-work of an apparently forgotten "Associate
Professor of History in the Loyola Campus of Concordia
University, Montreal".
Reflecting upon my impression, I drew the conclusion
that in my present situation I cannot afford to do what I should: write an
analog of that book focused on the ongoing AI and AGI hype. In doing so I would
be writing more than a doctoral dissertation on these matters, including a
connection to the history of bureaucracy seen, like AI, as a
system of logical rules working on collected data, and related to technocracy
as indicated in Wikipedia’s section on “See also”.
On the other hand, as I write in my general
disclaimer in the link that initiates the list of “Contents” at
the beginning of the present essay, such an endeavor should not be necessary because
"it all" is already written in Akin’s above-mentioned book, the
problem being what Plato writes in my quotation above, and what is repeated on
later occasions in such books as, say, those by
Richard Stivers. Let's see some excerpts from Akin regarding
"technological unemployment", which a doctoral dissertation at my
university department translated into a Marxist analysis of the role of labor
unions under the title Work-Oriented Design of Computer Artifacts (published as a book here and commented on
by me in the context of Information and
Theology). The political reason for the rise and establishment of labor
unions is today forgotten in the facile wishful thinking that technological
unemployment caused by AI-AGI will be avoided by adherence to “ethical
standards” or offset by “universal basic
income”. As if it were a substitute for religion, as
expressed by pope Leo XIV:
“Sensing myself
called to continue in this same path, I chose to take the name Leo XIV. There
are different reasons for this, but mainly because Pope Leo XIII in his
historic Encyclical Rerum Novarum addressed the social question in
the context of the first great industrial revolution. In our own day, the
Church offers to everyone the treasury of her social teaching in response to
another industrial revolution and to developments in the field of artificial
intelligence that pose new challenges for the defence
of human dignity, justice and labour”.
In doing so the Open Letter trivializes the whole
question of unemployment, which is not only unemployment but rather a
subversion of the societal relation between human beings, a growing proportion
of whom are cut off from expressing their opinions in the dialogues and
discussions of everyday life, as these deal with the motives, results and
improvement of daily work. And, if there is unemployment, why do those who are
still employed not profit from progressively shorter working time, possibly
compensated by employing the unemployed? Because those
who are still employed, partly in the operation and development of AI machinery,
and their labor unions, prefer to continue working as long as, or even longer
than, before, with a gradually increasing income: the “economic growth” that allows
for a higher consumption of goods and services such as gadgets, entertainment,
drugs, holidays and tourism in their limited “free” time. By the way, why the
increasing demand for drugs, which also drives the development of criminal
gangs and organized crime? This may be because “freedom” is not understood as
freedom from restraints on doing good or ethically better voluntary ideal
activities (as old retired professors finally try to do…), but freedom to do
the misunderstood democratic “whatever I and we happen to want, or feel that we
need”. Behind all this there is also the problem of taxation of income, and of
sales taxes, since the promises of welfare that help to keep governments in
power presuppose the financing of welfare. Not to mention the political mess of
tariffs on the import and
export of goods, and of rates of interest to the (owners of venture) capital and
labor that develop and produce the AI equipment and its marketing.
These kinds of problems, which point to the historical
worldwide debate of political economy, including Marxist
thought and criticism of
Marxism, make evident what I see as an infantilization
of the mass-media reporting of societal problems that, exceptionally, happen to
be treated in selected texts. Examples are William Akin’s Technocracy and the American Dream (see below) and Christer Sanne’s
(English presentation and shorter texts here, archived here) great opus in
Swedish Arbetets Tid [The time, or duration, of work; Swedish
review here], with its
encyclopedic list
of 14 pages of references in several languages that can be
retrieved by this link, substituting
its end thirteen times, from 7681 to 7695. Occasionally even newspapers such as
Aftonbladet on June 20, 2024 can be
as deep-going as Mattias Beijmo’s article “Hur lite kan vi arbeta – utan att
må dåligt?” [How little can we work - without feeling
bad? Archived version here]. In it he
relates the text-to-video model AI
Sora and the Microsoft
Copilot to the history of the Swedish politics behind the 1919 law
on working-time limits, plus the trade unions’ historical
work by Rudolf Meidner and the Rehn-Meidner
model of economic and wage policy, the whole completed with comments by the
economist Linn Spross and references
to intuitions in science-fiction literature: Douglas
Coupland’s Generation X: Tales for an
Accelerated Culture and Iain
Banks’s series The Culture. What is not perceived is that the whole is a deep
question of cultural values, which, as I try to show in another
context, ends, as many computer and information issues do, in a theology of needs and
duties vs. wants and sacrifices.
In other words, the rationale of the need and impact
of AI does away with the whole political, economic and moral problem of the
Western world, as it may be seen represented by the motivation and
consequences of the whole “scientific” work of Karl Marx and the relations between
communism, socialism, and liberalism. It was a work which in turn was a
masterwork of unstated atheistic postulation of modern logical-empiricist
science and technology allowing “analyses” by logical and political means,
whatever they theoretically and practically are supposed to mean in the
Marxist conception of the modern world.
This leads in turn to an unending criticism of criticism, ignored in AI,
or debate that may be exemplified by the quote from the historian Paul Johnson
in Wikipedia’s section on “criticism”, who
in his 1988 book Intellectuals wrote that Marx "developed
traits characteristic of a certain type of scholar, especially Talmudic ones: a tendency to accumulate
immense masses of half-assimilated materials and to plan encyclopaedic
works which were never completed; a withering contempt for all non-scholars;
and extreme assertiveness and irascibility in dealing with other scholars.
Virtually all his work, indeed, has the hallmark of Talmudic study: it is
essentially a commentary on, a critique of the work of others in his
field."
This is rebutted in a quote from the Marxian economist Richard
D. Wolff, with a hidden traditional suggestion of antisemitism
(cf. the above references to “Ashkenazi intelligence”; once again, cf. here, here and here), and so on, as
outlined in my own essay on
unending sterile “debates”, of the kind that will also characterize the debates about AI
and AGI:
Johnson's passage describes Marx as a type of scholar that is not in
reality productive, drawing similarities to Talmudic studies (Jewish studies of
the Talmud). Arguably a racist statement.[6] Here Wolff describes Johnson's book: "Criticism
as malicious gossip [...] a right-wing tirade of rage is vented against
left-wing social critics, intellectuals in general, Jews, women, and most of
the others who compose the usual targets of such mentalities."
It is left unclear what “such mentalities” are, if not simply the political
right wing in the play of the reduction
of religion, especially Christianity, to politics, if not to
“science”. The whole, forgetting Marx & Co., can be seen as further
complicated by the paradox of the “time saving” allowed or promised now by AGI,
leading to the puzzling question of quality of life and welfare, framed long
before the appearance of AGI, namely in an epochal book (for those who see the
need and want to think further) written by the professor of political economy Staffan
Burenstam Linder and published
in 1969, Den rastlösa välfärdsmänniskan: Tidsbrist i överflöd
– en ekonomisk studie, later
translated into English under the title The Harried Leisure Class (Columbia University Press, 1970). See especially, to
begin with, chapter 6 on “the rational in a growing irrationality”:
“[T]he many
ramifications of the relationship between increasing goods and decreasing time
in our economy. As time becomes increasingly scarce, there is a need to
continually reallocate time among competing goals and needs. Inevitably, values
begin to change in the reallocation process and the whole quality of life is
altered.”
Returning to the Open Letter, its key issue is that
the request for a moratorium
surreptitiously subsumes a political democratic process while assuming that its
implicit technocratic view, expressed and supported by computer scientists, can
be reconciled with a misunderstood or ignored mythological
democracy. It is a democracy that, according to technocratic
"policy recommendations", mandates auditing and certifications, regulates
access to computational power, establishes
agencies and liabilities, introduces
measures to prevent AI misuse, and expands
funding for AI safety research.
By the way, compare all this with, e.g., the
implications of “How
the data-center boom became a political battleground” in The Economist (October 10, 2024).
Related to mythological democracy’s lame “tautological
voluntary and non-binding reliance on experts”, it is then interesting to
note what The Economist (November 23rd,
2023) writes with reference to the above-mentioned “boomers”, in the context
of Sam
Altman’s return marking a new phase for OpenAI:
[Boomers] will worry politicians, who are scrambling
to show that they take the risks seriously. In July President Joe Biden’s
administration nudged seven leading model-makers, including Google, Meta,
Microsoft and OpenAI, to make “voluntary commitments” to
have their AI products inspected by experts before releasing them to
the public. On November 1st the British government got a similar group to sign
another non-binding agreement that allowed regulators to test their AIs for trustworthiness and harmful capabilities, such as
endangering national security.
And now over to Akin on technological unemployment
(pp. 156-164):
The charge of technological unemployment was the most
relevant economic issue. Far more serious, lengthy, and passionate discussion
took place over this question than any other. It was the most difficult to
resolve and perhaps of most lasting significance. […]
Somewhat surprisingly, the spokesmen for business were
among those unable to present a solid front when the issue first arose.
Businessmen have never wished to admit the existence of technological
unemployment. Aside from the serious economic problems involved, to do so
raised equally important questions regarding the social and moral values of
capitalism, which both entrepreneurs and corporations preferred to leave
unmasked. It cut through and could potentially destroy, their most cherished
notions: their easy identification of technological change and progress, their
assumption that change was compatible with stability, their belief in the
social and moral value of work, the idea of the self-made man and the theme of
individualism, and the necessity of laissez-faire. […]
The vehemence with which leading scientists and
engineers repudiated the concept of technological unemployment reflected their
fear that it constituted a frontal attack on their professional and social
roles. Their disquiet was even greater than that of businessmen. To them the
benevolence of science and technology was indisputable. […]
The assurances of scientists and engineers, that
technology would continue to create new products and industries, "each
demanding an army of workers," also lent support to the optimistic
common-sense view despite the fact that it required the same kind of blind
faith. […]
Most businessmen […] followed a mechanistic theory
which held that technology reduced costs, thereby creating greater product
demand, which increased production, which in turn necessitated higher
employment. […]
To avoid serious repercussions, the economy's
well-being ultimately depended on constantly increasing the level of
consumption. […] The logic of those who argued the case dictated that over the
long run increasing purchasing power was linked to economic growth. To insure minimum dislocations, economic growth must match
technological improvement. […]
As important as the question of technological
unemployment was, the broad implications of technology for society was of even greater significance for some. One of the most
pressing issues raised was the effect of technology on social organization. Was
there a cultural lag that had to be lessened or bridged? […] Did it require a
technical elite to engineer society as well as the machine? […]
One of the shortcomings of the proponents of
technocracy was their failure to reconcile the technocratic view with
democracy.
In other words: the Open Letter is a replay or
reenactment of the "philosophy" of the technocratic movement with its
problematically related social
responsibility (cf. also here and, on Walter Rautenstrauch, here), the more
so when computerization in general and AI in particular can seldom if ever be
analyzed for profitability in terms of cost-benefit
analysis, but are valued in terms of saving the costs of human
labor. And it is more than social responsibility; it
is a matter of human love in the sense of Thou
shalt love thy neighbour as thyself (Matt. 22:39). It is not only a question of
"unemployment". A modern technocratic movement in its
computer-artificial-intelligence dress needs neither the opinion nor a
dialog, and even less an "impossible
debate", with an untrained, not technically gifted and supposedly
unintelligent workforce that meets the impossibilities
of a "human-computer interaction" that is a
substitute for human language. It needs only the ultimate political legitimation
of a more or less mythological Democracy. It is a Democracy that today does not
display references to biblical quotations, in order to expect that new computer
technology will create enough new influential jobs and will need neither the
Bible nor labor unions in order to allow a universal basic
income to an increasing world population. And it is a Democracy that is used
as a dumping place for personal responsibility, as I write in another
theological context, and as is done even by children in the context of global
warming, where the “researcher” is the scientist and the
“manager” is the political body of democracy:
Politics, for instance, is often also
explained away by scientists and engineers who assume that all would be alright
if only politicians followed the scientists' recommendations, as denounced in
the famous paper by Churchman & Schainblatt, The researcher and the manager: A dialectic of
implementation, and its commentaries.
More than so, the Open Letter ignores not only the
paradox of computerization not saving manpower and time, and not allowing more time
for culture and relaxation, as surveyed in Staffan Burenstam
Linder’s The Harried Leisure Class and in my text on the Meaning of Human-Computer Interaction. It also largely
ignores one main motivating force, the commitments of the military-industrial
complex as in the latest conflict between Russia and Ukraine, and the
sociology as well as the political science lying behind the clash between duly
secularized socialism and liberal capitalism, a socialism with roots in the
absence of labor unions for countering the advent of atheist industrialization. Today it is
like being enabled to ask ChatGPT-2 (or the later paid GPT-4) the questions quoted
above in this essay about the causes and the solution of the Russia-Ukraine
conflict, while one main force of the ongoing USA research may be to complement
the logical-mathematical rape of mind and nature by a quantum physics for nuclear
weapons, with the human-computer interacting AI rape of the human mind for
the design and implementation of self-driving unmanned
weapons as analogs to drone warfare (as here, in Ukraine).
This Open Letter, or proposal for a moratorium, will
obviously not stand alone. It was, for instance, followed by a related AI Safety Summit in London on 1-2
November 2023, leading to the so-called Bletchley Declaration (published by
the government of the UK, officially represented by the Department for Science,
Innovation and Technology, the Foreign, Commonwealth & Development Office, and
the Prime Minister’s Office, 10 Downing Street). It contained, among other things,
the following typical policy thoughts, which can be expected to recur in a
plethora of coming AI conferences around the world (my italics):
Particular safety risks arise at the ‘frontier’
of AI, understood as being those highly capable
general-purpose AI models, including foundation models, that could
perform a wide variety of tasks - as well as relevant specific narrow AI that
could exhibit capabilities that cause harm - which match or exceed the
capabilities present in today’s most advanced models. Substantial risks
may arise from potential intentional misuse or unintended issues of control
relating to alignment with human intent. These
issues are in part because those capabilities are not fully understood and
are therefore hard to predict. We are especially concerned by such
risks in domains such as cybersecurity and biotechnology, as well as where
frontier AI systems may amplify risks such as disinformation.
There is potential for serious, even catastrophic, harm, either
deliberate or unintentional, stemming from the most significant
capabilities of these AI models. Given the rapid and uncertain
rate of change of AI, and in the context of the acceleration of
investment in technology, we affirm that deepening our understanding
of these potential risks and of actions to address them is especially
urgent.
Yes, indeed: “These issues are in part because
those capabilities are not fully understood and are therefore hard
to predict”. “Capabilities”, or especially risks, not fully understood, or not at all understood? This was the
reason for the upheaval in the episode of the worldwide news in
November 2023 of the sudden
sacking and reinstatement of genius-wizard Sam Altman from the
position of CEO of OpenAI, the creator of the (excessively?) successful ChatGPT.
The news agencies repeated that the explicit reason given for his sacking had
been “unclear-vague” shortcomings in his communications with the board. They
did not dare to advance the hypothesis that the OpenAI investors, employees and
prospective clients welcomed Altman’s optimism in downplaying the
“existential risks” of AI (see above), in contrast to the OpenAI board’s
eventually mounting political-ethical concerns. This may be an illustration of the coming ethical and political
struggles in the development and application of AGI, which may be hopeless
if one considers the risk that today political Democracy is a myth. It is a myth hidden behind non-binding assurances that
vague, undefined and problematic “productivity-efficiency-effectiveness” (see
the “theoretical-conceptual framework” in the hodgepodge here) will be
“responsibly” managed with expertise, following democratically decided
governmental rules after a rich dialogue with all affected parties, avoiding
the production of fake information or information implying “existential risks”,
controlling for protection from access by extraneous irresponsible influence
of non-authorized personnel, all of it guaranteed by security-safety measures,
etc. etc.
I repeat now from the beginning of the previous
paragraph: “These issues are in part because those capabilities are not
fully understood and are therefore hard to predict”. “Capabilities” or
especially risks not fully understood, or not
at all understood:
For instance, the technocratically avoided issue of the
unemployment of people who are simply sacked and replaced by AI-AGI. Even in
so-called intellectual work, mediocre professionals may be replaced by
AGI/ChatGPT, but they may also improve their productivity with its help, while
the best top professionals may not improve their performance because they were
not, to begin with, reasoning only
“logically-empirically”, like slow computers, without even taking a stand in the
debates about logical
empiricism; the consequence is that they will not continue to
be rated as “best”.
Paradoxically, this defective understanding is hoped to
be achieved by a future long series of conferences. Cf. the above-mentioned AI Safety Summit (never mind what the
concept of safety or security is or should be, as suggested above), which is only one conference in the first waves of
the “conference industry” in the increased hype of AI in the years 2022-2023,
immediately followed by, e.g., conferences in London on November 9, 2023 (Generative AI Summit), on November 27-28 (AI World Congress 2023), and on November
30-December 1, 2023 on Delivering AI and
Big Data for a Smarter Future (also AI & Big Data Expo). In general,
risks are not understood, and AI/AGI are the latest reminder of that in the ongoing
societal computerization, which may soon be followed by news of applications of mind-blowing computational neuroscience, brain-computer interfaces and the like, employing, among others, neurophysiologists with skills
exemplified by the president of the
Allen Institute for Brain Science, and others who are
introduced as being, e.g., a “neurophysiologist and computational neuroscientist”
and work on the “neural basis of consciousness”. That is: whatever that means
for even a highly educated, democratically minded citizen who will exercise his
democratic duties in future elections that will direct the desirable future of
national scientific efforts. It all can be seen as a gigantic irreversible
global experiment on humanity, as illustrated also in my already mentioned
paper on the
meaning of human-computer interaction. Let me complete this section by
replicating a quotation from the epilogue of my paper on
computerization and logic, which I hope may convey an image of the increasing
scope of techno-science and of computer-logical dreaminess, from the atomic
bombs on Hiroshima and Nagasaki up to today’s super-human, divine
intelligence. It can be seen either as a hymn to a
fantastic, promising, limitless technological progress (towards…) or as a
document of recurring naïve technocratic hype.
As an illustrative tribute to the genius
and naivety of the celebrated great engineer Vannevar Bush, I
will conclude with a quotation from his most famous prophetic article “As we may think” in The Atlantic Monthly issue
of July 1945, which I recommend to readers for a time perspective on
the drive for computerization:
It is readily possible to construct a
machine which will manipulate premises in accordance with formal logic, simply
by the clever use of relay circuits. Put a set of premises into such a device
and turn the crank, and it will readily pass out conclusion after conclusion,
all in accordance with logical law, and with no more slips than would be
expected of a keyboard adding machine.
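Bush’s crank-turning machine is, in modern terms, a forward-chaining inference engine. As a minimal sketch of the idea (my own illustration, not Bush’s relay design; the function name and example propositions are mine), a few lines of Python suffice to make a set of premises grind out “conclusion after conclusion” by repeated application of modus ponens over simple if-then rules:

```python
def turn_the_crank(facts, rules):
    """Forward-chain over if-then rules until no new conclusion appears.

    facts: set of atomic propositions taken as premises
    rules: list of (premises, conclusion) pairs, read as
           "if all premises hold, conclude the conclusion"
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if every premise is derived, add the conclusion.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Premises "put into the device": Socrates is a man; men are mortal;
# mortals die. Turning the crank passes out every consequence.
facts = {"man(socrates)"}
rules = [
    (["man(socrates)"], "mortal(socrates)"),
    (["mortal(socrates)"], "dies(socrates)"),
]
print(sorted(turn_the_crank(facts, rules)))
# → ['dies(socrates)', 'man(socrates)', 'mortal(socrates)']
```

The point of the sketch is Bush’s own: everything the machine produces is already contained in the premises and rules fed into it, “with no more slips than would be expected of a keyboard adding machine”, and with no more judgment either.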
9. Case study: “Creating safe AGI that benefits
all of humanity”
On December 24, 2023 I accessed OpenAI’s
policies and terms of use (effective December 23, 2023; previous versions here) for the
European Economic Area (EEA), Switzerland, and the UK (for those living outside the EEA, Switzerland or the UK, see the other terms of
use). From the contents I made the following core selection as material for
the case study, in the form of subsequent comments along the above text in this
essay, keeping most of the original layout except for the font type:
Who We Are
OpenAI is an AI research and deployment company. Our
mission is to ensure that artificial general intelligence benefits all of
humanity. For more information about OpenAI, please visit https://openai.com/about. Our Services are
provided to you by:
- OpenAI Ireland Ltd, a company
incorporated in the Republic of Ireland with its registered office at 1st
Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01
YC43, Ireland and company number 737350, if you are resident in the EEA or
Switzerland.
- OpenAI, L.L.C., a Delaware
company with its registered office at 3180 18th Street, San Francisco,
California 94110, United States and company number 7063675, if you are resident
in the UK.
Additional Service-Specific Terms
Depending on the specific Service or features you use,
additional Service-specific terms and policies may apply to your use of our
Services. The key ones to be aware of, and which form part of these Terms, are
described below:
- Usage Policies: these policies explain how you may use our Services
and Content.
- Service Terms: these terms apply when you use certain Services or
features;
- Sharing & Publication Policy: this
policy sets out rules for when you share Content;
Using Our Services
What You Can Do. Subject to your compliance with
these Terms, you may access and use our Services. In using our Services, you
must comply with all applicable laws as well as the Service-specific terms and
policies listed above.
What You Cannot Do. You may not use our Services for
any illegal, harmful, or abusive activity. For example, you are prohibited
from:
- Using our Services in a way that infringes,
misappropriates or violates anyone’s rights.
- Modifying, copying, leasing, selling or distributing
any of our Services.
- Attempting to or assisting anyone to reverse engineer,
decompile or discover the source code or underlying components of our Services,
including our models, algorithms, or systems (except to the extent this
restriction is prohibited by applicable law).
- Automatically or programmatically extracting data or
Output (defined below).
- Representing that Output was human-generated when it
was not.
- Interfering with or disrupting our Services, including
circumventing any rate limits or restrictions or bypassing any protective
measures or safety mitigations we put on our Services.
- Using Output to develop models that compete with
OpenAI.
Content
Your Content. You may provide input to the Services (“Input”),
and receive output from the Services based on the Input (“Output”).
Input and Output are collectively “Content”. You are responsible for
Content, including ensuring that it does not violate any applicable law or
these Terms. You represent and warrant that you have all rights, licences, and permissions needed to provide Input to our
Services.
Ownership of Content. As between you
and OpenAI, and to the extent permitted by applicable law, you (a) retain your
ownership rights in Input and (b) own the Output. We hereby assign to you all
our right, title, and interest, if any, in and to Output.
Similarity of Content. Due to the
nature of our Services and artificial intelligence generally, Output may not be
unique and other users may receive similar output from our Services. Our
assignment above does not extend to other users’ output or any Third Party Output.
Our Use of Content. We can use your Content
worldwide to provide, maintain, develop, and improve our Services, comply with
applicable law, enforce our terms and policies and keep our Services
safe.
Opt Out. If you do not
want us to use your Content to train our models, you have the option to opt out
by updating your account settings. Further information can be found in this Help Center article. Please note
that in some cases this may limit the ability of our Services to better address
your specific use case.
Accuracy. Artificial intelligence and machine learning are
rapidly evolving fields of study. We are constantly working to improve our
Services to make them more accurate, reliable, safe, and beneficial. Given the
probabilistic nature of machine learning, use of our Services may in some
situations result in Output that does not accurately reflect real people,
places, or facts.
When you use our Services you
understand and agree:
- Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
- You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
- You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
- Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
My comments:
I see the policies and terms of use mainly as an expression of goodwill, and mainly as a complex juridical disclaimer built on two main concepts: service and user. I myself wonder whether OpenAI, without any notice, deleted my account (as of January 2024) and began refusing my IP address (I can no longer either log in or sign up from my regular IP address) because I had used its services for the AI-critical comments shown above in the present text, which were perhaps judged to infringe the last paragraph above on “When you use our Services you understand and agree”.
These policies and terms of use can and must be seen in relation to production-consumption, or means-goal, or producer and consuming client: the scientist as producer of a tool and the politician as user of the tool. I have already dwelled on the problems of this conception focused on “tool” in my paper on The meaning of human-computer interaction, and for reasons of space and time I must refer the reader to that source’s section dealing with the tool.
The core of the problem, however, can be seen in the definition of producer, product and user in terms of the so-called morphological (or structural), functional and teleological classes introduced in the earlier mentioned DIS. That account surveys the complications: one cause produces one effect, but another cause, or several other causes, can also produce the very same effect; and some of these productions, it is later realized, can happen only under previously unknown external conditions that are under the control of others, who happen to be motivated by the desire to see or have the result, or sometimes motivated to prevent it. And so on. In the middle of all this, mathematics and its abuse has a special simplifying function, as I consider in the section of my essay on the famous rejected parts of Brouwer’s dissertation (rejected probably for the same cultural reasons that overvalue mathematics and logic) about the foundations of mathematics, where I write the following:
In presenting The
rejected parts of Brouwer's dissertation, [Walter
Peter] Van Stigt
refers to Brouwer's interpretation of causality as essentially mathematical:
the ability to link events in the mind, to see sequences and repetition of
sequences in time, to link sensations as the immediate source of awareness of
time and discreteness. It is the source of man's power to predict the future
and interfere in the course of events. This "intellectual or
mathematical" way of looking at the world is not only a one-sided
concentration and interpretation of reality: by ignoring and willfully
removing aspects which deviate from the expected course of events, man
supplements and creates more regularity than exists in nature, he makes the
world linear or "one-sided". The regularity observed in nature is
due to the nature of the measuring instruments, and physical science has value
only as a weapon, not concerning life. It is clearly inferior and has
nothing to do with religion or wisdom. More in detail, in Brouwer's
own words:
"Man has
the faculty, accompanying all his interactions with nature, of objectifying the
world, of seeing in the world causal systems in time. The primordial phenomenon
is simply the intuition of time in which repetition of "thing in time and
again thing" is possible, but in which (and this is a phenomenon outside
mathematics) a sensation can fall apart in component qualities, so that a
single moment can be lived through a sequence of qualitatively different
things. One can, however, restrict oneself to the mere sensation of these
sequences as such, independent of the various degrees to which the objects
perceived in the world outside are to be feared or desired. (The attention is
reduced to an intellectual observation.) The human tactics of "acting purposively"
then consists in replacing the end by the means (a later occurrence in the
intellectually observed sequence by an earlier occurrence) when the human
instinct feels that chance favours the means."
What happens, then, is that the reader may find the explanation tedious and will ask, as I have many times asked, for the whole thing to be summarized in a few words. So what happens is what I have already tried to explain in the context of Information and Debate: people feel that they have no time, no motivation or no capability to read such texts. If they hear that the explanation is in summary partly also found in Russell Ackoff’s and Fred Emery’s chapter on Structure, Function and Purpose in their book On Purposeful Systems (pp. 13-32), they may wish to get a summary of the summary, in words that allow them to grasp the whole in a few hours, or minutes.
But the summary, which appears in DIS (p. 59), is not easily understood and accepted when it states:
Thus in the broader viewpoint one cannot distinguish between science and its politics; it makes no sense to the designer to say that science is a body of knowledge and politics is people, and therefore the two must be different. For the designer it is impossible to optimize the system of acquiring basic knowledge without considering the political problems that such a system generates. The boundaries of “basic research” expand into the area of national policy making, and the client becomes larger than the scientific community.
And, in fact, returning to the text starting some lines above the last paragraph of the selection from the “terms of service”:
1. Given the probabilistic nature of machine learning, use of our Services may in some situations result in Output that does not accurately reflect real people, places, or facts.
2. Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
3. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
4. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
5. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
That is an attempt to express a juridical disclaimer: smart lawyers editing a text that relieves the producer from risks if sued. For me it is analogous to a manufacturer of weapons who advises the buyer/client about the use of the weapons. There are national laws that regulate such industry and commerce, but there is also a world of arms trafficking and criminal possession of weapons related to international politics and conflicts. In our case it is a disclaimer that seeks assurance that weapons will not be given to, and used by, children, but only used by “us” or by democratic and friendly people in friendly countries, and only for self-defense, as only by Ukraine in the conflict with Russia, disregarding all the complications suggested in my essay on the conflict. And regarding the numbered “terms of service” text repeated above:
1. The output may not accurately reflect real people, places, or facts. [Who should and can do what about the fact that it may not be true?].
2. They say that you should [or must?] not rely on output from our services as a sole source of truth [but is it claimed to be truth?].
3. You must evaluate. [But how? This may mean understanding and applying the ignored DIS].
4. You must not use any output that could have a legal or material impact. [We, who “must not use”, but how are we to know whether it “could” have impact?].
5. Our services may be offensive. [But it probably intends to claim that “we” are not offensive].
And who are “we” and who are “you”? Among the “we”, I
can claim that I myself understand and follow “musts” but who is responsible
for the whole of “us” and of “you”, and what does it imply to be “responsible”,
for what, and to whom? To the state, in state
individualism, which is often equated to “society”? All guaranteed
by the “myth
of democracy”?
10. Conclusion: The meaning of hype
All this raises the question of what “hype” means in our context. Today it is not general popular knowledge to know about and relate to the historical issue of Gustave Le Bon’s The Crowd: A Study of the Popular Mind, as the beginning of crowd psychology, and at the edge of Groupthink, with its relation to the wiki-concept and to my own doctoral thesis on Quality-control of Information. Regarding the criticism and influence of the book The Crowd, Wikipedia mentions that feelings of power and security allow the individual not only to act as part of the mass, but also to feel safety in numbers. Today the safety in numbers may lie in the number of fellows on the net who share the misunderstanding or belief, the number of nodes and hits on the Internet, or the volume of big data.
For all its relations to Freudian psychology, I think that the message is better understood in the light of analytical psychology and Carl Jung’s reflections in his Civilization in Transition (Collected Works, vol. 10). I dare to mention my own association to the particular chapter in the book (§§ 589-824) where Jung (mentioned by addicts of the New Age movement) analyzes “Flying Saucers: A Modern Myth of Things Seen in the Skies”. In terms of philosophy of science, this, as well as the case of the technical hubris of the Titanic, may very well be a fruitful, if apparently farfetched, analogy, recalling George Lakoff and Mark Johnson’s Metaphors We Live By. A thoughtful book by the professor of sociology Richard Stivers says it all: Technology as Magic: The Triumph of the Irrational, including the article on “The Case of the University”. It is a serious alternative to Arthur C. Clarke’s perhaps better known phrase “Any sufficiently advanced technology is indistinguishable from magic” (see also here). Never mind that the complexities of the philosophy of technology hide the basic role of the psychic abuse of formal thinking. A reminder of the tragedy behind it all is the historic analogy of the Holocaust, based on a whole culturally advanced nation’s deep popular, historic, and scientific “knowledge” of the minds, behavior, and history of mainly the Jews as an ethnic group. So many references and so much, in part misleading, reading would not be necessary if, as Blaise Pascal remarks in his Pensées (1955, §251), humans would “fall in love with”, or accept and follow, Christianity, which unites psychological exteriority with interiority, or follow at least some of the few great world religions.
I think that the situation is very serious, the more so because the consequences, grave as they can be, will not be perceived except in the long run, in the same way as cultures may degenerate and conflictual relations between people and nations develop until exploding in murders and wars, historically even two world wars. More recently we have the examples of the Russia-Ukraine conflict and the Israel-Hamas war. We also have cultural “civil wars” like feminism (cf. our boomers vs. doomers mentioned earlier), where the gregariousness of nationalism, sex-gender or analytical leaning attracts individuals who do not have a fully developed psychic identity (individuation) and therefore easily achieve “meaning and identity” by melting into a group, nationality, political Right or Left (or “Center”), or a collective. Even with no visible conflict, the consequence can be an oppressive passivation of especially old, weak, ill or handicapped people who will not be able to further their own opinions and interests, as in my illustration of the difficulties and limitations of the already mentioned human-computer interaction.
In the meantime, the hype is invading our daily environment. I picked up a concrete simple example in Sweden, but I should have preferred to wait for another text that I saw much later in the Swedish design magazine Form, No. 2, 2024, p. 71, written by Björn Nordin, with the title “Människan och AI” [Humans and AI]. I suggest that Swedish readers also consult it after reading my analysis below, because it illustrates what is coming to drain or drown us intellectually. By then I had already, painfully, worked out the comments that follow, on the first-mentioned text: an advertising brochure enclosed in and distributed together with the newspaper Svenska Dagbladet in mid-December 2023, with the title (my trans.) AI or HI. I did not see an explanation of what HI is, probably human intelligence. It was produced by the Swedish branch of the Universum Employer Branding Agency. In two pages (pp. 4-5) it offers a series of affirmations about AI. I will begin by displaying a short summary of the text (my trans.), followed by the same text interspersed with my own commentaries [with my emphasis in italics and in square brackets] based on arguments presented earlier in this essay. The brochure begins by telling that:
Already in the year 1589 Queen Elizabeth I rejected a
patent application for a knitting machine for fear that it would put the
population out of work, so concerns about the impact of new technology on jobs
are neither new nor particularly surprising. Already today many types of basic
counselling and customer service are now carried out without human contact. The
CEO of a firm, Futurion, calls AI
"the super colleague who never sleeps", who knows most languages, has
a number of different degrees, and at the same time notes that AI is now able to handle perfectly well on its own many of the skills we were able to study in the past: only when it comes to managing human creativity and critical thinking does it not do so well. The CEO also finds that the
demanded skills of the future will be very much about social and creative skills
such as dealing with people, the ability to put together broad solutions and to come up with original and unusual solutions. New roles and responsibilities will
emerge. We will therefore have to learn and re-learn, probably several times over. It is too early to say what these new
roles will look like, but there is already a growing demand for technical specialists in machine
learning, prompt
designers and AI developers.
Generative AI such as ChatGPT cannot replace the creativity and imagination of writers,
designers and other creative professionals. It requires a level of originality,
emotion and expression that machines cannot replicate, at least not yet. What
do we do with the time left over? We don't know yet. But if we can avoid
spending evenings reading meeting documents, getting help to tidy up the
statistics for the presentation and getting an editable first draft of a
document in a couple of minutes, we
should reasonably have a lot of time to spare. The transition from an eight- to a six-hour working day is possible with increased use of AI; however, it is important to note that it is not only a technological issue but also a
socio-economic and cultural one. Future changes in working habits are likely to be influenced by a combination of
technological advances, policies, and changes in societal attitudes towards work and leisure. Whatever the outcome, we can
say that AI is here to stay, and if you don't want to be left behind, you might
as well jump on the bandwagon.
WITH MY COMMENTS:
Already in the year 1589 Queen Elizabeth I [but,
please, mind the difference between the worlds of 1589 and 2023, and why not
mention also the better studied luddites
among whom I myself may be classified] rejected a patent application for a knitting machine
for fear that it would put the population out of work, so concerns about the
impact of new technology on jobs are neither new nor particularly surprising.
Already today many types of basic counselling and customer service are now
carried out without human contact. [Already commented on earlier in my text]. The CEO of a firm, Futurion, calls AI “the super colleague who never sleeps”, who knows most languages, has a number of different degrees, and at the same time notes that AI is now able to handle perfectly well on its own many of the skills we were able to study in the past: only when it comes to managing human creativity and critical thinking [how to recognize that?] does it not do so well. The CEO also finds that the demanded
skills of the future will be very much about social and creative skills
such as dealing with people [not in
chat bots!], the ability to put together broad solutions [synonym to the earlier-mentioned DIS “systems thinking”?] and to come up with original and
unusual solutions [how to recognize and classify them?]. New roles and
responsibilities will emerge. We will therefore have to learn and re-learn, probably several times over [equivalent
to outweigh and repair short-term truths and solutions?]. It is too early
to say what these new roles will look like, but there is already a growing
demand for technical specialists in
machine learning, prompt designers and AI
developers. [guess: what else?].
Generative AI such as ChatGPT cannot replace the creativity and imagination [what
are they and their relation?] of writers, designers and other creative
professionals [but cf. the earlier mentioned infringement of artistic copyright]. It requires a level of
originality, emotion and expression [what are they and
their relation?] that machines cannot replicate,
at least not yet [remark this not
yet]. What do we do with the time left over? [Left over by whom, to whom? More automation in supermarkets leads to fewer employees, transfer of tasks to customers, and the same length of queues]. We don't know
yet. But if we can avoid spending evenings reading meeting documents [why?], getting help
to tidy up the statistics for the presentation and getting an editable first
draft of a document in a couple of
minutes, we should reasonably have a lot of time to spare [again, who are “we”, employers or fewer employees?]. The transition from an eight- to a six-hour working day is possible [with the same amount of manpower?] with increased use of AI;
however, it is important to note that it is not only a technological issue but
also a socio-economic and cultural one [and religious about
greed, and political, cf. labor unions?]. Future changes in working habits are likely to be influenced by a combination of
technological advances, policies, and changes in societal attitudes towards work and leisure [typical
ChatGPT-text, and what attitudes?]. Whatever the
outcome, we can say that AI is here to stay, and if you [cf.
“we” vs. “you”] don't want to be left behind, you might as well jump on the [whose?] bandwagon.
11. Conclusion: Beyond the hype
ENOUGH WITH (TOO MANY) COMMENTS. What are we to make of this nice streamlined text, streamlined except for my own emphasis and commentaries? It must have been partly and cheaply edited automatically by ChatGPT, like many other forthcoming published texts that can be easily read and accepted but would require attentive reading and critical examination. It is like reading ChatGPT’s answers to my questions about the causes of the conflict between Russia and Ukraine and comparing them with what I write in my paper on the issue. Meanwhile, we can expect the whole of society to be drowned in paying attention to an immense amount of such ChatGPT/AGI advertising. The advertising will include the apparently more ambitious texts such as the above-mentioned artificially intelligent artistic production. Such material will be produced and broadcast worldwide almost daily, also in documentaries on AI, in the coming explosive future of the AI industry.
In this perspective the obvious real productive capabilities of AI/AGI invite the creation of doomers’ dystopias that have already been related to the meaning of the biblical “Tower of Babel” (Genesis 11:1-9). It means the breakdown of human communication, starting from the breakdown of archetypal communication in the family that I portray in a text on Reason and Gender. It stands behind the mythological explosion of what in another context I call mythological disinformation. It follows the advent of human-computer interaction: one additional universal, corrupt formal computer language adopted and abused by the many but understood only by the few. This may cause a collective schizophrenia, or a cultural crisis or lobotomy of the “crowd”. It will keep a few in the crowd busy trying hopelessly to “debate” in response to unending “automatically” ChatGPT-produced texts, including continuous AI advertising, produced by the many people, financing institutions, businesses and employees who have vested interests in continuing to foster the AI/AGI hype that guarantees their own employment or entrepreneurship. This includes science allied to business, and the universities, as in the year 2024 Nobel prizes in physics and chemistry, or in grandiose pronouncements in international journals such as The Economist (November 18th 2024), “AI, science and society: Demis Hassabis and James Manyika”. Recently a colleague sent me the link to a text of the kind that we will see, and be tempted or feel obliged to spend time reading, in many other close versions and authorships in the future, this one published under the aegis of the “OECD AI Policy Observatory” and the “Existential Risk Observatory”: Artificial General Intelligence: can we avoid the ultimate existential threat? And see what is claimed in The Economist (Nov. 20, 2024): “How AI will lead to a new scientific renaissance”.
We can expect a mind-blowing future in trying to evaluate and debunk all such claims, the latest ones internationally endorsed by the scientific loudspeaker of the Nobel foundation in its appointment of the Nobel prizes for the year 2024, which became an advertisement for AI. It shows that mathematization and logification, now also in the form of AI hype, is invading or had already invaded science itself, as in the rape of psyche and nature by the spirit and psychology of quantum physics, shown above in the decree, centered on we or us or they, that I repeat: “success in creating AI would be the biggest event in human history”. It ignores that the myth of democracy attributed to doomers will not cope with the sequel: “Unfortunately, it might also be the last, unless we learn how to avoid the risks”. The question of we-they-who covers the political problem since, as we all might suspect upon the study of daily conflicts as well as past and present wars, “we” often project the source of evil onto “them”. Leading technical AI pioneers may often have a psyche whose emphasis on logic, mathematics and the physical nature of environment and mind implies a downplaying of valuations and feelings (and who dares to think about love?). It can be expressed in categorical atheism and the all too common unproblematic acceptance of the, for many absurd, Turing test, dampened in later discussions of AI by the rhetorical substitution of “smartness” for the problematized “intelligence”. Again: “cultural lobotomy”. Conclusion: when AI pioneers refer to avoiding the risks of AI, they may themselves be one main risk, as Robert Oppenheimer suspected that he himself had been, long after the nuclear bombing of Hiroshima and Nagasaki at the end of the second world war. AI pioneers should seriously consider themselves to be one main risk when discussing the possibility of AI gadgets becoming smarter than humans, acquiring consciousness, and claiming for themselves human rights, including the right to life that is not even accorded to human beings in wars, genocides or, controversially, in abortions, and so on. But we should not be surprised if some AI gadgets renamed AI-humans are finally renamed human beings, in a frame of mind that up to now could be characterized as schizophrenic.
After writing the main part of this text I got a modest example of the consequences of the hype in reading The Economist, which published on August 8, 2024 the article “These are two new books you need to read about AI” (excerpt below), referring to the article published August 10, written by the winner of The Economist Open Future essay competition in the category Open Progress (general nerd and “boomer”?), Frank L. Ruta, responding to the question “Do the benefits of artificial intelligence outweigh the risks?” with the peremptory affirmation that “We need to develop AI that aligns with human values” (sic), set against the book from the year 1936, War with the Newts, by (old “doomer”) Karel Čapek. The article “These are the two new books” states the following:
In the past 12 months at least 100 books about AI have been published in America, reckons Thad McIlroy, a contributing editor at Publishers Weekly [check here, read August 14, 2024, archived here], and many multiples of that have been self-published. At the top of the pile are two new titles, which represent opposing sides of a noisy debate: whether enthusiasm about AI’s benefits should outweigh concerns about its downsides.
Analytical-logical pundits (Sasha Luccioni’s example here, or here) will claim things on the basis of their qualifications in terms of their verve and networking (“marketing noise”), with no possibility of questioning their competence as expressed in an overview of their overall scientific production beyond hits for their names in browsers. Problems that would have required a broad social-systemic understanding of their causes will instead lead to discussions of whether the continuously arising AI/AGI “tools” developed by a few or many businesses may help to solve them. Simultaneously, alarms about an AI bubble will arise, as around August 2024, in analogy to the earlier Dot-com bubble. By that time there were few articles that treated the issue more directly, besides the Social Science Encyclopedia, Reuters, Bloomberg’s Goldman’s Top Stock Analyst Is Waiting for AI Bubble to Burst, and Goldman Sachs’ Gen AI: Too much spend, too little benefit? (a redacted version of the original report published June 25, 2024, 32 pages).
A high rate of unemployment caused by AI/AGI will at the same time disclose the high fraction of present job opportunities that do not require humanity for their production, but mainly or only routine computer-like calculation, mechanical movements, and the consumption of demanded but unnecessary products (including future AI/AGI) that keep the economy going. Promises of the above-mentioned “universal basic income”, financing by credits for consumption, and weapons industries irrespective of global warming will ignore economics, social psychology and politics, the problem of human greed, and the difficulty of practicing the goodness of Christianity and the major religions, all evidenced by wars, cries for “democracy” and government intervention as substitutes for morality, increased corruption, criminality, and the necessity of labor unions related to communism and socialism.
The Faustian bargain reminds us that a good portion of science and technology could have been seen as legitimate in facilitating everyday life at unknown costs (e.g. the problem of electric energy supply, as in Generative AI is a Climate Disaster and How will we meet artificial intelligence energy demands?), but for whom? Credible long-term cost-benefit analysis will not be possible, any more than it will be possible to check the accuracy of statistical predictions of the future effects of new AI applications. When facilitation and increased productivity lead to more unemployment, to the production of nuclear weapons and of ever more lethal weapons for ongoing wars, and cause continuous worldwide pollution, this will raise claims of dangers and of climate warming, leading desperate souls to feel and write analogies to Industrial Society and Its Future (see below). The counterparts will claim to strive for and rely on future solutions by means of “more of the same”: godly super-human intelligence that makes many humans obsolete, confirming that it was Faustian hubris. It recalls the title of Oswald Spengler’s famous book The Decline of the West, which may also be seen as a secular interpretation of, or one more step in the understanding of, the Book of Revelation.
Atheists and those who cannot surmount the difficulties of reading and interpreting the Book of Revelation can instead consider the possibility of having in their minds an Icarus complex, despite vague doubts that have been expressed as to the therapeutic value of such a diagnosis. It displays, however, similarities with other better known alternative concepts, including the already mentioned autism or other diagnoses such as narcissistic personality disorder. The description of the Icarus complex does refer to the prominent American psychologist Henry Murray, whose life story has in turn been related to the famous case of the American mathematician Ted Kaczynski, who is further described as
“a mathematician who went on to be known as the 'Unabomber'. Between 1978 and 1995, Kaczynski murdered three individuals and injured 23 others in a nationwide mail bombing campaign against people he believed to be advancing modern technology and the destruction of the natural environment”.
More sophisticated than simple “luddites” and even than activists in the crusade against global warming, he must have felt an overwhelming, crushing discomfort in contact with the technical-industrial society of his time, a feeling similar to the one felt by many contemporary citizens who today live in complex, febrile megacity milieus but long for a simple life in contact with nature. He authored his own description and interpretation of the Apocalypse in the thoughtful book Industrial Society and Its Future (Wikipedia report here, printed book here). It is a manifesto and social critique opposing industrialization, rejecting leftism, and advocating a nature-centered form of anarchism that today would have a good deal in common with the climate and environmental movements. I treat those commonalities, with emphasis on climate, in another essay on Climate and apocalyptic global warming.
My point here, however, is directed to those who have
enough of a mathematical-logical leaning to have been lured into an
Icarus complex, pathological narcissism, or denial of the possibilities of avoiding
the Apocalypse. Even if they do not have the reported high mathematical genius
and intelligence of Ted Kaczynski,
with an IQ of 167, they had better choose a career in the AI industry, like
the above-mentioned boomers
who are opposed to doomers. Doing so,
becoming boomers, they may avoid Kaczynski’s tragedy. His IQ is to be compared with the
above-mentioned William James
Sidis’ claimed “impossible” IQ of more than 250 (recalling analogous Swedish
AI-hype-related cases, see here, here, and an interview here), which may be
possible if another, more plausible child prodigy, Kim
Ung-yong, indeed had an IQ above 210. But it
appears barely possible when compared with the “saint genius” Albert Einstein,
with an estimated IQ of only 160, all of which led to studies of his
brain, associated with what I
have recalled as “brain mythology”. Sidis’s
story recalls the need to problematize the ignored ultimate meaning of IQ. Such
problematization fits the doomers, his story being a good
example of what even a pure mathematical-logical genius can lead to. It can
lead to an additional “logical” tragedy of crime and war such as the one I recently
described for Ukraine, which can be seen as a logically motivated war
between boomers and doomers. But one may guess that
the best solution is to upgrade the desperation of the doomers
and the exhilaration of the boomers to a better understanding of how to avoid computerization
as abuse of formal science, allied in the driving West to an active Christian
attitude. It would prevent the vain hope that
“democracy” will allow us the “freedom” of setting up more laws, more police
and an extended judicial system for forced control of the future use and abuse
of AI/AGI. It would also prevent us from seeing a coming AI/AGI as the saving
Messiah that leads us towards paradise, alternatively as expecting the
Apocalypse as an unavoidable tragedy.
Nevertheless, not everything is prone to being
misunderstood and perceived with cynicism. For instance, the above-mentioned
“Christian attitude” can be perceived as promising in its mobilization, as
announced in May 2024 by Intouch (Salesians of Don Bosco, USA West), communicating
the launch of the International Salesian Commission for Artificial
Intelligence (ISCAI), including two interesting messages of Pope Francis
regarding AI on (1) “wisdom of the heart” in social communications and (2)
peace (respectively here and
especially here). In the Acts of the General Council of the Salesian Society of
St. John Bosco (official organ of Animation and Communication for the
Salesian
Congregation), No. 440, pp. 38-57, Fr Gildasio
Mendes dos Santos, general councilor for Social Communication, outlines in a
text from 24 July 2023 several guidelines for the education of youngsters in
meeting the challenges of AI. The question is whether AI-AGI research, in view
of the problems surveyed above, offers support for implementing these
insightfully formulated ambitions, e.g. (pp. 46-47):
Living digitally affects the way we express ideas,
create our communication policy, share information, express ourselves, and see
the world and the realities in which we live. This requires great
responsibility so that we can always communicate without dominating, relate
without controlling people, express ourselves without the temptation of worldly
power. We are also faced with challenges such as individualism and relativism,
malaises that take on the traits of self-referentiality, indifference, lack of
respect for nature, up to and including various forms of violence. Sometimes,
even unconsciously, digital communication propels and leads people to
situations of personal and group conflict, even to forms of radicalism. This
can lead to a digital identity crisis. A kind of contemporary version of
Plato’s “cave myth”. Instead of seeing the shadows on the wall of a life
happening elsewhere, the prisoner is forced not only to observe himself, but
also to see others showing themselves on social media. This can transform us
profoundly.
For the rest, it is symptomatic that I never found in
the Swedish media (and barely in the international media) an in-depth interview or
discussion of the matters related to this text of mine. That is, until February
2024, when on the Al Jazeera
English (Europe) news channel, in “The AI Series”, Nobel
Peace Prize laureate Maria Ressa interviewed and
discussed with Urvashi
Aneja (also here), technology
policy researcher and founding director of the Digital Futures Lab in Goa, the subject How current AI developments impact the Global South (also here, plus additional text here), introduced
with the following text, where “the Global South” refers to the view from Goa,
in India:
While many of today’s headline-grabbing artificial
intelligence (AI) tools are designed in Silicon Valley, much of the work that
fuels the boom is based in the Global South, raising questions over who stands
to gain from the technology and at what cost.
Nobel Peace Prize laureate Maria Ressa and the
director of the Digital Futures Lab,
Urvashi Aneja, explore the impact AI is already having on communities in the
Global South – from labour conditions to democracy
and the environment – and why countries need to move beyond “catching up with
the North” when deciding what role AI can and should play in their societies.
In another
text, the very same Urvashi Aneja echoes the same thought I had expressed above about the
“cementing” of human thinking by people in industry, business, finance and
academia, living in sub-cultures or societies in cultural crisis: ”Artificial
intelligence is a status quo technology, as it reproduces the future based on
the past. […] The current model of AI development concentrates power in the
hands of a few. What we need is a new model that puts public interest at the centre.”
I mean, it is dramatically symptomatic that such a finally
welcome, more serious discussion of AI/AGI should come from the “Global
South”. It can also be seen as a part of the “conclusion”, except that a new model that puts public interest at the
center must be seen primarily as
the “old” forgotten model, an ethical
and religious rather than political question, as I suggest in my essay on Information and Theology.
For the rest, in updates
of this paper up to April 8, 2025, I found the main AI problems surveyed
in this paper acknowledged in interviews, as in the Swedish TV program “Smartare än hjärnan” [Smarter than
the brain], with psychiatrists Anna Lembke and (also
psychoanalyst) Robert Waldinger.
12. Conclusion in one sentence
Long after writing the main body of this text, I got
the feeling that the core of the whole problem with AI-AGI consists of the
consequences of a culturally widespread misunderstanding of the essence and
function of mathematics and its related logic.
In one single sentence, I claim that this requires a
further study and development of L.E.J. (Jan) Brouwer’s (vs. David
Hilbert’s) thought about the foundations of mathematics, as
implicit in Walter P. Van Stigt’s The rejected parts of Brouwer’s dissertation on the
foundations of mathematics
(Historia Mathematica, vol.
6, 1979, pp. 385-404). This is suggested in my paper on Computers as embodied mathematics and logic, and applied
in the consequent Computerization and abuse of formal science, whose latest expression is the hype of AI-AGI.
In order to do this it is preferable to have, initially,
the mathematical knowledge necessary to appreciate the texts
referenced at the beginning
of my mentioned paper on Computers as embodied mathematics and logic, and
its summary in Wikipedia’s Brouwer-Hilbert controversy. This may, however, be bypassed by those who are able
to grasp the core of the issue as found in the most relevant text I know
of and quote from, the above-mentioned Walter P. Van Stigt’s
The rejected parts of Brouwer’s dissertation on the
foundations of mathematics (Historia Mathematica, vol. 6, 1979, pp.
385-404). In it, the criticism of Brouwer’s alleged “solipsism and
mysticism” must be deconstructed by understanding it as a reference to the Ego
vs. the unconscious in analytical
psychology, which at the time was not available. This can be
complemented by the downloadable English translation of Brouwer’s Life, Art, and Mysticism (Notre Dame
Journal of Formal Logic, vol. 37, no. 3, Summer 1996), whose title already gives an indication of why the matter “turns
off” the interest of most typical mathematical minds. My understanding is that
all this requires an updating of Brouwer’s implicit view of psychology to
the standards of analytical psychology, as outlined in my discussion of Conscience and Truth.
All this may lead to understanding the need for, and
possibility of, further developing Churchman’s rather abortive references to Carl
Jung in The Design of Inquiring Systems (esp.
pp. 262 and 271), somewhat amplified in the subsequent The Systems Approach and its Enemies (pp. 130f, 170f). The core of
the issue is, in other words, to understand the ultimate reasons for, or rather
causes of, why Brouwer’s advisor, prof. D.J. Korteweg at the University of Amsterdam, required that mainly
chapter 2 of the manuscript of Brouwer’s dissertation be deleted from its
final version. I have personal experience of this kind of academic
disciplinary struggle, both in the Swedish history
of informatics as related to Umeå
University, and from my role as faculty opponent in the
disputation of a controversial
dissertation on computer science at Lund University. Had I not
been familiar with the connection between scientific method, philosophy and
theology, I too could have requested the deletion of controversial parts of the
dissertation as a condition for accepting the invitation to be opponent. These struggles
are a symptom of the problem of inter- or multi-disciplinarity, and they have
in turn been embodied in the academic history of the systems approach at the University of California, Berkeley, as a latest or last attempt to
avoid the divorce between scientific disciplines, philosophy, and
theology. This may be seen as why philosophy, including
philosophy of science, along with theology, is for some interesting reason
explicitly and forcefully rejected even by some recent AI-sympathetic scientists
such as the mentioned Geoffrey Hinton and Frank Wilczek.
The rejection or
oblivion of the connection to theology is what explains the apparently
paradoxical conclusions and consequent invalidation of the supposed highest
values of democracy, as addressed in my essay on Disinformation as a Myth. It also explains the inconsequential yet powerful rhetoric
of facile criticism of AI based upon misleading analogies created by misled
atheists who claim that “AI
has hacked the operating system of human civilization”. It was
already hacked by the time of the world wars, the Holocaust, and later for
instance in the conflict in Gaza with or without
AI. In the Judeo-Christian tradition one may refer to the interpretations of an
unsentimental Apocalypse. In the
Confucian tradition of the I Ching, an account of the life situation
of hexagram
§23 “Splitting Apart” offers, in the
book’s analytical-psychological interpretation (foreword), an alternative
for the “Enlightened Europe”, according to the following excerpted words
referring to “the laws of heaven”:
[…] The
inferior, dark forces overcome what is superior and strong, not by direct
means, but by undermining it gradually and imperceptibly, so that it finally
collapses. […] This pictures a time when inferior
people are pushing forward and are about to crowd out the few remaining strong
and superior men. Under these circumstances, which are due to the time, it is
not favorable for the superior man to undertake anything. […] This suggests
that one should submit to the bad time and remain quiet. For it is a question
not of man's doing but of time conditions, which, according to the laws of
heaven, show an alternation of increase and decrease, fullness and emptiness.
It is impossible to counteract these conditions of the time. Hence it is not
cowardice but wisdom to submit and avoid action.
To Fellow Engineers
Over lunch at a Swedish embassy, the hostess asks an
old engineer if he knows the difference between a lady, a diplomat and an
engineer:
- Do you know the difference between a lady, a
diplomat and an engineer?
The old man calmly said yes and explained:
- The lady, when she says NO, means MAYBE; when she
says MAYBE, she means YES; and when she says YES she
is not a lady.
- The diplomat, when he says YES, means MAYBE; when he
says MAYBE, he means NO; when he says NO, he is not a diplomat.
- The engineer, when he says YES, means YES; when he
says NO, he means NO; and when he says MAYBE... he's not an engineer.
That's it!
-------------
[Forwarded by a
Brazilian colleague, from unknown source, translated from the Portuguese]
Spoiler suggestion: The
engineer is the only one who is supposed to deal mainly with inanimate nature,
and is supposed here to ignore statistical probability and measurement theory,
which introduce the MAYBE. The only obligatory party in the engineer’s dialogue
is the one who, YES, pays his (or her, though I do not advertise
being a feminist) monthly salary. And it has already been recalled above that the
German philosopher and theologian Ernst Troeltsch observed that humor may have
a similar influence on atheists as religion does, fostering humility by
downplaying the importance of the big Ego that is inflated by the logic of YES and
NO.