THE MEANING OF HUMAN-COMPUTER INTERACTION 

An experientially based theoretical HCI-perspective

 


by 
Kristo Ivanov, prof. em., Umeå University

September 2020 (rev. 240109-1430) 


(https://archive.org/details/hci_20200910)

(http://www8.informatik.umu.se/~kivanov/HCI.html)

 

 

 

CONTENTS

 

Link to a General Disclaimer
Abstract
Introduction and an example

Example #1 - Health care center

Preliminary reflections

Interacting - with tools?

Further examples

Example #2 - Telecommunications company

Example #3 - Telephone answering machines

Example #4 - Personal identification number (PIN)

Example #5 - Public transport and parking

Example #6 - Bank safe deposit box

Example #7 - Dental hygienists and antibiotics

Example #8 - Radio Sweden’s summer program

Example #9 - Airliners’ and traffic’s control

Whither HCI

Quagmire or intellectual jungle

Case study of quagmire

Conclusion

Postscript

 

 

ABSTRACT

 

This is a report on reflections upon experiences of on-line computer interactions, relating them to insights into the core meaning of computers, computation and computerization as advanced mainly in earlier texts on Computers as embodied mathematics and logic and Computerization as logic acrobatics. The conclusion is that perceived advances in the HCI-field tend to be sterile, socially and politically crippled patchwork, because they do not acknowledge the nature of the relation between embodied logic, as a human mental construct, and the mental faculties of the human psyche.

(https://archive.org/details/kant-gramont)

 

INTRODUCTION

 

This introduction and the following section on “Further examples” consist mainly of examples of events that I have witnessed (no sampling except my memory) and have personal knowledge of. Most of them are apparently simple, or too simple, but with the advantage of being recognizably commonplace. The last example #9 is too complex and exclusive, but very realistic and crucial. The point is to show that the simplicity hides complexity. The risk is that apparent simplicity invites a narrowly technical conceptualization of HCI and simple injunctions to users, whoever they are – if properly defined – about the use of HCI. This corresponds to patchworking the shortcomings of the logic embodied in computer software with a transfer of responsibility to users, and it illustrates the meaning of what I elsewhere called “computerization as logic acrobatics.”

 

Example #1 - Health care center

In September 2020, during the covid-19 pandemic, an eighty-year-old lady who had an appointment with a doctor at a local Swedish healthcare center felt indisposed and unable to walk and take the bus to the center. Since it was not possible to phone the doctor directly, she had to call the center's computerized telephone exchange. After the call's required initial selections among a series of alternative codes, she chose the one for being called back later by a nurse on duty, with an automatically computed and communicated estimated waiting time of an hour and a half. When the nurse called, the lady informed her about the by then missed appointment.

 

Two weeks later the old lady got an invoice from the administrative authorities for double the amount that would have been charged for a common medical consultation, despite her being entitled to a visit free of charge, her prior accumulated expenses having reached the maximum according to the Swedish welfare rules. The invoice appeared to have been produced automatically, by a sort of "elementary AI", on the basis of the computerized database terminal at the healthcare center and the missing registration of the lady's expected arrival at the time of the appointment, combined with her PIN (personal identification number) linked to her official home address. This is also the address coupled to all Swedish governmental institutions and other entities, including the Swedish Tax Agency and all possible consequent debt collection.

 

Surprised by the unexpected invoice, the lady was directed to Internet links (example in Swedish here) stating that new operational rules for health services "anonymously-legally" stipulate that if an appointment is not canceled at least 24 hours in advance it will be charged at the stipulated amount, even if the patient is entitled to free care, is below age 18 or above age 85. And this also applies (which is not stated in all relevant links) if the patient arrives too late to the appointment, that is, when more than half of the allotted time (often, but not always, an unstated twenty minutes or half an hour) has passed. An inquiry with the nurse on duty resulted in the response that patients all too often simply miss appointments and then motivate this with mere excuses and lies; patients simply do not deserve to be trusted. It is interesting to note something that usually appears only in an "insändare" ["letter to the editor"] of small local newspapers such as Mitt i Värmdö (15 September 2020), under the title "Läkaren är alltid sen" [The doctor is always late]: a patient reports, against the background of having to wait 5-6 weeks for an appointment lasting 30 minutes, that the doctor often appears after a delay of about 20 minutes, sometimes up to one hour. Not to mention waiting times of up to 12 hours at the emergency room of a clinic or hospital.
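
To spell out the "embodied logic" at work, the following minimal sketch in Python renders such a charging rule as code. All names, amounts and thresholds are my own illustrative assumptions, not the actual software of any Swedish healthcare region:

    # Hypothetical sketch of the kind of charging rule described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Appointment:
        charge: float                            # amount invoiced if "missed";
                                                 # reportedly double the normal fee
        canceled_hours_before: Optional[float]   # None if never canceled
        arrived_minutes_late: Optional[float]    # None if the patient never arrived
        allotted_minutes: float = 30.0           # often unstated: 20 or 30 minutes

    def invoice(appt: Appointment, entitled_to_free_care: bool, age: int) -> float:
        canceled_in_time = (appt.canceled_hours_before is not None
                            and appt.canceled_hours_before >= 24)
        missed = (appt.arrived_minutes_late is None
                  or appt.arrived_minutes_late > appt.allotted_minutes / 2)
        if canceled_in_time or not missed:
            return 0.0
        # Charged regardless of free-care entitlement and age ("even if ...
        # below age 18 or above age 85"); sickness, failed phone queues and
        # the doctor's own lateness are simply not part of the coded data.
        return appt.charge

    # The eighty-year-old lady: never arrived, never canceled 24 hours ahead.
    lady = Appointment(charge=400.0, canceled_hours_before=None,
                       arrived_minutes_late=None)
    print(invoice(lady, entitled_to_free_care=True, age=80))  # 400.0, despite free care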

 

The lady in question wondered about her own prior experiences of having to wait at the healthcare center for more than half an hour past the time of the appointment with the physician, and of, on other occasions, having been called at home a couple of hours in advance about the canceling of an appointment because the doctor had stayed at home owing to the sudden sickness of a child. And, in contrast to patients, it is presumed and enforced that doctors, like all public servants and authorities, "obviously must" be trusted, bypassing studies of trust in science and business such as Steven Shapin's A Social History of Truth and Lars Huemer's Trust in Business Relations.

 

The above account, based on personal experience, should be completed with extensive reports of experiences like those with the Swedish healthcare application at 1177.se, matching the national phone number 1177 for healthcare information services, as well as with other examples that can be classified as HCI. The mentioned 1177.se is said to protect the privacy of patients, who have to identify ("legitimate") themselves by means of an identification app furnished by banks and other entities. In practice 1177.se formalizes and enforces which personal health-care data can be inputted, made available, or communicated, for which purposes, from and to patients and healthcare personnel in various entities. Missed appointments may, pending variable rules, be recorded in the personal journal of the patient. Several examples could be adduced but I refrain from doing so for the time being.

 

 

PRELIMINARY REFLECTIONS

 

The healthcare example given above appears to be a system that is also being used for surveilling and correcting the behavior of citizens. In this sense it can be related to one of the first recorded historical investigations of the effects of computerization, Records, Computers and the Rights of Citizens (mentioned below as RCRC, pdf-download here and here), in particular its appendix F (pp. 247ff., but see also appendix E on Computerized Criminal Information and Intelligence Systems), titled Correctionetics: A Blueprint for 1984 (written in 1973!). It contains comments stating that systems for monitoring and control of the population of a prison efficiently create a system with all the earmarks of the worst surveillance data bank any civil libertarian could imagine. Referring to a study at that time to "design a system to enable managers of correctional institutions to make completely objective decisions about the treatment and disposition of criminal offenders", it states that simple substitution of the words "governmental" for correctional and "citizen" for offender in quoted excerpts from the study transforms serious and humane objectives for prisoners into a nightmare for citizens. The RCRC (p. 250f.) goes on to note that most of the information relevant to computerized "correctionetics" is entered in coded form: "The extent to which coding conventions match the underlying structure of the data determines to a very great extent the ultimate power of the computer program to handle any but the simplest sorting tasks." It is worth noting that at the time many Swedes prided themselves on being the first country in the world to have introduced a national personal identification number (PIN), i.e. on coding all its citizens. The RCRC was a controversial American study of whether such an innovation should be introduced in the USA, corresponding to what later became the social security number. This matter can properly be seen as political and cultural, as illustrated by the equally controversial Swedish study that I reviewed in a paper about Is the Swede a Human Being?

 

Superficially it may appear that the example above is not representative of HCI as it is understood today, as manipulation of the "interface" between a human and a computer. But the problem is much deeper. The ultimate question is what is human, what is a computer, and consequently what is an interface, relative to the task at hand. This will be developed throughout the present text.

 

In the example above, coding corresponds to the class "missed appointment", made equivalent to a "missed payment of invoice" that, however, is framed in a historically grounded system of enforced national (and international?) law. In terms of philosophy of science, the problem of coding is essentially the problem of "class assignment" as part of the "teleology of measurement" presented by C.W. Churchman in Prediction and Optimal Decision (chap. 5, pp. 107-110). It can easily be understood that no knowledge of these matters enters into the design of the kind of computer systems we are referring to. In broader philosophical terms, the coarse statement on "the extent to which coding conventions match the underlying structure of the data" portrays the question of the relation of (the syntax, semantics and pragmatics of) logic to the real world. In this sense the whole matter of HCI reverts to the discussions about logic and computerization as considered in my two earlier essays on Computers as embodied mathematics and logic and Computerization as logic acrobatics. The limitations of logic cannot be patchworked by means of ad-hoc fragments of HCI-models and gadgets, or by translating them into a question of "privacy" that hides and avoids the core of politics and ethics, as I tried to show in the book (in Swedish) Systemutveckling och Rättssäkerhet [Systems Development and Rule of Law], a simplified popular version of my doctoral dissertation on Quality-control of information. Or, better, the limitations of logic can appear to be patchworked, but the task is then to identify the "invisible" capricious consequences and to verify that they cannot be alleviated.
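
To make the problem of class assignment concrete, consider a minimal sketch in which the events and the code name are my own illustrative inventions: three very different human circumstances are assigned to one and the same administrative class, and everything the code does not represent becomes invisible to later computation.

    # Three very different human circumstances, one administrative code.
    events = [
        {"patient": "A", "circumstances": "simply forgot the appointment"},
        {"patient": "B", "circumstances": "fell ill; nurse callback estimated at 90 minutes"},
        {"patient": "C", "circumstances": "doctor 40 minutes late; patient had to leave"},
    ]

    def assign_class(event: dict) -> str:
        # The "teleology of measurement": somebody decided, for the purpose of
        # billing, that these realities belong to one and the same class.
        return "MISSED_APPOINTMENT"

    for e in events:
        print(e["patient"], assign_class(e))
    # Downstream software can now only sort on the code; the circumstances
    # that would matter for a fair decision were never part of the data.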

 

This issue has already been the object of considerable attention, e.g. as early as 1972, at the time I happened to present the dissertation on Quality-Control of Information and Hubert L. Dreyfus published his famous book What Computers Can't Do, focused on the "limits of artificial intelligence" - AI. It was followed by an update, What Computers Still Can’t Do, in 1992 (preceded by extended popularization, e.g. in the journal Creative Computing, January and March 1980), with shortcomings that could be inferred from the later book's presentation by the MIT Press:

 

Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind.

 

The issue of HCI thereby appears to extricate itself from Dreyfus' criticism, which was directed towards the first wave of artificial intelligence - AI, which, as Wikipedia puts it, was "founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism". And we will see also that it implied an as yet unmentioned HCI. It is as if the matter had shifted from "artificial intelligence" to "more complex models of the mind", whatever that means beyond a continuous adaptation of AI to the exploitation of human interaction with more "complex" computing technology. That is: what do complexity and mind mean beyond a repeatedly stated pretension to understand and interact with or replace the human mind, a project that is becoming to the fame of the Massachusetts Institute of Technology, while encouraging criticism of technology that in turn is countered with derogatory terms such as neo-luddism and technophobia? It all turns into a squabble with ad-hoc created neologisms, including "AI". All such approaches tend to miss the point of getting to the core of the issue of what intelligence is as related to humans and to reason, mind, brain, psyche or (a symptomatically controversial sort of synonym) soul. Dreyfus, for instance, relies heavily on the philosophy of Husserl and Heidegger. I have already commented on the latter in several contexts that can be identified by going to the search-field (inserted in my homepage). The famous neuroscientist Antonio Damasio tries to think deeper and relies on Spinoza, who is seen to have said some important things about inquiry, as indicated in Churchman’s The design of inquiring systems (pp. 11, 25-27, 62, 69, 71-72, 261, 263, 282). Damasio borrows from literature and psychology some very important, heavily loaded words like feelings and emotions, as I showed elsewhere that the philosopher Kant also does and abuses. Damasio does so without acknowledging the intellectual struggles of the psychologist Carl Jung, and symptomatically does not dare to go into, still less define, intelligence (as related to inquiry). Those who possibly expect Damasio to unravel the relation between brain, computer and intelligence have not understood basic issues of philosophy of science, as presented, for instance, by Abraham Kaplan in his The conduct of inquiry (pp. 323-325 of the 1964 ed.), despite Kaplan himself ultimately being led astray in the interpretation of Kenneth Colby’s work, as others are continuously led astray to mirages by similar findings such as “Newly detailed nerve links between brain and other organs shape thoughts, memories, and feelings” (Science, June 10, 2021).

 

And it is not only a question of mind and the like in relation to logic and computation. There is also the question of "interaction", which subsumes the meaning of reciprocal action. Without going into philosophy proper, R. Ackoff & F.E. Emery's book On Purposeful Systems suggests (pp. 65ff.) that an action upon an inanimate object (no human mind, no psyche, no soul) definitionally elicits a "mechanical" reaction from the inanimate object or from an instinctually driven human, but a response from animate subject(s) after a deliberation, choice or design of an appropriate course of action or inaction that is expected to result in appropriate, ethically desired consequences. The idea is expanded in advanced HCI by means of reference to particular schools of sociology, social psychology, politics and culture, but there is no way to expect an integration of such dimensions except in terms of a systems theory or an overarching religion as alternative to "ideologies". Or, as Wikipedia puts it, present-day models in general center around a steady input and discussion between clients, creators, and specialists, and push for specialized methodologies or frameworks such as activity theory (whatever theory, models, methodologies and frameworks are or should be). In comparison, the conception of Design of Inquiring Systems (chap. 3) proposes that a "steady input and discussion" should take place between carefully defined roles of (1) clients-customers-users, (2) researchers-philosophers-designers-analysts-engineers, and (3) decision makers-managers-executives-politicians, the latter often being neglected despite their representing economy-profitability, politics and ethics in their relations to the clients; to these should be added (4) shareholders-voters, who should but often do not include the affected rest, and (5) society-community-the general public-people, which transcends the narrow meaning of clients and shareholders and is today mentioned especially in matters of climate change and global warming.

 

The decision-makers are those who decide about the budgets and the rules for use of the computer and, in the example above, about the rules for punishing patients like 80-year-old ladies with fines for the "crime" of not meeting doctors' appointments, while doctors who stay away deserve to be trusted. (All this while healthy researchers and philosophers deal with the issues in this present text.) When she "interacted" with the computer at 1177.se in order to book an encounter with the doctor, she could not know that an ignored somebody else, such as a clerk, on the basis of rules changed by a third deciding party, would later also "interact" with the computer in order to use her data for charging an unexpected fine that ignores the circumstances of it all. In other words, all the possibly involved people should know the changing "circumstances" of the possible use of the computer and its consequences for themselves and others. "Knowing" this can be conceptualized as everybody, but especially the clients or stated people serviced by the computer and its software, having and being able to keep in their minds a "user model". All this is at the same time trivial and theoretically-practically impossible, except for (just, paradoxically) trivial cases, and especially when economic-political factors are absent, that is, when nobody is covertly trying, under the cover of complex software and interfaces, to gain power or profit at the expense of others while appealing to "justice". One main point, however, is to realize that these problems may be ignored in HCI, with the responsibility swiftly shifted to the field of social informatics and "sociotechnical interaction", where to my knowledge Rob Kling was one of the few who cared about similar problems. This shift, however, undermines the scientific legitimacy of HCI, since the facile reference to "user models" mentioned above disguises and avoids the basic problem of the limitations, dangers, and consequences of computation, computers and therefore of interactions themselves.

 

 

INTERACTING – WITH TOOLS?

 

A way to explain the theoretical difficulties of interaction at the level of philosophy of science is to realize that the idea of interaction implies, often if not always, that a unique somebody interacts with, implying "uses", "something" such as a "tool". It is important to notice that the same idea appears under the label of computer "support". I tried but failed to capture this idea in a project about Essence of computers: presuppositions for support, returning to it in Expert-support systems: the new technology and the old knowledge. Its only justification is that, by considering the computer to be a tool, the hope or illusion is conveyed that the human may intervene in correcting or completing the shortcomings or errors of the computer - if one knew what "error" is, and consequently also knew the difficulties behind the conceptions of human error and human factor. The core idea of this expression, tool or support, it should be emphasized, is that the tool is under the full control of an all-knowing user, guided by an all-knowing manual (which is also a tool) and possessing the necessary experience or tacit, implicit knowledge, which is not equivalent to "unreliable feelings". Never mind what tool-makers or engineers beyond the user have been doing or meaning. And it happens to be an idea that is, or should be, subsumed in the expression "human-computer interaction".

 


This is viable so long as people have not accepted the controversial presumption that a computer is so "intelligent", whatever intelligence means, that it can be equated to a human. Then it should feel controversial to "use" another person as a tool, even for those who do not care about the Kantian ethics of practical reason and the categorical imperative "Act so that the maxim of thy will can always at the same time hold good as a principle of universal legislation", implying the prohibition of treating humans merely as means. It is, therefore, felicitous if computers neither can nor should be conceptualized as human minds, whatever the often misunderstood "mind" means beyond misunderstood trivial logic. Nevertheless it may already be too late: infantile computer nerds and marketers are already building upon the idea of autonomous AI (whatever programmed autonomy is), including military robots, micro-drones, and the rest, as if it were only a question of supporting campaigns and research for stopping killer robots or, academically, "to ban lethal autonomous weapons". This despite the entertaining presentations of dancing and singing humanoid robots, not always within their explicit context of "Army Technology". The whole is then embedded in academia as represented, for instance, by professor Stuart Russell with a puzzling new perspective of HCI, as at the Center for Human-Compatible Artificial Intelligence at the University of California. Note: human compatibility, a new quagmire to be compared with e.g. human-centered artificial intelligence elsewhere. Humanity (or was it humanism?), as in HCI, seems to be much in vogue, albeit its misuses had already caught my attention as early as 30 years ago in a paper on Computers in human science or humanistic computing?

 

It is therefore quite unfortunate when HCI researchers, as commented in Wikipedia,

 

observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. The term connotes that, unlike other tools with only limited uses (such as a wooden mallet, useful for hitting things, but not much else), a computer has many uses and this takes place as an open-ended dialog between the user and the computer. The notion of dialog likens human–computer interaction to human-to-human interaction, an analogy which is crucial to theoretical considerations in the field.

 

It is unfortunate, because it is not such an analogy that is crucial to “theoretical considerations” in the field, whatever is meant by “theory” and “field”. The computer “has many uses”, but it is problematic to understand what “use” and “user” are, and even more problematic to know what “legitimate and good use” is. Therefore it is also problematic what the “grand old(est) man” Terry Winograd means in the book co-edited with Paul Adler, Usability: Turning Technologies into Tools (1992), contributing to perpetuating the idea of the computer as a used “tool”. The notion of dialog should not liken human-computer interaction to human-to-human interaction. It is entertaining in this context to note what V.A. Howard and J.H. Barton write, quoting D.N. Perkins, in their remarkable book Thinking on Paper (1988, p. 69), which can be seen as dealing with “dialog” between writers and readers (not tools) through HCI, indicating that HCI, on the contrary, can “freeze” knowledge with its premises and suffocate true reasoning:

 

Formal principles of logic are of course implicit in everyday reasoning, but like grammatical rules, it isn’t necessary to know them to use them. […] “Effective reasoning…depends on an active effort to interrogate one’s knowledge base in order to construct arguments pro and con” […]. “In contrast with reasoning as classically conceived, premises change and accumulate as the argument proceeds rather than being given at the outset. Second, the premises are somewhat constructive, elaborating the reasoner’s understanding of the situation rather than merely objecting”.

 

The problem, however, already lies at the root of it all, in that the computer is not basically a tool; in the best case it is an instrument, if the instrument, as in the case of measurement, is not for direct mechanical manipulation but is theory-laden. This is the case with scientific (e.g. optical) or even advanced musical instruments. This stands as the basis for understanding the possible meaning of what is said about measurement in quantum physics, and of an otherwise controversial quotation from the famous, remarkable "rejected parts" of the dissertation by the mathematician Jan Brouwer, also reproduced in an earlier essay of mine on computers seen as embodied logic and mathematics:

 

[T]he physicist concerns himself with the projection of the phenomena on his measuring instruments, all constructed by a similar process from rather similar solid bodies. It is therefore not surprising that the phenomena are forced to record in this similar medium either similar laws or no laws. For example the laws of astronomy are no more than the laws of our measuring instruments when used to follow the course of heavenly bodies.

 

This is the reason why an ambitious international symposium in Sweden in 1979 resulted in an inconsequential hodgepodge of disparate opinions and insights, reported the following year in Is the Computer a Tool?. Nowhere is there a focus on the essence or nature of logic and mathematics and their consequences. Even an insightful participant of the symposium such as Sherry Turkle succeeds only in raising (p. 88) the question of "what computers really are", but regrets the missed "epistemological issue: computation is irreducible". The epistemological background and its limitations can be inferred from a paper she wrote in 1992 with her husband, the ("symptomatically") AI-oriented computer scientist Seymour Papert, "Epistemological Pluralism and the Revaluation of the Concrete", in Journal of Mathematical Behavior, vol. 11 (1). But: and so what? Computation reducible to what, what could it be? Computation of what and what for? Numerical and, or, logical computation? Is computation (what kind of) mathematics or logic? What are the relations between computation and…what else? And, therefore, what is the value of computation, by whom and for whom? Who cares about the question of whether the computer is a tool or not? Will we see a Kantian reduction of this issue to aesthetics and art?

 

The following quotations (pp. 16-17) from the paper mentioned above may give a taste of its analysis, adapted to feminism or, rather, to "women's studies" on the basis of psychoanalytic theory, as contrasted to the analytical psychology in my old introduction to the study of Reason and Gender:


When women neutralize the computer as "just a tool," it is more than a way of withdrawing because of a lack of authenticity in style of approach. Insisting that the computer is just a tool is one way to declare that what is most important about being a person (and a woman) is incompatible with close relationships to technology. [...]


Achieving epistemological pluralism is no small thing. It requires calling into question, not simply computational practices, but dominant models of intellectual development and the rarely challenged assumption that rules and logic are the highest form of reason. 


The most important latter question is then what the "computational practices" or the computer is, if not "just a tool", or a tool for discussion about differences between women and men. Turkle's (and Papert's) approach in their paper "solves" the problem in a "Kantian spirit", as I suggest in my own study of Computerization as logic acrobatics. That is, it all ends in talk about art and artful integration between genders, seen as an integrative reason where reason remains plain logic, supposedly analyzing the human unconscious, all consistent with the limitations of psychoanalysis itself. This seems to be confirmed by the summary of discussions written by Bo Sundin, member of the symposium committee, in his editing of the report Is the Computer a Tool? (p. 128): researchers should not only be scientists; by having scientists and artists present at the same time one could overcome the limitations of science; the academic community could profit from contacts with artistic institutions; and plays, literature, etc. could be used in the discussions. I myself consider, however, that his most important summarizing observation was instead the following (p. 127):

The complexity of modern society is to a large extent due to the lack of traditions like religion, which in older times acted as an organizing principle. The attempt to install computers to deal with the unorganized, incoherent complexity of our society is harmful in two ways. It increases the complexity and avoids actual problems.


And it is not easy to see why a Kantian reduction of the question to aesthetics, art or feministic epistemological pluralism should be of any help for the design of interaction with such tools or instruments, except for making them more marketable, profitable or fun to deal with. It will not do to try to counter the references given above by programmatically declaring them obsolete if they are older than 10 or 30 years, set against the extraordinarily rapid development of computer and communication technology. Without cumulative knowledge, or building upon old knowledge even if for no other purpose than to show what was false, there would be neither science nor technological development (not to mention the problem of the generation gap). This is also the message of C.W. Churchman's last book on Thought and Wisdom (chap. 3 on "Success of failure"), which happens to illustrate the core idea of my own dissertation on Quality-Control of Information. And now, let's see further examples of human-computer interaction, mostly of the apparently most trivial sort that almost escapes common HCI-technicalities, until a last, most complex example that exceeds such technicalities.

 

FURTHER EXAMPLES

Example #2 - Telecommunications company

One main Swedish multinational telecommunications company and mobile network operator illustrates modern trends in the provision of customer service. A customer gets on her mobile phone an automated warning that her subscription to the services ends the following day. She wishes, through her phone, to make a "semi-automatic" renewal of her periodic monthly services-subscription for a sum (99 SEK, Swedish crowns) that is less than her remaining savings in her account at the company, totaling 111 SEK. She sends an SMS to a given special number of the operator with the coded text "Fastpris 99" [Fixed price 99], but she gets a computerized answer that this is not possible because the required amount has no coverage in her account. That is, does the system calculate that 99 is more than 111? Her choice is then to dial and check again that her available amount is 111 and to repeat the request for the renewal, which is again answered with the statement that she does not have enough money left.

The operator's site does not allow for digital or postal messages to people, and it is evident that it does not want time-consuming dialogs or questions requiring human answers. The exception is a particular phone number of the operator where there are usually waiting queues of up to 30-40 minutes. The lady tries this, and her choices among 3-4 steps of alternative codes are successful in that, luckily, this time within 5 minutes she gets into contact with a customer serviceman who, however, does not know what to do and says that he cannot forward this customer's complaint either to software experts or to management. He advises her to try the same procedure again the following day (when her subscription is no longer valid) or to try other paths at the company's internet site. The same day the lady tries again, starting at such an "other path" at a site, arriving at an offer of choice among a series of pre-formatted customer problems. Choosing the one closest to the actual problem, the lady arrives at an offer for "chat", where she had previously, for other problems, been able to chat with a serviceman. After typing a description of the problem she gets a strange chat-answer offering to click and choose among several pre-formatted problems that have nothing to do with her issue. She gives up for this first day.

The problem up to this point obliged the lady to call once the operator company's only available phone number for direct human voice-contact, followed by an inquiry into an internet site in hope of a human chat. The call required several steps of choosing among different digital codes for different services, fulfilling obligatory electronic identification, followed by a phone-queue waiting time of about 10 minutes until human contact with a customer service employee of the operator. The Internet inquiry required the same, except for the personal identification.

 

On the following, second day, after her repeated explanatory reports of what it was all about - that she was not allowed to use her credit amount in order to renew her monthly subscription - a second customer serviceman, in the guise of goodwill compensation, inputted a further 20 SEK (the company's) to her credit, which then amounted to 131 SEK, and asked her to repeat the loading operation for 99 SEK. Nevertheless, the repeated trial also failed: are 99 SEK more than 131? She calls customer service a third time and explains herself - always to a new, different serviceman at the company's call center. He consults an expert technician, promising that he or somebody else will call her back. It does not happen (prolonging the story to the point of exhausting the readers and the writer of this present text…). The lady, repeating the same procedure the following day for the fourth time, was finally informed that the personnel of the customer service had just been informed that the operator company had experienced bugs in their systems. The man at the customer service offers to supplement the account, at the expense of the company, with another addition of 99 SEK corresponding to a one-month subscription free of charge. It is done, but the lady's account, finally totaling 230 SEK, could still not allow a prolonged subscription. Repeating the procedure once again, for the fifth time, now after waiting 10 minutes in a phone queue of 33 people and arriving at being serviced, an automatic voice informed her that the final connection to a human had failed and she had to start again, waiting this time another 10 minutes in a queue of 27 people. After a repeated explanation to a fourth serviceman, who investigated the case for about another 10 minutes, she was at last informed about the mystery. When her account balance was 111 and she wanted to use 99 for a renewal of the subscription, it was not allowed by the "system" because 20 out of the 111 consisted of an unperceived complimentary bonus she did not know about, given to her by the operator company for a reason she did not know - but which, according to an unknown rule of the company, could only be used for expenses outside the conditions-limits of the subscription, for instance as a charge for surpassing the allowed monthly 1 gigabyte of surfing. In other words: the account balance consisted of two unspecified balances to be used for different purposes. And the lady reflected that her feelings were akin to having been robbed and "raped", in the sense of abused, by being obliged to loan her body and mind for a total of at least four hours in order to request to be allowed to spend her own money, being put in a sort of straitjacket. The solution from the beginning would have been for the lady to deposit at least the difference between 111-20 = 91 and 99, that is, to deposit at least 8 SEK, or to never mind about why and how (as many clients are tempted to do) by depositing on her account a massive sum of, say, 300 SEK, hoping for the best. It had not helped that the customer service officials had tried to deposit complimentary additional amounts of 20+99 up to 230, since both were automatically and unknowingly classified as "bonus".
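
A minimal reconstruction, under my own assumptions about names and rules, of the hidden two-balance logic as it was finally explained by the fourth serviceman may clarify why the "system" behaved as it did:

    # The operator's hidden two-balance logic; amounts in SEK.
    class PrepaidAccount:
        def __init__(self, regular: float, bonus: float):
            self.regular = regular  # usable for the monthly subscription
            self.bonus = bonus      # usable only OUTSIDE the subscription, e.g.
                                    # for surfing beyond the monthly gigabyte

        def balance_shown_to_customer(self) -> float:
            return self.regular + self.bonus    # one undifferentiated number

        def renew_subscription(self, price: float) -> bool:
            if self.regular >= price:           # the bonus is silently excluded
                self.regular -= price
                return True
            return False                        # "no coverage in the account"

    acct = PrepaidAccount(regular=91.0, bonus=20.0)
    print(acct.balance_shown_to_customer())     # 111.0 - what the lady saw
    print(acct.renew_subscription(99.0))        # False: 91 < 99
    acct.bonus += 20.0 + 99.0                   # goodwill credits, also coded as bonus
    print(acct.balance_shown_to_customer())     # 230.0, and yet...
    print(acct.renew_subscription(99.0))        # False: regular is still 91
    # The only fix: deposit at least 99 - 91 = 8 SEK of *regular* credit.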

 

The same company also offers another example, when it wishes to reimburse the customer for some amount that exceeds the estimated costs of some service. The customer receives a communication that a certain amount will be reimbursed, but if he happens to follow up, discovers that the reimbursement has not been made, and asks for an explanation, he is informed that he must return to the company's site in order to, after waiting in a phone queue for up to 30-40 minutes, identify himself with a special app and finally communicate the number of the bank account to which the reimbursement can be deposited, despite this number having already been declared known by the company, because it had been used for automatic regular bank payments of periodical fees. When this does not have any effect, an investigation of the personal digital track of invoices at the company's site reveals that the announced reimbursements had already been automatically used by the company for advance payment of new, wrong invoices that must in turn be questioned. And so on.


Reflections upon example #2

I mentioned above that the lady has to start the process all over again, repeating her text and getting strange answers, until a close relative trying to help her understands that she has been dialoging with a chatbot. The relative was an older friend, old enough to know what a chatbot is, and to know its historical precedents in computer science, associated with the computer scientist Joseph Weizenbaum and the famous ELIZA would-be-AI software. Finally, reflecting on the above-mentioned "obsolete" 10-30 year old knowledge: today (September 2020) Wikipedia declares that ELIZA was "Created to demonstrate the superficiality of communication between humans and machines", but I remember that it was instead seen as a step in a glorious development of AI. Today it is deployed as a perfected reality available to customers, thanks to more advanced computer and communication technology. In August 2022 the world press started to spread news, as in The Guardian (9 August 2022), about Facebook-META's "BlenderBot", a chatbot that is supposed to allow anybody's mind-blowing conversation with AI-computers about "anything". Its dangers may direct our attention to the type of literature on the "sick society" after R.D. Laing's work. I myself prefer Carl Jung's reflections on Civilization in Transition (Collected Works CW10). Weizenbaum himself went on meditating about the, for him, surprising positive response to ELIZA's fake conversations, and finally wrote an exciting but inconsequential critical book, Computer Power and Human Reason, in the same principled and righteous but powerless intellectual style as Turkle and Papert above. Weizenbaum launches a gratuitous appeal to the "difference between deciding and choosing" and deplores computers' "lack of human qualities such as compassion and wisdom", while Wikipedia in this context ambitiously refers to this contribution's relation to the "ethics of artificial intelligence" (and human-computer interaction?). For the rest, it turns out that the "logic" of the HCI of the system presupposes that the clients somehow know, from somewhere, that their account balance consists of two unknown parts, and that this balance can be used for different purposes according to temporary, unknown aspects of the pricing of the operator company. The rest of the example does not deserve further comments.

 

 

Example #3 - Telephone answering machines

We have seen in example #2 that one main feature of modern human-computer interaction, a feature that is inherent to the essence of computation and paradoxically even to (computerized) communication, seems to be the attempt to minimize contact of customers, clients, or citizens with the (for management, owners or supposedly taxpayers) costly employees in business or government. Once upon a time, when somebody called the phone number of an absent employee, there could exist a secretary who would note the caller and transmit the message. Then came telephone answering machines where the caller left a message, often proposing to be called back at a certain time. Phone answering machines could be completed with built-in messages telling at what time the person could be found, or else that would divert the call, relieving the called person from any further responsibility. The main "improvement" from the called person's perspective appeared to be to be relieved from further responsibility, like having to call back all those by whom he had been called, and possibly to discourage people from calling at all. The next step became to enumerate most if not all the possible questions that could be put by the caller and to automatically, by voice recognition or choice among alternative dialing codes, redirect him to a series of pre-recorded answers, other numbers, or internet addresses. The "final solution" seems today to be to discourage or outright deny the possibility of calling at all. For business this implies the risk of losing potential customers, and to a certain extent it can be obviated by denying the possibility of calling sales personnel but allowing the call to be directed to a central human secretary or call center with competence for answering questions. This is, as we saw, at the risk of the caller incurring the above-mentioned events. For government and public agencies, however, one final possibility is to outright deny the possibility of calling any human agent. The ultimate idea is to save what in the affluent West is considered most costly for organizations: manpower - yes, for organizations, forgetting the counterpart of unemployment, as I show it is forgotten in the context of implementations of artificial intelligence, and as it is also forgotten in the context of stress from the paradoxical lack of time accompanying increased productivity, as analyzed by Staffan Burenstam Linder in his Den Rastlösa Välfärdsmänniskan (originally written in English as The Harried Leisure Class).

 

And this is often done by means of computerization. It deserves the special analysis I have already attempted elsewhere, given the minimal consideration for long-term social and political consequences, starting with a focus on the further consequences of the "profitability" of savings of manpower, as studied by, say, (in Swedish) Christer Sanne's Arbetets Tid [The Time of Work] and Keynes Barnbarn [Keynes' Grandchildren], and more recently Roland Paulsen's Empty Labor. It recalls the promise of shortening the waiting time of customers-clients at the checkout queues, starting in supermarket chain stores, thanks to the introduction of the European Article Number (EAN, later International Article Number), without considering that the result would be not the shortening of queue-times but the reduction of the number of cashiers and the transfer of cashier tasks to self-serving customers at automated cashier terminals. It is an example of what has been called the trend towards Heteromation, or a new "division of labor between humans and machines, shifting many people into work that is hidden, poorly compensated, or accepted as part of being a 'user' of digital technology".

 

The additional example promised above is related to the previously mentioned site 1177.se. It is easy to understand that physicians, at least those who are employed and paid in the Swedish welfare system, do not wish to get phone calls and e-mails from patients (who are not really "customers"), since this would increase their workload. I myself did not know how taboo such a practice was, since some physicians did approve of such simple communication, e.g. for being informed about the results of ongoing treatment. I was shocked, however, when a heart specialist who was going to treat me for an emergency crisis of high blood pressure suddenly disrupted our relation and canceled a coming appointment, refusing to treat me further - just because I had sent him a short e-mail in advance in order to prepare for the later consultation. This event motivated my notifying the authority for the monitoring of medical ethics, since a delay in treatment could put my life at risk in case of stroke. The history of this notification is documented in a summary available on the net. The case of 1177.se, however, illustrates the trend in the perspective of HCI, since it shows the increased burden on the patients - sometimes seen as clients or customers - in their relation with doctors/physicians seen as decision makers, managers, public officials. Contacts are generally obligatorily channeled through 1177.se under the assumption that it guarantees the privacy required in medical contexts. It obviously requires a computer or cell phone, a login supported by an electronic identification-legitimation (which in turn has required logins and legitimation), all based on the PIN personal identification number (commented upon above), followed by a series of HCI-steps consisting of choices of coded text that identify the relevant health care center, hospital or clinic. Arriving at the latter, there are generally neither email addresses nor phone numbers, and there may or may not be a choice about e.g. requesting a new appointment or canceling one, renewal of drug prescriptions, reading selected texts out of one's own medical journal, or writing a message of a maximum of 2000 characters to a certain person or kind of person, whose answer will later be announced per SMS as having become available, but with a text that can only be read at the 1177.se site after a renewed login. My highlight, however, is to remark that certain clinics do not allow the sending of messages at all. Such messages may require qualified manpower, such as doctors, who are expensive or not available.
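
The following sketch reconstructs this messaging workflow as experienced by the patient; every step name and field is my own reading of the account above, not the actual architecture of 1177.se:

    # Hypothetical model of the patient's burden in the 1177.se message flow.
    MAX_MESSAGE_CHARS = 2000

    def send_message_to_clinic(clinic: dict, text: str) -> list:
        steps = ["log in with electronic identification (bank 'legitimation' app)",
                 "navigate coded menus to: " + clinic["name"]]
        if not clinic["messaging_enabled"]:
            # certain clinics simply do not allow messages at all
            raise RuntimeError("no email address, no phone number, no message function")
        if len(text) > MAX_MESSAGE_CHARS:
            raise ValueError("message exceeds 2000 characters")
        steps += ["submit the message",
                  "wait for an SMS announcing that an answer has become available",
                  "log in AGAIN at 1177.se to read the answer"]
        return steps

    for step in send_message_to_clinic(
            {"name": "healthcare center X", "messaging_enabled": True},
            "Request to prepare the coming consultation..."):
        print(step)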


Reflections upon example #3

Beyond some preliminary reflections already contained above, let's recall that once upon a time there was a street or postal address (which today is seldom advertised) to which one could write a letter with any number of characters at the sender's risk, put it in an addressed envelope with stamps, and mail it. Nobody could stop one from doing this, and at the letter's arrival somebody had to take the responsibility of throwing it away unread. But recently, when I had no other way to question an invoice of a telecommunications operator and had to send a registered conventional letter to the given postal address at a cost of about 10 US$, it was returned to me after a couple of weeks because the operator-addressee had moved to another postal address without a "forward". The invoice was on paper, but if, as suggested, I had chosen digital invoices to the Internet bank, I would not have had any available postal address at all. The main reflection on 1177.se, however, beyond those already implied above, is that HCI purports to facilitate the "use" of the computer, but what happens is that it forces human thought and behavior to adapt to the (logical-technical) requirements of the computer, or to the requirements of those who decide on the introduction of the computer for tasks that earlier were performed in other ways. The basic assumption of computerization and its HCI is "profitability", to save money of/for specific entities, mostly by saving specific kinds of manpower: either to do more and faster with the same amount of manpower, or to do the same with less manpower. The example only illustrates one case in which the customer, so to say, "HCI-worked" up to 4 hours for "nothing": a case of technologically conditioned "empty labor". And when this is said, we leave aside the question of whose profitability, labor vs. capital, etc., since it would require, if not Marxism, at least complex PPB (planning, programming, budgeting) evaluations, whose complexity was already obfuscated at the old level of the seventies in Roland McKean's Efficiency in Government through Systems Analysis, which, before becoming New Public Management and being applied, among others, to Swedish universities, was devastatingly criticized as early as 1983 in Ida Hoos' Systems Analysis in Public Policy.

 

 

Example #4 - Personal identification number (PIN)

At least in Sweden, on the basis of the early and extensive use of the PIN, personal identification number, there have been cases in which authorities have sent out wrong communications to relatives and others about the death of a certain person, sometimes to the still living person itself. I have neither kept through the years the cases reported in the press, nor been able to retrieve documentation on such events, except for one example (in Swedish, in Aftonbladet 26 Nov. 2009 & 11 March 2011) where an adoptive son received such a communication about a father who had died several years earlier, entailing serious repercussions. There are analogous examples of authorities' registers classifying a certain person as having a (in Swedish) "betalningsanmärkning" [payment remark, owed money] that may be questioned, a (Swedish) example being found on the net.


Reflections on example #4

In Sweden the sheer existence of The Swedish Data Protection Authority allows a certain documentation and understanding of the complexity of correcting data existing in the great number of databases. What is worth reflection is the difficulty and the amount of labor (including and exceeding HCI) required for correcting wrong information that, belonging to a database, can in a short time lead to multiple consequences, which in turn have to be corrected manually, where "manuality" is a tricky question that is hard to define. The difficulties are comparable to those of the earlier example (above) about the client of an Internet operator, with the difference that they may have serious juridical and economic dimensions.

 

 

Example #5 - Public transport and parking

About public transport and parking tickets. Once upon a time the passenger paid for a ticket and got the ticket in his hand, as a concrete, visible proof of the paid amount. Computerization using magnetic tickets or cards that are loaded with certain sums of money implies that the control of what is loaded on the card, and of what is spent on each trip occasion, is left to the "system", of which the user or client has neither understanding nor overview-control. Information about the transaction appears on some screen, only for a fraction of a second that often is difficult to grasp, or it can be identified ex post by a reader available at a ticket office. The transportation company itself has ruled that the card may lose validity after a certain time, so that in order to use the remaining credit amount the customer who did not memorize the expiration date may have to step down and walk to a certain office for renewal. It happened to me. Or the magnetization is lost by contact with a strong magnet or cell phone, as is the case with magnetic key-cards at hotels, and so on.

Furthermore: on the occasion of the covid-19 pandemic, the public's already paid digital credit cards for loose tickets ("reskassa") were declared invalid until further notice, after October 28th 2020, in order to offset the company's accumulating budget deficit. Declared invalid, despite the traveler having already paid in cash, in advance, a whole amount to the transport company corresponding to trips that he may not yet have made - that is, the paid tickets not yet having been "consumed" by the traveler (as in my family's case) - while the company may already have consumed the traveler's money. Everybody, including old retired travelers, was now obliged in covid times to buy the ticket somewhere in advance of every trip; or, if no "somewhere" was available at the start of the trip, they had to have a mobile phone (which many elders do not have) with which to pay electronically for their single-trip ticket (an operation that many elders have never done) after electronically identifying themselves (an operation many elders have never done), all under threat of a fine corresponding to about 150 dollars, at an eventual ticket inspector's own judgment. All this was motivated by the economic losses incurred by the public urban transport company because of the unavailability of digital readers other than at the usual front entrance of the buses, which was barred in order to protect bus drivers from covid contagion; the task of former ticket sellers had been taken over by drivers, and later further taken over by digital readers of digital tickets under the supervision of the driver - readers that are impractical to move. Apparently, an accumulated final burden of failed man-machine interaction is forced upon the client population in general, and the elderly in particular, who for the rest are advised to avoid traveling in the dangerously overcrowded buses.
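
As a hedged illustration of the travel-card logic described above - loaded credit controlled by the "system", expiring with the card, and even suspendable by the company - consider this sketch under assumed rules and names:

    # An illustration, under assumed rules, of the travel-card logic above.
    from datetime import date

    class TravelCard:
        def __init__(self, credit: float, card_valid_until: date):
            self.credit = credit                 # money already paid by the traveler
            self.card_valid_until = card_valid_until
            self.suspended = False               # e.g. "reskassa" barred in covid times

        def pay_trip(self, fare: float, today: date) -> bool:
            if self.suspended:
                return False    # prepaid trips not yet "consumed", money unusable
            if today > self.card_valid_until:
                return False    # credit stranded until renewal at a certain office
            if self.credit < fare:
                return False
            self.credit -= fare
            return True

    card = TravelCard(credit=200.0, card_valid_until=date(2020, 6, 30))
    print(card.pay_trip(31.0, date(2020, 7, 1)))    # False: card expired, credit locked
    card.suspended = True                           # company decision, October 2020
    print(card.pay_trip(31.0, date(2020, 11, 1)))   # False: prepaid credit unusable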

 

This transportation case is analogous to the payment of parking charges. From the beginning one could put coins in a parking automat, and later one could pay with coins or credit cards in an automat, getting a receipt for the paid amount and time, plus a paper ticket to put under the car's windshield, stating the valid paid time. Ultimately, the client gets no validation except for a statement on the screen of a ticket machine, where his credit card pays a certain amount for a stated time that he has to memorize, on the basis of a license plate number that, by the way, can get some character wrongly typed, leading to a fine. The process is further complicated by the introduction of parking apps, or mobile phone applications, developed by a plurality of parking companies who share parking lots in one and the same region, such as a city. And they require from customers, car owners, further laborious updates of their parking apps to enable their continued use, based on periodic debugging or innovations. All this still disregards cases such as one I know of: an elder, in cold and windy weather, who upon arrival was refused payment with his credit card in a parking automat because it was out of order. And he got confused by having to interact with two different screens of the automat at consecutive stages of the interaction, one for the choice of estimated parking time and the other for the validation of his credit card. An app for alternative interaction in place of the automat, which was specified in the displayed instructions, could not be downloaded to his cell phone because of occasionally poor coverage. He tried in vain to call the displayed phone number of the responsible service company in order to communicate this and prevent being fined, but after choosing among a series of coded alternatives he was requested to wait his turn in a queue for an undetermined period of time, the whole totaling more than 15-20 minutes, while he was in urgent need of a rest room.


Reflections on example #5

Some reflections on the urban transportation were already included above for reasons of stylistic editing. Concerning parking, the client has no proof in his hand and often has to memorize the valid parking time. This is so if the parking ticket machine (now with virtual tickets) works properly. If it noticeably does not work properly, the client is requested to call, at his own expense, a certain phone number for reporting the dysfunction. If it is not noticeable, he can only hope and "trust" (whom?), but reclamations ex post, after getting a fine, are hopeless without any proof in his hand. It recalls the case above in this text of trying to obtain consideration and justice vis-à-vis an internet operator. The main reflection here is exactly this loss of control and power by the client or customer, who is punished, fined and made responsible for further corrective action - provided that this client-customer identification is possible. That is, in general, different classes of people may be considered as clients or customers, not to mention the synonym "users", including the engineers ("researchers"?) who get their salary for designing the computer and communication systems. All this is similar to the fact that in the political process there may be different (hierarchical?) classes of decision-makers, politicians or CEOs inside the computer and communication industry as well as inside the military-political establishment. One can also say that power is handed over to the "system", so far as one is not obliged to specify how a system is defined, if not in terms of a Design of Inquiring Systems, which in turn risks "passing the buck" to a "combination" of other disciplines. But it is also possible to see that power is being surreptitiously passed to an anonymous system of "cognitive logic and mathematics", formally in the hands of a certain class of people, akin to what Edmund Husserl criticizes in his symptomatically unfinished The Crisis of the European Sciences, especially in its second part about the depletion of mathematical natural science (cf. computer science) into its "mechanization", though leading to a dead end of phenomenology because of a failed critique of Kantian psychology. More on this in my earlier Computerization as logic acrobatics.




Example #6 - Bank safe deposit box

Once upon a time, if one wished to get access to one's bank safe deposit box it was enough to go during opening hours to the bank's branch office with an identification document. Some years ago the bank mailed a letter to its customers stating that from a certain date it would be mandatory to call the branch one working day in advance to announce the wish to come the following day. No explanation or reason was given. On that occasion I realized that it was no longer easy to find the branch's phone number. Yearly paper catalogs are no longer printed and distributed, only sites and databases on the net, with numbers that direct the calls to call centers in order to save the time of employees who would otherwise have to answer phone calls at the branches. In some other contexts call centers are even outsourced to cheaper foreign countries, where those who answer have even more limited detailed knowledge of the local language and environment. In my case I called the only available number, the bank's headquarters call center, where I was put in a waiting queue of 55 people with a waiting time of 25 minutes in order to get a phone number of the branch. I then called the branch and, after a digital identification in a special app, I could register my wish for a visit the following day in order to get at my safe deposit box.


Reflections on example #6

Once again, this is an apparently trivial case that can be seen as a digital transfer of labor to the client, possibly motivated by the increased risk of robbery, lately involving arms and explosives, which is to be expected in the context of safe deposit boxes. It is also the case that such developments are provisionally countered with digital controls whose burden falls upon customers.

 

 

Example #7 - Dental hygienists and antibiotics

A man was called to a periodic treatment by a dental hygienist. Having earlier suffered a sepsis caused by dental infection, and a consequent heart infarction operated on at a hospital, he had also been recommended, if not outright instructed, by the hospital's medical dentist to prepare for future dental hygiene with a complete antibiotic prophylaxis. When he explained this to the dental hygienist, the responsible dentist refused such prophylaxis on the basis of guidelines from the public health authority intended to combat societal antibiotic resistance. Other medical authorities at the hospital observed that antibiotic resistance implies a statistical point of view that must be overridden on the basis of individual considerations: a human life cannot be sacrificed to such statistical prevention measures. The local dental hygienist, however, had no technical access to the hospital's digital patient journal, and an inquiry directed to the hospital's medical dentist was answered by a new managing consultant doctor, who directed the dental hygienist to those at other departments of the hospital who had taken care of the historical sepsis. No prescription for antibiotic prophylaxis was allowed. The risk was taken by the elderly patient, who to date has survived.


Reflections on example #7

This is one additional example of reliance upon digital computerization and communication that breaks down at the interstice between different organizations, shifting the burden of responsibility, communication and the risks of broken communication onto "others", and ultimately onto the client or patient.

 

 

Example #8 - Radio Sweden’s summer program
On August 11, 2020, the public service Sveriges Radio [Radio Sweden] broadcast one of the daily summer radio programs in the series “Sommar i P1” [Summer in Radio's Channel 1], where selected, often well-known, personalities are allowed to talk by themselves for ninety minutes about their lives, opinions and interests. In this program a young woman, a hockey player, used part of the program to complain of harassment by a man who was the head coach of Sweden's women's national team. A listener perceived that the program contained a personal attack on a person and his organization, despite the official requirement of Radio Sweden's impartiality and its advertised rules, and wished to direct a complaint to the Swedish Press and Broadcasting Authority (here abbreviated SPBA). It directs the public to its Internet site, where complaints can be filed through its e-service, requiring an initial digital authentication of the person's identity, followed by a detailed guide with requests for details about the plaintiff and the program in question, plus his motivation of the complaint. This was done in about half an hour, but when the completed digital form was to be sent away, nothing happened. After repeating the process with all the fill-ins, nothing happened again. The plaintiff then copied all the text of the computerized form, keyed it into a digital text document, and sent it by e-mail to the official main e-address of the SPBA, together with a complaint that the digital form input did not work, after having spent more than 90 minutes on the process. After about five weeks he received by regular mail a communication (ref. 20/03658) stating that “The Review Board has reviewed the current program in the light of the notification. According to the board's assessment, the program does not contravene the requirement of impartiality.”

Reflections on example #8
The whole effort resulted in a three-line answer without any motivation or comment. In this respect the response from SPBA recalls what I have already written about “repressive tolerance” in the context of complaints regarding the medical safety of patients. It is the arrogance of power, now dressed in digital HCI-form. The public is given the opportunity to believe that it can complain, as an escape valve. It hides a mechanism that requests an ultimately exhausting investment of energy, leading to a mute, discouraging answer from a formal “authority”, in our case represented by the signature on the decision document of a member of the Supreme Court of Sweden (appointed by the government and a member of the feminist women's network HILDA) who was given a part-time assignment in judging SPBA matters. This is hoped to be enough for the public's acceptance of this type of answer from authorities working for other authorities. For the rest, the requirement of impartiality means, according to the Review Board's practice, that if serious criticism is directed at a clearly designated party, the criticized party must be allowed to respond to or comment on the criticism. As a rule, this must take place in the same program or feature, either by the criticized party himself or herself or by the reporting of a comment from him or her. The event was the more remarkable because in most if not all other contexts the radio or TV program producer, host, anchor, or moderator very promptly interrupts and silences, with the above-mentioned motivation, whoever is speaking, the more so in questions that may be subsumed under the Swedish law against discrimination (SFS 2008:567, cf. Act concerning the Equality Ombudsman) or that question the democratic order or the radio and television act (pdf, SFS 2010:696). At the same time various documents recall the constitutional right to, and limits of, freedom of expression, related to the constitutional fundamental law on freedom of expression (pdf), one of the four fundamental laws of the Swedish constitution. The inherent complexities became more visible in both national and international media's coverage of the case of the Swedish professor of geriatrics Yngve Gustafson, who was also invited to give his summer radio program in the same series on Radio Sweden, on August 13th, 2020. He is reported, in contrast to the case of the hockey player, to have had his text censored by Radio Sweden in what regards his criticism of the health authorities' treatment of elders in the covid-19 pandemic. Quotation from reddit.com: “But for me it became inevitable to talk about the corona storm as well, he says. But I was not allowed to talk about the authorities' ignorant and unethical actions and I do not think that feels good, he says”. Statistics, with all their uncertainties, show that on October 14th, 2020, Sweden was fourteenth among 216 listed countries of the world in terms of the number of covid-19 deaths per million inhabitants, more than 583 (mostly elders), well above the other Nordic countries Norway (51), Finland (62), Denmark (116), and Iceland (29).


Let us emphasize why all this is worth consideration in the ongoing reflections: the seriousness and importance of freedom of expression is what motivates the strong (HCI) requirements placed on its mentioned questioning. At the same time this is not at all respected in the authorities' answering or countering of such questioning. In terms of HCI “action-interaction theory” there is only digital action but no substantial reaction or response, and this is the nature of power over dialog. The more so if the case happens to be about a feminist hockey player with LGBT connotations who, in a spirit analogous to #MeToo, accuses a male coach of gender-discriminating harassment, in a country whose government has proclaimed itself the first feminist government in the world. It is easy to imagine the design of a digital form for the response to a complaint which requires, for instance, at least the filling of a blank field with a “motivation” for rejecting the complaint, a motivation beyond the bare statement that the original complaint was unmotivated (see the sketch below). If this is so in such naively simple technical contexts, it is easy to imagine what happens in more complex ones. In any case, the announced review of SPBA by the Swedish National Audit Office, with a report announced for December 2020, is most welcome, even if it does not go so far as to analyze the type of problems considered here.
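To show how naively simple that technical fix would be, here is a minimal sketch in Python of such a decision form; the field names, the length threshold and the validation function are hypothetical illustrations of mine, not SPBA's actual system:

```python
# Minimal sketch of the imagined decision form: a rejection of a complaint
# cannot be registered without a substantive "motivation" field.
# Field names and the length threshold are hypothetical illustrations.

def validate_decision(decision: dict) -> list[str]:
    """Return the list of problems preventing submission of the decision."""
    errors = []
    motivation = decision.get("motivation", "").strip()
    if decision.get("outcome") == "rejected":
        if not motivation:
            errors.append("A motivation for rejecting the complaint is required.")
        elif len(motivation) < 100:
            errors.append("The motivation must be substantive (at least 100 characters).")
    return errors

decision = {"case": "20/03658", "outcome": "rejected",
            "motivation": "The program does not contravene the requirement of impartiality."}
for problem in validate_decision(decision):
    print(problem)  # the form would refuse to be sent until these are fixed
```

Even such a trivial constraint would have forced the authority to produce something beyond the three-line answer quoted above.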

 

 

Example #9 - Airliners’ and traffic’s control

Computer experts and researchers may think that the examples up to now are "trivial" in the sense that they do not represent the technical and logical complexities of modern HCI proper. They were nevertheless enumerated because they transcend the processing of visual-acoustic-haptic stimuli and the "neurophysiological, instinctual, behavioral automatisms, and corresponding technical patterns" that are often discussed in HCI professional venues, and could be classified as cognitive ergonomics. In order to counter the accusation of triviality in the above examples it is possible to adduce a far more complex, representative example of this kind of "transcending" HCI: the so-called Maneuvering Characteristics Augmentation System (MCAS), a flight control mode (software mode) that in recent years played a role in airliner accidents related to the Boeing 737 MAX groundings. This matter has already had enormous, especially economic, consequences and therefore obviously has been or will be the object of extensive research. On the net there is available a quite ambitious and detailed study by Krister Renard, written in Swedish with the title Boeing 737 Max – den sanna och (nästan) kompletta historien [Boeing 737 Max – the true and (almost) complete story]. The Swedish text can be reasonably processed with Google Translate, which in my experience has good quality in translations to and from English. It integrates HCI with the whole “system”, reaching up to company boards. In a typically engineering, if not technocratic, mood the author seems to believe that HCI implementation and policies would improve if the power of decision were not taken away from engineers and given to graduate economists and lawyers.


Reflections on example #9

It is symptomatic that HCI can be seen as an heir of ergonomics and human factors, and it is worth suspecting (as I suggest in my text on computerization as logic acrobatics) that it is the absurdity of logic acrobatics that originates the doubtful pretense of being able to create a hotchpotch or "combination of psychology, sociology, engineering, biomechanics, industrial design, physiology, anthropometry, interaction design, visual design, user experience, and user interface design". That is, a mixture of disciplines and more or less ephemeral "traditions" that are also seen as characterizing other trendy fields or hypes such as AI and Design (cf. especially the latter's sections on "Design disciplines" and "See also"). The only research on this kind of problems that has reached me and that I have noticed up to now is Alain Gras' work in France, exemplified by two of the references, #2 and #3, in the French Wikipedia. The only English translation I know of is a condensed version of the above #2, Le Pilote, le contrôleur et l'automate (Paris: Institut de Recherche et d'Information Socio-Economique IRIS Editions, 1990, ISBN 002-906860-06-93), co-authored by Gras, under the title Faced with Automation: The Pilot, the Controller and the Engineer (trans. Jill Lundsten, Paris: Publications de la Sorbonne, 1994, ISBN 2-8544-260X). It contains an (Introduction) For a socio-anthropology of aeronautical technology, (Part I) Socio-historical construction of the aeronautical system, (Part II) The attitudes and representations of the airline pilots vis-à-vis innovation, (Part III) Air traffic control and mutation, (Part IV) The aircraft designers, and a (Conclusion) on air logics and ground logics, the new flight paradigm, the sociology of appropriation, safety as the core of civil aviation, and the return of automation. I take care to account for some details of this work, despite its analysis escaping the ambitions of my text, since its sheer layout illustrates a contrast to many common HCI approaches.

 

I guess that this approach may come to be criticized for being only, or too much, “sociology”, despite sociology having been mentioned as one of the legitimate watchwords in the combinatory definition of ergonomics and human factors mentioned above. The complaint about “sociology” is a remnant of the old positivistic attempt to relieve engineers or administrators of the responsibility for the consequences of their actions or the use of their “tools”. Such positivism appears also in Herbert Simon being considered a founding father of both artificial intelligence (AI) and administrative theory. Ultimately, however, it is not a matter of sociology but of a systemic view of HCI, which is implied by and implies the need for, and capacity of, “combining” all those scientific “disciplines and more or less ephemeral traditions” represented by the watchwords above. It is a need that is illustrated by all the examples above; in the last example #9 the need becomes visible because it is a question of life and death of people and business, or a question of politics and economics.

 

A defective understanding of these issues implies a missed understanding of, for example, the concept of the “internal plane of actions” advanced in Activity theory, as mentioned in its article in Wikipedia. It defines the internal plane of actions as

 

"[...] a concept developed in activity theory that refers to the human ability to perform manipulations with an internal representation of external objects before starting actions with these objects in reality." 

 

I think that what is missed, despite the higher ambitions of activity theory relative to other HCI approaches, is that this is an age-old problem of simulation as related to so-called reality. And this is valid also for the hype of virtual reality. What was considered in the previous paragraph has already been problematized and considered in C.W. Churchman's “Analysis of the concept of simulation” (in Hoggatt & Balderston, eds., Symposium on Simulation Models, South-Western Publishing Co., Cincinnati, Ohio, 1963, today more available in the author's summary in The Systems Approach and its Enemies, 1979, pp. 51-53). What I mean is that if one performed an analysis of the application of the “internal plane of actions” in a non-trivial HCI context, one would find the kind of complexities that appeared in the previous paragraph according to Le Pilote, le contrôleur et l'automate.

 

But, tragically, this may not be the case: until further notice, no complexities may be found by those who believed in and implemented the remote air traffic control at London City Airport, news one could have believed to be fake before it was reported by Reuters on April 30, 2021 (not April 1st) and PR-illustrated (on May 5th). Senior researchers may feel that they must leave it at that.

 

 

WHITHER HCI

 

The need for combining knowledge disciplines beyond technical logicalities becomes visible, then, in questions of life and death. Logical minds who entrust powerful logical tools to simple-minded or credulous, confiding, unsuspecting professionals or the general public can pedagogically be compared with naïve arms dealers putting machine-guns in the hands of youngsters for their defense, assuming that they will not use them in ignored or badly judged events. And also hoping that nobody succeeds in implementing a cost-benefit analysis of the deal, which is assumed to be profitable. A test of the problem of costs and benefits, which symptomatically no longer musters research enthusiasm, can be intuited by reading e.g. “Measuring profitability impacts of information technology” (pdf) at the ASIST 2003 meeting of the American Society for Information Science and Technology, which recalls my earlier reference to Ida Hoos' critique. Logical minds' assumptions in this context suggest a rough analogy to climate activists' reliance upon children's leadership for implementing measures to counter global warming.

 

The direction of ongoing HCI may be justified by the idea that it is just a sort of ergonomic “handle” to allow the “handling” of a powerful tool, disregarding the problematics of “tools” advanced earlier in this text. After all, one could say, nobody objects to the modern “use” of mathematics and related logic, or mathematical software, in the practice of modern science, technology, and industry. There is also a rich supply of statistical software packages that can be easily used and misused by professionals who have a very limited knowledge of the presuppositions and pitfalls of the large-scale “production” of mathematical and statistical results. Related problems in Sweden motivated the exemplary reaction of an experienced professional statistician, Olle Sjöström, in a PhD dissertation (in Swedish, 1980, with English summary pp. 154-156) on Swedish social statistics, ethics, policy and planning, followed by a book (with English abstract, 2002) on the Swedish history of statistics. The dissertation's main questions run very close to Edgar Dunn's Social Information Processing and Statistical Systems (1974) and are strategically presented as including

 

has the growth in the “production” of statistical data led to improved information for decision makers, better informed citizens and an increase of knowledge about current social conditions? […]

 

a discussion of human information processing in relation to statistical data…Semantic problems which occur with such words as “data”, “information”, “knowledge”, “decision under uncertainty” […]

 

It may look as if we are far from HCI, but the point is to remind that HCI tends, by its very nature, to disregard its “production”, the social context of its use, and its consequences. In this context it is useful to also recall a remarkable article and book chapter by Clifford Truesdell, described as an American mathematician, natural philosopher and historian of science, with the consciously chosen but perhaps excessively provocative title “The computer: Ruin of science and threat to mankind” (1980/1982). This provocation, to which I have also referred in some earlier papers, would have been expected to lead to many rebuttals, but symptomatically I happen to find only one smug “quick rebuttal” of half a page that is not so quick, being dated the year 2017. Paradoxically, it is a very self-confident rebuttal that seems to me to confirm my views in this present text.

 

Humans are said to use software packages that allow for a sort of HCI, human-centered acting on human-compatible computer software that produces a reaction or result rather than a response, with an impact on the same and other humans. As far as I humbly know about HCI, however, much of it is at the level of so-called cognitive ergonomics, which shares all the doubts that there must be about cognitive science, which in turn shares all the doubts about what reason is or should be, especially since the philosopher Immanuel Kant, and finally relates to cognition or “inquiry” in terms of the already mentioned design of inquiring systems. Random examples include recommendations for the design of input to computer communication networks along the lines of “Form design is a balance of science and art. It must be functional and clear but also attractive”. Microsoft offers technical support for creating forms that users complete or print in Word. That is, much of daily HCI design would follow form design. There are design enthusiasts who would even claim that high-flown “design theory” or the “design perspective”, whatever theory, design and perspective mean (if not the originally Kantian design-artistry, and the Nietzschean perspectivism), includes the capability to guide interaction design. Another example would be the very intelligible and recognizable exhortation to “intelligent response validation” that is “capable of detecting text input in form fields to identify what is written and ask the user to correct (with a maximum number of characters) the information if wrongly input”. Please note that the user is asked to “correct” what he writes, his supposedly wrong informational input, and this is to be done within a maximum number of characters; sometimes it is also specified that certain characters are not allowed and that the whole must be done within a given time span, as is the case with internet banks, for safety reasons.
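The quoted exhortation can be made concrete in a few lines. The following Python sketch mirrors the quoted description (a maximum number of characters, disallowed characters, a given time span); the particular names and limits are my own hypothetical choices, not any bank's or vendor's specification:

```python
import time

# Illustrative sketch of the quoted "intelligent response validation":
# the user, not the system, is asked to "correct" the input, within a
# maximum number of characters, a set of allowed characters, and a time
# limit. All limits below are hypothetical, as in internet-bank forms.

MAX_CHARS = 40
DISALLOWED = set("<>\"';")      # e.g. characters rejected "for safety reasons"
SESSION_SECONDS = 120           # the whole entry must fit within a time span

def validate_field(text: str, started_at: float) -> list[str]:
    """Return the complaints the form would raise against the user's input."""
    errors = []
    if time.monotonic() - started_at > SESSION_SECONDS:
        errors.append("Session expired: please start over.")
    if len(text) > MAX_CHARS:
        errors.append(f"Please correct your input: maximum {MAX_CHARS} characters.")
    bad = sorted(set(text) & DISALLOWED)
    if bad:
        errors.append(f"Please correct your input: characters not allowed: {bad}")
    return errors

started = time.monotonic()
print(validate_field("O'Neill & sons <ltd>", started))
```

Note that the error messages, exactly like the quoted exhortation, ask the user to “correct” himself: the logic embodied in the form defines what counts as correct.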

 

These latter considerations recall the case of the personal shortcomings of a non-negligible percentage of people, especially users, since clerks and other professionals would not at all be employed and allowed to work with computers if they had a sort of dyslexia in memorizing sequences of keyboard key presses. Psychological testing has long been used for selecting employees, e.g. secretaries who need cognitive-ergonomic proficiency in matters like spelling, language and numbers, or programmers and airplane pilots who need especially logical-spatial proficiency. Those who have “keyboard dyslexia” or are incapacitated by age may instead be allowed to survive so long as they get by in the increasingly computerized world, hoping for an option of digitalized verbal input into whatever they no longer understand. Nevertheless it is more than a question of cognition, dyslexia or the like. Ultimately it may be a question of the limits of the logicization or mathematization of reality as a consequence of the computerization and digitalization of society that I have analyzed in another article. It is a distortion of reality that requires “distorted” minds. It does not fit the minds of people who are not mainly mathematically or logically oriented but, for instance, are verbally and language oriented or, more sophisticatedly, oriented towards “thinking-sensation” along the kind of types elaborated in analytical psychology. Those who have tried to live in computerized China, or to travel around Europe and the world during the covid-19 pandemic with the support of special covid vaccination certificates, governmental health certificates, matching passports, and their QR codes based on officially approved computerized registration forms, may already know what we are talking about. It recalls the world of Franz Kafka (1883-1924), whose work, as described in Wikipedia, “typically features isolated protagonists facing bizarre or surrealistic predicaments and incomprehensible socio-bureaucratic powers”, which were already presaged at his time. It resembles the confusing, impossible “reality” of quantum physics or of a sort of brave new world.

 

Complications can also be seen in rich problematizations of technology and specialization (cf. HCI) along lines that are not mine but can be sensed in e.g. Arnold Gehlen's Die Seele im technischen Zeitalter (1957, read 1986 in Italian, trans. A. Burger), translated as Man in the age of technology (cf. others with the same title), esp. chaps. VIII and IX on automatism and personality.

 

Unperceived naiveté leads, however, very far and very soon. For instance, a “grand old man” in the field, Gerhard Fischer, wrote a very ambitious text brooding about the subject in terms of “user models”, problematically and brainily differentiated from “mental models”. It can lead one to perceive that

 

“Traditionally, computer usage was modeled as a human-computer dyad in which the two were connected by a narrow explicit communication channel such as text-based terminals in a time-sharing environment. The advent of more sophisticated interface techniques, such as windows, menus, pointing devices, color, sound, and touch-screens have widened this explicit communication channel.”

 

This means that information is conceived as being in a communication channel, as I questioned in my doctoral dissertation at the beginning of the seventies on Quality-control of information, with an unperceived relation to HCI including “forcing reality to fit the model” (chaps. 2.8 and 3.3). From such a conception it is easy to go further, arriving (p. 70) at the philosophically unperceived, enormous question of the “user” (whatever his definition among the other neglected classes mentioned above) saying “The right thing at the right time in the right way”, which is a subtitle in Fischer's article. This expression should be put against the background of my examples above in order to sense its inherent hubris. It leads to Fischer's further considerations such as

 

“user models are defined as models that systems have of users that reside inside a computational environment. They should be differentiated from mental models that users have of systems and tasks that reside in the heads of users, in interactions with others”. 

 

But is it “systems” or is it their designers who work for managers who have models of users for the managers’ purposes of satisfying (hopefully clients but mainly) shareholders, and then is it users who need to have models of systems and their whole changing environment in order to know whether and how to use them for their own purposes? And further: 

 

“interaction between people and computers requires essentially the same interpretive work that characterizes interaction between people, but with fundamentally different resources available to the participants. People make use of linguistic, nonverbal, and inferential resources in finding the intelligibility of actions and events, which are in most cases not available and not understandable by computers. […] The ultimate objective of user modeling is that it has to be done for the benefit of users. Past research has shown that there is often quite a difference between modeling certain aspects of a user's work and behavior, and applying this knowledge for the benefit of the user”.

 

Yes, “linguistic, nonverbal, and inferential resources in finding the intelligibility of actions and events, which are in most cases not available and not understandable by computers”. But “linguistic, nonverbal and inferential resources” stand for the whole of the human mind, human science, philosophy and culture, which underlies and envelops the logic and mathematics embodied in computers.

 

So, trying to make it short, and referring to the quotation above from Gerhard Fischer, I put it in the form of a question: is the ultimate objective of user modeling that it must be done for the benefit of the user? It sounds like a sort of “Kantian categorical imperative” but it may rather be a “paternal”, diplomatic wishful thinking, along with a patronizing computerization. It illustrates the main message of my text: that simplicity hides complexity. What is going on is a straitjacket-“logification” of the human mind along the lines of “forcing reality to fit the model”, where the model is logic. It can contribute to people refusing to interact with or “use” computers, or to their feeling distressed in the process of interaction, as in the case of “keyboard dyslexia”. Or, worse: instead of refusing, people may no longer refuse and no longer feel distressed, thanks to the human capability to adapt to the environment. If computers cannot imitate humans, as in the unbelievably and symptomatically hyped naïve Turing test, humans can imitate computers, as several scholars, like the above-referenced Arnold Gehlen with regard to automatism and personality, have already accounted for as a cultural phenomenon. A somewhat less tragic alternative is that among those who do not refuse and do not feel distressed, especially elders, there are people who are experiencing the Hawthorne effect. It may be the case of lonely people, especially in the care of the elderly and disabled, who are longing for human contact and love but feel temporarily happy in being at least the object of attention and trials with the companionship and assistance of dogs, computers and robots.

 

It is interesting to note that all this can be conceptualized (not too far-fetched) as a mass experiment that is “democratically” imposed on a population in the name of science, namely computer science and social engineering. In this respect it can be seen in the light of Philip Zimbardo's ethically problematic Lucifer effect. In my review (The Lucifer effect and research ethics) of Zimbardo's book I remark in a postscript that it was a prototype of mass experiment, according to Rutger Bregman's Humankind: A Hopeful History (a new history of human nature), which contains an expanded, severe criticism of Zimbardo's Stanford Prison Experiment.

 

An analogous historical wrongdoing in Sweden, on a societal scale, is represented by the Vipeholm experiments, also illustrated on Swedish public radio (1 Nov. 2020), in which disabled children were submitted to special diets in order to ascertain the deleterious effect of sugars on caries. The facile justification of such experiments is always based on the observation that criticism is always easy ex-post, after the fact, but not ex-ante, before the event.

 

Symptoms of these problems already appear in mass media and in social media, symptomatically still outside of, and unacknowledged by, the scientific literature. Three most touching examples and testimonies, more touchingly and elegantly written than my nine examples above, are available to Swedish readers and to those who trust translations into English by Google Translate. They are the political journalist Anna-Lena Laurén's “App app app – det börjar likna bolsjevism!” [“App app app – it's starting to look like Bolshevism!”], in the newspaper Dagens Nyheter, 14 October 2018; the economic historian Ylva Hasselberg's “Hejda digitalbolsjevismen” [Stop the digital Bolshevism], in Dagens Nyheter, 23 December 2018; and Arne Höök's “Frontsoldat med omöjligt uppdrag” [Front-line soldier with an impossible mission], in Mitt i, 26 December 2020. They rhetorically retell their distressing experiences of a forced relation to computers, and associate it with the concept of an arrogant neo-Bolshevism, a Bolshevik technical rationalization zeal, a rebirth of forcing reality into the model, which is also a source of the term political correctness. I ask the reader of these lines of mine to allow my quoting a culturally justified, illustrative digression in the field of literary essays; an example of how “people” experience these problems. Excerpt from Bodil Malmsten's book The leopards coming from the north (2009, in Swedish, my trans., pp. 118-120):

 

Yesterday was the day when my Internet service provider Orange promised to send the installer of the Live Box to get wireless internet here at the Atlantic Ocean [where I live, in Finistère].

Orange. It bodes ill, and indeed - the installer is one and a half hours late. When he finally comes, in an orange jacket with the Orange logotype, I lock the door behind him and sit on the key until he - with phone support from Bordeaux, where the installation firm's expert on Apple computers is - after four hours manages to install French wireless internet on my computers, where all menus are in Swedish. I force him to double-check everything before I release him with the promise of a month's warranty.

Now it was just a matter of holding one's breath and praying for my wireless life. Internet service providers are the same all over the world, throughout the universe; the ISPs are a necessary evil and hell. The individual's vulnerability to the misuse by the evil Internet suppliers, the expensive support numbers, the horrible music, the disconnections, the constant problems.

One day I could neither dial nor receive on my phone, and I drive to the Orange store in the center of the city: I am already angry and, indeed, the phone problem is not Orange's problem, it is the subcontractor's, the installer in the orange jacket with Orange's logo. A malicious lady in orange in the Orange store refuses to have anything to do with me, but since I refuse to leave the store before she calls the subcontractor, she finally has to do it, forcing herself to listen to horrible music and push star and hash until she just wants to die. When she finally can talk, after waiting in the queue, the representative of the subcontractor replies that the only way to fix the line is for the customer himself to call from his home phone.

Even the malevolent Orange woman now feels stuck and tries to explain that it is precisely because it is not possible to call from my home phone that she calls from the Orange store at the customer's - my - explicit request. "It is a special case," says the Orange woman and stares unkindly at me. Everything is like a film by Jacques Tati - not fun - and ends with the representative of the subcontractor hanging up the phone and the Orange woman saying that the only thing I can do is drive home, pick up the Live Box and the phone, and come back to the Orange store so they can check whether my telephone is compatible with the box. That my relatively new wireless phone could be incompatible with the box is news, something no one so far had said anything about, least of all the four-hour installer in the clothes with the Orange logotype. I leave the Orange store, look the evil Orange woman in the eyes and say: "This is hell, c'est l'enfer, c'est l'enfer." [it is hell, it is hell.]

I go to the neighbor who hates Orange above everything on earth; he has managed to terminate his Orange subscription even though it is impossible. The neighbor is, of course, willing to help me. But when he tries to call the support number I have received from Orange, it turns out not to be possible from his subscription with the ISP Tele2. He still gets the issue solved, and it is because the neighbor never gives up; he never gives up, he does not know how to give up. If the neighbor had been with Napoleon, he would have won the battle of Waterloo.

 

This leads further to the idea of technological determinism, while my approach has been more one of philosophy of technology, which I cursorily approached in e.g. Trends in philosophy of technology. Today it gets hopelessly fragmented in a mindblowing quagmire of philosophy of computers, philosophy of artificial intelligence, philosophy of information, etc. Whatever is left of philosophy. It seems that democracy under the guidance of so-called science also tends to be perceived as becoming Bolshevism in the context of debates about global warming. Swedish readers can again read about it in Lena Andersson's “Demokratin är hotad om vetenskapen ensam ska styra klimatpolitiken” [Democracy is threatened if science alone is to control climate policy], Dagens Nyheter, 28 Sept. 2019, and Hans Bergström's “Att gömma sig bakom vetenskapen” [To hide oneself behind science], Svenska Dagbladet, 7 Nov. 2020. I have already searched for the explanation in my blog on Climate change and global warming, but ultimately in Information and theology, as a reduction of theology and science to politics, when theology is not explained away by facile reference to the need for a greater contact with life and the concrete body (theology of the flesh), or by accounting (but how?) for people's values and valuations. When psychological and social science are not reduced to politics, it is because their politics is hidden by the recourse to a superficial conception of the psyche.

 

This is the case with the studies of Clifford Nass, a recognized authority in HCI. As expressed in Wikipedia, "identifying a social theme in people's interaction with computers, he was able to observe that humans project 'agency' on computers, and thus people will interact with computers in the same ways as interacting with people". This idea was developed into The Media Equation, a book co-written with Byron Reeves. I have not noticed whether they consider that the "archetypal" case of this phenomenon, and of its regrettable consequences in the history of computer science and AI mythology, is its appearance and naïve acceptance in the popularization of the famous Turing Test. One main objection to this kind of consideration should be, and has been, that the phenomenon depends upon the level of maturity of the humans involved. The degree to which humans feel (interaction with) computers as people is the degree to which they project psychic content, because of its having been left unconscious in different degrees. If they have not yet understood and felt what "people" and "humans" mean, they will easily see and feel them to be equivalent to computers, and the other way round.

 

What, then, is the role of sociology and psychology in relation to politics and ethics as a theoretical background of HCI? What is the position of the HCI-relevant Activity Theory, with its Marxist roots in Leontiev and Vygotski, in this respect? What about the advertised ad-hoc categories of “the individual, the object and the community”? How do they relate to the interplay between work and capital, and to the “cultural sphere” with reference to the broader fields of sociology, philosophy and theology that also are culture in the light of, say, a Dostoevsky and his political reviewer N. Berdyaev? These are some of the questions that were not considered in the otherwise ambitious doctoral dissertation presented and defended at my university in 1988 with the title Work-oriented design of computer artefacts, where work-oriented means very much labor-union oriented. Vygotski is mentioned, but the Czech Marxist philosopher Karel Kosik appears to be a main inspiration, in the light of the by now academically glorified Martin Heidegger, whom I have dared to criticize in several other contexts. A Marxist-Kosik framework or terminology was used to meet, accommodate or neutralize my advisory objections about considering the computer as a tool, which was then renamed “artefact”. This led ultimately to postmodern reflections and the hype of “design”, all criticized by Christopher Norris in a deconstructivist perspective (which is not mine) in the book What's Wrong with Postmodernism. Together with the postmodern vagaries of a corrupt Kantian aesthetics renamed “DESIGN”, which I denounced elsewhere both philosophically and theologically, it all illustrates the hopeless vagaries of the intellect, the impenetrable mindblowing intellectual jungle of logical and ideological “impossible” debates that can be fostered by the misunderstood concept of reason, awaiting a “Copernican revolution” consisting, I guess, of an (unlikely) reconnection to theology.

 

 

QUAGMIRE OR INTELLECTUAL JUNGLE

I noted above the rise of Human compatibility, a new quagmire to be compared with e.g. Human-centered artificial intelligence earlier and elsewhere, both of them implying conceptions of computer-human interaction. At the end of the previous section I also noted the postmodern vagaries of a corrupt Kantian aesthetics renamed “DESIGN”, denounced elsewhere both philosophically and theologically, all illustrating the hopeless vagaries of the intellect, the impenetrable mindblowing intellectual jungle of logical and ideological “impossible” debates that can be fostered by the misunderstood concept of reason. I wish now to show how this is reflected in the ongoing attempts to develop the field of HCI as represented by the Association for Computing Machinery's (ACM) Special Interest Group (SIG) for Computer-Human Interaction (SIGCHI), with its past and planned annual international SIGCHI conferences. In the present context I will not yet adduce the corresponding organ of the International Federation for Information Processing (IFIP), its Technical Committee number 13 on Human-Computer Interaction (IFIP-TC13), with its working groups and its planned biennial international “INTERACT” conferences.

To illustrate the ambitions of the next online virtual conference, CHI-2021, on May 8-13, 2021, anticipating more than 3000 paper submissions: there are no fewer than 15 subcommittees, which may be considered by some as “disciplines”. Prospective authors have been asked to suggest a subcommittee for their submissions, but it is not clear to which subcommittee the present paper of mine could have been sent, except possibly for the mild User Experience and Usability, if it is supposed that “usability” is a reliable concept, allowing for a sort of operational definition, and encompassing my nine very concrete examples given above:

 

1.    User Experience and Usability

2.    Specific Applications Areas

3.    Learning, Education, and Families

4.    Interaction Beyond the Individual

5.    Games and Play

6.    Privacy and Security

7.    Visualization

8.    Health

9.    Accessibility and Aging

10. Design

11. Interaction Techniques, Devices, and Modalities

12. Understanding People: Theory, Concepts, Methods

13. Engineering Interactive Systems and Technologies

14. Critical and Sustainable Computing

15. Computational Interaction

 

I already pointed out above that it is the absurdity of simple logic thinking that originates the doubtful pretense of ergonomics, evolving into human factors, of being able to create a hotchpotch of, or claiming to draw on, many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology. It is weird to compare this list with the list above. That is, a mixture of disciplines with more or less ephemeral "traditions" that also happen to characterize other trendy fields or hypes such as artificial intelligence (AI) and general “Design”. Disciplines in research contexts, however, used to be identified as such if they deserved to be considered academic disciplines. Such a strict and historical view has already been shaken, with the result that it is claimed that academia is being replaced by different modes of production of knowledge, including “post-academic science”, while universities themselves gradually lose the “idea of the university” as compared with a research lab in industry. It is this that comes on display in the list of “subcommittees” for HCI, a list that may even be different for every recurring annual meeting. Anybody who has been working or teaching in whatever “area” for 20-30 years may be considered an expert.

And the area can even be “philosophy”, in the sense that philosophy is supposed not to include the meaning and limitations of the mathematics and logic which are embodied in the computer, to be dealt with by HCI. If and when it happens to be included, it is as if the historically great philosophers were not relevant, in the sense that, for instance, Immanuel Kant is no longer relevant and it is enough to refer to continental philosophy, phenomenology, Martin Heidegger and analytic philosophy or American pragmatism, without differentiation between its various exponents such as John Dewey, Charles Sanders Peirce or William James, or without worrying about the reason for the rise of phenomenology itself. When HCI is sensitive enough to the importance of philosophy, the discipline tends to be decomposed into the hotchpotch of another list (below) that for another HCI conference will look different. Despite all goodwill, this may have been the case with a Workshop at the CHI-2019 (SIGCHI 2019) conference held in Glasgow in May 2019 with the title “Standing on the shoulders of giants: Exploring the intersection of philosophy and HCI”, with the following thoughtful key goals and topics, several of them touching upon the problems of my present paper:

1.    Describing and reflecting on existing philosophically-informed HCI research.

2.    Identifying emerging phenomena, issues, and challenges in HCI, addressing which would benefit from engaging with philosophy.

3.    Considering how we can practically learn and “do” philosophy. For instance, how might we explicitly meld traditional HCI methodology stemming from disciplines like psychology with philosophy?

4.    Asking what philosophy means for understanding stakeholders and for design of interactive systems.

5.    Developing how can we make philosophy in HCI more accessible.

6.    Outlining an agenda (or agendas) for philosophically informed HCI.

 

Nevertheless, I have found that nothing in either the list of the workshop's accepted Position Papers, or in the list of the conference's Extended Abstracts of the 2019 CHI conference on Human Factors in Computing Systems, comes near to covering the problems and content of the present paper of mine, be it “philosophy” or whatever. There are, however, workshop position papers just mentioning Kant, phenomenology and Heidegger. Ludwik Fleck is also referred to, a physician and biologist whose philosophizing developed the concepts of e.g. thought style, logology (the science of science), and thought collective (cf. collective consciousness and culture, avoiding the quagmire of Spengler's distinction between culture and civilization), at about the same time as Carl Jung wrote about the ignored parallel type psychology and the collective unconscious. A feministic approach to the workshop on philosophy refers mainly to Critical theory, with an unknown connection to HCI, and it is not mentioned by other feminists such as Sherry Turkle above. There is a workshop position paper on Chinese philosophy with a problematic, unknown relation to computers, but I guess that a connection between Chinese and Western philosophy related to information systems for the purpose of HCI research is to be found in my paper on Chinese information systems? East and West. The conference paper in 2019 CHI, Models of the Mind, reveals what also happens to be considered “philosophy” when stating that it draws on “philosophies of embodied, distributed & extend [sic] cognition”, claiming that the mind is readable from sensors worn on the body. Another conference paper, on Failing with style (cf. the above reference to Churchman's “The success of failure”), may claim that “failure is a common artefact of challenging experiences, a fact of life for interactive systems but also a resource for aesthetic and improvisational performance”. One can wonder whether that is applicable to at least some of my examples. All these implied difficulties of loose philosophical connections to HCI tend to be comfortably submerged in HCI neologisms and in occasional claims that its research has perceived the problems exposed in this article, at least “aesthetically”, but has not yet researched them.

 

In other words, the late “expansion” of HCI research beyond the original, supposedly simple and naïve strict cognitive ergonomics, whatever cognitive means or should mean, does not seem to address any of the problems of this paper. It is possible that HCI is expanding everywhere, or anywhere, and perhaps nowhere. It reminds me of my quotation of Jan Brouwer in the context of computers seen as the embodiment of logic and mathematics:

 

Every branch of science will therefore run into deeper trouble; when it climbs too high it is almost completely shrouded in even greater isolation, where the remembered results of that science take on an independent existence. The "foundations" of this branch of science are investigated, and that soon becomes a new branch of science. One then begins to search for the foundations of science in general and knocks up some "theory of knowledge".

 

I submitted the above text of this section on Quagmire and intellectual jungle to a senior researcher and professor who dedicated most of his life to HCI. Here follow the comments I received in two consecutive mails:

 

First of all, I think your examples are very well presented and point to REALLY important issues. The shameless abuse of power by companies and lack of basic empathy to people (those who they think can be ignored), are discriminating and disgusting.

What I do not completely agree with, is that HCI is not mostly about cognitive ergonomics. It used to be the case, but it is changing. The scope of HCI is expanding, and in 2019 there was even a workshop on philosophy at the CHI conference […].

This year the CHI conference introduced a new sub-committee, "critical and sustainable computing", which deals with some relevant issues: https://chi2021.acm.org/for-authors/presenting/papers/selecting-a-subcommittee#Critical-and-Sustainable-Computing.

 

I think your personal stories of frustrating power struggles with the faceless inhumane entities behind "human-computer interaction" resonate well with some of the themes covered by the subcommittee.

I agree with you that conceptually HCI now is a mess. It is a mixture of different agendas, approaches, etc. It used to be conceptually consistent, but not any longer. And it does not properly build on humankind's intellectual history. I disagree that HCI hasn't studied the issues you are highlighting in your examples. Many of them (even though probably not all of them) have been actually studied, or at least pointed out as relevant research issues.

So, I think the issues you mention are being addressed, but often in an incomplete, naive, and inconsistent way.

Why? I think there are many reasons. The current fragmentation of knowledge is probably one of them, and another one, in my view, is that HCI research is trying to address lots of different assorted, often concrete and practical, needs in knowledge about human uses of digital technology. These requests for HCI-type studies are outside researchers' control (they just see that one can get project grants for this and that).

What can be done about it? Beats me...

 

 

CASE STUDY OF QUAGMIRE

 

What I perceive as a general intellectual quagmire is illustrated by two books that are interrelated through one of their authors, and deal with Primitive Interaction Design (2020, 133 pp., originally priced at US$ 170, sic) and The Psychosocial Reality of Digital Travel (2022). The latter I have already commented upon in another paper less related to HCI, with emphasis on the concept of “reality”. With regard to the first-mentioned book, its preface states (pp. viii-ix, my italics) that

 

Tangible interaction, unconsciously executed and informed by peripheral information restores the primacy of action and re-integrates the mind and the body. The technology then disappears from perception in use. That means that products/artefacts have to be designed for human beings, not for users or customers. […] To be pleasant and invigorating, human life should be free of the need to always be conscious of the environment in which it exists. […] …our thinking of design and information-based society should adapt by using a more universal approach and aspects of human consciousness/unconsciousness in a new, “primitive” coexistence with modern information technology.

 

Such a program already contradicts the position of my own present text, inasmuch as it is dedicated to the study of the interaction between human beings differentiated among users or customers, decision-makers, and designers. The program seems instead to be more directed towards the idea of interaction as all-encompassing aesthetic entertainment, as suggested by the second book mentioned above, on digital travel by the use of virtual reality. Entertainment is necessarily a matter of both aesthetics and consumption, despite any claims of attaining morality through an integration of the mind (whatever it is) and the body, or what I identified as the “flesh” in a study of phenomenological ethics. But the book claims more than that (p. 15):

 

To find inspiration for the new view of design presented in this book, and to change attitudes about designing. We have looked beyond conventional design to the methodological playgrounds of anthropology, mythology, theology, science, ethics and art. […] A consideration of the spiritual dimension, and of myth, of emptiness and of the unconscious should come as a revelation to the profession […].

 

I find that it claims too much, while not mentioning philosophy. Being aesthetics, it also incurs the neglected Kantian relation to reason and ethics under the label of “design”, which is covered in two essays of mine about computers as embodied mathematics and logic, and especially the one about computerization. The reader of these lines who wishes to spare the effort of reading and understanding the rest of this section may go over to study the latter reference dealing with computerization. The whole book on Primitive Interaction Design considered here is in my view a misunderstanding, a failed attempt to see “design” as a way to the salvation of humanity by reducing all philosophy to phenomenology and avoiding the mention of Christianity. In fact, the book recalls (p. 33) that in the pursuit of science

 

God was declared dead and religion relegated to one day a week, at best. The separation of our minds from our bodies, our reason from our emotions, was complete. […] However and surprisingly, this most technological and abstract of inventions [computer technology and associated communications capabilities] came to provide the means for a reintegration of being and doing. Virtual reality […].

 

The book does (not do) so by resorting mainly to unbounded wholesale references to authors related to phenomenology, which symptomatically overflow into Wikipedia's “See also” related to phenomenology. This adduction of phenomenology becomes possible by means of the adoption of an approach that has been called Fact-Nets or Leibnizian Inquiring Systems, mentioned in my survey on Information and Debate.

 

Skill in the construction of fact-nets allows for, e.g., an introduction to “the main dimensions of the mind” based on the interaction between the Conscious and the Unconscious (p. 37f.), followed by a consideration of emotion, without a single reference to the otherwise ambitious bibliographies at the end of each chapter of the book. Otherwise it is this approach of fact-nets that allows for an apparently enormous coverage within the book's total of 133 pages. References to Carl Jung's collective unconscious, immediately followed by Buddhistic thought later in the book (p. 77ff.), however, are not applied to explain Jung's conception and the earlier mentioned interaction between the conscious and the collective unconscious. Neither are they applied later (p. 97) to tools that embody the collective unconscious. Not to mention the problem of understanding what consciousness is, as evidenced by repeated confused claims from several quarters, such as Physicist claims to have solved the mystery of consciousness (August 14, 2022). Forget Erich Neumann's The Origins and History of Consciousness (1970/1949). I think that many readers will be impressed by the book considered here, with its ambitious coverage of apparently disparate conceptual frameworks, in what can be perceived as a somewhat rhetorically bloated style, as exemplified by the label of a figure (p. 123): Conceptual spaces for a blended morphogenetic prototyping environment.

 

I must confess that I perceive this kind of approach as “mind-blowing”, with the risk that this says more about my mind than about the approach. This is aggravated by the fact that the philosophy and psychology refer to Buddhistic thought without any reference either to Christianity or to the differences and relations between Western and Eastern thought, such as are found in Carl Jung's work. Not to mention the pitfalls of Western people trying to understand Eastern thought in depth when they barely understand their own Christian culture. The book announces already in its preface (p. viii, my italics):

 

Tangible interaction, unconsciously executed and informed by peripheral information, restores the primacy of action and re-integrates the mind and the body. The technology then disappears from perception in use. That means that products/artefacts have to be designed for human beings, not users or customers.

 

[…] To be pleasant and invigorating, human life should be free of the need to always be conscious of the environment in which it exists.

 

Disregarding what an undefined "environment" is supposed to be, my conclusion is that, paradoxically, this is what happened to the authors. They do not define themselves as users or customers; they are either undefined designers or, most probably, see themselves as just human beings. Technology has disappeared from their perception in use as HCI designers and authors, while paradoxically it permeates the whole book, whose fact-net structure embodies the essence of technology. It is my perception that scientists and technicians who deal with computer fields such as HCI are proficient in the logical-mathematical thinking that expresses itself in fact-nets. Analytical psychology would hypothesize that while they consciously deny and repress their inclination to logical-mathematical thinking, they are overpowered by indiscriminate feelings and unconsciously express that inclination in the professional design and authoring activities for which they were employed in academia in the first place.

 

 

CONCLUSION

 

I dare to leave it at that, asking what the place is of the formal sciences embodied in computers in relation to the other sciences, as suggested in my other works on mathematics, digitalization, and theology. Long after I had written and published this paper I wrote another weird text with the title Logic as rape vs. Truth and Love, and a bit later I realized that the present paper advanced examples of mental rapes. In the meantime I had experienced and witnessed a number of additional, better examples of HCI breakdowns, which would have justified reworking and lengthening this paper to double or triple its size, had I been younger.

 

The question is about more than the fragmentation of knowledge mentioned above. It is also about the original and basic fragmentation between the formal logical-mathematical knowledge furthered by computers and other knowledge, as well as about what knowledge really is or should be. The question is then whether the computer is forcing the human psyche to become a "brain" using (in Brouwer's sense) only a small "logical" part of itself in reflection and in communications between people and with cultural history. And whether this can or cannot be obviated by means of reforms of logic and by eclectic patchworks of logically structured so-called user models and mental models that imply an imperceptible, gradual impoverishment of dialog and culture. Against this background I have refrained from the impossible task of reviewing all the canonic literature of HCI beyond samples from some of its luminaries as exposed, for instance, in the "Footnotes" and "Further reading" sections of Wikipedia. Such a review would have incurred the problems of "Debate" as I depict them in the context of academic cronyism and socialization.

 

For the rest, I am obviously aware that interaction with computers brings innumerable advantages, and that my text may be interpreted as too negative, as not acknowledging the advantages of a developing technology with only transitional childhood diseases. Those who interpret and feel so usually neglect that advantages and disadvantages, or benefits and costs, are concepts of three-hundred-year-old utilitarianism, and therefore they also neglect the criticism that has been directed at it. I will not go so far as to defend my text only by referring to the Faustian bargain and its history. Nor do I believe that it is neo-luddite technophobia to ask whether all the latest technical innovations always mean a better future for everybody, equated with more being achieved faster and cheaper for everybody, without knowing where and how it will end, if not in a paradise. I can only affirm that it was my basic positiveness towards technology that motivated me to five years of hard university studies allowing me to start a career as an electronic engineer, to study technical matters most of the time at institutes of technology, or what today are often called technical universities, and to deal with basically technical disciplines. It is only age (now 83) that prevents me from having the time and energy for further studies which would extend my explanatory thoughts and my writing beyond what I already wrote in Trends in the philosophy of technology and Information and theology, and extend my readings beyond Theology and technology.

Let me move towards a conclusion by quoting from my earlier essay on the drive towards computerization, as it impacts the discussion of HCI. In July 2021 the Reuters news agency reported a major "private ransomware-as-a-service" (RaaS) attack by REvil on the U.S. tech provider Kaseya, which forced the Swedish Coop grocery chain to close all 800 of its stores for about a week. I wrote:

 

The late and perhaps ultimate consequence of short-circuiting the human and social element in the increasingly inclusive logical processes is the expanding phenomenon of Ransomware, and in particular of Ryuk, under the general labels of Cyberattacks and Computer security. I do not know of any ex-post, "after-the-event" corrective except smart back-ups and the hopeless hodgepodge of the indefinitely expanding "security industry", including DFIR - Digital Forensics and Incident Response - to be considered as a sub-field of FDI - Fault detection and isolation. As an ex-ante preventive measure I can only, naively and idealistically, think of my dissertation on Quality-control of information and the related summary in the (Swedish-language) book on Systems development and rule of law.

In order to avoid HCI mainly contributing to the growth of discontent at the edge of HCI-luddism, to its allurement for criminal organizations, and to the consequent challenges to a growing hodgepodge of "security industries" leading to an analogue of a (cyber) "arms race", we have to acknowledge that knowledge cannot be reduced to computerized logic or so-called Leibnizian inquiring systems. One cannot by logical means prevent the breakdown of logical chains and networks. I can now, in an apparently utopian attitude, add that what has been expressed in the present text is the ultimate ex-ante preventive measure.
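That breakdowns cannot be prevented from within the chain can be illustrated by a back-of-the-envelope computation, with hypothetical numbers of my own choosing rather than data from any cited source. If every link of a logical chain - a program, a protocol, a certificate, a human procedure - holds with some probability, and the links are assumed independent, then every logical safeguard added is itself one more link, and the reliability of the whole chain only multiplies downward:

    # A minimal sketch under my own assumptions: n independent links,
    # each holding with probability p; the chain holds only if all do.

    def chain_reliability(p: float, n: int) -> float:
        """Probability that an n-link chain holds, each link having reliability p."""
        return p ** n

    for n in (1, 10, 100, 1000):
        print(f"{n:4d} links at 99.9% each -> chain holds with p = "
              f"{chain_reliability(0.999, n):.3f}")
    # prints approximately 0.999, 0.990, 0.905, 0.368

At 99.9 per cent per link, a thousand-link chain - no unusual length for modern software and its supply chains - holds with a probability of roughly 37 per cent, and each corrective added from inside lengthens the very chain it is meant to secure.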

Let me end by reminding the reader that the ongoing computerization and digitalization of society means that an increasing number of elders become handicapped by not being able to perform necessary daily tasks requiring digital "dexterity". The first consequence of beginning dementia, besides failures of memory, is the weakening of logical thinking, which is hidden behind vague expressions such as "experiencing memory loss, poor judgment, and confusion", "trouble handling money responsibly and paying bills", or "taking longer to complete normal daily tasks".

 

My experience is that, among all this, the weak link is elementary logical thinking. A man I know has a neighbor who, at the age of 75-76, after having successfully used regular mail, a cellular phone, and a computer, suddenly could no longer pay the bills that he had been processing manually by regular mail. He hoped that he could put all business letters and invoices in an envelope and mail them to his bank, which would take care of the rest. Since his closest relatives live in other cities, he began to rely on his neighbor, who took photos of all correspondence and sent them digitally to the man's son; the son succeeded in arranging to act as a proxy, with his own authorized digital contact with his father's bank, and now pays the bills digitally. All this in the expectation that the father will in the future move to live near the son. All this, affecting also those who are not yet victims of beginning dementia, recalls what I already wrote above and repeat now as a description of what amounts to an ongoing societal experiment:

 

Those who have "keyboard dyslexia" or are incapacitated by age may instead be allowed to survive for as long as they get by in the increasingly computerized world, hoping for an option of digitalized verbal input into whatever they no longer understand. Nevertheless, it is more than a question of cognition, dyslexia, or the like. Ultimately it may be a question of the limits of the logicization or mathematization of reality as a consequence of the computerization and digitalization of society that I have analyzed in another article.

 

 

POSTSCRIPT

 

By the end of the year 2023 I had gained personal experience and knowledge of about 30 cases of problems in human-computer interaction that would have complemented and further illustrated the cases advanced in this paper. I happened to describe a few additional simple cases in the introduction to an essay of mine which I had not foreseen would become among my most downloaded and read, since it deals with personal experiences of problems of medical safety as a patient, outside my professional area of interest. They remind us that the issue of security has to be expanded to include (the security of) privacy, as best exemplified by the increasing discussion of privacy, considered for the time being (April 2023) under Wikipedia's (provisional) title Privacy concerns with Facebook. Other considerations are presented in another essay of mine on the issue of the computerization of society.

 

The problems cannot be resolved by logical computer means, even as formalized in law, for the very same reasons that computer security cannot be achieved by logical means. They strengthen my conviction about the character and seriousness of this situation and its development. It requires further attention from researchers who are not as limited in time and energy as I feel at my present age (86). I hope that they feel encouraged by what I have already written. The more so with the advent of anonymous "interactions" with, or rather through, the latest versions of so-called Artificial General Intelligence, with the warnings about its "existential risks" that I partly consider in the last section or epilogue of my above-mentioned essay on the computerization of society, and finally in my text on Artificial General Intelligence and ChatGPT. In that text, as well as in a section of Reason and Gender, it becomes gradually more visible that human-machine interaction also implies a progressive substitution of humans by computing machines. This to the point that the fraction of humans who keep really interacting with computers gradually bypasses the thoughts and emotions of an increasing number of humans who do not affect the contents and operations of the computers interacted with. HCI thus keeps losing part of its identity, the very meaning of interaction: the number of humans really participating in it decreases, and the interaction takes place among a gradually shrinking number of active participants, while the number of passive (including unemployed!) "users", consumers of the effects of others' participation, increases.