Rev 921109 11/11/01

COMPUTER-HUMAN INTERACTION AS

CONTINUOUS SYSTEM RECONSTRUCTION

by Kristo Ivanov

University of Umeå, Institute of Information Processing, S-901 87 UMEÅ (Sweden).

Phone +46 90 166030, Fax +46 90 166126, Email (Internet): kivanov@cs.umu.se

 

Abstract:

By means of this paper I wish to suggest some possibilities of considering computer-human interaction in terms of a continuously self-revising hypersystem which is ultimately always a matter of inter-human interaction. The hypersystem is an implementation of the categorial thinking that is implicit in a particular — dialectical — social systems theory. This paper develops and exemplifies some matrix features of the hypersystem, suggesting that they can be the skeleton of constructive computer support for self-learning social information systems. Hypersystem computer support incorporates principles of participative and cooperative computer systems development by eliciting, and keeping track of the relations between, various descriptive and normative (IS-OUGHT) system categories which portray and interrelate the views of social actors during the evolution of a particular problem situation. This kind of computer support may be expanded in order to obtain a qualitative follow-up and constructive evaluation of the system's evolution.

 

Keywords: Hypermedia, hypersystems, social systems, computer application, constructivism, educational technology, evolutionary systems, participatory design, interface, interaction, interactivity, cooperative work.

 

Categories

The world, or human thinking about the world, can be grasped and "manipulated" through the use of categories. The roots of such categorial thinking in Western culture are probably to be sought in Aristotle's categories, or Praedicamenta as they were called in Latin translation: substance, quantity, quality, relation, place, time, etc. (Piltz, 1978, p. 63ff). In the field of computer and information science the most common categorization is in terms of the Newtonian-Lockean conceptions of objects or entities, attributes, and relations. These have the advantage of being consistent with the powerful mathematics that was developed for human dealings with the world of natural science. That very same mathematics would eventually be applied to the development of the mathematical logic which forms the structure of computer hardware and software.

The philosopher Immanuel Kant's insights into the workings of the human mind as an intermediary between the world and man led to a new categorization — not of the world but rather of the mechanisms for apprehending the so-called world. This eventually gave rise to a more epistemologically oriented thinking, supported by categories such as space, time, causality, etc., subsumed in geometry, arithmetic, and kinematics. Kantian categorial thinking has later been developed in order to adapt it to the understanding of later science, and to make it more applicable to the Hegelian dialectics between subjectivity and objectivity. The categories that were developed could be gathered in what came to be called "system". The need for some kind of systems thinking is evidenced in many computer-science contexts where one refers to context, frames, and the like.

When we set out to develop systems we have to rely on the techniques which have historically grown up to "describe" the world in relation to the human motives for such description. It is far from obvious that the mathematical and logical description techniques which were originally developed for the control of the natural world, as it was understood in the seventeenth and eighteenth centuries, are convenient for our — more than natural — world, even assuming that the meaning and delimitation of "natural" has been settled.

There are, for instance, professional mathematicians who today question the applicability of Newtonian mathematics to dealings with biological phenomena (Rosen, 1985). The trouble is that classical mathematics can be applied to biological and social phenomena in the sense that it makes some sense, i.e. it is not "impossible". The challenge is, then, to be able to evaluate what the consequences are of trying to apply, or of applying, a way of thinking to something which, in some sense, is not a proper field of legitimate application for that way of thinking. It is far from obvious that the possible illegitimacy of such an application is self-correcting by means of some sort of negative feedback in the application situation as evaluated by a certain observer. Among other things, it is necessary for the observer to have some kind of theory of implementation for the phenomenon or, in more conventional terms, to have at least instruments for the observation of a priori defined variables that determine the "success" of the application. In a sense, this also introduces us to the ambiguity of the expression "computer application", which tends to mask and obscure the ambiguity of the concept of computation itself. Once more, professional mathematicians themselves are pointing out to us the systems implications of the concept of computation (Davis & Hersh, 1986, pp. 139ff).

I think that enough has been said to justify the development and use of categories and sets of categories in computing science, where "computing", once again, has to be submitted to the ambiguities of the concept of computation.

In another context I have developed the concept of hypersystems as a base for the specification of computer-supported self-learning systems, exploiting the above-mentioned categorial thinking (Ivanov, 1991b). In this paper I will start from a visual summary of the idea, with the purpose of reflecting upon it in order to enhance its possible further development.

Systems

The idea of hypersystem implies going beyond the concept of a system when trying to structure information for support of decisions. Webster's (unabridged) Third New International Dictionary gives for the prefix "hyper" the meanings of over, above, beyond. Let's therefore start by summarizing the meaning of system itself.

Starting from the pragmatist conceptualization of teleological behavior in terms of decision-maker, alternative actions, outcomes, and goals (valued outcomes), a theory was developed for the design of inquiring systems (Churchman, 1961). This social systems theory developed a new set of primitive concepts analogous to "Kantian" categories of thinking. Systems structuring, with due consideration of ends-means hierarchies in both the physical-artifact and the human-purposeful dimension, is there attained not only in terms of morphological-structural (physical) categories, which were implicit in most ideas of information about the physical world, but also in terms of functional and teleological classes which take into consideration more complex relations, including human activities, i.e. the human striving for goals and values. Problem-solving processes are there defined in terms of systems consisting of the following basic categories or sets, containing subcategories or subsets, labeled as

1) Client, including: (his) purpose, measure of performance

2) Decision maker: (his) components, resources & environment

3) Planner: (his) implementation, guarantor

4) Systems philosopher: (his) enemies of the systems approach, significance (Churchman, 1971, chap. 3; Churchman, 1979, pp. 79-80).
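As a concrete illustration, the category skeleton listed above might be encoded as a small data structure. The following is only a sketch in Python; the identifier names are my own rendering of Churchman's terms, not an established schema:

```python
# Illustrative encoding of the four role categories and their subcategories
# (after Churchman, 1971, chap. 3).  Keys are the categories; values are the
# subcategories each social actor is asked to fill out.
SYSTEM_CATEGORIES = {
    "client": ["purpose", "measure_of_performance"],
    "decision_maker": ["components", "resources", "environment"],
    "planner": ["implementation", "guarantor"],
    "systems_philosopher": ["enemies_of_the_systems_approach", "significance"],
}

def empty_system_description():
    """Return a blank form: every subcategory mapped to a not-yet-filled entry."""
    return {category: {sub: None for sub in subs}
            for category, subs in SYSTEM_CATEGORIES.items()}
```

Structuring information about a problem situation would then amount to filling out such a form, one per social actor.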

 

This implies that whenever somebody sets out to structure information in order to solve a problem or attain (short-run or long-run) goals, he is supposed to do this structuring in terms of the above sets and subsets. In a transportation problem, for instance, one will ask what is going to be transported and for whom (the client); who will be the driver (decision maker); who is empowered to pay for the transportation, even if ultimately somebody else — the above-mentioned client — might be charged for the service; into which steps or subtasks the operation is going to be divided for planning purposes (subsystems); which are the "givens" of the enterprise (load to be transported, roads that are available, traffic regulations which will have to be respected, limits of the available budget, etc.); and which are the available resources that are going to be "consumed" in performing the activities — corresponding to the economic concept of cost (trucks, gasoline, oil, money, tires, etc.).

Information quality

The process of formulating the (information) system in terms of filling out the categories mentioned above raises the question of the accuracy of the information. How does one know whether the information is correct in the broad sense of the word?

This issue of data entry also introduces us to the economics of the technology of data entry (Churchman, 1971, pp. 79ff). At the embryonic prototype stage of a system it may be easy to motivate, or at least pay, people to enter, in a playful mode, some data, answers, opinions, and so on which are required for illustrating the principles of operation of a system, suspending judgements about the ultimate value and quality of the entered data.

During full-scale "live" (co)operation or (co)construction, however, it is a tough issue to determine and enforce who will motivate or pay for the data entry operations (not to speak of the data use operations) beyond those which are politically considered an absolute operational necessity (Persson, 1976). Stakeholders operating the computer may indeed be willing to spend their own time "for fun" or "for hope", entering data which may be used by themselves and by future unknown decision makers in unknown contexts. There is still, however, the question of who will be willing to pay, if not for the operators' time, at least for the equipment that is used up and for the non-glamorous updating and quality control of the pertinent databases. If data entry is motivated by the expectation of an immediate profit or advantage of one's own, then it is no longer a question of ethical cooperation and solidarity.

The issue, however, requires a theoretical base for the determination or evaluation of the accuracy of the information.

This introduces the need for a "heuristic tool" which both enhances and measures the quality of the information. This has been shown to be possible in terms of the concept of "measurable error", i.e. consensus as a function of meaningful conflicts of opinion (Ivanov, 1972, chaps. 4-5; Ivanov, 1986, pp. 46ff). This is akin to certain features of recent research on "minority influence" (Clapper & McLean, 1990, p. 406). The basic idea is that in order to know whether something is right you have to ask not only one source — specifically its "author" — but other sources as well. These other sources, however, should as a matter of principle be chosen for being as "different" as possible, rather than as "many" as possible. To be different in such a context means to be potentially controversial: to refer to different decision makers using different methods for different purposes in measuring what is consensually and avowedly labeled as one and the same thing, the very same concept or variable under consideration. In a charged political context, to be different would even mean being "deadly enemies" in the Hegelian or Marxian sense (Churchman, 1971, p. 172ff), as one would consider political adversaries like an employers' association and a labour union when, for example, they forecast unemployment or profitability figures.
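As a rough numerical illustration of this idea of measurable error (my own sketch, not the formal treatment in Ivanov, 1972): if the "same" variable is measured by deliberately different, potentially adversarial sources, the spread among their readings can itself serve as a measure of error, and agreement despite maximal difference as grounds for confidence.

```python
def measurable_error(readings):
    """Spread among readings of the 'same' variable obtained from deliberately
    different sources.  A small spread despite maximal difference of method and
    purpose lends the consensus its weight; a large spread flags a conflict of
    opinion that still has to be worked through."""
    if len(readings) < 2:
        raise ValueError("at least two different sources are required")
    return max(readings) - min(readings)

# E.g. unemployment forecasts (in percent) from an employers' association
# and a labour union — the figures here are invented for illustration:
spread = measurable_error([4.5, 7.0])
```

The point of the sketch is only that error is defined socially, over a set of sources, and is undefined for a single source.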

It is not a question of having "as many as possible" dialogically active clients, or whatever, and making statistics or communication or negotiation out of their opinions. It is rather a question of who, which clients, are going to deal with what, and how they are going to do it, on the basis of what undiscussed presuppositions, past experiences, and historical ideals of ethics. This is probably a sensitive issue to the extent that it covers the problem of tradition, including political and religious stability (absolute or ultimate presuppositions), versus change as envisaged in the concept of constructive learning systems or of so-called constructive computer applications (Forsgren, 1990). The question is what ought to be considered, or constructed, as environmental changes or news. Many ill-constructed changes or news items are unfortunately supposed to require that we counteract them by corresponding counter-change activities. These counter-changes, however, dissipate our attention and our energies, and the cost is that more important "old news" — that is, old unrecognized truths — remains unattended. This is certainly an important question for many of us who are supposed to keep on learning to be flexible with respect to constructed technological news, constructed environmental changes, and constructed negotiations, while other difficult "old" problems do not get the attention they deserve (Ivanov, 1988). And now we have got the constructed news of hypermedia.

The time has now come to integrate the system concept and the quality-accuracy concept into the frame of the so-called hypersystem. For this purpose I will summarize some of the main messages formulated above.

Hypersystems

The basic idea is to systematize, in the form of a "mathematical" matrix, the issue of subjectivity versus objectivity in systems design and operation. While we all tend to refer to "objects", "purposes", and even "systems", it is all too easy to fall into the illusion that, say, "my" object is the same thing as "our" object, and further "the" object: even constructivist and perspectivist thinking is sometimes tempted to talk, in a sort of mathematical-logical language, about "consensual domains" of discourse. The point is to encourage the relativization of both the descriptive and the normative statements that build up the systems work, without opening the door to an excessively easy or cheap relativism and arbitrariness.

Categories are then used by "somebody" in the paradoxical role of an "over-observer" who goes into the problem situation which is the object of systems development, and tries — as it were — to fill out the matrix form below. One main idea is that if anybody states something, it is in the first place his statement. Another main idea is that a statement has to follow a certain form if it is to be considered meaningful "rational" prose. As a matter of fact most people would agree, I think, that this has to be so even for poetry, and the debates about modern art, modern music, James Joyce's texts, etc. deal to a high degree with such matters of the limits of form. Furthermore, in my theoretical understanding, statements are mainly statements about contexts or systems, and as such they can be structured in terms of systems categories according to the aforementioned particular theory of dialectical social systems, originally developed in the spirit of American pragmatism (Churchman, 1971, chap. 3). Such a context is formally and socially defined in terms of its subsystems, its social actors (such as clients, leaders, and designers), their goals, measures of performance, resources, and environment or unquestioned "givens". Nevertheless, that is not enough. The continuous development of the system consists in the continuous follow-up and redesign — by the affected social actors — of these categories and of the relations between them. (Ivanov, 1991b, introduces some details.)

If I state something, I state a problem, a wish, a model, or whatever, in context, and this statement may take the form of a system — not because it "is" a system, but rather because there are historical motives for believing that this form is basically "consensual" to the reason of humans as especially philosophers and scientists have tried to grasp it in the course of the centuries of our Western civilization. This may obviously be regarded as a controversial assumption, but my point is that it is a soft, gentle assumption when compared with most other assumptions of systems development — notably the assumption of the "effectiveness" of the logico-mathematical computer tool. It may not be a "minimal a priori" (Churchman, 1971, pp. 133ff), but the point is to offer a fruitful a priori.

Having this in mind, systems development, which — I hope we will see — is not clearly distinct from systems operation, consists basically in filling out and continuously collating the cells of the matrix below. A system like this, when installed in a workplace, would allow entries by people who classify themselves in the various categories. If I consider myself to be a designer, I will enter my personal identification, stating what I want to do with the system and what I want to do with my entry. For instance, I begin by stating or filling out the first category, i.e. the purpose or goal of "my" system. Then I continue by filling out what I believe are the resources of the decision maker that I have in mind, etc. The next group of tasks would be for me to fill out what I believe are the corresponding statements of the decision maker, etc.

If somebody feels like being a decision maker, he will enter the system via that sort of input identification and then input the corresponding set of statements. The same goes for somebody else who may consider himself a client.

So far I have limited myself to mentioning statements in the IS form, i.e. who IS the decision maker, which ARE the resources, etc. Sometimes, as when I myself enter my own problem statements, the problem as I see it TO BE may occasionally be assumed to be the same as the problem as I, or somebody else (I think), OUGHT to see it. But this need not always be the case, and it is easy to imagine that I may find that the decision maker's or manager's view of the problem is so and so, but that I think it should be different (probably, but not certainly, like my own — which otherwise tends to be considered the standard of IS-correctness). For instance, I find that the manager of the system IS the fellow A who only cares about short-run profits, while I think that it is fellow B who should be appointed to that managerial position, since he is sympathetic to the noble and human (that is, my) idea of democratic participatory design.

In summary, the general layout is:

HYPERSYSTEM MATRIX-BASE

IS or OUGHT judgements: each cell holds what the actor of the row heading says about the system of the column heading.

Columns:

1) Designer's system

2) Decision maker's system

3) Client's system

4) Systems philosopher's system

5) Witness' system? ... Etc.

Rows:

A) According to designer

B) According to decision maker

C) According to client

D) According to systems philosopher

E) According to witness? ... Etc.

(All cells are initially empty.)

 

 

When filled out, the matrix may look like this:

HYPERSYSTEM MATRIX-BASE (FILLED OUT)

IS or OUGHT judgements: each cell holds what the actor of the row heading says about the system of the column heading. Columns: 1) Designer's system, 2) Decision maker's system, 3) Client's system, 4) Systems philosopher's system, 5) Witness' system? ... Etc.

A) According to designer:

A1. Ex.: "I want to help the exploited clients-workers or the helpless students through my influence on the manager." Often believed to be the same as cell C3?

A2. Ex.: "The manager does not appreciate the power of my models." Economic & operational potential. Research: MODERATE (research grants & consulting).

A3. Ex.: "The helpless clients need and require my influence on the manager." Economic and ethical potential. Research: HIGH (low-paid consulting).

A4. Ex.: "The philosopher offers me unpractical, unprofitable, unselling data-philosophy with the hope of being able to stimulate me." Research: LOW.

B) According to decision maker:

B1. Ex.: "The designers do not think for profit and do not understand my problems." Economic and political potential. (Research grants & consulting.)

B2. Ex.: "I want to create job opportunities through profit." Often corresponding to the operational system.

B3. Ex.: "Client-students do not realize the quality of my product." Economic and political potential. Research: MODERATE (marketing). Strategic OUGHT-reality.

B4. As above. Alternative ex.: "He tries to educate designers and long-term planners, but he is unrealistic."

C) According to client:

C1. Ex.: "The designer wants to play with high-tech gadgets."

C2. Ex.: "The manager looks only for more profit." IS-"reality". Political potential and risks (revolt).

C3. Ex.: "I want necessary goods at minimum costs." Economic and political potential. Research: MODERATE (grass-root).

C4. Ex.: "He tries to help me to understand why present systems designers do not meet my needs, but he is politically weak."

D) According to systems philosopher:

D1. Ex.: "The helpless designer needs my methods." Research: HIGH, and cf. cell A4. Field for OUGHT.

D2. Ex.: "The manager does not realize that short-sightedness jeopardizes long-run profits." Research: LOW.

D3. Ex.: "The client wants to participate in design, but is put off by technology." Research: HIGH (pragmatism). Field for OUGHT.

D4. Ex.: "I am the only one who thinks about the long-run implications and deep causes of information technology."

E) According to witness? ... Etc.

In this case, the "problem situation" exemplified was the systems development problem itself. Obviously many questions arise here, mainly relating to the "paradox of the observer", that is, who is putting the questions or controlling the system itself. As a matter of principle, however, there should be no difficulty in "collapsing" the matrix above to only some of its cells by assuming that some of the cells have a kind of priority, and this is apparently done in implementations of so-called constructive systems development. This is the case when, for example, in my capacity as a designer I acknowledge that I have to work in the service of a decision maker who pays my salary and finances the equipment and the time of the input operators. I then turn myself into the "objective" facilitator, liberating observer, or owner of the matrix, and I can go around interviewing both the manager (decision maker) and the client about what to put into their cells.
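Before turning to computer support proper, the matrix base may be sketched as a simple keyed store. The following Python fragment is only illustrative; the role names, the (row, column, IS/OUGHT) key, and the method names are my own assumptions about one possible encoding, not a prescribed implementation:

```python
# Illustrative roles; "witness" is left open-ended, as in the matrix layout.
ROLES = ["designer", "decision_maker", "client", "systems_philosopher", "witness"]

class HypersystemMatrix:
    """Cells keyed by (according_to, whose_system, modality), where modality is
    'IS' or 'OUGHT'.  Each cell holds a list of statements, so the continuous
    revision of the system leaves a visible track rather than overwriting."""
    def __init__(self):
        self.cells = {}

    def enter(self, according_to, whose_system, modality, statement):
        assert according_to in ROLES and whose_system in ROLES
        assert modality in ("IS", "OUGHT")
        key = (according_to, whose_system, modality)
        self.cells.setdefault(key, []).append(statement)

    def cell(self, according_to, whose_system, modality):
        return self.cells.get((according_to, whose_system, modality), [])

# A designer fills out cell A1 and his view of the manager's system (A2):
m = HypersystemMatrix()
m.enter("designer", "designer", "IS",
        "I want to help the clients through my influence on the manager")
m.enter("designer", "decision_maker", "IS",
        "The manager does not appreciate the power of my models")
```

"Collapsing" the matrix, in this sketch, would simply mean restricting which keys are open for entry.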

Computer support

Hypermedia or multimedia and, in particular, even the simpler software of the hypertext-hypercard type seem to be technologies that may be used for implementing, in different degrees, the hypersystem base outlined above. As is being recognized, the challenge of hypermedia software is the proper balance between freedom of association and the need for guidance. That is the point of expanding hypermedia thinking into hypersystems. Some preliminary attempts to implement so-called constructive prototypes in this spirit (Forsgren & Ivanov, 1990) are being reflected upon in order to check to what extent categorial thinking can be realized in computer applications without losing sight of the systems philosophy proper.

One of its sensitive aspects is the capability and motivation for keeping track, in the implementation, of the relations between categories. Lately there has, for instance, been much attention focused on the proper way of considering so-called human activities, especially as related to actions and operations (Kuutti, 1990). In our systems terms it turns out that one way to grasp activities would be to represent them as a function of the consumption of resources, conditioned by a certain environment, which includes culturally determined technology and tools (Sachs & Broholm, 1989). At the same time, systems theory requires that resources and environment both be defined in terms of one and the same particular decision maker or leader, since it turns out to be meaningless to correlate the resources of one leader with the environment of another leader in the implementation of (whose?) activity. Furthermore, the environment must be defined in terms of a particular goal or objective, since it is, it is true, something which is not controlled by the leader, but it must also be something which affects the measure of performance against the goals. There are indeed many things which are not under the control of the decision maker but do not qualify as environment, because they make no difference with respect to the attainment of goals. And so on.

In summary: the resources of one leader or decision maker of a subsystem can be the environment of another leader, in the same way as the costs of one social actor turn out to be the income of another. The goals must be the clients' goals in spite of being formulated by the leader, and this puts ethical pressure on the designer, who cannot equate the client with the man who pays his salary and implements his system. Environments must be defined in terms of a particular leader, since they are by definition something which can affect the measure of performance but is not controllable by the leader, etc. This way of seeing things appears to me more sophisticated and intuitive than many other ways of talking about context, frames, conceptual models, activities, and such concepts, which in one way or another boil down to the basic formal and "empty" concepts of entities and relations.
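The definitional constraints just summarized lend themselves to mechanical checking. The following is a hedged sketch (the field names are hypothetical, mine rather than the theory's): an entry qualifies as "environment" only relative to a named leader and goal, only if it affects the measure of performance against that goal, and only if it is not controllable by that leader.

```python
def is_valid_environment(entry):
    """An entry qualifies as 'environment' only if it is defined relative to a
    particular leader and goal, affects the measure of performance against
    that goal, and is not controllable by that leader."""
    return (entry.get("leader") is not None
            and entry.get("goal") is not None
            and entry.get("affects_performance") is True
            and entry.get("controllable_by_leader") is False)

# The weather may affect a transport firm's performance but is not controllable,
# hence environment; a truck is controllable, hence a resource, not environment:
weather = {"leader": "transport manager", "goal": "deliver the load on time",
           "affects_performance": True, "controllable_by_leader": False}
truck = {"leader": "transport manager", "goal": "deliver the load on time",
         "affects_performance": True, "controllable_by_leader": True}
```

An analogous check could enforce that resources and environment entries name the same leader before they are correlated.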

The reasoning outlined above can be made quite rigorous in terms of so-called morphological (structural), functional, and teleological classes, which stand at the basis of the description and manipulation of structures and functions of activities in a systems network (Churchman, 1971, pp. 43-46; Singer, 1959). This kind of reasoning has already been promisingly applied to the computer context, and could guide our own further thinking on the matter (Sachs et al., 1989). Other developments of this kind of systems thinking (Ackoff & Emery, 1972, pp. 67, 163) point to the possibility of relating these latter concepts to the differentiation between reaction and response, akin to the differentiation between activities and actions or operations in so-called activity theory (Kuutti, 1990). The point in mentioning this is to emphasize that computer support should make it possible to keep track of many important relations that until today could most often be conceptualized only theoretically, even if they are consistent with many practical applications in the limited area of natural science and technology.

The tracking could be expanded to take care of "chains" of activities, where it is not really a question of "chains" but rather of subsystems that stand in a particular relation to each other and to a conceptualized overall system (Churchman, 1971, pp. 54ff, 66ff). The introduction of "subsystems", with their ambiguities regarding the possibility of relating conflicting subgoals to an overall monistic social goal (as paradoxically presumed even in e.g. Marxist worldviews), also expands the hierarchical concept of decision maker or leader. He becomes an "autonomous" social actor, where leadership or decision and implementation power are a matter of degree of autonomy or discretion. Even the most oppressed worker has some discretion in his own work in terms of the "environment" which is forced upon him, and conceptually he can be seen as a "decision maker" for a very limited particular subsystem. This seems to be implicitly recognized in German work-environment research in the tradition of "action-regulation theory", which develops measurement tools for following up the quality of work (Oesterreich & Volpert, 1986; Volpert, 1988; Volpert, Oesterreich, Gablenz-Kolakovic, Krogoll, & Resch, 1983).

When this is done it would be possible to conceive of continuous or, rather, periodic discussions-negotiations about the "why?" of the differences between the different cells. The overall difference could, in certain cases, be expressed by a mathematical function of the content of the cells, to the extent that it can be represented quantitatively in a meaningful way. There are developments that might be adapted to such uses (Ehrenberg, Eklund, Fedrizzi, & Ventre, 1990). If not, I can imagine some kind of linguistic data processing that creates some kind of rough measure of the similarity of the contents of the cells, with due consideration for synonyms, etc. It is also conceivable that ongoing developments of so-called "virtual reality" could lead to a capability of visual and other representation that gives a "total feeling" of the degree of uniformity between the cells while a negotiating group navigates in a "hypersystems space" (Eisner, 1991). We will obviously some day arrive at questioning the concept of "interface" itself, to the extent that "face" does not relate mainly — as it should — to "doing" (Latin facere), but rather to visual connotations. Maybe we must prepare for the time when some smart scientists will discover that they can launch the concept of "interbody", if this has not already been done by smart advertisers.
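One crude version of such a rough measure of cell similarity, deliberately ignoring synonyms, negation, and context, is simple word overlap. The sketch below is my own illustration, not a proposal from the cited literature:

```python
def cell_similarity(text_a, text_b):
    """Jaccard overlap of the word sets of two cell contents: 1.0 for identical
    vocabularies, 0.0 for disjoint ones.  Synonym handling and other genuine
    linguistic processing would be needed for real use."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

A large matrix-wide dissimilarity would then flag cells whose "why?" is worth negotiating about.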

The transformation of the discussions and negotiations on hypersystems into some kind of "interbody", inter-action, or inter-mind is supposed, in the spirit of dialectical social systems theory, to represent the active, dynamic, i.e. continuous, redefinition of the system. In this respect the proposal advanced in this paper is consistent with — but not equivalent to — the most recent visions about the future of methods of systems development and of the human-computer interface, including aesthetic virtuality, putting the observer outside the picture, clustering of information in "rooms to move around in", clustering of windows related to particular tasks (analogous to our matrix cells), retrieval of points of view rather than of facts, etc. (Clarkson, 1991; Linderholm, 1991; Ryan, 1991; Stolterman, 1991). There is, for instance, a clear relation between regarding the design of software as intrinsically a branch of cinema (Linderholm, 1991, p. 42) and advancing the importance of aesthetic and ethical "visions" in the process of systems development (Stolterman, 1991). What is required and afforded by means of the hypersystems concept is to put these visions socially together in a relatively stable meaningful whole, and to ground them in a psychological theory — analytical Jungian psychology — which is consistent with the whole systems concept.

It is, then, much more than a question of pressing, say, a HyperCard button in order to check somebody else's opinions, or in order to come into contact — as when dialing a phone call — with some particular government official you are depending upon. It may be a matter of eliciting the ethical response of a social actor who would rather ignore or marginalize you. We obviously meet here the problems which correspond to the charges of "counterfactuality" aimed at Habermasian critical social — discursive — approaches. The active involvement of "deadly enemies" is a basic, absolute requirement for the fruitfulness of non-trivial applications of the social system idea and for the implementation of the "sweeping-in process" (Churchman, 1971, chaps. 7 and 9 on "Hegelian" and "Singerian" inquiring systems). To the extent that negotiations involve "deadly enemies", in the sense that they have reasons for diverging as much as possible in their IS-OUGHT judgements about the system, they may be hoped to result in information and systems of "better quality" (Ivanov, 1972; Ivanov, 1987). That means information that can be expected to be helpful in contributing to the attainment of more general goals, and information that is in itself difficult to misuse. By definition it is also information that is obtained through a participative and cooperative process of continuous systems development, in the spirit of the optimistic trilogy "production - cooperation - progress" (Ivanov, 1991b).

The interbody capability can also be seen as a bridge to the activation of the systems definition that results from the compromise arising from the aforementioned negotiations. The difference between an "abstract" depictive description that is pure wishful thinking on the part of a powerless client, and a "concrete" normative description by a powerful manager which is activated and really used operationally in, say, an industrial or commercial firm, is that the latter in practice gets allocations of certain resources. These are the resources which can be activated for creating changes in the physical world. The former can also create changes, but probably more in the "psychic" world of moods and motivations, which also co-determines the physical use of resources, in the same way as this physical world can be affected by the reading of a novel or by conversation with a psychoanalyst. The frontier between operational constructed reality and the "psychic world" may be seen as vague, in the same sense which emerges, for instance, in those cases where an enforced operational and concrete computer system of a company is sabotaged by employees: it is "applied" and "used", but at the same time it is not used by those who prefer not to act on its orders, or prefer to act on the basis of other "manual" information.

Conclusion

By means of this paper I wish to suggest some possibilities of considering human-machine interaction in terms of creating and continuously adapting a hypersystem that defines an operative computer support. Such computer support can also be seen as an implementation of the categorial thinking that is implicit in a particular dialectical social systems theory. I would appreciate any comments and hints regarding software products, programming languages, or methods of systems development that seem appropriate for keeping track of particular instantiations of the proposed systems categories and of their interactions.
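As a purely illustrative hint of what such keeping track might amount to, a minimal sketch is a hypertext-like store of category instantiations (nodes) and of their interactions (typed links). All identifiers, category names, and relation names below are assumptions of the example, not a proposed product or a fixed vocabulary.

```python
# Minimal illustrative node-and-link store for instantiations of
# systems categories and the typed relations between them.
class Hypersystem:
    def __init__(self):
        self.nodes = {}   # node id -> (category, description)
        self.links = []   # (source id, relation, target id)

    def add(self, node_id, category, description):
        self.nodes[node_id] = (category, description)

    def link(self, src, relation, dst):
        self.links.append((src, relation, dst))

    def related(self, node_id):
        """All (relation, other-node) pairs touching the given node."""
        return ([(r, d) for s, r, d in self.links if s == node_id] +
                [(r, s) for s, r, d in self.links if d == node_id])

hs = Hypersystem()
hs.add("m1", "measure", "client waiting time (IS)")
hs.add("m2", "measure", "client waiting time (OUGHT)")
hs.add("a1", "actor", "government official")
hs.link("m1", "contested-by", "a1")
hs.link("m1", "revised-into", "m2")
```

Querying `hs.related("m1")` then recovers both the contesting actor and the revised judgement, which is the kind of follow-up of a system's evolution that the proposed computer support would have to provide at scale.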

In this report I have kept theoretical reflections on, and critical comments about, the hypersystems idea and its preliminary implementations to a minimum. In that respect the reader is referred to other pertinent literature (Forsgren et al., 1990; Ivanov, 1991a; Ivanov, 1991b).

References

Ackoff, R. L., & Emery, F. E. (1972). On purposeful systems: An interdisciplinary analysis of individual and social behavior as a system of purposeful events. Chicago: Aldine-Atherton.

Churchman, C. W. (1961). Prediction and optimal decision: Philosophical issues of a science of values. Englewood Cliffs: Prentice-Hall.

Churchman, C. W. (1971). The design of inquiring systems: Basic principles of systems and organization. New York: Basic Books.

Churchman, C. W. (1979). The systems approach and its enemies. New York: Basic Books.

Clapper, D. L., & McLean, E. R. (1990). Information technology in support of group activity: A theoretical framework and implications for research. In H. E. Nissen, H. K. Klein, & R. Hirschheim (Eds.), Proc. of ISRA-90, The information systems research arena of the 90's: Challenges, perceptions, and alternative approaches. IFIP TC-8, WG-8:2 working conference, Copenhagen, Dec. 14-16, 1990 (pp. 399-413). Lund: Univ. of Lund, Inst. of Information Processing. (Elaborated in book, same eds., Information Systems Research: Contemporary approaches & emergent traditions. Amsterdam: North Holland, 1991, pp. 613-630.)

Clarkson, M. A. (1991). An easier interface: Xerox PARC, originator of the computer desktop, unveils a vision for the future of user interfaces. Byte, (February), 277-282.

Davis, P. J., & Hersh, R. (1986). Descartes' dream: The world according to mathematics. New York and London: Harcourt Brace Jovanovich, and Penguin Books.

Ehrenberg, D., Eklund, P., Fedrizzi, M., & Ventre, A. G. S. (1990). Consensus in distributed soft environments (Research report). Technical Univ. of Leipzig, Åbo Akademi, Univ. of Trento and Univ. of Napoli.

Eisner, R. (1991). Looking at virtual reality: Research. The Scientist, 5(6), 14ff.

Forsgren, O. (1990). Co-constructive computer applications: Core ideas and some complementary strategies in the development of a humanistic computer science. (Presented at the Distributed Architecture School, Karpacz, Poland, 15-20 October 1990.)

Forsgren, O., & Ivanov, K. (1990). From hypertext to hypersystem. In R. Trappl (Ed.), Cybernetics and Systems '90. Proc. of the Tenth European Meeting on Cybernetics and Systems Research, Vienna, April 17-20, 1990 (pp. 275-282). Singapore: World Scientific. (Also as report UMADP-RRIPCS 9.90, University of Umeå, Inst. of Information Processing.)

Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems (Doctoral diss.). The University of Stockholm and The Royal Institute of Technology. (NTIS No. PB-219297.)

Ivanov, K. (1986). Systemutveckling och rättssäkerhet: Om statsförvaltningens datorisering och de långsiktiga konsekvenserna för enskilda och företag [Systems development and rule of law: On the computerization of public administration and its long-term consequences for individuals and companies]. Stockholm: SAF:s Förlag.

Ivanov, K. (1987). Rule of law in information systems research: The role of ethics in knowledge-building procedures, especially in the updating of inference networks. In P. Järvinen (Ed.), Proc. of the Tenth Information Systems Research Seminar in Scandinavia, Tampere-Vaskivesi, Aug. 10-12, 1987. Tampere: University of Tampere.

Ivanov, K. (1988). Expert-support systems: The new technology and the old knowledge. Systems Research, 5(2), 293-100.

Ivanov, K. (1991a). Computer-supported human science or humanistic computer science? Steps toward the evaluation of a humanistic computing science (UMADP-WPIPCS-41.91). Umeå University, Inst. of Information Processing. (Expanded version of a talk presented at the Tenth International Human Science Research Association Conference, Gothenburg.)

Ivanov, K. (1991b). Hypersystems: A base for specification of computer-supported self-learning social systems (Research report UMADP-RRIPCS-13.91, ISSN 0282-0579). Umeå University, Inst. of Information Processing. (Rev. paper at the NATO Advanced Research Workshop on "Comprehensive Systems Design: A New Educational Technology", Monterey-Asilomar, California, 2-7 Dec. 1990.)

Ivanov, K. (1992). Computer-human interaction as continuous system reconstruction. In M. Bazewicz (Ed.), Information Systems' Architecture and Technology — ISAT '92 (pp. 37-49). Wroclaw: Politechnika Wroclawska.

Kuutti, K. (1990). Activity theory and its implications to information systems research and development. In H. E. Nissen, H. K. Klein, & R. Hirschheim (Eds.), The information systems research arena of the 90's: Challenges, perceptions, and alternative approaches. Proc. of ISRA '90, IFIP TC-8, WG-8:2 working conference, Copenhagen, Dec. 14-16, 1990 (pp. 195-216). Lund: Univ. of Lund, Inst. of Information Processing. (Elaborated in book, same eds., Information Systems Research: Contemporary approaches & emergent traditions. Amsterdam: North Holland, 1991, pp. 529-549.)

Linderholm, O. (1991). Mind melding: How far can the human/computer interface go? Byte, Special Edition - Outlook'92, 41-46.

Oesterreich, R., & Volpert, W. (1986). Task analysis for work design on the basis of action regulation theory. Economic and Industrial Democracy, 7, 503-527.

Persson, S. (1976). Apropå myndigheternas uppgiftskrav [Apropos the authorities' demands for information]. Stockholm: SAF:s Förlag.

Piltz, A. (1978). Medeltidens lärda värld. Stockholm: Carmina. (English trans.: The World of Medieval Learning. Oxford: Basil Blackwell, 1981.)

Rosen, R. (1985). Organisms as causal systems which are not mechanisms: An essay into the nature of complexity. In R. Rosen (Ed.), Theoretical biology and complexity. New York: Academic Press.

Ryan, B. (1991). Dynabook revisited with Alan Kay. Byte, (February), 203-208.

Sachs, W., & Broholm, P. (1989). Hypertrophy in the micro-computer revolution. In C. W. Churchman (Ed.), The well-being of organizations (pp. 171-177). Salinas, Calif.: Intersystems.

Singer, E. A., Jr. (1959). Experience and reflection. Philadelphia: University of Pennsylvania Press. (C. W. Churchman, Ed.)

Stolterman, E. (1991). Designarbetets dolda rationalitet: En studie av metodik och praktik inom systemutveckling [The hidden rationale of design work: A study in the methodology and practice of system development]. Umeå: Umeå University. (Doctoral diss. UMADP-RRIPCS 14.91.)

Volpert, W. (1988). What working and learning conditions are conducive to human development? (Research report presented at the Swedish-German workshop on the humanization of working life, Stockholm, December 1988). Institut für Humanwissenschaft, TU Berlin, Ernst-Reuter Platz 7, D-1000 Berlin 10.

Volpert, W., Oesterreich, R., Gablenz-Kolakovic, S., Krogoll, T., & Resch, M. (1983). Verfahren zur Ermittlung von Regulationserfordernissen in der Arbeitstätigkeit — VERA [VERA: A procedure for determining regulation requirements in work activity]. Handbuch und Manual. Cologne: TÜV Rheinland.

Whitaker, R. (1992). Venues for contexture: A critical analysis and enactive reformulation of group decision support systems. Umeå: Umeå University, Inst. of Information Processing. (Doctoral diss.)