Rev 91-04-20, 11/11/01, 13-11-04

Hypersystems: A Base for Specification of Computer-Supported Self-Learning Social Systems

Kristo Ivanov

University of Umeå, Institute of Information Processing, S-901 87 UMEÅ (Sweden).

Phone +46 90 166030, Fax +46 90 166126, Email (Internet):

© K. Ivanov

This is a pre-print version of the text to be published as
Ivanov, Kristo. (1993). Hypersystems: A base for specification of computer-supported self-learning social systems. In C. M. Reigeluth, B. H. Banathy & J. R. Olson (Eds.), Comprehensive systems design: A new educational technology (pp. 381-407). New York: Springer-Verlag.




Abstract: This paper introduces some general features of the idea of hypersystem seen as a general computer-technological implementation of the concept of a self-learning social system. A hypersystem-base is presented in terms of a matrix that also suggests some necessary future developments of preliminary attempts in terms of what has been called co-constructive systems. Hypersystem computer support may be used in order to elicit, and keep track of the relations between, various descriptive and normative (IS-OUGHT) system categories that relate to the views of particular social actors, both groups and individuals. It is suggested that computer support may be expanded in order to obtain a qualitative follow-up or evaluation of the system's evolution in matrix terms. Some problems that come to mind in the context of the first hypersystems implementations point to the need to follow the original theoretical categories more closely, or to reform them, in order to prevent their possible misuse in practical situations.

Keywords: Hypermedia, social systems, computer application, constructivism, educational technology, evolutionary systems.

Learning can be seen as a matter of organization of thought processes in terms of information or, rather, inquiry [14]. Since the "categories" of Aristotle, the development and use of information systems in general has often required the use of some primitive concepts that also implicitly structure and guide our thinking. While traditional logic spoke of, say, subject and predicate, later formal approaches in the context of information systems, data bases, and programming languages mention, for instance, "entity", "relationship", "object", "function", etc. In the particular context of social information systems, including administrative data processing, several schools of thought have advanced different conceptions of which categories constitute what might be called information, information systems, or, rather, inquiring-learning systems. At the interface between formal information systems and social inquiring systems appeared the need to enhance the quality of information and of systems by supplementing the formal categories with categories that deal with error and accuracy [41].

In any case, the disregard of proper categorial thinking in these contexts has apparently resulted in a technological manipulative imagination that seems to be rather arbitrary and problematic, as evidenced in recent visions of the application of computer technology to the educational field [68]. We shall initially see what types of categories were developed in order to support inquiry and learning.

Having started from the pragmatist conceptualization of teleological behavior in terms of decision-maker, alternative actions, outcomes, and goals (valued outcomes), a particular theory for the design of inquiring systems, or social systems theory, developed a new set of primitive concepts analogous to "Kantian" categories of thinking. Systems structuring, with due consideration of ends-means hierarchies in both the physical-artifact and the human-purposeful dimension, is there attained not only in terms of morphological-structural (physical) categories, which were implicit in most ideas of information about the physical world, but also in terms of functional classes and teleological classes, which take into consideration more complex relations, including the human striving for goals and values. Problem-solving processes are there defined in terms of systems consisting of basic categories and sub-categories, sets and subsets, labeled as (1) Client, (his) purpose, measure of performance, (2) Decision maker, (his) components, resources & environment, (3) Planner, (his) implementation, guarantor, (4) Systems philosopher, (his) enemies of the systems approach, significance [14, chap. 3; 15, pp. 79-80].

A hypersystem is a development of the idea of a system that produces information of improving quality [41, chap. 4; 42, pp. 45ff]. There is obviously a danger in introducing something which may turn into a new buzzword in a field which is already overcrowded with such words. That is the reason why I would have preferred to stick to the well established concepts of social system, computers, information, etc. that have been studied over the past thirty years. The continuous introduction of new buzzwords like knowledge-based systems, cooperative work, constructivism as opposed to constructionism, etc. (see section 6 below) seems, paradoxically, to require the reintroduction of a systems-related concept like hypersystem in order to order and evaluate all these novelties from a historical point of view.

A hypersystem in the context of this paper is a learning system that tries to reach beyond itself and beyond superficial conceptions of computer systems. Webster's gives for the prefix "hyper-": over, above, beyond. As such the language for this conception is very simple and not new, and it is reasonably protected from the charge of being a new buzzword. It is a computer-supported application that is built upon the architectural basis of a social system as spelled out above, striving to reach over, above, or beyond itself, i.e. attempting to learn. Constructing a hypersystem implies that the software package is built upon the relationships that social systems theory indicates between its primitives according to the systems philosopher. It is initially stated, for example, that the clients are by definition the originators of the purposes, and that the measure of performance is a measurable operationalization of these purposes. The clients are to be represented by the decision maker, who by definition is the formulator of purposes and, also by definition, is provided or provides himself with the [for him] "changeable" resources or artifacts that, operating under the [for him] unchangeable environment, according to a relational or mathematical model co-produce the measure of performance relative to [the clients'] goals. The planner or designer is by definition the one who, in his analysis and synthesis, purposefully and normatively chooses the decision maker that fosters the clients who should legitimately-ethically (according to a political or religious guarantor) be served by the system.

The core of the hypersystem idea, however, would be to elicit, and to keep track of by means of adequate "pointers" or links between simple or composite nodes, the relationships between instantiations of the primitives during the process of solution, or rather during the dynamic continuous follow-up, of the particular systems problem. There will be a conversation or argumentation, but it will be mainly or initially about certain primitives, within a certain structure, with particular functions or goals. Above all, the software will offer specific data-entry options to particular social actors as defined by social systems theory, e.g. clients, decision-makers, planners-designers, and systems philosophers or creators of methodologies. The structuring and organization of the software will strive to involve, by communicating with and tapping the judgements or opinions of, at least those particular social actors or role bearers whose absence would hide the problems of power and of emotions, including ethics.
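The node-and-link bookkeeping just described might be sketched roughly as follows. This is my own illustrative sketch in a modern language, not part of the original proposal (which presupposed hypermedia tools of the HyperCard kind); the class names, the role names, and the choice of categories are assumptions for illustration only.

```python
from dataclasses import dataclass

# Illustrative sketch only: role and category names are assumptions drawn
# from the social systems categories discussed in the text, not from any
# actual hypersystem implementation.

ROLES = {"client", "decision_maker", "planner_designer", "systems_philosopher"}

@dataclass
class Node:
    category: str      # e.g. "purpose", "measure_of_performance", "resource"
    content: str
    author_role: str   # which social actor entered this node

class HypersystemBase:
    """Keeps track of relations between instantiations of the primitives."""

    def __init__(self):
        self.nodes = []   # list of Node
        self.links = []   # (source index, target index, relation label)

    def add_node(self, category, content, author_role):
        # data entry is individualized by social role, as the text requires
        if author_role not in ROLES:
            raise ValueError(f"unknown role: {author_role}")
        self.nodes.append(Node(category, content, author_role))
        return len(self.nodes) - 1

    def link(self, source, target, relation):
        # e.g. a measure of performance "operationalizes" a purpose
        self.links.append((source, target, relation))
```

A client-entered purpose could then, for instance, be linked to a decision-maker's measure of performance with a relation label such as "operationalizes", so that the program can later trace which categories a given judgement depends on.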

At our present stage of technological development it is convenient that the technical implementation of such a computer application be built upon hypermedia [76] or variants of "multimedia management", typical of recent educational technology and computer-based training [7]. They may be seen as technical outgrowths of the hypertext idea from which the hypersystem term itself might also have been derived [6; 20; 31; 49].

I have up to now outlined the hypersystem idea on the basis of the original social systems categories as developed in dialectical systems theory: it obviously transcends the formal and logical limits of so-called object-oriented programming languages and methods for systems development. Other, later approaches in a close theoretical tradition propose somewhat different sets of categories. Critical systems thinking — CST — [74], for instance, groups twelve categories in four classes, each class comprising three kinds of categories: social roles, role-specific concerns, and key problems. The four classes correspond to asking for the normative ought of (1) the sources of motivation: client, purpose, measure of performance, (2) the sources of control: decision maker, components, environment, (3) the sources of expertise: designer, expertise, guarantor, and (4) the sources of legitimation: the affected people's witnesses, their emancipation, and their world views — Weltanschauung.

Other categories that are closely related to the above have been proposed by the proponents of soft systems methodology (SSM). They are the so-called CATWOE categories [8]: (1) Customers, (2) Actors, (3) Transformation processes, (4) Weltanschauung, (5) Ownership of the system, (6) Environment. At an apparent level of meta-systems it has also been suggested that various paradigms of information systems development be described and interpreted in terms of the categories: (1) Key actors (the "who" part of the story), (2) Narrative (the "what", or the key activities), (3) Plot (the "why" the action took place, akin to causes and purposes), (4) Assumptions (the fundamental beliefs or Weltanschauung, or epistemological-ontological assumptions) [39]. Others again [30] wish to launch schemas with e.g. (1) Client, (2) Leader, (3) Actor [subsystem-leader], (4) Designer, (5) Systems philosopher, and (6) Computer application, which performs (7) Activities, with the help of (8) Resources.

Outside the tradition of pragmatist dialectical systems theory, other primitive fundamental concepts have been used at the interface between information and systems. This proliferation of primitives adds to the overwhelming profusion, or even confusion, which derives from the fact that authors very seldom justify their expansion of the alternatives through reference to earlier attempts or to theories. One notable exception is the more elaborate and historically grounded structure that has been suggested recently [59], akin to another recent attempt [36], and in the tradition of rhetorical-hermeneutical "argumentation" close to the field of law [71; 73]. A relational structure according to which to analyze argumentation is proposed there in terms of (1) Informer, (2) Actor — for responsible action induced by the informer, (3) People affected — in the field of action, (4) Evidence — for the claim, (5) Warrant — for the evidence, (6) Backing — for the warrant, (7) Claim — for the actor's action, and (8) Rebuttal or counterargument — of the claim, or of an earlier rebuttal.

Among the latest newcomers we have (1) Environment (in terms of situations, support, actors, and types of task), (2) Systems (in terms of task flow, work contents, and information objects), (3) Provocateur (analogous to a systems facilitator), (4) Context, (5) Model (related to the environment), and (6) User [38, pp. 35, 46, 56, 75].

Several authors have liked to create their own variants of categories. Often they cannot afford to ground their categorizations in basic considerations or historical controversies in philosophy and in scientific method. The suggestions are "empirically" based, in the sense of being based mainly on ad-hoc experiences and intuitions that are not motivated or reflected upon. One point I want to make by mentioning this is that certain sets of systems categories (notably Churchman's) establish demanding and committing relations between the categories, fostering ethical, disciplined thinking. So, for instance, one would not talk, as in SSM, about the category of "environment" without a commitment about who is/ought to be the decision-maker, and what the clients' purposes are. A similar problem is raised by the category "computer application" that was proposed lately in the co-constructive approach, a category that may mask the problematic concept of (mathematical-logical) computation [22, pp. 139ff]. It may also mask the relationship between resources, environment, and other categories of model building, which is thus reduced to associative manipulation of hypermedia databases.

A most common reason for proposing new sets of categories seems to have been the wish to adapt language to one's own occasional subculture (often a computer subculture at a particular level of technology), and the wish for finer distinctions within the category of decision makers. In the original formulations even clerks, machine operators, workers and public servants could be framed in the role of (sub)decision makers for their particular subsystems. The term decision-maker itself, however, did not sufficiently convey — to the satisfaction of some — the connotation of the power relations between teachers and students in an educational system, or between, say, executive managers, middle management, and workers in a manufacturing system. Even if this is so, terminological reforms that are justified in this way can hardly claim to represent significant theoretical breakthroughs.

If we restrict our further argument, for the purposes of this essay, to the first set of categories mentioned above, and disregard the various subsequent alternatives, we meet the question of who will use and develop such categories. In practice, for systems with the same explicit labels, like "car sales support", "real estate brokerage support", "travel arrangements support", or "course curriculum development support", different designers may choose different clients and decision makers (as representatives of different clients). The degree of cooperativeness in work is measured, ceteris paribus, by the degree to which we approach the ultimate ideal of the democratically autonomous worker-manager, student-professor or, rather, the "unified decision-maker, client, and designer", where the optimistic trilogy of production - science - cooperation merges with the "heroic mood" [14, pp. 201-204, 254].

The who-question mentioned above can be appropriately discussed in the context of one of the latest buzzwords that have been launched in the computer field: CSCW or computer-supported cooperative work. Cooperation, if it is to be meaningful and ethical, presupposes understanding, but there may be understanding without cooperation and understanding that even precludes cooperation. Understanding can be appropriately seen as a kind of "teleological tracking" in which the understander responds to the purposes of the one who is understood [17, p. B85]. The degree of cooperation of A with B is then the extent to which A's activities improve the effectiveness of B's activities relative to B's goals [9, pp. 246ff; 11, pp. 309ff., 375f.; 13, p. 156]. Cooperation is thus seen to be asymmetrical: A may cooperate with B, while B does not cooperate with A. This may be the case, for instance, when A loves (unconditionally) B, or A is employed by B (partial cooperation towards part of B's goals), or when B judges that A's goals are unethical.

The productive unification or dialectical cooperation may become simplified and reduced to conversational, interactive, co-constructive negotiations and consensus. Negotiations searching for consensus, however, may be as dangerous as superficial conceptions of democracy based on majority opinion. The degree of meaningful cooperation depends upon the degree of mutual understanding. Meaningful cooperation might therefore be enhanced by the claim that the degree of cooperation or its conflictuality, including potential conflictuality, be expressed and "advertised" explicitly for the purpose of self-control or self-development. At the limit, such a degree of self-control may be idealized in mathematical or relational matrix terms [26; 41, pp. 4.34ff; 47, pp. 47ff].

At the conceptual level of the question as seen by the "systems philosopher" it is then necessary to refine the systems definition by making more explicit the dependence of the whole view upon the dialectics between the IS and the OUGHT of the judgements, and upon the identity of those who enunciate them. The table below, developed on the basis of an early suggestion [42], illustrates this attempt at a reflective system definition for the structuring of hypersystems. It must, however, ultimately and legitimately also run into the paradox of recursivity or infinite self-reflexivity concerning the who-question, i.e. the question of who is overviewing the table and its contents, judging that a given cell of the table is what it is, etc. Instead of presupposing that this recursive reflexivity will be solved uniquely by the intervention of some kind of democratic auditing agency or well-intentioned neutral mediator-facilitator employed by the leader, the table, seen as a base for hypersystem structures, is intended to foster reflection and controllability by evidencing the complexity of the paradox.


IS or OUGHT judgements: each cell contains the row actor's judgement of the column actor's system.

Columns: (1) Designer's system, (2) Decision maker's system, (3) Client's system, (4) System philosopher's system, (5) Witness' system?...Etc.

A) According to designer
A1: Ex.: "I want to help the exploited clients-workers or the helpless students through my influence on the manager". Often believed to be the same as cell C3?
A2: Ex.: "The manager does not appreciate the power of my models". Economic & operational potential. Research: MODERATE (research grants & consulting).
A3: Ex.: "The helpless clients need and require my influence on the manager". Economic and ethical potential. Research: HIGH (low-paid consulting).
A4: Ex.: "The philosopher offers me impractical, unprofitable, unsaleable data-philosophy with the hope of being able to stimulate me". Research: LOW.

B) According to decision maker
B1: Ex.: "The designers do not think for profit and do not understand my problems". Economic and political potential. (Research grants & consulting).
B2: Ex.: "I want to create job opportunities through profit". Often corresponding to the operational system.
B3: Ex.: "Client-students do not realize the quality of my product". Economic and political potential. Research: MODERATE (marketing). Strategic OUGHT-reality.
B4: As above. Alternative ex.: "He tries to educate designers and long term planners but he is unrealistic".

C) According to client
C1: Ex.: "The designer wants to play with high-tech gadgets".
C2: Ex.: "The manager looks only for more profit". IS-"Reality". Political potential and risks (revolt).
C3: Ex.: "I want necessary goods at minimum costs". Economic and political potential. Research: MODERATE (grass-roots).
C4: Ex.: "He tries to help me to understand why present systems designers do not meet my needs, but he is politically weak".

D) According to systems philosopher
D1: Ex.: "The helpless designer needs my methods". Research: HIGH, and cf. cell A4. Field for OUGHT.
D2: Ex.: "The manager does not realize that short-sightedness jeopardizes long-run profits". Research: LOW.
D3: Ex.: "The client wants to participate in design, but is put off by technology". Research: HIGH (pragmatism). Field for OUGHT.
D4: Ex.: "I am the only one who thinks about long-run implications and deep causes of information technology".

E) According to witness?...Etc.
E1-E5: (not filled in.)

Basically, this tentative table indicates that a hypersystem must consist of at least four or five subsystems that "belong" to particular role-bearers, in the sense that they are in control of the questioning, collection, and possibly also of the structuring and use of the data describing the system. For this purpose all inputs to the system by a particular human must be individualized and identified with reference to his role. At least initially it is supposed that the structuring of the content of each cell will be made along the lines of the definition of social system suggested above, including decision-makers, clients, resources, environment, measure of performance, subsystems, etc. The contents of the cells in the particular case of the table above show only my tentative summarizing examples of descriptions and notes about their typical content, not the content itself in a particular system-case. Furthermore, in the present exposition and context of this paper they are limited to the IS-mode, disregarding the OUGHT, which should also be considered in a composite table.

The table has for the moment mainly an educational function, and for this it is not necessary to presume that it can eventually be computerized in a grand hypersystem. It is supposed to be crudely self-explanatory, and this may be satisfactory at this early stage of refinement. It should be noted that columns and rows might be multiplied in order to accommodate several different designers, decision-makers, etc. Row E and column 5 are not filled in, since they are only used as an illustration of the possible future expansion of the categories, e.g. with the category of "witness" (as suggested by Werner Ulrich). Cell E2 could be seen as an approach to the externalization of systems ethics, etc. In today's systems design practice, however, the whole table usually "collapses" into cell A1 or A2 (the "functionalist" approach) or A3 (the "partisan" approach).
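Should the table eventually be computerized, its skeleton could be represented roughly as follows. This is a minimal sketch under my own assumptions: each cell simply accumulates IS- and OUGHT-judgements, and the cell labels follow the table above; nothing here is prescribed by the original suggestion.

```python
# Hypothetical sketch of the hypersystem base as a role-by-role matrix.
# Row/column meanings follow the table: rows are the judging actors,
# columns are the judged systems; row E and column 5 are omitted here,
# as in the table itself.

ROWS = ["A", "B", "C", "D"]   # designer, decision maker, client, philosopher
COLS = [1, 2, 3, 4]           # designer's, decision maker's, client's, philosopher's system

class HypersystemMatrix:
    def __init__(self):
        # each cell keeps IS- and OUGHT-judgements separately
        self.cells = {(r, c): {"IS": [], "OUGHT": []} for r in ROWS for c in COLS}

    def enter(self, row, col, mode, judgement):
        """Record one actor's judgement of one system, in IS- or OUGHT-mode."""
        self.cells[(row, col)][mode].append(judgement)

    def filled_cells(self):
        """Which cells have been the object of any data entry at all."""
        return [(r, c) for (r, c), v in self.cells.items()
                if v["IS"] or v["OUGHT"]]
```

A "collapse" of systems design practice into cell A1, in the sense discussed above, would then show up simply as a matrix whose only filled cell is ("A", 1).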

In the hypersystems perspective of the table above, the qualitative degree of progressive productive cooperation [14, pp. 201-204] will be represented by the degree to which the IS and the OUGHT of the various cells in the table describe the system in the same or synonymous categories and terms. Even within each cell it may be a question of several designers, decision-makers (including subdecision-makers like workers and supervisors), etc., who may reach a certain degree of consensus in the context of maximum possible disagreement, even within their own category. In order to foster such a consensus in the context of the strongest possible disagreement it would be necessary to gain a better understanding of the contents of, and of the relations between the contents of, the various cells. As indicated in the table, only a few of the cells seem to have been the object of research.

4.1. An Example

The term hypersystem encompasses various kinds of attempts to develop computer support of social systems thinking. A first crude example of hypersystems implementation is the attempt to apply some of the categorial systems thinking to the design of computer support within the frame of so-called co-constructive computer applications [30]. It should be clear that one main feature that differentiates the co-constructive approach from the mushrooming, freely manipulative applications of hypermedia technology is, of course, its reliance, to some degree, on the hypersystems categorial base that was presented above.

A recent paper in this co-constructive spirit [34] outlines LIVEBETTER, a prototype implemented in HyperCard on behalf of local government, with the purpose of helping a city population to exchange apartments. The paper describes the prototype idea in the following way. You can get an overview of what is available. There are maps to show where in the city a house is situated, and there is information about the area in terms of available shops, public services, recreation possibilities, etc. But the point is that you can do other things as well. For instance: (1) Help: if you by chance are not able to interact with the computer, but still want to place an advertisement, you can get in touch with an actor (operator or subsystem leader) simply by clicking on a HyperCard button. (2) Alternatives: "If I don't find anything today, when and how can I find it?" The system can guide you through the municipal county plans for building new housing areas: where and when they will be built, what they will look like, the foreseen level of rents, etc. (3) Intelligent searching: a possibility to let the computer search the database for a flat that meets all, or at least some, of your requirements. (4) Mailbox functions: they allow you to write a note to other clients of the system or to the agency which is running the system. The note can be e.g. "I have no flat, but I have a car for sale..." but unfortunately it may also degenerate into "graffiti".
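The report does not specify how feature (3), "intelligent searching", is realized in the HyperCard prototype, so the following sketch is only a guess at a minimal matching rule: count how many of the client's requirements each flat satisfies and rank the flats accordingly. The field names and the scoring rule are my own assumptions for illustration.

```python
# Hypothetical sketch of "intelligent searching": the actual LIVEBETTER
# matching logic is not described in the report, so this scoring rule
# (number of satisfied requirements, best match first) is an assumption.

def match_flats(flats, requirements, minimum=1):
    """Return flats meeting at least `minimum` requirements, best first.

    flats: list of dicts, e.g. {"rooms": 3, "district": "centre"}
    requirements: dict of wanted attribute values
    """
    scored = []
    for flat in flats:
        score = sum(1 for key, wanted in requirements.items()
                    if flat.get(key) == wanted)
        if score >= minimum:
            scored.append((score, flat))
    scored.sort(key=lambda pair: -pair[0])   # stable sort: ties keep order
    return [flat for _, flat in scored]
```

With `minimum=len(requirements)` the function returns only flats that meet all requirements; with `minimum=1` it also offers partial matches, in the spirit of "all or at least some of your requirements".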

The functions that were mentioned are said in the report to correspond to the point of view of the client of the system, but there are also other roles envisaged for participation in the "conversation", for instance leader or sub-leaders (actors or systems operators), systems engineer, experts of various kinds, designer, etc. The actor or mediator of apartment exchange, however, is seen as the most important role at the stage of prototype development portrayed in the paper. He is said to be the guarantor who counteracts violations of the system's (leader's) intentions through graffiti or downright unethical use. He is supposed to be independent of any authority's rule-machinery, and he works rather like a mediator of public communications, enhancing the quality of work, etc.

The example of ongoing work above was chosen to illustrate the typical spirit in which co-constructiveness has lately been adduced in practice. In particular it illustrates that it is not clear to what degree it incorporates the fundamental ambition of following categorial thinking. There are earlier reports that are pedagogically more ambitious in some details [31]. In any case it should be clear that the co-constructive hypersystem prototype draws upon some of the categorial thinking that was adduced in the previous section. It has not yet incorporated, however, the relational aspects of the categories and some sensitive aspects of the political who-question that motivate our further discussion below.

4.2. Co-Constructive systems

At this point it is convenient to observe that a co-constructive system, seen as a particular implementation of (computer-technological) hypersystems, addresses the questions of some among the above cells. The co-constructive version of the hypersystems idea strives to reach beyond the simplest cells, A1-A3, of the hypersystems base. These cells represent the first idea which comes to mind when imagining computer support of the social systems theory. It is the idea of registering and displaying upon request, or perhaps using operationally, the contents of the main categories for a particular system according to a particular designer or group of designers. So, a designer or a so-called user could feed into, and retrieve from, a database what he himself or other like-minded designers think the clients, decision-makers, etc. of the system under consideration are, and ought to be. According to one of the main points of the hypersystem idea, the quality of the systems design or of the information it produces, as well as its implementability, would be directly proportional to the degree of consensus between designers, chosen for maximum disagreement, concerning both the IS and the OUGHT of the categories. The philosophy of the A1-cell could easily be expanded beyond that very cell by imagining that the designers appoint some decision-makers and some clients as "adjunct-designers" with the right and responsibility to feed their own systems descriptions into the database. Beyond the stage of mere design, when it is a question of redesigning a system that is already operational, it is obvious that only one particular authorized description will be allowed by the power elite until a subsequent negotiation enables an operational updating. If such negotiations are not conceived within the frame of hypersystems they may become the object of other particular computer applications [55; 56].
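The "degree of consensus" invoked here is not operationalized in the text. One crude, hypothetical operationalization, offered only to make the claim concrete, would be the share of system categories on which all the designers give the same description:

```python
# Hypothetical sketch: the paper does not commit to any particular measure
# of consensus, so this "share of categories on which all designers agree"
# is my own illustrative assumption.

def consensus_degree(descriptions):
    """descriptions: {designer_name: {category: description}}.

    Returns a number in [0, 1]: the fraction of categories for which
    every designer's description is identical.
    """
    categories = set()
    for per_designer in descriptions.values():
        categories.update(per_designer)
    if not categories:
        return 1.0
    agreed = sum(
        1 for cat in categories
        if len({d.get(cat) for d in descriptions.values()}) == 1
    )
    return agreed / len(categories)
```

In the spirit of the text, such a measure would be computed separately for the IS- and the OUGHT-descriptions, over designers deliberately chosen for maximum disagreement.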

In the co-constructive approach the designer is supposed to encourage the main decision-makers to allow some of the clients' (and of the other social role bearers', like the designers' own) questions, opinions and systems descriptions to be stored in the database, or communicated through it. The purpose is to provide "historical" documentation and stimuli that foster insights and future negotiations for the evolution of the system. The purpose of the system, therefore, paradoxically also includes the development of "its own" purposes which, of course, will also be certain people's purposes. In fact, the system may also encourage, as long as the encouragement is supported or tolerated, clients to exert political pressure on their representatives or decision makers so that they satisfy e.g. the clients' wants, including redesigns of the support system.

In this way, while incurring the risk of evading, masking or postponing many of the political as well as ethical problems of participative systems development [25; 32], the co-constructive approach, at least in an experimental or university set-up, may work for some people like a "motorcycle" in the tradition of "Zen and the Art of Motorcycle Maintenance", stimulating interest in the relation between technology and philosophy [63]. The approach may in the future develop and manage to cover well several of the cells in the table above, mainly cells A2 and A3, and possibly also B1 to B3. It seems to be a fortunate coincidence that some of these cells apparently happen to promise some economic pay-off from the point of view of the decision-maker, the entrance ticket to most "praxis".

In any case, the hypersystem base presented in this paper may support co-constructive and other particular hypersystem implementations, at least in the self-critical sense of exposing some of their limits or challenging their self-critical and self-learning potential. As a matter of fact, some primitive embryonic-prototypal attempts to apply the hypersystem idea in a communicative — cooperative — co-constructive mode may come dangerously near what has been called the strategy of "efficiency" in the sense of cost-minimization [13, chap. 2] or, in the best case, "Leibnizian" network-systems [14]. This may be the case, for example, for systems that, without any apparent regard for e.g. economic theorizing, free competition versus power and oligopoly, etc. [3], aim at simulating a free market of information. It is the market which is supposed to increase the degree of utilization of apparently idle resources or, as in the case of the computer-supported communal brokerage system mentioned above, to help the public in a city to exchange their dwellings.

4.3. Advanced Hypersystems Applications

For the sake of brevity within the restricted space available in this paper, I will offer below, by means of a couple of examples, only a hint of the features that could characterize an advanced hypersystems application.

At a technical level it is clear that the hypersystems implementation of systems thinking may also imply the establishment of a paradoxically necessary "discipline" [37, p. 235], or the "imposition" of particular structures, routines and tests on those who want to use the liberating potential of the available computer network. This disciplinary structure is represented by the hypersystems base presented above, which in itself is a rough first specification of a more encompassing computer application. In contrast to the preliminary co-constructive prototypes that have been developed up to now, the hypersystem base as presented here puts the emphasis on relations between categories.

If, for instance, a particular computer operator (who inputs data) has entered some data about the resources of the system, the computer program may request a confirmation of the identity and role of the operator, and of the identity and goals of the decision-maker to which these resources refer (the resources are by definition controlled by a particular decision-maker in order to contribute to a goal). The program may also request data on the environment, which is the correlate of the resources. If this environment is specified on a later occasion, in the context of a specific follow-up, the program may request a confirmation that it refers to the same decision-maker. It is, in fact, theoretically meaningless to match or correlate resources and environment that refer to different, non-unified decision-makers. And still the correlation is fundamentally important and necessary, since it represents by definition the activities (including computer applications) of the (sub-)decision-makers. One thing that differentiates design or negotiation from operation is that the operational systems description has certain consensually determined resources for the attainment of particular goals.
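The consistency discipline just described might be sketched roughly as follows. The class, the role check, and the error behavior are my own assumptions for illustration; the one rule taken from the text is that resources and environment may only be correlated when they refer to the same decision-maker.

```python
# Hypothetical sketch of the check described above: resources and
# environment are stored per decision-maker, and correlating them across
# different decision-makers is rejected as theoretically meaningless.

class CategoryStore:
    def __init__(self):
        self.resources = {}    # decision_maker -> list of resources
        self.environment = {}  # decision_maker -> list of environment factors

    def enter_resource(self, operator_role, decision_maker, resource):
        # stand-in for the program's request to confirm the operator's role
        if operator_role != "operator":
            raise PermissionError("only a confirmed operator may enter data")
        self.resources.setdefault(decision_maker, []).append(resource)

    def enter_environment(self, decision_maker, factor):
        self.environment.setdefault(decision_maker, []).append(factor)

    def correlate(self, decision_maker):
        # matching resources and environment is only defined for one and
        # the same decision-maker
        if (decision_maker not in self.resources
                or decision_maker not in self.environment):
            raise ValueError(
                "resources and environment must refer to the same decision-maker")
        return self.resources[decision_maker], self.environment[decision_maker]
```

In a fuller implementation the confirmation would of course be an interactive dialogue rather than a hard permission check, but the point of the sketch is that the relational definitions themselves become executable constraints.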

In a similar vein, whenever the identity of a particular decision-maker is given to the program and its associated databases, an automatic request may be sent to all the registered clients, and to those who later enter the system in their capacity as clients, requesting their "anonymous" confirmation that they accept the legitimacy and representativeness of the particular decision-maker. (Obviously that would be politically extremely sensitive, and it illustrates one kernel of basic difficulties.) The program would later perform appropriate statistical computations on that type of data, with results that can be displayed in graphical form as a basis for decisions. Anonymity was put above within quotation marks in order to remind us that what is anonymous for computer users will not be so for those employees in the service of the decision-maker who deal with the operating system of the computer network. This highlights some of the power aspects that may inhibit the implementation of hypersystems [45, pp. 43-52; 46].
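The "anonymous" legitimacy confirmation can be sketched as follows. The function and parameter names are hypothetical; the sketch only shows that responses are tallied without recording which client gave which answer, which is what anonymity amounts to at the application level (operators of the underlying network could still observe the traffic, as noted above).

```python
# Hypothetical sketch of the anonymous legitimacy poll.
from collections import Counter


def poll_legitimacy(clients, decision_maker, confirm):
    """confirm(client, decision_maker) -> True / False / None (no answer).

    Only aggregate counts are kept, never the client-answer pairing;
    the returned proportions could feed a graphical display.
    """
    tally = Counter()
    for client in clients:
        answer = confirm(client, decision_maker)
        if answer is True:
            tally["accept"] += 1
        elif answer is False:
            tally["reject"] += 1
        else:
            tally["abstain"] += 1
    total = sum(tally.values())
    if total == 0:
        return {}
    return {key: tally[key] / total for key in ("accept", "reject", "abstain")}
```

For example, polling four clients of whom two accept, one rejects, and one abstains yields proportions 0.5, 0.25, and 0.25.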

I hope to be able to return to examples of detailed features of hypersystems implementations in another paper. In the present context I would like to conclude by stating that it is an important future area of research to investigate how technical specifications could meet "ethical specifications", to the extent that the latter can be "operationalized" at all. It is, for instance, possible to conceive of a computer program that not only works along the regular Kantian categories of systems definitions, but also counters every fundamental proposal for action in an "activity system" with the Kantian question of whether one would like to see the maxim of one's action empowered to become a universal law. Or rather: "Would you act as if the maxim of your proposed action had to be erected by your will to a universal law of nature (understood in its broadest sense of "form")?"

Referring these suggested types of hypersystem manipulation to the table for the hypersystem-base displayed above highlights the complexity of the required network of systems concepts. This is not to say that initial prototypes or embryos must necessarily exploit the whole range of requirements, even if it may prove necessary to set some minimum requirements for what meaningfully deserves to be called a hypersystem. It is rather envisaged that each particular implementation should and could spell out clearly its limitations or assumptions in terms of the suggested base. In order to make this possible it will be necessary to expand and structure the technical features above in a more orderly manner. Such a "specification" contradicts the possible impression that a hypersystem might legitimately be reduced to an undefinable embryonic organism or unstructured formless "essence" which may grow but cannot, or need not, be formulated in advance.

5.1. Ideals, Claims and Disclaimers

The remarks in the previous section concerning features of hypersystems implementations introduce the matter of problems and challenges that particular implementations of the hypersystem base — e.g. co-constructive systems — may meet. It is the recognition of these coming challenges and openness to criticism that stand at the heart of my confidence in the potentialities of such implementations. It is dangerous if matters are mainly the object of belief: "for where there is belief there is doubt, and the fiercer and naiver the belief the more devastating the doubt once it begins to dawn" [52, CW 11, §294 & 170]. And belief — for instance in pragmatist systems theory or in democratic constructiveness — is probably lurking wherever earlier naive belief in scientism has been relativized.

Some preliminary objections that were made to the co-constructive approach during its original formulation have been catalogued in the literature [30, pp. 167-173, but also 84f]. The catalog was structured in terms of: (1) Powerful vs. powerless, (2) Sincere vs. insincere, or honest vs. dishonest, and (3) Thinking vs. feeling. Such a catalog can be useful, but it is far from being a research program about the politics, ethics, and psychology of systems theory, i.e. about the most important challenges of hypersystem applications. It is a catalog that may unintentionally reduce these dimensions to a narrow conception of "ideals", ideals that would be an oversimplification of the original issues of value-measurement [10; 11, pp. 174ff; 14, pp. 189ff].

An oversimplification of the concept of ideals would bypass fundamental questions such as what obstacles reality, and the "form" of the human mind, offer to human desire [69, p. 50]. It may be true that science's characteristic ends are ideals in the sense that they are unattainable but presumably indefinitely approachable. Even disregarding the embarrassing fact that social computer constructiveness often does not measure its presumed gradually increasing approach to ideals, it is easily forgotten that the "advantage" of pursuing the ideal of science (or of scientific constructiveness) is that as a consequence we at the same time presumably pursue all our other ends more efficiently [9, pp. 189ff; 11, pp. 374f]. How do we know that this is the case? "The selection of ideals lies at the core of interactive planning", but I think that it is dubious to believe that "consensus arises in idealized design because it focuses on ultimate values" [2, pp. 105, 118]. I think that we are not really dealing with ultimate values, for instance as religions and political-ethical systems try to formulate them in the course of attempting to make them more concrete. We are, rather, dealing with more frivolous things that are akin to the once famous and controversial "social indicators" [5].

The recognition of the nature of ideals as related to ethics is not an abstract "philosophical" question that systems practitioners can barely understand. I think that, even if it is not a selling issue, it has a very concrete and practical expression. It has, in fact, been observed that disclaimers once offered or pitfalls once mentioned may receive little further attention during the analysis of systems. The analysis may be carried out with apparent disregard for them. "The same expert who modestly admits that the technique can be applied beneficially to only certain, circumscribed types of systems nonetheless plies his trade vigorously and profitably wherever there is a likelihood of contracts. At worst, the analyst who points out pitfalls is trapped by them; at best, he fails to bridge them to a professionally satisfying degree. Having done them lip service, he proceeds as though they had somehow been overcome." [40, pp. 8f].

Further reflections on co-constructive prototypes from the perspective of the hypersystems base suggest that, regardless of the degree of modesty of their claims, some important potential problems can be exemplified by the following.

5.2. The Power of Pragmatism

Particular hypersystems that work with only one or a few of the cells of the hypersystems base may display the apparently convincing proselytizing "power" which is inherent in every simplification. They also have the power halo of both pragmatism and computer technology in their alliance with the moneys of industrial capitalism. They may display this power in the sense that they seem to be "applicable" and to yield unexpected insights into widely different areas of vaguely described activities. This matter deserves an appreciation of the problems of philosophical pragmatism, especially in its close historical connections with positivism and utilitarianism. I intend to cover some of these questions in another context. For the time being I refer the reader to a recent work [37, pp. 226ff] which considers some of the problematic aspects of the pragmatism that stands at the base of the "empirical idealism" from which our social systems theory evolved.

In any case pragmatism allied to hypermedia computer technology can be predicted to have a fortunate career ahead, mainly in "rhetorical" functions in the advertising and educational field, and in developing countries (including now the liberalized Eastern Europe) which are thirsting for cheap, generally applicable, a-political, human-sized, user-friendly and "powerful" Western technology.

While I am writing these lines I happen to glance at the call for papers to a conference on systems thinking. Papers are invited for the following streams: problem structuring, systems and operations research, systems and the social sciences, information systems, choice of methodology, use of particular methodologies, project management, applications of systems thinking, etc. I realize with a certain uneasiness that a presentation of hypersystem-coconstructive prototypes could fit in almost all the streams, not to mention other conference subjects like expert systems, decision support systems, computer supported cooperative work, teleconferencing, human-computer interaction, hypermedia, educational technology, computer-aided learning, etc.

It occurs to me that what we may be witnessing is the universal appeal of the computer systems buzzwords. But we may also be witnessing the convincing power of a certain kind of pragmatic doing which overlaps with technological doing and is related to the emphasis on non-systemic efficiency or cost-minimization that was mentioned above. It may also relate to the kind of expertness which is based on superficial isomorphisms, polemically criticized for being nothing more than tired truisms about the universal applicability of formal structures: "Thus, 2+2=4 prevails whether chicks, cheese, soap, or the solar system are under consideration" [40, pp. 40, 66f, 93, 113]. As a substitute for the earlier applied mathematics and logic of operations research we now have hypermedia-like associationistic formal structures [31]. Indeed, they recall old psychological associationism much more than they recall formal sciences like mathematics and logic, an issue to which I intend to return in a future paper. In any case there is apparently no need for theoretical concern with history and tradition beyond the possible wholesale subscription to a philosophy of democratic-communicative strategy that in many situations may work as an alibi.

More than ten years ago a university lecturer colleague of mine warned me by claiming that our teaching of social systems theory at the undergraduate level could equip our students with a vocabulary and an alibi for disguising the superficiality of their work vis-à-vis unsophisticated clients. Pragmatic technological power may, in fact, obfuscate the inherent difficulties in particular areas of application that are highly dependent upon, say, cultural dimensions or deeper aspects of democracy and science. The disregard of "soft" complexities may enhance an appearance of efficiency and a feeling of intellectual power. The sudden impact of a "Faustian" soaring feeling of intellectual power on practical minds that never before had been in contact with philosophical thought and its ambiguities may in any case contribute to explaining why some pragmatist consultants may occasionally have given the impression of having an "arrogant and stubborn" attitude [16, p. 126]. It has furthermore been noted that fanaticism or violent reactions to criticism, which often appear in the collective coalitional form of "we" versus "they", are found in those who have to stifle a secret doubt [52, CW 8, §582].

From apparently "classical-trivial" areas of computer application, such as operations analysis or sheer programming for inventory control, accounts receivable, or payroll, the designer feels tempted to pass over to computer support of banking operations, industry sales or services, and further to the support of university activities such as planning and evaluation of academic education, psychotherapy, or support of social services delivered by government agencies [23; 50]. As in vulgar Marxism, where everything, from manufacturing industries over to marriage and further to the Church, may seem to be explainable by its reduction to the social classes of work and capital, so in popular pragmatist systems thinking everything may seem to be reducible to clients or stakeholders, designers or facilitators, products or services that are equated with goals, and, further, with ideals, etc. And the meaning of ideals can get diluted through emphasis on profitable stakeholders, even if profitability is understood in less narrow political terms. One problematic element in these unwarranted transitions from desires to goals to ideals and, further, to political stakeholders would be the taken-for-granted validity or fruitfulness of the (often incomplete) pragmatic concepts and systems categories, including their presumed self-evolutionary potential.

5.3. The Use of Categories

Even if we take for granted the universal validity of system categories, they may not necessarily be applied carefully, in the same sense in which they were developed and defined. Categories may get renamed or multiplied at will. The connection to original developments may appear to be pedantic and academically "philosophical", barring the way to the rhetoric of common language that is required for smooth consultancy.

In particular, the category of goals and measure of performance can get divorced from the categories of resources and environment, which would require an explicit statement of relationship to the identity of the decision-maker and his legitimacy with respect to the clients. Resources may get reduced to information resources such as files or databases which are then relationally cross-referenced for brokerage and for matching demand and supply in the spirit of traditional micro- and macroeconomics. Statements of goals and ideals may get bypassed through crude computer simulations of "markets" of demand and supply, e.g. database matchings of clients' offers and needs. In practical work the so-called ideals may get equated to whatever desires or lusts or interests the clients happen to express. Ideals come closer to the wishes or goals that clients would have if there were no practical, economic, technical, or social limitations. In such a way the whole problematic area of moral or value philosophy is bypassed [27, "valore", pp. 964f; 28, "value and valuation"]. What about ethical limitations? The goals transformed into ideals may then be claimed to constitute the bridge over to ethics, which is then understood as politics of conflicts of interests and further reduced to ad-hoc organizational negotiation about wishes or wills. In this way the possible need for confrontation between designer and decision-maker in the light of the IS-OUGHT dilemma — which usually is far from obvious at the prototype stage of systems development — will be relegated to the ensuing negotiations among clients and between clients and decision-makers.

The social learning situation will then be conceptualized as constructing the hypermedia in a way that lets people enter in different, explicitly defined but changeable roles. One may refer further to the need to reduce doubts about role influences, power, etc., and to the need to keep creative conflict going, to keep the conflict constructive but not explosive, etc. That may work as a shorthand expression of the complex idea of democracy. But what would sociologists and political scientists, even those of the less conventional or reactionary type, say about this list of wishful thinking that matches the well-meaning prestige words of the democratic utopia? Even granting the fruitfulness of the questionable concept of role, how and why does one reduce doubts about role influences, and how does one enhance the changeability of roles, not to say the redistribution of power? Is there anything more or less unconsciously "given" as a presupposition for the process, or is everything a flexible fluid "flux" to be constructed on the basis of nothing but earlier flux? Why should the answers to these questions be left until later, always postponed until after the consultant-facilitator's expensive implementations of new fashionable technologies?

5.4. Sweeping-In or Unfolding?

The matter is further complicated by the risk of ignoring the relationships between environment and resources in the face of decision-makers and goals. The learning process relies on the pragmatist "sweep-in" or, rather, "unfolding" process [75]. One basic prerequisite of such a process, dealing with so-called boundary judgements, is the consideration of environment as related to different social actors, and this is what is hindered by the neglect of relations between categories. Sweeping in ever more aspects of the problem context in an effort to be comprehensive clearly begs the problem of boundary judgements, and there is a mistaken belief that an "open systems approach", by sweeping environmental aspects into a decision-maker's considerations, must be more conducive to socially rational decision making than are small and closed systems models. "The unfolding process can take place in a "monological" (self-reflective) or in a "dialogical" (discursive) setting; in both cases the value of the process depends on the extent to which the true concerns of all the stakeholders, especially of those not involved but possibly affected by the decision in question, are considered by those involved...What we need is a heuristic tool for tracing the inevitable lack of comprehensiveness in our maps of, and designs for, social reality" (ibid., pp. 419, 421f).

I agree with this main idea in spite of its diverging from the hypersystem base presented above, specifically with respect to the rather unproblematic assumption of the IS-map of the "true" concerns of the stakeholders, and its flexibility. I think, however, that the hypersystem base takes good care of the need for a "heuristic tool" by embodying one such tool in the form of the concept of "measurable error", i.e. consensus as a function of meaningful conflicts of opinion [41, chaps. 4-5; 45, pp. 46ff]. This is akin to certain features of recent research on "minority influence" [18, p. 406]. It is not a question of having "as many as possible" dialogically active clients or whatever, and of making statistics or communication or negotiation out of their opinions. It is rather a question of who, which clients, are going to deal with what, and how they are going to do it, on the basis of what undiscussed presuppositions, past experiences and historical ideals of ethics. This is probably a sensitive issue to the extent that it covers the problem of tradition, including political and religious stability, versus change as envisaged in the concept of constructive learning systems. The question is what ought to be considered, or to be constructed, as environmental changes or news. Many ill-constructed changes or news unfortunately are supposed to require that we counteract them by corresponding counter-change activities. These counter-changes, however, dissipate our attention and our energies, and the cost is that more important "old news" remains unattended. This is certainly an important question for many of us who are supposed to keep on learning about getting flexible with respect to constructed technological news, constructed environmental changes, and constructed negotiations while other difficult "old" problems do not get the attention they deserve [48]. And now we have got the news of hypermedia.

5.5. Evaluations: Truth or Usefulness?

The vagueness of ideals and of their associated measurement scales, which do not integrate conflicts of interest and are themselves continuously submitted to a self-evolutionary process, may turn any systematic evaluation into a purely practical political task.

Consider the following example of the latest research funding policies of a Swedish agency. The agency may contribute half of the grant which is needed by the project if the project leader finds an industrial company or organization that is willing to finance the other half. Considering that granting chief executives, officials, or employees will probably be unwilling to acknowledge their own possible failures, the evaluative process will to a high degree turn into a pure matter of practical politics of personal relations, a dubious extreme opposite to dubious traditional positivistic evaluations, and a phenomenon that was noted long ago in the critical systems literature [40, pp. 6, 66, 108ff, 121f, 243ff; 62, pp. 90f].

In this way the moratorium on objectivity, reality and truth (but, symptomatically enough, not on usefulness) proposed by some representatives of second-order cybernetics [70, referring to some of them] may become a problematic double-edged guarantee of the job security of university-based consultants. They are in fact already institutionally protected by the state government from economic bankruptcy, which on the free market would be seen as the result of a test of the usefulness of their consultancy. In the future there is the risk that they won't get evaluated in terms of any presumably obsolete concept of truth either. Welcome as it may be by different parties for lowering the visible costs of all involved institutions, it may also represent a concrete example of "the higher capitalism" [58] of modern universities and educational learning systems, a dubious alternative to conventional capitalism and to the so-called ivory-tower academicism of yesterday. The point I want to make is that the concept of truth is extremely important, and it has been a sore point in the apprehensions about possible misuses of pragmatism: "The drawing of a distinction between truth and falsehood belongs to the very essence of thinking" [19, p. 120]. We should therefore prevent truth — or the concept of measurable error — from being banished and replaced by value that is reduced to consensual usefulness, at the interface between pragmatism and negotiated democratic utilitarianism.

5.6. Sincerity: Morality or Moralism, Privacy or Openness?

Although the concept of truth is considered outdated in some contexts of pragmatist consulting science, social actors — particularly clients operating the input of data into the system — may get admonished that they OUGHT to be open, cooperative or "sincere" in their data entry. Since sincerity will probably be understood as fidelity in reproducing or depicting the social actors' "true" opinions, observations, feelings, etc., we run into the paradox of finding that constructivism at the socio-psychological and political level of data entry may be misunderstood and envisaged in oversimplified positivistic and moralistic terms. Future constructive applications must be able to work not only in routine commonplace matters, since "in commonplace matters all moral schools agree" but "it is only in the lonely emergencies of life that our creed is tested" [51, p. 105].

If the determination of responsibilities is going to get a chance, at the practical level of computer applications the hypersystem base requires that the process of data entry of categorial data be linked to an identification of the subject-actor, not only in terms of attribution to the categorial groups or classes of social actors (designer, decision-maker, client, etc.) but also in terms of personal identification or individuation. It is obvious that outside the artificial world of prototypes it is of primary importance that the systems client does not have the same manipulation privileges for changing the computer support software as, say, the designer, who is by definition empowered by the main decision-maker. As an extreme example, the set of resources, or the "computer application", of an embezzler of payroll or, more generally, of a dissident designer will certainly be different from the computer application of the legitimate designer, and it might certainly include a part of the operating system of the computer.
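The point about differentiated privileges tied to personal identification can be sketched minimally. The role names and privilege sets below are hypothetical illustrations, not a specification drawn from the paper:

```python
# Hypothetical sketch: categorial roles carry different manipulation
# privileges, but every action is also tied to a personal identification
# so that responsibilities can be determined afterwards.
PRIVILEGES = {
    "designer":       {"enter_data", "change_software"},
    "decision_maker": {"enter_data", "delegate"},
    "client":         {"enter_data"},
}


def authorize(actor_id: str, role: str, action: str) -> bool:
    # Personal identification (individuation) is required in addition
    # to the categorial role attribution.
    if not actor_id:
        raise ValueError("personal identification required")
    return action in PRIVILEGES.get(role, set())
```

Here a client can enter data but cannot change the computer support software, a privilege reserved by definition to the designer empowered by the decision-maker.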

This introduces us, of course, to the daunting paradoxes of "openness", including sincerity, and to political-ethical matters of privacy and personal integrity [45, pp. 52, 81]. They impose paradoxical limitations on the inherent potential of hypersystems co-constructiveness. There are already studies available which indicate that institutions and their managers may not welcome truths and systems in the context of sensitive or politically loaded negotiations [13, pp. 92f; 21; 32; 33; 43; 44; 53; 66]. This seems to be the most serious menace to the implementation of constructiveness. The ultimate question is whether our education and research should be dedicated to matters in which they compete with commercial consultants, and whether such activities can evolve into a later research phase which faces (the clients', the designers', the decision-makers') lonely emergencies of life.

5.7. The Economics and Politics of Data Entry

This last issue of data entry introduces us also to the economics of the technology of data entry [14, pp. 79ff]. At the embryonic prototype stage of a system it may be easy to motivate, or at least pay, people or students to make them enter, in a playful mode, some data, answers, opinions, and so on which are required for illustrating the principles of operation of the system, suspending judgements about the ultimate value and quality of the entered data. At this stage it may also easily be hoped that in a multimedia learning environment the disadvantaged client will be able to get in touch with a serving employee of a commercial firm or a government official by simply clicking a hypercard button or by sending an electronic mail message.

During full-scale "live" (co)operation or (co)construction, however, it is a tough issue to determine and enforce who will motivate or pay for the data entry operations (not to speak of the data use operations) beyond those which are politically considered to be an absolute operational necessity [61]. Stakeholders operating the computer may indeed be willing to spend their own time "for fun" or "for hope", entering data which may be used by themselves and by future unknown decision-makers in unknown contexts. There is still, however, the question of who will be willing to pay, if not for the operators' time, at least for the equipment that is used up and for the non-glamorous updating or quality control of the pertinent databases. If data entry is motivated by the expectation of an immediate profit or advantage of one's own, then it is no longer a question of ethical cooperation and solidarity.

"It is a common habit of mind to accept reality as something fixed, out there, that we can question in various ways by means of our senses aided by instruments" [12, p. 160]. What now happens is that we risk considering the people out there as the reality to be interacted with, questioned, or polled by means of computer instruments, forgetting that among other things "some knowledge of the emotional life of every observer must be understood to make sure that the observer's world is separable from this other world" (ibid., p. 189). Defective knowledge about this emotional life, and therefore also about political-ethical life, may easily jeopardize the whole process of data entry, which stands at the heart of constructiveness with its postulated constructive observer-facilitator.

What I attempted above is to intensify and sharpen in a "Singerian-heroic" mode the dialectics between realism and idealism in the design of educational computer support. On the one hand, "The idealist is a restless fellow who sees evil in complacency; he regards the realist as a hypocrite at times because his realism is unrealistic. The realist, on the other hand, accuses the idealist of being impractical, because his insistence on destroying the value of the present way of life precludes positive action" [12, pp. 171-197; 14, pp. 199, 249-257]. In the context of sensed destructivity it may be noted that people who happen mainly to believe — in religion, science, democracy, constructiveness, communication (in the etymological sense of the word), networking, or whatever — continually expose themselves to their worst enemy: doubt. "Wherever belief reigns, doubt lurks in the background" [52, CW 11, §170].

So much can be said at the present time regarding the menace of destructiveness when focusing on the relationship between conventional methods for systems development (whatever such conventionalism may mean in the context of, say, databases and expert systems), hypersystems, and co-constructive prototypes. The issue at stake may very well turn out to be a catastrophic computer-supported relativization of truth in terms of communicated and negotiated "attitudes" that are akin to the concept of perspectives, a rejection of what has also been called "absolute presuppositions", and an "eradication of metaphysics from the European mind" under the dubious flag of the Kantian war against "dogma" [19, pp. 33, 46-48, 120, 159, 249]. "If the ancients had not done a bit of thinking we would not possess any dogma about the Trinity at all. The fact that a dogma is on the one hand believed and on the other hand is an object of thought is proof of its vitality. Therefore let the believer rejoice that others, too, seek to climb the mountain on whose peak he sits" [52, ibid.].

Alternative systems approaches based on alternative categories were already suggested above in the context of alternatives to the hypersystems categories. I will finally attempt to round off this discussion by acknowledging a couple of alternative approaches to educational technology which do not mention categorial thinking explicitly. Constructiveness, constructivity, constructionism, and perhaps some other similar words have lately become prestige words, not least in the context of computer-aided education, computer-supported cooperative work, etc. This is happening as conditioned by the availability of computer networks and hypermedia technology, twenty years after the skillful revival of "archetypal" cooperative co-constructiveness in the context of Hegelian and Singerian inquiring systems [14] and their practical implementations [2, pp. 116ff]. Recent system approaches to educational technology carry on this tradition in a more or less theoretically "faithful" way [4].

Today we may contemplate old wine in new (better?) bottles, as the term constructivism, for instance, is used to indicate that knowledge is built by the learner, instead of being supplied by the teacher, while constructionism is seen as expressing further the idea that this is enhanced when the learner is engaged in the construction of something external or at least shareable: "Better learning will not come from finding better ways for the teacher to instruct but from giving the learner better opportunities to construct" [1, p. 4]. In this context concepts are adduced such as rich learning environments, information-rich environments, rich environments that encourage a plurality of learning styles, rich contexts, dynamics of human transactions, etc. "By giving children opportunity to switch roles from building artificial devices (being engineers) to observing their behaviors as outsiders (being psychologists) we enable ourselves to access their thinking" (ibid., p. 5).

Personally I am skeptical of some versions of the constructivistic or constructionistic "turn" in our theorizing. From what I have learnt up to now I am prone to agree with the critics of Maturana — a main exponent of cybernetic constructivism — in the observation that his kind of message seemingly fascinates the audience by means of the metaphorical-rhetorical form of a system of thought that appears so comprehensive and coherent.

"We have participated in two of Maturana's seminars. They were great performances. To use his own words, they were 'acts of seduction'. To us, as for others in the audience, they were a 'kick', yet intellectually and personally frustrating afterwards.... European post-modernism, which has developed into the anti-message of the intellectual avant-garde of the 1980's, also hails the staging of the self and seduction as a rhetorical strategy. There are no universal criteria of truth. While the modernists declared God to be dead, the post-modernists declare Reason even more dead. As anyone can see, there are many points of similarity between American cybernetic constructivism and European post-modernism. They are not identical, however. American cybernetic constructivism is found largely in academia, whereas European post-modernism is much more of a general cultural phenomenon. To put it plainly, constructivism in its cultural practical guise" [64].

I myself would in that case have preferred an honest outright commitment to F. Nietzsche's philosophy or, earlier, to the playful organic and constructive skepticism of the famous encyclopedist — "data-base designer" — D. Diderot.

Another version of constructionist learning [60] seems to have some points in common with the co-constructive approach presented above, with the important difference that it builds upon an explicit historical psychological base, that of Piagetian developmental psychology. In this particular respect of taking psychological theorizing seriously it resembles those attempts that are being made to develop computer interactivity on the basis of so-called activity theory or German action control theory [23; 24; 35; 54; 57; 77]. In the USA the latter seems to stand close to early American currents of educational "dialectic psychology" [65]. There is also a criticism that takes issue with Piagetian educational psychology without sharing the position of dialectic psychology [29; 72].

In any case I think that all these approaches, with their possible merits, open some opportunities for deepening the understanding of educational constructiveness or design of self-learning systems. Personally I have up to now been suspicious of all variants of "self-" educational philosophies, self-reflexivity, self-reference, self-consciousness, and whatnot, the more so if they do not relate historically to at least some kind of naturalistic-religious metaphysics in the spirit of e.g. Campanella's attempts to grasp self-reference (Tommaso Campanella, Italian philosopher, 1568-1639). From an overview of literature that has been brought to my attention recently [67; 78] I assume, however, that our co-constructive prototypes may profit from a better knowledge of the constructive-constructionist tradition. This is the more so to the extent that they deviate from the subtleties of the original theories that underlie the hypersystems base. Some complementary suggestions for an evaluation and development of hypersystems and co-constructive ideas have been presented in other papers [31; 49].

I have introduced some general features of the idea of hypersystem, seen as a typical computer-technological implementation of the concept of a social system. A hypersystem-base was presented in terms of a matrix table that also suggests some necessary future developments of promising preliminary empirical attempts in terms of what has been called co-constructive systems. Computer support may be used in order to keep track of the relations between various descriptive and normative (IS-OUGHT) system categories that relate to the views of particular social actors, both groups and individuals. It is suggested that computer support may be expanded in order to obtain a qualitative follow-up or evaluation, in matrix terms, of the system's evolution. Some problems that arise in the context of the first implementations of co-constructive prototypes point to the need to follow more closely the original theoretical categories, or to reform them, in order to exploit better their potentialities and to avoid their possible misuse in practical situations.
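As a purely illustrative sketch (all identifiers below, including the sample category names, are my own assumptions and not features of the actual prototypes), such a matrix base, recording each actor's descriptive (IS) and normative (OUGHT) views per system category and reporting where the two diverge, might be represented in a modern programming notation as follows:

```python
# Illustrative sketch only: a minimal "hypersystem-base" keeping track of
# IS-OUGHT entries per (actor, category) cell of the matrix. All names
# here are hypothetical, not taken from the co-constructive prototypes.
from collections import defaultdict

class HypersystemBase:
    def __init__(self):
        # (actor, category) -> {"IS": [statements], "OUGHT": [statements]}
        self.matrix = defaultdict(lambda: {"IS": [], "OUGHT": []})

    def record(self, actor, category, mode, statement):
        """Record one descriptive (IS) or normative (OUGHT) statement."""
        assert mode in ("IS", "OUGHT")
        self.matrix[(actor, category)][mode].append(statement)

    def divergences(self):
        """Cells where an actor's IS and OUGHT views differ: candidates
        for qualitative follow-up of the system's evolution."""
        out = []
        for (actor, category), cell in self.matrix.items():
            if set(cell["IS"]) != set(cell["OUGHT"]):
                out.append((actor, category, cell["IS"], cell["OUGHT"]))
        return out

# Example usage with invented entries:
base = HypersystemBase()
base.record("teacher", "purpose", "IS", "transmit curriculum")
base.record("teacher", "purpose", "OUGHT", "support self-learning")
for actor, category, is_view, ought_view in base.divergences():
    print(actor, category, is_view, ought_view)
```

The divergences listed by such a structure would be the natural candidates for the qualitative matrix-based follow-up of the system's evolution mentioned above.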

With this paper I wish to test the general orientation of my research by submitting it to a broader variety of readers. At the same time the paper is an appeal for assistance in developing the various implications of social and humanistic thinking in computer science according to some kind of distribution of work and allocation of research resources, not least among younger researchers and graduate students.

If the task seems too complicated to be carried out on the theoretical base proposed here, we may be reaching some of the limits of so-called rationality, and we may start to explore them more directly.

NOTE: The abundance of references in this paper is intended primarily as a "tutorial" for readers who wish to develop the issue and overcome the limits of simple empiricism or doctrinaire isolationism by means of "conversation" with a broader range of authors and works.

[1] Ackermann, E. The epistemology & learning group: Children and cybernetics. The Newsletter. American Society for Cybernetics. (April): 4-6, 1990.

[2] Ackoff, R. L. "Creating the corporate future." 1981 Wiley. New York.

[3] Andrews, P. W. S. "On competition in economic theory." 1966 Macmillan and St Martin's Press. London and New York.

[4] Banathy, B. H. "Systems design of education: A journey to create the future." 1990 Educational Technology Publications. Englewood Cliffs.

[5] Bauer, R. A., (Ed.). "Social indicators." 1966 The MIT Press. Cambridge.

[6] Begeman, M. L. and J. Conklin. The right tool for the job: Even the systems design process falls within the realm of hypertext. Byte. (October): 255-266, 1988.

[7] CBT. "Computer and video in corporate training: Corporate knowledge — corporate training. Fourth International Conference, Lugano, Switzerland, November 19-21, 1990." 1990 Istituto Dalle Molle, George Mason University, & Association for the Development of Computer-Based Instructional Systems ADCIS. Lugano.

[8] Checkland, P. B. "Systems thinking, systems practice." 1981 Wiley. New York.

[9] Churchman, C. W. "Theory of experimental inference." 1948 Macmillan. New York.

[10] Churchman, C. W. "Why measure?" Measurement: Definitions and theories. Churchman and Ratoosh ed. 1959 Wiley. New York.

[11] Churchman, C. W. "Prediction and optimal decision: Philosophical issues of a science of values." 1961 Prentice-Hall. Englewood Cliffs.

[12] Churchman, C. W. "Challenge to reason." 1968 McGraw-Hill. New York.

[13] Churchman, C. W. "The systems approach." 1968 Delta. New York. (Page references are to the 2nd ed., 1979)

[14] Churchman, C. W. "The design of inquiring systems: Basic principles of systems and organization." 1971 Basic Books. New York.

[15] Churchman, C. W. "The systems approach and its enemies." 1979 Basic Books. New York.

[16] Churchman, C. W. Ackoff comes of age. Systems Practice. 3(2, April): 125-130, 1990.

[17] Churchman, C. W. and A. H. Schainblatt. The researcher and the manager: a dialectics of implementation. Management Science. 11(4, Feb.): 1965.

[18] Clapper, D. L. and E. R. McLean. "Information technology in support of group activity: A theoretical framework and implications for research." Proc. of ISRA-90 The information systems research arena of the 90's: Challenges, perceptions, and alternative approaches. IFIP TC-8, WG-8:2 working conference, Copenhagen, Dec. 14-16. 1990. Nissen, Klein and Hirschheim ed. 1990 Univ. of Lund, Inst. of Information Processing. Lund.

[19] Collingwood, R. G. "An essay in metaphysics." 1940 Clarendon Press. Oxford.

[20] Conklin, J. Hypertext: An introduction and survey. Computer (IEEE). 20(9): 17-41, 1987. (Reprinted in I. Greif, Ed. Computer supported cooperative work: A book of readings. San Mateo, CA: Morgan Kaufmann, 1988, pp. 423-476)

[21] Dahlbom, B. "The idea that reality is socially constructed." Software development and reality construction. Budde, Floyd et al. ed. 1990, in print Springer. Berlin. (Page refs. to version of 12 Jan. 1990, University of Gothenburg, Dept. of Philosophy.)

[22] Davis, P. J. and R. Hersh. "Descartes' dream: The world according to mathematics." 1986 Harcourt Brace Jovanovich, and Penguin Books. New York and London.

[23] Docherty, P. and K. Ivanov. "Computer support of decisions in a social-political environment: A case study." Proc. of the IFIP TC 8 Conference on Environments for Supporting Decision Processes, Budapest, Hungary, 18-21 June, 1990. Sol and Vecsenyi ed. 1990 Elsevier Science. Amsterdam. (Page refs to rev. edition as report UMADP-WPIPCS 26.90, Univ. of Umeå, Inst. of Information Processing.)

[24] Dunckel, H. "Contrastive task analysis." Recent developments in job analysis. Landau and Rohmert ed. 1989 Taylor and Francis. London.

[25] Ehn, P. "Work-oriented design of computer artifacts. (Doctoral diss.)." 1988 University of Umeå, Arbetslivscentrum and Almqvist & Wiksell International. Umeå-Stockholm.

[26] Ehrenberg, D., P. Eklund, M. Fedrizzi and A. G. S. Ventre. "Consensus in distributed soft environments" (Research report.) Technical Univ. of Leipzig, Åbo Akademi, Univ. of Trento and Univ. of Napoli. 1990.

[27] Enciclopedia di Filosofia. 1981 Garzanti. Milano.

[28] Encyclopaedia of philosophy. 1967 Macmillan. New York.

[29] Feldman, C. F. and S. Toulmin. "Logic and the theory of mind." Nebraska Symposium on Motivation (Vol. 23). 1976 University of Nebraska Press. Lincoln and London.

[30] Forsgren, O. "Samskapande datortillämpningar [Constructive computer applications]" (Doctoral diss., Report UMADP-RRIPCS-3.88.) University of Umeå, Inst. of Information Processing. 1988. (In Swedish. Summary in English.)

[31] Forsgren, O. and K. Ivanov. "From hypertext to hypersystem." Cybernetics and Systems '90. Proc. of the Tenth European Meeting on Cybernetics and Systems Research, Vienna, April 17-20, 1990. Trappl ed. 1990 World Scientific. Singapore. (Also as report UMADP-RRIPCS 9.90, University of Umeå, Inst. of Information Processing.)

[32] Gibson, D. V. and E. J. Ludl. "Group decision support systems and organizational context." Organizational decision support systems. Lee, Cosh and Migliarese ed. 1988 North-Holland. Amsterdam.

[33] Gross, B. M. The new systems budgeting. Public Administration Review. (March-April): 113-137, 1969.

[34] Grönlund, Å. "Cooperative work: Not just an internal matter. Some notes on a coconstructive view of computer supported cooperative work." Proc. of the 13th IRIS Conference - Information Systems Research in Scandinavia, 14-17 August 1989, Turku, Finland. 1990

[35] Hacker, W. "Arbeitspsychologie: Psychische Regulation von Arbeitstätigkeiten." 1986 Deutscher Verlag der Wissenschaften. Berlin.

[36] Hahn, U. and M. Jarke. "A multi-agent reasoning model for negotiation support." Organizational decision support systems. Lee, Cosh and Migliarese ed. 1988 North-Holland. Amsterdam.

[37] Heim, M. "Electric language: A philosophical study of word processing." 1987 Yale University Press. New Haven and London.

[38] Hellman, R. "Approaches to user-centered information systems." 1989 University of Turku, Dept. of Computer Science. Turku, Finland. (Doctoral diss., report A55)

[39] Hirschheim, R. and H. K. Klein. Four paradigms of information systems development. CACM. 32(10): 1199-1216, 1989.

[40] Hoos, I. R. "Systems analysis in public policy: A critique." 1983 University of California Press. Berkeley. (Page references to first 1972 ed.)

[41] Ivanov, K. "Quality-control of information: On the concept of accuracy of information in data banks and in management information systems" (Doctoral diss.) The University of Stockholm and The Royal Institute of Technology. 1972. (NTIS No. PB-219297.)

[42] Ivanov, K. "Projekt, system och effektivitet: Några kalkylproblem och förslag till bättre realisering" Stockholm: Statskontoret — The Swedish Agency for Administrative Development. 1975.

[43] Ivanov, K. "Från statistisk kontroll till kontroll över statistiken: Systemisk redovisning av fel i undersökningar inklusive avvägningar mellan kvalitet och medborgerlig integritet" (Research report No.1976:9, ISSN 0347-2108.) University of Stockholm, Dept of Statistics. 1976.

[44] Ivanov, K. "Statistik för datorer: Centraliseringen av svensk statistik, konsekvenser av en organisatorisk anpassning till datoriserad statistikproduktion" (Research report No.1976:7, ISSN 0347-2108.) University of Stockholm, Dept. of Statistics. 1976.

[45] Ivanov, K. "Systemutveckling och rättssäkerhet : Om statsförvaltningens datorisering och de långsiktiga konsekvenserna för enskilda och företag [Systems development and rule of law]." 1986 SAF:s Förlag. Stockholm.

[46] Ivanov, K. "Public records and trade-offs." Legal Informatics. Tuominen ed. 1987 The Inst. for Nordic Law at the University of Lapland. Rovaniemi. (Abstract.)

[47] Ivanov, K. "Rule of law in information systems research: The role of ethics in knowledge-building procedures, especially in the updating of inference networks." Proc. of the Tenth Information Systems Research Seminar in Scandinavia, Tampere-Vaskivesi, Aug.10-12 1987. Järvinen ed. 1987 University of Tampere. Tampere.

[48] Ivanov, K. Expert-support systems: The new technology and the old knowledge. Systems Research. 5(2): 293-100, 1988.

[49] Ivanov, K. "Critical systems thinking and information technology: Some summary reflections, doubts and hopes through critical thinking critically considered, and through hypersystems." Proc. of the ISSS Int. Society for the Systems Sciences 34th Annual Conference, Portland, Oregon, 8-13 July 1990. Banathy and Banathy ed. 1990 Report UMADP-RRIPCS 11.90, Univ. of Umeå, Inst. of Information Processing. ISSN 0282-0579.

[50] Ivanov, K. "Learning to design learning systems: The metaphor of future generations, and computer technology." Proc. of the ISSS Int. Society for the Systems Sciences 34th Annual Conference, Portland, Oregon, 8-13 July 1990. Banathy and Banathy ed. 1990 Report UMADP-RRIPCS 10.90, Univ. of Umeå, Inst. of Information Processing. ISSN 0282-0579.

[51] James, W. "The will to believe: And other essays in popular philosophy, and Human Immortality." 1956 Dover Publications. New York.

[52] Jung, C. G. "Collected Works - CW (20 volumes)." 1953-1979 Princeton University Press. Princeton. (R.F.C. Hull et al., Trans.)

[53] Kremenyuk, V. A. International negotiation: Analysis, approaches, issues. 1991.

[54] Kuutti, K. "Activity theory and its implications to information systems research and development." Proc. of ISRA-90 The information systems research arena of the 90's: Challenges, perceptions, and alternative approaches. IFIP TC-8, WG-8:2 working conference, Copenhagen, Dec. 14-16. 1990. Nissen, Klein and Hirschheim ed. 1990 Univ. of Lund, Inst. of Information Processing. Lund.

[55] Lundquist, T. and M. M. Huston. Information rich environments for continuous organic development—CODE. J. of Applied Systems Analysis. 17: 79-87, 1990.

[56] Lundquist, T. E. "On continuous organic development of information systems." Proc. of the ISSS Int. Society for the Systems Sciences 34th Annual Conference, Portland, Oregon, 8-13 July 1990. Banathy and Banathy ed. 1990

[57] Nilsson, K. "Designing for creativity: Toward a theoretical basis for the design of interactive information systems." Proc. of the 12th IRIS Conference - Information Systems Research in Scandinavia, 13-16 August 1989, Skagen, Denmark. 1989 Aalborg University, Inst. of Electronic Systems. Aalborg. (Also as report UMADP-RRIPCS-8.89, University of Umeå, Inst. of Information Processing.)

[58] Nisbet, R. "The degradation of the academic dogma. The university in America, 1945-1970." 1971 Heineman. London.

[59] Nissen, H. E. "Information systems development for responsible human action." Systems development for human progress. Klein and Kumar ed. 1989 Elsevier North-Holland. Amsterdam.

[60] Papert, S. Constructionist learning. 1990.

[61] Persson, S. "Apropå myndigheternas uppgiftskrav." 1976 SAF:s Förlag. Stockholm.

[62] Persson, S. "Så tuktas en dator: En diskussionshandbok för icke-tekniker i ledarställning." 1987 Prisma. Stockholm. (Edited by O. Forsgren & K. Ivanov)

[63] Pirsig, R. "Zen and the art of motorcycle maintenance." 1974 Bantam Books. New York.

[64] Ravn, I. and T. Söderqvist. "Having it both ways: Some critical comments on Humberto Maturana's contribution to the constructive movement in cybernetics" University of Gothenburg, Dept. of Theory of Science. 1988. (Manuscript submitted to Cybernetics. Available from author, Blågårdsgade 28, DK-2200, Copenhagen N, Denmark.)

[65] Riegel, K. F. "Foundations of dialectical psychology." 1979 Academic Press. New York.

[66] Schick, A. Systems politics and systems budgeting. Public Administration Review. (March-April): 137-151, 1969.

[67] Schmidt, S. J. Der Diskurs des Radikalen Konstruktivismus. 1987.

[68] Schwartz, J. T. "Computer-aided instruction." Discrete thoughts: Essays on mathematics, science, and philosophy. Kac, Rota and Schwartz ed. 1986 Birkhäuser. Boston. (Originally published 1983.)

[69] Singer, E. A., Jr. "Experience and reflection." 1959 University of Pennsylvania Press. Philadelphia. (C.W.Churchman, Ed.)

[70] Söderqvist, T. "Constructivism, the power of intellectuals and the art of biography" University of Gothenburg, Dept. of Theory of Science. 1988. (Manuscript. Discussion paper presented at the Gordon Research Conference in Cybernetics, Casa Sirena Marina Hotel, Oxnard, California, January 18-22 1988. Available from author, Blågårdsgade 28, DK-2200, Copenhagen N, Denmark.)

[71] Toulmin, S. "The uses of argument." 1958 Cambridge University Press. Cambridge.

[72] Toulmin, S. "The concept of "stages" in psychological development." Cognitive development and epistemology. Mischel ed. 1971 Academic Press. New York.

[73] Toulmin, S., R. Rieke and R. Janik. "An introduction to reasoning." 1979 Collier-Macmillan. London.

[74] Ulrich, W. Critical heuristics of social systems design. European J. of Operational Research. 31: 276-283, 1987.

[75] Ulrich, W. Churchman's 'process of unfolding': Its significance for policy analysis and evaluation. Systems Practice. 1(4, December): 415-428, 1988.

[76] Veljkov, M. D. Managing multimedia. Byte. (August): 227-232, 1990.

[77] Volpert, W. "What working and learning conditions are conducive to human development?" (Research report presented at the Swedish-German workshop on the humanization of working life, Stockholm, December 1988.) Institut für Humanwissenschaft, TU Berlin, Ernst-Reuter Platz 7, D-1000 Berlin 10. 1988.

[78] Whitaker, R. "A matter of perspective: Autopoietic theory and strategic issues in IT design" (Working paper, draft.) Stockholm: The Swedish Agency for Administrative Development — Statskontoret. 1990.