Tuesday, 11 November 2008

COMPLEX SYSTEM

http://el.www.media.mit.edu/groups/el/projects/emergence/

A complex system is a system for which it is difficult, if not impossible, to restrict its description to a limited number of parameters or characterizing variables without losing its essential global functional properties.
This definition is proposed with reference to our experience with the study of socio-technical cooperative systems.
Notes
A more precise definition of a complex system: formally, a system starts to have complex behaviors (non-predictability, emergence, etc.) the moment it consists of parts interacting in a non-linear fashion. It is thus appropriate to differentiate between a complicated system (such as a plane or a computer) and a complex system (such as an ecological or economic system). The former is composed of many functionally distinct parts but is nevertheless predictable, whereas the latter interacts non-linearly with its environment, and its components have properties of self-organization which make it non-predictable beyond a certain temporal window.
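To make the role of non-linearity concrete, the following minimal sketch (a Python illustration added here, not part of the original notes) compares a linear update rule with the non-linear logistic map: two trajectories that start almost identically stay together under the linear rule but diverge completely under the non-linear one, which is exactly the loss of predictability beyond a certain temporal window.

```python
# Illustrative sketch (not from the original text): a single non-linear update rule
# is enough to make long-term behaviour unpredictable in practice.

def logistic_step(x, r=4.0):
    """Non-linear update: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def linear_step(x, a=0.99):
    """Linear update, for comparison: x -> a * x."""
    return a * x

def divergence(step, x0, eps=1e-9, steps=60):
    """Track how far two almost-identical initial states drift apart."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = step(a), step(b)
        gaps.append(abs(a - b))
    return gaps

if __name__ == "__main__":
    nonlinear = divergence(logistic_step, 0.3)
    linear = divergence(linear_step, 0.3)
    # By roughly 40 iterations the non-linear trajectories differ by order one,
    # while the linear trajectories stay within the size of the initial perturbation.
    for t in (10, 20, 40):
        print(f"t={t:2d}  non-linear gap={nonlinear[t]:.3e}  linear gap={linear[t]:.3e}")
```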

A truly complex system would be completely irreducible. This means that it would be impossible to derive a model from this system (i.e. a representation simpler than reality) without losing all its relevant properties. However, in reality different levels of complexity obviously exist. If we are interested in situations which are highly structured and governed by stable laws, then it is possible, without losing too many of the system’s properties, to represent and model the system by simplification. Thus, the essential question is to know to what extent the properties of the socio-technical systems that we analyze and design fall into one or the other of these situations. In other words, to what extent can we make an abstraction of microscopic interactions in order to understand macroscopic behaviors? To what extent are microscopic interactions linked in a non-reducible way with the laws that govern more structured behaviors? Finally, is it possible to explain the most structured behavior using the rules which control the microscopic behavior (the principle of emergence)? This last question is important from an epistemological and methodological point of view: in theoretical economics, for example, it can be preferable to generate the structural properties of a system from knowledge of its microscopic properties (emergence), rather than to postulate its macroscopic properties and only validate them with an analytical process.
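The last point, generating a macroscopic structure from microscopic rules, can also be illustrated with a standard toy model. The sketch below (an assumed Schelling-style segregation model, chosen here purely as an illustration, with invented parameter values) has agents following a mild local preference about their immediate neighbours; after repeated local moves, a clearly segregated macroscopic pattern emerges that was never specified at the macroscopic level.

```python
# Illustrative sketch (not from the source): emergence of a macroscopic pattern
# (segregated clusters) from a mild microscopic rule, Schelling-style.
import random

SIZE, EMPTY_RATIO, THRESHOLD, STEPS = 20, 0.2, 0.5, 50

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        if r < EMPTY_RATIO:
            cells.append(None)                          # empty cell
        else:
            cells.append("A" if r < (1 + EMPTY_RATIO) / 2 else "B")
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, i, j):
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                out.append(grid[(i + di) % SIZE][(j + dj) % SIZE])
    return out

def unhappy(grid, i, j):
    # Microscopic rule: an agent wants at least THRESHOLD of its occupied
    # neighbours to be of its own type; otherwise it will move.
    me = grid[i][j]
    if me is None:
        return False
    occupied = [n for n in neighbors(grid, i, j) if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

def mean_similarity(grid):
    vals = []
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] is None:
                continue
            occ = [n for n in neighbors(grid, i, j) if n is not None]
            if occ:
                vals.append(sum(n == grid[i][j] for n in occ) / len(occ))
    return sum(vals) / len(vals)

def step(grid):
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE) if unhappy(grid, i, j)]
    random.shuffle(movers)
    for i, j in movers:
        if not empties:
            break
        ni, nj = empties.pop(random.randrange(len(empties)))
        grid[ni][nj], grid[i][j] = grid[i][j], None     # move to a random empty cell
        empties.append((i, j))

if __name__ == "__main__":
    random.seed(0)
    grid = make_grid()
    print(f"mean like-neighbour fraction, initial:        {mean_similarity(grid):.2f}")
    for _ in range(STEPS):
        step(grid)
    print(f"mean like-neighbour fraction, after {STEPS} rounds: {mean_similarity(grid):.2f}")
```

The final like-neighbour fraction typically ends up far above the modest individual preference, a structural property of the whole that is only generated, never postulated, by the microscopic rule.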

The reduction of complexity is an essential stage in the traditional scientific and experimental methodology (also known as the analytic approach). By reducing the number of variables to those deemed most relevant, this approach allows systems to be studied in a controlled way, i.e. with the necessary replication of results. This approach in itself need not be questioned. However, when considering complex socio-technical systems it is appropriate to analyze precisely the limits of this approach.

Some Properties of Complex Systems:

Non-determinism and non-tractability. A complex system is fundamentally non-deterministic and non-tractable.
Limited functional decomposability.
Distributed nature of information and representation.
Emergence and Self-organization.

Property 1: non-determinism and non-tractability. A complex system is fundamentally non-deterministic. It is impossible to anticipate precisely the behavior of such a system even if we completely know the function of its constituents.

Property 2: limited functional decomposability. A complex system has a dynamic structure. It is therefore difficult, if not impossible, to study its properties by decomposing it into functionally stable parts. Its permanent interaction with its environment and its properties of self-organization allow it to functionally restructure itself.

Property 3: distributed nature of information and representation. A complex system possesses properties comparable to distributed systems (in the connectionist sense), i.e. some of its functions cannot be precisely localized. In addition, the relationships that exist between the elements of a complex system are short-range, non-linear and contain feedback loops (both positive and negative).

Property 4: emergence and self-organization. A complex system comprises emergent properties which are not directly accessible (identifiable or anticipatable) from an understanding of its components.

Non-determinism and non-tractability

A complex system is fundamentally non-deterministic and non-tractable.
In this example, the medic (Med) is on the telephone with an external caller (C).

Because of the proximity relationship between the medic and the other agents, the conversation is opportunistically overheard by agent O, who then sends an ambulance (because she inferred that the case discussed by the medic was urgent).

Non-determinism of socio-cognitive processes is often considered to be due either to a lack of knowledge on the part of the observer about the analyzed system, or to a disturbance of the system as a result of unforeseen causes (e.g. exterior events or noise). An analysis of the properties of complex socio-technical systems suggests that non-determinism can have an important functional role. We consider one of the most usual mechanisms in cooperative systems: broadcasting. We will show that this mechanism is non-tractable (i.e. that it is difficult, if not impossible, to describe explicitly the information flows that are relevant in understanding how a collective functions) and that it provides a structure for the management of the memory of the collective.

An example of the broadcasting mechanism. A caller, C, telephones a medic (Med) at the emergency centre to request an ambulance. This communication can be overheard by several people depending on their geographical position and the volume of the communication. These people can be either authorized or unauthorized, interested or disinterested interlocutors. The fluctuating status of the interlocutors, as well as their geographical position or their level of involvement with a task, will significantly influence the development of the common knowledge of the collective. In this example, we can see (step 3) that agent O overheard the conversation between the caller and the medic (steps 1 and 2) because of her spatial proximity to the medic and the volume of the communication. As a result, agent O dispatched an ambulance without the medic making an explicit request.
The cognitive dimensions of broadcasting are varied (audio, visual, gestural, etc.) and each one contributes to making the process non-deterministic. Some of the main factors contributing to this mechanism are: the number of people present at the time of the communication act, their status (authorized or unauthorized, interested or not, etc.), their availability, and the context.
It is extremely difficult to trace the flow of information associated with this type of communication. Neither the actors involved nor the observer has the means or the cognitive resources to know who heard the message, and even less to know how it was interpreted. In addition, it is often very difficult to separate the environmental factors from the internal factors.
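The following toy simulation (an illustrative sketch; the agent names, positions and probability model are invented and not taken from the study) makes the point operational: the same utterance, broadcast under the same conditions, is overheard by a different subset of agents on every run, so neither the actors nor an observer can trace exactly which information reached whom.

```python
# Illustrative sketch (not from the original study): a toy model of broadcasting in a
# co-present collective. Names, positions and probabilities are invented for illustration.
import random

AGENTS = {"Med": (0.0, 0.0), "O": (1.0, 0.5), "P1": (3.0, 2.0), "P2": (5.0, 4.0)}

def hearing_probability(speaker, listener, volume, availability):
    """Chance that a listener overhears: falls with distance, rises with volume."""
    sx, sy = AGENTS[speaker]
    lx, ly = AGENTS[listener]
    distance = ((sx - lx) ** 2 + (sy - ly) ** 2) ** 0.5
    return max(0.0, min(1.0, volume / (1.0 + distance))) * availability

def broadcast(speaker, volume=1.0):
    """Return the set of agents who happened to overhear this particular utterance."""
    listeners = set()
    for agent in AGENTS:
        if agent == speaker:
            continue
        availability = random.uniform(0.3, 1.0)   # fluctuating involvement in other tasks
        if random.random() < hearing_probability(speaker, agent, volume, availability):
            listeners.add(agent)
    return listeners

if __name__ == "__main__":
    # The same utterance, repeated: the set of overhearers (and hence the collective's
    # shared knowledge) differs from run to run, which is why the flow is hard to trace.
    for run in range(5):
        heard = broadcast("Med", volume=0.9)
        action = "O dispatches an ambulance" if "O" in heard else "no opportunistic action"
        print(f"run {run}: overheard by {sorted(heard) or 'nobody'} -> {action}")
```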

Limited functional decomposability

Plasticity in the Division of Labor in Social Insects
Different activities are often performed simultaneously by specialized individuals, but the division is rarely rigid.

Workers switch tasks to adjust to changing conditions, maintaining the colony’s viability and reproductive success.

The factors that cause a change in role include internal colony perturbations and external challenges, e.g. food availability, predation, or climatic change.
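A minimal sketch of how such plasticity can arise without central control is given below, using an assumed fixed-response-threshold model (the task names, thresholds and stimulus values are invented for illustration): each worker engages in a task when that task’s stimulus exceeds the worker’s individual threshold, so a perturbation that raises one stimulus automatically recruits workers away from other tasks and out of idleness.

```python
# Illustrative sketch (response-threshold model of division of labour; all parameter
# values are invented, not taken from the text).
import random

TASKS = ["foraging", "brood_care"]

class Worker:
    def __init__(self):
        # Mild specialization: a low threshold for a task means high responsiveness to it.
        self.thresholds = {t: random.uniform(0.2, 0.8) for t in TASKS}

    def choose(self, stimuli):
        # Engage in the task whose stimulus most exceeds this worker's threshold, if any.
        scores = {t: stimuli[t] - self.thresholds[t] for t in TASKS}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

def allocation(colony, stimuli):
    counts = {t: 0 for t in TASKS}
    counts["idle"] = 0
    for w in colony:
        task = w.choose(stimuli)
        counts[task if task else "idle"] += 1
    return counts

if __name__ == "__main__":
    random.seed(1)
    colony = [Worker() for _ in range(100)]
    # Normal conditions: the demand (stimulus) for the two tasks is balanced.
    print("normal demand:  ", allocation(colony, {"foraging": 0.5, "brood_care": 0.5}))
    # An external perturbation (e.g. a food shortage) raises the foraging stimulus;
    # the same workers re-allocate themselves, with no central assignment of roles.
    print("food shortage:  ", allocation(colony, {"foraging": 0.9, "brood_care": 0.5}))
```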

This property of complex systems is difficult to understand intuitively since it goes against the principles of the dominant functionalist culture. According to the traditional analytical approach, a functionally decomposable system is one whose global functioning can be completely deduced from knowledge of the function of its sub-components. To take a trivial example, if we know the function of each element of a car (brakes, distributor, engine, etc.) it is possible to calculate the global function of the vehicle by combining the functions of each element. Systems theory (cybernetics, control theory) is one of the disciplines essentially dedicated to formalizing this approach.
A truly complex system cannot be represented by combining a collection of well-defined functional components. A principal obstacle to the functional decomposability of complex systems is the dynamic and fluctuating character of their constituent functions. The interaction with the environment, as well as the learning and self-organization mechanisms, makes it unrealistic to regard such systems as structurally stable.

Distributed nature of information and representation

The notion of distributed information is largely polysemic, conveying widely different concepts. In its most commonly accepted meaning, a system is said to be distributed when its resources are physically or virtually distributed across various sites. Thus, a machine (a computer, for example) can distribute its calculations amongst several remote sites and assemble the results according to a pre-defined algorithm. Equally, an operator can distribute his or her work tasks and tools according to a particular strategy. The concept of distribution also underlies the concept of redundancy, when some of the distributed resources duplicate one another.
The notion of distributed representation also exists in the field of cognitive psychology. It covers the fact that, in the interaction between an actor and his environment, artefacts (tools) play an important functional role in the organization of reasoning and the transmission of knowledge. To illustrate this principle, we will take the frequently used example of paper strips in the domain of air traffic control. Paper strips are small pieces of paper on which an aircraft’s characteristics, such as its call sign, its destination and its route, are written. It has been shown that these strips help the controllers to represent information to themselves (for example by organizing the strips on the strip board according to the dynamic properties of the planes) and also to cooperate with each other. Thus, we can speak of distributed representation, since some cognitive properties (such as memorizing and structuring the problem) are partially supported by artefacts in the environment. In one way, this notion is close to the concept of physically distributed systems.

Finally, we could introduce a third meaning of the notion of distributed systems which stems from connectionist models and conveys essential concepts for understanding the robustness of the collective in processing data. In the connectionist meaning, a distributed system is one where it is not possible to physically localize the information, since it is more or less uniformly distributed between all of the objects (or actors) in the system.

We can see that the term “distributed representation” is inappropriate here since it is impossible to identify any form of representation in such a network. The representation is “dissolved” either in the nodes of the system or in the links. Thus, a distributed system, in the connectionist sense, does not distinguish between concept, representation, and context, since these three entities are “encoded” simultaneously on the same support (nodes and links).
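The sketch below (a tiny Hopfield-style associative memory, used here only as an illustration of the connectionist meaning of “distributed”, not as the model discussed in the text) makes this tangible: a pattern is stored by Hebbian learning across the whole weight matrix, no single node or link contains it, and recall still succeeds from a corrupted cue even after a fraction of the connections has been removed.

```python
# Illustrative sketch (not from the text): the stored pattern lives in the whole weight
# matrix; no single node or link "contains" it, yet recall survives a corrupted cue and
# the loss of some connections.
import random

def train(patterns, n):
    # Hebbian learning: every weight accumulates a small contribution from every pattern.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, sweeps=5):
    # Repeatedly align each unit with the field produced by all the others.
    state = list(cue)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

if __name__ == "__main__":
    random.seed(0)
    n = 40
    pattern = [random.choice([-1, 1]) for _ in range(n)]
    w = train([pattern], n)

    # Corrupt 20% of the cue, then delete roughly 10% of the connections.
    cue = [(-x if random.random() < 0.2 else x) for x in pattern]
    for _ in range(int(0.1 * n * n)):
        i, j = random.randrange(n), random.randrange(n)
        w[i][j] = 0.0

    recovered = recall(w, cue)
    print(f"cue overlap with stored pattern:      {sum(a == b for a, b in zip(cue, pattern)) / n:.2f}")
    print(f"recalled overlap with stored pattern: {sum(a == b for a, b in zip(recovered, pattern)) / n:.2f}")
```

This non-localizable, redundant encoding is the sense in which information processing in such a network is robust.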

We can say that a truly cooperative system operates in both representational and connectionist modes. This is why such a system is particularly robust in complex environments, which are unpredictable and non-deterministic.

The following example is concerned with our study on the reorganization of the emergency centre. The aim of the collective in the centre is to maximize cooperative behavior between the actors, in order to respond in the best possible way to events in the environment (such as unexpected calls, work peaks, changes in the physical position of the agents, etc.). The efficiency of this type of collective is based on a situation of co-presence which allows information to be distributed by broadcasting and “floating ear”.

In the case of a normal workload, it is the proximity between the agents which allows them to keep informed of what is said in the collective (the floating ear) and to regulate locally the efficiency of information distribution (by talking more or less loudly, by adjusting the volume of the loudspeaker, and by adopting adaptive ostensive behaviours). In addition to the information distribution between agents, there is important interaction between environmental factors (e.g. the noise level and space constraints within the room) and more central processes (such as the control of the modes of communication). When a call which relates to a previous call is taken by another agent (i.e. one that did not take the initial call), the system as a whole is robust enough to redirect the call to the correct agent.

Such a system can be regarded as complex because part of its functions (here the functions of information sharing and information distribution) cannot be reduced to a representation in which it is possible to locate precisely a relevant piece of information. Neither the actors nor the observer can, at a given moment, give a deterministic account of this process. Moreover, as we saw previously, the structural properties of the communication system are under local control: each agent can control the way in which he or she locally distributes information. Understanding how such a system works requires a model of this type of dynamics, including mechanisms of learning, of self-regulation, and of control of the interaction with the environment.
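As a rough illustration of why this distributed, redundant knowledge makes the collective robust, the following sketch (invented agent names and an assumed overhearing probability, not data from the study) compares the chance that a follow-up call lands on an agent who already knows about the case, with and without opportunistic overhearing of the earlier calls.

```python
# Illustrative sketch (not from the study): opportunistic overhearing makes the
# collective memory redundant, so follow-up calls usually land on an informed agent.
import random

AGENTS = ["Med", "O", "P1", "P2"]
P_OVERHEAR = 0.4   # assumed chance that a co-present agent overhears any given call

def handle_calls(n_cases):
    """Each case is taken by one agent; co-present agents may overhear it."""
    knowledge = {a: set() for a in AGENTS}
    for case in range(n_cases):
        taker = random.choice(AGENTS)
        knowledge[taker].add(case)
        for other in AGENTS:
            if other != taker and random.random() < P_OVERHEAR:
                knowledge[other].add(case)
    return knowledge

if __name__ == "__main__":
    random.seed(2)
    runs, hits_with, hits_without = 2000, 0, 0
    for _ in range(runs):
        knowledge = handle_calls(5)
        case = random.randrange(5)          # a follow-up call about one earlier case...
        receiver = random.choice(AGENTS)    # ...lands on an arbitrary agent
        hits_with += case in knowledge[receiver]
        # Counterfactual without overhearing: only the original taker knows the case.
        hits_without += random.random() < 1 / len(AGENTS)
    print(f"receiver already knows the case, with overhearing:    {hits_with / runs:.2f}")
    print(f"receiver already knows the case, without overhearing: {hits_without / runs:.2f}")
```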
