17.5.07

Autopoiesis

"An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network." (Maturana, Varela, 1980, p. 78)
"[…] the space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection." (Maturana, Varela, 1980, p. 89)

Niklas Luhmann works with this autopoiesis to produce a quite fascinating model of systematicity. I’ll briefly highlight what’s important from our point of view.

A ‘machine’ is defined by the boundary between itself and its environment; a machine is divided from an infinitely complex exterior. Communication within a machine-system operates by selecting only a limited portion of the information available outside (reduction of complexity). The criterion according to which information is selected and processed is meaning. Machines process meaning, producing desire; each machine’s identity is constantly reproduced in communication (depending, again, on what’s meaningful and what’s not). If a system fails to maintain its identity, it ceases to exist as a system and dissolves back into the environment. Autopoiesis is this process of reproduction from elements previously filtered out of an over-complex environment. The operation of autopoiesis can be binarily encoded (in a Spencer-Brown logic of distinction) as a program which filters and processes information from the environment.
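To make that last claim a little more concrete, here is a minimal Python sketch (my own toy illustration, not Luhmann’s formalism): a system stays alive only by re-applying its marked/unmarked distinction to whatever the environment offers, and by regenerating that distinction out of its own prior operations. The class name, the numeric ‘environment’, and the threshold rule are all assumptions made for the example.

import random

class AutopoieticSystem:
    def __init__(self, meaning_criterion):
        # The distinction (Spencer-Brown: marked vs. unmarked) that
        # constitutes the system's boundary against its environment.
        self.meaning_criterion = meaning_criterion
        self.alive = True
        self.memory = []

    def communicate(self, environment):
        """One round of operation: reduce complexity, process meaning,
        reproduce the distinction itself."""
        # Reduction of complexity: select only what the criterion marks.
        selected = [x for x in environment if self.meaning_criterion(x)]
        if not selected:
            # Nothing meaningful was selected: identity is not reproduced
            # and the system dissolves back into its environment.
            self.alive = False
            return
        self.memory.extend(selected)
        # Identity is reproduced: the next criterion is produced out of the
        # system's own prior operations, not imported from outside.
        threshold = sum(self.memory) / len(self.memory)
        self.meaning_criterion = lambda x: x > threshold

# Usage: an 'infinitely complex' environment reduced here to random numbers.
system = AutopoieticSystem(meaning_criterion=lambda x: x > 0.5)
for _ in range(10):
    system.communicate([random.random() for _ in range(100)])
    if not system.alive:
        break
print('system alive:', system.alive)

Note that the system can ‘die’ in this sketch: if its own history drives the criterion so high that nothing in the environment is ever marked again, the distinction is no longer reproduced, which is exactly the failure-to-maintain-identity case above.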

OK, taking this from a D&G perspective, the question becomes about this connection or boundary-limit... and I think this is where fractality and cognition exhibit a common transitive structure...

Program-agents connect: machines to flows, flows to machines, flows to flows, machines to machines, events to flow-machines, machines to event-flows; they (1) produce mappings (flowcharts) of these connections, (2) dis-join, decode and fracture these mappings, and (3) construct new machines -> more or less ‘dense’ networks of ‘tubes’, flows -> currents of intensity, subagents -> communicating the pure imagistic flow of unconscious symbol-automation. A particular agent constructs a tool (or a machine with a hole in the shape of a ‘problem’) by halting this flow, “flattening” it into (n-1) dimensions, where it can be differentially represented by a self-organizing nano-ontology; these subagents compress reality into their ‘micro-worldviews’ but then uncompress them into signification, a stream of images and words whose true ‘symbolic’ value lies not in the individual’s ontology but in the group’s. So natural evolution works to point individual ontologies towards the assemblage of the group, but also pushes the group’s ontology towards more effective ways of responding to events. So all agents are partial agents, but these agent/machine networks are not all at the same “level”: machines can be made up of machines and subagents; all agents are subagents. This fundamental fractality is ultimately what allows these flows to be taken as flows, allows agents to be and to perform. “Full” agents that skim the surface of language are precisely the question. Up until now we have only considered the deeps, and perhaps this is ultimately all we need to consider: merely the most fundamental heuristics of cognition. But what about conceptual metaphors? Does the machinic framework provide for the possibility of metonymy? Does the fractality of cognition really completely account for linguistic competency...?
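As a toy rendering of this fractality and of the connect/map/fracture operations (entirely my own illustration; the names Agent, Network, mapping and fracture are hypothetical, not anything from D&G or Luhmann), the sketch below treats machines and flows as nodes that a program-agent wires together, charts, and then dis-joins, while every agent is itself built out of subagents:

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    subagents: list = field(default_factory=list)  # partial agents all the way down

    def depth(self):
        # How many 'levels' of subagents this partial agent contains.
        return 1 + max((s.depth() for s in self.subagents), default=0)

@dataclass
class Network:
    connections: list = field(default_factory=list)  # (source, target) pairs

    def connect(self, a, b):
        # Machines to flows, flows to machines, machines to machines, ...
        self.connections.append((a, b))

    def mapping(self):
        # (1) produce a flowchart of the current connections.
        return [(a.name, b.name) for a, b in self.connections]

    def fracture(self, predicate):
        # (2) dis-join and decode: drop every connection the predicate marks.
        self.connections = [(a, b) for a, b in self.connections
                            if not predicate(a, b)]

# Usage: a machine made of machines and subagents, connected to a flow.
flow = Agent('flow')
machine = Agent('machine',
                subagents=[Agent('subagent', subagents=[Agent('sub-subagent')])])
net = Network()
net.connect(machine, flow)
net.connect(flow, flow)
print(net.mapping())   # [('machine', 'flow'), ('flow', 'flow')]
print(machine.depth()) # 3 levels of partial agents

What the sketch obviously cannot show is step (3), the construction of genuinely new machines out of flows; it only makes the ‘machines made of machines and subagents’ claim literal.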

What is a subagent?

The task of a subagent is to translate an image (scene) into a problem space, an objectivized or idealized space. Geometric regularity is in fact what is being auto-regulated here: the problem of establishing arbitrary limits is taken up as a recursive feedback loop between the systematic and meta-systematic modes of computation. Intensity, attention, or heat is represented by the amount of ‘noise’ (perturbation) the meta-system allows in the description of the problem space. This problem space is then populated by sub-subagents who imagine it and in turn create sub-sub-subagents who reify it into a further problem space; this gradual decomposition amounts to conceptual simplification, and it continues until we find an undifferentiable function which decodes the image, i.e., supplies the solution. The image (collapse of the solution space) is then transcoded into a new problem space, or returned as feedback to higher levels of the system, which may be in contact with other subagents inhabiting the given problem space.
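A hedged sketch of this loop, under the purely illustrative assumption that an ‘image’ can be stood in for by a list of numbers and that intensity corresponds to the amount of noise a level is allowed to inject; the function name subagent and its parameters are mine, not the post’s:

import random

def subagent(image, noise_budget, depth=0, max_depth=5):
    """Recursively translate `image` into simpler problem spaces and
    return the decoded value as feedback to the level above."""
    # Perturbation allowed by the meta-system at this level (intensity/heat).
    perturbed = [x + random.uniform(-noise_budget, noise_budget) for x in image]

    # Undifferentiable case: the problem space cannot be simplified further,
    # so this level supplies the solution directly.
    if len(perturbed) <= 1 or depth >= max_depth:
        return sum(perturbed)

    # Conceptual simplification: split the problem space and hand each half
    # to a sub-subagent, with less intensity (noise) at each lower level.
    mid = len(perturbed) // 2
    left = subagent(perturbed[:mid], noise_budget / 2, depth + 1, max_depth)
    right = subagent(perturbed[mid:], noise_budget / 2, depth + 1, max_depth)

    # Feedback to the higher level: the collapsed solution, which that level
    # may transcode into a new problem space.
    return left + right

print(subagent(image=[0.2, 0.5, 0.9, 0.1], noise_budget=0.05))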
