23.4.07

Iterative self-programming.

Let S be the basic symbol set: an ordered set of symbols which correspond one-to-one with the effective atomic operational semantics of any programming language of sufficient power and complexity (see [1]). That is, S represents the complete enumeration of the operational semantics of a programming language P. For example, S1 would be the alphabetically first legitimate symbol in P, and Si would be the ith symbol in P. A function is any compilable, syntactically correct, ordered collection of symbols in S. Let L be the language of all such functions, beginning with simple variations on the basic symbol set, whose rules of expansion we shall return to. Let Li be the ith function in L; the list of this set L is the complete enumeration of the basic names of all functions generated so far over S.
Let F be the ordered index of all functions in L which either (a) generate syntactically valid output from valid input, or (b) change the (intra)system state in any way. A structured partition of F is a collection of functions taken as methods of a class which generate syntactically valid input and output. Let a component be any collection of functions which, taken as methods of a class, generate syntactically valid input and output and utilize other member (or family) functions or variables. A component is by definition syntactically correct and has a rough, “machinistic” (part-to-whole) meaning.
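To fix ideas, here is a minimal Python sketch of S, L, F, and a component, assuming Python stands in for P, a toy token list approximates S, and compilability and valid output are checked with the built-in compile() and eval(); every name below is illustrative rather than part of the construction itself.

import itertools

# Toy stand-in for S: in practice S would enumerate the full operational
# semantics of the host language; here it is a handful of tokens.
S = ["x", "0", "1", "+", "-", "(", ")"]

def candidates(max_len):
    """Enumerate ordered collections of symbols over S, shortest first."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(S, repeat=n):
            yield " ".join(combo)

def is_function(source):
    """A 'function' is any compilable, syntactically correct string over S."""
    try:
        compile(source, "<candidate>", "eval")
        return True
    except SyntaxError:
        return False

# L: the language of functions generated so far over S.
L = [src for src in candidates(3) if is_function(src)]

def produces_output(source, probe=1):
    """F indexes the functions in L that yield valid output from a valid
    input (state change is ignored in this toy)."""
    try:
        return eval(compile(source, "<f>", "eval"), {}, {"x": probe}) is not None
    except Exception:
        return False

F = [src for src in L if produces_output(src)]

def make_component(name, named_sources):
    """A component: a collection of functions taken together as methods of a class."""
    methods = {
        meth: (lambda self, x, _s=src: eval(_s, {}, {"x": x}))
        for meth, src in named_sources.items()
    }
    return type(name, (object,), methods)

Doubler = make_component("Doubler", {"apply": "x + x"})
assert Doubler().apply(3) == 6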
We define a closed ontology O to be an ordered collection of micro-ontologies. A micro-ontology is a special kind of class structure in which an arbitrary assemblage of component names is related to other component names around five categories (a small code sketch follows the list):
(a) Individuals. An individual is just a named component; let I represent the group of all individuals belonging to the micro-ontology; the first individual is the name of the micro-ontology itself;
(b) Classes. A class C is an ordered group of components with some commonality. This commonality is represented as the name of the class. At the same time, the contents of C are entirely dictated by the name of C (which is the addition of a new basic symbol in O): the name of C names the grouping-function of which C is the result;
(c) Attributes
(d) Relations
(e) Events: Events are interruptions of a process due to an input stream (words or images) which provokes some response; this response is re-encoded as a necessary relation between an “individual” of the class of “events”.
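As a rough sketch of the above, a micro-ontology can be carried by an ordinary class structure; the Python below assumes component names are plain strings, and its field and method names are only shorthand for the five categories, not fixed terminology.

from dataclasses import dataclass, field

@dataclass
class MicroOntology:
    name: str                                                       # also the first individual
    individuals: list[str] = field(default_factory=list)            # (a)
    classes: dict[str, list[str]] = field(default_factory=dict)     # (b) class name -> grouped components
    attributes: dict[str, list[str]] = field(default_factory=dict)  # (c)
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (d) (subject, relation, object)
    events: dict[str, str] = field(default_factory=dict)            # (e) input pattern -> response component

    def __post_init__(self):
        # The first individual is the name of the micro-ontology itself.
        if self.name not in self.individuals:
            self.individuals.insert(0, self.name)

    def add_class(self, class_name, members):
        """The class name stands for the grouping-function of which the class is the result."""
        self.classes[class_name] = list(members)
        self.individuals.extend(m for m in members if m not in self.individuals)

mo = MicroOntology(name="arithmetic")
mo.add_class("doublers", ["Doubler"])
mo.relations.append(("Doubler", "member-of", "doublers"))

Here add_class is only meant to illustrate clause (b): the class name stands in for the grouping-function whose result is the class's contents.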
Let K be an open ontology: the set of all valid programs (themselves just ordered collections of closed ontologies) correlated to themselves within a macro-ontology. We shall say K is equivalent to a simple agent. Let us define three rules which expand K (i.e., generate new functionalities); a rough code sketch follows the list:

(i) other:
a. the function calling this function is taken as the foundation for a new program;
b. four variations and cross-variations on this interaction
i. repetition/erasure
ii. extension/contraction
iii. displacement/reorganization
iv. integration/separation
(ii) same
a. this operation is repeated, treating the newly created network of related programs as the other who calls
b. (repeat the four variations)
(iii) synthesis (new program generation)
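A rough Python sketch of K as a simple agent with these three expansion rules follows, assuming a program can be treated as a list of tokens and the four variation pairs can be approximated by simple list transformations; both the names and the particular transformations are illustrative stand-ins rather than the definitive operations.

# The four variation pairs, approximated as token-list transformations.
VARIATIONS = {
    "repetition":     lambda toks: toks + toks[-1:],            # repeat the last token
    "erasure":        lambda toks: toks[:-1] or toks,           # drop the last token
    "extension":      lambda toks: toks + ["0"],                 # append a neutral symbol
    "contraction":    lambda toks: toks[: max(1, len(toks) // 2)],
    "displacement":   lambda toks: toks[1:] + toks[:1],          # rotate
    "reorganization": lambda toks: sorted(toks),
    "integration":    lambda toks: ["(", *toks, ")"],            # bind into one unit
    "separation":     lambda toks: [t for t in toks if t not in "()"],
}

class Agent:
    """K as a simple agent: an ordered collection of closed ontologies (programs)."""
    def __init__(self, programs):
        self.programs = [list(p) for p in programs]

    def other(self, caller):
        """(i) The caller of a program is taken as the foundation of a new program."""
        base = list(caller)
        self.programs.append(base)
        return [fn(base) for fn in VARIATIONS.values()]          # apply the variation pairs

    def same(self):
        """(ii) Repeat the operation, treating the whole network as the caller."""
        network = [tok for prog in self.programs for tok in prog]
        return self.other(network)

    def synthesis(self, a, b):
        """(iii) Generate a new program by combining two existing ones."""
        new = list(a) + list(b)
        self.programs.append(new)
        return new

agent = Agent(programs=[["x", "+", "1"]])
agent.other(caller=["x", "+", "x"])
agent.synthesis(agent.programs[0], agent.programs[1])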

[1] – The basic ‘trick’ here is that the recursive step appears in the definition itself, i.e., in the relation of the symbol set to itself. We’ll get into all this later, but the programming language in question has to at the least allow for recursion, the definition of an abstract class, and the treatment of program code as an atomic data type. Provided this, we have the raw mechanics we need to simulate a heterogeneous network of communicative, self-programming agents, which allows for the multiple social instantiations of linguistically competent agency. Treating the program code as a code allows the computer to treat any code as a code, i.e., gradually and smoothly to increase its facility with a symbol set by using it, experimenting with it, etc.
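For completeness, the three mechanics the footnote asks of the host language can be shown together in a few lines of Python; this is only an illustration of recursion, an abstract class, and program code handled as a data value, not a claim about how the agents themselves would be built.

import abc

class SelfProgrammer(abc.ABC):          # (2) an abstract class
    @abc.abstractmethod
    def respond(self, message: str) -> str: ...

def grow(depth: int) -> str:            # (1) recursion appearing in the definition itself
    if depth == 0:
        return "return 'hello'"
    return grow(depth - 1)

source = f"""
class Echo(SelfProgrammer):
    def respond(self, message):
        {grow(3)}
"""                                      # (3) program code treated as a data value
namespace = {"SelfProgrammer": SelfProgrammer}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["Echo"]().respond("hi"))   # -> hello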


(sorry, finals, I’ll finish this soon)

Notes on Natural-Language Artificial Intelligence

Our first axiom must be some kind of logically justifiable affirmation that this goal is indeed reachable:

(1) Linguistic competence is attainable through the appropriate programming of any universal Turing machine.

In order to demonstrate this, let’s suppose that (‘artificial’) natural language agency isn’t possible. Accepting this implies at least one of the following two propositions must be true:

(a) Either linguistic competence will never be attainable for machines due to some kind of ultimately undiscoverable reason, or

(b) Algorithmic linguistic competence is unattainable for a reason which is, at least in principle, discoverable (i.e., empirically demonstrable).

In the case of (a), we have a clear fallacy verging on mysticism, at least without further evidence adduced in its support. Formally, positively asserting impossibility based on an absence of evidence is purely inductive and has no necessary truth value. The truth value of (b), on the other hand, rests not on faith in a metaphysical proposition but entirely on the principle of impossibility which is to be demonstrated. This is, of course, the more scientific of the two propositions, and the one that most deserves our scrutiny.
Why have we drawn this distinction? First, because it is clear that many “counter-proofs” of natural-language A.I. (“strong” A.I.) advocate (b) on the basis of (a), in a sort of admixture where they end up founding their “empirical” demonstrations upon crude and poorly concealed metaphysical presuppositions. Second, because proposition (b) opens onto an important question which deserves scrutiny as a science in itself, and which has been unnecessarily fettered with the supernatural faith of the first.
Let’s take as a common example the objection raised in Searle’s ‘Chinese Room’ thought experiment. He argues in essence that a computer will always and only be a model of a mind and, being just a machine, will be forever incapable of anything resembling human “cognitive states” (i.e., understanding, imagination, etc.). Searle presupposes the existence of a machine which would communicate with some degree of facility in a natural language; that the machine speaks Chinese is an arbitrary choice. In Searle’s conception, the operations of the machine are entirely determined by logical necessity. These operations, therefore, can be represented by any system correlative to a universal Turing machine—and we must remark at this point that every physical system is of this kind, so that even a human being manually performing the operations on paper could represent the operation of the speaking machine: taking input in Chinese, “mechanically” applying rules, and delivering the result.
This is in fact precisely the case Searle considers: in this way, the human is considered only as a rule-bound sign-manipulator, unconscious of meaning as such, to precisely the degree he performs the operations of the language-speaking machine. Moreover, the human being doesn’t need to understand Chinese in order to perform these operations; thus, the pure operation of the system never encounters “cognitive states,” even though these may be simulated by the computer program which the human being is unwittingly but faithfully carrying out. To Searle, this also means that Chinese is not being “understood” by the machine either, even if the simulation is spectacularly convincing. Insofar as the machine is just a universal Turing machine, Searle asserts, it is impossible for it to understand Chinese.
It is easy to see how a large part of the conclusions Searle draws from his experiment are accurate: it is undoubtedly the case that computers, modern or not, are just and always mindlessly manipulating symbols. But it does not follow from this that language requires something other than what is being offered. Just because the operations themselves are not cognitive states, this of course doesn’t mean that cognitive states don’t exist or are therefore impossible. Here Searle stands against the Strong AI project as well as against Turing, who would seem to accept that a machine which could converse easily and convincingly at length with a human subject would qualify, at the least, as ‘intelligent.’ Consciousness cannot be thoughtlessly conflated with linguistic competence.

In other words, (b) does not logically follow from (a), even if we do accept (a)’s truth-value. To argue from the position of an undiscoverable absence is, again, a non-actionable and unscientific presumption verging on the mystical.