23.4.07

Notes on Natural-Language Artificial Intelligence

Our first axiom must be some kind of logically justifiable affirmation that this goal is indeed reachable:

(1) Linguistic competence is attainable through the appropriate programming of any universal Turing machine.

In order to demonstrate this, let’s suppose that (‘artificial’) natural-language agency isn’t possible. Accepting this implies that at least one of the following two propositions must be true:

(a) Either linguistic competence will never be attainable for machines due to some kind of ultimately undiscoverable reason, or

(b) Algorithmic linguistic competence is unattainable for a reason which is, at least in principle, discoverable (i.e., empirically demonstrable).

In the case of (a), we have a clear fallacy verging on mysticism, at least without further evidence adduced in its support. Formally, positively asserting impossibility on the basis of an absence of evidence is purely inductive and has no necessary truth value. The truth value of (b), on the other hand, rather than resting on faith in a metaphysical proposition, depends entirely on the principle of impossibility which is to be demonstrated. This is, of course, the more scientific of the two propositions, and the one that most deserves our scrutiny.
Why have we drawn this distinction? First, because it is clear that many “counter-proofs” of natural-language A.I. (“strong” A.I.) advocate (b) on the basis of (a), in a sort of admixture where they end up founding their “empirical” demonstrations upon crude and poorly concealed metaphysical presuppositions. Second, because the second proposition (b) opens onto an important question which deserves scrutiny as a science in itself, and which has been unnecessarily fettered with the supernatural faith of the first.
Let’s take as a common example the objection raised in Searle’s
‘Chinese Room’ thought experiment. He argues in essence that a computer will always and only be a model of a mind and, being just a machine, will be forever incapable of anything resembling human “cognitive states” (i.e., understanding, imagination, etc.). Searle presupposes the existence of a machine which would communicate with some degree of facility in a natural language; that the machine speaks Chinese is an arbitrary choice. In Searle’s conception, the operations of the machine are entirely determined by logical necessity. These operations, therefore, can be represented by any system correlative to a universal Turing machine—and we must remark at this point that every physical system is of this kind, so that even a human being manually performing the operations on paper could represent the operation of the speaking machine: taking input in Chinese, “mechanically” applying rules, and delivering the result.
This is in fact precisely the case Searle considers: in this way, the human is considered only as a rule-bound sign-manipulator, unconscious of meaning as such, to precisely the degree that he performs the operations of the language-speaking machine. Moreover, the human being doesn’t need to understand Chinese in order to perform these operations; thus the pure operation of the system never encounters “cognitive states,” even though these may be simulated by the computer program which the human being is unwittingly but faithfully carrying out. To Searle, this also means that Chinese is not being “understood” by the machine either, even if the simulation is spectacularly convincing. Insofar as the machine is just a universal Turing machine, Searle asserts, it is impossible for it to understand Chinese.
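The rule-bound sign-manipulation Searle describes can be sketched as a trivial lookup program. This is an illustrative toy only (the rulebook entries are hypothetical stand-ins, not a real model of Chinese competence); its point is that the operator, human or mechanical, needs no grasp of what the symbols mean, only the ability to match them and copy out the prescribed response.

```python
# Toy sketch of the Chinese Room: the "rulebook" maps input strings to
# output strings, and the operator applies it purely mechanically.
# The specific entries here are invented examples, not part of Searle's text.

RULEBOOK = {
    "你好吗": "我很好，谢谢",      # a greeting and its scripted reply
    "你会说中文吗": "会，当然",    # "do you speak Chinese?" -> "yes, of course"
}

def room_operator(input_symbols: str) -> str:
    """Match the input against the rulebook and copy out the result.

    No understanding is involved: the operator treats the strings as
    opaque shapes, exactly as Searle's human in the room does.
    """
    return RULEBOOK.get(input_symbols, "请再说一遍")  # default: "say that again"

print(room_operator("你好吗"))
```

However convincing the replies, nothing in `room_operator` corresponds to a cognitive state; this is the intuition Searle's thought experiment trades on, and the one examined below.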
It is easy to see how a large part of the conclusions Searle draws from his experiment are accurate: it is undoubtedly the case that computers, modern or not, are just and always mindlessly manipulating symbols. But it does not follow from this that language requires something other than what is being offered. Just because the operations themselves are not cognitive states, this of course doesn’t mean that cognitive states don’t exist or are therefore impossible. Here Searle stands against the Strong A.I. project as well as against Turing, who would seem to accept that a machine which could converse easily and convincingly at length with a human subject would qualify, at the least, as ‘intelligent.’ Consciousness cannot be thoughtlessly conflated with linguistic competence.

In other words, (b) does not logically follow from (a), even if we do accept (a)’s truth value. To argue from the position of an undiscoverable absence is, again, a non-actionable and unscientific presumption verging on the mystical.