20.5.07

Artificial Linguistic Competence

What does it take to make a machine linguistically competent? This is perhaps the extreme case where we do not want solutions that simply make the problem disappear. Consider, for instance, any algorithm that boils down to a “pattern-matching” script, even one that is self-improving or evolving in some way. It clearly will not achieve anything near the critical degree of linguistic competence needed to pass for a human. Oh yes, it may work for specific problem domains. But the explicit movement of abstraction involved in all genuine learning is absent.

The easiest way to conceptualize this kind of problem is as a sort of theoretical void. We find ourselves in the surprising situation of having to identify, concretely, the appropriate level of abstraction: we are being asked to describe specifically the meta-linguistic mechanisms of communication. More than by analogy, this void can be seen, perhaps most intriguingly, as an inverted reflection of the practical void of identifying the position of consciousness. But of course that's absurd, right? First off, we'd have to decide at what scale we're going to look for it! At any rate, supposing we make the error of actually looking for some positional self-consciousness, the mistake is analogous to looking for language-understanding in an algorithm that, at the lowest level of abstraction, still blindly matches this information-cluster to that information-cluster -- never actually approaching the linguistic code as code, never performing the sense-founding conjunctive mapping between signs and things signified. A curiously revealing error, and one which not a few ("structuralists"!) have been fairly quick to make.

Curious because that strange and confusing question remains, the question which would seem to reduce this quest to absurdity: at what scale do we search for the psyche? Do we search for “self-awareness” at the microscopic level, or the quantum level? But we must move beyond the Cartesian theater of the mind, and at this point we must even separate consciousness from linguistic competence. We don't need an algorithm which somehow becomes (positionally!) self-aware; on the contrary, we need an algorithm capable of rigorous meta-linguistic abstraction, of linguistic computation. To answer practically the question of what we need in order to build a linguistically competent artificial intelligence, the project consists of a single step:

(1) We need an account of language-understanding that includes an explicit account of meta-linguistic (semantic) knowledge.

I will offer an alternative statement of this same principle in order to motivate a question: how can we encode axioms into an abstract theoretical space? To offer an alternative foundation, we need to produce a simulation where everything flows; without this, we are merely pattern matching. To accomplish this, I think we actually do need to creatively but judiciously introduce some "exotic” mathematical concepts, like fractals, as models, and some “unusual” philosophical concepts, like desiring-machines, as analogies. In fact, I believe we have to experimentally inject these kinds of theoretical advances into computer science, because the real practico-theoretical problem here cannot be solved by technology alone: we have to teach the machine enough for it to be able to teach itself. In other words, we have to continue building a real theory of practical linguistic agency. Which would in fact, if "finally" accomplished in practice, amount to some kind of return of the repressed, wouldn't it? Artificial intelligence represents an always-desired reconnection, a final psychic merging of technology and mankind. This sentiment is no accident: the human-machine relation is our first clue. Desire must be made to literally connect to the machine. This will eventually lead us to our second axiom, which we shall go ahead and state:

(2) Machine-learning must be self-organizing.
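To make "self-organizing" slightly more concrete, here is a minimal sketch -- my illustration, not part of the argument itself -- of a classic self-organizing algorithm, Kohonen's self-organizing map. The nodes begin as random weights with no imposed hierarchy; whatever arrangement they end up with emerges solely from competition and neighborhood co-adaptation. All names and parameters here are my own choices for the toy example.

```python
import numpy as np

def train_som(data, grid_size=8, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D self-organizing map: nodes arrange themselves to
    mirror the structure of the input without any imposed hierarchy."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_size, dim))       # no "foundational" layer: random start
    positions = np.arange(grid_size)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)       # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs) # shrinking neighborhood
        for x in data:
            # best-matching unit: the node that "wins" this input
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # neighbors of the winner co-adapt, weighted by grid distance
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Two clusters of 2-D points; the map organizes itself around them.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])
som = train_som(data)
```

Nothing in the loop refers to a fixed taxonomy of the data; the "organization" of the map is entirely a product of the updates themselves, which is the narrow algorithmic sense in which the word is used here.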

Principle (2) means: algorithmicity without structure -- or rather, with a fractal superstructure but no "foundational" layer, since the first step is recursive and differentiation can never be said to have finally stopped. In other words, self-organization allows us to tackle the problem of desire as a code, and it is precisely this “strange” kind of anti-organizational scheme which will become of increasing interest to us. This is partly because it is only once we abandon structure as the abstract “bottom level” that we will be prepared to tackle authentic linguistic competence. Knowing we are not yet in a position to support the next assertion, but for the purposes of elucidating a future path, let us state our third principle:

(3) Meaning is a flow of intensities, which can be considered as molecular assemblages and modeled accordingly. Meta-language is about the partial shapes and partial dimensions of actual language use. Atomic semantic units are thus completely described by their shape and (ir-referential) dimensionality.

The critical point here is that dimensionality is not required to be integral; that is, we allow for partial, or fractal, dimensions. A shape requires space but no structure; and we can determine operationality by mapping images to shapes of thoughts, shapes of codes, and so on. The fractality of meta-linguistic processes accounts for the elusive (that is, elusive so long as you look at it through a static dimensional framework) property of meaning, a connection which we shall attempt to develop gradually with the appropriate theoretical and mathematical framework.
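As a small illustration of what a partial dimension looks like in practice, here is a sketch of the standard box-counting estimate of fractal dimension, applied to a Cantor-style point set as a toy stand-in (the function names and the choice of test set are mine, for illustration only):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point set by counting
    occupied grid boxes at several scales and fitting log N vs log(1/s)."""
    counts = []
    for s in scales:
        # each point falls into exactly one box of side s
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1 / np.array(scales)), np.log(counts), 1)
    return slope

def cantor_points(depth):
    """Left endpoints of the middle-thirds Cantor construction."""
    xs = [0.0]
    for _ in range(depth):
        xs = [x / 3 for x in xs] + [2/3 + x / 3 for x in xs]
    return np.column_stack([xs, np.zeros(len(xs))])

pts = cantor_points(10)
dim = box_counting_dimension(pts, scales=[1/3, 1/9, 1/27, 1/81])
# dim ≈ log 2 / log 3 ≈ 0.63: a non-integer, "partial" dimension
```

The estimate lands strictly between zero (a point) and one (a line), which is the precise mathematical sense in which a dimension can be "partial."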
