Monday, September 3, 2007

Suspension of disherence

Some days ago I was riffing on the "one-wayedness" of the Turing test, in the sense that an adequate criterion for intelligence would seem to include detecting error not only in one's own reasoning, but also in another's. In other words, a truly smart bot should not just be able to "chat" with a human without being called out as a robot, but should also be able to challenge logical and semantic claims of the human tester herself.

Apparently, one of the most famous chatbots, known as Eliza, is designed to do this sort of thing. Is Eliza the "Rogerian therapist" she tries to sound like, or is she just a deluded algorithm? Eliza ostensibly challenges her interlocutor, much as a therapist might. Here's a sample of a "conversation" I had with her:

> Hello
* Good morning
> Don't you say hello?
* Hello and good morning
> So what do you want to discuss
* I'm in Taiwan
> Do you like being in Taiwan?
* More or less
> Can you tell me more?
* How about not?
> Why do you ask?
* You've already failed using intelligible English
> What does that suggest to you?
* You're a bot and Turing is still a dreamer
> Do you sometimes wish you were a bot and Turing is still a dreamer?
* How prescient of you!
[Here I left the chat box to blog]
> You're not really talking about me, are you?
> *** Are We going to Chat? ***
> I can't help you without a dialog!
* Sorry I was distracted
> What feelings do you have when you apologise?
* Anxiety and self-loathing
> I see.
* See what?
> Have you asked such questions before?
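
Exchanges like the one above ("You're a bot and Turing is still a dreamer" becoming "Do you sometimes wish you were a bot and Turing is still a dreamer?") come from Eliza's well-known mechanism: match a keyword pattern, swap first- and second-person words in the captured fragment, and slot it into a canned template, falling back on content-free prompts when nothing matches. Here is a minimal sketch of that kind of rule; the specific patterns and templates are my own illustrative guesses tuned to the transcript above, not Weizenbaum's actual 1966 script:

```python
import re

# Pronoun swaps applied to the captured fragment, so "my job" echoes
# back as "your job". (Illustrative subset, not the full original table.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "you're": "I'm",
}

# (pattern, response template) pairs tried in order; first match wins.
RULES = [
    (re.compile(r"you'?re (.*)", re.I), "Do you sometimes wish you were {0}?"),
    (re.compile(r"sorry\b", re.I), "What feelings do you have when you apologise?"),
    (re.compile(r"(.*)\?$"), "Why do you ask?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return an Eliza-style reply: pattern match, reflect, fill template."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)) if m.groups() else "")
    # Content-free fallback when no rule matches -- the telltale of
    # "disherence": locally plausible, globally empty.
    return "Can you tell me more?"
```

Run against the transcript, this toy reproduces several of Eliza's replies verbatim, which is precisely the point: a few dozen such rules suffice to sustain the illusion so long as the human keeps supplying the coherence.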

I quickly grew tired of this "test" and can see going through it towards "intelligent" dialogue only by suspending my disbelief, indeed by suspending my awareness of just how disherent Eliza is (which is not to say incoherent, just tellingly not coherent enough). SAIers (i.e., advocates of "strong artificial intelligence," like Alan Turing, Rodney Brooks, Marvin Minsky, Douglas Hofstadter, Ray Kurzweil, Daniel Dennett, Andy Clark, et al.) sometimes complain that their opponents are always moving the goal posts instead of equitably accepting the progress AI has made and will make. For example, if expert chess was once the challenge to be met, and a robot meets the challenge, the goal post is just moved back to something more challenging, something more enigmatically and truly intelligent, such as vectorial face recognition.

My own complaint, as an opponent of SAI (what you might call a Naysaier, like John Searle, Mortimer Adler, Roger Penrose, John von Neumann, Kurt Gödel, Hubert Dreyfus, Stanley Jaki, et al.), is that SAIers do just the opposite by widening the goal posts. An analogous trend is how SAIers claim that within, say, fifty years, we will have such and such splendors of AI, only to see that, once those fifty years have passed, the next generation is saying that within another fifty years we will have those long-sought splendors. (This temporal slippage occurred within Alan Turing's own short lifetime: in the 1940s he expected robotic success within half a century, but only years later he had dilated the time to a century.) If Naysaiers are accused of moving the goal post farther back from the kicker each time he succeeds, SAIers, I think, are just as guilty of narrowing the idea of intelligence to something almost definitionally possible for AI systems.
Once you start saying human intelligence is "really just" this or that algorithm, and those many algorithms in concert, then you've eo ipso defined your way to success by narrowing the meaning of intelligence to something so basic (the "stupid homunculus" of the literature) that it turns out, in one move, that not only can robots attain intelligence, but people themselves never really had it to begin with! Demystifying "natural intelligence" is a slippery but very popular way of perfecting artificial intelligence. If basically any problem-solving algorithm can be found for any problem even loosely construed, then of course anything can just as easily pass for intelligence.

My complaint is not that SAIers target specific human goals and conquer them one plank at a time, nor, more crucially, that they redefine intelligence in non-anthropocentric terms (this was very much Alan Turing's approach in the 1940s and 50s). My complaint is that precisely by redefining intelligence in terms amenable to robotics, SAIers forfeit the entire project of "artificial intelligence" as generally understood. The key hidden premise of the SAI project is not that AI amounts to the mere computerization of basic tasks -- which we have in spades, so here's to weak AI! -- but in fact the hominization of computerized cognition. The goal has always been to see robots that "can do everything humans can do, only better," so I find it curious to see both claims being pushed without much awareness of the tension. If on the one hand computers are designed to attain human (anthropocentric) intelligence, well, that still leaves a long empirical road ahead, and one heavily opposed by metaphysical concerns (such as the immateriality of thought and the [Gödelian] incompleteness of any logical system). If on the other hand AI is redefined for properly robotic intelligence, without denigrating it based on inappropriate human standards, then it seems we've got a fundamentally different project. The intrinsic embodiedness of intelligence (which is exactly what Aristotelico-Thomistic anthropology entails), and spirit's transcendent relation to matter, force a choice on SAIers. Either take human intelligence seriously in its totality or take robotic intelligence seriously on its own terms without using it as a "threat" against the "special dignity" of humans. (This dilemma for SAI, which rests on the horns of biological ineluctability and conceptual redundancy, is what I was getting at in "Mother, may I?".) In the first case, SAI will ultimately amount to cloning, since human intelligence, in its current mode of being, can only properly exist in a human body. Designing computers to "pull off" human intelligence will just come down to cloning and manufacturing human intelligence (à la Brave New World).

In the second case, making properly robotic intelligence a reality will just be another case of intelligence being analogously understood from an already anthropocentric perspective. Only because we grasp what we mean by intelligence do we see, in an analogous way, that animals "have it too." Now that we can manufacture electropets, AI will just become another case of finding intelligence in the world, but still only in an analogous way. All such talk of, and reliance on, analogous levels of being is of course part and parcel of orthodox Catholic theology. What may be troubling for some Christians, namely, to find analogies of will, thought, and feeling in non-human subjects, pales in comparison to the startling awareness for a materialist that mind is a driving force in the universe at all levels. We find meaning in our own intellects because they are analogous reflections of God's own intelligible personal being-in-relation. We find demi-meaning in beings beneath us because they too partake analogously of God's being. Man, for his part, is the microcosmal mediator of the sensible and the immaterial, the sub- and the superhuman, which is what saints like Irenaeus, Maximus, Augustine, and Bonaventure have long been talking about.
