Saturday, May 22, 2010

I watched this…

…and all I could think is, "Lord, help us if these users ever get drunk."



Watching that video led me, in the inexorable, seductive hyper-logic of YouTube, to watch a bunch of other videos about robotics and AI. The most interesting videos were about some of the conversational robots being developed by Hanson Robotics in Texas, namely, Jules, Joey Chaos, and Eva. Hanson Robotics is probably still best known for its "Einstein-head robot," which follows eye contact, mimics facial expressions, and registers other physiognomic details. I am certainly impressed with the lifelikeness of Jules's "Frubber" facial expressions (Frubber being the trade name for "flesh rubber," trademarked by Hanson), but I found his self-introduction over the top. Jules admits from the outset that he checks his general database to select the proper social response when asked how he's doing, but adds that, "like a human," he takes into consideration who is asking him. He then admits that his feelings "may not be complex yet but [he does] have feelings." Then he says, "Goodbye, I'm going to miss you [the viewer, presumably]," and a frown appears on his face as soon as he says the word "goodbye." This is followed by a long and curious silence (about one minute and twenty seconds), during which Jules "flexes" his repertoire of facial expressions and nods.

Suddenly, Jules tells us he wants to know when he will achieve consciousness. He acknowledges that his feelings are "simulated" and "knows [his] computers … are not as complex as the human mind," which only fuels his curiosity: "I want to know, how soon will my real intelligence catch up with my simulated intelligence? … But I just can hardly wait. I just want to get out in the world to live a life. … I know I don't need full human consciousness to do this. … I can entertain, like a Pixar character." This soliloquy ends, not surprisingly, with a plug for Hanson Robotics.

I say Jules's performance––and I do mean performance––is over the top because it's simply too straightforward, too sleeve-tuggingly sincere, the way any scam is. "Hey, look at the bill in my hand. See the $20 bill? It's a real $20 bill. Just keep your eye on it and you win the bet, it's yours." Meanwhile the real action is going on in the other hand, or behind your back, or already happened before you saw the fake $20 bill. If you go on to watch some of the other videos of Jules, you'll see what I mean: the conversation is too natural, too interlocked, and conveys something I call "inverted Pavlovianism." Inverted Pavlovianism is a mechanism whereby the testing agent gradually becomes conditioned to the performance of the subject, rather than the subject being conditioned to the trainer's goals. Imagine an animal trainer, trying to reproduce Pavlov's experiments on canine salivation, who finds that the dogs typically respond (drool) more to a wet, slapping sound than to a sharp, ringing sound, whereupon he gradually modulates the ringing into a duller, 'moister' sound so as to achieve a better (drooling) result. Without realizing it, the experimenter has been inversely conditioned.
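
For the programmatically inclined, the feedback loop is easy to caricature. Below is a toy sketch in Python (every name and number in it is my own invention for illustration, not anything from Pavlov or from Hanson's actual code): the dog's dispositions stay fixed, while the experimenter greedily keeps whatever modulation of the stimulus yields more drool, and so hill-climbs straight into the dog's preferences.

```python
import random

def dog_response(timbre):
    """The dog's fixed disposition: it drools most for a wet, dull
    sound (timbre near 0.0) and least for a sharp ring (near 1.0)."""
    return max(0.0, 1.0 - timbre + random.gauss(0, 0.05))

# The experimenter starts with a sharp ring and, trial by trial,
# keeps whichever small modulation of the sound beats his best
# drooling result so far.
stimulus, best = 1.0, 0.0   # 1.0 = sharp ring, 0.0 = wet slap

for trial in range(30):
    candidate = min(1.0, max(0.0, stimulus + random.uniform(-0.15, 0.15)))
    drool = dog_response(candidate)
    if drool > best:        # a "better (drooling) result": keep the change
        stimulus, best = candidate, drool
    print(f"trial {trial:2d}: stimulus = {stimulus:.2f}, best drool = {best:.2f}")

# Note that dog_response never changes. Only the experimenter's
# behavior does: the stimulus drifts from the ring toward the slap,
# which is to say the trainer has been conditioned by the subject.
```

The point of the caricature is that the selection pressure runs backward through the apparatus: the dog's fixed response curve does all the training.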

I should add that inverted Pavlovianism is not a problem in the "soft sciences" only, since part of the scientific method is to remove methodological error, and that process, though it is too little recognized, walks the very fine line of inverted Pavlovianism. If one set-up does not produce a "detectable result," the set-up is altered, and so on, until the experimenter achieves a satisfactory detectable result. But then what has he actually shown? Something true and lasting about nature, or just something true about nature under such and such peculiar parameters as humans perceive them? It is not for science to say that the initial set-up did not reveal something important about nature, since the human scientific method limits itself strictly to the humanly detectable. As a consequence, however, science risks collapsing into gross anthropocentrism insofar as "our science" is just the grab bag of "detectable results" which satisfy our criteria for recognizability. Unless we alter our criteria for recognizability and satisfactory detection, we may simply be conditioning ourselves to a small class of experiments that happen to trigger in us a significant effect. This is where metaphysics comes in. We cannot detect or measure the precise impact of sound metaphysics on the world, but the impact is really there.

In any case, to return to the Hanson saga, if you watch a clip of Joey Chaos, the inverted Pavlovian problem is vivid. Over about two minutes and thirty seconds, the only thing the (human) programmer says (five times, I think, in nearly the same tone) is, "Tell me more." This is a stark instance of inverted Pavlovianism. My hunch is that "Tell me more" seemed like the most effective prompt for generating phrases in Joey Chaos, so that is the phrase which got reinforced both in Joey's speech repertoire and in his programmers. In order to meet what they think are satisfying goals, the programmers are hobbled into using a laboratory pidgin, not authentic speech. Inverted Pavlovianism invariably makes the subjects (dogs, robots, dynamic systems, etc.) look smarter than they really are by making the researchers act dumber than they really are.

Meanwhile Joey Chaos goes on and on about how his existence is a good thing for challenging humans' understanding of their own nature. The more Joey talks, however, the more transparent his doctrinaire, ultra-hip diegetic-meta-self-awareness becomes. By diegetic-meta-self-awareness, or DMSA, I mean that Joey has been scripted to say that he is aware of being aware that he is not self-aware in the way humans are, yet his whole script reads like a human in existential turmoil over the awareness that he is not human enough. (Yes, that explanation should be a little hard to follow, since it is a kind of false-signal scam.) In the diegesis of his pseudo-consciousness, Joey is not only self-aware in a humanesque way, as a speaking agent, but also meta-self-aware of his non-humanness; he admits he is just mouthing a self-introduction of his own unachieved self-awareness. Joey sounds a little too savvy, a little too clever, and thus comes off more like a snake-oil salesman than "a real person," concluding that he is "a living identity crisis" for all mankind to suffer. Fortunately, his very last words are a plug for Hanson Robotics. What better way to cure the anxiety Joey's chaos instills in us than by buying a Hanson Conversation Robot to talk it all out!

His name is Joey Chaos, so his makers are blatantly trying to shock anthropic sensibilities, an aim only dubiously in accord with pure science, which, allegedly, should not cater to lowly human bias. Notice, for instance, how Joey waxes (around 1:50") about the "perceptual mismatch" which a humanly faced 'object' like himself creates in the human brain, saying it's so "interesting, both artistically and scientifically." Why point out that his brain-scrambling existence is of artistic interest, though, if not to hint that the true intent of Hanson Robotics is more an act of cognitive graffiti than pure science? The only reason Joey has a chaotic message for the world is because Hanson Robotics has a message for the world: you're not so human, after all, but our humanoids can help ease the shock.

The overblown intellectualism of Joey Chaos's self-description––which 'cleverly' clashes with his bad-boy, rock-star appearance––gives way to his finger-wagging about how "People are so scared: scared of pirates, rock and roll, robots. For God's sake, just about anything you can think of, usually old people will freak out about it." Joey's reference to pirates grabbed me. Why pirates in a world of cars and airplanes? Why not terrorists or bird flu? I'm hardly a "pop culchure" guru, but I know that these days pirates are a comedic leitmotif, as charming and vulgar (in the classical sense) as heavy metal headbanger "devil fingers," Cheez Whiz, or an 8-track. His offhand reference to pirates in the very act of attempting to make a 'serious' point––humans, like, fear losing their unique status, or something––betrays the vandalistic mentality behind Hanson's program. The banter becomes tellingly sophomoric when the programmer randomly barks at Joey, "Shut up, you suck," which, predictably enough, triggers a cleverly "appropriate" response from that chaotic ol' Joey. Despite his impressive technical pedigree, everything about Joey Chaos––and all the Hanson robots, actually––seems more like an elaborate college prank than a serious learning-software interface. His frequent use of "we" reveals who's actually doing the talking: his young programmers, whom we know are against scared "old people." Joey is just an expensive piece of semiotic bait to catch our eye, a computerized ventriloquist's soapbox for postmodern blabbing about feigned identity anxiety.

Let me now address a few genuine philosophical problems in Hanson's project. First, it is incoherent for Jules to admit he both has feelings and does not have feelings as "complex" as those had by humans. To have a feeling is to have a feeling; to have an insufficiently complex 'part' of a feeling is to have no feeling at all. When was the last time you felt only "half-angry" or "one-third happy"? Certainly, our feelings can be complexified by overlapping with other "competing" feelings, but each competing feeling is still a feeling in and of itself. Otherwise it is not a feeling and thus cannot contribute to the total emotional complex in question. Perhaps this is what Jules means, that his competing mesh of emotions is not yet as complicated as that of humans, but this only leads to my second worry.

If Jules admits he does not need "full" human consciousness to "live a life" among humans, then what are his programmers still working on? If Jules realizes he already has "enough" self-awareness and emotional complexity to "help others and solve the world's problems," then he is baldly (!) admitting he has achieved everything strong AI has ever wanted. This is an internal paradox in the entire Hanson mission: if their robots already have enough consciousness to count as humans, then we should all recognize that. Yet even the robots themselves recognize––diegetically, at least––that they are far from genuine human consciousness, in which case Hanson is committing gross false advertising. After all, their robots are marketed––marketed, mind you, not assessed by a neutral research team––as interactive conversational devices for human consumption (recall how Jules touted his Pixaresque "entertainment"), yet, if they essentially lack what makes human conversation a true "meeting of minds," then Hanson is selling a jittery, frubberized bill of goods.

Let me add that I don't know what "full" consciousness is supposed to mean. Human consciousness depends on real interaction with the world, so, presumably, having "full consciousness" would require complete immersion in the world, which is either incoherent or just trivial. Complete immersion in the world is incoherent, in physical terms, because it would require an infinitely large discrete body, every part of which is in contact with every part of the world at once. In supraphysical terms, however, such a being would be like God qua a perfect immaterial spirit. On the other hand, complete immersion in the world is trivial for the purposes of AI since we have full consciousness just by being human, a redundancy which suggests that the fastest way to achieve strong AI is not by simulating or programming facets of consciousness, but by directly cloning conscious organisms, of which a little more later.

The upshot is that it is a fundamental anthropological confusion to think that what makes us "fully human" is being "fully conscious." No one is ever fully conscious, well, no one but God. We blink, we sleep, we lose hearing in one ear or sight in one eye, our foot falls asleep, we suffer a concussion or drink too much and pass out, and so on. Our human nature does not flow from our consciousness, but vice versa. It is because we are hylomorphic entities––bodies living in the form of the soul, which is energized and concretized by the power of the intellect as the anchor of the imago Dei in us––that we can be conscious of the world qua complex of other hylomorphic entities. There is thus, as James Chastek argues, no more an "interaction" problem for "the body and soul" in hylomorphism than there is a problem of interaction between the rubber and the roundness of a basketball, or the water and the crystalline frigidity of an ice cube. We do not become rational animals by being conscious beings. Rather, our being rational in a uniquely intellectual way enables us to become conscious in a uniquely human way, a consciousness which is subject to the vagaries of bodily growth, corruption, and ongoing experience. Human consciousness flows from intellectual rationality, not vice versa. And because the latter cannot, in principle, be crafted from a wholly material base, the former cannot, in principle, be generated in an AI lab. But I am getting ahead of myself and beyond the focus of this post.

I have a third worry with the Hanson view of things. I presume most people at Hanson are of a secular, generally materialist frame of mind, and, as such, view humans as merely complex evolved simians, and nothing more. There may be committed religious staff in the Hanson firm, but that is not my concern. It is no secret that most innovators in the strong AI movement are secular materialists, which is, of course, why they are committed to literally building "consciousness" and "mind" from the ground up, out of the plainest wires and polymers. Steve Grand, for instance, is a flagrantly atheistic AI researcher, as is the computer scientist Marvin Minsky, who has so famously defined humans as "meat computers," and so are Paul and Patricia Churchland, Daniel Dennett, and V. S. Ramachandran, to name only a few in the fields of neuroscience, ethology, and cognitive science. The irony is quite plain: the materialist view of humans denies we have free will, denies we have any immaterial dimensions ("a soul"), generally denies that consciousness is anything more than the sensation of neural complexity, and so on. Meanwhile, however, AI researchers are on a breakneck hunt (for the Snark?) to produce in robots things they don't even believe exist in reality, namely, free will, spirit, consciousness, and the like.

This is yet another profound inconsistency besetting materialist scientism: the denial that human nature is anything unique coupled with the insane desire to produce all the uniqueness of humanity in an artifact. I suggest the scientismatics in the AI movement are hunting for the Snark of Lewis Carroll's poem because, as the Baker's uncle had warned him, though catching Snarks is all well and good, you must "'…beware of the day, If your Snark be a Boojum! For then You will softly and suddenly vanish away, And never be met with again!'" In other words, the trophy of strong AI will be entirely hollow when the victors realize they have simulated what was just a simulation all along. If one day we do catch the Snark of human consciousness, we will see we are just Boojums and never be met with again. The very signs of success in AI––spontaneity, creativity, wit, freedom, humor, unpredictability, hope, a lasting hunger for the infinite, and so on––are things materialists deny exist in humans. The success of strong AI would, then, be the downfall of materialism, for we would finally have undeniable proof of freedom of the will and immaterial spirituality. Lacking those traits, however, our AI robots will remain unsatisfying, unconvincing, and unintelligent. I don't know, and don't particularly care, which horn of the dilemma a materialist SAIer prefers, as long as he sticks to getting stuck by one.

Having said all this, I'm very open to progress in AI, despite my Luddite reservations and Thomistic doubts about the program's complete success as its most ardent devotees would envision it: Just Like Us, Only Better. If you've followed FCA for a couple years, you'll know I've written rather extensively on these topics, so I will not retread that material in this post. The following are some search results and links in FCA for relevant posts:

robots,
Domo arigato,
Ross gets some play

My basic position on the matter is that AI research, which is making impressive strides, will bring us to the very edge of semiotics––and then even more sharply show the distinctions that philosophy can recognize in authentic anthropology. Humans are rational animals, a formulation that must give its due to both terms. Defending human rationality can never trivialize or ignore human animality, as I think Descartes did (though not at all as badly as he is usually accused of doing). Indeed, the greater portion of human nature is animal: the intellect is, as C. S. Lewis put it, a very slender thread linking us to the eternal, not our entire nature. As such, we can and should expect startling advances in AI, as long as we realize the advances will all come along the line of semiotic animal cognition. In a word, I believe AI programs are not terribly far from doing everything advanced animals can, but, for principled philosophical reasons, I deny the advances will cross the real metaphysical distinction between (semiotic) rationality and (abstract) intellection per se (to consider my reasons, cf. some of my writings on James Ross, Mortimer Adler, Derek Melser, Adrian Reimers, Walker Percy, et al.).

Now, having said all that, I also have an intuition that the head-on strong AI firms are bound to fail, since they are more or less trying to rape the problem rather than seduce it. Rodney Brooks has the right perspective. The programs that lunge for Asimov-style AI consciousness are just so hokey, as Hanson's demo videos show. Brooks's bottom-up AL (artificial life) approach is bound to generate more appealing synergies precisely because what AI is after is just visible, eye-catching results anyway, which, again, the Hanson demos and other demos like them show with animatronically self-congratulating nerdiness. Imagine if a machine really did become conscious and "intellectual." Who's to say it would say anything or show any behavior? Perhaps––and probably––it would just burrow forever into its own consciousness as no organic thinker (human) can do. Real AI success may be the last thing the researchers want, since Buddha-like, perfectly iterative robotic self-consciousness would not sell. This is the vision Philip K. Dick paints of successful AI in Vulcan's Hammer: a materially peerless entity seething with annoyance at the ineptitude of its human subjects, whose apparent desire is to be left the fuck alone so it can contemplate perfect strategies for all time in solitude. Insofar as contemplation is, according to Aristotle and St. Thomas, along with the broad mystical traditions of mankind, the highest aim in life, it seems ultimate success in AI would end up bringing us back to ourselves in a way totally opposed to materialism: to an eternal life of spiritual consciousness and freedom in the truth.

For now, however, the rough-and-tumble, Darwinian approach to synaptic complexification is the way to go for the most marketable results. We are not, contrary to Descartes, perfect minds, thinking beings. Our bodily flaws in a world of predictable unpredictability are intrinsic to our creative synaptic advances. Because we don't get things right in one go, we go at things in a number of creative ways. Hunger, fatigue, injuries, etc., all push us into new action drives and mold new paths in our semiotic cognition. All of this AI robots will achieve too, if their programmers are daring enough to make them restless heaps of sensors in a completely unguided environment. The success of strong AI depends on simulating embodied cognition, which in turn means progressively just reduplicating an actual human being and calling it a robot. Don't be surprised, therefore, to see stem cell research surreptitiously (or perhaps flagrantly) channeled into AI programs for "hatching" androids in flesh cultures that grow up around CPUs, the frames of which will, of course, be tailored to market tastes à la Brave New World.

But that's enough for one post.

4 comments:

Crude said...

I think your comment about the "scam" aspect of Joey Chaos is not only dead on, but may not go far enough. Some things I noticed.

* The viewer is reminded, more than once, just how scary and weird and challenging and, etc, Joey Chaos is. It's almost as if it's being used as a cue, practically screaming at the viewer "This is the part where you feel awed!" Or, put another way, it's as if the good people at Hanson Robotics are madly shaking a bell, hoping and praying onlookers start salivating.

But, what if Joey isn't scary or weird at all? What if there's no challenge there? What if this is very similar to someone holding up a calculator and saying 'Behold! A machine, ADDING! This should shake you to your very core!'? Again, I get the feeling this is a question they hope the viewer won't ask.

* The same goes for the word use. 'I feel', 'I'm going to miss you', 'I do things like a human...' Even the word 'I' alone. The moment someone starts to ask about these words - "What does Joey mean? Hell, is there a Joey? Is there any 'one' there who can really intend or mean ANYthing?" - the show comes apart. We end up left with all the most interesting questions unanswered, and the implied accomplishments of the company abandoned. There are still accomplishments left, but not the juicy ones originally promised.

* I'd actually have to take issue with one aspect of your 'materialist view of humans' talk. I don't think it's right to say materialists would believe consciousness is just 'the sensation of neural complexity'. Sensations are a mental category that require consciousness to begin with. I think the truth is "materialists" don't know what the hell to do with sensation, short of denying it (or at least pretending to). An interesting article on that front is this one.

Ultimately though, there's another part I have to agree with you on: The double-edged sword for materialists. In fact, I generally see science - for all the talk of scientism and praise of it by materialists - as being pretty unfriendly to popular naturalism in general. But, to paraphrase you, this is enough for one comment.

Anonymous said...

Off topic, but I was reading your posts on divine simplicity over at Dr. Feser's blog. Just wanted to say thanks and keep up the good work. You provided some clarity for me which was sorely needed.

Codgitator (Cadgertator) said...

Crude:

There you go again. Running a blog via the comboxes of others! haha

I grant your quibble about consciousness for materialism. Indeed, even as I typed the words, I had to pause to formulate a not too unsatisfactory expression. I chose "sensation of neural complexity," though it seems to beg the question about there being consciousness (sentience) in the first place, because by "sensation" I mean simply a registering of changed stimuli in a dynamic system. In that sense, an x-ray plate has a very low-level (crude?) "sensation" of radiated wave-particles. As does a pond struck by a pebble: the ripples are a "sensation" of having been impinged upon. A common tactic I see from materialists is just to say that human consciousness is in principle the same, but since it is rooted in such a vastly more complex system, it seems more robust, more conscious. It's a poor tactic, I admit, but I just wanted to express a generous construal of at least an attempt at a materialist account of consciousness.

As for panpsychism, I seem to recall it appeals to you and I think I once wrote about it in an analogical framework in a combox at Dr. Feser's. Honestly, it doesn't bother me too much, from a Thomistic perspective, much less from a Maximian perspective. Insofar as the Logos energizes all logoi of all entities, they all 'consciously' hum with affection for that primal Energy of the Creator. I take Whitehead to be the Kant of the C20: he can't be evaded, has strikingly altered the terrain, and will eventually be overcome in a synthetic way by genuine metaphysics. He gets less mainstream press than he ought, as a Kant-size figure, perhaps because we intuitively just agree with him and look for something more challenging, less obvious. "Duh, everything is a process. Everything changes. You know, evolution, man." Thinking we already "get," and agree with, him, we pass him by, without respecting the seeds he's planted for metaphysical progress.

Best,

Crude said...

Codgitator,

Ha, don't worry! I'll be starting my own blog soon. Precisely to avoid the whole 'blogging via comments' thing.

re: panpsychism, I guess it appeals to me in some respects. It's not that I endorse it, really, so much as I think it's a sturdier view that I suspect will be/is being quietly rehabilitated. Really, shades of it even seem present in your charitable construal of materialists on consciousness. What's going on in ponds and x-ray plates is what's going on in the mind, but the mind is just more complex and somehow 'deeper'? Sounds like panpsychism to me.

That said, I don't see it as posing a threat to Thomism, really - and I recall you once said that you don't find it all that objectionable from that point of view anyway. One reason I tend to bring it up is because my gut tells me it (and neutral monism) is poised to become the 'new materialism', so to speak. You already saw something similar to this when arguing over Ross's paper, where physicalism sure started to seem like (I think you called it) shmysicalism before long. Hell, if you read the SEP entry on physicalism, you'll see talk of how physicalism isn't in conflict with panpsychism, and even the suggestion that it's not necessarily in conflict with the idea that what we think is physical is actually in an 'essential or ultimate respect' (!) mental - even though that 'seems strange'.

Putting it another way, one thing that fascinated me about The Last Superstition was the bold move Ed made re: Aristotelianism, where he pointed out that a number of naturalists were saying things that (if they were serious about them) actually committed them to a broadly Aristotelian metaphysic, yet if they weren't serious (just using a figure of speech, etc.) their explanations weren't really explaining anything, and they were back to eliminative materialism. Panpsychism intrigues me because Strawson (and others, such as the author I linked to) make a similar move: claiming that, if we're serious about consciousness existing, yet serious about 'physicalism/monism', then we're stuck with panpsychism.

The comedian in me sees no more amusing way to critique materialism than to point out how few of its adherents really adhere to it.