Tuesday, August 21, 2007

“Mother, AI?” Yes or no, mother decides

The problem with strong AI is not just the limits of computational completeness for programming a total cognitive “innerverse”, nor just the rank ignorance of cognitive theorists about what real intelligence is, a deficit that makes it quite hard to know what to design AI toward (not really understanding a real strawberry pie makes it significantly harder to make an artificial one), but also the ultimate pragmatic conundrum of what to do with AI once we achieve it. What I mean is, once we achieve a robot that seems intellectually autonomous and spontaneous, the problem will be that its cognitive powers are either such a good simulation of our own that its benefits will be redundant, or so far beyond ours as to be practically useless.

It was only last night [21 Aug 07, after posting this] that I was reading M. Adler's essay "The Challenge to the Computer" and noted these pertinent words:

[I]t is necessary to distinguish between computers that are programmed to perform in certain ways and what I am going to call "robots" -- machines built for the purpose of simulating human intelligence in its higher reaches of learning, problem-solving, discovering, deciding, etc. ... [The computer's] chief superiority to man lies in its speed and its relative freedom from error. Its chief utility is in serving man by extending his power.... Robots in principle are different from programmed computers. Instead of operating on the basis of predetermined pathways laid down by programming, they operate through flexible and random connections ... [which] Turing calls "infant programming"....

Immediately upon reading these words, I wrote thus:

Presumably, the simulation of human intelligence in robots would be just what detracts from computers' superior processing speed, thus negating the pragmatic benefits of robots as "smart computers." Once they begin to simulate human intelligence, they will ipso facto lose their Vulcan invulnerability. I suspect there is an inverse ratio between robotic "intelligence" and robotic efficiency. Ultimately, the closeness of robot AI to human intelligence will thwart all its current non-human benefits. As long as computers retain their signal computational power, they will ipso facto not attain to simulating human intelligence. [These comments should be read as having occurred to me after posting this entry.]

Now, imagine a real-time war analyst robot at work: it gathers tremendous amounts of data and formulates a decision. But, ultimately, who will decide how, or whether, or when, to effect that decision? All those lumbering humans waiting for the AI-General to make the call. Ultimately, AI runs the risk of becoming a cipher, one more, albeit extremely well-informed, opinion among others. Once, for example, the AI-General has a functional “intuition algorithm,” as artificial intelligence would surely demand, then its intuition will simply be wagered against real men's intuitions and hunches and analyses. As long as the AI system keeps its analyses at levels suitably non-complex for human brains to compute, it will render itself only as intelligent as humans can afford it to be. Unless humans blindly defer to the AI-General in all cases, it has every chance of looking just dumb when a flesh-and-blood general decides to “follow his gut,” “pull rank,” override its plan A and go ahead with plan B, and all turns out well. Once the AI system renders its judgment, humans will reassess the case and realize the solution is too simple or too direct or too naive, etc., and go ahead with whatever they themselves decide in the end.

On the other hand, assuming AI becomes vastly smarter than us, it will be as useful as a blind Orphic sage. Or, worse, as threatening as a HAL gone haywire, which is to say, gone utterly in accord with its own intelligent artifices. Assuming AI does “pull a HAL,” we will have no rational choice but to curb its powers, either by limiting its executive clearance or filtering its data system so we control the flow of information. (This would be an “informatics blowback”: when the means of information control themselves require informatic bridling, lest their autonomous efficacy backfire on us.) What the AI system says in a certain situation may be the most important analysis in history, but, tragicomically, unless it puts the analysis in terms humans can grasp and accommodate to their own methods of rationality, the AI-analysis will seem like gibberish. Without condescending to human intelligence (“dumbing things down”), AI would be like a genius with mathematical and theoretical Tourette's syndrome. Unless humans can comfortably integrate the much more advanced AI-directives into their own rational goals, AI will just be an inscrutable tease. But, of course, if the AI system does tailor its intelligence back down to human levels, it will eo ipso become, as above, one more analyst to be debriefed and mulled over on the palate of great men at the helm of history. Why does your dog eventually, even quickly, tire of listening to you talk with him? Because while your intelligent behavior may strike a synapse or two in his doggy brain, eventually it will all just be gibberish. Very smart gibberish for us, to be sure, but for a less intelligent creature, ultimately just a confusing tedium. No one likes listening to a lecture that goes totally over her head.
If AI starts to go over our heads, we will tune it out. Super-intelligence seems to mere intelligence like folly, and any intelligence worth its salt will ignore apparent folly and bank instead on mere intelligence, which is to say, once more, human intelligence, albeit with whatever AI input it can reasonably use. No matter what high-flown intelligence algorithms an AI system may cook up, with the help of its natural selection and synthetic neural-formation algorithms, unless these levels of intelligence mesh with, and conform to, human intelligence, they will be just that for men in a crunch: high-flown frenzies of theory.

As for the argument that over time, as people see what accurate, reliable benefits “the AI difference” makes, people will learn to “trust” the machines without squabbling over their all-too-human predilections, this only begs the question: By what criteria would people consider the AI difference to be valuable? Would they ask an AI system to assess the value and safety of … the AI system itself? No, for, in all such cases, humans will do what we always do: exploit technology to the last drop, gaining thereby whatever edge we can, but then still do what we ourselves, by the light of our own intelligence, deem to be best. If AI never really surpasses human intelligence, it will be a cipher among ciphers. If, on the other hand, it does surpass human intelligence, it will be one more enigma among the very enigmas it was designed to solve.

All that I've said is not meant as a critique of strong AI as a theoretical possibility, though I am heavily inclined to skepticism, given the considerations of J. von Neumann, E. Gilson, R. Penrose, H. Dreyfus, K. Gödel, S. Jaki, E. Wilson, M. Adler, et al. Mine is simply a passing meditation on the picayune peccadilloes of AI as a fundamentally and finally human utilitarian production. Captain AI, whenever he may come into being from the womb of mankind, can and will devise and advise, but still must ask, "Mother, may I? Mother Man, may AI?" Yes or no, mother decides.
