I seem to be pretty well ahead of the literacy curve in the USA, according to which most people read five books last year, as compared to ten in 1999 and six in 1990.
Happily, this means I can put aside all my books for a month and fall in love with a chatterbot! (Actually, I always have trouble unzipping files onto my Mac, so I can't install one.) The thing is, if I can't get into any arguments with these programs, then no matter how "Turing-safe" they are, I remain unconvinced of their status as "intelligent". A crucial aspect of using language intelligibly and correctly is being able to detect, and challenge, misuses of it. It's one thing for humans to be tricked by chatterbots' use of language; it's something else entirely, and something extremely important, for the chatterbots themselves to challenge humans' use of language. Grasping truth, and thus possessing intelligence, is not a one-sided affair. True intelligence, as a grasp of a universal (or a transcendental) in an encounter with a concrete object, entails sensing the limits of that grasp and, in turn, challenging a faulty grasp of it (i.e., one based on inappropriate language or behavior). I just don't think it's enough to say robots have "passed" the Turing test if they can pass for intelligents (and intelladies, conjoined nouns I may have just coined), since a major feature of being intelligent is spotting unintelligence. Using language like a human, as the Turing test more or less stipulates, requires detecting sub- or nonhuman misuses of it, since the entire premise of the Turing test is that the humans involved are primed for, and thus capable of, "sniffing out" non-human simulacra.
Something about this line of thought seems reminiscent of St. Augustine's argument from error against skepticism, but I can't now elucidate the connections, if they even properly exist (cf. also "From Relativism and Skepticism to Truth and Certainty" by Professor Josef Seifert). Contra the Skeptics, whom in this context I would cast as the proponents of Strong Artificial Intelligence (aka SAIers) in pursuit of knowledge-as-indubitability (i.e., as flawless programming), a key element of true knowledge is the ability to differentiate error from truth, an ability which of itself points to truth, or, in the AI debate, to a form of rational cognizance far, far beyond the grasp of mere robots-as-responders.
I honestly wonder, would a chatterbot ever refute or argue against my train of thought and claims? This case, c/o Mark Humphrys, does seem promising in the way of a Yes, but I haven't perused the whole convo. Well, install the little bots yourself and find out.
What would be interesting is to see two chatterbots and one human in dialogue, with the bots' goal being to determine which of their interlocutors is the human. Since the bots' own programmatic language is designed to pass as intelligent conversation, would they be able to see anything odd or significant in another bot's speech? I suspect not; moreover, I suspect they would judge the human's meandering, idiosyncratic speech faulty and unintelligent by comparison.
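The reversed setup can be sketched as a toy simulation. Everything here is hypothetical and invented for illustration (the template pool, the "templateness" heuristic, the canned human reply are all my assumptions, not anyone's actual chatterbot): two bots draw on the same polished response patterns, and a bot-judge flags as "the odd one out" whichever respondent deviates most from those patterns.

```python
import random

# Hypothetical canned responses: both bots draw from the same
# polished template pool, while the "human" answers idiosyncratically.
BOT_TEMPLATES = [
    "That is an interesting question. Could you elaborate?",
    "I see. Tell me more about that.",
]

def bot_reply(question: str) -> str:
    # A bot picks a smooth, generic template at random.
    return random.choice(BOT_TEMPLATES)

def human_reply(question: str) -> str:
    # The human meanders: hedges, digressions, slang.
    return "hmm, dunno... reminds me of something my uncle said, lol"

def judge(replies: dict[str, str]) -> str:
    # A bot-judge's crude heuristic: score how "template-like" each
    # reply is, then flag the LEAST template-like respondent as the
    # odd one out. Note it thereby singles out the human for sounding
    # "faulty", which is the point of the thought experiment.
    def templateness(text: str) -> int:
        return sum(text.count(w) for w in ("question", "elaborate", "Tell me"))
    return min(replies, key=lambda name: templateness(replies[name]))

question = "What did you read last year?"
replies = {
    "bot_b": bot_reply(question),
    "human": human_reply(question),
}
suspect = judge(replies)
print(f"bot_a flags '{suspect}' as the odd one out")
```

In this toy run the judge always flags the human, since the human's reply shares none of the bots' stock phrasing. That mirrors the suspicion above: a bot tuned to recognize "good" (i.e., bot-like) conversation would read human idiosyncrasy as the defect.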
Or maybe I really am a bloggerbot in disguise! You've been had!