If you've taken a peek recently at my "mental diet" (what I'm reading, etc.), you'll see I'm reading Alexei Nesteruk's Light from the East (LFE), one book in Fortress Press' Theology & the Sciences series, specifically about Eastern Orthodox contributions to the "science-religion" debate/dialogue. It had been in my sights for 2-3 years before I finally just decided to spring for it and dive in. Not a regret, now that I'm in. It's good. But oh, it's deep! Oh, it's a slog! The font is small, the pages are densely printed and the chapters are long. There are few diagrams so it's just texttexttext, ideaideaidea. And those words are not just the latest Cosmo quiz. The twenty or so pages in which Nesteruk explains the self-transcending limitations of discursive (i.e., dialectical) reason and parses the tension/relation (i.e., antinomy) between apophatic and cataphatic theologia (nous and dianoia, respectively) were, I tell you, some of the roughest going I've had in a long time. Certainly, the words are intelligible and the prose is lucid, so I'm not saying that if you picked up the book and read those "head-bending" sections, you'd freeze and desiccate as if in Medusa's garden. I am saying that putting a focused effort into really grasping the claims, line by line, step by step, while also trying to synthesize them with the previous pages and subliminally seeking connections in other fields of thought, makes it true mental labor. And that is a great feeling.
Reading LFE is a probative case of my claim that, when people tell me "you're so smart," I am in fact just very stolid about getting through my ignorance. "I'm not smart, I just work hard." Someone much smarter than me could probably siphon LFE's riches as daintily and expertly as a hummingbird. But for me, having my rockhead bent into something a little sharper is a great and humbling feeling.
There's no question LFE is a "re-read." Reading it once just unrolls the map; reading it again will put things in 3-D. I realize Nesteruk's scattered mentions of creation as "intelligible," "contingent," "the event of Christ," etc. make LFE a very good companion-read for Fr. Keefe's Covenantal Theology. In both books, creation is intelligible as gift, as covenant, hypostasized in Christ, not as an antecedently (whether intrinsically or extrinsically) determined cosmos.
I got through chapter 4 last night, in the wee hours, after getting through two chapters of John Searle's Mind, which, while not as arduous as LFE, offered its own titillatingly humbling experience. On, say, page 104 Searle mentioned the distinction between intentionality-with-a-t and intensionality-with-an-s, which I mistakenly took to refer to two different kinds of objects (as if s stood for one thing and t for another). Then, maybe fifteen pages later, Searle jumps right into explaining the distinction, and I was completely lost. Why doesn't he explain what s and t are first? I kid you not, it was not till I finished that section, scratching my head, making the best of it the whole way, that I realized he just meant to show a difference between intenTionality and intenSionality! I had learned the distinction in college and grasped it fairly clearly in retrospect once I realized the "trick," but I really had to laugh at myself. I then dutifully re-read the entire section with a 40- instead of a 20-watt bulb over my head. (For the record, though I am still not very confident of my grasp here, intenTionality means a representation of a state of affairs and the conditions needed to make it true [as in volition] or account for its being true [as in belief]. It is how we talk or think about states of affairs. By contrast, intenSionality is how we talk about how we talk about our intentions [as object-indexed representations]. It is the things we need to believe or definitions we need to state in order to make one intenTion "substitutable" for another. IntenSionality describes the representational conditions [i.e., definitions] we'd have to share in order to share the same intenTion.)
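(A programming aside of my own, not Searle's: the substitution point has a rough analogue in code. Extensional contexts are "referentially transparent," so co-referring names swap freely; intenSional contexts, like belief reports, are opaque. Here is a minimal sketch in Python, with the Superman/Lois example and all its names invented purely for illustration:)

```python
# A loose analogy, not Searle's own: substituting co-referring names
# preserves truth in extensional contexts but can fail in intenSional
# ones, such as belief reports. (All names here are invented.)

superman = clark_kent = "Kal-El"  # two names, one and the same referent

def flies(person):
    # Extensional predicate: depends only on the referent itself.
    return person == "Kal-El"

# Lois's beliefs are indexed by the name (the mode of presentation),
# not by the referent -- which is just what makes the context opaque.
lois_believes_flies = {"Superman": True, "Clark Kent": False}

# Extensional context: swapping co-referring terms changes nothing.
assert flies(superman) == flies(clark_kent)

# IntenSional context: substitution fails, even though both names
# pick out the very same individual.
assert lois_believes_flies["Superman"] != lois_believes_flies["Clark Kent"]
print("Extensional substitution holds; intenSional substitution fails.")
```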
My with-a-t vs. with-an-s error reminds me of the time in 8th grade when I kept asking older students and teachers, "What is x?" In my pre-algebra homework, I sometimes had to use x, but I felt unable to do so unless I knew what x was!
"It's a variable," they said.
"Yeeeaah," I answered, slyly, trying to call their bluff.
"So it can be anything," my slippery interlocutor would proceed.
"Well, I know that," I huffed, "so in this problem, what is it?"
Like I said: I'm not smart, I just don't give up.
On a completely different plane, I was happy to get my hands on Book 3 (Twilight Watch) of Sergei Lukyanenko's Watch series and indeed burned through its 400 pages in a day or two. Now I've got to wait for the next (and last) two books to come out in English. Hrrumph. One more reason (along with those pipsqueaks Dostoyevsky, Gogol, Tolstoy, Bulgakov, Nabokov, Lossky, Turgenev, Solzhenitsyn, et al.) I am determined to learn Russian, God willing, before I die.
»ἕως θανάτου ἀγώνισαι περὶ τῆς ἀληθείας, καὶ Κύριος ὁ θεὸς πολεμήσει ὑπὲρ σοῦ.« ["Strive for the truth unto death, and the Lord God will fight for you."] • »Pro iustitia agonizare pro anima tua, et usque ad mortem certa pro iustitia: et Deus expugnabit pro te inimicos tuos.« ["Strive for justice for thy soul, and even unto death fight for justice: and God will overthrow thy enemies for thee."] (Sir. 4:28/33)
Tuesday, August 28, 2007
This is heartening: a must-read
I seem to be pretty well ahead of the literacy curve in the USA, according to which most people read five books last year, as compared to ten in 1999 and six in 1990.
Happily, this means I can put aside all my books for a month and fall in love with a chatterbot! (Actually, I always have trouble unZIPping files onto my Mac, so I can't install one.) The thing is, if I can't get into any arguments with these programs, I don't care how "Turing-safe" they are: I'm unconvinced of their status as "intelligent". A crucial aspect of using language intelligibly and correctly is being able to detect, and challenge, misuses of it. It's one thing for humans to be tricked by chatterbots' use of language; it's something else entirely, and something extremely important, for the chatterbots themselves to challenge humans' use of language. Grasping truth, and thus possessing intelligence, is not a one-sided affair. True intelligence, as a grasp of a universal (or, a transcendental) in an encounter with a concrete object, entails sensing the limits of that grasp and, in turn, challenging a faulty grasp of it (i.e., based on someone's use of inappropriate language or behavior). I just don't think it is enough to say robots have "passed" the Turing test if they can pass for intelligents (and intelladies, conjoined nouns I may have just coined), since a major feature of being intelligent means spotting unintelligence. Using language like a human, as the Turing test more or less stipulates, requires detecting sub- or nonhuman misuses of it, since the entire premise of the Turing test is that humans involved in it are primed for, and thus capable of, "sniffing out" non-human simulacra.
Something about this line of thought seems reminiscent of St. Augustine's argument from error against skepticism, but I can't now elucidate the connections, if they even properly exist (cf. also "From Relativism and Skepticism to Truth and Certainty" by Professor Josef Seifert). Contra the Sceptics, whom in this context I would cast as the proponents of Strong Artificial Intelligence (aka SAIers) in pursuit of knowledge-as-indubitability (i.e., as flawless programming), a key element of true knowledge is the ability to differentiate error from truth, an analysis which of itself points to truth, or in the AI debate, to a form of rational cognizance far, far beyond the grasp of mere robots-as-responders.
I honestly wonder, would a chatterbot ever refute or argue against my train of thought and claims? This case, c/o Mark Humphrys, does seem promising in the way of a Yes, but I haven't perused the whole convo. Well, install the little bots yourself and find out.
What would be interesting is to see two chatterbots and one human in dialogue, with the bots' goal being to determine which person is the human. Since their own programmatic language is designed to be the best it can be as intelligent conversation, would the bots be able to see anything odd or significant in another bot's speech? I suspect not; moreover, I suspect they would consider the human's meandering, idiosyncratic speech faulty and unintelligent in comparison.
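(For the tinkerers: here is a minimal toy mock-up of that three-way game in Python. Every reply and the crude "regularity" metric are invented stand-ins, not output from any real chatterbot engine; the point is only to dramatize my suspicion that a bot grading speech by its own formulaic standards would flag the human's meandering talk as the faulty specimen.)

```python
# Toy mock-up of the two-bots-one-human game sketched above. All the
# replies below are invented stand-ins, not real chatterbot output.

BOT_A = [
    "That is an interesting point. Tell me more.",
    "I see. Why do you say that?",
    "That is an interesting point. Tell me more.",
]
BOT_C = [
    "I understand. Please continue.",
    "How does that make you feel?",
    "I understand. Please continue.",
]
HUMAN_B = [
    "Huh -- wait, that's not even what I asked, is it?",
    "Ha! You sound like my old algebra teacher.",
    "Anyway, where was I? Oh right, the geese.",
]

def regularity(replies):
    """Fraction of repeated lines: a crude proxy for 'well-formed' speech."""
    return 1 - len(set(replies)) / len(replies)

speakers = {"A": BOT_A, "B": HUMAN_B, "C": BOT_C}

# A judge-bot that takes its own formulaic style as the standard of
# intelligence flags the LEAST regular speaker as faulty -- and so
# fingers the human, but for exactly the wrong reason.
flagged = min(speakers, key=lambda name: regularity(speakers[name]))
print(f"Speaker {flagged} judged least 'intelligent' (i.e., least bot-like).")
```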
Or maybe I really am a bloggerbot in disguise! You've been had!
Wednesday, August 22, 2007
Your curriculum is so boring...
that you even resort to the dozens to fill a class.
Until we get our textbooks for the semester, I'm basically just doing whatever (a technical pedagogical term, that) during my summer ESL classes. Yesterday, whatever included playing Mafia after learning about "local flavor," which is to say, I taught one class some "down home" American culture by having them try "the dozens," which is just a kind of verbal sparring between people taking turns. For example, as I'm sure you've heard, someone could say, "Your mama's so fat that when she sits around the house, she really sits around the house!" Then an opponent might retort, "Well, your daddy's so poor he can't even pay attention!" And round we go.
It was slow-going at first -- as humor is one of the hardest things to teach across cultures -- but by the end I was pleasantly surprised by their results (at least in terms of ESL-success, I mean). Some of the answers bordered on the surreal (with a little help from me), which made them all the more fun to discuss. One condition was not using any real names, lest tender feelings be trampled in the zeal for learning!
X is so "motorcycle" that you could ride her in traffic. [NB: In Taiwan, "motorcycle" is slang for idiotic or uncool.]
X is so sweet that bees live in her underwear.
X is so handsome that no other men have girlfriends.
[NB: I didn't help with this one!]
X is so small that even ants can't see "it"! [NB: With this one I deftly avoided asking what "it" is, since the titters from the back of the class made that clear enough.]
X is so ugly that even mirrors break if you say his name.
X is so black that even chocolate is jealous of her.
[By contrast, literally!] X is so white that when she walks in, people think the sun is rising.
[NB: These last two are fine in situ specimens of Taiwan's unblinking colorist racism.]
Some of my own suggestions, to get the imprecatory ball rolling, were thus:
X is so slow that when he wakes up, the sun is down before he's out of bed.
X is so tall that birds pay rent in his nose.
X is so dirty that even water runs away from him.
Care to add on? Or are you so uncreative that... uh, even your, derr, synapses fire like a jammed gun... uh, yeah.
Tuesday, August 21, 2007
Something on your mind? What's the matter? What's matter?
Let me open with a script-joke, c/o my dad.
Before he studied physics, Einstein was highly interested in bees.
He even wrote a paper on the subject, saying if the bees disappeared, the global ecosystem would collapse.
You've heard about this in the news. But it's hard to believe.
When she first heard his theory, Einstein's wife was puzzled.
“Bees?” she asked. “What's the matter with bees?”
“What's the matter with—,” Einstein began. “The matter. … Yes, matter. Now that's interesting.”
The rest, as dey say, is hist'ry.
In the substantive notes of The End of Faith, Sam Harris, currently going for a doctorate in neuroscience, spends some time arguing for brain-mind identity. This is not surprising, I'm sure, considering Harris is a rabid materialist. All the same, one of the prongs in his argument consists in asking what sense it makes to say there is cognitive/intellective capacity beneath or beyond (“meta”) the wetware of our brains. To use Harris's illustration, a dualist may claim our linguistic ability (say, to speak French and English) actually resides in or stems from the soul. Yet, at every turn, when the brain is damaged the same “soulish” ability of speech is also damaged. Indeed, a severe enough cranial injury can render us completely incapable of any speech. Does it make sense, Harris asks, to say that down deep there is still some mysterious linguistic ability in the soul? Are we to imagine the soul somehow lost the power of speech concomitant with the brain damage? His answer is, of course, no; the very idea shows how silly such dualism is. For in his eyes, the erasure (rature) of linguistic ability at the neural and behavioral level proves that just is where such abilities reside. It may just be hindsight, of course tempered by my alleged rabid Christian desire to paint atheists as black as possible, but looking at this argument now, in my putative mind's eye, it seems to be one of the lamest efforts Harris could make in the mind-body debate.
The way Harris frames the rhetorical challenge, together with his singularly modern and narrow references, indicates he has little if any grasp of classical Aristotelico-Thomist anthropology (ATA) (specifically, noology, the study of the mind). According to that view, the human person is a substantial union of two essential, not integral, parts, namely, a material body informed by an immaterial soul.
Some terms need unpacking here, especially “substantial,” “essential parts” and the phrase, “material body informed by an immaterial soul.” These terms roll out from the basic hylemorphism of ATA. Hylemorphism is the metaphysical theory that all actual finite entities (i.e., substances) are an inseparable “bond” of form and matter (morphē and hulē in Greek). These two analytical metaphysical categories co-exist in what I'll call “transcendent mutuality,” by which I mean that the very essence of each term consists in its actual relation to the other. The two concepts are actually inseparable, though conceptually distinct. The actual existence of form has a transcendent (i.e., “definitional” or “automatic”) relation to matter. And vice versa. The essence of matter, in the concrete existence of a particular substance, is to be configured by its proper form, and the essence of form, in a substance's concrete existence, is to “reside” in matter. There is no intelligible reality to matter unless it has a form to give it concrete particularity (i.e., substance) and no intelligible reality to form unless it has matter to configure. Thus these are essential (i.e., substantial) and not merely integral (i.e., spatial) parts of a prime substance. As we read in the Decree of Approval (27 July 1914) following St. Pius X's Doctoris angelici,
Although extension into integral parts is a consequence of a corporeal nature, it is not the same thing for a body to be a substance and for it to be of a certain quantity. By definition, a substance is indivisible, not in the same way as a point, but as something which is outside the order of dimension. Quantity, which gives extension to substance, in reality differs from substance, and is an accident in the full meaning of the term.
As for matter without a form, this is known as hulē prote, prime matter, and is actually nonexistent: lacking any and all formal characteristics, there is no way in which pure matter can actually exist. Something utterly colorless, shapeless, featureless, etc.—literally formless and undifferentiated—has no actual existence: its existence is pure potential, not at all actual. Much the same goes for pure form. Unless it has some matter to inform (i.e., to shape and differentiate), form never actually exists. Without a form, matter is always and essentially on the verge of becoming something, anything, everything. Indeed hulē is the Becoming ingredient in reality. It is the basis for the boundless capacity of reality to change, advance, and revert. Form, by contrast, unless it is “locked in” to matter, is always at risk of being a sheer, self-contained ideal concept, undispersed and unadulterated by the sprawling variability of matter. Form, you might say, is the Be-Thisness of reality. Once the form “horse” is spread throughout concrete horses, it loses its pristine totality as horseness, having to suffice as the form of any number of examples, which are never as perfect as the form on its own. Once a form joins matter, it is no longer pure form in abstracto, as Plato argued about the Forms, but is simply one actual, incomplete “version” of the form. Form is what it is in and of itself, whereas matter is never and not at all “what it is,” since matter qua hulē is not actually anything. Without each other, though, pure becoming and pure thisness are non-actual. While pure matter, as pure potential, never becomes anything actual, pure form, as so to speak total act, never exists in any actual, material way. Their actual existence is, thus, transcendentally mutual.
The two transcendent categories can be likened to two passengers trying to get on the train of actual existence. Unfortunately, Marlo Matter is so amorphous and vast that he can never even get off the ground into the cars. He cannot be seated because he is already seated, at every point of his body and for all his days. Meanwhile, Frankie Form is so excited about the ride that he never settles down into a particular seat, and is instead constantly running outside around the train in a blur of pure action. He cannot be seated because to sit would be to lose the pure drive to find a seat. Finding a seat would rob him of his only goal: finding a seat! But if the two work together—Mr. Form whipping Mr. Matter into shape and Mr. Matter channeling Mr. Form into a seat—they can actually get a seat (i.e., actually exist).
If the idea of “actual existence” seems redundant—like a bottom-floor basement—you should realize there is in fact more than one way of existing, namely, potentially, analogously, and necessarily. Ask yourself: Does the next moment from now exist? Yes—but not actually, only potentially. Also ask yourself: Does the immediately preceding moment exist? Yes—but not actually, only analogously with the present moment. Now ask yourself: Does this question exist? Yes—and necessarily, for the very act of posing it actualizes it, whereas its merely potential existence would never have produced the question in question!
In any case, these two characters—form and matter—are the big players in ATA's theory of substances. A prime substance is a singular hylomorphic actuality, possessing actual matter by virtue of its intrinsic form and existing formally by virtue of its proper matter. A secondary substance is a general category, like horses, whereas this or that particular horse is a prime substance. Although the term substance means literally “that which stands beneath” (sub-stantia in Latin, hypo-stasis in Greek), it should not be confused with the entity's form, as if the form were prevenient and independently floating on its own “out there”, just waiting for matter to come along and give it a material existence. Form, as I said above, does not exist “out there” by itself as an immaterial ghost; it does, however, exist immaterially as the constitutive principle in matter. Being transcendent to one another, form and matter are actual only in their integral co-relation; and this integral bond of a form and its matter is what makes a substance “what it is” in distinction to anything else. Form is that which enables us to answer the question “What's the matter?” Since matter could be anything, the answer to what the matter actually is, is the form. The roles of form and matter in ATA are, respectively, to differentiate general categories (secondary substances) and to differentiate individual members (prime substances) of a common class. One substance is essentially distinct from another kind of substance by virtue of its form; something with the same substance as another is unique by virtue of its individual (literally individuating) matter. Water, for example, is a substance whose essence is to have two hydrogen atoms and one oxygen atom joined in a polar covalent bond. Changing water's form to, say, two hydrogen atoms in a covalent bond with two oxygen atoms (i.e., hydrogen peroxide) would render it substantially, essentially, different, and thus no longer water. On the other hand, keeping the form of a three-way bond, while changing the hydrogen atoms to oranges and the oxygen to a lemon, would substantially, essentially, alter water into something else.
Humans, in turn, are essentially differentiated from other kinds of entities (beings) by virtue of the essential union of our material bodies with the form of our soul. Our unique substance is to be the “transcendentally mutual union” of a soul (the form of the body) and a body (the matter of the soul). For the human to be a substantial being means it is—we are—a kind of being differentiated essentially from other kinds of being, and this by virtue of our having a rational soul in a properly material body. Changing our form to that of water would destroy us by disintegrating our proper substantial integrity. Conversely, changing our matter to that of cheese would also destroy us as human beings since it is not proper to the form of human substance (our “being-human”) to inform cheese as a human being. Cheese, largely because it has its own proper form for its cheesy matter, is simply not the right matter to allow the human form—the soul—to manifest as a human being (much as lemons and oranges are simply not the proper matter with which to actualize water, regardless of how hard the form of water “tries” to effect that change). We can know something does have a human form if its matter at every point is capable of and “inclined to” forming a human person, which is why human fetuses are not humans-in-the-making, but truly, formally, humans. We cannot decide someone is not a human because of a deficiency or radical change in his materiality (short of death, that is), which is why Terri Schiavo was still a human person whereas corpses, lacking all formal human capacities, are not. A dead body is simply not the proper matter to form a human, though a brain-damaged body on “auto-pilot” is still materially adequate to formal humanity. Matter alone can never define someone as a human, though, once at a sufficiently “warped” stage, it can disqualify something from being a human. This is why an ovum and a spermatozoon are not humans, but their union, as the cause of the substantial production of a human person, does constitute that zygote as a human, with adequate material to allow the form to produce a human baby and a proper form to “mold” the matter into that person. The reason we can speak of a human nature is because humans are substantially the same: we have the same form, in our soul as the immaterial principle of our lives, but we are individuated by our unique material composition. We are all formally alike but all materially unique—and thus all substantially the same.
Bringing all this back to bear on Harris's line of reasoning, what should be clear is that his quibbles about brain damage may refute crude dualism but they are meaningless—or rather, self-evident—in ATA. Of course we will lose our ability to speak if we suffer brain damage: such damage to our matter naturally prevents our complete formal manifestations at the behavioral level. Then again, the formal adequacy of our human nature becomes apparent as soon as we imagine the brain damage were completely and perfectly reversed, restoring the brain to its pre-injury condition. What would happen then? The patient would have the same linguistic abilities as before! Why? Because once the material “discrepancies” are straightened out, the formal perfection of his soul allows the patient to reactivate once-lost behavioral powers. As Benjamin Wiker says,
[I]f we realize that we are rational animals, then the latest brain research poses no real problem. It is simply a half-truth distended illicitly into a whole truth. If we are indeed rational animals, we should expect to find that thinking depends on our animal nature, including our brain, in the same way that our rational volition, for its execution, depends on the use of our hands, legs, eyes, or ears. If our thinking didn't depend on the brain, then we truly would be angels trapped in animal suits.
So, I don't need to poke about in the brain to realize that a good cup of coffee makes thinking a whole lot easier after a bad night's sleep. … My acts of volition are real, and I use my body, not like an alien machine, but as part of my unified being. … I am neither a Gnostic angel with no need of a brain or body to think, nor am I a slightly elevated ape for whom thinking is merely an elaborate form of sensation. I am, to repeat, a rational animal, an essential unity of immaterial soul and material body. If we try to cling to either extreme, and neglect this golden mean, then we are forced into denying what we actually know and experience. (http://www.tothesource.org/12_11_2002/12_11_2002.htm)
Consider also these comments from Alfred Freddoso, in his lecture, “Good News, Your Soul Hasn't Died Quite Yet”:
[E]ven though the doctrine of the immateriality of the soul entails that our higher cognitive and appetitive operations are not themselves operations of the brain, the anti-dualistic nature of the Catholic view of the human animal … should antecedently prepare us to expect that such higher operations will depend heavily on the normal functioning of the brain and central nervous system. … [F]rom a Catholic perspective dualism is just as wrongheaded and, in the end, just as pernicious as physicalism. Dualism treats body and soul as two separate substances or, at the very least, two antecedently constituted integral parts of an entity whose unity is per accidens; and it identifies the human self with just the immaterial soul. In this it runs afoul of the Catholic teaching that the soul is the form of the body and that the human body and the human soul are so intimately linked that they derive their identity from one another. Perhaps more precisely, the soul is the form of the human organism as a whole and, as such, makes it to be the sort of living substance it is. Thus, the human body and human soul are not two antecedently constituted integral parts, but rather (to use the scholastic phrase) complementary 'essential parts' of an organism whose unity is per se. The Catechism puts it this way: "The unity of soul and body is so profound that one has to consider the soul to be the 'form' of the body: i.e., it is because of its spiritual soul that the body made of matter becomes a living, human body; spirit and matter, in man, are not two natures united, but rather their union forms a single nature." [CCC §365]
So much for those guys, who actually know their stuff. On my own small, dim path, the analogy I've used for many years in these discussions is that of a movie screen and a movie projector. The screen is our brain, the projector is our soul, and the film is our behavior. The soul projects “our very lives” onto the screen, and the screen, importantly, can alter the appearance of the film if it is reshaped. Moreover, if a section of the screen is cut out (i.e., brain damage), although the projector is still burning bright, we can see nothing on the screen. Even if the screen is restitched together, there will be “telltale” signs of past brain damage which henceforth alter or impair the projector's “performance.” This in no way equates the mind (the film) with its ability to function (the image). As Derrick Hassert says, "This equating of the 'soul' with a set of functions or processes—what cognitive psychologists might refer to as 'mind'—is the same faux pas Descartes made, but one that Aristotle and Aquinas did not: In Aristotle the mind is a subset of abilities defined by the essential nature of the creature (the soul). The soul precedes all else. We are humans first by nature, not by function" (in "Brain, Mind, and Person: Why We Need To Affirm Human Nature in Medical Ethics").
Incidentally, these thoughts were for the most part stirred, and re-stirred, up by my recent reading of Nancey Murphy's slim, lucid book, Bodies and Souls, or Spirited Bodies? While I am very intrigued and indeed attracted by “non-reductive physicalism” (or “supervenient physicalism,” or “emergentism”), and found Murphy's exposition of it very appealing, I have fundamental reservations about the whole view, not only because of the irreducibly immaterial (i.e., meta-physical-ist) aspects of human existence but also because of some of the chilling ethical repercussions such a view bears within it. If personhood, and human worth, emerges from our physical functionality, and if our relationship with God is entirely predicated on our physical capacity for it, what happens when we lose those capacities, as from brain trauma? If such value can emerge, can it not just as easily submerge? Reservations aside, Murphy's book is a superb introduction (or refresher) to the issues and provides numerous good leads for further study.
As always, caveat lector: I am not a professional philosopher, so there can and indeed probably are numerous key errors in my discussion. I welcome enlightenment but provide to others what I can in the meanwhile.
“Mother, AI?” Yes or no, mother decides
The problem with strong AI is not just the limits of computational completeness for programming a total cognitive “innerverse”, nor just the rank ignorance of cognitive theorists about what real intelligence is, a deficit that makes it quite hard to know what to design AI toward — not really understanding a real strawberry pie makes it significantly harder to make an artificial one — but also the ultimate pragmatic conundrum of what to do with AI once we achieve it. What I mean is, once we achieve a robot that seems to be intellectually autonomous and spontaneous, the problem will be that its cognitive powers are either such a good simulation of our own that its benefits will be redundant, or so far beyond ours as to be practically useless.
It was only last night [21 Aug 07, after posting this] that I was reading M. Adler's essay "The Challenge to the Computer" and noted these pertinent words:
[I]t is necessary to distinguish between computers that are programmed to perform in certain ways and what I am going to call "robots" -- machines built for the purpose of simulating human intelligence in its higher reaches of learning, problem-solving, discovering, deciding, etc. ... [The computer's] chief superiority to man lies in its speed and its relative freedom from error. Its chief utility is in serving man by extending his power.... Robots in principle are different from programmed computers. Instead of operating on the basis of predetermined pathways laid down by programming, they operate through flexible and random connections ... [which] Turing calls "infant programming"....
Immediately upon reading these words, I wrote thus:
Presumably, the simulation of human intelligence in robots would be just what detracts from computers' superior processing speed, thus negating the pragmatic benefits of robots as "smart computers." Once they begin to simulate human intelligence, they will ipso facto lose their Vulcan invulnerability. I suspect there is an inverse ratio between robotic "intelligence" and robotic efficiency. Ultimately, the closeness of robot AI to human intelligence will thwart all its current non-human benefits. As long as computers retain their signal computational power, they will ipso facto not attain to simulating human intelligence. [These comments should be read as having occurred to me after posting this entry.]
Now, imagine a real-time war analyst robot at work: it gathers tremendous amounts of data and formulates a decision. But, ultimately, who will decide how, or whether, or when, to effect that decision? All those lumbering humans waiting for the AI-General to make the call. Ultimately, AI runs the risk of becoming a cipher, one more, albeit extremely well-informed, opinion among others. Once, for example, the AI-General has a functional “intuition algorithm,” as artificial intelligence would surely demand, its intuition will simply be wagered against real men's intuitions and hunches and analyses. As long as the AI system keeps its analyses at levels suitably non-complex for human brains to compute, it will render itself only as intelligent as humans can afford it to be. Unless humans blindly defer to the AI-General in all cases, it has every chance of looking just dumb when a flesh-and-blood general decides to “follow his gut,” “pull rank,” override its plan A and go ahead with plan B — and all turns out well. Once the AI system renders its judgment, humans will reassess the case and decide the solution is too simple or too direct or too naive, etc., and go ahead with whatever they themselves decide in the end.
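(To put that “cipher” worry in toy form: below is a little Python sketch, with the staff size, the plans, and the officers' gut-leanings all invented for illustration. Counted as merely one voice in the room, the AI-General's recommendation carries only as much weight as the room allows it.)

```python
import random

random.seed(7)  # just to make the toy run reproducible

AI_RECOMMENDATION = "Plan A"  # suppose the AI's analysis favors Plan A

def staff_hunches(n=5):
    # Each officer follows his gut; suppose the room leans toward Plan B.
    return [random.choice(["Plan A", "Plan B", "Plan B"]) for _ in range(n)]

# The AI-General is debriefed and tallied as one more voice in the room,
# so its analysis, however good, gets washed out by the staff's hunches.
votes = staff_hunches() + [AI_RECOMMENDATION]
decision = max(set(votes), key=votes.count)
print(f"AI-General recommends {AI_RECOMMENDATION}; the room goes with {decision}.")
```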
On the other hand, assuming AI becomes vastly smarter than us, it will be as useful as a blind Orphic sage. Or, worse, as threatening as a HAL gone haywire, which is to say, gone utterly in accord with its own intelligent artifices. Assuming AI does “pull a HAL,” we will have no rational choice but to curb its powers, either by limiting its executive clearance or filtering its data system so we control the flow of information. (This would be an “informatics blowback”: when the means of information control themselves require informatic bridling, lest their autonomous efficacy backfire on us.) What the AI system says in a certain situation may be the most important analysis in history, but, tragicomically, unless it puts the analysis in terms humans can grasp and accommodate to their own methods of rationality, the AI-analysis will seem like gibberish. Without condescending to human intelligence (“dumbing things down”), AI would be like a genius with mathematical and theoretical Tourette's syndrome. Unless humans can comfortably integrate the much more advanced AI-directives into their own rational goals, AI will just be an inscrutable tease. But, of course, if the AI system does tailor its intelligence back down to human levels, it will eo ipso become, as above, one more analyst to be debriefed and mulled over on the palate of great men at the helm of history. Why does your dog eventually, even quickly, tire of listening to you talk with him? Because while your intelligent behavior may strike a synapse or two in his doggy brain, eventually it will all just be gibberish. Very smart gibberish for us, to be sure, but for a less intelligent creature, ultimately just a confusing tedium. No one likes listening to a lecture that goes totally over her head. If AI starts to go over our heads, we will simply tune it out. Super-intelligence seems to mere intelligence like folly, and any intelligence worth its salt will ignore apparent folly and bank instead on mere intelligence—which is to say, once more, human intelligence, albeit with whatever AI input it can reasonably use. No matter what high-flown intelligence algorithms an AI system may cook up, with the help of its natural selection and synthetic neural-formation algorithms, unless these levels of intelligence mesh with—conform to—human intelligence, they will be just that for men in a crunch: high-flown frenzies of theory.
As for the argument that over time, as people see what accurate, reliable benefits “the AI difference” makes, they will learn to “trust” the machines without squabbling over their all-too-human predilections, this only begs the question: By what criteria would people consider the AI difference to be valuable? Would they ask an AI system to assess the value and safety of … the AI system itself? No, for, in all such cases, humans will do what we always do: exploit technology to the last drop, gaining thereby whatever edge we can—but then still do what we ourselves, by the light of our own intelligence, deem to be best. If AI never really surpasses human intelligence, it will be a cipher among ciphers. If, on the other hand, it does surpass human intelligence, it will be one more enigma among the very enigmas it was designed to solve.
All that I've said is not meant as a critique of strong AI as a theoretical possibility, though I am heavily inclined to skepticism, given the considerations of J. von Neumann, E. Gilson, R. Penrose, H. Dreyfus, K. Gödel, S. Jaki, E. Wilson, M. Adler, et al. Mine is simply a passing meditation on the picayune peccadilloes of AI as a fundamentally and finally human utilitarian production. Captain AI, whenever he may come into being from the womb of mankind, can and will devise and advise, but still must ask, "Mother, may I? Mother Man, may AI?" Yes or no, mother decides.
Saturday, August 18, 2007
Apropos l'Assomption de Notre Dame
From FCA's old, intermittent Christian Heritage archives.
[I'm stripmining FCA into text files so I can eventually rework the good stuff into real essays, maybe even books. So, as I go through the archives copying and pasting, you may see some recycling here. For example, the quotes from above and this reflection on maths:
"The flaw was so complex that even explaining it to mathematicians would require the mathematician to study the particular part of the proof intensively for two or three months before making a valid critique." -- from an unnamed book I am reading as a gift for one of my friends.
Now that's my kind of maths. I mean it. Math is best when it's purest. Forget all this practical application work; leave it to the engineers. Since I'll never even make it into the parking lot of such maths, give me head-splitting, career-stopping abstraction to observe from afar. I never thought I'd get into a maths book, but I have, with great interest. I've already got my next pop-maths book on deck.
A co-worker today told me a saying he'd heard when he entered college: "In college, biology is chemistry, chemistry is physics, physics is math, and math is theology." Another co-worker added a clever line: "And sociology is biology." Touché.]
Friday, August 17, 2007
Looking askance at looking askance
A common stereotype about Asians is that they are "squinty-eyed." While this is, of course, based on the fact that Asians have a genetic propensity for an epicanthic (eyelid) fold, adapted perhaps as a kind of shade against snow blindness, I think something else contributes to the stereotype. And I will not say the perception of Asians as quiet and crafty gives them an imagined "sneaky appearance" (e.g., eyes slitted in collusive impenetrability). It's the reverse. Partly the epicanthic fold, but also something else, contributes to the stereotype of Asians as cunning and inscrutable. The other factor? They have bad eyesight.
I've lived in Taiwan over four years and have had lots of experience looking closely at Asian faces. (That's not a boast, just a statement.) I've been in close contact with Japanese, Indonesians, Filipinos, Koreans, et al. Further, I'm a teacher so I must constantly analyze students' faces for hints of understanding, curiosity, impertinence, shame, etc. And what I can say is that just as the myth of the docile, taciturn Asian was exploded countless times by the stunning raucousness of groups of happy students and adults, so the myth of the slanty-eyed Asian saw its demise the longer I saw those eyes. A great many Asians simply do not have a pronounced slant to their eyes... unless, that is, they are squinting.
And that's just the problem: they squint a lot here.
I can't tell you how many times I've been unnerved while teaching, seeing an otherwise sweet, upbeat student scowling at me while I talk. ("What did I say?") Or how many times I've felt my dander rising as people, total strangers, glower at me with thin eyes as I walk on the street, like some interloper. ("What did I do?") Or how clerks will peer inquisitorially at me and my seemingly contraband items. ("Did I make the most wanted list?") It took me a long time to realize all this scowling and peering is actually just a result of people's habit of trying to see more clearly. It's a documented fact that Asians, as a group, have more vision problems than, say, Caucasians, and that's largely because of their languages (being made up of intricate ideographs) and education systems (being hugely memory-based and text-intensive). Asians' eyes simply get more strained learning their own languages and going through school than other peoples' might. The same problem holds for serious non-Asian students of Chinese: their eyes deteriorate the harder they study. This is why the eyeglass industry is simply booming, in Taiwan, at least. You can find eyewear shops and optometrists every few blocks. In my glasses, I mean, classes over the years, the vast majority of students wear glasses... unless, that is, they forget to wear them. And this lapse is exactly what leads to all the squinting. Just as students will let their bike helmets dangle from the handlebars without wearing them, and adult scooter drivers will wear helmets without clasping the chin strap, so many people here are perfectly fine going about the day with subpar vision, whether from not wearing glasses or wearing improperly prescribed lenses. (Indeed, in my last class I just saw a girl squinting with her glasses on!)
As my diction reveals, I'm speaking generally and tentatively about this. I can't claim any rigorous sociological proof for this factor in the formulation of a stereotype. In my defense, though, talking about stereotypes, generalities by definition, entails talking stereotypically, generally. My insight might seem as compelling as saying that the stereotype "Americans are fat" arises from the fact that Americans wear baggy clothes. But, then again, that kind of logic can't be ignored when looking at culture. Asians seem like "squinty folk" because, the fact happens to be, they squint noticeably often! Such behavior is all the more striking in conjunction with having an epicanthic fold.
The intentional pro-choicer
[I've updated this post with a somewhat lengthy comment {#4} but thought it tacky to jam it in this post.]
Daniel Dennett is famous for, among other reasons (including his beard, in my book), articulating what he calls "the intentional stance" (tIS) as a way of explaining minds in a material world. In a nutshell, Dennett says that "having a mind" just means displaying behavior that any other self-proclaimed "minder" recognizes, and responds to, as mindful activity. tIS is "a predictive strategy of interpretation that presupposes the rationality of the people--or other entities--that we are hoping to understand and predict" (back cover of The Intentional Stance, MIT Press). tIS is but the highest level of three that Dennett describes in his so-called heuristic taxonomy. The second level is the "design stance" and the lowest is the "physical stance." On the design stance, we look at a system or entity's functional capacity and predict what it might do given various changes. On the physical level we zoom in and only consider a system or entity's capacity for change in light of its fundamental chemistry and physics. Within certain constraints, the "behavior" of a chemical system, or a mechanical apparatus, is predictable. tIS just extends this heuristic analysis to the level of desires, goals, action-potentials, etc. All such stances are clearly highly pragmatic. We don't need to know what something is in some mysterious inner, essential sense; we just need to be able to appreciate and then manipulate its physical, design and intentional tendencies. Rising from the first level to the third, we can say a being is what atoms do, what mechanics does, and, lastly, mind is as mind does.
Perhaps tIS sounds circular (and, no, not just to you), but Dennett's point is that mind should not be reified, but rather understood dynamically as the aspect of any organized system or organism that (apparently) directs its actions and reactions. Mind, in this sense, is everywhere, as long as we can recognize "intentions" in any system or organism. A mosquito, for example, may not have a full-blown "mind", as we say we ourselves have, but there's no question a mosquito carries out consistently intentional actions, like, say, buzzing in your ear when you're trying to sleep or sucking your blood and squirting its saliva into you. According to tIS even a thermostat can have a "mind", but this, only very loosely, and probably without any better heuristic value than looking at it on the design stance. After all, a thermostat not only responds to (i.e., "minds") its environment (in terms of ambient temperature), but also in fact responds with a sort of efficacious volition, namely, changing the temperature according to its "innermost" intention (i.e., regulating the temperature at X°). But such a binary structure of intentionality hardly counts for what we understand as intentionality (i.e., "aboutness," being about, or for, some object).
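(A toy sketch of my own, not Dennett's, may put flesh on the thermostat's skeletal "mind"; the class and its names are invented purely for illustration, in Python:)

class Thermostat:
    """A minimal "minder": it senses, compares, and acts; nothing more."""

    def __init__(self, set_point):
        self.set_point = set_point  # its one "innermost intention"

    def mind(self, ambient_temp):
        # The whole of its "mental life": a binary response to its world.
        if ambient_temp < self.set_point:
            return "heat on"
        if ambient_temp > self.set_point:
            return "cool on"
        return "rest"

# From the intentional stance, we say it "wants" the room at 22 degrees;
# the design stance describes the same loop with no talk of desire at all.
t = Thermostat(22)
print(t.mind(18), t.mind(25), t.mind(22))  # heat on cool on rest

(The point of the sketch is how little is there: one set point and one comparison. That is why the thermostat earns only the loosest kind of "mind," and why the design stance describes it just as well.)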
The appeal of Dennett's intentional stance is that it is, upon reflection, paradoxically a pre-reflective viewpoint. Dennett has simply taken ordinary language seriously as the key not only to unraveling the mystery of our minds (and this with much explicit debt to Gilbert Ryle's The Concept of Mind), but also as the basis for detecting minds in any circumstances. How often do we say, "Mosquitoes are such tenacious SOB's!" or "The thermostat is acting up"? As often as we say these things, reflexively, naturally, about things, we are treating them as intentional subjects. Dennett has simply seized on this linguistic phenomenon and, ironically, quasi-reified it as the basis for mind. The essence of mind, heuristically speaking, is to be treated as if being mindful. (Meanwhile, of course, we don't really think anything has a mind, like I have an appendix, since, according to Dennett, a reified "ens mentis" is a metaphysical fiction.) Something earns the status of having a mind simply by doing what we typically consider minds to do. "The mind," then, is a verb. Presumably, we recognize our selves as minders just because enough parallel neural systems coordinate to generate a so to speak center of intentional gravity. We do not know our "inner thoughts," Dennett might say, as much as we know what manifests behaviorally, intentionally, from the complex of our nervous systems on the design and physical stances.
One upshot of Dennett's intentional stance is that, because we should not (anthropomorphically) reify mind as our singular prize possession, we should learn to take seriously "minding" wherever we find it, at any level. Indeed, what stimulates our minds most about the world is how it constantly seems to interact with us and interrogate our own intentional stance. "This is what I am doing," physical reality says, "now what will you do about it?" Taking the intentional stance seriously as the criterion for mindedness means taking other "kinds of minds" seriously, in terms of both practical reactions and moral priorities. Indeed, in his book, Kinds of Minds, Dennett argues that much of why animal cruelty is wrong is because, from an intentional stance, there is every reason to believe the grimaces and whimpers of a dog indicate pain, and that inflicting such pain on the animal is just as malicious as inflicting it on ourselves or another human. The dog need not explain to us in terms we understand rationally; its "intentional" signs of pain (i.e., consistent actions "about" pain) speak for themselves. Since it does what a minder does, then, for all intents and purposes, the dog has a mind. Our own kind of mind allows us to imagine and recognize the dog's intentional status, and, in turn, to realize that suffering is just as bad for it as for us. The animal's so to speak mental integrity and its mental dignity need not conform to our own level (or kind) of mind to count as morally significant. As soon as we recognize "mindedness" in a system or an organism, we must take that entity seriously as both a causal agent and, to some extent, a moral being.
Something I think Dennett ignores, though, is how his view figures into the abortion debate. If we should take minds seriously as (at least provisionally) the powers of moral agents, then we have no choice but to take embryos just as seriously. They display intentionality in numerous ways, possessing not only organic integrity directed towards growth and adaptive change but also particular responses to painful or pleasant stimuli. Moreover, they impinge on a separate system (i.e., the mother) in distinct and intentionally “self-centered” ways (e.g., their appetite augments the mother's appetite, alters her palate). If animal pain is a moral concern on tIS, a fortiori fetal pain is an ineluctable consideration in the abortion debate. Fetuses have as much mental and moral “weight” as dogs and mosquitoes, so they should not be callously discarded for the sake of designer humanism.
[What Dennett, of course, appears to skirt is the irreducibly intellective nature of intentionality, such that even beneath the physical stance is the intellective power of the mind to grasp meaning in terms of, say, physical signs, design compositions and intentional concepts. Mind may be as mind does, but grasping that fact is, as a metaphysical act, eo ipso prevenient to formulating it. Intellection trumps intentionality, otherwise intentionality is not intentionality {i.e., it is propositionally void}. Cf. Wikipedia's "Intentional stance" for more.]
Thursday, August 16, 2007
My proposed graduate study
[Tell me what you think!]
Our age is one of “golems among us,” or so says Byron Sherwin in his book of the same title. [Chris Hables Gray, in Cyborg Citizen, tells of how, as part of exploring the age of robot toys, he roved the streets of Prague for golem puppets.] In October 2002, to explain his country's Project Golem 2002/5763, Argentine Ambassador Juan Eduardo Fleming said, “Today's Golem means artificial intelligence, robots, cloning, the Internet, computers" (cf. Prague Post 2 Oct '02). Fleming is right—but how did we get here?
I'd like to explore the cultural roots of artificial intelligence as a form of computerized gematria and, further, transhumanism as a form of secular eschatology.
As for AI, I would like to deepen the work of Byron Sherwin with a specific focus on cognitive science. His work is more of an ethical exploration of so to speak golemic bioethics, whereas I am hoping to research the theory of robotic intelligence and see how that discourse is framed, or anticipated, by the mystical sensibilities of making life à la Rabbi Löw and Dr. Frankenstein. It is one thing to imagine, with the conscious or unconscious help of ancient mystical lore, making a body for practical (utilitarian) purposes. It is quite something else, as Borges's poem “Scholem's Golem” suggests, to make something with any “inner” conscious life. What are the historical, cultural roots of the shift towards making a sentient golem, as opposed to making merely useful workers, or “robotnik” (as Karel Čapek coined the term in his 1921 play, RUR: Rossum's Universal Robots)?
As for transhumanism, à la Ray Kurzweil, I would like to see how an explicitly secular movement may or may not use and reject elements of Judeo-Christian eschatology. Sherwin's work examines how golem lore can guide our increasing use of biotechnology. In that case, golem are still only one heuristic device in the larger cybernetic movement. What I would like to explore is how transhumanism elevates the golem from a mere device for our current benefit to a paradigm for human destiny. According to transhumanism, it is by making ourselves into new golem, inscribed not with the tetragrammaton or any other religious esoterica, but with the genetic code of life and the imperishable markings of machines, that we become our own Rabbi Löws. Sherwin's work, by contrast, keeps golem as what they were traditionally, namely, magically manufactured servants of human (viz., Jewish) interests. This I call “golemic humanism,” in which technology, no matter how humanoid it may be, is still subject to humanistic values and authority. We can learn how to control the golemic potential of biotech for our own good, and should do so consciously, lest biotech calamities run out of control like a golem on the loose in the streets of Prague. The menacing air of Gustav Meyrink's Der Golem colors those streets to this day.
A similar kind of golemic humanism is present in Keith Stanovich's The Robot's Rebellion, but Stanovich's position does lean towards transhumanism. Although he acknowledges that “universal Darwinism” does indeed dissolve some, even most, of humanity's most hallowed, ancient values as traditionally understood, he insists we can reclaim those values precisely by being aware of the threat Darwinian thought poses to them. In this respect, Stanovich casts homo sapiens darwinus as a demi-golem, a creature inscribed by a biogenerative code (i.e., “selfish genes” and memes) that operates according to a separate level of concerns (i.e., genetic replication). Once we realize we are golem, however, we can transcend our golemic servitude and battle against the mindless steamroller effect of genetic replication. We are only “owned” as golem if we choose to be. Such is Stanovich's neo-Darwinian evangelion. The freedom to transcend our strict genetic demands is clearly a transhumanist impulse, even if it is ultimately directed towards current human interests. So while Stanovich's robotic evangelion has a transhumanist inspiration, it is still fairly standard golemism, of the kind Sherwin describes. Our golemic heritage and status is still properly subject to greater human aims. Transhumanism, by contrast, subordinates present human interests to a greater, literally “superhuman,” future along the lines of a super-golem. Hence, from a golemic perspective, I am inclined to call transhumanism “transgolemism.” Whereas standard golemic thought uses cybernetic and biotech potential for human interests, transhumanism—transgolemism—fully converts humanity into its own future ideal golem.
While this may seem like a broad “cultural history of an unlikely future,” as skeptics of strong AI and transhumanism would have it, I want to focus my study historically on the German-speaking reception and development of these themes. The links between Nietzsche's Philosophie des Übermenschen, the hopes of science in the atomic age, and the “old European” heritage of golemism and kabbalism are ripe for study. While Nietzsche rejected Christian eschatology, he could not escape matters of final import, which is why his Übermensch can be seen as a critical surrogate for the New Man of traditional Christian hope. What would, or did, Nietzsche think about his Übermensch in light of Jewish golemism? Ernst Benz, a 20th-century German theologian, examined the tension and links between evolution and Christian hope, but he really did not apply his analysis to artificial intelligence. Norbert Wiener drew explicitly on golemism in his cybernetic manifesto, God and Golem, Inc. Might there be similar links between golemic lore and both Kurt Gödel's and John von Neumann's ultimate pessimism about strong artificial intelligence (at least, by computational means)? The powder keg of 20th-century European anti-Semitism was Vienna. How was the Jewish heritage of robotics (viz., golemism) received in an age steeped in anti-Semitism? Are there not perhaps two currents from which transhumanism can draw, namely, Jewish religious golemism and Nietzschean Übermenschismus?
As the work of Pierre Duhem and Stanley Jaki shows, science, as a mature self-sustaining rational inquiry, required a metaphysical climate of thought in order to take off. And while science may no longer need to refer to that religious worldview explicitly in order to continue, it still merits looking closely at how consciously or unconsciously the vision, goals and perceived limits of AI and transhumanism are grounded in earlier religious (and mystical) and philosophical traditions.
I saw it with mine own two eyes!
Driving from Hong Wen to Victor yesterday, I stopped at a red behind a guy in a white construction hat. On the back of the helmet he, or someone equally zestful, had written this:
TAIWAN IS
*****BEAUTEFUL
**COUNTRY
This was written, presumably, to assure any inconsiderate or jaded foreigners realize afresh just how lucky we are. I have been advised.
Such is one more small tile in the mosaic of why I love Taiwan.
Homo sapiens darwinus
[The term above has apparently been used by a Professor Fox, but in the sense of a superior, genetically "pure" species of homo sapiens, which is not how I am using the term.]
The title is the name I give, ex hypothesi et arguendo, to humans living in what Keith Stanovich calls the Age of Darwin. This is our age. It is the age in which Darwin's breakthrough has finally begun manifesting its deepest implications and broadest applications in society at large. It is the age in which arguing against Darwin, other than in order to refine his basic theory, is to assume a huge burden of proof and to hear in reply, "Adversus solem ne loquitor."
All I will say, now, about this sea change, as Stanovich would have it, is this:
The universalization of genetics (i.e., "geneticism") has merely biologized the dogma of original sin.
As you walk through any shop or on any street, the people you see may appear (phenotypically) fine and dandy. But what we know of genetics is that, in fact, lurking deep within, on the helical tracks of their very being, there could be any number of genetic disorders waiting to manifest. Everyone is, knowingly or unknowingly, the victim of a great fall from genetic purity and it is only a matter of time until the effects become apparent. Further, this primal corruption, which subjects the individual to the uncaring, unceasing vagaries of selfish nature, is transmitted by propagation, not by imitation (as at Trent, "... hoc Adae peccatum, quod origine unum est et propagatione, non imitatione transfusum omnibus...."). No matter how virtuous or "integrated" a person may seem, the fact is, in the aetas Darwinum, they are corrupt, like everyone else.
What this shift indicates, of course, is a total demoralization of what the doctrine of the Fall means. Fr. Keefe, in volume 1 of Covenantal Theology, makes this point very clear. If the Fall is not seen as a radically moral, and therefore radically free, occurrence, it loses its intrinsic intelligibility. The biologization of our fallenness, then, is but one indication of the intrinsically anti-freedom and amoral dynamic of our age. Our biological fallenness seems as chilling and alienating as any skewed version of the Fall. For unless Adam is seen as our metaphysical, and not necessarily temporal, head, humanity is just as subject to a temporal (i.e., amoral) necessitarianism as a biological one. What Adam did in his own time, a moment integrally coterminous with the Event of Creation, is the metaphysical basis for our own fallenness. His fall is ours, and vice versa, synchronically, and we can see this fallen link in the course of historically discrete diachronic consequences.
[P.S. I make no claims for the quality of my Latin. I use it for my own edification, and if I misuse it, I appreciate being told how to correct it. Also, as the Solemnity of our Lady's Assumption was this week, I hope I can soon post a few thoughts from Fr. Keefe on that event.]
My mental diet, of late
I've posted some book reviews lately, so I figured I should update my recent (as in, in the last 8-10 months) mental diet. So, here's my current reading journey (copied from a text document, but I don't feel like italicizing all those titles).
All right, then, until next...
cornhobbling!
cornhobble.blogspot.com
This is my buddy, Craig's, new blog. He is the former landlord of Craig's Cranny. Now, as you can see, he is back in black (but not in Mac).
Cornhobble, but of course, means:
(v) : to strike a person with a fish, particularly in the face "Captain Jim threatened to cornhobble the insolent sailor."
What I love most about this entry is that it's listed as the fifth (of...?) definitions for cornhobble. English rules.
[TRUE STORY: It was not until five minutes ago, as I was gmailing with Craig himself, that I realized the "(v)" just means "verb". I am an English teacher. This reminds me of a time in high school when Kevin V. wrote "goat" on a boat stand (we were at crew practice) and Isaac N. tried to read it but he could only see and say "go...at" like three times in a row, the whole time Kevin is just staring at him like Isaac's vomiting kittens and saying, "Isaac, are you serious, that's what you read there?" "Go...at...." "You don't see anything else?" "Go...at...."]
Wednesday, August 15, 2007
You do it to yourself, or...
Why I am Travis Bickle in a Room-Temperature Redux
If you've seen Taxi Driver, you might realize what I mean. Travis "Are You Talkin' to Me?" Bickle had delusional paranoia and a sense that his best efforts for excellence and virtue were always being undermined by outside enemies. But in fact, it was his own habits and fears and obsessions that eroded his character, so much so that the only way his fragmented character could be salvaged was by distilling it into one pure act in one pure fugue of retributive violence. Pretty hard core stuff. I am like Bickle in that while I may feel jeopardized, say, in my pursuit of holiness or wisdom, by outsider aggressors, in fact I do it to myself. I am my own worst enemy, as are many of us. But I am only a pallid redux of Bickle in that my redemptive fugues of will are not nearly as gory, nor as cinematic. The best I can do for cinematic effect is bury myself in study (my own Chariots of Fire anthem playing in the background), or do some pull-ups (to the mental beat of the Rocky theme), or gorge on some good grub (while listening within myself to the tune from Fat Albert). I hear PSP is a good way to go, too, though.
Well, last night I was up much later than I should have been, partly because I spent time tidying up my room. This morning I spatula'd myself out of bed and enjoyed Mass (oddly, not devoted to the Assumption... I'm constantly baffled by the intricacies of the Roman rite calendar), went back upstairs to get ready, and then hustled off to work, feeling good at least knowing "my house was in order." As I came down the steps, though, I felt something firm but not really hard in the toe of my left shoe. "Odd," I mused, but had no time to examine and remove what I thought was a plebeian (albeit curious) shoe-borne dust bunny.
The ride to work was smooth and fast, enhanced by the Peppers jamming in my iPod. I hurried into class and tried to get my bearings. It drizzled this morning so I had donned my rain gear. When I tried to remove the pants, they caught on my shoe heels, so I slipped the shoes off and shook them out while I had the chance... and then saw the true form of my mysterious dust bunny.
It was a two-inch cockroach.
His sojourn in my shoe had left him much the worse for wear. He was down to half of one antenna and only one attached leg (three others tumbled out like stalk snowflakes to catch up with their corpse). I don't know if the little stowaway was alive or dead when I put him, I mean, my shoe on. He wasn't telling, either. In fact, all I could read on his face, askew on a thin broken neck, was this:
"You've been Bickled."
Tuesday, August 14, 2007
Powers of x?
I'm ramping up into a new school year, which means training my students on some basic classroom protocol. One of my little canons is to have “real paper” (i.e., a notebook or folder) as opposed to “nice garbage” (i.e., random sheets of paper from class to class). Another canon is, when taking notes, to write the date on each new day's section. This is because one of my little mottoes is, “Everything is history.” Math equations have a history, medical disorders have a history, novels have both a creative and narrative history—everything has a history. Well, as I was indicating the date to write down, August 14th, I said in my mind, “My brother's birthday is August 28th, which is 14 * 2, or 14 + 14. So if I wanted a snappy way to tell someone my brother's birthday, I could say, 'Double today's date.'” This transformation led me, reflexively, to envision the idea, being so numerical, as an equation: 14ˆ2 = 28 — which you'll notice is totally incorrect, as 14ˆ2 = 196. All the same, my psychological tendency (and, I found out, others' as well) to imagine 14 * 2 as 14ˆ2 got me thinking about exponents.
Of late, I have been trying to get a much better handle on physics, calculus and chemistry, so equations and numbers are running through my mind much more frequently. I've got books by and on Einstein, Hawking, Feynman, Gödel, et al. all lined up and I'm even working my way through Hugh Neill's Teach Yourself © Calculus and Michael Kelley's Humongous Book of Calculus Problems. The other night, for example, before I fell asleep I found myself working out a simple fractions problem (viz., “What fraction of Fr. Keefe's Covenantal Theology is endnotes and, therefore, how much of the text have I actually read?”). Instead of just waving the question off with a lazy approximation, I went through the right steps as correctly and systematically as I could: reducing fractions, cross-multiplying, etc. Math has always been the bane of my education, but this does not negate the fact that I have a wary respect for it and even an enjoyment of the math I can grasp. My motivation for learning calculus is not simply autodidacticism (though it is that); it's vocational as well. Realizing I want to study the philosophy and history of science means I must understand physics at a more than elementary level, which, in turn, means I must understand calculus at least at an elementary level.
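(For what it's worth, my review of Covenantal Theology below supplies the numbers for that fractions problem: if the second edition runs to roughly 800 pages, 231 of them endnotes, then the notes are 231/800 ≈ 29% of the book, and finishing the body text means finishing roughly 71% of it. Approximate figures, of course.)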
In any case, with numbers on the brain, I pondered how I could express my birthday-equation exponentially. Given that 14ˆ2 = 196, how can I get from 14 to 28 exponentially? Formally,
14ˆx = 28 ∴ x = ?
At first I thought it just entailed raising the number to the desired product's (2's) square root, as if the squaring process would nullify the square root and double 14. But that process seemed not only too simplistic and magical, but also seemed to be mathematically inaccurate. It was beyond me to fathom how the square root of an exponent could yield a product. Incredibly, after only a few minutes of “thinkering” (props to Michael Ondaatje), I came up with x = 1/7. I grasped the fact that squaring 14 to get 28 requires reducing one of the 14's to 2, so that 14 * 14 becomes 14 * 2 = 28. To anyone competent in math, this must seem like a drooling parlor trick. But to me it was a real mental victory!
But something still bothered me. Is there a law, a rule, I can find to help me with this kind of problem using any number? In other words, formally,
pˆx = 2p ∴ x = ?
I ran this by my students, and have offered a class prize if they can figure out a law.
My own gambit is this:
pˆx = 2p iff x = 2/p
Consider:
14ˆ(2/14) >> 14ˆ(1/7) ≈ 14ˆ(.1429) ≅ 28
But the rule seems not to hold as a rule for all other numbers. Consider:
2ˆx = 4 iff x = 2/2 >> 2ˆ1 = 2, not 4.
On the other hand,
3ˆx = 6 iff x = 2/3
does work.
I seek a rule in this matter, especially because I'm trying to extend the transformation to triplets and quadruplets, as in 14ˆx = 3 * 14, x = ? Is it just the cube or fourth root?
My best formulation, after more thinkering in and out of class tonight, is this:
pˆx = n * p iff x = n/p (uhm, except, for some reason, for 2).
I really want some input on this! I'm so dumb. This is a pre-calculus quandary. Please contribute!
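[Postscript: taking the logarithm of both sides seems to give the general law I was groping for. If pˆx = n * p, then x * log(p) = log(n) + log(p), so x = 1 + log(n)/log(p). For the birthday case, 14ˆx = 28 gives x = 1 + log(2)/log(14) ≈ 1.263. The apparent exception for 2 vanishes, since 2ˆx = 4 gives x = 1 + log(2)/log(2) = 2, and the triplet case needs no cube roots: 14ˆx = 42 gives x = 1 + log(3)/log(14) ≈ 1.416. It also means my 14ˆ(1/7) ≅ 28 above does not survive a calculator: 14ˆ(1/7) ≈ 1.46. A throwaway Python check, with a function name of my own invention:

from math import log

def multiple_exponent(p, n=2):
    # Solve p^x = n * p for x: taking logs of both sides gives
    # x * log(p) = log(n) + log(p), hence x = 1 + log(n) / log(p).
    return 1 + log(n) / log(p)

# Spot-check the cases from above: 14^x = 28, 2^x = 4, 3^x = 6, 14^x = 42.
for p, n in [(14, 2), (2, 2), (3, 2), (14, 3)]:
    x = multiple_exponent(p, n)
    print(f"{p}^{x:.4f} = {p ** x:.2f} (target: {n * p})")

So the law seems to be: pˆx = n * p iff x = 1 + log(n)/log(p), with no exception for 2 after all.]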
Why I love Taiwan
My teaching schedule is in full swing again (full-time daily classes at Hong Wen and evening classes at Victor), and, sure enough, my throat is feeling the strain. Soon enough I'll get over my summer loquaciousness and start habitually speaking only when I really have to. Like an employee at, oh, say, a spice company, who gradually cringes at seeing more spices, spices, spices outside work, an English teacher eventually starts treating his voice like a work tool: important but not as novel and "innocent" as many people feel about speaking. When speaking is a means of income, its pleasure as a means of communication tends to dwindle, if not for psychological reasons, at the very least because your throat just needs time to recover.
Anyway, it's time for me to re-stock my throat care medicine cabinet, which means drinking lots of hot water and buying things like Strepsils, pipagao, luohanguo, and pengdahai, the last three of which are all good (and cheap) Chinese medicines. Well, I went to one of Taiwan's countless pharmacies last night after dinner to get the pipagao (which is like a jar of throat lozenges in molasses form), but then was encouraged by the clerk to get something else, which I'd never tried, called runfei... [uh, well, let me check the bottle when I get home, heheh]. Pipagao was only $140NT, but this stuff was $200NT. I only had $197NT. I hemmed and hawed and was about to just buy the pipagao but then the clerk said, "Oh that's okay. You've got 200." I resisted politely but he insisted yet more politely. "We'll call it national foreign relations." I was extremely thankful as I walked out to my scooter and realized, "This is why I love Taiwan."
My throat is doing a little better since yesterday, thank you.
Etienne Gilson's Linguistics and Philosophy
[Here's a review I wrote of this book at Amazon...]
I'm glad I own this book, for a number of reasons. First, because the translation, by John Lyons, was commissioned by Fr. Stanley Jaki (who also prompted Mr. Lyons to translate Gilson's _D'Aristote a Darwin et retour_ in 1984, another book I'm thrilled to have got my hands on), and because Fr. Jaki is one of my intellectual heroes, _Linguistics and Philosophy_ (LP) has a mildly sentimental value for me.
Second, the translation itself is excellent, providing key insights into the complex use of "language" in Gilson's French (i.e., shifting between parole, langue and langage).
Third, as with all of Gilson's books, it's a pleasure to read and you learn a lot without consciously realizing you're "learning." Although he notes no one writes like they talk, and vice versa (and this asymmetry holds for a philosophical reason, namely, that written language [langue écrite] is the signum signi, the third-order sign of speech [parole], which is a second-order signification of intellectually grasped ideas, a first-order process), reading Gilson is a bit like having a nice conversation with a really smart guy. It's a deep read but not hard.
I was pleasantly surprised to find LP is an excellent but curiously neglected companion to Foucault's _Les mots et les choses_ (The Order of Things); LP was published, in 1969, only a few years after Foucault's book and deals with exactly those topics: mots et choses (words and things). (In a similar postmodern parallel, Gilson, in some endnotes, references Derrida's 1967 _De la grammatologie_, but before Derrida was a pomo celebrity, and therefore before he had to be taken seriously as a matter of academic style, which makes Gilson's comments all the funnier: basically, Gilson says, while he'd like to appreciate Derrida's arguments, he finds the writing itself mostly incomprehensible and uncompelling... but maybe he is missing something.)
Gilson's key thesis in LP is that meaning is not decomposable into purely linguistic elements. Meaning is, then, one of the "philosophical constants of language." Meaning is always complex, integral, and vastly richer than the significatory power of any basic word or term. This insight goes a long way in humbling the goals of both positivism, which saw its rightful decline in the 20th century, and its newest attempt at renaissance, robotic computational programming (à la _The Matrix_, ~"I don't see meaning; I just see code.") Words, written and spoken, are physical -- but meaning, language, is inescapably metaphysical. Every word grasps a universal and therefore the only way to avoid metaphysics is to avoid speech altogether. (This point, what I call the "adequacy of means", or "Jakian adequacy", is taken by Fr. Jaki to great lengths, in many directions, in his enchiridion philosophicum, _Means to Message_.) Because meaning, a properly intellectual possession, cannot be reduced to a sub-intellectual level of signification and therefore cannot be constituted in a non-intellectual, "intraphysical" mode of being, neither can material signs (i.e., basic code) be built up into metaphysical meaning (i.e., conscious intellection). Programming "up into" consciousness, then, presumes what it can only deny in the very act of making an intelligible claim, namely, that meaning is reducible, convertible, to basic codes/algorithms. And, as Gilson points out, while animal language would present some difficulties, ultimately surmountable, to a Christian philosopher, it would not trouble a philosopher as such, since finding language in animals would only amount to finding something, meaning, that we already grasp linguistically, intellectually, metaphysically. By analogy, finding life on Mars would not add anything to a biological analysis, much less a philosophical grasp, of "life itself" (bios kata noumenon).
Although LP should be taken seriously in any cognitive-science debate, it is by no means a treatise on, or even explicitly against, artificial intelligence. Along the way, Gilson explores poetry, logophilia, the ardor of writing, and many other things properly enjoyable for any lover of language. As for linguistics proper, Gilson makes the point that the only thing keeping linguistics from being fully scientific is its object of study: because linguistics requires using language, it is inextricable from the formation and propagation of language, and therefore must analyze itself, in a Gödelian spiral of reflexivity, in order to be rigorously analytical. Language cannot be grasped scientifically, nailed down empirically, which is why it always points toward metaphysics. Gilson makes much the same point about biology in his _D'Aristote à Darwin_, namely, that the only thing biology cannot study is life itself. These two books should be read together, not least because they offer hope and inspiration to lowly philosophers that they can in fact say something credible and interesting without being academic experts in every field. Gilson was neither a biologist nor a linguist, and he frequently admits the tentativeness of his argumentation in matters of fact, yet he made important contributions to the dialogue between philosophy, linguistics and biology. Indeed, as philosophical pep-talks go, few beat Gilson's seventh chapter, "The Seventh Letter," on Plato's advice to young philosophers.
Monday, August 13, 2007
And just because it's true
Fr. Donald Keefe's Covenantal Theology (reviewed)
[I posted this review at Amazon.com recently. Feedback very welcome.]
At the risk of exploding my carefully wrought veneer of sober scholarship and orphic wisdom, let me just say: this book is freakin' awesome. It takes some doing to get your hands on a copy, but allow me to undercut Amazon a little by saying you can get a nice hardcover one-volume edition of this book for $[...] (not incl. S&H) from a professor at Saint Anselm College in New Hampshire. I'd had Fr. Keefe's Covenantal Theology (CT) in my sights for several weeks when I finally found the seller, and then, a year later, fit it into my reading budget so that, at long last, I could enjoy this true masterpiece of theology. (I don't want to divulge the seller's details without permission, but you can email me at fidescogitactio AT gmail DOT com for his contact info.)
Lest I sound too over the top in my praise, let me note that CT's high quality rests on a number of grounds. First, it is a THEOLOGICAL masterpiece because Fr. Keefe rigorously, systematically takes theology seriously on its own terms, with a solid grasp of its proper methodological consistency. As Fr. Keefe argues in numerous ways as CT progresses, too much theology is based not on theology's proper object -- the historical Event of the Covenant offered by and in the One Flesh (mia sarx) of Christ and the Theotokos -- but instead on Aristotelian logical consistency, or on Platonic (tragic, hylemorphic) dehistoricization of the concrete, or on the later manifestations of these pessimistic cosmological worldviews. There is no inherent, antecedent, necessary reason for the fall, nor for the creation, nor for our redemption. Rather, the Event itself, in one integral and unbroken act of Gift, is its own self-authenticating, self-explaining, free and thus intrinsically intelligible foundation. Such is history: the realm of moral freedom and moral personhood.
Second, CT is highly commendable because it covers so much ground and connects so many theological dots. The 2nd edition is nearly 800 pages, 231 of which are endnotes, including a meaty bibliography and two indices. (I recommend you read the text through once before going through the notes, as I did, lest you get bogged down in minutiae.) While there are great depths in the body of CT, Fr. Keefe also saves a lot of goodies for these endnotes. I am always pleased when an author cites Stanley Jaki, and Fr. Keefe does so in crucial ways. Indeed, one of Fr. Keefe's more important methodological insights (or caveats) is the noetic parallelism between theology and physical science as objective rational inquiries. Just as science is neither intrinsically exhausted by formal coherence (cf. Gödel's incompleteness theorems) nor evacuated of progressive relevance by a Platonic flight from the phenomena, but rather tirelessly forges ahead into the depths of material reality, so theology is always on a quest -- a quaerens -- into the bottomless metaphysical riches of the concrete Event of the Covenant of the One Flesh as it is historically, freely manifested and appropriated in the Eucharist.
It helps to have a decent grasp of Platonic, Aristotelian, Augustinian and Thomistic philosophy before reading CT, as Fr. Keefe takes such knowledge for granted in his sweeping discussion of these systems. (It would also help not to cling too tightly to any one of these theological schools, as Fr. Keefe gives them much due criticism, which might irritate die-hard fanboys of any pet system.) In order to get a taste of CT, as well as to keep a bead on its North Star as you trek through its many pages, I cannot recommend strongly enough reading
1) Fr. David Meconi's article "A Christian View of History"
and
2) John Kelleher's "Knucklehead's Guide" to CT
before reading the book itself. (Mr. Kelleher, on his website W W W DOT catholiclearning DOT com, has a fine essay on Darwinism and Eucharistic realism, titled "Divine Reason and the Skyhook", which could also enrich your reading of CT.)
Third, I can recommend CT because it's not fluff. Fr. Keefe is extremely lucid in his exposition and very explicit about his premises, so he's easy to follow (except when he's dealing with things that don't lend themselves to snappy slogans, like, oh, say, the coterminously integral nature of the fall and creation, or the metaphysical as opposed to temporal priority of the first Adam in our solidarity with sin, which is itself predicated on and rooted in -- and redeemed in -- the second Adam). This is a book for serious Catholics, and I don't mean that disparagingly. It is a book for Catholics serious about their faith because it is a book about the most serious thing in that Faith, and indeed in the cosmos which it illumines, namely, our appropriation of the divine life in the free historical offering of the Eucharist. This book is not simply a tour de force theology of history -- indeed, Fr. Keefe calls history itself a theological category -- but a tour de force of specifically Eucharistic history. His greatest influences, as indicated by CT's indices and Fr. Keefe's hat tips in the text, include Henri de Lubac, Joseph Ratzinger (now Pope Benedict XVI), John Paul II, Fr. Stanley Jaki and Sts. Irenaeus, Augustine and Bonaventure. Under such tutelage, and more, the cumulative vision which CT offers is simply stunning. The tripartite analogies Fr. Keefe draws between virtually every aspect of theology have done more to synthesize the Catholic vision for me than almost any other book I've read (save the latest Catechism, etc.). The senses of Scripture (literal, allegorical-tropological, anagogical), the structure of the Mass (Offertory, Canon, Communion), the ordo salutis (sarx, mia sarx, pneuma) and the historia salutis (Old Covenant, New Covenant, fulfilled Kingdom), Thomistic metaphysics (ousia prote, ousia deutera, sumbebekos; ens, Esse, essentia; etc.) -- when you see all these in concert, the Faith makes a tremendous amount of sense!
CT is a seriously Catholic, that is, Roman Catholic, book, so while it covers a lot of ground, do not expect to find much interaction with Eastern Catholicism and Orthodoxy. In many ways, Fr. Keefe is "cleaning house" for the history and future of Roman Catholic theology. As such, he does not feel inclined, nor perhaps permitted, as it were, to complicate matters with such very distinct theologies. I admit, however, I would have really liked to see at least some rigorous discussion of Palamism, of St. Maximus' theology of the logoi, and of the issue of God's "absolute divine simplicity." Fr. Keefe explicitly rejects the (pagan) Deus Unus (as did, for example, Cdl. Ratzinger in _Feast of Faith_), so it would be nice to see how he deals with increasingly shrill accusations that the Catholic dogma of divine simplicity, without the doctrine of a real essence-energies distinction, necessarily entails creation and redemption and thus compromises God's personal freedom. Fr. Keefe would deny this claim root and branch, of course, for in CT "freedom is the prime analogate" of redemption, but all the same, it would, as I say, be nice to see him do so explicitly and formally. (Then again, the objection may be so partial in his sight that it ipso facto merits no refutation. Likewise, he brushes aside the monogenism-polygenism and de auxiliis debates in a few paragraphs here and there.)
CT is a great book to see in a coherent way the big picture of Catholic theology, and, what's more, to see it afresh. CT's arguments add a lot of weight to the all-too-easily forgotten goodness of the Church's Good News, which stands in perpetual contradistinction to the pessimistic, deterministic, cosmological imaginings of fallen man. Despite how slow-going it was at times, I could hardly put the book down once I got my hands on it. CT will make you think and, in the process, will help you believe. Get yerself a copy and treasure it. The Good News is still good and freedom is historically real, as real as man is historical, for the second Adam is immanent -- yet also transcendent -- at every point in space-time.
Pope on a rope?
[This is in response to Acolyte's posting, at Energetic Procession, of one of his key patristic evidences against papal supremacy...]
I have commented on this quotation before on my blog (it's linked in my large, and notorious, prooftext-florilegium of papal supremacy, as some here would have it), so all I shall say at this point is: the knife cuts both ways. The bolded claims can just as easily be read as a basis for papal autonomy as for empirochoretic collectivity, since the Roman pope does not need the input of other apostolic successors in the execution of his work. If anything, ironically, the kind of each-to-his-own episcopacy these quotes are motivating undercuts the collegial thrust of Orthodoxy, wherein everyone really does need the collaborative chrism of everybody else (magisterially speaking). Indeed, since all cannot do without all, it is a prickly issue how any patriarch feels he can do without another one, especially one such as Peter, long understood to "reside" in Rome.
In any case, as Arthur has noted, the dogma of papal infallibility does not negate the collegiality of the episcopal college -- it presupposes it. The pope's only function (to use such a term) is to be the visible sign of episcopal union, a union primarily expressed in worship (concelebration) and teaching (magisterial orthodoxy); clearly he can only be that sign of union IN the episcopal communion. Supposing that his Petrine autonomy removes him from his episcopal obligations exactly reverses the intent of papal orthodoxy. In order to show the Pope is really at odds with this quote, one must find a case of papal authority being exercised totally and explicitly apart from, and in opposition to, the larger episcopal (as apostolic) federation. Further, as Michael L argues, what is most clear about this passage is that it establishes the apostolic basis for episcopal councils, not the impotence of the pope, nor, moreover, the sacramental equivalence of bishops and the Apostles.
I find it very telling that a very large amount of patristic "matter" (not to presume here to call it evidence, nor to deign to call it prooftexting) is supposedly answered by this one quote. You've trotted this horse out many times before, so it must be a prize (perhaps lone?) stallion among lesser mules. It may be the silver bullet you want, but that's just it: it is THE bullet, and, by my lights, mostly a dud for the apposite issue.