Friday, March 27, 2009

When arguments attack!

James Franklin, professor of maths at the University of New South Wales and the literary executor of the late, great(ly controversial) David Stove, has a delightful article highlighting Stove's 1985 contest for people to find "the worst argument in the world." Stove ended up giving the award to himself, for finding the following as the worst argument:

We can know things only as they are related to us, under our forms of perception and understanding, insofar as they fall under our conceptual schemes, etc. So, we cannot know things as they are in themselves.

As Franklin comments, "...all there is to ... arguments ... [of this kind] is: our conceptual schemes are our conceptual schemes, so, we cannot get out of them (to know things as they are in themselves)." Or, Franklin continues, "[i]n Alan Olding's telling caricature, 'We have eyes, therefore we cannot see.'"

Franklin's essay, like Stove’s contest campaign, is a delightful skewering of skeptical pretensions. As long as you keep qualifying "things" to be "things in themselves," or "things as they exist unperceived in and for themselves," you can define your way right out of knowledge, and eventually construct "a seriously heavyweight argument, like: 'We can eat oysters only insofar as they are brought under the physiological and chemical conditions which are the presuppositions of the possibility of being eaten. Therefore, We cannot eat oysters as they are in themselves.'" (cf. Stove, 1991, The Plato Cult, pp. 151, 161)

This Worst Argument is at the heart of Kuhnian (and subsequent Bloorean) constructivist irrationalism in the philosophy of science. In response, Franklin argues that a mere causal story (i.e., a mere historical recounting of how various scientists reached certain conclusions), in no way negates the importance of rational truth in science and human thought in general.

So far it has been taken for granted that the invalidity of the 'Worst Argument' is obvious. That is because its invalidity is obvious: its conclusion, 'we cannot know things as they are in themselves' just does not follow from the premises about how we can know things only as they are related to us. Nevertheless, what is wrong with the argument can be seen more clearly through a parallel case. Take an electronic calculator. Why does the calculator show 4 when someone punches in 2+2? On the one hand, there is a causal story about the wiring inside, which explains why 4 is displayed. But the explanation cannot avoid mention of the fact that 2+2 is 4. On the contrary, the wiring is set up exactly to implement the laws of arithmetic, which are true in the abstract. The causal apparatus is designed specifically to be in tune with or track the world of abstract truths. If it succeeds, the causal and abstract stories co-operate, and the explanation of the outcome requires both. If it fails, the mistake is explained by some purely causal story. It is the same with brains that do science. For whatever reason, brains have the ability to gather reliable basic sensory information about the world. Information gathered by observation and experiment supports some scientific theories better than others, and a brain that draws the correct conclusions is performing well, while a brain that does not do well needs its mistakes explained.
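Franklin's calculator point can be made concrete with a toy sketch (my own illustration, not Franklin's): the "causal story" is nothing but logic gates, yet the gates are wired precisely so as to track the abstract laws of arithmetic, which is why both stories are needed to explain the output.

```python
# Toy illustration (hypothetical, not from Franklin): a calculator's "causal
# story" as logic gates, wired to implement the abstract truths of arithmetic.

def full_adder(a, b, carry_in):
    """One bit of 'wiring': returns (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add(x, y, bits=4):
    """Ripple-carry adder: the purely causal apparatus."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(2, 2))  # 4 -- the wiring succeeds exactly insofar as it tracks arithmetic
```

If the gates were mis-wired, the display would show something else, and that failure would have a purely causal explanation; when they succeed, the causal and abstract stories cooperate, just as Franklin says.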

The point that I take from this, vis-à-vis my recurring interests at FCA in rationality and neuroscience, is that reasons qua rational ends have an irreducible causal potency in the world, which is not to be reduced to, nor simply deduced from, bare causal effects. Because we can reason, and because reasons are indeterminate among their causal instantiations, we are free to act reasonably and purposefully. Or, as St. Thomas says in De Veritate (24, 2, resp.), "Totius libertatis radix est in ratione constituta [the root of liberty, whole and entire, is constituted in reason]."

Saturday, March 21, 2009

Anonymous Anonymous…

A: "Hello, my name is, well, forget about that for the time being."

B: "Hello,…, well, hello there."

A: "Thank you. Yes, my name is, well, as I said… and I am anonymous."

B: "So are we, anonymous––anonymous. So are we all. You are not alone, whoever you are."

A: "Wow. It feels so good to be understood."

B: "The first step to overcoming anonymity is admitting you have no name."

Know your need…

Knowing the need for prayer is itself an answer to prayer.

Knowing other people's need for love is itself the seed for meeting that need.

Know you need. Know your need.

Know you're needed.

Friday, March 20, 2009

Called up, now walk...

Walk how? Largely irrelevant. Provided you keep walking.

All are called up to Him, out of the prison of themselves. All are called upward to the Holy of Holies, away from the Folly of Follies: men and women on toilets they call thrones. All are called up the same flight of stairs, lit by the same light, nourished by the same sustaining air. Who is beside you, ahead of you, behind you as you walk, is a matter of historical providence. Yet, how you walk is a matter of personal vocation and disposition.

Just as walking up stairs with different strides works your muscles differently, so walking up the Way works your soul in different ways. Walking up one step at a time, for instance, works the calves and soleus more than the quadriceps and hamstrings. By contrast, taking two steps at a time works your quads more than other muscles. Or again, taking three steps in a stride puts the power in your hamstrings and glutes. So we might imagine the laity taking one step at a time, priests striding two steps at a time, and bishops covering three steps at once as they go.

Visibly, the latter two strides might seem more impressive, and do require greater power per footstep, but intrinsically, all three modes of walking require the same thing: will power and faith to persevere towards the same goal, spurred on by the same vocation from the same Master.

Another point in common is that falling is the same catastrophe for all three kinds of walkers. We are equal by virtue of our calling, our goal, and equal in the indignity of our fallings. Even so, we are not identical in the way we ourselves agree to walk and the way we ourselves bring about our fall.

Friday, March 13, 2009

How to kill yourself...

...and live to tell the tale.

[This is a first draft, so pardon any glitches or gaffes. What's more, please tell me about the glitches and gaffes! Thanks!]

What a hoot I had yesterday in a couple of my classes. We were doing a lesson on animals. I think the students got the point about behavior––what we see animals do––and their abilities––what they can do in various, perhaps unusual, situations. We did the CD exercises and vocab. sections, and then were left with 5 or 10 minutes of dead time before the bell. I didn't feel like forging ahead with more book-content, so I decided to teach the kids some "natural skills." Simple things, like, you know, how to kill a wolf, how to catch and kill a raccoon, and how to hunt a polar bear. ("See," I reminded them, "isn't English fun!")

Seeing as some of my readers may not be savvy about the above natural skills, allow me to tell you how to kill a wolf, a raccoon, and a polar bear. You never know when you might need to know. More importantly, however, I realized, while I was driving home, how the three methods I taught throw startling light on what the Church means by sin and salvation. Take a minute to learn how to kill a wolf, a raccoon, and a polar bear, since it just might save your life––here below and in eternity.

If you should someday find yourself stuck in the woods with wolves, first, hopefully you'll live long enough to pull the following stunt in the interest of self-preservation. Your food is dwindling. Wolf meat sounds great. But how to catch and kill one? First, you need a good knife, like a buck knife or a Ka-Bar, or just, well, not just a Swiss Army knife. Second, you need to find a spot away from your shelter to lay the trap. Third, you dig a small hole in the ground, large enough to bury the hilt, and then secure the knife point-upward in the hole. Fourth, apply a fair amount of blood to the blade (either by cutting yourself somewhere not too harmful, or by bleeding some smaller creature you caught). Fifth, go back to your shelter and wait. In time, a wolf is bound to smell the bloody blade and approach it for a free snack. A blood lollipop, if you will. He will, naturally enough, lick the blade to slurp the blood, but as he does so, he will slice into his tongue, which will, in turn, just add blood to the blade. The minute, clean cuts into his tongue are not, by a wolf's standards, noticeable enough to stop him slurping a blood fountain in the forest. The more he licks, the more he drinks––and the weaker he gets… and the more hungrily he slurps. Eventually, the wolf will either drop dead there, or stagger away, leaving a bloody trail for you, the Lupine Candyman, to track and secure for meals.

That's one way to kill a wolf. Or a dog, if you really are that draconian about keeping the neighbor's pooch off your lawn.

Killing a raccoon works in a similar, albeit less gory, manner: the animal's psychology is its own downfall. The wolf, being a fierce carnivore, instinctively drinks himself to death. The raccoon, by contrast, can be done in by his sneaky scavenger brain. Killing a wolf is more daunting, because you still have to reckon on its pack mates looming by your trap, but killing a raccoon does require more sophisticated equipment. You need a canister or can and a lid or some kind of insert that can occlude the top of the can. You need to leave (or make) a hole in the lid, and then jam nails or hard sticks downward through the lid into the can, so that the hole forms a shrinking opening, like an upside-down safety cone or an inverted ice cream cone… made of nails… in the forest.

At any rate, next you place something shiny or tasty, but not flexible or fragile, in the bottom of the can. Leave the trap and wait (maybe the wolf is done drinking by now). Eventually, a raccoon will come to inspect the can, notice the object inside it, and reach in to grab it. When he retracts his hand, however, his clenched fist is now too large to slip out of your ice cream cone lid. The raccoon will tug and tug, but he will not let go of the object; scavengers don't become scavengers by dropping what they seize. As you make your rounds, check the trap, and, if a raccoon is stuck there, serenely, and politely, approach him and club him to death. Then reclaim your shiny prize for another round of raccoon roast.

The final method is certainly the most grueling and obscure, but, hey, why not at least know how to kill a polar bear? How many of us––of you––can honestly say you know how to hunt a creature with ten centimeters of blubber in its coat, that can run over 30 MPH, stands about ten feet tall, and weighs about 680 kg (roughly 1,500 lbs.)? Well, hopefully, in a few moments of reading, that many more of you can.

You are an Eskimo. Don't worry, no seals were harmed in the making of your tundra clothes… no more, I mean, than really had to be killed. Anyway, you are a hungry Eskimo. Winter has passed. You notice a hibernation hole and, to speed things up, see a skinnier, very hungry polar bear emerge into the warm subarctic spring air. Being an Eskimo, you carry a pouch of blubber inside your parka, in case you get hungry, or in case you want to kill a polar bear. Like now. So, you withdraw a hardy, springy stick (probably a treated strip of tendons as hard as wood), sharpen both ends of it, bend it in half (so it looks like a giant pair of tweezers… made of tendon), and hold it in one hand while you take a hunk of blubber out of your pouch. Insert the tendon tweezers into the ball of blubber, roll the device on the ice to harden the blubber, and then chuck it towards the bear (without, of course, letting him see the big, warm piece of meat––you––waving its arm from behind a snow drift). The bear will smell the blubber and go to eat it, tweezers and all. Inside his stomach, of course, the blubber will soften, melt, and the tiny spear inside it will spring open, tearing open the bear's stomach.

At this point, you have bested the animal, but that doesn't mean he will act bested if you go to claim your prize. The bear will, naturally, start scrambling away in pain, but if you get too close to him, he will easily destroy you. So, as a savvy Eskimo, you bide your time… over the course of two or three days as the big animal staggers for some kind of respite, leaving behind him a glowing trail of red on white, blood dripping from his mouth and anus as he goes across the snow. Finally, he will collapse, bled out, and then you can go to eat his eyes, tongue, brain, and the like. It is also in your best interest to mark the location, since you will most likely store most of the animal in ice to reclaim months or years later, bringing only the better portions of meat on your long trek home.

Now, this is all grimly fascinating enough, I know, but what does it have to do with you, a modern, relatively very well-off citizen of the Technoglobopolis we call the world? You are the wolf. You are the raccoon. And you might have been, or may yet become, the polar bear of the above tragic fame. Correspondingly, the knife is sin, the shiny trinket is your leading idol, and the blubber grenade is potentially both sin-and-idolatry and your deliverance therefrom. Let me explain.

In the forest we call the world, traps are laid out for us, like weeds in an otherwise good field of wheat. We are drawn to the pleasures of sin as a wolf is drawn to blood. As I've quipped for years, "Sin tastes good." There is no denying the prima facie appeal of hedonism: if it feels good, it can't be all that bad. Indeed, if it makes you feel good, it must be good. Alas, however, the pleasure of sin is but the crimson patina on its true nature: a lethal knife that we willingly lick and lick until we die. The most tragic aspect of sin is that it uses, exploits, a truly good fruit of creation (metaphorically, the otherwise natural good that blood is for a wolf) in order to undo us. Sin, like a sly drug dealer, gives you the first lick, the first taste, for free; every lick, every hit, after that is on you––nay, it is you. As with the wolf, the blood, the life, we give to maintaining a seemingly "sustaining relationship" with sin is exponentially more than the blood, the life, it gives us in return. Sin leaves us like a tattered tangle of cold shredded flesh dangling from a blood-soaked mouth. Indeed, the harder and faster we embrace it, the deeper it cuts into us.

Let us now turn to the raccoon, or, better, let us now turn you into the raccoon that you in fact are. A raccoon looks like a bandit, in my telling, at least, for a very good reason: he likes to grab pretty things and thinks he can get away with it, that is, with them. So it is with us and our idols. We are naturally, and legitimately, drawn to pretty things, to things better and higher than us. It is, in other words, of the essence of the world of created good(s) to draw us naturally out of ourselves and upwards to their Source. We are created to worship, but we are prone to worship anything but the One True God. We see an idol within our reach and latch onto it as a raccoon grasps his shiny trinket.

The problem with being prone to clutch onto idols is that we thereby become prone to them. The idol allows us to enter its temple and seize its alluring power, but it does not allow us to walk away as long as our hearts are clenched around it. All we have to do is release our idols--the little gods of greed, lust, deceit, pride, indolence, and more--and we will be free. But we won't let go. Won't let go, that is, until our Enemy comes along and bashes our brains in for his eternal feast below. Only then does the idolater's grip loosen and the idol "releases" him to eternal unfreedom.

Lastly, we turn to the polar bear. We can turn you into the polar bear qua sinner and idolater, or we can turn the polar bear into our Enemy. In the first case, sin is like a blubber grenade we find as we stumble along in search of nourishment. We are created to consume divine glory, but we quickly settle for lesser things, like licking bloody knives and clutching trinkets to death. On the outside, sin seems unambivalently appealing and safe. Look, everyone is doing it. It tastes good. It's constantly being thrown in our direction. On the outside, sin seems firm and nourishing, but, of course, once we take sin into ourselves, it dissolves its facade in our innermost regions, revealing its lethal nature. The few moments of satisfaction, savoring the slippery richness of sin, quickly melt into a nightmare of pain and looming death. The trap springs inside us; our life blood becomes a stone within. We can still function, as the polar bear can still walk away and fight if cornered, but we are effectively dead the moment the stick rips open our gut. Then it is a question of how much blood and moaning we can disperse over the face of the earth as we stagger, blindly and futilely, away from our patient Enemy tracking us. Our fate is sealed from within. It is only a matter of time. Only if a skilled and brave surgeon intervenes, and only if he subdues us as we relent to being subdued, will we be saved from our swelling internal death. Such is the life of a sinner under the guise of a polar bear.

In the second case, we can imagine our Enemy as the polar bear. He emerges from the darkness of winter, seeking food, seeking to devour whatever he can find. The hunter knows the bear's hunger, like the wolf's, is his greatest weakness. So he sets a trap. He places a precious jade spring-blade inside a body of blubber, tosses it into the world of the bear, leaving it helpless before the bear's gaping maw. God was in Christ as the spring-blade is in blubber. At the Cross, Jesus Christ allowed Himself to be consumed, devoured, chewed up and swallowed, by the Enemy of God and Mankind. He let the Devil have his great moment of victory: the Son of Life became the main course of Death. But once inside the body of death, deep inside the bowels of the earth, the Son of Man transfigured His body of lowly blubber into an incandescent oil lantern, unleashing the infinitely sharp razor of life inside the Enemy himself. He then left the Enemy's enclosing power and returned to the Father so that all His brethren "according to the blubber" might also be transfigured into burning lanterns of incense offered to God.

Meanwhile, of course, the bearish Enemy has nothing to do but stagger over the face of the earth to maul and consume whatever he can in his last agonized moments. He is, of course, still a mighty foe not to be engaged or trifled with too carelessly, but his fate is sealed from within. It is only a matter of time, he knows. So, as Christians, it is merely our job to keep ourselves safe from his gnashing jaws and deliver other bystanders from his lurching path of doomed destruction.

If you know the answer, please raise your foot...

Consider this factoid:

Numerous studies have shown a strong correlation between the size of students' feet (and, thus, shoes) and their reading ability. Statistically speaking, the bigger a student's shoe size is, the better his reading level is.

Neat, huh?

My entire factoid, while true, is a logical sham. Can you explain why?
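For readers who want to see the mechanics of the sham, here is a hypothetical simulation (all numbers invented for illustration) in which feet and reading have no influence on each other whatsoever, and yet a strong correlation between them appears:

```python
# Hypothetical simulation: a hidden third variable drives both quantities,
# so they correlate strongly without either causing the other.
import random

random.seed(0)
ages = [random.uniform(6, 12) for _ in range(500)]
shoe_size = [1.5 * a + random.gauss(0, 1) for a in ages]   # feet grow with age
reading = [10 * a + random.gauss(0, 5) for a in ages]      # reading improves with age

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(corr(shoe_size, reading), 2))  # strongly positive, yet no causal link
```

The statistic is perfectly real; the inference from it is the sham.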

Descartes teaches ESL...

... or at least, his coordinate system has given me a bright idea for those of us who teach English to non-native speakers.

In Taiwan I've found one of the most chronic frustrations (aside from the absolutely baffling inability of nearly all my students to "get" eye contact and being-pointed-at... they simply can't tell if I am looking at them, or at all the people around them, and then, if they do see me peering directly into their eyeballs, they don't grasp that it means I want them to respond... but I'm venting!) is being unable to specify clearly where students should look on the page, or what word they should circle, or where they should write a note, and so forth. No matter how vigorously I point at the page, the best the students seem capable of for up to a minute is wobbling their heads around and asking their peers where I'm pointing, which of course takes their eyes off where I'm actually pointing.

So it dawned on me this morning: all textbooks should have coordinate axes on the page margins. Letters along the bottom, numbers along the outer edge, or vice versa.
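The scheme is as simple as it sounds. As a sketch (function names and page dimensions are my own invention, purely illustrative):

```python
# Hypothetical sketch of the textbook-margin idea: letters along the bottom,
# numbers along the outer edge, so any spot on the page has a short name.
import string

def margin_labels(n_cols, n_rows):
    """Return the column letters and row numbers to print in the margins."""
    cols = list(string.ascii_uppercase[:n_cols])
    rows = list(range(1, n_rows + 1))
    return cols, rows

def cell_name(col, row):
    """The cue a teacher would call out, e.g. ('C', 4) -> 'C4'."""
    return f"{col}{row}"

cols, rows = margin_labels(8, 12)
print(cols[0], cols[-1], rows[-1])  # A H 12
print(cell_name("C", 4))            # C4
```

Then "circle the word at C4" replaces a minute of frantic pointing.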

Why there are falling rocks…

…and why there are rocks at all.

These are two very different matters. Sort of like why I laughed this evening, and why I exist at all up to and including this evening.

Continuing in my "fair is fair" repartee with unBeguiled, I bring your attention to a post of his debunking God––in a few hundred words, of course––as "the ultimate non-explanation." He asks us to

…compare … [a strong, simple scientific] explanation [e.g., why rockets fall, or, at high enough velocity, will orbit the Earth] to a God-of-the-Gaps explanation. Suppose I had been told as a child that God kept the moon in orbit. Would I have gained any kind of understanding? None at all, because this kind of explanation tries to explain a mystery in terms of another mystery. So many arguments for God are like this. Theists pick something complicated like consciousness, morality, or the existence of the universe and claim that only God can be the explanation.

But I must ask, can he tell me anything that is not explained in terms of (i.e., with relations to) anything else? Are there logical atoms and purely abstract monads? I should say not. For, the world is analogical through and through. Even the simplest phenomena are mysteries in their own ways, albeit only analogical sub-echoes of the Mystery of Mysteries. We speak of atomic "forces" and chemical "attractions", and the like, because they bear a suitable analogical resemblance to something we know better, namely, the force of our hands throwing a ball and the attraction we feel to a lush red apple or, say, (in my case, as a man) a lush redhead named Cherry. And this analogical resemblance is not linguistic deviance: it is a legitimate use of one level of reality to explain another level. If we wanted to, we could commence speaking of "being covalent" with people, or "quantumming to a new house", and so on. In any event, the logic of analogy is tied up with all other -ologies (i.e., reasoned discourses), and rightly so, as the illustrations about atoms and chemicals should make clear. We make sense of psychology by drawing from biology, and vice versa. We make sense of "atomology" by drawing from psychology and biology, and so on. Ultimately, we make sense of all levels of analogical being by following the "analogic" upwards to God: atoms resemble agents and agents resemble the Agent of Agents. We needn't "posit" God to "explain" why or how rocks fall; but we do look to Him in order to explain why there are rocks in the first place, as well as why rocks only fall in certain terrestrial conditions, as opposed to falling, hovering, leaping, etc. unpredictably. This is what St. Thomas means to express with the fifth way (i.e., the teleological, or, as I also like to call it, the formal-nomological).

Seeing as the title of his argument rests on the words "God" and "ultimate non-explanation," I would like to ask unBeguiled a simple question: Does he, or anyone of a like mind, even believe there is such a thing as an ultimate explanation? If so, what is it? If not, then he has, like many atheists, I believe, ruled out reference to God from the start.

Thursday, March 12, 2009

You have my word that my word is good for nothing…

Fair is fair. For the past month or so, unBeguiled has been a regular interlocutor on these digital premises, commenting on nearly all of my posts. And I have to thank him. The challenges presented by his retorts, while, I admit, not always satisfactorily rigorous in my opinion, have really helped me clarify my grasp and exposition of ArisThomistic truth on a number of issues. He has a blog too, of course, so, after weeks of him adding his two cents about my two cents, I decided to add my own two cents about his two cents.

In one post, unBeguiled admits a fondness for the ontological (Anselmian) argument. A fondness, however, that only sweetens his more basic rejection of it. In a word, he says, the argument fails because it attempts to prove the existence of something––God––just by virtue of being able to imagine it. "My bottom line response," he pronounces, "is that we can't just declare by fiat that part of a thing's definition is that it exists." So, he concludes:

Language is useful, but we should not take it too seriously. Whatever we might think words can tell us about the actual world is irrelevant. Reality relentlessly imposes itself, regardless of what we say or think.

That's right: "Whatever … words can tell us about the actual world is irrelevant," said the man, using words, to explain reality.

I am reminded of the man who preached a sermon on the non-reality of human speech, or the woman who wrote a book on the meaninglessness of words, or the solipsist who threw a party for her friends.

I must ask, why is it illegitimate for some definitions to entail existence, when other definitions plainly entail non-existence? That is, if language is totally irrelevant to establishing the truth about reality, why can it do a very good job of specifying and delimiting non-reality?

A "square circle" is a perfectly legitimate linguistic entity, since it uses phonemes, graphemes, and syntax in all the right ways, but it is a totally illegitimate concept in reality. Is, then, such a linguistic entity, as one among many examples, truly irrelevant to pointing out important things about reality? If it is not, by what criterion does one exclude, say, the ontological argument from revealing an important truth about reality?

Further, can unBeguiled, or any critic of the same mind, be so sure that no definition entails existence? What, for instance, is the definition of "the present moment"? Or of "a real pair of shoes"? How about "a necessarily existent entity"?

Finally, would he grant that a necessarily existent being is possible?

If you are a determinist…

…you will, by your own confession, live your whole effortless life and die your one purposeless death without ever having done anything. If determinism is true, no one and no-thing has ever done anything of its own accord; only "things have happened" as a function of blind, unsleeping Nature. You have never done and can never do anything wrong, since you cannot, by definition, do anything not already inscribed in you by an illiterate, mute Nature. Moreover, you will never have done anything good, since you can't do anything good, insofar as good is just what ends up happening in Nature by means of your paralyzed bundle of atoms-on-loan, on loan, mind you, from no one.

Good thing for the Scholastics…

At his blog, unBeguiled, a regular commenter here of late, cited page 109 of Edward Feser's The Last Superstition:

"Angels, not being material, are pure forms or essences on Aquinas's view, but even with them their essence needs to be combined with existence in order for them to be real, so that they too are composite."

He then complained:

Putting aside whether parsing the nature of angels could ever be rational, how could anything be both "pure" but also a "composite"? Professor Feser's muddled book is rife with this sort of linguistic deviance.

In response, one reader added:

Angels are pure "essences"? What does that even mean?

I can't help thinking about Gen. Jack Ripper in Dr Strangelove and his obsession with Purity of Essence and the need to protect it by starting a nuclear war.

The passion people pour into meaningless phrases continues to amaze me.

Sigh. You'd think the Middle Ages had never happened. Good thing for those hoary old Scholastics they never had to face down such mighty objections.

Two is a pure form, a purely formal object, the essence of which is strictly independent of any material instantiation of it. Once "2" gets written on paper or typed onto a computer screen, however, it is "materialized" and thus becomes a composite of a '2'-essence and a materially specific existence. Every instance of "2" instantiates the essence of 2, but no composite instantiation of it in material existence exhausts the essence of two, since it can always be instantiated in its essential purity by some other material instance. That is, we can't say this instance of 2 is "more truly" 2 than that instance of it; they both enjoy the identically pure essence of 2, but do so in materially, compositely specific ways as they happen to exist. Hence, while a written "2" enjoys a composite existence, it does so by virtue of the pure essence of 2 informing the matter involved.

The same goes, although even more vividly, for all formal operations, such as addition, subtraction, modus ponens, and so forth. Every instance of such formally pure operations enjoys a composite existence when it is materialized, but no physical set of instantiations can exhaust the essentially pure formality of any such operation. Hence, any physical instance of a formal operation partakes of a formally pure reality that exceeds the power of physical computation. Any instance of, say, addition could always be challenged and revised as but a covert case of "quaddition" (cf. Kripke, Goodman, the grue problem, etc.).
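To make the quaddition point concrete, here is a sketch of Kripke's deviant function from Wittgenstein on Rules and Private Language (the threshold 57 is Kripke's own arbitrary choice; any finite history of calculations fits under some such bound):

```python
# Sketch of Kripke's "quaddition": a deviant rule that agrees with addition
# on every case a finite computer (or brain) has ever actually checked.

THRESHOLD = 57  # Kripke's arbitrary bound

def quus(x, y):
    """Quaddition: indistinguishable from addition below the threshold."""
    if x < THRESHOLD and y < THRESHOLD:
        return x + y
    return 5

# Every case checked so far may be consistent with both rules...
print(all(quus(x, y) == x + y for x in range(57) for y in range(57)))  # True

# ...yet the rules diverge beyond the checked cases.
print(quus(60, 60))  # 5
```

No finite record of a machine's (or brain's) behavior settles whether it was adding or quadding all along; that is the sense in which the pure form of the operation exceeds its physical instantiations.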

To get the full story, read James Ross's "Immaterial Aspects of Thought" here.

Monday, March 9, 2009

Beware the Magnevonian Myth!

In an earlier post, "Actions and events…", I described three bodily motions (a., b., and c.), which, though they all look the same and produce the same effects, are not the same action at all. I claimed that intentionality, as a non-physical ascription of formal coherence, needs to be "layered" onto a., b., and c. (and onto all such scientifically observable realities), otherwise they are "behaviorally… and neurologically… indiscernible."

A commenter disagreed, however, by saying, "…the [neurologically distinct] brain states would be discernible, not indiscernible. From one brain state we get swatting the fly and from the other we get waving."

I believe I preempted the commenter's objection in an ensuing paragraph of the post to which he was reacting, but I did so too cryptically. So I will highlight my garbled answer from the earlier post, and then offer a better, fuller defense of what I meant by it. I said:

The critic that … [deploys purely neurological] descriptions of all behavior is oblivious to the fact that we can only imagine a coherent difference between the a., b., and c. actions at the neural level because we already know how those actions are formally (viz., teleologically) distinct. We know the brain phenomena will be different "on the inside" because we already know the actions in which they are involved are formally different "on the outside." If, however, we strip away the explanations of the actions that I gave in listing them, they are effectively indiscernible.

I realize now (for the second round of reader responses) that I should modify the phrasing of my point about "indiscernibility." I muddled things by using words in a way that appear to be contradictory. So, to clarify:

The adverb "effectively" is used to modify the act of discerning, not the physical structure of the motions and/or neural networks. Accordingly, I (in the above quote) spoke of an effective indiscernibility, though unfortunately that phrasing followed a reference to "neurological indiscernibility." Despite this error, however, my point is that, viewed from a strictly behavioral and/or strictly neurochemical perspective, what we see happening in a., b., and c. is indiscernible without (surreptitious and ineluctable) reference back to what we already know is happening. Consider:

If you took the proverbial man on the street and played three video clips for him—of one and the same man, call him Alfred, doing a., b., and c., and/or showed three video clips of Alfred's brain via an fMRI for a., b., and c.—he would have no idea what he was seeing. You could, of course, instruct him that the fMRI clips displayed Alfred's brain when he performed a., b., and c., but our man from the street would still need to know what those three intrinsically indiscernible motions were "about" (cue intentionality and teleology!) in order to begin to parse similar fMRI clips that he might be shown later. His pedagogy in brain images might be so perfect that he could narrate Alfred's daily happenings without seeing him, just by watching a streaming fMRI broadcast of Alfred's brain (as he wore, say, an fMRI helmet). A sort of Neo for the Neuro-Matrix: he doesn't even see code anymore, just what it means. Aye, there's the rub: what it means. For at every turn, our Neuro-Neo would be converting the fMRI lightshow images back into the formally distinct and metaphysically "autonomous" realities of human action.

The point, which still stands, is that the effective discernibility of physical occurrences hinges on a formal recognition of them (X, Y, Z, etc.), a discernibility, however, which is not to be discerned in the naked contents of identical motions and/or intentionally unfiltered neural flashes. (As always, unless I state otherwise, I am not using "intentional" in the sense of "purposive," but of "ascriptively-being-about.")

As far as explaining actions goes, there is nothing intrinsically discernibly useful for that purpose in the same class of skeletomuscular movements (viewed behaviorally), nor in the same class of neural flashes, since it is only by an act of intentional ascription (by the observer) that they have any reference to what is being observed on the macro-level. If our species saw only atoms, or had fMRI goggles from birth, we would still have to learn what those atomic swirls and synaptic light shows mean in terms of macro-reality and human behavior. In either case, whether it's the macro-motions or the (neural) micro-events of a., b., and c., we are just seeing a human physically perform some action for some purpose. We can and should speak of Alfred's brain as unself-consciously and coherently as we speak about his hand or eye. Seeing a man "wave for X's attention" may be more "interesting" (like all fads are) now that we have "an all new way" to "look into" humans, but once the novelty wears off, we are just seeing the same old same old: a hylomorphic entity dynamically informing its proper matter toward certain ends in certain formally coherent ways, in sensory connection with an intelligible environment. Unfortunately for the fans of neuro-reductionism, the more everyday fMRI and the like become, the more everyday their allegedly novel explanations will become. Today's scientific craze, like the pop-up toaster of yore, is tomorrow's… well, pop-up toaster.

Even somewhat long-term readers of this blog (aka, souls in Purgatory) know I have written about the "mind-body problem" before, specifically in criticism of what I call neuribilism (neuro-, brain + ibi-, where). Neuribilism is, as I wrote in "Lower the bar…", the claim that there is a one-to-one correspondence between human cognitive capacities and the structure of the brain. The problem with neuribilistic reductionism (or neuromania) is that, while it is portrayed as a supremely better form of explanation than the “folk theory” of action (i.e., purposive rational behavior), and while it is vaunted, in a devil-may-care way, as a radical, corrosive undermining of our feeble “traditional” patterns of thought, in fact it can easily be turned on its head and shown to be just as mysterious and provincial as good old folk psychology.

Let us imagine a race of beings—call them Magnevons—with eyes like fMRIs, who see each other’s neural actions as we see each other’s limbs and bodies in action. Indeed, Magnevons can perceive neurochemical interactions and processes as subtly as any human can use language or people-watch in the park. They perceive only what we, for about four centuries at least, have called primary qualities (extension, firmness, electromagnetic waves, etc.), and deny the reality of what we (for about the same amount of time) have called secondary qualities: color, musical tone, fragrances, etc.

For centuries the Magnevons labor under the atavistic superstition that they (to take a few crude examples from Magnevonian folk psychology) display (neurologically) Q when they are hungry and Qq when they eat, P when they are in love and Pp when they copulate, T when they are angry and Tt when they murder, and so forth. Over time, however, a few brave skeptics scale the walls of folk philosophy and bring down the king of Magnevonian commonsense. They are the few, ahem, souls brave enough to argue that, in truth, Magnevons don’t Q when they are hungry, nor Qq when they eat, but, in truth, Magnevons “clutch their stomachs” (!) when they Q and “raise food to their mouths and chew” (!) when they Qq. What they are "really just" doing for all these "folk concepts" (i.e., Q, Qq, P, Pp, etc.) is finding their fMRI displays subject to "bodily actions" at a "higher causal level." What they are "really" seeing when they display C413 is some such "bodily action" at work.

The new wisdom is immensely popular, not only because it is so novel and dashingly audacious and up-to-date, but also because it makes so much more sense of Magnevonian reality. The new wisdom, for instance, explains in crisp causal terms how (to Magnevons) invisible but theoretically evident “corporeal objects in the ‘external’ ‘superatomic’ world” impinge on Magnevons’ “bodies” (!) and thus, with perfect predictability, push and pull them, constrain and develop them, the wee Magnevons. Lost, in ignominy, is the old illusion that Magnevons can just use their "minds" and “mentally” manipulate their visible chemicals and neurons to achieve what they “want” to do. In its stead is the new insight that their "bodies" perform certain rationally intelligible operations in response to the presence and absence of certain physical stimuli. When they think they see "themselves" manifest this or that "brain state", in fact, the Magnevons are instructed, they are but seeing the lawlike effects of "fingers on a piano," "an ear next to a radio speaker," "moist lips kissing at dusk," and the like.

The protests of “traditionalist” Magnevons are futile: willfully altering one’s brain chemicals and activating or suppressing one’s various neural networks are not real categories of explanation, since Magnevons are ceaselessly bombarded by “external” stimuli which mercilessly dictate how they “manifest” (in fMRI terms). "That is the real world, out there," declare the new corporealists. "Your 'inner' world, driven by ornate ideas about parallel-neurology and chemical states, is but an illusion, imposed upon your 'brain' by the truly potent causes: bodies and objects. The macro-laws of objects predictably entail Magnevonian 'mental manifestations', not vice versa." The icing on the corporealist cake is that the ever-new optical technology allows Magnevons to see, in real time, their “bodies” perform things they’d always thought were just the powers of their neurochemical personalities. Who could withstand such compelling evidence?

It takes centuries, but ultimately the spell is broken, the rainbow unweaved. Ultimately, the edifice of Magnevonian folk psychology—crowded with esoteric, gratuitous fictions like T-neurons and Qq-goals—is demolished, its rubble swept away, leaving in its place a halcyon new world of rational truth: Magnevons have flesh and blood bodies which act in certain physically determined ways according to the higher laws of bodily needs and satisfaction.

This little fable is intended ("purposive" meaning!) to show how relatively provincial the neuribilist position–nay, obsession–is. It seems, from the perspective of our pet scientific methodology, that the reductive, neuribilist campaign just is the only thoroughgoing rational approach to action and causation. After all, just look at the fancy display monitor in the expensive new lab! Science speaks for itself! … Right?

Certainly, if we could observe people’s brains without them being “attached” to the people, we would have pristinely lucid neural dynamos subject to complete deterministic explanation and prediction. Indeed, a thoroughgoing neuro-reductivist should, by my lights, stop talking about "his brain" or anyone else's, and instead speak of "the brain's me". After all, on the neuribilist view, it's just "the brain's neurochemistry" in the driver's seat of action and explanation. As the commenter himself wrote, "From one brain state we get swatting the fly and from the other we get waving [my emphasis added]." But why not just say that from swatting a fly we get one neural state and from waving we get another? The emphasis on brain states comes from the neuro-reductivist's naive impression that real brains are just like the ones we study in neuro-labs, as if there were these things called brains that just churn out all kinds of wild brain states that result in ho-hum bodily behaviors. As if the brain states were ordered strictly in connection with their own self-contained chemical fountains. This is the world according to crude old-school epiphenomenalism.

In reality, however, the animal brain is immersed by the senses in the world of matter and dynamic action, whereupon it finds itself, and its chemical fountains, formally and teleologically subject to the demands and aims of its corporeal "bearer." The brain only does what it does because it is rooted in the body, which, in turn, only does what it does because it is immersed in the sensible world of intelligible reality. Hence, we do not "get" the agent's actions from the overflow of its brain fountains; quite the contrary: we get those fountains as tools–organs, in the classic sense of the word–of the agent-in-the-world. The neuribilist, therefore, is welcome to keep up his wild goose chase for the mythical exact locations of exact actions. Sadly, however, it is a wild goose chase down a rabbit hole, since, by his own logic, all such "human endeavors" are but the random effects of a chemical sponge in a living ossuary atop a spinal pedestal, vainly trying to catch a glimpse of itself apart from anything like a self.

By contrast, as I wrote before in "A no-brainer…", for a thoroughgoing Aristhomist,

…the Self… is not in the brain! On the contrary: the brain is in the self! Notice I do not say the brain is the self. The self is… that greater whole that transcends and orders the lesser parts. The self… does not "use" these parts like some detached repairman surveying a workbench. The truth is that the person acts and, in metaphysical but not temporal turn, the parts of the self act in their appropriate ways.

But again, in contrast, according to neuro-reductionism, given what we know about these chemicals and those chemicals, in conjunction with these and those physical parameters, of course we could predict a live brain’s performances as easily as we can predict an air bubble’s movement up and down in a straw as we “drink it up”. But this is only because the parameters for such tiny neurochemical microcosms are so elementary (by definition), that our understanding of them is identical to our predictions of them. Consequently, if humans, let alone microbes, were as elementary as neurochemical clumps, we would be able to understand and predict them as well as we do brain reactions. There is, alas, an inverse correlation between metaphysical potency and scientific predictability. The more active (in the Thomistotelian sense) a thing is, the more it defies a predictive understanding. Thus, insofar as He is pure Act, God is beyond all reduction to our pet explanations: He is pure freedom, which means He is utterly free to love.

Not that the mundane reductionist can be bothered with such flights of fancy. “Science proves,” he claims, “that X [i.e., some neural impulse which results in some bodily event b(X)] is simply what a human’s brain will do when Y [i.e., some other neural impulse causally antecedent to X] happens. It’s been demonstrated countless times in the lab.” Presumably, therefore, we can secretly know that when we see Alfred perform b(X), he is also “unwittingly” performing X and, further, that he also “unwittingly” had Y occur in his head. And so the picture seems complete, and completely deterministic.

Unfortunately, however, what gets left out of this seemingly complete portrait is Alfred himself. We may now know that a sudden flash of bright light and a huge booming sound cause Y in a human brain, and that Y (i.e., a certain aural-retinal stimulus in response to bright, loud surprises) in Alfred’s brain causes X, and that X, in turn, causes him to b(X) (that is, fall to his knees and evacuate his bladder)—but we still have no idea why the flash and boom happened. So, as soon as we view Y, X, and b(X) in the larger world of objects, agents, and actions, our perfect grasp of why Alfred performed b(X)–namely, since Y ⊃ (X ⊃ b(X))–just gets stretched out by one order of magnitude in the theoretical system and thereby becomes only one clue among others, as opposed to the defining key to b(X). As J. R. Lucas says on page 60 of The Freedom of the Will (Oxford University Press, 1970),

"It is always possible, as every child knows, to ask a further 'Why?' at the end of every explanation. And even if we have expanded an explanation to be a complete explanation, it is still possible to put the whole explanation in question. Given a purely deductive explanation, we may ask why the rules of inference should be what they are, and then when given a meta-logical justification, ask for further justification of that: the tortoise can always out-query Achilles."

Lucas's point brings out, first of all, the transcendental freedom of reason as the keystone of human freedom; as St. Thomas writes in De Veritate 24, 2, resp., "totius libertatis radix est in ratione constituta [the root of liberty, whole and entire, is constituted in reason]". Second, Lucas's point reminds us that only an infinite explanation, amenable to being asked "Why?" as a mother is open to a child, can truly satisfy the human mind. But I digress.

Even though we don't know why the flash and boom happened, we do know something important, something we have always known: b(X) happened because Alfred got the piss scared out of him. We simple “folk” already knew that such flash-and-boom stunts cause people to fall down and, easily enough, wet themselves. So, when our betters “enhance” the “explanastory” for us by telling us that “what we already knew” (i.e., the falling and peeing) is “really just” Y three steps removed, they are taking an eruditely convoluted road to saying, “Apparently, when people fall down and soil themselves, it causes them to b(X). And apparently, when people have loud, bright shocks, they Y in their heads and then they b(X) in their pants.” At which point, which explanation you favor is more or less an aesthetic choice. A free aesthetic choice, to boot.

Sunday, March 8, 2009

Thanks for the belly laugh, guys...

2 comment(s)
See if you can fill in the blank that I added. I added it, with accompanying notes in brackets, as a visual aid to highlight what, from my historical perspective, leaps off the page.

"In Part III, we begin the study of philosophy itself from the perspective of cognitive science. We apply these analytical methods to important moments in the history of philosophy: Greek metaphysics, including the pre-Socratics, Plato, and Aristotle [ca. 600-300 BC]; [_________________________; ] Descartes's theory of mind and Enlightenment faculty psychology [ca. 1500-1800 AD]; Kant's moral theory; and analytical philosophy. These methods, we argue, lead to new and deep insights into these great intellectual edifices."

--- Philosophy in the Flesh, George Lakoff & Mark Johnson (New York: Basic Books, 1999), p. 8.

Friday, March 6, 2009

Actions and events, minds and brains, men and mice…

5 comment(s)

The problem of intentionality vis-à-vis physicalism is not an empirical problem; it is a categorical problem. Formal operations, for instance, are determinate in a way that physical operations cannot be. Intellection is universally abstract in a way that physical “signs” cannot be. The contents of sensory experience are analytically non-identical with the physical correlates we infer as their causal substrate. And as for intentionality…

Intentionality and physical order are simply, categorically mutually irreducible. This is hardly a “pet claim” of Thomists. Read some J. Levine, J. Searle, C. S. Peirce, F. Brentano, J. Kim, W. Vallicella, S. Kripke, K. Gödel, or D. Melser. Indeed, read some D. Dennett: he is so committed to physicalism, and yet so aware of the intentionality problem, that he denies the latter on behalf of the former.

The issue is simply not one that can be overcome by “more brain studies.” Intentionality, and its related immateriality, is, like purpose and action, simply not reducible to behavioral categories. Intentionality, purpose, action, formal order––these are simply “their own things” and not to be trifled with. Certainly, it is true to say that intellection occurs “naturally” insofar as it is metaphysically contiguous with the operations of its natural agents; but this must be qualified by the fact that nature, thus, operates with both material and immaterial powers.

In any case, consider the following scenarios (which I borrow from Richard Taylor in Action and Purpose):

a. A man attending a debate raises his hand in order to get the attention of debater Q, and thereby both attracts the attention of debater Q and scares off a fly.

b. A man attending a debate raises his hand in order to get the attention of the moderator, and thereby both attracts the attention of the moderator and scares off a fly.

c. A man attending a debate raises his hand in order to scare off a fly, and thereby both attracts the attention of debater Q and scares off the fly.

Behaviorally, and I should say neurologically, these events are indiscernible. Only on the supposition of a distinct purpose (”in order to”) can they be differentiated. Likewise with intentionality. Physically indiscernible phenomena can have different intentionality, different meaning. Physical causation is, to borrow from Walker Percy and Peirce, metaphysically dyadic, whereas language–-qua intentionality in action––is conceptually triadic, and, indeed, intersubjectively tetradic. Physical things only stand in formal, theoretical bonds with each other as we evoke those bonds by the intentional, immaterial power of referential language. An atom simply does not and cannot “refer to” something else; but language can and, incessantly, does “conscript” an atom, and hordes of atoms, for such bonds.

Someone might, however, object that there could be a neurological difference between a, b, and c. If neurologists discover which neuronal firings cause pain and various thoughts, then they could discover which combination of neuron firings n1 produces the impulse, “Get X’s attention.” In this case, the physicalist could produce a fancy detailed neurological story about how n1 causes some other set of neuron firings n2, which produce the thought “Raise your hand to get X’s attention.” It does seem coherent to say that in each of the “p in order to q” cases, the different q’s are discernible.

In reply, I would ask, "Which set of neuronal impulses, call it nx, accounts for the intentional ascription of intentional action to n1 and n2 for a., b. and c.?" And which neurons, in turn, account for the ascription between nx and that second-order ascription of intentionality? And so on. The terms of an intentional system may be, indeed are, scientifically measurable, but intentionality itself is not measurable, and therefore not a physical reality. Nothing intentional (which, again, does not mean "purposive" but rather "notionally directed at") simply and naturally falls out from any nx; nx orders n1 and n2 only if we can produce an intentional bond from outside the neuronal complex under observation.

The critic that invokes n1 and n2 as pure descriptions of all behavior is oblivious to the fact that we can only imagine a coherent difference between the a., b. and c. actions at the neural level because we already know how those actions are formally (viz., teleologically) distinct. We know the brain phenomena will be different "on the inside" because we already know the actions in which they are involved are formally different "on the outside." If, however, we strip away the explanations of the actions that I gave in listing them, they are effectively indiscernible. Without already knowing why we are observing different happenings in the brain, we can't coherently say the brain happenings correspond to distinct actions. For all the "action blind" observer knows, he is simply seeing minutely different neural patterns which result in visibly indistinguishable limb motions. Even if we suppose we could ask the person what he intended after each neural-behavioral episode, we would still just be resorting to the categorically autonomous, and truly explanatory, category of purpose and formal action. Even if we could observe a man’s brain on the sly, so he weren’t conscious of later having to talk about his intentions, we could only “decode” the neural activity by translating it into, and correlating it with, his stated aims.

But perhaps the physicalist is still not satisfied. "It seems," he goes on, "that there might very well be a physical difference between a neural pattern for 'raising one’s hand in order to get somebody’s attention' and 'raising one’s hand to brush away a fly'. Neural patterns are presumably very complex, with plenty of room for internal variation even when the external effects—the raising of the hand—are identical."

But, in reply, let us ask, "Would the raising of two separate men’s hands for the (same) purpose of shooing a fly be the same action?" Clearly, yes. Let us then ask, "Seeing as their actions are formally identical, would their neural states be proportionately identical?" It seems not. The formal determinateness of an action-for-some-end is irreducibly incommensurable with the complexities of specific neural acts.

Arguably, of course, since two different men are doing the swatting, they would be doing two technically different acts in spacetime, and this spatiotemporal difference could be construed as providing analogously sufficient identity between neural types of happenings and formal actions. Presumably, the same kinds of brain-stuff are involved in the same kinds of actions, so there is a rough but satisfactory identity between their actions and their brain events.

This is a weak retort, however, for a couple of reasons. First, the original scenario of two men doing exactly the same action-for-a-purpose preempts such a rough identity. Second, and more importantly, if the same man were asked to repeat his own action-for-the-same-purpose numerous times, it seems incredible that each formally identical action would correlate to one and the same neural process every time. Presumably, given enough time, the man’s brain cells would change to such an extent that literally different neurons and chemicals would be involved in the same action. Furthermore, imagine if we had the man perform the action for ten minutes, then had him drink two Jolt colas and perform it another ten minutes, and then had him take a Valium and perform it ten more minutes. Over those thirty minutes, his brain would undergo vast amounts of neurochemical change, which, while admittedly peripheral to his successful performance of the action, would certainly alter the specific properties of his relevant brain matter in performing the action (e.g., a hand-raise impulse mingled with a stifle-giggle or yawn impulse).

All the while, of course, the action remains one and the same. If it did not remain one and the same, on a purely formal level, there would be no way, let alone desire, to compare the neural happenings with the behavioral. I.e., if we didn’t already grasp that he is doing one and the same action in a formally identical way, we would not have any one thing to explore in relation to his brain states. We would just have numerous materially discrete happenings that we would correlate by Humean fiat.

Aside from these technical difficulties in the brain-reductionist platform, Taylor’s Action and Purpose provides much food for thought about how odd the idea of “mental desires” is for explaining action in sheer causal terms. If desires cause actions, what causes the desires? Do we have a certain desire to have a certain desire? Only if one is willing to reduce all behavior to sheer caused events is one able to circumvent this regress into a desire-cause spiral. There is a categorical difference between “a man’s hand being caused to go up and x occurring in succession” and “a man raising his hand in order to bring about x”. If action, as distinct from mere bodily movement, is just a pinball of neuronal fireworks, no one can be said to act rationally, i.e., for a cogent end, for an intelligible reason. Causes are not rational but actions are. Thus, if actions just are behavior, then there are no rational actions. But, of course, ends and reasons are integral to our entire taking of an event to be an action, as opposed to a mere happening, in the first place. Smoke is caused to go up, but that is not an action. A man’s hand goes up for some reason, but that is not simply a causal event.

A related problem for the causal reductionist is Popper’s argument that, without an extra-mental, formally discrete grasp of the event under analysis, we have no purely physical basis for calling this neurochemical or physical occurrence the “beginning” of the event and that occurrence the “end.” What is there about this particular neural firing, or this particular phoneme, considered physically, that dictates its formal, intentional place in our analysis? (Hint: nothing.)

Related to this is D. Melser’s claim (in a way, very reminiscent of Taylor’s criticisms in Action and Purpose) that a science of language and behavior is strictly impossible, since understanding language––i.e., knowing what's going on when certain sounds are emitted––requires cooperative use of it with the speaker being viewed. But since science by definition limits itself to an objective, non-participatory “view,” a purely scientific observer is asymptotically removed from "what it takes" to understand language. And if one cannot understand what is happening when one observes what is happening, one can't reasonably explain what is happening. The more objective science becomes, the less traction or right it has for “getting” language events and treating them as meaningful (i.e., intentional); and the more involved science becomes in the meaning and use of language, the less objective it is. See the last section of his essay here.

Apophatic humility…

1 comment(s)
In Scholastic theology, three methods of analogical inquiry were used in discussion of God: the via causalitatis, the via remotionis, and the via eminentiae (or, excellentiae). St. Thomas, for instance, treads the via causalitatis in the first ten or so chapters of the Summa Contra Gentes, where he argues from the effects of the Creator to the existence and nature of the Creator. Since, however, His effects are woefully inadequate to convey God's nature in a fitting way, the other two viae are invoked by the Scholastics to balance out the limited gains of the via causalitatis. God is a maker of effects, yes, but He is very "remote" from the limitations of makers as we think of them. His divine remoteness as Creator stands out principally in the way that He creates ex nihilo, whereas lesser makers always have to rely on some medium or tool or model outside themselves. The method of remotion is a constant reminder to us that our best arguments about God and our highest praises of Him are still far removed from what and how He actually is in se. As the IVth Lateran Council stated in 1215, “between the Creator and the creature there cannot be a likeness so great that the unlikeness is not greater [semper maior dissimilitudo in tanta similitudine].”

Further, the aspects which we can ascribe to God by analogy with lesser created things (by way of the viae causalitatis et remotionis), we must ascribe to God in an "eminent" or "excellent" way. Thus, by invoking the via eminentiae, we speak of God as a wise artisan, but eminently and supremely so. God is not simply a rough idea of love, but supereminently love; the Father not simply a paternal pattern, but a supereminently good father, etc. This trifold methodological tension is integral to Scholastic theology.

Critics of Scholastic theology, and Christian theology in general, often mistakenly assume that apophatic, or "negative", theology is a specifically Christian sort of obfuscation (as they would call it). But I believe apophatic limitations are part and parcel of basic human thought, and that the Scholastic emphasis on God's transcendent "otherness", safeguarded by the via remotionis, is just a forthright way of connecting existential apophaticism to its Source.

Daoism is among the most apophatic belief systems known to humanity. The locus classicus is of course chapter 1 of the Daodejing: "道可道, 非常道。名可名, 非常名。Dao ke dao, feichang dao. Ming ke ming, feichang ming." (Roughly: "The Dao that can be told is not the eternal Dao; the name that can be named is not the eternal name.") The basic idea is that, whatever you can think or say about the power (德) and the way (道), and however close you might get to it, you are still far from describing or knowing the highest reality in itself.

Similarly, in chapter 28 the 'power of the way' (道之德 Daozhide) is portrayed as an uncarved block (or "unwrought material," J. Legge) which, though formless and empty, yet holds within itself the fullness of all possible forms. Hence, the Dao defies our attempts to visualize and explain it. In the same chapter, the infant (or fetus) is lauded as a symbol of Daoist strength; though weak, immobile, untrained and unshaped, yet the infant possesses greater power than all just by being open to all forms and all levels of growth. Likewise, while God is entirely simple, yet He contains within Himself the fullness of being and all possible forms of created beings. Consider William Riordan's comments in Divine Light (2008, Ignatius Press, p. 128) on Denys the Areopagite's theology of the divine names:
God is described as great [μέγας] because of the multitude of His gifts (δωρεάς: doreas), which are the perfections that He gives to His creatures. The myriads of creatures come forth as outgushings or springs (τὰς πηγαίας: tas pegaias) from Him, and yet He is in no way diminished. … But God is celebrated as small [μικρός: micros] because He, in His perfections, "penetrates without hindrance through everything" (τὸ διὰ πάντων ἀκωλύτως χωροῦν: to dia panton akolutos choroun). … Denys is drawing attention to God as the Small, who, as Wisdom, permeates all beings.

Nor is there any shortage of apophaticism in Hinduism. Indeed, I think this sort of "natural apophatic theology" is a classical component of all great human traditions. Even modern exact science has come to realize, grudgingly for some, perhaps, that even our best models are always true-but-only-rough, tentative approximations of natural reality. Our most secular theories, then, are subject to a kind of Scholastic via remotionis. As Niels Bohr said, "If you think you can talk about quantum theory without feeling dizzy, you haven't understood the first word about it (Wenn es Ihnen beim Studium der Quantenmechanik nicht schwindelig wird, dann haben Sie sie nicht wirklich verstanden)." Coincidentally enough, Bohr was an admirer of Daoism, most famous for his Taiji-like principle of complementarity.

There is, thus, an apophatic humility proper not only to most philosophical traditions but also to modern natural science itself. I think this is so because all reality stems from God in variously analogical levels of likeness to and conformity with His own supernature. Even in personal relationships, we come to see that there is an inner mystery about even our most intimate friends which surpasses, remotionally, as it were, our most complete descriptions of them and our most vivid experiences with them. Nothing fully discloses itself in purely cataphatic terms, not even a stone. Thus, all existents reflect the Creator's revealed hiddenness in levels of analogy proper to their own laws of being.

Good God, man! Good God-Man!

1 comment(s)
The following was inspired by a thread at Just Thomism, concerning James Chastek's post about atheism and the idea of a good God. Chastek writes:

Let's face it, the claim that God is a loving father to the whole world is a Christian claim. Both authors above are therefore critiquing a Christian claim, but they sever it from its Christian basis and tie it to an interpretation that is downright silly. The Christian argument for the divine love is well known. Finish the following sentences: "God so loved the world that _____" or "Greater love hath no man than that ____". If you are Heather Mac Donald, you fill in the blank with "he attends on us"; if you are Medawar, you fill it in with "he looks after small children as a doting schoolteacher". Having written in an obviously ridiculous answer, they then, quite reasonably, call the answer ridiculous, and then (of course) see no evidence for it. ... The Christian claim for the fatherly love of God is based on faith. If it is by faith that we hold that God became man, then it is by faith that we hold that God became man to die out of love for us, in order that we might be saved.

An atheist named Ben criticized this post for an error he sees as endemic in Christianity, namely, that of setting low standards for God's love and then claiming His actions raise no "problem of evil".

It seems to me, however, that what is happening in the thread is the perennial mistake of subsuming God and His creatures under some idealized rubric of categories. This is the presumption of an a-Christological worldview. The key premise for Ben, it seems, is the moral primacy of "individual autonomy." Unfortunately, however, this relies on an assumption that "individuality" not only is coherent per se (which I deny) but also presides over God and creatures univocally. Likewise it is assumed that there is some highest standard of goodness, to which both God and man must submit.

But the Catholic protest against all such pagan "subsumption" is simply the Eucharist. In the Eucharist alone do we find our canon of humanity and divine goodness. In Christ alone, as He is truly given to us in the divine liturgy, do we find all the treasures of wisdom and goodness. The Eucharist, which is one with the Cross, is not something that happens in some larger "given" field of being (viz., the supposedly "neutral" universe as such), but something which simultaneously grounds creation as stemming from the Father in the Son by the Holy Spirit and elevates it to the same divine persons in common. Ontologically, Christ Incarnate is the basis for there being individual humans at all. Only insofar as a creation suitable for humans is ratified and redeemed in His Incarnation (made present historically and concretely in the Eucharist) can we fathom the creation of humans. Christ partakes of our humanness, not as if it were some antecedent metaphysical category limiting God, but as the ordained pattern for our existence. Thus, our humanity becomes the means by which we find (or lose) God. It is not that Christ partook of humanity qua ideal form, but that humanity is privileged to exist actually by participation in the kenotic glory of Christ Incarnate. God did not look ahead and see "humanity," and then decide that was a fitting way for us to know and love Him. Quite the contrary: He looked ahead at a myriad of ways in which creatures might reflect and share in His goodness, and decided "humanity" was a fitting way for that self-diffusion to happen. Christ is not to be measured by His likeness to our instinct for "the good man," for we are human only insofar as we possess a likeness to Him as the Suffering Servant.

This shows us that our canon for good human conduct is to be patterned after gratuitous suffering on behalf of others who can give us nothing in return. This is precisely why much of "being human" means enduring life for the good of future generations and people we don't even know. This impulse in humans to keep living and to "make things better" is but an analogical reflection of Christ's own preeminent one-way kenosis on our behalf. As long as the standard for "good conduct" is reified anonymously and pitted against God-in-Christ, the atheist critic is simply not engaging the Catholic Church's own claims about good and evil. This, I believe, is James Chastek's point. Precisely in the intersection of the gratuitous existence of the world (i.e., nothing need have been the case apart from God) and the gratuitous suffering we can offer for others, we find a clue to the mystery of evil. Ezekiel denies that a man will be punished for the individual sins of another, but unfortunately, no one exists individually. We exist collectively, derivatively, as members of the human race. Hence, we can individually experience the collective evils of our race, as well as individually add to them. So, if we desire to exist as humans, we have no choice but to exist as the heirs of concrete humans before us. This, of course, entails inheriting humanity from them as much as inheriting the woes of sin. Certainly, if God wanted to "dote on" us individually, so that we would never experience the rotten fruit of our ancestors, He could, namely, by not creating us as humans. We are not punished for the sins of others, but we are subject to the punishments given to others insofar as others are the ontological and psychological basis for our particular humanity.

Along many of the above lines, I highly recommend Michael Liccione's essays, "Mystery and Explanation in Aquinas's Account of Creation" and "The Problems of Evil," as well as Donald Keefe's Covenantal Theology and his other writings on creation, theology of history, and the Eucharist. (John Kelleher has good introductory materials to Keefe's work. Also, Fr. David Meconi, SJ, has a good essay on Keefe's theology of history, but it seems to be offline. I have a copy on my computer, which I can send to those who request it.)

Lastly, as for Max Tegmark's surd-omniverse, which Ben cited as a coherent naturalistic explanation of existence, I must interject: if it a) is mathematically-axiomatically formalizable and b) claims to provide a necessarily true description of the physical universe, then it is subject to Gödel's incompleteness theorems (at least if it is rich enough to encode arithmetic), and is therefore not necessarily true. Further, insofar as it purports to be a scientific theory, it needs empirical backing. Suffice it to say, the empirical backing for the unified existence of every logically possible state of affairs (SoA) is not only slim but also asymptotically hard to come by.

I have to wonder: surely "a purely possible state of affairs" is logically coherent, but can such a SoA be said to exist in Tegmark-space? Likewise: surely "a metaphysically simple cosmos with no parallel universes or alternative modes of being" is logically coherent, but can such a SoA be said to exist in Tegmark-space?

Wednesday, March 4, 2009

Random Chinese notes afore I forget...

0 comment(s)
  • Niao bu lei, lei bu niao.

  • Pa bu jiang, jiang bu pa.

  • Zhe ke zhe, feichang ruan. Zhe ke zhe, feichang ying.

Monday, March 2, 2009

Wisdom from…

2 comment(s)
JOHN CHRYSOSTOMOS (347–407): Gathering the harvest

It was to save the apostles from anxiety that the Lord called the gospel a harvest. It was almost as if he said: Everything is ready, all is prepared. I am sending you to harvest the ripe grain. You will be able to sow and reap on the same day. You must be like the farmer who rejoices when he goes out to gather in his crops. He looks happy and is glad of heart. His hard work and many difficulties forgotten, he hurries out eagerly to reap their reward, hastening to collect his annual returns. Nothing stands in the way, there is no obstacle anywhere, nor any uncertainty regarding the future. There will be no heavy rain, no hail or drought, no devastating legions of locusts. And since the farmer at harvest time fears no such disasters, the reapers set to work dancing and leaping for joy. You must be like them when you go out into the world — indeed your joy must be very much greater. You also are to gather in a harvest — a harvest easily reaped, a harvest already there waiting for you. You have only to speak, not to labor. Lend me your tongue, and you will see the ripe grain gathered into the royal granary.

And with this he sent them out, saying: Remember that I am with you always, until the end of the world.
(Dernières homélies, Hom. 10, 2-3.)

John Chrysostom, patriarch of Constantinople, spent a life of preaching and earned the title "the golden-mouthed."

ST. AUGUSTINE: The Lord Builds the House

Unless the Lord builds the house, the builders labor in vain, declares the Psalmist. Who are those who labor to build it? All those who in the Church preach the word of God, the ministers of God's Sacraments. But "unless the Lord builds the house, the builders labor in vain." We speak from without; he builds from within. It is he who builds, counsels, inspires fear, opens your minds and directs them to the faith.
-- Commentary on Psalm 126, 1

Prayer. Lord, I have asked for only one thing from you, to live in your house all the days of my life, to gaze upon your delight.
-- Commentary on Psalm 26 (2), 17


[In Scholastic theology, three methods of analogical inquiry were used in discussions of God: the via causalitatis, the via remotionis, and the via eminentiae (or excellentiae). St. Thomas was treading the via prima (i.e., causality) in the preceding chapters of SCG, where he argued from the effects of the Creator to the existence and nature of the Creator. Since, however, His effects are woefully inadequate to convey God's nature in a fitting way, the other two viae are invoked to balance out the limited gains of the via causalitatis. God is a maker of effects, yes, but He is "remote" from the limitations of makers as we think of them, principally in the way that He creates ex nihilo, whereas lesser makers always have to rely on some medium or tool or model outside themselves. As the Fourth Lateran Council stated in 1215, “between the Creator and the creature there cannot be a likeness so great that the unlikeness is not greater [semper maior dissimilitudo in tanta similitudine].” Further, the aspects which we can ascribe to God by analogy with lesser created things (by way of the viae causalitatis et remotionis) we must ascribe to God in an "eminent" or "excellent" way. God is a wise artisan, but eminently and supremely so. God is not simply a rough idea of love, but supereminently love; the Father is not simply a paternal pattern, but a supereminently good Father, etc.

The methodological "tension" of the viae tritae recurs throughout SCG and most Scholastic works, so you should keep an eye open for it.]

[1] We have shown that there exists a first being, whom we call God. We must, accordingly, now investigate the properties of this being.

[2] Now, in considering the divine substance, we should especially make use of the method of remotion. For, by its immensity, the divine substance surpasses every form that our intellect reaches. Thus we are unable to apprehend it by knowing what it is. Yet we are able to have some knowledge of it by knowing what it is not. Furthermore, we approach nearer to a knowledge of God according as through our intellect we are able to remove more and more things from Him. For we know each thing more perfectly the more fully we see its differences from other things; for each thing has within itself its own being, distinct from all other things.

[3] However, in the consideration of the divine substance we cannot take a what as a genus; nor can we derive the distinction of God from things by differences affirmed of God. For this reason, we must derive the distinction of God from other beings by means of negative differences. And just as among affirmative differences one contracts the other, so one negative difference is contracted by another that makes it to differ from many beings. For example, if we say that God is not an accident, we thereby distinguish Him from all accidents. Then, if we add that He is not a body, we shall further distinguish Him from certain substances. And thus, proceeding in order, by such negations God will be distinguished from all that He is not. Finally, there will then be a proper consideration of God’s substance when He will be known as distinct from all things. Yet, this knowledge will not be perfect, since it will not tell us what God is in Himself.

[4] As a principle of procedure in knowing God by way of remotion, therefore, let us adopt the proposition which, from what we have said, is now manifest, namely, that God is absolutely unmoved. The authority of Sacred Scripture also confirms this. For it is written: “I am the Lord and I change not” (Mal. 3:6); ...“with whom there is no change” (James 1:17). Again: “God is not man... that He should be changed” (Num. 23:19).
(SCG I, xiv)


How fortunate is that soul who is willing to have a great deal of tribulation before departing from this life! How can one possibly learn how to love deeply and sincerely if not among the thorns, the crosses, and the feeling of abandonment over a long period? Our dear Savior thus proved his limitless love in the agony of His passion. Learn well how to love Christ on the bed of sorrow; on this bed He formed your heart before creating it, foreseeing it in His divine plan. Yes, the Savior has numbered all your sorrows and all your sufferings. He has paid for them with His blood, with all the patience and love that is necessary. Be satisfied, therefore, to accept generously all that God has in store for you.
(Letters 1043; O. XVI, pp. 300-301)


OUR modern mystics make a mistake when they wear long hair or loose ties to attract the spirits. The elves and the old gods when they revisit the earth really go straight for a dull top-hat. For it means simplicity, which the gods love.
('Charles Dickens.')