Monday, November 30, 2009

Let me get this straight...

0 comment(s)
Bon Jovi's "We Weren't Born to Follow" has reached No. 1 in several countries, including Taiwan. (I can safely say it hasn't reached a saturation point, though, since I have no recollection of hearing it.)

Now, let me get this straight: a song about not-following has become an international following.

Such irony, such sweet irony.

How marvelous a thing is the collective commodification of non-conformity.

Speaking of science…

0 comment(s)
…normal, post-normal, and abnormal, here's one more reason (a decade old, I admit) why I love The Onion.

World's Top Scientists Ponder: What If The Whole Universe Is, Like, One Huge Atom?

Pausing for a moment to collect himself, the renowned scientist then placed his hands on his forehead before extending them outward in a sweeping gesture and making a buzzing "space-noise" sound effect with his lips, non-verbally indicating the degree to which his mind was blown by the whole freaky deal.

Read it and laugh.

Oh, and while I'm at it, in light of the whole Climategate thing, here's another reason to love The Onion.

EPA Warns Of Rise In Global Heartwarming
Unchecked Spread of Touching Sentiment May Spell Disaster

"In the last 12 months, global heartwarming has created a hole 700 miles in diameter in the earth's protective cynicism layer," Wyler said. "That hole, located over Siberia, has grown 15 percent since last month's release of What Dreams May Come alone.

Sunday, November 29, 2009

What is the placebo effect?

0 comment(s)
A fancy scientific term for that old-fashioned thing called "the power of mind." Consider the following from V.S. Ramachandran:

I might mention that I have long known that prayer was a placebo; but upon learning recently of a study that showed that a drug works even when you know it is a placebo, I immediately started praying. There are two Ramachandrans—one an arch skeptic and the other a devout believer. Fortunately I enjoy this ambiguous state of mind, unlike Darwin who was tormented by it. It is not unlike my enjoyment of an Escher engraving.

I hope I am not the only one who finds such gross insensitivity to truth in a professional scientist alarming. The grotesque taste for "truthiness" over truth evident in Ramachandran's "placebo piety" speaks for itself. Of additional interest, for this post, is Ramachandran's explicit endorsement of the efficacy of "consciousness therapy," so to speak, apart from any physical intervention. (HT to Jime for his discussion of empirical evidence for the causal efficacy of consciousness.) Presumably, a physicalist would say it is the auditory (physical) effect of words-related-to-the-placebo-effect-in-question which triggers healthy responses in the brain… but then why can't science just become regimented pep talks? Or is it just that?

For Ramachandran praying to nonexistent deities is good for his health; thus does Escheresque art trump the scientist's instinct for veracity. What Ramachandran seems to forget is that Escher's art is fascinating precisely because it is art––a fabrication of unreal worlds which, admittedly, throws interesting light on our real world by contrast. Indeed, perhaps the greatest display of Escheresque ambiguity is the existence of Escher's art itself. For Escher consciously and freely created his bizarre images by "playing off" normal reality, not by denying the truth of the world we inhabit in favor of the unreal worlds he illustrated. Escher could never have produced what he did had the world functioned like the worlds depicted in his art. Without the 'canvas' of reality, Escher would have had no stable, comprehensible platform on which to make such marvelous illusory worlds. Thus, by subverting the 'canvas' of truth and reality to the illusion of his perversely pious placebo, Ramachandran is but a half-baked Escherite.

It should be some solace, though, that he keeps good scientific company on account of his half-and-half recipe for truth. Niels Bohr, a leading proponent of the Copenhagen interpretation of quantum mechanics and pioneer of the principle of complementarity, was also fond of black-and-white contrasts––so fond, in fact, that he took the Yin-Yang circle as his coat of arms. In typical yin-yang fashion, Bohr was fond of saying that a great truth is one the contrary of which is also true. Yet, as Stanley Jaki notes, it is a very open––or, by complement, a very closed––question whether Bohr would have been as tolerant of the contrary of his atomic theory.

The complacency for truth which Ramachandran displays towards placebo piety––a double standard he would hardly accept in his own field, neuroscience––seems especially eerie in light of the recent Climategate scandal (which I discussed earlier). Whatever the truth may be about climate change––and I fully grant that climate change is a reality, albeit not one simply put down to human industrialism and CO2––what is sadly evident from the CRU emails and documents is how leading climate "scientists" saw fit to conform data to a desired model. In an absolutely fascinating post, "Climate Change and the Death of Science", the author (whose name, I believe, is Kevin McGrane) notes how the idea of "post-normal" science has warped science itself most visibly under the banner of the climate change movement. I quote:

Once there was modern science, which was hard work; now we have postmodern science, where the quest for real, absolute truth is outdated, and ‘science’ is a wax nose that can be twisted in any direction to underpin the latest lying narrative in the pursuit of power. Except they didn’t call it ‘postmodern’ science because then we might smell a rat. They called it PNS (post-normal science) and hoped we wouldn’t notice. It was thus named and explicated by Silvio O. Funtowicz and philosopher Jerome R. Ravetz, who in 1992 wrote the paper The good, the true and the postmodern, and who in their 1993 paper Science for the post-normal age promoted the idea that "…a new type of science – ‘post-normal’ – is emerging… in contrast to traditional problem-solving strategies, including core science, applied science, and professional consultancy… Post-normal science can provide a path to the democratization of science, and also a response to the current tendencies to post-modernity."

Elsewhere, according to McGrane, Ravetz opines, "For us, quality is a replacement for truth in our methodology. We argue that this is quite enough for doing science, and that truth is a category with symbolic importance, which itself is historically and culturally conditioned." In the same vein, McGrane reports, Ravetz says,

"…climate change models are a form of 'seduction'…advocates of the models…recruit possible supporters, and then keep them on board when the inadequacy of the models becomes apparent. This is what is understood as 'seduction'; but it should be observed that the process may well be directed even more to the modelers themselves, to maintain their own sense of worth in the face of disillusioning experience.

"…but if they are not predictors, then what on earth are they? The models can be rescued only by being explained as having a metaphorical function, designed to teach us about ourselves and our perspectives under the guise of describing or predicting the future states of the planet…."

Sound far-fetched and unfair to ascribe such "post-normal" duplicity to those exposed in Climategate? Well, for one thing, consider the use of inverted data to substantiate anthropogenic global warming. As the Wall Street Journal reported:

"…even a partial review of the emails is highly illuminating. In them, scientists appear to urge each other to present a 'unified' view on the theory of man-made climate change while discussing the importance of the 'common cause'; to advise each other on how to smooth over data so as not to compromise the favored hypothesis; to discuss ways to keep opposing views out of leading journals; and to give tips on how to 'hide the decline' of temperature in certain inconvenient data."

Similarly, writes Robert Tracinski,

"These e-mails show, among many other things, private admissions of doubt or scientific weakness in the global warming theory. In acknowledging that global temperatures have actually declined for the past decade, one scientist asks, 'where the heck is global warming?... The fact is that we can't account for the lack of warming at the moment and it is a travesty that we can't.' ... I don't know where these people got their scientific education, but where I come from, if your theory can't predict or explain the observed facts, it's wrong."

Oh, if only science were so simple, if only it weren't such a human endeavor. Alas, in a vein reminiscent of the Escheresque taste for a vision over a verity, Tracinski adds that "Judging from this cache of e-mails, they [i.e., the CRU scientists] even manage to tell themselves that their manipulation of the data is intended to protect a bigger truth and prevent it from being 'confused' by inconvenient facts and uncontrolled criticism." As only one instance of controlling criticism, there is the stonewalling of Vincent Courtillot, a French geo-magneticist whose research indicates that "Aside from a very cold spell in 1940, temperatures were flat for most of the 20th century, showing no warming while fossil fuel use grew. Then in 1987 they shot up by about 1 C and have not shown any warming since. This pattern cannot be explained by rising carbon dioxide concentrations, unless some critical threshold was reached in 1987; nor can it be explained by climate models." That's certainly one way to "discover" anthropogenic global warming. Let's call it anthropogenic anthropogenic global warming and give humans the credit we really deserve!

The seductive power of useful fictions––placebo politics––shows up, at a higher level, precisely in the climate change debate. For climate change––which, as McGrane notes, is a term suspiciously distinct from climate science––is nothing less than a useful fiction defended under cover of darkness for "the good of humanity." As always, socialist utopianism sacrifices truth––whether it's the sad truth of primal human fallenness, the middling truths of climatology, or the glorious truth of ultimate human redemption solely by grace––on the altar of idealized human bliss. What also becomes evident is that, if the power of rhetoric is sufficient to boost one's own health and to shape global energy policy, the power of consciousness––the sheer power of immaterial will despite the inconvenient indications and limitations of physical reality––emerges once more as a potent causal power in reality. While the efficacy of intentional action does not grant one the right to abuse it for Escheresque duplicity, the abuse of the will in science suggests there is much more to human agency than mere stimulus-response determinism. Indeed, the ability of Ramachandran and various reckless climate scientists willfully to violate the "input" of their senses and informed rationality suggests that a will affectively drawn to truth, rather than to its own immediate good, is essential to rational compliance with the truth.

As I hope to elucidate in a subsequent essay, if determinism is true, and the human will is not free in a counterfactual way, then there is no way of saying that the truths humans affirm in a regnant scientific paradigm are true––only that there is no way for determined rational agents not to affirm them. If, however, determinism is false, rational humans are free to reject false theories and creatively pursue new lines of inquiry, a flexibility which seems essential to science as a rational tool for arbitrating between truth and falsity. In turn, if rational freedom of the will is real––as science, language, and moral responsibility strongly indicate––then so is absolute truth, since, as I said, only a will affectively attached to truth as something objectively greater than the immediate, lesser "truth of one's own well-being" can will actions in accord with truth. If truth is not objectively greater than one's own immediate good––or, by analogy, the species' climatological good––then truth just is one's immediate good, and, as appalling as it sounds, the Escheresque utilitarianism of post-normal science seems fully justified.

Friday, November 27, 2009

The global warming thing...

2 comment(s)
Here's how it's gone down so far. I posted this at my Facebook:

What the Global Warming Emails Reveal -- Wow. The rock is lifted and beneath…. This is why, as always, you should read "Science tells us…" as "Certain humans occupied as scientists tell us...". The plot thickens!

Then I got this reply:

I think 3 facts are sufficient to motivate a change in the production and consumption of (at least) petro-chemicals:

1) the gas/diesel engine is on average 25-30% efficient ("Gasoline engines are typically 25% efficient while diesel engines can convert over 30% of the fuel energy into mechanical energy.")

2) There is a limited amount of it, regardless what that amount is, the rate at which we consume vs. the rate at which the earth "produces" is nearly infinite....

3) If I leave the engine on in my closed garage, my car will kill me.

Time for a new technology. These points are provable, they are non-partisan and (for me) compelling.

I replied:

... [Leaving aside fuel and engines,] my beef with the AGW (anthropogenic global warming) movement is threefold: earth cycles are most likely much bigger than us, and so "anthropogenic" is a quaint term. Only a few decades ago the big scare was a global freeze [cf. "The Ice Age Is Coming"]... but now... WTF? Big Science like climate modeling is inherently partisan and profit-driven, which is precisely why a "consensus" can be generated where maybe it shouldn't be said to exist; I'm uncomfortable with the population control ideology connected with AGW phobia. [Cf. this article via Concerned Women for America. Cf. also this exposé of John Holdren and his quasi-eugenic views on "ecoscience." Cf. also this excerpt from Wiki on methods for mitigating AGW. All of this should make exquisitely ironic China's presentation of one of the worst pollution records combined with one of the most aggressive population control platforms in the world.] I'm not being dogmatic, but I am skeptical by default; the AGW movement seems too "cinematic" and too handy a "global narrative" to be driven purely by the contested and highly complex claims of climate modeling. ...

The discussion is a bit wobbly since we're addressing at least two (maybe three) technically distinct issues. First is the reality or falsity of the climate science "consensus" in light of the recent emails. Second (or First-b) is the truth or falsity of AGW. Third is the course of public energy consumption based on both market trends and fear of AGW. I'm all for fuel efficiency, since that's an intrinsic trend of technological development itself. What I'm much less "on board" with is basing radical and prompt technological changes across the board on fear of AGW. All natural resources are finite (unless they're supernatural), so that point is kind of a non-starter. Even by 2020 the projected usage of hybrid cars in the USA would only skim off a few million barrels of oil consumption per annum. In order for those tech changes to really make a major impact on Oil, Inc. AND (hypothetically) to "save the planet," pretty much every engine in the world would have to be converted. What will such a change require––and who will get the shaft along the way?

Anyway, as a would-be historian of science, what particularly interests me in this news is how it reveals the often all-too-human character of science. That's why I find it interesting that you commented on the news by implying that, even if the AGW "consensus" has obviously been manipulated in striking ways, we still have to fall in line with the energy-tech changes AGW seems to require––but the whole point is establishing just how legitimate it is to base such major changes on AGW. As soon as scientific "models" become social "messages" or "visions," science is no longer science but propaganda, if not modern mythology.

Then I followed my skeptical sacrilege up with another link:

"CLIMATEGATE" -- This is what I mean about the "all-too-human" character of science. And why I'm very wary of the "consensus."

A friend replied:

I think these emails are entirely blown out of proportion -- given the climate skeptics' strategy of exploiting any appearance of debate within the community as a lack of consensus on the overall concept of anthropogenic climate change, it makes sense that there would be attempts to fuzz over the edges of debate in public, or to ignore journals that you think aren't upholding standards of peer review. As for the 'destruction of evidence', I don't know, maybe they were afraid of people hacking into their email and getting them? It's not really paranoia if they're really out to get you. :)

I replied:

As I'm sure you know, AGW is distinct from global climate change as such. I agree the latter has occurred, and is always occurring. I'm simply not as "sold" on the anthropic part of it all. Finding over a thousand emails by leading AGW proponents to the effect that data should be fudged to "save the model" is hardly a peripheral issue, since the debate consists of how/whether the data fit a projection-model, not how the model can "deselect" pertinent data. E.g., consider how the CRU code was manipulated for mid-C20 data. Whatever happened to falsification? It baffles me that vigorous agonistic critique should be scolded, when in fact "skeptical inquiry" is the heart of science. Based on my reading of the hx of sci, "scientific consensus" is virtually an oxymoron. Paradoxically, science exists rather by making itself defunct.

He replied:

Sure, but paradigms shouldn't be thrown out because the model is off a bit -- which is what the blogger you posted is implying should be done. What is 'false' is a socially produced category -- how far 'off' does your model have to be before you decide the model is 'false', or just needs to be tinkered with? My impression is that skeptics make that window of 'non-falseness' very small given that, as you say, science is an iterative process. I should be up front and say that my sympathies are with these scientists, who have to work in a highly-politicized environment that prevents them from being as open and transparent as they perhaps would like to be. E.g., the kind of environment where people hack their email accounts.

I answered:

The degree of falsity is relative to the total relevant data. And what creeps me out about the 3000+ documents that were hacked is that the relevant data are considered "relevant" precisely by being filtered through a regnant model. Kuhnian cognitive dissonance only lasts so long, and these documents indicate just how conscious some of that dissonance has become. Referring to the larger background of the "peer-reviewed" consensus misses the point in this kerfuffle, since the emails indicate just how much of a wax nose "peer review" is. I feel for the scientists too, but not so much that I condone blatant confabulation in key modeling. The woes of being hacked remind me of the woes an adulterous husband feels when he finds out his wife had his phone tapped. I'm perfectly willing to accept in principle that AGW is true, but so far I am unconvinced on empirical grounds, and the latest "apocalypse" only weakens my confidence.

Meanwhile, another friend objected, "Does it matter? Pollution is bad anyway for so many reasons." I replied:

Does it matter that a scientific model which shapes global energy policy might be confabulated? I think so. Reducing pollution is an intrinsically valuable goal in its own right, and shouldn't need the "big stick" of AGW doomsdayism to make it worthwhile. Pollution is an inevitable by-product of human productivity. It's as old as the mythical "ecological Indian", as only one example. It's just life in an entropic world. So the only way to completely stop pollution would be to so radically alter our way of life that we couldn't even be having this discussion (by means of tech, leisure time, etc.). There's a big difference between saying "pollution is bad for the local environment and society's health" and saying "pollution is inevitably going to destroy the entire ecosystem in a decade or so." What I gather from the climate science scene is this: the earth has natural periodic cycles of heat and cold and human pollution seems to be exacerbating those tendencies. But could an immediate halt of all industrial activity STOP the cycle? That seems absurd and so any global policy driven by that premise seems just as absurd. (Cue charges of flat-earthism, crude creationism, superstition, obscurantism, etc. ;) )

I think these current events shed fascinating light on the nature of science in general. Moreover, I think they greatly compromise the climate modeling behind AGW, which in turn calls for a moratorium on AGW-driven energy policy. For details, read the following to see how "wonky" the CRU's (Climatic Research Unit's) data-analysis code is for climate projection:

The Real Problem...: "Sexing up a graph is at best a misdemeanor. But a Declan McCullagh story suggests a more disturbing possibility: the CRU's main computer model may be, to put it bluntly, complete rubbish."

CRU Data Dump: "I'll tell you what it looks like to this ancient, gray-bearded software engineer who has constructed many software models in his career: this looks like someone manipulating the input to a model to get the desired results."

Smoking Gun...: "I have seen inklings of how bad the CRU code is and how it produces just garbage. It defies the garbage in-garbage out paradigm and moves to truth in-garbage out. I get the feeling you could slam this SW with random numbers and a hockey stick would come out the back end. There are no diurnal corrections for temperature readings, there are all sorts of corrupted, duplicated and stale data, there are filters to keep data that tells the wrong story out, and there are create_fiction subroutines which create raw measurements out of thin air when needed. There are modules which cannot be run for the full temp record because of special code used to ‘hide the decline’."

CRU Smoking Guns...: "...the minor problem for the AGW alarmists was dealing with the Roman and Medieval Warm Periods. The big headache was the warm period prior to 1960, which we can see in the CRU data was equal to or higher than today. This warm period in the first half of the last century is a real problem for the theory of CO2 driven, man-made warming."

Shibboleth, schmibboleth...

3 comment(s)
Today's shibboleths.

Can't challenge Darwinism in any way.

Can't raise any worries about gay sex and/or contraception and/or abortion.

Can't assert your own faith as your highest personal value and can't suggest it is objectively good and true for all people.

Can't suggest humans might differ in any unique way from other animals (related to the Darwinist landmine).

Can't pick at the global warming "thing".

These are the "good citizen" hurdles of our day. Comply and you shall be "advanced." Resist and you shall be "old-fashioned."

Why is it...?

0 comment(s)
Why do we tend to fail worst at what we value most?

And why do we seem to be able to do best what we consider "lesser"?

Thursday, November 26, 2009

Signs of life…

3 comment(s)
ADDENDUM to "The eyes have it…", an earlier post on final causality:

"'The knowledge of a thing’s purpose never leads to a knowledge of the thing itself.' [note 62, citing R. des Cartes] He does not seem to realize that anyone who does not know what an eye is for does not know what an eye is."

–– Robert Augros, "Nature Acts for an End", p. 25.


ADDENDUM to my earlier post on so-called "Fourfold Semiotics":

"Of all living things we can say that they are semiosic creatures, creatures which grow and develop through the manipulation of sign-vehicles and the involvement in sign-processes, semiosis. What distinguishes the human being among the animals is quite simple, yet was never fully grasped before modern times had reached the state of Latin times in the age of Galileo. Every animal of necessity makes use of signs, yet signs themselves consist in relations, and every relation (real or unreal as such) is invisible to sense and can be understood in its difference from related objects or things but never perceived as such. What distinguishes the human being from the other animals is that only human animals come to realize that there are signs distinct from and superordinate to every particular thing that serves to constitute an individual in its distinctness from its surroundings."

–– John Deely, "The Semiotic Animal: A postmodern definition of human being superseding the modern definition ‘res cogitans’", p. 10.

Wednesday, November 25, 2009

Reminds me of...

0 comment(s)
"Because the Macula-Fovea of an infant is not quite ready to lay down a green baseline 2-D map into Area 17, the first sights of human infants are in the near and far reds of the area of the retina below the optic nerve. The first neurolinguistic pattern of Broca-Wernicke that an infant can induce is the enfoldment pursing of the lips into an 'M' sound to suck and draw nutrients into the mouth. The vibration and warmth sensory modalities of the human infant are also at play. Given this, the proto-semiotic world of the human infant is one of more or less integrated hues of red, warmth, sucking, genital motion, smooth shapes and specific hard wired smells."

-- John D. Norseen, "Images of Mind: The Semiotic Alphabet"

Norseen's description of infant proto-semiotics immediately made me think of the denizens of Amsterdam! Think about it and tell me I'm wrong. ;)

As it turns out, though, Norseen, whose work I only discovered last week, is no longer with us. Here's an excerpt from a colleague-friend's eulogy:

"In spite of his preoccupation with fringe ideas and technology, as a person John was a very conventional individual. He prayed at church, adored his wife and children and dog, loved Penn State football games, lived in a suburb and was passionate about the United States of America, the Armed Services and the people who served with him professionally, both in the service and in industry. In spite of these virtues, John was possessed with an almost heretical belief that something deep and important was missing in his life, creating profound bouts of depression."

Saturday, November 21, 2009

This is vrey intrestring...

0 comment(s)
"Dyslexia Varies Across Languages" (HT to my Mom!)

"In English, the alphabetic letters that form visual words are pronounceable, so access to the pronunciation of English words is made possible by using letter-to-sound conversion rules," Siok said. "Written Chinese maps graphic forms––i.e., characters––onto meanings; Chinese characters possess a number of intricate strokes packed into a square configuration, and their pronunciations must be memorized by rote. This characteristic suggests that a fine-grained visuospatial analysis must be performed by the visual system in order to activate the characters' phonological and semantic information. Consequently, disordered phonological processing may commonly coexist with abnormal visuospatial processing in Chinese dyslexia."

Friday, November 20, 2009

Crossing your legs and cutting paper with your tongue...

0 comment(s)
I've wanted to write about the following topics for some time, so I hope my thoughts are clear without being rambling.

In the past few years I have taken notice of the way women sometimes stand. I'm sure you've seen it: feet crossed, legs like an X. I have tried standing like this before and I find it unnatural and unstable. But time and again I see women waiting at tea stands, at movie theaters, outside restaurants, at school, etc., "like that." I have no recollection of ever seeing a man stand that way, and I am aware women do not always stand like that, but the disparity between men and women is remarkable. What accounts for this "odd" stance by women? I have three hypotheses, and please pardon my "bluntness" on at least one of them.

First, it could have to do with the female hip structure. Wider hips might incline the legs toward the center. We can see this in the female stride, as women's feet step inside a narrower space than men's wider plodding. Indeed, part of a "sexy walk" involves "swishing" the rear end as the feet click-clack on a narrow line. So women's wider hips might make it more comfortable for them, unconsciously, to stand X-wise.

My second hypothesis is that it has something to do with the female crotch and the hymen especially. Bluntly: maybe virgins or sexually less experienced women are literally tighter and less flexible in the crotch, so that natural tension inclines their legs to criss-cross. By contrast, "looser" women (literally) might find less resistance to taking a wider stance. My "virginal hypothesis" could mean virgins and generally less sexually "open" females subconsciously guard their nether regions by standing X-wise. It's no secret in body language that a lusty man, or a man consciously or unconsciously conveying attraction, will sit with his legs wide open like a cowboy on a bar stool. By contrast, women as a rule sit with their legs tightly crossed and a pair of sprawled legs on a seated woman is extremely suggestive.

My third hypothesis is neurological in nature. Based on what I have read in neuroscience and psychology, women's corpus callosum plays a more active, balanced role in integrating the brain's two hemispheres. This is why women are more "intuitive": they naturally display dual perception and dual reflection in the left and right hemispheres, because the hemispheres are simply in better cross-communication than they are in men's brains. Another important feature of the brain in this hypothesis is what's called "decussation." Decussation basically means neural "crossing-over." It is what accounts for your using your "right brain" when you use your left hand and vice versa. It is what accounts for right-side paralysis in a left-hemisphere stroke. And so on. Now, decussation, like pretty much all features of the brain, is a plastic neural feature, meaning we can develop it or see it atrophy. One way to enhance decussation, and "stimulate both sides of the brain," is, predictably enough, to do crossed-over exercises. One crossed-over exercise you can try is weaving your fingers together (with your thumbs down) and rotating them inward (along with complex finger motions). Another decussation drill is to criss-cross your feet and hands while doing a vertical stretch and deep breathing. Now, the significance of decussation vis-à-vis the curious female stance I'm discussing should be clear: women frequently stand X-wise as a natural expression, and perhaps even an unconscious enhancement, of their decussated brains.

Obviously, I lack the neurological and/or psychological acumen to pick the right hypothesis (or even to discover if any of them is correct). At the same time, though, any combination of them could be true, so perhaps I don't need to weed out the worst hypothesis. Maybe they are all just facets of a single phenomenon: "X-legs."


Now, to introduce my second neurological oddity, let me ask you this: what can I bet you do almost every time you cut a pattern out of a sheet of paper? Stick out your tongue and move it as you cut. Since I started teaching a lot more small children, I've had to prepare more materials, such as pieces of colored paper, pictures of fruit and animals, paper shapes, etc. So I've had to print a lot of images and then cut them out. I really noticed what my tongue was doing a few weeks ago, when I was cutting out the outlines of students' hands which I had traced in class (pattern: "How big is your/my hand?"). The more I focused on cutting closely along the outline, the more my tongue got involved. I thought it was silly for me to act like a child, so I consciously kept my tongue in my mouth... but the more naturally and thoughtlessly I cut, the more my tongue crept out. "How weird!" I thought. Why was my tongue, of all things, so insistent on being involved with my fingers?

Here's my hypothesis: The tongue is one of the most sensitive and neurally articulate organs in the body, rivaled perhaps only by the fingers in sensitivity and precision. I know of a device designed for blind people, which helps them "see" with their tongues. (Here is a long discussion of the device and the benefits of "tongue vision.") The device is like an electronic retainer with a pad of movable "pistons" on it; it's basically an automated Braille reader on your tongue. Over time, patients can learn to navigate the world by "seeing" a tactile representation on their tongues, a representation encoded by optical devices connected to the tongue pad. My hunch is that our tongues move in synchrony with our fingers as a way of stimulating neural areas used for finger dexterity. Interestingly, I bet it works both ways: the intricate use of our fingers co-activates areas in our brains which in turn stimulate motor action in the tongue.

The soul in some way becomes what it knows...

0 comment(s)
"For a parcel of matter to take on a form is for it to become a thing of the kind the form is a form of; for example, for it to take on the form of triangularity is just for it to become a triangle. Now for the intellect to grasp the nature of a thing is just for it to take on the form of that thing. And in that case, if the intellect were material, it would become a thing of the kind that it grasps; for instance, it would become triangular when it grasps the form of triangularity. But this is obviously absurd. So the intellect is not material."

-- Edward Feser, "Plato's affinity argument"

Recall that in De veritate q.I a.i St. Thomas says of the soul (anima): "quae quodammodo est omnia" (it is in some manner all things).

Thursday, November 19, 2009

From there to here...

0 comment(s)
[Cf. an addendum here.]

Fourfold Semiotics:
Why We Say What We Say Even When What We Say Isn't What We're Really Saying

by Elliot Bougis

[I didn't intend this to become a "real essay," as it rather forced itself out of me late last night. But in retrospect, especially when I added the title as a coda, it seems substantive enough to join the growing heap of my "real essays," which I hope can make up a couple-three "real books" and earn me a little "real money." Scandalous thought!

Also, I have learned to add footnote links in my HTML. If you want to read the content of a footnote, click on the bracketed number and you will "jump" to the footnote. To go back in the body of the text, just click "back" on your web browser. I'm still working out the kinks.]

A tingle, then an ache. The corpus moves ever so slightly. Wakes. Something called hunger, but without the name. The corpus, now a moving animal, licks its fingers for some nourishment. Comes away empty. Its eyes range over the bush, looking for something to eat. Looks down at the bare ground. It lets out a small grunt. Others of its kind glance at it. Return to their own morsels. It whimpers again, gazing at what they eat. They snuff in reply and munch on the remnants of roots, grubs, berries. Incipient speech goes begging.

Advance many eons. One of countless of that hungry animal's descendants (it found food after all) feels a tingle, then an ache. Hunger. And the words: "I'm hungry. I would love a steak." The corpus raises itself and exits to find nourishment, eyes awash in a cornucopia of choices. Recalls that steak is above his budget. Settles for soup. And the words: "I would like a bowl of soup and two pieces of bread, please." Satisfaction ensues, hunger disperses. Speech strikes gold. But steak will have to wait till his date next Friday.

Advance a few generations. One of that man's countless descendants (his date was a success, after all!) feels a tingle, mumbles "I'm hungry" to his coworkers and heads off for some grub (how proud his ancient forebears would be!). He reaches into his duffel bag and wedges his headphones into his ears as he makes his way to the subway. He's learning Spanish with a Pimsleur audio course. "Tell me you're hungry," the voice says. "I'm hungry," he mutters, to no one in particular. Funny thing is, he really is hungry, so the lesson sticks that much better. "Tengo hambre. Yo tengo hambre." Speech flexes its muscles.

Advance an hour. Our man has eaten his fill and is back on the subway. He wants to review the lesson he was in when he sat down for lunch. Soon enough the voice says, "Tell me you're hungry." He smiles at the irony. "Tengo hambre. Yo tengo hambre." Only he doesn't; he just ate. But he still says he's hungry. Speech hallucinates.


This is a meditation on what I will provisionally call the fourfold structure of semiotics. I take it that virtually all sentient beings are also semiotic beings. That is, after all, basically what sensation is: responding to natural signs with natural signs. (For those interested in the esoteric machinations of my presentation here, my term of art for the Thomistotelian "animal nature," or "sensitive soul," is the semiotic capacity.) The objects of desire, as well as the mere objects among which we always find ourselves, with or without a desire or repulsion towards them, are "first-order signs." They are signs waiting to be "answered" by second-order signs like gestures, glances, grunts, and so forth.

In the first vignette, a primal ancestor lacked the verbal wherewithal to express his hunger, much less name it for himself. He was stuck at the first level of semiotic competence, namely, grunting and gesturing. Certainly this kind of semiotic skill is a form of "expressing" himself, but it lacks a key aspect of advanced semiotic ability, namely, the ability to refer to things outside the immediate semiotic environment. That ancestor was stuck in what Walker Percy (following Charles S. Peirce) calls a dyadic form of existence. Without an immediate semiotic impulse, the creature was unable to generate signs that transported it beyond its immediate semiotic Umwelt. Even with semiotic input, the creature could only respond in kind: two sign-makers responding dyadically, im-mediately. This is the first level of semiotic ability.

By the second vignette, a hominid descendant had ascended to a marvelous level of semiotic ability: representational verbal speech. Such a man could not only respond dyadically to his immediate semiotic Umwelt (hunger pangs, billboards, smells of cooking food, etc.), but could also go beyond that Umwelt by way of speech. He could, in other words (!), respond mediately to a range of signs beyond his immediate dyadic engagement with the world. He could so to speak (!) put verbal signs between himself and unseen realities; language could function as a bridge from one semiotic realm to the next. This introduces what Percy (à la Peirce) calls a "triadic" existence. The words he uses to navigate immediate and mediate signs can be completely detached from his dyadic responses. This is why he said he wanted a steak, even though it was a mere fancy. As a triadic semiotic creature, the hungry man can juggle his own interior "signs" (hunger, etc.), a range of mediate and immediate signs (the fanciful steak in the sky or the steaming soup behind the counter, etc.), and his own words about all of it ("hungry, steak, soup, etc."). This is the second level of semiotic ability.

Now here's where dyadic and triadic semiotics diverge. If the hungry man had a dog, he could easily train it to respond not only to the sound of a can opener at feeding time (or the tap of a spoon on the bowl, etc.), but also to human speech (e.g., "cookie, dinner, Let's eat, hungry?, etc."). Yet, could the man anywhere near as easily train his dog not to respond dyadically to such cues? Could he, in other words (…), train the dog to ponder "cookie" rather than to look up hopefully, begin drooling, amble over, sit patiently, etc.? It seems he could not. For the only reason he bothered to teach his dog those verbal cues was that they serve entirely as cues for action. Moreover, what means would the owner use to train his dog to dissociate dyadic stimuli from dyadic responses if not dyadic stimuli themselves? Again, dyadic productions are but "second-order signs" linked to "first-order signs."

If, to cite one of Percy's examples, I am with a two-year-old boy and I point to a red balloon, the lad will probably reach out for it in a typical dyadic response. If I then point at the balloon and say "balloon" a few times, I can teach him how to say "balloon" when I point at it, or, in little more time, to reach for the balloon when I merely say "balloon" without pointing. But could I then teach the child to simply reflect on the word in isolation from the immediate semiotic context? If I pointed at the balloon, could I expect him to do anything else than say "balloon" and reach for it? If I said the word "balloon," could I expect him to do anything but mimic the word and look at the balloon for his next cue? If you've ever played with children that small, you know the answer is no. Children at that age, like dogs at any age, are stuck in a dyadic world. The sole function of dyadic speech is to signal desires and trigger responses from hearers.

Now perhaps someone will interject that "baby talk" manifests a crude form of triadic speech, since when babies babble, they are not doing so (dyadically) in response to or in pursuit of objects. Yet, I deny baby talk even belongs to lower-level dyadic semiotics, precisely because baby talk is incoherent for the infants themselves, and even less effective for procuring certain wants. Indeed, bare howling and crying are superior to babbling as dyadic speech. The whole point of triadic semiotics is to elevate dyadic semiotics into its own self-subsistent world of signing, in which verbal signs can be signs about dyadic signs about first-order signs. This is why infant babbling can't count as triadic signing, though it can count as nascent triadic signing precisely insofar as human infants develop into third-order signers: we know what the baby is "doing"––signing triadically in the enclosure of language itself––even if the baby doesn't, and even if it isn't successful. As far as I know, baby talk is just the overflow of intense neural wiring going on inside young brains as synaptic connections and well-formed speech modules are forged. At most, babbling is a floating bridge between a baby's natural aptitude to sign dyadically and its human aptitude to master triadic signing.

Indeed, far from showing the seamless continuity of babbling as a supposed form of triadic semiotics and lower-level dyadic semiotics, baby talk highlights their discontinuity. If there is too much ambiguity about the second-order signs a child (or a dog) produces, she risks not attaining the first-order signs she is after. If, for instance, a verbally precocious toddler kept chanting "not hungry, not hungry" while his mother tried to feed him, he would risk not eating when his body needed nourishment. Because the toddler displays only an incompetent production of third-order verbalism, there is a "disconnect" between what his mother receives as a meaningful dyadic refusal and what his brain produces as inadvertent third-order signing. His brain, in other words, is producing irrelevant third-order signs ("not hungry"), irrelevant because detached from his actual dyadic needs. This hapless play-acting will last only as long as his dyadic system allows. Once he gets hungry enough, the toddler will naturally manifest his genuine dyadic desires by a rumbling stomach, confused moaning, automatic glances at the food, and so on. Thus, far from being an "automatic transition" from dyadic to triadic signing, baby talk is a threat to both dyadic signing and mature triadic signing if the child is not granted "citizenship" in the properly triadic world as a whole.

Only once a child is old enough––perhaps six years old––can she step back from the frenzy of dyadic semiotics and the neural acrobatics of babbling, and reflect on language itself as a higher order of stimulus. The words themselves become semiotic objects. Children below a certain threshold constantly ask, "What's that?" (as apparently I did all the time as a toddler), but once they know a thing's name they don't go on to ask, "What does _____ mean?" Above a certain threshold, however, a child will not only ask "What's that?" but will also proceed to ask, for example, "But what does fire hydrant mean? Why is a fire hydrant called a 'fire hydrant'? Why do we use that word for that thing?" Entry into the world of triadic semiotic ability entails positing words themselves as "mediaries" between one's self and one's dyadic objects. In the triadic world of human speech, words themselves become objects of semiotic engagement, which is why we so easily "argue about words." We are not satisfied to refer to things by mere gestures or even by articulate sounds (as plenty of other species can do); we insist on saying things well, being accurate, not being obscure, and so on. In fact, we can't move onto a dyadic course of action until we first agree, on a triadic level, about what we are saying. Neurologically, poetry and song may just be exalted forms of baby talk (i.e., ways of reinforcing synaptic complexity), but they are certainly more than neural babbling insofar as they are forms of conscious enjoyment of words themselves among triadic agents. (As I stressed above, precisely because baby talk is sheer babbling, it is eo ipso not triadic, whereas poetry and song are triadic babbling, and eo ipso include the intentionality-referentiality of triadic semiotics.) We don't heed baby talk, since it neither refers to a baby's dyadic interests nor demonstrates verbal excellence on her part.
By contrast, we heed artistic verbal performances, not because they direct us to dyadic objects, but because they bring us to dwell on words themselves as uniquely human "toys." Once a semiotic creature masters how to navigate between dyadic signing and sheer triadic signing, it no longer (or at least very seldom [1]) risks failing to obtain its first-order objects of desire.

Just now I mentioned being obscure, and now I want to expand on the feature of language which motivates objections to obscurity, namely, deceit. Deceit brings us to the third facet of semiotic ability. (Notice that I do not refer to this as the third level of semiotics, since I am still not sure whether deceit is properly considered a form of triadic semiotics. I'm inclined to say that deceit is a form of semiotic behavior that can be done dyadically or triadically, as I will explain presently.) With deceit, semiotic creatures can use their own signs as diversions from other signs. A mother bird's mimicry of being wounded so as to divert a predator from her nest is dyadically deceitful. All she is doing is giving the predator a more enticing dyadic stimulus so as to draw it away from her brood. Humans can do the same thing (such as faking an upset stomach so onlookers ignore an accomplice breaking into a car, pretending to notice something remarkable over someone's shoulder so you can grab the last oyster at a soirée, etc.) on a dyadic level, but can also use articulate verbalization to mask their intentions. As I said, though, verbal prevarication seems to be only a very ornate kind of dyadic deceit, since it puts a verbal "smokescreen" in front of the contested dyadic objects. It reduces triadic signs to diversionary second-order signs.

At the same time, though, I admit prevarication––verbal deception––may just be normal triadic semiotics taken to grotesque levels. For while triadic semiotics inserts words between agents and objects as objects of reflection in their own right, prevarication does so crucially detached from the reality of its referents. In other words (…), the reason we bicker over words is to ensure that the speaker is not misleading us about the referents of speech. There really is no way to prevaricate dyadically, since to do so would just be to replace one sign with another (as in the case of the mother bird feigning a broken wing to draw the predator away), and thus no longer count as triadic signing. The threat of prevarication is however endemic to triadic semiotics, since the validity of the "third-order signs" (viz., words) depends on those terms actually referring to the dyadic objects in question.[2]

In any case, let me proceed to the fourth level––and I do mean level––of semiotic ability. This is what the young office worker was doing on the subway while learning Spanish. Clearly he wasn't using Spanish for dyadic gains. Even if he happened to be hungry when he said "yo tengo hambre," his uttering those words had nothing to do with his procurement of food. Nor was his response to the Pimsleur narrator a properly triadic form of signing, not least because the narrator could not respond to the man's "yo tengo hambre." What the man was doing was something which I think is truly unique to human beings: he was using language in a purely fictitious way, yet without being deceitful.

When the Pimsleur narrator said, "Tell me you're hungry," the listener did not respond deceitfully. He didn't say, "Tengo hambre" in order to procure or obscure a dyadic object, nor did he utter the words as mere verbal play. It is one thing to respond dyadically to a dyadic sign, such as when a dog whines for scraps from the table or nudges its bowl to prompt its owner to feed it. It is something else to respond triadically to a dyadic sign, such as when we see a Ferrari on the street and say, "Man, I'd love to drive that baby." But in the case of the man on the subway, there seems to be a wholly new level of semiotic performance involved. Dyadically, the man showed his hunger by saying, "I'm hungry" and by heading for food. Triadically, he could sign his hunger by being asked, "Are you hungry?", answering "Yes, I'm hungry," and then consulting his coworkers about what he should eat ("What do you mean by a 'good cheap burger'? etc."). But as he listened to the Pimsleur course he was told, triadically, to pretend to say he was hungry, which suggests that he was being deceitful about being deceitful. It would be easy enough for the man to tell his friend he's hungry, when in fact he isn't, so as to get a chance to speak in private over lunch, but something wholly novel and recursive––and I would say uniquely human––occurs as he plays along with the Pimsleur prompts. It is an exercise in what I will call honest deceit.

The point I am trying to convey is quite subtle and came to me in a flash of intuition under circumstances very similar to those of the man on the subway, so I'm sorry if I'm being obscure (!). Let me just try to blurt out my point and see what sticks: we can imagine a dog responding to the word "hungry" by running to its bowl, and we can (I suppose for the sake of argument) imagine the same dog "acting hungry" so as to draw his owner into the kitchen for some other purpose––but can we possibly imagine a dog being signaled to "act hungry" and then "playing along" without being hungry at all? Again, while dogs and other animals commonly respond to verbal cues like "cookie, cookie," and while they commonly generate signals to obtain cookie-cookie, I know of no non-human behavior which simultaneously and consciously responds to hunger-cues with hunger-signals as a sheer exercise in signaling.[3] This––the utter gratuity of human language learning––seems to be a wholly unique semiotic capacity of human beings. Like all animals, we can respond to and produce signals relevant to our dyadic ends. Perhaps, also, some species of animals can revel in their form of communication as we do in poetry and song (such as whales and their whale songs). But I know of no other species which emits signals for the sole purpose of generating signals which themselves can be dyadic, triadic, poetic, and ironic all at once.

The whales might sing a certain song to signal a desire for plankton elsewhere in the sea; the whales might also sing songs to mislead other whales about water currents, salinity, a bounty of plankton, etc.; the whales might also sing songs just to elicit harmonic responses from other whales. The whales might––and probably do––perform all these semiotic feats, but I know of no whale behavior in which a song is generated in order to evoke a song which sounds exactly like a dyadic signal for, say, food, but is simultaneously evoked as a false signal for food.[4] I would say this bizarre stricture, found only at the fourth level of semiotics, applies to all non-human animals.

Lastly, let me say that my reflections in this post were not motivated by a desire to establish a unique aspect of human-being. Yet I think my considerations do indicate a unique dimension of human existence. Aside from my fourfold schematization and my fumbling over the fourth semiotic level, my considerations herein are not original with me. Mortimer Adler, Stanley Jaki, Dennis Bonnette, Walker Percy, Charles S. Peirce, Adrian Reimers, et al., have helped me see just how "mysterious" human language is as a genuinely biological phenomenon. I hope you can see some of the mystery too.

[1] Even very competent third-order signing can backfire at times, such as when verbal irony is taken literally. "That's what you said!" –– "Yes, but you should have known that's not what I meant!" And so on. I experienced a similar backfire once when I worked in a restaurant. A customer ordered a pizza and nonchalantly asked for "everything on it." But that restaurant charged per item, so I asked the customer at least twice, "Are you sure you want everything on it? All these toppings?" He insisted on "the works," whereupon I tapped every single key we had for toppings, until the pizza came out to about $30. The customer was incensed and accused me of overcharging him. So I had to call out my manager to explain that each of "all the toppings" was anywhere from $.25 to $1. He found this absurd, but then settled for a handful of toppings.

[2] Obviously, I'm not settled on how to classify deceit in an ascending model of "the fourfold structure of semiotics," but I will say this: If we imagine the four dimensions of semiotics which I am proposing as steel plates, dyadic signing is the first level of semiotics, triadic signing is the second level, and deceit is a column of steel running through both. So I guess my proposal looks like one side of a vertical barbell with weights stacked on it!

[3] A possible objection might be that "gratuitous signaling" is just an elaborate form of neural exercise, like baby talk and poetry (at one level). I grant that just as poetry has a neural-boosting dimension but still is something else at a higher level, so the "pure signaling" of some human semiotics both includes and surpasses mere neural calisthenics. The pure signaling of someone responding to a Pimsleur course is, as I have said, a strange blend of neural calisthenics, dyadic training (e.g., to obtain food if one is ever hungry in Spain), triadic engagement (i.e., with the narrator and native speakers in the Pimsleur course), and something else, which I have called "honest deceit." Perhaps my analysis is just another way of talking about Wittgensteinian "language games," but I think the difference is that my account is literally about language games as a uniquely human form of semiotics. Certainly other animals play games, and some might even "play with 'words'" (à la whale songs), but does any other animal besides man play with playing with words? It seems not, and that is my point, even if neural calisthenics enter into the picture.

[4] Just imagine it: "Hey, Lenny, I want you to sing that song, you know, the plankton song." –– "Why, Mac? You need plankton?" –– "Nope." –– "You love its melody?" –– "Nope." –– "Well, then, why?" –– "Just to hear you say it when I ask you." –– "You're friggin' weird."



Look into your eyes...

0 comment(s)
Man is a natural entity among natural entities. All human knowledge comes to humans via the senses (images, etc.). Insofar as matter as such is unintelligible to us, and pure essences are inscrutable to us, we must somehow "bridge" our sensory contact with the world and our intellectual descriptions of it. We do this by "abstracting" a thing's nature from our sensory contact with it. We know universals IN particulars, not AS particulars; and we know particulars BY universals, not AS universals.

The problem is this: just because we cannot think without "images" does not mean we think only BY images. The question is this: can we STOP at our private visualizations (as an adequate grasp of reality)? Thomism insists that all thought involves a visual dimension, but that abstraction is a purely intellectual act, and therefore surpasses total visualization. Just because we can't speak without words doesn't mean we can say all we know IN words. We don't "know" essences themselves; we know things IN their act of existence; and the plenitude of existence endlessly surpasses our verbal or visual description of it.

It's not an "empirically verifiable" fact that "knowledge is a function of empirical verification." What's more, that claim ITSELF cannot be verified empirically. It begs the question. Being a claim ABOUT empirical facts, empirical verificationism is not itself an empirical FACT.

Kant reminds us of the crucial "unity of consciousness," but is such a thing empirically measurable? And what about "now" itself? Can we "touch" or "measure" the present? The present has no extension or duration, since it either "just was" or "is about to be"; yet it exists in a transcendental way. It is the transcendental "dimension" which enables us to have empirical experience at all.

Tell me what is happening RIGHT NOW. You can't, because "now" is already "past." The "now" obviously exists, but it is not subject to an empirical description at all, nor a direct empirical verification. Can we measure the present? WHEN do we measure it? We only "measure" the duration of "now" on the boundaries of it. It's like trying to look at the inside of your own eyeball.

Wednesday, November 18, 2009

The collar is to the cat what idols are to man...

0 comment(s)

Good lyrics, cute video…

0 comment(s)

Why fertilizer?

4 comment(s)
Why do farmers use fertilizer? Because it enhances the yield of crops by supplementing nitrogen levels and--here I plead ignorance on the chemistry--by maximizing the products of photosynthesis. Clearly, fertilizer is used to alter the chemical structures involved in plant life, but it does so in accordance with a certain aim. And while this aim--increased yield--is an anthropic "goal" superimposed on the crops, it is a goal fully cognizant of the crops' natural ends. Fertilizer enhances and accelerates crops' attainment of their natural ends. Farming, as Charles De Koninck points out in "The Lifeless World of Biology", is man's way of working with nature, of helping nature actualize her own potentialities. Certainly we know the chemical interactions involved in fertilization--and the application of pesticides, for that matter--are essential to the production of various ends. But the chemical interactions themselves cannot and do not bring about the natural ends involved. For, if an adequate natural explanation consisted solely in considering the physico-chemical elements and forces in any entity or system, we could bathe the crops in one chemical after another and "let the chemicals do the work." Unfortunately, while the chemicals would do what they do with relentless thoroughness, what they would do for, and to, the crops is an entirely different matter. Some of the chemicals might combine to promote crop yield, thus, of course, allowing the crops to "get what they need," but most likely we would "end up" with fields of dead crops. Why? Because what crops need is not chemical interactions as such, but fitting material "input" to produce their natural "output." Interestingly, finality is intrinsic even to physico-chemical machinations, since all such interactions are explained in roughly the same way: "Beginning with such and such elements under such and such conditions will naturally tend to result in such and such products."

Even when biotechnology "perverts" a plant's natural ends, it does so based on a concurrent grasp of how specific plants will react to various chemical changes. If the chemical alterations themselves were adequate to produce the desired changes, such changes could be applied across the board to all plants. But because we tailor our biotechnology to each specific plant, in its specific environment, we thereby recognize and "cooperate with" the plant's specific nature. Why make these changes and not others? Why avoid these changes and not others? In each case, for each aim, we alter a plant's "natural nature" (natura naturata) based on predictions of how that nature will react under different conditions. A thing's act of existence--its essence--is the foundation for its reactions in existence. Indeed, prediction itself is based on the fact that plants and animals--and even molecular systems--have various analogous forms of finality. If a farmer predicted a bad harvest on the horizon, he would do so in light of his knowledge of his crops' specific natures. He would recognize where the harvest is "headed" based on how far or near from a certain end the crops are at the time he predicts a poor harvest. His resignation to this fact, and his decision, for example, not to waste fertilizer on trying to "make up for lost time," or helping the crops "catch up," is rooted in his knowledge that a chemical bath just won't be enough to attain the desired end.

I emphasize the connection between finality and prediction in light of some comments Daniel Dennett made in response to two other thinkers concerning natural teleology. He recognized that teleology--directedness or intentionality--is integral to prediction... and yet he denies what I am here calling heuristic finality actually exists in nature. Why? Because our predictions are never fully accurate. Real, heuristic teleology "is too idealized, because of the omnipresent possibility of error." Our predictions, like all ideal mathematical formulae, are idealized and therefore don't really obtain in physical reality. As an instance of an attempt to defend real teleology, Dennett cites Fred Dretske's attempt to naturalize intentionality. "Teleological principles," argues Dretske, "provide a basis for predicting what response to new circumstances a system which conforms to them will produce" (italics not in original). Dennett explains that such a system "would not just happen to track appropriateness; it would do so in a principled way. It would be caused, in Dretske's view, to track meanings in an appropriate way." This seems sensible enough, but Dennett faults it for the following reason: "But there are no such systems, human or otherwise. There are only better and worse approximations of this ideal--which is rather like the ideal of a frictionless bearing, or a perfectly failsafe alarm system." Before I comment on Dennett's "deflation" of natural teleology, I think I should explain the broader context of his views.

Dennett is well known for advocating "the intentional stance", which I have discussed before at FCA. According to the intentional stance we should act "as if" nature displays goal-directedness, since this useful fiction will help us better understand why a certain system or organism does what it does. The intentional stance is the third and highest of three stances Dennett proposes in philosophy, the second and first, respectively, being the mechanical and the physical stances. The physical stance examines the most basic physico-chemical processes of an organism/system; the mechanical examines the internal kinetics and structure of that complex organism/system. In a thermostat, for example, the physical stance would help explain why the casing is "plastic," why the coiled spring-lever is "metallic," why the wires are "electroconductive," and so forth. Taking the mechanical stance would help explain why the coiled spring-lever expands or contracts and then why it closes the circuit to activate the thermostat, which, in turn, triggers the boiler to heat the air, etc. Finally, the intentional stance would have us say, tongue in cheek, that the thermostat "wants" to maintain the temperature; putting it like that is not only less cumbersome than explicating all the physical and mechanical intricacies of the contraption, but also "true in a sense" as far as our interactions with the thermostat are concerned. Similarly, a mosquito can be "refracted" by Dennett's three stances, thus: the physical stance accounts for why its legs and wings are such a hardness, etc.; the mechanical stance accounts for why its wings elevate it at such and such a velocity and its legs help it land, etc.; finally, the intentional stance accounts for why the little demon keeps coming at us and buzzing in our ears (viz., it "wants" to feel warmth), why it extrudes a proboscis into our skin (viz., it "desires" our blood), etc. 
This is the context in which Dennett walks a tightrope between Ringen's more austere anti-teleologism and Bennett's more Aristotelian teleology: Dennett is a teleologist in so far as taking an "intentional stance" in biology is a handy way of speaking.

Now, it should come as little surprise that I find Dennett's deflation of "real" teleology feeble; what may not be as obvious is that I also think it is methodologically self-destructive for him as a philosopher of science (and as a 'devout' Darwinist, to boot). In my opinion, Dennett's key argument against heuristic finality is this: "…there are no such ['really' teleological and in-principle predictable] systems, human or otherwise. There are only better and worse approximations of this ideal…" (italics not in original). As far as I understand Dennett's point, if heuristic finality is unreal due to the fact that our predictions of certain outcomes are always idealized and slightly off the mark, then our mathematical descriptions of nature are just as unreal, for they too are idealized and always slightly off the mark. This, the denial of accurate theoretical explanations of physical systems, seems like an incredibly high price to pay for deflating heuristic finality. After all, if no scientific formula is real because we never find perfectly exact manifestations of it in our quantitative observations, then can experimental science even be true in principle? If, due to physical indeterminacy and "the omnipresent risk of [observational] error," the predictive "leverage" provided by heuristic finality is unsound, then how does any other "idealized" scientific prediction hold up? If all idealized scientific theories are only useful predictive fictions, is Dennett's own pet theory of natural selection really true? If it is true merely as a "general principle," or as a researcher's working "rule of thumb" (like a biologist's reliance on the intentional stance), is natural selection accurate enough on a case-by-case basis to count as exact, quantitative science?
If, in other words, natural selection is a logical axiom of scientific exploration, how is it any less a priori and unfalsifiable than the old Aristotelian saw that "Nature never acts without a purpose" or that "Nature abhors a vacuum"? At the same time, if natural selection is true in more than an as-if, idealized way, then how do the omnipresent possibility of error and the idealized nature of theoretical prediction cut against a fruitful predictive model––including heuristic finality?

The dilemma is this: if natural selection is adequate for making clear, quantitative predictions of trait inheritance and population variation, then it is at least one instance of a predictively successful scientific theory in physical reality. If this is the case, two consequences emerge. First, the "idealization" argument against heuristic finality loses its force, since clearly at least certain "ideal" theories obtain in physical reality. Second, we are brought right back to the provenance of heuristic finality itself, since, if in no other case than in the theory of natural selection, there really is a place for adequate heuristic finality in science. Recall that the value of heuristic finality is that we are able to make natural predictions just because we recognize how and when specific natures act for certain ends rather than others. More generally, heuristic finality enables scientists to make accurate natural predictions based on tendencies––or, composite functions––inherent in nature at large. Interestingly, natural selection seems to provide a solid means for making just that kind of natural prediction based on a recognition that nature "tends to" work in a certain way. It is this dogged recognition of "how nature operates" as a coherently ordered system which motivates and protects experimental science. But, if Dennett is right that "there are no such [in-principle predictable] systems, human or otherwise," then natural selection itself, as a human "system" of heuristic finality, is undercut by the same token.

Unwittingly, then, Dennett seems to be advocating a fourth stance in his philosophy: the nomic stance. The nomic stance, like the intentional stance, is a useful fiction we rely on in order to render an intrinsically non-teleological world more intelligible as far as our interactions with the world are concerned. (Sound familiar?) Just as taking the intentional stance toward a thermostat is easier than doing an exhaustive physical-mechanical inventory of it, so taking the nomic stance toward nature is easier than a tedious empirical inventory of "the story so far." Under the aegis of the intentional stance, we can "play act" with nature's apparent "ends," all the while, of course, recognizing that thermostats don't really act so as to control the temperature and mosquitoes don't really act so as to suck our blood and propagate their species. Likewise, under the aegis--or should I say the spectre?--of the nomic stance, we can "play along" with nature while always reminding ourselves that there aren't really formally actual laws in nature. The nomic stance is a spectre indeed, for, if it were to take hold in the collective scientific mind––as its ancestor, Humean sensationism, never did––then science as such would self-destruct. For if there are no actual laws in nature––no real patterns which commence at X and naturally tend towards Z––then there is nothing to discover in nature. If, in turn, there is nothing to discover in nature, there is nothing for science to do. As much of a devotee of science as Dennett is, he is reckless about the fact that, if all science can "discover" is its own operational as-if "stances," then real science collapses into sheer constructivism, if not idealism.

My previous post about forms qua functional essences (and the attendant place of finality in natural selection) rather came out of the blue, but I am glad to see that it ties in, from another angle, with two largish essays that I have been working on for the past couple of weeks. They deal precisely with what I think Dennett, unwittingly, both sacrifices and invokes. First, his Humean nomic stance subverts the metaphysical conviction in the existence of natural laws as a condition for science. Second, Dennett's worries about the inadequacy of theoretical idealization (or, the underdetermination of theory in general à la the Duhem-Quine thesis) invoke the metaphysical truth that, while perfectly formal explanations do not appear "without remainder" in physical nature, we can and do grasp their truth precisely in a meta-physical way.

The first point is a matter of historical record. Suffice it for now to cite Stanley Jaki's immensely well-informed dicta on pages 106, 107, and 109 of The Road of Science and the Ways to God:

"Clearly, Hume's posturing [in Dialogues Concerning Natural Religion] as a champion of Copernicus's and Galileo's way of thinking was an imposture. A perusal of Galileo's Dialogue should make it clear… that the creative science of Galileo was anchored in his belief in the full rationality of the universe as the product of the fully rational Creator, whose finest product was the human mind, which shared in the rationality of its Creator. … [Hume] wanted a universe of instincts, devoid of objective laws as well as of objective facts. … Typically enough, among Hume's admirers there have been many philosophers of science and even some scientists, but no great creative scientist has ever been a consistent and persistent Humean."

The second point is a strong vote of support for Thomistotelian metaphysics, anthropology, and epistemology. For if physical reality itself cannot "existentiate" formal mathematical laws, then certainly the brain as a physical organ cannot do so either, in which case the physical human brain either does not existentiate and grasp formal mathematical laws, or the human agent does so by non-physical means.

I hope to substantiate both claims in, as I say, two longish upcoming posts. Stay tuned!

Tuesday, November 17, 2009

Lost in translation...

0 comment(s)
I was recently hired to do some freelance translating for a church. I have to translate the key points in a running curriculum. Each key point has a blank space or two in it for students to fill in as they listen to the sermon or complete a lesson. I discovered that while I could translate the words around the blank spaces, I couldn't form a proper English sentence without knowing what belongs in the blank space.

Prima facie I could directly translate the given Chinese phrases and just leave a suitable blank space in the English sentence. But I realized that the meaning of the answer could alter what I directly translated. For example, I initially translated one point as "The Great Commission is not only _______, but also _______." But then I realized the Chinese answer might be a verb or a noun, which would require a very different translation in each case. For example, if the answer were a noun, it might be, "The Great Commission is not only A DUTY, but also A PRIVILEGE." If it were a verb, the sentence might be, "The Great Commission means not only TEACHING but also LEARNING."

I explained this to someone in the church office: I have to know what the sentence means before I can translate what it says. This might seem like a trivial consideration, but I think it highlights some important truths in life. In language, semantics rules syntax; the meaning molds the matter. Or, in a more McLuhanesque way, "The message molds the medium." It should not be hard to discern that, given my broader interest in classical metaphysics, what I am getting at is that translation bespeaks another instance of hylomorphism. Nothing about the particular syntactic elements (matter) of the sentence I was trying to translate could, of itself, manifest the intended meaning. Rather, I had to know what the sentence was "getting at," what its "point" was, in order to modulate the syntax and diction correctly. Thus, the formal "end" of the sentence controlled the material mechanics of the sentence. Indeed, a comical or ironic meaning could very well distort an accepted lexical usage, in which case the intended meaning could radically redefine the strict lexical and syntactic "rules." This is why it is significant that I invoked McLuhan: he was a committed Thomist, though few people know it.

The upshot of this meditation is that in so far as language is teleological, and in so far as language is the tool whereby we express ourselves in truth, self-expression is inherently teleological. The proper end of speech is expression of thought (or at least feeling) and the proper end of thought is truth (as the proper end of feeling is goodness and beauty qua two convertible forms of truth).

Latin conjugations! What could be more fun?

0 comment(s)
Here's a mnemonic I just devised to remember the order of the (five) Latin conjugations and sample verbs for each one.

First, you say, "AH, RAF!" (As in, "Well, the Royal Air Force!") As with all mnemonics, there is a brute amount of arbitrary memorization. Why RAF? No idea, but it works for learning the conjugations.

"AH, RAF!" is the anchor which helps you remember the following five verbs: Amare, Habere, Regere, Audire, Facere.

Each is a common verb in one of the five conjugations. Amare means love (as in, amicable); habere means have (as in, "habve" or, perhaps, have a bad habit); regere means rule (as in, regnant, registry or reign); audire means listen (as in, audio); and facere means do or make (as in facility or manufacture). (Facere is also famous because, in classical pronunciation, it sounds rather like "fuck": facio, facis, facit, facimus, facitis, faciunt.)

Now, the entire mnemonic is a little moral instruction on the ascent to power. If you want to be a powerful leader...

First, you must love, for all power is rooted in love, whether the love of glory or the love of others.

Second, you must "have what it takes." You must have the means to take power when you have the chance.

Third, you must rule. To reign is to rule a region.

Fourth, you must listen to the petitions of your subjects and to the auditions of your jesters.

Fifth, you must make a decision for your subjects and do what you say you will do.

As for the problem of remembering which A in "AH, RAF!" is amare and which is audire, just remember that "conjugal love" should be your "first love."

Being in Asia...

0 comment(s)
It's fun being in Asia because you might have people ask you, "Do you have the MSN?" or have kids tell you about their brother's shoes--"Big they are, yes!" Yoda was a yogi, after all.

Cut me down, shcut me down...

8 comment(s)

This is a great song (HT to F. Beckwith), but I find it "amusing" that so many ostensible hedonists and secularists were used in this video. As if any of them really cares what the lyrics mean. The confused look on Flea's face behind the crucifix says it all heheh.

Monday, November 16, 2009

Chinese medicine...

0 comment(s)
I was appalled at dinner last night as I finally saw on TV why the Chinese term for the male member is what it is. It had always seemed a naively apt metaphor, but last night it became graphically clear to me just how accurate a name it is. Just about returned my duck noodles into the bowl. Gotta love culture shock.

Thumbs up...

0 comment(s)
Sir William Bragg: "Sometimes people ask if religion and science are opposed to one another. They are: in the sense that the thumb and fingers of my hand are opposed to one another. It is an opposition by means of which anything can be grasped."

I found this quotation (cited around page 482 in Stanley Jaki's The Relevance of Physics) especially remarkable since I found it shortly after writing my essay on smoking and drinking and the human opposable thumb.

The eyes have it…

4 comment(s)
[Cf. an addendum here.]

Forms as functional essences. The myriad particular "versions" of a seeing device produced by natural selection are all just that––variations on a theme. The formal essence of an "optical sensory device" transcends the various material instantiations of it known to biologists. The issue of formal causation versus sheer mechanism is far removed from the weight IDers might give the eye, or any other ornate biological apparatus, namely, that it is "too complex" not to be designed by an Intelligent Creator. The issue of form in nature has to do with the simple, twin ideas that the same formal operations can appear under many different material circumstances, and that if they could not, nature would be an unintelligible mishmash of material causal collisions. In other words, formal causality in nature not only orders material causation in organisms themselves but also orders our knowledge of material causation in objects of study.

It is precisely the formal operations in question which give both biological "value" to the organism's matter and theoretical clarity to the scientific inquiry. If there were not formal order in nature, we would not see anything distinct in nature, much as, if there were no pattern "in" 3D illusions, we would literally stare all day without seeing anything. Even if we were able to "see" patterns in nature, as an automatic confabulation process in our brains, we would have to admit the patterns we see are but illusions of order superimposed on nature by our precocious brains. Once you grasp the definition of an eye as, say, an organ for detecting light, you can immediately grasp that formal coherence in any eye you happen to find in nature. If one day a marine biologist found an unspecified critter in the coral with an unusual protrusion on its back, he could analyze the tissue all day without knowing what the protrusion is (much less what it is for). He might call it an "eye-like protuberance," but unless he understood the functional essence of the protrusion, he could not grasp what it is (nor, again, what it is for). If, however, he later discovered that a fine beam of light shone on the protrusion animated the critter, he would instantly realize the protrusion, innocuous in its material aspects, is an eye. By one stroke of metaphysical coherence, "the form of an eye" both activates the biologist's mind to grasp what the material protrusion is and orders that matter itself into being an actual eye. This is what Michael Polanyi, of whom more presently, meant by arguing that the structure of our knowing really corresponds with the structure of the thing(s) known; otherwise our knowledge is falsely so-called.

It should go without saying that the formal (and final) discovery of the critter's eye dynamically includes the material and efficient modes of causality involved in constituting and "triggering" the eye. Indeed, it is the formal coherence of the protrusion which "promotes" merely "eye-like" matter on its back to the status of a genuine eye. This is not to suggest that formal causality is mysteriously exempt from material or efficient causality. Far from it, which is why, first, Aristotle considered causation a unified reality albeit with a fourfold structure and, second, he rejected Platonic Forms in favor of saying that forms only exist as actual substances. On page 39 of The Tacit Dimension (New York: Anchor Books, 1966), Michael Polanyi expresses nicely the interrelations of formal and material causality by noting that

a complete physical and chemical topography of an object would not tell us whether it is a machine, and if so, how it works, and for what purpose. Physical and chemical investigations of a machine are meaningless, unless undertaken with a bearing on the previously established operational principles of the machine. But there is an important feature of the machine which its operational principles do not reveal; they can never account for the failure and ultimate breakdown of the machine. … Only the physical-chemical structure of a machine can explain its failures. Liability to failure is, as it were, the price paid for embodying operational principles in a material the laws of which ignore these principles.

To take a second example: If you saw a lump of dissected organic tissue lying on the floor, nothing about its material structure would clearly indicate that it is an "eye." If, however, I told you I had dropped a fish eye on the floor, the lump of matter would instantly "become" an intelligible object––an eye––even despite the pitiful condition of the matter (having been plucked out of a fish's head and dropped on the floor). It is not because the tissue––the matter––was diseased or deficient that the lowly fish eye is no longer "an eye" qua an active seeing device; it is only because the matter, in itself perfectly sound, is no longer ordered by its formal nature that it is incorrect to call that chunk of organic tissue an eye.

Let me cite some of what Francis Beckwith recently posted about this, not the least because his post was the catalyst for this post of mine:

Although it is true that final causes imply design, the ID movement is a project in which the irreducible or degree of specified complexity of the parts in natural objects are [sic] offered as evidence that these entities are designed. But that is not the same as a final or formal cause, which is something intrinsic to the entity and not detectable by mere empirical observation. For example, if I were to claim that the human intellect’s final cause is to know because the human being’s formal cause is his nature of “rational animal,” I would not be making that claim based on the irreducible or degree of specified complexity of the brain’s parts. Rather, I would be making a claim about the proper end of a power possessed by the human person. That end cannot be strictly observed, since in-principle one can exhaustively describe the efficient and material causes of a person’s brain-function without recourse to its proper end or purpose. Yet, the end or purpose of the human intellect seems in fact to be knowable.

As Beckwith notes, and as I have intimated above, there is a close, perhaps even redundant, connection between formal and final causality. Indeed, to return to my earlier line of thought, the evolved success of versions of any formally coherent organ ("tool" in Greek) is evocative of the close connection between form and finality. The many versions of the eye in nature all "succeed" over time because they all accord with the proper function of an eye: namely, that it is better to see than not to see. As Stephen Barr writes:

Some people think that the Darwinian mechanism eliminates final causes in biology. It doesn't; the finality comes in, but in a different way. Why does natural selection favor this mutation but not that one? Because this one makes the eye see better in some way, which serves the purpose of helping the creature find food or mates or avoid predators, which in turn serves the purpose of helping the animal to live and reproduce. Why do species that take up residence in caves gradually lose the ability to see? Because seeing serves no purpose for them, and so mutations that harm the faculty of sight are not selected against. … Darwinian explanations can account for very little indeed without bringing intrinsic finality into the explanation.