## Saturday, June 26, 2010

### No hitter, no brainer…

> **The Next No-Hitter: May?**
> Mathematician Uses Statistics to Predict Rare Baseball Events
> ––May 1, 2005
>
> Researchers use a simple statistical tool known as the Poisson distribution to predict no-hitters and also the number of players hitting for the cycle, in which a player gets a single, double, triple and home run in the same game. A Poisson distribution predicts the number of events that will occur in a fixed time interval, provided that the events occur at random, independently in time, and at a constant rate.
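In concrete terms, the model the article describes is just a few lines of arithmetic. Here is a minimal sketch; the 1.78 no-hitters-per-season rate is the illustrative figure I cite later in this post, not necessarily the researchers' exact parameter:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events, given a mean rate lam per interval."""
    return lam ** k * exp(-lam) / factorial(k)

# Assumed rate: roughly 1.78 no-hitters per season (see below).
lam = 1.78
for k in range(4):
    print(f"P({k} no-hitters in a season) = {poisson_pmf(k, lam):.3f}")
print(f"P(at least one) = {1 - poisson_pmf(0, lam):.3f}")
```

At that rate, a season with exactly one no-hitter is the likeliest single outcome, and a season with none is decidedly improbable — which is all the "prediction" amounts to before any talk of dates.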

This is one of those "interesting" bits of "science news" I peruse from time to time, and upon which I openly co(d)gitate even more rarely. (I suppose there is a Poisson curve for predicting roughly when I will make such exco(d)gitations….)

In any case, what struck me about this news is that, from a philosophical perspective, it might be very insignificant. (I think it's quintessentially philosophical that this "news" is five years out of date. Authentic philosophizing is in no need of deadlines or relevance, says I.)

So the Poisson curve for baseball stats places a no-hitter in late May or early June. Yet the article mentions several times that it still can't predict which player or players will be involved, much less exactly when and where it will happen. It also grants that the prediction might be off. So I have to wonder: what does this add to baseball culture, aside from a new meta-level of making bets (viz., bets about the accuracy of a Poisson prediction about prior bets on the season champs)? More to the point, what does this discovery––having so much to do with necessity, foreknowledge, freedom, contingency, determinism and indeterminism––add to the philosophical debate about all the topics just listed? Very little, I contend.

First of all, baseball fans have relied on statistics to make predictions for decades. Batting averages, in-game errors, RBIs, home-field advantage, weather forecasts, players' ages and injuries, etc.––all these data have, presumably, been used to make predictions since baseball began. The Poisson applications to baseball just add a new facet to the old gamble, not anything inherently novel from a philosophical perspective.

Second, baseball fans were able to make very significant predictions about no-hitters well before Poisson analyses, to wit: in all likelihood, there will be at least one no-hitter between X and Z, where X and Z are the beginning and end points of the baseball season. According to Wiki:

> In recent seasons, the schedule runs from the beginning of April to the end of September, followed by the post-season tournament in October. The endpoints of the season have gradually changed through the years. In the late 1800s, the regular season began in late April and ran through late October. By the early 1900s, however, the season was running from late April to late September or early October, with the World Series capping the season in October, sometimes actually starting in the last days of September.

It would be interesting to see what a slight modification of the rules about bats would do to the Poisson predictions. For instance, suppose the girth of a legal bat were widened by a millimeter or two. This would significantly lower the odds of a no-hitter every season. Then imagine players could select from a whole range of differently sized bats. What kind of complex mathematics would be needed to predict which bat a player would select at an otherwise optimally predicted time for a no-hitter? Once you start making predictions about randomly selected bats, you are already operating at a wider "variable perimeter" than the one from which you hoped to predict a mere no-hitter. Where would such a "variable regress" end?

The question I am worrying is, what makes for a truly stunning prediction? How precise does it have to be to support "hard determinism"? Within a month? Within a day? Within a second? Why isn't it stunning enough to, say, predict that "scientifically speaking, in light of past stats, there will be 1.78 no-hitters this season"? That kind of prediction has enough metaphysical punch to do away with Hume's arguments against natural continuity and inference. By contrast, does a Poisson prediction within a minute and a square foot of where the next no-hitter will happen have enough metaphysical "oomph" to validate determinism? I deny that it does.
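As a back-of-envelope check on how little the "when" adds to the bare rate: under a constant-rate model the wait for the first no-hitter is exponentially distributed, so its median falls at ln(2)/λ of the season. A sketch, assuming (my assumption, not the article's) a 183-day season running roughly April through September:

```python
from math import log

lam = 1.78          # expected no-hitters per season (the figure above)
season_days = 183   # assumed: roughly April 1 through September 30

# Under a constant-rate (Poisson) model, the waiting time to the first
# no-hitter is exponential, and its median is ln(2)/lam of a season.
median_fraction = log(2) / lam
print(f"Median first no-hitter: day {median_fraction * season_days:.0f} of the season")
```

That lands around day 71 — early June — which is roughly the article's "late May or early June" call. In other words, the headline date falls out of the rate by trivial algebra; nothing further about necessity or freedom has been learned.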

As I hinted with my musings on randomly sized bat-girth, any system that is sufficiently "open" and complex (call it an A·n System), renders predictions made about, and from within, a system of lesser complexity (call it an A·n-1 System) useless. Why? Because predictions about/in/for A·n-1 occur within A·n and must, to be rigorously predictive, take into account factors in/from A·n which might impinge upon A·n-1. By contrast, attempting to predict X in A·n requires a grasp of factors in A·n+1 which would impinge upon A·n. And so on. Even a simple prediction, like, "Johnny will land this three-pointer [in A·n]," is predicated on a belief in, or cognitive 'access' to, a ceteris paribus clause from/to A·n+1 that "the laws of physics and biology ['enclosing' A·n] will not radically and suddenly alter between Johnny's drive down the court and his sinking the three-pointer."

In any case, I take it as a general rule of thumb that there is an inverse proportion between the rigor of scientific prediction and the ontological scope in question. Theoretical precision, in other words, declines with cognitive ambition. The larger and "realer" the system of inquiry is, the less amenable it is, in principle, not only to making accurate, specific predictions, but, moreover, even to admitting such a prediction could be made at all. This is not a methodological flaw, given, say, the lowly cognitive powers of human agents, but, rather, a limitation inimical to the very idea of total predictions, a totality required in order for determinism to be coherent.

The upshot is that I am, currently anyway, very inclined to agree with much of the work of a philosopher whom I only recently discovered, Patrick Suppes (Stanford). I'm too tired right now to elaborate on Suppes's "probabilistic metaphysics," but his "Indeterminism or Instability, Does It Matter?" [PDF ALERT!] is a very good introduction to many of its themes. My point in all this base-running at the mouth is that the attempt to formulate a total prediction of "the world" is a lost cause. More strongly, the ex arguendo incoherence of saying the world is amenable to a necessarily causally determined predictive description is dispositive of determinism. I've written about this idea before, for instance, in "Reporting live" and, more recently, in the latter half of the dialogue in "Optimus E-Prime". Recently, though, the idea that has been percolating in The Codgitank© is that the very coherence of "the world" requires its conceptual intelligibility to a self-conscious mind. (I know, did I mention I'm a theist?) I was tired when I mentioned Suppes and the trend has only continued, so I'll keep this brief (you can thank me later).

My fundamental question is, "Can anything really exist which is intrinsically incoherent?" If so, well, then, I need some examples. If not, though, can anything exist without also being comprehended in its integral coherence? I am inclined to say No, since, ex hypothesi, one of a thing's essential features is that it is not conceptually incoherent. The issue is not that there are things the intelligibility of which we have not yet discovered, since, if those things actually exist, they exist coherently. If they do not coherently exist, then, of course, they do not exist. We are, then, not waiting to discover anything's intrinsic coherence, but only this or that unknown thing's existence. A thing's notional coherence must be coterminous with its actual existence, otherwise its existence would lack actual notional coherence, whereby it would not actually exist. The snag is that actual notional coherence, present by definition in the very "warp and woof" of actual existents, is only notionally coherent to a mind capable of grasping notions. (Consider the etymology of "notion": from Latin nōtiō a becoming acquainted (with), examination (of), from noscere to know; Gk. ennoia "act of thinking, notion, conception," or prolepsis "previous notion, previous conception.") As I have argued before (cf. "Uncomprehended logic"), eternal intelligibility (e.g., numbers, logical laws, etc.) presupposes eternal intelligence. Nous (νοῦς) is to notional coherence what notional coherence is to existence. And, yet, existence makes notional coherence ('essence') actual only because Absolute Existence makes certain coherently possible existents actual (cf. É. Gilson on Platonism, Scotism, and Suarezianism in Being and Some Philosophers).

At a higher level, if some things are "computationally inaccessible" (a phrase I picked up from Suppes, and which could apply to my quandary about deterministic predictions and A·n[-1/+1] Systems), their existential coherence requires they be intelligible by a non-computational (i.e., immaterial, non-ratiocinative) means of knowledge. If nothing can exist which is not at the same time subsumable to a coherent definition (just as, conversely, something with an incoherent definition, like a square circle, cannot exist), then anything's existence––and more importantly, Everything's existence––must be actively subsumable to an intelligible ratio. The grasping of Everything's existential coherence must be coterminous with Everything's actual existence, in which case, an intelligence inclusive of both the world's total existential coherence and its own intelligence 'over' that totality must exist. To cop out by saying that perhaps the world is fundamentally incoherent is a dead end, not only because referring to "the world" presupposes its coherent existential unity, but also because "the world's fundamental incoherence" only makes sense when it is further asked, "Incoherent with respect to what?"––a question which of course displaces alleged fundamental incoherence into a higher frame of coherence. Ultimate coherence is inescapable, for, as Cratylus 'argued' so 'eloquently', there is literally no way logo-ically to argue otherwise. As such, total coherence points towards total consciousness.

As a final "meditation," consider the etymology of coherence (and its startling antonym): L. cohaerentem (nom. cohaerens), prp. of cohaerere "cohere," from com- "together" (see co-) + haerere "to stick"; L. haesitationem (nom. haesitatio) "irresolution, uncertainty," from haesitare "stick fast, stammer in speech, be undecided," frequentative of haerere "stick, cling," from PIE *ghais-eyo (cf. Lith. gaistu "to delay, tarry"). Here we see that in the very roots of language, coherence is allied with unified conscious volition, speech, action––Pure Act, Abiding Logos.