Sunday, October 16, 2011

The halting halting halting halting halting halting…

…problem.

This is an argument I think I devised while teaching this morning. I say I think I devised it, since it is still so tentative that I need to lay out its strands just to see if it is even an argument, or rather just an observation, sprung, like so much, from my endless ignorance.

So.

I was thinking about the halting problem (HP) (I'll save you the click):

In computability theory, the halting problem can be stated as follows: Given a description of a computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.

(Here is a delightful proof of the halting problem, for those of you seeking reading materials for your children.)
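For the curious, the nub of that proof can be sketched in a few lines of Python. Suppose, per impossibile, that someone hands us a perfect oracle halts(prog, data); the little self-referential program below (toy names of my own, and of course it cannot actually be run, since no such oracle exists) defeats it:

    # Suppose (for contradiction) we are handed a perfect oracle:
    #   halts(prog, data) -> True if prog(data) eventually stops, False otherwise.
    # The following self-referential program shows why no such oracle can exist.

    def trouble(prog):
        if halts(prog, prog):    # ask the oracle about prog run on itself...
            while True:          # ...then do the opposite of its verdict:
                pass             # loop forever if it says "halts"
        else:
            return               # halt at once if it says "never halts"

    # Does trouble(trouble) halt?
    # If halts(trouble, trouble) is True, then trouble(trouble) loops forever.
    # If it is False, then trouble(trouble) halts immediately.
    # Either way the oracle answers wrongly, so no general halts() can exist.

That is the whole diagonal trick, stripped of its formal dress.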

Now, here was my wondering:

Suppose, for argument's sake, that "the mind is the brain and the brain is a computer"––a thesis which I do not accept (cf. e.g. Schulman, Searle, Tallis, Dreyfus, Ross, Bougis, et al.)––is true; call this the thesis of the computational mind (TCM).

If TCM is true, then there is nothing in the brain itself, qua algorithmic engine, to halt its own computations. A stop would only come from outside influences, say, if all sensory input were blocked, or if so many portions of the brain were mangled that the brain simply shut down.
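To make that concrete, here is a crude toy in Python (run_module and external_stop are names of my own invention, purely illustrative): a "module" that, left to itself, would churn forever, and that only ever stops because something outside it throws a switch.

    import threading
    import time

    def run_module(external_stop):
        # A toy "algorithmic engine": it contains no halting condition of its own.
        state = 0
        while not external_stop.is_set():   # only an external signal can end this loop
            state = (state + 1) % 997       # churn away indefinitely
            time.sleep(0.001)
        return state                        # it stops only because the outside world said so

    external_stop = threading.Event()
    worker = threading.Thread(target=run_module, args=(external_stop,))
    worker.start()
    time.sleep(0.05)     # the "outside influence" here is just a timer
    external_stop.set()  # block the input, pull the plug
    worker.join()

Nothing in run_module itself ever decides to stop; the stopping is entirely the environment's doing.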

Which brings us to a second prong of the inquiry: the modularity of the mind (MoM).

I think that TCM nearly goes hand in hand with MoM these days, since it is a lemma of TCM that the "programmer" of the mind is blind natural selection, and therefore there is no "central programmer" for making a unified mind. This lemma is extended in research programs about the modularity of the mind. What biological sub-systems still exist in the human nervous system and comprise the modular mind? Apart from the brain's anatomically distinct regions (visual, somatic, motor, etc.), there are also cognitive modules that are vestiges of earlier cognizant organisms, and that just happen to be confined in one space––the human cranium––by natural selection. The mind is a "kludge", as some would put it: a quilt of neural-cognitive modules which have tended to increase reproductive success in hominid populations over time.

Fine. We'll take TCM and MoM as true for the purpose of argument. What follows?

If the mind is the brain and the brain is a computer, then the mind:brain is subject to HP.

If the modules of the human brain:mind are themselves kinds-of-minds by virtue of being algorithmic engines, differing from "the human brain" only in degree, then human neural-cognitive modules (hNCM) are subject to HP. This is modus ponens.

If, however, the brain, either as an agglomerate of hNCM or as any one hNCM, is subject to HP, it (or they) should never halt, of itself, in any algorithmic state. hNCM do/does not, however, endlessly "fail to halt", therefore…. This is modus tollens, but I must elaborate on some conditional conclusions.
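Spelled out roughly (my own compression, nothing rigorous), the tollens I have in mind runs:

1. If an hNCM were a purely algorithmic engine left to itself, it would never halt in a decided state (the TCM reading of HP sketched above).
2. Our hNCM do halt in decided states; we act.
3. Therefore, no hNCM halts purely of itself; whatever halts it must come from outside that module.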

Since human cognition does not endlessly fail to halt––otherwise, how would we perform any action?––and yet our behavior is governed by hNCM, something peculiar must be happening in our mind:brain. Enter my hypothesis.

Even if we grant that humans are governed wholly by hNCM, we can see why humans have free will. If each hNCM is its own little non-halting Turing machine (à la TCM), then none of them should ever halt in a concrete decision. Halting is a matter of decidability, and purely algorithmic halting is undecidable. So how do our hNCM ever halt? From MoM, we must treat each hNCM as an external environment to every other hNCM. It is because our hNCM mutually interact that they can halt in ways that produce our unified-modular behavior.
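Here is a cartoon of the idea in Python (the module names and thresholds are pure invention on my part): three modules, each of which, on its own, has no stopping rule at all, and each of which halts only when another module's output crosses its threshold or when another module has already settled into a halted state. None of them decides by itself; the deciding, such as it is, is entirely mutual.

    import random

    class Module:
        def __init__(self, name, threshold):
            self.name = name
            self.threshold = threshold
            self.output = 0.0
            self.halted = False

        def step(self, others):
            if self.halted:
                return
            # Internal computation: open-ended, with no stopping rule of its own.
            self.output += random.random()
            # The only way this module ever halts is through the other modules:
            # a neighbour's output crosses this module's threshold, or a
            # neighbour has already settled into a halted ("decided") state.
            if any(o.halted or o.output > self.threshold for o in others):
                self.halted = True

    modules = [Module("visual", 5.0), Module("motor", 4.0), Module("affect", 6.0)]
    steps = 0
    while not all(m.halted for m in modules):
        for m in modules:
            m.step([o for o in modules if o is not m])
        steps += 1

    print("all modules settled after", steps, "steps")

Run it a few times: it always settles, but after a different number of steps each time. The random() call is only a stand-in for whatever a module's open-ended computation would be doing, so this is an analogy and nothing more; but it is at least a crude picture of modules that halt only through one another, which is the shape of my hypothesis.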

The upshot is that there is a radical indeterminacy in the total hNCM nexus known as our consciousness, yet one that does not generate simple randomness––and this is the basic meaning of free will: non-random indeterminacy. The competing pre-halting computations of our hNCM lead to a dynamic series of non-random but non-deterministic actions, a.k.a. our selves. Interestingly, even research into the irrational biases of hNCM itself relies on a standard of rationality that surpasses those very biases: we are not determined by our modular biases, although our modular mind is wholly physical. Even the mind:brains of cognitive scientists defending TCM indicate the non-random, non-deterministic nature of human consciousness.

This is akin to Robert Kane's account of free will, in that at any moment of conscious decision (note that MoM-TCM accounts for unconscious decision processes), we are literally an indeterminate, yet wholly physical, complex of unhalted, unweighted decision variables. Once the mutual interference among hNCM generates a halting state which propagates non-centrally but uniformly through the mind:brain, we act freely. We act because hNCM halt; we act freely because there is nothing deterministic about hNCM qua computational algorithmic engines. The understanding of chance qua the overlapping of otherwise disparate causal chains goes back to Aristotle, and it is that sense of indeterminate causality which I invoke here. We are free because––assuming TCM-MoM––there is an intrinsically indeterminate congeries of hNCM which could not function unless they mutually limited each other. A jumble of indeterminate rational engines produces a stream of rational action.

So, I propose that free will is intelligible even on a materialist account of the brain, and certainly intelligible on a non-materialist account of human existence. Regardless of which ontology of persons is true, the doctrine of rational, human free will is true.

2 comments:

djr said...

I'm sure I could find some problems with this if I thought about it for a bit, but I thought I would just confirm that you kick ass.

Codgitator (Cadgertator) said...

Well, at least somebody got it! Good to hear from you. This post needs a lot of technical tightening up, but I think the principle abides, and it's very much in the Codgitank for another round.