- Students of physics find it hard to escape the notion of pre-determinism
because of the deterministic nature of differential equations (which
remain deterministic even when chaotic dynamics is accounted for).
A predetermined universe certainly makes concepts such as 'free
will' difficult to deal with. Indeed, free will is frequently dismissed
as an illusion. In particular,
the Gestalt of Determinism is the notion that once a sequence of
events is fixed in time, we can jump out, god-like, and view this
sequence from afar, as a whole: an inanimate graph with one axis
labelled 'time', or, like some Christian God sitting on a mountain
looking down on an intersection with two fast-approaching
vehicles, 'seeing' that a collision is inevitable and
'predetermined'.
- Sir Roger Penrose makes a number of (convincing) arguments that Turing
machines could never be self-aware or possess free will as we know it
[Pen90].
At a simplistic level, this is because Turing machines are
ultimately deterministic, even when fed 'random' external stimuli
(e.g. eyes, ears, a good email box), and a deterministic device
can never have free will. This is the thrust of Searle's
'Chinese Room' argument.
Penrose makes stronger arguments, though:
the strongest of these concern reasoning about
Gödel's paradox, and about mathematics in general.
The gestalt is that no Turing machine can reason about Turing
machines in general, for if it could, paradoxes would arise.
Penrose concludes that humans can solve problems and have
visionary intuitions into mathematics that no machine could have.
- Recent advances in the notion of quantum computation suggest that its properties are more in tune with what would be called consciousness than any phenomenon known in the world of Turing-machine AI.

Let's assume for the moment (be it true or false) that the human brain is intimately tied to quantum-computational mechanisms. Merely introducing such a supposition into the mind-body argument does not inherently or explicitly simplify the mind-body problem.

Why do I say this? Let's take a step back:

- The ability to make decisions is not an indicator of free will.
Indeed, it is straightforward to argue that computers can make
'decisions' based on 'inputs', and take 'actions' that have dramatic
macroscopic effects. In particular, complexity and 'emergent
properties' are equally hollow: the weather is extremely sensitive
to initial conditions and extremely hard to predict, but that does
not imply that the weather has free will or consciousness.
- In particular, the ability to amplify a microscopic state to
macroscopic size is not an indicator of conscious action. For
example, the amplification of a nano-ampere current to a scale sufficient
to drive a meter is not an indicator of consciousness.
- In particular, the use of 'classical chaos' is not sufficient to
escape determinism. By 'classical chaos' I mean the behavior of
mathematical systems with an infinite number of digits of precision.
Difficulty of prediction is not a substitute for inability to
predict. Note, however, that many (most?) chaotic systems seem to
have a Lyapunov exponent that is roughly constant over
all scales. In particular, this implies that although minute
differences in the millionth decimal place might be amplified to
order-unity effects in finite time, differences in an infinitely
remote decimal place will not be amplified to finite effect in
finite time.
- Note, however, that one might envision a chaotic system that does
amplify an infinitesimally small disturbance in a finite time.
To have such amplification, we'd need something that grew faster
than exponentially: e.g. a pole (1/x). A pole reaches 'infinity'
in 'finite time': imagine approaching zero with finite velocity;
then 1/x goes infinite in finite time.
If we were able to find chaotic systems that amplified infinitesimally small disturbances, then there are some unexplored mathematical playgrounds to be considered, viz. the number/game constructions of John Conway. It is not clear to me that some of the 'games' that live in the 'gaps' between 'infinitesimals' couldn't be amplified to have finite effect in finite time through some particularly singular differential equations or other chaotic processes. Were this the case, one would clearly have to explore the connection. Nonetheless, in the absence of any such concrete example, one would still be tempted to posit that any outcome of such amplification would be wholly deterministic in the standard sense of the word 'determinism'; that is, unless one discovered that such a process injected some sort of innate randomness.
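
The faster-than-exponential growth needed here can be made concrete with the textbook example dx/dt = x^2, whose exact solution x(t) = x0/(1 - x0*t) reaches infinity at the finite time t* = 1/x0. A minimal numerical sketch (the function names and the choice x0 = 2 are mine, purely illustrative):

```python
# Finite-time blow-up: dx/dt = x^2 has the exact solution
# x(t) = x0 / (1 - x0*t), which diverges at t* = 1/x0.
# Contrast exponential growth (dx/dt = x), which never reaches
# infinity in any finite time.

def blowup_time(x0):
    """Time at which the solution of dx/dt = x^2 diverges."""
    return 1.0 / x0

def x_exact(x0, t):
    """Exact solution of dx/dt = x^2, valid only for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

x0 = 2.0
t_star = blowup_time(x0)
print(x_exact(x0, 0.5 * t_star))    # still moderate halfway to t*
print(x_exact(x0, 0.999 * t_star))  # grows without bound as t -> t*
```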

- Yet, free will does seem to imply that as a result of its action,
the state of the universe is altered. Furthermore, that alteration
is not pre-determined (a la the classical dynamical equations) or
predictable. Nor is the change random, however: free will flows
from consciousness, rather than from statistical averages.
It seems irreversible, in the sense that it is
somehow tied to the flow and passage of time.
- The introduction of uncertainty into outcomes by means of quantum
mechanics does not in itself escape the strangle-hold of determinism,
and, in particular, seems to introduce a kind of 'God playing
dice with the universe' type of randomness into results. Whatever
leeway it leaves for the action of 'conscious control' is not
obvious.

So if determinism and randomness are not acceptable, what is? Well, let's try to imagine some mathematical framework that might conceivably fit the notion of free will. It would have to be odd: not predictable, so one couldn't use the vocabulary of algebra and geometry in the ordinary sense: free will seems at odds with statements such as 'if A and B then C', preferring instead 'if A and B then maybe C or D, depending on the choice made'. Nor is it stochastic, so we can't talk in the following fashion: 'if A and B then C 37% of the time'. So how can we talk about free will? What sort of a physical theory, rooted in mathematics, can we construct that will somehow be shown to support and allow free will? It is not obvious that this is even possible.

Should we take this to be a proof-by-contradiction? 'free will is not tractable by ordinary mathematics, ergo it doesn't exist'? That seems at best absurd.

Let's take a moment to drive this point home. One catch-phrase occasionally bandied about is 'emergent behavior'. For example, consider a collection of round objects of various sizes, with some amount of surface friction. The mathematical equations describing round objects with friction are not terribly complex. Add gravity to the mix, and they are still not complex. In particular, one could stare at these equations for a lifetime without realizing that by piling up the round objects, one will get landslides of various sizes. Yet landslides occur, and once one knows that, then one knows how to approach these equations in such a way that landslides will appear in them as well. The landslide is the 'emergent behavior': viz., some aspect that is not obvious from the underlying description of the system. Nonetheless, once one knows of the emergent behavior, one can once again pin it down mathematically: landslides occur with certain stochastic distributions. There's an element of randomness and unpredictability, but certainly no element of mystery or magic.
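
A standard toy model of exactly this sort of emergence is the Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time onto a grid, and any site holding four or more grains topples, shedding one grain to each neighbor. Nothing in that local rule mentions landslides, yet avalanches of many sizes appear. A minimal sketch (the grid size and grain count are arbitrary choices of mine):

```python
import random

def topple(grid, n):
    """Relax the sandpile; return the number of topplings (avalanche size)."""
    size = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4     # grains at the edge fall off the table
                    size += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return size

random.seed(1)
n = 11
grid = [[0] * n for _ in range(n)]
avalanches = []
for _ in range(3000):
    grid[random.randrange(n)][random.randrange(n)] += 1
    avalanches.append(topple(grid, n))

# The local rule never mentions avalanches, yet they span many scales.
print(max(avalanches))
```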

Through similar reasoning, some argue that computers are particularly rich in 'emergent behavior', and that, in particular, it will soon be seen that consciousness and free will will emerge from them. The standard rebuttal to this argument is that computers are utterly deterministic, and that therefore, there is no way in which they could exhibit true free will. The standard counter-argument is that determinism doesn't matter, that by mixing in outside, environmental influences and interactions, the deterministic computer gains a sufficient measure of indeterminism. Unfortunately, these arguments resemble those for a gust of wind: the wind, controlled by partial differential equations, is ultimately deterministic, no matter how difficult it may be to predict, and no matter how much outside influences may affect it. We are not in the habit of ascribing consciousness to gusts of wind; why should another deterministic system, such as a computer, be any different?

Finally, note that adding a component of stochastic behavior does not change the above arguments: thus, saying that we've built a quantum computer out of some antiferroelectric spin field of microtubule tubulin dimers does not provide an escape hatch. Ultimately, spin fields are tractable through standard mathematics, and their behavior can be modeled by computer. Random behavior and random results of quantum measurements do not imply free will. It would seem not to matter whether it is 'God playing dice' or the computer's random number generator providing the randomness.

Separately, to better expose the workings, one might well need to expand the discussion into broader domains of metaphysics and ontology. We need to be aware that the question of free will is tied to the broader questions of being, of existence, such as Heidegger's 'Dasein'. Chairs did not exist before man, yet the concept of 'chair' seems to be timeless; or is it? If indeed the Universe and Everything In It is a part of Physics, and is subject to physical laws, then 'Being-ness' is something that is ultimately intertwined with Time. We are able to reason and contemplate because we are able to remember the past, and to think about it. 'Being-ness', likewise, seems to be a memory of the past that we are able to call on in the present, as we flow through time and reason about existence. Free will is our apparent ability to shape the future; but free will can also be viewed as our ability to pose questions, and then maybe answer them: it presupposes memory and existence. Free will seems to operate, at least in part, in the Platonic realm of concept and being; thus, if we fail to sketch out the physics of free will, it may be because we've failed to sketch out the physics underlying Platonic reality first.

- Escape A: let's make the argument that
simple computer systems are not conscious, but, as they get more
complex, then, by some measure, consciousness and free will creep in.
This raises the question: what is the measure, the metric, the yardstick,
that makes consciousness so infinitesimally small in simple programs,
and yet quite present in large ones? Aside from the Turing test, have
we any other means of ascertaining this?
- Escape B: Although computers that are perfectly isolated from their
environment are perfectly deterministic, computers that are in
contact with smooth, continuous external events can be
non-deterministic in certain ways. It is well known that most
irrational numbers are not Turing-computable; however, by hooking
up the computer to variable, infinite-precision irrational
inputs, some additional domain may creep in. It might be that
this additional opening of the domain of computability might
be sufficient to introduce free will and consciousness. But the eye
of this needle seems narrow to me.
- Escape C: Although quantum-mechanical systems can be modeled by
any computer with a good random number generator, it may be true
that no digital random number generator is 'good enough'. Indeed,
the questions of detecting non-random sequences and of factoring
products of large primes are tied. I find it curious
that one of the first envisioned applications of a quantum computer
is to provide high-speed factorization: this in turn seems to imply
that quantum computers will be able to efficiently and quickly
differentiate pseudo-random sequences from true noise.
- Escape C2: A variation of the above, it is argued that quantum
randomness is not entirely random, but is somehow holistically
coupled to the depths of the universe, and in particular, is
influenced in some ESP-like way by psychics. Although I've worded
this alternative with some pejorative terms which currently have no
scientific basis, in fact this alternative cannot be entirely
dismissed.
Let's look at this in the light of some discredited/objectionable lines of research: homeopathic dilutions of various organic compounds. For a while, there were reports of scientific instruments detecting trace amounts of certain molecules in dilutions so dilute that at best there should have been only one molecule per liter of solution. The proposed theoretical explanation for this was that the water molecules somehow 'remembered' the presence of the solute, and that the instruments were detecting this memory. In fancy terms, the successive dilutions somehow permanently altered the physical-chemical properties of the solvent. Although most serious scientists give these ideas no shelter, it is important to understand in what way they may have had to be treated seriously. First of all, there are well-known phenomena, such as superconductivity, which do invoke large-scale coherent organization as part of their theoretical explanation. Thus, one cannot a priori dismiss claims of microscopic phenomena leading to macroscopic effects in bizarre ways. Second of all, in the world of pure physics, very basic assumptions must always be challenged and re-affirmed: the speed of light must be measured both parallel and transverse to the movement of the earth through the 'ether', or, more recently, the slowing of the Pioneer 10/11 spacecraft as they leave the solar system must be carefully calibrated to make sure that there are no long-range repulsive forces. However comfortable one may be with the current dogma about the absence of ether, or the absence of a fifth force, one must always be on the lookout. Similarly, it behooves chemists to take the homeopathic claims seriously enough to at least rule them out through sensitive experiments, even if 'intuition' and 'common knowledge' make homeopathic claims appear to be absurd.
In a similar vein, claims of ESP and psychic effects remain completely unreproducible and scientifically discredited; this does not, however, rule out the possibility that the conscious mind somehow affects quantum measurements in bizarre ways. The real problem is the absence of any sort of experiment that could reasonably test this effect. Indeed, even if it were true that microtubules really were large, complex quantum computers, we are incredibly far off from any experiment that would probe the coupling between microtubules and the nebulous activity of free will in a living brain. That is, even if it were true that 'free will' somehow affected microtubules in some ESP-like, psychic way, we have no way of measuring such an action.
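
Escape C's link between pseudo-randomness and factoring is not just an analogy. The Blum-Blum-Shub generator squares a seed modulo N = p*q and emits the low-order bit; distinguishing its output from true noise is provably as hard as factoring N. A toy sketch (the primes below are absurdly small, chosen by me only to show the mechanism; real use requires hundreds of digits):

```python
# Toy Blum-Blum-Shub generator: x_{k+1} = x_k^2 mod N, output the low bit.
# For Blum primes p, q = 3 (mod 4), predicting the bit stream is
# provably as hard as factoring N = p*q (Blum, Blum, Shub, 1986).

def bbs_bits(seed, n_bits, p=10007, q=10039):
    """Generate n_bits pseudo-random bits. p and q are both
    primes congruent to 3 mod 4; these toy values are insecure."""
    N = p * q
    x = seed % N
    out = []
    for _ in range(n_bits):
        x = (x * x) % N
        out.append(x & 1)
    return out

bits = bbs_bits(seed=123456, n_bits=32)
print(''.join(map(str, bits)))
```

A quantum computer that factors quickly would, by this reduction, also unmask such generators, which is the curiosity noted above.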

**Modeling Chaotic Dynamics**- One way in which computers can be made to resemble
indeterministic systems is to have them model chaotic dynamics.
Such modeling tends to make the outcome very sensitive to
initial conditions. Note that the 'initial conditions' are
a quasi-continuous input (i.e. 'floating point numbers'), and
the output is quasi-continuous as well (i.e. are also 'floating
point numbers').
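
This sensitivity fits in a few lines. The logistic map x -> 4x(1-x) has Lyapunov exponent ln 2, so a perturbation of size 1e-10 roughly doubles each iteration and reaches order unity after about log2(1e10), i.e. roughly 33 steps. (The starting point 0.3 is an arbitrary choice of mine.)

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-10   # differ only in the tenth decimal place
print(abs(logistic(a, 10) - logistic(b, 10)))  # still tiny
print(abs(logistic(a, 40) - logistic(b, 40)))  # typically order unity by now
```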
**Discretely-valued NP-Complete Problems**- NP-Complete problems such as the Traveling Salesman or Integer
Programming have discretely different outcomes that cannot be
continuously transformed into one another. That is, two
different solutions to the traveling salesman problem are inherently
discrete. The distances might be quite close to one another, but
they still differ by a finite value, and there is
no continuous-valued solution set. Algorithms used to solve the
traveling salesman problem, such as simulated annealing, or neural
networks, do tend to be sensitive to continuous-valued
intermediate states and inputs: e.g. in simulated annealing, the
outcome is sensitive to the cooling schedule in unpredictable
ways. This class of problems is interesting because it
resembles the 'either-or' type decisions that we associate with free
will.
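
The sensitivity to the cooling schedule is easy to exhibit. Below is a bare-bones simulated annealer with 2-opt moves for a random 12-city tour (all parameters and names are arbitrary choices of mine); varying only the continuous cooling rate typically lands the search on a discretely different tour:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, t0, cooling, seed=0):
    """Simulated annealing with 2-opt moves; the continuous-valued
    cooling rate selects among discretely different tours."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    t = t0
    while t > 1e-3:
        i, j = sorted(rng.sample(range(n), 2))
        new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        d = tour_length(new, pts) - tour_length(tour, pts)
        if d < 0 or rng.random() < math.exp(-d / t):
            tour = new
        t *= cooling
    return tour

random.seed(42)
pts = [(random.random(), random.random()) for _ in range(12)]
slow = anneal(pts, t0=1.0, cooling=0.999)
fast = anneal(pts, t0=1.0, cooling=0.90)
print(tour_length(slow, pts), tour_length(fast, pts))
```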
**Random Input**- Programs that respond to ongoing external input (random,
continuous or discrete). In one way, one is tempted to label
these programs as 'open systems' in the sense that they are
not closed off from the environment. In another way, they
resemble 'closed systems' which do not read all of their input
all at once, but rather a little bit at a time, reacting to
each new stimulus. That is, responding to input 'in real time'
is not fundamentally different from recording the input, and
then analyzing the recording at a later time.
This identity ceases to be the case when the computer then
performs actions to change the external environment:
the changed environment may not be predictable, and thus,
the end result is no longer deterministic in the computational
sense. See 'Open System' below. Note, however, that if the
environment is governed by classical deterministic differential
equations, then the behavior of the combined
system is ultimately predictable (however difficult it may be to
do so in practice).
**Open System**- Hameroff gives an interesting example of a discretely valued
outcome subject to continuous-valued random disturbances:
A purely deterministic robot sailor hoping to dock at one of
three ports. While still far at sea, the smallest of wind gusts
may make the sailor choose a different port; once closer in,
only the largest of gusts, one that might blow the sailor from the
mouth of one port to the mouth of another, will change the outcome.
By having a computer interact with a strongly-(chaotically-)mixing external world, outcomes become very hard to predict. However, this does not imply that the combined system is non-deterministic. After all, the wind currents can be modeled mathematically to arbitrary precision, and results can be predicted within the error bounds imposed by chaotic divergence.
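
The asymmetry (an early gust decides everything, a late gust almost nothing) can be caricatured in a few lines. Here the 'weather' is a tent map, the 'ports' are thirds of the unit interval, and a 'gust' is a one-time kick of size 1e-6; this toy setup is my own, not Hameroff's:

```python
def voyage(x, gust=0.0, gust_at=0, steps=30):
    """Chaotic 'weather' (tent map) carrying a sailor toward shore;
    a 'gust' is a tiny kick applied at one time step."""
    for k in range(steps):
        if k == gust_at:
            x = min(max(x + gust, 0.0), 1.0)
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)  # mixing dynamics
    return x

def port(x):
    """Discrete outcome: which of three ports the sailor ends up in."""
    return min(int(x * 3), 2)

x0 = 0.123456789
base  = voyage(x0)
early = voyage(x0, gust=1e-6, gust_at=0)   # amplified ~2x per step: any port
late  = voyage(x0, gust=1e-6, gust_at=29)  # shifts the endpoint by at most ~2e-6
print(port(base), port(early), port(late))
```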

**Phase Transitions/Critical Exponents**- Consider the vibrations of a long, weighted pole.
In real life, as the pole is lengthened, it eventually
breaks. Mathematically, we can determine the length at which
the pole becomes unstable (it is the length at which minor
perturbations grow without bound).
This problem seems to be algorithmically non-tractable:
beyond the critical length, a minor perturbation in the millionth
decimal place will grow to a large bend in finite time.
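
How 'finite' is that time? Linearizing about the unstable state, the perturbation obeys eps(t) = eps0 * exp(lam*t), so a disturbance in the d-th decimal place goes macroscopic at t = d*ln(10)/lam. The time is always finite and grows only linearly in d, which is why no fixed amount of extra precision buys safety. A sketch (lam here is my stand-in for the actual growth rate, which depends on the pole):

```python
import math

def time_to_order_unity(d, lam):
    """Linearized unstable mode eps(t) = 10**-d * exp(lam*t):
    the time at which the perturbation reaches order unity."""
    return d * math.log(10.0) / lam

# A disturbance in the millionth decimal place still bends the pole
# in finite time; a million times more digits only delays it linearly.
print(time_to_order_unity(6, 1.0))
print(time_to_order_unity(10**6, 1.0))
```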
**Re-iterated Monte-Carlo**- Let's throw dice. If an algorithm likes the result, we are done. If it doesn't, then roll again. ...
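
In modern terms this loop is rejection sampling: its output distribution is the dice distribution conditioned on the acceptance predicate, and the 'decision' is wholly driven by the underlying random stream. A minimal sketch (the predicate and seeds are arbitrary choices of mine):

```python
import random

def roll_until(accept, rng, max_tries=10000):
    """Re-iterated Monte-Carlo: throw two dice until the algorithm
    'likes' the result, then stop."""
    for _ in range(max_tries):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if accept(roll):
            return roll
    raise RuntimeError("no acceptable roll found")

# Keep only sums of 10 or more; the loop is deterministic given the stream.
print(roll_until(lambda s: s >= 10, random.Random(7)))
```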

This is at the limit of my knowledge of the subject, but here goes:

**Devil's Staircase**- The devil's staircase is a construction that graphs the
Farey addition of real numbers. Alternately, it can be
expressed as a construction with continued fractions (Conway).
It has the curious property that all derivatives are zero at
all rational numbers, yet, nonetheless, the function is
strictly monotonically increasing, viz. f(x) < f(y)
whenever x < y.
This is but one of a whole class of functions that are well defined on rational values, but bizarre or undefined at irrational values. (See an artistic exploration in my Art Gallery.)
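
One concrete realization of such a staircase is Minkowski's question-mark function, computed directly from the continued-fraction expansion: ?([0; a1, a2, ...]) = 2/2^a1 - 2/2^(a1+a2) + .... A sketch, exact on rationals (where the expansion terminates):

```python
from fractions import Fraction

def question_mark(x, max_terms=64):
    """Minkowski's ?-function via the continued fraction of x:
    ?([a0; a1, a2, ...]) = a0 + 2/2**a1 - 2/2**(a1+a2) + ...
    Exact for rational x, whose expansion is finite."""
    x = Fraction(x)
    a0 = x.numerator // x.denominator
    frac = x - a0
    result = Fraction(a0)
    sign, shift = 1, 0
    for _ in range(max_terms):
        if frac == 0:
            break
        inv = 1 / frac
        a = inv.numerator // inv.denominator   # next continued-fraction term
        shift += a
        result += Fraction(2 * sign, 2**shift)
        frac = inv - a
        sign = -sign
    return result

# Derivative is zero at every rational, yet the function is strictly
# increasing: e.g. ?(1/3) = 1/4 < ?(2/5) = 3/8.
print(question_mark(Fraction(1, 3)), question_mark(Fraction(2, 5)))
```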

Gestalt: can we make an algebra out of statements about undecidability? Another classic example: 'even God cannot change the value of 2+2, or the value of Pi'. This appears to be a concrete statement about the powers of God, a discussion of whom is in most other respects impossible.

At best, free will seems to be constructive: as in one of Conway's games, we are given a menu of choices, with each choice made resulting in a new game to play. But that misses the point: we want to talk about free will, and not the choices that free will is presented with. We want to talk about the how and why of decision making process, rather than the outcome of it. How can we begin to imagine what sort of (microscopic) physics could talk about it objectively without also removing its very course of action?

With that in mind, let's review what we do seem to know about consciousness and its place in the physical universe.

- Consciousness seems to be intimately and inescapably tied to the
perception of the passage of time, and indeed, the idea that the past is
fixed and perfectly deterministic, and that the future is unknowable.
This fits well, because if the future were predetermined, then
there'd be no free will, and no point in the perception of the
passage of time.

umm, with that as the introduction, we now come to the main point, which is .... 'I have discovered the most marvelous proof, for which the margin of this web page is too small...'. Ha ha. Later duude.

- [Pen90] The Emperor's New Mind, Roger Penrose, 1990.
- Is the World Provably Indeterminate? is a recorded discussion as to what
the word 'deterministic' actually means. Of particular interest are the
comments regarding Chaitin's reformulation of Gödel's paradox.
No matter what formal (logical) system we choose, we can never
formally prove that a given sequence of observations has been
produced by a non-deterministic process. The operative word here is
'observations': if one knew *a priori* or axiomatically that a given
process was non-deterministic, then 'proving it' is not a problem.
- Consciousness as a Quantum Phenomenon, a quick, popular review of the ideas.
- Hameroff Home Page.

**Formal Systems**- Formal mathematical systems consist of a (finite) set of syntactical
rules. The expression of these rules defines what is say-able
in such systems. Since there is a strict correspondence between
what is say-able and what is computable (via the Church-Turing
thesis), one is limited in formal systems to what one can compute.
Yet we also know that there are things that are not computable;
ergo, these are unreachable by formal systems.
Indeed, Penrose (The Emperor's New Mind) works at length to establish this point in layman's terms. The upshot is that formal systems employ axioms as their 'inputs'. These axioms are understood by humans as somehow being 'obviously true' statements. These axioms are not provable within the formal system. Mathematics sits on shaky foundations in that we, as humans, do not know that any given set of axioms does not lead to a set of internal contradictions within a formal system. We have to believe that something like Peano's axioms are not self-contradictory; we cannot formally prove it. (See Clear and Certain Notions, a conversation exploring the issue of intuitionist viewpoints (e.g. 'platonic mysticism') vs. the limits of formal systems.)

On a related note, we have additional metaphysical questions about the way in which humans use language: Do we in fact use language to communicate ideas that are not expressible as formal systems? Certainly, any act of communication that we engage in as humans necessarily results in a finite-length string, and so, naively, we might ask whether such a string is 'computable'. But this seems irrelevant and beside the point, since it fails to address what the mind does in understanding the communication.

March, August 2000

Linas Vepstas linas@linas.org