Infinite Algebras?

There are a variety of constructions that define a transfinite ordinal number 'omega', w, which can be loosely identified with 'infinity'. These constructions also show how to define w+1, w-1, w2 and so on. It is easiest to understand these results by forgetting for a moment that w is larger than any finite number, and instead thinking of the construction as defining some peculiar algebra. The algebra itself seems fairly boring; the excitement derives purely from the fact that one of the axioms of the construction defines an inequality, and that this inequality establishes that w is indeed larger than any finite number (as well as more 'boring' results, such as w-1 < w < w+1, etc.)

The goal of this web page is to ask the following questions, and perhaps hunt for some answers:

Transfinite Turing Machines?

The concept of a 'transfinite Turing machine' is probably completely imaginary, but let's imagine one for an instant. Consider the following rather inefficient algorithm that implements the 'Cantor slash':
Start with an enumerated list of numbers. Loop n=1 until omega: pick a digit that differs from the n'th digit of the n'th enumerant. Put the resulting number at the beginning of the list. Go back to the start.
Clearly, this Turing machine never halts. But the algorithm is oddly specified: instead of running 'forever', we say 'run until omega', which is the same thing, except that after running 'forever' we have constructed a number, using the Cantor slash, that is not in the original list. Then we invoke the 'infinite hotel' to add this freshly constructed number to the list. Not a terribly efficient algorithm. But we still might use it to argue that w2 < 2w, or something to that effect.
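The finite core of the loop above can be sketched concretely. The sketch below diagonalizes against only the first n entries of a list; a transfinite machine would run the same step all the way to omega. The digit rule (d+1 mod 10) and the names here are illustrative choices, not part of the original construction, and the sketch glosses over the usual 0.999... = 1.000... ambiguity.

```python
# Finite approximation of the 'Cantor slash': build a digit string that
# differs from the n'th list entry in its n'th digit.
def cantor_slash(digits_list):
    new_digits = []
    for n, entry in enumerate(digits_list):
        d = int(entry[n])
        new_digits.append(str((d + 1) % 10))  # any differing digit would do
    return "".join(new_digits)

# A toy 'enumerated list of numbers' (decimal digit strings):
enumeration = ["1415926", "7182818", "4142135", "7320508",
               "2360679", "6457513", "8284271"]
fresh = cantor_slash(enumeration)
# 'fresh' differs from the n'th entry in the n'th digit, so it is not in
# the list; the 'infinite hotel' step would now prepend it and repeat.
```

Each pass manufactures a number provably absent from the current list, which is exactly why the loop can never finish its job in finitely many steps.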

More broadly, suppose we could somehow effectively work with transfinite algorithms. Then various proofs fail: e.g. the proof of the undecidability of the halting problem, or any proof that makes use of the diagonal slash. In particular, might this give a new way of looking at meta-mathematics and formalism?
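To see which proofs are at stake, here is a sketch of the standard halting-problem diagonalization. The decider `claims_to_halt` below is a deliberately trivial stand-in (it always answers True); the names are hypothetical, invented for this sketch. The point is that whatever the decider answers about `paradox` applied to itself, the actual behavior is the opposite; only a machine that could run 'past omega' would escape this trap.

```python
# A stand-in for a purported halting decider. Any concrete choice will
# do for the sketch; this toy one simply always predicts 'halts'.
def claims_to_halt(program, arg):
    return True

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running 'program' on itself.
    if claims_to_halt(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

prediction = claims_to_halt(paradox, paradox)
# If prediction is True, paradox(paradox) in fact loops forever; if it is
# False, paradox(paradox) in fact halts. Either way the decider is wrong.
```

The transfinite loophole is that `paradox` only defeats deciders that must answer after finitely many steps; a 'run until omega, then answer' machine is not covered by the argument.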

Ordinarily, to be formal, one starts with a set of axioms and applies the rules of logic to derive theorems. Godel's incompleteness theorem states that there are true statements that are not provable within the system. But maybe we should amend this with the qualification 'not provable in finite time'. Supposing that one could use this notion of writing algorithms that 'run forever, and then a bit more', might not previously uncomputable problems become amenable to examination? The holy grail would be, of course, to compute Chaitin's Omega, the fraction of finite Turing machines that halt.

Of course, this is extremely dangerous territory: set theorists spend entire lives trying to replace an infinite sequence of theorems with one theorem, and religiously avoid infinitely long expressions; yet here we advocate the very opposite. So this has all the outward appearances of being 'crazy math'.

For example, consider the algorithm that counts to N and then outputs whether N is even or odd. Asking whether omega is even or odd is nonsense, and so an algorithm that counted to infinity and then determined 'even-ness' afterward is somehow faulty. One must not be able to write algorithms like this; but how can we distinguish such 'invalid' algorithms from valid ones? Can we develop some rules that tell them apart?
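The finite algorithm from the paragraph above can be written out directly. It is well-defined precisely because the count terminates: the answer is the final value of a flag that flips at every successor step, and at omega (a limit step with no predecessor) that 'final value' would be meaningless.

```python
# Count to N, tracking parity at each step; report even-ness at the end.
def parity_by_counting(N):
    even = True           # 0 is even
    for _ in range(N):
        even = not even   # parity flips at each successor step
    return "even" if even else "odd"
```

Running `parity_by_counting(4)` gives "even" and `parity_by_counting(7)` gives "odd". The flag oscillates forever as N grows, which is exactly why the limit at omega fails to exist; a rule excluding 'invalid' transfinite algorithms would presumably have to reject any answer that depends on such a non-convergent quantity.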


Axiomatic Set Theory (mirror) provides an excellent historical review of axiomatic set theory. Among other things, it reviews how Cantor came to construct the transfinite ordinal numbers.
Continuum Hypothesis
sci.math continuum FAQ
Peano's Axioms (mirror) lists the five Peano axioms.
The Zermelo-Fraenkel system (mirror) lists the seven axioms of set theory.

Copyright (c) 2001 Linas Vepstas