
3. Intuition and Axioms

The stable character of mathematical models and theories is not always evident - because of our Platonist habits (we are used to treating mathematical objects as a specific "world"). Few people will dispute the stable character of a fully axiomatic theory: all the principles of reasoning allowed in such a theory are presented explicitly in its axioms. Thus the principal basis is fixed, and any change in it means an explicit change of the axioms.

Could we also fix those theories that are not yet fully axiomatic? How could this be possible? For example, all mathematicians are unanimous about the ways of reasoning that allow us to prove theorems about natural numbers (other ways yield only hypotheses or errors). Still, most mathematicians do not use, and often do not even know, the axioms of arithmetic! And even in theories that seem to be axiomatic (as, for example, geometry in Euclid's "Elements") we can find aspects of reasoning that are commonly acknowledged as correct, yet are not presented in the axioms. For example, the properties of the relation "the point A is located on a straight line between the points B and C" are used by Euclid without any justification. Only in the XIX century did M. Pasch introduce the "axioms of order" describing this relation explicitly. Still, until that time mathematicians somehow managed to treat it in a uniform way...

Trying to explain this phenomenon, we are led to the concept of intuition. Intuition is usually treated as "creative thinking", "direct obtaining of truth", etc. But I am interested in a much more prosaic aspect of intuition.

The human brain is a very complicated system of processes. Only a small part of these electrochemical fireworks can be controlled consciously. Therefore, alongside the processes going on at the conscious level, a much greater amount of thinking processes must be going on at the unconscious level. Experience shows that when the result of some unconscious thinking process is very important for the person, the result can sometimes be recognized at the conscious level. The process itself remains hidden; for this reason the effect seems like a "direct obtaining of truth", etc. (see Poincare [1908], Hadamard [1945]).

Since unconscious processes yield not only arbitrary dreams, but also (sometimes) reasonable solutions of real problems, there must be some "reasonable principles" ruling them. In real mathematical theories such unconscious "reasonable principles" rule our reasoning (together with the axioms, or without any axioms). For me, relatively closed sets of unconscious "ruling principles" represent the most elementary kind of intuition used in mathematics.

See also David G. Myers: Intuition: Its Powers and Perils (Yale U. Press, September 2002).

We can say, therefore, that a theory (or model) can be stable not only due to some system of axioms, but also due to a specific intuition. So, we can speak about the intuition of natural numbers that determines our reasoning about these numbers, and about the "Euclidean intuition" that makes the usual geometry completely definite, though Euclid's axioms do not contain many essential principles of geometric reasoning.

How could we explain the emergence of intuitions that rule the reasoning of so many different people uniformly? It seems that they can arise because all human beings are approximately alike, because they deal with approximately the same external world, and because in the process of education, practical and scientific work they tend to achieve accordance with each other.

As investigations go on, they can arrive at a level of complexity where the degree of definiteness of intuitive models is no longer sufficient. Then various conflicts may appear between specialists about which ways of reasoning should be accepted and which should not. It may even happen that commonly acknowledged ways of reasoning lead to absurd conclusions...

Such situations have appeared many times in the history of mathematics: the crash of the discrete geometric intuition after the discovery of incommensurable magnitudes (the end of the VI century BC), problems with negative and complex numbers (up to the end of the XVIII century), the dispute between L. Euler and J. d'Alembert on the concept of function (the XVIII century), groundless operations with divergent series (up to the beginning of the XIX century), problems with the acceptance of Cantor's set theory, paradoxes in set theory (the end of the XIX century), the controversy around the axiom of choice (the beginning of the XX century). All this was caused by the inevitably uncontrollable nature of unconscious processes. It seems that the "ruling principles" of these processes are picked up and fastened by something like "natural selection", which is not capable of far-reaching coordination without errors. Therefore, the appearance of (real or imagined) paradoxes in intuitive theories is not surprising.

The defining intuition of a theory does not always remain constant. Frequent changes happen during the initial period, when the intuition (like the theory itself) is not yet stabilized. During this most delicate period of evolution, the sharpest conflicts appear. The only reliable exit from such situations is the following: we must convert (at least partly) the unconscious ruling "principles" into conscious ones, and then investigate their accordance with each other. If this conversion were meant in a literal sense, it would be impossible, since we cannot know the internal structure of a specific intuition. We can speak here only about a reconstruction of a "black box" in some other - explicit - terms. Two different approaches are usually applied for such a reconstruction: the so-called genetic method and the axiomatic method.

The genetic method tries to reconstruct intuition by means of some other theory (which can also be intuitive). Thus, a "suspicious" intuition is modeled by using a "more reliable" one. For example, in this way the objections against the use of complex numbers were removed: complex numbers were presented as points of a plane. In this way even their strangest properties (as, for example, the infinite set of values of log x for a negative x) were converted into simple theorems of geometry. After this, all disputes stopped. In a similar way the problems with the basic concepts of the Calculus (limit, convergence, continuity, etc.) were cleared up - through their definition in terms of epsilon-delta.
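
For instance, in modern notation: a negative number x, viewed as a point of the plane, has its polar angle determined only up to full turns, hence the logarithm receives infinitely many values, one for each choice of the angle:

$$\log x = \ln|x| + i\pi(2k+1), \qquad k \in \mathbb{Z} \quad (x < 0).$$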

It appeared, however, that some of these concepts, after the reconstruction in terms of epsilon-delta, acquired unexpected properties missing in the original intuitive concepts. Thus, for example, it was believed that every continuous function of a real variable is differentiable almost everywhere (except at some isolated "break points"). After the concept of a continuous function had been redefined in terms of epsilon-delta, it turned out that a continuous function can be constructed that is nowhere differentiable (the famous construction by K. Weierstrass).
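
A standard form of Weierstrass's construction (with his original conditions on the parameters) is the series

$$W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1,\ b \text{ an odd integer},\ ab > 1 + \tfrac{3\pi}{2};$$

the series converges uniformly (so W is continuous), yet W is differentiable at no point.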

The appearance of unexpected properties in reconstructed concepts means that here we have, indeed, a reconstruction - not a direct "copying" - of intuitive concepts, and that we must take the problem seriously: is our reconstruction adequate?

The genetic method clears up one intuition in terms of another one, i.e. it works relatively. The axiomatic method, conversely, works "absolutely": among the commonly acknowledged assertions about the objects of a theory some subset is selected, and the assertions of this subset are called axioms, i.e. they are acknowledged as true without proof. All other assertions of the theory must be proved by using the axioms. These proofs may contain intuitive moments, yet these must be "more evident" than the ideas presented in the axioms. The most famous applications of the axiomatic method are: Euclid's axioms, Hilbert's axioms for Euclidean geometry, Peano's axioms for the arithmetic of natural numbers, and the Zermelo-Fraenkel axioms for set theory.

The axiomatic method (as well as the genetic method) yields only a reconstruction of intuitive concepts. The problem of adequacy can be reduced here to the question: are all essential properties of the intuitive concepts represented in the axioms? From this point of view the most complicated situation appears when axioms are used to rescue a theory which has "lost its way" in paradoxes. The Zermelo-Fraenkel axioms were developed exactly in such a situation - after paradoxes had appeared in the intuitive set theory. Here, the problem of adequacy is very complicated: has all the positive content of the theory been saved?

What criteria can be set for the adequacy of a reconstruction? Let us remember the various definitions of the real number concept in terms of rational numbers, presented in the 1870s simultaneously by R. Dedekind, G. Cantor and some others. Why do we regard these reconstructions as satisfactory? And how can the adequacy of a reconstruction be justified at all, when the original concept remains hidden in the intuition, and every attempt to get it out is itself a reconstruction with the same problem of adequacy? The only realistic answer is to take into account only those aspects of the intuitive concepts that can be recognized in the practice of mathematical reasoning. This means, first, that all properties of real numbers acknowledged before as "evident" must be proved on the basis of the reconstructed concept. Secondly, all intuitively proven theorems of the Calculus must be proved by means of the reconstructed concept. If this is done, it means that all those aspects of the intuitive concept of real number that have managed to appear in mathematical practice are explicitly represented in the reconstructed concept. Still, maybe some "hidden" aspects of the intuitive real number concept have not yet appeared in practice? And they will appear in the future? At first glance, it seems hard to dispute such a proposition.

However, let us suppose that this is the case, and that in 2102 somebody will prove a new theorem of the Calculus by using a property of real numbers never before used in mathematical reasoning. Will all the other mathematicians then agree immediately that this property was already "intended" in 2002? At the very least, it will be impossible to verify this proposition: none of the mathematicians living today will survive for 100 years.

Presuming that intuitive mathematical concepts can possess some "hidden" properties that do not appear in practice for a long time, we fall into the usual mathematical Platonism (i.e. we assume that the "world" of mathematical objects exists independently of mathematical reasoning).

Still, let us consider Freiling's Axiom of Symmetry (1986, see http://www.faqs.org/faqs/sci-math-faq/AC/ContinuumHyp). Let A be the set of functions mapping real numbers into countable sets of real numbers. Given a function f in A and some arbitrary real numbers x and y, we see that x is in f(y) with probability 0, i.e. x is not in f(y) with probability 1. Similarly, y is not in f(x) with probability 1. Freiling's axiom AX states: "for every f in A, there exist x and y such that x is not in f(y) and y is not in f(x)". The intuitive justification: we can find such x and y by choosing them at random. In ZFC, AX is equivalent to "not CH"; hence, by the independence of CH, neither AX nor "not AX" can be derived from the axioms of ZFC. Do you think AX is a counter-example to my previous thesis? I.e., does AX reveal a "hidden" property of the real line that did not appear in mathematical practice until 1986?
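
In modern notation, AX reads:

$$\mathrm{AX}: \quad \forall f \in A \;\; \exists x \, \exists y \;\; \bigl( x \notin f(y) \;\wedge\; y \notin f(x) \bigr).$$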

Christopher F. Freiling. "Axioms of Symmetry: Throwing Darts at the Real Line", Journal of Symbolic Logic, Vol. 51, 1986, pp. 190-200. (See also: Devlin's Angle, June 2001, and http://arxiv.org/pdf/math.AG/0209244 by Yuri Manin.)

Some intuitive concepts admit several different, yet nevertheless equivalent explicit reconstructions. In this way a very important additional evidence of adequacy can be obtained. Let us remember, again, the various definitions of real numbers in terms of rational numbers. Cantor's definition was based upon convergent sequences of rational numbers. Dedekind defined real numbers as "cuts" in the set of rational numbers. One more definition can be obtained by using (infinite) decimal fractions. All these definitions are provably equivalent. We cannot strictly prove the equivalence of an intuitive concept and its reconstruction, yet we can prove - or disprove - the equivalence of two explicit reconstructions.
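
For example (a standard illustration), the same real number sqrt(2) appears under Dedekind's reconstruction as the cut

$$\{\, q \in \mathbb{Q} : q \le 0 \ \vee\ q^2 < 2 \,\},$$

and under Cantor's reconstruction as (the equivalence class of) the convergent sequence of rational numbers 1, 1.4, 1.41, 1.414, ... The equivalence proof, roughly, matches each cut with the class of all rational sequences converging to it.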

Another striking example is the reconstruction of the intuitive notion of computability (the concept of algorithm). Since the 1930s several very different explicit reconstructions of this notion have been proposed: recursive functions, "Turing machines" by A. M. Turing, the lambda calculus by A. Church, canonical systems by E. Post, normal algorithms by A. A. Markov, etc. And here, too, the equivalence of all the reconstructions was proved. The equivalence of different reconstructions of the same intuitive concept means that the extension of the reconstructed explicit concept is not accidental. This is a very important additional argument for replacing the intuitive concept by an explicit reconstruction.
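
For readers who prefer to see such a reconstruction "in the flesh", here is a minimal sketch of a Turing machine simulator in Python (the representation of a program as a Python dictionary, and the toy machine flip, are my own illustrative choices, not any of the historical formulations):

# A minimal sketch: a Turing machine simulator. A "program" is a
# dictionary mapping (state, scanned symbol) to
# (symbol to write, head move "L"/"R", next state);
# the machine halts when no transition applies.
def run_turing_machine(program, input_word, state="start", blank="_"):
    tape = dict(enumerate(input_word))  # sparse tape: position -> symbol
    pos = 0
    while (state, tape.get(pos, blank)) in program:
        symbol, move, state = program[(state, tape.get(pos, blank))]
        tape[pos] = symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A toy machine (an illustrative assumption): flips every bit, then halts.
flip = {("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start")}
print(run_turing_machine(flip, "10110"))  # prints "01001"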

The trend to replace intuitive concepts by their more or less explicit reconstructions appears very definitely in the history of mathematics. Normally, intuitive theories cannot develop without such reconstructions: the definiteness of the intuitive basic principles becomes insufficient as the complexity of concepts and methods grows. In most situations the reconstruction can be performed by the genetic method, yet to reconstruct the most fundamental mathematical concepts the axiomatic method must be used (fundamental concepts are called fundamental just because they cannot be reduced to other concepts).

Goedel's incompleteness theorem has provoked much talk about the insufficiency of the axiomatic method for a true reconstruction of the "live, informal" mathematical thinking. Some people say that axioms are not able to cover "all the treasures of the informal mathematics". Of course, this is once again the usual mathematical Platonism, converted into a methodological one (for a detailed analysis see Podnieks [1981, 1992], or Section 6.1).

Does the "axiomatic reasoning" differ in principle from the informal mathematical reasoning? Do there exist proofs in mathematics obtained by not following the pattern "premises - conclusion"? If not and every mathematical reasoning process can be reduced to a chain of conclusions, we may ask: are these conclusions going on by some definite rules that do not change from one situation to another? And, if these rules are definite, can they (being a function of human brains) be such that a complete explicit formulation is impossible? Today, if we cannot formulate some "rules" explicitly, then how could we demonstrate that they are definite?

Therefore, it is nonsense to speak about the limited applicability of axiomatization: the limits of axiomatization coincide with the limits of mathematics itself! Goedel's incompleteness theorem is an argument against Platonism, not against formalism! Goedel's theorem demonstrates that no advanced, stable, self-contained fantastic "world of ideas" can be perfect: any such world leads us either to contradictions or to undecidable problems.

In the process of evolution of mathematical theories, axioms and intuition interact with each other. Axioms "clear up" the intuition when it has lost its way. Still, axiomatization also has some unpleasant consequences: many steps of intuitive reasoning, expressed by a specialist very compactly, become very long and tedious in an axiomatic theory. Therefore, after replacing an intuitive theory by an axiomatic one (this replacement may be non-equivalent because of defects discovered in the intuitive theory), specialists learn a new intuition, and thus they restore the creative potency of their theory. Let us remember the history of the axiomatization of set theory. In the 1890s contradictions were discovered in Cantor's intuitive set theory, and they were removed by means of axiomatization. Of course, the axiomatic Zermelo-Fraenkel set theory differs from Cantor's intuitive theory not only in its form, but also in some aspects of its contents. For this reason specialists have developed new, modified intuitions (for example, the intuition of sets and proper classes) that allow them to work in the new theory efficiently. Today, again, people are proving serious theorems of set theory intuitively.

What are the main benefits of axiomatization? First, as we have seen, axioms allow us to correct intuition: to remove the inaccuracies, ambiguities and paradoxes that sometimes arise due to the insufficient controllability of intuitive processes.

Secondly, axiomatization allows a detailed analysis of the relations between the basic principles of a theory (to establish their dependence or independence, etc.), and between the principles and the theorems (to prove some theorem, only a part of the axioms may be necessary). Such investigations may lead to general theories that can be applied to several more specific theories (let us remember the theory of groups).

Thirdly, sometimes, after the axiomatization, we can establish that the theory under consideration is not able to solve some naturally arising problems (let us recall the continuum problem in set theory). In such situations we may try to improve the axioms of the theory, even by developing several alternative theories.

 

4. Formal Theories

How far can we proceed with the axiomatization of a theory? Is the complete elimination of intuition, i.e. a full reduction to a list of axioms and rules of inference, possible? The work of Gottlob Frege, Bertrand Russell, David Hilbert and their colleagues showed how this can be achieved even for the most complicated mathematical theories. All these theories can be reduced to axioms and rules of inference without any admixture of intuition. The logical techniques developed by these people allow us today to completely axiomatize any theory based on a stable, self-consistent system of principles (i.e. any mathematical theory).

What do such 100% axiomatic theories look like? They are called formal theories (also formal systems or deductive systems), the term underlining that no step of reasoning can be done without a reference to an exactly formulated list of axioms and rules of inference. Even the most "self-evident" logical principles (like "if A implies B, and B implies C, then A implies C") must be either formulated explicitly in the list of axioms and rules, or derived from it.

The exact definition of "formal" can be given in terms of the theory of algorithms (or recursive functions): a theory T is called a formal theory iff an algorithm (i.e. a mechanically applicable computation procedure) is presented for checking the correctness of reasoning via the principles of T. This means that when somebody is going to publish a "mathematical text", calling it "a proof of a theorem in T", we must be able to check mechanically whether the text in question really is a proof according to the standards of reasoning accepted in T. Thus, in a formal theory, the standards of reasoning must be defined precisely enough to enable the checking of proofs by means of a computer program. (Note that we are discussing here the checking of ready proofs, and not the problem of whether some proposition is provable or not.)

As an unpractical example of a formal theory, let us consider the game of chess; let us call this "theory" CHESS. All possible positions on a chessboard (plus the flag "white to move" or "black to move") we shall call propositions of CHESS. The only axiom of CHESS will be the initial position, and the rules of inference - the rules of the game. The rules allow us to pass from one proposition of CHESS to some other ones. Starting with the axiom, we obtain in this way theorems of CHESS. Thus, theorems of CHESS are all the possible positions that can be obtained from the initial position by moving chessmen according to the rules of the game.

Exercise 1.1. Could you provide an unprovable proposition of CHESS?

Why is CHESS called a formal theory? When somebody offers a "mathematical text" P as a proof of a theorem A in CHESS, it means that P is a record of some chess game stopped in the position A. And, of course, checking the correctness of such a "proof" is an easy task. The rules of the game are formulated precisely enough - we could write a computer program that would execute the task.

Exercise 1.2. Try estimating the size of this program in some programming language.
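
For illustration, here is a minimal sketch of such a proof checker in Python. It relies on the third-party python-chess library (installable as "pip install chess") to supply the rules of the game; the encoding of a "proof" as a list of moves in algebraic notation is my own simplification:

# A sketch of a proof checker for the formal theory CHESS.
# A "proof" is a record of a game; checking it means replaying the
# moves and verifying that every step follows the rules of inference.
import chess

def check_chess_proof(moves):
    board = chess.Board()          # the only axiom: the initial position
    for move in moves:
        try:
            board.push_san(move)   # the rules of inference: legal moves
        except ValueError:
            return None            # an illegal step: not a proof
    return board.fen()             # the theorem proved by this text

# Example: a short "proof" of the Scholar's Mate position.
print(check_chess_proof(["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7#"]))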

Our second example of a formal theory is only a bit more serious. It was proposed by P. Lorenzen; let us call it the theory L. Propositions of L are all possible "words" made of the letters a, b, for example: a, aa, aba, abaab. The only axiom of L is the word a, and L has two rules of inference:

 X            X
----        -----
 Xb          aXa

This means that (in L) from a proposition X we can immediately infer the propositions Xb and aXa. For example, the proposition aababb is a theorem of L:

a |- ab |- aaba |- aabab |- aababb
(by rule 1, rule 2, rule 1, rule 1, respectively)

This fact is usually expressed as L |- aababb ("L proves aababb").

Exercise 1.3. a) Describe an algorithm determining whether a proposition of L is a theorem or not.

b) Could you imagine such an algorithm for the theory CHESS? Of course, you can, yet... Thus you see that even having a relatively simple algorithm for checking the correctness of proofs, the provability problem can turn out to be a very complicated one.

A very important property of formal theories is stated in the following exercise.

Exercise 1.4 (for smart students). Show that the set of all theorems of a formal theory is computably enumerable (similar terms: effectively enumerable, recursively enumerable).

This means that for any formal theory a computer program can be written that will print on an (endless) paper tape all theorems of this theory (and nothing else). Unfortunately, such a program cannot solve the problem that mathematicians are mainly interested in: is a given proposition provable or not? When, sitting at the computer, we see our proposition printed, we know that it is provable. Still, until that moment we cannot know whether the proposition will be printed some time later or will not be printed at all.
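
As a minimal sketch of such an enumeration program, specialized to the theory L introduced above (the breadth-first strategy and the limit parameter are my illustrative choices):

# Prints theorems of Lorenzen's theory L (and nothing else),
# generating them from the axiom "a" by the two rules of inference.
from collections import deque

def enumerate_L_theorems(limit=20):
    printed = set()
    queue = deque(["a"])                # the only axiom of L
    while queue and len(printed) < limit:
        x = queue.popleft()
        if x in printed:
            continue
        print(x)                        # every theorem appears eventually
        printed.add(x)
        queue.append(x + "b")           # rule 1: from X infer Xb
        queue.append("a" + x + "a")     # rule 2: from X infer aXa

enumerate_L_theorems()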

T is called a solvable theory (or a computably solvable theory) iff an algorithm (a mechanically applicable computation procedure) is presented for checking whether a proposition is provable from the principles of T or not. In Exercise 1.3a you proved that L is a solvable theory. Still, in Exercise 1.3b you established that it is hard to say whether CHESS is a "feasibly solvable" theory or not. Checking the correctness of proofs is always much simpler than checking provability. It can be proved that most mathematical theories are unsolvable, the elementary (first order) arithmetic of natural numbers and set theory included (see, for example, Mendelson [1997]).

Normally, mathematical theories contain the negation symbol "not". In such theories, solving the problem stated in a proposition A means proving either A or notA. We can try to solve the problem by using the enumeration program of Exercise 1.4: let us sit at the computer and wait until A or notA gets printed. If both A and notA were printed, it would mean that T is an inconsistent theory (i.e. by using the principles of T one can prove some proposition and its negation). In total, we have here 4 possibilities:

a) A will be printed, but notA will not (then the problem A has a positive solution),

b) notA will be printed, but A will not (then the problem A has a negative solution),

c) both A and notA will be printed (then T is an inconsistent theory),

d) neither A nor notA will be printed.

In case d) we will sit forever at the computer, yet nothing interesting will happen: by using the principles of T one can neither prove nor disprove the proposition A, and for this reason such a theory is called an incomplete theory. Goedel's incompleteness theorem says that most mathematical theories are incomplete (see Mendelson [1997]).

Exercise 1.5 (for smart students). Show that any complete formal theory is solvable.

 

5. Hilbert's Program

At the beginning of the XX century the honor of mathematics was seriously questioned: contradictions had been detected in set theory. Until that time set theory had been widely acknowledged as a natural foundation and a very important tool of mathematics. In order to save the honor of mathematics, David Hilbert proposed his famous program of "perestroika" in the foundations of mathematics:

a) Convert all of existing (mainly intuitive) mathematics into a formal theory (including a new variant of set theory, cleared of paradoxes).

b) Prove the consistency of this formal theory (i.e. prove that no proposition can be both proved and disproved in it).

Solving task (a) meant simply completing the axiomatization of mathematics. This process had proceeded successfully throughout the XIX century: formal definitions of the notions of function, continuity and real number, the axiomatization of arithmetic, of geometry, etc.

Task (b) - contrary to (a) - was a great novelty: an attempt to obtain an absolute consistency proof of mathematics. Hilbert was the first to realize that a complete solution of task (a) enables one to set task (b). Indeed, if we do not have a complete solution of (a), i.e. if we are staying partly within intuitive mathematics, then we cannot discuss absolute proofs of consistency. We may hope to establish a contradiction in an intuitive theory, i.e. to prove some proposition and its negation simultaneously. Still, we cannot hope to prove the consistency of such a theory: consistency is an assertion about the set of all theorems of the theory, i.e. about a set of which, in the case of an intuitive theory, we have no explicit definition.

Still, if a formal theory replaces the intuitive one, the situation changes. The set of all theorems of a formal theory is an explicitly defined object. Let us remember our examples of formal theories. The set of all theorems of CHESS is (theoretically) finite, yet from a practical point of view it may well be regarded as infinite. Nevertheless, one can easily prove the following assertion about all theorems of CHESS:

In a theorem of CHESS one cannot have 10 white queens simultaneously.

Indeed, in the axiom of CHESS we have 1 white queen and 8 white pawns, and by the rules of the game only white pawns can be converted into white queens. The rest of the proof is arithmetical: 1+8<10. Thus we have selected some specific properties of the axiom and inference rules of CHESS that imply our general assertion about all theorems of CHESS.

With the theory L we have similar opportunities. One can prove, for example, the following assertion about all theorems of L: if X is a theorem, then aaX also is a theorem.

Indeed, if X is the axiom (X=a), then L |- aaX by rule 2. Further, if L |- aaX for some X, then we have the same for X'=Xb and X''=aXa:

aaX |- aa(Xb)   (by rule 1),        aaX |- aa(aXa)   (by rule 2)

Thus, by induction, our assertion is proved for all theorems of L.

Hence, if the set of theorems is defined precisely enough, one can prove general assertions about all theorems. Hilbert thought that consistency assertions would not be an exception. Roughly, he hoped to select those specific properties of the axiom system of the entire mathematics that make the deduction of contradictions impossible.

Let us remember, however, that the set of all theorems is here infinite, and therefore consistency cannot be verified empirically. We may only hope to establish it by means of some theoretical proof. For example, we proved our assertion:

L |- X -> L |- aaX

by using the induction principle. What kind of theory must be used to prove the consistency of the entire mathematics? Clearly, the means of reasoning used to prove the consistency of some theory T must be more reliable than the means used in T itself. How could we rely on a consistency proof in which suspicious means were used? Still, if a theory T contains the entire mathematics, then we (mathematicians) cannot know any means of reasoning outside of T. Hence, to prove the consistency of such a universal theory T, we must use means from T itself - from the most reliable part of them.

There are two different levels of "reliability" in mathematics:

1) Arithmetical ("discrete") reasoning - only natural numbers (or similar discrete objects) are used;

2) Set-theoretic reasoning - Cantor's concept of arbitrary infinite sets is used.

The first level is regarded as reliable (few people would question it), and the second one as still suspicious (Cantor's set theory was cleared of contradictions, still...). Of course, Hilbert's intention was to prove the consistency of mathematics by means of (a subset of) the first level.

As soon as Hilbert announced the initial versions of his project in a series of papers and lectures between 1900 and 1905, Henri Poincare expressed serious doubts about its feasibility (see Poincare [1908], Volume II, Chapter IV). He pointed out that, in proving the consistency of mathematics by means of the induction principle (the main tool of the first level), Hilbert would use a circular argument: the consistency of mathematics includes the consistency of the induction principle ... to be proved by means of the induction principle! At that time few people could realize the real significance of this criticism (Brouwer [1912] was one of the few exceptions). Still, in 1930, Kurt Goedel proved that Poincare was right: an absolute consistency proof of essential parts of mathematics is impossible! (For details see Section 5.4.)

 

6. Some Replies to Critics

1. I do not believe that the natural number system is an inborn property of the human mind. I think that it was developed from human practice with collections of discrete objects. Therefore, both the properties of discrete collections from human practice and the structure of the human mind influence the particular form of our present natural number system. If so, how long was the development process of this system, and when was it finished? I think that the process ended in the VI century BC, when the first results were obtained about the natural number system as a whole (the theorem about the infinity of primes). In human practice only relatively small sets can appear (and, following modern cosmology, we believe that only a finite number of particles can be found in the Universe). Hence, results about the "infinity of natural numbers" can be obtained only in a theoretical model. If we believe that general results about natural numbers can be obtained by means of pure reasoning, without any additional experimental practice, it means that we are convinced that our theoretical model is stable, self-contained and (sufficiently) complete.

2. (See Sections 5.4, 6.5 and Appendix 2 for details.) The development process of mathematical concepts does not yield a continuous spectrum of concepts, but a relatively small number of different concepts (models, theories). Thus, considering the history of the natural number concept, we see only two different stages. Both stages can be described by corresponding formal theories:

 - Stage 1 (the VI century BC - the 1870s) can be described by first order arithmetic,

 - Stage 2 (the 1870s - today) can be described by the arithmetic of ZFC.

I think that the natural number concept of the Greeks corresponds to first order arithmetic, and that this concept remained unchanged up to the 1870s. I believe that the Greeks would accept any proof from the so-called elementary number theory of today. Cantor's invention of "arbitrary infinite sets" (in particular, of "the set of all sets of natural numbers", i.e. P(w)) added new features to the old ("elementary") concept. For example, the well-known Extended Ramsey Theorem became provable. Thus a new model (Stage 2) replaced the model of Stage 1, and it remains principally unchanged up to this day.

Finally, let us consider the history of geometry. The invention of non-Euclidean geometry cannot be treated as a "further development" of the old Euclidean geometry. Still, Euclidean geometry remains unchanged up to this day, and we can still prove new theorems by using Euclid's axioms. Non-Euclidean geometry appeared as a new theory, different from the Euclidean one, and it also remains unchanged up to this day.

Therefore, I think, I can retain my definition of mathematics as the investigation of stable self-contained models that can be treated, just because they are stable and self-contained, independently of any experimental data.

3. I do not criticize Platonism as the philosophy (and psychology) of working mathematicians. On the contrary, as a creative method, Platonism is extremely efficient in this field. The Platonist approach to the "objects" of investigation is a necessary aspect of the mathematical method. Indeed, how can one investigate a stable self-contained model effectively, if not by thinking of it in a Platonist way (as the "last reality", without any experimental "world" behind it)?

4. By which means do we judge theories? My criterion is pragmatic (in the worst sense of the word). If contradictions have been established in a theory, then any new theory will be good enough in which the main theorems of the old theory (yet not its contradictions) can be proved. In this sense, for example, ZFC is "better" than Cantor's original set theory.

On the other hand, if undecidable problems appear in a theory (as the continuum problem appeared in ZFC), then any extension of the theory will be good enough in which some of these problems can be solved in a positive or a negative way. Of course, the simple postulation of the desired positive or negative solutions leads, as a rule, to uninteresting theories (such as ZFC+GCH). We must search for more powerful hypotheses, such as, for example, "V=L" (the axiom of constructibility), or AD (the axiom of determinacy). The theories ZF+"V=L" and ZF+AD contradict each other, yet both appear very interesting, and many people are making beautiful investigations in each of them.

If some people are satisfied neither with "V=L" nor with AD, they can suggest any other powerful hypothesis having rich and interesting consequences. I do not believe that any convergence to some unique (the "only right") system of set theory can be expected here.

5. Mathematicians are not in agreement about the ways of proving theorems, yet their opinions do not form a continuous spectrum. The existing few variations of these views can be classified; each of them can be described by means of a suitable formal theory. Thus they all can be recognized as "right", and we can peacefully investigate their consequences.

6. I think that the genetic and axiomatic methods are used in mathematics not as heuristics, and not to prove theorems. These methods are used to clarify intuitive concepts that have become insufficiently precise, so that, for this reason, investigations cannot be continued normally.

The most striking application of the genetic method is, I think, the definition of continuous functions in terms of epsilon-delta. The old concept of continuous functions (the one of the XVIII century) was purely intuitive and extremely vague, so that one could not prove theorems about it. For example, the well-known theorem about the zeros of a function f continuous on [a, b] with f(a)<0 and f(b)>0 was believed to be "obvious". It was also believed that every continuous function is almost everywhere differentiable (except at some isolated "break points"). The latter assertion could not even be stated precisely. To enable further development of the theory, a reconstruction of the intuitive concept in more explicit terms was necessary. Cauchy did this in terms of epsilon-delta. With such a precise definition, the "obvious" theorem about the zeros of the above function f already needs a serious proof. And it was proved. Weierstrass's construction of a continuous function (in the sense of the new definition) that is nowhere differentiable showed unexpectedly that the volumes of the old (intuitive) and the new (more explicit) concepts are somewhat different. Nevertheless, it was decided that the new concept is "better", and for this reason it replaced the old intuitive concept of continuous functions.
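
In modern notation, this reconstruction reads: a function f is continuous at a point a iff

$$\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \;\; \bigl( |x-a| < \delta \;\rightarrow\; |f(x)-f(a)| < \varepsilon \bigr).$$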

In a similar way the genetic method has been used many times in the past. The so-called "arithmetization of the Calculus" (the definition of real numbers in terms of natural numbers) is also an application of the genetic method.

7. Our usual metatheory used for the investigation of formal theories (to prove Goedel's theorem, etc.) is the theory of algorithms (i.e. of recursive functions). It is, of course, only a theoretical model, giving us a somewhat deformed picture of how real mathematical theories function. Perhaps "sub-recursive mathematics" will provide a more adequate picture of the real processes (see, for example, Parikh [1971]).

 
