
In this book, constructive logic is used as a synonym of intuitionistic logic!


4. Completeness Theorems (Model Theory)

  • 4.1. Interpretations
  • 4.2. Classical propositional logic - truth tables
  • 4.3. Classical predicate logic - Goedel's completeness theorem
  • 4.4. Constructive propositional logic - Kripke semantics
  • 4.5. Constructive predicate logic - Kripke semantics

    4.1. Interpretations

    Let us recall the beginning part of Section 1.2:

    The vision behind the notion of first order languages is centered on the so-called "domain" - a collection of "objects" that you wish to "describe" by using the language. Thus, the first kind of language elements you will need are variables:

    x, y, z, x1, y1, z1, ...

    The above-mentioned "domain" is the intended "range" of all these variables.

    ...

    The next possibility we may wish to have in our language are the so-called constant letters - symbols denoting some specific "objects" of our "domain".

    ...

    In some languages we may need also the so-called function letters - symbols denoting specific functions, i.e. mappings between "objects" of our "domain", or operations on these objects.

    ...

    The last kind of primitives we need in our language are the so-called predicate letters - symbols denoting specific properties of, or relations between, "objects" of our "domain".

    ...

    Thus, the specification of a first order language includes the following primitives:

    1) A countable set of variable names (you could generate these names, for example, by using a single letter "x": x, xx, xxx, xxxx, ...).

    2) An empty, finite, or countable set of constant letters.

    3) An empty, finite, or countable set of function letters. To each function letter a fixed argument number must be assigned.

    4) A finite, or countable set of predicate letters. To each predicate letter a fixed argument number must be assigned.

    Different sets of primitives yield different first order languages.

    By using the language primitives, we can build terms, atomic formulas and (compound) formulas.

    ...

    ... we can define the notion of formula of our language as follows:

    a) Atomic formulas are formulas.

    b) If B and C are formulas, then (B->C), (B&C), (BvC), and (~B) also are formulas.

    c) If B is a formula, and x is a variable, then (AxB) and (ExB) also are formulas.

    ...

    All really working formal mathematical theories are formulated as the so-called first order theories.

    Each first order theory T includes:

    a) a specific first order language L(T),

    b) logical axioms and rules of inference for this language (the classical, constructive or minimal version may be adopted),

    c) a set of specific non-logical axioms of T.

    End of quotes from Section 1.2.

    In principle, to do the so-called pure mathematics, i.e. simply to prove theorems, we need only axioms and rules of inference.

    But what about "applying" our formal mathematical theories? In the natural sciences we are used to thinking of theories as "theories of something" - classical mechanics, relativistic mechanics, general relativity theory, quantum mechanics, thermodynamics, electrodynamics, cosmology etc.

    We will not attempt to investigate applications of mathematical theories to real-world structures (with the delicate problems of observation, measurement etc. involved). Instead, let us investigate applications of first order theories to mathematical structures - the so-called model theory.

    Model theory is a very specific approach to the investigation of formal theories as mathematical objects. Model theory uses (up to) the full power of set theory: it is the investigation of formal theories by using set theory as a metatheory.

    The main structures of model theory are non-constructive (see below) interpretations. Let L be a first order language containing constant letters c1, ..., ck, ... , function letters f1, ..., fm, ..., and predicate letters p1, ..., pn, .... An interpretation J of the language L consists of the following objects:

    a) a non-empty set DJ - the domain of interpretation (it will serve as the range of variables),

    b) a mapping intJ that assigns:

    - to each constant letter ci - a member intJ(ci) of the domain DJ (thus, constants "denote" particular objects in DJ),

    - to each function letter fi - a function intJ(fi) from DJ x ... x DJ into DJ (of course, intJ(fi) has the same number of arguments as fi),

    - to each predicate letter pi - a predicate intJ(pi) on DJ, i.e. a subset of DJ x ... x DJ (of course, intJ(pi) has the same number of arguments as pi).

    As an example, let us consider the so-called standard interpretation S of the first order (Peano) arithmetic PA:

    a) The domain is DS = {0, 1, 2, ...}.

    b) The mapping intS assigns: to the constant 0 - the number 0, to the constant 1 - the number 1, to the function letter "+" - the function x+y (addition of natural numbers), to the function letter "*" - the function x*y (multiplication of natural numbers), to the predicate letter "=" - the predicate x=y (equality of natural numbers).

    Having an interpretation J of the language L, we can define the notion of true formula (more precisely - the notion of formulas that are true under the interpretation J).

    As the first step, terms of the language L are interpreted as members of DJ or functions over DJ. Indeed, terms are defined as constant letters, or variable letters, or their combinations by means of function letters. The term ci is interpreted as the member intJ(ci) of DJ. The variable xi is interpreted as the function Xi(xi) = xi. And, if t = fi(t1, ..., tq), then intJ(t) is defined as the function obtained by substituting the functions intJ(t1), ..., intJ(tq) into the function intJ(fi).

    For example (first order arithmetic), the standard interpretation of the term (x+y+1)*(x+y+1) is the function (x+y+1)^2.

    Important - non-constructivity! Note that, for an infinite domain DJ, the interpretations of function letters may be non-computable functions.

    As the next step, the notion of true atomic formulas is defined. Of course, if a formula contains variables (as, for example, the formula x+y=1), then its "truth-value" must be defined for each combination of values of these variables. Thus, to obtain the truth-value of the formula pi(t1, ..., tq) for some fixed values of the variables contained in t1, ..., tq, we must first "compute" the values of these terms, and then substitute these values into the predicate intJ(pi).

    For example (first order arithmetic), under the standard interpretation S, the formula x+y=1 will be true, iff either x takes the value 0, and y takes the value 1, or x takes the value 1, and y takes the value 0. Otherwise, it is false.
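    To make these definitions a bit more tangible, here is a small Python sketch (my own illustration, not part of the book) of an interpretation as a data structure, together with the "computation" of terms and atomic formulas just described. The data layout and function names are chosen ad hoc, and the domain is cut off at a finite piece of {0, 1, 2, ...}, precisely because the full standard interpretation cannot be stored in a computer.

    # A sketch of an interpretation J = (DJ, intJ); here a finite cut-off of the
    # standard interpretation S of first order arithmetic. Terms are nested
    # tuples: a variable name, a constant letter, or (f, t1, ..., tq).
    S = {
        "domain": range(0, 100),                     # a finite piece of {0, 1, 2, ...}
        "constants": {"0": 0, "1": 1},
        "functions": {"+": lambda a, b: a + b, "*": lambda a, b: a * b},
        "predicates": {"=": lambda a, b: a == b},
    }

    def eval_term(term, interp, assignment):
        """The value of a term in DJ, for fixed values of its variables."""
        if isinstance(term, str):
            if term in interp["constants"]:
                return interp["constants"][term]     # intJ(ci)
            return assignment[term]                  # a variable takes its assigned value
        f = term[0]
        args = [eval_term(t, interp, assignment) for t in term[1:]]
        return interp["functions"][f](*args)         # substitute into intJ(f)

    def eval_atomic(p, terms, interp, assignment):
        """Truth-value of p(t1, ..., tq): compute the terms, then apply intJ(p)."""
        return interp["predicates"][p](*[eval_term(t, interp, assignment) for t in terms])

    # x+y=1 is true for x=0, y=1, and false for x=1, y=1:
    print(eval_atomic("=", [("+", "x", "y"), "1"], S, {"x": 0, "y": 1}))   # True
    print(eval_atomic("=", [("+", "x", "y"), "1"], S, {"x": 1, "y": 1}))   # False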

    Important - non-constructivity! Note that, for an infinite domain DJ, the interpretations of predicate letters may be non-computable predicates.

    And finally, we "define" the notion of true compound formulas of the language L under the interpretation J (of course, for a fixed combination of values of their free variables):

    a) Truth-values of the formulas ~B, B&C, BvC and B->C must be computed from the truth-values of B and C (by using the classical truth tables from Section 4.2 below).

    b) The formula AxB is true, iff B(c) is true for all members c of the domain DJ.

    c) The formula ExB is true, iff there is a member c of the domain DJ such that B(c) is true.

    For example (first order arithmetic), the formula Ey((x=y+y) v (x=y+y+1)) says that "each number is even or odd". Under the standard interpretation S, this formula is true for all values of its free variable x. Similarly, AxAy(x+y=y+x) is a closed formula that is true under S.

    Important - non-constructivity! It may seem that, under an interpretation, any closed formula is "either true or false". However, note that, for an infinite domain DJ, the notion of true formulas is extremely non-constructive: to establish, for example, the truth-value of the formula AxB, or of the formula AxAy(x+y=y+x), we must verify the truth of B(c) for infinitely many values of c (or of a+b=b+a for infinitely many values of a and b). Of course, this verification cannot be performed on a computer. It can only (sometimes) be proved... in some other theory. The "degree of constructivity" of formulas such as AxEyC(x,y), AxEyAzD(x,y,z) etc. is even lower...

    Empty Domains?

    Do you think we should also consider empty domains of interpretation DJ? According to the axiom L13: (B->B)->Ex(B->B), hence Ex(B->B) is provable. In an empty domain, this formula would be false. Thus, to cover the empty domain, we would be forced to re-consider the axioms and/or the traditional meaning of Ex - see (c) above.

    Let us say that a formula of the language L is always true under the interpretation J, iff this formula is true for all combinations of values of its free variables.

    Logically Valid Formulas

    Some formulas are always true under all interpretations, for example:

    (B->C) & (C->D) -> (B->D),

    F(x) -> ExF(x),

    AxF(x) -> F(x),

    Ax(F(x)->G(x)) -> (AxF(x)->AxG(x)),

    Ax(F(x)->G(x)) -> (ExF(x)->ExG(x)).

    Such formulas are called logically valid. More precisely, in a first order language L, a formula is called logically valid, iff it is true in all interpretations of the language L for all values of its free variables.

    Thus, a logically valid formula is true independently of its "meaning" - of the interpretations of the constants, functions and predicates used in it. But note that the (classical!) interpretations of propositional connectives and quantifiers remain fixed.

    The notion of logically valid formulas is doubly non-constructive in the sense that the universal quantifier "for all interpretations" is added to the (already) non-constructive definition of a (simply) true formula.

    As an example, let us verify that the axiom L12: AxF(x)->F(t) is logically valid. Let us assume that, under some interpretation J, for some values of its free variables, L12 is false. According to the classical truth tables, this could be the case only if AxF(x) were true, and F(t) were false (under the interpretation J, for the same above-mentioned values of free variables). Let us "compute" the value of the term t for these values of free variables (since the substitution F(x/t) is admissible, t may contain only these variables - recall the Exercise 1.2.5), and denote it by c. Thus, F(c) is false. But AxF(x) is true, hence, F(a) is true for all a in the domain DJ, i.e. F(c) also is true. Contradiction. Hence, L12 is true under all interpretations for all combinations of values of its free variables (if any).

    Exercise 4.1.1. Verify that the above 5 formulas are logically valid. (Hint: follow the above example - assume that there is an interpretation J such that the formula under question is false for some values of its free variables, and derive a contradiction.)

    To conclude that some formula is not logically valid, we must build an interpretation J such that the formula under question is false for some values of its free variables.

    As an example, let us verify that the formula Ax(p(x) v q(x)) -> Ax p(x) v Ax q(x) is not logically valid (p, q are predicate letters). Why is it not? Because the truth-values of p(x) and q(x) may behave in such a way that p(x) v q(x) is always true, but neither Ax p(x) nor Ax q(x) is true. Indeed, let us take the domain D = {0, 1}, and set

    p(0) = true, q(0) = false, p(1)=false, q(1)=true.

    In this interpretation, p(0) v q(0) = true, p(1) v q(1) = true, i.e. Ax(p(x) v q(x)) is true. But Ax p(x), Ax q(x) both are false. Hence, in this interpretation, Ax p(x) v Ax q(x) is false, and Ax(p(x) v q(x)) -> Ax p(x) v Ax q(x) is false. We have built an interpretation, making the formula under question false. Q.E.D.
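    The finite counterexample above can also be checked mechanically. A minimal Python sketch (mine, not the book's) of this two-element interpretation:

    # The two-element interpretation built above: D = {0, 1},
    # p(0)=true, q(0)=false, p(1)=false, q(1)=true.
    D = [0, 1]
    p = {0: True, 1: False}
    q = {0: False, 1: True}

    premise     = all(p[x] or q[x] for x in D)                   # Ax(p(x) v q(x))
    conclusion  = all(p[x] for x in D) or all(q[x] for x in D)   # Ax p(x) v Ax q(x)
    implication = (not premise) or conclusion                    # classical "->"

    print(premise, conclusion, implication)                      # True False False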

    Exercise 4.1.2. Verify that the following formulas are not logically valid (p, q, r are predicate letters):

    a) p(x, y) & p(y, z) -> p(x, z),

    b) q(x)->Ax q(x),

    c) (Ax q(x)->Ax r(x))->Ax(q(x)->r(x)),

    c1) Ex(p(x)->B)->(Ex p(x)->B), where B does not contain x,

    d) AxEy p(x, y)->EyAx p(x, y),

    e) Ex q(x) & Ex r(x) -> Ex(q(x)&r(x)),

    f) Ax ~p(x, x) & AxAyAz(p(x, y) & p(y, z) -> p(x, z)) -> AxAy(x=y v p(x, y) v p(y, x)).

    (Hint: follow the above example - use natural numbers or other objects to build an interpretation J such that the formula under question is false for some values of its free variables.)
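    For formulas whose non-validity can be witnessed on a small finite domain, such an interpretation can even be found by exhaustive search. The following Python sketch (my own, under the assumption that a two-element domain suffices) finds a counterexample interpretation for formula d):

    # Search all 16 interpretations of a binary predicate letter p on D = {0, 1}
    # for one that makes AxEy p(x,y) -> EyAx p(x,y) false.
    from itertools import product

    D = [0, 1]
    pairs = [(x, y) for x in D for y in D]

    for bits in product([False, True], repeat=len(pairs)):
        p = dict(zip(pairs, bits))
        premise    = all(any(p[(x, y)] for y in D) for x in D)    # AxEy p(x,y)
        conclusion = any(all(p[(x, y)] for x in D) for y in D)    # EyAx p(x,y)
        if premise and not conclusion:
            print("counterexample:", p)    # prints the first interpretation found
            break

    For example, interpreting p(x, y) as "x=y" on D = {0, 1} is one such counterexample: every x is related to some y, but no single y is related to all x.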

    Upside Down?

    If T is a first order theory, and J is an interpretation of its language, then (traditionally) J is called a model of T, iff J makes true all axioms of T (and hence, all theorems of T - verify that! - or see Lemma 4.2.2 below). To normal people, such a notion of model may seem somewhat strange: in "normal" branches of science, theories serve as a basis for building models of natural phenomena or technical devices. In mathematical logic, we have turned this approach upside down - theories, not phenomena or devices, are "modelled"!

    Two Notes

    1. Specific axioms of a first order theory T are not logically valid formulas - they are not true in all interpretations, they are true only in the models of T. Models of T are a proper subclass of all the possible interpretations. For example, the "obvious" arithmetical axioms such as x+0=x (or theorems such as x+y=y+x) are not logically valid. If we interpreted 0 as the number "two", then x+0 would not be x! Logically valid formulas must be true under all interpretations!

    Exercise 4.1.2X. Verify that x*0=0, x+y=y+x and x+(y+z)=(x+y)+z are not logically valid formulas.

    2. In a sense, logically valid formulas "do not contain information". Indeed, let us consider the formulas x+0=x -> x+0=x, and x+0=0 -> x+0=0. Both are logically valid, but do we know more about zero and addition after reading them? Another example: 2*2=5 -> 2*2=5.

    Satisfiability

    Recall that, in a first order language L, a formula is called logically valid, iff it is true in all interpretations of the language L for all values of its free variables.

    In a first order language L, a formula F is called satisfiable, iff there is an interpretation of the language L such that F is true for some values of its free variables. A set of formulas F1, ..., Fn is called satisfiable, iff the conjunction F1& ...&Fn is satisfiable, i.e. if the formulas F1, ..., Fn can be satisfied simultaneously.

    Exercise 4.1.3. Verify that: a) F is satisfiable, iff ~F is not logically valid, and b) F is logically valid, iff ~F is unsatisfiable.

    Exercise 4.1.4. a) Verify that the formula AxAy(p(x)->p(y)) is always true in all one-element interpretations (i.e. when the interpretation domain consists of a single element), but is false in at least one two-element interpretation (p is a predicate letter).

    b) Verify that the formula AxAyAz[(p(x)<->p(y))v(q(y)<->q(z))v(r(z)<->r(x))] is always true in all one- and two-element interpretations, but is false in at least one three-element interpretation (p, q, r are predicate letters).

    c) Prove that the formula ExAy F(x,y) is logically valid, iff so is the formula Ex F(x, g(x)), where g is a function letter that does not appear in F.

    d) Prove that the formula AxAyEz F(x,y,z) is satisfiable, iff so is the formula AxAy F(x, y, h(x,y)), where h is a function letter that does not appear in F.

    4.2. Classical Propositional Logic - Truth Tables

    Emil Leon Post (1897-1954). "... Post's Ph.D. thesis, in which he proved the completeness and consistency of the propositional calculus described in the Principia Mathematica by introducing the truth table method. He then generalised his truth table method, which was based on the two values "true" and "false", to a method which had an arbitrary finite number of truth-values... In the 1920s Post proved results similar to those which Gödel, Church and Turing discovered later, but he did not publish them. The reason he did not publish was because he felt that a 'complete analysis' was necessary to gain acceptance." (According to MacTutor History of Mathematics archive).

    First, let us investigate possible interpretations of the propositional connectives - conjunction, disjunction, implication and negation. Do you think there is only one way to do this?

    The simplest possible approach to the interpretation of propositional connectives assumes that if we know the "truth-values" of the operands, then we must be able to compute the "truth-value" of the expression. Thus we arrive at the so-called truth tables (in fact, the classical truth tables).

    If B is false, and C is false, then B&C is false.
    If B is false, and C is true, then B&C is false.
    If B is true, and C is false, then B&C is false.
    If B is true, and C is true, then B&C is true.

    B C B&C
    0 0 0
    0 1 0
    1 0 0
    1 1 1

    If B is false, and C is false, then BvC is false.
    If B is false, and C is true, then BvC is true.
    If B is true, and C is false, then BvC is true.
    If B is true, and C is true, then BvC is true.

    B C BvC
    0 0 0
    0 1 1
    1 0 1
    1 1 1

    If B is false, then ~B is true.
    If B is true, then ~B is false.

    B ~B
    0 1
    1 0

    No problems so far.

    If B is false, and C is false, then B->C is what? False? But, why?
    If B is false, and C is true, then B->C is what? False? But, why?
    If B is true, and C is false, then B->C is false.
    If B is true, and C is true, then B->C is what? Perhaps, not false? Hence, true?

    How to answer the three "what"s? If B is false, then B->C possesses no real meaning. If we already know that B is true, and C is true, then B->C is no longer interesting. But, if a definite "truth-value" for B->C is mandatory in all cases, then we can greatly simplify our logical apparatus by assuming that B->C is always true, except when B is true, and C is false. Thus:

    If B is false, and C is false, then B->C is true.
    If B is false, and C is true, then B->C is true.
    If B is true, and C is false, then B->C is false.
    If B is true, and C is true, then B->C is true.

    B C B->C
    0 0 1
    0 1 1
    1 0 0
    1 1 1

    This definition is equivalent to saying that B->C is true, iff ~(B&~C) is true (or: B->C is false, iff B is true, and C is false).

    Lemma 4.2.1. Under the above classical truth tables, all the classical propositional axioms L1-L11 take only true values.

    Proof. First, let us verify L11 and L10:

    B ~B Bv~B
    0 1 1
    1 0 1

    B C ~B B->C ~B->(B->C)
    0 0 1 1 1
    0 1 1 1 1
    1 0 0 0 1
    1 1 0 1 1

    Exercise 4.2.1. Verify L1-L9.

    Lemma 4.2.2. Under the classical truth tables, if the formulas B and B->C take only true values, then so does C. I.e. from "always true" formulas, Modus Ponens allows deriving only "always true" formulas.

    Proof. Let us assume that, in some situation, C takes a false value. In the same situation, B and B->C take true values. If B is true, and C is false, then B->C is false. Contradiction. Hence, C takes only true values. Q.E.D.

    Note. In the proof of Lemma 4.2.2, only the third row of implication truth table was significant: if B is true, and C is false, then B->C is false!

    Theorem 4.2.3 ("soundness" of the classical propositional logic). If [L1-L11, MP]: |- F, then, under the classical truth tables, F takes only true values.

    Proof. By induction, from Lemmas 4.2.1 and 4.2.2.

    How about the converse statement of Theorem 4.2.3: if, under the classical truth tables, formula F takes only true values, then [L1-L11, MP]: |- F? I.e., are our axioms powerful enough to prove any formula that takes only true values? The answer is "yes":

    Theorem 4.2.4 (completeness of the classical propositional logic). Assume, the formula F has been built of formulas B1, B2, ..., Bn by using propositional connectives only. If, under the classical truth tables, for any (possible and impossible) truth-values of B1, B2, ..., Bn, formula F takes only true values, then:

    a) in the constructive logic, [L1-L10, MP]: B1v~B1, B2v~B2, ..., Bnv~Bn |- F,

    b) in the classical logic, [L1-L11, MP]: |- F.

    Thus, the classical propositional axioms [L1-L11, MP] are, in a sense, "complete" - within the range of their "responsibility" they allow proving of any formula that takes only true values.

    Of course, (b) follows from (a) immediately - all the premises B1v~B1, B2v~B2, ..., Bnv~Bn are instances of the axiom L11.

    Note. Assume, the formula F is built of formulas B1, B2, ..., Bn by using propositional connectives only. If, under the classical truth tables, for any (possible and impossible) truth-values of B1, B2, ..., Bn, formula F takes only true values, then F is called a tautology. Theorem 4.2.4 says that any tautology can be proved in the classical propositional logic.

    Completeness of the classical propositional logic - Theorem 4.2.4(b) was first proved by Emil L.Post in his 1920 Ph.D. thesis:

    E.Post. Introduction to a general theory of elementary propositions. Amer. journ. math., 1921, vol. 43, pp.163-185

    Following an elegant later idea by Laszlo Kalmar, we will need two lemmas before proving this theorem.

    L.Kalmar. Ueber die Axiomatisierbarkeit des Aussagenkalkuels. Acta scientiarum mathematicarum (Szeged), 1934-35, vol. 7, pp. 222-243.

    Lemma 4.2.5. In the constructive logic, one can "compute" the classical truth-values of ~B, B->C, B&C, BvC in the following sense:

    Negation:
    [ ]: ~B |- ~B
    [L1, L2, L9, MP]: B |- ~~B

    Implication:
    [L10, MP]: ~B, ~C |- B->C
    [L10, MP]: ~B, C |- B->C
    [L1, L2, L9, MP]: B, ~C |- ~(B->C)
    [L1, MP]: B, C |- B->C

    Conjunction:
    [L1, L2, L3, L9, MP]: ~B, ~C |- ~(B&C)
    [L1, L2, L3, L9, MP]: ~B, C |- ~(B&C)
    [L1, L2, L4, L9, MP]: B, ~C |- ~(B&C)
    [L5, MP]: B, C |- B&C

    Disjunction:
    [L1-L9, MP]: ~B, ~C |- ~(BvC)
    [L7, MP]: ~B, C |- BvC
    [L6, MP]: B, ~C |- BvC
    [L6, MP]: B, C |- BvC

    Note. Thus, to "compute" the classical truth-values, L11 is not necessary!

    Proof.

    ~B |- ~B

    Immediately, in any logic.

    B |- ~~B

    By Theorem 2.4.4. [L1, L2, L9, MP]: |- A->~~A.

    ~B, C |- B->C
    ~B, ~C |- B->C

    By axiom L10: ~B->(B->C) we obtain ~B |- B->C. This covers both cases.

    B, ~C |- ~(B->C)

    This is exactly Theorem 2.4.1(c) [L1, L2, L9, MP].

    B, C |- B->C

    By axiom L1: C->(B->C) we obtain C |- B->C.

    ~B, ~C |- ~(B&C)
    ~B, C |- ~(B&C)

    By axiom L3: B&C->B and the Contraposition Law (Theorem 2.4.2) [L1, L2, L9, MP]: |- (A->B)->(~B->~A) we obtain |- ~B->~(B&C), or ~B |- ~(B&C). This covers both cases.

    B, ~C |- ~(B&C)

    By axiom L4: B&C->C and the Contraposition Law (Theorem 2.4.2) [L1, L2, L9, MP]: |- (A->B)->(~B->~A) we obtain |- ~C->~(B&C), or ~C |- ~(B&C).

    B, C |- B&C

    By axiom L5: B->(C->B&C) we obtain B, C |- B&C.

    ~B, ~C |- ~(BvC)

    By Theorem 2.4.10(b).

    ~B, C |- BvC

    By axiom L7: C->BvC we obtain C |- BvC.

    B, ~C |- BvC
    B, C |- BvC

    By axiom L6: B->BvC we obtain B |- BvC. This covers both cases.

    Q.E.D.

    As the next step, we will generalize Lemma 4.2.5 by showing how to "compute" the truth-values of an arbitrary formula F, which is built of formulas B1, B2, ..., Bn by using more than one propositional connective. For example, let us take the formula BvC->B&C:

    B C BvC B&C BvC->B&C
    0 0 0 0 1
    0 1 1 0 0
    1 0 1 0 0
    1 1 1 1 1

    We will show that, in the constructive logic [L1-L10, MP]:

    ~B, ~C |- BvC->B&C,
    ~B, C |- ~(BvC->B&C),
    B, ~C |- ~(BvC->B&C),
    B, C |- BvC->B&C.

    Lemma 4.2.6. Assume, the formula F has been built of formulas B1, B2, ..., Bn by using propositional connectives only. Assume that, if the formulas B1, B2, ..., Bn take the truth-values v1, v2, ..., vn respectively, then, for these values, formula F takes the truth-value w. Then, in the constructive logic, we can "compute" the truth-value of F in the following sense:

    [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- wF,

    where: wF denotes F, if w is true, and ~F, if w is false, and viBi denotes Bi, if vi is true, and ~Bi, if vi is false.

    Proof. By induction.

    Induction base. F is one of the formulas Bi. Then w=vi, and, of course, in any logic, viBi |- wF.

    Induction step.

    Note that Lemma 4.2.5 represents the assertion of Lemma 4.2.6 for formulas built of B1, B2, ..., Bn by using a single propositional connective.

    1. F is ~G. By induction assumption, [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- w'G, where w' represents the truth-value of G. By Lemma 4.2.5, [L1-L10, MP]: w'G |- wF, hence, [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- wF.

    2. F is G o H, where o is implication, conjunction, or disjunction. By induction assumption, [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- w'G, where w' represents the truth-value of G, and [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- w''H, where w'' represents the truth-value of H. By Lemma 4.2.5, [L1-L10, MP]: w'G, w''H |- wF, hence, [L1-L10, MP]: v1B1, v2B2, ..., vnBn |- wF.

    Q.E.D.

    Proof of Theorem 4.2.4(a). By Lemma 4.2.6:

    [L1-L10, MP]: B1, v2B2, ..., vnBn |- F,
    [L1-L10, MP]: ~B1, v2B2, ..., vnBn |- F,

    because F takes only true values. By [L1, L2, MP] Deduction Theorem 1,

    [L1-L10, MP]: v2B2, ..., vnBn |- B1->F,
    [L1-L10, MP]: v2B2, ..., vnBn |- ~B1->F.

    Let us merge these two proofs and append an instance of the axiom L8:

    |- (B1->F)->((~B1->F)->(B1v~B1->F)).

    Hence, by MP:

    [L1-L10, MP]: v2B2, ..., vnBn |- B1v~B1->F,

    and

    [L1-L10, MP]: B1v~B1, v2B2, ..., vnBn |- F.

    By repeating this operation we obtain Theorem 4.2.4(a):

    [L1-L10, MP]: B1v~B1, B2v~B2, ..., Bnv~Bn |- F.

    Q.E.D.

    From now on, we could forget our ability to prove formulas in the classical propositional logic, learned in Section 2. In order to verify whether a formula is provable in [L1-L11, MP] or not, we can simply check whether, under the classical truth tables, this formula takes only true values. Is this checking really simpler than proving formulas in [L1-L11, MP]? Of course, if your formula contains 2 or 3 atoms (such as (A->B)->~AvB, or the axiom L2), then its "truth table" consists of 4 or 8 rows - for most people this is a feasible task. But the "truth table" for a formula containing 32 atoms will contain more than a billion rows... Let us try to invent a more efficient algorithm? It seems we will never succeed - see Section 6.1.
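    The brute-force checking procedure just described is easy to program. Here is a minimal sketch (my own, not the book's code), with formulas written as nested tuples over atom names:

    from itertools import product

    def atoms(f):
        """The set of atom names occurring in the formula f."""
        if isinstance(f, str):
            return {f}
        return set().union(*(atoms(sub) for sub in f[1:]))

    def value(f, v):
        """Classical truth-value of f under the assignment v (a dict atom -> bool)."""
        if isinstance(f, str):
            return v[f]
        op = f[0]
        if op == "~":  return not value(f[1], v)
        if op == "&":  return value(f[1], v) and value(f[2], v)
        if op == "v":  return value(f[1], v) or value(f[2], v)
        if op == "->": return (not value(f[1], v)) or value(f[2], v)
        raise ValueError("unknown connective: " + op)

    def takes_only_true_values(f):
        """True iff f is a tautology, i.e. true in every row of its truth table."""
        names = sorted(atoms(f))
        return all(value(f, dict(zip(names, row)))
                   for row in product([False, True], repeat=len(names)))

    # (A->B)->~AvB, mentioned above, takes only true values; BvC->B&C does not.
    print(takes_only_true_values(("->", ("->", "A", "B"), ("v", ("~", "A"), "B"))))   # True
    print(takes_only_true_values(("->", ("v", "B", "C"), ("&", "B", "C"))))           # False

    The running time of takes_only_true_values grows as 2^n in the number n of atoms - exactly the billion-row problem mentioned above.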

     

    4.3. Classical Predicate Logic - Goedel's Completeness Theorem

    Kurt Gödel (1906-1978) "He is best known for his proof of Gödel's Incompleteness Theorems. In 1931 he published these results in Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. ...Gödel's results were a landmark in 20th-century mathematics, showing that mathematics is not a finished object, as had been believed. It also implies that a computer can never be programmed to answer all mathematical questions." (According to MacTutor History of Mathematics archive).

    When David Hilbert and Wilhelm Ackermann published in

    D.Hilbert, W.Ackermann. Grundzuege der theoretischen Logik. Berlin (Springer), 1928,

    in a sense, the "final" version of the axioms of the classical logic, they observed: "Whether the system of axioms is complete at least in the sense that all the logical formulas which are correct for each domain of individuals can actually be derived from them, is still an unsolved question." (quoted after

    S.C.Kleene. The Work of Kurt Goedel. "The Journal of Symbolic Logic", December 1976, Vol.41, N4, pp.761-778).

    Indeed, as we will verify below, a) all axioms of the classical logic (L1-L11, L12-L15) are logically valid, b) the inference rules MP, Gen allow proving (from logically valid formulas) only logically valid formulas. Hence, in this way only logically valid formulas can be proved. Still, is our list of logical axioms complete in the sense that all logically valid formulas can be proved? - the question asked by Hilbert and Ackermann in 1928. The answer is "yes" - as Kurt Goedel established in 1929, in his doctoral dissertation "Ueber die Vollstaendigkeit des Logikkalkuels" (visit Goedel's Archive in the Princeton University Library). The corresponding paper appeared in 1930:

    K.Goedel. Die Vollstaendigkeit der Axiome des logischen Funktionenkalkuels. "Monatshefte fuer Mathematik und Physik", 1930, Vol.37, pp.349-360.

    Goedel's Completeness Theorem. In any first order language, a formula is logically valid, iff it can be proved by using the classical logic [L1-L11, L12-L15, MP, Gen].

    First, let us prove the easy part (sometimes called the soundness theorem) - that all the formulas that can be proved by using the classical logic [L1-L11, L12-L15, MP, Gen] are logically valid.

    Lemma 4.3.1. All axioms of the classical logic (L1-L11, L12-L15) are logically valid.

    Proof.

    1) Under the classical truth tables, the propositional axioms L1-L11 take only true values (Lemma 4.2.1). Hence, these axioms are true under all interpretations.

    2a) L12: AxF(x)->F(t), where F is any formula, and t is a term such that the substitution F(x/t) is admissible.

    Let us assume that, under some interpretation J, for some values of its free variables, L12 is false. According to the classical truth tables, this could be the case only if AxF(x) were true, and F(t) were false (under the interpretation J, for the same above-mentioned values of free variables). Let us "compute" the value of the term t for these values of free variables (since the substitution F(x/t) is admissible, t may contain only these variables - recall the Exercise 1.2.5), and denote it by c. Thus, F(c) is false. But AxF(x) is true, hence, F(a) is true for all a in the domain DJ, i.e. F(c) also is true. Contradiction. Hence, L12 is true under all interpretations for all combinations of values of its free variables.

    2b) L13: F(t)->ExF(x), where F is any formula, and t is a term such that the substitution F(x/t) is admissible.

    Similarly, see Exercise 4.3.1.

    2c) L14: Ax(G->F(x))->(G->AxF(x)), where F is any formula, and G is a formula that does not contain x as a free variable.

    Let us assume that, under some interpretation J, for some values of its free variables, L14 is false. According to the classical truth tables, this could be the case only if Ax(G->F(x)) were true, and G->AxF(x) were false (under the interpretation J, for the same above-mentioned values of free variables).

    If Ax(G->F(x)) is true, then G->F(c) is true for all c in DJ. Since G does not contain x, this means that if G is true, then F(c) is true for all c in DJ.

    On the other hand, if G->AxF(x) is false, then G is true, and AxF(x) is false. And finally, if AxF(x) is false, then F(c) is false for some c in the domain DJ. But, as we established above, if G is true, then F(c) is true for all c in DJ. Contradiction. Hence, under all interpretations, L14 is true for all combinations of values of its free variables.

    2d) L15: Ax(F(x)->G)->(ExF(x)->G), where F is any formula, and G is a formula that does not contain x as a free variable.

    Similarly, see Exercise 4.3.1.

    Q.E.D.

    Exercise 4.3.1. Verify that the axioms L13 and L15 are logically valid.

    Lemma 4.3.2. From logically valid formulas, the inference rules MP and Gen allow deriving only logically valid formulas.

    Proof.

    1. Modus Ponens. Assume, B and B->C are logically valid formulas. By MP, we derive C. Assume, C is not logically valid, i.e., under some interpretation J, for some values of its free variables, C is false. Under this interpretation J, for these values of free variables of C, the formulas B and B->C are true. Then, according to the classical truth tables, C also must be true. Contradiction. Hence, C is logically valid.

    2. Generalization. Assume, F(x) is logically valid, but AxF(x) is not, i.e., under some interpretation J, for some values of its free variables, AxF(x) is false. Hence, under this interpretation J, for these values of free variables of AxF(x), there is c in DJ such that F(c) is false. But F(x) is logically valid, i.e. F(c) is true. Contradiction. Hence, AxF(x) is logically valid.

    Q.E.D.

    Corollary 4.3.3. All the formulas that can be proved by using the classical logic [L1-L11, L12-L15, MP, Gen] are logically valid.

    Proof. Immediately, by Lemmas 4.3.1 and 4.3.2.

    Of course, this (the so-called soundness theorem) is the easy half of the completeness theorem. To complete the proof, we must prove the converse: if some formula is logically valid, then it can be proved by using the classical logic [L1-L11, L12-L15, MP, Gen].

    Goedel's initial very complicated proof from 1929 was greatly simplified in 1947, when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem. The result was published in 1949:

    L.Henkin. The completeness of the first-order functional calculus. "J. Symbolic Logic", 1949, vol.14, pp.159-166.

    Henkin's proof was simplified by Gisbert Hasenjaeger in 1953:

    G.Hasenjaeger. Eine Bemerkung zu Henkin's Beweis fuer die Vollstaendigkeit des Praedikatenkalkuels der ersten Stufe. "J. Symbolic Logic", 1953, vol.18, pp.42-48.

    Henkin's Model Existence Theorem. If a first order classical formal theory is consistent (in the sense that, by using the classical logic, it does not prove contradictions), then there is a finite or countable model of this theory (i.e. an interpretation with a finite or countable domain, under which all axioms and theorems of the theory are always true).

    This theorem solved a serious mental problem of anti-formalists. They thought that mere consistency of a theory (in the syntactic sense of the word - as the lack of contradictions) is not sufficient to regard it as meaningful - as a "theory of something". The Model Existence Theorem says that (syntactic!) consistency of a theory is sufficient: if a theory does not contain contradictions, then it is a "theory of something" - it describes at least some kind of "mathematical reality". You may think that Euclidean geometry is "meaningless" - because it does not describe 100% correctly the spatial properties of the Universe. But it's your problem, not Euclid's - use another theory.

    Let us assume the Model Existence Theorem (we will prove it later in this section).

    Proof of the Completeness Theorem

    Let us assume that some formula F in some first order language L is logically valid, yet it cannot be proved by using axioms and rules of the classical logic. Let us consider the theory T in the language L which contains (besides the logical axioms) only one non-logical axiom - the negation of F, i.e. the formula ~Ax1...AxnF, where x1, ..., xn are exactly all the free variables of F (if F contains free variables x1, ..., xn, then, to negate its assertion, we must add the quantifiers Ax1...Axn). Since F cannot be derived from the logical axioms, T is a consistent theory.

    Indeed, if T would be inconsistent, i.e. we could prove in T some formula C and its negation ~C, then we would have proofs of [L1-L11, L12-L15, MP, Gen] ~Ax1...AxnF |- C, and ~Ax1...AxnF |- ~C. Since ~Ax1...AxnF is a closed formula, by Deduction Theorem 2, [L1-L11, L12-L15, MP, Gen] |- ~Ax1...AxnF ->C, and |- ~Ax1...AxnF ->~C. Now, by axiom L9: (B->C)->(B->~C)->~B, we obtain that [L1-L11, L12-L15, MP, Gen] |- ~~Ax1...AxnF. By the (classical) Double Negation Law, this implies [L1-L11, L12-L15, MP, Gen] |- Ax1...AxnF, and by axiom L12: AxB(x)->B(x) - [L1-L11, L12-L15, MP, Gen] |- F. But, by our assumption, F cannot be proved in [L1-L11, L12-L15, MP, Gen]. Hence, T is a consistent theory.

    Now, by the Model Existence Theorem, there is a model of T, i.e. an interpretation J that makes all its axioms always true. Under this interpretation, the formula ~Ax1...AxnF (as an axiom of T) is always true. On the other hand, since F is logically valid, it is always true under all interpretations, in particular under the interpretation J, and hence so is Ax1...AxnF. Thus the formulas Ax1...AxnF and ~Ax1...AxnF both are always true under J. This is impossible, hence, F must be provable from the logical axioms. Q.E.D.

    1. Such a simple proof seems almost impossible! We are proving that the logical axioms and rules of inference are strong enough, but where do these axioms come in? They come in - in the proof of the Model Existence Theorem. This theorem says that if some formal theory T does not have models, then the logical axioms and rules of inference are strong enough to derive a contradiction from the axioms of T. But the proof of the Model Existence Theorem that we will consider below is positive, not negative!

    Exercise 4.3.2. a) Explain, how logical axioms are used in this (positive!) proof.

    b) (Thanks to Sune Foldager for the idea.) Prove the following version of Goedel's Completeness Theorem: if T is a first order formal theory, and F is a formula that is true in every interpretation M in which all the AXIOMS of T are true, then T proves F (i.e. F is true in all models of T, iff F is a theorem of T).

    2. The above simple proof seems extremely non-constructive! "If F is logically valid, then it can be proved in [L1-L11, L12-L15, MP, Gen]". How could we obtain this proof? Still, how do we know that F is logically valid? Only if we had a constructive procedure verifying that F is logically valid could we ask for an algorithm converting such procedures into proofs in [L1-L11, L12-L15, MP, Gen]!

    Lemma 4.3.4. In any first order language the set of all logically valid formulas is effectively denumerable. I.e. given a language L, we can write a computer program that (working ad infinitum) prints out all logically valid formulas of L.

    Proof. Immediately from Exercise 1.1.4.

    This makes Goedel's completeness theorem very significant: it shows that the "doubly non-constructive" notion of logically valid formula is at least 50% constructive!

    Still, unfortunately, this notion is not 100% constructive. In 1936, Alonzo Church proved that at least some first order languages do not allow an algorithm for determining whether a given formula is logically valid or not (i.e. an algorithm solving the famous Entscheidungsproblem - the decision problem):

    A.Church. A note on the Entscheidungsproblem. "Journal of Symb. Logic", 1936, vol.1, pp.40-41

    After this, Laszlo Kalmar proved that, if a first order language contains at least one binary predicate letter, then it does not allow an algorithm for determining whether a given formula is logically valid or not. Thus, none of the serious first order languages allows such an algorithm (the languages of PA and ZF included):

    L.Kalmar. Die Zurueckfuehrung des Entscheidungsproblems auf den Fall von Formeln mit einer einzigen, binaeren Funktionsvariablen. "Compositio Math.", 1936, Vol.4, pp.137-144

    For details, see Section 6.x, or Mendelson [1997]. Sometimes, this fact (the 50% constructiveness of the notion of logical validity) is expressed as follows: the logical validity of first order formulas is semi-decidable.
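    The semi-decidability can be pictured as a program sketch (my own; `is_proof_of` stands for a proof checker for [L1-L11, L12-L15, MP, Gen], and `all_texts` for the enumeration of all finite texts of the language - both are assumed here, not implemented):

    def is_logically_valid(formula, is_proof_of, all_texts):
        """Halts with True if the formula is provable (= logically valid, by
        Goedel's completeness theorem); runs forever if it is not."""
        for candidate in all_texts():              # enumerate candidate proofs one by one
            if is_proof_of(candidate, formula):
                return True
        # for a non-valid formula the loop simply never terminates

    This is the whole content of "semi-decidable": a "yes" answer is always reached after finitely many steps, but there is no general way of returning "no".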

    Exercise 4.3.3. (For smart students). Prove Henkin's Model Existence Theorem by using the following smart ideas due to L.Henkin and G.Hasenjaeger. Let T be a consistent theory. We must build a model of T. What kind of "bricks" should we use for this "building"? Idea #1: let us use language constant letters! So, let us add to the language of T an infinite set of new constant letters d1, d2, d3, ... (and adopt the corresponding additional instances of logical axioms). Prove that this extended theory T0 is consistent. The model we are building must contain all "objects" whose existence can be proved in T0. Idea #2: for each formula F of T0 having exactly one free variable (for example, x) let us add to the theory T0 the axiom ExF(x)->F(di), where the constant di is unique for each F. If T0 proves ExF(x), then this constant di will represent in our model the "object" x having the property F. Prove that this extended theory T1 is consistent. Idea #3: prove the (non-constructive) Lindenbaum's lemma: the axiom set of any consistent theory can be extended in such a way that the extended theory is consistent and complete (the axiom set of the extended theory may be not effectively solvable). After this, extend T1 to a consistent complete theory T2. Idea #4: let us take as the domain of the interpretation M the set of all those terms of T0 that do not contain variables. And let us interpret a function letter f as the "syntactic constructor function" f', i.e. let us define the value f'(t1, ..., tn) simply as the character string "f(t1, ..., tn)". Finally, let us interpret a predicate letter p as the relation p' such that p'(t1, ..., tn) is true in M, iff T2 proves p(t1, ..., tn). To complete the proof, prove that an arbitrary formula G is always true in M, iff T2 proves G. Hence, all theorems of the initial theory T are always true in M.

    Adolf Lindenbaum, 1904-1941? - logician, collaborator and friend of Alfred Tarski (according to Matematici famosi). "No one knows what happened to Lindenbaum after his arrest by the Gestapo." (Marcel Guillaume, Mathematical Reviews, 2001c:03006).

    Lindenbaum's Lemma. Any consistent first order theory can be extended to a consistent complete theory. More precisely, if T is a consistent first order theory, then, in the language of T, there is a set A of closed formulas such that T+A is a consistent complete theory. (In general, T+A is not a formal theory in the sense of Section 1.1, see below.)

    Note. By T+A we denote the first order theory in the language of T, obtained from T by adding the formulas of the set A as non-logical axioms.

    Exercise 4.3.4. Verify that, in any first order language L, only countably many formulas can be generated. I.e. produce an algorithm for printing out a sequence F0, F1, F2, ... containing all the formulas of L.

    Proof of Lindenbaum's Lemma (Attention: non-constructive reasoning!)

    Let us use the algorithm of the Exercise 4.3.4 printing out the sequence F0, F1, F2, ... of all formulas in the language of T, and let us run through this sequence, processing only those formulas Fi that are closed.

    At the very beginning, the set of new axioms A0 is empty.

    At the step i, we already have some set Ai-1 of new axioms. If the formula Fi is not closed, let us ignore it, and set Ai=Ai-1. Now, let us suppose that Fi is a closed formula. If T+Ai-1 proves Fi, or T+Ai-1 proves ~Fi, then we can ignore this formula, and set Ai=Ai-1. If T+Ai-1 proves neither Fi nor ~Fi, then let us simply add Fi (or ~Fi, if you like it better) to our set of new axioms, i.e. set Ai=Ai-1U{Fi}.

    Etc., ad infinitum. As the result of this process we obtain a set of closed formulas A = A0UA1UA2U... UAiU....
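    The construction can be summarized by the following schematic Python sketch (my own). It is not an algorithm: `proves(axioms, formula)` is an oracle for provability from the given axioms, which in general is not computable, `all_formulas` is the enumeration of Exercise 4.3.4, `is_closed` and `negation` are assumed helpers, and the loop runs ad infinitum:

    def lindenbaum_extension(T_axioms, proves, all_formulas, is_closed, negation):
        A = []                                       # the growing set A0, A1, A2, ...
        for F in all_formulas():                     # F0, F1, F2, ...
            if not is_closed(F):
                continue                             # ignore formulas with free variables
            if proves(T_axioms + A, F) or proves(T_axioms + A, negation(F)):
                continue                             # Fi is already decided: Ai = Ai-1
            A.append(F)                              # otherwise adopt Fi as a new axiom
            yield F                                  # report each new axiom of A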

    Let us prove that T+A is a consistent complete theory.

    Consistency. If T+A would be inconsistent, we would have a proof of [T+A] |- C&~C for some formula C. If, in this proof, no axioms from the set A would be used, we would have a proof of [T] |- C&~C, i.e. T would be inconsistent.

    Otherwise, the proof of [T+A] |- C&~C could contain a finite number of axioms B1, ..., Bk from the set A. Let us arrange these axioms in the sequence, as we added them to the set A. Thus we have a proof of [T]: B1, ..., Bk |- C&~C. Let us recall Theorem 2.4.1(a):

    If A1, A2, ..., An, B |- C&~C, then A1, A2, ..., An |- ~B.

    Hence, we have a proof of [T]: B1, ..., Bk-1 |- ~Bk. But this is impossible - we added Bk to the set A just because, at the corresponding step, T+Ai-1 could prove neither Bk nor ~Bk. Q.E.D.

    Completeness. We must verify that, for any closed formula F in the language of T, either T+A |- F, or T+A |- ~F. Let us assume, this is not the case for some closed formula F. Of course, F appears in the above sequence F0, F1, F2, ... as some Fi. If neither T+A |- F, nor T+A |- ~F, then neither T+Ai-1 |- Fi, nor T+Ai-1 |- ~Fi. In such a situation we would add F to the set A, hence, we would have T+A |- F. Q.E.D.

    This completes the proof of Lindenbaum's Lemma.

    Attention: non-constructive reasoning! T+A is a somewhat strange theory, because, in general, we do not have an effective decision procedure for its axiom set. Indeed, to decide whether some closed formula F is an axiom of T+A or not, we must identify F in the sequence F0, F1, F2, ... as some Fi, and after this, we must verify whether T+Ai-1 proves Fi, or T+Ai-1 proves ~Fi, or none of these. Thus, in general, T+A is not a formal theory in the sense of Section 1.1.

    Proof of the Model Existence Theorem (Attention: non-constructive reasoning!)

    Inspired by the beautiful exposition in Mendelson [1997].

    Step 1. We must build a model of T. What kind of "bricks" should we use for this "building"? Idea #1: let us use language constant letters! So, in order to prepare enough "bricks", let us add to the language of T a countable set of new constant letters d1, d2, d3, ... (and extend the definitions of terms, atomic formulas and formulas accordingly, and add new instances of logical axioms accordingly). Let us prove that, if T is consistent, then this extended theory T0 also is consistent.

    If T0 would be inconsistent, then, for some formula C, we could obtain a proof of [T0]: |- C&~C. If, in this proof, constant letters from the set {d1, d2, d3, ...} would not appear at all, then, in fact, we would have a proof of [T]: |- C&~C, i.e. we could conclude that T is inconsistent. But what if the new constant letters do appear in the proof of [T0]: |- C&~C? Then, let us replace these constant letters by variables of T that do not appear in this proof (this is possible, since each first order language contains a countable set of variable letters). After these substitutions, the proof becomes a valid proof of T, because:

    a) The logical axioms remain valid.

    b) The non-logical axioms of T do not contain the constants d1, d2, d3, ..., i.e. they do not change.

    c) Applications of inference rules MP and Gen remain valid.

    Hence, [T]: |- C'&~C', where the formula C' has been obtained from C by the above substitutions. I.e., if T0 would be inconsistent, then T also would be inconsistent.

    Step 2. The model we are building must contain all "objects" whose existence can be proved in T0. Idea #2: for each formula F of T0 having exactly one free variable (for example, x) let us add to the theory T0 the axiom ExF(x)->F(di), where the constant di is unique for each F. If T0 proves ExF(x), then this di will represent in our model the "object" x having the property F. Let us prove that, if T is consistent, then this extended theory T1 also is consistent. Note that in T1 the same language is used as in T0.

    To implement the Idea #2 correctly, first let us use the algorithm of the Exercise 4.3.4 printing out the sequence F0, F1, F2, ... of all formulas in the language of T0, and let us run through this sequence, processing only those formulas Fi that have exactly one free variable. Let us assign to each such formula Fi a unique constant dc(i) in such a way that dc(i) appears neither in the non-logical axioms of T, nor in Fi, nor in the axioms EyFj(y)->Fj(dc(j)) for all formulas Fj preceding Fi in the sequence F0, F1, F2, .... And, if x is the (only) free variable of Fi, let us adopt ExFi(x)->Fi(dc(i)) as an axiom of T1.

    Now, let us assume that the extended theory T1 is inconsistent, i.e. that, for some formula C of T0, we have a proof of [T1]: |- C&~C. In this proof, only a finite number n of the axioms ExF(x)->F(dc(F)) could be used. If n=0, then we have [T0]: |- C&~C, i.e. T0 is inconsistent.

    If n>0, then let us mark the axiom ExF(x)->F(dc(F)) with F having the largest index in the sequence F0, F1, F2, .... And, in the proof of [T1]: |- C&~C, let us replace the constant dc(F) by some variable y that does not appear in this proof (this is possible, since each first order language contains a countable set of variable letters). After this substitution, the proof remains a valid proof of T1, because:

    a) The logical axioms remain valid.

    b) The non-logical axioms of T do not contain the constant dc(F), i.e. they do not change.

    c) The axiom ExF(x)->F(dc(F)) becomes ExF(x)->F(y). Since F does not contain the constant dc(F), the premise ExF(x) does not change.

    d) The remaining n-1 axioms EyFj(y)->Fj(dc(j)) of T1 do not contain the constant dc(F), i.e. they do not change.

    e) Applications of inference rules MP and Gen remain valid.

    Thus we now have another proof of a contradiction - [T1]: |- C'&~C', where the formula C' has been obtained from C by substituting y for dc(F). Let us recall Theorem 2.4.1(a):

    If A1, A2, ..., An, B |- C&~C, then A1, A2, ..., An |- ~B.

    Let us take the formula ExF(x)->F(y) for B, and C'- for C. Thus, there is a proof of ~(ExF(x)->F(y)), where only logical axioms, non-logical axioms of T, and the remaining n-1 axioms EyFj(y)->Fj(dc(j)) of T1 are used. Let us recall the Exercise 2.6.3(b) [L1-L11, MP]: |- ~(A->B) <-> A&~B. Thus, ~(ExF(x)->F(y)) is equivalent to ExF(x)&~F(y), and, in fact, we have a proof of ExF(x), and a proof of ~F(y). By applying Gen to the second formula, we obtain a proof of Ay~F(y), which is equivalent to ~EyF(y) (indeed, let us recall Section 3.2, Table 3.2, Group IV, constructively, |- Ax~B<->~ExB). By Replacement Theorem 3, ~EyF(y) is equivalent to ~ExF(x). Thus, we have a proof of a contradiction ExF(x)&~ExF(x), where only logical axioms, non-logical axioms of T, and the remaining n-1 axioms EyFj(y)->Fj(dc(j)) of T1 are used.

    Let us repeat the above chain of reasoning another n-1 times to eliminate all occurrences of the axioms ExF(x)->F(dc(F)) from our proof of a contradiction. In this way we obtain a proof of a contradiction in T0, which is impossible (see Step 1). Hence, T1 is a consistent theory.

    Step 3. Idea #3: let us use the (non-constructive!) Lindenbaum's lemma, and extend T1 to a consistent complete theory T2. Note that in T2 the same language is used as in T0.

    Step 4. Let us define an interpretation M of the language of T0, in which all theorems of T2 will be always true. Since all theorems of the initial theory T are theorems of T2, this will complete our proof.

    Idea #4: let us take as the domain DM of the interpretation M the (countable! - verify!) set of all constant terms of T0, i.e. terms that do not contain variables (this set of terms is not empty, it contains at least the countable set of constant letters added in Step 1). And let us define interpretations of constant letters, function letters and predicate letters as follows.

    a) The interpretation of each constant letter c is the letter c itself.

    b) The interpretation of a function letter f is the "syntactic constructor function" f', i.e., if f is an n-ary function letter, and t1, ..., tn are constant terms, then the value f'(t1, ..., tn) is defined simply as the character string "f(t1, ..., tn)" (quotation marks ignored).

    c) The interpretation of a predicate letter p is the relation p' such that, if p is an n-ary predicate letter, and t1, ..., tn are constant terms, then p'(t1, ..., tn) is defined as true in M, iff T2 proves p(t1, ..., tn) (note that T2 is a consistent complete theory, i.e. it proves either p(t1, ..., tn), or ~p(t1, ..., tn), but not both!).
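    Steps 4(a)-4(c) admit a compact sketch in Python (my own illustration; `T2_proves` stands for the - in general non-computable - provability relation of T2):

    def make_constructor(f):
        """intM(f): the "syntactic constructor function" f' - it just builds
        the character string "f(t1, ..., tn)" from the given constant terms."""
        return lambda *terms: f + "(" + ", ".join(terms) + ")"

    def make_predicate(p, T2_proves):
        """intM(p): p'(t1, ..., tn) is true in M, iff T2 proves p(t1, ..., tn)."""
        return lambda *terms: T2_proves(p + "(" + ", ".join(terms) + ")")

    plus = make_constructor("+")
    print(plus("0", plus("1", "1")))     # the domain element "+(0, +(1, 1))"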

    Step 5. To complete the proof, we must prove that, in the language of T0, an arbitrary formula G is always true in M, iff T2 proves G (let us denote this, as usual, by T2 |- G). This will be proved, if we prove that, if x1, ..., xm includes all the free variables contained in the formula G, and t1, ..., tm are constant terms, then G(t1, ..., tm) is true in M, iff T2 |- G(t1, ..., tm). The proof will be by induction.

    Induction base: G is an atomic formula p(s1, ..., sn), where p is a predicate letter, and the terms s1, ..., sn contain some of the variables x1, ..., xm. In s1, ..., sn, let us substitute for x1, ..., xm the terms t1, ..., tm respectively. In this way we obtain constant terms s'1, ..., s'n. Thus G(t1, ..., tm) is simply p(s'1, ..., s'n). By definition (see Step 4), p(s'1, ..., s'n) is true, iff T2 |- p(s'1, ..., s'n), i.e., iff T2 |- G(t1, ..., tm). Q.E.D.

    Induction step.

    Note. Since, T2 is a complete consistent theory, for any closed formula F, T2 proves either F, or ~F (but not both). Hence, if we know that F is true in M, iff T2 |- F, then we can conclude that F is false in M, iff T2 |- ~F. Indeed, if F is false, then F is not true, i.e. T2 does not prove F, i.e. T2 |- ~F. And, if T2 |- ~F, then T2 does not prove F, i.e. F is not true, i.e. F is false. And conversely: if we know that F is false in M, iff T2 |- ~F, then we can conclude that F is true in M, iff T2 |- F. Indeed, if F is true, then ~F is not true, i.e. T2 does not prove ~F, i.e. T2 |- F. And, if T2 |- F, then T2 does not prove ~F, i.e. F is not false, i.e. F is true.

    Case 1: G is ~H. Then, according to the classical truth tables, G(t1, ..., tm) is true in M, iff H(t1, ..., tm) is false in M. By induction assumption, H(t1, ..., tm) is true in M, iff T2 |- H(t1, ..., tm). Then, by the above note, since H(t1, ..., tm) is a closed formula, H(t1, ..., tm) is false in M, iff T2 |- ~H(t1, ..., tm), i.e., iff T2 |- G(t1, ..., tm). Q.E.D.

    Case 2: G is H->K. Then, according to the classical truth tables, G(t1, ..., tm) is false in M, iff H(t1, ..., tm) is true in M, and K(t1, ..., tm) is false in M. By induction assumption, H(t1, ..., tm) is true in M, iff T2 |- H(t1, ..., tm), and K(t1, ..., tm) is true in M, iff T2 |- K(t1, ..., tm). By the above note, K(t1, ..., tm) is false in M, iff T2 |- ~K(t1, ..., tm). Hence,

    G(t1, ..., tm) is false in M, iff T2 |- H(t1, ..., tm), and T2 |- ~K(t1, ..., tm),

    or,

    G(t1, ..., tm) is true in M, iff not (T2 |- H(t1, ..., tm), and T2 |- ~K(t1, ..., tm)).

    Let us recall Theorem 2.2.1: a) [L5, MP] A, B |- A&B, b) [L3, L4, MP]: A&B |- A, A&B |- B, and Exercise 2.6.3(a), [L1-L11, MP]: |- (A->B) <-> ~(A&~B). In T2, all the axioms of the classical logic are adopted, hence (verify!),

    G(t1, ..., tm) is true in M, iff T2 |- H(t1, ..., tm)->K(t1, ..., tm),

    or, G(t1, ..., tm) is true in M, iff T2 |- G(t1, ..., tm). Q.E.D.

    Case 3: G is H&K. Then, according to the classical truth tables, G(t1, ..., tm) is true in M, iff H(t1, ..., tm) is true in M, and K(t1, ..., tm) is true in M. By induction assumption, H(t1, ..., tm) is true in M, iff T2 |- H(t1, ..., tm), and K(t1, ..., tm) is true in M, iff T2 |- K(t1, ..., tm). Let us recall Theorem 2.2.1: a) [L5, MP] A, B |- A&B, b) [L3, L4, MP]: A&B |- A, A&B |- B. In T2, all the axioms of the classical logic are adopted, hence (verify!),

    G(t1, ..., tm) is true in M, iff T2 |- H(t1, ..., tm)&K(t1, ..., tm),

    or, G(t1, ..., tm) is true in M, iff T2 |- G(t1, ..., tm). Q.E.D.

    Case 4: G is HvK. Then, according to the classical truth tables, G(t1, ..., tm) is false in M, iff H(t1, ..., tm) is false in M, and K(t1, ..., tm) is false in M. By induction assumption, and by the above note, H(t1, ..., tm) is false in M, iff T2 |- ~H(t1, ..., tm), and K(t1, ..., tm) is false in M, iff T2 |- ~K(t1, ..., tm). Let us recall Theorem 2.2.1: a) [L5, MP] A, B |- A&B, b) [L3, L4, MP]: A&B |- A, A&B |- B, and Theorem 2.4.10(b): [L1-L10, MP] |- ~(AvB) <-> ~A&~B (the so-called Second de Morgan Law). In T2, all the axioms of the classical logic are adopted, hence (verify!),

    G(t1, ..., tm) is false in M, iff T2 |- ~(H(t1, ..., tm)vK(t1, ..., tm)),

    or, G(t1, ..., tm) is false in M, iff T2 |- ~G(t1, ..., tm). Thus, by the above note, G(t1, ..., tm) is true in M, iff T2 |- G(t1, ..., tm). Q.E.D.

    Case 5: G is ExH. Then, by definition, G(t1, ..., tm) is true in M, iff H(x, t1, ..., tm) is "true for some x", i.e., iff H(t, t1, ..., tm) is true in M for some constant term t in M. By induction assumption, H(t, t1, ..., tm) is true in M, iff T2 |- H(t, t1, ..., tm). Let us recall our above Step 2. Since H(x, t1, ..., tm) is a formula containing exactly one free variable, in T2 we have an axiom ExH(x, t1, ..., tm)->H(cH, t1, ..., tm), where cH is a constant letter.

    First, let us assume that G(t1, ..., tm) is true in M. Then H(t, t1, ..., tm) is true in M for some constant term t in M, hence, T2 |- H(t, t1, ..., tm) for this particular t. Let us recall the axiom L13: F(t)->ExF(x). Since t is a constant term, this axiom is valid for t. We need the following instance of L13: H(t, t1, ..., tm)->ExH(x, t1, ..., tm). In T2, all the axioms of the classical logic are adopted, hence, T2 |- H(t, t1, ..., tm)->ExH(x, t1, ..., tm), and, by MP, T2 |- ExH(x, t1, ..., tm), i.e. T2 |- G(t1, ..., tm). Q.E.D.

    Now, let us assume that T2 |- G(t1, ..., tm), i.e. T2 |- ExH(x, t1, ..., tm). By the above-mentioned axiom, T2 |- ExH(x, t1, ..., tm)->H(cH, t1, ..., tm), where cH is a constant letter. Thus, by MP, T2 |- H(cH, t1, ..., tm). Since cH is a constant term, by induction assumption, if T2 |- H(cH, t1, ..., tm), then H(cH, t1, ..., tm) is true in M. Hence, H(cH, t1, ..., tm) is true in M, i.e. H(x, t1, ..., tm) is true "for some x", i.e. ExH(x, t1, ..., tm) is true in M, i.e. G(t1, ..., tm) is true in M. Q.E.D.

    Case 6: G is AxH. By definition, G(t1, ..., tm) is true in M, iff H(x, t1, ..., tm) is "true for all x", i.e., iff H(t, t1, ..., tm) is true in M for all constant terms t in M. By induction assumption, H(t, t1, ..., tm) is true in M, iff T2 |- H(t, t1, ..., tm).

    Let us prove that G(t1, ..., tm) is false in M, iff T2 |- Ex~H(x, t1, ..., tm).

    First, let us assume that G(t1, ..., tm) is false in M. Then H(t, t1, ..., tm) is false in M for some constant term t in M. By induction assumption, and by the above note, T2 |- ~H(t, t1, ..., tm). As in Case 5, we need the following instance of the axiom L13: ~H(t, t1, ..., tm)->Ex~H(x, t1, ..., tm). In T2, all the axioms of the classical logic are adopted, hence, by MP, T2 |- Ex~H(x, t1, ..., tm).

    Now, let us assume that T2 |- Ex~H(x, t1, ..., tm). As in Case 5, let us recall the axiom Ex~H(x, t1, ..., tm)->~H(c~H, t1, ..., tm), where c~H is a constant letter. Hence, by MP, T2 |- ~H(c~H, t1, ..., tm), and, since T2 is consistent, T2 does not prove H(c~H, t1, ..., tm). Then, by induction assumption, H(c~H, t1, ..., tm) is false in M, i.e. AxH(x, t1, ..., tm) is false in M, i.e. G(t1, ..., tm) is false in M.

    Thus, we know that G(t1, ..., tm) is true in M, iff T2 does not prove Ex~H(x, t1, ..., tm). Since T2 is a complete theory, G(t1, ..., tm) is true in M, iff T2 |- ~Ex~H(x, t1, ..., tm). Now, let us recall from Section 3.2, Table 3.2, Group I, [L1-L11, L12-L15, MP, Gen]: |- ~Ex~B<->AxB. In T2, all the axioms of the classical logic are adopted, hence, T2 |- ~Ex~H(x, t1, ..., tm), iff T2 |- AxH(x, t1, ..., tm), i.e. G(t1, ..., tm) is true in M, iff T2 |- G(t1, ..., tm). Q.E.D.

    This completes the proof of the Model Existence Theorem. Q.E.D.
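    (A purely illustrative aside, not part of the proof.) Cases 1-6 above simply follow the recursive definition of truth in an interpretation (Section 4.1), with "true in M" replaced by "provable in T2". The following minimal Python sketch shows that recursion for a model with a finite domain; the tuple representation of formulas and all names in it are our own assumptions, and it does not reproduce the term model M itself (whose predicate interpretations are, as noted below, not computable).

        # A toy evaluator of formulas over a FINITE domain (our own sketch).
        # Formulas are nested tuples: ('~', F), ('->', F, G), ('&', F, G), ('v', F, G),
        # ('E', x, F), ('A', x, F), or an atomic formula ('p', term, ..., term).

        def is_true(formula, domain, pred, env=None):
            """True iff `formula` holds in the model (domain, pred);
            `pred` maps predicate names to sets of argument tuples,
            `env` maps variable names to domain elements."""
            env = env or {}
            op = formula[0]
            if op == '~':
                return not is_true(formula[1], domain, pred, env)
            if op == '->':
                return (not is_true(formula[1], domain, pred, env)) or \
                       is_true(formula[2], domain, pred, env)
            if op == '&':
                return is_true(formula[1], domain, pred, env) and \
                       is_true(formula[2], domain, pred, env)
            if op == 'v':
                return is_true(formula[1], domain, pred, env) or \
                       is_true(formula[2], domain, pred, env)
            if op == 'E':                      # ExF: true for some element of the domain
                _, x, body = formula
                return any(is_true(body, domain, pred, {**env, x: d}) for d in domain)
            if op == 'A':                      # AxF: true for all elements of the domain
                _, x, body = formula
                return all(is_true(body, domain, pred, {**env, x: d}) for d in domain)
            name, *args = formula              # atomic formula p(t1, ..., tn)
            return tuple(env.get(a, a) for a in args) in pred[name]

        # Example: Ax(p(x) v ~p(x)) is true in a two-element model where p holds of 0 only.
        print(is_true(('A', 'x', ('v', ('p', 'x'), ('~', ('p', 'x')))),
                      [0, 1], {'p': {(0,)}}))   # prints True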

    Attention: non-constructive reasoning! The above construction of the model M seems to be "almost constructive". The domain DM consists of all constant terms from the language of T0. The axiom set of T1 is effectively solvable (verify!). The interpretations of function letters are computable functions (verify!). But what about the interpretations of predicate letters? We interpreted the predicate letter p as the relation p' such that p'(t1, ..., tn) is true, iff T2 proves p(t1, ..., tn). This is, in general, effectively unsolvable (see Section ZZ), even if the axiom set of T2 were solvable! And, in general, the axiom set of the theory T2 (obtained by means of Lindenbaum's Lemma) is not even effectively solvable! Thus, our construction of the model M is essentially non-constructive.

    Exercise 4.3.5. (For smart students) Determine the "degree of non-constructiveness" of the Model Existence Theorem. (Hint: prove that it is DELTA2, i.e. that all the necessary functions are "computable in the limit". A function f(x) is called computable in the limit, iff there is a computable function g(x,y) such that, for all x, f(x) = lim g(x,y) as y->infinity.)
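    (An illustrative aside on the hint, under our own toy assumptions.) A standard way to obtain functions that are computable in the limit is bounded search: if R(x, n) is a computable predicate, then the function f with f(x) = 1, iff EnR(x, n), and f(x) = 0 otherwise, is computable in the limit - take g(x, y) = 1, iff R(x, n) holds for some n<=y. In the sketch below, R is a trivially decidable predicate, so this particular f is even computable outright; the functions arising in the Model Existence Theorem are not computable, but they follow the same limit pattern.

        # Our own toy illustration of "computable in the limit" (not the functions
        # arising in the Model Existence Theorem).

        def R(x, n):
            # A decidable predicate: "n is a nontrivial divisor of x".
            return 1 < n < x and x % n == 0

        def g(x, y):
            # Stage-y approximation of f(x): search for a witness n <= y.
            return 1 if any(R(x, n) for n in range(y + 1)) else 0

        # For each x, g(x, y) stabilizes as y grows; its limit is 1 exactly for composite x.
        for x in (6, 7, 9, 11):
            print(x, [g(x, y) for y in range(0, 12, 3)])   # e.g. 6 [0, 1, 1, 1]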

    Consistency and Satisfiability

    A set of formulas F1, ..., Fn is called inconsistent, iff a contradiction (i.e. a formula B&~B) can be derived from it. For example, the set {B, B->C, C->~B} is inconsistent (verify!).
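    (A small computational aside, our own illustration.) A brute-force truth-table search confirms that this particular set has no satisfying truth assignment; by Exercise 4.3.6 below, this matches its inconsistency.

        # Our own brute-force check that {B, B->C, C->~B} is unsatisfiable.
        from itertools import product

        def satisfiable(formulas, variables):
            """Is there a truth assignment making all formulas true?"""
            for values in product([False, True], repeat=len(variables)):
                env = dict(zip(variables, values))
                if all(f(env) for f in formulas):
                    return True
            return False

        implies = lambda a, b: (not a) or b
        formulas = [
            lambda e: e['B'],                       # B
            lambda e: implies(e['B'], e['C']),      # B->C
            lambda e: implies(e['C'], not e['B']),  # C->~B
        ]
        print(satisfiable(formulas, ['B', 'C']))    # prints False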

    The Model Existence Theorem allows us to connect the notions of consistency and satisfiability.

    Exercise 4.3.6. Verify that a set of formulas in a first order language: a) is consistent in the classical logic, iff it is satisfiable, b) is inconsistent in the classical logic, iff it is unsatisfiable. (Hint: use the result of Exercise 4.1.3.)

    Skolem's Paradox

    Initially, the Model Existence Theorem was proved in a weaker form in 1915 (by Leopold Loewenheim) and 1920 (by Thoralf Skolem): if a first order theory has a model, then it has a finite or countable model (the famous Loewenheim-Skolem theorem). Proof (as it became possible after 1929): if T has a model, then T is consistent, hence, by the Model Existence Theorem, T has a finite or countable model.

    L.Loewenheim. Ueber Moeglichkeiten im Relativkalkuel. "Mathematische Annalen", 1915, Vol.76, pp.447-470 (see also http://thoralf2.uwaterloo.ca/htdocs/scav/lowen/lowen.html)

    Th. Skolem. Logisch-kombinatorische Untersuchungen über die Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theoreme über dichte Mengen. Videnskabsakademiet i Kristiania, Skrifter I, No. 4, 1920, pp. 1-36 (see also http://thoralf2.uwaterloo.ca/htdocs/scav/skolem/skolem.html)

    The Loewenheim-Skolem theorem (and the Model Existence Theorem) keeps provoking the so-called Skolem's Paradox, first noted by Skolem in 1922:

    Th. Skolem. Einige Bemerkungen zur axiomatischen Begruendung der Mengenlehre. Matematikerkongressen i Helsingfors den 4-7 Juli 1922, Den femte skandinaviska matematikerkongressen, Redogörelse, Akademiska Bokhandeln, Helsinki, 1923, pp. 217-232.

    Indeed, in all formal set theories (for example, in ZF) we can prove the existence of uncountable sets. Still, according to the Model Existence Theorem, if our formal set theory is consistent, then there is a countable model in which all its axioms and theorems are true. I.e. the theory proves the existence of uncountable sets, yet it has a countable model! Is this possible? Does it mean that all formal set theories are inconsistent? Platonists could say even more: any consistent axiomatic set theory has countable models, hence, no axiom system can represent the "intended" set theory (i.e. "the" Platonist "world of sets") adequately.

    For a formalist, Skolem's Paradox is not a paradox at all. I would rather call it Skolem's effect - just like the photoelectric effect, it is simply a striking phenomenon. Indeed, let J be a countable model of our formal set theory. In this theory, we can prove that the set r of all real numbers is uncountable, i.e.

    ~Ef (f is 1-1 function from r into w), -----------(1)

    where w is the set of all natural numbers. What is the meaning of this theorem in the countable model J? The interpretations rJ and wJ of r and w are subsets of the domain DJ, i.e. they are both countable sets, hence,

    Ef (f is 1-1 function from rJ into wJ). ----------(2)

    The interpretation of (1) in J is

    ~Ef((f in DJ) & (f is 1-1 function from rJ into wJ)).

    Hence, the mapping f of (2) does exist, yet it exists outside the model J! Do you think that f of (2) "must" be located in the model? Why? If you are living (as an "internal observer") within the model J, the set rJ seems uncountable to you (because you cannot find a 1-1 function from rJ into wJ in your world J). Still, for me (an "external observer") your uncountable rJ is countable - in my world I have a 1-1 function from rJ into wJ!

    Hence, indeed, Skolem's Paradox represents simply a striking phenomenon. It is worth knowing, yet there is no danger in it.

