Handbook of Philosophical Logic, Second Edition, Volume 11


2ND EDITION VOLUME 11


HANDBOOK

OF PHILOSOPHICAL LOGIC

2nd Edition

Volume 11

edited by D. M. Gabbay and F. Guenthner

Volume 1 - ISBN 0-7923-7018-X
Volume 2 - ISBN 0-7923-7126-7
Volume 3 - ISBN 0-7923-7160-7
Volume 4 - ISBN 1-4020-0139-8
Volume 5 - ISBN 1-4020-0235-1
Volume 6 - ISBN 1-4020-0583-0
Volume 7 - ISBN 1-4020-0599-7
Volume 8 - ISBN 1-4020-0665-9
Volume 9 - ISBN 1-4020-0699-3
Volume 10 - ISBN 1-4020-1644-1


HANDBOOK OF PHILOSOPHICAL LOGIC

2nd EDITION

VOLUME 11

Edited by

D. M. GABBAY

King's College, London, U.K.

and

F. GUENTHNER

Centrum für Informations- und Sprachverarbeitung, Ludwig-Maximilians-Universität München, Germany


A C.I.P. Catalogue record for this book is available from the Library of Congress.

Printed on acid-free paper

All Rights Reserved

© 2004 Springer Science+Business Media Dordrecht Originally published by Kluwer Academic Publishers in 2004

Softcover reprint of the hardcover 2nd edition 2004

No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

ISBN 978-90-481-6554-4
ISBN 978-94-017-0466-3 (eBook)


Editorial Preface
Dov M. Gabbay

Modal Logic and Self-Reference
Craig Smorynski

Diagonalization in Logic and Mathematics
Dale Jacquette

Semantics and the Liar Paradox
Albert Visser

The Logic of Fiction
John Woods and Peter Alward

Index


PREFACE TO THE SECOND EDITION

It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then.

The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 has described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good!

The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles.

These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand and to provide the theoretical basis for the computer program constructs on the other. The result was that the Handbook of Philosophical Logic, which covered most of the areas needed from logic for these active communities, became their bible.

The increased demand for philosophical logic from computer science and artificial intelligence and computational linguistics accelerated the development of the subject directly and indirectly. It directly pushed research forward, stimulated by the needs of applications. New logic areas became established and old areas were enriched and expanded. At the same time, it socially provided employment for generations of logicians residing in computer science, linguistics and electrical engineering departments which of course helped keep the logic community thriving. In addition to that, it so happens (perhaps not by accident) that many of the Handbook contributors became active in these application areas and took their place, as time passed on, among the most famous leading figures of applied philosophical logic of our times. Today we have a handbook with a most extraordinary collection of famous people as authors!

The table below will give our readers an idea of the landscape of logic and its relation to computer science and formal language and artificial intelligence. It shows that the first edition is very close to the mark of what was needed. Two topics were not included in the first edition, even though


they were extensively discussed by all authors in a 3-day Handbook meeting. These are:

• a chapter on non-monotonic logic

• a chapter on combinatory logic and λ-calculus

We felt at the time (1979) that non-monotonic logic was not ready for a chapter yet and that combinatory logic and λ-calculus was too far removed.¹ Non-monotonic logic is now a very major area of philosophical logic, alongside default logics, labelled deductive systems, fibring logics, multi-dimensional, multimodal and substructural logics. Intensive re-examinations of fragments of classical logic have produced fresh insights, including at times decision procedures and equivalence with non-classical systems.

Perhaps the most impressive achievement of philosophical logic as arising in the past decade has been the effective negotiation of research partnerships with fallacy theory, informal logic and argumentation theory, attested to by the Amsterdam Conference in Logic and Argumentation in 1995, and the two Bonn Conferences in Practical Reasoning in 1996 and 1997.

These subjects are becoming more and more useful in agent theory and intelligent and reactive databases.

Finally, fifteen years after the start of the Handbook project, I would like to take this opportunity to put forward my current views about logic in computer science, computational linguistics and artificial intelligence. In the early 1980s the perception of the role of logic in computer science was that of a specification and reasoning tool and that of a basis for possibly neat computer languages. The computer scientist was manipulating data structures and the use of logic was one of his options.

My own view at the time was that there was an opportunity for logic to play a key role in computer science and to exchange benefits with this rich and important application area and thus enhance its own evolution. The relationship between logic and computer science was perceived as very much like the relationship of applied mathematics to physics and engineering. Applied mathematics evolves through its use as an essential tool, and so we hoped for logic. Today my view has changed. As computer science and artificial intelligence deal more and more with distributed and interactive systems, processes, concurrency, agents, causes, transitions, communication and control (to name a few), the researcher in this area is having more and more in common with the traditional philosopher who has been analysing

¹ I am really sorry, in hindsight, about the omission of the non-monotonic logic chapter.

I wonder how the subject would have developed, if the AI research community had had a theoretical model, in the form of a chapter, to look at. Perhaps the area would have developed in a more streamlined way!


such questions for centuries (unrestricted by the capabilities of any hardware).

The principles governing the interaction of several processes, for example, are abstract and similar to principles governing the cooperation of two large organisations. A detailed, rule-based, effective but rigid bureaucracy is very much similar to a complex computer program handling and manipulating data. My guess is that the principles underlying one are very much the same as those underlying the other.

I believe the day is not far away in the future when the computer scientist will wake up one morning with the realisation that he is actually a kind of formal philosopher!

The projected number of volumes for this Handbook is about 18. The subject has evolved and its areas have become interrelated to such an extent that it no longer makes sense to dedicate volumes to topics. However, the volumes do follow some natural groupings of chapters.

I would like to thank our authors and readers for their contributions and their commitment in making this Handbook a success. Thanks also to our publication administrator Mrs J. Spurr for her usual dedication and excellence and to Kluwer Academic Publishers for their continuing support for the Handbook.

Dov Gabbay King's College London


The table matches areas of logic (rows) against their uses in natural language processing, program control (specification, verification, concurrency), artificial intelligence, logic programming, imperative vs. declarative languages, database theory, complexity theory and agent theory, together with a comment looking to the future.

Temporal logic. Natural language processing: expressive power of tense operators, temporal indices, separation of past from future. Program control: expressive power for recurrent events, specification of temporal control, decision problems, model checking. Artificial intelligence: planning, time-dependent data, event calculus, persistence through time (the Frame Problem), temporal query language, temporal transactions. Logic programming: extension of Horn clause with time capability, event calculus, temporal logic programming.

Modal logic, multi-modal logics. Natural language processing: generalised quantifiers. Program control: action logic. Artificial intelligence: belief revision, inferential databases. Logic programming: negation by failure and modality.

Algorithmic proof. Natural language processing: discourse representation, direct computation on linguistic input. Program control: new logics, generic theorem provers. Artificial intelligence: general theory of reasoning, non-monotonic systems. Logic programming: procedural approach to logic.

Non-monotonic reasoning. Natural language processing: resolving ambiguities, machine translation, document classification, relevance theory. Program control: loop checking, non-monotonic decisions about loops, faults in systems. Artificial intelligence: intrinsic logical discipline for AI, evolving and communicating databases. Logic programming: negation by failure, deductive databases.

Probabilistic and fuzzy logic. Natural language processing: logical analysis of language. Program control: real time systems. Artificial intelligence: expert systems, machine learning. Logic programming: semantics for logic programs.

Intuitionistic logic. Natural language processing: quantifiers in logic. Program control: constructive reasoning and proof theory about specification design. Artificial intelligence: intuitionistic logic is a better logical basis than classical logic. Logic programming: Horn clause logic is really intuitionistic, extension of logic programming languages.

Set theory, higher-order logic, λ-calculus, types. Natural language processing: Montague semantics, situation semantics. Program control: non-well-founded sets. Artificial intelligence: hereditary finite predicates. Logic programming: λ-calculus extension to logic programs.

For the same areas, the remaining columns record, among other entries: imperative vs. declarative languages (temporal logic as a declarative programming language, the changing past in databases, the imperative future; dynamic logic; types, term rewrite systems, abstract interpretation; semantics for programming languages, Martin-Löf theories, domain recursion theory); database theory (temporal databases and temporal transactions; database updates and action logic; abduction, relevance; inferential databases, non-monotonic coding of databases; fuzzy and probabilistic data; database transactions, inductive learning); complexity theory (complexity questions of the decision procedures of the logics involved, for each of the logics); agent theory (an essential component; possible actions; agents' implementation relies on proof theory; agents' reasoning is non-monotonic; connection with decision theory; agents' constructive reasoning); and a look to the future (temporal systems are becoming more and more sophisticated and extensively applied; multimodal logics are on the rise, quantification and context becoming very active; a major area now, important for formalising practical reasoning; still a major central alternative to classical logic; more central than ever).


The table continues with further areas:

Classical logic, classical fragments: the basic background language; program synthesis; a basic tool; relational databases; logical complexity classes; the workhorse of logic; the study of fragments is very active and promising.

Labelled deductive systems: extremely useful in modelling; a unifying framework, context theory; annotated logic programs; labelling allows for context and control; an essential tool; the new unifying framework for logics.

Resource and substructural logics: Lambek calculus; truth maintenance systems; linear logic; agents have limited resources.

Fibring and combining logics: dynamic syntax; modules, combining languages; logics of space and time; combining features; linked databases, reactive databases; agents are built up of various fibred mechanisms; the notion of self-fibring allows for self-reference.

Fallacy theory: fallacies are really valid modes of reasoning in the right context.

Logical dynamics: widely applied here; potentially applicable; a dynamic view of logic.

Argumentation theory, games: game semantics gaining ground; on the rise in all areas of applied logic; promises a great future.

Object level/metalevel: extensively used in AI; an important feature of agents; always central in all areas.

Mechanisms (abduction, default relevance): extensively used in AI; very important for agents; becoming part of the notion of a logic.

Connection with neural nets: of great importance to the future; just starting.

Time-action-revision models: extensively used in AI; a new theory of logical agents; a new kind of model.


MODAL LOGIC AND SELF-REFERENCE

0 INTRODUCTION

Ever since Epimenides made his startling confession, philosophers and mathematicians have been fascinated by self-reference. Of course, mathematicians are not free to admit this. To the orderly mind of the mathematician the man who says

'I am lying'

is witty, but not to be taken seriously, and the barber who shaves the heads of those in his village who do not shave their own heads simply does not exist; nor does anyone else in his lousy village and, besides, no self-respecting mathematician would want to live there anyway. The Russell paradox is a different matter:

R = {x : x ∉ x}

is not merely a linguistic trick, but something which, if admitted as an entity, leads to real trouble:

R ∈ R iff R ∉ R.

The existence of R has a clear mathematical purpose: it shows Frege's set theory to be inconsistent.

In short, whereas the philosopher takes self-reference, even the Liar, seriously, the mathematician associates it with inconsistency or inexpressibility. That is, the mathematician did so until 1930, when Kurt Gödel turned self-reference from a philosophically puzzling or mathematically suspect object into a respectable mathematical tool. Gödel's starting point was, oddly enough (or naturally enough), the Liar. The sentence,

I am lying,

when uttered can neither be true nor false, hence cannot be uttered. Well, it can be uttered (I just tried it); but it cannot be uttered coherently, i.e. not if it is to have a definite truth value (and we are to keep the usual laws of logic). But, observed Gödel, if a language is expressive enough and a theory T in the language is both simple and powerful enough, it can express provability. If, moreover, a certain amount of self-reference is available, a watered-down Liar can assert

I am unprovable,


or, rather,

What I am saying is unprovable.

This sentence will, in fact, be unprovable.

As one might guess, Gödel's observation was a big hit. Everyone, the mathematician as well as the philosopher, was impressed by Gödel's argument and the conclusions he drew from it.

GÖDEL'S FIRST INCOMPLETENESS THEOREM. If T is a sufficiently powerful formal theory and T is sufficiently sound, then T is incomplete; i.e. there are true sentences undecided by T.

GÖDEL'S SECOND INCOMPLETENESS THEOREM. If T is a sufficiently powerful formal theory, then T cannot prove its own consistency.

The story of Gödel's Theorems and their effect on the Philosophy of Mathematics need not be repeated here. What should be emphasised is that a mathematical theory of self-reference was long in developing. Philosophers tried simulating a few paradoxes other than the Liar and mathematicians developed Recursion Theory (cf. the chapter by Van Dalen (this Handbook volume 1)) as a safe alternative.

A theory of self-reference could have emerged in the 1950s when Leon Henkin asked the question: we know that Gödel's sentence asserting its own unprovability is unprovable; what about the sentence asserting its own provability? For example, if a sentence declares

I am provable,

is it telling the truth? At the International Congress of Mathematicians in 1954 Martin H. Löb proved that the answer was yes. Unfortunately, the referee for his paper, which appeared the following year, noticed that Löb's argument established something a bit more general, and a bit more philosophically interesting, and the cute fact of the provability of the statement asserting its own provability was overlooked. Proof theorists emphasised the philosophical importance of Löb's Theorem and the mathematical dabblers backed off.

In the early 1970s, however, the story changed. Suddenly, from several directions at once, it was recognised that, modulo the background analysis by Löb of the representation of provability within a system, the proofs of Gödel's Theorems and Löb's Theorem were propositional in character; that is, they used propositional logic with an additional operator and some familiar laws, i.e. modal logic.

In the sequel I shall exposit some of the modal analysis of self-reference. My plan is fairly simple: in the immediately following section I shall discuss the arithmetical background: self-reference in (say) arithmetic, Löb's


Derivability Conditions, Gödel's theorems, and Löb's Theorem. There follows in Section 2 a description of a system of modal logic called Provability Logic, or PrL. The analysis of self-reference in PrL is given in the next section. The key result is the De Jongh-Sambin Theorem: every appropriate formula has a unique, explicitly definable fixed point, i.e. self-referential sentences arising from modal contexts have genuine meanings determinable without resort to self-reference.

Up to this point, modal logic will only have been used notationally. In Section 4, I discuss the model theory of PrL. This is not only of interest in its own right, but it also serves as a tool for a further analysis of arithmetic derivability.

In Section 5, I discuss arithmetical interpretations of PrL. The main theorems here are Solovay's two Completeness Theorems. The First Completeness Theorem asserts that PrL is the logic of arithmetic provability, whence the modal analysis is complete in a sense. The Second Theorem characterises the schemata valid with respect to truth and is, in effect, the strongest single incompleteness theorem known.

Having exhausted, to some extent, the study of pure provability by the end of Section 5, I next lead the reader into the applied theory of self-reference. This material is both a bit more advanced and more skimpily presented: first, in Section 6, I discuss Rosser's sentences and their relatively complete modal analysis. In Section 7, the goal is different: to use modal logic to unify many different self-referential formulae and explain all their known applications at once. For the instances of self-reference falling under the scope of the explanation, the explanation is completely satisfactory; for other similar self-referential instances, I can only refer to the literature for the beginnings.

1 THE INCOMPLETENESS THEOREMS

Nowadays, mathematical logicians would prefer a discussion of the set-theoretic encoding of syntax in a weak set theory, except, of course, for the proof theorists, who would find a theory of finite sequences most natural. Traditionally, however, one discusses the language of arithmetic, or, rather, a language of arithmetic. In view of the handy discussion of indexing in Van Dalen's chapter on recursion theory (this Handbook volume 1), I find it easiest to assume the reader is familiar with such encoding and simply discuss the end result rather than the process of such an encoding of syntax within the language of arithmetic. In the sequel, we will mostly need this only as an explanation of the modal systems and the types of questions we will ask about them.

To begin with, we should specify the language of arithmetic and declare some axiomatisation for a formal theory of arithmetic. For the language, we


have, in addition to the logical apparatus (variables, equality, connectives and quantifiers), the individual constant 0, function constants S (successor), + (addition), · (multiplication), and f̄ for each (primitive recursive definition of a) primitive recursive function f, and a binary relation ≤ (order). Numerals 1̄, 2̄, 3̄, ... are abbreviations for S0, SS0, SSS0, ..., respectively.

The axioms of formal number theory, called Peano Arithmetic, or PA, consist, in addition to the usual logical ones, of the following:

I.   0 ≠ Sx
     Sx = Sy → x = y
     Īᵢ(x₁, ..., xₖ) = xᵢ
     C̄(x₁, ..., xₖ) = n̄

II.  x ≤ y ↔ ∃z(x + z = y)

III. x + 0 = x
     x + Sy = S(x + y)
     x · 0 = 0
     x · Sy = x·y + x
     f̄(0, x₁, ..., xₖ) = ḡ(x₁, ..., xₖ)
     f̄(Sx, x₁, ..., xₖ) = h̄(f̄(x, x₁, ..., xₖ), x₁, ..., xₖ, x),
          if f is defined from g, h by primitive recursion
     f̄(x₁, ..., xₖ) = ḡ(h̄₁(x₁, ..., xₖ), ..., h̄ₚ(x₁, ..., xₖ)),
          if f is defined from g, h₁, ..., hₚ by composition

IV.  φ0 ∧ ∀x(φx → φ(Sx)) → ∀x φx, for all φx with x free.

The first group of axioms merely concerns the initial functions from which the primitive recursive ones are generated; the second defines order; the third simply gives equations corresponding to the definitional principles for the primitive recursive functions generated from the initial ones; and the fourth schema is just induction. The axioms for + and · are redundant insofar as + and · are defined by recursion from S. These functions are, however, special: the full system PA can be shown to be a definitional extension of that generated by using only S, +, · and their axioms (including the subschema of IV in the restricted language).
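As a quick illustration (my own, not the text's) of how the group III defining equations determine computation, here is a sketch of addition and multiplication obtained from the successor function by primitive recursion, mirroring x + 0 = x, x + Sy = S(x + y), x·0 = 0 and x·Sy = x·y + x; the function names are illustrative assumptions.

```python
def S(x):
    # successor
    return x + 1

def add(x, y):
    # x + 0 = x ;  x + Sy = S(x + y)  (recursion on y)
    return x if y == 0 else S(add(x, y - 1))

def mult(x, y):
    # x . 0 = 0 ;  x . Sy = x.y + x   (recursion on y)
    return 0 if y == 0 else add(mult(x, y - 1), x)

print(add(3, 4), mult(3, 4))   # 7 12
```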

Now, using the primitive recursive indexing of primitive recursive functions discussed in Van Dalen's chapter (this Handbook volume 1), one can get a primitive recursive encoding of syntax: there is an assignment of numbers ⌜t⌝, ⌜φ⌝ to terms t and formulae φ, respectively, such that the usual syntactic operations are primitive recursively simulated. E.g. there are primitive recursive functions con, neg, sub such that

con(⌜φ⌝, ⌜ψ⌝) = ⌜φ ∧ ψ⌝

neg(⌜φ⌝) = ⌜¬φ⌝


Moreover, formal derivations can be viewed as finite sequences of formulae and the relation 'x codes a derivation of the formula with code y' is primitive recursive, i.e. there is a formula,

Prov(x, y) : p̄(x, y) = 0,

satisfying: Prov(x, y) is true iff x = ⟨⌜φ₁⌝, ..., ⌜φₙ⌝⟩ and y = ⌜φₙ⌝ and φ₁, ..., φₙ is a formal proof. From this we get a formula,

Pr(y) : ∃x Prov(x, y),

asserting the provability of the formula with code y.

Suppose now that f(x₁, ..., xₖ) is a primitive recursive function. If m₁, ..., mₖ are given and f(m₁, ..., mₖ) = n, we can actually calculate this value using the defining clauses for f, and, hereditarily, those for the functions entering into the definition of f. But these defining clauses are, in fact, axioms of PA, whence the calculation exhibiting f(m₁, ..., mₖ) = n is virtually a formal derivation of f̄(m̄₁, ..., m̄ₖ) = n̄, i.e. we have

(i) f(m₁, ..., mₖ) = n  ⇒  PA ⊢ f̄(m̄₁, ..., m̄ₖ) = n̄.

If we look carefully at this argument, we see that it is inductive: an overall induction on the number of steps used to define f, and, in the case f is defined by primitive recursion, an additional induction on the variable of the recursion. Since PA has the induction schema IV, it follows that this argument can be formalised:

(ii) PA ⊢ ∀x₁, ..., xₖ, y [f̄(x₁, ..., xₖ) = y →
          Pr(sub(⌜f̄(x₁, ..., xₖ) = y⌝, ⌜x₁⌝, ..., ⌜y⌝, x₁, ..., y))].

Actually, the proof of (ii) presupposes decent behaviour of Prov(x, y). The crucial property, which is easily built into the definition of Prov(x, y), is the provable closure of Pr(x) under modus ponens:

(iii) PA ⊢ Pr(⌜φ⌝) ∧ Pr(⌜φ → ψ⌝) → Pr(⌜ψ⌝).

Properties (i)-(iii) are all we will need to know about Pr(x). They are, however, not in a very elegant form. Applying (i) and (ii) to the primitive recursive characteristic function p̄(x, y) for Prov(x, y) we can derive the following:

THEOREM 1 (Löb's Derivability Conditions). For all sentences φ, ψ,

D1. PA ⊢ φ ⇒ PA ⊢ Pr(⌜φ⌝)


D2. PA ⊢ Pr(⌜φ⌝) ∧ Pr(⌜φ → ψ⌝) → Pr(⌜ψ⌝)

D3. PA ⊢ Pr(⌜φ⌝) → Pr(⌜Pr(⌜φ⌝)⌝).

Proof. D1. Note that PA ⊢ φ implies that there is a derivation φ₁, ..., φₙ₋₁, φ of φ. Thus p̄(⟨⌜φ₁⌝, ..., ⌜φₙ₋₁⌝, ⌜φ⌝⟩, ⌜φ⌝) = 0. By (i), we have PA ⊢ Prov(⟨⌜φ₁⌝, ..., ⌜φₙ₋₁⌝, ⌜φ⌝⟩, ⌜φ⌝), i.e. PA ⊢ ∃x Prov(x, ⌜φ⌝), i.e. PA ⊢ Pr(⌜φ⌝).

D2. This is just point (iii) above.

D3. This is just the formalisation of D1.  ∎

Conditions D1-D3 are the key properties of Pr(x). Essentially, they are all we will need to know about Pr(x) until Section 6, where we will replace it by a new predicate. For this purpose, we introduce here a little terminology and discuss the generalisation of D1 and D3 that we will need. Both of these are quite simple.

DEFINITION 2. A formula φ is a PR-formula if it is of the form f̄(t₁, ..., tₖ₋₁) = tₖ, where each term tᵢ is either a variable or a numeral and f̄ is a primitive recursive function constant. A formula φ is an RE-formula if it has the form ∃x ψx, where ψx is a PR-formula.

Thus, a PR-formula is a canonical definition of a primitive recursive relation and an RE-formula is such for an RE relation (as defined in Van Dalen's chapter (this Handbook volume 1)).

In Section 6 we will need the following generalisations of D1 and D3, both of which follow from (i) and (ii) respectively, in the same manner in which D1 and D3 followed therefrom:

THEOREM 3 (RE-Completeness). Let φx₁, ..., xₖ be an RE- (or PR-) formula.

(i) For all m₁, ..., mₖ,

φm̄₁, ..., m̄ₖ holds  ⇒  PA ⊢ φm̄₁, ..., m̄ₖ.

(ii) PA ⊢ φy₁, ..., yₖ → Pr(sub(⌜φx₁, ..., xₖ⌝, ⌜x₁⌝, ..., ⌜xₖ⌝, y₁, ..., yₖ)).

Getting back to our immediate needs, we will require one additional powerful principle, the Diagonalisation Lemma:

THEOREM 4 (Diagonalisation Lemma). Let ψx have only x free. There is a sentence φ such that

PA ⊢ φ ↔ ψ(⌜φ⌝).


Proof. We use sub in much the same way in which the s-m-n function is used to prove the Recursion Theorem. Fix the variable x and consider

θx : ψ(sub(x, ⌜x⌝, x)).

Let m = ⌜θx⌝ and φ = θm̄. Notice:

φ = θm̄ = ψ(sub(m̄, ⌜x⌝, m̄))   and   sub(m, ⌜x⌝, m) = ⌜θm̄⌝ = ⌜φ⌝,

whence PA ⊢ φ ↔ ψ(⌜φ⌝).  ∎

I have stated the Diagonalisation Lemma in a weak form. In full strength it is as general as the Recursion Theorem; indeed, the two are basically the same: the differences are (1) the choices of languages to which they apply, and (2) the fact that one deals with relations and one with functions. For a little about the expositional history of the Diagonalisation Lemma, cf. [Smorynski, 1981].
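The diagonal trick in this proof has a familiar programming analogue: a piece of text that applies a predicate to the result of substituting its own code into itself. The sketch below is my own illustration (the names psi and diag are not the chapter's); evaluating phi applies psi to phi's own code, just as φ asserts ψ of ⌜φ⌝.

```python
def psi(code):
    # stand-in for psi: here it just reports the length of the code it receives
    return 'a sentence about a code of length %d' % len(code)

def diag(code):
    # analogue of sub(x, code of x, x): substitute the code into itself
    return code.replace('CODE', repr(code))

theta = "psi(diag(CODE))"   # theta x : psi(sub(x, code of x, x)), with CODE as the free variable
phi = diag(theta)           # phi = theta applied to its own code
print(phi)                  # the text of an expression that talks about its own code
print(eval(phi))            # evaluating phi applies psi to phi's own code
```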

Once we have the Diagonalisation Lemma and Löb's Derivability Conditions, Gödel's Incompleteness Theorems are easy exercises:

THEOREM 5 (Gödel's Incompleteness Theorems). Let φ be such that

PA ⊢ φ ↔ ¬Pr(⌜φ⌝).

Then:

(i) PA ⊬ φ

(ii) PA ⊬ ¬φ

(iii) PA ⊬ Con(PA), where Con(PA) is the sentence ¬Pr(⌜0 = 1⌝).

Proof. (i) Observe

PA ⊢ φ ⇒ PA ⊢ Pr(⌜φ⌝), by D1
       ⇒ PA ⊢ ¬φ, by the definition of φ.

But this contradicts the consistency of PA, whence PA ⊬ φ.

(ii) Again,

PA ⊢ ¬φ ⇒ PA ⊢ Pr(⌜φ⌝), by choice of φ
        ⇒ PA ⊢ φ,


since PA proves only true theorems. But again, we have an inconsistency unless PA ⊬ ¬φ.

(iii) Since PA ⊬ φ, it suffices to show PA ⊢ Con(PA) → φ. We prove the contrapositive: PA ⊢ ¬φ → Pr(⌜0 = 1⌝). A few applications of D1 and D2 to PA ⊢ ¬φ ↔ Pr(⌜φ⌝) yield:

PA ⊢ Pr(⌜¬φ⌝) ↔ Pr(⌜Pr(⌜φ⌝)⌝).

But, by D3,

PA ⊢ Pr(⌜φ⌝) → Pr(⌜Pr(⌜φ⌝)⌝),

whence

PA ⊢ ¬φ → Pr(⌜φ⌝) ∧ Pr(⌜¬φ⌝).

But

PA ⊢ φ → (¬φ → 0 = 1)

and a few additional applications of D1, D2 yield PA ⊢ ¬φ → Pr(⌜0 = 1⌝).  ∎

It is probably worth noting at this point the following:

COROLLARY 6 (Kreisel's Fixed Point Calculation). Let

PA ⊢ φ ↔ ¬Pr(⌜φ⌝).

Then: PA ⊢ φ ↔ Con(PA).

Proof. We have already shown PA ⊢ Con(PA) → φ. For the converse, apply D1, D2:

PA ⊢ 0 = 1 → φ  ⇒  PA ⊢ Pr(⌜0 = 1 → φ⌝)
                ⇒  PA ⊢ Pr(⌜0 = 1⌝) → Pr(⌜φ⌝)
                ⇒  PA ⊢ ¬Con(PA) → ¬φ.  ∎

There are a few quick remarks that should be made. First, with respect to the formulation of the Incompleteness Theorems, it is customary to incorporate the safety assumptions into the statements. Thus, e.g. instead of saying 'PA ⊬ φ', one says 'If PA is consistent, then PA ⊬ φ'. Frankly, I object to this latter version because it misleads the reader into believing the consistency of PA to be in question. However, there is a good reason to discuss the safety assumptions: by the First Incompleteness Theorem (i.e. 5(i)), PA cannot directly formalise 5(i):

PA ⊬ ¬Pr(⌜φ⌝).

What can be proven (indeed, what was the proof of 5(iii)) is the implication from the safety assumption:

PA ⊢ Con(PA) → ¬Pr(⌜φ⌝).

Similarly, 5(iii) is formalised as

PA ⊢ Con(PA) → ¬Pr(⌜Con(PA)⌝).

Our second remark incorporates another approach to the problem just cited. Recall that, in the Introduction, we stated the Incompleteness Theorem in terms of 'sufficiently strong formal theories T'. The fact is, the proof of Theorem 5 required only that (1) PA be strong enough to carry out some encoding of syntax, and (2) PA have a decently encodable syntax. Now, the former is true of any theory T containing PA (containment via interpretability suffices) and the latter can be met by T's having a recursively enumerable set of axioms. If we call a theory T satisfying this recursive enumerability condition an RE theory, we obtain a rigorously stated general form of the Incompleteness Theorems:

THEOREM 5'. Let T be an RE theory containing PA and let

T ⊢ φ ↔ ¬Pr_T(⌜φ⌝),

where Pr_T(x) is the proof predicate for T. Then:

1. If T is consistent, T ⊬ φ,

2. If T is sufficiently sound, T ⊬ ¬φ,

3. If T is consistent, T ⊬ Con(T), where Con(T) is ¬Pr_T(⌜0 = 1⌝).

In the sequel, we will primarily restrict our attention to T = PA. There is yet a third remark I want to make about the Theorem, or, rather, about the Corollary. The equivalence of any sentence asserting its own unprovability with the assertion of consistency allows one, first of all, to assert the uniqueness up to provable equivalence of such sentences and thus to refer to the sentence asserting its own unprovability. But even more important, the equivalence shows the sentence to be explicitly definable. Does this remove some of the mystery of the self-reference?

The reader will recall Henkin's question about sentences asserting their own provability. Löb's Theorem and its formalisation answer this question readily:

THEOREM 7 (Löb's Theorem). For all sentences ψ:

1. If PA ⊢ Pr(⌜ψ⌝) → ψ, then PA ⊢ ψ.

2. PA ⊢ Pr(⌜Pr(⌜ψ⌝) → ψ⌝) → Pr(⌜ψ⌝).


COROLLARY 8 (Henkin's Problem). Let PA ⊢ φ ↔ Pr(⌜φ⌝). Then: PA ⊢ φ.

The Corollary follows immediately from the Theorem.

Proof [of Theorem 7]. We could content ourselves with a proof of 7(1) and the remark that 7(2) is just its formalisation. And we will see the interdeducibility of these two assertions in the next section. Nonetheless, I present proofs of both results here.

1. (Löb's proof). Assume PA ⊢ Pr(⌜ψ⌝) → ψ and choose φ by Diagonalisation so that

PA ⊢ φ ↔ (Pr(⌜φ⌝) → ψ).

Now D1, D2 yield

PA ⊢ Pr(⌜φ⌝) ↔ Pr(⌜Pr(⌜φ⌝) → ψ⌝)
            → .Pr(⌜Pr(⌜φ⌝)⌝) → Pr(⌜ψ⌝)

and D3 eliminates the redundant part to yield

(*) PA ⊢ Pr(⌜φ⌝) → Pr(⌜ψ⌝).

From the assumption, we conclude

PA ⊢ Pr(⌜φ⌝) → ψ,

i.e.

PA ⊢ φ.

D1 yields PA ⊢ Pr(⌜φ⌝), whence (*) and modus ponens yield PA ⊢ ψ.

2. (Kreisel's proof; cf. [Kreisel and Takeuti, 1974]). The formalisation is easier with the fixed point

PA ⊢ φ ↔ Pr(⌜φ → ψ⌝).

(Using D1, D2, and the equivalence of φ with a statement of the form Pr(·)) D3 yields

PA ⊢ φ → Pr(⌜φ⌝),

which, by choice of φ and D2, yields

PA ⊢ φ → Pr(⌜ψ⌝).

However, D2 and the tautology ψ → (φ → ψ) yield

PA ⊢ Pr(⌜ψ⌝) → Pr(⌜φ → ψ⌝),


i.e.

PA ⊢ Pr(⌜ψ⌝) → φ.

Hence, φ is equivalent to Pr(⌜ψ⌝) and substitution into the defining equivalence for φ (legitimate by D1, D2) yields

PA ⊢ Pr(⌜ψ⌝) ↔ Pr(⌜Pr(⌜ψ⌝) → ψ⌝),

which is slightly more than required.  ∎

I have already remarked that Löb's Theorem settles Henkin's question. Georg Kreisel has often remarked that Löb's Theorem is a generalisation of the Second Incompleteness Theorem: choosing 0 = 1 for ψ, it reads

PA ⊢ Pr(⌜0 = 1⌝) → 0 = 1  ⇒  PA ⊢ 0 = 1,

i.e.

PA ⊬ ¬Pr(⌜0 = 1⌝).

Löb's Theorem is, in fact, 'merely' the contraposition to Gödel's Second Incompleteness Theorem for all finite extensions of PA:

PA + ¬ψ consistent  ⇒  PA + ¬ψ ⊬ ¬Pr(⌜ψ⌝)
                    ⇒  PA ⊬ ¬ψ → ¬Pr(⌜ψ⌝).

Because of this proof, it has become fashionable to call Löb's Theorem the Second Incompleteness Theorem and credit it to Gödel. This is not quite fair. Where Gödel's Theorem gives the important information of the underivability of consistency, Löb's Theorem goes further and actually characterises the provable instances of soundness (i.e. the truth of theorems). Although the reduction of Löb's Theorem to the validity of the Second Incompleteness Theorem in a class of theories is easy, it is by no means obvious: it is true that this proof was independently hit upon by several people (including the author in 1974), but the earliest I've been able to trace it is 1967, when Saul Kripke showed it to various people at the UCLA set theory meeting, a full 12 years after Löb's Theorem had been published.

The reader, particularly the one who has filled in the missing steps wherever I wrote 'by D1, D2, ...', should have noticed that, in proving all these results, we only used propositional logic, D1-D3, and the existence of fixed points. In [Macintyre and Simmons, 1973], Angus Macintyre and Harry Simmons attempted to replace this last tool by some powerful principle like D1-D3. The principle they hit upon was Löb's Theorem, 7(1). They showed, among other things, the equivalence of 7(1), 7(2), the existence of the fixed point φ₁ ↔ (Pr(⌜φ₁⌝) → ψ), the existence of the fixed point φ₂ ↔ Pr(⌜φ₂ → ψ⌝), and the respective explicit calculations φ₁ ↔ (Pr(⌜ψ⌝) → ψ) and φ₂ ↔ Pr(⌜ψ⌝); they showed all these equivalences using only propositional logic and D1-D3.


2 THE SYSTEM PRL OF PROVABILITY LOGIC

The language of modal logic consists of:

Propositional variables: p, q, r, ...
Truth values: T, ⊥
Propositional connectives: ¬, ∧, ∨, →
Modal operator: □.

Modal formulae will be denoted by capital letters A, B, C, ...

The system PrL of provability logic is a simulation of the proof theory outlined in the preceding section. As indicated by the results of Macintyre and Simmons, there are several possibilities for the simulation of the 'advanced' part of the theory. To enable us to discuss this easily, I shall first introduce a neutral system of Basic Modal Logic, BML, simulating Löb's Derivability Conditions:

DEFINITION 9. The modal system BML is the system of logic whose axioms and rules of inference are the following schemata:

Axioms

(A1) All (Boolean) tautologies
(A2) □A ∧ □(A → B) → □B
(A3) □A → □□A

Rules

(R1) A, A → B / B
(R2) A / □A.

The system BML is a known system of modal logic and is almost certainly discussed in Bull and Segerberg's chapter (of this Handbook volume 3), where it appears under a modally more familiar name (K4?). For our purposes, it is more convenient not to place it in the context of a multitude of disparate systems; to us, BML is merely a convenient background for PrL.

Before we discuss PrL, let us acquaint ourselves slightly with BML. First, a short list of useful modal tautologies:

LEMMA 10.

1. BML ⊢ □(A ∧ B) ↔ □A ∧ □B

2. BML ⊢ □A ∨ □B → □(A ∨ B)

3. BML ⊢ □(A → B) → .□A → □B

4. BML ⊢ □(A ↔ B) → .□A ↔ □B

5. BML ⊢ □⊥ → □A

6. BML ⊢ ¬□⊥ ↔ .□A → ¬□¬A.

The derivations of these are routine exercises and I omit them. The converse implications of 10(2)-10(5) are not derivable in BML or PrL, nor are they generally true under arithmetical interpretation.

Following a list of simple tautologies is usually a proof of the Deduction Theorem. This theorem fails for BML, however, because of R2: although A ⊢ □A by R2, we cannot generally derive A → □A. A good arithmetical counterexample is given by interpreting A as Con(PA): since

PA ⊢ Con(PA) → ¬Pr(⌜Con(PA)⌝),

we cannot have

PA ⊢ Con(PA) → Pr(⌜Con(PA)⌝),

as this would entail PA's proving its own inconsistency. However, R2 is the only obstruction to the Deduction Theorem.

THEOREM 11 (Modified Deduction Theorem). If Γ is a set of sentences and there is a derivation of B from Γ + A over BML which does not use R2, then there is a derivation of A → B from Γ over BML which also does not use R2.

The proof of Theorem 11 is a routine induction and I omit it. [Incidentally, another solution to the problem of the Deduction Theorem is to drop R2 and augment the axioms of BML by adding □A as a new axiom for every instance A of an axiom. R2 is then a derived rule of inference, but no longer an obstacle to the validity of the Deduction Theorem.]

We are almost past the routine stuff. First, a useful derived rule:

LEMMA 12. Let ML be any system of modal logic containing BML and closed under R2. Then:

ML ⊢ □A → B  ⇒  ML ⊢ □A → □B.

Proof.

ML ⊢ □A → B  ⇒  ML ⊢ □(□A → B), by R2
             ⇒  ML ⊢ □□A → □B, by 10(3)
             ⇒  ML ⊢ □A → □B, by A3.  ∎


With Lemma 12, we have completed our first group of preliminaries. Our next goal is to handle substitutions. This is motivated not merely by the customary metaphysical question of substitution into modal contexts, but also by mathematical necessity: in Section 1, the steps I avoided giving in the proofs of the Incompleteness Theorems and Löb's Theorem were precisely those corresponding to such a substitution. Slicker proofs are obtained when we know how to perform substitutions.

There are essentially two types of substitutions to be made: inside a modal context and outside such. The latter substitution can be handled by the usual result from propositional logic: if all occurrences of p in A(p) lie outside the scopes of boxes, then

BML ⊢ (B ↔ C) → .A(B) ↔ A(C).

Substitution inside a modal context will clearly require more than mere equivalence; it will require at least □(B ↔ C). By axiom A3, this will be enough, and substitution in general contexts will require (B ↔ C) ∧ □(B ↔ C). Before proving this, it is convenient to introduce an abbreviating operator and list some of its properties.

DEFINITION 13. The strong box, [s], is defined by: [s]A = A ∧ □A.

LEMMA 14. BML(□) ⊢ BML([s]), i.e.

1. BML ⊢ [s]A ∧ [s](A → B) → [s]B

2. BML ⊢ [s]A → [s][s]A

3. BML ⊢ A ⇒ BML ⊢ [s]A.

By Lemma 14, [s] is as good a modal operator as □. In particular, Lemma 10 holds with □ replaced by [s]. Moreover, 14(3) holds for any modal logic ML containing BML closed under R2. Thus, Lemma 12 also holds with □ replaced by [s].

The following Lemma lists a few additional properties of [s].

LEMMA 15.

1. BML ⊢ [s]A → A

2. BML ⊢ [s]A ↔ [s][s]A

3. BML ⊢ □[s]A ↔ □A ↔ [s]□A.

The proof of this Lemma makes yet another exercise in axiom pushing for the reader. Such things are tedious, but necessary. And they do pay off; we can now prove some lemmas of substance, the Substitution Lemmas:


LEMMA 16 (First Substitution Lemma; FSL). For all A(p), B, C,

BML ⊢ [s](B ↔ C) → .A(B) ↔ A(C).

LEMMA 17 (Second Substitution Lemma; SSL). For all A(p), B, C,

BML ⊢ □(B ↔ C) → □[A(B) ↔ A(C)].

These Lemmas are equivalent, as we shall see later. For the moment, it suffices to note that the First readily implies the Second, and then to prove the First.

Proof that FSL implies SSL. Write D for B ↔ C, E for A(B) ↔ A(C), and notice that

BML ⊢ [s]D → E,        by FSL,
     ⊢ □([s]D → E),     by R2,
     ⊢ □[s]D → □E,      by 10(3),
     ⊢ □D → □E,         by 15(3).

Proof of FSL. By induction on the complexity of A(p).

(i) A(p) is p: BML ⊢ [s](B ↔ C) → .B ↔ C, by 15(1).

(i') A(p) is q: BML ⊢ [s](B ↔ C) → .q ↔ q, by A1.

(ii)-(iii) A(p) is T or ⊥: the proof is as in case (i').

(iv)-(vii) A(p) is obtained from simpler A₁ and A₂ by means of propositional connectives: apply the induction hypothesis and the substitution lemma for the propositional calculus.

(viii) A(p) is □D(p): this is the interesting case. Note

BML ⊢ [s](B ↔ C) → .D(B) ↔ D(C), by induction hypothesis,
     ⊢ □[s](B ↔ C) → □[D(B) ↔ D(C)], by 10(3),
     ⊢ [s](B ↔ C) → □[D(B) ↔ D(C)],   (*)

by the definition of [s] and 15(3) (the use of A3 mentioned before). From (*), one additional application of 10(3) yields the desired equivalence.  ∎

We now have all the syntactic preliminaries for which we needed BML and can now consider the problem of axiomatising the 'advanced' properties of the proof predicate. The most elegant solution uses the Formalised Löb Theorem:

DEFINITION 18. The modal system PrL is the extension of BML by the addition of the axiom schema


(A4) □(□A → A) → □A.

As proven by Macintyre and Simmons [1973], one can also use the unformalised Löb Theorem:

LEMMA 19. PrL is equivalent to the system obtained by adding to BML the rule of inference:

(LR) □A → A / A.

Proof. It is easy to see that PrL is closed under LR:

PrL ⊢ □A → A  ⇒  PrL ⊢ □(□A → A), by R2
              ⇒  PrL ⊢ □A, by A4
              ⇒  PrL ⊢ A, by the assumption □A → A.

Conversely, let T denote the extension of BML by the addition of the rule LR. By A3,

BML ⊢ □(□A → A) → □□(□A → A)

and by A2,

BML ⊢ □[□(□A → A) → □A] ∧ □□(□A → A) → □□A.

Combining these yields

BML ⊢ □[□(□A → A) → □A] ∧ □(□A → A) → □□A.

But again A2 yields

BML ⊢ □(□A → A) ∧ □□A → □A,

whence

BML ⊢ □[□(□A → A) → □A] → .□(□A → A) → □A.

A single application of LR yields

T ⊢ □(□A → A) → □A.  ∎

By this Lemma, the choice of the Formalised or Unformalised Löb Theorem to axiomatise the more advanced results of informal provability logic (i.e. the stuff of Section 1) is immaterial and we can make the choice on aesthetic grounds. We chose the Formalised version because an axiom schema is generally easier to handle model-theoretically than a rule of inference.


What I have not directly addressed is the justification of basing PrL on Löb's Theorem rather than on the more obvious Diagonalisation Lemma. Systems based on Diagonalisation can be given and, proof-theoretically, they are not totally inelegant. But, it happens that they are no stronger than PrL, a fact that will require the rest of this and all of the next section to prove. That part of the proof occupying the rest of this section consists of the slicker modal derivation of the Incompleteness and Löb's Theorems accessible once the Substitution Lemmas have been established.

First, the range of diagonalisation must be isolated:

DEFINITION 20. The variable p is boxed in A(p) if every occurrence of p in A(p) lies within the scope of a □.

(I am tempted to say 'p is boxed in in A(p)'.)
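Definition 20 is easy to mechanise. The following sketch uses my own tuple encoding of modal formulas (the name boxed_in is not the chapter's); it simply checks that every occurrence of p lies within the scope of a □.

```python
def boxed_in(p, formula, under_box=False):
    """Return True iff every occurrence of atom p in `formula` is under some box."""
    op = formula[0]
    if op == 'atom':
        # an occurrence of p is acceptable only if we are already under a box
        return formula[1] != p or under_box
    if op in ('top', 'bot'):
        return True
    if op == 'not':
        return boxed_in(p, formula[1], under_box)
    if op in ('and', 'or', 'imp', 'iff'):
        return boxed_in(p, formula[1], under_box) and boxed_in(p, formula[2], under_box)
    if op == 'box':
        return boxed_in(p, formula[1], True)
    raise ValueError('unknown connective: %r' % (op,))

# p is boxed in NOT(box p) and in box(p -> q), but not in (box p) -> p:
print(boxed_in('p', ('not', ('box', ('atom', 'p')))))                    # True
print(boxed_in('p', ('box', ('imp', ('atom', 'p'), ('atom', 'q')))))     # True
print(boxed_in('p', ('imp', ('box', ('atom', 'p')), ('atom', 'p'))))     # False
```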

The point of this definition is that, in arithmetical interpretations (cf. Section 5 below), the property of p's being boxed in A(p) corresponds to that of a sentence φ's occurring only in contexts of the form Pr(⌜... φ ...⌝) in another sentence ψ. In this case, we can write ψ as ψ(⌜φ⌝) and apply the Diagonalisation Lemma to obtain a sentence φ such that

PA ⊢ φ ↔ ψ(⌜φ⌝).

In other words, if p is boxed in A(p), the equivalence

p ↔ A(p)

will always be solvable in arithmetical interpretations. Hence, a modal simulation of diagonalisation must allow for solutions to p ↔ A(p) whenever p is boxed in A(p).

How do we modally simulate diagonalisation? An ugly, but workable, method is to add, for each p and A(p) with p boxed in A(p), a new constant c_A and an axiom c_A ↔ A(c_A). A more elegant approach, which is proof-theoretically, if not obviously model-theoretically, equivalent, is to treat the c_A's as eliminable, i.e. to add a Diagonalisation Rule to BML.

DEFINITION 21. The modal system DiL of Diagonalisability Logic is the extension of BML by the addition of the rule of inference:

(DR) [s][p ↔ A(p)] → B / B,

where p is boxed in A(p) and has no occurrence in B.

The form of eliminability of self-reference, i.e. the assumption of a strongly boxed equivalence rather than a mere equivalence or boxed equivalence, is explained by the FSL: in actual practice, as we shall see, we need to substitute p and A(p) for each other in general contexts.


The first major result about PrL was Dick de Jongh's proof that PrL is closed under DR, i.e. that PrL coincides with DiL. This proof was model-theoretic and has been superseded by later developments. After reading the introduction in Section 4 to the model theory of PrL, the interested reader can consult [Smorynski, 1978] for De Jongh's original proof.

By way of proving the coincidence of PrL with DiL, let me slowly show DiL to contain PrL.

LEMMA 22 (Incompleteness Theorems).

(i) DiL ⊢ [s][p ↔ ¬□p] ∧ ¬□⊥ → ¬□p

(ii) DiL ⊢ ¬□⊥ → ¬□¬□⊥

(iii) DiL ⊢ [s][p ↔ ¬□p] → .p ↔ ¬□⊥.

Proof. (i) Assume [s][p ↔ ¬□p]. Now,

[□p → □□p] → [□p → □¬p]

by the FSL. On the other hand, □p → □p, whence

□p → □p ∧ □¬p → □(p ∧ ¬p) → □⊥.

Hence BML ⊢ [s][p ↔ ¬□p] → .□p → □⊥, and contraposition yields

BML ⊢ [s][p ↔ ¬□p] ∧ ¬□⊥ → ¬□p.

(ii) Let us skip this for a moment.

(iii) Obviously,

BML ⊢ □⊥ → □p ⊢ ¬□p → ¬□⊥.

With (i), this yields

BML ⊢ [s][p ↔ ¬□p] → .¬□p ↔ ¬□⊥ → .p ↔ ¬□⊥.

(ii) By (iii), we have

BML ⊢ [s][p ↔ ¬□p] → .p ↔ ¬□⊥,

whence


BML ⊢ [s][p ↔ ¬□p] → [s][p ↔ ¬□⊥],

and we can substitute ¬□⊥ for p. Do so in (i):

BML ⊢ [s][p ↔ ¬□p] → .¬□⊥ → ¬□p

to conclude

BML ⊢ [s][p ↔ ¬□p] → .¬□⊥ → ¬□¬□⊥.

A final application of DR yields

DiL ⊢ ¬□⊥ → ¬□¬□⊥.  ∎

Of course, what we really want to prove in DiL is A4.

LEMMA 23 (Löb's Theorem). DiL ⊢ □(□A → A) → □A.

Proof. Assume [s][p ↔ □(p → A)]. Again

[□(p → A) → □□(p → A)] → .p → □p

by the FSL. Thus

p → □p ∧ □(p → A),

whence

p → □A.

Conversely, BML ⊢ □A → □(p → A), and the assumption on p yields

□A → p.

Thus:

BML ⊢ [s][p ↔ □(p → A)] → .p ↔ □A
     ⊢ [s][p ↔ □(p → A)] → .[s][p ↔ □A], by 12 ([s]),
     ⊢ [s][p ↔ □(p → A)] → .□A ↔ □(□A → A), by FSL,

whence DR yields

DiL ⊢ □A ↔ □(□A → A).  ∎

REMARK. The proofs of 22 and 23 are not really different from those of the Incompleteness Theorems and Löb's Theorem in Section 1; they are merely more explicit in their use of FSL.


3 SELF-REFERENCE IN PRL

Ostensibly, the goal of the present section is the proof that PrL is closed under the Diagonalisation Rule. We will actually encounter something a bit stronger, namely the existence of explicitly definable fixed points to any legitimate self-referential equivalence p ↔ A(p). That is, for p and A(p) with p boxed in A(p), we will find a sentence D such that

PrL ⊢ D ↔ A(D).

This will immediately yield the closure of PrL under DR. For, applying R2 yields

PrL ⊢ [s][D ↔ A(D)].   (*)

If, on the other hand, we have a proof of [s][p ↔ A(p)] → B with p not occurring in B, we can replace every instance of p in the proof by D and see that

PrL ⊢ [s][D ↔ A(D)] → B.

From this and (*), modus ponens yields PrL ⊢ B.

Before I show how the fixed points are constructed, let me first point out that they are unique.

THEOREM 24 (Uniqueness Theorem). Let p be boxed in A(p). Then:

PrL ⊢ [s][p ↔ A(p)] ∧ [s][q ↔ A(q)] → .p ↔ q.

Proof. Obviously, we have to apply Löb's Theorem. Thus, our goal is to derive p ↔ q from □(p ↔ q). Write A(p) = B[□C₁(p), ..., □Cₙ(p)], with p not occurring in B[q₁, ..., qₙ]. Observe

BML ⊢ □(p ↔ q) → □[Cᵢ(p) ↔ Cᵢ(q)], by SSL,
     ⊢ □(p ↔ q) → .□Cᵢ(p) ↔ □Cᵢ(q),
     ⊢ □(p ↔ q) → [s][□Cᵢ(p) ↔ □Cᵢ(q)], by 12,
     ⊢ □(p ↔ q) → .A(p) ↔ A(q), by FSL.

We now drag in the fixed point assumptions to conclude

BML ⊢ [s][p ↔ A(p)] ∧ [s][q ↔ A(q)] → [□(p ↔ q) → (p ↔ q)].   (*)

By 12 we can add a box to the right-hand side to get

BML ⊢ [s][p ↔ A(p)] ∧ [s][q ↔ A(q)] → □[□(p ↔ q) → (p ↔ q)].


Thus, A4 yields

PrL ⊢ [s][p ↔ A(p)] ∧ [s][q ↔ A(q)] → □(p ↔ q),

which, with (*), yields the conclusion:

PrL ⊢ [s][p ↔ A(p)] ∧ [s][q ↔ A(q)] → (p ↔ q).  ∎

The Uniqueness Theorem is due independently to Dick de Jongh, Claudio Bernardi, and Giovanni Sambin. The above proof is Bernardi's. De Jongh's model theoretic proof can be found in [Smorynski, 1978]; Sambin's rather more difficult syntactic proof appears in [Sambin, 1976].

The existence proof for fixed points, i.e. the construction of explicitly definable fixed points in PrL, is also not too difficult. It requires one clever use of Löb's Theorem to generalise Löb's Theorem, and then the rest is a simple algebraic computation.

LEMMA 25. PrL ⊢ □C(T) ↔ □C[□C(T)].

Proof. The left-to-right implication is fairly simple:

BML ⊢ □C(T) → .T ↔ □C(T),
     ⊢ □C(T) → [s][T ↔ □C(T)],        (*)
     ⊢ □C(T) → .□C(T) ↔ □C[□C(T)],    by FSL,

whence

BML ⊢ □C(T) → □C[□C(T)].

For the converse implication, start with (*):

BML ⊢ □C(T) → [s][T ↔ □C(T)],
     ⊢ □C(T) → .C[□C(T)] ↔ C(T),      by FSL,
     ⊢ □C(T) → .C[□C(T)] → C(T),
     ⊢ C[□C(T)] → .□C(T) → C(T),
     ⊢ □C[□C(T)] → □[□C(T) → C(T)],   by 10(3),

whence A4 yields

PrL ⊢ □C[□C(T)] → □C(T).  ∎

COROLLARY 26. Let A(p) = B[□C(p)]. Then

PrL ⊢ AB(T) ↔ A[AB(T)].


Proof. By the Lemma,

PrL ⊢ □CB(T) ↔ □CB[□CB(T)].

Applying R2 and FSL we get

PrL ⊢ B□CB(T) ↔ B□CB[□CB(T)],

i.e.

PrL ⊢ AB(T) ↔ A[AB(T)].  ∎

With Corollary 26, we already have enough to determine the fixed points for the historically most important modally expressible instances of self-reference:

Gödel's sentence. A(p) = ¬□p. Here, B(q) = ¬q and the fixed point is

D = AB(T) = ¬□¬T = ¬□⊥.

Henkin's sentence. A(p) = □p. Here, B(q) = q and the fixed point is

D = AB(T) = □T = T.

Löb's sentence. A(p, q) = □p → q. Here, B(r) = r → q and the fixed point is

D = AB(T) = □(T → q) → q = □q → q.

Kreisel's variant. A(p, q) = □(p → q). Here, B(r) = r and the fixed point is

D = AB(T) = □(T → q) = □q.

We can now proceed to the general case:

THEOREM 27 (De Jongh-Sambin Theorem). Let A(p, q₁, ..., qₙ) have only the propositional variables p, q₁, ..., qₙ and let p be boxed in A. There is a modal sentence D(q₁, ..., qₙ) containing only the propositional variables q₁, ..., qₙ such that

1. PrL ⊢ [s][p ↔ A(p)] → .p ↔ D

2. PrL ⊢ D ↔ A(D).

Proof. By 24, we need only prove (2). Suppressing q₁, ..., qₙ, A(p, q₁, ..., qₙ) can be written in the form B[□C₁(p), ..., □Cₖ(p)], where the □Cᵢ(p)'s contain all the occurrences of p (i.e. p does not occur in B[q₁, ..., qₖ]).


[REMARK. The decomposition is not unique. For example,

A(p) = □(□¬p ∨ □p) → □p

can be written in the form B[□C₁(p), □C₂(p)] in more than one way.]

We prove the Theorem by induction on k. For k = 0, there is nothing to prove. For k = 1, we can simply refer to Corollary 26.

Suppose k > 1. Let

A*(p, q₁, ..., qₙ, qₙ₊₁) = B[□C₁(p), ..., □Cₖ₋₁(p), □Cₖ(qₙ₊₁)].

A* has only k - 1 components □Cᵢ(p) and, by induction hypothesis, has a fixed point D* = D*[q₁, ..., qₙ₊₁], i.e.

PrL ⊢ D*[q₁, ..., qₙ₊₁] ↔ B[□C₁(D*), ..., □Cₖ₋₁(D*), □Cₖ(qₙ₊₁)].   (*)

Let D = D[q₁, ..., qₙ] be a fixed point of D*[q₁, ..., qₙ, qₙ₊₁] in the variable qₙ₊₁, i.e.

PrL ⊢ D ↔ D*[q₁, ..., qₙ, D].

Letting D' = D*[q₁, ..., qₙ, D] and replacing qₙ₊₁ by D in (*) yields:

PrL ⊢ D' ↔ B[□C₁(D'), ..., □Cₖ₋₁(D'), □Cₖ(D)].

Using FSL to replace D' by D in this yields

PrL ⊢ D ↔ B[□C₁(D), ..., □Cₖ₋₁(D), □Cₖ(D)],

i.e. a fixed point for A(p).  ∎

The whole procedure behind the proof is best clarified by an example:

EXAMPLE 28. Let A(p, q₁, q₂) = □(p → q₁) ∨ □(p → q₂). Then we have A = B[□C₁(p), □C₂(p)], where

B(r₁, r₂) = r₁ ∨ r₂,   C₁(p) = p → q₁,   C₂(p) = p → q₂.

By the above procedure, we replace the second occurrence of p by one of a new variable q₃ and consider

A*(p, q₁, q₂, q₃) = □(p → q₁) ∨ □(q₃ → q₂).


We can find the fixed point to this by appeal to Corollary 26: A*(p) = B*[□C*(p)], where

B*(r) = r ∨ □(q₃ → q₂),   C*(p) = p → q₁.

The fixed point is

D* = A*B*(T) = □[(T ∨ □(q₃ → q₂)) → q₁] ∨ □(q₃ → q₂),

which simplifies to

D* = □q₁ ∨ □(q₃ → q₂).

We now replace the newly introduced q₃ by p: A'(p, q₁, q₂) = □q₁ ∨ □(p → q₂) and find its fixed point. Corollary 26 readily yields D = □q₁ ∨ □q₂.

The De Jongh-Sambin Theorem was independently proven by Dick de Jongh and Giovanni Sambin. De Jongh's original proof was model theoretic and more difficult; Sambin's was syntactic and complicated. The present simple version is essentially due to De Jongh. Claudio Bernardi and C. Smorynski independently proved a special case somewhat earlier and their proofs are still interesting. Cf. [Bernardi, 1975] and [Smorynski, 1979] for these. There are now a number of proofs of this theorem; cf. [Boolos, 1979; Sambin, 1976] and [Smorynski, 1978]. Most of these proofs are model theoretic. We turn now to this model theory.

4 MODEL THEORY FOR PRL

For a full discussion of the Kripke model theory of modal logic, I refer the reader to the chapter by Bull and Segerberg (this Handbook volume 3). Here I will only describe as much model theory as necessary for arithmetical discussion. This means that (i) I will not define Kripke models in full generality, and (ii) I will not prove the basic theorems; these proofs can be gleaned from Segerberg and Bull's discussion.

DEFINITION 29. A frame is a triple (K, R, α₀), where K is a non-empty set of nodes α, β, γ, ... (in fact α₀ ∈ K), R ⊆ K × K is transitive (i.e. for α, β, γ ∈ K, αRβ and βRγ imply αRγ), and α₀ is a minimum element with respect to R (i.e. for β ∈ K other than α₀, α₀Rβ).

DEFINITION 30. A Kripke model is a quadruple K = (K, R, α₀, ⊩), where (K, R, α₀) is a frame and ⊩ is a satisfaction relation between nodes α and modal sentences. The assertion 'α ⊩ A' is read either 'α forces A' or 'A is true at α' and is assumed to satisfy the following conditions:


(ii)-(iii) α ⊩ T; α ⊮ ⊥

(iv) α ⊩ ¬A iff α ⊮ A

(v)-(vii) α ⊩ A ∘ B iff (α ⊩ A) ∘ (α ⊩ B) for ∘ ∈ {∧, ∨, →}

(viii) α ⊩ □A iff ∀β[αRβ ⇒ β ⊩ A].

The next Remarks collect some trivial lemmas that follow immediately from the definition.

REMARK 31. (i) A forcing relation ⊩ on a frame (K, R, α₀) is completely and freely determined by its decisions on the atoms. That is, any decision on the truth or falsity of atomic formulae at nodes (i.e. the decision for each α and p whether or not α ⊩ p) extends uniquely to a forcing relation ⊩ making those same decisions. In particular, in describing a model (K, R, α₀, ⊩) we need only specify the choices α ⊩ p or α ⊮ p.

(ii) The relation α ⊩ A depends only on α and those β such that αRβ. Thus, given K = (K, R, α₀, ⊩) and α ∈ K, one can define K_α = (K_α, R_α, α, ⊩_α) by

(a) K_α = {α} ∪ {β ∈ K : αRβ}

(b) R_α = R ↾ K_α × K_α

(c) ⊩_α : for β ∈ K_α, β ⊩_α p iff β ⊩ p.

K_α is a Kripke model and, for all β ∈ K_α and all sentences A, β ⊩_α A iff β ⊩ A.

(iii) □ is persistent with respect to R: if α ⊩ □A and αRβ, then β ⊩ □A. For, let αRβ and note that

α ⊩ □A ⇒ ∀γ(αRγ ⇒ γ ⊩ A), by definition
       ⇒ ∀γ(βRγ ⇒ γ ⊩ A), by transitivity
       ⇒ β ⊩ □A, by definition.

A model theory must have notions of truth and semantic consequence:

DEFINITION 32. (i) Let K = (K, R, α₀, ⊩) be a Kripke model. A sentence A is true in K, written K ⊨ A, iff A is forced at α₀: K ⊨ A iff α₀ ⊩ A. A set Γ of sentences is true in K, written K ⊨ Γ, iff every sentence B ∈ Γ is true in K.

(ii) Γ semantically entails A, written Γ ⊨ A, iff, for all models K, if K ⊨ Γ then K ⊨ A.


The customary thing to do with formal systems and model theories is to prove completeness:

THEOREM 33 (Strong Completeness Theorem). For all Γ, A: Γ ⊢ A over BML iff Γ ⊨ A.

Proof. A proof of this can be gleaned from Bull and Segerberg's chapter.  ∎

Our interest is not in BML, however; it is in PrL. By proving a Strong Completeness Theorem for BML, one can conclude strong completeness of PrL with respect to models of PrL; in particular, one can prove weak completeness:

PrL ⊢ A iff PrL ⊨ A.

Unfortunately, such a result is not very useful. A good model theory for a formal theory provides recognisable models. We can get something like this for PrL at the cost of the strength of the completeness result. The fact is that we can recognise the frames which always yield models of PrL, but these are not good enough for strong completeness.

DEFINITION 34. A sentence A is valid in a frame (K, R, α₀) if α ⊩ A for all α ∈ K and all models K = (K, R, α₀, ⊩) on the frame. A set of sentences Γ is valid in a given frame if every sentence in Γ is valid in the frame.

To determine the frames PrL is valid in, we simply write down what it means for □(□p → p) → □p to be valid in (K, R, α₀). For notational convenience in doing this, we let R̆ denote the converse relation to R and X ⊆ K any set of nodes we intend to be those at which p is to be forced. In terms of X, α ⊩ □(□p → p) → □p iff

∀βR̆α[∀γR̆β(γ ∈ X) ⇒ β ∈ X] ⇒ ∀βR̆α(β ∈ X).

In words: A4 is valid iff transfinite induction on R̆ holds.

DEFINITION 35. A frame (K, R, α₀) is reverse well-founded if it has no ascending sequence of length ω, i.e. if there is no infinite sequence α₀Rα₁R⋯.

THEOREM 36 (Characterisation Theorem). The frames in which PrL is valid are precisely the reverse well-founded frames.

I have already indicated why this is true. There will probably be some disagreement as to whether or not this indication constitutes a proof; but I shall let it go at that.

As already remarked, what is really needed is a completeness theorem. A filtration argument (cf. Bull and Segerberg's chapter) yields the following: THEOREM 37 (Completeness Theorem). For any sentence A, the following are equivalent:


1. PrL ⊢ A

2. A is valid in all (finite) reverse well-founded frames

3. α₀ ⊩ A for all models K = (K, R, α₀, ⊩), with (K, R, α₀) a (finite) reverse well-founded frame

4. α₀ ⊩ A for all models K = (K, <, α₀, ⊩), with (K, <, α₀) a finite tree with root α₀.

The proof yields, if PrL ⊬ A, an effective bound on the size of the finite tree (K, <, α₀) on which there is a forcing relation ⊩ satisfying α₀ ⊮ A. Thus:

Thus:

COROLLARY 38. PrL is decidable.

Decidability results offer but one type of application of a completeness theorem. Another quick application is independence results, the exhibition of countermodels to underivable sentences. Slightly less quick, but often also easy, is the application of model theory to prove closure under non-obvious rules of inference. And then there are deeper applications: at the end of the preceding section I remarked on the use of model theory to analyse fixed points. Unfortunately, that would require a bit more space than is available and I must settle, in this respect, for referring to Smorynski [1978; 1979; 1982] for applications of Kripke models to the study of fixed points in PrL. In the present section I shall give only the simple applications just described, independence results and closure rules, and in the next section I shall discuss a most important application, namely Solovay's Completeness Theorem with respect to arithmetical interpretations.

One tiny notational remark: frames are already transitive by definition. Reverse well-founded frames must also be irreflexive, hence asymmetric. Thus, reverse well-founded frames are strict partial orders with least elements and, in particular, we can write '<' in place of 'R'.

LEMMA 39.

1. PrL ⊬ □p → □⊥

2. PrL ⊬ □(p ∨ q) → □p ∨ □q

3. PrL ⊬ (□p → □q) → □(p → q)

4. PrL ⊬ (□p ↔ □q) → □(p ↔ q).

In words, the implications of Lemma 10 are, as announced directly following the statement of Lemma 10, not reversible. I shall construct a countermodel to □(p ∨ q) → □p ∨ □q and let the reader handle the other formulae. Let K = {α₀, α₁, α₂}, with α₀ < α₁ < α₂, and define ⊩ by:

Figure
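The original figure specifying ⊩ is not reproduced here; the sketch below uses a valuation of my own choosing (not necessarily the author's) that does the job on the frame α₀ < α₁ < α₂, and checks it with a small self-contained evaluator in the style of the sketch after Definition 30.

```python
# Illustrative countermodel for box(p OR q) -> (box p) OR (box q):
# a0 < a1 < a2, with p forced only at a1 and q forced only at a2 (my own choice).
R = {'a0': {'a1', 'a2'}, 'a1': {'a2'}, 'a2': set()}   # transitive, irreflexive order <
val = {'a0': set(), 'a1': {'p'}, 'a2': {'q'}}

def force(a, f):
    op = f[0]
    if op == 'atom': return f[1] in val[a]
    if op == 'or':   return force(a, f[1]) or force(a, f[2])
    if op == 'imp':  return (not force(a, f[1])) or force(a, f[2])
    if op == 'box':  return all(force(b, f[1]) for b in R[a])

p, q = ('atom', 'p'), ('atom', 'q')
A = ('imp', ('box', ('or', p, q)), ('or', ('box', p), ('box', q)))
print(force('a0', A))   # False: a0 forces box(p OR q) but neither box p nor box q
```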
