Suppose you’re a Humean best-system theorist about laws of nature. For S to be a law of nature is for S to feature (as an axiom? As a theorem?) in the best theory that predicts local matters of fact.

Suppose we have some candidate theory T—mentioning perhaps material objects, their physical properties, and locations in space-time. T is interpreted—perhaps via interpretation I with domain D. Now apply the (downward) Löwenheim–Skolem theorem to this, to produce a countable domain D* with pretty much the same interpretation I. (That is: any names have the same referents, and the extension assigned to each predicate is as on the original, except intersected with the new domain. If you thought of the interpretation as first assigning properties, whose extension relative to a domain is just the set of things within that domain that instantiate the property, then we can leave the interpretation untouched.) D*, I will make true exactly the same sentences in the language of T as D, I does. And in particular, it makes true the sentences that figure in T. So let T continue to stand for the interpreted theory whose quantifiers range over D, and T* stand for the distinct interpreted theory whose quantifiers range over D*. T and T* are syntactically identical (hence have the same syntactic consequences). And since their interpretations differ only with respect to the quantifiers, if we concentrate on their predictions for “local matters of fact” (which I’ll assume for now can be expressed using quantifier-free vocabulary), we won’t find any differences in how these consequences are interpreted. So it looks like they “fit” the data to exactly the same extent.

If T is our best theory, then the laws will be exceptionless regularities. If T* is the best theory, then the laws will be restricted. After all, the domain of T*, and hence the scope of the quantifiers featuring in its generalizations, is countable, and so only ranges over e.g. countably many regions of space-time.  Datum: the scientific laws are those that T projects, not the cunning skolemite restrictions that T* talks about. How does the Humean secure this?

In Lewis’s story, we get this via simplicity considerations. Simplicity is measured for Lewis by economy of expression in a canonical language. And (on the way I read Lewis) the canonical language will contain an unrestricted quantifier (or at least, something unrestricted enough to range over all physically relevant stuff). But there’s a weirdness here. The principled fix that Lewis gives us on the canonical language concerns the predicates it contains (those corresponding to natural properties, rather than artificial stuff like “being such that T obtains”). But it’s really not clear what the principled reason is for selecting the rest of the canonical language—why it should contain these connectives, those quantifiers, or whatever. Ted Sider notes this sort of lacuna, and proposes extending the Lewisian natural/non-natural distinction so it covers all categories of entity. That would do the trick here, if with Sider you think that the unrestricted quantifier is perfectly natural. But this is a considerable metaphysical commitment, both in extending the naturalness ideology and in endorsing the particular thesis about its application to unrestricted quantification (that’s partly Sider’s point). Instead of going Sider’s way, perhaps the line will be just that the canonical language contains unrestricted quantification, and there’s *no* deeper explanation for why. That’s just the way it has to be for the Humean proposal to return the extensionally correct results. OK: but a bit disappointing if that’s the end of the story.

Another thing to focus on here is whether T* is as informative as T. After all, telling us only about how countably many things behave *sounds* less informative than telling us about how *all* things behave. But there’s an issue with how we unpack this idea. First, as noted, syntactically they have the same consequences (and in the quantifier-free case there’s no relevant difference of interpretation).

But there’s a more semantic way of understanding “informativeness”. It could be measured by, e.g., the volume of worlds at which the theory is true (smaller volume = more informative). In general, modal approaches to informativeness (whether framed in terms of volume, or the shape of the region defined in modal space, or anything else) will turn on whether the theory is true in this world vs. that world. To get a grip on this, we need a theory equipped not merely with an extensional interpretation, but with an intensional one. We can run the skolemite trick to build up something like this too: walk round from world to world and, at each world w, apply the skolemite construction to the extensions induced at w by the intended unrestricted interpretation. You now have a countable domain for each w, such that, holding the rest of the interpretation untouched, you get exactly the same L-sentences coming out true at w on the restricted as on the unrestricted interpretation. Now you can piece together a property P, whose extension at w is the skolemite domain you constructed at w. And by keeping the original intended intensional interpretation, except restricting the quantifiers to P, we guarantee that the resulting intensional interpretation will assign the same sentence-intensions to L-sentences as the intended one did. Net result: if we take T* to be the interpreted theory with quantifiers restricted to P, we have the same informativeness as T by any modal criterion. And of course, at every world, T* ranges over only countably many things.

So simplicity might knock T* out, but at least on Lewis’s proposal, only because, from the get-go, we built unrestricted quantification rather than skolemite quantification into the canonical language from which simplicity is assessed. I don’t think informativeness would discriminate between T and T*, if we extend the skolemite construction as indicated (this takes a bit of thinking through—at first it seems *obvious* that restricted claims are less informative than unrestricted ones—it’s well worth thinking through exactly how the skolemite construction works to undercut that natural thought). Is there some more satisfying way for the Humean to establish that laws of nature are truly general and unrestricted?

Here’s one idea: in order to derive *any* predictions about local matters of fact, it looks like we need in the language of the theory singular terms for local circumstances (the *local* prediction isn’t that any scenario with such-and-such qualitative characteristics will have thus-and-so features; but that *this* scenario has thus-and-so features, *because* it has such-and-such characteristics). And there’s a question about what singular terms do feature in the language of the theory. The pure language of science (or Lewis’s canonical language) may not contain demonstratives—but to account for applications it’s natural to supplement it with just such resources. If we supplement the canonical language with just that stuff we actually use in applications, then presumably there’ll only be countably many such singular terms, and the argument can run as previously. If we supplement it with terms for each scenario whatsoever, then the argument is blocked—for there’ll be more than countably many local scenarios, and each such named scenario will have to feature in the domain.

(Thanks to Jason Turner for discussion of some of these ideas; though he shouldn’t be blamed for deficiencies in the above. Literature question: are there places out there where this issue is discussed?)

Meaningless identities?

I run across the idea from time to time that some identity statements are “category mistakes” or “meaningless”—even when the terms flanking the identity have referents. In the philosophy of mathematics, it’s pretty common to see it suggested that “Julius Caesar=7” and “the natural number 2=the third von Neumann ordinal” are meaningless. Agustín Rayo’s recent book “The Construction of Logical Space”, for example, advertises it as an advantage of his favoured approach to mathematical language that it allows for this phenomenon. Huw Price’s collection of papers on naturalism and representationalism suggests something similar for terms taken from different “discourses”. It’s an intriguing suggestion, but I can’t see how to make it work—for reasons independent of the “big picture” theories of ontology and meaning that it is supposed to articulate/challenge.

Here’s an argument against meaningless identities between referential singular terms. (The cognoscenti will recognize it as a variant of Evans’ famous argument against indeterminate identity; and in the light of that, I wouldn’t be surprised to learn that the point is already in the literature. References please!)

To start with, suppose that “A=B” is our candidate for a meaningless identity, and “A” and “B” have referents (thus, I’ll feel free to use those terms, rather than simply mention them). I offer two premises:

(1) A is an x such that there’s some term N referring to x, such that “N=A” is meaningful and true.
(2) B is not an x such that there’s some term N referring to x, such that “N=A” is meaningful and true.

By Leibniz’s law, since A possesses a property that B lacks, it follows:

(3) A is not identical to B.

I say that anyone who takes “A=B” to be meaningless is committed to (1) and (2), but must not commit herself to (3). Since (1) and (2) entail (3), this leaves her in an incoherent position.
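To make the Leibniz’s law step fully explicit, let M(x) abbreviate the property invoked in (1) and (2). Schematically (my own rendering, treating “refers” and “meaningful and true” as metalanguage predicates introduced for the formalization):

M(x) =_{df} \textrm{there is a term } N \textrm{ such that } N \textrm{ refers to } x \textrm{ and “N=A” is meaningful and true}

(1′) M(A)
(2′) \neg M(B)
(LL) A = B \rightarrow (M(A) \leftrightarrow M(B))
(3′) A \neq B, from (1′), (2′), and (LL) by modus tollens.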

The commitment to (1) is the easiest to defend. Just let the term N be “A”. I wouldn’t recommend resisting at this point. 

The need to avoid commitment to (3) is also pretty obvious, so long as nothing meaningless can be true.

The most involved case to make is that our target theorist is committed to (2).  Here’s the argument for that: Suppose that there was an N such that B possessed the property mentioned. Then, ex hypothesi “N=A” would be meaningful and true, where “A” refers to A and “N” refers to B. Hence B=A. So possessing that property would entail the truth of something that’s meaningless. Just as in the previous paragraph, that’s impossible. So B must lack the property, just as (2) reports. 

As mentioned, this is a variant of Evans’ argument. And various—but not all—of the responses to Evans might have traction here. For example: a nonclassical logic might help resist certain of the moves; or one might endorse certain versions of Leibniz’s law but resist the one relied on above. On the other hand, what I think of as the most widely accepted caveat/objection to Evans’ argument—inapplicability in cases of referential indeterminacy in the terms involved—doesn’t seem to me to have any interesting analogue here.

There’s certainly new material in here that could be pinpointed as the problem. For example, the argument shoots up and down the semantic staircase, between the metalinguistic claim that “A=B” is meaningless, and the object-language claims A=B/~A=B. These are involved in (3) and (in more than one place) in the defence of (2). So questioning those moves, once they’re formalized in a fully spelled-out form of the argument sketched, is one way forward that our advocate of meaningless identities might favour.

However, though I see some options on the table, I’ve no idea exactly which way the friends of these “meaningless identity” views would jump in response. Ideas welcome!

Substitutions and Models, Part 3: Boolos (and Gödel)

Last time, we saw Quine’s argument that, in languages rich enough for elementary number theory,

(EQ-R) |\!\!\models P iff \models P

And in Part 1, we saw a compelling looking argument — relying on deduction and compactness theorems for substitutional and model-theoretic consequence — that (EQ-R) will entail

(EQ) \Delta |\!\!\models P iff \Delta \models P

Today, we’ll see Boolos’s argument that (EQ) is false in languages rich enough for elementary number theory.

Section 1: Consequence and Consistency

Boolos’s argument is not direct. It goes, rather, via the notion of consistency. Let’s remind ourselves of the connection between the two.

(Boolos in fact talks of satisfiability; like many logicians (and for decent reasons) he reserves the term “consistency” for a purely syntactic notion. I’m not going to follow suit, though. “Consistency” here will be a catch-all term for concepts in the general neighborhood, and we’ll define “satisfiability” in a moment. This shouldn’t cause any confusion, since the syntactic notion Boolos would use “consistency” for won’t show up in today’s post.)


The basic idea is this: if a set of sentences is consistent, all members of the set can be true together — “can” in the same sense that, if an argument is valid, the conclusion can’t be false if the premises are true.

We say that a set \Delta is substitutionally consistent if and only if there is a substitution scheme S where all the members of S(\Delta) are true, and we say that it is model-theoretically consistent, or satisfiable, if and only if there is a model on which all the members of \Delta are true.

The crucial connection we want to preserve with these definitions is that, if \Delta \Rightarrow P, then the set \Delta \cup \{\neg P\} is not consistent. Let’s verify that this holds for our two different notions.

First, model-theoretically: if \Delta \models P, then every model of \Delta is a model of P; since no model is a model of both P and \neg P, no model is a model of \Delta \cup \{\neg P\}, and so that set is unsatisfiable. Conversely, if \Delta \not\models P, then there is some model M of \Delta that is not also a model of P; it will be a model of \neg P, and so \Delta \cup \{\neg P\} will be satisfiable.

Now, substitutionally: if \Delta |\!\!\models P, then every substitution scheme that makes \Delta true makes P true, too. Since no substitution scheme makes both P and \neg P true, every substitution scheme that makes \Delta true does not make \neg P true, and so \Delta \cup \{\neg P\} is not substitutionally consistent. Conversely, suppose \Delta |\!\!\not\models P. Then some substitution scheme S makes all of \Delta true but does not make P true. Since S(P) has to be a sentence, and every sentence is either true or false, S(P) must be false. So \neg S(P) is true; but \neg S(P) = S(\neg P) (since S has to preserve truth-functional connectives), so S is a substitution scheme that makes all of \Delta \cup \{\neg P\} true.
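These verifications can be mimicked mechanically in the propositional case, where models are just truth-value assignments to atoms. Here is a minimal Python sketch (my own illustration, not part of Boolos’s apparatus; `entails`, `satisfiable`, and `neg` are hypothetical helper names), checking the (TransM)-style equivalence by brute force:

```python
from itertools import product

# Sentences are modeled as functions from a valuation (dict: atom -> bool)
# to a truth value. Valuations play the role of models.
ATOMS = ["p", "q"]

def valuations():
    """Generate every assignment of truth values to the atoms."""
    for bits in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def entails(delta, concl):
    """Delta |= concl: every valuation making all of delta true makes concl true."""
    return all(concl(v) for v in valuations() if all(s(v) for s in delta))

def satisfiable(sentences):
    """Some valuation makes every sentence in the set true."""
    return any(all(s(v) for s in sentences) for v in valuations())

def neg(s):
    """The negation of sentence s."""
    return lambda v: not s(v)

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]

# Delta = {p, p -> q} entails q, and Delta together with ¬q is unsatisfiable:
assert entails([p, p_implies_q], q)
assert not satisfiable([p, p_implies_q, neg(q)])

# Conversely, {q} does not entail p, and {q, ¬p} is satisfiable:
assert not entails([q], p)
assert satisfiable([q, neg(p)])
```

The paired assertions track the two directions of (TransM): \Delta entails P exactly when \Delta \cup \{\neg P\} has no satisfying valuation.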

This gets us two biconditionals:

(TransM) \Delta \models P iff \Delta \cup \{\neg P\} is not satisfiable.

(TransS) \Delta |\!\!\models P iff \Delta \cup \{\neg P\} is not substitutionally consistent.

Putting these biconditionals together with EQ and negating both sides gets us

(ConEQ*) For any set \Delta and sentence P, \Delta \cup \{\neg P\} is satisfiable iff it is substitutionally consistent.

But notice that, if ConEQ* holds, so does

(ConEQ) For any set \Gamma, \Gamma is satisfiable iff it is substitutionally consistent.

Why is that? Well, either one of the sentences in \Gamma is of the form \neg P, or not. If so, then \Gamma = \Gamma \cup \{\neg P\}, and so we get ConEQ by plugging this identity into ConEQ*. So suppose no sentence in \Gamma is of the form \neg P. Choose any sentence P in \Gamma, and notice that (a) every model of P is also a model of \neg \neg P, and (b) for every substitution scheme S, S(P) is true iff S(\neg\neg P) is true. So \Gamma will be satisfiable/substitutionally consistent if and only if \Gamma \cup \{\neg\neg P\} is. But by ConEQ*, \Gamma \cup \{\neg\neg P\} is satisfiable iff it is substitutionally consistent, and the result follows.

(The limiting case not covered here is the empty set. We can make an arbitrary choice as to whether to count the empty set as consistent or not; so long as we choose the same way on both the model-theoretic and substitutional side of things, ConEQ will hold.)

So if EQ is true, so is ConEQ. Or, more precisely (since these theses have to be implicitly indexed to languages), if EQ holds of a language, so does ConEQ. Boolos, however, will show us that ConEQ fails for first-order languages rich enough for arithmetic. Thus, so does EQ.

Section 2: Gödel and Tarski: An Aside

In the next section, I’ll give Boolos’s counterexample to ConEQ. But first we’ll briefly touch on some much-needed background on Gödel’s famous incompleteness theorem and a related result by Tarski. (I brushed up against Gödel’s results in the last post, but they’ll be more central to Boolos’s argument here so I want to go through them in a bit more detail.)

Gödel proved that first-order arithmetic was incomplete, in (roughly) the following sense: every effectively specifiable, true axiomatization of arithmetic will fail to prove some true arithmetical statement. The way he proved it was very clever. He noticed that syntactic entities, such as finite strings of symbols, etc., could each be “coded” by a unique natural number. He then showed it was possible to define arithmetical predicates that applied to natural numbers if and only if those numbers coded syntactic strings with various properties (e.g., being a sentence, being a proof, and so on). And using these resources he showed how to build a sentence that essentially said of itself that it was unprovable. If it were false, it would be provable (and so true); so it must be true, and therefore unprovable.
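A loose programming analogue of this self-referential construction is a quine: a program built from a template that, applied to a quotation of itself, reproduces the whole program, just as the Gödel sentence applies a formula to a code of itself. A minimal Python sketch (an analogy only, not part of Gödel’s or Boolos’s apparatus):

```python
# A template that, filled in with a quotation of itself, yields a
# program reconstructing that very same program text: the programming
# analogue of the diagonalization trick.
template = 'template = %r\nsource = template %% template'
source = template % template

# Running `source` in a fresh namespace rebuilds the same string:
ns = {}
exec(source, ns)
assert ns["source"] == source
```

The point of the analogy: nothing mysterious is needed for self-reference, just the mechanical operation of substituting a quotation of an expression into that very expression.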

Let’s get a bit more precise. We suppose we have a language LA strong enough for simple arithmetic, and PA is a true axiomatization of Peano Arithmetic in LA. Then using a Gödel numbering scheme, we can assign each finite string of LA symbols a unique code number (or “Gödel number”). There are actually many different, equally good Gödel numbering schemes; but let’s suppose we’ve fixed on one in particular.

If A is a string of symbols of LA, let \ulcorner A\urcorner denote its Gödel code number. Gödel also showed it is possible to define predicates SENT and PRF_{T} where:

(1a) If A is a sentence, PA \vdash SENT(\ulcorner A \urcorner), and
(1b) If A is not a sentence, PA \vdash \neg SENT(\ulcorner A \urcorner).

(2a) If Pr is a proof (from the axioms of a well-behaved theory T) of A, and Pr’s Gödel code is n, then PA \vdash PRF_{T}(n,\ulcorner A\urcorner); otherwise
(2b) PA \vdash \neg PRF_{T}(n,\ulcorner A\urcorner)

Now, there are a number of ways to proceed from here. But one way of proving Gödel’s result (which is not quite the way Gödel himself originally did it) is via the diagonalization lemma:

DIAG: If a theory T includes PA, and if F(x) is a formula open in x, then there is a sentence A where T \vdash A \leftrightarrow F(\ulcorner A\urcorner).*

Now take F(x) as \neg\exists y(PRF_{PA}(y,x)). By DIAG, we have that there is a sentence A_{G} where

PA \vdash A_{G} \leftrightarrow \neg\exists y(PRF_{PA}(y,\ulcorner A_{G}\urcorner))

(PRF_{PA}(n,m), recall, means that there is a proof from the axioms of PA with Gödel number n, and the theorem it proves has Gödel number m.) Call this sentence DIAG_{G}. Since PA proves it, PA is true, and our proof theory is sound, DIAG_{G} must be true.

Now suppose A_{G} is false. But since DIAG_{G} is true, this means that \exists y(PRF_{PA}(y,\ulcorner A_{G}\urcorner)). So there is a PA-proof of A_{G}. But PA is true and its proof system is sound, so whatever it proves is true, so A_{G} is true. Contradiction.

This means A_{G} must be true. But if it is true, DIAG_{G} tells us it can’t be proven in PA. So there is a true arithmetical statement — namely, A_{G} — that cannot be proven in PA. This is known as PA’s Gödel sentence.

(Could you add A_{G} to PA’s axioms to get a new theory, PA+ that could prove A_{G}? Yes. But now you could re-run the argument from DIAG using the open sentence \neg\exists y(PRF_{PA+}(y,x)), where PRF_{PA+} codes for provability-in-PA+. This gives us a new sentence, A_{G+}, which by the above reasoning must be true but not provable in PA+. And so it goes.)

As interesting as all this is, what we need to understand Boolos’s counterexample isn’t Gödel’s proof itself, but rather a related corollary, proven by Tarski, that no sufficiently rich language can define its own truth predicate. More accurately: if L is a language with the vocabulary needed for arithmetic, and if PA (as formulated in L) is true, then there is no arithmetical formula T of L such that T(n) is true if and only if n is the Gödel number of a true sentence of L.

This proof also goes via the diagonalization lemma. Suppose we have a language L, and the axioms of PA are among the truths of L. Suppose L also has a formula \neg T(x) which is true of all and only the Gödel numbers of sentences that are not truths of L. Then the diagonalization lemma tells us that there is a sentence LIAR such that

PA \vdash \textrm{LIAR} \leftrightarrow \neg T(\ulcorner\textrm{LIAR}\urcorner)

Since PA is true and has a sound proof system, this biconditional must be true as well. But if LIAR is true, then \neg T(\ulcorner\textrm{LIAR}\urcorner) is true, so T(\ulcorner\textrm{LIAR}\urcorner) is false, in which case LIAR isn’t a true sentence of L after all. And if LIAR is false, then \neg T(\ulcorner\textrm{LIAR}\urcorner) is false, in which case \ulcorner\textrm{LIAR}\urcorner numbers a true sentence of L, in which case LIAR is true. Either way leads to contradiction; this is a reductio of the premise that L contains a predicate T true of all and only the Gödel numbers of truths of L.

With Gödel’s proof, we could enrich PA to be able to prove A_{G} (but were then faced with a new unprovable truth, A_{G+}). Similarly, with Tarski’s proof, we can enrich L to a new language, L+, and in L+ it may be possible to define an arithmetical formula T such that T(n) is true if and only if n numbers a true sentence of L. But T cannot get things right for all the sentences that are in L+ but not L, too, in just the same way that PA+ had a new Gödel sentence it could not prove.

Section 3: The Counterexample

The Setup

OK, so here’s the deal. Let LA be a language that is rich enough for simple arithmetic, in the very strongest sense: it has expressions for the various numerical operators, and these mean their associated numerical operations. To fix ideas, let’s suppose LA has no constants at all, and the following predicates:

Predicate: Meaning
Z(x): x = 0
S(x,y): y is the successor of x (y = x + 1)
A(x,y,z): x + y = z
P(x,y,z): x \times y = z

We’ll also suppose LA has one further predicate, G(x), which means `x is a number’. (We also have the quantifiers of LA implicitly restricted to natural numbers, so \forall x G(x) is a truth of LA.) Let’s denote the set of all truths of LA, “Tr(LA)”.

Before going on, let’s notice something about how this language will talk about numbers. Suppose we wanted to say that 1+2=3. We usually do this using a functor “+”, numerals (“1”, “2”, etc.), and an equality sign “=”. Many standard formulations of PA work like this, except that they only have one (constant) numeral, “0”, and then a “successor function” “S”, where “Sx” is the number one greater than x. Here, we would represent “1+2=3” as something like “S0+SS0=SSS0”. In such formulations of PA, the expression consisting of “S” applied n times to “0” is called a standard numeral, and (more precisely) the standard numeral for n. Sometimes it can be useful to use some sort of typographical shorthand, e.g. a boldfaced convention where “3” is understood to stand for 3’s standard numeral, “SSS0”. Here, we can take an ordinary numerical statement, say “1+2=3”, bold each of its terms, to get “1+2=3“, and then unpack the boldfaced convention to get “S0+SS0=SSS0”.

But our language LA isn’t like this at all: we don’t have identity, and we don’t have functors. So if we use boldfaced numerals to stand for some conventional expression of a certain number in a certain language, we can’t trade out a boldfaced numeral for one cooked up out of “0” and a successor function. We must do something else.

What we do instead is describe the number. For instance, we know that three is something which succeeds something which succeeds something which succeeds zero. (And “succeeding zero” is the same as “succeeding something which zeroes” — that is, succeeds an x where Z(x).) So, formally, our boldfaced convention will work as follows: when we get a sentence, say

3 …,

we understand that as shorthand for

\exists x_{0}\exists x_{1}\exists x_{2}\exists x_{3}(Z(x_{0}) \wedge S(x_{0}, x_{1}) \wedge S(x_{1}, x_{2}) \wedge S(x_{2}, x_{3}) \wedge \ldots x_{3} \ldots)

Of course, that’s pretty long, but it does the trick. (And it should be clear how to extend this for arbitrary “numerals”.) There’s another, equivalent way to describe what we’re doing. Instead of having standard numerals, we’ll have standard numeral-predicates. Thus, for instance, 3(x) will be a shorthand for the open sentence

\exists x_{0}\exists x_{1}\exists x_{2}(Z(x_{0}) \wedge S(x_{0}, x_{1}) \wedge S(x_{1}, x_{2}) \wedge S(x_{2}, x))

Then we can read

3 …

as shorthand for

\exists x(3(x) \wedge \ldots x \ldots)

This predicate-numeral convention will be useful later.

(Why the convoluted strategy? Boolos does this to stack the deck in Quine’s favor; Quine’s favored language was free of functors, which he replaced with a description trick like this.)
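The numeral-predicate convention is mechanical enough to generate by program. Here is a small Python sketch (my own illustration; `numeral_predicate` is a hypothetical helper, with the ASCII rendering `Ex0` standing in for \exists x_{0} and `&` for \wedge):

```python
def numeral_predicate(n, var="x"):
    """Return (an ASCII rendering of) the LA numeral-predicate n(x):
    an existential description saying that var is reached from zero
    by n applications of the successor predicate S."""
    if n == 0:
        return f"Z({var})"
    parts = ["Z(x0)"]                                   # x0 is zero
    parts += [f"S(x{i},x{i+1})" for i in range(n - 1)]  # chain of successors
    parts += [f"S(x{n-1},{var})"]                       # var succeeds the last link
    quantifiers = "".join(f"Ex{i}" for i in range(n))
    return quantifiers + "(" + " & ".join(parts) + ")"
```

So, for instance, numeral_predicate(3) yields Ex0Ex1Ex2(Z(x0) & S(x0,x1) & S(x1,x2) & S(x2,x)), matching the open formula for 3(x) above (modulo notation).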

OK, enough of LA. Now consider a language LB. Syntactically, it is exactly the same. Semantically, it is almost the same — but not quite. The only difference is that, in LB, G(x) is true of all and only the numbers that are Gödel numbers of sentences in Tr(LA). Following earlier usage, we’ll denote the set of all truths of LB, “Tr(LB)”. And we’ll use the same boldfaced numeral (and numeral-predicate) convention for LB as we used for LA.

Notice that the terms of the two languages are syntactically the same. This means that they will share Gödel coding schemes. More precisely, since Gödel schemes simply assign numbers to strings on the basis of their syntactic properties — semantic ones never enter into it — any sensible Gödel coding scheme will assign syntactically identical strings of LA and LB (whether they be sentences, proofs, whatever) the very same Gödel codes.

Despite this syntactic similarity, it will be useful for us to keep tabs on which language we’re talking about. We’ll want to distinguish, e.g., `G‘ as it appears as a term of LA, and as it appears as a term of LB. So we’ll add subscripts to each of the terms as appropriate, writing the (non-logical) terms of LA as

Z_{A}, S_{A}, A_{A}, P_{A}, and G_{A},

and the (non-logical) terms of LB as

Z_{B}, S_{B}, A_{B}, P_{B}, and G_{B}.

And we’ll adopt the further convention that for any syntactic string S (belonging to both languages), we’ll use S_{A} to denote that string as a string of LA, and S_{B} to denote it as one of LB. Notice that these subscripts are not an official part of the languages LA and LB; rather, they’re just our own metalinguistic commentary on elements of those languages. So these subscripts don’t figure into the Gödel codes strings get assigned. Notice, however, that we can now rewrite our earlier observation as follows: if \ulcorner\cdot\urcorner is any well-defined system of Gödel coding, then for any string S, \ulcorner S_{A}\urcorner = \ulcorner S_{B}\urcorner.

Notice also: G_{B} is a truth-predicate for LA. By definition, for any number n, G_{B}(n) is true if and only if n is the Gödel number of a truth of LA. And so it must also be a truth-predicate for the fragment of LA that doesn’t have `G_{A}‘ in it.

3.a: A First Pass

OK, now let’s look at Boolos’s argument. Consider the set Tr(LB)_{A} — the set of all the sentences of LA that are syntactically just like the ones in Tr(LB). This set is satisfiable: we know full well it has a model. We know this because the intended interpretation of LB provides just such a model. More precisely, let M be a model of LA that assigns each predicate P_{A} the extension that P_{B} in fact has; since Tr(LB) is in fact true, and M makes LA’s terms mean just what LB’s already do, M must be a model of Tr(LB)_{A}. The question, then, is whether it is substitutionally consistent — that is, whether there is a substitution scheme T where the members of T(Tr(LB)_{A}) are all true, too.

Let’s start by noting that, if there is, then T(Tr(LB)_{A}) \subseteq Tr(LA). This is because we’re working in the language LA here; if the members of T(Tr(LB)_{A}) are all true, then they are some (if not all) of the truths of LA, and so some of the members of Tr(LA).

Now one thing we may have noticed is that Tr(LB)_{A} and Tr(LA) have a lot of overlap. In fact, any sentence that does not have “G” in it will be in one of these if and only if it is in the other. So we might think that an easy way to cook up a substitution scheme T that makes T(Tr(LB)_{A}) all true would be to have it give an interesting substitution for “G” and leave the other predicates alone. But this won’t work, and it’s worth taking a minute to see why.

Suppose n is a Gödel number of a true sentence of LA. Then G_{B}(n) is in Tr(LB), and so G_{A}(n) is in Tr(LA). And this, remember, just is the sentence

\exists x_{0}, x_{1}, \ldots, x_{n}(Z_{A}(x_{0}) \wedge S_{A}(x_{0},x_{1}) \wedge \ldots \wedge S_{A}(x_{n-1}, x_{n}) \wedge G_{A}(x_{n}))

Now, when we take the substitution instance of this that T gives us, that simply swaps out all the predicates here for their values under T. But T, by hypothesis, leaves everything but “G” alone. So the result of applying T to this sentence gives us

\exists x_{0}, x_{1}, \ldots, x_{n}(Z_{A}(x_{0}) \wedge S_{A}(x_{0},x_{1}) \wedge \ldots \wedge S_{A}(x_{n-1}, x_{n}) \wedge T(G_{A})(x_{n}))

In other words, it gives us the sentence T(G_{A})(n). Likewise, if n is not a Gödel number of a true sentence of LA, \neg G_{B}(n) is in Tr(LB), and a similar argument shows that \neg T(G_{A})(n) is true.

We’re supposing that T takes us to all and only truths. But (since the satisfier of the numeral-predicate n just is the number n!) this means that T(G_{A}) is a formula of LA that is true of all and only the Gödel numbers of truths of LA. And this is what Tarski’s theorem told us we couldn’t have. Reductio.

3.b: For More General Cases

But a reductio of what? Of the assumption that there was a substitution scheme T that made Tr(LB)_{A} true and didn’t fiddle with any of the predicates other than G_{A}. What happens if we relax that second assumption? Basically, the same thing. Only it takes more effort to see why. That’s what I’ll argue in this section.

OK, so suppose we have a substitution scheme T, where T[Tr(LB)_{A}] are all true.

Here’s the first thing to notice. G_{B}(\textbf{n}) will be true if and only if n is the Gödel number of a true sentence of LA. So T[G_{A}(\textbf{n})] will be true if and only if n is the Gödel number of a true sentence of LA, too.

But what is G_{A}(\textbf{n})? Recall our numeral-predicate strategy: it is the sentence

\exists x(\textbf{n}(x) \wedge G_{A}(x))

So the substitution instance of this is

(1) \exists x(T[\textbf{n}](x) \wedge T[G_{A}(x)])

And notice: (1) is true if and only if n is the Gödel number of a true sentence of LA. But this doesn’t give us quite what we want: an open formula that is true of and only of the Gödel numbers of true LA sentences. So we’re not quite done.

What we need is a formula of LA, R, which is true of n and m if and only if m satisfies T[\textbf{n}](x). If we had that, then (1) would be equivalent to

(2) \exists x(R(\textbf{n},x) \wedge T[G_{A}(x)])

and would be true if and only if n was the Gödel number of a truth of LA. Moreover, the open formula

(3) \exists x(R(y,x) \wedge T[G_{A}(x)])

would be a ‘truth-formula’ for LA: it would be true of and only of the Gödel numbers of LA truths. Since Tarski’s theorem tells us there can’t be any such formula, our reductio would be complete.

But how do we show that there is such a formula R? Our answer comes in two parts. The first requires a bit of background. Start with the notion of a (total) recursive function. A function f is recursive if and only if (more or less, assuming the Church-Turing thesis) there is an effective algorithm for determining, for each \vec{x} and y, whether f(\vec{x}) = y. From this, we can also define the notion of a recursive relation: an n-place relation R’ is recursive if and only if there is an n-place recursive function f where f(\vec{x}) = 1 iff \vec{x} stand in R’, and 0 otherwise. And finally, we have the following useful result: if R’ is any n-placed recursive relation, then there is a formula R* of LA where

PA \vdash R*(\vec{\textbf{x}}) if \vec{x} stand in R’, and
PA \vdash \neg R*(\vec{\textbf{x}}) otherwise.

OK, so far, so good. Now notice that, given any number n, there is a recursive function that will deliver the Gödel number of the formula “T[n](x)”. We can see this by imagining a purely mechanical process for taking n and generating “T[n](x)”‘s Gödel number.

The first thing we need to do is figure out just what formula T[n](x) is. But this will clearly be a simple algorithmic process: it’s purely mechanical to write down Z_{A}, n instances of S_{A}, and then conjoin them and fill in and bind the right variables in the right places. This gets us the formula n(x). And it’s also purely mechanical to get from this to T[n](x): just look up each predicate’s assignment in T’s table and write it in the right spot. But (given a fixed Gödel coding) it’s also purely mechanical to take any formula and find its Gödel number. So there is a binary recursive relation GN that n bears to m if and only if m is the Gödel number of “T[n](x)”. And since there is such a relation, there is also a predicate GN* where GN*(n,m) is satisfied when and only when m is the Gödel number of “T[n](x)”.
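This two-stage recipe (build n(x), apply T, then code the result) can be sketched in a few lines of Python. The string syntax, the lookup-table representation of T, and the base-256 coding below are illustrative stand-ins of my own, not the official apparatus:

```python
def numeral_predicate(n):
    """Build n(x): Ex y0..y_{n-1}(Z(y0) & S(y0,y1) & ... & S(y_{n-1},x))."""
    if n == 0:
        return "Z(x)"
    ys = [f"y{i}" for i in range(n)]
    conjuncts = [f"Z({ys[0]})"]
    for i in range(n - 1):
        conjuncts.append(f"S({ys[i]},{ys[i+1]})")
    conjuncts.append(f"S({ys[-1]},x)")
    return "Ex " + " ".join(ys) + "(" + " & ".join(conjuncts) + ")"

def apply_scheme(formula, scheme):
    """Apply a substitution scheme given as a predicate-to-predicate lookup
    table (naive string replacement; fine for this toy vocabulary)."""
    for pred, replacement in scheme.items():
        formula = formula.replace(pred + "(", replacement + "(")
    return formula

def godel_number(formula):
    """A stand-in Goedel coding: read the formula's bytes as a base-256
    numeral. Injective, and obviously computable from the formula."""
    g = 0
    for byte in formula.encode():
        g = g * 256 + byte
    return g

T = {"Z": "Zero", "S": "Succ"}  # a hypothetical scheme, for illustration
f = numeral_predicate(2)
print(f)                    # Ex y0 y1(Z(y0) & S(y0,y1) & S(y1,x))
print(apply_scheme(f, T))   # Ex y0 y1(Zero(y0) & Succ(y0,y1) & Succ(y1,x))
print(godel_number(apply_scheme(f, T)))  # some (large) integer
```

Each step is a straightforward computation on strings, which is the point: the composite map from n to the code of “T[n](x)” is recursive.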

Second, there will be a satisfaction predicate SAT for PA that applies to “T[n](x)” — that is, a predicate SAT where SAT(x,m) is true if and only if m satisfies the formula with Gödel code x.** So now consider the formula

\exists x(GN*(n,x) \wedge SAT(x,m))

This is a formula that is satisfied by n and m if and only if m satisfies T[n]. But (putting this all together), this means that the open formula

\exists y[\exists x(GN*(n,x) \wedge SAT(x,y)) \wedge T[G_{A}](y)]

is a truth-formula for LA. For n satisfies this formula if and only if there is a number m that satisfies both T[G_{A}] and T[n] if and only if the sentence T[G_{A}(\textbf{n})] is true if and only if the sentence G_{B}(\textbf{n}) is true if and only if n is the Gödel number of a truth of LA. Boolos’s reductio is complete.

*Really for this theorem, T need not include PA but only include the weaker Robinson arithmetic; but since PA includes that, and since we’re going to be dealing explicitly with cases where our theories include PA, we can ignore this slight weakening.

**This isn’t quite right, and for reasons that have to do with Tarski’s theorem. Just as there can be no perfectly general truth predicate for LA, there can also be no perfectly general satisfaction predicate for LA. What we do have, though, is a hierarchy of satisfaction predicates that follows the arithmetical hierarchy. That is, for every complexity \Pi_{n} or \Sigma_{n} in the arithmetical hierarchy, there is a satisfaction predicate SAT_{\Pi_{n}} or SAT_{\Sigma_{n}} that applies to all formulae of that complexity. That is, if P is (say) a \Sigma_{n} formula, then SAT_{\Sigma_{n}}(\vec{x},\ulcorner P\urcorner) if and only if \vec{x} satisfies P. However, if P is a formula of greater complexity than \Sigma_{n}, SAT_{\Sigma_{n}} can’t be trusted to ‘get the facts right’ about what satisfies it.

However, for any substitution scheme T, there will be some particular level of complexity that all formulae of the form T[n(x)] will share. This is because all such formulae will be of the form

\exists y_{0},\ldots,y_{n-1}(T(Z_{A})(y_{0}) \wedge T(S_{A})(y_{0},y_{1}) \wedge \ldots \wedge T(S_{A})(y_{n-1},x))

That is, it will be a block of quantifiers followed by a conjunction of translations of primitive formulae of LA. We can think of this conjunction as having two main conjuncts:

(T(Z_{A})(y_{0})) \wedge (T(S_{A})(y_{0},y_{1}) \wedge \ldots \wedge T(S_{A})(y_{n-1},x))

Here’s an argument for one of the possible cases; it will be easy to reconstruct the other cases from it. Suppose T[Z_{A}] is of complexity \Delta_{i}, and T[S_{A}] is of complexity \Delta_{j}. Notice that, in this case, any conjunction of formulae all of the form of T[S_{A}] is also \Delta_{j}. If i = j, then the complexity of this entire formula is \Delta_{j}. If i > j, the complexity of this formula is \Delta_{i}. So no matter what there is a fixed k such that, no matter how many iterations of T(S_{A}) are added, this formula is of complexity \Delta_{k}, in which case the complexity of T[n(x)] is \Sigma_{k+1}, and we can use SAT_{\Sigma_{k+1}} for the argument in the text.

Substitutions and Models, Part 2: Quine

The second post in the series that started here. Apologies for the months-long delay! The third will go up later this week.

Quine claims that, for a certain class of languages, the substitutional and model-theoretic definitions of logical truth are equivalent:

(EQ-R) |\!\!\models P iff \models P

Which languages? Ones that have two properties:

(P1) They are “rich enough for elementary number theory” (Philosophy of Logic, p. 53).
(P2) They are first-order. (In particular, completeness and a specific version of the Löwenheim-Skolem theorem hold for them.)

Here are today’s questions:

  1. What counts as being “rich enough for number theory”?
  2. How is this result consistent with the arguments of section 3 of the last post?
  3. What, exactly, is Quine’s argument? How does it work, and on what assumptions does it depend?

I won’t tackle these in order, though: I’ll start with 2, then move on to 3, and leave 1 for the end.

Section 1: Motivation

Before getting into the nitty gritty, it’s worth taking a step back to ask why we — or Quine — should care, either about EQ-R or the full EQ.

First, consider how we would expect Etchemendy to react. Since the EQs hold, at best, only for a restricted class of languages, he will likely be unimpressed. Etchemendy’s approach in The Concept of Logical Consequence shows that he wants a perfectly general analysis which, when thrown an arbitrary language, will give the right predictions. Since we know a substitutional account generates bad predictions for at least some languages, by Etchemendy’s lights there’s no use continuing to go on about it.

Not so for Quine. He has little truck with the notion of an analysis. (If we could analyze logical consequence, we would have an instance of synonymy, which he famously decries.) As a pragmatist, the important question for him is, “What well-behaved definition will suit us best for our purposes?” And if our purposes keep us well within the limited class of languages for which an EQ result holds, then this tells us that, if the model-theoretic account works for our purposes, so does the substitutional.

For Quine, though, the result also goes two steps deeper. For one thing, Quine thinks there are reasons to use the substitutional account when we can: unlike the model-theoretic account, it doesn’t require us to deploy sets. The objection here isn’t the “there are no sets, so let’s be substitutionalists” one — Quine eventually accepted sets, after all, thinking them indispensable to science — but rather that we should do as much as we can with as little as we can get away with, so if we’re not forced to, we should avoid appealing to them in our definition of logical consequence.

Second, and in Boolos’s eyes most important, Quine thinks the EQ results give an argument for identifying “logic” with first-order logic. (There is, I think, a real issue as to just what this identification means — both for us and for Quine — but set that aside.) We choose a logic just as we choose other theories: by seeing which does best at the various theoretical virtues. Since the EQ result, as well as completeness, apply only to first-order theories, only if logic is first-order can we collapse proof-theoretic, model-theoretic, and substitutional accounts of consequence. Quine sees this as a virtue, and thus grist for his “logic-is-first-order” mill.

Section 2: Reconciliation

Ok, next: resolving the last post’s puzzle. There, we saw that a counting sentence like

(E2) \exists x\exists y(x \ne y)

will count as a substitutional, but not a model-theoretic, truth, and thus threaten a counterexample to (EQ-R). How is Quine not worried about this? The answer comes a few pages after his argument, where we learn “Truths of identity theory [such as \forall x(x=x)] do not count as logical truths under the contemplated definitions of logical truth, since they are falsifiable by substituting other predicates for =.” (p. 61). Quine does not think that all substitution schemes take (E2) to (E2), for — unlike what Etchemendy assumed — he does not reckon the identity predicate one of the logical constants.

That answers one side of the problem. But in standard textbook model-theoretic treatments of first-order logic with identity, \forall x(x=x) does come out true on all models. If this is not to be a further counterexample to EQ-R, Quine had better have a not-quite-so textbook model theory. And indeed he does. How his differs from the textbook depends on just what the textbook says.

In some textbooks, the interpretation function of a model does not assign an extension to =. Rather, satisfaction-conditions for sentences of the form \alpha = \beta are hard-wired into the definitions of truth and satisfaction on a model. In other textbooks, the definitions of truth and satisfaction on a model don’t treat identity predicates in any special way. But a pair \langle D, I\rangle only counts as a model if the interpretation function I assigns the set \{\langle x, x\rangle : x \in D\} to =. Quine’s model theory differs from the first textbook’s by not having any special truth-or-satisfaction clause for identity statements, and differs from the second’s by allowing models to assign different extensions to =. So on Quine’s model theory, \forall x(x=x) ends up not being a model-theoretic truth either, and the potential counterexamples are all blocked.
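A toy illustration of the contrast, assuming nothing beyond the text: we check the single sentence \forall x(x=x), once with identity hard-wired into satisfaction, and once with ‘=’ treated as an ordinary binary predicate whose extension the model is free to assign. This is a sketch for this one sentence, not a general model checker:

```python
def true_hardwired(domain):
    """Hard-wired treatment: satisfaction of 'a = b' is settled by genuine
    identity, so Ax(x=x) comes out true on every model."""
    return all(a == a for a in domain)

def true_quinean(domain, ext_eq):
    """Quinean treatment: '=' is just a binary predicate with an extension
    ext_eq, which a model may assign however it likes."""
    return all((a, a) in ext_eq for a in domain)

D = {0, 1, 2}
print(true_hardwired(D))                      # True, on any domain
print(true_quinean(D, {(a, a) for a in D}))   # True: '=' got the diagonal
print(true_quinean(D, {(0, 1), (1, 2)}))      # False: a deviant extension
```

The third line exhibits the Quinean point: once ‘=’ is up for reinterpretation, \forall x(x=x) has falsifying models.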

(For those keeping track: this will, clearly, affect Quine’s overall pro-first-order argument. If he wants to argue that first-order logic is best on the basis of its collapsing model-theoretic, proof-theoretic, and substitutional accounts of logical truth, he’ll have to argue that first-order logic without identity is best — because with identity, first-order logic doesn’t have this property.

Alternatively, as Günther pointed out in the comments to the last post, Quine might instead plump for the modified substitution scheme. Then he can keep identity in, so long as he is willing to argue (as he apparently is) that inclusive (i.e., empty-domain-allowing) first-order logic without names is best. Either way, Quine has to depart from the textbook in some direction or other. As a matter of fact, he departs in all three: he also eschews names and endorses (in Mathematical Logic, anyway) inclusion. So he has a fairly large menu of options here.)

Section 3: The Argument

Here is Quine’s basic argument for EQ-R:

(I). If P is true on all substitution schemes, P is true on all number-theoretic substitution schemes.
(II). If P is true on all number-theoretic substitution schemes, it is true on all models.
(III). If P is true on all models, it is true on all substitution schemes.
(C). Therefore, P is true on all substitution schemes iff P is true on all models.

But (C) is (EQ-R), the desired conclusion. This argument is clearly valid (it’s a standard proof technique to prove a biconditional P iff Q by proving a string of conditionals that start with P, go through Q, and end up back at P again.) But what of the premises?

Some of this, of course, will depend on what “number-theoretic substitution scheme” means. But whatever it means, so long as these are a kind of substitution scheme (and they are!), (I) is trivial. (II) and (III), not so much, so we’ll look at Quine’s arguments for them in detail.

3.a: The Argument for III.

Quine argues for this by appeal to

(COMPLETENESS) If \models P, then \vdash P

(where \vdash P represents the provability of P in any standard first-order system). His reasoning is apparently something like the following. Where S is any (logical-term-preserving) substitution scheme whatsoever:

  1. Suppose P is satisfied by every model.
  2. Then, by COMPLETENESS, it is provable. So there is a proof Pr of P.
  3. If Pr is a proof of P, then S(Pr) is a proof of S(P).
  4. Thus, S(P) is provable.
  5. Therefore, S(P) is true.

The first thing to note is that the move from 4 to 5 relies on the thought that falsehoods can’t be proven or, equivalently, that first-order deductive systems are intuitively sound. ‘Intuitive soundness’ is a notion that crops up in e.g. Kreisel’s squeezing argument.* The idea here (as Quine puts it, at any rate) is that, among the proof systems we can pick, there are some where we can tell, by inspection, that every individual move takes us from truths to truths. (Or from nothing at all to truths, as happens when e.g. we have an axiomatic system.) Furthermore, we can tell that stringing together series of moves each of which preserves truth will also preserve truth. So we can tell, just by inspection, that there will be no proof of a falsehood. So, since there is a proof of S(P) (line 4), it must be true (line 5).

The second thing to note is that this requires also that, if Pr is a proof, then S(Pr) (the result of taking the proof and replacing each sentence Q in it with S(Q)) is also a proof. This is premise 3. Call proof systems with this property preservational. Notice that not every proof system we might pick is preservational. Suppose, for instance, that we chose a system of semantic tableaux (or “truth-trees”) where only contradictions between literals (atomic predications or their negations) were allowed to close. A proof of ‘R(a) \vee \neg R(a)‘ will be a tree that starts with ‘\neg(R(a) \vee \neg R(a))‘ and ends with ‘R(a)‘ and ‘\neg R(a)‘ closing the (only) branch. Now consider the substitution scheme that takes R(a) to F(b) \wedge G(a). The tree that you get by taking the above and substituting with this scheme has ‘F(b) \wedge G(a)‘ and ‘\neg(F(b) \wedge G(a))‘ closing, which is not a proper proof of (F(b) \wedge G(a)) \vee \neg(F(b) \wedge G(a)) in the imagined system, because it doesn’t have any contradiction between literals.
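The point can be checked mechanically. In this Python sketch (the nested-tuple representation of formulas is my own convenience, not part of any official tableau system), the pair that closed the original branch passes the literal-closure test, while its substituted image fails it:

```python
# Formulas as nested tuples: ("atom", pred, term), ("not", f), ("and", f, g).

def is_literal(f):
    """A literal is an atomic predication or the negation of one."""
    return f[0] == "atom" or (f[0] == "not" and f[1][0] == "atom")

def closes_literally(f, g):
    """Branch closure in the restricted system: a literal together with
    its negation, and nothing else."""
    return (is_literal(f) and is_literal(g)
            and (g == ("not", f) or f == ("not", g)))

Ra = ("atom", "R", "a")
# Image of R(a) under the scheme R(a) |-> F(b) & G(a):
sub_Ra = ("and", ("atom", "F", "b"), ("atom", "G", "a"))

print(closes_literally(Ra, ("not", Ra)))          # True: original closure
print(closes_literally(sub_Ra, ("not", sub_Ra)))  # False: not literals
```

The substituted pair is still contradictory, of course; it just no longer counts as a contradiction *between literals*, which is all this system lets close a branch.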

This isn’t a huge problem: there are preservational proof systems. And there are intuitively sound proof systems, too. In fact, there seem to be proof systems that are both — standard axiomatized systems seem to have both properties. (They are preservational because whether something counts as a proof is entirely schematic — depends entirely on whether lines of the proof fit certain schemas — and since the substitution schemes preserve logical constants, they can’t make a sentence stop matching a schema when it did before. They certainly seem to be intuitively sound, too, although this is perhaps a more controversial matter.) And good thing, too, because the argument for III goes through only if there are proof systems with both properties. Otherwise, we might have intuitively sound proofs turning into intuitively unsound ones under substitution (violating the move from 4 to 5), or instead intuitively sound proofs turning into things that aren’t even proofs (falsifying 3).

3.b: The Argument for II: Hilbert-Bernays Theorem.

OK, enough of III. It looks secure enough. On to II. Quine’s argument for it relies on a remarkable theorem, a strengthening of the Löwenheim-Skolem theorem, first shown by Hilbert and Bernays:

(HB) If P has a model, it has a constructible model.

A model is constructible iff its domain is a subset of the natural numbers, and each of its predicate extensions is given by a formula of simple arithmetic. That is to say: if F is an n-placed predicate and M a constructible model, then there is some formula A of simple arithmetic, open in n variables, where \langle x_{1}, \ldots x_{n}\rangle is in F’s extension on M iff x_{1}, \ldots x_{n} satisfy A. (Note: F is a predicate of the object-language here, but A is an arithmetical formula of the metalanguage.)

What does it mean to say that a formula is a ‘formula of simple arithmetic’? We’ll deal with this quite a bit more in Section 4. For now, just note that it will be a formula that uses the primitive vocabulary used in, say, Peano Arithmetic.

The argument for II goes as follows. Call a substitution scheme number-theoretic iff every predicate and term gets replaced by some formula or term of simple arithmetic. We first use (HB) to argue for

(HB*) If Q is true on some model, then for some number-theoretic substitution scheme S, S(Q) is true.

The very basic idea is that, if F is given an extension E on the constructible model, and if A is the formula of simple arithmetic where \langle x_{1}, \ldots x_{n}\rangle \in E iff x_{1}, \ldots x_{n} satisfy A, then we can replace F with A. Notice what’s happening here: A appears both in the metalanguage and the object language, so we use the metalanguage characterization of the model (in terms of A) to cook up an object-language substitution scheme (using A). If we do this for all of the predicates and terms in Q, we end up with a sentence S(Q) that basically describes the constructible model we began with, and is true because it accurately describes that model. In other words: since Q is true on the model, and S(Q) says, more or less, that Q is true on the model, S(Q) is flat-out true.
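Here is a toy Python rendering of that move for the single sentence Q = \exists xF(x). A Python predicate stands in for the arithmetical formula A, and a bounded search over an initial segment of the naturals stands in for genuine quantification over all of them; none of this is the official construction:

```python
# The arithmetical formula A that defines F's extension on the
# constructible model (here: "x is even", purely for illustration).
A = lambda n: n % 2 == 0

def true_on_model(domain, ext_F):
    """'Exists x F(x)' evaluated on the model <domain, ext_F>."""
    return any(x in ext_F for x in domain)

def true_simpliciter(condition, bound=1000):
    """S(Q) = 'Exists x A(x)', evaluated over the numbers themselves
    (bounded search standing in for the unbounded quantifier)."""
    return any(condition(n) for n in range(bound))

domain = set(range(100))
ext_F = {n for n in domain if A(n)}   # F's extension is A-defined
print(true_on_model(domain, ext_F))   # True: Q holds on the model
print(true_simpliciter(A))            # True: S(Q) is flat-out true
```

The same condition A does double duty, first as the metalanguage description of F’s extension, then as the object-language replacement for F; that is the metalanguage-to-object-language move the text describes.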

Since (HB*) is a general principle, it holds even if Q is of the form ~P (for any sentence P). If we substitute in ~P, we get:

(HB~) If ~P is true on some model, then for some number-theoretic substitution scheme S, S(~P) is true.

Now take the contrapositive:

(HBC) If for every number-theoretic substitution scheme S, S(~P) is false, then ~P is false on every model.

But S(~P) = ~S(P), and ~S(P) is false if and only if S(P) is true. Likewise, ~P will be false on a model iff P is true on that model. So (HBC) becomes:

(II) If P is true on every number-theoretic substitution scheme, then P is true on every model.

This is Quine’s second premise. So we have it — if we can substantiate the move from HB to HB*.

Section 4: “Rich Enough for Elementary Number Theory”.

Can we substantiate the move? We can if ‘is a formula of simple arithmetic’ means something like: (i) the formula uses Peano Arithmetic’s vocabulary, (ii) the vocabulary of Peano Arithmetic means arithmetic (‘+’ means plus, and so on), and (iii) Peano Arithmetic is true. In that case, HB ensures that there is a model made out of numbers where every formula’s satisfaction-conditions can be expressed in PA’s vocabulary, and for any formula F we can, by uniform substitution, get a formula S(F) that explicitly states those numerical satisfaction-conditions.

4.a: Another Substitution Scheme?

Those are very strong assumptions, though. We might have hoped that we could get away with something considerably weaker. It’s tempting to think that we could have gotten away with mere provability from PA, in which case ‘rich enough for elementary number theory’ would only need to mean something like ‘has the syntactic resources of PA’. The temptation comes, in large part, because we want to think of HB as coding up a kind of logical connection: the premises of mathematics, so to speak, entail that if P has a model, S(P) is true.

Mere syntactic resources won’t be enough, but the tempting thought can be made substitutionally. We might reason like this. Suppose that PA is false (formulated in our ‘vocabulary of number theory’), but has a true substitution instance under a scheme T. Suppose also that the proof of HB — the claim that every satisfiable sentence has a constructible model — delivers something stronger: whenever P is satisfiable, PA \vdash S(P). In this case, since our proof theory is preservational, T(PA) \vdash T(S(P)). But since T(PA) is true and our proof theory is intuitively sound, T(S(P)) is true. And T(S(P)) is a substitution instance of P; so the fact that P has a model is enough to show that it has a true substitution instance.

There are two problems with this. The first is that any interesting mathematical consequences of PA will have to involve claims of identity. If Hilbert and Bernays had shown that, when P is satisfiable, PA \vdash S(P), they surely would have relied on a proof theory that included identity. Since Quine doesn’t reckon identity among the logical constants, it’s not clear whether he can appeal to this claim. Perhaps he could finesse it by adding the axioms of identity theory to PA, but I’d want to see the reasoning spelled out in detail.

This first problem may not be a big deal. The second is: there’s no reason at all to think that PA \vdash S(P), and at least some inductive reason to think that sometimes it doesn’t.

First, why is there no reason to think it does? Well, suppose that S(P) is in fact the Gödel sentence for PA — the sentence G which says, in effect, that G — it itself — is not provable in PA. Since PA is (various kinds of) consistent, we know that PA \not\vdash G. But G is going to be true on some constructible model. (It’s true on the standard model of PA, for instance, which is a constructible model.) And there’s just no guarantee that, when we applied S to P, G wasn’t what we ended up with.

Of course, this itself doesn’t tell us that there are any P for which S(P) is G; it just raises a spectre of doubt. Here’s another spectre of doubt. The result that Quine appeals to when discussing the Hilbert-Bernays theorem appears to be a corollary of Bernays’s Lemma. Where Prf is the formula that captures (via Gödel coding — if you’re not familiar, I’ll explain more in the next post) the provability predicate for the “empty theory” (that is, it captures theorem-of-first-order-logic-(without identity(??))-hood) this lemma says that, for any sentence P, there is a substitution scheme S_{P} such that

(BL) PA \vdash S_{P}(P) \rightarrow Prf(\ulcorner P \urcorner)

(where \ulcorner P \urcorner represents P’s Gödel number). Since P will be a theorem if and only if Prf(\ulcorner P \urcorner) is true, this means that the truth of S_{P}(P) suffices for the theoremhood of P.

We can deduce HB* from this as follows:

  1. PA \vdash S_{\neg P}(\neg P) \rightarrow Prf(\ulcorner \neg P \urcorner) (Bernays lemma, with \neg P substituted for P.)
  2. PA \vdash \neg Prf(\ulcorner \neg P \urcorner) \rightarrow \neg S_{\neg P}(\neg P) (Contraposing)
  3. PA \vdash \neg Prf(\ulcorner \neg P \urcorner) \rightarrow S_{\neg P}(P) (Substitution schemes commute with negation, so \neg S_{\neg P}(\neg P) is \neg\neg S_{\neg P}(P), and the double negation cancels.)

If P is consistent, there is no proof of not-P, and so (assuming PA is true and our proof theory sound) \neg Prf(\ulcorner \neg P \urcorner) will be true, in which case so will S_{\neg P}(P). This means that S_{\neg P} is the substitution scheme we need.

OK, so here’s that next spectre of doubt. The most natural way to show that S_{\neg P}(P) will be provable (as opposed to just true) in PA for every P would be if \neg Prf(\ulcorner \neg P\urcorner) was provable in PA for every consistent P. This latter sentence says, in effect, that P is first-order consistent. What we might hope to do is to show

(*) PA \vdash \neg Prf(\ulcorner \neg P \urcorner)

for all consistent P. Then we could use (*) and line 3 above to conclude (with appropriate subscripts) that PA \vdash S(P).

But, alas, there’s no hope of this. Notice that we already have a procedure for telling whether any sentence Q is a theorem of first-order logic that’s guaranteed to terminate and give us a “yes” answer if Q in fact is a theorem. Moreover, we know that there is no “decision procedure” for determining Q’s theoremhood: there’s no procedure that, for any Q we throw at it, is guaranteed to eventually terminate and tell us “yes” if Q is a theorem and “no” if it’s not.

But if (*) were true for every consistent P, then we would have such a procedure. Just substitute \neg Q for P. If Q is not a theorem, then there’s no proof of Q, and (since there is a proof of Q iff there is a proof of \neg\neg Q) \neg Prf(\ulcorner \neg \neg Q \urcorner) is true. But if the truth of this latter claim guaranteed its derivability from PA (as (*) would have it do), then — since we have a procedure that will always deliver a “yes” verdict when there is a proof — we would have a procedure guaranteed to give us a “yes” verdict whenever we fed it an invalid Q.

But we could then stitch these procedures together to cook up a decision procedure for first-order theoremhood. First, spend five minutes on the procedure looking for a proof of Q. Then, if you haven’t gotten an answer yet, spend five minutes looking for a proof of \neg Prf(\ulcorner \neg \neg Q \urcorner). If you don’t find one in that time, go back to the first, and keep swapping every five minutes. Since no matter whether Q is a theorem or not we are guaranteed that at least one procedure will terminate, this procedure will eventually terminate. If it terminates with a “yes” answer on the “proof of Q” front, then you have the answer “theorem”; if it terminates with a “yes” on the “proof of \neg Prf(\ulcorner \neg \neg Q \urcorner)” front, you have the answer “not a theorem”. But since any such procedure is impossible, (*) must be false after all.
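The stitched-together procedure can be sketched with Python generators standing in for the two open-ended proof searches. The ‘succeeds after k steps’ and ‘never succeeds’ searches below are toy stand-ins for the real proof searches, and the five-minute time slices become single steps:

```python
def dovetail(search_theorem, search_nontheorem):
    """Alternate between two semi-decision procedures (given as generator
    functions yielding None while searching, a verdict when done) until
    one of them answers."""
    a, b = search_theorem(), search_nontheorem()
    while True:
        for proc in (a, b):
            result = next(proc)
            if result is not None:
                return result

def succeeds_after(k, verdict):
    """Toy stand-in: a search that succeeds after k steps."""
    def gen():
        for _ in range(k):
            yield None
        while True:
            yield verdict
    return gen

def never():
    """Toy stand-in: a search that runs forever without succeeding."""
    while True:
        yield None

# Case 1: Q is a theorem, so the proof search for Q eventually succeeds.
print(dovetail(succeeds_after(7, "theorem"), never))        # theorem
# Case 2: Q is not a theorem, so (as (*) would guarantee) the search for
# a PA-proof of ~Prf(neg neg Q) succeeds instead.
print(dovetail(never, succeeds_after(3, "not a theorem")))  # not a theorem
```

Since one of the two searches is guaranteed to succeed, the interleaving always terminates with a verdict, which is exactly what Church’s theorem rules out.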

Now, none of this shows that PA can’t prove S(P) for every consistent P. It just shows that it can’t (always) do it via (*). But it is difficult to see what other considerations would lead us to think that PA could prove S(P) for every P.

I’d love to know whether PA’s proving power along these lines has been sussed out yet. I have a sneaking suspicion that this is an open problem (in the sense of nobody knowing the answer, rather than in the sense of people actively caring about and working on the answer). But the overall inductive weight of the evidence is pretty hefty against PA’s ability to prove S(P) for every P. I sure wouldn’t place much of a wager on it. And so, at the very least, it leaves the T-using argument above underdeveloped.

4.b: What Can We Say?

Let’s suppose that there are some consistent P for which PA can’t prove the relevant S(P). Then what happens to Quine’s result when PA is true but doesn’t mean arithmetic? Or when PA is false, regardless of whether it means numbers or not?

Suppose first that PA is true, and also means arithmetic. Then we’ll call its standard model \mathbb{N}, and following standard usage, let Tr(\mathbb{N}) be the set of all of the sentences of L that are true on \mathbb{N}. (Gödel’s result shows us, in effect, that no recursive axiomatization can prove every sentence in Tr(\mathbb{N}).) Then Quine’s argument at least shows us that, if P is consistent, S(P) is in Tr(\mathbb{N}).

Now suppose that PA is true but doesn’t mean arithmetic. Then we need to know whether the set of all the truths of our arithmetical language L is again Tr(\mathbb{N}). If PA is true because reality provides the (right sort of) non-standard model, then not all of Tr(\mathbb{N}) will be true. That’s just the result of PA not proving all of the sentences of Tr(\mathbb{N}). Suppose Q is a sentence of Tr(\mathbb{N}) and PA \not\vdash Q; then by the completeness theorem for first-order logic, there will be a model of PA that’s not also a model of Q. And if PA \not\vdash S(P), then (if reality is making PA be true by being the wrong kind of model) it might be that S(P) is false.

So, if PA doesn’t mean arithmetic, we can say that Quine’s result holds so long as the set of truths of its language is Tr(\mathbb{N}). That might not seem very informative: it comes awfully close to saying something like “Quine’s result holds so long as, for every consistent P, its associated S(P) is true”. But we can do at least slightly better if we allow ourselves certain extensions of first-order logic in fixing the language. For instance, Hartry Field notes that if we have a “finitude” quantifier \mathscr{F}, where \mathscr{F}x(Gx) means “finitely many things are G”, then PA plus the axiom

(FIN) \forall x\mathscr{F}y(y \leq x)

are all true only on “standard” models, i.e., models isomorphic to \mathbb{N}, and therefore PA plus FIN guarantee the truth of Tr(\mathbb{N}).** Likewise, if the logic of PA is enriched with the omega rule, then we will again have PA \vdash Q for every Q \in Tr(\mathbb{N}) (for our newly omega-enriched \vdash relation), and so the arguments of the Quine post can be re-given.

But those sorts of enrichments seem especially problematic for Quine. For Quine is trying to argue that substitutional and model-theoretic notions collapse, which in turn relies on the notions applying to the first-order case. If we enrich the logic by introducing new rules or operators, we would have to go back and re-start the argument from the beginning. And it’s entirely unclear how we would do this, since the Hilbert-Bernays theorem it relies on was proven for first-order consistencies, rather than for sentences which were “consistent” in the extended sense that would come with an extra inference rule or quantifier. It is not at all clear that if we have an enriched proof relation, and an associated enriched notion of consistency, anything like the Hilbert-Bernays theorem could be proven. If we enrich our proof system in certain ways, the resulting notion of “proof” won’t be effectively decidable, and so there will be no arithmetically definable predicate Prf that codes for it, and so there will be no Hilbert-Bernays theorem to appeal to, and so there will be no Quinean collapse argument.

(Perhaps the best Quinean strategy in this area is to go ahead and allow the finitude quantifier, deny it any status as “logic” (including any inferential rules coded into the proof theory), but then insist that we’re only considering interpretations where it means “finitely many”, and so only have to worry about models where it is true of some formula iff finitely many things satisfy the formula. The biggest worry with this approach, I think, would be that the notion of “consistency” would no longer look even reasonably plausible: sentences such as

\mathscr{F}xGx \wedge \neg\exists x Gx

would come out as consistent, when they seem flagrantly not so. If we can make substitutional and model-theoretic accounts of logic collapse only by introducing new resources and then asserting what look like gross implausibilities about what is and is not consistent using those resources, we might legitimately wonder whether the game’s still worth the candle.)

So perhaps we can do no better than saying something like, “Quine’s theorem will apply if PA is true and means arithmetic, or if Tr(\mathbb{N}) is true (regardless of whether or not it means arithmetic).”

But what if PA is flat-out false?

Well, suppose PA is flat-out false. (Then Tr(\mathbb{N}) will be flat-out false, too, because it has PA as a part.) But suppose also that there is a substitution scheme U such that U(PA) is true. Then we can ask: is U(Tr(\mathbb{N})) true, too? If the answer is yes, then U(S(P)) is also true. (As S(P) \in Tr(\mathbb{N}), U(S(P)) \in U(Tr(\mathbb{N})).) And the question as to whether U(Tr(\mathbb{N})) is true is (more or less) the question as to whether the chunk of reality that makes U(PA) true is like a standard model of PA, or like a non-standard one. Or, more precisely, it’s asking something like this: if you took each predicate P that appears in PA, looked at the formula F that U swapped P out for, and then built a model M by putting all and only the F-satisfiers in the extension of P, would the result be a standard model of PA? If yes, then U(Tr(\mathbb{N})) will be true, and so Quine’s result will follow. Notice that if the answer is “no”, though, that doesn’t quite tell us that Quine’s result does not follow. For there are non-standard models of PA that are also models of Tr(\mathbb{N}); perhaps the model built is one of these. If so, then U(Tr(\mathbb{N})) will again be true and Quine’s result will again follow. But if it’s the wrong sort of non-standard model of PA, S(P) may be false and we can’t rely on collapse.

So that, I submit, is what we can say with confidence: for Quine’s theorem to work, “rich enough for simple arithmetic” needs to mean that there is a substitution scheme U (which might be the identity scheme) for which U(Tr(\mathbb{N})) is true. A sufficient (but not necessary) condition for this is for the model of PA reverse-engineered from the (true) U(PA) to be a standard model of PA.***

* Peter Smith argues that there is no genuinely “intuitive” notion of validity (and hence, I imagine, no “intuitive” notion of soundness); to get Kreisel’s argument off the ground, you need rather a semi-technical, quasi-intuitive notion first, and then the argument can show that some fully technical apparatus is sufficient to capture the semi-technical notion you’ve arrived at. I think all of that can be accepted while still accepting that any quasi-intuitive notion of soundness you might end up with has this property: no proof that is quasi-intuitively sound has all true premises and a false conclusion. And that’s the strength of premise Quine needs at this stage of the argument.

** If I’m understanding aright, this constraint is strictly stronger than merely making all of Tr(\mathbb{N}) true, as there will be non-standard models of Tr(\mathbb{N}) that aren’t also models of PA+FIN.

*** An equivalent (and less messy-sounding) way to state the condition is that the domain of the so-constructed model (or the extension of “is a natural number” on the model, if the quantification isn’t restricted) is an \omega-sequence under “<”. But that will hold iff the domain of the quantifiers of L is an \omega-sequence under U(<) (or, if the domain isn’t restricted, that the extension of “U(natural number)” is an \omega-sequence under U(<)). So a different, not so model-theoretic-sounding way of phrasing the sufficient condition is simply “has as a domain (or as the extension of ‘U(natural number)’) an \omega-sequence under U(<)”.

1 year lecturership in moral and political philosophy at Leeds

Deadline end of May.  See advert here.

Thought Special Issue: Time and Modality

Thought is having a special issue on the metaphysics of time and modality.  Deadline is the end of May.  Call for papers follows:

Metaphysicians of modality argue over whether ontology extends beyond the actual just as metaphysicians of time argue over whether ontology extends beyond the present; and we might also ask whether it is a stable position to hold that reality includes the non-present but not the non-actual. There are modal analogues of McTaggart’s infamous argument for the unreality of time, and we can ask whether the modal and temporal arguments stand or fall together. We might wonder whether trans-world identity should be treated differently from identity across time, and whether if existence is contingent it must also be temporary, etc.

For this special issue of Thought we invite papers that make a contribution to either the metaphysics of time or of modality, or that illuminate the connections between them. Papers should correspond to the standard Thought guidelines and be no longer than 4500 words, including footnotes. Papers are to be submitted before 31st May 2013. When submitting please ensure you select article type as “The Metaphysics of Time and Modality Special Issue” to ensure your paper is reviewed via the special issue route.

NIP early career conference (papers to be published in Thought)

University of Aberdeen, Scotland.
27-29 June 2013

Professor Jennifer Saul (Sheffield)
Professor Brian Weatherson (Michigan)

Our ultimate aim is to showcase outstanding research by early career researchers. While there are a number of opportunities for graduate students and more senior philosophers to present and discuss their research, there are relatively few for researchers who have recently finished their PhDs and are building a career in philosophy. This series has been established as a step towards addressing this imbalance.

The papers selected for the conference will be invited for inclusion in a special Early Career issue of the journal Thought.

The editors of the journal may request revisions which must be met to their satisfaction before publication of the paper will be guaranteed.

We invite submissions from researchers who received their PhD within the last 5 years. Papers must fall under the research remit of the Northern Institute of Philosophy: metaphysics, epistemology and the philosophies of logic, language, mathematics and mind.

We invite papers suitable for a 40-minute presentation and of no longer than 4000 words. Papers should be accompanied by an abstract of no more than 150 words and must be suitable for blind refereeing. Please indicate at the top of the first page of your paper which of the following categories it falls under: Metaphysics, Epistemology, Philosophy of Logic, Philosophy of Language, Philosophy of Mathematics, Philosophy of Mind. Please include a separate cover sheet including name, title, institution, month and year of PhD award and contact details. The deadline for receipt of submissions is 12th April 2013. We aim to notify authors of the decision regarding their papers by 21st May 2013. Submissions must be in .doc or .pdf format and are to be submitted by e-mail to

Meals (lunches and one social dinner) and accommodation will be provided for speakers.

Please direct any questions to the conference organisers on

Substitutions and Models, Part 1: Bolzano, Quine, Tarski, and Boolos

Let ‘\Rightarrow‘ stand for a (not-yet analyzed) relation of logical consequence. According to the substitutional analysis of consequence, \Delta \Rightarrow P iff every substitution scheme \Delta is true under is also one P is true under. According to the model-theoretic analysis, \Delta \Rightarrow P iff every model on which \Delta is true is also a model on which P is true. The substitutional account is generally associated with Bolzano, and the model-theoretic with Tarski. Tarski’s objections to Bolzano’s account are generally thought to tell against the substitutional account and in favor of the model-theoretic. And what goes for logical consequence goes for logical truth — especially as P‘s being a logical truth is generally thought equivalent to its being a consequence of the empty set.

Enough of background. Now for the plot: Quine, in Philosophy of Logic, has an argument that — provided the background language is rich enough for elementary arithmetic (say, Peano Arithmetic) — the two accounts of logical truth are equivalent. This is prima facie surprising, and for two reasons: First, Tarski’s objection against Bolzano’s account shows that the two aren’t equivalent in general, and so it’s surprising that they would collapse for languages of this expressive power. Second, certain objections raised by e.g. John Etchemendy (The Concept of Logical Consequence) against Tarski’s official account (or, at least, Etchemendy’s reading of it) seem easily parlayed into an argument that the two are not equivalent for any first-order language.

To thicken the plot, in “On Second-Order Logic,” George Boolos argues that, although Quine is right about the two accounts of logical truth, the result does not extend to logical consequence, even in first-order languages. This is surprising, too: given compactness results, we ought to be able to move from logical truth to logical consequence.

I want to get to the bottom of these issues in a series of four posts. Here’s my plan of attack. In this post, I’ll sketch the two theories and lay out these three prima facie puzzles in more detail. The next post focuses on Quine’s side of things, first solving some of the puzzles and then examining Quine’s argument — and the scope of the premises it requires — in more detail. The third post will be dedicated to understanding Boolos’s argument and seeing how it interacts with the central puzzle to be outlined here. A final post will look at philosophical issues kicked up in the first three.

Here’s this post’s plan of attack. In section one, I’ll lay out the substitutional account in detail and give Tarski’s argument against it. Section two explains the model-theoretic account. Section three gives an argument (derived from Etchemendy’s) that the substitutional account of logical truth predictably delivers different results for first-order languages than the model-theoretic. And in section four, I’ll outline the argument that, if the accounts of logical truth are equivalent, then so are the accounts of logical consequence.

Section 1: The Substitutional Account

Suppose L is an interpreted language. Let a substitution scheme be a function from syntactically simple terms of L to any expressions of L at all that could be substituted without turning sentences into non-sentences. (If there are bound variables, we’ll also need some restrictions to keep them from interfering with each other, but we can ignore that here.) So, for instance, a syntactically simple predicate like `is tired’ (I’m pretending copulas are syntactically embedded — just work with me here) could be taken to another simple one like `is hungry’, or to a complex one like `is about to eat every taco in the tri-state area’. And a connective such as `and’ could be taken to another simple connective, such as `or’, or to a complex connective, such as ‘unless I hear by tomorrow that _______, John is going to think that _________’.

Some substitution schemes will take some simple expressions to themselves. In this case, say that it preserves those expressions.

One final bit of terminology. If P is a sentence and S a substitution scheme, let S(P) be the result of replacing all of P’s simple expressions with their values given by S. And if \Delta is a set of sentences, let S(\Delta) be the set of S(P) for P \in \Delta.

OK, now we’re ready for our definitions of logical consequence (and, by extension, logical truth). If E is a set of expressions, then say that P is an E substitutional consequence of \Delta, written \Delta |\!\!\models_{E} P, iff for every E-preserving substitution scheme S, if all of S(\Delta) are true, then S(P) is true, too.

Now, this doesn’t quite give us a definition of logical consequence; it gives us instead a definition of something like `logical consequence relative to a set of preserved expressions’. The idea, though, is that if we can pick the right expressions, we’ll have our definition of logical consequence: logical consequence is substitutional consequence under substitutions that preserve those. Those are the expressions we call the `logical constants’. Of course, for an arbitrary language, it’s no easy task deciding which expressions are the logical constants. But for, say, first-order languages, the idea is that the logical constants are those that are treated as logically special in our textbooks and so on (you know, quantifiers, truth-functional connectives, etc.)

That, anyway, is the idea. But in “On the Concept of Logical Consequence,” Tarski argued against the view more-or-less as follows. Suppose that L is really impoverished. Suppose, for instance, that along with logical vocabulary, it only has one predicate and two names (each of which means what it does in English):

Names: `Lincoln’, `Washington’
Predicates: `was president’

Now consider the argument:

  1. Lincoln was president.
  2. Therefore, Washington was president.

Intuitively, this shouldn’t count as a valid argument. But — given the extreme impoverishment of the language — it does. Any substitution scheme that preserves the logical terms won’t have any way of substituting in something for (1) that makes it true while substituting in something for (2) that makes it false. The problem, essentially, is that the only predicate the language has doesn’t divide between Lincoln and Washington, and the language doesn’t let us talk about anything else.

Consider, for instance, the scheme that sends `Lincoln’ to `Washington’, `Washington’ to `Lincoln’, and `was president’ to `was not president’. This makes the conclusion false — but also makes the premise false! And no other substitution scheme does the job either.

The problem is apparent: it’s simply the impoverishment of the language that’s to blame. If we had the predicate `had a beard’ in the language, the argument would count as invalid: the scheme that sends `was president’ to `had a beard’ (and keeps the names the same) would make the premise true and the conclusion false. But since our language doesn’t have such a predicate, the problem remains.
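Tarski’s point can be checked mechanically in a toy setting. The following Python sketch is illustrative only: the “facts” (both men were presidents, only Lincoln had a beard) and the predicate `had a beard’ are stipulations, and only simple substitutions (plus the negated predicate) are enumerated. It confirms that in the impoverished language no scheme makes the premise true and the conclusion false, while an enriched language restores invalidity:

```python
# Toy check of Tarski's worry, under stipulated facts: Lincoln and
# Washington were both presidents, and (we'll suppose) only Lincoln
# had a beard.
from itertools import product

EXTENSIONS = {
    "was president":     {"Lincoln", "Washington"},
    "was not president": set(),        # the negated predicate applies to neither
    "had a beard":       {"Lincoln"},  # only available once the language is enriched
}

def true(name, pred):
    return name in EXTENSIONS[pred]

def substitutionally_valid(premise_name, conclusion_name, names, preds):
    """Valid iff no scheme makes the premise true and the conclusion false.
    A scheme maps each name to a name and the one predicate to a predicate."""
    for n1, n2, p in product(names, names, preds):
        scheme = {"Lincoln": n1, "Washington": n2}
        if true(scheme[premise_name], p) and not true(scheme[conclusion_name], p):
            return False
    return True

names = ["Lincoln", "Washington"]

# Impoverished language: only `was president' and its negation are available,
# so the argument wrongly comes out valid.
print(substitutionally_valid("Lincoln", "Washington", names,
                             ["was president", "was not president"]))  # True

# Enriched language: `had a beard' divides the two names; validity goes away.
print(substitutionally_valid("Lincoln", "Washington", names,
                             ["was president", "was not president", "had a beard"]))  # False
```

The brute-force loop is exactly the substitutional definition in miniature: quantify over all admissible schemes and look for a truth-to-falsehood counterexample.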

Notice the nature of the problem. The complaint isn’t that the substitutional account delivers the wrong verdict about any arguments we ever actually encounter. We speak a very rich language (English, at least for readers of this blog, and perhaps others as well). The problem is that the account delivers the wrong verdict about arguments in some possible language. So if we’re going to let this worry move us to a different account of logical consequence, notice that we must be implicitly endorsing a principle like,

UNIVERSAL: If an account of logical consequence is correct, it must deliver the right results for any possible language.

For those who accept UNIVERSAL, the natural “fix” that preserves the basic Bolzanian idea is to move to the model-theoretic account of logical consequence.

Section 2: The Model-Theoretic Account

The basic idea behind the model-theoretic account is that there are a range of things, models, with a certain structure, and a relation of `truth in’ defined between these models and sentences of L. If the language is first-order, we can be a bit more precise: a model is an ordered pair \langle D, I\rangle of a domain D and an interpretation function I. D is just any old set of things. I is a function from names of L to objects in D and predicates of L to subsets of D (or subsets of D^{n}, for n-placed predicates) that provides the `extensions’ of those predicates on the model — provides the things in the domain that, according to the model, are the way the predicate says they are. We also need a definition for `truth in’ — thanks to the necessity of dealing with quantifiers and variables, this is a fairly complex recursive one in terms of satisfaction, but I’m not going to make hay about the details here, since it’s pretty easy to see which sentences should be true on a model given that we know what terms’ denotations and predicates’ extensions are.

For instance, a model for our simple language from above would give us a set of things D, and then tell us which of those things got assigned `Lincoln’, which got assigned `Washington’, and which were in the extension of `was president’. Then a sentence like, say, `Lincoln was president’ would be true iff the thing assigned to `Lincoln’ on the model was in the extension assigned to `was president’ on that model, and a sentence like `Something was not president’ would be true iff there were things in the domain that weren’t in the extension of `was president’.

A model is called a model of a sentence iff the sentence is true on it, and a model of a set of sentences iff all the sentences in the set are true on it. If every model of \Delta is also a model of P, we write \Delta \models P, and say that P is a model-theoretic consequence of \Delta.

The model-theoretic account of consequence insists that logical consequence is model-theoretic consequence. It’s fairly easy to see that the model-theoretic account won’t suffer from the problem that plagued the substitutional. Consider again the argument:

  1. Lincoln was president.
  2. Therefore, Washington was president.

For it to be valid, every model of 1 must be a model of 2. But there are models of 1 that aren’t models of 2. Just take a model with a domain of two things, say a and b, which assigns a to `Lincoln’, b to `Washington’, and the set \{a\} to the predicate `was president’.
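For concreteness, here is a minimal Python sketch of that countermodel. The representation of a model as a domain/interpretation pair follows the definition above; the labels 'a' and 'b' are just arbitrary objects:

```python
# A model as a (domain, interpretation) pair; truth for an atomic sentence
# is just membership of the name's denotation in the predicate's extension.

def atomic_true(model, name, pred):
    domain, interp = model
    return interp[name] in interp[pred]

# The countermodel from the text: two objects; `Lincoln' denotes a,
# `Washington' denotes b, and only a is in the extension of `was president'.
countermodel = ({"a", "b"},
                {"Lincoln": "a", "Washington": "b", "was president": {"a"}})

print(atomic_true(countermodel, "Lincoln", "was president"))     # True: premise holds
print(atomic_true(countermodel, "Washington", "was president"))  # False: conclusion fails
```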

Section 3: Inequivalence

The foregoing is enough to show that the two accounts are inequivalent; that is, it’s enough to show that the following does not hold for arbitrary languages L:

(EQ) \Delta |\!\!\models P \textrm{ iff } \Delta \models P

We might think that’s not a huge deal: perhaps for suitably expressive languages EQ does hold. (This, in effect, is what Quine will argue.)

But here’s a simple argument, inspired by one John Etchemendy gives against (his reading of) Tarski in The Concept of Logical Consequence. The (adapted) argument shows that for any first-order language, EQ fails.* It goes a little something like this.

Let L be a first-order language. Presumably, its quantifier ranges over more than one object. Now consider the `counting sentence’

(E2) \exists x\exists y(x \ne y)

This sentence is true. Is it a logical truth? Or — more to the point — do we have |\!\!\models (E2)? Do we have \models (E2)?

Well, note first off that \not\models (E2). There are always models with just one thing in the domain; on the model-theoretic conception, then, this does not count as a logical truth.

But the substitutional account disagrees. All of the terms in (E2) are logical terms, so any of the substitutions we care about for evaluating consequence will be ones that keep (E2) the same. (That is: they’re substitutions S where S(E2) = (E2)!) Since (E2) is in fact true, and the substitutions don’t turn it into any different sentence, it remains true under all substitutions — and so |\!\!\models (E2).
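The model-theoretic half of the disagreement is easy to see computationally. A quick Python sketch (domains here are just arbitrary finite sets, chosen for illustration) evaluates (E2) on domains of different sizes; the one-element domain is the countermodel securing \not\models (E2):

```python
from itertools import permutations

def e2_true(domain):
    """E2, i.e. ∃x∃y(x ≠ y): true on a model iff its domain contains
    two distinct elements (only the domain matters for this sentence)."""
    return any(x != y for x, y in permutations(domain, 2))

print(e2_true({1}))        # False: a one-element model refutes E2
print(e2_true({1, 2, 3}))  # True: E2 holds on any multi-object domain
```

No substitutional analogue of varying the domain is available here, which is just the point: every logical-constant-preserving scheme leaves (E2) alone, and (E2) is true.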

Maybe we can fiddle with this. Here’s one way: allow that `preservation’ of quantifiers can include trading in simple quantifiers for complex `restricted’ ones — allow `\exists x’ to be taken, say, to `\exists x(Fx \wedge \underline{\ \ \ \ \ })’. (If we want to preserve duality — that is, if we want \exists x P to be equivalent to \neg\forall x \neg P — we’ll need to insist that substitution schemes treat existential and universal quantifiers in a way that `meshes’.) Once we revise our account, we no longer have |\!\!\models (E2). If `F‘ is a predicate that in fact applies to only one thing, the following counts as a false substitution instance:

(E2*) \exists x(Fx \wedge (\exists y(Fy \wedge (x \ne y))))

But this modification brings model-theoretic and substitutional consequence into agreement on (E2) only by making them disagree about the argument form:

  1. Fa
  2. Therefore, \exists x Fx

This is straightforwardly valid on the model-theoretic account. But on the (new, modified) substitutional account, the following is a fair substitution instance:

  1. Fa
  2. Therefore, \exists x(\neg Fx \wedge Fx)

(We get it by the substitution scheme that takes \exists x to \exists x(\neg Fx \wedge \underline{\ \ \ \ \ }).) And this disagreement about valid arguments will turn into a disagreement about logical truth, since the model-theoretic account will, and the substitutional account will not, reckon the conditional with 1 as antecedent and 2 as consequent a logical truth.
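To see the failure concretely, here is a small Python sketch (the three-element domain and the extension of F are illustrative stipulations) evaluating the restricted-quantifier substitution instance:

```python
# Evaluate ∃x(R(x) ∧ B(x)): an existential with restriction R and body B.
def exists_restricted(domain, restriction, body):
    return any(restriction(x) and body(x) for x in domain)

domain = {"a", "b", "c"}
F = {"a"}  # stipulation: F in fact applies to a alone

# The premise Fa is true:
print("a" in F)  # True

# The scheme takes ∃x to ∃x(¬Fx ∧ __), so the conclusion ∃xFx becomes
# ∃x(¬Fx ∧ Fx), which is false on any domain whatsoever:
print(exists_restricted(domain, lambda x: x not in F, lambda x: x in F))  # False
```

Since ∃x(¬Fx ∧ Fx) is a contradiction, no fiddling with the domain or with F can rescue this substitution instance; the modified substitutional account really does reckon the argument invalid.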

It is possible to further modify the substitutional account to make it march in step with the model-theoretic one, at least given that L’s quantifiers in fact range over infinitely many objects. The modifications basically impose large-scale constraints on the substitution instances — constraints to the effect that, for instance, if you substitute \exists x(Fx \wedge \underline{\ \ \ \ \ }) for \exists x, you also have to substitute names of things that satisfy F for names, and predicate expressions that are satisfied only by F-things for predicates. However motivated these restrictions might be (and they don’t seem very), they don’t help the prima facie puzzle. For the prima facie puzzle is that the two accounts disagree, but Quine is going to argue that they do agree, contra what we’ve just seen. And Quine’s argument isn’t predicated on any fancy-schmancy jazzing up of the old substitutional account; it’s predicated on the boring substitutional account of the sort described back in Section 1. So what gives?

Section 4: Consequence and Truth

The puzzle in Section 3 admits a simple solution, but I’ll wait until the next post to give it. Here, I want to present a deeper puzzle. At the end of the next post, we ought (if Quine does his work right) to be convinced that, for expressive enough languages, the following restricted version of EQ holds:

(EQ-R) |\!\!\models P \textrm{ iff } \models P

In the third post, Boolos is going to give us a general argument that, even for these more expressive languages (especially for them, in fact), EQ fails. All the languages in question are first-order. And there is a fairly straightforward argument that EQ holds for a given first-order language iff EQ-R holds for that language.

The argument stems from three properties of first-order logic. The first is compactness. I’ll give a generic version of it here:

(COMPACTNESS) \Delta \Rightarrow P iff there is a finite subset \Gamma \subseteq \Delta where \Gamma \Rightarrow P.

(In other words, no valid first-order arguments require infinitely many premises in order to be valid.) The second theorem is a kind of generalized deduction theorem, which I’ll write as follows:

(DEDUCTION) Q_{1}, Q_{2}, \ldots, Q_{n} \Rightarrow P iff \Rightarrow (Q_{1} \wedge Q_{2} \wedge \ldots \wedge Q_{n}) \supset P

Finally, we have

(MONOTONICITY) If \Delta \Rightarrow P and \Delta \subseteq \Gamma, then \Gamma \Rightarrow P.

That is: adding premises to an argument can’t take validity away. For all of these results, we can consider substitutional and model-theoretic versions by replacing ‘\Rightarrow‘ with ‘|\!\!\models‘ or ‘\models‘, respectively.

DEDUCTION and MONOTONICITY are relatively trivial, and can be seen to hold by inspecting the definitions of each kind of consequence, plus (for DEDUCTION) the truth-table for the material conditional, \supset. COMPACTNESS is not at all trivial, although in the model-theoretic setting it’s very well established. The substitutional variant is probably going to show itself as the cause of all the bother when all is said and done, but let’s not get ahead of ourselves. Let’s just go through the argument.

I’m actually going to just go through one half of the argument: I’ll argue that, if \Delta |\!\!\models P, then \Delta \models P. But it is trivial to reverse the argument, because all of the premises (other than EQ-R, which is a biconditional) apply equally well to both kinds of consequence. One bit of notation will help: if \Gamma is any finite set, then I take the privilege of rewriting it as \{Q_{1}, Q_{2}, \ldots, Q_{n}\}. OK, here goes.

  1. Suppose \Delta |\!\!\models P
  2. By COMPACTNESS, for some finite \Gamma \subseteq \Delta, \Gamma |\!\!\models P
  3. Rewriting, \{Q_{1}, Q_{2}, \ldots, Q_{n}\} |\!\!\models P
  4. By DEDUCTION, |\!\!\models (Q_{1} \wedge Q_{2} \wedge \ldots \wedge Q_{n}) \supset P
  5. By EQ-R, \models (Q_{1} \wedge Q_{2} \wedge \ldots \wedge Q_{n}) \supset P
  6. By DEDUCTION, \{Q_{1}, Q_{2}, \ldots, Q_{n}\} \models P
  7. Since \{Q_{1}, \ldots, Q_{n}\} \subseteq \Delta, by MONOTONICITY, \Delta \models P

The argument works just as well in the other direction, and EQ follows. But if both Quine and Boolos are right, EQ doesn’t follow. Something’s gone wrong. COMPACTNESS for substitutional consequence looks like the best bet. But how? What sentences follow substitutionally from an infinite set of sentences but no finite subset?

This is the central puzzle. We’ll have to wait a few more posts before hitting the bottom of this mess.

* Well, maybe just almost any — there might be some wiggle room if some first-order languages have quantifiers semantically restricted to range over just one object.

Oxford Studies in Metaphysics Younger Scholar Prize

The Younger Scholars Prize program, funded by The Ammonius Foundation and administered by the Editorial Board of Oxford Studies in Metaphysics, is an annual essay competition open to scholars who are within 15 years of receiving a Ph.D. or students who are currently enrolled in a graduate program. Independent scholars may also be eligible, and should direct inquiries to the Editor of OSM (see below).  The award is $8,000, and winning essays will be published in Oxford Studies in Metaphysics.

Submitted essays must report original research in metaphysics.  Essays should generally be between 7,500 and 15,000 words; longer essays may be considered, but authors must seek prior approval by providing the Editor with an abstract and a word count prior to submission.  Since winning essays will appear in Oxford Studies in Metaphysics, submissions must not be under review elsewhere. To be eligible for next year’s prize, submissions must be received, electronically, by 31 January 2013.  Refereeing will be blind; authors should omit remarks and references that might disclose their identities. Receipt of submissions will be acknowledged by e-mail. The winner will be determined by a committee of members of the Editorial Board of Oxford Studies in Metaphysics and will be announced in late February or early March 2013. (The Editorial Board reserves the right to extend the deadline further, if no essay is chosen.) At the author’s request, the Board will simultaneously consider entries in the prize competition as submissions for publication in Oxford Studies in Metaphysics, independently of the prize.

Inquiries should be directed to the Editor, Dean Zimmerman, at, or by post through regular mail at:

Professor Dean Zimmerman
OSM Younger Scholars Prize
Philosophy Department
Rutgers University
Davison Hall, Douglass Campus
26 Nichol Avenue
New Brunswick, NJ  08901-2882

Impossible Lewisian Modal Realism

Many people think that talking about possible worlds is useful in philosophy.  A good number of those people think that talking about impossible worlds is also useful.  In most cases, talking about impossible worlds as well as possible worlds is innocuous.  On most of our views about what worlds are, impossible worlds are no more ontologically problematic than possible worlds: sets of propositions all of which can’t be true together are no more mysterious than sets of propositions all of which can be true together; if talk of possible worlds is a merely pragmatically useful fiction, talk of impossible worlds can be such without any further mystery.  And so if you hold some such view about worlds, the question as to whether we should talk about impossible worlds depends solely on whether to do so is useful – there is no metaphysical problem in doing so.


Not so, seemingly, if you are a Lewisian realist about worlds.  For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world at which there are round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, etc.  Impossible worlds are not metaphysically mysterious on the latter views because there is no metaphysical puzzle in there being a description of something that couldn’t exist, or a picture of something that couldn’t exist; but they are a metaphysical puzzle for Lewis, because there is a metaphysical puzzle in there being something that couldn’t exist.


Nonetheless, some think this is a price worth paying: they like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.  I’ve heard this move a few times in conversation, but the one person I know of who has defended it in print is Ira Kiourti.  (Yagisawa defends a view similar to Lewisian realism with impossible worlds, but with some crucial differences.)  Now, there are some big, and familiar, problems with believing in genuine impossible worlds that they each try to deal with but that I have yet to be convinced can be solved.  (See my critical study of Yagisawa.)  But I want to raise a problem for Lewisian realism with impossible worlds that I haven’t seen discussed and which I don’t even know how one would start to answer.


I can see how Lewisian realism with impossible worlds is supposed to deal with impossibilities like ‘There is a round square’ or ‘Frank is taller than Jim and Jim is taller than Frank’.  I just need to believe in impossible objects – a round square and a man that is both taller and shorter than some other man – and then I can believe in worlds composed in part of such objects.  Now, personally I can’t conceive of such objects – but so what?  If I’ve got good reason to believe in them, I can postulate them.  But I don’t see how we are meant to account for an impossibility like ‘2+2=5’.  For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.  And so, since contingency just is, for Lewis, variation across certain portions of concrete reality – namely, the worlds, which are just big concrete objects – there is simply no room for contingency with respect to the mathematical truths.  Necessary truths about the realm of concreta are necessary because each relevant portion of concrete reality is a certain way: that is, no matter what variation you get across these concrete portions of reality, things are that way.  But necessary truths about the realm of pure abstracta are necessary because they have nothing to do with how concrete reality is: so irrespective of what variation you get across these concrete portions of reality, things are that way.  (See my paper on Lewis and reduction, esp. footnotes 3-5, and the corresponding discussion in the text.)


In that case, while I can add to my ontology round squares if I wish, and hence believe in a big concrete object with a round square as a part, and thereby have a world that represents the impossible circumstance of there being a round square, I don’t even see what weird metaphysical move to make to get a Lewisian world that represents 2+2 as being 5.  Worlds don’t have numbers as parts: they are sums of concrete individuals; and if we give that up, we don’t have something worth calling an extension of Lewisian realism (Yagisawa gives this up, which is why I’m ignoring his view here).  A Lewisian world nonetheless represents that there are numbers, because numbers exist from the standpoint of that world.  And while the concrete objects that exist in one Lewisian world are never the same as the concrete objects that exist at another world, the numbers that exist from the standpoint of each world are just the same.  And that’s why mathematical truths are necessary, for Lewis: because a world represents some mathematical claim as being the case just because the numbers represented as existing from the standpoint of that world are as the claim says they are.  And since the same numbers exist from the standpoint of any two worlds, no two worlds differ in what mathematical claims they represent as being the case.  Given Lewis’s account of what a world is and how worlds represent something to be the case, there is simply no room for variation across worlds in what mathematical claims are represented as true: hence there is no room for contingency in mathematical claims, but nor is there room for a world that represents some impossible mathematical claim as true, no matter what we think about which concrete worlds there are.  So the Lewisian simply can’t extend her ontology to admit worlds that account for every impossible situation.