This formula, p & ¬p, expresses a contradiction: the ultimate no-no. And we now have a precise sense in which it is a no-no. For it is easy to show, from the two tables we have, that whatever the truth-value of p, the truth-value of this formula comes out as F. There is no way it could be true, because when one of the conjuncts is true the other is false: there is always a false element. And the truth-table for conjunction shows that in that case the overall formula is false.

Now suppose we complicate things by negating it:

¬(p & ¬p)

The brackets here show that the outside ¬ negates the whole thing. They act like the brackets in 3 × (4 + 2), which show that the result is to be 18, rather than what we would get if we had (3 × 4) + 2, which is 14. This bracketing is extremely important in logic, as it is in arithmetic: many fallacies in formal and informal reasoning can be avoided by knowing where the brackets fall. This is called knowing the scope of operation of the negations and conjunctions and the rest. In this example the outside negation has the whole of the rest of the formula to operate upon. A quite different reading would be given by ¬p & ¬p, which simply conjoins ¬p to itself and, incidentally, is false in the case in which p is true (saying something false twice does not make it any better). One of the terrific virtues of formal logic is that it sensitizes people to scope ambiguities, which arise when it is not clear where the brackets lie, or in other words what is governing what. Without knowing this, you do not know in what ways your premises and your conclusions might be true, and hence whether there is any way your premises might be true without your conclusion being so.
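
The difference that brackets make can be checked mechanically. Here is a minimal Python sketch (my own illustration, not part of the text; the function names are invented) that evaluates both bracketings for each truth-value of p:

    # Two different scopes for negation; the function names are invented.
    def wide_scope(p):
        # ¬(p & ¬p): the negation governs the whole conjunction
        return not (p and not p)

    def narrow_scope(p):
        # ¬p & ¬p: each negation governs only p
        return (not p) and (not p)

    for p in (True, False):
        print(p, wide_scope(p), narrow_scope(p))

The two formulas come apart when p is true, which is exactly the difference the brackets record.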

This new formula, ¬(p & ¬p), reverses the truth-value of the old contradiction. So it is true, whatever the truth-values of its components. It is called a tautology. This is an important notion. In propositional logic, if we have premises blah-blah-blah and conclusion yadda-yadda, we want it to be true that 'If blah-blah-blah then yadda-yadda' is a tautology. Then there is no interpretation (no way of assigning truth-values) that makes the premises true while the conclusion is false. When this is so, the argument is valid in exactly the sense we have been talking about.
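
The test this describes is mechanical: run through every way of assigning truth-values and see whether the formula ever comes out false. A small sketch of that procedure, assuming formulas are written as Python functions (the example formula, the tautology behind modus ponens, is my choice):

    from itertools import product

    # A formula is a tautology if it comes out true under every assignment.
    def is_tautology(formula, num_vars):
        return all(formula(*values)
                   for values in product([True, False], repeat=num_vars))

    # 'If (p and (if p then q)) then q'; 'if a then b' is ((not a) or b).
    modus_ponens = lambda p, q: (not (p and ((not p) or q))) or q

    print(is_tautology(modus_ponens, 2))           # True: a tautology
    print(is_tautology(lambda p: p and not p, 1))  # False: a contradiction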

One way of discovering whether an argument is valid is common enough to deserve a name. You can find whether 'If blah-blah-blah then yadda-yadda' is valid by adding 'not yadda-yadda' to 'blah-blah-blah' and seeing if you can get out a contradiction. If you can, the argument was valid. This corresponds directly to there being no way that the premises could be true and the conclusion false. There is no interpretation or no model for that state of affairs. Contradiction bars the way. This is called 'assuming towards a contradiction' or 'assuming towards a reductio', from the Latin name for this kind of procedure: the reductio ad absurdum, or reduction to absurdity. Anselm's ontological argument in Chapter 5 had that form.
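
That procedure can be sketched in the same style as before: conjoin the premises with the negation of the conclusion and hunt for an interpretation making them all true. If there is none, contradiction bars the way and the argument is valid. The example argument here is mine:

    from itertools import product

    # Valid iff the premises plus the negated conclusion cannot all be true.
    def valid_by_reductio(premises, conclusion, num_vars):
        for values in product([True, False], repeat=num_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False  # counter-model: premises true, conclusion false
        return True           # no counter-model: the reductio succeeds

    # From 'if p then q' and 'p', infer 'q'.
    premises = [lambda p, q: (not p) or q, lambda p, q: p]
    print(valid_by_reductio(premises, lambda p, q: q, 2))      # True: valid
    print(valid_by_reductio(premises, lambda p, q: not q, 2))  # False: invalid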

In mathematics we can have not only 2 + 2, but also 3 × (2 + 2) and ((2 + 3) × (2 + 2)) - 5, and so on forever, and so it is with information. In so far as complex bits of information are produced by applying and reapplying truth-functional combinations, we can keep perfect control of the interpretations under which we have truth and falsity.

NOTHING TO BE AFRAID OF

So logic studies the structure of information. Its aim is to exhibit
that structure, and thereby also exhibit what follows from what:
what is sufficient to prove p and what follows from p, for p of any
complexity. The connection between structure and proof is just
this: the structure shows us if there is no way that the premises can
be true without the conclusion being true. Because to understand
the structure of information is to understand the ways it can be
true.

So far, we have looked at complexity of information arising because propositions are negated or conjoined, or connected by implication. But we have not broken inside propositions. As far as the analysis so far goes, 'Some persons are philosophers' and 'All persons are philosophers' will come out looking alike. Each is just an example of a proposition, p. But we cannot get inside the propositions to understand how they mean different things.

The breakthrough that cracked this problem created modern
logic. It was made by the German mathematician and logician
Gottlob Frege (1848-1925), in his seminal Begriffsschrift ('concept
writing') of 1879. Consider this argument: every inquiry stops
somewhere, so there is somewhere every inquiry stops (it is sometimes supposed that the foundationalists we met in Chapter 1 advanced something like this). Something must be wrong, for a
parallel would be: everyone has a mother, so there is someone who
is everyone's mother. Or, everyone ties his own laces, so someone
ties everyone's laces. Until Frege, people could see that there was
something wrong, but, lacking any understanding of how this kind
of information is built, they could not say what it was.

The key to understanding Frege's achievement is to think in terms of two quite different kinds of information. The first is very familiar. It corresponds to attaching a term to a name or other expression that refers to a particular person or thing: Bill is rich, Tony grins, this is an orange. Here we have a subject term (the names 'Bill' and 'Tony', and the demonstrative 'this'), and things are said of what they pick out: 'is rich', 'grins', or 'is an orange'. These terms stand for conditions that things might meet. They are called 'predicates': the rich things satisfy the predicate 'is rich', and other things do not. This is the basic subject-predicate form of information.

Now we can do something surprising. Suppose we delete the term that stands for the subject. We are left with only a gappy sentence, or predicate: 'is rich', and so on. We can better signal the gap by the expression called a variable, usually written x, y, z..., as in algebra. So we have 'x is rich'. This is no longer a sentence carrying a piece of information, because nobody is being said to be rich. It is a sentence with a hole in it: a predicate, or an open sentence, in logicians' jargon.
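
An open sentence behaves very like a function waiting for an input: give it an object and it delivers a truth-value. A minimal sketch, with the people and the figures invented purely for illustration:

    # The open sentence 'x is rich' as a function from objects to truth-values.
    # The domain and the wealth figures are invented for the example.
    wealth = {"Bill": 10_000_000, "Tony": 40_000}

    def is_rich(x):
        return wealth[x] > 1_000_000

    print(is_rich("Bill"))  # True: filling the gap with a name yields information
    print(is_rich("Tony"))  # False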

Now, here comes the magic. Suppose I ask you to take an open sentence into a particular domain, such as a classroom, or New York City, and come back giving me some information. You could just reconstruct a piece of information like the one we started with, naming some particular individual, and saying that he or she is rich. But you don't have to do this. You can do a fundamentally different kind of thing. You can come back and tell me about the quantity of times the predicate is satisfied. And you can tell me this without telling me who satisfies it. It is as if you use the open sentence by pointing the 'x' in it at all the different people in the domain in turn, and note how often you get a hit. Suppose we symbolize the predicate by φ (the Greek letter 'phi'). Then you ask 'Is this φ, is this φ?' of each of the members of the domain in succession. Then you can tell me what happened.

Perhaps the simplest kind of thing you could tell me is that at least once, somewhere, you got a hit. This is equivalent to 'Something is φ'. Or you might tell me that somewhere you got a miss: 'Something is not-φ'. Contrast this last with getting a hit nowhere: 'Nothing is φ'. Or it might be that everywhere you got a hit: 'Everything is φ'.

'Something is φ' is given by a new piece of symbolism: the existential quantifier. It is written as (∃x)φx (the fact that the variable comes after the predicate in 'φx', whereas in English predicates usually finish sentences and things like names start them, is irrelevant). If you never get a hit, you can enter ¬(∃x)φx: nothing is φ. If, somewhere, you get a result that is not a hit, you have the very different (∃x)¬(φx). If you nowhere get a result other than a hit, you have ¬(∃x)¬φx. This says that nowhere is there anything that is not φ. Or, in other words, as far as this domain goes, everything is φ. This last kind of information is sufficiently important to have its own symbol, the universal quantifier, written as (∀x)φx: 'Everything is φ'.
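
All four reports can be generated by pointing the variable at each member of a domain in turn. In this sketch (the little domain is again invented) the universal quantifier is built exactly as just described, as ¬(∃x)¬φx:

    # Point the predicate at each member of an invented domain in turn.
    domain = ["Bill", "Tony", "Cherie"]
    wealth = {"Bill": 10_000_000, "Tony": 40_000, "Cherie": 2_000_000}
    phi = lambda x: wealth[x] > 1_000_000  # the predicate φ: 'x is rich'

    something_is_phi  = any(phi(x) for x in domain)          # (∃x)φx
    something_is_not  = any(not phi(x) for x in domain)      # (∃x)¬φx
    nothing_is_phi    = not any(phi(x) for x in domain)      # ¬(∃x)φx
    everything_is_phi = not any(not phi(x) for x in domain)  # ¬(∃x)¬φx

    print(something_is_phi, something_is_not, nothing_is_phi, everything_is_phi)
    # Python's all() is the universal quantifier (∀x)φx; the two agree:
    print(everything_is_phi == all(phi(x) for x in domain))  # True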

Leibniz thought that if we had a sufficiently logical notation,
dispute and confusion would cease, and men would sit together
and resolve their disputes by calculation. The invention of the
quantifier did not bring about this utopia, but it does an astonishing amount towards it. Its full power is exhibited when we get multiple quantifications. This is information built with more than one
quantifier in play. When we have more than one quantifier, we use
different variables (x, y, z...) to indicate the different gaps to which
they correspond. To illustrate the idea, we can see how easily it dissects the invalid argument: everybody has a mother, so someone is
everyone's mother. If we write 'x is the mother of y' as 'xMy', we symbolize the first by (∀y)(∃x) xMy. The second is (∃x)(∀y) xMy.
How are these different?

Start with a sentence claiming motherhood between two different people: Beth is the mother of Albert. Knock out reference to Beth, and we have the open sentence xMa (where 'a' abbreviates Albert). We know that this predicate is satisfied (it is satisfied by Beth), so we know (∃x) xMa. Somebody is Albert's mother. Now knock out reference to Albert: (∃x) xMy. We have a gappy, or open, sentence again, with y marking the gap. It corresponds to the predicate 'having someone as a mother'. We can take this into the domain and point the variable y at each in turn: does this person have a mother, does this one...? If we get the answer 'yes' on each occasion (which we do), we can universally quantify: (∀y)(∃x) xMy. Everyone has a mother.

Now look at the second formula. To get this, we similarly start with Beth (b) being the mother of Albert. But now we knock out reference to Albert first: bMy. We take this round the domain. If we could (as in the real world we cannot) write (∀y) bMy, this would be because Beth is the mother of everyone (whoever you point the variable y at, it turns out that Beth is their mother!). What has just been supposed of Beth might be supposed true of someone (if not Beth): in that case you can knock out reference to Beth, take the predicate 'being mother of everyone', or in other words (∀y) xMy, round the domain, and find eventually someone giving the answer yes. In that case you would be able to write (∃x)(∀y) xMy. But the point to notice is that this is an entirely different procedure. It gives an entirely different kind of information (false of the domain of human beings). And the quantificational structure shows the difference on its face, because the stringing out of the quantifiers shows how the information is built.
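
The two procedures can be run side by side over a small invented domain, and they do indeed come apart:

    # An invented domain with a relation M(x, y): x is the mother of y.
    domain = ["albert", "beth", "carol"]
    mothers = {("beth", "albert"), ("carol", "beth"), ("carol", "carol")}
    # (carol 'mothering' herself is absurd, but it keeps the toy domain closed)
    M = lambda x, y: (x, y) in mothers

    # (∀y)(∃x) xMy: for each y, some x or other is y's mother.
    everyone_has_a_mother = all(any(M(x, y) for x in domain) for y in domain)

    # (∃x)(∀y) xMy: some single x is the mother of every y.
    someone_mothers_everyone = any(all(M(x, y) for y in domain) for x in domain)

    print(everyone_has_a_mother)     # True
    print(someone_mothers_everyone)  # False: no one x mothers all three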

In the real world, nobody is the mother of everybody. Before we understood quantification, that might have sounded weird, as if the human race had sprung out of Nothing. This might have seemed a creepy metaphysical thesis. But now it is tamed. It just means that ¬(∃x)(∀y) xMy. And this is a simple truth. At least, it is unless you use the relation 'mother' to include more remote kinds of ancestry, in which case you might want to claim that there is someone, biological Eve, the first female Homo sapiens, who is the mother of everyone. But I would regard that as an illegitimate or metaphorical usage. My grandmother is not literally my mother.

We can give more precise information about the quantity of
times some condition is met in a domain. We might say that there
is exactly one thing satisfying the condition. This means that any
time you get a hit, if you go on pointing the variable at the rest of
the things in the domain, whenever you get a hit it turns out to be
the same one. There are no two distinct hits. This is the core of Russell's famous theory of definite descriptions. For it to be true that
the unique king of France has a beard, there would need to be
someone who rules France and no other person who rules France,
and it should be true of whoever does rule France that he has a
beard. Otherwise, the claim is false.
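
Russell's analysis thus has three conjuncts: something rules France, nothing else does, and that thing has a beard. A sketch of the same idea over an invented domain:

    # 'The king of France has a beard', Russell-style, over an invented domain.
    rules_france = {"louis"}  # who satisfies 'x rules France' (invented)
    has_beard    = {"louis"}  # who satisfies 'x has a beard' (invented)

    def the_ruler_has_a_beard(domain):
        return any(
            x in rules_france
            and all(y == x for y in domain if y in rules_france)  # uniqueness
            and x in has_beard
            for x in domain
        )

    print(the_ruler_has_a_beard(["louis", "marie", "jean"]))  # True here
    print(the_ruler_has_a_beard(["marie", "jean"]))           # False: no ruler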

Quantificational structure is just one thing, but a very important thing to be aware of. Ordinary language is good at generating ambiguities that quantificational notation easily resolves. 'All the nice girls love a sailor', said the song. There is some lucky sailor they all love? They all have one, but perhaps a different sailor, that they love? Take any sailor, then all the nice girls love him (or her)? Very different things, true in very different circumstances. A related ambiguity is responsible for some thirty thousand deaths a year in the United States. 'A well-regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.' Each person? Or the people as a collective, as in 'The team can have a bus'? If the founding fathers had been able to think in terms of quantificational structure, a lot of blood might not have been spilt.
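
For what it is worth, the three readings of the song pull apart over a toy domain in just the way the mother example did; the girls, the sailors, and the lovings are all invented:

    # Three quantificational readings of 'All the nice girls love a sailor'.
    girls   = ["ann", "bea"]
    sailors = ["sam", "tom"]
    loves   = {("ann", "sam"), ("bea", "tom")}  # each loves a different sailor
    L = lambda g, s: (g, s) in loves

    one_loved_by_all = any(all(L(g, s) for g in girls) for s in sailors)  # (∃s)(∀g)
    each_loves_some  = all(any(L(g, s) for s in sailors) for g in girls)  # (∀g)(∃s)
    all_love_every   = all(all(L(g, s) for g in girls) for s in sailors)  # (∀s)(∀g)

    print(one_loved_by_all, each_loves_some, all_love_every)
    # False True False: only the middle reading is true of this little world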
