Topic 44: Logical Connections

How much does logic structure our sentences, and what kind of logic should we use? In this episode, we talk about sentential logic: where it came from, how we connect things up systematically, and in what ways language looks like it moves away from pure logic.

Quick Summary:

There's a certain logic that lies underneath language, but what is it? Philosophers have been working on the question of what makes a good, logical argument for thousands of years. The Greek philosopher Aristotle described a system in which a number of logical arguments can be shown to be valid. These arguments, where a combination of premises leads to a logical conclusion, can work even when disconnected from anything real: they make sense even if you use nonsense words! But unfortunately, Aristotelian logic can't really capture language, since it doesn't work well with individuals, or with multipart sentences.

So instead, we have to look at different frameworks. Here, we talk about sentential logic, a system that takes sentences as its input and connects them together using a number of different logical connectives, like "and" and "or." By using these symbols and examining the outputs, we can form truth tables, which allow us to tell at a glance whether a given pair (or more!) of sentences leads to a true conclusion, based on whether the different sentences we feed into the system are true. This system also allows us to look at whether language behaves in a purely logical manner! But even when it doesn't, that may be for other, pragmatic reasons, rather than because logic itself doesn't work on language.
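As an illustration of the truth-table idea (our own sketch, not from the episode), a few lines of Python can enumerate every combination of truth values for two sentences P and Q and compute what "and" and "or" yield for each row:

```python
from itertools import product

# Each connective is a truth function: it maps a pair of
# truth values to a single truth value.
connectives = {
    "P and Q": lambda p, q: p and q,
    "P or Q": lambda p, q: p or q,
}

# Enumerate all four combinations of truth values for P and Q,
# printing one truth-table row per combination.
for p, q in product([True, False], repeat=2):
    outputs = ", ".join(f"{name} = {fn(p, q)}" for name, fn in connectives.items())
    print(f"P={p}, Q={q}: {outputs}")
```

Reading the printed rows top to bottom gives exactly the familiar truth tables: "and" is true in only one of the four rows, while "or" is true in three.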

Extra Materials:

In the field of logic, the connectives that glue together sentences are said to be fully truth-functional. This is because the contribution that they make to the meaning of a logical expression is determined completely by — that is, it is a function of — the truth or falsity of the sentences they link together. To see this, consider the connective logicians call the biconditional, which we represent with a double-headed arrow (↔), and which is closely related to the English phrase “if and only if”. If we wanted to represent the English sentence in (1) as a purely logical expression, we might write it out as (2).

(1) T-Rex spoke with Utahraptor if and only if Utahraptor spoke with T-Rex.

(2) T↔U

Now let’s say that both T and U were true; how can we determine the truth of the whole knowing only the truth of its parts? Because the biconditional is truth-functional, we only have to consider what kind of function it is. As it happens, the biconditional is a kind of matching function, which means that it spits out a value of “true” only when the values it’s been given match each other — when they’re both true, or both false. So, if both T and U are true, so is the whole of (2).
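To make the "matching function" idea concrete, here's a minimal Python sketch (the function name is our own label for the ↔ connective, not standard notation):

```python
def biconditional(t: bool, u: bool) -> bool:
    """T <-> U: true exactly when the two truth values match."""
    return t == u

# All four combinations, mirroring the truth table for the biconditional:
for t in (True, False):
    for u in (True, False):
        print(f"T={t}, U={u}: T<->U = {biconditional(t, u)}")
```

When T and U are both true, as in the scenario above, the function returns `True`, just as the matching-function description predicts; it also returns `True` when both are false, and `False` whenever the two values differ.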

With this idea of truth-functionality in mind, it’s interesting for linguists to consider whether the connectives of English are truth-functional in the same way that the connectives of logic are. That is, do the meanings of English sentences that contain connectives depend only on the truth or falsity of their parts, or is there something more going on?

In the episode, we already began to see hints that there might be more to a language like English than there is to logic. For example, we saw that the word “and” suggests that there’s a temporal order to the sentences it connects together. It appears that the sentence in (5) suggests that (3) happened first, and then (4) followed.

(3) T-Rex spoke to Utahraptor.

(4) The raccoons spoke to T-Rex.

(5) T-Rex spoke to Utahraptor and the raccoons spoke to T-Rex.

And while the sentence in (6) would be perfectly acceptable if translated into the language of logic, English sentences that contain “and” sound a bit funny when the first sub-sentence presupposes the second (the number sign is used to point out that the sentence is semantically odd):

(6) #T-Rex’s arms are short and T-Rex has arms.

It sounds strange to our ears to include the statement “T-Rex has arms” when that information is already contained in the first part of the sentence, in the phrase “T-Rex’s arms”.

And as we suggested, in order to rule out the redundancy of (6) and account for the impression of order in (5), we can make use of the philosopher Paul Grice’s Conversational Maxims, which offer us a pragmatic explanation of why the word “and” in English is not entirely truth-functional. Grice’s Maxim of Manner might explain why we interpret (5) as we do: because we assume speakers describe events in the order they happened. And Grice’s Maxim of Quantity generally bars speakers from spending too much time repeating information that’s already been established, which would basically block the sentence in (6).

The hypothesis that pragmatics is at work here can even be tested! If the ordering that’s implicit in (5) is an implicature, it can be cancelled. And, sure enough, we can easily imagine such a sentence being followed up with something like “but not necessarily in that order”. So we can get rid of the implicature if we just try a bit.

As it turns out, a big part of why Grice introduced these rules of conversation was to account for the ways that language deviated from logic. He supposed that language was actually quite a bit like logic, but that certain hidden assumptions made the connections less obvious. And if we consider, for instance, the Maxim of Relevance, we find yet more examples of the non-truth-functional behaviour of a word like “and”. The fact that (7) and (8) have little to do with each other should render (9) unlikely at best.

(7) T-Rex is a carnivore.

(8) It’s Wednesday.

(9) #T-Rex is a carnivore and it’s Wednesday.

Grice’s ideas might even explain why we, at least some of the time, interpret “or” as introducing two mutually exclusive choices, even though its logical cousin is much more inclusive. While (10) suggests the hearer can’t accept both, the logical version in (11) is true even if both sub-parts are true — in other words, even if both events occur.

(10) Either T-Rex will stomp on the house or he’ll stomp on the car.

(11) H∨C

We can always follow up a sentence like the one in (10) with “or he’ll do both”, which suggests that the exclusivity we find in (10) is the result of an implicature. If we can cancel it, it can't be part of the truth of the sentence itself.
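The gap between logic's inclusive "or" and the exclusive reading we often hear in English can be sketched as two different truth functions (a hedged illustration in Python; the names are ours):

```python
def inclusive_or(h: bool, c: bool) -> bool:
    """H v C: true when at least one disjunct is true, including both."""
    return h or c

def exclusive_or(h: bool, c: bool) -> bool:
    """The 'one or the other, but not both' reading."""
    return h != c

# The two functions come apart only when both disjuncts are true:
h = c = True  # T-Rex stomps on the house AND on the car
print(inclusive_or(h, c))  # → True: the logical (11) holds
print(exclusive_or(h, c))  # → False: the exclusive reading of (10) fails
```

In the other three rows of the truth table the two functions agree, which is why the exclusive flavour of (10) only becomes visible in the "both happen" scenario.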

However, there are other deviations from logic that aren’t so straightforward. Like, take the fact that we can say something like (12).

(12) Come closer and give us a kiss.

The “and” in this sentence can’t be truth-functional, because there’s no truth to be found! All we have in (12) are two imperative statements (commands), neither of which can easily be said to be either true or false. Questions can be combined in this way, too.

Or take the fact that we can say something like (13), but not (14).

(13) T-Rex stomped around and Utahraptor did too.

(14) *Utahraptor did too and T-Rex stomped around.

The ordering here seems to matter a lot, even though it doesn’t in logic.

Finally, we face some pretty tricky problems with conditional sentences ("if... then") and negation ("not"). Imagine the following situation: T-Rex is going to a party, and he’s decided that if he sees Morris, he’ll stay, but if he sees any octopuses, he’ll leave. In the end, neither Morris nor any octopuses show up, and T-Rex stays. Now consider (15) and (16).

(15) If T-Rex had seen Morris, he would have stayed.

(16) If T-Rex had seen an octopus, he would have stayed.

According to the situation we described, (15) seems true and (16) seems false. Yet, in both cases, the first half of each sentence is false (because T-Rex saw neither Morris nor any octopuses), and the second half is true (since T-Rex stayed). In other words, the truth or falsity of the overall sentence seems to depend on more than just the truth of its parts; it depends on their relationship to each other. These unruly counterfactual conditionals are definitely a headache for linguists.
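To see just how badly a purely truth-functional conditional fits this scenario, here's a small sketch of logic's material conditional, which counts "if P then Q" as false only when P is true and Q is false (the variable names are ours, chosen to match the story):

```python
def material_conditional(p: bool, q: bool) -> bool:
    """Logic's 'if P then Q': false only when P is true and Q is false."""
    return (not p) or q

# The scenario: neither Morris nor any octopuses showed up, and T-Rex stayed.
saw_morris = False
saw_octopus = False
stayed = True

# Both (15) and (16) have a false antecedent and a true consequent,
# so the material conditional rates BOTH of them true:
print(material_conditional(saw_morris, stayed))   # → True
print(material_conditional(saw_octopus, stayed))  # → True
```

Yet only (15) sounds true to English speakers, which is exactly the mismatch described above: the counterfactual depends on the relationship between its parts, not just on their truth values.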

And think about the sentence in (17):

(17) T-Rex doesn’t have short arms, he’s brachially challenged.

The negation in there isn’t denying the truth of the sentence, it’s serving to correct an error in word-choice. It isn’t behaving truth-functionally. This kind of meta-linguistic negation has exactly the opposite effect from what you’d normally expect!

Logic gets us a long way towards better understanding language, and helps us come up with questions we might never have thought to ask. But the examples above show that a lot of serious challenges face anyone who wants to hold on to a strong connection between the two.

Discussion:

So how about it? What do you all think? Let us know below, and we’ll be happy to talk with you about the ways in which logic and language interact. There’s a lot of interesting stuff to say, and we want to hear what interests you!