Formal Logic/Merged Versions/Sentential Logic





Sentential Logic




Goals[edit | edit source]

Sentential logic[edit | edit source]

Sentential logic attempts to capture certain logical features of natural languages. In particular, it covers the truth-functional connections between sentences. Its formal language specifically recognizes the following sentential connections:

It is not the case that _____
_____ and _____
Either _____ or _____
_____ or _____ (or both)
if _____, then _____
_____ if and only if _____

The blanks are to be filled with statements that can be true or false. For example, "it is raining today" or "it will snow tomorrow". Whether the final sentence is true or false is entirely determined by whether the filled-in statements are true or false. For example, if it is raining today, but it will not snow tomorrow, then it is true to say that "Either it is raining today or it will snow tomorrow". On the other hand, it is false to say "it is raining today and it will snow tomorrow", since it won't snow tomorrow.

"Whether a statement is true or false" is called the truth value in logician slang. Thus "Either it is raining today or it is not raining today" has a truth value of true and "it is raining today and it is not raining today" has truth value of false.

Note that the sentential connections listed above do not cover all possible truth-value combinations. For example, there is no connection that is true when both sub-statements are true, when both are false, or when the first is true while the second is false, and that is false otherwise. However, you can combine the connections above to express any truth-functional combination of any number of sub-statements.

Issues[edit | edit source]

Already we have tacitly taken a position in an ongoing controversy. Some questions raised by the seemingly innocuous beginning above are listed below.

  • Should we admit into our logic only sentences that are true or false? Multi-valued logics admit a greater range of sentences.
  • Are the connections listed above truly truth-functional? Should we admit connections that are not truth-functional into our logic?
  • What should logic take as its truth-bearers (objects that are true or false)? The two leading contenders today are sentences and propositions.
  • Sentences. These consist of a string of words and perhaps punctuation. The sentence 'The cat is on the mat' consists of six elements: 'the', 'cat', 'is', 'on', another 'the', and 'mat'.
  • Propositions. These are the meanings of sentences. They are what is expressed by a sentence or what someone says when he utters a sentence. The proposition that the cat is on the mat consists of three elements: a cat, a mat, and the on-ness relation.
Elsewhere in Wikibooks and Wikipedia, you will see the name 'Propositional Logic' (or rather 'Propositional Calculus', see below) and the treatment of propositions much more often than you will see the name 'Sentential Logic' and the treatment of sentences. Our choice here represents the contributor's view as to which position is more popular among current logicians and what you are most likely to see in standard textbooks on the subject. Considerations as to whether the popular view is actually correct are not taken up here.
Some authors talk about statements instead of sentences. Most (but not all) such authors you are likely to encounter take statements to be a subset of sentences, namely those sentences that are either true or false. This use of 'statement' does not represent a third position in the controversy, but rather places such authors in the sentences camp. (However, other—particularly older—uses of 'statement' may well place their authors in a third camp.)

Sometimes you will see 'calculus' rather than 'logic' such as in 'Sentential Calculus' or 'Propositional Calculus' as opposed to 'Sentential Logic' or 'Propositional Logic'. While the choice between 'sentential' and 'propositional' is substantive and philosophical, the choice between 'logic' and 'calculus' is merely stylistic.



The Sentential Language[edit | edit source]

This page informally describes our sentential language, which we name . A more formal description will be given in Formal Syntax and Formal Semantics.

Language components[edit | edit source]

Sentence letters[edit | edit source]

Sentences in are represented as sentence letters, which are single letters such as and so on. Some texts restrict these to lower case letters, and others restrict them to capital letters. We will use capital letters.

Intuitively, we can think of sentence letters as English sentences that are either true or false. Thus, may translate as 'The Earth is a planet' (which is true), or 'The moon is made of green cheese' (which is false). But may not translate as 'Great ideas sleep furiously', because such a sentence is neither true nor false. Translations between English and work best if they are restricted to timelessly true or false present-tense sentences in the indicative mood. As you will see in the translation section below, we do not always follow that advice; there we present sentences whose truth or falsity is not timeless.

Sentential connectives[edit | edit source]

Sentential connectives are special symbols in Sentential Logic that represent truth-functional relations. They are used to build larger sentences from smaller sentences. The truth or falsity of the larger sentence can then be computed from the truth or falsity of the smaller ones, as the code sketch following the list below illustrates.

  • Translates to English as 'and'.
  • is called a conjunction and and are its conjuncts.
  • is true if both and are true—and is false otherwise.
  • Some authors use an & (ampersand), (heavy dot) or juxtaposition. In the last case, an author would write
instead of our

  • Translates to English as 'or'.
  • is called a disjunction and and are its disjuncts.
  • is true if at least one of and is true—and is false otherwise.
  • Some authors may use a vertical stroke: |. However, this comes from computer languages rather than logicians' usage. Logicians normally reserve the vertical stroke for nand (alternative denial). When used as nand, it is called the Sheffer stroke.

  • Translates to English as 'it is not the case that' but is normally read 'not'.
  • is called a negation.
  • is true if is false—and is false otherwise.
  • Some authors use ~ (tilde) or . Some authors use an overline, for example writing
instead of

  • Translates to English as 'if...then' but is often read 'arrow'.
  • is called a conditional. Its antecedent is and its consequent is .
  • is false if is true and is false—and true otherwise.
  • By that definition, is equivalent to
  • Some authors use (hook).

  • Translates to English as 'if and only if'
  • is called a biconditional.
  • is true if and both are true or both are false—and false otherwise.
  • By that definition, is equivalent to the more verbose . It is also equivalent to , the conjunction of two conditionals where in the second conditional the antecedent and consequent are reversed from the first.
  • Some authors use .
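Because each of these connectives is truth functional, the truth value of a compound sentence can be computed mechanically from the truth values of its parts. The following is a minimal Python sketch of that idea (the function names are our own, not symbols of the formal language):

```python
# Each sentential connective modeled as a Boolean function (True/False stand for
# the truth values true and false).
def neg(p):       return not p              # negation: 'it is not the case that'
def conj(p, q):   return p and q            # conjunction: 'and'
def disj(p, q):   return p or q             # disjunction: 'or'
def cond(p, q):   return (not p) or q       # conditional: false only when p is true and q is false
def bicond(p, q): return p == q             # biconditional: true when p and q agree

# 'Either it is raining or it will snow', with raining = True and snow = False:
print(disj(True, False))   # True
# 'It is raining and it will snow', with the same truth values:
print(conj(True, False))   # False
```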

Grouping[edit | edit source]

Parentheses and are used for grouping. Thus

are two different and distinct sentences. Each negation, conjunction, disjunction, conditional, and biconditional gets a single pair of parentheses.

Notes[edit | edit source]

(1) An atomic sentence is a sentence consisting of just a single sentence letter. A molecular sentence is a sentence with at least one sentential connective. The main connective of a molecular formula is the connective that governs the entire sentence. Atomic sentences, of course, do not have a main connective.

(2) The and signs for conditional and biconditional are historically older, perhaps a bit more traditional, and definitely occur more commonly in WikiBooks and Wikipedia than our arrow and double arrow. They originate with Alfred North Whitehead and Bertrand Russell in Principia Mathematica. Our arrow and double arrow appear to originate with Alfred Tarski, and may be a bit more popular today than Whitehead and Russell's symbols.

(3) Sometimes you will see people reading our arrow as implies. This is fairly common in WikiBooks and Wikipedia. However, most logicians prefer to reserve 'implies' for metalinguistic use. They will say:

If P then Q

or even

P arrow Q

They approve of:

'P' implies 'Q'

but will frown on:

P implies Q

Translation[edit | edit source]

Consider the following English sentences:

If it is raining and Jones is out walking, then Jones has an umbrella.
If it is Tuesday or it is Wednesday, then Jones is out walking.


To render these in , we first specify an appropriate English translation for some sentence letters.

It is raining.
Jones is out walking.
Jones has an umbrella.
It is Tuesday.
It is Wednesday.


We can now partially translate our examples as:


Then finish the translation by adding the sentential connectives and parentheses:

Quoting convention[edit | edit source]

For English expressions, we follow the logical tradition of using single quotes. This allows us to use ' 'It is raining' ' as a quotation of 'It is raining'.

For expressions in , it is easier to treat them as self-quoting so that the quotation marks are implicit. Thus we say that the above example translates (note the lack of quotes) as 'If it is Tuesday, then It is raining'.




Formal Syntax[edit | edit source]

In The Sentential Language, we informally described our sentential language. Here we give its formal syntax or grammar. We will call our language .

Vocabulary[edit | edit source]

  • Sentence letters: Capital letters 'A' – 'Z', each with (1) a superscript '0' and (2) a natural number subscript. (The natural numbers are the set of positive integers and zero.) Thus the sentence letters are:
  • Sentential connectives:
  • Grouping symbols:

The superscripts on sentence letters are not important until we get to the predicate logic, so we won't really worry about those here. The subscripts on sentence letters are to ensure an infinite supply of sentence letters. On the next page, we will abbreviate away most superscripts and subscripts.

Expressions[edit | edit source]

Any string of characters from the vocabulary is an expression of . Some expressions are grammatically correct. Some are as incorrect in as 'Over talks David Mary the' is in English. Still other expressions are as hopelessly ill-formed in as 'jmr.ovn asgj as;lnre' is in English.

We call a grammatically correct expression of a well-formed formula. When we get to Predicate Logic, we will find that only some well-formed formulas are sentences. For now, though, we consider every well-formed formula to be a sentence.

Construction rules[edit | edit source]

An expression of is called a well-formed formula of if it is constructed according to the following rules.

The expression consists of a single sentence letter
The expression is constructed from other well-formed formulae and in one of the following ways:

In general, we will use 'formula' as shorthand for 'well-formed formula'. Since all formulae in are sentences, we will use 'formula' and 'sentence' interchangeably.

Quoting convention[edit | edit source]

We will take expressions of to be self-quoting and so regard

to include implicit quotation marks. However, something like

requires special consideration. It is not itself an expression of since and are not in the vocabulary of . Rather they are used as variables in English which range over expressions of . Such a variable is called a metavariable, and an expression using a mix of vocabulary from and metavariables is called a metalogical expression. Suppose we let be and be Then (1) becomes

'' '' ''

which is not what we want. Instead we take (1) to mean (using explicit quotes):

the expression consisting of '' followed by followed by '' followed by followed by '' .

Explicit quotes following this convention are called Quine quotes or corner quotes. Our corner quotes will be implicit.

Additional terminology[edit | edit source]

We introduce (or, in some cases, repeat) some useful syntactic terminology.

  • We distinguish between an expression (or a formula) and an occurrence of an expression (or formula). The formula

is the same formula no matter how many times it is written. However, it contains three occurrences of the sentence letter and two occurrences of the sentential connective .

  • is a subformula of if and only if and are both formulae and contains an occurrence of . is a proper subformula of if and only if (i) is a subformula of and (ii) is not the same formula as .
  • An atomic formula or atomic sentence is one consisting solely of a sentence letter. Or put the other way around, it is a formula with no sentential connectives. A molecular formula or molecular sentence is one which contains at least one occurrence of a sentential connective.
  • The main connective of a molecular formula is the last occurrence of a connective added when the formula was constructed according to the rules above.
  • A negation is a formula of the form where is a formula.
  • A conjunction is a formula of the form where and are both formulae. In this case, and are both conjuncts.
  • A disjunction is a formula of the form where and are both formulae. In this case, and are both disjuncts.
  • A conditional is a formula of the form where and are both formulae. In this case, is the antecedent, and is the consequent. The converse of is . The contrapositive of is .
  • A biconditional is a formula of the form where and are both formulae.

Examples[edit | edit source]

By rule (i), all sentence letters, including

are formulae. By rule (ii-a), then, the negation

is also a formula. Then by rules (ii-c) and (ii-b), we get the disjunction and conjunction

as formulae. Applying rule (ii-a) again, we get the negation

as a formula. Finally, rule (ii-c) generates the conditional of (1), so it too is a formula.


This appears to be generated by rule (ii-c) from

The second of these is a formula by rule (i). But what about the first? It would have to be generated by rule (ii-b) from

But

cannot be generated by rule (ii-a). So (2) is not a formula.




Informal Conventions[edit | edit source]

In The Sentential Language, we gave an informal description of a sentential language, namely . We have also given a Formal Syntax for . Our official grammar generates a large number of parentheses. This makes formal definitions and other specifications easier to write, but it makes the language rather cumbersome to use. In addition, all the subscripts and superscripts quickly get to be unnecessarily tedious. The end result is an ugly and difficult to read language.

We will continue to use the official grammar for specifying formalities. However, we will informally use a less cumbersome variant for other purposes. The transformation rules below convert official formulae of into our informal variant.


Transformation rules[edit | edit source]

We create informal variants of official formulae as follows. The examples are cumulative.

  • The official grammar required sentence letters to have the superscript '0'. Superscripts aren't necessary or even useful until we get to the predicate logic, so we will always omit them in our informal variant. We will write, for example, instead of .
  • We will omit the subscript if it is '0'. Thus we will write instead of . However, we cannot omit all subscripts; we still need to write, for example, .
  • We will omit outermost parentheses. For example, we will write
instead of
  • We will let a series of the same binary connective associate on the right. For example, we can transform the official
into the informal
However, the best we can do with
is
  • We will use precedence rankings to omit internal parentheses when possible. For example, we will regard as having lower precedence (wider scope) than . This allows us to write
instead of
However, we cannot remove the internal parentheses from
Our informal variant of this latter formula is
Full precedence rankings are given below.

Precedence and scope[edit | edit source]

Precedence rankings indicate the order that we evaluate the sentential connectives. has a higher precedence than . Thus, in calculating the truth value of

we start by evaluating the truth value of

first. Scope is the part of an expression that is governed by a given occurrence of a connective. The occurrence of in (1) has a wider scope than the occurrence of . Thus the occurrence of in (1) governs the whole sentence while the occurrence of in (1) governs only the occurrence of (2) in (1).

The full ranking from highest precedence (narrowest scope) to lowest precedence (widest scope) is:

    highest precedence (narrowest scope)
     
     
     
    lowest precedence (widest scope)

Examples[edit | edit source]

Let's look at some examples. First,

can be written informally as


Second,

can be written informally as


Some unnecessary parentheses may prove helpful. In the two examples above, the informal variants may be easier to read as

and


Note that the informal formula

is restored to its official form as

By contrast, the informal formula

is restored to its official form as





Formal Semantics[edit | edit source]

English syntax for 'Dogs bark' specifies that it consists of a plural noun followed by an intransitive verb. English semantics for 'Dogs bark' specifies its meaning, namely that dogs bark.

In The Sentential Language, we gave an informal description of . We also gave a Formal Syntax. However, at this point our language is just a toy, a collection of symbols we can string together like beads on a necklace. We do have rules for how those symbols are to be ordered. But at this point those might as well be aesthetic rules. The difference between well-formed formulae and ill-formed expressions is not yet any more significant than the difference between pretty and ugly necklaces. In order for our language to have any meaning, to be usable in saying things, we need a formal semantics.

Any given formal language can be paired with any of a number of competing semantic rule sets. The semantics we define here is the usual one for modern logic. However, alternative semantic rule-sets have been proposed. Alternative semantic rule-sets of have included (but are certainly not limited to) intuitionistic logics, relevance logics, non-monotonic logics, and multi-valued logics.

Formal semantics[edit | edit source]

The formal semantics for a formal language such as goes in two parts.

  • Rules for specifying an interpretation. An interpretation assigns semantic values to the non-logical symbols of a formal syntax. The semantics for a formal language will specify what range of values can be assigned to which class of non-logical symbols. has only one class of non-logical symbols, so the rule here is particularly simple. An interpretation for a sentential language is a valuation, namely an assignment of truth values to sentence letters. In predicate logic, we will encounter interpretations that include other elements in addition to a valuation.
  • Rules for assigning semantic values to larger expressions of the language. For sentential logic, these rules assign a truth value to larger formulae based on truth values assigned to smaller formulae. For more complex syntaxes (such as for predicate logic), values are assigned in a more complex fashion.

An extended valuation assigns truth values to the molecular formulae of (or similar sentential language) based on a valuation. A valuation for sentence letters is extended by a set of rules to cover all formulae.

Valuations[edit | edit source]

We can give a (partial) valuation as:

(Remember that we are abbreviating our sentence letters by omitting superscripts.)

Usually, we are only interested in the truth values of a few sentence letters. The truth values assigned to other sentence letters can be random.

Given this valuation, we say:

Indeed, we can define a valuation as a function taking sentence letters as its arguments and truth values as its values (hence the name 'truth value'). Note that does not have a fixed interpretation or valuation for sentence letters. Rather, we specify interpretations for temporary use.

Extended valuations[edit | edit source]

An extended interpretation generates the truth values of longer sentences given an interpretation. For sentential logic, an interpretation is a valuation, so an extended interpretation is an extended valuation. We define an extension of a valuation as follows.

For all sentence letters and from

Example[edit | edit source]

We will determine the truth value of this example sentence given two valuations.


First, consider the following valuation:

(2)  By clause (i):

(3)  By (1) and clause (iii),

(4)  By (1) and clause (iv),

(5)  By (4) and clause (v),

(6)  By (3), (5) and clause (v),

Thus (1) is false in our interpretation.


Next, try the valuation:

(7) By clause (i):

(8) By (7) and clause (iii),

(9) By (7) and clause (iv),

(10) By (9) and clause (v),

(11) By (8), (10) and clause (v),

Thus (1) is true in this second interpretation. Note that we did a bit more work this time than necessary. By clause (v), (8) is sufficient for the truth of (1).




Truth Tables[edit | edit source]

In Formal Semantics, we gave a formal semantics for sentential logic. A truth table is a device for using this formal semantics to calculate the truth value of a larger formula given an interpretation (an assignment of truth values to sentence letters). Truth tables may also help clarify the material from Formal Semantics.

Basic tables[edit | edit source]

Negation[edit | edit source]

We begin with the truth table for negation. It corresponds to clause (ii) of our definition for extended valuations.

P   ¬P
T   F
F   T

T and F represent True and False respectively. Each row represents an interpretation. The first column shows what truth value the interpretation assigns to the sentence letter . In the first row, the interpretation assigns the value True. In the second row, the interpretation assigns the value False.

The second column shows the value receives under a given row's interpretation. Under the interpretation of the first row, has the value False. Under the interpretation of the second row, has the value True.

We can put this more formally. The first row of the truth table above shows that when = True, = False. The second row shows that when = False, = True. We can also put things more simply: a negation has the opposite truth value than that which is negated.

Conjunction[edit | edit source]

The truth table for conjunction corresponds to clause (iii) of our definition for extended valuations.

P   Q   P ∧ Q
T   T   T
T   F   F
F   T   F
F   F   F

Here we have two sentence letters and so four possible interpretations, each represented by a single row. The first two columns show what the four interpretations assign to and . The interpretation represented by the first row assigns both sentence letters the value True, and so on. The last column shows the value assigned to . You can see that the conjunction is true when both conjuncts are true—and the conjunction is false otherwise, namely when at least one conjunct is false.

Disjunction[edit | edit source]

The truth table for disjunction corresponds to clause (iv) of our definition for extended valuations.

P   Q   P ∨ Q
T   T   T
T   F   T
F   T   T
F   F   F

Here we see that a disjunction is true when at least one of the disjuncts is true—and the disjunction is false otherwise, namely when both disjuncts are false.

Conditional[edit | edit source]

The truth table for conditional corresponds to clause (v) of our definition for extended valuations.

P   Q   P → Q
T   T   T
T   F   F
F   T   T
F   F   T

A conditional is true when either its antecedent is false or its consequent is true (or both). It is false otherwise, namely when the antecedent is true and the consequent is false.

Biconditional[edit | edit source]

The truth table for biconditional corresponds to clause (vi) of our definition for extended valuations.

P   Q   P ↔ Q
T   T   T
T   F   F
F   T   F
F   F   T

A biconditional is true when both parts have the same truth value. It is false when the two parts have opposite truth values.

Example[edit | edit source]

We will use the same example sentence from Formal Semantics:

We construct its truth table as follows:

P   Q   R   P ∧ Q   Q ∨ R   ¬(Q ∨ R)   (P ∧ Q) → ¬(Q ∨ R)
T   T   T     T       T        F                F
T   T   F     T       T        F                F
T   F   T     F       T        F                T
T   F   F     F       F        T                T
F   T   T     F       T        F                T
F   T   F     F       T        F                T
F   F   T     F       T        F                T
F   F   F     F       F        T                T

With three sentence letters, we need eight valuations (and so lines of the truth table) to cover all cases. The table builds the example sentence in parts. The column was based on the and columns. The column was based on the and columns. This in turn was the basis for its negation in the next column. Finally, the last column was based on the and columns.

We see from this truth table that the example sentence is false when both and are true, and it is true otherwise.

This table can be written in a more compressed format as follows.

                 (1)    (4)   (3)    (2)
P   Q   R      P ∧ Q     →     ¬    Q ∨ R
T   T   T        T       F     F      T
T   T   F        T       F     F      T
T   F   T        F       T     F      T
T   F   F        F       T     T      F
F   T   T        F       T     F      T
F   T   F        F       T     F      T
F   F   T        F       T     F      T
F   F   F        F       T     T      F

The numbers above the connectives are not part of the truth table but rather show what order the columns were filled in.
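Constructing such a table is entirely mechanical: list every valuation of the sentence letters and evaluate the formula under each one. Here is a small Python sketch of that procedure (the tuple encoding of formulae and the letters P, Q, R are our own illustration):

```python
from itertools import product

def value(f, v):
    """Evaluate a formula under valuation v. Formulae are strings (sentence letters)
    or tuples ('not', A), ('and', A, B), ('or', A, B), ('cond', A, B), ('bicond', A, B)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {'and': a and b, 'or': a or b, 'cond': (not a) or b, 'bicond': a == b}[f[0]]

def truth_table(formula, letters):
    """Print one row per valuation: the letters' values, then the whole formula's value."""
    for row in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, row))
        cells = ['T' if x else 'F' for x in row] + ['T' if value(formula, v) else 'F']
        print('  '.join(cells))

# A formula with the same shape as the one tabulated above: (P and Q) -> not (Q or R).
truth_table(('cond', ('and', 'P', 'Q'), ('not', ('or', 'Q', 'R'))), ['P', 'Q', 'R'])
```

The last column printed by this sketch matches the column under the main connective in the table above.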






Satisfaction and validity of formulae[edit | edit source]

Satisfaction[edit | edit source]

In sentential logic, an interpretation under which a formula is true is said to satisfy that formula. In predicate logic, the notion of satisfaction is a bit more complex. A formula is satisfiable if and only if it is true under at least one interpretation (that is, if and only if at least one interpretation satisfies the formula). The example truth table of Truth Tables showed that the following sentence is satisfiable.

For a simpler example, the formula is satisfiable because it is true under any interpretation that assigns the value True.

We use the notation to say that the interpretation satisfies . If does not satisfy then we write

The concept of satisfaction is also extended to sets of formulae. A set of formulae is satisfiable if and only if there is an interpretation under which every formula of the set is true (that is, the interpretation satisfies every formula of the set).

A formula is unsatisfiable if and only if there is no interpretation under which it is true. A trivial example is

You can easily confirm by doing a truth table that the formula is false no matter what truth value an interpretation assigns to . We say that an unsatisfiable formula is logically false. One can say that an unsatisfiable formula of sentential logic (but not one of predicate logic) is tautologically false.

Validity[edit | edit source]

A formula is valid if and only if it is satisfied under every interpretation. For example,

is valid. You can easily confirm by a truth table that it is true no matter what the interpretation assigns to . We say that a valid sentence is logically true. We call a valid formula of sentential logic—but not one of predicate logic—a tautology.

We use the notation to say that is valid and to indicate is not valid.
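For sentential logic, both satisfiability and validity can be checked by brute force, since a formula has only finitely many relevant valuations. A Python sketch of such a check (the tuple encoding of formulae is our own illustration, not part of the notation above):

```python
from itertools import product

def value(f, v):
    """Evaluate a formula: a string is a sentence letter; tuples are ('not', A),
    ('and', A, B), ('or', A, B), ('cond', A, B) or ('bicond', A, B)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {'and': a and b, 'or': a or b, 'cond': (not a) or b, 'bicond': a == b}[f[0]]

def letters(f):
    """The set of sentence letters occurring in a formula."""
    return {f} if isinstance(f, str) else set().union(*(letters(sub) for sub in f[1:]))

def valuations(f):
    ls = sorted(letters(f))
    for row in product([True, False], repeat=len(ls)):
        yield dict(zip(ls, row))

def satisfiable(f):
    return any(value(f, v) for v in valuations(f))

def valid(f):
    return all(value(f, v) for v in valuations(f))

print(satisfiable(('and', 'P', ('not', 'P'))))   # False: tautologically false
print(valid(('or', 'P', ('not', 'P'))))          # True: a tautology
```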

Equivalence[edit | edit source]

Two formulae are equivalent if and only if they are true under exactly the same interpretations. You can easily confirm by truth table that any interpretation that satisfies also satisfies . In addition, any interpretation that satisfies also satisfies . Thus they are equivalent.

We can use the following convenient notation to say that and are equivalent.

which is true if and only if

Validity of arguments[edit | edit source]

An argument is a set of formulae designated as premises together with a single sentence designated as the conclusion. Intuitively, we want the premises jointly to constitute a reason to believe the conclusion. For our purposes, however, an argument is any set of premises together with any conclusion. That can be a bit artificial for some particularly silly arguments, but the logical properties of an argument do not depend on whether it is silly or on whether anyone actually does or might consider the premises a reason to believe the conclusion. Even an empty set of premises together with a conclusion counts as an argument.

The following example shows the same argument using several notations.

Notation 1
Therefore
Notation 2
Notation 3
Notation 4
    


An argument is valid if and only if every interpretation that satisfies all the premises also satisfies the conclusion. A conclusion of a valid argument is a logical consequence of its premises. We can express the validity (or invalidity) of the argument with as its set of premises and as its conclusion using the following notation.

(1)   
(2)   

For example, we have


Validity for arguments, or logical consequence, is the central notion driving the intuitions on which we build a logic. We want to know whether our arguments are good arguments, that is, whether they represent good reasoning. We want to know whether the premises of an argument constitute good reason to believe the conclusion. Validity is one essential feature of a good argument. It is not the only essential feature. A valid argument with at least one false premise is useless. Validity is the truth-preserving feature. It does not tell us that the conclusion is true, only that the logical features of the argument are such that, if the premises are true, then the conclusion is. A valid argument with true premises is sound.
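In sentential logic, this definition can also be checked by brute force: an argument is valid just in case no valuation makes all the premises true and the conclusion false. A Python sketch (the tuple encoding of formulae is our own illustration):

```python
from itertools import product

def value(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {'and': a and b, 'or': a or b, 'cond': (not a) or b, 'bicond': a == b}[f[0]]

def letters(formulae):
    out = set()
    for f in formulae:
        out |= {f} if isinstance(f, str) else letters(f[1:])
    return out

def valid_argument(premises, conclusion):
    """True if every valuation satisfying all the premises also satisfies the conclusion."""
    ls = sorted(letters(list(premises) + [conclusion]))
    for row in product([True, False], repeat=len(ls)):
        v = dict(zip(ls, row))
        if all(value(p, v) for p in premises) and not value(conclusion, v):
            return False      # a counterexample valuation
    return True

# P, (P -> Q), therefore Q: valid (modus ponens).
print(valid_argument(['P', ('cond', 'P', 'Q')], 'Q'))   # True
# Q, (P -> Q), therefore P: not valid (affirming the consequent).
print(valid_argument(['Q', ('cond', 'P', 'Q')], 'P'))   # False
```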

There are other, less formal features that a good argument needs. Just because the premises are true does not mean that they are believed, that we have any reason to believe them, or that we could collect evidence for them. It should also be noted that validity only applies to certain types of arguments, particularly deductive arguments. Deductive arguments are intended to be valid. The archetypical example of a deductive argument is a mathematical proof. Inductive arguments, of which scientific arguments provide the archetypical example, are not intended to be valid. The truth of the premises is not intended to guarantee that the conclusion is true. Rather, the truth of the premises is intended to make the truth of the conclusion highly probable or likely. In science, we do not intend to offer mathematical proofs. Rather, we gather evidence.

Formulae and arguments[edit | edit source]

For every valid formula, there is a corresponding valid argument having the valid formula as its conclusion and the empty set as its set of premises. Thus

if and only if


For every valid argument with finitely many premises, there is a corresponding valid formula. Consider a valid argument with as the conclusion and having as its premises . Then

There is then the corresponding valid formula

There corresponds to the valid argument

    

the following valid formula:

Implication[edit | edit source]

You may see some texts reading our arrow as 'implies' and using 'implication' as an alternative for 'conditional'. This is generally decried as a use-mention error. In ordinary English, the following are considered grammatically correct:

(3)    'That there is smoke implies that there is fire'.
(4)    'There is smoke' implies 'there is fire'.

In (3), we have one fact or proposition or whatever (the current favorite among philosophers appears to be proposition) implying another of the same species. In (4), we have one sentence implying another.

But the following is considered incorrect:

There is smoke implies there is fire.

Here, in contrast to (3), there are no quotation marks. Nothing is the subject doing the implying and nothing is the object implied. Rather, we are composing a larger sentence out of smaller ones as if 'implies' were a grammatical conjunction such as 'only if'.

Thus logicians tend to avoid using 'implication' to mean conditional. Rather, they use 'implies' to mean has as a logical consequence and 'implication' to mean valid argument. In doing this, they are following the model of (4) rather than (3). In particular, they read (1) and (2) as ' implies (or does not imply) .




Expressibility[edit | edit source]

Formula truth tables[edit | edit source]

A formula with n sentence letters requires 2^n lines in its truth table. And, for a truth table of m lines, there are 2^m possible formulas—that is, 2^m possible ways to fill in the column under the main connective. Thus, for a sentence of n letters, the number of possible formulas is 2^(2^n).

For example, there are four possible formulas of one sentence letter (requiring a two-line truth table) and 16 possible formulas of two sentence letters (requiring a four-line truth table). We illustrate this with the following tables. The numbered columns represent the different possibilities for the column of a main connective.

 
P   (i)  (ii)  (iii)  (iv)
T    T    T     F      F
F    T    F     T      F

Column (iii) is the negation formula presented earlier.

 
P   Q   (i)  (ii)  (iii)  (iv)  (v)  (vi)  (vii)  (viii)  (ix)  (x)  (xi)  (xii)  (xiii)  (xiv)  (xv)  (xvi)
T   T    T    T     T      T     T     T     T      T      F     F     F     F      F      F      F     F
T   F    T    T     T      T     F     F     F      F      T     T     T     T      F      F      F     F
F   T    T    T     F      F     T     T     F      F      T     T     F     F      T      T      F     F
F   F    T    F     T      F     T     F     T      F      T     F     T     F      T      F      T     F

Column (ii) represents the formula for disjunction, column (v) represents conditional, column (vii) represents biconditional, and column (viii) represents conjunction.

Expressing arbitrary formulas[edit | edit source]

The question arises whether we have enough connectives to represent all the formulas of any number of sentence letters. Remember that each row represents one valuation. We can express that valuation by conjoining sentence letters assigned True under that valuation and negations of sentence letters assigned false under that valuation. The four valuations of the second table above can be expressed as

Now we can express an arbitrary formula by disjoining the valuations under which the formula has the value true. For example, we can express column (x) with:

(1)     

You can confirm by completing the truth table that this produces the desired result. The formula is true when either (a) is true and is false or (b) is false and is true. There is an easier way to express this same formula: . Coming up with a simple way to express an arbitrary formula may require insight, but at least we have an automatic mechanism for finding some way to express it.

Now consider a second example. We want to express a formula of , , and , and we want this to be true under (and only under) the following three valuations.

      (i)   (ii)   (iii)
    True   False   False
    True   True   False
    False   False   True


We can express the three conditions which yield true as

Now we need to say that either the first condition holds or that the second condition holds or that the third condition holds:

(2)     

You can verify by a truth table that it yields the desired result, namely that the formula is true under just the three valuations above.

This technique for expressing arbitrary formulas does not work for formulas evaluating to False in every interpretation. We need at least one interpretation yielding True in order to get the formula started. However, we can use any tautologically false formula to express such formulas. will suffice.
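The procedure just described can be stated as a short algorithm: conjoin the appropriate literals for each valuation that makes the target true, then disjoin the results; if there are no such valuations, fall back to a tautologically false formula. A Python sketch (the &, v, ~ output notation is our own shorthand):

```python
def express_from_true_rows(letters, true_rows):
    """Build a formula (as a string, in our own &, v, ~ shorthand) that is true
    exactly on the given rows. Each row is a tuple of True/False, one value per letter."""
    if not true_rows:
        # No valuation makes the target true: use a tautologically false formula.
        return '({0} & ~{0})'.format(letters[0])
    disjuncts = []
    for row in true_rows:
        literals = [p if val else '~' + p for p, val in zip(letters, row)]
        disjuncts.append('(' + ' & '.join(literals) + ')')
    return ' v '.join(disjuncts)

# Column (x) of the two-letter table above: true exactly when the two letters disagree.
print(express_from_true_rows(['P', 'Q'], [(True, False), (False, True)]))
# (P & ~Q) v (~P & Q)
```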

Normal forms[edit | edit source]

A normal form provides a standardized rule of expression where any formula is equivalent to one which conforms to the rule. It will be useful in the following to define a literal as a sentence letter or its negation (e.g. , and as well as , and ).

Disjunctive normal form[edit | edit source]

We say a formula is in disjunctive normal form if it is a disjunction of conjunctions of literals. An example is . For the purpose of this definition, we admit so-called degenerate disjunctions and conjunctions of only one disjunct or conjunct. Thus we count as being in disjunctive normal form because it is a degenerate (one-place) disjunction of a degenerate (one-place) conjunction. The degeneracy can be removed by converting it to the equivalent formula . We also admit many-place disjunctions and conjunctions for the purposes of this definition, such as . A method for finding the disjunctive normal form of an arbitrary formula is shown above.

Conjunctive normal form[edit | edit source]

There is another common normal form in sentential logic, namely conjunctive normal form. A formula is in conjunctive normal form if it is a conjunction of disjunctions of literals. An example is . Again, we can express arbitrary formulas in conjunctive normal form. First, take the valuations for which the formula evaluates to False. For each such valuation, form a disjunction of the sentence letters the valuation assigns False together with the negations of the sentence letters the valuation assigns True. For the valuation

   :    False
   :    True
   :    False

we form the disjunction

The conjunctive normal form expression of an arbitrary formula is the conjunction of all such disjunctions matching the interpretations for which the formula evaluates to false. The conjunctive normal form equivalent of (1) above is

The conjunctive normal form equivalent of (2) above is
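As with disjunctive normal form, this dual procedure can be sketched as a short algorithm: for each valuation that makes the target formula false, disjoin the sentence letters it assigns False together with the negations of those it assigns True, then conjoin the results (again, the &, v, ~ output notation is our own shorthand).

```python
def cnf_from_false_rows(letters, false_rows):
    """Build a conjunctive-normal-form expression (in our own &, v, ~ shorthand)
    that is false exactly on the given rows."""
    if not false_rows:
        # No valuation makes the target false: use a tautology.
        return '({0} v ~{0})'.format(letters[0])
    conjuncts = []
    for row in false_rows:
        literals = ['~' + p if val else p for p, val in zip(letters, row)]
        conjuncts.append('(' + ' v '.join(literals) + ')')
    return ' & '.join(conjuncts)

# The exclusive-or column again, now described by its two falsifying valuations.
print(cnf_from_false_rows(['P', 'Q'], [(True, True), (False, False)]))
# (~P v ~Q) & (P v Q)
```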

Interdefinability of connectives[edit | edit source]

Negation and conjunction are sufficient to express the other three connectives and indeed any arbitrary formula.

     
     
           


Negation and disjunction are sufficient to express the other three connectives and indeed any arbitrary formula.

     
     
           


Negation and conditional are sufficient to express the other three connectives and indeed any arbitrary formula.

     
     
           

Negation and biconditional are not sufficient to express the other three connectives.
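Each of these interdefinability claims can be checked mechanically: rebuild a connective from the proposed pair and confirm that it agrees with the original under all four valuations. A Python sketch for the negation-and-conjunction case (the function names are our own):

```python
from itertools import product

# The target connectives.
disj   = lambda p, q: p or q
cond   = lambda p, q: (not p) or q
bicond = lambda p, q: p == q

# The same connectives rebuilt using only negation and conjunction.
disj_nc   = lambda p, q: not ((not p) and (not q))      # p or q  ==  not(not p and not q)
cond_nc   = lambda p, q: not (p and (not q))            # p -> q  ==  not(p and not q)
bicond_nc = lambda p, q: (not (p and not q)) and (not (q and not p))

for p, q in product([True, False], repeat=2):
    assert disj(p, q) == disj_nc(p, q)
    assert cond(p, q) == cond_nc(p, q)
    assert bicond(p, q) == bicond_nc(p, q)
print('negation and conjunction express disjunction, conditional, and biconditional')
```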

Joint and alternative denials[edit | edit source]

We have seen that three pairs of connectives are each jointly sufficient to express any arbitrary formula. The question arises, is it possible to express any arbitrary formula with just one connective? The answer is yes, but not with any of our connectives. There are two possible binary connectives each of which, if added to , would be sufficient.

Alternative denial[edit | edit source]

Alternative denial, sometimes called NAND. The usual symbol for this is called the Sheffer stroke, written as | (some authors use ↑). Temporarily add the symbol to and let be True when at least one of or is false. It has the truth table:

 
T T F
T F T
F T T
F F T

We now have the following equivalences.

     
     
     
     
     

Joint denial[edit | edit source]

Joint denial, sometimes called NOR. Temporarily add the symbol to and let be True when both and are false. It has the truth table :

 
T T F
T F F
F T F
F F T

We now have the following equivalences.

     
     
     
     
     
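That a single connective can do all the work is likewise easy to verify by brute force. A Python sketch for alternative denial (nand); joint denial (nor) can be checked in exactly the same way:

```python
from itertools import product

nand = lambda p, q: not (p and q)        # alternative denial: false only when both are true

# The five familiar connectives rebuilt from nand alone.
neg_    = lambda p:    nand(p, p)
conj_   = lambda p, q: nand(nand(p, q), nand(p, q))
disj_   = lambda p, q: nand(nand(p, p), nand(q, q))
cond_   = lambda p, q: nand(p, nand(q, q))
bicond_ = lambda p, q: conj_(cond_(p, q), cond_(q, p))

for p, q in product([True, False], repeat=2):
    assert neg_(p)       == (not p)
    assert conj_(p, q)   == (p and q)
    assert disj_(p, q)   == (p or q)
    assert cond_(p, q)   == ((not p) or q)
    assert bicond_(p, q) == (p == q)
print('nand alone expresses all five connectives')
```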




Properties of Sentential Connectives[edit | edit source]

Here we list some of the more famous, historically important, or otherwise useful equivalences and tautologies. They can be added to the ones listed in Interdefinability of connectives. We can go on at quite some length here, but will try to keep the list somewhat restrained. Remember that for every equivalence of and , there is a related tautology .

Bivalence[edit | edit source]

Every formula has exactly one of two truth values.

     Law of Excluded Middle
     Law of Non-Contradiction

Analogues to arithmetic laws[edit | edit source]

Some familiar laws from arithmetic have analogues in sentential logic.

Reflexivity[edit | edit source]

Conditional and biconditional (but not conjunction and disjunction) are reflexive.

Commutativity[edit | edit source]

Conjunction, disjunction, and biconditional (but not conditional) are commutative.

   is equivalent to   
   is equivalent to   
   is equivalent to   

Associativity[edit | edit source]

Conjunction, disjunction, and biconditional (but not conditional) are associative.

   is equivalent to   
   is equivalent to   
   is equivalent to   

Distribution[edit | edit source]

We list ten distribution laws. Of these, probably the most important are that conjunction and disjunction distribute over each other and that conditional distributes over itself.

   is equivalent to   
   is equivalent to   


   is equivalent to   
   is equivalent to   
   is equivalent to   
   is equivalent to   


   is equivalent to   
   is equivalent to   
   is equivalent to   
   is equivalent to   

Transitivity[edit | edit source]

Conjunction, conditional, and biconditional (but not disjunction) are transitive.

Other tautologies and equivalences[edit | edit source]

Conditionals[edit | edit source]

These tautologies and equivalences are mostly about conditionals.

     Conditional addition
     Conditional addition
   is equivalent to         Contraposition
   is equivalent to         Exportation

Biconditionals[edit | edit source]

These tautologies and equivalences are mostly about biconditionals.

     Biconditional addition
     Biconditional addition
   is equivalent to       is equivalent to   

Miscellaneous[edit | edit source]

We repeat De Morgan's Laws from the Interdefinability of connectives section of Expressibility and add two additional forms. We also list some additional tautologies and equivalences. A mechanical check of such equivalences is sketched in the code after the list below.

     Idempotence for conjunction
     Idempotence for disjunction
     Disjunctive addition
     Disjunctive addition
           De Morgan's Laws
           De Morgan's Laws
           De Morgan's Laws
           De Morgan's Laws
   is equivalent to         Double Negation
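As noted above, each equivalence in this chapter can be confirmed mechanically: two formulae are equivalent exactly when they agree under every valuation of their sentence letters. A Python sketch checking one of De Morgan's Laws and Double Negation (the tuple encoding of formulae is our own illustration):

```python
from itertools import product

def value(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {'and': a and b, 'or': a or b, 'cond': (not a) or b, 'bicond': a == b}[f[0]]

def equivalent(f, g, letters):
    """True if f and g receive the same truth value under every valuation of the letters."""
    return all(value(f, dict(zip(letters, row))) == value(g, dict(zip(letters, row)))
               for row in product([True, False], repeat=len(letters)))

# De Morgan: not (P and Q) is equivalent to (not P) or (not Q).
print(equivalent(('not', ('and', 'P', 'Q')),
                 ('or', ('not', 'P'), ('not', 'Q')), ['P', 'Q']))    # True
# Double Negation: not not P is equivalent to P.
print(equivalent(('not', ('not', 'P')), 'P', ['P']))                 # True
```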

Deduction and reduction principles[edit | edit source]

The following two principles will be used in constructing our derivation system on a later page. They can easily be proven, but—since they are neither tautologies nor equivalences—it takes more than a mere truth table to do so. We will not attempt the proof here.

Deduction principle[edit | edit source]

Let and both be formulae, and let be a set of formulae.

Reduction principle[edit | edit source]

Let and both be formulae, and let be a set of formulae.




Substitution and Interchange[edit | edit source]

This page will use the notions of occurrence and subformula introduced at the Additional terminology section of Formal Syntax. These notions have been little used if at all since then, so you might want to review them.

Substitution[edit | edit source]

Tautological forms[edit | edit source]

We have introduced a number of tautologies, one example being

(1)   

Use the metavariables and to replace and in (1). This produces the form

(2)   

As it turns out, any formula matching this form is a tautology. Thus, for example, let and . Then,

(3)   

is a tautology. This process can be generalized to all tautologies: for any tautology, find its explicit form by replacing each sentence letter with a distinct metavariable (written as a Greek letter, as shown in (2)). We can call this a tautological form, which is a metalogical expression rather than a formula. Any instance of this tautological form is a tautology.

Substitution instances[edit | edit source]

The preceding illustrated how we can generate new tautologies from old ones via tautological forms. Here, we will show how to generate tautologies without resorting to tautological forms. To do this, we define a substitution instance of a formula. Any substitution instance of a tautology is also a tautology.

First, we define the simple substitution instance of a formula for a sentence letter. Let and be formulae and be a sentence letter. The simple substitution instance is the result of replacing every occurrence of in with an occurrence of . A substitution instance of formulae for sentence letters is the result of a chain of simple substitution instances. In particular, a chain of zero simple substitution instances starting from is a substitution instance and indeed is just itself. Thus, any formula is a substitution instance of itself.

It turns out that if is a tautology, then so is any simple substitution instance . If we start with a tautology and generate a chain of simple substitution instances, then every formula in the chain is also a tautology. Thus any (not necessarily simple) substitution instance of a tautology is also a tautology.
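When formulae are represented as trees, a simple substitution instance is easy to compute: replace every occurrence of the chosen sentence letter by the chosen formula. A Python sketch (the tuple encoding and the particular tautology used are our own illustration):

```python
def substitute(formula, letter, replacement):
    """Simple substitution instance: replace every occurrence of `letter` in `formula`
    with `replacement`."""
    if isinstance(formula, str):
        return replacement if formula == letter else formula
    return (formula[0],) + tuple(substitute(sub, letter, replacement)
                                 for sub in formula[1:])

# Start from the tautology P or not P and substitute (Q and R) for P.
excluded_middle = ('or', 'P', ('not', 'P'))
instance = substitute(excluded_middle, 'P', ('and', 'Q', 'R'))
print(instance)   # ('or', ('and', 'Q', 'R'), ('not', ('and', 'Q', 'R')))
```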

Substitution examples[edit | edit source]

Consider (1) again. We substitute for every occurrence of in (1). This gives us the following simple substitution instance of (1):

(4)   

In this, we substitute for . That gives us (3) as a simple substitution instance of (4). Since (3) is the result of a chain of two simple substitution instances, it is a (non-simple) substitution instance of (1). Since (1) is a tautology, so is (3). We can express the chain of substitutions as

Take another example, also starting from (1). We want to obtain

(5)   

Our first attempt might be to substitute for ,

(6)   

This is indeed a tautology, but it is not the one we wanted. Instead, we substitute for in (1), obtaining

Now substitute for obtaining

Finally, substituting for gets us the result we wanted, namely (5). Since (1) is a tautology, so is (5). We can express the chain of substitutions as

Simultaneous substitutions[edit | edit source]

We can compress a chain of simple substitutions into a single complex substitution. Let , , , ... be formulae; let , , ... be sentence letters. We define a simultaneous substitution instance of formulae for sentence letters to be the result of starting with and simultaneously replacing with , with , .... We can regenerate our examples.

The previously generated formula (3) is

Similarly, (5) is

Finally (6) is


When we get to predicate logic, simultaneous substitution instances will not be available. That is why we defined substitution instance by reference to a chain of simple substitution instances rather than as a simultaneous substitution instance.

Interchange[edit | edit source]

Interchange of equivalent subformulae[edit | edit source]

We previously saw the following equivalence at Properties of Sentential Connectives:

(7)       is equivalent to   

You then might expect the following equivalence:

   is equivalent to   

This expectation is correct; the two formulae are equivalent. Let and be equivalent formulae. Let be a formula in which occurs as a subformula. Finally, let be the result of replacing in at least one (not necessarily all) occurrences of with . Then and are equivalent. This replacement is called an interchange.
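Interchange is also easy to model on a tree representation of formulae: find occurrences of the subformula and put the equivalent formula in their place. The following Python sketch replaces every occurrence (the principle only requires at least one); the equivalence used, a conditional rewritten as a disjunction, is our own example.

```python
from itertools import product

def value(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {'and': a and b, 'or': a or b, 'cond': (not a) or b, 'bicond': a == b}[f[0]]

def interchange(formula, target, replacement):
    """Replace every occurrence of the subformula `target` in `formula` with `replacement`."""
    if formula == target:
        return replacement
    if isinstance(formula, str):
        return formula
    return (formula[0],) + tuple(interchange(sub, target, replacement) for sub in formula[1:])

def equivalent(f, g, letters):
    return all(value(f, dict(zip(letters, row))) == value(g, dict(zip(letters, row)))
               for row in product([True, False], repeat=len(letters)))

# (P -> Q) is equivalent to (not P or Q); interchange it inside a larger formula.
before = ('and', ('cond', 'P', 'Q'), 'R')
after  = interchange(before, ('cond', 'P', 'Q'), ('or', ('not', 'P'), 'Q'))
print(after)                                       # ('and', ('or', ('not', 'P'), 'Q'), 'R')
print(equivalent(before, after, ['P', 'Q', 'R']))  # True: interchange preserves equivalence
```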

For a second example, suppose we want to generate the equivalence

(8)       is equivalent to   

We note the following equivalence:

(9)       is equivalent to   

These two formulae can be confirmed to be equivalent either by truth table or, more easily, by substituting for in both formulae of (7).

This substitution does indeed establish (9) as an equivalence. We already noted that and are equivalent if and only if is a tautology. Based on (7), we get the tautology

Our substitution then yields

which is also a tautology. The corresponding equivalence is then (9).

Based on (9), we can now replace the consequent of with its equivalent. This generates the desired equivalence, namely (8).

Every formula equivalent to a tautology is also a tautology. Thus an interchange of equivalent subformulae within a tautology results in a tautology. For example, we can use the substitution instance of (7):

   is equivalent to   

together with the tautology previously seen at Properties of Sentential Connectives:

to obtain

as a new tautology.

Interchange example[edit | edit source]

As an example, we will use the interdefinability of connectives to express

(10)   

using only conditionals and negations.

Based on

   is equivalent to   

we get the substitution instance

   is equivalent to   

which in turn allows us to replace the appropriate subformula in (10) to get:

(11)   

The equivalence

is equivalent to

together with the appropriate substitution gives us

(12)   

as equivalent to (11).

Finally, applying

     

together with the appropriate substitution, yields our final result:

Summary[edit | edit source]

This page has presented two claims.

  • A substitution instance of a tautology is also a tautology.
  • Given a formula, the result of interchanging a subformula with an equivalent is a formula equivalent to the given formula.

These claims are not trivial observations or the result of a simple truth table. They are substantial claims that need proof. Proofs are available in a number of standard metalogic textbooks, but are not presented here.




Translations[edit | edit source]

The page The Sentential Language gave a very brief look at translation between English and . We look at this in more detail here.

English sentential connectives[edit | edit source]

In the following discussion, we will assume the following assignment of English sentences to sentence letters:

 2 is a prime number.
 2 is an even number.
 3 is an even number.

Not[edit | edit source]

The canonical translation of into English is 'it is not the case that'. Given the assignment above,

(1)   

translates as

It is not the case that 2 is a prime number.

But we usually express negation in English simply by 'not' or by adding the contraction 'n't' to the end of a word. Thus (1) can also translate either of:

2 is not a prime number.
2 isn't a prime number.

If[edit | edit source]

The canonical translation of into English is 'if ... then ...'. Thus

(2)   

translates into English as

(3)    If 2 is a prime number, then 2 is an even number.


Objections have been raised to the canonical translation, and our example may illustrate the problem. It may seem odd to count (3) as true; however, our semantic rules do indeed count (2) as true (because both and are true). We might expect that, if a conditional and its antecedent are true, the consequent is true because the antecedent is. Perhaps we expect a general rule

(4)    if x is a prime number, then x is an even number

to be true—but this rule is clearly false. In any case, we often expect the truth of the antecedent (if it is indeed true) to be somehow relevant to the truth of the consequent (if that is indeed true). (2) is an exception to the usual relevance of a number being prime to a number being even.

The conditional of is called the material conditional, in contrast to the strict conditional or the counterfactual conditional. Relevance logic attempts to define a conditional which meets these objections. See also the Stanford Encyclopedia of Philosophy entry on relevance logic.

It is generally accepted today that not all aspects of an expression's linguistic use are part of its linguistic meaning. Some have suggested that the objections to reading 'if' as a material conditional are based on conversational implicature and so are not based on the meaning of 'if'. See the Stanford Encyclopedia of Philosophy entry on implicature for more information. As much a simplifying assumption as anything else, we will adopt this point of view. We can also point out in our defense that translations need not be exact to be useful. Even if our simplifying assumption is incorrect, is still the closest expression we have in to 'if'. It should also be noted that, in mathematical statements and proofs, mathematicians always use 'if' as a material conditional. They accept (2) and (3) as translations of each other and do not find it odd to count (3) as true.

'If' can occur at the beginning of the conditional or in the middle. The 'then' can be missing. Thus both of the following (in addition to (3)) translate as (2).

If 2 is a prime number, 2 is an even number.
2 is an even number if 2 is a prime number.

Implies[edit | edit source]

We do not translate 'implies' into . In particular, we reject

2 is a prime number implies 2 is an even number.

as grammatically ill-formed and therefore not translatable as (2). See the Implication section of Validity for more details.

Only if[edit | edit source]

The English

(5)    2 is a prime number only if 2 is an even number

is equivalent to the English

If 2 is not an even number, then 2 is not a prime number.

This, in turn, translates into as

(6)   

We saw at Conditionals section of Properties of Sentential Connectives that (6) is equivalent to

(7)   

Many logic books give this as the preferred translation of (5) into . This allows the convenient rule: 'if' always introduces an antecedent, while 'only if' always introduces a consequent.

Like 'if', 'only if' can appear in either the first or middle position of a conditional. (5) is equivalent to

Only if 2 is an even number, is 2 a prime number.

Provided that[edit | edit source]

'Provided that'—and similar expressions such as 'given that' and 'assuming that'—can be used equivalently with 'if'. Thus each of the following translates into as (2).

2 is an even number provided that 2 is a prime number.
2 is an even number assuming that 2 is a prime number.
Provided that 2 is a prime number, 2 is an even number.


Prefixing 'provided that' with 'only' works the same as prefixing 'if' with 'only'. Thus each of the following translates into as (6) or (7).

2 is a prime number only provided that 2 is an even number.
2 is a prime number only assuming that 2 is an even number.
Only provided that 2 is an even number, is 2 a prime number.

Or[edit | edit source]

The canonical translation of into English is '[either] ... or ...' (where the 'either' is optional). Thus

(8)   

translates into English as

(9)    2 is a prime number or 2 is an even number

or

Either 2 is a prime number or 2 is an even number.


We saw at the Interdefinability of connectives section of Expressibility that (8) is equivalent to

Just as there were objections to understanding 'if' as , there are similar objections to understanding 'or' as . We will again make the simplifying assumption that we can ignore these objections.

The English 'or' has both an inclusive and—somewhat controversially—an exclusive use. The inclusive or is true when at least one disjunct is true; the exclusive or is true when exactly one disjunct is true. The operator matches the inclusive use. The inclusive use becomes especially apparent in negations. If President Bush promises not to invade Iran or North Korea, not even the best Republican spin doctors will claim he can keep his promise by invading both. The exclusive reading of (9) translates into as

or more simply (and less intuitively) as


In English, telescoping is possible with 'or'. Thus, (8) translates

2 is either a prime number or an even number.

Similarly,

translates

2 or 3 is an even number.

Unless[edit | edit source]

'Unless' has the same meaning as 'if not'. Thus

(10)   

translates

(11)    2 is a prime number unless 2 is an even number

and

(12)    Unless 2 is an even number, 2 is a prime number.

We saw at the Interdefinability of connectives section of Expressibility that (10) is equivalent to (8). Many logic books give (8) as the preferred translation of (11) or (12) into .

Nor[edit | edit source]

At the Joint denial section of Expressibility, we temporarily added to as the connective for joint denial. If we had that connective still available to us, we could translate

Neither 2 is a prime number nor 2 is an even number

as

.

However, since is not really in the vocabulary of , we need to paraphrase. Either of the following will do:

(13)    .
(14)    .


The same telescoping applies as with 'or'.

2 is neither a prime number nor an even number

translates into as either (13) or (14). Similarly,

Neither 2 nor 3 is an even number

translates as either of

.
.

And[edit | edit source]

The canonical translation of into English is '[both] ... and ...' (where the 'both' is optional). Thus

(15)   

translates into English as

2 is a prime number and 2 is an even number

or

Both 2 is a prime number and 2 is an even number.


Our translation of 'and' as is not particularly controversial. However, 'and' is sometimes used to convey temporal order. The two sentences

She got married and got pregnant.
She got pregnant and got married.

are generally heard rather differently.

'And' has the same telescoping as 'or'.

2 is both a prime number and an even number

translates into as (15)

Both 2 and 3 are even numbers

translates as

.

If and only if[edit | edit source]

The canonical translation of into English is '... if and only if ...'. Thus

(16)   

translates into English as

2 is a prime number if and only if 2 is an even number.


The English sentence

(17)    2 is a prime number if and only if 2 is an even number

is a shortened form of

2 is a prime number if 2 is an even number, and 2 is a prime number only if 2 is an even number

which translates as

or more concisely as the equivalent formula

(18)    .

We saw at the Interdefinability of connectives section of Expressibility that (18) is equivalent to (16). Issues concerning the material versus non-material interpretations of 'if' apply to 'if and only if' as well.

Iff[edit | edit source]

Mathematicians and sometimes others use 'iff' as an abbreviated form of 'if and only if'. So

2 is a prime number iff 2 is an even number

abbreviates (17) and translates as (16).

Examples[edit | edit source]



Derivations[edit | edit source]

Derivations[edit | edit source]

In Validity, we introduced the notion of validity for formulae and for arguments. In sentential logic, a valid formula is a tautology. Up to now, we could show a formula to be valid (a tautology) in the following ways.

  • Do a truth table for the formula (a brute-force check of this kind is sketched just below this list).
  • Obtain the formula as a substitution instance of a formula already known to be valid.
  • Obtain the formula by applying interchange of equivalents to a formula already known to be valid.
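
The first of these methods is easy to mechanize. Here is a minimal sketch (in Python; the example formula and the representation of formulae as functions on valuations are assumptions made purely for illustration):

from itertools import product

def is_tautology(formula, letters):
    """Brute-force truth table: `formula` maps a valuation (a dict assigning
    True/False to each sentence letter) to a truth value."""
    return all(formula(dict(zip(letters, values)))
               for values in product([True, False], repeat=len(letters)))

# An assumed example: the formula 'P -> (Q -> P)', with '->' unpacked
# as 'not ... or ...'.
print(is_tautology(lambda v: (not v["P"]) or ((not v["Q"]) or v["P"]),
                   ["P", "Q"]))   # True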

All three of these methods fail in predicate logic, however, because truth tables are unavailable there. And once truth tables are gone, the second and third methods fail as well, since they rely on already having formulae known to be valid. An alternative method for showing a formula valid—one that does not use truth tables—is the use of derivations. This page and those that follow introduce this technique. Note that the claim that a derivation shows an argument to be valid assumes a sound derivation system; see Soundness and validity below.

A derivation is a series of numbered lines, each line consisting of a formula with an annotation. The annotations provide the justification for adding the line to the derivation. A derivation is a highly formalized analogue to—or perhaps a model of—a mathematical proof.

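As a concrete, purely illustrative picture of this, a derivation can be thought of as a list of records, one per line. The sketch below is in Python; the field names and example formulae are assumptions made for the sketch, not official notation.

from dataclasses import dataclass

@dataclass
class DerivationLine:
    number: int       # lets later lines cite this one
    formula: str      # the formula derived at this line
    annotation: str   # the justification for entering the line

# A small fragment with assumed formulas and annotations:
fragment = [
    DerivationLine(1, "P -> Q", "Premise"),
    DerivationLine(2, "P", "Premise"),
    DerivationLine(3, "Q", "1, 2 CE"),
]
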
A typical derivation system will allow some of the following types of lines:

  • A line may be an axiom. The derivation system may specify a set of formulae as axioms. These are accepted as true for any derivation. For sentential logic the set of axioms is a fixed subset of tautologies.
  • A line may be an assumption. A derivation may have several types of assumptions. The following cover the standard cases.
  • A premise. When attempting to show the validity of an argument, a premise of that argument may be assumed.
  • A temporary assumption for use in a subderivation. Such assumptions are intended to be active for only part of a derivation and must be discharged (made inactive) before the derivation is considered complete. Subderivations will be introduced on a later page.
  • A line may result from applying an inference rule to previous lines. An inference is a syntactic transformation of previous lines to generate a new line. Inferences are required to follow one of a fixed set of patterns defined by the derivation system. These patterns are the system's inference rules. The idea is that any inference fitting an inference rule should be a valid argument.

Soundness and validity[edit | edit source]

We noted in Formal Semantics that a formal language such as ours can be interpreted via several alternative and even competing sets of semantic rules. Multiple derivation systems can also be defined for a given syntax-semantics pair. A triple consisting of a formal syntax, a formal semantics, and a derivation system is a logical system.

A derivation is intended to show an argument to be valid. A derivation of a zero-premise argument is intended to show its conclusion to be a valid formula—in sentential logic this means showing it to be a tautology. Given a logical system, the derivation system is called sound if it achieves these goals. That is, a derivation system is sound (has the property of soundness) if every formula (and argument) derivable in its derivation system is valid (given a syntax and a semantics).

Another desirable quality of a derivation system is completeness. Given a logical system, its derivation system is said to be complete if every valid formula is derivable. However, there are some logics for which no derivation system is or can be complete.

Soundness and completeness are substantial results. Their proofs will not be given here, but they are available in many standard metalogic textbooks.

Turnstiles[edit | edit source]

The symbol '⊨' is sometimes called a turnstile, in particular the semantic turnstile. We previously introduced the following three uses of this symbol.

  (1)     satisfies .
  (2)     is valid.
  (3)     implies (has as a logical consequence) .

Here, as introduced in Validity, the extra symbol in (1) stands for a valuation and the one in (3) stands for a set of premises.


Derivations have a counterpart to the semantic turnstile, namely the syntactic turnstile '⊢'. Item (1) above has no syntactic counterpart. However, (2) and (3) above have the following counterparts.

  (4)     is provable.
  (5)     proves (has as a derivational consequence) .


Item (4) is the case if and only if there is a correct derivation of the formula from no premises. Similarly, (5) is the case if and only if there is a correct derivation of the formula which takes the members of the premise set as premises.

The negations of (4) and (5) above are

  (6)  
  (7)  


We can now define soundness and completeness as follows:

  • Given a logical system, its derivation system is sound if and only if every instance of (5) yields the corresponding instance of (3): whatever a set of premises proves, it also implies.
  • Given a logical system, its derivation system is complete if and only if every instance of (3) yields the corresponding instance of (5): whatever a set of premises implies, it also proves.



Inference Rules[edit | edit source]

Overview[edit | edit source]

Inference rules will be formatted as in the following example.

Conditional Elimination (CE)

The name of this inference rule is 'Conditional Elimination', which is abbreviated as 'CE'. We can apply this rule if formulae having the forms shown above the line appear as active lines of the derivation. These are called the antecedent lines for this inference. Applying the rule adds a formula having the form shown below the line. This is called the consequent line for this inference. The annotation for the newly derived line is the line numbers of the antecedent lines together with the abbreviation 'CE'.

Note. You might see 'premise line' and 'conclusion line' used for antecedent line and consequent line. You may see other terminology as well, though most textbooks avoid giving any special terminology here.

Each sentential connective will have two inference rules, one each of the following types.

  • An introduction rule. The introduction rule for a given connective allows us to derive a formula having the given connective as its main connective.
  • An elimination rule. The elimination rule for a given connective allows us to use a formula already appearing in the derivation having the given connective as its main connective.

Three rules (Negation Introduction, Negation Elimination, and Conditional Introduction) will be deferred to a later page. These are so-called discharge rules which will be explained when we get to subderivations.

Three rules (Conjunction Elimination, Disjunction Introduction, and Biconditional Elimination) will have two forms each. We somewhat arbitrarily count the two patterns as forms of the same rule rather than separate rules.

The validity of the inferences on this page can be shown by truth table.
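
For instance, Conditional Elimination can be checked as follows. This is a sketch in Python confirming that no truth-value assignment makes both antecedent lines true while the consequent line is false:

from itertools import product

def implies(a, b):
    return (not a) or b

# Conditional Elimination: from a conditional and its antecedent, infer
# its consequent. Check that the inference never leads from truths to a
# falsehood.
print(all(implies(implies(p, q) and p, q)
          for p, q in product([True, False], repeat=2)))   # True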

Inference rules[edit | edit source]

Negation[edit | edit source]

Negation Introduction (NI)

Deferred to a later page.


Negation Elimination (NE)

Deferred to a later page.

Conjunction[edit | edit source]

Conjunction Introduction (KI)

Conjunction Introduction traditionally goes by the name Adjunction or Conjunction.


Conjunction Elimination, Form I (KE)


Conjunction Elimination, Form II (KE)

Conjunction Elimination traditionally goes by the name Simplification.

Disjunction[edit | edit source]

Disjunction Introduction, Form I (DI)


Disjunction Introduction, Form II (DI)

Disjunction Introduction traditionally goes by the name Addition.


Disjunction Elimination (DE)

Disjunction Elimination traditionally goes by the name Separation of Cases.

Conditional[edit | edit source]

Conditional Introduction (CI)

Deferred to a later page.


Conditional Elimination (CE)

Conditional Elimination traditionally goes by the Latin name Modus Ponens or, less often, by Affirming the Antecedent.

Biconditional[edit | edit source]

Biconditional Introduction (BI)


Biconditional Elimination, Form I (BE)


Biconditional Elimination, Form II (BE)

Examples[edit | edit source]

Inference rules are easy enough to apply. From the lines

(1)   

and

(2)   

we can apply Conditional Elimination to add

(3)   

to a derivation.

The annotation will be the line numbers of (1) and (2) and the abbreviation for Conditional Elimination, namely '1, 2 CE'. The order of the antecedent lines does not matter; the inference is allowed regardless of whether (1) appears before or after (2).

It must be remembered that inference rules are strictly syntactical. Semantically obvious variations are not allowed. It is not allowed, for example, to derive (3) from (1) and

(4)   

However, you can get from (1) and (4) to (3) by first deriving

(5)   

and

(6)   

by Conjunction Elimination (KE). Then you can derive (2) by Conjunction Introduction (KI) and finally (3) from (1) and (2) by Conditional Elimination (CE) as before. Some derivation systems have a rule, often called Tautological Implication, allowing you to derive any tautological consequence of previous lines. However, this should be seen as an (admittedly useful) abbreviation. On later pages, we will implement a restricted version of this abbreviation.

It is generally useful to break down premises and other assumptions (to be introduced on a later page) by applying elimination rules—and then to continue breaking down the results. Supposing that is why we applied CE to (1) and (2), it will likely be useful to derive

(7)   

and

(8)   

by applying Biconditional Elimination (BE) to (3). To break things down further, you might then attempt to derive the antecedent of (7) or of (8) so that you can apply CE to it.

If you know what line you want to derive, you can build it up by applying introduction rules. That was the strategy for deriving (2) from (5) and (6).



Constructing a Simple Derivation[edit | edit source]

Our derivations consist of two types of elements.

  • Derived lines. A derived line has three parts:
  • Line number. This allows the line to be referred to later.
  • Formula. The purpose of a derivation is to derive formulae, and this is the formula that has been derived at this line.
  • Annotation. This specifies the justification for entering the formula into the derivation.
  • Fencing. These include:
  • Vertical lines between the line number and the formula. These are used to set off subderivations which we will get to in the next module.
  • Horizontal lines separating premises and temporary assumptions from other lines. When we get to predicate logic, there are restrictions on using premises and temporary assumptions. Setting them off in an easy-to-recognize fashion aids in adhering to the restrictions.

We often speak informally of the formula as if it were the entire line, but the line also includes the line number and the annotation.

Rules[edit | edit source]

Premises[edit | edit source]

The annotation for a premise is 'Premise'. We require that all premises used in the derivation appear as its first lines; no non-premise line is allowed to appear before a premise. In theory, an argument can have infinitely many premises. However, a derivation has only finitely many lines, so only finitely many premises can be used in it. Thus we do not require that all of the argument's premises appear before other lines—that would be impossible for arguments with infinitely many premises—but we do require that every premise which does appear in the derivation appears before any other line.

The requirement that premises used in the derivation appear as its first lines is stricter than absolutely necessary. However, certain restrictions that will be needed when we get to predicate logic make the requirement at least a useful convention.

Inference rules[edit | edit source]

We introduced all but three inference rules in the previous module, and will introduce the other three in the next module.

Axioms[edit | edit source]

This derivation system does not have any axioms.

An example derivation[edit | edit source]

We will construct a derivation for the following argument:

    


First, we enter the premises into the derivation:

 
1.     Premise
2.     Premise
3.     Premise


Note the vertical line between the line numbers and the formulae. That is part of the fencing that controls subderivations. We will get to subderivations in the next module. Until then, we simply put a single vertical line the length of the derivation. Note also the horizontal line under the premises. This is fencing that helps distinguish the premises from the other lines in the derivation.

Now we need to use the premises. Applying KE to the first premise twice, we add the following lines:

 
4.     1 KE
5.     1 KE


Now we need to use the second premise by applying CE. Since CE has two antecedent lines, we first need to derive the other line that we will need. We thus add these lines:

 
6.     4 DI
7.     2, 6 CE


Now we will use the third premise by applying CE. Again, we first need to derive the other line we will need. The new lines are:

 
8.     5, 7 KI
9.     3, 8 CE


Line 9 is the conclusion of our argument, so we are done. The conclusion does not always fall into our lap so nicely, but here it did. The complete derivation runs:

 
1.     Premise
2.     Premise
3.     Premise
4.     1 KE
5.     1 KE
6.     4 DI
7.     2, 6 CE
8.     5, 7 KI
9.     3, 8 CE



Subderivations and Discharge Rules[edit | edit source]

As already seen, we need three more inference rules, Conditional Introduction (CI), Negation Introduction (NI), and Negation Elimination (NE). These require subderivations.

Deriving conditionals[edit | edit source]

Example derivation[edit | edit source]

We begin with an example derivation which illustrates Conditional Introduction, then follow with an explanation. A derivation for the argument

    

is as follows:

 
1.     Premise
 
2.       Assumption
3.       1 KE
 
4.     2–3 CI


Lines 2 and 3 constitute a subderivation. It starts by assuming the desired formula's antecedent and ends by deriving the desired formula's consequent. There are two vertical fences between the line numbers and the formulae to set it off from the rest of the derivation and to indicate its subordinate status. Line 2 has a horizontal fence under it to separate the assumption from the rest of the subderivation. Line 4 is the application of Conditional Introduction. It follows not from one or two individual lines but from the entire subderivation (lines 2–3) as a whole.

Conditional Introduction is a discharge rule. It discharges (makes inactive) the subderivation's assumption and indeed makes the entire subderivation inactive. Once we apply a discharge rule, no line from the subderivation (here, lines 2 and 3) can be further used in the derivation.

The Conditional Introduction rule[edit | edit source]

To derive a conditional, the rule of Conditional Introduction (CI) is applied by first assuming the conditional's antecedent in a subderivation and then deriving its consequent as the conclusion of that subderivation. Symbolically, CI is written as

Here, the consequent line is not inferred from one or more antecedent lines, but from a subderivation as a whole. The annotation is the range of lines occupied by the subderivation and the abbreviation CI. Unlike previously introduced inference rules, Conditional Introduction cannot be justified by a truth table. Rather it is justified by the Deduction Principle introduced at Properties of Sentential Connectives. The intuition behind why we assume the antecedent in order to derive the consequent is this: the conditional is true by definition whenever its antecedent is false, so if we can also show that the consequent is true whenever the antecedent happens to be true, then the conditional must be true.
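
The following sketch illustrates the semantic idea behind the Deduction Principle for one small assumed example (the premise set and the formulas are inventions for illustration): a set of premises entails a conditional exactly when the premises together with the conditional's antecedent entail its consequent.

from itertools import product

def implies(a, b):
    return (not a) or b

def gamma(p, q, r):
    # the single assumed premise: P -> (Q and R)
    return implies(p, q and r)

rows = list(product([True, False], repeat=3))

# Does Gamma entail 'P -> Q'?
entails_conditional = all(implies(gamma(p, q, r), implies(p, q)) for p, q, r in rows)
# Does Gamma together with 'P' entail 'Q'?
entails_consequent = all(implies(gamma(p, q, r) and p, q) for p, q, r in rows)

print(entails_conditional, entails_consequent)   # True True: the two checks agree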

Note that the antecedent subderivation can consist of a single line serving both as the assumed antecedent and as the derived consequent, as in the following derivation of

  
 
1.       Assumption
 
2.     1 CI

Negations[edit | edit source]

Example derivation[edit | edit source]

To illustrate Negation Introduction, we will provide a derivation for the argument

    


 
1.     Premise
2.     Premise
 
3.       Assumption
4.       2 KE
5.       3, 4 CE
6.       2 KE
 
7.     3–6 NI
8.     1, 7 CE
9.     8 DI


Lines 3 through 6 constitute a subderivation. It starts by assuming the desired formula's opposite and ends by deriving a contradiction (a formula and its negation). As before, there are two vertical fences between the line numbers and the formulae to set it off from the rest of the derivation and to indicate its subordinate status. And the horizontal fence under line 3 again separates the assumption from the rest of the subderivation. Line 7, which follows from the entire subderivation, is the application of Negation Introduction.

At line 9, note that the annotation '5 DI' would be incorrect. Although inferring the formula at line 9 from the one at line 5 fits the DI pattern, line 5 is no longer active when we get to line 9. Thus we are not allowed to derive anything from line 5 at that point.

The Negation Introduction rule[edit | edit source]

Negation Introduction (NI)

The consequent line is inferred from the whole subderivation. The annotation is the range of lines occupied by the subderivation together with the abbreviation 'NI'. Negation Introduction sometimes goes by the Latin name Reductio ad Absurdum or by the name Proof by Contradiction.

Like Conditional Introduction, Negation Introduction cannot be justified by a truth table. Rather it is justified by the Reductio Principle introduced at Properties of Sentential Connectives.
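
Again purely for illustration, the semantic idea behind the Reductio Principle can be checked by brute force on a small assumed example: if a set of premises together with an assumption entails both a formula and its negation, the premises entail the negation of the assumption.

from itertools import product

def implies(a, b):
    return (not a) or b

def gamma(p, q):
    # the assumed premises: P -> Q and P -> not Q
    return implies(p, q) and implies(p, not q)

rows = list(product([True, False], repeat=2))

entails_q     = all(implies(gamma(p, q) and p, q)     for p, q in rows)
entails_not_q = all(implies(gamma(p, q) and p, not q) for p, q in rows)
entails_not_p = all(implies(gamma(p, q), not p)       for p, q in rows)

print(entails_q, entails_not_q, entails_not_p)   # True True True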

Another example derivation[edit | edit source]

To illustrate Negation Elimination, we will provide a derivation for the argument

    


 
1.     Premise
 
2.       Assumption
3.       2 DI
4.       1 KE
 
5.     2–4 NE


Lines 2 through 4 constitute a subderivation. As in the previous example, it starts by assuming the desired formula's opposite and ends by deriving a contradiction (a formula and its negation). Line 5, which follows from the entire subderivation, is the application of Negation Elimination.

The Negation Elimination rule[edit | edit source]

Negation Elimination (NE)

The consequent line is inferred from the whole subderivation. The annotation is the range of lines occupied by the subderivation together with the abbreviation 'NE'. Like Negation Introduction, Negation Elimination sometimes goes by the Latin name Reductio ad Absurdum or by the name Proof by Contradiction.

Like Negation Introduction, Negation Elimination is justified by the Reductio Principle introduced at Properties of Sentential Connectives. This rule's place in the Introduction/Elimination naming convention is somewhat more awkward than for the other rules. Unlike the other elimination rules, the negation that gets eliminated by this rule does not occur in an already derived line. Rather the eliminated negation occurs in the assumption of the subderivation.

Terminology[edit | edit source]

The inference rules introduced in this module (Conditional Introduction, Negation Introduction, and Negation Elimination) are discharge rules. For lack of a better term, we can call the inference rules introduced in Inference Rules 'standard rules'. A standard rule is an inference rule whose antecedent is a set of lines. A discharge rule is an inference rule whose antecedent is a subderivation.

The depth of a line in a derivation is the number of fences standing between the line number and the formula. All lines of a derivation have a depth of at least one. Each temporary assumption increases the depth by one. Each discharge rule decreases the depth by one.

An active line is a line that is available for use as an antecedent line for a standard inference rule. In particular, it is a line whose depth is less than or equal to the depth of the current line and which has not been made inactive by the discharge of a subderivation containing it. An inactive line is a line that is not active.

A discharge rule is said to discharge an assumption. It makes all lines in its antecedent subderivation inactive.



Constructing a Complex Derivation[edit | edit source]

An example derivation[edit | edit source]

Subderivations can be nested. For an example, we provide a derivation for the argument

    

We begin with the premises and then assume the antecedent of the conclusion.

Note. Each time we begin a new subderivation and enter a temporary assumption, there is a specific formula we are hoping to derive when it comes time to end the subderivation and discharge the assumption. To make things easier to follow, we will add this formula to the annotation of the assumption. That formula will not officially be part of the annotation and does not affect the correctness of the derivation. Instead, it will serve as an informal reminder to ourselves of where we are going.

 
1.     Premise
2.     Premise
3.     Premise
 
4.       Assumption   


This starts a subderivation to derive the argument's conclusion. Now we will try a Disjunction Elimination (DE) to derive its consequent:


This will require showing the two conditionals we need for the antecedent lines of a DE, namely:

and


We begin with the first of these conditionals.

 
5.         Assumption   
     
6.           Assumption   


This subderivation is easily finished.

 
7.           5, 6 KI
8.           1, 7 CE
9.           2 KE


Now we are ready to discharge the two assumptions at Lines 5 and 6.

 
10.         6–9 NI
   
11.       5–10 CI


Now it's time for the second conditional needed for our DE planned back at Line 4. We begin.

 
12.         Assumption   
     
13.           Assumption   
14.           2 KE
15.           3, 14 CE


Note that we have a contradiction between Lines 12 and 15. But line 12 is in the wrong place. We need it in the same subderivation as Line 15. A silly trick at Lines 16 and 17 below will accomplish that. Then the assumptions at Lines 12 and 13 can be discharged.

 
16.           12, 12 KI
17.           16 KE
     
18.         13–17 NI
   
19.       12–18 CI


Finally, with Lines 4, 11, and 19, we can perform the DE we've been wanting since Line 4.

 
20.       4, 11, 19 DE


Now to finish the derivation by discharging the assumption at Line 4.

 
21.     4–20 CI

The complete derivation[edit | edit source]

Here is the completed derivation.

 
1.     Premise
2.     Premise
3.     Premise
 
4.       Assumption   
   
5.         Assumption   
     
6.           Assumption   
7.           5, 6 KI
8.           1, 7 CE
9.           2 KE
     
10.         6–9 NI
   
11.       5–10 CI
   
12.         Assumption   
     
13.           Assumption   
14.           2 KE
15.           3, 14 CE
16.           12, 12 KI
17.           16 KE
     
18.         13–17 NI
   
19.       12–18 CI
20.       4, 11, 19 DE
 
21.     4–20 CI



Theorems[edit | edit source]

A theorem is a formula for which a zero-premise derivation has been provided. We will keep a numbered list of proved theorems. In the derivations that follow, we will continue our informal convention of adding a formula to the annotations of assumptions, in particular the formula we hope to derive by means of the newly started subderivation.

An example[edit | edit source]

You may remember from Constructing a Complex Derivation that we had to employ a silly trick to copy a formula into the proper subderivation (Lines 16 and 17). We can prove a theorem that will help us avoid such obnoxiousness.

 
1.       Assumption   
 
2.     1 CI


Derivations can be abbreviated by allowing a line to be entered whose formula is a substitution instance of a previously proved theorem. The annotation will be 'Tn' where n is the number of the theorem. Although we won't require it officially, we will also show the substitution, if any, in the annotation (see Line 3 in the derivation below). The proof of the next theorem will use T1.

 
1.       Assumption   
   
2.         Assumption   
3.         T1 [P/Q]
4.         1, 3 CE
   
5.       2–4 CI
 
6.     1–5 CI

Justification: Converting to unabbreviated derivation[edit | edit source]

We need to justify using theorems in derivations in this way. To do that, we show how to produce a correct, unabbreviated derivation of T2, one without citing the theorem we used in its abbreviated proof.

Observe that when we entered Line 3 into our derivation of T2, we applied a substitution to T1 (the one recorded in the annotation as [P/Q]). Suppose you were to apply the same substitution to each line of our proof of T1. You would then end up with the following, equally correct derivation.

 
1.       Assumption   
 
2.     1 CI


Suppose now you were to replace Line 3 of our proof for T2 with this derivation. You would need to adjust the line numbers so that there is only one line per line number. You would also need to adjust the annotations so that the line numbers they cite continue to refer correctly. But, with these adjustments, you would end up with the following correct unabbreviated derivation of T2.

 
1.       Assumption   
   
2.         Assumption   
     
3.           Assumption   
     
4.         3 CI
5.         1, 4 CE
   
6.       2–5 CI
 
7.     1–6 CI


Thus we see that entering a previously proved theorem into a derivation is simply an abbreviation for including that theorem's proof in the derivation. The instructions above for unabbreviating a derivation could be made more general and more rigorous, but we will leave them in this informal state. Having instructions for generating a correct unabbreviated derivation justifies entering previously proved theorems into derivations.
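
For readers who like to see the substitution step spelled out, here is a minimal sketch of uniform substitution (the nested-tuple representation of formulas is an assumption made only for the sketch, and T1 is taken to be the formula P → P, as its one-line derivation above suggests):

def substitute(formula, letter, replacement):
    """Uniformly replace every occurrence of `letter` in `formula`.
    Formulas are atoms (strings) or tuples ('connective', part, ...)."""
    if isinstance(formula, str):
        return replacement if formula == letter else formula
    connective, *parts = formula
    return (connective, *(substitute(part, letter, replacement) for part in parts))

t1 = ('->', 'P', 'P')               # assumed shape of T1
print(substitute(t1, 'P', 'Q'))     # ('->', 'Q', 'Q'), a substitution instance of T1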

Additional theorems[edit | edit source]

Additional theorems will be introduced over the next two modules.



Derived Inference Rules[edit | edit source]

This page introduces the notion of a derived inference rule and provides a few such rules.

Deriving inference rules[edit | edit source]

The basics[edit | edit source]

Now we can carry the abbreviation a step further. A derived inference rule is an inference rule not given to us as part of the derivation system but which constitutes an abbreviation using a previously proved theorem. In particular, suppose we have proved a particular theorem. In this theorem, uniformly replace each sentence letter with a distinct Greek letter. Suppose the result has the following form.

.


We may then introduce a derived inference rule having the form

An application of the derived rule can be eliminated by replacing it with

  1. the previously proved theorem,
  2. repeated applications of Conjunction Introduction (KI) to build up the theorem's antecedent, and
  3. an application of Conditional Elimination (CE) to obtain the theorem's consequent.

The previously proved theorem can then be eliminated as described above. That would leave you with an unabbreviated derivation.

Removing abbreviations from a derivation is not desirable in practice, of course, since it makes the derivation longer and harder to read. But the fact that a derivation could be unabbreviated is what justifies our using abbreviations in the first place.

Repetition[edit | edit source]

Our first derived inference rule will be based on T1, which is

Replace the sentence letters with Greek letters, and we get:

We now generate the derived inference rule:

Repetition (R)

Now we can show how this rule could have simplified our proof of T2.

 
1.       Assumption   
   
2.         Assumption   
3.         1 R
   
4.       2–3 CI
 
5.     1–4 CI


While this is only one line shorter than our original proof of T2, it is less obnoxious. We can use an inference rule instead of a silly trick. As a result, the derivation is easier to read and understand (not to mention easier to produce).


Double negation rules[edit | edit source]

The next two theorems—and the derived rules based on them—exploit the equivalence between a doubly negated formula and the unnegated formula.

Double Negation Introduction[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
3.         1 R
   
4.       2–3 NI
 
5.     1–4 CI


T3 justifies the following rule.

Double Negation Introduction (DNI)

Double Negation Elimination[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
3.         1 R
   
4.       2–3 NE
 
5.     1–4 CI


T4 justifies the following rule.

Double Negation Elimination (DNE)

Additional derived rules[edit | edit source]

Contradiction[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
3.         1 KE
4.         1 KE
   
5.       2–4 NE
 
6.     1–5 CI


Our next rule is based on T5.

Contradiction (Contradiction)


This rule is occasionally useful when you have derived a contradiction but the discharge rule you want is not NI or NE. This then avoids a completely trivial subderivation. The rule of Contradiction will be used in the proof of the next theorem.

Conditional Addition[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
3.         1, 2 Contradiction
   
4.       2–3 CI
 
5.     1–4 CI


On the basis of T2 and T6, we introduce the following derived rule.

Conditional Addition, Form I (CAdd)


Conditional Addition, Form II (CAdd)


The name 'Conditional Addition' is not in common use. It is based on the traditional name for Disjunction Introduction, namely 'Addition'. This rule does not provide a general means of introducing a conditional. This is because the antecedent line you would need is not always derivable. However, when the antecedent line just happens to be easily available, then applying this rule is simpler than producing the subderivation needed for a Conditional Introduction.

Modus Tollens[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
3.         1 KE
4.         2, 3 CE
5.         1 KE
   
6.       2–5 NI
 
7.     1–6 CI


Now we use T7 to justify the following rule.

Modus Tollens (MT)


Modus Tollens is also sometimes known as 'Denying the Consequent'. Note that the following is not an instance of Modus Tollens, at least as defined above.

The premise lines of Modus Tollens are a conditional and the negation of its consequent. The premise lines of this inference are a conditional and the opposite of its consequent, but not the negation of its consequent. The desired inference here needs to be derived as below.

 
1.     Premise
2.     Premise
3.     2 DNI
4.     1, 3 CE
5.     4 DNE

Of course, it is possible to prove as a theorem:

Then you can add a new inference rule—or, more likely, a new form of Modus Tollens—on the basis of this theorem. However, we won't do that here.

Additional theorems[edit | edit source]

The derived rules given so far are quite useful for eliminating frequently recurring bits of obnoxiousness in our derivations. They will help to make your derivations easier to generate and also more readable. However, because they are indeed derived rules, they are not strictly required but rather are theoretically dispensable.

A number of other theorems and derived rules could usefully be added. We list here some useful theorems but leave their proofs and the definition of their associated derived inference rules to the reader. If you construct many derivations, you may want to maintain your own personal list of theorems and derived rules that you find useful.

Theorems with biconditionals[edit | edit source]

Theorems with negations[edit | edit source]



Disjunctions in Derivations[edit | edit source]

Disjunctions in derivations are, as the current inference rules stand, difficult to deal with. Using an already derived disjunction by applying Disjunction Elimination (DE) is not too bad, but there is an easier-to-use alternative. Deriving a disjunction in the first place is more difficult; our Disjunction Introduction (DI) rule turns out to be a rather anemic tool for this task. In this module, we introduce derived rules which provide alternative methods for dealing with disjunctions in derivations.

Using already derived disjunctions[edit | edit source]

Modus Tollendo Ponens[edit | edit source]

We start with a new rule of inference, to be derived below. It will provide a useful alternative to Disjunction Elimination (DE).

Modus Tollendo Ponens, Form I (MTP)


Modus Tollendo Ponens, Form II (MTP)

Modus Tollendo Ponens is sometimes known as Disjunctive Syllogism and occasionally as the Rule of the Dog.
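
As with the rules of Inference Rules, the MTP patterns can be checked semantically. A quick sketch (in Python, with assumed disjuncts P and Q) confirms that whenever a disjunction and the negation of one disjunct are both true, the other disjunct is true as well:

from itertools import product

# Form I of MTP: from 'P or Q' and 'not P', infer 'Q'.
print(all((not ((p or q) and (not p))) or q
          for p, q in product([True, False], repeat=2)))   # True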

Supporting theorems[edit | edit source]

This new rule requires the following two supporting theorems.

 
1.       Assumption   
2.       1 KE
3.       1 KE
4.       3 CAdd
5.       T1 [P/Q]
6.       2, 4, 5 DE
 
7.     1–6 CI


 
1.       Assumption   
2.       1 KE
3.       1 KE
4.       3 CAdd
5.       T1
6.       2, 4, 5 DE
 
7.     1–6 CI

Example derivation[edit | edit source]

For an example using MTP, we redo the example derivation from Constructing a Complex Derivation.

    
 
1.     Premise
2.     Premise
3.     Premise
 
4.       Assumption   
   
5.         Assumption   
6.         2 KE
7.         3, 6 CE
8.         4, 7 MTP
9.         5, 8 KI
10.         1, 9 CE
11.         2 KE
   
12.       5–11 NI
 
13.     4–12 CI


After Line 4, we did not bother with subderivations for deriving the antecedent lines needed for DE. Instead, we went straight to a subderivation for the conclusion's consequent. At line 8, we applied MTP.

Deriving disjunctions[edit | edit source]

Conditional Disjunction[edit | edit source]

The next derived rule significantly reduces the labor of deriving disjunctions, thus providing a useful alternative to Disjunction Introduction (DI).

Conditional Disjunction (CDJ)

Supporting theorem[edit | edit source]

 
1.       Assumption   
   
2.         Assumption   
     
3.           Assumption   
       
4.           3 DI
5.           2 R
     
6.         3–5 NI
7.         1, 6 CE
8.         7 DI
   
9.       2–8 NI
 
10.     1–9 CI

Example derivation[edit | edit source]

This derivation will make use of T12 (introduced at Derived Inference Rules) even though its proof was left to the reader as an exercise. The correctness of the following derivation, particularly Line 2, assumes that you have indeed proved T12.


  
 
1.       Assumption   
2.       T12
3.       1, 2 CE
4.       3 KE
5.       4 CAdd
 
7.     1–6 CI
8.     7 CDJ


Here we obtained the desired disjunction by first deriving the conditional needed as the antecedent line for CDJ.