# Logic for Computer Science/Propositional Logic

## Propositional Logic

Propositional logic is a good vehicle for introducing the basic properties of logic. It does not provide means to determine the validity (truth or falsity) of atomic statements. Instead, it allows you to evaluate the validity of compound statements given the validity of their atomic components.

For example, consider the following:

I like Pat or I like Joe.
If I like Pat then I like Joe.
Do I like Joe?

Accept as facts the first two statements, noting that the use of "or" here is not exclusive and thus could really be thought of as saying "I like Pat, or I like Joe, or I like them both". Do these statements imply that "I like Joe" is true? Try to convince yourself that they do, then consider another line of reasoning:

Pigs can fly or fish can sing.
If pigs can fly then fish can sing.
Can fish sing?

We can see that the answer is yes in both cases. Both sets of statements above can be abstracted as follows:

${\displaystyle P\vee Q}$
${\displaystyle P\rightarrow Q}$
${\displaystyle Q}$?

Here, we are concerned about the logical reasoning itself, and not the statements. Thus, instead of working with pigs or Pats, we simply write ${\displaystyle Q}$s or ${\displaystyle P}$s. We begin our study first with the syntax of propositional logic: that is, we describe the elements in our language of logic and how they are written. We then describe the semantics of these symbols: that is, what the symbols mean.
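To see concretely that this abstract pattern is a valid inference, one can simply check every truth assignment. A minimal Python sketch (the function name `implies` and the lambda encoding of formulas are our own, not part of the text):

```python
from itertools import product

def implies(premises, conclusion, symbols):
    """Semantic entailment by brute force: the conclusion follows iff no
    truth assignment satisfies every premise while falsifying it."""
    for values in product([False, True], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(prem(v) for prem in premises) and not conclusion(v):
            return False
    return True

# Premises: P or Q, and P -> Q (i.e. not P, or Q); conclusion: Q.
premises = [lambda v: v["P"] or v["Q"],
            lambda v: (not v["P"]) or v["Q"]]
conclusion = lambda v: v["Q"]
print(implies(premises, conclusion, ["P", "Q"]))  # True
```

Dropping the second premise breaks the entailment: the assignment P = true, Q = false satisfies "P or Q" alone but falsifies Q.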

### Syntax

The syntax of propositional logic is composed of propositional symbols, logical connectives, and parentheses. Rules govern how these elements can be written together. First, we treat propositional symbols merely as a set of symbols; for our purposes we will use letters of the Roman and Greek alphabets, and refer to the set of all such symbols as ${\displaystyle {\textrm {Prop}}}$:

Propositional symbols: A set ${\displaystyle {\textrm {Prop}}}$ of some symbols. For example ${\displaystyle p,q,r,\ldots }$

Second, we have the logical connectives:

Logical connectives: ${\displaystyle \wedge ,\vee ,\neg ,\to }$

Note that this set of connectives is not minimal: they can all be equivalently expressed using only the single connective NOR (not-or), or only NAND (not-and), as is done at the lowest level in computer hardware. Finally, we use parentheses to group expressions (later on we make parentheses optional):

Parentheses: ${\displaystyle (,)}$

An expression is a string of propositional symbols, parentheses, and logical connectives.

The expressions we consider are called formulas. The set ${\displaystyle {\textrm {Form}}}$ of formulas is the smallest set of expressions such that:

1. ${\displaystyle {\textrm {Prop}}\subseteq {\textrm {Form}}}$
2. If ${\displaystyle \phi ,\psi \in {\textrm {Form}}}$ then
   1. ${\displaystyle (\phi \wedge \psi )\in {\textrm {Form}}}$,
   2. ${\displaystyle (\phi \vee \psi )\in {\textrm {Form}}}$,
   3. ${\displaystyle (\phi \to \psi )\in {\textrm {Form}}}$, and
   4. ${\displaystyle (\neg \phi )\in {\textrm {Form}}}$.

Another way to define formulas is as the language defined by the following context-free grammar (with start symbol ${\displaystyle {\textrm {Form}}}$):

${\displaystyle {\textrm {Form}}\Rightarrow {\textrm {Prop}}}$, where ${\displaystyle {\textrm {Prop}}}$ stands for any propositional symbol
${\displaystyle {\textrm {Form}}\Rightarrow ({\textrm {Form}}\wedge {\textrm {Form}})}$
${\displaystyle {\textrm {Form}}\Rightarrow ({\textrm {Form}}\vee {\textrm {Form}})}$
${\displaystyle {\textrm {Form}}\Rightarrow ({\textrm {Form}}\to {\textrm {Form}})}$
${\displaystyle {\textrm {Form}}\Rightarrow (\neg {\textrm {Form}})}$

Fact 1 (Unique Readability): The above context-free grammar is unambiguous.
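As an illustration of the grammar, formulas can be represented by a small recursive data structure. The following Python sketch (the tuple encoding and the `render` name are our own choices) writes out a formula with exactly the parenthesization the productions prescribe:

```python
# A formula is either a propositional symbol (a string) or a tuple whose
# first element is a connective, mirroring the four grammar productions.
AND, OR, IMP, NOT = "and", "or", "->", "not"

def render(phi):
    """Write a formula with the full parenthesization the grammar prescribes."""
    if isinstance(phi, str):              # Form => Prop
        return phi
    if phi[0] == NOT:                     # Form => (not Form)
        return f"({NOT} {render(phi[1])})"
    op, left, right = phi                 # the three binary productions
    return f"({render(left)} {op} {render(right)})"

print(render((IMP, (OR, "p", "q"), (NOT, "r"))))  # ((p or q) -> (not r))
```

Because every compound formula carries its own parentheses, the string determines the tree uniquely, which is exactly what Fact 1 asserts.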

### Semantics

The function of a formula is to give a truth value to a compound statement, given truth values for its atomic statements. The semantics of a formula ${\displaystyle \phi }$ with propositional symbols ${\displaystyle p_{1},\ldots ,p_{n}}$ is a mapping associating to each truth assignment ${\displaystyle V}$ of ${\displaystyle p_{1},\ldots ,p_{n}}$ a truth value (0 or 1) for ${\displaystyle \phi }$. (The truth values true and false can be used instead of 1 and 0, respectively, as well as the abbreviations T and F.)

The semantics are well defined due to Fact 1 (seen just above).

One way to specify semantics of a logical connective is via a truth table:

| ${\displaystyle p}$ | ${\displaystyle q}$ | ${\displaystyle p\wedge q}$ |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

Can one always find a formula implementing a given semantics? Yes: any truth table is realized by some formula, which can be found as follows. For each row where ${\displaystyle \phi =1}$, form the conjunction of the propositional symbols that are true in that row and the negations of those that are false. Then take the disjunction of these conjunctions.

For example,

| ${\displaystyle p}$ | ${\displaystyle q}$ | ${\displaystyle \phi }$ | Conjunctions (rows where ${\displaystyle \phi =1}$ only) |
|---|---|---|---|
| 0 | 0 | 1 | ${\displaystyle \neg p\wedge \neg q}$ |
| 0 | 1 | 0 | |
| 1 | 0 | 1 | ${\displaystyle p\wedge \neg q}$ |
| 1 | 1 | 0 | |

${\displaystyle \phi :(p\wedge \neg q)\vee (\neg p\wedge \neg q)}$
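The row-by-row construction just illustrated is entirely mechanical. A small Python sketch (function name and the `~`/`&`/`|` string notation are our own):

```python
from itertools import product

def table_to_dnf(symbols, phi):
    """Build a DNF for phi's truth table: one conjunction for each row where
    phi is 1, joining each true symbol and the negation of each false one."""
    terms = []
    for values in product([0, 1], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if phi(v):
            lits = [s if v[s] else f"~{s}" for s in symbols]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms)

# The phi of the example above: true exactly when q is false.
print(table_to_dnf(["p", "q"], lambda v: not v["q"]))
# (~p & ~q) | (p & ~q)
```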

Corollary: Every formula is equivalent to a disjunction of conjunctions of propositional symbols and negations of propositional symbols; this form is called disjunctive normal form (DNF).

The dual of DNF is conjunctive normal form (CNF): a conjunction of disjunctions of literals.

To put ${\displaystyle \phi }$ in CNF:

1. Describe the cases in which ${\displaystyle \phi }$ is false; this yields a DNF formula ${\displaystyle \psi }$ equivalent to ${\displaystyle \neg \phi }$.
2. Since ${\displaystyle \phi }$ is equivalent to ${\displaystyle \neg \psi }$, negate ${\displaystyle \psi }$ using De Morgan's laws; the result is in CNF.

There are cases when DNF (resp. CNF) is exponentially larger than the original formula. For example, for ${\displaystyle (x_{1}\vee y_{1})\wedge (x_{2}\vee y_{2})\wedge ...\wedge (x_{n}\vee y_{n})}$ the equivalent DNF is exponential in size.
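The blowup can be observed directly: distributing ${\displaystyle \wedge }$ over ${\displaystyle \vee }$ turns the ${\displaystyle n}$ two-literal clauses into ${\displaystyle 2^{n}}$ conjunctions. A small Python sketch (our own encoding of clauses as tuples of literal names):

```python
from itertools import product

def distribute(clauses):
    """Expand a CNF (x1 | y1) & ... & (xn | yn) into an equivalent DNF by
    distributing & over |: each way of choosing one literal per clause
    becomes one conjunction."""
    return [frozenset(choice) for choice in product(*clauses)]

n = 4
cnf = [(f"x{i}", f"y{i}") for i in range(1, n + 1)]
dnf = distribute(cnf)
print(len(dnf))  # 2**n = 16 conjunctions for n = 4
```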

Does each truth table have a polynomial-size formula implementing it? More precisely, does there exist ${\displaystyle k}$ such that every truth table with ${\displaystyle n}$ propositional symbols has a formula ${\displaystyle \phi }$ of size ${\displaystyle \leq n^{k}}$? Answer: no.

Proof: Assume such a ${\displaystyle k}$ exists. The number of truth tables on ${\displaystyle n}$ propositional symbols is ${\displaystyle 2^{2^{n}}}$. The number of formulas of size ${\displaystyle \leq n^{k}}$ is at most ${\displaystyle (n+6)^{n^{k}}}$ (each position holds one of the ${\displaystyle n}$ propositional symbols, 4 connectives, or 2 parentheses). Since ${\displaystyle (n+6)^{n^{k}}<2^{2^{n}}}$ for sufficiently large ${\displaystyle n}$, there are more truth tables than formulas of that size, a contradiction.

We now define two semantic notions, satisfaction and implication, that relate truth assignments and sets of formulas to formulas:

• Satisfaction: Satisfaction of a formula ${\displaystyle \phi }$ by a truth assignment ${\displaystyle \tau }$. Notation: ${\displaystyle \tau \models \phi }$ (${\displaystyle \phi }$ is true for ${\displaystyle \tau }$).
• Implication: A set of formulas ${\displaystyle \Sigma }$ implies ${\displaystyle \phi }$. Notation: ${\displaystyle \Sigma \models \phi }$. ${\displaystyle \Sigma }$ implies ${\displaystyle \phi }$ if and only if every truth assignment that satisfies ${\displaystyle \Sigma }$ also satisfies ${\displaystyle \phi }$.

### Formula Classes of Special Interest

• ${\displaystyle {\textrm {VALID}}}$ - the set of formulas that are always true (also known as tautologies). For example, ${\displaystyle (p\vee \neg p),(p\to p),(((p\vee q)\wedge (p\to q))\to q)}$ are valid formulas.
• ${\displaystyle {\textrm {UNSAT}}}$ - the set of formulas that are never true (unsatisfiable).
• In between: ${\displaystyle {\textrm {SAT}}}$ - the set of formulas for which there exists a satisfying assignment (not unsatisfiable).

Note. ${\displaystyle \phi \in {\textrm {VALID}}\iff \neg \phi \in {\textrm {UNSAT}}}$.

Claim: ${\displaystyle \Sigma \models \phi \iff \Sigma \cup \{\neg \phi \}}$ is unsatisfiable.

Claim: ${\displaystyle {\textrm {SAT}}}$ is NP-complete.

Proof:

• ${\displaystyle {\textrm {SAT}}\in {\textrm {NP}}}$: guess a satisfying assignment, then verify that the formula is true (a satisfying assignment is a certificate).
• Hardness: graph 3-coloring is NP-hard, so it suffices to reduce 3-coloring to ${\displaystyle {\textrm {SAT}}}$ (there also exists a direct proof of hardness). Let ${\displaystyle G=(V,E)}$ be a graph with ${\displaystyle n}$ nodes ${\displaystyle \{1,\ldots ,n\}}$. We use propositional variables ${\displaystyle p_{i,g},p_{i,r},p_{i,b}}$ to indicate that vertex ${\displaystyle i}$ is colored green, red, or blue. Construct ${\displaystyle \phi }$ as follows:
${\displaystyle \phi ={\mathop {\bigwedge }}_{i=1}^{n}((p_{i,g}\wedge \neg p_{i,r}\wedge \neg p_{i,b})\vee (p_{i,r}\wedge \neg p_{i,g}\wedge \neg p_{i,b})\vee (p_{i,b}\wedge \neg p_{i,r}\wedge \neg p_{i,g}))}$
${\displaystyle \wedge {\mathop {\bigwedge }}_{(i,j)\in E}\neg (p_{i,g}\wedge p_{j,g})\wedge \neg (p_{i,r}\wedge p_{j,r})\wedge \neg (p_{i,b}\wedge p_{j,b})}$

Claim: ${\displaystyle G\in {\textrm {3-Coloring}}\iff \phi \in {\textrm {SAT}}}$.
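The reduction can be sketched in code. The following Python sketch uses an equivalent clause-level encoding of the "exactly one color" constraint (at-least-one plus pairwise at-most-one) rather than the disjunction of three patterns above; the function name and the triple encoding of literals are our own:

```python
COLORS = ("g", "r", "b")

def coloring_to_clauses(n, edges):
    """CNF clauses (sets of literals) stating: every vertex gets exactly one
    of three colors, and adjacent vertices get different colors.  A literal
    is a triple (i, c, True) for p_{i,c}, or (i, c, False) for its negation."""
    clauses = []
    for i in range(1, n + 1):
        clauses.append({(i, c, True) for c in COLORS})      # at least one color
        for c1 in COLORS:
            for c2 in COLORS:
                if c1 < c2:                                 # at most one color
                    clauses.append({(i, c1, False), (i, c2, False)})
    for (i, j) in edges:
        for c in COLORS:                                    # endpoints differ
            clauses.append({(i, c, False), (j, c, False)})
    return clauses

# A triangle is 3-colorable: the coloring 1=g, 2=r, 3=b satisfies every clause.
clauses = coloring_to_clauses(3, [(1, 2), (2, 3), (1, 3)])
color = {1: "g", 2: "r", 3: "b"}
sat = all(any((color[i] == c) == pos for (i, c, pos) in cl) for cl in clauses)
print(sat)  # True
```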

It is also possible to prove directly that ${\displaystyle {\textrm {SAT}}}$ is NP-hard (this is the Cook–Levin theorem).

Claim: ${\displaystyle {\textrm {VALID}}\in {\textrm {coNP}}}$.

#### Horn Clauses

Horn clauses are a special case for which satisfiability can be decided in polynomial time. An example of a conjunction of Horn clauses:

${\displaystyle (p\vee \neg q\vee \neg r)\wedge (\neg p\vee q\vee \neg r)}$

A Horn clause is a disjunction of literals of which at most one is positive. There are two kinds of possible Horn clauses:

1. The clause has exactly one positive literal:
   1. ${\displaystyle p}$, or
   2. ${\displaystyle p\vee \neg x_{1}\vee \ldots \vee \neg x_{k}}$, equivalently written ${\displaystyle x_{1}\wedge \ldots \wedge x_{k}\rightarrow p}$.
2. The clause has no positive literal:
   1. ${\displaystyle \neg x_{1}\vee \ldots \vee \neg x_{k}}$, equivalently ${\displaystyle \neg (x_{1}\wedge \ldots \wedge x_{k})}$, or
   2. written as an implication, ${\displaystyle x_{1}\wedge \ldots \wedge x_{k}\rightarrow false}$.

Claim: For every set ${\displaystyle \Sigma }$ of Horn formulas, checking whether ${\displaystyle \Sigma }$ is satisfiable is in ${\displaystyle {\textrm {P}}}$.

Proof Idea: Let ${\displaystyle \Sigma _{1}}$ be the subset of ${\displaystyle \Sigma }$ containing only clauses of type 1, and ${\displaystyle \Sigma _{2}}$ the subset of ${\displaystyle \Sigma }$ containing clauses of type 2. Note first that ${\displaystyle \Sigma _{1}}$ is always satisfiable. To obtain its minimum satisfying assignment ${\displaystyle \sigma }$, start with the literals given by the single-literal clauses and repeatedly apply the implications until no new literals can be derived. It then remains to check the consistency of ${\displaystyle \sigma }$ with the clauses in ${\displaystyle \Sigma _{2}}$: for each clause ${\displaystyle x_{1}\wedge ...\wedge x_{k}\rightarrow false}$ in ${\displaystyle \Sigma _{2}}$, check that ${\displaystyle \sigma }$ does not make all of ${\displaystyle x_{1},\ldots ,x_{k}}$ true.

Example: Consider the set ${\displaystyle \Sigma }$ of Horn clauses:

${\displaystyle p}$
${\displaystyle q}$
${\displaystyle r}$
${\displaystyle \neg p\vee \neg q\vee s}$
${\displaystyle \neg s\vee \neg r\vee t}$
${\displaystyle \neg t}$

The set ${\displaystyle \Sigma _{1}}$ of clauses of type 1 consists of the first 5 clauses, and ${\displaystyle \Sigma _{2}}$ consists of the last clause. Note that ${\displaystyle \Sigma _{1}}$ can also be written as:

${\displaystyle p}$
${\displaystyle q}$
${\displaystyle r}$
${\displaystyle p\wedge q\rightarrow s}$
${\displaystyle s\wedge r\rightarrow t}$

The minimum satisfying assignment for ${\displaystyle \Sigma _{1}}$ is obtained as follows:

1. start with ${\displaystyle \{p,q,r\}}$
2. use the first implication to infer ${\displaystyle s}$
3. use the second implication to infer ${\displaystyle t}$

Thus, the minimum satisfying assignment makes ${\displaystyle \{p,q,r,s,t\}}$ true. This contradicts ${\displaystyle \Sigma _{2}}$, which states that ${\displaystyle t}$ must be false. Thus, ${\displaystyle \Sigma }$ is not satisfiable.
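The marking procedure of the proof idea can be sketched as follows; the representation of rules as (body, head) pairs is our own:

```python
def horn_sat(facts, rules, constraints):
    """Satisfiability test for Horn clauses.  facts: atoms asserted true
    (single-literal clauses); rules: (body, head) pairs encoding
    x1 & ... & xk -> head; constraints: bodies of clauses of the form
    x1 & ... & xk -> false."""
    true = set(facts)
    changed = True
    while changed:                    # apply the implications to a fixed point
        changed = False
        for body, head in rules:
            if head not in true and body <= true:
                true.add(head)
                changed = True
    # Sigma is satisfiable iff no constraint body is entirely true.
    return all(not (body <= true) for body in constraints)

# The example Sigma: p, q, r, p & q -> s, s & r -> t, and the clause ~t.
print(horn_sat({"p", "q", "r"},
               [({"p", "q"}, "s"), ({"s", "r"}, "t")],
               [{"t"}]))  # False: the minimum model {p,q,r,s,t} makes t true
```

Each pass over the rules adds at least one new atom or stops, so the loop runs at most a polynomial number of times, matching the claim.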

### Deductive Systems

A deductive system is a mechanism for proving new statements from given statements.

Let ${\displaystyle \Sigma }$ be a set of known valid statements (propositional formulas). In a deductive system, there are two components: inference rules and proofs.

Inference rules
An inference rule indicates that if a certain set of statements (formulas) ${\displaystyle \varphi _{1},\ldots ,\varphi _{k}}$ is true, then a given statement ${\displaystyle \varphi }$ must be true. An inference rule ${\displaystyle H}$ is denoted as ${\displaystyle H:{\frac {\varphi _{1},\ldots ,\varphi _{k}}{\varphi }}}$.
Example (modus ponens): ${\displaystyle {\frac {p,~~p\rightarrow q}{q}}}$
Proofs
A proof of ${\displaystyle \varphi }$ from ${\displaystyle \Sigma }$ is a sequence of formulas ${\displaystyle \varphi _{1},...,\varphi _{n}}$ such that ${\displaystyle \varphi _{n}=\varphi }$ and, for all ${\displaystyle i\leq n}$, either
• ${\displaystyle \varphi _{i}\in \Sigma }$, or
• there exist indices ${\displaystyle i_{1},\ldots ,i_{k}<i}$ such that ${\displaystyle {\frac {\varphi _{i_{1}},\ldots ,\varphi _{i_{k}}}{\varphi _{i}}}}$ is an instance of an inference rule.

If ${\displaystyle \varphi }$ has a proof from ${\displaystyle \Sigma }$ using inference rule ${\displaystyle H}$ we write ${\displaystyle \Sigma \vdash _{H}\varphi }$.

Properties:

• Soundness: If ${\displaystyle \Sigma \vdash _{H}\varphi }$ then ${\displaystyle \Sigma \models \varphi }$ (i.e., all provable sentences are true). This property is fundamental for the correctness of the deductive system.
• Completeness: If ${\displaystyle \Sigma \models \varphi }$ then ${\displaystyle \Sigma \vdash _{H}\varphi }$ (i.e., all true sentences are provable). This is a desirable property in deductive systems.

### Natural Deduction

Natural deduction is a collection of inference rules. Let ${\displaystyle \perp }$ denote contradiction, falsity. The following are the inference rules of natural deduction:

1. ${\displaystyle \left\{{\frac {\varphi ,\psi }{\varphi \wedge \psi }}\right.}$
2. ${\displaystyle \left\{{\frac {\varphi \wedge \psi }{\varphi }}\right.}$
3. ${\displaystyle \left\{{\frac {\varphi \wedge \psi }{\psi }}\right.}$
4. ${\displaystyle \left\{{\frac {\varphi ,\varphi \rightarrow \psi }{\psi }}\right.}$
5. ${\displaystyle \left\{{\frac {\varphi ,\neg \varphi }{\perp }}\right.}$
6. ${\displaystyle \left\{{\frac {\neg \neg \varphi }{\varphi }}\right.}$
7. ${\displaystyle \left\{{\frac {\perp }{\varphi }}\right.}$
8. ${\displaystyle \left\{{\frac {\varphi \rightarrow \psi ,\psi \rightarrow \varphi }{\varphi \leftrightarrow \psi }}\right.}$
9. ${\displaystyle \left\{{\frac {\varphi \leftrightarrow \psi }{\varphi \rightarrow \psi }}\right.}$
10. ${\displaystyle \left\{{\frac {\varphi \leftrightarrow \psi }{\psi \rightarrow \varphi }}\right.}$
11. ${\displaystyle \left\{{\frac {\varphi }{\varphi \vee \psi }}\right.}$
12. ${\displaystyle \left\{{\frac {\psi }{\varphi \vee \psi }}\right.}$
13. ${\displaystyle \left\{{\frac {\begin{matrix}\varphi \\\vdots \\\psi \end{matrix}}{\varphi \rightarrow \psi }}\right.}$
14. ${\displaystyle \left\{{\frac {\begin{matrix}\varphi \\\vdots \\\perp \end{matrix}}{\neg \varphi }}\right.}$
15. ${\displaystyle \left\{{\frac {\begin{matrix}\neg \varphi \\\vdots \\\perp \end{matrix}}{\varphi }}\right.}$
16. ${\displaystyle \left\{{\frac {\begin{matrix}\varphi \vee \psi &\varphi &\psi \\&\vdots &\vdots \\&\rho &\rho \end{matrix}}{\rho }}\right.}$

Rule (13) allows us to prove valid statements of the form "If ${\displaystyle \varphi }$ then ${\displaystyle \psi }$" even when we do not know the truth value of ${\displaystyle \varphi }$ (i.e., ${\displaystyle \varphi }$ is not in the set ${\displaystyle \Sigma }$ of known valid statements). For this rule, we start by assuming ${\displaystyle \varphi }$ holds. If we can conclude ${\displaystyle \psi }$ in a world where all of ${\displaystyle \Sigma \cup \{\varphi \}}$ hold, then we conclude that ${\displaystyle \varphi \rightarrow \psi }$ is true, and we "release" (discharge) the assumption ${\displaystyle \varphi }$.

We now show how to apply the above inference rules.

Example: De Morgan's Law for negated or-expressions says:

${\displaystyle \neg (\varphi \vee \psi )\leftrightarrow (\neg \varphi \wedge \neg \psi )}$

Proof: By rule ${\displaystyle (8)}$, if we can prove ${\displaystyle \neg (\varphi \vee \psi )\rightarrow (\neg \varphi \wedge \neg \psi )}$ and ${\displaystyle (\neg \varphi \wedge \neg \psi )\rightarrow \neg (\varphi \vee \psi )}$, then we can infer the desired result.

To prove the first direction, we use rule 13 and assume the hypothesis ${\displaystyle \neg (\varphi \vee \psi )}$. Then

${\displaystyle \neg (\varphi \vee \psi )}$ (assumed)
${\displaystyle \varphi }$ (assumed)
${\displaystyle \varphi \vee \psi }$ (by rule 11)
${\displaystyle \perp }$ (by rule 5)
${\displaystyle \neg \varphi }$ (by rule 14)
${\displaystyle \psi }$ (assumed)
${\displaystyle \varphi \vee \psi }$ (by rule 11)
${\displaystyle \perp }$ (by rule 5)
${\displaystyle \neg \psi }$ (by rule 14)
${\displaystyle \neg \varphi \wedge \neg \psi }$ (by rule 1)
${\displaystyle \neg (\varphi \vee \psi )\rightarrow (\neg \varphi \wedge \neg \psi )}$ (by rule 13)

We now prove the second direction.

${\displaystyle \neg \varphi \wedge \neg \psi }$ (assumed)
${\displaystyle \neg \varphi }$ (by rule 2)
${\displaystyle \neg \psi }$ (by rule 3)
${\displaystyle \varphi \vee \psi }$ (assumed)
Case ${\displaystyle \varphi }$: ${\displaystyle \varphi }$ (assumed), then ${\displaystyle \perp }$ (by rule 5)
Case ${\displaystyle \psi }$: ${\displaystyle \psi }$ (assumed), then ${\displaystyle \perp }$ (by rule 5)
${\displaystyle \perp }$ (by rule 16)
${\displaystyle \neg (\varphi \vee \psi )}$ (by rule 14)
${\displaystyle (\neg \varphi \wedge \neg \psi )\rightarrow \neg (\varphi \vee \psi )}$ (by rule 13)

Proof of Peirce's law:

${\displaystyle ((A\rightarrow B)\rightarrow A)\rightarrow A}$.
${\displaystyle (A\rightarrow B)\rightarrow A}$ (assumed) (1*)
${\displaystyle \neg A}$ (assumed)
${\displaystyle A}$ (assumed)
${\displaystyle \perp }$ (by rule 5)
${\displaystyle B}$ (by rule 7)
${\displaystyle A\rightarrow B}$ (by rule 13)
${\displaystyle A}$ (by assumption (1*) and rule 4)
${\displaystyle \perp }$ (by rule 5)
${\displaystyle A}$ (by rule 15)
${\displaystyle ((A\rightarrow B)\rightarrow A)\rightarrow A}$ (by rule 13)

Fact 2: Natural deduction is sound.

To show that natural deduction is also complete we need to introduce propositional resolution.

### Propositional Resolution

Resolution is another procedure for checking the validity of statements, by establishing that their negations are unsatisfiable. It involves clauses, formulas, and a single resolution rule.

Some terminology:

Clause
A clause is a propositional formula composed by disjunction of literals. For example ${\displaystyle p\lor q\lor \lnot r}$. It is usually denoted as the set of literals, e.g. ${\displaystyle \{p,q,\lnot r\}}$.
The empty clause, denoted as an open box "${\displaystyle \Box }$", is the disjunction of no literals. It is always false.
Formula
A set of clauses, interpreted as their conjunction. For example, ${\displaystyle \{\{p,\lnot q\},\{r\},\{\lnot r,s\}\}}$ represents the CNF formula ${\displaystyle (p\lor \lnot q)\land (r)\land (\lnot r\lor s)}$.
The empty formula, denoted as ${\displaystyle \emptyset }$, is the set that contains no clauses. It is always true.
Resolution Rule
It is a rule that, given two clauses ${\displaystyle C}$ (containing some literal ${\displaystyle y}$) and ${\displaystyle C'}$ (containing the literal ${\displaystyle \lnot y}$), allows us to infer a new clause, called the resolvent of ${\displaystyle C}$ and ${\displaystyle C'}$ (with respect to ${\displaystyle y}$).

A proof system for resolution contains a single resolution rule, where the resolvent is defined as follows. Assume ${\displaystyle C}$ and ${\displaystyle C'}$ are clauses such that ${\displaystyle y\in C}$ and ${\displaystyle \lnot y\in C'}$, then

${\displaystyle res_{y}(C,C')=(C-\{y\})\cup (C'-\{\lnot y\})}$.

The smallest set of clauses containing ${\displaystyle \varphi }$ and closed under resolution is denoted ${\displaystyle Res(\varphi )}$.

Example: If ${\displaystyle C=\{p,y\}}$ and ${\displaystyle C'=\{q,\lnot y\}}$, then ${\displaystyle res_{y}(C,C')=\{p,q\}}$.
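Once clauses are sets, the resolution rule is a one-liner. A Python sketch (the function name is ours), writing the negation of a literal `y` as the string `"~y"`:

```python
def resolve(c1, c2, y):
    """res_y(C, C'): remove y from C and ~y from C', and take the union.
    Literals are strings; the negation of "y" is written "~y"."""
    assert y in c1 and "~" + y in c2, "clauses must contain y and ~y"
    return (c1 - {y}) | (c2 - {"~" + y})

print(resolve({"p", "y"}, {"q", "~y"}, "y") == {"p", "q"})  # True
```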

It is possible to show that the resolution rule, as defined, computes a clause that can be inferred using natural deduction.

Claim: Let ${\displaystyle C}$ and ${\displaystyle C'}$ be any two clauses such that ${\displaystyle y\in C}$ and ${\displaystyle \lnot y\in C'}$. Then ${\displaystyle C\land C'\implies res_{y}(C,C')}$.

In order to prove the validity of a statement ${\displaystyle \psi }$, we will prove the negated statement ${\displaystyle \lnot \psi }$ is unsatisfiable. To prove unsatisfiability of a formula ${\displaystyle \varphi }$, we need to define the resolution refutation of the formula ${\displaystyle \varphi }$:

The resolution refutation tree of the formula ${\displaystyle \varphi }$ is a tree rooted at the empty clause, where every leaf is a clause in ${\displaystyle \varphi }$ and each internal node is computed as the resolvent of the two corresponding children.

Notice that clauses of ${\displaystyle \varphi }$ can appear as leaves more than once. From the above claim we can conclude:

Claim: If there exists a resolution refutation tree for formula ${\displaystyle \varphi }$, then ${\displaystyle \varphi \implies \Box }$, that is, ${\displaystyle \varphi }$ is unsatisfiable.

Example: The formula

${\displaystyle \varphi =(p\lor q)\land (\lnot q\lor r)\land (\lnot r)\land (\lnot p\lor \lnot s)\land (s\lor \lnot t)\land (t)}$

has the following resolution refutation, in which each step resolves the previous resolvent with a clause of ${\displaystyle \varphi }$:

${\displaystyle \{t\}}$ and ${\displaystyle \{s,\lnot t\}}$ give ${\displaystyle \{s\}}$
${\displaystyle \{s\}}$ and ${\displaystyle \{\lnot p,\lnot s\}}$ give ${\displaystyle \{\lnot p\}}$
${\displaystyle \{\lnot p\}}$ and ${\displaystyle \{p,q\}}$ give ${\displaystyle \{q\}}$
${\displaystyle \{q\}}$ and ${\displaystyle \{\lnot q,r\}}$ give ${\displaystyle \{r\}}$
${\displaystyle \{r\}}$ and ${\displaystyle \{\lnot r\}}$ give ${\displaystyle \Box }$
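Computing ${\displaystyle Res(\varphi )}$ by saturation makes the claim checkable: ${\displaystyle \varphi }$ is unsatisfiable exactly when the empty clause appears in the closure. A Python sketch (exponential in general; the string encoding of literals is our own):

```python
def saturate(clauses):
    """Compute Res(phi): the closure of a clause set under resolution.
    A clause is a frozenset of string literals; "~x" negates "x"."""
    res = set(clauses)
    while True:
        new = set()
        for c1 in res:
            for c2 in res:
                for lit in c1:
                    neg = lit[1:] if lit.startswith("~") else "~" + lit
                    if neg in c2:
                        new.add((c1 - {lit}) | (c2 - {neg}))
        if new <= res:          # closed: nothing new can be derived
            return res
        res |= new

phi = [frozenset(c) for c in
       [{"p", "q"}, {"~q", "r"}, {"~r"}, {"~p", "~s"}, {"s", "~t"}, {"t"}]]
print(frozenset() in saturate(phi))  # True: phi is unsatisfiable
```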

The order in which clauses are selected to compute the resolvent matters when computing the resolution refutation tree, as the following example shows: Consider the formula

${\displaystyle \psi =(p\lor q)\land (\lnot q\lor r)\land (\lnot p)\land (\lnot q)}$.

Even though a resolution refutation tree exists for ${\displaystyle \psi }$, the order in which clauses are resolved determines whether a given attempt succeeds. Resolving ${\displaystyle \{p,q\}}$ with ${\displaystyle \{\lnot q,r\}}$ first yields ${\displaystyle \{p,r\}}$; resolving that with ${\displaystyle \{\lnot p\}}$ yields ${\displaystyle \{r\}}$, which cannot be resolved with any remaining clause, so this attempt fails. Resolving ${\displaystyle \{p,q\}}$ with ${\displaystyle \{\lnot q\}}$ instead yields ${\displaystyle \{p\}}$, which resolves with ${\displaystyle \{\lnot p\}}$ to give ${\displaystyle \Box }$: a successful refutation.

### Properties of Propositional Resolution

Soundness: Propositional resolution is sound, that is, if there exists a resolution refutation tree for a given formula ${\displaystyle \varphi }$, then ${\displaystyle \varphi }$ must be unsatisfiable.

Theorem: For any formula ${\displaystyle \varphi }$, if ${\displaystyle \Box \in Res(\varphi )}$, then ${\displaystyle \varphi \implies \Box }$.

Completeness: Propositional resolution is complete, that is, if a given formula ${\displaystyle \varphi }$ is unsatisfiable, then ${\displaystyle \varphi }$ has a resolution refutation tree.

Theorem: For any formula ${\displaystyle \varphi }$, if ${\displaystyle \varphi \implies \Box }$, then ${\displaystyle \Box \in Res(\varphi )}$.

Proof: By induction on the number of variables in ${\displaystyle \varphi }$.

Basis: We have one variable, say ${\displaystyle p}$. All possible clauses of ${\displaystyle \varphi }$ are ${\displaystyle \{p\}}$ and ${\displaystyle \{\lnot p\}}$. If ${\displaystyle \varphi }$ is unsatisfiable then both clauses occur, and therefore ${\displaystyle \Box \in Res(\varphi )}$.

Induction step: Suppose the hypothesis is true for formulas with less than ${\displaystyle n}$ variables. Let ${\displaystyle \varphi }$ be a formula with ${\displaystyle n}$ variables. Suppose ${\displaystyle \Box \notin Res(\varphi )}$; we will show ${\displaystyle \varphi }$ is satisfiable. Let ${\displaystyle p}$ be a variable of ${\displaystyle \varphi }$. Then either ${\displaystyle \{p\}\notin Res(\varphi )}$ or ${\displaystyle \{\lnot p\}\notin Res(\varphi )}$ (if both hold then ${\displaystyle \Box \in Res(\varphi )}$ immediately).

Assume ${\displaystyle \{\lnot p\}\notin Res(\varphi )}$. We define the formula ${\displaystyle \varphi ^{p}}$ as containing all clauses of ${\displaystyle \varphi }$ that do not contain ${\displaystyle p}$, with the literal ${\displaystyle \lnot p}$ removed from each remaining clause (in other words, ${\displaystyle \varphi ^{p}}$ is equivalent to the formula resulting from setting ${\displaystyle p}$ true).

Formally,

${\displaystyle \varphi ^{p}=\{C-\{\lnot p\}:C\in \varphi ,\,p\notin C\}}$.

First, notice that

${\displaystyle Res(\varphi ^{p})=\{C-\{\lnot p\}:C\in Res(\varphi ),\,p\notin C\}}$

and thus,

${\displaystyle \{\lnot p\}\notin Res(\varphi ^{p})}$.

Also, since ${\displaystyle \Box \notin Res(\varphi )}$ we have that ${\displaystyle \Box \notin Res(\varphi ^{p})}$. By the induction hypothesis, ${\displaystyle \varphi ^{p}}$ is satisfiable. Then ${\displaystyle \varphi }$ is satisfied by any extension of the satisfying assignment of ${\displaystyle \varphi ^{p}}$ that sets ${\displaystyle p}$ true. The case ${\displaystyle \{p\}\notin Res(\varphi )}$ is analogous.

### Completeness of Natural Deduction

Theorem: Let ${\displaystyle H}$ be the set of inference rules of Natural Deduction. If ${\displaystyle \Sigma \models \varphi }$ then ${\displaystyle \Sigma \vdash _{H}\varphi }$.

The idea behind the proof of completeness of natural deduction is as follows. Suppose ${\displaystyle \varphi }$ is valid (then ${\displaystyle \lnot \varphi }$ is unsatisfiable). We then show there exists a resolution refutation of ${\displaystyle \lnot \varphi }$, and then by applying the contradiction rule (rule 15):

${\displaystyle {\frac {\begin{matrix}\neg \varphi \\\vdots \\\perp \end{matrix}}{\varphi }}}$

we conclude ${\displaystyle \varphi }$ can be inferred.

Proof: (Sketch) Given a formula ${\displaystyle \varphi }$ valid under ${\displaystyle \Sigma }$, we perform the following steps:

1. Prove that ${\displaystyle \lnot \varphi }$ is equivalent to some ${\displaystyle \psi }$, where ${\displaystyle \psi }$ is in CNF.
2. Prove that ${\displaystyle \psi \implies Res(\psi )}$, for all ${\displaystyle \psi }$.
3. By completeness of resolution, if ${\displaystyle \psi }$ is unsatisfiable then ${\displaystyle \Box \in Res(\psi )}$. Therefore, ${\displaystyle \{p\}}$ and ${\displaystyle \{\lnot p\}\in Res(\psi )}$ for some literal ${\displaystyle p}$. This implies ${\displaystyle Res(\psi )\implies \bot }$.
4. Conclude that ${\displaystyle \lnot \varphi \implies \bot }$ and therefore ${\displaystyle \varphi }$ is valid.

Step (1) can be easily done by repeated application of De Morgan's laws. Step (2) can be proven using natural deduction. Finally, step (3) can be proven by induction on the number of steps to obtain ${\displaystyle Res(\psi )}$. Clearly, each step can be simulated using natural deduction.

Any algorithm based on propositional resolution is likely to take a very long time in the worst case (recall that checking the validity of a formula ${\displaystyle \varphi }$ is coNP-complete).

### Linear Resolution and PROLOG

Linear resolution is a particular resolution strategy that always resolves the most recent resolvent with a clause; the resolution refutation tree so obtained is linear. It is possible to prove that if the clauses are Horn clauses, then a linear resolution refutation exists whenever the formula is unsatisfiable. That is, linear resolution is complete for Horn clauses.

The language PROLOG uses resolution on a set of Horn clauses. Each clause is called a program clause. Moreover, clauses composed of a single positive literal are called facts. A clause with a single negated literal is called a query. The table below shows a comparison of the different notations. In PROLOG, to query a statement ${\displaystyle t}$, the idea is to negate the statement (${\displaystyle \lnot t}$) and to perform resolution with the set of known true statements. If a resolution refutation tree is found, the statement ${\displaystyle t}$ is implied by the program.

Example: An example of linear resolution for the formula

${\displaystyle \phi =(p)\land (q)\land (r)\land (t\lor \lnot s\lor \lnot r)\land (s\lor \lnot p\lor \lnot q)\land (\lnot t)}$

proceeds as follows, starting from the negated query ${\displaystyle \{\lnot t\}}$ and always resolving the current resolvent with a program clause:

${\displaystyle \{\lnot t\}}$ and ${\displaystyle \{t,\lnot s,\lnot r\}}$ give ${\displaystyle \{\lnot s,\lnot r\}}$
${\displaystyle \{\lnot s,\lnot r\}}$ and ${\displaystyle \{r\}}$ give ${\displaystyle \{\lnot s\}}$
${\displaystyle \{\lnot s\}}$ and ${\displaystyle \{s,\lnot p,\lnot q\}}$ give ${\displaystyle \{\lnot p,\lnot q\}}$
${\displaystyle \{\lnot p,\lnot q\}}$ and ${\displaystyle \{p\}}$ give ${\displaystyle \{\lnot q\}}$
${\displaystyle \{\lnot q\}}$ and ${\displaystyle \{q\}}$ give ${\displaystyle \Box }$
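The linear strategy can be sketched in code. The following Python sketch greedily resolves the current resolvent with the first program clause sharing a complementary literal; it terminates on this example, though this greedy clause choice is not a complete strategy in general (names and the string encoding of literals are our own):

```python
def neg(lit):
    """Negation of a string literal: "x" <-> "~x"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def linear_refutation(goal, program):
    """Linear resolution: repeatedly resolve the current resolvent (starting
    from the negated query) with a program clause containing a complementary
    literal.  Clauses are frozensets of string literals."""
    current = frozenset(goal)
    steps = [current]
    while current:
        for clause in program:
            hit = next((l for l in clause if neg(l) in current), None)
            if hit is not None:
                current = (current - {neg(hit)}) | (clause - {hit})
                steps.append(current)
                break
        else:
            return None          # stuck: no clause resolves with the resolvent
    return steps                 # last step is the empty clause: refutation

program = [frozenset(c) for c in
           [{"p"}, {"q"}, {"r"}, {"t", "~s", "~r"}, {"s", "~p", "~q"}]]
steps = linear_refutation({"~t"}, program)
print(steps is not None and steps[-1] == frozenset())  # True
```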