# Bug Free Programming

Studies of programmer productivity have reported differences of a factor of 20 between the best and the worst programmers. That is, top programmers work 20 times faster than the worst ones, and around 5 to 10 times faster than average programmers. This means that a very good developer can do in a day what others may need a month for. There are several ways to increase productivity, but the most important factor is accuracy.

Accuracy strongly correlates with development speed. The best developers distinguish themselves by writing bug free code right away. If you are able to write code that does not contain any mistakes, you don't have to waste your time hunting bugs. Ever wondered why that guy at the next table who works so leisurely gets things done a lot faster than you? That's because he does the right thing on the first try. It is of course impossible to prevent bugs entirely, but:

**Working fast means working accurately rather than working hard.**

So how to write bug free code? You'd think that most bugs are introduced by thoughtlessness, but in fact, most bugs are introduced by the programmer not entirely understanding his own code. If you get a NullPointerException in Java, that's usually not because you forgot to think about null references, but because you didn't really understand the role of the variable that was null.

**The trick to bug free programming is a rock solid understanding of the code you write.**

To write bug free code, you first have to learn how to **prove** that a piece of code works. A lot of developers study these techniques at the university, but they never really learn to apply them in real-life coding. Other developers (especially those without a degree) never learn formal methods in the first place, so this is a good place to start. It may look too theoretical at first sight, but this is the key to bug free programming.

Let's start out with a very simple program. The goal here is that variable *x* of type *int* contains the value 10 when the program terminates. We will use C++ for demonstration throughout this book.

```cpp
int x;
x = 10;
```

This code is obviously bug free. To prove it, let's add **conditions** enclosed by braces before and after each line of code. Conditions are logical statements that are true at the point of the condition in execution time **no matter how execution arrives there**.

```
{true}
int x;
{true}
x = 10;
{x == 10}
```

The first two conditions are true, because we don't have any information at those points (note that C++ doesn't preinitialize variables). The only thing we know is true there is *true* itself. However, after the line *x = 10* is executed, we know that the value of *x* is *10*, because this is the definition of assignment. We can state this in more general terms:

```
{true} x = a {x == a}
```

It means that whatever was true before, after an assignment of *a* to *x*, *x == a* will hold true, i.e. the value of *x* will be *a*. We will treat this as the axiom of constant assignment.

The following program also assigns the value *10* to *x*, but it does it differently.

```cpp
int x, y;
y = 10;
x = y;
```

To prove that it's correct, annotate it with conditions again:

```
{true}
int x, y;
{true}
y = 10;
{y == 10}
x = y;
{x == 10}
```

After the assignment to *y*, we can state that its value is *10*. Well, if *y* is *10*, and we assign it to *x*, *x* will also be *10*. This leads to the full axiom of assignment:

```
{y == a} x = y {x == a}
```

Similar rules apply when there is an expression on the right-hand side of the assignment. Another important axiom is that an assignment doesn't change any variables other than the one assigned to.

```
{y == a} x = b {y == a}
```

This is still very trivial and is only needed to build the rest around it.

You've probably realized by now that there are two conditions to consider about each assignment: one that holds true before it, and one that holds true after it. The former is called *precondition*, the latter is called *postcondition*. We will prove the correctness of a program by chaining them: the postcondition of a statement is the precondition of the next statement.

```
if {P} S1 {R} and {R} S2 {Q} then {P} S1; S2 {Q}
```

Here *S1; S2* means two statements executed one after the other, i.e. in sequence; we will call this rule the axiom of sequence. There are two other types of control structures: conditionals and loops. We'll learn to reason about them later.

One final rule we need before we can do proper correctness proofs is the following, which we will call the axiom of relaxation:

```
if P => R and {R} S {T} and T => Q then {P} S {Q}
```

*A => B* means *A* implies *B* in the logical sense.

If this is all Greek to you, don't panic. We'll cover enough examples for these things to start making sense.

Now let's do our first formal correctness proof. Prove that the following program assigns *10* to *x*.

```cpp
int x, y, z;
z = 1;
y = 2 * z;
x = y * 5;
```

Annotate it:

```
{true}
int x, y, z;
{true}
z = 1;
{z == 1}
y = 2 * z;
{y == 2}
x = y * 5;
{x == 10}
```

We have the annotations, but we have to prove that they indeed hold true at the point where they are written.

```
{true} int x, y, z {true}
```

This is trivial because variable definitions do nothing from a correctness point of view. Now we have to prove that:

```
{true} z = 1 {z == 1}
```

This holds by the axiom of constant assignment. Since it is an axiom, we don't have to prove it further. We can now use its postcondition as the precondition of the next statement:

```
{z == 1} y = 2 * z {y == 2}
```

and again:

```
{y == 2} x = y * 5 {x == 10}
```

Why does this prove the original statement? If we name the statements and the conditions like this:

```
{Q1: true}
S1: int x, y, z;
{Q2: true}
S2: z = 1;
{Q3: z == 1}
S3: y = 2 * z;
{Q4: y == 2}
S4: x = y * 5;
{Q5: x == 10}
```

and use the axiom of sequence three times, we get:

```
{Q1} S1 {Q2} and {Q2} S2 {Q3}           => {Q1} S1; S2 {Q3}
{Q1} S1; S2 {Q3} and {Q3} S3 {Q4}       => {Q1} S1; S2; S3 {Q4}
{Q1} S1; S2; S3 {Q4} and {Q4} S4 {Q5}   => {Q1} S1; S2; S3; S4 {Q5}
```

We essentially proved that the program as a whole goes from *{true}* (whatever) to *{x == 10}* (*x* having a value of *10*).

This is what we had to prove to make 100%, absolutely sure that the code is bug free. Note that now there is no doubt that this code is correct (unless we made a mistake in the proof, of course). What made this possible is the fact that we had a very clear understanding of what was going on in the code at every possible point of execution. Annotating the code with conditions made our expectations explicit, and having to prove them in a formal way made us think about why these expectations were true.

Nobody has the time to do formal correctness proofs in real life situations, but knowing how to do it helps a lot in creating bug free code. The trick is to be explicit about expectations. This prevents you from forgetting about *null* references, among other things.

Let's move on to conditionals.

```cpp
int x, y;
if (x % 2 != 0) {
    y = x + 1;
} else {
    y = x;
}
```

We want to prove here that *y* is even at the end. The axiom of conditional is as follows:

```
if {P && R} S1 {Q} and {P && !R} S2 {Q} then {P} if R S1; else S2 {Q}
```

To prove that a conditional is correct, you have to prove two things. First, you have to prove that when the condition is true, the "then" clause gives correct results. Second, you have to prove that when the condition is false, the "else" clause gives correct results. Annotating the former example:

```
{true}
int x, y;
{true}
if (x % 2 != 0) {
    y = x + 1;
} else {
    y = x;
}
{y % 2 == 0}
```

As for the variable definitions:

```
{true} int x, y {true}
```

Check. We didn't explicitly assign a value to *x*, and yet we will prove the correctness of a program that uses its value. C++ doesn't preinitialize local variables; the value is whatever garbage happens to be in memory. (Strictly speaking, reading an uninitialized local is undefined behavior in C++, but for this exercise treat it as holding an arbitrary *int* value.) We have a conditional here, so we have to check the *then* and the *else* clauses separately.

```
{Q1: true && x % 2 != 0}
y = x + 1
{R1: true && y == x + 1 && x % 2 != 0}
```

Obviously

```
true && y == x + 1 && x % 2 != 0 => (y - 1) % 2 != 0 => y % 2 == 0
```

Now if you use the axiom of relaxation:

```
{Q1} y = x + 1 {R1}  and  R1 => y % 2 == 0
------------------------------------------
{Q1} y = x + 1 {y % 2 == 0}
```

Now let's go for the *else* clause.

```
{Q2: true && !(x % 2 != 0)}
y = x
{R2: true && y == x && !(x % 2 != 0)}
```

Obviously

```
true && y == x && !(x % 2 != 0) => y == x && x % 2 == 0 => y % 2 == 0
```

Again, using the axiom of relaxation:

```
{Q2} y = x {R2}  and  R2 => y % 2 == 0
--------------------------------------
{Q2} y = x {y % 2 == 0}
```

Now combine the results with the axiom of conditionals:

```
{true && R} y = x + 1 {y % 2 == 0}  and  {true && !R} y = x {y % 2 == 0}
------------------------------------------------------------------------
{true} if R y = x + 1; else y = x {y % 2 == 0}
```

I know you don't yet believe this is of actual real-life use, but keep reading. We will eventually drop the math and use only the core notions of the proofs to produce bug free code.

Let's write our first program with some actual value. Say we have an array of *int*s, and we want to put the sum of its elements into a variable called *sum*. We will learn how to reason about loops in this example, and this will be the first time you'll see how it helps you write correct code right away. We will start with a pre-made program, but later we will investigate how to turn the techniques used in correctness proofs into a way of designing the code.

```
int *array;
int length;
// Fill the array.
{Q: array has been created and has at least length elements and length >= 0}
int i, sum;
i = 0;
sum = 0;
while (i != length) {
    sum = sum + array[i];
    i = i + 1;
}
{R: sum == array[0] + array[1] + ... + array[length-1]}
```

Precondition *Q* is stated to make sure we don't try to read unallocated memory. We don't care how the array is allocated or filled with numbers in this example.

Loops are the most complicated control structures of the three, and the most prone to mistakes. The difficulty is that a loop is a series of conditionals in disguise, and you don't even know how many, because that depends on the loop condition, which is evaluated at run time. If you try to reason about loops by visualizing how they execute, you are in trouble, particularly with loops more complicated than the previous one. Visualization is an important step in getting a rough idea of how to code something, but if you don't want to spend the evening before the deadline debugging corner cases you didn't visualize, you'd better reason about your code statically. That does not necessarily mean mathematically. We will eventually drop the math completely.

The trick is to convert the dynamic nature of loops into a static condition. If we could prove a condition that is true every time before the loop condition is evaluated, we could reason about the loop without relying on visualization. This condition is called the **invariant** of the loop.

```
{Q: array has been created and has at least length elements and length >= 0}
int i, sum;
{Q}
i = 0;
{Q && i == 0}
sum = 0;
{Q && i == 0 && sum == 0}
while (i != length) {
    sum = sum + array[i];
    i = i + 1;
}
{R: sum == array[0] + array[1] + ... + array[length-1]}
```

We'll only care about the loop in this example because the rest is trivial. The invariant is that *sum* contains the sum of the elements from index *0* to index *i-1* **every time** before the condition is evaluated, including when execution first arrives at the loop.

```
{I: sum == array[0] + ... + array[i-1]}
```

When execution hits the loop:

```
{Q && i == 0 && sum == 0}
```

holds. Does it imply *I*? Of course:

```
Q && i == 0 && sum == 0 => sum == 'sum of elements from 0 to -1'
```

This is true, because there are no elements between index *0* and index *-1*, and the sum of *0* numbers is *0*. That is, the sum of all the elements of an empty array is *0*. If the array is not empty, the body of the loop will execute, and the condition will be evaluated again. At this point *I* is true (we don't consider loop conditions with side effects, more on that later). The condition of the loop is also true, because otherwise, we wouldn't have entered the loop body. What we have to prove is:

```
{I && i != length}
sum = sum + array[i];
i = i + 1;
{I}
```

That is, the invariant is preserved across iterations. What's the condition to use between the statements? *sum* has been updated, but *i* hasn't. Try:

```
{Q1: sum == array[0] + ... + array[i-1] && i != length}
sum = sum + array[i];
{Q2: sum == array[0] + ... + array[i-1] + array[i] && i != length}
i = i + 1;
{I}
```

Can we prove that

```
{Q1} sum = sum + array[i] {Q2}
```

? We have a slight problem here, because we can't prove that *array[i]* is a valid expression. *i* must be smaller than *length* and at least *0* for it to be valid. Additionally, we need array to be an allocated array with at least *length* elements (let's denote this condition with *T* to save space). Therefore, we have to modify our invariant to include this information.

```
{I: T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1]}
```

Note this important step in understanding the code we write. We didn't make all our expectations explicit. This is how bugs sneak in. With this new invariant we have to start over. Is it true when execution hits the loop?

```
T && length >= 0 && i == 0 && sum == 0
  => T && 0 <= i && i <= length && sum == 'sum of elements from 0 to -1'
```

This is true. For example, we know that *i <= length* because *i == 0* and *length >= 0*. Is it preserved across iterations?

```
{Q1: T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1] && i != length}
sum = sum + array[i];
i = i + 1;
{I}
```

Find the right condition after updating *sum*:

```
{Q1: T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1] && i != length}
sum = sum + array[i];
{Q2: T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1] + array[i] && i != length}
i = i + 1;
{I}
```

We know that *array[i]* is valid, because of the constraints on *i* included in the condition. What about

```
{Q2} i = i + 1 {I}
```

?

```
{Q2: T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1] + array[i] && i != length}
i = i + 1;
{Q3: T && 0 <= i-1 && i-1 <= length && sum == array[0] + ... + array[i-1] && i-1 != length}
```

We changed *i* to *i-1* because it's the original value of *i* (before the statement was executed).

```
Q3 => I
```

because

```
T && 0 <= i-1 && i-1 <= length && sum == array[0] + ... + array[i-1] && i-1 != length
  => T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1]
```

For example *0 <= i-1* implies *1 <= i* which implies *0 <= i*. *i-1 <= length* and *i-1 != length* together imply that *i-1 < length* which in turn implies that *i <= length*.

When the loop terminates, its condition is false. Additionally, the invariant is true. This is because it's true every time before the condition is evaluated. If we can prove that

```
I && !(i != length) => R
```

we prove that *R* is true when the loop terminates. It's trivial:

```
T && 0 <= i && i <= length && sum == array[0] + ... + array[i-1] && i == length
  => sum == array[0] + array[1] + ... + array[length-1]
```

We have proven that the program gives correct results if it terminates. However, note that we haven't proven that it actually terminates. This is called partial correctness. Total correctness additionally requires proving that the program terminates. We will come back to it later.

Make sure you understand the above. I only gave a couple of examples, because this is not a book on formal correctness proving. If you feel you need to dive deeper, look up a book on Hoare logic. The rest of the book will not make much sense if you don't understand this chapter.

So how does this help you work faster? You don't have the time to work out the formal proof of your code all the time, but knowing how to do it will help you even when you aren't using it. A good understanding of preconditions, postconditions, and invariants will help you write much better code. Now we will cover plenty of examples that show how.

Example 1: Sum up the elements of an array. This is the same as the previous example, but now we will design it rather than prove that it works. Pretend you've never seen the solution.