Optimizing Code for Speed/Order of Complexity Optimizations

What is order of complexity?

Generally, an algorithm has an asymptotic computational complexity. Assuming the input is of size N, we can say that the algorithm will finish in O(N), O(N^2), O(N^3), O(N*log(N)) time, etc. This means that there is a mathematical expression in the size of the input such that, for large enough inputs, the algorithm's running time falls between two constant factors of that expression. For example, an algorithm that performs roughly 3*N^2 + 5*N steps runs in O(N^2) time, since for large N the N^2 term dominates.

Generally, the smaller the order of complexity of the program's underlying algorithm, the faster it will run and the better it will scale as the input gets larger. Thus, we should often seek more efficient algorithms in order to reduce the order of complexity.

Some examples for reducing Order of Complexity

Lookup

Let's suppose we're looking for a certain item in a collection of N items. A naïve algorithm would be to go over the items one after the other and check whether each one matches. We stop as soon as we find the item, or declare that it isn't present once we have examined all the items without a match.

This is called a linear search, and it has an average (and worst-case) complexity of O(N). If we're going to perform such a naïve lookup many times, each one will cost us O(N), which is usually unacceptable.
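
As a minimal sketch of the naïve approach, assuming a plain array of integers stands in for the real item type, a linear search in C might look like this:

  #include <stddef.h>

  /* Return the index of key in items[0..n-1], or -1 if it is not present.
     In the worst case every element is inspected, hence O(N) per lookup. */
  static long linear_search(const int *items, size_t n, int key)
  {
      size_t i;
      for (i = 0; i < n; i++) {
          if (items[i] == key)
              return (long)i;
      }
      return -1;
  }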

A better idea would be to use a more efficient lookup. For example, a binary search runs in O(log(N)) time; it requires keeping the existing items in a sorted array or in a balanced tree. A hash table, given a good choice of its underlying parameters (hash function, number of buckets and so on), provides an average lookup of O(1), though O(log(N)) is often good enough.
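
As an illustration, here is a hedged sketch of a hand-written binary search over a sorted array of integers (again a simplified stand-in for a real item type); ANSI C's bsearch() from <stdlib.h> provides the same functionality:

  #include <stddef.h>

  /* Return the index of key in the *sorted* array items[0..n-1], or -1 if
     it is not present.  Each step halves the search range, hence O(log(N)). */
  static long binary_search(const int *items, size_t n, int key)
  {
      size_t low = 0, high = n;
      while (low < high) {
          size_t mid = low + (high - low) / 2;
          if (items[mid] == key)
              return (long)mid;
          else if (items[mid] < key)
              low = mid + 1;
          else
              high = mid;
      }
      return -1;
  }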

Case study for Lookup - the Freecell Solver States Collection

Freecell Solver is a library written in ANSI C that solves deals of FreeCell and similar "full-knowledge" Solitaire games. From the beginning, Freecell Solver used traditional game artificial-intelligence heuristics such as depth-first search and best-first search. As a result, it needed to maintain a collection of the previously encountered board positions (or "states") so that none of them would be checked twice.

The very first version of the solver (written in Perl) used a linear search over an array, which proved too slow to solve even the most elementary deals. The program was then re-implemented in C and used a sorted array with a "sort margin" of newly added states, which was merged in using ANSI C's qsort() function (typically an implementation of Quick Sort or another efficient sorting algorithm with an average complexity of O(N*log(N))). This gave the program an average lookup of O(log(N)) and an accumulated insertion cost of between O(N*log(N)) and O(N^2).

Later versions made optional use of two balanced binary tree libraries, GNU libavl and libredblack, which have a worst-case O(log(N)) lookup and insertion and an accumulated O(N*log(N)) run-time. Some time later, a custom hash table was coded, whose run-time was somewhat faster still, with an accumulated average run-time of O(N) (i.e. an average of O(1) per lookup). This hash table was later further improved using micro-optimizations.
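
To make the "sort margin" scheme described above more concrete, here is a hedged sketch using plain integers instead of actual FreeCell states; the structure and function names are hypothetical. New states are appended to an unsorted margin at the end of a sorted array, lookups search both parts, and the whole array is re-sorted with qsort() once the margin grows beyond a threshold:

  #include <stdlib.h>

  #define MARGIN_LIMIT 64    /* hypothetical threshold for re-sorting */

  typedef struct {
      int *items;            /* items[0..sorted_len-1] are sorted; the rest are the margin */
      size_t sorted_len;
      size_t total_len;
  } state_collection;

  static int cmp_int(const void *a, const void *b)
  {
      int x = *(const int *)a, y = *(const int *)b;
      return (x > y) - (x < y);
  }

  /* O(log(N)) binary search over the sorted prefix plus a short scan of the margin. */
  static int collection_contains(const state_collection *c, int key)
  {
      size_t i;
      if (bsearch(&key, c->items, c->sorted_len, sizeof(int), cmp_int) != NULL)
          return 1;
      for (i = c->sorted_len; i < c->total_len; i++)
          if (c->items[i] == key)
              return 1;
      return 0;
  }

  /* Append to the margin; re-sort everything when the margin gets too big. */
  static void collection_add(state_collection *c, int key)
  {
      c->items[c->total_len++] = key;    /* assumes capacity was ensured elsewhere */
      if (c->total_len - c->sorted_len > MARGIN_LIMIT) {
          qsort(c->items, c->total_len, sizeof(int), cmp_int);
          c->sorted_len = c->total_len;
      }
  }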

Counting Sort instead of Comparison-Based Sort

Normal comparison-based sorting algorithms such as Quick Sort, Merge Sort or Heap Sort run in O(N*log(N)) time. This is much better than naïve comparison-based algorithms such as Insertion Sort or Bubble Sort, which run in O(N^2) time. However, in some cases we can improve upon O(N*log(N)) as well.

One such case is when the keys in question are, or can be mapped to, integers in a certain range. In this case, we can use an algorithm such as Counting Sort to sort in roughly O(N) time. If each key consists of more than one such "digit", we can combine Counting Sort with Radix Sort, as long as the number of digits remains constant.
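
As a hedged sketch, assuming the keys are known to be non-negative integers no larger than a fixed bound (MAX_KEY here is an assumed constant), a counting sort in C might look like this:

  #include <stddef.h>
  #include <string.h>

  #define MAX_KEY 255   /* assumed upper bound on the key values */

  /* Sort a[0..n-1] in place, assuming every element is in 0..MAX_KEY.
     Runs in O(N + MAX_KEY) time, i.e. roughly O(N) when MAX_KEY is a constant. */
  static void counting_sort(int *a, size_t n)
  {
      size_t counts[MAX_KEY + 1];
      size_t i, k, out = 0;

      memset(counts, 0, sizeof(counts));
      for (i = 0; i < n; i++)
          counts[a[i]]++;                 /* count the occurrences of each key */
      for (k = 0; k <= MAX_KEY; k++)
          for (i = 0; i < counts[k]; i++)
              a[out++] = (int)k;          /* write each key back, in order */
  }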

On the other hand, using Insertion Sort is sometimes preferable to Merge Sort or Quick Sort when the number of elements being sorted is very small, as the latter algorithms have a relatively large per-call overhead.
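
For reference, here is a short sketch of insertion sort in C; many production sort routines fall back to something like it below a small size threshold:

  #include <stddef.h>

  /* Insertion sort: O(N^2) comparisons in general, but with very little
     overhead per element, which makes it competitive for small arrays. */
  static void insertion_sort(int *a, size_t n)
  {
      size_t i, j;
      for (i = 1; i < n; i++) {
          int key = a[i];
          for (j = i; j > 0 && a[j - 1] > key; j--)
              a[j] = a[j - 1];      /* shift larger elements one slot right */
          a[j] = key;               /* drop the key into its place */
      }
  }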

Joel Spolsky's Schlemiel the Painter's Syndrome

In his article "Back to Basics", Joel Spolsky illustrates a common (and unnecessary) pattern that increases the order of complexity of programs. Essentially, when one writes the following in C:

  char string[1000];
  strcpy (string, "One ");
  strcat (string, "Two ");
  strcat (string, "Three ");
  strcat (string, "Four ");
  .
  .
  .

And so forth, then each strcat call will start from the beginning of the string and scan for its (ever-receding) end point time and again. As a result, the complexity of appending N strings, each of bounded length, becomes O(N^2) instead of O(N).

Eliminating such problematic mis-implementations from the code can yield a substantial speed increase.
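
One common fix, sketched below under the assumption that the pieces to append are available in an array, is to keep a pointer to the current end of the string so that each append copies only the new piece; building a string of total length N then takes O(N) instead of O(N^2):

  #include <string.h>

  static void build_string(void)
  {
      char string[1000];
      const char *pieces[] = { "One ", "Two ", "Three ", "Four " };
      char *end = string;                /* tracks the current end of the string */
      size_t i;

      for (i = 0; i < sizeof(pieces) / sizeof(pieces[0]); i++) {
          size_t len = strlen(pieces[i]);
          memcpy(end, pieces[i], len);   /* copy only the new piece... */
          end += len;                    /* ...and advance past it */
      }
      *end = '\0';                       /* terminate once, at the very end */
  }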

Note about Order-of-Complexity Reduction

It should be noted that some algorithms with a provably lower big-O complexity than their equivalents are too complicated to implement effectively, or have such a large constant factor that they are impractical for most reasonable data sets, or both. For example, there is a deterministic algorithm (the so-called "median of medians" selection algorithm) for finding the median (middle element) of an array in linear, O(N), time, but it is quite complex and carries a large constant factor, so an efficient O(N*log(N)) sort will normally be faster in practice. Moreover, we are very often interested in the median precisely in order to optimize sorting, in which case it would make sense not to use this algorithm in the first place.