IB Computer Science/Abstract Data Structures and Algorithms

From Wikibooks, open books for an open world



Abstract Data Structures and Algorithms

Fundamentals

Static data structures

Dynamic data structures

Objects in problem solutions

Recursion

Example (Java):

public static int factorial(int num) {
    if (num <= 1) {
        return 1; // base case: 1! = 1, and this also stops the recursion
    }
    return factorial(num - 1) * num; // recursive case
}

This simple method computes the factorial of num recursively: each call multiplies num by the factorial of num - 1, until the base case num <= 1 is reached and the chain of calls unwinds.
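The unwinding can be made visible by calling the method from a small driver. A minimal self-contained sketch (the class name FactorialDemo is illustrative):

```java
public class FactorialDemo {
    // Recursive factorial with a base case of num <= 1.
    public static int factorial(int num) {
        if (num <= 1) {
            return 1; // base case stops the recursion
        }
        return factorial(num - 1) * num; // recursive case
    }

    public static void main(String[] args) {
        // factorial(4) expands to 4 * factorial(3), then 4 * 3 * factorial(2), ...
        System.out.println(factorial(4)); // prints 24
    }
}
```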

Algorithm Evaluation

Efficiency of Algorithms

To optimize algorithms, one must first be able to judge whether one algorithm is better or worse than another. Here, "better" can mean that the algorithm processes data faster or that it uses less memory. To describe how processing time grows with the amount of data, one can use Big O notation.

Big O notation is always in the form of O(expression). The expression inside the parentheses is different for different efficiencies. It almost always contains n, the number of items of data to be processed. The Big O notation of a linear search on an array (of size n) is O(n). This means that doubling the array size will, on average, double the time for the linear search to complete.
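A linear search like the one described above can be sketched in Java as follows (the method name linearSearch is illustrative). In the worst case every one of the n elements is examined, which is why the running time is O(n):

```java
public class LinearSearchDemo {
    // Returns the index of target in arr, or -1 if it is absent.
    // Worst case: all n elements are checked, so the time is O(n).
    public static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i; // found: return its index
            }
        }
        return -1; // not found after examining every element
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(data, 9));  // prints 2
        System.out.println(linearSearch(data, 4));  // prints -1
    }
}
```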

Big O notation does not, however, indicate exactly how efficient an algorithm is. It only indicates how the time taken grows as n becomes very large. For instance, if a certain algorithm takes 0.5n + 2 milliseconds to process n items of data, its Big O notation is still O(n). This is because when n gets to, say, one trillion, the added 2 has almost no effect, so it is discarded. In the same way, all constant coefficients of n are discarded.

Other examples of Big O notation include O(n^2) for bubble sort and selection sort, and O(log n) for binary search. Note that binary search's Big O notation is written log n rather than log_2 n, because logarithms of different bases differ only by a constant factor, which Big O notation discards.
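Binary search achieves O(log n) by halving the portion of a sorted array still under consideration on every step. A hedged sketch, assuming the input array is already sorted in ascending order (the method name binarySearch is illustrative):

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array, or -1 if absent.
    // Each iteration halves the search range, so at most about log2(n)
    // iterations are needed: O(log n).
    public static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // midpoint, avoiding int overflow
            if (sorted[mid] == target) {
                return mid;              // found
            } else if (sorted[mid] < target) {
                low = mid + 1;           // discard the lower half
            } else {
                high = mid - 1;          // discard the upper half
            }
        }
        return -1; // range is empty: target is not present
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 7));  // prints 3
        System.out.println(binarySearch(sorted, 4));  // prints -1
    }
}
```

Doubling the size of the sorted array adds only about one extra halving step, in contrast to linear search, where doubling the array roughly doubles the work.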