
Abstract Data Structures

From Wikibooks, open books for an open world


Thinking recursively


5.1.1 Identify a situation that requires the use of recursive thinking.

Recursive thinking is a process of breaking down a complex problem into smaller, simpler subproblems that can be solved using the same method as the original problem. Here are some situations that typically require the use of recursive thinking:

  1. Problems with a recursive structure: Problems that have a natural recursive structure, such as tree traversal, are ideal for recursive thinking. These problems can be easily divided into smaller subproblems that can be solved recursively.
  2. Problems that can be reduced to a smaller version of themselves: If a problem can be reduced to a smaller version of itself, then it can be solved recursively. For example, a problem that involves computing the nth Fibonacci number can be solved by computing the (n-1)th and (n-2)th Fibonacci numbers recursively.
  3. Divide and conquer problems: Divide and conquer problems can be solved recursively. For example, the problem of finding the maximum element in an array can be solved by dividing the array into two halves and finding the maximum element in each half recursively.
  4. Backtracking problems: Backtracking problems, such as the N-queens puzzle or the traveling salesman problem, can be solved recursively by exploring candidate solutions and backtracking when a partial solution cannot lead to a valid answer.

Recursive algorithms are useful in these situations because they allow the problem to be divided into smaller subproblems that are easier to understand and solve. A recursive solution can often be implemented in a compact and elegant way, making it easier to read and maintain.
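The Fibonacci case in point 2 above can be written directly from its recursive definition. A minimal Python sketch (note that this naive version recomputes the same subproblems many times, so it is for illustration only):

```python
def fib(n):
    # Base cases: the 0th and 1st Fibonacci numbers are defined directly.
    if n < 2:
        return n
    # Recursive case: reduce the problem to two smaller versions of itself.
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # the 10th Fibonacci number: 55
```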

5.1.2 Identify recursive thinking in a specified problem.

Recursive thinking involves breaking down a problem into smaller subproblems that can be solved using the same method as the original problem. Here's an example to help you understand how to identify recursive thinking in a problem:

Example: The problem of finding the factorial of a number.

The factorial of a number is defined as the product of all positive integers less than or equal to that number. For example, the factorial of 5 is 5 * 4 * 3 * 2 * 1 = 120.

To solve this problem recursively, we can use the following steps:

  1. Identify the base case: The base case is when the number is 0 or 1, in which case the factorial is 1.
  2. Break down the problem into smaller subproblems: To find the factorial of a number n, we can find the factorial of n-1 and then multiply it by n. This breaks down the problem into a smaller subproblem that can be solved recursively.
  3. Make a recursive call: Call the function to find the factorial of n-1 and store the result in a variable.
  4. Combine the solutions: Multiply the result of the recursive call by n to get the factorial of n.
  5. Return the solution: Return the result of the calculation to the caller of the function.

This problem can be solved recursively by breaking down the problem into smaller subproblems and using the same method to solve each subproblem. This is the essence of recursive thinking.
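The five steps above map directly onto a short Python function (a minimal sketch, assuming a non-negative integer input):

```python
def factorial(n):
    # Step 1 - base case: the factorial of 0 or 1 is 1.
    if n <= 1:
        return 1
    # Steps 2-5 - recursive call on the smaller subproblem (n - 1),
    # combine by multiplying the result by n, and return it.
    return n * factorial(n - 1)

print(factorial(5))  # 120
```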

Subprogram statements


Subprograms are self-contained units of code that perform a specific task. They are also known as functions, procedures, or subroutines. Subprograms can be called from other parts of a program, allowing for modular and reusable code. Here are some common statements used in subprograms:

  • Function declaration: A function declaration specifies the name of the function, the parameters it takes, and the type of value it returns. For example, in a programming language like Python, a function declaration might look like this: def factorial(n):
  • Parameter passing: Parameters are values passed to a function. They are used by the function to perform its calculations. There are two main methods of parameter passing: pass by value and pass by reference.
  • Return statement: The return statement is used to return a value from a function to the calling code. The return statement can also be used to exit the function early if a certain condition is met. For example, in Python: return result
  • Local variables: Local variables are variables that are declared and used within a function. They are only accessible within the scope of the function and are not visible to other parts of the program.
  • Recursion: A subprogram can call itself, a technique known as recursion. This allows for a problem to be broken down into smaller subproblems and solved using the same method.
  • Function call: A function call is a statement that invokes a function, passing any necessary parameters, and allowing the function to execute its code. For example, in Python: factorial(5)
  • Scope: The scope of a variable is the region of a program where it is accessible. Variables declared within a function have local scope, while variables declared outside of a function have global scope.

These are some of the common statements used in subprograms, but the exact syntax may vary depending on the programming language being used.
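As an illustration, the following Python sketch brings most of these statements together in one place (the function and variable names are hypothetical):

```python
def circle_area(radius):        # function declaration with one parameter
    pi = 3.14159                # local variable, visible only inside the function
    area = pi * radius ** 2
    return area                 # return statement sends the result to the caller

result = circle_area(2.0)       # function call, passing the value 2.0
print(result)
```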

Recursive binary search

[edit | edit source]

Recursive binary search is a search algorithm that uses a divide-and-conquer approach to find a target value in a sorted array. The basic idea of the algorithm is to continuously split the array in half and compare the middle element to the target value. Here's a high-level overview of the steps involved in a recursive binary search:

  1. Check the base case: If the length of the array is 0, the target value is not present in the array, and the search should return False.
  2. Determine the middle element: Divide the length of the array by 2 to find the index of the middle element.
  3. Compare the middle element to the target value: If the middle element is equal to the target value, the search is successful, and the function should return True.
  4. Recursively search the left or right half of the array: If the target value is less than the middle element, search the left half of the array. If the target value is greater than the middle element, search the right half of the array.
  5. Repeat the steps until the target value is found or the array is empty: Continue to split the array in half and compare the middle element to the target value until the target value is found or the array is empty.
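The steps above can be sketched in Python as follows. For simplicity this version slices the list, which copies it; a production implementation would instead pass low/high indices into the same list:

```python
def binary_search(arr, target):
    # Base case: an empty array cannot contain the target.
    if len(arr) == 0:
        return False
    mid = len(arr) // 2                               # index of the middle element
    if arr[mid] == target:
        return True
    elif target < arr[mid]:
        return binary_search(arr[:mid], target)       # search the left half
    else:
        return binary_search(arr[mid + 1:], target)   # search the right half

print(binary_search([1, 3, 5, 7, 9], 7))  # True
```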

The advantage of a recursive binary search is that it is a simple, efficient algorithm that can search large arrays using O(log n) comparisons. However, it requires a sorted array, and the recursive calls consume extra call-stack space compared to an iterative version.

Quicksort


Quicksort is a sorting algorithm that uses a divide-and-conquer approach to sort an array of elements. It is one of the most widely used algorithms for sorting due to its average-case time complexity of O(n log n).

Here's a high-level overview of the steps involved in the quicksort algorithm:

  1. Choose a pivot element: The pivot element is used to partition the array into two smaller sub-arrays. A common approach is to choose the first, last, or middle element of the array as the pivot.
  2. Partition the array: Reorder the elements in the array such that all elements less than the pivot are to the left of it, and all elements greater than the pivot are to the right of it.
  3. Recursively sort the sub-arrays: Repeat the partition step on the sub-arrays created in step 2 until the sub-arrays are of length 0 or 1.
  4. Combine the sub-arrays: The sub-arrays are combined to form a sorted array.
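The four steps above can be sketched in Python like this (a simple, non-in-place version that builds new lists at each level; production implementations usually partition in place):

```python
def quicksort(arr):
    # Sub-arrays of length 0 or 1 are already sorted (base case).
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                                   # choose the first element as pivot
    less = [x for x in arr[1:] if x < pivot]         # partition: elements below the pivot
    greater = [x for x in arr[1:] if x >= pivot]     # partition: elements at or above it
    # Recursively sort each partition, then combine around the pivot.
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```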

Quicksort has a best-case time complexity of O(n log n) and an average-case time complexity of O(n log n), making it an efficient algorithm for sorting large arrays. However, its worst-case time complexity is O(n^2); this case can be made unlikely by choosing a random pivot, or avoided entirely by selecting the median element as the pivot.

5.1.3 Trace a recursive algorithm to express a solution to a problem.

Tracing a recursive algorithm involves following the steps of the algorithm to see how it solves the problem. The steps are as follows:

  1. Identify the base case: The first step in tracing a recursive algorithm is to identify the base case, which is the condition that stops the recursion. The base case should be simple and easily solvable, and it should provide a non-recursive solution to the problem.
  2. Break down the problem into smaller subproblems: The next step is to break down the problem into smaller subproblems that can be solved recursively. This is done by making a recursive call to the function with a smaller input.
  3. Trace the recursive call: For each recursive call, trace the steps of the function to see how it solves the subproblem. Repeat this step until the base case is reached.
  4. Combine the solutions: Once the base case is reached, the solutions to the smaller subproblems can be combined to form the solution to the original problem.
  5. Return the solution: Finally, return the solution to the caller of the function.

It is important to understand the structure of the recursive algorithm and the relationship between the smaller subproblems and the original problem in order to trace the algorithm correctly. The trace should provide a step-by-step description of how the algorithm solves the problem, and it should help you understand the behavior of the algorithm.
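One way to make a trace concrete is to print each call and each return as it happens. This hypothetical instrumented factorial shows the pattern, with indentation indicating the depth of recursion:

```python
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n <= 1:
        # Base case reached: the recursion stops here.
        print(f"{indent}base case, returning 1")
        return 1
    # Recursive call on the smaller subproblem, one level deeper.
    result = n * factorial(n - 1, depth + 1)
    print(f"{indent}returning {result}")
    return result

factorial(3)
```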

Abstract data structures


5.1.4 Describe the characteristics of a two-dimensional array.

A two-dimensional array is an array of arrays, where each element is an array. It is a collection of elements arranged in rows and columns, and it can be used to represent a table of data. The following are the main characteristics of a two-dimensional array:

  1. Size: The size of a two-dimensional array is defined by its number of rows and columns. It is a fixed size data structure.
  2. Indexing: The elements in a two-dimensional array are accessed using two indices, one for the row and one for the column. The indices start from 0.
  3. Representation: A two-dimensional array can be represented as a table with rows and columns, where each element is represented by a cell.
  4. Multidimensional: A two-dimensional array is a type of multidimensional array, meaning that it can be used to store data in multiple dimensions.
  5. Storage: The elements in a two-dimensional array are typically stored in a contiguous block of memory, which allows any element to be accessed in constant time from its row and column indices.

These are some of the main characteristics of a two-dimensional array. Two-dimensional arrays are widely used in computer programming to represent and manipulate data in a tabular form.

5.1.5 Construct algorithms using two-dimensional arrays.

Two-dimensional arrays (also known as matrices) are useful in various algorithms and can be used to represent tables, graphs, and images. Some common algorithms that use two-dimensional arrays are:

  1. Matrix Multiplication: An algorithm to multiply two matrices. The result is stored in a third matrix.
  2. Searching in a two-dimensional array: An algorithm to search for a specific value in a two-dimensional array.
  3. Image Processing: An algorithm to perform operations on images, such as resizing, filtering, and edge detection.
  4. Graph Representation: An algorithm to represent a graph using a two-dimensional array, where each cell represents a node and its edges.

To implement these algorithms, you can use the following techniques with two-dimensional arrays:

  1. Iterating through rows and columns: To visit all elements in a two-dimensional array, you can use nested loops to iterate through each row and column.
  2. Accessing elements: You can access an element in a two-dimensional array using the row and column indices, such as array[row][column].
  3. Modifying elements: You can modify the value of an element in a two-dimensional array by assigning a new value, such as array[row][column] = newValue.

By using these techniques, you can construct algorithms that perform operations on matrices, search for values in two-dimensional arrays, and process images and graphs.
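The iteration and access techniques above are enough for a simple search algorithm. A minimal Python sketch (the function name is illustrative):

```python
def find_in_grid(grid, target):
    """Return the (row, column) of target in a 2D list, or None if absent."""
    for row in range(len(grid)):            # iterate through rows
        for col in range(len(grid[row])):   # iterate through columns
            if grid[row][col] == target:    # access an element by two indices
                return (row, col)
    return None

table = [[1, 2, 3],
         [4, 5, 6]]
print(find_in_grid(table, 5))  # (1, 1)
```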

5.1.6 Describe the characteristics and applications of a stack.

A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle, where the last element added to the stack is the first one to be removed. The following are the main characteristics and applications of a stack:

Characteristics:

  1. LIFO: The last element added to the stack is the first one to be removed.
  2. Dynamic size: The size of a stack can change during runtime.
  3. Two main operations: The two main operations performed on a stack are push (add an element to the top of the stack) and pop (remove the element from the top of the stack).
  4. Limited access: The only elements that can be accessed in a stack are the ones at the top.

Applications:

  1. Undo/Redo operations: Stacks can be used to implement undo/redo operations in text editors and image editors.
  2. Recursion: Stacks are used to keep track of function calls during recursion.
  3. Balancing of symbols: Stacks can be used to check if symbols are balanced, such as in expressions with parentheses, brackets, and braces.
  4. Backtracking: Stacks can be used to implement backtracking algorithms, such as in solving mazes or finding paths in graphs.
  5. Web browsing history: Stacks can be used to implement web browsing history, where each webpage visited is added to the top of the stack and the back button removes the top element.

These are some of the main characteristics and applications of a stack. By using the LIFO principle and the push and pop operations, stacks can be used to implement various algorithms and applications that require keeping track of elements in a certain order.

5.1.7 Construct algorithms using the access methods of a stack.

The following are some common algorithms that can be constructed using the access methods of a stack:

  1. Depth-First Search (DFS): A graph traversal algorithm that visits all the vertices of a graph in depth-first order, starting from a given source vertex. The algorithm uses a stack to keep track of the vertices to be visited next.
  2. Infix to Postfix Conversion: An algorithm to convert an infix expression to a postfix expression. The algorithm uses a stack to keep track of operators and operands.
  3. Evaluating Postfix Expressions: An algorithm to evaluate a postfix expression. The algorithm uses a stack to keep track of operands and perform arithmetic operations.
  4. Backtracking: An algorithm that tries out different solutions and goes back to the previous state when a solution doesn't work. The algorithm uses a stack to keep track of the states to be revisited.

To implement these algorithms, you can use the following stack access methods:

  1. push: Add an element to the top of the stack.
  2. pop: Remove the top element from the stack.
  3. peek: Get the top element without removing it.
  4. isEmpty: Check if the stack is empty.

By using these access methods, you can construct algorithms that perform operations such as visiting vertices in depth-first order, converting infix expressions to postfix expressions, evaluating postfix expressions, and backtracking.
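The balancing-of-symbols application is a compact demonstration of these access methods. A Python sketch using a list as the stack:

```python
def is_balanced(expression):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expression:
        if ch in "([{":
            stack.append(ch)        # push each opening symbol
        elif ch in pairs:
            # The popped symbol must match the most recent opening one (LIFO).
            if not stack or stack.pop() != pairs[ch]:
                return False
    return len(stack) == 0          # balanced only if every symbol was matched

print(is_balanced("(a + [b * c])"))  # True
print(is_balanced("(a + b]"))        # False
```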

5.1.8 Describe the characteristics and applications of a queue.

A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle, where the first element added to the queue is the first one to be removed. The following are the main characteristics and applications of a queue:

Characteristics:

  1. FIFO: The first element added to the queue is the first one to be removed.
  2. Dynamic size: The size of a queue can change during runtime.
  3. Two main operations: The two main operations performed on a queue are enqueue (add an element to the end of the queue) and dequeue (remove the element from the front of the queue).
  4. Limited access: The only elements that can be accessed in a queue are the ones at the front and the end.

Applications:

  1. BFS (Breadth-First Search): A graph traversal algorithm that visits all the vertices of a graph in breadth-first order, starting from a given source vertex. The algorithm uses a queue to keep track of the vertices to be visited next.
  2. CPU Scheduling: A queue is used to keep track of the tasks that are waiting to be executed by the CPU. The CPU schedules tasks based on the order in which they were added to the queue.
  3. Event Handling: A queue is used to keep track of events that occur in an operating system or a computer program. The events are handled in the order in which they were added to the queue.
  4. Printer Queue: A queue is used to keep track of the print jobs that are waiting to be executed by a printer. The print jobs are executed in the order in which they were added to the queue.

These are some of the main characteristics and applications of a queue. By using the FIFO principle and the enqueue and dequeue operations, queues can be used to implement various algorithms and applications that require keeping track of elements in a certain order.

5.1.9 Construct algorithms using the access methods of a queue.

The following are some common algorithms that can be constructed using the access methods of a queue:

  1. Breadth-First Search (BFS): A graph traversal algorithm that visits all the vertices of a graph in breadth-first order, starting from a given source vertex. The algorithm uses a queue to keep track of the vertices to be visited next.
  2. Level-Order Traversal of a Tree: A tree traversal algorithm that visits all the nodes of a tree in level-order, starting from the root. The algorithm uses a queue to keep track of the nodes to be visited at each level.
  3. Shortest Path in a Graph: An algorithm to find the shortest path between two vertices in a graph. The algorithm uses a queue to keep track of the vertices to be visited next.

To implement these algorithms, you can use the following queue access methods:

  1. enqueue: Add an element to the back of the queue.
  2. dequeue: Remove the front element from the queue.
  3. peek: Get the front element without removing it.
  4. isEmpty: Check if the queue is empty.

By using these access methods, you can construct algorithms that perform operations such as visiting vertices in breadth-first order, keeping track of nodes to be visited, and finding the shortest path in a graph.
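The BFS case can be sketched in Python using `collections.deque` for efficient dequeues (the graph below is a hypothetical adjacency-list example; a `visited` list keeps the sketch simple, though a set would be faster for large graphs):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices in breadth-first order; graph is an adjacency-list dict."""
    visited = [start]
    queue = deque([start])           # enqueue the source vertex
    while queue:                     # not isEmpty
        vertex = queue.popleft()     # dequeue the front vertex
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)   # enqueue unvisited neighbours
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```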

5.1.10 Explain the use of arrays as static stacks and queues.

Arrays can be used to implement both static stacks and queues.

A stack is a last-in-first-out (LIFO) data structure, where elements are added and removed from the top of the stack. An array can be used to implement a static stack by using the end of the array to represent the top of the stack. To push an element onto the stack, we increment the end index and store the element at that position. To pop an element, we retrieve the element at the end index and decrement the end index.

A queue is a first-in-first-out (FIFO) data structure, where elements are added at the back and removed from the front. An array can be used to implement a static queue by using two indices, one to represent the front of the queue and the other to represent the back. To enqueue an element, we increment the back index and store the element at that position. To dequeue an element, we retrieve the element at the front index and increment the front index. In practice, the indices are usually wrapped around to the start of the array when they pass its end (a circular buffer), so that the space freed by dequeued elements can be reused.

It's important to note that a static array has a fixed size, which means that once it's full, you can't add any more elements to it. This can be a limitation when using arrays as static stacks or queues. A dynamic version of stacks and queues, implemented using linked lists, can overcome this limitation by allocating new memory as needed.
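A minimal Python sketch of the static stack case, using a fixed-length list to stand in for a static array (the class and method names are illustrative):

```python
class StaticStack:
    def __init__(self, capacity):
        self.items = [None] * capacity   # fixed-size storage
        self.top = -1                    # index of the top element; -1 means empty

    def push(self, value):
        if self.top == len(self.items) - 1:
            raise OverflowError("stack is full")  # fixed size: no room left
        self.top += 1
        self.items[self.top] = value

    def pop(self):
        if self.top == -1:
            raise IndexError("stack is empty")
        value = self.items[self.top]
        self.top -= 1
        return value

s = StaticStack(3)
s.push(1)
s.push(2)
print(s.pop())  # 2
```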

Linked lists


5.1.11 Describe the features and characteristics of a dynamic data structure.

A dynamic data structure is a type of data structure that can change its size during runtime. Some features and characteristics of dynamic data structures include:

  1. Resizing: The size of a dynamic data structure can be changed as needed, making it more flexible than a static data structure which has a fixed size.
  2. Dynamic allocation of memory: Dynamic data structures allocate memory dynamically, meaning that memory is allocated as needed, rather than all at once at the start of a program.
  3. Efficient insertion and deletion: Dynamic data structures often allow for efficient insertion and deletion of elements, making them useful for applications where the number of elements can change frequently.
  4. Complexity: Dynamic data structures often have a higher time complexity compared to static data structures for operations such as search and access.
  5. Examples: Some examples of dynamic data structures include linked lists, stacks, queues, and trees.

5.1.12 Describe how linked lists operate logically.

A linked list is a dynamic data structure that consists of a sequence of nodes, where each node contains an element of data and a reference (or "link") to the next node in the list.

Logically, a linked list operates as follows:

  1. The first node in the linked list is known as the head node. It contains the first element of the list and a reference to the next node.
  2. The next node contains its own data element and a reference to the next node in the list. This continues for each node in the list until the last node, which does not have a reference to another node (its "next" link is null).
  3. To access a specific node in the list, one must start at the head node and follow the links from one node to the next until the desired node is reached.
  4. To insert a new node into the list, a new node is created and its "next" link is set to reference the node that it should come after in the list. The "next" link of the node that comes before the new node is then updated to reference the new node.
  5. To delete a node from the list, the "next" link of the node that comes before it is updated to reference the node that comes after it, effectively removing the node from the list.

By linking nodes together in this manner, linked lists allow for efficient insertion and deletion of elements, as well as quick traversal through the elements of the list.
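The insert and traverse operations described above can be sketched in Python with a simple node class (the helper names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # link to the next node; None marks the end

def insert_after(node, data):
    """Insert a new node after the given node, re-linking its successor."""
    new_node = Node(data)
    new_node.next = node.next     # new node points at the old successor
    node.next = new_node          # predecessor now points at the new node

def to_list(head):
    """Traverse from the head, following links until the end."""
    values = []
    while head is not None:
        values.append(head.data)
        head = head.next
    return values

head = Node(1)
insert_after(head, 3)
insert_after(head, 2)   # inserts between 1 and 3
print(to_list(head))    # [1, 2, 3]
```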

5.1.13 Sketch linked lists (single, double and circular).

Single Linked List: A single linked list consists of a sequence of nodes, where each node contains an element of data and a reference to the next node in the list. It can be visualized as a series of nodes, where each node is connected to the next node by a single arrow pointing to the next node.

Double Linked List: A double linked list is similar to a single linked list, but in addition to a reference to the next node, each node also contains a reference to the previous node. It can be visualized as a series of nodes, where each node is connected to the next node by an arrow pointing to the next node, and also connected to the previous node by an arrow pointing to the previous node.

Circular Linked List: A circular linked list is a type of linked list where the last node in the list points back to the first node, creating a circular chain of nodes. It can be visualized as a series of nodes, where the last node is connected to the first node by an arrow, creating a circular chain of nodes.

Trees


5.1.14 Describe how trees operate logically (both binary and non-binary).

Trees are hierarchical data structures that are used to organize and store data. There are two types of trees: binary trees and non-binary trees.

Binary Trees: A binary tree is a tree data structure in which each node has at most two child nodes, referred to as the left child and the right child.

Logically, a binary tree operates as follows:

  1. The topmost node in the tree is called the root node.
  2. Each node in the tree holds a value. In a binary search tree, a common form of binary tree, the values determine the positions of the children: a node whose value is less than its parent's value is placed as the left child, and a node whose value is greater is placed as the right child.
  3. This process continues for each node in the tree, with the left and right children of each node becoming the parent nodes for their respective subtrees.
  4. To search a binary search tree for a specific value, one starts at the root node and compares the node's value to the target value. If the node's value is less than the target value, the search continues in the right subtree; if it is greater, the search continues in the left subtree. This process continues until the target value is found or it is determined that the target value does not exist in the tree.
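The search procedure above can be sketched in Python. Nodes are represented here as plain dictionaries for brevity (a hypothetical representation; a class with `value`, `left`, and `right` attributes is more common):

```python
def bst_search(node, target):
    # Base case: an empty subtree cannot contain the target.
    if node is None:
        return False
    if node["value"] == target:
        return True
    elif target < node["value"]:
        return bst_search(node["left"], target)    # smaller values are on the left
    else:
        return bst_search(node["right"], target)   # larger values are on the right

leaf = lambda v: {"value": v, "left": None, "right": None}
root = {"value": 5, "left": leaf(3), "right": leaf(8)}
print(bst_search(root, 8))  # True
print(bst_search(root, 4))  # False
```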

Non-Binary Trees: A non-binary (general) tree is a tree in which a node may have more than two children. Its logical operation is similar to that of a binary tree, except that deciding which child to traverse next, and where to place a new node, may involve additional comparisons and conditions that depend on the specific implementation.

Logically, a non-binary tree operates as follows:

  1. The topmost node in the tree is called the root node.
  2. Each node in the tree has a value, and the value of the parent node is used to determine the position of its children. The process of determining the position of a node may involve multiple comparisons and conditions, depending on the specific implementation of the tree.
  3. The children of each node form subtrees, and the process of determining the position of a node continues for each node in the tree.
  4. To search for a specific value in the tree, one starts at the root node and evaluates the values of the children nodes. Depending on the implementation of the tree, the search may involve multiple comparisons and conditions to determine which child node to traverse next.
  5. This process continues until the target value is found or it is determined that the target value does not exist in the tree.

In a non-binary tree, the number of children a node has may affect the time complexity of searching for a specific value, as it may require traversing more branches. However, non-binary trees can be more flexible and useful in certain applications where a binary tree may not be sufficient.

5.1.15 Define the terms: parent, left-child, right-child, subtree, root, and leaf.

  1. Parent: A parent node is a node in a tree data structure that has one or more child nodes.
  2. Left-Child: A left-child node is a child node in a tree data structure that is positioned on the left side of its parent node.
  3. Right-Child: A right-child node is a child node in a tree data structure that is positioned on the right side of its parent node.
  4. Subtree: A subtree is a portion of a tree data structure that consists of a node and all its descendants.
  5. Root: The root node is the topmost node in a tree data structure. It is the starting point for all operations and traversals in the tree.
  6. Leaf: A leaf node is a node in a tree data structure that has no children. It is the end of a branch in the tree and has no further descendants.

5.1.16 State the result of inorder, postorder, and preorder tree traversal.

In a tree data structure, the result of tree traversal refers to the order in which the nodes are visited. There are three common tree traversal methods: inorder, postorder, and preorder.

  1. Inorder Traversal: In an inorder traversal, the nodes are visited in the following order: left subtree, root node, and right subtree. The result of an inorder traversal is a sorted list of nodes if the tree is a binary search tree.
  2. Postorder Traversal: In a postorder traversal, the nodes are visited in the following order: left subtree, right subtree, and root node. The result of a postorder traversal is a list of nodes in the order they would be processed in a reverse Polish notation expression evaluation.
  3. Preorder Traversal: In a preorder traversal, the nodes are visited in the following order: root node, left subtree, and right subtree. The result of a preorder traversal is a list of nodes that represents the structure of the tree.

These traversal methods are useful in various operations, such as searching, printing, and modifying the tree data structure.
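The three traversal orders can be sketched recursively in Python. Nodes are represented here as dictionaries for brevity (a hypothetical representation); the example tree is the same one sketched in the next section:

```python
def inorder(node):
    if node is None:
        return []
    # left subtree, root, right subtree
    return inorder(node["left"]) + [node["value"]] + inorder(node["right"])

def preorder(node):
    if node is None:
        return []
    # root, left subtree, right subtree
    return [node["value"]] + preorder(node["left"]) + preorder(node["right"])

def postorder(node):
    if node is None:
        return []
    # left subtree, right subtree, root
    return postorder(node["left"]) + postorder(node["right"]) + [node["value"]]

leaf = lambda v: {"value": v, "left": None, "right": None}
tree = {"value": 5,
        "left": {"value": 3, "left": leaf(2), "right": leaf(4)},
        "right": leaf(8)}
print(inorder(tree))    # [2, 3, 4, 5, 8] - sorted, since this is a BST
print(preorder(tree))   # [5, 3, 2, 4, 8]
print(postorder(tree))  # [2, 4, 3, 8, 5]
```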

5.1.17 Sketch binary trees.

Binary trees can be represented visually as a series of connected nodes, where each node has up to two children.

To sketch a binary tree, follow these steps:

  1. Draw a box for the root node and label it with its value.
  2. Draw a line from the root node to a box for its left child and label it with its value.
  3. Draw a line from the root node to a box for its right child and label it with its value.
  4. Repeat the process for each child node, drawing lines to boxes for their children and labeling them with their values.
  5. Continue this process until all nodes have been represented and labeled.

Here's an example of a simple binary tree sketch:

    5
   / \
  3   8
 / \
2   4

Note that in a binary tree, each node can have at most two children. If a node has no children, it is called a leaf node. The height of a binary tree is defined as the number of edges in the longest path from the root to a leaf node. The height of the binary tree in this example is 2.

Applications


5.1.18 Define the term dynamic data structure.

A dynamic data structure is a type of data structure that can change in size during the execution of a program. Unlike static data structures, the size of a dynamic data structure can increase or decrease as needed. This is achieved by allocating memory dynamically at runtime, rather than beforehand.

Dynamic data structures are used in many applications because they can adjust to changing requirements and handle unpredictable amounts of data. Some common examples of dynamic data structures include linked lists, stacks, and queues, as well as more complex data structures such as trees and graphs.

Dynamic data structures are often implemented using pointers and dynamic memory allocation, which allows for efficient and flexible manipulation of data. However, this also requires careful management of memory, as dynamically allocated memory must be explicitly deallocated when it is no longer needed.

5.1.19 Compare the use of static and dynamic data structures.

Static and dynamic data structures are two types of data structures with different characteristics and use cases.

Static data structures are data structures with a fixed size that is determined when the program is compiled and cannot be changed during runtime. Examples of static data structures include arrays, fixed-size stacks and queues, and matrices. The advantage of static data structures is that their size does not change during runtime, so the memory allocation for each element is predictable and straightforward. This makes them efficient for tasks that require a fixed amount of data, such as mathematical operations.

Dynamic data structures, on the other hand, are data structures whose size can change during runtime. Examples of dynamic data structures include linked lists, stacks and queues implemented using dynamic arrays, and trees. The advantage of dynamic data structures is their ability to adjust to changing requirements, so they can handle unpredictable amounts of data. This makes them ideal for tasks that require data structures that can expand or shrink as needed, such as data storage and retrieval.

In summary, the choice between a static and a dynamic data structure depends on the requirements of the task at hand. For tasks that work with a fixed, known amount of data, static data structures are more efficient, while for tasks where the amount of data can grow or shrink unpredictably, dynamic data structures are the better choice.

5.1.20 Suggest a suitable structure for a given situation.

To suggest a suitable data structure for a given situation, consider the following factors:

  1. The size of the data and its nature: Is it a small or large amount of data? Is it numerical, categorical or a mix of both?
  2. Access patterns: How frequently do you need to access the data? Do you need to search for specific elements, sort the data, or insert/delete elements frequently?
  3. Time and space complexity: What is the desired time and space complexity for operations on the data?

Based on the above factors, you can then choose from the following commonly used data structures:

  1. Arrays: When the number of items is fixed or bounded and you need fast random access by index.
  2. Linked Lists: When the number of items changes frequently and you mostly access them sequentially, inserting or deleting at known positions.
  3. Stack: When you only need to add and remove elements at one end, accessing the most recently added element first, as in an "undo" function.
  4. Queue: When you need to access elements in a first-in-first-out order, like a line.
  5. Hash Table: When you need fast access to elements based on a key and don't need to preserve the order of elements.
  6. Trees: When you need to search, insert, and delete elements frequently and efficiently, and maintain an order among elements.
  7. Graphs: When you need to represent relationships between elements or find the shortest path between two nodes.

Note: The choice of data structure will depend on the specific requirements of the problem and the trade-off between time and space complexity.