A-level Computing 2009/AQA/Problem Solving, Programming, Operating Systems, Databases and Networking/Problem Solving

From the Specification: Problem Solving
  • Information hiding
  • Abstraction
    • Be familiar with the concept of abstraction as the modelling of a complex system, only including the essential details.
  • Comparing algorithms
    • Understand that algorithms can be compared by expressing their complexity as a function relative to the size of the problem.
    • Understand that some algorithms are more efficient time-wise than other algorithms.
    • Understand that some algorithms are more space-efficient than other algorithms.
  • Big-O notation
    • Linear time, polynomial time, exponential time.
  • Order of complexity
  • Intractable problems
    • Be aware that software and hardware present limitations to solving problems.
  • Solvable and non-solvable
    • Understand that some algorithms involve too many steps to be solvable in a reasonable time by computer in the general case (for example, the Travelling Salesman Problem), but that a heuristic approach can provide a solution for specific cases.
  • Halting problem
    • The unsolvable problem of determining whether any program will eventually stop given particular input.
  • Turing Machine
    • The abstract model of the Turing Machine and the Universal Machine.
  • Universal Machine
  • Finite state machines with and without output
    • Draw and interpret state transition diagrams for finite state machines with no output (automata) and with output (Mealy and Moore machines).
  • Regular expressions
    • Form simple regular expressions for string manipulation and matching.
  • Backus-Naur Form (BNF) / syntax diagrams
    • Be able to check language syntax by referring to BNF or syntax diagrams.
  • Reverse Polish notation
    • Be able to convert simple infix notation to reverse Polish notation and vice versa.

This section covers the first part of the Unit 3 exam, Problem Solving.

Information hiding and abstraction

In computer science and programming, information hiding is inherent to the idea of encapsulation. The details of how a system works are hidden so that other programmers cannot come to depend on undocumented side effects of the code. This also means that the inner workings of a program, or part of a program, can be rewritten without affecting the other parts.

Interface - a boundary between the implementation of a system and the world that uses it


We use interfaces as a means of hiding what is behind a system. An interface provides an abstraction of the entity behind it and thereby separates the methods of external communication from the internal workings. A keyboard is an example of an interface: a user does not need to know how the machine works internally, only how to control it.

By hiding the complexities of the machine behind a well-designed interface, users are able to learn how to use the machine easily and with little training. For example, a car has a basic interface which a driver can learn to use, and they then do not need to relearn how to drive if they switch to a different model or make of car. The same is true in computing: a user can use an operating system on multiple computers with different hardware, controlling each computer through an identical interface even though, behind this common interface, the computers are different. Two objects with different implementations can have the same interface.

Information, software or hardware can be divided into separate parts. Information hiding provides flexibility in software design. A well-designed computer program is made up of different modules of code. When programmers are working on one module of code, they do not need to know the details contained in the other modules, only what the modules can do for them. Changes are easier to make with this approach because a change usually only has to be made locally in one module rather than across all the modules.
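As a minimal sketch of this idea in Python (the Stack class below is invented for this example), a class can expose a small interface while hiding how the data is actually stored:

 # A minimal sketch of information hiding (the Stack class is invented
 # for this example). Callers use only push() and pop(); how the items
 # are stored is an internal detail.
 
 class Stack:
     def __init__(self):
         self._items = []           # leading underscore marks it internal
 
     def push(self, item):
         self._items.append(item)
 
     def pop(self):
         return self._items.pop()
 
 # Because callers depend only on the interface, the internal list could
 # later be replaced (say, by a linked structure) without changing any
 # code that uses Stack.
 s = Stack()
 s.push(5)
 print(s.pop())   # 5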

Abstraction means leaving out unnecessary details. Humans deal with abstraction all the time as it makes things easier to understand and use.

Abstraction is very useful when computations need to be performed, and two kinds of abstraction are used. Generalisation is where a program is progressively decomposed into more and more detailed modules. Representation is applied to problem solving to remove details until it becomes possible to represent the problem in a way that makes it solvable.

Comparing algorithms

Algorithm - An unambiguous set of instructions for solving a problem


There may be several algorithms which can all solve a given problem, but they may do so at very different speeds and require different amounts of memory.

We must always ensure that an algorithm provides the correct output for all inputs, not just for some. We can do this by working through the algorithm logically or by using a mathematical proof. As well as considering whether an algorithm produces the correct output, we must also consider the time and space the algorithm takes to produce that output.

Computational complexity of an algorithm - how economical an algorithm is with time and space


The computational complexity of an algorithm depends on its time complexity and its space complexity. Time complexity indicates how fast an algorithm runs, and space complexity indicates how much memory the algorithm requires to run. There is often a trade-off between the two: making an algorithm quicker often makes it take up more space, and vice versa.
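A classic illustration of this trade-off, sketched here in Python with an invented example, is caching: the memoised Fibonacci function below uses extra memory to store earlier results, making it dramatically faster than the naive version.

 from functools import lru_cache
 
 def fib_slow(n):
     """Naive recursion: little extra memory, but exponential time."""
     if n < 2:
         return n
     return fib_slow(n - 1) + fib_slow(n - 2)
 
 @lru_cache(maxsize=None)          # trade space (a cache) for time
 def fib_fast(n):
     """Memoised recursion: linear time, at the cost of stored results."""
     if n < 2:
         return n
     return fib_fast(n - 1) + fib_fast(n - 2)
 
 print(fib_fast(100))   # instant; fib_slow(100) would take far too long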

Many algorithms take much longer to run using larger inputs. Worst-case complexity is the computed complexity when using an input which takes the longest time or uses the most space. Best-case complexity is the computed complexity when we choose an input which takes the shortest time or uses the least space. Average-case complexity is calculated by averaging the times for every possible input.

The complexity of a problem is taken to be the worst-case complexity of the most efficient algorithm that solves the problem.

Comparing the efficiency of algorithms can be difficult because computer systems vary, and one computer may be able to run an algorithm much faster than another. Instead of measuring the time an algorithm takes to run, we can count the number of times each operation in the algorithm is executed. Counting every operation can be quite demanding, but fortunately it is rarely necessary: it is much easier to identify the operation that contributes most to the overall running time and to calculate the number of times that operation is executed.

Basic operation - The operation that contributes most to an algorithm's running time


The method used to analyse an algorithm's time efficiency is based on the number of times the basic operation is executed.
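For example, in a linear search the basic operation is the comparison. This Python sketch (the names are invented for this example) counts it directly, showing the best and worst cases:

 # A sketch (with invented names) of counting the basic operation.
 # In a linear search the basic operation is the comparison.
 
 def linear_search(items, target):
     comparisons = 0
     for item in items:
         comparisons += 1              # one execution of the basic operation
         if item == target:
             break                     # found the target
     return comparisons
 
 data = list(range(1000))
 print(linear_search(data, 0))    # 1    : best case, target is first
 print(linear_search(data, 999))  # 1000 : worst case, target is last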

Order of growth

Order of growth - by what factor the execution time increases when the size of the input increases


A problem has a linear order of growth when an algorithm takes twice as long to run if the input is doubled. If the execution time increases by a factor of four when the input is doubled, then the order of growth of the problem is quadratic. An exponential order of growth describes problems whose execution time grows extremely quickly as the input gets larger.
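A small Python sketch (the two functions are invented stand-ins for real algorithms) makes the difference concrete:

 # A sketch showing how operation counts change when the input size doubles.
 
 def linear_ops(n):
     return n          # e.g. a single pass over the input
 
 def quadratic_ops(n):
     return n * n      # e.g. comparing every pair of items
 
 for n in (100, 200, 400):
     print(n, linear_ops(n), quadratic_ops(n))
 # 100 100  10000
 # 200 200  40000   <- doubling n doubles the linear count
 # 400 400 160000      but quadruples the quadratic count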

Order of complexity

The complexity of a problem is taken to be the order of growth of the most efficient algorithm to solve that problem. The complexity is often expressed using Big O notation.

Order of complexity - a problem's big O complexity


Finite state machines (FSM)

An FSM has a set of input symbols, and if it produces an output, then it will also have a set of output symbols. It has a finite set of states and transition functions which map state-symbol pairs to the next state and may also generate an output. We can use a transition table to represent a transition function. One state is always designated as the start or initial state and indicated by a → symbol.

FSMs are often represented using state-transition diagrams, which are directed graphs whose nodes represent states. Each edge or arc represents a transition and is labelled with symbols such as a|b, where a represents the input symbol or transition trigger and b represents the output symbol; the output symbol b is optional and is omitted for machines without output.

FSMs are widely used, for example in controlling traffic lights, spell checking, searching large bodies of text, speech recognition and networking protocols.
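As a sketch of how a transition table drives an FSM, the following Python fragment (the machine itself is invented for this example) implements an automaton that accepts binary strings containing an even number of 1s:

 # A minimal sketch of a finite state machine driven by a transition
 # table. The machine accepts binary strings with an even number of 1s.
 
 transitions = {
     ("even", "0"): "even",
     ("even", "1"): "odd",
     ("odd", "0"): "odd",
     ("odd", "1"): "even",
 }
 
 def accepts(input_string):
     state = "even"                             # the start state
     for symbol in input_string:
         state = transitions[(state, symbol)]   # follow the transition function
     return state == "even"                     # 'even' is the accepting state
 
 print(accepts("1011"))   # False (three 1s)
 print(accepts("1001"))   # True  (two 1s)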

  • How to represent FSMs in diagrams
  • How to represent FSMs in transition tables
  • Mealy and Moore machines
  • Deterministic finite automata (DFA)
  • Non-deterministic finite automata (NFA)
  • Finite state automata (FSA)

Turing machines

  • What are Turing machines
  • Unary representation
  • Turing machines that use tapes (see the sketch after this list)
  • State transition diagrams
  • Rigorous definition of an algorithm
  • Computability
  • Equivalence with all types of computing machines
  • A universal Turing machine
  • The principle of universality
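The following Python sketch shows the flavour of a Turing machine working on a tape; the machine itself, which increments a unary number by appending a 1, is invented for this example:

 # A minimal sketch of a Turing machine simulator. Rules map
 # (state, symbol) to (new state, symbol to write, head movement).
 
 rules = {
     ("move_right", "1"): ("move_right", "1", +1),  # skip over the 1s
     ("move_right", "_"): ("halt", "1", 0),         # write one more 1, halt
 }
 
 def run_turing_machine(tape, state="move_right", head=0):
     tape = list(tape)
     while state != "halt":
         symbol = tape[head] if head < len(tape) else "_"   # '_' is blank
         state, write, move = rules[(state, symbol)]
         if head < len(tape):
             tape[head] = write
         else:
             tape.append(write)                             # extend the tape
         head += move
     return "".join(tape)
 
 print(run_turing_machine("111"))   # '1111' : unary 3 incremented to 4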

Intractable problems

Not every problem can be solved by an algorithm. Problems that have an algorithmic nature can be solved by a series of rules or steps, whereas non-algorithmic problems cannot. An example of an algorithmic problem would be answering the question "How do I make a cup of tea?"; answering the question "What is the meaning of life?" would be a non-algorithmic problem.

Any algorithmic problem with a finite set of inputs is solvable, but algorithmic problems with infinite sets of inputs may or may not be solvable. It is impossible to construct an algorithm to solve some algorithmic problems, and these problems are called non-computable.

Undecidable problems

An algorithmic problem with a yes/no output is called a decision problem. If the decision problem has an algorithmic solution then the problem is decidable. However, a problem is undecidable if it is a decision problem that is non-computable.

Intractable problems

Even if a problem is both algorithmic and computable, whether a solution can be found in practice can depend on the size of the input. A problem is tractable if it has a reasonable-time solution, that is, one of polynomial complexity or better; it is intractable if no reasonable-time solution has been found. Intractable problems are considered unsolvable in practice for large inputs.

However, if the solution to an intractable problem can be guessed, then it is relatively quick to check that solution, and the problem can be solved for that case. Guessing a good solution takes know-how and experience, and this is known as heuristic knowledge.

Another approach that is often used to find solutions to intractable problems is to relax some of the constraints and accept a compromise. The result will not be the optimal solution, but it will be usable and can be found in a reasonable amount of time, as the sketch below illustrates.
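Here is a minimal Python sketch of the nearest-neighbour heuristic for the Travelling Salesman Problem (the city coordinates and function names are invented for this example). It does not guarantee the optimal tour, but it finds a usable one in polynomial time:

 # A sketch of a heuristic approach to the Travelling Salesman Problem:
 # the nearest-neighbour rule gives a usable, not necessarily optimal,
 # tour quickly. The coordinates are invented for this example.
 
 import math
 
 cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6)}
 
 def distance(p, q):
     return math.hypot(p[0] - q[0], p[1] - q[1])
 
 def nearest_neighbour_tour(start="A"):
     tour, unvisited = [start], set(cities) - {start}
     while unvisited:
         here = cities[tour[-1]]
         # Greedy choice: always move to the closest unvisited city.
         nearest = min(unvisited, key=lambda c: distance(here, cities[c]))
         tour.append(nearest)
         unvisited.remove(nearest)
     return tour
 
 print(nearest_neighbour_tour())   # ['A', 'C', 'B', 'D']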

The halting problem

The halting problem asks whether it is possible, in general, to write a program that can tell, given any program and its inputs and without executing the program, whether the given program with its given inputs will halt.

The answer to this has been proven to be no.[1] In general, the only way to discover whether a program halts on a particular input is to run it; there is no general method that can decide in advance, for every program and input, whether it will halt.
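The standard proof is by contradiction and can be sketched in code. Suppose a function halts(program, data) existed that always answered correctly; every name in the following Python sketch is hypothetical, and the point is that no such function can be written:

 # A sketch of the standard argument. Suppose halts(program, data)
 # existed and always answered correctly; it is purely hypothetical.
 
 def halts(program, data):
     ...                    # hypothetical perfect halting tester
 
 def troublemaker(program):
     if halts(program, program):
         while True:        # predicted to halt, so loop forever
             pass
     else:
         return             # predicted to loop forever, so halt at once
 
 # Now consider troublemaker(troublemaker). If halts() says it halts,
 # it loops forever; if halts() says it loops, it halts immediately.
 # Either answer is wrong, so a correct halts() cannot exist.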

Regular expressions

  • The difference between a natural and formal language
  • Formal Language
  • Regular Language
  • Regular expressions (Reg Ex)
  • Search using Reg Ex (see the example after this list)
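As a sketch of forming and searching with simple regular expressions, here is an example using Python's re module (the patterns themselves are invented for illustration):

 # Forming simple regular expressions for matching, using Python's re module.
 
 import re
 
 # 'colou?r' matches both spellings: '?' makes the preceding 'u' optional.
 print(re.findall(r"colou?r", "color and colour"))   # ['color', 'colour']
 
 # '^[A-Z][a-z]*$' matches a capitalised word: one capital letter,
 # then zero or more lowercase letters.
 print(bool(re.match(r"^[A-Z][a-z]*$", "Hello")))    # True
 print(bool(re.match(r"^[A-Z][a-z]*$", "hello")))    # False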

Backus-Naur form (BNF)

Backus-Naur Form is a notation for describing the rules (the grammar) that define the syntax of a language.

The symbols used in BNF are:

  • ::= indicating 'is defined as'
  • | indicating 'or'
  • <> angle brackets to surround category names
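Using these symbols, a simple grammar for unsigned integers might be written as follows (this grammar is an illustration, not taken from the specification):

 <digit> ::= 0|1|2|3|4|5|6|7|8|9
 <integer> ::= <digit>|<digit><integer>

The second rule is recursive: an <integer> is either a single <digit> or a <digit> followed by another <integer>, so a string of any length, such as 472, can be checked digit by digit against the grammar.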

Reverse Polish Notation

This is a notation that is easy for machines to interpret, as it requires neither operator-precedence rules nor brackets. An RPN expression takes the following form

 FirstValue SecondValue Operator

where each value may itself be another RPN expression.

Examples

 "5 4 +" = 9
 This is a simple calculation
 
 "2 3 4 * +" = "2 (3 4 *) +" = 14
 Here you can see how statements can be nested
 
 "1 3 * 4 2 / -" = "(1 3 *) (4 2 /) -" = 3 - 2 = 1
 Even with large numbers of calculations taking place on a single line, the meaning of the expression is unambiguous.
 'The original example: "1 3 + 4 2 / -" results in 4 - 2 = 2'
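These expressions can be evaluated mechanically with a stack, as in this Python sketch (the function evaluate_rpn is our own invention): push each value, and when an operator appears, pop two values, apply the operator and push the result.

 # A minimal sketch of evaluating an RPN expression with a stack.
 
 import operator
 
 OPS = {"+": operator.add, "-": operator.sub,
        "*": operator.mul, "/": operator.truediv}
 
 def evaluate_rpn(expression):
     stack = []
     for token in expression.split():
         if token in OPS:
             second = stack.pop()   # the top of the stack is the second
             first = stack.pop()    # operand, so the pop order matters
             stack.append(OPS[token](first, second))
         else:
             stack.append(float(token))
     return stack.pop()
 
 print(evaluate_rpn("2 3 4 * +"))      # 14.0
 print(evaluate_rpn("1 3 * 4 2 / -"))  # 1.0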

References

  1. http://www.bbc.co.uk/dna/place-london/plain/A1304939