# Computational Complexity

Complexity theory is a central field of theoretical computer science and has had a large impact on all of computer science. Complexity theory asks how many resources (time, space, energy, ...) are needed to perform a particular task or to solve a particular problem. It tries to classify problems into complexity classes such that problems in the same class have similar requirements on the resources needed to solve them. For instance, all problems belonging to the complexity class ${\displaystyle {\mathcal {P}}}$ have, by definition, an algorithm that solves them relatively fast, that is, in polynomial time.

When we speak of solving a particular problem, we might have in mind the problem of deciding whether a given input graph is connected. This problem can be formalized mathematically as the set ${\displaystyle L=\{G\mid G{\text{ is connected}}\}}$, that is, the set of all connected graphs. We say that an algorithm solves a problem ${\displaystyle L}$ if it returns yes whenever the input is a member of ${\displaystyle L}$ and no otherwise. In our example, an algorithm solving ${\displaystyle L}$ would read some input graph, compute something, and output yes exactly when it has determined that the input graph is connected. What is beautiful about this framework is that the problem we want to analyze is simply the set ${\displaystyle L}$ itself.
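The decision procedure for this example can be sketched as follows. This is a minimal illustration, not part of the original text: the function name and the adjacency-list representation (a dict mapping each vertex to its neighbours) are assumptions made for the sketch. It decides membership in ${\displaystyle L}$ by breadth-first search, answering yes (True) exactly when every vertex is reachable from some start vertex.

```python
from collections import deque

def is_connected(graph):
    """Decide membership in L = {G | G is connected}.

    graph: adjacency list, e.g. {0: [1], 1: [0, 2], 2: [1]}.
    Returns True ("yes") iff the graph is connected.
    """
    if not graph:
        return True  # convention: the empty graph counts as connected
    # Breadth-first search from an arbitrary start vertex.
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # Connected iff the search reached every vertex.
    return len(seen) == len(graph)
```

Since breadth-first search visits each vertex and edge at most a constant number of times, this decider runs in polynomial time, which is one way to see that this ${\displaystyle L}$ lies in ${\displaystyle {\mathcal {P}}}$.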

Given a problem ${\displaystyle L}$, natural questions are:

• How fast is the fastest algorithm that solves ${\displaystyle L}$?
• How much space (e.g., memory or hard-drive space) does the best algorithm solving ${\displaystyle L}$ need?
• If I do not have much space (on ancient mobile phones, say), can I still find an algorithm for ${\displaystyle L}$? Can I then still achieve the fastest possible running time?
• Suppose you have a very hard problem ${\displaystyle L}$ for which you just cannot figure out an efficient algorithm. Can you prove that it cannot be solved efficiently?