The Johns Hopkins University
The term “computational complexity” has two usages which must be distinguished. On the one hand, it refers to a property of an algorithm for solving instances of a problem: broadly stated, the computational complexity of an algorithm is a measure of how many steps the algorithm will require in the worst case on an instance or input of a given size. The number of steps is measured as a function of that size.
The term's second, more important use is in reference to a problem itself. The theory of computational complexity involves classifying problems according to their inherent tractability or intractability — that is, whether they are “easy” or “hard” to solve. This classification scheme includes the well-known classes P and NP; the terms “NP-complete” and “NP-hard” are related to the class NP.
To understand what is meant by the complexity of an algorithm, we must define algorithms, problems, and problem instances. Moreover, we must understand how one measures the size of a problem instance and what constitutes a “step” in an algorithm. A problem is an abstract description coupled with a question requiring an answer; for example, the Traveling Salesman Problem (TSP) is: “Given a graph with nodes and edges and costs associated with the edges, what is a least-cost closed walk (or tour) containing each of the nodes exactly once?” An instance of a problem, on the other hand, includes an exact specification of the data: for example, “The graph contains nodes 1, 2, 3, 4, 5, and 6, and edges (1, 2) with cost 10, (1, 3) with cost 14, …” and so on. Stated more mathematically, a problem can be thought of as a function p that maps an instance x to an output p(x) (an answer).
An algorithm for a problem is a set of instructions guaranteed to find the correct solution to any instance in a finite number of steps. In other words, for a problem p, an algorithm is a finite procedure for computing p(x) for any given input x. Computer scientists model algorithms by a mathematical construct called a Turing machine, but we will consider a more concrete model here. In a simple model of a computing device, a “step” consists of one of the following operations: addition, subtraction, multiplication, finite-precision division, and comparison of two numbers. Thus if an algorithm requires 100 additions and 220 comparisons for some instance, we say that the algorithm requires 320 steps on that instance. In order to make this number meaningful, we would like to express it as a function of the size of the corresponding instance, but determining the exact function would be impractical. Instead, since we are really concerned with how long the algorithm takes (in the worst case) asymptotically as the size of an instance gets large, we formulate a simple function of the input size that is a reasonably tight upper bound on the actual number of steps. Such a function is called the complexity or running time of the algorithm.
Technically, the size of an instance is the number of bits required to encode it. It is measured in terms of the inherent dimensions of the instance (such as the number of nodes and edges in a graph), plus the number of bits required to encode the numerical information in the instance (such as the edge costs). Since numerical data are encoded in binary, an integer C requires about log₂|C| bits to encode and so contributes logarithmically to the size of the instance. The running time of the algorithm is then expressed as a function of these parameters, rather than the precise input size. For example, for the TSP, an algorithm's running time might be expressed as a function of the number of nodes, the number of edges, and the maximum number of bits required to encode any edge cost. As we have seen, the complexity of an algorithm is only a rough estimate of the number of steps that will be required on an instance. In general — and particularly in analyzing the inherent tractability of a problem — we are interested in an asymptotic analysis: how does the running time grow as the size of the instance gets very large? For these reasons, it is useful to introduce Big-O notation. For two functions f(t) and g(t) of a nonnegative parameter t, we say that f(t) = O(g(t)) if there is a constant c > 0 such that, for all sufficiently large t, f(t) ≤ cg(t). The function cg(t) is thus an asymptotic upper bound on f. For example, 100(t² + t) = O(t²), since by taking c = 101 the relation follows for t ≥ 100; however, 0.0001t³ is not O(t²). Notice that it is possible for f(t) = O(g(t)) and g(t) = O(f(t)) simultaneously.
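The Big-O claims in the example above can be checked numerically. The following sketch (in Python, purely illustrative) verifies that c = 101 works for 100(t² + t) once t ≥ 100, and that 0.0001t³ eventually exceeds c·t² no matter how large a constant c is chosen:

```python
# The text's Big-O example: 100(t^2 + t) = O(t^2) with c = 101,
# since 100(t^2 + t) <= 101 t^2 exactly when t >= 100.

def f(t):
    return 100 * (t**2 + t)

def g(t):
    return t**2

c = 101
assert all(f(t) <= c * g(t) for t in range(100, 10_000))

# By contrast, 0.0001 t^3 is not O(t^2): for ANY fixed c,
# 0.0001 t^3 > c t^2 as soon as t > 10000 * c.
c = 1000
t = 10_000 * c + 1
assert 0.0001 * t**3 > c * t**2
```

Of course, a finite check is only an illustration; the defining inequality must hold for all sufficiently large t.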
We say that an algorithm runs in polynomial time (is a polynomial-time algorithm) if its running time f(t) = O(P(t)), where t is the input size and P is a polynomial. Polynomial-time algorithms are generally (and formally) considered efficient, and problems for which polynomial-time algorithms exist are considered “easy.” For the remainder of this article, when we use the term “polynomial,” we mean as a function of the input size.
In order to establish a formal setting for discussing the relative tractability of problems, computer scientists first define a large class of problems called recognition (or decision) problems. This class comprises precisely those problems whose associated question requires the answer “yes” or “no.” For example, consider the problem of determining whether an undirected graph is connected (that is, whether there is a path between every pair of nodes in the graph). This problem's input is a graph G consisting of nodes and edges, and its question is, “Is G connected?” Notice that most optimization problems are not recognition problems, but most have recognition counterparts. For example, a recognition version of the TSP has as input both a graph G, with costs on the edges, and a number K. The associated question is, “Does G contain a traveling salesman tour of length less than or equal to K?” In general, an optimization problem is not much harder to solve than its recognition counterpart. One can usually embed the recognition algorithm in a binary search over the possible objective function values to solve the optimization problem with a polynomial number of calls to the embedded algorithm.
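The binary-search embedding just described can be sketched as follows. Here `has_tour_at_most` is a hypothetical black box answering the recognition question “Is there a tour of length ≤ K?”, and integer edge costs are assumed so that the search space is discrete:

```python
# Solving the optimization TSP with a recognition oracle by binary
# search over the objective value. Only O(log(hi - lo)) oracle calls
# are needed, so a polynomial-time recognition algorithm yields a
# polynomial-time optimization algorithm (for integer costs).

def optimal_tour_length(has_tour_at_most, lo, hi):
    """Return the smallest K in [lo, hi] for which a tour of length
    <= K exists. Assumes some tour of length <= hi exists (e.g. hi =
    the sum of all edge costs in a complete graph)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if has_tour_at_most(mid):
            hi = mid          # a tour of length <= mid exists
        else:
            lo = mid + 1      # every tour is longer than mid
    return lo
```

For instance, with a toy oracle whose true optimum is 37, `optimal_tour_length(lambda K: K >= 37, 0, 1000)` returns 37 after about ten oracle calls.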
The class P is defined as the set of recognition problems for which there exists a polynomial-time algorithm, where “P” stands for “polynomial time.” Thus, P comprises those problems that are formally considered “easy.” The larger problem class NP contains the class P. The term “NP” stands for “nondeterministic polynomial” and refers to a different, hypothetical model of computation, which can solve the problems in NP in polynomial time (for further explanation, see references).
The class NP consists of all recognition problems with the following property: for any “yes”-instance of the problem there exists a polynomial-length “certificate” or proof of this fact that can be verified in polynomial time. The easiest way to understand this idea is by considering the position of an omniscient being (say, the wizard Merlin) who is trying to convince a mere mortal that some instance is a yes-instance. Suppose the problem is the recognition version of the TSP, and the instance is a graph G and the number K = 100. Merlin knows that the instance does contain a tour with length at most 100. To convince the mortal of this fact, he simply hands her a list of the edges of this tour. This list is the certificate: it is polynomial in length, and the mortal can easily verify, in polynomial time, that the edges do in fact form a tour with length at most 100.
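The mortal's check is easy to make concrete. The sketch below (one possible encoding, not prescribed by the article) verifies a claimed tour certificate in time polynomial in the numbers of nodes and edges:

```python
# Verifying Merlin's certificate for the recognition TSP: the
# certificate is a list of edges claimed to form a closed tour
# through every node exactly once, with total cost <= K.

def verify_tour_certificate(nodes, costs, tour_edges, K):
    """costs maps frozenset({u, v}) -> edge cost; tour_edges is the
    certificate, a list of (u, v) pairs."""
    if len(tour_edges) != len(nodes):
        return False
    degree = {v: 0 for v in nodes}
    total = 0
    for u, v in tour_edges:
        e = frozenset((u, v))
        if e not in costs:
            return False
        total += costs[e]
        degree[u] += 1
        degree[v] += 1
    # A tour visits every node exactly once: each node has degree 2
    # and the edges form a single connected cycle.
    if any(d != 2 for d in degree.values()):
        return False
    adj = {v: [] for v in nodes}
    for u, v in tour_edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes) and total <= K
```

Every step above runs in time polynomial in the instance size, which is exactly what membership in NP requires of the verification.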
There is an inherent asymmetry between “yes” and “no” in the definition of NP. For example, there is no obvious, succinct way for Merlin to convince a mortal that a particular instance does NOT contain a tour with length at most 100. In fact, by reversing the roles played by “yes” and “no” we obtain a problem class known as Co-NP. In particular, for every recognition problem in NP there is an associated recognition problem in Co-NP obtained by framing the NP question in the negative (e.g., “Do all traveling salesman tours in G have length greater than K?”). Many recognition problems are believed to lie outside both of the classes NP and Co-NP, because they seem to possess no appropriate “certificate.” An example would be the problem consisting of a graph G and two numbers K and L, with the question, “Is the number of distinct traveling salesman tours in G with length at most K exactly equal to L?”
To date, no one has found a polynomial-time algorithm for the TSP. On the other hand, no one has been able to prove that no polynomial-time algorithm exists for the TSP. How, then, can we argue persuasively that the TSP and many other problems in NP are “intractable”? Lacking such a proof, we offer an argument that is slightly weaker but still compelling. We show that the recognition version of the TSP, and scores of other NP problems, are the hardest problems in the class NP in the following sense: if there is a polynomial-time algorithm for any one of these problems, then there is a polynomial-time algorithm for every problem in NP. Observe that this is a very strong statement, since NP includes a large number of problems (such as integer programming) that appear to be extremely difficult to solve, both in theory and in practice. Problems in NP with this property are called NP-complete. Otherwise stated, it seems highly unlikely that a polynomial-time algorithm will be found for any NP-complete problem, since such an algorithm would actually provide polynomial-time algorithms for every problem in NP!
The class NP and the notion of “complete” problems for NP were first introduced by Cook (1971). In that paper, he demonstrated that a particular recognition problem from logic, SATISFIABILITY, was NP-complete, by showing directly how every other problem in NP could be encoded as an appropriate special case of SATISFIABILITY. Once the first NP-complete problem had been established, however, it became easy to show that others were NP-complete. To do so requires simply providing a polynomial transformation from a known NP-complete problem to the candidate problem. Essentially, one needs to show that the known “hard” problem, such as SATISFIABILITY, is a special case of the new problem. Thus, if the new problem has a polynomial-time algorithm, then the known hard problem has one as well.
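A classic example of such a transformation (a standard textbook reduction, not given in the article itself) shows the recognition TSP to be NP-complete: HAMILTONIAN CYCLE, a known NP-complete problem, is a special case of it. Given a graph G on n nodes, build a complete graph in which G's edges cost 1 and all other node pairs cost 2; then G has a Hamiltonian cycle if and only if the new instance has a tour of length at most n. The construction is clearly polynomial:

```python
# Polynomial transformation from HAMILTONIAN CYCLE to the recognition
# TSP: G has a Hamiltonian cycle  <=>  the constructed complete graph
# has a tour of total cost <= n (such a tour uses only cost-1 edges,
# i.e. only edges of G).
from itertools import combinations

def hamiltonian_to_tsp(nodes, edges):
    """Return (costs, K) for the recognition-TSP instance."""
    edge_set = {frozenset(e) for e in edges}
    costs = {}
    for u, v in combinations(nodes, 2):
        e = frozenset((u, v))
        costs[e] = 1 if e in edge_set else 2
    return costs, len(nodes)
```

A polynomial-time algorithm for the recognition TSP would therefore solve HAMILTONIAN CYCLE in polynomial time as well, which is exactly the logic described above.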
The term NP-hard refers to any problem that is at least as hard as any problem in NP. Thus, the NP-complete problems are precisely the intersection of the class of NP-hard problems with the class NP. In particular, optimization problems whose recognition versions are NP-complete (such as the TSP) are NP-hard, since solving the optimization version is at least as hard as solving the recognition version.
The polynomial hierarchy refers to a vast array of problem classes both within and beyond NP and Co-NP. There is an analogous set of definitions that focuses on the space required by an algorithm rather than the time, and these time and space definitions roughly correspond in a natural way. There are complexity classes for parallel processing, based on allowing a polynomial number of processors. There are classes corresponding to randomized algorithms, those that allow certain decisions in the algorithm to be made based on the outcome of a “coin toss.” There are also complexity classes that capture the notions of optimization and approximability. The most famous open question concerning the polynomial hierarchy is whether the classes P and NP are the same, that is, whether P = NP. If a polynomial-time algorithm were discovered for any NP-complete problem, then all of NP would “collapse” to P; indeed, most of the polynomial hierarchy would disappear.
In algorithmic complexity, two other terms are heard frequently: strongly polynomial and pseudo-polynomial. A strongly polynomial-time algorithm is one whose running time is bounded polynomially by a function only of the inherent dimensions of the problem and independent of the sizes of the numerical data. For example, most sorting algorithms are strongly polynomial, since they normally require a number of comparisons polynomial in the number of entries and do not depend on the actual values being sorted; an algorithm for a network problem would be strongly polynomial if its running time depended only on the numbers of nodes and arcs in the network, and not on the sizes of the costs or capacities.
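The sorting example can be made concrete with a small experiment (an illustrative sketch, not from the article): a comparison-based sort performs a number of comparisons that depends only on the number of entries and their relative order, never on the magnitudes of the values.

```python
# "Strongly polynomial" illustrated with insertion sort: the
# comparison count depends only on how many entries there are and how
# they are arranged, not on how large the values themselves are.

def insertion_sort_comparisons(a):
    """Sort a copy of `a`; return (sorted list, number of comparisons)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons
```

Sorting [5, 4, 3, 2, 1] and sorting the same pattern scaled by a billion make exactly the same number of comparisons, so the running time is independent of the sizes of the numerical data.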
A pseudo-polynomial-time algorithm is one that runs in time polynomial in the dimension of the problem and the magnitudes of the data involved (provided these are given as integers), rather than the base-two logarithms of their magnitudes. Such algorithms are technically exponential functions of their input size and are therefore not considered polynomial. Indeed, some NP-complete and NP-hard problems are pseudo-polynomially solvable (sometimes these are called weakly NP-hard or weakly NP-complete, or NP-complete in the ordinary sense). For example, the NP-hard knapsack problem can be solved by a dynamic programming algorithm requiring a number of steps polynomial in the size of the knapsack and the number of items (assuming that all data are scaled to be integers). This algorithm is exponential-time since the input sizes of the objects and knapsack are logarithmic in their magnitudes. However, as Garey and Johnson (1979) observe, “A pseudo-polynomial-time algorithm … will display 'exponential behavior' only when confronted with instances containing 'exponentially large' numbers, [which] might be rare for the application we are interested in. If so, this type of algorithm might serve our purposes almost as well as a polynomial time algorithm.” The related term strongly NP-complete (or unary NP-complete) refers to those problems that remain NP-complete even if the data are encoded in unary (that is, if the data are “small” relative to the overall input size). Consequently, if a problem is strongly NP-complete then it cannot have a pseudo-polynomial-time algorithm unless P = NP.
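The knapsack dynamic program mentioned above can be sketched in a few lines (a minimal version for the 0/1 knapsack with integer sizes). It takes O(nW) steps for n items and capacity W, which is polynomial in the magnitude W but exponential in the roughly log₂W bits needed to encode it:

```python
# Pseudo-polynomial dynamic programming for the 0/1 knapsack problem:
# O(n * W) steps, where n is the number of items and W the (integer)
# knapsack capacity.

def knapsack(sizes, values, W):
    """Maximum total value packable into capacity W, each item used
    at most once. best[w] = best value achievable with capacity w."""
    best = [0] * (W + 1)
    for s, v in zip(sizes, values):
        for w in range(W, s - 1, -1):   # descending: each item used once
            best[w] = max(best[w], best[w - s] + v)
    return best[W]
```

On instances whose capacity is modest relative to the number of items, this behaves like a polynomial-time algorithm, which is precisely the practical point Garey and Johnson make.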
For textbook introductions to the subject, see Papadimitriou (1994) and Sipser (1997). The most important reference on the subject, Garey and Johnson (1979), contains an outstanding, relatively compact introduction to complexity. Further references, including surveys and full textbooks, are given below.
Bovet, D. P. and Crescenzi, P. (1994). Introduction to the Theory of Complexity. Prentice-Hall, Englewood Cliffs, New Jersey.
Cook, S. A. (1971). “The complexity of theorem-proving procedures,” Proc. 3rd Annual ACM Symp. Theory of Computing, 151–158.
Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability: a Guide to the Theory of NP-Completeness. W.H. Freeman, New York.
Karp, R. M. (1975). “On the computational complexity of combinatorial problems,” Networks 5, 45–68.
Lewis, H. R. and Papadimitriou, C. H. (1997). Elements of the Theory of Computation, 2nd ed. Prentice-Hall, Englewood Cliffs, New Jersey.
Papadimitriou, C. H. (1985). “Computational complexity,” in Lawler, E. L., J. K. Lenstra, A. H. G. Rinnooy Kan, and D. B. Shmoys, eds., The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. Wiley, Chichester, UK.
Papadimitriou, C. H. (1994). Computational Complexity. Addison-Wesley, Redwood City, California.
Papadimitriou, C. H. and Steiglitz, K. (1982). Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs, New Jersey.
Shmoys, D. B. and Tardos, E. (1989). “Computational complexity of combinatorial problems,” in Lovász, L., R. L. Graham, and M. Groetschel, eds., Handbook of Combinatorics. North-Holland, Amsterdam.
Sipser, M. (1997). Introduction to the Theory of Computation. PWS-Kent, Belmont, California.
Stockmeyer, L. J. (1990). “Complexity theory,” in Coffman, Jr., E. G., J. K. Lenstra, and A. H. G. Rinnooy Kan, eds., Handbooks in Operations Research and Management Science; Volume 3: Computation, Chapter 8. North Holland, Amsterdam.
This text originally appeared in Encyclopedia of Operations Research and Management Science (ISBN 079237827x)