Complexity

Complexity is a phenomenon that has two distinct and almost opposite meanings. The first, and probably the older mathematically, goes back to Andrei Kolmogorov's attempt to give an algorithmic foundation to notions of randomness and probability, and to Claude Shannon's study of communication channels via his notion of information. In both cases, complexity is synonymous with disorder and a lack of structure. The more random a process is, the greater its complexity. An ideal gas, for example, with its numerous molecules bouncing around in complete disarray, is complex as far as Kolmogorov and Shannon are concerned. In this sense, complexity equates to the degree of complication, and it is this aspect of complexity with which complexity theory in computer science is concerned.
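The Shannon side of this idea can be made concrete with a short sketch. The Python below is an illustration added here, not part of the original entry, and the sample strings are arbitrary; it estimates the Shannon entropy of a symbol sequence in bits per symbol. A perfectly ordered string scores zero, and the score rises as the string becomes more disordered.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of the string s, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# More disorder -> higher entropy -> greater complexity in the Shannon/Kolmogorov sense.
print(shannon_entropy("aaaaaaaaaaaaaaaa"))  # 0.0  (perfectly ordered)
print(shannon_entropy("abcdabcdabcdabcd"))  # 2.0  (repeating pattern of four symbols)
print(shannon_entropy("q7f!kz03mxv9wrtj"))  # 4.0  (sixteen distinct symbols)
```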

 

The second, and more recent, notion of "complexity" refers instead to how structured, intricate, hierarchical, and sophisticated a natural process is. In particular, it is a property associated with dynamical systems in which new, unpredictable behavior arises on scales above the level of the constituent components. The distinction between these two meanings can be revealed by asking a simple question about a system: is it complex, or is it merely complicated? Measures of complexity include algorithmic complexity, fractal dimensionality, Lyapunov exponents, and logical depth.
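One of the measures just listed, the Lyapunov exponent, can be estimated numerically. The sketch below is an illustration added here, with the map, starting point, and iteration counts chosen only as assumptions; it estimates the Lyapunov exponent of the logistic map x → r·x·(1 − x). A negative value indicates orbits that settle down, while a positive value signals the sensitive, unpredictable behavior characteristic of chaos.

```python
import math

def lyapunov_logistic(r, x0=0.3, n_transient=1_000, n_iter=100_000):
    """Rough numerical estimate of the Lyapunov exponent of x -> r*x*(1 - x)."""
    x = x0
    # Discard transients so the orbit settles onto its attractor.
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))  # log of the local stretching rate
    return total / n_iter

print(lyapunov_logistic(2.5))  # negative: orbits converge to a fixed point
print(lyapunov_logistic(4.0))  # about ln 2 ≈ 0.693: positive, chaotic
```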

 


Complexity theory

Complexity theory is the part of the theory of computation that has to do with the resources needed to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). Complexity theory differs from computability theory, which deals with whether a problem can be solved at all, regardless of the resources required.
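As a simple illustration of the time resource, the Python sketch below is added here and is not part of the original entry; the search routines and sample data are assumptions for demonstration. It counts the steps taken by two strategies for the same search problem: linear search uses a number of steps proportional to the input size, while binary search on sorted data uses a number proportional to its logarithm, so complexity theory classifies the second as cheaper in time.

```python
def linear_search_steps(items, target):
    """Count the comparisons made by a straightforward linear search."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(items, target):
    """Count the comparisons made by binary search; items must be sorted."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # 1,000,000 steps: grows linearly with input size
print(binary_search_steps(data, 999_999))  # about 20 steps: grows logarithmically
```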