An algorithm is a systematic method for solving a problem. More precisely, it is a finite set of well-defined rules for the solution of a problem in a finite number of steps.


The word "algorithm" comes from the name of the Persian mathematician, al-Khowarizmi, and may have been first used by Gottfried Leibniz in the late 1600s. It remained little known in Western mathematics, however, until the Russian mathematician Andrei Markov (1903–1979) reintroduced it. The term became especially popular in the branches of mathematics concerned with computing and computation. In computer programming specifically, an algorithm is a precisely specified sequence of instructions, expressed in a form a computer can follow, for obtaining the solution to a problem.


Algorithmic complexity

Algorithmic complexity is a measure of complexity developed by Gregory Chaitin and others, building on Claude Shannon's information theory and earlier work by the Russian mathematician Andrei Kolmogorov and the American Ray Solomonoff. Algorithmic complexity quantifies how complex a system is in terms of the length of the shortest computer program, or set of algorithms, needed to completely describe the system. In other words, it asks how small a model of a given system is necessary and sufficient to capture its essential patterns. Algorithmic complexity has to do with the mixture of repetition and innovation in a complex system. At one extreme, a highly regular system can be described by a very short program or algorithm. For example, the bit string 01010101010101... follows from just three commands: print a zero, print a one, and repeat the last two commands indefinitely. The complexity of such a system is very low. At the other extreme, if a system is totally random, its algorithmic complexity is very high, since random patterns cannot be condensed into a smaller set of algorithms: the shortest description is effectively as large as the system itself.
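The contrast between the regular bit string and a random one can be sketched in code. True algorithmic (Kolmogorov) complexity is uncomputable, so the sketch below uses compressed size under zlib as a rough, computable stand-in; the particular strings and lengths are illustrative assumptions, not part of the original text.

```python
import random
import zlib

random.seed(0)  # fixed seed so the example is reproducible

# Highly regular: generated by a tiny program, however long it gets.
regular = "01" * 500

# Statistically patternless: no shorter generating rule is apparent.
noisy = "".join(random.choice("01") for _ in range(1000))

# Compressed size serves as a crude proxy for the length of the
# shortest program that reproduces each string.
for name, s in [("regular", regular), ("noisy", noisy)]:
    print(name, len(zlib.compress(s.encode())))
```

As expected, the regular string compresses to a handful of bytes while the random-looking one stays close to its entropy limit, mirroring the low versus high algorithmic complexity described above.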


Logical depth

Logical depth is another measure of the complexity of a system, developed by the American computer scientist and mathematician Charles Bennett. It contrasts with algorithmic complexity but, like it, is defined in terms of the programs needed to generate the data describing a system: where algorithmic complexity measures the length of the shortest such program, logical depth measures the time that program needs to run.
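One way to build intuition for this length-versus-time distinction is an iterated hash chain. This is a hypothetical illustration, not Bennett's formal definition: the program below is very short (low algorithmic complexity), yet, as far as is known, its output cannot be produced much faster than by running every one of its steps (suggesting high logical depth for large n).

```python
import hashlib

def deep_string(n: int, seed: bytes = b"0") -> bytes:
    """Apply SHA-256 to the seed n times.

    The describing program is tiny, but realizing the output
    is believed to require all n sequential steps: the chain
    cannot be shortcut, so runtime grows linearly with n.
    """
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h

# A short description whose realization takes 100,000 hash steps.
digest = deep_string(100_000)
print(digest.hex())
```

The output is "deep" in Bennett's informal sense: simple to specify, slow to compute.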