What are some interesting math algorithms?




The Algorithmic Revolution
On the History of Interactive Art

From October 31, 2004
ZKM | Atriums 8 + 9

 
Introduction - The Mathematical Algorithm

The concept of an algorithm is fundamental in mathematics. It is given intuitively and is usually understood as a general procedure for solving a class of problems. An algorithm is defined by a finite set of rules that are applied one after the other and often repeated under certain conditions. Classic examples are the Euclidean algorithm and Gaussian elimination.
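To make this concrete, here is a minimal Python sketch of the Euclidean algorithm, one of the classic examples just named: a finite set of rules (replace the pair of numbers by the smaller number and the remainder) repeated until a terminating condition (remainder zero) is reached.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b);
    when the remainder reaches zero, a holds the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```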
The Arabs, under the influence of the Indians, already developed algorithms for handling algebraic tasks. The word "algorithm" goes back to the name of the Arab mathematician Mohammed Ibn Musa Alchwarizmi (around 800), whose book on the treatment of algebraic equations (later cited as "liber algorithmi") contributed significantly to the dissemination of the calculation methods developed at that time.
Descartes developed analytic geometry with the intention of making geometry accessible to algebraic calculation methods. Leibniz is known to have envisioned an all-encompassing method for solving every problem by "calculation", and he therefore endeavored to develop algorithms. He was also one of the first to plan the construction of a calculating machine.
The lack of mathematical precision in the intuitive definition of an algorithm bothered many mathematicians and logicians of the 19th and 20th centuries. In 1906 A.A. Markov created a general theory of stochastic (random) processes using his so-called Markov chains, which A. Kolmogorov generalized in 1936. The concept of algorithmic randomness came to be accepted as the definitive definition of chance; it was introduced in the 1960s by R. Solomonoff ("A Formal Theory of Inductive Inference", 1964), A. Kolmogorov ("Three Approaches to the Quantitative Definition of Information", 1965), and G. Chaitin ("On the Length of Programs for Computing Finite Binary Sequences", 1966) as the foundation of algorithmic information theory, which today is essentially treated as part of complexity theory.
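As an aside, the defining idea of a Markov chain is easy to show in code. The sketch below, whose states and transition probabilities are invented purely for illustration, simulates a two-state chain in which the next state depends only on the current one.

```python
import random

# States and transition probabilities are invented for this example.
TRANSITIONS = {
    "A": (["A", "B"], [0.9, 0.1]),
    "B": (["A", "B"], [0.5, 0.5]),
}

def step(state: str) -> str:
    """The Markov property: the next state depends only on the current
    state, drawn from that state's fixed transition distribution."""
    states, weights = TRANSITIONS[state]
    return random.choices(states, weights=weights)[0]

state = "A"
path = [state]
for _ in range(15):
    state = step(state)
    path.append(state)
print("".join(path))  # e.g. AAAABAABAAAAAAAB
```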
In order to have algorithms for solving as many mathematical problems as possible, attempts were made to find ever more general algorithms. In the field of logic, calculi had been developed that allow large parts of mathematics to be derived from a few axioms, e.g. in the Principia Mathematica (1910-1913) by A.N. Whitehead and B. Russell.
The question now concerned a general algorithm that would make it possible to derive all the theorems of a mathematical theory, e.g. geometry, from the axioms of that theory. With the efforts to solve this problem, the actual theory of algorithms began, and the concept of the algorithm was made precise.
The problem of making the concept of the algorithm precise is, historically and logically, closely related to making the concept of the computable function precise: for a function to be computable, there must be an algorithm that can calculate the corresponding function value for each argument. These investigations began with a paper by Thoralf Skolem in 1923. The functions Skolem considered are the primitive recursive functions, which Kurt Gödel first introduced as a formal notion in his 1931 work. Since not every intuitively computable function is primitive recursive, there was reason to look for a more general definition of computability. Herbrand and Gödel developed the concept of the general recursive function around 1934, although Gödel was not yet convinced that it captured the general concept of the computable function. It was not until 1936, after A. Church and S.C. Kleene had found a completely different characterization of the notion of computability (λ-definability) and, in collaboration with J.B. Rosser, had been able to prove its equivalence with the Herbrand-Gödel notion, that Church expressed his famous thesis: the intuitively given, commonly used notion of the computable arithmetic function is identical with the precisely defined notion of the general recursive function. Although this thesis cannot be proven, it has been reinforced more and more by later results, so that its validity is now generally accepted. When it turned out that one also had to consider arithmetic functions that are defined like the general recursive functions but have only a subset of the natural numbers as their domain, the wider concept of the partially recursive function was introduced (Kleene, 1938), and Church's thesis was extended to it accordingly.
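The scheme of primitive recursion that Skolem and Gödel worked with can be illustrated with a short sketch. The Python below (function names are my own) builds addition and multiplication from nothing but zero, the successor function, and the recursion scheme.

```python
# Sketch of primitive recursion: arithmetic built from zero and the
# successor S(n) = n + 1 alone. Function names are illustrative.

def successor(n: int) -> int:
    return n + 1

def add(m: int, n: int) -> int:
    """add(m, 0) = m;  add(m, S(n)) = S(add(m, n))"""
    return m if n == 0 else successor(add(m, n - 1))

def mul(m: int, n: int) -> int:
    """mul(m, 0) = 0;  mul(m, S(n)) = add(mul(m, n), m)"""
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # -> 7 12
```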
Another crucial step in the theory of computability was A.M. Turing's paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1936-37). Here the fundamental concept of the automatic machine, now called the Turing machine, was introduced before the technical development of calculating machines. This concept gives very natural and easy access to an exact definition of the computable function and of the algorithm. Almost simultaneously with Turing, and independently of him, E. Post published a very similar proposal in 1936 for making the concept of computability precise. Turing established the equivalence between the notion of the function computable by a Turing machine and that of the function computable in the usual sense. Because Turing was able to prove in 1937 the equivalence of his concept of computability with Church's λ-definability, it could be shown that Church's and Turing's theses are identical.
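A Turing machine is simple enough to simulate in a few lines, which makes the "natural and easy access" claim tangible. The sketch below uses an invented rule table for a machine that adds 1 to a binary number: the transition table is the finite set of rules, and the tape supplies the unbounded memory.

```python
# Minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
# The example machine (my own illustration) increments a binary
# number, starting with the head on the rightmost digit.
BLANK = "_"
RULES = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry left
    ("carry", "0"): ("1", 0, "halt"),     # 0 + carry = 1, done
    ("carry", BLANK): ("1", 0, "halt"),   # ran off the left: new digit
}

def run(tape: str) -> str:
    cells = dict(enumerate(tape))
    head, state = len(tape) - 1, "carry"
    while state != "halt":
        symbol = cells.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, BLANK) for i in range(lo, hi + 1))

print(run("1011"))  # -> "1100" (11 + 1 = 12 in binary)
```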
Three different precise versions of the concept of the computable function had now been found, and their equivalence had been proved. The notion of Turing computability also has a special significance: it provides a definition of mechanical computability. The first problem of classical mathematics whose unsolvability was proved is a problem posed by A. Thue (1914) in the theory of semigroups. Its unsolvability was proved simultaneously, but independently, by E. Post and A.A. Markov in 1947. From it one can derive the unsolvability of a number of other famous problems in mathematics, such as the word problem of group theory or the decision problem of predicate logic.
The development of programming languages began with Axel Thue, who in 1914 provided the first precise version of an algorithmic decision procedure with his paper "Problems about changes in strings of characters according to given rules". With the help of a finite alphabet (e.g. six letters) and a rule system R (e.g. two conversion rules), it was possible to determine in individual cases whether a given string of characters could be generated from the given alphabet and rule system. Such semi-Thue systems were used to develop the theory of formal languages. In the 1950s, Noam Chomsky used semi-Thue systems to describe the grammatical structures of natural languages. Building on Chomsky's use of semi-Thue systems, John Backus and Peter Naur introduced a formal notation around 1960 for describing the syntax of a language, from which the first successful programming language of this kind developed: ALGOL 60 (ALGOrithmic Language). Programming languages such as FORTRAN, COBOL, BASIC, Pascal, and C all have an ALGOL-like structure.
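A semi-Thue system of the kind Thue studied is easy to sketch in code: words over a finite alphabet, and rules that replace one substring with another anywhere in the word. In the Python below, the alphabet, rules, and test words are invented for illustration; the breadth-first search is bounded by a maximum word length, since in general the derivability question for such systems is undecidable (this is essentially the Thue problem whose unsolvability Post and Markov proved).

```python
from collections import deque

# Rules of an invented semi-Thue system over the alphabet {a, b}.
RULES = [("ab", "ba"), ("b", "aa")]

def rewrites(word: str):
    """All words reachable from `word` by one rule application."""
    for lhs, rhs in RULES:
        start = word.find(lhs)
        while start != -1:
            yield word[:start] + rhs + word[start + len(lhs):]
            start = word.find(lhs, start + 1)

def derivable(start: str, target: str, max_len: int = 12) -> bool:
    """Bounded search: can `target` be derived from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        word = queue.popleft()
        if word == target:
            return True
        for nxt in rewrites(word):
            if len(nxt) <= max_len and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(derivable("ab", "aaa"))  # True:  "ab" -> "aaa" via b -> aa
print(derivable("ab", "bb"))   # False: no derivation exists
```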