In the physical sciences

In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases. According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay, only the probability of decay in a given time. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.
However, we can also imagine a relatively simple program for U, call it P, which checks through all the proofs in the FAS and, when it finds a proof that some positive integer I requires a program of N bits to specify it, prints out I and halts.
Which is to say that a lower bound on program-size complexity cannot be derived in any FAS whose own complexity c(FAS) is much lower than that bound.
To prove that a particular object has program-size complexity of N, you need more or less "N bits of axioms", as Chaitin is fond of saying, meaning that the size of the shortest proof-checker for the formal system used must be no less than N. Where Turing and Godel found statements that are undecidable, Chaitin finds quantities whose values are unprovable. Kolmogorov had defined an infinite binary sequence to be random when all of its "prefixes" are incompressible (the "prefixes" are just the finite sequences you find at the beginning of any infinite sequence: their initial n elements).
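True program-size complexity is uncomputable, but a general-purpose compressor gives a cheap upper bound on it, which is enough to see the incompressibility idea in action. The sketch below is only an illustration, not part of Chaitin's own development: it uses Python's zlib to show that a highly patterned string shrinks dramatically while a (pseudo-)random one does not.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Upper bound on description length via zlib (a crude stand-in
    for program-size complexity, which is uncomputable)."""
    return len(zlib.compress(data, 9))

structured = b"01" * 5000                 # highly patterned: a tiny "program" suffices
random.seed(0)
incompressible = bytes(random.getrandbits(8) for _ in range(10000))

print(compressed_size(structured))        # far below 10000
print(compressed_size(incompressible))    # close to (or above) 10000
```

Note that zlib can only ever certify compressibility; it can never prove a string incompressible, which is exactly the limitation the incompleteness result formalises.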
Chaitin introduced these results in the opening plenary session of the IEEE Symposium on Information Theory, and published his new self-delimiting version of program-size complexity in his paper "A theory of program size formally identical to information theory" in the Journal of the ACM, a paper he regards as finally establishing algorithmic information theory on a firm footing as a field, the previous efforts of himself and Kolmogorov belonging to its "pre-history".
Apart from the application to random sequences, the requirement for self-delimiting programs ensures that the complexity of two sequences concatenated together is never greater than the sum of their separate complexities plus a small constant, making program-size complexity an additive quantity. Around the same time, Chaitin devised his infamous constant, Omega, another development made possible by his idea of self-delimiting programs.
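The reason self-delimitation buys additivity is that a decoder can always tell where one program stops and the next begins, so two programs can simply be run back to back. A minimal sketch of the idea (my own toy encoding with a unary length header, not Chaitin's construction):

```python
def encode(bits: str) -> str:
    """Self-delimiting encoding: a unary length header (len(bits) zeros,
    then a 1) followed by the payload, so a decoder knows where it ends."""
    return "0" * len(bits) + "1" + bits

def decode_stream(stream: str):
    """Split a concatenation of self-delimiting programs back apart."""
    out, i = [], 0
    while i < len(stream):
        n = 0
        while stream[i] == "0":    # read the unary length header
            n += 1
            i += 1
        i += 1                     # skip the terminating '1'
        out.append(stream[i:i + n])
        i += n
    return out

p, q = "1101", "001"
combined = encode(p) + encode(q)   # costs |p| + |q| plus header overhead only
print(decode_stream(combined))     # ['1101', '001']
```

Without the headers the decoder could not split the stream unambiguously, and describing the pair would in general cost more than the sum of the parts plus a constant.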
Chaitin showed that for any universal Turing machine U, whose valid programs are self-delimiting, the probability that it will halt on random input is a real number less than one.
This "halting probability" is the Omega for U. This had been tried before, by Solomonoff, but without the requirement for self-delimitation, attempts to calculate Omega would always produce a divergent series, giving a halting "probability" of infinity!
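For a prefix-free (self-delimiting) set of programs, the sum of 2^(-|p|) over halting programs p converges to a value no greater than 1. The machine below is entirely hypothetical, a toy of my own: its valid programs are the strings 0^k followed by a 1, and it "halts" exactly when k is even, so its Omega works out to 2/3. The point is only to show the lower-bound approximations creeping upward:

```python
from fractions import Fraction

def halts(program: str) -> bool:
    """Toy 'machine': valid programs are 0^k 1 (a prefix-free set);
    this machine happens to halt exactly when k is even."""
    k = len(program) - 1
    return k % 2 == 0

def omega_n(n: int) -> Fraction:
    """Lower bound on the halting probability using programs of length <= n:
    the sum of 2^(-|p|) over halting programs p."""
    total = Fraction(0)
    for k in range(n):                 # program 0^k 1 has length k + 1
        p = "0" * k + "1"
        if halts(p):
            total += Fraction(1, 2 ** len(p))
    return total

approx = [omega_n(n) for n in range(1, 12)]
print([float(a) for a in approx])      # non-decreasing, approaching 2/3
```

For a real universal machine the approximations also converge, but uncomputably slowly, since deciding whether each program halts is itself the halting problem.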
As n increases, the successive approximations Omega_n, obtained by summing over halting programs of at most n bits, converge on a real number between 0 and 1, Chaitin showed. The binary expansion of this number is algorithmically random, which means that it is also normal: any finite subsequence of length k appears in it as often as all the other k-length subsequences. The bits of Omega exhibit no structure, order, or pattern. Knowing the first 25 bits helps you not one whit in calculating the 26th bit, and knowing all the odd bits makes it just as hard to calculate the even ones.
The time needed to calculate the first n bits of Omega grows at the rate of the busy beaver function of n, and in order to prove you have the first n bits of Omega, you need n bits of axioms. This is about as unknowable as a well-defined sequence of bits can be. But inspiration struck again when, in the mid-eighties, Chaitin was invited by Cambridge University Press to contribute the first volume for their series Cambridge Tracts in Theoretical Computer Science.
Taking a LISP interpreter and "transforming it into an equation using ideas of Jones and Matijasevic", then seasoning it with a LISP program which computes approximations to Omega, Chaitin produced an enormous exponential diophantine equation with the property that if its single parameter is set to N, then the Nth bit of Omega for the interpreter is 0 exactly when the equation has a finite number of solutions (counting no solutions as finite).
It follows that, since the Nth bit of Omega is random, the answer to the question "are there a finite number of solutions with parameter N?" is itself irreducibly random, beyond the reach of any fixed set of axioms for all but finitely many N. While irreducible randomness in bizarre constructions like the halting probability was an intriguing oddity, the same thing in a "simple" equation, with only integer terms and ordinary arithmetic involved, was more provocative.
Chaitin has continued to refine and develop his ideas, combining them with ever more elegant LISP interpreters into practical courses which let you see the main points of his theories with the help of programs written in his own specialised variant of LISP. Much of his work is freely available via his website, where you can also find many lectures on foundational questions in mathematics. On the significance of his own brand of information-theoretic incompleteness results, Chaitin suggests that they support a view of mathematics as a quasi-empirical discipline: rather than searching for the One True Axiom Set, mathematicians should feel free to experiment with "more bits of axioms".
In a recent article for the EATCS Bulletin (June), he says: I think that incompleteness cannot be dismissed and that mathematicians should occasionally be willing to add new axioms that are justified by experience, experimentally, pragmatically, but are not at all self-evident.
In my opinion that P is not equal to NP is a good example of such a new axiom. Sometimes to prove more, you need to assume more, to add new axioms! Of course, at this point, at the juncture of the 20th and the 21st centuries, this is highly controversial. It goes against the current paradigm of what mathematics is and how mathematics should be done, it goes against the current paradigm of the nature of the mathematical enterprise. However this radical paradigm shift may take many years of discussion and thought to be accepted, if indeed this ever occurs.
The theory of randomness is founded on computability theory, and it is nowadays often referred to as algorithmic randomness. Research in algorithmic randomness connects computability and complexity theory with mathematical logic, proof theory, probability and measure theory, analysis, computer science, and philosophy. It also has surprising applications in a variety of fields, including biology, physics, and linguistics. Founded on the theory of computation, the study of randomness has itself profoundly influenced computability theory in recent years.

1 Introduction

In this chapter we aim to give a nontechnical account of the mathematical theory of randomness. This theory can be seen as an extension of classical probability theory that allows us to talk about individual random objects. Besides answering the philosophical question of what it means to be random, the theory of randomness has applications ranging from biology, computer science, physics, and linguistics, to mathematics itself.
Chaitin worked for many years at the IBM Thomas J. Watson Research Center in New York and remains an emeritus researcher there. He has written more than 10 books that have been translated into about 15 languages. He is today interested in questions of metabiology and information-theoretic formalizations of the theory of evolution.

Other scholarly contributions

Chaitin also writes about philosophy, especially metaphysics and the philosophy of mathematics, particularly about epistemological matters in mathematics. In recent writings, he defends a position known as digital philosophy. In the epistemology of mathematics, he claims that his findings in mathematical logic and algorithmic information theory show there are "mathematical facts that are true for no reason, that are true by accident".
The Mathematical Foundations of Randomness