In the Classroom – Limits on Scientific Knowledge: Chaos, Computational Complexity, and Computability
For more than a thousand years, scientific thought has been dominated by the principle of determinism: the belief that the future behavior of a system can be completely determined from its current state. That is, a problem can always be solved if the initial state is known, the rules that govern the system are known, and sufficient computational resources are available. This philosophy guided researchers from the Ancient Greeks to Newton as they developed the laws of physics, chemistry, astronomy, and mathematics, culminating in Newton’s Laws of Motion. In the 20th century, however, the reach of the deterministic principle was dramatically curtailed by four conceptual revolutions: chaotic systems, computational complexity, undecidability, and incompleteness.
These seemingly unrelated disciplines each impose fundamental limits on what we can know about our world. Classical Newtonian mechanics accurately predicts the behavior of a small number of interacting objects, such as the period of revolution of a planet orbiting a star or the rate of acceleration of an apple falling to the ground. As the number of objects grows, however, the equations of motion become extremely sensitive to the system’s initial conditions. Long-term behavior then cannot be predicted accurately without precise information about the system’s initial state, information that is often impossible to obtain. Researchers including Poincaré (1908), Julia (1918), Lorenz (1963), and Mandelbrot (1967) observed that chaotic, unpredictable behavior can result.
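Sensitive dependence on initial conditions can be demonstrated with a minimal sketch. The logistic map is not mentioned in the seminar description; it is a standard textbook example of a chaotic system, used here purely as an illustration. Two starting states that differ by one part in ten billion soon follow completely different trajectories:

```python
# Illustrative example (the logistic map is an assumption, not from the
# seminar text): x_{n+1} = r * x_n * (1 - x_n) with r = 4.0 is chaotic.

def logistic_orbit(x0, r=4.0, steps=60):
    """Return the sequence of iterates of the logistic map starting at x0."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.2)                   # one initial state
b = logistic_orbit(0.2 + 1e-10)           # a nearly identical initial state
gap = [abs(x - y) for x, y in zip(a, b)]  # separation at each time step
print(max(gap))                           # the tiny perturbation grows to order 1
```

The gap roughly doubles with each iteration, so the rounding error in any measurement of the initial state eventually overwhelms the prediction, which is the practical content of the chaos results cited above.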
Combinatorial optimization problems arise throughout science and engineering, and many of them can be solved efficiently; finding the most efficient way to route a telephone call through a network is one example. The NP-complete problems, by contrast, are widely believed to be unsolvable using any reasonable amount of computational resources that could possibly exist. These computationally complex problems, which include thousands of problems of interest to scientists, are considered intractable in practice, as shown by Cook (1971) and Karp (1972). Some problems, moreover, cannot be solved even if infinite resources are available: for these undecidable problems, such as Turing’s Halting Problem, the assumption that a solution exists leads to a logical contradiction. Similarly, Gödel’s incompleteness result shows that there are limits on our ability to use logical reasoning to determine which statements are true and which are false.
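The scale of the intractability can be made concrete with a small sketch. The traveling salesman framing below is an illustrative assumption, not an example from the text: a brute-force search over an NP-hard problem of this kind must examine (n-1)!/2 distinct tours of n cities, a count that grows far faster than any polynomial in n.

```python
import math

def tour_count(n):
    """Distinct tours of n cities: fix the starting city and ignore direction,
    leaving (n - 1)! / 2 candidate tours for a brute-force search."""
    return math.factorial(n - 1) // 2

for n in (5, 10, 20):
    print(n, tour_count(n))
# At n = 20 there are already more than 6 * 10**16 tours to examine,
# which is why exhaustive search is hopeless even for modest inputs.
```

An easy problem such as call routing, by contrast, can be solved by a shortest-path algorithm whose running time grows only polynomially in the size of the network, which is the distinction Cook and Karp made precise.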
Taken together, these ideas make a powerful and overarching statement about the limits of human knowledge. The implications of these theories of computational complexity, formal logic, and chaotic systems have profoundly affected disciplines as diverse as biology, philosophy, chemistry, mathematics, computer science, computer engineering, and industrial engineering. By establishing formal limits on our ability to use models to predict the future, these results have forced researchers to alter their expectations when developing new theories. In the seminar we review the familiar deterministic model and discuss its history and applicability. We then cover key results from chaos theory, computational complexity, logic, and the theory of computation. Finally, we synthesize this material by discussing the practical implications of these ideas for scientists. Each student completes a research project that either investigates one of the fundamental limits in more depth or describes how one of the limits affects a specific scientific discipline.
Department of Electrical and Computer Engineering