God and Nature 2025 #2

By Andy Quick
As the heavens are higher than the earth, so are my ways higher than your ways and my thoughts than your thoughts (Isaiah 55:9 NIV).
For in him all things were created: things in heaven and on earth, visible and invisible, ... all things have been created through him and for him. He is before all things, and in him all things hold together (Colossians 1:16,17 NIV).
I often think about verses like these as I ponder questions in the foundations of Computer Science. In this essay I consider some philosophical questions about the nature of computation.
Computability
In the 19th century, mathematicians were interested in formalizing mathematics. Giuseppe Peano proposed Peano Arithmetic (PA), an axiomatic system describing the natural numbers {0,1,2,...}, in which the normal arithmetic operations are defined in terms of statements of logic, based on nine fundamental axioms. PA allows mathematical logicians to formally prove statements about mathematics and to explore its limits.
At the end of the 19th century, David Hilbert posed some questions about formalizations of mathematics:
(1) Were they consistent?
(2) Were they complete? and
(3) Was there an effective procedure for deciding whether a given mathematical statement is a theorem?
The mathematical logician Kurt Gödel answered the first two questions as follows:
(1) A system such as PA cannot prove its own consistency: assuming PA is consistent (that is, no false statement about the natural numbers can be proven in PA), that fact cannot itself be established within PA, and
(2) No, they are not complete because there are true statements about the natural numbers that cannot be proven within PA.
The third question depends on a definition of the word “effective”.
Mathematicians Alonzo Church, Gödel, and others proposed abstract mathematical theories that encapsulated the notion of effective procedure and proved that there are statements in PA whose truth is not decidable by an effective procedure. Alan M. Turing was the first to connect this notion with the physical world and the idea of computability. Turing defined an idealized computer, known as a Turing machine, and proved the existence of uncomputable problems. The crucial aspect of Turing’s proof is that a Turing machine is universal in the sense that its instructions, or program, can be set up so that it performs any computable task. This includes taking as input an encoding of another program and emulating it; that is, interpreting a program as data.
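To make the idea concrete, a Turing machine can be simulated in a few lines of code. The sketch below is my own illustration, not part of the original essay: the transition table is the machine’s “program”, and it is just data that the simulator interprets, which is the essence of universality.

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(table, tape, state="start", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else blank
        write, move, state = table[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# An example "program": scan right, flipping each bit, halt at the blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "1011"))  # prints 0100
```

Note that the simulator itself never changes; feeding it a different table makes it compute a different function, just as Turing’s universal machine runs a program supplied to it as data.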
Perhaps the main philosophical idea underlying computability is something called the Church-Turing Thesis. The thesis says that any function “naturally to be regarded as computable” is computable by a Turing machine. The thesis isn’t a provable mathematical statement, but rather a statement about how we should think about the word computable. And, as we shall see, it has withstood the test of time very well.
Computational Complexity
In the fledgling field of Computer Science in the 1960s, it soon became apparent that computability wasn’t the interesting question for practical purposes. The real concern was: could a problem be feasibly solved by a computer? Mathematicians came up with the idea of measuring the resources an algorithm consumed on an idealized computer, such as a Turing machine, to solve a problem. Resources could include space (memory), time (number of steps), or other measures. It was useful to express the resources needed as a function of the size of the input.
It seemed that the notion of an efficient algorithm was captured by the requirement that it compute an answer in running time bounded by some polynomial in the size of the input. That is, for input size n, the algorithm computes an answer in at most cn^k steps, for some constants c and k. This characterization of efficient algorithms has stood the test of time, usefully separating them from algorithms that must resort to a “brute force” search of a solution space exponential in the size of the input.
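The gulf between polynomial and exponential growth is easy to see numerically. Here is a toy comparison of my own (the constants c = 100 and k = 3 are arbitrary choices for illustration):

```python
# Steps for a hypothetical polynomial-time algorithm (c * n**k)
# versus brute-force search over 2**n candidate solutions.
def poly_steps(n, c=100, k=3):
    return c * n ** k

def brute_force_steps(n):
    return 2 ** n

for n in (10, 30, 60):
    print(n, poly_steps(n), brute_force_steps(n))
# At n = 60 the polynomial bound is about 2.2e7 steps,
# while 2**60 exceeds 1.1e18.
```

Even a generous polynomial bound is dwarfed by exponential growth long before the input gets large.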
Framed in terms of “decision problems” (i.e., problems with a yes or no answer), the class P is the set of problems for which there is an algorithm that halts and gives a yes or no answer in time bounded by a polynomial function of the input size.
The class NP (for non-deterministic polynomial time) is the set of problems for which there is an algorithm that, given a “witness” (such as a proposed solution), can verify a yes answer in polynomial time. One can think of NP as the class of problems whose positive answers can be checked efficiently, whereas P is the class of problems whose answers can be found efficiently. The term “non-deterministic” comes from the idea of a Turing machine that is allowed to guess the answer and then verify it in polynomial time.
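A classic example is Subset Sum: given a list of integers, is there a subset adding up to a target? Checking a claimed witness is fast, while the obvious way to find one searches exponentially many subsets. A small sketch of my own:

```python
from itertools import combinations

# Subset Sum: is there a subset of `numbers` adding up to `target`?
# Verifying a claimed witness (a tuple of indices) is fast...
def verify(numbers, target, witness_indices):
    return sum(numbers[i] for i in witness_indices) == target

# ...but the obvious way to *find* a witness tries exponentially
# many subsets of the input.
def find_witness(numbers, target):
    for r in range(len(numbers) + 1):
        for idxs in combinations(range(len(numbers)), r):
            if verify(numbers, target, idxs):
                return idxs
    return None

nums = [3, 34, 4, 12, 5, 2]
print(find_witness(nums, 9))  # prints (2, 4): the indices of 4 and 5
```

The asymmetry between `verify` (one pass over the witness) and `find_witness` (up to 2^n subsets) is exactly the asymmetry between NP verification and brute-force search.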
Clearly, all problems in P are also in NP, since an algorithm that computes a solution in polynomial time also verifies it. The question of whether P = NP might be one of the most profound questions any scientist has ever asked. For example, consider the problem: given a statement of PA of size n, does it have a proof of at most k symbols? If k is bounded by a polynomial function of n, a claimed proof can be checked for validity in time polynomial in n, so this problem is in NP. But if P = NP, one could also efficiently find such a proof whenever one of reasonable size exists. If we accept this, then the (seemingly creative) act of proving theorems in mathematics could be efficiently automated.
For this and other reasons, it may seem obvious that P ≠ NP. On the other hand, no one has proven either P = NP or P ≠ NP in the more than 50 years since the question was posed. Furthermore, while the details are beyond the scope of this article, it has been formally shown that certain “natural” proof techniques cannot suffice. It seems that some entirely novel insight will be needed to resolve the question.
Models of Computation
As we have seen, Turing made a new kind of connection between the abstract world of mathematics and the physical world with the concept of the Turing machine.
In a tangible way, the Turing machine model informed the design of the first electronic digital computers. Of course, real computers by necessity have finite resources, such as memory. But it isn’t hard to see that an idealized digital computer is computationally equivalent, in terms of the class P, to a Turing machine.
In the 1980s, physicist Richard Feynman and others pointed out that it would take resources exponential in n to simulate the quantum mechanical state of a system of n particles. Feynman also raised the possibility of building computers out of quantum components, which could possibly harness these vast computational resources. Since then, a rich theory of quantum computation has developed. The main idea of quantum computation is to use the quantum mechanical concepts of superposition and interference, acting on quantum bits, or qubits, to solve problems that seem difficult for “classical” computers. The most famous problems in this category are computing discrete logarithms and factoring integers. In 1994, Peter Shor published a quantum algorithm that can factor n-bit integers in time polynomial in n, whereas the best-known classical algorithm takes roughly 2^(n^(1/3)) steps (ignoring logarithmic factors in the exponent).
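At its heart, Shor’s algorithm reduces factoring N to finding the period r of f(x) = a^x mod N; a quantum computer finds r efficiently using superposition and interference. The sketch below, my own illustration, is purely classical: it finds the period by brute force (exponentially slow in general, and exactly the step the quantum computer accelerates) and then applies the same number-theoretic post-processing.

```python
from math import gcd

# Shor's algorithm reduces factoring N to finding the period r of
# f(x) = a**x mod N. Here the period is found by brute force, which
# is what makes this classical sketch slow on large inputs.
def find_period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    if gcd(a, N) != 1:
        return gcd(a, N)    # lucky guess: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 != 0:
        return None         # bad choice of a; retry with another base
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None         # another unlucky case; retry
    return gcd(y - 1, N)

print(shor_classical(15, 7))  # prints 3, since 15 = 3 * 5
```

For N = 15 and a = 7 the period is 4, so the factor is gcd(7^2 - 1, 15) = 3; the quantum speedup lies entirely in replacing the brute-force `find_period` loop.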

Physicist David Deutsch is of the opinion that the ideas of quantum computation lend support to the Many-Worlds Interpretation of quantum mechanics. Deutsch issues the challenge: explain how Shor’s algorithm factors a 1000-bit integer on classical resources, when the computation involves far more intermediate quantum states than there are particles in the known universe.
It is important to note that there may be an efficient classical algorithm for factoring integers, and that we just haven’t found it. Also, it’s important to understand that quantum computers do not perform exponentially many separate computations in parallel. Rather, they can exploit the mathematical structure of some tasks, and use interference to reinforce correct answers and cancel out incorrect answers.
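Interference can be seen in a two-line simulation. Applying the Hadamard gate to a qubit creates an equal superposition; applying it again makes the two paths to |1⟩ cancel and the paths to |0⟩ reinforce, returning the qubit to |0⟩ with certainty. A toy sketch of my own:

```python
import math

# A qubit state is a pair of amplitudes (a0, a1). The Hadamard gate
# creates superposition; applying it twice shows interference: the
# paths to |1⟩ cancel, and the paths to |0⟩ reinforce.
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)          # start in |0⟩
state = hadamard(state)     # equal superposition: both outcomes 50/50
state = hadamard(state)     # interference: back to |0⟩ with certainty
probs = tuple(round(abs(a) ** 2, 10) for a in state)
print(probs)  # prints (1.0, 0.0)
```

Measurement probabilities are squared amplitudes, so the cancellation of the |1⟩ amplitude (not any hidden parallel run) is what makes the answer deterministic.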
Much groundbreaking work has been done in building real quantum computers, with some amazing results in small-scale experiments. However, it’s fair to say that the jury is still out on whether any of these experiments can be scaled up to realize a practical quantum computing device.

Philosophical Musings
Traditionally we have thought of Computer Science as a field built upon a foundation of mathematics and physics (mathematics to inform the theory, physics to inform the practice). Professor Scott Aaronson has called Computer Science “what mediates between the physical world and the Platonic world”. In this article, I hope I have given a glimpse into how Computer Science can inform the study of finite physical beings such as humans—in particular, in the study of the limits of what we can know, and of how information is related to the physical world. Perhaps a model in which Computer Science mediates between mathematics and the physical sciences is appropriate.
References
- Aaronson, Scott, 2013. “Why Philosophers Should Care About Computational Complexity.” https://www.scottaaronson.com/papers/philos.pdf
- Aaronson, Scott, 2013. Quantum Computing Since Democritus. New York: Cambridge University Press.
- Garey, Michael R. and Johnson, David S., 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Bell Telephone Laboratories, Inc.
- University of Waterloo, 2000. The Red Room. https://ist.uwaterloo.ca/cs/redroom/
Andy Quick is a retired software engineer, with a Master's degree in Computer Science. He is a member of the CSCA and lives in Kitchener-Waterloo, Canada.