In 1874, mathematician Georg Cantor invented set theory, which became a fundamental tenet of modern mathematics. He defined the concepts of infinity and well-ordered sets, and proved that the real numbers form a strictly larger order of infinity than the natural numbers.

Since the natural numbers map one-to-one with objects in the physical world, Cantor's work expanded the domain of mathematics from the experience of the physical universe to any universe conceived by the mathematician's mind.

Cantor's conception of set theory and infinity became the dominant paradigm in modern mathematics and was used without question by all early computational theorists.

In 1928, David Hilbert challenged the mathematics community to address the following question: could an algorithm decide, for any logical proposition, whether it is valid? In other words, could the proposition be decided? This challenge was called the Entscheidungsproblem, or decision problem, and it is the origin of our modern concept of decidability.

In a paper published in 1936, Alan Turing proved that no general algorithm can decide the Halting problem: the question of whether an arbitrary program, given an arbitrary input, will eventually halt. His formulation showed that the existence of such an algorithm leads to a logical paradox, and therefore that halting is undecidable.
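Turing's argument can be sketched in a few lines of code. The fragment below is purely illustrative (the names are ours, not Turing's): given any candidate halting decider, one can construct a "diagonal" program that does the opposite of whatever the decider predicts about it, so no candidate can ever be correct.

```python
def make_diagonal(halts):
    """Given a claimed halting decider halts(f, arg) -> bool, build
    Turing's diagonal program, which does the opposite of whatever
    the decider predicts about its own behavior."""
    def diagonal(f):
        if halts(f, f):      # decider predicts f(f) halts...
            while True:      # ...so loop forever instead
                pass
        return "halted"      # decider predicts f(f) loops: halt at once
    return diagonal

# Take any concrete candidate decider, e.g. one claiming nothing halts:
def claims_nothing_halts(f, arg):
    return False

diagonal = make_diagonal(claims_nothing_halts)

# The decider says diagonal(diagonal) never halts, yet it plainly does,
# so this candidate is wrong; the same trap defeats every candidate.
result = diagonal(diagonal)
```

Running the fragment shows the contradiction directly: the decider predicted non-termination, but the call returns.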

In 1953, as the digital computer era began, and programming became a practical problem, Henry Gordon Rice built upon Turing's proof to show that undecidability generalized to all non-trivial, semantic properties of programs.

Turing's proof, and Rice's derivative work, exploited Cantor's concepts of orders of infinity to derive logical inconsistencies, or paradoxes. Since Turing's work preceded digital computers, he used an abstract, pencil-and-paper computational framework in his proof, the Turing machine, and enumerated all possible Turing machines.

He mapped the set of all possible Turing machines one-to-one onto the infinity of the natural numbers, and then used the mathematical technique of diagonalization to construct a counter-example outside that enumeration. In short, the crucial logical inconsistency, the paradox, is a Turing machine residing in a higher order of infinity.
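The diagonalization technique itself can be sketched concretely. In this illustrative Python fragment (the names are ours), an enumeration assigns one infinite binary sequence to each natural number, and the diagonal construction produces a sequence guaranteed to differ from every enumerated one, which is exactly how Cantor showed the enumeration cannot be complete.

```python
def diagonalize(enumeration):
    """Given enumeration(i) -> the i-th binary sequence (a function
    from position j to a bit), build a new sequence that differs from
    the i-th enumerated sequence at position i."""
    def diagonal(i):
        return 1 - enumeration(i)(i)   # flip the i-th bit of the i-th sequence
    return diagonal

# A toy enumeration: sequence i has, at position j, the j-th bit of i.
def enum(i):
    return lambda j: (i >> j) & 1

d = diagonalize(enum)

# The diagonal sequence disagrees with every enumerated sequence at
# its own index, so it cannot appear anywhere in the enumeration.
disagrees_everywhere = all(d(i) != enum(i)(i) for i in range(1000))
```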

Do these theories about infinity and an infinite number of programs apply to finite programs in the real world?

In 1886, Leopold Kronecker, a wealthy businessman and professor at the University of Berlin, rejected Cantor's notions of infinite sets and irrational numbers.

He maintained that a theory's logical correctness does not imply the existence of the entities it purports to describe, and that such entities remain devoid of significance unless they can actually be constructed.

If we accept Kronecker's rejection of infinite sets, how does that change our interpretation and application of Turing's and Rice's theorems?

Alongside his other mathematical work, L.E.J. Brouwer introduced intuitionism in the 1920s. Simply stated, intuitionism is a foundational philosophy of mathematics holding that mathematics is purely the result of constructive human mental activity.

We began to wonder: what if, instead of using Cantor's abstract math to analyze physical computations, we used Kronecker's and Brouwer's real-world math to understand real-world software programs?

This question led us to the foundational science that unlocks the meaningful knowledge in software.

"To understand the development of the opposing theories existing in this field, one must first gain a clear understanding of the concept 'science'; for it is as a part of science that mathematics originally took its place in human thought."

L.E.J. Brouwer

We combine the right math with AI innovations to change the essence of the software development process.

Herbert Simon, Allen Newell, and others pioneered AI in the 1950s. Then, while working at MIT's AI Laboratory, Marvin Minsky and Seymour Papert proposed that AI researchers focus on developing programs capable of intelligent behavior in artificially simple situations. These situations came to be known as microworlds.

Following this approach, we normalize software into a formal world. Like early AI's microworlds, our normalized representation is finite, and quite obviously maps to a subset of reality – a subset of the natural numbers.
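To make the idea of mapping programs to natural numbers concrete, here is a generic Gödel-style numbering in Python. This is an illustrative sketch only, not our actual normalized representation: any finite program text can be read as a single natural number, and the mapping is invertible.

```python
def encode(program_text: str) -> int:
    """Map a program's source text to a unique natural number by
    reading its UTF-8 bytes as one big base-256 integer."""
    return int.from_bytes(program_text.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the source text from its number: the mapping loses nothing."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

src = "print('hello')"
n = encode(src)        # a single (large) natural number
roundtrip = decode(n)  # the original program text, recovered exactly
```

Because every real-world program is a finite text, the set of all such programs maps to a subset of the natural numbers, which is the point the paragraph above makes.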

Proofs that use concepts of infinity and higher orders of infinity simply do not work in this context. Diagonalization cannot be usefully applied because counter-examples in alternative mathematical universes have no practical significance in our reality.

Correspondingly, one can easily see the usefulness of constructivist and intuitionist mathematics: software physically embodies the mental constructions of the programmer.

In dynamic execution, software mechanically carries out the construction of the data states intended by the programmer. All of this conforms to intuitionist and constructivist mathematics.

In 2009, during a time of significant transition in AI research, Peter Norvig and his colleagues at Google published an influential paper urging machine-translation and speech-recognition researchers to reject theory development and instead "embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data."

Data science algorithms learn about human intention from data patterns, and then use this learning to create new algorithms that capture the essence of the intention, such as natural-language comprehension.

Phase Change does the same with software. The hurdle we faced was that data-science algorithms, by definition, require a formalism and a machine interpretation of meaning. For an algorithm to operate on floating-point numbers, floating-point arithmetic must conform to the rules of arithmetic.

How does one transform programs into data representations analogous to floating-point values for numbers or strings for text? The representations must support formal operations, like arithmetic and concatenation, and those representations and operations must capture the semantics of what human engineers intend when they write and manipulate programs. This is no mean feat.
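A toy sketch can show what "formal operations on programs" means. This is our own illustrative example, not Phase Change's actual representation: treat small programs as functions and give them a composition operator whose formal meaning matches the programmer's intent of "run one, then the other", just as addition does for numbers or concatenation for strings.

```python
# Represent tiny programs as pure functions, and give the representation
# a formal operation: sequential composition ("p; then q").
def compose(p, q):
    """The formal analogue of concatenation: the composed program's
    meaning is exactly 'apply p, then apply q to the result'."""
    return lambda x: q(p(x))

inc = lambda x: x + 1   # program: increment
dbl = lambda x: x * 2   # program: double

prog = compose(inc, dbl)   # the program "increment; then double"
answer = prog(3)           # (3 + 1) * 2
```

The point of the example is that the operation's formal definition and the engineer's intuitive reading of "do this, then that" coincide, which is the property the representations described above must have.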

One can now see why constructivist mathematics is essential: it is a mathematical theory amenable to representing the programmer's intention and the consequent behavior of programs.

Thus, we have cleared the barrier, transforming software into data. This makes the software variant of Norvig's complexity amenable to data science and modern AI.