This note was posted online upon the celebration of the 30th anniversary of Mathematica.

From one of our homepages (81018.com), the challenge for the next 30 years is clear:

“The degree to which we have integrated all our models as a working system is a measure of our intelligence and the integrity of our knowledge systems. There is no consistent scale between all of these models. That is, our Standard Models do not yet have a known consistent scale between each other, and there are no current scales from either Standard Model to our human-systems scales, where the greatest diversity of algorithms is at work. Add to that quandary (challenge) that, in deep learning, our artificial-intelligence systems generate their own algorithms from the analysis of thousands, millions, and even billions of records, and it appears today that we have no access to the inner workings of those algorithms.”

It’s our Achilles’ heel.

This lack of integration, and of our understanding of such integrated systems (explainability), is a weakness within this industry. It is a weakness in our understanding of the deep role of mathematics and logic. Yes, it is ultimately even a weakness for our nation’s defense.

We may take the data from a chart that runs from the Planck units to the Age of the Universe (there are 202+ notations) to create a “spaceapp” this weekend for NASA’s Space Apps Challenge.

The first second within the life of this universe takes up just over 144 of those 202+ notations. The first 67 notations describe scales far smaller than anything probed at CERN’s labs, so imagination is a key to creating the initial blocks of notations.
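That notation count for the first second can be checked with back-of-envelope arithmetic. The sketch below is not from the original note; it assumes the CODATA value of the Planck time, roughly 5.391 × 10⁻⁴⁴ seconds, and simply counts how many base-2 doublings of that value reach one second:

```python
import math

# Planck time in seconds (CODATA value; an assumption, not from the letter).
PLANCK_TIME = 5.391e-44

# Number of base-2 doublings of the Planck time needed to reach one second.
doublings_to_one_second = math.log2(1.0 / PLANCK_TIME)
print(f"{doublings_to_one_second:.2f}")  # roughly 143.7 doublings
```

That lands the first second within the 144th notation, consistent with the figure quoted above.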

Although we will use the epochs defined by big bang theory as a level set and a guide, our data does not show any signs of a big bang. Base-2 acts like a scripting language that defines the epochs better than they have heretofore been defined, all without a bang.

I’ll also drop Mitchell Feigenbaum and Robert Langlands a note to see if they have suggestions. I believe this domain would mark the first time dimensionless or point-free geometries, vis-à-vis Alfred North Whitehead, would actually have been visualized.

Do you have any programs and/or people you could loan us to get this project in motion?

I cannot imagine a simpler model of the universe
— just 200+ base-2 exponential notations from the Planck base
units to the Age of the Universe, today, the Now.

It puts bifurcation theory on steroids.
Are we crazy? If so, please let us have it!
We can take it if we are! Thanks.

Sincerely,
Bruce

Second email (our first email disappeared without a trace): January 8, 2014

Dear Dr. Stephen Wolfram:

I thought that I sent you this note back on January 4,
but it has disappeared without a trace so I resend it now.
A Stanford friend recommended your Rockwood-UCSD lecture
to me and I was ever so glad she did. What a fascinating
introduction to your work from over ten years ago. I can
only imagine where you are now.

I believe cellular automata actually apply best in the
very-small-scale universe, from the Planck Length to
particle physics. Most academics discount that rather
tiny but complex domain.

A high school geometry class, and now one of our better students
and I, are slowly digging into it as part of his science fair project.

Here are our conclusions and our guesses:

1. The universe is mathematically very small.
Using base-2 exponential notation from the Planck Length
to the Observable Universe, there are somewhere over 202.34
and under 205.11 notations, steps, or doublings. NASA’s Joe Kolecki
helped us with the first calculation and J.-P. Luminet (Paris Observatory)
with the second. Our work began in our high school geometry
classes when we started with a tetrahedron and divided its edges
by 2, finding the octahedron in the middle and four tetrahedrons,
one in each corner. Then, dividing the octahedron, we found
eight tetrahedrons, one in each face, and six octahedrons,
one in each corner. We kept going within until we reached the Planck Length.
We then multiplied by 2 out to the Observable Universe. It
was then easy to standardize the measurements by successively multiplying
the Planck Length by 2. In somewhere under 205.11 notations we go
from the smallest to the largest possible measurements of a length.
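The upper figure in point 1 can be reproduced with a one-line calculation. This sketch is not part of the original letter; it assumes the CODATA Planck length (about 1.616 × 10⁻³⁵ m) and a commonly quoted diameter of the Observable Universe (about 8.8 × 10²⁶ m), and counts the base-2 doublings between them:

```python
import math

PLANCK_LENGTH = 1.616e-35      # meters (CODATA value; an assumption)
OBSERVABLE_UNIVERSE = 8.8e26   # meters, approximate diameter (an assumption)

# Number of base-2 doublings from the Planck Length to the Observable Universe.
notations = math.log2(OBSERVABLE_UNIVERSE / PLANCK_LENGTH)
print(f"{notations:.2f}")  # about 205.1 doublings
```

Smaller estimates of the universe’s size give counts nearer 202, which is roughly the spread between the two calculations cited above.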

2. The very small scale universe is an amazingly complex place.
Assuming the Planck Length is a singularity of one vertex, we also
noted the expansion of vertices. By the 60th notation, of course, there are
over a quintillion vertices, and at the 61st notation over a quintillion more.
Yet it must start most simply, and here we believe your work
with the principle of computational equivalence could have a great
impact. We believe A. N. Whitehead’s point-free geometries will also have
applicability.
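The vertex counts in point 2 follow from simple doubling, which Python’s arbitrary-precision integers make easy to check exactly. This sketch is an editorial addition, assuming one vertex at notation 0 that doubles at every notation thereafter:

```python
# Simple-doubling count: one vertex at notation 0, doubling each notation.
QUINTILLION = 10**18

vertices_at_60 = 2**60
vertices_at_61 = 2**61

print(vertices_at_60)  # exceeds a quintillion at the 60th notation
print(vertices_at_61 - vertices_at_60)  # the 61st adds over a quintillion more
```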

3. This little universe is readily tiled by the simplest structures.
The universe can be simply and readily tiled with the four hexagonal plates
within the octahedron and by the tetrahedral-octahedral-tetrahedral chains.

4. And, the universe is delightfully imperfect.
In 1959, Frank and Kasper discerned the roughly 7.36-degree gap left by a simple
construction of five tetrahedrons (seven vertices), looking a lot like the Chrysler
logo. The icosahedron with its 20 tetrahedrons is squishy. We call it quantum
geometry in our high school. Between notations 30 and 45 is the opening to randomness.

5. The Planck Length as the next big thing.
Within cellular automata we might just find the early rules
that generate the infrastructures for things. The fermion and the proton
do not show up until the 66th notation or doubling.

I could go on, but let’s see if these statements are at all helpful.

Our work is just two years old yet relies on several assumptions
that have been rattling around for 40 years. I’ll insert a few references below.