Author: Joseph Van Name

So it appears that some people in the mathematical community have some trouble with the second law of thermodynamics since they do not believe in science. They instead believe in perpetual motion machines, that the world is flat (or a torus, or a projective sphere), and that computers, refrigerators, TVs, cars, and airplanes are powered by demons rather than gasoline and electricity. Let me therefore give you a few experiments on the second law of thermodynamics that you can do at home.

WARNING: All of these experiments are AT YOUR OWN RISK. I cannot be held responsible for any electric shocks, burns, other injuries, dumb patents, or any other damages as a result of these experiments.

A scientific experiment on perpetual motion machines

So let me give you a do-it-yourself experiment that you can do that will hopefully convince you that perpetual motion machines do not exist.

Step 1: Find an extension cord. Extension cords can typically be found attached to electrical outlets or around equipment that requires electricity such as televisions or computers. If you cannot find an extension cord, then you may be able to buy one at Wal-Mart or Staples.

Step 2: Plug the extension cord into itself.

Step 3: Turn the extension cord on and then plug other things into the extension cord.

Step 4: Does the light on the extension cord turn on and do the things plugged into the extension cord turn on and/or power up? If the light on the extension cord turns on and your appliances receive power, then, congratulations, you have created perpetual energy. Go and patent your invention and make trillions of dollars. If the light does not turn on, then perpetual energy does not exist, and the second law of thermodynamics is true.

An experiment on the second law of thermodynamics

Step 1: Make a cup of some hot coffee. Do not drink the coffee.

Step 2: Measure the temperature of the coffee with a thermometer and the temperature of the room which the coffee is in. If you do not have a thermometer, then you may find one at Wal-Mart or some other store. The temperature of the coffee should be much hotter than the temperature of the room. Please make sure that you have a thermometer and not a moisture meter.

Step 3: Wait 24 hours. Do not move the coffee. Store the coffee in a regular cup. Do not store the coffee in a thermos. Make sure that your mother does not heat up your coffee again since that will ruin this experiment.

Step 4: Measure the temperature of the coffee and the temperature of the room which you have left the coffee in.

Step 5: If the temperature of the coffee is equal to the temperature of the room, then the second law of thermodynamics is correct.
If you find that the temperature of the coffee is still near boiling, then you may have been using a moisture meter instead of a thermometer. In this case, you should (at your own risk, of course) stick your finger into the coffee. If you do not get burned, then the second law of thermodynamics is correct after all. If you do get burned, then you need to ask your mother whether she heated up the coffee again. If she denies heating up your coffee, then the second law of thermodynamics is false. You should therefore publish your results in the top physics journal. If they reject you, then they are persecuting you.

A computer experiment on the second law of thermodynamics

Here is another simple experiment which you can perform to convince yourself that the second law of thermodynamics applies not only to our physical universe but also to any time reversible universe.

Step 1: Open a cellular automaton simulator that supports reversible rules.

Step 2: Draw an image in the field. You may draw any image that you like.

Step 3: Select a rule or input a rule yourself. You may use any rule that you like. After you select a rule, then you should run the rule for at least 100,000 generations.

Step 4:

If the resulting image looks something like this,
then try running your cellular automaton under a different rule and repeat steps 1-4 a few times. If the image still looks orderly every time, then perhaps the second law of thermodynamics does not hold for all universes.

On the other hand, if the resulting image looks like this,

then the second law of thermodynamics works not just in our physical universe but also in any hypothetical or simulated universe such as a cellular automaton as long as that universe is time reversible.
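The computer experiment above can be sketched in code. The following is a minimal second-order reversible cellular automaton; the particular local rule `(a AND b) XOR c` is an arbitrary choice of mine, not anything from the original post. Because each new row is the rule applied to the current row XORed with the previous row, the dynamics can be run backwards exactly, yet an orderly initial image typically degenerates into a disordered-looking pattern.

```python
def step(cur, prev):
    """One generation of a second-order reversible cellular automaton:
    next[i] = f(neighborhood of cur at i) XOR prev[i].
    XORing with the previous row makes the update exactly invertible."""
    n = len(cur)
    return [((cur[(i - 1) % n] & cur[i]) ^ cur[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

def run(generations, size=64):
    """Run the automaton forward from an 'orderly' image: one live cell."""
    prev = [0] * size
    cur = [0] * size
    cur[size // 2] = 1
    for _ in range(generations):
        prev, cur = cur, step(cur, prev)
    return prev, cur

def unstep(prev, cur):
    """Undo one generation: from (prev, cur) the earlier state pair is
    (step(prev, cur), prev), so the universe is exactly time reversible."""
    return step(prev, cur), prev
```

Running the automaton forward for any number of generations and then applying `unstep` the same number of times recovers the single live cell exactly, which is the sense in which such a universe is time reversible.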

An experiment on the second law of thermodynamics

With this experiment, you will be able to tell whether the universe will spontaneously violate the second law of thermodynamics and make all of the oxygen molecules migrate to the other side of the room while the nitrogen molecules stay on your side of the room.

Step 1: Gather the materials. For this experiment, you will need an oxygen mask and a full tank of oxygen. Please make sure you know how to use the oxygen mask before starting this experiment. You should also bring a fidget spinner to entertain yourself since this experiment may take a long time to complete and you may get bored. Oh. And make sure to pack some lunch and dinner since this experiment may take a while.

Step 2: Once you have the required materials, go to a room and lock the door. Please prepare to stay in that room for up to 10 hours since it may take a while for the second law of thermodynamics to be violated.

Step 3: If you feel like you cannot breathe, then all of the oxygen molecules have migrated to the other side of the room in violation of the second law of thermodynamics. In this scenario, immediately put on your oxygen mask since it may take a while for the violation of the second law of thermodynamics to wear off. If the second law of thermodynamics is still being violated as your oxygen tank is about to be depleted, then please leave the room and get some fresh air. At this point, you should probably determine whether this phenomenon is a true violation of the second law of thermodynamics or whether your house is haunted. In order to tell if your house is haunted, please get it inspected. Any good home inspector will be able to tell whether evil spirits live in your home or not. If you paid 40,000 dollars for a house that normally costs 500,000 dollars, then your house is probably haunted; I mean, why would anyone sell a non-haunted house for less than a tenth of its worth?

Step 4: If nothing happens and you do not have to put your oxygen mask on after 10 hours, then the second law of thermodynamics is probably correct.

Satire notice: This post is satire. I wrote this post in order to address the anti-scientific beliefs which I have found among the mathematical community, namely the denial of Landauer’s principle.

So I have just posted the paper outlining the proof-of-work problem for my upcoming cryptocurrency Nebula. Here is the link for the paper. I hope to launch Nebula as soon as possible.

The idea behind Nebula is to use a reversible computing optimized proof-of-work (RCO-POW) problem instead of an ordinary proof-of-work problem (if you do not know what I am talking about, I suggest you read the original paper on Bitcoin). An RCO-POW problem is like an ordinary proof-of-work problem except that it can be solved by a reversible computing device just as easily as by a conventional computing device.

It is very rare for a problem to be solvable by a reversible computing device in just as many steps as by a conventional computing device. In general, it takes more steps to solve a problem using reversible computation than it takes to solve the same problem using conventional computation. Because of this computational overhead, and because reversible computers currently do not exist, chip manufacturers do not have much of an incentive to manufacture reversible computing devices. However, since RCO-POW problems are just as easily solved using nearly reversible computational devices, chip manufacturers will be motivated to produce energy efficient reversible devices to solve these RCO-POW problems. After chip manufacturers know how to produce reversible devices that can solve these RCO-POW problems better than conventional devices can, they can use their knowledge and technology to start producing reversible devices for other purposes. Since reversible computation is theoretically much more efficient than conventional computation, these reversible computing devices will eventually perform much better than conventional computing devices. Hopefully these reversible computing devices will also eventually spur the development of quantum computers (one can think of reversible computation as simply quantum computation in which the bits are never in superposition).

Nebula shall use the RCO-POW which I shall call R5. R5 is a POW that consists of five different algorithms which range from computing reversible cellular automata to computing random reversible circuits. I use the multi-algorithm approach in order to ensure decentralization and to incentivize the production of several different kinds of reversible devices instead of just one kind of device.

The only difference between Nebula and the existing cryptocurrencies will be the POW problem, since I did not want to add features which have not already been tested on existing cryptocurrencies.

In August, I will be giving a contributed talk at the 2017 BLAST conference.

I am going to give a talk about the applications of functional endomorphic Laver tables to public key cryptography. In essence, the non-abelian group based cryptosystems extend to self-distributive algebra based cryptosystems, and the functional endomorphic Laver tables are, as far as I can tell, a good platform for these cryptosystems.

ABSTRACT: We shall use the rank-into-rank cardinals to construct algebras which may be used as platforms for public key cryptosystems.

The well-known cryptosystems in group based cryptography generalize to self-distributive algebra based cryptosystems. In 2013, Kalka and Teicher generalized the group based Anshel-Anshel-Goldfeld key exchange to a self-distributive algebra based key exchange. Furthermore, the semigroup based Ko-Lee key exchange extends in a trivial manner to a self-distributive algebra based key exchange. In 2006, Patrick Dehornoy established that self-distributive algebras may be used to build authentication systems.

The classical Laver tables are the unique algebras $A_{n}=(\{1,\ldots,2^{n}-1,2^{n}\},*_{n})$ such that $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$ and $x*_{n}1=x+1\mod 2^{n}$ for all $x,y,z\in A_{n}$. The classical Laver tables are, up to isomorphism, the monogenerated subalgebras of the algebras of rank-into-rank embeddings modulo some ordinal. The classical Laver tables (and similar structures) may be used to recursively construct functional endomorphic Laver tables, which are self-distributive algebras of arbitrary arity. These functional endomorphic Laver tables appear to be secure platforms for self-distributive algebra based cryptosystems.
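As an illustration, the classical Laver tables can be computed directly from the defining identities: self-distributivity gives $x*_{n}(y+1)=x*_{n}(y*_{n}1)=(x*_{n}y)*_{n}(x+1)$, and $2^{n}$ acts as the identity row, so the whole table follows by downward induction on $x$. A minimal sketch:

```python
def laver_table(n):
    """Classical Laver table A_n on {1, ..., 2^n}, built from
    2^n * y = y, x * 1 = x + 1, and x * (y+1) = (x * y) * (x + 1)."""
    N = 1 << n
    star = [[0] * (N + 1) for _ in range(N + 1)]  # 1-indexed table
    for y in range(1, N + 1):
        star[N][y] = y                  # the row of 2^n is the identity
    for x in range(N - 1, 0, -1):       # downward induction on x
        star[x][1] = x + 1
        for y in range(1, N):
            # x * (y+1) = (x * y) * (x + 1); the row x * y > x is
            # already filled because we work downward in x
            star[x][y + 1] = star[star[x][y]][x + 1]
    return star
```

For example, `laver_table(2)` reproduces the familiar rows of $A_{2}$ (row 1 is $2,4,2,4$), and one can verify self-distributivity of the computed table by brute force.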

The functional endomorphic Laver table based cryptosystems should be resistant to attacks from adversaries who have access to quantum computers. The functional endomorphic Laver table based cryptosystems will be the first real-world application of large cardinals!

So if you remember, I recently made this post calling out the trolls, haters, and science denying crackpots on another network. I have removed all the content from this post since things have been getting out of hand and because that post has been getting too many views. I have a couple questions though.

As you may have noticed, that post had very little content to it. Why does a content-free post get so much more attention than a post which actually has content? It seems like the people here are attracted to contentless click-bait posts much more than they are to actual content. This is really pissing me off. Your attraction to click-bait articles and to false and hateful rumors is disgusting and deplorable. You should be interested in new ideas instead of immediately rejecting them out of hatred. People have been ruining my reputation since they are more interested in stupid rumors than in the truth.

I realize that some of my posts are quite technical, but some of them are not. Some of them just announce a new program that I have written which you can launch in your browser; all you have to do is read a couple of directions, click a few buttons, and produce a few pictures that you can hang on your wall. You hardly have to do anything.

I have some amazing news. The total market cap for all cryptocurrencies has reached 100 billion dollars for the first time. On April 24, 2017, the total market cap for all cryptocurrencies hit 30 billion dollars for the first time, and it has been growing ever since. At the same time, while Bitcoin has in the past been the only major cryptocurrency, it no longer is, since Bitcoin currently accounts for about 45 percent of the total market cap for all cryptocurrencies (on March 1, 2017, Bitcoin accounted for about 85 percent of the total market cap).

I believe that unless most of the world's governments wage a cyber war against cryptocurrencies, in the somewhat near future cryptocurrencies will for the most part replace fiat currencies or at least compete with government fiat currencies. Governments today have an incentive to print more fiat currency since they are able to fund their programs by printing more of their own money (producing more money is a bit more subtle than simply taxing people). However, there is no motivation for anyone to exponentially inflate any particular cryptocurrency (I would never own a cryptocurrency whose supply continues to inflate exponentially). For example, there will never be more than 21 million bitcoins. Since cryptocurrencies will not lose their value through exponential inflation, people will gain more confidence in cryptocurrencies than they have in fiat currencies. Furthermore, cryptocurrencies have the advantage that they are not connected to any one government and are supposed to be decentralized.

Hopefully, though, the world will transition smoothly from fiat currencies to digital currencies. Cryptocurrencies are still very new and quite experimental, but they have the potential to disrupt the global economy. There are currently many problems and questions about cryptocurrencies which people need to solve. One of the main issues is that the proof-of-work problems for cryptocurrencies cost a lot of energy and resources while producing nothing of value other than securing the cryptocurrency. If the proof-of-work problems instead produced something of value beyond security, the public image of cryptocurrencies would improve, and as a consequence, governments and other organizations would be less willing to attack or hinder cryptocurrencies. In this post, I will make another attempt at producing useful proof-of-work problems for cryptocurrencies.

In my previous post on cryptocurrencies, I suggested that one could achieve both security and useful proofs-of-work by employing many different kinds of problems in the proof-of-work scheme instead of a single kind of problem. However, while I think such a proposal is possible, it will be difficult for the cryptocurrency community to accept and implement it for a couple of reasons. First of all, in order for my proposal to work, one needs to find many different kinds of proof-of-work problems which are suitable for securing cryptocurrency blockchains (this is not an easy task). Second of all, even after all the proof-of-work problems are selected, the implementation of the proposal will be quite complex, since one will have to produce protocols for removing broken problems along with a system that can work with all the different kinds of problems. Let me therefore propose a type of proof-of-work problem which satisfies all of the nice properties that hash-based proof-of-work problems satisfy but which will spur the development of reversible computers.

I am seriously considering creating a cryptocurrency whose proof-of-work problem will spur the development of reversible computers, and this post should be considered a preliminary outline of how such a proof-of-work currency would work. The next time I post about reversible computers spurred by cryptocurrencies, I will likely announce my new cryptocurrency, a whitepaper, and other pertinent information.

What are reversible computers?

Reversible computers are theoretical super-efficient classical computers which use very little energy because they can, in some sense, recycle the energy used in a computation. As analogies: with regenerative braking, an electric motor recovers a vehicle's kinetic energy by running in reverse, and when people recycle aluminum cans, the aluminum is used to make new cans. In a similar manner, a reversible computer can theoretically regenerate the energy used in a computation by running the computation in reverse.

A reversible computer is a computer whose logic gates are all bijective and hence have the same number of input bits as output bits. For example, the AND and OR gates are irreversible since the Boolean operations AND and OR are not bijective. The NOT gate is reversible since the NOT function is bijective. The Toffoli and Fredkin gates are the functions $T$ and $F$, respectively, defined by
$$T:\{0,1\}^{3}\rightarrow\{0,1\}^{3},T(x,y,z)=(x,y,(x\wedge y)\oplus z)$$
and
$$F:\{0,1\}^{3}\rightarrow\{0,1\}^{3},F(0,y,z)=(0,y,z),F(1,y,z)=(1,z,y).$$
The Toffoli gate and the Fredkin gate are both universal reversible gates in the sense that either gate alone can simulate any Boolean circuit (given constant inputs and ancilla bits).
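Both gates are small enough to check by brute force: each maps 3 bits to 3 bits, each is a bijection on $\{0,1\}^{3}$, and in fact each is its own inverse. A quick sketch:

```python
from itertools import product

def toffoli(x, y, z):
    # T(x, y, z) = (x, y, (x AND y) XOR z)
    return (x, y, (x & y) ^ z)

def fredkin(x, y, z):
    # Controlled swap: F(0, y, z) = (0, y, z) and F(1, y, z) = (1, z, y)
    return (x, z, y) if x else (x, y, z)

def is_bijective(gate):
    # A 3-bit gate is reversible iff its 8 outputs are all distinct.
    return len({gate(*bits) for bits in product((0, 1), repeat=3)}) == 8
```

Fixing the last input of the Toffoli gate to 0 computes AND on the third wire (`toffoli(x, y, 0)[2] == x & y`), which is one way to see how ancilla bits let reversible gates simulate irreversible ones.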

While reversible computation is a special case of classical computation, all forms of classical computation can be somewhat efficiently simulated by reversible circuits. Furthermore, when reversible computers are finally constructed, one should be able to employ a mix of reversibility and irreversibility to optimize efficiency in a partially reversible circuit.

Reversible computation is an improvement over conventional computation since reversible computation is potentially many times more efficient. Landauer's principle states that erasing a bit always costs at least $k\cdot T\cdot\ln(2)$ energy, where $T$ is the temperature and $k$ is the Boltzmann constant. Here the Boltzmann constant is $1.38064852\cdot 10^{-23}$ joules per kelvin. At the room temperature of 300 K, Landauer's principle requires $2.8\cdot 10^{-21}$ joules for every bit erased. The efficiency of irreversible computation is limited by Landauer's principle since irreversible computation requires one to erase many bits. On the other hand, Landauer's principle imposes no such limit on reversible computation, since reversible computation does not require anyone to erase any bits of information. Since reversible computers are potentially more efficient than conventional computers, reversible computers would not generate as much heat and hence could potentially run at much faster speeds than ordinary classical computers.
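A quick back-of-the-envelope check of these numbers (using the exact SI value of the Boltzmann constant fixed in 2019, which agrees with the value quoted above to the precision that matters here):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin (exact SI value)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules dissipated per erased bit: k * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) this comes out to about 2.87e-21 joules per bit,
# matching the 2.8e-21 figure in the text.
```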

While the hardware for reversible computation has not been developed, we currently have software that could run on these reversible computers. For example, the programming language Janus is a reversible programming language. One can therefore produce, test, and run a great deal of reversible software even though super-efficient reversible computers do not yet exist.

The proof-of-work problem

Let me now give a description of the proof-of-work problem. Suppose that

$((a_{i,1,1},a_{i,1,2},a_{i,1,3}),\ldots,(a_{i,85,1},a_{i,85,2},a_{i,85,3}))$ is a randomly generated partition of $\{1,\ldots,256\}\setminus\{a_{i}\}$ for $1\leq i\leq 128$,

For $i\in\{1,\ldots,128\},j\in\{1,\ldots,85\}$, $f_{i,j}:\{0,1\}^{256}\rightarrow\{0,1\}^{256}$ is the mapping where
$f_{i,j}((x_{n})_{n=1}^{256})=(y_{n})_{n=1}^{256}$ if and only if $x_{n}=y_{n}$ whenever $n\neq a_{i,j,3}$ and
$$y_{a_{i,j,3}}=T_{i,j}(x_{a_{i,j,1}},x_{a_{i,j,2}},x_{a_{i,j,3}}),$$

$$f_{i}=f_{i,1}\circ\cdots\circ f_{i,85},$$

$\sigma:\{0,1\}^{256}\rightarrow\{0,1\}^{256}$ is the mapping where $\sigma((x_{n})_{n=1}^{256})=(y_{n})_{n=1}^{256}$ precisely when $x_{n}=y_{n}$ for $n>128$ and $y_{n}=x_{n}+x_{n+128}$ whenever $n\leq 128$,

$\mu:\{0,1\}^{256}\rightarrow\{0,1\}^{256}$ is the mapping where $\mu((x_{n})_{n=1}^{256})=(y_{n})_{n=1}^{256}$ precisely when $x_{n}=y_{n}$ for $n>128$ and $y_{n}=x_{n}+x_{257-n}$ whenever $n\leq 128$,

$\tau:\{0,1\}^{256}\rightarrow\{0,1\}^{256}$ is the mapping where $\tau((x_{n})_{n=1}^{256})=(y_{n})_{n=1}^{256}$ precisely when
$x_{n}=y_{n}$ for $n>64$ and $y_{n}=x_{n}+x_{n+64}$ whenever $n\leq 64$,

The function $f$ can be computed by a reversible circuit, which we shall denote by $C$, with 132 different layers. The 128 layers in the circuit $C$ which are used to compute the functions $f_{1},\ldots,f_{128}$ shall be called rounds. The circuit $C$ consists of 10880 Toffoli-like gates along with 576 CNOT gates (the circuit $C$ has a total of 11456 gates).

Since the randomizing function $f$ is randomly generated, one will be able to replace the function $f$ with a new randomly generated function $f$ periodically if one wants to.

Proof-of-work problem:

Let $\alpha$ be an adjustable 256 bit number. Let $k$ be a 128 bit hash of the current block in the blockchain. Then the proof-of-work problem is to find a 128 bit nonce $\mathbf{x}$ such that $f(k\#\mathbf{x})\leq\alpha$, where $\#$ denotes concatenation. The number $\alpha$ will be periodically adjusted so that the expected amount of time it takes to obtain a new block in the blockchain remains constant.
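As a toy illustration of the search, here is the nonce loop with SHA-256 standing in for the randomizing permutation $f$; this stand-in is purely an assumption for runnability (the actual scheme uses the reversible circuit $C$, not a hash, and the point of the post is precisely that a hash is the wrong choice):

```python
import hashlib

def pow_search(block_hash: bytes, target: int, max_nonce: int = 1 << 20):
    """Search for a 128-bit nonce x with f(k # x) <= target, where
    SHA-256 is a stand-in for the randomizing permutation f."""
    for nonce in range(max_nonce):
        x = nonce.to_bytes(16, "big")                        # 128-bit nonce
        value = int.from_bytes(
            hashlib.sha256(block_hash + x).digest(), "big")  # f(k # x)
        if value <= target:
            return nonce, value
    return None                                              # search exhausted

def verify(block_hash: bytes, nonce: int, target: int) -> bool:
    """Verification is one evaluation of f, far cheaper than the search."""
    x = nonce.to_bytes(16, "big")
    return int.from_bytes(hashlib.sha256(block_hash + x).digest(), "big") <= target
```

Lowering `target` makes solutions rarer, which is how the difficulty parameter $\alpha$ keeps the expected block time roughly constant.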

Efficiency considerations

It is very easy to check whether a solution to our proof-of-work problem is correct, but it is difficult to obtain a correct solution in the first place. Furthermore, the randomizing function $f$ in our proof-of-work problem can be computed just as easily on a reversible computer as on an irreversible computer. If reversible gates are only slightly more efficient than irreversible gates, then using a reversible computer to solve a proof-of-work problem will only be profitable if the randomizing function is specifically designed to be computed by a reversible circuit with no ancilla or garbage bits and which cannot be computed any faster by using an irreversible circuit instead. Standard cryptographic hash functions are not built to be run on reversible computers since they require one either to uncompute or to stockpile garbage bits that will have to be erased anyway.

Security considerations

The security of the randomizing permutation $f$ described above has not yet been thoroughly analyzed. I do not know of any established cryptographic randomizing permutation $f:\{0,1\}^{n}\rightarrow\{0,1\}^{n}$ that is written simply as a composition of reversible gates. I therefore had to construct a cryptographic randomizing permutation specifically as a proof-of-work problem for cryptocurrencies.

There are several reasons why I have chosen to construct the function $f$ mostly randomly instead of intentionally designing each gate. The circuit $C$ has depth 132, which is quite low. Perhaps I could slightly increase the security or the efficiency of the proof-of-work problem by designing each particular gate without any randomness, but I do not believe that I could increase the security or efficiency by much. If the function $f$ is constructed once using random or pseudorandom information, then one can set up a system to automatically replace the function $f$ periodically with a new function generated in the same manner. A randomly constructed function $f$ may even provide better security than a function constructed without randomness, since a randomly constructed function appears harder to analyze. Another reason why I chose a randomly constructed circuit is that such a circuit has a much simpler design than a function such as the SHA-256 hash.

In the worst case, where the randomizing function is suspected to be insecure, one would have to increase the number of rounds in the randomizing function $f$ from 128 to a larger number such as 192 or 256 so that the randomizing function becomes secure. (It is better if 128 rounds is suspected to be insecure before the cryptocurrency is launched, since increasing the number of rounds from 128 to 192 afterwards would require a hard fork, which would annoy users and miners and devalue the currency.)

The security requirement for the randomizing permutation $f$ as a proof-of-work problem is weaker than the security requirement for a randomizing permutation used to construct a cryptographic hash function or a symmetric encryption-decryption system. A cryptographic hash function or symmetric cipher must remain secure until the end of time, so an adversary may potentially use an unlimited amount of resources to attack an individual instance of such a function. Furthermore, in the case of symmetric cryptography, an adversary will have access to a large quantity of data encrypted with a particular key. However, the current blockchain data $k$ changes every few seconds, so it will be quite difficult to break the proof-of-work problem.

I believe that the large scale quantum computer will by far be the greatest technological advancement that will ever happen. When people invent the large scale quantum computer, I will have to say "we have made it." Perhaps the greatest motivation for constructing reversible computers is that reversible computers will facilitate the construction of large scale quantum computers. There are many similarities between quantum computers and reversible computers. The unitary operations that make up the quantum logic gates are reversible operations, and a large portion of every quantum algorithm consists of just reversible computation. Both quantum computation and reversible computation must be adiabatic (no heat goes in or out) and isentropic (entropy remains constant). However, quantum computers are more complex and more difficult to construct than reversible computers, since quantum gates are unitary operators and the quantum states inside quantum computers must remain in a globally coherent superposition. Since reversible computers will be easier to construct than large scale quantum computers, a more modest goal is to focus first on constructing reversible computers. The technology and knowledge used to construct super-efficient reversible computers will then likely be used to construct quantum computers.

An anti-scientific attitude among the mathematics community

I have seen much unwarranted skepticism against Landauer's principle and against the possibility that reversible computation can be more efficient than classical irreversible computation could ever be. The only thing I have to say to you is that you are wrong, you are a science denier, you are a troll, and you are just like the old Catholic church which persecuted Galileo.

At this link you can create 2D heat maps and 3D topographical surfaces from the endomorphic Laver tables. Here is what the pictures which you could obtain from the endomorphic Laver tables look like.

I was able to obtain pictures from the endomorphic Laver tables simply by giving the coordinate $(i,j)$ the temperature and elevation
$t^{\sharp}(\mathfrak{l}_{1},\mathfrak{l}_{2},\mathfrak{l}_{3})(\mathbf{0}^{i}\mathbf{1}^{j})$. Obviously, the images that you can produce here show only a small portion of the functional endomorphic Laver tables, and the functional endomorphic Laver table operations are too complicated to be represented completely in a visual manner.

I have some great news. For the first time, the total market cap for all cryptocurrencies, as of 5/23/2017, is at \$80,000,000,000 (a month ago it was at \$30,000,000,000, but who knows what it will be in a month or two). Furthermore, as of 5/17/2017, for the first time cryptocurrencies other than Bitcoin make up more than 50 percent of the total market cap for all cryptocurrencies. These historic events should not be taken lightly. I predict that in the future, cryptocurrencies will for the most part replace fiat currencies. The rise of cryptocurrencies will undoubtedly change the world in many important ways. For example, some have predicted that cryptocurrencies will end all war (it will be hard to fund wars with cryptocurrencies)! To learn more about cryptocurrencies, you can read the original paper on the first cryptocurrency, Bitcoin.

A proof-of-work problem is a computational problem that one must solve in order to produce new coins in a cryptocurrency and to maintain the security of the cryptocurrency. For example, in Bitcoin, the proof-of-work problem consists of finding suitable strings which produce exceptionally low SHA-256 hashes (the hashes must be low enough that only one person produces such a hash every 10 minutes or so). In this post, I am going to outline how employing many different kinds of more mathematical proof-of-work problems will provide much better security for cryptocurrencies than employing only one type of proof-of-work problem, and how some of the issues that arise with useful proof-of-work problems can be resolved.

I believe that hash problems are popular proof-of-work problems for cryptocurrencies not because they are the best but simply because they are simple, and because hash-based proof-of-work problems served well in the instances where a proof of work was needed before cryptocurrencies came into being. For example, hash-based proof-of-work problems are used to thwart denial-of-service attacks and spam, since the problems are not too difficult for a normal user to solve but are too difficult for an attacker to compute at scale. While hash-based proof-of-work problems are ideal for thwarting denial-of-service attacks and spammers, they are not ideal as proof-of-work schemes for cryptocurrencies.

Cryptocurrencies with many kinds of proof-of-work problems can satisfy very lax requirements compared to cryptocurrencies with only one type of proof-of-work problem.

The security of cryptocurrencies depends on the fact that no single entity can solve more than 50 percent of the proof-of-work problems. If a malicious entity solves more than 50 percent of the proof-of-work problems, then such an entity can undermine the security of the entire cryptocurrency (such an attack is called a 51 percent attack). If a cryptocurrency has only one kind of proof-of-work problem, then that proof-of-work problem must satisfy very strict requirements. But if the same cryptocurrency has many different kinds of proof-of-work problems, then those proof-of-work problems are only required to satisfy relatively mild conditions, and it is therefore feasible for many useful proof-of-work problems to satisfy those conditions (here temporarily assume that there is a good protocol for removing broken proof-of-work problems). Let me list those conditions right here.

The conditions colored orange are always absolutely essential for any proof-of-work problem on a cryptocurrency. The conditions colored green are essential when there is only one kind of proof-of-work problem, but these conditions in green are not necessary when there are many different kinds of proof-of-work problems for the cryptocurrency.

Verifiable but intractable

These problems need to be difficult to solve, but it must be easy to verify a solution.

Utility

The solution to the problem should have other uses than simply securing the cryptocurrency. The solution should have practical applications as well, or at the very least, the solutions should advance mathematics or science. Right now, the proof-of-work problem for Bitcoin amounts to simply finding data whose SHA-256 hash is exceptionally low.

Scalability

For Bitcoin, the proof-of-work problems need to be solved approximately every 10 minutes (with an exponential distribution), and in any cryptocurrency, the proof-of-work problem must consistently be solved in about the same amount of time. As time goes on, the proof-of-work problems need to get more difficult to accommodate faster computers, quicker algorithms, more people working on the problem, and more specialized computer chips solving the problem. If $X$ is the distribution of the amount of time it takes to solve the proof-of-work problem, then $X$ should have a consistent mean, and the distributions $X,1/X$ should have low variance (we do not want to solve the problem for one block in one second and solve the problem for the next block in one hour).
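A difficulty-retargeting rule of the kind this paragraph asks for can be simulated in a few lines. The window size, growth rate, and ratio-based update below are illustrative assumptions, not any particular cryptocurrency's specification; solve times for a progress-free problem are drawn from an exponential distribution whose mean scales with difficulty over hashrate.

```python
import random

def retarget(difficulty: float, block_times: list, target_time: float = 600.0) -> float:
    """Scale difficulty so the observed mean solve time moves back toward
    the target (a ratio rule similar in spirit to Bitcoin's retargeting)."""
    mean_time = sum(block_times) / len(block_times)
    return difficulty * target_time / mean_time

# Simulate progress-free miners: solve times are exponentially distributed
# with mean proportional to difficulty / hashrate.
random.seed(0)
difficulty, hashrate = 1.0, 1.0
for window in range(20):
    hashrate *= 1.3  # the network keeps getting faster
    times = [random.expovariate(hashrate / (600.0 * difficulty)) for _ in range(200)]
    difficulty = retarget(difficulty, times)

# After repeated retargeting, solve times hover near the 10-minute target
# even though the hashrate grew by a factor of 1.3**20.
final_mean = sum(random.expovariate(hashrate / (600.0 * difficulty))
                 for _ in range(2000)) / 2000
assert 400 < final_mean < 900
assert difficulty > 100  # difficulty rose to track the growing hashrate
```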

Efficient automatic generation

The proof-of-work problems need to be generated automatically based on the previous blocks in the blockchain. Cryptocurrencies are decentralized, so there cannot be a central agency which assigns each individual proof-of-work problem.

Progress-freeness

Progress-freeness for search problems means that the time at which the first solution is found follows an exponential distribution. Progress-freeness for problems whose objective is to optimize $f(x)$ means that for all $\alpha$, the time at which a solution with $f(x)\geq\alpha$ is first found follows an exponential distribution. In other words, solutions are found randomly, without regard to how long one has been working on the problem. Said differently, if Alice spends 30 minutes searching for a solution without finding one, then she has no advantage over someone who has not spent any time searching at all. If there is only one kind of proof-of-work problem for a cryptocurrency, and that proof-of-work problem is far from progress-free, then the entity with the fastest computer will always win, and such an entity could launch a 51 percent attack. As an extreme example of why progress-freeness is needed, suppose that a problem always takes 1000 steps to solve. If Alice has a computer that can do 500 steps a second and Bob has a computer that can do 1000 steps a second, then Bob will solve the problem in 1 second, Alice will solve it in 2 seconds, and Bob will always win. However, a lack of progress-freeness is no longer a problem if the cryptocurrency runs many proof-of-work problems simultaneously. For example, in the above scenario, even if Bob is able to solve a particular problem better than everyone else, Bob will have to spend much of his computational power on that particular problem, and he will not have enough resources left to solve the other proof-of-work problems, so he will be unable to launch a 51 percent attack.
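The contrast between a progress-free search and the fixed 1000-step example above can be simulated directly; the per-second success probabilities below are hypothetical numbers chosen only for illustration.

```python
import random

random.seed(1)

def progress_free_race(p_alice: float, p_bob: float) -> str:
    """Each second, each miner independently finds a solution with a fixed
    probability -- a memoryless, progress-free search. Return the winner."""
    while True:
        a = random.random() < p_alice
        b = random.random() < p_bob
        if a and not b:
            return "Alice"
        if b and not a:
            return "Bob"
        if a and b:
            return random.choice(["Alice", "Bob"])  # tie: coin flip

# Bob's machine is twice as fast, so his per-second success chance is doubled,
# yet in a progress-free search Alice still wins roughly one block in three.
alice_wins = sum(progress_free_race(0.001, 0.002) == "Alice" for _ in range(2000))
assert 500 < alice_wins < 850

# By contrast, on a fixed 1000-step problem, Bob (1000 steps/s) always
# finishes in 1 second while Alice (500 steps/s) needs 2 seconds:
alice_time, bob_time = 1000 / 500, 1000 / 1000
assert bob_time < alice_time  # Bob wins every single block
```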

Optimization-freeness

By optimization-freeness, I mean that people need to be confident that in the future no entity will be able to obtain a substantially faster algorithm than we have today. The motivation behind optimization-freeness is that if an entity has a secret algorithm which is much faster or better than all the other algorithms, then such an entity will be able to launch a 51 percent attack. However, if a cryptocurrency employs many different kinds of proof-of-work problems, then such an entity may be able to solve one particular problem better than everyone else, but that entity will still not be able to launch a 51 percent attack. In fact, if a cryptocurrency employs many different kinds of proof-of-work problems, then optimization-freeness will make the cryptocurrency less secure instead of more secure.

Pre-computation resistance

Pre-computation refers to when an entity solves a proof-of-work problem before the proof-of-work problem for a particular block is determined.

Unlimited problems

There needs to be an unlimited supply of proof-of-work problems to solve. This is only an issue if there is only one kind of proof-of-work problem for the cryptocurrency. If there are many kinds of proof-of-work problems in a cryptocurrency, then a particular type of proof-of-work problem can simply be removed by an automatic procedure or when all of the particular instances of the problem have been solved.

The problem-removal-protocol

So for a cryptocurrency which employs many different kinds of proof-of-work problems, the list of all problems which are currently being used in the cryptocurrency shall be called the problem schedule. If there are many different types of proof-of-work problems in the problem schedule, there is a good chance that a few of these proof-of-work problems will eventually be broken and should therefore be removed from the problem schedule. Since cryptocurrencies are meant to be as decentralized as possible, the process of removing broken proof-of-work problems from the problem schedule needs to be done automatically, without a change to the programming of the cryptocurrency. Let me therefore list three different ways that a proof-of-work problem could be automatically removed from the problem schedule.

Formal proof of break

The proof-of-work problem will be considered to be broken and removed from the problem schedule if an entity submits a formal proof that the proof-of-work problem can be solved in polynomial time. The entity who submits the formal proof will win a pre-determined number of blocks in the blockchain and hence new coins.

Heuristic algorithm for break

The proof-of-work problem will be considered to be broken and removed from the problem schedule if an entity named Alice submits an algorithm that breaks the proof-of-work problem. For search problems, the algorithm is simply expected to consistently find a fast and accurate solution even if the difficulty of the proof-of-work problem is increased to very high levels. For optimization problems, the breaking algorithm must quickly and consistently give solutions which are at least as good as solutions obtained using other algorithms. In the case of optimization problems broken by a heuristic algorithm, the protocol for removing a problem from the problem schedule is more involved. Alice will first need to openly submit her algorithm for solving the proof-of-work problem. Alice is then required to follow her own algorithm and obtain a solution which is at least as optimized as the solutions produced by all other entities (keep in mind that other entities may use Alice’s algorithm or attempt to improve upon it, and the optimization process will last long enough for them to try). If Alice satisfies these requirements, then the problem is removed from the problem schedule, Alice is given a pre-determined number of blocks in the blockchain, and Alice will be awarded plenty of coins.

Automatic removal of broken proof-of-work problems

For search problems, if the proof-of-work problem is repeatedly quickly solved (possibly with multiple solutions) even when the difficulty of the problem increases without bounds, then the proof-of-work problem will be automatically removed from the problem schedule. For optimization problems, if the best solution or best solutions are consistently obtained early within the timespan for solving the problem, then the proof-of-work problem will automatically be removed from the problem schedule and after the problem is removed, no entity will be awarded any blocks in the blockchain as a prize.

The automatic removal system should preferably be implemented in a way so that an entity with a secret algorithm can always provide the best solutions to the proof-of-work problem simply by producing solutions which always win but which fall below the threshold-of-removal. However, the automatic removal system should also be implemented so that if the secret algorithm that breaks the problem becomes publicly available, then the problem will easily be removed. In other words, an entity can cross the threshold-of-removal if he wants to, but such an entity can just as easily remain just below the threshold-of-removal in order to obtain many coins.

The benefits of having many kinds of proof-of-work problems

The most obvious benefit of our composite proof-of-work system with many kinds of proof-of-work problems is that it allows for a wide variety of problems, and hence for proof-of-work problems which have useful benefits besides securing the cryptocurrency.

Useful proof-of-work problems will greatly benefit cryptocurrencies

Useful proof-of-work problems will obviously provide benefits since the solution to those proof-of-work problems will have practical applications or at least advance mathematics. Useful proof-of-work problems will also provide a benefit to the cryptocurrencies themselves since useful proof-of-work problems will greatly help the public image of cryptocurrencies.

The public image of cryptocurrencies is one of the most important battles that advocates of cryptocurrencies must face. After all, cryptocurrencies directly threaten many of the powers that governments possess, and governments have the power to ban cryptocurrencies. Not only can governments ban cryptocurrencies, but powerful governments also have the power to undermine the security of cryptocurrencies by launching 51 percent attacks against them since governments have nearly unlimited resources. Cryptocurrencies therefore need to obtain and maintain a strong public image in order to avoid governmental backlash as much as possible.

Another reason cryptocurrencies need to obtain a strong public image is that cryptocurrencies are still new and many people do not consider cryptocurrencies to be legitimate currencies (those dinosaurs are wrong). People therefore need to come to accept cryptocurrencies as legitimate currency.

Today, many people view the process of mining cryptocurrencies and maintaining the blockchain for a cryptocurrency as being incredibly wasteful and bad for the environment, and these criticisms against cryptocurrencies are justified. Cryptocurrency advocates would respond by saying that cryptocurrency mining is useful for securing and decentralizing the blockchain and hence not wasteful, but in any case, it is much better and much more efficient for these proof-of-work problems to have applications besides securing the blockchain (many cryptocurrencies have attempted to do away with proof-of-work problems since proof-of-work problems are seen as wasteful). I personally will not give cryptocurrencies my full endorsement until they are used to solve useful problems with mathematical, scientific, or practical merit.

People currently receive about 4 million dollars worth of bitcoins every day from mining. Unfortunately, in order to mine these bitcoins, people need to use a great deal of energy to power the computers that make these calculations. Imagine how much cryptocurrencies could advance mathematics and science if the proof-of-work problems were selected with scientific advancement in mind. Imagine how much these advancements to science could boost the public image of cryptocurrencies.

A multitude of proof-of-work problems will increase the security of cryptocurrencies

It is less obvious that our composite proof-of-work system also strengthens the security of the cryptocurrency against 51 percent attacks. It is much more straightforward for an entity to launch a 51 percent attack against a cryptocurrency with only one type of proof-of-work problem than against a cryptocurrency with many types of proof-of-work problems. After all, to launch a 51 percent attack against a single hash-based proof-of-work problem, one must simply spend a lot of money producing ASICs for that problem, and at any time the attacker may even break the problem completely. On the other hand, to launch a 51 percent attack against many proof-of-work problems, one must obtain many different types of ASICs in order to solve many different kinds of problems. Furthermore, an attacker may not be able to solve some of these problems at all, since the attacker may lack a secret algorithm or the specialized knowledge needed to solve them quickly.
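The arithmetic behind this argument can be made concrete with a toy model; the equal weighting of problem types below is an assumption of this sketch, not something the post specifies. If each of the $k$ problem types in the schedule produces an equal share of blocks, the attacker's overall block share is just the average of its per-problem shares.

```python
def attacker_block_share(shares):
    """Overall fraction of blocks an attacker wins when each of the k
    problem types in the schedule produces an equal share of blocks."""
    return sum(shares) / len(shares)

# Completely dominating one problem type out of twelve is nowhere near
# enough for a 51 percent attack:
assert attacker_block_share([0.95] + [0.05] * 11) < 0.51

# To control the chain, the attacker must dominate most problem types at once:
assert attacker_block_share([0.60] * 12) > 0.51
```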

How our system could be implemented on established cryptocurrencies

Cryptocurrencies are supposed to be decentralized currencies, so there should be as few changes to their protocols as possible. This poses a problem for cryptocurrencies which are attempting to switch from useless proof-of-work problems to useful proof-of-work problems. My solution is to make a change to the cryptocurrency protocol only once. In order to minimize backlash and other chaos, the changes stated in the cryptocurrency protocol will be implemented gradually. Let me now give an example of how the change from useless proof-of-work problems to useful proof-of-work problems could take place. Let $t$ be the time in years, with $t=0$ the moment the change of the proof-of-work protocol takes place. At $t=-4$, the cryptocurrency will begin selecting 144 different proof-of-work problems to be used in the cryptocurrency. At $t=-1$, all 144 of the proof-of-work problems will have been selected and the changes to the cryptocurrency protocol will be finalized. At $t=0$, the cryptocurrency protocol will be modified, but these modifications will take effect over a period of 6 years. Every day an average of 144 blocks will be generated. Every month from $t=0$ to $t=6$, two of the 144 proof-of-work problems will be selected at random and implemented into the problem schedule. At the $m$-th month after $t=0$, approximately $144-2m$ of each day's blocks will have a hash-based proof-of-work problem while the other $2m$ blocks will come from the proof-of-work problems which have already been implemented into the problem schedule. At $t=10$, a collection of new proof-of-work problems will be selected to replace the problems which have already been broken or which will be broken in the future.
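The monthly block mix in the example above is simple to tabulate. The helper below is a sketch using the numbers from the example (144 blocks per day, two of the 144 problems activated per month over the 72 months from $t=0$ to $t=6$, each activated problem producing one block per day).

```python
def daily_block_mix(month: int, blocks_per_day: int = 144) -> tuple:
    """Return (hash-based blocks, new-problem blocks) per day at month m of
    the transition: 2*m problems have been activated, and the rest of the
    144 daily blocks stay hash-based."""
    new = min(2 * month, blocks_per_day)
    return blocks_per_day - new, new

assert daily_block_mix(0) == (144, 0)    # the transition begins at t = 0
assert daily_block_mix(36) == (72, 72)   # halfway through, at t = 3 years
assert daily_block_mix(72) == (0, 144)   # fully switched over at t = 6 years
```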

At this page, you can now plot the graphs of the output of endomorphic Laver tables. As one would expect, the endomorphic Laver tables are much more complex than their corresponding classical Laver tables or multigenic Laver tables. Here is a graph of the outputs of an endomorphic Laver table operation.

In this post, I will give an analysis of the Ko-Lee key exchange for the functionally endomorphic Laver tables (the Ko-Lee key exchange for functional endomorphic Laver tables is simpler to analyze than the distributive version of the Anshel-Anshel-Goldfeld key exchange). In the next post, I will link a JavaScript program here on this site which will allow one to analyze the Ko-Lee key exchange for functional endomorphic Laver tables. We will leave an analysis of the more complex but probably more secure distributive version of the Anshel-Anshel-Goldfeld key exchange for functional endomorphic Laver tables for a later post.

Critical points

Suppose that $(X,*)$ is a Laver-like algebra. If $x,y\in X$, then define $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ if and only if $x^{n}*y\in\mathrm{Li}(X)$ for some $n$. We say that $\mathrm{crit}(x)=\mathrm{crit}(y)$ if $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ and $\mathrm{crit}(y)\leq\mathrm{crit}(x)$. We shall call $\mathrm{crit}(x)$ the critical point of the element $x$. Let $\mathrm{crit}[X]=\{\mathrm{crit}(x)|x\in X\}$. Then $\mathrm{crit}[X]$ is a linearly ordered set.

If $\alpha\in\mathrm{crit}[X]$, then there is some $r\in X$ with $\mathrm{crit}(r)=\alpha$ and $r*r\in\mathrm{Li}(X)$. Suppose now that $\alpha\in\mathrm{crit}[X],\mathrm{crit}(r)=\alpha,r*r\in\mathrm{Li}(X)$. Then define a congruence $\equiv^{\alpha}$ on $X$ by letting $x\equiv^{\alpha}y$ if and only if $r*x=r*y$. The congruence $\equiv^{\alpha}$ does not depend on the choice of $r$. The congruence $\equiv^{\alpha}$ is the smallest congruence on $X$ such that if $\mathrm{crit}(x)\geq\alpha$, then $y\in\mathrm{Li}(X)$ for some $y$ with $y\equiv^{\alpha}x$. If $x\in X$, then define the mapping $x^{\sharp}:\mathrm{crit}[X]\rightarrow\mathrm{crit}[X]$ by letting $x^{\sharp}(\mathrm{crit}(y))=\mathrm{crit}(x*y)$. The mapping $x^{\sharp}$ is monotone; if $\alpha\leq\beta$, then $x^{\sharp}(\alpha)\leq x^{\sharp}(\beta)$. Suppose that $(X,E,F)$ is a Laver-like endomorphic algebra. Then suppose that $\alpha\in\mathrm{crit}(\Gamma(X,E))$. Then define the congruence $\equiv^{\alpha}$ on $(X,E,F)$ by letting $x\equiv^{\alpha}y$ if and only if $f(x_{1},…,x_{n_{f}},x)=f(x_{1},…,x_{n_{f}},y)$ where $x_{1},…,x_{n_{f}}\in X$ and $\mathrm{crit}(f,x_{1},…,x_{n_{f}})=\alpha$ and $(f,x_{1},…,x_{n_{f}})*(f,x_{1},…,x_{n_{f}})\in\mathrm{Li}(\Gamma(X,E))$.

In the Laver-like LD-systems which we have looked at which are suitable for cryptographic purposes, $\mathrm{crit}[X]$ tends to be rather small (for cryptographic purposes, $X$ should have at least about 9 critical points, but the largest classical Laver table ever computed $A_{48}$ has 49 critical points).

$\mathbf{Proposition}:$

If $y\equiv^{\alpha}z$, then $x*y\equiv^{x^{\sharp}(\alpha)}x*z$ and $x\circ y\equiv^{x^{\sharp}(\alpha)}x\circ z$.

Suppose that $X$ is a Laver-like LD-system and $x\in X$. Let $\alpha$ be the least critical point such that $x^{\sharp}(\alpha)=\max(\mathrm{crit}[X])$. Then if $y\equiv^{\alpha}z$, then $x*y=x*z$ and $x\circ y=x\circ z.$

Therefore if $\mathbf{y}\in X/\equiv^{\alpha}$ is the equivalence class of $y\in X$, then define $x*\mathbf{y}$ to be the equivalence class of $x*y$ in the quotient algebra $X/\equiv^{x^{\sharp}(\alpha)}$ and define $x\circ\mathbf{y}$ to be the equivalence class of $x\circ y$ in $X/\equiv^{x^{\sharp}(\alpha)}$.

$\mathbf{Proposition}$ Suppose that $(X,t^{\bullet})$ is an $n+1$-ary Laver-like LD-system. Suppose that $\mathfrak{u}_{1},…,\mathfrak{u}_{n},\mathfrak{v}_{1},…,\mathfrak{v}_{n}\in\Diamond(X,t^{\bullet})$. Then $\mathrm{crit}(\mathfrak{u}_{1},…,\mathfrak{u}_{n})\leq\mathrm{crit}(\mathfrak{v}_{1},…,\mathfrak{v}_{n})$ in $\Gamma(\Diamond(X,t^{\bullet}))$ if and only if $\mathrm{crit}(\mathfrak{u}_{1}(\varepsilon),…,\mathfrak{u}_{n}(\varepsilon))\leq\mathrm{crit}(\mathfrak{v}_{1}(\varepsilon),…,\mathfrak{v}_{n}(\varepsilon))$.

We may therefore define $\mathrm{crit}(\mathfrak{u}_{1},…,\mathfrak{u}_{n})=\mathrm{crit}(\mathfrak{u}_{1}(\varepsilon),…,\mathfrak{u}_{n}(\varepsilon))$.

$\mathbf{Proposition}$ Suppose that $(X,t^{\bullet})$ is an $n+1$-ary Laver-like LD-system and $\alpha\in\mathrm{crit}(\Gamma(X,t^{\bullet}))=\mathrm{crit}(\Gamma(\Diamond(X,t^{\bullet})))$. Then $(\Diamond(X,t^{\bullet}))/\equiv^{\alpha}\simeq\Diamond((X,t^{\bullet})/\equiv^{\alpha})$. In particular, define a mapping $\phi:\Diamond(X,t^{\bullet})\rightarrow\Diamond((X,t^{\bullet})/\equiv^{\alpha})$ as follows. Let $\mathfrak{l}\in\Diamond(X,t^{\bullet})$. Let $\mathbf{BAD}_{\mathfrak{l}}$ be the set of all strings $i\mathbf{x}\in\{1,…,n\}^{*}$ with $\mathrm{crit}(1\mathbf{x},…,n\mathbf{x})\geq\alpha$. Let $\mathbf{GOOD}_{\mathfrak{l}}$ be the set of all strings in $\{1,…,n\}^{*}$ which are not in $\mathbf{BAD}_{\mathfrak{l}}$ nor have suffixes in $\mathbf{BAD}_{\mathfrak{l}}$. Then $\phi(\mathfrak{l})(\mathbf{x})=\mathfrak{l}(\mathbf{x})/\equiv^{\alpha}$ whenever $\mathbf{x}\in\mathbf{GOOD}_{\mathfrak{l}}$ and $\phi(\mathfrak{l})(\mathbf{x})=\#$ otherwise. Then $\phi$ is a homomorphism whose kernel is $\equiv^{\alpha}$.

Analysis of the Ko-Lee key exchange for Laver-like LD-systems

The security of the Ko-Lee key exchange for Laver-like LD-systems depends on the difficulty of the following problem.

Problem A for $(a,x,b)$ in $X$: Suppose that $X$ is a Laver-like algebra. Suppose that $a,x,b\in X,r=a\circ x,s=x\circ b$. Let $\alpha$ be the least ordinal such that $a^{\sharp}(\alpha)=\max(\mathrm{crit}[X])$ and let $\beta$ be the least ordinal such that $r^{\sharp}(\beta)=\max(\mathrm{crit}[X])$.

Note: In the above problem, we assume that $\alpha$ is known, since $X$ usually has a limited number of critical points, so $\alpha$ can either be solved for or there are only a limited number of possibilities for $\alpha$.

Note that Problem A is asymmetric in the roles of $a$ and $b$. In particular, Problem A-2 seems to be an easier problem than Problem A-1 since $X/\equiv^{\beta}$ is simpler than $X$.

Constructing endomorphic Laver tables

Suppose that $(X,*)$ is a Laver-like LD-system. If $t:X^{n}\rightarrow X$ is a mapping that satisfies the identity $x*t(x_{1},…,x_{n})=t(x*x_{1},…,x*x_{n})$, then define an operation $T:X^{n+1}\rightarrow X$ by letting $T(x_{1},…,x_{n},x)=t(x_{1},…,x_{n})*x$. Then the algebra $(X,T)$ is a Laver-like endomorphic algebra.

$\mathbf{Proposition:}$ Suppose that $(X,*)$ is a Laver-like LD-system and $t:X^{n}\rightarrow X$ is a mapping that satisfies the identity $x*t(x_{1},…,x_{n})=t(x*x_{1},…,x*x_{n})$. Then define $T:X^{n+1}\rightarrow X$ by letting $T(x_{1},…,x_{n},x)=t(x_{1},…,x_{n})*x$. Then
$\mathrm{crit}(T,x_{1},…,x_{n})\leq\mathrm{crit}(T,y_{1},…,y_{n})$ in $\Gamma(X,T)$ if and only if
$\mathrm{crit}(t(x_{1},…,x_{n}))\leq\mathrm{crit}(t(y_{1},…,y_{n}))$ in $(X,*)$.

By the above Proposition, we will hold to the convention that $\mathrm{crit}(T,x_{1},…,x_{n})=\mathrm{crit}(t(x_{1},…,x_{n}))$.

If $x>0$, then define $(x)_{r}$ to be the unique integer with $1\leq(x)_{r}\leq r$ and where $(x)_{r}=x\mod\,r$. If $n\geq m$, then define a mapping $\pi_{n,m}:A_{n}\rightarrow A_{m}$ by letting $\pi_{n,m}(x)=(x)_{2^{m}}$. Define a linear ordering $\leq^{L}_{n}$ on the classical Laver table $A_{n}$ by induction on $n$ by letting $x

Suppose that $\mathbf{S}$ is a function such that

if $n$ is a natural number and $x_{1},…,x_{r}\in A_{n}$, then $\mathbf{S}(n;x_{1},…,x_{r})\in\{0,1\}$, and

if $x_{1},…,x_{r}\in A_{m},y_{1},…,y_{r}\in A_{n}$ and there is an isomorphism $\iota:\langle x_{1},…,x_{r}\rangle\rightarrow\langle y_{1},…,y_{r}\rangle$ with $\iota(x_{1})=y_{1},…,\iota(x_{r})=y_{r}$ and which preserves $\leq^{L}$, then $\mathbf{S}(m;x_{1},…,x_{r})=\mathbf{S}(n;y_{1},…,y_{r})$.

Then define a mapping $\mathbf{S}_{n}:A_{n}^{r}\rightarrow A_{n}$ for all $n$ by induction on $n$. Suppose that $\mathbf{S}_{n}$ has been defined already and suppose that $x_{1},…,x_{r}\in A_{n+1}$. Then let $v=\mathbf{S}_{n}(\pi_{n+1,n}(x_{1}),…,\pi_{n+1,n}(x_{r}))$. If $|\{v,v+2^{n}\}\cap\langle x_{1},…,x_{r}\rangle|=1$, then let $\mathbf{S}_{n+1}(x_{1},…,x_{r})$ be the unique element in $\{v,v+2^{n}\}\cap\langle x_{1},…,x_{r}\rangle$. If $|\{v,v+2^{n}\}\cap\langle x_{1},…,x_{r}\rangle|=2$, then let $\mathbf{S}_{n+1}(x_{1},…,x_{r})=v+2^{n}\cdot\mathbf{S}(n;x_{1},…,x_{r})$.

Then for all $n$, the operation $\mathbf{S}_{n}$ satisfies the identity $x*_{n}\mathbf{S}_{n}(x_{1},…,x_{r})=\mathbf{S}_{n}(x*_{n}x_{1},…,x*_{n}x_{r})$.

For our analysis of the Ko-Lee key exchange for functional endomorphic Laver tables, assume that $t:A_{n}^{2}\rightarrow A_{n}$ is a mapping that satisfies the identity $x*_{n}t(y,z)=t(x*_{n}y,x*_{n}z)$ and $T^{\bullet}:A_{n}^{3}\rightarrow A_{n}$ is defined by $T^{\bullet}(x,y,z)=t(x,y)*z$.

In the classical Laver table $A_{n}$, we have $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ if and only if $\gcd(x,2^{n})\leq\gcd(y,2^{n})$. Therefore, for the classical Laver table $A_{n}$, we define $\mathrm{crit}(x)=\log_{2}(\gcd(x,2^{n}))$. In the classical Laver table $A_{n}$, we have $x\equiv^{m}y$ if and only if $x=y\mod 2^{m}$. If $x\in A_{n}$, then let $o_{n}(x)$ be the least natural number $m$ such that $x*_{n}2^{m}=2^{n}$. In other words, $o_{n}(x)$ is the least critical point $\alpha\in\mathrm{crit}[A_{n}]$ where in $A_{n}$, we have $x^{\sharp}(\alpha)=\max(\mathrm{crit}[A_{n}])$.
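To experiment with these quantities, one can compute the classical Laver table $A_{n}$ from the standard recursion $x*_{n}1=x+1\bmod 2^{n}$ and $x*_{n}(y+1)=(x*_{n}y)*_{n}(x+1)$, filling rows in decreasing order of $x$, and then read off $\mathrm{crit}(x)$ and $o_{n}(x)$ directly. This is a minimal sketch, not the author's code, and it is only practical for small $n$.

```python
from math import gcd, log2

def laver_table(n: int):
    """Compute the classical Laver table A_n as a 1-indexed (2^n) x (2^n)
    table, using x * 1 = x + 1 mod 2^n and x * (y+1) = (x * y) * (x + 1)."""
    N = 1 << n
    T = [[0] * (N + 1) for _ in range(N + 1)]
    for y in range(1, N + 1):
        T[N][y] = y                      # the last row is the identity
    for x in range(N - 1, 0, -1):        # fill rows from the bottom up
        T[x][1] = x + 1
        for y in range(1, N):
            T[x][y + 1] = T[T[x][y]][x + 1]  # row T[x][y] > x is already done
    return T

def crit(x: int, n: int) -> int:
    """crit(x) = log2(gcd(x, 2^n)) in A_n."""
    return int(log2(gcd(x, 1 << n)))

def o(x: int, n: int, T=None) -> int:
    """o_n(x): the least m with x *_n 2^m = 2^n."""
    T = T or laver_table(n)
    N = 1 << n
    return next(m for m in range(n + 1) if T[x][1 << m] == N)

T4 = laver_table(4)
assert T4[1][1:9] == [2, 12, 14, 16, 2, 12, 14, 16]  # row 1 of A_4, period 4
assert o(1, 4, T4) == 2                              # 1 *_4 4 = 16
assert crit(12, 4) == 2                              # gcd(12, 16) = 4
```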

Let $\mathfrak{u}_{1},\mathfrak{u}_{2},\mathfrak{l}_{1},\mathfrak{l}_{2},\mathfrak{v}_{1},\mathfrak{v}_{2}\in\Diamond(X,T^{\bullet})$. Then in order for problem A for $(\mathfrak{u}_{1},\mathfrak{u}_{2}),(\mathfrak{l}_{1},\mathfrak{l}_{2}),(\mathfrak{v}_{1},\mathfrak{v}_{2})$ to be difficult, one needs $o_{n}(t(\mathfrak{u}_{1}(\varepsilon),\mathfrak{u}_{2}(\varepsilon))\circ_{n}t(\mathfrak{l}_{1}(\varepsilon),\mathfrak{l}_{2}(\varepsilon)))$ to be as large as possible. We therefore recommend that $o_{n}(t(\mathfrak{u}_{1}(\varepsilon),\mathfrak{u}_{2}(\varepsilon))\circ_{n}t(\mathfrak{l}_{1}(\varepsilon),\mathfrak{l}_{2}(\varepsilon)))\geq 4$ for the functional endomorphic Laver table based Ko-Lee key exchange to be secure. Unfortunately, $o_{n}(x\circ y)$ is usually very small (at most 4) except for highly specialized values of $x,y$. If $x,y\in A_{n}$, then $o_{n}(x\circ_{n}y)\leq\min(o_{n}(x),o_{n}(y))$, so $o_{n}(x\circ_{n}y)$ is typically very small.

Computation shows that $o_{n}(x)$ is usually small (usually 2 or 4).

Increasing $n$ past $10$ in $A_{n}$ will probably not increase the security of functional endomorphic Laver table based cryptosystems.

While $o_{n+1}(x)\in\{o_{n}(x),o_{n}(x)+1\}$ and $o_{n+1}(x+2^{n})=o_{n}(x)$, the computed data indicates that for $n\geq 10$, increasing $n$ has little effect on $o_{n}(x)$, since it is very rare for $o_{n+1}(x)=o_{n}(x)+1$ when $n$ is large.

While one can show that $o_{n}(1)\rightarrow\infty$ using large cardinals, there is no known proof in ZFC that $o_{n}(1)\rightarrow\infty$ and it is known that if $o_{n}(1)\rightarrow\infty$, then $o_{n}(1)$ must diverge very slowly.

Increasing $n$ past about 10 or so does not seem to increase the security of the functional endomorphic Laver table based cryptosystems. If $n$ is much larger than $10$, and if $f$ is a $k$-ary term in $\Diamond(X,T^{\bullet})$ and $\mathfrak{l}_{1},…,\mathfrak{l}_{k}\in\Diamond(X,T^{\bullet})$, then $f(\mathfrak{l}_{1},…,\mathfrak{l}_{k})(\mathbf{x})$ is either very close to $2^{n}$, very close to $1$, or very close to $\mathfrak{l}_{i}(\mathbf{y})$ for some string $\mathbf{y}$. Said differently, increasing the size of $n$ past about 10 or so does not seem to make problem A (or any other cryptographic problem) much harder to solve.

In my research on the generalizations of Laver tables, I found that the endomorphic Laver tables could be used as platforms for several cryptosystems. I am currently the only one looking at endomorphic Laver table based cryptosystems, and it is unclear whether endomorphic Laver table based cryptosystems are secure or not. I am therefore writing this post to invite researchers to evaluate the security of endomorphic Laver table based cryptosystems and to give them some basic information about such cryptosystems. While these posts will give some information that will allow a basic cryptanalysis of endomorphic Laver table based cryptosystems, one would definitely need to read my paper on generalizations of Laver tables for a thorough understanding of the algebras involved in these cryptosystems. One should also read the chapter Elementary Embeddings and Algebra in the Handbook of Set Theory (Chapter 11) and Chapters 10-13 in Dehornoy’s book Braids and Self-Distributivity for an outline of the development of the classical Laver tables before I started my work in 2015.

If you are interested in evaluating the security of endomorphic Laver table based cryptosystems, then please post a comment to this post or send me an e-mail or otherwise contact me.

As a general principle, the non-abelian group based cryptosystems extend to self-distributive algebra based cryptosystems. In particular, the Ko-Lee key exchange, Anshel-Anshel-Goldfeld key exchange, and conjugacy based authentication system, which are some of the most important non-abelian group based cryptosystems, extend from group based cryptosystems to self-distributive algebra based cryptosystems, and the endomorphic Laver tables seem to be good platforms for these self-distributive algebra based cryptosystems.

It seems like endomorphic Laver table based cryptosystems would remain secure even against adversaries who have access to quantum computers. I do not believe that quantum computers pose any more of a threat to these cryptosystems than classical computers do, since none of the techniques for constructing quantum algorithms (besides Grover’s algorithm, which works in general) seem applicable to anything remotely related to endomorphic Laver tables. Unless there is a fairly obvious attack, I do not foresee that we will be able to mathematically prove that there is a polynomial time classical or quantum algorithm that breaks these cryptosystems. Nevertheless, I am concerned that these cryptosystems may be susceptible to heuristic attacks.

I have only been able to efficiently compute the endomorphic Laver table operations for two months now (see my previous post for more information about the difficulties in computing the endomorphic Laver table operations and how they were overcome), so these algebras and cryptosystems are fairly unknown.

Endomorphic algebras

An LD-system is an algebra $(X,*)$ that satisfies the self-distributivity identity $x*(y*z)=(x*y)*(x*z)$. LD-systems arise naturally in set theory and in the theory of knots and braids. One can generalize the notion of self-distributivity to algebras with multiple operations, operations of higher arity, and algebras where only some of the fundamental operations are self-distributive.

Suppose that $(X,E,F)$ is an algebraic structure where each $f\in E$ has arity $n_{f}+1$. If $f\in E,a_{1},\dots,a_{n_{f}}\in X$, then define the mapping $L_{f,a_{1},\dots,a_{n_{f}}}$ by letting $L_{f,a_{1},\dots,a_{n_{f}}}(x)=f(a_{1},\dots,a_{n_{f}},x)$. Then we say that $(X,E,F)$ is a partially endomorphic algebra if the mapping $L_{f,a_{1},\dots,a_{n_{f}}}$ is an endomorphism of $(X,E,F)$ whenever $f\in E,a_{1},\dots,a_{n_{f}}\in X$. If $F=\emptyset$, then we shall call $(X,E,F)$ an endomorphic algebra (and we may write $(X,E)$ to denote the endomorphic algebra $(X,E,\emptyset)$). If $(X,t)$ is an endomorphic algebra and $t$ is an $n+1$-ary operation, then we shall call $(X,t)$ an $n+1$-ary self-distributive algebra. The partially endomorphic algebras are the universal algebraic analogues of the LD-systems.

If $(X,E)$ is an endomorphic algebra, then let
$$\Gamma(X,E)=(\bigcup_{f\in E}\{f\}\times X^{n_{f}},*)$$
where the operation $*$ is defined by
$$(f,x_{1},\dots,x_{n_{f}})*(g,y_{1},\dots,y_{n_{g}})=(g,f(x_{1},\dots,x_{n_{f}},y_{1}),\dots,f(x_{1},\dots,x_{n_{f}},y_{n_{g}})).$$
Then $\Gamma(X,E)$ is an LD-system.

Endomorphic algebra based cryptosystems

The Ko-Lee key exchange for semigroups

The following key exchange is a simplified version of the Ko-Lee key exchange for semigroups.

The simplified Ko-Lee key exchange: A semigroup $(X,\circ)$ along with an element $x\in X$ are public.

Alice selects an element $a\in X$ and sends $r=a\circ x$ to Bob.

Bob selects an element $b\in X$ and sends $s=x\circ b$ to Alice.

Let $K=a\circ x\circ b$.

Alice computes $K$ using the fact that $K=a\circ s$.

Bob computes $K$ using the fact that $K=r\circ b$.

An eavesdropper who observes $x$, $r=a\circ x$, and $s=x\circ b$ should be unable to feasibly compute $K$ as long as it is computationally difficult to recover a suitable $a$ or $b$ from these values. The full-fledged Ko-Lee key exchange would not work for Laver-like algebras since the associative operation on Laver-like algebras has very little commutativity.
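A minimal sketch of this exchange, using $2\times 2$ matrices over $\mathbb{Z}_{101}$ under multiplication as the public semigroup (my choice of platform, purely for illustration; a real instantiation would need a semigroup in which the relevant decomposition problem is hard):

```python
# Sketch of the simplified Ko-Lee key exchange on a noncommutative
# semigroup: 2x2 matrices over Z_101 under matrix multiplication.
import random
random.seed(42)

P = 101

def mul(A, B):
    # semigroup operation: 2x2 matrix product mod P
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a*e + b*g) % P, (a*f + b*h) % P),
            ((c*e + d*g) % P, (c*f + d*h) % P))

def rand_mat():
    return tuple(tuple(random.randrange(P) for _ in range(2)) for _ in range(2))

x = rand_mat()           # public element of the semigroup
a = rand_mat()           # Alice's secret
b = rand_mat()           # Bob's secret

r = mul(a, x)            # Alice sends r = a o x to Bob
s = mul(x, b)            # Bob sends s = x o b to Alice

K_alice = mul(a, s)      # a o (x o b)
K_bob   = mul(r, b)      # (a o x) o b
assert K_alice == K_bob  # both equal K = a o x o b by associativity
print("shared key:", K_alice)
```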

The Anshel-Anshel-Goldfeld key exchange for self-distributivity

In this paper, Kalka and Teicher have extended the Anshel-Anshel-Goldfeld key exchange to LD-systems. The following key exchange extends this cryptosystem of Kalka and Teicher to partially endomorphic algebras. Here a partially endomorphic algebra $(X,E,F)$ together with elements $x_{1},\dots,x_{m},y_{1},\dots,y_{n}\in X$ are public, and $A,B$ denote the subalgebras generated by $x_{1},\dots,x_{m}$ and by $y_{1},\dots,y_{n}$ respectively.

Alice selects some $a\in A$ along with some term $t$ such that $t(x_{1},\dots,x_{m})=a$. Alice selects an
$f\in E$ along with a tuple $\mathbf{a}=(a_{1},\dots,a_{n_{f}})\in X^{n_{f}}$. Alice then sends $$u_{1}=f(\mathbf{a},y_{1}),\dots,u_{n}=f(\mathbf{a},y_{n}),p_{0}=f(\mathbf{a},a)$$
to Bob.

Bob selects some $g\in E$ along with $\mathbf{b}=(b_{1},\dots,b_{n_{g}})\in B^{n_{g}}$ and terms
$t_{1},\dots,t_{n_{g}}$ such that
$$b_{1}=t_{1}(y_{1},\dots,y_{n}),\dots,b_{n_{g}}=t_{n_{g}}(y_{1},\dots,y_{n}).$$
Bob then sends the elements $v_{1}=g(\mathbf{b},x_{1}),\dots,v_{m}=g(\mathbf{b},x_{m})$ to Alice.

Let $K=f(\mathbf{a},g(\mathbf{b},a))$.

Alice computes $K$. Alice is able to compute $K$ since Alice knows $f,\mathbf{a}$ and since
$$g(\mathbf{b},a)=g(\mathbf{b},t(x_{1},\dots,x_{m}))=t(g(\mathbf{b},x_{1}),\dots,g(\mathbf{b},x_{m}))
=t(v_{1},\dots,v_{m}).$$

Bob computes $K$ as well. Since $L_{f,\mathbf{a}}$ is an endomorphism,
$$K=f(\mathbf{a},g(\mathbf{b},a))=g(f(\mathbf{a},b_{1}),\dots,f(\mathbf{a},b_{n_{g}}),f(\mathbf{a},a))
=g(t_{1}(u_{1},\dots,u_{n}),\dots,t_{n_{g}}(u_{1},\dots,u_{n}),p_{0}),$$
so Bob can compute $K$ from $g,t_{1},\dots,t_{n_{g}}$ and the values $u_{1},\dots,u_{n},p_{0}$ sent by Alice.
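A toy sketch of this exchange with a single binary operation ($E=\{f\}$, $n_{f}=n_{g}=1$), taking the dihedral quandle on $\mathbb{Z}_{101}$ as the LD-system and simple products as the secret terms (all parameters here are hypothetical choices of mine); the last step uses that $L_{f,\mathbf{a}}$ is an endomorphism:

```python
# Toy sketch of the self-distributive Anshel-Anshel-Goldfeld exchange:
# X = Z_101 with the dihedral quandle x*y = 2x - y (mod 101),
# so f(a,x) = a*x and the tuples a, b each have length 1.
N = 101

def star(x, y):
    return (2 * x - y) % N

x1, x2 = 5, 17           # public generators of A
y1, y2 = 23, 42          # public generators of B

# Alice: a = t(x1,x2) with the term t(x1,x2) = x1*x2, and a secret a1.
a  = star(x1, x2)
a1 = 77
u1, u2 = star(a1, y1), star(a1, y2)   # u_j = f(a1, y_j)
p0 = star(a1, a)                      # p_0 = f(a1, a)

# Bob: b1 = t_1(y1,y2) with the term t_1(y1,y2) = y1*y2.
b1 = star(y1, y2)
v1, v2 = star(b1, x1), star(b1, x2)   # v_i = g(b1, x_i)

K = star(a1, star(b1, a))             # K = f(a1, g(b1, a))

# Alice: g(b1,a) = g(b1, t(x1,x2)) = t(v1,v2), then apply f(a1, -).
K_alice = star(a1, star(v1, v2))

# Bob: K = g(t_1(u1,u2), p0), since L_{f,a1} is an endomorphism.
K_bob = star(star(u1, u2), p0)

assert K == K_alice == K_bob
print("shared key:", K)
```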

The classical Laver table $A_{n}$ is the unique algebraic structure $(\{1,\dots,2^{n}\},*_{n})$ that satisfies the self-distributivity identity $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$ and where $x*_{n}1=x+1\,\mod 2^{n}$ for $x\in A_{n}$. The operation $*_{n}$ can easily be computed from a 2.5 MB file of precomputed data as long as $n\leq 48$, and we believe that computing $*_{n}$ for $n\leq 96$ is feasible with current technology. The classical Laver tables alone are unsuitable as platforms for self-distributive algebra based cryptosystems (the classical Laver tables offer at most 96 bits of security, and in practice cryptosystems based on the classical Laver tables alone are effortlessly and instantly broken). The multigenic Laver tables are not suitable platforms for such cryptosystems either since elements in multigenic Laver tables can easily be factored.
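For small $n$ one can compute $A_{n}$ directly from the defining identities, using the instance $x*_{n}(y+1)=(x*_{n}y)*_{n}(x*_{n}1)$ of self-distributivity (valid since $y+1=y*_{n}1$ for $y<2^{n}$). A sketch:

```python
# Sketch: compute the classical Laver table A_n for small n from
# x *_n 1 = x + 1 (mod 2^n)  and  x *_n (y+1) = (x *_n y) *_n (x *_n 1).
# Rows are filled from 2^n downward, which works because x *_n y > x
# whenever x < 2^n.
def laver_table(n):
    N = 2 ** n
    table = {(x, 1): x % N + 1 for x in range(1, N + 1)}  # first column
    for x in range(N, 0, -1):
        succ = x % N + 1                                   # x *_n 1
        for y in range(1, N):
            table[(x, y + 1)] = table[(table[(x, y)], succ)]
    return table

T = laver_table(3)          # A_3 on {1, ..., 8}
N = 8

# Sanity checks: self-distributivity holds, and row 2^n is the identity row.
assert all(T[(x, T[(y, z)])] == T[(T[(x, y)], T[(x, z)])]
           for x in range(1, N + 1) for y in range(1, N + 1) for z in range(1, N + 1))
assert all(T[(N, y)] == y for y in range(1, N + 1))
print([T[(1, y)] for y in range(1, N + 1)])   # -> [2, 4, 6, 8, 2, 4, 6, 8]
```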

If $(X,*)$ is an LD-system, then a subset $L\subseteq X$ is said to be a left-ideal if $x*y\in L$ whenever $y\in L$. An element $x\in X$ is said to be a left-identity if $x*y=y$ for all $y\in X$. Let $\mathrm{Li}(X)$ denote the set of all left-identities in $(X,*)$. We say that an LD-system $(X,*)$ is Laver-like if

1. $\mathrm{Li}(X)$ is a left-ideal in $X$, and

2. whenever $x_{n}\in X$ for all natural numbers $n$, there is some $N$ such that $x_{0}*\dots*x_{N}\in\mathrm{Li}(X)$ (here the implied parentheses are grouped on the left, i.e. $x*y*z=(x*y)*z$).

For example, the classical Laver tables $A_{n}$ are always Laver-like.

If $(X,*)$ is an LD-system, then define the Fibonacci terms $t_{n}$ for $n\geq 1$ by letting $t_{1}(x,y)=y,t_{2}(x,y)=x$ and $t_{n+2}(x,y)=t_{n+1}(x,y)*t_{n}(x,y)$. If $(X,*)$ is a Laver-like LD-system, then define an operation $\circ$ on $X\setminus\mathrm{Li}(X)$ by letting $x\circ y=t_{n+1}(x,y)$ whenever $t_{n}(x,y)\in\mathrm{Li}(X)$ (the operation $\circ$ does not depend on the choice of $n$). The operation $\circ$ is associative and hence suitable for the Ko-Lee key exchange. The operation $\circ$ also satisfies the identity $(x\circ y)*z=x*(y*z)$.
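A sketch of the Fibonacci-term computation of $\circ$ on $A_{3}$ (where $\mathrm{Li}(A_{3})=\{8\}$), checking associativity and the identity $(x\circ y)*z=x*(y*z)$ exhaustively:

```python
# Sketch: the associative operation o on a Laver-like LD-system, computed
# via the Fibonacci terms, illustrated on the classical Laver table A_3.
def laver_table(n):
    N = 2 ** n
    table = {(x, 1): x % N + 1 for x in range(1, N + 1)}
    for x in range(N, 0, -1):
        for y in range(1, N):
            table[(x, y + 1)] = table[(table[(x, y)], x % N + 1)]
    return table

N = 8
T = laver_table(3)

def star(x, y):
    return T[(x, y)]

LI = {N}                  # Li(A_3) = {8}

def circ(x, y):
    # x o y = t_{k+1}(x,y) where t_k(x,y) is the first Fibonacci term
    # (t_1 = y, t_2 = x, t_{k+2} = t_{k+1} * t_k) that lands in Li.
    prev, cur = y, x      # t_1, t_2
    while prev not in LI:
        prev, cur = cur, star(cur, prev)   # advance to t_{k+1}, t_{k+2}
    return cur

D = range(1, N)           # X \ Li(X) = {1, ..., 7}
# o is associative on X \ Li(X), and (x o y) * z = x * (y * z).
assert all(circ(circ(x, y), z) == circ(x, circ(y, z))
           for x in D for y in D for z in D)
assert all(star(circ(x, y), z) == star(x, star(y, z))
           for x in D for y in D for z in range(1, N + 1))
print("o is associative on A_3 minus its left identity")
```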

A partially endomorphic algebra $(X,E,F)$ is said to be Laver-like if the hull $\Gamma(X,E)$ is Laver-like. Any Laver-like algebra may be used to induce a partially endomorphic Laver table, but for simplicity, we shall only define the functional endomorphic Laver tables induced by the endomorphic algebras with only one fundamental operation.

Suppose now that $(X,t^{\bullet})$ is an $n+1$-ary Laver-like algebra. Let $\Diamond(X,t^{\bullet})$ be the algebra with underlying set consisting of all functions $\mathfrak{l}:\{1,\dots,n\}^{*}\rightarrow X\cup\{\#\}$ such that

1. $\mathfrak{l}(\varepsilon)\in X$.

2. If $\mathfrak{l}(\mathbf{x})=\#$, then $\mathfrak{l}(i\mathbf{x})=\#$ for all $i\in\{1,\dots,n\}$.

3. If $\mathfrak{l}(\mathbf{x})\in X$, then either $\mathfrak{l}(i\mathbf{x})\in X$ for all $i\in\{1,\dots,n\}$ or
$\mathfrak{l}(i\mathbf{x})=\#$ for all $i\in\{1,\dots,n\}$.

4. If $\mathfrak{l}(i\mathbf{x})\in X$ for all $i\in\{1,\dots,n\}$, then $(\mathfrak{l}(1\mathbf{x}),\dots,\mathfrak{l}(n\mathbf{x}))\not\in \mathrm{Li}(\Gamma(X,t^{\bullet}))$.

5. If $\mathfrak{l}(\mathbf{x})\in X$ and $\mathfrak{l}(i\mathbf{x})\in X$ for all $i\in\{1,\dots,n\}$, then
there is some $x\in X$ where $t^{\bullet}(\mathfrak{l}(1\mathbf{x}),\dots,\mathfrak{l}(n\mathbf{x}),x)=\mathfrak{l}(\mathbf{x})$.

6. $\mathfrak{l}(\mathbf{x})\in X$ for only finitely many strings $\mathbf{x}\in\{1,\dots,n\}^{*}$.

If $\mathfrak{l}_{1},\dots,\mathfrak{l}_{n}\in\Diamond(X,t^{\bullet})$, $x\in X$, and $(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon))\not\in\mathrm{Li}(\Gamma(X,t^{\bullet}))$, then define $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})\in\Diamond(X,t^{\bullet})$ by $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(\varepsilon)=t^{\bullet}(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon),x)$ and $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(i\mathbf{x})=\mathfrak{l}_{i}(\mathbf{x})$ for $i\in\{1,\dots,n\}$.

Now define an operation $t^{\sharp}:\Diamond(X,t^{\bullet})^{n+1}\rightarrow\Diamond(X,t^{\bullet})$ by letting

Then the algebra $(\Diamond(X,t^{\bullet}),t^{\sharp})$ is an $n+1$-ary self-distributive algebra, which we shall refer to as a functional endomorphic Laver table. Every functional endomorphic Laver table is a Laver-like endomorphic algebra. If one has efficient algorithms for computing the values of $\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l}$, then one also has a reasonably efficient algorithm for computing the values of $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$.

The Ko-Lee key exchange for endomorphic Laver tables

One could apply the Ko-Lee key exchange to the operation $\circ$ in $\Gamma(\Diamond(X,t^{\bullet}),t^{\sharp})$. However, since we are only able to compute individual values $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})(\mathbf{x})$ rather than the entire function $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$, we will need to modify the Ko-Lee key exchange so that the endomorphic Laver tables can be used as a platform for this cryptosystem. The self-distributive Anshel-Anshel-Goldfeld key exchange can be modified in the same way.

The Ko-Lee key exchange for endomorphic Laver tables: Suppose that $(X,t^{\bullet})$ is an $n+1$-ary Laver-like algebra. Then, in this key exchange, algorithms for computing the values of $\mathfrak{l}_{1},\dots,\mathfrak{l}_{n}\in\Diamond(X,t^{\bullet})$ are public.

Alice computes $K$. Bob will not reveal his algorithms for computing $\mathfrak{k}_{1},\dots,\mathfrak{k}_{n}$ since those algorithms depend on his secret algorithms for computing $\mathfrak{v}_{1},\dots,\mathfrak{v}_{n}$. However, upon request from Alice, Bob will send specific values of the form $\mathfrak{k}_{j}(\mathbf{y})$ to Alice. Alice will then be able to compute $K$ using her algorithms for computing $\mathfrak{u}_{1},\dots,\mathfrak{u}_{n}$ together with the values $\mathfrak{k}_{j}(\mathbf{y})$ sent by Bob. In general, Alice will need to ask Bob for several values of the form $\mathfrak{k}_{j}(\mathbf{y})$.

Bob computes $K$ in a similar manner. Alice will not reveal her algorithms for computing $\mathfrak{j}_{1},\dots,\mathfrak{j}_{n}$ either, but she will reveal specific values of the form $\mathfrak{j}_{j}(\mathbf{y})$ to Bob. Bob therefore computes $K$ using his secret algorithms for computing $\mathfrak{v}_{1},\dots,\mathfrak{v}_{n}$ along with the values of the form $\mathfrak{j}_{j}(\mathbf{y})$ sent by Alice.

Remark: The common key $K$ may only have a couple of bits of entropy, so an eavesdropping party could find $K$ simply by guessing the most likely possibilities. The most obvious way to mitigate this problem is to perform the above key exchange multiple times in order to obtain a large enough key. Another, probably more efficient, way to mitigate this problem is to use $(\mathfrak{w}_{i}(\mathbf{x}_{0}),\dots,\mathfrak{w}_{i}(\mathbf{x}_{r}))$ as the common key instead, where $(\mathbf{x}_{0},\dots,\mathbf{x}_{r})$ is an enumeration of the suffixes of $\mathbf{x}$.

To be continued

In the second post, I will give a detailed analysis of the Ko-Lee key exchange for endomorphic Laver tables, and at the same time I will release an online JavaScript program for analyzing this key exchange.