Elastos and Quantum Wealth

This curriculum is a program from Professor Xu Ke and me. In sum, we will use computational thinking to discuss some of the phenomena and questions associated with the new internet economy and blockchain. A core concept will of course be computational thinking, originally proposed by Professor Jeannette Wing of Columbia University, who is famous in this field.

Figure 1: Pictured in the middle is Professor Jeannette Wing; on the left is Turing Award laureate Andrew Chi-Chih Yao. Professor Wing was previously Head of the Computer Science Department at Carnegie Mellon and is currently Director of the Data Science Institute at Columbia University.

She was an early proponent of computational thinking and defined it in a short, four-page article whose reasoning was not especially deep. If the definition is so straightforward, then what exactly is computational thinking? First, it is a kind of recursive thinking, often called iteration in physics, which is something we can all understand: a simple rule is carried out repeatedly, with the output of each round fed back in as the input of the next. At the same time, computational thinking also includes the idea of parallel processing, as when many CPUs each simultaneously complete one part of a calculation.

On the surface, there is nothing especially novel about this concept for the computer science industry; anyone working in the field would understand it. Then why did the concept proposed in her article have such a great impact? Because it was the first time anyone had proposed that computational thinking could be applied outside of computer science. She asserted that computational thinking could not only be used in computer programming, but could also explain phenomena in an extraordinary number of fields, including cosmology.

Seth Lloyd, a professor at MIT, has made outstanding contributions to the field of quantum computing. In fact, the ideas he proposed in his book, Programming the Universe, did not necessarily come later than computational thinking, and are very similar to it.

When you read the book’s title, Programming the Universe, you can probably guess its general topic. Although he did not explicitly cite the concept of computational thinking, the title’s meaning is that the entire universe moves forward as a quantum computation. The book made a big impact, mainly within the field of physics.

So why is the influence of computational thinking steadily increasing? At its root, this simple concept challenges traditional human thinking. In fact, we conclude that it is decentralized thinking that has challenged centralized thinking. Humans initially interpreted all things, including the universe and society, through centralized thinking. For example, we could not understand why the universe came into existence or why life was formed. The only way that we have been able to explain these questions is through the generally held concept that a creator or god exists at the center of everything and created all through supreme intelligence.

Look at socio-economics in China, for example, during a period when the country was prospering and the economy developing well. According to traditional centralized thinking, if China is doing well socio-economically, it must be attributed to the leadership of a good emperor. In China, we generally talk about the prosperity of the Qing Dynasty, during which the rulers were seen as very enlightened. The flavor of Chinese culture is especially strong here: generally speaking, the emperor is cast in the central role. Only now, through recursion, iteration, and parallel computing, have we discovered that there is no central role in computational thinking. Computational thinking is decentralized thinking.

A concrete example of decentralized thinking can be illustrated with the algorithm used by ants. This example of computational thinking is the easiest to describe and the simplest to understand. The IQ of an individual ant is not high, and ant colonies are not ruled by a central intelligence such as an ant king or any other such figure. Yet ants are often able to complete extremely abstract and complex tasks. For example, when ants leave the nest to find food, they behave with a general economy of effort: they find the shortest route from the nest to the food. Making too many turns would use up more of an ant’s resources when carrying food back to the nest. In principle, the shortest route should be found, but the shortest path problem is a well-known problem in mathematics, one historically tackled by the field’s top minds; early work on path problems goes back to Euler, a milestone in the history of mathematics.

Reason would tell us that expecting ants to find the shortest route between two points is hopeless. It is unfathomable that an ant could emerge from the colony able to perform the same feats as Euler, and the chances are slim that a single central ant of great intelligence would appear. But later, humans learned more details about how ant colonies work, and these details were abstracted into what computer science now calls the ant colony algorithm. The computational thinking that ants use is nothing more than iteration, recursion, and parallel computing. As each ant moves along its path toward the food, it leaves behind pheromones, which later ants can perceive. When an ant first selects a path, every available path has an equal probability of selection; there is no reason to prefer one over another. But ants instinctively favor the path with a higher density of pheromones, which increases that path’s probability of selection. Through this recursion, the colony is ultimately able to find the shortest path.

Figure 3.1: At the beginning, there are three ant paths, each with the same probability of selection, where the shortest one is unknown.

Figure 3.2: The middle path is the shortest and allows the fastest round trip, so pheromones accumulate on it faster than on the other paths. The probability that later ants select it therefore increases, producing a feedback loop through algorithmic recursion. Ultimately, the ants find the shortest path through “calculation.” Cartoon provided by Yang Ge.

I asked my friend Yang Ge, a painter, to prepare a cartoon to help us quickly understand how ants find the shortest path. At the beginning, in the first panel, there are three paths, and each has the same probability of being selected, because the ants do not know which is longer or shorter. But one of the paths is necessarily the shortest, which means a round trip from nest to food takes the least time on it; the same trip on the other paths takes longer. With the ants initially distributed at random across the three paths, the shortest path supports the most frequent round trips, so the pheromones left on it grow denser over time than on the others, even if only by a small amount at first. That small difference changes the probability of selection for the later ants, who prefer the path with more pheromones. The rest is recursion, or iteration, creating positive feedback.

As the probability of selection increases, the density of pheromones on the shortest path also increases, in turn raising that path’s probability of selection and again increasing its pheromones: a positive feedback loop. Over time, it is as if the ant colony performs a calculation whose result is the selection of the shortest path, while the longer paths fall out of use, their comparatively thinning pheromones making them ever less attractive. It is just that simple. In fact, if you were to write the calculation out as a program, it would only take about four steps, yet the ants solve the shortest path problem entirely without the aid of mathematicians or centralized intelligence. You might say that the combined IQ of the colony rivals that of an average person. The IQ of each individual ant is very low, but using this simple algorithm of recursion and parallel computation, the colony is able to solve complex problems such as the shortest path problem.
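The few-step loop described above can be sketched in a few lines of code. The following is a minimal, deterministic "mean-field" version, in which the colony's traffic distributes itself over the paths in proportion to pheromone each round; the path lengths, evaporation rate, and number of rounds are all invented for illustration.

```python
# Minimal sketch of the ant colony's shortest-path "calculation".
# All numbers here are invented for illustration.
lengths = [5.0, 3.0, 7.0]        # three paths; the middle one is shortest
pheromone = [1.0, 1.0, 1.0]      # equal at the start: equal odds of selection

for _ in range(200):
    total = sum(pheromone)
    shares = [p / total for p in pheromone]     # fraction of ants per path
    # Evaporation plus deposit: shorter paths allow more round trips per
    # unit time, so they receive more pheromone per ant that chooses them.
    pheromone = [0.95 * p + s / l
                 for p, s, l in zip(pheromone, shares, lengths)]

best = max(range(3), key=lambda i: pheromone[i])
print(best)  # index 1, the shortest path, dominates
```

In a fully stochastic version, each ant would choose a path at random with probability proportional to pheromone; the mean-field form above keeps the same positive feedback loop while making the outcome deterministic.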

We are slowly discovering that computational thinking is, in reality, the method by which the vast majority of selection processes in nature operate, and the reason a physicist wrote Programming the Universe is that he believes the entire universe is a quantum computer running such a program. Future research must continue to look for the rules, protocols, and procedures by which the cosmos unfolds according to computational thinking.

Bee colonies are another especially classic example. In the beginning, humans viewed bee colonies through centralized thinking, and the largest bee was initially referred to as the “king bee.” Later research on honeybee colonies revealed that “king bee” is not an accurate name. First of all, the so-called king bee feeds on royal jelly, a special food, and plays no strategic role in the hive; it cannot be called a king of the colony because the other bees take no orders from it. Humans had simply projected their own image onto the colony. In reality, the only function of the so-called king bee is to reproduce, as it is the only bee in the colony with a fully matured reproductive system. Although the worker bees are female, they cannot reproduce, not having been raised on the royal jelly reserved for the queen. So humans changed the name from “king bee” to “queen bee,” since her only role in the colony is to produce the next generation of bees. The other honeybees are called “worker bees.”

The queen bee has no decision-making power and relies on the colony to care for her; the colony operates on computational thinking. Once this research became clear, the field of computer science began to imitate bee colonies, which gave rise to the bee colony algorithm: how to locate food, how to produce an excellent next generation of bees, and what rules to use in building their marvelous nest. Humans view the beehive as a peak of perfection. Although I have not researched this personally, architectural analyses of beehives indicate that they use the least possible amount of material, and that they remain cool in summer and warm in winter. Bees are a classic example of a colony that operates through computational thinking, resulting in a structure that is entirely decentralized and that has inspired many human structures.

From this, we can infer that computational thinking really does play a decisive role in the evolution of the earth’s ecosystems. So-called evolution is nothing more than a basic protocol: variation and survival of the fittest. When Darwin wrote The Origin of Species, it shocked people. Previously, all thinking was centralized, and many early explanations for the existence of life held that it must be the work of an ultimate creator, central to the universe. Darwin discovered that this is not true. Rather, the reason that living things, including humans, have survived until today rests on very simple principles: variation, survival of the fittest, and the production of the next generation through the recursive, iterative process of mating. If you are not fit for the environment, then sorry, you are eliminated, and your line does not continue. These principles also run in parallel: the earth performs its “calculations” on independent individuals wherever they live. This is certainly not the work of any god’s plan, and no area on earth is given priority over another; every species engages in equal competition with the others.
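The protocol just described, variation plus survival of the fittest, run recursively and in parallel, is exactly the loop that a genetic algorithm executes. Here is a minimal sketch; the "environment" is an invented fitness function (count the 1-bits in a genome), and all parameters are chosen only for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(genome):
    # Invented "environment": the more 1-bits, the fitter the individual.
    return sum(genome)

# A population of random 20-bit genomes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
initial_best = max(fitness(g) for g in population)

for generation in range(150):
    # Survival of the fittest: only the top 10 reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: each survivor leaves four mutated copies of itself.
    children = [[bit ^ (random.random() < 0.02) for bit in parent]
                for parent in survivors for _ in range(4)]
    population = survivors + children   # 10 + 40 = 50 again

final_best = max(fitness(g) for g in population)
```

Because the survivors are kept unmutated, the best fitness never decreases: a simple illustration of how blind variation plus selection accumulates "design" without any designer.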

This process resembles an algorithm of computational thinking, and it has been running for hundreds of millions of years to very good effect. Leaving humans aside for now, consider leopards, about which I often sigh in admiration. The graceful, nimble way a leopard hunts, its run, its build, its abilities, are all the result of the calculations evolution has performed on earth over billions of years.

Figure 4

Early legends told of digging up dinosaur fossils, and movies today retell those same stories, if somewhat more clumsily than the legends. The calculations of evolution are obviously excellent: over billions of years, they have produced the innumerable high-quality specimens alive on earth today. They rely on distribution, not on any central role. Of course, humans would appear to be the most advanced, and we, too, are a result of the evolutionary process.

Let’s now take a look at the concept of markets. Why has Adam Smith been called the father of economics? Truthfully, I thumbed through The Wealth of Nations for a long time, and on the surface it is a bland read, merely describing some contemporary economic phenomena. He doesn’t even appear to have an especially interesting theory, and there was a time when I had trouble understanding why he was so respected, given that The Wealth of Nations became the founding work of modern economics. Only after I thought about it from the perspective of computational thinking did I suddenly realize its gravity.

In the past, humans interpreted society as being organized around a central point. In Chinese history, the emperors of the Qing Dynasty were considered particularly benevolent, and the era remains a well-known example of prosperity; as commonly depicted in television dramas, that prosperity is usually attributed to the influence of the benevolent emperors. But like Darwin, Adam Smith ultimately discovered that reality could not be explained this way. From our point of view, the market operates without relying on a central point, which, although he never stated it explicitly, is computational thinking. The basic premise is that goods are exchanged for mutual benefit and that trade is based upon the real value of goods. Using computational thinking to look at market activity is much like looking at the ant algorithm. Compare consumers trying a new product to ants selecting a path. At first, the selection is made blindly, and every option is equally likely to be chosen. But over time, products that are better than others gain a reputation, the counterpart of the pheromones left behind by the ants. Reputation slowly accumulates around the better products, forming positive feedback: products with a better reputation attract more buyers, and more buyers spread the reputation further, until finally a well-known brand name is formed. The formation of a brand is just like the ants using their algorithm to select the shortest and most effective path. In this way, the market slowly evolves under the principle of survival of the fittest. Adam Smith’s Wealth of Nations describes the algorithm of the free market (the so-called “invisible hand”), which continually optimizes the allocation of industrial resources.
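The brand-formation loop just described can be sketched in a few lines: each round, consumers distribute themselves over products in proportion to reputation, old word of mouth slowly fades, and satisfied buyers add new reputation in proportion to the product's underlying quality. The quality values and parameters below are invented for illustration.

```python
# Minimal sketch of brand formation as a positive feedback loop.
# Quality values and parameters are invented for illustration.
quality = [0.5, 0.9, 0.3]        # the hidden "real value" of three products
reputation = [1.0, 1.0, 1.0]     # at first, all products look alike

for _ in range(300):
    total = sum(reputation)
    buyers = [r / total for r in reputation]   # market share this round
    # Old word of mouth fades; satisfied buyers add new reputation in
    # proportion to the product's underlying quality.
    reputation = [0.98 * r + b * q
                  for r, b, q in zip(reputation, buyers, quality)]

brand = max(range(3), key=lambda i: reputation[i])
print(brand)  # index 1: the highest-quality product becomes the brand
```

No one decides which product becomes the brand; the ranking emerges from repeated, decentralized choices, exactly as the pheromone trail emerges from the ants.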

The results of this evolution are the fruits of the market; you might even say that all of civilization is built upon its basic principles. Just 100 years ago, cars began as little more than ox-carts with motors attached. And what have cars become? Automatic, sleek, built almost like spacecraft. There was no centralization in this process, no central decision-making role. You might want to thank some central figure, saying it was all because of him that we have cars, but there is no such figure. We should actually be thanking the algorithm: the exchange of goods for mutual benefit, and trade based upon the real value of goods. Looked at this way, The Wealth of Nations is nothing more than a summary of the phenomena produced by the basic protocol of computational thinking, something the people of its time ignored.

Take England, for example. Market selection resulted in the slow but steady development of certain industries. The Wealth of Nations discusses the amassing of raw materials in one place, the accumulation of resources in another, and the advancement and iteration of handicrafts, one phenomenon after another, eventually leading to the automation of production. The point is that the entire system relies on no central point; it relies on a basic protocol instead. Wherever that protocol is executed, it continuously makes society more advanced, which is equivalent to higher evolution and a greater ability to produce better products.

The example of China is very distinct. Students seated here may not have experienced the China of 30 years ago, but perhaps your parents have. That was the time of the so-called planned economy, the epitome of centralized thinking. Everything was set by the central government: what you could produce, what you were required to invest, what you could sell, even the prices you could charge, all fixed in an extremely rigid way. I lived through this era. How poor was the planned economy? Go buy a ticket to North Korea and have a look around; then you will know that it is nothing but poverty, completely unable to develop. In fact, Deng Xiaoping’s greatness lay in his belief that socialism could also have a market. He recognized this basic protocol and agreed to let it operate in a decentralized way, and China’s acceptance of this most basic rule allowed the country to thrive in the thirty years following reform.

For example, my memory of listening to the reports from the Third Plenum of the 18th Central Committee is very clear. I don’t really enjoy listening to political reports, but my eyes lit up when one line appeared on the television: “Allow the market to carry out its decisive role in resource allocation.” I remember this phrase almost word for word, because its direction was correct. Regardless of anything else, you must persist in carrying out this basic point; if you violate this basic protocol, your market will not be able to improve. As long as you persist in implementing it, more wealth, more prosperity, and more value are guaranteed to flow to society. This is where Adam Smith’s greatness lies. He is considered the father of modern economics because he recognized that the process is decentralized. Historically, we believed that wealth must rely on a central point or some supreme intelligence, but he saw that it is just an iterative, recursive, and parallel calculation. Of course, neither Darwin nor Adam Smith explicitly used the word “calculation,” but looking back on it now, it is, in fact, the same thing.

I really enjoyed reading Programming the Universe because I studied physics, and the logic it uses to interpret the universe is, in large part, similar to computational thinking. Because my major at Tsinghua was originally quantum mechanics, I believe these similarities are not by chance; they need a unified basis of interpretation and were not hit upon by strokes of luck. Did Adam Smith get lucky, suddenly discovering the concept of the market? Was Darwin’s discovery of the theory of evolution mere chance? No. There must be a unified basis for understanding the world, a basis that I later came to understand as quantum mechanics.

Our view of the world rests on what we take to be the basic point of origin of the universe; we call this a paradigm, or an ontological theory. If this can be established, computational thinking arises naturally from it, giving the most basic form of, and rules for, how all things in our universe operate. Quantum mechanics establishes this basis. I have discussed this over and over with Gu Xueyong, Tsinghua Professor of Automation with a PhD from MIT in Science and Engineering. He was the first to explain computational thinking to me; later, I told him about quantum mechanics, and we finally discovered our common basis. The basis for understanding this paradigm is found in quantum mechanics, which is ultimately computational thinking.

First of all, why do we shift from centralized thinking to decentralized thinking, from classical Newtonian mechanics to quantum mechanics? According to the Newtonian worldview, the world is made up of particles, and the movement of every particle follows a definite orbit. Our world could therefore be known completely by Laplace’s demon, the wisest of beings: knowing only the initial conditions (position and velocity) of all particles and their equations of motion, it could deduce the state of the universe at any time. This is the scientific basis of centralized thinking, and the basis of the planned economy. Only with quantum mechanics does the new worldview, decentralized thinking, gain a scientific basis of its own.

So what kind of paradigm does quantum mechanics provide? First, let’s have everyone prepare themselves a little bit, as the paradigm of quantum mechanics is definitely different from the world view which most people hold. Since we were children, we have been taught Newtonian mechanics, creating a set of paradigms that provide a basis for common knowledge.

Why has this paradigm since been overturned? In the field of physics, it was overturned at its very base by quantum mechanics. We just discussed Darwin’s theory of evolution and Adam Smith’s Wealth of Nations; although they are great accomplishments, neither was able to overturn the old paradigm wholesale or create a new one. They overturned it only in particular areas. For example, I once watched a film about the theory of evolution: when it came out, it caused an uproar among theologians in Britain, because it was the polar opposite of the common belief in the existence of god.

Even so, Adam Smith’s theories could not overturn the old paradigm, because they did not address its most central component: the Newtonian worldview. Only quantum mechanics can do that. In a moment, I will talk about how our space-time paradigm has begun to shift, but this process still has a long way to go; we can only say that the great curtain of history has just been pulled back. What makes us especially lucky is that changing our most basic understanding of the universe’s existence, and establishing a whole new paradigm, naturally gives rise to computational thinking. There is great hope, because this effort is being advanced across many academic disciplines.

To this end, we are seeing a significant accumulation of knowledge and experiment. So what paradigm does quantum mechanics provide? Or, to ask another way, what new understanding of the world does it give us? You shouldn’t be scared off by the words “quantum mechanics,” because it isn’t actually that mysterious. Of course, I spent 20 challenging years studying and researching at Tsinghua University before I really felt this new paradigm had become part of me. When I first encountered quantum mechanics, I found it quite difficult, and at one point I abandoned it. Twenty years ago, the reason I did not complete my PhD at Tsinghua and decided to start a business was not so much that I wanted to earn money as that I could not continue researching. When there is discord in the way you understand things, it is very difficult to continue; it felt as if I was being forced to accept a theory I had not yet come to understand, so I gave it up.

But my confusion was later dispelled, thanks to the continuous guidance and education of my mentor of many years at Tsinghua, Mr. Zhang Li. Each year, he teaches a class titled Issues on the Frontiers of Quantum Mechanics, which is now the flagship course of Tsinghua’s Physics Department. Mr. Zhang Li, who is now 90 years old, only two years younger than Mr. Chen-Ning Franklin Yang, still stands at the podium lecturing classes, tirelessly chiseling away at his work and refining it down to a fine point.

When Mr. Zhang Li lectures on quantum mechanics, he starts with this one experiment. Once you understand this, you will understand half of the concepts of quantum existence.

This is the first experiment Professor Zhang Li lectures on in Issues on The Frontiers of Quantum Mechanics. It is actually a very simple experiment. The yellow device in the upper left corner is an electron gun that shoots only one electron at a time, which is the crucial aspect of the experiment. The experiment was successfully completed in 1989, and the single-electron gun was among the most advanced pieces of equipment of its time. Because electrons are so small, it is extremely difficult to guarantee that only one is emitted at a time; emitting many electrons is simple, just heat a filament and electrons stream off it.

The ability of Professor Tonomura’s gun to emit one electron at a time was the result of many years of striving. You might be able to guess why it was a Japanese researcher who achieved it. Japanese researchers will devote their entire lives to one thing, spending decades on what others might give a single day, in order to reach the peak of their field. This persistence is something we ought to learn from; they do not give up on things easily. Last year, in 2016, I hosted a discussion at Tsinghua about lasers, and the participants told me they wanted to import a state-of-the-art single-photon light source, one that could guarantee that only one photon is emitted at a time.

The idea for this experiment was proposed about 100 years ago, but it was only successfully carried out by Professor Tonomura. The procedure is not complicated: one electron at a time is emitted toward a barrier with two slits. There is no question that the electron reaches the far side of the slits, but it is uncertain which slit it passes through. Each electron finally leaves one bright point, as shown in the top image on the right side of the screen.

As we begin, I bet everyone is thinking that this electron is surely a Newtonian particle. Let’s hypothesize that you are right and that the electron must follow a definite trajectory; you could think of it like a bullet. Whether it goes through the first slit or the second is random, but in the end, after many electrons have accumulated on the screen, two bright bands must appear, one behind each slit. This is the Newtonian prediction: particles on definite trajectories either pass through one slit or the other, so only two bands can form. If you don’t believe me, try it with bullets; you will find exactly two bands, one behind each slit.

The experimental results in the first image show one dot per electron emitted. There is nothing controversial about that: one particle, one dot. At first there are too few dots to show any pattern, but as they accumulate, what emerges is definitely not two bands behind the slits; the Newtonian prediction cannot be correct. If you remember double-slit wave interference from middle school, you will recognize the classic bands of that phenomenon in the final image. Of course, ordinary books on quantum mechanics, including most science textbooks, give verbose explanations in terms of wave-particle duality. But keep in mind that wave-particle duality has not brought a new paradigm. I find the phrase “wave-particle duality” quite crude and, furthermore, a rather irresponsible way of talking: it is ambiguous and does not tell us much. Does it represent a new paradigm and a new worldview? It does not.

Let’s review the concept of waves. If the interference pattern here were produced by a wave, then the wave necessarily passed through both slits at the same time and was then superimposed. Why are there alternating bands of light and dark? Because the phases of the wave passing through the two slits differ where they superimpose on the screen. It’s very simple: where the waves interfere constructively it is bright, and where they interfere destructively it is dark. Any type of wave produces these results. When I was at The High School Affiliated to Renmin University of China, our conditions were crude, so we used water ripples: the teacher at the podium dripped water droplets into water, and the ripples produced a picture like the final image. A slightly more high-tech and much more effective method is to use light waves, preferably a laser, which produces extremely clear bands of alternating light and dark.

What can we ascertain from the perspective of waves? The physical picture clearly shows that the wave passed through both slits simultaneously and then produced interference. The wave concept provides a good explanation precisely because the wave must pass through both slits, producing constructive and destructive interference and, finally, alternating bands of light and dark.
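The wave explanation can be checked numerically: superpose the amplitudes arriving from the two slits at each point on the screen, and the alternating bands appear in |a1 + a2|², while the "Newtonian" sum of independent intensities |a1|² + |a2|² shows no bands at all. The geometry below (wavelength, slit spacing, screen distance) is invented for illustration, with idealized point slits.

```python
import cmath
import math

# Idealized two-slit interference; all dimensions invented for illustration.
wavelength = 1.0
k = 2 * math.pi / wavelength     # wave number
d = 5.0                          # slit separation
L = 100.0                        # distance from slits to screen

def amplitudes(x):
    # Path lengths from each slit to the point x on the screen.
    r1 = math.hypot(L, x - d / 2)
    r2 = math.hypot(L, x + d / 2)
    return cmath.exp(1j * k * r1), cmath.exp(1j * k * r2)

def interference(x):
    a1, a2 = amplitudes(x)
    return abs(a1 + a2) ** 2      # wave picture: superpose, THEN square

def newtonian(x):
    a1, a2 = amplitudes(x)
    return abs(a1) ** 2 + abs(a2) ** 2   # particle picture: each slit alone

# At the center, the two paths are equal: fully constructive interference.
# Near x = 10, the path difference is about half a wavelength: nearly dark.
# The "Newtonian" intensity is 2 everywhere: no bands at all.
```

Evaluating interference(x) across a range of x reproduces the alternating bright and dark bands; the crucial line is a1 + a2, the amplitudes from both slits added before squaring.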

Note that this explanation cannot be casually applied to electrons. Why emphasize the electron? Let me ask: when an electron passes the slits, which path does it actually take, the first slit or the second? Whichever way you imagine the Newtonian particle to go, it will not produce an interference pattern of alternating light and dark bands. Although no one is willing to at first, the only way forward is to depart from the Newtonian paradigm, which simply does not work here. So which slit does the electron go through? Let me offer a short anecdote here, because it is very difficult for physicists to persuade anyone to depart from the Newtonian paradigm; even Einstein was unwilling to make this departure. The Newtonian paradigm has stood for several hundred years, and everyone knows that Newton is one of the greatest physicists. It is true that Einstein’s theory of relativity overturned previous theories of space-time, but at its core, his paradigm was consistent with Newton’s. Einstein inherited Newton’s view that each particle has a determined trajectory, and convincing him to depart from this view proved extremely difficult.

If a new concept suddenly appears that looks to be beyond belief, it just might be right.

The electron went through both slits. Right now, that might seem impossible: surely the electron isn’t able to split itself in half. But we can’t help but admit that it really did go through both slits; otherwise, there would be no way for the interference pattern to have formed. Physicists conducting the initial studies proposed all kinds of explanations, but these never produced quite the right results. Science is simple but harsh: an explanation stands only if it agrees with experiment. The only explanation that survives is that the electron goes through both slits.

I bet there are already some students who are starting to furrow their brows, unable to accept this. This is natural. Even physicists like Einstein were unable to accept it, and for many years, I, too, was unable to accept it.

A physicist, illustrating just how absurd he thought this concept was, drew the cartoon above. Here, we see the spooky skier who simultaneously goes both left and right around a tree. This spookiness has led people to write papers on the ghost-like qualities of quantum mechanics.

But is it really that strange? After all the years I have spent thinking about and discussing this, the person who produced the most succinct description actually wasn’t a physicist. Two years ago, there was an investor in blockchain, very active in VC and economics, who reached out to me because she wanted to come to Tsinghua University to discuss blockchain. At the time, I was quite interested in blockchain, so I agreed to meet her, and we met right there at the Tsinghua University SEM Cafe.

We talked about blockchain, but once the conversation started winding down, she suddenly said that she had read and understood my articles online about quantum mechanics. My initial reaction was to assume that she must be engaged with physics. She responded that she was not, but that she had majored in English. How could an English major understand my writings? I was skeptical. She explained that her parents were professors at Fudan University and that she had grown up at Fudan. Xie Xide, the famous female physicist and former Fudan University President, was her neighbor. She said that people had been explaining quantum mechanics to her since she was a child, which was how she was able to understand it. My first thought was that she was exaggerating. I asked her how she could have understood it ever since she was little, when I, after many years at Tsinghua, had not been able to understand it. Impatient to end the conversation, I said, fine, and asked her to describe her knowledge of quantum mechanics in one sentence. My intention was simply to hear what she came up with for kicks and then let the issue go. What she said was this: her understanding of the world of quantum mechanics is that “behind the visible world, there is another invisible world.” Honestly, when I heard her say this, I broke out in a cold sweat, mainly because I had to admit that what she said was true. She didn’t use the language of a professional physicist, but I could tell from what she said that her understanding was correct. After studying this for so many years, I had never put it in such a way.

I later related this story to my mentor, Professor Zhang Li, and with a knowing smile, he said yes, that way of putting it is very interesting, and her understanding was correct. What does this mean? She really did understand. Behind the visible world exists an invisible world. The reason why the double-slit experiment is so strange is that the intermediate process, during which one electron simultaneously passes through two slits, is invisible. The point where the electron hits the screen is clearly visible, but keep in mind that the quantum superposition, in which one particle passes through two slits, is an invisible process. That is to say, the location information is only one part of reality, a certain kind of special state.

Until now, I have said that one electron passing through two slits is an invisible process. Mr. Zhang Li has discussed this experiment in detail in his class, Issues on the Frontiers of Quantum Mechanics. However, simply saying that the process is invisible is not enough; you must test the claim through experimentation. The team of David E. Pritchard at MIT took on this challenge. They reasoned that if the interference pattern depends on the intermediate process being invisible, then they would risk it all by placing a detector next to a slit in order to find out which slit the particle goes through; they didn’t believe that the process was invisible. The experiment took them about ten years to complete, because it required an extremely high level of precision to see which slit the electron had gone through. In the beginning, the precision wasn’t high enough: although they were able to obtain some information, it was not enough to make an exact determination of the electron’s path. During that time, the banded interference pattern continued to appear as normal, and the single electron still passed through the two slits unseen. The reason the team worked on this for so many years was to increase the precision of the experiment. Finally, they reached the level of precision required, and the information obtained was sufficient to make an exact determination of the electron’s path. Once the which-path information was clear, what happened to the bands observed on the screen? Once the intermediate process was observed clearly, the system reverted to the Newtonian paradigm: the interference pattern disappeared and the quantum superposition was broken, which makes sense.
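The trade-off in this experiment can be illustrated with a standard toy model (this is my own sketch of the textbook complementarity relation, not the Pritchard group’s actual analysis): if the detector states left behind by the two paths overlap by an amount gamma, the fringe visibility is V = |gamma|, and the screen pattern is proportional to 1 + V·cos(φ).

```python
import math

# Toy model of which-path detection (illustrative): the more distinguishable
# the detector states for the two paths, the smaller their overlap gamma,
# and the lower the fringe visibility V = |gamma| in I(phi) = 1 + V*cos(phi).
def fringe_pattern(phi, detector_overlap):
    visibility = abs(detector_overlap)
    return 1 + visibility * math.cos(phi)

# No which-path information (overlap 1): full contrast, bright and dark bands.
print(fringe_pattern(0.0, 1.0), fringe_pattern(math.pi, 1.0))  # 2.0 0.0

# Perfect which-path information (overlap 0): the bands disappear and
# every point on the screen receives the same intensity.
print(fringe_pattern(0.0, 0.0), fringe_pattern(math.pi, 0.0))  # 1.0 1.0
```

Partial precision, as in the early years of the experiment, corresponds to an overlap between 0 and 1: the fringes fade gradually rather than vanishing all at once.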

The most fundamental conclusion is that quantum processes are invisible by nature: while they truly evolve, they are invisible and non-local.

While we are on this topic, let me say a few words about the theory of parallel universes (the multiverse). People often use the concept of parallel worlds to tell stories about quantum mechanics. It makes sense to talk about parallel worlds that are invisible and non-local, because that is precisely what quantum superposition is!

What really doesn’t make sense is when people write science fiction about the existence of parallel worlds in the visible world. That is totally wrong!

I owe a lot to my investor friend. If not for her, I would still be presenting the physics in the old way and everyone would be even more mystified. She used simple language to describe the quantum world: “There is an invisible world behind the visible world.” Later, many of my introductory articles on quantum mechanics came to carry this phrase as their title, and it is now very representative of the public viewpoint. It is fundamental for everyone to understand that quantum evolution is an invisible process. If you insist on observing the process and place a detector there (at S1 or S2 in Figure 5.1), the result is the appearance of a Newtonian particle and the disappearance of the interference pattern. What kind of new paradigm does this bring? Actually, we truly ought to thank Einstein. In truth, Einstein acknowledged this fact, and for it he took the blame for many years, great as he was. Most people writing articles these days don’t understand what they’re talking about, and many say that Einstein opposed quantum mechanics. Last year, I even saw a popular science article which asserted outright that quantum mechanics made Einstein lose face, which was going a little too far.

What these writers say is not completely unfounded. It’s true that Einstein consistently sided against the mainstream opinion, opposing Niels Bohr and the Copenhagen school. Bohr and his school believed that randomness is fundamental, while Einstein believed that randomness was only an appearance, behind which lay Newtonian determinism. As his famous saying goes: “God does not play dice with the universe.”

I later did research and found that Bohr and the Copenhagen school proposed the theory of wave-particle duality, including the crucial uncertainty principle, which was, of course, proposed by Heisenberg of the Copenhagen school. This principle is usually explained in the first chapter of most quantum mechanics textbooks and is quite simple. It is a fundamental principle, not derivable from other theorems. It asserts that position and momentum (speed) cannot both be precisely measured simultaneously: the product of the uncertainties of the two variables is at least of the order of Planck’s constant. With this principle, the concept of a trajectory for a Newtonian particle is no longer tenable.
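In the standard modern form (the notation here follows textbook convention: $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum, and $\hbar$ is the reduced Planck constant), the relation reads:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Because $\hbar \approx 1.05 \times 10^{-34}\ \mathrm{J\,s}$ is so tiny, the trade-off is negligible for everyday objects but decisive for an electron.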

Everyone knows that the reason why a particle has a trajectory is that it has a definite position whose time derivative, the velocity, can be determined. You can only draw a continuous trajectory when both the position and the speed are known. So once we accept the uncertainty principle, the classical physics paradigm is promptly abolished. What paradigm replaces it, Bohr was unable to say.

Just by chance, I came upon Einstein’s 1933 speech at Oxford University, a flawless piece of work. Einstein said, “If we accept the uncertainty principle, it means that we must forever set aside the description of particle localization.” What does this statement mean? The point of departure for a new, quantum-mechanics-based paradigm is necessarily non-locality. But first, let’s talk about the concept of locality. Locality is the determinism of particles and their existence at a specific location in space-time, a concept that must be set aside. Quantum existence is non-local and invisible.

This is precisely what Einstein had grasped, and marks the beginning of a new paradigm.

I am acutely aware that at the time Einstein proposed the concept of single-quantum non-locality, it did not receive the attention that it warranted, and he took the fall for it. As we now know, non-locality has become a mainstream concept in the field of quantum mechanics.

But did Bohr have similar ideas? I later heard a story about Bohr which illustrated that, although he did not say it outright, he had his own realizations along similar lines. My mentor, Mr. Zhang Li, told this story to me. In the beginning, Bohr made major contributions to the field of quantum mechanics; he was awarded the Nobel Prize for his theory of the structure of the hydrogen atom and its verification. The Danish court was preparing to grant him the Order of the Elephant, and Bohr was to participate in a ceremony during which his coat of arms would be used. Bohr came from a family of commoners who had no coat of arms, but the court insisted on holding a ceremonial event in which a coat of arms was required: if you don’t have one, go design one. Bohr agreed, and designed it himself.

Here is the coat of arms that he designed, on which the Chinese taijitu is placed front and center.

Looking at it now, 100 years later, can you guess why he placed the taijitu in the center? In truth, seeing his coat of arms reminded me of studying the Tao Te Ching by Lao Tzu in college, and in an instant, many of its concepts finally made sense. For instance: “The nameless is the origin of heaven and earth, while naming is the origin of the myriad things.” “All things in the cosmos arise from being. Being arises from non-being.” “The great sound is hard to hear. The great form has no shape.” “All things submit to yin and embrace yang.” Before, I felt like these sayings didn’t have any logic to them. “Nothing,” or wu, should just be nothing. How could “nothing,” wu, give rise to “something,” you? On the surface, this is a paradoxical proposition. But from the perspective of non-locality, the phrase “presence and absence produce each other” naturally makes sense. Nothing, or wu, isn’t the absence of stuff, but rather the absence of visible stuff. Under certain conditions, absence will give rise to localized presence, or you, providing you with information. But in essence, the underlying state is what is referred to as “nothing,” or wu. Of course, the concepts of yin and yang were later used to express these ideas. Yang is what is visible, what the world reveals when exposed to the light of the sun. But it is crucial to remember that what is visible is only one part of the world. Behind it is the invisible, a concept expressed by yin.

But the worldview of the ancient Chinese Taoists was that yin and yang transform into each other. The two are not absolutely opposed; there is no absolute yin or absolute yang. Each continually affects the other, causing its transformation, which is expressed in the taijitu. This is a very wise concept. If we go back and look at the earlier experiments, like the double-slit experiment, we can see an element of yang in the visible dots left on the screen at the end, while the intermediate, non-local, invisible part can be considered to be in a state of yin (a quantum superposition traveling both paths). Together, these represent the two fundamental informational states of all matter in the universe, precisely the part that classical physics ignored.

Newtonian mechanics only studies the visible part of the universe. At the time, physicists were not aware of, or didn’t care about, any connections behind the visible world, including quantum entanglement. Einstein laid it out bluntly, saying that the logic of quantum mechanics would develop into a paradigm that truly is non-local. I believe this point is part of Einstein’s greatness. Because of this, articles I later wrote praised Einstein heavily, but I kept my mouth shut about Bohr. Bohr had his inspirations, but he never articulated them so boldly. He solemnly placed the taijitu on his family coat of arms because he believed in this worldview, despite being unable to refine the concept of non-locality as completely as Einstein did. But this tells us that a new paradigm had begun, not only with the development of quantum mechanics, but actually as early as 2,000 years ago.

Later, when I read the Bible, this worldview was again discernible. For example, the Bible says, “since what is seen is temporary, but what is unseen is eternal” (2 Corinthians 4:18). On the surface, can you understand what it means? That which is invisible is eternal. A few years ago, I was at Tsinghua giving a lecture on paradigms in quantum mechanics, and some of my colleagues in the audience said that what I described was actually Buddhism. At first, I was very unhappy. I was clearly talking about science. How could someone mistake it for Buddhism? I had never even studied Buddhism. Later on, a few more people kept saying the same thing, people who were really in the know, with high levels of education, among them even a Vice-President of China National Petroleum Corporation. Since they kept saying this, I asked them to send me a Buddhist scripture to look over and see if it had any relation. They sent me the Diamond Sutra. I summoned up my courage and set upon reading it.

As I read, I found that while I didn’t exactly understand everything, I got a clear sense that Siddhartha Gautama’s perception of the world was certainly non-local. Initially I was completely unable to understand many of his conclusions, such as “The ten thousand things are defective.” What is that supposed to mean? He believed that the world that you see is fake. And what is real? Buddhism believes that “emptiness,” or kong, is real and is the essence of the world. In the novel Journey to the West, the Monkey King’s name, Sun Wukong, is a nod to this concept, just like the saying “existence and emptiness are one and the same.” At first, these propositions seem impossible and paradoxical. How can emptiness, or kong, be the essence of the world? From Einstein’s perspective, emptiness is not the absence of things; rather, it is non-locality. Although Buddhism placed greater emphasis on the concept of “emptiness,” non-locality really is the essence of the world’s existence.

He believed, if we put it in terms of the paradigm of quantum mechanics, that the spatially defined world of being, or you, that we witness arises only by chance, because its existence requires so many conditions; most of the time, the conditions required for obtaining spatially defined information are not met. Many people are now becoming aware that 95% of the universe is made up of dark matter and dark energy, and only about 5% is made up of stuff we can actually see. It’s true that we don’t have the ability to conclusively confirm whether this is related to quantum non-locality, but it does seem to match the idea that what we can see is only a small part of the entire world. In the electron experiment discussed earlier, likewise, the dot could appear only because certain conditions were met when the electron hit the screen. The majority of states are non-local. Buddhism calls it emptiness, or kong, and Lao Tzu calls it nothingness, or wu. According to Lao Tzu, the state of nothingness is in fact the most fundamental state of the universe: a state of non-local quantum superposition.

If we extend the concept of non-locality, the seemingly strange phenomenon of quantum entanglement becomes very natural, as many people now believe; it also has many uses, including quantum communication. Einstein was initially opposed to entanglement; let’s talk about the reasons behind that. Actually, you will easily arrive at the conclusion that entanglement exists if you just consider non-locality in conjunction with a conservation law, for example, the conservation of angular momentum. It’s quite simple. Consider electron spin, and remember that this physical quantity is non-local: an electron’s spin points both up and down at once, coexisting in a state of superposition. This non-locality is the nature of quantum particles. If the spin could only exist in one state, that would defy the nature of quantum particles, so the natural state is one in which the spin is simultaneously both up and down, which is to say, a superposition.

Up to this point, this is all well and good, but what if two electrons interact with each other? And what if the interaction between the two electrons is governed by a Hamiltonian that conserves angular momentum? That is to say, the total angular momentum cannot be changed by the interaction. Be aware that a conservation law cannot be violated, whether it is conservation of momentum, conservation of angular momentum, or conservation of energy. If even one were violated, the entire edifice of physical theory would be destroyed, because the conservation laws are among its cornerstones. Because there has been no indication that conservation of angular momentum can be violated, it must be obeyed. Finding a Hamiltonian that conserves angular momentum is easy, and I have done it myself. Two electrons interact, and I can guarantee that initially the total spin is zero. If the laws of conservation are to be obeyed, the total spin must remain zero after the interaction as well.

On the surface, it would appear that this is impossible. After the interaction, since the entire system is non-local, each spin can simultaneously face both up and down. If both face up, the two spins of 1/2 add to 1; if both face down, they add to -1; neither combination equals 0. In this way, if the two interacting electrons form a non-local system, it seems that conservation of angular momentum would be violated. Early on, Einstein had noticed similar problems (the so-called EPR pair).

But nature is very mysterious. There is exactly one state in which both conditions can be satisfied, in which non-locality is present while the conservation law is also guaranteed. It is very simple: entanglement. How does it work? The condition of non-locality must be met, with each electron simultaneously directed both up and down. But how can the system maintain a total spin of zero? The definite value forms only when the first spin is measured. If the first measurement shows that the spin points up, then the second spin must point down, keeping the total at zero. Similarly, the first spin might be measured pointing down, in which case the other must point up, again guaranteeing that the total spin component is zero. This is what we call quantum entanglement.
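The bookkeeping above can be checked in a few lines. This is a toy representation I am introducing for illustration (a dictionary of amplitudes, not a physics library), encoding the entangled state in which the two spins are anti-correlated:

```python
import math

# Toy representation (illustrative only): the two-electron singlet state,
# written as amplitudes over the four joint outcomes (first spin, second spin).
singlet = {
    ("up", "down"): 1 / math.sqrt(2),
    ("down", "up"): -1 / math.sqrt(2),
    ("up", "up"): 0.0,      # total spin would be +1: forbidden by conservation
    ("down", "down"): 0.0,  # total spin would be -1: forbidden by conservation
}

def prob(outcome):
    """Born rule: probability of a joint measurement outcome."""
    return singlet[outcome] ** 2

# Each electron alone is non-local: its spin comes out up or down,
# each with probability 1/2.
first_up = prob(("up", "up")) + prob(("up", "down"))
print(round(first_up, 10))  # 0.5

# Yet the outcomes are perfectly anti-correlated, so the measured total
# spin is always zero, exactly as conservation demands.
same_direction = prob(("up", "up")) + prob(("down", "down"))
print(same_direction)  # 0.0
```

Note the two conditions the text describes are both visible here: each spin individually is undetermined (non-local), while the joint outcomes never violate the conservation of total spin.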

Entanglement is a natural state. Humanity has misunderstood entanglement, believing that it is something that can only be produced in a laboratory, but that is not the case. Looking at it from this perspective, entanglement is ubiquitous, because the conditions for it are extremely common: if the first particle is non-local, then as long as the pair obeys a conservation law, the two become entangled. Entanglement would then appear to contradict the Theory of Relativity, because once entanglement occurs, there is no requirement for how close together or far apart the two particles must be. Theoretically, there is no limit to how far apart we can pull them; you could even separate them across entire galaxies, and it would still work. Einstein thought that this was incorrect. If you measure a particle here and its spin is up, it should affect the other entangled particle over there in a split second, without any delay; if there were any delay, a conservation law would be violated. The instant we measure the particle here to be spin-up, the other particle over there must be spin-down. Is this possible? Einstein’s Theory of Relativity was founded on the principle that nothing can exceed the speed of light; this is the starting point for his concept of space-time. So Einstein strongly opposed the idea of entanglement, which is why people ultimately assumed that he opposed quantum mechanics in its entirety.

First of all, innumerable experiments have proven that there is no problem with entanglement. Entanglement has even begun to be used in practical applications. Jian-Wei Pan’s team has managed to maintain a state of entanglement between a satellite and a ground station. This is, of course, very high-tech: because the photons must traverse the earth’s atmosphere, the entanglement is fragile, and any stray interaction along the way would immediately break it.

I was confused about this for many years, but the resolution is ultimately not just my personal conclusion; it is universally accepted. When Einstein said that nothing could travel faster than the speed of light, he was talking about spatially defined, deterministic information; that was his basic premise. When the Special Theory of Relativity was first proposed, quantum mechanics didn’t exist yet. He wasn’t paying particular attention to this detail, but was only concerned with information about what the Buddhists might call “being,” or you. On second look, you should have a very clear sense that the information that would have to travel is spatially determined, and Einstein’s entire concept of space-time is based on determinism. Any point in four-dimensional space-time, written out as coordinates such as x and t, is deterministic; we will revisit this concept later.

Spatially defined space-time and deterministic information were the great premises of Einstein’s Theory of Relativity, but what a lot of people fail to realize when they discuss this problem, and what Einstein himself also failed to take into account, is that entanglement cannot transmit spatially defined information. Quantum spin simultaneously faces up and down. Its randomness is essential, present prior to measurement, not randomness introduced by the measurement itself. The fundamental point is that the spin is essentially random, existing in a state where it faces both up and down.

Entanglement itself carries no defined spatial information, to say nothing of transmitting it faster than the speed of light. It can be strictly proven that entanglement does not violate the theory of relativity. Otherwise, Jian-Wei Pan’s experimental satellite, Micius, would have upended the Theory of Relativity and set off an explosion in physics circles. Einstein’s Theory of Relativity wasn’t upended at all. Those who initially opposed entanglement were wrong, and Einstein himself misunderstood this point.
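Why no usable information arrives faster than light can be made concrete with a minimal sketch (my own illustration of the standard no-signaling argument, using the anti-correlated pair discussed above): whatever one side measures, the statistics the other side sees locally are unchanged.

```python
# No-signaling sketch (illustrative): joint outcome probabilities for the
# anti-correlated entangled pair, written as (this side, far side).
joint = {("up", "down"): 0.5, ("down", "up"): 0.5}

def far_side_marginal(joint_probs):
    """The probabilities the distant observer sees, ignoring the (unknown)
    result on this side."""
    marginal = {"up": 0.0, "down": 0.0}
    for (near, far), p in joint_probs.items():
        marginal[far] += p
    return marginal

# The distant observer sees a 50/50 coin flip -- exactly the distribution
# they would see if no measurement had been made on this side at all.
# No spatially defined information has traveled.
print(far_side_marginal(joint))  # {'up': 0.5, 'down': 0.5}
```

The correlation only becomes visible later, when the two records are brought together and compared, and that comparison travels no faster than light.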

Next, we will talk about what is, for our purposes, the most fundamental question: what does this have to do with computational thinking? But first, there is an obstacle that needs to be addressed. There is a popular notion that quantum mechanics represents the quantum world, that we humans live in the macroscopic world, and that these two worlds are unrelated.

Frankly, that makes me very uncomfortable. The reason why I love physics is that I believe it can provide us with something ultimate. The universe is certainly a coherent, unified world. You can’t say that one part of the world is its own world, completely unrelated to its counterpart, or assert that the two worlds are in conflict with each other. In the macroscopic world, it is true, most of what we see looks at odds with the concept of non-locality, a paradox embodied by Schrödinger’s cat, the thought experiment in which the cat’s life and death are held in superposition. The cat’s life and death both exist at the same time? On the surface, you would never see such a thing in the macroscopic world.

I believe that the greatest misconception is the belief that the macroscopic and the microscopic are two different worlds that do not interact. Quantum mechanics has been thoroughly verified, but the mainstream believes it applies only to the subatomic world and is unrelated to the macroscopic world. I do not agree. I believe that quantum phenomena definitely exist in the macroscopic world. Ten years ago, especially during 2008, I suddenly became aware of quantum non-local phenomena in the macroscopic world. On the surface, we don’t see anything that looks like a non-local phenomenon, but I suddenly realized that the principle of maximum entropy is, in reality, a macroscopic effect of quantum non-locality.

The experiment in the image above shows diffusion. It is common knowledge that if you drop ink into a cup of water, it will ultimately diffuse without the need to stir it. It is certain to move towards a state of maximum uniformity and maximum disorder; you don’t need to stir it at all, you just need to wait. Thermodynamics interprets this as the principle of maximum entropy, which says that a closed system will ultimately move towards maximum disorder and lose its spatially defined information: a state of maximum entropy. Why does thermodynamics call this a “principle”? Because there was previously nothing that could prove it; it was simply taken as a logical basis of thermodynamics, and no one had established a relationship between it and quantum mechanics. So the principle of maximum entropy remained a principle. Physics is actually pretty miserable in this respect, because it has been chopped up into separate, unrelated pieces, and when each piece is explained, it is as if it has its own principle, without any connection to the other scientific principles.
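The ink-drop picture can be simulated in a few lines. This is a purely classical illustration of the entropy principle itself, not of its claimed quantum origin (the ring size, particle count, and step count below are my own assumptions): particles start concentrated at one spot, random-walk freely, and the entropy of their distribution climbs toward its maximum.

```python
import math
import random

# Classical sketch of the ink drop (illustration of the maximum-entropy
# principle, not a quantum simulation): particles start at one site of a
# ring and random-walk; the Shannon entropy of the occupation distribution
# rises from zero toward its maximum, log(SITES), the uniform mixed state.
random.seed(0)
SITES, PARTICLES, STEPS = 20, 2000, 800
positions = [0] * PARTICLES  # all "ink" concentrated at site 0

def entropy(positions):
    counts = [0] * SITES
    for p in positions:
        counts[p] += 1
    return sum(-(c / PARTICLES) * math.log(c / PARTICLES)
               for c in counts if c)

initial = entropy(positions)  # perfectly ordered, spatially defined
for _ in range(STEPS):
    # each particle steps left, right, or stays, with equal probability
    positions = [(p + random.choice((-1, 0, 1))) % SITES for p in positions]
final = entropy(positions)

print(initial)                    # 0.0
print(round(math.log(SITES), 2))  # the maximum: log(20) ~= 3.0
print(final > 2.9)                # the final entropy sits near that maximum
```

No stirring is programmed in; uniformity emerges from nothing but undirected random motion, which is exactly what the principle of maximum entropy asserts.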

In 2008, I intuited that this is actually a quantum non-local phenomenon. Why does the system ultimately move towards a state of diffusion? Ultimately, it is non-locality at work. Initially, the drop of ink is spatially defined, existing at a set place in space-time; in the end, it naturally moves towards diffusion. With this explanation alone I wouldn’t be able to persuade anybody, but what I find comforting is that, through continuously gathering evidence under Mr. Zhang Li’s mentorship, the body of theoretical and experimental evidence is growing.

Starting in 2009, I continually came across new results in theoretical physics. Luckily, I was not the only one thinking this way; many people were working hard in this direction.

Figure 9 This article proves theoretically that a subsystem of a pure quantum system reaches thermodynamic equilibrium through quantum evolution alone, while the environment (the rest of the pure system) need not reach thermodynamic equilibrium.

Entanglement might be able to explain the arrow of time. One way of thinking about it was raised by the previously mentioned Professor Seth Lloyd, who proposed it ten years before I did. Seth Lloyd was originally a physics major at Harvard. Later, he wrote the famous book, Programming the Universe, and is now a professor at MIT.

So later, a research group was created, and for close to ten years, they chipped away at the task of proving that thermodynamic equilibrium really does have a connection to quantum entanglement. I found their latest results published in this paper, and its conclusion is very clear. Their model was a pure quantum state with entanglement. Mathematically, they proved that if you take only a part of the entire entangled system (a part smaller than the rest), then you can deduce from quantum evolution that the small part is certain to ultimately move towards thermodynamic equilibrium. They weren’t able to prove that this is a state of maximum entropy, but it will certainly arrive at an ultimate equilibrium state. In addition, they had a particularly inspired conclusion: they proved that even when this part of the system has reached so-called thermodynamic equilibrium, the remaining environment does not necessarily reach, and need not reach, a state of thermodynamic equilibrium. From this, we can see that the universe will not end in heat death. Originally, when the principle of maximum entropy came out, what troubled humanity most was that if the entire universe were heading towards a state of maximum entropy, wouldn’t we all be doomed? We would all just be a pot of hot soup, ultimately moving towards self-equalization and self-disorganization. But in reality, it appears that the universe does not evolve in such a way.

Previously, this troubled classical physicists, so at least Noah Linden and his group already have evidence that if a subsystem has reached thermodynamic equilibrium, the environment does not need to also reach equilibrium. So-called heat death will not happen.

Even more apt experimental results came from Harvard last year (2016). The people at Harvard are pretty awesome; they have made several breakthroughs in the past couple of years on questions similar to this one, and have even begun to extend their studies into the area of life. This experiment was even more straightforward. The group simply prepared six entangled particles and then examined a subsystem of three. Without any other predetermined conditions, they showed that the subsystem of three particles reached a state of maximum entropy relying on quantum evolution alone. As long as entanglement is present, the principle of maximum entropy will emerge from quantum non-locality and uncertainty.

Figure 10 Further experimental progress by a group at Harvard University in 2016: Quantum thermalization through entanglement in an isolated many-body system.

This image was used by the group to interpret the results. Basically, S_AB is the total entropy, while S_A and S_B are the entropies of the two separate parts: one group of three of the six particles is A, and the other group of three is B. The first tests were done on group A, which ultimately reached maximum entropy. But notice that when the subgroup had reached maximum entropy, the overall entropy of the quantum system was zero. How did they explain this? By quantum entanglement. As the size of the test group grows, the number of entanglements it contains grows accordingly, and with it the mutual information I_AB. The overall entropy S_AB, which is the sum of the parts minus the mutual information, tends towards zero, so the whole remains in a pure state. There is now a view that the entire universe exists in a pure state, that what we see is just one portion of it, and that the principle of maximum entropy is established within an isolated part of the system. This has not been rigorously tested, but I believe it is true. And I feel very grateful, because I have witnessed the resolution of a problem that confused me for over ten years. Even if it was not me who solved it, other people are working hard for the progression of science; the riddle is being solved, bit by bit, through everyone's computations. The most fundamental thing this tells us is that single quantum non-locality exists not only in the subatomic world; it should also be the foundation of the entire paradigm. At least the principle of maximum entropy in the so-called visible, or macroscopic, world is related to quantum non-local entanglement. But there are still more questions to answer. Why does the visible world have locality and entropy reduction? In the natural world we see plant and animal life, the results of entropy reduction. These questions stumped me for many years.
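The subsystem-versus-total entropy relation described above can be checked numerically on the simplest entangled system, a two-qubit Bell pair standing in for the six-particle experiment. This is a sketch of ours, not the group's own analysis: the total state is pure (S_AB = 0) while each half is maximally mixed (S_A = S_B = ln 2), and the difference is carried by the mutual information.

```python
import math

def entropy(eigenvalues):
    """Von Neumann entropy S = -sum(p ln p) over density-matrix eigenvalues."""
    return -sum(p * math.log(p) for p in eigenvalues if p > 1e-15)

# Density matrix of the two-qubit Bell state (|00> + |11>)/sqrt(2)
# in the basis {|00>, |01>, |10>, |11>}.  It is a pure-state projector,
# so its eigenvalues are {1, 0, 0, 0} and the total entropy is zero.
rho_AB = [[0.5, 0.0, 0.0, 0.5],
          [0.0, 0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 0.0],
          [0.5, 0.0, 0.0, 0.5]]
S_AB = entropy([1.0, 0.0, 0.0, 0.0])

# Partial trace over B: (rho_A)_{ij} = sum_k rho_{(i,k),(j,k)}.
rho_A = [[sum(rho_AB[2 * i + k][2 * j + k] for k in range(2))
          for j in range(2)] for i in range(2)]
# rho_A comes out as diag(1/2, 1/2), the maximally mixed state, so the
# subsystem entropy is the maximum possible for one qubit: ln 2.
S_A = entropy([rho_A[0][0], rho_A[1][1]])
S_B = S_A                      # by symmetry of the Bell state

I_AB = S_A + S_B - S_AB        # mutual information: 2 ln 2
```

The part looks thermal (maximum entropy) even though the whole is in a zero-entropy pure state, exactly the pattern the Harvard figure describes.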

When I first studied thermodynamics, the principle of maximum entropy was undoubtedly the most glorious principle, because it explains almost every concept within thermodynamics. But once you step outside the field of thermodynamics, it seems like it can't be right, because what we see in reality is a world of entropy reduction and locality. You can talk about quantum non-locality for days, but why is the world we see certain? Why is it local?

Recently, I wrote an article with Rui Ziwen, Zhang Kai, and others. Using Heisenberg's uncertainty principle, we concluded that when two conjugate observables of any quantum wave function, such as position and momentum, or two orthogonal components of an electron's spin, are each measured and their measurement entropies are summed, the sum must be greater than or equal to ln 2. We named it the quantum minimum entropy principle. What does this mean? It means that for any pair of conjugate quantum physical quantities, the minimum uncertainty (the sum of the entropies) of their measurements cannot be less than ln 2. It means that information originates from Einstein's single-quantum non-locality.
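The ln 2 lower bound can be illustrated numerically for a single qubit, taking spin-z and spin-x as the conjugate pair. This is a sketch of ours using the standard entropic-uncertainty setup, not the article's own derivation: scanning over states, the sum of the two measurement entropies never dips below ln 2.

```python
import math

def shannon_entropy(probs):
    """H = -sum p ln p, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 1e-15)

def entropy_sum(theta):
    """Sum of the measurement entropies of spin-z and spin-x for the
    real qubit state (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    h_z = shannon_entropy([c * c, s * s])
    # Probability of the |+> outcome when measuring in the x basis,
    # clamped to [0, 1] to guard against floating-point overshoot.
    p_plus = max(0.0, min(1.0, (c + s) ** 2 / 2))
    h_x = shannon_entropy([p_plus, 1.0 - p_plus])
    return h_z + h_x

# Scan states over the whole range: the entropy sum never drops below ln 2.
worst = min(entropy_sum(k * math.pi / 1000) for k in range(1000))
```

The minimum is attained at the basis states themselves: perfect certainty in one observable (entropy 0) forces maximal uncertainty (ln 2) in its conjugate.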

The concept of information, surprisingly, is quite different in ordinary life and in Shannon's theory. In ordinary life, information usually means knowledge we have already acquired; it is past tense. In Shannon's theory, information means knowledge we could still obtain; it is future tense, and it is quantified by entropy.

For example, suppose in a big box either a lion or a monkey will appear, each with probability 1/2. Then that box contains ln 2 of information (entropy) in Shannon's sense. In another box, one of a lion, a monkey, a tiger, a cat, a wolf, or a duck will appear, each with probability 1/6; that box contains ln 6 of information (entropy) in Shannon's sense. Certainly, there is more Shannon information in the second box.
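The two boxes can be computed directly: Shannon's information for n equally likely outcomes is just the entropy of a uniform distribution, which works out to ln n.

```python
import math

def shannon_info(n_outcomes):
    """Entropy of a uniform distribution over n equally likely outcomes,
    in nats: H = -sum (1/n) ln(1/n) = ln n."""
    p = 1.0 / n_outcomes
    return -sum(p * math.log(p) for _ in range(n_outcomes))

box_1 = shannon_info(2)   # lion or monkey: ln 2, about 0.693 nats
box_2 = shannon_info(6)   # six possible animals: ln 6, about 1.792 nats
```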

So, if there were no Einstein single-quantum non-locality, there would be no Shannon information and no entropy.

This opens up room for what comes later with Maxwell's demon, which requires that to obtain the certainty of one bit, we must dissipate a quantity of heat equal to kT ln 2 (T = environmental temperature, k = Boltzmann constant).

Realistically, the logical next step follows from this: the physical principle of computational thinking, the so-called Maxwell's demon. Although these are seemingly unrelated fields, everyone's striving has produced a mutually recognized result. What is Maxwell's demon? It explains the confusions I mentioned before. Why is the visible world local? Why do we see so many instances of entropy reduction? Could it be related to a conflict between quantum non-locality and the principle of maximum entropy? In terms of computation it is also very simple. The question that computation answers is this: whatever physical system we rely on, computation ultimately obtains definite bits, and we should treat the final result as a certain bit. Why do we say this is Maxwell's demon? Because the overall reasoning follows from the solution to the problem of Maxwell's demon. Maxwell's demon itself is very simple. Why is it called a demon? James Clerk Maxwell discovered that the principle of maximum entropy contains a paradox. On the surface, we can picture the principle of maximum entropy as a gas chamber that ultimately reaches an even temperature and an even density. He disagreed, asking: if a demon existed, and there was a partition between the two chambers with a small hole in the middle, what would the demon do? It would let the high-speed molecules through and leave the slow ones behind. If a demon like this really existed, the chamber would quickly be split into a warm side and a cold side, where the speed of the molecules on one side would be greater than on the other. With one side hot and one side cold, a heat engine would be formed. Once the temperature became uneven, everyone knows that it could then perform external work, thus forming a perpetual motion machine of the second kind and violating the second law of thermodynamics (directly contradicting the principle of maximum entropy).

On the surface, this seems absurd, which is why Lord Kelvin mockingly dubbed it "Maxwell's demon." What is strange is that although this demon seems beyond belief, physicists spent more than 100 years unable to prove that it does not exist. You say that it violates the principle of maximum entropy, but which law of physics can you use to prove that the demon doesn't exist? Embarrassingly, it couldn't be done, so the demon continued to exist, right up until R. Landauer and C. H. Bennett arrived at the IBM laboratories. They were originally studying the physical principles behind computation, but happened upon the answer to the question of Maxwell's demon in the process.

Figure 12 Landauer Principle: If it is necessary to irreversibly eliminate the uncertainty of one bit, then it is necessary to dissipate energy of at least kT ln 2.

The two of them are undoubtedly very awesome. The IBM laboratory is noteworthy in that it has produced six Nobel Prize recipients. The solution to these questions became a cornerstone of the entire computer science discipline: it fundamentally solved the physical principles of computing. The final model is extremely simple (see Figure 12 above). The single-atom computer is the simplest computer, containing just a single atom. Of course, now that we have a quantum, non-local worldview, we know that the initial state must be non-local. There is a partition between the two sides, and we don't know whether the atom is to the left or to the right; in general, the probability is equally distributed between both sides.

So what is computing? The simplest kind of computing requires that a process be carried out. I want to obtain one bit of certainty, which will determine whether the atom is on the left or the right. One bit is the smallest possible unit of certainty and cannot be divided any smaller. The result of this computation is that I ultimately make the atom go to one certain side. The rest is very simple. What does Maxwell's demon do? A demon like this must exist. At the outset we have quantum non-locality, where the particle exists on both the left and the right. You want to make it exist on one certain side, so you definitely need this demon. Isothermal compression and the remaining thermodynamic processes are very simple. The gas is compressed until finally (Figure 12, image d, above) it is squeezed into a certain location, and you obtain one bit of certainty. But be aware that the entire process must dissipate heat; obtaining certainty without heat dissipation is impossible. This is known as isothermal compression.

Use an air pump for a while and it will start to heat up. When I was little, I mistook this for the result of friction. Later, when I really started to study physics and thermodynamics, I realized that even in the absence of friction, the isothermal compression of a gas must dissipate heat. This is a necessity. If you intend to obtain the certainty of one bit by eliminating the corresponding quantity of non-locality, the Landauer principle says that a minimum energy of kT ln 2 must be dissipated (T = environmental temperature, k = Boltzmann constant).
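Plugging in numbers makes the bound concrete. At room temperature the Landauer cost of one bit is tiny but strictly nonzero (the choice of T = 300 K is ours, for illustration):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature in kelvin (our illustrative choice)

# Landauer bound: minimum heat dissipated to irreversibly obtain
# (or erase) one bit of certainty.
E_min = k_B * T * math.log(2)   # about 2.87e-21 joules per bit
```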

We finally ask a question: can everyone imagine why the earth's environment is low entropy? In reality, the chief demon is the sun. Programming The Universe says that the sun is the master Maxwell's demon. Without it, all spatially defined locality, including entropy reduction, could not exist. The reductio ad absurdum logic the book uses is actually very simple. Imagine there was a switch and tonight we turned off the sun, so that tomorrow morning the sun did not come out. Ask yourselves, how would our world evolve? The only thing remaining would be the principle of maximum entropy. We would all perish, all living things would die out, and the entire earth would ultimately move towards an even temperature and disorder.

The sun is the ultimate Maxwell's demon. This shows us that the line in the popular song, "Life and growth depend on the sun…", is not just an artistic expression but a scientific fact. Only in this way can any entropy-reducing process, including locality, happen on earth.

Now that we have arrived at this point and we have Maxwell's demon, everyone should be able to imagine why the two strands of computational thinking conveniently appear. First, why do we want recursion? I have discussed this with Professor Gu Xueyong of the Tsinghua Department of Automation. When we talk about so-called complex problems in computer programming, how can the complexity of such problems be expressed? Professor Gu says that the simplest computational step is nothing more than one compression by Maxwell's demon. So-called recursion is the question of how many such steps it takes to solve a problem. The Maxwell's demon of the Landauer principle is metacomputing; recursion expresses the complexity of the problem.

For the first time, this allowed mankind to quantify the complexity of a problem. The process of Maxwell's demon is metacomputing, the simplest computing step: each step obtains one bit of certainty and erases one bit of information, and it cannot be divided any smaller. The recursion of a complex problem relies on Maxwell's demon being carried out one step after another. In theory, you could use it to solve any complex problem; the only question is how many times the recursion would have to be repeated.
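Binary search is a familiar illustration of this counting (the example is ours, not from the lecture): each comparison is one metacomputing step that yields one bit of certainty, so locating one item among n takes about log2 n repetitions of the same elementary step.

```python
def bits_to_locate(n_cells):
    """Number of one-bit metacomputing steps needed to pin down one
    cell among n: the smallest k with 2**k >= n."""
    steps = 0
    while 2 ** steps < n_cells:
        steps += 1
    return steps

def binary_search(items, target):
    """Each comparison is one Maxwell's-demon compression: it halves
    the remaining uncertainty about where the target sits."""
    lo, hi, comparisons = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

idx, steps = binary_search(list(range(1024)), 700)
```

The complexity of the problem is exactly the number of times the elementary step must recur: for 1024 cells, ten bits of certainty suffice.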

Realistically, the model of Maxwell's demon can easily explain why we must do parallel calculations and why nature chooses distributed over centralized operation. It is simple if you do some math. Imagine that all of the nodes (in Figure 14 above) are active. The simplest model is to imagine each one is a single-atom computer whose processor must handle one binary decision each second. Centralized decision-making, however, does not allow the nodes to make decisions: it strips them of decision-making permissions, and only the central point is able to decide. What results would this produce? It's simple; if you calculate it, you will know.

In Figure 14 above, if the system has N nodes and each node handles one binary decision per second, then the system processes N bits per second, corresponding to Nk ln 2 of entropy per second. But if you rely solely on the central point to complete the decision-making, it is as if only the central node has the ability to carry out compression, while the other nodes have neither compression abilities nor decision-making permissions. When only the central node can compress, the factor of N no longer sits out front as a simple sum: the central processor must distinguish among all the joint states of the N nodes, so N moves into the exponent and the workload becomes of order 2 to the power of N. The difficulty immediately escalates from additive to exponential.

Physically, it is easy to calculate. Even if the processor at each node is as big as a prokaryotic cell (10^-6 meters), then once the number of nodes surpasses 50, a small calculation shows that to keep the system's entropy from increasing, the central point would have to compress at a rate greater than the speed of light. Einstein surely is correct: the theory of relativity determines that the ability of centralization to maintain the entropy reduction of the entire system is extremely low. It couldn't handle even 50 nodes and would be unable to guarantee that system entropy does not increase.
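The lecture does not spell out the exact model behind this estimate, but a plausible back-of-envelope reconstruction (our assumptions: a micron-sized piston swept once per joint state, one decision round per second) already crosses the speed of light at N = 50:

```python
CELL_SIZE = 1e-6    # prokaryote-scale processor, metres (from the lecture)
C = 2.998e8         # speed of light, m/s
N = 50              # number of nodes

# Distributed: each of the N nodes sweeps its own micron-sized piston
# once per second -- a micron per second, nowhere near any limit.
node_speed = CELL_SIZE * 1.0

# Centralized: a single processor must resolve all 2**N joint states
# of the nodes each second, sweeping its piston once per state
# (this one-sweep-per-state model is our assumption).
central_speed = (2 ** N) * CELL_SIZE

faster_than_light = central_speed > C   # True: ~1.13e9 m/s > 3e8 m/s
```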

But parallel computing, which in computational thinking we refer to as decentralization, does not suffer this problem, because each node is able to compress on its own, without any limitation. Isn't this ability to maintain entropy reduction also a phenomenon that occurs in the natural world? A simple example is the computational thinking of flocking birds. Since I was a little boy, I never understood flocking birds and wondered whether they had a commanding officer in charge of the flock. There is no such commanding officer.

Figure 15 http://pic.people.com.cn/n/2014/0213/c1016-24348115-3.html

This is clearly low entropy. The heart shape happened to be photographed by a reporter, but flocks of birds make all kinds of other shapes that we can't even imagine, all low entropy. If the flock were high entropy, it would look like an ordinary gas, the birds evenly distributed in the air in all directions, but it is not.

Later, I did some research on flocks of birds. Their rules of computational thinking are extremely simple: recursion and parallelism. How simple is the birds' basic protocol? It is nothing more than two conditions. The first is that they do not collide with each other, and the second is that the distance between them cannot exceed half a meter. Each bird is the equivalent of Maxwell's demon. Once the distance exceeds half a meter, the bird's IQ is able to sense this fact, allowing it to adjust the distance back to within half a meter. This is just like the compression of Maxwell's demon.

Each bird is just that simple. Its IQ is metacomputing, which realistically isn't much more advanced than Maxwell's demon. But the whole system makes you gasp in amazement as it maintains low entropy and innovates. Because it is not predictable, it can only be said to maintain low entropy; it is certainly not required to take any particular shape.
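A minimal one-dimensional sketch of this protocol (the 0.1 m collision distance and the exact update rule are our assumptions; real flocking models such as Reynolds' boids are richer) shows a scattered line of birds settling into a low-entropy chain using only the two local rules:

```python
import math

MAX_GAP = 0.5   # birds must stay within half a metre of a neighbour
MIN_GAP = 0.1   # hypothetical collision-avoidance distance (not in the text)

def step(positions):
    """One parallel update: each bird looks only at its nearest
    neighbour and nudges its own position -- one Maxwell's-demon
    compression per bird per tick, with no central commander."""
    new = list(positions)
    for i, x in enumerate(positions):
        # nearest neighbour of bird i
        j = min((k for k in range(len(positions)) if k != i),
                key=lambda k: abs(positions[k] - x))
        gap = positions[j] - x
        if abs(gap) > MAX_GAP:          # too far: close half the excess
            new[i] = x + 0.5 * (gap - math.copysign(MAX_GAP, gap))
        elif abs(gap) < MIN_GAP:        # too close: back off
            new[i] = x - 0.5 * (math.copysign(MIN_GAP, gap) - gap)
    return new

def max_nearest_gap(positions):
    """Largest distance from any bird to its nearest neighbour."""
    return max(
        min(abs(positions[k] - x) for k in range(len(positions)) if k != i)
        for i, x in enumerate(positions)
    )

flock = [0.0, 2.0, 4.5, 9.0]   # scattered birds on a line
for _ in range(200):
    flock = step(flock)
```

After a couple of hundred ticks every bird sits within half a metre of a neighbour: the flock's low entropy emerges purely from parallel, local metacomputing.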

Now the conclusion is pretty much established, and we will stop here. Why does the universe select computational thinking in basically every field? Because this is determined by the basic material existence of the entire universe: non-locality and entanglement, including maximum entropy. The mechanism of Maxwell's demon allows for entropy reduction. These can all be combined together. So we see that computational thinking will certainly be able to explain many fields; it is the most fundamental mode of operation for the universe and its systems.

The smart economy likewise uses computational thinking to research the concepts that inevitably arise from the new internet economy.