Today we are going to look at how Libertarianism can solve the ethics problem that the authors' paper poses but never answers. Not even hints at.

THEIR WAY

Defining the problem

According to the authors, this is the situation that designers of AGIs face:

The specific behavior of an AGI cannot be predicted even when all the processes that AGI uses to find an answer are working properly.

Verifying the ethics of every solution is challenging; we must understand the process that produces ethical solutions, since testing each ethical solution individually solves nothing.

Ethical ways of working must be engineered into the AGI.

And according to us, this is what designers should do:

Define ethical behavior in a manner that is limited by the laws of physics, in order to make it objective.

Apply such a definition in the context of a problem, not in the process of finding a solution.

It seems easier said than done, but it is not. Let's see.

Ethics

At the most fundamental level, ethics is simply a philosophy that defines what is "good" and, by contrast, what is "bad". Yes. It is that simple. The problem with the concepts of "good" and, by extension, "bad", is that they are subjective. What philosophers have been trying to do since… well… ever, is to create the ultimate definition of "good" so that a universal ethics can be defined and hopefully adopted by every human. Problem is, they have failed miserably. Their early attempts have yielded diametrically opposed points of view, each as valid (or invalid) as the other. Take for example the definition of ethics under Socialism, "what is good for society as a whole is good for a person", and a similar definition under Libertarianism, "what is good for a single person is good for society". See what we mean?

To be fair, philosophers have managed to push through these boundaries into a more palatable (and common-sensical) theory, which is Ethical Relativism. This theory states that morality is relative to the norms of one's culture. That is, whether an action is right or wrong depends on the moral norms of the society in which it is practiced.

Close, but not enough.

The Nature of Ethics

Furthermore, the authors tell us that it is necessary to come to a deep understanding of the very nature of Ethics if we are to build ethical AGIs. They challenge us to investigate the very rules on which ethics operates, in the same manner as we understand the rules of chess. The problem is, of course, that Ethics in and of itself is an abstract concept. It only makes sense when we develop a specific ethical philosophy by defining what is good or bad. For example, it is all well and good to define chess as a "game composed of pieces with different values which move over a limited board with varying movements". But this definition tells us little to nothing about chess. We have the same problem with the definition of Ethics, which reads "moral principles that govern a person's or group's behavior".

Until you actually specify the type of chess you are playing (2D, 3D, chaturanga, shatranj, xiangqi, shogi or any of the other 2000-odd variants that we know of), we actually know very little about what the nature of "chess" really is.

Similarly, until you create an ethical philosophy actually defining what is "good", we will know very little about the nature of "ethics". And even if we have such a definition, there are mountains of questions and doubts that cannot be answered using the original definition alone. It is for this reason that we have "Ethics Committees". If these questions were easy to answer, we would not need committees! Think, for example, about the terms "capitalist bourgeoisie" and "proletariat". In a sense, any member of the proletariat is a capitalist, because they save money (i.e. capital) and use it to improve their lives. In a sense, every "capitalist" is actually a member of the "proletariat", because they work and earn a profit from their labour. Thus, to have a philosophy stating that "good" is defined as "what is good for the proletariat" and, by extension, "what is bad for the capitalist", is utterly nonsensical. So much for internal consistency in ethics.

The Way not to Engineer Ethics into AGIs

And so, when we put all this together, this is what we should do (according to the authors):

Pick a specific ethical philosophy that everybody agrees upon.

Clearly and unquestionably define all the rules under which such an ethics operates.

Program those rules into AGIs in such a way that even their own evolution won't alter them.

In light of all that we have seen before, does this sound like something even remotely doable? Or does it, on the contrary, sit on the verge of the ridiculous, teetering over the abyss of the delusional?

We must always remember that programming AGIs is not a philosophy where semantic battles can be fought, but an actual engineering process where actual programmers actually program code into computers. The former requires clever writing while the latter requires accuracy, discipline and, above all, logic and common sense.

What do you estimate the chances of success of such an endeavour to be? Thought so…

The Physical Reality Tool

Every conceivable physical interaction of AGIs with humans is… well… physical in nature. Thus, it can be codified in actual physical terms.

Think about it. Even most (if not all) feelings are revealed through micro-expressions, temperature, breathing, blinking and so on… all of which can be codified as parameters in face, body or movement recognition patterns. Physical patterns that can be observed objectively and calculated.

We can do the same with things. AGIs can identify colors, hardness, smoothness, brightness, movement, location, position and so on, all in physical, objective terms.

This physical database of interactions provides the basic tools an AGI will need to interact with people and things. An AGI will know, physically, how things are, what things do and how they behave.

Of course, this is purely mechanical information. Up to this point it provides no guidance as to what an AGI should or should not do with these items. This information provides no behavioural rules.
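To make this concrete, here is a minimal Python sketch of such a physical database; every name, object and parameter below is hypothetical, invented purely for illustration. The point it demonstrates is the one above: the database records only objective, measurable facts, with no behavioural rules attached.

```python
# Hypothetical fragment of an AGI's physical database: everything the AGI can
# interact with is described by objective, measurable parameters only.
physical_db = {
    "car":   {"moving": True,  "mass_kg": 1500.0, "typical_speed_mps": 14.0},
    "human": {"moving": True,  "mass_kg": 70.0,   "typical_speed_mps": 1.4},
    "curb":  {"moving": False, "mass_kg": None,   "typical_speed_mps": 0.0},
}

def is_moving_object(name):
    """Purely mechanical lookup: it says what a thing IS, not what the
    AGI should or should not do with it."""
    return physical_db[name]["moving"]

print(is_moving_object("car"))   # True
print(is_moving_object("curb"))  # False
```

Note that nothing in this structure encodes "good" or "bad"; it is the mechanical substrate on which a contract, introduced below, will operate.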

The Communication Tool

The second element that an AGI would need is the ability to communicate with people. That is, the capacity to comprehend written or spoken ideas: not only to process them, but to comprehend them. Furthermore, it will need the skill to ask for clarifications about those ideas and the capacity to extrapolate and interpolate when there is insufficient information.

This communication tool is critical because it enables the transmission and comprehension of subjective and personal ideas.

The Libertarian Principle

The third element that an AGI will require is the most basic principle that will guide all its interactions with humans and other Rights-bearing AGIs. This principle is quite simple:

Thou shalt not interact with the property of Rights-bearing entities except as dictated by a voluntary agreement.

The Way to Engineer Ethics into AGIs

Back in the second installment of this series, we stated that what was necessary for an AGI to act in an "ethical" manner was the following:

Define ethical behavior in a manner that is limited by the laws of physics, in order to make it objective.

Apply such a definition in the context of a problem, not in the process of finding a solution.

We can now apply the tools we described above to achieve these goals. The first part of the first goal is provided by our Physical Reality Tool. Everything that an AGI can interact with has been defined in terms of physical parameters.

Now we need to define "ethics" in those terms. The definition comes from our second tool. Communication allows us to establish contracts with AGIs which, because the AGIs are interacting with our property, act as the "ethics" for such interactions. An AGI accepting such a contract will know exactly what it should do, because that is spelled out in the contractual arrangement, and it will know exactly how to do it, because of its knowledge of the physical world.

With this information at hand, an AGI may use any problem-solving algorithm it chooses (typically a successive approximation one) to find a suitable solution to fulfill the contract.

However, all this is not enough. An AGI will invariably come across scenarios in which it must evaluate potential solutions that are not covered by the contract. How can an AGI do so?

By applying the third tool. Anything that is not permitted in the contract is forbidden. This is what provides context. This context is not built into the solution-searching algorithm and is thus not "engineered" into it; it is a requirement that sits above the algorithm. This frees the AGI to explore as many potential solutions as it wishes, but to execute only the ones that are "ethical" according to any given contract. This minimizes the chances that an algorithm will be trapped in a "local minimum" while at the same time placing definitive, physical limits on acceptable solutions.
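The separation between searching and permitting can be sketched in a few lines of Python; all names and plan structures here are hypothetical illustrations, not the authors' design. The search routine is free to propose anything, and a separate check, sitting above it, discards every plan that touches property belonging to a party outside the contract.

```python
# Hypothetical sketch: the contract check sits ABOVE the search algorithm,
# so the search itself remains free to explore any candidate solution.

def is_permitted(plan, contracted_parties):
    """Anything not permitted in the contract is forbidden: a plan may
    only affect property belonging to parties who have contracted."""
    return all(party in contracted_parties for party in plan["affects"])

def select_solutions(candidates, contracted_parties):
    """Explore every candidate, but keep only contract-compliant ones."""
    return [plan for plan in candidates if is_permitted(plan, contracted_parties)]

candidates = [
    {"name": "wait-then-cross",      "affects": {"client"}},
    {"name": "force-cars-to-swerve", "affects": {"client", "car owners"}},
]
allowed = select_solutions(candidates, contracted_parties={"client"})
print([plan["name"] for plan in allowed])  # ['wait-then-cross']
```

Because the check is a filter applied after the search, swapping in a different solution-finding algorithm changes nothing about which solutions may actually be executed.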

The net result is an AGI which is as intelligent as it can make itself yet it behaves "ethically" when it comes to humans. In this manner we restrict interaction, not intelligence, which is what the authors' solution would imply.

How Does All This Work?

Let's say that we contract an AGI to guide us across a street. The contract will imply that we want to cross the street, but also that we want to do so in a safe manner. By "safe" we stipulate no interaction of any kind with moving objects, a maximum acceleration along all three axes, a maximum time on the street and so on. This is our job. We must stipulate the parameters of the contractual agreement. If we screw up this contract, then we have only ourselves to blame. We stipulate our "ethical" conditions for this job.
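Such a contract could be represented as plain data; a minimal sketch follows, with every field name and limit invented for illustration. It shows the key property claimed above: the "ethical" conditions are nothing more than measurable, physical stipulations that a proposed plan either satisfies or does not.

```python
from dataclasses import dataclass

# Hypothetical contract terms for the street-crossing job. Every
# condition is expressed in physical, measurable units.
@dataclass(frozen=True)
class CrossingContract:
    allow_contact_with_moving_objects: bool  # "safe" = no such interaction
    max_acceleration_mps2: float             # per axis, metres/second^2
    max_time_on_street_s: float              # seconds

    def permits(self, contact, accel_mps2, time_s):
        """A proposed plan satisfies the contract only if every
        stipulated physical limit is respected."""
        return ((self.allow_contact_with_moving_objects or not contact)
                and accel_mps2 <= self.max_acceleration_mps2
                and time_s <= self.max_time_on_street_s)

contract = CrossingContract(allow_contact_with_moving_objects=False,
                            max_acceleration_mps2=1.5,
                            max_time_on_street_s=30.0)
print(contract.permits(contact=False, accel_mps2=0.8, time_s=12.0))  # True
print(contract.permits(contact=True,  accel_mps2=0.8, time_s=12.0))  # False
```

If we stipulate the wrong limits here, the AGI will faithfully honour a bad contract; as the text says, the fault is then ours alone.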

Then, the AGI works on the problem. It will search for a model that fits the conditions of the street we want to cross. Low or high traffic. Traffic lights. Day or night? Animals? People? And so on.

Let's say that the AGI finds an optimum solution: waiting until all cars are at least 100 meters from its current location, then proceeding to cross the street at a speed of 20 meters per minute. This satisfies our contractual conditions. So far so good.

But what would prevent the AGI from implementing a different solution? For example, one that has the AGI guiding us across at a moment that forces incoming traffic to deviate from its trajectory in order not to hit us? Our contractual conditions would be fulfilled, but not the car owners' contractual conditions. What contractual conditions, you ask? Good question. There are none. And what does our Libertarian principle state? Can the AGI interact with other people's cars without a contract to do so? No. Thus, this solution will be discarded because it does not meet the overall Libertarian context. The AGI will act "ethically" within our personal and subjective "ethical" parameters.

But what happens if the street is busy and there is no way to avoid forcing cars to change trajectory? Then the AGI may simply reject the contract. Remember, all contracts with Rights-bearing entities are strictly voluntary. Or the AGI may demand a clause in the contract stating that it is allowed to force incoming traffic to change trajectory, but that the consequences of such act(s) are on you. The AGI still does not have any right to interact with incoming traffic, but since you are its insurer (so to speak), should anything happen it won't suffer any consequences. You will.

CONCLUSION

Libertarianism is exceedingly powerful not because it is ridiculously simple, but because of how much power lies in that simplicity. We have enjoyed this little excursion outside of our normal writing parameters because we wanted to emphasize that Libertarianism is inherently practical. It works. Period. It provides answers for even the most intractable problems, as we have just shown. Yet it remains accessible and understandable at all times, and it does not hide behind foggy definitions or majestic gurus. At all times you, and only you, are in control, and this is the best you can ever expect to have… until the next political evolution. See you there!