We have reached a critical moment: machine learning is on the verge of transforming our lives. Yet our approach to this technology remains experimental; we are only beginning to make sense of what we are doing, and the need for a moral compass is especially urgent at a time when humanity is more divided than ever.

Many of the ethical problems of machine learning have already arisen in analogous forms throughout history, and we will consider how, at different times, innovative solutions have helped us develop trust and better social relations. History tells us that human beings tend not to foresee the problems associated with our own development, but if we learn our lessons, we can take measures during this early stage of machine learning to minimize unintended and undesirable social consequences. It is possible to build incentives into machine learning that help to improve trust in various transactions.

Over the years, I have increasingly come to experience my life as game-like. I find myself accidentally playing ‘Nell Watson RPG’, a game in which the player roams an open world, accepting quests and requests for help from a variety of NPCs. Sometimes the player character will accept a reward after the fact, but often simply knowing that the state of play has improved in some way is reward enough.

The game also includes a lengthy main quest to help ‘save the world’ by shepherding new social technologies into being. Along the way, various other players have joined my party in pursuit of a shared goal or mission. In doing this, I have learned to better align my character's strengths with those of others in the party, each of us taking our respective ideal roles.

It's easy to be cynical, and to sneer at exuberance and deride it as irrational.

We don't have flying cars, but we have something better. We don't have moon bases yet, but we have developed the means to access space at a hundredth of the cost. Our robotic butlers are extant, if ethereal, in the Cloud.

Even ten years ago it would have been easy to write such developments off as infeasible. Had the engineers behind these great chains of innovation abandoned hope of accomplishing such feats, we would have been robbed of them.

From a universal perspective, life itself is merely an information set that happens to possess a degree of agency. We are self-propelled gatherers and processors of data, flung forward by time's arrow and a trillion iterations.

For eons this was the status quo: the gene was the most robust means of storing, processing, and propagating information. It was the development of the neocortex that enabled a shift to new forms of information, such as Dawkins' meme. Memes are much less robust in geological terms, but vastly more rapid in their ability to shift, iterate, and influence entire populations - even the ecosystem itself.

We are facing a machine-driven moral singularity in the near future. Surprisingly, amoral machines are less of a problem than supermoral ones.

We have checking mechanisms in our society that aim to discover and prevent sociopathic activity. Most of these are rather primitive, but they work reasonably well after the fact. Amoral machines may likewise have watchdogs and safeguards that monitor their activity for actions straying far from given norms.

However, the emergence of supermoral thought patterns will be very difficult to detect. Just as we can scarcely imagine how one might perceive the world with an IQ of 200, it is very challenging to predict the actions of machines whose universal morals are objectively better than our own.