Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions that are deliberately, or even incidentally, harmful to humanity, they could become an existential risk. If they are naturally inclined, or can be convinced, to help humanity, the result will likely be a much brighter future than would otherwise be the case. This is a true fork in the road toward humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

Recent months have seen dire warnings from Stephen Hawking, Elon Musk and others regarding the dangers that highly intelligent machines could pose to humanity. Fortunately, even the most pessimistic agree that most of the danger would likely be averted if AI were “provably aligned” with human values. Problematic, however, are proposals for pure research projects that are unlikely to be completed before their proponents’ own predicted dates for the appearance of super-intelligence [1]. Instead, using knowledge we already possess, we propose engineering a reasonably tractable and enforceable system of ethics, compatible with current human ethical sensibilities, that avoids unnecessary and intractable claims, requirements and research projects.