Exactly as Artificial Intelligence (AI) did before it, Artificial General Intelligence (AGI) has lost its way. Having forgotten our original intentions, AGI researchers will continue to stumble over the problems of inflexibility, brittleness, lack of generality, and safety until we realize that tools simply cannot possess adaptability greater than their innate intentionality, and cannot provide assurances and promises that they do not understand. The current short-sighted, static, and reductionist definition of intelligence, focused on goals, must be replaced by a long-term adaptive one focused on learning, flexibility, improvement, and safety. We argue that AGI must claim an intent to create safe artificial people via autopoiesis before its promise(s) can be fulfilled.

It’s the ultimate nightmare scenario . . . an innocent mistake is made, misinterpreted by the other side, and the ensuing escalation – easily avoided by trust, communication, or even sufficient knowledge of the other – instead leads to Armageddon. The storyline is so universal that it serves as the plot for just as many romantic comedies as action-packed thrillers. Him vs. her, us vs. them: our need to remain comfortable and in control incorrectly supersedes our true goal of having things turn out for the best.