> * From a utilitarian point of view, this Friendly AI problem is
> probably the best use of your time and resources, even compared with other
> direly under-addressed problems in society.
>

This argument seems problematic to me. First, note that AI has a huge
credibility problem: people have been crying wolf about it for decades, and
the media laps it up. But I still don't have a robot butler, and even tasks
like face recognition remain quite difficult.

Second, note that there is no strong reason to believe Friendly AI is
really possible. Our only evidence that intelligence is possible at all
comes from human intelligence, and there are no "friendly" humans in the
sense we'd require from an AI; no human is provably friendly.

Third, from an altruistic point of view, it's not at all clear that
advancing AI will make the world a better place. It's entirely possible
that it will make the world worse, for reasons including but not limited
to the Friendliness problem.

Here are some alternative carrots that might motivate people:

1) AI is going to take off soon, and there are big bucks to be made.
2) AI is going to take off soon, and it may be possible for dedicated
thinkers to discover deep new scientific truths along the way.