On 12/15/05, 1Arcturus <arcturus12453@yahoo.com> wrote:
> I had another question about SIAI in relation to Kurzweil's latest book
> Singularity is Near.
>
> If I have him right, Kurzweil predicts that humans will gradually merge with
> their technology - the technology becoming more humanlike, more
> biological-compatible, and integrating into the human body and mental
> processes, until eventually the purely 'biological' portion becomes less and
> less predominant or disappears entirely.
>
> SIAI seems to presuppose a very different scenario - that strongly
> superintelligent AI will arise first in pure machines, and never
> (apparently) in humans. There seems to be no indication of 'merger', more
> like a kind of AI-rule over mostly unmodified humans.
>

Some of us think that one possible solution to the problem of
unfriendly AIs is to aggressively augment and amplify the intelligence
of humans--and more importantly, the intelligence of human social
organizations composed of augmented humans--so that we have a broad,
powerful, and evolving base of intelligence grounded in human values,
in place to deal with the threat of unfriendly AIs. Society is already
proceeding down this broad path, but certainly not with any sense of
urgency.

On the other hand, some of us think that the risk of unfriendly AI is
so great in its consequences, and possibly so near in time, that
humanity's best chance is for a small independent group to be the
first to develop recursively self-improving AI and to build in
safeguards which, unfortunately, have not yet been conceived or
demonstrated to be possible. I don't disagree with this thinking, but
I assign it a very small probability of success, because I think such
a group would be vastly outmatched by organizations with military and
industrial resources that can and will pick up the project when they
think the time is right.

My (optimistic and hopeful) bet is on Google to be prominent in both scenarios.