Just like Brian, I'm also surprised at Ben's claim that human programmers
will take a significant, ongoing part in enhancing human-level AI to
super-human ability - and that this may take decades. Any system with these
characteristics is missing some crucial aspect of general intelligence. I
suspect that this will be the case with AI systems that concentrate on
*separately* coding high-level reasoning, instead of deriving it from
environmentally-interactive perception/action & concept formation.

Surely, once we have a single (even slow) AI with human general cognitive
ability (even mostly blind and quadriplegic), super-human ability will
mainly come down to something like:

* Building/getting enough hardware (possibly specialized) to have a much
more powerful version, and/or many of these 'seeds'

* Allowing them to learn specific skills & knowledge of computer science,
and to apply this to improving their own software/hardware design
incrementally

* Repeat.
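The loop above can be sketched as a toy model. This is purely illustrative: the function name, parameters, and the growth factors are all hypothetical assumptions made up for this sketch, and the multiplicative-gain-per-cycle dynamic is a simplification, not a prediction.

```python
# Toy model of the recursive self-improvement loop sketched above.
# All names and numbers are illustrative assumptions, not predictions.

def takeoff_cycles(initial_capability: float,
                   hardware_factor: float,
                   software_gain: float,
                   human_level: float,
                   max_iterations: int = 100) -> int:
    """Return how many improvement cycles a seed AI needs to exceed
    1000x human level, assuming each cycle multiplies capability by a
    hardware scale-up and a self-directed software improvement."""
    capability = initial_capability
    for cycle in range(1, max_iterations + 1):
        capability *= hardware_factor   # step 1: build/get more hardware
        capability *= software_gain     # step 2: AI improves its own design
        if capability >= 1000 * human_level:
            return cycle                # step 3: repeat until super-human
    return max_iterations

# Starting at human level, with a (hypothetical) modest 2x gain per
# cycle from each source, 1000x human level is reached in 5 cycles:
print(takeoff_cycles(1.0, 2.0, 2.0, 1.0))  # prints 5
```

The point of the sketch is only that with *any* compounding gain per cycle, the number of cycles to a large multiple of human level is small - the "Hard & Fast" intuition.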

This scenario does not even take into account any specialized
software/computer-language skills that may be more deeply embedded (as per
Eli's design), nor the many other advantages of artificial, designed
systems (http://www.optimal.org/peter/hyperintelligence.htm).

Provided you have *really* achieved the essence of human general
intelligence, the take-off will be Hard & Fast.