> Without the motive to do so, no entity needs to develop imagination.
> Without imagination and volition, we'll never have true AI. That is what
> is at the heart of the kind of intelligence we possess, and it cannot be
> hardwired into a system, it must evolve through complex feedback from an
> external environment.

While I agree that we want volition for AI to become useful and
interesting, I'm not sure it must be evolved. A sufficiently clever
programmer might be able to write an AI program from scratch, although
it might turn out to be so tricky and inconvenient that everyone in
their right mind will evolve AI and then tweak it.

> In fact, I expect it will be impossible to create a virtual world of
> sufficient complexity to allow an AI 'baby' to achieve conceptual
> awareness at all. Certainly far more complex than to create the AI 'baby'
> itself. One would probably have to wire up the baby with enough sensory
> organs to allow it to accumulate a rich enough sensory and perceptual
> experience, with plenty of feedback mechanisms, and let it free in the
> real world if one expected it to get anywhere in its cognitive growth.

I think you make the mistake of thinking that we need to put all the
complexity in by hand, which would likely be infeasible. But you could
set up a basic set of world rules that encourage emergence, and then
let the world develop (e.g. let virtual plants go through a run of
simulated evolution, generating a synthetic ecology with fairly
complex structures no human has had the need to program).
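To make the idea concrete, here is a toy sketch of what I mean by
letting structure emerge from a simple world rule rather than
programming it by hand. Everything in it (the genome encoding, the
light/water fitness rule, the parameters) is invented purely for
illustration:

```python
import random

# Toy simulated evolution: "plant" genomes evolve under one
# hand-written world rule. The resulting population structure is
# generated by selection, not programmed by hand.

GENOME_LEN = 8          # each gene is a growth parameter in [0, 1]
POP_SIZE = 50
GENERATIONS = 40
MUTATION_RATE = 0.1     # per-gene chance of mutating

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def fitness(genome):
    # The "world rule": plants gain by capturing light (large genes)
    # but pay a cost for uneven growth (variance across genes).
    light = sum(genome)
    mean = light / len(genome)
    cost = sum((g - mean) ** 2 for g in genome)
    return light - 2.0 * cost

def mutate(genome):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Each new slot is filled by a mutated copy of a survivor.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

The programmer only wrote the one-line trade-off in `fitness`; the
actual genomes that end up dominating are something no human had to
design, which is the point.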

Second, I don't see why you need to create a very complex virtual
world. It might turn out that "rather complex" worlds will do just as
well for creating basic AI. One might even have a succession of worlds
of increasing size and complexity, where the AI grows up until it
emerges in the real world.
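The succession-of-worlds idea can also be sketched in a few lines. In
this deliberately tiny illustration the "agent" is just a linear model,
each "world" is a hidden linear rule whose dimension grows, and the
agent's weights carry over (padded with zeros) when it graduates; the
tasks, thresholds, and learning rate are all invented to show the
curriculum structure, nothing more:

```python
import random

# Curriculum of worlds: train the same agent in worlds of increasing
# complexity, carrying its learned state forward each time.

def make_world(dim):
    """A 'world' is a hidden linear rule the agent must discover."""
    true_w = [random.uniform(-1, 1) for _ in range(dim)]
    def sample():
        x = [random.uniform(-1, 1) for _ in range(dim)]
        y = sum(wi * xi for wi, xi in zip(true_w, x))
        return x, y
    return sample

def train_in_world(weights, sample, steps=2000, lr=0.05):
    # Plain stochastic gradient descent on squared error.
    for _ in range(steps):
        x, y = sample()
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

def curriculum(dims=(2, 4, 8)):
    weights = []
    for dim in dims:
        weights = weights + [0.0] * (dim - len(weights))  # grow the agent
        weights = train_in_world(weights, make_world(dim))
    return weights

final = curriculum()
```

What the agent learned in the small world survives the move to the
larger one; the last step of such a ladder would be the real world.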