> Therefore, all reasonable implementations will be approximations of the
> one solution and necessarily be similar in all aspects that matter.
> Efficacy of an algorithm in the universal case generally improves the
> closer the approximation to the ideal, and I don't know why that would
> be different here. In fact, the empirical evidence seems to suggest that
> this is *definitely* the case when talking about AGI. The thing that
> makes many broken AGI implementations "broken" isn't that they are not
> approximations of the One True Solution, but that the efficacy is so
> poor that they may as well not be in the real world on real machines.
[...]
> Perhaps it isn't clear to me what your counter-hypothesis is.

My counter-hypothesis is that

"Maximize intelligence within computational resources R"

is an optimization problem whose fitness landscape, even when smoothed,
is nowhere near unimodal. For realistic values of R, there exist a number
of rather different near-optimal solutions to this problem, so that even
after smoothing the landscape remains highly multimodal.
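To make the multimodality claim concrete, here is a toy sketch (the peak
positions and heights are invented for illustration, not measurements of
anything): a one-dimensional stand-in for a fitness landscape with several
near-equal optima, which remains multimodal even after Gaussian smoothing.

```python
import math

# Hypothetical "design families": (center, fitness) pairs with
# near-equal peak heights. Purely illustrative numbers.
PEAKS = [(-4.0, 1.00), (0.0, 0.97), (4.0, 0.95)]

def fitness(x):
    # Landscape value: the best of three overlapping Gaussian bumps.
    return max(h * math.exp(-(x - c) ** 2) for c, h in PEAKS)

def smoothed(x, width=0.5, n=41):
    # Crude Gaussian smoothing: weighted average over a grid of
    # offsets spanning +/- 3 standard deviations around x.
    pts = [x + width * (6 * i / (n - 1) - 3) for i in range(n)]
    w = [math.exp(-((p - x) / width) ** 2 / 2) for p in pts]
    return sum(wi * fitness(p) for wi, p in zip(w, pts)) / sum(w)

def local_maxima(f, lo=-8.0, hi=8.0, steps=800):
    # Count strict local maxima of f on a uniform grid.
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [f(x) for x in xs]
    return [xs[i] for i in range(1, steps)
            if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]]

print(len(local_maxima(fitness)))   # distinct modes before smoothing
print(len(local_maxima(smoothed)))  # smoothing does not collapse them
```

If the near-optimal solutions are far enough apart relative to the
smoothing scale, as here, smoothing rounds off local roughness but leaves
the separate modes intact; it does not turn the problem into a unimodal one.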

For example, consider the issue of how to represent and reason about complex
procedures. I think there are many ways of doing this, each with pluses and
minuses, and that different choices will tend to lead to AI systems with
"different sorts of minds."