> The philosophical thing that I see Ben and Eliezer
> (the two people on this list who are visibly trying
> to create AI) both doing is a sort of phenomenology
> of cognition - they think about thinking, on the basis
> of personal experience. From this they derive their
> ideas of how to implement thought in code. The
> question that bothers me is, is this enough?

I don't think that philosophical sophistication can be
embedded in an AI system by design. I think that the capability for
philosophical sophistication is there in any highly intelligent
system, and that the practical manifestation of this capability in
an AI system has to be encouraged by the system's ~education~.