Ben Goertzel wrote:
> I agree that a book on Eliezer's FAI and related ideas would be more useful than a book on rationality. A chapter or two on rationality, with a focus on rationally thinking about AI, FAI and the Singularity, would fit in nicely too.
>

I more or less agree. I would very much enjoy a book on
rationality, and I think it could do a great deal of good in
the world. However, the more theoretical writings on FAI badly
need to be broken down into a more accessible set of
components, with requirements spelled out for each, so that the
actual programming work can begin. Until that point is reached,
very few programming resources are going to step forward.
Without those, no matter how good the theory is, FAI will
remain a pipe dream.

I also believe that open sourcing the components that are
generally useful would be a much faster way of getting them
built than waiting for everything, including funds, to come
together for a closed-source effort. A few good architects to
create component subprojects and manage them as OSS would be
easier to come by than many master programmers dedicating their
lives to a closed effort with little or no working capital or
compensation until a hopefully happy Singularity.

I think there is a false assumption that potential contributors
need to understand FAI in near Eliezer-like depth before they
can do a lot of very useful and needed work.

Given that we believe getting to FAI is utterly crucial, that
every day until then is full of huge risks to all humanity, and
that FAI is difficult on both technical and theoretical levels,
it seems to me we need to divide and conquer the problem as
much as possible. Otherwise stasis will prevail.