Eli wrote:
> > I'm not accusing you of that. I'm merely pointing out that
> it's very hard for humans to
> > accurately assess their priors and reason about things in a
> manner divorced from sentiment.
>
> I agree, it's very hard. But you have provided no evidence supporting the
> assertion that I am guilty of this flaw in any of the specific cases in
> question; *first* you must establish this, *then* you may sigh over the
> fallibility of humans in general and me in particular.

OK, so it's "psychoanalyze Eli" time, is it? What could be more exciting,
and more relevant to the future of AI and the universe? ;)

I would propose that our friend Eliezer has a strong personal desire to be
useful and helpful, on a grand scale. (I won't go so far as to call it a
"savior complex" ;). I would propose that this desire is biasing (in a
positive direction) his estimate of the probability that it's possible for *any*
human being to be useful and helpful in terms of the Singularity and
associated events.

But, the thing is, our probability estimates regarding such things are
*inevitably* pretty damn shaky anyway, and hence highly susceptible to
bias.... Where do you draw the line between bias and intuition?

Where there is very little data, we have to use sentiment to guide our
intuitions. There's not enough data to proceed by pure conscious reason.

[By the way, in my view, "sentiment" may be analyzed as "a lot of little
tiny unconfident reasoning steps merged together, many of them analogies
based on personal experience." So you can say it's a kind of reason, but
it's different from "conscious reason", which is based on a smaller number
of reasoning steps, most of which are fairly confident, and each of which
can be scrutinized extensively.]

Hmmm... the relation of sentiment and reason in nonhuman or transhuman
minds... a more interesting topic than Eliezer's psyche, perhaps???