I'd guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of Unfriendly AGI.

Huh. Is it possible that the corpus callosum has (at least partially) healed since the original studies? Or that some other connection has formed between the hemispheres in the years since the operation?

Yes it was video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

I'm just starting arc 9, and am ready to give up. It's fun enough, but there doesn't seem to be any rationality here. I would buy an argument that the author is a rationalist, but not any of the characters so far. (The backstory does suggest that the characters have done research and thought deep thoughts, but we see none of that.)

If it suddenly improves please let me know -- I've heard enough good things from enough people that I kept going this far, and it'd be a pity to quit just before things get interesting. But I'm almost a third of the way through, and still nothing :-/

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point better, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right number of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of paperclips, rather than skip work and simply enjoy some hedonism?

That is, if the AI saw its utility function from a neutral perspective, and understood that the only reason for it to follow that utility function is the utility function itself (which is arbitrary), and if it then had complete control over itself, why should it just keep following it?

(I'm assuming it's aware of pain/pleasure and that it actually enjoys pleasure, so that there is no problem of wanting to have more pleasure.)

This re-posting was prompted by a Sean Carroll article that argued along similar lines: epiphenomenalism (one of a number of possible alternatives to physicalism) is incredible, therefore no zombies.

There are a number of problems with this kind of thinking.

One is that there may be better dualisms than epiphenomenalism.

Another is that criticising epiphenomenalism doesn't show that there is a workable physical explanation of consciousness. There is no see-saw (teeter-totter) effect whereby the wrongness of one theory implies the correctness of another. For one thing, there are more than two theories (see above). For another, an explanation has to explain: there are positive, absolute standards for explanation. You cannot say some Y is an explanation, that it actually explains, just because some X is wrong and Y is different from X. (The idea that physicalism is correct as an incomprehensible brute fact is known as the "new mysterianism", and probably isn't what reductionists, physicalists, and rationalists are aiming at.)

Carroll and others have put forward a philosophical version of a physical account of consciousness, one stating in general terms that consciousness is a high-level, emergent outcome of fine-grained neurological activity. Zombie arguments (Mary's room, etc.) are intended as handwaving philosophical arguments against that sort of account. If the physicalist side had a scientific version of a physical account of consciousness, there would be no point in arguing against them philosophically, any more than there is a point in arguing philosophically against gravity. Scientific theories, as opposed to philosophical ones, are detailed and predictive, which allows them to be disproven or confirmed and not merely argued for or against.

And, given that there is no detailed, predictive explanation of consciousness, zombies are still imaginable, in a sense. If someone claims they can imagine (in the sense of picturing) a hovering rock, you can show that it is not possible by writing down some high school physics. Zombies are imaginable in a stronger sense: not only can they be pictured, but the picture cannot be refuted.