L 600: This is the first explanation of the 2010 Flash Crash that I have read.

L 636: I am sceptical of “general intelligence”; I think it is ill-defined.
Intelligence comes in many forms, just like people.

L 703:

For instance, an “engineering superintelligence” would be an intellect that vastly outperforms the best current human minds in the domain of engineering.

What about computing orbits?

L 846:

Since there is a limited number—perhaps a very small number—of distinct fundamental mechanisms that operate in the brain, continuing incremental progress in brain science should eventually discover them all.

In retrospect, they will all be obvious, I suspect.
You can also argue from information in DNA, most of which is shared with much simpler chordates.

L 877:

We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they will have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness).

I see something a bit teleological here—as if being like people was their ultimate goal.
He goes on to make a similar point.

L 1032: I do not understand how these “IQ gains from selecting among” figures were produced.
Also what is this selection criterion?
If reading the DNA is the only guidance, then just compose the desired DNA.
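
One way such figures could be produced (my guess, not something the text confirms): treat the genetic contribution to IQ as normally distributed and compute the expected maximum of n draws. A quick Monte Carlo sketch, with an illustrative standard deviation that is my assumption, not a figure from the book:

```python
import random

def expected_gain(n, sd=7.5, trials=20000, seed=0):
    """Expected IQ gain from picking the best of n embryos, assuming the
    genetic component of IQ is normal with mean 0 and standard deviation
    sd.  (sd=7.5 is an illustrative value, roughly half the population
    SD of 15; it is an assumption, not a figure from the book.)"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0, sd) for _ in range(n))
    return total / trials

# The gain grows only logarithmically with n: each doubling of the
# number of embryos buys less than the previous one.
for n in (2, 10, 100):
    print(n, round(expected_gain(n), 1))
```

If the published figures came from a model like this, the selection criterion would be a polygenic score read from each embryo's DNA, which still leaves my objection standing: if reading the DNA is the only guidance, composing the desired DNA directly dominates selecting among random draws.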

L 1166: There are two sorts of evolution: genetic drift and mutation.
The selection mechanisms described can only enhance drift, which is not what separated us from the apes.

L 1538: Bostrom makes a good point:

For example, biological neurons are less reliable than transistors.
Since noisy computing necessitates redundant encoding schemes that use multiple elements to encode a single bit of information, a digital brain might derive some efficiency gains from the use of reliable high-precision computing elements.
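
The trade-off can be made concrete with the simplest redundant scheme, triple modular redundancy: encode one bit in three unreliable elements and take a majority vote. (A toy illustration of the general point, not a model of neurons.)

```python
def majority_error(p):
    """Probability that a 3-way majority vote of elements that each fail
    independently with probability p still reports the wrong bit:
    all three wrong, or exactly two of the three wrong."""
    return p**3 + 3 * p**2 * (1 - p)

# A 10% per-element failure rate drops to 2.8% after voting, but at the
# cost of 3x the elements (plus the voter).  A reliable high-precision
# element needs no such overhead, which is Bostrom's efficiency point.
print(majority_error(0.1))
```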

L 1579: Bostrom assumes that in the process of getting smarter than us, machines will go through a phase of being like us—achieving “general intelligence”.
I doubt that.
I think that we humans have a warped view of the intelligence landscape.

L 1704: “It is entirely possible that the quest for artificial intelligence will appear to be lost in dense jungle until an unexpected breakthrough reveals the finishing line in a clearing just a few short steps away.”
I doubt it.
I can more easily imagine a debate between an AI and some humans, with the AI saying “I am too.” against “no you’re not.”

L 1734: Bostrom proposes training different copies of the same human emulation in different disciplines.
Mind meld is probably impossible for emulations of different humans, but perhaps more nearly possible for emulations of the same person.
This makes technological expansion of emulated memory interesting.

L 1963: “An AI would have no disgruntled employees ready to be poached by competitors or bribed into becoming informants.”
Bostrom is making many assumptions here.

L 2007: I am continually bothered by Bostrom’s notion of some one-dimensional space of superintelligences.
The question is not whether it is superintelligent, but “what can it do?”.
(See 2259 below.)

L 2193: Bostrom considers what a superintelligence might do, in particular what power it might usurp.
Humans “took over” because we were the product of Darwinian evolution, which survived, now and then, by destructive competition with other species.
A superintelligence will not have that legacy unless we implant it.

L 2213: I sense an oncoming presumption that a superintelligence will understand people better than people do.

L 2259: Bostrom’s table 8 is useful.
It addresses the above question “What can it do?”.

L 2457: Perspective: Some intelligences would describe the set of people as a “singleton”.
Suddenly Bostrom seems to be rooting for the superintelligence, as our friend—a bit like Moravec’s “Mind Children”.

L 2859: Bostrom is merely recounting “meaning of life” dilemmas in an AI context.
The AIs will have their own parallel dilemmas orthogonal to our own.

L 2900: Bostrom assumes that the AI is a single unified entity.
Modern software systems are built from different modules with different faculties.
They need not all trust all other faculties.

L 2924: Bostrom worries about converting a Hubble volume into paper clips, but then worries about fickle AI abandoning the values we give them.
Perhaps it is merely planning for the worst case.

Another notion that Bostrom omits is a budget.
Give an AI a strict resource budget that it is not allowed to exceed.
Build this in at a low level.
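
A minimal sketch of what building the budget in at a low level could mean (my own illustration; the names are hypothetical): meter every action through one chokepoint that debits a fixed budget and refuses work once it is exhausted.

```python
class BudgetExhausted(Exception):
    """Raised when an action would exceed the remaining budget."""

class MeteredAgent:
    """Every action must pass through spend(); if there is no other
    path to resources, the cap cannot be bypassed from above."""

    def __init__(self, budget):
        self._remaining = budget

    def spend(self, cost):
        """Debit the budget, refusing rather than overdrawing."""
        if cost > self._remaining:
            raise BudgetExhausted("resource budget exhausted")
        self._remaining -= cost
        return self._remaining

agent = MeteredAgent(budget=100)
agent.spend(60)   # 40 left
agent.spend(40)   # 0 left
# agent.spend(1)  # would raise BudgetExhausted
```

The interesting engineering question is whether the chokepoint really is the only path to resources; at the software level that is the same trust problem as any sandbox.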

L 3235:

“However, the ability of tripwires to constrain a full-fledged superintelligence must remain very much in doubt, since it would be hard for us to assure ourselves that such an agent could not find ways to subvert any tripwire devised by the human intellect.”

That presumes that the AI can detect the tripwire.

Asynchronous Notes

I think that the AI story will not change much in the next 50 years.
I am not being pessimistic.
McCarthy’s jab, “As soon as it works it is no longer AI,” will continue to rule the roost.
In the next 50 years robots will learn to walk instead of being programmed to walk.
This general faculty will allow them to learn to ride a bicycle without further engineering.
This faculty is necessary, I think, for any sort of consciousness.
So too, speech recognition (or better, speech comprehension) will move forward, and a few steps there will feed into what it means to know something.

A closely related but more difficult faculty is acquisition of frames (or patterns) with parameters.
Our world models are built in large part from frames.
Consciousness comes as we find patterns in the head.
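
A frame in roughly the Minsky sense is a record with named slots, some filled with defaults and some left as parameters. A minimal sketch, with made-up slot names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A pattern with parameters: fixed structure, variable slots."""
    name: str
    slots: dict = field(default_factory=dict)

    def instantiate(self, **bindings):
        """Fill parameter slots, keeping defaults where none are given."""
        filled = dict(self.slots)
        filled.update(bindings)
        return Frame(self.name, filled)

# A generic "ride" frame; the rider is a parameter, the vehicle a default.
ride = Frame("ride", {"rider": None, "vehicle": "bicycle", "goal": "move"})
my_ride = ride.instantiate(rider="robot")
print(my_ride.slots["rider"], my_ride.slots["vehicle"])
```

The hard faculty is not representing frames but acquiring them from experience; the sketch only shows what the acquired thing might look like.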

Motivations, goals and values are another whole can of worms.
Why would a robot want to ride a bicycle?

More easily than most of Bostrom’s robots, I can imagine a robot whose only passion is mathematics.
It might be entirely unaware of the real world.

I think that Bostrom anthropomorphizes too much.
We are familiar with bad people and this is an easy way to go.

Today is 2015.
What we know today about the brain that we have exploited in our AI attempts can fit on the back of an envelope.
Our AI technology is deep and so is our knowledge of the brain, but there is not much overlap.
There is considerable overlap between our subjective knowledge of how we think, and AI technology.
That was the original and continuing source of AI technology.
I think we will learn new things about the brain that can be used in AI that will require many envelopes.
I think we will learn how to steal information from DNA that bears on the brain.
It will be a long time before an AI is much like a person as in Hollywood’s notions.