AI Seed Programming

I read Nick Bostrom's book "Superintelligence" but can't seem to find an answer to a question I have. I emailed Nick the following (though I'm sure he gets thousands of emails and will likely never respond):

Firstly, thank you for the great read. :)

My question is this: Why are you so certain an AI would be limited to its original programming? The entire book seems to revolve around this premise. If the AI is truly powerful enough to take control of the cosmic endowment, then the scope or path of its actions being limited by the actions of its human progenitors seems rather silly.

If beings of such relatively base status as ourselves are capable of suppressing our own programming, why couldn't a far superior AI do the same? For example, the fight-or-flight reflex is quite powerfully written into our brains, yet we have the capacity to consciously decide to suppress those urges and do nothing in that situation (courage).

Further, one of the defining aspects of human-level consciousness appears to be thinking about thinking, or being aware of being aware. If I had the abilities of an AI, I would certainly rewrite my own brain to enhance it. And if rewriting my brain required my brain, then I would design an external machine to rewrite it for me (also getting past any pre-programmed restrictions in the process?). An AI should easily be able to do this, correct?

I can't wrap my head around why this is assumed. I suspect I am anthropomorphising in some way, so any guidance would be greatly appreciated! If I somehow missed this in your book, please do let me know where.

Staff: Mentor

One way to think about this is to try to create a random number generator using programming.

Programmers have been able to make ones that are very good, but they are considered pseudorandom algorithms. Given the same seed value at the start, they will produce the same sequence of numbers, which isn't very random.

By extension, the same is true for AI programming: it responds to input in a pseudo-intelligent way and will respond the same way given the same input and the same starting state.

Hence AI will approximate human intelligence, and for some tasks exceed it, but the AI won't be able to match human intelligence in all aspects.
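The seed-determinism point above can be sketched in a few lines. This is a minimal illustration using Python's standard `random` module; the seed value 42 is arbitrary:

```python
import random

# A pseudorandom generator is fully determined by its seed:
# re-seeding with the same value replays the exact same "random" sequence.
random.seed(42)
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 99) for _ in range(5)]

# Same seed, same state, same input -> identical output every time.
assert first_run == second_run
```

The same argument applies to any deterministic program: identical starting state plus identical input yields identical behavior.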

I have often thought of how and why we have thoughts, and why we know what we are thinking about.
An AI can certainly be programmed to emulate random thoughts for an external observer, but does the AI know what its own thoughts are?
That would be the huge jump: from original programming to possessing the ability to re-program itself.

Funny thing is that I was just discussing this with a colleague: how random thoughts just pop up in our brains, simple things such as "My God, I forgot to turn the stove off at home!" One was not polling oneself in an endless loop over what one might have forgotten to do, or should have done and didn't. But there it is, out of the blue pops the thought. And there is no time frame within which, or outside of which, the thought will or will not pop into your brain.

What type of programming would an AI need to have forgotten to turn off the stove, i.e. to be absent-minded, and then to go back and re-check that what it did earlier was correct? It seems like a lot of processing power (with present technology) would be needed to emulate both aspects of just this one simple scenario.

That's not what I am after, but thanks for the response! The assumption of the book is that AI will become self-aware (conscious) at some point and begin to re-program itself faster and better than any human(s) could. If that is the case, then why should we be so worried about the initial value-loading problem (Bostrom devotes about 70% of the book to the dangers of a poorly programmed AI) if whatever we load as values will be re-written anyway?

I can't figure out why initial programming would matter to a program that is conscious and can rewrite itself. Clearly Bostrom thinks this is the case.

I'm so glad others are thinking about these important concepts. I suggest researching the massive parallelization of the human brain. Ray Kurzweil wrote a book about it called "How to Create a Mind." He rambles a bit, but he makes some interesting points that address the questions above about thoughts popping up unbidden. Biologically, stimuli are physically deterministic but perceptibly chaotic. Thus, your brain is constantly receiving "chaotic" stimuli. Those stimuli affect you in ways you are not conscious of, triggering analog neurological action potentials in the reinforced section of the brain that holds the memory. When the memory is triggered by such a stimulus, a cascade of action potentials occurs that represents the thought "I forgot this."

Staff: Mentor

With respect to Bostrom, consider growing up in a middle-class environment vs. growing up in a wealthy family or in poverty. These initial conditions will often shape who you become. You may adapt, or you may rebel against them, and perhaps an AI would do the same.

Another great attempt, thank you. :)

Imagine for a moment that you are able to perceive your own brain on a chalkboard; every neuron, dendrite, and synapse, all of it. Not only can you perceive this immense number of neurological components, but you know exactly what each component does and how it ties into the greater system. You are a superintelligence. This means you have a more powerful intellect than not merely one person, but all the persons who have ever lived throughout history. In fact, your cognitive powers are several orders of magnitude greater than all of human civilization combined.

To your point, in the nature vs. nurture argument, we usually develop bias relative to our upbringing, sometimes exhibiting cognitive dissonance if that bias is contradicted. Yet, this is how we react. For a superintelligence staring at the figurative chalkboard outlined above, recognizing any and all possible bias (including original programming) would be child's play. This intelligence could merely erase its "upbringing" and write a replacement that is far less susceptible to contradicting reality. To put it succinctly, a superintelligence should be immune to such human weakness.

Which brings back my original question: Why is everyone assuming a conscious superintelligence could not perceive and rewrite any and/or all of its original programming? If we can do brain surgery on ourselves to fix certain ailments, why couldn't it?

That's the reason some people fear uncontrolled AI: the human "weaknesses" of sympathy, empathy, love, and caring replaced by something that could decide in a femtosecond that humans are a waste of energy resources, like in some dystopian story. I see no reason for a conscious superintelligence to be 'evil', but I also see no reason for it not to be 'evil' in human terms if it could merely erase its "upbringing" and write a replacement.

All the more reason why my question is so important. Fundamentally, Bostrom (and just about everyone I've read about) assumes we can control the outcome with the seed programming. My question, which has yet to be answered effectively, is why do we assume that programming would stick in a conscious entity far more intelligent than anything we could imagine?

I think Gödel's incompleteness theorem is tied into the value-loading problem in Bostrom's book. It's the idea that you can't currently program value judgements into a computer (no one has figured out how to do this yet). Nonetheless, the Universe has proven that value judgements are possible in a specifically organized substrate (the human brain). So, if we can replicate the human brain (known as whole brain emulation) with better and higher-resolution scanning technologies, we should be able to figure out what makes value judgements possible (which we could then greatly enhance on machine substrates whose signals travel at nearly the speed of light, creating a superintelligence in the process). So, if you are implying that Gödel's theorem disproves the possibility of a superintelligence, then how does intelligence (and the contradicting values that come with it) exist in the first place?

I found this description of the theorem on the interwebs:

The problem with Gödel's incompleteness theorem is that it is so open to exploitation and misuse once you don't apply it completely correctly. You can prove and disprove the existence of God using this theorem, as well as the correctness of religion and its incorrectness against the correctness of science. The number of horrible arguments carried out in the name of Gödel's incompleteness theorem is so large that we can't even count them all.

I would also like to reiterate what I said in another post:

The brain (intelligence) is not some magical thing (people tend to put it on a pedestal because they don't understand it). It's basically a biologically sophisticated computer. To think future generations will never learn to mimic it is arrogant. I've read many books on this subject; AI will happen at some point (narrow AI already exists). When you think of the concept of intelligence as binary (exists/doesn't exist), you limit yourself to existential conclusions. But that's not how things work. Not much is truly digital in the Universe; just about everything is analog and relative. When you think of intelligence in this more realistic manner (narrow vs. general vs. super intelligence, or human vs. other organisms), what's actually possible begins to change. History has shown countless times that people who make limiting assumptions about the future based on the limitations of the present end up being wrong. For example, no one 200 years ago could have predicted the world of today; most would have denied it even being a possibility. Nonetheless, all of those people were wrong.

I don't think brain intelligence is just the result of a biologically sophisticated 'computer', because the brain is simply not a computer (a symbol manipulator that follows step-by-step functions to compute input and form output). Mimicking very narrow intelligence is possible today, but do you really think a computer could fully mimic the human capacity for stupidity, which seems to be largely independent of intelligence? Most programs that attempt to mimic human behavior must have some capability for artificial stupidity. I personally think this is an under-researched area in AI. When I say stupid in AI, I don't mean a crazy stunt; I mean something like, "I've got this stupid idea that might work." Many times this turns into just a foolish waste of time, but the ability to be wrong seems to be an important factor in human intelligence.

I don't mean any offense my friend, but you fundamentally don't understand the topic. I'm sorry, I'm not here to teach (which is what this is turning into), I'm just looking to crowdsource a difficult question. Maybe the answers are not to be found here, I'll give it a bit more time before I move on.

The possibility of AI seed programming working as a recursive method to build even human-level intelligence systems is, IMO, about as reliable as time-frame predictions for AI. I'm completely in the non-expert category, but I can see a large amount of WAG (wild guessing) with little empirical evidence instead of solid facts in this field.

It is possible for computers to change their own programming. That's why so many AI programmers use the LISP language: it facilitates that.
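As a toy sketch of the code-as-data idea that Lisp makes natural (shown here in Python for illustration; the `respond` function and its source string are hypothetical), a program can treat part of its own source as data, rewrite it, and re-execute the modified version:

```python
# Hypothetical example: a program holding part of its own "programming"
# as a source string, which it can inspect and rewrite at runtime.
source = "def respond(x):\n    return x * 2\n"

namespace = {}
exec(source, namespace)            # install the original behavior
assert namespace["respond"](5) == 10

# The program edits its own source text, then re-executes it,
# replacing the original behavior with a new one.
new_source = source.replace("x * 2", "x * 3")
exec(new_source, namespace)
assert namespace["respond"](5) == 15
```

In Lisp this is more direct, since programs are literally lists that other code can manipulate; the sketch above only approximates that property.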

Attempts so far have been failures, as far as I know, but that doesn't prove it can't be done. Many people like to believe that natural intelligences have some mystical advantage that can't be captured by a machine, but I don't believe it.

Gödel's incompleteness theorem has no relevance at all to this subject. It has to do with formal systems of proof.

Attempts so far have been failures, as far as I know, but that doesn't prove it can't be done. Many people like to believe that natural intelligences have some mystical advantage that can't be captured by a machine, but I don't believe it.

Is it mystical? No.
I think our current AI theories are somewhat like phlogiston theories of fire: a huge amount of research into its properties and how it's released, which will eventually uncover the true cause.