Let's say that in the future, artificial humanoid computers (androids) are built to serve as a workforce. Why might a company design these androids to rewrite their own programming, so that they can adapt to new scenarios and add, remove, or replace their own code?

$\begingroup$Your androids don't need to rewrite their programming to adapt to new scenarios; emergent behaviour has been exhibited in artificial intelligence space now as a result of storing (and recognising patterns in) massive amounts of data. This is happening now and resulting in some very interesting behaviours and adaptations emerging, without a single line of code being changed.$\endgroup$
– Tim B II, Sep 23 '18 at 3:17


$\begingroup$Sure. Computers are deterministic, which just means they always do exactly (and only) what you tell them to. They can't reprogram themselves, but you can make their programs do different things if they collect different data. With enough complexity, we see patterns of behaviour emerge that we don't expect when we first write the program as data is stored and acted upon at higher and higher levels of sophistication. Very simple programs can lead to very complex outcomes by following this model, but in theory, everything it does can be predicted in advance, including rewriting itself.$\endgroup$
– Tim B II, Sep 23 '18 at 3:40


$\begingroup$So they can learn; an android that can't learn is not all that useful. The main selling point of an AI is the ability to learn, and these androids will be interacting with a lot of humans and their weird behaviour. If they can't learn, what is the point of making them?$\endgroup$
– John, Sep 23 '18 at 5:35


$\begingroup$Do you want Skynet? Because that's how you get Skynet.$\endgroup$
– Richard, Sep 23 '18 at 7:23


$\begingroup$@TimBII No -- the code modifies the code itself. Code is ultimately just data. That data can be modified during program execution. For example, functions can be created or destroyed. Newly defined functions can be called. The functions that are created can themselves create other functions. It is perfectly possible for a program to modify itself, and not simply change its behavior in response to external outputs. A program that is running in memory doesn't have to be the same as the program which was originally loaded.$\endgroup$
– John Coleman, Sep 23 '18 at 21:49
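As a minimal Python sketch of the point in this comment, a program can treat its own functions as data, creating and replacing them while it runs (the `greet` function here is invented purely for illustration):

```python
# A running program can treat its own code as data: here it builds a new
# function from a source string at runtime, calls it, then replaces it.
# (Toy illustration; the `greet` function is invented for the example.)
namespace = {}

exec("def greet():\n    return 'hello'", namespace)
print(namespace["greet"]())   # -> hello

# The program "rewrites itself" by generating a new definition and
# overwriting the old one in its own namespace.
exec("def greet():\n    return 'hello, world'", namespace)
print(namespace["greet"]())   # -> hello, world
```

The program running in memory at the end is not the program that was originally loaded, which is exactly the distinction the comment draws.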

9 Answers

Since the company designed and built the androids, it effectively owns them. Having the androids rewrite their own programming lets the company avoid employing human programmers to do the same job. Human programmers have a bad habit of demanding payment for their services; androids the company owns don't need to be paid.

In conclusion, to save money by reducing labour costs. This is a well-known and extensively practised industrial strategy. What has happened in the past can and will happen in the future.

PS: This answer assumes the androids' programming will be capable of generating the emergent behaviour necessary for a functioning artificial intelligence. So rewriting their programming will, hopefully, lead to better artificial intelligence.

$\begingroup$I think based on the current "as a Service" model, there is an answer somewhere between Fabby's answer and this one - a company forms that creates the most versatile android programming/learning db ever, which it leases to companies alongside a package for a certain number of androids and service for them (Android as a Service (AaaS), if you will). The companies still pay less than hiring a regular employee per android, but don't have to have mechanics or sysadmins on staff to maintain them.$\endgroup$
– IllusiveBrian, Sep 23 '18 at 15:37

Because of continuous improvement, specialisation and customisation:

The human programmers have designed a generic all-purpose android and have set the basic fundamentals of its operation (including the 3 laws of robotics) in compiled code, and they allow the robots to run an interpreter in which the robots can write their own code to optimise for the special tasks their owners order them to do.

This special code is then uploaded to the company's HQ, heuristically analysed, and proposed to human programmers for inclusion back into the compiled code. That code is then rolled out to internal test androids, then to a few company-owned test androids in the homes of company employees, then to test users, and finally to the world.

Advantages:

Owners are the SMEs (Subject Matter Experts) giving voice commands to the androids, which then reprogram themselves, so the company doesn't need to pay the SMEs: the community of owners does the work for free because they want their androids to be better at their tasks.

Whatever most owners want is rolled out in the next compiled-code upgrade, invalidating the interpreted code, so you need fewer analysts, less marketing research, fewer user studies, ... to improve current models through updates.

Allows the company's programmers to become specialised in:

Dangerous asteroid mining operations

High-precision brain surgery

Colony building

...

Customers are more likely to rent than to outright buy if the updates are included in the rented model.

Code is more efficient than vast amounts of training data (which is what the current state of AI is at)

The above will allow you to have a lot of freedom in this universe for both short stories à la Asimov or full novels as you can explore small or large impacts the androids have on society, work, space exploration, ...

$\begingroup$P.S. As from the comments it looks like you don't know much about programming, take a programmer out to a restaurant with their laptop a few times and give them free food and booze. (I'm available! :D ;-))$\endgroup$
– Fabby, Sep 23 '18 at 7:12

$\begingroup$This. And also to update !$\endgroup$
– Kii, Sep 23 '18 at 9:15

Just the other day I wrote some software which executes a Tabu search. In a tabu search, your algorithm "rewrites" itself to avoid retracing its own steps.

We often talk about "rewriting their programming" in a very informal sense, but if you really start digging at it, it's actually a relatively complex topic. We can see what it means by looking at which time symmetries are kept during the rewrite. For example, my tabu search was not permitted to change what it was optimizing for along the way. It could adjust its path, but not its goal.
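As a hedged illustration (not the answerer's actual software), a minimal tabu search over the integers might look like this in Python; the short-term "tabu" memory is the part that lets the algorithm adjust its own path while its goal stays fixed:

```python
from collections import deque

def tabu_search(f, start, steps=50, tabu_size=5):
    """Greedy search that keeps a short 'tabu' memory of recent states
    and refuses to revisit them, so it cannot retrace its own steps."""
    current = best = start
    tabu = deque([start], maxlen=tabu_size)   # short-term memory
    for _ in range(steps):
        neighbours = (current - 1, current + 1)
        candidates = [n for n in neighbours if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)      # best admissible move
        tabu.append(current)                  # now forbidden for a while
        if f(current) < f(best):
            best = current
    return best

# Minimise (x - 7)^2 starting far from the optimum.
print(tabu_search(lambda x: (x - 7) ** 2, start=0))   # -> 7
```

Note that the objective `f` is passed in once and never touched by the search: the path changes as the tabu list fills, but the goal is an invariant.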

When we start talking about "rewriting their programming," what I think we're really getting at is starting to trim away at those time symmetries -- those invariants. For example, C# includes a beautiful little module called the DLR -- the Dynamic Language Runtime. This is a clever extension to C# which permits optimization of dynamic languages like Python. It involves writing code (as in CIL opcodes) on the fly to do certain tasks. It can write any CIL it pleases, but it's forbidden from writing something that is not valid CIL. This means it can't do things like look at memory that it doesn't have the privileges to look at.

One step further might be to write machine code. PyPy is a Python implementation whose JIT outputs machine code, such as x86 instructions. Machine code can do anything one can do in a process, including look at memory it shouldn't. However, it remains confined to an OS process. It can't communicate with other processes without using the OS to facilitate that communication. It can't write to the screen or get keyboard input without the OS helping it along.

So what if we free them up from the OS? What if we give them access to all of the hardware? Well, there's always the idea of a hypervisor that makes it appear as though you have access to the raw hardware, when you don't.

This process can continue as far as you please. At some deep level, we may write "invariant" code which supports our wishes. We then permit the androids to do anything else they want, for they lack the ability to modify these invariants.

This leads to the insidious answer: we let them modify their programming because they can't modify it in a way that is unacceptable to us. As long as they can't do something wrong, permitting androids to reprogram themselves opens up an untold number of advantages. I mean "untold" literally. I cannot write all the reasons one would choose to go down this path. One major class of these is customization. If you want your android to do your things your way, there's no way a giant corporation can provide you exactly what you want. Instead, they provide you a flexible android that can teach itself to do what you want.

I mentioned this is insidious. Well, there's a great story from the early days of Unix, known as the Ken Thompson hack, after the developer who made it happen. He put a backdoor into the login program in UNIX, to let him log in as any user if he typed in his special password. Of course, no fool would let code like

if (strcmp(password, "SayFriendToEnter") == 0)
    grantAccess();

into a key UNIX security program like login. So instead, Ken injected it into the compiler: he wrote some extra code into the compiler so that, if it saw a particular snippet from the known login source code, it injected his backdoor code right there.

Of course, Ken knew the compiler community would frown on this as well. So he wrote a second little snippet: if the compiler recognized those few tell-tale lines from the compiler's own source code, it would inject that check into itself, including the code to modify the next version compiled. He then compiled this, and distributed the resulting binaries to the community to 'jump start' their use of the compiler (if you've ever done a Gentoo Stage 0 install, where you start with almost nothing, it's a doozy. You appreciate every bit of help you can get. And Ken's attack struck before Stage 0).

So what we ended up with was a large number of computers whose compiler stealthily compiled the backdoor into itself, and then put backdoors into login. Nobody noticed until he described it in his 1984 Turing Award lecture, "Reflections on Trusting Trust," where he admitted he had done this.

Why tell this story? Famously a mob boss once said "I don't care whose name is on the ballot, as long as I get to choose the winner." If you can sneak a backdoor into the compiler so that the android never even realizes there's a hole, then you can let the android compile whatever it wants.

So why let androids reprogram themselves? The reason is natural: let them reprogram themselves because there are a billion reasons to let them, and if you have the right secret backdoors in their programming process, there's no drawback to letting them reprogram. It's a no-brainer!

Robots that can self-improve would give a competitor an advantage over you. As a company, you are interested in profit, which is the combination of increasing sales and decreasing costs.

Reduce Cost:

Let's say you could reduce costs by letting go of your in-house developers and having the androids improve themselves; as a company, you would seriously consider this, especially if defective androids could be disposed of at little cost, as opposed to hiring and paying humans with all the obligations their employment entails.

Increase Profit:

If your company relies on innovation, for instance a company like Apple, it would be desirable to push the limits of your products to release to market as early as possible.

Everyone wants the 'next best thing', these companies thrive on such sentiments in the market. If you have self-improved methods of production you can reach market demand much faster than other companies that rely on older methods.

Increased Competitiveness:

You may be able to produce niche products that no other company can, and so become more competitive. For instance, megastructures or nanostructures that people find too difficult to engineer or comprehend at a reasonable cost.

So as a company, it is likely only a matter of when, not if, your production becomes self-improving and self-reliant.

In such that they are able to adapt to new scenarios and add, remove, or replace their own code

Software coders are scarce, expensive, and human. This means they are slow to understand the specific needs of an android (they would go through sprints and stand-up meetings to get some code done), and they can easily be hijacked by, for example, competitors or enemies.

To put it briefly: coding androids is too serious a matter to leave to humans.

In a competitive environment, an android capable of re-adapting its code live, without waiting for an external source to do it, has a significant advantage, so it is only logical that this step will happen.

Rather than writing algorithms and self-modifying code, machine learning focuses on managing a matrix of floating-point numbers that models the problem being solved. The matrix is seeded with random numbers, inputs are fed to an input layer, the matrix formulae are run, the outputs are compared to the expected outputs, and the matrix is adjusted for a retry.

After a few million repetitions, the matrix approximates the desired mapping closely enough that the matrix operation returns the expected results.
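A toy sketch of that loop, scaled down from "a few million repetitions" and with all names invented: a tiny weight "matrix" is seeded randomly and nudged until its outputs match the expected ones, here learning the OR function:

```python
import random

random.seed(0)

# Seed a tiny weight "matrix" (two weights and a bias) randomly.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR

for _ in range(10_000):  # "a few million repetitions", scaled down
    for inputs, expected in data:
        # Run the "matrix formula" on the inputs.
        output = sum(w * x for w, x in zip(weights, inputs)) + bias
        error = expected - output           # compare to expected output
        for i, x in enumerate(inputs):      # adjust the weights for a retry
            weights[i] += 0.01 * error * x
        bias += 0.01 * error

# After training, thresholding the output reproduces OR.
for inputs, expected in data:
    output = sum(w * x for w, x in zip(weights, inputs)) + bias
    assert (output > 0.5) == bool(expected)
```

Not a single line of the training loop changes between problems; only the numbers in the matrix do, which is the point the answer is making.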

This is how Siri or Google Assistant learns to recognize your voice and turn time sequenced sound data into an instruction stream.

For some primers on the technology, check out Andrew Ng's video lecture series on YouTube.

The problem with evolution is that it is wasteful: millions of random variations on the code, most of which are not helpful, some of which are hurtful, but every now and then one that is very helpful, and the descendants of that fortunate mutant prosper. It may be that the only way to determine whether a given random mutation is helpful is to test it in an environment.

A conservative organism that was doing well might never alter its code - it is good enough and the risk of downside is not worth the minuscule possibility of reward. That is especially true for an organism which is effectively immortal like a computer program.

But imagine now multiple copies of this program in multiple android bodies. All individuals are cooperating as a superorganism; sharing the same DNA they are effectively the same organism. One could have a subset of individuals (the evolvebots) which underwent mutations - random alterations in the code. Most such alterations would be deleterious.

In a cooperative superorganism, one evolvebot might randomly stumble upon a beneficial mutation - a way of accomplishing what it needs to accomplish which is slightly more efficient, or effective. Once this improvement is recognized it can be broadcast to all other individuals, which update their programs to incorporate the improved code.
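A toy sketch of that scheme, under invented assumptions: a subset of individuals mutates a shared "genome", and any improvement found by one evolvebot is broadcast back to the whole superorganism:

```python
import random

random.seed(42)

def fitness(code):
    # Stand-in for "how well the android performs its task";
    # higher is better, with the optimum at code == [5, 5, 5].
    # All names and numbers here are invented for illustration.
    return -sum((c - 5) ** 2 for c in code)

# One shared "genome" for the whole superorganism.
shared_code = [0.0, 0.0, 0.0]

for generation in range(200):
    # A subset of individuals (the evolvebots) tries random mutations...
    mutants = [[c + random.gauss(0, 0.5) for c in shared_code]
               for _ in range(10)]
    # ...and any improvement is broadcast back to every individual,
    # so the whole superorganism updates to the better code.
    best = max(mutants, key=fitness)
    if fitness(best) > fitness(shared_code):
        shared_code = best

print([round(c, 1) for c in shared_code])  # converges close to [5, 5, 5]
```

Because deleterious mutations are confined to the evolvebots and only improvements are broadcast, the superorganism captures the upside of mutation without the waste.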

At the time of production, the company doesn't yet know where or how the androids will be used

In such that they are able to adapt to new scenarios

You've kind of answered your own question there.

The android is sold as a multi-functional, all-weather, all-terrain device. However, at the main production centres, they don't yet know where the androids will be sent or what they will be required to do. Orders are coming in from all over the world, and the androids could be required for almost any task imaginable, and many not imaginable.

So what do you do? Spend millions of man-hours of development work coming up with tens of thousands of different modules, and variations of the same, to cover every possible capability in every possible environment? Then go through the bother of having employees add and remove the different modules depending on the needs of each customer, sorting them and making sure the right ones are sent to the right customers? Salespeople going back and forth between customer and production staff to customise each android for each customer would be costly.

No. Since you have the ability, it would be far more efficient to have one production centre that produces just one product and ships it everywhere. Then, when the customer receives it, they tell the android what they want it to do, and it's intelligent enough to rewrite its own code to do it in the environment in which it has to work.

The distinction between machine and human is rather clear. A calculator does not make mistakes; it only propagates the errors of its human user. If you plug a calculation into a calculator and get a result different from what you expected, it's likely because a minus sign was entered incorrectly or due to some other trivial input error. To put it another way: human input is the most difficult and dynamic variable for a machine. It's impossible to predict and difficult, even when predictable, to handle.

Coding itself is subject to human error. It would make sense that once a program began to comprehend its own code, it might rewrite the pieces that could cause bugs. It might then evolve to rewrite entire functions that could prove troublesome in certain scenarios. We might then imagine that an extensive rewrite would be necessary so that all traces of human input were overwritten, and that the program in question (your android here) had a complete understanding of its own code and could completely simulate and account for all outcomes it was designed to handle.