Devs, welcome your EVIL ROBOT OVERLORDS from MIT

This ONE WEIRD TRICK turns developers into rioting LUDDITES

Computer science researchers at MIT might just be working on technologies to eliminate computer science practitioners.

That's the truly evil outcome of work going on at MIT's Computer Science and Artificial Intelligence Lab, which The Register hopes will be found next to the marketing division of the Sirius Cybernetics Corporation, come the revolution.

Tested on seven open source programs (which the university doesn't name), MIT claims its CodePhage system found and patched “all instances” of vulnerable code in less than ten minutes.

Oh, the humanity.

Beneath the sunny optimism of the MIT news office release lurks a reality so horrifying it would haunt the nightmares of even the Bastard Operator From Hell:

“Automatic patch generation holds out the promise of correcting defects in production software systems without the time and expense required for human developers to understand, triage, and correct these defects”.

OK, pour yourself a strong one and let's take in what's on offer.

The idea behind CodePhage is to repair bugs – specifically security bugs – by looking around for similar functionality in more secure applications and importing it.

The system doesn't even need to see the source code: it analyses an application's execution and characterises the security checks it performs.

CodePhage “can import checks from applications written in programming languages other than the one in which the program it’s repairing was written.

“Once it’s imported code into a vulnerable application, CodePhage can provide a further layer of analysis that guarantees that the bug has been repaired.”

Welcoming our cut and paste overlords

In other words, CodePhage is performing function-transplants on software. It starts by looking for inputs that cause a program to crash (the target crashable software is called the “recipient”) – perhaps using DIODE, another output from the same lab earlier this year.
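To make the setup concrete, here's a hypothetical, deliberately tiny “recipient” – every name here is invented for illustration, not taken from MIT's work. It parses a length field from its input without checking it, so a lying length field triggers what stands in for a crash:

```python
def parse_record(blob: bytes) -> bytes:
    """Toy 'recipient': reads a one-byte length field, then that many
    payload bytes -- without validating the length against the input size."""
    length = blob[0]
    payload = blob[1:1 + length]
    # Missing sanity check: a lying length field slips through to here.
    if len(payload) != length:
        raise IndexError("payload truncated")  # our stand-in for a crash
    return payload

print(parse_record(b"\x03abc"))        # well-formed input parses fine
try:
    parse_record(b"\xff<short>")       # length field claims 255 bytes: "crash"
except IndexError as exc:
    print("crash:", exc)
```

In a real C program the equivalent bug would be an out-of-bounds read; Python's slicing is forgiving, so the raise simulates the crash a tool like DIODE would be hunting for.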

Feed the safe input to a “donor” candidate – that is, some other application offering similar functionality – to see if it crashes. The evil developer-replacing automaton system also takes a symbolic record of the donor, to track things like the sanity checking the donor conducts (for example, whether the donor applies constraints to the input).
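A very rough sketch of that symbolic-record idea, with a toy donor (again, all names and checks here are invented, and real symbolic execution is vastly more involved than logging strings):

```python
def donor_parse(blob: bytes) -> bytes:
    """Toy 'donor': does the same job, but validates the length field
    before trusting it."""
    length = blob[0]
    if length > len(blob) - 1:          # the sanity check worth transplanting
        raise ValueError("bad length field")
    return blob[1:1 + length]

def trace_donor(blob: bytes) -> list:
    """Crude stand-in for the symbolic record: run the donor and note,
    as expressions over the raw input, each constraint it imposed on
    the path this input took."""
    try:
        donor_parse(blob)
        return ["input[0] <= len(input) - 1"]        # check passed
    except ValueError:
        return ["NOT (input[0] <= len(input) - 1)"]  # check rejected the input

print(trace_donor(b"\x03abc"))       # safe input satisfies the constraint
print(trace_donor(b"\xff<short>"))   # a crafted input violates it
```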

Next, CodePhage tests the donor with the input that causes the recipient to crash – again recording what happens in symbolic form.

“CodePhage then analyzes the recipient to find locations at which the input meets most, but not quite all, of the constraints described by the new symbolic expression. The recipient may perform different operations in a different order than the donor does, and it may store data in different forms. But the symbolic expression describes the state of the data after it’s been processed, not the processing itself”, MIT's article says.
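In much-simplified toy form, that matching-and-transplant step amounts to diffing constraint sets and splicing in whatever the recipient is missing (the constraint strings and function names below are invented for illustration):

```python
# Compare the checks the donor enforces against those the recipient does.
donor_constraints = {"input[0] >= 0", "input[0] <= len(input) - 1"}
recipient_constraints = {"input[0] >= 0"}   # the recipient checks less

missing = donor_constraints - recipient_constraints
print("checks to transplant:", missing)

def patched_recipient(blob: bytes) -> bytes:
    """The toy recipient with the donor's missing check spliced in
    ahead of the code that used the length field unchecked."""
    length = blob[0]
    if not (length <= len(blob) - 1):   # transplanted constraint
        raise ValueError("rejected by transplanted check")
    return blob[1:1 + length]

print(patched_recipient(b"\x03abc"))    # still accepts the safe input
try:
    patched_recipient(b"\xff<short>")   # now rejects the crasher up front
except ValueError as exc:
    print("blocked:", exc)
```

The real system, of course, has to do this across languages and without source code, by reasoning about the state of the data rather than the code that produced it – which is exactly the point MIT's article is making.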

As always, those who prepare others for the unemployment queue say their aim is to eliminate drudgery: “one of their hopes is that future versions of CodePhage could drastically reduce the time that software developers spend on grunt work”.

And in the longer term, “you never have to write a piece of code that somebody else has written before”.