If you’ve seen The Matrix, The Terminator or even 2001: A Space Odyssey, you know one thing is inevitable: The machines are coming, and someday they’re going to kill us all. And given the recent proliferation and sophistication of military drones and other automated weapons systems, that future could be getting closer than we think.

In an attempt to head off that Terminator-like future, Cambridge University has announced it is setting up a center next year devoted to the study of technology and “existential risk” — the threat that advances in artificial intelligence, biotechnology and other fields could pose to mankind’s very existence.

The Cambridge Project on Existential Risk is the brainchild of two Cambridge academics — philosophy professor Huw Price and professor of cosmology and astrophysics Martin Rees — as well as Estonian tech entrepreneur Jaan Tallinn, a co-founder of Skype. The center hopes to train a scientific eye on the philosophical issues posed by human technology and on whether they could result in “extinction-level risks to our species as a whole.”

Price tells TIME that while our demise at the hands of our own technological creations has long been the subject of Hollywood films and science fiction (again: Terminator), it is something that has hitherto seen little serious scientific investigation:

“I enjoy those science fiction films, but the success of those movies has contributed in a way to making these issues seem not entirely serious. We want to make the point that there is a serious side to this too.”

Take, for example, the still little-understood flash crash of May 6, 2010. In just six minutes, automated trades executed by computers caused one of the biggest single-day declines in the history of the Dow Jones Industrial Average, sending the index plummeting almost 1,000 points before it recovered within minutes. The dip alarmed regulators, who realized that this technology — lightning-fast trades set to execute based on computerized analysis of market conditions — is already in many ways beyond our control.

Price says that advances in biotechnology — a specialty of his colleague Rees — are equally concerning; thanks to new innovations, the steps necessary to produce a weaponized virus or other bioterror agent have been dramatically simplified. “As technology progresses,” Price says, “the number of individuals needed to wipe us all out is declining quite steeply.” His words echo those of the scientists involved in a seemingly harmless genetic parlor trick from earlier this year — in which they encoded the text of a book in DNA — who acknowledged that the same technology could perhaps be used to encode a lethal virus.

Price emphasizes that the focus of his work won’t just be on artificial intelligence, insisting that the center would look more widely at how human technology could threaten our species. But he admits that AI is nevertheless something he finds “quite fascinating”:

“The way I see it as a philosopher is that more than anything else, what distinguishes us as humans is our intelligence, and this has been a constant throughout history. What seems likely is that this constancy is going to change at some point in the next couple of centuries, and it is going to be one of the most fascinating phases in our history.”

It appears that we humans have been created here on earth, in a first step, like rearing a life inside a cocoon. There seems no doubt that in further steps we will be bequeathing or imparting our intelligence to a new, more capable and hardy species, namely machines. Man is too fragile for space and its daunting surroundings. So in a natural progression our intelligence will be taken over by machines, which are more robust for the hardships of space. But if the transfer is premature, the machines will fail, and we may also be exterminated by them. This study is welcome, as it will give a better understanding of how this empowering of machines will progress. It may also reveal to us what role we humans are to play in the near and distant future.

Another new research project to get research money to justify the 'Publish or Perish' mentality of university professors. I wonder how many years it will take to write a paper on this project and, since technology futurists seem to get it wrong most of the time, how off the mark this study will be.

One cannot see how technology will be implemented, because developed technology is, many or most times, used in ways the developers never intended or foresaw.

No mention was given of how technology has weakened us physically and mentally; this is the larger threat to our existence. We are so insulated by our technology from the natural world — the world that sustains us — that we have lost our primal strength.