Comments

Hehe. "The next fifty years." With the rate things are changing, I would hesitate to talk about anything more than ten years away. In thirty years, at most, we're going to have mature nanotechnology, including the superfast computers and augmented human intelligence that brings with it. I think I'll leave the computer security problems to people millions of times smarter than me, thanks.

Did Alan really mention 50 years? Not in anything they quoted him as saying in the article, at least. The title's the only thing that mentions it, and the intro sort of hints that it's the same title Alan's using for his talk at the conference.

Hehe. I don't mean to say that computer security is going to be meaningless in 50 years, just that it's going to look so different by then that there's not much point in talking about it now. With super-advanced tech, the security issues just get harder, that's all. More of what we see today--asymmetric threats and all.

As someone who studies language design, I find it interesting that Cox emphasizes the importance of language design in security. Certainly Java memory management, as mentioned in the article, is a success story. It will be interesting to see if security is what finally gets features like program annotation adopted among real-world programmers.
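To make the annotation idea concrete, here's a minimal sketch in Java of what a security-oriented program annotation might look like. (The names @Sanitized, runQuery, and sanitize are all hypothetical, not from the article; plain Java does not enforce the contract at compile time, a separate static checker would have to do that.)

```java
import java.lang.annotation.*;

// Hypothetical annotation: marks strings that have been validated
// and are considered safe to pass to a sensitive sink.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
@interface Sanitized {}

public class AnnotationSketch {
    // A static checker (not shown) could verify that every call site
    // only passes values that flowed through sanitize().
    static void runQuery(@Sanitized String input) {
        System.out.println("querying with: " + input);
    }

    // Naive escaping, purely for illustration.
    static String sanitize(String raw) {
        return raw.replace("'", "''");
    }

    public static void main(String[] args) {
        runQuery(sanitize("O'Reilly"));
    }
}
```

The point is that the annotation carries a machine-checkable security intent that a compiler or verification tool could enforce, which is roughly the kind of feature the comment above is talking about.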

50 years...
Imagine you dig in the old closet in the attic, wipe away the dust and discover an old article - "Computer Security in the Year 2005", written in 1955. Perhaps the article would mention locks, maybe even multi-user systems with passwords - but no mass-mailing worms (a what? What is email? What are networks?), DDoS attacks and the like.
(Visit http://fun.drno.de/pics/english/HomeComputer.jpg , 94k) - I think it's hard to predict even the next 10 to 15 years, and it's absurd to try to predict 50 years out _and_ get a usable (= roughly true) prediction...

But I agree with pdf23ds: the interview mostly concentrates on the last decade and the here and now; there are hardly any hints about the future beyond, say, 5 years.

Don't forget Mauchly's famous 1962 article on the future of computing called "Pocket Computer May Replace Shopping List". Of course it's hard to say whether the UNIVAC inventor's thoughts had a ripple effect that helped lead to the personal computer, or if he was just commenting on what he saw as inevitable, but I don't think it's fair to say all predictions are futile.

There was actually a study of this topic called "Predicting the market evolution of computers: was the revolution really unforeseen?" (http://www.sciencedirect.com/science/article/B6V80-4B6TXSF-1/2/05623aa2720716fdd18abb7c4a8bda29). I haven't read it, but the abstract is interesting: it points out that predictions about computers became increasingly relevant around 50 years ago, about the time of the UNIVAC.

I think a lot of pretty specific things can be predicted about the next fifty years, but computer security is definitely not one of them. It's a much more complicated subject, and it depends a lot on the specific nature of whatever technologies are involved. Insofar as Cox is speculating on trends in language design toward verification of correctness and increasing compiler/runtime duties, and on specific but huge weaknesses (like the ability of virtually any virus to completely scramble the hard drive and BIOS of a computer) that can be addressed by improved overall system design, I can see where his speculations could be somewhat relevant over the next one or two decades. But the trends he mentions are, for one, fairly obvious now, and for another, only a small part of what computing will consist of.

For one, the switch to massively parallel computing on the desktop, and to small devices connected to the internet sharing their computing resources, is going to introduce both a whole new class of software design challenges and a whole new class of potential security exploits. For another, the increasing pace of adoption of new forms of technology is going to provide more and more new platforms for malware every year. Cell phones and PDAs are already being compromised. Nanotech malware promises to be a greater threat than bioterrorism, even as bioterrorism becomes more and more likely as gene sequencing and manipulation techniques improve. And artificial intelligence is making steady gains in both the profitability and the total complexity and sophistication of its techniques. In another ten years we may begin to see AI agents that can convincingly interact with humans in a number of limited circumstances. In twenty, we'll probably see those circumstances, and the realism of the AI, expand to near human levels. The potential danger from a maliciously--or incompetently--designed AI is no less massive than that of a gray-goo scenario.

It's not so much that any of his predictions are inaccurate. It's more that they're so inadequate that they give little idea of what we'll be looking at outside the increasingly narrow range of expertise (narrow, that is, when seen in the context of all technological expertise) that is OS-level software design. Just not very compelling from a security standpoint.

Change is accelerating. While it may have been possible fifty years ago to look fifty years into the future, today it's scarcely possible to look ahead thirty. And as the achievement of smarter-than-human intelligence draws closer, we plain old humans will completely lose whatever predictive ability we have left.

He does make some excellent points, particularly the issue of "provable security".

A little history here: Some years ago, Britain's Royal Signals and Radar Establishment designed its own processor from the ground up for defense applications. It was called Viper. It was no speed demon, no mega-cruncher; in performance terms it was roughly equivalent to a 386. It didn't have multiple integer pipelines, or SIMD instructions, or branch prediction, or anything advanced like that. But what it did have was one feature that uniquely qualified it for defense applications.

That feature was quite simple: the architecture of the Viper processor was intentionally simple enough that the design of the chip could be mathematically proven correct. There was not even the possibility of, say, a Pentium FDIV bug. This was, in fact, the primary design goal of the Viper -- rather than being designed up to a desired feature set, it was designed down to a not-to-be-exceeded maximum level of design complexity that could be mathematically analyzed and verified error-free.

I don't know if the RSRE Viper is actually still in use in the field (and if I did know, I almost certainly couldn't tell you for security reasons). But it's something worth thinking about when we talk about provable security.

pdf23ds wrote: "In another ten years we may begin to see AI agents that can convincingly interact with humans in a number of limited circumstances. In twenty, we'll probably see those circumstances, and the realism of the AI, expand to near human levels."

Please excuse a certain scepticism. Do we yet have AI computers with the intellectual prowess of an average to low life form? Might that not be a step along the way? Perhaps that of a bee, or a salmon, or (somewhat higher up the scale) a hamster has been demonstrated. But I missed it. Any links so I can catch up?

It may be counterintuitive, but from what I know about the progress of AI, animal intelligence is really sort of beside the point for the purposes of interactive agents. All I'm extrapolating on are continuing incremental improvements in voice (and perhaps later, facial) recognition, and the application of expert systems with a growing knowledge base to specific human interaction roles. I may be a bit overoptimistic about the potential of these specific techniques; I can't claim a deep knowledge of the field. Animal-level intelligence is a matter of combining a lot of competencies that don't have much relevance to these interactions.

And general intelligence, which is where the main danger lies, seems likely to arise from other techniques entirely: either from some profound insights into the implementation of Bayesian outcome optimization systems, or, as a last resort, from the reverse engineering of the human brain that will become possible with increased neuron-scanning resolution. This is a much less certain area, of course. It could be seven years or forty before we see it yield fruit.

But I do believe that cockroach brains have been reverse-engineered using these methods, and attached to little robots. I forget where I saw that one. Medina and Mauk at the University of Texas Medical School have run a computer simulation of a neural structure based closely on the human cerebellum. And Lloyd Watts has devised a computer model of the human auditory pathways, based on neural studies, that is able to differentiate and locate sounds given binaural inputs. This, rather than hamster simulations, seems to be the way things are going.

Skepticism is healthy, but given what has been accomplished with computer technology in the last 65 years, and the diverse applications it has led to in virtually every field, it is not hard to see that significant improvements in A.I. will become apparent in 10-20 years.

If we attempted to build towards a human-like A.I. by stepping through the emulation of predecessor animal behaviours, it would take a long time. But we do not need A.I. to function the way a human mind does; we merely need the outputs to be sufficiently close to how a human would accomplish it.

To get an understanding of what I mean, compare and contrast how a robot would be used to build a chair, and how a human would build a chair. The end result can be the same, but there are any number of ways to achieve the result. It has also been shown that the most efficient way to build chairs with a robot is to use robots that radically differ from humans.

@Yvan Boily, who wrote: "but we do not need A.I. to function the way a human mind does, we merely need the outputs to be sufficiently close to how a human would accomplish it".

In the early 1980s, I recollect using similar arguments (cars have no legs and aeroplanes do not flap their wings). This was in questions on a technical conference paper on Artificial Neural Networks (applied to Automatic Speech Recognition), in which the author used, as a major justification for his work, the claim that the process was similar to that of the brain. I argued ANNs should be viewed as [just] an interesting class of learning algorithm.

However, unless you dispute that animal brains have similar (but lesser) processing to human brains, we (by whatever means are used) will surely have the ability to simulate substantially the whole of the brain function of lower animals long before we can do the same for human brains.

If you think we are going to "have the ability" to demonstrate some particular general form of scientific progress and then not do it, so be it. I disagree.

So show me a speech recognition machine (or face recognition machine) with ability equivalent to that of a 4-year-old child.

Also, concerning work on the cerebellum, my dictionary (it's not my field) tells me that is the part of the brain concerned with muscular control and other lower-level functions. Am I jumping a conclusion too far in assuming that we share such processing with all/many animals?

Work on "computer model[s] of the human auditory pathways based on neural studies" has been going on since the 1970s to my personal knowledge (and perhaps earlier). Understanding such things is, I am certain, very useful. However, full and direct use of such models has not demonstrated (so far, and so far as I know) any clear advantage for Automatic Speech Recognition, over simpler acoustic analysis approaches that embody only a small fraction of the knowledge of human processing and at much lower computational load.

So back to my substantive point: what reason do we have to believe that "[in twenty years], we'll probably see those circumstances, and the realism of the AI, expand to near human levels"?

@Nigel Sedgwick, who wrote:
'However, unless you dispute that animal brains have similar (but lesser) processing to human brains, we (by whatever means are used) will surely have the ability to simulate substantially the whole of the brain function of lower animals long before we can do the same for human brains.'

The ability to substantially simulate the brain function of a cat is only relevant to simulating the brain of a human if you attempt to iterate through Class Mammalia to achieve the appearance of human intelligence. Since we are targeting the emulation of human behaviour and interactions, the behaviour and interactions of lower animals are barely relevant. Further to that point, emulating a dog or a cat well enough to fool a human would be relatively easy, but building a fake cat whose interactions with a real cat were convincing would be insurmountably challenging, as we cannot really understand the natural interactions of these animals; we lack sufficient means of communication with them. This is why we domesticate them (at least for modern humans, in the context of pets as friends rather than pest control).

If however, you simply want to make something appear to have artificial intelligence, you need only emulate the functionality required to appear intelligent.

If you are building a computer to emulate a human, then you don't need to emulate the human brain; nor do you if you are emulating a cat. If it is a computer simulation not intended to drive a human (or feline) body, then you no longer need to emulate the huge range of brain components responsible for organic systems such as the endocrine system.

Realistically speaking, attempting to emulate the brain function of a human to achieve artificial intelligence is just flat out stupid; you only need to emulate the brain of an organism if you are emulating the whole body. Once we could achieve that, down to the chemical level, we would likely have a much better concept of human consciousness. But again, consciousness is a holy grail for A.I., whereas practical A.I. need only be deductively interactive.

pdf23ds said "Hehe. "The next fifty years." With the rate things are changing, I would hesitate to talk about anything more than ten years away. In thirty years, at most, we're going to have mature nanotechnology, including the superfast computers and augmented human intelligence that brings with it."

Will that be before or after the flying cars and personal jet-packs? Been waiting a while on those already. It's really impossible to say with certainty that anything will be "perfected" in the near future, because until it is, you don't know for sure what obstacles you will encounter.

> We are still in a world where an attack like the slammer worm combined
> with a PC BIOS eraser or disk locking tool could wipe out half the
> PCs exposed to the internet in a few hours.

The only comments so far are either that "the advertisement on the O’Reilly page was too big" or that "fifty years" is too long a period to comment on.
Me, I just hope that I will still have my bank account in a few hours, even though I know that my bank is using PCs to store my account details. Yes, they claim to be secure because they update their anti-virus *every* day or more...

While Schneier says predicting 50 years into the future is futile, I learned a lot from reading many different views in "Beyond Calculation - the next 50 years of computing" - way back in 1997. The threat of intertribal human conflict is real.

I was one of the reviewers of the VIPER processor, and was quite surprised to hear the one point from my summary which wasn't entirely damning used in the wrap-up presentation at the end of the review period. I did compare the processor's performance to an i386, but it was 16 times slower on basic calculations. We lived in a world that got edicts from the MoD; for a short time there was talk of VIPER being 'required' by projects, but it was quietly dropped after Cambridge found there was an error in the design. In my own tests I found one instruction which the manual said took 20 cycles and actually took 19. Not a big error, but when you are trying to force something on an industry as 'mathematically correct', it's a sign that maybe our idol has feet of clay. At the time the processor was being pushed towards industry, I was always struck by the word provable. They never said proven. I would be very surprised if VIPER is still in use.
