Please tell me why the part of the OS that draws on the screen ever needs a promiscuous, sniffing connection to the network? Or why the filesystem handler needs access to the USB subsystem (there should be a subsystem connecting the two, but that surely only needs access to USB devices and an internal filesystem daemon interface).

This is why you modularise, compartmentalise, apply permissions, and break things apart, rather than sitting with a superuser tucked away that is capable of doing EVERYTHING.

Even in an OS, no single part should be able to access everything if you're at all concerned about security. (Performance is an entirely different issue.)

It might be interesting to see an OS that didn't even have a superuser to dole out those privileges, but then the issuing company would be responsible for determining which pieces of hardware each driver, stack, or framework can access. It would probably limit third-party driver and service development as well. And the scale of changes required for an OS to lock down which system services get access to specific pieces of hardware (even through an intermediary service) would probably mean completely rearchitecting the OS, so it's not likely to be done anytime soon.

As it is, the OS normally does pass off the work to specialized modules, but without restricting which systems or pieces of hardware they can access. They just don't bother wasting their resources accessing unrelated stuff.
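
To make that concrete, here is a minimal sketch in C (all names hypothetical) of what such a restriction could look like: each module carries a grant listing the device classes it may touch, and every access is checked against that grant. A real OS would enforce this in the kernel or in hardware rather than trusting the module itself, but the principle is the same.

    /* A minimal sketch (all names hypothetical) of per-module capability
     * checks: each subsystem is granted only the device classes it needs,
     * and every access is tested against that grant. */
    #include <stdbool.h>
    #include <stdio.h>

    enum device_class { DEV_DISPLAY = 1 << 0, DEV_NETWORK = 1 << 1, DEV_USB = 1 << 2 };

    struct module {
        const char *name;
        unsigned    granted;   /* bitmask of device classes this module may touch */
    };

    static bool may_access(const struct module *m, enum device_class dev)
    {
        return (m->granted & dev) != 0;
    }

    int main(void)
    {
        /* The display server gets the display and nothing else. */
        struct module display_server = { "display_server", DEV_DISPLAY };

        printf("display: %s\n", may_access(&display_server, DEV_DISPLAY) ? "allowed" : "denied");
        printf("network: %s\n", may_access(&display_server, DEV_NETWORK) ? "allowed" : "denied");
        return 0;
    }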

But I thought your rant was supposed to be about anti-virus software, which insists it needs all that stuff, and for rather more dubious reasons.

Prior to the advent of the cryptolocker variations, I'd have suggested that running any of the typical AV applications was a far bigger detriment to system performance than simply having an infection.

If your system impact is that high, your software is crap. Even if it works, it's crap. And AV, characteristically, does not work very well. (See also iTunes for an example of crap software dealing massive blows to performance.)

Software/hardware paradigm is the problem

We have ignored this problem for too long. The problem is insufficient architectures coupled with low-level programming that targets the weaknesses in those architectures. People like C.A.R. Hoare knew in the early 1960s that software should be verified, and built verification such as bounds checks into Elliott ALGOL. These checks were dynamic, and thus slowed processing down. Performance was critical in the 1960s, so the scientific programming/hardware community won out and left the checks out, especially soft checks like those in Elliott ALGOL.
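
For illustration, this is roughly the kind of dynamic check Elliott ALGOL performed, sketched in C with made-up names: the subscript is validated at run time before memory is touched, at the cost of a comparison on every access - exactly the overhead the performance camp refused to pay.

    /* A sketch of an ALGOL-style dynamic bounds check: validate every
     * subscript at run time before touching memory. Names are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    static int checked_get(const int *arr, size_t len, size_t i)
    {
        if (i >= len) {                  /* the run-time check the 1960s skipped */
            fprintf(stderr, "bounds violation: index %zu, length %zu\n", i, len);
            abort();                     /* fail loudly instead of reading garbage */
        }
        return arr[i];
    }

    int main(void)
    {
        int data[4] = { 10, 20, 30, 40 };
        printf("%d\n", checked_get(data, 4, 2));   /* fine */
        printf("%d\n", checked_get(data, 4, 9));   /* trapped, not exploited */
        return 0;
    }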

Bob Barton at Burroughs decided in 1962 that such checks would be better done in hardware - for speed and for security. This still came at a small performance penalty, but it was an example of complete systems design, not just a CPU. Such checks are not just software verification checks: in a multitasking environment they are critical security checks. Burroughs released the B5000 in the early 1960s, and these machines live on today in Unisys ClearPath MCP. The scientific community hated the B5000 because it spent cycles on built-in security checks. (Burroughs did come out with a scientific processor, the BSP, as a backend to the B5000 - watch for this architecture in quantum computing.)
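
As a rough software model only (the real machine does this in silicon, with more tag values and full descriptor formats), a B5000-style tagged word pairs every value with a tag, and each use is checked against that tag:

    /* An illustrative model of B5000-style tagged memory: every word carries
     * a tag saying what it is, and a word cannot be used as something its
     * tag says it is not. Greatly simplified; the B5000 has more tag values. */
    #include <stdio.h>
    #include <stdlib.h>

    enum tag { TAG_DATA, TAG_DESCRIPTOR };

    struct word {
        enum tag tag;
        long     value;
    };

    static long load_data(const struct word *w)
    {
        if (w->tag != TAG_DATA) {        /* the check the hardware performs */
            fprintf(stderr, "tag fault: word is not plain data\n");
            abort();
        }
        return w->value;
    }

    int main(void)
    {
        struct word x = { TAG_DATA, 42 };
        struct word d = { TAG_DESCRIPTOR, 0x1000 };

        printf("%ld\n", load_data(&x));  /* allowed: tag matches */
        printf("%ld\n", load_data(&d));  /* tag fault: a descriptor is not data */
        return 0;
    }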

Then it was decided we could statically check software with type checks. Programmers hated types ("why should we have training wheels?") - but that thinking is a completely false analogy.
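
For what it's worth, even C's comparatively weak static typing shows what those 'training wheels' buy: the mistake below is diagnosed before the program ever runs (with gcc, add -Werror to make the diagnostic fatal).

    /* A static type check in action: rejected at compile time, not at 3am
     * in production. Compile with: gcc -Werror typed.c */
    int main(void)
    {
        double balance = 1234.56;
        int *count = &balance;   /* incompatible pointer types: caught before the program runs */
        (void)count;
        return 0;
    }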

Fast forward to 1969: Dennis Ritchie throws out most of the advances of ALGOL over FORTRAN, keeping only the better ALGOL-based syntax and block structure. C was built around low-level CPU instruction sets (the PDP-7 is where the awful ++ operator came from). That was a strength of C, but also its prime weakness. Yes, you could let the programmer do anything, which appealed to programmers' egos (and it is great to teach programmers at that level, but that would be the real equivalent of training wheels), but it has proven to be completely the wrong approach to non-scientific, everyday computing. End-user computing needs to be more secure than anything else; server computers are run by professionals with tight controls. (Linux is good here, but not appropriate for end-user systems - that's another, although related, topic.)

C's philosophy was 'trust the programmer'. In retrospect that was naive, because not all programmers have noble intentions, and even the well-intentioned ones make mistakes. At best it is now a stupid philosophy, more likely a negligent one, and given the security problems it has caused it should be treated as criminally negligent. If engineers built a bridge that sloppily, they'd be gaoled.
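
The philosophy in four lines: this has historically compiled without a murmur, silently writes past the end of buf, and what happens next is undefined. (Some modern toolchains now bolt their own checks onto exactly this case, which rather proves the point.)

    /* 'Trust the programmer': no bounds are checked, no diagnostic is
     * required, and the write past the end of buf is undefined behaviour -
     * the raw material of decades of exploits. */
    #include <string.h>

    int main(void)
    {
        char buf[8];
        strcpy(buf, "this string is far too long for buf");  /* compiles cleanly */
        return 0;
    }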

We could build verification into the code generated by compilers. But that is still not good enough: we need to build verification checks into CPUs, as in the B5000. We have plenty of silicon on a chip to do it now. Programmable Logic Controller (PLC) designers - the people building the hardware that directly controls physical-world objects - are coming to realise this because of Stuxnet, but we now need to apply it to rational CPU design as well. Security experts and CPU designers need to study the B5000 architecture to understand the basis of what to do in the future. (The current release is downloadable from Unisys and runs on PCs.)
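
To be fair, compilers can already retrofit some of these checks today, at a run-time cost: build the snippet below with gcc's -fsanitize=address (AddressSanitizer) and the out-of-bounds read aborts with a diagnostic instead of silently returning whatever sits past the array. Moving the same checks into the CPU, as the B5000 did, is what shrinks that cost.

    /* Compiler-inserted verification: a plain build returns stack garbage,
     * but gcc -fsanitize=address oob.c produces a binary that traps this
     * out-of-bounds read at run time and reports it. */
    int main(void)
    {
        int data[4] = { 1, 2, 3, 4 };
        return data[5];   /* caught by AddressSanitizer, missed by a plain build */
    }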

Of course, there are security flaws at higher levels of abstraction, but until we build strong legs and a sound foundation, the rest of the body will remain vulnerable at the lowest levels.

Make no mistake, the big elephant in the room is low-level programming with languages like C, C++, and assembler. C, C++, and most CPU architectures must be replaced, and the sooner the better. Stop ignoring the elephant in the room.

Note: this is not a popular message. Like the issue of climate change, it is unpopular with many people, who will try anything (mostly bogus) to deny it. The problem is that they are having fun, and those bearing messages like security and climate change (planetary security) are unpopular party poopers.

Re: Software/hardware paradigm is the problem

"Then it was decided we could statically check software with type checks. Programmers hated types "why should we have training wheels" - this thinking is a completely false analogy."

Long before my time. I grew up with strongly typed programming languages in the late 80s.

But OTOH, JavaScript is one of the most "popular" languages around now - which naturally inspired some old-timers to introduce TypeScript, which is again strongly typed.

Though it would be interesting to see some statistics on this (how many favour strongly typed languages versus how many don't), I do suspect you are barking up the wrong tree. I believe the problem here is that the anti-malware industry is simply trying to make a quick buck off the gullible. If users can be tricked into opening dodgy e-mail attachments, surely they can be relied upon to fork out a monthly fee for 'protection'? And here is the rub: if things start working too smoothly, users might start thinking that they no longer need 'protection'. They need constant reminding of the dangers that lurk out there. Flaws in the security software itself? All publicity is good publicity.

Re: Software/hardware paradigm is the problem

9rune5: I'm not really sure whether, in that last part, you are replying to my comment or to the original article.

If we get our hardware/software paradigm right with defences built in from the ground up, the need for bolt-on defence goes away.

What we have at the moment is watchtowers dotted sporadically around the country, watching for advancing attackers, while the encampment itself is surrounded by a garden fence rather than a castle wall. The watchtowers are put up by the anti-malware industry. But we need to be intrinsically secure. It can be done.

But I agree that even when we clean up that part, we still need to be vigilant. End users, however, are not vigilant, so there will always be opportunities for attackers if we rely on vigilance. We need to actually protect the end users of devices; today's architectures and languages do not do that.