Home Users: A Public Health Problem?

To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system "out of the box," but there are still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ? What are the best settings for their wireless network? And so on and so on and so on.

How is it possible that we in the computer industry have created such a shoddy product? How have we foisted on people a product that is so difficult to use securely, that requires so many add-on products?

It's even worse than that. We have sold the average computer user a bill of goods. In our race for an ever-increasing market, we have convinced every person that he needs a computer. We have provided application after application -- IM, peer-to-peer file sharing, eBay, Facebook -- to make computers both useful and enjoyable to the home user. At the same time, we've made them so hard to maintain that only a trained sysadmin can do it.

And then we wonder why home users have such problems with their buggy systems, why they can't seem to do even the simplest administrative tasks, and why their computers aren't secure. They're not secure because home users don't know how to secure them.

At work, I have an entire IT department I can call on if I have a problem. They filter my net connection so that I don't see spam, and most attacks are blocked before they even get to my computer. They tell me which updates to install on my system and when. And they're available to help me recover if something untoward does happen to my system. Home users have none of this support. They're on their own.

This problem isn't simply going to go away as computers get smarter and users get savvier. The next generation of computers will be vulnerable to all sorts of different attacks, and the next generation of attack tools will fool users in all sorts of different ways. The security arms race isn't going away any time soon, but it will be fought with ever more complex weapons.

This isn't simply an academic problem; it's a public health problem. In the hyper-connected world of the Internet, everyone's security depends in part on everyone else's. As long as there are insecure computers out there, hackers will use them to eavesdrop on network traffic, send spam, and attack other computers. We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what's the best way to get there?

I wonder about those who say "educate the users." Have they tried? Have they ever met an actual user? It's unrealistic to expect home users to be responsible for their own security. They don't have the expertise, and they're not going to learn. And it's not just user actions we need to worry about; these computers are insecure right out of the box.

The only possible way to solve this problem is to force the ISPs to become IT departments. There's no reason why they can't provide home users with the same level of support my IT department provides me with. There's no reason why they can't provide "clean pipe" service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what's the alternative?

In 1991, Walter S. Mossberg debuted his "Personal Technology" column in The Wall Street Journal with the words: "Personal computers are just too hard to use, and it isn't your fault." Sixteen years later, the statement is still true -- and doubly true when it comes to computer security.

If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn't any other way.

This essay is the first half of a point/counterpoint with Marcus Ranum in the September issue of Information Security. You can read his reply here.

Comments

Selling a computer to connect to the Internet that depends on users to configure security themselves is about like selling a car to take on a road trip that requires the driver to configure the brakes themselves--after they pull out of the parking lot.

"There's no reason why they can't provide home users with the same level of support my IT department provides me with."

Well, so long as your ISP has the same level of control over your computer that the IT department does. *I* wouldn't use an ISP that required that level of control over my system.

It's kind of analogous to the Onion story where they reported that 98% of commuters favored public transportation for everyone else. A lot of users would be very happy with a system that would protect unsophisticated, inexperienced, non-technical users from the dangers of the Internet. Some would correctly classify themselves. Many would not.

"The only possible way to solve this problem is to force the ISPs to become IT departments."

Not going to happen. And it's a good thing because it's a really bad idea. Who does the forcing? Government, of course. Who thinks THAT would be an improvement?

What might work instead is a competitive free market responding to what customers demand. Supply creates demand... so someone has to go first. Once people see a reasonably priced ISP that keeps their computers safe and clean, they will demand that.

I see this as a more general problem of adding to the complexity of everyday life. In the modern world, people are expected to be experts on driving, medical practices, foreign policy, and yes, computers, among a host of other issues. The call to "educate the people" ignores the fact that educating them about whatever specific issue you choose means they're not being educated about some other, equally important issue.

Bad driving leads to accidents. Bad knowledge of medicine leads to poor health. Bad foreign policy knowledge leads to choosing bad leaders at the polls. Bad computer security leads to widespread malware. But even a basic functional knowledge of all these things is nearly impossible.

"We have convinced every person that he needs a computer."

Hardly. I know dozens of people who don't have one and don't want one.

"we've made them so hard to maintain that only a trained sysadmin can do it."

Poppycock. I can train anyone, in one hour (provided they speak at least passable English), how to maintain several virtual machines that will utterly SOLVE their maintenance problems. Cost: $200 for the software, and $100 for my time.

"Home users...They're on their own."

So they hire someone. This (legitimately created) need has created thousands of jobs in the form of third-party tech support companies. And the going rates/prices are set by something called the "free market." Look into it.

"It's a public health problem."

Let's call Hillary. She could nationalize all tech support companies. Tech support run by government bureaucrats. Hmmm, now we're truly talking about a nightmare scenario.

"Everyone's security depends in part on everyone else's."

Nope. My network is solid, and so are the networks of my customers. Their security does not depend on yours, I don't care how many virus-laden emails your machine sends, or how infected with malware it is.

"It's unrealistic to expect home users to be responsible for their own security."

Send in the government saviors! "We're from the Government, here to help you! You're too much of a child to take responsibility for yourself, so we'll be your parents! Now hand over that mouse, little one!"

"[Home users] don't have the expertise, and they're not going to learn."

I don't have the expertise to fix my car, or pull my own teeth, and I'm not going to learn. That's why I pay OTHER people, professionals, to perform those services for me. It's called the "free market". Look into it.

"The only possible way to solve this problem is to force the ISPs to become IT departments...it will require changes in the law to make this mandatory."

Pass A BroadLaw!™ Legislation! It's the only answer! Look how well government solutions have worked everywhere else in the national economy!

"If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn't any other way."

Sure there is. Home users can hire a professional to secure their network, customized to their needs. Remember, security is a tradeoff. Home users want convenience too, and convenience is at the opposite end of the spectrum from security. Any national or state-level mandate concerning "security requirements" will create a market for computer network professionals who know how to undo the security measures that home users don't want, giving those home users back the level of security they do want.

I think it was Milton Friedman who said that leftist-liberals hate the free market because it excels in giving people what they want, instead of giving people what the leftist-liberal insists that people "need."

Good comments, and Ranum's response is insufficient. Learning how to use a computer is like an undocumented alien attempting to learn English: possible, but hard, and you have to try really hard and keep up with new idioms. Technological change happens in waves, and people don't always rise with the tide. There are still people who don't have televisions or telephones. The solution, as you suggest, is to throw time (i.e., money) at the problem and/or let others deal with the major issues.

Regular users should be (and most are) behind a NAT router -- this blocks all direct attacks on their badly configured PCs...
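The protection described above amounts to a stateful firewall: let replies to inside-initiated connections through, drop everything else arriving unsolicited. A minimal sketch of such a ruleset, using Linux iptables, is below; this is a config fragment for illustration only (the interface name eth0 is an assumption, and the rules would need to run as root on an actual gateway):

```shell
# Sketch of what a home NAT router does, as iptables rules.
# Assumption: eth0 is the Internet-facing (WAN) interface.

# Allow inbound packets that belong to connections the inside initiated.
iptables -A FORWARD -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Drop everything else arriving unsolicited from the Internet --
# this is what shields the badly configured PC from direct attack.
iptables -A FORWARD -i eth0 -j DROP

# Rewrite outbound traffic so inside hosts share the router's public address.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Note that this only blocks unsolicited inbound connections; as the next paragraph points out, it does nothing against malware the user invites in via e-mail or the web.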

Of course they can still be attacked by e-mail malware -- but good ISPs stop that, at least I think they do here (Denmark) -- not because it is the "right thing," but because it is cheaper to prevent malware than to pay for the extra traffic...
I am just guessing; it might not be a pure cost/benefit reason -- but I am sure it is a big factor...
What is the cost for a big ISP's mail server to end up on a blackhole list? Count the time the support staff spend explaining...

And for the record -- ISPs here have used security/anti-virus/anti-spam as a selling point for years now -- and I believe John Doe has bought it, and chosen a connection that costs maybe 15% more if it is "safe"...

Geeks (like me), who have a FreeBSD/Linux/XX firewall/router, just buy the cheapest/most unrestricted connection -- I actually would pay more to get an uncontrolled connection... (Spam-filtering is nice, but I will white-list/filter myself, thank you...)

And how, pray tell, does it logically follow that because "you think" something is true, therefore ALL HOME USERS in the U.S.A should, as a MANDATORY requirement of connected-to-the-Internet computer usage, be forced (backed up by State coercion if necessary) to comply with your vision of Internet utopia?

You've not much concern for their freedom of choice, do you? Nor their property rights.

Your mother already currently has a choice, and that is to disconnect her computer from the Internet if the downside of usage for her is bigger than the upside for her (and if she doesn't have someone, like her son, who will secure her computer for her, for pay or not.) If there is a sufficiently big demand among users like her, a company (mine, for instance) will go into business providing the service that she would willingly pay for. (It's called the free market.)

But if that company doesn't exist yet, the idea that the entire country should have yet another freedom of choice usurped because one's mother (and others) can't get what she wants as soon as today, smacks of an immature solipsism.

The 'thin vs. fat' question is a time-honored one. I'd like to address one of the comparisons made in this article and used by Mr. Ranum in his response - the comparison to the automobile. The problem with using the evolution of the automobile as an example for study in the computer security space is that the externalities are quite different. This, in turn, ensures that the responses (both individual and regulatory) are quite different. If an automobile is unsafe and/or is being operated in an unsafe manner, the potential negatives cited above (speed and damage, etc) are limited to its immediate proximity. Moreover, they are directly and intuitively traceable to the automobile in question - if J. Random Dimwit loses control of his Unsafe At Any Speedster and plows through a sidewalk cafe, there's little doubt whose car caused the problem.

In contrast, though, the problem of computer security failures contributing to more general problems is harder to link directly. Machines that join zombie nets and pump out spam or join botnet attacks are traceable to those problems only by experts, and in the overwhelming majority of cases they either never are or are linked only long after the fact. Given that, the motivation structure by which society can 'pressure' the users of those computers is not nearly as concrete and visible as that with which it can pressure the users of automobiles.

While automobile pollution regulation is a much more compelling case, it still falls short, because in the case of computer security there *is* a direct cause/effect linkage (computer A attacked target X due to malware), even if it's hard to find, as opposed to the case of cars contributing to a general commons problem -- that of pollution -- which can't be traced back directly in any case.

As this article points out (http://www.theregister.co.uk/2007/09/10/isps_ignore_strorm_worm_and_other_malware/), ISPs *are* in a position to do something about misbehaving hosts if they care to. The question is, why don't more of them care to?

Perhaps because it costs money to do so, and they hesitate to pass on those costs to their customers.

I'm a customer. I want to pay as little as possible for as much as possible (to my ISP, as in anything in my life.) If my ISP raises prices for the services they provide me, and tells me that it's because the ISP is now giving me more services, I'll decide whether I want to pay the increased price for the increased services. If I do, then I pay. If I don't, then I go get a new ISP.

Like any ISP, these services will probably be "rolled out" to a block of a few customers first, to see if the customers actually want it, and to see if they're willing to pay for it.

The ISPs who are already doing it: are they showing a profit? Are they making the service more widespread? Or has it failed to show a good return?

It would be like someone selling you a Ferrari and telling you that you'll figure out how to drive as you go along. "Heck, yeah, just take her out on the freeway and get a feel for it... What's the worst that could happen?"

Having your ISP be your IT would be like being given a chauffeur. Many of us enjoy driving.

We have basic competencies that every driver is expected to demonstrate before they go out on the open road and potentially hurt someone. There are safeguards in place that try to prevent car dealers from selling to people who aren't licensed.

Shocking Idea... Wouldn't it be more feasible to say that you must learn how to "drive" the "info highway" before being allowed to hit the on-ramp? We're not going to let you drive your 18 wheeler broadband connection down the middle of it until you know the rules of the road? If you keep causing problems, we'll take away your ability to use it...

Those who don't care to drive can hire a chauffeur if they please. If Ford was so hot to sell cars that they wanted to let anyone drive regardless of competency, the Highways would be just as dangerous as the internet. They'd fight speeding laws, and balk at the idea of a driver's license.

Bruce, after reading you for years, I'm surprised that you haven't realized that in this scenario you propose, the idea that our ISPs could keep us totally secure is laughable. If Verizon rolled out security for 50,000 users, the game would be to beat Verizon's security and get all 50,000 in one fell swoop. The larger and more numerous the targets, the easier and more profitable the hunting.

No offense, and not to sound haughty, but did you go into brain-shutdown mode when you wrote this? You're asking ISPs to be flexible enough to allow us to do what we want and still somehow keep us totally safe? Would this all be managed by the TSA? Once the ISP does guarantee me safety, what would their liability be if I did get infected with a worm?

I think you have a strange definition of "clean pipe". When I say "What I want from an ISP is a Clean Pipe." what I mean is that they pass my packets without messing with them. I don't want any hidden DNS or HTTP proxies or any traffic being dropped for reasons other than bandwidth limitations (that are protocol neutral).
What you seem to be suggesting is giving people a "Nanny Pipe" that has "bad" things removed from it. If a system like that gets mandated, eventually all sorts of things will be declared "bad," and it won't be limited to what is an actual threat to home users' computers.

The problem is that each user has different needs and expectations. For one user, their expectation is to never, ever, ever get spam...and losing some legitimate email is acceptable. For another person, say someone starting a business at home, losing even one potential customer's email because it is rejected as possible spam is unacceptable. But, with the vectors and algorithms for sending and detecting spam/viruses/spyware changing every day, it is nearly impossible to be 100% either way.

Likewise, your mom may not care that her ISP prevents her from accessing youtube, because there's been a recent outbreak of viruses linked to youtube videos. Until the day her niece sends her a link to a video of her latest school performance, which was posted on youtube. Suddenly blocking youtube is unacceptable.

Similarly, when your rich uncle sends your mom an XYZ device (which is really cool and does something nifty), but your ISP has blocked use of those because they don't use encryption when receiving wireless messages, your mom won't be too happy.

The answer is one you referenced yourself. Your company has a dedicated IT department. Your company's IT department knows your company's computing needs and recommends systems, secures systems, and provides training and support which meet those needs. Home users should find themselves a reputable computer support person/company/store who is willing to understand and meet their unique needs. Then, consult them when choosing/installing software or hardware or changing their setup. In this way, they can be as secure as the computers at your company.

I like Marcus' car analogy because it is so true in many ways. Cars also have many "add-ons" that can cause their own environmental or safety hazards, and trained professionals or experienced hobbyists may be able to install them. But my mom should not try to change her own brakes. She should consult a professional.

I value free speech more than I value computer security. Mandatory ISP level packet filtering practically begs to become a vehicle for censorship, either by the ISP itself, or at the hands of congressmen eager to "protect us" from allegedly objectionable or dangerous content.

Bollocks! A law imposed by government implies coercion. There is no evidence to support that this problem needs to be solved by coercion.

The iPhone is an example of an internet access device that comes into your hands secure (as much as the manufacturer can make it) and provides the capabilities a basic user needs.

There is no need to do the same for PCs; home users can simply opt to use a more suitable internet access platform for their level of skill, such as the iPhone.

If another PC user doesn't secure his machine because he doesn't care about it, he is not putting you at risk because of it. He is simply allowing an attacker to exploit vulnerabilities your system already has. This doesn't change the fact that your system could have been designed without vulnerabilities, or at least without ones that are easy to exploit.

Bruce, you are exhibiting the hypocritical traits of a typical liberal. On the one hand, when it comes to individual freedoms, you quote Thomas Jefferson saying: "Eternal vigilance is the price of liberty." On the other hand, when it comes to business freedoms, you are the first to argue for government regulations and restrictions, in this case to bring to home users what you think they need, instead of bringing to them what they want.

You care so much about computer security because it is your job. Home users do not care as much about computer security. When they do, companies can take advantage of that and offer solutions to solve that.

There is no need for you to impose your software security standards on other people's machines. Instead, if you think that people would pay for a secure ISP, then go ahead, make it a business plan, and found your own ISP company that will do that. If you prosper, then great, you have succeeded! And if you don't, then you will know that the market never wanted your proposed solution in the first place.

I'd like to see some authentication for this article. It is rather at odds with the many wise things I've read from Bruce over the years. Incidentally, I'm rereading Secrets & Lies this week - one of the main points is that there is no such thing as "secure out of the box". Security is an ongoing effort that requires some diligence from users, whether we're talking about online purchases or bicycle locks.

As for the proposed solution, we can't just mandate our problems away. Are there any examples of laws specific to the computer industry that actually had any benefit? Maybe some ISPs are already good at securing customers' machines, but I haven't heard of them. Software that is forced on users isn't free from its own security problems.

Of course, we could always do better. What kind of argument is that? I submit that we are doing better. Are you tired of spam? Use gmail. I get lots of mail, and maybe two spams a month, and I don't have to turn a tap. Of course there are and will be other problems, but there are also people working to solve those problems. Few of them work for the government, or would do their best work for it.

Re: this column, I agree with almost everything, but rather than the "clean pipe", the filtering should be in the other direction. As pointed out above, ISPs can detect contagious computers and should quarantine them. And software makers should be held liable for producing these public health menaces.

It may be true that (some) home users *think* they want their ISP to become their IT department. But frankly, how many companies with IT departments that are that controlling have good relationships between their IT departments and all of the rest of the company computer users?

And that's before you get into the issue of how much different most home users are from company users. Is the home user going to put up with the chance of email from Aunt Em about the local happenings being blocked because of a spam filter's false positive? Will they accept being prevented from installing the latest games (for themselves or their kids), or being told they *must* go out and buy a new set of software because their old one is out of date and vulnerable to exploits?

And what about those users who do know how to use their computers? If the ISP is made into an IT department, will it mean that we would need to pay *extra* to get them to not mess with our internet connection? Or is it just not an option at all, and everyone is restricted to the applications and traffic uses that the ISP mandates?

Wouldn't it be much simpler, for all parties involved, if it were mandated that the ISP must act on abuse reports and take action when worm traffic is detected? Make the ISP contact the home user when that user's computer is doing something bad, and shut down the connection if the problem continues. Heck, some ISPs might then see an extra profit source by offering those users a *choice* to pay the ISP for security services. And as long as you work in protections for when the user can prove that they're not doing anything wrong, a legal requirement to do something about these issues would help defuse a customer angry about losing their connection.

@More Government - I agree that home users could hire someone to secure their systems, in the same way that you hire a mechanic to service your car. But where education comes in is in realising that this is needed: most people know they can't service their car, so they hire someone. But they think they can (or don't need to) secure their system, so they're unwilling to pay someone else to do it for them.

Re: the Ranum column. It's common when improving usability in products aimed at older users to justify it by saying that old people are defective. We have to make the controls easier to use, they say, because old people have lousy eyesight and coordination. Or, we have to simplify because the poor dears can't understand this newfangled technology. And so forth.

Here is the real reason you have to make stuff more usable: Old people don't take crap.

Those 8-year-old Windows admins are going to grow up and someday say to themselves, "You know what? Life's too short for me to always be futzing around with these settings. Give me something that just works, dammit!" And the next generation's Marcus Ranum is going to tell them that they've become obsolete, and should just go off somewhere and die. Good grief.

"But they think they can (or don't need to) secure their system, so they're unwilling to pay someone else to do it for them."

These particular users would be in the same boat (car?) as the few people who believe that they can service their own car (when these particular individuals don't in fact have the skills to do so) or who believe that their car doesn't need servicing (ever) by anyone, and are thus unwilling to pay someone else to do it for them.

You whine about unpatched machines as a 'public health' problem, and yet, when (a while back) Microsoft tried to turn on automatic Windows Update, I don't recall your steadfast support for such a move.

Yes, your unpatched computer is making viruses more likely to propagate. And no, I cannot force you to do anything about it. The best I can do is support the major corporations in trying to create this benefit (MS, for example, would benefit much if virus threats were reduced, so they have an interest in acting benevolently). However, I won't hold my breath, the anti-corporate, anti-capitalism and anti-center attitudes rampant on this blog would not make a support for common sense likely...

"We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what's the best way to get there?"

Actually, that's not the only question. There are three questions:
1) What are the benefits of "forcing ISPs to become IT departments"?
2) What are the costs?
3) How do the benefits compare to the costs?
The article skims over the first question with a vague nod towards hackers using Bob's insecure home PC to attack someone else's system, which is apparently robust against direct attacks but vulnerable to indirect ones. The second and third questions are elided entirely. There is no quantitative analysis.

It may be that I'm being unfair to Bruce here -- maybe he has thought about these things more deeply and the article was just a layman's overview of his argument. But really, it appears to propose a radical and invasive solution to a fairly mild problem without demonstrating that this is necessary and that there is no alternative.

I don't want my ISP to do anything other than provide me with access to the internet. If they did any less (or any more) than that, I would want to move elsewhere.

To be fair, they do already do more than that; they log when I've been online, and where I connected from, they log what websites I've visited, and other traffic that I've generated. Here in the UK, under the RIP Act, they have to. And that's a Bad Thing.

If I wanted an ISP which offered less than full internet connectivity, I'd use AOL :-) Seriously, a filtered internet connection would be a Bad Thing.

Suppose I received a strange email that got through the spam filters, pointing to www.example.com, and I couldn't determine its legitimacy. I'd want to check all sorts of things about www.example.com, including a portscan. If my ISP is filtering anything they think might be strange (they'll miss things!), or that "They" (USA Gov't? My Gov't? CERT?) think is strange, then my nmap results could be skewed. Also, if the rules are set by the ISP, I don't really know how skewed they are. If they're set centrally, then everybody (including the bad guys) knows what gets through and what doesn't.
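The kind of check described above might look like the following sketch. These are standard dig and nmap invocations, shown for illustration; www.example.com is the commenter's own placeholder, and actual results obviously depend on the network path (which is exactly the point -- an upstream filter would silently skew them):

```shell
# Hedged sketch: basic checks on a suspicious domain from an email.
# www.example.com is a placeholder, carried over from the comment above.

# Look up where the name actually points.
dig +short www.example.com

# Scan the most common TCP ports; -Pn skips the ping probe in case
# ICMP is filtered somewhere along the path.
nmap -Pn --top-ports 100 www.example.com
```

If the ISP drops or rewrites some of these probes, the scan reports ports as closed or filtered when they are not, and the user has no way to tell the difference.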

And - the main thing - is that we wouldn't have an internet any more. We'd have some form of mediated computer-to-computer network. The internet is ruled by RFCs and that whole philosophy of a shared, heterogeneous, and open network. Business and/or Government (any kind of controlling authority who would be held responsible for security breaches) would be entirely opposed to any kind of openness. We'd probably end up with something more like Prestel or Teletext, or - safer still - TV programmes broadcast only by authorised networks. (There aren't too many pirate TV stations I am aware of; pirate radio stations seem to be a thing of the past, unless I'm just too old to know about them these days!)

The greatest strengths of the internet are that it is open and heterogeneous. Bruce's proposals would effectively remove both of those benefits.

1) "Holy Invisible Hand". The Free Market gets unleashed upon the unwashed masses. The people become an income source for some legitimate vendors, and also for a lot of questionable operators and professional scammers. The situation does not improve much, if at all; it just becomes more expensive. Those who already know what they want can already buy it; the rest don't know they can, or where to ask, and will not know later either -- or they get brainwashed by advertising and buy some crap like Norton Antivirus that causes more problems than it solves.

2) "Glorious Government". In this scenario, the always-wise rulers decide on a set of requirements, then airdrop it on We the People. There is no reason to believe they will not screw it up like everything else. Mandatory crap increases costs. Embedded shards of the highway-to-hell pavement, also known as good intentions, get abused later and cause collateral damage in terms of freedom. Dangerous ideas later mission-creep into the list of what the population should be protected against. The First, the Fourth, and the Fifth, those annoying legacy bugs, get patched by technical means. Disagree? To a no-fly, and/or no-Net, list you go.

3) "Industrious ISP". The ISPs take over and mandate a set of requirements for their customers, to make it easier and cheaper to police the network and the terminal devices. Wide swaths of customer choice go down the drain. Possible rerun of the pre-Carterfone phone situation. Many competitors' applications (e.g. VoIP, streaming video) get filtered as more than welcome collateral damage. Malware still exists. ISPs react with deep packet inspection. Malware, together with customer-demanded P2P systems, implements encryption and steganography. ISPs react by blocking everything they can't deep-packet inspect. Malware adapts. ISPs demand running only signed code; PCs turn into pay-per-keystroke appliances; megacorporations rejoice. Customers mostly can't go elsewhere, as in too many areas their choice is between one DSL provider and one cable provider, both likely to have substantially similar no-win policies. Politicians, noticing the centralization, mandate all sorts of surveillance and social-control measures, adding the Glorious Government scenario's risks into the mix.

I honestly believe that we are wringing out the last dregs of value out of TCP/IP and contemporary OS design.

It's time to start again and think about what we really want out of the 'Internet' and Information Systems.

It may be hard to 'get' this if you have spent your life mastering current technology to pay the bills ... but I really, genuinely, sincerely believe that we are using 1970s IT concepts to deal with today's problems.

The one legacy of the Bush Administration is that its ineptitude has emboldened the social liberals to re-emerge from the shadows.

The good news is they're making themselves look just as ridiculous as ever with their bizarre world views.

Earlier this week I heard Robert Reich expounding on how, if we don't stop taxing corporations, they deserve the right to vote (no representation without taxation, you know). While he offered it as an argument in favor of abolishing corporate taxes, that he would even think of that situation makes me believe the socialists running around in his circles have discussed just that as yet another creative way to make voters.

Now we have computers...home computers...being declared a public health problem.

While I agree (more or less) with Bruce's ends: that ISPs *should* be able to function as IT infrastructure, I don't agree with the means: through government mandate. The Government is a big and inaccurate stick at the best of times (I'm sure someone has said that in a pithier way before). However, current market forces offer no incentives to ISPs to begin offering IT department-like support to their users. I understand Bruce's frustration with the status quo, which is why I don't believe that this column was written by Fake Bruce Schneier.

Two comments on Ranum's response:

1. I'd love to sit Ranum's kid experts in front of a Trash-80 or an Apple II, with all the manuals, and see how far they get. Or, for that matter, in front of a Unix terminal tied to a minicomputer circa 1984. A lot of their so-called "expertise" is only possible because somebody who *could* "scratch-bake a firewall with a custom filtering reverse web proxy in a weekend" built myspace, digg, flickr, blackberry or any number of sites and products that allow these "kids" to seem more "expert" in the online world.

And Ranum's reference to "home users who can't manage a Windows XP upgrade, but who can successfully instrument-land a jet fighter" is precisely why we want to move things like spam filtering and intrusion detection out to the ISPs. Clearly, anyone *can* be proactively "security conscious" with the right training (and motivation). But just as we can't all be (or don't want to be) fighter pilots, we shouldn't all *have to* become security experts beyond the basics of "don't click on email from unknown people", "don't open attachments you're not expecting" and the like.

2. The car analogy is exactly right, except Ranum's conclusions are backwards. The evolution of the automobile shows what *tends* to happen to "complex" technology as it becomes widely adopted by "non-technical" people.

The most complicated controls in the modern car have nothing to do with the mechanics of driving, but everything to do with the experience of being in a car: e.g., the radio, navigation system, and environmental controls. We've automated almost every aspect of moving a vehicle from point A to point B that doesn't require mind-reading/AI; the one major buyer-configurable holdout is the manual clutch. I mean, how many new cars give drivers direct control over the choke? Ignition timing? Fuel/air ratio? Once upon a time, *every* car had these controls, and a driver had to manage them all if he (and on a few occasions, she) wanted to keep the engine from dying while driving across town.

It seems to me that personal computing is now undergoing the same transition that the car industry went through from the 1950s to the 1960s:
1950s -- introduction of the automatic transmission, new models aimed at different market segments, marketing campaigns saying how every family should have one
1960s -- smaller cars at lower prices, alternatives to cars such as scooters and motorcycles that take up even less space, luxury branding with added features like power windows and doors.

I think that Bruce & I would both love to move the computer industry to where the car industry is now: remote locking, alarms, and ABS come standard; automatic wipers, cruise control, air bags, inputs for external media devices, and run-flat tires are in the basic options package; a navigation unit, media devices with multi-zone distribution, and built-in hands-free support for cell phones are in the higher-end options package. We can all think of equivalents in the PC world to these features. It would be nice to offer them as options to any user who wants them.

I frequently agree with Bruce, but this time, I think he has lost his mind.

1. There is no such thing as complete security. ISPs are still trying to figure out spam-filtering. Most of them can't even staff their abuse desk. How are we to expect them to suddenly figure out other kinds of security?

2. Earthlink has offered free security software to XP users for many years. A few years ago, AOL rolled out an anti-malware application to its users. If you really believe this is the answer, maybe you need to switch to AOL.

3. I don't know about you, but my ISP's software only supports Windows users. If you use Mac OS X, Linux, one of the BSDs, Syllable, or some other operating system, even if you are less likely to be infected, such a mandatory system would lock you out. Mandating a single software configuration would make an otherwise minor problem catastrophic.

4. As soon as you centralize the control of users' computers, you present an irresistible target for government surveillance as well as for the bad guys. Last I heard, most states do not even require companies to notify consumers when their information is exposed in a security breach. Do you expect one of the big telcos to tell people that they lost their information?

5. In California, drivers actually paid mechanics to disable the anti-smog devices. Likewise, computer users attempt to disable the security settings on corporate networks. This tells me that users will continue to cause problems until and unless we start training them before we let them loose on the Internet.

I apologize, I didn't have time to verify that this sentiment wasn't already logged.
Mr. CounterPoint frustrated me.

My daughter is 14--she is nimble with ipods, the imac, linux gui, windows gui. But just when I think she knows a heck of a lot more than I do about how to install or navigate some new whistle/bell, her answers to my questions reveal that while she "keeps up better with trivia" than I do, she has not yet had enough experience in software to understand how it works or what the risks are.

She can use the toaster exceptionally well! But she doesn't know how it works or how it might hurt her. Computer usage has become another religion of mystic misunderstandings and rituals-- my child just wants to reach her end goal: play my song, download my video, upload my myspace picture. She doesn't yet know or care how it works. If young people don't invest the time in learning how it works, somewhere along the line, they cannot be the next generation of safe/good designers, supporters, or security experts or even smart users.

Mr. Counterpoint, I have faith in the next generation, trust and advocate for the young, and am willing to admit old stodgies need to move over when they're done contributing to culture. But don't confuse Mr. Schneier's point about the complexity of systems with your 'non-answer' that the "old who can't manage change" will die off -- we may be creating another generation of "gee, I dunno"s right now. Perhaps if "Idiocracy" has a place in the future, it just doesn't matter. :)
Sincerely,
Tamara

Shame on all of you who assume that an older person is unable to use a computer with any skill and doesn't know anything about security. My husband and I are in our 70s and have been using computers since the days of the 8-bit machines. We have all the defensive software necessary and have so far managed to stay out of trouble. If there is one thing we have learned over the years, it is not to be gullible, and I'll bet we could put some of you to shame on that score.

I don't think that ISPs should handle security for their customers but they should have a "security corner" where they explain what good security is, what the common problems are and give recommendations for some appropriate software. Many computer users don't have the time or skill to do this for themselves.

The critical mistake is Microsoft's. When DOS evolved into Windows evolved into Vista, the single-user mindset - and the complete administrative control/responsibility - came with it. It's a disaster, and it's why viruses run wild on Microsoft operating systems. When I use a Windows box, I'm an administrator by default, and if I'm not, software breaks left and right. If I'm on a Mac, I log in as an unprivileged user, making it much more difficult for a virus to take hold.

The real problem is Microsoft isn't being held accountable for the virus/zombie mess they've enabled.

I think, as some other people have suggested, the point is to begin by assuming that users know what they're doing, and not impose a lot of heavy-handed ISP-level controls on them from the get-go. But the ISPs should be a lot more proactive in cutting off accounts that start misbehaving -- not just the ones sending out 10,000 penis-enlargement emails a minute, but also those that show signs of being part of a botnet or DDoS system.

Just tell ISPs that they will be responsible for the actions taken by their users. That would give them a reason to throttle down anyone who's causing problems, but their desire not to lose customers right and left would hopefully encourage them to give affected customers instruction on how to resolve the situation.

The market might not come up with a perfect solution to this, but I think it's much preferable to a heavy-handed government-led initiative that would be unnecessarily intrusive and infringe on private property.

Most users DO NOT know what they are doing. Anyone who thinks so has a very limited perspective and hasn't talked to many real users.

The average user expects a computer to work like a TV set. You turn it on, you jump around the channels looking at stuff and you turn it off when done. It's all very mindless and requires no real knowledge other than how to press the channel buttons.

Technology is complicated. Companies and programmers have a vested interest in maintaining this complication. To do otherwise could cost them their jobs.

There are only two possible realistic solutions to this problem that Bruce writes about.

1. Thin clients for everyone where the provider takes care of all security issues
2. Penalties for allowing your computer to be co-opted (in other words, you are a menace to society and everyone around you).

And then we have the discovery by Windows Secrets, which proves that the vendor of the dominant OS is also underhanded about installing changes to your computer without your knowledge (it silently replaced Windows Update). Or was that already common knowledge?

In this month's CryptoGram, Bruce said:
> The only possible way to solve this problem is to force the ISPs to
> become IT departments. There's no reason why they can't provide
> home users with the same level of support my IT department provides
> me with. There's no reason why they can't provide "clean pipe"
> service to the home. Yes, it will cost home users more. Yes, it
> will require changes in the law to make this mandatory. But what's
> the alternative?

Apparently, the alternative is that we actually receive CryptoGram;
the news story just below this article reads:

If you're going to allow users the freedom to misconfigure/misadminister their networked computers, you have to hold them accountable for the consequences if they do so.

I still believe the solution is that every ISP should require a bond (on the order of $100) per user. If a user's PC is determined to be compromised and inconveniencing other PCs (spam, virus propagation, bot herding, etc.), their connection should be switched off, the bond forfeited, and another bond posted before the connection is turned back on.

I bet users would get a more realistic appreciation of their actual sysadmin skills in a big hurry if failing to maintain their machines cost them something. They might actually hire someone to fix their computer, or take a class. They might also start demanding responsibly programmed software from their OS vendors.

Maybe we should take a better look at our PCs and see what an extension of the current safety regulations could do for us. If I take apart a PC, I find a plethora of quite strictly regulated components. A power supply has to conform to a large set of regulations concerning electrical safety (and, surprise surprise, all kinds of regulations regarding spurious HF radiation -- it is, after all, a switched power supply). A wifi card has to be certified for use in a specific country, like a traditional modem. Even the mains plug has to be certified. All these rules rest on a three-tiered legal system: laws stating that equipment has to conform to a set of requirements, the rules themselves (mostly laid down in standards), and the legal safeguard of liability. All this has been in place for a long time, basically for every industry in existence. And it works. You can buy electrical equipment and be reasonably sure that it is safe to use.

But when it comes to computer security, it seems as if we have no history at all of dealing with safety and security issues. Which is, obviously, not true. So we don't have to start from scratch. Most of the legal framework already exists, embedded in consumer safety law, industrial safety law and the like. Most of it may need a little extension to cover (home) computers, but I don't think we need completely new laws. Most of the work is actually in the standards and regulations. And this is the point where everybody starts screaming that they don't want to lose their freedom of choice, or lose root, etc. But regulation is not about such issues. An example of a possible rule would be the "rule of least privilege". This rule would basically state two things: first, that a program is only allowed the resources/permissions it really needs, and second, that the OS has the task of enforcing this rule. Take a look at Bitfrost to see how it works.

As a rule in itself it states nothing new, but the combination of law, regulation, and liability means that over time a system of "acceptable/good/legal practice" develops. That is something the legal system is good at. Users still have the freedom to do what they want to do. But there is now a kind of standard against which software can be tested, and action can be taken if some company makes a mess of their security.

I think we will move in this direction eventually. All industries did in the past. I don't think making an ISP your system admin is the way to go, mostly because that leads almost certainly to abuse of power. And, for a lot of people, there is no choice of ISP; they have only one available. I think that sensible regulation, liability for software, and the development of a system of good practices is the way to go.
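As a toy illustration of that "rule of least privilege" (a purely hypothetical sketch; a real system such as Bitfrost enforces this inside the OS, and every name below is invented for the example):

```python
# Toy sketch of the "rule of least privilege": a program declares the
# resources it needs up front, and an enforcing layer denies everything
# else. All names are hypothetical; real enforcement (e.g. Bitfrost on
# the OLPC) happens at the OS level, not in a library like this.

class Sandbox:
    def __init__(self, declared):
        # Permissions granted at install time -- and nothing more.
        self.declared = frozenset(declared)

    def request(self, resource):
        if resource not in self.declared:
            raise PermissionError(f"undeclared resource: {resource}")
        return f"granted: {resource}"

# A media player only ever declared audio output and its music folder,
# so a later attempt to open a network connection is refused outright.
player = Sandbox({"audio", "read:~/music"})
print(player.request("audio"))        # granted: audio
try:
    player.request("network")
except PermissionError as e:
    print("denied:", e)               # denied: undeclared resource: network
```

The point is not the mechanism but the contract: the declaration is the testable "standard" against which software behavior can later be judged.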

1. Why go straight to having the ISPs act as IT departments? Shouldn't the first line of defense be improved software engineering techniques that reduce bugs in the first place?

2. Which brings me to the second question: what, exactly, are the economic trade-offs involved in computer security? Of course, the basic story is cost-shifting: the software purchase price is lower due to less intensive QA before release, but someone down the line has to pay for the fallout from security problems. (Similarly, less security typically means greater flexibility in adding new functionality -- like not having to muck with selinux settings just to drop in a new piece of whiz-bang software -- at the cost of greater risk of problems down the line.) But who comes out ahead and who comes out behind, and by how much? Can an end user effectively externalize most of those security costs today? Is the current approach overall less expensive than up-front security if you take into account the time value of money?

"...regulation [has been] in place for a long time, basically for every industry in existence. And it works."

That's true. If by "it works", you mean "it's broken, and is unfathomably wasteful."

Federal regulations cause $1.5 trillion (in 1999 dollars) in economic output to be lost each year. This is roughly equivalent to the entire economic output of the Mid-Atlantic region: Delaware, the District of Columbia, Maryland, New Jersey, New York, and Pennsylvania.
Richard Vedder
John M. Olin Visiting Professor of Labor Economics and Public Policy
Center for the Study of American Business

In fiscal year 2000, some 54 federal departments and agencies and over 130,000 federal employees will spend over $18.7 billion writing and enforcing federal regulations.
Center for the Study of American Business
Regulatory Budget Report No. 22
August 1999

The average annual cost of regulation, paperwork, and tax compliance for firms with fewer than 500 employees is about $5,000 per employee. Firms with 20 to 49 employees spend, on average, 19 cents out of every revenue dollar on regulatory costs.
U.S. Small Business Administration

Since 1997, the size, scope and cost of the federal regulatory system has returned to pre-1994 levels and once again is increasing at record rates.

15,280 final rules were issued by federal regulatory agencies between April 1, 1996 and September 30, 1999.

220 of those rules were "major", each of which will have an annual effect on the economy of more than $100 million.
Angela Antonelli
The Heritage Foundation
Issues 2000

Do you mean you want the ISP to provide support for different vendors' OSes and applications, which the user picks? Do you mean the user will be "disallowed" from picking applications the ISP deems unfit?

You want the mom & pop ISPs, possibly with a single staffer, to do that? You want the giant corporations (which will likely offshore it) to do that?

Well, if they ever do, I won't want that ISP.

I'll take the unfiltered connection any day.

There already are ISPs that provide some security features to their users as a selling point, and that's good. There are others that provide no filtering whatsoever, and that's a selling point too. I haven't looked, but I'm sure I could find an ISP that provides totally managed service (WebTV?), though they don't seem to be common. All are possible, and desirable, as they provide solutions for the wide array of people out there.

1) Here in the UK, at least, there is serious concern about a "digital divide". Many goods and services are cheaper online, including an increasing number of government services. Access to debate and democracy is easier online. Meanwhile, the rate of uptake of home internet connection in general, and broadband in particular, is slowing, and universal access is looking far distant. Thinking about the cost per seat of decent IT services, and the current cost of ISPs, surely you're talking about increasing the cost of internet access by a factor of at least 5. That's not a good way to encourage uptake, and when uptake enables participation in new marketplaces and in democracy, that's a significant political issue.

2) IT departments are only incrementally better at managing security than home users. Sure, home users are the current low-hanging fruit, but there are plenty of virus victims and zombie PCs behind corporate firewalls, and that's even when the IT department restricts the hardware and software, which ISPs clearly can't do. So, all that expense might not even have a decisive effect on "public health". Maybe it would do enough to make a big difference, and maybe it wouldn't, but I'd be interested to see why you'd prefer to spend the money on the ISP rather than the OS.

3) If I get my email from Google via https, my ISP is going to be justifiably distressed to learn that it is responsible for somehow preventing me from falling to a phishing attack. Personally, I don't want my ISP to provide security, I want my OS and my online applications to worry about that. My ISP is responsible for providing the network layer: why should I be locked into their email, update management, and backup systems?

I think Bruce fell victim to a syndrome as old as humanity itself: being invited to help those in power to rule has seduced too many good minds into believing that coercion - under some circumstances, of course, of course - can be good.

Bruce - here's the real challenge - figure out how to fix the end-user system security problem WITHOUT threatening any peaceful citizen with grave violence.

The ISP is the real point of power in this problem--it's the access point. The computer and OS can be anything, but the ISP determines whether the computer is online, whether the connection is fast or slow. The ISP is the choke point, or border, or what's the phrase?
Therefore, it's a good starting point for filtering spam, handling viruses, identifying trouble areas.

I bet ISP support personnel can determine traffic volumes and spot potential problems (suspicious heavy traffic) fairly efficiently. In a system like the Internet, where huge numbers of beings organize and share resources, it makes sense to delegate responsibility to the points where it can be effectively implemented to increase safety and throughput.
I don't want Big Brother, but I'm realizing that Internet and computing issues have gotten so big and complex that even conscientious people cannot keep up with them. I don't want computer updates and maintenance taking over my very existence--the cycle of creating a new problem every time you solve an old problem drives me nuts. So what's the logic flow of the issue?

Both sides are begging the question. Is there an end-user security problem that needs solving?

In my experience people notice that their computer is slow and blame it on the computer being old, which is true, in a way (they're running an unpatched old OS with lots of malware on it). They then buy new hardware and get a fresh OS install. More savvy users just reinstall the same OS that got them in trouble in the first place. And then they get on with their lives, which do not involve poring over computer manuals and tweaking security settings. Only security professionals can imagine that mainstream users would actually consider this as a worthwhile use of their time.

If your computer were like a car, it would cost $15,000 and need periodic oil changes or else it might seize up and fail, costing $1000 or more to get it working again. It would also need periodic preventive maintenance or else you might lose control and die, or kill someone else (and possibly be liable for millions of dollars of damages). If that were the case, home users might bother with proactive maintenance.

As it is, $400 every 2 or 3 years for a brand new computer is probably enough effort to satisfy a home user. Sure, they could have spent hours and hours patching and reinstalling the OS and hardening their systems, but it's not worth their time.

As for the big-government suggestions (ISPs should be a nanny state proxy, or hold money in escrow etc.), why bother? The market has already spoken, and the answer is that crappy security is acceptable, in order to get the most obvious OS choice with the largest number of applications.

Until home users decide that they don't actually want the most complicated, most feature-heavy, most general purpose computer and operating system, they will keep paying a price of security problems. There are several other options available to users, and these have been available for years, but the overwhelming choice is to keep buying PCs running Windows and using MSIE, and suffering the incompatibility, security and usability problems those bring.

I set up a Vista Home Premium PC for my sister. I created an admin account and a day-to-day account for her non-administrative activities. She gets it fine and she loves Vista.

She liked it so much, she ordered broad band, and had Qwest come out and set her up because there's an (unused) alarm system in the phone circuit that had to be dealt with.

So, noticing that there were no children in the household (my sister is a retired schoolteacher), the guy assumed she wanted to run in the administrator account and, without asking her, set her up so that she has to be administrator to log into her service.

She's going nuts with the admin-permission requests and also with the default browser setup they create (MSN Explorer instead of Internet Explorer). She hated AOL's interface, and this is not an improvement for her.

I'll go over next week and see if I can repair the installation and have her working safely. I'll also check out her antivirus, firewall, and other settings at that time.

I have absolutely no sympathy for fools who attempt to use the Internet, particularly those fools who use Micro$oft products. Having seen the writing on the wall after I installed Windows 98 and it crashed constantly, I put myself on a steep learning curve for about a month and then switched over to Linux. That was almost nine years ago. Linux was far more difficult to use then than it is now. Most Linux distributions are said to be easier to install than Windows. Try Ubuntu. It's designed for non-geeks. No viruses. No hassles. Low cost. Very reliable. Very secure.

I disagree that most kids these days are computer savvy. Some are, maybe, but I work with hundreds of twenty-somethings who haven't got the first clue about how to deal with their laptops or how to avoid spyware and viruses. And they're too narcissistic to care how it affects anyone but themselves.

Funny, I've been using Windows since the 3.1 days and have never had a virus. I hated Windows 98 and never had it installed on my own PCs, but NT 4.0, 2000, and XP work just fine if you just follow a few simple rules: use and update an a/v product, set Windows to update automatically, and use just a smidge of common sense.

I run a few Linux servers and am on the Linux security notice listserv. I get notices every single day about another Linux vulnerability that needs to be patched. I don't see that it's any more or less secure than a properly maintained Windows box. An unpatched Linux box is NOT reliable, secure, or virus/hassle free.

My ISP filters my email against spam, but sometimes it also blocks legitimate emails. Luckily, I can turn that off.

My DSL modem has a built-in firewall with almost every port closed by default, but sometimes that will prevent me from running particular software. Luckily, my modem allows me to open any ports I wish to open.

I don't want to lose the option to control my system and internet connection. I don't want my ISP forcing me to run particular brands of software. I don't want them to force me to upgrade to Windows Vista once Microsoft decides to discontinue Windows XP patches. I don't want them, or the government, involved in my security decisions, unless my system has already been compromised.

What I want is a system that is "secure enough" out of the box, with the option to enable more dangerous features as required. This additional step helps reduce the number of vulnerable systems online and allows users the opportunity to learn *why* certain options are disabled by default.
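A "secure enough out of the box" design like the one described could be sketched as follows (hypothetical Python; the feature names and reasons are invented for illustration, not taken from any real OS):

```python
# Hypothetical sketch of "secure enough out of the box": risky features
# ship disabled, and turning one on surfaces the reason it was off --
# the small teaching moment the comment asks for. All names invented.

DEFAULTS = {
    "remote_desktop": (False, "exposes a login service to the network"),
    "file_sharing":   (False, "makes local folders reachable by others"),
    "auto_run_media": (False, "runs code from inserted media automatically"),
}

def enable(settings, feature):
    _, reason = settings[feature]
    settings[feature] = (True, reason)
    # Don't just flip the bit: tell the user *why* the default was off.
    return f"{feature} enabled (default off because it {reason})"

settings = dict(DEFAULTS)
print(enable(settings, "file_sharing"))
```

The design choice here is that the explanation travels with the setting, so every opt-in doubles as a short security lesson.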

While normally I agree with your .sig (which I presume you intend as sarcasm), I think your "free market" suggestion may be a good idea in this instance.

The problem is that, from a security standpoint, the market for personal computers doesn't work. My suspicion is that one reason it doesn't is that the average purchaser has no idea how secure (or insecure) a machine is. So much as we require food to be labeled with its nutritional value, let's require security labeling for all new computers sold.

Obviously, since this label is designed for end users, we'd want it to be very simple - perhaps two numbers. I'll propose the first as the logarithm of the number of seconds it takes to access data on the computer with physical access (but no permission), and the second as the logarithm of the number of seconds it takes to access data with only the IP address and an IP connection (but no permission). Obviously, these assessments would need to be done by independent labs and would need to be updated at fairly frequent intervals (perhaps weekly). Computers with insufficient (or no) testing would be marked "unknown"; this could also be used to avoid an onerous cost to small makers, and to those making special-purpose computers where this sort of testing isn't useful.

With this, you might get a "free" market that would function better from a security standpoint. If I'm buying a Physical: 2 Remote: -5, I at least know that my computer will be attacked quickly if I put it on the internet or allow anyone else near it. If, on the other hand, I'm buying a Physical: 6 Remote: 9, I'd feel confident that the security is (at least at that moment) good enough for my personal use.
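As a back-of-the-envelope check, the proposed label values fall out of a logarithm of the lab's break-in times (base 10 is an assumption here; the proposal only says "logarithm"):

```python
import math

# Back-of-the-envelope version of the proposed two-number security
# label: each number is the logarithm of the seconds an independent lab
# needed to break in. Base 10 is an assumption; the comment only says
# "logarithm". The times below are made up to match the examples.

def security_label(seconds_physical, seconds_remote):
    return (round(math.log10(seconds_physical)),
            round(math.log10(seconds_remote)))

# "Physical: 2 Remote: -5" -- broken in ~100 s with hands on the box,
# and in tens of microseconds once it is on the net:
print(security_label(100, 1e-5))   # (2, -5)

# "Physical: 6 Remote: 9" -- roughly 11 days physical, ~30 years remote:
print(security_label(1e6, 1e9))    # (6, 9)
```

The log scale is what keeps the label simple: each extra point means the lab needed ten times longer to break in.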

I used to be an advanced programmer, until employers stopped hiring folks who were older, experienced rather than certified, and had made good money.

I am well known as someone who can root difficult viruses out of a machine, and I often get calls from folks wanting help.

As I said I have a fair level of concern.

I have yet to observe a PC that does not have unidentified spy-scum-crime-attack-ware infesting it including some allegedly secure machines inside of large corporate firewalls.

Most of what I see I would term severe infestations, with numerous components performing the infamous "unidentified" actions.

The one thing these machines all have in common is the OS. All Microsoft.

We will suffer for allowing such an insecure system to become the de facto standard of industry, commerce, communications, infrastructure, and the home when we are attacked by a horde of zombies controlled not by some lowly script kiddie but by a military or industrial bad guy with some programming muscle on their side and a nefarious agenda.

Frankly it scares me.

We don't need Big Brother as an IT department, but Microsoft needs to be "encouraged" to make something more than a cosmetic upgrade to their OS, and to make it work with a reasonable minimum hardware requirement so that legacy users can update without buying a new machine.

I am at the point where I only use my Wintel machine as an IE test bed for web dev, because I don't want my security to be that compromised.

Regarding government control of the internet, would we be using the China model? If that was not where they started, I would bet that is where we would end up. The people who love to control other people would see to it. All they need is their foot in the door.

I definitely do not want my ISP expanding a gatekeeper role. Even now, contacting Tech Support is unbelievably frustrating. My ISP has outsourced to a female computer voice. She recently insulted me with this piece of condescension:

(After several minutes of troubleshooting:)

Me: I would like to speak with a human.
Her: I think you said, 'I would like to speak with a human', is this correct?
Me: Yes.
Her: Well, I think I can still help you. Let's start again.

The last bit was spoken with discernible hauteur. I was stunned. I was not allowed to speak with a human.

No, thank you ... I don't need any more reasons to contact my ISP Tech Support than already exist.

Yes, real problems exist and need to be addressed. I am not educated enough to figure them out, but, please, for those of you who _are_ smart / educated enough, please try to preserve as much liberty as you can.

I like Adrian's last two paragraphs. Protect, Educate, Enable.

Vista has some of this behavior now and even presents actual Help file pages in a small window to the side with some of the things that have come up. This is 'small dose' learning ... a little at a time, as needed. This is good.

Meanwhile, as an older member of the unwashed herd, I will continue to remain sensibly paranoid ... (g) ... vanilla

- Anyone with lung or heart disease can never buy cigarettes
- Anyone who is overweight can no longer buy cigarettes or high-cholesterol food
- Anyone with diabetes can no longer eat at a fast-food restaurant or buy high-cholesterol food

Health care is in a crisis; why NOT do the above?

Computers are in a crisis; why NOT have some outside group decide how you run your computer?

There has to be a better way...

Why not require the computer industry to run TV, radio, and print ads on how to protect your computer?

Why not require educational campaigns, after all, most folks seem to think education is a good idea?

I can't agree with that. All a law will do is raise prices (as ISPs will complain they need to charge "a fair rate" for this service), affect people who never use the service, and shift the burden to folks who have no stake in the outcome. They'll have no reason to be efficient about it. Why should they be? Every year they get to go before the Government ISP Tech Support Commission and demand an increase in taxes... err, surcharges, just like the US Postal Service.

Laws cannot solve the problem. They will merely add costly and wasteful bureaucracy. Then we will complain about the ineffective bureaucracy just as we complain about the TSA.

Jaime has come pretty close to the expansion on the car analogy I planned to make.

When cars were first invented they were fragile contraptions, and only people with money and mechanical skill could own and operate one. The personal computer was born in much the same fashion, but has progressed to the point of being a "modern appliance" much more quickly. With some exceptions, buyers today want a "transportation appliance" that gets them comfortably from point A to point B and the market accommodates them.

Consider a modern car: It's fast, it's safe, and it's very reliable. The basic task of using it for transportation is easy and accessible to millions. And these things are the expectation buyers have -- if you buy a new car you'll get all these things from it. Maybe a little more comfort or style or a little more speed, but these are what you get regardless of what you buy.

Consider a modern computer: It's fast, it's cheap, and it's...well not as reliable as your car. Applications crash, your network connection goes down, there's a never ending stream of updates that it requires. And these too are the expectations buyers have -- the computer will eventually slow down and you'll just have to buy a new one. Sort of like a rusting out car from the days of planned obsolescence.

Continuing the comparison: How does a person know there's a problem with their car? Well, it may run rough or be sluggish. Perhaps warning lights appear on the dash and the driver knows to have it checked. There's a new noise. Basically, a new behavior appears. How do you know there's a problem with the computer? In the past, viruses made it crash or corrupted files. Pornographic windows popped up on the screen. That there was a problem was obvious, and the user took the computer in for service (or investigated himself).

Now consider "modern malware". Quite a lot of it is designed to be invisible to the untrained eye. Malware doesn't want to crash your computer anymore -- it wants to use it and your network connection. It doesn't want you to take your computer "to the shop" so it hides itself and works in the background. There are still signs of problems, but you have to know how to detect them beyond the accepted expectation that "computers just do that" or "they just get slower and you buy a new one."

There's a lot of focus here on training users, and that is important. But there's also the barrier of user expectations to overcome. Users have to expect consistent performance from their computer before they will recognize new behavior they didn't install. They need to notice the "funny noise" under the hood and get it checked out. No more "computers just do that."

I have to agree more with Bruce than Marcus on this one, although he makes a few good points. True, today's children often are able to use computers much better than today's adults can. My son started installing software on our PC himself before he was 3. However, not all children get this kind of early exposure, nor are even most of those that do learning the necessary security principles to administer their computers securely. My kids are always upset because the anti-virus software I insist they use slows the computer down and the firewall interferes with their games. As it is, they always press "allow". They hate the auto-update because it can reboot the system when they don't expect it to, and it has broken games in the past. Furthermore, the landscape is always changing; what they do manage to remember about security is too specific and will not be applicable in a few years.

Marcus is right about the speed of public health. Sewers did not become commonplace until the link between cholera and human waste was established during the cholera epidemics of the 1850s, even though the effects of not dealing with urban levels of human waste would strike any modern city dweller as more than adequate reason to install sewers, disease or no disease.

"There's no reason why they can't provide 'clean pipe' service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory."

Whoa! Hold the phone! Mandatory?!? Have you lost your squid-loving mind?

Problem #1: The self-centered problem. My computer is secure, but you want to pass a law requiring me to pay more to my ISP so some computer retard down the street can be pseudo-secure? No way!

Problem #2: I use the term pseudo-secure because that is what it is. All it takes is for a cracker to breach the ISP, and then he's got blanket access to all the computers dependent upon the ISP for their security.

Problem #3: Crappy tech support. Calling my ISP right now is a huge waste of time: I spend 30-60 minutes on the line just to have some ignoramus following a script tell me something I know for a fact is wrong. You want society to entrust their security to these people? That might be marginally better, but not enough to justify the added cost you are trying to legally mandate.

Problem #3a: Flooding tech support with calls on security issues will make the tech support experience worse than it already is.

Problem #4: Liability. If it's my ISP's responsibility to ensure my security, does that mean I get to sue them for data loss due to every new virus that comes along? Because it will happen. Virus writers are always a step ahead of anti-virus writers. It's the nature of the beast.

Problem #4a: Why not? You've mandated by law that they have to do this, and I have to pay them for it, and now I have no legal recourse when they screw up?

Problem #5: Censorship. If the ISP is in charge of deciding what is a security risk and what isn't, what's to stop them from blatantly censoring all the patterns of 1's and 0's they don't like?

"Nope. My network is solid, and so are the networks of my customers. Their security does not depend on yours, I don't care how many virus-laden emails your machine sends, or how infected with malware it is."

So...you are claiming that, with the very limited resources that your private-individual clients have at their disposal, you can (and do) implement a network so "solid" that it is immune to a DDOS attack?

So why haven't you retired to your own banana-republic island by now? Lots of huge companies - not to mention quite a few governments - would pay big bucks for that kind of setup!

After all, the DDOS attack is one of the prime examples of why we want other people's computers to be "reasonably secure". If you can't deal with that, then you are full of $#it!

Furthermore, your clients' networks are probably not for "mostly internal" use. The "average home user" is extremely interested in Internet connectivity.

Even if you could somehow insulate their systems and home networks from any possible external attack, it wouldn't do them much good if the entire 'Net goes down (or even just the part they are interested in at the moment).

We've seen it happen before. It will certainly happen again. Most such attacks rely on rapid, relentless propagation through the large number of unsecured systems out there.

As I said, your "totally secure 'Net connections" are not much good if there is no 'Net on the other side. Thus, you inherently *do* care if other people's systems are secure, lest they become an unwitting threat to the "Environment" that you don't and can't control, but nevertheless need in order to function.

"So...you are claiming that, with the very limited resources that your private-individual clients have at their disposal, you can (and do) implement a network so "solid" that it is immune to a DDOS attack?"

How silly. Of course not. You misunderstand what a DDOS attack is.

A home user's computer is never the TARGET of a DDOS attack (although theoretically it could be, but what would be the point?)

A home user's computer comes into play in a DDOS attack by being *part of the attack network*. Prevention from becoming part of the attack network can absolutely be attained; you can certainly secure a home user's computer against the vulnerabilities that are most often used to surreptitiously join that computer to a botnet.

"After all, the DDOS attack is one of the prime examples of why we want other people's computers to be "reasonably secure".

Right. So they don't become part of the attack network. Not so they can be immune from the DDOS attack itself.

"Even if you could somehow insulate their systems and home networks from any possible external attack, it wouldn't do them much good if the entire 'Net goes down (or even just the part they are interested in at the moment). As I said, your "totally secure 'Net connections" are not much good if there is no 'Net on the other side."

Have you seen me anywhere claim that I can guarantee the uptime of the entire Internet?

This will be my last post in this thread. It's getting too ridiculous.

I think I have to go more with Bruce's side than Marcus'. Historically, the only way to adequately maintain and develop a "commons" is through government regulation. That's why we have "utilities", even if we have begun the disastrous experiment of "privatizing" them.

Even in cases where there is a corporate monopoly or consortium "managing" a shared resource "for the public", it effectively acts as a government with regards to using that resource. Ma Bell certainly didn't let "just anybody" connect "any old telecommunications device" to their wires. They had, and vigorously enforced, "standards". Sure, initially the standards consisted of "use our stuff", but the de-facto electrical and signaling standards defined by that "stuff" eventually became the "open" standards by which others could access the wires. The whole thing would have collapsed into a pile of static-infested slag if there had been no standards enforcement.

A lack of standards has universally been a hindrance to widespread interoperability, whether in terms of monetary standards, weights and measures, or communications standards. Usually, the chosen "keepers of the standards" are governments. Even in those cases where the standards are independently developed and promulgated by industry, the enforcers of the standards are governments - by definition. When the Hanseatic League started enforcing standards, it *became* the government!

I'm by no means certain how to go about it (the ISP-based SysAdmin might work, but might not), but I think what we are really saying here is that we want some sort of "security standard" for systems that connect directly to the Public Internet. One of the parameters of such a standard might be "testability". A 'Net-connected system has to pass an automated security-check test before it is allowed out of the "sandbox" of whatever entity is providing the connection.

This sort of approach, mandated by government (even if that "government" is an ad-hoc group of cyber-vigilantes who "put down" any system they find that doesn't pass the test) could easily be implemented by ISPs, *without* their having the deep and broad system-control that a typical corporate IT department demands of the computers under its purview. Basically, their "control" of your computer is limited to whether or not they allow you "out". You'd start a session, and at first you can only get to their "sandbox", where your system is tested, and various virus-updates, patches, etc., are available for download. Once you pass the tests, you are allowed access to the wider 'Net. Periodic or random security-tests (basically, simulated attacks) are run against your system if you stay connected long enough, and the ISP simply "unhooks" you if you ever fail.

Part of the "standards" an ISP would have to meet would include supporting the "sandbox", and having some sort of published and registered "drop point" for receiving both Test-Script updates, and End-User Security-Downloads for distribution through their sandbox.

Some entity or entities then have to be delegated the responsibility for producing, maintaining, and updating the Test-Scripts. At this point, I'd think that would devolve to the distributors of various Operating Systems, with some sort of "Master Override" by an entity like CERT (so that a given O/S could be arbitrarily shut out, if its distributor was not being responsible and timely in testing against known threats - most likely used as an emergency-stop against a rapidly-spreading attack).

That doesn't really seem too "intrusive" to me - all the ISP knows about my system is whether or not it passes the current "security check" penetration-test. They provide common solutions for download - and those would probably be "point and click" auto-install packages for Windows, Mac, and maybe some Linux distros, accessed from a Web Browser. Anything else is *your* problem. They really don't care *how* you qualify, just *that* you do.
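The gating logic described above can be sketched in a few lines. To be clear, this is a hypothetical illustration: the probe names, the function names, and the two-state "sandbox vs. full access" outcome are all invented placeholders, not any real ISP API.

```python
# Hypothetical sketch of the ISP "sandbox" gate described above.
# The probes and logic are invented placeholders, not a real ISP API.

KNOWN_PROBES = ["open_smtp_relay", "default_admin_password", "unpatched_rpc"]

def failed_probes(host_responses):
    """Return the probes the host failed (True = vulnerable)."""
    return [p for p in KNOWN_PROBES if host_responses.get(p, False)]

def gate_session(host_responses):
    """Admit a host to the wider net only if it passes every probe;
    otherwise keep it in the sandbox, where patches are offered."""
    if failed_probes(host_responses):
        return "sandbox"       # restricted to patch/update downloads
    return "full_access"

print(gate_session({"open_smtp_relay": True}))   # sandbox
print(gate_session({}))                          # full_access
```

The key property of the scheme is visible here: the gate only needs pass/fail results from the probes, not administrative control of the machine behind them.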

"A home user's computer is never the TARGET of a DDOS attack (although theoretically it could be, but what would be the point?)"

Well, this is basically true (currently), especially against "professional" attacks.

But small businesses have definitely come under attack. Gibson Research was the target of a sustained attack orchestrated by a teenaged boy for no particular reason other than that he could. Steve Gibson is no ordinary user... he basically developed a bunch of tools to reverse-engineer the attack, and seize control of the offending 'Botnet, hunting down the perpetrator. That alone should have scared the little piss-ant off, but he wasn't smart enough to understand the implications, so the attacks continued - and Steve was too ethical/reasonable/whatever to shut the boy's system down with a DDOS of his own.

Basically, DDOS attacks clearly are a potential threat to end users, if the whim of a 13-year-old script-kiddie is all it takes to launch one. As children become ever-more "computer-savvy", we'll be seeing more and more "spite-attacks" against individuals. It's really just the natural evolution of the current upsurge in "online bullying" going around. Especially as it will probably be driven by a "Nerds strike back" sort of mentality, as the ostracized geek types finally get fed up with having mudslinging web-pages about them put on Facebook.

How is it possible that we in the computer industry have created such a
shoddy product? How have we foisted on people a product that is so
difficult to use securely, that requires so many add-on products?

I refer you to Fred Brooks' question--and answer--from his seminal work, The Mythical Man-Month: Essays on Software Engineering:

Q: How does a project get to be a year late?
A: One day at a time.

That is, large software projects are still largely misunderstood. Many in our industry understand that the tasks involved in software development are non-partitionable, e.g., you can't birth a baby in one month by hiring nine women. By "understand," I mean that they could answer correctly on a quiz. Yet projects which fall behind schedule still get "more resources," i.e., more people are hired and/or assigned to the project. This has the unfortunate effect of making the project later, because all of the new people must be brought up to speed, which takes time from the people already working on the project.
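The coordination cost behind Brooks' observation can be made concrete with the standard back-of-the-envelope model (not from the comment above): a team of n people has n(n-1)/2 pairwise communication channels, so headcount grows linearly while coordination overhead grows quadratically.

```python
def comm_channels(n):
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Doubling the team more than quadruples the coordination overhead:
print(comm_channels(5))    # 10
print(comm_channels(10))   # 45
```

This is why "adding more people" to a late project tends to make it later: each new hire adds a channel to every existing member.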

And this is just for the projects which were well designed to begin with. Think of how many systems may have had elegant and robust designs at inception, then have been outpaced by the doubling pace of technology and its new markets.

Another persistent problem is the dominance of business over development. The bottom line is the master FUD tool in software companies. I would expect that you are familiar with this problem; if not, read any twelve Dilbert cartoons, and it will become apparent. Management doesn't get developers, and developers don't seem to get management.

Your points relative to these questions are well taken. The market for personal computers has indeed exploded, and continues to grow. The operating systems are barely keeping pace, if at all, and the security? Forget it. The botnets are getting better at running unnoticed, and they're also getting better at payload delivery.

I recently read what Bertrand Russell had to say about posing a problem, that it's irresponsible to pose one without offering a solution. I apologize; I don't have a solution. I believe it'll get worse before it gets better, even for me, a career computing professional. We human beings seem willing to ignore risks in favor of instant gratification.

I suppose that open source, both for the auditing advantage that you mention and for the distributed development model, may be a part of the solution. The larger open source communities seem to be very effective in solving problems and in countermanding the problem of adding resources. I don't see this happening any time soon, however. Consider Microsoft's leaked "Halloween Memos," in which they apparently decided that open source was a competitive threat to be fought rather than a model to learn from.

I think that the grand lesson to be learned is that secrecy inherently has long term problems. Julian Jaynes observed that consciousness enables deception. It is my hope that consciousness will also solve the problems introduced by deception.

-- "Basically, their 'control' of your computer is limited to whether or not they allow you 'out'. You'd start a session, and at first you can only get to their 'sandbox', where your system is tested, and various virus-updates, patches, etc., are available for download." --

Their control is limited to allowing me "out", by first requiring me to install the proper virus-updates and patches, which they conveniently provide for me? Your "limited" control doesn't seem quite so limited when you look at it that way.

-- "Part of the 'standards' an ISP would have to meet would include supporting the 'sandbox', and having some sort of published and registered 'drop point' for receiving both Test-Script updates, and End-User Security-Downloads for distribution through their sandbox." --

Sort of like an ISP-side version of the automatic updates Bruce often complains about? Using a "drop point", as in "just drop a couple of scripts into your favorite incompetent ISP's computer and soon you'll be able to take control of its customers' machines"?

-- "That doesn't really seem too 'intrusive' to me - all the ISP knows about my system is whether or not it passes the current 'security check' penetration-test." --

But what does that "security check" entail? For instance, your "emergency-stop" scheme won't work unless my ISP knows which brand and version of operating system I am running. And no doubt they'd also want to know which client software packages I'm running, since those may be vulnerable as well.

-- "They provide common solutions for download - and those would probably be 'point and click' auto-install packages for Windows, Mac, and maybe some Linux distros, accessed from a Web Browser." --

They provide the downloads, along with some extra code of their own, perhaps?

Thanks, but no thanks. I don't want the government to force any of that upon me.

I haven't thought this through completely, but I would like to float this idea and see where it leads.

How about we hold users (owners, operators, whatever) responsible for what their computers do? If your computer sends spam, it's your fault. If your computer attacks mine, it's your fault. Etc. Unless you can provide credible evidence that it wasn't your fault, e.g. someone else was using your computer at the time.

If we do this, the pain will be closer to where the problem is. Currently, security is mostly an externality. Most malware isn't obvious to users of the infected computers. It's others who get the spam and the worms and so on. Why would you invest the effort to secure your computer if its security doesn't affect you? Well, if you will be held responsible when your computer is compromised, you _will_ be affected.

By itself, this is somewhat draconian. Some attacks using compromised computers can cause a lot of damage. If we hold users responsible for actions carried out from their computers, unsuspecting users could end up facing bills totalling thousands of dollars. Better for them to quit using computers altogether, or at least to disconnect them from the Internet. That's not the outcome I'm looking for, however.

The situation where you may be faced with a large expense at any moment, without having much control over it, reminds me of something. Typically, we have insurance for this sort of thing. We could do the same here. Provide users with the opportunity to insure themselves against the cost of a compromised computer, and things become a lot more reasonable. With a bit of luck, different policies and premiums will be offered for different risk profiles, so that users can choose between running a more secure system and paying a higher premium. That way, users would have an incentive to invest in security, but nothing draconian would happen if they didn't.

Besides insurance, there could be an option to outsource security. For example, ISPs could offer to help protect your computer from incoming malware and/or filter outgoing badness. This could include accountability, too, where they are supposed to make sure nothing bad happens involving your computer, and if it does anyway, you can collect damages from them.

I fully agree that my off-the-cuff proposal may have huge implications and limitations that I didn't fully consider. It's just that - a harebrained proposal for people to shoot at and run with.

You've shot some good holes in it. Can they be patched? Do you have a better solution?

I didn't mean to say you had to use the ISP's security-patches. I meant that (most) ISP's would (basically, as a business function) supply "common" patches for their customers, so the majority of quarantined customers could easily resolve their situation. That is, grandpa Windows-user can click the "fix it" button on a pop-up, and become reasonably secure, again.

You don't have to use their patches. You *do* have to pass their test.

Here, of course, we get to the political meat of the problem: who makes and vets the tests? We certainly don't want tests of the sort "do you have signature 'X' in reply to query 'Y'?", which would quickly lead to "are you running the appropriate G*D-sanctioned latest version of MS-Windows with a verified registration?" Bleah! I agree totally.

We would most-likely want either black-box penetration tests of a totally-generic type, or grey-box penetration tests (probably supplied or supported by the vendor) that check for system-specific known weaknesses. As you correctly point out, the more "specific" the testing (and emergency response) is, the more "they" have to know about your system. Which, of course, means that attackers would probably know more about your system, too.

On the other hand, many of the penetration scripts out there have lots of tools for testing what version of browser and underlying O/S you are using. Maybe it would be enough (as a first hack) to have those included in a general black-box test, which can then concentrate on more-specific tests for the type of system(s) it detects. If it guesses wrong, you're obviously somewhat-protected against script-kiddies, anyway...

If the standards do end up requiring end-users to "advertise" some sort of system specification coding ("Win98 with IE4", for example), then clearly the paranoid cognoscenti are free to "lie". As when using an Opera Browser, reply with any signature you want, so long as you pass the tests. If this gets you erroneously "autoblocked" in the occasional case when a flashfire-bug is threatening the entire 'Net, c'est la vie. The 'Net was going down otherwise, anyway - you're probably not much worse off.

Of course, "lying" about your system would soon lead to everybody installing daemons/drivers/etc. that claimed they were running OpenBSD, using the LYNX browser... but hey - that's a de-facto standard in the making. Then everyone just starts writing tests against that putative interface, with all the appropriate probes for masquerading weaknesses.

I think something like this could work. But more to the point, I think something like this *will* be developed and imposed upon us - it is the historical solution to chaotic access to communal resources. If we want the solution to be "good" (or at least acceptable and tolerable), we should start discussing it sooner, rather than later.

The usual approach to standardization is for "interested parties" to hash out some rules that "work for them". Then, they can "float it on the market" and let competition weed out the non-conformists (unless they truly do have a marketably-better approach); they can set up a "certification process", which often becomes a "pay-to-play" industry cabal that muscles opposition out of the market; or they can seek Government sponsorship and enforcement. None of these approaches takes much notice of end-user desires (except to the extent that overall marketing initiatives are affected), unless the end-users are actively part of the process... usually from the very beginning.

This is the approach that I once thought "the best". However, it leads to a particularly insidious form of cyber-terrorism: attacks with the specific intent of financially destroying the target.

There are basically two kinds of cyber-attack we have to defend our systems against: random or mass attacks, and targeted penetrations.

Currently, the random/mass attacks are the "public health threat" to the 'Net. They are widely considered easily-defended against if people take reasonable precautions and regularly update their systems. It is the fact that most users seem incapable of exercising this level of "competence" that is being discussed.

Targeted attacks are more of a personal/corporate security-issue. They are also impossible to totally defend against. Really, the best you can do is to have reasonably good intrusion-detection systems, and a comprehensive recovery-plan.

Under the current "scheme" (such as it is), targeted attacks have no reason to "advertise" themselves by turning their targets into spambots or similar attention-garnering broadcast stations. In fact, this would often contravene the very purpose of the attack - which would most-likely be the secret gathering of corporate (or national) intelligence.

However, if the end-users were made directly liable for "damages" or "per-event fines" from their systems (or, more particularly, from their registered IP-Addresses, as that is how it would be "caught"), it suddenly becomes worthwhile to have targeted attacks just to generate "nuisance noise". Even if this is all "buffered" by an insurance system, it could easily spiral completely out of control.

It becomes an obvious target for blackmail ("Dear Google: paying a billion dollars will cost you less than having every one of your servers suddenly start blasting a DDOS attack at every site you've indexed..."). It becomes a relatively easy approach for revenge/extortion (get to your ex-whatever's computer, stealthily plant some nastiness, and let them *try* to prove their innocence in a "presumed guilty" environment such as this). Finally, it opens us up to really nasty "dirty tricks" by overzealous law-enforcement, who have already shown an ability and willingness to plant software (mostly key-loggers) in people's systems - and "bad" government employees are not unknown (Duke lacrosse prosecutor Mike Nifong, Illinois Cook-County sheriff "torturer" John Burge, ... President Richard "dirty tricks" Nixon).

After all, the real problem with this approach is the implicit assumption of guilt: "It came from your IP, so you must have done it deliberately or by omission, unless you can prove otherwise."

Marcus is right, to some extent; large chunks of this problem will go away as the general body of knowledge for computer users goes up, and that's generational.

I'm not so sure I want to wait another couple of decades, though, Mr. Ranum... and Bruce is also correct, the new attacks and new malware will be better, stronger, and faster.

I think this actually requires solutions-in-depth; ISPs need to take more responsibility for traffic generated on their networks, simpler and more secure terminals need to be marketed to the non-power user crowd (and, corollary, digital content providers need to provide content in a manner that doesn't require the user to do anything).

True, I don't want my ISP filtering my traffic. On the other hand, people like me are a very small minority... most users won't ever need to run sshd, postfix, irc servers, etc. Intelligent network services are certainly a possibility even in today's market.

There doesn't need to be a law mandating that ISPs run IT departments. There does need to be a law requiring accountability for sourcing network traffic. If ISPs are responsible for halting DDOS attacks launched from within their networks (note, I'm not saying they're fiscally liable for damages or criminally liable, just that they have to have a complaint-response mechanism wherein an attacked site can register a complaint and the ISP must halt the offending traffic), they're going to provide training, filtering, etc. They'll add IT departments and subsidize the cost by providing services to the users. Everyone will benefit.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.

About Bruce Schneier

I am a public-interest technologist, working at the intersection of security, technology, and people. I've been writing about security issues on my blog since 2004, and in my monthly newsletter since 1998. I'm a Special Advisor to IBM Security, a fellow and lecturer at Harvard's Kennedy School, and a board member of EFF. This personal website expresses the opinions of none of those organizations.