
An anonymous reader writes "The NETI@home Internet traffic statistics project (featured in Wired and Slashdot previously) has a quick analysis of the malicious traffic they observed. It's a rough world out there." Perhaps not surprising, but still disheartening: the researchers find, among other things, that a large portion of typical end-user traffic consists of malicious connection attempts.

Considering these malicious programs don't follow any kind of 'standard' to reduce bandwidth utilization when scanning entire subnets of IPs that have already been checked by 100,000 other copies of the virus, it doesn't surprise me one bit.

It would be like setting up a massive feedback loop on a mail server. When user X gets message M, he passes message M to user Y, who upon receiving message M sends it back to user X.

Oh, so there should be a central hub where the virus/worm can talk to other copies of itself? Any place it could talk to itself would quickly be located and shut down. Besides, I don't think the writers of these kinds of programs are really concerned with your network utilization.

Most of the malicious traffic I'm seeing lately (aside from spam) is ssh worms trying to log into my boxes. Most boxes are set to only allow ssh from a few IPs or subnets, but I have one where I block class A's anytime I see

You can't impose a standard upon viruses. What will you do if a virus doesn't follow the standard? Find the author and punish them unless they fix it and release a new version that fully supports the standard?

The only way viruses will ever get standards is if the authors agree that they will get a considerable benefit by working together. I can't see that happening.

It's insane the amount of bandwidth this is sucking up (I remember a time when viruses and worms were relatively well programmed: still as bad, but with less collateral damage). I would like to see more ISPs, instead of supplying basic DSL modems with those overpriced sign-up deals, supply a proper firewall/router/DSL modem. This would save us all a lot of pain in the long run.

Most of the interesting recent viruses *do* have some level of organization to reduce duplication of effort, and the postulated "Warhol Worms" designed to take over the entire Internet in 15 minutes would need to do so, because otherwise they're not as effective. Some of them pre-scan the net to find a list of vulnerable machines to infect first, and then haul around parts of the list. Others partition the address space quasi-deterministically (e.g. Phase 1 scans all of the valid /8 address spaces until it's infected some machine in each one, Phase 2 scans all of the 256 /16 address spaces within its /8 until it's infected one in each, Phase 3 scans all of the 256 /24 address spaces within its /16, and Phase 4 scans all 256 addresses within its /24).

Code Red II [caida.org] implemented a randomized variant on this: "1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.)" It means the worms don't have to keep track of phases, but it gets similar effects, and while there is more chance of overlap, it's not too high until the worm's infected most of the net, and the added random searches help make up for machines that didn't successfully infect their netblocks due to firewalls or failures or simple slowness.
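That 1/8 / 1/2 / 3/8 weighting is simple enough to sketch. The following Python is a rough illustration of how such a probe target might be chosen (the function name and octet ranges are my own assumptions, not the worm's actual code):

```python
import random

def code_red_ii_target(infected_ip):
    """Pick one probe target using Code Red II's reported weighting:
    1/8 fully random, 1/2 within the same /8, 3/8 within the same /16."""
    a, b, _, _ = (int(o) for o in infected_ip.split("."))
    r = random.random()
    if r < 1 / 8:
        # completely random address
        octets = [random.randrange(1, 255) for _ in range(4)]
    elif r < 1 / 8 + 1 / 2:
        # same /8: keep the first octet, randomize the rest
        octets = [a] + [random.randrange(0, 256) for _ in range(3)]
    else:
        # same /16: keep the first two octets
        octets = [a, b] + [random.randrange(0, 256) for _ in range(2)]
    return ".".join(str(o) for o in octets)
```

A machine infected at 10.9.8.7 would therefore aim roughly 7/8 of its probes inside 10.0.0.0/8, which is exactly the "spread locally first" bias the CAIDA write-up describes.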

At least one worm that took this sort of approach had a bad random number generator, so it kept hitting the same territory too hard and missing other wide-open spaces, which protected a few parts of the net from infection.

Here is the original paper about Warhol Worms [berkeley.edu]. While it makes an excellent sales pitch for AV companies, and a good "wow, scary technological Y2K-type problem on the horizon" for Newsweek or Wired, I don't think we'll ever see one in real life.

The big reason is the sheer vastness and varied topology of the Internet. Try running a massively distributed application sometime and get a real-life education in exactly how theoretical the guarantee of data transfer between two machines picked at random is. My

Does anything like this exist already? It would be nice if I could filter, say, ssh traffic coming from "known" naughty sites, and report sites that portscan me, though probably I should look at using smartcards or something more secure at this point. I can't just restrict the ssh port at the firewall, since people could be coming in from pretty much anywhere because of travel to remote sites. Aside from complaining to upstream providers (which so far has yielded zero responses) when I see people banging away at ssh, I don't see much else I can do.

It might be worthwhile to look at setting up some sort of web-based authentication system that would dynamically allow an IP address or subnet for a certain amount of time. Block everything, but if your customer/employee/whatever needs in, they can authenticate via a webpage which would then update your firewall rules.

Why can't you restrict access to ssh from the firewall? One solution could be port knocking. You only let your firewall open up ssh after a series of connections on pre-defined ports are made. So say you choose "233 457 69 876 2094 576" to be your "password". You would make a client that would connect to those ports in that order, and only after that initiate an ssh connection on port 22.
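A minimal knock client could look like the sketch below. This is only the client half (the firewall side needs a knock daemon watching for the sequence), and the host, timeout, and function name are assumptions for illustration:

```python
import socket

def knock(host, ports, timeout=0.3):
    """Fire one TCP connection attempt at each knock port, in order.
    The firewall sees the SYNs even though every knock port is closed."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
        except OSError:
            pass  # refused or timed out is expected; only the sequence matters
        finally:
            s.close()

# the example "password" from the comment above
knock_sequence = [233, 457, 69, 876, 2094, 576]
knock("127.0.0.1", knock_sequence)
# ...then open the real ssh session on port 22
```

After the daemon sees the full sequence from your IP, it inserts a temporary firewall rule allowing port 22 from that address.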

You really should be using RSA or DSA keys instead of passwords. Hardly a day goes by that my systems don't get at least one script-kiddie SSH password guessing scan. Since I'm requiring keys for authentication, they're wasting their effort; if someone manages to crack a public key, we have far worse problems than password guessing.

Exactly right. It's almost trivial, even under Windows, to do it. Two-factor should have been standard years and years ago, but as long as people can have four-to-eight-character passwords which are easy to break, we'll keep seeing problems that shouldn't be there.

Anyone notice that PGP has passphrases of quite possibly insanely large size? It's hard to remember some farked and leeted phrase chosen to confound brute force and guessing when you have t

I've only recently started worrying about this regarding my own hosted server (i.e. not corporate, just little ol' me.) I have no problems creating certs and configuring sshd, but my reading suggests that sshd will accept certs fine, but if they're not presented it will fall back to password mode. Is my understanding correct? I'd rather have it not ask for passwords at all. Any pointers?

but my reading suggests that sshd will accept certs fine, but if they're not presented it will fall back to password mode. Is my understanding correct? I'd rather have it not ask for passwords at all. Any pointers?

On a UNIX ssh server (Open/F-Secure) look into the "PasswordAuthentication" parameter in sshd_config. Setting this to "no" will prevent password authentication from proceeding. Run ssh with '-v'; it will tell you which authentication methods can proceed. Haven't played with a Windows sshd ser
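For OpenSSH, the relevant sshd_config lines would be something like the following. These option names are from OpenSSH, but check the sshd_config man page for your version before relying on them:

```
# /etc/ssh/sshd_config (excerpt) -- key-only logins
PasswordAuthentication no           # refuse password logins entirely
ChallengeResponseAuthentication no  # also close the keyboard-interactive path
PubkeyAuthentication yes            # allow RSA/DSA key logins
```

Restart sshd after editing, and verify from a second terminal before logging out, so you don't lock yourself out of your own box.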

There are some. This site [bluetack.co.uk] has several different blocklists, such as ad-hosts, anti-p2p bodies, spyware companies, hackers, trackers, trojans etc. The link above lists what's available. Sure, the lists aren't 100% accurate, but they are a lot better than nothing.

Very highly recommended. With the case of p2p, it's good to keep your head down. It's the tall ones that get their heads chopped off...

They also have software to convert the lists to various formats for use in different firewalls. iptables fans should check out "linblock". Beware though, a large list can take an hour to parse on your typical recycled firewall box, but the tool merges the ranges to keep the tables as short as possible.
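The range-merging that tool does can be sketched in a few lines of Python. This is a hypothetical reimplementation (the function name is mine, and the real converter works on dotted-quad lists rather than plain integers):

```python
def merge_ranges(ranges):
    """Collapse overlapping or adjacent (start, end) ranges, with IPs as
    integers, so the resulting iptables rule list is as short as possible."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:
            # overlaps or touches the previous range: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_ranges([(1, 5), (4, 10), (20, 30), (31, 40)]))
```

Fewer, wider ranges means fewer iptables rules to traverse per packet, which is what keeps the recycled firewall box usable.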

As far as I understand, the main reason more worm-cleaning worms aren't written is that the people who would write them find it unethical to 1) infect any machine and 2) clog more networks with the scanning the "good" worms would need to perform. This list could be used to get around #2.

Unfortunately, very few providers actually filter traffic leaving their network, which means someone could easily spoof their IP address. If someone can bang away at your firewall with a spoofed IP address, your firewall will cut off the traffic from the fake source AND the real one.

Also, a lot of zombies on the net sit on dynamically addressed machines. The next time a zombie connects to the net, your RBL won't block him since he will be coming from a different address.

ISPs could use this data to great benefit, if they'd put out some effort.

Assuming that the statistics show which IP address ranges are the worst offenders for malicious traffic, the ISP(s) responsible could simply shut down the outbound connection(s) of the "problem" users until they de-virus their systems and KEEP THEM THAT WAY.

Perhaps that will help to finally clue people in that having Internet connectivity is a privilege, not a right, just like driving. If you're going to enjoy an Internet connection you need to show some responsibility for making sure your own system isn't going to be a problem to others.

I -still- think there should have been Internet user licenses, just like we have driver's licenses...

The problem is a large portion of those IPs are home users with dynamic addresses, which means when I am the next to get the IP, my outgoing ports will be blocked because the last person ran Windows, er, I mean because they could not keep their PCs clean. And I am assuming the last part about internet usage licenses is troll baiting, so I don't think I'll respond to that one.

The ISP KNOWS the physical address of the cable/DSL modem a home user has. It's not like the ISP has no idea which IP address a home user or account is using at any given time. How do you think they can reliably (for the most part) identify people for the likes of the RIAA when they ask? Likewise, with modern hardware and software it's a pretty trivial task for an ISP to turn your internet access down to a crawl, or off, with the click of a button. They can do this; they just don't want to.

Maybe it would be a good idea to throttle the users down to a bare minimum and redirect all http traffic to a gateway page to tell them they have a problem with their computer they need to correct. It seems to work for wireless access points in hotels/airports/coffeeshops. Why can't big ISPs do the same thing?

Would you really want to piss off 40% of your client base in one swoop? Average Joe doesn't care about this kind of crap, and he doesn't want his ISP forcing him to care either. He will cancel his account and move to someone else, or he will drive up support calls by calling to complain about the change.

Any ISP who puts something like what you described in place is likely to lose customers in a hurry. Hotels/Airports/Coffeeshops have transient, non-recurring customers, or the customers are there for somethi

It seems like ISPs are going to have to make a choice between increased costs due to the insane amounts of traffic caused by spyware and malware, or the cost of the loss of some customers. The whole point of such a strategy is to notify the customer and help him correct the problem if necessary. Customers with problems that would be caught by the gateway page would probably call support anyway, wondering why their connections are so slow. If they're not calling, they're probably complaining quietly about

Tell me... how would packet shaping away the trojans/worms piss off their client base? Suddenly, everyone's network connection is much faster, and there are fewer ads/spam mails appearing. Seems to me 98% of their client base would *love* this.

If you shut off a dial-up user, he might change ISPs or might (try to) clean up his act (with some level of failure, which is not surprising since he was sufficiently incompetent that he got infected in the first place.) If you shut off a DSL user, he also might change ISPs or might try to clean up. But cable modem services are harder to change - there's usually only one cable company serving a given end user, and changing to DSL is not always an option, so cable users are more stuck than other types of I

Would you really want to piss off 40% of your client base in one swoop?

No need to disconnect them initially. Just email them a warning with simple instructions on how to fix it, maybe linked to a web app to do the work. Most naive users are paranoid about viruses due to the media exposure and are happy to fix it if they are told how.

I'm pretty sure internet connectivity is neither a privilege nor a right. It's just a service, plain and simple. You pay ISP, they provide internet connectivity. You don't pay, you don't get internet. No rights or privileges involved.

One problem with your argument is that the Internet is more or less a public thing, originally funded by the US government. Another problem is the design of the Internet itself. Many different companies and people with different policies and wants/needs are giving the OK to be connected to each other, and this complicates things like "quality of service" and "acceptable use". There is more to it than just paying money.

If an ISP shutdown my outgoing connections, I would get a new ISP. Maybe ISP's can use this data to help them, but not the way you mentioned.

You say that internet access is a privilege and not a right. True. But if I sign with an ISP and they do not disclose beforehand that they will block my access, aren't they breaking a legal agreement? In that case, do I not have a right to take legal action? If their contract does state they can shut down my connection, then fine. But in that case, I can switch ISPs

I'm sorry you feel that way. Are you saying, then, that people should NOT be held responsible for whatever spew their virus-compromised system sends out, regardless of how many problems it may cause other systems? That's what licensing would have done -- provide accountability.

If you can suggest a better way to provide some sort of accountability, then please go right ahead and suggest something. I don't pretend to have all the answers, and name-calling is hardly productive.

Ignoring all complaints about Windows, the root of the problem goes back to having access to the network in the first place. If ISPs would spend a few bucks on implementing passive traffic analyzers to search for viral/trojan patterns and null route offenders, we'd clean things up pretty quickly. Why do we have all these piracy probes going on to sue people and no infection probes going on to cut people's access?

Now, stepping back to the Windows complaints...wouldn't the ISP turning off your access motivate you to get a BASIC education in computing and maintain your PC?

To make an analogy, in most states you need to have your car inspected (and some require emissions inspection, too). PUBLIC roadways means you share it with other people...an unsafe car affects more than just you. When you're connected to the net, your PC affects everyone else. I'm not suggesting the ISPs create an inspection system, or that a law be passed to force ISPs to monitor traffic, but the same logic applies...someone should be doing checkups and flagging the offenders.

Ahh, but herein lies the problem. As a former employee of an ISP, I can say we'd be willing to bend over backwards to make a customer happy. This means NOT turning off their access when we detected a worm/trojan etc. Sure, we would null route their IPs if they were partaking in a DDoS or something, but with a simple virus we'd *help* them by informing them. You don't make money in this world by shutting people off. I for one say null route them, but you have to think of it from a reality standpoint (Regardless of ho

If you piss off 5 more people who get infected by the machine that is spewing viruses and spam, and they all leave, then what? You just lost 5 customers by not "bending over backwards to make them happy" by removing the source of the hack attempts/spam that is causing them trouble.

Do you want to be the state-registered Computer Inspector? Note also that computers break down a lot faster than a car. Cars wear out over time, with some exceptions. Computers work (in theory) perfectly until one or two mistakes are made that bring the system to its knees - be it crash it, or zombify it, etc.

I do entirely agree with the idea of passive analyzers and filters, as long as they don't inhibit legit traffic. Put the burden on the ISP in this case.

Sadly, while some customers might get motivated to learn something, others would just be motivated to switch ISPs. Which costs the ISPs money, which means that they won't do it.

At least such is their thought process as often presented. I suspect it's bad cost-benefit analysis; if your dumber customers leave, it's probably a net win for you. Smarter customers mean less bandwidth (at least, they don't act as spam zombies maxing out the bandwidth) and fewer tech support hours explaining how to fix the cup holder.

The big players (AOL, Comcast) are the best targets for this logic, but they live for those left-side-of-the-bell-curve customers. They're the "default" ISPs that people get because they're so readily available, so they get all the customers who don't know better. (Hell, I don't know better; I use Verizon for my DSL but I don't let them do anything but provide me bits.)

So AOL and Comcast are in a bit of a bind; they don't want these customers, but they don't want to lose them, either. I think that they're probably going to have to use gentle persuasion to say, "Hey, it looks like you've a spam zombie. Please call your cousin's best friend to clean the crap off your computer again and give you a stern talking-to. And please stop downloading Bonzi Buddy."

In Norway the leading ISP has started a similar scheme. They do passive searches on traffic from customers; if anything gets flagged as viral or malicious, they will cut access to sending email, or even to transmitting data at all. Then an email is sent to the customer explaining the problem, and he can call Tech Support to get it fixed.

This is mostly considered a benefit since it helps the customer in keeping his PC operational. My father lost access to sending mail for a couple of days after getting

Sadly, while some customers might get motivated to learn something, others would just be motivated to switch ISPs. Which costs the ISPs money, which means that they won't do it.

Another thing that will cost the ISPs money? Lawsuits. Class action lawsuits from people that experience damages from zombie PCs and virus-infected spew-factories that could EASILY be shut down by an ISP with a minimal effort of outbound scanning.

I'm surprised we haven't seen that lawsuit yet. I'd guess it's because the lawyers don't think it will make them money.

(Even so, eventually you'll find some lawyer willing to take the case. He'll treat it as a lottery ticket: low odds but a big win.)

So why don't they think it'll win? I'm not a lawyer, but I suspect that the defense will run, "Look, we just carry the bits. If you don't like the bits I send you you're free to set your router to drop 'em on the floor. It's not our job to censor our cust

I am not an AOL customer, have never been, never will be (at least, not by choice), but I am glad AOL is there to serve the unwashed masses. Because a huge portion of their customer base is, shall we say, "uninformed," AOL has taken a number of measures to protect them (and their network) from malicious traffic. Based on anecdotal observation, it seems to be working.

Because hundreds of people have my "public" email address in their address books, I receive dozens (sometimes hundreds) of viruses per week when

I've seen the same thing, anecdotally. I don't know what it is AOL does to keep its users from infecting the world. I've never heard of somebody being told "we're closing your AOL account until you clean up."

Some of it must be filtering (blocking viral messages before they hit the user) on incoming mail. They may even be censoring outgoing mail. As for other worms, like sasser, I suspect they blocked the relevant ports long before XP SP2 came out.

Hey, I know it's below the /. radar, but the big ISPs ARE doing something about the malware problem. The focus of the current round of competing commercials is 'free' add-on services like spam blockers and anti-virus. They know most users won't spend the time and effort to secure their machines, so they are going to do it for them. Of course, that pretty much dismisses any chance of privacy from your ISP. I guess the ISPs figure if you'll lay back and spread your legs for viruses you'll do it for them as well

Oops! Someone hasn't noticed the number of trains and ships running Windows. No danger of a virus killing anyone there, then.

I don't experience any significant negative effects from zombie machines, so I am not willing to pay for such a system.

Someone also hasn't noticed the amount of effort that goes into protecting his system from zombie machines. Perhaps he thinks firewalls were a gift from unknown stellar travellers and spam filters require no effort to create and update.

Oops! Someone hasn't noticed the number of trains and ships running Windows. No danger of a virus killing anyone there, then.

Red herring. Give me one example of a fully operational system (read: not that 7-year-old Navy test that everyone parrots) that has had a problem. In any case, Windows is a desktop OS and should not be used in these situations to begin with.

Perhaps he thinks firewalls were a gift from unknown stellar travellers and spam filters require no effort to create and update.

Let me introduce you to New Jersey. We have the same shit, so I knew this counter-argument would arise.

State vehicle emission tests are done as a result of the Clean Air Act which requires the States to meet what are known as the National Ambient Air Quality Standards (NAAQS). The reason some states don't have emission tests is because their air quality does not yet exceed the NAAQS.

In any case, a health argument can still be made to justify pollution reduction. While one car out of emission spec will not

To make an analogy, in most states you need to have your car inspected (and some require emissions inspection, too). PUBLIC roadways means you share it with other people

Here is an additional error in your analogy. PUBLIC does not simply mean you share it with other people. Rather, it means "Maintained for or used by the people or community". Internet access is not a public utility (to wit, ISPs vs. municipal broadband); it's more like a toll road. There's nobody on the internet who doesn't directly pay t

If ISPs would spent a few bucks on implementing passive traffic analyzers to search for the viral/trojan patterns and null route offenders, we'd clean things up pretty quick.

Bollocks.

They aren't running a network in their parents' basement, you know. Their networks are massive, with nodes LITERALLY spanning thousands of miles. The volume of traffic they deal with is HUGE. They use cutting-edge routers just to keep up with the demand.

How on earth do you do traffic analysis on that level? You might be able to catch some of the more obvious spammers, but how do you differentiate (on the IP level) between: a) a residential user, b) a commercial user who maildrops willing customers, c) a zombie, d) a community group, or e) blah. Blocking someone based on traffic is not possible, unless you want to lose your valid customers.

What they should do is be more responsive to complaints. If a customer of theirs is a zombie spambot or acting as a stepping stone for some script kiddie, they should have their connection suspended until it is remedied. But they can only do this based on a complaint.

Besides, what's the profit in spending any resource on the problem in the first place? Until that is affected, they won't care about it.

There's a Best Current Practices document BCP38 and a few RFCs, notably RFC2827 [faqs.org], recommending that ISPs block IP packets with forged Source addresses from their network. It's easy to block them from end users, and while you can't totally block forged packets coming from other ISPs, you can block some of them (strict uRPF for your end users, loose uRPF for peering/transit, plus blocking packets or at least routes from outside that pretend to be from your non-dual-homed end users.) These are standard featur
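On Cisco IOS, for instance, uRPF is a per-interface setting. The commands below are a sketch of the strict/loose split the BCP38 approach describes; exact syntax varies by platform and release, so verify against your router's documentation:

```
! strict uRPF on a customer-facing interface: drop packets whose source
! address isn't reachable back out this same interface
interface GigabitEthernet0/1
 ip verify unicast source reachable-via rx

! loose uRPF on a peering/transit interface: drop only packets whose
! source address appears nowhere in the routing table
interface GigabitEthernet0/2
 ip verify unicast source reachable-via any
```

Strict mode is safe at the customer edge where routing is symmetric; loose mode is the compromise for peering links, where asymmetric paths would make strict checks drop legitimate traffic.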

AT&T Internet Protect [att.com] collects traffic headers from some large fraction of their network - the last figure I saw was about 12TB a day of headers (source/dest IP, protocol, source/dest port.) There's a certain amount of analysis they do in real-time, some more that gets fed to human analysts to try to make sense of, and the data's also there for later research. Some kinds of problems are obvious (e.g. port scans on TCP 17300 and TCP 1025 from Asia are heavy this week - 17300 is usually one specific vir

Yeah, but what ISP was it? Was it a good ISP, like Speakeasy, a small local outfit, or one of the biggies who thrive on the "don't know any better" crowd?

I know Speakeasy polices their network for open SMTP relays, because I see it in my server logs. I don't know if they actively look for zombied machines, but I can tell you that they've pretty quickly shut off the connections of customer machines on their network that I've brought to their attention when I've seen obvious worm-related connection attempts

I've only skimmed the paper, but from the looks of it, a lot of not-all-that-harmful traffic could be labeled "malicious", for example nmap port scans. I use them all the time, not to find vulnerable services, but for more general sysadmin stuff.

I've only skimmed the paper, but from the looks of it, a lot of not-all-that-harmful traffic could be labeled "malicious", for example nmap port scans. I use them all the time, not to find vulnerable services, but for more general sysadmin stuff.

If you had RTFP, you would have noticed they actually tracked a lot of that down and counted it as benign, not malicious, since they could ID the IP at their university.

It's good to know the IP addresses of machines active searching dark IP space. If you can see those statistics in real time, you have useful information.

ISPs are already starting to work together on this type of information. If an ISP sees malicious worm spreading behavior, it can upload the offending IP into a global db that all ISPs can use to block at their borders.

Again, the authors' conclusion is that nothing beats having a nice dark block to trigger alerts.

The biggest problem with Intrusion Detection Systems (a buzzword for firewalls with more intelligence than a typical rule-based firewall) is that metrics gathering occurs at a single site, making it difficult to discern malicious intent from dropped packets or bad coding.

Any time the central server sees a certain threshold of malicious attempts from a single IP, it adds it to a short term blacklist... Make the term length just slightly longer than the reporting period so if it persists it'll remain on the list but if it stops, the IP is cleared in short order.
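That threshold-plus-expiry scheme is simple to sketch. Everything below, including the class name and the default numbers, is invented for illustration:

```python
import time
from collections import Counter

class ShortTermBlacklist:
    """Sketch of a central server that blacklists an IP once malicious-attempt
    reports cross a threshold, with entries expiring slightly after the
    reporting period so persistent offenders stay listed."""

    def __init__(self, threshold=5, report_period=600):
        self.threshold = threshold
        self.ttl = report_period + 60   # term slightly longer than the period
        self.counts = Counter()         # ip -> malicious reports seen
        self.expiry = {}                # ip -> unix time the listing lapses

    def report(self, ip, now=None):
        now = time.time() if now is None else now
        self.counts[ip] += 1
        if self.counts[ip] >= self.threshold:
            self.expiry[ip] = now + self.ttl  # re-reporting keeps it listed

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        return self.expiry.get(ip, 0) > now
```

Participating firewalls would poll `is_blocked` (or pull the whole table) once per reporting period; an IP that goes quiet simply ages out.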

To collect data, Internet users must volunteer to run the software package on their end hosts. Once the package is installed, the NETI@home client will collect network statistics from the end host and periodically send a report back to the NETI@home server.
Volunteer by downloading the NETI@home toolbar with new "we are watching you" emoticons

I would like to submit this proposal for your review. I am seeking funding for a new research project. Please grant me the funds needed so that I can deploy rain sensing equipment to every residence in the Seattle area.

This project will record 3 years of data and prove once and for all whether or not it actually rains in Seattle.

sincerely,
Kelly H.
Head research scientist, Darington University of Heretics

Apparently no one told the authors that the second thing anyone reading a paper does is skim over the graphs and tables. I had flashbacks to a lecture from a lab professor about making clean, clear graphs after trying to decode those cryptic plots.

Been to Borders and seen the honeypot books on the shelves amongst the rest of the become-a-security-guru-in-$29.95-easy-steps books?

Does it prove or disprove simple A==B logic to note that these incidences of spyware and insecurity are growing at the same time as adoption of Linux variants? Just musing on the "l33t win script kiddie finds Linux religion" phenomenon I've been seeing lately.

Anyhow, this does suggest further that security is where it is at for the future skillset of interest at intervie

So you're now pouring water in your nose. There's also been some possibly relevant marketing research done by Golgafrinchians about "Do people want fire that can be fitted nasally?". But Yetis? Probably not what you need.