Posted by timothy on Wednesday September 30, 2009 @03:18PM
from the would-love-to-see-the-install-prompt-for-this dept.

itwbennett writes "If antivirus protectors could collect data from machines and users, including geographic location, social networking information, type of operating system, installed programs and configurations, 'it would enable them to quickly identify new malware strains without even looking at the code,' says Dr. Markus Jakobsson. In a recent article, he outlines some examples of how this could work. The bottom line is this: 'Let's ignore what the malware does on a machine, and instead look at how it moves between machines. That is much easier to assess. And the moment malware gives up what allows us to detect it, it also stops being a threat.'"

A) This isn't a new idea, and I'm pretty sure some AV packages already automatically submit questionable files for analysis; all it takes on top of that is for a vendor to track trends. I've had anti-virus software ask me to opt in to such schemes before.
B) Self-encrypting viruses that choose to infect non-common running process images (i.e. avoid Windows system files) might have different signatures everywhere and still require manual analysis.
C) Once a virus is running on a host surely i

Hmmm... This is somewhat similar to an issue mentioned in the article: polymorphic viruses. It raises an interesting question. Do existing AV products try to detect such behavior in newly executed code? I am really not sure how tricky the algorithms would be to detect code that is trying to encrypt itself or modify

However, most regular software (funnily enough excepting security software trying to avoid detection by malware!) does not need to do this, so such code should probably be blocked and reported by default.

Lots of software does, though. Usually it's due to executable packers/code-obfuscators/anti-reversing runtime protection.
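
A toy sketch of the behavioral rule under discussion: flag any process that opens an executable image for writing. A real AV product hooks the kernel's file I/O; this only illustrates the decision itself, using invented event records, and (as the compiler anecdote below shows) it would also trip on legitimate tools.

```python
# Hypothetical behavioral heuristic: writing to an executable image is
# treated as suspicious. Event records and field names are invented.

EXEC_EXTENSIONS = {".exe", ".dll", ".sys"}

def is_suspicious_write(event):
    """event: dict with 'path' and 'mode' from a hypothetical file monitor."""
    path = event["path"].lower()
    writing = "w" in event["mode"] or "a" in event["mode"]
    # Suspicious only if the target looks like an executable image.
    return writing and any(path.endswith(ext) for ext in EXEC_EXTENSIONS)

print(is_suspicious_write({"path": r"C:\Windows\system32\kernel32.dll", "mode": "wb"}))
print(is_suspicious_write({"path": r"C:\Users\me\notes.txt", "mode": "w"}))
```

Note that a compiler emitting `a.exe` would match this rule exactly, which is why such heuristics generate false positives.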

About a decade ago, my college installed an "advanced" AV program which blocked the behavior you described. They had to uninstall it almost immediately.

Problem was, the college taught computer science classes, and one of the very first things a compiler does is write a zero-length executable file. Then, it proceeds to modify the code in said executable file. And then the AV suite blocks the compiler, thinking it's a virus.

AV heuristics is an idea at least a decade old. It never really caught on - e

The people likely to volunteer their data are probably the people informed about what's going on, which are the people not likely to be infected, because they don't click on every "FREE PORN" ad they see.

Operating under that assumption, you could learn just as much from those systems, since you could extrapolate that the things found on those people's machines probably weren't malware.
So you'd essentially have 2 classes of users:
- Those who opt in (easier to gather data on what's likely to not be malware)
- Those who don't opt in (software not used by the opt-in users may be more likely to be malware)
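
A minimal sketch of that two-class idea: programs common on opt-in (likely clean) machines earn a benign score, while programs seen almost exclusively on non-opt-in machines look more suspicious. All names, counts, and the scoring rule here are invented for illustration.

```python
# Hypothetical prevalence-based suspicion score built from the two user
# classes described above. Counts map program name -> number of machines
# in each population reporting it installed.

def suspicion_score(program, optin_counts, other_counts):
    """Return a 0..1 suspicion score: 0 = only on opt-in machines,
    1 = only on non-opt-in machines, 0.5 = no evidence."""
    seen_optin = optin_counts.get(program, 0)
    seen_other = other_counts.get(program, 0)
    total = seen_optin + seen_other
    if total == 0:
        return 0.5  # never seen anywhere: no evidence either way
    # Fraction of sightings coming from the non-opt-in population.
    return seen_other / total

optin = {"firefox.exe": 900, "putty.exe": 300}
other = {"firefox.exe": 1100, "free_porn_codec.exe": 4000}

print(suspicion_score("firefox.exe", optin, other))
print(suspicion_score("free_porn_codec.exe", optin, other))
```

A real system would need many more signals than raw installation counts, but the asymmetry between the two populations is the whole point of the comment above.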

Oh yes, the smug "users are dumb" argument. Since the same people typically have ADSL modems which are NOT infected with any sort of malware, I think the argument is complete rubbish and we're suffering from a platform where "developers are dumb". Microsoft are waking up to it very slowly, but there are a vast number of third-party applications developed by those still asleep at the wheel of the speeding malware trainwreck in progress. Just about any effort Microsoft make at improving security is rendered poi

I'm actually more surprised all the time at how antivirus vendors are going the way of scareware. A good example is Symantec and their Norton product (I feel sorry for the guy..)

I haven't had an antivirus product on my machine for years because I know how to use the internet. But there was a case when I thought I'd made a mistake, so I got myself an antivirus scanner just to make sure.

Unluckily for me, it happened to be Symantec's. To this day I'm still trying to get it off my system, with no luck.

Why hell yes, they do. In my brief six-month stint working as a phone agent for one of the Devils of the Internet, they rolled out their branded copy of McAfee. End users, having been scared into clicking NO to anything asking if they trust something, would manage to block themselves off from their high-speed connection except in Safe Mode, where most of the time McAfee would sod off long enough to let them get online to get the McAfee Removal Tool (affectionately named MCPR2.exe [mcafee.com]).

Unluckily for me, it happened to be Symantec's. To this day I'm still trying to get it off my system, with no luck. Every week it pops up during the night, scans all of my hard drives, and tells me I have to buy their product to protect myself - just like every scareware product. And it only detected some *tracking cookies*.

Yeah, that sounds exactly like how it worked on my system up until the latest version. I was going to dump Symantec for something else (finally), but then heard they had made major improvements to

Windows Defender, which is on pretty much every XP and Vista box, already does this. Out of the box, it will submit information on startup programs, malware detected and removed, and which services and startup programs you have disabled, to the aptly named Microsoft SpyNet [microsoft.com].

It's not quite as scary as it sounds; if you're using Windows Defender to decide whether or not to kill that fishy-looking SynTpEnh.exe process from starting, you can see that 99% of SpyNet members leave it enabled because it makes your laptop's touchpad work. </contrivedexample>

Think about a corporate environment where this level of information is readily available: if your automated system can spot a virus working its way through the PHBs, the system could block it before it gets to Accounting and starts interfering with people who actually do work.

I wonder why they need all that information. Why don't they put software on all internet backbones worldwide that detects all virus traffic and stops the virus there? You don't need user information or geographical information from people; the internet lines themselves are geographically known, and shouldn't that be enough?

First, the service better be free. No way in hell I'm going to pay an AV vendor to do their job for them. Second, what if malware lifts credit cards and passwords from my computer? Will enough info be relayed to the good guys before my identity is stolen? Third, malware authors will become savvy, cat-and-mouse game, etc.

Time. Automated patching occurs around the clock, and worms infect no matter what time of day. But a Trojan, for example, depends on its victim being awake: the user has to approve its installation. Roughly speaking, if the malware takes advantage of a machine vulnerability, it often will spread independently of the local time of day (to the extent that people leave their machines on, of course), whereas malware that relies on human vulnerabilities will depend on the time of
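
That observation suggests a crude classifier: look at the local-hour distribution of infection events. A worm spreads around the clock; a trojan needs a human awake to approve it, so its timestamps cluster in waking hours. The waking-hour window, threshold, and labels below are invented for illustration.

```python
# Hypothetical time-of-day heuristic distinguishing self-propagating
# malware from human-assisted malware, per the comment above.

def waking_hour_fraction(local_hours):
    """Fraction of infection events whose local hour falls in 08:00-23:59."""
    awake = sum(1 for h in local_hours if 8 <= h < 24)
    return awake / len(local_hours)

def guess_propagation(local_hours, threshold=0.9):
    """Crude guess: a heavily waking-hour-skewed distribution suggests
    the malware needed a human in the loop."""
    if waking_hour_fraction(local_hours) >= threshold:
        return "human-assisted (trojan-like)"
    return "automatic (worm-like)"

worm_hours = list(range(24)) * 10          # uniform around the clock
trojan_hours = [9, 12, 14, 19, 21] * 50    # only while victims are awake

print(guess_propagation(worm_hours))    # automatic (worm-like)
print(guess_propagation(trojan_hours))  # human-assisted (trojan-like)
```

In practice you would need timestamps normalized to each victim's timezone, which is exactly why the article's author wants geographic data collected alongside the infection reports.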

Malware generally moves the same way any other software moves. The user downloads and installs it.

Not only that, the user often willingly downloads it! It often doesn't come like the spyware of old, buried deep inside the ToS. Instead, the user willingly downloads the trojan and runs it.

People complain that anti-virus programs continually flag cracks as infected, but from what I've seen, the AV program is right. People release clean cracks, then more nefarious ones take that crack and wrap it wit

Cookies are also hard to even browse without; many sites don't load if cookies are rejected.

Don't know where you are browsing, but I've been blocking the majority of cookies for years with little problem. Yes, some sites need them, usually the ones you are trying to log into or buy something from. That only describes a small minority of sites; most don't actually need to set a cookie, and if you block them you'll never notice the difference. If it is a site you trust and do business with regularly, cookies are fine. Otherwise, either block them forever or only allow them for that session. Your we

The only difference is, the people collecting the data are the freaking security experts you decided to trust with your data's integrity and privacy. It's not that similar to uploading personal data to Facebook, or using Google Docs to store your banking info. Of course, security experts aren't infallible, but I'd readily trust them with ALL my data if they convince me that doing so will make their protection substantially better.

Come on! I RTFA and it only talked about different characteristics of different forms of "malware". It even ENDS with that crap.

Can this be done? Of course. I shared the above with the assumption that this type of installation information can be harvested from millions of client machines, infected or not. I believe this is possible, and will share some thoughts here soon.

Fuck you very much. This isn't "possible". This is "something I thought up between beers".

This idea is impractical in so many ways. Leaving aside the privacy issues raised by the prerequisite of collecting the kinds of information the author mentions, he makes far too many assumptions (and of course, does not back them up with any hard facts).

Even if his assumptions are partially correct, he fails to factor in how real security software interacts with real users. Modern viruses are very fluid things, and thus modern virus detection is non-deterministic (and so is this author's system as far as I can tell). So in order to catch all viruses a certain level of false positives will inevitably arise. And it doesn't take many false positives before the user starts to ignore the warnings.

It's like saying, if everyone knew what everyone was doing and thinking at any given moment, we'd never have any type of crime. However, who wants to be monitored 24/7 and in their head? Likewise, who wants all of their computer's information, sensitive or not, to be handed over to McAfee or Symantec or whoever? Not me.

The best way to stop malware is to audit code so that it doesn't have vulnerabilities. The OpenBSD [openbsd.org] volunteers have been doing that for many years.

In my opinion, and the opinion of many others, the vulnerability of Microsoft products to malware is a result of Microsoft managers not allowing Microsoft programmers to finish their jobs.

When people have problems with their computer, they often buy a new computer. Then Microsoft sells another copy of Windows, which, of course, still has huge security risks. For examples, see the New York Times article Corrupted PC's Find New Home in the Dumpster [nytimes.com]. Vulnerability to malware is very profitable for Microsoft and its main customers, who are computer manufacturers.

Solving the problems with malware will not be fully successful if Microsoft managers do not want it to be successful. Vulnerabilities are profitable when a company has a virtual monopoly.

IF the programmers of Apple OSX, Linux, and BSD can make mostly malware-free software, Microsoft can also.

Those operating systems have fewer vulnerabilities because they were designed to be secure.

Apple has a horrible record for patching OSX. Linux and *BSD have plenty of advisories and vulnerabilities. No, they were NOT designed to be secure. There are specialised variants, such as OpenBSD and SELinux, that can make that claim, but the vast majority of *nix operating systems cannot. If you want security by design, look at the mainframe or iSeries.

IF the programmers of Apple OSX, Linux, and BSD can make mostly malware-free software, Microsoft can also.

Depends on how stable the codebase is, how much backwards-compatibility is needed, how much of a kludge the component code bits in question were in the first place, how modular the overall design is/was, etc.

Sure - Microsoft can do it, but judging from complaints by former Microsofties, and the leaked code from way back in Windows 2000 as a design guide of sorts? Well, on the same note I can, with the same probabilities, dig out Mount Everest and relocate it by using nothing more than a pick axe with a bust

It was widely reported that Windows XP was released with more than 100,000 known defects [lowendmac.com]. (I don't have time to find a better link.) Microsoft reported that Windows XP Service Pack 2 fixed several hundred bugs, several of them very serious.

Windows Vista was released against the wishes of some Microsoft managers, who said it was not ready for release. There was a court case [channelregister.co.uk] that revealed emails saying that. (Again, I don't have time to find a better link.)

People need to refocus malware views and start focusing on some of the largest scourges of the issue.

Visa

Mastercard

American Express

People write malware because it is profitable to do so. Regardless of how a machine has been owned, it typically boils down to one of two uses: a botnet or hijacking financial data. The easiest way to do this is to get people to submit their own credit card details voluntarily through a webform. While the hosted pages are typically fake, the billing is almost always real, and th

They are excellent targets, and getting these companies to cooperate with international anti-fraud efforts would be a huge win. Without doubt they are the favored methods of 419 scammers and many other scammers for their ability to send money internationally. That being said, sending money through one of these services isn't nearly as convenient or automated as sending money through a credit card. Whilst you may see larger transactions through those services, they can't begin to compare to the sheer volume

There are much simpler ways than "watching merchant accounts": banks and credit card companies simply need to use standard security procedures. For example, banks and credit card companies could have all large transactions confirmed by text message. Or they can use hardware tokens or smart cards.
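
One of the standard mechanisms the comment alludes to is the time-based one-time password used by hardware tokens and phone apps (RFC 6238, built on RFC 4226's HOTP). A minimal sketch using only the standard library; the shared secret below is the example key from the RFC's test vectors, not anything you should deploy.

```python
# Sketch of RFC 6238 TOTP: derive a short code from a shared secret and
# the current 30-second time window, as a hardware token would.

import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Return the one-time code for the time window containing for_time."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # example secret from RFC 6238's test vectors
# RFC 6238 test vector: at Unix time 59, the 8-digit SHA-1 code is 94287082.
print(totp(secret, for_time=59, digits=8))
```

A bank confirming a large transaction this way raises the bar considerably: malware that has stolen a card number still cannot produce the code without the token.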

The biggest problem is that they can't be bothered as the fraud is profitable for them.

How about building a tool in windows that ensures all windows system files are Genuine and then shows what extra crap and drivers startup and lets you choose to either disable or enable them. How about a Registry locker that you lock down your registry while running said tool so you can see if the Malware is trying to re-install itself back onto your computer?

The first part IIRC already exists somewhat (especially in Vista, which is why UAC was so damned annoying and usually gets shut off at first opportunity). If you were thinking of some other mechanism, I apologize (unless that mechanism involves some sort of local or remote database of 'approved' software to check against, which is a very bad idea).

The second part would be cool, but the Windows Registry, being a constantly evolving thing (and of piss-poor design), has data written to it by the OS constantly d

I've noticed over the last few years a growing trend toward host-based detection systems, like the McAfee [mcafee.com] product line for example. The US government, or at least the DoD [disa.mil], is really jumping on this bandwagon.

At Virus Bulletin 2009 [virusbtn.com], Symantec gave a presentation on reputation systems: "Using the wisdom of crowds to address the malware long tail [virusbtn.com]," which cited data from one that began development in 2006. While I do not claim to understand the system, in a nutshell it seems to work by generating a hash for files after they are downloaded or when they are about to be executed, and sending this to Symantec along with some metadata, such as source IP/host, filename, path specification on the local host, date and time stamp on the file, and other useful information. Initially this provides a quick lookup, but more information can be sent if additional analysis is required. Symantec's client software can then display a message saying "Program XYZ.EXE has been seen n time(s) over the course of n day(s)/week(s)/month(s)," along with some suggestions about how safe it is likely to be: new/unique program files are more likely to be untrusted (higher potential for malcode), while older, more commonly seen program files have a higher degree of trust.
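
A minimal sketch of that hash-plus-metadata scheme as I understand it from the description above. The record fields loosely mirror what the presentation reportedly listed; the verdict thresholds and the server-side prevalence rule are entirely hypothetical.

```python
# Hypothetical client-side report builder and server-side prevalence
# rule for a file-reputation system of the kind described above.

import hashlib
import os
import time

def file_report(path):
    """Build the metadata record a client might send for one file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {
        "sha256": digest,
        "filename": os.path.basename(path),
        "path": os.path.dirname(os.path.abspath(path)),
        "mtime": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(st.st_mtime)),
        "size": st.st_size,
    }

def reputation_verdict(times_seen, days_known):
    """Invented server-side rule: new and rare means less trusted."""
    if times_seen < 10 and days_known < 7:
        return "untrusted (new/unique, higher malcode potential)"
    if times_seen > 10000:
        return "trusted (old, widely seen)"
    return "unknown"

print(reputation_verdict(times_seen=3, days_known=2))
print(reputation_verdict(times_seen=50000, days_known=400))
```

The key property, as the next comment notes, is that server-side polymorphism doesn't help the attacker here: every freshly repacked variant hashes to something the system has never seen, which is itself the signal.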

One advantage of this approach is that it quickly allows malicious files encoded using server-side polymorphism to be identified, as well as the sites hosting them. This negates the technique the bad guys use of constantly modifying code in order to escape detection by anti-virus software.

For a year or more, all Symantec security products have included some form of heuristics/behavior/reputation-based detection. The technologies include Norton Insight [wikipedia.org], SONAR [wikipedia.org], and TruScan [symantec.com].

The signature-based detection that has been used for so many years isn't very useful anymore. By the time something is confirmed to be in the wild, captured, analyzed, and definitions created and tested, that particular strain has pretty much run its course already.

But who would maintain the whitelists? Either end users maintain it and they whitelist a trojan just to see the dancing bunnies [codinghorror.com], or a big company maintains it and all free software is banned like on the game consoles.

So we let the malware freely send itself to hundreds of other computers, steal our sensitive information, and only then decide that something is wrong and remove it? Besides that, a lot of malware gets installed by inexperienced users who wanted ringtones/wallpapers/porn/games/porn/porn. Move along, there is nothing to detect.

"The insight is: Let's ignore what the malware does on a machine, and instead look at how it moves between machines. That is much easier to assess. And the moment malware gives up what allows us to detect it, it also stops being a threat."

But of course, malware that doesn't actually DO anything isn't a threat. As an administrator, I am worried about the misuse of resources.

Staging a DDOS attack from malware is a problem for me, because it uses my bandwidth inappropriately. Stealing credit card numbers is a problem because it is an inappropriate information leak. And so on.

I actually DON'T CARE if someone clicks on the funny cursors package, in exchange for complete information on their browsing habits -- as long as inappropriate information is not leaked. If the user loses the contents of their savings account to a hacker with a trojan? My initial reaction is to laugh, and then feel pity. As long as it's not a theft of resources I am controlling.

Which boils down to: malware is defined by what it does. If propagation is an issue (usually network issues), it becomes my concern. Otherwise? I don't care. So, I use behaviour based approaches to malware control. If a new (to this system) piece of software doesn't have access to resources, it can't misuse them.

Simple trojans, viruses and worms? Amusing, but not particularly on my radar. Specific attacks on security frameworks designed to contain software? Definitely, along with root kits.

About the only reason I bother with "malware detection" is to keep Windows users happy(ier). They seem to think that this stuff is somehow important.

... it depends on detection of a significant number of machines being compromised to produce the detection event and response. Meanwhile, a significant number of machines have been compromised. The horses are out of those barns by the time the doors are closed.

Rinse and repeat, with a fresh variant of the malware, until "all your horse are belong to us".

Meanwhile, all they're doing is detecting a pattern of distribution of a pattern of data, without any way to differentiate whether the data itself is malware. Surprise: This same pattern occurs with news and with ideas. Do we really want a surveillance system to treat the spread of, say, stories of government corruption, as a malware infection?

"...If antivirus protectors could collect data from machines and users, including geographic location, social networking information, type of operating system, installed programs and configurations... The bottom line is this: 'Let's ignore what the malware does on a machine, and instead look at how it moves between machines. That is much easier to assess. And the moment malware gives up what allows us to detect it, it also stops being a threat.'"..."

"let's argue that there are secure ways antivirus protectors could learn about all installations of software -- good and bad -- that any of their end-users perform. Let's also assume that they could easily collect other data from these machines and users [itworld.com]: geographic location, social networking information, type of operating system, installed programs and configurations"

What's going to protect us from defects in these security systems? Wouldn't giving these malware monitoring systems access to computer ne

You actually think that nobody would start making malware/adware for Linux? Not all adware/malware is installed without the knowledge of the user... downloading a smiley pack that has malware in it seems to still be fairly common. I see no reason why someone wouldn't do the same for Linux. It would just have ".rpm" instead of ".exe".

Sure, it probably wouldn't be in one of the good repositories, but since when has availability from reputable sources stopped people from downloading/installing software?

It also exploited microsoft systems, and a warning was issued less than 14 hours after it was first spotted. Mitigating the attack was fairly straightforward, and fixes were quickly available and easy to apply. There are windows worms, trojans and viruses still going around that are years old. But you drag up a situation that was resolved nearly a decade ago.

My point was that the ISC was created in response to a virus that had an impact on Linux. More to the point, that "Linux" ( much like "Mac" ) does not mean "invulnerable". Any competent system admin will tell you that.

fixes were quickly available and easy to apply

This has less to do with the existence of exploits and more to do with competency, doesn't it? Tell you what, if you can tell my mother-in-law how to apply this decade-old fi

Windows is leaps and bounds more secure than any distro of linux, and will be for quite a while.

Citation, please?

The reason Windows is so exploited is that it is on 90%+ of the machines in the world, which makes it the prime target. If Linux had 90% of the desktop, I'm sure you wouldn't be saying "Switch to Linux".

while the attacker is standing on his head, drinking a glass of water, and whistling "Yankee Doodle".

Anyone who can successfully code a virus for Linux while doing everything you just specified above is a walking holy terror and needs to be shot on sight before he (or she) decides the world is boring and it needs to be more "interesting".

A properly configured 'nix machine is much more difficult to exploit than a 'doze box.

Here is the problem. A properly configured Windows box is pretty damn hard to exploit. I haven't had a virus in recent memory, and most other malware infections are wholly the user's fault (i.e. no amount of OS-level security will protect them). Granted, in my near 30 years of computers, I've had 2 Windows viruses, 0 Linux viruses, 0 OS X/Mac viruses, and 0 C64/Amiga/DOS/BSD ones as well. Well, really one Window

I'll just point out here that Linux users generally do not run as Admin-God on their machines, so while they could still bork their own user account it becomes that much more difficult to compromise the entire machine.

But it requires root access to install updates (keep your system updated!) and, typically, software, does it not? Which means the normal user will be in the habit of typing in the root password, just like Windows users are accustomed to clicking "Yes, allow" and/or typing the Administrator password.

No, Linux users don't generally run as root on their machines, but I type the root password into Ubuntu installations very frequently.

There is little difference. One clicks "Yes" to allow something to happen, the

No, it just depends on whether there is also an exploit (perhaps a totally separate one) at that point in time that allows privilege elevation.

Distros do tend to patch pretty fast, but there is at the moment a clear day-or-two gap between some apps like Firefox releasing and the distros having patched versions.

The real problem remains between the chair and the keyboard.... The operating system can't prevent a total retard clicking yes to everything, or typing in their password because something looks cool....

I'll just point out here that Linux users generally do not run as Admin-God on their machines, so while they could still bork their own user account it becomes that much more difficult to compromise the entire machine.

I'll just point out there that since the vast majority of machines aren't professionally-run multiuser servers, and very little malware really needs elevated privileges, that distinction is basically irrelevant in the real world.

I have no idea how this happens, but it does. The entire system gets broken. Antivirus gets broken quickly before definition updates come in. People have system-wide IE problems, and their hosts file is rewritten, and there's a damn ring-zero network driver running.

Linux, however, has actual account separation. Yes, malware could get in, and horribl

If you think Linux is inherently more secure than Windows, you're absolutely nuts.

Linux is more secure against malware than Windows in the same way that a solid storm window with a few pinhole air leaks at the edge of the frame is more secure against poison gas than a window screen.

This is a "feature" of the way Windows and its application suite are designed.

Now that elaborate malware constructs have been designed and debugged for decades on the Windows Swiss Cheese platforms, and a multibillion dollar malware industry built upon them, if Windows should ever be displaced as the dominant platform by Linux you can expect the payloads to be ported. Then ANY successful Linux exploit the authors can find will give them a new "infection head" and an opportunity to pull the same stunts on Linux, despite the far smaller number of vulnerabilities.

So Windows' security issues (and the failure of the company and users to adequately address them) have made things bad, not just for Windows users, but for everybody. The plague has been bred to enormous strength and virulence in other species and now poses a general threat - much like H1N1 in birds and pigs now poses a threat to humans. Thanks, Microsoft.

Meanwhile, with Windows still the big target, avoiding it in favor of the harder-to-crack, quicker-to-fix, less-profit-for-bad-guys-meanwhile Linux platform remains a benefit for those who use it.

And if it ever DOES become a big enough target to go after, we can hope that the lower number of vulnerabilities, the more rapid fix cycle, the model of "fix the holes" in preference to "identify and intercept the latest mutant strains", and the far more varied population of installations might keep the problems far smaller than they are with Windows.

Hell, Steve Ballmer keeps repeating over and over how much more expensive the Mac is. If that's true, then people with Macs have more money. Where's the shitstorm of malware trying to steal identities from all those Mac users with hefty bank accounts?

The installed base is smaller. Therefore the return-on-investment must be lower for a certain development effort (even taking into account your postulate that Mac users are "richer", which I don't buy without seeing some numbers). Remember, malware authors don't do their work for free. A larger user base means proportionally larger returns for the person who contracted the malware development.