In this video Webroot purposely infect a machine running Webroot SecureAnywhere. They even disable the behaviour shield to replicate what would happen if a threat was missed and it executed on your PC.

We estimate there to be somewhere in the region of 50,000 new strains of malware every single day, so it's frankly impossible for the legacy signature-based approaches to keep up with the vast volume of threats.

Webroot SecureAnywhere adopts a new cloud-driven approach, ensuring that users always have access to the latest security "definitions" without needing to download any updates. This, coupled with a ~700 KB agent, ensures optimal performance and enhanced security.

Webroot also recognise that the ever-rising volume of malware means that they'll miss threats, too. While they do have industry-leading detection rates (See: http://www.av-test.o...er/mayjun-2012/), they have introduced unique protection against information-stealing malware, so even if they do miss something, the data that you really care about cannot be tampered with.

I guess it kinda is an ad. The video and content came from Webroot themselves. Doesn't mean it can't stir up some interesting debate on a new approach to AV. When was the last time an AV vendor purposely infected a PC running their software....?

There is, of course, offline protection. Some of which is highlighted in the video.

Then it provides no benefit over its competition. My AV checks for updates every hour. If within that hour I get hit with something new which totally blocks my AV from grabbing an update (which may or may not resolve the issue) then I am hosed.

If I run the service you suggest and I get hit within the hour, I can't reach the cloud to grab the update so off-line mode can't fix it. Hosed either way.

I like the idea of a constantly-updated "cloud" definition-base, but it would have to work as a hybrid system that also periodically downloads it locally. That way you generally get the very latest definitions, but in the event of malware that kills your internet connection, you still have a relatively-recent offline copy it can use to scan the system. I'm sure that's what er0n mentioned, but I am at work atm and can't view the video, so I can't be sure of how it works.
So then, Rohdekill, the advantage would be that in most cases you have a very up-to-date solution. Not sure which AV you use, but most people's don't update that frequently, so it may provide some benefit for an "average" user.
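The hybrid scheme suggested above (cloud-first lookups with a periodically refreshed local snapshot) could be sketched roughly like this. This is purely illustrative - the class, the refresh interval, and the "unknown" fallback are my own assumptions, not how any shipping product works:

```python
import time

class HybridDefinitions:
    """Cloud-first definition lookup with a locally cached snapshot.

    When the cloud is reachable we refresh the local copy; when it
    isn't, we fall back to the last snapshot we managed to download.
    """

    def __init__(self, fetch_from_cloud, refresh_secs=3600):
        self._fetch = fetch_from_cloud
        self._refresh_secs = refresh_secs
        self._local = {}        # last good snapshot of definitions
        self._last_sync = 0.0

    def sync(self):
        try:
            self._local = self._fetch()
            self._last_sync = time.time()
            return True
        except ConnectionError:
            return False        # offline: keep the stale local copy

    def lookup(self, signature):
        # Refresh opportunistically if the snapshot is getting old.
        if time.time() - self._last_sync > self._refresh_secs:
            self.sync()
        return self._local.get(signature, "unknown")
```

The trade-off is the one described in the posts above: while online you get very fresh definitions, and if malware cuts your connection you still scan against a relatively recent offline copy.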

Hi Rohdekill,

Let me explain how our offline protection works.

When a new file is introduced to a PC we try to obtain a classification from the Webroot Intelligence Network (cloud). If the connection cannot be established because the user is offline, the file is assumed to be 'unknown'.
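The lookup-with-offline-fallback flow described above can be sketched as follows. The names and the exception-based fallback are assumptions for illustration, not Webroot's actual API:

```python
from enum import Enum

class Classification(Enum):
    GOOD = "good"
    BAD = "bad"
    UNKNOWN = "unknown"

def classify_file(file_hash, cloud_lookup):
    """Ask the cloud for a verdict; fall back to UNKNOWN when offline."""
    try:
        return cloud_lookup(file_hash)   # e.g. an HTTPS request to the cloud
    except ConnectionError:
        return Classification.UNKNOWN    # offline -> treat as unknown

# Simulate being offline: the lookup cannot reach the cloud.
def offline_lookup(_file_hash):
    raise ConnectionError("no internet connection")

print(classify_file("abc123", offline_lookup))  # Classification.UNKNOWN
```

An 'unknown' verdict is what triggers the Monitor state described in the next paragraph.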

Files that have an 'unknown' classification will be executed in a 'Monitor' state. Even though it's running on the endpoint, we're carefully watching the file to make sure it can't make any malicious modifications to your PC. Also, every single change that the file does make to your PC while in the Monitor state will be recorded in a local change-journal database.

Once the connection to the internet has been re-established and we send down a 'bad' classification to the PC, all of those changes are perfectly reversed. There is a lot of protection built into the product to protect and verify the integrity of the internet connection, including LSP chain protection and kernel-mode connectivity.
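The journal-and-rollback idea can be illustrated with a toy model. Here the "filesystem" is just a dict and the journal records the prior state of every write so it can be replayed in reverse; the real implementation obviously tracks far more (registry, processes, etc.), so treat this as a sketch only:

```python
class ChangeJournal:
    """Record every change an 'unknown' process makes, so the changes
    can be perfectly reversed if the file is later classified 'bad'."""

    def __init__(self):
        self._entries = []  # list of (path, previous_contents)

    def record_write(self, fs, path, new_contents):
        # Save the prior state before letting the change through.
        self._entries.append((path, fs.get(path)))
        fs[path] = new_contents

    def rollback(self, fs):
        # Undo in reverse order so the earliest state wins.
        for path, old in reversed(self._entries):
            if old is None:
                fs.pop(path, None)   # file did not exist before
            else:
                fs[path] = old
        self._entries.clear()

fs = {"hosts": "original"}
journal = ChangeJournal()
journal.record_write(fs, "hosts", "hijacked")        # monitored change
journal.record_write(fs, "dropper.exe", "payload")   # monitored change
journal.rollback(fs)        # 'bad' classification arrives -> undo it all
print(fs)  # {'hosts': 'original'}
```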

So, in summary, your endpoint benefits from a degree of generic protection that stops your PC from being 'trashed', and you also get a perfect clean-up routine.

It could be argued that we're no better or worse than the competition at protecting your PC when it's offline, but the benefits when connected to the internet are clear.

Let me know if you have any other concerns on this topic.

Thanks,
Will

Edit: Take a look at the last part of the video and you'll see the journaling and rollback in action. In the unlikely scenario that the situation you describe occurs, the user will be able to manually 'block' the infected file, and every single change it made to the system will be perfectly reversed. This requires no active connection to the internet.

Honestly, do we need AV that is updated every second? Unless you're a high value target (e.g.: government, banking, super rich...) are you really at risk of being hit with 0-day attacks?

Even if the 0-day threat is real for average users, which I don't think it is, the frequency of the definition downloads is less important than the total time it takes the AV vendor to discover, classify, and add a definition for it... The AV vendors don't publish those numbers, though...

I think most users will be absolutely fine, and it really depends how you use the internet, how highly you value your sensitive data, and how highly you value your time. If you don't do online banking or store your resume/CV on your PC, then you'll probably be fine with one of the legacy signature-based solutions.

Your last comment is exactly why we have decided to take the approach that we have. There are approximately 7 million users currently using Webroot SecureAnywhere today - whenever a new file is observed for the first time on one of our customers' PCs, it's executed on the PC in an isolated sandbox environment where we'll capture the initial behaviour of the file. We'll then make a determination as to whether the behaviour is good or bad - if it's bad, all of our 7 million customers are instantly protected without having to wait for us to publish a signature or get them to download anything.

If the behaviour doesn't appear to be bad, the file is executed on the endpoint but the user/PC is still protected using the methods shown in the video.
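The first-seen flow described in the two paragraphs above might look something like this. The function names, the shared set standing in for the cloud, and the toy behaviour check are all invented for illustration:

```python
KNOWN_BAD = set()   # stands in for shared cloud state queried by all endpoints

def looks_malicious(behaviour):
    # Toy stand-in for the cloud's behavioural rules.
    return "keylog" in behaviour or "self_replicate" in behaviour

def on_new_file(file_hash, sandbox_run):
    """Handle a file seen for the first time anywhere in the community."""
    if file_hash in KNOWN_BAD:
        return "blocked"                  # community verdict already cached
    behaviour = sandbox_run(file_hash)    # capture initial behaviour, sandboxed
    if looks_malicious(behaviour):
        KNOWN_BAD.add(file_hash)          # instantly protects every endpoint
        return "blocked"
    return "monitor"                      # allowed to run, changes journaled
```

The point of the shared set is the claim in the post: once the first endpoint's sandbox run produces a 'bad' verdict, every other endpoint gets "blocked" from the cache without downloading anything.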

The window of exposure to a new threat (one of the ~50,000 new strains per day) is dramatically reduced using this model.

FWIW, 0-day threats are not necessarily targeted attacks. They can spread through software vulnerabilities and infected legitimate websites.

P.S. I have no idea whether I'm actually allowed to be posting on this thread. I hope I'm not breaking any rules.

How is this dynamic scanning any different than heuristic engines that have been built into AV scanners for the last decade? Unless you're saying that every file a user opens is transmitted to Webroot for additional analysis?

Hi Frazell,

Traditional AV products typically utilize basic local heuristics which are renowned for generating false positives and being largely ineffective.

Webroot SecureAnywhere sends the behaviour of the file, along with its metadata, to the Webroot Intelligence Network (cloud), where the behaviour is compared to tens of thousands of advanced behavioural rules. In addition to the behaviour, Webroot is able to make a more accurate 'estimation' by considering the file's age (how long it's been known to the Webroot community) and popularity (how many users in the Webroot community are using it). Some other solutions have also started to adopt cloud reputation lookups.
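A reputation scheme combining those three signals (behavioural rule hits, age, popularity) could be sketched like this. The weights, formulas, and threshold below are made up for the sketch - they are not Webroot's actual scoring:

```python
def reputation_score(rule_hits, age_days, install_count):
    """Combine behavioural, age, and popularity signals into a risk score.

    Weights are illustrative assumptions: behaviour dominates, while
    being new or rare in the community adds a smaller amount of risk.
    """
    behaviour_risk = min(rule_hits / 10.0, 1.0)          # more rule hits -> riskier
    novelty_risk = 1.0 / (1.0 + age_days / 30.0)         # newer file -> riskier
    rarity_risk = 1.0 / (1.0 + install_count / 1000.0)   # rarer file -> riskier
    return 0.6 * behaviour_risk + 0.2 * novelty_risk + 0.2 * rarity_risk

def verdict(score, threshold=0.5):
    return "bad" if score >= threshold else "good"
```

For example, a brand-new file seen on one machine that trips many behavioural rules scores high, while an old, widely installed file with no rule hits scores low - which is the intuition the post describes.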

The key thing here is that while our 'heuristics' should be more effective, we recognise that the bad guys are getting smarter, so we don't rely on them alone. We've implemented generic protection against information-stealing malware and a unique feature for perfect remediation - you can see these features in action in the video in the OP.

Why doesn't the product have any kind of email scanning, and what about webpage scanning (e.g. sites hacked to run malicious code in an iframe or script)? I also noticed it does not actively scan downloads like NOD32 does, including inside ZIP files.