I recognize maybe (MAYBE) half of the programs on the test. Also no Symantec/Norton, Panda, Malwarebytes...

I looked at their procedure and methodology, and honestly, I'm not impressed. It's free to submit for tests, but you pay to use the logo and pay to see the test results. Also, they give vendors the list of infections they are going to scan weeks ahead of time. So really, all a vendor has to do to pass is scan the exact files in the test set and tweak their definitions to match it, for both the bad stuff and the good stuff.

Other fun facts from their info: "A VB100 award means that a product has passed our tests, no more and no less. The failure to attain a VB100 award is not a declaration that a product cannot provide adequate protection in the real world."

"The results we report show only the core detection capabilities of traditional malware technology. Many of the products under test may offer additional protective layers to supplement this, including but not limited to: firewalls, spam filters, web and email content filters and parental controls, software and device whitelisting, URL and file reputation filtering including online lookup systems, behavioural/dynamic monitoring, HIPS, integrity checking, sandboxing, virtualization systems, backup facilities, encryption tools, data leak prevention and vulnerability scanning. The additional protection offered by these diverse components is not measured in our tests."

So what it really sounds like is: "Can this product's vendor process this file list through their threat research team in five days, so they can get a 'perfect 100' and then pay us to say so?" I really wish the actual bad guys would give the AV vendors a list of what they'll release in five days. Either way, all the test looks at is scanning of dormant samples, and the vendor gets those samples in advance. Heck, even -I- could write a "virus scanner" as an MS-DOS batch program that would earn a VB100 rating just by checking MD5s against the threat set. No false positives, guaranteed win. Of course, it wouldn't work well (or at all) in the real world. In fact, the file list itself gives the MD5s.
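To make that concrete, here's a minimal sketch of that "scanner" in Python rather than batch, purely to illustrate the point. The hash-list filename is invented; all it does is compare MD5s against the published list, which is exactly what the advance file list makes possible:

import hashlib
import sys

# Hypothetical hash list: one MD5 per line, lifted straight from the
# published file list. The filename here is made up for illustration.
with open("vb100_sample_md5s.txt") as f:
    KNOWN_BAD = {line.strip().lower() for line in f if line.strip()}

def md5_of(path: str) -> str:
    """Compute a file's MD5 in chunks so large samples don't eat RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# "Scan": a file is infected iff its hash is on the list. Every listed
# sample is detected, nothing else ever flags, so zero false positives
# -- and zero real-world protection.
for path in sys.argv[1:]:
    print(f"{'INFECTED' if md5_of(path) in KNOWN_BAD else 'CLEAN'}: {path}")

A "product" like this would ace the dormant-sample scan and the clean-set check, which is the whole criticism in a nutshell.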

"The test measures products' detection rates over the freshest samples available at the time the products are submitted to the test, as well as samples not seen until after product databases are frozen."

That test operates on a threat database that is "frozen" at the deadline (no definition updates allowed). They want to see whether the existing, aging database and heuristics will continue to help against new threats the product "can't possibly know about".

SecureAnywhere has an extremely minimal local database; its real "database" is the cloud system. It can't effectively be frozen without taking away that cloud connection entirely, and with it essentially all of the product's knowledge. It literally cannot be tested fairly under this methodology, because there is no meaningful local database to freeze.
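To illustrate the architectural mismatch, here's a hypothetical sketch of a cloud-lookup scanner (all names invented; this is not Webroot's actual code). "Freezing the database" for this design amounts to pulling the network plug, which leaves almost nothing behind:

import hashlib

# Tiny local cache standing in for a minimal on-disk database;
# nearly all verdicts live server-side. Entirely hypothetical.
LOCAL_CACHE: dict[str, str] = {}

def cloud_lookup(md5: str) -> str:
    # In a real product this would be a network query. Under a
    # "frozen database" rule, the only equivalent is disabling it,
    # which removes the product's knowledge wholesale.
    raise ConnectionError("network disabled: database 'frozen'")

def verdict(path: str) -> str:
    with open(path, "rb") as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    if md5 in LOCAL_CACHE:          # the minimal local database
        return LOCAL_CACHE[md5]
    try:
        return cloud_lookup(md5)    # where the real answers are
    except ConnectionError:
        return "unknown"            # frozen == blind

A signature-based product keeps most of its knowledge after a freeze; this design keeps almost none, which is why the test methodology and the architecture just don't fit each other.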
