Our Mission

We work for a fair, just, and safe software marketplace, and we empower consumers to protect themselves.

How We Evaluate

We take software binaries and examine the safety features that are present, the functions that were used, the code's complexity, and, where possible, the software's performance during crash testing. We evaluate the compiled software that your computer actually runs, not the source code. Some safety features aren't added until compile time, so source code gives an incomplete view; getting access to it would also require signing NDAs with vendors, which would compromise our position as an independent testing organization.

Safety Features

Also known as "Application Armoring". Modern compilers, linkers, and loaders come with many safety features, but they won't do you any good if the software doesn't have them enabled. These features are to software what airbags and seatbelts are to cars: measures known and proven to improve safety, whose use should by now be industry standard. If your car doesn't have airbags, you're entitled to know that before you buy it.
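As a rough sketch of what checking for these features involves, the snippet below scores a binary against a checklist of compile-time hardening mitigations. The mitigation names are real; the weights and the scoring function are our own illustration, not the actual scoring formula.

```python
# Sketch: score a binary against common compile-time hardening features.
# The feature names are real mitigations; the weights are illustrative only.

HARDENING_WEIGHTS = {
    "stack_canary": 25,  # stack-smashing protection (-fstack-protector)
    "nx": 25,            # non-executable stack/heap (NX/DEP)
    "pie": 25,           # position-independent executable, enables ASLR
    "relro": 15,         # read-only relocations (full RELRO)
    "fortify": 10,       # fortified libc functions (_FORTIFY_SOURCE)
}

def hardening_score(features_present: set) -> int:
    """Return a 0-100 score for the hardening features found in a binary."""
    return sum(weight for name, weight in HARDENING_WEIGHTS.items()
               if name in features_present)

# Example: a binary built with canaries and NX, but without PIE or RELRO.
print(hardening_score({"stack_canary", "nx"}))  # → 50
```

A binary missing every mitigation scores 0; one built with all of them scores 100.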

Code Hygiene

There are things we can learn about a developer's security skill and knowledge based on what functions are used in the code they write. We evaluate about 500 functions that fall into these categories, and by looking at the frequency, count, and consistency of the functions used we can learn a lot about the development practices of a particular software vendor.
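To make this concrete, here is a small sketch that classifies the functions a binary imports by risk. The risky/safer pairs below are a handful of well-known libc examples; a real evaluation covers hundreds of functions, and this report format is ours, not the actual one.

```python
# Sketch: classify the libc functions a binary imports by risk.
# These risky/safer pairs are well-known examples; a real evaluation
# covers hundreds of functions.

RISKY_TO_SAFER = {
    "gets": "fgets",        # gets() cannot bound its read at all
    "strcpy": "strlcpy",    # no length check on the destination
    "sprintf": "snprintf",  # unbounded formatted write
    "strcat": "strlcat",    # unbounded concatenation
}

def hygiene_report(imported_functions: list) -> dict:
    """Count risky vs. evaluated functions among a binary's imports."""
    evaluated = [f for f in imported_functions
                 if f in RISKY_TO_SAFER or f in RISKY_TO_SAFER.values()]
    risky = [f for f in evaluated if f in RISKY_TO_SAFER]
    return {"evaluated": len(evaluated),
            "risky": len(risky),
            "suggestions": {f: RISKY_TO_SAFER[f] for f in risky}}

report = hygiene_report(["strcpy", "snprintf", "memcpy", "gets"])
print(report["risky"], report["suggestions"])
```

Here `memcpy` is ignored (not on the evaluated list), `snprintf` counts as a safer choice, and `strcpy` and `gets` are flagged with their safer replacements.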

Code Complexity

Complex code is harder to review and maintain, and is more likely to contain bugs. This is why NASA/JPL places limits on things like function size for code going into critical systems. Among the features we look at are Code Size and the Number of Libraries used.

Crash Testing

One established way to test a product is to see how it breaks or fails. Crash testing, or fuzzing, means feeding software deliberately bad inputs to see how it fails. Well-built software will be robust in the face of bad inputs; very badly built software will crash in ways that may indicate how it could be exploited.
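The idea can be sketched in a few lines. The toy parser below has a planted bug (it chokes on zero bytes) purely so the fuzzer has something to find; real fuzzers mutate inputs far more cleverly and run millions of trials.

```python
import random

# Sketch: a minimal random fuzzer. The toy parser has a planted bug
# (it mishandles zero bytes) purely to show how fuzzing surfaces crashes.

def toy_parser(data: bytes) -> int:
    """A deliberately buggy parser: crashes on any zero byte."""
    total = 0
    for b in data:
        total += 256 // b  # ZeroDivisionError when b == 0
    return total

def fuzz(target, trials: int = 500, seed: int = 0) -> list:
    """Feed random inputs to `target`; collect the inputs that crash it."""
    rng = random.Random(seed)
    crashing = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(8))
        try:
            target(data)
        except Exception:
            crashing.append(data)
    return crashing

crashes = fuzz(toy_parser)
print(f"{len(crashes)} of 500 random inputs crashed the parser")
```

Each crashing input is kept so it can be replayed later to reproduce and diagnose the failure.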

The Shape of Our Scores

Each scoring category is represented as a corner of the triangle: the better the software scores in a category, the closer that point sits to the outside of the triangle.

Highly secured software will score highly in all of our scoring categories, so every point sits near the outer edge. Somewhat secured software will score okay, with points falling between the center and the outside. Insecure software will score poorly, pulling its points in close to the center.

Often software will score high in some categories and lower in others. You can compare the strengths and weaknesses of different software by observing the shape these varying category scores create. For example, green has a better score than red for Code Hygiene, but does worse than red on Safety Features.
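The geometry behind these charts is simple to sketch: each category gets a direction, and the score sets how far the point sits from the center. The normalization to 0.0–1.0 and the category names below are assumptions for illustration.

```python
import math

# Sketch: place category scores on a radar ("triangle") chart. Scores are
# assumed normalized to 0.0-1.0: 1.0 lands on the outer edge, 0.0 at the
# center. Category names are illustrative.

def radar_points(scores: dict) -> dict:
    """Map scores to (x, y) vertices, spacing categories evenly around 360°."""
    n = len(scores)
    points = {}
    for i, (category, score) in enumerate(scores.items()):
        angle = math.pi / 2 + 2 * math.pi * i / n  # first corner points up
        points[category] = (score * math.cos(angle), score * math.sin(angle))
    return points

pts = radar_points({"Safety Features": 1.0,
                    "Code Hygiene": 0.5,
                    "Crash Testing": 0.9})
# A perfect Safety Features score sits at the top corner, effectively (0, 1).
```

Connecting the resulting points yields the shape: large and edge-hugging for well-secured software, small and center-bound for insecure software.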

Thanks to our Sponsors & Partners

As one of the few nonprofit research organizations of our kind, we test software and computing products through expert scientific inquiry into safety and risk. More importantly, we advise, empower, and educate consumers in their use of those products and software. With our partners and supporters, we're making the digital age safer for everyone.