Objective

The software quality assurance goal is to confirm that the confidentiality and integrity of private user data are protected as the data is handled, stored, and transmitted. QA testing should also confirm, within acceptable risk levels, that the application cannot be hacked, broken, commandeered, overloaded, or blocked by denial of service attacks. This implies that the acceptable risk levels and threat modeling scenarios are established up front, so the developers and QA engineers know what to expect and what to work toward.

Platforms Affected

All

Best practices

Leverage available resources such as the OWASP Top Ten list, CLASP, the policy compliance frameworks described in Chapter 5, and the threat modeling processes described in Chapter 7. These will help identify design parameters, establish measurable goals, and ensure that security testing proceeds in a systematic, thorough, and quantified fashion.

Select a preferred vulnerability scoring system (CVSS, OVAL, etc.) and a management/tracking system (Bugzilla, a third-party vulnerability management package or service, etc.)

Establish and collect useful metrics that will facilitate decision making (for example, the count of open defects by severity and category, the arrival count over time, the close rate, total testing coverage, etc.)
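
A minimal sketch of such metrics collection, assuming defect records exported from the tracking system as simple dictionaries (the field names here are illustrative, not any particular tool's schema):

```python
from collections import Counter

# Hypothetical defect records; in practice these would be exported
# from the bug-tracking system (Bugzilla, etc.).
defects = [
    {"id": 101, "severity": "high",   "category": "injection", "status": "open"},
    {"id": 102, "severity": "low",    "category": "info-leak", "status": "closed"},
    {"id": 103, "severity": "high",   "category": "auth",      "status": "open"},
    {"id": 104, "severity": "medium", "category": "injection", "status": "closed"},
]

def open_counts_by_severity(defects):
    """Count open defects per severity level."""
    return Counter(d["severity"] for d in defects if d["status"] == "open")

def close_rate(defects):
    """Fraction of all logged defects that have been closed."""
    closed = sum(1 for d in defects if d["status"] == "closed")
    return closed / len(defects) if defects else 0.0

print(open_counts_by_severity(defects))  # Counter({'high': 2})
print(close_rate(defects))               # 0.5
```

The same tallies, grouped by category or recomputed over time, yield the arrival counts and close rates mentioned above.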

Identify the testing activities that are good automation candidates and decide how the automation will be done.

Have a set of QA entry criteria, which identifies the items necessary to begin testing:

Policy compliance validation requirements

The applicable threat modeling scenarios

The testing schedule, resource list, and budget

The metric and vulnerability scoring system selections

An organizationally meaningful certification, which shows the QA team participated in design reviews and was satisfied with the security parameters of the system.

The completed test plans

The QA exit criteria should include proof of application security integrity and readiness including:

A summary report with charts summarizing the collected metrics.

A security testing report, which describes how well the application performed compared to the policy compliance requirements and threat modeling scenarios, and its readiness compared to the established security baselines.

No outstanding high-severity security defects (for example, a simple list showing that all severity 1 security bugs have been resolved and verified).

An assessment which uses metrics to show that application security meets or exceeds established baselines, and that all security-related design goals have been met (that is, proof that the job is well done).

Note: Ideally, the reports should present information visually via charts, graphs, and similar techniques, so the numbers are easy to comprehend and facilitate the decision-making process.

How to protect yourself

Select and employ a vulnerability scoring system, such as CVSS or OVAL, or at least make sure that security-related defects have some special tracking method or tag.

Make sure the question of “Got security?” comes up during design reviews.

Establish a working escalation procedure for security-related defects.

Metrics

Description

The QA group will identify, select, and employ meaningful metrics to provide a baseline measurement of application security. This baseline will also serve as a comparison point for future assessments.

How to identify if you are vulnerable

A good system of metrics provides a basis for the following:

Summary charts, showing the security-related bug counts over time, their open and closure rates, and the progress towards policy compliance and risk assessment goals.

The numbers necessary to answer management’s questions about “How secure is the application?” or “Is risk increasing or decreasing over time?”

A known security defect density (that is, the average number of security bugs per unit of code is monitored and the rate is going in the right direction: down!)
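
Defect density is a straightforward ratio. A sketch, using hypothetical snapshot numbers purely for illustration:

```python
def security_defect_density(defect_count, lines_of_code, per=1000):
    """Security defects per `per` lines of code (per KLOC by default)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count * per / lines_of_code

# Hypothetical snapshots taken at two project milestones:
baseline = security_defect_density(18, 120_000)  # 0.15 bugs/KLOC
current  = security_defect_density(9, 150_000)   # 0.06 bugs/KLOC
trend_ok = current < baseline                    # density should be falling
```

Comparing the current density against the baseline gives a single number that answers "is the rate going in the right direction?"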

How to protect yourself

Establish a working set of metrics. For example, count the number of high, medium, and low severity security bugs as a start. Follow with rate assessments, which will answer questions like, “How fast are security-related bugs being discovered in QA testing?”, “How severe are the bugs that are being detected?”, and “How complete is the testing coverage for the areas prioritized by our policy compliance or risk assessment goals?”

Track that all security-related tests have been executed (a simple spreadsheet will do).

Automate the calculation and charting of the metrics where possible, so accurate information is available on demand, even in a dashboard summary fashion.
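
As a sketch of what such automation can start from, the following groups hypothetical defect discovery dates by ISO week; the resulting per-week counts are exactly what a dashboard arrival-rate chart would plot:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (date_found, date_closed) pairs exported from the
# tracker; date_closed is None for still-open defects.
records = [
    (date(2024, 3, 4),  date(2024, 3, 11)),
    (date(2024, 3, 5),  None),
    (date(2024, 3, 12), date(2024, 3, 13)),
]

def weekly_arrivals(records):
    """Count newly found security defects per ISO (year, week)."""
    weeks = defaultdict(int)
    for found, _closed in records:
        weeks[found.isocalendar()[:2]] += 1
    return dict(weeks)

print(weekly_arrivals(records))  # {(2024, 10): 2, (2024, 11): 1}
```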

Make sure all high-priority security bugs are fixed and regression-checked prior to software release.

Testing Activities

Description

How to identify if you are vulnerable

Not every QA team will employ all of the following testing activities, but the more you employ strategically, the better your security assurance will be:

Cross-site scripting and SQL injection tests have been run.
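
A minimal sketch of the response-checking side of such tests, under the assumption that the test harness submits canned payloads and inspects the response body; the payloads, error strings, and the `looks_vulnerable` helper are illustrative, not a substitute for a real scanner:

```python
# Canned probes: a payload echoed back unencoded suggests reflected
# XSS; leaked database error text suggests SQL injection.
XSS_PAYLOAD  = "<script>alert(1)</script>"
SQLI_PAYLOAD = "' OR '1'='1"
SQL_ERRORS   = ("syntax error", "unterminated quoted string", "ODBC")

def looks_vulnerable(payload, response_body):
    """Return a finding string if the response looks vulnerable, else None."""
    if payload == XSS_PAYLOAD and payload in response_body:
        return "possible reflected XSS"
    if payload == SQLI_PAYLOAD and any(e in response_body for e in SQL_ERRORS):
        return "possible SQL injection"
    return None

# Simulated response bodies, for illustration only:
assert looks_vulnerable(XSS_PAYLOAD, "<p><script>alert(1)</script></p>")
assert looks_vulnerable(SQLI_PAYLOAD, "ERROR: syntax error at or near")
assert looks_vulnerable(XSS_PAYLOAD, "&lt;script&gt;alert(1)&lt;/script&gt;") is None
```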

An assessment has been done of how well the application handles user input, including special or multibyte characters, excessively long strings, null inputs, and invalid values.
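
A sketch of such an input-handling check, assuming a field-level handler that either accepts a value or cleanly rejects it; `length_validator` is a toy stand-in for real application code:

```python
def hostile_inputs(max_len=10_000):
    """A minimal battery of hostile inputs for input-handling tests."""
    return [
        "",                        # null/empty input
        "A" * max_len,             # excessively long string
        "\x00\x01\x02",            # control characters
        "日本語テスト",               # multibyte characters
        "%00%27%3Cscript%3E",      # URL-encoded special characters
        "-1", "0", "2147483648",   # boundary numeric values as strings
    ]

def probe(handler, values):
    """Run each hostile value through the handler; collect crashes."""
    failures = []
    for v in values:
        try:
            handler(v)
        except ValueError:
            pass  # a clean, deliberate rejection is acceptable
        except Exception as exc:
            failures.append((v[:20], type(exc).__name__))  # crash = defect
    return failures

def length_validator(s):  # toy handler under test
    if len(s) > 256:
        raise ValueError("too long")
    return s

assert probe(length_validator, hostile_inputs()) == []
```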

Cookie or credentials manipulation testing has been performed.

Denial of service scenarios have been checked. It is understood how the application will perform in the presence of connection, login, or transaction flooding.
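
The flooding question can be rehearsed even before load-testing tools are in place. A toy model, purely illustrative: a service that admits a fixed number of concurrent logins should reject the overflow gracefully rather than crash or hang.

```python
import threading

class ToyLoginService:
    """Toy model of a server with `capacity` concurrent login slots."""
    def __init__(self, capacity):
        self._slots = threading.Semaphore(capacity)
        self._lock = threading.Lock()
        self.rejected = 0

    def login(self):
        if not self._slots.acquire(blocking=False):
            with self._lock:
                self.rejected += 1  # graceful rejection under load
            return False
        return True  # slot held for the duration of the flood

# Simulate 200 simultaneous login attempts against 50 slots.
service = ToyLoginService(capacity=50)
threads = [threading.Thread(target=service.login) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly 50 logins succeed; the other 150 are turned away, and the
# service itself stays responsive throughout.
```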