Although the [https://www.owasp.org/index.php/Top_10_2007 2007] and earlier versions of the [https://www.owasp.org/index.php/Top10 OWASP Top 10] focused on identifying the most common “vulnerabilities,” the OWASP Top 10 has always been organized around risks. This has caused some understandable confusion on the part of people searching for an airtight weakness taxonomy. The [https://www.owasp.org/index.php/Top_10_2010 OWASP Top 10 for 2010] clarified the risk-focus in the Top 10 by being very explicit about how threat agents, attack vectors, weaknesses, technical impacts, and business impacts combine to produce risks. This version of the OWASP Top 10 follows the same methodology.

The Risk Rating methodology for the Top 10 is based on the [https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology OWASP Risk Rating Methodology]. For each Top 10 item, we estimated the typical risk that each weakness introduces to a typical web application by looking at common likelihood factors and impact factors for each common weakness. We then rank ordered the Top 10 according to those weaknesses that typically introduce the most significant risk to an application.

The [https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology OWASP Risk Rating Methodology] defines numerous factors to help calculate the risk of an identified vulnerability. However, the Top 10 must talk about generalities, rather than specific vulnerabilities in real applications. Consequently, we can never be as precise as a system owner can when calculating risk for their application(s). We don’t know how important your applications and data are, what your threat agents are, nor how your system has been built and is being operated.

Our methodology includes three likelihood factors for each weakness (prevalence, detectability, and ease of exploit) and one impact factor (technical impact). The prevalence of a weakness is a factor you typically don’t have to calculate: several organizations supplied us with prevalence statistics, and we averaged their data to produce a prevalence-based likelihood-of-existence list for the Top 10. This prevalence data was then combined with the other two likelihood factors (detectability and ease of exploit) to calculate a likelihood rating for each weakness, which was then multiplied by our estimated average technical impact for each item to produce an overall risk ranking for each item in the Top 10.

Note that this approach does not take the likelihood of the threat agent into account. Nor does it account for any of the various technical details associated with your particular application. Any of these factors could significantly affect the overall likelihood of an attacker finding and exploiting a particular vulnerability. This rating also does not take into account the actual impact on your business. Your organization will have to decide how much security risk from applications the organization is willing to accept. The purpose of the OWASP Top 10 is not to do this risk analysis for you.

{| class="wikitable"
! Threat Agents !! Attack Vectors !! Security Weakness !! Technical Impacts !! Business Impacts
|-
| Application Specific || Exploitability AVERAGE || Prevalence VERY WIDESPREAD<br>Detectability EASY || Impact MODERATE || Application / Business Specific
|-
| || 2 || 0<br>1 || 2 ||
|}

Likelihood Rating: 1 (average of Exploitability, Prevalence, and Detectability: (2 + 0 + 1) / 3 = 1)

Risk Ranking: 2 (Likelihood * Impact: 1 * 2 = 2)
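The arithmetic behind the example can be sketched as follows. This is a minimal illustration of the averaging-and-multiplication scheme described above, not an OWASP-published implementation; the function names are our own, and the factor scores are the ones from the example table.

```python
# Illustrative sketch of the Top 10 risk-ranking arithmetic.
# Scores follow the example table above; OWASP publishes no
# reference implementation, so these names are hypothetical.

def likelihood_rating(exploitability: float, prevalence: float,
                      detectability: float) -> float:
    """Average the three likelihood factors for a weakness."""
    return (exploitability + prevalence + detectability) / 3

def risk_ranking(likelihood: float, technical_impact: float) -> float:
    """Overall risk = Likelihood * Impact."""
    return likelihood * technical_impact

# Example from the table: Exploitability AVERAGE (2),
# Prevalence VERY WIDESPREAD (0), Detectability EASY (1),
# Technical Impact MODERATE (2).
likelihood = likelihood_rating(2, 0, 1)   # -> 1.0
risk = risk_ranking(likelihood, 2)        # -> 2.0
print(likelihood, risk)
```

Note that, as the text above stresses, these generic scores omit threat-agent likelihood and business impact, so a real system owner’s own rating may differ substantially.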
