The ravings of a SANS/GIAC GSE (Compliance & Malware)
For more information on my role as a presenter and commentator on IT Security, Digital Forensics, Statistics and Data Mining, e-mail me: "craigswright @ acm.org".

Dr. Craig S Wright GSE


Books

I have a few books and another is on the way for 2012. Firstly, I have to plug the first in the Syngress series of books on IT audit. This is a comprehensive compliance and governance handbook with EVERYTHING (from the high level to the hands-on for the expert) to get you started in IT compliance and systems security. The main book is "IT REGULATORY AND STANDARDS COMPLIANCE HANDBOOK". This is the first in a series I have planned, and more will follow in time. There will be electronic updates to keep the book current.

I will be working on co-authoring a book on CIP (Critical Infrastructure Protection) - but more on this later.

On top of this, I recycle computers: I take 1.5-to-2-year-old corporate lease computers and refurbish them so that they can run the most current programs.

The question is - what do you do to help?

If you do not have the time, have you thought about a donation?

This blog has been monetarised. This is where the money goes. By clicking and purchasing on this site, you help Burnside and Hackers for Charity. All monies earned here are split 50/50 between these two charities.


Tuesday, 9 February 2010

Many people feel that it is not feasible to model risk quantitatively. This is blatantly false. In the past, many of the calculations have been computationally infeasible at worst and economically costly at best. This has changed. The large volume of computational power now available, coupled with novel stochastic methods, has made it economically viable to calculate risk quantitatively with a high degree of accuracy. Risk can be measured as a function of time (survival time), money, or any number of other processes.

As an example, a recent question as to the security of SMS-based banking applications was posed on the Security Focus mailing list.

The reality is that any SMS banking application should be a composite of multiple channels. For such a system, where an SMS response is coupled with a separate channel (such as a web page over SSL), the probability that the banking user is compromised and a fraud is committed, P(Compromise), can be calculated as:

P(Compromise) = P(C.SMS) x P(C.PIN)

Where: P(C.SMS) is the probability of compromising the SMS function and P(C.PIN) is the compromise of the user authentication method.

P(C.SMS) is related to the security of the GSM system itself without additional input. P(C.SMS) and P(C.PIN) are statistically independent, and hence we can simply multiply these two probability functions to obtain P(Compromise).

The reason for this is that (at present) the SMS and web functions are not the same process, and compromising one does not aid in compromising the other. With the uptake of 4G networks this may change, and the function will not remain this simple.

The probability that an SMS-only system can be cracked is simply the P(C.SMS) function, which is far higher than the compromise probability of a system that deploys multiple methods.
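As a quick numerical sketch of the composite calculation above (the probabilities here are assumed values for illustration, not measurements of any real system):

```python
# Illustrative sketch of P(Compromise) = P(C.SMS) x P(C.PIN).
# Both probabilities below are assumed values, not measured ones.

def p_compromise(p_sms: float, p_pin: float) -> float:
    """Probability of a successful fraud when the attacker must defeat
    both independent channels (the SMS function and user authentication)."""
    return p_sms * p_pin

p_sms = 0.05  # assumed probability of compromising the SMS function
p_pin = 0.02  # assumed probability of compromising the authentication channel

composite = p_compromise(p_sms, p_pin)
print(f"SMS-only:  P(Compromise) = {p_sms}")
print(f"Composite: P(Compromise) = {composite}")

# Since both factors are below 1, the composite is always the smaller risk.
assert composite < min(p_sms, p_pin)
```

The independence assumption is what justifies the simple product; if the two channels shared a process, a joint model would be needed instead.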

For each application, we can use Bayes' theorem to model the number of vulnerabilities and the associated risk. For open ports, we can combine the expected reliability of the software with the expected risk of each individual vulnerability to model the expected risk of the application.


Over time, as vulnerabilities are uncovered, the system exhibits a growing number of issues. Hence, confidence in the product decreases with time as a function of the SMS utility alone. It also means that mathematical observation can be used to produce better estimates of the number of vulnerabilities and attacks as more are uncovered.

It is thus possible to observe the time that elapses since the last discovery of a vulnerability. This value depends on the number of vulnerabilities in the system and the number of users of the software. The more vulnerabilities, the faster the discovery rate of flaws. Likewise, the more users of the software, the faster the existing vulnerabilities are found (through both formal and adverse discovery).

If we let E stand for the event that a vulnerability is discovered between times T and T+h, for n vulnerabilities in the software, we can use Bayes' Theorem to compute the probability that we have n bugs:

P(n|E) = P(E|n) x P(n) / [sum over k of P(E|k) x P(k)]

From this we see that, if each of the n vulnerabilities is discovered at rate λ (so that P(E|n) is proportional to n x λ x exp(-nλT) x h) and the prior on n is Poisson with mean μ:

P(n|E) ∝ (μ exp(-λT))^(n-1) / (n-1)!   for n ≥ 1

By summing the denominator we can see that if we observe a vulnerability at time T after the release and the decay constant for defect discovery is λ, then the conditional distribution for the number of remaining defects (n - 1) is a Poisson distribution with expected number of defects μ exp(-λT).

Hence:

P(n - 1 = k | E) = (μ exp(-λT))^k x exp(-μ exp(-λT)) / k!
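This Bayes step can be sketched numerically. The prior mean μ, discovery rate λ and observation time T below are assumed figures, and the first-discovery likelihood P(E|n) proportional to n x λ x exp(-nλT) follows the derivation above:

```python
import math

def posterior_defects(mu: float, lam: float, T: float, n_max: int = 400):
    """P(n | E) for n = 0..n_max, where E is the event that a vulnerability
    was first discovered at time T after release. Prior: n ~ Poisson(mu);
    likelihood: P(E | n) proportional to n * lam * exp(-n*lam*T)."""
    unnorm = []
    prior_n = math.exp(-mu)              # Poisson pmf at n = 0
    for n in range(n_max + 1):
        likelihood = n * lam * math.exp(-n * lam * T)
        unnorm.append(prior_n * likelihood)
        prior_n *= mu / (n + 1)          # advance pmf: P(n+1) = P(n)*mu/(n+1)
    z = sum(unnorm)                      # the summed denominator
    return [u / z for u in unnorm]

mu, lam, T = 10.0, 0.1, 2.0              # assumed parameters
post = posterior_defects(mu, lam, T)
mean_n = sum(n * p for n, p in enumerate(post))

# Consistent with the Poisson result above: n - 1 ~ Poisson(mu*exp(-lam*T)),
# so the posterior mean of n is mu*exp(-lam*T) + 1.
print(f"posterior mean:     {mean_n:.4f}")
print(f"mu*exp(-lam*T) + 1: {mu * math.exp(-lam * T) + 1:.4f}")
```

Computing the pmf iteratively (rather than via factorials) keeps the sketch numerically stable for large n.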

The reliability function (also called the survival function) represents the probability that a system will survive to a specified time t. Reliability is expressed as either MTBF (mean time between failures) or MTTF (mean time to failure); the choice of term depends on the system being analysed. In the case of system security, it relates to the time that the system can be expected to survive when exposed to attack. This function is hence defined as:

R(t) = P(T > t) = 1 - F(t)

The function F(t) above is the probability that the system will fail within the time 't'. As such, this function is the failure distribution function (also called the unreliability function). The randomly distributed expected life of the system (t) can be represented by a density function f(t), and thus the reliability function can be expressed as:

R(t) = integral from t to infinity of f(x) dx

The time to failure of a system under attack can be expressed as an exponential density function:

f(t) = (1/θ) x exp(-t/θ)

where θ is the mean survival time of the system when in the hostile environment and t is the time of interest (the time over which we wish to evaluate the survival of the system). Together, the reliability function R(t) can be expressed as:

R(t) = exp(-t/θ)

The mean (θ), or expected life of the system under hostile conditions, can hence be expressed as:

θ = integral from 0 to infinity of R(t) dt = M

where M is the MTBF of the system or component under test and λ is the instantaneous failure rate. Mean life and failure rate are related by the formula:

λ = 1/M

The failure rate over a specific time interval (t1, t2) can also be expressed as:

λ(t1, t2) = [R(t1) - R(t2)] / [(t2 - t1) x R(t1)]

As λ = 1/M and θ = M, we can see that the reliability of the SMS function can be expressed as:

R(t) = exp(-λt)
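The exponential survival model above can be sketched numerically; the mean survival time θ below is an assumed figure, not a measurement:

```python
import math

def reliability(t: float, theta: float) -> float:
    """R(t) = exp(-t/theta): probability the system survives past time t,
    where theta is the mean survival time (M, the MTBF) under attack."""
    return math.exp(-t / theta)

theta = 180.0          # assumed mean survival time under attack, in days
lam = 1.0 / theta      # instantaneous failure rate, lambda = 1/M

for t in (30, 90, 180, 365):
    print(f"R({t:3d}) = {reliability(t, theta):.3f}")

# At t = M the survival probability is exp(-1), roughly 0.368,
# and R(t) falls monotonically toward 0 as t grows.
```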

What this means is that the SMS-only function has R(t) approaching the limit 0 as t tends to infinity. The longer the application is running, the less secure it becomes.

Adding an independent second factor goes some way towards mitigating this issue, so long as the combined R(t) does not tend to 0 as t tends to infinity, or at least does so more slowly than the SMS function alone.
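Under the independence assumption, a fraud requires both channels to fail, so the composite survival is R(t) = 1 - (1 - R_sms(t))(1 - R_pin(t)). A sketch with assumed mean survival times:

```python
import math

def r_exp(t: float, theta: float) -> float:
    """Single-channel exponential survival, R(t) = exp(-t/theta)."""
    return math.exp(-t / theta)

def r_composite(t: float, theta_sms: float, theta_pin: float) -> float:
    """Composite survival: the user is defrauded only if BOTH independent
    channels have been compromised by time t."""
    f_sms = 1.0 - r_exp(t, theta_sms)   # failure (compromise) probabilities
    f_pin = 1.0 - r_exp(t, theta_pin)
    return 1.0 - f_sms * f_pin

theta_sms, theta_pin = 180.0, 365.0     # assumed values, in days
for t in (30, 180, 720):
    print(t,
          round(r_exp(t, theta_sms), 3),
          round(r_composite(t, theta_sms, theta_pin), 3))

# Both curves still tend to 0 as t grows, but at every t the composite
# survives at least as well as the SMS-only system.
```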

9 comments:

The probability that an SMS-only system can be cracked is simply the P(C.SMS) function, which is far higher than the compromise probability of a system that deploys multiple methods.

From a simple arithmetic point of view it is clear that multiplying two probabilities results in a smaller probability (as both numbers are less than 1), so P(C.SMS) * P(C.PIN) < P(C.SMS)

unless of course the pin is automatically compromised.

Neglecting that, it is rather dangerous of you to talk about the statistics and probability of security vulnerabilities when these events are not actually random. While probability theory might be acceptable for dice rolling, where the outcome is random, it is entirely inapplicable to a security system, where the likelihood of the event occurring does not depend on chance but rather on the attacker knowing a vulnerability in your system.

Suppose you have a service exposed to the world - the probability of compromise is not constant and has nothing to do with the length of time the service is exposed - either it's vulnerable or it's not. If it's not vulnerable, you can leave it there for an infinite length of time before it's compromised. If it's vulnerable, and you are aware of it, it will get hacked quickly. If it's vulnerable with a zero day and you don't know about it, you are not in a position to estimate the risk.

Security is really simple - don't make it too complex by throwing numbers around - it just confuses people. Your basic assumption in using probability theory is that the probability of failure is constant for all attackers and that the event happens randomly.

Probability theory is not applicable to security. You are better off designing a system which is as secure as you possibly can make it rather than knowingly leaving a system vulnerable just because your math tells you there is a mean time before failure of 20 years.

Rereading this post, I realized that I made a mistake in my previous response (although that response is not posted -- maybe because of this error).

P(compromise) already has the possibility of a vulnerability and the possibility of it being attacked by someone with sufficient motivation and resources built in, doesn't it?

I missed this because the work I've done generally requires me to split these things out. And we split them out because they're not well modeled where I work. I've seen some approaches that use a Gaussian curve for parts of this, but I don't generally agree with that approach.

I agree with your statement that the probability of compromise increases generally. There's a great deal of evidence to support the statement and it's not hard to see why.

There are two problems with your approach that I think I would encounter while trying to use it. First, my company does a lot of in-house development. When a new web application is deployed, assuming we've tested thoroughly and remediated the vulnerabilities we've found, it would appear that the number of vulnerabilities is zero. This is false, of course.

So, the very beginning of the modeling exercise must start from an unknown number of possibly theoretical vulnerabilities. It may turn out that the vendor for the application container has a protocol flaw, or the operating system services have some flaw, or the application itself has some non-obvious behaviour defects under unusual circumstances. I expect some of these will be found over time and increase the probability of compromise.

What I don't know is the density function to apply when estimating (since I have no historical data at time zero). In addition to this, I may not have a detailed understanding of my user base to factor into the chance of vulnerability.

I can choose a standard density function, like the Gaussian for example, but I believe there will be cases where it's difficult to predict with certainty what the risk will be, due to not knowing what key factors will push a particular population of users to produce even one attacker, especially when the population is smaller and more restricted than the Internet at large.

And I won't be surprised if you say I simply don't understand statistics well enough in this case to know how to approach the problem. Without question, you understand it better than I do. My reason for pointing to Taleb's work is that he has substantial data indicating that the choice of density function can mask the fact that we don't know something and then the rest of our calculation, while precise, will not be accurate.

I believe that better models might be produced with more data, but I also believe those models will be influenced by observation. Risk modeling in the financial sector is surely a sign of this. Taleb predicted in 1997 that the model being used wouldn't be accurate, and he had analysis to back that position up.

What makes you believe that your original integral for P(E|n) holds? This is not a stationary process. I think this original assumption is mistaken. Furthermore, you are assuming that your actions will not affect the environment. This is not true in security. Game theory is probably a better model here. You don't have to outrun the bear, just the guy next to you.