The ravings of a SANS/GIAC GSE (Compliance & Malware)
For more information on my role as a presenter and commentator on IT security, digital forensics, statistics and data mining, e-mail me: "craigswright @ acm.org".

Dr. Craig S Wright GSE


What is happening

Books

I have a few books, and another is on the way for 2012. First, I have to plug the first in the Syngress series of books on IT audit. This is a comprehensive compliance and governance handbook with EVERYTHING (from the high level to the hands-on detail for the expert) to get you started in IT compliance and systems security. The main book is "IT REGULATORY AND STANDARDS COMPLIANCE HANDBOOK". This is the first in a series I have planned, and more will follow in time. There will be electronic updates to this book to keep it current.

I will be working on co-authoring a book on CIP (Critical Infrastructure Protection) - but more on this later.

On top of this I recycle computers. To do this I take 1.5 to 2 year old corporate lease computers and refurbish them so that they can run the most current programs.

The question is - what do you do to help?

If you do not have the time, have you thought about a donation?

This blog has been monetised. This is where the money goes: by clicking and purchasing on this site, you help Burnside and Hackers for Charity. All monies earned here are split 50/50 between these two charities.


Friday, 7 November 2008

There are a number of maxims for the creation of a secure system in information technology. The question is where these come from and what they are.

The paper, "The Protection of Information in Computer Systems" by J. H. Saltzer and M. D. Schroeder [Proc. IEEE 63, 9 (Sept. 1975), pp. 1278-1308] was the watershed paper on this topic and the origin of the maxims that we take for granted today.

These maxims are the fundamentals of information security. These are:

Economy of mechanism: Keep the design as simple and small as possible.

Fail-safe defaults: Base access decisions on permission rather than exclusion.

Complete mediation: Every access to every object must be checked for authority.

Open design: The design should not be secret.

Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.

Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.

Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.

Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.

The last maxim, psychological acceptability, is commonly overlooked. To make a security system work, it needs to be accepted by the people using it. If we make a system too complex, it will fail. If people perceive that it impedes their ability to do their job, they will find a way to bypass it.
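The fail-safe defaults maxim is worth seeing in code. The sketch below is a minimal, hypothetical illustration (the permission table and user names are invented): the decision is based on explicit permission, so anything not listed is denied.

```python
# Illustrative sketch of "fail-safe defaults": base the access decision on
# explicit permission (a whitelist), not on exclusion (a blacklist).
# The permission table and user names here are hypothetical examples.

PERMITTED = {
    "alice": {"read"},
    "bob": {"read", "write"},
}

def is_allowed(user: str, action: str) -> bool:
    """Deny unless the (user, action) pair is explicitly permitted."""
    return action in PERMITTED.get(user, set())

print(is_allowed("bob", "write"))     # True: explicitly permitted
print(is_allowed("mallory", "read"))  # False: unknown users fail safe
```

Note that an unknown user or a typo in the table fails closed, not open; a blacklist would do the opposite.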

These maxims are listed in the section of the paper called "Design Principles". This section begins by stating: "Whatever the level of functionality provided, the usefulness of a set of protection mechanisms depends upon the ability of a system to prevent security violations. In practice, producing a system at any level of functionality (except level one) that actually does prevent all such unauthorized acts has proved to be extremely difficult. Sophisticated users of most systems are aware of at least one way to crash the system, denying other users authorized access to stored information. Penetration exercises involving a large number of different general-purpose systems all have shown that users can construct programs that can obtain unauthorized access to information stored within. Even in systems designed and implemented with security as an important objective, design and implementation flaws provide paths that circumvent the intended access constraints. Design and construction techniques that systematically exclude flaws are the topic of much research activity, but no complete method applicable to the construction of large general-purpose systems exists yet. This difficulty is related to the negative quality of the requirement to prevent all unauthorized actions".

The developers of a few applications (such as port knocking tools) should go over these maxims; they might realise that they are not meeting several of them.

Wednesday, 5 November 2008

Honestly, I find it difficult to understand why people do not grasp why errors and low-quality software occur.

A comment was made as a question on Security Focus: Why isn't Quality Assumed? Why isn't Security Assumed? Why are these concepts thought of as add-ons to Applications and Services?

Why do they need to be specified, when they should be taken for granted?

Input validation

Boundary conditions

Encrypt data as necessary

Least privilege access

White lists are better than black lists
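The first two items on that list can be shown in a few lines. This is a hedged sketch only; the field names, character set and limits are invented for illustration, not taken from any particular application.

```python
# Input validation with boundary conditions and a character whitelist.
# The rules here (username pattern, age limits) are invented examples.
import re

# Whitelist: only the characters and lengths we explicitly allow.
USERNAME_RE = re.compile(r"^[a-z0-9_]{3,16}$")

def validate_age(raw: str) -> int:
    age = int(raw)             # raises ValueError on non-numeric input
    if not 0 <= age <= 130:    # boundary condition check, both ends
        raise ValueError("age out of range")
    return age

def validate_username(name: str) -> str:
    if not USERNAME_RE.match(name):
        raise ValueError("username fails whitelist")
    return name

print(validate_age("42"))            # 42
print(validate_username("alice_01")) # alice_01
```

The whitelist pattern rejects everything not explicitly allowed, which is exactly the "white lists are better than black lists" point: you do not have to enumerate every bad input.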

It is simple economic theory. We are talking high school level. If you think about it for a moment you will come to understand.

First, think of a few things in life outside IT. I will pose a few questions and see if you can answer them:

Are all cars of the same quality? Why do you pay more for a Lexus over a Hyundai?

Do you have to take insurance on a trip?

Now some that are a little closer to home:

Are all door locks of the same quality?

Do all houses come with dead-bolts and alarm systems?

Do all cars have a lojack installed?

Do all windows on all houses have quality locks?

Are all windows made of Lucite (which is child proof)?

The simple answer is that quality varies with cost. If you want more you pay more. This is honestly a simple exercise. Quality software does exist. If you like, you can go to the old US Red Book standards and have an "A" class software verification. Except that that copy of Windows XP or Vista will now cost $10,000+.

I do code reviews. They are needed both to verify the findings from the static analysis software used to test code and to gain a higher level of assurance. Even then, this is not perfect, as modelling complex interactions is more time consuming and error prone.

I can do around 190 to 220 lines of code an hour on a good day for a language such as C, and less for assembly. My rates are charged hourly. An analysis of XP would take over 50,000 man hours at this level. This excludes the fixes. This excludes the add-ons.
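The arithmetic behind that estimate is worth making explicit. The line count below is an assumption (figures of roughly 40 million lines for Windows XP are often quoted, but the exact number is not public); the review rate is the 190-220 lines/hour range given above.

```python
# Back-of-the-envelope estimate of manual code-review effort.
# lines_of_code is an ASSUMED figure for a system the size of Windows XP;
# the review rate is the midpoint of the 190-220 lines/hour range above.

lines_of_code = 40_000_000   # assumed size of the code base
review_rate = 200            # lines reviewed per hour
hours = lines_of_code / review_rate

print(f"{hours:,.0f} person-hours")                              # 200,000 person-hours
print(f"{hours / 2000:,.0f} person-years at 2,000 hours/year")   # 100 person-years
```

Even if the assumed line count is off by a factor of four, the effort is still far beyond the "over 50,000 man hours" floor, which is the economic point: full manual verification of a commodity OS is priced out of the market.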

Tuesday, 4 November 2008

To measure residual business risk we need to assess a number of areas. These should be measured using quantitative methods. Subjective evaluations of risk (qualitative measures) are based on perception. These have their place. They aid in measuring the perception of risk.

The more common risk frameworks are problematic in that they rely heavily on the use of personal judgment. The correlation between personal judgments is skewed by both the ethos and pathos of the argument. The logos (logical thought), is lost in the heart.

We need to break the risk down into its components. Let’s start with the fundamentals and work up. If we start by breaking up risk first off (risk = the probability of a threat source exploiting a vulnerability) we have a few places to start.

It is essential to create models that reflect the functions for each of these aspects.
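A minimal quantitative sketch of that decomposition follows. The structure (probability of a threat source, probability the exploit succeeds, loss and frequency) mirrors the definition of risk above; the figures themselves are invented purely for illustration.

```python
# Quantitative sketch of: risk = probability of a threat source
# exploiting a vulnerability, scaled by the loss when it does.
# All numeric inputs below are hypothetical examples.

def annualized_loss(p_threat: float, p_exploit: float,
                    loss_per_event: float, events_per_year: float) -> float:
    """Expected annual loss = P(threat) * P(exploit | threat) * loss * frequency."""
    return p_threat * p_exploit * loss_per_event * events_per_year

# Hypothetical: 30% chance a given attempt comes from a real threat source,
# 10% chance the attempt succeeds, $50,000 loss per success, 20 attempts/year.
print(round(annualized_loss(0.30, 0.10, 50_000, 20), 2))  # 30000.0
```

The point of modelling each factor separately is that controls can then be evaluated by which factor they move: patching reduces the exploit probability, while insurance reduces the loss per event.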

Threats and threat modeling.

There are several data sources available that could supply the raw data necessary for the creation of a viable quantitative risk framework for security:

Dshield

Storm center

CERTs

Internal metrics etc

What is needed to create these models is to classify the data into manageable classes. Through the creation of a set of categories that can be used to model an organization, and likewise the attacker, the data from these sources can be divided into categories that provide a suitable framework. These would be organizational classifications that we can align the ongoing risk to. This will allow both predictive modeling (GARCH, ARIMA – heteroscedastic time series data) and point processes (the existing risk, the risk if we do X).
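To make the heteroscedastic point concrete, here is a pure-Python simulation of a GARCH(1,1) series, the kind of volatility-clustering process mentioned above. The parameter values are arbitrary; a real model would be fitted to incident data (Python libraries such as `arch` provide GARCH estimation).

```python
# Simulate a heteroscedastic GARCH(1,1) series: the variance of each
# shock depends on past shocks, so volatility clusters over time.
# Parameter values are assumed for illustration only.
import random

random.seed(1)
omega, alpha, beta = 0.1, 0.2, 0.7     # assumed GARCH(1,1) parameters
var = omega / (1 - alpha - beta)       # start at the unconditional variance
series = []
for _ in range(500):
    shock = random.gauss(0, var ** 0.5)           # shock given current variance
    series.append(shock)
    var = omega + alpha * shock**2 + beta * var   # variance recursion

print(len(series))  # 500 observations of a volatility-clustering series
```

Ordinary least-squares style models assume constant variance; incident and attack-rate data rarely behave that way, which is why the GARCH/ARIMA family is the right starting point.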

An organization itself needs to be categorized. To characterize an organization's systems, network, applications and so on, we need to look at both the technological needs and the business needs of that organization. We can start by mapping the risk into qualitative classes to ease people into more complete models involving distributions. At least, however, these will be classes based on a non-biased and non-subjective analysis.

Administrative Steps

Administrative processes impact operational issues and as such need to be noted, in particular areas such as policies and processes. Some of the areas to consider in analyzing the administrative controls of an organization would include:

Determine the organization's (business) goals

Determine the organization's structure

Determine the organization's geographical layout

Determine current and future staffing requirements

Determine the organization's existing policies and politics

Technical Steps

Applications or systems generally do not act in isolation. Consequently, it is necessary to consider more than just the primary application: you also need to investigate how the application interacts with other systems. Some of the things to check include:

Characterization

In characterizing an organization we have a number of stages that will quickly help to determine the risk stance taken. This means looking at the various applications and protocols deployed within the organization. For instance, have internal firewalls been deployed? Does centralized antivirus exist within the organization? The stages of characterization are generally conducted in the opposite order to a review: rather than starting with policy, this type of characterization starts with applications and works up to see how well these fulfil the organization's vision. The areas we need to consider are:

Applications

Network protocols

Document the existing network

Identify access points

Identify business constraints

Identify existing Policy and procedures

Review existing network security measures

Summarize the existing security state of the organization

This information is vital to understanding an organization's requirements:

What you need to be able to do to conduct your business,

What the system's security should be set to permit, deny, and log, and

From where and by whom.

Next we need to consider the vulnerability and the attack. This is derived from survival models. I am hoping that we get a positive result from requests to the Honeynet project, but there are alternatives even without these.

Using a common framework, such as applying the CIS application and OS risk levels to selected configurations, will allow a more scientific determination. These can be aligned with honeypot data and mapped to DShield results.

Ease of Resolution and Ease of Exploitation

Some problems are easier to fix than others; some are harder to exploit. Knowing how difficult it is to solve a problem will aid in assessing the amount of effort necessary to correct it. The following are given as human classifications, but the idea would be to use survival data: set a vulnerability loose in the wild and measure how long it takes to be exploited. This would be a continuous function, but we can set values at points to make it easy to explain to people.

The following are common classifications in risk analysis. However, at present they are subjective and biased metrics:

Trivial Resolution

The vulnerability can be resolved quickly and without risk of disruption.

Simple

The vulnerability can be mitigated through a reconfiguration of the vulnerable system, or by a patch. The risk of a disruption of services is present, but diligent direct effort to resolve the problem is acceptable.

Moderate

The vulnerability requires a patch to mitigate and carries a significant risk; for instance, an upgrade may be required.

Difficult

The mitigation of the vulnerability requires an obscure patch, requires source code editing, or is likely to result in an increased risk of service disruption. This type of problem is impractical to solve for mission-critical systems without careful scheduling.

Infeasible

A fix is infeasible when the vulnerability is due to a design-level flaw. This type of vulnerability cannot be mitigated through patching or reconfiguring the vulnerable software. It is possible that the only way of addressing the issue is to stop using the vulnerable service.

Trivial Exploitation

The vulnerability can be exploited quickly and with a low risk of detection.

How to improve these...

The answer is to start creating hazard/survival distributions for the various attacks and threats. Using data that is readily available, complex but accurate mathematical forecasting and modeling can be conducted even now.
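As a minimal sketch of that idea: if the times from disclosure to first in-the-wild exploit were exponentially distributed, the maximum-likelihood rate is one over the mean, and the survival function gives the probability a vulnerability is still unexploited after t days. The observations below are invented; real data would come from the honeypot and DShield sources mentioned earlier.

```python
# Fit an exponential survival model to (hypothetical) days-to-exploit data.
# S(t) = exp(-rate * t) is the probability a vulnerability is still
# unexploited after t days under the exponential assumption.
import math

days_to_exploit = [3, 7, 2, 14, 5, 9, 4, 6]   # invented observations
rate = len(days_to_exploit) / sum(days_to_exploit)  # MLE: 1 / mean = 0.16/day

def survival(t: float) -> float:
    """P(still unexploited after t days)."""
    return math.exp(-rate * t)

print(round(survival(7), 3))   # 0.326: about a 1-in-3 chance of lasting a week
```

A real analysis would test the exponential assumption (Weibull or log-normal hazards often fit better) and handle censoring for vulnerabilities never observed exploited, but the continuous survival curve is exactly what the coarse trivial/simple/moderate labels above are approximating.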

Monday, 3 November 2008

The idea of a memristor has been around for a long time, but they are only now starting to be built. HP's recent breakthroughs in this long-touted technology will radically change the face of computing in the years to come, allowing Moore's law to continue and accelerating the growth of storage and memory capacity.

Memristors combine several advantages of memory and disk based storage into a single unit. Basically, think of combining a flash hard drive and DRAM into one package.

Great, new tech, but how does this really impact forensics and security?

The answer is mind-blowing when you think about it. Not only will the fundamentals of computational theory change when long-term and short-term memory start to combine; memory will also become static.

What occurs when you pull the power cord on your computer now?

Now think: what if the computer state remains the same (like a super-hibernate)?

Think of memory forensics - this will be the norm as ALL storage will be memory.

We still have a few years: HP plans to offer these commercially by 2012, and some believe that these devices will replace the existing paradigms between 2014 and 2016. This may be a while, but things move quickly. Blink and say hello to tomorrow...

This is the course that is associated with the GIAC GCIH certification. As GCIH #6896 and a Gold certification holder, I highly recommend this course and Chris as a Mentor. The real benefit of mentor sessions is that you can learn the material in depth over time.