Friday, April 25, 2008

Security is a hot topic, at least when it comes to protecting the privacy of information belonging to persons or organizations. The first thing that came to my mind related to security testing was focusing on preventing data or systems from being accessed to obtain information that can be turned into money, or into other advantages.

On Wikipedia, security testing is defined as follows: "(The) Process to determine that an IS (Information System) protects data and maintains functionality as intended. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorisation, availability and non-repudiation."

The key words here are protecting data and maintaining functionality as intended. When I thought of security testing, it was mainly focused on PCs and servers. They should be protected against intruders from the outside world, to make sure that data is not stolen, altered or removed. Another thought I had was about leaving the functionality as is: preventing others from replacing existing code with their own to gain easier access. Such access can be gained using so-called Trojan horses.

Wikipedia defines a Trojan horse as: "In the context of computing and software, a Trojan horse, or simply trojan, is a piece of software which appears to perform a certain action but in fact performs another such as a computer virus."

Leaving functionality as is can also be read as leaving the functionality accessible and usable by the intended user. I think most security testing projects focus mostly on making sure that others cannot alter the code or functionality. The other side, however, is avoiding that users become unable to use their own systems.

Another way to get access to information systems is through known bugs: developing an exploit that makes use of holes or bugs in the system.

Wikipedia defines an exploit as: "An exploit (from the same word in the French language, meaning "achievement", or "accomplishment") is a piece of software, a chunk of data, or sequence of commands that take advantage of a bug, glitch or vulnerability in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized)."
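As a toy illustration of this definition (my own hypothetical example, not taken from Wikipedia or the article below), consider a small calculator that passes user input straight to Python's `eval`. The bug is the unchecked input; the "exploit" is simply input crafted to trigger unintended behavior:

```python
# Hypothetical example: a naive calculator with an input-validation bug.
def calculate(expression):
    # BUG: eval executes arbitrary Python code, not just arithmetic.
    return eval(expression)

# Intended use works fine:
print(calculate("2 + 3"))  # 5

# An "exploit" is just input that abuses the bug -- here it makes the
# calculator run a file-system call instead of doing arithmetic:
malicious = "__import__('os').getcwd()"
print(calculate(malicious))  # returns the current directory, clearly unintended
```

The function name and the attack string are invented for illustration; real exploits target bugs such as buffer overflows in the same spirit: feed the program input that turns its bug into attacker-controlled behavior.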

During a search on the internet I came across an article which explains how exploits can be generated automatically. I have to mention that this is quite a technical article. Still, it gives some good information about the time pressure we might have when deploying patches, and therefore even more time pressure in the testing cycle.

In this article they explain that they were able to generate exploits using a tool. Those exploits can be generated within minutes, based on newly released patches. One of the interesting statements is that about 80% of users do NOT update their systems within 24 hours of an update being released. If an exploit is generated within minutes, this leaves attackers a playing ground of almost 24 hours for 80% of the users connected to the internet.

In the same article they explain how their tool works. Looking at the chapters, it has some relation to testing:

Patch-Based Exploit Generation using Dynamic Analysis

Patch-Based Exploit Generation using Static Analysis

Patch-Based Exploit Generation using Combined Analysis

Perhaps such an approach and tool can also be used in security testing: trying to generate the possible exploits for your system and using this information for risk analysis.

That time to market for solutions is important was also mentioned by Michael Kranawetter of Microsoft (Germany) at SQC 2008 in Dusseldorf on April 17th, 2008. He presented the Security Development Lifecycle. (See also my posting: Software Quality Conference 2008, Dusseldorf.)

In his presentation he pointed out that the time-to-market pressure for new deployments will increase. Exploits are found and have to be solved in a very short time. If exploits can be generated automatically within minutes, the risk window between a new deployment, a new exploit and the corresponding solution grows. For example: if an update is deployed at moment T1, the solution takes about 1 month to deploy, and the exploit was initially found after 1 week, you have a risk period of about 3 weeks. If the exploit can now be found in minutes, the risk period is extended from 3 weeks to 4 weeks.
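The arithmetic in this example can be sketched as follows; the figures are the ones from the example above, not measured data:

```python
# Risk window = time until a fix is deployed minus time until an exploit appears.
fix_time_days = 30           # deploying the solution takes about 1 month

# Before automatic exploit generation: an exploit appears after ~1 week.
exploit_manual_days = 7
risk_manual = fix_time_days - exploit_manual_days   # about 3 weeks

# With automatically generated exploits: minutes, effectively 0 days.
exploit_auto_days = 0
risk_auto = fix_time_days - exploit_auto_days       # about 4 weeks

print(f"risk period: was ~{risk_manual} days, now ~{risk_auto} days")
```

The point of the sketch: shrinking the exploit-creation time extends the risk period by exactly that amount, so the only remaining lever is shortening the fix-and-deploy time.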

Some exploits don't allow an attacker to gain control over the system. There are, however, exploits that prevent users from using their system. Preventing users from using a system can be done by brute force, as in so-called denial-of-service attacks, where persons attempt to make a computer resource unavailable.

On Wikipedia a denial-of-service attack is defined as: "A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users."

An exploit can also be used for this, without brute force, and in that case without a lot of other means, such as systems infected by trojans.

Perhaps it is our fault that when we hear the word computer, we think of the PCs or notebooks we are working with. I think in the definition above a computer can also be any system which contains a CPU. This means that any product containing a CPU can be part of a DoS attack or be targeted by an exploit.

Currently we focus our security testing more or less on our information systems. This is wrong, in my opinion. CPUs are also embedded in all kinds of things. I recently heard that about 99.8% of the CPUs in the world are used outside PCs and servers. If those CPUs are used for protecting or helping us in the real world, we should also protect them against intruders. Security is very important here, but sometimes the risk for persons is very low and security is therefore somewhat neglected. In the Netherlands a public transport card was recently introduced, and it was proven that its security had some holes. The direct damage for people was not that high; only the trust in the project was diminished.

If security testing does not get the same attention as functionality testing, the project might fail just after it goes into use. Exploits can nowadays be generated very quickly.

This will have some impact on the testing process. Michael Kranawetter suggested reserving about 20% of the testing time in a project for security testing. If this figure is valid, it means that either the testing time will be extended, other functionality will be tested less, or the test team will have to be extended.

Another impact is the moment at which it is valid to start security testing. Initially I would say: start at the beginning, at the same time normal testing starts. Only the system is still in development, and sometimes the developer has not yet paid attention to security issues. On the other hand, "wrong" coding can be discovered much earlier, and the impact of repairs can be minimized at an early stage.

A solution for this is also found in SCRUM projects: Continuous Integration. To make this successful you have to automate your testing.
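A hypothetical sketch of what such an automated security check in a Continuous Integration build might look like: a regression test that feeds known attack-style strings to an (assumed) input validator on every build. The validator, its rule, and the attack strings are all invented for illustration:

```python
import re

def is_safe_username(value):
    # Hypothetical validator: only ASCII letters, digits and underscores,
    # at most 32 characters.
    return bool(re.fullmatch(r"\w{1,32}", value, flags=re.ASCII))

# Attack-style inputs that must always be rejected -- checked on every build.
ATTACKS = [
    "admin'; DROP TABLE users; --",   # SQL injection attempt
    "<script>alert(1)</script>",      # script injection attempt
    "../../etc/passwd",               # path traversal attempt
    "a" * 1000,                       # oversized input
]

def run_security_checks():
    # Fail the build as soon as one attack string slips through.
    for attack in ATTACKS:
        assert not is_safe_username(attack), f"validator accepted: {attack!r}"
    # And make sure the fix did not break normal, intended input.
    assert is_safe_username("jeroen_rosink")
    print("security regression checks passed")

run_security_checks()
```

Because the checks run automatically with every integration, "wrong" coding is caught while the code is still fresh, which is exactly the early-repair benefit mentioned above.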

If security testing gets a larger part in a project and the development process is adapted to it, you need testers with different skills. In short, a test team might consist of the following people:

Test experts: skilled in using different test methods, and in informal and formal testing techniques which fit the chosen development method (development method knowledge)

Tool experts: skilled in using functional tools as well as technical tools and programming languages

Security experts: testers with skills and knowledge of the latest tools and security methods

In my opinion security testing has quite an impact on projects and testing. The test strategy will change, the process will change, the skills of testers will change, and the tools will change. And all this has to be done in a shorter time frame. And last but not least: project managers should be aware that projects might shift towards delivering correct and safe products instead of just reaching the deadline. In some cases you can go into production when there are workarounds for known issues related to functionality; only those issues should not have an impact on security. If security fails, the project might also fail once it goes into production and articles about it are written in the papers.

2 comments:

What I would also like to see is the not-so-technical side of security testing. Most security issues are internal. No matter how good your code is, if the user is in a group or role with too many privileges, you still have a problem. The same goes for social engineering: "I forgot my password, can you please reset it?", etc.

Hello Brian, I can imagine you would like to see the other, less technical side of security testing as well. Only if (from what I heard) just 0.2% of the CPUs are in PCs and servers, the risk lies more on the technical side. Related to your statement: authorization is mostly covered in user acceptance testing. I can see that this can go much further, as authorizations are mostly defined at GUI level and in the functions behind the GUI. Within the infrastructure, roles are also defined, and those are forgotten most of the time.

The main idea of this article is to write down some of the newly learned perspectives I got and to trigger people to think beyond functionality. Still, you came up with some good additions. Indeed, there are also internal security aspects you have to focus on.

AS IS

The postings on this blog are provided "AS IS" by Jeroen Rosink, with no warranties. Most of the posts will be thoughts, and thoughts can change over time. I would appreciate it if you leave a comment to sharpen my ideas and thoughts.