Architect and design for security policies. Create a software architecture and design your software to implement and enforce security policies. For example, if your system requires different privileges at different times, consider dividing the system into distinct intercommunicating subsystems, each with an appropriate privilege set.

Keep it simple. Keep the design as simple and small as possible [Saltzer 74, Saltzer 75]. Complex designs increase the likelihood that errors will be made in their implementation, configuration, and use. Additionally, the effort required to achieve an appropriate level of assurance increases dramatically as security mechanisms become more complex.

Default deny. Base access decisions on permission rather than exclusion. This means that, by default, access is denied and the protection scheme identifies conditions under which access is permitted [Saltzer 74, Saltzer 75].
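The default-deny scheme can be sketched in a few lines. This is a hypothetical illustration (the users, actions, and resource names are invented): access is granted only when an explicit permission exists, and everything else falls through to deny.

```python
# Sketch of a default-deny access check. The permission table is
# hypothetical; the point is that the function refuses access unless
# a rule explicitly permits it.

ALLOWED = {
    ("alice", "read", "/reports"),
    ("alice", "write", "/reports"),
    ("bob", "read", "/reports"),
}

def is_allowed(user: str, action: str, resource: str) -> bool:
    # Access is granted only when a matching permission exists;
    # anything not listed falls through to "deny".
    return (user, action, resource) in ALLOWED
```

Note that a forgotten rule in this scheme fails safe (someone is denied access they should have) rather than failing open.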

Adhere to the principle of least privilege. Every process should execute with the least set of privileges necessary to complete the job. Any elevated privilege should be held only for the minimum time required to complete the privileged task. This approach reduces the opportunities an attacker has to execute arbitrary code with elevated privileges [Saltzer 74, Saltzer 75].
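One common way to hold elevated rights for the minimum time is "privilege bracketing." The following is a minimal sketch of the pattern: in a real POSIX setuid program the acquire/release steps would be calls such as `os.seteuid()`, but here a simple flag stands in so the shape is visible without requiring root.

```python
# Sketch of privilege bracketing: elevated rights are held only for the
# duration of the privileged task, and are dropped even if it fails.
from contextlib import contextmanager

class Process:
    def __init__(self):
        self.elevated = False  # stand-in for the effective privilege level

    @contextmanager
    def elevated_privileges(self):
        self.elevated = True       # e.g. os.seteuid(0) in a real program
        try:
            yield
        finally:
            self.elevated = False  # drop back immediately, even on error

proc = Process()
with proc.elevated_privileges():
    pass  # the privileged task runs here
# privileges are dropped as soon as the block exits
```

The `try`/`finally` is the important part: the drop happens on every exit path, so an exception in the privileged task cannot leave the process running with elevated rights.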

Sanitize data sent to other systems. Sanitize all data passed to complex subsystems [C STR02-A] such as command shells, relational databases, and commercial off-the-shelf (COTS) components. Attackers may be able to invoke unused functionality in these components through the use of SQL, command, or other injection attacks. This is not necessarily an input validation problem because the complex subsystem being invoked does not understand the context in which the call is made. Because the calling process understands the context, it is responsible for sanitizing the data before invoking the subsystem.
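For relational databases, the most reliable form of sanitization is to avoid building the query out of untrusted data at all and instead pass the data out-of-band with a parameterized query. The sketch below uses an in-memory SQLite database; the table and column names are illustrative.

```python
# Passing untrusted input to a database safely via a parameterized query.
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    # The ? placeholder keeps the value as data, so input such as
    # "x' OR '1'='1" cannot change the structure of the SQL statement.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
```

Had the query been assembled by string concatenation, the injection string above would have rewritten the WHERE clause; with the placeholder it simply matches no rows.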

Practice defense in depth. Manage risk with multiple defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense can prevent a security flaw from becoming an exploitable vulnerability and/or limit the consequences of a successful exploit. For example, combining secure programming techniques with secure runtime environments should reduce the likelihood that vulnerabilities remaining in the code at deployment time can be exploited in the operational environment [Seacord 05].

Use effective quality assurance techniques. Good quality assurance techniques can be effective in identifying and eliminating vulnerabilities. Fuzz testing, penetration testing, and source code audits should all be incorporated as part of an effective quality assurance program. Independent security reviews can lead to more secure systems. External reviewers bring an independent perspective; for example, in identifying and correcting invalid assumptions [Seacord 05].

Bonus Secure Coding Practices

Define security requirements. Identify and document security requirements early in the development life cycle and make sure that subsequent development artifacts are evaluated for compliance with those requirements. When security requirements are not defined, the security of the resulting system cannot be effectively evaluated.

Model threats. Use threat modeling to anticipate the threats to which the software will be subjected. Threat modeling involves identifying key assets, decomposing the application, identifying and categorizing the threats to each asset or component, rating the threats based on a risk ranking, and then developing threat mitigation strategies that are implemented in designs, code, and test cases [Swiderski 04].

Bonus Photograph

We like the following photograph because it illustrates how the easiest way to break system security is often to circumvent it rather than defeat it (as is the case with most software vulnerabilities related to insecure coding practices).

The photograph depicts a street (named Konsequenz) at Bielefeld University, Germany, at latitude/longitude 52.036818, 8.491467. It is visible via Google Street View.

We don't know who took this photograph. If you do, please let us know in the comments!

References

[Saltzer 74] Saltzer, J. H. "Protection and the Control of Information Sharing in Multics." Communications of the ACM 17, 7 (July 1974): 388-402.

12 Comments

The photograph was circulated fairly widely in early 2005. I captured my copy based on a reference in comp.risks (where it is credited to Elias Levy of Symantec), pointing to http://www.syslog.com/~jwilson/picks-i-like/kurios119.jpg . Wilson has it in his blog, http://fantasygoat.livejournal.com/37624.html, dated January 4, 2005, 1:27 PM. However, Wilson notes, "Where do you find the pictures you post on LJ? They are sent to me by friends, posted on boards, randomly surfed via Google, and also on various photo sites," so he probably does not hold the copyright. The picture is very much in his style of humor, though, so he is probably the first source on the net. You would have to ask him.

How should Secure Coding Practices address the use of ActiveX? I would like to see some mention of where the use of ActiveX falls in relation to the above guidelines. It is still a very popular technology, involving a lot of expense, and its use probably violates some of the above guidelines; but how can that be fairly expressed?

All of the above guidelines are very general and can apply to ActiveX or most other systems. You could draw up specific secure coding rules that apply the above principles to ActiveX. You would also have to account for any security flaws in ActiveX itself.

These guidelines have focused on C, C++, and Java because of their widespread usage (more widespread than ActiveX) and their public standards.

Circumventing system security only proves individual pride and prejudice, rather than a consistent, straightforward approach toward achieving and maintaining goals. This photograph is better used for demonstrating circus tricks, not for real-time, standardized IT systems.

I like the photo very much as well. It is also very funny to see the other side; see Street View. (I do not have permission to insert pictures.)

The paradox of IT security is even more symptomatic when you realize that, despite the purpose of the university building (and its size as well), such a place can so nicely demonstrate the old truth about the ultimate need for common sense on every occasion, regardless of your education. Even such a great concentration of intelligence could not prevent such a poor design of the access control.

My tribute to the anonymous university member who was watchful enough to observe, open-minded enough to realize, and brave and self-confident enough to publish this photo. That is exactly why we need academic freedom.

First: If you look at Google Street View, you'll notice that this is an exit, not an entrance (see the street sign). Think of data exfiltration or a data leak: while everyone else was thinking "they got around it to come in!", we were all wrong, folks; this was an exit all along.

Second: Since then, they've taken some countermeasures by installing new barriers/pillars that did not exist in the original photos. The original photos show some barriers, but with gaps, and they have managed to fill those gaps.

I've depicted both in the photos here:

https://ibb.co/k4z8Vz (see the no entrance sign and new barriers that did not exist in the original image!)