Hello Monks, I am in the process of working on a HUGE Application Security initiative where I work. A big part of this is secure coding standards for all the languages we use in house. CERT provides an excellent standard for C, Java, and C++, but nothing on shell or scripting languages. See CERT Secure Coding Standards.

Fortify provides a taxonomy for PHP and some other languages, and I have translated the entries that are applicable to Perl and Ruby into our standards wiki, but I was wondering if you wise monks knew of other places with Perl specifics? (And Ruby, bash, etc., but Perl would be a great find.)
Obviously, I have already written up the use of taint mode, strict mode, and so on, but additional information (for example, recommended CPAN modules for input validation) would be very useful.
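To make the taint-mode point concrete, here is a minimal sketch of the whitelist-and-untaint idiom that taint checking depends on. The routine name and the 3-to-16 word-character policy are illustrative assumptions, not a standard; run the script with `perl -T` to enable taint checking.

```perl
use strict;
use warnings;

# Untaint a user-supplied value by extracting only what an explicit
# whitelist allows; under -T, the captured $1 is considered clean.
sub untaint_username {
    my ($input) = @_;
    return undef unless defined $input;
    if ( $input =~ /\A(\w{3,16})\z/ ) {
        return $1;          # untainted copy of the input
    }
    return undef;           # reject everything that fails the whitelist
}
```

On CPAN, modules such as Data::FormValidator and Params::Validate package this style of validation behind a declarative interface.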

One place to look is the standard Perl documentation. See perlsec especially, which is devoted entirely to Perl security, including taint mode. There is security info in the open, system, and exec entries of perlfunc and more in perlopentut. Some of the info in perltrap is security related.
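A short sketch of the open/system advice from those pages: three-argument open keeps the mode out of the filename, and list-form system bypasses the shell entirely. The filename here is an assumption for illustration.

```perl
use strict;
use warnings;

my $file = 'example data.txt';   # spaces and metacharacters are harmless here

# Create the file so the example is self-contained.
open my $out, '>', $file or die "Can't create $file: $!";
print {$out} "hello\n";
close $out;

# Three-argument open: the mode is a separate argument, so a filename
# like '>evil' or 'cmd|' cannot change what the open does.
open my $in, '<', $file or die "Can't open $file: $!";
my $line = <$in>;
close $in;
unlink $file;

# List-form system: arguments go straight to the program, never through
# a shell, so metacharacters in them are not interpreted. ($^X is the
# currently running perl binary, used here only for portability.)
my $status = system($^X, '-e', 'exit 0');
```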

Secure web programming is mentioned at http://advosys.ca/papers/web/61-web-security.html. You might not be doing web programming, but don't forget your application domain has its own security issues no matter the language. Make sure you have standards in place for the application domains, too.

Read up on vulnerabilities and consider how to avoid them. Knowing what you're securing against is one of the best ways to formulate how you're going to secure something.

Above all, remember that untested security is likely very little security at all. Most security errors slip through from a lack of black-box testing of the code at its boundaries. Write tests to check boundary conditions and even completely invalid inputs that are unlikely to occur. Any interface to the user is an interface to a fuzzing tool.
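That advice can be put into practice with Test::More. A sketch, where the `is_valid_quantity` routine and its positive-integer policy are invented for the example:

```perl
use strict;
use warnings;
use Test::More;

# A hypothetical validator under test: accepts 1 through 999999 only.
sub is_valid_quantity {
    my ($q) = @_;
    return defined $q && $q =~ /\A[1-9][0-9]{0,5}\z/;
}

# Probe the boundaries and inputs no well-behaved user would ever
# send -- exactly what a fuzzing tool would try.
ok(  is_valid_quantity('1'),       'smallest valid value' );
ok(  is_valid_quantity('999999'),  'largest valid value' );
ok( !is_valid_quantity('0'),       'just below the boundary' );
ok( !is_valid_quantity('1000000'), 'just above the boundary' );
ok( !is_valid_quantity(''),        'empty string' );
ok( !is_valid_quantity(undef),     'undefined input' );
ok( !is_valid_quantity("1; DROP TABLE users"), 'injection attempt' );

done_testing();
```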

mr_mischief's excellent advice aside for a moment, let me provide a bit of cold water. Fire anyone who sends user-provided data directly anywhere. No taint checking or input validation? Fired! No placeholders for DB interaction? Fired! Also fire anyone who surfaces user data in world-readable ways. Username in the cookie? Fired! Accepting GET for login forms? Fired! Cached user settings page? Fired! Let's not stop there. Someone set up a DB without a root password? Fired! Someone used the same password on all the secure entry points and it's three years old? Fired!
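For the DB placeholder point, a minimal DBI sketch. It assumes DBD::SQLite is available and the table and values are invented; the hostile-looking name is stored as data and cannot alter the statement.

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite DB purely for illustration; placeholders work the
# same way with any DBD driver.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, PrintError => 0 });

$dbh->do('CREATE TABLE users (name TEXT, pass TEXT)');

# The ? placeholders bind user data as values; it is never interpolated
# into the SQL text, so injection attempts are stored, not executed.
my $sth = $dbh->prepare('INSERT INTO users (name, pass) VALUES (?, ?)');
$sth->execute(q{o'brien'; DROP TABLE users; --}, 'secret');

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM users');
```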

We spend a lot of time talking about difficult, non-trivial security exploits, yet the last two or three places I worked had holes you could drive a truck through. One place sent cookie contents directly to SQL, which meant a maliciously crafted cookie could delete the entire production DB.

I'm being facetious about firing everyone, but I argue that the way to address this sort of problem is not more process; it's more serious consequences for not knowing how to do your job. It's a professional and cultural issue, not one caused by a lack of information. It sounds like you might be getting a handle on the cultural aspect. Good luck and keep it up!

You make some very good points. The finer details of secure coding practice mean nothing if they are not followed. The most secure application code in the world can be a security nightmare when the application is configured incorrectly.

Treating all data that comes from outside the program as suspect or even downright hostile is one of the most important rules for security if not the most important. It's one that often gets forgotten when programming or configuring an application, though.

Any coding guidelines that help with robustness or maintainability help with security, too. If something's not robust and crashes when it's not a target of an attack, someone can crash it on purpose as well. They may even be able to crash it in a predictable enough way to exploit it. If code's not maintainable, then fixing security issues once they are found will take longer. Being security conscious is a good coding practice, but good coding practices in general help with security, too.

I realized after my post above that, since PerlMonks is a great source for discussion of all things Perl, I probably should have included some node references. I did a little searching around the Monastery for other security discussions, and some of the topics you mentioned came up. As usual, the threads are generally more valuable taken together than any single node from a thread by itself.

Is it Secure? is a good general discussion from 2002 on formulating and following good security guidelines and some references to particular ones.

merlyn bemoans security issues and several people reply with good references about security in web site design, or the lack thereof, where he (merlyn) reminds us that secure and non-functional is better than insecure and functional.

The tutorial Using Temporary Files in Perl mentions some of the security implications surrounding temporary files which are common to any language.
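The core File::Temp module handles those implications for you; a minimal sketch:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# tempfile() creates the file atomically with an unpredictable name and
# restrictive permissions, avoiding the classic /tmp race where an
# attacker pre-creates or symlinks a predictable filename.
my ($fh, $filename) = tempfile( UNLINK => 1 );
print {$fh} "scratch data\n";

# Reuse the already-open handle rather than reopening by name, which
# would reintroduce a race.
seek $fh, 0, 0;
my $back = <$fh>;
```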

Preventing XSS talks about cross-site scripting, which is another language agnostic security issue.
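The standard defense is to escape user data on output. Here is a hand-rolled sketch of the usual five-character escape (CPAN's HTML::Entities does the same job and more, but this version has no dependencies):

```perl
use strict;
use warnings;

# Replace the characters that have meaning in HTML with entities so
# user-supplied text is rendered inert instead of parsed as markup.
sub escape_html {
    my ($text) = @_;
    my %entity = (
        '&' => '&amp;',  '<' => '&lt;', '>' => '&gt;',
        '"' => '&quot;', "'" => '&#39;',
    );
    $text =~ s/([&<>"'])/$entity{$1}/g;
    return $text;
}

my $safe = escape_html('<script>alert("xss")</script>');
# $safe: &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```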

Finally, one of the most comprehensive nodes and associated followup threads on coding standards ever to grace PM is On Coding Standards and Code Reviews by eyepopslikeamosquito which covers all sorts of guidelines for writing code including some security issues. It also gives a nice bibliography of other guideline and policy documents from other organizations regarding coding for security, robustness, and maintainability.

As you say, though, the simple security issues that often aren't paid any attention should be addressed first. Get the low-hanging fruit that is most likely to allow the easiest route to exploitation, then move up the tree.

Update 2009-04-08: Thanks to ambrus for spotting a grammatical error. s/(secure and non-functional) (than insecure and functional)/$1 is better $2/;

That is the heart of the problem: the people who tend to be in charge are the least able to evaluate other people's skills. I have had one manager who knew how to program. He was up to his a** in alligators trying to get his bosses to set up a more stable development environment, so he ended up not being able to do that much.

All the rest of the managers did not want to get involved with anything involving processes; they mostly wanted to mediate between departments and individuals. They did not see themselves as people who involved themselves directly with the work, and therefore had no ability to evaluate who was doing what. Why do you think so many companies suck at development? They put non-technical people in charge of the technical side of the business, and it sucks.

Until you get managers who view themselves as part of the work process, it is very hard to implement a standard for how work should be done.

I totally agree. I tried to work a bit of that in, but I didn't have a good example. Some of the most awful security holes I've seen were known but persisted because of an unwillingness to pay the development cost required to fix them.