Posted by Unknown Lamer on Monday January 02, 2012 @07:10PM
from the i-hear-plain-text-password-databases-are-secure dept.

rhartness writes "I am a long-time Software Engineer; however, almost all of my work has been developing server-side intranet applications or applications for the Windows desktop environment. With that said, I have recently come up with an idea for a new website which would require extremely high levels of security (i.e. I need to be sure that my servers are as 100% rock-solid and unhackable as possible). I am an experienced developer, and I have a general understanding of web security; however, I am clueless about what is required to create a web server that is as secure as, say, a banking account management system. Can the Slashdot community recommend good websites, books, or any other resources that thoroughly discuss the topic of setting up a small web server or network for hosting a site that is as absolutely secure as possible?"

I've seen many a question or thought like this and I don't understand the underlying wonderment. Web applications aren't different than any other networked applications. You just have a larger selection of clients that could be communicating with you. But you'd never trust ANY client would you?

Aren't web apps very different? Inside my intranet I can make certain assumptions that I can't on the Web. If those assumptions prove false, it's because another layer above me isn't doing its job. You might balk at this, but the fact is that as programmers we're constantly relying on some layer above us, whether it's network (TCP/IP, SSL, TLS), software (the OS, the API) or hardware (is the memory on this board bad?).

Sounds like you're making bad assumptions. One unexpected breach and your network is no longer secure. On a secure network, you still close off all of the ports except the ones you use. You don't make assumptions that something is safe, you add IP filtering and passwords.

Web apps are exactly the same as any other intranet app, and should be just as secure. The only difference is, you also have a web server and a framework adding potential bugs and holes. And then your code most likely has to protect against common browser-based attacks and handle user authentication/authorization on a stateless connection.

Don't trust anything on any network, or you'll end up like Sony. Breach after breach.

You fail to actually address any of the technologies he mentioned as a layer above. You're talking about closing ports and other pretty standard bland basic intro-level security. Sure, there's overlap, but what he's saying is that a lot of common Internet problems are reliably and intelligently pre-solved for you if you control enough of the technology stack.

I'll pick his example of TLS since that's a good example of the sort of technology stack you can rely on in an intranet application which is prohibitive to implement in an Internet application.

If your web server has validated a TLS certificate, then unless your signing authority has been compromised (and for internal purposes, that authority is owned by your own company's security team), you can trust the subject of the cert. TLS is widely regarded as one of the most secure means of authentication available, since it includes endpoint verification on both ends. It's excellent practice, but if your CA is compromised it falls apart. You're probably also relying on other proven technologies like LDAP for identification; if someone gains write access to parts of LDAP they shouldn't have, that falls apart too.

In internal applications you can make these sorts of assumption that aren't really available on the public Internet since you don't control enough of the technology stack outside your own network to do so without substantial inconvenience for your customers. That doesn't make you a bad developer. In fact the opposite is likely true. If you're building an intranet web application and you think you can do a better job of managing user credentials than LDAP or a better job of securing communications than TLS, you're deluding yourself and very likely introducing security bugs into your application.

a lot of common Internet problems are reliably and intelligently pre-solved for you if you control enough of the technology stack.

Another bad assumption, that you control enough of the technology stack. I did address the technologies a layer above. You can't control the IP implementation, nor the TLS, nor LDAP, nor anything else outside of what your framework allows you to control.

Therefore, you have to distrust everything. Assume your LDAP is compromised, that TLS is broken, that your framework and web server and host OS are all broken. Write as defensively as possible.

This isn't about just writing an application; there is OS-level hardening, web server hardening, framework hardening, and more. You can't assume it's all in place and just write the application. Especially if you are "clueless about what is required to create a web server that is as secure as, say, a banking account management system."

That is one of the reasons Sony got hacked repeatedly within a few weeks. Don't assume anything. The web is hostile and everyone is an enemy; unless you assume that everyone is an enemy, your assumptions are going to be wrong. Just once out of a million page views, or a trillion, or a trillion squared: once is all it takes.

Most attacks come from trusted machines, either from people wanting to use their rightful access level to get more data than they should (or modify data they should not be allowed to), or from bots crafted to infect internal users' workstations and rob their credentials. No, you should not trust them just because they are internal, any more than you should trust me.

Exactly, the idea should be that you should assume that every piece of data that you are receiving is likely malicious, so as such you should sanitize every variable, never execute *anything* sent to you, mandate bidirectional encryption in which you verify certificates at both sides, and kill the session if a single out-of-order packet is received.

As well, block *every* port except the one that you intend to use within your application, and monitor all traffic to detect anyone *attempting* to connect over any other port, and immediately greylist their IP address for an hour. If they do it repeatedly, then blacklist them permanently.
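A minimal sketch of that greylist/blacklist bookkeeping in Python. The thresholds and the IP used are hypothetical, and a real deployment would do this at the firewall layer (e.g. with fail2ban or iptables) rather than in application code:

```python
import time

GREYLIST_SECONDS = 3600   # hypothetical: greylist offenders for an hour
BLACKLIST_THRESHOLD = 3   # hypothetical: permanent ban after 3 offences

attempts = {}             # ip -> list of offence timestamps
blacklist = set()         # permanently banned IPs

def record_probe(ip, now=None):
    """Record a connection attempt on a port we never open."""
    now = time.time() if now is None else now
    attempts.setdefault(ip, []).append(now)
    if len(attempts[ip]) >= BLACKLIST_THRESHOLD:
        blacklist.add(ip)   # repeat offender: permanent blacklist

def is_blocked(ip, now=None):
    """Blocked if permanently blacklisted, or greylisted within the last hour."""
    now = time.time() if now is None else now
    if ip in blacklist:
        return True
    recent = [t for t in attempts.get(ip, []) if now - t < GREYLIST_SECONDS]
    return bool(recent)
```

The timestamps are injectable (`now=`) purely so the logic is testable without waiting an hour.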

As well, requesting a non-existent resource should be treated just like trying to SSH into your box as root!

Anyone who legitimately runs into your security protections would need to call to get their account reinstated.

You should also ensure that any functions that will only be *reading* data do not have privileges to *write* data under any circumstances.

Only writing functions should be capable of writing to your data stores / databases.
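One cheap way to enforce that read/write split is at the connection level, so a read path physically cannot write. A sketch using SQLite's read-only URI mode; the table, values, and file path are made up, and a real database would instead use separate accounts with appropriately GRANT-ed privileges:

```python
import os
import sqlite3
import tempfile

# hypothetical example database on disk
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# set up the schema with a normal read-write connection (the "writer" role)
rw = sqlite3.connect(db_path)
rw.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
rw.execute("INSERT INTO accounts (balance) VALUES (100)")
rw.commit()

# read-only connection for code paths that must never write
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
ro.execute("SELECT balance FROM accounts").fetchone()   # reading works

try:
    ro.execute("UPDATE accounts SET balance = 0")       # writing does not
except sqlite3.OperationalError as exc:
    print("write refused:", exc)
```

Even if a bug (or an injection) sneaks a write statement into the read path, the database layer rejects it.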

Any malformed entries stored within your database should be immediately flagged as "bad data" and *not* presented back to the user. The record should simply be gone. Any one user who has more than 3 pieces of "bad data" associated with their account should be immediately blocked pending review.

The best course of action with regards to designing any hardened applications is to assume that any data coming from your own, non-internet accessible servers is suspect and then you will do well in limiting risk.

and monitor all traffic to detect anyone *attempting* to connect over any other port, and immediately greylist their IP address for an hour. If they do it repeatedly, then blacklist them permanently.

From what I see in real-world firewall logs, there are often tons of IPs trying to connect to your nonlistening ports. And those can be from dynamic IP users. Blacklisting these permanently would cause more problems and not really help much (assuming your system is hardened and has upstream DoS/DDoS protections in place).

and kill the session if a single out-of-order packet is received.

If you're worried about that sort of thing, you should solve it by using TLS/HTTPS (correctly;) ) rather than killing sessions just because an out-of-order packet is received. If the attacker already has the ability to pwn a user's TLS/HTTPS connections, the attacker has no need to inject out-of-order packets to pwn that user.

If you're that paranoid what you could do is set up "honey data" and "honey rows" in database tables. For example, you could create customer records of nonexistent people/items who/that don't appear anywhere else in the world. If those data/records are ever accessed, it means something has gone wrong. And if that data ever appears "outside" (internet or elsewhere), it may mean something has gone very very wrong...
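A toy sketch of that honey-row idea; the IDs, the records, and the alert channel are all hypothetical, and in a real system the alert would go to a SIEM or pager rather than a list:

```python
# Fabricated record IDs that no legitimate code path should ever touch.
HONEY_IDS = {"cust-99999", "cust-13371337"}   # hypothetical honey records

alerts = []   # stand-in for a real alerting channel (email, pager, SIEM)

# Toy data store: one real customer, one honey record.
customers = {
    "cust-1": "Alice Example",
    "cust-99999": "Totally Nonexistent Person",
}

def fetch_customer(cust_id):
    """Serve a record, but raise an alert if anyone touches honey data."""
    if cust_id in HONEY_IDS:
        alerts.append(f"honey record accessed: {cust_id}")
    return customers.get(cust_id)
```

Because no legitimate query ever references those IDs, any hit on them is a strong signal that something (or someone) is walking your data.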

Another way for attackers to access the data would be via the backups and the systems that do backups. So even if your web apps and servers are super-hardened, it may not matter if the attacker can get the data via the backups.

There is a huge difference though. It is true that you should not trust any clients. But many people make incorrect assumptions.

They think that when you are working internally, there is a very small number of clients that can possibly connect to it. The odds of a hacker getting onto your network are small. So of course it's secure; it's on a server behind a firewall. Opening an application to the internet strips those security blankets away.

You could sign the form/URL (and include a random salt), so the injected JavaScript has to figure out how to generate a valid signature. For example: Slashdot doesn't do any checking, so you can post a logout URL, and if anyone's browser somehow loads that URL, they get logged out. But if you add a random salt and a signature and tie it to some other secret, the attacker will have to generate or copy the logout URL from somewhere.

The logout url will still have to be derivable by the browser, otherwise the user won't be able to log out at all.

You know that. I know that. But someone with little or no experience with web browsers probably won't know that.

My point was: a web app can't be treated like "any other networked program" because the clients and protocols have some unique considerations you have to keep in mind.

I've programmed for the web for almost 10 years now, and have always tried to keep a focus on security. I would hesitate to do a project like this myself, because I know there are a lot of ways things can go wrong. So many details to

It will get hacked, it's just a matter of time. If you have data that is getting uploaded, then needs to be secure after that, consider using a unidirectional network, also known as a "data diode", which can only send data in one direction.

If you can't hand the administrator account passwords to someone and rest easy, you shouldn't be counting on it to be secure.

You can also spread your application out into layers. From your request I assume you will be collecting and/or publishing sensitive data. It may be possible to divide that process into sections, and spread the sections over three different machines, with custom-written interfaces between them. That way, when (not if, but when) your world-facing server gets pwned, the pwners will probably be unable to immediately pull anything useful out of the second section (on the second machine), since it isn't using any ord

Pretty bad advice. Unfortunately this is an area where you will continually need to keep asking the question. While there are certainly basics that should be covered there are also subtleties and interactions and new exploits in software you will depend on.

Not, "How can I write flawless code?," but, "What should I be reading?" The submitter showed no prior knowledge of exploits, so it seemed reasonable to provide him with a simple introduction to the kinds of exploits he may encounter and how they can be prevented.

Interestingly, the 2010 "OWASP top 10 vulnerabilities" had all existed for more than a decade - a competent developer flash-frozen in 1998 and thawed out today would be able to guard against all of those flaws. That's not good evidence for your position that the question "continually needs to be asked."

Q3: Are compiled languages such as C safer than interpreted languages like Perl and shell scripts?

The answer is "yes", but with many qualifications and explanations.

Really? C is a safer web language than Perl? Buffer overflows and all? Their example that you might accidentally be editing a file (in production?) in Emacs and leave a backup file sitting around that someone can request, and therefore have access to its source code is so weak it's pathetic. Isn't every major modern web server already configured to refuse to serve files whose mime type it does not recognize from the file extension? "Foo.cgi~" won't be downloadable because the web server doesn't understand what a ".cgi~" file is. Never mind that this example assumes that you're engaging in the egregious sin of editing a file on a production system.

If it's not edited directly on a production system, you almost certainly have a .gitignore (or .cvsignore or .svnignore or whatever) set up to ignore backup files, so it'll never go through your build system or become part of your deployed package anyway. And STILL, if you're relying on the obscurity of your source code as a security measure, you're doing something wrong. It doesn't hurt to keep the source secret, but by no means should you be compromisable because someone was able to get a peek at one of your source files. If someone wants your source code badly enough, they just need to pay off one of your engineers and they get the entire stack's source, maybe even revision history. With corporate espionage it's all but impossible to track down the perpetrator unless he's very stupid, and it leaves a lot less evidence behind than traditional brute-force attempts (like guessing script file names and looking for backup copies somehow left around in production).

Yes. To have an exploitable buffer overflow, a programmer has to make a major mistake in the most fundamental area of programming

So programs without flaws are flawless. lol.

_Anything_ that reduces flaws is a good thing. That includes superior tools/languages or hiring better programmers. Like assembly, C forces you to think of a problem at the level of manipulating bits in memory and pushing them around, manually managing resources, etc. Such thinking is stupid, wasteful, and pointless, and it needlessly increases development time, testing time, and other costs. Nobody should use C unless there is a very specific benefit from it -

So $200 for vBulletin and $150 for vBSEO per developer. Wherever you're working, get out as soon as you can, if they can't spend $350 per developer on licenses, they're going to cut a lot of other important corners - like pay raises. Seriously, that comes out to $0.16 per hour. Even if you guys are entry level guys making $16/hour, that's still just 1% premium on your pay rate to grant a substantial amount of utility (plus the conflicts involved in multiple developers working in a common environment is g

I am clueless about what is required to create a web server that is as secure as, say, a banking account management system

You can't.

There is no way to provide the same level of security as an in-house application running on dedicated terminals and a dedicated network as with the banks' teller terminals and ATMs.

And that's because you have no control over the browser and its plugins, so you can't stop them from mismanaging or misrepresenting the data, or custom code in modified copies of open-source browsers from saving pieces of secure pages that you never meant to touch a hard drive, etc.

As to the technology of how to deploy that, there are no easy answers and checklist standards. New attack vectors and design oversights come out all the time, so web security is an ongoing battle, not something you just design for and "finish".

Aside from that, there is no mention of what kind of web server technologies are to be used. I seriously doubt anyone can make many useful suggestions for how to secure an application when you don't even know what tools and languages are to be used to develop it.

Itemize your security requirements, then filter your tool options based on whether they have features to enable or support those requirements. Find out if it's possible to address any gaps through custom code.

And do blanket filtering. Never trust input. Always filter to the extreme, as long as you can get away with it, and as much as you can get away with it.

And remember, filtering input is only half the story. The other half is to harden the server itself by using a good, solid, external hardware firewall, and being careful to run only those services which are absolutely essential. And make sure all those services are patched and up to date. It does no good to harden your web app in the extreme if you're running an old and buggy ssh server on the same system.

I would agree with that 100%. In addition, use different machines for different filtering tasks. You might have a firewall that keeps bad stuff out of your network, but then have another firewall-type machine that protects and sanitizes only your app server. And have different physical networks. Nobody, not even system admins, should be able to get to the machines from the outside-facing network.

And do blanket filtering. Never trust input. Always filter to the extreme, as long as you can get away with it, and as much as you can get away with it.

A lot of the posters on this story talk about filtering, and that's absolutely right - but filter on a whitelist, never a blacklist! Think about what inputs or input forms are acceptable rather than trying to identify all possible bad inputs. If you go the blacklist approach, it's almost certain that someone more cracking-minded than you will identify something you missed.
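A whitelist check can be as small as one regular expression describing the acceptable form of an input. This username pattern is just an illustrative choice; the point is that anything outside the declared shape is rejected, rather than trying to enumerate every dangerous character:

```python
import re

# Whitelist: state exactly what a valid username looks like.
# Everything that doesn't match this shape is rejected outright.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def valid_username(s: str) -> bool:
    # fullmatch ensures the ENTIRE string matches, not just a prefix
    return USERNAME_RE.fullmatch(s) is not None
```

Note `fullmatch` rather than `match` or `search`: partial matches are a classic way whitelists quietly turn into blacklists.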

The only problem is, there ARE people named Bobby Tables [xkcd.com]. "Filtering" is the wrong approach; the program must be able to handle any input in a safe manner, no matter how scary it looks.
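This is exactly what parameterized queries buy you: the input is handled safely as data no matter what it contains, so Bobby Tables can keep his name. A sketch with SQLite placeholders (the table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# The classic "Bobby Tables" input.
name = "Robert'); DROP TABLE students;--"

# Placeholder binding: the driver treats the input purely as data,
# never as SQL, so no escaping or "filtering" is needed.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
conn.commit()
```

The table survives, and the scary-looking string is stored verbatim as an ordinary value.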

Slashdot is a good forum to ask this type of question. I'm sure you'll find a few individuals who work securing financial systems, but you are probably better off having a one-on-one with someone with some real experience. Data security methodology will likely be a hot topic this year among management and security researchers, so you should check for conferences in your area, or budget some time to take a weekend or two off. I doubt a book will teach you everything (often year-old information), so I strongl

However well you write your web app, if it's running anything important, it WILL get hacked. There's nothing on this planet that cannot get hacked if it is software. Even hardware can get hacked if it is running even read-only software. So assume it will get hacked, and design so that you minimize the damage when the app is hacked.

However well you write your web app, if it's running anything important, it WILL get hacked. There's nothing on this planet that cannot get hacked if it is software.

I disagree. You can mitigate many risks by using a proper language (memory safe) or web framework which handles user input and session/CSRF protection. You can mitigate all risks by properly using a theorem prover to guarantee your implementation has all the important properties to guarantee safety.

You'll never be 100% secure, but take a look at something like http://www.rapid7.com/products/vulnerability-management.jsp. It's the company that bought Metasploit (http://en.wikipedia.org/wiki/Metasploit_Project), a common penetration-testing framework. They have a free community version you can run against the server if you own it; if you don't own the server, you can check with the hosting company and see if it's OK to run, to verify everything is fine. Since it's for banking you should look at http://en.w

And use VBScript with activeX controls mixed with sql server 6.0 and make sure the clients all have to use IE 6.

Throw in a little ASP, not ASP.NET or anything bloated that checks the SQL against injections, and you will have one rock-solid platform that nothing will hack or intercept. Just ask any MCSE to secure it and you are good to go.

Be sure to check out all of the fine resources at http://www.owasp.org/ [www.owasp.org]. It's the Open Web Application Security Project. All materials, training, libraries, and content are free. There are numerous local chapters, so be sure to search for one in your local area.

That's the most important rule of thumb. Don't even trust your own client code.

Make definite security boundaries. Draw a circle, label it data. Draw a circle around that circle, label it prepared statements. Keep drawing circles, adding a layer for each security boundary.

Each layer needs to validate everything. Let each layer assume that the layer protecting it is missing, that it just does not exist. One common issue is having only the client-side code validate the user input. I love to modify client-side code to bypass validation just to see what breaks. If it's HTML, there are so many ways to do that.

While one can arguably say everything can be hacked (unless air-gapped), in certain scenarios you can at least mitigate the impact of a breach to make it almost irrelevant.

Easiest example is password storing. Some SQLi may get through and provide someone with a dump of your user passwords, but if you follow up to date recommended security practices [codahale.com], the data will be nearly useless.
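Those up-to-date practices boil down to a slow, salted key-derivation function rather than a plain hash. A sketch using Python's built-in scrypt; the cost parameters shown are illustrative and should be tuned to your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Derive a slow, salted digest; store both salt and digest."""
    salt = os.urandom(16)   # unique salt per password defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)   # illustrative cost parameters
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Even if a SQLi dump leaks every salt and digest, each candidate password costs the attacker a full scrypt evaluation per guess, which is the whole point.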

1) Multiple layers. Consider your application and the entire framework it exists in. Assume that each part is completely under the control of a hostile party. Now design the system so that the hostile party still can't do much harm. For example, start with the web server: assume it is hostile; how are you protecting the data? Go through the entire architecture this way and make sure you can contain any part under hostile control, even if the compromise goes undetected.

2) You probably want to be using capabilities, not permissions, i.e. X has the permission to do Y to Z, not X has the permission to do Y. That takes a ton of time to set up, and it is as much a jump in security as going from no passwords to passwords.

3) You want to use languages, servers, software that are security aware and designed. So for an obvious example you want to use web frameworks that taint check everything as a matter of course. You want a database that does the same thing (remember multiple layers).

4) You are going to want a full security implementation: a segmented network, the server in a DMZ with monitoring behind a firewall. You are going to want intrusion detection and vulnerability assessment.

5) If you are really serious, hire a white hat team to audit you and do multiple cycles.

And if your boss is serious I'd be happy to start discussing this professionally.

All input is suspect, all users are suspect, deny access to everything and go from there.

Nicely put. I agree. And possibly -- internal users are suspect as well.

I agree with you on money saving. Very few people want to pay for security, except for security companies and the military. Web companies aren't bad either because they have to deal with so many attacks. But in general most people want to claim security without the spending.

Nicely put. I agree. And possibly -- internal users are suspect as well.

I'm actually in the mindset that all users, including the administrators, are suspect. I get a lot of flak on that, but personally, I have yet to see a reason to give anyone complete direct access to raw data without making it difficult.

Also, for some reason, many devs don't like the whitelist method for validating input. Never really understood why.

Whitelisting is a pain in most languages and a lot of work. That's why I'd go with languages and/or frameworks that make it mandatory and easy. But fundamentally most developers don't like doing security; that's why it is often a good idea to have a security-focused team build things like input-validation functions, while the more application-oriented work is done by the regular developers.

If you can, split the application in two parts - the front end running on a world-facing web server and a back end on a private network. Use a well-defined, high-level protocol for communication between the two. If you can afford (literally, it's just a matter of throwing more hardware at the problem) some overhead, use a text-based serialization format with a solid, well-tested parser. The simpler, the better. Check every single request at the back end in every possible way, data

I was saying vulnerability assessment above. VA (IMHO) is very specific to the choice of tools. But I agree it is a critical part of the process. Humans can check some stuff in detail but they don't check 50,000 easy things.

Software engineering is fairly similar to structural engineering. Just as an architect does not truly understand how to create an indestructible building without first learning how buildings are destroyed, you can't possibly hope to create a secure software system without understanding how software is broken.

If you are serious about securing your software (without having a security expert on hand), you need to spend some time *breaking* software. http://www.hackthissite.org/ [hackthissite.org] has some fairly good tutorials, but you're also going to need to learn about buffer overruns, binary magic (such as never-ending zip files and over-sized jpegs), sql injection, malformed packets, firewalling, fail2ban, encryption (certificates at the very least), intranet isolation, air-gapping, client-securing, hardware securing (disabling USB ports), etc.

Basically, there is a reason security experts spend so much time in school and charge so much per hour. If this project is already in the blueprint stage and has a deadline, you should be looking to hire a security expert at the planning stage and for at least a few audit stages along the way. If this is more of a pet project, it could be a very good way to get yourself motivated to learn these subjects.

Software engineering is fairly similar to structural engineering. Just as an architect does not truly understand how to create an indestructible building without first learning how buildings are destroyed, you can't possibly hope to create a secure software system without understanding how software is broken.

Earthquakes don't adapt their attack strategies as well as hackers. Learning to hack will help you harden against the lamest attacks of five years ago. Every single buffer-overflow can be fixed by simply keeping up on patches. Every SQL Injection vulnerability can be fixed by using a proper database access layer.

I find it much more effective to first apply basic coding standards based on OWASP, then to think of every web page from a request-response perspective instead of from a user perspective. 90% of

I've already commented, otherwise I'd mod this up. I sometimes program or modify code in assembly language, and I have to say, cracking an app here or there has made me absolutely paranoid when writing. I still think in machine-execution terms when writing VB.NET. I consider what the CPU has to do to get the results I want.

Side effect, I was able to make a web service 30 plus times faster by changing maybe 4 lines of code. Because I know what the VM/Interpreter has to do, and I tell it to do the same thing a

I'm not very practiced in developing "hardened" web apps (mostly I've just worked with already written code that is secure), but:
Use as little JavaScript as possible (if you're planning to use web 2.0 AJAX-type stuff). It's almost laughably easy to change JavaScript after the webpage has loaded (Greasemonkey, for example). If you're super good at programming clean, secure C/C++ you might want to program your own webserver (servers like Apache are easier to use, yes, but they release security patches to them all the time, so they aren't THAT secure. A dedicated single program that only does a few things is likely to have fewer vulnerabilities).

Use as little JavaScript as possible (if you're planning to use web 2.0 AJAX-type stuff). It's almost laughably easy to change JavaScript after the webpage has loaded (Greasemonkey, for example).

That really has no impact, as long as you make sure the server side validates all actions to be sure they are correct and allowed. Javascript is great to enhance the experience, but it does nothing for security.

A friend of mine once had a router that used javascript based authentication that could be hacked using Greasemonkey. So don't do that, is sorta what I was trying to say. I suppose I could have said it better though....

If you're super good at programming clean, secure C/C++ you might want to program your own webserver (servers like Apache are easier to use, yes, but they release security patches to them all the time, so they aren't THAT secure. A dedicated single program that only does a few things is likely to have fewer vulnerabilities).

That's the worst idea I've ever heard. Apache and IIS both average less than ten discovered meaningful vulnerabilities per year. Making a secure HTTP server is way harder than you think it is.

Use SSL or some other form of secure transport (HTTPS). This will ensure (well, not ensure, but make it more difficult) that even if someone is able to snatch your users' packets (like if they are in a Starbucks or something), they will have to decrypt them before they get a token (by which time it will have expired).
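A sketch of the expiring-token bookkeeping that reasoning assumes. The TTL and in-memory store are hypothetical; a real app would persist tokens server-side and tie them to a session:

```python
import secrets
import time

TOKEN_TTL = 900     # hypothetical: tokens live for 15 minutes
_tokens = {}        # token -> expiry timestamp (toy in-memory store)

def issue_token(now=None):
    """Mint an unguessable token with a fixed lifetime."""
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(32)   # cryptographically random
    _tokens[token] = now + TOKEN_TTL
    return token

def token_valid(token, now=None):
    """A token is valid only if known and not yet expired."""
    now = time.time() if now is None else now
    return _tokens.get(token, 0.0) > now
```

The injectable `now=` exists only to make the expiry logic testable without waiting 15 minutes.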

Look up Cross-Site Request Forgery; it's a whole class of attack that makes the browser do the hard work for you. This is a great example of a case where using a framework instead of rolling your own is best. Most authentication frameworks had some CSRF protections in them before most web developers even knew it was a threat.

I can also speak from personal experience. A company I worked for had to pass a security audit in order to do business with the City of Houston government. It was a joke. We programmers all knew of glaring security holes, but the audit missed everything, and we passed with flying colors.

The moral of the story? Use common sense. Do the things that you know make a site more secure. Don't store plain-text passwords. Use stored procedures. Use SSL. Use the latest development tools. Somebody will still find a way around your security controls. But to keep your customers happy, get a security audit done. That will give them the peace of mind they want, and you the cover you need.

Nobody has created real rock-solid security--physical or digital--without spending truckloads of money.

1950s computer science used a model of "input/output/processing/storage" and it worked well for most projects, but it also kept programmers' minds on data flow. Find out how that data flow can be abused and prevent it. The simpler a system is, the fewer bugs it will tend to have.

Also don't use systems that want to load up hundreds of packages to do something simple. Software complexity is the root of all security issues.

You can start by reading up on what's frequently been the vector for system break-ins elsewhere, and avoiding those mistakes. For instance:

- Don't write in php, and especially don't rely on a php-driven framework. While no language is perfect, serious php exploits still appear with alarming regularity (as compared to, say, perl or python). Also be sure to disable php on your server if possible (e.g. just redefine all php-related extensions to be treated as text/html).

Anyone who has actually worked for a time in web development will tell you that tight schedules, shoestring budgets and large sites, with many moving parts, all conspire to ensure that security holes exist in just about every non-trivial web app of any meaningful size. It's not that we're unaware of SQL injection, validation of inputs or even cross site scripting, we just don't have time to check and test everything and all possible interactions before we have to move on to something else (thank you MBAs).

Seriously, the way banks do things should absolutely never be a model for security. Run BSD (not Windows, not Mac, and not Linux). Find the smallest open source web server that can do what you need (but absolutely not Apache), and review the source and the history of bugs and exploits. Or just write your own. Avoid languages with lots of modules if you can, and certainly avoid those modules. As much as you can, you need to be writing all the code yourself.

First, make sure your software is written correctly. That is, use Java with a lot of unit testing and good coverage using something like Cobertura.

Now, given a correct program, the only way it can fail is if it accepts bad input from the user. This means you need to write your program to validate its inputs as early as possible. If the input is clean, your program will not fail except for something out of its control, such as a resource failure (bad network, hardware failure, etc.). The hard thing is the validation itself.
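A minimal sketch of that "validate at the boundary" idea in Python. The validator names and rules here are hypothetical; the pattern is what matters: every raw input either comes back as a clean, typed value or raises, so nothing unvalidated gets deeper into the program.

```python
import re

# Hypothetical boundary-layer validators: each either returns a clean,
# typed value or raises ValueError -- callers never see raw input.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def parse_username(raw: str) -> str:
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def parse_quantity(raw: str) -> int:
    value = int(raw)  # raises ValueError on anything non-integer
    if not 1 <= value <= 999:
        raise ValueError("quantity out of range")
    return value
```

Rejecting by whitelist ("only these characters, only this range") rather than trying to enumerate bad inputs is the part people most often get backwards.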

ModSecurity (or any other WAF) can greatly decrease the number and kinds of attacks that actually make it through to your application. And like a good firewall it can alert you when you're under attack. If you do nothing else, put this in place.

You also want to make sure your app is solid, so head on over to DISA and see what the military recommends. They have Security Technical Implementation Guides (STIGs) for just about everything in your architecture: http://iase.disa.mil/stigs/app_security/index.html [disa.mil]

Once you have things built, test! Use some of the open source penetration testing tools to see if there are any known vulnerabilities in your stack. Try it with and without your WAF in place.

Finally, if you really need to go the extra mile, it's time to shell out some cash for professional penetration testers. They'll have a tool belt full of open source and proprietary tools and the good ones will even do a static analysis of your code.

Sure, there are a gazillion lines of COBOL code out there in banking; it would take FOREVER to replace, though there are more modern banking systems that provide a solid starting point, and it could be done, and it might cost less than running IBM mainframes. But there's another great reason for COBOL.

COBOL is a crippled language. Unlike most other languages, where all the code is written in that language (or maybe with the help of one more), COBOL is insanely limited and is generally only used for processing. RPG-IV or something similar is used for terminal user interfaces; web-based systems can communicate with the COBOL back end, and COBOL then performs the transactions required. COBOL itself doesn't even store the data. Even though COBOL has a crippled internal ISAM and can sometimes use SQL, more often than not it is linked to a DB2 database, which is more of a large-scale ISAM than a SQL server. Think of it as the data store used by an SQL server. Records are searched for using archaic methods based on classic database structures, more like how it was done with dBase, FoxPro or Clipper.

While I don't recommend writing code in COBOL... you should consider the mainframe model of development.

1) The web interface DOES NOT query data directly from the data store. Instead, it requests data from a broker on another server.

2) The other server is connected using a non-IP protocol. The only method of requesting data from the other server is through a simpler interface which is fully known. You can't ask for credit card numbers for example. This can be accomplished using Ethernet, but avoid using IP. It's far too big and hard to understand. Using Infiniband as a link is great because you can use MPI over Infiniband to send a message asking for data and then wait for a result. This sounds more expensive than it is... but Infiniband is pretty damn cheap until you need fabric switches. Point to point is pretty easy on the wallet. The key is, any machine accessing data on the database should not under any circumstance be able to define its own query. Add a new function with parameters for each query. It must be explicit.

2a) Ultimate paranoia. Critical information such as credit cards and social security numbers exist on a separate payment processing machine hidden behind that machine. All communication with that machine is performed over a dedicated data link such as a high speed serial port (you can get them in megabit+ speeds these days) and all queries performed on that machine will be 100% explicit and will guarantee that there is no possible means of requesting anything other than the last 4 digits of either number. Transactions are sent to that machine, it performs the transaction and responds back with "Yes or no!". If a single user has multiple credit cards... there can be a query function which returns the card type and the last 4 digits only.

3) Just like an IBM mainframe: NO C/C++ code compiled to native. This is simply because C and C++ lack proper memory management for secure systems. People still use raw pointers when they should use classes, and there's no run-time checking on these things. Instead, run inside a VM-type environment. I recommend using a web server running on top of Mono or on top of Java, not a web server running Mono or Java inside of it. This way, there's a much higher amount of security involved. Yes, this can still go wrong, but the chances are much lower. Make it a requirement that all classes and functions used in the system are compiled for the virtual machine and DO NOT call out of the VM, unless there is no alternative.

4) Port 443 is the only open port on the systems... no exception. If you run netstat -a there should be absolutely no ports listening other than HTTPS.

5) Hardware-based load-balancing proxy server. These are expensive-ish, but get a hardware-based proxy that load-balances by proxying HTTPS requests across your web servers.
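The core of points 2 and 2a above can be sketched in a few lines. This is a toy model, not real payment code: the data and function names are hypothetical, and a dict stands in for the back-end store. What it illustrates is the design rule that the web tier can only call named operations; there is no generic "run this query" entry point, so even a fully compromised front end cannot ask for full card numbers.

```python
# Hypothetical back-end data, keyed by user id. In the real design this
# lives on the isolated payment machine, never on the web tier.
_CARDS = {
    "u1001": [("VISA", "4111111111111234"), ("AMEX", "371234567891111")],
}

def list_cards(user_id: str) -> list[tuple[str, str]]:
    # Returns card type and last four digits only -- by construction,
    # no caller of this interface can ever see a full number.
    return [(kind, number[-4:]) for kind, number in _CARDS.get(user_id, [])]

def charge(user_id: str, card_index: int, cents: int) -> bool:
    # The payment machine answers only "yes or no" -- it never echoes
    # card data back across the link.
    cards = _CARDS.get(user_id, [])
    return 0 <= card_index < len(cards) and cents > 0
```

Each new query is a new explicit function with fixed parameters; the broker's whole API surface is enumerable and auditable.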

That's untrue. You can assume the worst and protect your application by following secure coding checklists, code reviews and static analysis. You don't need some sort of reformed hacker on your team in order to be effective.

The OP never claimed you needed a reformed hacker to be effective, merely that you need to think like an attacker. That's certainly not following a bunch of check lists, static analysis, some code review, and calling it a day. Those techniques are helpful, but they're only a piece of the puzzle (though I'd be willing to argue that a check list mentality is likely counter-productive).

To create effective security you need to understand the attack vectors, and what you're securing. Code is only part of security. Your own code can be completely secure, but you can get owned by a 3rd party library or framework. All that crap can be secure, but you get owned by someone tricking a secretary into opening up an Excel spreadsheet with a zero-day Flash exploit. Security is an entire discipline, and it shouldn't be swept away into a few simple rules and procedures to follow.

Your own code can be completely secure, but you can get owned by a 3rd party library or framework.

Or by not updating the OS your server is running. There's no point in spending time, effort, and money coding an incredibly secure website backend if you're running it on an OS that's susceptible to a 6-month-old remote exploit.

That's untrue. You can assume the worst and protect your application by following secure coding checklists, code reviews and static analysis. You don't need some sort of reformed hacker on your team in order to be effective.

Yeah. All You Have To Do Is... <sarcasm>

Expect to be cracked. Unless your system is too boring and valueless to attract attackers, it will be targeted. If it IS boring and valueless expect to get targeted by automated attackers anyway. Buy insurance. Both the money kind and the technical kind (good offline backups, etc.).

Expect your costs to go way up. Secure and "Git 'R Dun!" don't go together any more than "welding" stainless steel pipes with duct tape does. It's going to take longer and cost more. In addition to the development cost increases, there will be maintenance cost increases, plus you should be running more tests, audits and so forth.

Once you've adjusted your expectations, look at your tools and platforms. Windows isn't quite the security nightmare that it used to be, but I still prefer to avoid it as the OS for critical servers. Likewise, certain platforms have less-than-sterling security records (RoR, PHP, for example) where others are designed from the ground up with security in mind (Java Enterprise Edition). An "insecure" platform used in a secure way trumps a secure platform used in an insecure way, but overall, it's a good idea to start with a secure basis.

DON'T create your own security system. Most of the DIY systems I've seen couldn't withstand 5 minutes with a 5th grader. Unless you are a full-time security professional, you'll end up with a ton of exploit points; new team members and contractors won't understand it or remember to apply it properly to new or changed code; and most DIY systems I've seen required integration into application code, meaning that changes to security can, and often do, break the business logic. J2EE has a built-in wraparound security system designed to fend off attackers before the application code is invoked, requiring minimal security code in the application logic. Most insecure J2EE systems I've seen ignored this capability in favor of DIY login code. I blame too many J2EE books that use "login screens" to illustrate application programming.

DO adhere to best practices. There's no excuse for SQL injection exploits or exploits that came from trusting that what comes back from a client is what's supposed to come back from a client. Likewise, don't keep passwords in clear text in your databases. And never, ever expect that the only way people will access secured resources is by following a predetermined pathway unless you're demonstrably certain that they had no other way.
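"No excuse for SQL injection" means, concretely, parameterized queries. A minimal sketch using Python's built-in sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_role(name: str):
    # The driver binds the value separately from the SQL text, so input
    # like "' OR '1'='1" is treated as inert data, never as SQL.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

The same placeholder discipline applies whatever your database or driver; string-concatenating user input into SQL is the one habit to ban outright.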

DO read up on the literature. There's plenty out there on hardening networks, servers and systems. There's a whole genre of books on secure Java design and programming.

Keep up to date. New exploits show up all the time. Likewise, keep your software security up to date. Test early and often.

I never understand why any web page, other than something like Google search, would want to accept data as part of the URL for any meaningful interaction.

Sure it's bookmark friendly but:

(1) GET contents are logged by default and are pen/trap-eligible in post-facto and blind ("fishing") legal actions; POST contents are not.

(2) GET is the verb used in things like IMG SRC="" requests; if all GET requests are incapable of incidental write operations, then that whole category of cross-site request forgery attack is rendered moot.

(3) Because of item 1, the contents of your web server log(s) are, by default, promoted from a stream of tidbits to a first-tier security risk in need of secure archiving. [If you have followed good practices and separated your database machine and your web server onto separate platforms, for instance, then a compromise of your web server of the classic sort will net very little at all if all your logs say is "IP X.X.X.X GET http://site.tld/someform [site.tld]" and "IP X.X.X.X POST http://site.tld/somerequest [site.tld]". If action and identification information are passed around in your GET(s), then an attacker can learn that IP address A.B.C.D is the home of USER=Bob, and so forth.]

Basically, if people had _honored_ the designation of everything after the question mark ('?') as a _query_ _string_ in the HTTP specification, but not carried the SQL-burdened definition of "query" into the issue, a lot of web pain could have been avoided.

Yeah, it might not be as bookmark-friendly, but when is it ever smart to bookmark the POST of a filled-in form?
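The rule being argued here, GET must never write, can be enforced mechanically at the dispatch layer. A toy sketch (action names and the dispatcher shape are hypothetical):

```python
# Hypothetical set of state-changing actions in some web app.
WRITE_ACTIONS = {"transfer", "delete_account", "change_password"}

def dispatch(method: str, action: str) -> str:
    if action in WRITE_ACTIONS and method != "POST":
        # An <img src="..."> or a logged/replayed URL can only issue a
        # GET, so refusing writes on GET kills that attack class.
        return "405 Method Not Allowed"
    return "200 OK"
```

Most frameworks let you declare this per-route; the point is that the check belongs in one central place, not scattered through handlers.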

[B:] learn what a _real_ DMZ is (e.g. two routers with the public machines between them, the internet behind one end and the intranet behind the other, with very intense restrictions on what traffic can pass from the DMZ into and through the internet end, and _both_ routers configured to _distrust_ _all_ connection attempts originating from the DMZ machines). Then implement this arrangement correctly. There are a bunch of rules for doing this right, and if you follow them your web service machine will be "stuck" in a deep warm hole of safety, in that what it can do will be greatly limited, which is at least as important as making sure that what can be done to it is limited. Most exploits require more than one path to the machine, for instance tricking the web server into "calling you back" with a telnet session or an FTP or SCP of bulk data. If the web server can only pass traffic from the one port (port 80, etc.) off of the DMZ, then even a successful compromise of the machine may be stopped from having any net effect.

[C:] Every machine in the DMZ is allowed to do exactly one thing. e.g. don't build a LAMP machine, build a LAP and a separate LM machine and place them very close together. This sort of separation can even be done with virtual machines. Just so long as the machines cannot peek at one another's storage etc.

This is not mainstream wisdom, but it is out there if you look for it. (e.g. I didn't make all this stuff up myself. 8-)

There are lots of things that are easy, but not always cheap, to do that could make the world much safer.

They just aren't in the five-days-to-your-web-presence quick-start guides to web servers.

"Reformed hackers" reminded me of Kevin Mitnick and something that we sorely lack as a security measure: social engineering countermeasures.

I honestly think adding in stuff like challenge/response prompts would do pretty well at greatly reducing the possibility of social engineering aimed at infiltration. It works for the military...
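One simple form of challenge/response can be sketched with nothing but the standard library's hmac module. This is a hypothetical illustration of the mechanism, not a vetted protocol: the verifier issues a random nonce, and the caller must return HMAC(shared_secret, nonce). A caller who doesn't hold the secret, like someone on the phone claiming to be from IT, cannot produce a valid answer.

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    # A fresh random nonce per attempt prevents replaying old answers.
    return os.urandom(16)

def respond(secret: bytes, challenge: bytes) -> str:
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, answer: str) -> bool:
    expected = respond(secret, challenge)
    return hmac.compare_digest(expected, answer)
```

The shared secret never crosses the channel; only challenge and response do, which is exactly what makes it resistant to eavesdropping and impersonation.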

Any decent programmer should be able to write a secure program. Read your input, reject it if it's not what you want.

That's true as far as it goes, but there are vulnerabilities in the language's collection of input, in the webserver's collecting of data and parsing of packets, in the network system layers below that, even, sometimes, in CPU instruction sets. And then there's social engineering, human error (just because you "can" write a secure program doesn't mean you *did*) and of course physical access is the nastiest risk of all.

It's really not as simple as we would like it to be. Unfortunate, but true.

Having worked on a banking website I can assure you that the requirements of a secure site include secure use by the average idiot. Of course the complete idiot will always defeat you. (Someone who has a smart phone configured as their SMS out of band authentication for their bank, the website bookmarked, and their password in an unencrypted file called "passwords", no lock on the phone which they then leave in the pub)

That's fairly naive in web terms. For example, the application may carefully check that an incoming string is valid for what it expects, but fail to correctly encode it on output and create a cross-site-scripting vulnerability (for example, if the input contained a script element). There's also a lot to check; for a number, it's not too hard: you check that the input is an integer/decimal as appropriate, and do a range check if relevant. For a string it gets harder; a length check is obvious, but what about checking the character set? It turns out just finding out what the character set of an incoming string _is_ is difficult (blame IE): http://www.crazysquirrel.com/computing/general/form-encoding.jspx [crazysquirrel.com]
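The input-vs-output distinction above is worth a concrete sketch. Output encoding neutralizes markup even when something slipped past input validation (the rendering function here is hypothetical; Python's html.escape does the encoding):

```python
import html

def render_comment(user_text: str) -> str:
    # Escape at output time: <, >, &, and quotes become entities, so
    # whatever survived input checks still cannot execute as markup.
    return "<p>" + html.escape(user_text) + "</p>"
```

Real template engines (Jinja2, JSP with JSTL, etc.) do this automatically; the bugs come from the places where auto-escaping is turned off or bypassed.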

Then you get cases such as CSRF (cross-site request forgery) attacks ( http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org] ), where the user is fooled into clicking a link that sends a request to the web site. If they're logged in, the browser will typically send the appropriate cookies, meaning that from the server's point of view the user has sent an entirely valid request.

OTOH, to say "If you don't know, you can't do it", is hopelessly defeatist. I would not start with a security-critical web application any more than I would start with any other security-critical application, but you can learn this stuff. Alas, it does take time...

Any decent programmer should be able to write a secure program. Read your input, reject it if it's not what you want.

Writing a secure program is relatively easy. Building a secure system is difficult. This is largely because any system that performs any non-trivial task in this day and age will necessarily entail running large amounts of code written by someone else.

You don't know how many times I've told people that. They're usually the same people who say "How could they have done it?", and then I have to break out years old writeups of the exploit.

Case in point, SQL injection. I was talking to some web programmers who apparently have worked in a bubble, and learned everything from the books, but glossed over the part about "never trust user input". They didn't get it. I demonstrated a SQL injection against their code. Then they were willing to listen.

Too many programmers see user input as being trustworthy. Back in the day, it was as simple as "don't allow ` or ; in lines you send to a system calls". People even screwed that up. Then it was "don't do system calls, do everything natively in your code". People ignored that. Then it became "never trust user input", and "sanitize any user provided data". It's sad but true, they still don't care.
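The kind of demonstration described above fits in a dozen lines. This sketch (schema and data invented for illustration) shows the vulnerable string-building pattern next to the parameterized one, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, secret TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

def lookup_unsafe(user: str):
    # VULNERABLE: user text is spliced into the SQL itself, so
    # "' OR '1'='1" rewrites the WHERE clause and dumps every row.
    sql = "SELECT secret FROM accounts WHERE user = '%s'" % user
    return [row[0] for row in conn.execute(sql)]

def lookup_safe(user: str):
    # The same query with a bound parameter: the payload matches nothing.
    return [row[0] for row in conn.execute(
        "SELECT secret FROM accounts WHERE user = ?", (user,))]
```

Running both against the classic `' OR '1'='1` payload makes the point better than any lecture: the unsafe version returns every secret in the table, the safe one returns nothing.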

I've introduced people to hacking tools and methodologies. It's not so they can hack. I "encourage" them to try to hack their own code. Code it right the first time. Then attack it to prove that it is right. And keep trying to break into it. Learn better techniques, and teach me something. I don't mind in the least if a coworker can show me that I'm wrong. It's worse if a malicious 3rd party does.

There's no excuse for someone not to know and use the same tools that attackers use, to defend themselves.

Can you answer his question? Do you have any advice on how to harden the web site/service? Any books you know he can read? Any web sites or software good for this? NO? THEN SHUT THE FUCK UP! For once, just answer the question and skip the fucking sermons.