DMandPenfold writes "NASDAQ's aging software and out of date security patches played a key part in the stock exchange being hacked last year, according to the reported preliminary results of an FBI investigation. Forensic investigators found some PCs and servers with out-of-date software and uninstalled security patches, Reuters reported, including Microsoft Windows Server 2003. The stock exchange had also incorrectly configured some of its firewalls. NASDAQ, which prides itself on running some of the fastest client-facing systems in the financial world, does have a generally sound PC and network architecture, the FBI reportedly found. But sources close to the investigation told Reuters that NASDAQ had been an 'easy target' because of the specific security problems found. Investigators had apparently expressed surprise that the stock exchange had not been more vigilant."

In an alternate universe, software wouldn't be released before it's done and bug-free, and wouldn't need updates other than to add functionality.

Software quality being what it is today, there are only two choices:

If you don't want to patch all the time, disconnect from the network so that you have a stand-alone installation (or only use it on a very strictly managed local network).

If internet-facing: patch, patch, patch, so that you have current software with known holes fixed. In this respect, *nix or Windows doesn't make much difference; the important thing is that it's kept up to date.

There does seem to be a pretty large difference in the time between exploit and patch on the two platforms, though. You can have Windows exploits go unpatched for months, although occasionally there is a workaround to mitigate the risk.

How are you going to guarantee that your software is bug-free? That's like trying to prove that God exists.

Software complexity being what it is today, it's very difficult to make sure that a system is bug free. Even if you didn't rely on other people's libraries, it would be very difficult to do anything non-trivial without introducing some kind of unanticipated behaviour.

Back to the summary, what's wrong with running Windows Server 2003 if it's still getting security updates? Wouldn't it be more likely to be secure than a newer version of Windows Server, which has new features that haven't had as much time to mature?

Even more so, since Win2003 doesn't have IPv6 enabled by default. IPv6 software stacks have not been around that long, and many security flaws have been found (not in IPv6 itself, but in IPv6 implementations). Even OpenBSD was caught by an IPv6 flaw.

Even if you prove that the code matches the proof - how do you know that the proof itself doesn't contain some false assumption? That you haven't misunderstood the problem that you're trying to solve? You then have to prove the proof. And so on.

The proof is about the code and shows that it follows a specification. Both are written in a precise formal language, though the specification less so. If you don't trust the specification, read over it yourself. The proof itself is in a formal language, and the rules for manipulating it are clear. There are several formal systems, none of which have more than a handful of axioms, and the known systems are mathematically equivalent in what they can describe.
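
As a toy illustration of the idea (nothing to do with NASDAQ's actual systems), here is a minimal sketch in Lean 4, assuming a recent toolchain where the omega tactic is available: maxNat is the "code", the theorem statements are the "specification", and the proofs are the machine-checked argument that the code meets it.

-- "Code": a small function we want to verify.
def maxNat (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- "Specification": the result is at least as large as either input.
-- `split` cases on the `if`; `omega` discharges the resulting
-- linear-arithmetic goals over Nat.
theorem maxNat_ge_left (a b : Nat) : a ≤ maxNat a b := by
  unfold maxNat; split <;> omega

theorem maxNat_ge_right (a b : Nat) : b ≤ maxNat a b := by
  unfold maxNat; split <;> omega

Even in this tiny case the parent's worry applies: if the specification itself is wrong or incomplete, the proofs faithfully prove the wrong thing.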

More concerning is the poor firewall configuration. Badly patched servers can be put down to laziness, or unwillingness to fully regression test servers running bespoke software. Badly configured firewalls can only indicate incompetence.
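
For what "correctly configured" usually means in practice, here is a minimal, hypothetical sketch (in Python, just to make the semantics concrete; this is not any real firewall's API) of first-match rule evaluation with a default-deny fallback. The classic misconfigurations are leaving the default at allow, or leaving stale "allow any" rules above the deny.

# Hypothetical illustration of first-match firewall semantics with default deny.
# Rules are (action, source_prefix, dest_port) tuples; None means "any port".
from ipaddress import ip_address, ip_network

RULES = [
    ("allow", "10.0.0.0/8",     443),   # internal clients to the web front end
    ("allow", "192.168.5.0/24",  22),   # admin subnet to SSH
    ("deny",  "0.0.0.0/0",     None),   # explicit default deny
]

def decide(src: str, port: int) -> str:
    """Return the action of the first matching rule, denying by default."""
    for action, prefix, rule_port in RULES:
        if ip_address(src) in ip_network(prefix) and rule_port in (None, port):
            return action
    return "deny"  # fail closed even if the explicit catch-all rule is missing

print(decide("10.1.2.3", 443))     # allow
print(decide("203.0.113.9", 22))   # deny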

It's all about marketing. MS Windows has plenty of speed if you are willing to put the right hardware behind it, and the brochure advertising their platform only mentions that their system has the lowest latency when processing stock data, not the total cost.

I see this all the time at my job. The third-tier support are click monkeys. They don't know how to actually manage a domain, but they know how to type and click. So some company makes a GUI for managing users, policies, etc. that's as simple as "red light/green light". When we ask them what actually was changed, they have no clue.

It's cheaper to hire click monkeys than to actually hire a Windows domain engineer, so they figure the cost-benefit works out in their favor.

They run both. The actual trading system (as I recall) runs some form of heavily modified real-time Linux, because the high-speed traders demand crazily fast speeds: they are trading at the microsecond level now, and growing frustrated by the time it takes for a signal to go down an Ethernet cable. The Windows servers will be for things like the frontend interface used by the less-high-speed traders.

That if they're not responsible for $bazillions in transactions and the NASDAQ is, then perhaps NASDAQ ought to be hiring somebody who knows what they're doing. The fact that there's a large volume of transactions only makes it more important that the machines be properly maintained and patched. What's more, it's not like they can't take them down over the weekend.

I love reading quotes like that... You made me laugh. Financial services are a different beast and there are a couple of risks/issues that have to be understood that make patching very difficult...

First, you can't say the "IT department" as if it's a simple thing that can make decisions. The IT department consists of several hundred to several thousand individuals working in different groups with different requirements. Looking at the org chart, most commonly the CTO or COO is the person that links the syste

Now throw in a team dedicated to information security and you get additional opinions on how to do patching. It's next to impossible to put 10 people in a room and get a decision, and these conversations go on for years.

If that's the case, they're not a team dedicated to information security, they're dedicated to having easy jobs and like to call themselves 'information security professionals'.

It's a culture issue on the concept of server up-time vs service up-time.

I developed the patch management process that is used on the servers of one of the largest trading companies in the world. I got started on this after hearing one of the server admins brag about an up-time of over five years. What he was really saying was that he hadn't patched his servers in over five years. Unless you're running a mainframe or certain flavors of Linux, a reboot is required for many patches.

When one of those servers goes down, the cost is measured in millions of dollars per minute. The culture took it as a matter of pride to make sure that never happened. The perceived best way to do that was to avoid anything that could affect server up-time. Since patching necessarily involved rebooting the server, it simply wasn't done.

Changing this culture was a half-year-long internal political fight that boiled down to a single thing: I posited that server up-time should no longer be tracked as a metric and should instead be replaced with service up-time.
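
To make the distinction concrete, a minimal sketch (hypothetical numbers, not the actual process): server up-time only notices whether a given box stayed up, while service up-time is measured from the client's side against the service endpoint, regardless of which server answered.

# Hypothetical one-day window of one-minute samples: 1 = up/healthy, 0 = down.
server_up = {
    "trade-01": [1] * 1435 + [0] * 5,   # rebooted for patching: 5 minutes down
    "trade-02": [1] * 1440,             # stayed up the whole day
}
service_ok = [1] * 1440                 # clients never noticed: traffic failed over

def pct(samples):
    return 100.0 * sum(samples) / len(samples)

for name, samples in server_up.items():
    print(f"{name} server up-time: {pct(samples):.2f}%")
print(f"service up-time:         {pct(service_ok):.2f}%")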

During that half-year period I developed the process (working with a lot of other teams) for patching these servers without affecting service up-time. Doing so involved creating an SLA that defined server maintenance windows at specific times. It also explicitly defined that having a server unavailable during those maintenance windows would not affect service availability.
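
Mechanically, keeping service up-time intact usually comes down to draining one server at a time behind the load balancer inside the agreed window. A rough sketch of that loop, assuming RHEL-style hosts and an SSH-reachable fleet; lb-drain and lb-enable are made-up placeholders for whatever the real load-balancer drain commands are, not the actual process described above.

import subprocess
import time

SERVERS = ["trade-01", "trade-02", "trade-03"]   # hypothetical pool behind a load balancer

def run(host, cmd):
    """Run a command on a host over SSH; raise if it fails."""
    subprocess.run(["ssh", host, cmd], check=True)

def healthy(host, retries=30):
    """Poll a hypothetical local health-check endpoint until the service answers."""
    for _ in range(retries):
        if subprocess.run(["ssh", host, "curl -fsS http://localhost:8080/health"]).returncode == 0:
            return True
        time.sleep(10)
    return False

for host in SERVERS:                       # one server at a time, so the service stays up
    run(host, "lb-drain")                  # placeholder: take this host out of rotation
    run(host, "yum -y update")             # apply pending patches inside the window
    subprocess.run(["ssh", host, "reboot"])  # connection drops on reboot; don't check the exit code
    time.sleep(120)                        # give the host time to come back
    if not healthy(host):
        raise SystemExit(f"{host} failed its post-patch health check; stopping the rollout")
    run(host, "lb-enable")                 # placeholder: back into rotation before the next host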

Ultimately the culture was so entrenched that it literally took upper management handing down orders from on high that server up-time was no longer allowed to be tracked as a metric. In the end we were patching our servers on a routine basis and doing so without impacting service availability.

Excellent point, and a practice I've already seen at my current job (tracking service availability instead of server uptime--in fact, since I started, we've tracked nothing but service availability).

That said, this has led us down the path of constantly increasing availability requirements, for things as (relatively) insignificant as an internal company blog. We're currently doing work between two new data centers, and one of the goals is to provide near 100% availability of all systems. It becomes very