Posted
by
michael
on Thursday August 21, 2003 @11:00AM
from the woodpecker-that-brought-down-civilization dept.

stieglmant writes "For everyone who thought the 'blackout of 2003' was bad, how about this: according to an article at SecurityFocus, and another article at The Register, 'The Slammer worm penetrated a private computer network at Ohio's Davis-Besse nuclear power plant in January and disabled a safety monitoring system for nearly five hours.'" Russell writes "Maryland MARC Train Service was shut down most of Wednesday morning due to what sounds like the MS-Blast worm or one of its variants. The local Baltimore news reports that the cause was a signal malfunction, but CSX, whose communications system runs the tracks, has an article describing the shutdown as a result of 'a worm virus similar to those that have infected the systems of other major companies and agencies in recent days'. This indicates that the network that the train signaling stations are on is not protected by firewalls, at least ones that block ports 135 and 445, where the DCOM vulnerability is attacked. Wow, taken to the extreme, the exploitation of their systems could have caused a train collision and injury or death to hundreds of Maryland and Virginia commuters."

I was under the impression that Microsoft didn't encourage the use of its products in applications such as these. We are talking about systems that cannot fail; if they do, people could die.

I thought Microsoft had the sense to actually say 'this is not what our product is for - get something custom'. If I worked at Microsoft, the last place I'd want our 'it-does-everything' operating system would be managing the safety systems at a nuclear plant.

Does anyone know if Microsoft actually encourages this type of deployment? If they don't, what moron decided to use it?

I didn't actually see anything in those articles that said it was MS systems that were running the safety at the nuclear plant. All I could see is that the bandwidth had dropped due to the slammer worm and that a display monitor was disabled due to multiple scan attempts. This tells me that there were MS systems that were affected on their network segment, but it never says that the safety systems themselves were MS systems.

Rules of IT:
1) Do not place a vulnerable system on a critical network unless absolutely necessary.
2) When configuring a computer/server, always assume that you are hooking up to a hostile, unfiltered network.

If they'd applied these two rules to their network, routers, servers, etc., this likely wouldn't have happened. These are pretty basic ideas, folks. If you have a Windows box on the same network as a computer controlling nuclear safety checks, you better have a damn good reason and you better check for patches weekly.

"Doesn't encourage" is a happy dream of MS's. They want 100% market penetration, but they also think they can get away without taking on the responsibility that implies.

They're "encouraging" everyone to use MS products exclusively, everywhere. When it gets to the point where everything is Microsoft and nobody knows anything else (which is what Microsoft is shooting for), how are they going to deny responsibility for stuff like this?

This might be compared to a concrete manufacturer coercing the market, becoming the sole supplier of concrete, but all along saying something like "you shouldn't use our product for pre-stressed bridge segments." Once they became the sole supplier for concrete, what the hell else are people who want to build bridges supposed to do?

Can a supplier reasonably be excused for making crappy product which kills someone because they said to use some other product, even though they themselves were the ones who drove all the other products out of the marketplace?

I've worked with VITAL control systems - train brake systems, landing gear, flight recorders, etc. - and those systems are in a completely different space than PCs (or Suns, or IBMs, etc.). You're more likely to find Verdix Ada than you are MS C++ or any Java implementation. The likes of Sun, IBM, and Microsoft never even bid on the control systems I worked on.

Having said that, while the commercial PC vendors like MS and Sun keep a far distance from the control side (and rightly so), they definitely bid on the monitor boxes. That SCADA system may well be running a custom RTK, but the console the operator back at base has in front of him could well be an XP system.

I've never used MS-based front ends myself, but I've written interfaces to OS/2-based consoles that talked to my onboard stuff, and I can't see any reason why a Win2K or XP front end would be any more or less contentious than an OS/2 one.

The problem is not the SCADA or braking system itself; it's the remote monitoring station. Often, those things are connected to the net to sync the atomic clocks, and sometimes for remote logging purposes. If *those* get compromised, the control systems may be affected, but they are not compromised. Which is to say, it's a major fscking PITA, but the brake system will still work on the train without remote intervention or monitoring; it's just not going to start again after it stops.

At Dungeness B nuclear power station in the UK they still run the reactor control systems with BBC B computers. The reason is that the operating system and control code is so small (ca. 32KB) that the engineers have gone through it by hand and manually checked every possible scenario.

A complete flow chart exists that details all errors that can occur in the code and what the solutions are. Try doing that with Microsoft Windows or Linux. Sometimes the simple solutions are the best.

This doesn't surprise me in the slightest, and it's not as bad as it sounds, either.

8-bit processors still dominate the CPU market in terms of volume, and very nearly in terms of profitability. They are virtually never used as general-purpose computers anymore, but due to low cost of development, deployment and testing, they are ubiquitous in the control systems industry.

Companies like Atmel and Microchip are constantly devising new and better 8-bit microcontroller chips for this market. A lot of them are available in hardened grades for just these uses. A modern one will often bundle the entire machine onto a single chip, with as much I/O and analog interfacing as you could ask for.

Reading the ENTIRE assembly dump of a 32K program is rather simple. A team of a dozen engineers can verify it in a matter of a couple months (I mean formal verification here, like you would do for a truly critical system, not just "give it a look over").

While actually using a BBC Micro is a little obsolescent, the principles that led them to do so are sound.
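The hand-verification approach scales down to a demo. Here's a toy sketch in Python - with an invented two-bit controller standing in for the real 32K ROM - of exhaustively enumerating every state/input pair and checking a safety property over all of them, the same idea as that hand-checked flow chart, in miniature:

```python
from itertools import product

# Hypothetical controller: four states, one alarm input. The safety
# property we want: whenever the alarm input is high, the next state
# must be the shutdown state (3), no matter where we started.
def next_state(state, alarm):
    return 3 if alarm else min(state + 1, 2)

# Enumerate the ENTIRE state space - all (state, input) pairs - and
# collect any pair that violates the property.
violations = [(s, a) for s, a in product(range(4), [0, 1])
              if a == 1 and next_state(s, a) != 3]
print(violations)  # [] -- "always shuts down on alarm" holds everywhere
```

For a program small enough, this kind of exhaustive check is feasible by hand; for anything the size of Windows or Linux, the state space makes it hopeless.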

Yes, and you'll find a lot of crap written by people who repeat a web myth. But from people who were on the ship at the time, or who actually wrote the software involved, we get a different story: WinNT was not at fault. The truth is that a server app corrupted its data, a client app tried to use that bad data, and the client app failed to control equipment. That can happen with any OS. Add to this the fact that the ship was a test platform, not an operational ship, and that they were trying to break things.

"Others insist that NT was not the culprit. According to Lieutenant Commander Roderick Fraser, who was the chief engineer on board the ship at the time of the incident, the fault was with certain applications that were developed by CAE Electronics in Leesburg, Va. As Harvey McKelvey, former director of navy programs for CAE, admits, "If you want to put a stick in anybody's eye, it should be in ours." But McKelvey adds that the crash would not have happened if the navy had been using a production version of the CAE software, which he asserts has safeguards to prevent the type of failure that occurred."

http://www.sciam.com/1998/1198issue/1198techbus2.html

"McKelvey writes that the failure, "was not the result of any system software or design deficiency but rather a decision to allow the ship to manipulate the software to stimulate [sic] machinery casualties for training purposes and the 'tuning' of propulsion machinery operating parameters. In the usual shipboard installation, this capability is not allowed.""

The question was whether MS use was encouraged in life-critical systems. I would consider a Navy ship's control system life-critical. The answer is yes, end of story.

Whether it was MS's fault or the app's fault that the ship was dead in the water was not part of this discussion. In fact, everything I've read said that this was an unhandled floating point exception, which is of course the problem of the application, not the OS.

Enterprise/Mission-critical/Life-critical systems should not be doing floating point operations period. They introduce too many errors and inaccuracies. If you think you need floats, try adjusting your units.

What a blanket statement. So it's impossible (or too difficult) to use floating point numbers correctly? You know this... how?

IANAM(athematician), but....

Using floats introduces inaccuracy because of rounding, and because of the fundamental limit on how many digits a float can represent on a computer. For some applications, the number of significant digits available simply isn't accurate enough.

It is fairly common to represent units as integers, either by using smaller units or by representing a decimal number as a scaled integer in the program and using integer math for all the calculations. This way you do not lose digits or introduce unnecessary rounding.

The funny thing is, I remember reading about this technique being used in DOOM, because for that application the inaccuracy of floating point was unacceptable and performance was unacceptably degraded by the floating point processors of the day. Now that we have multi-GHz CPUs, more video RAM than we know what to do with, and dedicated video processors, I regularly hear about floating point performance being important, which implies to me that floats are being used in games now.

However, I would not be surprised if programs written for NASA and the like - where they need enormous precision, and being off at all means people die or are lost in space forever - require some pretty sophisticated techniques. I think the poster was implying that the calculations for the engine of a naval ship might need similar treatment. It is certain that the programmers handling the calculations for the armaments (trajectories of shells, navigation systems for the missiles, etc.) would do well to exercise such care. After all, what is more mission-critical: DOOM, or a ship with hundreds of people on it in enemy territory?
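The scaled-integer trick described above is easy to demonstrate in a few lines of Python (cents vs. dollars is just an illustrative choice of unit):

```python
# Binary floating point cannot represent 0.1 exactly, so repeated
# addition drifts away from the mathematically exact answer:
total = sum(0.1 for _ in range(10))
print(total == 1.0)          # False: accumulated rounding error

# Scale to integer units (cents here) and the arithmetic is exact:
total_cents = sum(10 for _ in range(10))  # ten additions of 10 cents
print(total_cents == 100)    # True: no rounding anywhere
```

Same calculation, but the integer version has no representation error to accumulate, which is exactly why fixed-point shows up in control code.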

I've often wondered why ANY military branch would continue to have a presence on the internet, with the exception of recruitment sites. Back in the days before public/commercial internet access, I was a network contractor for the Navy, working at Point Mugu naval air station. The installation of a "command LAN" was a top priority, but the mere mention of a link to the internet was greeted with open hostility. (Wasn't my suggestion, either, thank God.) Made 100% sense to me then, even more so now.

I for one DON'T want them to install patches as they are released at a nuclear power plant. I'd like them to install patches on test machines first, to be sure the 'fix' doesn't break something else. Installing patches without testing them is just as senseless as not patching, if not more so.

I believe the article stated that at least one of the systems was NOT directly connected to the internet.

Most likely this scenario was the same as the one at TI here in Dallas a few weeks ago. Some nimrod from marketing or somewhere in the company brought their laptop home, got it infected, and brought it back to infect the network. Fact is, admins can't control absolutely everything in their networks.

It's surprising to me that during this latest ballooning Microsoft crisis, Linux and Macintosh aren't getting more press. They can always step up and say "Ha Ha, this isn't happening to us."

true, any admin that doesn't know about packet filter firewalls should be fired...

Sometimes that's not enough. At my university, the departmental firewall did just fine in blocking the virus, until somebody got their Windows laptop infected at home and brought it to work, behind the firewall. Once again proving that great network security can be easily defeated by poor physical security.

in an environment like a nuclear power plant, why aren't there firewalls on all clients? i mean, network security in such an installation is about as important as it gets.

it's possible the vulnerability arose through someone accessing internet e-mail. but wall street firms regularly blacklist internet e-mail sites. they do that b/c they're regulated to ensure that proprieties are kept and people aren't defrauded. a nuke though--we're talking more than just dollars and cents here.


Hard on the outside, soft & crunchy in the middle? The safety monitoring computer for a power system should be accessible only by floppy disk through a terminal in a locked room with pressure-sensitive floors, a sound monitor, body heat detectors *AND* laser trip wires on all the ventilation grates. (The floppy disk should be run through a demagnetizer before and after each use.)


I saw a documentary on that once. Apparently that's EXACTLY how the CIA headquarters mainframe at Langley is setup! OH wait, no, that was Mission Impossible. Forget it.

Don't forget: even had the administrator followed proper MS testing to see if his machines were patched, they still may or may not have been. There's plenty of blame to go around here, boys. Make sure everyone gets some.

These were inexcusable mistakes: using Windows for mission-critical equipment, and connecting that equipment - especially Windows - to the Internet.

With MS systems it's not just a matter of loading a patch; quite often patches break something (especially third-party apps), fail to fix the problem they claim to fix, or open a new vulnerability.

If a model of car were found to be so defective - bolts breaking, carbon monoxide in the passenger compartment, split drive shaft when you change gears, works with only one brand of gas, plays only approved radio stations, etc. - no one would think to blame the user.

The MARC network admin should be tied to the tracks a la Dudley Do-Right. Hope that signal to switch the tracks gets through... damn... That'll learn ya for hooking an operational network to the 'net'.

Same with the power plant. Your office is now located inside the containment building. Do you think they would pay more attention to network security then?

Why was the safety monitoring system on a nuclear power plant exposed, even indirectly, to the internet?

It doesn't even necessarily take an indirect connection to the internet. If a virus gets onto a laptop that was connected to a public (or otherwise infected) network, like at home, and that laptop is then connected to a completely autonomous network, it can infect that network.

Then why was the safety monitoring system exposed to the office network? In this case, the worm came in on a non-firewalled T-1 line from a contractor's network, and through there to the internet.

I would have suspected that there would be multiple layers of protection in front of critical systems like that. Even more, I would expect that safety regulations require these layers of protection. Of course, that would hurt the bottom line, so we can't have that happening:(

I agree the admin has some serious explaining to do. But have you ever worked as an administrator?

The "typical" administration job is exactly what you'd expect - you're understaffed, underpaid, your budget is abysmal, and you have a gaggle of users calling you up asking the *same questions* constantly because they're too lazy to use the help system!

Most of your day is spent putting out fires: fixing critical systems before all hell breaks loose, keeping your web/nfs/mail/compute servers running when they have a load average of *5*, fixing viruses, fixing shitty HP machines because your boss wouldn't listen to you and buy a cheaper machine made of quality parts.

Luxuries like patching systems and preemptive security measures are things there isn't time for.

So my question would be: is their IT department critically underfunded, and that CAUSED the problem, or was someone just lazy?

Ridiculous. Those important systems shouldn't even be on the same network as the office, much less attached to a network that can see the internet. I'm not talking firewalls/separate vlans/whatever either; I mean physically no kind of connection at all. If they have to be accessible from a vpn, you better have a damned good idea of who will be doing that accessing.

When it comes to your average office network, sure, you can give the "oh they brought in an infected laptop" excuse, but this is quite a bit different.

CSXT has confronted increasingly sophisticated computer viruses, like ones that have penetrated some of the most secure sites in the country in recent days.

Sorry, but they're obviously not "some of the most secure sites in the country". If they were, they wouldn't have been penetrated like this. How can I say this? Because my company didn't get penetrated.

I'm afraid of sounding like a broken record here, because if anyone looks at my past posting history they'll see I've said exactly the same thing. However, the fact is we have mission-critical 24/7/365 servers running Windows (as well as Linux) that simply cannot be vulnerable. So we secure them, and we protect them, and put in safeguards, and work together as a team if there is a particularly nasty threat out there... and we keep running. Funny, that.

Sod it; plenty of other posters will argue the point about patching, firewalling, etc., and a myriad of rabid MS-bashers will refute and insult. Let my small voice add merely this to the fray -- it doesn't have to be this way, even if you use Windows. All that is required is people who know what they're doing.

CSXT has confronted increasingly sophisticated computer viruses, like ones that have penetrated some of the most secure sites in the country in recent days.

What the fuck ever. I've heard similar excuses all freaking week. "Viruses are getting smarter", "Those hackers have no lives", etc., etc. They miss the point that it's actually the OS's fault in the first place! The virus comes in through an exploitable service which runs by default. It's not like the virus tricked the user into executing it.

It's like me leaving the door to my house open; some thief comes in, cleans out my house, and then I say, "Oh, that bastard has no life." Well, it's also my fault for being stupid and leaving the door open in the first place.

This ignorance won't stop until the media stops talking bullshit, tells the whole story, and includes _all_ the parties at fault - including MS, who, well, basically sold me the house without doors!

I would expect that the problem is not with the network administrators. The problem probably lies with the CIO, who has no idea about computers or firewalls. Trying to save money is what will really screw you.

Network Administrator: We should get an outsourced firewall and a managed virus system. It will cost $45,000 a year, but it will be worth it. We also need to start putting patches on the servers.

CIO: Too much money. Just buy something from Best Buy. As for the servers, we cannot pay you overtime to put patches on them. Besides, Microsoft is a big company. There shouldn't be any real problems.

Network Administrator: But sir....

CIO: Just do it. I've got an MBA. I know what I'm talking about. If there is a problem, we'll just blame you.

That brings up a good question. Doesn't software need to be certified before it can be used in nuclear applications? In fact, isn't one of the (many) disclaimers on most software (including Windows) "don't use this in a nuclear facility"?

I think the fault here lies with the moron who managed and accepted the software in the first place. One of the first disclaimers all software companies make is that they do not guarantee their products are suitable for life-threatening situations. Who accepted this software? Who specced it? Who supervised their work and ensured that they were competent people to manage this type of work?

I agree with this. Given the EULA claim that the software is *not* certified for use in applications such as life-threatening situations, why did due diligence not prevent this application from being approved? I also think, however, that this is not a network administrator problem. It is a legal counsel problem, and a CEO problem. How, after all, did a nuclear power plant escape segregating its key safety functions from a publicly connected network? Have they never heard of air gaps?!
These are the same people who never want regulations telling them what to do. No, voluntarism is always to be preferred.
How about penalties for dumb mistakes like this one?
Fines and public ridicule have a wonderful way of concentrating stubborn minds.
D

It is horrifying that critical systems such as nuclear (or "nucular," as W. says) power plant safety systems have been compromised by rampant known issues with Microsoft security.
I believe it is worse that such critical systems are not better administered. Heads should roll in the IT department. This is also an indicator of how this nuclear power plant has treated homeland security in general. Having such systems exposed to the internet is just plain negligent.

...before someone really is killed due to M$'s negligence. Sure, one could argue that they should have applied patches and that it isn't M$'s fault but tell that to the jury. When surviving relatives see the potential for a profitable liability suit they are going to go after the biggest pockets and that is M$.


Yes, and then software liability will be mandated by legislation, and then everyone in the software industry will be in trouble. Be careful what you wish for. If MS goes down for something like this, the whole software industry is in trouble. We don't make as much as doctors in this business, so we can't afford the malpractice/liability insurance.

Again, the question should be asked why were mission-critical systems connected directly to any network, other than connections to other mission-critical boxes?

is why anybody still thinks that Windows is suitable for a production control environment. I can understand the pretty gui for someone's desktop, but (and I'm serious when I ask this) what kind of utter cretin would think to put Windows, or any Microsoft product, in a fucking nuclear power plant, completely un-fucking-protected from this sort of stuff?

It doesn't make sense. Use a Unix/Linux machine, make sure it has only the access level needed from the outside (maybe sshd running, maybe), and keep the thing patched.

Why is this rocket science? Why do people who are building nuke plants and rail lines not know any better?

Use a Windows 2000 machine, make sure it has only the access level needed from the outside (maybe sshd or something similar running, maybe), and keep the thing patched.
If there was a Linux/Unix worm running around, couldn't the exact same situation happen?

While I agree with you in principle, the problem I have with MS patches is that I have NO FSCKING CLUE what other areas of the OS are affected. At least if I see a patch for TFTP for Linux, I KNOW I don't need it.

My God Man, just running MS Terminal Services requires the MS Client, even though I run a Netware network!

If there was a Linux/Unix worm running around, couldn't the exact same situation happen?

Yup. But I haven't heard of them. I've heard of a couple of viruses/worms/trojans on Windows that have taken out significant parts of the internet. My Linux/Solaris machines still get hit daily with Code Red, a two-year-old exploit.

If you were interviewing two people for a job, and one was a convicted, self-confessed violent felon, would you hire him over someone without a record?

is why the control computers for a nuke plant are even hooked up to the same network. I can understand the need for the systems to communicate, but for them to have a physical connection to the outside world, firewalled & patched or not, is just plain stupid.

It isn't likely that the SCADA or management systems themselves are running on a Windows box, but the front end will be. You do see a lot more Modbus-over-Ethernet these days, which I understand can coexist with TCP/IP. Although it would be a bad design, I can picture how you would end up with a single Ethernet backbone with multiple protocols and devices running on it.

If the critical system is on the same physical network as workstations other than the head end, that could be a problem. A technician plugs his infected laptop into the network for diagnostics or downloading data, and the network traffic kills the ability of the SCADA nodes to interact.

This is an easy mistake to make; all it takes is having multiple people need to share the same information, and a lack of money to provide dedicated physical layers for each function and proper gateways between the layers.

In actual practice, that may be what happened. The critical control system network itself should be (have been) inaccessible from the desktop/laptop network (aside from known secure methods, a la ssh) with the appropriate firewalls on *that* network (at a gateway, and maybe on each host/node). I can only wonder if the submitter/commentator meant/implied this when they asked why such ports were not blocked.
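For what it's worth, Modbus/TCP frames are simple enough to pull apart by hand, which is part of why they coexist so easily on a shared Ethernet. The header layout below follows the published Modbus/TCP spec (transaction ID, protocol ID of zero, length, unit ID, function code); the request bytes themselves are invented for the example:

```python
import struct

def parse_mbap(frame):
    """Unpack the 7-byte MBAP header plus the function code byte."""
    txn, proto, length, unit, func = struct.unpack(">HHHBB", frame[:8])
    if proto != 0:
        raise ValueError("protocol ID must be 0 for Modbus/TCP")
    return {"transaction": txn, "length": length, "unit": unit, "function": func}

# A made-up "read holding registers" (function 0x03) request:
# transaction 1, length 6 (unit + function + 4 data bytes), unit 17,
# starting address 0x006B, quantity of registers 3.
frame = struct.pack(">HHHBBHH", 1, 0, 6, 17, 3, 0x006B, 3)
hdr = parse_mbap(frame)
print(hdr["function"], hdr["unit"])  # 3 17
```

The point being: anything on that backbone can speak or spoof this protocol, which is exactly why sharing it with office workstations is a bad design.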

I just submitted the same story; it will probably get rejected, so here are some more links:
The Washington Post is reporting [washingtonpost.com] that the Slammer worm crashed the computerized display panel which monitors the most crucial safety indicators (coolant systems, core temperature sensors, and external radiation sensors) at Ohio's Davis-Besse nuclear power plant [doe.gov] in January. No serious problems occurred, primarily because the plant has been offline for more than 1-1/2 years. Davis-Besse is run by FirstEnergy [firstenergycorp.com], which many people feel may bear much of the responsibility [forbes.com] for last week's power blackout.

1. Worms infect the Internet, taking control of nuclear power stations and public transport
2. Japan announces 30-year program to build intelligent robots
3. New Scientist reports self-healing robots a reality, can survive battle damage
4. Arnold announces "I will go to Sacramento and I will clean house."

All I can say is that I hope the next /. story is about someone inventing SPF 2 million sunblock, or we're all going to have a really bad day.

... and people will stop using Windows in critical systems where failure can have catastrophic results. The only thing Windows does reliably is fail. Whoever decides to run a nuclear plant's safety monitoring system or a civil rail's monitoring and safety system on a Windows platform should be dragged into the street, shot, burned, pissed on, disemboweled and then hanged.

Funny you should mention the Blackout. The timing DOES seem interesting. I wonder just what functions inside the electric utilities depend on Microsoft Windows. If it's good enough for the nuclear industry, would anyone be surprised if failure of a critical set of Windows systems were responsible for the Blackout?

I've seen networks with effective firewalls still brought down by worms. Laptops are a very effective way to breach firewalls - if a laptop user connects at home, or on the road without a firewall, and gets the worm, it is trivial to bring that same computer into work and start spreading it behind the firewall.

Wow, taken to the extreme, the exploitation of their systems could have caused a train collision and injury or death to hundreds of Maryland and Virginia commuters.

That's why trains have human engineers and brakes. It's why people should use good judgement and observation: if you approach an intersection and see that the traffic lights in all directions are green, use your head and stop, because something's wrong. Of course, this is nearly impossible; there's a mechanical failsafe that will make all the lights blink red if that happens, turning it into a four-way stop, and similar mechanical fallbacks are employed on the railroads. But this is all beside the point.

Techies tend to overestimate the role of technology in day-to-day life. MARC was shut down more because the clerks were having a hard time selling tickets, since they can't do simple math in their heads.

> Wow, taken to the extreme, the exploitation of their systems could have caused a train collision and injury or death to hundreds of Maryland and Virginia commuters.

No. Taken to the extreme, this exploitation could cause the train system to stop. Which is what it did.

Ever since the Victorian era, trains are designed to stop if there's a failure. That's what "fail safe" means, not that it is "safe from failure" but that "when it fails, it is safe".

For a simple example [fraser.name], take a look at the _mechanical_ switching gear on the tracks behind my office. More modern electronic or computerised equipment is exactly the same in terms of how it reacts to failures.
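The fail-safe principle is worth spelling out in code, because it inverts the usual default. A hypothetical sketch in Python: the signal shows green only when every check affirmatively passes, and any failure - including a crashed sensor - collapses to red:

```python
def signal_aspect(track_checks):
    """Fail-safe aspect selection: green requires every check to pass;
    a failed check and a broken sensor both yield red."""
    try:
        return "green" if all(check() for check in track_checks) else "red"
    except Exception:
        # A sensor that errors out is indistinguishable from danger.
        return "red"

print(signal_aspect([lambda: True, lambda: True]))   # green
print(signal_aspect([lambda: True, lambda: False]))  # red
print(signal_aspect([lambda: 1 / 0]))                # red: sensor fault
```

Mechanical vital relays implement the same logic physically: the relay must be actively held energized to show anything but red, so losing power, or losing the computer, means stop.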

From the submission: "This indicates that the network that the train signaling stations are on is not protected by firewalls, at least ones that block ports 135 and 445, where the DCOM vulnerability is attacked."

As most people who had to fight this worm already know, a firewall doesn't do you a whole lot of good if you have users with laptops who plug in at home, then bring in their infected PCs and plug them into your internal network.

I'm not saying there aren't still ways to prevent the spread of worms, but an internal infection is in no way proof that there's no firewall. In many cases, it's just a clueless PHB who refuses to let the IT department lock down his laptop or install a personal firewall on it.

I don't care if you're running MS, Linux, or FreeBSD. That damn port should've been firewalled and the software should've been patched. What's scary is imagining what could've happened if someone intentionally tried to hack the power plant. Some terrorist cell could cause a nuclear meltdown without ever setting foot in the US.

This indicates that the network that the train signaling stations are on is not protected by firewalls, at least ones that block ports 135 and 445, where the DCOM vulnerability is attacked.

That is a silly conclusion to come to. Presumably they're also implying the same about the power grid.

I have first-hand experience with Ontario Hydro's IT network (now Hydro One's IT network), and I gotta say - they have firewalls up the wazoo. And this is the problem: they rely on border security. However, on networks as large as the ones being discussed, border security doesn't cut it. There are too many entry vectors. People reading email, people browsing the web, and oh my god, people with laptops - the pain, the pain.

So before you go thinking "they aren't even taking precautions that would have saved them! Fire them!" understand that it's *exactly* that attitude which caused the networks to go down in the first place - the common misconception that a firewall is a magic wand that will solve all their ills.

Border security does NOT cut it when you run insecure software on the inside, boys and girls. And you can take that to the bank.

With Blaster, spyware, etc. that seems to be spreading, I've wondered about using SSH only on a machine. Everything has to tunnel through the SSH connection (web, email, X11, etc.) using SSH port forwarding. That way, every machine on the local network would only accept SSH traffic. Any worm that gets installed and runs would try infecting other machines behind the firewall, only to find that those machines won't listen to the worm. Would something like this work?

P.S. Obviously, using this in a Windows environment would be difficult. Maybe this would be another good justification for migrating to a *nix platform.
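The idea can be sketched in plain Python. This is roughly what an SSH local forward (`ssh -L`) does at the byte level, minus the crucial encryption and authentication that SSH adds on top; the hostnames in the usage comment are made up.

```python
import socket
import threading

def pipe(src, dst):
    """Relay bytes from src to dst until src closes its side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def local_forward(listen_port, remote_host, remote_port):
    """Accept connections on localhost:listen_port and relay each one
    to remote_host:remote_port, one thread per direction."""
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", listen_port))
    lsock.listen(5)
    while True:
        client, _ = lsock.accept()
        upstream = socket.create_connection((remote_host, remote_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# Hypothetical usage: point local port 8080 at an internal proxy.
# local_forward(8080, "intranet-proxy.example", 3128)
```

In real life you'd let `ssh -L` do this for you, so the hop between machines is encrypted and authenticated; the point of the sketch is just that "everything rides one tunneled connection" is an ordinary byte relay underneath.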

Train Control and Signalling systems are universally designed for Fail Safe == Stop Working. The low-level, safety-critical systems are controlled with very low-tech Vital Relays which will stop train movement and/or make all the signals present a Red Aspect in case of computer failure, and that's what they did.

Train control has this luxury. Computer systems onboard airplanes do not... simply turning off jet engines in case of computer failure is not an appealing possibility.
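The fail-safe principle fits in a few lines of illustrative logic. This is toy Python, not real vital-relay code: anything short of a positively verified "all clear" commands red.

```python
def signal_aspect(block_occupied, interlock_healthy, comms_up):
    """Fail-safe signaling sketch: only show a proceed aspect when every
    condition is positively known to be good. Any fault, or any *unknown*
    value, drops the signal to red - the safe state."""
    if not interlock_healthy or not comms_up:
        return "RED"                   # equipment or comms failure -> safe state
    if block_occupied is not False:
        return "RED"                   # occupancy unknown counts as occupied
    return "GREEN"

print(signal_aspect(False, True, True))   # GREEN: everything verified good
print(signal_aspect(None, True, True))    # RED: occupancy unknown
print(signal_aspect(False, True, False))  # RED: comms lost, fail safe
```

A worm taking out the supervisory network looks to this logic like `comms_up=False`, which is exactly why the trains stopped instead of colliding.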

When I worked as a contractor at Virginia Power in 1999, all the temps had Internet access.
So it was just a matter of time before viruses found their way into SourceSafe.
When I checked out a project, there went my hard drive.
Guess who checked in the infected file?
You got it, a member of the HELP DESK SUPPORT TEAM.
Three cheers for the idiots.
Oh yeah, if you are wondering, the plant's reactors were made by Westinghouse in the early 70s, so no computer control there.
There are so many layers of mgmt to go through to do anything close to throwing a switch.
Anyway, no firewalls at Virginia Power.
Lots of internal LANs and servers accessible by anyone, too.

You're not just connecting to your business partners, you're connecting to everyone they've ever connected to.

The Register article says "It began by penetrating the unsecured network of an unnamed Davis-Besse contractor, then squirmed through a T1 line bridging that network and Davis-Besse's corporate network. The T1 line, investigators later found, was one of multiple ingresses into Davis-Besse's business network that completely bypassed the plant's firewall, which was programmed to block the port Slammer used to spread".

I'd never let a client do that. From a business risk management point of view, you *might* allow a direct connection by a vendor, *if* you had a good contract requiring them to keep good security and be responsible for breaches, and *if* you had secured everything sensitive in your internal network. From a theoretical or technical point of view, you should never trust something you don't control.

Monitoring systems are just as safety-critical as control systems. After all, the feedback loop is part of a control system. Imagine an intruder changing the readings to show that reactivity was decreasing, core temperature was dropping, and coolant pressure was so high that relief valves should be opened. You'd have a Three Mile Island rerun. That system should never, NEVER have been exposed even indirectly to the Internet.
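To make the feedback-loop point concrete, here's a toy sketch (made-up numbers, nothing like real plant logic) of a controller that blindly trusts whatever reading it is handed:

```python
def relief_valve_command(reported_pressure_kpa, setpoint_kpa=7000):
    """Naive bang-bang controller: opens the relief valve whenever the
    *reported* pressure exceeds the setpoint. It has no way to know
    whether the reading is genuine."""
    return reported_pressure_kpa > setpoint_kpa

true_pressure = 5000     # actually low: opening relief valves now loses coolant
spoofed_reading = 8000   # intruder reports dangerously high pressure
print(relief_valve_command(spoofed_reading))  # True: valve opens on false data
```

Corrupt the sensor path and you've corrupted the control path - the monitoring system *is* part of the control system.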

But then, Davis-Besse is the plant where someone thought the way to check for an air leak was to poke around with a lit candle near flammable insulation wrapping critical control cables (1975).

"Wow, taken to the extreme, the exploitation of their systems could have caused a train collision and injury or death to hundreds of Maryland and Virginia commuters."

I think that's a little far-fetched, and almost amounts to fear-mongering. At best, it displays ignorance of how modern rail systems work. When the signals fail, the trains simply stop - engineers don't look at a broken signal and say "well, gee, I hope there's nobody in front of me, full speed ahead!" In fact, on most modern equipment the braking is automatic when signals fail. I don't know exactly how modern the system is in Maryland, but at the very least there would be a regulation that all trains come to a halt in the event of signal failure. They certainly would not go speeding around without knowing if there's another train occupying the same block.

Collisions can and do occur even when the signals are working properly - it takes time to stop a speeding train. But assuming positioning is all correct to begin with and everybody's following proper speed limits before the signals go out, there should be no problem stopping a train in time once the signals do fail.
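For a rough sense of the distances involved, a back-of-envelope stopping-distance calculation (illustrative numbers, not MARC or CSX specs):

```python
# Constant-deceleration stopping distance: d = v^2 / (2a).
v_mph = 70
v = v_mph * 0.44704   # convert mph to m/s
a = 1.0               # m/s^2: a rough emergency braking rate for passenger rail
d = v ** 2 / (2 * a)
print(f"{v_mph} mph -> about {d:.0f} m to stop")
```

Roughly half a kilometre at commuter speeds, which is exactly why block lengths and signal spacing are designed around braking distance in the first place.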

First of all, this kind of service should never be connected to the public network; better yet, it should never be connected to anything smarter than a dumb terminal.

Secondly, Microsoft CLEARLY spells out that their software is never to be used in this kind of implementation. Most software manufacturers do -- Sun, Apple, and most Linux distros IIRC.

Now, if this is a case of a critical service being overflowed from a remote location simply because it's connected to a public network, that's bad enough. To be running a consumer operating system on those critical services is simply unacceptable and probably worthy of execution. I don't care if the system was offline at the time -- this kind of thing should be definitely ringing warning bells. I hope whatever moron implemented this system gets fired.

From reading the article the services that went down had analog backups, but it's still unacceptable. Don't connect critical services to the fucking Internet.

The infected systems were 'only' in the higher levels of the control hierarchy. Control systems in all plants like this (chemical, power, etc.) are built on multiple levels. You start at level 0, which is pretty much mechanical - safety valves, burst plates, simple thermostats. Those ensure that even if every control layer above that goes haywire and tries to make the plant blow up, you still remain safe.

I discovered the usefulness of this after setting a digital pressure control on a pilot plant wrong - nitrogen vented everywhere (which makes an incredibly loud noise), my supervisor went mad, but nothing broke:)
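The layering can be caricatured in a few lines (toy numbers, obviously nothing like real plant code): the level-0 relief valve wins no matter what the digital layer commands.

```python
BURST_PRESSURE = 900  # kPa: mechanical relief valve lifts here, no software involved

def plant_step(pressure, controller_command):
    """One tick of a toy pressure vessel. The digital controller may
    command anything it likes, but the level-0 mechanical relief valve
    vents whenever pressure exceeds its spring setting."""
    if controller_command == "pressurize":
        pressure += 100
    if pressure > BURST_PRESSURE:
        pressure = BURST_PRESSURE  # relief valve vents the excess (loudly)
    return pressure

# A misconfigured controller keeps demanding more pressure:
p = 800
for _ in range(5):
    p = plant_step(p, "pressurize")
print(p)  # capped at 900 by the mechanical layer, whatever the software wants
```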

I am amazed that the infection of the Halifax Bank ATM machines in the UK -- reported by someone here on Slashdot a few days ago -- did not reach the mainstream press in the UK.

I find it hard to believe that one of the best known banks in the UK has ATM machines that are exposed to the Internet in some way and can get infected by worms. Any UK journalists reading this - I'm sure your readers would be interested to know how insecure the Halifax computer network is.

"Wow, taken to the extreme, the exploitation of their systems could have caused a train collision and injury or death to hundreds of Maryland and Virginia commuters."

Railroad signaling systems being what they are, I'm certain that this could not have caused a collision. Railroad signal systems run on proprietary, fail-safe software. Getting trains to bump into each other, in most systems, takes a computer glitch in the code, or a specific series of commands to the signal system, plus a human overriding signal indications in the field. In every signal system I've ever seen (quite a few across the country), the only thing that MS software/OS relates to is supervisory remote control and monitoring. The local signal logic (software- or relay-based) will not allow for unsafe train movements, even if accidentally commanded to do so, unless very specific conditions are met. Again, an engineer passing a stop signal, for example, is usually one of the requirements.

The idea of a MARC train with a few hundred people getting into an accident because CSX's dispatching center is down is nothing compared to a freight train with hazardous material wrecking in a large city (since railroads grew up at the same time most large cities did, they run THROUGH the cities, not around them). Fire, gas, explosion, you name it, it could have happened.

On life-and-death critical systems they should use proprietary hardware, OS, and software.

Not any version of Windows, not any version of Linux, not Intel, not AMD, but something totally alien. Something that is designed from the ground up to be DIFFERENT and CLOSED, that cannot communicate with the outside world or the systems that the outside world runs on.

I mean NEW CPU's and a NEW OS and NEW software that is so different and so tightly closed that nothing can communicate with it but other systems of the same design.

With every other little dickweed with a Wally World emachine typing "1337" into google and downloading DIY virus labs, and these same little punks having access to the same networks that all the above mission critical systems communicate on, well, it's a disaster waiting to happen.

And when some script kiddie crashes a 747 full of people from his Wally World emachine on his mommy's AOL account, what then? Or the same kiddie opens the floodgates on a dam and kills 200,000 people. Or a million people. Or makes a nuke plant go Chernobyl?

When burglars keep breaking into your safe every week and robbing you blind you would assume that it's time to get a better safe..

Before the world went insane and computerized every friggin' thing from toasters to pay toilets to the power grid, this sort of thing was IMPOSSIBLE. Time to fix it, folks.

Springfield's own Homer Simpson was promoted to IT manager of Springfield's nuclear power plant today. Simpson promised that his first act would be to remove Unix from all of the power plant's computers. "Whoever heard of Unix anyway? I run Windows at home as do most Springfield residents. If it's good enough for playing games, it's good enough to run our nuclear power plant!", Simpson declared.

Why in heavens name are critical systems running consumer-grade software...and worse, why are they connected to the public internet?

And then there are VPNs...fine for offices, but not critical infrastructure - critical systems should be on totally separate, dedicated private networks, period!

Among my biggest fears is a computer worm somehow getting into a nuclear weapons system and causing nuclear missiles to be launched - in particular nuclear ICBMs, which are less protected. Windows is used on some nuclear subs, from what I've read. Frightening!

CSX Transportation's (CSXT) information technology systems experienced significant slowdowns early today after a computer virus infected the network. The cause was believed to be a worm virus similar to those that have infected the systems of other major companies and agencies in recent days.

The infection resulted in a slowdown of major applications, including dispatching and signal systems. As a result, passenger and freight train traffic was halted immediately, including the morning commuter train service in the metropolitan Washington, D.C., area. Contrary to initial reports, the signal system for train operations was not the source of the problem. Rather, the virus disrupted the CSXT telecommunications network upon which certain systems rely, including signal, dispatching and other operating systems.

CSX will implement InCharge(TM) Service Assurance Manager and InCharge(TM) Availability Manager to ensure the reliability of its Next Generation Dispatch Network, the core IP-based infrastructure that controls the dispatch and timely operation of 1,700 trains and over 20,000 carloads per day. More than 2,000 routers back this complex CSX network, each with multiple points of connectivity and multiple layers of redundancy.

Dumbasses at a nuclear power plant allow systems to be brought down by a bug Microsoft and the IT security industry warned people about weeks ago. Management held unaccountable for failing to make their lazy IT employees do their jobs.

This indicates that the network that the train signaling stations are on is not protected by firewalls, at least to block ports 135 and 444 where the DCOM vulnerability is attacked.

It means no such thing. It is perfectly possible to have a machine (such as a laptop) infected on the outside, then brought in and connected to the internal LAN, where it starts infecting machines it can reach.

And since when does port 444 have anything to do with it? Once exploited, the victim is running a command shell on port 4444.

I'd love to see what the Linux community would say if some intravenous drug pump running an embedded version of Linux had a bug that caused it to fail and kill a patient.

They'd probably cry, 'But we already released a fix! They didn't install this patch, and this patch, and this patch, and then recompiled.'

Don't blame the software companies for the "sh*t quality" of their software, as you say--blame the system administrator who didn't install the already-available fixes or patches. That by far is your guilty party right there.