
renai42 writes "An Australian security firm is about
to launch a clustered Linux distribution based on openMosix that aims to
utilise the unused nightly processing power of corporate desktops.
Dubbed CHAOS, the distro is able to remotely boot a computer and run
it on Linux without affecting the local hard disk. CHAOS is designed
to provide dumb node power to a cluster run by existing full-featured
clustering distributions such as Quantian and ClusterKnoppix."

I don't know whether it's just me and my uninformed nature, but it occurs to me that switching off these computers would save a hell of a lot of money. Rather than using them for something else - which I notice TFA is not clear on, something about a demonstration - why not just power down?

From the Pure Hacking website: "Internal on-site penetration testing gives the business the assurance it needs to conduct safely on the internet and with business partners." It would make a lot more sense if this was only intended for use in demonstrations and testing, though, as I can imagine very few companies would feel a need to use this sort of distro on a nightly basis; for one-off activities it may be useful.

There are already corporations out there that turn part of their desktops into a cluster by night.

They have a need for computation power that they can't otherwise satisfy, and this gives it to them at no extra investment besides electricity.

If you power them down, they're doing nothing - your investment is just sitting there. By using them to calculate stuff for the engineering department, they're doing something useful and the return on investment gets better.

There are also a number of banks and financial institutions that use calculation agents running on desktop PCs to perform calculations such as trade and portfolio valuation, credit risk calculation, etc. When the PCs are idle, they join the corporate "SETI at home"-style grid and contribute to the various financial calculations being performed. The ultimate goal is to get as close to 100% CPU utilization as possible across all hardware within the organization.

I believe that the electricity used by a distributed network of PCs is more expensive than renting time on a supercomputer. This formula gets more attractive, however, when the PCs contain powerful vector processing capabilities similar to those of a G5 PPC chip. Since not very many businesses have standardized their desktops on G5 hardware, I am skeptical that your claim is true.

Although the power savings are something the world could probably benefit from, most large corporations probably have computing tasks that take up a large amount of CPU time, or if they do not, could probably profit by providing some CPU time to other companies.
An idea like this definitely makes sense to the corporate world, much like the idea of the 3rd shift in the industrial world. You might as well make use of your down time.
I know a lot of the companies that I have been involved with do automatic…

Where I work (ehm...) at the univ, all PCs are on at night so that others can log in remotely if they need to distribute their load. And then there are some dedicated number-crunching machines. I am not sure whether running SETI-at-home stuff etc. is appreciated.

It would make a lot more sense if this was only intended for use in demonstrations and testing though, as I can imagine very few companies would feel a need to use this sort of distro on a nightly basis, but for one off activities it may be useful.

It's not a company, but at my university (the University of Bremen, FYI) we have a computer lab full of dual-P4 Fedora boxen, some WinNT boxen and a few antique Sun Blade 100s. At least the Linux boxen are clustered at night and used to brute-force the students' passwords. If they manage to discover your password, your account is locked and you have to go to the admin and have a little talk with him concerning secure passwords.

I can imagine that a lot of companies might be using similar means of making sure that the suits don't use immensely creative passwords like "love", "sex" or "god".

When I worked for Silicon Engineering 11 years ago, we had a whole mess of SPARCstation systems, from 1+s and IPXs up to SS20s with quad HyperSPARCs. All the machines were set up to process jobs via DQS, the distributed queueing system. We used the BSD automounter to make sure that tools like Verilog and Synopsys were available on the same path on all machines and across two different operating systems (SunOS 4, SunOS 5). When the user is generating input, the X client qidle tells the queue manager on the system…

The real beauty for companies using this kind of setup for crunching data is that it can run at a limited level (using Linux running on Windows) when the computer is doing other tasks, but can boot into a pure Linux environment with no resource limitations after the user has gone home. So when the computer would otherwise be off, it's crunching data. When the computer is idle (such as during meetings, lunch breaks, etc.) it's crunching data. When the computer isn't being kept busy (during those Minesweeper games)…

If it needs to have a Knoppix image installed every night, does that mean I need to leave the Knoppix CD in the drive before I head home? Sounds like the plan would work except for all the lazy people in the office leaving their Mark Knopfler CDs in the drive instead of Linux.

W.O.L. doesn't power up the system when it's been shut off, so it's really not of any use in this situation.

It doesn't sound like you've tried this. When configured correctly, it works. We do weekly maintenance and nightly installations of software that way. In some scheduled job, all systems get a wake-on-LAN packet and they start, and run some install. The users are never bothered with it, unless their systems are offline at that time (e.g. laptops).
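As a concrete sketch of that kind of scheduled job - the hostnames, MAC addresses and script paths here are all hypothetical - a crontab on the management host could look something like this, using the common `wakeonlan` utility:

```shell
# Hypothetical crontab on the management host.
# 02:00 - wake the target desktops with a WOL magic packet
0 2 * * * /usr/bin/wakeonlan 00:11:22:33:44:55 00:11:22:33:44:56
# 02:10 - give them time to boot, then kick off the nightly install
10 2 * * * ssh desktop-01 /opt/maint/nightly-install.sh
```

The ten-minute gap is just a guess at boot time; a real setup would poll until the machine answers.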

I use this daily to wake up my machines on the LAN from a wireless laptop.
I've yet to see a machine that doesn't respond to this - of course I'm tending to use integrated NICs which don't require a separate jumper, but most BIOSes will wake on PCI events too...

Why is this person moderated informative? He doesn't even know the basics of WOL.

Hell, I have a box that multiboots win98, 2k, xp, Debian Linux 2.4 or 2.6, obsd, and netbsd on my internal network. (Yes, all those OS's are on one system with one hard drive.) I ssh in to my firewall and then use a Perl one-liner to send a WOL packet to the system. Then I use cu or tip (serial port programs) and I get a grub prompt over the serial and pick the OS to boot. After that I can VNC to the Windows installs or…

As already said, WOL does start a machine that is turned off. I have one machine (out of 10) whose WOL has a mind of its own: it is supposed to wake up only when asked, but it comes on at random times just because it feels like it. The others come on when requested. So you could tell the users to turn off their PCs at the end of the day, and they don't even need to know that the machines are being used during the night.

You need a network card which supports it, as well as a mainboard which supports it (or one with built-in networking, which usually supports it).

To start it up you send a "magic" packet to the NIC, which tells it to boot. AFAIK it's just a MAC-level packet with all FF in the data field, or something like that. The NIC will then boot the computer just as if you had pressed the power button.
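The parent has the right idea, though the details are slightly off: the standard magic packet is 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, usually sent as a UDP broadcast (ports 7 or 9 by convention; the NIC only inspects the payload, not the headers). A minimal Python sketch:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

For example, `send_wol("00:11:22:33:44:55")` would wake a (hypothetical) machine with that MAC, provided its BIOS and NIC have WOL enabled.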

The shutdown tool will only shutdown/restart a computer if the account that issues the command has been granted "SeRemoteShutdownPrivilege" authority in the container in which the target computer is located. Without this privilege, the shutdown tool will have no effect at the target computer.

Yep. Which is why you'll notice the blinky lights on the ethernet port light up even when the machine is off when you plug a "live" ethernet cable in.

I use it on my HTPC so that the machine can be "off" (and silent) unless I need to use it or access content stored on it remotely. The BIOS also supports scheduled wakeups, which gets used to schedule TV recordings by the software that came with the tuner card.

"off" is only in quotes because no PC is truly not using any power until the power supply is turn

I've often wondered how you send the WOL signal. Is it bound to the MAC of the recipient card, or do you need a direct connection, etc.? Would most newer boards with an onboard NIC support it?

And for this case, you might not even need WOL... as some motherboards actually support scheduled wakeup operations so you could just have them all with virtual alarm clocks waking up at the appropriate time...

Now I hope that SETI and those other protein-folding projects can really get a boost. Who knows? A company carrying out its own research may actually be helping its competitor by giving it processing power in the nighttime! And what about the IP side: if someone makes a new finding, will it be credited to the computer or to the whole cluster? I think these things have to be sorted out first. These issues have not come up partly because SETI and the others have not found anything significant yet. But who knows…

Projects like Folding@Home [stanford.edu] have already generated usable results [stanford.edu]. Their FAQ [stanford.edu] answers the question "Who 'owns' the results? What will happen to them?":

Unlike other distributed computing projects, Folding@home is run by an academic institution (specifically the Pande Group, at Stanford University's Chemistry Department), which is a nonprofit institution dedicated to science research and education. We will not sell the data or make any money off of it.

I remember hearing about how in the future, we would be able to plug in to the internet and not only access information but also spare processing power. It would be really handy; most of the time you are only using a fraction of the power of your computer (for example, my usage is hovering at around 8%, and I have a movie playing as well as several other applications running), but when you need more processing power, you could get it on demand. Of course, the lag would make it too slow for video games and such, but for some computationally-intensive stuff (video editing, ray-tracing, etc.) it would be perfect.

Not sure what kind of distributed computing you can really do over latency measured in milliseconds. One of the big bottlenecks for today's supercomputers is bus/shared memory access time. I can't really see this being useful for much more than we already do - SETI@Home and so on, where you send packets to be processed and after a few hours the node sends them back.

So yeah, I'm not sure we could ever have a true supercomputer distributed over the net (as it is now, with the speed of light being what it is!) that's pa…

I remember hearing about how in the future, we would be able to plug in to the internet and not only access information but also spare processing power. (...) for some computationally-intensive stuff (video editing, ray-tracing, etc.) it would be perfect.

It's easy enough for SETI, which will verify results, and most would be simply discarded. Same with cracking crypto challenges and a few others. But what about video editing or ray-tracing? Someone could just insert junk into it, and you'd never know until you…

Uh, no. I'd know, because I'd be using a protocol that verifies the work given back by each node by some method. I'm sure it could be fine-tuned to verify some nodes more than others depending on each node's current rating for reliability.

Is your point that error correction is less efficient than not trying to correct errors? Because, guess what, you are right!

The post above mine said "it's easy enough for SETI which will verify results..." My point is that if you can do it for one mathematical calculation, you can do it for certain other types of mathematical calculations. I know for certain that you can distribute ray-tracing work, because I worked for a startup that wrote a multiplatform, multithreaded, distributed renderer.
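A redundancy-based verification scheme like the one being discussed can be sketched in a few lines. Everything here - the `Node` class, the replication factor, the reputation penalty - is invented for illustration, not taken from any real grid framework. Each work unit goes to several nodes, the majority answer wins, and dissenting nodes lose reputation:

```python
import random
from collections import Counter

class Node:
    """Toy compute node: a name plus a function that does the work."""
    def __init__(self, name, fn):
        self.name, self.compute = name, fn

def run_with_verification(work_units, nodes, replication=3):
    """Send each unit to `replication` nodes, accept the majority
    answer, and halve the reputation of any node that disagrees."""
    reputation = {n.name: 1.0 for n in nodes}
    results = {}
    for unit in work_units:
        chosen = random.sample(nodes, replication)
        answers = [(n.name, n.compute(unit)) for n in chosen]
        winner, _ = Counter(a for _, a in answers).most_common(1)[0]
        for name, answer in answers:
            if answer != winner:
                reputation[name] *= 0.5  # penalize the dissenting node
        results[unit] = winner
    return results, reputation
```

With two honest nodes and one that always returns junk, the majority still recovers the right answers while the cheater's reputation decays - which is roughly how you'd decide which nodes to double-check more often.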

Probably. There are others working on using workers' desktop systems as spare compute nodes for the evening. An Apple project manager announced that OS X 10.4 will include Xgrid in every version - desktop, server and cluster - so they can all be configured as supplemental nodes. I think they are planning to include Xgrid free of charge.

I was switching slowly from Windows to Linux. The process started 7 years ago. I removed Windows from my personal machine 4 years ago, and about 1 year ago I started doing the same on the computers of people in my family.

It took me so long because I was dependent on AutoCAD. It is a tool that works only under Windows, and it is used by people in the architecture/engineering part of the market. Honestly, I now have 15 years' expertise in AutoCAD, as I was using…

Corporate Linux Fundamentalist: There's this new product that uses all our PCs overnight to harness their power for the greater good. It runs on Linux. It would be a good way for us to become more Linux-friendly in the workplace.
IT Director: Um, sure, OK, what's it called?
Corporate Linux Fundamentalist: Um, Chaos?

Could they not have thought of a better name? How about .Grid, or something else Microsoftie? Well, at least it wasn't called KAy05.


Microsoft would have called it the "ActiveChaos Computation Improvement Suite XP" and released it in Embedded, Home, Professional and Server variants whose main difference is the color of the splash screen.

They could have at least had the common courtesy to name it KHAOS and remind us of Get Smart.

Would you believe that I have a cluster of 60 high-powered night-time computers in this office building? No? Would you believe a Pentium and 10 BaseT network card? No? Would you believe a Commodore PET and a dog?

Here is a suggestion that would allow computers that are not in use to be "co-opted" for use in the cluster.

Identify the PCs that COULD theoretically be used, and collect their MAC addresses. Also, configure them to try netboot first, then fall back to booting from the hard drive.

When you want to perform computations, send a WakeOnLAN packet targeted to each of these computers. Wait for netboot solicitations, then, if you have recently sent a WOL packet to that computer, respond with an appropriate netboot directive, booting the PC into a cluster node configuration, with all details loaded from the cluster director.

Otherwise, allow the netboot solicitation to time out, and the computer will boot into its normal configuration.
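The wake-then-netboot gate described above is easy to sketch: the director remembers which MACs it has recently sent WOL packets to, and answers a netboot solicitation only for those machines. The class name and the timeout value here are invented for illustration:

```python
import time

class ClusterDirector:
    """Sketch of the wake-then-netboot gate: answer a PXE/netboot
    solicitation only if we recently sent that machine a WOL packet.
    Machines that boot on their own time out and boot normally."""
    WOL_WINDOW = 120.0  # seconds a WOL "claim" on a node stays valid

    def __init__(self):
        self.woken = {}  # MAC address -> timestamp of last WOL we sent

    def record_wol(self, mac: str) -> None:
        """Note that we just sent this machine a wake-up packet."""
        self.woken[mac] = time.monotonic()

    def should_answer_netboot(self, mac: str) -> bool:
        """True only if the WOL claim on this MAC is still fresh."""
        sent = self.woken.get(mac)
        return sent is not None and time.monotonic() - sent < self.WOL_WINDOW
```

A user powering a machine on in the morning would generate a netboot solicitation with no fresh WOL claim, so the director stays silent and the machine falls back to its hard drive, exactly as the scheme requires.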

Not sure how OpenMosix handles nodes that simply vanish, but users could simply reboot the PC when they arrive in the morning, if the computation is still ongoing. Otherwise, the cluster director could remote shutdown/reboot each node prior to the user arriving at work.

Unused PCs would not consume power, and cluster-node PCs could be instructed to immediately drop the monitor into power-save mode, etc.

The cluster director could decide how many nodes to start, or the location of the nodes, to optimise the comms between it and the servers.

It would be simple enough to leave a "Please click here to reboot" message on the screen of the PC. Alternatively, some explanation of the fact that the computer is not in fact busy and that you're free to use it as you wish would work.

Of course, replacing the computers with Linux boxes with background processes set to idle would make more sense.

Does anybody have a real (non-scientific, non-SETI) example of the usage of such a cluster? That is, what kind of job can such a machine do, especially when network latency/throughput generally sucks (the standard is still 100 Mbit)?

This seems quite similar to the concept of Inferno (http://www.vitanuova.com/ [vitanuova.com]) from Vita Nuova Limited, except Inferno runs hosted on the operating system (it can also run natively). Similar concept, different implementation. I'll stick with Plan 9, though. :)

You don't want swap when you perform hours-long scientific computations: if the program ever swaps, the performance goes down, and since the computer is unattended, the hard disk burns out in a few hours (happened to me). Many Beowulf clusters are diskless and headless for cost and maintenance reasons anyway.

Yeah, I looked at that and went with SGE (http://gridengine.sunsource.net/ [sunsource.net]) at the time, mainly for political reasons. SGE gave me extra buy-in from a couple of other departments. It works nicely and relatively transparently, even for stuff like OpenOffice.org, Netscape and GIMP which you would normally run locally.

For fairly heavyweight apps we have the machines grouped, e.g. there are a bunch of OO machines, GIMP machines, Mozilla machines etc. It takes advantage of shared libraries; OO is about 90 MB resident, but about…

I was thinking about "cheaper than free" software - a Linux distro that turned your broadband-equipped computer into a cluster node while idle - a couple of years ago. All that computing power going to waste... But I couldn't find a way to build a business model around it - it was just too hit-and-miss for any task I could think of. What data is there that can be batch-processed in a completely non-time-critical fashion, and is so non-security-critical that it can potentially be shown to thousands…

Unfortunately, CHAOS isn't one of them. There was an article on CHAOS in Linux Magazine somewhere in 1996-97. It stood for CHeap Array of Obsolete Systems. The author put together a set of 386, 486, and Pentium boxes that he bought bundled on a pallet. I think he used Slackware and Beowulf, but in the end it actually had some pretty significant computing power. The computational-power-per-kilowatt-hour ratio wasn't very good, though. I wonder if he ever had to run his furnace in the winter?

Nice. A hacking company wants me to load a tiny 6-megabyte Linux client into my secure network that then becomes a dumb node in my cluster, "without disturbing (or even touching) the contents of the local hard disk". A company that says they use the power to crack passwords.

Yeah, sign me up with the full knowledge of how many company network policies I would be violating, and the fact that I would not trust them as far as I could throw a datagram.

Hmmm, it quacks like a duck. I would swear they taught us this in both "Social Engineering" and Advertising: give the "mark" a little benefit, and then take over his world.

Yes... let's leave a bunch of corporate PCs, each consuming 250W of power *on* all friggin' night just so they can have computing power. Not only that, let's do it in an air conditioned office so that we're heating the very office we're trying to keep cool.

The one question this raises is the big one: security. I've run SETI at a number of places. At one place I came in one day to find my computer off and all of the Cat5 pulled out of my hub. The network admin (not the brightest bulb on the Christmas tree) had noticed "strange traffic" on the network and traced it down to my machine. He then claimed that the whole network was acting funny and it was my fault. I'm no MCSE (nor do I ever care to be), but I've admined enough networks in my day. I looked at…

Seems to me that what TFA is suggesting is that organizations can use this to gain part-time Beowulf capabilities on machines that could be running Windoze or whatever during normal office hours -- they wouldn't just be giving the processing time away to some random project over the Internet (although that could easily be done too), but using it for in-house projects where an outside connection probably wouldn't even be needed in most cases.

Yeah, I'd want to see some security measures in place, like running it in User Mode Linux or something. A dedicated client program like SETI@Home is one thing. A full OS with the capability to fsck with your hardware is another.

which doesn't make them any money.

But it could help save them money. Lots of OSS users have no viable way to contribute back to their favorite projects.