Posted by samzenpus on Monday October 15, 2012 @02:20PM
from the we-don't-need-no-stinking-wires dept.

Nerval's Lobster writes "A team of researchers from Microsoft and Cornell University has concluded that, in some cases, a totally wireless data center makes logistical sense. In a new paper, the researchers conclude that a data-center operator could replace hundreds of feet of cable with 60-GHz wireless connections—assuming that the servers themselves are redesigned in cylindrical racks, shaped like prisms, with blade servers addressing both intra- and inter-rack connections. The so-called 'Cayley' data centers, so named because the network-connectivity subgraphs are modeled using Cayley graphs, could be cheaper than traditional wired data centers if the cost of a 60-GHz transceiver drops under $90 apiece, and would likely consume about one-tenth to one-twelfth the power of a wired data center."

Within a data center, you could use $1.00 LED emitters and receivers with integral lenses for short runs, precision (but still cheap) alignment fixtures and $0.10 mirrors. For long runs, laser diode emitters. You'd still beat $90/point by a huge margin. And as a plus, you'd have some extremely high speed connections. Power consumption... I dunno, you'd have to do an analysis. One thing that seems obvious is that for any line not sending data, the LED should be off the vast majority of the time.

Unless you can remodulate or make incredibly dense modulation possible, LED transmitters can manage about the same data rate as you see in WDM, and so the data rate among hosts isn't quite so chill. Power would be low, and it would be tough to find background noise to foul things up. But eventually, you'd need alternate spectra to modulate (lambdas) and tight transceiver pairs to make it work. Your engineering cost just shot down your low cost.

LEDs can be switched in the sub-nanosecond range with a little effort, in the single-digit nanosecond range without any unusual trickery at all. 10...100 ns for an 8 bit word isn't horrible. I don't understand your use of "chill" in this context.
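A quick back-of-envelope sketch of the timing claim above, assuming simple on-off keying with one bit per switching interval (the function name and figures are illustrative, not from any real design):

```python
# Back-of-envelope: data rate achievable with plain on-off keying,
# assuming one bit per LED switching interval (no multi-level modulation).

def word_time_ns(switch_time_ns, bits=8):
    """Time to clock out one word, one bit per on/off interval."""
    return switch_time_ns * bits

# Single-digit-ns switching, as claimed above:
print(word_time_ns(1.25))   # 10.0 ns per 8-bit word
print(word_time_ns(12.5))   # 100.0 ns per 8-bit word
```

So sub-ns to single-digit-ns switching puts an 8-bit word in the 10...100 ns range quoted above.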

Also not quite sure what you mean by tight transceiver pairs. I envision a transmitter LED nested at the bottom of a flat black tube on one end (crops the easily detectable emission to a very narrow AOV), and a sensor with an integral lens on the other. The only way the sensor could see the transmitting LED is to be lined up with it; parallax would prevent it from seeing adjacent LEDs on the same spatial alleyway, as it were. All low tech. You could fit a *lot* of these on a flat plane representing the end cap of the data alleyway.
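The tube-cropping idea above can be sketched with a little trigonometry; the dimensions here are made-up examples, not from any real fixture, and diffraction and internal reflections are ignored:

```python
import math

# Acceptance half-angle of an LED recessed in a matte black tube:
# a sensor can only see the LED within roughly atan((d/2)/L) off-axis,
# where d is the aperture diameter and L the tube depth.

def half_angle_deg(aperture_mm, depth_mm):
    return math.degrees(math.atan((aperture_mm / 2) / depth_mm))

# A hypothetical 3 mm aperture at the end of a 30 mm deep tube:
print(round(half_angle_deg(3, 30), 2))  # ~2.86 degrees
```

A couple of degrees of acceptance angle is what lets many such links share one end-cap plane without crosstalk.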

Most machines in a data center don't have a lot of connections going to them. One for sure, maybe two. That heads off to switches, routers. Those connections could be all LED. The router / switch, if consolidating to a high-traffic line, could use something else. If going out to other machines, LED again. No reason you couldn't mix tech here.

First, you need to use a modulation scheme that allows intense amounts of data exchange. If you don't do that, you're not trying, and why did you do this in the first place?

You have to have pairs that are either lambda or phase delineated for rational discrimination. Then you need plenty of pairs, as this is a crossbar arrangement; otherwise it's useless and you might as well use RS-232.

Finally, if you don't provide optimal switching, you're blocking, and if you're blocking, you're not state coherent, and why did you do this in the first place?

On/off is sufficient to give you more speed than the vast majority of machines actually need. Nothing fancy required. Receivers can only see one transmitter; on/off is just as good in that context as it is within a wire, as long as you don't block the path.

Number of pairs isn't a challenge, really. Should be able to get the density up to about what cables give you as long as you use the short transmitter sleeve I described.

It's not a crossbar arrangement. It's point to point. Same as an ethernet cable, which is also point to point.

I have a very small home data cluster: two servers, one switch. I can hit link saturation (causing the NIC to overheat) very easily running just ONE cable into each box. Problem solved, very easily, with TWO NICs per machine, two cables to the switch per machine, two IPs per machine. Incoming data goes through one NIC, outgoing through t'other, cards stay relatively cool and nothing falls over. I'm sure those who have experience with larger data setups have seen similar problems and know therefore that doubling up the links helps.

There are two major flaws with your plan, which, I would imagine, is why it hasn't been implemented.

Firstly, you need all that empty space for the light to travel down. It has to be dead straight and perfectly aligned, which severely limits how you can lay things out. Sure, you have mirrors, but they just introduce more alignment problems, and you won't be packing them that tightly anyway. Compared to just putting in cables there is no real advantage and many disadvantages.

Empty space tends to be perfectly aligned, lol. Yes, of course. But what this means in practical terms is a transceiver group needs alignment -- once, unless the building shifts, etc. If the building shifts, you have other problems. The *space* isn't going to move.

Yes, you want to keep dust out of there, otherwise you'll see error rates go up. The good news is everything benefits from this. Servers don't like dust either.

The first is not a problem; the second... should be solved. So I don't see these as serious objections.

Oh, really? So a traditional datacenter is sinking > 90% of its power into the wired network connections? Not the actual servers themselves? Not the cooling? The wired network connections? I'm not buying those power saving estimates.

Not even close. Server NICs are generally integrated; wireless requires a dongle or card. That's extra power for each server. Even if you cut your switch power requirement by 75%, there's still the problem of the extra power required by the interface cards, which at the very least will cancel out any power savings (which will be negligible anyway).

DNRTFA, but I imagine that the figure is quoted off of the networking equipment alone, without regard to any other aspect of the datacenter. I.e.: your actual network equipment footprint would shrink 20-30 fold, and that is where the power savings come from -- and while that is far from a majority of the power utilization of a traditional, large-scale datacenter, it is not an insignificant number in either physical space or power consumption.

That said, I doubt this is feasible without rethinking the datacenter design from the ground up. Simply rearranging the racks to minimize interference is not going to be enough.

I don't buy that at all. The efficiency of a wireless adapter is something less than 8%. If they're getting more than four inches of range on 0.3W consumption outputting 60GHz, I will be VERY surprised. It's more likely that they're *outputting* 0.3W (consuming over 5W per adapter), for an effective range of still probably less than 30-40 feet. This puts their power claims out by an order of magnitude and confirms what I've said all along: that going from wired to wireless saves nothing but copper, practically speaking.
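The commenter's numbers can be sanity-checked; the efficiency figures here are the commenter's own rough estimates, not measured values:

```python
# Sanity check on the power claim: wall-plug draw = RF output / efficiency.
# Efficiency values are the commenter's estimates, not measurements.

def consumed_watts(rf_output_w, efficiency):
    return rf_output_w / efficiency

print(round(consumed_watts(0.3, 0.08), 2))  # 3.75 W at 8% efficiency
print(round(consumed_watts(0.3, 0.06), 2))  # 5.0 W at 6% -- "over 5W" implies <6%
```

So "over 5W per adapter" is consistent with an efficiency a bit under the quoted 8% ceiling.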

Can someone explain how a wireless approach could use less power than a wired approach? I understand that if you compare a crappy wired implementation to a highly optimized wireless implementation the wireless might win out, but then it would be cheaper to optimize the wired one.

Switches are inherently hub and spoke (even the last of the rings were physically hub and spoke). So you have to have the hubs (networking switches, but literal hubs). With wireless, you could mesh and reduce hubs.

Now, if we were to get switches better optimized for power (most seem to be going the wrong way, with even datacenter-class switches being PoE capable, requiring lots of extra power), then there wouldn't be a savings. Get switches that turn off ports and cores based on load and connections.

You could get rid of some switches. Because 60 GHz doesn't penetrate through metal well, you can have your own little private network inside the rack cylinder without a switch. Each pair of computers could communicate on a separate frequency, so you'd get the equivalent of a switched network (one could do full duplex too, by using more frequencies). The wireless approach would be more resilient to failure too. You could use N! wires between the N computers instead, possibly using even less power.

Write the same sum but in the other direction just below the previous one, and sum both lines term by term. Notice you have N terms each equal to N+1, and that's twice the sum, so take half for a single line. A visual way to let a kid old enough to know multiplication tables find it for small cases is to draw the sum as dots on a piece of grid paper as a right triangle. Then double the triangle (mirror it across the long edge) and you get a rectangle where the number of dots can be computed with a simple multiplication.

It's not N!, it's (N-1) + (N-2) + ... + 2 + 1. It probably can be written more easily somehow.

You are correct... here's an easy way of figuring it out: (N-1) + (N-2) + ... + 2 + 1. Pairing up a term from the beginning of the expression with one from the end always makes N: (N-1) + 1 = N, (N-2) + 2 = N, and so on. There are (N-1)/2 such pairs, so the total is N(N-1)/2 (at least for odd N, though the formula works for even N too).
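A brute-force check of that closed form (a minimal sketch; `links` is just a hypothetical helper name):

```python
from itertools import combinations

# N(N-1)/2 counts the distinct unordered pairs of N computers; verify the
# closed form against both the explicit sum and a brute-force enumeration.

def links(n):
    return n * (n - 1) // 2

for n in range(2, 12):
    assert links(n) == sum(range(1, n))                      # (N-1)+...+2+1
    assert links(n) == len(list(combinations(range(n), 2)))  # all pairs

print(links(10))  # 45 point-to-point links for 10 machines
```

So a fully meshed 10-machine rack needs 45 links, not 10! = 3,628,800.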

A radar detector works by detecting the radar sent by the radar gun. In some states (VA?), such detectors are illegal, and they use detector detectors to find users of them. They work by detecting leaky receivers. I think you misread my comment.

You don't understand how it works. Your radio causes vibrations and oscillations in the same magnetic harmonic frequencies as the transmission. These vibrations upset the natural rest state oscillation harmonics present in All Living Things, these negative and deathly vibrations cause cancer. Life Crystals oscillating in the same Resonant Frequencies absorb these energies and give off life-giving restorative vibrations.

You can't have nearly infinite bandwidth in a finite frequency spectrum, but you can keep adding a shitload of wires if needed.

Given the problems people have when multiple wi-fi routers are too close together, like in an apartment building, I am doubtful that it would work well in a server environment, no matter which frequencies are used.

You can't have nearly infinite bandwidth in a finite frequency spectrum, but you can keep adding a shitload of wires if needed.

On the contrary, any number of optical signals can pass right through each other, whereas cables (electric or fiber-optic) cannot do that.
In other words, it's all a matter of how directional the signals are, and how powerful they are.

What I don't understand is what wireless brings to the table. The way it read to me, it was more a matter of having local "sewing circles" that were networked.

I liked the "sewing circle" concept, but why wireless? These are short distances. If you don't like short network cables, why not just use LED transceivers, instead? No wires to plug in when you jack compute modules in and out, (in theory) simpler circuitry, and as long as the cabinets are light-tight, no leakage issues. You could even put a big light

Ages ago (early 1990s), there used to be a system like that for Macs. Aim one transceiver at an area (such as a wall or ceiling), aim another one at the same area, and they would notify you with an LED when the connection was working.

Just have a little directional device on each host, have them all point at one area, and be done with it. If two devices just want to communicate with each other, find another piece of paper to aim them at.

When the 60GHz transceiver (which doesn't exist yet commercially) drops to $90 each, won't 10Gig ethernet drop down to $9/port, skewing their cost justification results? They mention using 4-15Gbit transceivers... what's the aggregate bandwidth of a 60GHz network? If the aggregate bandwidth is 15Gbit, that's not going to handle a rack full of servers.

60 GHz exists right now for point-to-point communications. You can get it on newer computers by looking for "Intel Wireless Display", aka WiDi. You can use it commercially with multi-gigabit speeds at ranges up to 1.5km (about a mile, assuming good weather).

They mention using 4-15Gbit transceivers... what's the aggregate bandwidth of a 60GHz network? If the aggregate bandwidth is 15Gbit, that's not going to handle a rack full of servers.

Talking about aggregate bandwidth for 60GHz is meaningless. The only number you have to worry about is the maximum bandwidth of a single transceiver, because, unlike most current wireless offerings, 60 GHz frequencies are so directional that you can run multiple links in parallel.

The traffic is sent into the air, and it's up to each receiver to filter the noise and ignore data not meant for it. Lots of interference.

It's OK for Starbucks or for home use, but not by much. I have at least 10 wifi networks around me that constantly interfere with mine. I used to get regular disconnects from Xbox Live that went away when I connected my Xbox to my router with Cat5 cable. Same with video streaming.

This is why large events have crappy data speeds: everyone is broadcasting into the same air space and interfering with everyone else.

Wireless is like the old layer-1 hubs: the traffic is sent into the air and it's up to each receiver to filter the noise and ignore data not meant for it. Lots of interference.

Um, well, not exactly. They are similar in that they operate at half duplex, but WAPs operate at L2 and L3 in addition to L1 (wifi uses CSMA/CA [wikipedia.org], vs. the CSMA/CD [wikipedia.org] used by shared wired Ethernet). Interference can be an issue, but only in an uncontrolled or poorly designed environment (pro tip: don't put 2.4GHz wireless phones in your wireless data center).

Even a moderately isolated and shielded data center that sticks to mostly directional transmission should have none of these problems. Look up omnidirectional vs. directional antennas. Considering that even off-the-shelf 802.11ac in the appropriate configuration can offer speeds of nearly 7Gbit/s (http://en.wikipedia.org/wiki/IEEE_802.11ac), I somehow think you're not really understanding the nature of what is being discussed. They aren't talking about shoving a pile of omnidirectional access points into one room.

This is how science works. Statements about the existence of anything, made with no evidence to support them, are to be treated as false unless and until such evidence is provided. With the given evidence, it's much more likely that I am a four-headed lizard who lives in a volcano than that any kind of deity exists, or ever existed.

Too bad the Bible is not an authority on science or logic, and you lack even a basic understanding of science. Plenty of things were called foolish, and then very soon were demonstrated to be true. Characters from ancient folklore about fear of death are not among those things, and I can assure you, never will be.

So Slashdot is now ripping off other sites, copying their content to Slashdot-hosted pages, adding ads, and breaking links. [slashdot.org] The original article [cornell.edu] says "Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ANCS'12, October 29-30, 2012, Austin, Texas, USA.
Copyright 2012 ACM 978-1-4503-1685-9/12/10...$15.00."

In the actual paper, the power consumption bullshit part reads "Power consumption: The maximum power consumption of a 60GHz transceiver is less than 0.3 watts [43]. If all 20K transceivers on 10K servers are operating at their peak power, the collective power consumption becomes 6 kilowatts. TOR, AS, and a subunit of CS typically consume 176 watts, 350 watts, and 611 watts, respectively [9-11]. In total, wired switches typically consume 58 kilowatts to 72 kilowatts depending on the oversubscription rate for a datacenter with 10K servers. Thus, a Cayley datacenter can consume less than 1/12 to 1/10 of power to switch packets compared to a CDC." That's comparing transceiver drive power with a whole store-and-forward switching fabric.
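The quoted arithmetic can be reproduced directly; all figures come from the quote above and are not independently verified:

```python
# Reproducing the paper's arithmetic as quoted: 20K transceivers at 0.3 W
# peak vs. a wired switching fabric drawing 58-72 kW for 10K servers.

wireless_kw = 20_000 * 0.3 / 1000   # peak draw of all transceivers, in kW
wired_low_kw, wired_high_kw = 58, 72

print(wireless_kw)                         # 6.0 kW
print(round(wired_high_kw / wireless_kw))  # 12 -> the "1/12" figure
print(round(wired_low_kw / wireless_kw))   # 10 -> the "1/10" figure
```

So the 1/10 to 1/12 claim is about packet-switching power only, which is exactly the apples-to-oranges comparison being objected to.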

It's also not clear how their "Y-switch" thing, which doesn't store anything, handles busy reception points. At some point, in a forwarding network, you either have to store packets or drop them. Or set up end to end channels first.

Even with careful planning and management, wouldn't a completely wirelessly-networked datacenter be more of a target for hacking, even with a high level of encryption (which would add to network overhead)?

I'm sure the network side of things would be secure, but I'd be worried about invisible radio-wave attacks that could come from anywhere. A satellite or parked van could essentially kill the whole network if it focused enough wattage at the data center. Not to mention other interference like solar flares.

I was always a bit dubious of the infrared based wireless networking (like IrDA) for an office environment, but what about optical wireless in a data center? Seems like that would solve the potential security issues and you could isolate racks (or parts of racks) on their own wireless network and then do the traditional wired scheme to join those nodes together so that you weren't stretching the bandwidth too thin?

These days, with VMs (and hence software switches) carrying the actual workload, and hastily programmed core switches broken down into a hundred VLANs, why are we hanging on to the ancient notion of "wires"? Clearly a wireless method for every server to be able to talk to every other server is the next logical evolution. Just sprinkle a little software on top to make sure that the servers only see/process what they are supposed to, and surely it will all work great!

Problem - 60GHz is currently very short-range wifi. It's also what, a couple of gigabits worth of bandwidth? Also, I haven't seen any studies yet looking at 60GHz saturation and lots of multi-path reflection. It's a cool technology, but this reads like someone's trying to sell the tech, rather than the tech really being suitable for the job.

It looks like their hope with the cylindrical orientation is that each server will communicate directly with the 5 to 7 servers opposite it via the inside (and hopefully the signal would be absorbed there as well) and with the servers above/below it on the outside (where the signal would dissipate fast enough to not interfere with other cylinders). Quite intriguing, but it creates one giant (and complex) software-managed ether in the literal sense: information will just "be there" and hopefully the software can sort it all out.

Hey, I have this awesome idea: let's take out all those expensive copper wires and make our data center wireless. It'll save so much money! But first we'll have to redesign racks to be cylindrical, and servers will need to be keystone-shaped. Also, because of the new rack design, you won't have access to rear ports. If something in the center of the rack comes undone or stops working, you need to open the entire rack. And each rack will have to be a Faraday cage so the signal doesn't leak out and collide with the neighboring racks'.

This will only work if the data-center is deployed as PaaS cloud-grade apps. Failing to leverage those best-of-industry game-changing paradigms, the interference from all that vapor will have a detrimental effect on the TCO and ROI KPIs.

From TFA: "the authors picked a Georgia Tech design with bandwidth of between 4-15Gbps and an effective range of less than or equal to 10 meters."

Given interference, does that mean you won't get more than 15Gbit per second for all the machines in a circle of 10 meters? How much is that, 6 racks? You put what, 20 machines per rack? (I am not in IT, so I am not exactly sure.) So you share 15Gbit per second across 120 machines?
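The sharing arithmetic in that question works out as follows, using the commenter's own guesses of 6 racks and 20 machines per rack contending for one 15 Gbit/s channel:

```python
# Worst-case share if 6 racks of 20 machines all contend for a single
# 15 Gbit/s channel (rack and machine counts are the commenter's guesses).

total_gbps = 15
machines = 6 * 20  # 120 machines in the 10 m interference circle

per_machine_mbps = total_gbps * 1000 / machines
print(per_machine_mbps)  # 125.0 Mbit/s each
```

125 Mbit/s per machine would be barely better than old 100M Ethernet, which is the worry being raised.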

Assuming no other interference. Right now, you can get 10Gbit per second per machine with 10GigE.

I understand it is directional, but I hardly believe you will be able to target the 6th machine on the 2nd rack on the left without irradiating half of the next 4 racks on the left. Maybe directionality will reduce it to 60 machines in your path instead of 150. But I don't think you'll reach anything below 10 machines.

I am not even mentioning that collisions will only give you half duplex, most likely even less.

Simply put, in order to pull that off, you'll need fairly sophisticated data processing. Simply pointing two directional antennas at each other works fine outside, but it is much more problematic in a data center, which has walls and obstacles creating reflections. What you could do is use modern MIMO systems, but those would require huge amounts of processing power to get any kind of decent bandwidth. There's no point designing a system now which already peaks out at 10 GBit.

What you say is true, despite the humor. Let's say there's nice low-power, low-noise wireless. Multiple concurrent channels would have to be wired in a method that's like a crossbar L2/L3 switch to make this work, and the matrix (sorry for the word choice) would have to service each point with a non-blocking architecture, as the built-in cache at the bottleneck would have to be enormous and fast. The engineering costs of this outweigh the perceived savings. Add in any random EMI (admittedly at really high freqs), and this only gets worse.

I have a pair of 4U servers, each containing a 1000W PSU (for hard drives and fans) and a 450W one (for the mainboard and everything else). That's shy of 3kW on servers which wouldn't fill a rack a quarter of the way. The switch (24-port unmanaged) consumes 40W. That's about 1% of the total power requirement of the entire system. If I switched out for, say, a Linksys E3000 (7W) and the associated wireless interface cards (I'd have to go with USB since all my slots are full)...

That's the thing - it doesn't. It's *unmanaged*. It's a D-Link DES-1024R+. Just a dumb switch. It doesn't even do DHCP. As to saturation, it's a nonblocking wire-speed architecture, around 4.8Gbps via 24+1 autosensing ports (the +1 is a bay for 100MBit optical, which is in another switch; that one a D-Link 16+2 with two optical bays, but I can't remember the model number). Oh yeah, and it's noisy. I think the fan bearing's gone.

What makes you think it consumes 40W? Have you used a power meter on it? Same goes for your "1000W" servers. Power supply ratings do not indicate typical consumption levels, only peak load capabilities, and the only time you get even close to that is when you're first powering the system on and spinning up your drives.

Because I've metered it? And yes I am perfectly well aware of the peak load on systems when you spin up the drives, which is why there are TWO power bricks in each server. The specifications on each are identical. The peak load on the kW bricks on spinup is 860W, steady and stable(ish) at 412W. The system bricks peak at 383W and stabilise at 241W (saturated), 191W (idle).

Just because you have a particularly weak and inefficient switch doesn't mean that's par for the course. 40W would be about right for a high-end 3000-series Cisco, which WOULD do all that fancy stuff (except that DHCP is generally NOT something you want your switch doing, since it is distinctly layer 3; not sure if even 3000-series switches do DHCP).

And for the record, if your switch is non-blocking, it's doing 4.8Gbit/second of traffic at 24 ports and 100Mbit port speed (full duplex = 200Mbit/port x 24 ports). Good luck.
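The fabric-capacity arithmetic, as a quick check (using the common convention of counting both directions of each full-duplex port):

```python
# Non-blocking fabric capacity: every port sending and receiving at wire
# speed simultaneously (full duplex counts both directions of each port).

ports = 24
port_speed_mbps = 100

fabric_gbps = ports * port_speed_mbps * 2 / 1000
print(fabric_gbps)  # 4.8 Gbit/s
```

That matches the 4.8 Gbps figure quoted for the DES-1024R+ earlier in the thread. Note that vendors differ on this convention; counting each frame only once would give half that number.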

As someone who makes a living installing and supporting wireless data communications, and has done so since '97, I find this totally laughable. Not to mention insecure as hell. And did I mention unpredictable performance? And how did they measure latency on a draft spec that is still a horse race between 4 draft specs? Looks like someone has a new buzzword for when "cloud" starts to wane.

The nice thing about wireless interconnects, though, is you can have a much broader range of network topologies.

Not really, no. Want two nodes connected in your network topology? Just pull a wire or fibre between them. It requires lots of wires, but it is going to be better than wireless on throughput, latency, and reliability. Wireless is useful for devices that are constantly moved around. For anything stationary, pulling a wire is a better long-term solution. And I assume servers in most data centers are considered stationary.