Posted by samzenpus on Wednesday March 14, 2012 @04:25PM
from the who-made-who dept.

jfruh writes "AC beat DC in the War of the Currents that raged in the late 19th century, which means that most modern data centers today run on AC power. But as cloud computing demands and rising energy prices force providers to squeeze every ounce of efficiency out of their data centers, DC is getting another look."

AC is better than DC for transporting electricity because you can convert between voltages with just a transformer. But in a data centre, when all the equipment will be powered by the same voltage, it makes sense to use one good efficient power supply for multiple computers, so that all the components don't have to be duplicated for each computer.

Yes, let's use a big power supply for all the computers, so they all share the exact same point of failure AND have a MASSIVE fault current when someone accidentally drops a piece of uninsulated wire across a bus bar. Then we have a couple of racks of equipment melting down and a techie vaporized to ash.

That's why, like, I dunno, 80 years ago, the telco business got in the habit of A and B power bus distribution. I worked at a place with a C bus, which was pretty much a load-balancing hack and confused the hell out of the CO techs and electricians... They actually shorted out the C bus one time because they didn't understand the concept of having three busses instead of the "standard" two.

AC is better than DC for transporting electricity because you can convert between voltages with just a transformer.

Which was a winning argument in the 19th century, but not anymore. The use of AC entails significant power losses, especially for cables that are immersed in salt water, which is why DC is used in such situations.

AC is better than DC for transporting electricity because you can convert between voltages with just a transformer. But in a data centre, when all the equipment will be powered by the same voltage, it makes sense to use one good efficient power supply for multiple computers, so that all the components don't have to be duplicated for each computer.

It depends.

AC wins out because of ease of conversion: the higher the voltage, the lower the current, and the lower the current, the lower the I²R losses in the wire. DC didn't win because at the time, efficient (and cheap) voltage converters didn't exist. These days, a switching DC-DC supply can easily exceed 90% efficiency, and you can get solid-state converters that handle transmission-line powers easily. Hence the launching of HVDC transmission lines, which have no resonant losses and no phasing issues.

In a datacenter, you'd probably take the incoming power and turn it into an intermediate voltage like 48VDC per rack or something - something that minimizes I²R losses (you want high voltages) and DC-DC converter losses (ideally you'd distribute the output voltage directly and need no converter at all).

It will have to be per-rack at the minimum purely because of the losses - if we did 12V lines and a few servers take 1200W total, we're talking 100A of current. If we bump it to 48V, we're dealing with 25A (maybe 30A after inefficiencies), and I²R losses at 25A are far lower than at 100A (loss increases with the square of the current).

Also, the 100A cables are big and chunky (which you need because they reduce the "R" part of the I²R losses).
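The arithmetic behind that comment is easy to sketch; the cable resistance here is an illustrative assumption, not a measured figure:

```python
# Compare I^2 * R cable losses when delivering the same power at 12 V vs 48 V.

def cable_loss_watts(power_w, voltage_v, cable_resistance_ohms):
    """Loss in the cable for the current needed to deliver power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * cable_resistance_ohms

R = 0.01  # ohms, assumed round-trip cable resistance

loss_12v = cable_loss_watts(1200, 12, R)  # 100 A through the cable
loss_48v = cable_loss_watts(1200, 48, R)  # 25 A through the cable

print(loss_12v, loss_48v, loss_12v / loss_48v)  # 4x the voltage -> 1/16 the loss
```

Quadrupling the voltage cuts the current to a quarter, so the I²R loss drops by a factor of (4)² = 16, which is why per-rack 48V distribution beats room-wide 12V.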

One of the challenges of HVDC, especially in the transmission/distribution world, is that normal switching happens on the line and not at the breaker. If you can switch further down the line, you can leave all the people closer to the breaker with power. The issue is that this switching happens while current is flowing, which requires that the device interrupt real current. In an AC system this is relatively easy because the arc created by opening a high-voltage circuit under load goes out at every current zero. There is no current zero on DC, so you force the interrupting device to break current. A similar situation can be seen if you look at relay contacts: they may be rated at 20A @ 120VAC but only 0.5A at 12VDC.

Well, to my way of thinking, what doesn't make sense at all is that in data centers you normally convert AC to DC, then to AC again, then to DC again.

Yes... many data centers these days have UPSes. So my guess is that it should be more efficient to get the DC right from the batteries somehow, to avoid the losses of those two extra conversions. So, agreeing with your comment, I assume it would be efficient to have small UPS systems (rectifiers and batteries) per rack (or per small group of racks), t

The relationship can be stated as R = ρL/A, where ρ is the material resistivity (for example in ohm-meters), L is the conductor length, and A is the conductor cross-sectional area. ρ varies with temperature, but as you can see, R does indeed vary with A, and discounting A as you suggest is incorrect.
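That formula can be sketched directly; the copper resistivity value is the usual textbook figure at roughly room temperature, and the cable dimensions are illustrative assumptions:

```python
# R = rho * L / A for a conductor run.

RHO_COPPER = 1.68e-8  # ohm-meters, approximate resistivity of copper at ~20 C

def resistance_ohms(length_m, area_mm2):
    """Resistance of a copper conductor of the given length and cross-section."""
    area_m2 = area_mm2 * 1e-6  # mm^2 -> m^2
    return RHO_COPPER * length_m / area_m2

# Doubling the cross-section halves the resistance:
r_25mm2 = resistance_ohms(10, 25)  # 10 m of 25 mm^2 cable
r_50mm2 = resistance_ohms(10, 50)
print(r_25mm2, r_50mm2)
```

This is why the fat 100A bus bars elsewhere in the thread exist: cross-section is the only knob you have on R once the run length and material are fixed.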

You are getting that backwards. DC can be transmitted farther than AC: DC has only resistive losses, while AC also has capacitive and inductive ones.

I'd summarize it as the following:

- DC is slightly (just slightly) better for transmission;
- AC was easier to convert from one voltage to another (currently, we have the opposite situation);
- AC is better for motors (it was much better, now it is just slightly better);
- AC is easier to generate (it was much better, now it is just slightly better - except for photovoltaics);
- AC is easier on the connectors (high-current DC connectors are hell to maintain).

It is easy to see why AC won. I bet AC would win again just because of the connectors and generators; after all, converting it to DC is relatively cheap. The only problem is the low frequencies we currently use - it would be better to increase them a lot now that we have better materials.

Back when the current wars were happening, there was no good way to convert DC voltages. Edison's model called for lots of local small powerplants to deal with that. AC was easy to convert using transformers. Now DC voltage conversion is easy. Thyristors do the trick nicely.

Because it was hard to convert voltages, you couldn't do HVDC runs unless you wanted it in the home as well.

AC is better than DC for transporting electricity because you can convert between voltages with just a transformer. But in a data centre, when all the equipment will be powered by the same voltage, it makes sense to use one good efficient power supply for multiple computers, so that all the components don't have to be duplicated for each computer.

Unless you want to transmit with lower loss and send more current down the same cable. That's why high-voltage direct current [wikipedia.org] is used for most undersea cables

AC is better than DC for transporting electricity because you can convert between voltages with just a transformer.

Not anymore. The greenies / cost cutting / etc. mean no more xfrmrs anymore. Bye bye to that technology. When's the last time you bought a wall-wart-charged device with a transformer inside it (you'd know - it'd be cubical and heavy)? You have to be pretty old by /. standards to have bought a main desktop computer without a switcher, like the early-1980s pre-PC "home computers"... Ahh, the old Altair with its smoking hot 7805 regulators...

Since you're gonna have a switching power supply anyway... why not ski

DC/DC converters are better than transformers in almost every way. They are lighter, smaller, and cheaper. Also, although theoretically you could create a transformer that loses less power than a DC/DC converter, in practice nobody has done that, so they also waste less power.

They would be even better (on all variables above) if they didn't need to deal with a low frequency AC supply. Either a high frequency AC or a DC one would do.

The last two data centers (Clearwire) I built out were DC. The only AC in the cage was for a video monitor and for the tech's wifi router. Very standard stuff; the telcos have always done it that way. Any bit of Cisco/Juniper/whatever kit can be ordered with DC power supplies. I see DC plants as more the standard now. And yes, they are still built using waxed string.

Since you're gonna have a switching power supply anyway... why not skip the pesky rectifier diodes and feed in a raw couple hundred volts DC? Quite a few PC power supplies work just fine off raw DC on the supposed "AC input"... good luck figuring out which work and which don't without some smoke events.

You can run them on DC, but it's a waste; you still get the diode drop and half your diodes are going to get hot as hell; normally you have a full-wave rectifier where each diode operates at 50% duty cycle. Wit

Easier, not better. To carry the same power with the same losses, you need 1.4x more cross-section in your cables. That's if you have a perfect power factor; with a poor power factor you need much thicker cables. Transformers are also not the most efficient at low frequencies like 50Hz, and they require massive amounts of steel and copper too. Modern switch-mode power supplies (like those used in computers, since transformers could not be made small enough to provide the required power) need to convert the A

While it is true that AC can be voltage converted with nothing more than a transformer, it isn't really relevant: the old school AC transformer units have miserable efficiency and are both heavy and bulky. Basically all modern equipment is going to be using a switchmode power supply that is a great deal closer(in terms of complexity, cost, efficiency, and theory of operation) to a DC-DC converter.

Either way, you get to play the "let's balance transmission losses vs. redundancy vs. efficiency vs. componen

All DC in a large data centre and you'll need whopping thick bits of copper and equipment that would look at home next to a 500MW generator. A rack basis is one thing, and can work well. A big roomful is another.

As opposed to the transformer coming into your building? How about the UPS and HVAC units supporting your server room?

Obviously, you'll have redundant DC power supplies, just like you do now. Except instead of having two AC->DC power supplies per PC, you'll route two room-level DC power supplies to each machine in the room. Lots of little, less efficient, lower quality power supplies replaced by a pair of high quality, high efficiency supplies.

And a big disaster waiting to happen with such large DC currents available on all the busses going all over the room. FYI, telco 48VDC systems addressed the dangers with resistive busses. But that was a huge efficiency loss. They didn't care so much about efficiency back then as all they wanted was a reliable battery backed up system. Making DC efficient is also making DC unsafe, at data center scale. AC is safer on that scale. Then do the conversion to DC at no larger than one rack, and put ride-through (2 minute) backup batteries in each rack (just need to be long enough for slow start generators or maybe a little longer for diversity loading systems so you don't slam the generators with load). I'd have a separate AC distribution system for the generator power and have each (two input) power converter switch over at randomized times over a 2 minute interval.

To you it seems logical, but WHY are large DC currents such a problem? Why are they more of a problem than AC currents 10-20x lower? Short circuit? Same problem in both setups. Electrocution? The higher voltage sounds more dangerous.

I'm more concerned that I convert AC to DC to charge a battery, then convert it back to AC to power a power supply in my machine that outputs DC voltage. (Or, taking the DC battery output and inverting it to AC to run a computer.) Why can't I just run my PC off a battery that's kept charged by a DC current from a single power supply? I mean, I don't need the efficiency of AC for long distance transfer (we're talking maybe 3 feet) so why convert it back to AC?

It doesn't really work that way. A battery charger is just a power supply. When the battery is charged, the charger outputs maintenance voltage and your computer is really running off the charger. When the battery is not charged, the charger puts out charging voltage, and your computer is really running off the charger. When the mains cuts out, your computer just runs off the battery. This is a UPS, as opposed to an SPS, where you run on mains and then switch to the inverter in case of a failure and hope

The simple power supply is great for charging lead-acids. I use one myself. Just float them at 13.5V, they'll be fine. If you want a battery that packs a higher energy density (ie, more than five minutes runtime without a UPS bigger than the computer) you can't use such a simple charger though - all except lead-acids need carefully controlled current, usually with an embedded computer monitoring charge state and temperature to ensure the battery charges, isn't damaged and doesn't explode.

The idea makes a lot of sense, but the problem with either high voltage (500+V) DC or 400V AC is you have trouble getting the fault current down to under 5,000A per (US) code at the plug. Safety procedures are about 5-10 years out for widespread use of high voltage DC adoption in buildings.

As the distance loss for DC is immense (resulting in unwanted heat), it's probably only feasible to actually gain something from shared DC if the supply is relatively close to the servers, i.e. in the same rack or no further than the end of a row. A centralized supply for the whole datacenter will result in a huge waste of energy from transmission alone, far beyond what small less-than-perfect transformers in each server cause today.

Because you've immediately forgotten the concept of redundant power supplies? In a rack of 48 1U computers, that could be 96 AC-DC converters. Or replace those 96 with 2 (or 3, or 4, depending on risk tolerance) big, high-efficiency AC-DC converters. Better efficiency, easier to cool.
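A back-of-the-envelope sketch of that trade-off; the per-supply efficiency figures (85% for commodity per-server PSUs, 95% for a big rack-level rectifier) are assumptions for illustration, not vendor data:

```python
# Aggregate conversion loss: many small per-server PSUs vs a few big rectifiers.

def conversion_loss_w(load_w, efficiency):
    """Watts dissipated in the converter while delivering load_w to the load."""
    return load_w / efficiency - load_w

rack_load_w = 48 * 250  # 48 1U servers drawing an assumed 250 W each

loss_small_psus = conversion_loss_w(rack_load_w, 0.85)  # fleet of commodity supplies
loss_big_psus = conversion_loss_w(rack_load_w, 0.95)    # pair of big rectifiers

print(round(loss_small_psus), round(loss_big_psus))  # watts of waste heat per rack
```

The difference is waste heat you also have to pay to pump back out of the room, which is why the efficiency argument compounds with cooling.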

Thank you. This is why the debate always confuses me. The poster is not exactly trolling. A single AC-DC power converter is a single point of failure, which is bad. Typically you have two, or even three, power supplies on most servers.

In my data center the AC is very clean, redundant, and has diesel fail over. Now if that is considered to be reliable, and as one poster suggested, we could use backup batteries for only a minute or two, why not convert all of the servers and supporting hardware to DC inpu

If the hardware is built for you and designed to all run on one voltage, so it doesn't need a DC-DC power supply just as big as the AC-DC power supply it would have in an AC-powered environment, then it makes sense to run DC. But since PCs still have big bulky power supplies even in a DC-DC environment, to generate the multitude of voltages they're expected to contain, you're not really gaining anything there.

I guess that is a good point, but just how bulky do you think the equipment would be to generate something like 6 different voltages? I am not sure there really are that many different voltages. Most spec sheets I see show 3 different voltages. 12, 5, and 3.3 IIRC.

Most stuff is pretty standard and I am sure manufacturers could get on board at some point.

Do you think it would only be 2U per rack? How much more?

Getting rid of all the individual power supplies gets you back space (pretty valuable) and save

Most stuff is pretty standard and I am sure manufacturers could get on board at some point.

Yeah, it's standardized... on ATX, which uses a big bulky box. But it helps ensure that there will be room in the case for a broad variety of power supplies. As it turns out, saving the space doesn't matter all that much: when you're not putting in more windows just because you have more wall, the cost of keeping the building cool doesn't scale so much with square footage.

Servers are not standardized for power supplies with respect to size. I have seen hot swappable power supplies in quite a few different form factors as well as your standard power supplies in a lot of different form factors as well. They all have a cost in space, components, connectors, etc.

Keeping the building cool is one thing, but I am also interested in density and efficiency in power consumption.

"If you wanted to still make it redundant, you could build a 2U dual high-efficiency AC-DC converter with battery backup. That should be pretty reliable."

umm... you mean... build a UPS PSU?

Anyways, it's pretty crappy to run low-voltage DC over longer distances, and the devices are going to need +12, +5 and +3.3 anyways, so you'll be running more and thicker cables, or you're going to have a PSU at the machine end - some voltage regulator circuit is going to be there anyhow.

If you can afford a rack of 1U servers you don't use redundant PSUs. Well, unless you're stupid. And there's a lot of STUPID out there.

You design tiered load balancing and failover software so no single component is a SPoF.

In the age where Supermicro can spit out cheap 95+% AC conversion with a mostly single-12V-rail mainboard design, we're doing pretty well. The only real thing missing from the "Google" design is the per-machine battery backup.

We're still doing rack cabinets wrong. We still load servers from the cold aisle, but connect all the cabling from the hot aisle. Many datacenters don't do hot aisle capture. Until we switch to wiring servers from the cold aisle and ducting the hot aisle away, we can't get any real heat transfer efficiency.

Huh? - You lost me there. Or maybe we're doing it right after all?

We load servers from the cold aisle and the wiring is in the hot aisle, but the hot aisle is completely sealed off (with doors at the end, of course), and the cooling sucks in air from the hot aisle only and expels the cooled air from both the floor and the ceiling above the cold aisle. This way the cool areas are never too cold and the hot areas don't leak heat to the cool areas. All servers are of course of the type that suck in air from the front an

This is what I always thought made sense. Treat a Rack like a vendor-independent Blade chassis. You'd have rack based DC power supplies, ethernet and fabric switching, all vendor neutral, with standardised connectors, and your 1U pizza boxes are effectively just modular CPU and RAM (ie just like a blade except without proprietary connectors) that you can add and remove as required with minimal extra parts. It could be implemented quite easily, just swap the existing hot plug power supplies for a standard po

Yeah! Err. Oh yeah. Down with DC! Except the single AC supply into the building is already a single point of failure. There is no reason you can't have all the redundancy you have with AC phases / UPS / circuits, and have n redundant efficient PSUs powering m-racks, whatever works most efficiently.

Yes, let's use a big power supply for all computers, so they all share the same exact point of failure.

Eh, depends on the scale of your operation: single computers usually only have one or two PSUs. Blade cages might have three or four, but serving 10+ PCs. If your infrastructure is in the thousands of racks, the savings on redundant power supplies might make a rack-level point of failure acceptable. Depends on what you are running and how much you want to pay for it...

Yes, let's use a big power supply for all computers, so they all share the same exact point of failure.

Hmm, your post modded troll? Somebody was indeed a) clueless about the very real SPOF potential b) abusing their moderator privilege. Let's try a more rational approach: indeed, supplying multiple processors from a single power supply is a potential SPOF. However, M power supplies per N processing nodes would mitigate this at a modest cost in complexity and cabling.

48VDC also means a rather large amount of current. A data center in many cases these days is much, much larger than a telco switching center was (aside from maybe a few trunk points for large cities). They did, in many cases, divide up the electrical systems to avoid high fault currents. But it was well known that the high battery currents involved could be a disaster if there was a short, even on a branch tap into equipment.

The benefit of DC distribution was NOT efficiency. They did use resistors and in some

Most telephone exchanges and related transmission hubs use DC; 12, 24 and 48VDC are standard. This isn't anything new, and data centers have always been space- and power-inefficient; it's the nature of the beast, and the method of construction.

Along with the specialized telecom equipment, a few standard server vendors, including Intel when I was there, have models designed with 48VDC power, along with NEBS-compliant features like - not catching fire.

There was an article on /. a couple weeks ago about using 380 volts in the data center.

Having DC brings some benefits, mainly only needing to step down voltage rather than having to rectify it and smooth the output current with capacitors.

However, there are some downsides:

1: AC power supplies in devices tend to be more tolerant of power fluctuations. An all-DC shop might be completely halted by a power surge/spike that wouldn't bother a data center on AC.

2: DC sparks a lot when connecting/disconnecting. AC has plenty of zero-crossings per second (120 at 60Hz), so it won't make a fireworks show when plugging/unplugging. This makes switches rated for DC a lot more expensive than those rated for AC.

3: There is no such thing as a NEMA 380VDC connector. So, either items would have to be wired up to a bus bar similar to how 48VDC telco stuff gets, or it will end up like 12VDC with at least 5+ connectors (direct wires, cig lighter, airplane, marine connector, male/female combined connector, motorcycle accessory connector, banana plugs.)

4: Safety. 12VDC shocks are annoying; a shock from 380VDC will be fatal, especially because of DC's tendency to make muscles "lock". (This is why stun fences use AC, while kill electric fences use DC, so they can keep the target locked on the wires long enough to get the amps across the heart.)

5: Issues with wire length. With AC, it isn't hard to use a transformer to deal with voltage drop. With DC, that is a lot harder.

All in all, 380VDC seems like a solution in search of a problem. We really don't need another standard. Heck, just the fact that it's 120VAC in the US means I have to double-check whether I'm dealing with 15 amps, 20 amps, 30 amps, or 50 amps, plus the locking versions of each, which means six plug types and minimum wire gauges.

1: AC power supplies in devices tend to be more tolerant of power fluctuations. An all DC shop might completely be halted by a power surge/spike that wouldn't bother a data center on AC.

All this does is require that the power conditioning be done well before it reaches the machines. There will be an AC->DC power supply regardless; it'll just be much, much larger and could probably provide even more resilience than a bunch of smaller power supplies.

So you'll handle it much like most hotplug PC hardware is these days, with latches and mechanical disconnects that ensure + and - are disconnected simultaneously.

You don't want to disconnect both simultaneously. The idea is to disconnect + first and leave ground connected. The voltage across the whole component falls to ground level instead of potentially floating up to + briefly.

This is a different problem from what the GP was talking about: when you hot-unplug a device drawing lots of DC current, it strikes an arc. The arc will continue until it gets too long to be stable, forms a big rainbow, and then extinguishes itself. The distance depends on the v

AC power supplies in devices tend to be more tolerant of power fluctuations. An all DC shop might completely be halted by a power surge/spike that wouldn't bother a data center on AC.

Not so. Essentially you're just removing the rectifier from the power supply, putting it outside, and feeding the same old switching supply indoors. You could design a system that was intentionally more sensitive, but no one would do that on purpose.

or it will end up like 12VDC with at least 5+ connectors

The world seems to be converging on the Anderson Power Pole connector (which I believe is a (TM)). Cheap, high-current, tough, reasonably simple to assemble...

All and all, 380VDC seems like a solution in search for a problem

See the above. Basically you're doing a lot of foolishness to remotely mount the rectifier diode

4: Safety. 12 VDC shocks are annoying; a shock from 380VDC will be fatal, especially because of DC's tendency to get muscles to "lock". (This is why stun fences uses AC, while kill electric fences use DC so they can keep the target locked on the wires long enough to get the amps across the heart.)

While 380VDC is really bad news, the myth that DC is more dangerous than AC is just that: a myth. In fact, AC will induce tetanus more readily than DC and causes fibrillation at much lower currents. (Given typical frequencies; high frequencies will not, due to the skin effect.) i.e.:

The high voltage direct current (DC) electrocution tends to cause a single muscle contraction, throwing its victim from the source. These patients tend to have more blunt trauma. Direct current electrocution can also cause cardiac dysrhythmias, depending on the phase of the cardiac cycle affected. This action is similar to the effect of a cardiac defibrillator.

Low voltage alternating current (AC) electrocution is three times more dangerous than DC current at the same voltage. The lowest frequency for electrical current in the United States is 60 Hertz (Hz) because this is the lowest frequency at which an incandescent light functions. With AC electrocution, continuous muscle contractions (tetany) may occur, since the muscle fibers are stimulated at between 40 to 110 times per second. With tetany, the victim tends to hold on to the source of current output, thereby increasing the duration of contact and worsening the injury.[2]

(http://www.medscape.com/viewarticle/410681_3)

I had a link to the original Berkeley student experiments where they studied tetanus and AC vs DC, but I've lost it. In either case, the results were much as they are reported above, i.e. it takes more than twice the DC current to "lock

AC, DC, it does not make a difference any more. Yes, you have to rectify AC before it powers a computer, but the rectification costs less than 1% of the energy. Power factor compensation can be more costly, but it could be avoided by going to a 3 phase rectifier. There are also serious distribution advantages in 3 phase electricity, but it is not used because of the extra complexity, despite being cheap.

DC distribution is expensive, and 1% gain is just not enough to pay for it. Once we have intelligent grids, the situation may be different, but for now there is just no business case.

The appropriate demo of the dangers of AC data center power will be to show an elephant losing his entire database due to a power failure. Ominous voiceover: "Unlike an elephant... AC-driven data centers always forget!"

Standard -48VDC distribution requires four times the current of 208VAC distribution for the same amount of power. Have you seen the DC cabling at data centers that use it? If we're going to start using DC in data centers, we need to come up with a higher-voltage standard; otherwise we're going to spend all the savings on more copper (which is expensive!) to carry those extra amps.
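The "four times the current" figure is just Ohm's-law arithmetic, ignoring power factor; the 10 kW load here is an illustrative assumption:

```python
# Current needed to deliver the same power at -48 VDC vs 208 VAC.

def current_amps(power_w, voltage_v):
    """I = P / V, ignoring power factor and conversion losses."""
    return power_w / voltage_v

power_w = 10_000  # an assumed 10 kW row of equipment

i_48vdc = current_amps(power_w, 48)
i_208vac = current_amps(power_w, 208)

print(round(i_48vdc, 1), round(i_208vac, 1), round(i_48vdc / i_208vac, 2))
```

The ratio is simply 208/48 ≈ 4.3, and since conductor cross-section scales with current (and I²R loss with its square), that is where the extra copper cost comes from.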

380VDC is still horribly unsafe without proper segmenting. And segmenting costs a lot of efficiency if you start from a single large massive conversion system. You need to segment at the rack level. And then you end up with the double-conversion scenario.

And you don't literally need, or need all of, his product, to make a very efficient AC-based data center.

I am concerned about his brief mention of cooling, which seemed to be based on using a single system. There, I would want multiple redundancy at N(4)+2. The more discrete units you have, the more STABLE you can hold the temperature. The more stable the temperature, the higher the temperature you can run at. UNSTABLE temperatures damage equipment as much as too-high temperatures do.

But 48VDC also means dual conversion. Convert the AC to 48VDC, then do the conversion again with the PSU in each chassis. You have to get both conversions to be very, very efficient to make that worthwhile.

Everything from Cisco can be had with 240VAC. Very little telco equipment these days actually requires a 48VDC power source. And most of that is for telcos, not for web site providers (for example). And where big network providers do need some 48VDC-only equipment, that can usually be put in the nort

Not really. Every switched-mode power supply converts AC to DC, then back to AC (at a very high frequency), and then back to DC (at several voltages). The whole DC bus distribution idea pulls the first AC-to-DC conversion out of every individual supply and centralizes them. This makes it possible to back up the DC bus with batteries. But as others have noted, the high fault energies available on these busses are harder to deal with using common circuit breakers.
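The "is dual conversion worth it" question raised in this subthread comes down to multiplying stage efficiencies, since losses compound through the chain. A quick sketch, where every efficiency figure is an assumption for illustration:

```python
# Efficiency of a chain of power conversion stages is the product of the stages.

def chain_efficiency(*stage_efficiencies):
    """Overall efficiency of conversion stages in series."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# AC -> 48VDC rectifier, then 48VDC -> board voltages in each chassis:
dual = chain_efficiency(0.95, 0.95)

# Add a double-conversion UPS stage in front:
with_ups = chain_efficiency(0.97, 0.93, 0.95)

print(dual, with_ups)
```

Two 95%-efficient stages in series already drop you to about 90% overall, which is why both conversions "have to be very, very efficient to make that worthwhile."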

But 48VDC also means dual conversion. Convert the AC to 48VDC, then do the conversion again with the PSU in each chassis. You have to get both conversions to be very, very efficient to make that worthwhile.

The problem is, you need high voltages. You cannot run 12VDC to every server because you're talking about HUGE currents.

Let's say the server is high-powered and takes, say, 480W. At 120VAC, that's 4A, maybe 5A after power supply inefficiency. 5A isn't a lot of current, and the wires are nice and thin (like they

... convert that AC to DC at a "blade rack". That would be a rack designed to take blades. But the blades would be a mix of

Processor blades (mostly)

Power conversion blades

Battery backup blades

This will safely segment the power, leaving the DC busses limited to the amperage needed for one rack... or even partial rack. It also has the flexibility of balancing power conversion vs. 1st tier power backup (at the point of use). Increasing the backup times to a couple minutes allows slow start generators, which are more reliable.

I would run 416/240 three phase everywhere in the data center (even in North America... transformers for this are readily available). Where equipment isn't on the DC system, run it on 240VLN. The AC/DC converters might run on 240VLN or 416VLL. In countries with 400/230 or 380/220, just use it that way direct.

AC is safer due to the zero crossing. Circuit breakers can break a lot more power (usually 5x the voltage) with AC, as compared to DC. A 380VDC breaker for a rack would be HUGE, especially if it had to handle a data-center level of fault current.

I worked there for 7 years. I'm not going to get into specifics but I will say:

Verari tried to take advantage of the efficiency gains in DC with exotic power supplies, etc... and that company went the way of the dodo after trying to force 800V, 48V, and 12V DC power distribution systems on customer data centers. The fact is, everything already out there (switches, routers, servers, etc.) uses AC-DC power supplies in each unit, and it works in 99% of power outlets with pretty good uptime. The added complexity of running DC infrastructure isn't worth the efficiency gains (which on paper sound like a lot, but theory rarely translates to reality the way we think it will), and when one DC rectifier burns up and takes down a hundred servers (vs. 1 server with an AC-DC supply), customers aren't happy. Between the uptime issues and employee safety concerns (high-amperage DC power is more dangerous than AC for a variety of reasons), it's also a liability nightmare.

Again, I don't feel like getting into specifics but modern datacenters != underground telco installations and DC power distribution has a LOT of challenges that are often overlooked when marketing types start squawking about efficiency gains.

The effort to gain acceptance for DC distribution in data centers is being helped by a series of investments by ABB [datacenterknowledge.com], and the growth of the EMerge Alliance [datacenterknowledge.com], which is trying to unify DC proponents around a 380V standard. The challenge for DC is that customers don't ask for it, meaning multi-tenant facilities aren't likely to offer it.
Also, Schneider says it is "not aware of any data centers moving off of their established, traditional power distribution to DC." In fact, NTT has at least five DC data centers in Japan, and ABB is backing a DC distribution project at a Swiss hosting company [datacenterknowledge.com]. In the US, there are numerous sites testing DC power, which is widely used in telecom infrastructure.

Uh, the article the post links to supports AC more than DC, in case no one noticed. The article is about DC being hyped beyond the facts, and it claims that AC is just as good. Sort of reverses the whole discussion here, making it AD: alternating discussion. Edison gets the carbonized filament.

Well, if you live in Toronto, there's a very good chance that simply walking down the street you could get electrocuted by, well, anything. I'm not actually kidding; they've had a serious problem with live plates and poles all over the city for the last couple of years.

You, man, don't know anything about analog current versus digital. Sorry, go and take this course again. With an A+. The cheapest way to transport electricity from point A to point B is to use, surprise, the "wave" format. The sinusoidal one.

LOL, wake me when you have calculations showing RMS voltage is greater than peak voltage for an AC waveform...

Cost of electricity dwarfs the cost of endpoint components, at least nowadays, so the cheapest way to transport = the most watts through a piece of wire. Watts is volts times amps. The insulation determines the peak voltage. For DC, the peak is also the operating voltage. For AC, the peak is the... peak of the sine wave.

The graphical/intuitive answer is that DC can run at full output continuously, but an AC sine wave only runs full-out for a zillionth of a second at the peak voltage. If somehow magically you made the AC signal

Turns out the equivalent power transfer of an AC wave is the RMS voltage.

Err times the current, yeah. Ugh.

The point is, the "average" of a DC line is... the peak. The "average" of an AC wave is the RMS voltage, which is about 70% of the peak.

I put "average" in scare quotes because the actual integrated voltage of a sine wave is zero, or sometimes the "average" is calculated another way. The number you're looking for is the RMS: root-mean-square. Take a wild guess how you numerically calculate that...
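The root-mean-square calculation described above is easy to check numerically: sample a full cycle, square the samples, take the mean, then the square root. The 170 V peak is roughly what US 120 VAC mains swings to:

```python
import math

# Numerically confirm that the RMS of a sine wave is peak / sqrt(2) (~70% of peak).

def rms(samples):
    """Root-mean-square: square, average, then take the square root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

peak = 170.0  # roughly the peak voltage of nominal 120 VAC mains
n = 100_000   # samples over one full cycle
wave = [peak * math.sin(2 * math.pi * i / n) for i in range(n)]

print(round(rms(wave), 1), round(peak / math.sqrt(2), 1))  # both ~120.2
```

The numeric result matches the analytic value, which is exactly why 120 VAC mains has a ~170 V peak: 120 is the RMS, the DC-equivalent power transfer.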