Although Intel's upcoming Prescott CPU has reportedly been plagued by rather severe thermal issues, it now appears that AMD's 90 nanometer processors may also dissipate over 100 watts. Largely due to electrical leakage, many if not most 90 nanometer microprocessors will reach that mark. AMD's upcoming flagship “San Diego” Athlon FX-55 CPU should be available by next fall, and is expected to be a 90 nm part featuring Silicon On Insulator (SOI) technology. Although SOI is supposed to reduce electrical leakage, it apparently does not have a hugely beneficial impact on power consumption and heat dissipation. Chips that dissipate 100 watts can still be cooled effectively by heatsink/fan combinations, but it isn't clear whether this will hold at higher speeds. As Prescott passes 4 GHz and the Athlon 64 passes 3 GHz, standard heatsink/fan units may no longer be adequate. It is therefore not surprising that semiconductor companies have made the development of multi-gate transistors and high-k dielectric materials an urgent priority. Still, these new materials and technologies might not be ready for mass production until the 45 nanometer generation. If so, electrical leakage at the 65 nanometer generation may be so severe that chip designers will have to reduce clockspeeds to compensate.
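As a rough illustration of why clockspeed reductions help: dynamic (switching) power follows approximately P = C·V²·f, while leakage power is largely independent of clock. A minimal sketch, with invented numbers rather than real chip specifications:

```python
# Illustrative sketch: lowering clock speed trims dynamic power but does
# little for leakage. All numbers below are made-up examples, not
# measurements of any real chip.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Classic switching-power approximation: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

LEAKAGE_W = 40.0   # assumed static (leakage) power, independent of clock
C_EFF = 1.0e-8     # assumed effective switched capacitance, farads
VDD = 1.5          # assumed core voltage, volts

for f_ghz in (3.0, 2.4):
    total = dynamic_power(C_EFF, VDD, f_ghz * 1e9) + LEAKAGE_W
    print(f"{f_ghz} GHz -> {total:.0f} W total")
```

With these assumed numbers, a 20% clock reduction saves only the dynamic portion, which is why leakage-dominated 90 nm parts gain comparatively little from running slower.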

USER COMMENTS 40 comment(s)

Meeeehhhhhhh!(2:14pm EST Tue Dec 16 2003)Where is that darn good 'ol Goat when you need him? – by GoatGirl

Better cooling(2:14pm EST Tue Dec 16 2003)This is a typical response.. and a logical one, just use better cooling methods. Forget about the pride of it being cooled by a simple fan and heat sink and accept the fact that it will take something more to keep it stable. My car engine needs cooling, nuclear power plants need cooling.. CPUs need cooling, who cares what the method is, just make it work for the mainstream consumer and sell it. Package it with your CPU even. Another thought, though, was an idea I had a while back (and it was also a story on geek.com not too long ago) about redirecting the unused energy from transistors and looping it back to a power management system. I would think that as electricity flows through the transistors, for every transistor that returned an off state, it would be like energy hitting a micro-brick wall. And, that when this happens millions of times on a single, small, platform.. it becomes a lot of micro-brick walls. Why not have a redirection at the 'entrance' of a transistor that only connects to the transistor when the transistor is in an off position? That way the power that was 'flowing' along just redirects to another point (power management) and the transistor is still in the off position. Wouldn't this decrease the dissipation by.. a lot? – by bobby

When we hit the wall,(2:32pm EST Tue Dec 16 2003)The best solution is to start using multi-core processors on a single die. Also, liquid cooling should be well-developed—even better than now.

That should allow AMD and Intel some time to work out their power leakage problems while simply increasing the power of their multi-core cpu's. All they have to do is improve the architecture of say, 4-core processors while endeavoring to reduce the wattage dissipation.

How nice would it be to see Intel release a quad-core 1.7 GHz Pentium-M manuf'd on 90 nm process? 4 x 1.7 GHz with advanced hyperthreading = 6.8 GHz!!! A good fan/heatsink combo might be able to cool all the 4 cores if it does not dissipate too much wattage (say, 4 x 35W = 140W over 200mm^2 total).

Any comments, goats (including the wise old one)? – by Affez

bobby,(2:43pm EST Tue Dec 16 2003)About your so-called “micro-brick walls”, it's a great idea. Unfortunately, it's not a practical one today. At least inside the silicon die, given that it operates on controlled electricity as efficiently as possible (on a 90 nm process). Those electrical engineers are always thinking of ways to save electricity and control it as much as possible.

Instead, we could have an external generator that uses heat to provide power. Just slap that special heatsink on, and it re-generates some power back into the CPU. – by Affez

Affez(2:44pm EST Tue Dec 16 2003)CPU math 101: 4 x 1.7GHz = 1.7GHz

It is one CPU core per thread, no more. That means if you play Doom 3 on a 4-core CPU, one core will run Windows, one core will be running Doom 3, and two cores will be idle. That will give you only a minimal increase in speed; someone with a 2.2GHz CPU will be better off.

Some apps are optimized for multi-threading, e.g. Oracle and 3ds Max, but games rarely are.
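DaSpecialist's objection can be put in formula form: Amdahl's law caps the speedup of a multi-core chip by the fraction of work that can actually run in parallel. A small sketch with made-up fractions, not measurements of real games or servers:

```python
# If only part of a program can be spread across cores, total speedup is
# capped no matter how many cores you add (Amdahl's law).

def amdahl_speedup(parallel_fraction, cores):
    """Speedup of a program whose parallel_fraction can use all cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A mostly single-threaded game vs. a well-threaded app, both on 4 cores:
print(f"game   (10% parallel): {amdahl_speedup(0.10, 4):.2f}x")
print(f"server (90% parallel): {amdahl_speedup(0.90, 4):.2f}x")
```

The 10%-parallel case gains only about 8%, which is the "one busy core, the rest idle" situation described above; the 90%-parallel case gets roughly a 3x boost from 4 cores.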

As for the power leaks, they'll fix 'em soon, Intel and AMD, I hope. – by DaSpecialist

Re:Affez(2:47pm EST Tue Dec 16 2003)“Instead, we could have an external generator that uses heat to provide power. Just slap that special heatsink on, and it re-generates some power back into the CPU.” This wouldn't stop transistor failure from overheating, though. Instead, the heat from the failed transistor would rise to the special heatsink that converts heat into energy. I still think that getting rid of heat has to be done on a microscopic level (regardless of method), side by side with the transistors. – by bobby
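For what it's worth, Affez's heat-recovering heatsink also runs into a hard physics cap: any heat engine recovering waste heat is bounded by Carnot efficiency. A back-of-envelope sketch (the die and ambient temperatures below are assumed, not measured):

```python
# Any heat engine recovering CPU waste heat is limited by the Carnot bound:
# eta <= 1 - T_cold / T_hot, with temperatures in kelvin. With a die only
# modestly hotter than room temperature, most of the heat is unrecoverable.

def carnot_limit(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

cpu_k = 273.15 + 85.0    # assumed 85 C die temperature
room_k = 273.15 + 25.0   # assumed 25 C ambient
frac = carnot_limit(cpu_k, room_k)
print(f"At best {frac:.0%} of a 100 W chip's heat is recoverable")
```

Real thermoelectric generators fall well short of even this ceiling, which is one reason heat recovery at the heatsink does not look practical for CPUs.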

Re:CPU math 101(2:49pm EST Tue Dec 16 2003)Right, there wouldn't be a big performance increase using current software. But current software isn't designed for the concept of multicore either. Just redesign the software to divide the computations between cores correctly and you have an effective multicore system. – by bobby
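bobby's "divide the computations between cores" idea looks roughly like this in practice: chunk the input, let each worker process one chunk, then combine the partial results. A minimal sketch (the squared-sum workload is just a stand-in for real per-core work):

```python
# A minimal sketch of dividing one computation across cores: split the
# input into chunks, process each chunk in a separate worker process,
# then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def chunks(data, n):
    """Split data into n roughly equal contiguous chunks."""
    k, m = divmod(len(data), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < m else 0)
        out.append(data[start:end])
        start = end
    return out

def partial_sum(chunk):
    return sum(x * x for x in chunk)   # stand-in for real per-core work

if __name__ == "__main__":
    data = list(range(1_000_000))
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks(data, 4)))
    print(total == sum(x * x for x in data))   # same answer, computed 4 ways
```

The catch, as the thread notes, is that the software has to be written this way in the first place; a single sequential loop gets no benefit from the extra cores.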

True, games really don't use multi-threading YET. You know why? How many home users have a multi-processor machine? Not too many, and up until the last few years, multi-threading wasn't in the house either. They are (for the most part) used in business servers and such. My point is, why code for multiple processors if 3% of the market will use it? – by baba booey

Goat Guy Long Predicted This(3:45pm EST Tue Dec 16 2003)He, years ago, would talk about the issues with just die shrinks and how we'd hit the heat barrier quicker than anticipated.

Amazing how people ravaged Intel for their 120W dissipation for Prescott, but they are very mum about 105W for AMD. – by Manu

RE: Manu(3:51pm EST Tue Dec 16 2003)that is because AMD actually tries to make the chips better, instead of just upping the clock like intel does. – by booger

RE: Manu (4:13pm EST Tue Dec 16 2003) Aye Manu, they are mum on this for reasons we both know.

AMD is puny by comparison and it's like watching a Rocky movie.

Quite frankly I applaud AMD, as to all intents and purposes they should be gone, but they're still here scrapping it out and landing some smarting hits while they're at it. Gotta respect that, at least on an instinctive level.

China is about to create their own chips (they're not the only nation on this) and OS that may, in the big picture, make America Inc. work harder to compete.

If America wants to keep their position they have to change from a culture “comfortable with mediocrity to a culture of respectable Alphas” while delegating other tasks to other nations (I probably sound a bit like Sander, but hey:).

I'm looking forward to the next 50 yrs unfolding as I expect a virtual evolution is afoot. – by Watcher

The issues of transistor gate leakage (and the corresponding thinning of the dielectric (insulator) separating the gate from the transistor's conduction channel) as process dimensions diminish are becoming critical. Can't remember where I saw the graph, but less than 10% of a chip's power went to “waste” in the old 0.18 micron days. Now at 90 nanometers, it is exceeding 35%. At the 65 nanometer level, it is expected to be about 55%, and at 45 nanos, we may well be chagrined by 80%+ … unless something in the industry changes.

Alas, there aren't many things coming down the turnpike that stand to correct (or better – reverse!) the trend. Hi-K dielectrics are going to afford some measure of relief (taking the 45 nanometer node, and reverting it to 35% losses, from over 80% without), gate-on-3-sides and gate-all-around techniques may well allow efficient conduction at twice that or better.

Unfortunately, it all rather rapidly ratchets back up again… due to another nasty – conduction channel dopant implant density fluctuations. Turns out that there is a fair amount of statistical variance on nanometer scales in the number of dopant atoms that sit next to each other in a conduction channel. Think of this analogy: your car's windshield is the channel. By itself, it conducts no electricity. As raindrops land on it, eventually there are enough of them to form conductive bridges. When the windshield is totally wet, conduction can grow no higher. It is the PARTIAL covering of that windshield which is equivalent to 'doped silicon'. Semiconducting.

OK – now that you've got the visual of a partially raindrop covered windshield, what happens when you keep the drops the same size, but only consider a portion that is the size of a credit card? Will any given credit-card's worth of windshield have enough drops to be a 'proper' semiconductor? Now think of an area the size of a large postage stamp. Etc.
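GoatGuy's shrinking-windshield analogy can be quantified: dopant counts follow roughly Poisson statistics, so the relative fluctuation in a region holding an average of N atoms scales like 1/√N. A short sketch (the atom counts below are invented round numbers, not real process data):

```python
# Poisson counting statistics: a channel that holds N dopant atoms on
# average sees a standard deviation of sqrt(N), so the *relative*
# device-to-device variation grows as channels shrink.
import math

def relative_fluctuation(mean_atoms):
    """sigma / mean for a Poisson-distributed count: 1 / sqrt(N)."""
    return math.sqrt(mean_atoms) / mean_atoms

for n in (10_000, 1_000, 100, 10):
    print(f"{n:>6} dopant atoms -> about ±{relative_fluctuation(n):.1%} variation")
```

A channel with 10,000 atoms varies by about 1% from device to device; one with only 10 atoms varies by over 30%, which is the "credit card vs. postage stamp" problem in numbers.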

Getting back on track – the issue is that for semiconduction to happen in shorter and shorter channels, it also needs to happen in DEEPER channels, so that the local surface variation of dopant atoms averages out. That is one of the reasons why 'gate-all-around' designs are being pursued with maximum expediency: they offer a 4x to 8x volume of conduction-channel atoms in which to work. This in turn allows somewhat thicker (less efficient) dielectrics having WAY better insulating properties, and so on. All good, provided they're tuned up well.

In any case … what is becoming evident is that below about 35 nanometers or so, there is an ARMY of issues that in the end are all related, yet insidiously independent of each other. Fix one, and you don't have the others addressed, even though all are related to the 'shot noise' distribution of atoms on ever-smaller dimensionality scales. Yet too … if we are to believe that the inverse of Moore's Law is the real observation, then dimensions are shrinking by 8x every decade. It was only in 2000 that 250 nanometer was transitioning to 180 nanos. Somehow, I doubt that in 2010 25 nanometer features will be working toward 18, or that in 2020 – 3 nanometer features will be shrinking towards 2.5…
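For the curious, the "8x per decade" arithmetic can be checked directly. Starting from 250 nm around 2000 and dividing by 8 per decade gives numbers in the same ballpark as GoatGuy's round figures (his 25 nm and 3 nm are slightly more aggressive than a strict division yields):

```python
# Extrapolating feature size under the "dimensions shrink 8x per decade"
# reading of Moore's Law, anchored at 250 nm in 2000.

def projected_nm(start_nm, years, shrink_per_decade=8.0):
    """Feature size after `years`, shrinking by `shrink_per_decade` each decade."""
    return start_nm / shrink_per_decade ** (years / 10.0)

for year in (2000, 2010, 2020):
    print(year, round(projected_nm(250, year - 2000), 2), "nm")
```

The strict extrapolation lands near 31 nm in 2010 and under 4 nm in 2020, which is exactly the regime where the shot-noise and leakage issues above are expected to bite.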

I just don't think that the tech is going to make it there. Just as internal-combustion engines cannot be increased in efficiency beyond the limits imposed by both thermodynamics and the Carnot cycle, in the same way we cannot expect to control “Maxwell's Daemon”: the statistical nature of atomic motion, atomic stochasticity, and quantum tunneling.

The notion of sending up “3d” structures is great, but in many senses, the “upper floors” will need to be like giant bedroom-communities of infrequently powered devices. Memory sounds good. Alternately, we may well find ourselves going “retrograde” heavily in order to capitalize on 3D: working back toward 250 or even 500 nanometer structures (but in 3D), where losses are SO low, and efficiency SO high, that things can be deeply stacked (thousands of layers!), without generating the heat of the surface of the sun.

– by GoatGuy

Man(4:45pm EST Tue Dec 16 2003)Goatguy, are you a rocket scientist? Because you could/should be one. Not only does everyone here praise you for all the info you give us, you actually wrote something WD won't reply to :) – by DaSpecialist

IBM IS IN THE LEAD!!!!(5:22pm EST Tue Dec 16 2003)Appleinsider reports that IBM is currently producing 90nm G5s in volume at speeds of 2GHz, 2.2GHz, 2.4GHz and 2.6GHz.

Apple is expected to announce these new G5 processors in speed bumped PowerMacs at the January MacWorld Expo according to one source at Appleinsider.

This would be consistent with previous rumors and whispers that the low-end PowerMac would become a single 2.0GHz G5. – by MingIsBack

Yes, I'm very hopeful that this marks a new beginning in performance increases on a very regular basis, but I'm going to have to wait and see for sure.

– by geekTheBarbarian

RE: GoatGuy(6:10pm EST Tue Dec 16 2003)What you're saying is, as usual, right on the money. However, I think you left out one of the logical follow-on consequences of this impending transistor doom. We keep shrinking processors and giving them higher clock speeds, but that's got to stop. What we need is not *faster* processors but *more* processors. And for that, we need smarter software. And for that we need smarter compilers.

I keep saying it over and over again: the secret to speed for the next decade or two is going to rapidly become more efficient, more flexible software. Processors have made huge leaps in technical ability, but software is still more or less being developed as it was thirty years ago. Intel's HT and AMD/Sun's flirtation with multicore chips point to the future.

I, for one, welcome our heir-apparent SMP overlords. Wouldn't it be nice to buy a motherboard with 2, 4, or 8 CPU sockets on it, populating them with whatever number of CPU's suits you at the moment? Forget junking everything at upgrade time, just go from 2 CPU's to 4 CPU's. Opteron has shown near-linear scaling using HyperTransport, and no doubt Intel will sooner or later figure out that the GTL+ bus is dead as…well, dead.

But all of this is dependent upon SMP-aware software. There isn't much of it now, but I'm betting that's going to change. The semiconductor boys are going to continue to make progress, but ultimately they're going to price themselves out of the market. Fabs already cost billions to build on top of the billions in R&D spent in order to know how to build them. As transistors get smaller, things get exponentially more expensive. Expensive things mean smaller production runs (due to fewer buyers), which means more expense (due to lack of economies of scale), and so on. It's a self-reinforcing loop.

No, faster CPU's are not the answer no matter how many fans, heatsinks, waterblocks, or gallons of Fluorinert you throw on them. We're going to see lots of effort aimed at improving memory technology, caching technology, system busses, and more processor-efficient software. – by J. Eric Smith

IBM 90nm dissipates(6:22pm EST Tue Dec 16 2003) “Preliminary tests conducted earlier in the year on a 2.5GHz PowerPC 970FX G5, built around the 90nm process, showed the processor to dissipate 62 watts. For comparison, a chip of equal clock frequency, which was manufactured on IBM's current 130nm process, dissipated a considerable 96 watts.

Among its many advancements, the PowerPC 970FX will also boast a feature called 'PowerTune,' which allows for rapid frequency and power scaling, and features electronic fusing.

The chip was officially taped out in November, and is reportedly being manufactured on an SSOI process (Strained Silicon on Insulator). “Unlike Intel, IBM has kept current leakage to a minimum using SOI, and using SOI on strained silicon will reduce current leakage by a further 15%,” sources told AppleInsider last month.”

– Appleinsider

– by somebody

Dreamin'(7:20pm EST Tue Dec 16 2003)Regarding J. Eric's comments. Smarter software requires more cycles, reducing the efficiency and the marginal returns, to extend his economies. As the onboard logic must make more decisions it becomes less efficient, increasing misses per hit in a disproportionate ratio. Ultimately I think that his intentions are correct; we are at a point where we can almost afford to take a step back in order to go forward, and not rely solely on hardware to move us forward.

It would be nice to see more creative endeavour exercised towards improving the internal structure and communications spread across multiple logic units. If nature were to be our model, let's look towards the plasticity of natural systems. On a purely 'wishful thinking' note – wouldn't it be nice to see physical infrastructure (not the physical parts but their directed instructions and comms) reform to the task at hand? And I mean far beyond the prefetching, predicting and deep pipelining. How about systems that can distribute effort among the core components most effective for resolution? We won't get there until we start to look in that direction. Most desktops (and servers for that matter) burn billions of cycles in idle time.

I must agree with the venerable Goat: the internal combustion engine is limited, but there is still constant improvement in mass v. force on an almost annual basis due to ongoing R&D in a highly competitive field. Perhaps what we need is more competition in the arena. Next life perhaps. Best regards. – by mk

Yes, JES(7:24pm EST Tue Dec 16 2003)Just “for the record”, I want y'all to understand that parallelism, however it is instantiated, is the obvious way to get “more out of the architecture” than simply higher clock rates on single threads.

And for the vast majority of code out there, the 'terminal dependency' of the code isn't dependent on a single logical thread sequentially evaluating the data. The majority of code (rather like the multitiered parallelism charter, above) offers quite a bit of parallel-computing opportunity, at more than one level.

Point is — and trust J. E. Smittster, I'm singing a cappella with the choir, not up in the belfry — that the whole symphony of “parallelism” isn't limited to just “multiple cores”. There are a lot of other levels to the technique that afford their own scaling opportunities, their own measurable, palpable performance gains, under some quite broadly defined circumstances.

— — — — —

That being said, I also carry the opinion (vision? who knows) that multi-core chips are essentially inevitable. They are as inevitable as cache-in-the-CPU became, only 15 years ago. (Anyone remember buying the special “cache” memory chips for the 80386 motherboards, that would speed them up some 25%?) Within that same opinion is the feeling that we're going to see systems that sport “handfuls” of CPUs – rather like how current computers stop at only 2, 3 or 4 memory module slots. (No, there is no particularly good reason why memory chip sockets stop at “4” … if the industry had decided that 16 was the right and proper number, then the modules would have been designed differently to account for the additional parallel impedance loading and signal propagation. We would be at DDR/400 today with 16 modules just as easily as today we're at DDR/400 with 4 slots.)

So, there will be a couple, or as many as 4 slots in the computers of the next “go”. Maybe. Since the demand won't really be there, and since there's SUCH an egregious gap between CPU speed and memory/interprocessor bandwidth growing, there are going to be a lot of advances in MoBo's, in chipsets, in interprocessor communication, in memory architectures, in busses — even as the MHz and power consumption of single CPUs begins to top out (or depart significantly from Moore's Law). Stability here is going to be physics limited, economics designed, pragmatics metered. That's “just the way of things.” [However, not to sound overly pessimistic, I see a real renaissance in people wanting to “go to work”, to be part of the ultra-networks that can only work in the relatively close confines of a building to allow transparent, distributed processing of the more data intensive applications of tomorrow.]

Well goats, gotta run. But parallelism is as inevitable as caching, which was as inevitable as out-of-order execution, which was as inevitable as breaking the one-instruction-per-cycle paradigm, as was on-core virtual memory calculations, as was the ubiquitous incorporation of full floating point on chips, as was the combination of all those separate logic elements into the thing called endearingly “the microprocessor”.

– by GoatGuy

Anyway(7:45pm EST Tue Dec 16 2003)They said it MAY dissipate 105 watts. That doesn't mean it will dissipate 105 watts at all times, does it?

Well anyway, it's less than what Prescott will dissipate!! – by BillyGoat

Who cares??(7:49pm EST Tue Dec 16 2003)Besides, no one will care if they dissipate 105 watts because Prescott will have the same problems.

My point is: Who cares?(Besides the chip cooling companies) – by BillyGoat

Sorry guys I have to gloat(7:49pm EST Tue Dec 16 2003)How many times have I BASHED AMD's SoI???

Then how did I justify it???

Now WHO is right?

Let's be serious: PD SoI only works down to 130nm chips; this was predicted and documented by Intel.

Now AMD went against Intel's word, invested how many millions in IBM's SoI process, spent how many millions on its own SoI R&D, and now they are looking at spending millions more to upgrade to a newer SoI process, spend more on better wafers with a fully or more depleted substrate, and lose time, when they had a chance to actually have a major weapon against Intel to turn the tide in their favor. – by Nataku

heat(7:50pm EST Tue Dec 16 2003)If heat dissipation actually cools off demand for all these 90 nm chips, that should drive the development of techniques to minimize this effect. Otherwise once the wave of price cuts comes around, all this talk of avoiding heat may be considered “impractical”. – by fc

With a seemingly endless (yes, they keep saying we're running out) amount of fuel we had no need to make them dramatically more fuel efficient… we had more oil than we knew what to do with…

This has changed somewhat due to 9/11 and not surprisingly you see the world trying to find alternatives. When there is a real need then you end up finding alternatives.

There is a real need for ever faster processing systems. It's not like a car … and if you look at jets and rocket engines… you will see that when we needed faster engines for specialized purposes we created them — but not for people in their cars going to grandma's house for the weekend. The tried and true internal combustion engine was just fine for them.

Are we moving faster? Solar sails, nuclear-propelled spacecraft, and other examples seem to point toward an answer in the affirmative.

They are the successors to the internal combustion engine. Just as there will be successors to silicon. And yes, we will use those petabytes of RAM. And the scientific community will use still more!

For what? Games, film, and all other manner of entertainment we have yet to imagine. Simply because it doesn't fit into our model of today's word processing and instant messaging doesn't mean it won't fit into tomorrow's model which may contain holographic TVs, photoreal massively multiplayer games, etc.

Yes, you rightly point out the limitations of the internal combustion engine, but if you look toward the sky you'll see that it was not really a limit after all.

The limit was our imagination. – by LV

re: Nataku(1:19am EST Wed Dec 17 2003)You're still here? I haven't seen a comment from you for quite a while…. – by Next362

Next362(2:39am EST Wed Dec 17 2003)I've been around, making the occasional comment, but the average poster isn't as fun as the debates we used to get into.

Like my rant about AMD's board setup. Remember the debates that AMD was the best deal because of the life of Socket A, and that it was cheaper at each price point? But now AMD is following Intel's idea: lock users into one platform and change it so you don't have all the legacy support problems and the crap that comes with it. Remember AMD fanatics bashing Intel for it, saying they would never buy from a company like that? But now that AMD does it, it's fine, nothing wrong with that.

All the AMD fanatics were praising how cool the FX processors run on the exotic 130nm SoI process that AMD delayed Hammer for months/years for, that cost them millions, and then they had to bring in IBM just to find out that Intel, who told them PD SoI was pointless past 130nm, was right. Now they are stuck in a worse spot than Intel, with a larger, hotter, more complex, expensive chip that they have to fix, which isn't going to help them stay profitable.

But no one wants to listen to logic: AMD isn't going to have a high-end 90nm chip in mass production 'til 2005. How do I know? Intel had already been making 90nm SRAM cells and other devices in mid-2002. Now AMD is going to set up and perfectly deploy a massive launch of an ultra-complex and massive chip on 200mm wafers, in volume, in 6-9 months, with only 1/10 the money Intel spent? Nope, not gonna happen. – by Nataku

Appreciative reply.(3:23am EST Wed Dec 17 2003)Thanks LV for the reply [I can't quite tell whether it is directed at me, or at the other posters here, but what the heck – I threw in the I/C engine, so I might as well respond].

You are right, of course – that there were, are and remain limits to the internal combustion engine – and there are practical sweet spots that determine how much power an engine should have for differing purposes. That wasn't my point. The point is that the good ol' piston engine peaked out at a certain fundamental efficiency, for an “operating point” design, that hasn't been appreciably changed even with all the advances made since the 1920's… yes, engines now have WAY superior valves, materials, RPMs, fuel delivery systems, computer monitoring, and so on. But thermodynamics and straight economics determine what can be and is employed for a given purpose at a given time.

My analogy of the I.C.E. was to illustrate the principle that some processes, some physics cannot be overcome, no matter how hard one tries — IN that frame. Your point is that as new technology is needed to radically change the power-delivery, things are invented to accomplish it. Again, I agree with that principle. After all, it wasn't until the advent of the jet engine that faster-than-sound flying machines could be made, and you don't see many piston-powered rockets whizzing about, do you?

But where those analogies fail to overlap with the 'computer of the future' (and specifically this article) is that by your argument, we certainly could “use the power” today, and for many applications we need that power critically. The tech hasn't materialized, although there are intriguing possibilities that have been discussed at length. Maybe [as in the 1930's] the tech will have to mature to its natural limit before the alternatives can get a toe-hold and begin to be developed. Hard to say. Probably so, if history is a good predictor of the future.

The future looks very bright indeed. I see a slowing of the rate-of-increase of the silicon-based computing systems coming (and some have argued that Intel itself has fallen off of Moore's curve). I also see a lot of high-potential next-generation technologies that stand darn good chances of letting “computing” move forward at a rapid clip.

To the future!

– by GoatGuy

Correct me if I'm wrong(8:42am EST Wed Dec 17 2003)Don't these companies dip their CPUs straight into liquid nitrogen when they are testing how far they can overclock them? Why not develop some kind of liquid nitrogen gel pack that surrounds the CPU in a small amount of nitrogen?

I suspect LV's point is that the microprocessor will evolve into whatever we need, to fit the need… like Bruce said, “be like water my friend” (think outside the box that is the present-day MP world)… that's true evolution.

Considering LV's comment I have to wonder what possible dramatic shifts might replace the microprox in the future… maybe organic living nano-tech (we'll have come full circle back to our own beginnings). How funny is that? – by Watcher

Sorry GG(10:50am EST Wed Dec 17 2003) just finished yer post.

If we look at when tech made significant wormhole-like jumps we'll notice it was usually during life and death situations ..like WAR.

Right now America is saying they intend on openly challenging China's possible future space dominance by setting up a “permanent presence on the moon”, and colonizing Mars is the obvious next move that China has been “bold enough” to actually say point blank.

This will spur some sweet toyz. Consider… I remember when I had my 80386 (with math coprocessor) and my buddies told me I NEEDED the new-fangled Pentium. I said, please, I don't see a difference.

But when I got one and had to go BACK to the older sys… IT was painfully slow and unacceptable for what I'd gotten accustomed to (I know there is plenty of room for attack, but you get the point).

Listen, if I could get a holodeck of my own for a couple hundred bux… believe me, there'd be a line-up “today” that might go border to border. Our minds are like black holes for information/evolution, and the hunger only gets more intense under harsh circumstances. – by Watcher

Nataku(10:07am EST Thu Dec 18 2003)Ahem, except for the fact that Intel has said that they are considering SOI for the future. I suppose only strained silicon matters, since Intel will use it instead of SOI, compared to AMD, at .09.

Someone just posted that IBM is having success with SSOI at .09, but I suppose that can't be true, because Intel hasn't said it.

You're selectively picking and choosing as well, so let's not make believe you've made some major points.

As for MB pin count consistency, AMD as well as Intel makes advancements in technology all the time, but the difference is forced upgrading versus choice. And I hardly think it was by design on Intel's part. Legacy problems can be a concern if the user isn't aware of the performance given up by not upgrading to a new MB. But if the user is fully aware, then his cost of only upgrading the CPU is far less.

Don't know about you, but choice is always a good thing. And if you don't want legacy problems, get a new MB. I mean really, you're making it sound like there isn't an upgrade path because of pin consistency/compatibility. – by Who Cares

RE: Correct me if im wrong(9:53am EST Sat Dec 20 2003)Nice idea, but the point of liquid nitrogen as a cooling aid is that the LN has to evaporate in order to cool. So if you have a nitrogen gel cooler for your CPU, make sure you have plenty of it, 'cause you will be replenishing it full-time :P