It has been a while since we saw the first factual proof of the existence of Fiji, the next-generation flagship AMD GPU that will be the first graphics card to come with High Bandwidth Memory (HBM).

Shortly after the Consumer Electronics Show in Las Vegas in early January, we heard that Oculus had the latest AMD hardware, and a few weeks ago Tom's Hardware confirmed it. The publication saw a demonstration of "Showdown running on the Oculus Rift Crescent Bay, being powered by an unannounced Radeon R9 flagship ultra-enthusiast product."

The HTC / Valve Vive demo at Mobile World Congress 2015 was carried out on Geforce GTX 980 cards. Our sources informed us that HTC and Valve have a Fiji-powered demo too, but they didn't want to show it. As you can imagine, there is a Titan X version of this setup as well.

Dual GPU hints dropped along the way

However, AMD raised our suspicions about the Fiji design when it announced the LiquidVR SDK and stated outright that you need two GPUs. The slide below implies that you need Affinity Multi-GPU rendering in order to reduce latency and increase content quality.

"Affinity Multi-GPU for scalable rendering, a technology that allows multiple GPUs to work together to improve frame rates in VR applications by allowing them to assign work to run on specific GPUs. Each GPU renders the viewpoint from one eye, and then composites the outputs into a single stereo 3D image. With this technology, multi-GPU configurations become ideal for high performance VR rendering, delivering high frame rates for a smoother experience. "

This is how AMD ended up with 8GB of HBM1 memory

So, when Fudzilla wrote that Fiji would ship with 8GB of RAM, we didn't yet realize that we were talking about two separate GPUs, each on its own interposer with 4GB of HBM1 memory. That is how AMD got to 8GB, or rather two times 4GB, for this card. It makes much more sense now, and of course we would not be surprised to see Fiji for notebooks and lower-end desktop products in a single-GPU configuration.

AMD is betting big on Virtual Reality (VR), and we have been aware of this since late 2014. The company knows that Nvidia is putting a lot of effort into VR research and development, so AMD wants to beat Nvidia in this emerging market.

Competition heating up in VR segment

Since there are at least two big players coming this year with retail products, Facebook-owned Oculus and Valve / HTC, both Nvidia and AMD want to end up selling more of their GPUs for these kits.

We tried Crescent Bay last week and we have to admit that the latest VR goggles from Oculus are better than the Developer Kit 2, popularly known as DK2. We tried the Oculus DK2 back in September 2014 and again in January 2015 at CES, but it made us feel dizzy, and overall graphics quality was quite poor despite running on high-end Nvidia hardware. It all came down to the quality of the glasses, lenses and technology that Oculus used.

Crescent Bay makes you less dizzy and the picture quality is better, but we are not sure whether we would be comfortable wearing it for more than a few minutes, simply because we didn't have enough time with it. We still haven't had a chance to try the Valve / HTC virtual reality kit, but many of our distinguished colleagues have, and they all claim it is better than the Oculus Crescent Bay.

Oh, one more thing: AMD is already working hard on a next-generation High Bandwidth Memory (HBM) card built on a process smaller than 20nm. We still don't know which node it will use, or which type of memory (HBM1 or HBM2).

Gainward’s GTX 690 graphics card is based on Nvidia’s reference design, just like every other air-cooled GTX 690 out there. Nvidia openly stated that it doesn’t want the card’s air cooler replaced, although it did allow EVGA to strap it with water cooling. You can already find many different water blocks, as well as a few air coolers, such as the Arctic Twin Turbo GTX 690, which launched a few days ago.

We’ve already tested the GTX 690, but this time around the card performed slightly better, not due to overclocking but thanks to newer drivers.

The GTX 690 uses the GK104 GPU, which is part of the Kepler family. The base GPU clock stands at 915MHz, which is 9% lower than the GTX 680’s base clock (1006MHz). In fact, this is the only difference compared to the GTX 680’s GPU; the rest of the specs reveal that the GK104 used on the GTX 690 is identical to the one on GTX 680 cards. There’s no difference in memory either, as the bandwidth is identical: the memory runs at 6008MHz (GDDR5) and each GPU has 2048MB of GDDR5 at its disposal.
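For those keeping score, the clock deficit quoted above works out as follows; a quick sanity check in Python:

```python
# Sanity check of the GTX 690 vs GTX 680 base clock figures quoted above.
gtx680_base = 1006  # MHz
gtx690_base = 915   # MHz
deficit = (gtx680_base - gtx690_base) / gtx680_base * 100
print(f"GTX 690 base clock deficit: {deficit:.1f}%")  # ~9.0%
```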

The packaging that Gainward used for GTX 680/670 cards is already quite bulky, so there was no need to make a bigger one. The design is solid and will keep the card safe. Additionally, Nvidia’s decision to put the cards’ names on the sides made shopping for a new card much easier.

The box holds a single 2x6-pin to 8-pin power cable, a driver CD and a Quick Start Installation Guide. The CD includes the EXPERTool graphics card management utility, and a good part of the Quick Start Manual is reserved for it. Naturally, we’ll discuss it in more detail in our OC section.

Gainward GTX 690 is a spitting image of the reference card. When it comes to this particular cooler, Nvidia has every reason to be proud and if there was a graphics card beauty pageant, it would definitely rank highly. However, the GTX 690’s cooling deserves praise for much more than its looks – it’s efficient and isn’t loud. No wonder Nvidia likes to show this card off so much.

The cooler is two slots wide. On top of it you’ll find the “GEFORCE GTX” logo, and the LED under the logo glows when the PC is on.

The fan is enclosed in a magnesium bracket. Not only does this improve the looks, it also offers better thermal and acoustic properties than the more commonly used plastics and aluminum.

It’s obvious that the cooler received special treatment and every single inch of it breathes style. The side panels are made of polycarbonate and let users peek into the heatsink. The cooler actually has two separate heatsinks. The fan is placed in the middle, so as to cool both heatsinks at the same time.

Both heatsinks sit on nickel-plated vapor chambers. Vapor chamber technology makes compact coolers possible, and the GTX 690 is a perfect example.

The two GK104 graphics chips are linked via a PCI-Express 3.0 bridge chip (a 48-lane PLX design). As for power, Nvidia used a 10-phase supply for the two GPUs and two phases for the memory. The PCB is made of ten layers with 2oz of copper per layer. All the components are low-profile, so the cooler can sit on them comfortably, decreasing turbulence and noise.

The card uses Samsung GDDR5 memory. The chips (model K4G20325FD-FC03) are specified to run at 1500 MHz (6000MHz GDDR5 effectively).

The complexity of the design is evident from the densely populated back of the PCB.

The GTX 690 has four video connectors, and all of them can be used simultaneously. The card has plenty of power to run 3D on three displays. All three DVI connectors are dual-link capable, and two of them are DVI-I, which means they can also drive an analog VGA display. HDMI audio is handled by the GPU itself, and the card comes with a DVI-to-HDMI dongle.

The GTX 690 comes with a single SLI connector, meaning two cards can be paired for quad SLI.

The Gainward GTX 690 is based on the reference design, and although it won’t allow for more extreme overclocking, you can get some extra performance with little to no effort. Our review sample allowed us to up the GPU base clock from the reference 915MHz to 1015MHz, an 11 percent overclock. The highest clock we hit after our overclocking was 1176MHz. The memory cooperated as well, and we upped its clock by 250MHz (1000MHz GDDR5 effective). Together this translated to up to 16 percent higher gaming performance.
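For reference, here is how those percentages fall out of the clocks we measured:

```python
# Our overclocking results, expressed as percentages.
gpu_base, gpu_oc = 915, 1015          # MHz
print(f"GPU overclock: {(gpu_oc / gpu_base - 1) * 100:.0f}%")     # ~11%
mem_base, mem_oc = 6008, 6008 + 1000  # MHz effective (+250MHz real)
print(f"Memory overclock: {(mem_oc / mem_base - 1) * 100:.0f}%")  # ~17%
```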

Of course, we took Gainward’s EXPERTool out for a spin, as it now supports Kepler chips. Unfortunately, we could not get readings from both GPUs at the same time, but overclocking is easily done from the panel of the same name.

Nvidia is pleased with the reference air cooling and so are we. In short, it’s better than any high-end dual-GPU reference cooling solution we’ve tried. At the same time, the cooler doesn’t get too loud, not even when the card works at full throttle, which is always a plus, especially for a card of this caliber.

The fans won’t get loud in AUTO mode, as their RPM does not exceed 2300. Only when you go beyond 2600RPM will you clearly hear them from inside your rig. The card remains almost inaudible when idle. To put things in perspective, the GTX 690 is about as quiet as a single GTX 680, but definitely quieter than two GTX 680s in SLI.

We didn’t have to speed up the fans manually for our overclocking, but we did it anyway to see how much the temperature would drop. Running at 1170MHz in AUTO fan mode, the GPU went up to 84°C. We noticed that the fans weren’t spinning significantly faster than at reference clocks, which explained the higher GPU temperature.

Once we maxed out the fans, we measured 77°C. Note that the maximum allowed RPM, which is 2900, made the fans quite loud, but not what we’d call unbearable.

Power Consumption

Consumption is significantly lower, as much as 90W below what a GTX 680 SLI setup would draw.

Nvidia did a great job, and Kepler has proven its dominance over the Fermi generation in both power consumption and thermals. These two advantages in particular are what made it possible to build a dual-GPU card that’s quieter than all the earlier dual-GPU cards while being faster at the same time.

Gainward could perhaps have personalized the cooler, but we won’t split hairs and besides, if it ain’t broke, don’t fix it. The reference cooling is, dare we say it, sexy, and offers good performance at decent noise levels. In fact, considering that this is a dual-GPU card, the thermals-to-noise ratio is quite nice. Additionally, the GTX 690 is quieter than two GTX 680 cards in SLI mode.

As far as performance goes, the GTX 690 is slightly behind two GTX 680s in SLI, mostly due to lower GPU clocks. In fact, the GTX 690’s GPUs run 9 percent slower than the ones on GTX 680 cards, which is practically the only difference between these GPUs.

Overclocking potential isn’t as good as the GTX 680’s, but we managed to squeeze out up to 16 percent better performance with very little effort. When it comes to consumption, the GTX 690 beats a GTX 680 SLI system by almost 90W.

The GTX 690’s performance is mouth-watering, and although some may say it’s overkill for gaming, remember that 1920x1080 is a joke for what this card can do. If you want to play at 2560x1600 or are planning some surround gaming, then you’ll want only the best. Two overclocked GTX 670 cards may be more affordable, and the same goes for two GTX 680s in SLI, but the GTX 690 offers a serious punch in a single package.

Gainward GTX 690 currently goes for about €900, which is a big plus as it cost €1000 only a few months ago. Not your average pocket money for sure, but not your average budget card either. Besides, we’re talking about the fastest graphics card on the market and that sort of performance never came cheap. In conclusion, if you’re looking for the leanest and meanest card on the market, Gainward GTX 690 has your name written all over it.

The GTX 690 is finally here, and now we can see Nvidia's latest dual-GPU beast in action. Based on two of Nvidia's 28nm GK104 Kepler GPUs placed next to each other on a single PCB, the GTX 690 clearly ends up being the fastest graphics card that money can buy; and speaking of money, you will need a lot of it to get your hands on one.

In case you somehow managed to miss it, Nvidia's 28nm GK104 Kepler GPU surprised everyone when Nvidia started talking about performance per watt: with a total of 1536 CUDA cores, 128 texture units and 32 ROPs, all within a 195W TDP, it put significant pressure on AMD's high-end lineup.

Nvidia has always put a lot of effort into having the fastest graphics card around, no matter the cost, design, thermals or power consumption, but with Kepler the job was certainly much easier. The GTX 690 features two fully enabled GK104 Kepler GPUs for a total of 3072 CUDA cores, 256 texture units and 64 ROPs, with a dual 256-bit memory interface and 2GB of GDDR5 memory per GPU.

In order to keep the TDP under 300W, Nvidia had to cut some corners, and clocks were the obvious way to go. While the GTX 680 is clocked at 1006MHz base and 1058MHz boost, each GPU on the GTX 690 works at 915MHz base and 1019MHz boost. The memory on the GTX 680 was clocked at 6008MHz, and luckily the GTX 690 shares the same clock. As the new GTX 690 features two 8-pin PCI-Express power connectors, the theoretical maximum power draw is 375W (150W per 8-pin PCI-E power connector + 75W from the PCI-E slot). This leaves a lot of headroom for GPU Boost as well as additional overclocking, in case you want it even faster for some reason.
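The power budget arithmetic behind that headroom claim, spelled out:

```python
# Theoretical board power limit vs rated TDP.
slot_power = 75    # W delivered by the PCI-E slot
eight_pin = 150    # W per 8-pin PCI-E connector
board_max = slot_power + 2 * eight_pin   # 375W
tdp = 300          # W, GTX 690 rated TDP
print(f"Max draw: {board_max}W, headroom over TDP: {board_max - tdp}W")  # 375W, 75W
```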

The good part of the story is that the GTX 690, performance-wise, easily reaches the level of two GTX 680 graphics cards in SLI. Two HD 7970s in Crossfire still end up a bit faster in some benchmarks and games, but we are yet to hear precise details regarding AMD's dual-Tahiti HD 7990, which is allegedly scheduled to show up at Computex, and driver issues still make Crossfire rather unstable.

Of course, since we are talking about ultra-high-end graphics cards, it is simply a waste to talk about resolutions below 2560x1600, and with a price set at US $999, it ends up being the most expensive reference consumer graphics card to date.

We also must mention the looks of the new GTX 690, as Nvidia certainly did its best with this one: cast aluminum, an injection-molded magnesium fan housing, transparent heat-resistant polycarbonate shrouds and laser-etched Nvidia and GTX 690 logos make it one of the most impressive graphics cards around when it comes to design.

The GTX 690 is the fastest, the best-looking, and the most expensive graphics card around. With a price tag set at US $999, it is reserved for those with the deepest pockets who like to game at 2560x1600 and higher resolutions. All we need to see now is AMD's response, which could hopefully push the price down a bit.

Last night, during its keynote at the Nvidia Geforce LAN event in Shanghai, China, Nvidia finally announced its dual-GPU GK104 Kepler-based GTX 690 graphics card.

Squeezed onto a single 10-layer, 2oz-copper PCB, the two fully enabled GK104 GPUs share room with a total of 4GB of memory and a PLX bridge chip. Since we are talking about the fully enabled GK104 GPUs found on the GTX 680, this card features a total of 3072 CUDA cores, 256 TMUs and 64 ROPs (1536, 128 and 32 per GPU). There is a total of 4GB of GDDR5 memory, or 2GB per GPU, each paired up with a 256-bit memory interface.

Of course, squeezing two GPUs onto a single PCB has its drawbacks, so Nvidia had to cut down the clocks. Unlike the GTX 680, which works at a 1006MHz base and 1058MHz Boost clock, each of the two GK104 GPUs on the GTX 690 ended up clocked at 915MHz base and 1019MHz boost. Luckily, and rather impressively, the memory clock remained at 6Gbps, which provides a stunning 384.5GB/s of combined bandwidth.
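That bandwidth figure follows directly from the effective data rate and the bus width; here is the arithmetic:

```python
# GDDR5 bandwidth: effective data rate x bus width, per GPU and combined.
data_rate = 6008   # MT/s (6Gbps-class effective)
bus_width = 256    # bits per GPU
per_gpu = data_rate * bus_width / 8 / 1000   # GB/s
print(f"Per GPU: {per_gpu:.1f} GB/s, combined: {2 * per_gpu:.1f} GB/s")
# ~192.3 GB/s per GPU, ~384.5 GB/s combined
```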

The TDP is still set at 300W, and the card needs two 8-pin PCI-Express power connectors. Thanks to the 8+8-pin design, the card can pull up to 375W (150W per 8-pin PCI-E power connector + 75W from the PCI-E slot), which leaves a whole lot of room for GPU Boost. The new GTX 690 also supports quad-SLI, which means you can pair two of these for some extra fun.

Since we are looking at the highest-end product, Nvidia decided to go the extra mile and equip the new GTX 690 with nickel-plated fin-stack heatsinks over dual vapor chambers, cooled by a center-mounted axial fan. The entire cooler is covered by a cast aluminum and injection-molded magnesium alloy shroud that should provide additional heat conduction as well as some noise insulation.

The new GTX 690 dual-GPU card is scheduled to launch on May 3rd with a rather hefty US $999 price tag.

Nvidia has shed some more details regarding its "mystery announcement". It appears that we'll have to hold on for a couple more days, as it is scheduled for the 28th of April at GeForce LAN / NVIDIA Gaming Festival (NGF) 2012 in Shanghai, China.

Many recent rumours point at the Geforce GTX 690, or simply a dual-GK104 Kepler card. We must give Nvidia credit, as many partners we talked to were quite surprised as well. We also happened to hear that this card might be limited to a few partners, and our bet is that EVGA and probably some other Nvidia-only partners will be getting it. Of course, nothing is carved in stone yet.

We are quite sure that we'll hear more as we draw closer to the 28th of April, but for now rumourville is abuzz with talk of a dual-GPU Kepler-based GTX 690 card with 3072 CUDA cores and 4GB of GDDR5 memory. The card was previously rumoured to launch in the week of April 30th.

According to Sweclockers, the recently released teaser on Nvidia's Geforce Facebook page was indeed for Nvidia's dual-GPU GTX 690 graphics card based on two GK104 GPUs.

Sweclockers claims that, according to a couple of its sources, Nvidia's GTX 690 should launch in the week of April 30th. Unfortunately, the precise date is still unknown, and it could be anytime between April 30th and May 5th. Nvidia could be lining this launch up with Intel's Ivy Bridge, which should officially launch on the 29th of April. The precise specs of this dual-GPU Kepler-based GTX 690 are still unknown, but it should end up with a total of 3072 CUDA cores and 4GB of GDDR5 memory paired up with a dual 256-bit memory interface.

We must give Nvidia credit, as most partners we asked about the mystery teaser were as surprised as we were. Nvidia managed to keep it under wraps, and if the card is indeed coming during the week of April 30th, then it did a really good job. Despite previous rumours that AMD might come up with a dual-GPU HD 7990 in April, that card is nowhere to be seen.

It is no surprise that Nvidia is working on a dual-GPU graphics card featuring two 28nm GK104 Kepler GPUs, and the folks over at Expreview have managed to dig out a couple of fresh tidbits about it. We are now hearing that it is set for release in June.

According to a post over at Expreview, the new dual-GPU GK104 card (which some call the "GeForce GTX 690" or "GeForce GTX 695", although there is no evidence of a confirmed name yet) will draw power from two 8-pin PCI-Express power connectors and will feature a PCI-Express 3.0-compliant bridge chip. The PSU requirement is set at 650W, and the card will feature three DVI outputs and one DisplayPort. According to our info, precise specifications are still not finalized, as Nvidia could use any of the recently rumoured versions of the GK104 chip.

Currently, our latest info puts the dual-GPU Kepler release at sometime in June, but as always, take such information with a grain of salt.

The year 2011 will be remembered as a year of hoping to see 28nm graphics from AMD and Nvidia, but serious quantities and the launch of most 28nm desktop graphics won’t happen before early next year. So basically, in terms of graphics developments, 2011 will not be remembered at all.

AMD’s Radeon HD 7970 / 7950 cards, codenamed Tahiti, are expected in January, and they should be the fastest AMD single-chip solutions. Now we have confirmation that there will be a dual-chip card as well, something that might end up branded as the Radeon HD 7990. Its codename is New Zealand, or Australia Light, as some like to call it.

The launch schedule places the card around March, so it’s safer to say late Q1 2012, with a chance of a slight slip into early Q2 2012. Naturally, this card will not only be faster than any other 28nm Radeon HD 7000 generation card, it will also beat the Radeon HD 6990, AMD’s fastest card to date.

The good news for 2012 is that dual-GPU graphics cards are here to stay, and we are confident that Nvidia is working on its own 28nm dual-Kepler card.

While Nvidia's dual-GPU card is nowhere to be seen at Cebit, AMD's dual-GPU HD 6990 is happily running at MSI's booth and is being shown behind closed doors by other partners around the show.

The partners are staying tight-lipped about the card, but we did manage to get some details. As far as we know, it will be based on two Cayman GPUs, each with 1536 VLIW4 stream processors.

The clocks might still change, but for now the card runs its GPUs at 830MHz and its total of 4GB of memory (2GB per GPU) at 1250MHz (5.0Gbps), paired up with a 2x256-bit memory interface.

The card has one DVI and four mini DisplayPort outputs, needs two 8-pin PCI-Express power connectors and should arrive on March 8th, as AMD actually moved the official launch date up a bit. Enjoy the pictures.

The XFX HD 5970 4GB Black Edition is a limited-quantity graphics card that costs a pretty penny but in turn offers equally pretty performance. This naturally means the card is not aimed at the general masses, and this kind of performance is surely not for the faint of heart. The Radeon HD 5970 is currently the fastest graphics card around, but XFX pushed the bar even further with its Black Edition. For US$1000/€1000, you get XFX’s HD 5970 4GB Black Edition card with six video outs, meaning Eyefinity 6.

The reference HD 5970 runs its GPUs at 725MHz and the memory at 1000MHz (4000MHz effective). Many expected the reference dual-GPU HD 5970 to run at the single-GPU HD 5870’s clocks (850MHz core, 1200MHz memory), and XFX delivers exactly that with its HD 5970 4GB Black Edition. Naturally, this gives it the upper hand over the reference card.

XFX used non-reference cooling to make sure the card can handle the higher clocks. Furthermore, the card comes with six video outs and as much as 4GB of memory. While each GPU on the reference card has 1GB at its disposal, here it is 2GB.

XFX HD 5970 4GB Black Edition card isn’t only special for its performance – XFX has got a few surprises for the owners of this card.

The large cardboard package holds a LAN Party bag with room for two keyboards and two mice whereas the three side pockets will take plenty of gadgets, USB sticks, etc.

All the HD 5970 Black Edition cards come with their own serial numbers and in a special box that resembles a P-90. The gun-shaped box was designed by the G8 team and shows that XFX has carefully worked on every aspect of these cards.

We’re talking about a limited run of 1000 units with serious pricing, but we do believe they will find their buyers. For US$1000 you get an HD 5970 4GB that differs from the rest of the HD 5970 2GB/4GB pack with its six video outs (Eyefinity 6), and it is a dual-slot card rather than a triple-slot one. XFX’s card is 30.8cm long, just like the reference card.

As you can see from the picture, the cooler is not the reference solution and should thus cool the two Cypress XT cores much better.

The distance between the cores on the PCB is bigger than on the reference card and the fan is placed in the center rather than at the end of the card.

Beneath the hood are two aluminum heatsinks sitting on copper GPU blocks, with three copper heatpipes running through them.

The HD 5970 4GB Black Edition is an Eyefinity edition card, meaning it comes with six video outs, namely six mini-DisplayPort connectors; we'll talk more about Eyefinity shortly. XFX designed the card to run as fast as possible, since crunching ultra-high resolutions calls for some serious factory overclocking. Compared to the reference card, the GPUs run 125MHz faster and the memory is 200MHz faster (800MHz effective). The memory in question is Hynix H5GQ1H24AFR-T2C.

The card comes with a CrossFire connector, although it's a bit hard to imagine anyone buying two of these cards. Still, as our experience shows, you never know. The overclock and 4GB of memory spell higher consumption, so the card is powered via two 8-pin connectors, unlike the reference card, which uses one 6-pin and one 8-pin connector.

Dirt 2 is not a problem for high-end graphics cards, and all of them, including the HD 5870, score playable frame rates at 2560x1600. The XFX HD 5970 4GB Black Edition outruns the reference HD 5970 by about 10% and the HD 5870 by up to 55%.

Aliens vs. Predator sees the XFX HD 5970 Black Edition outpace the reference HD 5970 by about 15% and the HD 5870 by up to 81%. It appears that the XFX card's doubled memory does more for the results in AvP than in Dirt 2, where the XFX card ended up about 10% faster.

Metro 2033 also sees the XFX HD 5970 Black Edition come out on top by beating the reference HD 5970 by up to 15% and the HD 5870 by up to 80%.

Playing games with the picture spanning multiple monitors can be done with either Nvidia or AMD cards, but although both teams have made significant advances, we’d still refrain from calling their solutions finished articles. In this respect AMD has done much better, as HD 5000 series cards will drive up to three monitors (Eyefinity editions up to six), whereas three screens on Nvidia cards require SLI. That means you need two graphics cards (except for the GTX 295 with its two DVIs and HDMI), since each GPU can only handle two screens.

Not everything is as grand as it seems though, as AMD Eyefinity requires one of the three monitors to have a DisplayPort input. Naturally, DisplayPort monitors don’t come cheap, but thankfully there are alternatives. For ultra-high resolutions the alternative is an active DisplayPort-to-Dual-Link-DVI adapter (about $100), whereas for resolutions up to 1920x1200 you can get by with an active DisplayPort-to-Single-Link-DVI adapter (about $30). Sapphire is one of the first to offer such an adapter.

The active DisplayPort/Single-Link DVI adapter will allow for a resolution of 5760x1200 (landscape) or 3600x1920 (portrait).
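Those figures are simply three 1920x1200 panels placed side by side; the arithmetic:

```python
# Combined Eyefinity resolution for three 1920x1200 monitors.
w, h, count = 1920, 1200, 3
print(f"Landscape: {count * w}x{h}")   # 5760x1200
print(f"Portrait:  {count * h}x{w}")   # 3600x1920
```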

XFX ships a few adapters with its HD 5970 4GB Black Edition: two passive MiniDP-to-Single-Link-DVI, one passive MiniDP-to-HDMI and three MiniDP-to-DisplayPort adapters. This means that although the card has six video outs, if your three (or all six) monitors only have DVI inputs, you’ll have to shop for additional active MiniDP-to-DVI adapters.

We tested with three 1920x1080 monitors (5760x1080 combined), one of which had a DisplayPort input.

Gaming at 5760x1080 is much smoother on the XFX HD 5970 4GB, as the card not only runs overclocked but also packs twice the memory of the reference card.

Unfortunately, we must admit we expected better overclocking results from XFX’s HD 5970 Black Edition. We pushed the GPUs from 850MHz to 890MHz, which is only 40MHz above the card’s factory clocks but 165MHz above the reference card’s. After our overclocking, the memory ran at 1280MHz (5120MHz effective), which is 1120MHz effective higher than on the reference card. Overclocking helped the card score 4-10% higher in gaming tests.
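To untangle those deltas, here is how our result compares to both the factory and reference clocks:

```python
# XFX HD 5970 4GB Black Edition overclocking deltas.
ref_gpu, factory_gpu, our_gpu = 725, 850, 890   # MHz
print(f"GPU: +{our_gpu - factory_gpu}MHz over factory, "
      f"+{our_gpu - ref_gpu}MHz over reference")
ref_mem, our_mem = 4000, 5120                   # MHz effective
print(f"Memory: +{our_mem - ref_mem}MHz effective over reference")
```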

As far as consumption goes, it goes without saying that an overclocked HD 5970 will consume more than a reference card, but at least idle consumption is identical. We measured our entire rig’s consumption (without the monitor). Note that our motherboard consumes quite a lot (in turn offering overclocking stability), so it is probably overkill if you’re not using it to its full potential.

In idle, the HD 5970 Black Edition is louder than the reference card, and it stays a bit louder under load as well; both HD 5970 cards are still too loud for our taste.

XFX’s custom-designed cooling and PCB have paid off big time when it comes to thermals. In fact, thermals are much better than on the reference HD 5970, as you can see from the picture below.

The $1000/€1000 price is serious dough, and many of us would never splash out that much on a graphics card, despite the fact that XFX's HD 5970 4GB Black Edition is every gamer's dream. However, XFX didn't aim this card at the general masses, but rather at collectors and enthusiasts who like their graphics extra saucy. Indeed, the XFX HD 5970 4GB Black Edition is one of the fastest cards around at the moment, helped along by the overclock and the extra 2GB of GDDR5 memory.

XFX has made sure that everyone knows what the company’s secret weapon is, and has built a dedicated site to promote these beasts.

Our hat is off to XFX for one of the most imaginative packages we’ve seen: the HD 5970 4GB Black Edition comes inside a plastic package resembling a P-90 gun and with a specially designed LAN Party bag. This is not the first time XFX has managed to surprise everyone with the quality of its packaging, but what’s inside is capable of blowing anyone away.

Note, however, that XFX’s HD 5970 Black Edition is much more than a mere overclock of the HD 5970. The Black Edition comes with six video outs and, unlike the reference card with its three, will drive up to six monitors simultaneously. Thanks to the factory overclock, gaming performance is exquisite.

We were impressed with both the looks and the performance, although we didn’t quite like the pricing or the noise levels in idle and in 3D. However, XFX’s HD 5970 4GB Black Edition is the fastest card we’ve tested so far.