Back in late October MSI announced the GT80 Titan gaming laptop that included an impressive array of features, the most interesting of which was the full-size Cherry MX Brown keyboard embedded in the chassis. Seriously.

At CES this week we got hands-on with the beast and I have to say I came away pretty impressed. Hardware powering the system includes an Intel Core i7-4980HQ processor, a pair of GTX 980M GPUs running in SLI, 24GB of DDR3 system memory, up to quad M.2 SSDs in RAID-0, Killer wired and wireless networking and more. All of that hardware sits under the rear portion of the notebook's base, while the LED-backlit Cherry MX SteelSeries keyboard takes up the entire front portion of the GT80's deck.

Despite its appearance, the GT80 Titan is similar in size to some of the 17/18-in Alienware notebooks currently on sale, but those obviously don't include a Cherry keyboard with full-travel switches. MSI also claims that access to the system memory, M.2 storage, 2.5-in HDD bay and optical drive through the top panel allows for reasonable upgrade options down the road. Even the two MXM modules for the GTX 980M cards can be changed through the bottom of the GT80. (Mobile GPU upgrades have always been problematic.)

The GT80 Titan will be available next week and will start at $3299 with a $3499 option including the faster Intel processor. That is an incredibly high price for a gaming machine that is less "portable" than "transportable" but it would be hard to get more gaming horsepower in a smaller package anywhere else. We are looking forward to a review unit showing up shortly after our return! Stay tuned!

Titan has been officially canceled by Blizzard after a year and a half of delays. Around May of 2013, the developer attempted to "reset" the project, shrinking its staff from a hundred down to a core group of thirty. This team wanted Titan to embody their wildest ambitions, but they realized that it was not going to be fun. "Fun" is not the goal of every game, nor should it be.

If "fun" was the intention, though, and the game isn't fun, then you have a problem.

As for the employees, there does not seem to be any discussion of layoffs. 16 months ago, when the team was downsized from 100 to 30, Blizzard claimed that its staff would be reassigned to other projects. The smaller, core team is not mentioned today at all, positively or negatively. Whether that is a good sign, and why it never came up in the interview, is still unknown. Hopefully they will be transferred to an existing game or service, or work on a different, new product.

A slightly new architecture

Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year. As you might have guessed from the location, the code name for this GPU was, in fact, Hawaii. It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA.

Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU. Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices). Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers a price-to-performance ratio currently unmatched by NVIDIA.

But today is a little different: we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine. At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand. The question is: to where does that ship sail?

The AMD Hawaii Architecture

To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards. Based on the same GCN (Graphics Core Next) architecture AMD assured us would be its long term vision, Hawaii ups the ante in a few key areas while maintaining the same core.

Hawaii is built around Shader Engines, of which the R9 290X has four. Each of these includes 11 CUs (compute units), each holding 4 SIMD arrays. Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X.
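The missing factor in that quick math is the SIMD width: each GCN SIMD array is 16 lanes wide, so every CU carries 64 stream processors. A quick sketch of the arithmetic:

```python
# Reconstructing the R9 290X stream processor count from the GCN layout
# described above. Each GCN SIMD array is 16 lanes wide.
shader_engines = 4
cus_per_engine = 11
simds_per_cu = 4
lanes_per_simd = 16  # GCN vector SIMD width

stream_processors = shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd
print(stream_processors)  # 2816
```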

The National Supercomputer Center in Guangzhou, China, will host the world's fastest supercomputer by the end of the year. The Tianhe-2 (English: "Milky Way-2") is capable of nearly double the floating-point performance of Titan, albeit with slightly less performance per watt. The Tianhe-2 was developed by China's National University of Defense Technology.

Comparing the new fastest computer with the former champion, China's Milky Way-2 is able to achieve 33.8627 PetaFLOPs of calculations from 17.808 MW of electricity. The Titan, on the other hand, is able to crunch 17.590 PetaFLOPs with a draw of just 8.209 MW. As such, the new Milky Way-2 uses 12.7% more power per FLOP than Titan.
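The 12.7% figure falls out directly from those four numbers; a quick check of the arithmetic:

```python
# Efficiency comparison using the figures quoted above.
tianhe2_pflops, tianhe2_mw = 33.8627, 17.808
titan_pflops, titan_mw = 17.590, 8.209

# Megawatts per PetaFLOP for each machine (lower is better)
tianhe2_mw_per_pflop = tianhe2_mw / tianhe2_pflops
titan_mw_per_pflop = titan_mw / titan_pflops

extra = tianhe2_mw_per_pflop / titan_mw_per_pflop - 1
print(f"{extra:.1%}")  # 12.7%
```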

Titan is famously based on the Kepler GPU architecture from NVIDIA, coupled with several 16-core AMD Opteron server processors clocked at 2.2 GHz. This concept of using accelerated hardware carried over into the design of Tianhe-2, which is based around Intel's Xeon Phi coprocessor. If you include the simplified co-processor cores of the Xeon Phi, the new champion totals 3.12 million x86 cores and 1,024 terabytes of memory.

... but will it run Crysis?

... if someone gets around to emulating DirectX in software, it very well could.

If you have been wondering how the two flagship GPUs fare in a battle royale of pure frame rate, you can satisfy your curiosity at [H]ard|OCP. They have tested both NVIDIA's TITAN and the finally released HD 7990 in one of their latest reviews. Both cards were forced to push out pixels at 5760x1200 and for the most part tied, which makes sense as they both cost $1000. The real winner was a pair of HD 7970s in CrossFire, which kept up with the big guns but cost $200 less to purchase.

"We follow-up with a look at how the $999 GeForce GTX TITAN compares to the new $999 AMD Radeon HD 7990 video card. What makes this is unique is that the GeForce GTX TITAN is a single-GPU running three displays in NV Surround compared to the same priced dual-GPU CrossFire on a card Radeon HD 7990 in Eyefinity."

A very early look at the future of Catalyst

Today is a very interesting day for AMD. It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to the CrossFire technology that will improve animation performance across the board. Both stories are incredibly interesting and as it turns out both feed off of each other in a very important way: the HD 7990 depends on CrossFire and CrossFire depends on this driver.

If you already read our review (or any review that is using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed. The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have been an incredibly exciting release for us and would likely have become the single fastest graphics card on the planet. That didn't happen, though, and our results clearly show why that is the case: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised.

Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February. Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology. We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, and those painted CrossFire in a very negative light. Even though some outlets accused us of being biased, or claimed that AMD wasn't doing anything incorrectly, we stuck by our results, and as it turns out, so does AMD.

Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered.

If you are just catching up on the story, you really need some background information. The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically. From that piece:

It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand. As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfect alternating pattern. Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case. Obviously the question then becomes, “In Battlefield 3, does it even make sense to use a CrossFire configuration?” My answer based on the below graph would be no.

An example of a runt frame in a CrossFire configuration
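The runt-detection rule itself is simple: the frame capture measures how many scanlines each rendered frame actually occupies on screen, and frames below a small cutoff are flagged as runts. A hypothetical sketch of that check (the function names are illustrative, and the 21-scanline threshold is the one our Frame Rating articles have used, not an industry standard):

```python
# Hypothetical sketch of runt-frame detection from frame-capture data.
# Input: on-screen height of each frame, in scanlines, as extracted from
# the captured video by an FCAT-style overlay analysis.
RUNT_THRESHOLD = 21  # scanlines; frames shorter than this are "runts"

def classify_frames(scanline_heights):
    """Return a list of (height, is_runt) tuples."""
    return [(h, h < RUNT_THRESHOLD) for h in scanline_heights]

# Alternating full/runt pattern typical of the CrossFire results described:
sample = [540, 8, 536, 12, 545, 6]
for height, is_runt in classify_frames(sample):
    print(f"{height:4d} scanlines -> {'RUNT' if is_runt else 'ok'}")
```

Runts still count toward the FPS number a tool like FRAPS reports, which is exactly why the averages looked fine while the animation did not.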

NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enables smoother, more consistent frame times and thus smoother animations on the screen. For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.
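A toy model of the metering idea, with illustrative names and smoothing factor (this is not NVIDIA's actual algorithm): instead of presenting each frame the instant the GPU finishes it, hold frames back so the gap between presents tracks a running average of the frame interval.

```python
# Hypothetical sketch of frame metering. finish_times are the raw GPU
# completion timestamps (ms); the return value is when each frame is
# actually shown. The smoothing factor is purely illustrative.
def meter_frames(finish_times, smoothing=0.9):
    """Map raw GPU finish times to evened-out presentation times."""
    presents = [finish_times[0]]
    avg_interval = finish_times[1] - finish_times[0]
    for t in finish_times[1:]:
        avg_interval = smoothing * avg_interval + (1 - smoothing) * (t - presents[-1])
        # Present no earlier than the metered slot, but never before the
        # frame is actually finished rendering.
        presents.append(max(t, presents[-1] + avg_interval))
    return presents

# Two GPUs finishing in an alternating slow/fast pattern (ms):
raw = [0.0, 30.0, 32.0, 62.0, 64.0]
print(meter_frames(raw))
```

The trade-off is visible even in this toy: the presentation intervals even out, but some frames are shown later than they were rendered, which is the small added latency frame metering accepts in exchange for smoothness.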