NVIDIA GT300 'Fermi' Architecture

Revealing the GT300 'Fermi' Architecture

With AMD recently launching the world's first DirectX 11 graphics card, enthusiasts out there would expect NVIDIA to answer with GT300-based graphics cards that reclaim the title of the fastest and most power-efficient high-end graphics card around. Well, that's exactly what we got to hear and see, among other matters pertaining to general-purpose GPU (GP-GPU) computing.

The Fermi architecture was showcased and first officially announced here at NVIDIA's GPU Technology Conference in San Jose, California, by NVIDIA CEO and co-founder Jen-Hsun Huang during his opening keynote speech. The event, targeted at developers, programmers, entrepreneurs, venture capitalists, researchers and academics, aims to propel GP-GPU computing initiatives, development and research to the next level by showcasing what NVIDIA, its partners and research institutions have managed to unleash by tapping the power of the GPU in new ways (specifically with NVIDIA's hardware and NVIDIA CUDA technology).

Additionally, several emerging companies have over time tapped into NVIDIA's hardware and CUDA technology to build purpose-oriented solutions that accelerate or enhance visualization techniques and solve domain-specific problems for the companies they work with. They are also present at this conference to inspire, share their experience and network with others in the industry. We at HardwareZone.com were privileged to be invited to this forward-looking tech conference, and one of the first things we're going to share with you is the GT300 architecture, codenamed Fermi.

Since this conference is targeted at developers and researchers, GT300 details pertaining to the next GeForce lineup were withheld for the moment, but NVIDIA did share the specifications that the highest-SKU Fermi part will have in common across both consumer and workstation/HPC products.

This 3-billion-transistor GPU has 512 shader processing cores, organized as 16 streaming multiprocessors of 32 cores each. That makes the Fermi architecture, and consequently the GT300, a huge GPU indeed - par for the course whenever a new GPU generation launches. The cores are also vastly improved for single- and double-precision floating-point calculation. Now, if you consider ATI's latest incarnation with its 1600 shader execution units, be reminded that each of its scalar processors contains five shader execution units, which means the net number of true shader processors in ATI's Radeon HD 5870 is 320. Put into that perspective, NVIDIA turning up later than its rival with its next-generation product has a valid reason indeed.
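To put the two core counts side by side, here is a quick back-of-the-envelope comparison using only the figures quoted above:

```python
# Fermi: 16 multiprocessors, each with 32 shader cores
fermi_cores = 16 * 32
print(fermi_cores)  # 512

# Radeon HD 5870: 1600 shader execution units, grouped
# five to a scalar processor
radeon_units = 1600
units_per_processor = 5
radeon_processors = radeon_units // units_per_processor
print(radeon_processors)  # 320
```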

In terms of memory, the new NVIDIA GPU has six 64-bit GDDR5 memory controllers, giving it a total memory bus width of 384 bits and support for up to 6GB of graphics memory. The Fermi architecture is also the first GPU to support ECC for data stored in memory, adding a level of safety and assurance for computed data. Even the memory subsystem serving the streaming multiprocessors has improved, with a configurable L1 cache and a unified L2 cache. This feature, dubbed the NVIDIA Parallel DataCache hierarchy, greatly speeds up certain functions and mathematical calculation routines.
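The 384-bit figure follows directly from the controller count. A quick sketch of the arithmetic; note that the 4.0 Gbps GDDR5 data rate below is a hypothetical placeholder for illustration, since NVIDIA has not announced memory clocks for Fermi:

```python
# Six 64-bit GDDR5 memory controllers give the total bus width
controllers = 6
bus_width_bits = controllers * 64
print(bus_width_bits)  # 384

# Peak bandwidth = bus width * effective data rate / 8 bits per byte.
# 4.0 Gbps is a hypothetical data rate, not an announced spec.
data_rate_gbps = 4.0
peak_bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(peak_bandwidth_gbs)  # 192.0 (GB/s)
```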

And there are more reasons why the new architecture will probably be a game changer. The Fermi architecture is billed as the world's first computational GPU. Thanks to the second-generation Parallel Thread Execution (PTX 2.0) instruction set, it is designed to support current programming languages such as Fortran, C and now C++, along with other compiler targets and features such as a unified address space and optimizations for OpenCL and DirectCompute. Factor in NVIDIA's Nexus development environment, which lets Visual Studio developers write and debug GPU source code the same way they debug CPU code, and developers now have an environment that promotes heterogeneous computing, whereby applications can take advantage of both the CPU and the GPU. Even more so because Nexus has tools to manage and exploit tasks that can run in parallel.

Back to more practical questions, and wrapping up this update from the GPU Technology Conference: no timeline has been given for the availability of GT300 (Fermi architecture) based GPUs. What we do know is that NVIDIA has working silicon of the GPU and that it should be available in a few months. When it does get released, the corresponding GeForce, Quadro and Tesla versions are likely to follow in short succession of each other. Expected TDP for these parts is no more than that of current GPUs on the market. We'll leave you with these shots of the card and the chip: