The budget-priced trio features full support for DirectX 10, including pixel and vertex shader model 4.0. NVIDIA has yet to reveal the number of shaders or the shader clocks, though. Nevertheless, the trio supports NVIDIA SLI and PureVideo technologies.

At the top of the mid-range lineup is the GeForce 8600GTS. The G84-based graphics core clocks in at 675 MHz. NVIDIA pairs the GeForce 8600GTS with 256MB of GDDR3 memory clocked at 1000 MHz. The memory interfaces with the GPU via a 128-bit bus. The GeForce 8600GTS does not integrate HDCP keys on the GPU. Add-in board partners will have to purchase separate EEPROMs with HDCP keys; however, all GeForce 8600GTS-based graphics cards feature support for HDCP.
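From the figures above, the 8600GTS's peak memory bandwidth can be estimated. A minimal sketch, assuming GDDR3's double data rate (so the quoted 1000 MHz clock yields an effective 2000 MT/s):

```python
# Peak memory bandwidth = bus width in bytes * effective transfer rate.
# GDDR3 is double data rate, so effective rate = 2 * memory clock.
bus_width_bits = 128      # GeForce 8600GTS memory interface
mem_clock_mhz = 1000      # GeForce 8600GTS memory clock per the article

effective_mts = 2 * mem_clock_mhz                    # 2000 MT/s
bandwidth_gbs = (bus_width_bits / 8) * effective_mts / 1000
print(f"{bandwidth_gbs:.1f} GB/s")                   # prints "32.0 GB/s"
```

That 32 GB/s figure is an estimate derived from the quoted clocks, not a number NVIDIA has published for the card.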

Supported video outputs include dual dual-link DVI, VGA, and SDTV/HDTV outputs; the board also provides analog video inputs. G84-based GPUs do not support a native HDMI output, but manufacturers can adapt one of the DVI outputs for HDMI.

NVIDIA’s GeForce 8600GT is not as performance oriented as the 8600GTS. The GeForce 8600GT GPU clocks in at a more conservative 540 MHz. The memory configuration is more flexible, letting manufacturers choose between 256MB and 128MB of GDDR3 memory. NVIDIA specifies the memory clock at 700 MHz. The GeForce 8600GT shares the same 128-bit memory interface as the 8600GTS. HDCP support on the GeForce 8600GT is optional. The GPU and reference board design support the required HDCP-key EEPROM; however, the implementation is up to NVIDIA’s add-in board partners.

GeForce 8600GT-based graphics cards only require a six-layer PCB instead of the eight-layer PCB of the 8600GTS. The physical board is also smaller, measuring 6.9 x 4.376 inches. GeForce 8600GT-based cards do not require external PCIe power. NVIDIA rates the maximum board power consumption at 43 watts, 28 watts less than the 8600GTS.

The GeForce 8600GT supports similar video outputs as the 8600GTS; however, the 8600GT does not support video input features.

NVIDIA has revealed very little about the GeForce 8500GT beyond support for GDDR3 and DDR2 memory. It supports dual dual-link DVI, VGA, and TV outputs as well.

The 7900GTX has a 256-bit memory bus, while the 8800GTX has a 384-bit memory bus, plus new architectural features the 7900GTX does not have, e.g. stream processors. Of course it's going to be faster. Do your research, you freaking noob!

Also, as for Nvidia's cards, they shouldn't have done something like this...

Good points about the high end, although I don't think he deserved to be called a noob, but whatever.

I think the point most people are missing is that we are looking at mid-range cards here. From a cost standpoint, Nvidia can't just throw 256-bit or higher into mid-range cards. Perhaps in the future, but we aren't at that point in time yet. A more realistic breakdown for now is:

LOL - so true... And here the ignorant always believe that a higher clock speed is better.

For anyone who doesn't know: a 256-bit bus with 500MHz memory will have the exact same bandwidth as a 128-bit bus with 1000MHz memory, but the 256-bit bus with the slower memory will be a whole lot more expensive.
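The equivalence is simple arithmetic: bandwidth is bus width in bytes times the transfer rate, so doubling one while halving the other cancels out. A quick sketch (DDR doubling is ignored since it applies equally to both configurations):

```python
def bandwidth_gbs(bus_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * clock_mhz / 1000

wide_slow = bandwidth_gbs(256, 500)     # 256-bit bus, 500 MHz memory
narrow_fast = bandwidth_gbs(128, 1000)  # 128-bit bus, 1000 MHz memory
print(wide_slow, narrow_fast)           # prints "16.0 16.0" -- identical
```

The cost difference comes from the extra traces and pins a 256-bit bus needs, not from the memory chips themselves.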

The 7600 GT has a 128-bit memory interface at 1.4 GHz; the 8600 GTS has a 128-bit memory interface at 2.0 GHz, plus the new stream processors the older architecture doesn't have.

Will it be enough to beat the 7900 GTX? It will be close, as the projected level is 7950 GT-ballpark performance. Not bad for a mainstream card based on the GeForce 8 architecture.

256-bit or wider interfaces generally have minimum die size requirements of around 200 mm² or greater, or you have to increase PCB complexity considerably. Either way, production costs rise considerably, which is unacceptable for a mainstream SKU.

So unless we find a way to make 200 mm² die sizes much more affordable (maybe when we switch to 450mm wafer technology sometime next decade), or some major breakthrough comes along in PCB technology, you will always have high-clocked memory on a 128-bit interface on the mainstream SKU.