GPGPU (General Purpose GPU) Technology: Utilizing an array of S3 stream processors, the Chrome GPU can accelerate parallel data workloads and perform work on thousands of concurrent threads to achieve gigaflops (GFLOPS) of compute throughput. Applications that can utilize S3 GPGPU technologies include high performance computing (HPC) infrastructures, video transcoding/encoding, game physics, and many more.
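As a back-of-the-envelope illustration of where a "GFLOPS" figure comes from, peak throughput is just execution units × clock × operations per cycle. A minimal sketch; the numbers below are illustrative only, not official S3 specifications:

```python
def peak_gflops(stream_processors, clock_mhz, ops_per_clock=2):
    """Theoretical peak throughput in GFLOPS: each stream processor
    retires ops_per_clock floating-point operations per cycle
    (a fused multiply-add is commonly counted as 2)."""
    return stream_processors * clock_mhz * ops_per_clock / 1000.0

# Hypothetical part: 32 stream processors at 800 MHz
print(peak_gflops(32, 800))  # 51.2 GFLOPS
```

Real sustained throughput is of course lower, since memory bandwidth and occupancy limits keep the units from issuing every cycle.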

S3 Graphics PowerWise™ Technology: Sophisticated algorithms and power control mechanisms allow Chrome 500 Series GPUs to deliver the optimal balance between performance and power on-the-fly, meeting performance and application requirements for power-efficient, graphics-intensive computing on small form factor PCs. Consuming under 25W of power, the Chrome 500 is ideal as the graphics of choice for HTPCs (home theatre PCs), DIY enthusiasts, and existing PC users looking for the best bang-for-the-buck upgrade.

PCI Express®2.0: Chrome 500 Series GPUs support the latest high throughput PCI Express® 2.0 bus technology for bandwidth intensive applications and games. The faster connection speed also allows users to take advantage of additional S3 Graphics technologies such as AcceleRAM™ which leverages system memory for image data storage, and MultiChrome™ Multi-GPU technology to unleash higher 3D rendering performance.

Revealing The Power of DirectX 11 (http://www.anandtech.com/video/showdoc.aspx?i=3507)

She's much cooler than her older brother, and way hotter too. Many under-the-hood enhancements mean higher performance for features available but less used under DX10. The major changes to the pipeline mark revolutionary steps in graphics hardware and software capabilities. Tessellation (made up of the hull shader, tessellator and domain shader) and the Compute Shader are major developments that could go far in assisting developers in closing the gap between reality and unreality. These features have gotten a lot of press already, but we feel the key to DirectX 11 adoption (and thus exploitation) is in some of the more subtle elements.

Rather than throwing out old constructs in order to move towards more programmability, Microsoft has built DirectX 11 as a strict superset of DirectX 10/10.1, which enables some curious possibilities. Essentially, DX10 code will be DX11 code that chooses not to implement some of the advanced features. On the flipside, DX11 will be able to run on down-level hardware. Of course, not all of the features of DX11 will be available, but it does mean that developers can stick with DX11 and target both DX10 and DX11 hardware without the need for two completely separate implementations: they're both the same, but one targets a subset of functionality. Different code paths will be necessary if something DX11-only (like the tessellator or compute shader) is used, but this will still definitely be a benefit in transitioning to DX11 from DX10.
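The single-code-path idea can be sketched as a feature-level check. This is a hypothetical, simplified API (the class and function names are invented for illustration), showing how one DX11 code path degrades gracefully on DX10-class hardware:

```python
# Hypothetical sketch of feature-level targeting, not real D3D bindings.
class Device:
    def __init__(self, feature_level):
        # e.g. (10, 0), (10, 1) or (11, 0), mirroring D3D feature levels
        self.feature_level = feature_level

def render(device):
    # One DX11 code path; DX11-only stages are skipped on down-level hardware.
    if device.feature_level >= (11, 0):
        return "tessellated path"  # hull/domain shader stages available
    return "plain path"            # same API, subset of functionality

print(render(Device((10, 1))))  # plain path
print(render(Device((11, 0))))  # tessellated path
```

The point is that only the branch differs; the rest of the renderer is shared between DX10 and DX11 hardware.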

For enthusiasts, this is by no means a good thing :)
''The ADP4100 lacks the I2C interface, which means voltage control will be much more difficult than on current PCBs of the GeForce 260, 280, 285 and 295''

Samples based on the RV790XT A11 are currently running at speeds of 850/975 MHz (core/memory). AMD is reportedly telling its partners that the RV790XT is expected to be around 20% faster than the RV770XT (HD 4870), and has NVIDIA's GeForce GTX 285 in its sights, for head-on competition. Additionally, AMD may force NVIDIA to reconsider its pricing, since the RV790XT is expected to be priced between US$199 to US$249, up to $150 cheaper than the GeForce GTX 285 in its current pricing.

Samsung noted that it was able to tweak its 50nm process so much that it increased its production efficiency by a hundred percent over its 60nm process. If true, this will cut prices low enough that GDDR5 will be able to finally move into mainstream video cards.
Radeon HD 4770 Sets Sights on GeForce GTS 240 (http://www.techpowerup.com/index.php?85476)

The HD 4770 SKU will be distinct in being the first mainstream graphics card with GDDR5 memory. It will use its 128-bit wide bus to accommodate 512 MB of memory. The HD 4750, on the other hand, will stick to GDDR3; the reference model may have 1GB of it. The HD 4770 is expected to be priced around the US $120 mark, making it a head-on competitor to the GeForce GTS 240, which is known to be a re-badged GeForce 8800/9800GT with higher reference clock speeds. The HD 4750 has the GeForce 9600 GT in its sights, with expected initial pricing around the $100 mark. The two are expected to follow the RV790 launch and will arrive in May, close to two months after the GeForce GTS 240 arrives.

Baubaul

19-02-2009, 15:47

Arctic Cooling Announces the Accelero Xtreme 4870X2
The Swiss low noise cooling solution provider Arctic Cooling today announced the launch of the Accelero XTREME 4870X2. Especially designed for the ATI Radeon HD4870X2, the Accelero XTREME 4870X2 follows the sophisticated design of the Accelero XTREME series, offering the best cooling solution for this high-end VGA card.

Extreme cooling performance
Outstanding cooling comes from outstanding components. The Accelero XTREME 4870X2 is equipped with three 92mm PWM fans running from 1,000 to 2,000 RPM, generating 81 CFM airflow which allows the fans to remove the heat from the two GPUs efficiently. The eight-heatpipe design also improves heat dissipation and achieves 320 Watts cooling capacity. The result is significant - the GPU temperature is 50°C lower than with the stock cooler. This enhances not only the overclock performance, but also extends the service life of your valuable graphics card.

Quiet cooling guaranteed
By using a low-noise impeller and the patented fan holder, the three 92mm PWM fans are incredibly quiet. Thanks to the PWM function, the fans run just at the necessary speed, offering sufficient cooling at the lowest noise level. Even running at its full speed of 2,000 RPM, the Accelero XTREME 4870X2 operates almost silently at only 0.5 sone, creating a much quieter gaming environment.

The Accelero XTREME 4870X2 comes with a 6-year limited warranty. This product will be available by the end of March 2009. The MSRP is US$68.30 and 53.95€ (excluding VAT).

The economy is in a tight spot right now; the entire world is fighting off the global financial crisis. As such, people will have less to spend on entertainment and niche products, and that is where graphics card series like the ones shown today come in. This is one of the reasons why I believe ATI's RV740 products could be a nice success. Products like these offer very decent gaming performance targeted at budget end-users below the 100 USD level. That's stuff to think about for a minute. I mean, for less than 100 bucks you'll be able to get a video card that performs fairly close to a Radeon HD 4850. I think that's impressive, an achievement all by itself.

Typically it would make sense that over time the product would become outdated. But rest assured, there is plenty of life left in the G92b GPU powering the 9800 GTX+ and now the GeForce GTS 250. In its evolutionary development path it got a tweak or two and certainly got faster over time, thanks to a couple of architectural and driver tweaks. Next to that, power consumption went down; more importantly, price went down.

Props to forum old-timer Cowie for submitting this one. China-based Expreview got their hands on the much discussed single-PCB GeForce GTX 295 from Inno3D. It's a lot of info, courtesy of Expreview of course:

Rumors of NVIDIA’s single-PCB reference GTX295 were unveiled in March, and Inno3D is now showing its GTX295 Platinum, the first card to adopt the single-PCB design.

The previous version of the GeForce GTX295 adopted a dual-PCB design with PCB code P656; the new GTX295 employs a single-PCB design known as P658. The length of the PCB remains 267mm, and the cooler sports dual slots and a single fan. You’ll find the two GT200 chips on either side of the PCB, with one NF200 and two NVIO2 chips placed in the middle.

Each GPU has a 3-phase power supply, which means a 6-phase power supply in total. Due to the limited space, Inno3D has used a DrMOS chip for this card.

Inno3D’s new GTX295 uses Hynix H5RS5223CFR NOC GDDR3 memory, which actually works at 999MHz even though the theoretical frequency is 1000MHz. The 14 DRAM chips add up to 1792MB of memory.
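The 1792 MB figure follows directly from the chip count, assuming 1 Gbit (128 MB) parts per chip, which is consistent with the stated total:

```python
def frame_buffer_mb(chip_count, chip_mbit=1024):
    """Total memory from N DRAM chips of chip_mbit megabits each.
    1024 Mbit (1 Gbit) = 128 MB per chip; 8 bits per byte."""
    return chip_count * chip_mbit // 8

print(frame_buffer_mb(14))  # 1792 MB across both GPUs
```

Per GPU that is an asymmetric-looking 896 MB (7 chips on a 448-bit bus), which matches the GTX 295's advertised frame buffer.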

BTW, photos showing up at this moment tell me that we can expect a launch at Computex, or at least some demos early next month.

The price of the HD 4890 card will be lowered from US$249 to US$199, while the HD 4870 will be priced at US$149, down from US$199. Lastly, the HD 4850, whose prices we already saw plummeting due to HD 4770 shortages, will go below US$100 this time.

As we reported, G300 will be based on a 40nm manufacturing process, featuring 2.4 billion transistors, a 495mm² package, 512 shaders, a 512-bit memory interface, GDDR5, and core/shader/memory clocks of 700/1600/1100MHz. The TDP was rumored to be 300W, i.e. one 8-pin and one 6-pin power connector.
But according to Brightsideofnews (http://www.brightsideofnews.com/news/2009/6/8/nvidia-gt300-targets-225w-tdp.aspx), the GT300 part targets a thermal range of 225W, and should come with two 6-pin power connectors. It’s a mystery how NVIDIA manages to control power consumption so well, but if what we hear is true, it’s good news for users.

Coco

09-06-2009, 11:44

ATI to be first with DirectX 11 (http://www.fudzilla.com/content/view/14133/34/)

If all goes well, Nvidia might have its GT300 DirectX 11 capable card in very late 2009 but it’s almost certain that ATI will be the first to launch DirectX 11 hardware.

Coco

09-06-2009, 11:56

GTS 250 is selling like mad - Great success for Nvidia (http://www.fudzilla.com/content/view/14131/34/)

Renaming the G92b-based Geforce 9800GTX+ to Geforce GTS 250 was one of the best things Nvidia did in 2009. The rename worked and everyone wants to buy Geforce GTS 250 as this name definitely sounds better than the old 9800GTX+.

The hard reality is that consumers are too naive to notice the tricks of Nvidia's mighty marketing machine, but as we said, Nvidia doesn’t care much as the strategy works.

Tessellation is not exactly a performance-boosting feature, but it can generate highly detailed objects using fewer resources than traditional techniques; therefore, game developers will be able either to offer better graphics, or to render existing graphics at higher speed.

“After the post-processing work, I would probably switch to tessellation. I would use […] patch approximation to smooth out jagged objects, things like pipes, which are supposed to have smooth form, use N-patches there. Or I will be even more aggressive and take something like parallax occlusion mapping, which is a rather attractive kind of trick for improving the quality of pixels within an object. I would instead extrude the geometry and load a parallax occlusion height map and then would generate much improved silhouettes using the tessellator,” said Mr. Huddy.

Obviously, to use tessellation, developers will have to rely on DirectX 11 or, at least, hardware with tessellation support. Still, tessellation is yet another advantage of DX11.

If money is no problem then we, or better yet, Asus, have one more thing you can spend it on - the limited-edition Mars graphics card, which boasts no less than 4GB of GDDR3 memory and two GT200 GPUs summing up to 480 processing cores, all on one PCB.

Seen below, the dual-GPU MARS/2DI/4GD3 has a 2x512-bit memory interface, features GPU, shader, and memory clocks of 648, 1476 and 2484 MHz, respectively, and is now available for pre-order at UK's Scan with a price tag of (brace yourself) £1,030.98 or about $1,680 / 1,218 Euro. Any takers?

mascotzel

16-06-2009, 09:43

http://www.fudzilla.com/content/view/14223/1/

According to what we have heard, ATI is playing fair on this one, as it had promised a US $99 card that it simply could not deliver in volume due to low yields and high demand. In order to live up to its promise, ATI has simply adjusted the price of the HD 4850 in order to keep its customers: if you want a US $99 card, you can have the HD 4850.

While technically it is derived from the AMD RV610 core with some tweaked clock speeds, it will carry the brand name ATI Radeon HD 4200. With 1800 points in 3DMark06, this IGP is as powerful as the GeForce 6800 GT, a high-end graphics accelerator from the DirectX 9 generation. The chipset will make a formal debut this August.
NVIDIA 40 nm GeForce G210 and GeForce GT 220 Now Official (http://www.techpowerup.com/index.php?98865)

The GeForce G210 has 16 processor cores and a 589 MHz core clock speed; that's paired with 512 MB of DDR2 memory with a 64-bit interface and 500 MHz clock speed. Its shaders run at 1402 MHz. As for the NVIDIA GeForce GT 220, it has 48 processor cores and a 615 MHz clock speed, paired with 1 GB of GDDR3 memory with a 790 MHz clock and a 128-bit interface. It has a slightly slower shader clock speed of 1335 MHz. Neither of the two cards is expected to be available directly to consumers; both offerings are marked as OEM products and meant to be entry-level options in pre-built PCs.

With Nvidia, things are quite clear. The first DirectX 11 chip is in the super high end, something that we all call GT300, even though that might not be the real code name. Nvidia's desktop 40nm refresh is a minor facelift of existing chips, all done in 40nm, so don’t expect any miracles.

They should be marginally better than the current 55nm generation and that’s it. We wonder if Nvidia will force the guys behind Assassin’s Creed to enable DirectX 10.1 again (http://www.pixelrage.ro/news/Fara-DirectX-10.1-pentru-Assassins-Creed-6487.html), as it’s been disabled for a while, at least we believe so.
AMD: DirectX 11 Radeons Pleasantly Fast (http://en.expreview.com/2009/07/10/amd-directx-11-radeons-pleasantly-fast.html)

AMD plans to launch the Evergreen series of DirectX 11 graphics cards by the end of 2009, hopefully promoting them alongside the release of Microsoft’s Windows 7 on Oct 22nd.
Richard Huddy, AMD’s head of the Developer Relations Department, said in a talk with PCgameshardware, “I would say we don’t make money by delivering slow hardware. Our expectation is that we’ll give you a really pleasant surprise this year when we ship our DX11 hardware.”
After that he demonstrated the advantages of Hardware Tessellation and DirectX 11 Compute Shaders on DirectX 11 ready graphics cards.

DirectX 11 Compute Shader: Three times faster than DX10.1 due to Local Data Share (http://www.pcgameshardware.com/aid,689924/DirectX-11-Compute-Shader-Three-times-faster-than-DX101-due-to-Local-Data-Share/News/)

AMD's Developer Relations boss Richard Huddy explains the mode of operation of Ambient Occlusion - the DirectX 11 Compute Shader is said to provide up to three times the performance possible with DirectX 10.1.
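A rough illustration (plain Python, not real shader code) of why Local Data Share helps a screen-space technique like ambient occlusion: neighbouring pixels sample overlapping depth values, so a thread group can cooperatively load a tile into on-chip shared memory once, instead of every thread fetching its whole neighbourhood from video memory. The tile and radius sizes below are illustrative assumptions:

```python
# Count off-chip memory fetches for a tile of pixels, with and without
# a cooperatively loaded shared-memory tile (the Local Data Share idea).
def fetches_without_lds(tile, radius):
    # every pixel independently fetches its full (2r+1)^2 neighbourhood
    return tile * tile * (2 * radius + 1) ** 2

def fetches_with_lds(tile, radius):
    # the thread group loads the tile plus a 'radius'-wide apron, once
    return (tile + 2 * radius) ** 2

t, r = 16, 4
print(fetches_without_lds(t, r))                      # 20736
print(fetches_with_lds(t, r))                         # 576
print(fetches_without_lds(t, r) / fetches_with_lds(t, r))  # 36.0x fewer
```

The real-world speedup is far smaller than this fetch ratio (caches already absorb much of the redundancy), but it shows where the claimed DX11 Compute Shader advantage over DX10.1 comes from.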

ATI will catapult not one card, but a complete DirectX 11 line-up into orbit, ranging from $50 up to the high-end parts in their respective three-figure price brackets. Just like the Radeon 4000 series, consisting of 4400, 4500, 4600, 4700 and 4800 parts, the new Radeon series will consist of entry-level [Cedar], mainstream [Redwood], performance [Juniper], high-end [Cypress] and dual-GPU [Hemlock] parts. When it comes to codenames, you'll notice that all of these parts are named after plants, with some belonging to the same family [Cedar, Juniper, Redwood]. Note that not all of these parts will be launched on the same day. Some might have to wait until the holiday season to arrive in OEM-level numbers, but all in all, this is the strongest line-up ATI has had in years. We might even dare to say the strongest line-up ever from any GPU manufacturer.

Apparently NVIDIA wants to continue development of the GeForce GTS 240, at least for its OEM customers, if not AIC partners that cater to the retail consumer segment. The GeForce GTS 240 reference design accelerator is in accordance with the schematics that surfaced back in February, and maintains a single-slot design overall. Under the hood is the 55 nm G92b graphics processor with 112 shader processors, a 256-bit GDDR3 memory interface, 1 GB of memory, and reference clock speeds that match that of GeForce 9800 GT OC: 675/1620/1100 (core/shader/memory). The card supports 2-way SLI, and should be priced in the sub $130 space.
More precisely, the OEM GTS 240s will end up in ready-built computers, Apple-style, whose users will marvel at these cards' never-before-seen capabilities...

One reader named Firewings [CCG] at the Expreview forum (Chinese version) has figured out a way to get SLI support on his Asus Maximus Formula motherboard based on the Intel X38 chipset, with one NVIDIA GeForce 8600GT and one GeForce GTX 260 graphics card.

Expreview.com has contacted the reader and run a quick test for validation. We chose an Intel QX9650 OC (4GHz) CPU, an Asus Maximus II Formula motherboard, one Yeston GeForce GTX260+ 896GD3, and one Galaxy GeForce GTX 260+ graphics card for the test.

Slated to ship from around August 18, the HD 4860 will replace the HD 4850, which handles the lower price segment of $90~$110.

Even as the Radeon HD 4770 is suffering stock shortages around the world, ATI seems to be going ahead with the HD 4750, a graphics card based on the 40 nm RV740 GPU. The HD 4750 has similar specifications to the HD 4770, and it even sports GDDR5 memory. The RV740 core runs at 730 MHz, and its memory at 800 MHz (3200 MHz effective). While the memory clock speed is identical to that of the HD 4770, the core is clocked slightly lower. Overall, the card won't have as much overclocking headroom as the HD 4770, because it will not draw power from a 6-pin power connector. The design ensures that existing lower-performing chips are utilized more effectively. The HD 4750 is expected to be priced at $88 initially. The first batch of these cards will be very small, quantity-wise (a heads-up for prospective buyers).
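The "800 MHz (3200 MHz effective)" arithmetic, and the bandwidth it implies on a 128-bit bus, can be sketched as follows; GDDR3 transfers data twice per clock, GDDR5 four times:

```python
def effective_mhz(base_mhz, memory_type):
    """Effective data rate from the base memory clock."""
    pump = {"gddr3": 2, "gddr5": 4}[memory_type]
    return base_mhz * pump

def bandwidth_gbs(base_mhz, memory_type, bus_bits):
    """Peak memory bandwidth in GB/s: data rate x bus width / 8 bits."""
    return effective_mhz(base_mhz, memory_type) * bus_bits / 8 / 1000

print(effective_mhz(800, "gddr5"))       # 3200 MHz effective
print(bandwidth_gbs(800, "gddr5", 128))  # 51.2 GB/s
```

This is why GDDR5 on a narrow 128-bit bus can match GDDR3 on a much wider one, which is the whole appeal of these mainstream RV740 parts.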

After switching to a more powerful fan and adding a heatsink for the MOSFETs, the clocks can be improved to 820/1800/1375MHz.

The Zotac GTX260 Extreme can be effortlessly overclocked to above 800MHz, and it can even outperform the GeForce GTX285…

The awesome card is expected to be available in Chinese market next month, with lower pricing than GeForce GTX275. Zotac is also working on a GeForce GTX285 with 15 phase PWM design as well. We do look forward to seeing more of Zotac’s engineering skills in the future.

The die measures a seemingly huge 338 mm², which for 40 nm genuinely is huge, as the transistor count of ~2.1 billion attests. In contrast, AMD's older flagship GPU, the RV790, holds 959 million, and NVIDIA's GT200 holds 1.4 billion.
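Put differently, the quoted figures work out to roughly 6 million transistors per mm², a quick sanity check on why 338 mm² counts as huge at 40 nm:

```python
def mtransistors_per_mm2(transistors_millions, area_mm2):
    """Transistor density in millions of transistors per mm^2."""
    return transistors_millions / area_mm2

# ~2.1 billion transistors on 338 mm^2
print(round(mtransistors_per_mm2(2100, 338), 1))  # 6.2
```

For comparison, the same formula on GT200's quoted numbers (1.4 billion on a much larger 55 nm die) yields a noticeably lower density.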

The PCB has three distinct areas: connectivity, processing, and VRM. Fueling the GPU is a high-grade 4-phase digital PWM power circuit, while the PCB has placeholders for an additional vGPU phase. The 8 (or 16 on the 2 GB model) memory chips are powered by a 2-phase circuit. Power is drawn from two 6-pin PCI-Express power connectors, but there seems to be a placeholder for two more pins, i.e., to replace one of those 6-pin connectors with an 8-pin one. Bordering the GPU on two sides are the 8 GDDR5 memory chips, which AMD says are a generation ahead of present GDDR5 and support reference frequencies as high as 1300 MHz (2600 MHz DDR, 5.20 GHz effective). In the 2 GB variant, 8 more chips sit on the other side of the PCB; this is perhaps what the backplate is intended to cool. On the connectivity portion are the two CrossFire connectors, DisplayPort, HDMI and a cluster of two DVI-D connectors.

While HD 5870 sports 1600 stream processors, 80 TMUs, and 32 ROPs, HD 5850 has 1440 stream processors, 72 TMUs, and 32 ROPs. Although 32 ROPs puzzles us for a 256-bit wide memory interface, we suspect low-level design changes that make "32 ROPs" more of an effective count than an absolute count. While HD 5870 features over 800 MHz core clock and 5.20 GHz memory, its little sibling has over 700 MHz core clock and 4.40 GHz memory. Price points expected are US $449 for Radeon HD 5870 2 GB, $399 for HD 5870 1 GB, and $299 for HD 5850. AMD is expected to announce all three models on the coming 23rd.
First Radeon HD 5870 Performance Figures Surface (http://www.techpowerup.com/index.php?103786)
http://www.czechgamer.com/pics/clanky/RobertVarga_14-09-2009-11-29-47_hd5870hawx.jpg

Radeon HD 5870 is anywhere between 5~155 percent faster than GeForce GTX 285. That's a huge range, and leaves a lot of room for uncertainty.

When two HD 5870 cards are set up in CrossFire, the resulting setup is -5 percent (5% slower) to 90 percent faster than a GeForce GTX 295. Strangely, the range maximum is lower than that of the single card.

When three of these cards are set up in 3-way CrossFireX, the resulting setup is 10~160 percent faster than a GeForce GTX 295.
The Radeon HD 5850 on the other hand, can be -25 percent (25% slower) to 120 percent faster than GeForce GTX 285.
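Since "-5 percent faster" reads oddly, here is the arithmetic spelled out: a percent-faster figure scales the baseline's frame rate, so negative values mean slower. The 60 fps baseline below is just an illustrative number:

```python
def relative_fps(baseline_fps, percent_faster):
    """Frame rate implied by a 'percent faster than baseline' figure;
    negative percentages mean slower than the baseline."""
    return baseline_fps * (1 + percent_faster / 100)

# If the GTX 295 ran a title at 60 fps:
print(relative_fps(60, -5))   # 57.0 fps  (the '5% slower' case)
print(relative_fps(60, 90))   # 114.0 fps (the best case quoted)
```

The width of these ranges is exactly why single average numbers tell you so little here.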
The differences look extremely large to me (read: biased)... but then again, the new die is HUGE...

The HD 5770 card is codenamed Countach while the HD 5750 is codenamed Corvette, and both come with 1GB of GDDR5 memory on a 128-bit memory interface. Juniper will possess all the features of its higher-end counterpart: 40nm, DX11, Eyefinity technology, ATI Stream, UVD2, GDDR5 and, best of all, it is going to be very affordable.

One of the reasons why AMD is not mass-producing the Radeon HD 4700 (RV740) series now is that the HD 5700 series will be replacing it soon, arriving one month after the HD 5800 series. It will meet NVIDIA's D10P1 (GT215) series head-on in October, so expect a full-fledged war then. With a performance target of 1.6x over the HD 4770 and 1.2x over the HD 4750, they are surely packed with enough power to pit against NVIDIA's lineup. Pair them up and you will get a boost of 1.8x, which is roughly the performance of a Cypress card.
If it's as these guys say (I'd be surprised), then the HD 4770 launch was apparently just a paper launch, with extremely few units sold to date (especially here), as far as I know.

If anything, the HD 5870 really showed the inherent difficulties with cards like the GTX 295 and HD 4870X2 which both live and die by their drivers rather than raw horsepower. While both dual chip cards outperformed the HD 5870 on a regular basis, they couldn't shake their limitations in order to deliver a killing blow. If anything, our results should also make it apparent that only reporting average framerates means dick all when it comes to comparing GPUs. A gamer could have a totally smooth gameplay experience until that one intense scene where performance drops like a stone. We've all had it happen to us and those situations continue to be the bane of the dual GPU cards' existence.
ATI Radeon HD 5870 DX11 Video Card Review @ legitreviews (http://www.legitreviews.com/article/1080/4/) - tested against the HD 4870 X2, HD 4890, GTX 295, GTX 285 and a CrossFire of 5870s. Interesting testing/analysis of the audio content output over HDMI from a Blu-ray movie + PowerDVD.

I can't begin to express how relieving it is to finally have GPUs that implement a protected audio path capable of handling these overly encrypted audio streams. Within a year everything from high end GPUs to chipsets with integrated graphics will have this functionality.
... as well as the sections covering image quality in games. Their conclusions:

the 5870 is the single fastest single-GPU card we have tested, by a wide margin. Looking at its performance in today’s games, as a $379 card it makes the GTX 285 at its current prices ($300+) completely irrelevant. The price difference isn’t enough to make up for the performance difference, and NVIDIA also has to contend with the 5850, which should perform near the GTX 285 but at a price of $259. As is often the case with a new generation of cards, we’re going to see a shakeup here in the market as NVIDIA in particular needs to adjust to these new cards.

AMD was shooting to beat the GTX 295 with the 5870, but in our benchmarks that’s not happening. The 295 and the 5870 are close, perhaps close enough that NVIDIA will need to reconsider their position, but it’s not enough to outright dethrone the GTX 295. NVIDIA still has the faster single-card solution, although the $100 price premium is well in excess of the <10% performance premium.

Moving away from performance, we have feature differentiation. AMD has a clear advantage here with DirectX11, as the 5870 is going to be a very future-proof card. When games using DX11 arrive, it’s going to bring about a nice change in quality (particularly with tessellation). However it’s going to be a bit of a wait to get there.

Gigabyte's newest entry-level graphics accelerator is based on one of NVIDIA's first GPUs built on the 40 nm fab-process. The Gigabyte GeForce GT 220 OC (GV-N220OC-1G) is based on the new GT216-300 GPU from NVIDIA that features 48 shader processors, support for DirectX 10.1, and a 128-bit wide GDDR3 memory interface accommodating 1 GB of memory.

The GPU is clocked at 720 MHz, with its shader domain at 1567 MHz, and memory at 800 MHz (1600 MHz DDR). With connectivity options that include DVI-D, D-Sub, and HDMI with a gold-plated connector (merely aesthetic, as digital connections don't benefit), the card sports a cooler design similar to its custom-design Radeon HD 4770 1 GB accelerator.

Intel Shows Off Working Larrabee, Set to Take on AMD, NVIDIA Next Year (http://www.dailytech.com/Intel+Shows+Off+Working+Larrabee+Set+to+Take+on+AMD+NVIDIA+Next+Year/article16332.htm) - VERY interesting

Built on a multicore die-shrink of Intel's Pentium P54C architecture, the powerful new graphics chip was able to render a ray-traced scene from the id Software game, Enemy Territory: Quake Wars, with ease. Ray-tracing is an advanced technique that has long been touted as an eventual replacement for rasterization in video games. Currently it is used for the high quality 3D animation found in many films.

The working chip was said to be on par with NVIDIA's GTX 285. With AMD's latest offerings trouncing the GTX 285 in benchmarks, the real question will be the power envelope, how many Larrabees Intel can squeeze on a graphics board, what kind of Crossfire/SLI-esque scheme it can devise, and most importantly the price.

A direct comparison between NVIDIA's GTX 285 and Larrabee is somewhat misleading, though, because Larrabee is unique in several ways. First, Larrabee supports x86 instructions. Second, it uses tile-based rendering and accomplishes tasks like z-buffering, clipping, and blending in software that its competitors do in hardware (Microsoft's Xbox 360 works this way too). Third, all of its cores have cache coherency. All of these features stack up to (in theory) make Larrabee easier to program games for than NVIDIA's and AMD's discrete offerings.

The GPU also still has some time to grow and be fine-tuned. It's not expected until the first half of 2010. Intel is just now lining up board partners for the new card, so it should be interesting to see which companies jump on the bandwagon.

DirectX 11 GPU War Heats up Between AMD and NVIDIA (http://en.expreview.com/2009/09/27/directx-11-gpu-war-heats-up-between-amd-and-nvidia.html)

http://en.expreview.com/img/2009/09/27/AMD_NVIDIA_GPU_Roadmap.jpg

AMD announced the Radeon HD 5800 series (Cypress) graphics cards on Sep 23rd, and is now busy planning for HD 5700 series (Juniper) aiming at sub $200 mainstream market for October launch.

NVIDIA is concentrating on GT300 which is scheduled for launch in December. Though the specs are vague yet, sources suggest that AIC partners will get to design the GT300 cards themselves, which means no more boring reference cards.

It is clear that Nvidia is concentrating as much, if not more, on parallel computing as on gaming. GT300 is expected to be a revolution, rather than the evolution Cypress was. GT300 will be large and hot, and is rumoured to be faster than the HD 5870. It is unknown how GT300 will compete with Hemlock (5870 X2), although Fudzilla suggests a dual-GT300 version is being planned. That could mean: a) a significantly cut-down version of GT300 × 2; b) GT300 is more efficient than we are expecting; c) the world's first widely distributed >300W TDP card (the Asus Mars is the only one thus far, though it is very much a limited edition); or d) Nvidia has to wait until the next half-node die shrink, like it did with GT200.

Enable PhysX in Batman Game Play Without NVIDIA GPU Present - And get playable FPS to boot! (http://forum.beyond3d.com/showthread.php?p=1332461)

One thing that’s very clear in these benchmarks is that as things currently stand, the 5850 has made the GTX 285 irrelevant (again). The 5850 is anywhere between 9% and 16% faster depending on the resolution, cheaper by at least $35 as of Tuesday morning (with everything besides a single BFG model going for +$70 or more), and features DirectX11. The 5850 is a card that manages to – if at times barely – outclass the GTX285 in performance. If you’ve been waiting for a price shakeup, this is what you’ve been waiting for.

the GT 240 features 96 Processing Cores, a 128-bit memory interface and 512MB or 1GB of GDDR3 memory, PhysX and CUDA support, a single-slot cooling system, and D-Sub, DVI and HDMI outputs. In terms of clocks, the upcoming card has its GPU, shaders and memory set to 550, 1340 and 1700/1800 MHz (512/1024 MB VRAM).

The GeForce GT 240 is expected to be released later this month so expect more info (and pictures of the card) very soon. Not excited eh? Same here.

Today's reports claim Nvidia is likely to have working samples no earlier than the second week of January 2010. Xbitlabs speculates an actual release in February 2010. HardOCP is less optimistic, suggesting any "real availability" is unlikely even by March 2010, in a best-case scenario.

The memory of choice is GDDR3 at 2000MHz or GDDR5 at 3400MHz, and the card will come with either 512MB or 1024MB. The memory interface is 128 bits wide, and the card will sit between the Geforce GT 220 and Geforce 9800GT in terms of performance.

Nvidia compares the GT 240 with the Radeon HD 4670 and beats it in the top 11 games, at least in the GDDR5 version.