This direction is a bit off from my intention. It doesn't matter much to people on TPU, or to the GTX 300 series discussion, who gets M$'s next console; what matters is which company makes the best GPU to support the API M$ uses in its next console.

Yes, well that's the whole thing...DX11 will be six months old when Fermi finally launches, meaning that developers have had their hands on DX11 code for much, much longer (as well as ATi's DX11 hardware, which went out as early as April of last year), yet we know ATi does well with DX11...and nV's apparent lack of hardware tessellation is the nail in the coffin for their acceptance by M$ for console GPUs.

"Debating unified against separate shader architecture is not really the important question. The strategy is simply to make the vertex and pixel pipelines go fast. The tactic is how you build an architecture to execute that strategy. We're just trying to work out what is the most efficient way.

"It's far harder to design a unified processor - it has to do, by design, twice as much. Another word for 'unified' is 'shared', and another word for 'shared' is 'competing'. It's a challenge to create a chip that does load balancing and performance prediction. It's extremely important, especially in a console architecture, for the performance to be predictable. With all that balancing, it's difficult to make the performance predictable. I've even heard that some developers dislike the unified pipe, and will be handling vertex pipeline calculations on the Xbox 360's triple-core CPU."

I do not think nV is a contender at all for M$. DX10 being a failure has EVERYTHING to do with nV's snubbing it during its inception, but that was a childish tactic to get revenge for M$ choosing ATi over nV for the current XBOX. You can literally see the bitterness in David's words. Anyway, if nV is gonna be in the next Sony box, it definitely WON'T be in M$'s.

Nvidia would be a much better fit to develop for the next PSP, given the mobile phone R&D they have been doing. Now, when it comes to the PS4 and the next Xbox, I don't know.

I would assume it would be a split, with ATi developing for one and Nvidia for the other, since the volume needed for console chips would be too much for a single company to handle given how many of each will be sold.

I'm not going to ignore that ATI has a lead and is innovating with new APIs, but to keep the discussion about the APIs on topic with the GT300, I am forced to point out that what we really want to know is how the GPU will succeed in the next most popular API. This isn't an objective question.

It might be boring not to wave a flag around for one company or the other, but for the sake of keeping it on topic you can easily choose to agree or add something to the discussion of the importance of API standards. Anything is better than focusing on the already staggering lead ATI has in compliance with the new API.

This might seem like a stand-off, but if you posed an independent political thought, devoid of mainstream party ideals, to a group of people, would you want to be the guy who jumps in with information on "the party's" new policy? The fact is that an external influence will drive gaming and force change in the hardware to meet a demand set by that influence. DX11 may not even take off, but it might. Food for thought: yes, one side is a contender in the latest field and the other side is catching up. Does it make sense that whichever side performs better in the API most accepted by the console industry will end up the leader in PC gaming graphics? Do you have anything to say about the chance of DX11 taking hold of the console market? What are developers looking for as tools to succeed in the market? NV has obviously accepted, in action if not in press, that they are unable to resist changing with the standards set by M$. The GT300 is going to follow DX11 standards. Its performance, cost, and release date are all speculation for us. Is this significant? Will Fermi-based enthusiast GPUs theoretically offer superior support for games written for the most recent API?

It's a bit hard for me to come off as unbiased when obviously I have questions about the GT300, but there are underlying thoughts, most of which are important to leave up to further speculation. Being closed-minded is useless when talking about the future; it's about as accurate as Tarot cards in the hands of a stoner at the beach.

I suspect that if NV does develop a GT300-like architecture for the PS4, it will be on a 28-32nm process. This time around, Sony will be much more budget-minded and will pressure NV to make a great, power-efficient chip that is not too costly to manufacture.

The chip that NV made for the PS3 was not any better than the one that ATI made a year earlier for the Xbox360. ATI's chip was roughly equivalent to an X1900 (not XT or XTX, just a plain X1900 like the All-in-Wonder version) plus tessellation and a bit of unified shader support like the R600. NV's chip was like a vanilla 7900GT. Of course, there was less memory on the consoles, with lightning-fast embedded RAM on the Xbox360.

2011 is only a year from now, and we probably will not see the PS4 until 2012 as Sony works hard on ensuring that the console does not cost an arm and a leg like the PS3 did. The PS3 still has not broken even--not even with the recently released Slim version. 2012 leaves enough time for NV to design a budget GT400 chip (one generation after Fermi) for the PS4, developed on a slightly mature 28nm process. Sony is also working hard on improving the user-interactive experience on the PS4 after witnessing the amazing success of the Wii, which was so cheap to make, so do not expect Sony to rush it at all.

I expect the graphics race to continue between the next Xbox and the PS4 because more people will have HDTVs, and everybody will want to game at 1080p with great graphics (most Xbox360 and PS3 games are not true 1080p, and do not even have antialiasing). Since the PS3 had a hard time competing with the Xbox360, even though it was supposed to have a one-year advantage in technology (much as the PS2 crushed Sega's Dreamcast, released one year earlier), Sony will definitely not let NV have it as easy this time around when bargaining for the PS4.

My point, really, in all that, was that nV has already snubbed the API by skipping hardware tessellation, as well as that nV's sights are set on Sony, not M$. So there's only so much speculation to be had...as you said, there are a lot of factors to be considered in ALL MARKETS that Fermi will be launched in.

But in the console market, things aren't so strict about APIs or anything like that when it comes to Sony....I seem to recall a "We didn't want programming the PS3 to be easy", and given that those comments from years ago about unified graphics still seem to ring true from nV ("Making GPUs is DAMN hard"), I do feel there is far more info out there about what Fermi and its derivatives will bring than most admit, but companies' policies of "we do not comment on unreleased products" get in the way of the real hard facts coming to light. APIs don't matter to nV, as quoted in that article...they just want to bring the best solution and the best performance they can...and if they cannot do it within a specific API, then they won't. But that's not gonna stop them from releasing products...and thankfully they have the established market, as well as the seeded staff at development houses, to ensure they do well.

I mean, you can read the whitepaper from the nV site if you want specifics. I have, and it sounds good to me.

I may post about ATi stuff primarily, but that doesn't mean I don't have nV products in my house...I most certainly do use nV products, but don't spend as much time with them, so choose not to discuss them too much.

This gen though..."S3D is for me". I'm buying Fermi-based products...and my current interest is merely to find out how much I'm gonna need to set aside for them...and how many I'm gonna need to get the performance I desire.

nV, as a business, is tough to knock. I must hand it to Jen-Hsun, because he does a fantastic job at bringing success to himself and those around him. With that in mind, I have no doubts Fermi will be a success...and I'll help contribute to that success...I just want to know how much it's gonna cost, because I'm in whether it beats ATI or not. I don't just want performance...I want a great gaming experience.

This post has been a long time formulating, and I welcome any criticisms.

How many of us have gotten at least 3 different claims as to the performance or release of this card? Cynically, I've decided that I'm not going to bat my eyelashes at any claims that come out of CES. There's bound to be a little more truth circling the bowl, but most people will excuse me if I assume the cycle of bullsh!t has yet to flush. I'm not sure if the majority of posters/readers will excuse my overall indifference, because that isn't very exciting. Likewise, it's not hard to speculate that NV may have a true performer to take a crown in 2010, but ATI has a firm place in this generation's line-up, which could mean good or bad things in the future. With the downturn of the global economy there is enough of a depressant force on a number of software companies to recycle old engines, or adopt some sort of broad design utility. The mainstream GPUs will see more action than chopsticks during Chinese New Year. I think it's wise to assume that we're getting dangerously close to a point where GPUs must offer stellar performance in a new API, because Microsoft not only authors the DX runtimes but is also a console competitor. Realistically (and correct me if I'm wrong) they're going to merge development of their runtimes with console development. The paradigm shift will come when enough of the software industry is willing to move.

If you accept any of these ideas then I offer a summary of my thoughts.

-the GT300/GT100 series cards are going to take a crown in performance, but this generation will offer little more than a spitting contest between ATI/NV.
-3D environment software development will become further compartmentalized, and game developers will buy into a smart, economic standard before leaning head-on into a new API that is not yet mature/affordable in hardware support.
-Microsoft (3v!L3) will most likely decide which generation of GPU holds the standard, for a lifetime determined by their next console.

I'm a bit off topic, and a little on topic. It's pretty obvious, but I figured this is a nice mix of topic all rooted around the importance of the GT300.

I would agree with most of your post, especially the part about there not being much difference between the performance of ATI and Nvidia this time around. I also agree that the global economy will play a role in the type of cards and chips we see released in the near future.
That last part I will comment on a little myself. Regardless of whether Nvidia takes the performance crown back this round, I think control of the market will be based on which card gives the best performance for the price (again, because of the global economy). So if that is the case, and Nvidia follows its normal rule of releasing cards that are uber-expensive, then I believe ATI will give Nvidia a whooping this round.
I am sure that Fermi is a great and really powerful card. Let's also pretend for a second that it really can beat a 5870 by 48%. That would be some amazing technology. But it would also be some very expensive technology. Also, with the amount of time and money Nvidia has spent on Fermi, I seriously doubt it is going to be an affordable card that can compete with the 5870 on price-per-performance. Sure, there are a lot of enthusiasts out there who want a card like what I described up there. But in today's world, how many of those same enthusiasts can afford that same card? I would bet a lot fewer than there used to be.
So unless Nvidia brings out a very great-performing cheap card...... I think that Nvidia might have some problems.

The rest of your comment resounds loudly in my head...but knowing that ATi hardware was used for development of DX9, DX10, and now DX11, I find it hard to believe that nVidia has any chance in the console market, as they've snubbed M$ too many times.

Now, this comment above was made by someone else.... but I agree with him more than I agree with your views. ATI has had a lock on the console market for quite some time. I really don't see Nvidia being able to take that away from them.
Perhaps that is why they are going so hard after GPU computing. That, in my eyes, is still a very open market that Nvidia has a wonderful chance of taking.

Well, these are just some of my ideas; take them for what you will.

"...nV's apparent lack of hardware tessellation is the nail in the coffin for their acceptance by M$ for console GPUs."

The teams are happy to have you as a consumer. Thanks for understanding, and especially for not taking it the wrong way.

P.S. I do love my 5XXX series cards! RED POWER

You're particularly observant when it comes to previous trends in hardware folk trying to nose up the software guys. Adhering to hardware logic isn't as profitable as making it easy on the software side. Devs today need to be catered to as much as us end users. It's just that complex.

I would bet that Fermi will cost an arm and a leg. I would imagine a £400 price tag MINIMUM for the GTX380, probably more like £450-£500, and around £300-£350 for the GTX360. I also don't think the performance gain will be worth the premium, but it's always the same with Nvidia for the first few months. I would hope for a lower price to drive down HD5xxx prices, but due to the huge delay on Fermi, I can't see nV being able to feasibly do so. However, if these two cards do compete with the 5970 and 5870 respectively on performance, it could be very interesting indeed.
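A quick break-even check on that pricing guess: at a given price premium, a card needs the same percentage more performance just to match the cheaper card's price-per-performance. A minimal sketch, assuming the speculative ~£450 GTX 380 figure above and a ~£300 HD 5870 (my own placeholder, not a real street price):

```python
# Break-even price/performance check. All prices are speculation from
# the thread (GTX 380 ~ GBP 450) plus an assumed HD 5870 street price
# (GBP 300) -- neither is an official figure.
def required_speedup(price_new: float, price_ref: float) -> float:
    """Relative performance the new card needs to match the reference
    card's price-per-performance (equal perf/price => perf ratio = price ratio)."""
    return price_new / price_ref

ratio = required_speedup(450.0, 300.0)
print(f"GTX 380 needs {ratio:.2f}x the 5870's performance to break even")  # 1.50x
```

So even at the low end of those guesses, Fermi would have to be ~50% faster than a 5870 just to tie on value, which is roughly what the rumored 48% lead would deliver.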


Personally, I think the addition of PhysX (even though I hate that it's a closed API, and that they won't license it to ATI) is justification enough for higher prices over the competition.

I mean, sure, ATI supports Havok GPU physics, but nearly no one uses it in their apps. PhysX has pretty good market saturation, and it does bring added features that ATi has no claim to.

So, slightly more performance, and more features = higher prices. Doesn't bother me one bit.

And while I may knock nV for not supporting DX10, I completely understand why they took so long, but I am also very much aware that the present install base for DX11 includes all Vista and Win7 machines, whereas DX10 just had the few Vista boxes that sold.


This has been said to death, but Nvidia not only wanted to license PhysX to AMD, they offered it for free!! It was AMD who said no to PhysX in the first place, probably because they were (and still are, tbh) significantly behind in GPGPU. Adopting PhysX would have opened the door to PhysX benchmarks, in which Nvidia would stomp Ati, just like they do in F@H, for example. On the other hand, Ati cards were more than capable of handling the PhysX present in games, because that can easily be run on an 8400 GS, so if this were about giving the best to customers they would have just accepted. Shady business practices go both ways, sadly...

On topic

Regarding the Fermi numbers posted above, once again, they are more than believable and match the numbers I have predicted based on how past Nvidia cards have scaled according to their specs.

Seriously, I don't know why people are so reluctant to believe that Fermi is going to be twice as fast as a GTX285, when its specs point to a card that is 2.5x faster. The only reason people are giving is "what happened in the past" in regards to GT200 vs. RV770... Well, RV770 was the best Ati card in years, while GT200 was a semi-failure; it didn't meet expectations at all: they had to decrease the SP count early on in the development of the chip, missed target clocks by at least 10%, used memory that was half the speed of the competition's, and developed it at 65nm because they were too cautious and scared (Huang's own words, that last one). Those things made GT200 far slower than it should have been in Nvidia's mind. Remember, one of their guidelines is "twice the performance every 12-18 months". Fermi has none of those problems: they went with the new process, they went for the high SP count they truly wanted (2.15x a GTX285, to be precise) and, according to the various specs posted from different sources, they met the clocks (650 core/1700 SPs). Those things make the chip "big" (smaller than GT200, though; take that into account too) and difficult to produce on the ailing 40nm process, but that's the only drawback to their decision. This time they took the risk of making the chip they wanted; they didn't cut anything, something they did with GT200, and they are truly paying for that decision in the form of a delay. But the rest is not going to fail just because some people want it to fail. Common sense and history tell us that Fermi will be around twice as fast as a GTX285, because its specs say so and there's not a single thing that points to the contrary.
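The 2.5x claim follows from simple units-times-clock arithmetic. A back-of-envelope sketch, using the rumored Fermi figures quoted in this thread (512 SPs at 1700 MHz -- unconfirmed at the time) against the GTX 285's known 240 SPs at 1476 MHz; this deliberately ignores IPC, memory bandwidth, and architectural changes:

```python
# Naive ALU-throughput scaling: shader units x shader clock.
# GTX 285 figures are the shipping card's specs; the "Fermi" figures
# are the rumored 512 SP / 1700 MHz numbers from this thread.
def alu_throughput(sp_count: int, shader_clock_mhz: int) -> int:
    """Relative shader throughput, ignoring IPC and memory bandwidth."""
    return sp_count * shader_clock_mhz

gtx285 = alu_throughput(240, 1476)
fermi_rumored = alu_throughput(512, 1700)

print(f"SP count ratio:  {512 / 240:.2f}x")               # 2.13x
print(f"Overall scaling: {fermi_rumored / gtx285:.2f}x")  # 2.46x
```

That 2.13x SP ratio matches the "2.15x" figure in the post, and the combined ~2.46x is where the "2.5x faster on paper" claim comes from.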

What are you doing Benetanegia! This thread is for Nvidia bashing and not rational thought. How dare you turn it intellectual again.

BTW, thank you to those who are still throwing ideas and info out there. Many people (myself included) are looking for good analysis from those who know what they are talking about. That's hard to find on the internet these days. Thank you.


You don't know the full details of how that deal was to be made, and what stipulations would have been placed on AMD. I don't either, exactly, but I definitely would like to know. Seems to me that the terms of such a deal were something AMD would not accept, and they were left waiting for Havok, which is developed by Intel, also their competitor. Fact of the matter is that if AMD/ATI could take the exclusivity of PhysX away from nV, it would hurt nV's sales.

Case in point: as soon as nV took over Ageia, PhysX drivers have disabled PhysX with ATI cards as the primary display device...even with the add-in Ageia cards. And now that these updated libraries are shipping with games, and install with the game, nV has completely broken PhysX for anyone but themselves.

If they truly wanted to offer it for free, why'd they break it with the add-in Ageia cards? There's not even any need for them to do any stability testing...or qualification...it was already done...but I cannot play PhysX with GRAW 1 & 2 any more, and I used to be able to...

Why'd they cut ATI from anti-aliasing in Batman:AA?

It's business, and giving your stuff away for free isn't good business. And nV is an OVERLY good business.

"Seriously, I don't know why people are so reluctant to believe that Fermi is going to be twice as fast as a GTX285, when its specs point to a card that is 2.5x faster. [...] Common sense and history tell us that Fermi will be around twice as fast as a GTX285, because its specs say so and there's not a single thing that points to the contrary."

Yield issues, and delayed launches (2 now: first with Win7, and secondly, November). If it wasn't for the yield issues for ATI, with less transistor density, there'd be no questions about Fermi. Jen-Hsun holding up a fake card didn't help either.

None of the delays are truly 100% up to nV, but their design choices have affected yields far more than ever before. ATi probably would have been just as complex, but had prior experience with the process that dictated how their current gen was designed. None of what ATI has done really reflects on what nV is doing; however, the differences in design do say a lot about what's affecting nV currently. nV has already admitted they need 0% leakage to hit their target, which I linked to earlier in the thread.


"Fact of the matter is that if AMD/ATI could take the exclusivity of PhysX away from nV, it would hurt nV's sales. [...] If they truly wanted to offer it for free, why'd they break it with the add-in Ageia cards?"

They disabled it LONG after the acquisition, a year and a half after, and the reason they disabled it was lack of support and QA from AMD's end. I'm not going to discuss what has been discussed like 1000 times already. Lack of support for QA is more than enough reason not to implement something. GPU and PhysX drivers have to work together. What was Nvidia supposed to do? Reverse-engineer every single new AMD driver? Put spies in AMD HQ months in advance, so that they could start working on PhysX compatibility for new AMD cards (i.e. RV870) with enough time to make it work properly? They have a lab with 500 PCs to test stability and QA on Nvidia cards; should they pay for another 500 so that things work well with AMD, or should they let an incomplete solution be widely available and take the blame when it doesn't work? Did they really have to pay so that it worked on the competitor's hardware when the competitor didn't care at all from the beginning? Was AMD willing, at least, to pay for that QA and give some info on new releases so that everything could go smoothly on AMD's end? The answer is NO, NO, NO and NO to everything. Hence the only thing they could do was disable PhysX except for Nvidia cards, or keep paying to do a service for AMD, plain and simple.

Same as above. Batman uses Unreal Engine 3. UE3 has no AA. They paid for and QA'd a special AA path for Nvidia cards. The developer contacted AMD to do the same for AMD cards; AMD refused to work with the developer from the beginning (not only regarding that feature) because it was a TWIMTBP game. End of story: you don't help, you don't get. And that's something Nvidia had nothing to do with.

It's absolutely good business when you know you are faster at that specific thing AND when that thing has the possibility of making you sell a second card per computer. Not to mention setting a precedent for the power of GPGPU, when your next release is heavily based on GPGPU.

"Yield issues, and delayed launches... None of the delays are truly 100% up to nV, but their design choices have affected yields far more than ever before."

Bad 40nm is a major issue that has been heavily underestimated. Sure, the nature of the chip has its influence, but 90% of the blame is on TSMC. You need an absolute minimum of 2 years to design a new chip. If you are told that you will have a working (implying good or decent yields) process by date X, you develop for that process. If you design a sports car that is supposed to be driven on highways, and when you arrive there's no asphalt and your car is shit on dirt... well, I'm sure there are plenty of interpretations of that, but I know what to think. Fermi was designed for a mature (1-1.5 year old, 80-90% yields expected) 40nm process and they found a 30-40% yield process; who's really to blame? You can design your chip planning for a setback that puts yields at 60-70%, but 30-40%, come on... Fact of the matter is that AMD has serious problems with the process too, to the point of selling a lame 300,000 HD58xx cards in 3 months when the chip has been in production since June (6 months of production = 300,000 chips = shameful on TSMC's part*).

*Just to see how low a number of cards that is: in comparison, when the 8800 GT and HD3xxx cards were released in Q4 2007 (same quarter, holidays, important), the number of discrete cards sold jumped from a typical 20-25 million units to a whopping 31 million, which suggests lots and lots (millions) of those new cards were sold--and remember that a shortage was claimed for those too. It clearly was a different kind of shortage, though...
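To put that yield complaint in rough numbers, here is a sketch of how many wafers 300,000 chips implies at different yields. The ~334 mm² Cypress die size and the simple gross-die formula are my own approximations, and the 30-40% vs. 80% yield figures are the ones claimed in the post above, not TSMC data:

```python
import math

# Rough wafer math for "300,000 HD 58xx chips in six months".
# Die area (~334 mm2 for Cypress) is an approximation; yields are the
# figures claimed in the thread.
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 334.0

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
# Common gross-die approximation with a simple edge-loss correction term.
gross_dies = int(wafer_area / DIE_AREA_MM2
                 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * DIE_AREA_MM2))

for yield_pct in (30, 40, 80):
    good_per_wafer = gross_dies * yield_pct / 100
    wafers = math.ceil(300_000 / good_per_wafer)
    print(f"{yield_pct}% yield: ~{good_per_wafer:.0f} good dies/wafer, "
          f"~{wafers} wafers for 300k chips")
```

Under these assumptions, a 30% yield more than doubles the wafers needed versus a healthy 80%, which is why the same wafer allocation that would normally be a comfortable launch supply turns into a shortage.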

Explain GRAW 1 & 2, then. Explain 3DMark Vantage. There was no QA to be done; Ageia had already done all of that, and I personally know when and how it transpired, as I bought a PhysX card very shortly after release. It worked fine until the new libraries were released, and the very FIRST nV PhysX driver broke it. Sure, it took some time before this happened, but we got nV guys saying "First, we make sure that what we release doesn't break anything", and this couldn't be further from the truth when talking about the titles released before nV took over. Now, NOTHING works. Breaking that function by disabling it required no cash, but they could just as easily have written code to check whether it was an Ageia card and pop up a disclaimer that functionality could not be guaranteed, rather than allowing the driver for the Ageia card to install, look like it's working, and then do NOTHING.

However, an intrepid team of software developers over at NGOHQ.com have been busy porting Nvidia's CUDA-based PhysX API to work on AMD Radeon graphics cards, and have now received official support from Nvidia - who is no doubt delighted to see its API working on a competitor's hardware (as well as seriously threatening Intel's Havok physics system.)

This cost nV money (the "support")...they actually paid the NGO guys off to stop development/threatened lawsuits upon release. Here's your cost-free solution for ATI cards, and nV squashed it like a bug. So much for offering it for free...it cost money to ensure that AMD did NOT get it free.

If you go back to then (the first nV driver for PhysX, August 2008) and check my posts on XS, you'll find both myself and others complaining about the issue. I just find it comical that it took this long for the media to pick it up...nV breaking PhysX with ATI cards isn't something new...far from it. They have just recently completely disabled it with ATI and nV cards together, but they broke the Ageia cards almost 1.5 years ago...they bought Ageia in February 2008, and by August had released the driver that broke the add-in Ageia cards--a short 6 months, not 1.5 years. And the only way to fix the function, after installing the nV driver, was a full OS re-install...I spent many months on this issue back then, then finally gave up and sold my Ageia card.

When it comes to their own cards, paired with ATI cards, and GPU PhysX, I can hold no fault, but disabling configs that had already been tested, and were working, is inexcusable.

As to the yields...if Fermi wasn't so complex, we'd have had a release of the lower-bin models by now, but even that hasn't happened, so yes, this issue is VERY underestimated. But shortly after Jen-Hsun visited TSMC in person (first half of October), we had TSMC's CEO publicly state that they had yield issues, and that they were fixed. We are now almost 3 months later, and given the supply issue ATI is having, clearly the problem was only fixed a couple of weeks ago, if at all.

"They disabled it LONG after the acquisition... the reason they disabled it was lack of support and QA from AMD's end. [...] Hence the only thing they could do was disable PhysX except for Nvidia cards, or keep paying to do a service for AMD, plain and simple."

QFT... most companies do not want to keep supporting legacy hardware for compatibility reasons. Even ATI has dropped support for the X1900XTX with newer drivers, while NV continues to support cards older than that (6800 Ultra), IIRC... if I'm wrong, then please correct me.

"Batman uses Unreal Engine 3. UE3 has no AA. They paid for and QA'd a special AA path for Nvidia cards. [...] End of story: you don't help, you don't get."

ATI did add AA support to the UE3 engine a month or two after the game was released. It was causing quite a stir in the community, so ATI eventually did it with their HD2900XT back then. Now, there are several reports that many UE3-engine games do not have AA with ATI cards, so I do not know for sure. I haven't tried a UE3 game (other than Batman, which obviously does not support AA) with my 4870 and recent drivers--should I check to make sure for you guys?

Batman: AA is a much, much blander game without PhysX--even more so than Mirror's Edge without PhysX, anyway.

It's absolutely good business when you know you are faster at that specific thing AND when that thing has the possibility of making you sell a second card per computer. Not to mention setting a precedent for the power of GPGPU when your next release is heavily based on GPGPU.

Good point there... but since Nvidia basically owned PhysX after buying Ageia, ATI probably did not want to fall into the trap and be charged royalty fees later on (there are probably millions of lines of fine print in the business agreement that you have not read). It is not to be expected that NV would unconditionally give it away for free for a period of, say, 10 years.

The bad 40nm process is a major issue that has been heavily underestimated. Sure, the nature of the chip has its influence, but 90% of the blame is on TSMC. You need an absolute minimum of 2 years to design a new chip. If you are told that you will have a working process (which implies good or decent yields) by date X, you design for that process. If you design a sports car meant for highways, and when you arrive there's no asphalt and your car is useless on dirt... well, I'm sure there are plenty of interpretations of that, but I know what to think. Fermi was designed for a mature (1-1.5 year old, 80-90% yields expected) 40nm process and they found a 30-40% yield process; who's really to blame? You can design your chip anticipating a failure that puts yields at 60-70%, but 30-40%? Come on... Fact of the matter is that AMD has serious problems with the process too, to the point of selling a lame 300,000 HD58xx cards in 3 months when the chip has been in production since June (6 months of stock = 300,000 chips = shameful on TSMC's part*).
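The yield argument above can be made concrete with the textbook Poisson die-yield model, yield ≈ exp(-D·A), where D is defect density (defects/cm²) and A is die area (cm²). The defect densities below are illustrative assumptions chosen to roughly match the 80-90% vs 30-40% figures quoted in the post, not actual TSMC data; the die areas loosely correspond to a Fermi-class (~500 mm²) and a Cypress-class (~330 mm²) chip.

```python
import math

def poisson_yield(defect_density, die_area_cm2):
    """Fraction of dies expected to be defect-free under the Poisson model."""
    return math.exp(-defect_density * die_area_cm2)

MATURE_D = 0.03  # assumed defects/cm^2 on a mature process (illustrative)
EARLY_D = 0.20   # assumed defects/cm^2 on an immature process (illustrative)

for area in (5.0, 3.3):  # die areas in cm^2 (~500 mm^2 and ~330 mm^2)
    print(f"{area * 100:.0f} mm^2 die: "
          f"mature process {poisson_yield(MATURE_D, area):.0%}, "
          f"early process {poisson_yield(EARLY_D, area):.0%}")
```

Note how the same bad process punishes the bigger die far harder (37% vs 52% with these assumed numbers), which is why a large chip designed around mature-process yields is hit worst when the process underdelivers.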

*Just to see how low a number of cards that is: in comparison, when the 8800 GT and HD3xxx cards were released in Q4 2007 (same quarter, holidays, important), the number of discrete cards sold jumped from a typical 20-25 million units to a whopping 31 million, which suggests lots and lots (millions) of those new cards were sold--and remember that shortages were claimed for those too. It clearly was a different kind of shortage, though...

+1 on that. I do not expect the 28nm process to be any easier than 40nm. I still stand by my hypothesis that the 22nm (or 18-ish nm) process is probably going to be the last process we'll see this decade (yes, the new decade). It will need different materials beyond silicon-germanium, different fabbing tech (extreme UV lithography), and there may be ways to make up for the "choke", like when clock speeds stopped increasing so quickly after 2004 with the first dual-core CPUs--in this case, stackable dies or "distributed" silicon to fix the problem of hot spots in one super-dense bundle.


Secondly, when your foundry has NEVER released a chip with zero leakage, to expect them to be able to is foolish at best, and outright stupid at worst.

Explain GRAW 1 & 2 then. Explain 3DMark Vantage. There was no QA to be done--Ageia had already done all of that, and I personally know when and how it transpired, as I bought a PhysX card very shortly after release. It worked fine until the new libraries were released, and the very FIRST nV PhysX driver broke it. Sure, it took some time before this happened, but we get nV guys saying "First, we make sure that what we release doesn't break anything", and this couldn't be further from the truth for the titles released before nV took over. Now, NOTHING works. Breaking that function by disabling it required no cash, but they could just as easily have written code to check whether it was an Ageia card and pop up a disclaimer that functionality could not be guaranteed, rather than allowing the driver for the Ageia card to be installed, look like it's working, and then have it do NOTHING.
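The "detect the card and warn" suggestion really would be a trivial check. The sketch below is purely hypothetical: the vendor ID constant and the device-list structure are placeholder assumptions for illustration, not the actual PCI IDs or driver-installer code.

```python
# Hypothetical sketch: detect an Ageia PPU at driver-install time and warn the
# user, instead of silently leaving the add-in card non-functional.
# AGEIA_VENDOR_ID is a placeholder value, not necessarily the real PCI ID.
AGEIA_VENDOR_ID = 0x1AC1

def has_ageia_ppu(pci_devices):
    """Return True if any enumerated PCI device looks like an Ageia PPU."""
    return any(dev["vendor_id"] == AGEIA_VENDOR_ID for dev in pci_devices)

# Mocked device list standing in for a real PCI enumeration:
devices = [
    {"vendor_id": 0x10DE, "name": "NVIDIA GPU"},
    {"vendor_id": AGEIA_VENDOR_ID, "name": "Ageia PhysX PPU"},
]

if has_ageia_ppu(devices):
    print("Warning: Ageia PPU detected. PhysX functionality on this card "
          "is not guaranteed with this driver.")
```

A few lines like these at install time would have told owners of add-in cards what to expect, rather than letting the card appear to work while doing nothing.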

This cost nV money (the "support")... they actually paid the NGO guys off to stop development and threatened lawsuits upon release. There's your cost-free solution for ATI cards, and nV squashed it like a bug. So much for offering it for free... it cost money to ensure that AMD did NOT get it for free.

If you go back to then (the first nV driver for PhysX, August 2008) and check my posts on XS, you'll find both myself and others complaining about the issue. I just find it comical that it took this long for the media to pick it up... nV breaking PhysX with ATI cards isn't something new... far from it. They have only just recently completely disabled it with ATI and nV cards together, but they broke the Ageia cards almost 1.5 years ago... they bought Ageia in February 2008, and by August had released the driver that broke the add-in Ageia cards--a short 6 months, not 1.5 years. And the only way to fix the function after installing the nV driver was a full OS re-install... I spent many months on this issue back then, then finally gave up and sold my Ageia card.

When it comes to their own cards paired with ATI cards and GPU PhysX, I can find no fault, but disabling configs that had already been tested and were working is inexcusable.

As to the yields... if Fermi wasn't so complex, we'd have had a release of the lower-bin models by now, but even that hasn't happened, so yes, this issue is VERY underestimated. But shortly after Jen-Hsun visited TSMC in person (first half of October), we had TSMC's CEO publicly state that they had yield issues and that they were fixed. We are now almost 3 months later, and given the supply issue ATI is having, clearly the problem was only fixed a couple of weeks ago, if at all.

The Ageia card was broken EVEN if you had an Nvidia card; it had nothing to do with having an ATI card. They said from the beginning that they would not support the card. Face it, not even Ageia had been supporting the card for a long time, and they were bankrupt. You got 6 months of support instead of nothing; you should be more than happy. Not that it really matters too much in the big scheme of things: fewer than 100,000 Ageia cards were sold. Sorry for those who actually bought one, but 100,000 people, when you have 500 million to care for, is not financially viable for anyone.

This cost nV money (the "support")... they actually paid the NGO guys off to stop development and threatened lawsuits upon release. There's your cost-free solution for ATI cards, and nV squashed it like a bug. So much for offering it for free... it cost money to ensure that AMD did NOT get it for free.

BS. Do you have any proof? Actually Nvidia wanted it implemented and AMD didn't--to the point of AMD not even sending an HD4xxx card to test on. A card!! FFS!! AMD scared them with lawsuits. I know Nvidia didn't, because PhysX was actually free for everyone, and many, many developers were using it and can vouch for that. Nvidia would lose any lawsuit based on that. And they were paid? Yeah, as much as TWIMTBP games are paid to break features on AMD. BS after BS. Seriously, if any of that had ever happened we would already have a lawsuit against Nvidia, or at least one developer (a person, not a company) who would have said something. Thousands of developers working under TWIMTBP and no one said anything? People who can be fired, and who statistically HAVE been fired, and no one takes his revenge with something so simple? There's nothing there, except lies from those who have something to gain from a bad image of Nvidia--yep, ATI/AMD. BS, BS, BS...

J/K. Just bugging ya man, don't take it personally. I only know what's in the public domain plus my own experience, and I purposely try to keep my personal feelings out of it. Don't shoot the messenger... although I wonder why you are so quick to defend nV here... fact is, nV didn't have to make it so the driver, even after removal, still prevented Ageia cards from working. Now this bit of code is packaged in games, and hence we get AMD crying about it, as there's no way for the end user to know that installing a new game might break his previously working config.

Thousands of developers? I do feel you are going off your rocker slightly. Do you have any proof AMD didn't send an HD4xxx card? And of course TWIMTBP games don't break features on ATI cards; they just perform better on Nvidia cards due to the drivers, as a result of better relationships with the devs. Although I have seen a couple of AMD games lately, since the introduction of DX11. 'Tis strange indeed and makes me wonder how this generation will pan out, since AMD is now the DX11 devs' best friend--even BFBC2 is going to be an AMD game.

Still, I wouldn't base my purchase on PhysX, because I think it will either become totally open source or it'll flop in the next few years due to open-source GPU physics, which everyone will have access to. Proprietary solutions rarely work out.

EDIT: out of interest I checked out the Batman: AA PhysX changes, and I can't say they're making me want to sell my HD5870 and go for an Nvidia card atm. Still, if Fermi turns out to be the best thing since sliced bread it'll be a nice addition. Although I still believe that open-source GPU physics will win out, and when it does, we will see some amazing effects in games. Well, little additions, since most games are just delayed console ports.

J/K. Just bugging ya man, don't take it personally. I only know what's in the public domain plus my own experience, and I purposely try to keep my personal feelings out of it. Don't shoot the messenger... although I wonder why you are so quick to defend nV here... fact is, nV didn't have to make it so the driver, even after removal, still prevented Ageia cards from working. Now this bit of code is packaged in games, and hence we get AMD crying about it, as there's no way for the end user to know that installing a new game might break his previously working config.

Nah, just restating what others stated. The link is above. Apparently the NGO guys said nV told them to stop, paid some of them, and threatened a lawsuit. You got any proof?

Again, don't shoot the messenger; 'tis not MY info, just what's out there that anyone can find on other sites.

Erm. It took 1.5 years to disable it; before that it was not supported, or simply broken. Nvidia's video-out feature has been broken in plenty of drivers and for long periods of time. Did Nvidia purposely break it? Let me think about it... NO?

For your information, I am so quick to defend Nvidia against the BS simply because I hate BS and institutionalized ignorance. FYI, up to 1950 black people were considered inferior beings (many people had many arguments to support that idea, and many people were just repeating what others said). I wasn't there, but if I had been, I would have defended them with the same passion as I'm fighting BS here. I defend gays against similar BS with the same passion, and I fight aggression against women and animals too. For the record, I'm not black, I'm not gay, I can guarantee you I'm not a woman (I would have noticed) and I'm definitely not an animal (apart from the fact that I am a human being)... I don't know if you get me...
