Posted
by
ScuttleMonkey
on Friday April 11, 2008 @03:02PM
from the you-don't-scare-me dept.

J. Dzhugashvili writes "Is Nvidia worried about the advent of both CPUs with graphics processor cores and Larrabee, Intel's future discrete graphics processor? Judging by the tone adopted by Nvidia's CEO during a financial analyst conference yesterday, not quite. Huang believes CPU-GPU hybrids will be no different from (and just as slow as) today's integrated graphics chipsets, and he thinks people will still pay for faster Nvidia GPUs. Regarding Larrabee, Huang says Nvidia is going to 'open a can of whoop-ass' on Intel, and that Intel's strategy of reinventing the wheel by ignoring years of graphics architecture R&D is fundamentally flawed. Nvidia also has some new hotness in the pipeline, such as its APX 2500 system-on-a-chip for handhelds and a new platform for VIA processors."

Ray vs. raster. The reason we have so much tech invested in raster is that processing power was not sufficient to do ray tracing. If it had been, we'd never have started down the raster branch of development, because it just doesn't work as well. The results are not as realistic with raster: shadows don't look right, you can't do CSG, and you get edge effects. There are a thousand work-arounds for things like reflections of reflections, lens effects, and audio reflections. Raster is a hack, and when we have the CPU power to do real-time ray tracing, raster rendering will go away.

Raster was a way to make some fairly believable (if cartoonish) video games. They still require some deliberate suspension of disbelief. Only with raytracing do you get the surreal live-or-Memorex feeling of not being able to tell a rendered scene from a photo, except for the fact that the realistic scene depicts something that might be physically impossible.

This is true to some extent, but raster will never completely go away; there are situations where raster is completely appropriate.

For instance, modern GUIs often use the 3d hardware to handle window transforms, blending and placement. These are fundamentally polygonal objects for which triangle transformation and rasterization is a perfectly appropriate tool and ray tracing would be silly.
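The compositing case the parent describes really is just transforming quad corners and letting the hardware fill them in. A minimal sketch in Python (function names and the specific window values are made up for illustration):

```python
# Sketch: why window compositing is a natural rasterization job.
# A window is just a textured quad; the compositor applies a 2D
# transform (scale plus translate here) to its four corners and lets
# the GPU rasterize the result. No rays needed.

def transform_quad(corners, scale, tx, ty):
    """Apply a uniform scale plus a translation to each quad corner."""
    return [(x * scale + tx, y * scale + ty) for (x, y) in corners]

# A 100x80 window at the origin, scaled down for an Expose-style
# overview and moved to (300, 200).
window = [(0, 0), (100, 0), (100, 80), (0, 80)]
placed = transform_quad(window, scale=0.5, tx=300, ty=200)
print(placed)  # [(300.0, 200.0), (350.0, 200.0), (350.0, 240.0), (300.0, 240.0)]
```

Once the corners are placed, triangle setup and texture filtering do the rest; shooting a ray per pixel to answer "which window is here?" would indeed be silly.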

The current polygon model will never vanish completely, even if high-end graphics eventually go to ray tracing instead.

If realism is the goal, global illumination techniques, of which ray tracing and ray casting are a part, would be your best bet. Yes, rasterization has its place. But this is a small place in the grand scheme of things.

All bets are off if the intention is not photorealism. Some hybrid of the two may be best depending on the situation.

One-level-deep ray tracing is no different from triangle rasterisation, even with supersampling. The problem is when you try to do reflections with an entire scene. A single character will consist of over 10,000 triangles, and the scene itself might consist of 1 million triangles. Then you would have to find the space to store all these coordinates, textures and tangent-space information (octrees?). There were experimental systems (PixelPlanes and PixelFlow [unc.edu]) which investigated this problem.
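The "one level deep is no different" claim boils down to the same per-primitive question both approaches answer: does this ray (or pixel footprint) hit this triangle? A toy version of the standard Möller–Trumbore ray/triangle test, as a Python sketch rather than a production kernel:

```python
# Minimal Möller-Trumbore ray/triangle intersection (illustrative only).
# Returns the distance t along the ray to the hit point, or None on a miss.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:           # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# A primary ray straight down the z axis hits a unit triangle at z = 5.
t = ray_triangle((0.1, 0.1, 0), (0, 0, 1), (0, 0, 5), (1, 0, 5), (0, 1, 5))
print(t)  # 5.0
```

A rasterizer effectively amortizes this test across all the pixels a triangle covers, which is where its cache-friendly performance advantage for primary visibility comes from.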

I see your 1 million triangles and raise you 349 million more [openrt.de]. (Disclaimer: I'm not affiliated with OpenRT, I just think this is a neat demonstration.)

Textures, vertices, and the like take up memory with ray tracing just like rasterization. There's a bit more flexibility with a ray tracer to store non-triangular primitives, though. Most fast real-time ray tracers just use triangles anyway.

As for acceleration structures, octrees have largely been superseded by kd-trees (preferably constructed using
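A toy version of such an acceleration structure, assuming a simple median split over 3-D points (e.g. primitive centroids) rather than whatever construction heuristic the comment above was about to name; real ray tracers split with cost heuristics and store triangles, but the shape of the tree and the pruning during traversal are the same:

```python
# Toy median-split kd-tree with a nearest-neighbour query. Illustrative
# sketch only; not a production acceleration structure.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 3                       # cycle through x, y, z
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, target, best=None):
    if node is None:
        return best
    if best is None or dist2(node["point"], target) < dist2(best, target):
        best = node["point"]
    delta = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if delta < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    # Only descend the far side if the splitting plane is closer than the
    # current best hit -- this pruning is what makes the tree pay off.
    if delta ** 2 < dist2(best, target):
        best = nearest(far, target, best)
    return best

pts = [(2, 3, 1), (5, 4, 0), (9, 6, 2), (4, 7, 1), (8, 1, 3), (7, 2, 2)]
tree = build(pts)
print(nearest(tree, (6, 3, 2)))  # (7, 2, 2)
```

The pruning step is the whole point: most subtrees are never visited, which is why tree traversal beats testing every primitive per ray.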

Only with raytracing do you get the surreal live-or-Memorex feeling of not being able to tell a rendered scene from a photo, except for the fact that the realistic scene depicts something that might be physically impossible.

Photos, maybe. But we're still a loooooong way off for real-time video, when you consider that it is still relatively easy to tell CGI from live action in even the highest-budget prerendered movies. At close-ups, anyway.

There must be a conspiracy behind that. There's no way big-budget studios with seven- and eight-figure budgets and virtually limitless CPU cycles at their disposal could be releasing big-screen features that are regularly shown up by video games and decade-old TV movies. Maybe it has something to do with greenscreening to meld the CGI with live-action characters, perhaps it's some sort of nostalgia, or they think that the general public just isn't ready to see movie-length photo-realistic features, but ther

The reason we can so easily tell the difference between CGI creatures and real creatures is not the photorealism of it, but the animation. Evaluate a screen cap of Lord of the Rings with Gollum in it, and then evaluate that entire scene in motion. The screen cap will look astonishingly realistic compared to the video.

Computers are catching up to the computational challenges of rendering scenes, but humans haven't quite yet figured out how to program every muscle movement living creatures make. Attempts for complete realism in 3D animation still fall somewhere in the Uncanny Valley [wikipedia.org].

I don't buy the 'raytracing is so much better than raster' argument. I do agree that it makes it algorithmically simpler to create near-photorealistic renders, but that doesn't mean that raster's only redeeming quality is that it's less burdensome for simpler scenes.

Hacks like radiosity? Indeed, a ray tracer needs to be supplemented with a global illumination algorithm in order to produce plausibly realistic images, but there are many global illumination algorithms to choose from, and radiosity is the only one I know of that can be implemented without tracing rays.

Photon mapping, for instance, is quite nice, and performs comparatively well. (Normal computers aren't fast enough to do it in real time yet, but they'll get th
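The Monte Carlo flavor shared by path tracing and photon mapping can be sketched in a few lines. For a surface under uniform unit radiance, the irradiance integral of cos(theta) over the hemisphere equals pi; a random-direction estimator converges to that value. This is an illustrative toy, not any particular renderer's code:

```python
# Toy Monte Carlo irradiance estimate, the core trick behind path
# tracing and photon mapping alike: average sample_value / sample_pdf
# over random directions.
import math
import random

def estimate_irradiance(n, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()        # uniform hemisphere: cos(theta) ~ U(0, 1)
        pdf = 1.0 / (2.0 * math.pi)     # uniform solid-angle density
        total += cos_theta / pdf
    return total / n

est = estimate_irradiance(200_000)
print(est, math.pi)  # the estimate converges toward pi as n grows
```

Everything that makes these methods expensive in practice (importance sampling, variance reduction, the photon map's density estimation) is machinery to make this average converge with fewer samples.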

The results are not as realistic with raster. Shadows don't look right.

As John Carmack mentioned in a recent interview, this is in fact a bonus, for shadows as well as other things.

The fact is that "artificial" raster shadows, lighting and reflections typically look more impressive than the "more realistic" results of ray tracing. This alone explains why raster will maintain its dominance, and why ray tracing will not catch on.

Raytracing doesn't magically get you better image quality. EXCEPT for shadows, the results look just like rasterization. As usual, people mix up raytracing with path tracing, photon mapping, radiosity, and other GI algorithms. Note: GI can be applied to rasterization as well.

So, which "benefits" are left? Refraction/reflection, haze, and any kind of ray distortion - SECONDARY ray effects. Primary rays can be fully modeled with rasterization, which gives you much better performance because of the trivial cache coherency and simpler calculations. (In a sense, rasterization can be seen as a cleverly optimized primary-ray pass.) This is why hybrid renderers make PERFECT sense. Yes, I know about ray bundles; they are hard to get right, and again: for primary rays, raytracing makes no sense.

"Suspension of disbelief" is necessary with raytracing too. You confuse the rendering technique with lighting models, animation quality and so on. "edge effects" is laughable, aliasing WILL occur with raytracing as well unless you shoot multiple rays per pixel (and guess what... rasterizers commonly HAVE MSAA).

Jeez, when will people stop believing all this BS about raytracing? As if it were a magical thingie capable of miraculously enhancing your image quality...

Raytracing has its place - as an ADDITION to a rasterizer, to ease implementation of the secondary ray effects (which are hard to simulate with pure rasterization). This is the future.
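The aliasing point above is easy to demonstrate: whether the sub-pixel samples come from extra primary rays or from a rasterizer's multisampling, averaging them is what smooths an edge. A toy sketch (the "scene" is just a step edge, purely hypothetical):

```python
# Sketch of why "ray tracing fixes jaggies" is really just supersampling:
# with one sample per pixel a step edge snaps to 0 or 1, while averaging
# several sub-pixel samples (what MSAA does in a rasterizer, or multiple
# primary rays per pixel in a ray tracer) yields intermediate coverage.

def coverage(x):
    """Toy scene: everything right of x = 2.5 is lit."""
    return 1.0 if x >= 2.5 else 0.0

def shade_pixel(px, samples):
    """Average `samples` evenly spaced sub-pixel positions in [px, px+1)."""
    return sum(coverage(px + (i + 0.5) / samples) for i in range(samples)) / samples

one_spp  = [shade_pixel(px, 1) for px in range(5)]
four_spp = [shade_pixel(px, 4) for px in range(5)]
print(one_spp)   # [0.0, 0.0, 1.0, 1.0, 1.0]
print(four_spp)  # [0.0, 0.0, 0.5, 1.0, 1.0]
```

The edge pixel at index 2 gets a 0.5 coverage value only when multiple samples are taken; the one-sample version snaps to 1.0, which is the jagged edge both camps have to pay extra samples to remove.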

In light of the often facetious nature of any sentence containing the words "British Engineering", the comparison of Aston Martin's reputation for reliability with Hyundai's, and the comparison of their current parent companies' reputations and stock prices... My word! That is news, indeed!

nVidia beating Intel in the GPU market would indeed be news. Intel currently has something like 40% of the GPU market, while nVidia is closer to 30%. Reading the quote from nVidia, I hear echoes of the same thing that the management at SGI said just before a few of their employees left, founded nVidia, and destroyed the premium workstation graphics market by delivering almost-as-good consumer hardware for a small fraction of the price.

apparently they don't even support vector operations and are scalar-only.

Yes, that's true. The reason is that it's hard to keep those vector units fully utilized. You get better utilization with more scalar units rather than fewer vector ones (for the same area of silicon).

Yep, and I think they're essentially dumbed-down general CPU cores; apparently they don't even support vector operations and are scalar-only.

I'm not sure that statement makes sense. Sure, you could talk about the GPU operating on 128 scalar values, but you could just as well talk about it operating on a single 128-dimensional vector. (Or any combination thereof: N vectors of degree (128 / N).)

Lol... Just like 64-bit and multicore, AMD was talking about Fusion way before anyone else was. Intel has "stolen" yet another idea from AMD. Unfortunately, the reality is that AMD doesn't have the capital to refresh its production lines as often as Intel - and, I think, to some extent lacks the human capital to execute on the big ideas.

Until Intel can show us Crysis in all its GPU-punishing glory running on its chipset at 1600x1200 with all settings on Ultra High, Nvidia and ATI will still be kings of high-end graphics.
Then again, if all Intel wants to do is create a substandard alternative to those high-end cards just to run Vista Aero and *nix Beryl, then they have already succeeded.

If Intel is right, there won't be much of an effect on existing games.

Intel is focusing on raytracers, something Crytek has specifically said that they will not do. Therefore, both Crysis and any sequels won't really see any improvement from Intel's approach.

If Intel is right, what we are talking about is the Crysis-killer -- a game that looks and plays much better than Crysis (and maybe with a plot that doesn't completely suck [penny-arcade.com]), and only on Intel hardware, not on nVidia.

Oh, and Beryl has been killed and merged. It's just Compiz now, and Compiz Fusion if you need more.

ATI have open specs. At least for a lot of their hardware. They are releasing more and more documentation as it gets cleaned up and cleared by legal. Open Source ATI drivers are coming on in leaps and bounds as a result.

I'm sure DEC engineers pooh-poohed Intel back in the early 90s, when the DEC Alpha blew away anything Intel made by a factor of four. But a few short years later, Chipzilla had drawn level. Now Alpha processors aren't even made, and DEC is long deceased.

Nvidia ought not to rest on its laurels like DEC did, or Intel will crush them.

Some of the comments made were very interesting. He really slammed someone that I take was either an Intel rep or otherwise an associate. The best was when that rep/associate/whoever criticized Nvidia about their driver issues in Vista, and the slam-dunk response that I paraphrase: "If we [Nvidia] only had to support the same product/application that Intel has [Office 2003] for the last 5 years then we probably wouldn't have as many driver issues as well. But since we have new products/applications that our dr

IMHO, Nvidia is stuck as the odd-man out. When integrated chipsets and GPU-CPU hybrids can easily handle full-HD playback, the market for discrete GPUs falls and falls some more. Sure, discrete will always be faster, just like a Porsche is faster than a Toyota, but who makes more money (by a mile)?

Is Creative still around? Last I heard, they were making MP3 players...

Pretty decent ones, too. I managed to pick up a 1GB Zen Stone for under $10, and the only complaint I have is that it won't play Ogg and there isn't a Rockbox port out there for it (yet). Since I dislike iTunes anyway (it has too many "features" that serve only to piss me off, and any UI that treats me like a 5-year-old just sets me on edge), it pretty much does everything that I want an MP3 player to do. Plus, as an added bonus, I don't have to pretend that I'm hip or trendy while I listen to it; if people thi

ASUS M2N-VM DVI (with integrated video) + AMD64 BE-2300 (45W) plays true 1080p with at most 80% CPU (only one of the two cores is used!). Tested on a trailer for Pirates of the Caribbean (since I have no Blu-ray player yet).

My Mac mini has a maximum load of 110W. That's the Core 2 Duo CPU, the integrated GMA950, 3GB of RAM, a 2.5" drive and a DVD burner, not to mention FireWire 400 and four USB 2.0 ports under maximum load (the FW400 port being 8W alone).

Granted, the GMA950 sucks compared to nVidia's current offerings; however, do they have any plans for low-power GPUs? I'm pretty sure the whole company can't survive on revenue from FPS-crazed gamers alone.

They should start thinking about asking intel to integrate their (current) laptop GPUs into intel CPUs.

Actually, nVidia is working on a new technology called HybridPower, which involves a computer with both an onboard and a discrete graphics card, where the low-power onboard card is used most of the time (when you are just in your OS environment of choice), but when you need the power (for stuff like games) the discrete card boots up.

Of COURSE they do; in fact, they already HAVE low-power offerings. I'm not sure why people seem to think the 8800 is the only card nVidia makes. nVidia is quite adept at taking their technology and scaling it down. Just reduce the clock speed, cut off shader units and such, and there you go. In the 8 series they have an 8400. I don't know what the power draw is, but it doesn't have any extra power connectors, so it is under 75 watts peak by definition (that's all PCIe can handle). They have even lower power cards

Since you seem to know nVidia's lineup, what would you recommend for a fanless PCI or AGP card good enough to run Final Fantasy XI at 1280x1024? And before you say "FF XI is old, any current card will do", let me remind you that it's an MMORPG with more than a few dozen players on the screen at once (in Jeuno, for example), and my target is to have around 20 FPS in the worst-case scenario.

I'm asking for PCI because I might try to dump my current AMD Athlon 2600+/KT6 Delta box (which is huge) with a fanless mini-

Hmmm, well I don't know that game in particular however in general I think a 7600GS should work ok. That's enough to run WoW at a decent rez with decent settings. It's also one of the few models available for AGP (the AGP market is getting real thin these days). http://www.newegg.com/Product/Product.aspx?Item=N82E16814121064 [newegg.com] is a link to buy it.

If you've got a system with PCIe, there are more options, but AGP is being phased out, so there are fewer cards available for it.

But why can't e.g. the 8800 shut down most of its cores when they are not used? I.e., use only one or two for Compiz or Aero, and all of them with some whizbang game. Or drop the cores' frequency depending on load?
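The core-gating idea in the question above can be sketched as a policy choice: keep just enough shader clusters powered to cover the current load. This is a purely hypothetical toy model, not how any shipping GPU's power management actually works:

```python
# Toy model of load-based core gating: enable just enough shader
# clusters to cover the current utilization, rounding up, and never
# fewer than one. Real power management would also have to weigh the
# latency of waking cores back up.
import math

def active_clusters(load, total=16):
    """load is utilization in [0, 1]; returns clusters to keep powered."""
    return max(1, min(total, math.ceil(load * total)))

print(active_clusters(0.05))  # 1  (desktop compositing)
print(active_clusters(0.50))  # 8
print(active_clusters(1.00))  # 16 (whizbang game)
```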

If I understand them right, they're claiming that integrated graphics and CPU/GPU hybrids are just a toy, and that you want discrete graphics if you're serious.
Ken Olsen famously said that "the PC is just a toy". When did you last use a "real" computer?

And of course they and game developers slam Intel's product... but 99.9% of PCs are never used to play games.

Nvidia's statement sounds like famous last words, too. I think their laurels are getting pressed too flat from resting on them. Just as Intel's CPUs eventually caught up and overtook the Alpha, the same might happen with their graphics chipsets.

So, if you have a hybrid chip, why not put it on a motherboard with a slot for a nice nvidia card. Then you'll get all the raytracing goodness from intel, plus the joyful rasterbation of nvidia's finest offerings. The word "coprocessor" exists for a reason. Or am I missing something here?

The problem is not that Nvidia CEO Jen-Hsun Huang made one stupid statement.
The problem is that he said many foolish things, indicating that he is not a
good CEO. Here are some:

Quote from the article: "Nvidia CEO Jen-Hsun Huang was quite vocal
on those fronts, arguing hybrid chips that mix microprocessor and graphics
processor cores will be no different from systems that include Intel or AMD
integrated graphics today."

My opinion: There would be no need for all the talk if there were no
chance of competition. Everyone knows there will be new competition from
Intel's Larrabee and AMD/ATI. Everyone knows that "no different" is a lie. Lying
exposes the Nvidia CEO as a weak man.

"... he explained that Nvidia is continuously reinventing itself
and that it will be two architectural refreshes beyond the current generation
of chips before Larrabee launches."

The entire issue is that Intel+Larrabee and AMD+ATI will make Nvidia
irrelevant for most users. The GPU will be on the motherboard. Nvidia will
sell only to gamers who are willing to pay extra, a lot extra.

"Huang also raised the prospect of application and API-level
compatibility problems with Larrabee. Intel has said Larrabee will support the
DirectX 10 and OpenGL application programming interfaces just like current AMD
and Nvidia GPUs, but Huang seemed dubious Intel could deliver on that
front."

Intel, in this case, is Intel and Microsoft working together. Both are poorly managed
companies in many ways, but they are both managed well enough to ensure that the Microsoft
product works with the Intel hardware. Sure, it is an easy guess that Microsoft will release several
buggy versions, because Microsoft has a history of treating its customers as though they were beta testers,
but eventually everything will work correctly.

'[NVidia VP] Tamasi went on to shoot down Intel's emphasis on ray
tracing, which the chipmaker has called "the future for games." '

Ray tracing is certainly the future for games, there is no question
about that. The question is when, because the processor power required is
huge. It's my guess, but an easy guess, that Mr. Tamasi is lying; he is
apparently trying to take advantage of the ignorance of financial analysts.

"Additionally, Tamasi believes rasterization is inherently more
scalable than ray tracing. He said running a ray tracer on a cell phone is
"hard to conceive."

This is apparently another attempt to confuse the financial analysts, who often
have only a pretend interest in technical things. Anyone understanding the statement
knows it is nonsense. No one is suggesting that there will be
ray-tracing on cell phones. My opinion is that this is another lie.

"We're gonna be highly focused on bringing a great experience to
people who care about it," he explained, adding that Nvidia hardware simply
isn't for everyone."

That was a foolish thing to say. That's the whole issue! In the
future, Nvidia's sales will drop because "Nvidia hardware simply isn't for
everyone." Most computers will not have separate video adapters, whereas
they did before. Only powerful game machines will need to buy from Nvidia.

'Huang added, "I would build CPUs if I could change the world [in
doing so]." ' Later in the article, it says, "Nvidia is readying a platform to
accompany VIA's next-generation Isaiah processor, which should fight it out
with Intel's Atom in the low-cost notebook and desktop arena"

Translation: Before, every desktop computer needed a video adapter,
which came from a company different from the CPU maker, a company like Nvidia.
Now, the video adapters will be mostly supplied by CPU makers. In response,
Nvidia will start making low-end CPUs. It is questionable whether Nvidia
can compete with Intel and AMD in making any kind of CPU.

The entire issue is that Intel+Larrabee and AMD+ATI will make Nvidia irrelevant for most users. The GPU will be on the motherboard. Nvidia will sell only to gamers who are willing to pay extra, a lot extra.

AMD acquired ATI almost two years ago. How did this make Nvidia irrelevant? GPUs are ALREADY on motherboards, and have been for years. Plus, the cost of embedding CPU+GPU is not much cheaper than the discrete solutions since yield will be much lower. Moreover, CPU+GPU on one die has its risks. What if the GPU part fails? What if the CPU fails? You'll need to replace both.

Ray tracing is certainly the future for games, there is no question about that.

I beg to differ. Many questions surround this. Would you know more than John Carmack? [slashdot.org]

Before, every desktop computer needed a video adapter, which came from a company different from the CPU maker, a company like Nvidia. Now, the video adapters will be mostly supplied by CPU makers

The entire issue is that Intel+Larrabee and AMD+ATI will make Nvidia irrelevant for most users. The GPU will be on the motherboard. Nvidia will sell only to gamers who are willing to pay extra, a lot extra.

NVDA was down 7% in the stock market today. As an Nvidia shareholder, that hurts!

If you don't believe Intel will ever compete with Nvidia, now is probably a good time to buy. NVDA has a forward P/E of 14. That's a "value stock" price for a leading tech company... you don't get opportunities like that often. NVDA also has no debt on the books, so the credit crunch does not directly affect them.

Intel is and always has been CPU-centric. That's all they ever seem to focus on, because it's what they do best. Nvidia is focusing 100% on GPUs because it's what they do best. AMD seems to have it right with their combination of the two (by necessity) because they're focusing on a mix between the two. I'm seriously stoked about the 780G chipset they rolled out this month, because it's an integrated chipset that doesn't suck and actually speeds up an ATI video card if you add the right one.
Granted, AMD isn't the fastest when it comes to either graphics or processors, but at least they have a platform with a chipset, CPU, and graphics that work together. Chipsets have needed to be a bit more powerful for a long-ass time.

Only if you think the non-serious gamer market is a big hit. It's been a long time since I heard anyone complain about GPU performance except in relation to a game. Graphics cards run at the resolution you want, and play a lot of, well... not GPU-intensive games, or rather non-GPU games at all, but games you could run on highly inefficient platforms like Flash and still do alright. And for the games that do consider GPU performance, more is usually always better. Sure, it's an integrated chipset but I've yet

The non-serious gamer market is TOTALLY a big hit. And the benefit of this chipset for many users is that you get decent 3D performance with a motherboard for the same price you would pay for a motherboard without the integrated graphics.

And if you decide to bump it up a notch and buy a 3450 it operates in Hybrid Crossfire so your onboard graphics aren't totally disabled. Explain to me how that isn't cool?

Granted, NVidia is way out ahead in graphics performance, but generally you can tell someone is getting nervous when they start in on the belligerent bragging.

The risk for NVidia isn't that Intel will surpass them, or even necessarily approach their best performance. The risk is that Intel might start catching up, cutting (further) into NVidia's market share. AMD's acquisition of ATI seems to imply that they see tight integration of graphics to be at least cheaper for a given level of performance, or higher performance for a given price. Apply that same reasoning to Intel, since they certainly aren't likely to let AMD have that advantage all to themselves.

Now try to apply that logic to NVidia - what are they going to do, merge with a distant-last-place x86 maker?

NVidia may just put a CPU or two in their graphics chips. They already have more transistors in the GPU than Intel has in many of their CPUs. They could license a CPU design from AMD.
A CPU design today is a file of Verilog, so integrating that onto a part with a GPU isn't that big a deal.

Once upon a time, floating point was done on a separate chip. You could buy a cheaper "non-professional" machine that emulated the FPU in software and ran slower. You could also upgrade your machine by adding the FPU chip.

The FPU is a sparring partner, the GPU is a pipeline. Except for a few special uses, the GPU doesn't need to talk back to the CPU. The FPU (that's floating point unit, doing non-integer math) very often returned a result that the CPU needed back. The result is that it makes sense to put the CPU and FPU very close, the same doesn't hold true for the GPU. It's substantially more difficult to cool 1x100W chip than 2x50W chips. Hell, even on a single chip Intel and nVidia could draw up "this is my part, this is

I think Nvidia should talk to IBM's Power division about doing a JV, preferably with a Cell CPU core. After all, the PS3, the Xbox 360, and the Wii all use PowerPC cores... and Nvidia developed the PS3 GPU. The worst-case x86 scenario is VIA buying Nvidia and doing it. If that's the way the market is going, so be it.

...is that Nvidia is saying that Intel is ignoring years of GPU development.
Umm, wait. Isn't a GPU basically a mini-computer/CPU by itself that exclusively handles graphics calculations?
By making this statement, I think they've forgotten who Intel is. Intel has more than enough experience in the field to go off on their own and make GPUs.
Is it something to be scared of? Probably not, because as he correctly points out, a dedicated GPU will be more powerful. However, it's not something that can be ignored.
We'll just have to wait and see.

If Intel wants into the market for real, I think ATI/nVidia should be plenty afraid. Intel not only has their own fabrication plants, but they're high-end ones, not three or four process generations behind. Apple (of all companies) switched to Intel's CPUs for a reason - they have the capacity and the technology to produce lots of fast, low-power components, and a market base large enough (CPUs and chips of all kinds) to keep their GPU technology ahead of the curve.

Having the GPU built into the CPU is primarily a cost-cutting measure. Take one low-end CPU, add one low-end GPU, and you have a single-chip solution that consumes a bit less power than separate components.

Nobody expects the CPU+GPU to yield gaming performance worth a damn, because the two big companies that are looking into this amalgam both have underperforming graphics technology. Do they both make excellent budget solutions? Yes, they certainly do, but for those who crave extreme speed, the only option is NVidia.

That said, not everyone plays shooters. Back in my retail days, I'd say I moved 50 times more bottom-end GPUs than top-end ones. Those Radeon 9250s were $29.99 piles of alien poop, but cheap poop is enough for the average norm. The only people who spent more than $100 on a video card were teenagers and comic book guys (and of course, my awesome self).

The large corporations and engineering companies that have *THOUSANDS* of high-end workstations need graphics hardware compatible with complex, specialized software. I'm talking Unigraphics, CATIA, Patran, Femap, etc. You need to use the hardware certified by the software publisher otherwise you don't get support and you can't trust the work you are doing to be correct. And the vast majority of the cards that are up to the challenge are nvidia cards.

I have done CAD/CAM for ages, and my P3-750 with a Quadro4 700XGL isn't noticeably slower than a P4-3.4 with a Radeon X300SE running Unigraphics NX 5. I have a P3-500 with a QuadroFX-1000 card that freaking flies running CATIA V5. Again, in contrast, my 1.8GHz P4 laptop with integrated Intel graphics sucks balls running either UG or CATIA.

Speaking for the workstation users out there, please keep making high performance GPUs, Nvidia.

ATI/AMD hasn't been competitive with NVIDIA for two product cycles. That doesn't look likely to change in the near future, either; ATI/AMD's next-generation GPU architecture isn't looking so hot. AMD is in a world of hurt right now, with Intel consistently maintaining a lead over them in the CPU segment, and NVIDIA maintaining a lead over them in the GPU segment. They're doing some interesting, synergistic things between the CPU and GPU sides, but who knows if that'll pan out. Meanwhile, they're being for

Competitive enough, anyway. As long as I'm still on AGP, I'm still getting ATI cards (nVidia's AGP offerings have classically been highly crippled beyond just running on AGP). But sure, I'm a niche, and truth be told, my next system will probably have nVidia.

But gamer video cards aren't everything, and I daresay not even the majority. If you have a flatscreen TV, chances are good it's got ATI parts in it. Then there's laptops and integrated video, nothing to sneeze at.

Quite true. With the 8500 and 8600 models, and now the 9500, nVidia trounces AMD even on budget cards.

But, nVidia got pummeled prior to their acquisition of Yahoo!^H^H^H^H^H^H Voodoo, and the two were quite neck and neck for a long time. So it's more of "the tables have turned (again)" rather than "they have no competition."

Until AMD completely quits making higher-end video cards, nVidia will have to keep on doing something to stay competitive. Same thing with Firefox - I don't think IE8 would have l

I don't know what they were smoking, but they pissed off Microsoft, too, and got left out of developing DirectX 8 IIRC. Because they didn't have their hands on the next version of DirectX, they were way behind the ball when the SDK proper was released.

But, I'm thrilled with the hardware they produce. And as long as AMD stays no more than one generation behind them, they won't be able to rest on their laurels, either.

I thought it was DirectX 9 that they were left out from, causing their FX range (5200 -> 5900) to be fairly useless when compared to ATI. They were legends in the DirectX 8 arena with the GeForce4 Ti series.

No they don't. Since the Voodoo 2, Nvidia has been making the best video cards performance-wise all along, except when they let the 3dfx guys make the FX5 series. TNT, TNT2, GeForce especially, GeForce2, GeForce3, and GeForce4 had them on top. The FX5 was shit and the Radeon 9xxx was better, but Nvidia did catch up in the next generation. So yes, Nvidia lost in one generation because they used other developers, but it's not like the game changes the whole time. The X1950 was a nice card for the price, though.

They keep leapfrogging each other every few months, at least for the almost-pointless "Best Single Card Graphics Solution" race. Just this year, Nvidia has had the lead, lost it, and got it back again.

Intel always seemed to bounce in and out of the graphics chip market. They jumped in just when the first graphics chips that fully supported OpenGL came out. They couldn't keep up, so they jumped out [news.com], then jumped back in again once the market had consolidated. It should be easier for them now that there are only a handful of players, but then again the remaining players are much larger.

CPU and GPU integration is a quite logical progression of technology. There are things for which the GPU is not optimal, and the same goes for the CPU. It seems that when combined, they prove successful.

Let's examine this statement:

"Bus and train integration is quite logical progression of technology. There are things the plane is not optimal and same goes to the bus. It seems that when combined, they prove successful. So let's put wings on a bus."

Now, I think there are plenty of good reasons why CPU/GPU integration is a

Now, the logic core needed to perform these two tasks is highly specific, which is why we have separate CPUs and GPUs to begin with. But there's a lot to be gained by integrating the two more closely. You can share memory interfaces, for example, and perhaps more relevantly for the high-end graphics segment, you can tightly couple CPU and GPU operations across a bus that's going to be a hundred times faster than anything PCI Express can provide, and with latency to die for.

cycle of reincarnation /n./ [coined in a paper by T. H. Myer and I. E. Sutherland, "On the Design of Display Processors", Comm. ACM, Vol. 11, No. 6, June 1968] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical pro

Next you're going to tell me the sky is blue or that too much water can kill me. Onboard video isn't meant to be shiny, just to serve a basic need: being able to see what the hell you're doing. Rather than dismissing Intel because they (and many other board manufacturers) provide a bare-bones video solution, I'm interested in seeing what they'll pop out when they're actually trying. By the way, onboard video uses about as much RAM as a browser will use (and about as much as Win98 needs to boot in, but I digr

'Much more expensive'? I see the prices actually going down. Has adding three processor cores and boatloads of cache over the past four years raised prices much? Also, this would mean a GPU would no longer be a specialized component, and it would be produced in higher volume as part of every desktop CPU. Upgrades? Just grab a new processor, tear up your thumbs removing the cheapshit heatsink you grabbed off eBay for $2, apply some thermal grease that absolutely refuses to come off your finger, replace the heatsink and