Posted
by
timothy
on Friday April 29, 2011 @02:30AM
from the feed-your-kill-o-watt dept.

MojoKid writes "In a rather surprising turn of events, NVIDIA has just gone on record that, starting with AMD's 990 series chipset, you'll be able to run multiple NVIDIA graphics cards in SLI on AMD-based motherboards, a feature previously only available on Intel or NVIDIA-based motherboards. Nvidia didn't go into many specifics about the license, such as how long it's good for, but did say the license covers 'upcoming motherboards featuring AMD's 990FX, 990X, and 970 chipsets.'"

You can already use an Nvidia card as a dedicated PhysX processor alongside an AMD card, but to avoid creating a bottleneck (and actually losing performance), the Nvidia card needs to be more or less on par with your AMD card. So if you have, say, an AMD 6970, you'd need at least something like an Nvidia GTX 460 to get a performance boost worth the extra cash over just going CrossFire.

Parent is incorrect, so it's no surprise that he provided no evidence or even a supporting argument for his assertions.

'PhysX' is a marketing term and an API that is currently hardware-accelerated only on nVidia cards. Adding more AMD cards, as the parent suggests, doesn't do squat if what you want is PhysX on a hardware path. Games typically have only two paths, software or PhysX, so the load either lands on the main CPU (you only have AMD card(s)) or on the GPU (you have an nVidia card with PhysX enabled).

I thought that since Windows Vista introduced the kernel mode code signing requirement, hacked drivers required the user to reboot into "Test Mode", which places an always-on-top banner at all four corners of the screen. What am I missing?

I do not know why this myth keeps getting spread. Only the 64-bit versions of Vista and 7 check for signed drivers, and they give you the option to install a driver even if it is not signed. In fact, you can disable the driver signing check quite easily, if you wish.

Originally you needed a dedicated hardware PCI card that did PhysX and nothing but PhysX.

The problem there was that so few people had the hardware that no one would develop games for it, and since there were no games for it, no one bought the card. The point of software PhysX was to let people without the card run the games, so the developer's investment in PhysX wasn't a total waste... and then there'd be enough PhysX games to get people to buy a PhysX card.

Powered by the Fuzion technology that offers Non-Identical & Cross-Vendor Multi-GPU processing, the MSI 870A Fuzion allows you to install two different level and brand graphics cards (even ATI and NVIDIA Hybrid) in a single system, providing flexible upgradability and great 3D performance.

I wouldn't say they failed; in fact, I know quite a few people who went out and bought one. What happened was that after nVidia bought Ageia, they decided to implement PhysX in CUDA and ditch the dedicated card altogether.

I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

2GB for visualization is just too small. 8GB would be a good start, even if it were DDR3 and not GDDR5. Something like HyperTransport could enable low-latency, high-bandwidth access to expandable system memory on the cheap.

I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

Do you really think so? We've been down this road before and while it's sometimes a nice ride, it always leads to a rather anticlimactic dead-end.

(Notable examples are VLB, EISA, PCI and AGP, plus some very similar variations on each of these.)

2GB for visualization is just too small. 8GB would be a good start, even if it were DDR3 and not GDDR5.

Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

Something like HyperTransport could enable low-latency, high-bandwidth access to expandable system memory on the cheap.

HTX, which is mostly just Hypertransport wrapped around a familiar card-edge connector, has been around for a good while. HTX3 added a decent speed bump to the format in '08. AFAICT, nobody makes graphics cards for such a bus, and no consumer-oriented systems have ever included it. It's still there, though...

Either that, or it's high time we got 8GB per core for GPUs.

This. If there is genuinely a need for substantially bigger chunks of RAM to be available to a GPU, then I'd rather see it nearer to the GPU itself. History indicates that this will happen eventually anyway (no matter how well-intentioned the new-fangled bus might be), so it might make sense to just cut to the chase...

Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

The most awesome upgrade I ever had was when I went from EGA to a Tseng SVGA card with 1 MB memory. The next awesomest was when I upgraded from a 4 MB card to a Riva TNT2 with 32 MB. Every time I upgrade my video card there's less shock and awe effect. I'm willing to bet that going from 2 GB to 8 GB would be barely perceptible to most people.

I think the top graphics cards today have gone over the local maximum point of realism. What I have been noticing a lot lately is the "uncanny valley" effect. The only

You took my practical argument and made it theoretical, but I'll play.;)

I never had an EGA adapter. I did have CGA, and the next step was a Diamond Speedstar 24x, with all kinds of (well, one kind of) 24-bit color that would put your Tseng ET3000 (ET4000?) to shame. And, in any event, it was clearly better than CGA, EGA, VGA, or (bog-standard IBM) XGA.

The 24x was both awesome (pretty!) and lousy (mostly due to its proprietary nature and lack of software support) at the time. I still keep it in a drawer -- it's the only color ISA video card I still have. (I believe there is also still a monochrome Hercules card kicking around in there somewhere, which I keep because its weird "high-res" mode has infrequently been well-supported by anything else.)

Anyway...porn was never better than when I was a kid with a 24-bit video card, able to view JPEGs without dithering.

But what I'd like to express to you is that it's all incremental. There was no magic leap between your EGA card and your Tseng SVGA -- you just skipped some steps.

And there was no magic leap between your 4MB card (whatever it was) and your 32MB Riva TNT2: I also made a similar progression to a TNT2.

And, yeah: Around that time, model numbers got blurry. Instead of making one chipset at one speed (TNT2), manufacturers started bin-sorting and producing a variety of speeds from the same part (Voodoo3 2000, 3000, 3500TV, all with the same GPU).

And also around that time, drivers (between OpenGL and DirectX) became consistent, adding to the blur.

I still have a Voodoo3 3500TV, though I don't have a system that can use it. But I assure you that I would much rather play games (and pay the power bill) with my nVidia 9800GT than that old hunk of (ouch! HOT!) 3dfx metal.

Fast forward a bunch and recently, I've been playing both Rift and Portal 2. The 9800GT is showing its age, especially with Rift, and it's becoming time to look for an upgrade.

But, really, neither of these games would be worth the time of day on my laptop's ATI x300. This old Dell probably would've played the first Portal OK, but the second one...meh. And the x300 is (IIRC) listed as Rift's minimum spec, but the game loses its prettiness in a hurry when the quality settings are turned down.

But, you know: I might just install Rift on this 7-year-old x300 laptop, just to see how it works. Just so I can have the same "wow" factor I had when I first installed a Voodoo3 2000, when I play the same game on my desktop with a 3-year-old, not-so-special-at-this-point 9800GT.

The steps seem smaller, these days, but progress marches on. You'll have absolute lifelike perfection eventually, but it'll take some doing to get there.

Assuming what you say is true (and I believe it can be if the API is properly implemented, for better or worse), then regular PCI Express x16 cards should be perfectly adequate.

Not to be offensive, but you used all that verbiage and failed to realize that nobody needs or wants high-performance tab switching in Firefox. It'd be nice if it happened within a single monitor refresh period (60Hz, these days), but nobody notices if it takes a dozen times as long or more due to PCI Express bus transfers from main memory.

It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e., presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e., presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

Since video memory transfers are so fast, it seems more likely that you're seeing normal swapping behavior -- Windows sees that you have 30 windows open but you're currently only using a few of them, so the rest get swapped out (even if you have a bunch of free memory). On Linux you can change the "swappiness" to fix this. You could see if there's a similar fix on Windows 7 (back when I used Windows XP I just disabled swap files and got a massive performance improvement).
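For what it's worth, the Linux knob mentioned above is just a file under /proc. A minimal sketch of reading it (the path is parameterized so the parsing works on any file, since /proc/sys/vm/swappiness only exists on Linux; lowering the value requires root and is shown only as a comment):

```python
def read_swappiness(path="/proc/sys/vm/swappiness"):
    """Return the kernel's swappiness value (0-100 on kernels of this era).

    Lowering it, e.g. with `sysctl vm.swappiness=10` as root, makes the
    kernel less eager to page out idle application memory.
    """
    with open(path) as f:
        return int(f.read().strip())
```

On a typical Linux box of the time this returns the default of 60; Windows exposes no directly equivalent single knob, which is part of the parent's point.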

Well, yeah, it could maybe be swapping. But the system can't use the video card's framebuffer as general-purpose memory, so swapping its contents to disk would really only make sense if framebuffer usage were at a very high percentage. Hence my (untested) belief that having a larger framebuffer would help. But maybe they're using the same memory management/swap code as for the normal system VM, which tends to page out to disk earlier than absolutely necessary. So it could be that the "swappiness" of the Windows memory manager is the culprit.

I don't mean the video card's memory being swapped, just that the memory of the programs you want to use is being swapped. If the program itself isn't ready, it doesn't matter how fast the video card can display it.

That's generally not how it works. Both X and the old Windows GDI were on-demand painters: they simply had the application repaint the screen as necessary, clipping the non-visible regions. Of course, caching a portion of the painting speeds things up, but generally, if you're running out of RAM, the cached image is just thrown away. So having 200 windows open doesn't require enough RAM/graphics memory to hold 200 maximized windows.

So what can you do with WDDM 1.1? For starters, you can significantly curtail memory usage for the Desktop Window Manager when it’s enabled for Aero. With the DWM enabled, every window is an uncompressed texture in order for it to be processed by the video card. The problem with this is that when it comes to windows drawn with Microsoft’s older GDI/GDI+ technology, the DWM needs two copies of the data – one on the video card for rendering purposes, and another copy in main memory for the DWM to work on. Because these textures are uncompressed, the amount of memory a single window takes is the product of its size, specifically: Width X Height x 4 bytes of color information.

...snip...

Furthermore, while a single window may not be too bad, additional windows compound this problem. In this case Microsoft lists the memory consumption of 15 1600x1200 windows at 109MB. This isn’t a problem for the video card, which has plenty of memory dedicated for the task, but for system memory it’s another issue since it’s eating into memory that could be used for something else. With WDDM 1.1, Microsoft has been able to remove the copy of the texture from system memory and operate solely on the contents in video memory. As a result the memory consumption of Windows is immediately reduced, potentially by hundreds of megabytes.

Also, just to be more specific about how fast PCI express [wikipedia.org] is, a PCI express 3.0 x16 slot transfers about 16 GB/s in each direction. Your 8 MB texture should be able to get across it in around 500 microseconds [google.com]. To put that into perspective, rendering the screen at 60 fps means one frame every 17 milliseconds, so even if the texture were transferred from main memory every frame, the actual rendering of the frame would take over 30x longer.
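Both figures above are easy to sanity-check with quick arithmetic. A sketch, assuming 4 bytes per pixel for the window textures and deriving the PCIe 3.0 payload rate from first principles (8 GT/s per lane, 16 lanes, 128b/130b line encoding):

```python
# Uncompressed window texture: width x height x 4 bytes of color information
window_bytes = 1600 * 1200 * 4
total_mb = 15 * window_bytes / 2**20   # 15 windows: ~110 MB, matching the quoted 109MB

# PCIe 3.0 x16 payload bandwidth: 16 lanes x 8 GT/s, 128b/130b line encoding
pcie3_bps = 16 * 8e9 * (128 / 130)     # payload bits per second
pcie3_bytes = pcie3_bps / 8            # ~15.75 GB/s per direction

texture_s = 8 * 2**20 / pcie3_bytes    # 8 MB texture: ~0.5 ms on the bus
frame_s = 1 / 60                       # ~16.7 ms per frame at 60 fps
ratio = frame_s / texture_s            # rendering budget is ~30x the transfer time
```

So even re-uploading every window texture every frame would consume only a few percent of the frame budget, which is the poster's point.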

I'll be the first to say that I do not fully understand any of this -- all I have are anecdotal observations, educated conjecture, and wit.

That said: As others have mentioned, I think you're swapping, not experiencing a video issue.

Who knows why -- maybe you've got a memory leak in some program or other, or are just shy on RAM for what you're asking of the system. More investigation is needed on your part (Resource Manager in Vista/7 does an OK job of this). I've had

Once upon a time it used to be common for high-end video cards to have display memory and texture memory at different speeds, and sometimes you even got SIMM or DIMM slots for the texture memory, and you could add more. Are there not still cards like this in existence with a halfway decent GPU?

None that I've seen, though I've been wondering the same since I wrote that reply. (The video cards I remember had DIP sockets for extra RAM, but the concept is the same.....)

Perhaps the time is now for such things to return. It used to be rather common on all manner of stuff to be able to add RAM to the device -- I even used to have a Soundblaster AWE that accepted SIMMs to expand its hardware wavetable synth capacity.

My current resolution is 3240x1920, though I play games at 3510x1920 with bezel correction on. I'm considering grabbing another two monitors and upping to 5400x1920. When playing GTA 4, which is not a new game, it uses over 900MB with the settings around modest. After upping to two more monitors, I think having 4GB would be substantial enough; 8GB would future-proof you a bit.

Is that hardcore? Maybe. But I don't consider it to be. I consider it to be enjoyable. And enjoyable requires well over a gigabyte of video memory.
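As a rough sanity check on those numbers: the color buffers themselves are tiny even at these resolutions (assuming 32-bit color), so nearly all of that 900MB is textures, geometry, and AA buffers rather than the raw desktop size:

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of one uncompressed 32-bit color buffer, in MB."""
    return width * height * bytes_per_pixel / 2**20

three_wide = framebuffer_mb(3240, 1920)  # ~24 MB for the triple-monitor desktop
five_wide = framebuffer_mb(5400, 1920)   # ~40 MB for the proposed five-monitor span
# Even triple-buffered with MSAA, the buffers are a small slice of a 1-2GB card;
# it's the texture and geometry working sets that balloon past 900MB.
```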

Exactly. Professional engines like mental ray, finalRender and V-Ray are moving towards GPU rendering, but right now there's nothing between high-end gaming cards and professional graphics cards with more RAM (and a shitload of money to shell out for it).

You realize the limiting factor in system RAM access is the PCIe bus, right? It isn't as though that can magically be made faster. I suppose they could start doing x32 slots, which is technically allowed by the spec, but that would mean more cost for both motherboards and graphics cards, with no real benefit except to people like you who want massive amounts of RAM.

In terms of increasing the bandwidth of the bus without increasing the width, well Intel is on that. PCIe 3.0 was finalized in November 2010 and both Intel and AMD are working on implementing it in next gen chipsets. It doubles per lane bandwidth over 2.0/2.1 by increasing the clock rate, and using more efficient (but much more complex) signaling. That would give 16GB/sec of bandwidth which is on par with what you see from DDR3 1333MHz system memory.

However, even if you do that, it isn't really that useful; it'll still be slow. See, graphics cards have WAY higher memory bandwidth requirements than CPUs. That's why they use GDDR5 instead of DDR3. While GDDR5 is based on DDR3, it runs at much higher speed and bandwidth. With their huge memory controllers you can see cards with 200GB/sec or more of bandwidth. You just aren't going to get that out of system RAM, even if you had a bus that could transfer it (which you don't have).

Never mind that you then have to contend with the CPU which needs to use it too.

There's no magic to be had here to be able to grab system RAM and use it efficiently. Cards can already use it, it is part of the PCIe spec. Things just slow to a crawl when it gets used since there are extreme bandwidth limitations from the graphics card's perspective.
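A back-of-the-envelope comparison illustrates the gap. The GDDR5 example below uses the GTX 580's published specs (384-bit bus, 4008 MT/s effective rate) as a reference point:

```python
def mem_bandwidth_gb(bus_width_bits, mega_transfers_per_sec):
    """Peak memory bandwidth in GB/s (1 GB = 1e9 bytes) from bus width
    and effective transfer rate in mega-transfers per second."""
    return bus_width_bits / 8 * mega_transfers_per_sec * 1e6 / 1e9

gddr5_gb = mem_bandwidth_gb(384, 4008)   # ~192 GB/s local GDDR5 on a GTX 580
# PCIe 3.0 x16 tops out near 16 GB/s per direction -- more than an order of
# magnitude below what the card's local memory delivers, so system RAM as
# "extra VRAM" is always going to feel like a crawl from the GPU's perspective.
```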

While it doesn't solve the problem of DDR3 being slower than GDDR5, AMD has been pushing their "Torrenza [wikipedia.org]" initiative to have assorted specialized 3rd party processors be able to plug directly into the hypertransport bus(either in an HTX slot, or in one or more of the CPU sockets of a multi-socket system). That would give the hypothetical GPU both fast access to the CPU and as-fast-as-the-CPU access to large amounts of cheap RAM.

Ludicrously uneconomic for gaming purposes, of course; but there are probably

For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway? Nvidia is planning something similar with its upcoming Kepler/Maxwell GPUs.

The future is surely one where you'll be able to buy a single fully integrated CPU/GPU/RAM module. Not very modular, maybe, but the speed, programming ease, power efficiency, size and weight would be amazing and more than make up for it.

For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway?

Indeed, and the Fusion chips are trouncing Atom-based solutions in graphics benchmarks mainly for this very reason.

The problem, though, is it can't readily be applied to more mainstream desktop solutions, because then you have the CPU and GPU fighting over precious memory bandwidth. For netbooks and the like it works well, because GPU performance isn't expected to match even midrange cards, so only a fraction of DDR2/DDR3 bandwidth is acceptable. Even midrange desktop graphics cards blow the doors off of DDR3 memory bandwidth.

Look at effective real-world transfers. DDR3 RAM at 1333MHz gets in the realm of 16-20GB/sec when actually measured in the system. Each channel transfers 64 bits per transfer, and dual channels are interleaved (think RAID-0 if you like) to further increase transfer speeds.
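The arithmetic behind those real-world numbers, assuming standard 64-bit DDR3 channels ("DDR3-1333" means 1333 mega-transfers per second):

```python
# One 64-bit channel at 1333 MT/s: 8 bytes per transfer
channel_gb = 64 / 8 * 1333e6 / 1e9   # ~10.7 GB/s theoretical peak
dual_gb = 2 * channel_gb             # interleaved dual-channel: ~21.3 GB/s peak
# Measured throughput of 16-20 GB/s is therefore roughly 75-95% of the
# dual-channel theoretical peak, which is consistent with the claim above.
```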

Only for a shitty coder. Use a megatexture and/or procedurally generated textures, and you'll only require 8-64MB of video memory, with the rest going to framebuffer and GPU. Did people stop paying attention to Carmack or what?
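The megatexture idea alluded to here can be sketched as a page cache: only the texture pages the camera can currently see stay resident in VRAM within a fixed budget, and everything else streams in on demand. A toy LRU version (the page size and budget in the comments are made-up illustrative numbers, not id Software's actual values):

```python
from collections import OrderedDict

class TexturePageCache:
    """Keep at most `budget` texture pages resident; evict least-recently-used."""

    def __init__(self, budget=512):          # e.g. 512 x 128KB pages = 64MB resident
        self.budget = budget
        self.pages = OrderedDict()

    def request(self, page_id, load):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)       # mark as recently used
        else:
            self.pages[page_id] = load(page_id)   # stream the page in on demand
            if len(self.pages) > self.budget:
                self.pages.popitem(last=False)    # evict the coldest page
        return self.pages[page_id]
```

However large the virtual texture is, resident memory stays capped at budget times page size, which is how a footprint in the tens of megabytes becomes plausible.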

Why don't you get a pro video card if you need that? For games you don't need much more than 1GB these days (data for a couple of frames of 1080p just aren't that big). If you need to visualize anything better, you usually go with a Quadro or a Tesla (up to 6GB per card, up to 4 per computer), a Quadro Plex (12GB in an external device) or a rack-based GPU solution (however much you can put in your rack).

All the exclusion did was hurt nVidia's sales among people who stay loyal to AMD and refuse to go Intel just for SLI. Allowing SLI on AMD boards will boost nVidia's sales a bit.

It works both ways. nVidia has loyal customers, too, and with CPUs and mobos so much cheaper than a good GPU these days, there are plenty of people who buy the rest of the system to go with the GPU rather than the other way around.

In any case, more choices are good for everyone, customers most of all.

I don't think it's a secret that Intel has the fastest processors if you're willing to pay $$$ for it. And since a dual card solution costs quite a bit of $$$ already, I doubt there are that many that want to pair an AMD CPU with a dual nVidia GPU.

For the hardcore gamers who don't have unlimited budgets, it might be logical to buy the cheapest CPU that won't bottleneck your games, and pair it with the fastest graphics cards you can afford. Particularly if your games can use the GPU for the physics engine, you might not need even AMD's high-end CPUs to keep a pair of NVidia cards busy.

It might be that the name of the game is "Let's all gang up on Intel": given that Intel has squeezed Nvidia out of motherboards, and AMD has integrated graphics all to herself for now, getting closer is sensible, because the dominating player has a) signaled that it wants to enter your arena, and b) probably reached, as Microsoft has, a performance plateau after which further technology advances are not as valuable to consumers. Having said that, the SLI market is, and will remain, marginal.

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

Well it is in the best interests of both companies to play nice, particularly if Bulldozer ends up being any good (either in terms of being high performance, or good performance for the money). In nVidia's case it would be shooting themselves in the foot to not support AMD boards if those start taking off with enthusiasts. In AMD's case their processor market has badly eroded and they don't need any barriers to wooing people back from Intel.

My hope is this also signals that Bulldozer is good. That nVidia had a look and said "Ya, this has the kind of performance that enthusiasts will want and we want to be on that platform."

While I'm an Intel fan myself I've no illusions that the reason they are as cheap as they are and try as hard as they do is because they've got to fight AMD. Well AMD has been badly floundering in the CPU arena. Their products are not near Intel's performance level and not really very good price/performance wise. Intel keeps forging ahead with better and better CPUs (the Sandy Bridge CPUs are just excellent) and AMD keeps rehashing, and it has shown in their sales figures.

I hope Bulldozer is a great product and revitalizes AMD, which means Intel has to try even harder to compete, and so on.

On the top end, where power and money are no object and the fastest single-thread performance is what counts, then yes, Intel are the clear winners, which is why I buy Intel for desktop development tasks, where I want really good per-thread performance.

For number-crunching servers, AMD's 4x12-core setups have a slight edge, though which is faster depends rather heavily on the workload. The edge is bigger when power and price are taken into account.

I've never found any case where they are winners performance wise. 4 core Intel CPUs outperform 6 core AMD CPUs in all the multi-threaded tasks I've looked at, rather badly in some cases. In terms of servers, Intel offers 10 core CPUs which I imagine are faster than AMD's 12 core CPUs much like in the desktop arena though I will say I've not done any particular research in the server arena.

Likewise, power consumption is well in Intel's court in the desktop arena. A Phenom 1100T has a 125-watt TDP, a

Note that Intel's compilers refuse to use those instructions when their output runs on AMD CPUs, and, unfortunately, the popular scientific libraries are all compiled with one of Intel's compilers (ICC or their Fortran compiler) and only use the SIMD paths if they see "GenuineIntel" in the CPUID output.

One of the most renowned software optimization experts studies this in detail in his blog. [agner.org]

Nowadays, you can actually force ICC to emit code that will use up to SSSE3 on AMD CPUs, but only if you don't use runtime code-path selection. (More specifically, you have to tell ICC that the least-common-denominator code path should use SSSE3, which defeats the purpose of runtime code-path selection. ICC will always choose the slowest available code path for an AMD CPU, but you can prevent it from including a non-SSE code path.)

The rest of your post (mostly) focusses on desktop class processors, where Intel certainly win on the high end. In the lower and mid range, AMD are more competitive, especially in flops/$.

In the quad-socket server market, things are a bit different. AMD's 12-core clock speeds have been creeping up, whereas Intel's large multicore processors clock relatively slowly. One reason Intel win the top spots on the desktop is due to faster clocks and doing

This is certainly due to Intel not really refreshing its server lines at all, focusing mainly on the desktop space, while AMD has steadily updated its server lines.

Let's not forget that AMD is also about to unveil its Bulldozer cores, while Intel has recently updated its desktop chips. Until this year, Intel had an extremely hard time competing in performance per $ in the desktop space, and expect that even if Bulldozer doesn't match i7

But at each dollar range AMD usually wins. Frankly, at this point the CPU is rarely the bottleneck for most desktop users. They will usually get a lot bigger bang for the buck with more RAM, faster drives, and a better video card than with a faster CPU. If you are a hard-core gamer, then yes, but for 95% of PC users a Core 2 Duo or an X2, X3, or X4 is more than good enough.

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

During the Athlon XP era, AMD did make a good chipset in the AMD 760. The problem was that most mobo manufacturers at the time paired it with the VIA 686B southbridge instead of the AMD 766, and the 686B had a bus-mastering bug which tended to cause crashes and eventually hard drive corruption.

It's a shame that this announcement is most likely going to result in the end of nForce chipsets. Nvidia hasn't announced a new chipset for either Intel or AMD in years; Intel supports SLI, and now that AMD supports SLI, it just supports the rumors that Nvidia is killing the chipset division.

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

My desktop has an ASUS A8N-SLI motherboard based on the nForce 4 chipset. Think its about time for me to upgrade?

Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.

It seems like a great idea: buy the graphics card at the sweet spot in the price / power curve, peg it for all its worth until two years later when games start to push it to its limit. Then buy a second card, which is now very affordable, throw it in SLI and bump your rig back up to a top end performer.

The reality is less perfect. Want to go dual monitor? Expect to buy a third graphics card to run that second display. Apparently this has been fixed in Vista / Windows 7, but I'm still using XP and it's a massive pain. I'm relegated to using a single monitor in Windows, which is basically fine since I only use it to game, and booting back into Linux for two-display goodness.

Rare graphics bugs that only affect SLI users are common. I recently bought The Witcher on Steam for $5, this game is a few years old and has been updated many times. However if you're running SLI, expect to be able to see ALL LIGHT SOURCES, ALL THE TIME, THROUGH EVERY SURFACE. Only affects SLI users, so apparently it's a "will not fix". The workaround doesn't work.

When Borderlands first came out, it crashed regularly for about the first two months. The culprit? A bug that only affected SLI users.

Then there's the heat issue! Having two graphics cards going at full tear will heat up your case extremely quickly. Expect to shell out for an after-market cooling solution unless you want your cards to idle at 80C and easily hit 95C during operation. The lifetime of your cards will be drastically shortened.

This is my experience with SLI anyway. I'm a hardcore gamer who has always built his own rigs, and this is the last machine I will build with SLI, end of story.

Has SLI really been so troublesome? My last system warranted a full replacement by the time I was thinking about going SLI, and I ended up going with an AMD system and Crossfire instead. I've yet to have an issue gaming with dual monitors, outside of a couple of games force-minimizing everything on the second monitor when activated in full-screen mode, but this was fixed by alt-tabbing out of and back into the client. It may be worth noting, however, that I've not tried XP in years.

On Windows XP, with the latest nvidia drivers, any card running in SLI mode will only output to one of its ports. The secondary card's outputs don't work at all.

If you want you can disable SLI every time you exit a game (it only takes about two minutes!), but don't expect Windows to automatically go back to your dual monitor config. It's like you have to set your displays up from scratch every time.

As annoying as it is though, the dual monitor limitation is really just an annoyance. Having to disable SLI

I am running two GTX 460s in SLI and am not having any heat problems (the cards stay around 40C at idle and 50-55C maxed). I also don't have any problems with my dual monitor config. Finally, turning SLI on and off is as simple as right-clicking on the desktop, selecting the NVidia control panel, going to SLI, and hitting disable. I can't see that taking longer than about 10 seconds.

I've tried every possible configuration available, it does not work. But thanks for your helpful and informative post, which yet again fails to invalidate my experience by way of your (highly questionable and com

OK, first of all, the nVidia ForceWare Release 180 drivers are the first drivers to support multi-monitor SLI. From the Tom's Hardware story [tomshardware.com] at the time:

Big Bang II is codename for ForceWare Release 180 or R180. The biggest improvement is the introduction of SLI multi-monitor. Yes, you’ve read it correctly, Nvidia has finally allowed more than one monitor to use multiple video cards at once, something it’s been trying to do since SLI’s introduction back in 2004.

As I said in my original post, if you want to run multiple monitors with SLI enabled on your primary display in Windows XP, you have to buy a third card. Which, by the way, is the entire fucking point of the article you posted and is something I know full well because I've done it.

Windows XP x64 is hardly an option; it is one of the buggiest, least-supported operating systems released in recent memory. "It just WORKS" is probably the single most ironic description of Windows XP

The link was merely posted in case you wanted something to actually read, as I doubt you have any clue as to just how far my nVidia experience goes (GeForce256 was my first nVidia card, FYI, and I've had EVERY generation since.)

Also, nVidia uses a UNIFIED DRIVER ARCHITECTURE.

One .ini hack is all it takes to re-enable dual monitor support in SLI mode.

No? Then you don't have half a clue, because you haven't stuck your nose into the driver stack.

Come back when you've worked very closely with the drivers. Hint: I did OmegaDriver dev back when 3dfx was still a viable company. Working on nVidia's unified driver architecture is much easier, as it's a single package that works across all NT operating systems. That means you can enable features from one version that are not present in another with simple tweaks and hacks.

What do you mean by "100% scaling"? Do you mean both cards get full performance? Because I can assure you that was not the case with 3dfx; you would only get about 3/4 of the full performance the cards were capable of. The SLI being used now also shares only the name, which has a different meaning; Nvidia is not using the same technology.

Note: It can now be seen that the original 3dfx SLI was very effective, more so than even current implementations by Nvidia & ATI, capable of achieving a near-100% performance increase over a single Voodoo2 card. The reason this could not be observed before was that the CPU was the limiting factor.[3]

How much do you want to bet that the CPU is still the limiting factor?

+1. Your experiences basically mimic mine. SLI doesn't even win in terms of bang for buck. People think "oh, I can buy a second video card later on and boost performance"... but you might as well buy a previous-gen high-end card for the same price, the same performance, and lower power requirements (than running two).

> Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.

My current gaming rig has an XFire 5770 setup (XFire = the AMD/ATI version of nVidia's SLI). I got it almost a year ago -- I paid ~$125 x 2 and saved at least $50 over the 5870.

I regularly play L4D, BFBC2, and RB6LV2. I too have mixed opinions on XFire/SLI, but for different reasons.

> worth until two years later when

No one says you have to wait 2 years. :-) I waited 6 months.

When I built this box, Vista had just come out. So I installed Windows XP, obviously. With two GeForce 8800GT cards, each with 512MB RAM I still have 2.25 GB RAM left for the system, which is plenty. I haven't had a problem running any game released in the last five years and hitting a steady 60fps minimum, which happens to be the refresh rate of my display. So thanks for the advice, but it's not really that helpful and totally ignores the rest of the points I raised.

If you decide you need more RAM and don't have the money for Win 7 x64 (which really is a kick-ass OS, BTW), you might want to look into SuperSpeed RAMDisk [superspeed.com]: with PAE, it will let you use the RAM XP can't see as a RAMDisk, which will really give a machine a kick in the pants.

I have a couple of customers still on XP, as well as keeping an XP partition myself for some old software that doesn't like Win 7 (Cubase), and having 4.5GB of RAM as a temp drive really speeds things up. It is butt simple to use, you can se

^ This. You are using SLI but living with the XP 3GB 32-bit memory cap??? Remember, the more GPU RAM you have, the less address space is left for your regular RAM... and you have two GPUs with RAM.
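As a back-of-the-envelope sketch of where the GP's "2.25 GB left" figure comes from (the non-GPU MMIO number here is an assumed round value; actual reservations vary by board, BIOS, and driver):

```python
# 32-bit address-space math: everything mapped below 4 GB competes
# with physical RAM for the same address range.
MiB = 1024 ** 2
addr_space = 4096 * MiB        # total 32-bit physical address space
gpu_apertures = 2 * 512 * MiB  # two 512 MB cards mapped into it
other_mmio = 768 * MiB         # chipset/PCI/firmware reservations (assumed)
usable_ram = addr_space - gpu_apertures - other_mmio
print(usable_ram // MiB)       # 2304 MiB, i.e. 2.25 GB
```

So a second card with its own 512 MB aperture costs another half gigabyte of addressable RAM on a non-PAE 32-bit system, which is the point being made.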

Is that actually an issue with PAE?

I'm using XP for gaming even though I have 8GB RAM in my desktop system, because some of the games I play do not run on Windows 7, but none of them require it. The rest of the time I run Ubuntu which just bloody screams on this hardware compared to XP. XP may be old, but Microsoft is still selling it.

OK, but it says it's locked to 4GB of physical RAM, not 4GB of address space. Your link does not back up the specific claims of the GP comment. The link specifically says that driver manufacturers can place those mappings wherever they want if PAE is enabled.

I don't know if nVidia is actually doing this but I do know that some apps work with PAE enabled and some get more crashy, so I added an option to my boot.ini to disable PAE when I like. This also disables NX so I don't use it all the time.
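For reference, the XP-era NTLDR switches for this are /PAE and /NOPAE, and hardware DEP/NX is controlled by /noexecute (which requires PAE, hence the trade-off described above). A dual-entry boot.ini along those lines might look like this -- the disk/partition paths are illustrative and depend on your install:

```ini
[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP (PAE + DEP)" /fastdetect /PAE /noexecute=optin
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP (no PAE)" /fastdetect /NOPAE /noexecute=alwaysoff
```

Picking the second entry at boot gives you the "PAE off when I like" behavior without editing anything.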

Hard to say (until Bulldozer drops) whether NVIDIA thinks they are really good, or just good enough. Since NVIDIA no longer has an Intel chipset business, tying SLI to Intel platforms no longer serves to move more product; it only restricts the size of their potential market (since anybody who wants the often quite aggressive price/performance of an AMD part won't be buying more than one NVIDIA card, at most).

So long as they are confident that AMD's CPUs will be good enough not to bottleneck SLI configurations, trying to sell multiple cards to people who purchase AMD CPUs seems only reasonable. If, of course, they think that the CPUs will be even better than good enough, the approach is even more reasonable, so it doesn't tell us too much about which it is.

And what exactly was that snork for? The "bang for the buck" has been firmly on the AMD side of the aisle for quite some time, as you can see here [cpubenchmark.net] on this handy chart. Thanks to the Intel socket roulette, it has gotten to the point that, depending on the socket, one can build a complete AMD quad-core system for less than a dual-core Intel chip and motherboard alone; that is how much extra you are paying for all those ads (and the kickbacks to OEMs, of course).

Since it came out that Intel was rigging their compiler and bribing OEMs (where the hell is the antitrust bust, anyway?), I've been selling AMD exclusively, and my customers couldn't be happier. The lower price of AM3 boards and CPUs means they can get better features and more options for less money, and having those extra cores helps to future-proof a system. I mean, how can you complain when you can get an AMD triple-core OEM chip PLUS a good ASRock motherboard for under $100, or that I can deliver a fully loaded AMD quad WITH a GeForce 210 AND a wireless keyboard/mouse combo and Win 7 HP x64, all for $550, while still making a decent profit? I certainly can't, and my customers are delighted with the performance. Not everyone is only worried about ePeen benchmarks, you know.

As for TFA, this isn't really surprising, and the only shock is that Nvidia didn't do this sooner. Intel, in their usual douchebag manner, killed the Nvidia chipset business (again, WTF? Where the hell is the antitrust already?) by refusing them the right to build for anything past LGA775, thus forcing everyone on Intel boards to have a craptastic Intel IGP whether they wanted it or not, so working out a deal to have SLI on AMD seems only natural.

To me the bigger question is why the hell they don't sell a 1x GPU board designed for PhysX. There are many of us, I'm sure, without dual x16 boards who would be happy to buy an Nvidia chip just to add PhysX, but screwing us over by disabling PhysX support whenever the driver detects an AMD GPU is just stupid, especially since that pretty much kills any point of having Nvidia on AMD, with nearly all AMD boards having Radeon IGPs now. If they want to sell more GPUs, a 1x board dedicated to PhysX would probably sell like hotcakes for those of us on AMD platforms.

Personally, until they quit screwing us with the drivers, I'll keep using Nvidia strictly for HTPCs (you can get a GeForce 210 for like $20 after MIR), and for those who want to game on the cheap I'm sticking with the HD4850 (can't beat a 256-bit memory bus for $60 refurb, and it makes a cheap CrossFire monster) or the HD5770.

In fairness to Intel, if you need pure bang, particularly if it is necessary that said bang be delivered to a single-threaded application, AMD has nothing on them.

However, as you say, AMD has plenty of more-than-fast-enough offerings that are dirt cheap and tend to be supported by slightly cheaper motherboards. Given that many modern games tend to be GPU-bound much of the time, gamers on budgets are generally pretty well served by cheaping out a bit on the CPU and going up a model or two on the GPU side. Since NVIDIA sells no CPUs (barring Tegra, which is irrelevant here) and no longer has a chipset business worthy of note, they'd be fools not to try to scoop up the "good enough AMD rig and enough cash left over for a slightly ludicrous video card" demographic.

Now, given Intel's strength, they aren't about to cancel SLI support on Intel, despite being fucked over on the chipset side; but ignoring AMD doesn't make much sense.

Exactly, and I'd add that since most games are multiplatform, even those with PC enhancements are not really gonna slam the CPU enough to justify the huge price difference.

I like to game and looked at both sides when I built my machine, but for just $600 I got an AMD Phenom II 925 quad, dual 500GB HDDs, a nice ECS business-class motherboard with solid caps and plenty of USB headers, an HD4650 1GB, 8GB of DDR2 800MHz, a DVD burner, and a nice black case with 8 USB ports. I just recently had my HD4650 repla

Your entire rant about Intel has been rectified. First AMD sued Intel, that case was settled over a year ago. Then the FTC gave Intel an anticompetitive smack down on top of that, which was settled nearly a year ago.

conditioning benefits to computer makers in exchange for their promise to buy chips from Intel exclusively or to refuse to buy chips from others; and
retaliating against computer makers if they do business with non-Intel suppliers by withholding benefits from them.

In addition, the FTC settlement order will require Intel to:

modify its intellectual property agreements with AMD, Nvidia, and Via so that those companies have more freedom to consider mergers or joint ventures with other companies, without the threat of being sued by Intel for patent infringement;
offer to extend Via’s x86 licensing agreement for five years beyond the current agreement, which expires in 2013;
maintain a key interface, known as the PCI Express Bus, for at least six years in a way that will not limit the performance of graphics processing chips. These assurances will provide incentives to manufacturers of complementary, and potentially competitive, products to Intel’s CPUs to continue to innovate; and
disclose to software developers that Intel computer compilers discriminate between Intel chips and non-Intel chips, and that they may not register all the features of non-Intel chips. Intel also will have to reimburse all software vendors who want to recompile their software using a non-Intel compiler.

> Your entire rant about Intel has been rectified. First AMD sued Intel, that case was settled over a year ago. Then the FTC gave Intel an anticompetitive smack down on top of that, which was settled nearly a year ago.

Unfortunately, according to people looking at the compiler, things haven't changed [agner.org].
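For anyone who hasn't followed the compiler saga: the complaint is that the generated dispatcher picks code paths by CPUID vendor string rather than by the feature flags the CPU actually reports. An illustrative Python sketch of the two dispatch strategies -- this is not Intel's actual code, just the logic being argued about:

```python
# Illustrative only, NOT Intel's actual dispatcher code.
# Vendor-string dispatch sends non-Intel CPUs down the generic path
# even when they report the same instruction-set features.
def vendor_dispatch(vendor, has_sse2):
    if vendor == "GenuineIntel" and has_sse2:
        return "vectorized SSE2 path"
    return "generic x86 path"   # AMD lands here despite supporting SSE2

# A feature-based dispatcher checks only the capability bit.
def feature_dispatch(vendor, has_sse2):
    return "vectorized SSE2 path" if has_sse2 else "generic x86 path"
```

The FTC disclosure requirement quoted upthread ("may not register all the features of non-Intel chips") is describing exactly the first function's behavior.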

Sorry, but it is YOU who are wrong. As you can see from the post below me, Intel's latest compilers still use the "AMD is evil" flag, and it gets worse from there. That ruling gives Nvidia a PCIe slot after Intel had already driven them from the market, so it is about as useful as saying Commodore should be given free shelf space now, with the fabs shut down, the devs pink-slipped, and the lights turned off.

So that one is completely pointless and in fact rewards Intel for rigging the market as it doesn't giv