Can I buy two relatively inexpensive cards and SLI them to get more memory? The 550 Ti (1GB GDDR5) is on sale today at Newegg for $89.99 after a $30 MIR. There are others in the $100 range too - times 2 = ~$200. Stepping up to 2GB (from the 1.2GB I have now on my GTX 470) might be worth it to me.

Does GDDR scale up that way? The more memory the better for apps like Premiere Pro especially - 2GB would be a pretty nice bump from 1.2GB.

Do I need to concern myself with the memory interface? The 550 Ti has a 192-bit interface; the card I'm running now, the GTX 470, has a 320-bit one. So I'd increase the memory, but would access be slower? I wonder if that trade-off is worth it.

Well thanks for reading and for sharing any thoughts you might have - I appreciate it!

EDIT: Maybe I should worry about the number of CUDA cores instead? That's where all the work is done, right? I didn't realize the difference between the 470 and the 550 Ti was so huge - 448 vs. 192. Those 192 are newer cores, so maybe the technology has improved, but that's a lot fewer. Maybe I'd better wait till the 600-series is out in more price ranges. Thanks!

Even in SLI, since the cards have 1GB each, it does not increase your frame buffer - it will still act like the frame buffer of one card.

Perhaps you should just get a better single card. A 560 Ti performs slightly better than a 470, but they are pretty much equal... perhaps a 560 Ti 448 or even a GTX 570 would give you more performance. If you can find another GTX 470 for cheap, SLI them instead of two cheapo cards. Maybe find a 2GB 560 Ti - they are out there for a little over $200. Why do you need more video memory? Give us your reasons and we can figure something out.

In short - you can't buy 2 x 1GB cards and have the 2GB used as one. The memory remains separate to each card, and is reported to Windows as 1GB of video RAM.

Your monitor's native resolution will determine how much video RAM you'll need - generally (with the games out today) 1GB of memory is plenty for 1920 x 1080/1200. The video RAM is mainly used to store texture data, buffered frames, etc. - simple maths dictates that more space is needed as the resolution increases.
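To put a rough number on that maths, here's a quick sketch of the size of a single uncompressed frame at a few common resolutions (assuming plain 8-bit RGBA, 4 bytes per pixel - real games keep many textures and intermediate buffers in VRAM on top of this, so treat it as a lower bound):

```python
def frame_bytes(width, height, bytes_per_pixel=4):
    """Size of one uncompressed frame (8-bit RGBA = 4 bytes per pixel)."""
    return width * height * bytes_per_pixel

# Per-frame cost grows linearly with pixel count, hence with resolution.
for w, h in [(1680, 1050), (1920, 1080), (1920, 1200), (2560, 1600)]:
    print(f"{w}x{h}: {frame_bytes(w, h) / 2**20:.1f} MiB per frame")
```

The per-frame numbers look small, but multiply by double/triple buffering plus all the texture data for a scene and 1GB fills up fast at higher resolutions.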

Memory bandwidth is important to some games, and less so in others. As a general rule, the wider the memory bus, the greater the bandwidth and the faster the card will perform. Some cards buck this trend (look at the HD 7770, or the older HD 5830) - they perform OK regardless, thanks to more "cores" and faster clocks cancelling out the narrower memory bus. It all depends on what games you want to play, though.
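The bus-width trade-off is easy to put numbers on. A quick sketch for the two cards in question, using the published effective GDDR5 data rates from their reference spec sheets (peak theoretical bandwidth, not real-world throughput):

```python
def mem_bandwidth_gbps(bus_width_bits, effective_rate_mtps):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * effective_rate_mtps * 1e6 / 1e9

# Reference-card effective data rates (GDDR5 transfers 4x the memory clock):
cards = {
    "GTX 470 (320-bit @ 3348 MT/s)": (320, 3348),
    "GTX 550 Ti (192-bit @ 4104 MT/s)": (192, 4104),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {mem_bandwidth_gbps(bus, rate):.1f} GB/s")
```

So even with its faster memory clock, the 550 Ti's 192-bit bus leaves it with roughly three quarters of the 470's bandwidth - the narrower bus is a real step down, not just a spec-sheet difference.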

The more of those "cores" the better, though, and coming from a GTX 470 I would say you'll be better off looking at the 560 / 560 Ti rather than the 550. The 560 Ti performs especially well for its price (literally just a few percent behind the GTX 570), but costs a lot less to buy.

Of course, it doesn't have to be Nvidia either - there are a lot of HD 6950s knocking about the web for good money too, as they have also been replaced. Might be worth a look in that direction?


You know, I never really thought about whether applications like Premiere use the GDDR memory on graphics cards. I suppose if they use the GPU for rendering assist, computation, or some sort of offloading, it makes sense that they could, but I'm not sure.

Beyond that, the answer is no, on a number of levels.

First, memory in SLI doesn't aggregate. Each GPU has to render from its own set of data. The video memory in an SLI setup serves only the GPU it's attached to and feeds that GPU the necessary data. The memory doesn't pool into a larger common cache.

Second, I think if you're looking at doing some heavy application stuff (like photo or video editing), you're better off with larger amounts of system RAM. You'll get a better response from being able to load more of your work material into main memory. I don't think such applications load all the work material into video memory when offloading - they'd just load whatever subset of data is necessary for whatever process they ask the GPU to do.

Third, does Premiere actually take advantage of SLI? I suppose it could, but I don't know.

Finally, in general, I'd think you'd be better off holding onto that GTX 470 for compute-type applications (offloading for Premiere), as "higher class" (x70 > x60 > x50) cards tend to have more resources for accelerating computational work. I'm not totally sure in this case, though, since the 500 generation is based on the Fermi architecture, which was designed to do a lot more non-gaming stuff, so there might be an advantage going from the 400 series to the 500 series. However, since the 600 series has seen a shift back to gaming (it's complicated), I don't think it's worth your time to wait for a GTX 670 or GTX 660.

I'm not the final word on this sort of stuff, since I don't make a lot of use of video editing, let alone of GPU acceleration for video editing. But until you get a better answer this is my best understanding--take it with a grain of salt.

gbcrush wrote:First, memory in SLI doesn't aggregate. Each GPU has to render from its own set of data. The video memory in an SLI setup serves only the GPU it's attached to and feeds that GPU the necessary data. The memory doesn't pool into a larger common cache.

Yup. IIRC in an SLI/crossfire setup all of the texture data gets duplicated onto both cards, so you've effectively still got the same amount of RAM as a single card.


So you know: as far as system RAM goes, I'm maxed out - 24GB on the X58 platform. I added another Spinpoint but haven't RAIDed them yet; if my video-editing needs increase, I'm sure I will RAID a few disks. 80% of what I do is for fun - projects for friends and family. I use Adobe Production Premium CS5.5 and Cinema 4D: mostly editing DSLR and compact-cam footage, some BD/DVD authoring, making slide shows, modeling animated 3D figures (in Flash or C4D) and adding them to the movies/slide shows, adding original audio recorded in Audacity...

The "good news" is, if I get a better card, apparently Premiere Pro does use the card's memory. From Adobe's site: "Note that whether a frame can be processed by CUDA depends on the size of the frame and the amount of RAM on the graphics card (VRAM)."

The 470 is a Fermi card, CUDA-enabled, which is the main reason I bought it. Premiere Pro uses what's known as the Mercury Playback Engine, which is CUDA-based but also includes a whole series of improvements over CS3 and previous versions. The biggest one is that it's (finally) a 64-bit, multi-threaded application. Since CUDA is an NVIDIA thing, I didn't look at any Radeon cards.

The Mercury Engine is pretty sweet but it doesn't accelerate everything; for instance encoding and decoding are strictly CPU-bound. But the list of filters, blend modes, color conversion algorithms, etc. grows longer all the time. It really does make a big difference too. You can choose not to use it (in the Prefs) and see the difference immediately. Not sure why that's an option but whatever.

I do a good amount of 3D stuff in Cinema 4D using particle systems and loads of polygons. The Quadros - the expensive ones, anyway - devour that stuff, with little if any lag on screen. My 470 maxes out when I use too many particles or polygons, or render them with too much detail, too-high AA, etc. Previews become too slow or even impossible to render, and I think that's a function of the "small" frame buffer on the 470. 1280MB gets used up pretty quickly on a 1920x1200 monitor - and I use two of them, though of course the preview is only on one.
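For a rough sense of why 1280MB runs out so fast: GPU-accelerated effects pipelines like Mercury typically work in 32-bit floating point per channel, i.e. 16 bytes per RGBA pixel (an assumption here for the estimate - the real allocation also covers textures, intermediates, and the display buffers themselves, so actual headroom is smaller):

```python
def frames_in_vram(vram_mib, width, height, bytes_per_pixel=16):
    """Upper bound on uncompressed 32-bit-float RGBA frames that fit in VRAM."""
    frame = width * height * bytes_per_pixel  # bytes for one frame
    return (vram_mib * 2**20) // frame

print(frames_in_vram(1280, 1920, 1200))  # GTX 470-class card
print(frames_in_vram(2048, 1920, 1200))  # a 2GB card
```

A few dozen full-quality frames at most, before anything else on the card claims memory - which lines up with previews choking once the scene gets heavy.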

Seems like a Quadro card is in my future. Sure wish the one I want didn't retail for $3998 though...even the 2G card is $1750. Crazy (as in, "stupid") money for a hobbyist like me...but after I win the lottery ...

Thanks again for the info folks - I can always count on you guys - great site and a great forum!