But then the Quadro 5000 costs 3x as much as the GTX 580, so it had better perform at least twice as well. It seems in those tests that the only significant improvement in performance came in scenes with very high poly counts and large object counts, which most people don't have to deal with.

For those who do, a Quadro might be necessary, but unless you can comfortably afford it, it's probably not worth it.

And for GPU rendering the gaming cards still render a lot faster than any professional cards (Quadro and Tesla).

Originally Posted by filanek: I am quite curious how the new cards perform. Could someone who owns a GTX 580, GTX 690, or a newer Quadro replicate the scene and post your results here?

In GPU rendering (VrayRT) the newer 600 series cards didn't perform much better than the GTX 580. I don't know about viewport performance, though the GTX 690 would be a bad choice because the 3ds Max viewport doesn't support SLI, and the GTX 690 is technically two cards in one. The 690 is essentially two 680s put together, so 3ds Max would only be able to use one of the processors on that card for the viewport. Plus, they are clocked down for stability, so the processors aren't even as fast as a standalone GTX 680's.

Originally Posted by darthviper107: (snip) In GPU rendering (VrayRT) the newer 600 series cards didn't perform much better than the GTX 580. I don't know about viewport performance, though the GTX 690 would be a bad choice because the 3ds Max viewport doesn't support SLI, and the GTX 690 is technically two cards in one. The 690 is essentially two 680s put together, so 3ds Max would only be able to use one of the processors on that card for the viewport. Plus, they are clocked down for stability, so the processors aren't even as fast as a standalone GTX 680's.

iRay doesn't support the 600 cards yet, so it can't be tested there.

When you say that the 600 series cards didn't perform much better than the GTX 580, do you mean per CUDA core or overall? The GTX 580 has 512 cores, while the 680 has 1536 cores - exactly three times as many. (Never mind the clock speed difference.) I was expecting at least triple the performance based on those numbers (and maybe 4x with the clock speed and memory bandwidth increases).
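For what it's worth, the core-count comparison overstates the expected gap: Fermi ran its shaders at twice the core clock (the "hot clock"), while Kepler dropped that, so Kepler cores do less work per core per second. A back-of-the-envelope sketch using spec-sheet numbers (clocks taken from Nvidia's published specs, not from this thread):

```python
# Naive throughput estimate: CUDA cores x effective shader clock.
# Fermi (GTX 580) ran its shaders at 2x the core clock ("hot clock");
# Kepler (GTX 680) dropped the hot clock, so shaders run at the core clock.
gtx580_cores, gtx580_shader_mhz = 512, 1544   # 772 MHz core x 2
gtx680_cores, gtx680_shader_mhz = 1536, 1006  # base clock, no hot clock

naive_ratio = (gtx680_cores * gtx680_shader_mhz) / (gtx580_cores * gtx580_shader_mhz)
print(f"naive GTX 680 / GTX 580 throughput ratio: {naive_ratio:.2f}x")
# roughly 2x, not 3x, even before architecture and driver differences
```

So even on paper the expectation is closer to 2x than 3x, which makes the modest real-world gains less mysterious.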

Moving the code over to the Kepler architecture requires a recompile and who knows what other modifications. I wonder if VRay rushed this out and there are a lot of potential improvements left in optimizing how the code runs on the new hardware.

The other odd thing (and I'm being a conspiracy theorist here) is that Mental Ray is owned by nVidia, and iRay is an application that would get us to give our money to nVidia for GPUs rather than to Intel for CPUs. It's odd that iRay for Kepler hasn't been released yet. I would have expected it to be ready as soon as the 600 series cards started shipping as a way to show off their compute power. So here's the conspiracy angle: what if Kepler isn't performing well compute-wise, so nVidia is delaying the release of iRay because it would highlight the lack of performance. (Of course, other explanations are that the re-compile to Kepler is harder than expected and/or Mental Images simply isn't devoting a lot of resources to porting iRay over.)

Originally Posted by tomdarchitect:When you say that the 600 series cards didn't perform much better than the GTX 580, do you mean per CUDA core or overall? The GTX 580 has 512 cores, while the 680 has 1536 cores - almost triple. (Never mind the clock speed difference.) I was expecting at least triple the performance because of those numbers (and maybe 4x with the clock speed and memory bandwidth increase).

I would suggest you go read Phil Miller's post at the AREA, here. He works for Nvidia and has an understanding of both the hardware and the software. More specifically, this part of his post:

Quote:But to set expectations, you should not expect the initial Kepler products (out now) to deliver a dramatic speed increase for iray over their Fermi generation predecessors. While Kepler has many more cores than Fermi, they run at much lower power, which means they have less performance per core. The gain you are guaranteed to see is superior performance per watt. This also makes it much easier to fit larger or more GPUs into power-constrained systems.

Eric - thanks! That clears up a lot of questions I had, and brings up some interesting points.

(This is all from the perspective of someone who wants to use iRay, and wants to use it on gaming cards, rather than Quadros or GPU-compute cards because of the cost. I'm an architect who would like to do some realistic-ish visualization for clients periodically, rather than being in the full time viz business.)

So the Fermi to Kepler (1st gen) transition was about improving performance per watt, rather than increasing overall performance. (Not an unimportant issue - I'm running an iRay test animation right now - my toes are pretty warm under my desk and I can just imagine how fast the disk on my electric meter is spinning between the computer drawing power and the AC trying to balance out the heat...)

As a consumer, you would hope that when performance stays about the same, the dollars-per-unit-of-performance would come down over the course of a year from the 500 series to the 600 series, but that doesn't seem to have happened.

But on the topic of hardware cost, on that thread, Phil says, "Renderers based on NVIDIA OptiX can support paging to system RAM, but there aren’t any of these publically available for Max yet." If that system comes to iRay (or if OptiX comes to 3DS and does what I'm looking for), and there isn't too much of a performance hit, that could help a lot with the cost. Using 1GB or 2GB gaming cards sharing system RAM, rather than 2.5/3/4GB "high-end" cards could help to keep cost-per-performance down. (Actually, the scene I'm rendering currently is using less than 1GB of ram on the graphics card, but I don't want to set myself up to have to spend hours messing with models to keep them within some tight memory-availability limit.)

But I still suspect that this lack of performance increase from the 500 series to the 600 series is a big headache for nVidia's marketing people.

It's probably not a big deal for Nvidia; the Geforce cards are meant for games, and in that area the GTX 680 is significantly faster than the GTX 580. Nvidia pushes the Quadro cards for things like 3ds Max and GPU rendering, which is why they aren't concerned if the Geforce cards underperform in those areas.

In fact, it looks like they are intentionally making their gaming cards perform poorly in those areas to push people toward buying their more expensive workstation cards.

Still, for something like iRay the gaming cards render much faster than the workstation cards. And the GTX 580 comes in a 3GB version, which is a good amount of memory; you'd have to get a Quadro 6000 or a Tesla to beat that, and those would still render more slowly.

The sad thing, though, is that the GTX 580 is no longer in production, so stock won't last long.

I just recently got a second one and they render amazingly fast. The only issue is the heat: in a 2-minute iRay render it went from 50C to 80C, which is too much. But I think I can do some tweaking that should fix that without much trouble.

Originally Posted by darthviper107: I just recently got a second one and they render amazingly fast. The only issue is the heat: in a 2-minute iRay render it went from 50C to 80C, which is too much. But I think I can do some tweaking that should fix that without much trouble.

I would check and see what the maximum operating range is for the card. For the Quadros it is up in the 105°C range, and quick googling shows that the GTX580 is around 95°C.

Which is another difference between gaming and professional cards: the professional cards are burn-in tested to a higher standard. One other thing to note is that, with most hardware, technology upgrades are pushed to consumer lines before professional lines, from what I have seen.

-Eric

EDIT: According to the Wikipedia comparison page, the Quadro 4000/5000/6000 are based on the same/similar tech as the GTX 465/470/480 (GF100). So comparing the 580 (GF110) or 680 (GK104) to the Quadros is really apples to oranges. You should compare GF100 cards to see how the consumer versions stack up against the professional versions.

I just got a GTX 570 with 2.5GB of RAM, and that's what my animation is currently rendering on. I have the fan curve cranked up, and after a few hours of rendering the core is reading 76C with the fan at 85% (in a room where the AC is set to 75F/24C). My understanding of the "max temp" listed for these cards is that they can hit those temperatures without malfunctioning, not that they are designed to operate at them for hours. I wonder, though, whether operating for hours at 60C is terribly different for the card than operating at 70C or 80C.

The thing that concerns me more than the sustained temperature is the fact that between frames, the temp drops from 76C to 70C, then pops back up a second or two later. I've set iRay to 2min 30sec per frame (still grainy, but usable for my walk-through). I suspect that thermal cycling may be worse for the hardware than high sustained temperatures. Every time it goes through a cycle, the chip is expanding and contracting a few microns. I guess we've been doing this to CPUs, GPUs and RAM for years, and hopefully the engineers and manufacturing folks have figured out how to deal with it...

I hadn't thought about it, but it makes sense that the 500 series would go out of production at some point. I'll have to keep an eye out to see if there are any clearance deals on the units with 2.5 or 3 GB of memory. (But then I'd have to worry about upgrading the power supply... ah, first world problems.)

In there, the GTX 480 is just behind the 580 and still much faster than 2x Quadro 4000.

I've been looking around to see how to fix the heat problem. It seems it's OK for the card to go into the 80s, but I think that's just too high; since I've got something I'd like to set rendering overnight, I don't want to worry that I could burn out the cards if I'm not there watching. It looks like the voltage is set higher than necessary, and lowering it can help. I can also probably lower the clock speed a bit without affecting the render speed much.

If you're interested in a 580 with 3GB, you'd better buy now. I just purchased 9 of them, a buddy bought 4, and it's slim pickings. Vendors said people are buying them up fast; I had to wait an extra week for four of them. If I had bought Quadros or Teslas, I wouldn't have gotten anywhere near the rendering power I have now.

Runs fine, I just need to rearrange things to keep it cooled. If I had experience setting up liquid cooling systems I'd go for the ones with the waterblock on there, but I don't want to get into that type of thing.

Distributed render?
I was able to add two of those to the setup. They are faster at rendering iRay than a standard 580. Louder, but faster. I can do more tests, but I would say at least 10% faster.

Originally Posted by darthviper107:
In there the GTX 480 is just behind the 580 and still much faster than 2xQuadro 4000

Thanks for posting that. I wish it would show core speeds (i.e., is it over- or underclocked?), memory speed, etc. You can overclock a GTX 480 for 3 minutes off a fresh boot in a cool room, no problem.

You should also note that a stock GTX 480's core runs at 700MHz versus the 475MHz core of a Quadro 4000.

What I really find surprising is that a GTX 480 plus two Teslas is only 2x faster than a single GTX 480. That is $3500 compared to $350: literally an order of magnitude greater expense for only twice the speed. For that difference in price I would expect at least 4x the performance.
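Put as simple arithmetic (using the thread's own prices and the observed 2x speedup, not independent measurements), the value proposition works out like this:

```python
# Rough price-per-performance comparison from the figures in this thread
# (prices and the 2x speedup are the poster's numbers, not benchmarks).
gtx480_price = 350          # single GTX 480
tesla_setup_price = 3500    # GTX 480 + 2x Tesla
speedup = 2.0               # observed render-time improvement

cost_ratio = tesla_setup_price / gtx480_price  # 10x the expense
value_ratio = speedup / cost_ratio             # performance per dollar, relative
print(f"{cost_ratio:.0f}x the cost for {speedup:.0f}x the speed "
      f"-> {value_ratio:.1f}x the performance per dollar")
```

By that measure the Tesla setup delivers a fifth of the performance per dollar of the single gaming card, which is the crux of the whole gaming-vs-professional-card argument in this thread.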

The CGSociety