I have been thinking about the same thing over the past few days. My work desktop has a GeForce GTX card, and it has always felt slightly laggy when rotating large models, hiding/showing objects, and so on.
I tested the same file on a Dell Precision laptop with a Quadro card, and it's another story: it is much more responsive.
I've always used Quadro cards on my personal laptops, and Rhino seems to work much better on them.

From my previous research on various websites that benchmark cards, it seems that gaming cards can perform very well in comparison to workstation cards, but the result is very dependent on the application. For instance, a GeForce GTX card will perform as well as or better than a Quadro in AutoCAD, but it will perform poorly in Maya by comparison. Some cards have drivers that are best suited to certain programs. Can someone point us to some resources with Rhino-specific benchmarking on various cards?

Additionally, something to consider is that the rendering engines you use may perform better on workstation cards than on consumer cards, so it's hard to pinpoint an exact answer for your needs (say, rendering in native Rhino vs. V-Ray vs. Maxwell).

One used to be able to generalize that design representation and raytracing were two different creatures, but with GPU raytracing in programs like Maxwell, more rendering is done by the GPU than the CPU.

But while you are designing, a fast CPU will help, because interactive modelling is mostly sequential: it's not easy to multi-thread 1+1.

For design, I opt for fast single-core performance, perhaps on a quad-core; but for raytracing, more cores are usually better, even if they are relegated to feeding the video card things to do.
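To make that single-core vs. many-core point concrete, here is a minimal Python sketch (the toy workload and names are mine, not from anyone's actual renderer) that times the same embarrassingly parallel, raytracing-style job serially and on a process pool:

```python
# Toy illustration: an embarrassingly parallel workload (like shading
# independent tiles of an image) scales with core count, while a serial
# dependency chain (like interactive modelling steps) does not.
# All names here are hypothetical.
import time
from multiprocessing import Pool, cpu_count

def shade_tile(seed):
    # Stand-in for per-tile raytracing work: pure CPU arithmetic.
    acc = 0.0
    for i in range(1, 200_000):
        acc += (seed * i) % 7
    return acc

if __name__ == "__main__":
    tiles = list(range(64))

    t0 = time.perf_counter()
    serial = [shade_tile(t) for t in tiles]
    t1 = time.perf_counter()

    with Pool(cpu_count()) as pool:
        parallel = pool.map(shade_tile, tiles)
    t2 = time.perf_counter()

    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s on {cpu_count()} cores")
```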

If a video card works better for one application than another, it's because the marketing people crippled it so.

[I once did at least 40 hours of Maya tutorials before I realized I hated the program. It’s supposed to be a good animator, though. As long as I don’t have to use it.]

I have had a few cards over the last couple of years, but the best card I have found is the NVIDIA GTX 980: zero issues with Neon, Brazil, V-Ray, and large 500 MB files. They do like a bigger power supply than recommended, though.

If you look at the Holomark 2 release post, you will find that the GTX 1070 got beaten by the GTX 980 and most Quadro cards on GPU score. I guess gaming benchmarks are somewhat different from how Rhino uses the GPU.

Here are the 1070 results on my “older” dual Xeon system:
No AA:
[image]
Notice how the mesh scores drop when I turn on AA:
2xAA:
[image]
4xAA:
[image]
8xAA:
[image]
So as you can see, unlike with Quadros, those pure mesh scores are strongly limited by turning on AA.
This doesn't affect normal Rhino NURBS modelling, but if you work with heavy architectural models with lots of meshes, then you will notice the difference.

AA makes a difference, as you can see, but I still get close to 20k at 8xAA on that "old" Xeon CPU.
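If you want to quantify the AA hit on your own machine, a tiny sketch along these lines works; note that the score values below are made-up placeholders, not the numbers from the screenshots above:

```python
# Hypothetical Holomark-style mesh scores; replace with your own results.
# The values here are placeholders for illustration only.
scores = {"No AA": 30000, "2xAA": 27000, "4xAA": 23000, "8xAA": 20000}

baseline = scores["No AA"]
for level, score in scores.items():
    retention = 100.0 * score / baseline
    print(f"{level:>5}: {score:6d}  ({retention:5.1f}% of the no-AA score)")
```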

Edit: Removed duplicates from data. Edit2: Updated Regex & results (R15), now including two runs of Holo's GTX 1070. Edit3: Updated Regex & results (R16), now including 1 or 2 GPU cards in the listing (GPU_ & GPU2_). I created some Regular Expressions (attached) to extract some data from these pages and made a summary file in Excel of it (attached); see Fig 1. More info about how to work with the data (attached) far below. Fig 1: Summary of Holomark 2 (some data from the thread "Holomark 2 Re…
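For a rough idea of what that extraction might look like (the pattern and line format below are my guesses, since the attached Regex files aren't reproduced here), a Python sketch that pulls card names and scores out of pasted result text and writes an Excel-friendly CSV:

```python
import csv
import re

# Hypothetical pattern and field layout: this guesses at result lines that
# look like "Card: GTX 1070 | Total Score: 48123".
LINE_RE = re.compile(r"Card:\s*(?P<card>[^|]+?)\s*\|\s*Total Score:\s*(?P<score>\d+)")

def extract_results(text):
    """Yield (card, score) pairs from pasted Holomark result text."""
    for match in LINE_RE.finditer(text):
        yield match.group("card"), int(match.group("score"))

# Toy input standing in for copied forum pages.
sample = """Card: GTX 1070 | Total Score: 48123
Card: Quadro K4000 | Total Score: 35210"""

# Write a CSV that Excel can open as a summary table.
with open("holomark_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["card", "score"])
    writer.writerows(extract_results(sample))
```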

Rhino tries its best to harvest information about the graphics system, and sometimes gets it wrong. This information is only used as exactly that: information. In no way is it used to limit performance.
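For context, the "harvesting" here is roughly reading strings such as GL_VENDOR and GL_RENDERER back from the driver. A minimal sketch with glfw and PyOpenGL (my library choices; Rhino's actual probing code is internal):

```python
# Query the driver's vendor/renderer/version strings, roughly the kind of
# information an application gathers about the graphics system.
# Requires the 'glfw' and 'PyOpenGL' packages.
import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

if not glfw.init():
    raise RuntimeError("could not initialise GLFW")

glfw.window_hint(glfw.VISIBLE, glfw.FALSE)  # hidden window, just for a GL context
window = glfw.create_window(64, 64, "probe", None, None)
if not window:
    glfw.terminate()
    raise RuntimeError("could not create an OpenGL context")
glfw.make_context_current(window)

for name, enum in [("vendor", GL_VENDOR), ("renderer", GL_RENDERER), ("version", GL_VERSION)]:
    print(f"{name}: {glGetString(enum).decode()}")

glfw.terminate()
```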

If you read the following posts in that thread, you will also read that the Rhino 6 WIP apparently finds better system information somewhere.

But that's just about reporting a number; it's not about whether the card is fully supported or not.

That said, the Rhino 6 WIP will make use of more modern OpenGL features that newer GPUs provide, and it also has a completely revamped display system. This does mean that the next version of Rhino will support your card more fully.