
Intel Core i3 LLVMpipe Performance

07-05-2010, 02:00 AM

Phoronix: Intel Core i3 LLVMpipe Performance

Last week I put out new numbers showing the LLVMpipe performance with the latest Gallium3D code found in Mesa 7.9-devel. This Gallium3D driver accelerates all operations on the CPU rather than a GPU as a better software rasterizer than what is currently available for Linux, but even with a hefty Intel Core i7 CPU the OpenGL acceleration was still quite slow. In this article using an Intel Core i3 mobile CPU we are looking at the LLVMpipe performance again, but this time comparing it to the Intel graphics performance and also looking at the impact that the clock frequency and Hyper Threading have on this Gallium3D driver that heavily utilizes the Low-Level Virtual Machine for its CPU optimizations.

Comment

Mesa is the old classic Linux driver (lacking performance optimisations in many cases)

Gallium3D is a layer that's somehow pushed in between the kernel and the driver itself, which provides some kind of environment for the special Gallium drivers. It's an attempt to generalise graphics drivers (somehow).

LLVMpipe is a kind of software rasteriser (no hardware acceleration by the GPU) being developed on the Gallium3D infrastructure using LLVM (Low Level Virtual Machine), which somehow has some pluses in linking optimisations while compiling the driver (or even recompiling for optimisations while at play???)

Well, correct me if I'm wrong...

How can it actually be that the framerate doesn't jump to twice the framerate without HT when HT is reactivated? (@2.60GHz)
Is it some kind of bottleneck or overhead, or is the state of the driver not as finished as it seemed to me?
The game itself isn't using multiple cores, I guess, but does LLVMpipe really?

Comment

How can it actually be that the framerate doesn't jump to twice the framerate without HT when HT is reactivated? (@2.60GHz)
Is it some kind of bottleneck or overhead, or is the state of the driver not as finished as it seemed to me?
The game itself isn't using multiple cores, I guess, but does LLVMpipe really?

Why should it? It's not like you get 4 more physical cores. Even in the best-case scenarios of Intel's PR material, HT is only something like a 30% boost.
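To make that concrete, here's a back-of-envelope model of why HT can't double the framerate. The numbers are illustrative, not measured: Hyper-Threading shares each core's execution units between two hardware threads, so the second thread mostly just fills pipeline bubbles.

```python
# Illustrative sketch: SMT/HT scaling vs. doubling the core count.
# The 30% per-core gain is the oft-quoted best case, not a measurement.
physical_cores = 2          # a Core i3 has 2 cores / 4 threads
ht_gain = 0.30              # best-case extra throughput per core from HT

without_ht = physical_cores * 1.0
with_ht = physical_cores * (1.0 + ht_gain)

print(f"speedup from HT: {with_ht / without_ht:.2f}x")  # 1.30x, not 2.00x
```

So even if LLVMpipe scales perfectly across hardware threads, enabling HT buys you the bubble-filling gain, not two extra real cores.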

Comment

<snip>This Gallium3D driver accelerates all operations on the CPU rather than a GPU as a better software rasterizer than what is currently available for Linux, but even with a hefty Intel Core i7 CPU the OpenGL acceleration was still quite slow. In this article using an Intel Core i3 mobile CPU we are looking at the LLVMpipe performance again, but this time comparing it to the Intel graphics performance and also looking at the impact that the clock frequency and Hyper Threading have on this Gallium3D driver that heavily utilizes the Low-Level Virtual Machine for its CPU optimizations.

I think it's inappropriate to call the software-rendering LLVMpipe "accelerated". Acceleration in a graphics context means hardware acceleration, that is, use of hardware dedicated to increasing graphics performance. If you render using the CPU, you're using the non-accelerated way of drawing graphics. That is not to say that non-accelerated couldn't be faster than "accelerated": the first "3D accelerators" were notorious for being slower than drawing things with just the CPU. :-) To get a real feeling for the performance of LLVMpipe, it should be compared with the classic Mesa software renderer.

PS. For neophytes wondering why bother with software rendering at all, LLVMpipe's real importance is that it's a prototype for GPU acceleration. LLVM can be adapted to compile for GPUs and thus get the most out of GPU-driven architectures (after we first get LLVMpipe to work, and get LLVM to compile for GPUs). Also, Brian Paul's Mesa has been a software reference for proper OpenGL, so you can verify that your hardware driver works correctly if it produces the same output as software Mesa.
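The verification idea in that last sentence can be sketched very simply: render the same scene with the hardware driver and with software Mesa, then compare framebuffers pixel by pixel. The tiny frames below are nested lists standing in for real captures; the helper and the tolerance value are my own invention, not any Mesa tool.

```python
# Hedged sketch: compare a hardware driver's output against a software
# reference frame. Images are rows of (R, G, B) tuples.
def max_pixel_diff(img_a, img_b):
    """Largest per-channel difference between two same-sized RGB images."""
    return max(abs(a - b)
               for row_a, row_b in zip(img_a, img_b)
               for px_a, px_b in zip(row_a, row_b)
               for a, b in zip(px_a, px_b))

hw_frame = [[(255, 0, 0), (0, 255, 0)]]   # hypothetical hardware output
sw_frame = [[(255, 0, 0), (0, 254, 0)]]   # hypothetical software-Mesa output

# Allow a tiny tolerance: drivers may round differently and still conform.
print("match" if max_pixel_diff(hw_frame, sw_frame) <= 2 else "mismatch")
```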

Comment

Hello. Please help me understand (I'm new to Linux graphics drivers)...
What is the difference between Mesa, Gallium, and LLVMpipe? I don't understand how they relate to each other.

Thanks in advance!

Mesa is a software library that is an unofficial implementation of OpenGL. Mesa also contains drivers for accelerating 3D rendering with graphics cards.

There are two types of driver inside Mesa:
-Classic
-Gallium3D

Gallium3D is a new kind of architecture for writing drivers that takes advantage of modern graphics card architectures.

Gallium3D is a bit of a mindfuck, but brilliant at the same time:
All Gallium3D drivers have one purpose: to expose an API. All graphics cards out there with Gallium3D drivers expose the same API.

On top of this API, features are written. So when somebody implements OpenGL on top of this API, suddenly all graphics cards with Gallium3D drivers support it. If ATI/AMD were to build OpenGL or DirectX on top of this API, nVidia cards would suddenly also have it, and vice versa.
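The layering described above can be sketched as a toy interface. The class and method names below are invented for illustration and bear no relation to the real Gallium interfaces: every driver exposes the same small API, and something like OpenGL is written once against that API rather than once per card.

```python
# Toy sketch of Gallium3D-style layering (names are hypothetical).
class GalliumLikeDriver:
    """The common API every driver exposes."""
    def draw_triangles(self, count):
        raise NotImplementedError

class NvidiaDriver(GalliumLikeDriver):
    def draw_triangles(self, count):
        return f"nvidia drew {count} triangles"

class LlvmpipeDriver(GalliumLikeDriver):
    def draw_triangles(self, count):
        return f"llvmpipe drew {count} triangles"

def gl_draw_scene(driver):
    """Written once against the common API; works on any driver underneath."""
    return driver.draw_triangles(100)

print(gl_draw_scene(NvidiaDriver()))    # nvidia drew 100 triangles
print(gl_draw_scene(LlvmpipeDriver()))  # llvmpipe drew 100 triangles
```

That's the whole trick: implement the feature once on top of the shared API, and every card with a Gallium3D driver gets it for free.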

Now, Mesa without any drivers can also render 3D, but painfully slowly. This is called the Mesa softpipe renderer. Enter LLVMpipe. LLVMpipe is a much faster software-only driver that uses the LLVM compiler collection. Part of LLVM is a JIT compiler, that is, an on-the-fly compiler that optimises everything before it is rendered, so the performance of software-only rendering is much, much faster.
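If you want to try this yourself, Mesa lets you pick the software rasterizer through environment variables. `LIBGL_ALWAYS_SOFTWARE`, `GALLIUM_DRIVER`, and `LP_NUM_THREADS` are real Mesa/llvmpipe knobs, but this sketch assumes a Mesa build with llvmpipe enabled, and `glxgears` is just an example program:

```python
# Sketch: launch a GL app with Mesa forced onto the llvmpipe rasterizer.
import os
import subprocess

env = os.environ.copy()
env["LIBGL_ALWAYS_SOFTWARE"] = "1"   # ignore any hardware driver
env["GALLIUM_DRIVER"] = "llvmpipe"   # pick llvmpipe instead of softpipe
env["LP_NUM_THREADS"] = "4"          # number of rasterizer threads

# Uncomment to launch any OpenGL application with these settings:
# subprocess.run(["glxgears"], env=env)
```

Setting `GALLIUM_DRIVER=softpipe` instead gives you the classic slow path, which makes for an easy before/after comparison.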

For the rest, see Wikipedia for details.

Comment

Hey, that was actually a test on Phoronix which was quite good.
No endless rows of charts, and more effort to interpret things.

Also, I don't know why Phoronix feels the urge to release soooo many tests. Most good test sites keep the numbers down and the quality up.
You guys could keep it low a bit and take your time making good tests.

But one thing I want to know is: why do the memory usage numbers differ so much between different clock rates?