Do you mean dual-core and quad-core CPUs? Yes, absolutely. But depending on what you are doing, it may not make much difference to performance. A single thread can only run on one core/CPU at a time, so the only gain that thread sees is that whatever other load the computer is handling at the same time may be scheduled onto the other core. In other words, it certainly won't double the speed of such a task. On the other hand, if the task you are looking at is susceptible to being broken up over multiple threads, then it could run up to almost twice as fast on a dual-core machine, assuming the task isn't memory- or I/O-bound.

Here is a really stupid question. Linux MCE does NOT support multi core processing - am I right?

Thanks Colin,

I am led to believe that Windows supports simultaneous multi-core processing - I thought this might also be the case with Linux MCE - yes / no? It's just that when playing back a recorded HD program (in 1080p) my CPU is at 100% - I am running an AMD quad-core 2.2GHz Phenom CPU. Playing back the same thing with the same CPU on a Windows machine results in just over 25% CPU usage (same video card / memory etc.) - just wondering how come? Someone told me I need to install the H.264 codecs from CorCodec - is this true? I thought these codecs were already present. I am confused.

OK, I'm probably not the best person to explain this as there are a few points that I have not been certain of myself in this area. However, some comments:

1) I have assumed (although I am not certain) that the per-process CPU usage shown in top can add up to 100% x the number of cores, so for a dual-core box this would be 200%. The top manual page is a little ambiguous on this point; however, I have often noticed that the summed CPU usage of the top few processes displayed in top is far more than 100%, and the manual page seems to imply that this is the reason why.

2) If you have a process where the bulk of the work is done by a single thread, and that thread is maxing out the core it happens to be on, then on a dual-core machine I would expect top to show 100%: all the computing resources available on that core, and thus effectively 50% of the overall machine's resources.

3) In your case, that would suggest it is consuming 100% of one core out of four, being a quarter of the total 400% CPU available to you... in other words, 25% of the total available to the machine.

4) The above point is highly suggestive, but not definitive - it could just be a coincidence - but it is important to note that Windows definitely does not calculate CPU like this: 100% means 100% of the whole machine. If a CPU-bound single thread is using all the resources of one core out of four, then Windows would report that as 25%. The CPU graph would also look unnaturally "clipped" around that 25% mark rather than having the more normal "spiky" look.

5) Note that Windows often has a significant advantage in decoding various video codecs, as much of the work can be handed off to the video card rather than consuming CPU. In many cases, Linux has to do the heavy lifting directly on the CPU because of the stranglehold that M$ has on hardware manufacturers, limiting how much of the hardware we buy from them is exposed for our use!
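The arithmetic behind points 2-4 can be sketched with a hypothetical helper (assuming top's default "Irix mode", where per-process %CPU is measured against a single core, so totals can reach 100% x the core count):

```python
def whole_machine_percent(top_percent, n_cores):
    # top (in its default "Irix mode") reports per-process CPU as a
    # fraction of ONE core, so per-process figures can reach 100% and
    # machine totals 100% x n_cores. Windows-style reporting measures
    # against the whole machine, i.e. divides by the core count.
    return top_percent / n_cores

# A single thread saturating one core of a quad-core machine:
# top shows 100%, while Windows would show 25%.
assert whole_machine_percent(100, 4) == 25.0

# The same situation on a dual-core box: 50% of the whole machine.
assert whole_machine_percent(100, 2) == 50.0
```

So the 100% seen in top on the quad-core Linux box and the ~25% seen on Windows could well be the same underlying load, just reported against different baselines.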

Thanks again Colin, I think your point #5 hits the nail on the head. I am actually getting 100% CPU use on ONE core of a quad-core 2.2GHz AMD Phenom CPU. So it appears there is no sharing of the load across the remaining cores.

If you mean that this is what you see when running under Windows, then I would suggest that the progression in points 1-4 is more likely closer to the mark...

In any case, it is possible to multithread video decoding so that the load is shared, but it requires a lot more coding sophistication - typically dividing the screen up into 2 or 4 pieces and dispatching the load to each of the cores. I suspect that xinelib does not do this, which is a shame, but I'm sure it will be addressed in future releases, especially given the drive towards multiple cores/CPUs and the graphics driver issues...

Thanks Colin, you have been very helpful, and on reviewing your points 1-4 I now better understand what is happening. I also understand what you mean by multithreaded graphics; it is what I meant to say but I did not know the right terminology. My background is in IT sales and marketing, and before that I was in commercial automation systems, so you'll need to excuse my dumbness in this arena.

FYI, I am getting an HP TouchSmart loaner to test with MCE - I have a Linux expert at hand who is going to do this. I will post the findings in due course, including CPU use. The HP TouchSmart is one sexy piece of kit and would make an incredible hybrid or media director, I reckon. Interestingly, it runs a 9-series nVidia chip - a good start, and I think it should work - fingers crossed.

Sounds interesting! BTW, though, the 9-series nVidia chipset will be way too new, so you will definitely have graphics problems right from the install/setup phase. You will need to install the latest drivers from nVidia - you can find a pretty good guide to this in the wiki.

Thanks Colin. We did that with the nVidia driver patch earlier and it works OK, the only issue being that updating the OS breaks it again. So the solution was to write a workaround that fools MCE into thinking the drivers are 6 or 7 series - that seemed to work, but we still had issues with tearing in alpha blending mode.