Hey, "maxing out the CPU" doesn't mean using 100% of each instruction instead of 50% of each instruction. It means issuing instructions in 100% of the available cycles instead of 50% of them.
If your program is faster, it uses fewer instructions for the same result, and therefore A LOWER PERCENTAGE OF CPU USAGE (since usage is calculated against the number of available cycles, which is fixed).

Also, if you have a task that:
- must run on 100% CPU all the time
- doesn't matter what it does during this time
you can underclock your CPU so that it executes one instruction per year, and sleep the rest of the time. You should get pretty good efficiency like that.

I thought I added a comment to this exact situation in my original post but I guess I forgot to put it in. The underclocking is a nice idea but a little inconvenient for an end-user. If you're using a server then that ends up being a waste of good hardware.

I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."

It's possible, but using sensors would probably not be a good idea.
A lot of games already have a frame-rate cap that (I assume) adds a small sleep to the main game loop to slow it down. I use it in any game that offers it during the summer, because my PC runs way too hot.
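A frame cap of that sort is usually just a sleep against a per-frame time budget. Here's a minimal sketch, not taken from any particular engine (the names `run` and `do_frame` are mine):

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allotted per frame

def run(frames, do_frame):
    """Run the game loop for a fixed number of frames, capped at TARGET_FPS."""
    for _ in range(frames):
        start = time.perf_counter()
        do_frame()  # update + render for one frame
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            # Finished early: sleep off the remaining budget so the
            # CPU idles instead of spinning at 100%.
            time.sleep(FRAME_BUDGET - elapsed)
```

The point is that the sleep converts "spare speed" into idle time, which is exactly when modern CPUs drop into low-power states.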

Example of a game with it, pepper? Because I've never once seen a game with that option available.

I'm a PhD student involved in the ENTRA project from the University of Bristol. I'm also sysadmin at a big UK technology news website, but we won't talk about that. I've been working on energy modelling of software applications for multi-threaded embedded systems for nearly three years, and some of my work is relevant to ENTRA, which kicked off in 2012.

Ask me anything you like - I'll answer if I can. Pre-emptively, however, a few comments on things already discussed here:

1. "Faster = more energy efficient" is of course true, because the same work finishes sooner and the processor spends more time idle. So writing more performance-efficient code will, for the same data set, invariably be more energy efficient.

2. Better still, if your idle power is low enough and you can do DVFS, then race-to-idle can sometimes work in your favour. The relationship isn't that straightforward, though. The dynamic energy consumption of a CMOS device is proportional to the voltage squared, so if you need to raise the voltage in order to reach a higher frequency, energy consumption can rise faster than the speed gain you get. The specifics of this behaviour, and the sweet spots of voltage/frequency operation, as well as whether dynamic or static power is dominant, are dictated by the process technology's feature size as well as various other fabrication options. For example, standard 45nm may have one typical static/dynamic behaviour, whereas 45nm-LP (low-power) may have smaller static power, but you're more limited in operating frequency.
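To make the trade-off concrete, here's a toy model of point 2. All the numbers are invented for illustration (real values depend entirely on the process technology); the only grounded part is the dynamic-energy term E = C_eff · V² per operation, plus static (leakage) power while awake and a residual sleep power:

```python
def total_energy(c_eff, voltage, freq_hz, ops, window_s,
                 static_power_w, sleep_power_w):
    """Energy (joules) to do `ops` operations within a time window."""
    busy_s = ops / freq_hz
    dynamic_j = c_eff * voltage**2 * ops        # E_dyn = C_eff * V^2 per op
    static_j = static_power_w * busy_s          # leakage while awake
    sleep_j = sleep_power_w * (window_s - busy_s)  # deep sleep for the rest
    return dynamic_j + static_j + sleep_j

C, OPS = 1e-9, 1e9  # made-up effective capacitance and workload

# Race-to-idle: high V/f, finish in 0.5 s, sleep the rest of a 1 s window.
fast = total_energy(C, 1.2, 2e9, OPS, 1.0, static_power_w=2.0, sleep_power_w=0.01)
# Slow-and-low: lower V/f, busy for the whole window.
slow = total_energy(C, 0.9, 1e9, OPS, 1.0, static_power_w=2.0, sleep_power_w=0.01)
print(fast < slow)  # → True: with leakage this high, racing to sleep wins
```

Drop `static_power_w` to something small (say 0.2 W) and the comparison flips: the lower-voltage point wins despite running longer, because the V² dynamic term dominates. That's the "not straightforward" part.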

3. Considering the inefficiencies of things like displays and power supplies is important at a system level. Saving 20% of energy in some IP block within the processor is one thing, but if that block contributes to only 3% of the processor energy, which is itself only 10% of the system, you're not changing the world. That said, I work with embedded systems - they might have a network interface (sometimes), but my systems have no display.

4. Compiler optimisation for speed = compiler optimisation for energy is typically true, because of point 1. That's not to say there aren't other optimisations that specifically improve energy with no performance impact (by smoothing out unavoidable slack, for example). But that has not typically been the goal of compiler writers when searching for new optimisation passes.

5. ENTRA stands for ENergy TRAnsparency - it's not just about the tools providing you with optimisations, it's about helping programmers understand where the energy is going. There are programmers that care about performance, and they can profile it relatively easily and see where system time is being spent. The energy behaviour of a system in relation to the software that somebody is writing is much more difficult to get a handle on. So one of our motivators is that if we can give people more information on the energy behaviour of their program, they can start to understand how to code with energy efficiency in mind.