
37 Comments

"I feel like the more common use case in smartphones is to just lock your phone/display when you're not actively using it."

Wouldn't you benefit from PSR even when you are actively using it? When I'm reading a web page in a browser on my phone, I'm not constantly scrolling. I scroll, read a bit, scroll, read a bit, and so on. I might spend 10% of the time scrolling and 90% of the time reading a static screen. It sounds like PSR would benefit there too.

+1 to that. The majority of my tablet use is actually reading ebooks, so in my case I spend a lot of time looking at one static page until it's time to turn to the next one. Whenever I check my battery usage log it's actually something like 90% screen, since I do so little that actually taxes the CPU/GPU.

Does the screen percentage include the GPU power used in sending updates to the screen? If (as I suspect) it doesn't, then your battery life improvements will be limited to <10%. This tech doesn't affect screen power consumption; it lowers the power consumed by the SoC by removing the need to send updates to an idle screen.
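
The <10% bound in that comment can be sanity-checked with a little arithmetic. This is a rough sketch using the hypothetical 90%-screen split from the comments above, not measured data:

```python
def battery_life_gain(screen_frac, soc_reduction):
    """Upper bound on battery-life gain when a feature like PSR
    cuts only the non-screen (SoC/link) share of total power."""
    soc_frac = 1.0 - screen_frac
    new_power = screen_frac + soc_frac * (1.0 - soc_reduction)
    return 1.0 / new_power - 1.0

# If the screen really is 90% of the draw, even eliminating the
# SoC's share entirely extends battery life by only ~11%.
print(round(battery_life_gain(0.90, 1.0) * 100, 1))  # 11.1
```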

What I'm *really* excited about, though, is combining PSR with the IGZO panels coming out in those 3200 x 1800 Ultrabooks. Apparently, IGZO can hold its active state longer, so here you actually ARE decreasing the panel's power consumption. Sharp's marketing says a 67% decrease in power consumption: http://online.wsj.com/ad/article/vision-breakthrou...

The key here is going to be how quickly and efficiently it can move in and out of PSR. For many types of active use outside of playing games or watching movies, there are still stretches where the contents of the screen aren't really changing. For example, reading a web page/email/text message/ebook results in scroll, read for a while, scroll again. PSR can easily kick in for that reading portion, since it lasts several seconds at least. Even during more active use you can still find a couple of seconds here and there, or even fractions of a second, where PSR can kick in. It might not seem like much, but it can add up to a significant amount of time spent in PSR and save a lot of power.

Switch latency should be no problem, as the SoC just needs to push an updated image to the display and its memory. At 60 fps it's got 16.7 ms to do this, which is an eternity in the world of electronics.
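
A quick sanity check of that 16.7 ms claim: how long does one full frame take to push over the eDP link? The link figures here are assumptions (a 4-lane HBR link at 2.7 Gbit/s per lane with 8b/10b coding, so 80% efficiency), not numbers from the article:

```python
# Time to transmit one uncompressed frame over an assumed eDP link.
width, height, bpp = 2560, 1600, 24          # a high-res panel, 24 bits per pixel
frame_bits = width * height * bpp            # ~98.3 Mbit per frame
lanes, lane_rate, coding_eff = 4, 2.7e9, 0.8 # 4-lane HBR, 8b/10b coding
transfer_ms = frame_bits / (lanes * lane_rate * coding_eff) * 1e3
print(round(transfer_ms, 1))  # ~11.4 ms -- inside the 16.7 ms frame budget
```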

Maybe this is a stupid question, but I always thought that LCD pixels don't need refreshing (except when they need to change color, of course). And that's the reason why LCD panels don't flicker like CRTs and plasmas do.

So the only thing you need to do for PSR is not turn off the panel when you don't get new data?

If you've ever crashed an LCD smartphone or laptop hard enough, you've seen this in action: a ghost of the last image on the screen can be darkly imprinted (sometimes with horizontal lines through it) because the pixels were left in their last state.

I think the reason they don't flicker is that the pixels don't produce light; there's nothing TO flicker, unless the backlight itself starts flickering for some reason. I don't think it's because they don't need refreshing. As far as I know, all common LCD technologies require an internal refresh, which is probably why we need PSR instead of already having GPU drivers that know when to make the GPU wait. The system is the thing PSR is pausing, but only because the display was forcing it to keep working before.

PSR is part of the embedded DisplayPort (eDP) spec. eDP is only used with embedded displays because it requires a permanent physical connection. There's no reason this wouldn't work on a discrete GPU, but the panel must be built into the system, so computers using external wired displays can't take advantage of it.

You also have to remember this technology is aimed purely at power savings. It adds extra cost to a panel, so it will probably only be used where lower power consumption really matters, which for now most likely means systems that already have integrated graphics.

Laptops and AIOs (all-in-ones) that have discrete GPUs might get support in the future, but I can't see it being a top priority.

What's the technical reason to not have the GPU stop sending updates (like what it's doing with PSR) and have the LCD just not update the screen? Why have a memory buffer that holds the static image and still update the screen with it?

The internal refresh on an LCD is probably necessary to maintain the integrity of the pixel state. There are "zero-power" LCD technologies that haven't been able to get out of the lab for years. They don't need to be refreshed, but I think the speed at which they can refresh is still too low for prime time.

How fast do the refreshes need to be to maintain a pixel's state? Does PSR refresh the screen at a slower rate that's just enough not to lose the state? Kind of like DRAM, which is refreshed just often enough to keep the cells from losing their charge.

I don't think PSR is LCD-specific, so I doubt it. Then again, it may well be that LCDs require an internal refresh for compatibility purposes -- legacy software and paradigms. Lazy quick Googling hasn't shed much light on this. This looks like a job for anandtech.com!

I find it odd that PSR is being touted as something new. It's not new. Phones, at least Nokia's, have had self-refreshing panels forever. Both the MIPI DBI and MIPI DSI video buses support this, and some phones take it even further, updating only the changed portion of the display.

Would PSR make it possible to dynamically modify refresh rate to match frame rate when gaming? This would allow V-Sync to be enabled without limiting frame times to multiples of 1/60 of a second.

Variable-refresh-rate LCDs with a really high internal refresh rate (enabled by PSR) are one possible solution. It would work very well with OLEDs, as their pixel switching time is very low. Some TVs already do this in one way or another. This would make it possible to vsync to any refresh rate, as you suggest.
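
The benefit being discussed can be sketched in a few lines: with a fixed 60 Hz panel and vsync on, a frame that misses the 16.7 ms deadline waits for the next tick, while a variable-refresh panel scans it out as soon as it's ready. This is an illustrative model, not any particular vendor's implementation:

```python
VSYNC = 1000 / 60  # fixed 60 Hz tick, in milliseconds

def present_fixed(render_ms):
    """Presentation time with fixed-rate vsync: wait for the
    next 16.7 ms boundary after rendering finishes."""
    ticks = -(-render_ms // VSYNC)  # ceiling division
    return ticks * VSYNC

def present_variable(render_ms):
    """With a variable refresh rate, the panel refreshes the
    moment the frame is done."""
    return render_ms

for r in (10.0, 20.0, 25.0):
    print(r, round(present_fixed(r), 1), round(present_variable(r), 1))
```

Note the 20 ms frame: fixed vsync stretches it to 33.3 ms (an effective 30 fps), while variable refresh displays it at 20 ms.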

It just moves the framebuffer to memory on the display itself, but that is essentially a no-op, or it even adds complexity and power consumption instead of reducing it, in the form of one added framebuffer layer. The link from this new framebuffer to the actual pixels still does essentially the same thing the link from the old framebuffer to the pixels should have been doing. If it does it in a more power-efficient way, why not just optimize the old link?

To clarify, I am referring specifically to embedded devices, where "memory on the display itself" has absolutely nothing that can make it any more special than normal system memory, because the display and the rest of the system are literally millimeters from each other.

It's about the GPU, which requires more power just to keep the frame buffer alive on expensive high-bandwidth memory, when a lower-bandwidth dedicated frame buffer memory can do the same job with less power consumption. The new memory just sits between the high-performance frame buffer and the display. It's kind of a big.LITTLE implementation of the frame buffer, where you double the amount of hardware so you can optimize for two different usage profiles.

Anand, based on the comments (and my own questions), it might be helpful if you added a few words that better explain what this "26% reduction in power" means. Is this a 26% reduction in SoC power consumption? Or a 26% reduction in the GPU portion of the SoC's power consumption? Or something else entirely?

And what proportion of total power consumption, assuming a static screen, does the SoC/GPU/whatever account for on these thrifty mobile devices? If the display is using 85% of the power, then a 26% reduction in SoC power usage would be small (a 4% reduction in total power usage), but not insignificant. If we are talking about a 26% reduction in something that uses even less power, the influence would be smaller still, possibly to the point of being unmeasurable in a battery life test...
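
Working through the commenter's own hypothetical split (the 85%/26% figures are from the comment, not measured data):

```python
# Whole-device saving if PSR's 26% cut applies only to the
# non-display share of power. Figures are hypothetical.
display_frac = 0.85
soc_frac = 1.0 - display_frac        # 15% of total draw
soc_reduction = 0.26                 # PSR's claimed cut, applied to the SoC
total_saving = soc_frac * soc_reduction
print(f"{total_saving:.1%}")  # 3.9% -- the ~4% the comment arrives at
```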

If you are actively changing the display, but not all of it (say, browsing with an animation in a sidebar), you will likely benefit from the fact that your RAM isn't being tied up outputting a framebuffer all the time, although I doubt you'd notice it even on a phone. The last time I heard memory DMA mattered was programming an 8-bit ATARI 400/800 (although this is largely due to the fact that any PC designed remotely for performance has a video card).

On the other hand, it requires a separate buffer on the display side (think the latest Intel onboard graphics with its own memory bus). 1080p graphics is something like 8 MB of memory, which is an odd duck: too large for SRAM on a reasonably sized chip, too small for DRAM. I'd guess it will take a few SRAM chips, which will add heat and power issues.
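
The "something like 8 MB" figure checks out for a 1080p framebuffer at 32 bits per pixel:

```python
# Size of one uncompressed 1080p framebuffer at 32 bpp.
width, height, bytes_per_px = 1920, 1080, 4
size_mb = width * height * bytes_per_px / 2**20
print(round(size_mb, 1))  # ~7.9 MiB: big for on-die SRAM, small for a DRAM chip
```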

"The last time I heard memory DMA mattered was programming an 8-bit ATARI 400/800"

DMA is still extremely important for computing; it just mostly happens in the OS and drivers now. By reducing the load on main memory you are not only lowering power consumption but also freeing up system resources. Feeding a 2560x1600 panel at 60 Hz actually requires ~1 GB/sec of memory bandwidth continuously at 32 bpp.
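
That ~1 GB/sec figure is easy to verify:

```python
# Sustained scanout bandwidth for a 2560x1600 panel at 60 Hz, 32 bpp.
width, height, hz, bytes_per_px = 2560, 1600, 60, 4
bw_gb_s = width * height * hz * bytes_per_px / 1e9
print(round(bw_gb_s, 2))  # ~0.98 GB/s, just to keep a static image on screen
```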

You would want to use DMA if it's available, to avoid clogging the CPU with the memory transfer. This is almost universally done in graphics drivers whenever there's a need to upload a chunk of data to VRAM, etc.