I downloaded a little GPU Gadget that shows GPU load/memory/temp/fan speed. When scrolling from image to image in the develop tab and when making adjustments and local adjustments, the GPU load is basically zero (CPU usage is very high).

Seeing the same here.

Quote

I think the only difference in speed we're seeing between our systems is due partly to the terminology we're using to describe it, and partly to the higher resolution of my screen.

Also to clarify, the time I quoted to move from image to image in the develop tab was not the time it took for the image to display; that happens almost instantly. The time I mentioned was from when I jumped to a new image until the "Loading" indicator at the bottom disappeared (total time = press button -> new image displays -> "Loading" appears -> short wait -> "Loading" disappears). If we're talking about just moving from image to image without waiting for the "Loading" to disappear, I can move through at a little better than one per second.

Note that I'm on a Dell U2711 monitor while you are on a NEC 2690. While there's only an inch of size difference, there is a significant resolution difference. The NEC is a 1920x1200 monitor while the Dell is 2560x1440. The Dell is effectively at the resolution of a 30" screen (same width in pixels, just less height). I also have my develop tab set full screen so that only the right panel is showing and the image takes up almost all of the screen. That higher display resolution is a big factor in how fast Lightroom runs.

With regard to GPU usage and Lightroom, I was not aware that LR4 had off-loaded any significant rendering tasks to the GPU. I don't believe it uses the GPU at all for the develop tab or rendering. See this thread for commentary from Adobe employees (Eric Chan specifically; he's a member here too) saying that they don't have much going on for GPU usage in LR yet.

I think the only difference in speed we're seeing between our systems is due partly to the terminology we're using to describe it, and partly to the higher resolution of my screen. I don't think buying a different video card would change anything for this specific situation.

1. Okay, understood. I'm at 0.6-0.7 seconds until the "Loading" indicator disappears.

2. I understand the resolution differences, but I'm also rendering two monitors, so I'm thinking maybe the differences are a wash.. maybe not. I'd have to do more testing to know for sure.

3. This is one of the issues I think isn't well understood. You're right that LR is not directly tasking the video card in the sense of off-loading processing tasks; LR doesn't yet support this. But even without being tasked, a video card renders the screen in any program, and how fast it renders is attributable to the GPU processing power and, to some extent, the RAM on the video card. I've had multiple opportunities to try many video cards in my system and other systems, back to back with no other variables involved. There is a significant difference in how fast the screens render and how well/smoothly the sliders work. I've tried a lower-end 5450 card, and 5770, 5970, and 6870 cards in the same sitting, and the differences were significant and consistent.

4. Now that I think we're on the same page with the terminology, and the differences are still significant (0.7 vs 2.5 seconds), and the difference with slider controls remains, and because I've actually used different cards.. I'd have to disagree. Other than cache setup, the only thing I can see which accounts for the better performance of those specific functions on my less powerful system.. is my more powerful video card. I'm not trying to get you to buy a new one, but if you get the chance to try a more powerful card.. try it and let us know.

Other than cache setup, the only thing I can see which accounts for the better performance of those specific functions on my less powerful system.. is my more powerful video card.

So we can all keep up with what you're talking about here: How are your monitors configured? Is the system driving one big desktop across two screens? If so, what's the total resolution being driven by the system?

So we can all keep up with what you're talking about here: How are your monitors configured? Is the system driving one big desktop across two screens? If so, what's the total resolution being driven by the system?

Yes, a primary and an extended monitor for a total resolution of 3840x1200, i.e. 4,608,000 driven pixels. The Dell U2711 is 2560x1440, i.e. 3,686,400 driven pixels. Pretty much a wash.
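For anyone checking those figures, the arithmetic is trivial to verify (a throwaway sketch; the numbers are just the resolutions quoted above):

```python
# Driven-pixel counts for the two setups discussed above.
dual_setup = 3840 * 1200   # primary + extended monitor, side by side
dell_u2711 = 2560 * 1440   # single Dell U2711

print(f"dual setup: {dual_setup:,} pixels")      # 4,608,000
print(f"Dell U2711: {dell_u2711:,} pixels")      # 3,686,400
print(f"ratio: {dual_setup / dell_u2711:.2f}x")  # 1.25x
```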

It seems like many are under the impression that unless a video card's GPU is being specifically tasked, all cards are equal, but they aren't. All video cards render screens regardless of whether they're tasked, and the power of your card determines how fast that rendering happens. There's no doubt the real power is in GPU tasking, but when you're rendering a lot of real estate, the raw rendering power of the card becomes significant. You can see this even in your browser or office program, though to a much smaller degree, in how fast screens render. Most don't notice the difference until you point it out and give them an A/B comparison, but once they see the comparison they can see the difference. When you're filling the screen with image mapping vs. text, the difference is much greater. I wish I could explain this better.. next time I test different cards I'll do a Captivate video and share it.

It seems like many are under the impression that unless a video card's GPU is being specifically tasked, all cards are equal, but they aren't.

I've always understood that.

One complication here may be the utilities people try to monitor their graphics cards with. I would never claim to fully understand how they work, but it's possible that they're just reporting on certain functions of the graphics card subsystem, e.g. OpenGL function use, so the 'normal' processing that drives ordinary screen redraws isn't reported.

However, I think laying the blame for poor performance in the develop module just on graphics card speed is missing the point. Process 2012 flies in ACR in CS6 but is dog slow in LR4; Adobe shouldn't rely on people running the most expensive hardware to make their software run smoothly.

1. "Load time" is first a function of disk I/O performance, not so much RAM. This is the part where you need to match your work flow to the bottleneck to the hardware. If your work flow has you loading one image every so often then even an SSD won't benefit you that much even though benchmarking software shows it's loading that image 20x faster (or whatever it may be). But, if you're loading 20-30 images at a time.. now you'll see a big performance benefit. And there will be a point where as you load those 20-30 images where you'll saturate your RAM and now RAM makes a difference.

How so?

Why don't you demonstrate, with a video on youtube, how using a fast disk can make loading 20-30 images faster than using a slow disk?

Or provide some timing results and document what you do so that others can do the same test as yourself to verify?

Quote

2. My point was that if you ARE running more than Lightroom as most tend to do, then more RAM most definitely will increase performance.

Really?

Will increasing my RAM from 12GB to 16GB make a big difference? What about going from 16GB to 32GB?

Quote

3. I think it can for very specific work flows.

Can you provide some specifics on those work flows that benefit from huge amounts of RAM or faster disk?

Quote

4. No, that wasn't my counterpoint at all. And me building a web page on my work flow, or a specific work flow, won't help others understand their own work flow..

But it will help us understand how you work and how you benefit from faster disks and more RAM. Don't you think that this is useful information?

Quote

My counterpoint was partly yes, the SSD is demonstrably faster so I/O functions will be faster.

Note that it has been previously established that for large images, CPU time far outweighs I/O time.

Quote

With a work flow which loads one image at a time this will hardly be noticeable, but with a work flow that loads/saves multiple images at a time it will indeed be noticeable.

To the best of my knowledge, LR doesn't work on multiple images at a time - that is, unless you've got multiple tasks running, which I've found to be a bad idea on PCs in general.

So when you decide to do X with 20 or 30 images, they each get processed in serial. That means that LR does what it needs to do with one image, throws away the temporary data and starts on a new one.

Quote

Or if you're building a catalog/previews from an SSD, and I pointed out that perhaps the biggest gain is in the catalog.. doing searches and moving through libraries is light years faster than working off a normal mechanical drive.

So let me get this straight, you're advocating using SSD because it is faster but you're not even sure where the biggest gain is?

Have you actually timed any of your work flows to show that SSD is quicker when used for part X?

Quote

Not everyone will derive the same benefit from a specific work flow which is why I found that article flawed.

Thus far, you haven't provided any solid information, never mind anything that is any less flawed than that article.

Quote

It tested some, but not nearly all the functions where increased I/O performance is beneficial. The functions they checked would definitely show a performance increase in the I/O area, but if you're only using that function for a fraction of your work flow then it will appear it's not doing much at all.

Can you go into more details, please?

I'd very much like you to benchmark some actual work flows and show how and where benefit from SSD can be had.

Given that you know what to do, this shouldn't be very hard for you to demonstrate.

After all, people are adaptable, and if there are better and more efficient ways for humans to work, I'm sure they will adapt (or try to!)

I just opened a 1.24 GB B&W pano in LR4. It takes maybe a minute to render the preview, but in the develop mode I'm seeing delays of a couple of seconds or less. With a raw file from my D7000 in develop mode, in the basic panel, adjustments are happening almost instantly. I don't get it; why should I be getting this level of performance on such an old PC? Heck, it's faster than ACR was in CS5 if I ran it in 32-bit mode. Boy am I glad I decided to get a new NEC PA271W monitor this year instead of upgrading my PC.

I love this, if only because what some other folks are saying here suggests that what you're experiencing shouldn't be possible!

...3. This is one of the issues I think isn't well understood. You're right that LR is not directly tasking the video card in the sense of off-loading processing tasks; LR doesn't yet support this. But even without being tasked, a video card renders the screen in any program, and how fast it renders is attributable to the GPU processing power and, to some extent, the RAM on the video card. I've had multiple opportunities to try many video cards in my system and other systems, back to back with no other variables involved. There is a significant difference in how fast the screens render and how well/smoothly the sliders work. I've tried a lower-end 5450 card, and 5770, 5970, and 6870 cards in the same sitting, and the differences were significant and consistent....

Here (when you move scroll bars) you're directly looking at the speed of the memory bus on the video card and the GPU speed as they're responsible for all of the blitter operations once you're using the card's driver. So what you are reporting makes complete sense.

But the same speed difference will be noticed in all applications, not just LR: a complex web page, a folder with 1000s of images (or at least several times more than the number of thumbnails you can see at once), and so forth.

One complication here may be the utilities people try to monitor their graphics cards with. I would never claim to fully understand how they work, but it's possible that they're just reporting on certain functions of the graphics card subsystem, e.g. OpenGL function use, so the 'normal' processing that drives ordinary screen redraws isn't reported.

However, I think laying the blame for poor performance in the develop module just on graphics card speed is missing the point. Process 2012 flies in ACR in CS6 but is dog slow in LR4; Adobe shouldn't rely on people running the most expensive hardware to make their software run smoothly.

1. Agreed, there is a lot to this stuff happening under the hood, more variables than most of us care to understand for sure.

2. I'm not sure I'm trying to blame the video card; rather, I'm trying to offer a way to improve performance, because hardware is something that's within our control while much else isn't. Hardware is something we can change; coding and fixing bugs we'll have to leave up to Adobe.

3. Adobe does rely on people running certain hardware in CS4/5/6 with the tasking of the GPU. And I'll go back to a previous post I made in this thread: when you have a software application that, by its very nature, draws so heavily on all aspects of the hardware (gaming, CAD/CAM, video, imaging), then it's entirely reasonable to expect different levels of performance with different levels of hardware. We all understand we can't run our favorite first-person shooter at the maximum resolution, detail level, or frame rate on a 5-year-old mid-level laptop.. yet we can still play that game if the machine meets the minimum system requirements. If you go out and get a custom builder to build you a gaming rig, then you can run that game at max resolution, detail level, and faster frame rates. We know this and accept this: gaming requires heavy hardware for the best performance and experience.. but we want imaging, which uses the computer in much the same way, to somehow be immune to hardware requirements.

Why don't you demonstrate, with a video on youtube, how using a fast disk can make loading 20-30 images faster than using a slow disk?

No thank you. What you're requesting is very time consuming, and I have other work that pays which I'd rather be doing. Besides, I'd just get people who would pick apart anything you have to offer while at the same time demonstrating deficiencies in basic math skills, namely addition. If you can't add 1+1+1+1 20-30 times, then my doing a video isn't going to help you. You run across this in forums, and it's why many choose not to share their experiences. I can detail the benefits of my experiences and I've done so, but it's just not my job to provide the elementary education necessary to understand it.

Here (when you move scroll bars) you're directly looking at the speed of the memory bus on the video card and the GPU speed as they're responsible for all of the blitter operations once you're using the card's driver. So what you are reporting makes complete sense.

But the same speed difference will be noticed in all applications, not just LR: a complex web page, a folder with 1000s of images (or at least several times more than the number of thumbnails you can see at once), and so forth.

1. I'm gratified you understood this part.

2. Of course it does, and I've said as much several times in this thread. The only way LR could be any different would be if Adobe tasked the GPU as it does in CS4/5/6. I think we're fortunate to see more and more programs, from imaging applications to disk utilities to virus scanners, off-loading processing tasks to available GPUs since the standardization of the more common protocols.

I thought this quote from Eric Chan in Charles Cramer's (rather useful) article, Tonal Adjustments in the Age of Lightroom 4, on LL's front page may explain why LR4 is causing some performance issues.

" I recently asked Eric Chan of Adobe, one of the people behind the new Highlights, Shadows and Clarity sliders, how they achieved these remarkable results. Eric said they had been exploring various algorithms that used edges to modify various tones, but they were incredibly slow. Initially, these algorithms took about one minute of computation for every megapixel in a file! Twenty megapixel file = twenty minutes. After months of work, they had sped this up to around 8 seconds for a typical file, but that was still too slow. After more months of work, they got it working in almost real time. I find this new slider incredibly useful."

I think Eric's statement, and the obvious increase in sophistication of the develop tools, suggest that Adobe, rather than being incompetent or lazy, are the opposite: they are supplying us with the newest technology they have.

on LL's front page may explain why LR4 is causing some performance issues ... I think Eric's statement, and the obvious increase in sophistication of the develop tools, suggest that Adobe, rather than being incompetent or lazy, are the opposite: they are supplying us with the newest technology they have.

Nice quote, but it misses an important point: it's not the 'Process 2012' aspect itself that's the problem, demanding as it would seem, as it works fine and smoothly in ACR. The Lightroom interface for Process 2012 is the problem.

No thank you. What you're requesting is very time consuming, and I have other work that pays which I'd rather be doing. Besides, I'd just get people who would pick apart anything you have to offer while at the same time demonstrating deficiencies in basic math skills, namely addition. If you can't add 1+1+1+1 20-30 times, then my doing a video isn't going to help you. You run across this in forums, and it's why many choose not to share their experiences. I can detail the benefits of my experiences and I've done so, but it's just not my job to provide the elementary education necessary to understand it.

And yet you're quite willing to engage in the very same behaviour that you don't want to have to put up with. Dare I say: pot, kettle, black?

I have a dual Opteron system that I built back in 2006, running Windows 7 64-bit with 6 GB of RAM. LR4 runs plenty fast for me. Yes, it could be faster, but I don't notice much of a difference from LR3. YMMV: your mileage obviously may vary.

Yes, a primary and an extended monitor for a total resolution of 3840x1200, i.e. 4,608,000 driven pixels. The Dell U2711 is 2560x1440, i.e. 3,686,400 driven pixels. Pretty much a wash.

It seems like many are under the impression that unless a video card's GPU is being specifically tasked, all cards are equal, but they aren't. All video cards render screens regardless of whether they're tasked, and the power of your card determines how fast that rendering happens. There's no doubt the real power is in GPU tasking, but when you're rendering a lot of real estate, the raw rendering power of the card becomes significant. You can see this even in your browser or office program, though to a much smaller degree, in how fast screens render. Most don't notice the difference until you point it out and give them an A/B comparison, but once they see the comparison they can see the difference. When you're filling the screen with image mapping vs. text, the difference is much greater. I wish I could explain this better.. next time I test different cards I'll do a Captivate video and share it.

I don't know if the "total real estate" calculation is the best way to look at it. After all, we know that the CPU is involved in rendering the image to be displayed in the Develop tab, and it doesn't have to render the image twice just because it's being simultaneously displayed on both screens. Rendering a larger image once is likely going to take longer than rendering a smaller image and then displaying it twice on multiple screens.

Anyhow, since I'm trying to get my computer build dialed in to my satisfaction before sitting tight and using it for the next 3-4 years, I wanted to explore this graphics card issue further. I found a good deal on an AMD Radeon HD 6950 video card, which is a huge step up in GPU performance from the AMD FirePro V4900. It will arrive later this week and I'll get a chance to try the two cards back to back. Then I'll know if the GPU really makes much difference for LR4. If it does, then I'll have to decide if I want the better performance or if I want to be able to use the 10-bit display capabilities of the FirePro + U2711 combo in Photoshop.

I don't know if the "total real estate" calculation is the best way to look at it. After all, we know that the CPU is involved in rendering the image to be displayed in the Develop tab, and it doesn't have to render the image twice just because it's being simultaneously displayed on both screens. Rendering a larger image once is likely going to take longer than rendering a smaller image and then displaying it twice on multiple screens.

Anyhow, since I'm trying to get my computer build dialed in to my satisfaction before sitting tight and using it for the next 3-4 years, I wanted to explore this graphics card issue further. I found a good deal on an AMD Radeon HD 6950 video card, which is a huge step up in GPU performance from the AMD FirePro V4900. It will arrive later this week and I'll get a chance to try the two cards back to back. Then I'll know if the GPU really makes much difference for LR4. If it does, then I'll have to decide if I want the better performance or if I want to be able to use the 10-bit display capabilities of the FirePro + U2711 combo in Photoshop.

Will post here once I've had a chance to try out the new card.

Thanks. The weakest link in my current system is the video card. I have not given it much weight in thinking about my LR4 performance issue, but perhaps it is a bigger deal than I think, and it's an easily upgradable component. I'll be very interested in your comparison.

I too would be interested to know your findings. I'm sure it is more than just the GPU: the bus width, the memory speed, and so forth...

Sadly, that new 6950 card will be using 200 watts vs. your 75-watt FirePro V4900. It also does 160 GB/sec vs. 64 GB/sec of memory bandwidth. So it should be no surprise, but how measurable the difference will be is what's most interesting. Time to download a desktop stopwatch :-)