1. I use over 2GB of RAM in LR all the time. When? When building previews, outputting files, when using more than 3-4 localized editing points, when syncing files or applying Develop settings to multiple files, when running searches in my catalog based on keywords or EXIF, building a reasonable size web gallery, and more. You have to know your work flow and when it can benefit from the RAM; not everyone has the same work flow, and not everyone will realize the same benefits. Also keep in mind that Win7's memory manager might not let LR use more RAM if it doesn't think you have enough in other areas. In other words, to see the RAM being used you need to be on a system which can take advantage of the additional RAM.

Right. So my computer has 12GB of RAM and most of the time it uses less than a third of it. And that's with 1GB given to a VMware VM. Your comment about "know your workflow" is right, but similarly, just adding more RAM is not going to make Lightroom faster for everyone. If an image is taking a long time to load in Develop, then going from 3GB of RAM to 12GB is quite likely overkill. Going from 3GB to 4, 6 or at most 8 would likely be sufficient if there was a large amount of paging activity.

Quote

And let's remember most people have more than LR going at one time. I personally have FF with 8-9 tabs open, Outlook, CS5 Photoshop and Dreamweaver, Lightroom, various plug-ins, and more going. Most of us have something other than LR open because it's a reasonable way to use a computer. RAM benefits all of this.

If you're running more than just LR, sure, more RAM can benefit all of that. But just adding more RAM won't make LR itself faster.

Quote

3. A system is a system. More RAM being used as a cache or pagefile or RAMdisk or any way that benefits the system, benefits LR.

I'd be interested to know if there are ways in which a RAM disk can be used to speed up LR as I believe that LR assumes everything that it writes to disk is persistent.

Quote

4. I provided a counterpoint in the relevant thread. I took the time to point out areas the reviewer's test failed to address and what part of the LR work flow benefits from faster drives. I know you participated in this thread, but if you weren't convinced then I don't have anything new to add other than to suggest you get your hands on some fast SSDs and try it yourself.

Your counterpoint was that benchmarking software used with an SSD says that SSDs are better whereas the article you are critiquing is saying that when LR is used (and not benchmarking software), there is little benefit.

Strange as it may seem, I'm not interested in using benchmarking software to work on digital photographs; I'm interested in using Lightroom. Thus how fast an SSD might benchmark in a given test is far less important than the difference it makes to Lightroom's performance.

Since you believe that SSD is faster, why don't you put together a web page or two where you document a specific configuration and post a set of time trials for specific tasks that include Lightroom to show this?

That is, I'm not interested in hearing about why SSD should be theoretically faster; I want to see you document and demonstrate, with real tasks in real applications (not benchmarking software), the ways in which it is. Someone showing me how long a particular task takes in Lightroom with and without an SSD is far more valuable to me than comparing read/write benchmarks of spinning disks vs SSD.

For me, my own personal experience suggests that an SSD is not going to make a huge difference to the "Loading..." time, because the time LR spends working behind that message far exceeds the time it takes to read the file into memory.

I could be persuaded that using an SSD shows a benefit if (say) I'm updating the metadata of 10,000 images and they're all on SSD vs none on SSD. But if you want to make that argument, then I'd ask that you make it properly (using real workload testing) rather than just hand waving.
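
For anyone who wants to attempt that kind of real-workload comparison, here is a minimal timing-harness sketch. It is not tied to Lightroom at all: it just times an identical read-and-rewrite pass (a rough stand-in for a bulk metadata update) over two storage locations. The temp directories, file names, and sizes below are placeholders of my own; point the paths at an SSD volume and a spinning-disk volume to get a meaningful comparison.

```python
# Sketch of a "real workload" timing harness: run the same bulk file-update
# pass against two storage locations and compare elapsed times. The temp
# directories below are placeholders -- substitute real mount points
# (e.g. an SSD path and an HDD path) for an actual comparison.
import os
import tempfile
import time

def make_test_files(root, count=200, size=64 * 1024):
    """Create `count` files of `size` random bytes under `root`."""
    paths = []
    for i in range(count):
        p = os.path.join(root, f"img_{i:04d}.xmp")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        paths.append(p)
    return paths

def timed_metadata_pass(paths):
    """Read and rewrite every file (like a bulk metadata update); return seconds."""
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            data = f.read()
        with open(p, "wb") as f:
            f.write(data)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as drive_a, tempfile.TemporaryDirectory() as drive_b:
        a_files = make_test_files(drive_a)
        b_files = make_test_files(drive_b)
        print(f"location A: {timed_metadata_pass(a_files):.3f}s")
        print(f"location B: {timed_metadata_pass(b_files):.3f}s")
```

Repeating each pass a few times and taking the median would make the numbers less noisy, since the OS file cache will flatter the second run.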

Quote

5. I'm not sure I understand the relevance of your comment about games. We were addressing LR. Please expand on this a bit if you wouldn't mind.

You were bemoaning the lack of information supplied by software houses (such as Adobe) about the real requirements to run their software. Software houses that author games have been addressing that (or at least did) with information on the outside of the box stating which CPU/RAM/GPU configurations deliver which level of performance (low/medium/high).

...- 1:1 batch previews: the first ones take a couple of SECONDS (very nice) but as it goes on it takes forever! Right now I'm still generating previews for a task I launched 16 hours ago (2000 images)! Something's going on here; each preview takes longer than the previous one. I must clarify that my processor is only at 15% and RAM keeps creeping up. Right now Lightroom is using 8GB of 16GB. All other apps run quite fast....

All else aside, I think that this demonstrates that there are a few very significant bugs in Lightroom's internal routines. If an application leaks memory (or uses it badly) then adding more does not help it run better.

1. "Load time" is first a function of disk I/O performance, not so much RAM. This is the part where you need to match your work flow to the bottleneck to the hardware. If your work flow has you loading one image every so often then even an SSD won't benefit you that much even though benchmarking software shows it's loading that image 20x faster (or whatever it may be). But, if you're loading 20-30 images at a time.. now you'll see a big performance benefit. And there will be a point where as you load those 20-30 images where you'll saturate your RAM and now RAM makes a difference.

2. My point was that if you ARE running more than Lightroom, as most tend to do, then more RAM most definitely will increase performance. Again, we need to take a system approach. It's not enough to assume everyone is running only LR; you need to examine your work flow and other tasking and evaluate from there. Most are using plug-ins, CS5, a browser or email in the background, watching news on another monitor; it varies greatly..

3. I think it can for very specific work flows.

4. No, that wasn't my counterpoint at all. And me building a web page on my work flow, or a specific work flow, won't help others understand their own work flow.. unless they happen to be the same.

My counterpoint was partly yes, the SSD is demonstrably faster, so I/O functions will be faster. With a work flow which loads one image at a time this will hardly be noticeable, but with a work flow that loads/saves multiple images at a time it will indeed be noticeable. The same goes if you're building a catalog/previews from an SSD, and I pointed out that perhaps the biggest gain is in the catalog: doing searches and moving through libraries is light years faster than working off a normal mechanical drive. I could sit and pick out each function in LR where an SSD will be faster, but this is where I said in that thread we need to look at our individual work flows and see where we can derive benefit.

Not everyone will derive the same benefit from a specific work flow, which is why I found that article flawed. It tested some, but not nearly all, of the functions where increased I/O performance is beneficial. The functions they checked would definitely show a performance increase in the I/O area, but if you're only using those functions for a fraction of your work flow then it will appear the SSD isn't doing much at all.

I can't stress enough how an individual's work flow needs to be evaluated and understood to know where hardware gains will be the most effective. There are some areas where "in general" we can say most people will benefit from the same hardware: a 4-core i7, 12GB of RAM (8GB would fit many, but with prices so cheap, whatever you get the best deal on makes the most sense, 8-12-16..), a fast GPU (the more monitors and the higher the resolution, the more this benefits), and fast drives. These things 'in general' benefit most users. From there it's highly individualized to specific work flows.
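
The earlier point about loading one image at a time versus 20-30 at once can be illustrated outside Lightroom. The sketch below is an assumption of mine, not anything Lightroom actually does internally: it times the same set of file reads issued sequentially versus from a small thread pool. A drive that services concurrent requests well (typically an SSD) tends to show a bigger gap between the two numbers.

```python
# Illustration of sequential vs. concurrent file loading. A drive that can
# overlap requests (e.g. an SSD) benefits more from the concurrent path.
# File names, counts, and sizes are illustrative placeholders.
import concurrent.futures
import os
import tempfile
import time

def read_file(path):
    """Read one file fully; return its size in bytes."""
    with open(path, "rb") as f:
        return len(f.read())

def sequential_load(paths):
    """Read files one after another; return (total_bytes, seconds)."""
    t0 = time.perf_counter()
    total = sum(read_file(p) for p in paths)
    return total, time.perf_counter() - t0

def concurrent_load(paths, workers=8):
    """Read files from a thread pool; return (total_bytes, seconds)."""
    t0 = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(read_file, paths))
    return total, time.perf_counter() - t0

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        paths = []
        for i in range(30):
            p = os.path.join(d, f"frame_{i:02d}.raw")
            with open(p, "wb") as f:
                f.write(os.urandom(256 * 1024))
            paths.append(p)
        _, t_seq = sequential_load(paths)
        _, t_con = concurrent_load(paths)
        print(f"sequential: {t_seq:.3f}s  concurrent: {t_con:.3f}s")
```

On a temp directory (likely cached in RAM) the two times will be close; the interesting comparison is against real files on real drives, cold cache.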

Your comments about workflow making a big difference are right on point. Actually, I think of workflow as only part of the environment that needs to be considered, an environment that is much larger than just the hardware or Lightroom.

Is there a virus scanner running in the background? Some of those can be configured to be really paranoid and scan every file, executable or not, on access.

In Lightroom, is the write-to-XMP stuff turned on? (Sorry, not on a computer with LR, so I don't have the exact terminology.) From what I've read, if it is, every time you fiddle with a Develop module control you make changes to the XMP file in addition to the changes made in the catalog. Maybe it makes sense to have that on...or maybe it makes sense to sync metadata when you're done...but those are workflow configuration decisions that might have a significant impact.

When you import, are you creating 1:1 previews? That can really hit performance.

These days there are many utilities running that get auto-started and don't show up as a "program" on the task bar. It's amazing what stuff is running on my PC that I never "started": little things like Java updaters, Adobe updaters (so I can keep getting Flash fixes), Open Office startup stuff, indexing programs. And each one of those can steal a little performance. But each one needs to be considered as part of the environment.

Just saying Lightroom is slow really isn't enough information to judge by or give any help with. And including hardware configurations is only part of what is necessary.

What it sounds like you're leading to is that Lightroom 4 needs a mid to top of the line, separate, dedicated computer? I agree with the technical observations you mention, such as the 1:1 previews, background apps, etc.

Tom,

I honestly appreciate what you've said, and I agree. BUT, there are two obvious elephants in the room, IMO.

1. 3.6 ran lickety-split for most of us on these same machines, with the same inefficiencies you discuss so well - and 4 does not. And not by "a little bit," but by a whole bunch, making it borderline (and that's being kind) unusable.

2. I suspect that the vast majority of Adobe's customer base is like me. I'm fairly computer literate in terms of being a user, but I'm far from a technical computer person and don't think I should be required to become one to extract acceptable performance from a piece of end-user software, the "specs" of which my current machine greatly exceeds.

That is the crux of Adobe's problem w/ LR 4 and while it is interesting, and helpful to some, to discuss tweaks and tech points on how we might get better performance from Lightroom - I find that a little like being told I need to learn to re-wire the flux capacitors on my toaster in order to have toast.

I love what LR4 can do, and respect the Adobe team very much for the incredible advances achieved. But I don't want to be required to learn to re-wire my flux capacitors and warp-drive to be able to just "use it."

I don't actually have a performance problem with Lr 4, but I fully agree with what you're saying here. Some of those "tweaks" are the kind of thing you do when you're running benchmarking contests for higher scores, but you shouldn't have to do those kinds of things when running software on a suitable system. A lot of high end systems are experiencing issues, even higher-end than the one I have....something seems amiss to me. Someone mentioned that maybe some catalogs aren't getting converted correctly. That idea has merit in my mind, since it would fit with the varying range of systems that seem to be having problems.

I think it's a mistake to think that all reported performance issues stem from the same causes, and that means you won't necessarily have the same answer.

Yep, they are tweaks. It may help some. And it doesn't hurt to look, does it?

I agree, you shouldn't have to know about inodes (sort of sounds in the same category as "flux capacitor," doesn't it) or RAIDZ or other arcana to know how to use a computer, but there are some things you do need to know. By the same token, I shouldn't have to know how to read an MTF graph or worry about Nyquist limits to take a photograph. However, I'd better know about f/stops and shutter speeds and focal length if I want to take advantage of anything other than a point-and-shoot camera.

Bad catalog conversion? Interesting thought. Why not create a new catalog, go wander off to where your photos are stored, and import a batch of them into the new catalog? Admittedly, any changes you make to the new catalog won't be copied to your real catalog, so don't do anything that you want to keep. Also, if you are writing sidecar files, you may end up with a "metadata sync" issue because the XMP files may have changes that won't be incorporated into your real catalog, but you can at least do a little playing around to see if it is still slow. And I'd only work on RAWs.

You could also copy a few directories of photos to another junk location to be your source for the "new" catalog import, then you wouldn't even have to worry about metadata changes.

Is that a tweak? I don't know. Seems to me that it wouldn't take a huge amount of time to do, provided you limit how many photos you import, and would certainly answer if you had catalog upgrading issues.

Tom

Just for interest, I have not yet imported an older catalog but started fresh and am using LR4RC2 with fresh from the camera RAW to DNG on import, files. Probably a total of no more than 500 files from an a900 and a Fuji X-100. So, in my instance it has zero to do w/ catalog conversion or any such.

Rand

Then I think we can rule out catalog conversion corruption in your case. From what was said earlier, one suspected cause WAS conversion issues.

That's what I meant about having multiple causes. Sneezing can be caused by a cold or allergies or getting too heavy with the pepper mill. Darned if I'm going to go home and lie down, Nyquil in hand, if it's the pepper.

Tom

BTW, I finally got to a computer with LR. The section is Edit->Catalog Settings...->Metadata. There are two settings: "Include Develop settings in metadata inside JPEG, TIFF and PSD files" and "Automatically write changes into XMP." Not sure about the first one, but from what I've read, twiddling the Exposure, Contrast, etc. sliders in the Develop module (plus probably a lot of other things that affect metadata) causes the program to update the XMP file. I don't know if that's an issue or not; I wouldn't think so, since those files are really lightweight, but you never know.

A simple thing you should check is swap/virtual memory. Do your normal routine in Lightroom and check if you have heavy swapping. If you do, then more memory should help. 2GB is a very small amount for a modern computer, and upgrading to 4GB should be cheap and well worth the money. Lightroom 4 is a newer application, so it's going to tax your computer more than version 2 or 3. The same can be said of many applications and operating systems. This doesn't mean you need to buy a new PC; it just means you may need to upgrade a few pieces of hardware to get faster results.
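
One stdlib-only way to check for heavy swapping on Linux is to read /proc/meminfo before and after your normal Lightroom routine and compare SwapTotal against SwapFree (Windows users can watch the commit-charge figures in Task Manager instead). A minimal parser sketch, assuming the Linux /proc format:

```python
# Minimal check for swap pressure on Linux by parsing /proc/meminfo.
# Run it before and after a Lightroom session; a large jump in swap-in-use
# suggests more RAM would help.
import os

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            parts = rest.split()
            if parts and parts[0].isdigit():
                values[key.strip()] = int(parts[0])
    return values

def swap_used_kb(meminfo):
    """Swap currently in use, in kB."""
    return meminfo.get("SwapTotal", 0) - meminfo.get("SwapFree", 0)

if __name__ == "__main__":
    if os.path.exists("/proc/meminfo"):
        with open("/proc/meminfo") as f:
            info = parse_meminfo(f.read())
        print(f"swap in use: {swap_used_kb(info)} kB")
    else:
        print("no /proc/meminfo on this platform")
```

A steadily growing swap-in-use figure during normal editing is the "heavy swapping" signal described above.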

Btw, I personally wouldn't run any version of Lightroom unless I had a bare minimum of 8 gigs. I do however work with 21, 40 and sometimes 100+ megapixel files. My MacBook Pro has 12 gigs and my Mac Pro has 48. Before I had my Pentax 645D I was working with 10 and 21MP files. I was running fine with 8 gigs of RAM until I started using the 645D and scanning 6x7 negatives.

I just opened a 1.24 GB B&W pano in Lr 4. It takes maybe a minute to render the preview, but in the Develop module I'm seeing delays of about a couple of seconds or less. With a raw file from my D7000 in the Develop module, in the Basic panel, adjustments are happening almost instantly. I don't get it: why should I be getting this level of performance on such an old PC? Heck, it's faster than ACR was in CS5 if I ran it in 32-bit mode. Boy am I glad I decided to get a new NEC PA271W monitor this year instead of upgrading my PC.

Your comments about workflow making a big difference are right on point. Actually, I think of workflow as only part of the environment that needs to be considered, an environment that is much larger than just the hardware or Lightroom.

Tom

All good points.

When someone calls and wants to configure a system I'll spend time asking questions like what virus scanner, software, model number of peripherals they'll be connecting, what they normally have open while working, file sizes, network connections, backup programs, etc, etc.. What I try to do is get a 360 degree view of their personal work flow, not just a single program but all of it. And then I try to get a feel for how fast they work. Once I have all that.. then I ask how they'd like to work if the computer wasn't getting in their way and this is often the most difficult. It's hard to conceptualize that which you've never experienced.

Finally, based on their information and my experience, we design a system around their needs. This is what they're paying for: your experience. And it's why going to Best Buy, or asking the guy answering the phone at Dell, or even a gaming shop, isn't the most effective way to spend your money. If you're building an imaging workstation, then someone with day-to-day experience using an imaging workstation (and who builds computers) is the best choice.

2. I suspect that the vast majority of Adobe's customer base is like me. I'm fairly computer literate in terms of being a user, but I'm far from a technical computer person and don't think I should be required to become one to extract acceptable performance from a piece of end-user software, the "specs" of which my current machine greatly exceeds.

Your point is well taken.

I think if Adobe could make it more simple they would; they're getting a lot of grief over this stuff. But we need to remember that processing images, and especially videos, is a very hardware and system intensive task, one that requires the most important areas of a computer (CPU, RAM, GPU, data I/O) to work at their maximum and, perhaps most importantly, to work together. Keep in mind that the hardware drivers are all in play as well; a bad driver can slow LR down to a crawl. In other words, there is much more going on when using LR to process images than in most any other program we use. A close relative would be running powerful games.

We all know that if you want your games to run at maximum FPS at the highest detail level and resolution, you need a powerful computer. You can run the games at their minimum requirements on a 5-6 year old laptop, but at a much lower FPS, resolution, and detail level. LR is exactly the same in this way. It requires power to operate at its best, and it requires the system to be in "tune." LR can work at a low hardware level, but we can't expect it to work its fastest or best. If we want the best performance we need the hardware and system setup to allow this performance level.

Gamers, those who use CAD/CAM workstations, video workstations, they're accustomed to having someone design/build their systems around their software. Imaging is right there, the requirements are the same. Sure, people game and use CAD/CAM on off the shelf systems.. but they don't expect top performance. So, if you really want the top performance for your imaging workstation then it becomes reasonable to expect to either design and build an adequate system, or have someone do it for you.

A common question is: "Why did 3.6 work so well and 4.x not so well?" When you realize they're adding more functional capabilities, you almost have to expect the hardware requirements will rise. However, rising requirements alone don't mean that something is amiss with their coding. If they got in a hurry and didn't refine their code or even their design, OR if there are bugs in the code, then things can slow down more than the progression of function vs. hardware you'd expect. I suspect this is what's happening now.

Yet, on the other hand, I suspect the "program of the future", the program with all the functions/features we really want, isn't being offered because the hardware requirements are too stiff and too many people are still on older systems. What would be great for someone with the latest system isn't good for the masses running 3-4 year old computers. It's like web page design: I used to run an 800-pixel-wide page on my site because that's what the majority of users were using. Today I run an 1100-pixel-wide page. If my site were geared toward only professional photographers, I'd consider a 1400-pixel-wide page, because that subset of web visitors would have monitors which support that resolution.

So.. are those with older machines holding Adobe back, or at least slowing them down, from putting out better and more interesting software? I think so. But because of the nature of image processing (hardware/system requirements) and because they want their product to be the professionals choice.. they're probably pushing the hardware requirements a bit stiffer than they otherwise would.

I'm okay with this because I naturally use a more powerful system, and I fear if hardware requirements become too much of an issue then Adobe will hold back on features we want until more of their users upgrade for other reasons.

There's a lot that goes into the decision making process with such software. Our part as end users is to do our best to understand what performance we can realistically expect for a given level of hardware. This is why I really wish Adobe would publish ongoing build-vs.-benchmark comparisons for their most 'hungry' programs.

My experience with LR4 is that it certainly seems to be very demanding of a system. I upgraded my computer at the same time I jumped to LR4, so I came into LR4 with a significant bump in performance.

Currently running LR4 RC1 with a catalog of about 15k images, a rollover conversion from an LR3 catalog. Computer is an i7 3770k overclocked to 4.4GHz, 16 GB RAM, SSD for the OS/programs, a second SSD for LR catalogs and the ACR cache, an AMD FirePro 4900 video card (for 10-bit Photoshop) on a Dell U2711 primary screen, a Radeon HD 4850 video card for a Dell 2408WFP secondary screen, and WD Caviar Black 1TB (x2) and 2TB drives for storage. Working files for my primary RAW folder sit on the fastest partition of the 2TB drive. While I'm sure there are machines out there that are faster, this is pretty close to the top of the food chain right now.

Opening LR and rolling through grid view and loupe view is very fast. In a folder of 1500+ 1Ds III and Aptus 22 RAW files, there is no lag at all moving from image to image in Loupe view at full screen (previews already rendered). It will move from image to image as fast as I can press the arrow keys. Even full screen 27" viewing at 1:1 is pretty quick to jump from image to image, I can scroll through maybe 2-3 per second. This is a noticeable improvement from my old machine which had the LR catalog on a regular drive instead of on a SSD. Also, scrolling up and down through a full screen page of thumbnails in grid view is much faster. Smooth scrolling up and down with almost no lag.

Moving from image to image (1Ds III RAWs) in the Develop tab takes just under 2.5 seconds for the "loading" indicator to disappear. If I cycle through the same images again (ACR cache on the SSD already loaded for that image) it is ever so slightly faster, maybe a couple tenths of a second. It takes roughly the same amount of time to generate 1:1 previews on import for each image.

Making adjustments in the Develop tab using the sliders is not perfect. It's certainly usable with no major complaints, but it's not as much of an improvement over LR 3.6 as the change in hardware would cause me to expect. There is a bit of stuttering/redraw as you drag the exposure slider left and right. It will flicker maybe 5-6 times a second with image redraws as you drag the slider. Adding lens correction and noise reduction slows the redraws to maybe 3-4 a second. Having a full screen image also slows it down; vertical images that only take up part of the screen are noticeably faster. Overall, the lag forces you to slow down slightly when moving the slider, so that you are sure you are seeing the current redraw before you let go of the slider at your desired setting.

Looking at the CPU/RAM usage monitor, I've observed a few things.

It seems that moving around in Loupe/Grid view is primarily just loading previews from the disk, and it doesn't demand much of the CPU. This is an area where an SSD is going to help: very fast seek times and high read speeds to pull info from the LR catalog/database.

The Develop tab is highly CPU intensive. Scrolling from image to image loads the CPU much more heavily than Loupe view does, and moving the Develop sliders also loads the CPU, especially as the screen area to be redrawn increases or as you add lens correction/NR/etc.

The "Loading" indicator that appears on screen seems to have nothing to do with pulling data from the disk; rather, it appears to reflect generation of preview data by the CPU (in both Loupe and Develop view).
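One way to sanity-check that kind of claim without a profiler is to compare wall-clock time against CPU time for the operation: if the two are close, the work is CPU-bound; if wall time is much larger than CPU time, the process is mostly waiting (on disk, for example). A small sketch using stand-in tasks (the two task functions are illustrative, not Lightroom internals):

```python
import time

def cpu_bound_task():
    # Stand-in for preview rendering: pure computation, no waiting.
    return sum(i * i for i in range(2_000_000))

def io_bound_task():
    # Stand-in for waiting on the disk: sleeping burns almost no CPU.
    time.sleep(0.3)

ratios = {}
for name, task in [("cpu", cpu_bound_task), ("io", io_bound_task)]:
    wall0, cpu0 = time.perf_counter(), time.process_time()
    task()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    ratios[name] = cpu / wall  # near 1.0 means CPU-bound, near 0 means waiting
    print(f"{name}-bound stand-in: wall={wall:.3f}s cpu={cpu:.3f}s ratio={ratios[name]:.2f}")
```

The high CPU usage in Task Manager during "Loading" is effectively the same signal: the wait is computation, not disk.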

RAM usage is not much at all, never exceeding 25% of the total 16GB available.

Overall, I'm happy with LR4 on my machine. On a much slower machine I would imagine that it could be quite difficult.

I pulled up a folder of approximately 1500 1Ds III images (just now, with FF running with 8 tabs, Outlook, virus software, etc.), and went from the Library to the Develop tab (about 3 seconds), but then going from image to image where 1:1 previews had already been built took a "blink", or .3-.4 seconds each. Basically they render as fast as you'd want. Your system taking 2.5 seconds, I'd guess, is being limited by your video card. As you know, this is a relatively slow card.

When I had my ATI 5970 (dual GPU) card in my system there was absolutely no delay from image to image while in the develop module, no delay when moving the sliders on either the primary or secondary monitor. No flicker, no hesitation, just a smooth change as I move the slider. Now with the less powerful 6870 card I see a .3-.4 second image to image delay when in the develop module, and when moving the sliders the primary screen is still smooth and instantaneous but the secondary monitor lags about .4-.5 seconds.

So.. with the powerful system you have, the bottleneck affecting your image to image in the develop module delay, and your slider delays.. are most likely a direct result of your video card choice.

You didn't say what SSDs you're running, what controllers you're using with them, how full they are, etc., but it's possible you could further increase performance in certain areas of LR with faster SSDs. I'm using a Crucial C300 (AS SSD score of 650+), which is fast for a SATA SSD, but when I build my next system soon I'll be using an OCZ Revo 3x2 and a Revo 1TB hybrid for my catalog. These PCIe SSDs are light-years faster than my current two-year-old SSD.
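If you want a rough number for your own drive rather than a synthetic benchmark score, a quick sequential-write micro-benchmark is easy to sketch. This is only a ballpark figure (the OS page cache makes it optimistic even with `fsync`, and it says nothing about the random 4K reads that matter most for catalog access); the function name and block size are my own choices:

```python
import os
import tempfile
import time

def sequential_write_mb_s(size_mb=64):
    """Rough sequential-write throughput in MB/s; OS caching makes this optimistic."""
    block = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the drive
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)

print(f"~{sequential_write_mb_s():.0f} MB/s sequential write")
```

Run it against the drive holding your catalog (point `tempfile` at it via the `dir` argument of `mkstemp`) to compare drives like for like.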

Anyway.. if you get an opportunity to borrow a faster video card, throw it in there for grins and watch what happens to your current delays/issues..

I love what LR4 can do, and respect the Adobe team very much for the incredible advances achieved. But I don't want to be required to learn to re-wire my flux capacitors and warp-drive to be able to just "use it."

I think if Adobe could make it more simple they would, they're getting a lot of grief over this stuff. But, we need to remember that processing images and especially videos is a very hardware and system intensive task.

An interesting question is how much of LR4's problems stem from "supporting" video?

I don't think LR is the place to work with video; anyone who wants to edit video will have to use another program anyway. If integrating video "support" has significantly degraded LR's performance for its core stills work, Adobe has made a very poor strategic decision.

Thanks, interesting info. To answer your questions: the SSDs are both 120GB Crucial M4s, fairly empty, running off the Intel controller on my Z77 motherboard (ASRock Extreme 6) over a SATA 6Gb/s connection, and they're new, since the computer is only a couple of weeks old.

Regarding the 1:1 previews and the Develop tab: what is displayed in the Develop window has nothing to do with the pre-rendered previews. The previews are just for Loupe view. The image shown in the Develop tab is rendered fresh each time for display, using only a small chunk of data from the ACR cache.

Also to clarify, the time I quoted to move from image to image in the develop tab was not the time it took for the image to display, that happens almost instantly. The time I mentioned was from when I jumped to a new image until the "Loading" indicator at the bottom disappeared (Total time = press button -> new image displays -> "loading" appears -> short wait -> "loading" disappears). If we're talking about just moving from image to image without waiting for the "Loading" to disappear, I can move through at a little better than 1 per second.

Note that I'm on a Dell U2711 monitor while you are on a NEC 2690. While there's only an inch of size difference, there is a significant resolution difference: the NEC is a 1920x1200 monitor while the Dell is 2560x1440. The Dell is effectively at the resolution of a 30" screen (same width in pixels, just shorter). I also have my Develop tab set full screen so that only the right panel is showing and the image takes up almost all of the screen. That higher image display resolution is a big factor in how fast Lightroom runs.
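The pixel arithmetic makes the gap concrete (just multiplying out the resolutions quoted above):

```python
# Pixel counts for the two monitors discussed above.
nec_2690 = 1920 * 1200    # 2,304,000 pixels
dell_u2711 = 2560 * 1440  # 3,686,400 pixels

print(dell_u2711 / nec_2690)  # -> 1.6
```

So every Develop-tab redraw on the U2711 has to render 60% more pixels than on the 2690, which alone could plausibly account for much of the difference in slider responsiveness.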

With regard to GPU usage and Lightroom, I was not aware that LR4 had offloaded any significant rendering tasks to the GPU. I don't believe it uses the GPU at all for the Develop tab or rendering. See this thread for commentary from Adobe employees (Eric Chan specifically; he's a member here too) that they don't have much going on for GPU usage in LR yet.

To confirm this, I downloaded a little GPU gadget that shows GPU load/memory/temp/fan speed. When scrolling from image to image in the Develop tab, and when making adjustments and local adjustments, the GPU load is basically zero (while CPU usage is very high). I double-checked that the widget works by going into CS6 and flick-panning around a 60-megapixel image at 100%: the GPU is enabled in CS6, and the monitor shows GPU usage spiking to 100% in that situation. So I don't think the GPU has any effect whatsoever on how fast Lightroom is in the Develop tab. It's not the screen redraw itself that's the problem; it's re-rendering the RAW edits each time you make any changes.

I think the difference in speed we're seeing between our systems comes down to the terminology of what we're each describing, plus the higher resolution of my screen. I don't think buying a different video card would change anything in this specific situation.