It could be at least FOUR times faster if it used all available cores. As it is, I get about one preview per second (12 MP RAWs), so for 1,000 RAWs I have to wait roughly 15 minutes. If it used all the CPUs that are available, that could drop to about 4 minutes. More than 10 minutes saved.

Is there any chance of this, or does Adobe have no interest, just as it has no interest in speeding up Lightroom?

Bridge can only run one instance of Camera Raw at a time. Inside Camera Raw, it can sometimes use additional processor cores -- but much of the file decoding is not threadable. So Bridge does use all processor cores to speed up processing when it can, but it can't always do so.

Is that true? Only one instance? No multi-threaded, parallel image processing by Camera Raw?

Who did this "performance" tweak and created a piece of software that is not thread-safe, so that no one can use their multiple cores? Is it not at least possible to start more than one Camera Raw process instead of threads?!

Nowadays PCs get faster by adding more CPUs and cores (and hyperthreading). Does Adobe only know single-core, single-CPU PCs?

There are still some parts I do not understand:
If I open Camera Raw, it is able to generate its own preview pictures and lets me work on one picture at the same time, isn't it? At least that was my impression.

It is also possible to export a lot of pictures with Camera Raw; it then processes them as a background job. I can then close and reopen Camera Raw and adjust other pictures, i.e., use Camera Raw to work on one picture while it exports other pictures in parallel. Did I understand that right?

As far as I understand it, Lightroom uses the very same Camera Raw engine as Photoshop/Bridge; if not, you get into trouble, which is an issue some people have had. So did I understand correctly that Lightroom uses the same Camera Raw to handle imports, adjustments and exports?

It was also my personal experience, as posted in the Lightroom thread http://feedback.photoshop.com/photosh... , that you can speed up the export process by starting several export jobs manually. They then export in parallel and take advantage of all available cores. The last time I tried it, my exports were three times faster than with a single queue.

If I have that right, and Lightroom uses the same Camera Raw as Bridge, and a user can at least start parallel processing there manually: why is it possible to invoke parallel Camera Raw processing in Lightroom, if it shouldn't be possible? And why do we get a speedup by a factor of three, if there shouldn't be any speedup?

Don't get me wrong: I understand that each single feature requires planning, programming, testing and maintenance. But perhaps you can also understand that in times like these, when you need a multi-core PC to use PS or LR and your PC is really powerful, it is a little odd if such time-consuming work uses only one core, especially when it looks like parallelism is possible.

Personally, I have used Bash scripts with RawTherapee and ImageMagick to use all my CPUs and speed up my image processing by a factor between 2 and 3. Since I had to start a new process for each single image, there was a lot of overhead; still, running 4 to 6 processes in parallel did the trick. And if you only have to wait 30 minutes instead of 60 minutes, you're already happy.
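To show what I mean, here is a minimal sketch of that Bash approach, written in Python for readability. The command lists are placeholders: in my scripts each one was a real `rawtherapee-cli` or ImageMagick `convert` invocation, which is an assumption here, not shown.

```python
import subprocess

def run_parallel(commands, max_procs=4):
    # Start up to max_procs converter processes at once, just like
    # launching RawTherapee/ImageMagick jobs from a Bash loop with `&`.
    # Separate OS processes sidestep any thread-safety issues in the
    # converter: each image is decoded in its own address space.
    returncodes = []
    for i in range(0, len(commands), max_procs):
        batch = [subprocess.Popen(cmd) for cmd in commands[i:i + max_procs]]
        returncodes.extend(p.wait() for p in batch)  # block until batch done
    return returncodes
```

With 4 to 6 workers on a quad-core machine this is what gave me the factor of 2 to 3 mentioned above; the per-image process startup is the overhead I was referring to.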

There are certainly reasons why I use PS and the Adobe RAW module, but after these experiences it is simply hard for me to believe there is no way to improve exporting and thumbnail generation by using all cores.

Again, Bridge and Lightroom are using more than one core already - for increased performance.
They just aren't using one core per image.

Just because all CPUs are busy does not mean that is the fastest method (usually it just means you're running a lot of bad code in parallel, and sometimes waiting longer than if it were done correctly).

Yes, you're right, you can keep the CPUs busy by having them block each other.

But I was writing about the fact that it is already possible to get three times the throughput in Lightroom by manually creating several parallel processing queues. So the Camera Raw and Lightroom code seems capable of processing images in parallel AND increasing the throughput.

And if it is possible to improve the code even further, the throughput could be increased even more. That would be great!

So in the end the questions remain:
Why does Bridge not parallelize thumbnail and preview generation?
Why does Camera Raw not parallelize the export of images as JPEG, DNG, ...?
After all, it is actually possible in Lightroom to parallelize the export AND increase the throughput, even if the user has to do it manually.

And why do we have to push Lightroom manually into creating several queues to make use of all cores and increase the throughput, as has been experimentally shown by some users:
http://feedback.photoshop.com/photosh...

So I really wonder: if parallel processing of several different images is NOT the fastest method to process a huge number of images, why was Camera Raw changed to process several different images in parallel, so that it now uses all my cores instead of only one? And why is this batch processing so much faster than picture-by-picture processing?
https://forums.adobe.com/message/6669...

Me too. I also don't think this is a question of multi-threading the creation of a single preview -- rather of why the queue of previews that still need to be computed can't be processed in parallel. It is hard to see what data dependencies could exist between separate preview computations that would keep you from spawning #{of processors} threads doing previews in parallel. Even non-threadable tasks can be processed this way, as long as there are enough instances of them. Is it really the bandwidth/cache size that keeps you from doing this?
If you mention cache contention, have you for example considered programming in a more cache-aware way, for example with this technique? http://jason.cse.ohio-state.edu/ It gives you control over your cache by creating private fragments of it, so that you can overcome cache contention....
Of course, if you are already limited by your memory bandwidth, I completely agree with you that more CPUs will not save you time....
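To make the parallel-queue idea above concrete, here is a minimal sketch in Python (not Adobe's code, just the pattern I mean): one worker thread per core pulls the next pending preview from a shared queue. The `render_preview` body is a stand-in for the real decode; the point is that two previews share no data, so the only synchronization needed is around the queue and the result list.

```python
import os
import queue
import threading

def render_preview(raw_path):
    # Stand-in for the real, possibly internally single-threaded, RAW decode.
    return (raw_path, "preview")

def process_preview_queue(raw_paths, n_workers=None):
    n_workers = n_workers or os.cpu_count() or 2
    todo = queue.Queue()
    for path in raw_paths:
        todo.put(path)
    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                path = todo.get_nowait()
            except queue.Empty:
                return  # queue drained, this worker exits
            preview = render_preview(path)
            with results_lock:
                results.append(preview)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Even if a single decode is not threadable internally, nothing here requires it to be: the parallelism is across independent files, exactly as with enough instances of a non-threadable task.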

Hm, reading the comments above, where multiple manually started exports are faster than the automatic one, raises the suspicion that memory bandwidth is not the major problem in this case... it actually seems to me that at least having the option of a parallel processing queue could be a major benefit....

Yes, I already said that we continue to work on performance improvements and on improving thread safety in our tools.
PLEASE read what has already been written in this topic and stop beating the poor deceased equine.

Actually, you only mentioned that parallel programming is hard by iterating over well-known pitfalls. You claimed that the speedup in LR was a special case only, and that the proposed automation would need to be tested.
Now I wonder:
1. Can you show us a quad-core CPU on which the proposed procedure does not lead to a significant speedup?
2. If parallel queue processing is dangerous and untested, why did you already enable it in a manual fashion in LR?
3. From senior programmer to senior programmer: do you seriously want to tell me that implementing a thread pool for parallel queue processing is a challenge for your dev team? Especially since in LR this is just automating something that already works?
4. Do you really try to find bugs in threaded code via testing? Seriously? You must have a lot of time and patience. But in answering these posts, I guess you have demonstrated that kind of patience already....