HDR labs - Review: HDR in K-7

I understand well why, Class A, you'd like it to be better and why HDR in the K-7 isn't of much use for you right now. But is it really difficult to understand that others who do not know anything about it (me included) will be VERY happy to use this feature...

Oh, it will be of use to me (once I get a K-7). And I can fully understand that it is of use to others. The only things that bug me are that it easily could have been a lot more useful, and the attitude of JCPentax. The latter is just my personal problem, but the former not only affects a lot of users, it may also affect the K-7's press, which we all want to be awesome.

Huhh?? I shoot almost every single image from a tripod. Fiddling in Photoshop is something that puts me off (modern) photography.

Almost always using a tripod is nice, but not always possible. E.g., churches are ideal HDR subjects but in many cases forbid tripods. Also, many people prefer to walk around w/o a tripod and would still like to benefit from the higher dynamic range. Always assuming that computer work in post-production is to be avoided ...

Originally posted by thibs

I think those who know much about HDR cannot understand that this feature may be useless FOR THEM but useful for others.

This comment misses the point. The point isn't that it is useless. It is not; it is actually useful, but artificially limited in applicability. And because it is useful, this is actually sad.

FYI, HDR consists of three steps:
- Taking 3-5 images in fast succession (done; easy with the fast fps feature)
- Aligning images (not done; easy, open source technology exists, not very computing hungry, arbitrarily left out in the K-7 firmware implementation)
- Tone mapping or blending into one image (done; difficult, actually an art, implemented in a rather nice and "want to have" way in the K-7 firmware implementation)
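To make the blending step a bit more concrete, here is a toy sketch of exposure fusion (my own illustration, not Pentax's actual algorithm; the weighting function and all names are made up): every pixel of every exposure is weighted by its closeness to mid-grey, so the best-exposed source dominates.

```java
// Toy sketch of the blending step -- NOT Pentax's algorithm.
// Each pixel of each exposure is weighted by its closeness to
// mid-grey, so well-exposed source pixels dominate the fused result.
public class ExposureBlend {

    // Gaussian weight centred on mid-grey (values normalised to [0,1])
    static double weight(double v) {
        double d = v - 0.5;
        return Math.exp(-12.5 * d * d);
    }

    // Fuse N exposures (flattened greyscale images of equal length)
    static double[] fuse(double[][] exposures) {
        int n = exposures[0].length;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0, wsum = 0;
            for (double[] img : exposures) {
                double w = weight(img[i]);
                sum  += w * img[i];
                wsum += w;
            }
            out[i] = sum / wsum;   // normalised weighted average
        }
        return out;
    }

    public static void main(String[] args) {
        // one pixel, bracketed dark/mid/bright (made-up values)
        double[] fused = fuse(new double[][] { {0.05}, {0.4}, {0.8} });
        System.out.println(fused[0]);  // dominated by the 0.4 sample
    }
}
```

This is only the simplest possible fusion; real tone mapping (the "art" part) is far more involved.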

The point now is this: the HDR feature in the K-7 does 90% of the implementation effort for 10% of the possible use cases (tripod shooting). Isn't it supposed to be the other way round?

I believe all Class A wanted to say is that a user pointing to "Pentax' strange interpretation of the 90/10 rule" deserves a friendly response.

- Aligning images (not done; easy, open source technology exists, not very computing hungry, arbitrarily left out in the K-7 firmware implementation)

Uh, you don't work in the image-processing area of software development, do you?

While there is OS technology (like align_image_stack), this "easy thing" is still programmatically difficult: ensuring "black box" (in-camera) accuracy and repeatability is hard, and building and comparing image pyramids like this is very computationally intensive for 14.6 MP images, for both processor and memory. Plus, "free" technology like align_image_stack can only reasonably align processed images (not raw), so there is extra processing time and a loss of potential dynamic range in the pipeline right there.

Although I completely agree that in-camera image alignment for this function would've really made it more useful, don't underestimate just how hard this "seemingly simple" function is for a limited computing platform like a camera. This is something like the folks asking "why isn't there a 32-bit EXR output option in the firmware for these HDR shots!?" There is a lot more required, and a lot more going on, than just a few lines of firmware code...

I have to agree -- the amount of time it takes for my home computer (a fairly fast machine with 4 GB of memory and a quad core processor) to align and tone map an HDR image is quite long. It might be possible in camera, but it would likely take 5 to 10 minutes per image, which would be much worse than being unable to shut off the dark frame subtraction feature. For those who really want to align images, it will always be much faster to shoot the exposures and then process them at a later date on a home computer. Whether or not the feature is useful is up to each photographer, but having it available certainly does not take away the ability to do "real" HDR by shooting multiple exposures and processing later.

Uh, you don't work in the image-processing area of software development, do you?

I've done a lot of things in software development, including a fast contrast autofocus and the creation of a programming language. Fast alignment was a straightforward exercise. Not compensating rotation, and using a quad-tree monochrome pyramid, it is 5 (or 9) multiply-adds per pixel and level, i.e., 6.65 (12) multiply-adds per pixel over all levels, plus 4 adds per pixel and level to compute the pyramid.
Or 270 (430) MOps total per image. Should be less than one (two) second(s) for all three images with a GOps-capable DSP. A bit more if the processor cannot do floating point.
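For illustration, one pyramid level is nothing more than a 2x2 box average, and since each coarser level has a quarter of the pixels, the per-level costs form a geometric series summing to roughly 4/3 of the base resolution. A minimal sketch (for illustration only, not my benchmarked code):

```java
// One quad-tree pyramid level: 2x2 box average of a greyscale image.
// Each coarser level has 1/4 of the pixels, so the total pyramid cost
// is N * (1 + 1/4 + 1/16 + ...) ~= 4N/3 operations over all levels.
public class Pyramid {

    static double[][] downsample(double[][] img) {
        int h = img.length / 2, w = img[0].length / 2;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = (img[2*y][2*x]   + img[2*y][2*x+1]
                           + img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0;
        return out;
    }

    public static void main(String[] args) {
        double[][] img = { {1, 3}, {5, 7} };
        System.out.println(downsample(img)[0][0]);  // prints 4.0
    }
}
```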

Another method is feature extraction which must be used for distorted images like in pano photography. But this isn't required here.

And the demosaicing for every image is done anyway.

The problem is that people who just know something about the process tend to believe that alignment must be processing-hungry and either leave it out (management failure) or come up with a slow implementation (development failure). I've seen similar things over and over again, and part of my job is consulting on corporate projects to avoid over- or under-ambition. My clear analysis here is this: the K-7 was a tremendous effort for Pentax, and there simply weren't enough resources/insight left for these last 10%.

Originally posted by Rondec

I have to agree -- the amount of time it takes for my home computer (a fairly fast machine with 4 GB of memory and a quad core processor) to align and tone map an HDR image is quite long.

As I said, tone mapping is the compute-intensive part of it (at least if it includes a heavily non-local operator, as most algorithms do).
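To be clear about what "compute-intensive" means here: a purely global curve, such as the well-known global Reinhard operator L/(1+L), costs one add and one divide per pixel; it is the local and non-local operators that get expensive. A sketch of the cheap end, for illustration:

```java
// Global Reinhard tone curve: maps scene luminance [0, inf) into [0, 1).
// One add and one divide per pixel -- the cheap end of tone mapping;
// local/non-local operators are where the real cost lives.
public class ToneMap {

    static double[] reinhard(double[] luminance) {
        double[] out = new double[luminance.length];
        for (int i = 0; i < luminance.length; i++)
            out[i] = luminance[i] / (1.0 + luminance[i]);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(reinhard(new double[]{1.0})[0]);  // prints 0.5
    }
}
```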

And why does everybody assume their home computers are more powerful than their camera? My home computer (a single-core notebook) cannot smoothly play back 1536x1024p@30Hz. The K-7 (via HDMI) can. And it has 2 GB of main memory too ...

I've done a lot of things in software development, including a fast contrast autofocus and the creation of a programming language. Fast alignment was a straightforward exercise. Not compensating rotation, and using a quad-tree monochrome pyramid, it is 5 (or 9) multiply-adds per pixel and level, i.e., 6.65 (12) multiply-adds per pixel over all levels, plus 4 adds per pixel and level to compute the pyramid.
Or 270 (430) MOps total per image. Should be less than one (two) second(s) for all three images with a GOps-capable DSP. A bit more if the processor cannot do floating point.

Another method is feature extraction which must be used for distorted images like in pano photography. But this isn't required here.

Well, you've nicely solved for alignment in a parallel plane, but sensor planes in the hands of humans have this annoying tendency to rotate in 3D space through a lens that has (variable) distortion. Have a look at the Hugin code for an idea of how it can be done, and if you come up with a better way, contribute the code.

I'm not doubting that you have talents in certain areas, but I work with many of the brightest minds attacking exactly these problems, professionally, every day - for commercial, shipping software. It is misinformation to state that the task of in-camera multi-image alignment is "easy" or that OS software can in any way shorten the path for firmware engineers, but your opinion that without alignment Pentax has solved 90% of the problem for 10% of the use cases is valid. Consider whether any other available camera has solved more...

Well, you've nicely solved for alignment in a parallel plane [...] It is misinformation to state that the task of in-camera multi-image alignment is "easy"

Well, maybe we are confusing two problems here:
- The correction of small shifts (what the SR mechanism does in hardware and what I suggested doing in firmware).
- The correction of significant rotations, as is typical in pano or architecture photography, typically involving SIFT key extraction, distortion correction and spherical projection. Something I would indeed not do in firmware.
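To make the first problem concrete, here is a brute-force toy version of small-shift correction: search for the integer offset minimising the mean squared difference between two frames. (A sketch only; a real implementation would run this coarse-to-fine on the pyramid, which is exactly what keeps it cheap.)

```java
// Brute-force small-shift alignment: find the integer offset (dx, dy)
// within +-r that minimises the mean squared difference between a and b.
// A real implementation would run this coarse-to-fine on a pyramid.
public class ShiftAlign {

    static int[] bestShift(double[][] a, double[][] b, int r) {
        int bestDx = 0, bestDy = 0;
        double best = Double.MAX_VALUE;
        for (int dy = -r; dy <= r; dy++)
            for (int dx = -r; dx <= r; dx++) {
                double ssd = 0;
                int n = 0;
                for (int y = 0; y < a.length; y++)
                    for (int x = 0; x < a[0].length; x++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= b.length || xx < 0 || xx >= b[0].length)
                            continue;                 // skip out-of-overlap pixels
                        double d = a[y][x] - b[yy][xx];
                        ssd += d * d;
                        n++;
                    }
                if (n > 0 && ssd / n < best) {
                    best = ssd / n;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
        return new int[] { bestDx, bestDy };
    }

    public static void main(String[] args) {
        // b holds a gradient; a is the same content shifted by one row
        double[][] b = new double[4][4];
        double[][] a = new double[4][4];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++) {
                b[y][x] = 4 * y + x;
                a[y][x] = 4 * (y + 1) + x;   // a[y][x] == b[y+1][x]
            }
        int[] s = bestShift(a, b, 1);
        System.out.println(s[0] + "," + s[1]);  // prints 0,1
    }
}
```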

I'm glad you agree that alignment in a parallel plane can be solved nicely. This is all I would ever ask Pentax to include in their firmware. It is all that is required to align HDRs which are shot within half a second. Some have even managed to produce freehand K-7 HDRs without alignment at all. My suggestion is good enough to raise the 10% of use cases to 50%.

Another suggestion is to refrain from making references to the brilliance of heads, myself included. Let's just stick to named and explained software problems which may or may not be obstacles to doing this in firmware.

Unless proven false, my point remains as follows: not doing alignment at all is under-ambitious, just as trying SIFT-key-based alignment would be over-ambitious.

This is all I would ever ask Pentax to include in their firmware. It is all that is required to align HDRs which are shot within half a second. Some have even managed to produce freehand K-7 HDRs without alignment at all.

I have to agree an auto-alignment for HDR would make it much more useful. As it is, you need a tripod, and that's not my cup of tea for daylight shooting. I was surprised by this lack of auto-alignment, but I'm wondering if it's because the resulting picture size would be less than 14.6 MP due to the inevitable crop?

Should be less than one (two) second(s) for all three images with a GOps-capable DSP.

Originally posted by panoguy

It is misinformation to state that the task of in-camera multi-image alignment is "easy"

@panoguy,

I have taken the issue to the next step.

I have followed it all the way through: implemented an alignment operator, benchmarked it, and compared its quality for HDR creation to other programs which are commercially available for the task (Photoshop, PhotoMatix, PhotoAcute).

Short result:

The quality of my operator is on par with PhotoMatix, and computing the alignment parameters for a 3-image HDR sequence takes only 350 ms (on an Early 2009 Mac Mini, using one processor core).

i agree with falk: nobody is demanding full (hugin/panotools-like) and/or subpixel alignment in camera, not yet at least; simple positional alignment should do fine for in-camera use, for handheld hdr's taken in "burst" mode. i also agree that it would immensely increase the appeal of this feature, and make it that much more worthwhile. like falk, having played with hdr myself a few times, i am amazed how good a job they did with the tonemapping, which is, imho, the hardest to get right (and i am not only talking computational load, but also r&d, trial and error, voodoo and art to choose the right settings), so they definitely did put a lot of effort into it; it's not something they just spat out as a gimmick feature, to "tick the box"

in short: hard to argue with plain facts; falk's last post seems to pretty much end the debate. thank you falk.

The quality of my operator is on par with PhotoMatix, and computing the alignment parameters for a 3-image HDR sequence takes only 350 ms (on an Early 2009 Mac Mini, using one processor core).

Very nice work, Falk.
When comparing the images in your category #2, I see a slight advantage for the PhotoMatix result (better local contrast), but your resulting image is nothing to be sniffed at!

Regarding your benchmarking: it seems your Java VM uses just-in-time compilation. Do you think compiled C code would speed things up (not that it would be necessary)? And yes, people seem to frequently underestimate the computing power of embedded systems like a camera with special hardware like a DSP.

Do you think compiled C code would speed things up (not that it would be necessary)?

Java HotSpot now outperforms C++ and comes close to C.
What would be a lot faster still is compiled Fortran.
And optimizing my code (I didn't really pay attention to things like blocking loops to actually use the caches; I guess my code is memory-bandwidth-limited as it is now).
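For anyone wondering what "blocking loops" means: process the data in cache-sized tiles instead of streaming over it with large strides. A matrix transpose shows the idea (the block size here is arbitrary for the demo; in practice you would tune it to the cache):

```java
// Cache blocking illustrated on a transpose: visiting the matrix in
// small tiles keeps both source rows and destination columns resident
// in cache, instead of thrashing it with long strided accesses.
public class BlockedTranspose {

    static double[][] transpose(double[][] a, int block) {
        int n = a.length, m = a[0].length;
        double[][] t = new double[m][n];
        for (int i0 = 0; i0 < n; i0 += block)          // tile rows
            for (int j0 = 0; j0 < m; j0 += block)      // tile columns
                for (int i = i0; i < Math.min(i0 + block, n); i++)
                    for (int j = j0; j < Math.min(j0 + block, m); j++)
                        t[j][i] = a[i][j];
        return t;
    }

    public static void main(String[] args) {
        double[][] a = { {1, 2, 3}, {4, 5, 6} };
        double[][] t = transpose(a, 2);   // block size 2 is arbitrary here
        System.out.println(t[2][1]);      // prints 6.0
    }
}
```

The result is identical to a naive transpose; only the memory access pattern changes.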

Seems like all the interesting functions (noise reduction, JPEG and MP4 compression) come bundled with the processor (Fujitsu). I don't know how good Pentax engineers are at programming this beast on their own. At least they managed to get DNG compression and HDR tonemapping implemented, as well as their effect filters. I guess there are only two or three team members who master this beast, and they have been overloaded with feature requests

falk, if i may be so bold as to suggest a next step: maybe try to contact the CHDK Wiki team; porting your code to their platform would be very interesting, would show just how this would work on an actual camera, and might also mean having a prototype of in-camera alignment to play with (maybe exr output will follow, not sure about tonemapping). who knows, maybe even pentax will follow, but meanwhile, it would be nice to see it running on anything with a lens on it

on the other hand, your code would be interesting for other people who would like just a nice little solid piece of code to do just that: quick alignment of handheld bracketed shots, without using a "behemoth" like hugin/panotools. i know i'd like that for my exposure blending needs.