Well, the color LUTs weren't working yet. I thought I would be able to implement them quite easily, but just when I got to the point where they should also be able to extrapolate, it turned into really complicated, ugly code.

So these two steps are not applying some form of color-saturation multiplier to the image (they shouldn't, but I was just wondering)?

I'm just wondering if that's the level of saturation you're getting straight from the camera head, or if there is a multiplier somewhere in your color-conversion steps that gives the more saturated image I'm seeing as the end product. It sounds from your description like there aren't any saturation stages.

Thanks,

Jason

Welcome to natural CCD saturation:) Being used to CMOS, the difference might be striking.

I find it strange that there is a difference between CCD and CMOS with regard to color saturation. If both the CCD and the CMOS are the same size and have the same color filters on them, then the same amount of light will fall into each photon well.

Maybe you can't have the same filters for CCD and CMOS?

Cheers,
Take

They are not the same size at all! The Kodak pixel is 7.4um. The pixel on the Red, the SI, and any low-frame-rate CMOS is much smaller; the pixel on the SI sensor is 5um or something like that, so the Kodak pixel is roughly twice the area. The filters are also different, depending on the manufacturer's technology and experience: Kodak came from a huge film-colorimetry background, and Sony absolutely dominates the CCD market with its technology. CMOS includes more processing on-chip and comes with a higher noise floor. Sensor filters play a large part in saturation: the overlap, the relative balance, etc.

I believe the low saturation of CMOS is inherent to the technology and the complexity of the sensor pixels. I have seen lots of unprocessed images from CMOS sensors, and this appears to be universally true.

Actually, there are a number of CMOS manufacturers who get excellent color from their sensors . . . for instance, Micron can get the same level of color saturation and accuracy from their camera-native RGB image as what I'm seeing from the Kodak CCDs, and so can Canon.

So I don't think it's fair to state that CCD = good color while CMOS = bad color. A lot of it has to do with the manufacturing process, the pigments used, the color-fastness of the pigments (a trade-off of less saturation for more long-term robustness), and the compatibility of the color pigments with the manufacturing process.

Also, the pixel size on the Altasens is 5um in order to get 1920x1080 in a 2/3"-compatible format. And from seeing the work that Micron has done, small pixels (<5um) do not mean poor color saturation out of camera.

But any time I tried to get good low-light saturation out of a CMOS, I had to do a lot of processing.

Have you tried the Microns? The "low-light" performance of those sensors might not be as excellent as a large-sensor CCD, but the color saturation is very nice.

Another thing to realize is that CCDs are clock-constrained . . . for instance, if one wants a single camera that can be as "film-like" as possible across a range of frame rates, you can't do that with CCDs at the moment.

Also, CCDs can get very hot compared to a similar CMOS, and the hotter they get, the noisier. They also use up a lot of power, which gets dissipated at some point along the line as heat. The various off-chip-generated bias voltages, etc. can also cause issues, especially as the sensor head gets hotter and more current must be drawn.

Yes, CCD is harder to design, more expensive and problematic to get right with all the extra components, and it costs a lot more in development and materials. This is also true for most Italian supercars; still, many people will prefer one of those over a BMW with an equivalent engine :)

EDIT: I think the car analogy suits the situation. We all know that a top-of-the-line BMW might be a better tool for most transport applications compared to something with Italian engineering. But the Italian car still has its market, because many people like the sound of the engine, the engineering mentality, and the way these things work and look. And even if the specifications might be similar, the Italian car can certainly be a lot more enjoyable and handle better in extreme scenarios, even though the engineering is much simpler, the technology is not as advanced, and it doesn't come with 20 three-letter acronyms for its various systems/technologies. This type of car is a financial nightmare for any automotive company, but engineers and management know there are reasons to maintain the production.

I don't believe the camera is desaturating on purpose, so this must be the out-of-camera saturation. What kind of processing is applied with a look file? Is there saturation processing?

Definitely . . . if you download the XML, there is a saturation matrix in there, and you can see all the settings that are being applied to the camera image.
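For anyone curious what such a saturation matrix looks like, here is an illustrative Python sketch of the standard luma-preserving form; the Rec.601 luma weights and the 1.3x boost factor are my own assumptions for illustration, and the camera's actual XML may use different weights:

```python
def saturation_matrix(s, w=(0.299, 0.587, 0.114)):
    """3x3 matrix that scales saturation by s around the luma axis.

    s=1 is the identity; s=0 fully desaturates each pixel to its luma.
    w: luma weights (Rec.601 here, assumed for illustration).
    """
    wr, wg, wb = w
    k = 1 - s
    return [
        [k * wr + s, k * wg,     k * wb],
        [k * wr,     k * wg + s, k * wb],
        [k * wr,     k * wg,     k * wb + s],
    ]

def apply_matrix(m, rgb):
    """Multiply a 3x3 matrix by an RGB triple."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

m = saturation_matrix(1.3)               # hypothetical 30% saturation boost
print(apply_matrix(m, (0.8, 0.4, 0.2)))  # a color pixel gets more saturated
print(apply_matrix(m, (0.5, 0.5, 0.5)))  # neutral grey is left unchanged
```

Each row sums to 1, so neutral greys pass through untouched regardless of s, which is why this form is so common in look files.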

In the end, I feel that both technologies have their place, with advantages and disadvantages on either side . . . it's not just "marketing" falsehoods that have created the popularity around CMOS, as you have described in your other posts. There are advantages, and ways to mitigate the disadvantages.

Increasing saturation with post look files does come at a cost, though. It's better to get more from the camera directly, so you avoid boosting noise, etc. I made a comparison of the out-of-camera, neutral (looks undersaturated to me), and film looks.
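That noise cost can be shown with a toy Python sketch; all the numbers here (Gaussian noise of sigma 0.01 per channel, a 1.5x chroma boost, Rec.601 luma weights) are illustrative assumptions, not measurements from any camera:

```python
import random

random.seed(0)  # make the toy experiment repeatable

def boost(rgb, s=1.5, w=(0.299, 0.587, 0.114)):
    """Boost saturation by scaling each channel's offset from luma."""
    y = sum(wi * ci for wi, ci in zip(w, rgb))   # luma of the pixel
    return tuple(y + s * (c - y) for c in rgb)   # scale chroma by s

# Simulate many noisy samples of a neutral grey patch.
clean = (0.5, 0.5, 0.5)
noisy = [tuple(c + random.gauss(0, 0.01) for c in clean) for _ in range(10000)]
boosted = [boost(p) for p in noisy]

def rms(samples, ref):
    """RMS error of all channel values against the reference pixel."""
    n = len(samples) * 3
    return (sum((c - r) ** 2 for p in samples for c, r in zip(p, ref)) / n) ** 0.5

print(rms(noisy, clean), rms(boosted, clean))  # the boosted error is larger
```

Scaling the chroma component scales the chroma part of the noise along with it, which is exactly the penalty being described: saturation recovered in post is saturation paid for in noise.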

Yes, it does, but as noted, it's a "mitigated" loss, meaning that for a little more noise you get the "good" saturation we've been talking about, along with the benefits of flexible frame-rates, low-power, high temp tolerance, all data pipeline (on-board A/D converters), optical format compatibility with 2/3" and S16mm, up to 2K resolution, etc., etc.

Technology is always moving, and tomorrow's CMOS will make today's CCDs look bad, and vice versa . . . both technologies will have their respective places for some time, as far as I can see.

There is one thing, though, that I am seeing: a lot more R&D and intellectual property is being applied toward improved CMOS designs than toward CCD . . . I think a lot of this has to do with the ability of "fabless" firms to design CMOS sensors, compared to the difficulties of creating CCDs. As such, I think we will probably see CMOS outpacing CCD design in the long run, with the end result being a bit of a "pseudo-CMOS/CCD sensor": CMOS designs created on the very high-end mixed-signal processes that are typical of CCDs. At that point you'll get the advantages of both, with fewer of the disadvantages of either.

As you may have gathered I am not trying to make my camera have a certain look and I am taking a more scientific viewpoint. This is why I have taken so long to get the camera output perfectly linear (within 6%) and also get the colors as exact as possible as well.

This would allow the most consistent image in post and give you the most control over the colors.
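As a sketch of what a "linear within 6%" check might look like, here is a small Python example; the exposure steps and code values below are made-up illustration data, not the camera's actual measurements:

```python
# Hypothetical measurements: relative exposure (e.g. shutter doubling)
# against the mean sensor code value read out at each step.
exposure = [1, 2, 4, 8, 16]
reading = [102, 199, 405, 810, 1575]

# Least-squares gain for a zero-offset linear model: reading ~= g * exposure.
g = sum(e * r for e, r in zip(exposure, reading)) / sum(e * e for e in exposure)

# Worst-case relative deviation from the fitted linear response.
worst = max(abs(r - g * e) / (g * e) for e, r in zip(exposure, reading))
print(f"gain = {g:.2f}, worst-case deviation = {worst:.1%}")
```

A sweep like this, repeated per channel, is one straightforward way to put a number like "within 6%" on a sensor's linearity.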

I think one of the reasons the colors are pretty good already is linearity and getting the black level correct. I've seen the same thing when calibrating a CRT projector and doing greyscale tracking with a photosensor and a voltage meter, instead of trying to do the same thing by eye.

Nope, you're right Take, and I'm sorry for hijacking your thread . . . I didn't want to get into a CCD vs. CMOS discussion, but just wanted to point out you've definitely done some very fine work here, and the images from your software look really nice.