Topic: The EOS 1D X Sensor Demystified...

[ISW] has given a nice and concise breakdown of the technology inside the new full frame sensor for the EOS 1D X. Below is a direct quote of the breakdown.

New photodiode construction has resulted in an improved photoelectric conversion rate that gives increased light sensitivity.

Improved transistors inside the pixels are said to increase the signal-to-noise ratio (SNR).

This is the first time that gapless microlenses have been employed on a Canon full-frame sensor.

The 14 fps speed is achieved by a 16-channel analog output with two-vertical-pixel simultaneous readout. The 16 outputs are multiplexed into 4 ADCs sitting on a separate image-processor chip, the DIGIC 5+. This is around 1.4 times faster than the previous-generation EOS-1D Mark IV and is said to be a first for a 35mm full-frame digital sensor. At ISO 32,000 or higher the frame rate is reduced to 10 fps.

My second question: why is the frame rate reduced at ultra-high ISOs?

Most likely it's simply a data transfer issue. I'd assume that 18 MP at 12 fps is pretty near the max throughput. On a current 18 MP sensor (7D, in this case), an example RAW file (same scene, data from TDP) has the following file sizes:

ISO 100 - 24.7 MB

ISO 3200 - 29.3 MB

ISO 12800 - 34.1 MB

Since the data quantity goes up with increasing ISO (the same trend holds on all Canon bodies), at some point the frame rate needs to slow down to compensate.
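The effect is easy to reproduce. Below is a minimal sketch (not Canon's actual pipeline): it builds a fake 14-bit sensor readout of a flat grey scene at two noise levels and compresses each with a generic lossless compressor (`zlib`). The base level of 8000 counts, the noise amplitudes, and the sample count are all arbitrary assumptions for illustration.

```python
import random
import zlib

random.seed(0)

def simulate_raw(noise_amplitude, n=100_000):
    """Simulate a 14-bit readout of a flat grey scene plus random noise.

    The mid-grey level (8000 counts) and uniform noise model are
    illustrative assumptions, not a real sensor model.
    """
    base = 8000
    samples = [base + random.randint(-noise_amplitude, noise_amplitude)
               for _ in range(n)]
    # Pack each value into two little-endian bytes, like an uncompressed dump.
    return b"".join(s.to_bytes(2, "little") for s in samples)

low_iso = simulate_raw(noise_amplitude=4)     # quiet sensor: values cluster tightly
high_iso = simulate_raw(noise_amplitude=512)  # amplified noise: values spread out

print(len(zlib.compress(low_iso)))   # smaller: redundant data compresses well
print(len(zlib.compress(high_iso)))  # larger: noise is nearly incompressible
```

The noisy stream always ends up substantially larger after lossless compression, which matches the file-size trend in the table above.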

outsider

The RAW format is a losslessly compressed format. The file size gets higher at higher ISOs because there is less to compress (due to higher noise).

The data is likely coming off the image sensor at the same rate regardless of ISO setting (I doubt the RAW compression happens on the imaging chip), so I would not think that's the bottleneck.

Sampo

The RAW format is a losslessly compressed format. The file size gets higher at higher ISOs because there is less to compress (due to higher noise).

No, it's because the CR2 format doesn't use adaptive Huffman coding, but predefined tables that are optimized for the common-case value distribution.

If adaptive Huffman or arithmetic coding (like H.264's CABAC) were used, high-ISO raw images would be significantly smaller. Unfortunately, the processing cost in terms of power consumption and silicon area would likely be higher, especially with more complex coding schemes.

Maybe future RAW formats will have PNG- or lossless-JPEG-style spatial predictor functions. Combine that with adaptive arithmetic coding, and the file-size savings would likely be significant, perhaps even halving files.

But maybe that doesn't really make sense: file size is not really a big issue anymore, and the current scheme is very reliable. Flip one bit in the current CR2 format and you can recover the rest of the image with just a one-pixel error, given software that can resync to the Huffman stream. With more complex coding, larger blocks of the image could become corrupted without sophisticated error recovery and correction; in the worst case, one bit flip could render the whole image unusable.

Besides, if file size were really an issue, Canon would probably stop embedding a thumbnail AND a full-size JPEG image in every RAW file! I prefer reliability over file size any day or night.

---

Correction: Having taken a look at a reverse-engineered CR2 implementation [1], I have to say I was wrong. CR2 RAW compression is actually based on a modified Lossless JPEG [2]. The main differences are in data ordering, and of course CR2 contains Bayer-filter values rather than actual color components in any color space.
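For readers unfamiliar with Lossless JPEG, the core trick is a spatial predictor: each sample is predicted from an already-decoded neighbour and only the residual is stored. The sketch below uses the simplest predictor (predict from the left neighbour) on a made-up smooth scanline; the gradient, noise level, and packing scheme are all illustrative assumptions, not the CR2 layout.

```python
import random
import zlib

random.seed(2)

# A smooth 1-D "scanline": a gradient plus mild noise, loosely like image data.
scanline = [2000 + 3 * i + random.randint(-5, 5) for i in range(10_000)]

# Lossless-JPEG predictor 1: predict each sample from its left neighbour
# and store only the residual (the first sample is stored as-is).
residuals = [scanline[0]] + [scanline[i] - scanline[i - 1]
                             for i in range(1, len(scanline))]

def pack(values):
    # Two bytes per value; the offset keeps residuals non-negative for packing.
    return b"".join(((v + 32768) & 0xFFFF).to_bytes(2, "little") for v in values)

print(len(zlib.compress(pack(scanline))))   # raw samples
print(len(zlib.compress(pack(residuals))))  # residuals: far more compressible
```

Residuals of smooth image data cluster tightly around zero, so an entropy coder spends far fewer bits on them; noise widens that residual distribution, which ties back to the high-ISO file-size growth discussed above.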

Sorry if I misled anyone! I think I mixed CR2 up with some other camera manufacturer's old RAW file format that used simple Huffman coding.

KitH

Most likely it's simply a data transfer issue. I'd assume that 18 MP at 12 fps is pretty near the max throughput. On a current 18 MP sensor (7D, in this case), an example RAW file (same scene, data from TDP) has the following file sizes:

ISO 100 - 24.7 MB

ISO 3200 - 29.3 MB

ISO 12800 - 34.1 MB

Since data quantity goes up with increasing ISO (same trend on all Canon bodies), at some point the frame rate needs to slow down to compensate.

Is it that the data quantity goes up as a function of increasing noise (or decreasing signal-to-noise ratio)?

That is, are there small areas of pixels that can all be treated together as if they hold the same value, until some noise gets in the way and forces the encoding to record a new, different value? Assume the value drops back again afterwards (or else it's not noise, it's part of the picture), so the passage of a particular "packet" of noise adds two new data points, and this effect aggregates to pump up the file size.

I just took a look at those numbers, and there's a bit of a trend. More data points are needed, but the shape I'd expect is a runaway file-size explosion with increasing noise (until the picture is pure random hiss, where it levels out); hence the data-throughput constraint is the limiting factor on ISO. More throughput capacity in the chipsets gives more ISO, up to the point where the first thing to break is the frames-per-second speed.

The RAW format is a losslessly compressed format. The file size gets higher at higher ISOs because there is less to compress (due to higher noise).

No, it's because the CR2 format doesn't use adaptive Huffman coding, but predefined tables that are optimized for the common-case value distribution.

If adaptive Huffman or arithmetic coding (like H.264's CABAC) were used, high-ISO raw images would be significantly smaller. Unfortunately, the processing cost in terms of power consumption and silicon area would likely be higher, especially with more complex coding schemes.

Maybe future RAW formats will have PNG- or lossless-JPEG-style spatial predictor functions. Combine that with adaptive arithmetic coding, and the file-size savings would likely be significant, perhaps even halving files.

But maybe that doesn't really make sense: file size is not really a big issue anymore, and the current scheme is very reliable. Flip one bit in the current CR2 format and you can recover the rest of the image with just a one-pixel error, given software that can resync to the Huffman stream. With more complex coding, larger blocks of the image could become corrupted without sophisticated error recovery and correction; in the worst case, one bit flip could render the whole image unusable.

Besides, if file size were really an issue, Canon would probably stop embedding a thumbnail AND a full-size JPEG image in every RAW file! I prefer reliability over file size any day or night.

So much for "Demystifying the EOS 1DX Sensor": all this techno-babble has me even more mystified! Hopefully when someone gets hold of the actual camera we'll see some real-life tests that show us what it can offer photographers in practice...

The RAW format is a losslessly compressed format. The file size gets higher at higher ISOs because there is less to compress (due to higher noise).

No, it's because the CR2 format doesn't use adaptive Huffman coding, but predefined tables that are optimized for the common-case value distribution.

If adaptive Huffman or arithmetic coding (like H.264's CABAC) were used, high-ISO raw images would be significantly smaller. Unfortunately, the processing cost in terms of power consumption and silicon area would likely be higher, especially with more complex coding schemes.

Maybe future RAW formats will have PNG- or lossless-JPEG-style spatial predictor functions. Combine that with adaptive arithmetic coding, and the file-size savings would likely be significant, perhaps even halving files.

But maybe that doesn't really make sense: file size is not really a big issue anymore, and the current scheme is very reliable. Flip one bit in the current CR2 format and you can recover the rest of the image with just a one-pixel error, given software that can resync to the Huffman stream. With more complex coding, larger blocks of the image could become corrupted without sophisticated error recovery and correction; in the worst case, one bit flip could render the whole image unusable.

Besides, if file size were really an issue, Canon would probably stop embedding a thumbnail AND a full-size JPEG image in every RAW file! I prefer reliability over file size any day or night.

According to DxO Labs, the 1Ds Mark III has about 12 stops of dynamic range, while the Nikon D3X has an amazing 13.7, both at ISO 100. If it's near 14 stops like the Nikon flagship, with low noise, that would be amazing for wedding and landscape photographers.


Joseph

Most likely it's simply a data transfer issue. I'd assume that 18 MP at 12 fps is pretty near the max throughput. On a current 18 MP sensor (7D, in this case), an example RAW file (same scene, data from TDP) has the following file sizes:

ISO 100 - 24.7 MB

ISO 3200 - 29.3 MB

ISO 12800 - 34.1 MB

Since data quantity goes up with increasing ISO (same trend on all Canon bodies), at some point the frame rate needs to slow down to compensate.

You would think "file size" has no effect, though, since the images pass to the buffer first: the buffer would slow you down from shooting before the shutter would.

I have mentioned this in the past, but what the hell is stopping you (Canon) from putting in high-speed, low-latency buffers? Several manufacturers have memory running beyond 6 GHz (25 GB/sec and up), the chips are very small, and they have low power consumption. Maybe it's cost-effectiveness, or maybe the DIGIC processors have an unsupported bit rate, or a bit rate too low to transfer any faster?
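A back-of-envelope calculation suggests the rates involved. Using figures quoted earlier in this thread (18 MP, 12 fps continuous RAW, file sizes around 25 to 34 MB), and assuming 14-bit samples, the sketch below estimates the uncompressed sensor readout rate and the compressed rate into the buffer; the 30 MB mid-range frame size is my assumption.

```python
# Back-of-envelope throughput estimate using figures quoted in this thread.
# Assumptions: 18 MP sensor, 14-bit samples, 12 fps continuous RAW.

megapixels = 18e6
bits_per_sample = 14
fps = 12

uncompressed_bytes_per_frame = megapixels * bits_per_sample / 8
sensor_rate_mb_s = uncompressed_bytes_per_frame * fps / 1e6

compressed_frame_mb = 30  # roughly the mid-range of the file sizes quoted above
card_rate_mb_s = compressed_frame_mb * fps

print(round(sensor_rate_mb_s))  # ~378 MB/s uncompressed off the sensor
print(card_rate_mb_s)           # ~360 MB/s of compressed data into the buffer
```

Hundreds of megabytes per second is well within what fast DRAM buffers can absorb, which supports the suspicion that any remaining bottleneck sits in the processing or interface chain rather than in the memory itself.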

So much for "Demystifying the EOS 1DX Sensor": all this techno-babble has me even more mystified! Hopefully when someone gets hold of the actual camera we'll see some real-life tests that show us what it can offer photographers in practice...

According to DxO Labs, the 1Ds Mark III has about 12 stops of dynamic range, while the Nikon D3X has an amazing 13.7, both at ISO 100. If it's near 14 stops like the Nikon flagship, with low noise, that would be amazing for wedding and landscape photographers.

I believe even Nikon's 16 MP APS-C cameras like the D7000 achieve 13.9 EV of dynamic range (so much for the myth that larger pixels give wider dynamic range), beating out their own venerated 12 MP full-frame D3s, which has 12 EV of dynamic range. Canon's Achilles' heel really lies in its sensor electronics. We'll wait for real-world tests to see if they have overcome it.