Audio sampling and photon capture are not directly commensurable. The best that audio has managed to do is approximately 21 bits (according to Dan Lavry) while advertising 24. But they are using an electron stream, rather than converting photoelectrons. And there is no parity in signal levels between these two phenomena.

Ah, well, I didn't know that. The extent of my knowledge on this subject is that a voltage generated from either a photosite or a microphone membrane gets digitized, and that's it lol.

Quote

If your point is that A-D converters can convert 21 bits very well, that is true. But in photographic applications, there are not that many electrons to go around. And there is read error, and shot noise in addition. Erik or Emil would know better, but it otherwise seems Red is claiming a sensor that uses or exceeds single electron ADUs!

There are problems audio faces too: the limit of dynamic-range capture in audio, even assuming perfect equipment performance, is set by the room noise of even an extremely quiet studio.

Quote

But gain does not multiply out the amount of information. And it introduces noise. And you can't do HDR with gain, only pseudo-HDR.

The pseudo-step wedge is suggestive, but not genuinely informative. I'd like to see a frame from the Dragon that has that much DR. I'd really like a detailed technical explanation. Perhaps there is some innovation going on here, but it needs an explanation.

Honestly, I'm not sure exactly how it works myself; I'm just trying to figure it out by deduction, since this is a technology previously limited to labs. If anything, here it is from the horse's mouth: http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor.html

Edit: It looks like I forgot the specifics; it says the exact opposite: the highlights are derived from the lower-gain signal, and the shadows from the high-gain one. Sorry about that, I'll change my previous post.

But as I've said before, it's only pseudo-HDR if the different gain levels are derived from one converter, not from two converters calibrated to different gain levels. The native ISO of cinema cameras is around 800-1250, yet they still manage to capture such extreme amounts of DR, which suggests that DR is not tied directly to a camera's gain.
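A minimal sketch of how a dual-converter scheme of the kind described for the ALEXA could merge its two readouts. The gain ratio and switch point below are invented for illustration, not published figures:

```python
def merge_dual_gain(high_gain_dn, low_gain_dn, gain_ratio=16, switch_point=3000):
    """Combine two readouts of the same photosite digitized at different
    analog gains: shadows from the high-gain path (lower effective read
    noise), highlights from the low-gain path (more headroom)."""
    if high_gain_dn < switch_point:
        return high_gain_dn          # shadow/midtone: use the high-gain read
    return low_gain_dn * gain_ratio  # highlight: rescale the low-gain read

merge_dual_gain(100, 6)       # dark pixel: taken from the high-gain read
merge_dual_gain(48000, 3000)  # bright pixel: low-gain read scaled up
```

Real implementations blend the two paths smoothly around the crossover point rather than switching hard, to avoid a visible seam.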

As for HDRx, the Red team says that the Dragon makes HDRx obsolete, and it likely won't be supported by Dragon. There are some members who still want the feature in because it makes still extraction easier, though.

If you see my exchange with Erik just before this, we figured out that there are two separate exposures being made to produce HDRx. And that solves the puzzle. The sensor doesn't deliver that many bits in a single exposure, but in a combination of two. And the added dynamic range comes from the highlight end and not the shadow end.

As I said, it's easier to expand dynamic range into the highlights by effectively expanding the full-well capacity of the sensor than it is to expand dynamic range in the shadows by increasing quantum efficiency and reducing read noise. Even in the Nikon D4, the physical full-well capacity is doubled over its predecessor in a single exposure, making for a base ISO of 100 and a wider dynamic range. Multiple exposures are another way of doing this.

And if you read the last line of my post, you'll see that the puzzle isn't solved, because the Dragon is not blending two exposures via HDRx, which is being dropped from the camera entirely as a feature. HDRx already exists on the Epic, but it has its own problems: since the shutter speed differs between the two exposures, it can create ghosting during motion. It was a neat workaround while it lasted.

This sensor is claimed to capture 20 stops natively, which I don't particularly dismiss, but the real question is how they're reading that data off the sensor. With a 16-bit ADC you're technically limited to 16 stops of dynamic range, so how are they getting another 4?
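The 16-bit / 16-stop arithmetic, and one way past it, can be written down directly. This is a sketch; the 16x gain ratio is illustrative, not a claim about the Dragon:

```python
import math

def adc_dr_stops(bits, noise_lsb=1.0):
    # A single N-bit converter spans 2**N levels; with ~1 LSB of noise
    # that caps dynamic range at about log2(2**N / noise) = N stops.
    return math.log2(2**bits / noise_lsb)

# One 16-bit read: 16 stops.
single = adc_dr_stops(16)

# Two 16-bit reads at gains 16x apart, merged: the low-gain read
# extends the top end by log2(16) = 4 stops, for ~20 stops overall.
combined = adc_dr_stops(16) + math.log2(16)
```

So a 16-bit converter is only a hard 16-stop ceiling per read; multiple reads at different gains (or multiple exposures) can exceed it.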

Thanks for the correction. And I apologize if you weren't referring to shadow DR in the first place, but highlight DR.

I would still guess that any additional dynamic range is being added at the highlight end through effective increase in well capacity. With several sensors yielding over 50% quantum efficiency, there isn't more than a theoretical stop to be gained at the low end. And with the noise floor as low as it is, we aren't /that/ far from counting photons singly.

But the additional headroom would still be great news for filmmakers. As they say, like "film DR." Lots of room at the top.

The main problem I see with 20 stops is that it would need very large pixels, with a full-well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30,000-60,000 range, so the pixels would need to be much larger than still-camera pixels, something like 20 microns. Would they fit on the chip?
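A back-of-the-envelope check on that pixel-size claim, assuming a typical well density on the order of 1,500 electrons per square micron (a rough rule of thumb, not a datasheet value):

```python
import math

def pitch_for_fwc(fwc_electrons, well_density=1500.0):
    """Approximate pixel pitch in microns needed to hold a given
    full-well capacity, assuming FWC scales with photosite area."""
    return math.sqrt(fwc_electrons / well_density)

pitch_for_fwc(50_000)     # ~5.8 um: an ordinary still-camera pixel
pitch_for_fwc(1_000_000)  # ~26 um: the 20-stop, 1e6-electron case
```

Under that assumption the 20-micron-class figure in the post is about right.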

Quote

And if you read the last line of my post, you'll see that the puzzle isn't solved, because the Dragon is not blending two exposures via HDRx, which is being dropped from the camera entirely as a feature. HDRx already exists on the Epic, but it has its own problems: since the shutter speed differs between the two exposures, it can create ghosting during motion. It was a neat workaround while it lasted. This sensor is claimed to capture 20 stops natively, which I don't particularly dismiss, but the real question is how they're reading that data off the sensor. With a 16-bit ADC you're technically limited to 16 stops of dynamic range, so how are they getting another 4?

Quote

The main problem I see with 20 stops is that it would need very large pixels, with a full-well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30,000-60,000 range, so the pixels would need to be much larger than still-camera pixels, something like 20 microns. Would they fit on the chip?

The D4 captures 120k photoelectrons at ISO100, which gives one more stop of headroom. But there might be other ways to increase "effective capacity."
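The one-stop figure follows directly from the doubling (the predecessor's capacity is taken to be half the D4's 120k, as the earlier post implies):

```python
import math

def headroom_gain_stops(new_fwc, old_fwc):
    # Each doubling of full-well capacity adds one stop of highlight headroom.
    return math.log2(new_fwc / old_fwc)

headroom_gain_stops(120_000, 60_000)  # 1.0 stop
```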

I'm interested to see if they use on-chip converters and how well that works. These things run very hot. Using live view on the D800 almost doubles the amount of thermal noise to my eye.

They say so. Here are real world SEM pictures of a pair of CMOS sensels.

The main problem with CCDs seems to be readout noise: to get 20 stops of DR you need a full-well capacity (FWC) of 1,000,000 and a readout noise of 1 electron charge.
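That relationship is just the standard engineering definition of dynamic range; plugging in the numbers from this thread:

```python
import math

def dr_stops(full_well, read_noise):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well / read_noise)

dr_stops(1_000_000, 1)   # ~19.9 stops: the 20-stop case
dr_stops(1_000_000, 15)  # ~16.0 stops: the same well with CCD-like read noise
dr_stops(50_000, 15)     # ~11.7 stops: a typical older MFDB CCD pixel
```

This is why read noise, not just well capacity, is the bottleneck: a 15-electron floor costs four stops even on a million-electron well.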

CCDs used in MFDBs used to have readout noise in the range of 12-17 electron charges.

I'm somewhat skeptical of the FWC figures given by "sensorgen", as they give different values for cameras using the same chip. The Sony Alpha and the Nikon D3X both use a very similar sensor from Sony. Sensorgen gives FWC = 48975 for the Nikon and FWC = 26843 for the Sony, but the chip geometry is the same. The Nikon D3X makes much better use of the Exmor sensor, but I'm pretty sure the FWC is the same on both.
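For what it's worth, sites like sensorgen generally back FWC out of measurements rather than read it off the silicon, roughly as conversion gain times the raw clipping point, so two bodies with the same chip can legitimately report different figures. The numbers below are illustrative, not sensorgen's actual inputs:

```python
def estimated_fwc(gain_e_per_dn, saturation_dn):
    """Estimate full-well capacity from a measured conversion gain
    (electrons per raw data number at base ISO) and the raw level
    at which the file actually clips."""
    return gain_e_per_dn * saturation_dn

# Same sensor, but a body that clips its raw data earlier (or applies
# different base-ISO gain) will report a smaller "full well":
estimated_fwc(3.2, 15300)  # ~48960 electrons
estimated_fwc(3.2, 8400)   # ~26880 electrons
```

So a near-2x discrepancy can come from raw processing and headroom choices, not from different photosites.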

If my memory is correct, the CCDs used in many MFDBs are interline, which is why some backs use microlenses; if they had used full-frame photogates instead, microlenses would make no sense, as the fill factor would already be 100%.

In any case, I'm downloading some raw files from the Aaton to see how the claimed DR holds up on my own computer; at $90k for just the camera, it had better be good.

I didn't read all the replies, but I remember a photographer who had a custom sensor made. I remember it being very large, but maybe B&W. I forget the details. Maybe it was mentioned already; I'll have to read this thread later :-)

One thing to keep in mind is that features essentially come free. The expensive thing is sensor surface area. Megapixels are free, square inches are expensive. Designing a 20-40 MP sensor at 6x7 would be much more expensive than just upscaling an existing sensor. A sensor based on the one used in the Sony A7s would fill the bill for you, that would come in at around 50 MP.

Keep in mind that such a sensor would really need an OLP filter: the larger the pixels, the more artefacts they will produce.

Sensor costs scale much faster than sensor area. Doubling sensor area may raise cost 4-8 times (my guess), and producing in small series is more expensive than producing in large series.
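The faster-than-area cost growth mostly comes from yield: a fixed density of wafer defects kills a larger fraction of big dies. A toy Poisson yield model (the defect density here is an arbitrary illustrative number) shows the effect:

```python
import math

def relative_die_cost(area_cm2, defect_density=0.1):
    """Relative cost per *good* die: silicon cost grows with area,
    but yield = exp(-defects_per_cm2 * area) shrinks as dies grow."""
    good_die_yield = math.exp(-defect_density * area_cm2)
    return area_cm2 / good_die_yield

full_frame = relative_die_cost(8.6)   # ~24x36 mm sensor
doubled    = relative_die_cost(17.3)  # roughly twice the area
doubled / full_frame                  # ~4.8x the cost for 2x the area
```

With this defect density, doubling the area costs almost five times as much, consistent with the 4-8x guess above.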

Another expensive part of development is the signal-processing chip (Bionz, Expeed, Digic) and its programming. Deactivating the motion stuff is in all probability just changing a byte from true to false, but it may or may not reduce licensing costs.

Well, it seems that Sony can make large sensors at a reasonable cost, it may just happen that your dream comes true. But, in all honesty, I wouldn't bet on it.

Alright, I'm just a photographer, not a pixel peeper or a techie, so don't jump down my throat for my simple-minded question.

If Canon/Nikon can make low-light cameras and sensors (or Sony's, for that matter), why can't they (any manufacturer) make a medium-format version that is a full-frame sensor?

This is what I'm thinking: a low-light CMOS 6x7 sensor that is around 20-40 MP for under $20k. Live view would be nice, but it doesn't have to do video.

My logic is that if Canon/Nikon can build a body with all the extra goodies in it (mount, titanium body, extra electronics, etc.) for under $5k, why can they not simply cut a larger sensor out of the original wafer and throw it in a digital back for $20k?

Why are we locked into this 36x48 format, or even 40x54?

Who would have guessed a brand-new 50 MP medium-format sensor was going to sell for $8k, attached to a quite wonderful MF body with a storied photographic heritage, no less? I mean, the Pentax 645D is now $4,800, and that is not a lot of money for a weather-sealed MF digital camera loaded with features for making life easier. I wouldn't have guessed a 65% drop in price for the 645D in 3 years; the 5D Mark II hasn't dropped that far yet! So long as you have competition in the marketplace, and we as consumers support game-changing technologies, it will happen sooner than you think. Sony, Sigma and Pentax prove that it is a pretty cool thing that there's no monopoly in the camera game.

When the D800 came out, did you think, "I'll just wait until Sony makes a mirrorless full-frame camera using the same sensor in two years"? I didn't. You would think at least Hasselblad would have guessed it and used those as the basis for their rich man's NEX line, and not the 7n or whatever.

Hang on to your hats, people, it's going to be a bumpy ride! That Spectravision company that makes combination CCD sensors can make seamless ones, and in 2008 some LF shooter had a 4x5 sensor custom made for $50k. These poo-poo-ers have a very consumer-based idea of where the tech is. I'm sure there are a number of players in the sensor game who wouldn't kick you out if you offered $20k. You might have to use a color wheel to make color photographs, but they would look awesome.

Quote

I didn't read all the replies, but I remember a photographer who had a custom sensor made. I remember it being very large, but maybe B&W. I forget the details.

It was a pair of 10"x8" sensors of very low resolution, using LCD-panel fabrication technology, which works at these large sizes. The buyer uses these for test shots in lieu of Polaroids, before taking the final images on 10"x8" film.

But cost is the only barrier: wafer-sized CMOS sensors are already offered on a custom-order basis. The new Pentax 645 with its 44x33mm CMOS sensor has a "sensor cost increment" of about $6,000 (on the basis that the rest of the body is comparable to a $2,000 Pentax 645 AF film camera), compared to a sensor cost increment of about $1,000-$1,200 for the least expensive 36x24mm bodies and roughly $200 or less for the APS-C format.

Here is something I read on the internet about dynamic-range limits due to veiling flare (or glare) from lenses, with many references to earlier data. It suggests that even 14 stops is optimistic with typical scenes, but that with a completely stationary camera and subject there might be techniques to overcome it, like one involving taking multiple images through a "mesh" mask that is carefully moved between frames, followed by deconvolution processing based on analysis of the "glare spread function".
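The veiling-glare ceiling follows from a simple model: if a fraction g of the light entering the lens is scattered uniformly across the frame, no shadow can record below g times the highlight level. This is a simplification of the glare-spread-function analysis described above, and the glare fractions are illustrative:

```python
import math

def glare_limited_dr(glare_fraction):
    """Upper bound on delivered dynamic range when a uniform veiling
    glare floods the deepest shadows."""
    return math.log2(1.0 / glare_fraction)

glare_limited_dr(1e-3)  # ~10 stops at 0.1% veiling glare
glare_limited_dr(1e-4)  # ~13.3 stops at 0.01%: why even 14 stops is optimistic
```

On this model, a 20-stop sensor would need lens glare below about one part in a million of the scene light to deliver its full range on a real scene.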

Quote

If my memory is correct, the CCDs used in many MFDBs are interline, which is why some backs use microlenses; if they had used full-frame photogates instead, microlenses would make no sense, as the fill factor would already be 100%.

I think you've got this backwards... I'm not aware of ANY digital back ever made using an interline CCD. Those were popular in compact cameras and are still popular in small industrial cameras. DBs use full-frame chips.