First, I'm not sure we can put the connectors on the side; it might be necessary to have them at the top/bottom, which means the chimney would need to be more 'U' or 'V' shaped instead of 'T' shaped

intracube: Are you aware that Sony has just released two new full-frame mirrorless cameras (the a7 & a7r) that can accept lenses from other brands (via adapters) and output clean 'raw' video over HDMI?

And even if early versions of the Axiom only support standard frame rates: as an end user, if I hear that the hardware is capable of lower frame rates and it's just a question of a software update, then that would be good enough for me

dmj_nova: Yes, that's what I think too. Still, I've heard people describe these new Sony models (sporting 24 MP & 36 MP respectively) as being in the spirit of 'open source', since you can get an adapter and use any brand of lens

intracube: The rapidly shifting landscape puts an emphasis on shipping a product. At this price range, "do it and do it now" is vastly more important than "do it later" (which has also proven historically unrealistic).

intracube: The reality is that the imagers who can afford a camera at this level are a fickle lot, and even a stellar product (can that be expected in a reasonable time frame with this project?) can be eclipsed in less than a year.

Bertl: Still theoretical. And that comes at the cost of delay. Many projects that have striven for such open-ended designs fail miserably. There was a wonderful interview with the creator of World Forge on that.

troy_s: As an example, the following image was captured with the 'frankencam', which was programmed to capture multiple exposures with different sensitivities: http://graphics.stanford.edu/projects/camera-2.0/images/cards-s.jpg
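A minimal sketch of the merge step behind that multi-exposure trick: divide each frame by its relative sensitivity to get back to a common radiance scale, then average, skipping clipped pixels. Function names and the saturation threshold are illustrative, not the frankencam's actual code.

```python
import numpy as np

def merge_exposures(frames, gains, sat_level=0.95):
    """Merge frames shot at different sensor sensitivities into one
    linear radiance estimate. Each frame is a float array scaled to
    [0, 1]; 'gains' holds each frame's relative sensitivity.
    Saturated pixels are excluded from the average."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for frame, gain in zip(frames, gains):
        valid = frame < sat_level            # ignore clipped pixels
        num += np.where(valid, frame / gain, 0.0)
        den += valid.astype(np.float64)
    den[den == 0] = 1.0                      # fully clipped everywhere: returns 0
    return num / den
```

With a low-gain frame covering the highlights and a high-gain frame covering the shadows, the result recovers detail at both ends that a single exposure would clip or bury in noise.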

I would center our work into finding a proper way to connect stuff. Sensor board to processing board to recorder. And allow people to interconnect different sensor boards versions to different processing boards version as well. And aditionaly I'd use an existing processing board for the second prototype as well. But that's just me :-)

You are taking one set of arbitrary display-referred values (the sensor, min to max) and arbitrarily feeding them into another display-referred context (ideally sRGB, and likely unlike sRGB) with no transform
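The missing transform could look roughly like this: a 3x3 matrix taking the sensor's native primaries to linear sRGB, followed by the sRGB transfer function. The matrix values below are hypothetical placeholders; real ones would come from profiling the sensor.

```python
import numpy as np

# Hypothetical matrix mapping this sensor's native RGB to linear sRGB
# primaries. Rows sum to 1 so neutral (equal RGB) stays neutral.
CAM_TO_SRGB = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

def srgb_encode(linear):
    """Apply the sRGB transfer function to linear values in [0, 1]."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def sensor_to_srgb(rgb):
    """Transform normalized linear sensor RGB to display-ready sRGB."""
    rgb = np.asarray(rgb, dtype=np.float64)
    linear_srgb = rgb @ CAM_TO_SRGB.T       # primary conversion
    return srgb_encode(linear_srgb)          # transfer function
```

Skipping both steps, as described above, means the sensor code values are being shown as if they were already sRGB, which they are not.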

What I should mention, in case you missed it, is that in the DNG writer I wrote, Photoshop/Lightroom seems to require a camera calibration. For quick testing I just used the profile out of a Canon camera. With the first images Bertl sent I was able to adjust the white balance and get a reasonable image, but since the pixel order has changed, I'm not getting a reasonable image on the IT8 chart

gcolburn: That would be relative to your display, and even then it only suggests (assuming the data isn't broken) that the values are roughly close to the primaries of your display, which, unless you have profiled it, aren't even likely close to sRGB.

gcolburn: As we can't move the camera, and as long as Bertl uses the same lamp, our color temp is constant. You can also estimate color temp from the neutral IT8 swatches if you get adept at the process.
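Estimating white balance from the neutral IT8 swatches can be sketched very simply: average a crop of a gray patch in linear sensor space and scale R and B so their means match green. A minimal sketch; the function name and convention (green gain fixed at 1) are my own, not part of anyone's workflow here.

```python
import numpy as np

def wb_gains_from_neutral(patch):
    """Estimate per-channel white balance gains from a crop of a
    neutral (gray) IT8 swatch in linear sensor space: scale R and B
    so their channel means match the green mean."""
    patch = np.asarray(patch, dtype=np.float64)
    means = patch.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    return means[1] / means                      # gains (G/R, 1, G/B)
```

As long as the lamp stays the same, the gains computed once should hold for every shot, which is the point made above about the constant color temperature.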