This might be a dumb question, but will shooting continuous RAW kill off your shutter? From what I've seen so far, the camera captures raw images and you must compile them in post, so do all these raw files add to your camera's actuations? Would really appreciate some insight on this, thanks!

It will be no different from any other live view mode: the mirror is locked up, and I believe the shutter is too.

Everything the sensor is doing is 100% identical here... it's just the cripple codec (the in-camera H.264 compression) that is being bypassed. Running the codec is actually more processor-intensive than skipping it! So the only critical thing performance-wise is all the continuous write activity to the CF card. We will find out which CF cards are fast and reliable soon enough this way...
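To put a rough number on that write activity, here's a quick back-of-the-envelope sketch in Python. The resolution and frame rate are illustrative assumptions, not anything measured from a camera:

    # Rough estimate of the sustained write rate continuous raw recording
    # demands from the CF card. Resolution and fps below are assumptions;
    # plug in your own settings.
    width, height = 1920, 1080   # assumed recording resolution
    bit_depth = 14               # Canon raw data is 14 bits per photosite
    fps = 24                     # assumed frame rate

    bytes_per_frame = width * height * bit_depth / 8
    mib_per_sec = bytes_per_frame * fps / 2**20

    print(f"{bytes_per_frame / 2**20:.2f} MiB per frame")
    print(f"{mib_per_sec:.1f} MiB/s sustained")
    # -> about 3.46 MiB per frame and ~83 MiB/s sustained, which is why
    #    the card, not the sensor or processor, is the bottleneck.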

Hmm, is this a general issue with Blackmagic sensors, or is it a contrived example?

It tends to be a problem with sensors that are used at or near their native resolution. The BMCC sensor is only 2.5 megapixels, so there is no downsampling to fall back on for dealing with moire. At that low a resolution, Blackmagic also decided against an OLPF (optical low-pass filter) because it would have cost them sharpness. With a Bayer-pattern sensor, the grid of photosites tends to cause false-color artifacts, which are hard to suppress... Canon and Sony have learned to do so quite well. Not so Blackmagic.
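A toy example makes the false-color mechanism easy to see. The sketch below (NumPy only, purely illustrative, not any vendor's actual pipeline) samples a colorless stripe pattern at the photosite pitch through an RGGB Bayer mosaic; the per-channel averages come back unequal, i.e. as color that was never in the scene:

    import numpy as np

    # A neutral gray stripe pattern with a 2-photosite period -- exactly the
    # kind of fine detail an OLPF would blur away before it hits the sensor.
    h, w = 64, 64
    x = np.arange(w)
    scene = 0.5 + 0.5 * np.cos(np.pi * x)   # columns alternate 1.0, 0.0
    scene = np.tile(scene, (h, 1))

    # Sample the scene through an RGGB Bayer color filter array.
    r  = scene[0::2, 0::2]   # red photosites land on the bright columns
    g1 = scene[0::2, 1::2]   # half the green photosites: dark columns
    g2 = scene[1::2, 0::2]   # other half: bright columns
    b  = scene[1::2, 1::2]   # blue photosites land on the dark columns

    # The scene is colorless, so R, G, and B should all average the same.
    g = (g1.mean() + g2.mean()) / 2
    print(f"R: {r.mean():.2f}  G: {g:.2f}  B: {b.mean():.2f}")
    # -> R: 1.00  G: 0.50  B: 0.00 -- a gray pattern aliases into strong
    #    false color, which the demosaic then has to try to suppress.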

Blackmagic has a camera in development that shoots 4K video and has a Super 35-sized sensor. However, the inexpensive sensor they chose for it is not particularly strong on dynamic range, and the core ergonomic problems of the BMCC design have not been addressed in that camera either. They also have a pocket camera coming out that competes with the GH3, but it has the same problems as their current sensor.

What Blackmagic is extremely good at is internet marketing. Not cameras.

This was very helpful. It looks like the RAW workflow is starting to take shape!

I want to preface my next question by saying that I'm a video n00b. Should I do most of my tonal/color adjustments in ACR, or should I wait until I have the footage in AE/Premiere and do most of it in DaVinci Resolve, MB Looks, etc.? Does it matter where I do it? My instincts tell me there are two competing issues:

1. Which program is better at a specific task (e.g., recovering blown-out clouds).

2. The flexibility of not having to go upstream to make changes to footage and then re-import downstream. For example, if I do my tonal/color work in ACR, won't that mean I have to go back and forth, rather than being able to make quick adjustments via a plugin without the extra re-importing step?

Again... I'm just a n00b here. :-)

My thinking, at least for me: if I can get the footage from the camera, in the raw-est form possible, into DaVinci Resolve Lite, I'd like to do that... color correct/grade there, then use FCPX for editing... and round-trip it from there if needed for tweaking...
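One small thing that helps before pointing Resolve at the footage: check that the extracted DNG sequence has no dropped frames. Here's a tiny Python check; the folder name and filename pattern are just placeholder assumptions for whatever your extraction tool actually writes:

    import re
    from pathlib import Path

    # Scan an extracted CinemaDNG folder for gaps in the frame numbering
    # before importing into Resolve. "M00-1234" and the *.dng pattern are
    # placeholder assumptions -- adjust to your extraction tool's output.
    frames = sorted(Path("M00-1234").glob("*.dng"))
    numbers = [int(re.search(r"(\d+)\.dng$", f.name).group(1)) for f in frames]

    missing = sorted(set(range(numbers[0], numbers[-1] + 1)) - set(numbers))
    print(f"{len(frames)} frames, missing: {missing if missing else 'none'}")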

Man, this looks like it may actually happen in the not-too-far-off future.

I do IT for a living, but I'm not familiar enough with this type of hardware hacking... so I'm waiting for something a bit more refined to be released from ML.