Overall, we used the Pocket cam and the BMCC heavily. I would say in a 50/50 ratio, maybe more Pocket than BMCC because it was just so much easier to move around, plus the director (Patrick Johnson) felt more comfortable operating the Pocket since he owns one.

I have to say, both cameras are ABSOLUTELY amazing and the footage we got is incredible.

These photos were all taken with a cell phone as we were moving super fast so I couldn’t carry around my DSLR for pictures. Anyway, enjoy and you’ll see more stuff soon!

THE PRESERVE – SHOT ON THE BMCC
Mon, 12 Aug 2013
Here’s a montage film (I have to call it a montage film because calling it simply a “film” would, in my opinion, be inaccurate) that I shot at a wildlife preserve on Long Island, NY. This place has always been beautiful but haunting to me, and I’ve fallen in love with it, so I went out by the water and shot this with my Blackmagic Cinema Camera.

And yes, that is the Empire State Building in the tilt-up shot near the end. I was ABSOLUTELY AMAZED that you could see it (though just barely) from the preserve, which is approximately 30 miles away. Amazing!

CHROMA NOISE REDUCTION – BMCC

Here is a tutorial on removing chroma noise from your Blackmagic Cinema Camera footage WITHOUT losing much (if any) sharpness and detail. I personally find colored noise a bit distracting and don’t like the look of it. However, I do like the look of the colorless noise in Blackmagic Cinema Camera footage – it reads more as “film grain” than “video noise” to me. That is, of course, personal preference, so keep it in mind.
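The tutorial itself is a video, but the underlying idea can be sketched in a few lines: separate luma from chroma, smooth only the chroma planes, and recombine. The snippet below is a minimal illustration of that principle – the BT.601 coefficients and the plain box blur are my assumptions, not the exact method from the tutorial.

```python
import numpy as np

# BT.601 coefficients are an assumption; grading apps may use BT.709.
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) * 0.564, (r - y) * 0.713

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.403 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def box_blur(plane, k=5):
    """Separable box blur with edge padding; output matches input size."""
    pad = k // 2
    p = np.pad(plane, pad, mode="edge")
    kern = np.ones(k) / k
    p = np.apply_along_axis(lambda row: np.convolve(row, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda col: np.convolve(col, kern, mode="valid"), 0, p)

def reduce_chroma_noise(rgb, k=5):
    """Smooth only Cb/Cr; luma Y (where sharpness and detail live) stays untouched."""
    y, cb, cr = rgb_to_ycbcr(rgb)
    return ycbcr_to_rgb(y, box_blur(cb, k), box_blur(cr, k))
```

Because the blur never touches the Y plane, the speckled color splotches soften while edges and “film grain” texture survive – which is exactly why this approach costs so little sharpness.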

On a side note, if you want to minimize noise in the first place, the best thing to do is expose to the right. Set your zebras at 100%, expose up until they just appear, then pull back a little so you’re not clipping. This will give you the best image quality with the least noise. Also, you’ll have to excuse the web compression, as it adds blockiness and doesn’t accurately represent the video. At any rate, enjoy!

PS – Try this technique on footage with moiré and see if it makes the moiré a little less noticeable. It could help reduce the rainbow “pinging” that can show up in fine detail when moiré is present. If you try it, let me know the results!

Shrinking Our Toolbox
Tue, 01 Jan 2013
When we set out to make a film, what is the end goal? For me, it is to help the audience escape for the short time they are watching my film. I use many tools to do this. You see, the audience must suspend their disbelief for the duration of a film. We’ve all heard that before:

Suspension of disbelief.

Many experts in screenwriting say that when an audience suspends their disbelief, they can only do so in a limited capacity. What does this mean, and how does it apply to screenwriting? First off, these experts advise that if we’re writing a fantasy premise, we should limit it to one element that asks the audience to suspend their disbelief. Blake Snyder covers this in his book “Save the Cat.” Whether you agree with it or not, it seems to be a valid point:

DOUBLE MUMBO JUMBO, as Blake Snyder calls it: don’t add more than one piece of magic to one movie. In other words, don’t have an alien invasion (first element) and then ask us to believe the dead come back to life, bite the aliens, and turn them into zombie aliens (second element). While this might sound like a great Syfy movie, it might not work. It depends on the type of film you’re making.

How does this relate to HFR and 3D?

We’ll get back to that, but let’s move on to an important belief of filmmaking:

LESS IS MORE.

Many of us believe that filmmaking is a subtractive process. Think about it. What are some go-to methods for making a more cinematic image? In no particular order:

Shallow DOF

More contrast

24fps

What do these all have in common? Less. Shallow DOF removes parts of the image through blur. Contrast crushes/clips details at the high and low end. 24fps is less than 48fps or 60fps. Less. Less is more in filmmaking.

MORE IS LESS

3D and HFR (high frame rate) cinema provide us with more. More for the eye to look at. More frames per second. More opportunities for the audience to call our bullshit.

3D

3D is a tricky one. Most people don’t like it, and some say it’s more of a gimmick than a storytelling device. What’s the problem with 3D, and how is it shrinking our toolbox?

DEPTH OF FIELD

A deep depth of field is one of the issues with 3D. Aside from sometimes making the film look cheap (more like home video), it removes one of the tools in our toolkit: it takes away our ability to control where the audience looks. With those insanely detailed environments, the trend is to make the DOF so deep that you see everything, partly to make viewing easier on the eyes. This is done so the image resembles what the human eye sees (a deep DOF), since the argument for 3D and HFR is to make the movie-going experience more like real life.

Removing this tool removes part of the control we have over the viewer. Again, we’re trying to trick our audiences into believing our story; we’re asking them to suspend their disbelief. By showing them everything, we limit our ability to manipulate them. By showing them everything, they don’t know where to look. By showing them everything, the only place they know to look… is everywhere.

HFR

HFR, oh how I hate thee. I’ll admit, I personally hate the look of “HFR cinema.” It just looks cheap. I was walking around Best Buy and noticed something that looked like The Avengers on a 55” LED TV. I stopped, kept watching, and my immediate thought was, “What is this, a late-night talk show/SNL parody?” To my horror, I realized it was the film itself, playing on a television set up with motion smoothing/a higher refresh rate to simulate higher frame rates. I couldn’t believe my eyes.

People say kids under 20 don’t care about the look of HFR cinema. Gamers don’t care about the look of HFR. The next generation doesn’t care. If you grew up with movies shot at higher frame rates, the argument goes, you wouldn’t be so attached or partial to 24p.

Honestly, look/aesthetic is only part of the story, but it’s enough for me to dismiss HFR altogether. Let’s take it further, though. What higher frame rates do is decrease the amount of motion blur. Those who support HFR consider motion blur an artifact. I look at it as a tool. It distinguishes a movie from any other video source. It distinguishes a movie from reality.

TOOLS – NOT ARTIFACTS

Call me crazy, but what many supporters of HFR cinema call artifacts, I consider tools. Sure, you have to handle cinema cameras with care. You can’t go all Blair Witch with them and expect good results (if that’s not the intended look). But these “artifacts” are not limitations. They are tools we use in filmmaking to make our storytelling more believable. Let’s face this fact:

Most films have visual effects nowadays.

These visual effects might not be as in-your-face as Transformers, The Hobbit, or any other blockbuster, but visual effects are a tool we all use.

DOF

Depth of field as a tool was already mentioned. It’s fairly obvious: we don’t see as much of the frame, which allows us to hide many things and to manipulate the audience into looking where we want them to look. Shallow DOF is also a way to integrate VFX into a shot and have it be believable. Just watch the many VFX shots in Gareth Edwards’ film Monsters to see what I mean. Showing less, or showing VFX for a shorter period of time, can help integrate them into a shot. Our mind thinks, “Oh, OK… the camera went out of focus there and so did the large tentacle. It must have been there.”

24fps & MOTION BLUR

Motion blur at 24fps has a wonderful aesthetic to it. It just looks like a movie – admittedly an established look based on decades of cinema. Sure, one can say that’s just because we’re used to it, but there’s more to it than that. It’s a tool.

What makes VFX blend into a shot well? Basically, we have to imitate and simulate what the camera captures. Take, for instance, a shot of buildings in the far distance. There are many naturally occurring phenomena that the camera captures while shooting. This could be anything from heat distortion to haze. When we’re trying to blend 3D elements or CGI into a shot, we have to duplicate the phenomena in order to make elements blend seamlessly into the shot.

So what are some of the methods for making CGI elements blend? We already mentioned heat distortion and haze, but there’s also blur, grain, color correction, and, yes, you guessed it… MOTION BLUR. It’s yet another layer we can add to help visual FX elements blend into a shot.
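To make the point concrete, here’s a toy sketch – all the names and the crude wrap-around blur are my own illustration, not production compositing code – of blurring a CGI element and its matte before the standard “over” composite. Applying the same blur to both the element and its matte is what lets the element sit in the plate instead of popping out of it.

```python
import numpy as np

def motion_blur_x(img, length=9):
    """Crude linear motion blur: average `length` horizontally shifted copies.
    np.roll wraps at the edges, which is acceptable for this illustration."""
    acc = np.zeros_like(img, dtype=float)
    for s in range(length):
        acc += np.roll(img, s - length // 2, axis=-1)
    return acc / length

def composite_over(bg, fg, alpha):
    """Standard 'over' operation: foreground laid onto the plate by its matte."""
    return fg * alpha + bg * (1.0 - alpha)

def blend_element(plate, element, matte, blur_len=9):
    """Blur the CGI element AND its matte identically, then composite.
    Mismatched blur between element and plate is what makes VFX look fake."""
    return composite_over(plate,
                          motion_blur_x(element, blur_len),
                          motion_blur_x(matte, blur_len))
```

Remove motion blur from the plate – as HFR does – and this matching step has far less blur to hide the seams behind.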

If you’ve been following Twitter and other social networks, you’ve seen many people say that some of the VFX in THE HOBBIT stand out as particularly bad and fake because of the smooth, drastically less motion-blurred 48fps image on screen. By going to HFR, we’re removing elements, methods, LAYERS from our toolbox of tricks that help fool the audience into thinking what they’re seeing is real. Why on earth would we want to do that?

IT’S A WRAP

By removing all these elements from our toolbox – some of the best tools for producing our “movie magic” – we severely limit the ways we can trick our audiences into believing the worlds we create in our films. There has to be some distinguishing factor between films (“make-believe”) and the reality of the world we live in (news, sports, etc.). We want to escape. We want to LEAVE reality. We don’t go to the cinema for reality. We go to the cinema to ESCAPE reality… to escape our problems… to escape real life and be entertained. We don’t need anything else pulling us back to reality while we’re trying to escape it.

Blake Snyder said that DOUBLE MUMBO JUMBO is adding more “magical” elements than an audience can believe. I propose that by removing elements like 24p motion blur and shallow DOF, we are performing REVERSE DOUBLE MUMBO JUMBO (speaking of mumbo jumbo…).

What do I mean? We’re REVERSING Snyder’s theory and REMOVING too much of the “magic” from our movies. We’re removing the very magical elements that have helped us distinguish film from reality. Shrink our toolbox – remove too many layers – and the magic is gone.

Are you willing to give that up?

Hello all!
Thu, 27 Dec 2012
Just a quick note that I’ve moved my blog here. You can view my work in the VIDEOS section. I’ll be adding to it constantly, both new and old videos (mostly new), so bear with me while I get everything together.
Transferring Your Timeline From Avid Media Composer 5.5 to Premiere CS5.5
Sun, 24 Jul 2011
Here is a video tutorial showing how to get your sequence out of Avid Media Composer 5.5 and into Premiere CS5.5 using your original media, not QT Reference files that link back to your DNxHD files.

I hope this helps!

A few notes:

DNxHD is a fine codec, but as of this writing the only 10-bit flavor is the highest DNxHD setting (DNxHD goes by different names depending on frame rate, etc.). ProRes and ProRes HQ are both 10-bit. ProRes is also wrapped in .mov, so most other programs, like After Effects and DaVinci Resolve, recognize the media files. In addition, if your raw camera files are .mov files, you can bring them straight into your effects and color correction programs.

Enjoy and let me know if there is anything that can be improved along the way in order to make the process simpler! Let’s help out the community.

STOP WAITING FOR RED TO DEMOCRATIZE FILMMAKING… IT’S ALREADY HAPPENED
Mon, 04 Oct 2010
Everyone is up in arms these days over the announcement that RED isn’t targeting the prosumer crowd and that the Scarlet will be more expensive. I think we all need to take a step back and realize what’s going on.

First off, what RED has done so far is amazing. The RED ONE is simply an amazing camera at an even more amazing price. We all hoped the Scarlet would be THE camera to democratize filmmaking, so that any teenage kid with a rich daddy, or anyone saving their hard-earned pennies, could afford a tool that is truly (for pixel peepers) on par with Hollywood equipment. Stop right there – it was a nice fantasy…

See, there’s a reason equipment costs so much. RED, even with the price increase, doesn’t seem to be charging even for the R&D of their products. Cameras like this usually cost at least $60K. I mean, a camera that records RAW at 5K (or even 3K) on a 2/3″ sensor – name one that does that for under $60K, let alone under $10K. In fact, is there even another camera out there that records 3K or 5K that we could afford without selling our house and our firstborn? Even at this price point, we should STILL be grateful RED is letting us have this equipment at all. It really is a gift.

Look, I’m not a RED fanboy; frankly, I’ve grown sick of hearing people talk about what they’re working on. That’s not to say I hate the company, because I don’t – they’re doing amazing things. What I’m sick of is hearing people talk about buying a camera that isn’t finalized when they don’t even have a script to shoot. For me, I’ll worry about the camera after it ships.

I’m not saying you shouldn’t be mad. If anything, maybe you should be mad that RED said something and didn’t “deliver” – but then again, they ALWAYS had the disclaimer: “Things are subject to change. Count on it.”

Guys, filmmaking has been democratized since basically the DVX, or even before that. DVX films won best film and best cinematography awards at numerous prestigious film festivals around the country, and those who couldn’t afford to buy a DVX could always rent one. Nowadays we have cameras like the T2i that shoot 24p on a sensor roughly the size of a 35mm (Super 35) motion-picture frame, all for about $800. $800. Someone in high school can afford that on a part-time job if they’re willing to save for a few months. It’s not the camera holding you back anymore; it’s you or your script.

Don’t worry about the compression, etc., on an HDSLR or other decent camera; if you learn to shoot correctly with your tool, your audience is most likely not going to notice or even care. Put your energy into making a good movie.

Over the summer, I shot a commercial with a total budget of $30K. I could have bought a RED with that money, but I didn’t. I couldn’t justify the expense because it might never have paid for itself. As a business owner, I need my equipment to pay for itself rather quickly in order to profit. What’s my point? My point is…

We all want our backyard videos to look as good as possible, but don’t be mad at RED because you can no longer afford the Scarlet. Use what you can afford, because – and I mean this in the nicest way possible – if you can’t afford to use a certain piece of equipment on set, your project most likely doesn’t need it.

Stop waiting for RED to democratize filmmaking. That’s an excuse to not film something. Look at the tools we have now. It has already happened. Now stop waiting, go out there, and create something because the filmmaking world is moving ahead without you… and you don’t want to be left behind.

DORITOS COMMERCIAL “NICE CRUNCH” NOW ON YOUTUBE IN 1080p
Wed, 25 Aug 2010
Hi all. Just a quick update that our DORITOS COMMERCIAL “NICE CRUNCH” is now up on YouTube in 1080p. Check it out below, or WATCH IT ON YOUTUBE to see it in high-def.

I really am proud of what my actors did here. It has been said many times before and I’ll say it again… if you cast something well, 50% of your job as a director is done for you.

A LITTLE MORE EDGE
Sun, 08 Aug 2010
In a forum, I got a request for more information on our workflow for the HARDCORE EDGE commercial, so here is a modified version of my answer.

The sequence settings inside FCP were NTSC 24p. We shot on the Canon 7D, but the spot was destined for SD, so we originally built the comp as Widescreen Anamorphic 720×480 24p. The main network it was airing on wanted 4:3, so we switched, which caused a little issue with the graphics.

Here’s the process:

First, we shot on the 7D, so everything was tapeless, and we shot the spot in less than one day. We had to extend our greenscreen from 20 ft to over 30 ft so the actress could walk the distance the ad agency wanted in each environment. On set, we used AE CS5 and Premiere CS5 to check the perspective and see how the effects were coming out. We did not transcode to a different format; we simply used the H.264 files straight out of the camera.

This proved valuable, as we would be able to put together a rough composite and see how the effects were turning out all without wasting time transcoding.

We dedicated approximately 30 minutes between each of the four setups to test the takes we got. That was more than enough time because of the speed the CS5 suite gave us. To put it in perspective: the actors hit MASSIVE traffic (there was a HUGE accident) and arrived approximately 2.5 hours late, still needing to change and do makeup. We still broke for lunch for an hour, and we still finished just 30 minutes behind our NORMAL schedule. We work quickly and efficiently, so having the CS5 suite in the edit suite in the back fit right in with our quick work ethic. We hate waiting – I feel it’s true that waiting kills creativity.

When we went into post, we used Adobe Media Encoder to transcode the H.264 QuickTimes into ProRes files. This was so much faster (even without a CUDA card) than doing it in Compressor or FCP. Next, I brought everything into AE CS5 and started on the FX and building the piece. The AE comp was 720×480 Anamorphic Widescreen; the footage was square-pixel 1920×1080 24p.

Once the FX were done, I brought the final rendered piece (rendered out at 23.976, 720×480 Anamorphic Widescreen – more on this later – in ProRes 4444) into FCP. The timeline settings in FCP were NTSC 24p Anamorphic Widescreen. Here’s where some trouble started. The default codec for rendered files in that sequence is NTSC DV 24p. We did a little basic text animation at the beginning and end inside FCP, as well as adding the music and voiceover. Because the sequence settings were NTSC DV, Final Cut used the DV codec for the timeline preview files, and FCP seems to use those preview files for the final export. This caused a huge degradation in the quality of the text phrases that come up. Even without using the preview files, FCP still didn’t stretch the 720×480 anamorphic image with as good a quality as Premiere – we still saw lots of artifacting and jaggies in the text. I switched the sequence codec to ProRes but still got jagged edges on the text.

I tried Premiere CS5 with better results – a lot more acceptable, but we were still looking for better quality. So what I ended up doing was going back to After Effects, creating a new 720×480 4:3 comp, and nesting the 720×480 Anamorphic comp inside it (FIT TO COMP WIDTH, which auto-letterboxed). This gave us beautiful lettering with no jaggies. After Effects handled it perfectly, Premiere was a good second, and FCP a horrible third. It makes me wonder what’s going on inside FCP’s scaling algorithm – and what other bad things are happening under the hood.
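For anyone trying to reason about that auto-letterbox step, the geometry works out neatly if you use the simple 640×480 display model for NTSC DV (pixel aspect ratio 8/9 – an assumption on my part; broadcast specs quote slightly different values). A quick sketch:

```python
from fractions import Fraction

def letterbox(frame_w, frame_h, frame_par, content_dar):
    """Rows occupied by the picture vs. each black bar when content with
    display aspect `content_dar` is fit to width inside a frame whose
    pixels have aspect ratio `frame_par`. PAR stretches only horizontally,
    so the vertical line count is unaffected by it."""
    display_w = frame_w * frame_par        # frame width in square pixels
    content_h = display_w / content_dar    # displayed height of the content
    bar = (frame_h - content_h) / 2        # remaining lines split top/bottom
    return content_h, bar

# 16:9 content fit into a 4:3 NTSC DV frame (720x480, PAR 8/9):
rows, bar = letterbox(720, 480, Fraction(8, 9), Fraction(16, 9))
# -> 360 picture lines, with 60-line bars top and bottom
```

Under that model the 16:9 picture lands on 360 of the 480 lines, and any scaler that doesn’t filter those 360 lines cleanly (as FCP apparently didn’t) will show exactly the jaggies we saw.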

Once I had the 4:3 version, I imported it into Premiere and finished the spot in Premiere CS5. The majority of the work was done in FCP, but the final finishing, at high quality, was done in Premiere (with help from AE for the 4:3 conversion).

That’s basically it. If I forgot anything or anyone has any questions, I’d be glad to answer them!

We just finished up post on a commercial called HARDCORE EDGE. The commercial’s concept was from the ad agency. My company, TRIPLE E PRODUCTIONS, was brought in to execute the idea. You can view the commercial below as well as the BEHIND THE SCENES. The BEHIND THE SCENES is pretty cool, as it reveals some of the tricks we used to accomplish the concept behind the spot. Enjoy!