Monday, March 30, 2009

I was stumped by a good photography question over the weekend. I was asked why a series of tiny rings developed in the center of a digital photo during long exposures of the northern lights (aka the Aurora Borealis). I had seen similar distortions years ago while scanning some large-format negatives with a flatbed scanner - but that didn't give me much to go on, except that the problem seemed to be the result of reflected light or some concave distortion.

Here's one of the images I was shown. It's hard to see the distortion as this is compressed like crazy for the web, but I assure you... there are some rings center frame:

Here's a close-up of the distortion:

After scouring the web for any clue as to what could possibly cause a problem like this, I came across a forum on DPReview. It seems as though the problem is caused by light reflecting off of an inner lens element back towards the UV filter. The rings are actually that light reflecting off of the UV filter's inner surface. The distortion itself is generally known as Newton's Rings.

The simplest solution is to remove your UV filter while shooting the aurora, although a really high-end filter (B+W, Leica) may also eliminate the problem. Some even recommend removing the UV filter whenever shooting at night, no matter what the subject matter is.

For you math nerds out there, here's the formula for why it happens:
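The original formula image hasn't survived here, but the standard Newton's rings relation (for the dark rings seen in reflection, with the thin air gap between filter and lens element playing the role of the classic lens-on-flat-glass setup) is:

$$r_m = \sqrt{m \, \lambda R}, \qquad m = 0, 1, 2, \ldots$$

where $r_m$ is the radius of the $m$-th dark ring, $\lambda$ is the wavelength of the light, and $R$ is the radius of curvature of the reflecting surface.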

Yep, that pretty much explains it.

So in short: if you're shooting Aurora, remove your UV filter. A little knowledge that just might come in handy when you least expect it.

Saturday, March 14, 2009

If you've been following our posts on the making of Condition:Human, then you've witnessed some of the awesome possibilities that green screen provides to filmmakers. Green screen, or chroma-key green fabric, is the best way to merge layers of footage and graphics onto a single video timeline. Add in some 3D scenery, advanced CGI, and photorealistic texturing, and you've got yourself a pretty fine-looking film.
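At its core, chroma keying is just a per-pixel decision: wherever the foreground is "key green," show the background instead. Here's a minimal sketch in Python (the function names and the hard threshold are my own simplification - real keyers compute soft mattes with feathered edges and spill suppression):

```python
def chroma_key(fg_pixel, bg_pixel, threshold=60):
    """Return bg_pixel where fg_pixel is 'green enough', else fg_pixel."""
    r, g, b = fg_pixel
    # A pixel counts as key green when its green channel exceeds both
    # red and blue by the threshold; a hard cut like this gives jagged
    # edges, which is why production keyers blend with a soft matte.
    if g - max(r, b) > threshold:
        return bg_pixel
    return fg_pixel

def composite(fg_row, bg_row):
    """Composite one row of foreground pixels over a background row."""
    return [chroma_key(f, b) for f, b in zip(fg_row, bg_row)]

foreground = [(0, 255, 0), (200, 50, 40), (10, 250, 20)]  # green, actor, green
background = [(120, 120, 255), (1, 2, 3), (90, 90, 90)]
print(composite(foreground, background))
# -> [(120, 120, 255), (200, 50, 40), (90, 90, 90)]
```

The green pixels drop out and the background shows through, while the "actor" pixel survives untouched - the same idea, scaled up to full frames, is what the editing software does on the timeline.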

Green screening offers huge potential for low-budget filmmakers, as it allows virtual worlds to be created around their actors at minimal cost. Set design, prop building, location fees, lighting, and crew are just a few of the overhead expenses that can be minimized, if not completely eliminated from the budget. However, that's not to say that monstrous, expensive films avoid green screen - quite the opposite, in fact. So what kinds of things happen on multimillion-dollar film sets?

To answer this question, I want to provide some insight into two of my favorite blockbuster movies, and how green screen was used to create the types of settings and shots that were needed to achieve the director's vision.

1) Sin City
Quite literally, the entire stock of raw footage for Sin City was shot using green screen. Every shot was set up and filmed with a much greater vision in mind. The true magic of Sin City, in addition to the storyline, was performed by the post-production editors. Hours and hours of painstaking 3D scene building, layering, and editing were required to achieve a stunning result. In the end, the visuals in the film took the graphic novel to new heights while managing to retain Miller's original look and feel. Let's have a look at some actual raw footage:

If you can't remember what the edited scene looked like, check it out here.

Sometimes you just have to see how basic the raw footage is before you can appreciate how good the video editors and CGI experts who work on these films really are. If you want to see just how much green screening went on in the making of Sin City, check out this nine-minute clip of various unprocessed footage. It's definitely worth seeing.

2) The Matrix
The realization of the complex concept behind The Matrix still stands as one of the most ingenious filmmaking efforts ever. The Matrix unveiled new visual styles that set a new bar for CGI-based filmmakers around the world.

There are far too many scenes in the movie worth discussing, but some of the most mind-blowing have got to be the time-halting, panoramic pans. To achieve this look, a huge chroma-key green set was designed and then outfitted with both still and motion picture cameras. How many cameras? I don't know... I lost count. OK, fine: precisely 120 still cameras, plus two motion picture cameras.

Once the cameras were placed in accordance with the desired shot, they were hidden behind another green wall to make things easier for the editors. The final footage is actually a time-lapse merge of still photos taken in sequential order by the array of cameras you see above. Here's a shot of the scene during a take:

Everything in the set was custom-made by the designers for The Matrix. Because the panoramic pans were used many times throughout the film, the set was designed to be modifiable so that it could be reshaped in accordance with the footage required. The style became loosely known as bullet-time photography, as the scenes required dynamic camera movement around slow-motion events that approached the equivalent of 12,000 frames per second.
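The arithmetic behind the effect is worth a quick look. If the still cameras fire in sequence, the delay between neighbouring cameras sets the apparent frame rate of the sweep. A tiny illustrative sketch (my own function, just plugging in the figures mentioned above):

```python
def trigger_delays(num_cameras, effective_fps):
    """Seconds after the first camera at which camera i fires.

    Firing the arc in sequence at effective_fps yields a sweep that
    plays back as slow motion at that apparent frame rate.
    """
    return [i / effective_fps for i in range(num_cameras)]

# 120 still cameras, played back as 12,000 fps-equivalent footage:
delays = trigger_delays(120, 12000)
print(delays[1])   # gap between neighbouring cameras: 1/12000 s
print(delays[-1])  # the whole 120-camera sweep spans under 0.01 s of real time
```

In other words, the camera can appear to orbit an event that occupies barely a hundredth of a second - something no physical camera move could ever do.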

Here's a look at how the unprocessed camera footage appears when placed on a video timeline:

This technique requires a lot of time and precision, but in the end, it allows for a compelling and quite revolutionary end product.

So there's a glimpse into how green screen and chroma-key green sets are used to produce a final product. Green screening is best used when your work demands footage that is either out of budget and/or out of this world. Used wisely and creatively, the potential for green screen is virtually unlimited - as long as we're able to keep pumping out sweet ideas.

Friday, March 6, 2009

Up until now, I've focused primarily on 3D innovations that affect the photographic industry. Although some of these photographic innovations have dynamic properties that allow users to freely move throughout a virtual environment, it's important to note that the raw data used to build them is wholly static/still. Examples include: equirectangular photos, Photosynth, and photorealistic 3D models.

What we must understand is that photo-based technology is not the only medium charging down this path. Video technology, too, has improved to a point where spatial environments can be freely navigated by the end user. The main difference is that in order for immersive video to be produced, enormous amounts of information - literally petabytes - need to be stored, rendered, and allocated. Here's a compressed sample of the footage I'm speaking about. While the footage is playing, drag your cursor inside the video window to change your field of view:

To achieve this type of footage, eleven individual lenses are arranged in a dodecahedron pattern; thus, the camera is aptly named the Dodeca 2360 (it's also known as a telemersion camera). The device itself is produced by a company called Immersive Media. The recorded footage is seamlessly stitched in real time - similar to an equirectangular photograph.
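Since the stitched output is equirectangular, the drag-to-look interaction in the player boils down to mapping a view direction (yaw, pitch) onto pixel coordinates in the stitched frame. A rough sketch of that mapping (my own illustrative function, not Immersive Media's actual player code):

```python
def equirect_uv(yaw_deg, pitch_deg, width, height):
    """Map a view direction to pixel coordinates in an equirectangular frame.

    yaw_deg in [-180, 180), pitch_deg in [-90, 90]; (0, 0) looks straight
    ahead. A real player also corrects lens distortion and interpolates
    between pixels, but the core projection is this linear mapping.
    """
    u = (yaw_deg / 360.0 + 0.5) * width
    v = (0.5 - pitch_deg / 180.0) * height
    return u, v

print(equirect_uv(0, 0, 3600, 1800))    # straight ahead -> frame centre
print(equirect_uv(90, 45, 3600, 1800))  # looking right and up
```

Because the whole sphere is always present in every frame, "panning" is just reading a different window out of the same stitched image - no camera ever moves.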

The result is a 360° view of the camera's immediate surroundings. It is capable of recording 100 million pixels per second (higher than HDTV) at 30 frames per second. Omnidirectional audio is recorded via four built-in microphones, and GPS metadata is recorded as well. The Dodeca 2360 retails at about $45,000, which can easily climb to close to $100K with mounts and accessories.

With digital storage issues and overall costs like this, it makes sense that one of Immersive Media's biggest clients is Google. In fact, Google's Street View technology uses screen shots from this exact video footage. Immersive Media leads this operation and has already recorded video covering many tens of thousands of miles across the USA, with other countries currently in the works. There are five "street view" teams from IM, each gathering about 1,000 miles of footage per month. Volkswagen Beetles are outfitted with the high-tech gear because their body design is low to the ground and smoothly contoured, allowing for maximum video coverage.

Street mapping is not the only application for this technology. Here's a list of possible uses:

This technology may not be immediately relevant to photographers and videographers currently working in the industry, as the setup costs alone will likely keep curious onlookers at bay. However, it is extremely important to stay aware of innovations like these, as they are poised to revolutionize the media industry. Furthermore, it exists as yet another facet in the long list of world-mapping initiatives that employ experts in the areas of photo, video, and multimedia. In other words, it is vital in today's world to go beyond the still photograph if you expect to compete in the media market.

Wednesday, March 4, 2009

As you've probably already heard by now, Adobe has added new features for working with 3D images, motion-based content, and advanced image analysis into Photoshop CS4 Extended. Here we find yet another way that photographers can push the boundaries of standard photography and begin implementing photorealistic CGI into a variety of projects.

Just yesterday, we were looking at some of the advanced applications of this media merge in the marketing industry. Today I wanted to further touch upon this topic, and more specifically address how both amateurs and professionals alike can begin merging 3D models, textures, and photographs to achieve a pleasing final product.

Here's a sample image I recently exported using a combination of Google Sketchup and Ps CS4 Extended:

1) To keep it simple, I just downloaded a 3D model of an iPhone from Google's 3D Warehouse. Alternatively, for more advanced readers, you may choose to build your own 3D model in Sketchup, 3D Studio Max, Maya, or any other 3D-based software program. Photoshop has some built-in templates, but they are extremely basic.

2) If you decide to download a model from Google, you'll likely receive a .skp file, which Photoshop will not recognize. Open the model in Sketchup first and export it as a .kmz file. This file type is your only export option with the free version of Sketchup, but it plays well with Ps CS4, so it works out fine.

*Please note: the advanced 3D editing capabilities of Photoshop CS4 run primarily on the video card's GPU, so you may have trouble opening some files properly (collada, kmz, obj, etc.). A list of video cards that work with Photoshop CS4 Extended can be found here.

3) Open Photoshop, create a new file, and select "New Layer from 3D file..." from the 3D drop-down menu. Locate your .kmz file and open it. If you've never worked with 3D files in Photoshop before, you might benefit from a great intro tutorial by Command Shift Q. Also, it might take some time to open if you're running a slow computer.

4) Personally, I was a bit disappointed by the low quality photo texturing that was used in the downloaded model. I suppose beggars can't be choosers, or can they?

Ps CS4 Extended will actually locate and list the textures used in the .kmz file and allow you to change them. I had some high-res front and back shots of an iPhone already, but if you don't, you can probably lift some off the web while you're just learning. Just double-click on the appropriate texture (located in the layers panel) to open it up in a new window. Open your improved textures and drag them into the texture window. The resolution of the texture layer can be resized as required; just make sure you give your new layer a fine-tuned selection and a close crop.

5) Once I retextured my model, it was just a matter of rotating it and adding some lights. The 3D object can now be layered over any 2D background you like. I added a false depth of field to mine for a more realistic look - read this if you don't know how. The base layer was created in Photoshop, texturized, and adjusted for realism. I added a light monochromatic noise to the entire image afterwards for consistency.

Having a 3D, photorealistic model can be extremely beneficial - especially a model like the iPhone. iPhone applications are being developed like crazy, and all kinds of software development companies will want advertising material created. Having a stock of 3D models will allow you to take jobs without even getting your hands on the product in question. Furthermore, you can avoid time-consuming studio setups, and reposition the model as needed without having to reshoot.

Animations can also be exported using your 3D models, as the animation controls have been upgraded dramatically. The layout and operation are very similar to the Adobe After Effects timeline. If I see a need, I'll speak further about this functionality and how it blends with other forms of media.

Tuesday, March 3, 2009

How do you categorize all of the visual forms of creation that exist today? Simpler times have come and gone - times when painters were painters, and photographers were photographers. The rigid boundaries that once separated artistic mediums have become muddy and unclear. For instance, where now does photography end and video begin?

In the pursuit of new visuals and designs, artists have had to broaden their scope and incorporate alternative forms of media into their work. Thus, multimedia is alive and well. It's hard not to blame computer technology for this blending of mediums (though perhaps "blame" is the wrong word, since multimedia is neither bad nor wrong). Nevertheless, it has simply become too easy to digitize everything. Computer-generated images, too, have grown so lifelike that it can be hard to tell them apart from actual photos and videos.

Frank Moldstad states in his article All Hail the Renaissance that "CGI is not replacing photography, though. Rather, it joins photography as another tool in creating imagery that is given a final enhancement by digital retouchers. Indeed, photography and CGI are becoming more intertwined, as many images are a combination of photography (texture, backgrounds, HDRI lighting) and CGI elements. The photographer’s eye is needed more than ever, whether he or she is directing a live or computer-generated shot."

I can't help but agree, as the visual rules and aesthetic tastes that photographers build over time are extremely advantageous when working in a number of different mediums.

Let's have a look at a CGI/photo composite created by the inspiring and talented Simon Plant in conjunction with J3D Imaging. Here, photography exists as the base medium, but 3D modeling, photographic texture, and CGI are added to create a stunning end product. The process can be seen here.

In productions like this, the photographer's role becomes part of a multi-tiered process, but continues to be vitally important in the achievement of the final image.

Photography exists as a bridge to historical mediums like painting and drawing as well. Artists who are pushing to reveal a new look and feel in their creations can now utilize traits of a wide variety of mediums. For example, the extremely popular Khoda, recently uploaded to Vimeo by artist Reza Dolatabadi, incorporates over 6,000 paintings that were digitized and laid out on a video timeline. Check it out:

Essentially, the role of the photographer is changing. However, an understanding of lighting, reflections, light temperature, framing, and editing will keep photographers vital to the creation of complex, multimedia images. Furthermore, it is becoming increasingly important that all innovative photographers start learning more about other mediums and how they might blend with their own. I believe it will become common practice to employ photographers with a keen eye and a generous media knowledge to direct large-scale multimedia projects.