Recent Blog Posts in 2010

Last week I dropped in to Createasphere's Reducation, the week-long immersive training course taught by Red's leader of the rebellion, Ted Schilowitz, and OffHollywood's founder/CTO, Mark Pederson. Schilowitz has always had an infectious, dynamic energy. His smart enthusiasm combines the excitement of a kid constantly being handed really cool toys with the intelligence of an engineer who can quote you the science behind how they work. This was the 7th Reducation, and it was held for the first time at Red Studios in Hollywood.

"The original intent," says Schilowitz, "was to give training that came directly from the Red company." The comprehensive curriculum, divided into the broad categories of Red Tech and Red Post, covers everything from the detailed workings of the Red camera to the intricacies of post production. One of the dramatic highlights of the class is the ability to shoot footage with the Red One and then see your dailies projected in 4K on the super bright (21K lumens) Sony SRX-T420 projector on a 40-foot screen. This year included an in-depth demonstration of the Epic camera and an entire day devoted to shooting 3D with Red.

"We're a learn as you go dynamic," continues Schilowitz. "The technology moves so fast that pretty much every class we have something to talk about."

One of the intriguing components of the class is the demonstration of third-party Red-related hardware and software such as Element Technica's Quasar 3D system, Lightiron's OutPost system, the Foundry's Storm and Nuke software, R3D Datamanager and Qtake HD among many others. Red Community Day, which happened in the evening after normal classes, was like a mini-trade show featuring all kinds of accessory camera and post production gear. This greatly enhanced the in-class instruction by underscoring the fact that Red is not just a camera but an entire technology system.

Schilowitz says the three "take aways" he wants his students to walk out with are: "Learn how simple and user friendly the camera is; learn how logical the post production options are and how many of them there are; and know that what we are talking about in class is not the only way to do things but it's what we believe, and the experts in the room believe, are the best practices."

To Be Continued: Stay tuned for more blogging about breakout components of Reducation, the Epic camera and shooting on Spider-Man 3D.

UNIVERSAL CITY, CA — Last week, Universal Studios introduced its new virtual filmmaking stage, Universal Virtual Stage 1. Located on Universal’s backlot, Stage 36 has been transformed into a state-of-the-art digital production studio. It’s a turnkey digital production space that features a 40x80-foot greenscreen cyc with embedded motion capture sensors and ceiling-mounted camera tracking markers. In addition to the stage, the facility features a conference room with CineSync and RV software for global collaboration and six artist workstations running Autodesk’s Maya and MotionBuilder attached to 60TB of storage. Each room features a live feed from the nearby stage.

The Virtual Stage’s highlight is the realtime interactivity it allows directors. It works like this: You load a low-poly model into the software or use pre-built models such as Universal’s own revamped New York Street backlot location. The real camera is then synced to the virtual camera in the 3D scene. Next, bring in the talent and go. They’ve married tons of complex technology — camera tracking, motion capture, and realtime keying — into a turnkey solution for filmmakers.

The stage is pre-rigged to work with a Red One digital camera and lenses. For the demonstration, the camera was mounted on a calibrated 24-foot Technojib camera crane. Other cameras and lenses may be used, and the stage can be configured to work with multiple cameras as well.

BURBANK — What was once called the “HD Expo” was recently re-branded the Createasphere Entertainment Expo. It doesn’t quite roll off the tongue as a tradeshow brand, but the name does manage to get away from that passé “HD” thing into a catch-all for cinema technology in about any form available for tradeshow fare. In fact, Createasphere was something of a miniature NAB, with many of the favored movers and shakers of the cinema entertainment trade. Sony, Panasonic and Canon were on the floor and had a ready presence at seminar “intensives” meant to enlighten and persuade. Notably missing from the mix were familiar industry standards such as Arri, Silicon Imaging, Red Digital Cinema, Adobe, Avid and Autodesk (although Adobe did sponsor a Canon-focused HDSLR intensive), among others.

So how did the show do as a snapshot of the cinema culture?

I confess I didn’t get to see every intensive or visit every booth. I did manage to hit most of the high spots, and I can say the show made for an appealing picture of what top suppliers wanted media makers to know and, maybe more vital, what they wanted to keep to themselves.

In the high-end 3D space, the Kerner Group showed off a camera rig composed of two prime-level zooms rigged together for a simpler take on capturing 3D, for anything from studio to indie producers, directors and DPs. Fuji, with its impressive elite lens lineup, was there, and for post production AJA showed a trove of improvements to an already solid mid- to high-end workflow, from the Kona 3G to new deck and panel options. Ditto for Blackmagic Design, with its novel DeckLink and DaVinci lines, and for an innovative gaggle of others.

On the cinema camera front, show goers were treated to the wake of Canon’s astounding (and unintended) HDSLR 5D revolution, along with rivals such as Panasonic with its new AG-AF100 and — at the high end — Sony with its world-class SRW-9000 camera.

At a base body price of about $6,000, there is little doubt that Panasonic, with its Micro Four Thirds-chipped AG-AF100 cam, was looking in the rear view at the Canon 5D wunderkind. The show was a bit ironic for Canon, since it was not stressing its 5D camera on the floor. Instead Canon presented its three-chip 1/3-inch CMOS pro fixed-lens effort, the $8,000 XF305 camera, along with an announcement of a single-chip 1/3-inch CMOS XF105 fixed-lens camera in the $5,000 range. Both are “long depth of field” cameras, so they won’t have that look many indie filmmakers have learned to love about the Canon 5D. However, both Canon cams have pro post features, including extended dynamic range (about 10 stops), 10-bit output to popular post packages (Adobe and Apple), XLR pro audio, genlock timecode, SDI support, etc. And Canon EF lenses can be used by way of a Redrock adapter. Obviously these are fine cameras that are not ideal for all indie filmmakers. Just as obvious, this latest round is not the last word from Canon — especially in regard to a wider indie cinema base.

At its price, the Panasonic AG-AF100 is a serious contender, with industrial audio, interchangeable lenses, robust imaging, HD-SDI and the “cine”-style depth of field performance that so many indie filmmakers clamor for. However, as interesting as this effort is, with its Micro Four Thirds chip, depth of field is not as narrow and finely controllable as on full-frame 35mm-chipped cameras. Note: beyond the front-end feature sets, all these Createasphere show cameras have their own unique post production paths that will be the major stumbling block or blessing for cinema makers. (A hot topic for future blogs and stories here and everywhere.)

Of course, at about a $72,000 base price and with its 12-stop plus dynamic range added to a host of utterly professional studio features, it would be a heavy stretch to say the new Sony 35mm SRW-9000 wonder-cam is a response to any HDSLR (a rival to the brilliant Arri Alexa cam would make more sense).

But could the same be said of the new Sony camera that didn’t make it to the show?

Sony’s latest effort is a professional-grade cam with interchangeable lenses, pro audio, HD-SDI and planned 10-bit post support into Avid, Apple FCS and more. At $16,000 base for the camera body, the Super 35mm-chipped Sony PMW-F3 was announced days after Createasphere. “If you have a first-unit F35 or SRW-9000, this would be a perfect second camera,” notes Peter Crithary, Sony marketing manager for production.

Maybe.

But the Sony PMW-F3 could also be seen as a pre-emptive move on what Canon no doubt has up its sleeve next year via a keenly anticipated and real cinema follow-up to its unplanned indie 5D revolution. And at these prices the new smaller footprint cams are targeted for an indie purchase market — not so much for the tony rental market.

In other words, the indie camera and post production scene is about to be shaken up harder with more cinema quality choices than anyone could have suspected a few dog years ago. That’s all to the good for struggling and strapped indie filmmakers and for established pros alike. And as we’ve all seen, true cinema competition is a healthy thing at a place made for surprises.

Bumped into Martin Pilchner in the W hotel bar Friday night and ended up spontaneously hanging out with Francis Monzella, Martin Pilchner and Chris Pelonis. We joked that if Carl Yanchar and John Storyk dropped in, we'd have a quorum of studio designers!

Missed Phil Ramone at the show... did anyone see him? Al Schmitt and Elliot Scheiner were joined at the hip all weekend. Chatted with Chuck Ainlay about coming to hear our ZR Acoustics studio design tech at the new Universal Mastering Studios. Mandy Parnell from London was also roaming about checking out the latest in gear. Said she mastered an album that morning and still made it to the show.

Undertone Audio [UTA] has a new, custom-built console with VU meters, metal isolation and phase reverse, and in/out capability on each band of each EQ. The designer is very cool and, despite not being sure how much to charge for it, has what looks like a very interesting product. Very curious to hear one in a real studio setting.

Bob Clearmountain made a humble heartwarming speech thanking everyone who ever made a great record and said that he would continue to strive to do better as everyone in the room had made better sounding albums than he. His humility and true love of music and recordings touched everyone to the core and reminded us all why we started down this path. Thanks Bob! Ray Dolby unfortunately did not make it to the event.

Thanks to Maureen Droney and everyone who helped put on this lovely yearly event.

Arri Alexa at Local Hero Post with Scratch and Moviola Digital Arts Institute with Autodesk Smoke for Mac

Last week I was invited to an open house at Local Hero Post, a boutique post house in Santa Monica, CA. The draw was to see the Arri Alexa camera and to experience their DTE (Direct To Edit) workflow in person.

Stephan Ukas-Bradley, Product Manager, Digital Production at ARRI, led us through the first part of the presentation. He showcased the Alexa camera’s beautiful images and elegant post workflow. The camera records a variety of flavors of Apple’s ProRes (including ProRes 4444) to an onboard SxS card. This media carries all the metadata for a later conform to ArriRaw, Arri's data format for finishing in the highest possible quality. Earlier in the evening he demonstrated for me how the SxS card mounts through the ExpressCard/34 slot on a 17” MacBook Pro. He then opened the card on the Mac, found the camera-generated XML log of all the captured footage, dragged and dropped that file into FCP, relinked the footage to the XML, and was ready to edit.

The next part of the presentation was guided by Local Hero founder and lead colorist Leandro Marini. He was able to access the Arri Alexa ProRes footage in his Scratch system and begin grading it directly, without any transcoding. Since the Alexa can record the ProRes footage in Log C, and the camera does an intelligent bracketing of the image during capture (similar to an HDR image), there is tremendous range in the files. For example, there was footage shot in bright sunlight where the clouds in the sky appeared to be blown out, but with a few adjustments, Leo was able to reveal detail in the clouds. Arri says the camera has a dynamic range of 14 stops, and from what I saw in the demo I believe it.
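That highlight recovery is a property of log encoding itself, not magic in Scratch. As a rough illustration — using a generic log curve for the sketch, not Arri's actual Log C transfer function, whose constants I won't guess at — here is how a log encode keeps bright scene values distinct where a straight clipped-linear encode would flatten them to white:

```python
import math

def log_encode(x, a=200.0):
    # Generic illustrative log curve mapping scene-linear [0, 1] to code [0, 1].
    # NOT Arri's actual Log C formula -- just the same basic idea.
    return math.log(1 + a * x) / math.log(1 + a)

# Two bright cloud values above "diffuse white" in scene-linear terms:
clouds_a, clouds_b = 2.0, 4.0

# A straight linear encode that clips at 1.0 records both as pure white...
lin_a, lin_b = min(clouds_a, 1.0), min(clouds_b, 1.0)
print(lin_a == lin_b)  # True -- the cloud detail is unrecoverable

# ...while a log encode (normalizing so an assumed sensor max of
# 16x diffuse white maps to code 1.0) keeps them as distinct code
# values that a colorist can later pull detail out of.
max_scene = 16.0
log_a = log_encode(clouds_a / max_scene)
log_b = log_encode(clouds_b / max_scene)
print(round(log_a, 3), round(log_b, 3))  # two different, recoverable values
```

The 14 stops Arri quotes are essentially this: the ratio between the brightest and darkest distinguishable values the curve can squeeze into the file.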

Last night the Moviola Digital Arts Institute in Hollywood hosted an event sponsored by ProMax to demonstrate the Arri Alexa post workflow for its newest product, Autodesk’s Smoke for Mac.

The Institute’s Director of Education and Production, Johnathon Amayo, hosted the event and introduced Stephan from Arri to show the Hollywood peeps the features of the Alexa camera. Johnathon followed Stephan with a demonstration of the DTE FCP workflow I described above, and also showcased the Avid Media Composer 5 AMA (Avid Media Access) workflow by opening the SxS card inside Avid through AMA and linking to the files on the card to start editing.

The evening’s program was capped off by Autodesk Master Trainer Sibille Cooney showing how Autodesk Smoke for Mac is also able to link to the Alexa ProRes media directly, with no importing or transcoding necessary. Autodesk is a partner in the ArriRaw SDK development team, and in the latest release at IBC, Smoke 2011 Extension 1, it now supports the ArriRaw format natively. Sibille then showed the ease with which Smoke deals with log material. Long a standard of high-end finishing, Smoke supports 1D and 3D viewing and processing LUTs. In practice this means no conversion or color correction is necessary to properly view log-based material. Just toggle a button in the player to turn on a viewing LUT and off you go.
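Conceptually, that viewing-LUT toggle is nothing more than a per-pixel table lookup applied on the way to the monitor. Here's a minimal sketch in Python, with a tiny made-up five-entry table standing in for the 1D LUTs Smoke would actually load (real ones have hundreds or thousands of entries supplied by the camera maker):

```python
# Hypothetical contrast-restoring curve: flat log code values in,
# display-ready values out. Five entries keeps the math visible.
lut = [0.0, 0.05, 0.2, 0.55, 1.0]

def apply_1d_lut(code, table):
    """Map a [0, 1] code value through the table, interpolating linearly."""
    pos = code * (len(table) - 1)
    i = min(int(pos), len(table) - 2)
    frac = pos - i
    return table[i] * (1 - frac) + table[i + 1] * frac

# Mid-grey in log space gets pushed down toward display grey,
# restoring contrast for viewing without touching the media itself:
print(apply_1d_lut(0.5, lut))   # 0.2 with this toy table
```

A 3D LUT works the same way, just with a three-axis table indexed by (R, G, B), which is what lets it handle color crosstalk rather than tone alone.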

Both events were well attended and it’s always great fun to stay on the cutting edge of the rapidly changing world of post with good friends and free booze. See you at the next one!

I took part in Autodesk's offer of free hands-on training for Smoke on Mac at this year's IBC. It is a great way of showcasing software that has always looked a little daunting to Avid or Final Cut Pro users. Now that the price has decreased significantly, their demographic has changed as well, and this is a good example of trying to reach that new market.

About five workstations were set up running Smoke on Mac, all facing a projector on which the tutor leads you through a couple of projects covering keying, tracking and 3D text, all through a speaker system and headphones, which are essential to overcome the din of Hall 7.

It proved to be just the right amount to understand a little more of how Smoke works and how it could be useful. I use Avid DS on a daily basis and grow ever more frustrated with Avid's disinterest in the product and the lack of major releases at each broadcast event. With every month that I get closer to deciding whether to stick with DS or jump ship to Smoke, I am afraid to say I lean towards the latter.

Smoke's tracker is fantastic. Its keyer is effortless. It has true 3D compositing and the ability to import 3D models and add lighting and textures. It is everything I want Avid DS to be, just with a strange interface.

My DS has a stay of execution while I wait for Smoke to play nicely in an Avid Unity environment (it can only handle the HD flavor of Avid's DNxHD codec — not the SD MXF), and I hope that with a cheaper software price point and possibly more sales, Avid might once again see reason to invest in the R&D of the product. For my part, it has until NAB next year, else it is back to compositor school for me!

Once upon a time in the land of Post Production there were simple format choices. When I started work as an edit assistant, there was only Digi-Beta (maybe some old Beta-SPs kicking around somewhere) and occasionally some Panasonic formats. These days we need to consider a whole host of file-based formats, especially as they all seem to require a slightly different workflow.

This is why I am seeing more and more post production people walking around the camera halls at IBC. It is now as important to stay abreast of the happenings in Hall 7 (where most of the post vendors are located) as of Halls 8, 9, 10, 11 and 12 (camera and other production equipment). Knowing what is happening with cameras always gives you an idea of what to expect in a job and how to deal with it.

This year the Arri Alexa was quite prominent in its bid to reclaim some ground lost to the Red camera. Speaking of which, those clever people at Red again managed to generate hype without even having a stand at IBC. Red cameras could be seen at most stands, and Ted (the leader of the revolution!) was seen a few times brandishing the much-anticipated Epic camera.

Canon seems to have stolen a march on Sony with the XF105 and XF100 handheld HD cameras, which shoot at 50Mb/s and are BBC-approved for acquisition — trumping the EX range, which is still struggling for broadcast approval while shooting only 35Mb/s. Expect to see their 4:2:2 MPEG-2 MXF media coming to an editing system near you soon.
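For a sense of what those bitrates mean in practice for ingest and storage planning, here's a quick back-of-envelope conversion (assuming the quoted figures are video-only, and using decimal gigabytes):

```python
def gb_per_hour(mbps):
    """Gigabytes of media generated per hour at a given video bitrate in Mb/s."""
    return mbps / 8 * 3600 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

print(gb_per_hour(50))  # 22.5 GB/hour for the 50Mb/s Canon XF codec
print(gb_per_hour(35))  # 15.75 GB/hour for the 35Mb/s EX range
```

So a day's shooting at 50Mb/s still fits comfortably on a single modest drive — the broadcast-approval difference matters far more than the storage difference.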

The trend is now definitely underway for software-only versions of the tools we know and love. Avid have been playing with a software DS for a few years, culminating in version 10.5, announced at IBC this year, being available as software only for $10,000.

Smoke on Mac is now in its second version (2011), available for $15,000, and with Blackmagic's acquisition of DaVinci, the grading software Resolve is now on offer for only $825 (and no, there aren't any missing zeros there). This is a company whose grading systems once ruled all others and cost the same as a three-bedroom house — now available, all in one box, for $825!

What has caused this trend towards low-cost software solutions? It might be that customers have discovered over the last few years that these high-end post manufacturers were either rebadging hardware bought from other manufacturers or making video boards that weren't quite as good as those from third parties like AJA or Blackmagic.

With budgets going south year after year, paying less for a system would appeal to any facility owner, and these software-based systems do appear to be cheaper, although not drastically so. If you spec a system to run Avid DS or Smoke on Mac and want it to perform as well as the old hardware solutions did, don't expect to pay less than $30,000.

A colleague of mine has been talking about editing online for years. Sitting behind my fully featured Media Composer watching examples of limited and slow online editing has never really got me excited. What I saw on Avid's stand this year at IBC did.

Avid were showing a technology preview of their Web-based editing software. There is still much to be decided with this product but essentially low res proxies are loaded up to a cloud and then from any machine, as long as it has Flash player, users can log in, view and edit the footage in an interface much resembling Media Composer.

Many different features were shown, such as auto-updates back to the craft editor with comments and notes when the Web-cut sequence is opened up in a fully featured hardware edit station. Even an iPhone application for viewing and making notes as well as basic edit changes was available. Keep an eye out for an advanced iPad version!

So what might this mean? The possibilities are endless and productivity could go through the roof. But fellow editors, a word of warning... when you are sitting at home, relaxing, don't be surprised to get the odd phone call from a producer or director asking if you can make a quick change to the edit you were working on earlier in the day!

Check out the same preview from this year's NAB here: http://fp.avid.com/fpcache/podcasts/events/NAB/EventsRewind_2010-04-12_Web-Based-Editing-NAB.m4v

I had the honor of doing a presentation for Avid at IBC this year and travelled to Amsterdam with a head full of facts and quips, as well as the anticipation (and nerves) of a 14-year-old boy whose band was about to play in front of school assembly for the first time. Little did I know that my performance for Avid, talking about the new features of Media Composer 5, would be in front of 600+ Final Cut Pro fanatics at the Final Cut Pro Users Group Supermeet 2010. Now substitute the hall full of school children for a hall full of Simon Cowells (Hello Mr Anderson — Matrix style!)

I might be exaggerating a little bit. The user group guys were a great bunch, and as a Final Cut and Avid editor it wasn't the horror show I was expecting. My Avid co-presenter Michael Krulik did a great job of getting them laughing, and thankfully there were no Avid crashes!

So, I hear you ask, why was Avid presenting at the Supermeet? Avid have done a great job of listening to their customers in the last couple of years and implementing some of the requests that have come from that. With Avid Media Access, or AMA, they are able to offer features that Final Cut users have always cited when arguing the merits of FCP over Avid.

Media Composer can now accommodate editors whichever way they like to store and use their media. The Smart Tool is another example of a feature that has been available in FCP (as well as Avid DS) but has now found its way to Media Composer.

Will this turn more Final Cut Pro editors to Avid? Maybe. With competitive pricing for the software, educational establishments now have more options, and with more students leaving school with Media Composer in their bag of experience, that can only be good for Avid.

More interesting is what Apple's plan for Final Cut Pro is. Like many of their products since the astronomical success of the iPhone and iOS, Final Cut Pro news has been a little thin on the ground. They have made no secret of this, even stating that the full operating system, OS X, would see fewer major updates than has been the custom in the last five years — and why should they? It wouldn't take an accountant to realize they make far more money from the iPhone.

It seems Avid has now picked up the ball it dropped a while back, and with Media Composer firmly seated back on the Mac platform, I wonder how much we really need Final Cut Pro. After all, Apple bought it because the relationship between Apple and Avid got so bad that Media Composer became a Windows-only application. It is hard for me to be objective — I am a Media Composer editor before I am a Final Cut Pro editor — but I just feel the future looks brighter with a company dedicated to television and film solutions rather than a company dedicated to mobile phones.

What an incredible show! I spent 2 days wandering around the floor and was amazed by the diversity of products, ideas and technology being showcased at this great conference and city.

For myself, this was unlike any conference I have ever attended and well worth the 10-hour flight and jetlag. The city of Amsterdam is a great host and was more than welcoming, but as expected — things are a little "different." I have never passed Coffee shops in the US that were wafting out that particular "roasted coffee" smell — well except for certain shops on Haight Street in San Francisco!

As for the conference: They should have issued everyone 3D glasses upon entry, because there were 3D stereoscopic technology and displays everywhere. Seriously, everywhere except for the bathrooms, and I bet they try that next year. One could watch 3D sporting events [women's volleyball and boxing] and various live camera set-ups — the Arri booth even had a live tap dancer for its realtime 3D camera demo. Some of the products were great, others were still works in progress.

The viewing conditions played a big role in the stereoscopic success. I noticed that if the displays were catching any environmental reflections, the convergence was thrown off — well, not the convergence specifically, but rather the flatness of the reflections intruded on the stereoscopic image.

The 152-inch 4K professional monitor that Panasonic showed had me drooling. Wow! Wow! Wow! That is one sweet piece of viewing hardware. If there was a way I could have crammed it into my checked baggage, I would be so there.

Lots of LED lights were being marketed, and I can't wait for them to become more of a standard on sets. The power usage is minimal, and the heat that is NOT generated would make sets more enjoyable. Had a long talk with the Gekko LED group, and the possibilities are endless.

The geek in me fell in love with one specific product: the Steadicam for the iPhone! Time to buy one and make a movie. The price point is within acceptable range — in my opinion. Fun product.

The diversity of products would take a while to run down, and the lack of sleep is catching up with me. I would highly recommend planning a trip to Amsterdam for the next IBC Conference.

One problem faced by filmmakers using DSLRs and other cameras that record to memory cards is how to quickly offload the files so you can wipe the card and continue shooting. The problem is worst if you're a one-person operation: you can't shoot while you're offloading, and offloading to a laptop can take what feels like forever. Never mind the added bulk of the computer and drives that you're lugging around with you.

I have faced this dilemma myself, so I was curious what I might find at IBC.

Near the back of Hall 11 is Nexto DI, which was showing its currently shipping NVS2500 and a more professional, soon-to-ship version, the NVS2525. The NVS2525 is a small device sporting a P2 card slot as well as an SxS card slot. Inside the small enclosure is a 750GB hard drive and a rechargeable battery, and the front has a small display for menus and status.

The idea is you insert your camera's card — and BTW, CF cards are supported via a CF-to-ExpressCard/34 adapter that fits into the SxS slot — press a button, and the system offloads the card at up to 80MB/sec. You can even create an instant backup by attaching a portable eSATA drive to the NVS2525 and have the device automatically copy your camera's files to both the internal and external drives at the same time.

When you're done offloading and ready to move the files to your editing system, there are USB, FireWire 800 and eSATA connections on the side for offloading this handy offloader.
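If you're stuck with the laptop workflow in the meantime, the core of what the NVS2525 automates — copy to two destinations at once, then verify — can be approximated with a short script. This is a generic sketch, not Nexto DI software, and the volume paths are placeholders:

```python
# Copy every file from a card to two (or more) destinations and verify
# each copy against an MD5 of the original before you dare wipe the card.
import hashlib
import shutil
from pathlib import Path

def md5(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload(card_dir, *dest_dirs):
    """Mirror card_dir into each destination, checksum-verifying every file."""
    for src in Path(card_dir).rglob("*"):
        if not src.is_file():
            continue
        want = md5(src)
        for dest in dest_dirs:
            out = Path(dest) / src.relative_to(card_dir)
            out.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, out)           # preserves timestamps
            assert md5(out) == want, f"checksum mismatch: {out}"

# Example (placeholder volume names):
# offload("/Volumes/CARD", "/Volumes/master_drive", "/Volumes/backup_drive")
```

The checksum step is the part worth keeping: a copy that silently dropped bits is worse than no copy, because it gives you permission to erase the card.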

The NVS2525 will be shipping in Q4 2010 for $2700 in a bundle that will include an external rechargeable battery and an external portable 750GB eSATA drive.

For a while I've been interested in efficient ways to record long performances, like my daughters' dance company's two-hour showcases. I'm always working alone, so I don't have the luxury of an assistant who can help me continually swap P2 cards, offloading while I shoot. AJA's popular Ki Pro and the Convergent Design nanoFlash are well-known solutions to this problem, but I can't afford them for my charity projects.

Two new products shown at IBC have the budget producer in mind. AJA announced the Ki Pro Mini, a smaller version with a few fewer features than its larger cousin, but still packing SDI and HDMI inputs. Instead of recording to disk like the Ki Pro, the Ki Pro Mini writes its files to CF cards. And the Mini has two slots, extending your recording time and also making a card-swapping scenario possible. The Ki Pro Mini should be available in October for $1,999.

The other device I came across is the Atomos Ninja, coming soon from Atomos, which was started by some ex-Blackmagic Design guys. The Atomos Ninja is a $995 field recorder that looks quite different from the other devices I've mentioned above. Featuring a 4.3-inch touchscreen display, the Atomos Ninja records ProRes from an HDMI input to a standard 2.5-inch hard drive that you provide.

This is an interesting idea, since hard drives are just about as cheap as CF cards these days but their capacity is much, much larger, so you can record for hours and hours. The Atomos Ninja is expected to ship in December.

Next time I go to IBC I will remember the last 2 days and think carefully about how long I really need to be able to do the Exhibition justice. Arriving early Friday morning and leaving on Sunday meant I very quickly had to condense my visit and attempt to make a plan of attack. 13 halls of various technology and resellers, and I had approximately 14 hours. It seemed not unrealistic to do a quick sweep of the area, make a note of the stands and companies that caught my eye and return to them on the Saturday for more info — simple.

My first port of call was Hall 7, the Post Production Zone. This was for two reasons: The first being that I'm Head of Ops at the Clipstone office of The Mill in London — which is only three months old and entirely tapeless to boot — and I'm rather proud of it. So post seemed like a good place to start. Secondly, our sister company Beam (formerly Beam.tv) has its stand at IBC in Hall 7, which was rather exciting for all concerned; it looked very smart sat in amongst other post giants like Blackmagic and Avid, eagerly demonstrating the Beam retail aspects amongst all the services it offers.

I began to wade through the free blurb that I was handed at registration (events like this make me feel bad for trees) and attempted to draw up a bit of a hit list, but soon found myself still walking around hall 7 by the early afternoon — my great plan already failing spectacularly.

As well as generally trying to get an idea of what is new and exciting for The Mill, I very much wanted to find that one thing that would intrigue me and allow me to effortlessly blog for Post, talking fluently about its technological advances and how it will no doubt excite the creatives. It wasn't initially easy to separate the wheat from the chaff — many exhibitors I had never heard of — and I found myself making very quick decisions about whether to talk to them based on the look and feel of their space (again, nice one Beam — you can check out images of their stand at www.beam.tv).

One such company that I perhaps would have initially passed by, if it were not for the fact that it had an iPhone and iPad on demo (I'm easily swayed, I'm sorry), is Streambox. Founded in 1999 and based in Seattle, they specialise in a software-based platform for live and file-based video transport via IP; CNN use their software for video phone reporting, for example. Their new toy is AVENIR, an advanced version of one of their standard Streamboxes, designed for video streaming and delivery via bonded 3G/4G networks. It is a battery-powered portable unit that fits on a camera person's waist and takes in a live HD-SDI feed, which is then pushed out over IP via eight 3G and 4G wireless cards. Through the use of dynamic bandwidth negotiation it can do real-time delivery of HD/SD and near-real-time file-based delivery of the ACT-L3 QuickTime format. The AVENIR has two paths on which to transmit, so it can send live media to a decoder or media player as well as transmit files to a server or data centre for edit and broadcast.
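The "dynamic bandwidth negotiation" idea can be sketched roughly as a scheduler dealing packets to whichever bonded link currently has the most spare capacity. This is a minimal illustration only; the link names, rates and packet sizes below are my own invented assumptions, and Streambox's actual algorithm is certainly more sophisticated.

```python
def allocate_packets(packets, link_rates_kbps):
    """Assign each packet to the bonded link with the most remaining headroom."""
    load = {link: 0 for link in link_rates_kbps}
    assignment = {link: [] for link in link_rates_kbps}
    for pkt in packets:
        # pick the link with the lowest utilisation (bytes sent / capacity)
        best = min(link_rates_kbps, key=lambda l: load[l] / link_rates_kbps[l])
        assignment[best].append(pkt)
        load[best] += len(pkt)
    return assignment

# Hypothetical bonded cards with unequal measured throughput
links = {"3g_card_1": 1800, "3g_card_2": 1500, "4g_card_1": 6000}
packets = [b"x" * 1316] * 100   # 100 MPEG-TS-sized payloads
plan = allocate_packets(packets, links)
```

The greedy choice naturally sends traffic to each link roughly in proportion to its capacity, which is the essential point of bonding several uneven wireless connections into one stream.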

As with all these live-stream boxes and devices, I'd very much like to see it in action, but if it does what it says on the tin then it effectively renders OB (outside broadcast) vans redundant and makes external reporting far more cost effective. From a live TV point of view, your team can now consist of a single reporter/artist and a single cameraman, maybe even just one of them. It goes without saying that the cost ramifications would be considerable; however, I can also see it being used in a film or set environment. Beam was initially designed as a quick way of delivering CGI rushes back to Sir Ridley Scott as encoded files while he was filming on set in Malta for the movie "Gladiator," so why not stream the footage from set back to a post house so they can start work immediately, even if it is rough rotoscoping or prep? It also gives 2nd units a way of keeping up to date with multiple shoots, and the individual sets could (in theory) all be streamed to one place for the editor to work up an offline edit combining all the day's footage.

This is why I like IBC; it’s not just about the kit that is available, it’s also about what it will lead to. People far more clever than I will glue three of these companies together to create something even more impressive, and that’s when things get really interesting.

When Post Magazine's Randi Altman asked me to blog about IBC I was concerned about finding something cool and new to write about, because many of the show's gems are tucked away and off the beaten path, and the huge size of the expo can make them hard to find. But yesterday, during a wonderful chat at The Beach with my old friend Knut Helgeland of Toxic in Oslo, Norway, he shared his excitement over Imagineer Systems' forthcoming mocha Pro. And I'm glad he did, because Imagineer Systems is hiding out in Hall 13 (did I mention IBC was huge?), which is a bit out of the way.

Imagineer Systems is the innovator of planar tracking, a different approach to 2D tracking that makes it possible to easily perform complex tracks that can be very difficult to achieve using other more common methods. Users of After Effects will be familiar with mocha for After Effects, a version of their popular tracking application that is bundled with Adobe After Effects. While mocha may be what introduced many users to Imagineer Systems' powerful technology, they actually had other products available long before.
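Planar tracking follows a whole surface rather than individual feature points: each frame's result is essentially a homography, a 3x3 projective transform that maps the tracked plane from one frame to the next. A toy sketch of applying such a transform to a tracked region's corners, with an invented matrix purely for illustration (this is the general math, not Imagineer's implementation):

```python
import numpy as np

def apply_homography(H, pts):
    """Warp 2D points by a 3x3 homography (projective transform)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]              # divide out the projective w

# Corners of a tracked planar region (e.g. a screen to be replaced)
corners = np.array([[100.0, 100.0], [300.0, 100.0],
                    [300.0, 250.0], [100.0, 250.0]])

# An invented frame-to-frame homography: slight shear, translation and perspective
H = np.array([[1.0,  0.02, 5.0],
              [0.0,  1.0,  2.0],
              [1e-4, 0.0,  1.0]])

new_corners = apply_homography(H, corners)
```

Because all four corners (and everything between them) move under one transform, the track stays stable even when individual pixels are blurred or occluded, which is why planar trackers handle shots that defeat point trackers.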

Mokey is an application for automatically removing unwanted elements from moving footage such as wires and logos. Conversely monet is used for adding objects to a scene, such as screen insertion and product placement. Both products are based on the same planar tracking engine.

Inevitably some capability overlap developed between monet, mokey and mocha, and in order to simplify the product offerings Imagineer Systems has introduced mocha Pro, which unifies the capabilities of the three products into one updated 64-bit interface.

Among the features of the new product is the lens module, which helps compositors account for camera lens distortions when adding an element into a shot, distorting the new object to match the plate and adding realism to the effect. The Autofill feature in the Stabilize module fills in the black edges that occur when a stabilize operation moves the frame around to compensate for the motion that is being removed. Instead of the user needing to scale up the image to push the black edges offscreen, Autofill generates new contextually appropriate pixels for these areas.
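As an illustration of what a lens module has to account for, here is a toy one-coefficient radial distortion model in Python. This is an assumption-laden sketch of the general math, not Imagineer's actual model; real lens modules estimate several distortion coefficients from the footage itself.

```python
def distort(x, y, k1):
    """Map an undistorted normalised image coordinate (origin at the lens
    centre) to its distorted position under a one-term radial model."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2   # k1 < 0 gives barrel distortion, k1 > 0 pincushion
    return x * scale, y * scale

# A CG element composited into a distorted plate must be warped the same way,
# otherwise it will visibly drift near the frame edges where r2 is largest.
corner = distort(1.0, 1.0, -0.05)   # a point near the frame corner pulled inward
```

The key observation is that the effect grows with distance from the lens centre, which is why an undistorted insert looks fine in the middle of frame but slides off a tracked surface toward the edges.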

mocha Pro will be available in Q4 2010 and is expected to cost $1,495 for a node-locked license. Imagineer Systems has also simplified its upgrade paths for existing users interested in the new product.

To travel between halls 7 and 8 visitors walk across a skybridge that takes them over The Beach, an actual beach that is quite popular with exhibitors and attendees alike. Many people are stopping, however, to watch a beach volleyball game being played on the sand below. These oglers should go down and take a closer look, though, because the spectacle is actually a showcase of some very cool tech from Hego.

The first tip-off that this isn't a spontaneous match between rival gangs of beautiful Dutch girls is the large outdoor display showing a live view of the game, the kind you're likely to see in any sports arena around the world. The view switches to show serves and returns and also follows the play as the ball passes over the net.

Mounted beside the net is a box containing the capture element of Hego's OB1 "complete sports production solution." Six fixed HD cameras are arrayed inside the box, each offset from the others to produce a panoramic view of the playing field. Software running on the PC receiving these signals stitches the views together into a seamless panorama, and a single operator controls what is seen by the virtual cameras.

This separation of camera and performance reminds me of the technology James Cameron created for Avatar. The computer records the entire panoramic image, so if the operator of the live stream were to miss a key play they can easily produce a replay showing the action with a corrected camera view. Indeed, the Hego rep I spoke with described OB1's use as a training tool, where an entire game or practice can be recorded by the system and later the coach can review each play over and over, each time focusing on a different player or position.
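The virtual-camera idea can be sketched simply: record the whole stitched panorama every frame, and treat each "camera" as a pan/zoom crop pulled out of it, which is exactly why a missed play can be re-framed after the fact. The resolutions here are my own assumptions for illustration, not Hego's specs.

```python
import numpy as np

# Six 1920-wide HD views stitched side by side into one panoramic frame
pano_h, pano_w = 1080, 11520
panorama = np.zeros((pano_h, pano_w, 3), dtype=np.uint8)

def virtual_camera(frame, center_x, out_w=1920):
    """Extract a 'camera' view by cropping the panorama around center_x,
    clamped so the window never runs off the edge of the stitched image."""
    left = int(np.clip(center_x - out_w // 2, 0, frame.shape[1] - out_w))
    return frame[:, left:left + out_w]

view = virtual_camera(panorama, center_x=6000)
```

Since the full panorama is retained, a replay is just the same crop function run again over recorded frames with a different `center_x` path.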

IBC visitors can certainly be forgiven for not appreciating what is actually being demonstrated, but I encourage you to take a closer look at what's happening just off the court. It is enough to distract from the distraction.

My name is Wes Plate, president and co-founder of Automatic Duck, Inc., a company that makes plug-ins for translating timelines between After Effects, Avid, Final Cut Pro and more. Though I have exhibited at IBC for several years this is the first time I am attending as a mere visitor, and so the good folks at Post Magazine invited me to share my thoughts and impressions of IBC from the vendor side.

Since we started Automatic Duck in 2001 we have exhibited at the industry's two largest trade shows: NAB in Las Vegas and IBC here in Amsterdam. Exhibiting is tremendously hard work and is not nearly as glamourous as you might think. In addition to all the planning and organizing and shipping and setting up and tearing down and shipping back, there is the main job of being accessible in your booth for many hours, talking to a variety of people who range from exciting prospects to those who, well, aren't. While we all like to moan about how much trade shows suck, meeting customers is actually refreshing, energizing and rewarding. People come up to tell you how much they love your product, and even those who are experiencing a problem are respectful, because in person it is much more difficult to be a jerk in the way email communication too often encourages. And I'm very nice in person.

If you've ever seen Automatic Duck at NAB or IBC you probably noticed we've been part of a group of plug-in vendors called the Plug-in Pavilion. We team up to get a large space and then share the various costs. It is quite a good idea, and while it doesn't necessarily make exhibiting much cheaper, it does allow us to get good space in the hall. I love the Plug-in Pavilion, and over the years we repeat offenders in the group have become good friends. However, since our group started exhibiting at IBC, fewer and fewer plug-in companies have participated. Last year, when there were only two of us left sharing the space and the costs, we looked at each other and decided it was getting too expensive. But because I love IBC and I love Amsterdam, I still came out. After seven years of exhibiting at IBC, this is the first year I'm just a visitor.

There is a lot to like about IBC. Amsterdam is a fantastic city full of beauty and kindness, with loads of fantastic restaurants and places to visit with friends. And for me this aspect of IBC is important. Many of us vendors get to know each other as we see our same exhausted faces year after year at NAB, IBC and other industry events. At IBC we seem to have more of an opportunity to get together in the evening, something that no one is able to do at NAB because of all the events, user meetings, parties and other working-while-drinking occasions. But here in Amsterdam I feel like we get to relax a little after the exhibits close for the day.

You can even relax and socialize at the show itself, if you can get away from your booth (or just forego the booth altogether). It turns out that located on the grounds of the RAI exhibition center is the Strand Zuid City Beach, a lovely outdoor area with two restaurants, beer, sand, benches and chairs that all work together to nearly make you believe you're not on the clock.

Naturally there is work to be done at IBC, otherwise our bosses wouldn't be sending us, and IBC is a fantastic place to do business. As you join the queue to enter the building you immediately sense that you're amongst professionals; just about everyone is wearing a suit. And not in a stuffy wearing-a-suit-at-NAB way, but in a cool European way that exudes power and style, with a dash of you-can-tell-these-dudes-also-know-how-to-party. Many of the visitors to IBC are power players at large broadcasting companies, there to get questions answered and potentially make deals. In my experience, visitors will stay at a stand a lot longer than they will at other shows, asking very thoughtful questions, and they won't leave until their understanding is crystal clear. But the slowness isn't a problem; there aren't crushing crowds here, rather a crop of high-value prospects. No tire kickers.

This visit has so far been again quite positive. I've enjoyed successful meetings at the beach, memorable dinners with friends and business partners, plus loads of good and insightful chat all around IBC. I will leave Amsterdam reinvigorated and excited about the opportunities that lie ahead.

Some cool stuff happening here at IBC in Amsterdam. Last night Omneon and Harmonic held their first-ever combined press event. The deal for Harmonic to buy Omneon is still not complete, but it should be finalized shortly. The two companies emphasized how, by working together, they can provide an end-to-end infrastructure that involves direct ingest of live feeds into Omneon's MediaGrid. A new Omneon product discussed was ProXchange 1.6, which supports ProRes 4:2:2.

Today Quantel talked about how — for the post part of their biz — 3D stereo has now overtaken 2D in terms of sales. They also announced that Cinnafilm's Dark Energy noise reduction technology is available for its iQ, eQ and Pablo products.

AJA has three new offerings, one in each of the product categories it serves. For acquisition: a new Ki Pro Mini ($1,995) that weighs one pound and offers a scaled-down version of its big brother, the Ki Pro. For editing, there is the next-gen Kona card, the Kona 3G ($1,995), which offers HDMI 1.4a output for stereographic monitoring on consumer 3D displays, among other new features, including 10-bit uncompressed video 3G/HD/SD-SDI I/O. For conversions, AJA introduced the Hi5-3D mini converter ($495), featuring the new HDMI 1.4a output.

The A-List of Hollywood post was in attendance at the Grand Opening of Oasis Imagery on Sunset Boulevard in the heart of Hollywood. Oasis Imagery is led by Chief Visionary Officer Scot Barbour and Chief Technology Officer Adam Green, who led the multi-million dollar build out of the 27,000 sq. ft. facility.

Green and Barbour presented the new facility to a standing-room-only crowd in their 50-seat DI theater. Presentations of the facility's resources included, among others, Avid, Final Cut Pro, Autodesk Smoke and Autodesk Lustre. They stressed their 3D capabilities, including a DI theater capable of presenting 3D imagery in the RealD, Dolby 3D or XpanD formats. They also proudly announced their recent THX certification.

Talent abounds as well. During the presentation we were treated to the demo reel of their lead colorist, Chris Jacobson, whose credits include American Gangster, The Foot Fist Way, and Mulholland Dr. Their lead Smoke artist, Shursen Parsad, is an expert operator with many significant Smoke credits going back to version 1.

All great things must end, and so it follows that SIGGRAPH 2010 has finally come to a close. Overall, it was a nice conference this year, with some interesting new technologies to be seen and a few new techniques and ideas to walk away with. Unfortunately, in my opinion, it was definitely not one of the best expo floors I've been to, likely for a few reasons. For one, despite a few new advances this year, there weren't many revolutionary breakthroughs in the industry. The biggest push of new technology, which is finally starting to take hold, is GPU computing on graphics cards and some new real-time or near-real-time software that is finally beginning to take advantage of this hardware. More on this in a bit.

Second, there seemed to be noticeably fewer vendors on the floor showing their products. While some companies were simply absent from the show altogether, others were no longer present because they have either merged or been acquired by another vendor at the show. This industry consolidation (which seems to be cyclical over the years) is to be expected, but it nevertheless has an impact on the perceived presence at the show.

Finally, I'm sure the recession and the state of the economy have a great deal to do with it as well. Between budgetary cuts and marketing restructuring, the “freebie” giveaway factor was definitely much lower this year. Aside from a few random t-shirts, pens, and the ridiculously long line for the Pixar teapot toy, gone were the squishy toys, magnets, keychains, mint boxes, and random collectible things that tend to fill up the free bags of many an expo-goer.

On the topic of the teapots, what a fantastic marketing job Pixar has done with that, and kudos to them for once again, as always, being geniuses. Being a Renderman user, my “special” limited-edition teapot, which is somewhere in the 400s out of 1,000, was obtained at the Renderman User Group meeting. I am not, however, a collector of such fine memorabilia, so I gave it to my three-year-old son, who is playing with it around the house now. Watching the smile on his face as he winds it up and runs around the house shouting “Renderman Teapot” is worth far more to me than a tin box on a shelf collecting dust. While there is nothing wrong with wanting a toy or collectible like this for free, it saddened me a bit to see how long the line of young aspiring artists was for this, as perhaps more of their interest should have been in learning about the new technology in this field, which is supposed to be the whole purpose of this show. Maybe some of the video card manufacturers can start giving out limited-edition toys of “Jimmy the GPU,” and the rendering companies can offer “Irving the Image-Based Light” to drum up a bit more interest from the student crowd.

While I had intended to squeeze in a few more talks, I ended up instead spending more time on the floor. I wanted to make sure I had an opportunity to take in the new offerings and get some one-on-one time with the developers and their products. A friend and colleague of mine, Erik Gamache, was also presenting a half-hour talk on Digital Domain's making of the Hydra character in the recent Percy Jackson film, so I decided to sit in at the Autodesk booth to check it out and support him. The presentation was very well delivered, and the work looked great. A number of old friends of mine worked on that, and to them, congratulations on another job well done.

To get the most out of the floor time, I did a quick pass through all the aisles, grabbing a brochure from any booth I wanted to visit more in-depth. Afterwards, I sat with one of my coworkers and formulated a plan of attack. In the software arena, it was interesting that many of the packages were Windows-only, with a Linux port either in (slow) development or not available at all in the foreseeable future. The only reason I could think of for this is that most of these tools are targeting markets which are either more involved in games production, or doing commercial/film work using Windows-based tools such as 3DS Max. Being a Linux-based studio, and with many of the larger studios using this platform, I would have thought more of these new tools would be available for it. This basically leaves three options if I would like to use this software: set up a few Windows-based machines (we currently do this as dual-boot systems or on Linux via VMware), try to get on a beta program for the unreleased Linux versions, or simply lose the ability to take advantage of these new tools. There were definitely some interesting items out there, and while I won't discuss any particular toolset at this time, suffice it to say that many of the products are maturing to a point where they are becoming particularly good at what they do.

The other thing of interest, as I mentioned above, was the GPU technology and some of the software taking advantage of it. I am still a bit under-educated in how this processor works, and my attempts to get some explanation at the chipmaker booths yielded little more info. I simply got the runaround and was handed from rep to rep, only to finally give up when the fifth person on the hand-off chain was “back in an hour.” What I do know about these vector processors is that they are extremely efficient at running many instances of the same calculation simultaneously, and this happens to fit the requirements of raytracing. Therefore, this was the big sell, and there are a number of new software tools available taking advantage of it. Trying to figure out how this could be of immediate, significant benefit to our studio was the challenge. First off, being in the commercial production and design business, we have no real requirement for real-time visual display. Of course, it goes without saying that if I could obtain the same desired result from a real-time hardware render that I get from a software render, I would be all for it. However, there are certain types of rendering results that, for the time being, can still only be calculated in a software render, and until that changes, that is what will have to suffice for final images. While many of the GPU images (whether real-time or near real-time) were very impressive and beautiful, they are still not photoreal, and if that's the requirement of the spot you are making, this will not work. Of course, if the result approximates the software render in kind (in other words, it is a valid representation of your shading setup), then it could serve as a huge benefit during the process leading up to the final software render.
Of course, there are some renderers now which are taking better advantage of the GPU, such as V-Ray RT, basically using it to aid in certain parts of the render while other parts are rendered on the CPU. I am looking into those renderers as well, to possibly augment our current setup. One thought I had, and would like to look into, is potentially using the GPU to accelerate the creation of point-cloud and voxel-cloud data for things like occlusion, reflection, etc., and then using that quickly generated cache as an input to the software-render portion, greatly speeding up the overall render time. In order for this to work, Renderman will need to be able to process those portions using the GPU (I'm not sure if the new version will allow us to do that or not), and of course I will need to make sure we are using machines which have capable GPUs on their graphics cards. This also means that, unless we place graphics cards into our racks on the farm, these caches will have to be generated on our local workstations, which isn't the most efficient way to work either. I will need to do a fair amount of research to find out how best to leverage this technology, and with what combination of hardware and software. There are a few other products out now which are GPU-based and interface with Renderman, such as MachStudio Pro 2, so this may provide this type of solution, along with many of the new features in Renderman Studio 3 and Renderman Pro Server 16, due out in two months.
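To illustrate why those vector processors map so well onto raytracing, here is a small data-parallel ray/sphere intersection written with NumPy: the same arithmetic is evaluated for every ray at once, which is exactly the pattern a GPU accelerates across thousands of threads. The scene values are invented for illustration, and this is a CPU stand-in for the concept, not any vendor's renderer.

```python
import numpy as np

def intersect_sphere(origins, dirs, center, radius):
    """Return per-ray hit distances (np.inf where the ray misses).
    All rays are processed in one vectorised pass; dirs are unit length."""
    oc = origins - center
    b = np.einsum("ij,ij->i", oc, dirs)           # per-ray dot(oc, dir)
    c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
    disc = b * b - c                              # quadratic discriminant
    t = -b - np.sqrt(np.maximum(disc, 0.0))       # nearest intersection
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)

n = 100_000
origins = np.zeros((n, 3))
dirs = np.random.default_rng(0).normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
hits = intersect_sphere(origins, dirs,
                        center=np.array([0.0, 0.0, 5.0]), radius=1.0)
```

Every ray runs the identical branch-free arithmetic, so the work divides perfectly across parallel lanes; the divergence and incoherent memory access of secondary bounces is where GPU raytracers get harder, which is partly why the hybrid CPU/GPU approaches mentioned above exist.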

The rest of the things I saw at the show weren't really my cup of tea. There were a number of 3D scanners being presented, but unfortunately nothing that was really a one-stop solution. By that, I mean I would like to see a scanner which can be operated handheld, with range adjustments to accommodate close, small subjects as well as large, mid-range subjects such as a full-body person or car, and the ability to further scan far larger structures via LIDAR, while at the same time taking high-resolution (at least 10-megapixel) full-color images for texture purposes, all in a single wireless unit. Additionally, it would be nice for the price point to be below the five-thousand-dollar range for a unit like this. The same goes for 3D printing. The technology seems to be improving in some of these units, but the entry price point seems to be around ten thousand dollars, extending up to the half-million mark. As I mentioned in a previous post, the only way to get below this is to use a unit like the MakerBot, where you assemble the tool yourself and then have to work hard to get results which approach some of these pre-assembled units. While extremely large facilities and research institutes may be able to afford these units, smaller studios like ours find it difficult to justify the expense when we simply wouldn't get all that much benefit from them, and when we do occasionally need them, we can just pay a service provider for one-time use. I personally would love to bring this technology in-house, as it would save time to be able to perform these operations on set and in the studio at will. Hopefully we will see a price evolution that mirrors other, longer-established hardware in a short amount of time, opening up this market a bit more.

A funny thing happened later in the day. Not paying much attention, I didn't realize when the show ended and thought it would go until at least 6:00pm. I hadn't eaten any lunch, so around 2:30pm we decided to walk over to the LA Live area and try one of the new restaurants. That was quite enjoyable, and we ended up coming back around 4:15pm. On the way, I ran into some more friends, chatted briefly, then headed over to the expo floor again. I was shocked when I got there and the place was already half dismantled, the lights full on and multitudes of workers disassembling scaffolding and crating up equipment. I didn't realize the show ended at 4:00pm, and was simply amazed at the efficiency of the convention center staff and how rapidly they were putting the show to bed. We then headed over for one last talk which was still going, and finally called it quits just before 6:00pm.

Now back in the studio, I have my work cut out for me. I have this giant list in my head of things to go over with the other artists, research and development I want to get started on, software and hardware to look into, and a multitude of workflow improvements I think we can pursue. I took a great deal away from this year's show, and hopefully you the reader had a similar opportunity and experience. I hope you enjoyed my recap of the events and my various random thoughts on things, and I would welcome any comments or questions any of you might have at my email address listed in my profile. Until next time, good luck with your endeavors and the pursuit of the magical art that is animation and visual effects. This field is what we all make of it, and frankly it's one of the coolest things to be involved with.

SIGGRAPH is pretty much a wrap. I would have written earlier but literally have not stopped since stepping off the plane into Burbank Monday morning. I headed straight into the talk on computational photography - I have to say this is one of the more exciting research areas to me. Development in this sector has been evolving for a little while and is now getting more focus (ha). A few years ago at SIGGRAPH there were a number of fun emerging technologies related to this field - coded apertures, origami lenses, lens arrays - and plenty of work on up-res-ing, deblurring, infinite depth of field, etc. As with most tech there are tradeoffs - loss of contrast, ringing or other artifacts - but what I find really exciting is to think of this research more philosophically: thinking of light in a new way, changing photography and imaging by being creative about how we analyze light, and how that will change the creative process. This is definitely a sector to watch.

On with the show. With a full conference pass I always feel like I need to be in three places at once; luckily ILM sends a few qualified folks to cover the subject matter (and contribute, of course). I tend to hit up the replicating-realism sessions - anything to do with faces, lighting acquisition, 3D spatial things, human motion - and as usual there are a few good pieces here and there; sometimes there are things we know but haven't implemented, and sometimes it is a result from a different approach that can be leveraged in an entirely new way. One of the things I'd like to see more of is research collaboration between industry and academics. I find that some of the research going on is solving something that certain companies already have but can't/don't share. I understand the protection of intellectual property, but at a minimum I think the big shops can help consult and guide some of this academic research by collaborating (not saying it doesn't happen, just that I'd like to see more of it). A quick example: I attended a talk on analyzing and extracting 3D human performance from a single 2D video source. The research included a number of secondary techniques that helped the process and rounded out the work in a nice way, but also distracted from focusing on a more elegant solution to the principal issue. Of course research can head in a multitude of directions for many reasons; I just like to think that with some more collaboration I could integrate some of the techniques sooner and keep raising the visual bar.

I always enjoy seeing the research coming out of ICT, specifically the facial-capture work using normal mapping - and even the head-rig version of this tech. Great datasets for shapes, bump and spec, and performance - if and when you have a controlled setup. The head mount frees some of that, but still leaves plenty of fun in the research arena for capturing facial performance in the context of a live-action environment with minimal tech and footprint.

On to the floor: it was nice to see the major shops all with recruiting efforts. I know all the Lucas divisions have a number of opportunities for folks on the upcoming slate of work. As for the vendors - nothing like the magnitude of a show like NAB, but I guess that is a good thing; you can get right down to seeing the tools you really need. There was plenty of opportunity to play with desktop scanners and 3D printers, a good all-around representation of those market options that you could touch and feel. And as usual, all the mocap players were representing - optical systems, accelerometer-based, etc. Nothing super new here, except that everyone is jumping on the virtual cinematography bandwagon. It is nice to see the tools becoming packaged and available for expanded use at reasonable price points. Most form factors look like shoulder-mount broadcast cameras, and give that kind of look to your virtual camera move. Guess what, people: if you have a tracking technology, you already have the capability of doing virtual cinematography (you still need the talent, but piecing the tech together isn't that hard). Heck, put your tracking object on a dolly, a hot head (or just port the rotations directly), or a Steadicam (make sure to add mass). The point here is that you can leverage traditional tools with the technology to replicate the cinematic look.

One observation: emerging tech always has the most random graphical interaction technologies. It cracks me up to use augmented reality to get a virtual smell, crazy haptic-feedback contraptions and fun little graphic games. One cool tech that is letting us peer into the future: a 3D display with a little 3D video game inside the hologram-like spinning LED display - this tech will be fun in the near term. These projects make me think it would be fun to be back in school working on something like this - I was working on animatronics back then, which I guess runs in the same vein. Come to think of it, a lot of the research we do to solve a vfx problem is extremely similar. We leverage hardware and software in some undiscovered way, usually nowhere near solid state, yet just capable enough to get the job done, and then we start all over again.

Thanks to Jim Morris for his keynote covering many of the pivotal moments in computer graphics; it is amazing to see how far this industry has evolved in such a short amount of time. It is the moments when we get to see our hard work and innovation pay off, through the amazing visuals and the audience's appreciation, that keep me inspired. I rounded out the conference with plenty of socializing and networking - thanks to all the vendors and sponsors for Bordello Bar, J Lounge, Club Nokia and 7 Grand. Good thing the conference is only a few days long; now back to the pile of deadlines waiting for me at the studio.

Good evening, loyal readers. Wednesday is rapidly coming to a close, and so is this year’s SIGGRAPH exhibition and conference. Tomorrow is the last day, and there are a few classes left I might try to attend. I’ll also be spending a fair amount of time on the expo floor, talking to different vendors about new products I’m currently interested in while learning how some of the newest technologies, such as the multi-GPU video cards, actually work and how my studio might be able to take advantage of them.

Today, the front lot appeared filled by the time I arrived, so I had to resort to parking down in the bowels of the underground parking structure. I had planned on attending a number of courses and talks, but I ended up running into a number of old friends and colleagues and getting into conversations which overlapped the start times of some of the courses, so alas, I skipped a few. I did, however, attend a “birds of a feather” special interest group on 3D printing. This was essentially an open session, complete with full introductions from everyone present, where everyone was free to discuss their knowledge of both 3D printing and subtractive rapid prototyping (the technique I am most interested in). Although the two are definitely related (one builds up a three-dimensional model in successive layers using a polymer deposited into some sort of removable support matrix, while the other takes a solid block of material and eats away at it until the resulting milled object is complete), they have different uses, benefits and drawbacks, and relatively separate user bases. The ensuing conversation was informative and fun, with brief discussions on laser cutting, paper folding, sign manufacturing, and molecular model representation. This year there is a new 3D printer called the Objet (http://www.objet.com), which allows for printing in other materials, including metal. There was also a lot of discussion about a do-it-yourself 3D printer called the MakerBot (http://makerbot.com/), which is obtainable for as little as $750. While this needs to be assembled by the user, it is definitely an affordable entry-level 3D printer with a loyal fanbase and user-support community.

Around midday, I attended a press event for Jon Peddie Research (a technically oriented multimedia and graphics research and consulting firm). It was held at the Palm restaurant, an upscale eatery located only two blocks from the convention center. I was at this particular restaurant nearly a year ago, when my wife took me there for a fantastic birthday dinner consisting of a tender filet mignon, a 4-pound lobster, and one of the best Singapore Slings I’ve had outside of my home bar. It was definitely a terrific location for a press event, and the lunch served was quite delicious. The panel of speakers was also quite impressive, including Eric Demers (Graphics Chief Technology Officer at AMD), Brian Harrison (Director at SolidWorks Labs), Rolf Herken (CEO and CTO at Mental Images), Bill Mark (Senior Research Scientist at Intel), and Paul Stallings (Vice President of Development at Kubotek). In addition to market forecasts, the discussion revolved largely around the notion of an HPU, or Heterogeneous Processor Unit (a multiple-core processor chip with integrated CPU and GPU cores), and what impact it will have on graphics computing over the next few years. Some talk also revolved around cloud computing, the benefits and drawbacks it will have for users, and its impact on the industry as a whole. At one point during the talk, there was an interruption: the doors to the private section of the restaurant where we were sitting swung open and a commotion ensued. Before anyone knew what was happening, in walked William Shatner and Dick Van Dyke. It turns out they were presenting later in the day at one of the expo booths, sharing their thoughts on how the visual effects industry has changed and impacted their careers. I can only assume that they came to the Palm for lunch before that event and, learning about the type of meeting we were having upstairs, decided to crash the party and have a little fun.
It was definitely quite entertaining to see both of them, and it really made the lunch something to remember.

From there, I returned to the expo, where I decided to walk the floor for a bit. I was lucky enough to run into some more old friends and catch up for a little while. Once the evening came, it was time for the annual RenderMan Users Group meeting, held this year at the brand new Marriott at L.A. Live. While I won’t recap all the new features coming in the next release of RenderMan Studio and RenderMan for Maya, suffice it to say that there are a great number of fantastic improvements, and I am really looking forward to this release. The new Tractor job queue management system was also demonstrated (this is a replacement for the older Alfred queue system). As a clever trick before the show, attendees could log into a local wifi network and play a RenderMan trivia contest; what was actually happening was that Tractor was using their devices to perform a distributed render of a Mandelbrot fractal image, demonstrating not only the flexibility and power of the new system but finally bringing to fruition the often-joked-about notion of putting everyone’s iPhone on the render farm. The presentation finished up with the requisite “Stupid RenderMan Tricks” and a raffle for various Pixar stuffed animals, hats, and t-shirts. Of course, no RenderMan Users Group meeting would be complete without receiving the wind-up teapot toy in a tin. This year’s version was red with a black hat.
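The trick works because a Mandelbrot image is embarrassingly parallel: every scanline can be computed independently and stitched together at the end, which is exactly the kind of job a queue manager can fan out to whatever devices are logged in. Here is a toy sketch of that idea using Python's standard multiprocessing pool; the image dimensions, function names, and worker count are my own invention and have nothing to do with Pixar's actual Tractor API.

```python
# Toy illustration of distributed Mandelbrot rendering: split the image
# into scanlines, render each one on a separate worker, then reassemble.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 200, 120, 80

def render_row(y):
    """Render one scanline of the Mandelbrot set as iteration counts."""
    row = []
    for x in range(WIDTH):
        # Map pixel coordinates to a window of the complex plane.
        c = complex(-2.5 + 3.5 * x / WIDTH, -1.25 + 2.5 * y / HEIGHT)
        z, n = 0j, 0
        while abs(z) <= 2 and n < MAX_ITER:
            z = z * z + c
            n += 1
        row.append(n)  # points inside the set reach MAX_ITER
    return row

if __name__ == "__main__":
    # Each pool worker plays the role of one device on the ad-hoc farm.
    with Pool(4) as pool:
        image = pool.map(render_row, range(HEIGHT))
    print(len(image), len(image[0]))  # 120 rows of 200 pixels each
```

In the real demo the "pool" was the audience's phones and laptops, with Tractor handing out the tiles and collecting the results.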

Tomorrow, I might try to check out a studio presentation on “Hi Res Rapid Prototyping for Fine Metals and Jewelry”, a course on “Global Illumination Across Industries”, or a talk on “Fur, Feathers and Trees”, time permitting. If any of you are still interested in seeing the show, make sure you go tomorrow, lest you miss this year’s event. Thanks for tuning in, and check back tomorrow for my final wrap up.

This morning the SIGGRAPH trade show opened. By all accounts there are fewer exhibitors than in previous years. Although the industry is slowly climbing back from last year’s recession, the show is about half the size it was five years ago. By far the most popular and well-attended booth is Pixar’s. The line to visit Pixar literally wound around the entire exhibit hall. I assumed that most of those in line were aspiring animators hoping to land that coveted job with their dream company. I was in fact wrong; Pixar was giving out small plastic teapots. I truly do not understand what would make people wait in line for hours for a plastic toy, but then again everything Pixar touches seems to turn to gold, even plastic toys. The teapot has long been a reference model in 3D graphics and was the default icon for previous SIGGRAPH shows. I wonder if the endless young people waiting in line were even aware of this.

The floor was full today, and as expected 3D is all the buzz. There were virtual cameras that enabled the viewer to see stereoscopic virtual sets in real time. There were also real-time 3D motion capture rigs, both with those iconic markers the actors have to wear and setups where no markers were needed at all. There were numerous 3D television monitors that require polarized glasses to view the 3D effect. Presently this is by far the most popular consumer choice for home viewing of 3D. The competitor to this technology is autostereoscopic monitors, which do not require the viewer to wear glasses; instead, there are "sweet spots" where the viewer must stand in order to experience the 3D. In my opinion this technology still has a long way to go before consumers are ready to spend the money, but the technology is progressing.

Our company utilizes 3D CGI for our work (not to be confused with 3D stereoscopics), and there have been a lot of developments on this front. We use Softimage XSI heavily in our pipeline. The behemoth Autodesk recently acquired this software; in addition to XSI, Autodesk also owns Maya and 3ds Max. I am relieved to hear Autodesk will continue to invest in the product, making it stronger and continuing to evolve the software. As an educator, I was glad to learn Autodesk will be offering an entertainment creation education suite. This package will include the following Autodesk products: Maya, MotionBuilder, Mudbox, SketchBook Pro, Softimage and 3ds Max. Students today will probably have to be multi-versed in all of these packages. Like other design houses, we are finding that it is hard to stick with one piece of software for our pipeline. I am hoping Autodesk will continue upgrading each of these products and not push one at the expense of another.

I also attended two panel discussions today. The first was entitled "Blowing $hit Up," a great name for a panel on the scientific underpinnings of how 3D CGI creates photoreal explosions and the destruction of cities for film. These panels are not for the technically deficient. The first team to present was from the always amazing Industrial Light & Magic. They explained some of the techniques used in making the film Avatar, primarily the explosion sequence involving the Dragon aircraft. Without getting into too much detail, much of the initial work was done with model proxies and rigid body dynamics, with highly detailed models substituted in later for the final shots. The next panel was again ILM, this time explaining some of the work they accomplished for Transformers 2. The work spoke for itself. What I found most interesting is that ILM’s goal is to make the tools as easy as possible, to allow the artists to do what they do best: be artists and not scientists. For the layman, there is no magic explosion button to blow things up; there is a tremendous amount of pre-calculation and iteration to get the right “look.” And for ILM it is not enough to have a physically exact explosion; the look, feel, and composition are always at the forefront. It is more important that the shot looks right than that it is scientifically right. Digital Domain presented the last panel, explaining the earthquake sequence from the movie 2012. The work was outstanding, and the ability to destroy downtown LA photorealistically was unimaginable only a few years ago.

The last panel discussion I attended was "The Making of Tron: Legacy." This movie may hold the record for the longest gap between an original film and its sequel. The session was a full house, and the crowd was looking forward to hearing from Joseph Kosinski, making his directorial debut. As someone who also received his Master of Architecture from Columbia University, I was excited to see what Joseph imagined the world of Tron would look like. I was relieved, and truthfully blown away, by the seven-minute 3D preview of the film. It stayed true to the original vision of the artists Syd Mead and Moebius. Apparently the original star of the film, Jeff Bridges, will reprise the role of Kevin Flynn, with a digital double playing him as a thirty-five-year-old. Maybe the uncanny valley has finally been crossed. It was also nice to hear that, with Pixar’s new relationship with Disney, they were able to make suggestions on how to make the film better. I hope in this case their touch does make this film turn to gold.