Tag Archives: Siggraph

Floyd Norman, the first African-American animator to work for Walt Disney Animation Studios, has been named SIGGRAPH 2017’s keynote speaker. The keynote session featuring Norman will be presented as a fireside chat, allowing attendees the opportunity to hear a Disney legend discuss his life and career within an intimate setting. SIGGRAPH 2017 will be held July 30-August 3 in Los Angeles.

Norman was the subject of a 2016 documentary called Floyd Norman: An Animated Life from filmmakers Michael Fiore and Erik Sharkey. The film covers Norman’s life story and includes interviews with voice actors and former colleagues.

Norman was hired as the first African-American animator at Walt Disney Studios in 1956 and was later hand-picked by Walt Disney himself to join the story team on The Jungle Book. After Walt’s death, Norman left Disney to start his own company, Vignette Films, and produce films on the subject of black history for high schools. He and his partners would later work with Hanna-Barbera to animate the original Fat Albert TV special Hey, Hey, Hey, It’s Fat Albert, as well as the opening title sequence for the TV series Soul Train.

Norman returned to Disney in the 1980s to work in its publishing department, and in 1998 moved to the story department to work on Mulan. After all this, an invite to the Bay Area in the late ‘90s became a career highlight when Norman began working with leaders in the next wave of animation — Pixar and Steve Jobs — adding Toy Story 2 and Monsters, Inc. to his film credits.

Though he technically retired at the age of 65 in 2000, Norman is not one to quit and chose, instead, to occupy an open cubicle at Disney Publishing Worldwide for the last 15 years. As he puts it, “I just won’t leave.”

While not on staff, Norman’s proximity to other Disney personnel has led him to pick up freelance work and continue his impact on animation as both an artist and a mentor. As to his future plans, he says, “I plan to die at the drawing board!

“I’ve been fascinated by computer graphics since I purchased my first computer. I began attending SIGGRAPH when a kiosk was all Pixar could afford,” he says. “Since then, I’ve had the pleasure of working for this fine company and being a part of this amazing technology as it continues to mature. I’ve also enjoyed sharing insights I’ve garnered over the years in this fantastic industry. In recent years, I’ve spoken at several universities and even Apple. Creative imagination and technological innovation have always been a part of my life, and I’m delighted to share my enthusiasm with the fans at SIGGRAPH this year.”

Mikki Rose has been named conference chair of SIGGRAPH 2019. A fur technical director at Greenwich, Connecticut-based Blue Sky Studios, Rose chaired the Production Sessions during SIGGRAPH 2016 this past July in Anaheim and has been a longtime volunteer and active member of SIGGRAPH for the last 15 years.

Rose has worked on such films as The Peanuts Movie and Hotel Transylvania. She refers to herself as a “CG hairstylist” due to her specialization in fur at Blue Sky Studios — everything from hair to cloth to feathers and even vegetation. She studied general CG production at college and holds BS degrees in Computer Science and Digital Animation from Middle Tennessee State University as well as an MFA in Digital Production Arts from Clemson University. Prior to Blue Sky, she lived in California and held positions with Rhythm & Hues Studios and Sony Pictures Imageworks.

“I have grown to rely on each SIGGRAPH as an opportunity for renewal of inspiration in both my professional and personal creative work. In taking on the role of chair, my goal is to provide an environment for those exact activities to others,” said Rose. “Our industries are changing and developing at an astounding rate. It is my task to incorporate new techniques while continuing to enrich our long-standing traditions.”

SIGGRAPH 2019 will take place in Los Angeles from July 29 to August 2, 2019.

My first video card review was on the ATI FireGL 8800 more than 14 years ago. It was one of the first video cards that could support two monitors with only one card, which to me was a revolution. Up until then I had to jam two 3DLabs Oxygen VX1 cards in my system (one AGP and the other PCI) and wrestle them to handle OpenGL with Maya 4.0 running on two screens. It was either that or sit in envy as my friends taunted me with their two screen setups, like waving a cupcake in front of a fat kid (me).

Needless to say, two cards were not ideal, and the 128MB ATI FireGL 8800 was a huge shift in how I built my own systems from then on. Fourteen years later, I’m fatter, balder and have two 27-inch HP screens sitting on my desk (one at 4K) that are always hungry for new video cards. I run multiple applications at once, and I demand to push around a lot of geometry as fast as possible. And now, I’m even rendering a fair amount on the GPU, so my video card is ever more the centerpiece of my home-built rigs.

So when I stopped by AMD’s booth at SIGGRAPH 2016 in Anaheim recently, I was quite interested in what AMD’s John Swinimer had to say about the announcements the company was making at the show. (AMD acquired ATI in 2006.)

First, I’m just going to jump right into what got me the most wide-eyed, and that is the announcement of the AMD Radeon Pro SSG. This professional card mates a 1TB SSD to the frame buffer of the video card, giving you a huge boost in how much the GPU system can load into memory. Keep in mind that professional card frame buffers range from about 4GB in entry level cards up to 24-32GB in super high-end cards, so 1TB is a huge number to be sure.

One of the things that slows down GPU rendering the most is having to flush and reload textures from the frame buffer, so the idea of having a 1TB frame buffer is intriguing, to say the least (i.e. a lot of drooling). In its press release, AMD mentions that “8K raw video timeline scrubbing was accelerated from 17 frames per second to a stunning 90+ frames per second” in the first demonstration of the Radeon Pro SSG.

Details are still forthcoming, but two PCIe 3.0 m.2 slots on the SSG card can get us up to 1TB of frame buffer. But the question is, how fast will it be? In traditional SSD drives, m.2 enjoys a large bandwidth advantage over regular SATA drives as long as it can access the PCIe bus directly. Things are different if the SSG card is an island in and of itself, with the storage bandwidth contained on the card itself, so it’s unclear how the m.2 bus on the SSG card will do in communicating with the GPU directly. I tend to doubt we’ll see the same bandwidth between GDDR5 memory and an on-board m.2 card, but only real-world testing will be able to suss that out.
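To put that doubt in rough numbers, here’s a back-of-the-envelope sketch. The figures are illustrative assumptions on my part (a 256-bit GDDR5 bus at 8 Gbps effective, and PCIe 3.0’s roughly 0.985 GB/s per lane after encoding overhead), not AMD-published specs for the SSG:

```python
def bus_bandwidth_gbps(bus_width_bits, effective_clock_gbps):
    """Peak theoretical bandwidth in GB/s for a GDDR-style memory bus."""
    return bus_width_bits / 8 * effective_clock_gbps

# Assumed figures for illustration only, not AMD-published SSG specs:
gddr5 = bus_bandwidth_gbps(256, 8.0)  # 256-bit bus at 8 Gbps effective
pcie3_per_lane = 0.985                # GB/s per PCIe 3.0 lane after 128b/130b encoding
ssg_m2 = 2 * 4 * pcie3_per_lane       # two m.2 slots, four lanes each

print(f"GDDR5: {gddr5:.0f} GB/s, SSG m.2: {ssg_m2:.2f} GB/s, "
      f"ratio ~{gddr5 / ssg_m2:.0f}x")
```

Even under these generous assumptions, on-board GDDR5 comes out roughly 30 times faster than two PCIe 3.0 x4 slots combined — which is why the SSG’s win should come from avoiding reloads entirely, not from raw frame-buffer speed.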

But, I believe we’ll immediately see great speed improvements in GPU rendering of huge datasets since the SSG will circumvent the offloading and reloading times between the GPU and CPU memories, as well as potentially boosting multi-frame GPU rendering of CG scenes. But in cases where the graphics sub-system doesn’t need to load more than a dozen or so GBs of data, on board GDDR5 memory will certainly still have an edge in communication speed with the GPU.

So, needless to say, but I’m going to say it anyway: I am very much looking forward to slapping one of these into my rig to test GPU render times, as well as interactivity with large datasets in Maya and 3ds Max. And as long as the Radeon Pro SSG can avoid hitting up the CPU and main system memory, GPU render gains should be quite large on the whole.

Wait, There’s More
On to other AMD announcements at the show: the affordable Radeon Pro WX line-up (due in the fourth quarter of 2016), which refreshes the FirePro-branded line. The Radeon Pro WX cards are based on AMD’s RX consumer cards (like the RX 480), but with higher-level professional driver support and certification with professional apps. The end goal of professional work is stability as well as performance, and AMD promises a great dedicated support system around its Radeon Pro line to give us professionals the warm and fuzzies we always need over consumer-level cards.

The top-of-the-line Radeon Pro WX7100 features 8GB of memory on a 256-bit bus and workstation-class performance at less than $1,000; I believe it replaces the FirePro W8100. This puts the four-simultaneous-display-capable WX7100 in line to compete with the Nvidia Quadro M4000 in pricing at least, if not in specs as well. But it’s hard to say where the WX7100 will sit in performance. I do hope it’s somewhere in between the Quadro M4000 and the $1,800 M5000. It’s difficult to answer that based on paper specs, as AMD’s (OpenCL) Compute Unit counts and Nvidia’s CUDA core counts are hard to compare directly.

The 8GB Radeon Pro WX5100 and 4GB WX4100 round out the new announcements from SIGGRAPH 2016, putting them in line to compete somewhere between the 8GB Quadro M4000 and the 4GB M2000 and K1200 cards in performance. It seems, though, that AMD’s top of the line will still be the $3,400+ FirePro W9100 with 16GB of memory, though a 32GB version is also available.

I have always thought AMD brought a really good price-to-performance ratio, and it seems the Radeon Pro WX line will continue that tradition. I look forward to benchmarking these cards in real-world CG use.

Dariush Derakhshani is a professor and VFX supervisor in the Los Angeles area and author of Maya and 3ds Max books and videos. He is bald and has flat feet.

Pixar Animation Studios has released Universal Scene Description (USD) as an open source technology in order to help drive innovation in the industry. Used for the interchange of 3D graphics data through various digital content creation tools, USD provides a scalable solution for the complex workflows of CG film and game studios. With this initial release, Pixar is opening up its development process and providing code used internally at the studio.

“USD synthesizes years of engineering aimed at integrating collaborative production workflows that demand a constantly growing number of software packages,” says Guido Quaroni, VP of software research and development at Pixar. USD provides a toolset for reading, writing, editing and rapidly previewing 3D scene data. With many of its features geared toward performance and large-scale collaboration among many artists, USD is ideal for the complexities of the modern pipeline. One such feature is Hydra, a high-performance preview renderer capable of interactively displaying large data sets.
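For context on what that scene data looks like: USD’s text format, .usda, is human-readable. The following is a minimal, hypothetical scene of my own construction (not from Pixar’s release notes) showing the kind of file the toolset reads, writes and previews:

```
#usda 1.0

def Xform "hello"
{
    def Sphere "world"
    {
        double radius = 2.0
    }
}
```

Files like this can be layered and composed across departments, which is what makes the format suited to large-scale collaboration among many artists.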

“With USD, Hydra, and OpenSubdiv, we’re sharing core technologies that can be used in filmmaking tools across the industry,” says George ElKoura, supervising lead software engineer at Pixar. “Our focus in developing these libraries is to provide high-quality, high-performance software that can be used reliably in demanding production scenarios.”

Along with USD and Hydra, the distribution ships with USD plug-ins for some common DCCs, such as Autodesk’s Maya and The Foundry’s Katana. To prepare for open-sourcing its code, Pixar gathered feedback from various studios and vendors who conducted early testing. Studios such as MPC, Double Negative, ILM and Animal Logic were among those who provided valuable feedback in preparation for this release.

At the SIGGRAPH 2016 show, AMD will webcast a live showcase of new creative graphics solutions during its “Capsaicin” event for content creators. Taking place today at 6:30pm PDT, the event is hosted by Radeon Technologies Group SVP and chief architect Raja Koduri.

The Capsaicin event at SIGGRAPH will showcase advancements in rendering and interactive experiences. The event will feature:
▪ Guest speakers sharing updates on new technologies, tools and workflows.
▪ The latest in virtual reality with demonstrations and technology announcements.
▪ Next-gen graphics products and technologies for both content creation and consumption, powered by the Polaris architecture.

A realtime video webcast of the event will be accessible from the AMD channel on YouTube; a replay will be posted a few hours after the live event concludes and will remain available for one year.

Vicon, which makes precision motion tracking systems and match-moving software, will be at SIGGRAPH this year showing its two new camera families, Vero and Vue. The new offerings join Vicon’s flagship camera, Vantage.

Vero is a range of high-def, synchronized optical video cameras for providing realtime video footage and 3D overlay in motion capture. Designed as an economical system for many types of applications, the Vero range includes a custom 6-12 mm variable focus lens that delivers an optimized field of view, as well as 2.2 megapixel resolution at 330Hz.

With these features, users can capture fast sport movements and multiple actors, drones or robots with low latency. The range also includes a 1.3 megapixel camera. Vero is compatible with existing Vicon T-series, Bonita and Vantage cameras as well as Vicon’s Control app, which allows users to calibrate the system and make adjustments on the fly.

With HD resolution and variable focal lengths, the Vicon Vue camera incorporates a sharp video image into the motion capture volume. It also enables seamless calibration between optical and video volumes, ensuring the optical and video views are aligned to capture fine details.

Redshift Rendering has updated its GPU-accelerated rendering software to Redshift 2.0. This new version includes new features and pipeline enhancements to the existing Maya and Softimage plug-ins. Redshift 2.0 also introduces integration with Autodesk 3ds Max. Integrations with Side Effects Houdini and Maxon Cinema 4D are currently in development and are expected later in 2016.

New features across all platforms include realistic volumetrics, enhanced subsurface scattering and a new PBR-based Redshift material, all of which deliver improved final render results. Starting July 5, Redshift is offering 20 percent off new Redshift licenses through July 19.

If you are a character designer thinking about attending the SIGGRAPH conference, July 24-28 in Anaheim, this is your lucky week: the winner of the Spirit of SIGGRAPH design contest will receive complimentary full conference registration for SIGGRAPH 2016. Travel and incidental expenses are not included. The winning design will also be featured in promotion of the conference through social media, and the designer will be credited.

Designs must be submitted by midnight on April 15. The winner will be announced on May 2. The contest is seeking character designs that “embody the spirit of SIGGRAPH and should be original creations.” Submissions will be judged on any of the following criteria:
– Creativity
– Design
– Relevance to SIGGRAPH
– Suitability to 2D graphic design, 3D animatable design and 3D printable solid design
– Ability to be turned into a wearable costume
– Suitability for use in a variety of on-site promotions

The winner will be chosen by this year’s SIGGRAPH event chair, Mona Kasra; next year’s event chair, Jerome Solomon; and a board of judges made up of 2016 program chairs and experts across the animation and design industries.

Motion- and facial-capture companies Animatrik Film Design and Dimensional Imaging (DI4D) have launched a new collaboration based on their respective mocap expertise. The alliance will deliver facial performance-capture services to the VFX and video game communities across North America.

Animatrik technology has been used on such high-profile projects as Image Engine’s Chappie, Microsoft’s Gears of War series and Duncan Jones’ upcoming Warcraft. DI4D’s technology has appeared in such shows as the BBC’s Merlin and video games like Left 4 Dead 2 and Quantum Break. The new collaboration will allow both companies to bring even more true-to-life animation to similar projects in the coming months.

Animatrik has licensed DI4D’s facial performance-capture software and purchased DI4D systems, which it will operate from its Vancouver, British Columbia, and Toronto motion-capture studios. Animatrik will also offer an “on-location” DI4D facial performance-capture service, which has been used before on projects such as Microsoft’s Halo 4.

IKinema, a provider of realtime animation software for motion capture, games and virtual reality using inverse kinematics, has launched a new natural language interface designed to enable users to produce animation using descriptive commands based on everyday language. The technology, code-named Intimate, is currently in prototype as part of a two-year project backed by the UK government’s Innovate UK program.

The new interface supplements virtual reality technology such as Magic Leap and Microsoft HoloLens, offering new methods for creating animation that are suitable for professionals but also simple enough for a mass audience. The user can bring in a character and then animate the character from an extensive library of cloud animation, simply by describing what the character is supposed to do.

Intimate is targeted at many applications, including pre-production, games, virtual production, virtual and augmented reality and more. The technology is expected to become commercially available in 2016, and the aim is to make an SDK available for any animation package. Currently, the company has a working prototype and has engaged with top studios for the purpose of technology validation and development.