One request I get frequently from consulting clients and readers of this column is for advice on how to speed up transcoding. If your house of worship records on MiniDV or HDV or even direct to DVD, those formats carry bitrates roughly 10 times too high to stream over the web to the average user. The initial recording, then, needs to be transcoded to a streaming format such as Adobe Flash (VP6 or H.264), Windows Media, or QuickTime.
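To see where that factor of 10 comes from, consider a quick back-of-the-envelope comparison. The figures below are round numbers, not measurements: MiniDV and HDV both record at roughly 25Mbps, while a generous web delivery bitrate for the average viewer lands in the low single digits.

```python
# Back-of-the-envelope bitrate comparison (round, approximate figures).
DV_BITRATE_MBPS = 25.0   # MiniDV/HDV video runs at roughly 25 Mbps
WEB_BITRATE_MBPS = 2.0   # a generous bitrate for the average web viewer

ratio = DV_BITRATE_MBPS / WEB_BITRATE_MBPS
print(f"The source is roughly {ratio:.0f}x too big to stream directly.")
```

Pick a more conservative web bitrate and the gap only widens, which is why transcoding is unavoidable rather than optional.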

While there’s significant confusion about which streaming format or codec to choose, the best approach to speeding up the transcoding process is equally misunderstood.

There are four ways to speed up encoding:

Multiple cores/processors. One of the great fallacies in today’s transcoding solutions is that a faster processor is automatically better: buy the fastest processor available, the thinking goes, and more processors will yield even faster transcodes. Oh, if only that were true. Right now the majority of software encoding tools don’t leverage more than two processors, and others don’t take advantage of multiple cores or processors at all, even if a machine has four dual-core processors (eight cores). The biggest issue is the lack of programming skill sets: Harnessing multiple processors or multiple cores is a daunting task, and it’s one that is best implemented at the operating system level. This issue is being addressed, somewhat, by the addition of multicore awareness in the operating system (as part of the HAL, or Hardware Abstraction Layer), but it is still in its early stages.
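For the curious, here is the basic shape of what a multicore-aware encoder does: split the source into frame ranges and encode the ranges in parallel, one per core. This is a minimal sketch, not any vendor’s actual implementation; the `encode_segment` function is a hypothetical stand-in for a real encoder call.

```python
from concurrent.futures import ProcessPoolExecutor

def split_into_segments(total_frames, num_workers):
    """Divide a clip into roughly equal frame ranges, one per core.
    The last segment absorbs any remainder frames."""
    size = total_frames // num_workers
    segments, start = [], 0
    for i in range(num_workers):
        end = total_frames if i == num_workers - 1 else start + size
        segments.append((start, end))
        start = end
    return segments

def encode_segment(segment):
    # Hypothetical stand-in for a real encoder invocation that
    # compresses just this frame range.
    start, end = segment
    return f"encoded frames {start}-{end}"

if __name__ == "__main__":
    segments = split_into_segments(total_frames=9000, num_workers=4)
    # One worker process per segment, so all four cores stay busy.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(encode_segment, segments))
    print(results)
```

The hard part the sketch glosses over is exactly what trips up real tools: stitching the encoded segments back together without artifacts at the seams, which is why so few products attempt it.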

Programmers with the skills to optimize applications for multicore systems face another issue: how to keep their program from crowding out every other program that wants access to the CPU. It turns out that full-throttle processing isn’t necessarily the best approach, and those with the skills to balance multiple cores or processors are often also versed in one of the other areas noted below, especially DSPs. These programmers will often prefer to create their own hardware solutions rather than muck around with a software-only solution that puts their program at the mercy of Word, Excel, or iTunes running in the background.
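One common software-only answer to the "don’t hog the CPU" problem is simply to launch the encoder at a lower scheduling priority so foreground applications stay responsive. A minimal sketch on a Unix-style system (the function name and the niceness value are illustrative choices, not a standard):

```python
import os
import subprocess
import sys

def run_encoder_nicely(cmd, niceness=15):
    """Launch a command-line encoder at reduced CPU priority.
    A higher nice value means lower priority, so the encoder yields
    the CPU to interactive programs (Unix-style systems only;
    Windows uses a different priority-class mechanism)."""
    def drop_priority():
        os.nice(niceness)  # applied in the child before the encoder starts
    return subprocess.run(cmd, preexec_fn=drop_priority)
```

A transcode launched this way finishes in roughly the same wall-clock time on an idle machine but stops stealing cycles the moment the user touches Word or iTunes.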

Multiple computers. A variation on the first strategy, this approach ties multiple machines together to render. This is often accomplished by installing a piece of software on each machine and then designating one machine to control the rendering/compressing. In a home or edit bay with multiple idle machines it makes sense, but the problem, again, is best solved at the OS level.
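The controlling machine’s job boils down to dealing out clips across the idle machines and collecting the results. A toy sketch of the dealing-out step, with hypothetical machine and clip names (real render-farm software would also launch each job remotely and monitor it):

```python
from itertools import cycle

def assign_jobs(clips, machines):
    """Round-robin: hand each clip to the next machine in the pool.
    Returns a dict mapping machine name -> list of assigned clips."""
    pairing = {}
    for clip, machine in zip(clips, cycle(machines)):
        pairing.setdefault(machine, []).append(clip)
    return pairing

# Example: three sermon segments spread across two idle edit-bay machines.
plan = assign_jobs(["intro.dv", "sermon.dv", "closing.dv"],
                   ["edit-bay-1", "edit-bay-2"])
print(plan)
```

Round-robin is the simplest possible scheduler; commercial render controllers weigh each machine’s speed and current load before assigning work.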

Digital Signal Processors (DSPs). These specialized chips are often found in stand-alone hardware transcoding appliances or on PCI cards that are inserted into slots on a desktop computer. Because the DSP is a low-power chip, multiples can be run in parallel (some cards house eight or 16 DSPs), and each DSP can be flushed and loaded with new codecs, so a single board can encode or decode multiple types of media.

Texas Instruments (TI) is one of the leaders in DSPs but had, until the last few years, failed to provide chip programmers with media-centric guidance on codecs or video formats, which meant that DSPs were less likely to be used in consumer products (with exceptions such as Pro Tools and rendering-farm cards). TI’s move toward the DaVinci platform, which pairs DSPs with ARM processors (the low-power CPUs found in some smartphones and PDAs and now in IPTV set-top boxes), has turned the tide. TI now bundles key codecs of various flavors (VC-1, Windows Media, H.264, AAC, MP3, etc.) with the DaVinci platform of DSPs, which means cards may trickle down to consumers in fairly short order.

A slight variation on this theme is the use of a USB "stick" to offload video processing to a dedicated encoding chip. Several of these are already on the market for tasks such as encoding analog video to H.264, and some use DSPs.

Graphics Processing Units (GPUs). At the heart of every recent graphics card, and most legacy ones, is a robust chip called a GPU. As gamers know, GPUs from ATI, NVIDIA, and a few others are designed to handle one of the most critical tasks on a desktop or laptop computer: redrawing the screen as quickly and smoothly as possible.

The GPU has potential in both transcoding and decoding video since, like the CPU, it can handle both compression and playback. During compression, most machines’ graphics systems sit idle while the CPU churns away, leaving a lot of "wasted" processing power that could be put toward encoding. Some leading GPU manufacturers and a few key third-party developers are beginning to explore this area, looking for ways to use that idle processing power to speed up transcoding.
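To make the idea concrete: the tools emerging from this work typically let you select a GPU-backed encoder in place of the CPU one. As one later, real-world example of the pattern, FFmpeg builds with NVIDIA support expose an `h264_nvenc` encoder; the helper function below is a hypothetical sketch that merely assembles such a command line, and whether it runs depends entirely on your FFmpeg build and graphics hardware.

```python
def gpu_encode_command(src, dst, bitrate="1500k"):
    """Build an ffmpeg command that hands H.264 encoding to the GPU
    via the h264_nvenc encoder, leaving the CPU largely free.
    (Availability depends on the ffmpeg build and the graphics card.)"""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "h264_nvenc",  # GPU encoder instead of a CPU-based one
        "-b:v", bitrate,       # target video bitrate for web delivery
        "-c:a", "aac",         # audio still encoded on the CPU
        dst,
    ]

print(" ".join(gpu_encode_command("sermon.dv", "sermon.mp4")))
```

The point of the sketch is the division of labor: video compression moves to the otherwise-idle GPU while the CPU handles lightweight work such as audio.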

The other side of the GPU equation is playback, and this is where much of the effort is going: playing back content full screen (even if it’s not full-screen quality). This is where companies such as Adobe, with the new version of the Flash Player, are using GPUs to enable playback of H.264 content that normally requires very high CPU processing power to decode. While many older machines may not have the CPU power to do this, their GPUs/graphics cards are more than adequate.

Hopefully this overview helps as you make decisions about speeding up the process of getting the message out.

Tim Siglin (writer at braintrustdigital.com) writes and consults on digital media business models and "go to market" strategies. He is chairman of Braintrust Digital, Inc., a digital media production company, and co-founder of consulting firm Transitions, Inc.