
WHEAT:NEWS TV Feb 2016 - Vol 3, No.2


TR-03 Roadmap to IP

Photo: The first interoperability testing for TR-03 took place last month in Woodlands, Texas.

The road to IP is paved with good intentions and a whole lot of protocols, standards and acronyms. Just when we thought there couldn’t possibly be room for one more, along comes TR-03 and a whole new SDI-to-IP path for live video production. What we immediately liked about this set of recommendations released by the Video Services Forum (VSF) is its endorsement of existing standards, like AES67 for audio streaming and RFC 4175 for carrying uncompressed video. But, like you, we had a lot of questions. How does breaking audio and video out into separate IP streams affect synchronization between the two? Does this migration path include HD-SDI? And, the big one: Will TR-03 eventually become a SMPTE standard? We talked to Mike Bany, who led the VSF Studio Video Over IP Activity Group that set the new TR-03 recommendations.

WS: As you know, Wheatstone makes an IP audio network that is compatible with the AES67 audio standard specified in TR-03. So for our customers, a TR-03 environment means they’ll be able to simply plug into the IP architecture without having to worry about HD-SDI encapsulation, right?

MB: That’s the goal. The benefit is that in a lot of studio video productions, the audio comes from a different source than the video. With TR-03, we will be able to pull in the audio and video from two different places and describe them as a bundle, so that the receiver just takes in both and automatically reassembles them using the correct time code. As it is now, in the HD-SDI realm, you actually have to embed the audio; you have to have a box that embeds it. With this, you just stream both streams to the receiver that needs them, and you’re done. There’s no extra box to embed them.

WS: Nice. Let’s talk a little about the bigger picture and the benefits of migrating from baseband HD-SDI to IP technology.

MB: As you know, HD-SDI encapsulates audio, video and ancillary data, whereas what we’re proposing for TR-03 is separate IP flows for each. One of the big benefits of that is being able to send audio or video to multiple devices and remix or recombine them however you want. For example, you can have several alternate languages go along with the video. You can have all these different streams tied to the same visual. You don’t have to embed them. Or, what if you needed to insert a bug, like a logo, on the screen? With HD-SDI you have to take an entire bitmap of the screen, put that bug in, and then re-encapsulate it with all the other stuff. With TR-03, each packet specifies pixels on the screen. So you could actually have a device that only looks for those pixels and only operates on those. This allows less complex devices to do operations on individual flows rather than on the entire thing. Even closed captioning has to be embedded on one of the lines of the video today. With this capability, closed captioning can be a separate data stream. And with HD-SDI, the ancillary data space is a huge amount of overhead that is mostly unused. With TR-03, we are only sending the active video and ancillary data, so it actually cuts the overhead, and you’re able to fit more flows in less bandwidth.
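Bany’s observation that “each packet specifies pixels on the screen” is a property of RFC 4175’s payload format: after the RTP header, each packet carries a 16-bit extended sequence number and one or more 6-byte segment headers giving the byte length, scan line, and pixel offset of each run of active video in the packet. Here is a minimal, illustrative parse of that header in Python (our own sketch following RFC 4175, not anything from the TR-03 interop code):

```python
import struct

def parse_rfc4175_headers(payload: bytes):
    """Parse the RFC 4175 payload header: a 16-bit extended sequence
    number followed by one or more 6-byte segment headers, each giving
    the length, scan line, and pixel offset of a segment of active
    video carried in this packet."""
    (ext_seq,) = struct.unpack_from("!H", payload, 0)
    segments, pos = [], 2
    while True:
        length, line_f, offset_c = struct.unpack_from("!HHH", payload, pos)
        pos += 6
        segments.append({
            "length": length,              # bytes of pixel data
            "field": line_f >> 15,         # F bit: field of an interlaced frame
            "line": line_f & 0x7FFF,       # scan line within the frame
            "offset": offset_c & 0x7FFF,   # first pixel of the segment
        })
        if not (offset_c & 0x8000):        # C bit clear: last segment header
            break
    return ext_seq, segments
```

A logo inserter or captioning device could inspect just these headers and discard packets whose lines it doesn’t care about, which is exactly the “operate on individual flows” advantage described above.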


WS: IP fits more flows in less bandwidth? How much are we talking here?

MB: There is something like 16 to 25 percent additional overhead (with HD-SDI flows compared to IP flows). It’s not a lot but it’s enough that you might be able to get one or two additional uncompressed streams on a 10 gig interface.
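Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is our own rough math, not a VSF calculation: it compares the full 1.485 Gb/s HD-SDI rate of a 1080i59.94 signal against its active picture alone, assuming 10-bit 4:2:2 sampling (20 bits per pixel) and ignoring IP/RTP framing overhead.

```python
# Rough check of the overhead range quoted in the interview.
# Assumptions (ours): 1080i59.94, 10-bit 4:2:2, framing overhead ignored.

SDI_RATE = 1.485e9                       # HD-SDI line rate, bits/s
FPS = 30000 / 1001                       # 29.97 frames/s
ACTIVE_RATE = 1920 * 1080 * 20 * FPS     # active picture only, bits/s

overhead = SDI_RATE / ACTIVE_RATE - 1    # blanking/ancillary overhead
LINK = 10e9                              # 10 GbE

print(f"overhead: {overhead:.1%}")
print(f"streams per 10 GbE, SDI-encapsulated: {int(LINK // SDI_RATE)}")
print(f"streams per 10 GbE, active-only:      {int(LINK // ACTIVE_RATE)}")
```

With these assumptions the overhead works out to roughly 19 percent, inside Bany’s 16-to-25 percent range, and a 10 GbE link goes from six encapsulated streams to eight active-only streams, matching his “one or two additional” estimate.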

WS: One of the questions raised about separate audio and video flows is what this will mean for synchronization (lip sync, audio imaging). TR-03 provides for streams to be synced via IEEE 1588 (Precision Time Protocol), and if every packet is time stamped accurately, we should see better synchronization between streams than current SDI solutions, right?

MB: Synchronization was carefully considered during the development of TR-03. We are just getting ready to test that. The first interop test we did was with video, then we will add audio, and then the ancillary data. So far we’ve interoperated with RFC 4175 and interoperated with AES67 separately. The next step is to combine those together with the timing and sync and test that out. So that’s what we will be working on after VidTrans16 (annual conference) in New Orleans this month.
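The underlying model is worth illustrating. Assuming the convention standardized in SMPTE ST 2059-2, where each essence’s RTP media clock free-runs from the PTP epoch, the RTP timestamp is simply the PTP time multiplied by the media clock rate, modulo 2^32. A receiver can then map audio (48 kHz clock) and video (90 kHz clock) timestamps back to the same wall-clock instant. The sketch below is our own illustration under that assumption; the function names are hypothetical:

```python
# How a shared PTP wall clock realigns separately streamed essences.
# Assumes ST 2059-2 style media clocks: rtp_ts = ptp_time * rate mod 2**32.

TWO32 = 2**32

def rtp_timestamp(ptp_seconds: float, clock_rate: int) -> int:
    return int(ptp_seconds * clock_rate) % TWO32

def to_ptp_seconds(ts: int, clock_rate: int, near: float) -> float:
    """Invert a timestamp near a known wall-clock time (timestamps
    wrap every 2**32 ticks, so a rough reference is needed)."""
    wraps = round((near * clock_rate - ts) / TWO32)
    return (ts + wraps * TWO32) / clock_rate

now = 1_455_000_000.0                  # same PTP instant at both senders
vts = rtp_timestamp(now, 90_000)       # video RTP clock, 90 kHz
ats = rtp_timestamp(now, 48_000)       # audio RTP clock, 48 kHz

# The receiver maps both back to wall-clock time, and they agree:
skew = to_ptp_seconds(vts, 90_000, now) - to_ptp_seconds(ats, 48_000, now)
print(f"recovered skew: {skew * 1e6:.1f} microseconds")
```

Because both timestamps derive from the same PTP instant, the recovered skew is zero; in practice, lip sync accuracy comes down to how well each sender’s clock is disciplined to the PTP grandmaster.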

WS: There’s some confusion about whether TR-03 supports HD-SDI, and whether it provides any kind of migration path from HD-SDI to IP.

MB: Actually, we architected TR-03 in such a way that you can take in an SDI stream and reassemble it at the other end. The VSF also introduced TR-04 at the same time, and if you look at it, you’ll see that its title is Utilization of 2022-6 Media Flows within a TR-03 environment. So it actually adds an additional flow type, that being 2022-6, which is HD-SDI encapsulated in IP.

WS: One last question, and it’s an important one. Can we expect TR-03 to become a SMPTE standard anytime soon?

MB: We’ve submitted the work to SMPTE, and we are very optimistic that it will become a SMPTE standard.

WS: Thanks, Mike!

Mike Bany is the owner of DVBLink, which provides network architecture and support for live production clients such as Fox Sports. He leads the VSF Studio Video Over IP Activity Group that set the new TR-03 recommendations. VSF’s annual conference and exposition, VidTrans16, will be held in New Orleans Feb. 23 to 25 at the Sheraton New Orleans Hotel.

The Secret Life of Broadcast Gear

By Scott Johnson

When you think of Wheatstone audio processing, you naturally think of broadcasting. But if an audio engineer tucked an Aura8-IP under his arm and left the station, would he find other uses for it? The answer, I found out recently, is a resounding yes!

Wheatstone processing gear has myriad applications in the broadcast world. There’s almost no corner of a broadcast facility where a Wheatstone processor can’t be of assistance. But we rarely think of what we might be able to do with, say, an Aura8-IP outside the station’s doors. I did wonder. There’s a big, wide world of audio out there, waiting to be tamed.

A couple of years ago, I mixed sound for a local theatre’s production of a musical. On the spur of the moment and with some assistance from Wheatstone’s Phil Owens, who played bass in the show’s orchestra, I recorded the four vocals and the band to eight discrete tracks on a digital audio workstation (DAW) for later mixing. I recently got around to completing that work; it was quite a challenge given the impromptu nature of the tracks and the characteristics of a live performance that was not being produced with recording in mind. After a few weeks of work, I had fully mixed material, ready to go. But could it benefit from a bit of mastering?

To answer that question and satisfy my curiosity about unconventional uses of Wheatstone processing, one day I brought the tracks with me to the office and did some experimenting. I thought through several options and arrived at what I thought was probably the simplest test setup.

First, I installed a digital audio workstation (Ableton Live) on a PC in my office, one which already had a WheatNet-IP driver set up and running from some previous testing. I configured the workstation to send the audio of my final mixes, which I imported as uncompressed WAV files, out through a stereo channel of the driver. I then created a second audio track and configured it to take its input from a WheatNet-IP driver channel and record it. You can see the basic Ableton setup in Figure 1.

Figure 1 - Ableton Live

I then walked back to the factory floor and borrowed an Aura8-IP BLADE-3, fresh off the production line, and brought it to my office. (I love the smell of new gear in the morning!) Because there would be only two devices on this network, I connected the processor’s gigabit Ethernet port (which is auto-MDI/MDI-X) directly to the corresponding port on my PC. Following the simple setup wizard dialog on the unit’s front panel, in short order I had the two communicating perfectly.

I then brought up WheatNet-IP Navigator, the software used to set up WheatNet-IP networks, and ensured that both devices showed up. I then did some simple routing as shown in Figure 2. I routed the audio from the DAW (PC Audio) to the first processing channel of the Aura8-IP (shown as Proc A on the crosspoint matrix). I routed the output of that processor channel back to the PC. (The destination is shown as OMT Rec 1 because this machine had previously been set up to test an automation system, and that was the name already assigned. I could have changed it if I liked.)

Figure 2 - Navigator Crosspoint Routing

I also routed the processor’s output to the Aura8-IP’s headphone jack for listening, and also to a digital output, just so that I could see it on a front panel meter.

What I now had was the ability to play the audio back, looped through the processor, and record the result on a separate track. The last step was to run the Aura8-IP Pro GUI, the software which would allow me to adjust the parameters of the processor from the computer, shown in Figure 3.

Figure 3 - Processor Pro GUI

My first thought was to explore using simple AGC functions to tame the widely varying levels of the stage performances. I chose a particularly dynamic track for testing; that is, one with very soft and very loud passages.

The best place to start was with a simple preset. I chose the “HD 3-Band Neutral” factory preset in the GUI; this gave me light AGC and compressor action and a fairly flat EQ. The results weren’t perfect, and I didn’t expect them to be, but they were in the ballpark.

Over the course of several passes through the musical number, I slowly adjusted the AGC, equalization, and compression to my own taste. Using the Pro GUI made the adjustments very easy to perform but gave me extremely detailed control over every parameter. Had I wished to, I could also have used the “Guru GUI,” which simplifies operation by providing very general controls. But like most audio engineers, I am a control freak when it comes to sound.

After an hour or so (again, control freak here) I had managed to tailor the processor's settings to create a sound that made me happy. It kept the levels much more consistent, yet preserved the feel of the material’s original dynamics.

Using the GUI made adjustments extremely convenient; I was able to easily switch between Ableton Live’s window for transport and playback control, Navigator for routing, and the Aura8-IP Pro GUI for processing as I fine-tuned my adjustments and bounced the track.


The results were pretty impressive, even for a first pass. Here’s the track before any processing took place, including a short audio clip from a minute or two in at a build point in the music:

(Audio clip: unprocessed mix)

As you can see, the song starts off quite soft, with just piano and one very subdued vocal. But toward the end, it builds to a strong, dense crescendo. By combining the action of the Aura8-IP’s very smooth AGC and very precise compressor, working in three separate bands, I was able to tame that a bit and get the result seen here. The audio clip is from the same place:

(Audio clip: processed mix)

From the waveform alone, you can see that the softer passages have been brought up considerably, but in a gentle way with long release times that preserve the dynamics of the music. We haven’t squashed the piece’s dynamic range into oblivion; we’ve merely given the soft spots a very gentle boost so they’re more audible. The slight reduction in contrast makes the listener less likely to reach for the volume control, especially in a noisy environment where those parts might disappear.
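To make that behavior concrete, here is a toy single-band, boost-only AGC in Python. It is purely illustrative (the Aura8-IP’s actual algorithm is multiband and far more sophisticated), but it shows the principle at work: a fast attack tracks the envelope, while a long release lets the applied gain change slowly, so soft passages are lifted gently without audible pumping.

```python
import math

def simple_agc(samples, rate, target_db=-20.0, max_boost_db=12.0,
               attack_s=0.05, release_s=2.0):
    """Toy single-band, boost-only AGC (illustrative; not Wheatstone's
    algorithm). A fast attack lets the envelope follower track level
    jumps; a long release eases the gain slowly, preserving dynamics."""
    atk = math.exp(-1.0 / (attack_s * rate))
    rel = math.exp(-1.0 / (release_s * rate))
    env, gain_db, out = 1e-9, 0.0, []
    for x in samples:
        mag = abs(x)
        coeff = atk if mag > env else rel
        env = coeff * env + (1.0 - coeff) * mag        # envelope follower
        env_db = 20.0 * math.log10(max(env, 1e-9))
        want = min(max(target_db - env_db, 0.0), max_boost_db)
        gain_db += (want - gain_db) * (1.0 - rel)      # slow gain easing
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

Feed it a passage sitting 40 dB below full scale and the gain eases up over a couple of seconds toward the 12 dB cap, which is exactly the “gentle boost for the soft spots” effect described above.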

What you don’t see is the flat rectangular waveform so typical in much popular music today, where loudness is the priority. If you’re processing audio for loudness, to compete with other tracks in an MP3 playlist or other stations on the air, clean, clear loudness is what you need, and our gear can certainly do that. But here we have a case where loudness takes a back seat to the importance of a natural sound with the dynamic range of a live performance. It’s nice to know that when properly used, our processors easily accommodate those needs, too.

The ability of the Aura8-IP Pro GUI to save and recall presets on the fly was invaluable as I continued to experiment with “mastering” this show audio. I could save a conservative set of parameters, then get as wild as I liked with the settings from there, knowing I could easily recall the saved preset if things went terribly wrong. I could also play various bits of audio while recalling several different presets in sequence in order to make comparisons and determine what processing strategies gave me the best overall results.

Having multiple processing channels and the routing to use them was also extremely nice. While it didn’t make the final cut, I did use one configuration with processor A running a slow, single-band AGC, routed to the input of processor B running a multiband setup with faster settings. A couple of clicks in Navigator made the daisy-chain routing a snap, and ready-made factory presets gave me good starting points with each processor.

My eventual conclusion was that if I could achieve such compelling results using only one or two of the eight processing channels available in the Aura8-IP, the possibilities for other uses were endless. Like a pocket multi-tool, the box put a versatile and powerful set of audio processing and routing implements into my hands to use in any way the situation dictated. I could easily see this box finding a comfortable home in the outboard rack at the front-of-house console at a concert or theatre gig, in the credenza at a recording studio, or by the console in a remote production truck. Or, of course, in your on-air studio, TOC, production studio, or remote van.

About Scott: Scott Johnson is a Wheatstone systems engineer as well as the company’s webmaster, social media manager, newsletter editor, and video director/producer. He’s also an audio engineer and has spent most of his life recording, reinforcing, mixing, and mastering sound. His most recent credits as sound designer / A1 include regional productions of "Les Miserables", "RENT", "Pippin", "Evita", "In The Heights", and "Into The Woods". He is looking forward to mixing “You’re a Good Man, Charlie Brown” at Carteret Community Theatre in early February.

WheatNet-IP for TV: Associated Connections How-To

In this, the first in a series of how-to videos, Phil Owens introduces you to one of Wheatstone's TV control surfaces, and then demonstrates the creation of associated connections, an important WheatNet-IP feature that automates the routing of such things as IFBs and mix-minus feeds.

Your IP Question Answered

Q: Why is a distributed network like the WheatNet-IP better for redundancy than a centralized system?

A: Centralizing network management is a single point of failure waiting to happen, whereas distributing network resources to every IP point naturally builds in redundancy. If one part of the network fails for any reason, the rest can keep on functioning. Each IP connection point (or WheatNet-IP BLADE) stores the entire configuration of the network onboard, which means that failover is immediate. And because WheatNet-IP BLADEs talk to each other, adding onto the network is plug-and-play for easy system expansion.