AES San Francisco 2008 Broadcasting Event Details

Live Sound Symposium: Surround Live VI—Acquiring the Surround Field

Abstract: Building on the five previous, highly successful Surround Live symposia, Surround Live VI will once again explore in detail the world of live surround audio.

Frederick Ampel, President of the consultancy Technology Visions, in cooperation with the Audio Engineering Society, brings this year's event back to San Francisco for the third time. The event will feature a wide range of professionals from the televised sports arena, public radio, and the digital processing and encoding sciences.

The day’s events will include formal presentations, special demonstration materials in full surround, and interactive discussions with presenters. Seating is limited, and previous events have sold out quickly. Register early to ensure you will be able to attend.

Further details will be added as they become available.

Thursday, October 2, 9:00 am — 10:45 am

B1 - Listening Tests on Existing and New HDTV Surround Coding Systems

Abstract: With the advent of HDTV services, the public is increasingly being exposed to surround sound presentations in so-called home theater environments. However, the restricted bandwidth available into the home, whether by broadcast or broadband, means that there is increasing interest in the performance of low bit rate surround sound audio coding systems for “emission” coding. The European Broadcasting Union Project Group D/MAE (Multichannel Audio Evaluations) conducted extensive listening tests to assess the sound quality of multichannel audio codecs for broadcast applications at bit rates ranging from 64 kbit/s to 1.5 Mbit/s. Several laboratories in Europe contributed to this work.

This Broadcast Session will provide detailed information about these tests and their results. It will also describe how the professional industry, i.e., codec proponents and decoder manufacturers, is taking further steps to develop new products for multichannel sound in HDTV.

Abstract: A discussion of the different codecs used throughout the world, covering USA HD Radio, Eureka, surround sound, electronic program guides, other data services, and public adoption, along with various implementations of digital radio, both terrestrial and satellite, across the globe.

Thursday, October 2, 2:30 pm — 4:30 pm

P3 - Audio for Broadcasting

Chair: Marshall Buck, Psychotechnology, Inc. - Los Angeles, CA, USA

P3-1 Graceful Degradation for Digital Radio Mondiale (DRM)—Ferenc Kraemer, Gerald Schuller, Fraunhofer Institute for Digital Media Technology - Ilmenau, Germany
A method is proposed that is able to maintain adequate transmission quality of broadcast programs over channels strongly impaired by fading. Although there have been many attempts to provide graceful degradation, the so-called “brick wall effect” is inherent in most digital broadcasting systems. The proposed method focuses on the open standard Digital Radio Mondiale (DRM). Our approach is to introduce an additional low bit rate backup audio stream in parallel with the main radio stream. This backup stream bridges dropouts that occur in the main stream. Two versions are evaluated: one uses the standardized HVXC speech codec to encode the parallel backup stream; the other additionally uses a specially developed sinusoidal music codec.
Convention Paper 7517 (Purchase now)
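The paper's core idea, switching to a parallel low bit rate backup stream whenever a main-stream frame is lost to fading, can be sketched as follows. This is a minimal illustration under stated assumptions: the function and stream names are invented, and the paper's actual codecs (HVXC, the sinusoidal music codec) and any crossfading are omitted.

```python
def bridge_dropouts(main_frames, backup_frames):
    """Return one frame per slot, substituting the parallel backup
    stream whenever a main-stream frame was lost (marked None)."""
    output = []
    for main, backup in zip(main_frames, backup_frames):
        if main is not None:
            output.append(("main", main))
        else:
            output.append(("backup", backup))   # bridge the dropout
    return output

# Frames 2 and 3 of the main stream are lost to a fade:
main = ["m0", "m1", None, None, "m4"]
backup = ["b0", "b1", "b2", "b3", "b4"]
print(bridge_dropouts(main, backup))
```

The trade-off the paper explores is the quality of those bridged slots: a speech codec handles talk programs, while the sinusoidal codec handles music.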

P3-2 Factors Affecting Perception of Audio-Video Synchronization in Television—Andrew Mason, Richard Salmon, British Broadcasting Corporation - Tadworth, Surrey, UK
The increasing complexity of television broadcasting has, over the decades, resulted in an increased variety of ways in which audio and video can be presented to the audience after experiencing different delays. This paper explores the factors that affect whether what is presented to the audience will appear to be correct. Experimental results of a study of the effect of video spatial resolution are included. Several international organizations are working to solve the technical difficulties that result in incorrect synchronization of audio and video; a summary of their activities is included. The Audio Engineering Society Standards Committee has a project to standardize an objective measurement method, and a test signal and prototype measurement apparatus contributed to the project are described.
Convention Paper 7518 (Purchase now)

P3-3 Absolute Threshold of Coherence Position Perception between Auditory and Visual Sources for Dialogs—Roberto Munoz, U. Tecnológica de Chile INACAP - Santiago, Chile; Manuel Recuero, Universidad Politécnica de Madrid - Madrid, Spain; Diego Duran, Manuel Gazzo, U. Tecnológica de Chile INACAP - Santiago, Chile
Under certain conditions, auditory and visual information are integrated into a single unified perception, even when they originate from different locations in space. The main motivation for this study was to find the absolute perception threshold of position coherence between sound and image, when moving the image across the screen and when panning the sound. In this manner it is possible to subjectively quantify, by means of the constant stimulus psychophysical method, the maximum difference of position between sound and image considered coherent by a viewer of audio-visual productions. This paper discusses the accuracy necessary to match the position of the sound and its image on the screen. The results of this study could be used to develop sound mixing criteria for audio-visual productions.
Convention Paper 7519 (Purchase now)

P3-4 Clandestine Wireless Development During WWII—Jon Paul, Scientific Conversion, Inc., Crypto-Museum - CA, USA
We describe the many advances in spy radios during and after WWII, starting with the huge B2 suitcase transceiver, through several stages of miniaturization, and eventually down to small modules a few inches in size just after the war. A top secret navigation set known as the S-Phone provided navigation and full duplex voice communications at 380 MHz between clandestine agents, partisans, ships, and planes. The surprising sophistication and fast progress will be illustrated with many photographs and schematics from the collection of the Crypto-Museum. This multimedia presentation includes vintage-era music and radio clips as well as original WWII propaganda graphics.
Convention Paper 7520 (Purchase now)

Thursday, October 2, 2:30 pm — 4:30 pm

B3 - Considerations for Facility Design

Abstract: A roundtable chat with design experts Sam Berkow, John Storyk, and William Hallinsky. We’ve further modified the format of this popular session to allow attendees to hear from several of today’s top facility designers in a more relaxed and less hurried format.

What makes for an exceptional facility? What are the top pitfalls of facility design? Bring your cup of coffee and share in the conversation as Radio World U.S. Editor in Chief Paul McLane talks with Sam Berkow of SIA Acoustics, John Storyk of Walters-Storyk Design Group, and William Hallinsky of Meridian Design Associates, Architects, to learn what leaders in radio/television broadcast and production studios are doing today in architectural, acoustic, and facility design.

How are the demands of today’s multi-platform broadcasters changing the design of facilities? How do streaming, video for radio, and new media affect the process? What does it really mean to say a facility is “green”? How should broadcasters handle cross-training? What are the most common pitfalls broadcasters should avoid in designing and budgeting for a facility? What key decisions must you make today to ensure that your fabulous new facility will still be doing the job in 10 or 20 years?

Thursday, October 2, 4:30 pm — 6:30 pm

B4 - Mobile/Handheld Broadcasting: Developing a New Medium

Abstract: The broadcasting industry, the broadcast and consumer equipment vendors, and the Advanced Television Systems Committee have been vigorously moving toward the development of a Mobile/Handheld DTV broadcast standard and its practical implementation. In order to bring this new service to the public, players from various industry segments have come together in an unprecedented fashion. In this session key leaders in this activity will present what the emerging system includes, how far the industry has progressed, and what’s left to be done.

Abstract: Who would have guessed that teenagers and everybody else would be clamoring for devices with MP3/AAC (MPEG Layer III/MPEG Advanced Audio Coding) perceptual audio coders that fit into their pockets? As perceptual audio coders become more and more integral to our daily lives, residing within DVDs, mobile devices, broad/webcasting, electronic distribution of music, etc., a natural question to ask is: what made this possible and where is this going? This panel, which includes many of the early pioneers who helped advance the field of perceptual audio coding, will present a historical overview of the technology and a look at how the market evolved from niche to mainstream and where the field is heading.

Friday, October 3, 9:00 am — 10:45 am

L4 - White Space Issues

Abstract: The DTV conversion will be complete on February 17, 2009. The impact of this and surrounding FCC decisions is of great concern to wireless microphone users. Will 700 MHz band mics retain type certification? Will proposed white space devices create new interference? Will there be an FCC crackdown on unlicensed microphone use? This panel will discuss the latest FCC rule decisions and decisions still pending.

Friday, October 3, 11:00 am — 1:00 pm

L5 - Practical Advice for Wireless Systems Users

Abstract: From houses of worship to wedding bands to community theaters, there are small- to medium-sized wireless microphone systems and IEMs in use by the millions. Unlike the Super Bowl or the Grammys, these smaller systems often do not have dedicated technicians, sophisticated frequency coordination, or in many cases even the proper basic attention to system setup. This live sound event will begin with a basic discussion of the elements of properly choosing components, designing systems, and setting them up in order to minimize the potential for interference while maximizing performance. Topics covered will include antenna placement, antenna cabling, spectrum scanning, frequency coordination, gain structure, system monitoring, and simple testing/troubleshooting procedures. Planning for upcoming RF spectrum changes will also be covered briefly.
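The frequency coordination mentioned above commonly involves checking candidate frequencies against the third-order intermodulation products (2·f1 − f2) of carriers already on air. A minimal sketch of that check, with an illustrative guard band and frequencies:

```python
from itertools import permutations

def third_order_products(freqs_mhz):
    """Third-order intermodulation products 2*f1 - f2 for every ordered
    pair of coordinated carrier frequencies (MHz, rounded to kHz)."""
    return sorted({round(2 * a - b, 3) for a, b in permutations(freqs_mhz, 2)})

def conflicts(candidate, freqs_mhz, guard_mhz=0.1):
    """True if a candidate frequency lands within guard_mhz of a
    third-order product of the existing set (guard band illustrative)."""
    return any(abs(candidate - p) < guard_mhz
               for p in third_order_products(freqs_mhz))

coordinated = [584.0, 585.2]                 # two mics already on air, MHz
print(third_order_products(coordinated))     # → [582.8, 586.4]
print(conflicts(586.4, coordinated))         # → True
print(conflicts(590.0, coordinated))         # → False
```

Real coordination software also considers fifth-order products, TV channel occupancy, and transmitter proximity, but this is the core arithmetic.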

Friday, October 3, 11:00 am — 1:00 pm

B5 - Loudness Workshop

Abstract: New challenges and opportunities await broadcast engineers concerned about optimum sound quality in this contemporary age of multichannel sound and digital broadcasting. The earliest studies in the measurement of loudness levels were directed to telephony issues, with the publication in 1933 of the equal-loudness contours of Fletcher and Munson, and the Bell Labs tests of more than a half-million listeners at the 1938 New York World's Fair demonstrating that age and gender are also important factors in hearing response. A quarter of a century later, broadcasters began to take notice of the often-conflicting requirements of controlling both modulation and loudness levels. These are still concerns today as new technologies are being adopted. This session will explore the current state of the art in the measurement and control of loudness levels and look ahead to the next generation of techniques that may be available to audio broadcasters.

Friday, October 3, 2:00 pm — 6:00 pm

TT6 - Tarpan Studios/Ursa Minor Arts & Media, San Rafael

Abstract: World-renowned producer/artist Narada Michael Walden has owned this gem-like studio for over twenty years. During that time artists such as Aretha Franklin, Whitney Houston, Mariah Carey, Steve Winwood, Kenny G, and Sting have recorded gold and platinum albums here. The tour will also include URSA Minor Arts & Media, an innovative web and multimedia production company.

Note: Maximum of 20 participants per tour.

Price: $35 (members), $45 (nonmembers)

Friday, October 3, 2:30 pm — 4:00 pm

P11 - Listening Tests & Psychoacoustics

P11-1 Testing Loudness Models—Real vs. Artificial Content—James Johnston, Neural Audio Corp. - Kirkland, WA, USA
A variety of loudness models have recently been proposed and tested by various means. In this paper some basic properties of loudness are examined, and a set of artificial signals is designed to test the "loudness space" based on principles dating back to Harvey Fletcher, or arguably to Wegel and Lane. Some of these signals, designed to model "typical" content, seem to reinforce the results of prior loudness model testing. Other signals, less typical of standard content, seem to show that there are some substantial differences when these less common signals and signal spectra are used.
Convention Paper 7564 (Purchase now)

P11-2 Audibility of High Q-factor All-Pass Components in Head-Related Transfer Functions—Daniela Toledo, Henrik Møller, Aalborg University - Aalborg, Denmark
Head-related transfer functions (HRTFs) can be decomposed into minimum phase, linear phase, and all-pass components. It is known that low Q-factor all-pass sections in HRTFs are audible as lateral shifts when the interaural group delay at low frequencies is above 30 µs. The goal of our investigation is to test the audibility of high Q-factor all-pass components in HRTFs and the perceptual consequences of removing them. A three-alternative forced choice experiment has been conducted. Results suggest that high Q-factor all-pass sections are audible when presented alone, but inaudible when presented with their minimum phase HRTF counterpart. It is concluded that high Q-factor all-pass sections can be discarded in HRTFs used for binaural synthesis.
Convention Paper 7565 (Purchase now)
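The minimum-phase part of an HRTF referred to in the abstract can be obtained with the standard real-cepstrum method; the excess phase that remains is the all-pass component. A rough numpy sketch of the generic technique (not necessarily the authors' exact procedure):

```python
import numpy as np

def minimum_phase(h, nfft=1024):
    """Minimum-phase counterpart of impulse response h via the real
    cepstrum. The phase discarded here is the all-pass component."""
    H = np.fft.fft(h, nfft)
    cep = np.fft.ifft(np.log(np.abs(H) + 1e-12)).real   # real cepstrum
    w = np.zeros(nfft)               # fold anticausal part onto causal
    w[0] = 1.0
    w[1:nfft // 2] = 2.0
    w[nfft // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(cep * w))).real[:len(h)]

# Toy "HRTF": a pure 3-sample delay, i.e., flat magnitude, all-pass phase.
h = np.zeros(64)
h[3] = 1.0
h_min = minimum_phase(h)
# The minimum-phase part is an impulse at n = 0; the 3-sample delay
# lives entirely in the discarded all-pass component.
print(int(np.argmax(np.abs(h_min))))   # → 0
```

The paper's experiment asks, in effect, whether high Q-factor pieces of that discarded phase are audible on their own or alongside `h_min`.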

P11-3 A Psychoacoustic Measurement and ABR for the Sound Signals in the Frequency Range between 10 kHz and 24 kHz—Mizuki Omata, Musashi Institute of Technology - Tokyo, Japan; Kaoru Ashihara, Advanced Industrial Science and Technology - Tsukuba, Japan; Motoki Koubori, Yoshitaka Moriya, Masaki Kyouso, Shogo Kiryu, Musashi Institute of Technology - Tokyo, Japan
In high definition audio media such as SACD and DVD-Audio, a wide frequency range extending far beyond 20 kHz is used. However, the auditory characteristics for frequencies higher than 20 kHz are not necessarily well understood. As a first step toward clarifying these characteristics, we conducted psychoacoustic and auditory brain-stem response (ABR) measurements for sound signals in the frequency range between 10 kHz and 24 kHz. At a frequency of 22 kHz, the hearing threshold in the psychoacoustic measurement could be measured for 4 of 5 subjects; the minimum sound pressure level was 80 dB. Thresholds of 100 dB in the ABR measurement could be measured for 1 of the 5 subjects.
Convention Paper 7566 (Purchase now)

P11-4 Quantifying the Strategy Taken by a Pair of Ensemble Hand-Clappers under the Influence of Delay—Nima Darabii, Peter Svensson, The Centre for Quantifiable Quality of Service in Communication Systems, NTNU - Trondheim, Norway; Snorre Farner, IRCAM - Paris, France
Pairs of subjects were placed in two acoustically isolated rooms, clapping together under the influence of delays of up to 68 ms. Their trials were recorded and analyzed based on a definition of a compensation factor. This parameter was calculated from the recorded observations for both performers as a discrete function of time and treated as a measure of the strategy taken by the subjects while clapping. The compensation factor was shown to have a strong individual as well as a fairly musical dependence. With increasing delay, the compensation factor was shown to increase, as needed to avoid a tempo decrease at such high latencies. Virtual anechoic conditions caused less deviation in this factor than reverberant conditions. A slightly positive compensation factor at very short latencies may lead to a tempo acceleration, in accordance with the Chafe effect.
Convention Paper 7567 (Purchase now)

P11-5 Quantitative and Qualitative Evaluations for TV Advertisements Relative to the Adjacent Programs—Eiichi Miyasaka, Akiko Kimura, Musashi Institute of Technology - Yokohama, Kanagawa, Japan
The sound levels of advertisements (CMs) in Japanese conventional terrestrial analog broadcasting (TAB) were quantitatively compared with those in Japanese terrestrial digital broadcasting (TDB). The results show that the average CM sound level in TDB was about 2 dB lower and the average standard deviation was wider than in TAB, while there were few differences between TAB and TDB at some TV stations. Some CMs in TDB were perceived as clearly louder than the adjacent programs, although the sound level differences between the CMs and the programs were within ±2 dB. Next, the methods of inserting CMs into the main programs in Japan were qualitatively investigated. The results show that some of these methods could unacceptably irritate viewers.
Convention Paper 7568 (Purchase now)
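The quantitative comparison described above reduces to comparing average RMS levels, in dB, between ad segments and adjacent program segments. A toy sketch under simplifying assumptions (segment boundaries are given, no loudness weighting, function names invented):

```python
import numpy as np

def rms_db(x):
    """RMS level of a segment in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def cm_program_difference(cm_segments, program_segments):
    """Average level difference in dB between ad (CM) segments and
    the adjacent program segments."""
    cm = np.mean([rms_db(s) for s in cm_segments])
    prog = np.mean([rms_db(s) for s in program_segments])
    return cm - prog

t = np.arange(48000)
tone = np.sin(2 * np.pi * 1000 * t / 48000)   # 1 kHz test tone, 1 s
prog = [0.1 * tone]                           # program at one level
cm = [0.2 * tone]                             # ad at twice the amplitude
print(round(cm_program_difference(cm, prog), 1))   # → 6.0
```

The paper's perceptual finding, that some CMs sound clearly louder despite level differences within ±2 dB, is exactly why simple RMS comparisons like this are insufficient and loudness-weighted measures matter.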

Abstract: The participants of this session pioneered audio processing and developed the tools we still use today. A discussion of the developments, technology, and the “Loudness Wars” will take place. This session is a must if you want to understand how and why audio processing is used.

Abstract: When any new technology develops, the limitations of current systems are inevitably met. Bandwidth constraints then generate a class of techniques designed to maximize information transfer. Over time as bottlenecks expand, new kinds of applications become possible, making previous methods and file formats obsolete. By the time broadband access becomes available, we can observe a similar progression taking place in the next developing technology. The workshop discusses this trend as exhibited in the gaming, Internet, and mobile industries, with particular emphasis on audio file types and compression techniques. The presenter will compare and contrast obsolete tricks of the trade with current practices and invite industry veterans to discuss the trend from their points of view. Finally the panel makes predictions about the evolution of media.

Friday, October 3, 5:30 pm — 6:45 pm

T8 - Free Source Code for Processing AES Audio Data

Abstract: This session is a tutorial on the Xilinx free Verilog and VHDL source code for extracting and inserting audio in SDI streams, including “on the fly” error correction and high performance, continuously adaptive, asynchronous sample rate conversion. The audio sample rate conversion supports large ratios as well as fractional conversion rates and maintains high performance while continuously adapting itself to the input and output rates without user control. The features, device utilization, and performance of the IP will be presented and demonstrated with industry standard audio hardware.
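As background, the fractional-ratio conversion the session describes can be illustrated with a toy linear-interpolation resampler. This only shows the fractional read-position idea; the Xilinx IP uses far higher-quality polyphase filtering, and a true asynchronous SRC would update the ratio continuously from the measured input/output clock rates:

```python
import numpy as np

def resample_linear(x, ratio):
    """Fractional-rate conversion by linear interpolation.
    ratio = f_out / f_in, held fixed here for simplicity."""
    n_out = int(len(x) * ratio)
    pos = np.arange(n_out) / ratio            # fractional read positions
    i = np.clip(np.floor(pos).astype(int), 0, len(x) - 2)
    frac = pos - i
    # Weighted blend of the two input samples bracketing each position.
    return (1 - frac) * x[i] + frac * x[i + 1]

ramp = np.arange(8, dtype=float)
up = resample_linear(ramp, 2.0)               # 16 output samples
print(np.allclose(up, np.arange(16) / 2.0))   # → True
```

Linear interpolation is audibly poor for real audio (it images badly at large ratios), which is precisely why production SRCs use long adaptive polyphase filters instead.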

Saturday, October 4, 9:00 am — 10:45 am

B7 - DTV Audio Myth Busters

Abstract: There is no limit to the confusion created by the audio options in DTV. What do the systems really do? What happens when the systems fail? How much control can be exercised at each step in the content food chain? There are thousands of opinions and hundreds of options, but what really works and how do you keep things under control? Bring your questions and join the discussion as four experts from different stages in the chain try to sort it out.

Saturday, October 4, 11:00 am — 1:00 pm

B8 - Lip Sync Issue

Abstract: This is a complex problem, with several causes and fewer solutions. From production to broadcast, there are many points in the signal path and postproduction process where lip sync can either be properly corrected, or made even worse.

This session’s panel will discuss several key issues. Where do the latency issues exist in postproduction? Where do they exist in broadcast? Is there an acceptable window of latency? How can this latency be measured? What correction techniques exist? Does one type of video display exhibit less latency than another? What is being done in display design to address the latency? What proposed methods are on the horizon for addressing this issue in the future?

Join us as our panel covers the field from measurement, to post, to broadcast, and to the home.

Saturday, October 4, 11:00 am — 12:00 pm

T11 - [Canceled]

Saturday, October 4, 2:30 pm — 4:30 pm

B9 - Listener Fatigue and Longevity

Abstract: This panel will discuss listener fatigue and its impact on listener retention. While listener fatigue is an issue of interest to broadcasters, it is also an issue of interest to telecommunications service providers, consumer electronics manufacturers, music producers, and others. Fatigued listeners to a broadcast program may tune out, while fatigued listeners to a cell phone conversation may switch to another carrier, and fatigued listeners to a portable media player may purchase another company’s product. The experts on this panel will discuss their research and experiences with listener fatigue and its impact on listener retention.

Abstract: The 20th Annual GRAMMY Recording SoundTable is presented by the National Academy of Recording Arts & Sciences Inc. (NARAS) and hosted by AES.

YOU, Inc.! New Strategies for a New Economy

Today’s audio recording professional need only walk down the aisle of a Best Buy, turn on a TV, or listen to a cell phone ring to hear possibilities for new revenue streams and new applications to showcase their talents. From video games to live shows to ringbacks and 360 deals, money and opportunities are out there. It’s up to you to grab them.

For this special event the Producers & Engineers Wing has assembled an all-star cast of audio pros who’ll share their experiences and entrepreneurial expertise in creating opportunities in music and audio. You’ll laugh, you’ll cry, you’ll learn.

Saturday, October 4, 5:00 pm — 6:45 pm

B10 - Audio Transport

Abstract: This will be a discussion of the techniques and technologies used for transporting audio (e.g., STL, RPU, codecs). Transporting audio can be complex; this session will discuss the various roads you can take.

Abstract: Internet streaming has become a major provider of audio and video content to the public. Now that the public has embraced the medium, providers need to deliver content with quality comparable to other mediums. Audio monitoring is becoming important, as is the ability to quantify performance, so that the streamer can deliver a product of consistent quality.

Sunday, October 5, 9:00 am — 11:00 am

B12 - Art of Sound Effects—Performance to Production

Panelists: David Shinn, Sue Zizza

Abstract: Sound effects: footsteps, doors opening and closing, a bump in the night. These are the sounds that can take the flat, one-dimensional world of audio, television, and film and turn it into a realistic three-dimensional environment. From the early days of radio to the sophisticated high-definition surround sound of contemporary film, sound effects have been the final color on the director's palette. Join sound effects and Foley artists Sue Zizza and David Shinn of SueMedia Productions as they present a 90-minute session that explores the art of sound effects: creating and performing manual effects; recording sound effects with a variety of microphones; and using various primary sound effect elements for audio, video, and film projects.

Sunday, October 5, 11:30 am — 1:00 pm

T15 - Real-Time Embedded Audio Signal Processing

Presenter: Paul Beckmann, DSP Concepts, LLC - Sunnyvale, CA, USA

Abstract: Product developers implementing audio signal processing algorithms in real-time encounter a host of challenges and tradeoffs. This tutorial focuses on the high-level architectural design decisions commonly faced. We discuss memory usage, block processing, latency, interrupts, and threading in the context of modern digital signal processors with an eye toward creating maintainable and reusable code. The impact of integrating audio decoders and streaming audio to the overall design will be presented. Examples will be drawn from typical professional, consumer, and automotive audio applications.
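A minimal illustration of the block-processing style this tutorial covers: buffers preallocated up front, no allocation inside the processing call, and per-sample state carried across blocks so the output is independent of block size. Class and parameter names are illustrative, not from the tutorial:

```python
import numpy as np

BLOCK = 64          # samples per block; a latency vs. CPU-overhead trade-off

class GainSmoother:
    """A toy block-processing stage: one-pole gain smoothing."""

    def __init__(self, target_gain, coeff=0.01):
        self.gain = 0.0                 # state persists across blocks
        self.target = target_gain
        self.coeff = coeff
        self.out = np.zeros(BLOCK)      # preallocated output buffer

    def process(self, block):
        # Per-sample smoothing avoids zipper noise; on a real DSP this
        # loop runs inside the audio interrupt or a real-time thread,
        # so it must not allocate, lock, or block.
        for n in range(BLOCK):
            self.gain += self.coeff * (self.target - self.gain)
            self.out[n] = self.gain * block[n]
        return self.out

stage = GainSmoother(target_gain=1.0)
out = stage.process(np.ones(BLOCK))
# The gain ramps smoothly from 0 toward 1 instead of stepping.
print(bool(out[0] < out[-1] < 1.0))   # → True
```

The same discipline (preallocation, bounded per-block work, explicit state) is what makes such code maintainable and reusable across block sizes and platforms.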

Sunday, October 5, 2:30 pm — 4:30 pm

T18 - FPGA for Broadcast Audio

Abstract: This tutorial presents broadcast-quality solutions based on FPGA technology for audio processing with significant cost savings over existing discrete solutions. The solutions include digital audio interfaces such as AES3/SPDIF and I2S, audio processing functions such as sample rate converters, and SDI audio embed/de-embed functions. Along with these solutions, an audio video framework is introduced that consists of a suite of A/V functions, reference designs, an open interface to easily stitch the AV blocks together, a system design methodology, and development kits. Using the framework, system designers can quickly prototype and rapidly develop complex audio video systems.

Sunday, October 5, 2:30 pm — 4:00 pm

P26 - Audio Digital Signal Processing and Effects—Part 2

P26-1 Applications of Algorithmically-Generated Digital Audio for Web-Based Sonic Measure Ear Training—Christopher Ariza, Towson University - Towson, MD, USA
This paper examines applications of algorithmically-generated digital audio for a new type of ear training. This approach, called sonic measure ear training, circumvents the many limits of MIDI-based aural testing, and may offer a valuable resource for computer musicians and audio engineers. The Post-Ut system, introduced here, is the first web-based system to offer sonic measure ear training. After describing the design of the Post-Ut system, including the use of athenaCL, Csound, Python, and MySQL, the audio generation procedures are examined in detail. The design of questions and perceptual considerations are evaluated, and practical applications and opportunities for future development are outlined.
Convention Paper 7645 (Purchase now)

P26-2 A Perceptual Model-Based Speech Enhancement Algorithm—Rongshan Yu, Dolby Laboratories - San Francisco, CA, USA
This paper presents a perceptual model-based speech enhancement algorithm. The proposed algorithm explicitly measures the amount of audible noise in the input noisy speech using a psychoacoustic model, and decides an appropriate amount of noise reduction accordingly, achieving good noise reduction without introducing significant distortion to the clean speech embedded in the noisy input signal. The proposed algorithm also mitigates the musical noise problem commonly encountered in conventional speech enhancement algorithms by having the amount of noise reduction adapt to the instantaneously estimated noise amplitude. Good performance of the proposed algorithm has been confirmed through objective and subjective tests.
Convention Paper 7646 (Purchase now)
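For context, the conventional spectral-gain family of enhancers that this paper improves on can be sketched as a per-bin Wiener-style gain with a floor. This is a generic textbook sketch, not the paper's perceptual-model algorithm; making the suppression adapt to the instantaneous noise estimate, as the paper does, is what tames the "musical noise" this simple version exhibits:

```python
import numpy as np

def spectral_gain(noisy_mag, noise_mag, floor=0.1):
    """Per-bin Wiener-style gain computed from spectral magnitudes.
    The fixed gain floor limits suppression so no bin is fully zeroed."""
    snr = np.maximum(noisy_mag**2 - noise_mag**2, 0.0) / (noise_mag**2 + 1e-12)
    gain = snr / (1.0 + snr)           # Wiener gain from the estimated SNR
    return np.maximum(gain, floor)

noisy = np.array([1.0, 0.3, 2.0])      # per-bin spectral magnitudes
noise = np.array([0.3, 0.3, 0.3])      # estimated noise magnitudes
g = spectral_gain(noisy, noise)
print(np.round(g, 2))                  # high gain where speech dominates,
                                       # floored where noise dominates
```

Musical noise arises because isolated bins randomly cross the suppression threshold from frame to frame; an adaptive floor smooths those fluctuations.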

P26-3 Real Time Implementation of an ESPRIT-Based Bass Enhancement Algorithm—Lorenzo Palestini, Emanuele Moretti, Paolo Peretti, Stefania Cecchi, Laura Romoli, Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy
This paper presents a real-time software implementation, for the NU-Tech platform, of a bass enhancement algorithm based on the FAPI subspace tracker and the ESPRIT algorithm for fundamental estimation. The algorithm improves the bass response of small loudspeakers by exploiting the well-known psychoacoustic phenomenon of the missing fundamental. Comparative informal listening tests have been performed to validate the virtual bass improvement, and their results show that the proposed method is well appreciated.
Convention Paper 7647 (Purchase now)

P26-4 Low-Power Implementation of a Subband Acoustic Echo Canceller for Portable Devices—Julie Johnson, David Hermann, John Wdowiak, Edward Chau, Hamid Sheikhzadeh, ON Semiconductor - Waterloo, Ontario, Canada
Portable audio communication devices require increasingly superior audio quality while using minimal power. Devices such as cell phones with speakerphone functionality can generate substantial acoustic echo due to the proximity of the microphone and speaker. To improve the audio quality in such devices, an oversampled subband acoustic echo canceller has been implemented on a miniature low-power dual core DSP system. This application comprises three subband-based algorithms: a Pseudo-Affine Projection adaptive filter, an Ephraim-Malah based single-microphone noise reduction algorithm, and a novel nonlinear residual echo suppressor. The system consumes less than 4 mW of power when configured with a 128 ms filter. Real-world tests indicate an echo return loss enhancement of greater than 30 dB for typical input levels.
Convention Paper 7648 (Purchase now)

P26-5 A Digital Model of the Echoplex Tape Delay—Steinunn Arnardottir, Jonathan S. Abel, Julius O. Smith, Stanford University - Stanford, CA, USA
The Echoplex is a tape delay unit featuring fixed playback and erase heads, a movable record head, and a tape loop moving at roughly 8 ips. The relatively slow tape speed allows large frequency shifts, including "sonic booms" and shifting of the tape bias signal into the audio band. Here, the Echoplex tape delay is modeled with read, write, and erase pointers moving along a circular buffer. The model separately generates the quasiperiodic capstan and pinch wheel components and the drift of the observed fluctuating time delay. This delay drives an interpolated write simulating the record head. To prevent aliasing in the presence of a changing record head speed, an anti-aliasing filter with a variable cutoff frequency is described.
Convention Paper 7649 (Purchase now)
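The circular-buffer skeleton of such a tape delay model can be sketched as follows. This shows only the fixed-head case with feedback (regeneration); the moving record head, interpolated writes, and resulting frequency shifts that the paper models are omitted:

```python
import numpy as np

def tape_delay(x, delay_samples, feedback=0.5):
    """Circular-buffer delay line: the buffer plays the role of the
    tape loop, with a pointer advancing one sample per tick."""
    buf = np.zeros(delay_samples)      # the "tape loop"
    out = np.zeros(len(x))
    w = 0                              # read/write pointer into the loop
    for n, s in enumerate(x):
        delayed = buf[w]               # sample written delay_samples ago
        out[n] = s + delayed
        buf[w] = s + feedback * delayed    # regeneration back onto tape
        w = (w + 1) % delay_samples
    return out

x = np.zeros(10)
x[0] = 1.0                             # a single click in
print(tape_delay(x, 3))                # echoes at n = 0, 3, 6, 9, each
                                       # decaying by the feedback of 0.5
```

In the paper's full model the read and write positions move independently along the buffer; a changing gap between them changes the effective tape speed, which is what produces the pitch shifts and "sonic booms."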

P26-6 A Digital Reverberator Modeled after the Scattering of Acoustic Waves by Trees in a Forest—Kyle Spratt, Jonathan S. Abel, Stanford University - Stanford, CA, USA
A digital reverberator modeled after the scattering of acoustic waves among trees in an idealized forest is presented. Termed "treeverb," the technique simulates forest acoustics using a network of digital waveguides, with bi-directional delay lines connecting trees represented by multi-port scattering junctions. The reverberator is designed by selecting tree locations and diameters, with waveguide delays determined by inter-tree distances, and scattering filters fixed according to tree-to-tree angles and trunk diameters. The scattering is modeled as that of plane waves normally incident on a rigid cylinder, and a simple low-order scattering filter is presented and shown to closely approximate the theoretical scattering. Small forests are seen to yield dense, gated reverb-like impulse responses.

Sunday, October 5, 5:00 pm — 6:45 pm

T20 - Radio Frequency Interference and Audio Systems

Presenter: Jim Brown, Audio Systems Group, Inc.

Abstract: This tutorial begins by identifying and discussing the fundamental mechanisms that couple RF into audio systems and allow it to be detected. Attention is then given to design techniques for both equipment and systems that avoid these problems and methods of fixing problems with existing equipment and systems that have been poorly designed or built.