New chipset support. New target DSP. New real-time tuning. New compiler. New options for audio algorithms.

What are point-releases for, but to introduce new functions? We’ve recently released Hexagon SDK 2.0, with a fistful of ways you can take even greater advantage of the high performance and low power consumption waiting for you on the Hexagon DSP.

I’ll cover the first three new functions in this post and the others in my next post.

1. Support for Snapdragon 810

The Snapdragon 810 processor is inside the most powerful evaluation board yet: the DragonBoard™ 810 (based on the Snapdragon APQ8094) from Intrinsyc Technologies Corporation. As the Ultra HD Processor, Snapdragon 810 is currently the top-of-the-line chipset available on a development platform. Look for it in top-tier commercial devices later in 2015.

Are you working on DSP customization for those devices, especially on demanding applications around audio, camera, computer vision and speech? If so, you’ll be pleased to be able to offload processing to the Hexagon v56 built into the Snapdragon 810, and you’ll enjoy developing on the DragonBoard 810 even more.

2. An Additional 500 MHz of Processing Power by Offloading to the mDSP

Speaking of demanding applications, imagine the camera and computer vision applications you can build to take advantage of an additional 500 MHz of DSP processing power made available on the modem DSP (mDSP).

The Snapdragon architecture includes multiple Hexagon DSPs. In the past, the Hexagon SDK has exposed only the application DSP (aDSP) for offload, but with SDK 2.0 you’ll have access to the modem DSP (mDSP) as well. That applies to the Snapdragon 810 you’ll find on the DragonBoard 810 and the Snapdragon 800 on the DragonBoard 800.

The result is more than double the instantaneous DSP MIPS available to developers on the Snapdragon 810 and Snapdragon 800 platforms, with lower power consumption than running on the CPU. Additional benefits of using the mDSP are a faster data bus, a larger L2 cache (768KB) and a shorter hop to the DDR memory controller than on the aDSP. That means lower latency for memory operations, particularly in imaging, video processing and computer vision applications.

To take advantage of this, you’ll use exactly the same Fast Remote Procedure Call (FastRPC) method that you use when you offload to the aDSP, except that you’ll map an additional DSP. No recoding is necessary; you just reroute your call to the mDSP’s address. For that matter, you can offload to both the aDSP and the mDSP at the same time to double your offload processing power and, in limited cases, triple it.

Using the mDSP offload capability opens the door to concurrent use cases in which, for example, the aDSP can perform traditional audio and speech optimization while the mDSP simultaneously executes computer vision tasks.

3. Dynamic Tuning

When you’re running an audio app on the ARM CPU, it’s easy to change parameters like bass and treble in software. But when you’re running audio on the aDSP and the settings are hard-coded into the Android build, it’s much harder to make those changes. And much harder still to make them persist when, say, the aDSP falls asleep or the device shuts down.

Now, with the dynamic tuning in Hexagon SDK 2.0, you can accept changes in parameters at the application level and have them pushed down to the DSP in real time. That means your users can have the control they expect from an audio app while the device consumes a fraction of the power.

Suppose a user is in her car, listening to audio running your DSP customizations. She has adjusted highs, lows and mids to compensate for road noise. Still listening, she arrives home and connects to speakers in her living room, where she changes settings to compensate for the room’s acoustics. Then she goes outside to listen, with yet different settings. Dynamic tuning lets her select different parameters for each environment in real time, then it changes them on the DSP and ensures they persist.

Many developers have integrated their audio modules into the aDSP but have application requirements for run-time tuning from the Java layer. Now you can create a GUI with controls that immediately affect parameters in code running on the aDSP. We’ve included sample code and an .apk designed for any audio application that goes through post-processing in the pipeline – ringtones, audio playback and so on.

Next Steps

If you’re already a Hexagon developer, the SDK is available for you now. If not, this is a good time for you to start. Find out more about the Hexagon SDK and let us know what kind of application you have in mind.