I always thought hardware storage/playback of wave data would have the lowest latency. I was reading the FMOD docs last night, though, and remember a section saying that in some cases, due to poorly written sound drivers, software mixing can actually have lower latency.

I’m currently writing a realtime quantizer for a game (sounds triggered by in-game actions are synchronized to eighth notes or quarter notes, for example), so I’d like playback to have the least latency possible so the quantized sounds land on the beat.
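For what it’s worth, the quantization math itself is simple. Here’s a minimal sketch of the idea (the function name and interface are mine, not from FMOD or any particular engine):

```cpp
#include <cmath>

// Hypothetical helper, not part of FMOD: return the time (in seconds)
// of the next grid point at or after 't'. 'subdivision' is grid points
// per quarter note: 1 = quarter notes, 2 = eighth notes, and so on.
double nextBeatTime(double t, double bpm, int subdivision)
{
    double grid = 60.0 / bpm / subdivision; // grid spacing in seconds
    return std::ceil(t / grid) * grid;      // round up to the grid
}
```

At 120 BPM with eighth notes the grid spacing is 0.25 s, so a sound triggered at t = 1.1 s gets scheduled for t = 1.25 s. The catch is exactly the latency question: the output latency has to be smaller than (or compensated out of) that scheduling margin, or the sound still lands late.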

Is there any hard-and-fast rule for which type of mixing I should be using? Or any good way of measuring, in code, the latency from the time of a playback call to the time the sound actually starts coming out of the speakers?

I’m still in research mode right now, so I’m less worried about other users. Now I’m curious about ASIO; I hadn’t heard of it until you mentioned it. After looking it up: is it easy to use through FMOD, or does that interface require extra programming?

ASIO is oriented toward situations where there aren’t many simultaneous tracks in use and precise, stable synchronization isn’t required, and I’d guess that games require very good synchronization. The choice is yours.

Yes, if you required ASIO, everyone would have to have ASIO-compatible sound cards and drivers, which is definitely not mainstream. A user has to go out of their way to install ASIO (if it even exists for their hardware) instead of it simply being present when they install the standard drivers.

FMOD_HARDWARE is probably lower latency for you if you are using the dsound output. With FMOD_SOFTWARE, latency is controllable via System::setDSPBufferSize, but setting it too low can cause crackling.