Event sounds better to me. This WASAPI 3.0 update is great. I was having problems with my visualizations only updating at around 15fps when going through my Wavelink HS USB/SPDIF converter box, and now they're working properly at 60fps. I'm also getting a cleaner, more spacious sound with WASAPI event mode. The difference isn't super pronounced, but it's akin to a nice upgrade at the source level. Very, very happy with this update.

Thank you for that JRiver link. That explained the difference quite well.

I just "upgraded" my WASAPI component and noticed the difference immediately. It could all be in my head, but I could swear that push sounds "thicker" and less defined than event on my E17+E9 setup.

Has anybody experimented with the WASAPI hardware buffer settings? They're available in Foobar's Preferences > Advanced > Playback > WASAPI. I wouldn't anticipate a major audio difference, but just wondering.

In my quick testing, "Libera" on Libera by Libera sounds much clearer to me on push. The event option blurs the vocals and makes them lose their magic. I'm genuinely surprised at how significant the difference is.

Paradoxically, "Yakusoku" on "CLANNAD The Movie Image Song - Yakusoku" has clearer vocals on event than push. Augh!

The Computer Engineer in me says that someone, somewhere made a mistake and the difference is the result of a bug. Conceptually, buffer size shouldn't matter as long as playback doesn't skip. But who knows: if they screwed this up, maybe there's a bug there too.

I'm not a PC guy, but I have some thoughts here that might clarify the "push" and "event-driven" modes in WASAPI.

These modes appear to be the interface modes for handing off audio data from the source application(s) to the sound card. "Push" mode is one-directional and synchronous: the application has to keep up with the draining buffers on its own. It is the older technique, and almost all cards support it. "Event-driven" (or "pull") mode is bidirectional and asynchronous: the application only loads audio data into the buffers when it has been asked to do so (i.e., there is some handshaking going on). The latter is probably a bit more efficient in software.
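To make the handshaking difference concrete, here is a toy model of the two hand-off styles. To be clear, this is not the actual WASAPI API (in real code, event mode uses `AUDCLNT_STREAMFLAGS_EVENTCALLBACK` and `IAudioClient::SetEventHandle`); `ToyDevice`, `play_push`, and `play_event` are hypothetical names I made up for illustration, and the "device" is just a small FIFO in memory:

```python
import collections
import threading

class ToyDevice:
    """Hypothetical stand-in for a sound device draining a small FIFO."""
    def __init__(self, buffer_frames=4):
        self.buffer = collections.deque()
        self.capacity = buffer_frames
        self.need_data = threading.Event()   # "buffer has room" signal
        self.need_data.set()                 # room available at start

    def drain_one(self):
        # The "hardware" consumes one frame, then raises the event so
        # an event-driven client knows there is room again.
        if self.buffer:
            self.buffer.popleft()
        self.need_data.set()

def play_push(device, total_frames):
    # Push mode: the app visits on its own schedule and tops the
    # buffer up, hoping it never runs dry between visits.
    sent = 0
    while sent < total_frames:
        while len(device.buffer) < device.capacity and sent < total_frames:
            device.buffer.append(sent)
            sent += 1
        device.drain_one()
    return sent

def play_event(device, total_frames):
    # Event mode: the app blocks until the device signals, then refills.
    sent = 0
    while sent < total_frames:
        device.need_data.wait()
        device.need_data.clear()
        while len(device.buffer) < device.capacity and sent < total_frames:
            device.buffer.append(sent)
            sent += 1
        device.drain_one()
    return sent
```

Note that both loops deliver exactly the same frames to the device; only the scheduling differs, which is why a bit-perfect output should carry identical data in either mode.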

Now here is the interesting thing. Once the final output bitstream is produced, say on a TOSlink, RCA coax, or AES type connection: these are all S/PDIF-type protocols, which are essentially "push only." The bitstream is simply shoved down the cable or fiber synchronously, and the receiving DAC has to accept it, recover the clock, and do whatever processing (e.g., upsampling) is necessary to remove jitter from the PC's imprecise clocking of the data, plus any noise-induced jitter contributed by losses in the digital cable. Assuming you are passing all data in a bit-perfect fashion, the only real difference in sound you could hear would be due to jitter artifacts caused by software loading on the PC. In general, there should be very little audible difference between these two settings over such connections.

HOWEVER. Consider the situation where you are using a USB connection. The USB Audio Class standard defines more than one way of synchronizing the stream; the two relevant here are Adaptive mode and Asynchronous mode.

Adaptive mode is the "push"-style mode that has been around for years on USB interfaces and works basically the same as the other S/PDIF interfaces: the PC is the clock reference, data is sent synchronously to the DAC, and, again, the DAC has to recover the clock and eliminate jitter. It should sound similar to the other interfaces (actually, it could be a bit worse, because the USB bus rate is not a nice multiple of a standard S/PDIF-like clock, so clock recovery from a USB synchronous signal is a lot more difficult).

Now the Asynchronous mode of USB data transfer is a totally different beast! It is a "pull," or event-driven, type of mode where the master data clock actually lives in the DAC itself, making the stream largely impervious to jitter introduced by the PC or the quality of the connecting cable. Only newer-model DACs tend to support this mode, as it is a development of the last few years, and I suspect some sound cards may not be able to connect to a USB port driver using this mode either if they are not new enough. Anyway, when the WASAPI setting is event-driven and you are connected via USB to a DAC that supports Asynchronous transfer mode, the DAC effectively requests data from the application's buffers over the USB connection, so the PC's timing no longer dictates the sample clock at the DAC. Now THAT can make a difference in the sound.
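The clock-ownership point above can be sketched numerically. This is a minimal toy model, not a measurement of any real interface: the function names, the 0.1 jitter figure, and the unit period are all invented for illustration. In "adaptive" transfer the DAC's clock is reconstructed from jittery arrival times set by the PC; in "asynchronous" transfer the DAC ticks on its own fixed clock:

```python
import random

def adaptive_arrivals(n, period=1.0, jitter=0.1, seed=1):
    # Adaptive: the PC pushes packets on its own imperfect clock, so
    # the DAC must rebuild its sample clock from these arrival times.
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += period + rng.uniform(-jitter, jitter)
        times.append(t)
    return times

def async_arrivals(n, period=1.0):
    # Asynchronous: the DAC runs its own fixed local clock and simply
    # requests data when it needs it; PC-side timing never appears here.
    return [period * (i + 1) for i in range(n)]

def peak_jitter(times, period=1.0):
    # Worst-case deviation of the interval between samples from the
    # ideal period.
    return max(abs((b - a) - period) for a, b in zip(times, times[1:]))
```

Under this model, `peak_jitter` on the adaptive stream is nonzero while the asynchronous stream shows none, which is the whole argument for putting the clock in the DAC.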

So, in summary: several folks here have commented that they don't hear much difference between the WASAPI modes. I believe you will only get a significant difference if you use the event-driven setting of WASAPI and are also connected to a DAC that can operate in Asynchronous transfer mode over a USB connection (and, of course, have a resolving enough system to hear the differences caused by jitter removal).

Now I've had to kind of figure all this stuff out on my own so if I've gotten something wrong here, please, please, somebody correct me!

I just downloaded the new WASAPI 3.0 and it changed the way the music plays through my headphones. The sound is much more spread out. I am using the push setting; the event setting sounds like the old WASAPI. You guys should check it out.

Is there any difference in sound quality between using Windows Media Player and Foobar when listening to MP3 320 kbps vs. FLAC 24/96 vs. a normal Audio CD?