I might or might not be confused about why my buffer has to be the size it is for my application to work. So I'll explain everything I'm thinking and you guys can tell me if I'm on the right track and/or explain it to me better. Pardon if this seems a little messy; I wrote it as if I were "thinking on paper."

No one should have to 'experiment' to determine an appropriate buffer size for an audio sample, especially on a computer with known variables. Not to mention my application works with the above settings on -various- systems with fairly varied specs.

It's not per second, it's per 1/1000th of a second. My buffer is continuously filled and emptied, meaning that whether it's per second or not is irrelevant, since at -any- possible time interval I deal with BUF*11 amount of data. And I'm trying to discover why BUF*11. I hope that clarifies my intent.

My application draws an oscilloscope and the spectrum analysis for each sample through OpenGL.

But that's beside the point, as my concern is discovering the reason for the buffer size at all (~44100/2).

No one should have to 'experiment' to determine an appropriate buffer size for an audio sample, especially on a computer with known variables. Not to mention my application works with the above settings on -various- systems with fairly varied specs.

Experimentation is a completely valid method of finding an optimal buffer size. No one can calculate what the best buffer size should be in the presence of all the myriad bandwidths and latencies of a modern computer with many hardware details undocumented.

whether it's per second or not is irrelevant since at -any- possible time interval I deal with BUF*11 amount of data. And I'm trying to discover why BUF*11.

The length of the buffer is still a time interval; it's irrelevant whether you are continually filling and emptying that buffer or not. I'm not sure what you mean by your comments about per second vs per millisecond.

If your buffer is storing PCM wave data at 44.1 kHz, 16 bits/sample, 2 channels, then the data rate is 176400 bytes/second, and a buffer of 2048*11 = 22528 bytes would hold 0.128 seconds of audio. However, you haven't said what data type the buffer is; if it's instead 2048*11 32-bit words then you have 0.511 seconds of audio...unless you are expanding each 16-bit sample to 32 bits for processing, in which case only 0.255 seconds. *shrug*

If I don't use BUF*11 minimum the application simply fails.

What do you mean by "simply fails"? That's no more descriptive than "won't work".

I just want to know whether the math I did corresponds to something real, and whether the values I have to use are explained by it. If the entire notion of what I'm seeking is insane, then tell me that and explain my fallacy so I can better understand this science.

If you are using SDL_mixer or any other open-source sound player you can look at the source code and find out for yourself.

I think you need to tell us more about your program if you want to find out why it's crashing if you set the buffer too small. Normally it just gets too choppy when the buffer is too small.

Normally the left and right sides are stored as you said, side by side in a 32-bit integer. Endianness comes into play if you are concerned about that (it depends on the file format and the processor type you are using).

So is it logical to conclude that my buffer size (int32) for any given sample must be larger than half the sample rate (Hz) for 16-bit stereo?

No, that's not logical. As SamuraiCrow pointed out, you should be able to use a smaller buffer if you capture samples more often. I would recommend that you check the return codes from each OpenAL function you call; if one of them returns an error code, that may help you pinpoint the problem.