Posted
by
Cliff
on Tuesday September 18, 2001 @09:10AM
from the where-did-the-platform-independent-multimedia-libs--go dept.

delYsid asks: "About a year ago, Slashdot had a story about the OpenAL project by Loki and Creative. There was much hype around it. But if you check their website now, the last changelog entry appears to be from December of 1999! Does anyone know of a good (preferably platform independent) library for 3D audio? The only answer I get when I ask professionals in this field is DirectX. I'd love to have my app under Linux instead of having to move to Win again. Any pointers or hints about the current status of OpenAL? Are there any alternatives?" Update: 09/18 15:33 PM GMT by C: Corrected the link which referred to Slashdot's previous story on OpenAL.

I forwarded this question on to Loki, and here's the response from Scott Draeker, president of Loki Games:

As you can imagine, everyone is pretty busy right now, especially as we
had some folks out on vacation the last couple of weeks. So I'm sorry
about the slow response.

In answer to your question, work on OpenAL continues. Creative has
already added EAX and hardware acceleration to the Mac and Windows
versions, and are working on adding both to the Linux implementation as
well. Work is also continuing on an OS X port as well as other OSes.
OpenAL continues to be the sound API which Loki uses in all of its
products, and many other companies are either using or evaluating OpenAL
for their products as well.

Hope that helps.

So there you have it, straight from the source. Development is progressing, although it's likely to be a bit slow at present. Here's hoping we'll hear more updates on the progress of OpenAL over the next 6 months. Thanks a bunch for taking the time to answer, Scott!

The most important factor today is time to market. Java apps can, with proper coding, run as fast as or faster than the equivalent C code. Sure, you can create blazingly fast apps in C with lots of effort, but you can gain even more by moving to 100% assembler. It's a development-time versus slight-speed-difference tradeoff.

You don't have to use Java just because others do. If you work best in C, C++, Python or some other language which you happen to prefer... Use that and stop complaining over my choice of language, dammit!

Actually, I think moving to 64 bits will make Java programs more complicated. I recently wrote a Java class for my current project, and I was amazed and horrified at the number of explicit type casts I had to make. Moving to 64 bits will most likely make it worse, because Sun(tm) won't want to break compatibility, so they'll introduce new types.

Java is just great for strategy or RPG games that do NOT need high end graphics.

As Java's performance has improved, so has the situation with "high end graphics". Check out Arkanae [babeloueb.com] for instance. It is awesome that OpenGL and Java are playing nicely together (and it will only get better). Also check out the Grand Canyon Demo [sun.com] for what will be possible with JDK 1.4, OpenGL and the DirectBuffer interface.

The best graphics card is your imagination!

True, but I find mine is helped quite a bit by a 10 million poly/sec rendering engine. ;-)

Anytime someone says "X is a silver bullet", I'm critical of what disadvantages they are overlooking.

We tried, and successfully used, Java in one of our games. It dropped in in about a week (of course the game logic took months to write). Having to use two IDEs was a pain, but workable, for debugging. (We made the C++ code into a DLL.) There were TWO big problems -- the OVERHEAD from calling Java from C/C++ (or vice versa) *completely* bogged down the game. The other issue was the garbage collector - the game froze while it was doing its thing, which is unacceptable. (We were doing a single player strategy-sim hybrid that unfortunately got cancelled due to other issues.)

Will we use Java again? Maybe. But the scripting language used is only one of the issues regarding gaming tech! One must look at the "pros" AND the "cons." The designers were able to get up to speed quickly with it, appreciated the ton of books on Java programming, and it freed one of our programmers from having to maintain an in-house proprietary language. However, the designers also lacked the many years of formal training and experience that programmers normally have to go through, so using a "real language" was harder for them than our easy-to-use previous in-house language. These two things (very bad C/C++ integration, and requiring designers to be programmers) will most likely determine whether we use Java in the future.

I'm the graphics programmer, and I personally think Java is not the best tech for game scripting. HOWEVER, I do see its advantages and elegance for game scripting and game logic. It's the old trade-off of "slow & flexible (interpreted)" OR "fast & hard-coded (compiled)." Java definitely has some advantages - and some disadvantages - like all languages.

I'm sure you can search Carmack's old plans for why he didn't use Java - his reasons were different from ours.

Is Java a viable option? Wild Tangent has clearly shown that a Java-based game works. They have some pretty cool tech.

You might want to read the Gamasutra Post Mortem article on "Vampire" -- That's the only other game developer I know that used Java in a commercial game.

> Too bad most game developers seem to think that a game can't be any good if it

How many game developers do you personally know? That's a pretty broad statement with no basis in fact.

> doesn't spew out 3D graphics at a rate of 500 fps or,
> if it is a strategy game, doesn't have at least have a 3D isometric view in true color.

Game developers are well aware that if you raise the technical requirements too high, you lose a lot of customers who don't have the "latest and greatest" video card.

They are also quite aware of Graphics != gameplay.

However, if you DON'T have some of the best graphics, your game is criticized as having "dated graphics." Is that an excuse? No, it's what the consumer wants. Pretty graphics are (usually) what catch the gamer's attention, but gameplay is what keeps him/her playing.

It's interesting to note that most of the top selling games are 2D - e.g. The Sims, Diablo, RollerCoaster Tycoon.

If you want a good insight into how the games industry really works, read Derek Smart's rant, Gaming Industry - Where We Are [3000ad.com], which discusses what really happens with the marketing.

Bringing this long thread back on topic -- OpenAL is a good thing, and I'm glad it is progressing. I'll be sure to mention it to our engine architect next time we do a Mac port. If we do a Linux port, it will definitely make things easier for us. It's a REAL shame OpenGL is completely ignored by so many developers -- using one cross-platform API is very cool. Now if only consoles supported it better ;-)

Can slashdot not report on these types of projects until they start producing something besides vapor? It just causes people to think that something is already being done, and people who otherwise would create their own projects move on to other things.

With that said, I was/am part of a project which hasn't done anything for quite some time, but we don't bury our code in much marketing hype, so I think it is pretty clear to anyone who is interested what progress is or isn't being made.

Can slashdot not report on these types of projects until they start producing something besides vapor?

It's not vapour. You can check it out of their CVS today, compile it, install it. It's shipping with their games: Rune for example uses OpenAL. (Although, as an aside, I wish there were an installation option to use the system library for stuff like SDL and OpenAL rather than installing one just for the game).

Sorry. The original post implied that OpenAL was a dead project. I was responding to that.

Mostly I was expressing a bit of frustration that often I see projects use slashdot to announce themselves, get some attention, and then fade away because of mismanagement. This can be worse than if they didn't exist in the first place.

OpenAL doesn't seem to be this type of project.

I think of Indrema as an example of a project that didn't get very far but really seemed cool on paper (so to speak). If they had never existed, it is very likely that someone else would have gotten attention for doing something similar. I bet Indrema used up venture capital that could have gone towards a real open source game system.

Anyway, I jumped a bit too fast at the OpenAL story, doing exactly what I was accusing the Slashdot staff of doing.

I think you can't have a 3D audio library without advanced sound device drivers. And AFAIR the emu10k1 driver supports effects like delay/flanger/gain, but does not support 3D sound.
The main problem is that nobody needs advanced sound drivers.
Most people just need to listen to mp3s (it's similar to word processors - most people only need .doc import/export). The only way to fix that is to create more 3D games for Linux.
So download SDL [libsdl.org], then get some info from the OpenGL site [opengl.org] and gamedev [gamedev.net], and start coding! :-)

OSS/3D is a new 3D audio architecture developed by 4Front Technologies. It is 100% cross platform (available for Windows and UNIX/Linux) and has advanced features that often take 3 or 4 separate products to accomplish.

PLIB's sound is definitely its worst feature - and even I (as original author of PLIB) would recommend you use OpenAL or something. However (in case people read this comment out of context), I presume the "OH MY GOD, it was BAD" part only refers to the audio - which is a very small fraction of what PLIB does.

Another initiative to create an alternative cross-platform media API is currently underway under the name OpenML. OpenML is a merging of several media APIs from SGI and others. The specifications have been released, and whitepapers, specs, and presentations are on the OpenML website [khronos.org].

As well as that, are there any decent cross-platform 2D & input layers that could fit in with OpenGL, OpenML & OpenAL?

For input, you could use SDL [libsdl.org]'s or Allegro [sourceforge.net]'s input layer. For 2D, just use the special case of OpenGL graphics where z = 0 (note that this is also how DirectX 8 does DirectDraw, as a special case of Direct3D 8).

It would be interesting to see how those three fit together, as it really could give OSS a set of APIs to combat DirectX with effectively.

Using OpenGL with a Z of 0 would work too, certainly. Maybe it would be overcomplicated for 2D, though.

I don't see how OpenGL would be over-complicated. If you worry about performance issues, rest assured that most modern video cards accelerate parallel (2D or isometric) projections as easily as they do perspective projections.

Allegro is a new one on me. How well does its API fit in with OpenGL/AL/ML? Does it fit at all?

The Allegro library has its own 2D graphics, stereo sound, and joy/mouse/key input functions; it also has fixed-point math (essential for developing for 486 or other low-end targets that leave out FPU for cost or power consumption) and basic 2D and 3D matrix and quaternion manipulation. The base Allegro distribution [sf.net] contains no OpenGL support, but George Foot's AllegroGL extension [sf.net] lets you start OpenGL, make all OpenGL calls, and copy between Allegro surfaces and OpenGL surfaces. However, it doesn't fit in with OpenAL etc. yet.

Since someone here seems to know a thing or two about OpenML, I'll bring up a few questions I have (unless things have changed in the past month).

First, and probably most significantly, how does OpenML deal with seeking in a stream? In a degenerately-multiplexed stream (e.g. all data for one media type before the rest, because that data can't interleave properly or one block of it covers a large time area, e.g. lyrics / captions), seeking properly becomes very difficult, and even in normal multiplexed streams, sample-granularity seeking (or, in OpenML's case, probably UST(?)-granularity seeking) is a challenging problem. I searched the downloadable specs and didn't find the words 'seek' or 'seeking' once.

On a similar topic, how are multiplexed streams (e.g. MPEG program streams) dealt with? A demux transcoder feeding into codec transcoders? How would this work for seeking?

What about playing files? The specs discuss transcoding assuming that the transcoder can always handle the type of file given to it. So that either means one big monolithic transcoder, or some magical ability of any component to guess which transcoder to use for a given data stream (and for multiplexed streams that's a big guess). Insight on this?

Are the standards revised yet to allow for VBR codecs, where there is not a one-to-one correspondence between input and output buffers for a transcoder?

Finally, where can I get a fully compliant implementation? dmSDK uses dm[FUNC], which is in most cases equivalent to ml[FUNC], but that means renaming a ton of constants, functions, and types when compiling against a reference implementation, if one ever comes to be. And is there any example code whatsoever for transcoders, even if it's no more than reading in a WAV file with header and writing the PCM data from it with sampling rate and channels set? When I looked at trying to write a transcoder for decompressing Vorbis, I was bluntly discouraged by the sheer size of the null (pass-through) transcoder. Example code for a transcoder that actually does something with real-world streaming data would be immensely helpful.

Other than that, the standard looks quite well thought-out, including a well-done union-type message passing structure, the concept of which I stole almost exactly for my rework of the ogg123 internals as part of the major overhaul that should hopefully be complete before Ogg Vorbis RC3.

Two things are holding such people back from making more substantial contributions to OpenAL. First of all it is not entirely clear to me that the API is all that well designed. Modelling it after OpenGL was probably a mistake. In addition, there are certain fundamental assumptions put into the API that assume preemptive multitasking for some things to work well, most notably spooling file play. There was no thought put into using it for anything other than 3D sound effects for games. So, for example if you attempt to write a MOD player using OpenAL to hopefully be able to take advantage of their SoundFont technology and EAX in your MOD player's core and reverb functionalities, you are pretty much out of luck. OpenAL's source queues lack the functionality required for doing proper timing of various effects that you would need in order to pull it off.

The other problem is that the designers of OpenAL don't want to fix these problems, or let 3rd party developers do it for them. I have argued passionately for months for API improvements such as queue completion callbacks, deferred object deletion, and a more extensible API to make the library more generally useful for applications and operating systems other than 3D games on Linux. I have been unable to convince them to make even the smallest changes to the spec. So, really, until we can get some more flexibility and input into the API design, it is somewhat unrealistic to expect me or any other third party, including Apple, to be able to do much for OpenAL.

Avoiding preemptive multitasking in high performance single-processor programming is a good idea, period.

Both Quake III [quake3arena.com] and Quadra [sourceforge.net] use single-threaded, non-preemptive sound output with great success, so why are so many Linux/others game developers so stubborn on the idea of putting the sound in its own thread?

Personally, I blame the original Doom port, which everyone duplicated, even though at the time it had big problems of latency compared to the single-threaded DOS version because of latencies in the scheduling of the sound process.

No, threads don't avoid this problem. Thank you for trying, but sorry.

Avoiding preemptive multitasking in high performance single-processor programming is a good idea, period.

Why?

Both Quake III [quake3arena.com] and Quadra [sourceforge.net] use single-threaded, non-preemptive sound output with great success, so why are so many Linux/others game developers so stubborn on the idea of putting the sound in its own thread?

Because Q3A's sound sucks compared to other games (such as Rune and UnrealTournament) which use OpenAL under Linux, perhaps?

When you have vector math routines that are optimized to take as much advantage of the cache as possible, and a program that spends 70% of its cycles in those routines, what do you think happens when those vector operations are going in a tight loop and the scheduler preempts them to run the sound thread a bit? It dirties the cache like hell, performance goes down the drain, everything sucks.

For those who have two processors, inter-processor communication usually kills multi-threaded performance if it isn't VERY carefully done (i.e., it's practically never done correctly). Cache coherency is a killer for most multi-threaded applications on SMP systems.

I don't know if this still works, but a while ago (a year or so), I was playing with the CVS version of OpenAL, and back then it was possible to avoid multi-threading, so I would credit this to the OpenAL developers (unless they broke that feature at some point).

What you say (the context switch to the sound thread screwing up the cache) makes perfect sense, but I'm a bit confused as to how a single-threaded design avoids that problem. You still have the same amount of vector math and the same amount of sound processing to do, but instead of the OS taking care of scheduling them, you have to do it manually. Doesn't the cache still get dirtied when the single process switches from doing some graphics work to doing some sound work? Or do you arrange it so you do the sound in between unrelated parts of the graphics work, so the cache would be dirtied anyway?

If so, can't a multi-threaded design work if you have enough control over scheduling? Enough control meaning being able to give the scheduler hints like: no, don't switch me out now and yes, do a switch now.

Yes, the same amount of work has to be done, but (essentially) the OS doesn't have the same insight into the work as you do, so you can do a better job. You can do all the vector math for a 3D scene, and *then* dirty the cache doing some sound processing, but you'll dirty the cache only once, not multiple times when the OS will go back and forth between the two.

The last sentence of your first paragraph is correct: the cache would be dirtied anyway, unless you can write a program which fits entirely in cache (including data and the appropriate kernel parts!). Then you have no problem. :-)

You are also right in your second paragraph. But we were talking about pre-emptive multitasking here, and the pre-emptive part means just the opposite of what you are saying.

GNU Pth [gnu.org] is a non-preemptive multithreading package that gives you this kind of control with a familiar POSIX Threads API.

Personally, I prefer event/message-based architectures, which can scale to multiple threads or processes if needed (like if you want to use multiple CPUs or if you have totally blocking operations like libresolv-based gethostbyname). But GNU Pth is nice too, a shame it's not used more...

Thanks for your reply, I'm getting a better idea of some of the issues involved.

I happen to be interested in some of these architectural-performance issues because I'm currently working on a network server for an online game. It has to stay responsive to "unreliable" movement data at all times while other parts of it might be blocking on various things (e.g. libmysql calls, big file i/o, interpreting extension code in Scheme). I started out thinking it could be done in a single thread with lots of message-passing, but I switched over to a small number of (preemptive) threads when I realized that some of the stuff I'll be using (the mysql, for example) simply blocks and can't be forced into an asyncronous model.

Getting back to the topic, I was thinking about how my decision to use preemptive threads to "guarantee"* that the time-critical data would be processed quickly would affect cache consistency. Now, a network server is much less CPU-intensive than sound or graphics, so my intuition suggests that cache effects would be much smaller as well. Does that make any sense? Are there alternative structures I should be considering, given my (vague) requirements?

I have looked into pth, and it seems useful for some things, but I'm not sure it's appropriate for me. Also, switching over to it would require lots of effort at this point, and I'm not sure I would even get any benefits.

*Of course that's only a soft guarantee, but it's enough for this application.

Avoiding preemptive multitasking in high performance single-processor programming is a good idea, period.

Why?

Threading is not 'free'. It comes at a cost in context switches that is not noticeable in simple desktop applications, but becomes quite noticeable once you're doing a game. In addition, it's impossible to have fine control over the scheduling of preemptive threads. In fact, by definition, you don't have control over it, because it's preemptive. I imagine this can cause serious timing issues for something like sound.

Yes, audio is a bit of a special case, because when the audio hardware needs data, it needs data NOW! That means that the audio thread has to get time from the scheduler, and it can't block on some synchronization primitive before it completes its job. You can solve that problem using large buffers, so that there is plenty of time to prepare the next one before running out. Unfortunately, if you want low latencies (and who wouldn't, for a game!) then you can't do that. A good alternative is to instead have "asymmetric multithreading", such that when the audio hardware needs data, it gets unobstructed processor time, on demand, regardless of what is currently occupying the processor.

Asymmetric multithreading (or interrupt-level tasks) requires a special set of data synchronization primitives. The normal semaphore/mutex will generally cause the high priority thread to block until the low priority one exits a critical region. That is not what you want. You want the audio thread to push through and make the low priority thread move out of the way. Fortunately, atomic operations have this behavior. Making proper use of these tools can be a bit tricky and usually requires that you keep your API simple and straightforward. The current alSource queue/unqueue is a little bit too complex to do easily with atomic operations. It would be better for this purpose if it were a simple queue (with callbacks, of course!).

Two things are holding such people back from making more substantial contributions to OpenAL. First of all it is not entirely clear to me that the API is all that well designed. Modelling it after OpenGL was probably a mistake.

Gotta love this statement. Assert - never qualify.

One of the key points of environmental 3D audio is that it is intended to go hand-in-hand with a 3D visualization of a world. Choosing a setup similar to OpenGL struck me as being both an intuitive and sensible way to proceed. Creative certainly thought so when they looked at OpenAL hardware support. This does not mean that you have to use OpenAL for 3D worlds only - OpenGL works well for 2D as well - just look at Chromium BSU.

In addition, there are certain fundamental assumptions put into the API that assume preemptive multitasking for some things to work well, most notably spooling file play.

Well if your system doesn't support pre-emptive multitasking, you are going to have to live with interrupt control. Tricky but not impossible, and something that 99% of the developers aren't going to have to worry about.

There was no thought put into using it for anything other than 3D sound effects for games. So, for example if you attempt to write a MOD player using OpenAL to hopefully be able to take advantage of their SoundFont technology and EAX in your MOD player's core and reverb functionalities, you are pretty much out of luck.

Assertion! No facts!

OpenAL's source queues lack the functionality required for doing proper timing of various effects that you would need in order to pull it off.

Timing is critical in any sound API - OpenAL works fine. Maybe the key difference is that OpenAL does not give a mechanism to stream data into a buffer object, choosing instead to allow the programmer to queue buffers for sources. Essentially this means that the application is free to do funky stuff up front before submitting the buffers, or even (re)process the buffers on the fly during playback. If you need to do funky things like real-time DSP processing, then you are going to have to be able to make guarantees that you can process the data fast enough to keep the sound buffers populated. Beyond that, there is nothing stopping you from writing a MOD player using OpenAL.

The criticisms of OpenAL seem entirely reasonable to informed readers. OpenGL is an immediate-mode API, while the nature of audio inherently requires a retained-mode implementation from the ground up, probably a scene-graph-like API. The flaws in OpenAL are obvious and serious.

I am a game programmer, with a fair number of years of experience at this point. And I think the OpenAL API is horrible.

I don't care at all about the preemptive multitasking part -- old versions of MacOS can bite me. But I do care that it is way, *way* harder to use OpenAL than it should be, and the reason why is that the guys who designed the API had a horrible sense of priorities.

Sound is fundamentally different than graphics; the paradigms that work well for outputting 3D graphics are inappropriate for audio. My biggest issue with OpenAL is that they took the OpenGL/Direct3D driver model of "download texture to the video card" and made their audio buffers work that way by necessity. Now, this is a feature that no game programmer will ever use, for various reasons (the most important of which being that it's just not necessary; sound consumes negligible bandwidth on modern computers, so why try to optimize that bandwidth?). But this feature that nobody with a clue will ever use, being at the core of the API, makes everything way more complicated than it should be. (And, consequently, more bug-prone).

People with 3D graphics experience will tell you that the texture downloading step in 3D graphics is the biggest pain in the ass, and that they'd rather have it not be there if they had the choice. (See e.g. Carmack's plan files of a couple of years ago regarding virtual-memory-style textures, the idea being that you keep them on the host CPU and never explicitly download them.) So the idea of thinking that downloadable textures are cool, and wanting to emulate that in a sound API, smacks of inexperience.

I suspect that part of the reason for this API lameness is that a large part of the development is being undertaken / organized by Creative Labs, who have the agenda of pushing many of their useless hardware acceleration features onto the market. When working on OpenAL, they want to support those features, not necessarily make the objectively best API. (I doubt that the engineers at Creative think their hardware buffer stuff is useless... but it is. Anyone with game experience will tell them so.)

[I was on the OpenAL dev mailing list for a while. I tried to make API suggestions but they were basically ignored.]

My biggest issue with OpenAL is that they took the OpenGL/Direct3D driver model of "download texture to the video card" and made their audio buffers work that way by necessity. Now, this is a feature that no game programmer will ever use, for various reasons (the most important of which being that it's just not necessary; sound consumes negligible bandwidth on modern computers, so why try to optimize that bandwidth?). But this feature that nobody with a clue will ever use, being at the core of the API, makes everything way more complicated than it should be. (And, consequently, more bug-prone).

Fact: There are sound cards which have memory onboard for storing sound. There is no reason NOT to implement this feature.

While you claim that the bus bandwidth used in transferring audio is "negligible", it is still a factor. In addition, when sending this data to the sound card, additional interrupts are generated. If your sound card is sharing an interrupt with, say, your video card and your SCSI card, which is not at all impossible, and you are doing something odd with very small samples being repeated (perhaps for engine sounds?) you may in fact be sending a small amount of data but many commands to the card. It's conceivable that in the future, this sort of practice will be commonplace.

...the idea of thinking that downloadable textures are cool, and wanting to emulate that in a sound API, smacks of inexperience.

Oh yeah, this explains why there are so many linux programs which use the sound memory features of the GUS MAX. It must be inexperience. Or maybe it's just a desire to make things easier on the supporting hardware.

I suspect that part of the reason for this API lameness is that a large part of the development is being undertaken / organized by Creative Labs, who have the agenda of pushing many of their useless hardware acceleration features onto the market. When working on OpenAL, they want to support those features, not necessarily make the objectively best API. (I doubt that the engineers at Creative think their hardware buffer stuff is useless... but it is. Anyone with game experience will tell them so.)

Great. Tell that to Amiga game developers, not that they're so easy to find these days. Sound cards using downloadable sounds can help take load off an already beleaguered older machine.

Now, the Disclaimer: I am not a game developer, though I know lots of 'em. I do however have a pretty good idea of what I'm talking about. I also don't know jack about OpenAL, nor do I claim to, and the way they do things may be entirely brain damaged, but that doesn't mean that downloadable sounds aren't useful or desirable, which is the only thing I'm trying to say here - Besides that your arguments are flawed, or at least incomplete.

The other problem is that the designers of OpenAL don't want to fix these problems, or let 3rd party developers do it for them.

Huh? Just like with OpenGL, an OpenAL developer can create extensions to the core API, if there is need. If the extensions are generally useful, they often get folded back into the "main" API. A lot of useful functionality for OpenAL is implemented in Loki created extensions [openal.org], some of which are obsoleted by the 2.0 API.

I must admit I am a little surprised to see my MacCentral post here. I have written quite a bit about OpenAL in the past. This particular one was in response to a user curious whether Apple would adopt OpenAL. Often I take the time to commend the hard work going on at Creative and elsewhere. I regret that I did not say enough positive things about OpenAL or the heroes who keep it alive this time. With that in mind, let me stress...

I don't think OpenAL sucks! I think it is the best cross-platform 3D audio API available. It also could be a lot better. I feel about OpenAL much the way one might feel about a best friend on learning he has acquired a drug habit - often frustrated, sometimes confused, and always deeply concerned.

Let me address a couple of comments:

You have to realize the reason he doesn't like the API requiring preemptive multitasking is that the "classic" MacOS doesn't have it.

This is not a religious issue; it is something else. The crux of both my OpenGL arguments and the PMT one is that sound is necessarily an asynchronous medium. In video you can afford to drop a frame or two, since the last frame you completed will look fine until you complete the next one. Video works fine, even best, with a synchronous API.

Unfortunately, you can't afford to drop any frames in sound at all! For this reason, enforcing a synchronous API on an audio library makes no sense to me. That is why I criticised modelling it after OpenGL. It is also why I brought up the PMT issue: only with preemptive multitasking can you write a synchronous audio API and hope to get away with it. Even then, if you want low audio latencies, as you do in a game, you should not do it. The audio processor thread should not block on anything other than the audio hardware's readiness for the next buffer.

If you get rid of the current polling mechanism for removing blocks from the queue (polling is rarely a good idea on any OS) and instead use completion callbacks, then it works everywhere, PMT or no, as an asynchronous API. Even better, you can use the completion callbacks to precisely time future audio events (e.g. volume changes) without having to worry about polling at just the right moment, because the callbacks occur synchronously relative to the thread that matters: the audio processor thread. The queue could also benefit from being able to queue delays as well.

Timing is critical in any sound API - OpenAL works fine.... there is nothing stopping you from writing a MOD player using OpenAL.

OpenAL works fine for games, where the timing of sounds is not critical because they don't have to be synchronized with each other. However, there is no provision in the API whatsoever to control the timing of when sounds play. There is no latency information available. You do not know when queued sounds will actually start to play. There is no way (as there is in QuickTime) to coordinate two different sound sources so they start at precisely the same time. So although you could write a "MOD player" in OpenAL, it wouldn't be a very good one: there is nothing to keep instruments from playing at the wrong time.

Maybe the key difference is that OpenAL does not give a mechanism to stream data into a buffer object, choosing instead to allow the programmer to queue buffers for sources.

At one point, it did. There is, or was, a LOKI streaming-buffer extension. It was deprecated because the whole concept of a streaming buffer is an oxymoron, and that was the right thing to do. The queue that replaced it is fine too; it just needs to do more than accept buffers. The un-queue is bizarre.

You see, audio is a result of the kinetic energy of air molecules, the effects of which can only be observed in three dimensions. Any number of dimensions that is more or less than three is merely a mathematical abstraction, and it is not known whether sound exists in those dimensions.

The pace of OpenAL development for the Linux branch during 2001 has slowed, but it has not stopped. I think it's natural that some slowdown would have occurred, because January saw the commit of the 1.0-spec-compliant Linux implementation. Until the 1.1 spec is well defined there's not much to do but fix bugs and write new backends, so the frenetic pace of commits that led up to January 2001 can't really be maintained.

That being said, there are lots of spots in the Linux implementation that could use bug fixing, or optimizing, or whatever. Since I left Loki I don't think my position as full-time maintainer was ever filled. I've committed fixes for some of the more serious bugs, but in the vacuum left by my departure a lot of good patches have fallen by the wayside.

But the API is good, and the implementations are on the whole good and getting better. Concerns about the viability of using OpenAL can be addressed by looking at the Loki software titles. In the absence of an official maintainer I'm more than happy to fix things as time permits, though this leaves things open to the vagaries of my work and personal schedule. In general, I really think this criticism is unwarranted (and your observations about CVS commit dates are totally incorrect!).

This is nice, but couldn't Ask Slashdot deal with things a little closer to home?

'Anonymous Coward writes: A few years ago, Slashdot had several comments by the user MEEPT!. There was much excitement around it. But if you check the site now, the last MEEPT! comment appears to be from December of 1999! Does anyone know of a good (preferably non-goatsex-infested) site for such comments? The only answer I get when I ask Slashdot users this question is to be moderated (-1, Offtopic). I'd love to read Slashdot instead of having to move to Kuro5hin again. Any pointer or hints about the current status of MEEPT!? Are there any alternatives?'

It seems that the only companies involved in OpenAL are Loki and Creative. Creative has basically stopped supporting anything other than its newest cards on "the other" operating systems. Look at the last driver release dates for the Live! series under W2K. There have been lots of complaints about the way they mess up the PCI bus on VIA chipsets, poor signal-to-noise ratios, and on and on...

In the past it was expected that audio under Linux would suck because there was no support. So now Linux will catch up to and pass Windows support?

I would LOVE it if nVidia nForce came in with support for Linux AND Windows AND produced good CLEAN sound with good speed under both OSs, but I'm not holding my breath.

Also, what about support for high-end cards (M-Audio, Aardvark, etc.) under Linux? Are any of these cards good for gaming? I couldn't care less about 5.1 right now. I want fast, clean stereo sound, a digital out, low CPU load under the most demanding games, standards compliance, a price under about $150-200, and regular driver updates. You'd think after, what, twelve years of sound cards for PCs we would have figured all of this crap out by now.

Hmm... perhaps because no one likes Win2k. I mean, really, most game developers and companies refuse to 'officially' support Win2k... I think that's a sign :)

As far as good clean sound, I feel like the sound is very clean, especially when coming from my Klipsch 4.1 Promedia THX certified speakers...

I would call the quality excellent. And to be honest, the only chipset with real PCI problems is VIA's, because the Live! generates a very tight PCI feedback loop -- something that works fine with every Intel chipset I've ever owned and "mysteriously" didn't work with the one horrible VIA chipset I owned (Apollo Pro 133A). As a side note, AGP support was horrid with that VIA chip as well. That's the last VIA chipset I'll ever buy...

Just to present a different side of the coin, I had the same problems with sound under Win2K with a Creative card. When I looked for support, the Creative forums blamed it on VIA for not producing decent drivers, while VIA blamed Creative. However, my motherboard manufacturer (Gigabyte) quietly released a BIOS update which fixed these problems instantly.

Hmm... perhaps because no one likes Win2k. I mean, really, most game developers and companies refuse to 'officially' support Win2k... I think that's a sign :)

Much like Linux and MacOS. Are they bad as well? Everyone knows W2K is much better than 9x and Me. XP is just W2K warmed over with a bunch of useless marketing tack-ons.

I've never owned a single non-Intel chipset. I have had problems with the Creative drivers. I have asked Creative for support. I have been ignored/denied. I don't plan on buying an Audigy (what a stupid name). I have a whole mess of Creative Labs boxes in my attic that will probably show up on eBay really soon; I just need a suitable replacement. I play games. I like W2K over Linux (client). I know that I'm not alone.

My speakers, preamp & amp are good as well. (big long rant about how good, edited out)