Maybe I can try to force the second one to see if there is any difference.

Let me know if it works any better than the other one. If so, I'll try to set up some test machines with Linux and webcams to see if there is any way to predict which one works. If neither works, then it really won't be solvable until I finish writing my software mixer.

We love death. The US loves life. That is the difference between us. -Osama bin Laden, mass murderer

That's odd, I wouldn't have expected it to get into ChannelJavaSound.attachBuffer without printing some of the debug messages from the Mixer selection process first.

Anyway, what is the brand/model of your webcam, and the Linux version? I'm going to set up some Linux test environments and want to see if I can reproduce the problem here. I'm not placing a high priority on this bug, but it is something I'll continue to look at among other things.


Mandriva Linux 2010 (one of the most popular Linux distros, just behind Ubuntu). The webcam is a Logitech, Inc. QuickCam Messenger.

If I understand it correctly, it should make the player stand at the bottom of the screen and look up (toward the top of the screen), with his head pointing out of the screen. This way, his right ear is to the right and his left ear is to the left. Correct? I have tried to experiment, but I am not sure that it sounds right.

Of course, each frame I update the player position. I never need to update these other vectors, right?

I use mostly quickPlay since that allows a sound to overlap itself (I don't want to keep track of multiple sources from the same object). Will 30 calls per second to quickPlay be a problem?

A design note: I would have preferred it if a source weren't tightly bound to a sample. Instead, I would like to have a source like "enemy_1" or "door_2", and then that source could play different samples. As it is now, I have to have lots of sources that are always at the same position. A door can open and close, so it must have two sources. A creature can walk, attack, take damage, and die; that means at least 4 sources. In the end, I just use quickPlay for almost everything.

The call to setListenerAngle is not necessary (zero is already the default value, and it would be overwritten by setListenerOrientation anyway). The best way to visualize the orientation you wrote: you are lying face-down flat on your stomach (looking in the -y direction) with your feet pointing into the screen (up is in the +z direction, toward the player). Thus, if you were to position the listener in the center of the screen, things on the left side of the screen will sound like they are to the right and things on the right side of the screen will sound like they are to the left (and if you have an OpenAL version designed for surround-sound systems, things at the top of the screen would sound like they are behind you and things at the bottom of the screen would sound like they are in front of you). A better orientation might be (0, 1, 0, 0, 0, -1).
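To make those six numbers concrete, here is a small self-contained sanity check: a valid orientation needs the look-at and up vectors to be perpendicular. The `OrientationCheck` class and its `dot` helper are hypothetical, purely for illustration, and the commented library call is a rough sketch, not verified against the javadoc:

```java
// Sanity-check sketch for the suggested orientation vectors.
// OrientationCheck is a hypothetical illustration class, not part of SoundSystem.
public class OrientationCheck {
    // Standard 3D dot product; zero means the vectors are perpendicular.
    public static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        double[] look = {0, 1, 0};  // look-at vector: up the screen (+y)
        double[] up   = {0, 0, -1}; // up vector: into the screen (-z)
        System.out.println(dot(look, up)); // 0.0 -> perpendicular, so a valid pair
        // The corresponding library call would be roughly:
        // soundSystem.setListenerOrientation(0, 1, 0, 0, 0, -1);
    }
}
```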

Notice, however, that you must also change the listener's position, not just his angle. Without changing position, the listener will be at the top-left (0, 0), and everything will sound like it is to one side.

Next, you must consider the fact that if (0, 0) is the top-left, this means that +y is down in your coordinate system, but it is up in SoundSystem's coordinate system. So to keep things correct, you would want to reverse the sign of any y-coordinates you pass to SoundSystem.
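A minimal sketch of that sign flip, assuming a top-left screen origin. The `ScreenToSound` class and `toSoundY` helper are hypothetical names for illustration, and the commented `setPosition` call is only a rough sketch of how it would be used:

```java
// Sketch: converting screen coordinates (origin top-left, +y down)
// into SoundSystem coordinates (+y up). Hypothetical helper, not library code.
public class ScreenToSound {
    // Flip the sign of the y axis; x is unchanged, z stays 0 for a 2D game.
    public static float toSoundY(float screenY) {
        return -screenY;
    }

    public static void main(String[] args) {
        // A source drawn at screen position (120, 80) would be passed to the
        // library as roughly (120, -80, 0), e.g.:
        // soundSystem.setPosition("door_2", 120f, toSoundY(80f), 0f);
        System.out.println(toSoundY(80f)); // -80.0
    }
}
```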

Assuming you have the listener position and coordinate signs correct but you are still not hearing what you expect, there are a couple of problems that can be easily adjusted (both adjustments can be combined if needed):

Problem #1: Panning between the left and right speakers pans either too rapidly or too slowly compared to the 2D positions.
Solution: Ignore y and z (use only x coordinates) and adjust the diameter of the circle used to calculate the pan. You would use the default listener orientation for this (where +y is up and -z is into the screen), and when setting the position of the listener and all sources, only pass them the x coordinates. So say you want to have the listener at (5, 7) and to play an explosion at (2, 4):
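A sketch of the idea, assuming those positions. The `panFromX` helper is hypothetical, just mimicking the x-only pan calculation, and the commented SoundSystem calls are from memory, so check them against the javadoc:

```java
// Illustrative sketch of x-only panning; panFromX is a hypothetical helper,
// not part of the SoundSystem API (the library does this math internally).
public class PanSketch {
    // Map the x offset between source and listener onto [-1, 1]
    // (full left .. full right), where 'radius' is half the diameter
    // of the pan circle mentioned above.
    public static float panFromX(float listenerX, float sourceX, float radius) {
        float pan = (sourceX - listenerX) / radius;
        return Math.max(-1f, Math.min(1f, pan));
    }

    public static void main(String[] args) {
        // Listener at x = 5, explosion at x = 2, pan circle radius 10:
        System.out.println(panFromX(5f, 2f, 10f)); // -0.3 (slightly to the left)
        // The actual calls would pass only the x coordinates, roughly:
        // soundSystem.setListenerPosition(5f, 0f, 0f);
        // soundSystem.quickPlay(false, "explosion.wav", false, 2f, 0f, 0f,
        //         SoundSystemConfig.ATTENUATION_ROLLOFF,
        //         SoundSystemConfig.getDefaultRolloff());
    }
}
```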

The good thing about this method is you don't have to worry about the sign difference in y coordinates, because you are only using the x coordinates.

Problem #2: Far-away sounds are either too loud or too quiet based on their distance from the listener.
Solution: Adjust the rolloff factor if using Logarithmic/Rolloff Attenuation (or the fade distance if using Linear Attenuation).
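To get a feel for what the rolloff factor does, here is a self-contained sketch based on OpenAL's inverse-distance attenuation model (SoundSystem's internal math may differ, and the `gain` helper is a hypothetical illustration, not library code):

```java
// Sketch of rolloff attenuation, modeled on OpenAL's clamped
// inverse-distance formula: gain = ref / (ref + rolloff * (dist - ref)).
public class RolloffSketch {
    public static float gain(float distance, float refDistance, float rolloff) {
        if (distance < refDistance) distance = refDistance; // clamp near sounds
        return refDistance / (refDistance + rolloff * (distance - refDistance));
    }

    public static void main(String[] args) {
        // Larger rolloff values make distant sounds fade faster:
        System.out.println(gain(50f, 1f, 0.03f)); // ~0.40 (still quite audible)
        System.out.println(gain(50f, 1f, 0.5f));  // ~0.039 (nearly silent)
    }
}
```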

For your question about the listener's orientation: correct, you should only need to set it one time. Also, you really only need to update the listener's position when it changes. It doesn't hurt to update it every frame, but it may not be necessary (for LibraryJavaSound, each update of the listener position requires a loop through all the sources to calculate a new pan/gain, so only calling setListenerPosition when the position changes could be a useful optimization if there are a ton of sources to loop through).

For your question about 30 calls per second to quickPlay: in stress tests that is not a problem as far as stability goes. However, you should note that there are generally only 28 normal channels available, so sources will be getting cut off if you are playing that many simultaneously. Depending on the speed of the user's system, there could be a noticeable performance hit as the channels are rapidly being started, stopped, reset, and started again.

Also, if a lot of copies of the same sample are playing at close to the same time, you will experience phase-resonance interference (this is true for any sound library, not just SoundSystem). What this sounds like: randomly amongst the numerous playing sources, you will hear a distorted version of the sample played either extremely loud or extremely quiet (this behavior is very noticeable). It is caused by the amplitude values of more than one sample aligning to either amplify or cancel each other out. It happens most noticeably with sound effects that are made up of repeating sample data (such as bells, engine hums, laser pulses, etc.). Interestingly, this phenomenon actually happens in the real world as well (for example, some migratory birds have specialized in-flight calls that reduce the effect of echo off of mountains by inversely aligning the wave amplitudes as returning sound waves pass back over the oncoming waves, making it easier for them to locate other flocks and coordinate their movements).
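The amplify/cancel behavior described above can be demonstrated with two pure sine waves (a self-contained illustration using hypothetical helper names, not SoundSystem code):

```java
// Demonstration of phase alignment: summing two copies of the same wave
// either doubles the amplitude (in phase) or cancels it (half a cycle apart).
public class PhaseDemo {
    // Generate one buffer of a sine wave with the given phase offset.
    public static double[] sine(int samples, double cyclesPerBuffer, double phase) {
        double[] out = new double[samples];
        for (int i = 0; i < samples; i++) {
            out[i] = Math.sin(2 * Math.PI * cyclesPerBuffer * i / samples + phase);
        }
        return out;
    }

    // Peak amplitude of the mix of two equal-length buffers.
    public static double peak(double[] a, double[] b) {
        double max = 0;
        for (int i = 0; i < a.length; i++) {
            max = Math.max(max, Math.abs(a[i] + b[i]));
        }
        return max;
    }

    public static void main(String[] args) {
        double[] wave = sine(1024, 8, 0);
        System.out.println(peak(wave, sine(1024, 8, 0)));       // ~2.0 (amplified)
        System.out.println(peak(wave, sine(1024, 8, Math.PI))); // ~0.0 (cancelled)
    }
}
```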

As for your design note: true, it would be better optimized to allow more than one sample to play from a single source, but it would also create more of a headache for the developer, who would now have to keep track of which sample is currently assigned to each source. Since sources are relatively cheap, I think the potential optimization benefit loses out to the "easy to use" benefit. Additionally, in both of your examples it is more realistic to play the samples from more than one source, because they may overlap (a door could be slammed shut and then rapidly opened again, or a creature could be walking and attacking at the same time, or taking damage and dying at the same time, etc.). Just quickPlay the sample where it needs to play, and let SoundSystem worry about creating and removing the sources for you.


The sounds that I use are played correctly the first time I play them, but after that, I hear nothing. I'm going to update the library and check whether I can reproduce this problem when my webcam is not plugged in.

When I stop playing a sound (especially the introduction sample), it fails: the sound goes on playing until the end.

That's a new one. What is your OS, 32/64 bit, Java version, etc? I'm sure you've told me before, but the thread has gotten so long that I can't seem to find it. Let me know if it is related to the webcam. I'll try and reproduce it here if I can.


Ok, I confirm that the bug is reproducible only when the webcam is used, because it uses another mixer. It is not important, don't worry. I'm impatient to use your low-latency software mixer. Thank you for your good work!

I've written a native software mixer for Android (or rather, I've hijacked one from someone else's code and ported it to the NDK). That has helped me quite a bit with understanding what all is involved. I'm still in the process of taking that knowledge and translating it into something usable in Java, though.


Do you use OpenSL ES or OpenMAX on Android to do this? Is the use of native code mandatory to write such a software mixer?

I have to write a report about my webcam problem. It would be good if it were fixed in Java 1.7.


The mixing is done purely in C code. The code is taken from the open-source SDL software mixer. The output data is passed in buffers through an interface method into Java, where it is played through an AudioTrack instance.


Ok, I see what you mean. When you port this code to the desktop, will you keep all this C code? Is a pure-Java software mixer completely unrealistic? SFML has a nice software mixer; maybe you could look at it too.

Well, I'm attempting to port it to pure Java (somewhat difficult to deal with the whole pointers vs Objects issue). If this fails or turns out to simply be too slow, I may have to go with native code in the end (I hope not). If that happens, I will modify LWJGL's AppletLoader so that it deploys the necessary natives (which will make using the library in Applets easier, but will of course still require the end user to accept the digital signature).


Well, I was just talking in general terms, since I have not actually finished the port yet. I meant that Java, as an interpreted runtime environment, trades some amount of speed for its portability. For the most part, this isn't a problem (the fact that there are pure-java 3D engines is proof of that). Audio data manipulation and mixing is rather math-intensive, so it remains to be seen if a straight port from native code will run fast enough, or if it will require additional optimizations.


Ok. Maybe you could use JOCL, even just to perform computations on the CPU, to benefit from features that are not exposed in Java; it would avoid writing native code.

I have looked a bit at the sound system of SFML. Actually, even the software renderer relies on OpenAL.

Secondly, I've encountered an issue with playing .ogg files that are of a small size (for instance 13 kB). I can play them in another media player (Winamp, for example) but not using the sound system. When I change the file to a larger .ogg (a track of 500 kB), it works fine. Is there a lower size limit for use of .ogg?

I encounter this issue using LibraryLWJGLOpenAL.class and LibraryJavaSound.class, and I have also tried both CodecJOrbis.class and CodecJOgg.class. Any ideas?

EDIT: It also works with a .ogg of file size 184 kB. So I guess there may be a lower size limit?


Do you use the latest version? I use tiny ogg files with JavaSound and it works fine. Do you use streaming?

Thanks for the reply. I am using the versions from the first post; I'm assuming these are the newest (as a lot of people tend to re-edit their first post with the links). I've tried it with both the newSource(...) and newStreamingSource(...) methods.
