Ok so this isn't strictly gaming-related, but it could be useful as a tool in gaming projects for generating and processing audio. What I'm working on is a software synthesizer/audio processor. It could actually be used for any kind of digital signal processing, but my focus here is on sound. The idea is that you can drag and drop DSP objects onto your workspace, connect them as you like, and in that way create your own sound generators, effects and whatever else you can come up with.

Here is a screenshot:

So what you're seeing here is a network of digital signal processors (in this example, an 'auto-wah' effect that actually sounds quite funky with a bass guitar). You can just drag and drop components from the left-hand side and connect them together as you please to create your own sounds, then save them as XML. There are actually many more DSP components already in place (delay, FM synthesis, reverb, phaser, chorus, wave-shaping, etc.), but I still need to add them to the left-hand menu to make them available in the editor.

Although I'm mostly focusing on musical applications for this at the moment, I think it could be useful for real-time audio processing in games as well. For example, a reverb effect that adjusts itself to the area the player is in. Or whatever else you can think of: anything that goes beyond simply playing back samples, I suppose.

If there is enough interest in something like this, I'm considering making it an open source project (otherwise it'll just stay my own toy).

I for one would love to have a tool like this to mess around with. Are you thinking of releasing a demo? I believe the interest would be there since to my knowledge there isn't an open source tool like this out there and the commercial ones are quite expensive for casual hobbyists like most of us.

I've just uploaded a very early demo to play around with. It's all still quite rough, but it sort of works.

Some pointers:
- You can't drag from the left-hand menu yet: you have to double-click an entry, and it will appear at the top left of the workspace.
- Asio IN/OUT obviously only works on Windows, and only if you have audio hardware with ASIO support (or have ASIO4ALL installed). On other platforms, use Audio In/Out (i.e. JavaSound) instead.
- Connections are always made by dragging from an output (the right side of a DSP block) and dropping on an input (the left side).
- To disconnect, drag a connected output off its input and drop it on the workspace.
- The Keyboard component simulates a piano keyboard on your computer keyboard, but you have to give it focus first (i.e. click the button in the center of the Keyboard block).
- MIDI IN is untested at the moment (I will work on that today).

I'd be interested if you do open this up - there may be some potential for code sharing here.

Also, if you're interested, take a look at the JAudioLibs AudioServers project. I've just had a code contribution to review with a direct ASIO backend, but this would also give you access to JACK - low latency audio and inter-application routing on Windows, Linux and OSX. This would for example allow you to route any other ASIO program through your DSP environment.

Regarding opening it up, I think it's a bit too early for that, as I'm still tinkering with the general design and some things might change drastically in the near future. The editor is sort of in the prototype phase.

One basic thing is that currently all signals are floats, but I'm considering making it all double. I don't think it should negatively impact performance on a 64-bit system, and in fact it might even improve, since I already use doubles in various places where precision is important, so there's already quite a lot of casting going on.

This is awesome. Ok, it's still in an early alpha state, but there's huge potential. I could use something like this myself, as I'm working on a 2D game engine with a graphical editor for nearly everything. We haven't started on the sound system yet, but this would definitely make it a lot cooler to use. I don't know if it's possible at the moment, but you could think of a random function to make a simple sound different every time it's played, through random pitch or other effects. I'll watch this, and once it's open source (I hope it will be one day) I'd try to contribute and, as I mentioned, use it myself.

One basic thing is that currently all signals are floats, but I'm considering making it all double. I don't think it should negatively impact performance on a 64-bit system, and in fact it might even improve, since I already use doubles in various places where precision is important, so there's already quite a lot of casting going on.

LOL I actually went the other way, and switched the pipeline from double to float, though I also use doubles in places where precision is important. Performance should be practically the same - AFAIK it doesn't actually make a difference to the FPU(?) However, the thing that swung me was memory usage, particularly because of using things like in-memory sample buffers. Also the IO that offers floating-point is likely to be 32-bit I think (not sure on ASIO).

I remember reading somewhere (think CSound docs) that around 6 chained DSP operations can lead to audible differences between 32bit and 64bit signal paths, though depending on the algorithm the number of operations might be higher. To me it justifies double precision where necessary, but not necessarily double precision throughout.
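To make the precision point concrete, here's a tiny hypothetical demo (not from any of the projects mentioned): a one-pole smoother `y += (x - y) * a`, the kind of feedback structure where precision audibly matters. In float, the per-sample increment eventually drops below float resolution and the filter stalls short of its target; the double version gets many orders of magnitude closer. All names are illustrative.

```java
// Hypothetical sketch: the same feedback smoother run at float vs
// double precision. With heavy feedback the float version stalls
// because the per-sample increment falls below float resolution.
public class FeedbackPrecision {

    static float smoothFloat(float target, float a, int steps) {
        float y = 0f;
        for (int i = 0; i < steps; i++) {
            y += (target - y) * a;  // increment shrinks as y nears target
        }
        return y;
    }

    static double smoothDouble(double target, double a, int steps) {
        double y = 0.0;
        for (int i = 0; i < steps; i++) {
            y += (target - y) * a;
        }
        return y;
    }

    public static void main(String[] args) {
        double errF = Math.abs(1.0 - smoothFloat(1f, 0.001f, 100_000));
        double errD = Math.abs(1.0 - smoothDouble(1.0, 0.001, 100_000));
        System.out.println("float error:  " + errF);
        System.out.println("double error: " + errD);
    }
}
```

The exact figures vary, but the float version typically stalls somewhere around 1e-5 from the target while double gets down near 1e-13 - which is the sort of gap that can become audible after enough chained stages.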

RasmusDSP (an old project by the author of Gervill in the JRE) is the only Java audio project I'm aware of that uses doubles throughout.

However, the thing that swung me was memory usage, particularly because of using things like in-memory sample buffers. Also the IO that offers floating-point is likely to be 32-bit I think (not sure on ASIO).

I remember reading somewhere (think CSound docs) that around 6 chained DSP operations can lead to audible differences between 32bit and 64bit signal paths, though depending on the algorithm the number of operations might be higher. To me it justifies double precision where necessary, but not necessarily double precision throughout.

RasmusDSP (an old project by the author of Gervill in the JRE) is the only Java audio project I'm aware of that uses doubles throughout.

Hm, good point about memory usage, I haven't considered that yet. But what sort of memory usage are we talking about here that made you go back to float? My consideration for using doubles is not necessarily sound quality, but sometimes you need doubles in algorithms where high precision does make an audible difference (for example when there's a lot of feedback involved). So I thought: if performance will be the same anyway, why not use doubles everywhere and forget about precision concerns altogether? It'll be simpler, and I like simple. Asio is indeed 32-bit float, but that would then be the only place where a down-cast is necessary, so I think that would still be cheaper than casting multiple times in your audio chain.

I should probably just test it out.

Quote from: twinflyer

I don't know if it's possible at the moment, but you could think of a random function to make a simple sound different every time it's played, through random pitch or other effects.

There are noise generators, so those should be useful for that. And it's very easy to add new DSP blocks.
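For the random-pitch part of the idea specifically, a sound can be made different on each trigger just by resampling it at a randomized rate. A rough, self-contained sketch of what such a block could do internally (all names hypothetical, nothing to do with the actual project code):

```java
import java.util.Random;

// Illustrative sketch: vary a stored sample's pitch per playback by
// reading it back at a randomized rate with linear interpolation.
public class RandomPitchPlayback {
    private static final Random RNG = new Random();

    // Resample 'sample' at 'rate' (1.0 = original pitch, 2.0 = octave up).
    static float[] resample(float[] sample, double rate) {
        int outLen = (int) (sample.length / rate);
        float[] out = new float[outLen];
        for (int i = 0; i < outLen; i++) {
            double pos = i * rate;
            int i0 = (int) pos;
            int i1 = Math.min(i0 + 1, sample.length - 1);
            double frac = pos - i0;
            // linear interpolation between the two nearest source samples
            out[i] = (float) (sample[i0] * (1.0 - frac) + sample[i1] * frac);
        }
        return out;
    }

    // Each call plays back at a pitch within +/- 'spread' semitones.
    static float[] playWithRandomPitch(float[] sample, double spreadSemitones) {
        double semis = (RNG.nextDouble() * 2.0 - 1.0) * spreadSemitones;
        double rate = Math.pow(2.0, semis / 12.0);  // semitones -> rate ratio
        return resample(sample, rate);
    }
}
```

In a graph-based setup like this project, the random rate would more naturally come from a noise or random unit feeding the pitch input of a sample player.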

Hm good point about memory usage, I haven't considered that yet. But what sort of memory usage are we talking about here that made you go back to float?

Partly as I said around sample banks, etc. If you're using a range of samples stored in memory, ideally you want them loaded up and ready to play instantly (ie. probably in the data format of your pipeline). Doubling the memory overhead here can be a problem.

My consideration for using doubles is not necessarily for sound quality, but sometimes you need doubles in algorithms where high precision does make an audible difference (for example when there's a lot of feedback involved).

I think I know what you're getting at, though it sounds slightly like two opposing things. I found the CSound reference I referred to (somewhat incorrectly I realise) earlier - http://www.csounds.com/manual/html/MiscCsound64.html The audio accuracy arguments should be correct, though the performance assertions are probably irrelevant. Be interesting to benchmark a few of your algorithms at both precisions, though a full pipeline is probably more accurate (from a caching, etc. point of view).

Partly as I said around sample banks, etc. If you're using a range of samples stored in memory, ideally you want them loaded up and ready to play instantly (ie. probably in the data format of your pipeline). Doubling the memory overhead here can be a problem.

Ah, I see what you mean. I'm not using sample banks at the moment, but I can see how that could impact things. I suppose you could always keep them in memory in a smaller data format.

The only places where I store signals in memory now are in buffers for delay lines and such, but even there I can imagine it leading to cache misses sooner.

As an aside, I got MIDI working now. I just patched myself a super-dirty dual-osc analog synth for giggles. Things are quickly getting complex though: just this simple monophonic synth already fills the screen with DSP blocks. I'll have to start thinking about the option to create 'meta-blocks' or something (i.e. collapsing a whole network into a single DSP block).

Wow! I just discovered this thread. Great work! This is something I am passionately interested in myself. I have also been making audio tools, but haven't tackled filters or reverberation yet. I'm pleased to have found this thread and be able to follow along. I probably won't have a lot to contribute, since my background is more as a musician and I'm a couple of steps behind on the Java and audio engineering expertise.

I'm hot on a project that functions as a "tanpura" but also an intonation-training tool for classical musicians. When it is "done" (i.e., commercially released), I'm planning on continuing to expand the event-system tools developed for it to support ideas I have for algorithmic composition, generative/dynamic scores for java games.

I haven't opened my audio library yet either, but various "toys" have been posted along the way.

Am curious, what size buffer are you using in passing audio from one unit to the other? I'm simply using a single frame which is a significant inefficiency, but allows for considerably simpler coding. I do use buffering for actual file i/o (e.g., wav playback and SourceDataLine output). But the inner mixing loop and effects processing mostly occurs on a per-frame basis. It works.

Also curious, have you been able to get these tools to work on Android or iOS? I own neither and haven't tried this yet on my own audio library, but the output is just a single, never-ending stream (SourceDataLine) which seems like it should be possible on both systems.

Are you using any native code or any libraries other than javax.sound?

My FM synth can either play in real time or be used to pre-generate the sample data required for upcoming score segments. I'm thinking it might be useful to have both options, allowing one to dynamically optimize according to whether the bottleneck is processing or RAM. Just as visual/graphic data can be generated at level transitions, the same could be done with the sound design and musical score.

"We all secretly believe we are right about everything and, by extension, we are all wrong." W. Storr, The Unpersuadables

I do plan to redo the wavetable synth sources to eliminate the aliasing

That reminds me of a question I was going to ask erikd -

Are the oscillators you currently have just sine waves, or have you implemented square, saw, etc.? If so, are they band-limited? I was having a look a while back at (fast) options for generating band-limited oscillators, either real-time or table lookup - lots of algorithms around, though I haven't found anything in Java, or yet had the chance to try porting something. That might be some code that would be useful to share thoughts on around here?

The idea I intended to try next was to build the "band-limited" square/sawtooth tables from sine waves. The tables are small enough, and it is a one-time event, so the speed of the computation isn't that critical, is it? Or am I off base, conceptually?


Currently the oscillators are not band-limited, so unfiltered you can get some aliasing, but it's on my to-do list. I'm not sure yet how I'm going to implement it. Perhaps pre-filtering the waveforms, or maybe simply oversampling and/or averaging is enough? There are various waveforms though (sine, triangle, block, saw and more), and there is a PWM input.

Quote

Am curious, what size buffer are you using in passing audio from one unit to the other?

So I'm basically just passing single floats around. In the units that handle external I/O, such as JavaSound and Asio, there is of course some internal buffering, but implementation details like SourceDataLine are hidden inside such units to keep the 'core' independent of J2SE-specific APIs like javax.sound.
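The general shape of such a single-float pull model might look like this (interface and class names are invented for illustration, not the project's actual API):

```java
// Each unit exposes one float per tick; a downstream unit pulls
// from its input(s) when asked for its own next sample.
interface SignalSource {
    float nextSample();
}

// A simple sine oscillator unit.
class SineOsc implements SignalSource {
    private double phase;
    private final double inc;

    SineOsc(double freq, double sampleRate) {
        this.inc = 2.0 * Math.PI * freq / sampleRate;
    }

    public float nextSample() {
        float s = (float) Math.sin(phase);
        phase += inc;
        return s;
    }
}

// A gain unit that pulls from its input and scales it.
class Gain implements SignalSource {
    private final SignalSource in;
    private final float gain;

    Gain(SignalSource in, float gain) {
        this.in = in;
        this.gain = gain;
    }

    public float nextSample() {
        return in.nextSample() * gain;
    }
}
```

A chain is then just nested construction, e.g. `new Gain(new SineOsc(440, 44100), 0.5f)`, with the output unit pulling one sample per tick.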

Quote

Also curious, have you been able to get these tools to work on Android or iOS?

I have done some audio processing on Android (although not with this particular project), but I found that latency is an issue there so for this project I didn't get into that yet. I don't have any iOS devices, but it seems that iOS is more suitable for musical low-latency applications.

Quote

Are you using any native code or any libraries other than javax.sound?

I'm using JAsioHost on Windows because unfortunately javax.sound has way too much latency there. Asio uses a pull mechanism rather than blocking I/O, so it was a little bit more tricky to synchronize things properly, but you can get really good latency numbers there (about 2.5ms with Asio instead of ~150ms with JavaSound). I found that with Asio you can even use it for real-time guitar effects with virtually unnoticeable latency.

As an aside, a good source of information is www.musicdsp.org for this sort of thing.

One of my main issues in the editor now is to keep things scalable from a usability p.o.v.

To give you an example, this is a fairly simple emulation of a funky clavinet sound with a resonant filter (think Stevie Wonder), using Karplus-Strong for the basic sound generation. Not only is this mess a fairly simple patch, it's also monophonic. As you can see, things can quickly get quite unmanageable.
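For anyone curious about the Karplus-Strong part: the loop itself is tiny. This is the textbook version (a generic sketch, not the project's actual unit): a noise burst circulating through a delay line, with a two-point average acting as the loop's lowpass filter.

```java
// Minimal textbook Karplus-Strong plucked string.
public class KarplusStrong {
    static float[] pluck(double freq, double sampleRate, int numSamples) {
        int delayLen = (int) Math.round(sampleRate / freq);
        float[] delay = new float[delayLen];
        java.util.Random rng = new java.util.Random(42);
        for (int i = 0; i < delayLen; i++) {
            delay[i] = rng.nextFloat() * 2f - 1f;  // initial noise burst
        }
        float[] out = new float[numSamples];
        int pos = 0;
        for (int i = 0; i < numSamples; i++) {
            int next = (pos + 1) % delayLen;
            // average of two successive samples = gentle lowpass in the loop
            float filtered = 0.5f * (delay[pos] + delay[next]);
            out[i] = delay[pos];
            delay[pos] = filtered * 0.996f;  // feedback with slight decay
            pos = next;
        }
        return out;
    }
}
```

The delay length sets the pitch, and the 0.996 feedback factor (an arbitrary choice here) controls how fast the "string" dies away; the filter's averaging is what makes the upper harmonics decay faster, giving the plucked character.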

At the least I'll have to create some way to collapse a selected network of units into one 'meta-unit' that can be reused. And to make synths polyphonic, I'll probably need to create a specific interface for such a 'meta-unit' so that I can simply duplicate it into multiple voices driven by a polyphonic set-up.

A little update: I have experimented with band-limiting the oscillators, but I've obviously been on the wrong track there. The oscillators use pre-calculated waveforms, and what I did was simply pre-filter them. Sure, it helps with aliasing at higher frequencies, but of course it also made everything sound quite dull. I think the best solution is to do real-time oversampling of the oscillators instead. It might be a bit more expensive, but then again I haven't gone beyond ~2% CPU load of a single core yet (and that was while the CPU was running at about half speed), so I probably shouldn't worry too much about performance yet.

I have also worked a bit on making the editor more manageable. Now you don't really have to add 'controller-knob' units anymore. Instead it simply shows knobs for all editable settings of the selected units at the bottom. Much less clutter! The following screenshot is of the same patch as the last one (but now including soft clipping):

I have experimented with band-limiting the oscillators, but I've obviously been on the wrong track there. The oscillators use pre-calculated waveforms, and what I did was simply pre-filter them. Sure, it helps with aliasing at higher frequencies, but of course it also made everything sound quite dull.

The problem with doing that is that you'd need multiple wave-forms (possibly one per octave), otherwise when you pitch the table up you'll bring in aliasing, but when you pitch down you're missing more and more of the harmonics that give the sound its richness.
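Combining that with the build-tables-from-sines idea above, a per-octave additive table builder might look roughly like this. It's a one-time setup, so speed isn't critical; for each octave we sum only the sine harmonics that stay below Nyquist at that octave's highest fundamental. Names and sizes here are illustrative.

```java
// Sketch: one band-limited sawtooth wavetable per octave, built
// additively from sines so each table contains only harmonics
// that won't alias within its octave.
public class BandLimitedTables {
    static final int TABLE_SIZE = 2048;

    // Build a sawtooth table containing harmonics 1..maxHarmonic.
    static float[] sawTable(int maxHarmonic) {
        float[] table = new float[TABLE_SIZE];
        for (int h = 1; h <= maxHarmonic; h++) {
            double amp = 1.0 / h;  // sawtooth harmonic amplitudes fall off as 1/h
            for (int i = 0; i < TABLE_SIZE; i++) {
                table[i] += (float) (amp * Math.sin(2.0 * Math.PI * h * i / TABLE_SIZE));
            }
        }
        return table;
    }

    // One table per octave, octave 0 starting at baseFreq.
    static float[][] buildOctaveTables(double baseFreq, double sampleRate, int octaves) {
        float[][] tables = new float[octaves][];
        for (int oct = 0; oct < octaves; oct++) {
            // highest fundamental this table will be played at
            double topFreq = baseFreq * Math.pow(2, oct + 1);
            int maxH = Math.max(1, (int) (sampleRate / 2.0 / topFreq));
            tables[oct] = sawTable(maxH);
        }
        return tables;
    }
}
```

At playback time the oscillator would pick the table for the current pitch's octave (optionally crossfading between adjacent tables to avoid switching artifacts).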

I think the best solution is to do real-time oversampling of the oscillators instead. It might be a bit more expensive, but then again I haven't gone beyond ~2% CPU load of a single core yet (and that was while the CPU was running at about half speed), so I probably shouldn't worry too much about performance yet.

Oversampling and filtering is definitely an approach among many. As well as the BLEP approach @BurntPizza mentioned, I've been wondering about BLIT (band-limited impulse train). I know those approaches are related in some way, but not sure of the pros and cons of each. What I do know is that The Synthesis Toolkit has implementations of a BLIT Saw and Square wave algorithm in C++, which shouldn't be too hard to port. There is also some related code within the Music DSP archive. This article (using Reactor) seems one of the easiest to understand the approach - it's not exactly in my comfort zone!

I've also seen a few posts suggesting that suitably optimized real-time generation of waveforms might beat wavetables, again on the basis of cache misses - not sure how that pans out in practice.

I'm using JAsioHost on Windows because unfortunately javax.sound has way too much latency there. Asio uses a pull-mechanism rather than blocking I/O, so it was a little bit more tricky to properly synchronize things

I'm intrigued by why you find a callback API trickier than a blocking one? Also slightly concerned what you're meaning by "synchronize" things - I'm assuming not in the sense of locks! Either way, one article I'd highly recommend reading around low-latency audio programming is http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing He's written a few other interesting articles around communication with real-time audio that might be worth a read too.

The problem with doing that is that you'd need multiple wave-forms (possibly one per octave), otherwise when you pitch the table up you'll bring in aliasing, but when you pitch down you're missing more and more of the harmonics that give the sound its richness.

Yes, in hindsight it was a bit silly to even go that way.

Quote

I'm intrigued by why you find a callback API trickier than a blocking one? Also slightly concerned what you're meaning by "synchronize" things - I'm assuming not in the sense of locks!

It's not really trickier in itself; in fact it's easier. And yes, I meant synchronization in the sense of locks. But the way it works now is that I have an object called 'Context' where DSP units are registered that need to be updated (for example, to generate the next sample of an oscillator). Then there is a thread that updates the Context. This was fine for blocking I/O such as javax.sound, but with Asio this needs to be synchronized with Asio's thread.

It might be cleaner to refactor this a bit so that updating the Context is driven by the audio I/O units themselves, instead of this separate 'I-know-nothing' thread that blindly updates the Context. That way, no synchronisation/blocking would be necessary anymore. On the other hand, it works fine as it is now, so I'm not exactly in a hurry there.

Then there is a thread that updates the Context. This was fine for blocking I/O such as javax.sound, but with Asio this needs to be synchronized with Asio's thread.

Yes, don't do this! Run everything off the callback thread. It's possible to wrap a blocking API to provide a callback API. It's not possible to do the opposite without adding overhead, and potential threading issues. You'd be better building a callback system on top of JavaSound. Feel free to have a look at this code which does just that, and has a few other tricks to improve performance (timing loop, etc.)
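The skeleton of such a wrapper is roughly this - a bare-bones sketch for illustration, not the linked code, with the timing tricks and error handling omitted and all names invented. The blocking `write` paces the loop, so the callback is pulled at the rate the line consumes data:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Sketch: a callback-style API wrapped around JavaSound's
// blocking SourceDataLine.write().
public class CallbackOutput {

    public interface AudioCallback {
        // Fill 'buffer' with the next block of samples in [-1, 1].
        void process(float[] buffer);
    }

    public static void run(AudioCallback callback, float sampleRate, int blockSize)
            throws LineUnavailableException {
        AudioFormat fmt = new AudioFormat(sampleRate, 16, 1, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt, blockSize * 4);
        line.start();
        float[] block = new float[blockSize];
        byte[] bytes = new byte[blockSize * 2];
        while (!Thread.currentThread().isInterrupted()) {
            callback.process(block);            // pull data from the graph
            floatToPcm16(block, bytes);
            line.write(bytes, 0, bytes.length); // blocking write paces the loop
        }
        line.drain();
        line.close();
    }

    // Convert float samples to 16-bit little-endian PCM.
    static void floatToPcm16(float[] in, byte[] out) {
        for (int i = 0; i < in.length; i++) {
            int v = (int) (Math.max(-1f, Math.min(1f, in[i])) * 32767f);
            out[2 * i] = (byte) v;             // low byte
            out[2 * i + 1] = (byte) (v >> 8);  // high byte
        }
    }
}
```

With this shape, the rest of the engine only ever sees `process(float[])`, so swapping JavaSound for ASIO (which is callback-based natively) doesn't change the processing model.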

Are you suggesting that eventually using all those delicious CPU cores for fancy real-time audio processing will never really work and we'll be stuck in single-threaded land forever?

I've read the "Time Waits for Nothing" article and wondered the same thing. It is difficult for me to understand how having one thread do the work of two is more performant than having the two threads that work in parallel but just occasionally have to synchronize.

One thought is that a modern CPU/compiler can more efficiently figure out what, within a single thread, can be handled in parallel, than when that same work is split across two threads that have to interact at certain points. But I don't know if that is a sufficient explanation.

What I've done in response to reading this article is the following:
(1) made a study of functional programming and made an attempt to use things like immutables when possible (I'm thinking of the EventSystem I wrote, where the constituent "AudioCommands" and frame times of AudioEvents are final);
(2) in some instances, programmed out some flexibility that would have required synchronizing or making use of a synchronized collection (e.g., my mixer can only have tracks added or taken away when it is not running);
(3) but also making use of synchronized collections when interacting with the main audio thread: e.g., the collection that holds the Event schedule is a ConcurrentSkipListSet which allows me to add to it without danger of throwing a ConcurrentModificationException;
(4) making use of volatile variables for all "instruction" or "settings" changes to the various effects and synths;
(5) optimized for speed of execution all code in the main audio loop.

Now, a volatile variable or a ConcurrentSkipListSet will also block. But is the overhead, or the amount of blocking, always going to be less than with synchronized? I don't know if that is necessarily true.

It is very easy in this business (as with many things in life) to glom onto a principle and overuse it. I wish I had a better understanding of synchronizing and parallel computing, but despite reading "Java Concurrency in Practice" I feel like there is a lot that I am taking on faith.

One thing I'm interested in trying with the audio mixer: fork/join for the separate tracks. But in truth, so little is done in a given frame, that the overhead is probably not justified. This might be a solid argument, though, for having the audio mixer increment by a buffer's amount of frames rather than by single frames.


Are you suggesting that eventually using all those delicious CPU cores for fancy real-time audio processing will never really work and we'll be stuck in single-threaded land forever?

I've read the "Time Waits for Nothing" article and wondered the same thing. It is difficult for me to understand how having one thread do the work of two is more performant than having the two threads that work in parallel but just occasionally have to synchronize.

I'm not saying that multi-core audio processing isn't doable, and there is obviously software already that does it. I would say it shouldn't be done naively, lots of core library stuff in the JVM is probably unsuitable, and it requires a deep understanding of what's going on and whether it's worth it. In particular don't assume that having multiple threads will instantly make things more performant than being single threaded considering the overheads of managing that. Also don't assume that performance (throughput) is what matters most - the point of that article is that guaranteed execution time is essential. eg. Praxis LIVE always runs with the incremental garbage collector, and a few people on here recommend it for stable video framerates as well as doing audio - this GC has less throughput. We are basically trying to get close to real-time semantics (and AFAIK there is not much in the way of hard real-time stuff that supports multiple cores!).

Don't assume that sharing data between threads requires synchronization in the synchronized / blocking fashion either - there are various lock-free / wait-free ways of doing that too.

(3) but also making use of synchronized collections when interacting with the main audio thread: e.g., the collection that holds the Event schedule is a ConcurrentSkipListSet which allows me to add to it without danger of throwing a ConcurrentModificationException);

This collection is not synchronized in a typical Java sense - it is non-blocking. I would question whether you need to order events within the same collection that handles cross-thread communication. I'm generally in favour of a single access point to the audio thread, using something like ConcurrentLinkedQueue&lt;Runnable&gt;.
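That pattern is small enough to sketch in full (names invented): other threads post Runnables, and the audio thread drains and runs them at the start of each buffer, so all state changes actually execute on the audio thread without locks.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: a single lock-free access point to the audio thread.
public class AudioThreadGateway {
    private final ConcurrentLinkedQueue<Runnable> pending = new ConcurrentLinkedQueue<>();

    // Called from GUI/MIDI/other threads.
    public void invokeLater(Runnable task) {
        pending.offer(task);  // lock-free enqueue
    }

    // Called by the audio thread once per buffer, before processing.
    public void drainPending() {
        Runnable task;
        while ((task = pending.poll()) != null) {
            task.run();       // runs on the audio thread
        }
    }
}
```

Usage from a GUI thread would be something like `gateway.invokeLater(() -> filter.setCutoff(800f))` (with a hypothetical `setCutoff`); the change then takes effect at a buffer boundary, on the audio thread, with no volatile fields or locks in the processing code itself.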

(4) making use of volatile variables for all "instruction" or "settings" changes to the various effects and synths;

Volatile is non-blocking but there are problems with using it like this (as opposed to passing in Runnables as above). While non-blocking, volatiles act as a memory barrier, which means caches may be flushed when hitting one, and certain optimizations regarding reordered instructions may not happen. They also suffer from a lack of atomicity (you can't guarantee two instructions happen together), and they reduce some possibilities for optimization (eg. it makes it harder to switch off elements of a processing graph that aren't required). I wrote some more about this with regard to Praxis LIVE architecture here if you're interested.

One thing I'm interested in trying with the audio mixer: fork/join for the separate tracks.

This seems to be similar to the way some pro-audio software approaches this. The important thing in parallelizing would be ensuring that the different cores do not rely on data from each other, so separate mixer tracks would be a logical way to do it. You'd probably want to write a specialized fork/join mechanism that tracks the processing time required for each mixer track to try and spread them across available cores, and not have more threads than cores running. You'd probably want to look at an efficient non-blocking communication model from the worker threads, and probably have the processing threads aware of time in the stream - thinking that if processing completes close to the time the next audio buffer is available you'd want to spin-lock rather than let the thread be descheduled.

@erikd - apologies if this is diverting your (forum) thread somewhat. With specific regard to this project, I'd recommend sticking with what I said earlier about running everything off of the primary callback thread. If you get to look at running multiple DSP graphs at once (ie. without dependencies except on final mix) then splitting on to worker threads might be worth it. Be aware of one JVM specific issue though, which is probably a consideration with ASIO (it is definitely the case with JACK), in that the callback thread into the VM has priority settings that are not possible to achieve from within Java without resorting to JNI. It would be important that any worker threads also gain those priority settings - I haven't tried creating a Thread from the callback yet to see if the settings get inherited.

Not at all! I think it's all very interesting and I learn a lot from these discussions.

To be honest, multi-threading isn't really a concern for the time being, but I can imagine that at a later stage it might become useful to fork certain heavy tasks that don't require inter-thread communication. For example, having complete voices of a polyphonic synth spread across multiple cores. It might not be worth it now, but there is this trend of CPUs getting more and more cores, so it's an interesting subject.

For now, I'll follow your advice and simply run everything off Asio's thread. I do understand the implications of real-time audio, and my project has largely been developed with these idioms in mind, but to be honest I'm not really experienced with doing real-time audio multi-threaded.

Anyway, I have made some progress in other areas. It's now possible to save a sub-selection of a patch as a 'meta'-unit. This should make things a bit more manageable as your networks get more complex.

I'm also busy creating a higher-quality version of my main 'oscillator' unit to reduce aliasing. What it does extra now is oversampling and filtering. Though rather expensive, it already sounds a *lot* better, but at higher frequencies there's still a little bit of aliasing, so there's room for improvement.
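For the record, the crudest possible version of the oversample-and-average idea looks like this (a simplification for illustration, not the actual unit: plain averaging is a weak decimation filter, and a proper lowpass before decimating would suppress aliasing much better):

```java
// Sketch: render a naive sawtooth at 4x the sample rate, then
// average each group of 4 samples as a (very simple) decimator.
public class OversampledSaw {
    private static final int FACTOR = 4;

    private double phase;          // normalized phase in [0, 1)
    private final double incOver;  // phase increment at the oversampled rate

    public OversampledSaw(double freq, double sampleRate) {
        this.incOver = freq / (sampleRate * FACTOR);
    }

    public float nextSample() {
        float sum = 0f;
        for (int i = 0; i < FACTOR; i++) {
            sum += (float) (2.0 * phase - 1.0);  // naive saw in [-1, 1]
            phase += incOver;
            if (phase >= 1.0) phase -= 1.0;      // wrap = the aliasing culprit
        }
        return sum / FACTOR;  // average the 4 oversampled points
    }
}
```

The cost scales linearly with the oversampling factor, which fits the earlier observation that there's plenty of CPU headroom left to spend here.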

Ok, I've followed your advice and made the audio single-threaded. It seems to behave better when doing 'expensive' stuff in the GUI, so that's good. Anyway, I think it's better this way because it's simpler, and that's usually a Good Thing.

I'm currently playing around with creating a vocoder with this thing. It's a good test of using MIDI and audio I/O together. I don't have a proper spectrum-analysis unit yet, so I'm working around that for now with band filters and envelope followers, but it's starting to sound quite cool.

Anyway, I think I'll move this to SourceForge or something soon. I'm a bit nervous about it, because I know there are issues in the code that many software architects would scoff at, and it's all very much a WIP hobby project, so I hope it won't attract too much attention at this point. The 'core' part of it (the pure DSP part that doesn't depend on J2SE) I'm quite happy about though: it's simple and relatively clean.

So I decided to upload my project to SourceForge and created a project there. I chose 'Share project' in Eclipse, which then decided to completely destroy my workspace and remove 75% of everything. So thank you, Eclipse Git team provider.

My last backup is a few days old, so it'll take me a day or so to get me back to where I was.

Disclaimer: I know it's all rough around the edges, and obviously it can be improved in many ways. At this point it's probably not all that useful for most people, and many will say the code is incredibly dirty. It's basically my toy project, but see it as a bunch of code that might be useful one way or another. If you want to know how you can implement a filter, pitch-shifter, chorus, reverb, etc.: there's lots of code here.

If you want to try the editor, run the class org.modsyn.editor.PatchEditor. It will create a folder called 'ModSyn' in your user.home directory. You can copy the files from ModSyn-j2se/example-patches there for a few examples. But please be aware: these examples depend on ASIO, so they're Windows-only and require an installed ASIO driver (if you don't have one, install ASIO4ALL; it works surprisingly well), and some patches use MIDI. It's on my to-do list to remove the distinction between Asio-IN/OUT and JavaSound-IN/OUT and resolve the best audio implementation based on platform, configuration, and whether you have ASIO (or another audio host that doesn't suck as hard as JavaSound) installed. Currently, JavaSound (Audio-IN/OUT in the editor) doesn't work at all in the editor.

With all that said, I'm actually using this with my own band, so for me it's already useful. It's quite nice to have this box of tools available when your own synth or effects pedals don't quite support the sound that you're after. And it's quite easy to add more DSP objects to the toolbox.

java-gaming.org is not responsible for the content posted by its members, including references to external websites and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info-account of the company managing the website of java-gaming.org.