This forum section was originally created while we were discussing a new, additional engine and sampler format designed from scratch. In the meantime this resulted in our new SFZ2 engine, which is already implemented to a large extent. However, this is still the right place for ideas, feature requests, drafts and plans for new engine / format concepts. We now have 3 sampler engines (Gig, SFZ2, SoundFont 2). Why not have more?

To be honest, the basic capabilities don't seem to be beyond what any modern software sampler is capable of. In fact, my feature requests involving modulating loop points and the like appear to go beyond the Synclav's capabilities. Only the resynthesis capabilities seem to be unique. I'll keep reading and see if I can find anything interesting.

Consul wrote:To be honest, the basic capabilities don't seem to be beyond what any modern software sampler is capable of. In fact, my feature requests involving modulating loop points and the like appear to go beyond the Synclav's capabilities. Only the resynthesis capabilities seem to be unique. I'll keep reading and see if I can find anything interesting.

Loop points would be parameters of the sample player (which I should have called "synthesis/resynthesis" to begin with, since samples are just a subset of that field, and new synthesis/resynthesis modules could be added that go beyond sample playback), so hooking up a modulator to them should be an expected task when creating an instrument.
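To make that concrete, here's a minimal sketch of what I mean by "loop points are just parameters" — all class and parameter names here are invented for illustration, not from any existing engine. A modulator writes into a loop-point parameter exactly the way it would write into a filter cutoff:

```python
import math

# Hypothetical sketch: loop points exposed as ordinary, writable
# sample-player parameters, so any modulator can target them.

class SamplePlayer:
    def __init__(self, loop_start, loop_end):
        # Loop points live in the same parameter table as everything else.
        self.params = {"loop_start": loop_start, "loop_end": loop_end}

class LFO:
    def __init__(self, target, param, center, depth, rate_hz):
        self.target, self.param = target, param
        self.center, self.depth, self.rate_hz = center, depth, rate_hz

    def tick(self, t):
        # Write the modulated value straight into the target parameter.
        self.target.params[self.param] = int(
            self.center + self.depth * math.sin(2 * math.pi * self.rate_hz * t))

player = SamplePlayer(loop_start=1000, loop_end=9000)
lfo = LFO(player, "loop_start", center=1000, depth=200, rate_hz=0.5)
lfo.tick(0.5)  # at t = 0.5 s the 0.5 Hz LFO is at its positive peak
print(player.params["loop_start"])  # → 1200
```

The point is that nothing about loop points needs special-casing: once every synthesis parameter sits behind the same interface, modulating loop points comes for free.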

I haven't drawn the block diagram yet, but what I'm working on actually turns out to be quite simple:

0. Events arrive from the "outside world" (via MIDI and GUI).

1. The events pass through a series of event processors. The processors are either pre-built (like the mapper) or scripts.

2. The processed events are then sent to the bank of voices. Each voice consists of synthesis, filters, an amplifier and modulators (the modulators are connected to parameters of the synthesis, filters and amplifier). All of them have parameters that are modified by one or several events. E.g. the amplifier has a volume parameter that is modified by (listens to) the velocity value in note events, while a filter has an LFO modulator connected to one of its parameters, and that modulator is also modified by (listens to) the velocity value in note events. The voices generate an audio signal.

3. The audio signal is sent to the "outside world" (for further processing).
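The steps above can be sketched in a few lines. This is only an illustration of the flow — the processor, the voice members and the event fields are all invented names, and real synthesis is replaced by a constant block:

```python
# Hypothetical sketch of the signal flow:
# events -> event processors -> voice bank -> audio out.

def transpose_processor(event, semitones=12):
    # Stands in for a pre-built event processor (like the mapper):
    # rewrites the note number before the event reaches the voices.
    if event["type"] == "note":
        event = dict(event, note=event["note"] + semitones)
    return event

class Voice:
    def __init__(self):
        self.volume = 1.0  # amplifier parameter
        self.note = None

    def on_event(self, event):
        if event["type"] == "note":
            # The amplifier "listens to" the velocity value in note events.
            self.volume = event["velocity"] / 127.0
            self.note = event["note"]

    def render(self, n):
        # Stand-in for synthesis + filters + amplifier: a constant block.
        return [self.volume] * n

# 0. an event arrives from the outside world
event = {"type": "note", "note": 48, "velocity": 64}
# 1. it passes the chain of event processors
for proc in [transpose_processor]:
    event = proc(event)
# 2. the processed event reaches a voice in the bank
voice = Voice()
voice.on_event(event)
# 3. the voice's audio signal goes back to the outside world
audio = voice.render(4)
print(voice.note, audio)
```

Each stage only ever sees events or parameters, which is what keeps the design simple: processors transform events, voices map event values onto parameters, and the audio path reads those parameters.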

Things I ignored in the description above: the "outside world" can actually be inside the sampler, but it is outside the scope of what I'm working on. I also left out voice groups/layers, which would let each instrument have multiple voice signal paths, as well as exclusive voice groups and voice stealing per group/layer.