This forum section was originally created while we were discussing a new, additional engine and sampler format designed from scratch. In the meantime this resulted in our new SFZ2 engine, which is already implemented to a large extent. However, this is still the right place for ideas, feature requests, drafts and plans for new engine / format concepts and ideas. We now have 3 sampler engines (Gig, SFZ2, SoundFont 2). Why not have more?

When I used to design and build large web applications for a living, I always started with the interface design, nailing down how the user will use the software before implementing the functionality behind it (that also made it easier to integrate the use of Smarty templates and the like, but I digress). Let's approach this exercise from a new direction: what are some new and different ways a musician can interface with a sampler and, more importantly, with the samples themselves?

I keep seeing flashes of images in my head, and I can't help but think there might be a whole new paradigm here nobody's touched yet. Maybe everything can be done within a waveform view, where functions from setting loop points to trimming can be done with handles associated with each waveform directly (with the occasional right-click, perhaps), with automatic zero-crossing detection. Instead of envelopes, we can define "frames" where these parameters change from one state to another over the course of a period of time. Perhaps we can do the waveform view in a Melodyne-like window, with all of the samples mapped out and easily available. Perhaps an OpenGL window could also work, being able to shuffle around and manipulate the waveforms of a set in a 3D space.

I'll see if I can draw up an illustration at some point before Spring Break is over.

PS - I'm actually kinda liking the "frames" idea. I'm looking forward to seeing if we can all run with this.

I've been thinking a lot about the frames and key framing idea, and I like some of what I'm seeing in my head. It's like envelopes taken to a new level.

Imagine: right-click on any parameter (whether that be a button, a knob, a handle in the waveform window, or whatever) and set start and end key frames (turn the knob, set a key frame, turn it again, set another key frame). At that point, the system will ask for the trigger for the frame movement (note on, note off, mod wheel, MIDI control number, random number generator, whatever), the transfer function for the movement (linear, logarithmic, cubic, other polynomial, custom script, drawn envelope, etc.; I guess we could call them morphing functions to be cute), and the time it'll take, if applicable. Another window could be made available to edit these as more normal kinds of envelopes for certain situations.
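To make the idea concrete, here's a minimal sketch in Python of what a single frame might carry. Every name here (the `Frame` class, its fields, `value_at`) is hypothetical, invented just for illustration; nothing like this exists in LinuxSampler yet.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: a "frame" pairs two key frames (the start and end
# values of one parameter) with the trigger that starts it, the transfer
# (morphing) function that shapes the movement, and an optional duration.
@dataclass
class Frame:
    param: str                           # e.g. "output_gain"
    start_value: float                   # value at the start key frame
    end_value: float                     # value at the end key frame
    trigger: str                         # "note_on", "note_off", "mod_wheel", ...
    transfer: Callable[[float], float]   # maps progress 0..1 to curve 0..1
    duration: Optional[float] = None     # seconds, if time-driven

    def value_at(self, progress: float) -> float:
        """Interpolate the parameter value for progress in [0, 1]."""
        t = self.transfer(progress)
        return self.start_value + (self.end_value - self.start_value) * t

# Example: a linear fade of output gain from 0.0 to 1.0 over 0.25 s
fade_in = Frame("output_gain", 0.0, 1.0, "note_on", lambda p: p, 0.25)
print(fade_in.value_at(0.5))  # 0.5
```

The point is that the engine only needs this small bundle of data per frame; everything else (envelopes, sweeps, crossfades) falls out of how frames are chained and triggered.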

Some examples of use:

A simple volume envelope: right-click the output gain, set a key frame for note-on with a particular gain, and one for sustain with a particular gain. Set the transfer function to logarithmic and the time for the transfer function to act. Then create a new key frame for note off (defaulting from the previous key frame if you like) and a key frame for gain off, again with time and transfer function set.

It sounds like a roundabout method for something envelopes were invented for long ago, but let's consider a more complicated example:

Modulating the loop points of a filter-sweep sample with mod wheel: Set a start and end loop point for a single cycle of the sample at the head (filter open). Set that as a key frame. Go to the tail end of the sample (filter closed), set a single-cycle loop there as the ending key frame. Set the transfer function to the mod wheel. Now, we can effectively simulate the control of the filter sweep of the sound with the mod wheel, as the keyframing system is only going to care about the start and end parameters and the function used to move between them. But there is one problem...

For this single-cycle looper patch to work, the loop points can't move smoothly up and down the sample, otherwise you'll get a lot of pops. The loop points actually need to jump from one zero-crossing to the next. This would be the responsibility of the transfer function. Remember the "custom script" option I mentioned above? One solution could be to expose a bit of functionality so that new transfer functions could be scripted by the patch creator.
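As a sketch of what such a scripted transfer function might do, here's the zero-crossing snapping in Python. The function names are hypothetical; a real implementation would work on the engine's sample buffers rather than a Python list.

```python
def zero_crossings(samples):
    """Indices where the signal changes sign between adjacent samples."""
    return [i for i in range(1, len(samples))
            if (samples[i - 1] < 0) != (samples[i] < 0)]

def snap_to_zero_crossing(position, crossings):
    """Snap a continuous loop-point position to the nearest zero crossing,
    so a morphing function can slide loop points without producing pops."""
    return min(crossings, key=lambda c: abs(c - position))

# Tiny example signal: the crossings fall at indices 2, 4 and 6
sig = [0.5, 0.2, -0.1, -0.4, 0.3, 0.6, -0.2]
xs = zero_crossings(sig)
print(snap_to_zero_crossing(3.7, xs))  # 4
```

The mod wheel would still move the loop position smoothly; the script just quantizes the result to the legal positions before the engine ever hears it.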

Every parameter could possibly be key-framed in a similar manner. Again, all the key-framing engine needs is the parameters, the start condition, the end condition, and the control (or transfer or morphing) function to act upon anything within the sampler.

I just hope this translates to text as well as I can see it in my head.

EDIT: More brainstorming...

Another useful thing for the keyframe engine to do, would be to allow for copy/paste and linking of frame sets. For example, set up the framing on a parameter, copy that exact framing to one or more other parameters. From there, you could choose to link the parameter frames together, so that a change to one means the same change to all.

For something even cooler, morphing functions (I like that term, it has a nice marketing ring to it) could be applied between linked frame sets (another great marketing term). That might sound like overkill, but let's think of it in terms of multi-sample sets. Create an output gain frameset (aka, volume envelope) on one sample, copy it to other samples of the set (up and down the keyboard), but with a morphing function to make the frames shorter going up the keyboard. Suddenly, something that could be hours of dull work becomes a few minutes of setting and copying key frames.
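A quick sketch of that keyboard-scaling morphing function in Python, with made-up names and an arbitrary scaling factor, just to show how little logic it needs:

```python
def scale_frame_times(frame_times, note, ref_note=36, factor_per_octave=0.7):
    """Hypothetical sketch: shorten a copied frame set's times as you go
    up the keyboard, instead of hand-editing each sample's envelope.
    ref_note is the note the original frame set was created on."""
    octaves = (note - ref_note) / 12.0
    scale = factor_per_octave ** octaves
    return [t * scale for t in frame_times]

base = [0.01, 0.3, 1.2]             # attack, decay, release times in seconds
print(scale_frame_times(base, 48))  # one octave above ref_note: times * 0.7
```

Copy the frame set to each sample, attach a morphing function like this to the link, and the per-sample drudgery disappears.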

Yes, I know there are auto-scaling envelopes out there. My point is that such enveloping could be done with a base tool that can do tons of other things, and not have to be specifically created for the task.

Now start thinking about how to use all of this to slice up loops and beats...

Darn, here I was, editing my thread title to make it less antagonistic (I was tired and a bit of a grump last night) and you replied to me before I could finish. That's okay, really, but you might want to edit your post to change the subject so it doesn't look like you're the one being the grump.

I may check out that AN1x editor the next time I'm booted into Windows, which I have to admit happens rarely these days (to do homework and watch Youtube vids, mainly, and not even then once I get Flash working in Archlinux). One never knows where a good idea might come from.

What I'm trying to do here is to decompose every possible modulation possibility into a single engine with a single paradigm, with the idea that a frame (a pair of key frames and a morphing function) or a frame set (a combination of frames in sequence on one parameter) can replicate any kind of modulation possible. Keeping it easy to use is also part of the challenge. I imagine a system where hovering over a control shows how it's being modulated and what it might be linked to, and right-clicking on any control allows easy set up of frames.

The multidimensionality (wow, my spell checker didn't trip on that!) of frames, that is, applying morphing functions to the links between frames of different parameters, is where a lot of the real power happens.

This whole idea actually came about from thinking about how 3D animators work in their respective software packages.

Definitely check out the AN1x then. A sampler based on how that works would be very flexible and very powerful. The only reason the AN1x isn't revered as much as other synths is that it wasn't covered in buttons, knobs and sliders.

Yamaha's scenes are a simple, more limited version of my idea. Using frames, scenes could be simulated by setting key frames for each parameter you want to change, then setting the same morphing function on all of them (mod wheel, velocity, etc.). I'm not knocking Yamaha's own implementation, though, as the simple two-scene system is perfect for a performance synth, and would give a performer a lot of power.

I've been researching the Qt framework, and it looks like it has a lot of the capability I'd need to implement this kind of interface, complete with a scripting language that can interface with the signal and slot system. It looks like for this to work, though, the UI would have to be built to work closely with the engine, which means no more separation of engine and UI, unless there's a trick I've overlooked.

It just occurred to me that the morphing functions are incomplete as I've described them so far. Really, a morphing function needs two things, which we'll call the control function and the transfer function, just to finally put those two terms to use (plus, they fit what they're about to define):

Control Function: This is what controls the change of the parameter during a frame. It can be a set time, or any kind of real-time controller (MIDI, OSC, etc.).

Transfer Function: This is the curve that the change in the parameter will follow as it is driven by the control function. It can be a linear function, any polynomial, or anything else that can define a curve.

Those two used together are what I'll call a morphing function from now on.
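In code, a morphing function would then simply be the composition of the two. A hedged sketch in Python (all names hypothetical): the control function normalizes the raw driver to a progress value, and the transfer function shapes that progress into the curve the parameter follows.

```python
def make_morphing_function(control, transfer):
    """Compose a control function (raw driver value -> progress 0..1)
    with a transfer function (progress -> curve 0..1)."""
    return lambda raw: transfer(control(raw))

# Control function: a mod wheel CC value 0..127 mapped to progress 0..1
mod_wheel = lambda cc: cc / 127.0
# Transfer function: a squared curve (slow start, fast finish)
squared = lambda p: p * p

morph = make_morphing_function(mod_wheel, squared)
print(morph(127))  # 1.0
```

Swapping in a timer-based control function or a scripted transfer function changes nothing else in the engine, which is the appeal of splitting the two.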

The change from one frame to another in a frameset (a linear series of frames on a parameter) could also be triggered by a timer, or key release, or aftertouch, etc. That way, key release can control the way a note fades out, among myriad other possibilities.

EDIT: Some more thinking about framesets, which I think I will now call serial frames for clarity.

After a frame is finished running (assuming it can finish running; I would imagine this wouldn't apply to frames controlled by a real time controller like the mod wheel), it needs to determine what will set the next frame in motion. It could simply be serial (start the next frame once it's reached), or it could be event driven (upon key release, for example). The best part is, you could design a patch to use release velocity from the keyboard as well. That way, the release velocity can be used to determine how the next frame behaves (a slower release for lower release velocities, for example). So, being able to pass MIDI values to a scripted morphing function would be a plus.
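The serial-versus-event-driven advance logic is small enough to sketch as a toy state machine in Python. Everything here (the class, the `advance_on` field, the event names) is made up for illustration:

```python
class SerialFrames:
    """Hypothetical sketch: step through a list of frames, each declaring
    what advances it: 'serial' (advance when it finishes) or an event name."""
    def __init__(self, frames):
        self.frames = frames    # list of dicts: {"name", "advance_on"}
        self.index = 0

    def current(self):
        return self.frames[self.index]["name"]

    def finished(self):
        """Called when a time-driven frame has run to completion."""
        if self.frames[self.index]["advance_on"] == "serial":
            self._advance()

    def event(self, name, release_velocity=None):
        """Event-driven advance; a release velocity could be handed on
        to the next frame's morphing function to shape the fade-out."""
        if self.frames[self.index]["advance_on"] == name:
            self._advance()
        return release_velocity

    def _advance(self):
        self.index = min(self.index + 1, len(self.frames) - 1)

frames = SerialFrames([
    {"name": "attack", "advance_on": "serial"},
    {"name": "sustain", "advance_on": "note_off"},
    {"name": "release", "advance_on": "serial"},
])
frames.finished()             # attack finished -> sustain
frames.event("note_off", 42)  # key released -> release
print(frames.current())       # release
```

Passing the MIDI release velocity through `event` is exactly the hook a scripted morphing function would need.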

Now, not all keyboards have release velocity, but my Kurzweil MIDIBoard does, and to be able to support this kind of functionality would be trivial with this framing system.

Consul wrote:Another useful thing for the keyframe engine to do, would be to allow for copy/paste and linking of frame sets. For example, set up the framing on a parameter, copy that exact framing to one or more other parameters. From there, you could choose to link the parameter frames together, so that a change to one means the same change to all.

It also just occurred to me that frames for two or more parameters can be linked without having first been copied. Just to clarify things a bit.

Either you're all dazzled by my brilliance, or baffled by my bulls***. Please don't be afraid to speak up, anyone, even if it's to call me a nutter.

I'm working on the logic of how keyframing could be used to do those two nifty features of the great samplers: velocity cross-fading and positional (keyboard) cross-fading. I'll get back to you all on that one. This one's taking a while because I'm trying to keep the interface abstracted from the engine, since we want to keep from tying the two closely together.

A friend of mine from back in Colorado suggested coding the Keyframes engine as a library, so it could be easily integrated into any time-based application that could use the concept (a kick-ass modular synth, anyone?). That way, keyframes could be implemented either right in the engine, or as a middleware layer, so the GUI could still be kept separated.

Envelopes and LFOs* are a thing of the past, an artifact from analog days. The future is here.

* LFO functionality can be emulated, and surpassed, by allowing looping frames.
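For instance, a frame whose elapsed time wraps around its own period behaves exactly like an LFO, except that any transfer function becomes the "waveform". A hypothetical Python sketch:

```python
def looping_frame(elapsed, period, transfer):
    """Hypothetical sketch: a frame that loops forever acts as an LFO.
    Wrap the elapsed time into the frame's period, then apply the
    transfer function; any curve at all becomes the LFO 'waveform'."""
    progress = (elapsed % period) / period
    return transfer(progress)

# A triangle "waveform" as a transfer function: 0 -> 1 -> 0
triangle = lambda p: 1.0 - abs(2.0 * p - 1.0)
print(looping_frame(0.75, 1.0, triangle))  # 0.5
```

Substitute a drawn envelope or a scripted curve for `triangle` and you have an arbitrary-shape LFO for free.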

Cool man. Stay creative. Keep brainstorming. That's why I purchased an Akai S6000 8 years ago: for the interface and grunt, designed by someone who actually cared about the needs of a musician. Re: 3D visuals, I remember seeing an instrument patch shown in 3D space on an Impulse Tracker 3 ideas site. It was very informative. Because you guys keep the GUI separate from the sample playback engine, you could dedicate a processor to powering a 3D interface.

Autodesk AutoCAD 2009 (a 3D drafting program) has a new orientation feature called the "view cube". It's a little cube that stays in the corner of the screen no matter where you are in the model space. If you click on one of the little cube's sides (e.g. LEFT), the whole screen pans and glides into the LEFT view (or orientation), while the little cube stays in the exact same position on your monitor (up in the corner), displaying LEFT, RIGHT, FRONT, BACK, TOP, etc. This would give musicians an easy way to pan around all their sampler instruments if they can't get their heads around 3D space, i.e. a useful restriction on orientation. (Clicking the cube would be like holding LinuxSampler's samples as dice in your hands.)

An idea: perhaps see all the instruments laid out in 3D side by side, with all the parameters wrapped around each sample like modifying a 3D column in AutoCAD. Each side of a 3D column could have a node (a bit to click on), and the CC MIDI controller message graph could be represented with XYZ in three-parameter/dimensional space instead of XY (two-parameter/dimensional space). E.g. you could visualise panning as YZ and ADSR (Attack, Decay, Sustain, Release) on XY, with opacity (transparency) for volume and colour for pitch.

XYZ will seem confusing to anyone unfamiliar with 3D orientation, but once you get used to it, it's the best way to draw digitally, and audio/MIDI samplers are fundamentally digital. You could use 3D shapes built in 3D editors as templates for CC MIDI controller messages. As you know, a helix is shaped like a spring. Imagine a waveform sitting inside the middle of a giant helix, with time being the direction the spring squishes (the spring itself staying static and unsquished). As the sound moves down the middle of the spring, it reads the position of the 3D helix shape, and its CC messages follow the spiral accordingly. E.g. a spring would pan around in circles: X to Z to Y to X to Z to Y. Imagine pressing your finger on a screw and turning it: the feeling would be the CC controller message, and the axis of the screw would be time. And not simply panning; we could modulate these parameters from a different perspective. You could literally modulate ADSR and panning and audio out with one helix. This could be elegant if the design was kept simple. Great for repetitive synth loops. Ha ha, I should put my programming skills (none) where my mouth is.

E.g. click the switch button and the helix inverts. You could get the helix to invert (change the direction of the spring) every time the sample starts. That's a colossal amount of modulation possibilities, all from a simple spring, octagon, loft, diamond, etc.

At the end of the day, basic shapes sound crap (or don't sound at all) unless fabricated into violins or guitars. But basic shapes do have edges and lines that could be used as points and paths for MIDI controller messages, the helix being an excellent example. (Helix could even be the sampler interface's name.)

Waves (http://www.waves.com) have a VST plugin that adds width to stereo audio. You could incorporate a similar algorithm into your sampler: if you modify the width of a cube in 3D space, it could literally stretch the sound out, <<<LEFT - RIGHT>>>. A reliable 3D interface for a sampler would shatter anything else. I know, because you can fly around AutoCAD like a bullet once you get to know it. You can simply see so much more in 3D, and it doesn't have to look cheesy to be informative. Samplers have many nitty-gritty bits of information that we need to see quickly every now and then to keep the melody in our minds.

HERE IS THE GOOD BIT: if the 3D view is switched to parallel rather than perspective projection, you could visualise your instruments from the LEFT to see them as they look now in JSampler, or from the FRONT as a whole lot of hexagons or octagons, one octagon per instrument for ADAT (an 8-channel audio card). E.g. when mixing for surround sound (six channels), the waveform could sit in a column whose base shape is a hexagon, six sides for six audio channels (0 on the timeline). Each side of the hexagonal column is an audio out; time travels down the column while you pan to any of the six sides, giving six 3D places to pan to. It could look cheesy, but if built properly it would be very informative, and the pros would outweigh the cons. You could use knobs or a mouse to drive the position of the CC message on the 2D grid travelling down the 3D shape, as the Z dimension is written in time/ADSR. If kept visually simple, it would quickly become logical to anyone.

ONLY YOU GUYS COULD DO IT. WHO ELSE? You guys have dedicated a CPU to sample playback separate from a console (JSampler). A 3D engine and a sampler both running simultaneously on one CPU just wouldn't work properly, and Roland, Akai and Yamaha are too busy feeding their kids to make a new classic. I look forward to whatever you guys come up with for an interface; I know it's going to be snappy and quick, like a hardware sampler should be. Why not be the first to break into 3D sampling? I would be happy to do any modelling in CAD. Is there a free program for Linux that opens .dwg AutoCAD files? I could perhaps build you an example of your/my ideas.

A lot of the older architects in their 70s who still draw houses with pencil and paper may criticize a 3D drawing program for its complexity. They think it's unnecessary ("can't teach an old dog new tricks", perhaps). But at the end of the day, there are things 3D can do amazingly, impressively and informatively that pen and paper just can't do. An analog synth interface is at home with knobs, and a digital sampler interface would be at home in a digital 3D space.