We are used to the idea of morphing in the visual domain, where one image is transformed into another.

When it comes to sound, however, the closest common tool is crossfading between 'fixed' sound streams.

May I suggest that there could be a way to take a series of snippets of a sound stream and synthesise the parts in between, using morphing algorithms.
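As a minimal sketch of what "morphing the parts in between" could mean, here is a toy spectral morph: take the FFT of two snippets and linearly interpolate their magnitudes and phases to synthesise intermediate frames. This is an illustrative assumption on my part, not a description of any particular product; real morphing algorithms align partials and handle phase far more carefully than this naive blend.

```python
import numpy as np

def spectral_morph(a, b, steps):
    """Toy morph between two equal-length snippets: linearly interpolate
    FFT magnitudes and phases (naive; real morphers align partials)."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    mag_a, mag_b = np.abs(A), np.abs(B)
    ph_a, ph_b = np.angle(A), np.angle(B)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        mag = (1 - t) * mag_a + t * mag_b   # blend spectral envelopes
        ph = (1 - t) * ph_a + t * ph_b      # naive phase interpolation
        frames.append(np.fft.irfft(mag * np.exp(1j * ph)))
    return frames

# Usage: morph a 440 Hz tone into a 660 Hz tone over 5 frames
sr, n = 44100, 2048
t = np.arange(n) / sr
frames = spectral_morph(np.sin(2 * np.pi * 440 * t),
                        np.sin(2 * np.pi * 660 * t), steps=5)
```

The first and last frames reproduce the original snippets; the in-between frames are the synthesised transition.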

What that opens up is the ability to swap the upcoming snippets for ones from another stream, and morph between them.

The other required piece is the ability to select the next stream in real time.
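The real-time selection part could be as simple as a queue of upcoming snippet sources that the performer can retarget at any moment. The class and method names below are purely illustrative, just to make the idea concrete:

```python
from collections import deque

class SnippetScheduler:
    """Hypothetical scheduler: holds upcoming snippet sources and lets the
    performer retarget everything not yet played, in real time."""
    def __init__(self, initial_stream, lookahead=4):
        self.queue = deque([initial_stream] * lookahead)

    def retarget(self, new_stream):
        # Swap all snippets that haven't played yet for the new stream;
        # the morph engine would then blend toward them.
        self.queue = deque([new_stream] * len(self.queue))

    def next_snippet(self):
        return self.queue.popleft() if self.queue else None

# Usage: start on one stream, then switch mid-phrase
sched = SnippetScheduler("violin_sul_tasto")
first = sched.next_snippet()
sched.retarget("violin_sul_ponticello")
second = sched.next_snippet()
```

A real engine would of course schedule against the audio clock, but the control surface could look much like this.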

Then, during a performance, say while playing a violin, subtleties such as changing bow rotation angle and attack could be smoothly integrated DURING notes, opening up far more possibilities than a bank of discrete full-note samples can offer.

One thing such technology may allow is a substantial reduction in library sizes: perhaps as little as 1/10 of the sample data would be required, in the same way that demos of visual morphing show a 'movie' of the transition between just two snapshots.

Borrowing some convolution techniques might enable even greater reductions.
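One way convolution might help, as an assumption on my part rather than anything the idea above specifies: store short 'dry' excitation snippets plus a single shared instrument-body impulse response, and convolve on playback, so the resonant character need not be baked into every sample. A minimal FFT-convolution sketch:

```python
import numpy as np

def apply_body(dry, ir):
    """FFT-based linear convolution of a dry excitation snippet with a
    shared body impulse response (hypothetical size-reduction scheme)."""
    n = len(dry) + len(ir) - 1          # full linear-convolution length
    return np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)

# Usage: a unit impulse through a toy two-tap "body" just returns the taps
out = apply_body(np.array([1.0, 0.0, 0.0]), np.array([1.0, 2.0]))
```

This is the same trick convolution reverbs use; whether it actually shrinks a morphing library would depend on how separable the instrument's excitation and resonance really are.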

That would sure make download, memory, and storage requirements a lot smaller. The downside, of course, is increased CPU usage, though how much depends on the algorithms chosen.