Alternatives to elastic~

Elastic~ (by elasticmax) seems like a great tool for independent control of time-stretch and pitch-shift. Are there other solutions in Max for this kind of task (for less than $25)?
Does anybody have experience with the differences between elastic~ and elasticx~ (they are sold independently)?

[elastic~] uses the Elastique efficient algorithm from Zplane, I think, and is limited to double/half speed and pitch.

[elasticx~] and [elasticindex~] sound like granular algorithms, and are not limited in pitch or playback rate. They sound quite different: [elastic~] is more suited to small changes, while the [...x~] objects are more suited to when you want to mangle sounds.

It’s really easy to patch together a [groove~]-based alternative using [gizmo~], an example of which can be found here:

It probably won’t sound as good as [elastic~] on as many different source sounds, but it’s free!

I have [elastic~] and the [...x~] objects, and I use the [...x~] ones a lot more often, but only for reeeeally extreme time-stretching. They sound quite warbly and phasey when operating at/close to normal speed/pitch, and only come into their own with extreme settings.

If you end up getting any of them, I recommend patching together an abstraction that uses a standard [groove~] or [index~] when speed and pitch = 1, and then smoothly crossfades over to the elastic objects when you change a parameter.
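The gain logic such a crossfading abstraction needs can be sketched outside Max. This is a minimal Python sketch of the math only, assuming an equal-power curve; the function name and the `ramp` deadband are my own illustrations, not part of any of the objects discussed:

```python
import math

def crossfade_gains(speed, pitch, ramp=0.02):
    """Equal-power gains for blending a plain [groove~]/[index~]-style
    player (dry) with an elastic object (wet). When speed and pitch are
    both exactly 1, the dry path gets full gain; any deviation fades
    toward the processed path. `ramp` is a hypothetical deadband width:
    deviations beyond it count as fully 'processed'."""
    deviation = max(abs(speed - 1.0), abs(pitch - 1.0))
    # map the deviation onto 0..1 over the ramp width
    x = min(deviation / ramp, 1.0)
    # equal-power curve keeps perceived loudness roughly constant
    dry = math.cos(x * math.pi / 2)
    wet = math.sin(x * math.pi / 2)
    return dry, wet

print(crossfade_gains(1.0, 1.0))   # (1.0, 0.0): all dry
print(crossfade_gains(0.5, 1.0))   # essentially all wet
```

In a patch this would be driven from the speed/pitch parameters into a pair of `*~` objects, with the gain change itself smoothed by `line~` to avoid clicks.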

Thank you for the very useful reply.
"…sound like granular algorithms": do you mean that the order of the sound is not chronological with the elasticx~ object? Does it become a collage/cloud of sound fragments like munger1~ (http://www.maxobjects.com/?v=objects&id_objet=3972)? In that case it sounds like something quite different from elastic~ (time-stretch/pitch-shift).

[munger1~] as far as I know, isn’t designed to produce natural sounding time-stretching, and the [...x~] objects are. They still sound smooth, but exhibit different artefacts than the standard [elastic~].

It gives you essentially the "same" kind of sound effect, but much more extreme, while still sounding smooth. It doesn’t sound like granular synthesis as such, if that’s what you’re asking.

Extreme stretching, eh? I’m curious how elasticx~ might go at maintaining the character of a voice while time-stretching at a factor of around 500, which makes it hard to get a really smooth, continuous (and still solo and natural-sounding) voice. So far I haven’t settled on an optimal solution for this – I’ve tried the usual suspects in the spectral domain as well as granular, and still haven’t decided on the best method. FWIW, free_elastic is OK at not-so-extreme stretches, but at the kind of rates I’m looking at it’s "all over the shop" – seriously jittery and squelchy, for want of a better way of describing it…

So far for me, the best subjective overall results for this purpose have been using the GMU externals (bufgranul~) to get some really extreme granular stretches. The windowing wizardry needed to avoid the AM sub-tone artefacts is yet to be achieved on my part; depending on the rate, evidence of discrete windowing becomes an issue when going for a lower number of grains per unit time. And though solving the problem with a really high grain density sounds pretty cool, it ends up sounding more choral and epic than a solo voice, in that everything gets smeared and diffuse – which, in this particular case, is not what I want.
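The AM sub-tone artefact comes from the summed envelope of the grain windows not being flat: at low grain density the envelope dips between grains, which is heard as amplitude modulation at the grain rate. A small numpy sketch (names are illustrative, nothing here is from bufgranul~) shows that edge-to-edge Hann windows leave deep periodic dips, while 50%-overlapped Hann windows sum to a nearly constant envelope:

```python
import numpy as np

N = 1024           # grain/window length in samples
hop_bad = N        # no overlap: windows tile edge to edge
hop_good = N // 2  # 50% overlap satisfies the constant-overlap-add
                   # (COLA) condition for the Hann window

def window_sum(hop, total=8 * 1024):
    """Sum of overlapped Hann windows across a buffer: this is the
    amplitude envelope the granulation imposes on the source."""
    w = np.hanning(N)
    env = np.zeros(total + N)
    for start in range(0, total, hop):
        env[start:start + N] += w
    return env[N:total]  # trim the edges to ignore start-up effects

bad = window_sum(hop_bad)    # envelope dips to ~0 between grains -> audible AM
good = window_sum(hop_good)  # envelope is (nearly) flat -> no AM sub-tone

print(bad.min(), bad.max())
print(good.min(), good.max())
```

The practical point for a granulator is that grain density and window shape have to be chosen together: denser, overlapping windows flatten the envelope, at the cost of the smearing described above.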

With all that said, the GMU externals (http://dvlpt.gmem.free.fr/web/static.php?page=max_externals) are very useful general-purpose granulators, given that just about all the parameter inlets accept signal-rate inputs. And at less extreme stretch factors, the results could be pretty useful in many contexts.

Unfortunately I haven’t found any examples available online of extreme stretching using elasticx~, so pointers, suggestions or examples are welcomed.

Edit: Sorry I didn’t see the last couple of posts from you guys before I sent this post. Kind of looks out of place now…

Well, [elasticx~] will not give you natural sounding time-stretching at all. What it does is the classic granular epic-expansive-soundscape thing.

I’m interested to hear how natural-sounding the stretching you’ve achieved with the GMU externals is, as I’ve not used them. I’ve done stretching only with standard MSP and high-resolution objects so far, with no real results in terms of keeping it sounding natural.

The guy is planning a commercial release at some point, so was understandably not giving too much away, but he graciously provided me with some parts of the main engine, describing how he achieved this effect. I couldn’t decipher it, even though it is fairly straightforward (dynamically variable messages to [line~] objects). I can upload it if anyone is interested enough to tinker with it… Tim?

Attachments:

Just to be clear, I didn’t mean to give the impression that the results of the long stretches I’m getting *are* indeed natural-sounding – I certainly wouldn’t consider them that. It’s more the case that at the kind of stretch factors I’m looking at, and with this sound material, it seems to be a matter of identifying the least artefact-laden or even least offensive(!) result, rather than "the most natural-sounding". And that’s a very subjective issue…

I was imagining that the elasticx~ results would be better than what I was getting, which is why I was hoping to hear an example of it at work on an extreme stretch.

I understand that the reality is that it’s virtually impossible to get extreme stretches to sound natural using most technically feasible methods, as any method is going to bring out qualities in the sound that are not inherently what we identify as part of that sound over its normal temporal evolution (i.e. transients).

Attachments:

That’s interesting Chris. As far as a smooth granulator goes, the one you provided gets as close as anything else does IMO. Thanks.

I have to say that I have in the past overlooked FTM & Co as a viable alternative, mostly because it is (yet another) paradigm unto itself – in a similar way to Jitter, for example – and as such requires a reasonable commitment of time and effort to come to terms with.

This last aspect is not made easier by the fact that, although there is a range of examples, the help files always seem quite minimal, and there seemed to be no explanation of the arguments and messages the objects accept – or so I thought, until recently discovering the uber-useful "postdoc" message that all FTM objects respond to (Gabor and MNM included) while browsing the FTM wiki page.

For the purposes of monophonic extreme granular stretching (of a solo voice in this case), a problem inherent in all *fixed* grain-size methods (i.e. synchronous granular) without jitter or rate variations (which serve to smooth, but smear, things) appears to be the relation of the window size to the source’s harmonic content: the grain window size must be equivalent to (or a multiple of) the fundamental period when the content is mainly voice. If this relation is not observed or accounted for, such as with a grain window of fixed size, other, usually inharmonic, artefacts are introduced. Keeping a limited number of grains and minimising smear is important to maintain the singular quality of the voice.
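The pitch-synchronous idea above reduces to very little arithmetic: take the fundamental from a pitch tracker (yin, say) and size the grain to a whole number of pitch periods. A hedged sketch, where the function name and the `periods` parameter are my own illustrations:

```python
def grain_length_samples(f0_hz, sr=44100, periods=4):
    """Pick a grain length that is a whole multiple of the source's
    pitch period, so the window doesn't cut partway through a cycle
    and introduce inharmonic sidebands. `f0_hz` would come from a
    pitch tracker; `periods` sets how many cycles each grain spans."""
    period = sr / f0_hz               # one pitch period in samples
    return int(round(periods * period))

# a voice around 220 Hz at 44.1 kHz:
print(grain_length_samples(220.0))    # 4 periods -> 802 samples
```

In a patch this would mean updating the grain-size parameter continuously from the tracked pitch, falling back to a fixed size (or noise-oriented settings) during unvoiced segments.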

The question then becomes: should one try to enhance this method and adjust grain parameters accordingly for voiced (tracking the pitch period) and unvoiced content, or does it become more practical to resynthesise the results of an analysis by other methods (i.e. IFFT, additive, etc.)?

I am amazed at how little code it takes to produce this sort of thing with FTM (not to say it doesn’t take a lot of time…).

I think FTM and Gabor are very interesting – it’s IRCAM’s collective knowledge and experience of digital audio given away free (if you have Max). It is a different way of working than Max in some respects, which takes a little getting used to, but once I understood its basic operation I didn’t look back. I’d say it’s well worth persevering with. With a little bit of DSP knowledge you can go a long way.

Have a look out for FTM + Gabor workshops – I’ve been to a couple with Diemo Schwarz which were excellent. If you sniff around the wiki there are a couple of tutorials that users have written which deal with the syntax and messages etc.

With regards to reducing artefacts, I think it is mostly a game of finding the best combination of grain length, window type, randomness etc. But have you tried using good-quality filters or EQ to tame the unwanted frequencies? It’s amazing what one can achieve with some old-school music production kit. FFT methods give good results but also introduce artefacts, as they are window-based as well.

If you’re still looking for examples of extreme time-stretching with elasticx~, check out some of the tracks I’ve put up on SoundCloud (soundcloud.com/davidestevens). Scroll down below all the _redux versions to the original "electroacoustic poesetics" tracks – all done live with voice and patches based on elasticx~.

Thanks Dave, some nice stuff in there BTW. So, from what I think I heard there, elasticx~ is pretty smooth but essentially asynchronous granular in nature, tending to sound "epic" and diffuse… (not that there’s anything wrong with that ;)

As an exercise in coming to terms with FTM & co. I decided to attempt the task of creating a file based stretcher for vocal material (which is what I need this for) by culling from the FTM examples and/or brute trial and error to get something approaching what I want. So far the results aren’t too bad at all, but there is still lots of room for improvement for more general use. The approach I’ve taken is to include both granular and FFT methods for their respective strengths. Some things to note:

- This method doesn’t really sound transparent at normal speed – it’s rather warbly. I tried a smaller window size for the analysis/resynthesis, but then other issues come up.
- The relative levels and equalization of the two components (granular for noise/unvoiced, FFT/IFFT for voiced) are pretty arbitrary. Dynamic EQ and crossfading based on yin analysis could help, but I have only taken baby steps in that direction in the current patch.
- Extreme stretches can be pretty "stable" (solo!) sounding – what I was looking for – but the little transient sections between vowels (glottal stops etc.) tend to bring out "stray" harmonic tracks; perhaps the analysis parameters need some tweaking to optimise.

- Stereo files sound weird. I have missed something somewhere along the line…

- The current method of changing the playback rate is pretty clunky…

Anyway, even though it’s likely that someone in FTM land has already done something like this (and likely that it’s much better), it’s been a great exercise for getting to at least partly understand it, and I now certainly appreciate the benefits it can offer the MSP-based DSP experimenters amongst us.

Feedback, suggestions or even patch improvements are welcomed. I realize I have more or less hijacked this thread, but…

PS: If anyone knows if it is possible to get the contents of a buffer~ *into* an fmat via ftm.buffer, I’m keen to know how that is done. It doesn’t seem to work for me but I’m not even sure if it is supposed to…

FTM of course, is required for the patch to work.

Edit: I don’t seem to be able to either attach the .maxpat or paste compressed code inline. It turns out it is 23 MB! I would bet that the audio file I was using to test the patch got saved in the fmat buffer.

Here’s my personal granular engine in a patcher specific for time stretching and pitch shifting.

It’s just the regular granular stuff, but I think it is interesting for understanding how granulation works, as it doesn’t use a dedicated external. It does use vb.phasor0~ by Volker Böhm, but you could use a regular phasor~ without too many drawbacks. By the way, it’s a version for stereo files (it doesn’t spread mono files over the stereo field, but it’s easy to modify the patcher to achieve this).

The engine is within a poly~ object, so you can increase the number of voices, but for transposition and time-stretching I wouldn’t go beyond two voices, as it would blur the sound.
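For readers without the patch, the core trick it implements (the read position through the file advancing independently of the rate at which each grain’s content is read, with overlapped windowed voices smoothing the joins) can be sketched in array form. This is a rough Python analogy under my own assumptions, not the patcher itself:

```python
import numpy as np

def granular_stretch(src, stretch=4.0, pitch=1.0,
                     grain=2048, voices=2, sr=44100):
    """Granular time-stretch sketch: grain content is read at `pitch`
    speed while the read position through the source advances at
    1/stretch, decoupling duration from transposition. `voices`
    overlapped Hann-windowed grains (two, as in the poly~ version)
    smooth the grain boundaries."""
    out = np.zeros(int(len(src) * stretch) + grain)
    w = np.hanning(grain)
    hop_out = grain // voices               # output hop between grain onsets
    for start_out in range(0, len(out) - grain, hop_out):
        pos = int(start_out / stretch)      # where in the source to read
        idx = pos + (np.arange(grain) * pitch).astype(int)
        idx = np.clip(idx, 0, len(src) - 1)
        out[start_out:start_out + grain] += w * src[idx]
    return out / voices

# stretch a 1-second test tone to roughly 4x its length at original pitch
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
stretched = granular_stretch(tone, stretch=4.0)
print(len(stretched) / 44100)
```

In the patch, each poly~ voice’s phasor~ (or vb.phasor0~) plays the role of the windowed grain read here, and the grain onset position is what advances at the stretch-scaled rate.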

As this patcher is for my students, all comments are in French.

Attachments:

I had started experimenting with elasticx~ for some short loops and it seemed fine. However, more recently (yesterday), I wanted to try using it as a replacement for sfplay~, so that I could use it as a rehearsal tool for slowing down a complete song and/or changing pitch slightly. After loading in a Pink Floyd song and trying to play it normally, I immediately noticed the warble. I’m wondering, however, how one would do a crossfade with some other object, because as soon as one changes the speed or pitch even slightly, I can’t imagine it working properly.

Any other recommendations for an object that will play at high fidelity at speeds and pitches that are close to (but not exactly) 1.0? Free or commercial.
Thanks,
D

————-
They sound quite warbly and phasey when operating at/close to normal speed/pitch, and only come into their own with extreme settings.

If you end up getting any of them, I recommend patching together an abstraction that uses a standard [groove~] or [index~] when speed and pitch = 1, and then smoothly crossfades over to the elastic objects when you change a parameter.

If you load a song into it (I’ve been using a 5-minute-long song from Pink Floyd, for example) and play it at normal pitch and speed, does it sound identical to playing that same song with sfplay~? When I tried this, I could clearly hear some warble, i.e. as if a very slight vibrato were applied to the playback.