SpeedKing wrote:I'm just lucky in that the type of music I make (metal/rock with occasional electronic elements) really doesn't require all that much in the way of ridiculous numbers of tracks with tons of plugins. If my type of music did, I'd be just as pissed as the ones who are complaining about this. I'm just thankful this issue will never apply to me.

True words!

So we found the perfect use case for Live: metal/rock, not electronic music.
This is good to know. We should tell Ableton's marketing team; maybe they will reposition the product for rock musicians.

So now that people can buy Bitwig, which has *REAL* PDC, where's all this amazing music that was being held back by the shoddy programming at Ableton?

Want a solution? Record your automation instead of drawing it. Or, like, figure it the fuck out. Guess what: making music is hard.

For those of you having problems with *audio* latency (not automation and PDC): you've either found a bug or you're doing it wrong. Latency on sends? Turn off the sends on the return tracks so that Live can calculate and compensate.

First of all, speaking for myself, I've been turning out quite a steady flow of personal slop for two years in Live. There is NOTHING wrong with using Live to make music... it's a no-brainer. Just because I'm not the world's finest musician doesn't mean I don't make music. I'm pretty sure that many others are in the same boat.

But this is an annoying thing for many people, and frankly it needs to be addressed. That doesn't mean what you are implying: that people are not "making music" with it, or not working around it.

BTW, until 1.1 there were still PDC issues in Bitwig. I haven't done any extensive testing in either DAW to see how much better it is. I have a tendency to be like the poster a few posts back, where the PDC only rears its head under specific conditions.

leisuremuffin wrote:Well buy a guitar with a humbucker or deal with the noise from your single coil. That's what I'm saying.

But what if it was possible to have a noiseless single coil?

What if I could somehow be arsed to create a metaphor which implied Live 10 will have PDC on automation, and this whole discussion will be lost, like tears in a prison shower, when they inevitably announce that feature at winter NAMM. And we will all say, "well, that's... fractionally better." And the devs will say, "that's IT? That's all the thanks we get? Do you know how difficult that was? You absolute bastards."

Here comes a wall of text. I'm sorry, but it's difficult to say this in few words. If you are concerned about, or believe you hear, latency creating muddy mixes, this should prove you are right, clear up what is actually happening, and also give you a solution that almost fully eliminates it. I'm here to help you! It's actually very simple.

A trick to reduce all the various latencies (the latency length is different for every track and every send, unless they are exact copies) that wreck big projects with loads of 3rd-party plugins is to render the project out at as low a sample buffer size as you possibly can (Options/Preferences/Audio/Hardware Setup/Sample Buffer Size): 128, 64 or even better 32 samples, if you can manage to render without a crash. The lower the sample buffer size, the lower the PDC latency on sends, tracks and automation.

It's easy to test. I did it some years ago to check the latency on sends, and it was a lot! Simple send test: set the buffer size to 1024 samples. Use a very short 3rd-party room reverb; I used Valhalla Room. Use it on a short snare. Record a short loop from the master into a track (audio from: Resampling) with the soundcard at 1024 samples, then 512, 128, 64, and 32. Hear any difference between these? It's huge. On big projects with many sends, each send gets a different latency, making the project sound muddy and sloppy, especially sends with delays.
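For reference, here is what those buffer sizes mean in wall-clock time; a minimal Python sketch, assuming a 44.1 kHz sample rate. The arithmetic only shows how long each buffer is; the claim that Live's residual PDC error shrinks with the buffer is the observation from the test above, not something this math proves.

```python
# Convert an audio buffer size to milliseconds at a given sample rate.
# Assumed sample rate: 44.1 kHz (change it to match your soundcard).

def buffer_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Duration of one audio buffer in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

if __name__ == "__main__":
    for size in (1024, 512, 128, 64, 32):
        print(f"{size:>5} samples -> {buffer_ms(size):6.2f} ms")
```

At 44.1 kHz a 1024-sample buffer is about 23 ms long, and a 32-sample buffer about 0.7 ms, which lines up with why the smear gets so much shorter at small buffers.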

Tonight, because I wanted to have it fully confirmed, I tried it also with automation. The reason for this test is to prove or show, in the simplest and clearest way possible, what is going on. Start with an empty project and set the soundcard to 1024 samples. Add a white-noise 1/8th note from Operator, repeated like a 4/4 kick, with the release envelope at its shortest so that the noise stops exactly when the note ends. Mute the notes with automation by creating an empty rack and drawing the rack volume down to -inf, but only for the duration of the note, so that the volume fully opens after each note (you can't do this with the Utility plug). This should already create a tiny bit of sound, as even an empty rack has some latency that PDC can't handle. This noise is from the beginning of the note, not the end. Go into the clip and set the start point of the note just a tiny bit after the 1, like 1/1024 late. This should make it almost silent.

Ready to start the test, with the soundcard still at 1024 samples: put a clean FabFilter Pro-Q between Operator and the volume rack, set to Zero Latency mode. Playing this, you should get a sound similar to a closed hi-hat, and the length of this sound is the length of the latency on this track. If you want, you can add another 3rd-party plug, like Valhalla Room (set to 0% wet so it lets the sound fully through); this should roughly double the latency. This latency noise gets shorter, down to a tiny click at a 64-sample buffer, and at 32 samples the click is half that size again, now just a tiny pop. In other words, Live seems to halve the latency for every step down in sample buffer size.

Imagine having variable latency on all your tracks. That's the reality when using a mix of 3rd-party plugs, and to a lesser degree even with native plugs alone. Automation and sends with latency as long as a closed hi-hat, and sloppy, variable timing all over the place. It's not uniform: every track has a different latency than the others. You will get noise glitches from automation, sloppy sidechaining, muddy sends, instruments and drums slightly off, short drum reverbs that have become longer on your once-tight snare, synced delays creating mud because they are off, etc. It's simple really, especially if you work with big projects with a lot of 3rd-party plugs, lots of sidechaining and automation.
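To make the "different latency on every track" point concrete, here is a small Python sketch. The per-track plugin latencies are made-up illustration numbers, not measurements from Live: the point is only that when compensation is imperfect, each track ends up shifted by a different amount, so tracks drift against each other.

```python
# Hypothetical per-track plugin latencies, in samples (illustration only).
SAMPLE_RATE = 44100

tracks = {
    "kick":  0,      # no plugins
    "snare": 512,    # e.g. an EQ that reports half a buffer of latency
    "bass":  1600,   # e.g. a linear-phase EQ
}

# Relative offset of every track against the kick, in milliseconds.
offsets_ms = {
    name: 1000.0 * (lat - tracks["kick"]) / SAMPLE_RATE
    for name, lat in tracks.items()
}

if __name__ == "__main__":
    for name, ms in offsets_ms.items():
        print(f"{name:>5}: {ms:+6.2f} ms")
```

With these made-up numbers the snare lands about 11.6 ms and the bass about 36 ms behind the kick, which is well past the point where drums stop sounding tight.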

Simple test: just render your project at 1024 samples, then at 64 or 32 samples, and listen to the difference. It's rather shocking actually, and I can't believe I have released tracks, from before I was aware of this, that sound to me now like utter trainwrecks. If only I knew back in the day... if only someone had told me what I try to tell everybody that uses Live now.

In my usually big projects, with loads of 3rd-party plugs, sends and automation, I have to increase to 512 and shortly after to 1024 samples to be able to work at all. As the project slowly grows, so does the sample buffer size, to free up the CPU for more stuff. It sounds like a train wreck in the end, and if you are not aware of it, it might creep up so slowly that you overlook it. Even if most VST synths are recorded to audio, it can be severe: there are effects I still use for mixing, and these might have automation and so on, especially if you are like me and like to program and draw lots of stuff during the whole process. Sidechaining all over the place can't be frozen or bounced/flattened. I've been using Live steadily for 10 years, and this has been a thorn in my side for the last 4-5 years as I've become increasingly aware of it. It was rather hard to pin this feeling down back then, as there was very little knowledge to be found about our concerns. (sarcasm/) Live has a disease, and unless you are aware of it and use protection, everything you make (music you release) will be born with it. (/sarcasm)

FOR THOSE WHO CAN'T BE BOTHERED TO READ THE FULL STORY, READ THIS:
The positive thing is that it's possible to render the project at a much lower buffer size than it can play back at, which means you get a lot less train-wreck latency on your final product than you had to work with on larger projects. On my i7 CPU I can render (but obviously not play) a 1024-buffer-size hog all the way down to 64 or sometimes even 32 samples without crashing. This cleans up the render A LOT: the sends, the tracks and the automation. I can never say it enough times to the Live users I meet around the world: if you are a releasing producer, this is very important information!

Yes, it's true, Bitwig does not have this problem to the same extent; it's far less present. But it's a baby DAW; it's missing a ton of features. It's far from the maturity of Live. Maybe in version 2 or 3 it might be feature-complete enough for serious big productions. But for now, if Live's type of sequencer is your "thing", I recommend staying with it for a few more years. By the time Bitwig 2 is out, Ableton might even fix the PDC latency problem with the release of Live 10, and then there really is no reason to escape to Bitwig. As leisuremuffin so eloquently emphasized: work around the problems. There are different problems in all DAWs.

edit: some typos

Last edited by ze2be on Thu Nov 06, 2014 11:24 pm, edited 1 time in total.

That doesn't really sound like good practice. Use one buffer size from start to finish, don't use sends on return tracks unless you are creating feedback, and be smart about when you apply automation, and you won't have any problems.

leisuremuffin wrote:That doesn't really sound like good practice. Use one buffer size from start to finish, don't use sends on return tracks unless you are creating feedback, and be smart about when you apply automation, and you won't have any problems.

Well, in that case you should use 32 samples all the way and only produce ultra minimal techno.

I don't know about that, but I think working with consistent settings from start to finish is the way to go. And I also think it's best to use the highest sample rate that you can handle; not for fidelity, but for lower latency. 512 samples is obviously shorter at a higher rate.
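That sample-rate point is plain arithmetic, nothing Live-specific; a quick Python sanity check, assuming the usual rates:

```python
# Same buffer size, different sample rates: the buffer's duration in
# wall-clock milliseconds drops as the sample rate rises.

def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """How long one buffer of the given size lasts, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

if __name__ == "__main__":
    for rate in (44100, 48000, 96000):
        print(f"512 samples @ {rate:>5} Hz -> "
              f"{buffer_latency_ms(512, rate):5.2f} ms")
```

A 512-sample buffer is about 11.6 ms at 44.1 kHz but only about 5.3 ms at 96 kHz, at the cost of roughly double the CPU load per second of audio.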

leisuremuffin wrote:I don't know about that, but I think working with consistent settings from start to finish is the way to go. And I also think it's best to use the highest sample rate that you can handle; not for fidelity, but for lower latency. 512 samples is obviously shorter at a higher rate.

Doesn't help that much.

At the very least you should set the buffer to 64 or 32 samples while dialing in your reverbs, to get a clue what they will sound like after export, and once that's done, increase it to 512 or 1024 to give the CPU working space. But you should ALWAYS render your project at the lowest possible sample buffer size, especially if you use 3rd-party plugs, unless you really want a muddy, sloppy, phasey 2-bus export.