I'm not sure whether this is a coincidence, but over the months I've noticed that when a project takes a really long time to render, it tends to use extremely little RAM, generally between 70 and 100 MB, which seems too low for a project in 2014.

Is this down to features the author has enabled, or rather hasn't enabled? Are people choosing to disable rendering optimizations that would use more RAM?

Basically, if there is a trade-off, using more RAM would obviously be preferable to spending hours on rendering.

I have also noticed this and agree. I would rather have a WU use 1-2 GB of RAM and take 20 minutes to render than use less than 300 MB and take 90 minutes or more.

The setting in the BOINC client under Disk and memory usage controls how much memory BOINC may use on each client. Couldn't it therefore be used to set how much memory each BURP task gets once it starts computing on that client?

So then if your computer has more memory, your tasks get more memory and compute faster.

That is, if there actually is a link between RAM used and compute time, as there seems to be.
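A quick way to test for such a link would be to compute a correlation between peak RAM and render time over a batch of finished tasks. The sketch below is purely illustrative: the task records are invented numbers, not real BURP statistics, and the helper function is just a plain Pearson correlation.

```python
# Hypothetical sketch: check whether per-task peak RAM and render time
# are correlated. The data points below are made up for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (peak RAM in MB, render time in minutes) -- invented example records
tasks = [(90, 95), (80, 110), (1500, 25), (1200, 20), (300, 70), (250, 80)]
ram = [t[0] for t in tasks]
minutes = [t[1] for t in tasks]

r = pearson(ram, minutes)
print(f"correlation between peak RAM and render time: {r:.2f}")
```

A strongly negative coefficient would support the "low RAM, long render" impression, though real per-task logs would be needed to say anything definitive.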
____________

It is very hard to say anything definitive about memory vs CPU. In some cases it is possible to make a scene render faster by using more RAM (for example by tweaking the BVH cache size, using particular data structures to store intermediate results, and so on), but understanding when and how to apply this to an actual scene requires people to be extremely well versed in the inner workings and technical details of the renderer.
It is much more complex than just a slider between CPU and memory use - unfortunately =)

I was a little curious to see if you were right about the scenes being either RAM-heavy or time-heavy, and it seems there is something to it. Here is a scatterplot, with time on the X axis and memory on the Y axis: