I don't think we need to use a different version of R here. Fragmentation doesn't help, and most of the improvements achieved in pqR will be ported to R very soon.

Now, many R users have Julia installed on their machines, and some are even core developers of the language. I think the Julia community is great, with very smart people, and the day I want to switch I'll probably use Julia.

Now, coming back to this problem: one reason many useRs stick with R despite this kind of performance is its rich library of more than 5000 packages, one of which is Rcpp. So learning Julia is good, but learning C++ and Rcpp is a better investment for a useR.
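To illustrate what Rcpp buys you, here is a minimal sketch: `cppFunction()` compiles a C++ snippet and exposes it to R in one call. The `sumsq` function and its body are my own illustrative example, not from the thread.

```r
library(Rcpp)

## Compile a small C++ function on the fly and expose it to R.
cppFunction('
double sumsq(NumericVector x) {
  double s = 0.0;
  for (int i = 0; i < x.size(); ++i) s += x[i] * x[i];
  return s;
}')

sumsq(c(1, 2, 3))  # 1 + 4 + 9 = 14
```

The C++ loop runs at compiled speed, so hot inner loops like this are exactly where Rcpp pays off relative to interpreted R.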

Please note that on modern multi-core machines it is possible to boost R's performance with the "multicore" package. You may have noticed that R runs on a single core, even on such machines. This largely explains the dramatic performance gap against Julia, which was designed for parallel processing and (I assume) uses all available cores on the host. The "multicore" package lets R use multiple cores in a straightforward manner. On my 8-core machine I got over a 5x speedup for the cont model above:

    > REPS <- 24
    > system.time(lapply(rep(1000, REPS), cont.run, 10000, 1000, 0.005, 10.0, 0.01))
       user  system elapsed
     19.054   0.000  19.127
    > library(multicore)
    > system.time(mclapply(rep(1000, REPS), cont.run, 10000, 1000, 0.005, 10.0, 0.01))
       user  system elapsed
     21.535   0.172   3.669
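Since `cont.run` is defined earlier in the thread and not repeated here, the same trick can be sketched self-contained with a dummy workload standing in for it. `busy` and its cost are illustrative only; note that on R >= 2.14 `mclapply` ships in the bundled "parallel" package, which superseded "multicore".

```r
library(parallel)  # successor to "multicore"; provides mclapply

## Dummy CPU-bound task standing in for cont.run (illustrative only).
busy <- function(n) sum(sqrt(seq_len(n)))

REPS <- 8
## Serial timing:
t.serial <- system.time(lapply(rep(2e6, REPS), busy))
## Parallel timing -- the only change is lapply -> mclapply:
t.parallel <- system.time(mclapply(rep(2e6, REPS), busy))
print(rbind(serial = t.serial, parallel = t.parallel))
```

On a multi-core Linux or macOS box the elapsed time of the parallel run should drop roughly in proportion to the number of cores; on Windows, `mclapply` falls back to serial execution.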

As you can see, the trick was to replace a call to "lapply" with a call to "mclapply", its parallelized version provided by the "multicore" package. Going further, you can parallelize large chunks of existing code in a transparent, nearly effortless way by redefining basic functions such as "sapply" and "replicate" to use "mclapply" instead of "lapply". You can also gain a few additional seconds by enabling implicit (background) JIT compilation in R:
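These two suggestions can be sketched as follows. The `sapply` redefinition is my own minimal version (`simplify2array` is what base `sapply` uses internally to simplify results, and base `sapply` takes more arguments than shown here), and `enableJIT(3)` from the "compiler" package is the standard way to switch on implicit byte-compilation.

```r
library(parallel)   # provides mclapply (successor to "multicore")
library(compiler)

## Transparently parallelize existing code that calls sapply, by
## shadowing it with an mclapply-based version (a minimal sketch).
sapply <- function(X, FUN, ...) simplify2array(mclapply(X, FUN, ...))

## Enable implicit (background) JIT byte-compilation: level 3 compiles
## all closures the first time they are called.
enableJIT(3)
## Equivalently, set R_ENABLE_JIT=3 in the environment before starting R.
```

With the shadowed `sapply` in place, downstream code that loops via `sapply` picks up the parallelism without being touched, which is what makes this approach nearly effortless.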