This is a follow-up to my previous post. There is a quicker way to compute the function I created (a basic cumulative sum) in R.

Instead of:

f = function(x) {
  sum = 0
  for (i in seq(1, x)) sum = sum + i
  return(sum)
}

Use this:

f2 = function(x) {
  return(sum(seq(x)))
}
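For what it's worth, both versions still do O(x) work per call: f2 builds and sums the whole sequence. The textbook closed form x(x+1)/2 skips the loop entirely. A quick sketch in Python (the name csum_closed is mine):

```python
# Hypothetical closed-form cumulative sum: 1 + 2 + ... + x = x * (x + 1) / 2.
# O(1) per call, versus O(x) for either looped or vectorized summation.
def csum_closed(x):
    return x * (x + 1) // 2

print(csum_closed(10))     # sum of 1..10
print(csum_closed(10000))  # sum of 1..10000
```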

If I time it, we see:

system.time( (out = apply(as.array(seq(10000)), 1, f2)) )

   user  system elapsed
   0.35    0.05    0.39
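That apply call evaluates f2 at every n from 1 to 10,000, i.e. it produces all the partial sums of 1..10000. For comparison only (this is not part of the original timing), the same 10,000 results come out of a single vectorized call in Python/NumPy:

```python
import numpy as np

# The k-th partial sum of 1..10000 is exactly f2(k), so one cumsum
# replaces 10,000 separate calls to f2.
out = np.cumsum(np.arange(1, 10001))

print(out[0])   # f2(1)
print(out[-1])  # f2(10000)
```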

Nice! Spread that across 3 CPUs and we can bring it down a bit:

system.time( (out2 = foreach(i = seq(0, 9), .combine = 'c') %dopar% {
  apply(as.array(seq(i*1000 + 1, (i+1)*1000)), 1, f2)
}))

   user  system elapsed
   0.02    0.00    0.26
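The %dopar% loop splits 1..10000 into ten chunks of 1,000 and concatenates the results with .combine='c'. The same chunking scheme can be sketched in Python with a process pool (an illustration, not the author's code; the helper names are mine):

```python
from concurrent.futures import ProcessPoolExecutor

def f2(x):
    # Same as the R f2: the sum 1 + 2 + ... + x.
    return sum(range(1, x + 1))

def chunk(i):
    # Mirrors the foreach body: f2 over seq(i*1000 + 1, (i+1)*1000).
    return [f2(x) for x in range(i * 1000 + 1, (i + 1) * 1000 + 1)]

def parallel_f2(workers=3):
    # Ten chunks farmed out to worker processes; concatenating the
    # returned lists plays the role of .combine='c'.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [v for part in pool.map(chunk, range(10)) for v in part]

if __name__ == "__main__":
    out2 = parallel_f2()
    print(len(out2), out2[-1])
```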

Not too shabby. How fast can we do this in SAS?

options cmplib=work.fns;

proc fcmp outlib=work.fns.fns;
  function csum(x);
    sum = 0;
    do i = 1 to x;
      sum = sum + i;
    end;
    return(sum);
  endsub;
run;

data _null_;
  do i = 1 to 10000;
    x = csum(i);
  end;
run;

NOTE: DATA statement used (Total process time):
      real time           0.24 seconds
      cpu time            0.25 seconds

SAS on a single CPU is just as fast as R on three. It's not worth attempting to multi-thread this in SAS; the overhead would be too great, since SAS/CONNECT is made for bigger problems.

So what about NumPy in Python? If we use the build compiled against MKL, we ought to be able to do the reduction blazingly fast, since MKL should use the SSE registers on the processor. Further, we'll use the "fromfunction" method, which lets us pass a lambda to the array-creation routine.
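The post doesn't show the NumPy code, but a fromfunction version might look like the sketch below. It leans on the closed form i(i+1)/2 inside the lambda (the shape and dtype here are my assumptions, not the author's):

```python
import numpy as np

# fromfunction calls the lambda once with an array of indices 0..n-1;
# index i corresponds to csum(i + 1) = (i + 1) * (i + 2) / 2.
n = 10000
out = np.fromfunction(lambda i: (i + 1) * (i + 2) / 2, (n,))

print(out[0])   # csum(1)
print(out[-1])  # csum(10000)
```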