How to account for "time taken to complete a single run" while comparing the results?

I have two machines that I would like to compare with each other using PTS. Say machine1 gives 80 transactions/sec on the Postmark benchmark and takes 52 minutes to run, while machine2 gives 110 transactions/sec but takes 1 hour and 10 minutes. If I don't consider the time taken, machine2 is better, but what happens otherwise? Do we have to take the time into consideration or not? Why?
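To put numbers on it, here is a minimal sketch (the helper name implied_transactions is mine) assuming the reported transactions/sec figure is simply the total number of transactions divided by the elapsed run time:

    def implied_transactions(rate_tps, minutes):
        """Total transactions implied by a throughput and a run duration."""
        return rate_tps * minutes * 60

    machine1 = implied_transactions(80, 52)    # ~249,600 transactions
    machine2 = implied_transactions(110, 70)   # ~462,000 transactions

    print(f"machine1: {machine1:,} transactions over 52 min")
    print(f"machine2: {machine2:,} transactions over 70 min")

    # The rate is already normalized per unit of time, so a longer run
    # by itself means more total work was performed, not a slower machine.

Under that assumption the two runs did different amounts of total work, which would explain the different wall-clock times without either machine being "penalized" for its duration.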

The standard deviation is reported only for the scores and not for the time, right?

Yes, the standard deviation only applies to the scores of the test. Whether one result is "better" or "worse" then becomes debatable, because a large deviation shows that for some reason the results were not consistent enough to give you a number that accurately represents the system's performance. Ideally, when benchmarking, you would have little deviation; with a large deviation, outside influences can be affecting your score and masking the true performance of the system.
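To illustrate what that statistic measures, here is a rough Python sketch of a standard deviation expressed as a percentage of the mean, the form PTS reports; the per-run scores here are made up:

    import statistics

    runs = [79.2, 80.5, 80.3]            # hypothetical per-run scores, tx/sec

    mean = statistics.mean(runs)
    deviation = statistics.stdev(runs)   # sample standard deviation
    relative = 100 * deviation / mean    # deviation as a % of the mean

    print(f"mean = {mean:.1f} tx/sec, std dev = {relative:.2f}% of the mean")

    # A small percentage means the individual runs agreed with each
    # other; a large one suggests something was skewing single runs.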

Thanks for the reply. I do understand the purpose of the standard deviation. The scores from both machines have a std. dev. of less than 5%, so I don't have a problem there. My only problem is the time taken for the benchmark to run: it is longer on one machine and shorter on the other. Should I just neglect it?