Discussions

The benchmark formerly known as ECperf 1.1 has just been released as SPECjAppServer2001 under SPEC (www.spec.org). It is a modified version of the ECperf 1.1 benchmark specification and toolkit, updated to comply with SPEC run and reporting rules. All future SPECjAppServer2001 results will be announced on TheServerSide but posted on SPEC's site.

The new SPEC results will also include a price/performance metric, like ECperf did.

Since no one ever tests two app servers on the same hardware/OS/database, the results are utterly meaningless. This leaves us in the situation we already had: run your own tests if you want to know. None of the vendors are going to help you.

How can price/performance possibly be useful when multiple parameters vary in every test? There is no way to compare. You don't know whether a system got a good score because the app server is good, or because the app server is terrible but the hardware is cheap, etc. This is basic stuff, and unless they start doing tests with only one parameter varying, these numbers will always be meaningless.
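To make the confound concrete, here is a tiny sketch with made-up numbers (the submissions, prices, and throughput figures are all hypothetical, not real SPECjAppServer results): two configurations can land on exactly the same price/performance score even though their app servers differ tenfold in throughput, so the composite metric alone cannot tell you which server is better.

```java
// Hypothetical illustration of the price/performance confound.
// All figures below are invented for the example; they are not
// taken from any actual SPECjAppServer2001 submission.
public class PricePerfConfound {

    // Price/performance as dollars per operation-per-second.
    // Lower is "better", but the ratio mixes two independent
    // variables: app-server throughput and total system price.
    static double pricePerf(double totalPriceUsd, double opsPerSec) {
        return totalPriceUsd / opsPerSec;
    }

    public static void main(String[] args) {
        // Submission A: fast app server on expensive hardware.
        double a = pricePerf(200_000, 1_000); // 200 $/ops
        // Submission B: slow app server on cheap hardware.
        double b = pricePerf(20_000, 100);    // 200 $/ops
        // Identical score, yet throughput differs by 10x.
        System.out.println(a == b); // prints "true"
    }
}
```

The same ambiguity applies in reverse: a mediocre score might reflect a strong server burdened by overpriced hardware. Only holding everything but one variable constant lets the ratio say anything about the app server itself.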

<snip>
This is basic stuff, and unless they start doing tests with only one parameter varying these numbers will always be meaningless.
</snip>
Price/perf is just one measure; overall performance is the more important one. While it is probably easier to interpret the results if the tests are performed on the same hardware, the key point is that this is a serious workload that represents a real-world application and real-world load reasonably well.

Given the strong support for SPECjAppServer2001 from all the vendors, it is a serious and credible tool for assessing performance. Any customer wanting to evaluate performance can either use the publicly available results or, using the information disclosed in them, re-run the benchmark on given hardware and assess it for themselves.

Again, the best aspect of this benchmark is that it is very easy for anyone to set up and run. And if a vendor has made at least one submission, the optimal configuration/tuning for that vendor can be obtained from the submission disclosures. The serious participation of all the leading vendors will ensure that submissions are available (on some platform).

Using the above, anyone interested in a head-to-head comparison (for a purchasing decision) can very easily do one. This works around the fact that not all vendors have submissions on any given platform.

Strong support from all vendors is this benchmark's biggest USP, in terms of its utility to J2EE server users, developers, and deployers.

TechTarget provides technology professionals with the information they need to perform their jobs - from developing strategy, to making cost-effective purchase decisions and managing their organizations' technology projects - with its network of technology-specific websites, events and online magazines.