Software Tree has developed a benchmark framework that tests the performance of object-relational mapping (OR-Mapping) engines. The benchmark is called STORM(tm), which stands for Software Tree's Object-Relational Mapping benchmark. Although there are many different performance aspects of an OR-Mapping engine that can be measured in different ways, we set the following major goals for the STORM benchmark:

- Easy to understand, configure, and use
- Measure the OR-Mapping engine performance for the two most common use cases: inserts and queries
- Produce a meaningful results matrix
- Be easily portable to different OR-Mapping products
- Not require a web server or an application server to run the benchmark

This benchmark does not prescribe any particular hardware or software setup. You can use any combination of hardware, JVM, RDBMS and JDBC driver. The idea is to easily and quickly run the benchmark programs and get meaningful results. We believe that by running the benchmark programs in your particular environment and settings, you can get both absolute and relative performance numbers, which will make sense for your particular situation.

We hope that our efforts on the STORM benchmark will help the community in evaluating the two most basic and important performance characteristics (inserts and queries) of different OR-Mapping technologies and help us improve our offerings.

We have developed the STORM benchmark program for our JDX OR-Mapping engine. We have also ported it to Hibernate.

The STORM benchmark package ships with the JDX evaluation kit, which can be downloaded from Software Tree's web site. Your feedback is welcome.

Well, while I'm waiting for them to send me the license key to run their benchmark, let's just take a quick look at the description:

QUOTE>>>>

The STORM benchmark uses instances of an Account class, which has four attributes and maps to a database table with a record size of 100 bytes. The benchmark program performs the operations in the following order:

· Initialize the OR-Mapping subsystem
· Delete all the old database records for the Account class
· Create and insert a configured number of new Account objects in each transaction, for a configured number of transactions (iterations)
· Print the statistics for the insert operations
· Query and print the total number of Account objects in the database
· Query a configured number of Account objects in each transaction, for a configured number of transactions (iterations)
· Print the statistics for the query operations

<<<<
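The quoted steps boil down to two timed loops. As a point of reference, here is my own illustrative plain-Java sketch of that harness; `AccountStore`, `InMemoryStore`, and the `Account` field names are hypothetical stand-ins, not part of STORM (a real port would implement `AccountStore` with JDX, Hibernate, etc.):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the quoted STORM steps as a plain-Java timing harness.
// AccountStore and InMemoryStore are illustrative, not part of STORM.
interface AccountStore {
    void deleteAll();
    void insertBatch(List<Account> accounts);        // one insert transaction
    List<Account> queryBatch(int count, int offset); // one query transaction
    int countAll();
}

class Account {
    // four attributes, mapping to a row of roughly 100 bytes
    int id; String name; String type; double balance;
    Account(int id, String name, String type, double balance) {
        this.id = id; this.name = name; this.type = type; this.balance = balance;
    }
}

class InMemoryStore implements AccountStore {
    private final List<Account> rows = new ArrayList<>();
    public void deleteAll() { rows.clear(); }
    public void insertBatch(List<Account> accounts) { rows.addAll(accounts); }
    public List<Account> queryBatch(int count, int offset) {
        return new ArrayList<>(rows.subList(offset, Math.min(offset + count, rows.size())));
    }
    public int countAll() { return rows.size(); }
}

class StormSketch {
    // Returns { insertNanos, queryNanos } for the configured workload.
    static long[] run(AccountStore store, int transactions, int objectsPerTx) {
        store.deleteAll();                            // delete old records

        long insertStart = System.nanoTime();         // timed inserts
        int nextId = 0;
        for (int tx = 0; tx < transactions; tx++) {
            List<Account> batch = new ArrayList<>();
            for (int i = 0; i < objectsPerTx; i++) {
                nextId++;
                batch.add(new Account(nextId, "acct" + nextId, "SAVINGS", 100.0));
            }
            store.insertBatch(batch);
        }
        long insertNanos = System.nanoTime() - insertStart;

        System.out.println("Total accounts: " + store.countAll());

        long queryStart = System.nanoTime();          // timed queries
        for (int tx = 0; tx < transactions; tx++) {
            store.queryBatch(objectsPerTx, tx * objectsPerTx);
        }
        long queryNanos = System.nanoTime() - queryStart;

        return new long[] { insertNanos, queryNanos };
    }
}
```

Note that each query transaction fetches a different offset, so nothing in the loop structure itself rewards a cache.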

So, if I'm reading it correctly, this does not test:

(1) associations, or the efficiency of association fetching (!!)
(2) updates, or the efficiency of dirty checking (!!)
(3) any nontrivial query
(4) by the looks, any query against data which is not completely cached in memory by the database itself

i.e., it looks to be a complete joke.

I can make a benchmark like this absolutely SCREAM by simply enabling the Hibernate query cache in version 2.1.
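For context, enabling the query cache in Hibernate 2.1 is roughly a one-line property (the property name is from my reading of the 2.x docs, so verify against your version; individual queries must also be marked cacheable in code):

```properties
# hibernate.properties (Hibernate 2.1): enable the query cache on top of the
# second-level cache; each Query must also call setCacheable(true) in code.
hibernate.cache.use_query_cache=true
```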

This is not a useful contribution to the debate over ORM performance, which is mostly about the efficiency of caching and smart association fetching strategies: in particular, the runtime configurability of eager outer-join fetching vs. lazy fetching.
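To make the fetching point concrete: in Hibernate 2.x the strategy is chosen per association in the mapping file and can be switched without touching code. A sketch (attribute names as I recall them from the 2.x mapping DTD; the class, collection, and column names here are purely illustrative):

```xml
<!-- lazy collection: rows loaded only on first access -->
<set name="orders" lazy="true">
    <key column="ACCOUNT_ID"/>
    <one-to-many class="Order"/>
</set>

<!-- eager fetch of an association via a SQL outer join -->
<many-to-one name="branch" class="Branch" outer-join="true"/>
```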

Measuring the OR-Mapping engine performance for the two most common use cases (inserts and queries) was a major goal for this version of STORM. We also wanted to keep it simple for ease of portability. We hope this proves useful for many customers.

As a person who maintains an in-house OR-mapping layer for an enterprise software company, I am in a constant search for an OR-mapping solution that can replace my in-house code. Although a number of products on the market (including standards like CMP) tout a lot of features and caching, I have unfortunately yet to find a product that can really do all it claims with acceptable performance and ease of maintenance in a high-load and constantly changing environment.
I have done evaluations of both Hibernate and JDX (among others) earlier in my quest to find a perfect match for my requirements, but have always felt the need for a standard way of comparing the performance of one product vs. another. The last time I tried to evaluate a few products, I had to create my own test scenario and went through hell configuring some of the products to work for it.
So I downloaded this today and decided to give it a test run. Here are my findings:
# I just went straight to the readme and did as it said (although I used PointBase rather than hdb). It seemed to work as described without any extra effort on my part.

# It was easy to use with no extra overhead of JSP/Servlets or ejbs so no appserver in the equation. Again a plus point in my opinion.

# The scenario is very simple: one table, with inserts and queries. But I think that's OK to begin with. If some product cannot perform for this scenario, I don't want to go any further with it.

# I don't think this scenario would exercise caching (different inserts and queries in every iteration) on any OR-mapping product. So in my opinion it is giving raw engine performance statistics, which are important for me to know. A variation of this case with, say, a 50% cache hit rate would also be useful. But then again, if an OR engine is already performing more than 50% worse than another, I don't think caching is going to solve all its problems.

# I would love to see this extended to some other products, and yes, to have associations added to the scenario.

# Getting some interesting results. Will post details about my results in a later post.

# Gavin, I don't understand what you want to say in your comments 3 and 4. Any example of a nontrivial query? Also, I don't think database caching affects the comparison in any way.

Do not trust any benchmark too much. If you maintain your own in-house OR-mapping tool, you know all of the OR-mapping problems and critical aspects; just review the source code and documentation before making a decision. In reality, you must know the implementation details.

You can't really take seriously a benchmark that hasn't been created by an independent entity. Probably JDX already has a query cache and now they are touting "xx times faster than Hibernate!!" to their customers. May god bless their ignorant souls.

I can appreciate that any benchmark from a vendor needs to be taken with a grain of salt. Good news is that this benchmark is totally open. You can see the source code and configure and run it in your own environment. I hope that your concerns go away once you touch and feel the actual implementations.

It is very nice to see someone spend some time looking into the benchmark and coming up with the Truth. Gavin *could* have just said "Hey this benchmark is a joke!" and moved on... but he took the time.

Thanks to Gavin for taking the time in looking into the benchmark. It was interesting to read the report. Since this version of STORM is about measuring raw performance of OR-Mapping engines for inserts and queries of simple objects, I will keep my comments limited to those aspects.

First of all, we spent quite a bit of time making sure that the Hibernate implementation is optimized, as you can see in some of the special optimization code. It's quite possible that the implementation can be optimized further. We would like to include the best possible implementation for Hibernate, so please send it to us.

Secondly, this benchmark is about measuring the performance characteristics of OR-Mapping engines. So, to get the best possible results, one needs to minimize the other overheads that may creep into the measurements. If the overhead of network access and database activity starts dominating the total measured time, it obscures the OR-Mapping engine's performance.

Thirdly, this version of STORM is not meant to measure the concurrency control and caching aspects. It's not that they are not important but we need to think through all the required parameters, configuration, portability, and driver implementation for different OR-Mapping engines. I think that before doing that it may be better to extend the benchmark for 1:1 and 1:many relationships.

I agree with some other postings that running benchmarks may not be a substitute for actually evaluating a technology for your particular situation. Many other factors like architectural fit, flexibility, robustness, and ease-of-use are important too. However, many customers do ask us for some performance numbers. So we have created STORM, which can give them some meaningful performance numbers in their own environment and setup.

In our experience, creating an open, simple, easy-to-configure, and portable benchmark is a non-trivial exercise, especially for a vendor. Nonetheless, we have taken the first shot, hoping that customers get some value out of it.

We are glad to learn that within a few minutes you were able to port and run the STORM benchmark for JAXOR. I agree STORM is very simple right now. However, based on the feedback, it's also very easy to port, configure, run, and interpret. By the way, you may be aware of the 007 benchmark; it has been adapted for JDO by Object Identity, Inc. It looks quite sophisticated, though we have not seen any results published by any JDO vendor.

For STORM, we are leaving it up to the users to configure and run the benchmark as per their target environment and relevant use cases. Of course, as mentioned earlier in the thread, there are a lot of other considerations that go into choosing a product besides the performance numbers of simple inserts and queries.

We want to thank everyone who has publicly and privately given us encouragement and constructive feedback on the STORM benchmark.

It appears that in spite of our best intentions, there is a non-zero probability that we may not be able to provide the most optimal implementation of the STORM benchmark for a competing technology. And if we end up providing a sub-optimal implementation, there is a high probability that our intentions will seem suspect. Furthermore, even if we can come up with the most optimal implementation for a particular version of a competing technology, it may not remain the best possible one for newer versions unless we keep it up-to-date and always get it right. Considering all this, we have decided the following:

Instead of agonizing over whether we have implemented the most optimal solution, we will provide the framework, which can easily be completed by the users by writing just a few lines of their own implementation for the operations they choose to measure (e.g., inserts and queries). The program file would have about 90% of the reusable template code; you still get all the documentation, configuration files, and ant scripts. So, just implement the minimal benchmark logic as per the best practices of the target product, run the benchmarks with different configuration parameters, and draw your own conclusions.
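The "framework plus a few user lines" idea described above can be sketched as a template-method class; this is my own illustrative sketch, not the actual STORM code, and the method names are assumptions:

```java
// Illustrative template-method sketch of the framework idea: the reusable part
// supplies iteration and timing; a user fills in the product-specific calls.
abstract class OrmBenchmarkTemplate {
    // --- the few lines a user implements per OR-Mapping product ---
    protected abstract void insertAccounts(int count); // one insert transaction
    protected abstract void queryAccounts(int count);  // one query transaction

    // --- reusable template code: iteration and timing ---
    // Returns { insertNanos, queryNanos } for the configured workload.
    final long[] measure(int iterations, int objectsPerIteration) {
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) insertAccounts(objectsPerIteration);
        long insertNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) queryAccounts(objectsPerIteration);
        long queryNanos = System.nanoTime() - t1;

        return new long[] { insertNanos, queryNanos };
    }
}
```

A port to a given product would subclass this and implement the two abstract methods using that product's best practices, leaving the timing logic untouched.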

The new version of the STORM benchmark can be downloaded from our web site with the JDX software.
