
The Devil's in the DeWitt Clause

In my commentary a few weeks ago ("The Truth About the TPC," InstantDoc ID 38348), I briefly mentioned the DeWitt clause that major database vendors such as Microsoft, Oracle, and Sybase include in their End User License Agreements (EULAs). DeWitt clauses forbid the publication of database benchmarks that the database vendor hasn't sanctioned. Here's the exact clause from the SQL Server EULA:

e. Benchmark Testing. You may not disclose the results of any benchmark test of either the Server Software or Client Software to any third party without Microsoft's prior written approval.

Not surprisingly, most readers who sent me comments about these restrictive clauses think that using them is wrong. After all, most of us believe that consumers should have free access to information about the products they use, even if that information isn't flattering to the product or its vendor. I agree. However, I'd like to play devil's advocate and examine the database vendors' position.

A reader from the vendor community sent me a message that sums up the pro-DeWitt argument: "The reason the DeWitt clause is in the EULA is to prevent the publishing and propagation of poor results that result from either intentional or accidental poor configuration of the database system. Requiring the vendor's agreement to publish a benchmark gives the vendor the opportunity to validate the hardware and software configurations to ensure that they are set up in a way that provides optimal performance of the product."

The reader makes a valid point. Two major problems could arise if DeWitt clauses were stricken from EULAs. The first and most obvious problem is that testers could publish inaccurate or misleading benchmarks. For example, tweaking a configuration parameter or adding or removing an index could have a dramatic effect on test results. It's not easy to be a tuning expert, but it's easy to make tuning mistakes. And a group with an agenda could make a subtle mistake on purpose to put one database in a more favorable light than another.
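To make the index point concrete, here is a minimal sketch using Python's built-in SQLite engine (not SQL Server; the table and index names are invented for illustration). It shows how adding a single index flips the engine's query plan from a full table scan to an index search, which is exactly the kind of one-line tuning change that can swing a benchmark number:

```python
import sqlite3

# Build a throwaway in-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)
conn.commit()

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute the query.
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

plan_before = plan(query)  # no index: the engine must scan the whole table
cur.execute("CREATE INDEX idx_customer ON orders(customer_id)")
plan_after = plan(query)   # same query now resolves through the index

print("before:", plan_before)
print("after: ", plan_after)
```

The query text never changes; only the presence of the index does. A benchmark run against the un-indexed configuration would report dramatically worse numbers for an identical workload, which is precisely the kind of "accidental or intentional poor configuration" the pro-DeWitt argument worries about.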

The second problem is that creating a useful database benchmark is hard. Because benchmarks simply measure the performance of a given workload (i.e., the application that's running), it's almost impossible to use the result of a benchmark to gauge the performance of a different workload. Too many factors are different, even when the applications seem similar. I spend a lot of my professional time conducting performance-tuning audits for clients. In nearly every case, the application—not the database—is what causes performance problems. Although I focus on SQL Server, I'm certain this observation holds true for other major database systems.

Because poor application design can easily skew benchmark numbers, a database might be properly tuned for a given benchmark, but that doesn't mean you can use the benchmark scores to make accurate capacity-planning decisions for your own application. In addition, modern application-development environments and middleware have grown increasingly complex. A middleware layer might access data efficiently through one database engine but inefficiently through another. Without a well-defined methodology that isolates database performance from these other factors, a benchmark can degrade into a test of application design, and such results don't serve consumers when taken out of context.

Database vendors have a vested interest in ensuring that customers don't receive misleading benchmark information. The DeWitt clause lets vendors ensure that misleading benchmark numbers based on poorly tuned or configured systems don't sully their product's image. In some ways, this kind of control also serves the best interests of the customer; the two benchmarking problems above would lead to benchmark results that could cause customers to make misinformed decisions. Is the DeWitt clause good for database vendors? Maybe. However, the question should be, What's best for the consumer? I'd argue that focusing on the consumer is ultimately in the best interest of the vendor anyway. Next week, I'll share reader comments that are anti-DeWitt and explain why I think the benefits of eliminating DeWitt clauses would more than outweigh the drawbacks.