There are no solutions, only tradeoffs.


A Siege On Benchmarks

My regular readers (and perhaps the irregular ones as well) know that I have been obsessed with baseline-responsiveness benchmarking of frameworks for years now. The idea has always been that, in order to know how far you can optimize your framework-based applications, you need to know the limits imposed by the framework itself. Only then can you have an idea of where to spend your limited resources on improvement. For example, if you need 200 dynamic requests/second, but the framework itself (with no application code in use) is capable of only 100, then you know that no amount of application or database optimization will help you — it’s time to start scaling, either horizontally or vertically.
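That headroom check is simple arithmetic; as a minimal sketch (the 200 and 100 figures are the hypothetical ones from the paragraph above):

```shell
#!/bin/sh
# Hypothetical figures from the example above: the application needs 200
# dynamic requests/second, but the framework baseline tops out at 100.
needed=200
framework_max=100

# Integer ceiling of needed/framework_max: the minimum number of
# identically-sized servers if you scale horizontally.
servers=$(( (needed + framework_max - 1) / framework_max ))
echo "minimum servers: $servers"
```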

To perform these benchmarks, I have until now employed only the ab tool provided with the Apache web server. It was easy to use, and its output was relatively easy to parse for automated reporting. However, it turns out that ab over-reports the responsiveness of Apache when serving static HTML files, and when serving minimal PHP scripts such as <?php echo "hello world"; ?>. I discovered this only recently, when attempting to find out why PHP appeared to be faster than HTML, and then only with the assistance of Paul Reinheimer, whom I now owe a bottle of vodka for his trouble.
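For illustration, a sketch of that kind of automation; the ab flags shown and the sample summary line are stand-ins for a live run, not the exact invocation behind the published numbers:

```shell
#!/bin/sh
# A benchmark run would look something like:
#
#   ab -n 10000 -c 10 http://localhost/index.html
#
# (-n = total requests, -c = concurrency; these values are illustrative).
# ab prints a summary line like the sample below, which awk can pull apart.
sample='Requests per second:    2250.00 [#/sec] (mean)'

rps=$(printf '%s\n' "$sample" | awk -F: '/Requests per second/ {print $2+0}')
echo "ab reports $rps req/sec"
```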

It turns out that the siege tool from JoeDog Software is more accurate in reporting static HTML and PHP responsiveness. Paul Reinheimer confirmed this as well, reporting the expected responsiveness on other systems.
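For comparison, a similar sketch for siege; the flags and the sample report line here are illustrative assumptions, not the exact invocation used for the numbers below:

```shell
#!/bin/sh
# A siege run might look like:
#
#   siege -b -c 10 -t 60S http://localhost/index.html
#
# (-b = benchmark mode with no inter-request delay, -c = concurrency,
# -t = duration; these values are illustrative). siege reports a
# "Transaction rate" line, parsed here from a captured sample:
sample='Transaction rate:         829.82 trans/sec'

rate=$(printf '%s\n' "$sample" | awk -F: '/Transaction rate/ {print $2+0}')
echo "siege reports $rate trans/sec"
```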

The over-reporting from ab means that all my previous reporting on benchmarks is skewed too low when comparing framework responsiveness to PHP’s maximum responsiveness. As such, I have re-run all the previously published benchmarks using siege instead of ab. Previous runs with ab are here …

… and below are the updated siege versions. As with previous attempts, these benchmarks are performed on an Amazon EC2 “small” instance. There is one difference to note: previous runs used Xcache for bytecode caching, but these use APC; I don’t suspect this change in caching engines has a significant effect, but I have not tested that assertion.

framework             rel       avg (req/sec)
baseline-html         1.1878    985.69
baseline-php          1.0000    829.82
cake-1.1.10           0.0938     77.84
cake-1.1.11           0.1277    105.96
cake-1.1.12           0.1288    106.84
cake-1.1.16           0.1166     96.77
cake-1.1.17           0.1165     96.70
cake-1.1.19           0.1298    107.69
cake-1.2.0-rc2        0.0516     42.79
solar-0.25.0          0.1852    153.66
solar-0.26.0          0.1789    148.43
solar-0.27.0          0.1734    143.93
solar-0.28.0          0.1671    138.64
solar-1.0.0alpha1     0.1706    141.58
symfony-0.6.3         0.0629     52.22
symfony-1.0.0beta2    0.0758     62.91
symfony-1.0.6         0.0746     61.91
symfony-1.0.6-dw      0.0820     68.03
symfony-1.0.6-fp      0.0853     70.78
symfony-1.0.17        0.0744     61.73
symfony-1.1.0         0.0745     61.84
zend-0.2.0            0.2176    180.56
zend-0.6.0            0.1998    165.78
zend-1.0.0            0.1268    105.25
zend-1.0.1            0.1263    104.80
zend-1.5.2            0.0951     78.93

Note the baseline-html and baseline-php numbers. Using ab previously, these were reported as 2100-2400 requests/second and 1100-1400 requests/second, respectively. The siege tool reports a much lower number for both, but the dropoff between static HTML and dynamic PHP is much smaller; with ab it looked like about 40-50%, but now with siege it looks like only about 15-18%. This behavior is much more like what we would expect from a memory-based PHP script.
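That dropoff figure is straightforward to reproduce from the table's two baseline rows:

```shell
#!/bin/sh
# Static-to-dynamic dropoff under siege, from the baseline rows above
# (985.69 req/sec for static HTML, 829.82 for the minimal PHP script).
awk 'BEGIN {
    html = 985.69
    php  = 829.82
    printf "dropoff: %.1f%%\n", (html - php) / html * 100
}'
```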

Note also the separate framework requests/second; they are very similar between ab and siege. This means that the framework responsiveness numbers are almost unchanged.

Because the nearly-identical framework numbers are compared to a much smaller baseline PHP number, the frameworks now appear to be doing much better in relation to PHP’s maximum responsiveness. For example, Solar-1.0.0alpha1 with ab appeared to run at about 11% of PHP’s max, but with siege it looks close to 17%. All of the frameworks tested see this kind of comparative gain in their reporting.
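The rel column works out the same way: each framework's avg divided by the baseline-php avg. Using the solar-1.0.0alpha1 row as an example:

```shell
#!/bin/sh
# rel = framework avg req/sec divided by the baseline-php avg,
# using the solar-1.0.0alpha1 and baseline-php rows from the table.
awk 'BEGIN {
    solar = 141.58
    php   = 829.82
    printf "rel: %.4f (about %.0f%% of PHP max)\n", solar / php, solar / php * 100
}'
```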

However, when compared to each other, the framework rankings are the same as before: Solar has the highest baseline responsiveness, followed by Cake and Zend (their respective releases are very close to each other in responsiveness), and Symfony trails with the lowest baseline responsiveness.

In summary, using ab skewed the “percentage of PHP” comparisons because it over-reported PHP’s maximum responsiveness, but the framework requests/second numbers and the framework comparative rankings are unchanged from previous reporting. The Google project for the benchmarking system has been updated to use siege, so all future reporting will reflect its results, not those of ab.


22 thoughts on “A Siege On Benchmarks”

While I think the methodology is sound, I’m not sure I entirely agree with your conclusion: “For example, if you need 200 dynamic requests/second, but the framework itself (with no application code in use) is capable only of 100, then you know that no amount of application or database optimization will help you.”

Many frameworks provide methodologies for full page caching that make it trivial to return the content early. Additionally, with tools such as Zend Platform’s full page content cache or using a squid server, you can easily define resources to cache and basically bypass PHP altogether. Using such strategies, you can easily boost your performance dramatically — often with little or no additional programming. That said, this sort of thing might be encompassed in the completion of that quote, “it’s time to start scaling.”

@Matthew — I was not explicit enough on that, I guess. I say “dynamic” requests per second above (and you quote me correctly). This is as opposed to “static” requests.

The point is that any time you touch the framework dispatch cycle for a *dynamic* request, that will be your upper bound of responsiveness.

Full-page caching will give responsiveness comparable to that of a static HTML page. All of the frameworks tested are capable of full-page caching (some with more ease than others), but then you are gauging the responsiveness of the web server, not the framework.

Great post, Paul! But where’s my sortable link so I can sort by columns! (Just kidding!)

Always great to at least have a clue of relative framework speeds, even with the limitations noted in the article. I don’t know of anywhere else where this stuff is published. Thank you for taking the time to perform these tests.

Question: why are the newer versions of some frameworks (Cake 1.1 -> 1.2, Zend, etc.) getting a lot slower? Is it the traditional software release cycle (push features, then optimize), or can we expect Zend and Cake to stay this slow? The drop-off from Zend 0.2.0 to 1.5.2 is pretty significant. Is it just a feature issue (i.e. it’s doing more stuff for you), or something more sinister?

Also, isn’t the Zend framework designed to be completely modular, so would I expect to see significant improvements from excluding unnecessary modules? (I only ask b/c I want to learn Zend, and every bit of data about it helps).

Hi Paul.
Have you ever benchmarked the Yii framework (from the same people who brought us Prado)?
They claim that it is the fastest framework available (compared to some of the others).
I have found their framework pretty fast, and they use a method similar to yours for the benchmarking. http://www.yiiframework.com

I agree with Matthew. If you’re hitting your entire framework 200 times a second, you’re not using it right. There’s no reason not to cache, and caching strategies vary from data caching through page caching to complete Varnish-like solutions that hardly hit PHP at all anymore. On the rare occasions that you do execute the whole set of PHP code, it’s suddenly a lot less interesting how fast that is.

So: save development time by using a framework AND save performance by having a proper system architecture. Far better than the ‘save time OR have something that performs’ approach many (including Rasmus) seem to promote.

“If you’re hitting your entire framework 200 times a second, you’re not using it right.” — Maybe, maybe not. The point is not the specific numbers, but the general principle. There is in fact an upper limit, and it can be valuable to know what that limit is.

“There’s no reason not to cache, and caching strategies vary from data caching through page caching to complete Varnish-like solutions that hardly hit PHP at all anymore.” — I completely agree; cache-when-you-can is rewarding.

Static full-page caching on high traffic sites has the best reward, when you can do it and meet your functional requirements. In those cases, it is still useful to know the outer limits of responsiveness of the server and/or PHP, so you can compare what you are actually seeing with what you think you should be seeing.

You also mention the data-caching strategy. In a data-caching scenario, you are likely *still* hitting the dynamic dispatch cycle, and so the framework responsiveness limit is a good figure to know.

“On the rare occasions that you do execute the whole set of php code, it’s suddenly a lot less interesting how fast that is.” — Less interesting to you, maybe.

“So: save development time by using a framework AND save performance by having a proper system architecture.” — Again, I completely agree.

Incidentally, Ivo, are you a Symfony developer, devotee, or advocate? I ask because it seems like the Symfony folks are the only ones who regularly have objections to and complaints about this benchmarking series.

“… why are the new versions of some frameworks … getting a lot slower?” — I think it’s the normal course; as you add features, it takes more processing time and memory. In addition to feature-adds, there may be architectural changes that cause the slowdown, one hopes with a corresponding gain in flexibility or some other value. I don’t see anything sinister here.

“… isn’t the Zend framework designed to be completely modular … ?” — Yes, at least more so than some other frameworks. (I’m of the opinion, as I have expressed in other forums, that Zend Framework is more like a “corporate PEAR” than anything else.) The benchmarks already exclude the unnecessary modules, in the name of providing an even playing field for the different systems. You could in theory swap out their controllers with your own, but then you would have a different core dispatch cycle, and the benchmark numbers presented here would be inapplicable.

“Congratulations on the nice work of keeping Solar’s speed up, even through additional releases!” — Well … :-/ … I happen to know that alpha2 is slower than alpha1 due to some architectural shifts and feature adds (e.g., moving things out of the Solar arch-class and into their own classes). The slowdown still leaves it more-responsive than the other benchmarked systems, but it’s still a reduction. Future releases may improve that somewhat.

Love these stats as always, and I switched to siege myself after reading your previous post. I would love to see you test Cake 1.2.1 stable, by the way, as opposed to the release candidate you’re using now as the latest version, since I use Cake a lot myself.

“Incidentally, Ivo, are you a Symfony developer, devotee, or advocate? I ask because it seems like the Symfony folks are the only ones who regularly have objections to and complaints about this benchmarking series.”

…and you seem to want to demonstrate that symfony is way slower than other frameworks:

“and Symfony trails with the lowest baseline responsiveness.”

even if the Zend Framework and symfony have quite similar numbers.

Anyway, as the lead developer of the symfony framework, I never had objections or complaints. As a matter of fact, those benchmarks have some value. But, even if you take the time to describe the methodology and what you are trying to demonstrate with these benchmarks, people seem to sometimes make bad decisions based on these numbers only, without having a look at the whole ecosystem of the frameworks: community, features, documentation, activity, …

Great work, it’s interesting to see how fast a framework can deliver “Hello World” compared to HTML or “just” PHP!

I have a few suggestions to improve the usefulness of your work:

First, as others pointed out, include more frameworks. The two I would like to see (which I’ve actually already benched myself using your scripts) are CodeIgniter and Kohana. If you want to be on the bleeding edge, take FLOW3 for a spin.

Second, include DB connectivity in the frameworks’ tests. While you can argue this is outside the original scope of your work, nobody uses a framework [just] to output static text hardcoded in the source files. So even if it’s just retrieving the same row from the same table (which a RDBMS like MySQL should cache), it instantiates the framework’s DB components and makes a network/socket connection. This is important because a) some frameworks are smart enough not to load their DB classes if they’re not used and b) DB classes greatly vary in performance. Overall, it should be a step closer to real-world. Think of it as the Hello World of the web.

Lastly, consider measuring, and reporting, memory usage. Frameworks vary wildly in memory footprint, this variance being even larger when DB connectivity is used as some DB abstraction layers are more heavyweight than others.


“Anyway, as the lead developer of the symfony framework, I never had objections or complaints. As a matter of fact, those benchmarks have some value. But, even if you take the time to describe the methodology and what you are trying to demonstrate with these benchmarks, people seem to sometimes make bad decisions based on these numbers only, without having a look at the whole ecosystem of the frameworks: community, features, documentation, activity, …”

Sorry Fabien, but:

- Solar has community, features, and activity. It needs more docs, but it’s unreleased, so people can wait.
- Yii has community, features, documentation, and activity.

Both are faster than symfony.

I could mention Django, which has the same resources and is a lot faster, but it’s Python and out of scope here.


Hey Paul, thank you for the post. You are actually the first one to give me a more detailed understanding of the whole process, since I am totally new to the subject of framework-based applications. In my last trials I always had a lot of problems finding the right method of scaling. The ab tool has also helped me a lot. I will start to read more of the posts in your archive in order to get more information like this. Thumbs up.