Before you comment on this post, please have the courtesy to read at least the first two articles above. I am tired of refuting the same old invalid arguments ("hello world makes no sense", "if you cache, it goes faster", "the ORM systems are different", "speed isn't everything") from people who do not understand what these reports actually say.

Full disclosure: I am the lead developer on the Solar Framework for PHP 5, and I was an original contributor to the Zend Framework.

In the interest of putting to rest any accusations of bias or favoritism, the entire project codebase is available for public review and criticism here.

Flattered By Imitators

They say that imitation is the sincerest form of flattery. As such, I am sincerely flattered that the following articles and authors have adopted methodologies strikingly similar to the one I outlined in Nov 2006.

Rasmus Lerdorf here. I am considering writing a separate post about this talk by Rasmus.

Methodology, Setup, and Source Code

The methodology in this report is nearly identical to that in previous reports. I won’t duplicate that narrative here; please see this page for the full methodology.

The only difference from previous reports regards the server setup. Although I’m still using an Amazon EC2 instance, I now provide the full setup instructions so you can replicate the server setup as well as the framework setup. See this page for server setup instructions.

Results, Part 1

Update: FYI, opcode caching is turned on for these results.

The “avg” column is the number of requests/second the framework itself can deliver, with no application code, averaged over 5 one-minute runs with 10 concurrent users. That is, the framework dispatch cycle of “bootstrap, front controller, page controller, action method, view” will never go any faster than this.

The “rel” column is a percentage relative to PHP itself. Thus, if you see “0.1000” that means the framework delivers 10% of the maximum requests/second that PHP itself can deliver.
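As a sketch of how the “rel” column is derived from the table below (the figures are from the benchmark runs; the helper function name is mine, not part of the benchmarking harness):

```python
# "rel" is each framework's average requests/second divided by the
# baseline-php average (1320.47 req/sec in Part 1 of this report).
BASELINE_PHP = 1320.47  # avg req/sec for a bare PHP "Hello World!" page

def rel(avg, baseline=BASELINE_PHP):
    """Fraction of PHP's maximum throughput a framework delivers."""
    return round(avg / baseline, 4)

print(rel(154.29))  # solar-1.0.0alpha1 -> 0.1168
print(rel(118.30))  # cake-1.1.19       -> 0.0896
```

So a “rel” of 0.1000 means the framework delivers one tenth of what a bare PHP page delivers on the same server.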

framework            avg        rel

baseline-html        2309.14    1.7487
baseline-php         1320.47    1.0000
cake-1.1.19           118.30    0.0896
cake-1.2.0-rc2         46.42    0.0352
solar-1.0.0alpha1     154.29    0.1168
symfony-1.0.17         67.35    0.0510
symfony-1.1.0          67.41    0.0511
zend-1.0.1            112.36    0.0851
zend-1.5.2             86.23    0.0653
zend-1.6.0-rc1         77.85    0.0590

We see that the Apache server can deliver 2300 static “hello world” requests/second. If you use PHP to echo "Hello World!", you get 1300 requests/second; that is the best PHP will get on this particular server setup.

Cake: After conferring with the Cake lead developers, it looks like the 1.2 release has some serious performance issues (more than 50% drop in responsiveness from the 1.1 release line). They are aware of this and are fixing the bugs for a 1.2.0-rc3 release.

Solar: The 1.0.0-alpha1 release is almost a year old, and while the unreleased Subversion code is in production use, I make it a point not to benchmark unreleased code. I might do a followup report just on Solar to show the decline in responsiveness as features have been added.

Symfony: Symfony remains the least responsive of the tested frameworks (aside from the known-buggy Cake 1.2.0-rc2 release). No matter what they may say about Symfony being “fast at its core”, it does not appear to be true, at least not in comparison to the other frameworks here. But to their credit, they are not losing performance. (Could it be there’s not much left to lose? 😉) In addition, I continue to find Symfony the hardest to set up for these reports; more than half my setup time was spent on Symfony alone.

Zend: The difference between the 1.0 release and the 1.5 release is quite dramatic: roughly a 23% drop in responsiveness (112.36 down to 86.23 requests/second). And then another 10% drop between 1.5 and 1.6.

To sum up, my point from earlier posts that “every additional line of code will reduce responsiveness” is illustrated here. Each of the newer framework releases has added features, and has slowed down as a result. This is neither good nor bad in itself; it is an engineering and economic tradeoff.
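The release-to-release slowdowns discussed above can be checked directly against the “avg” numbers in the table; a quick sanity check (the helper function is mine, not part of the benchmarking harness):

```python
def drop(old_avg, new_avg):
    """Percentage loss in requests/second between two releases."""
    return round((1 - new_avg / old_avg) * 100, 1)

print(drop(118.30, 46.42))  # cake 1.1.19 -> 1.2.0-rc2: 60.8% slower
print(drop(112.36, 86.23))  # zend 1.0.1  -> 1.5.2:     23.3% slower
print(drop(86.23, 77.85))   # zend 1.5.2  -> 1.6.0-rc1:  9.7% slower
```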

Results, Part 2

I have stated before that I don’t think it’s fair to compare CodeIgniter and Prado to Cake, Solar, Symfony, and Zend, because they are (in my opinion) not of the same class. Prado especially is entirely unlike the others.

Even so, I keep getting requests to benchmark them, so here are the results; the testing conditions are identical to those from the main benchmarking.

framework        avg        rel

baseline-html    2318.89    1.7710
baseline-php     1309.39    1.0000
ci-1.5.4          229.29    0.1751
ci-1.6.2          189.89    0.1450
prado-3.1.0        39.86    0.0304

CodeIgniter: Even the CI folks are not immune to the rule that “there is no such thing as a free feature”; between the 1.5.4 and 1.6.2 releases they lost about 17% of their requests/second. However, they are still running at 14.5% of PHP’s maximum, compared with the 11.68% of Solar 1.0.0-alpha1 (the most responsive of the frameworks benchmarked above), so CI is clearly the fastest of the bunch.
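The CodeIgniter figures above follow from the Part 2 baseline (1309.39 req/sec); a quick check of the arithmetic:

```python
BASELINE_PHP = 1309.39  # Part 2 baseline-php avg req/sec

# Loss between CI releases, and CI 1.6.2's share of PHP's maximum.
ci_drop = round((1 - 189.89 / 229.29) * 100, 1)
ci_rel = round(189.89 / BASELINE_PHP, 4)

print(ci_drop)  # 17.2% fewer requests/second from 1.5.4 to 1.6.2
print(ci_rel)   # 0.1450, i.e. 14.5% of PHP's maximum
```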

Prado: Prado works in a completely different way than the other frameworks listed here. Even though it is the slowest of the bunch, it’s simply not fair to compare it in terms of requests/second. If the Prado way of working is what you need, then the requests/second comparison will be of little value to you.

This Might Be The Last Time

Although I get regular requests to update these benchmark reports, it’s very time-consuming and tedious. It took five days to prepare everything, add new framework releases, make the benchmark runs, do additional research, and then write this report. As such, I don’t know when (if ever) I will perform public comparative benchmarks again; my thanks to everyone who provided encouragement, appreciation, and positive feedback.


0.13% of baseline php. This is using a cache system I can’t remember the name of.

I would take those great results with a grain of salt, as I was not using another system for the client and my Apache configuration is not tuned at all. But at least it means that my approach is not completely flawed.

To compare them accurately, you would need to test Pluf on exactly the same system as the others were tested on. There may be something about your system that introduces variability into the test, something that is (or is not) present in the system I used.

I encourage you to download the project codebase, perhaps even set it up on another EC2 system, and *then* add Pluf to the mix to see how it compares.

Finally, you would need to make sure you are using a fully-dynamic dispatch cycle, with no caching enabled (other than APC, XCache, or some other bytecode cache).

Paul, what exactly do you mean by “no cache enabled”? What about the config cache (which exists in, e.g., Symfony or Agavi)? It is often an important part of the framework.
Agavi compiles its XML config to plain PHP (not just arrays; it is almost part of the core, because the compiled code uses a bootstrapping approach).

It doesn’t get any better with the repeated argument of measuring the maximum “responsiveness” of the framework, because it is just not true: the maximum responsiveness of a framework does massively increase with the chosen caching mechanisms.
Frameworks are a set of tools, and the best tool-box is not the lightest one, but the one which best suits the job that has to be done.

And by the way: having a public blog post with such a statement as “I am tired of refuting the same old invalid arguments”... I agree, you should stop posting benchmarks.

1. You said, “I’m tired of reading invalid benchmarks between web-development frameworks with default configs which have totally different approaches.” I would be very happy to read exactly which “totally different approaches” are used, and why they would matter, perhaps in a blog post of your own.

2. You say “the maximum responsiveness of a framework does massively increase with the chosen caching mechanisms.” You are only partly correct; if we measure the cached responsiveness, say with full-page caching, then we are measuring the web server, not the framework. We already know the responsiveness of the web server: 2300 req/sec. Any of the listed frameworks is capable of generating a full-page cache, so they would all be equal in such a case. Again, if you wish to blog a more detailed argument on your own, I’d be happy to see it.

3. You say “frameworks are a set of tools, and the best tool-box is not the lightest one, but the one which best suits the job that has to be done.” I completely agree. One of the important considerations in picking the tool that best suits the job is responsiveness.

4. You said, “i agree.. you should stop posting benchmarks.” Nice try. Just for that I might keep posting them; thanks for the encouragement. 😉

You’ve got to LOVE the whole “caching makes ‘Hello World’ a bad example” argument!

Take a look at a lot of the top blogs or other online applications: yes, most of them do provide caching, but the caching and main output are controlled by the core code.

I would venture that 99% of the PHP-powered sites out there do NOT directly serve up HTML! This means that “Hello World” is the perfect example of the most basic responsiveness you can expect, because in essence you’re testing how fast the framework can spit out a “precompiled HTML page”.

You are completely right; as I said, it is in fact your post that triggered a “Hey! I must test my framework too.” reaction, and I have just run a quick test. You have done fantastic work at preparing an easy way to reproduce the tests, so I will take a look at that and offer my code for download so others can test it.

Just one question, have you run ab/siege from the same instance or another one?

1. I actually did a very detailed evaluation of frameworks (Zend, Cake, Symfony, Prado; sorry, not Solar :-)). The only problem: it’s in German 😐
Short summary: all these frameworks take very different approaches to the term “framework”. A simple indicator: one just has to read the “what is xy” page on each framework’s homepage. It reads very differently for each of the frameworks.
Example:
one framework has an ORM configured as default, because the philosophy is: one who uses a framework does use a DB;
the other framework has the ORM disabled by default.
If you compare both with a hello-world benchmark and default config... that’s my point. And there are many more examples (configuration, filtering, escaping, etc.).

2. The short version of a more detailed argument:
Your argument: a hello-world benchmark shows the maximum responsiveness of the framework.
My argument: the maximum responsiveness of a framework has to do with caching.
Why: if an output is static, it can be cached; if it is not static, then it would involve many more functions of the framework than are tested with a “hello world” (DB, ORM, etc.).

3. We almost agree here; the only difference of opinion: I’d say that for business decisions the “maximum responsiveness” of a framework is a minor point, and there are many more important points to be considered.
Why? Because
a. a framework is used to develop applications which generally are somewhat more complex than serving static pages and a contact form;
b. the framework is a tool for a developer or a team of developers which should ease the development process, and there are many points to be considered in a development cycle.

4. Sorry for my last remark; it was unnecessary (2am here in Berlin ;-)).

1. You say, “one framework has an ORM configured as default, because the philosophy is: one who uses a framework does use a DB”. Database access is turned off for these tests, so the ORM has nothing to do with the results. I don’t mean to be a jerk, but this is what I meant by saying I’m tired of refuting the same arguments; I outline all of this in the earlier versions of the reports. The configurations for each system are as close to “equal” as they can be.

2. I addressed each of those points earlier, but in short: once you hit the dynamic dispatch cycle, you need to know the limits of responsiveness. That is all these tests tell you. Comparing database access, etc., is a far more difficult and involved process. Again, these points are addressed in the earlier reports.

3. It’s not a minor point when you can only afford a certain number of servers.

Paul, about “Symfony being fast at its core”: Symfony uses a third-party framework (at its core) which is no longer maintained. Or should I say Symfony is just a decorator over the original framework? Well, this core framework, Mojavi, is no longer maintained. But Mojavi was a really well-designed framework that performed well. At its core. Symfony was based on the 3rd revision of it, if I remember correctly. I should still have the source somewhere if you’re interested.

I guess Symfony killed it in the end, because Symfony’s main strength has been its marketing, which Mojavi simply lacked.

BTW: from the Symfony docs – “(…) He subsequently spent a year developing the symfony core, basing his work on the Mojavi Model-View-Controller (MVC) framework (…)”

Michal – Agavi is based on Mojavi as well. And what? Now it is part of a whole new framework. What is your point?

P.S. Paul, thank you for quick answer.

P.P.S. Speaking of Agavi: as you wrote, Rasmus Lerdorf did his own benchmarking, and Agavi is quite fast there. In the future you should include Agavi in your benchmark. You got my email; I will be glad to help you set up the Hello World app.

Oh, and Alan, Paul wrote “No matter what they may say about Symfony being “fast at its core”, it does not appear to be true”. That’s why I mentioned Mojavi in the first place. Just to answer your question.

I’m glad to see someone take the time to run these the way they were intended. Thanks.

Having said that, I don’t know Pluf, so I don’t know how fair a comparison between it and the other frameworks is. I keep CodeIgniter and Prado “off to the side” because they don’t approach things the same way as Cake, Solar, Symfony, and Zend. I guarantee there is something out there faster than Pluf is, even if it’s a homegrown project.

As I say elsewhere, there’s more to frameworks than speed alone, and one needs to use some level of professional judgment as to which ones are comparable. But at least this way we have a reliable method of comparing like-with-like.

Another use of this system is that you can keep copies of the various releases of a framework in the system, to see the relative performance tradeoffs between releases.

Basically, Pluf is a “port” of Django. I started it two years ago when I had to write a web application where one of the requirements was to use PHP. At that time I was coding a lot with Django, so I started a port (without the auto admin of Django). If you want to see what the code looks like for a Pluf-based application, take a look here:

I’d love to see developers of these frameworks participate in some sort of automated testing program so this doesn’t take so much time to do (i.e. somehow offload the work to them). This would not only give them the chance to optimize configuration settings, but allow the information to be updated as new releases are made.

I did a quick search on your site and skimmed the comments but failed to get an answer. I hope you don’t mind me asking, but what is it that puts CodeIgniter in a different class from the other frameworks? I can see why for Prado, but doesn’t CodeIgniter use the same MVC style as the others?

This is the second benchmark post that I’ve seen where CodeIgniter trumps every other PHP framework in numbers and I’m considering moving over (currently using Zend) depending on how much recoding and re-learning I’d have to do to get my previous projects working on it.

Something about running benchmarks on frameworks like these with a “Hello World” just feels wrong.

It feels sort of like comparing which sledgehammer is faster at tapping in a thumb tack. I mean, these frameworks do soooooo much more, and NOT testing those features... it’s kinda like, ‘what’s the point?’

At the end of the day, response time is going to be the least of your worries when dealing with the average web app. DB access is where it’s at, but as you said, that’s a completely different animal and more difficult to test accurately.

Tests like these are good, but I don’t think this should dictate what framework you use. You spend tons of time and money converting over to the “fastest” framework, then they release version 2.0 with new features and new bottlenecks and the process repeats itself.

I think the main benefit of these frameworks is significantly decreased development time. Also, proper coding on top of these frameworks is important. No matter how fast it is, someone can write some crappy code that makes it suck.

Hi Paul and Chin.
Nice test, Chin.
Your benchmark is a little different from the tests you point to in the preface.
So I have some questions:
Is Solar alpha2 really faster than CodeIgniter (both versions)? :):)
Why don’t you use cake_1.2.0.7692-rc3.zip? I know it’s not stable, but...
Why is there no test for Symfony? (For me especially, Symfony is the key framework.)
In my opinion ZF 1.7 is too slow in your tests (about 5 times).
Version 1.6.x in other tests was about 2.5 times slower... Do you agree?
