
April 22, 2013

Frameworks Round 3

We've previously posted two rounds of results of benchmarking many web application platforms and frameworks. The community's response remains strong! We have really enjoyed your comments, advice, questions, criticism, and pull requests. Speaking of pull requests, we received tests for several additional frameworks since Round 2 and we have posted Round 3.

Thanks to those contributions, the number of tests has grown to over 50. With that breadth we decided to move the project's results to a stand-alone site separate from this blog.

View the latest results from Round 3 now.

Round 3 notes and observations

  • Thanks to enhancements made to Wrk by its author, wg, the tests in Round 3 are time-limited rather than request-limited. In previous rounds, each test ran 100,000 requests, which meant execution time spanned from seconds to hours depending on the framework. With Round 3, all tests run for 1 minute each. Time-limited tests also make Wrk's latency statistics more accurate for high-performance frameworks.
  • The community contributed numerous framework tests, giving us coverage of several platforms we were missing in previous rounds. Round 3 includes Snap on Haskell; Elli and Cowboy on Erlang; OpenResty on Lua; Tornado on Python; Onion on C; Slim, CodeIgniter, Phreeze, Kohana, Lithium, Laravel, Silex, Fuel, and Symfony2 on PHP; Grizzly-Jersey and Play1 on Java; and Scalatra, Lift, Unfiltered, and Finagle on Scala.
  • The full results table is huge. We'll work to add better filtering controls for later rounds.
  • Although we can't say for certain how many rounds this project will see, we have no plans to stop. Round 4 is already planned. If we're missing your favorite frameworks, we would love to receive a pull request.
  • We have heard from a contributor who is working on a set of .NET/Mono tests, so we are optimistic that Round 4 will finally include .NET!
  • In this round, we've tested Go and WebGo using Go 1.1 at the community's strong recommendation. The JSON tests on i7 have improved dramatically, with Go at just slightly over 200,000 requests per second. However, while Go is among the leaders in the JSON test, its newly added database tests are among the worst. The Go community is helping diagnose the problem and we expect its database performance to be improved in Round 4 or soon thereafter.
  • In addition to Go, several other newly added frameworks now exceed 200,000 requests per second for the JSON test on i7: Finagle, Onion, and OpenResty join Round 2's leaders, Netty, Servlet, and Gemini.
  • In database tests, the newly added OpenResty test demonstrates extremely efficient database connection pooling and simple query execution. OpenResty takes the lead for multiple queries on EC2.
  • Play1 was added alongside its successor (Play 2) in Round 3. The Play1 implementation isn't yet able to complete the database tests without errors. Based on community dialogue we've observed, we anticipate this problem will be resolved for a later round.
  • In our tests, Slim and CodeIgniter appear to be the champions among the PHP frameworks. Symfony2 clocks in with lower performance than Cake.
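The shift to time-limited runs described in the notes above can be sketched in a few lines. This is a hypothetical illustration only, not Wrk's implementation (Wrk is written in C and drives many concurrent connections):

```python
import time

def run_timed(send_request, duration_s=60.0):
    """Issue requests until the clock expires (Round 3 style), rather than
    stopping after a fixed request count (Rounds 1 and 2 style)."""
    latencies_ms = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        send_request()
        latencies_ms.append((time.monotonic() - start) * 1000.0)
    return latencies_ms
```

Because every framework now runs for the same wall-clock time, faster frameworks contribute many more latency samples per run, which helps explain why the latency statistics become more reliable at the high end.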


A reader has posted this blog entry to Hacker News, and we invite you to comment there.


Very big thank-yous to all of the following who contributed to Round 3: Skamander (who has been especially generous with framework contributions), huntc, bitemyapp, sp1d3rx, keammo1, stevely, Licenser, torhve, Falmarri, kardianos, RaphaelJ, shenfeng, brendanhay, wsantos, greg-hellings, christkv, pakunoda, sidorares, PerfectCarl, tarndt, m20o, trautonen, jasonhinkle, gregwebs, and bakins.

Thanks also to everyone who has e-mailed us and participated in the Hacker News conversations.

About TechEmpower

We provide web and mobile application development services and are passionate about application performance. Read more about what we do.

April 5, 2013

Frameworks Round 2

Last week, we posted the results of benchmarking several web application development platforms and frameworks. The response was tremendous. We received comments, recommendations, advice, criticism, questions, and most importantly pull requests from dozens of readers and developers.

On Tuesday of this week, we kicked off a pair of EC2 instances and a pair of our i7 workstations to produce updated data. That is what we're sharing here today. We dive right in with the EC2 JSON test results, but please read to the end where we include important notes about what has changed since last week.

JSON serialization test

In this test, each HTTP response is a JSON serialization of a freshly-instantiated object, resulting in {"message" : "Hello, World!"}. First up is data from the EC2 m1.large instances.
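For illustration, a minimal handler for this test might look like the following WSGI sketch (a hypothetical example, not one of the benchmarked implementations); note that the object is instantiated and serialized anew on every request:

```python
import json

def app(environ, start_response):
    # Instantiate the object freshly on each request, then serialize it,
    # so no serialization work is cached between requests.
    body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```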

Repeating its performance from last week, the Netty platform holds a commanding lead for JSON serialization on EC2. Vert.x, which is built on Netty, retains second place. Third place is held by plain Java Servlets running on Caucho's Resin Servlet container.
In this round, we added latency data (available using the rightmost tab at the top of this panel). The latency data is captured at 256 concurrency. Plain Go delivers the lowest latency, with a remarkable 7.8 millisecond average and tight standard deviation on EC2.

Dedicated hardware

Here is the same test on our Sandy Bridge i7 hardware.

On our dedicated hardware, plain Servlets lead with over 220,000 requests per second. Tapestry sees a marked improvement versus last week, in part thanks to a pull request that updated our test.
In this week's tests, we have added latency data (available using the rightmost tab at the top of this panel). On i7, we see that several frameworks are able to provide a response in under 10 milliseconds. Only Cake PHP requires more than 100 milliseconds.

Database access test (single query)

How many requests can be handled per second if each request is fetching a random record from a data store? Starting again with EC2.
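The shape of this test can be sketched in Python. The World table with a randomNumber column is illustrative here, and sqlite3 stands in for the MySQL server used in the real tests so the sketch is self-contained:

```python
import json
import random
import sqlite3

# sqlite3 stands in for the MySQL server used in the real tests; the
# World/randomNumber schema here is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE World (id INTEGER PRIMARY KEY, randomNumber INTEGER)")
conn.executemany("INSERT INTO World VALUES (?, ?)",
                 [(i, random.randint(1, 10000)) for i in range(1, 10001)])

def single_query(conn):
    # Pick a random row id and fetch that one row: one round trip per request.
    row_id = random.randint(1, 10000)
    row = conn.execute("SELECT id, randomNumber FROM World WHERE id = ?",
                       (row_id,)).fetchone()
    return json.dumps({"id": row[0], "randomNumber": row[1]})
```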

We received pull requests that have improved the performance of several frameworks in this database access test. Wicket and Spring have seen notable improvements. Plain Servlets are paired with the standard connection pool provided by MySQL, and Gemini uses its built-in connection pool and lightweight ORM.
Other minor improvements versus last week may be attributed to our use of Wrk as the test tool in this round versus the first round's use of weighttp.

Dedicated hardware

The dedicated hardware processes nearly 100,000 requests per second with one query per request. JVM frameworks are especially strong here thanks to JDBC and efficient connection pools.
In the latency data (rightmost tab at the top of this panel), it is not surprising that processing a query requires more work than the JSON test. However, several frameworks are still capable of providing a database-sourced response in less than 20 milliseconds. Sinatra on JRuby struggles dramatically in this test, with an alarming average of 583 milliseconds. Meanwhile, Django has the widest standard deviation, probably in large part because Django does not provide a MySQL connection pool (Postgres tests are planned).
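The pooling point is worth making concrete: a connection pool lets each request borrow an already-open connection instead of paying connection setup on every request. The following toy pool (an illustration, not any framework's actual pool) shows the idea:

```python
import queue

class ConnectionPool:
    """Toy fixed-size pool: open all connections up front, then hand them
    out and take them back, so requests skip per-request connection setup."""

    def __init__(self, size, factory):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self):
        # Blocks if every connection is currently checked out.
        return self._idle.get()

    def release(self, conn):
        self._idle.put(conn)
```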

Database access test (multiple queries)

The following tests are all run at 256 concurrency and vary the number of database queries per request. The tests are 1, 5, 10, 15, and 20 queries per request.
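Sketching the multiple-query test: each request performs N independent fetches, one round trip per fetch (the benchmark does not combine them into a single batched query). Here fetch_world is a hypothetical stand-in for a single-row fetch:

```python
import json
import random

def multi_query(fetch_world, query_count):
    # One independent round trip per query; they are deliberately not
    # combined into a single batched query.
    worlds = [fetch_world(random.randint(1, 10000))
              for _ in range(query_count)]
    return json.dumps(worlds)
```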

Looking at the 20-queries bar chart, roughly the same ranked order we've seen elsewhere is still in play, demonstrating the headroom afforded by higher-performance frameworks.
The latency data (available using the rightmost tab at the top of this panel) shows that ten frameworks, predominantly running on the JVM, are capable of executing twenty queries per request on EC2 in under 1 second on average. As before, Raw PHP is also very strong in this test. The Flask and Django results are impacted heavily by the lack of a connection pool. Later rounds will either test on Postgres or use a third-party MySQL connection pool.

Dedicated hardware

The dedicated hardware produces numbers nearly ten times greater than EC2 with the punishing 20 queries per request. Again, Raw PHP makes an extremely strong showing. PHP with an ORM and Cake improved dramatically from last week's test thanks to configuration changes recommended by the community.
An impressive demonstration of modern hardware and networks: seven frameworks are able to provide a response containing 20 individually fetched database rows (that's twenty round-trip conversations with a database server, no matter how you slice it) in less than 100 milliseconds on average.

New & Improved: Now with latency!

On the advice of readers, this round of data was collected using Wrk. In the first round from last week, we used weighttp. This change accounts for the very slight increase in rps seen in several frameworks, including those that saw no change to their benchmark or library code. Our conjecture is that Wrk is just slightly quicker at processing requests.

We didn't switch tools to improve the rps numbers, though. Some readers wanted to see data points that WeigHTTP wasn't providing us. Wrk gives latency data including average, standard deviation, and maximum. For example:

  Making 100000 requests to
    8 threads and 256 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency    10.07ms    7.80ms  73.59ms   77.37%
      Req/Sec     2.99k     1.07k    8.00k    88.42%
    100002 requests in 3.68s, 59.89MB read
  Requests/sec: 27202.70
  Transfer/sec: 16.29MB

The latency information is now available in the results panels above (the rightmost tab in each panel).
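The headline latency figures Wrk reports can be reproduced from raw per-request samples with basic statistics. A sketch (using the population standard deviation; Wrk's exact estimator may differ):

```python
import math

def latency_stats(samples_ms):
    # Average, standard deviation, and maximum of per-request latencies:
    # the three figures quoted in Wrk's latency output.
    n = len(samples_ms)
    avg = sum(samples_ms) / n
    stdev = math.sqrt(sum((s - avg) ** 2 for s in samples_ms) / n)
    return avg, stdev, max(samples_ms)
```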

The raw Wrk output from the latest run is in the GitHub repository.

Additional “stripped” tests

We received community contributions for Rails and Django that removed unused "middleware" components to fine-tune the configuration of these two frameworks to the particular use-case of these benchmarks. We've accepted these contributions but identified them as Django Stripped and Rails Stripped.

We have also retained the original Django and Rails tests (with some other tweaks).

To reiterate the intent of this benchmark exercise: we want to identify the high-water mark of performance one can expect from each framework for real-world applications. Real-world applications will do much more than serialize "Hello, World" messages and fetch random rows from a simple database table, but we use these simple tests as stand-ins for an application. For that reason, we intentionally did not turn off features that are enabled by default (such as support for HTTP sessions) in our first-round tests.

Still, there is value in demonstrating the degree of increased performance that can be realized by fine-tuning a framework to your application's specific needs. Don't need sessions? What kind of savings can you expect if you turn session support off?

We are not yet certain how best to differentiate tests that exercise the framework mostly as provided versus those that fine-tune the configuration for the particular use-case of these benchmarks. For now, we use the "stripped" name suffix.

Revised Environment Details

  • Two Intel Sandy Bridge Core i7-2600K workstations with 8 GB memory each (early 2011 vintage) for the i7 tests
  • Two Amazon EC2 m1.large instances for the EC2 tests
  • Switched gigabit Ethernet
  • Load simulator: Wrk
  • Operating system
  • Web servers
  • Java / JVM

We are grateful to have received GitHub pull requests and comments from dozens of users: Licenser, th0br0, davidmoreno, Skamander, jasonhinkle, pk11, vsg, knappador, RaphaelJ, chrisvest, dominikgrygiel, jpiasetz, mliberty, nraychaudhuri, bjornstar, shenfeng, bitemyapp, jmgao, larkin, ryantenney, normanmaurer, hlship, burtbeckwith, sashahart, abevoelker, tarndt, skelterjohn, myfreeweb, gleber, sidorares, philsturgeon, patoi, dcousineau, asadkn, BeCreative-Germany, rrevi, goshakkk, tarekziade, julienrf, mitsuhiko, jerem, huntc, alexbilbie, AlReece45, jameswyse, CHH, hassankhan, Nazariy, and onigoetz. A big thank you to all of you!

We have indicated any frameworks that received community review or for which the tests were wholly contributed by the community with a flag after their name in the results tables. For example: play-scala.
