
November 23, 2015

Framework Benchmarks Round 11

Round 11 of the ongoing Web Framework Benchmarks project is now available! We'll keep this blog entry short and sweet.

The highlights for Round 11

  1. Three new languages are represented in Round 11: Crystal, D, and Rust.
  2. The total number of frameworks has increased by 26.
  3. The new frameworks are:
    • silicon (C++)
    • aleph (Clojure)
    • pedestal (Clojure)
    • crystal-raw (Crystal)
    • moonshine (Crystal)
    • vibed (D)
    • jawn (Java)
    • mangooio (Java)
    • rapidoid (Java)
    • koa (JavaScript)
    • sails (JavaScript)
    • clancats (PHP)
    • limonade (PHP)
    • asyncio (Python)
    • cherrypy (Python)
    • webware (Python)
    • klein (Python)
    • turbogears (Python)
    • web2py (Python)
    • wheezyweb (Python)
    • hyper (Rust)
    • iron (Rust)
    • nickel (Rust)
    • akka-http (Scala)
    • colossus (Scala)
    • finch (Scala)
    • http4s (Scala)
  4. More than 150 pull requests merged since Round 10.
  5. Fixed many tests that were broken in Round 10 (e.g., many Lua tests and several JavaScript tests).
  6. Fixed a few issues causing inter-framework conflict due to processes not properly closing and releasing TCP ports.
  7. A new dependency tool helps organize, unify, and simplify required software (e.g., common prerequisites such as python2 are now loaded and reused automatically, without contributors needing to manage them).
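The port-release problem in item 6 above is a classic TCP pitfall: a benchmark process that exits without its sockets being fully torn down can leave a port lingering in TIME_WAIT, so the next framework's server fails to bind. A minimal illustration of the usual mitigation, SO_REUSEADDR, using Python's standard socket module (a hypothetical helper for illustration, not code from the toolset):

```python
import socket

def make_listener(port: int) -> socket.socket:
    """Create a TCP listening socket that can rebind a port still in TIME_WAIT."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Without SO_REUSEADDR, bind() can fail with EADDRINUSE if a previous
    # process left connections on this port in the TIME_WAIT state.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

listener = make_listener(0)  # port 0: let the OS pick a free ephemeral port
print(listener.getsockname()[1])
listener.close()
```

Explicitly closing listeners on shutdown, as sketched here, is the other half of the fix: it returns the port to the OS promptly instead of relying on process teardown.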

View the full results of Round 11.


As always, we thank all contributors to the project. 1,112 total closed pull requests in the project's lifetime paints a vibrant picture of community involvement. Thank you!

By Jeffrey Papen, CEO and Founder, Peak Hosting

At Peak Hosting, we're big fans of TechEmpower's Framework Benchmarks, an open source project the company has been coordinating since early 2013. Covering a wide variety of web application frameworks, this project gives developers useful data that can help them find the framework that will provide the performance and features they need for their application.

TechEmpower's benchmarking now includes six test types, more than 120 frameworks, 290 test permutations, and results that include latency and framework overhead.

Hardware comes into play when performance is important. And TechEmpower will tell you performance is always important. The best results were derived from a real-world environment running physical hardware. As a managed hosting provider, we were able to provide the project with the same types of machines that our customers use to run their production environments.

We first contributed to TechEmpower's Framework Benchmarks in Round 9, when we set up five dedicated Dell R720 dual-Xeon E5 servers with 10 Gigabit Ethernet for the project, running in our data centers. High-end hardware directly correlates with high performance, and the results from Round 9 to Round 10 bear this out. According to TechEmpower's Round 10 blog post:

Competition for the top position in the JSON-serialization test within the Peak Hosting environment has heated up so much that Round 10 sees a more than 100% increase in the top performance versus Round 9 (2.2M versus 1.05M). A year ago, TechEmpower showed that one million HTTP responses per second without load balancing was easy. We're delighted that 1M is already old news.
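For readers unfamiliar with the test type referenced in that quote: the JSON-serialization test asks each framework to serialize a small message object to JSON on every request. A minimal sketch as a bare WSGI callable (illustrative only, assuming nothing beyond the Python standard library; not a contributed implementation or the official test specification):

```python
import json

def app(environ, start_response):
    # Serialize a small object per request -- the essence of the
    # JSON-serialization test type.
    body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Because the work per request is so small, this test type mostly measures raw request-handling overhead, which is why it produces the headline responses-per-second numbers quoted above.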

So what does hardware have to do with this impressive round-over-round improvement? We didn't change the hardware between Rounds 9 and 10. What did change was that between rounds, test implementation contributors realized they had hardware available to them with 40 hyperthreading cores, and they were able to optimize their code to take advantage of that performance and capacity. A bit of tweaking for high-end hardware was all that was needed to utterly smash the previous round's leaderboard.
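Taking advantage of 40 hardware threads typically means running one worker process per core. A hedged sketch of that pre-fork pattern using Python's standard library (illustrative of the general idea only; actual contributed servers use their frameworks' own worker mechanisms):

```python
import os
from multiprocessing import Process

def worker(worker_id: int) -> None:
    # In a real server each worker would accept connections on a shared
    # listening socket; here it just reports that it started.
    print(f"worker {worker_id} running in pid {os.getpid()}")

def spawn_workers() -> list:
    # One worker per hardware thread -- on the dual-Xeon E5 machines
    # described above, os.cpu_count() would report 40.
    count = os.cpu_count() or 1
    procs = [Process(target=worker, args=(i,)) for i in range(count)]
    for p in procs:
        p.start()
    return procs

if __name__ == "__main__":
    for p in spawn_workers():
        p.join()
```

The point of the pattern is that a single-process server leaves most of a 40-thread machine idle; scaling worker count with the core count is the "bit of tweaking" that multiplies throughput on high-end hardware.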

We're pleased that we're able to provide TechEmpower and the open source community with this hardware environment—and that it's the same type of hardware our customers use every day in production environments, making the results as valuable as possible. And we will eagerly await the results of Round 11 where we anticipate more significant performance leaps!

Round 10 of the Framework Benchmarks project is now available! It has been a little less than a year since the previous round, and in that time approximately 133 contributors have made 2,835 git commits. These contributions have improved the project's toolset and added many new framework test implementations.

We retired our in-house i7-2600K hardware environment for Round 10, and we changed our Amazon EC2 environment to c3.large instances. Meanwhile, the Peak R720 dual-Xeon E5 environment with 10 Gigabit Ethernet is now the default view for the results rendering.

Much of the effort in the past year has been focused on improving the toolset, allowing contributors to create their own test and development environment with less effort and to optionally focus on just the frameworks or platforms of interest to them. Between Round 9 and Round 10, we saw an average of 7 commits per day.

Round     Frameworks  Framework permutations
Round 9   ~105        205 configurations
Round 10  ~125        293 configurations

View Round 10 results now.

Round 10 notes and observations

  • Competition for the top position in the JSON-serialization test within the Peak environment has heated up so much that Round 10 sees a more than 100% increase in the top performance versus Round 9 (2.2M versus 1.05M). For Round 10, Lwan has taken the crown. But we expect the other top contenders won't leave this a settled matter. A year ago, we said one million HTTP responses per second without load balancing was easy. We're delighted that 1M is already old news.
  • Compiled languages such as C, C++, Java, Scala, and Ur continue to dominate most tests, and Lua retains its unique position of standard-bearer for JIT languages by showing up within the top 10 on many test types.
  • While Go has, if anything, slightly improved since Round 9, the increased competition means Go is not in the top-ten leaderboard within the Peak environment. Go remains a strong performer in the smaller-server scenario as demonstrated by our EC2 c3.large environment.
  • During our preview cycles on Round 10, we elected, for the time being at least, to remove SQLite tests. SQLite tests miss the spirit of the database tests by avoiding network communication to a secondary server (a database server), making them a bit similar to our future caching-enabled test type. The SQLite tests may return once we have the caching test type specified and implemented.
  • The 2,835 git commits since Round 9 average out to 7 commits per day. The contributors to this project have been keeping very busy! Since Round 9, 675 issues were opened and 511 were closed. Additionally, 441 pull requests were created and 321 were merged, which is roughly one PR merged per day.
  • The project is now Vagrant-compatible to ease environment setup.
  • Travis CI integration allows contributors to get a "green light" on pull requests shortly after submission. The massive breadth and test coverage represented by this project has created an inordinate load on the servers Travis provides for free use by the open source community. Going forward, we are working with Travis to more intelligently narrow our workload based on the particulars of each PR. A great big thanks to Travis for being so tolerant of the crushing load we've created.
  • If you would like to contribute to the project, we've migrated documentation to ReadTheDocs.
  • Windows support has again fallen behind. We have received a great deal of Windows help in the past, but we don't have the internal capacity to keep it current with the evolution of the project. Round 10 does not include Windows tests, but we'd very much welcome any help catching Windows up for the next round.
  • For a bit of novelty, we are presently testing our benchmarks on a Raspberry Pi 2 Model B environment. If the results are interesting, we may include this environment in Round 11.
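The Travis narrowing mentioned above amounts to mapping a PR's changed files to the framework directories they touch, and running only those tests. A simplified sketch (a hypothetical helper assuming paths of the form `frameworks/<Language>/<framework>/...`; the real toolset's logic differs):

```python
def affected_frameworks(changed_paths):
    """Map changed file paths to the set of framework directories they touch.

    Returns None when a change falls outside the frameworks tree (e.g. a
    toolset change), signaling that the full test suite should run.
    """
    frameworks = set()
    for path in changed_paths:
        parts = path.split("/")
        if len(parts) >= 3 and parts[0] == "frameworks":
            frameworks.add(parts[2])
        else:
            return None  # shared/toolset file changed: run everything
    return frameworks

# Example: a PR touching only two framework directories
print(sorted(affected_frameworks([
    "frameworks/Python/flask/app.py",
    "frameworks/Scala/http4s/build.sbt",
])))  # -> ['flask', 'http4s']
```

Even this crude filter would let a typical single-framework PR run a handful of tests instead of the full 290-permutation suite.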

Change visualization

Hamilton Turner created a Gource video that illustrates the changes between Round 9 and Round 10.


A huge thank-you to Hamilton Turner, whose contributions to Round 10 are legion. He even referenced the project in his Ph.D. thesis!

A continued and special thanks to Peak Hosting for providing the dedicated hardware server environment we're using for our project. In a world that seems all too content to consider physical hardware as exceptional, we're living a life of multi-Xeon luxury. It is so choice; if you have the means, I highly recommend picking some up.

As always, we also want to thank everyone who has contributed to the project. We are at 572 forks on GitHub and counting. Considering we have only recently put serious effort into making the project approachable for contributors, we're super impressed by this number.

If you have questions, comments, criticism, or would like to contribute a new test or an improvement to an existing one, please join our Google Group, visit the project at GitHub, or come chat with us in #techempower-fwbm on Freenode.

About TechEmpower

We provide web and mobile application development services and are passionate about application performance. If this sounds interesting to you, we'd love to hear from you.