
November 16, 2016

Framework Benchmarks Round 13

Round 13 of the ongoing Web Framework Benchmarks project is here! The project now features 230 framework implementations (of our JSON serialization test) and includes new entrants on platforms as diverse as Kotlin and Qt. Yes, that Qt. We also congratulate the ASP.NET team for the most dramatic performance improvement we've ever seen, making ASP.NET Core a top performer.

View Round 13 results

The large filters panel on our results web site is a testament to the ever-broadening spectrum of options for web developers. What a great time to be building web apps! A great diversity of frameworks means there are likely many options that provide high performance while meeting your language and productivity requirements.

Good fortunes

As the previous round—Round 12—was wrapping up, we were unfortunately rushed as the project’s physical hardware environment was being decommissioned. But good fortune was just around the corner, thanks to the lucky number 13!

New hardware and cloud environments

For Round 13, we have all new test environments, for both physical hardware and the virtualized public cloud.

Microsoft has provided the project with Azure credits, so starting with Round 13, the cloud environment is on Azure D3v2 instances. Previous rounds’ cloud tests were run on AWS.

Meanwhile, ServerCentral has provided the project a trio of physical servers in one of their development lab environments with 10 gigabit Ethernet. Starting with Round 13, the physical hardware environment is composed of a Dell R910 application server (4x 10-Core E7-4850 CPUs) and a Dell R420 database server (2x 4-Core E5-2406 CPUs).

We’d like to extend huge thanks to ServerCentral and Microsoft for generously supporting the project!

We recognize that as a result of these changes, Round 13 is not easy to directly compare to Round 12. Although changing the test environments was not intentional, it was necessary. We believe the results are still as valuable as ever. An upside of this environment diversity is visibility into the ways various frameworks and platforms work with the myriad variables of cores, clock speed, and virtualization technologies. For example, our new physical application server has twice as many HT cores as the previous environment, but the CPUs are older, so there is an interesting balance of higher concurrency but potentially lower throughput. In aggregate, the Round 13 results on physical hardware are generally lower due to the older CPUs, all else being equal.

Many fixes to long-broken tests

Along with the addition of new frameworks, Round 13 also marks a sizeable decrease in the number of existing framework tests that have failed to execute properly in previous rounds. This is largely the result of a considerable community effort over the past few months to identify and fix dozens of frameworks, some of which we haven’t been able to successfully test since 2014.

Continuous benchmarking

Round 13 is the first round conducted with what we’re calling Continuous Benchmarking. Continuous Benchmarking is the notion of setting up the test environment to automatically reset to a clean state, pull the latest from the source repository, prepare the environment, execute the test suite, deliver results, and repeat.
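
The cycle described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual tooling: the working directory and the `run_tests.sh`/`upload_results.sh` entry points are hypothetical placeholders (only the repository URL is real).

```python
import subprocess
import time

REPO = "https://github.com/TechEmpower/FrameworkBenchmarks.git"
WORKDIR = "benchmarks"  # illustrative directory name

def run(*cmd, cwd=None):
    """Run a command, raising CalledProcessError if it fails."""
    subprocess.run(cmd, cwd=cwd, check=True)

def benchmark_cycle():
    # Reset to a clean state: discard the previous checkout entirely.
    run("rm", "-rf", WORKDIR)
    # Pull the latest from the source repository.
    run("git", "clone", "--depth", "1", REPO, WORKDIR)
    # Prepare the environment and execute the test suite
    # ("run_tests.sh" is a placeholder for the real entry point).
    run("./run_tests.sh", cwd=WORKDIR)
    # Deliver results (placeholder upload step).
    run("./upload_results.sh", cwd=WORKDIR)

# ...and repeat:
#   while True:
#       benchmark_cycle()
#       time.sleep(60)
```

The key property is that every cycle starts from a clean checkout, so a run can never be contaminated by state left behind by the previous one.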

There are many benefits of Continuous Benchmarking. For example:

  • At any given time, we can grab the most recent results and mark them as a preview or final for an official Round. This should allow us to accelerate the delivery of Rounds.
  • With some additional work, we will be able to capture and share results as they are made available. This should give participants in the project much quicker insight into how their performance tuning efforts are playing out in our test environment. Think of it as continuous integration but for benchmark results. Our long-term goal is to provide a results viewer that plots performance results over time.
  • Any changes that break the test environment as a whole or a specific framework’s test implementation should be visible much earlier. Prior to Continuous Benchmarking, breaking changes were often not detected until a preview run.

Microsoft’s ASP.NET Core

We consider ourselves very fortunate that our project has received the attention it has from the web framework community. It has become a source of great pride for our team. Of all the reactions and feedback we've received, our very favorite kind is when a framework maintainer recognizes a performance deficiency highlighted by this project and then works to improve it. We love this because we think of it as a small way of improving the performance of the whole web, and we are passionate about performance.

Round 13 is especially notable for us because we are honored that Microsoft has made it a priority to improve ASP.NET’s performance in these benchmarks, and in so doing, improve the performance of all applications built on ASP.NET.

Thanks to Microsoft’s herculean performance tuning effort, ASP.NET—in the new cross-platform friendly form of ASP.NET Core—is now a top performer in our Plaintext test, making it among the fastest platforms at the fundamentals of web request routing. The degree of improvement is absolutely astonishing, going from 2,120 requests per second on Mono in Round 11 to 1,822,366 requests per second on ASP.NET Core in Round 13. That’s an approximately 85,900% improvement, and that doesn’t even account for Round 11’s hardware being faster than our new hardware. That is not a typo, it's 859 times faster! We believe this to be the most significant performance improvement that this project has ever seen.
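
The arithmetic behind that claim is easy to check from the two published numbers:

```python
round11_rps = 2_120        # Mono, Round 11
round13_rps = 1_822_366    # ASP.NET Core, Round 13

speedup = round13_rps / round11_rps    # ~859.6x
pct_improvement = (speedup - 1) * 100  # ~85,860%

print(f"{speedup:.1f}x faster, {pct_improvement:,.0f}% improvement")
```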

By delivering cross-platform performance alongside their development toolset, Microsoft has made C# and ASP.NET one of the most interesting web development platforms available. We have a brief message to those developers who have avoided Microsoft’s web stack thinking it’s “slow” or that it’s for Windows only: ASP.NET Core is now wicked sick fast at the fundamentals and is improving in our other tests. Oh, and of course we’re running it on Linux. You may be thinking about the Microsoft of 10 years ago.

The best part, in our opinion, is that Microsoft is making performance a long-term priority. There is room to improve on our other more complex tests such as JSON serialization and Fortunes (which exercises database connectivity, data structures, encoding of unsafe text, and templating). Microsoft is taking on those challenges and will continue to improve the performance of its platform.
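
The "encoding of unsafe text" requirement in Fortunes exists because the test data deliberately includes a fortune containing markup; an implementation that skips HTML escaping will pass it through to the browser. A minimal illustration (the message text here is paraphrased, not the exact test fortune):

```python
import html

# A fortune that deliberately contains markup which must not execute.
unsafe_fortune = '<script>alert("XSS");</script> A fortune with markup.'

# Escaping turns the markup into inert text before templating renders it.
escaped = html.escape(unsafe_fortune)
print(escaped)
```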

Our Plaintext test has historically been a playground for the ultra-fast Netty platform and several lesser-known/exotic platforms. (To be clear, there is nothing wrong with being exotic! We love them too!) Microsoft’s tuning work has brought a mainstream platform into the frontrunners. That achievement stands on its own. We congratulate the Microsoft .NET team for a massive performance improvement and for making ASP.NET Core a mainstream option that has the performance characteristics of an acutely-tuned fringe platform. It’s like an F1 car that anyone can drive. We should all be so lucky.

February 25, 2016

Framework Benchmarks Round 12

Round 12 of the ongoing Web Framework Benchmarks project is now available!

A race against the clock

Recently, we were notified that the physical hardware environment we have used for Rounds 9 through 12 will be decommissioned imminently. This news made Round 12 unusual: rather than wait until we can equip and configure a new environment, we decided to conclude Round 12 while the current environment remained available.

View Round 12 results

As a result, no previews of Round 12 were made available to the participants in the project. Pull requests that we would normally expect to see after a preview cycle will need to be processed for Round 12. So bear in mind that participants were not able to sanity-check the Round 12 results and submit fixes.

Furthermore, due to the modestly rushed nature (at least on our side) of Round 12, we elected to not capture Amazon EC2 results for this Round. The only data available for Round 12 is from the Peak dual Xeon E5 servers.

View the full results of Round 12.

We are now working to find and set up a new hardware environment for Rounds 13 and beyond.

Notable changes to Clojure tests

@yogthos noticed (in issue #1894) that the Compojure and http-kit test implementations were using def (which evaluates its value once, when the namespace loads) instead of defn (which defines a function evaluated on every call) for the JSON, single query, and fortunes tests. While the impact on the JSON test was likely minimal, this had a significant impact on the single query and fortunes tests because these implementations were not actually running a query for every request as expected. This change was unintentionally made in the Compojure test by TechEmpower staff, and then later copied to http-kit to keep the implementations in sync. We have corrected this error in Round 12.
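
The bug generalizes to any language: a value computed once at load time versus a function invoked per request. A hypothetical Python analogue (the handler and query names are illustrative, not from the test suite):

```python
import itertools

_counter = itertools.count()

def run_query():
    """Stand-in for a database query; returns a new row each call."""
    return {"id": next(_counter)}

# Like Clojure's `def`: evaluated once, when the module loads.
# Every "request" then sees the same cached result -- the bug.
single_query_def = run_query()

# Like Clojure's `defn`: a function, evaluated on every call.
def single_query_defn():
    return run_query()

def handle_request_buggy():
    return single_query_def      # same dict on every request

def handle_request_fixed():
    return single_query_defn()   # fresh query on every request
```

Under load, the buggy version benchmarks artificially fast because it never touches the database after startup, which is exactly why the single query and fortunes numbers were inflated.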

Other notable changes

  1. The plain PHP, Slim, and Laravel tests have been upgraded to PHP 7. For example, Slim's performance in the JSON test and Laravel's performance in the Fortunes test both approximately doubled versus Round 11 with PHP 5.
  2. All JVM-hosted tests have been upgraded to Java 8.
  3. Several new frameworks were added.


As always, we thank all contributors to the project, especially in light of the rush to get Round 12 concluded!

November 23, 2015

Framework Benchmarks Round 11

Round 11 of the ongoing Web Framework Benchmarks project is now available! We'll keep this blog entry short and sweet.

The highlights for Round 11

View Round 11 results
  1. Three new languages are represented in Round 11: Crystal, D, and Rust.
  2. Meanwhile, the total number of frameworks has increased by 26.
  3. The new frameworks are:
    • silicon (C++)
    • aleph (clojure)
    • pedestal (clojure)
    • crystal-raw (crystal)
    • moonshine (crystal)
    • vibed (d)
    • jawn (java)
    • mangooio (java)
    • rapidoid (java)
    • koa (js)
    • sails (js)
    • clancats (php)
    • limonade (php)
    • asyncio (python)
    • cherrypy (python)
    • webware (python)
    • klein (python)
    • turbogears (python)
    • web2py (python)
    • wheezyweb (python)
    • hyper (rust)
    • iron (rust)
    • nickel (rust)
    • akka-http (scala)
    • colossus (scala)
    • finch (scala)
    • http4s (scala)
  4. More than 150 pull requests merged since Round 10.
  5. Fixed many tests that were broken in Round 10 (e.g., many Lua tests and several JavaScript tests)
  6. Fixed a few issues causing inter-framework conflict due to processes not properly closing and releasing TCP ports.
  7. New dependency tool that helps organize, unify, and simplify the software each test requires (e.g., common software such as python2 is now loaded and reused automatically, without contributors needing to manage it themselves)

View the full results of Round 11.


As always, we thank all contributors to the project. 1,112 total closed pull requests in the project's lifetime paints a vibrant picture of community involvement. Thank you!