
February 14, 2018

Framework Benchmarks Round 15

As of 2018-03-13, Azure results for Round 15 have been posted. These were not available when Round 15 was originally published.

What better day than Valentine's Day to renew one's vow to create high-performance web applications? Respecting the time of your users is a sure way to earn their love and loyalty. And the perfect start is selecting high-performance platforms and frameworks.

Results from Round 15 of the Web Framework Benchmarks project are now available! Round 15 includes results from the physical hardware environment at ServerCentral and cloud results from Microsoft Azure.

We ❤️ Performance

High-performance software warms our hearts like a Super Bowl ad about water or an NBC Olympics athlete biography.

But really, who doesn't love fast software? No one wants to wait for computers. There are more important things to do in life than wait for a server to respond. For programmers, few things are as rewarding as seeing delighted users, and respecting users' time is a key element of achieving that happiness.

View Round 15 results

Among the many effects of this project, the one of which we are most proud is how it encourages platforms and frameworks to be fast—to elevate the high-water marks of performance potential. When frameworks and platforms lift their performance ceiling upward, application developers enjoy the freedom and peace of mind of knowing they control their applications' performance fate. Application developers can work rapidly or methodically; they can write a quick implementation or squeeze their algorithms to economize on milliseconds; they can choose to optimize early or later. This flexibility is made possible when the framework and platform aren't boxing out the application—preemptively consuming the performance pie—leaving only scraps for the application developer. High-performance frameworks take but a small slice and give the bulk of the pie to the application developer to do with as they please.

This Valentine's Day, respect yourself as a developer, own your application's performance destiny, and fall in love with a high-performance framework. Your users will love you back.

Love from the Community

Community contributions to the project continue to amaze us. As of Round 15, we have processed nearly 2,500 pull requests and the project has over 3,000 stars on GitHub. We are honored by the community's feedback and participation.

We are routinely delighted to see the project referenced elsewhere, such as a project that monitors TCP connections, which used our benchmarks to measure overhead, or the hundreds of GitHub issues discussing the project within other repositories. We love knowing others receive value from this project!

More Immediate Results for Contributors

When you are making contributions to this project, you want to see the results of your effort so you can measure and observe performance improvements, and you need log files when things don't go as expected. To help accelerate the process, we have made the output of our continuous benchmarking platform available as a results dashboard. Our hardware test environment is continuously running, so new results are available every few days (at this time, a full run takes approximately 90 hours). As each run completes, a raw results.json file will be posted, along with zipped log files and direct links to log files for frameworks that encountered significant testing errors. We hope this will streamline the process of troubleshooting contributions.
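
As a purely hypothetical sketch, here is how a contributor might rank frameworks from a downloaded results.json. The flat `{framework: {test: requests-per-second}}` layout below is an assumption made for illustration, not the project's actual schema:

```python
import json

# Hypothetical layout: {"framework": {"test_type": requests_per_second}}.
# The real results.json schema may differ; adjust the field access accordingly.
def summarize(path, test_type="plaintext"):
    with open(path) as f:
        results = json.load(f)
    # Rank frameworks by throughput for the chosen test, best first.
    ranked = sorted(
        ((name, tests.get(test_type, 0)) for name, tests in results.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, rps in ranked:
        print(f"{name:20s} {rps:>12,.0f} req/s")
    return ranked

# Example usage with an in-memory sample standing in for a downloaded file:
sample = {"netty": {"plaintext": 2_000_000}, "aspnetcore": {"plaintext": 1_822_366}}
with open("results.json", "w") as f:
    json.dump(sample, f)
ranked = summarize("results.json")
```

A real script would first inspect the schema of the posted results.json; the sorting and reporting logic would stay the same.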

We used run ed713ee9 from ServerCentral and run a1110174 from Azure.

In Progress

We are working to update the entire suite to Ubuntu 16.04 LTS and aim to migrate to Ubuntu 18.04 LTS soon after it's available. This update will allow us to keep up with several features in both hardware and cloud environments, such as Azure's Accelerated Networking. Watch the GitHub project for more updates as they arrive!

Thank You!

Thank you so much to all of the contributors! Check out Round 15 and if you are a contributor to the project or just keenly interested, keep an eye on continuous results.

Framework Benchmarks Round 14

Results from Round 14 of the Web Framework Benchmarks project are now available! This round's results are limited to the physical hardware environment only, but cloud results will be included again in the next round.

Recent improvements

Our efforts during Round 14 focused on improvements that help us manage the project, mostly by removing some of our manual work.

Continuous Benchmarking

When we are not running one-off tests or modifying the toolset, the dedicated physical hardware environment at ServerCentral is continuously running the full benchmark suite. We call this "Continuous Benchmarking."

View Round 14 results

As Round 14 was wrapping up, Continuous Benchmarking allowed us to deploy multiple preview rounds for community review more rapidly than in previous rounds.

Going forward, we expect Continuous Benchmarking to facilitate immediate progression into community-facing previews of Round 15. We hope to have the first Round 15 preview within a few days.

Paired with the continuous benchmarker is an internally-facing dashboard that shows us how things are progressing. We plan to eventually evolve this into an externally-facing interface for project contributors.


Contributors and the project's community have seen several renderings of the differences between Round 13 and Round 14; the final capture of those differences is one example. These renderings help us confirm changes that are planned or expected and also identify unexpected changes or volatility.

We have, in fact, observed volatility with a small number of frameworks and aim to investigate and address each as time permits. Although the benchmarking suite includes two phases of warmup prior to gathering data for each test, we might find that some frameworks or platforms require additional warmup time to be consistent across multiple measurements.


We added Facebook's mention-bot into the project's GitHub repository. This has helped keep past contributors in the loop if and when changes are made to their prior contributions. For example, if a contributor updates the Postgres JDBC driver for the full spectrum of JVM frameworks, the original contributors of those frameworks will be notified by mention-bot. This allows for widespread changes such as a driver update while simultaneously allowing each contributor to override changes according to their framework's best practices.

Previously, we had to either manually notify people or do a bit of testing on our own to determine if the update made sense. In practice, this often meant not bothering to update the driver, which isn't what we want. (Have you seen the big performance boost in the newer Postgres JDBC drivers?)

Community contributions

This project includes a large amount of community-contributed code. Community contributions are up recently and we believe that is thanks to mention-bot. We expect to pass the milestone of 2,000 Pull Requests processed within a week or two. That is amazing.

Thank you so much to all of the contributors! Check out Round 14 and then on to Round 15!

November 16, 2016

Framework Benchmarks Round 13

Round 13 of the ongoing Web Framework Benchmarks project is here! The project now features 230 framework implementations (of our JSON serialization test) and includes new entrants on platforms as diverse as Kotlin and Qt. Yes, that Qt. We also congratulate the ASP.NET team for the most dramatic performance improvement we've ever seen, making ASP.NET Core a top performer.

View Round 13 results

The large filters panel on our results web site is a testament to the ever-broadening spectrum of options for web developers. What a great time to be building web apps! A great diversity of frameworks means there are likely many options that provide high performance while meeting your language and productivity requirements.

Good fortunes

As the previous round—Round 12—was wrapping up, we were unfortunately rushed as the project’s physical hardware environment was being decommissioned. But good fortune was just around the corner, thanks to the lucky number 13!

New hardware and cloud environments

For Round 13, we have all new test environments, for both physical hardware and the virtualized public cloud.

Microsoft has provided the project with Azure credits, so starting with Round 13, the cloud environment is on Azure D3v2 instances. Previous rounds’ cloud tests were run on AWS.

Meanwhile, ServerCentral has provided the project a trio of physical servers in one of their development lab environments with 10 gigabit Ethernet. Starting with Round 13, the physical hardware environment is composed of a Dell R910 application server (4x 10-Core E7-4850 CPUs) and a Dell R420 database server (2x 4-Core E5-2406 CPUs).

We’d like to extend huge thanks to ServerCentral and Microsoft for generously supporting the project!

We recognize that as a result of these changes, Round 13 is not easy to directly compare to Round 12. Although changing the test environments was not intentional, it was necessary. We believe the results are still as valuable as ever. An upside of this environment diversity is visibility into the ways various frameworks and platforms work with the myriad variables of cores, clock speed, and virtualization technologies. For example, our new physical application server has twice as many HT cores as the previous environment, but the CPUs are older, so there is an interesting balance of higher concurrency but potentially lower throughput. In aggregate, the Round 13 results on physical hardware are generally lower due to the older CPUs, all else being equal.

Many fixes to long-broken tests

Along with the addition of new frameworks, Round 13 also marks a sizeable decrease in the number of existing framework tests that have failed to execute properly in previous rounds. This is largely the result of a considerable community effort over the past few months to identify and fix dozens of frameworks, some of which we haven’t been able to successfully test since 2014.

Continuous benchmarking

Round 13 is the first round conducted with what we’re calling Continuous Benchmarking. Continuous Benchmarking is the notion of setting up the test environment to automatically reset to a clean state, pull the latest from the source repository, prepare the environment, execute the test suite, deliver results, and repeat.

There are many benefits of Continuous Benchmarking. For example:

  • At any given time, we can grab the most recent results and mark them as a preview or final for an official Round. This should allow us to accelerate the delivery of Rounds.
  • With some additional work, we will be able to capture and share results as they are made available. This should give participants in the project much quicker insight into how their performance tuning efforts are playing out in our test environment. Think of it as continuous integration but for benchmark results. Our long-term goal is to provide a results viewer that plots performance results over time.
  • Any changes that break the test environment as a whole or a specific framework’s test implementation should be visible much earlier. Prior to Continuous Benchmarking, breaking changes were often not detected until a preview run.
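
The Continuous Benchmarking cycle described above can be sketched as a simple supervisor loop. The script names below are hypothetical stand-ins for illustration, not the project's actual tooling:

```python
import subprocess

# Hypothetical commands for one Continuous Benchmarking cycle; the project's
# real toolset entry points differ.
STEPS = [
    ["git", "reset", "--hard"],   # reset the checkout to a clean state
    ["git", "pull"],              # pull the latest from the source repository
    ["./setup-environment.sh"],   # prepare the environment
    ["./run-full-suite.sh"],      # execute the test suite (a full run takes days)
    ["./publish-results.sh"],     # deliver results.json and zipped logs
]

def plan_cycle():
    """Return the commands one cycle would execute, in order."""
    return [" ".join(step) for step in STEPS]

def run_forever():
    """Execute cycles back to back, starting each from a clean state."""
    while True:
        for step in STEPS:
            subprocess.run(step, check=True)

# Show the plan rather than actually running anything here.
for command in plan_cycle():
    print(command)
```

Because each cycle begins by resetting to a clean state, breaking changes surface on the very next run instead of lingering until a preview round.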

Microsoft’s ASP.NET Core

We consider ourselves very fortunate that our project has received the attention that it has from the web framework community. It has become a source of great pride for our team. Among every reaction and piece of feedback we’ve received, our very favorite kind is when a framework maintainer recognizes a performance deficiency highlighted by this project and then works to improve that performance. We love this because we think of it as a small way of improving performance of the whole web, and we are passionate about performance.

Round 13 is especially notable for us because we are honored that Microsoft has made it a priority to improve ASP.NET’s performance in these benchmarks, and in so doing, improve the performance of all applications built on ASP.NET.

Thanks to Microsoft’s herculean performance tuning effort, ASP.NET—in the new cross-platform friendly form of ASP.NET Core—is now a top performer in our Plaintext test, making it among the fastest platforms at the fundamentals of web request routing. The degree of improvement is absolutely astonishing, going from 2,120 requests per second on Mono in Round 11 to 1,822,366 requests per second on ASP.NET Core in Round 13. That’s approximately an 85,900% improvement, and it doesn’t even account for Round 11’s hardware being faster than our new hardware. That is not a typo: it is roughly 859 times as fast! We believe this to be the most significant performance improvement that this project has ever seen.
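
Those figures are easy to sanity-check with a little arithmetic, using the requests-per-second numbers quoted above:

```python
# Throughput figures quoted above (requests per second).
round11_rps = 2_120        # ASP.NET on Mono, Round 11
round13_rps = 1_822_366    # ASP.NET Core, Round 13

speedup = round13_rps / round11_rps                                # ~859.6x as fast
pct_improvement = (round13_rps - round11_rps) / round11_rps * 100  # ~85,861%

print(f"{speedup:.1f}x as fast ({pct_improvement:,.0f}% improvement)")
```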

By delivering cross-platform performance alongside their development toolset, Microsoft has made C# and ASP.NET one of the most interesting web development platforms available. We have a brief message to those developers who have avoided Microsoft’s web stack thinking it’s “slow” or that it’s for Windows only: ASP.NET Core is now wicked sick fast at the fundamentals and is improving in our other tests. Oh, and of course we’re running it on Linux. You may be thinking about the Microsoft of 10 years ago.

The best part, in our opinion, is that Microsoft is making performance a long-term priority. There is room to improve on our other more complex tests such as JSON serialization and Fortunes (which exercises database connectivity, data structures, encoding of unsafe text, and templating). Microsoft is taking on those challenges and will continue to improve the performance of its platform.

Our Plaintext test has historically been a playground for the ultra-fast Netty platform and several lesser-known/exotic platforms. (To be clear, there is nothing wrong with being exotic! We love them too!) Microsoft’s tuning work has brought a mainstream platform into the frontrunners. That achievement stands on its own. We congratulate the Microsoft .NET team for a massive performance improvement and for making ASP.NET Core a mainstream option that has the performance characteristics of an acutely-tuned fringe platform. It’s like an F1 car that anyone can drive. We should all be so lucky.