We have retired the hardware environment provided by ServerCentral for our Web Framework Benchmarks project. We sincerely thank ServerCentral for providing servers from their lab environment to our project.

Their contribution allowed us to continue testing on physical hardware with 10-gigabit Ethernet, which gives the highest-performing frameworks an opportunity to shine. We were particularly impressed by ServerCentral's customer service and technical support, which were responsive and helpful in troubleshooting configuration issues even though we were using their servers free of charge. (And since the advent of our Continuous Benchmarking, we were essentially using the servers at full load around the clock.)

Thank you, ServerCentral!

New hardware for Round 16 and beyond

For Round 16 and beyond, we are happy to announce that Microsoft has provided three Dell R440 servers and a Cisco 10-gigabit switch. The three servers are homogeneous, each configured with an Intel Xeon Gold 5120 CPU (14 cores, 28 threads; 2.2 GHz base, 3.2 GHz max turbo), 32 GB of memory, and an enterprise SSD.

If your contributed framework or platform performs best with hand-tuning based on the number of available cores, please send us a pull request to adjust the necessary parameters.
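For frameworks that size worker pools from the detected processor count, here is a minimal sketch of the idea (illustrative only; the one-worker-per-logical-processor heuristic is our assumption, not project policy):

```python
import os

# os.cpu_count() reports logical processors; on Citrine's Xeon Gold
# 5120 that is 28 (14 physical cores with Hyper-Threading enabled).
logical_cpus = os.cpu_count() or 1

# A common heuristic: one worker per logical processor. Frameworks
# differ; hand-tune and send a pull request with what works best.
workers = logical_cpus
print(f"configuring {workers} workers")
```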

Together, these servers compose a hardware environment we've named "Citrine," whose results are visible on the TFB Results Dashboard. Initial results are impressive, to say the least.

Adopting Docker for Round 16

Concurrent with the change in hardware, we are hard at work converting all test implementations and the test suite to use Docker. There are several upsides to this change, the most important being better isolation. Our past home-brew mechanisms for cleaning up after each framework were, at times, akin to whack-a-mole as we encountered new and fascinating ways in which software can refuse to stop after being subjected to severe load.

Docker will be used uniformly—across all test implementations—so any performance impact applies to all platforms and frameworks equally. Our measurements indicate a trivial impact versus bare metal: less than 1%.
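To illustrate why containers make cleanup reliable, here is a minimal sketch of how a harness might run one test implementation in a container and tear it down afterward (an illustration under stated assumptions, not the project's actual toolset code; the image name and `run_load` callable are hypothetical):

```python
import subprocess

def run_containerized_test(image, run_load, port=8080):
    """Start one framework's container, apply load, then tear it down.

    `image` and `run_load` are hypothetical stand-ins for a
    per-framework Docker image and a load-generation callable.
    """
    # Start the container detached; `docker run -d` prints the new
    # container's ID on stdout.
    result = subprocess.run(
        ["docker", "run", "-d", "-p", f"{port}:{port}", image],
        check=True, capture_output=True, text=True,
    )
    container_id = result.stdout.strip()
    try:
        run_load(port)
    finally:
        # Stopping the container stops every process inside it, so a
        # framework cannot linger after its test the way bare-metal
        # processes sometimes did.
        subprocess.run(["docker", "stop", container_id], check=True)
```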

As you might imagine, the level of effort to convert all test implementations to Docker is not small. We are making steady progress. But we would gladly accept contributions from the community. If you would like to participate in the effort, please see GitHub issue #3296.

February 14, 2018

Framework Benchmarks Round 15

As of 2018-03-13, Azure results for Round 15 have been posted. These were not available when Round 15 was originally published.

What better day than Valentine's Day to renew one's vow to create high-performance web applications? Respecting the time of your users is a sure way to earn their love and loyalty. And the perfect start is selecting high-performance platforms and frameworks.

Results from Round 15 of the Web Framework Benchmarks project are now available! Round 15 includes results from the physical hardware environment at ServerCentral and cloud results from Microsoft Azure.

We ❤️ Performance

High-performance software warms our hearts like a Super Bowl ad about water or an NBC Olympics athlete biography.

But really, who doesn't love fast software? No one wants to wait for computers. There are more important things to do in life than wait for a server to respond. For programmers, few things are as rewarding as seeing delighted users, and respecting users' time is a key element of achieving that happiness.

View Round 15 results

Among the many effects of this project, one of which we are especially proud is how it encourages platforms and frameworks to be fast—to elevate the high-water marks of performance potential. When frameworks and platforms lift their performance ceiling, application developers enjoy the freedom and peace of mind of knowing they control their applications' performance fate. Application developers can work rapidly or methodically; they can write a quick implementation or squeeze their algorithms to economize on milliseconds; they can optimize early or late. This flexibility is possible when the framework and platform aren't boxing out the application—preemptively consuming the performance pie and leaving only scraps for the application developer. High-performance frameworks take but a small slice and give the bulk of the pie to the application developer to do with as they please.

This Valentine's Day, respect yourself as a developer, own your application's performance destiny, and fall in love with a high-performance framework. Your users will love you back.

Love from the Community

Community contributions to the project continue to amaze us. As of Round 15, we have processed nearly 2,500 pull requests and the project has over 3,000 stars on GitHub. We are honored by the community's feedback and participation.

We are routinely delighted to see the project referenced elsewhere, such as a project that monitors TCP connections and used our benchmarks to measure its overhead, or the hundreds of GitHub issues discussing the project within other repositories. We love knowing that others receive value from this project!

More Immediate Results for Contributors

When you make contributions to this project, you want to see the results of your effort so you can measure and observe performance improvements. You also need log files when things don't go as expected. To accelerate the process, we have made the output of our continuous benchmarking platform available as a results dashboard. Our hardware test environment runs continuously, so new results are available every few days (at this time, a full run takes approximately 90 hours). As each run completes, a raw results.json file is posted, along with zipped log files and direct links to the logs of frameworks that encountered significant testing errors. We hope this will streamline the process of troubleshooting contributions.
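As an example of how you might use this, here is a minimal sketch that downloads a run's results.json and lists its top-level keys (the URL below is a hypothetical placeholder for the link shown on the dashboard):

```python
import json
from urllib.request import urlopen

# Hypothetical URL; substitute the results.json link shown on the
# results dashboard for the run you care about.
RESULTS_URL = "https://example.com/results.json"

with urlopen(RESULTS_URL) as response:
    results = json.load(response)

# The file is plain JSON, so a quick look at its top-level keys is an
# easy first step before digging into a specific framework's numbers.
print(sorted(results.keys()))
```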

For Round 15, we used run ed713ee9 from ServerCentral and run a1110174 from Azure.

In Progress

We are working to update the entire suite to Ubuntu 16.04 LTS and aim to migrate to Ubuntu 18.04 LTS soon after it is available. This update will allow us to keep up with several features in both hardware and cloud environments, such as Azure's Accelerated Networking. Watch the GitHub project for more updates as they arrive!

Thank You!

Thank you so much to all of the contributors! Check out Round 15, and if you are a contributor to the project or just keenly interested, keep an eye on the continuous results.

Framework Benchmarks Round 14

Results from Round 14 of the Web Framework Benchmarks project are now available! This round's results are limited to the physical hardware environment only, but cloud results will be included again in the next round.

Recent improvements

Our efforts during Round 14 focused on improvements that help us manage the project, mostly by removing some of our manual work.

Continuous Benchmarking

When we are not running one-off tests or modifying the toolset, the dedicated physical hardware environment at ServerCentral continuously runs the full benchmark suite. We call this "Continuous Benchmarking." As Round 14 was wrapping up, Continuous Benchmarking allowed us to deploy multiple preview rounds for community review more rapidly than in previous rounds.

View Round 14 results

Going forward, we expect Continuous Benchmarking to facilitate immediate progression into community-facing previews of Round 15. We hope to have the first Round 15 preview within a few days.

Paired with the continuous benchmarker is an internally facing dashboard that shows us how things are progressing. We plan to eventually evolve this into an externally facing interface for project contributors.


Contributors and the project's community will have seen several renderings of the differences between Round 13 and Round 14; the final capture of those differences is one example. These renderings help us confirm planned or expected changes and identify unexpected changes or volatility.

We have, in fact, observed volatility with a small number of frameworks and aim to investigate and address each as time permits. Although the benchmarking suite includes two phases of warmup prior to gathering data for each test, we may find that some frameworks or platforms require additional warmup time to be consistent across multiple measurements.
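To sketch what additional warmup could look like (illustrative only, not the suite's actual warmup logic; `measure_rps` is a hypothetical callable that runs one load pass and returns requests per second), warmup could simply repeat until consecutive passes agree within a tolerance:

```python
def warm_until_stable(measure_rps, tolerance=0.02, max_passes=10):
    """Keep warming until back-to-back throughput readings agree.

    measure_rps: hypothetical callable that runs one load pass and
    returns requests per second; not part of the actual toolset.
    """
    previous = measure_rps()
    for _ in range(max_passes):
        current = measure_rps()
        # Consider the framework warm once consecutive passes differ
        # by less than `tolerance` (relative difference).
        if abs(current - previous) <= tolerance * previous:
            return current
        previous = current
    return previous  # give up after max_passes and record what we have
```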


We added Facebook's mention-bot to the project's GitHub repository. It has helped keep past contributors in the loop when changes are made to their prior contributions. For example, if a contributor updates the Postgres JDBC driver for the full spectrum of JVM frameworks, the original contributors of those frameworks are notified by mention-bot. This allows widespread changes such as a driver update while letting each contributor override the changes according to their framework's best practices.

Previously, we had to either manually notify people or do a bit of testing on our own to determine if the update made sense. In practice, this often meant not bothering to update the driver, which isn't what we want. (Have you seen the big performance boost in the newer Postgres JDBC drivers?)

Community contributions

This project includes a large amount of community-contributed code. Community contributions have risen recently, and we believe that is thanks to mention-bot. We expect to pass the milestone of 2,000 pull requests processed within a week or two. That is amazing.

Thank you so much to all of the contributors! Check out Round 14 and then on to Round 15!