Blog

Results from Round 14 of the Web Framework Benchmarks project are now available! This round's results are limited to the physical hardware environment, but cloud results will be included again in the next round.

Recent improvements

Our efforts during Round 14 focused on improvements that help us manage the project, mostly by removing some of our manual work.

Continuous Benchmarking

When we are not running one-off tests or modifying the toolset, the dedicated physical hardware environment at ServerCentral continuously runs the full benchmark suite. We call this "Continuous Benchmarking." As Round 14 was wrapping up, Continuous Benchmarking allowed us to deploy multiple preview rounds for community review more rapidly than in previous rounds.

View Round 14 results.

Going forward, we expect Continuous Benchmarking to let us proceed immediately to community-facing previews of Round 15. We hope to have the first Round 15 preview available within a few days.

Paired with the continuous benchmarker is an internal dashboard that shows us how things are progressing. We plan to eventually evolve this into an externally facing interface for project contributors.

Differences

Contributors and the project's community will have seen several renderings of the differences between Round 13 and Round 14; the final capture of those differences is one example. These renderings help us confirm planned or expected changes and identify unexpected changes or volatility.

We have, in fact, observed volatility with a small number of frameworks and aim to investigate and address each as time permits. Although the benchmarking suite includes two phases of warmup prior to gathering data for each test, we might find that some frameworks or platforms require additional warmup time to perform consistently across multiple measurements.
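
As a rough illustration of that measurement pattern, here is a sketch of warmup-then-measure; the durations and the load function are simplified stand-ins, not the toolset's actual behavior:

```python
# Minimal sketch of warming up before measuring. Two warmup passes give
# JITs, connection pools, and caches time to stabilize before anything
# is recorded. Durations and the workload are illustrative only.
import time

def run_load(seconds):
    """Stand-in for driving HTTP load at a test implementation."""
    end = time.time() + seconds
    count = 0
    while time.time() < end:
        count += 1  # pretend each iteration is one completed request
    return count

def benchmark():
    run_load(5)              # warmup phase 1: results discarded
    run_load(5)              # warmup phase 2: results discarded
    start = time.time()
    completed = run_load(5)  # measured run: only this one is recorded
    return completed / (time.time() - start)

print(f"{benchmark():,.0f} requests/sec (toy workload)")
```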

Mention-bot

We added Facebook's mention-bot to the project's GitHub repository. It has helped keep past contributors in the loop when changes are made to their prior contributions. For example, if a contributor updates the Postgres JDBC driver across the full spectrum of JVM frameworks, the original contributors of those frameworks are notified by mention-bot. This allows for widespread changes such as a driver update while still allowing each contributor to override the changes according to their framework's best practices.

Previously, we had to either manually notify people or do a bit of testing on our own to determine if the update made sense. In practice, this often meant not bothering to update the driver, which isn't what we want. (Have you seen the big performance boost in the newer Postgres JDBC drivers?)

Community contributions

This project includes a large amount of community-contributed code. Contributions have picked up recently, and we believe that is thanks to mention-bot. We expect to pass the milestone of 2,000 pull requests processed within a week or two. That is amazing.

Thank you so much to all of the contributors! Check out Round 14, and then it's on to Round 15!

November 16, 2016

Framework Benchmarks Round 13

Round 13 of the ongoing Web Framework Benchmarks project is here! The project now features 230 framework implementations (of our JSON serialization test) and includes new entrants on platforms as diverse as Kotlin and Qt. Yes, that Qt. We also congratulate the ASP.NET team for the most dramatic performance improvement we've ever seen, making ASP.NET Core a top performer.

The large filters panel on our results web site is a testament to the ever-broadening spectrum of options available to web developers. What a great time to be building web apps! Such a diversity of frameworks means there are likely many options that provide high performance while meeting your language and productivity requirements.

View Round 13 results.

Good fortunes

As the previous round—Round 12—was wrapping up, we were unfortunately rushed as the project’s physical hardware environment was being decommissioned. But good fortune was just around the corner, thanks to the lucky number 13!

New hardware and cloud environments

For Round 13, we have all new test environments, for both physical hardware and the virtualized public cloud.

Microsoft has provided the project with Azure credits, so starting with Round 13, the cloud environment is on Azure D3v2 instances. Previous rounds’ cloud tests were run on AWS.

Meanwhile, ServerCentral has provided the project a trio of physical servers in one of their development lab environments with 10 gigabit Ethernet. Starting with Round 13, the physical hardware environment is composed of a Dell R910 application server (4x 10-Core E7-4850 CPUs) and a Dell R420 database server (2x 4-Core E5-2406 CPUs).

We’d like to extend huge thanks to ServerCentral and Microsoft for generously supporting the project!

We recognize that as a result of these changes, Round 13 is not easy to directly compare to Round 12. Although changing the test environments was not intentional, it was necessary. We believe the results are still as valuable as ever. An upside of this environment diversity is visibility into the ways various frameworks and platforms work with the myriad variables of cores, clock speed, and virtualization technologies. For example, our new physical application server has twice as many HT cores as the previous environment, but the CPUs are older, so there is an interesting balance of higher concurrency but potentially lower throughput. In aggregate, the Round 13 results on physical hardware are generally lower due to the older CPUs, all else being equal.

Many fixes to long-broken tests

Along with the addition of new frameworks, Round 13 also marks a sizeable decrease in the number of existing framework tests that have failed to execute properly in previous rounds. This is largely the result of a considerable community effort over the past few months to identify and fix dozens of frameworks, some of which we haven’t been able to successfully test since 2014.

Continuous benchmarking

Round 13 is the first round conducted with what we’re calling Continuous Benchmarking. Continuous Benchmarking is the notion of setting up the test environment to automatically reset to a clean state, pull the latest from the source repository, prepare the environment, execute the test suite, deliver results, and repeat.
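
In outline, each cycle looks something like the following sketch; the shell commands and script names here are hypothetical stand-ins, not our actual tooling:

```python
# Rough sketch of one Continuous Benchmarking cycle as described above.
# The script names are hypothetical; the real toolset differs in detail.
import subprocess

def sh(cmd):
    """Run a shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

def one_cycle():
    sh("git reset --hard && git clean -fdx")  # reset to a clean state
    sh("git pull")                            # pull the latest from the source repository
    sh("./prepare-environment.sh")            # hypothetical: configure servers and databases
    sh("./run-full-suite.sh")                 # hypothetical: execute the full test suite
    sh("./deliver-results.sh")                # hypothetical: publish results for review

if __name__ == "__main__":
    while True:  # ...and repeat, continuously
        one_cycle()
```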

There are many benefits of Continuous Benchmarking. For example:

  • At any given time, we can grab the most recent results and mark them as a preview or final for an official Round. This should allow us to accelerate the delivery of Rounds.
  • With some additional work, we will be able to capture and share results as they are made available. This should give participants in the project much quicker insight into how their performance tuning efforts are playing out in our test environment. Think of it as continuous integration but for benchmark results. Our long-term goal is to provide a results viewer that plots performance results over time.
  • Any changes that break the test environment as a whole or a specific framework’s test implementation should be visible much earlier. Prior to Continuous Benchmarking, breaking changes were often not detected until a preview run.

Microsoft’s ASP.NET Core

We consider ourselves very fortunate that our project has received the attention it has from the web framework community. It has become a source of great pride for our team. Of all the reactions and feedback we’ve received, our favorite is when a framework maintainer recognizes a performance deficiency highlighted by this project and then works to improve that performance. We love this because we think of it as a small way of improving the performance of the whole web, and we are passionate about performance.

Round 13 is especially notable for us because we are honored that Microsoft has made it a priority to improve ASP.NET’s performance in these benchmarks, and in so doing, improve the performance of all applications built on ASP.NET.

Thanks to Microsoft’s herculean performance tuning effort, ASP.NET, in the new cross-platform-friendly form of ASP.NET Core, is now a top performer in our Plaintext test, making it among the fastest platforms at the fundamentals of web request routing. The degree of improvement is absolutely astonishing: from 2,120 requests per second on Mono in Round 11 to 1,822,366 requests per second on ASP.NET Core in Round 13. That’s an improvement of approximately 85,900%, and it doesn’t even account for Round 11’s hardware being faster than our new hardware. That is not a typo: it’s 859 times faster! We believe this to be the most significant performance improvement this project has ever seen.

By delivering cross-platform performance alongside their development toolset, Microsoft has made C# and ASP.NET one of the most interesting web development platforms available. We have a brief message to those developers who have avoided Microsoft’s web stack thinking it’s “slow” or that it’s for Windows only: ASP.NET Core is now wicked sick fast at the fundamentals and is improving in our other tests. Oh, and of course we’re running it on Linux. You may be thinking about the Microsoft of 10 years ago.

The best part, in our opinion, is that Microsoft is making performance a long-term priority. There is room to improve on our other more complex tests such as JSON serialization and Fortunes (which exercises database connectivity, data structures, encoding of unsafe text, and templating). Microsoft is taking on those challenges and will continue to improve the performance of its platform.

Our Plaintext test has historically been a playground for the ultra-fast Netty platform and several lesser-known or exotic platforms. (To be clear, there is nothing wrong with being exotic! We love them too!) Microsoft’s tuning work has brought a mainstream platform into the frontrunners. That achievement stands on its own. We congratulate the Microsoft .NET team for a massive performance improvement and for making ASP.NET Core a mainstream option with the performance characteristics of an acutely tuned fringe platform. It’s like an F1 car that anyone can drive. We should all be so lucky.

February 25, 2016

Framework Benchmarks Round 12

Round 12 of the ongoing Web Framework Benchmarks project is now available!

A race against the clock

Recently, we were notified that the physical hardware environment we had used for Rounds 9 through 12 would be decommissioned imminently. This news made Round 12 unusual: rather than wait until we could equip and configure a new environment, we decided to conclude Round 12 while the current environment remained available.

As a result, no previews of Round 12 were made available to the participants in the project. Pull requests that we would normally expect to see after a preview cycle will need to be processed for Round 12. So bear in mind that participants were not able to sanity-check the Round 12 results and submit fixes.

Furthermore, due to the modestly rushed nature of Round 12 (at least on our side), we elected not to capture Amazon EC2 results for this round. The only data available for Round 12 is from the Peak dual Xeon E5 servers.

View the full results of Round 12.

We are now working to find and set up a new hardware environment for Rounds 13 and beyond.

Notable changes to Clojure tests

@yogthos noticed (in issue #1894) that the Compojure and http-kit test implementations were using def (which evaluates its expression once, when the namespace is loaded) instead of defn (which defines a function that runs on every request) for the JSON, single-query, and Fortunes tests. While the impact on the JSON test was likely minimal, this had a significant impact on the single-query and Fortunes tests because those implementations were not actually running a query for every request as expected. This error was unintentionally introduced into the Compojure test by TechEmpower staff, and then later copied to http-kit to keep the implementations in sync. We have corrected it in Round 12.
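
For readers who don't know Clojure, the mistake is easy to sketch in Python terms (an analogue, not the actual test code): a module-level assignment evaluates once at import, much as def evaluates once when the namespace loads, whereas a function body, like defn's, runs on every call:

```python
# Python analogue of the Clojure def/defn mix-up; names are illustrative.
import random

def run_single_query():
    """Stand-in for fetching one random row from the database."""
    return {"id": random.randint(1, 10000),
            "randomNumber": random.randint(1, 10000)}

# Broken (like Clojure's def): the query runs once, at import time,
# and every subsequent request serves this same cached value.
SINGLE_QUERY_RESULT = run_single_query()

# Fixed (like Clojure's defn): the query runs on each request.
def single_query_result():
    return run_single_query()
```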

Other notable changes

  1. The plain PHP, Slim, and Laravel tests have been upgraded to PHP 7. For example, Slim's performance in the JSON test and Laravel's performance in the Fortunes test both approximately doubled versus Round 11 with PHP 5.
  2. All JVM-hosted tests have been upgraded to Java 8.
  3. Several new frameworks were added.

Thanks!

As always, we thank all contributors to the project, especially in light of the rush to get Round 12 concluded!