Blog

Today we announce the results of the twentieth official round of the TechEmpower Framework Benchmarks project.

Now in its eighth year, this project measures the high-water mark performance of server-side web application frameworks and platforms using predominantly community-contributed test implementations. The project has processed more than 5,200 pull requests from contributors.

Round 20 Updates from our contributors

In the months between Round 19 and Round 20, about four hundred pull requests were processed. Some highlights shared by our contributors:

(Please reach out if you are a contributor and didn't yet get a chance to share your updates. We'll get them added here.)

Notes

Thanks again to contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 20 is composed of:

Round 19 of the TechEmpower Framework Benchmarks project is now available!

This project measures the high-water mark performance of server-side web application frameworks and platforms using predominantly community-contributed test implementations. Since its inception as an open source project in 2013, community contributions have been numerous and continuous. Today, at the launch of Round 19, the project has processed more than 4,600 pull requests!

We can also measure the breadth of the project using time. We continuously run the benchmark suite, and each full run now takes approximately 111 hours (4.6 days) to execute the current suite of 2,625 tests. And that number continues to grow steadily as we receive further test implementations.

Composite scores and TPR

Round 19 introduces two new features in the results web site: Composite scores and a hardware environment score we're calling the TechEmpower Performance Rating (TPR). Both are available on the Composite scores tab for Rounds 19 and beyond.

Composite scores

Frameworks for which we have full test coverage will now have composite scores, which reflect an overall performance score across the project's test types: JSON serialization, Single-query, Multi-query, Updates, Fortunes, and Plaintext. For each round, we normalize results for each test type and then apply subjective weights for each (e.g., we have given Fortunes a higher weight than Plaintext because Fortunes is a more realistic test type).
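
To make the computation concrete, here is a minimal sketch of how normalized, weighted composite scoring of this kind could work. The weight values and the max-based normalization below are assumptions for illustration only; the project's actual weights and formula are documented on the GitHub wiki.

```python
# Illustrative sketch only: the weights below are hypothetical, not the
# project's actual values, and the real normalization may differ.
TEST_TYPES_AND_WEIGHTS = {
    "json":         1.000,   # JSON serialization
    "single_query": 1.000,
    "multi_query":  1.000,
    "updates":      1.000,
    "fortunes":     1.500,   # weighted higher than plaintext (more realistic)
    "plaintext":    0.750,
}

def composite_scores(results):
    """results: {framework: {test_type: requests_per_second}} -> {framework: score}"""
    # Normalize each test type against the best result observed for that type.
    best = {
        test: max(per_framework.get(test, 0.0) for per_framework in results.values())
        for test in TEST_TYPES_AND_WEIGHTS
    }
    scores = {}
    for framework, per_test in results.items():
        # Only frameworks with full test coverage receive a composite score.
        if any(test not in per_test for test in TEST_TYPES_AND_WEIGHTS):
            continue
        scores[framework] = sum(
            weight * (per_test[test] / best[test])
            for test, weight in TEST_TYPES_AND_WEIGHTS.items()
        )
    return scores
```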

If additional test types are added in the future, frameworks will need to implement them in order to remain in the composite score chart.

You can read more about composite scores at the GitHub wiki.

TechEmpower Performance Rating (TPR)

With the composite scores described above, we are now able to use web application frameworks to measure the performance of hardware environments. This is an exploration of a new use case for this project, unrelated to the original goal of improving software performance. We believe this could be an interesting measure of hardware environment performance because it is a holistic test of compute and network capacity, based on a wide spectrum of software platforms and frameworks used in the creation of real-world applications. We look forward to your feedback on this feature.

Right now, the only hardware environments being measured are our Citrine physical hardware environment and Azure D3v2 instances. However, we are implementing a means for users to contribute and visualize results from other hardware environments for comparison.

Hardware performance measurements must use the specific commit for a round (such as 801ee924 for Round 19) to be comparable, since the test implementations continue to evolve over time.

Because a hardware performance measurement shouldn't take 4.6 days to complete, we use a subset of the project's many frameworks for this purpose. We've selected and flagged frameworks that represent the project's diversity of technology platforms. Any results file that includes this subset can be used for measuring hardware environment performance.

The set of TPR-flagged frameworks will evolve over time, especially if we receive further input from the community. Our goal is to constrain a run intended for hardware performance measurement to several hours of execution time rather than several days. As a result, we want to keep the total number of flagged frameworks somewhere between 15 and 25.

You can read more about TPR at the GitHub wiki.

Other Round 19 Updates

Once again, Nate Brady tracked interesting changes since the previous round at the GitHub repository for the project. In summary:

Notes

Thanks again to contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 19 is composed of:

Round 18 of the TechEmpower Framework Benchmarks project is now available!

When we posted the previous round in late 2018, the project had processed about 3,250 pull requests. Today, with Round 18 just concluded, the project is closing in on 4,000 pull requests. We are repeatedly surprised and delighted by the contributions and interest from the community. The project is immensely fun and useful for us and we're happy it is useful for so many others as well!

Notable for Round 18

Nate Brady tracked interesting changes since the previous round at the GitHub repository for the project. Several of these are clarifications of requirements for test implementations. In summary:

  • Thanks to An Tao (@an-tao), we clarified that the "Date" header in HTTP responses must be accurate. It is acceptable for it to be recomputed by the platform or framework once per second and cached as a string or byte buffer for the duration of that second (see the sketch after this list).
  • To keep frameworks from breaking the test environments by consuming too much memory, the toolset now limits the amount of memory provided to the containers used by test implementations.
  • The requirements for the Updates test were clarified to permit a single update. We are still considering whether to classify test implementations by whether they use this tactic.
  • The requirements were clarified to specify that caching or memoization of the output of JSON serialization is not permitted.
  • The toolset now more strictly validates that responses provide the correct JSON serialization.
  • Cloud tests in Azure are using Azure's accelerated networking feature.
  • Postgres has been upgraded to version 11.
  • Nikolay Kim (@fafhrd91) explained the tactics used by Actix to achieve record performance on the Fortunes test.
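
As an illustration of the Date header tactic mentioned above, here is a minimal sketch of recomputing the header value at most once per second and caching it as a string. The function and variable names are hypothetical; this is not the toolset's or any particular framework's implementation.

```python
# Illustrative sketch: cache a recomputed "Date" header value for one second.
import time
from email.utils import formatdate

_cached_second = None
_cached_value = None

def cached_date_header():
    """Return an HTTP Date header value, recomputed at most once per second."""
    global _cached_second, _cached_value
    now = time.time()
    second = int(now)
    if second != _cached_second:
        # formatdate(usegmt=True) yields e.g. "Tue, 09 Feb 2021 18:00:00 GMT"
        _cached_value = formatdate(now, usegmt=True)
        _cached_second = second
    return _cached_value
```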

Other updates

  • Round 18 now includes just over two hundred test implementations (which we call "frameworks" for simplicity).
  • The results web site now includes a dark theme, because dark themes are the new hotness. Select it at the bottom right of the window when viewing results.

Notes

Thank you to all contributors and fans of this project! As always, we really appreciate your continued interest, feedback, and patience!

Round 18 is composed of: