Announcing the AI Developer Bootcamp

I’m excited to share something we’ve been working on: the TechEmpower AI Developer Bootcamp. This is a hands-on program for developers who want to build real LLM-powered applications and graduate with a project they can show to employers.

The idea is simple: you learn by building. Over 6–12 weeks, participants ship projects to GitHub, get reviews from senior engineers, and collaborate with peers through Slack and office hours. By the end, you’ll have a working AI agent repo, a story to tell in interviews, and practical experience with the same tools we use in production every day.

Now, some context on why we’re launching this. Over the past year, we’ve noticed that both recent grads and experienced engineers are struggling to break into new roles. The job market is challenging right now, but one area of real growth is software that uses LLMs and retrieval-augmented generation (RAG) as part of production-grade systems. That’s the work we’re doing every day at TechEmpower, and it’s exactly the skill set this Bootcamp is designed to teach.
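
For readers new to the pattern, retrieval-augmented generation is simple in outline: retrieve documents relevant to a question, then include them in the model's prompt so the answer is grounded in your own data. The sketch below is purely illustrative (the tiny corpus, the naive keyword scoring, and the prompt shape are all assumptions for this example, not Bootcamp curriculum — a real system would use vector search and an actual LLM call):

```python
# Minimal RAG sketch: naive keyword overlap stands in for real vector search.
corpus = {
    "doc1": "TechEmpower runs web framework benchmarks on dedicated hardware.",
    "doc2": "Retrieval-augmented generation grounds LLM answers in your own data.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Assemble a grounded prompt; production code would send this to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(answer("How does retrieval-augmented generation ground answers?"))
```

The key idea is that the model never answers from memory alone: everything it needs is placed in the prompt at query time.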

We’ve already run smaller cohorts, and the results have been encouraging. For some participants, it’s been a bridge from graduation to their first job. For others, it’s been a way to retool mid-career and stay current. In a few cases, it’s even become a pipeline into our own engineering team.

Our next cohort starts October 20. Tuition is $4,000, with discounts and scholarships available. If you know a developer who’s looking to level up with AI, please pass this along.

Learn more and apply here
(Applications for the October cohort are reviewed on a rolling basis.)

Framework Benchmarks Round 23

March 17, 2025

Mike Smith

As the Director for Open Source Solutions at TechEmpower, I am excited to share the latest results from our Framework Benchmarks suite. This round brings significant improvements across the board: our sponsor Microsoft has generously provided new hardware that delivers jaw-dropping performance gains.

New Hardware and Upgrades

Our new setup includes updated servers and network hardware:

  • ProLiant DL360 Gen10 Plus servers
    • Intel Xeon Gold 6330 CPU @ 2.00GHz (56 cores)
    • 64GB of memory
    • Mellanox Technologies MT28908 Family [ConnectX-6] 40Gbps Ethernet

Impact on Benchmarking Results

We’ve seen a substantial increase in performance across the board, particularly in network-bound tests. Here’s a breakdown of the improvements:

  • 3x Improvement in Practical Network-Bound Tests: The top-performing frameworks saw a threefold increase in performance, driven by the efficiency and power of the new servers and the fiber-optic network setup.
  • 4x Improvement in Theoretical Network-Bound Tests: In tests where the network is the limiting factor, the gains are even more dramatic, reaching up to four times the previous capability.
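
For intuition about why network-bound tests cap out, the theoretical request-per-second ceiling can be estimated from link capacity alone. The response size below is an illustrative assumption, not a measured value from our suite:

```python
# Estimate the requests-per-second ceiling imposed by the network link alone.
# Assumed values: 40 Gbps link, ~1 KiB per HTTP response (headers + body).
LINK_GBPS = 40
RESPONSE_BYTES = 1024  # illustrative assumption, not a measured figure

bits_per_response = RESPONSE_BYTES * 8
max_rps = LINK_GBPS * 1_000_000_000 / bits_per_response
print(f"~{max_rps:,.0f} requests/sec ceiling")
```

Real results sit below this ceiling because of protocol overhead, CPU limits, and framework behavior, but the calculation shows why upgrading the network alone can move the high-water mark so dramatically.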

Why This Matters

  1. Enhanced Accuracy: Better hardware lets us measure the performance potential of each framework more accurately, yielding clearer insights for developers and architects.
  2. Future-Proofing: The upgraded infrastructure helps ensure that our benchmarks will remain relevant as technology continues to evolve.
  3. Community Benefits: Developers rely on our benchmarks to make informed decisions about the frameworks they choose. Improved testing environments mean more reliable data for the entire tech community.
  4. Establishing a High-Water Mark: CTOs and developers should know the absolute performance threshold that their tech stack affords them. Establishing a high-water mark helps set realistic expectations and encourages continuous improvement and innovation in framework development.

Contributor Updates

bbrtj writes:

I am a maintainer of the Kelp framework. I’ve done a lot of work to modernize the framework, squash bugs, develop new features, and, of course, improve performance. These improvements, together with completely overhauled benchmark code, should prove Kelp to be one of the best-performing Perl web frameworks, as it historically was.

In addition, I’ve refactored the benchmark code for Mojolicious, the most popular Perl framework. I got in touch with Mojolicious developers to have them review and approve my changes to the code. I’ve also modified other Perl benchmarks to make them functional, but without altering their code (much).

Both frameworks now run on the most recent Perl, v5.40, which should give them a decent speed boost in itself. It should be interesting to see how these two fare against frameworks written in more popular scripting languages.

rsamoilov writes:

The Ruby Rage framework comes packed with performance optimizations:

  • A fast router
  • Focus on maximizing work at boot time
  • A lightweight database connection pool

Thanks to these optimizations, and using the same codebase and ORM as the Rails test, Rage is 81% to 219% faster in every database-related test.

Looking Ahead

We are committed to continually enhancing the TechEmpower Framework Benchmarks, and these recent upgrades are a significant step in that direction. We’d like to extend our thanks to Microsoft for their support and look forward to seeing how these improvements will benefit developers worldwide.

Stay tuned for more updates and detailed benchmark results as we continue to explore the capabilities of our new hardware. As always, we welcome feedback and collaboration from the community to keep pushing the boundaries of what’s possible in framework performance testing.

Thank you for your continued support and interest in the TechEmpower Framework Benchmarks.

Join the Conversation

We encourage you to engage with us and share your thoughts on these developments. Visit our GitHub repository to ask questions, provide feedback, and join the ongoing dialogue about framework performance and benchmarking. If you need further information about our benchmarks or the recent upgrades, don’t hesitate to reach out.

 

Want to learn how TechEmpower can help make your web application faster?