Announcing the AI Developer Bootcamp

I’m excited to share something we’ve been working on: the TechEmpower AI Developer Bootcamp. This is a hands-on program for developers who want to build real LLM-powered applications and graduate with a project they can show to employers.

The idea is simple: you learn by building. Over 6–12 weeks, participants ship projects to GitHub, get reviews from senior engineers, and collaborate with peers through Slack and office hours. By the end, you’ll have a working AI agent repo, a story to tell in interviews, and practical experience with the same tools we use in production every day.

Now, some context on why we’re launching this. Over the past year, we’ve noticed that both recent grads and experienced engineers are struggling to break into new roles. The job market is challenging right now, but one area of real growth is software that uses LLMs and retrieval-augmented generation (RAG) as part of production-grade systems. That’s the work we’re doing every day at TechEmpower, and it’s exactly the skill set this Bootcamp is designed to teach.

We’ve already run smaller cohorts, and the results have been encouraging. For some participants, it’s been a bridge from graduation to their first job. For others, it’s been a way to retool mid-career and stay current. In a few cases, it’s even become a pipeline into our own engineering team.

Our next cohort starts October 20. Tuition is $4,000, with discounts and scholarships available. If you know a developer who’s looking to level up with AI, please pass this along.

Learn more and apply here

We’re starting to see a pattern with LLM apps in production: things are humming along… until suddenly they’re not. You start hearing:

  • “Why did our OpenAI bill spike this week?”
  • “Why is this flow taking 4x longer than last week?”
  • “Why didn’t anyone notice this earlier?”

It’s not always obvious what to track when you’re dealing with probabilistic systems like LLMs. But if you don’t set up real-time monitoring and alerting early, especially for cost and latency, you might miss a small issue that quietly escalates into a big cost overrun.

The good news: you don’t need a fancy toolset to get started. You can use OpenTelemetry for basic metrics, or keep it simple with custom request logging. The key is being intentional about what you track and catching the high-leverage signals early.
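To make that concrete, here’s a minimal sketch of the custom-request-logging approach, assuming the OpenAI Python SDK. The per-token prices, the daily budget threshold, and the `tracked_chat` wrapper are all placeholders for illustration, not a canonical implementation; you’d swap in your provider’s actual price sheet and route the warning to whatever pages your team.

```python
import logging
import time

logger = logging.getLogger("llm_metrics")

# Hypothetical per-1M-token prices; check your provider's current price sheet.
PRICING = {"gpt-4o-mini": {"input": 0.15, "output": 0.60}}

DAILY_BUDGET_USD = 50.0  # placeholder alert threshold
_spend_today = 0.0       # in production, persist this somewhere durable


def tracked_chat(client, model, messages):
    """Call a chat completion; log latency, token usage, and estimated cost."""
    global _spend_today
    start = time.perf_counter()
    response = client.chat.completions.create(model=model, messages=messages)
    latency_s = time.perf_counter() - start

    usage = response.usage
    price = PRICING[model]
    cost = (usage.prompt_tokens * price["input"]
            + usage.completion_tokens * price["output"]) / 1_000_000

    # One structured log line per request: the raw material for dashboards.
    logger.info(
        "model=%s latency=%.2fs prompt_tokens=%d completion_tokens=%d cost_usd=%.5f",
        model, latency_s, usage.prompt_tokens, usage.completion_tokens, cost,
    )

    # The simplest possible "alert": a warning you can page on.
    _spend_today += cost
    if _spend_today > DAILY_BUDGET_USD:
        logger.warning("Daily LLM spend $%.2f exceeds budget $%.2f",
                       _spend_today, DAILY_BUDGET_USD)

    return response
```

Even something this simple answers the three questions above: the log lines show when latency or cost per request drifts, and the budget check turns a quiet spike into a loud one.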

Here are some top reads that will help you get your arms around it.

Top Articles

AI Coding Assistants Update

September 16, 2025

Tony Karrer

The conversation around AI coding assistants keeps speeding up, and we’re hearing the same questions from technology leaders:

  • Which flavor do we bet on: fully agentic tools (Claude Code, Devin) or IDE plug-ins (Cursor, JetBrains AI Assistant, Copilot)?
  • How do we evaluate these tools?
  • How do we roll them out effectively?

At the top level, here’s how I think about the split:

  • Agentic engines are happy running end-to-end loops: edit files, run tests, open pull requests. They’re great for plumbing work, bulk migrations, and onboarding new engineers to a massive repo.
  • IDE assistants excel at tight feedback loops: completions, inline explanations, commit-message suggestions. They feel safer because they rarely touch the filesystem.

Here’s a pretty good roundup:

The Best AI Coding Tools, Workflows & LLMs for June 2025.

Most teams I work with end up running a hybrid: agents for the heavy lifting, IDE helpers for quick day-to-day work.

Whichever path you take, the practices you adopt around the tools matter most.

Some examples to get you started:

Reading list