2-week spike to ramp up on AI Coding Tools

October 23, 2025

Tony Karrer

We’ve seen many companies stumble when rolling out AI coding assistants. Success depends on building knowledge, skills, and practical habits. We help clients across every aspect of these rollouts, but one practice in particular accelerates proficiency:

2-week (10 work-day) AI Coding Tool Ramp-up Spike

Here’s how it works:

  • 2 days of focused training
    • Day 1 (Fundamentals): Core patterns of AI-assisted development – how to write precise prompts, review AI output, and refine generated code without creating technical debt. Engineers leave with a systematic workflow rather than a handful of ad-hoc examples.
    • Day 2 (Advanced): Context management, multi-file refactors, breaking features into AI-manageable chunks, rules files, and MCP servers/services (a configuration sketch follows this list). Exercises surface common failure modes so that teams build the reflexes to reset context, enforce consistency, and debug AI output.
  • 8 days of supported, hands-on ticket work
    • Developers pick up a variety of tickets and use the AI tool as part of getting the work done.
    • Task journaling — Each developer keeps a lightweight daily log of what worked and what didn’t, building a shared playbook.
    • Feedback loops with AI champions — Daily check-ins with champions and facilitators, plus asynchronous support, help teams overcome early friction and build skills quickly.
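
For teams that have not seen MCP before, a concrete illustration helps ground the Day 2 material. Below is a minimal sketch of what registering an MCP server typically looks like in a coding tool’s JSON configuration; tools such as Claude Code and Cursor accept an "mcpServers" block of roughly this shape, though the file location and exact keys vary by tool. The server package and token placeholder are illustrative only, so check your tool’s documentation before copying anything.

    {
      "mcpServers": {
        "github": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
          }
        }
      }
    }

The specific server matters less than the habit the exercise builds: knowing where the tool’s configuration lives, what extra context or capabilities a server injects, and how to disable one when it starts crowding the context window.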

By the end of the two-week spike, engineers have built a foundation of habits, shared practices, and a clearer sense of where the tools genuinely improve code quality and developer experience. Leaders need to provide support for continued learning beyond this two-week period, but we’ve found this to be a critical first step.

Additional Reading:

AI Coding Assistants Update

September 16, 2025

Tony Karrer

The conversation around AI coding assistants keeps speeding up, and we are hearing the following questions from technology leaders:

  • Which flavor do we bet on—fully agentic tools (Claude Code, Devin) or IDE plug-ins (Cursor, JetBrains AI Assistant, Copilot)?
  • How do we evaluate these tools?
  • How do we effectively roll out these tools?

At the top level, I think about it this way:

  • Agentic engines are happy running end-to-end loops: edit files, run tests, open pull requests. They’re great for plumbing work, bulk migrations, and onboarding new engineers to a massive repo.
  • IDE assistants excel at tight feedback loops: completions, inline explanations, commit-message suggestions. They feel safer because they rarely touch the filesystem.

Here’s a pretty good roundup:

The Best AI Coding Tools, Workflows & LLMs for June 2025.

Most teams I work with end up running a hybrid—agents for the heavy lifting, IDE helpers for quick day-to-day work.

Whichever path you take, the practices you use matter the most.

Some examples to get you started:

Reading list