

Yes, we still need QA, but what “QA” means is rapidly changing.
Teams using agentic coding are shifting quality both left and right: left into work definition, acceptance criteria, and test architecture; right into integration confidence, production signals, and release confidence. The shift matters because no downstream checkpoint can absorb the volume agentic engineering can produce.
QA has to stretch in both directions – into the decisions that shape the work, and into the signals that show it will hold up in production.
Where this is heading
The shape of QA is still in flux.
Some teams are pushing quality further left, with more rigor in specs, acceptance criteria, and engineering guardrails before agents ever start writing code. Fowler’s recent “harness” framing pushes even further in that direction: the goal is to create stronger guides, checks, and boundaries around agentic work so that quality is built into the system earlier, rather than rescued at the end.
At the same time, teams are also being pushed right. Faster code generation increases the need for integration confidence, release confidence, observability, and production feedback.
Both moves follow the same pattern: QA is no longer a stage in the pipeline. It’s a thread running through the whole SDLC.
Where AI helps
AI can be genuinely useful in QA work.
- It can generate baseline unit tests quickly.
- It can help fill gaps in integration coverage.
- It can suggest edge cases.
- It can help analyze failures faster.
- It can reduce repetitive test-authoring work.
We worked with one client that had a 10-plus-year-old Java application with almost no test coverage. Spending weeks writing tests by hand never got prioritized. AI generated a baseline suite in hours.
That was valuable. But “better than nothing” is not the same as “good enough to trust.” That is still the trap.
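To make that trap concrete, here is a minimal hypothetical sketch (the `apply_discount` function and both tests are invented for illustration, not taken from any real client codebase). The AI-generated baseline covers the happy path and passes; the human-added edge cases are what surface the behavior worth questioning.

```python
# Hypothetical legacy function and the kind of happy-path test
# an AI tool often generates first.

def apply_discount(price, percent):
    """Return price after a percentage discount (legacy code, unvalidated)."""
    return round(price * (1 - percent / 100), 2)

# AI-generated baseline: exercises the happy path and passes.
def test_baseline():
    assert apply_discount(100.0, 10) == 90.0

# Human-added edge cases: inputs the baseline never probes.
def test_edge_cases():
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 100) == 0.0    # full discount
    # A negative "discount" silently raises the price. The test passes,
    # but is that behavior actually correct? Coverage alone won't say.
    assert apply_discount(100.0, -10) == 110.0

test_baseline()
test_edge_cases()
print("all tests pass")
```

Every assertion here is green, yet the negative-percent case shows why passing tests and validated behavior are different things: only a human asking “is that intended?” catches the real risk.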
QA responsibilities still exist
Whether you keep a distinct QA function or absorb the work into other roles, the responsibilities still exist:
- Helping define what “correct” means before work starts.
- Shaping acceptance criteria, workflows, and edge cases.
- Validating outcomes, not just checking whether scripts pass.
- Improving release confidence across systems, not just within a single change.
- Watching production signals and feedback after release.
- Helping teams identify where AI-generated coverage looks good but misses the real risk.
QA’s future will vary by organization: some will keep a distinct function, some will shift testing into engineering, and others will refocus specialists on end-to-end validation, production risk, and release confidence.
What does not hold up is the idea that QA can remain a cleanup step at the end.
A more practical QA playbook
A practical approach for 2026 looks something like this:
- Use AI aggressively to generate baseline unit and integration tests, then apply careful human review before relying on them.
- Involve QA earlier in shaping acceptance criteria and edge cases.
- Put more energy into integration confidence and release confidence.
- Spend less time maintaining brittle scripts and more time validating outcomes.
- Treat QA as part of quality system design, not just test execution.
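The difference between maintaining scripts and validating outcomes can be sketched in a few lines. This is a hypothetical Python example; `create_order`, `Order`, and the rendered label are invented stand-ins, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Order:
    items: list          # list of (name, price_cents) tuples
    total_cents: int
    status: str

def create_order(items):
    """Stand-in for the system under test."""
    total = sum(price for _, price in items)
    return Order(items=items, total_cents=total, status="CONFIRMED")

# Script-style assertion: pinned to presentation details, so it breaks
# whenever the label text or formatting changes.
def check_rendered_label(order):
    label = f"Order total: ${order.total_cents / 100:.2f}"
    assert label == "Order total: $12.50"

# Outcome assertion: states the business rule directly and survives
# presentation churn.
def check_outcome(order):
    assert order.status == "CONFIRMED"
    assert order.total_cents == sum(p for _, p in order.items)

order = create_order([("widget", 1000), ("gadget", 250)])
check_rendered_label(order)
check_outcome(order)
print("outcome validated")
```

The outcome check encodes what the business actually needs to be true; the label check encodes how one screen happens to render it today. Shifting effort toward the former is what “testing outcomes” means in practice.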
That last point is one of the biggest changes.
The QA teams that create the most value in an agentic world are not just writing or maintaining scripts. They are helping design the system that makes rapid change safe.
Additional reading
- Harness engineering for coding agent users – The best conceptual piece in this set. Useful for thinking about how trust moves from manual review toward a stronger surrounding harness.
- A Thoughtworks perspective on CircleCI’s 2026 State of Software Delivery – A strong operating-model piece on what happens when delivery systems designed for human-speed change meet AI-assisted throughput.
- How to Lead an AI Testing Transformation: A Playbook for QA Leaders – Vendor source, but useful for thinking through how QA organizations may need to change.
- The SDLC Doesn’t End at Code Generation: Why Platform Engineering Teams Must Modernize Quality Next – Helpful on the idea that shift-left alone is not enough.
- Stop Maintaining Scripts. Start Testing Outcomes. – Good shorthand for one of the biggest changes in QA work.
- Does using AI in QA testing increase risk for software companies? – A useful counterweight to the idea that AI-driven testing is automatically safer.