You Can’t Scale Software Quality Without Team Buy-In
A Practical Guide to Aligning Engineers, Testers, and Product Around Test Objectives
Last week, I posted an unpopular opinion on LinkedIn: if you want your automation strategy to succeed, your whole team needs to be involved. It turned out to be even more controversial than I expected.
In this post, I want to break that idea down and get into the details of what involving the whole team actually means and why it’s essential if you want automation to deliver real value.
Define Clear Automation Objectives (Before You Write a Single Test)
No project succeeds without a clear goal, and automation is no different. Without shared objectives, your automation suite will drift fast. Here are three common objectives:
Enable faster development:
Good automation allows incremental, fast, and safer deployments — if two conditions are met:
The suite runs fast enough to trigger on every commit (see the CI sketch after this list).
It covers the high-risk, high-impact areas the whole team agrees on.
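To make the first condition concrete, here is a minimal CI sketch, assuming GitHub Actions and a Node-based suite; swap in your own stack and commands:

```yaml
# Minimal sketch: trigger the automation suite on every commit.
# Assumes GitHub Actions and a Node-based test suite; adapt to your stack.
name: automation-suite
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test # keep this fast enough to run on every push
```

If the suite is too slow to run on every push, the feedback loop breaks and the first objective quietly dies.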
Free up testers’ time:
Automation should liberate testers from repetitive work, freeing them to focus on real, investigative testing — the kind only humans can do well.
Raise red flags early:
Contrary to popular belief, automation doesn’t find bugs — it raises alerts. Only a human can determine if a failed test is a real issue, a flaky test, or an environment glitch.
In reality, most teams will aim for a combination of these goals.
👉 Action: Document your objectives and make sure the team buys in before you start building.
What Whole-Team Involvement Actually Looks Like
When I say “whole team,” I mean everyone: engineers, QA, PMs, POs. Yes, even those who claim they “don’t know anything about automation.” If automation becomes a side project owned by one person, you’ll face two major issues:
Lack of coverage visibility:
If coverage decisions happen in isolation, the team won’t know what’s protected and what’s not — until it’s too late. Worse, they may disagree with the choices made, causing friction and wasted effort.
Misunderstanding of tester workload:
Ever hear: “Why is testing taking so long?”
If automation is invisible to the team, they’ll misinterpret the tester’s time investment. Making it squad work clarifies priorities and creates shared ownership.
💡 Important: Involvement doesn’t mean everyone codes the automation. It means everyone shapes the strategy. The tester still leads implementation.
Practical Ways to Involve the Whole Team
So, how do you go from a tester quietly building automation in isolation to a team that actively collaborates on automation strategy and coverage?
This shift isn’t just about holding one-off meetings or asking for quick sign-offs. It’s about embedding automation into the way your team thinks, plans, and delivers software. It means creating visibility, opening up conversations, and treating automation not as an afterthought, but as an integral part of how your team ensures quality. Here’s how to shift from solo automation to true team involvement:
Set up a team session to discuss coverage
Before diving into writing automation, carve out dedicated time with your team to align on what should be tested and why. This session is your opportunity to bring clarity, invite collaboration, and build shared ownership of your automation strategy.
Come prepared to lead the conversation with intention and structure. Your preparation will help make the best use of everyone's time and build credibility around the automation effort. Bring these with you:
A map of the key surfaces your squad owns:
This should include all major functional areas, services, and workflows the team is responsible for. Think in terms of customer-facing features (e.g., login, checkout), backend services (e.g., auth, payments), and shared components (e.g., design systems, APIs). Having this visualized, even in a simple spreadsheet or diagram, helps ground the conversation (a hypothetical sketch follows this list).
Suggested critical paths that need protection:
Come with a preliminary list of the most important user flows or service calls that, if broken, would result in high-severity incidents. Be ready to explain why you’ve selected them and ask for feedback. This isn’t about having the “right” answer — it’s about creating a starting point for productive discussion.
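If it helps, the surface map can even live next to your tests as data. Here is a hypothetical sketch; a spreadsheet or whiteboard diagram works just as well, and every name below is invented:

```typescript
// Hypothetical sketch of a surface map as a data structure.
// The squads, surfaces, and paths are placeholders, not a real inventory.
type Surface = {
  name: string;                                   // e.g. "Checkout"
  kind: "customer-facing" | "backend" | "shared"; // where it sits in the stack
  owner: string;                                  // owning squad
  criticalPaths: string[];                        // flows that must never break
};

export const surfaceMap: Surface[] = [
  {
    name: "Login",
    kind: "customer-facing",
    owner: "identity-squad",
    criticalPaths: ["email/password sign-in", "session refresh"],
  },
  {
    name: "Payments",
    kind: "backend",
    owner: "payments-squad",
    criticalPaths: ["card authorization", "refund processing"],
  },
];
```

The format matters far less than the fact that everyone can see it and argue with it.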
💬 Pro tip: Frame the session as a collaborative risk-identification workshop. You're not there to get sign-off — you’re there to co-create a shared understanding of what matters most, and how your team will protect it through automation.
During the session:
Discuss Test Coverage at Every Layer
One of the most impactful parts of involving the team is having an open discussion about test coverage across all levels of testing. This includes unit tests, integration and API tests, as well as end-to-end (E2E) automation. Too often, teams jump straight into automating at the UI layer without first asking:
What’s already covered?
Where are the real risks?
What level of the stack is the most efficient place to catch issues?
Bringing the team into these conversations helps prevent duplicate coverage, identify blind spots early, and optimize where automation is most effective. For example, engineers might identify areas that can be tested more efficiently at the unit level, while testers highlight gaps in critical user flows that only E2E tests can catch.
This is what “shifting left” looks like in practice: not just talking about quality earlier, but embedding testability and risk assessment into the development process itself. By aligning on coverage strategy before writing a single test, teams reduce rework, improve feedback loops, and build more confidence in their delivery pipeline.
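To make this concrete, here is a hypothetical sketch of pushing coverage down to the cheapest effective layer. The function, values, and file name are placeholders, not code from any real project:

```typescript
// Jest-style unit tests: cover the permutations of a pricing rule here,
// where each case costs milliseconds. applyDiscount is a hypothetical
// function standing in for your own business logic.
import { applyDiscount } from "./pricing";

test("applies a 10% discount above the threshold", () => {
  expect(applyDiscount(100, "SAVE10")).toBe(90);
});

test("rejects an expired code", () => {
  expect(() => applyDiscount(100, "EXPIRED")).toThrow();
});

// The E2E suite then needs only one happy-path run through checkout
// (see the critical-path sketch in the next section), not every code.
```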
Identify the Critical User Paths
These are the workflows that, if broken, would immediately trigger a Sev 0 or Sev 1 incident. They represent your product’s non-negotiables — the functions that must work at all times. Think: login, payments, order placement, or core data flows.
This exercise isn’t just about listing features. It’s about partnering with product and business stakeholders to align on what’s mission-critical. It’s also a chance to uncover hidden dependencies, clarify edge cases, and determine the right level of test coverage for each path.
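As an illustration, here is a minimal Playwright-style sketch of one such critical-path check. The route, test IDs, and credentials are hypothetical placeholders:

```typescript
// Minimal Playwright-style sketch of a critical-path check.
// URL, data-testid selectors, and credentials are invented placeholders.
import { test, expect } from "@playwright/test";

test("critical path: log in and place an order", async ({ page }) => {
  await page.goto("https://staging.example.com/login");
  await page.getByTestId("email").fill("qa-user@example.com");
  await page.getByTestId("password").fill(process.env.QA_PASSWORD ?? "");
  await page.getByTestId("login-submit").click();

  // If this flow breaks, it is a Sev 0/1, so it runs on every commit.
  await page.getByTestId("add-to-cart").click();
  await page.getByTestId("checkout").click();
  await expect(page.getByTestId("order-confirmation")).toBeVisible();
});
```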
Prioritize What Matters Most
Once your team has identified the key user paths and test layers, the next step is prioritization. Here’s where many teams go wrong: they try to test every variation of a scenario “just in case.” But more tests ≠ better coverage.
Testing too many permutations leads to:
Longer suite execution times
Increased maintenance burden
Flaky test noise that obscures real issues
Instead, focus on coverage that delivers maximum signal with minimum noise. Prioritize high-impact paths, common use cases, and areas most likely to change. Talk through what’s worth automating now versus what can wait or be monitored through other means (e.g., observability, manual exploratory testing).
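One lightweight way to encode that prioritization, assuming a Playwright suite, is to tag tests by priority and let CI pick the subset to run. The tag names below are conventions I made up, not built-ins:

```typescript
// Tag tests by priority in their titles and let CI select a subset;
// Playwright matches title tags via its --grep CLI flag.
import { test } from "@playwright/test";

test("login succeeds @critical", async ({ page }) => {
  await page.goto("/login");
  // ...critical-path assertions run on every commit
});

test("profile photo upload @extended", async ({ page }) => {
  await page.goto("/profile");
  // ...lower-priority coverage runs nightly instead
});

// CI then splits the runs, e.g.:
//   npx playwright test --grep "@critical"   (every commit, minutes)
//   npx playwright test --grep "@extended"   (nightly)
```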
Once the team agrees on coverage:
Make reviewing failing tests part of the daily standup.
Share the load. Engineers should help investigate failures when their commits break the build.
Keep iterating. Review coverage quarterly or when your product changes significantly.
Overcoming Common Pushbacks
Introducing this model isn’t always smooth. Here’s what you might hear — and how to respond:
“We don’t have time to discuss this in sprint.”
Solution: Break the conversation into 15-30 minute chunks during sprint planning or backlog grooming. Save time by prepping thoroughly.
“Isn’t this just the tester’s job?”
Solution: Remind them that this is test coverage strategy, not just implementation. Shared strategy = better outcomes.
“Our PM/PO doesn’t know automation.”
Solution: Perfect. They’re not here to code; they’re here to help prioritize what matters most to customers and the business.
Automation is only as effective as the conversations that shape it. If your automation strategy is built in isolation, it will fall short, no matter how well-written the tests are. But when the entire team contributes to defining risk, coverage, and priorities, automation becomes a strategic asset rather than a side project. So the next time you're kicking off or revisiting your automation efforts, don’t go it alone.
🗣️ How does your team handle automation coverage? Are testers leading the strategy, or is it still siloed? Drop your thoughts in the comments — I’d love to hear how others are making this work in practice. 👇