Early testing saves time and money
A bug found in requirements costs cents. In production, it costs thousands. Here's how early testing works and how to apply shift-left in your team.

It's Friday at seven in the evening. The team was already wrapping up the week when an alert goes off in production: a critical bug in the payment flow. Time to put together an emergency hotfix, coordinate the deployment, notify support, and pray there isn't more collateral damage. Three hours later, the patch is in production and the team is drained. The worst part is that someone remembers seeing something odd in the requirements two months ago, but nobody paid much attention. That Friday night bug could've been fixed in fifteen minutes during a requirements review. This is the third ISTQB testing principle, and probably the one that can save you the most money.
The earlier you find it, the cheaper it is
ISTQB puts it this way: “Early testing saves time and money”. The idea is that testing activities should start as early as possible in the development cycle, not wait until there's executable code. The later a defect is detected, the more it costs to fix, because the error has already spread across more layers of the system and more code has been built on top of a flawed foundation.
This principle isn't new. Barry Boehm documented it back in the 80s with his studies on the cost of defects across different stages of development. What he found has been confirmed again and again in real projects: an error caught in the requirements phase may cost a few hours of work. That same error, found in production, can multiply its cost by a hundred or more.
The 1:10:100 rule
This ratio shows up constantly in software engineering literature, and it's a good way to visualize the impact. If fixing a defect in the requirements phase costs 1 unit of effort, fixing it during development costs roughly 10, and fixing it in production costs 100 or more.
The exact numbers vary depending on the context, but the trend is consistent. This isn't just about the time a developer spends fixing code. You also have to add the cost of investigating the issue, reproducing it, designing the fix, testing it, deploying it, and in many cases handling the impact on users who were already affected.
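To make the ratio concrete, here is a small sketch. The multipliers follow the 1:10:100 rule from above; the 2-hour base effort is an invented figure for illustration, not measured data.

```javascript
// Illustrative only: the 1:10:100 ratio applied to one hypothetical defect.
// The phase multipliers come from the rule above; the hours are made up.
const COST_MULTIPLIER = { requirements: 1, development: 10, production: 100 };

function fixCost(phase, baseEffortHours) {
  return baseEffortHours * COST_MULTIPLIER[phase];
}

// The same 2-hour defect, caught at three different moments:
console.log(fixCost("requirements", 2)); // 2
console.log(fixCost("development", 2));  // 20
console.log(fixCost("production", 2));   // 200
```

The exact multipliers matter less than the shape of the curve: every phase a defect survives multiplies what it costs you.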
In my experience, the biggest hidden cost isn't technical, it's organizational. A bug in production creates emergency meetings, interrupts the sprint's planned work, forces people to reprioritize on the fly, and wears the team down. None of that shows up in any bug metric, but you feel it in the team's velocity over the following weeks.
What early testing really means
When we talk about early testing, we don't just mean writing unit tests sooner. The idea is broader: it's about applying testing thinking at every stage of development, starting with the earliest ones.
This includes activities many teams don't think of as testing, even though they are:
- Reviewing requirements for ambiguity, contradictions, and gaps. A requirement that says “the system must respond quickly” without defining what “quickly” means is a bug waiting to happen.
- Questioning technical designs before a single line of code is written. If the proposed architecture has a single point of failure, it's better to catch it in a diagram than in a postmortem.
- Taking part in user story refinement with a QA mindset. Not to block progress, but to ask the questions nobody else asks: “What happens if the user does this twice in a row?”, “What if the field comes in empty?”.
In the industry, the term shift-left testing has become popular to describe moving testing activities to the left side of the project timeline. The idea is simple: don't wait until the code is ready to start looking for problems.
Examples we've all lived through
There are scenarios that repeat themselves in pretty much every development team.
The first is the poorly defined requirement that nobody questions. The product owner writes a user story, the team implements it, QA tests it against the story, and everything passes. But once it reaches production, users complain because the behavior isn't what they expected. The requirement was technically correct but functionally incomplete. If someone with a testing mindset had reviewed that requirement before development started, they would've spotted the ambiguity in minutes.
The second is the bug discovered during integration. Two teams build their modules separately, each with unit tests passing at 100%. When they integrate, they find out both sides made different assumptions about the format of a field. A week and a half of rework. A one-hour API contract review during the design phase would've been enough.
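The kind of mismatch described above is exactly what a shared contract check, agreed at design time, catches early. A hand-rolled sketch follows; the field names and rules are invented for the example (in practice you might use a schema library or consumer-driven contract tests instead).

```javascript
// Hypothetical contract for a payment payload. Team A sends amounts as
// integer cents; team B assumed decimal strings. A shared check like this,
// written during the design phase, surfaces the disagreement immediately.
const contract = {
  orderId: (v) => typeof v === "string",
  amountCents: (v) => Number.isInteger(v) && v >= 0,
  currency: (v) => /^[A-Z]{3}$/.test(v),
};

// Returns the list of fields that violate the contract.
function validate(payload) {
  return Object.entries(contract)
    .filter(([field, check]) => !check(payload[field]))
    .map(([field]) => field);
}

console.log(validate({ orderId: "A-1", amountCents: 1999, currency: "EUR" }));
// []
console.log(validate({ orderId: "A-1", amountCents: "19.99", currency: "eur" }));
// ["amountCents", "currency"]
```

One hour agreeing on this object during design is the “API contract review” the example above was missing.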
And then there's the classic one: the Friday hotfix. Production breaks, you patch it in a hurry, the patch introduces a regression that shows up on Monday, and the week starts with another hotfix. I've seen that chain too many times. Almost every time, you can trace it back to a decision or a requirement nobody validated soon enough.
Common mistakes when this principle is ignored
The most common mistake is treating testing as a phase that comes after development, instead of an activity that goes along with it from the start. When testing only begins once the code is “ready”, you're already late. Defects that could've been avoided with a requirements review now need code changes, test updates, and possibly data migrations.
Another common mistake is confusing early testing with unit tests. Unit tests are valuable, but they're only one part of the picture. You can have excellent unit test coverage and still discover serious defects in production if nobody reviewed the requirements, if there are no integration tests, or if exploratory testing is nowhere to be found.
We also run into teams that leave QA out of the early stages because “there's nothing to test yet”. That mindset is expensive. QA doesn't need code to start adding value. They can review documentation, identify risks, prepare testing strategies, and question assumptions before they turn into code.
Lastly, there are teams that underestimate the cost of late defects because they don't measure it. If you don't keep track of how much time you're spending on hotfixes, production bug investigations, and rework, it's easy to think it's not that much. Once you start measuring it, the numbers are usually scary.
How to apply it in your team
Bring QA into refinement from the start
The change with the biggest immediate impact is inviting QA to refinement and planning sessions. Not as a listener, but as an active participant who questions requirements, spots missing scenarios, and suggests concrete acceptance criteria. In teams where I've seen this done well, the number of defects reaching production drops noticeably within a few weeks.
Set up static analysis from the first commit
Tools like ESLint, SonarQube, or language-specific linters catch entire categories of bugs before the code runs for the first time. Set them up in the CI pipeline so they run on every pull request. They don't replace tests, but they remove a layer of trivial mistakes that shouldn't take up human review time.
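As a starting point, a minimal ESLint flat-config sketch. The file pattern is an assumption about your project layout; the three rules are real ESLint core rules that catch common trivial mistakes. In a real `eslint.config.js` you would export this object.

```javascript
// Hedged sketch of a minimal ESLint flat config. Adapt the file glob to
// your repo; these core rules catch bugs before the code ever runs.
const config = [
  {
    files: ["src/**/*.js"],
    rules: {
      "no-unused-vars": "error", // dead variables often hide a logic slip
      "eqeqeq": "error",         // forbid == coercion surprises
      "no-fallthrough": "error", // missing break in switch statements
    },
  },
];
// In eslint.config.js: module.exports = config; (or export default config)
```

Start strict on a handful of rules and grow the set, rather than enabling everything and drowning in warnings nobody reads.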
Write tests before or during development, not after
You don't need to be a TDD purist to benefit from writing tests early. Even if you don't follow the red-green-refactor cycle to the letter, writing at least the tests for critical paths before calling a task done changes the way you think about the code. It forces you to consider edge cases while the context is still fresh, not two weeks later when you've already moved on to something else.
Design reviews with a risk checklist
Before you start implementing a complex feature, spend thirty minutes reviewing the technical design with a basic risk checklist: failure points, external dependencies, expected data volumes, edge cases. You don't need a formal document; a whiteboard and the checklist are enough. I've seen half-hour sessions prevent weeks of rework.
Continuous integration with tests from day one
If your project doesn't have a CI pipeline running tests automatically on every push, that's the first investment you should make. You don't need a full suite to get started: even a smoke test that checks the application starts correctly already adds value. From there, every test you add makes the safety net stronger.
The key is that tests run automatically and that a failure blocks the merge. If the tests exist but nobody looks at them, it's like having a fire alarm that's disconnected.
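As a reference point, a hedged sketch of what that minimal pipeline can look like as a GitHub Actions workflow. The job name, Node version, and `npm test` command are assumptions; swap in whatever your repo actually runs.

```yaml
# Sketch of a minimal CI workflow (GitHub Actions). Names and versions
# here are placeholders; adapt them to your project.
name: ci
on: [push, pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test  # with branch protection, a failure here blocks the merge
```

The blocking part lives outside the workflow file: enable branch protection so the merge button stays disabled until this job passes.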
An investment that pays for itself
Early testing isn't an extra cost; it's an investment that lowers the total cost of development. Every hour spent reviewing requirements, questioning designs, and writing tests early saves many hours later in debugging, hotfixes, and rework.
The mindset shift is simple but powerful: stop treating testing as the last stop before production, and start treating it as a continuous activity that goes along with development from the very first conversation about a feature.
If I had to pick just one action for this week, it'd be this: review the next user story that comes into your sprint before development starts. Read it through QA eyes, look for what it doesn't say, what it assumes, and what could go wrong. That thirty-minute exercise can save you days of work later.
Third of the seven ISTQB testing principles. You came here from “Exhaustive testing is impossible”, and next up is “Defects cluster together”.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.
If you liked this
Environment variables in E2E scripts: secure secrets in JMO Labs
E2E scripts need sensitive data (API tokens, credentials, private URLs) without it showing up in the code. In JMO Labs we've added script variables with a private mode: they're injected automatically, masked in the logs, and accessed with a clean syntax.

E2E tests that repair themselves: how we built a self-healing pipeline with AI
E2E tests break with every interface change. In JMO Labs we built a 5-phase AI pipeline that plans, executes, repairs selectors, diagnoses failures, and verifies results autonomously. The selector cache makes every run faster than the previous one.

Building a testing platform with Playwright: the JMO Labs architecture
Playwright isn't just for E2E tests. In JMO Labs we use it as a full engine: 9 check phases, a 9-strategy locator with self-healing, video recording, responsive testing with real viewports, and accessibility with axe-core.