The pesticide paradox, the fifth testing principle your team ignores
It's the fifth testing principle, and the one most teams overlook: run the same tests over and over and your suite ends up being an expensive placebo.

Your test suite passes at 100%. Solid green. Nothing has failed for weeks. Sounds good, but there's a problem: bugs still keep showing up in production. Your tests don't catch them because they've been testing exactly the same thing for months, in the same way, with the same data. Welcome to the pesticide paradox.
What the pesticide paradox is
The term was coined by Boris Beizer in Software Testing Techniques (1990), and it's one of the seven fundamental principles of testing according to ISTQB. The idea is simple: if you apply the same pesticide over and over, insects build resistance and stop dying. The same thing happens in testing. If you run the same tests repeatedly, they stop finding new defects.
It's not that the tests are badly written. It's that software evolves, usage patterns change, and risk areas move around. But the tests keep looking where they looked on day one.
Why it happens
There are technical and psychological reasons, and they usually come together:
- Confirmation bias leads us to design tests that validate what we already know works, not what might fail in unexpected ways.
- Suite inertia means that once a test passes, it rarely gets reviewed. A huge suite piles up, eats execution time, and brings less value over time.
- Crystallized knowledge shows up when the same team has spent months testing the same module. They know the system so well that they unconsciously avoid the paths where problems might show up.
- Static test data means always repeating the same test user, the same values, the same combinations. Bugs hiding in extreme ranges or unusual combinations never get touched.
- Coverage as a misleading metric tricks you, because 90% code coverage doesn't mean you're testing 90% of the scenarios. You can have lines covered without the asserts validating anything meaningful, as the sketch below shows.
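To make that last point concrete, here's a minimal sketch (Vitest-style, with made-up names): the function reaches 100% line coverage, yet the only assertion wouldn't notice a broken formula.

```typescript
import { test, expect } from "vitest";

// Hypothetical function under test.
export function applyDiscount(price: number, percent: number): number {
  return price - (price * percent) / 100;
}

// Every line above executes, so coverage tools report 100%...
// ...but the assertion never checks the value. Change the formula to
// divide by 10 instead of 100 and this test still passes.
test("applyDiscount returns something", () => {
  const result = applyDiscount(100, 20);
  expect(result).toBeDefined();
});
```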
Signs it's affecting you
If you recognize several of these situations, the paradox is probably already embedded in your team:
- The suite hasn't found a single bug in weeks, but the support team keeps reporting incidents.
- The new tests you add are just slight variations of existing ones; they don't explore new functionality.
- No one remembers the last time an obsolete test case was removed or rewritten.
- The suite's execution time keeps growing, but the defect detection rate goes down.
- Production bugs always land in areas that “nobody thought to test”.
Practical solutions
The pesticide paradox isn't solved by adding more tests of the same kind. You have to change the approach. These are the strategies that work best in real teams.
1. Rotate QAs between projects
This is, in my experience, the measure with the biggest immediate impact. When a QA has been on the same project for months, they develop blind spots. They know how the system works, they know the shortcuts, and without realizing it, they stop questioning what they've already internalized.
Rotation breaks that inertia. A QA coming fresh into a project:
- Doesn't assume something works because “it's always worked”.
- Asks questions the original team stopped asking a long time ago.
- Brings techniques and patterns from other projects that might apply here.
- Spots inconsistencies in the documentation and flows that the usual team no longer sees.
How to implement it without chaos
- Quarterly or half-year rotations, not weekly ones. You need enough time to bring real value.
- At least two weeks of overlap with the outgoing QA for context transfer.
- Living testing documentation, because if the knowledge only exists in the QA's head, rotation will be painful. That's already a problem you need to fix.
- Don't rotate everyone at once. Keep stability while introducing fresh perspectives.
2. Structured exploratory testing
Automated tests verify what you already know. Exploratory testing discovers what you didn't know could fail. But “exploring freely” without structure isn't very effective. Use mission-based sessions:
- Define a clear goal: “Explore the checkout flow with international cards for 45 minutes”.
- Document what you find in real time (bugs, questions, risk areas).
- When you're done, decide which findings should become automated tests.
Exploratory testing doesn't replace automation, it complements it by covering exactly the gaps the automated suite can't reach.
3. Regular test suite review
Just like you review and refactor production code, the test suite needs active maintenance:
- Remove redundant tests. If three tests are basically validating the same thing with slightly different data, keep the one that's most representative.
- Update the ones that no longer reflect reality. A test that validates a flow that changed six months ago is noise, not coverage.
- Add tests for the areas where recent bugs were found. If production tells you where it hurts, answer with tests that cover exactly that.
A good time for this review is at the end of each sprint or after each release. You don't need a massive audit. Spending just one or two hours per cycle is enough to notice the difference.
4. Mutation testing to measure the real effectiveness of your tests
Code coverage measures how many lines get executed. Mutation testing measures how many artificially introduced bugs your tests would actually catch. Tools like Stryker inject small mutations into the code (changing a > to >=, removing a condition, inverting a boolean) and check whether any test fails.
If a mutation survives, it means your tests wouldn't catch that kind of bug. It's the most objective way to know whether your suite actually protects the code or just walks through it.
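As a starting point, this is roughly what a minimal StrykerJS setup looks like; the globs, test runner and thresholds here are assumptions you'd adapt to your own project:

```javascript
// stryker.config.mjs: a minimal sketch, launched with `npx stryker run`.
// Assumes @stryker-mutator/core plus the runner plugin for your test framework.
export default {
  mutate: ["src/**/*.ts"],           // which files get mutated
  testRunner: "vitest",              // or "jest", "mocha"... whatever your suite uses
  reporters: ["clear-text", "html"],
  coverageAnalysis: "perTest",       // only rerun the tests that cover each mutant
  thresholds: { high: 80, low: 60, break: 50 }, // fail the run below a 50% mutation score
};
```

The number to watch is the mutation score (killed mutants over total mutants), not the coverage percentage.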
5. Vary test data and environments
If you always test with the same test user and the password 123456, you're only validating one path. Introduce variability:
- Random data with libraries like Faker that generate names with special characters, realistic addresses from different countries, and amounts in extreme ranges.
- Property-based testing, where instead of defining specific values, you define properties that must hold for any input. Libraries like fast-check generate hundreds of combinations automatically (there's a sketch of both ideas after this list).
- Different environments, because if you only test in Chrome on macOS, Firefox-on-Linux-specific bugs will never show up.
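Here's a minimal sketch of both approaches, assuming a Vitest-style runner and a made-up function to keep it self-contained; check the @faker-js/faker and fast-check docs for the exact APIs of your versions:

```typescript
import { test, expect } from "vitest";
import { faker } from "@faker-js/faker";
import * as fc from "fast-check";

// Hypothetical function under test (same discount logic as the earlier sketch).
function applyDiscount(price: number, percent: number): number {
  return price - (price * percent) / 100;
}

// Faker: a different realistic user on every run instead of the same hard-coded one.
test("checkout accepts generated users", () => {
  const user = {
    name: faker.person.fullName(),       // may bring accents, apostrophes, long names...
    email: faker.internet.email(),
    country: faker.location.countryCode(),
  };
  // Placeholder assertion: in a real test you'd drive your checkout flow with this user.
  expect(user.email).toContain("@");
});

// fast-check: a property that must hold for *any* valid input, not one hand-picked case.
test("a discounted price never goes below zero or above the original", () => {
  fc.assert(
    fc.property(
      fc.integer({ min: 0, max: 1_000_000 }), // price in cents
      fc.integer({ min: 0, max: 100 }),       // discount percentage
      (price, percent) => {
        const result = applyDiscount(price, percent);
        return result >= 0 && result <= price;
      },
    ),
  );
});
```

The environment axis works the same way: running the suite against more than one browser (for example through Playwright's projects configuration) costs little and surfaces the bugs that only appear outside Chrome.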
6. Pair testing, QA and development together
Put a QA and a developer in front of the same screen. The developer knows the code shortcuts, the race conditions they're worried about, the edge cases they never documented. The QA brings the destructive mindset and the knowledge of how real users interact with the system.
That combination finds bugs neither of them would find alone. It's especially useful before major releases or in critical functionality.
It's not about more tests, it's about different tests
The pesticide paradox is a reminder that testing isn't something you set up once and forget about. It's a living process that needs to adapt to the pace of the software it's protecting.
If your suite hasn't found anything in months, don't celebrate, investigate. Rotate people, vary the techniques, question the data, and review what you already have. The goal isn't to have more tests, it's to have the right tests at each moment.
Start with something small. This week, pick a critical module and ask someone who doesn't know it to test it for an hour. You'll be surprised by what they find.
This is the fifth of the seven ISTQB testing principles covered in this series. The previous post looked at Defects cluster together, and the next one continues with Testing depends on context.
