Testing depends on context: why copying another team's strategy can sink you
You don't test a banking app the same way you test a mobile game. The sixth testing principle explains why there's no universal recipe, and what happens when you copy one.

You've spent months refining your testing strategy. You've got high coverage, green pipelines, end-to-end tests covering the main flows. Then you switch projects and find out none of what you were doing fits. The tests that were essential in your previous team are unnecessary here, and the ones this team urgently needs have never crossed your mind. It's not that your strategy was wrong. It's that testing doesn't work with universal recipes.
The principle many teams ignore
ISTQB's sixth principle says it plainly: testing depends on context. You don't test a banking application the same way you test a mobile game, a pacemaker, or a personal blog. Every system has different risks, different users, different consequences when it fails, and different regulatory constraints. Your testing strategy has to reflect those differences.
It sounds obvious when you read it like that. But in practice, most teams apply the same testing template to every project. The same test pyramid, the same distribution of effort, the same tools. As if context didn't exist.
Three worlds, three realities
To understand why context changes everything, let's look at how software gets tested in three sectors with completely different pressures.
Banking and financial services
In banking, a bug can mean a customer loses real money. Or that the bank breaks a regulation and gets hit with a massive fine. I've worked with teams where every change in the interest calculation engine required full traceability from the requirement to the test case, an audit review, and formal approval before deployment.
Tests aren't just a quality tool, they're regulatory evidence. You need to prove that you tested exactly what the regulation says, with the data the regulation requires, and that you kept the results. Automation matters, but documentation matters just as much, if not more. A test that passes but leaves no auditable trail may as well not exist.
Here, security testing isn't optional: penetration testing, vulnerability analysis, encryption validation. And performance testing has contractual requirements. If the system takes more than two seconds to process a transfer, you're breaching the SLA.
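A contractual latency requirement like that can be turned into an automated check over measured samples. The sketch below is illustrative: the helper names, the sample timings, and the zero-tolerance policy are assumptions, not a real bank's process.

```python
# Sketch: validating a contractual latency SLA from measured samples.
# The sample data and the 2-second limit are illustrative, not from a real system.

def sla_violations(latencies_s: list[float], limit_s: float = 2.0) -> list[float]:
    """Return the measurements that breach the SLA limit."""
    return [t for t in latencies_s if t > limit_s]

def meets_sla(latencies_s: list[float], limit_s: float = 2.0,
              allowed_violation_rate: float = 0.0) -> bool:
    """True if the fraction of breaching samples is within tolerance."""
    if not latencies_s:
        return False  # no evidence is not compliance
    rate = len(sla_violations(latencies_s, limit_s)) / len(latencies_s)
    return rate <= allowed_violation_rate

# Ten simulated transfer timings, in seconds
samples = [0.8, 1.1, 0.9, 1.4, 1.2, 0.7, 1.9, 1.0, 1.3, 0.6]
print(meets_sla(samples))          # True: all under 2s
print(meets_sla(samples + [2.5]))  # False: one breach, zero tolerance
```

In a regulated context you would also persist the raw samples and the verdict, so the check doubles as auditable evidence rather than just a pass/fail gate.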
Startup in validation stage
Now imagine the opposite scenario. A startup with six people, three months of runway, and an MVP that needs to hit the market before the money runs out. If you apply the same testing rigor here as in banking, the company shuts down before launch.
In this context, iteration speed is the absolute priority. The product is going to change radically every two weeks based on feedback from early users. Writing an exhaustive test suite for a feature that will probably be rewritten next month is a waste.
That doesn't mean not testing. It means choosing very carefully what to test. Critical business flows like payments or signup deserve solid tests. Everything else gets covered with exploratory testing, quick smoke tests, and lots of observability in production so you can catch problems before users report them.
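That triage can be as small as a script that exercises only the critical flows. The sketch below is a minimal smoke runner; the flow checks are stubs standing in for real calls to a signup endpoint or a payment sandbox, and the flow names are assumptions for illustration.

```python
# Sketch: a minimal smoke-test runner covering only the critical flows.
# The check functions are stubs; a real version would hit live endpoints.

def check_signup() -> bool:
    # Would create a throwaway account against the signup endpoint.
    return True

def check_payment() -> bool:
    # Would run a test charge against a payment sandbox.
    return True

CRITICAL_FLOWS = {"signup": check_signup, "payment": check_payment}

def run_smoke() -> dict[str, bool]:
    """Run every critical-flow check and report pass/fail per flow."""
    return {name: check() for name, check in CRITICAL_FLOWS.items()}

results = run_smoke()
print(results)
assert all(results.values()), f"smoke failures: {results}"
```

The point isn't the code, it's the scope: two flows, a few seconds of runtime, runnable on every deploy, while everything else is left to exploratory testing and production observability.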
Medical software
With medical devices, testing goes to another level. A failure in a pacemaker or a dosage system can kill someone. That's not an exaggeration, it's the reason standards like IEC 62304 exist, along with certification processes that can take months.
Every requirement has to be linked to specific tests. Every test has to be linked to documented results. Every change, no matter how small, triggers a full regression cycle. The idea of “move fast and break things” simply doesn't exist in this world.
Here, testing doesn't just verify that the software works correctly, it shows that it's safe for clinical use. Traceability isn't bureaucracy, it's what lets you reconstruct exactly what was tested, when, and with what result if something goes wrong with a patient.
The mistake of copying strategies without adapting them
The most common problem I see in teams is importing testing frameworks without asking whether they fit. It happens in both directions.
Small teams copying big companies. They read that Google has millions of unit tests and decide they need 95% coverage too. With a team of three developers and a product that changes every week. The result is a test suite nobody maintains, one that breaks constantly with every UI change and ends up ignored or switched off.
Large teams adopting startup practices. They hear that successful startups deploy ten times a day with minimal tests and decide to relax their controls. In a product with thousands of users, regulatory dependencies, and a team of thirty people where not everyone knows every part of the system. Bugs start reaching production more and more often.
I've also seen more subtle cases. Teams applying the same strategy to all their microservices, when some are stateless and trivial and others handle financial transactions. Or teams testing the internal API with the same rigor as the public API, when the risks and attack surface are completely different.
How to adapt your strategy to the real context
1. Start with risk analysis, not tools
Before deciding what tests you need, ask yourself one simple question for each system component: what happens if this fails? The answers will tell you where to focus your effort.
- If failure affects real money, sensitive data, or people's safety, you need maximum rigor.
- If failure causes a minor annoyance that can be quickly reversed, you can get away with lighter coverage and good observability.
- If the component is temporary or experimental, manual testing and monitoring may be enough.
Not every module in your system deserves the same level of testing. Treating everything with the same intensity is just as inefficient as not testing anything.
2. Know the real constraints of your environment
Your testing strategy has to be viable within the constraints you actually have, not the ones you'd like to have. That includes team size, budget, deadlines, applicable regulations, and the technical maturity of the project.
A two-person team with an aggressive deadline can't afford the same process as a twenty-person team with quarterly release cycles. Recognizing that reality isn't settling, it's professional pragmatism. What matters is getting the most value out of testing within your limits, not pretending those limits don't exist.
3. Adapt the test pyramid to the product
The famous pyramid with lots of unit tests, fewer integration tests, and a handful of end-to-end tests is a reasonable starting point, but not a universal law. In an application with complex business logic and little user interface, unit tests bring a lot of value. In an application that's basically a form connected to an external API, integration and end-to-end tests are far more useful than hundreds of unit tests for trivial validators.
Some teams work better with a “testing trophy” where most of the investment goes into integration, and others where contract tests between services are the key piece. The right shape depends on where the real risks are in your product, not on what some generic article says.
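To see how the shape changes where effort goes, here's a sketch that splits a fixed testing budget across levels for two shapes. The percentages are illustrative assumptions, not recommended values; the exercise is deciding which distribution matches your product's risks.

```python
# Sketch: splitting a testing budget across levels for different suite shapes.
# The fractions are illustrative assumptions, not recommendations.

SHAPES = {
    "pyramid": {"unit": 0.70, "integration": 0.20, "e2e": 0.10},
    "trophy":  {"unit": 0.20, "integration": 0.60, "e2e": 0.20},
}

def hours_per_level(shape: str, total_hours: float) -> dict[str, float]:
    """Split a testing budget (in hours) across test levels for a given shape."""
    dist = SHAPES[shape]
    assert abs(sum(dist.values()) - 1.0) < 1e-9, "distribution must sum to 1"
    return {level: round(total_hours * frac, 1) for level, frac in dist.items()}

print(hours_per_level("pyramid", 40))
print(hours_per_level("trophy", 40))
```

Running both shows the trade-off concretely: with the same 40 hours, the pyramid pours most of the budget into unit tests, while the trophy bets on integration, which is exactly the choice a form-plus-external-API product faces.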
4. Review the strategy regularly
Context changes. A product that started as an MVP can become a business-critical system. A three-person team can grow to fifteen. A new regulation can add traceability requirements that didn't exist before.
Review your testing strategy at least once a quarter. Ask yourself whether the priorities are still the same, whether the risks have changed, whether there are areas of the system that now need more attention. Don't wait for a serious incident to force you to rethink everything.
There is no single right way to test
If experience working on very different kinds of projects has taught me anything, it's that the best testing strategies are the ones designed for a specific context. There are no universal best practices, only practices that fit a given situation.
The ISTQB principle doesn't say that some contexts need more testing and others less. It says they need different testing. A personal blog and a pacemaker may require very different levels of effort, but both need a testing strategy built around their real risks.
The next time someone tells you that “you need at least 80% coverage” or that “end-to-end tests don't scale”, before you accept or reject it, ask yourself: in my context, with my risks, with my team and my constraints, does that make sense? Sometimes the answer will be yes. Other times you'll realize the generic recipe doesn't apply, and that the best decision is to design your own strategy from scratch.
This is the sixth of the seven ISTQB testing principles. Previous in the series: The pesticide paradox. Next: The fallacy of the absence of errors.

Jose, author of the blog
QA Engineer. I write out loud about automation, AI and software architecture. If something here helped you, write to me and tell me about it.
What did you think? What would you add? Every comment sharpens the next post.
If you liked this
Environment variables in E2E scripts: secure secrets in JMO Labs
E2E scripts need sensitive data (API tokens, credentials, private URLs) without it appearing in the code. At JMO Labs we've added script variables with a private mode: they're injected automatically, masked in the logs, and accessed with a clean syntax.

E2E tests that repair themselves: how we built a self-healing pipeline with AI
E2E tests break with every interface change. At JMO Labs we built a 5-phase AI pipeline that plans, executes, repairs selectors, diagnoses failures, and verifies results autonomously. The selector cache makes each run faster than the previous one.

Building a testing platform with Playwright: the architecture of JMO Labs
Playwright isn't just for E2E tests. At JMO Labs we use it as a full engine: 9 check phases, a 9-strategy locator with self-healing, video recording, responsive testing with real viewports, and accessibility with axe-core.