Cognitive Biases in Software Testing: A Guide to Overcoming Them
We are humans, and humans sometimes make mistakes. We make hundreds of decisions every day, and some of those decisions are shaped less by rationality than by cognitive biases.
Anyone, including testers, can fall into the traps of cognitive biases. These biases are the result of years of evolutionary adaptation: they let us make quick judgements (we all want to survive), but quick judgements aren't always the best ones.
In this article, we’ll explore the most common cognitive biases and how you, as a QA tester, can overcome them while writing and running your tests.
The Origin of Cognitive Bias
A book that covers the topic of cognitive biases well is Thinking, Fast and Slow by Daniel Kahneman.
In this book, Kahneman describes two modes of thinking, fast and slow, which he calls System 1 and System 2.
- System 1 (Fast Thinking): This is our intuitive, automatic, and emotional way of thinking. It operates quickly and efficiently, but it often relies on heuristics (mental shortcuts) that can lead to cognitive biases.
- System 2 (Slow Thinking): This is the more deliberate, analytical, and effortful mode of thinking. While it can be more accurate and rational, it is slower and requires more mental energy, so we don’t always engage it.
Cognitive biases arise from System 1 thinking. A good example can be found in the snake detection theory. Our primate ancestors had to stay constantly alert for venomous serpents, and that pressure eventually shaped our ability to instantly trigger fight-or-flight mode when we see anything with an elongated, slender shape (just like a snake) in the wild.
That's just basic pattern recognition, but back then, this very skill helped us survive. It's safer to mistake a long rope for a snake than to get bitten by those fangs. Scientists have proposed that primates that were better at recognizing snakes had a much higher survival rate and passed the skill on to their offspring. In fact, without that cognitive bias, we might not have survived as a species.
Over time, our brains evolved beyond recognizing threats like snakes. We developed a more sophisticated pattern-recognition system to protect us from a wider range of dangers, such as spiders and other predators. Today, we use heuristic thinking to deal quickly with tasks that don't require much brain-power. While these mental shortcuts can still be useful in some situations, they often lead to unnecessary errors, especially in modern contexts where the threats are less obvious.
And we need to rise above those primitive impulses.
Common Cognitive Biases in Software Testing
1. Confirmation Bias
Confirmation bias is the tendency to search for, interpret, and remember information in a way that confirms our pre-existing beliefs.
In other words, we naturally focus on evidence that supports what we already believe, while disregarding or downplaying evidence that contradicts it. This happens frequently in communities where participants echo each other's beliefs and reject opposing viewpoints (known as echo chambers).
How it happens in software testing: testers may favor positive tests over negative ones, or cherry-pick the tests they know will confirm their existing hypothesis while avoiding rare edge cases or unusual user inputs that could cause a failure.
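To make this concrete, here is a minimal pytest sketch. The `validate_username` function and its rules are hypothetical stand-ins; the point is that the parametrized negative cases actively try to break the hypothesis instead of confirming it:

```python
import pytest

# Hypothetical function under test: accepts 3-20 character alphanumeric usernames.
def validate_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 20

# The "comfortable" positive case a biased tester might stop at.
def test_valid_username_is_accepted():
    assert validate_username("alice42")

# Negative and edge cases that challenge the hypothesis instead of confirming it.
@pytest.mark.parametrize("name", [
    "",            # empty input
    "ab",          # one character below the minimum length
    "a" * 21,      # one character above the maximum length
    "alice 42",    # embedded whitespace
    "alice!",      # special character
])
def test_invalid_username_is_rejected(name):
    assert not validate_username(name)
```

If only the first test existed, the suite would "pass" no matter how badly the function mishandled empty or malformed input.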
Confirmation bias can even happen in a team setting. If a team has been working on a product for a long time and all initial tests are passing, they might assume the software is stable and avoid running more thorough tests for fear of delaying the release.
Team members can collectively convince themselves that the software is functioning well. They may resist exploring deeper because it could challenge their collective belief that the product is ready for release.
How to overcome:
- Introduce Automated Testing: automating thorough tests ensures that biases don’t cause any steps to be skipped. Automation can quickly run comprehensive tests without the time pressures that might lead to biased decision-making (see the property-based sketch after this list).
- Establish Clear Testing Protocols: set predetermined testing standards that require thorough tests, even when initial results are positive. This helps avoid the temptation to skip crucial steps based on assumptions of stability.
- Use Data-Driven Decision Making: encourage decisions based on objective data rather than gut feelings. Implement metrics and analytics that show potential areas of risk, requiring the team to back up their assumptions with evidence.
- Promote a "Testing is Learning" Mindset: shift the team's focus from seeing tests as hurdles to overcome to seeing them as opportunities for learning. Remind the team that finding issues before release leads to better long-term outcomes, even if it challenges their belief in the product’s readiness.
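One way to put the automation and data-driven points above into practice is property-based testing, which generates inputs no one on the team thought to write down. Below is a minimal sketch using the hypothesis library; the `apply_discount` function and its invariants are hypothetical:

```python
from hypothesis import given, strategies as st

# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Instead of hand-picking "happy" inputs, let hypothesis generate many
# price/percent combinations and check an invariant that must always hold.
@given(
    price=st.floats(min_value=0, max_value=1_000_000, allow_nan=False),
    percent=st.floats(min_value=0, max_value=100, allow_nan=False),
)
def test_discounted_price_never_exceeds_original(price, percent):
    discounted = apply_discount(price, percent)
    assert 0 <= discounted <= price + 0.01  # small tolerance for rounding
```

Because the tool generates hundreds of cases per run and shrinks any failure to a minimal example, it counteracts the human tendency to test only the inputs we expect to work.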
Read More: 9 Core Benefits of Automation Testing
2. The Golden Hammer Bias
You know the saying: if all you have is a hammer, everything looks like a nail.
The Golden Hammer Bias happens when a tester or a team uses a familiar tool, technology, method, or approach to solve a wide range of problems, regardless of whether it is the most suitable solution. We gravitate toward the comfort and past success associated with that familiar tool, which leads to overuse even in situations where other, more appropriate solutions exist.
How it happens in software testing: A good example is sticking to manual testing for everything. Picture a veteran tester who's spent years manually clicking through test cases, feeling the satisfaction of finding bugs. It’s what they know, what they’re good at. It’s the hammer they've used to build their testing career. But as the project scales and test cases pile up, this manual process begins to drag. What started as a meticulous method becomes a bottleneck.
Manual testing is invaluable, especially for creative, exploratory sessions where you interact with the product like a real user. But using it for everything, especially the repetitive work, is labor-intensive.
The same can be said of automation overload. Some teams go all-in on automation, and it’s easy to fall into the trap of thinking automation covers everything. It doesn’t. Usability, exploratory, and even some security testing need human eyes and intuition. Otherwise, you’ll end up with a system that’s functionally sound but frustrating to use.
How to overcome: strike a balance between approaches. It’s essentially “do not put all your eggs in one basket”: diversify, and remember that it’s all about balance. The key is knowing when to switch tools, when to step back, and when to recognize that the comfort of familiarity might actually be slowing you down.
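As a rough illustration of that balance, the sketch below automates the repetitive smoke checks (the endpoints and environment URL are hypothetical) while deliberately leaving exploratory and usability work to humans:

```python
import pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment

# Repetitive, deterministic checks: a good fit for automation.
@pytest.mark.parametrize("path, expected_status", [
    ("/health", 200),
    ("/login", 200),
    ("/nonexistent-page", 404),
])
def test_smoke_endpoints(path, expected_status):
    response = requests.get(BASE_URL + path, timeout=5)
    assert response.status_code == expected_status

# Usability, exploratory, and ad-hoc security probing stay manual:
# no assertion can replace a human noticing a confusing flow.
```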
Learn More: How To Go From Manual To Automation Testing?
3. The Availability Heuristic
The availability heuristic is a cognitive bias where people rely on the examples or information that come to mind most readily when making decisions or judgments. This mental shortcut often leads us to overestimate the likelihood of events based on how easily we can recall similar instances, regardless of how rare or common those events actually are.
How it happens in software testing: let's say you encounter an error in a database query and assume it's the common query-syntax problem you've seen before, when the actual root cause is something more obscure, like a configuration problem or a database connection timeout. Another example: a tester who has used the same set of functional test cases across multiple projects might reuse them without adapting them to the unique requirements or challenges of the current project.
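One cheap safeguard is to triage the actual failure before reaching for the familiar explanation. The sketch below is illustrative and not tied to any specific database driver; the classification rules are assumptions:

```python
import socket

def triage_db_error(exc: Exception, host: str, port: int) -> str:
    """Classify a database failure instead of assuming 'query syntax' by default."""
    # 1. Rule out connectivity first: can we even reach the server?
    try:
        with socket.create_connection((host, port), timeout=3):
            pass
    except OSError:
        return "connectivity: database host unreachable or port closed"

    # 2. Timeouts point at load or configuration, not syntax.
    if isinstance(exc, TimeoutError):
        return "timeout: check pool size, locks, or slow queries"

    # 3. Only now consider the 'familiar' explanation.
    if "syntax" in str(exc).lower():
        return "syntax: the query itself is likely malformed"

    return "unknown: gather more evidence before committing to a cause"
```

Working through the less memorable causes first forces you to eliminate them with evidence rather than dismiss them from memory.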
How to overcome:
- Data-driven testing prioritization: instead of relying on memory, use data to prioritize testing efforts. Bug tracking systems can provide historical data on the most common and impactful issues. Analyze trends and patterns to decide where testing should focus based on actual defect rates and criticality, not just what feels most memorable (see the sketch after this list). Learn more about test case management.
- Risk-Based Testing: use risk analysis to guide testing efforts. Focus on high-risk areas of the software based on the business impact, user behavior, and technical complexity, rather than relying on gut feeling or past experiences alone. This ensures critical features or potential failure points get sufficient attention.
- Peer Review: having multiple testers or developers review test plans can reduce the bias of any single individual. This brings different perspectives to the testing process, ensuring a wider range of potential risks and issues are considered, beyond just the experiences of a single tester.
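As a small example of data-driven prioritization, the sketch below ranks components by historical defect count from a hypothetical bug-tracker CSV export (the column name is an assumption; adapt it to your tracker's schema):

```python
import csv
from collections import Counter

def top_risk_components(csv_path: str, n: int = 5) -> list[tuple[str, int]]:
    """Rank components by historical defect count from a bug-tracker CSV export.

    Assumes the export has a 'component' column; adjust to your tracker's schema.
    """
    counts: Counter[str] = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["component"]] += 1
    return counts.most_common(n)

# Example: focus the next test cycle on the components that actually fail most.
if __name__ == "__main__":
    for component, defects in top_risk_components("bugs_export.csv"):
        print(f"{component}: {defects} historical defects")
```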
Conclusion
Cognitive biases are unavoidable—but they don't have to hold you back! The key is to stay curious and open to new perspectives. Embrace testing as a learning process, and you'll not only catch more bugs—you'll grow as a QA professional.
Want to level up your skills as a tester? Check out the courses we offer at Katalon Academy: