The problem with False Dichotomy:

[Image: A fragile rope bridge connects two cliffs labeled 'THIS' and 'THAT' over a foggy, rocky chasm.]

Cognitive bias in Software Testing:

The bias of False dichotomy

Are you testing software with a sense of false dichotomy? Have you ever questioned such biases? This blog post shows how biased we can be. In the world of software testing, biases can color our perceptions and decision-making processes. It’s crucial to recognize these biases and consider new perspectives, particularly when evaluating the effectiveness of our testing strategies.

“Manual Testing… has become as stale as last week’s bread! It has become so predictable lately. If you don’t have a clue about testing, you might as well be trying to juggle jellybeans,” joked a senior executive of a company during a 1:1 interview. This statement encapsulates a common sentiment in the industry: many believe that manual testing lacks the excitement and dynamic nature it once had. However, we must explore how to revitalize this field and encourage testers to think outside the box.

Dealing with biases like False dichotomy

Let us see. It’s essential to delve deeper into the various testing methodologies and how they impact the overall quality of software. For instance, traditional testing methods often rely heavily on scripted processes that can stifle creativity. In contrast, more exploratory approaches allow testers to use their intuition and experience, leading to potentially more insightful outcomes.

You probably have a test report that boasts a shiny Pass/Fail column for every test case you run. But let’s be real: software testing is like dating; it’s far more complicated than just swiping left or right! Sure, a black-and-white binary works wonderfully for code. Still, when it comes to evaluating software, it’s about as useful as a chocolate teapot. Uncovering risks is a wild ride, and it sometimes involves testing methods that leave you scratching your head, with no test cases to guide you at all.
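To make that point concrete, here is a minimal sketch of a test-result model richer than a binary Pass/Fail column. The `Outcome` and `TestObservation` names (and the example observations) are illustrative inventions for this post, not part of any real framework:

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    """Richer outcomes than a binary Pass/Fail column."""
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"              # environment or data prevented the check
    INCONCLUSIVE = "inconclusive"    # behavior observed, but the oracle is unclear
    RISK_NOTED = "risk_noted"        # nothing scripted failed, yet a risk surfaced

@dataclass
class TestObservation:
    area: str
    outcome: Outcome
    notes: str = ""

observations = [
    TestObservation("login", Outcome.PASS),
    TestObservation("checkout", Outcome.RISK_NOTED,
                    "Double-clicking 'Pay' may submit the order twice"),
    TestObservation("reports", Outcome.INCONCLUSIVE,
                    "Rounding differs from the spec; oracle unclear"),
]

# A report that surfaces risks, not just a Pass/Fail tally.
risks = [o for o in observations
         if o.outcome in (Outcome.RISK_NOTED, Outcome.INCONCLUSIVE)]
for o in risks:
    print(f"{o.area}: {o.outcome.value} - {o.notes}")
```

Notice that the two most interesting findings here would vanish entirely in a report that only counted passes and failures.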

Software testing strategies:

Finding solutions to False Dichotomy:

Hello, chaos! Let us consider the following techniques, which have no test cases yet are useful in uncovering risks, as just a couple of examples. Exploratory Testing encourages creativity and spontaneity, finding defects that scripted tests will not capture. Rapid Testing focuses on delivering feedback quickly while prioritizing risk-based evaluations. It’s time to embrace these methods and rethink our testing strategies.
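As a rough illustration of what risk-based prioritization can look like, the sketch below scores hypothetical exploratory charters by impact times uncertainty and time-boxes a session around the riskiest ones. The `Charter` class, the scoring heuristic, and the example missions are all assumptions made up for this sketch, not a standard formula:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """An exploratory-testing charter: a mission to investigate, not a script."""
    mission: str
    impact: int       # 1-5: damage if this area fails in production
    uncertainty: int  # 1-5: how little we currently know about this area

    @property
    def risk(self) -> int:
        # Simple heuristic: explore high-impact, poorly understood areas first.
        return self.impact * self.uncertainty

charters = [
    Charter("Explore payment retries under a flaky network", 5, 4),
    Charter("Explore profile-photo upload edge cases", 2, 3),
    Charter("Explore report export with huge datasets", 4, 4),
]

# Order the session by risk: the riskiest charters get explored first.
for c in sorted(charters, key=lambda c: c.risk, reverse=True):
    print(f"risk={c.risk:2d}  {c.mission}")
```

The point is not the arithmetic; it is that the tester spends the time box where failure would hurt most and knowledge is thinnest, rather than marching through a fixed script.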

The case for Testing as an activity opens a whole new can of sardines. It does so with or without test cases! Imagine testers entering a Zen state. They become de-focused and float through a luxurious sea of chaos. They are not tied down to a set of boring test scripts. Now, some management folks argue, “But we need those test cases! They’re our safety net!” Sure, they are — like having a life preserver in a kiddie pool. But when we rely too heavily on them, we may overlook critical issues that arise in real-world scenarios.

Furthermore, the complexity of software applications today necessitates a multifaceted testing approach. Instead of solely relying on pass/fail metrics, it may be beneficial to implement a more robust framework that includes risk assessment and exploratory techniques. This allows for a more comprehensive understanding of software quality and user experience.

Let’s consider the situation. How many risks have your testers found? They are valiantly fighting the good fight against the tyrannical test case execution. Have they donned their capes of critical thinking? Or are they too busy racing against the clock to finish those tests and hit the nearest coffee machine? It’s sad, really. Many companies have inadvertently turned their testers into mere Zombies of the Test Case Brigade. These testers stumble around in a Pass or Fail world. Let’s be honest, it feels like a false dichotomy. However, by encouraging a culture of exploratory testing, we can empower testers to embrace creativity and innovation, transforming them from mindless executors into insightful contributors.

Do write about your thoughts on the false dichotomy of Pass/Fail status in software testing in the comments below. Your insights could spark a discussion that leads to greater understanding and innovation in our testing practices. Let’s challenge the norms and embrace a more holistic view of software quality, moving away from the rigid constraints of traditional metrics and test-case-driven processes!

Moreover, engaging in collaborative testing sessions can drastically enhance the testing process. By involving developers and other stakeholders in the testing phase, teams can gain insights that lead to a more thorough understanding of the software’s strengths and weaknesses. This collaboration can also foster a culture of quality across the organization. For example: instead of writing test plans and test cases up front, set up a live session on an open-source product with a few testers and start exploring the app. Share your views, but watch for your biases, confirmation bias being one of them.

It’s also important to recognize that the role of a tester is evolving. With the advent of automation and AI, testers are now expected to possess a broader skill set. This includes understanding programming languages, working with automation tools, and being able to analyze complex data trends. Embracing this evolution means moving beyond the traditional notions of testing and redefining what it means to ensure quality in software development.

Conclusion:

A false dichotomy is good for simplifying a difficult task like software testing. However, it can also limit testing and impede value addition.


