When creating a strategy for software testing, it is important to know why testing is a priority right now. For example, if you are working at a startup or on a greenfield application with no users, a defect isn’t impacting anyone, and it could be argued that finding it isn’t particularly valuable. If, on the other hand, you have 1,000 paying customers using software that is critical to running their businesses, the risk is extremely high. Where you land on that scale will shape your strategies and approaches. What I will attempt to lay out below are general strategies that can be applied effectively to many different contexts.
Ditch Your Test Cases
Yes, you read that correctly. Ditch them, or at least file them away in a nice safe place that you can reference later. This may sound crazy to some, but let me propose a question: how many bugs have your test cases actually uncovered? That number shouldn’t include the execution where the test case itself passed, but you noticed something was off that the test case never called out. Keeping test cases up to date is a difficult task in itself, and it can be really frustrating as requirements and areas of the system change due to new learnings from your customers. So what should you do instead of creating and executing test cases? The remainder of this article walks through ways to get by without them.
Work Towards a High-Level Understanding of the Software You Are Testing
This is the biggest factor in ditching the mountain of test cases you and your company have accrued. The team’s mindset has to change from being handed exact inputs and outputs designed months or years ago to exploring the system for themselves. This can be accomplished in many different ways. Within my company, we have a self-paced training video series for new team members that covers customer-facing product features along with administration configuration pages. This has allowed our testing team to approach testing from a systems-minded perspective. When looking at how the system operates as a whole, our team has a greater understanding of how different areas of the software influence one another. Having a broad understanding of the system is a great launchpad into deeper learning and exploration of new or existing features within the system.
Create and Use a Feature Map
Another tool we use to get to that high-level understanding is a feature map. Our feature map consists of a list of all the pages in the system, each with a brief description or a list of the links and features available on that page. It is another tool new team members can use to get familiar with the software, or that anyone can reference when exploring a new area of the system, and it helps visualize the system as a whole. It can also be used to regress the software for code version upgrades, infrastructure migrations, or risky site-wide changes such as user permission updates. This simple document doesn’t go into much detail about how the features work; it is used more as a tool to spark testing ideas or as a reminder of the scope of certain features.
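Ours lives in a simple shared document, but to make the shape concrete, here is a minimal sketch of what a couple of entries might look like if you kept the map in a structured form. The pages and features named here are hypothetical, not taken from our actual product:

```python
# A hypothetical feature map: page -> short description + the features on it.
# The real thing can just as easily live in a wiki page or a spreadsheet.
feature_map = {
    "Invoices": {
        "description": "List of customer invoices with filtering and export",
        "features": [
            "Filter by status, date range, and customer",
            "Bulk export to CSV",
            "Link to the invoice detail page",
        ],
    },
    "Admin > User Permissions": {
        "description": "Role and permission management for administrators",
        "features": [
            "Assign roles to users",
            "Create custom roles",
            "Audit log of permission changes",
        ],
    },
}

# Handy as a checklist when regressing a risky site-wide change:
for page, info in feature_map.items():
    print(f"{page}: {len(info['features'])} features to consider")
```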
Work Towards a Deep-Level Understanding of Specific Features Within Your Team
After a new team member has gone through the onboarding process, the real fun begins. Around week three, a new tester joins a product delivery team consisting of developers, a designer, a product manager, and tester(s). Within this team, the tester is an integrated part of an agile team, participating in sprint planning, refinement meetings, and retrospectives. On these delivery teams, we add new functionality to the system through new features, integrations, refactors, and bug fixes. In each of these areas, developers and testers work together to refine user stories, establish acceptance criteria, and give clear technical direction on how the problem will be solved. Refinement is also a great place for testers to challenge acceptance criteria and to talk with developers about the specific areas where you plan to apply exploratory testing. By doing this, it is actually possible to find bugs before they are even coded, which is a huge efficiency gain. Identifying these complex or problematic areas comes from knowing the new feature that is being built and being involved in the developer discussions. Knowing whether the developer plans to solve the problem with an asynchronous or a synchronous solution can change the way you test a feature. Gaining an in-depth understanding of the new feature or functionality being delivered allows the team to deliver higher-quality software by identifying and fixing those not-so-obvious defects.
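To make the asynchronous-versus-synchronous point concrete: if a feature is built on a background job, a check can’t assert on the result immediately after triggering it; it has to poll (or otherwise wait) for completion. The sketch below assumes a made-up report-export API and is only an illustration of the shape such a test takes:

```python
import time
import requests

BASE_URL = "https://app.example.com/api"  # hypothetical application under test


def wait_for_export(job_id: str, timeout_s: int = 60) -> dict:
    """Poll a hypothetical asynchronous export job until it finishes or times out."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(f"{BASE_URL}/exports/{job_id}")
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in ("complete", "failed"):
            return job
        time.sleep(2)  # give the background worker time to run
    raise TimeoutError(f"Export {job_id} did not finish within {timeout_s}s")


def test_export_completes():
    # Kick off the asynchronous job, then poll for the result instead of
    # asserting immediately, as a synchronous endpoint would allow.
    resp = requests.post(f"{BASE_URL}/exports", json={"report": "invoices"})
    resp.raise_for_status()
    job = wait_for_export(resp.json()["id"])
    assert job["status"] == "complete"
```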
Create Good Documentation for Things That Are Not Straightforward
The Agile Manifesto contains the statement ‘Working software over comprehensive documentation’, with the follow-up ‘That is, while there is value in the items on the right, we value the items on the left more.’ I’ve found that the majority of features in well-designed software are very straightforward and don’t require documentation in order to understand what the product is supposed to do. In situations like these, having test cases for simple tasks makes no sense to me. However, there are certain things that may require well-written, easy-to-follow documentation. The question I ask my team when deciding whether something needs documentation is: “Did you have to have a conversation with anyone to be able to complete the testing?” If the answer is yes, we probably need some sort of documentation. Some examples would be testing complex batch jobs, third-party integrations, hidden features that may not be visible within the software, or technical testing tasks. For those not-so-straightforward items, our test team has a shared wiki where we can share and update information on how to test these complex areas. The idea is that once one tester has figured it out, the documentation should be able to walk any other tester, even one without deep knowledge of the feature, through it.
Write Automated Checks for Your Critical Paths
One of the main reasons I believe test cases became the standard across most test organizations is that our software systems have critical paths that should always work. This is a great time to go back to that nice safe place where you put your test cases and begin reviewing and prioritizing the most critical paths. These are the areas our users rely on most heavily, and they should behave consistently. For these critical paths, I recommend having some sort of automated test coverage. This is not a job for your summer test intern to take on. To write reliable, sustainable, consistent automated tests, you have to rely on your developers for guidance, or on someone with proven success in test automation. This is not easy to get right, but once you have a good, reliable suite of automated tests, you will have higher confidence in the quality of your releases.
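What those checks look like will depend entirely on your stack; as a rough illustration only, here is a minimal pytest sketch of API-level smoke checks for a couple of hypothetical critical paths (the endpoints and credentials are made up):

```python
import requests

BASE_URL = "https://app.example.com"  # hypothetical application under test


def test_login_critical_path():
    """Smoke-check the sign-in flow that every customer depends on."""
    session = requests.Session()
    resp = session.post(
        f"{BASE_URL}/api/login",
        json={"email": "smoke-test@example.com", "password": "not-a-real-secret"},
    )
    assert resp.status_code == 200
    assert "token" in resp.json()


def test_invoice_list_loads():
    """Smoke-check that the data endpoint behind a core screen responds."""
    resp = requests.get(f"{BASE_URL}/api/invoices", timeout=10)
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)
```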
Focus on Targeted Regression Testing
In 2014, our full regression testing cycle took one team member two weeks to complete; we would test our way through the feature map. Those were the days when we had one release every two months. A lot has changed since then. Now we deploy major features on a two-week cadence, with the ability to deploy bug fixes on any day at any hour. Alongside the automated tests for our critical paths, we still do targeted regression. Prior to a release, we have a code freeze, during which no new code is added to the proposed release branch. During this time, the test team works together to review the features and fixes that are going to be released and to assess the risk of each item. We do quick, targeted regression testing to ensure there were no code merge issues and that what will be deployed is consistent with the testing previously done by the tester on the feature team.
How Do You Track What Has Been Tested?
Rather than using test cases, we use test sessions. On every user story or bug ticket, we track what was tested, who tested it, in what environment, and with what test data. We also use these sessions to communicate everything we tested in order to mitigate the risks. The idea is that when the ticket is ready to be released, any member of the team can review the session and have a solid understanding of what has been tested.
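We keep these sessions directly on the ticket, but the fields are simple enough to sketch. The structure below is a hypothetical illustration of the kind of record a session captures, not the actual format of our tool:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestSession:
    """A lightweight record of one test session attached to a ticket (illustrative)."""
    ticket: str                  # user story or bug ID, e.g. "APP-1234" (made up)
    tester: str
    environment: str             # e.g. "staging" or "release-candidate"
    test_data: str               # where the data came from / how it was set up
    charter: str                 # what the session set out to explore
    notes: list[str] = field(default_factory=list)            # what was actually covered
    risks_mitigated: list[str] = field(default_factory=list)  # risks the testing addressed
    date_tested: date = field(default_factory=date.today)


# Example of what a session on a ticket might look like:
session = TestSession(
    ticket="APP-1234",
    tester="jane.doe",
    environment="staging",
    test_data="Sandbox account seeded with 50 sample invoices",
    charter="Explore the new invoice bulk-export flow",
    notes=["Exported 1, 50, and 500 invoices", "Checked behavior when the job times out"],
    risks_mitigated=["Large exports blocking other background jobs"],
)
```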
In Conclusion
I don’t hate test cases. I think there is a time and a place where a solid set of test cases makes a lot of sense. I would also never tell a tester on my team or another team that their work is less valuable because they use lots of test cases. I would, however, argue that the techniques outlined above are more efficient ways to test a bug or a feature than executing a batch of test cases. Don’t believe me? Go try it for yourself!