Tuesday, June 13, 2017

Pragmatic Test Case Management

Having recently worked on a waterfall project where 600 test cases were documented and 200 defects logged, I wondered at the end of the project just how much time and effort had gone into creating this documentation, and of what use it would be now that the project was delivered. I asked the tester involved how many of the 600 test cases were important enough to be reused as regression tests in the future. I was surprised by the answer: only around 30 of the test cases could be used to validate the project's features in future regression campaigns. To me this means that a stunning 95% of the effort spent on test documentation was of no future value. I also asked whether any project member or senior manager had asked for test case traceability or test/defect metrics; the answer was again a surprising no. So what was the point of all this documentation, and surely couldn't a more pragmatic approach be employed?
So as I and the tester involved move on to our next project, which is using Agile, I wanted to share some guiding principles for reducing the documentation burden. Below are my thoughts.
Only document test cases that you will execute in the future
What this means is that if a test case validates part of a feature but will not be used in future regression campaigns, then by all means run that test - but do not document it! If you're never going to execute it again, that test case simply becomes a statistic, and if no one on the project is looking for statistics, it's a waste of your time. An example of this is checking that a web page uses certain fonts: yes, it's a good test when validating that requirement, but is it important enough to keep including in each regression campaign? I would say no, it isn't.
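As a minimal sketch, here is what such a throwaway check might look like in Python with Selenium (the URL, the locator and the required font are all assumptions for illustration) - worth executing once against the requirement, but not worth documenting as a regression test:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical one-off check: does the page heading use the required font?
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")  # assumed page under test
        heading = driver.find_element(By.TAG_NAME, "h1")
        font = heading.value_of_css_property("font-family")
        assert "Roboto" in font, "Heading is not using the required font: " + font
    finally:
        driver.quit()

Run it, confirm the requirement, and move on - no test case record needed.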
Test cases are only kept up to date if they are executed regularly
A test that has been documented but for one reason or another has not been executed for some time gets out of date. The application changes, and no tester is re-executing that test and updating its documentation. Multiply that by hundreds or thousands of tests and you have an entire test library that is ageing, losing value and increasing the maintenance debt on testers. Further, when automation engineers turn their attention to these tests, they run into the problem that they don't have accurate test definitions to work from, and a whole round of reviewing and updating the documentation has to be done first. This leads to:
Delete test cases that have not been executed for more than X releases
If we are really only documenting the important tests, then the real litmus test of their importance is whether they are actually included in future regression campaigns. If a test has not been included in the regression campaigns of the last X releases (or alternatively Y sprints, or Z months), then the test really isn't as important as originally thought, and can safely be deleted or archived.
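This rule is mechanical enough to script. Below is a toy sketch in Python of the pruning rule, assuming a CSV export from your test management tool with "name" and "last_executed_release" columns (the file name and column names are assumptions for illustration):

    import csv

    X = 3                  # releases of inactivity we tolerate
    CURRENT_RELEASE = 42   # assumed numeric release identifier

    # Hypothetical export from the test management tool.
    with open("test_inventory.csv") as f:
        tests = list(csv.DictReader(f))

    # Keep tests executed within the last X releases; flag the rest for archiving.
    keep = [t for t in tests
            if CURRENT_RELEASE - int(t["last_executed_release"]) <= X]
    archive = [t for t in tests if t not in keep]

    print("Keeping %d tests, archiving %d" % (len(keep), len(archive)))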
Conversations replace defect reports
I often say to testers: don't just log defects and expect them to be fixed, have a conversation first. In a truly pragmatic setting, the conversation itself can replace the defect report. To promote collaboration, every observed defect should first be discussed between the developer and the tester. Really, defect reports should only be used in cases where the developer is busy and needs a reminder to come back to that conversation later.
When a test is automated the test documentation can be deleted
To me, the only tests that need documenting are the ones that require manual execution. Automated tests become the documentation for those tests. With this in mind, there is no need to spend time keeping the automated test and its associated documentation both up to date: the automated test IS the test case, and only it needs executing and maintaining.
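For this to work, the automated test has to read as documentation. A minimal, self-contained sketch in Python (pytest style; the discount function is a stand-in for real application code) shows the idea - the test names and assertions carry the intent, so there is nothing separate to keep in sync:

    def apply_discount(total, code):
        # Stand-in for the application code under test.
        return round(total * 0.9, 2) if code == "SAVE10" else total

    def test_save10_code_reduces_order_total_by_ten_percent():
        assert apply_discount(20.00, "SAVE10") == 18.00

    def test_unknown_discount_code_leaves_total_unchanged():
        assert apply_discount(20.00, "BOGUS") == 20.00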
When I presented these ideas, the reaction was some rather confused-looking testers! To traditional testers, NOT writing tests, NOT writing defect reports and DELETING test documentation seems somewhat heartbreaking, taking away from them any measure of their performance. Gone will be the days when a tester can report "I wrote 600 test cases and logged 200 defects". My reply to this is that in an Agile setting, writing documentation, and of course finding defects, reduces the team's velocity. We only need to measure the output of the team; when this takes a dip, a sprint retrospective can give more answers than pure statistics. All agile team members get measured by their collective output and the number of production defects.
I'd be interested to hear from others who have employed more pragmatic approaches to test and defect management.

Expected Conditions and Interactions

Getting stable and reliable Web test cases is the goal for any automation engineer. Part of providing test reliability is to check for a par...