Friday, May 21, 2010

Reading Hidden Messages

As a Test Engineer, or more specifically as a member of the Quality Control team, I was often frustrated when I looked at defects and the information provided with them. But as time passed, I realized that a lot of the useful information found in the defect management system couldn't be reported using metrics. The following were the hidden messages that were important to decipher.

Defect Description and Comments from Developer(s)
Much of what turned out to be useful information was found in the defect description and the subsequent developer comments. What mattered was how the description was written, not just what the information was. While some testers reported defects in step-by-step detail, including preconditions and all relevant data, others simply stated that the system had crashed or that a calculation was wrong.

The wide variety in reporting styles told me a number of things. It told me who the experienced and confident testers were, and who were not so confident. It told me which technical areas caused frustration, which became apparent in the language ("Surely the developers can see . . .", "This is the third time . . ."). This information added to the weight of evidence that certain functional areas were high risk.
The developers' replies and the subsequent debate also told me something about the relationship between the developers and the testers. Again, the language defined much of this, providing some clear evidence that relations weren't as good as they could have been. The number of replies (often three, four, five, or more from each side) indicated either that the defect resolution process wasn't working and improved contact between development and the test team was needed, or that there was some confusion about what the requirements were.
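None of this shows up in a metrics report, but it is easy to mine from a raw export of the defects. Here is a minimal sketch of the idea in Python; the record layout (id, description, comments) and the phrase list are my own assumptions for illustration, not something any particular tool provides.

```python
# Hypothetical mining of an exported defect list for the two signals
# above: frustrated language in descriptions, and long back-and-forth
# comment threads. Adapt the field names to your tracker's export.

FRUSTRATION_PHRASES = ("surely the developers", "this is the third time")

def flag_signals(defects, max_replies=4):
    """Yield (defect_id, signal) pairs that deserve a human look."""
    for d in defects:
        if any(p in d["description"].lower() for p in FRUSTRATION_PHRASES):
            yield d["id"], "frustrated language in the description"
        # A long back-and-forth suggests a broken resolution process
        # or confusion over the requirements.
        if len(d["comments"]) > max_replies:
            yield d["id"], f"{len(d['comments'])} replies between dev and test"

# Two made-up defects to show the idea:
defects = [
    {"id": "DEF-101",
     "description": "Surely the developers can see the total is wrong.",
     "comments": ["dev: works for me", "test: steps attached",
                  "dev: not a bug", "test: see requirement 4.2",
                  "dev: requirement is ambiguous"]},
    {"id": "DEF-102", "description": "System crashed.", "comments": []},
]
for defect_id, signal in flag_signals(defects):
    print(defect_id, "->", signal)
```

Nothing here replaces reading the defects yourself; it just tells you which ones to read first.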

Something good to follow - Defect History
A defect's history sometimes gave me interesting information. Once, when trying to track down why a given defect had been closed months before with no explanation, I found that on the day of closing the defect had gone through a number of hands. Following up with the parties in question led to the discovery that a wrong assumption had closed this and other defects. The defects were reinstated. On another occasion, we tracked another group of defects that had been allocated to the wrong party. Again, studying the history enabled us to correct the details and feed the lost defects back into the workflow. At least history is useful somewhere.
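If your tool can export history events, these anomalies can be hunted systematically rather than stumbled upon. A minimal sketch, assuming a hypothetical event record with defect, date, action, and note fields:

```python
# Scan defect history for the two anomalies described above: defects
# closed with no explanation, and defects that changed hands many
# times in a single day. The event layout is an assumption.

from collections import Counter

def suspicious_histories(events, max_handoffs=3):
    closures_without_note = []
    handoffs = Counter()  # (defect, date) -> number of reassignments
    for e in events:
        if e["action"] == "closed" and not e.get("note"):
            closures_without_note.append(e["defect"])
        if e["action"] == "reassigned":
            handoffs[(e["defect"], e["date"])] += 1
    busy_days = [key for key, n in handoffs.items() if n >= max_handoffs]
    return closures_without_note, busy_days

events = [
    {"defect": "DEF-77", "date": "2010-02-01", "action": "reassigned"},
    {"defect": "DEF-77", "date": "2010-02-01", "action": "reassigned"},
    {"defect": "DEF-77", "date": "2010-02-01", "action": "reassigned"},
    {"defect": "DEF-77", "date": "2010-02-01", "action": "closed", "note": ""},
]
closed, busy = suspicious_histories(events)
print("Closed with no explanation:", closed)  # ['DEF-77']
print("Many hands in one day:", busy)         # [('DEF-77', '2010-02-01')]
```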

Number of Defect Fields and Formats
I was responsible for the Defect Root Cause Analysis activity, which took place on a weekly basis; it was manageable. Another colleague was responsible for Defect Analysis, and it used to be a HUGE task for him, for all the wrong reasons. Once, while discussing defect analysis with him, I realized how complex the defect management system was becoming. The number of different defect statuses should have given this away: there were many, though still countable. There were also twenty fields for testers to fill in, though not all were mandatory.

Many times our test manager would come to the team and say, "We need to work this weekend," and every time it surprised me: why did we need to work the weekend when everyone, except a few, was performing to expectations? A common reason we cited for testing taking much longer than expected was that raising and detailing so many defects was an onerous task. However, it didn't occur to me until this point that this was partly of our own making and not just a result of defect numbers. I started to dread finding a defect: oh no, I'll have to fill in all those fields again.
I tried to reduce this complexity, proposing the elimination of some of the fields, a reduction in the number of field values, and the removal of some of the statuses. This could have had a small but important effect on the speed of the defect process. Though we accepted that capturing the correct data up front was time consuming, it also meant less time spent on the informal queries coming back from developers. Again, no luck: the manager declined to make these changes to the defect management system :(
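The kind of evidence that could have backed this proposal is simple to gather: measure how often each optional field is actually filled in, and the rarely-used fields nominate themselves for elimination. A sketch, with the field names invented for illustration:

```python
# Compute the fill rate of each optional field across the defect
# backlog; fields that almost nobody fills in are candidates for
# removal from the template.

def fill_rates(defects, fields):
    """Return {field: fraction of defects where the field is non-empty}."""
    return {f: sum(1 for d in defects if d.get(f)) / len(defects)
            for f in fields}

defects = [
    {"summary": "Crash on save", "environment": "UAT", "build": "1.4.2"},
    {"summary": "Wrong total",   "environment": "",    "build": ""},
    {"summary": "Slow login",    "environment": "",    "build": "1.4.2"},
]
for field, rate in sorted(fill_rates(defects, ["environment", "build"]).items(),
                          key=lambda kv: kv[1]):
    print(f"{field}: filled in {rate:.0%} of defects")
# environment: filled in 33% of defects  <- candidate for elimination
# build: filled in 67% of defects
```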

I believe a tester loves to report as many bugs and problems as he can, but does not want to fill in unnecessary fields that exist only to make the defect template look good.

Defect Tool and Its Use
Sometimes managers ask your opinion about a tool or technology on which they have already formed an opinion. You express yourself happily, thinking your opinion is sound and the manager is sure to like it. Wrong: they have already made the decision, and asking you is just meant to make you feel valued. Maybe you feel undervalued, but by asking, they are raising your sense of worth. As the program moved later into the testing cycles, the use of the tool became more widespread and higher profile. Who was using the tool, when, and why gave me insight into the program and into how the tool was actually being used.
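If the tool keeps an audit log, "who, when, and why" can be tallied rather than guessed at. A minimal sketch, assuming a hypothetical log of (user, date, action) rows:

```python
# Tally tool activity per user from an audit-log export; the row
# layout and names are assumptions for illustration.

from collections import Counter

log = [
    ("coordinator_1", "2010-04-12", "ran quality report"),
    ("coordinator_1", "2010-04-13", "ran quality report"),
    ("tester_3",      "2010-04-13", "raised defect"),
    ("test_manager",  "2010-04-16", "exported summary"),
]

by_user = Counter(user for user, _, _ in log)
for user, count in by_user.most_common():
    print(f"{user}: {count} actions")
# The heaviest users are often the real driving forces, whatever
# the org chart says.
```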

By asking myself who was interested in the information, it became possible to identify and build relationships with key players in the team. It became clear that one of the strand coordinators, who reported to the Test Manager, was really one of the driving forces behind the test activities. The relationship that was fostered was mutually beneficial: the coordinator had readily available information on software quality, and the test team had an active and vociferous ally. But at some point that driving force no longer got enough back to keep driving himself, and that is when the team started looking beyond what was visible.

Correct or shown correct?
The statistics that I provided on a daily basis were simple. My daily report stated how many new defects had been resolved and the total number of defects to date still awaiting validation. No one ever asked me for historical data, for how many defects had been fixed since the start of the project, or for the average time it took to fix a high-priority bug, but I still provided more than was asked.
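For what it's worth, those daily figures take only a few lines to compute from an export. A sketch, where the statuses, dates, and field names are assumptions about the data:

```python
# Compute the two numbers in the daily report: defects resolved today,
# and the outstanding total still awaiting validation.

from datetime import date

defects = [
    {"id": "DEF-1", "status": "resolved", "resolved_on": date(2010, 5, 21)},
    {"id": "DEF-2", "status": "resolved", "resolved_on": date(2010, 5, 20)},
    {"id": "DEF-3", "status": "to_validate"},
    {"id": "DEF-4", "status": "to_validate"},
]

today = date(2010, 5, 21)
resolved_today = sum(1 for d in defects
                     if d["status"] == "resolved" and d["resolved_on"] == today)
to_validate = sum(1 for d in defects if d["status"] == "to_validate")
print(f"New defects resolved today: {resolved_today}")   # 1
print(f"Total awaiting validation:  {to_validate}")      # 2
```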

Sometimes doing more than expected is itself not expected, and if you do it, you are seen as not meeting expectations. So publish only the data your manager asks for.

Conclusion:
The defect management tool and process should be a guide. Statistics and defects are useful, but they are only a window onto the health of the program. As always, it is awareness of the human element of any part of the program that makes for the full story. Elements to be aware of include the writing styles of developers and testers, the amount of "debate" within the defect descriptions, and how successfully the tool is being used. A successful test manager should not take data and information merely at face value, but should use them to inform their view and to ask themselves further questions. So, when you are looking at statistics from your defect management tool, know that there is more you can understand :)

Thank You All.
