A bug database isn't just for tracking bugs. If you are monitoring the incoming bugs, you can get a lot more information about the state of the feature and the product. Here are a couple of things I have learned to look for when monitoring the incoming bugs.
FYI... all the top testers always monitored the bug list - they knew every bug entered and the cause of each one :)
1) Old Bugs - look over old bugs.
Bug Count Per Feature - If you are inheriting an area, look for areas that do not have very many bugs. This could indicate that minimal testing was done. No one ever writes a feature that is perfect... it's not possible.
Bug Count vs Type of Feature - you need to look at how many bugs were entered in an area relative to what the area is. Does the feature have UI, or is it APIs only? Does the feature have heavy integration with another product? Is the feature legacy code or brand new? The expected bug count should go up or down depending on how you answer those questions.
Higher Bug Count
UI
Integration with 3rd party products
Customizable applications - UI and API level
New Features
Lower Bug Count
APIs (only because the UI typically exercises this code, so the code paths get hit sooner)
Legacy Code
Code that has no integration.
Database level
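To make the "bug count per feature" check above concrete, here is a minimal sketch in Python. The bug records and field names are hypothetical - in practice you would export this data from whatever bug tracker you use:

```python
from collections import Counter

# Hypothetical bug export from a tracker; "area" is an assumed field name.
bugs = [
    {"id": 101, "area": "editor-ui"},
    {"id": 102, "area": "editor-ui"},
    {"id": 103, "area": "export-api"},
    {"id": 104, "area": "editor-ui"},
]

def flag_undertested_areas(bugs, all_areas, threshold=2):
    """Return areas whose bug count falls below a threshold - a hint
    (not proof) that the area may have seen minimal testing."""
    counts = Counter(bug["area"] for bug in bugs)
    return sorted(area for area in all_areas if counts.get(area, 0) < threshold)

areas = ["editor-ui", "export-api", "sync-engine"]
print(flag_undertested_areas(bugs, areas))  # ['export-api', 'sync-engine']
```

The threshold is a judgment call - per the list above, you would set it higher for UI-heavy or new features and lower for legacy API code.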
2) Watch new bugs being entered - they will give you an idea of the types of bugs being found, and what to try in your own area.
A) Investigate for repeats - If a dev made a mistake, see if you can apply it somewhere else in the product. Typically the same kind of mistake occurs in several places. These mistakes can be made by the same developer or across multiple developers. If it's a very simple bug, like a feature not working with Unicode characters, then it's probably a product-wide bug where the developers have not had sufficient training in that area. Make sure you determine whether it's a product-wide or an isolated case.
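A quick way to check whether a mistake repeats product-wide is to scan bug titles for the same class of failure across areas. A hedged sketch, using hypothetical bug records and the Unicode example from above:

```python
import re

# Hypothetical bug records; "area" and "title" are assumed field names.
bugs = [
    {"area": "import", "title": "Crash opening file with Unicode name"},
    {"area": "search", "title": "Unicode query returns no results"},
    {"area": "print",  "title": "Margins wrong on A4"},
]

def areas_hit_by(pattern, bugs):
    """Distinct areas whose bug titles match a pattern - many hits
    suggests a product-wide mistake rather than an isolated one."""
    rx = re.compile(pattern, re.IGNORECASE)
    return sorted({b["area"] for b in bugs if rx.search(b["title"])})

print(areas_hit_by(r"unicode", bugs))  # ['import', 'search']
```

Title matching is crude - it only finds mistakes that testers happened to describe the same way - but it is a cheap first pass before testing the pattern by hand in your own area.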
B) Is it a very basic bug? - If the bug is very basic but the feature was checked in a long time ago... test it (it's almost certainly a bug farm)!!! A basic bug found late in the cycle can indicate several things -
- It could simply mean the tester is not testing their area.
- The area might not have an owner. I have found huge testing holes in the product because I looked into why a simple bug was found late in the cycle. Typically, holes occur when one feature provides data to another feature - web services, for example. Who tests what? Typically both testers think the other is testing the feature, when neither is. Test contracts can help, as long as everyone is using the same terminology.
- The tester has become numb. The tester might have been aware of the issue, but no bug was ever logged. This happens very frequently: the tester goes and talks with the dev or PM, and they punt on the issue, saying... "well, if you put in the bug we are just going to won't-fix it". Unless you switch up the testers (get a new set of eyes), these types of bugs make it to production... and cause a lot of usability issues. So if new bugs are being entered, and they are basic, and they are entered by someone other than the area owner, I think it's about time to switch up area ownership.
- The tester might have known about the bug, but was afraid to enter it because it exposes the fact that they did not test their area. Again, changing area ownership is the best thing here... because the new tester will not be worried about covering their ass while testing. AND yes, this totally happens in the real world... because there is no way to prove a tester did not do their job unless someone else finds the bug. And the poor performers are not going to risk entering a basic bug... it's like shooting themselves in the foot - their job could be at risk, or at least they will be interrogated by everyone on why it was not found earlier.
C) Is there a sudden spike in bug count? - and the spike does not correlate to a feature check-in! I hate when this happens. It's caused by something external. Typically it happens a month or so before review time. You need to take note of the testers whose productivity improves around review time. I have noticed that simply testing in their area lights a fire under their ass! So a simple resolution is to put bugs in their area every once in a while just to keep them going.
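Spotting these spikes doesn't require anything fancy. A minimal sketch, assuming you can pull weekly incoming-bug counts for an area and know which weeks had feature check-ins (both made up here):

```python
from statistics import mean, pstdev

# Hypothetical weekly incoming-bug counts for one area.
weekly_counts = [4, 5, 3, 4, 18, 5]   # week index -> bugs entered
checkin_weeks = {1}                    # weeks when a feature was checked in

def suspicious_spikes(counts, checkin_weeks, z=2.0):
    """Weeks whose count is far above average AND do not line up with
    a feature check-in - candidates for an external cause
    (e.g. review season)."""
    mu, sigma = mean(counts), pstdev(counts) or 1.0
    return [week for week, count in enumerate(counts)
            if count > mu + z * sigma and week not in checkin_weeks]

print(suspicious_spikes(weekly_counts, checkin_weeks))  # [4]
```

Week 4's count of 18 is well above the mean with no check-in to explain it - exactly the kind of spike worth asking about.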
D) Watch out for missing spikes in bug count - There should always be a huge spike in bug count right after a feature gets checked in. If there is not, then something needs to be done right away. Possible reasons are:
- The tester is swamped with other areas. This is really bad - give the area to a different tester. The longer the gap between feature check-in and fixing bugs, the worse the devs are at fixing them. When a feature is just implemented, it's all in the dev's head... it's all in RAM. Delaying bugs means the devs have moved on and swapped out that RAM. Swapping it back in whenever your tester finally has enough time is just bad for the product.
- The tester does not have a clue how to test the feature. This can be because they are new, don't know the technology, or are not technical enough. This is where you want to pair the tester with someone senior, who can review the test plan (or help create one) and provide ideas on how to break the new feature.
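The missing-spike check is the mirror image of the spike check, and can be sketched the same way. The check-in date, bug dates, window, and expected count below are all hypothetical placeholders:

```python
from datetime import date, timedelta

# Hypothetical check-in and bug-entry dates for one feature area.
checkin = date(2010, 3, 1)
bug_dates = [date(2010, 4, 20), date(2010, 4, 22)]  # first bugs ~7 weeks later

def missing_spike(checkin, bug_dates, window_days=14, expected=5):
    """True when too few bugs arrive in the window right after check-in -
    a sign the tester is swamped or doesn't know how to test it yet."""
    end = checkin + timedelta(days=window_days)
    recent = [d for d in bug_dates if checkin <= d <= end]
    return len(recent) < expected

print(missing_spike(checkin, bug_dates))  # True: no early bug spike
```

What counts as "expected" depends on the feature type - the higher/lower bug count list in section 1 is a reasonable starting point for tuning it.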
E) Random bug that does not make sense - Have you ever read a bug... and said 'what?'... 'how is that possible?'... 'shit'... When I see these bugs, it usually means that I made an assumption that something was working, or that it worked in a particular way.
- If you find you made an assumption, you need to revisit your feature set and see what impact it has on your testing. The KEY is that you have to go back when you get this gut feeling! I usually feel like the floor was taken out from under me - the way I visualized the entire system was wrong. You need to go back... and enter bugs, even if you don't want to because everyone will know you missed something. Better that you find the bug than some other tester, and better that you find the bug before the customer does!!!
F) Regressing bugs with NO new bugs being entered - When regressing bugs you should try to find new bugs. You should try to find 3 new bugs for every 1 bug you regress. I know it's not realistic, but it keeps your mind from slipping into pure 'verify mode'. You should keep the 'break it' mode while regressing bugs.
- Testers commonly have bug regression nights, where they have to go through a crap load of bugs. You should review the bugs that were closed by the tester, and pick out the ones that you know should have taken longer than 5 minutes to regress, or ones that are high risk. Talk with the tester to see what they tried, or go around and play with the feature yourself.
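Picking out which closures to spot-check after a regression night is easy to automate. A rough sketch - the log format, "minutes" and "risk" fields are all assumptions about what your tracker records:

```python
# Hypothetical regression-night log: time spent verifying each bug.
closures = [
    {"id": 201, "minutes": 2,  "risk": "high"},
    {"id": 202, "minutes": 12, "risk": "low"},
    {"id": 203, "minutes": 1,  "risk": "low"},
]
new_bugs_entered = 0  # new bugs the tester found while regressing

def worth_reviewing(closures, min_minutes=5):
    """Closures to spot-check: verified suspiciously fast, or high risk."""
    return [c["id"] for c in closures
            if c["minutes"] < min_minutes or c["risk"] == "high"]

print(worth_reviewing(closures))  # [201, 203]
if new_bugs_entered == 0 and closures:
    print("regression night produced zero new bugs - follow up")
```

A night full of sub-5-minute verifications plus zero new bugs is the pattern this section warns about: the tester stayed in 'verify mode' the whole time.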
I know there are more things to look for... but I can't think of them right now... :)