Friday, November 14, 2008

Great Developers from a Tester's Perspective

There are some devs I loved working with! Here are some reasons why...


Communication
Bad communication -> Most developers and testers only communicate through bugs. One reason is that testers are frequently afraid of going to their developer with questions. Testers are sometimes even told not to bother their developer because the dev is really busy. If a tester is blocked, they should not let anything get in the way of doing their job. A tester's time is just as valuable as the developer's. If a tester is told not to bother the dev, then the tester should schedule office hours with the developer, where they come in with a list of questions they need answered. (Sorry, got off track... but I feel it's important to say this because I hate it when testers act like the victim.)


TEAM work
Back to qualities of a great developer...
  • Great developers frequently discuss issues/designs with their testers.
  • They keep them up to date about upcoming changes.
  • They tell their tester the areas of concern, the areas the tester should focus on.
  • They understand that the more the tester understands the feature, the better they can test it.
  • Great developers always include the tester on emails and meetings that discuss their features.
  • Great developers do NOT take bugs in their area personally.
  • Great developers know they need great testers to challenge them. They know you can't ship a product without them.
Basically... the developer and the tester are one unit... one team. They should do everything together.


Why communication/team work sometimes fails...
Sometimes testers are not included because they can be a distraction. They will start breaking the feature when the purpose of the meeting was to design it. The tester just needs to be reminded of this fact... and everything will be OK. Don't leave the tester out just because they break everyone's ideas... tell them what you need from them before the meeting starts and it will go smoothly.

Here is a list of things I have seen great developers do:
  • They write simple, well-designed code - oh how nice it is to do white-box testing on a feature that has a simple design!
  • They rewrite crap! Code might not have started off as crap, but as bug fixes were checked in, the code became patchy and hard to follow, with a lot of exceptions to the rules added all over the place. Great developers will stop, revisit the area, and refactor/clean it up!
  • They comment their code.
  • They design a flow chart of how it might go and let the tester loose on it.
  • They NEVER do hacks (or very rarely). They will write code to fix the problem the right way even if it takes them a lot longer. They understand that hacky code still has to go through testing, which will find a lot of bugs around it, and eventually they will have to go back and do the right fix, which will again have to be tested... (so the dev who tried to save a day or a week of work actually caused more work for themselves and everyone else involved).
  • I am curious - what is the lifespan of a hack? How long on average does it actually stay in the code? Whatever the answer - short or long half-life... you can't win... it always comes back to bite you in the butt... in sooo many ways!
  • Almost forgot - making the feature testable!!!! YAHOO!! This can be done by providing hooks for automation or providing some extra UI (see the sketch below). If a feature is a black box, put a light bulb in it... light it up! The area will get stabilized faster, and both you and the tester can move on to the next feature!!
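To make "hooks for automation" concrete, here is a minimal Python sketch of the idea. It is only an illustration under assumed names - the SpellChecker class, the enable_test_hooks flag, and the debug_state method are all hypothetical, not from any real product.

```python
# Hypothetical sketch: exposing internal state so automation can see
# inside the "black box" instead of scraping the UI.

class SpellChecker:
    """Toy feature under test (all names here are made up)."""

    def __init__(self, enable_test_hooks: bool = False):
        self._enable_test_hooks = enable_test_hooks
        self._words_checked = 0
        self._last_suggestions = []

    def check(self, word: str) -> bool:
        self._words_checked += 1
        ok = word.isalpha()  # stand-in for a real dictionary lookup
        self._last_suggestions = [] if ok else [word.lower()]
        return ok

    def debug_state(self) -> dict:
        """Test hook: only available when hooks are enabled, so it can
        be flagged out of production builds."""
        if not self._enable_test_hooks:
            raise RuntimeError("test hooks are disabled")
        return {
            "words_checked": self._words_checked,
            "last_suggestions": self._last_suggestions,
        }


# Automation can now assert on internal state, not just on what the UI shows.
checker = SpellChecker(enable_test_hooks=True)
checker.check("he11o")
assert checker.debug_state()["last_suggestions"] == ["he11o"]
```

The point of the hook is that the tester can stabilize the area by asserting on what the feature actually did, instead of guessing from the outside.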

Bug Bashes - how to make them less sucky

Bug bashes are when a bunch of people (across disciplines) test the product. I have found that they can be useful if done correctly. (Note: I personally don't like them, but when they are done correctly, I hate them less :) )



1) Product needs to be stable.

There should have been a couple of iterations between the dev and the tester who own the area.

Unit testing should have been completed.

The tester who owns the area should have a goal that no one will find any bugs because they have already found them all. If your tester feels this way about the product, then schedule the bug bash.

What happens if you have a bug bash too soon? It is similar to shipping a product too soon... a lot of noise. A lot of people getting frustrated because they are hitting basic bugs and entering duplicate bugs into the database. The dev and tester who own the area waste a ton of time trying to repro and filter the bugs. The majority of the bugs are no-repros or dups. You are lucky to get one good bug if your product was not really ready for the bash... when 50% of the bug bashers hit one basic bug, 90% of the bugs entered are all around that bug... sort of like an open gateway to hell.


2) Guided Bug Bashes
If you really want to get something out of the bug bash, you need to provide guidance. Otherwise, 90% of the people are going to spend 80% of the time all going through the exact same path (which is probably covered by automation anyway). At the end of an unguided bug bash, you end up with nothing! You don't know where people were testing, you don't know what they tried, you don't know anything... basically you wasted people's time and got no results (or the superficial result that the product is good to go).

Details you should provide to the testers who are going to participate in a bug bash:
a) Provide a goal - set the tone - for example: "let's stop ship", "be malicious", "break the build"
b) Provide areas of risk -
  • provide details of why an area might be risky
  • provide how to get to the risky area
  • provide various entry points to your feature set
  • list past bugs in the area - *** super important
  • list recent fixes
c) Provide a list of integration points - how the feature integrates with the rest of the product
d) Provide a bug template - a template for entering bugs. Along with the bug template, you should provide a definition list (for example, ToolPane, ToolPart) so the testers can describe the bug using the correct terminology. A sketch of one possible template follows.
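Here is a minimal sketch of what such a template could look like, as a Python dataclass. The field names and all the example values are assumptions for illustration - your bug tracker will have its own fields.

```python
# Hypothetical bug template for bash participants; every field name and
# example value below is made up for illustration.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str               # one line: area + symptom
    area: str                # pick from the definition list (ToolPane, ToolPart, ...)
    build: str               # exact build the bug was found on
    repro_steps: list = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    found_during: str = "bug bash"

bug = BugReport(
    title="ToolPane: layout corrupts after rapid resize",
    area="ToolPane",
    build="example-build-1234",
    repro_steps=["Open the ToolPane", "Resize the window 10+ times quickly"],
    expected="ToolPane reflows cleanly",
    actual="Controls overlap and become unclickable",
)
print(bug)
```

A shared template like this keeps the bash bugs consistent, which makes the repro/filter pass afterwards much cheaper for the area owners.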

3) Instead of Bug Bashes
I typically don't like bug bashes - when I am participating, half of the time is spent installing and configuring, and the second half is me just figuring out the feature set. I feel very unproductive, and the 4-hour crunch feels more like a surface-level ramp-up on the feature set.

Instead of bug bashes, I prefer SWAT teams - highly specialized, individually picked people doing focused testing in specific areas. There is no reason why everyone has to bug bash at the same time. I have more to say about alternatives to bug bashes, but I will save that for a future post...

4) Prizes
Yup, prizes. You are taking people away from their work to help you do your work. Prizes are great motivators! It can be as simple as winning a huge plastic penguin! I think people love to win so much they will do anything... cheat, lie. This is the kind of mentality you want the testers to have when bashing on your feature set. They are less likely to verify, and more likely to try to break the product... push the limits.

Wednesday, October 1, 2008

Top Testers - are there common traits?

I know there are common traits among top testers... because I can relate to them and I can pick them out really, really fast. They stick out like a sore thumb among the rest of the testers. The funny thing is that it does not matter what type of product... top testers are always top testers.

Here is a list of non-obvious traits. (I know I have used "non-obvious" a lot so far in my blogs, but I feel like I am writing stuff that other people don't write... and I am not sure why that is.)

Highly Competitive
Top testers are extremely competitive. They want to be the best. They produce more by large margins compared to everyone else. There is no shade of grey. They don't just dominate the bug count, they dominate at everything. They write the most automation, they write the best test cases and the most complete test suites, and they write the best, most complete test plans (test design specifications). If the average tester produces X, the top tester will produce 1000 times more, and faster (with higher quality).

I have to admit, I was (and still am) like this. But the top testers I have met over the years are always trying to grow other testers into mini-mes. They would love it if everyone tried their best. It's a lot more fun when you have closer competition. :) Plus they can feed off each other. It's amazing what happens when one top tester feels threatened by another top tester... both testers' productivity goes through the roof!!! BUT it's not talked about, and they totally work well together because there is mutual respect and they both have the same goal of shipping a high-quality product. BUT in secret, deep down, they will work longer and harder just to be one bug ahead of the other tester.

Extreme Ownership
Top testers treat their product as their baby. Notice I said 'product'. They take ownership of ensuring everything is covered. Typically, testers just test their area, and then they are done; they don't care if the rest of the product sucks... all they know is they are done. Top testers branch out; they reach very deep and very wide. It's like they are creating vines that grow and branch out, touching every part of the product. Eventually the top testers will know the state of every feature; they are the walking bug database. When they see a bug, they know, without looking at the code, what is causing it.


Always Analyzing
Top testers are always analyzing the product... when I say always... I mean... when they go home, their minds are still trying to break the product. When they go to sleep, they are still trying to break the product. By morning, they have a whole new set of ideas for trying to break the product.


Breaking the Product
All top testers try to break the product all the time. It can look like it comes easy, since they can find so many bugs quickly. The key is that every test case they try is always an attempt to break the product. "What happens if I try this, in combination with this... can I break it?" Notice the "what happens if"... that means they are not sure what the outcome will be, because they pushed something to its limits or combined features in a unique way.

I am trying to think of what people who verify say... and I can't think of anything... you know why? Because verifying takes no thought process... the steps are already outlined for you, the result is already defined, and the tester just has to execute them. I think when people verify, it's like they are watching a TV show but not really watching it... they are just staring at something. Can you tell I hate to verify?... I actually can't do it... it hurts too much... it takes the fun out of the game...


No Assumptions - question everything
This is a huge one. To become a top tester, never assume anything. Never assume the dev is right, never assume the feature is implemented the way it was specified in a document, never assume the technology your feature is using works, never assume someone else will test it, never assume a fix by a senior dev is safe, never assume things work.
Since top testers question everything under the sun, they end up finding everything under the sun!

Tangents
I personally have problems staying on topic when talking... I am always going off on tangents. But I am not talking about verbal skills here. Top testers, while testing, can easily take tangents; they might be testing one area, then notice something odd, and go down a completely different route. They don't test one area 100% and then go to another area. They are all over the place. If you look at their bugs... they will be all across the board. This typically happens once their specific feature set is stable... and then they get to go out and play with the rest of the product... ad hoc/exploratory testing. They are very fluid, moving from one area to another... they see highways between feature sets where typical testers only see walls.

I know there are more... but I can't think of them right now...

Monday, September 29, 2008

Who should be a tester's best friend?

Answer: The developer.

The person a tester should talk with the most on the entire team is their developer. They should be in constant communication. It does not matter whether you are doing agile or waterfall development.

New hires who are new to testing are so surprised when I tell them this. They are often shy and afraid to talk with the developer (often intimidated). And the developer is so deep in writing their feature, they don't notice the awkwardness :)

I also firmly believe the tester should sit near the developer. I have worked in places where dev and test had separate halls (a hallway of testers and a different hallway of developers), and I have worked at a place where I got to sit beside the developer.

From my experience, the amount of information I absorbed sitting near the developer was amazing. I could just listen, provide real-time feedback on discussions that took place, provide valuable feedback on bug fixes (verbally telling the developer things they should watch out for), and be a sounding board on design decisions. And I know if I had sat just a little further away, the interaction would have been limited to when I initiated the conversation, when the dev had just checked in, or when I found a bug.

For me, I know once I sit in my seat, I hate getting up... it's like a context switch, and it's hard to reorient yourself when you come back to your desk and have to remember what you were doing... (especially if you are like me and have a thousand windows open) :)

Sunday, September 28, 2008

How to be a Great Tester

... coming soon

Non-Obvious Ways to Find Bugs - Look at the Bug List

Here are some ways to use the bug list to help you find bugs and figure out the most unstable areas.



It's not just for tracking bugs. If you are monitoring the incoming bugs, you can get a lot more information about the state of the feature and the product. Here are a couple of things I have learned to look for when monitoring incoming bugs.

FYI... all the top testers always monitored the bug list - they knew every bug entered and the cause of each one :)

1) Old Bugs - look over old bugs.

Bug count per feature - If you are inheriting an area, look for areas that do not have very many bugs. This could indicate that minimal testing was done. No one ever writes a feature that is perfect... it's not possible.

Bug count vs. type of feature - you need to look at how many bugs were entered in an area relative to what kind of area it is. Does the feature have UI, or is it only APIs? Does the feature have heavy integration with another product? Is the feature legacy code or brand new? The expected bug count should go up or down depending on how you answer the questions above. (A quick way to pull these counts is sketched after the lists below.)

Expect a higher bug count:
  • UI
  • Integration with 3rd-party products
  • Customizable applications - UI and API level
  • New features

Expect a lower bug count:
  • APIs (only because the UI typically exercises this code, so the code path gets covered sooner)
  • Legacy code
  • Code that has no integration
  • Database level
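Here is a minimal sketch of pulling these per-area counts from a bug-tracker export. The file name and the "area" column are assumptions about whatever your tracker can export, not a real tool's format.

```python
# Hypothetical sketch: count old bugs per feature area from a CSV export.
# "bug_export.csv" and the "area" column name are assumptions.
import csv
from collections import Counter

counts = Counter()
with open("bug_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["area"]] += 1

# Areas with suspiciously few bugs float to the top: no feature is
# perfect, so a low count often means light testing, not great code.
for area, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{area:30} {n}")
```

Weigh the raw counts against the feature types above: a low count on a new, UI-heavy, customizable feature is far more suspicious than a low count on isolated legacy database code.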

2) Watch new bugs being entered - they will give you an idea of the types of bugs being found and what to try in your area.

A) Investigate for repeats - If a dev made a mistake, see if you can apply it somewhere else in the product. Typically the same kind of mistake occurs in several places. These mistakes can be made by the same developer or across multiple developers. If it's a very simple bug, like a feature not working with Unicode characters, then it's probably a product-wide bug, where the developers have not had sufficient training in this area. Make sure you determine whether it's a product-wide or an isolated case. (A sketch of this sweep is below.)
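As a hedged sketch of turning one Unicode bug into a product-wide sweep (the entry-point functions and probe strings are made-up stand-ins for your product's real surfaces):

```python
# Hypothetical sketch: one feature mishandled Unicode, so sweep the same
# input class across every other entry point. All functions are stand-ins.
import pytest

UNICODE_PROBES = ["héllo", "名前", "🙂", "A\u0300", "\u202etest"]

def rename_item(name): return name       # stand-ins for real entry points
def search(query): return query
def export_filename(name): return name

ENTRY_POINTS = [rename_item, search, export_filename]

@pytest.mark.parametrize("func", ENTRY_POINTS)
@pytest.mark.parametrize("probe", UNICODE_PROBES)
def test_unicode_roundtrip(func, probe):
    # If one dev mishandled this input class, others probably did too.
    assert func(probe) == probe
```

The pattern generalizes to any repeated mistake: extract the input class that broke once, then parametrize it across every similar entry point.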

B) Is it a very basic bug? - If the bug is very basic but the feature has been checked in for a long time... test it (it's for sure a bug farm)!!! A basic bug found late in the cycle could indicate several things:
  1. It could simply mean the tester is not testing their area.
  2. The area might not have an owner. I have found huge testing holes in the product because I looked into why a simple bug was found late in the cycle. Typically, holes occur when one feature is providing data to another feature, for example web services. Who tests what? Typically both testers think the other is testing the feature, when neither is. Test contracts can help, as long as everyone is using the same terminology.
  3. The tester has become numb. The tester might have been aware of the issue, but no bug was ever logged. This happens very frequently: the tester goes and talks with the dev or PM, and they punt on the issue, saying... "well, if you put in the bug, we are just going to won't-fix it". Unless you switch up the testers (get a new set of eyes), these types of bugs make it to production... and cause a lot of usability issues. So if new bugs are being entered, and they are basic, and they are entered by someone other than the area owner, I think it's about time to switch up area ownership.
  4. The tester might have known about the bug but was afraid to enter it because it exposes the fact that they did not test their area. Again, changing area ownership is the best thing here... because the new tester will not be worried about covering their ass while testing. AND yes, this totally happens in the real world... because there is no way to prove a tester did not do their job unless someone else finds the bug. And the poor performers are not going to risk entering a basic bug... it's like shooting themselves in the foot - their job could be at risk - at the very least they will be interrogated by everyone on why it was not found earlier.

C) Is there a sudden spike in bug count? - one that does not correlate to a feature check-in! - I hate when this happens. It's caused by something external. Typically it happens a month or so just before review time. You need to take note of the testers whose productivity improves around review time. I have noticed that simply testing in their area lights a fire under their ass! So a simple fix is to enter bugs in their area every once in a while just to keep them going.

D) Watch out for missing spikes in bug count - There should always be a huge spike in bug count right after a feature gets checked in. If there is not, then something needs to be done right away (a sketch of flagging this follows the list). Possible reasons are:

  • The tester is swamped with other areas. This is really bad. Give the area to a different tester; the longer the time between feature check-in and fixing bugs, the worse the devs are at fixing them. When a feature is just implemented, it's all in the dev's head... it's all in RAM. Delaying bug entry means the devs move on and swap out the RAM. Swapping back in whenever your tester finally has enough time is just bad for the product.
  • The tester does not have a clue how to test the feature. This can be because they are new, do not know the technology, or are not technical enough. This is where you want to pair the tester with someone senior. The senior tester can help review or create the test plan, and provide ideas on how to break the new feature.
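A minimal sketch of flagging the missing spike, assuming you can get weekly bug counts per feature. The dates, counts, and the 2x-baseline threshold are all illustrative assumptions, not a rule.

```python
# Hypothetical sketch: flag features whose check-in was NOT followed by a
# spike in weekly bug counts. All data and thresholds are made up.
from datetime import date

# feature -> (check-in date, weekly bug counts for the 4 weeks after)
history = {
    "ToolPane": (date(2008, 9, 1), [12, 8, 5, 3]),      # healthy spike
    "ImportWizard": (date(2008, 9, 8), [1, 0, 2, 1]),   # suspiciously quiet
}

BASELINE = 3  # assumed typical weekly count for a quiet, stable area

for feature, (checkin, weekly) in history.items():
    if max(weekly) < 2 * BASELINE:
        print(f"{feature}: checked in {checkin}, no bug spike - "
              f"is the tester swamped, or lost on the feature?")
```

Either way the alarm fires early, while the feature is still in the dev's RAM.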

E) A random bug that does not make sense - Have you ever read a bug... and said 'what?'... 'how is that possible?'... 'shit'... When I see these bugs, it usually means that I made an assumption that something was working, or that it worked in a particular way.

  • You need to revisit your feature set and see what impact it has on your testing if you find you made an assumption. The KEY is that you have to go back - when you get this gut feeling! I usually feel like the floor was taken out from under me; the way I visualized the entire system was wrong. You need to go back... and enter bugs, even if you don't want to because everyone will know you missed something. Better you find the bug than some other tester, and better you find the bug before the customer does!!!

F) Regressing bugs with NO new bugs being entered - When regressing bugs, you should try to find new bugs. You should try to find 3 new bugs for every 1 bug you verify. I know it's not realistic, but it keeps your mind from switching into 'verify mode'. You should still keep the 'break it' mode while regressing bugs.

  • Testers commonly have bug regression nights, where they have to go through a crapload of bugs. You should review the bugs that were closed by the tester, and pick out ones that you know should have taken longer than 5 minutes to regress, or ones that are high risk. Talk with the tester to see what they tried, or go play with the feature yourself.

I know there are more things to look for... but I can't think of them right now... :)

What is testing

I want to define what I think testing is. I find it really important to drill this into new hires. It's the first thing I talk about with them. Note: this is my definition... (I don't know if other people think the same way... but I know it works; you will ship high-quality products following it.)

Testing:
Testing is all about finding bugs. Bugs are only found if you are trying to break the product. If at some point you start to verify the product, then you are not testing. So I guess "testing" == "breaking"... in my world. :)