Thursday, May 17, 2012

Developer Tester Relationship - make love not war!

So it's not dev against test. These roles might have different bosses, different organizations, different job functions, but that does not mean it's war. In reality it is the opposite: dev and test are basically like two peas in a pod... you are married to each other... you are in a relationship. And depending on your relationship... life can either be awesome or suck!

Devs and Test must work together, especially if they want to ship high-quality work quickly!

Just like there are expectations and guidelines for marriage and dating, there are things that all testers and developers can do to make the relationship solid... and last forever :) ahhh...

I will break it down into Rules and Suggestions on how to be good to each other :)

 

Rules:

1. Tester supports their developer and the developer supports their tester.

(that's it for rules)

 

Suggestions on how to Support each other:


Testers
  • During the specification phase, or early in coding, ask the developer how you will be able to test the feature. Are there hooks? Are there logs? What kind of information do you want in the logs? Then ask the developer to build those things in.
  • When you get bugs back with no information on what the fix was, ask the developer about the scope of the change, what they think could be affected, what is high risk, and what you should focus on. Ask the developer to include this info in the bug, or add it yourself.
  • Don't be shy! I know, especially if you are new, you might be shy or afraid to interrupt the developer. BUT if there is something you need to know because you cannot do your work, it is your responsibility to go to the developer (through email/in person/IM) and ask the questions you need answered. If the developer gives you attitude, then tell him to fuck off (no, just joking)... tell him that you are blocked. Remember, if you cannot do your job, it is up to you to fix it (no excuses... because they are just excuses).
  • Ask the developer what their preferred way of communicating is (email/IM/in person/scheduled meetings). You can't always just talk through bugs. If the bug database is the only place you talk with your developer, then your relationship with them sucks! And you have no idea what is happening with the product. You need to start doing the other things mentioned above!

Developers
  • Figure out how technical your testers are.
    • If they are technical, help them configure a dev box so they can do white box testing, debug problems, or simply trace through the code. The more they know about the product, the faster they can test, and you will not hear any more "we have to test everything" comments. The tester will be able to reduce the test cases considerably if they understand what changed.
    • If they are not technical, make sure you are very verbose in your logging! Write simple tools to make the entry point to your feature easier. (See the logging sketch after this list.)
  • Explain to them how the infrastructure works. The better the understanding of the product, the better the tester. Draw a diagram for them if one does not exist. Then they can reference it!
  • Enter information in the bug when you assign it back to the tester to verify. Valuable information: was this a hacky fix? What should they focus on? What integration points should they worry about? Describe at a high level what the fix was.
  • Invite the tester when you are having discussions with your program manager (this is very important). You can tell the tester before the meeting, "this meeting is not to break things, but to brainstorm" (just to get the tester out of the breaking mentality and into finding solutions).
  • Tell your tester your preferred way of communication. Are you ok with them just dropping by to ask questions? Do you prefer meeting everyday at a specific time to go over any questions?
  • If you have a new tester, reach out to them. A lot of testers are easily intimidated by developers. Yes, adults get shy! Be the bigger guy and start the communication by introducing yourself, and offer to help them ramp up on your features.
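
To make the "be verbose in your logging" point concrete, here is a minimal sketch (Python; the feature name, function, and messages are all made up for illustration) of logging that lets even a non-technical tester follow what the code actually did:

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)-7s %(message)s")
log = logging.getLogger("importer")  # "importer" is a made-up feature name

FAKE_DB = {42: {"items": ["widget", "gadget"]}}  # stand-in for real storage

def import_order(order_id):
    """Feature entry point a tester can drive without reading the code."""
    log.info("import_order start: order_id=%s", order_id)
    order = FAKE_DB.get(order_id)
    if order is None:
        # Log the decision, not just the outcome, so the tester can tell
        # a deliberate "not found" path apart from a silent failure.
        log.warning("order %s not found; nothing imported", order_id)
        return False
    log.debug("order %s fetched: %d line items", order_id, len(order["items"]))
    log.info("import_order done: order_id=%s", order_id)
    return True

if __name__ == "__main__":
    import_order(42)   # happy path shows up in the log
    import_order(99)   # failure path is visible too, not silent
```

The point is not the exact messages; it's that every branch the code takes leaves a trail the tester can read back.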

OK, this is all I can think of right now! I will expand when I think of other things! :)

Monday, September 14, 2009

Redefining why we run Test Passes and Regress Bugs

I feel like running test passes and regressing bugs has sort of lost its real intended purpose and potential. It has become something for management, and less about making the product better.

1) Test Passes

Right now, people will mind-numbingly run through test cases, following test scripts exactly. They don't question what they ran before, what they are currently running, or what they are going to run... basically the tester is half asleep. (The funny thing is they were probably half asleep when writing the test cases in the first place!!) :)

Here are ways to get the most out of running a test pass!

NEW GOAL - if testers are to run a full test pass (it does not matter if it's the area owner or a contractor), make it a requirement that they must find X bugs per day while going through the test pass.

What will happen:
  1. The test cases they are running now become a baseline (a guideline); the tester must push the tests to a new level to find bugs.
  2. Test cases become simple entry points/code paths to consider.
  3. The tester will step back at different points and look at a group of test cases - holes will be found!
  4. Running a test pass will take a bit longer, but if the tester found a problem area - let them flush out the issues then and there.
  5. You will sleep better at night knowing your testers were testing vs verifying!
  6. Testers will now challenge the test cases themselves.
2) Regressing Bugs

Regressing bugs - typically testers try to get through them as fast as possible. They want to get them out of the way.

NEW GOAL - for every 3 bugs the tester regresses, they need to find 1 new bug (or some ratio).

What will happen:
  1. Pushes the tester to not only try the specific bug that was fixed, but to branch out and fully exercise other code paths that could have been affected by the fix.
  2. Fixing bugs becomes just a little less risky, since the tester is trying to break vs. verify.
  3. Regressing a bug is no longer something to get out of the way or 'off your plate'. A fixed bug tells the tester 'look at me... new code that has never been tested', which typically translates to a potential bug farm!
  4. Testers hopefully will not wait to regress their bugs, but will see it as an opportunity to find more bugs. This means faster turnaround and quicker stabilization.

With the new goals above - I guess you can think of test passes and regressing bugs as a way to perform focused ad-hoc testing!! (wow... that's sooo deep!!)

Sunday, September 13, 2009

Testers are bitching - 5 simple fixes for 5 common complaints

Here are some solutions/processes that can be put in place to stop five common complaints you might hear from Testing!

Common Complaints from Test:

1. Problem - Tester says: What the f*!k? When did this get checked in? Why didn't anyone tell me? I don't know what is getting checked in!

Solution: EVERY CHECK-IN MUST HAVE A BUG ASSOCIATED WITH IT. Any organization that does not have this requirement sucks. Yes, it is a little more work... but ask yourself this... is there a check-in that is safe enough not to be tested? If it is a lot more work... then there are serious problems...
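
If you want this enforced by tooling instead of by nagging, most version control systems let you hook check-ins. Here is a minimal sketch of a git commit-msg hook (Python; the BUG-1234 ID format is a made-up convention - swap in whatever your bug tracker uses):

```python
#!/usr/bin/env python3
"""git commit-msg hook: reject any check-in that references no bug.

Install as .git/hooks/commit-msg and make it executable. The
BUG-<number> pattern is a made-up convention; adapt it to your tracker.
"""
import re
import sys

def main():
    msg_path = sys.argv[1]  # git passes the path of the commit message file
    with open(msg_path) as f:
        message = f.read()
    if re.search(r"\bBUG-\d+\b", message):
        return 0  # a bug is referenced, allow the check-in
    sys.stderr.write(
        "Rejected: commit message must reference a bug (e.g. BUG-1234).\n"
        "Every check-in gets a bug so Test knows what to look at.\n")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

Most systems also have a server-side equivalent (e.g. a pre-receive hook in git), so the rule holds even for people who skip local setup.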

2. Problem - Tester says: When did this feature appear?

Solution: Stop feature creep. Commit to a feature set for the release. If a new feature needs to be added because of feedback from beta testing, then testers need to be in the loop from the beginning (Testing should provide workarounds/risks/test estimates). But if features are being checked in that are not related to anything else being done... they should not be allowed in. Remember, Testing is already overwhelmed trying to ship the current product; they don't have time to entertain a new feature and bring it to stability. There is always the next release for adding the new feature.

3. Problem - Tester says: I only have time for surface testing

Solution: QA estimates need to be part of the release plan. Whether you are doing agile or waterfall, you have to have QA estimates for all features. If testers are saying they do not have enough time, there could be several reasons, but one place to start is to see if they were involved in the release plan... Did testers sign off on the release plan dates? If not, then... no wonder your product slips frequently OR lots of bugs are found by customers!

4. Problem - Tester says: We have no voice

Solution: QA Feature Progress Reports/Sign-Off Sheet. This is really bad to hear... it can mean many things: a) testers are not informed of decisions being made and are out of the loop; b) testers do not feel like their professional opinion matters or is acknowledged, or they have no avenue to bring up concerns; c) OR testing is not providing information that is measurable (for example, they only provide gut feelings).

The BIGGEST VOICE for testers is providing numbers!
Testing should provide more than just gut feelings: a list of features and what they think the status of each is. This includes the incoming weekly bug rate, the weekly fixed-bug rate, a list of areas that still need to be tested, the number of bugs left to regress, the number of bugs regressed, and how the current week compares to previous weeks.

Once testers provide numbers, everyone listens. :)
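
None of these numbers need fancy tooling. Here is a minimal sketch (Python; the bug-export fields are invented for illustration) that turns a plain bug list into the weekly numbers above:

```python
from collections import Counter
from datetime import date

# Hypothetical bug-tracker export: one dict per bug (field names invented).
bugs = [
    {"feature": "Importer", "opened": date(2009, 9, 7),
     "fixed": date(2009, 9, 10), "verified": False},
    {"feature": "Importer", "opened": date(2009, 9, 9),
     "fixed": None, "verified": False},
    {"feature": "Search", "opened": date(2009, 9, 8),
     "fixed": date(2009, 9, 11), "verified": True},
]

def week(d):
    """ISO week number, e.g. 37 for Sept 7-13, 2009."""
    return d.isocalendar()[1]

incoming_per_week = Counter(week(b["opened"]) for b in bugs)
fixed_per_week = Counter(week(b["fixed"]) for b in bugs if b["fixed"])
left_to_regress = sum(1 for b in bugs if b["fixed"] and not b["verified"])

print("incoming bugs per week:", dict(incoming_per_week))
print("fixed bugs per week:   ", dict(fixed_per_week))
print("fixed, not yet regressed:", left_to_regress)
```

Run it weekly and keep the history - the week-over-week trend is what makes people listen.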


5. Problem: Tester says: I feel like I am out of the loop

Solution: Assign Feature Ownership. Sometimes testers are not in the loop because people do not know who they should inform on the Test side. Testers must be assigned features for which they are responsible from beginning to end. If there is more than one tester and you have not divided up the areas of ownership, it is total chaos. Developers have no idea who to talk to when there are code changes. Testers test only on a per-bug basis; they are not seeing the feature as a whole. Overall quality sucks. No one is responsible for shipping a crappy feature. A feature never gets the full attention it needs. The only person in the loop might be the QA manager instead of the tester actually doing the testing.

There are soo many benefits to assigning features that it's a whole blog post in itself.

When to shake things around

A) General Rule - After every release - switch Area Ownership

No tester should have the same area two releases in a row. If you have short releases, then it should be no more than a year or two.

Conditions before swapping areas:

  • Make it clear to the testers that before handing off their area, the feature must be stable and any deliverables must be completed (automation/test cases etc.).

Why switch up ownership?

  • Testers start to get desensitized.
  • They start to make assumptions that certain areas work.
  • Enthusiasm decreases - willingness to keep testing drops significantly.
  • They will not touch the area again unless there was a code change - and testing the fix in an area that was considered stable is minimal at best. (One of the reasons fixing bugs later in the cycle is very high risk)
  • Ignorance is Bliss - when an area owner finds a bad bug late in the cycle, there is a chance they might not enter it, in an attempt not to look like they suck. This is totally possible, and the chances of it increase if testers are questioned publicly on why a bug was not found earlier.

What do you get?

  • Spreading knowledge. You don't want only one person to know everything about a feature.
  • Increase the number of eyes on an area. The new area owner has a different perspective on the product. They will start testing the new feature in terms of how it relates to the old features they owned. As a result, a whole new set of integration tests will be run... which leads to new bugs (assuming you encourage ad-hoc testing).
  • Excitement. If you make it clear to the new area owners not to assume anything was tested, and that they are supposed to break it, it will bring a bit of excitement back, especially when the product has started to stabilize.
  • Surfaces problem testers - this is an awesome way to detect if a tester sucks. If the new owner starts to find a crap-load of issues... you just covered your ass. You NOW know the old area owner needs a backup tester... which raises a RED FLAG! (Time to give the old area owner a highly visible area that gets a lot of traction from other testers.)
  • Saving Face - if a tester knows that in a couple of weeks they will be handing off the area, they are probably going to start testing just a little bit harder to find the issues before someone else does. All testers hate it when someone finds a bug in their area... and they will try just a little bit harder so they don't look like they suck!
  • Old feature owners never really let go. They will still check up on the feature every once in a while.

B) Other testers are finding bugs in another person's area

You really need to analyze the bugs being entered:

  • Are they basic bugs?
  • Are devs pissed off that the bug was not found earlier? (Need to talk with them.)
  • How long has the bug been around for?
  • Was the bug a regression? If so, how long has the regression been around for?

If you are frequently seeing bugs entered in a person's area, then something needs to be done. See if the tester is overwhelmed. Do they have too much on their plate? The features on their plate - is there a common integration link between them? Or do they own areas where, if they are testing Feature A, they are totally neglecting Feature B, and both Features A and B are high risk?

If a tester has two high-risk features but there is no common integration point between them, you should assign one of the high-risk features to another tester that has the bandwidth to look at it. (Not all testers have a high-risk area.)

C) Miscellaneous Reasons

I am just outlining some general rules in A) and B). Of course there are lots of other reasons to switch areas.

What you should get out of reading this blog post:

What I wanted to convey is that switching area ownership is a good thing... so PLEASE don't be afraid of it. Testers LOVE it! And if they don't like the idea... it's because they are shitting their pants :) (NOTE: there are bad ways of switching areas... but doing A) is always good, and doing B) is for the quality of the product... it's the stuff in C) that can cause problems.)

Wednesday, February 4, 2009

The Sweet Spot

Out of all my years testing... there were these moments where everything aligned perfectly, and as a result my features were implemented, tested and stabilized in record time. These moments I am calling 'the sweet spot'. (Note: I made up the term 'the sweet spot'.)

My definition of 'the sweet spot' is:

  • when you have the right developer with the right tester, and they are both working on the same feature at the same time.

Sounds simple... but in reality it's pretty rare.


People these days are all hyped up about scrum, like it's some magical solution to shipping faster. It's so complex (so many rules) that people think it must be the answer... eventually, once you follow the path, you will see the light!? Lots of wishful thinking... and it's missing the crucial key to shipping high-quality products... the emphasis on 'the sweet spot'. (Yahoo, one more rule to add!!! :)


What happens when you are in "the sweet spot":

  1. Developer checks in code. The tester and developer are now in the sweet spot. They work extremely closely, going through an intense cycle of entering and fixing bugs as fast as they can. Both are 100% working on the same feature.

  2. Developer fixes all bugs found regardless of severity - no punting, no postponing, no backlog. They do not work on anything but bugs in the same area the tester is working in. The developer does not fix bugs in other areas.

  3. Tester is focused on testing the same area the developer is working on. They do not postpone testing to do automation, tools, or test-case writing. They start pounding on the feature right away, testing from unit to integration and through all the different testing themes.

  4. Managers - life is easy for the managers. They just need to sit back and watch. This is the most stress-free time for a manager. Their one responsibility is to make sure dev and test keep the sweet spot going by removing external factors that could cause a context switch.

  5. If someone asked the developer or the tester what they are doing while in the sweet spot, both should be able to answer for each other. They are so close, they can finish each other's sentences.

  6. The sweet spot should continue until the dev and test consider the feature done - when both of them think it can't get any better and the bug rate has slowed to just a trickle.

  7. Spike in bug count.

  8. The time span from a bug being entered, to fixed, to verified is very short (fast turnaround).


Results of "the sweet spot":

  1. Any feature that goes through the sweet spot is no longer a risk. You can check it off. Sure, there might be minor issues, or integration with other features that were implemented later. But it's no longer an unknown; 80% or 90% should be working.

  2. You can move the dev and tester to a new feature set to work on.

  3. Two brains are better than one. If you put an average dev with an average tester and they go through the sweet spot... you WILL get ABOVE-average results. Speed and stability will be way higher.


How to help line up everything such that "the sweet spot" can happen:


  1. Dev and test should be physically close together. Ideally they can talk to each other without having to get up from their seats. (Not a must, but boy, it makes a HUGE difference.)

  2. Pair up the dev with a tester. State that they are a team. All information (meetings, emails) about the feature they are working on goes to both of them - no filtering.

  3. Tell the developer and tester what should happen when the code is checked in.

  4. DO NOT time-bound the sweet spot. Commit to finishing - while the incoming bug rate is high, let them stay in the sweet spot. One reason it's OK not to time-bound is that 100% of the tester's and developer's attention is on the product; it won't drag on, because all external distractions are filtered out. Another reason not to time-bound is that if you cut it short, we all know it comes back under a different title... 'bake time', 'integration phase', 'stabilization phase', 'code complete phase'... that span of a couple of months before ship where testers finish the testing they should have finished earlier, but could not because it was time-bound.


For the visual people out there... I made a super diagram in Paint of how things should go!! When things went well for me in the past, it was because they had the exact same flow as below!

Sunday, January 25, 2009

Look at the Tester - Non-Obvious ways to find bugs

As a tester, I was always trying to figure out what area I should hit next. Where are the holes, the problem areas, the places that have bugs? One of the most efficient ways is to watch the testers: figure them out, understand how they test, what their background is, etc.

Here is a list of tester types I have seen over the years that you should look out for. They all have a high potential for shipping low-quality products if no one is watching them.

Note: the labels I gave them are my own and just for fun.

1) The Complainers

Listen to the hallway conversations. What areas or developers are the testers complaining about? Any kind of complaining about a specific dev or area demonstrates a problem with the dev-test relationship. Sometimes you won't hear verbal complaints; instead you will notice that you never see the developer and tester talk.

Another reason testers might be complaining is that they are overloaded. They can't get to everything at once, and no one is stepping in to help them out. This causes the tester to do only surface testing; deep, in-depth testing is not done simply because of time constraints.

Types of Bugs:

If the problem is the dev/test relationship, then there are always huge holes in the areas these two own, and the bugs will probably be very, very, very basic.

If the problem is that the tester is overloaded: work with the tester, and say any testing you do is unofficial and under the table - but ask them what areas they are worried about, etc. Any type of testing is a bonus in their eyes, and it will alleviate some of the stress.

Reduce Risk:

For dev/test relationship: From a management point of view, you should definitely separate these two and assign this area to a new tester.

For the overloaded: move areas around. Bring other testers in. NOTE: I have randomly heard 'mythical man month' as an excuse not to do anything... what bull. Get the area owner what they need, or give them the option of forming a SWAT team - where they drive and pick out individuals to help test the specific areas that need to be covered.

2) The Underachievers

Look for the testers whose skills/experience do not match what they are producing. There can be a thousand reasons for this... but in the end, the results are usually the same. If you see a tester that is not producing at the level they are capable of... that means they have checked out. (Time to move in on their area. :) ) Motivation and interest are gone, and they are probably just doing the bare minimum to stay afloat. You will know if they have checked out by a simple test... enter bugs in their area; if they don't react, they've checked out. (Same test you would do to see if an animal is dead: poke it with a stick and see if it moves. :) )

Types of Bugs:

All over the place. You never know when they lost focus. Look at recently fixed bugs, since bug regression is one of the first places they will skimp.

Management:

You've got to move these guys to something new. A new team, a new product, a fresh start. You might want to hold them tight and close to you because at one point they might have been awesome. But for some reason the passion is dead, and you will never see it again (for you specifically)... cut the cord and let/help them move on.


3) Repeat Offenders

If a tester dropped the ball on a feature they shipped before, they are going to do it again. Since they did not get fired the first time... why try harder? You will always find bugs in their area. Why? Because they are not getting it and don't care enough to try to get it. Typically these are senior testers. The funny thing with these guys is that you would never know by meeting them that they suck, because they don't think they did anything wrong. Sometimes they are big talkers with low bug counts, or they are just floating by.

Types of Bugs:

Everything under the sun. I have found that just following the specification/design document turned up tons of problems. Other times I have found problems where the feature integrated with another product. You can always ask this tester what they hate the most, and you will probably get your answer of where to start looking for holes.

4) New Hires
I love new hires... motivation is high, work ethic is high, they are young and energetic... the only problem is they are new. When you are testing a new hire's area, it's best to actually teach them different techniques and let them run with it. I feel it's very important to give new hires the right feature set - set them up to learn what it takes to be a top tester. Features you should give a new hire: 1) one that a lot of other features overlap with, and 2) one with a high bug count. The reason to give overlapping/high-traffic features is that other testers will uncover the bugs the new hire missed (reducing risk). And the reason to give the new hire a feature that typically generates a high bug count (like UI) is that it is easier to go deeper into the product from a UI entry point, so they can go top-down (I rarely ever see people go bottom-up, from the database level to the UI).

Types of Bugs:

Leave their areas alone. Better to take the opportunity to mentor them, show them what they missed, etc. Don't take away their opportunity to practise finding bugs in their own area.

5) Hurts to Think

Creating a test plan requires a lot of analysis and thinking about the features and how they will integrate with the rest of the product. So look at the test plan (if one is required). Definition of 'test plan': a document outlining how the tester is going to approach testing the area. Typically the testers that put no effort or thought into the test plan... will do the same when testing their area. It's the first sign there is going to be trouble.

Types of Bugs:

The types of problems you will see out of their area are major redesigns later in the cycle and huge integration misses.

6) The Verifiers

Verifying sucks. It's the worst thing ever. All they tried were very basic unit tests, which the developer already did. Basically, it's open season: I bet any test case that even remotely pushes the limits will break something. Instead of just testing their area for them, it's best to convert them into real testers. They can be converted; it may just be that no one has mentored them or shown them how to be successful. (This all assumes they want to be in testing and have the motivation to become a great tester!)

Types of Bugs:

The only thing that was tested was basic unit tests. Since there is so much testing left, it's best to mentor these testers and teach them how to be superstars, if the motivation is there. You will see by their bug count whether they truly want to learn. And if they are all talk and no action, then it's time to start testing their area.

7) The Pushovers

The pushovers are the testers that can't stand up for a bug. They find a serious issue but get overruled by their program manager or developer. Instead of pushing for the bug, they let it go. All three are at fault in making the wrong decision about a bug, but sometimes the tester claims innocence because in their world their responsibility ends at 'logging the bug'.

Types of Bugs:
The types of bugs that make it into the final product are usability issues. Usability is a common type of bug where the tester's opinion is null and void because of the common saying 'you're a tester, you are not a real-world customer'.

Management:

It is best to pair up a pushover with a senior program manager or senior developer. It is usually the younger PMs that make the bad calls, or the young developer that does not want to do the work.

8) The Keeper/Old Timer

The keeper is someone that has owned an area for several releases. People always think this is good because they are the area expert, but it's not!!! Totally the opposite - it's the worst thing you can do, and you are in for trouble. They have become desensitized and no longer question anything. They no longer go through the area with a fine-tooth comb. They skim over things; they test for 'it generally works'. It's the same old same old. Passion starts to wane, and attention/focus starts to wander to other things.

It is better for the team if this person takes on a new area and someone else takes over their old area. Why? 1) The keeper will have a hard time letting go, which means you will still see bugs coming in from the old area owner even though they have a new area (so you get two sets of eyes). 2) Expertise is distributed to more than one person on the team. 3) You have a fresh set of eyes looking at the feature. 4) The keeper is less likely to get bored and leave your team.

Types of Bugs:

Regressions will not be found. Basic scenarios could be broken since the tester assumes they are working.

Thursday, January 22, 2009

Weekly Bug Quota - why it's soo good in soo many ways

I swear the best way to ramp up anyone is to give them ONE goal... hit a weekly bug quota. The magic number I always liked was a minimum of 10 bugs a week. And the rule of this game is: always hit it, no matter what.

Who should this be applied to?
  1. New Hires for sure, it simplifies their life and teaches them to focus on what makes them valuable... finding bugs.
  2. Anyone that has not been through a full release cycle from start to end.
  3. Any testers that are labeled as verifiers.
  4. Any tester that does not consistently produce results.
  5. Any tester that goes off and does their own thing and leaves the product hanging.
  6. Any tester that thinks they are done.
Why is a weekly bug goal good to have?
  1. Teaches the tester to never take their eye off the ball - or in our case, the product. It forces them to practice finding bugs. The tester should never go longer than a couple of days without being in the product.
  2. Everyone knows how the product is doing (including devs and management). It's funny how people always assume the product is better when there are no bugs... when in reality you don't have a clue whether that's because the product is stable or because the tester is not testing. By enforcing the quota, you don't have to worry so much about the tester not testing.
  3. Later in the product cycle, this quota is super important because it forces the tester to do integration testing (80% of all bugs are integration bugs). A lot of testers think that if their area is stable, it's vacation time! Not true - it's heavy integration time. NOTE: I have noticed that very few testers spread outside their area; the mandatory bug quota forces them to branch out.
  4. If you have hired someone to go through a test plan and verify, you should still apply the bug quota. It will apply a little bit of pressure and force them to question the test cases they are running - it puts them in the 'break the product' mentality. You will get higher-quality testing, since they will deviate outside the test plan when they see something interesting.

Excuses you will hear (when they don't make the quota)
You will hear a lot of excuses... here are a couple that come to mind:
  1. "My area is stable" - my answer would be - "test other peoples area"
  2. "I don't have UI" - my answer would be - "test other areas"
  3. "I was regressing bugs" - my answer would be - "while regressing you should have been trying to find bugs" (regressing bugs is the best way to find bugs... so anyone says this statement probably did a really crappy job regressing the bugs!)
  4. "I was writing my tool" - my answer would be - "write your tool, but also find 10 bugs. tool means nothing if you don't find bugs"
  5. "I am blocked" - my answer would be - "test something else"