Vote my video to send me to EuroSTAR

Tuesday, March 29, 2011 2:14 PM

This is a bit of an unconventional post for this blog. I hope you will forgive me :)

The message is simple. Watch my video above and if you like it (I hope you do), please vote for me here. Remember, I am "Video 6: Sajjadul Hakim" :)

Your vote can take me to the EuroSTAR 2011 conference. Many of you do not know that I will be speaking at CAST 2011 in Seattle, on this same topic, i.e. "Understanding Gut Feelings in Software Testing". No, there was no voting for that. Just the belief of James Bach that my talk may be worth listening to :)

Voting ends on Friday, April 1. So please hurry. Thanks to the support of many of you, I am already neck and neck with the leading contestants. It's an intense competition. Your vote can make a difference now!

Vote for "Video 6: Sajjadul Hakim" here.

Masters of Illusion - Fake Testers (Part-1)

Sunday, August 29, 2010 3:02 PM

Before you start, let me warn you that this is an experiment. I encourage opinions even if you do not agree. I am not sure how some readers will take this, but I assure you that my writing does not implicate anyone in particular.

Testing has come to be a mainstream job. There was a time when I wouldn't get enough resumes when I posted a job ad for a testing position. Not anymore. Now I am literally overwhelmed with the number of resumes I receive. Even so, I am not able to hire testers who can do magnificent testing. I try to hire those who can learn to do magnificent testing, and even they are pretty hard to find. I am concerned about this predicament. I notice that most people I meet, testers or non-testers, seem to think they know testing and are clueless about how wrong they are.

Being a tester is about busting illusions, but what if the tester is the illusionist? Being a tester is about not being fooled, but what if the tester is the one fooling his audience? Things become worse when this illusionist is the one who is rewarded and encouraged in his workplace and community, which even lets him build followers and zombies. How do you recognize this fraud? I have identified some traits from my own experience communicating with testers. Do question my thoughts and knowledge. This post is for those who do not want to be fooled, and to encourage them to question the motives of such testers.

The attributes I outline below are sometimes, in my opinion, what fake testers possess. However, having these attributes does not necessarily make someone a phony. This is what Nassim Nicholas Taleb calls the Round-trip Fallacy, i.e. "these statements are not interchangeable".

Escape Demonstration

You would think it's obvious that testing should be taught by "demonstrating" testing. Apparently that is not the case. Take ISTQB training, for example. Every time a student tells me that it helped him know more about testing, I ask him if the instructor ever demonstrated testing to him or let him test something. Of course, the answer is always NO.

Demonstrating testing is extremely important when teaching testing. But it's extremely difficult to do if you have never tested before or if you don't practice testing. This is probably the most effective way to expose a phony. You can tell a lot about someone's skill by watching him test. It's also a great way to judge testers during interviews.

Usually a fake tester will avoid public testing demonstrations, and you can forget about rapid testing sessions within an hour or half an hour. He will always find some excuse. Mentors such as James Bach, Michael Bolton and Pradeep Soundararajan do demonstrations all the time, and when they can't, they articulate their testing in their blogs or articles. I sometimes demonstrate one-hour rapid testing sessions to my testers on an application they choose for me. It's a great way to teach and also to build my own demonstration skills. I find that demonstrating for the purpose of teaching is more difficult, because you have to continuously communicate your thoughts and actions to your audience. I have certainly become better at it.

When my testers cannot reproduce a problem reported by the end user, I try to do it myself and then explain to them how I figured it out. Even if I can't reproduce it, I would still explain how I investigated. If my testers saw a problem, but can't figure out how they found it, I sit next to them and try it out with them. If my testers tell me they can't figure out how to test something, I would either explain that or demonstrate it in person. You will not see that happening with fake trainers or Test Managers.

Here is a question for you. If you never saw your Test Manager or Trainer demonstrate or articulate testing then how do you know they are testers?

Weekend Testing (TM) -- Why Not

As I was writing the last section, it got me thinking about Weekend Testing. If you haven't heard of this before, read what James Bach has to say about it. Oh, and it's no longer just an Indian thing.

I know I love testing and jump at the first testing opportunity, so why was I not participating in Weekend Testing? The obvious reason was that I was already working long hours during the weekdays, and the electricity going out every other hour was just too frustrating for an online bug chase. An enticing bug chase. A mind-boggling discussion opportunity with culturally diverse sapient testers of the context-driven testing community. The more I thought about it, the more pathetic my excuses sounded. This was what I had always dreamed about. So I decided to give it a try one weekend, when I happened to still be in my office as the clock struck 3 pm IST. It was just as awesome as I expected it to be. So I did it again, and then again.

I know that many testers feel shy or are afraid to participate because of the possibility of performing badly in public. It's much the same fear that some have of asking questions in public. I know my testers are pretty enthusiastic about it, but they are shy. Every time I do a Weekend Testing session, they find out what I tested and start testing it at work on their own. They feel they want to practice a little more before jumping in. That is ok with me. In fact I want to speed up this process. So recently I ditched the weekly one-hour rapid testing sessions and started a mock Weekend Testing session at work. It's as if we are all sitting in the same room but using Skype to communicate. We take 10 minutes to pick an application at random, then test it for the next 30 minutes. Then 5 minutes for the experience report, followed by discussions, all on Skype. Sometimes when we have less time, we discuss without Skype. I am hoping that pretty soon more of my testers will gather the courage to participate in Weekend Testing.

Anyone can make up their own brand of Weekend Testing if it works for them. Even James Bach said on Twitter, "They can set up their own weekend tester thing with just their own friends".

If you never did Weekend Testing, that doesn't necessarily make you a phony. But if you never plan to take part in Weekend Testing, you probably aren't enthusiastic about testing at all. Here is what James Bach said on Twitter: "It would be like a carpenter saying that he'd never build something for fun, at home. Such a man is in the wrong job."

I Don't Know -- The Forbidden Phrase

When faced with a question you don't know the answer to, it is ok to say "I don't know". There is no shame in it, even if you are an expert. I say it all the time. Then I also say "But I will find out". That is not an exit strategy, though; I do try hard to find out. Unfortunately fake testers don't want to do that. I notice this during conversations and even in testing forums. Usually I have disagreements and debates with non-context-driven testers about what they preach as "Best Practices". When I present them with credible scenarios and question their beliefs, they usually avoid the question or cherry-pick the questions they "think" they can answer. Eventually, they say something like they are busy, or label me a heretic, or just don't reply. Sometimes they give vague replies. I understand this could also be a side effect of my "Transpection" practices, introduced by James Bach in his blog:

"One of the techniques I use for my own technical education is to ask someone a question or present them with a problem, then think through the same issue while listening to them work it out."

When I read about "transpection", the idea did not feel foreign to me. But the post made me aware of the side effects of transpection. Also, I now have a name for it. I have only used transpection on the testers I coach and close associates. I don't think that was intentional though; I did it without thinking about my approach. I don't remember ever doing it in public, so I can safely rule out that cause in my unpleasant scenarios. However, as James mentions, it is a legitimate reason for someone to get irritated with you and look the other way.

To Be Continued...

And so ends Part 1 of "Masters of Illusion - Fake Testers". I have a lot more to say in Part 2 and Part 3. But I wanted to give you a break to ponder over what I have written so far.

Coming up in Part 2:
  • The Techie Tester: Fake testers are usually oblivious to the notion that there is more to testing than technology and process...
  • Exploratory Testing is not Predictable: They seem to assume that exploratory testing is uncertain (because testers make it up as they go) and scripted testing is not...


Reconnaissance - A Testing Story

Tuesday, October 27, 2009 07:22 AM

Our application is so huge that we have teams of programmers assigned to separate modules. One day, the development Team Lead of the messaging module came up to me and said...

Team Lead: We are giving a release today with the changes to the messaging module.

Me: Today?

Time Warp -- About Two Months Ago:

The application has a messaging module. We added a feature to enable or disable this module for users. While testing the feature, we found that some user names did not appear in the UI, and so could not be added to the recipient list of a message. On further investigation it turned out that this was a side effect of another module: the user creation module. Users are created by entering user profile data, and it was assumed that once a user is created, some default user preferences are inserted in the database. When implementing the new preference for the messaging module, the code was written assuming that the preference values would always be found in the database. But it turned out that the preference data was not inserted when users were created with another feature: importing all user profiles in bulk from an Excel spreadsheet. This feature, developed about five years earlier, did not insert the default preferences. Why was this bug not detected earlier?

There aren't too many preferences. When dependent modules and features look for their respective preferences, the code assumes certain defaults. These defaults are the same as the default preferences that are inserted. So this went undetected all these years, without any noticeable side effects.

Since the release date was close, a quick fix was made that basically checked for the absence of the messaging preferences. This fix was marked to be refactored at a later time.
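To make the failure mode concrete, here is a minimal Python sketch of the situation described above. All the names (the preference "table", the functions, the preference key) are invented for illustration; the real application is not Python, and this only models the shape of the bug and the quick fix.

```python
# Hypothetical model: the "database" is a dict mapping user_id -> preferences.
DEFAULT_PREFS = {"messaging_enabled": True}

prefs_table = {}  # stands in for the user-preferences table

def create_user_via_ui(user_id):
    # The normal creation path inserts the default preferences.
    prefs_table[user_id] = dict(DEFAULT_PREFS)

def create_user_via_excel_import(user_id):
    # The bulk-import path forgot to insert defaults -- the bug.
    pass

def messaging_recipients(user_ids):
    # Original messaging code: assumes the preference row always exists,
    # so imported users silently vanish from the recipient list.
    return [u for u in user_ids
            if u in prefs_table and prefs_table[u]["messaging_enabled"]]

def messaging_recipients_fixed(user_ids):
    # The quick fix: treat a missing row as having the default values.
    return [u for u in user_ids
            if prefs_table.get(u, DEFAULT_PREFS)["messaging_enabled"]]

create_user_via_ui("alice")
create_user_via_excel_import("bob")

assert messaging_recipients(["alice", "bob"]) == ["alice"]  # bob is lost
assert messaging_recipients_fixed(["alice", "bob"]) == ["alice", "bob"]
```

Notice why this stayed hidden for five years: as long as every reader of the preference falls back to the same defaults, a missing row is indistinguishable from a default one, exactly as described below.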

Time Warp -- About a Week from the Present:

Some performance improvements were being made to the application. Since the messaging module is widely used in the application, some of its database queries were optimized. In one of the optimizations, the extra messaging preference check was removed. Of course everyone knew that this also required a bug fix for creating new users: the default preferences should always be inserted in the database. So that fix was made for the Excel spreadsheet import feature. Since it was suspected that the bug in the messaging list (the one where the user names did not appear in the UI) might reappear, two testers were assigned to test it.

Time Warp -- Back to the Present:

I had been preoccupied the whole week trying to write automated test scripts using a combination of FitNesse, Selenium RC, JUnit and the test harness I created two years ago. I had abandoned it back then because of a lack of resources and the difficulty of maintaining it. Now I was revisiting it with some new ideas I had picked up over the years. Because of this I was not able to give much time to following up on the testing that was done recently. This was an expected side effect, and my peers and testers are well aware of it.

So now getting back to the conversation...

Team Lead: We are giving a release today with the changes to the messaging module.

Me: Today?

Team Lead: Yes. They (my testers) said it was ok.

It was a performance improvement. I had heard a little about the changes that were made, but had not debriefed the testers on their progress. I also play the role of a Test Toolsmith, and there are occasions when I disappear from testing to develop tools to speed up our testing effort. Most of the time these are not automated tests, but tools that assist in test acceleration. I can probably write about those another day. This is way more interesting to write about now.

My team uses Yammer to write short notes about what they are working on. Yammer is very similar to Twitter, but it lets you create closed groups with all your coworkers automatically, based on their company email addresses; kind of like Twitter for the workplace. Yammer helps keep me aware of what is going on with my team when I am away, and I can interact with the testers in chunks of time during the day about their tasks. It is not a foolproof practice (in fact it is far from it), but it works well for us during such times. When I put aside my Toolsmith hat after a week (or a couple of weeks), we become more agile and start interacting throughout the day.

Me: Ok. Let me talk to them about their tests just to follow up.

Whatever hat I was wearing at that time, it was important that I got back to being a Test Manager immediately. I trust my testers, but I am paranoid by nature. Two of my good testers were working on this. So I wasn't anticipating a disaster. Yet I felt curious enough for a debrief with one of the testers (since the other tester was busy working on something else).

The debrief went well, and she seemed to have some good ideas about what to test. I also called one of the programmers over to talk a little about any dependencies he was aware of. We didn't get too far with that, so I continued to bounce my test ideas off the tester. At one point I asked her...

Me: What problems did you find?

Tester: I didn't find any problems.

Me: I mean, did you find any problems after they made the changes?

Tester: Actually, I didn't find ANY problems. (She looked and sounded disappointed)

Me: You did not find ANY problems?

Ok, here is the thing. This was not a big issue, and it is not that we always found problems after a code change. In fact, sometimes things are pretty rosy. But every time I hear this, bells ring in my head. "I didn't find any problems" acts like a heuristic that makes me wonder: Were we looking for the right problems? Were there important tests we didn't think of? So we started talking a little more about other possible tests. She had checked those as well. Impressive. So, I asked her...

Me: How do you know that you did not verify the wrong data?

Illusions happen. I have witnessed quite a few myself. In her case, this was a list of user names and titles that were word-wrapped, and the readability was not great.

Tester: I created the user names and titles by labeling them with the kind of test I was doing. That's how I made sure.

I just love it when they can explain why they did what they did. It may seem that I am just making life difficult for my tester, but hey, things like this can go wrong, and in my opinion we need to be conscious about it. During the discussion I realized that she did not test the other features of creating a user profile. Yes, there are six ways to do this.

This is an application that has been under development for more than five years. It has many features and not-so-obvious dependencies. A thorough regression test before each release is impossible with the limited resources we have, so we have to analyze dependencies to figure out what to test. Since the dependencies are so meshed, checklists haven't been working out that well for us during regression. We figured mind maps would be a useful way to remind us of dependencies. The problem is that once we put detailed dependencies on the mind map, it becomes very difficult to follow. We needed a tool to zoom in on specific nodes, hide the rest of the clutter and show their dependencies; then zoom in on a specific dependent node and see if it has its own set of dependencies; then maybe follow the trail if necessary. An interesting tool that does something similar with mind maps is PersonalBrain. It is not very intuitive and needs some getting used to. Its user interface was not exactly what we had in mind, but it is still helpful when you want to manage complicated dependencies. Here is a snapshot of how the dependencies of creating user profiles looked (the first one is zoomed out, and the second one is zoomed in on Create User to reveal more branches)...

Getting back to the story. Luckily the Team Lead was walking by, so I told him that we had not tested all these other ways to create a user profile. I knew there wasn't much time left before the release, and it would take a while to test all these features, so I wanted to see if he thought we deserved more time. He said they hadn't made any changes there and these features were developed a long time ago. There were no signs of any problems. Ok, so clearly this did not have any priority for him, and with what I had analyzed so far I didn't think it had much priority either.

Back to the debrief with the tester. While I was talking to her, I had asked another tester (who is also a Test Toolsmith) to pull up the source code change logs. Now that we had exhausted our test ideas, I wanted to see if I could get any clues from looking at the code differences. It is worth noting that although I was once a programmer, I don't know much about this application's source code (which, by the way, is pretty huge). I never invested much time reviewing it. I don't usually analyze the code changes, but since this was dubbed a very small change, and the code differences looked friendly enough, I thought I could dare to investigate.

I could only make out some query changes. The programmer was kind enough to explain them. He calmly reiterated that it was simply a performance tweak. He had removed a check for a null value in a particular table column. That was basically it. He explained that this value would have been null when importing user profiles from an Excel spreadsheet, and even that had been fixed by another programmer. Also, they would be running a migration script on the database to update these null values. I knew about the migration script before, but now it was all making sense. I realized that everyone was very focused on this one particular feature. So I went to talk to the programmer who fixed the Excel import feature. He said that his fix was only for the import feature and would not affect any other features for creating user profiles. Well, that's what he said. There are six ways to create a user profile, and these were implemented about five years ago. The original programmers were long gone, and this programmer was assigned to fix this very simple bug for this very specific case. He didn't know about 'all' the other ways of creating user profiles, and said he had not reviewed the code for 'all' the other implementations either.

Suddenly I felt an urgency to test every known way to create a user profile before the release. Clearly there wasn't any time left, but from the evidence I had gathered, although there was little proof that there was a problem, there was just too much uncertainty and too much at stake. The danger with this change was that, if it went live and the other user profile creation procedures had the same bug, those users would suddenly stop appearing in the user lists used to send messages. That would affect a major portion of existing users, and since we had more than 50,000 users, it could result in a customer support nightmare.

So I quickly instructed another tester (who usually deals with the application's administration-related features) to start testing the other five ways to create a user profile, while I continued the debrief (we had some new dependencies to discuss). I briefly told her what to look for, i.e. the database table values she should monitor, and to check the values of the messaging preferences when creating user profiles from the UI. Note that she was not a geek, nor was she very technical. But checking database values is not that difficult. Since we run Oracle, we use Oracle SQL Developer to open tables and filter values. This is very simple to do, and in this particular case no complicated SQL queries were required. If anyone did need specific SQL queries, someone with more technical experience would write them for them.
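The check itself boils down to filtering the preferences table for a messaging row per newly created user. Here is a hypothetical sketch of that kind of check, using Python's built-in sqlite3 in place of Oracle; the table and column names are invented, not the real schema:

```python
import sqlite3

# In-memory stand-in for the real preferences table (invented schema).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE user_prefs (user_id TEXT, pref_name TEXT, pref_value TEXT)"
)

# A user created via the UI gets a messaging preference row inserted;
# a user created via one of the other (buggy) paths does not.
db.execute(
    "INSERT INTO user_prefs VALUES ('alice', 'messaging_enabled', 'Y')"
)

def has_messaging_pref(user_id):
    # The monitoring step: after creating a user through some path,
    # a messaging preference row must exist for that user.
    row = db.execute(
        "SELECT pref_value FROM user_prefs "
        "WHERE user_id = ? AND pref_name = 'messaging_enabled'",
        (user_id,),
    ).fetchone()
    return row is not None

assert has_messaging_pref("alice")      # row inserted: preference present
assert not has_messaging_pref("bob")    # row never inserted: bug detected
```

In a GUI tool like Oracle SQL Developer the same check is just a column filter on the table, which is why a non-technical tester could run it.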

After about ten to fifteen minutes, the results of one of the tests were in...

Tester: Are you sure there will be a new entry in the table?

Time for me to look into the results. We have two database servers running. We usually keep one server for ongoing development changes to the database tables. These changes are then made into migration scripts by the programmers and database administrators, and the migration scripts are run on the other server, which holds a snapshot similar to the live database. This is done to verify whether the migration scripts cause any side effects.

I wanted to make sure she was checking the right database. So we modified some values from the UI to check if they changed in the corresponding table. They did. I wanted to make sure she was extracting the correct user ID from the browser HTML and verifying that in the database. She was. I wanted to make sure our test procedure was correct. So we did the Excel import test again, and checked if it entered the relevant values in the preference table. It did. Clearly there was a problem.

I informed the Team Lead of this and told him we were checking the other methods as well. He quickly went to investigate the code with his programmer. After a while the next test result came in. Same problem. Then the next. Same problem. All other methods of creating a user profile had the same problem. It was finally decided that the release would be postponed to another day, until these were fixed.


Surviving a downsizing

Saturday, November 15, 2008 10:44 AM

I think it is fair to assume that if you don't have programmers, you will not have a product. So when your employers run into a financial crisis, will they prefer to let go of their testers before considering the programmers? A recent discussion thread at Software Testing Club got me thinking.

Testers are mostly service providers, i.e. they provide quality related information about the product to help the stakeholders and programmers make critical decisions. They ask important questions that are generally overlooked by others. These are usually regarded as very important functions, since most others are not doing it. It is a different mindset and a different set of skills that most programmers, project managers or customer support personnel cannot acquire very quickly.

There is a popular myth that testers with technical knowledge or automation skills will be preferred over those with manual (or sapient) testing skills. We must be careful not to undermine the value and skills of the so-called manual testers. It is rather questionable how much value automation testers add to a project (it depends on the kind of product, of course), since they are not really questioning the product the way the testers are. The automation tasks may be easily taken over by the programmers, since it is programming, and learning new tools may not be that difficult. But if automation testers have important sapient testing skills, then that may not be the case.

There is another popular myth that programmers will probably be better at testing their product, since they created it. I think creating and questioning your creation are two different skills that are very difficult to practice together. They require very different mindsets and experiences. However, I can understand that programmers can write very good unit tests since that requires intimate knowledge of the code.

In my opinion, during any financial crisis, the employees who are more dynamic will be the survivors. Testers may actually have an edge here, since they would probably know about the business domain, customer issues, the overall product infrastructure, product usability, recommending important new features, etc. Testers who are not quality police and are able to take on deadline challenges will probably be preferred when management decides to reconsider project timelines. Testers who are able to derive important feature-related information about the product by questioning and exploring, rather than always demanding spelled-out specifications, may seem more favorable. My point is that if management perceives the tester not as a liability, but as someone facilitating the project in essential ways, it will probably be much harder for management to consider them for downsizing.

The value a tester brings to the team is about more than finding bugs. The product will always have bugs, even with testers, since testing cannot ensure the absence of bugs. A tester's value depends on how he questions the perceived value of the product. For example, if the tester is able to identify problems in the product that could disappoint or frustrate the end user, he has successfully defended the expected quality of the product. It is true that some companies do not experience this value from their testers, maybe because the testers do not have the required skills. If such testers want to survive, they need to start giving attention to these skills.

Unfortunately, downsizing may not always depend on performance; it may be more of a budget issue. So it may very well be the case that good testers are shown the door simply because the company cannot meet the budget. However, consider another angle to this predicament. If your company is not as huge as Microsoft, then chances are you have far fewer testers than programmers. This ratio may tilt in favor of the testers under a strict budget constraint where a certain number of employees need to be laid off. Remember that not all programmers' skills are equally important to management. Programmers have an equal challenge to prove their worth, especially since they will probably be much higher in number than the testers, and therefore form a larger pool to cut from. So if management values the work done by the testers, the ratio of layoffs may not favor the programmers. In the thread at Software Testing Club, Jim Hazen shared a very interesting experience depicting such a scenario:

I've been in Software for 20+ years, 20 of it in Testing. I have been through 3 merger/acquisitions and a few companies downsizing. I have survived some of those events, others I was not as lucky. Now I have seen situations where the whole development team was cut loose (because they did F'up badly) and the Test group kept intact (management figured that other developers in the company could be re-tasked and could take over the code, but that they were understaffed on testing as is and because the product was close to shipping they needed to keep the test staff). This was a unique situation.

You will notice that most of what I wrote above assumes you have a mature and responsible management that recognizes the non-monetary value added by testers. The kind of value addition I wrote about is in no way a worthless activity, yet it may not receive the credit it deserves from immature management. For naive management, it is easier to perceive programmers and sales people adding monetary value than testers. Nevertheless, understand that this is only in the narrow sense of adding value. In a recent blog post, Michael Bolton suggested some ways testers can add monetary value. Whenever I talk about testers adding value, I mean it in the broader sense. This will always be difficult for immature management to realize if they are strictly looking to quantify the value. But as Michael says, "there are lots of things in the world that we value in qualitative ways."

Downsizing decisions depend on the kind of company you work for. I do not work for employers that do not value my testing skills and my contributions, and I would advise other testers to do the same. Of course, I also do my part in proving my worth to management.


Software Testing for Dummies?

Friday, October 17, 2008 11:47 AM

In my ongoing debates and discussions about saying no to testing certifications, I came across a very interesting comment:

BBST is an excellent course (so far what I have learned) and I think every tester should learn from it. But would you please tell me how many testers in Bangladesh actually learned or will learn all the lessons from this course. I know very few of them. Many tester will start learning very seriously but after few days he/she will lose interest because it's free and it will not give any direct output, many of us (especially Bangladeshi people, of course there are many exceptions) are always looking for instant output for their given effort. But most of them have a lot of potential.

I have a feeling this holds true even in other countries. Comments like this are pretty common, actually. The implication is that testing certifications provide an easier entry to this craft: they are easier to prepare for and pass, and since certifications are not free, people will take them seriously, or at least try hard to get a return on their investment. That would be a lot easier than reading the blogs and articles of veteran testers, or participating in online testing forums, or coming up with context-driven testing strategies or blah blah blah. So fortunately there is a way to become INSTANT testers.

Unfortunately it is time to break the bad news. I believe...

Testing is not for the LAZY.
Testing is not for the IMPATIENT.
Testing is not for those who seek SHORTCUTS.
Testing is not for those who do not know how to LEARN quickly.
Testing is not for those who are not SHARP.
Testing is not for those who do not have PASSION for it.
Testing is not for those who cannot QUESTION.
Testing is not for those who are not INVESTIGATORS.
Testing is not for PROCESS freaks.
Testing is not for CONTROL freaks.
Testing is not for those who cannot be DYNAMIC.
Testing is not for those who cannot deal with UNCERTAINTIES.

I believe that if any tester (or wannabe) falls under any of the statements I just listed, they should rethink, or quit now! They should find another profession that suits their characteristics. Go back to school, or go back to programming, or go back to customer support, or take an administrative job, or whatever, to build a career in something else. Certification is a way of attracting many people with unfavorable characteristics into our profession.

Sometimes I felt there were people who had potential. But while working with them I learned that they lacked the attributes I respect in a tester. Some were just plain lazy and wanted shortcuts to being skilled. There are no shortcuts.

As testers, we are investigators. We are told to investigate something whose complexities are least understood, i.e. software. We are able to spot and analyze clues to problems that are generally overlooked. We ask questions that were not perceived by others. We expose the illusions about the software. Our clients are fascinated by our reports. This can't be easy. That is why we need intelligent people in this profession. That is why I believe that testing is only for the talented.

Now that is a profession I am proud to be in. That is why people who have these qualities are successful.

Saying NO to ISTQB Software Testing Boards

Sunday, September 28, 2008 09:59 AM

Back in 2005, I participated in an initiation discussion of a new "Bangladesh Software Testing Board" under ISTQB. It was a very disappointing discussion. Instead of giving attention to how they would try to improve the skills of testers or tester wannabes in Bangladesh, they were more concerned about who would be on the board and how they would register it legally. It was obvious that almost all of the individuals involved in this initiative were not testers. I would have said ALL, but maybe I had failed to notice someone. Just maybe. I brushed off this initiative as just hype. I did not expect much would ever happen, and I was right.

In 2007, two nice gentlemen from this yet-to-be-initiated board came to ask my opinion about the initiative. I am sure I am not the only one they spoke to. I expressed my skepticism, and interestingly they said they understood many of my concerns. They had very little knowledge of how certification actually undermines our testing craft. They even had very little knowledge of the other so-called famous certifiers in the business. Again, none of them were testers. But at least they had the courage to admit that they did not know much about testing.

Finally, a month back, I was called by one of the Executive Committee members (a good man with good intentions) from this yet-to-be-initiated board, to invite me to their Executive Committee. But I declined. I mentioned my disappointment in the lack of progress they were making (not that I actually wished they would make progress in certification) and that their mission was not clear to me. He spent the next 30 minutes trying to convince me of their seriousness about moving beyond mere certification. He hinted at intentions to bring certified trainers from abroad, and tried to encourage me to join them. He even suggested calling a meeting with the other EC members to discuss this further and clear my doubts.

I was actually a little flattered by all this (the dark side within me). But I still refused to join them, and instead told him that I would rather be part of their planning on how they intend to train testers in Bangladesh, and on what precautions they would take not to undermine the testing craft. I did not have to be on the Executive Committee for that. He finally agreed. To be honest, I doubt this approach will ever work out. I suspect they will again disappear for another year or so, given their historical lack of commitment.

You see, there is no way I can pursue my thoughts about the dangers of certification, if I am involved with one myself. I would be known as a hypocrite. Here is the hall of fame where I chose not to include my name.

Meanwhile I am continuing my advocacy against certifications in my local forum, and it has grown into a very hot topic, requiring the full commitment of myself and others. The discussion grew to 40+ posts in just one week, with experience reports from certified testers and those who suffered the consequences. One thing is for sure: it is not easy to go against the tide. It started off with people being disappointed in my opinions about certification, but now things are looking very productive indeed.

I will probably summarize some of these discussion points in a blog post soon. But until then, here is one of my latest responses to the long thread.

Don't test that

Tuesday, September 02, 2008 11:17 AM

Here is the story. We received a customer complaint that she was able to access some data that she was not supposed to. On reading the customer feedback, it seemed to me that this was something we usually test for, since the access control system is one of our prime selling points. So I asked the tester in charge of that particular feature to investigate the allegation. She came back reporting that she was able to reproduce the issue. She seemed tense, realizing the gravity of the situation. By the way, I never blame my testers when a problem is spotted in the wild, no matter how serious it is. She knew this, so her nervousness wasn't fear of blame, but rather distress over why she had overlooked this very important problem. I remained calm to prove that we were not playing the blame game. My testers know the drill for what comes next: I usually ask a series of questions about how we overlooked the issue, and then we start talking about how we can try to avoid missing similar problems in the future. She knew the questions I would be asking, so she wanted to cut the conversation short by saying that she had wanted to test it but the programmer told her not to.

You as a tester just discovered some serious problems with a new feature and report it to the programmer. The programmer shrugs off your claims and tells you not to test those scenarios since those are not important in his opinion. What do you do?

There is a dilemma here. On one hand, I can't accept that I or any of my testers would intentionally blind ourselves to certain problem scenarios. On the other hand, if we do test the forbidden zone, we risk pissing off the programmers and being accused of not using our limited time wisely. Being rapid testers, deadline stress is the norm.

There are a couple of ways I play this game. If I feel that these tests are actually very important to test right away, I try to convince my testers or the programmers with scenarios of the kind of problems I expect. Sometimes I would take examples of similar reported scenarios from Customer Support.

If that does not work, I go for Plan B. When a programmer says "Don't test that", I take it to mean "Don't test that NOW". So instead of testing those scenarios right away, I wait until things calm down. What that means is that if we do not anticipate serious problems in the areas we are testing, and we still have time before the release (which is pretty rare), we go ahead and test the forbidden zone. Otherwise we wait until after the release.

There have been cases where we did not return to these problems, due to time constraints or plain forgetfulness, and the worst happened, i.e. the problem was reported by a customer. In such cases my testers and I TRY (emphasis on try) not to scream "I told you so". When we decided not to test those scenarios, it was a shared understanding between the programmers and us. However, it is important to bring this new information to the attention of the programmers and their superiors so that we can avoid similar strategic mistakes. So instead of gloating over our hunch, I would say something like, "Although we agreed that this was not important enough to test, it seems we did not anticipate the current scenario." You have to admit it was a hunch, since we couldn't come up with any credible scenarios to prove otherwise.

Blame games do not appeal to me. Being a Test Manager, I believe it is important to trust the skills of my testers. They must be good, since I hired them and have been working with them to improve their craft. I intentionally avoid a signoff process for their test strategies. I play more of an advisory role, recommending test ideas I feel were overlooked or complimenting them on a plausible intuition. I even help them come up with scenarios to communicate with the programmers, reproduce unreproducible problems, act as their peer tester, or speed up their testing efforts. They see me as a facilitator rather than a lawmaker. There are times I disagree with them, but my disagreement is not authoritarian. They always have autonomy over their testing. This in no way suggests that if an issue gets overlooked in a product I will use them as scapegoats. I think that was clearly expressed in this writing.

I understand this is an unconventional way of managing a Test Team. Many managers I know prefer that their testers spell out all their test cases and get them approved. They rejoice when they are right and the programmers are wrong, or take punitive measures when issues pop up that (in their opinion) should have been tested. Many would argue that this is the only way to have control over what is being tested and to ensure that issues are not overlooked. However, I feel it is only an illusion of control. I fail to see how a Test Manager can know more about the test subject than the tester who spends more time studying it, exploring it and experiencing it. It is inevitable that some problems will be overlooked, but when that happens I will be there trying to understand it with my testers. I also make it a point to note when stakeholders are satisfied with our testing, and I have found that there are more cases of appreciation than of dissatisfaction over unnoticed issues.

Faster builds increase testing time

Friday, January 11, 2008 14:19 PM

Being a rapid tester, I try to invest in automating areas of my testing effort, or in using tools where the task requires muscle over mind. This accelerates my testing and lets me use my frequent time crunches more effectively. Since testing time is regularly scarce, researching and selecting tools, or writing code for such tasks, is a challenge in itself. In this post I will outline one case where my efforts paid off. Of course, there were cases where I reached a dead end after weeks of R&D and prototypes, but I will share those another day.

My company develops a web application that takes a significant amount of time to build (compile) and deploy to the application server. Usually a tester would get the latest source from the version control system, build it, and deploy it on the test server. This part was already automated with Apache Ant. Ant is primarily a build tool that runs XML scripts for doing all sorts of stuff, and is platform independent.

Even then, there are a few things the tester needs to do:

  1. Inform the other testers that the application is being updated and they will need to temporarily stop the testing on the server.
  2. Start the entire build process and monitor the progress at certain intervals, to see if it is completed. It is irritating when you return to the console to find that the build broke 6 minutes ago.
  3. Inform the other testers that the application is ready for testing.
Although these tasks are not directly linked to testing, they are a prerequisite. If you are doing this a few times a day, you can imagine the idle time of the tester executing the build process. Also, the longer he delays noticing the end of a build, the longer everyone has to wait for the deployed application.

It would be great to be automatically notified when a build starts, ends, or breaks. Then all the testers could go about their other tasks, such as following up on reported problems or writing session sheets (for Session-Based Test Management). I preferred notification through instant messengers, since we all use them for communication.
Some clarifications:
  • Jabber = Your Google Talk IDs are actually Jabber user IDs, so you are basically using a Jabber server to do instant messaging through Google Talk. You can host your own Jabber server for intranet instant messaging.
  • Spark = A Jabber messenger (client). We use it with our own hosted Jabber server.
The solution I am providing can even be used with MSN, Yahoo and AIM.

So here is what I did. I found a way that Ant will:
  1. Notify all the testers in Spark before the build process starts.
  2. Sleep for 30 seconds so that the build can be terminated if any tester requests it and wants to delay deploying the latest updates.
  3. Execute the entire build process.
  4. Notify all the testers in Spark that the application is ready for testing with the latest updates.
  5. Notify all the testers in Spark if there is a build error.
To do this I used imtask and ant-contrib, which are third-party libraries for Ant. imtask provides a jabber task for Ant that sends instant messages over Jabber; it also supports instant messaging for MSN, Yahoo and AIM users. ant-contrib provides the try/catch tasks I used to handle build errors.
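Conceptually, the flow is just a guarded build wrapped in notifications. Here is a minimal sketch of that pattern in Python, purely illustrative: `notify` is a stand-in for the jabber task, and the build command is a placeholder.

```python
import subprocess

def notify(message):
    # Stand-in for the IM broadcast; in the real setup this is
    # the jabber task messaging all testers.
    print(f"[IM] {message}")

def build_with_notification(build_cmd):
    """Run a build command, notifying testers at each stage."""
    notify("Build starting; please pause testing on the server.")
    # (The Ant script also sleeps 30 seconds here so a tester can abort.)
    try:
        subprocess.run(build_cmd, check=True)
    except subprocess.CalledProcessError as err:
        notify(f"Build FAILED: {err}")
        raise
    notify("Application deployed; ready for testing.")

build_with_notification(["true"])  # placeholder for the real build command
```

The same try/catch shape appears in the Ant script: notify, attempt the build, notify on success, notify and fail on error.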

Finally, for those who are interested, here is the code snippet from my build.xml that I run with Ant:

<!-- The imtask and ant-contrib jars must be on Ant's classpath.
     The jabber task's class name comes from the imtask distribution. -->
<taskdef name="jabber" classname="" />
<taskdef resource="net/sf/antcontrib/antcontrib.properties"/>

<target name="all.with.notification"
        description="Let me handle it from here. And I will let you know when I am done!">
    <antcall target="send.message">
        <param name="param.message" value="${message.buildstarting}"/>
    </antcall>
    <echo message="*******Going to sleep for a while so that the build process can be stopped if necessary...*******"/>
    <sleep seconds="30"/>
    <echo message="*******Starting build process...*******"/>

    <trycatch property="failure.message">
        <try>
            <!-- The regular build-and-deploy target -->
            <antcall target="all"/>
            <antcall target="send.message">
                <param name="param.message" value="${message.onsuccess}"/>
            </antcall>
        </try>
        <catch>
            <echo message="*******SOMETHING WENT HORRIBLY WRONG!*******"/>
            <echo message="*******PROBLEM: ${failure.message}*******"/>
            <antcall target="send.message">
                <param name="param.message" value="${message.onfailure}"/>
            </antcall>
            <fail message="*******There was a PROBLEM. Stopping build process...*******"/>
        </catch>
    </trycatch>
</target>

<target name="send.message">
    <!-- Recipient configuration is as the imtask documentation describes -->
    <jabber host="${jabber.server}" secure="true"
            from="${login.username}" password="${login.password}"
            message="${param.message}"/>
</target>
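The script references several properties (server, credentials, message text) that are defined elsewhere in build.xml. As a hedged sketch, with every value purely illustrative, the definitions it assumes would look something like:

```xml
<property name="jabber.server" value="jabber.example.local"/>
<property name="login.username" value="build-notifier"/>
<property name="login.password" value="changeme"/>
<property name="message.buildstarting"
          value="Build starting in 30 seconds. Please pause testing on the server."/>
<property name="message.onsuccess"
          value="Build complete. The application is ready for testing."/>
<property name="message.onfailure"
          value="The build broke. Please hold off testing until further notice."/>
```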

No such thing as a YES or NO question

Monday, October 01, 2007 15:21 PM

Maybe I exaggerated in the title of this post, but I tend to use it as a heuristic (rule of thumb) when answering questions about testing status. I never assume that I am being asked a yes-or-no question, so I don't answer with a yes or no. Here are some examples of the questions project stakeholders ask me, and how I prefer to answer them on different occasions. The majority of these answers I have already used; some I intend to.

Are you done testing the feature?

  • We are done testing the cases we wanted to test.
  • We assumed certain important failure cases and tried to trigger those problems with our tests, within the time you gave. We did not find any problems with those.
  • There are some important problems we found that gave us ideas to do more testing. Will we get more time?
  • There are certain risks we still anticipate, and would want to run a few more tests. Will we get more time?
  • The setup time took longer than expected and we could not execute all the tests we intended. But we did run the important tests first. Will we get more time?
You can always find more ways to test. The problem is that you are given only a limited time frame, so you need to prioritize the important tests. If you answer yes, don't be surprised if you are chastised later when bugs are discovered after the release. If you answer no, you may provoke an unpleasant reaction even before you get the chance to explain.

Can we release?
  • Well, we got some problems that may turn out to be serious issues when we release (I would sometimes give a brief outline of the issues). What do you think? Is it important to give the release anyway?
  • We are still testing, but we are not finding many problems with our tests, and we do not consider the problems found so far a major threat.
  • There were some important problems we discovered and resolved, but that gave us new ideas to do more testing. Is it important to give the release anyway?
  • I realize the bug count is very low, but we fear that the recent bug fixes may have caused side effects. We have witnessed some of these problems already. Do you think we can get the time to run some regression tests?
I don't think it is wise to actually answer if we should or should not release. It is not up to me to decide. I will give him the information he needs to make up his own mind.

Do you need more time?
  • More time would certainly help since I got some test ideas that could trigger some important problems.
  • I can always come up with more tests if you give me more time.
  • We did not find any problems. Do you want us to rethink the risks and try to come up with different tests?
There are cases where I am pressed to take less time, and times when the reverse happens. You would assume testers would be happy to get more time, but that is not always the case. James Bach's Pinata and Dead Horse heuristics shed some light on how much we should test (thanks to Ben Simo for sharing that). I may be beating a dead horse, which can increase the frustration and boredom of the test team. On the other hand, if I simply stop testing because I am not finding many problems, I may be overlooking some critical issues.

You will notice that some of the answers are interchangeable between the questions. My intention was to give an overview. There are other questions that may be addressed with the above answers, such as:
  • Is it stable?
  • Are there any more bugs?
Sometimes I get carried away by my confidence and answer too briefly. But every time I do, I remind myself to be true to the team. It is important to understand that I am not being difficult by not giving a straight answer; it is just that there is no straight answer. There are too many uncertainties, and I feel I am only doing justice to the entire product team by stating the facts.

Converting the nonbelievers

Friday, September 14, 2007 15:50 PM

For a long time I have been trying to deal with programmers who are not supportive of the testing cause. I suppose there is at least one in every company. They are usually senior programmers, but younger programmers are quick to learn, and pretty soon their population multiplies. They cannot comprehend how someone who knows so little about their code can possibly find important problems better than they can. Ironically, they are usually more cooperative with customers. Even then, I can't blame them, since I am certain they have had to deal with an abundance of frail, misguided testers. To them, testers do not add value to their work. It took me a long time to realize that I needed to break this assumption for them to believe.

When I was a programmer I had my share of delusions. My constant complaints to management led them to believe that I should guide the testers. That was a tactical mistake (at least at the time). I considered myself a good programmer, and so my allegiance remained with the programmers. I assumed that to be better testers, the testers needed to be more like programmers, i.e. they needed to penetrate the black box and understand the internals of the software. Those who could not meet this condition were not good testers in my judgment. Thankfully, the sands of time shaped me into someone wiser. Having executed and experienced the testing challenges, I am better equipped to deal with the obstacles. I guess I did it the unconventional way, i.e. started as a Test Manager and then tried hard to learn testing. I have finally become a tester, after a lot of disasters. You truly need to be a tester to lead testers.

Now, back to the other nonbelievers, most of whom may never get the opportunity I got. Let me share an account of how I handled such a case in one project. I was briefed on the project when it was close to release and was told to test it. My team and I never met or communicated with the programmers of this project. We were instructed that an online bug tracker was the preferred means of communication, since the programmers were extremely busy. They were very confident about their code and were skeptical about the value our testing would add. We were given a couple of days to test and report problems. I usually do not negotiate deadlines; instead I try to understand and prioritize what to test within the given time frame. The discussion turned out to be quite interesting, but that is another story.

During the discussion and demonstration of the software, we tried to understand why they were so proud of it. It turned out they had already demonstrated the software to the customer and were in the final stage of fixing the bugs the customer had reported. So the problems we identified during the demonstration itself were dismissed with "the customer did not complain about it" or "they will never do that". This response was very important for me to note. We then kick-started the testing.

I instructed my team to keep a log of every problem they encountered, no matter how trivial it seemed. I knew this approach would be met with sharp criticism from the programmers, so I cautioned the testers not to report seemingly trivial problems until they discovered an important one. Then they could report the important problem with high priority, followed by the so-called trivial problems, all at the same time. This turned out to be extremely effective. It was as if the programmers were forgiving the testers for reporting low-priority problems because they got some important problems to deal with.

We tried to report problems from the customer's perspective, expressing the cases where the customer might find them a problem. Whenever a tester found a problem, he would call it out to the entire team. This avoided duplicate reports, giving the programmers less chance to complain. It also let the other testers consider whether there could be similar problems in their own testing areas, and check dependencies.

We regrouped every two to three hours to discuss the important problems and our next charter. When the envoy of the programmers told us that certain problems would not be fixed, we did not complain. If we felt the programmers had failed to realize the seriousness of certain problems, we simply elaborated on the scenarios. We did not assume the authority to dictate what must be fixed.

It all went pretty well. As the deadline approached, our time frame was extended, and then extended again, and then again, so that we could keep testing and reporting problems they would fix. Some of the programmers finally visited us personally to fix issues that were blocking our testing progress, and appreciated our work. It is interesting that all this was achieved without insisting on more time, without taking supreme authority over bug fixes, and without any confrontation with the programmers, while the testers had a ball.

Are you paying attention?

Friday, September 07, 2007 14:33 PM

In one of my testing consultancy sessions, I was demonstrating the testing of a JavaScript calendar widget in a web application developed by the client. I started off explaining that we should use the Windows calendar as an oracle to help us discover problems in the widget. I opened the webpage containing the widget and the Windows calendar next to each other.

I was demonstrating this on a projector that only supported a resolution of 800x600, so if I wanted to keep the Windows calendar and the webpage always visible, my webpage real estate would be considerably reduced. To the surprise of my audience, I started resizing the browser window so that I could see more of the region surrounding the widget, which seemed unnecessary since the widget took up only a small portion of the entire webpage. I explained that I wanted to be aware of problems surrounding the widget while executing my tests on it. I did confess that I did not know what kind of problems could appear in those regions, but it doesn't hurt to pay attention.

This was a demonstration of exploratory testing. I was narrating my choices of certain tests that I generated on the fly, and explaining why I chose to ignore other tests that I considered equivalent or unrelated to my current test objective. A few minutes into the testing, one of my colleagues discovered that a button's outline at the corner of the web page was blinking as I invoked the widget. The audience appreciated this discovery, since they could relate it to my earlier precaution. I myself did not notice it, due to inattentional blindness.

Before we continue, I think a good way to explain inattentional blindness is with the example of a magician's performance. Ever notice how a magician sometimes distracts you from his actual trickery by focusing your attention on something else? For example, a coin that you could swear was in the palm of the magician's hand disappears into thin air when he unfolds his fist. In reality, the coin was swiftly removed while you were paying attention to where the illusionist wanted; hence the perception of magic. This is inattentional blindness: you can't see what is right in front of you, simply because you are paying attention to something else.

The irony of my episode was that I did not discover the problem even though my intention was to pay attention to such peculiarities. Also, exploratory testing is supposed to be better at minimizing inattentional blindness; the symptom is said to be more common during scripted testing. So I tried to analyze this phenomenon and came to the following explanation.

Since I was sitting close to a large projection of the screen on the wall, my visual field was limited. This made me more prone to inattentional blindness. I was also frequently alternating my attention between several goals:

  • To find important problems in the widget.
  • To explain to the audience the reasons for my actions.
  • To make sure my explanations were comprehensible.
  • To address queries from the audience.
Now, some may wonder why I went to such trouble over a seemingly trivial issue. My concern is: what if it had not been a minor problem and it went undetected? I want to be aware of the constraints that may hinder my discovery of important problems. For a tester, that is a big deal.

How could so many get it wrong?

Saturday, September 01, 2007 17:21 PM

I come from the part of the globe called Bangladesh, known to many for its floods, poverty and corruption. They all got it wrong, just like many got testing wrong. But this blog is not about me or my country. It is about my experiences in software testing and how they can inspire others.

Michael Bolton said I sound fascinating. James Bach said I write very well. Pradeep Soundararajan said I should start a blog. So here I am. Even though this does not mean they endorsed me, it did encourage me.

I like to think that I got testing right from the unlikeliest of conditions, because:

  • Fortunately, I was forced into testing since my company was in a crisis and thought I was the best programmer for the job.
  • Fortunately, I could not find anyone in my proximity to answer my queries on software testing.
  • Fortunately, there were very few, yet boring, books on software testing at local book stores.
  • Fortunately, there were no training centers on testing certifications.
  • Fortunately, the seminars I attended were pretty shallow in content.
  • Fortunately, I had to explore my curiosity on my own.
  • Fortunately, I was bored with testing and wondered why others could write about it with so much passion.
  • Fortunately, I was always under deadline pressure and was denied my demand for more testing time.
  • Fortunately, I had the courage to try out my assumptions and was ridiculed for it.
I questioned the applicability of everything I read, heard, saw and experienced. In my quest I came across the writings of Cem Kaner, Bret Pettichord, James Bach, Michael Bolton and a few others who made sense of it all. Yet I questioned their reasoning against my own context.

My mistakes taught me what to avoid, while the experiences of others intrigued me to investigate further.

No wonder so many still get it wrong.
"Testing is the INFINITE PROCESS of comparing the INVISIBLE to the AMBIGUOUS so as to avoid the UNTHINKABLE happening to the ANONYMOUS!" -- James Bach
I accept the challenge...