Gearing For Kids? A "Larsen Twins" Update

Thursday, May 07, 2015 16:38

First off, I wanted to say thank you to "Women Testers" for including a guest article that Amber and I wrote together. It was about how she prefers to work with me when it comes to learning about programming and testing. If you would like to check that article out, you can download the latest edition of “Women Testers” and look for "Show the Way, Then Get Out of the Way”.



With this article in mind, I offer the following update. Since I want to encourage Amber to write more and share her own thoughts and ideas, we will continue writing joint blog posts like this one under the tag "Larsen Twins" (because it's silly and a little fun, and why not? ;) ).

Amber and I have been looking through a number of books and online materials to help her understand coding and testing concepts. One of the areas I thought would be tricky to explain or focus on is the set of ideas behind computation. A book that I recently reviewed and found helpful is "Lauren Ipsum"; I personally thought it was cute and engaging in explaining topics I'd learned the hard way. I was excited to have Amber read this book and tell me what she thought of it.



“It was a fun and cute story, and I get what they are trying to do. By having Lauren help a turtle move in certain directions, it focuses on showing how instructions are executed.”

Ok, good so far.

“However, I kept finding myself waiting for the payoff… what’s the point of this?”

Hmmmmm…..

“I think younger kids would think this is great. This might be a really good book for someone in fourth or fifth grade, but I was getting impatient as I was reading through it. Maybe I’ll feel different when I have tried some more of the ideas out.”

Wow.

Seeing as I have been revamping the SummerQAmp materials, I have been trying to think about how I could make them more fun and more relatable, and explain some of the ideas to kids in a less wonky way. Amber just gave me an interesting piece of reality to consider. Many kids my daughter's age have been interacting with technology their whole lives. As such, they have become attuned to getting the information they need quickly and directly. The idea of a story to convey the ideas and concepts doesn't really appeal to her; she wants the straight stuff.

“It may just be me, but I think I do better when I am shown an idea, and then given some ways to play with it and figure it out. I will admit, I thought the chapter on recursion was cute, and that helped me understand the idea a bit, but I still don’t feel like I fully ‘get it’. I’ll have to actually use it to see it in action to really feel like I understand it”.

After pondering this for a while, I remembered I had another book in my Tsundoku pile: "Understanding Computation" by Tom Stuart.


In the preface, it says the following:

"This book is for programmers who are curious about programming languages and the theory of computation, especially those who don’t have a formal background in mathematics or computer science. If you’re interested in the mind-expanding parts of computer science that deal with programs, languages, and machines, but are discouraged by the mathematical language that’s often used to explain them, this book is for you. Instead of complex notation we’ll use working code to illustrate theoretical ideas and turn them into interactive experiments that you can explore at your own pace.”

Now that sounds promising! I think this just moved its way to the top of my pile :).
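Since recursion is the one idea Amber said she'd need to "see in action" to really get, here's the kind of tiny experiment I have in mind for us to try together. This is a quick sketch of my own (not an example from either book), in Python rather than the Ruby that "Understanding Computation" uses:

```python
def countdown(n):
    """Print the number we were given, then call ourselves with a smaller one."""
    print("counting:", n)
    if n == 0:
        print("blast off!")   # the base case: where the recursion stops
    else:
        countdown(n - 1)      # the function calls itself with a smaller number

countdown(5)
```

Run it, watch the numbers tick down, then delete the base case and see what happens. That hands-on "see it in action" moment is exactly what she's asking for.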

Amber is an excellent case study for my own ideas about teaching and how to teach, because she throws me curveballs. For a girl who loves "kawaii", the cute and entertaining in her everyday life, she can be decidedly "hard boiled" in her other pursuits. Perhaps Dave Grohl of the Foo Fighters sums it up best... "don't bore us, get to the chorus". We'll see if "Understanding Computation" will help us do that. More to come, stay tuned :).

Packt Publishing Rolls Out More FREE Learning

Tuesday, May 05, 2015 14:00

I've kept the advertising on the site to a minimum over the years, but there are a few publishers who have been super kind to me, and have given me many free titles to review. Truth be told, I have so many titles, I could do a book review a week and not run out of books for a couple of years at this point.

Packt Publishing in the UK is one of those companies that has given me much. Therefore, when they do something that I deem could be helpful to the broader community, I feel it appropriate to draw attention to it.

What is Packt up to this time?

They are giving away a free eBook each day.

Update: Packt has confirmed that this promotion is ongoing.



From Packt's Site:

It's back!  And this time for good. Following on from the huge success of our Free Learning campaign, we've decided to open up our free library to you permanently, with better titles than ever -- from today you can claim a free eBook every day here.

Each eBook will only be free for 24 hours, so make sure you come back every day to grab your Free Learning fix! From AngularJS to Zabbix, we'll be featuring a diverse range of titles from our extensive library, so don't miss out on this amazing opportunity to develop some new skills and try out new tech.



Judging from the image above, it looks like this promotion is running through May 17th, but if I'm reading correctly up above, their plan is to offer a free book every day indefinitely (update: yes, it's going to be a perpetual promotion). To be clear, one book is available each day, from 12:00 a.m. to 11:59 p.m. GMT.

If, like me, you are jonesing to add to your Tsundoku, here's another opportunity to do exactly that :).


Ever Had Your Screen Talk Back To You?

Monday, May 04, 2015 20:38



As many TESTHEAD readers will notice, I've been on a tear with regard to Accessibility this year. It's become my monomaniacal focus, and I've entered a semi-crazy co-dependent relationship on this topic with Albert Gareev. It's been a lot of fun talking about and focusing on this topic the past few months, as each time I think I understand what's happening and how we can do better, I learn something more that shows me just how far we have to go.

I remember back during STP-CON at the beginning of April witnessing an "A-Ha" moment for a roomful of people. That moment was when I asked the attendees in my session to open their laptops or other devices, if they had them, and turn on a screen reader just so they could "hear" what various sites were saying. Since a majority of the users in the room at that time had Macs, loading VoiceOver was quick. They started VoiceOver... and then things got interesting. As I watched people's faces, I could see the curling of lips, the furrowing of brows, and the nervous laughter pop up at points. The blast of electronic voices that erupted from system speakers was very enlightening, and watching a roomful of people realize just how difficult it was to listen to the output of their favorite sites made it crystal clear how hard it was for sight-impaired users to get useful information from an average web site (or at least, a site that hadn't made considerations for accessibility).

This coming Saturday, we will be doing a session on screen readers with Weekend Testing Americas. I'm suggesting Mac users configure VoiceOver, and PC users download NVDA.

This session will be an introduction to Accessibility through the use of Screen Reader applications. My goal is to give people a chance to play with these features on common sites and see how well those sites make the general information on the page accessible to users, and how much useless information they receive. We will use these examples to have a discussion about how to address issues related to Accessibility, as well as building a case for making Accessibility part of an overall testing strategy.

If you would like to join us on Saturday, please send a request to “weekendtestersamericas” to add your SkypeID. In your request, mention that you want to join this Saturday’s session. On Saturday, twenty minutes before the start of the session, please contact us and ask to be added to the session, and we will do so. Here's hoping you can make it ;).

#CAST2015: Full Program is Posted

Thursday, April 30, 2015 01:31 AM



It's that time of year again. The time when the #CAST2015 hash tag starts to get a lot of play. For those who stumbled here from another location, #CAST2015 refers to the Conference of the Association for Software Testing, an event that, despite my not being directly involved in its planning or execution, has taken up a fair number of cycles of my reality.

The full program was announced today, and it is available to view here.

To paraphrase the Conference Committee (or more accurately, just steal it wholesale):

The Association for Software Testing is pleased to announce its tenth annual conference, CAST 2015 "Moving Testing Forward," to be held in Grand Rapids, Michigan, August 3-5. Since our first CAST we have seen dramatic changes in the nature of communications and the nature of delivery, from PC to client/server to the web and web services. Deployment is different; monitoring is different; builds and test tooling are different. We have a variety of new models and methods for our testing. CAST is where we talk about how they actually work out in practice, based on experience. At our tenth CAST in 2015, speakers will be presenting stories, workshops and tutorials regarding their experiences in moving software testing forward.

Join us this summer for our tenth annual conference in downtown Grand Rapids at the beautiful Grand Plaza Hotel August 3-5, as we explore “Moving Testing Forward.”

As President of the Association for Software Testing, you can bet I will be addressing a whole lot of CAST stuff over at the official web site, but right now, I'm actually stepping back a little bit and talking totally off the cuff, not as the President of AST (though this will certainly be seen by some as "official pronouncements" anyway, so whatever).

First, I want to say that I am impressed with the variety and variation we have in the program this year. CAST is a diverse conference to begin with, but we made a specific effort this year to invite people who had not presented before to really give it a go. I am happy to say that there are several speakers who will be giving their first conference talks at CAST this year. I'm also proud that AST partnered with Speak Easy to help develop talks for the conference. Speak Easy has made it its mission to encourage women to speak at conferences, and we are delighted to say that four talks mentored by Speak Easy were picked for the conference. CAST has prided itself on being a place where different voices get heard, not one specifically catering to the "rock stars" of our industry. We've had a good balance, I think, between male and female speakers, male and female keynotes, and a diverse group of participants from different countries and backgrounds. Compared to many conferences I've been to, I'll dare say I think CAST really is one that deserves high marks for diversity. Can we do better? Most certainly, but this year's lineup makes me feel like we are in the vanguard.

I'll borrow a couple of posts from Lisa Crispin to help make this point even more strongly ;):







Another area I want to talk about is the two-hour workshop that will be offered on Tuesday, August 4, 2015 called "Black Box Accessibility Testing: A Heuristic Approach". This workshop will be given by Albert Gareev, with some solid peanut gallery support from yours truly. I'm sure some of you are looking at the title and thinking to yourself, "Wait a minute, that sounds a lot like the talks Michael has been delivering this year. Are they at all related?" The answer is, of course, yes. Albert and I have been working for quite some time to develop both design and testing approaches that bring Accessibility to light and get it some focus and attention. We've both written and presented extensively on these topics, and this year, we have melded minds to present and deliver a solid workshop of testing skills and techniques that anyone can walk away with and use effectively. So why is Albert listed as the speaker and not me? First and foremost, the workshop is primarily Albert's research, practice and learnings, so he very much deserves the credit for presenting it. Do I have a hand in what's being presented? Sure, and I'll be spending a fair amount of time helping hammer out the paper that will be available at the conference (yes, we are writing the paper together :) ).

For those who want to attend a conference that will be first rate with regard to content, diverse speakers, unique and original voices, and talks about topics that are relevant to your everyday work rather than retreads of stuff you've heard many times before, I want to personally encourage you to sign up and attend #CAST2015 with us in Grand Rapids, Michigan. It's going to be a great time, and I hope to see you there as we work to "Move Testing Forward"!

 

When The Music's Over

Tuesday, April 21, 2015 21:42

The past two and a half weeks have been very difficult for me to process. Part of me is numb, part of me is frustrated, and part of me is deeply sad. All of these feelings have conspired to render my writing nearly non-existent, because I couldn't produce anything until I addressed the overwhelming elephant in my reality.


On Friday, April 17, 2015, Socialtext Director of Quality Assurance, Kenneth Pier, completed his battle with cancer. We received word around noon that he had passed away at home, surrounded by family. Two and a half weeks earlier, we spoke on the phone together to make sure I knew the steps to perform a release from start to finish. At the end of that conversation, he told me he felt very tired, and might need to take a day or two off. I answered "by all means, do what you need to do. We'll talk again when you feel better". That was the last time we'd speak to each other directly.


I don't want to talk about losing Ken. Instead, I want to talk about how he lived, and what he meant to me. I met Ken for the first time in December 2010. Matt Heusser invited me to a "tester's dinner" at Jing Jing's in Palo Alto. The resulting conversations turned into TWIST podcast #30. I remember being impressed with Ken right off the bat. He was a little gruff and no-nonsense with his answers, but he was a straight shooter, and he possessed a wealth of skill and a practical approach that he was happy to share.


Over the following two years, I would run into Ken at various meetups, workshops and conferences. Each time, we'd talk more in depth, getting to know each other better. In the summer of 2012, I had the chance to sit in with him and Cem Kaner at CAST as they discussed ways to balance automation and exploration. As I was set to give a talk in two days on the same subject, that interaction and deep questioning caused me to discard much of my original talk and start fresh. Ken took the time to answer dozens of questions I had, and in the process, helped me make a talk I would deliver several times over the ensuing years. That final product was hugely inspired by Ken.


A few months later, when I expressed an interest in a change of scenery, and a desire to work with a broader testing team rather than keep going it alone, Ken lobbied me to join his team at Socialtext. I accepted, and thus began a daily interaction with him that lasted two and a half years. I learned a great deal from Ken and his unique style. Ken was not one to idly chat or play games. If he felt you were doing something good, he told you, immediately. If he felt you were losing focus or going off the rails, he told you... immediately :)! You always knew exactly where you stood with him, what was working for him, and what wasn't. He also took great care in representing his team to upper management, both within Socialtext itself, and when we were acquired by PeopleFluent. Ken was fearless when it came to telling people what could be done and what couldn't. He didn't care if upper management was frustrated or irritated with an answer; he'd rather give them the straight truth than make a promise we couldn't deliver on, or push something that would be sub-par.


During many of the stand-up meetings we'd have each morning, Ken would have his oldies station playing over the phone system (half our team is distributed). Some mornings, he'd start singing at the start of stand-up, and often, he'd ask me to sing along with him, since we were both vocal performers (he sang with a number of choir groups, and had a wonderful baritone voice). Over time, though, due to the cancer treatments, his singing voice was quieted, and we heard it less and less. Now, the singing has stopped. I won't hear it in my day-to-day work life any longer. I think I will miss that most of all.


I want to invite my friends who are wine connoisseurs (because Ken was definitely one of those) to raise a glass to Ken. He was a man who understood testing, and represented that understanding very well. He inspired a lot of love and admiration from his team, and from everyone who knew him. He was generous with his time, energy, and knowledge. He loved to roll up his sleeves and do the day-to-day work that, by virtue of his position, he absolutely did not have to do. Still, more than just doing the day-to-day work, he reveled in helping his team learn something new and do things they'd never done before, and in encouraging them to go farther and get better each and every day. It's a trait I hope I can exhibit when I work with others in the future.


Thank you, Ken, for just plain being amazing. Our team has a massive hole in it, and I doubt we will ever truly fill it, but we will do our level best to make you proud of us nonetheless.

On Community, or "Building Your Perfect Beast"

Friday, April 03, 2015 20:27

This is going to come across as perhaps a bit scatter-shot, because it contains thoughts that address a lot of things that I am involved in. Each of these could be addressed in a number of different places, and in time will very likely be addressed there, but for right now, today, I am going to separate my various public personas and address this as just me, right now, in my own sphere. What you are reading right now has both nothing to do and everything to do with the Association for Software Testing, the Miagi-do School of Software Testing, Weekend Testing Americas, the Bay Area Software Testing Meetup group, and the freelance entity that is TESTHEAD. For right now, I am addressing this as me and only me. The remarks that follow are not necessarily representative of any of those other initiatives and the other people involved in them.

One of the key messages I received loud and clear about the world of software testing and those who are involved in it is that there is a bit of an identity crisis happening. The world of software development is changing. Long-standing biases, beliefs and processes are being disrupted regularly. Things that worked for years are not working so well any longer. Business models are being upended. The rules of the game, and the very game itself, are being re-written, and many people are waiting to be shown the way.

One of the things I have witnessed time and time again is that there is no end of people happy to consume materials others have created. I'm very much one of those people, and I enjoy doing so. However, at some point you reach the limit of what is there to consume, and a void remains: something is lacking, and a need is not being fulfilled. It's very easy to ask that someone fill that need, and complain when that person or group does not do so. I remember very well being called to task about this very thing. Back in 2010, I lamented that no one had brought Weekend Testing to the Americas, and that I had to attend sessions hosted in India, Europe and Australia, often either late at night or early in the morning for me. Joe Strazzere set me straight very quickly when he said (I'm paraphrasing):

"Perhaps it's because no one in the USA values it enough to make it happen. YOU obviously don't value it enough either. If you did, you would have already made it happen!"

That was a slap, but it was an astute and accurate slap. I was lamenting the fact that something I wanted to consume wasn't readily available for me in a way that I wanted to have it. I was waiting for someone else to create it, so I could come along and feed at the counter. Instead, I realized someone had to prepare the food, and since I wanted to eat it, I might as well prepare it, too. The rest, as they say, is history, and to be continued.

I want to make a suggestion to those out there who see a need, an empty space, an unfulfilled yearning for something that you have identified... is there any reason why you are not doing something to bring it to life? Are you waiting for someone else to give you permission? Are you afraid your idea may be laughed at or criticized? If you are a software tester, are you suffering from the malady of seeing a problem for every solution? If so, I want to assure you that I understand completely, as I have been there and find myself in that position frequently. Perhaps you feel that you are alone in your frustration, that you are the only one who finds it frustrating that something you care about is not being addressed. While I was at STP-CON, during Kate Falanga's session, we discussed the three layers of engagement and change/influence. The first layer is ourselves, the second is our team or organization, and the third is the broader community. There's a very good chance that any void you are seeing, any initiative that you hope to see championed, has many others who likewise want to see a champion emerge.

My advice to all out there is to stop waiting for someone else to Build the Perfect Beast, and instead, start building your own. Once you start, it's a good bet others will want to get involved as well. No one wanted to do Weekend Testing in the Americas until I was willing to throw my hat in the ring. In short order, others decided they wanted to get involved as well. Some have come and gone, but we are still here, and many have helped us solidify what we do. Our Beast is not yet perfect, but it's getting there, and we've learned much along the way. The same goes for every other organization I am involved in. Major movements do not happen by timidly waiting for someone else to take the lead, and they don't come about by asking for permission, either. If you see a need that is not being met, try to create something that will meet that need, even if you have to do it alone at first. My guess is, over time, others will see what you are doing and want to be part of it, too. Do be warned, though: the desire to create is addictive, and you could find yourself a bit over-extended. On the other hand, you may be having too much fun to care :).

Delivering The Goods: A Live Blog from #STPCON, Spring 2015

Thursday, April 02, 2015 23:00



Two days go by very fast when you are live-blogging each session. It's already Thursday, and at least for me, the conference will end at 5:00 p.m. today, followed by a return to the airport and a flight back home. Much gets packed into these couple of days, and many of the more interesting conversations we have had have been follow-ups outside of the sessions, including a fun discussion that happened during dinner with a number of the participants (sorry, no live blog of that, unless you count the tweet where I lament a comparison of testers to hipsters ;) ). I'll include a couple of after-hours shots just to show that it's not all work and conferring at these things:


---

Today I am going to try an experiment. I have a good idea of the sessions I want to attend, and this way, I can give you an idea of what I will be covering. Again, some of these may matter to you, some may not. At least this way, at the end of each of these sessions, you will know if you want to tune in to see what I say (and this way I can give fair warning to everyone that I will do my best to keep my shotgun typing to a minimum). I should also say thank you (I think ;) ) to those who ran with the mini-meme of my comment yesterday with hashtag "TOO LOUD" (LOL!).

---

9:00 am - 10:00 am
KEYNOTE: THINKING FAST AND SLOW – FOR TESTERS’ EVERYDAY LIFE
Joseph Ours



Joseph based his talk on the Daniel Kahneman book "Thinking, Fast and Slow". The premise of the book is that we have two fundamental thinking systems. The first is the "fast" one, where we can do things rapidly and with little need for extended thought. It's instinctive. By contrast, there's another thinking approach that makes us slow down to work through the steps. That is our non-instinctual thinking; it requires deeper thought and more time. Both of these approaches are necessary, but there's a cost to switch between the two. This helps illustrate how making that jump can lose us time, productivity and focus. I appreciate this acutely, because I do struggle with context-switching in my own reality.

One of the tools I use if I have to deal with an interruption is to ask myself if I'm willing to lose four hours to take care of it. Does that sound extreme? Maybe, but it helps me really appreciate what happens when I am willing to jump out of flow. By scheduling things in four-hour blocks, or even two-hour blocks, I can make sure that I don't lose more time than I intend to. Even good and positive interruptions can kill productivity because of this context switch (jumping out of testing to go sit in a meeting for a story kickoff). Sure, the meeting may have only been fifteen minutes, but it might take forty-five minutes or more to get back to that optimal testing focus again.

Joseph used a few examples to illustrate the times when certain things were likely to happen or be more likely to be effective (I've played with this quite a bit over the years, so I'll chime in with my agreement or disagreement).

• When is the best time to convince someone to change their mind?

This was an exercise where we saw words that represented colors, and we needed to call out the words based on a selected set of rules. When there was just one color to substitute with a different word, it was easy to follow along. When there were more words to substitute, it went much slower and it was harder to make the substitutions. In this we found our natural resistance to changing our minds about what we are perceiving. The takeaway: our ability to work through an exercise like this is much greater in the morning after breakfast than later in the day once we are a little fatigued. Meal breaks tend to allow us to change our opinions or minds because blood sugar gives us energy to consider other options. If we are low on blood sugar, the odds of persuading someone to a different view are much lower.

• How do you optimize your tasks and schedule?

Is there a best time for creativity? I know a bit about this, as I've written on it before, so spoilers: there are times, but they vary from person to person. Generally speaking, there are two waves that people ride throughout the day, and the way that we see things is dependent on these waves. I've found for myself that the thorniest problems and the writing I like to do get done early in the morning (read this as early early, like 4 or 5 am) and around 2:00 p.m. I have always taken this to mean that these are my most creative times... and actually, that's not quite accurate. What I am actually doing is using my most focused and critical thinking time to accomplish creative tasks. That's not the same thing as when I am actually able to "be creative". What I am likely doing is putting to output the processing I've done on the creative ideas I've considered. When did I consider those ideas? Probably at the times when my critical thinking is at a low. I've often said this is the time I do my busywork because I can't really be creative. Ironically, the "busy work time" is likely when I start to form creative ideas, but I don't have that "oh, wow, this is great, strike now" moment until those critical thinking peaks. What's cool is that these ideas do make sense. By chunking time around tasks that are optimized for critical thinking peaks and scheduling busywork for down periods, I'm making some room for creative thought.

• Does silence work for or against you?

Sometimes when we are silent while people speak, we may create a tension that causes people to react in a variety of different ways. I offered to Joseph that silence as a received communication from a listener back to me tends to make me talk more. This can be good, or it can cause me to give away more than I intend to. The challenge is that silence doesn't necessarily mean the other person disagrees, is mad, or is aloof. They may just genuinely be thinking, withholding comment, or perhaps showing they don't have an opinion. The key is that silence is a tool, and sometimes it can work in unique and interesting ways. As a recipient, it lets you reflect. As a speaker, it can draw people out. The trick is to be willing to use it, in both directions.

---

10:15 am - 11:15 am
RISK VS COST: UNDERSTANDING AND APPLYING A RISK BASED TEST MODEL
Jeff Porter

In an ideal world, we have plenty of time, plenty of people, plenty of system resources, and assisting tools to do everything we need to do. Problem is, there's no such thing as that ideal environment, especially today. We have pressures to release more often, sometimes daily. While Agile methodologies encourage us to slice super thin, the fact is, we still have the same pressures and realities. Instead of shipping a major release once or twice a year, we ship a feature or a fix each day. The time needs are still the same, and the fact is, there is not enough time, money, system resources or people to do everything comprehensively, at least not in a way that would be economically feasible.



Since we can't guarantee completeness in any of these categories, there are genuine risks to releasing anything. We operate at a distinct disadvantage if we do not acknowledge and understand this. As software testers, we may or may not be the ones to do a risk assessment, but we absolutely need to be part of the process, and we need to be asking questions about the risks of any given project. Once we have identified what the risks are, we can prioritize them, and from that, we can start considering how to address or mitigate them.

Scope of a project will define risk. User base will affect risk. Time to market is a specific risk. User sentiment may become a risk. Comparable products behaving in a fundamentally different manner than what we believe our product should do is also a risk. We can mess this up royally if we are not careful.

In the real world, complete and comprehensive testing is not possible for any product. That means that you will always leave things untested. It's inevitable. By definition, there's a risk you will miss something important, and leave yourself open to the Joe Strazzere Admonition ("Perhaps they should have tested that more!").

Test plans can be used effectively, not as a laundry list of what we will do, but as a definition and declaration of our risks, with prescriptive ideas as to how we will test to mitigate those risks. With the push to remove wasteful documentation, I think this would be very helpful. Lists of test cases that may or may not be run aren't very helpful, but developing charters based on the risks identified? That's useful, not wasteful, documentation. In addition, have conversations with the programmers and fellow testers. Get to understand their challenges and the areas that are causing them consternation. If they tell you a particular area has been giving them trouble, or has taken more time than they expected, it's a good bet you have a risk area to test.

It's tempting to think that we can automate much of this work, but risk assessment, mitigation, analysis and game plan development are all necessary steps we need to take before we write line one of automation. All of those are critical, sapient tasks; critical thinking, sapient testers are valuable in this process, and if we leverage the opportunities, we can make ourselves indispensable.

---

11:30 am - 12:30 pm
PERFORMANCE TESTING IN AGILE CONTEXTS
Eric Proegler

The other title for this talk is "Early Performance Testing", and a lot of what Eric is advocating is looking for ways to front-load performance testing rather than waiting until the end and then worrying about optimization and rework. This makes a lot of sense when we consider that getting performance numbers early in development means we can get real numbers and real interactions. It's a great theory, but of course the challenge is in "making it realistic". Development environments are by their very nature not as complete or robust as a production environment. In most cases, the closest we can come is an artificial simulation and a controlled experiment. It's not a real-life representation, but it can still inform us and give us ideas as to what we can and should be doing.



One of the valuable systems we use in our testing is a duplicate of our production environment. In our case, when I say production, what I really mean is a duplicate of our staging server. Staging *is* production for my engineering team, as it is the environment that we do our work on, and anything and everything that matters to us in our day to day efforts resides on staging. It utilizes a lot of the things that our actual production environment uses (database replication, HSA, master slave dispatching, etc.) but it's not actually production, nor does it have the same level of capacity, customers and, most important, customer data.

With this staging server as a production stand-in, we can replicate that machine and, with the users, data and parameters as set, experiment against it. Will it tell us performance characteristics for our main production server? No, but it will tell us how our performance improves or degrades around our own customer environment. In this case, we can still learn a lot. By developing performance tests against this duplicate staging server, we can get snapshots and indications of problem areas we might face in our day-to-day exercising of our system. What we learn there can help inform changes our production environment may need.

Production environments have much higher needs, and replicating performance, scrubbing data, setting up a matching environment and using that to run regular tests might be cost prohibitive, so the ability to work in the small and get a representative look can act as an acceptable stand-in. If our production system is meant to run on 8 parallel servers and handle 1000 concurrent users, we may not be able to replicate that, but creating an environment with one server and determining if we can run 125 concurrent connections while observing the associated transactions can provide a representative value. We may not learn what the top end can be, but we can certainly determine if problems occur below the single-server peak. If we discover issues here, it's a good bet production will likewise suffer at its relative percentage of interactions.
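To make that concrete, here's a bare-bones sketch of the kind of scaled-down probe I'm describing (my own illustration, not Eric's tooling; the staging URL is a placeholder, and a real performance tester would reach for purpose-built tools rather than a thread pool):

```python
# If production is 8 servers rated for 1000 concurrent users, probe one
# staging server at 1000 / 8 = 125 concurrent connections and watch timings.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

STAGING_URL = "https://staging.example.com/"  # placeholder, not a real host
CONCURRENT_USERS = 125                         # 1000 users / 8 servers

def timed_request(_):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(STAGING_URL, timeout=30) as resp:
            resp.read()
        return time.monotonic() - start
    except Exception:
        return None  # failures get counted separately below

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS)))

ok = [r for r in results if r is not None]
print(f"{len(ok)}/{len(results)} requests succeeded")
if ok:
    print(f"avg {sum(ok)/len(ok):.3f}s, worst {max(ok):.3f}s")
```

Even something this crude, run regularly, gives you the trend line: if the single-server numbers degrade from one build to the next, that's a signal worth chasing before it hits production.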

How about performance testing in CI? Can it be done? It's possible, but there are challenges. In my own environment, were we to do performance tests in our CI arrangement, what we would really be doing is testing the parallel virtualized servers. It's not a terrible metric, but I'd be leery of assigning authoritative numbers, since the actual performance of the virtualized devices cannot be guaranteed. In this case, we can use trending to see whether we get wild swings, or consistent numbers with occasional jumps and bounces.

Also, we can do performance tests that don't require hard numbers at all. We can use a stopwatch, watch the screens render, and use our gut intuition as to whether or not the system is "zippy" or "sluggish". Those are not quantitative values, but they have value, and we should leverage our own senses to encourage further exploration.

The key takeaway is that there is a lot we can do, and we have a lot of options for making changes and calibrating our interactions and the areas we are interested in. We may not be able to be as extensive as we might be with a fully finished and prepped performance clone, but there's plenty we can do to inform our programmers as to how the system behaves under pressure.

---

1:15 pm - 1:45 pm
KEYNOTE: THE MEASURES OF QUALITY
Brad Johnson

One of the biggest challenges we all face in the world of testing is that quality is wholly subjective. There are things that some people care about passionately that are far less relevant to others. The qualitative aspects are not enumerable, regardless of how hard we try to make them so. Having said that, there are some areas where counts, values, and numbers are relevant. To borrow from my talk yesterday, I can determine if an element exists or if it doesn't. I can determine the load time of a page. "Fast" or "slow" are entirely subjective, but if I can determine that it takes 54 milliseconds to load an element on a page as an average over 50 loads, that does give me a number. The next question, of course, is "is that good enough?" It may be if it's a small page with only a few elements. If there are several elements on the page that each take that long to load serially, the cumulative result may well read as "slow".
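As a quick back-of-the-envelope illustration (the 54 milliseconds comes from the example above; the element counts are my own made-up numbers):

```python
# One element at 54 ms reads as "fast". Stack up serial loads, though,
# and the very same measurement starts to read as "slow".
avg_element_ms = 54  # measured average over 50 loads
for elements in (1, 5, 20):
    print(f"{elements:>2} serial elements -> {avg_element_ms * elements:>5} ms total")
# 20 elements -> 1080 ms: past the roughly one-second point where
# many users start to perceive a page as sluggish.
```

The number didn't change; the context did. That's the gap between a measurement and a quality judgment.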



Metrics are a big deal when it comes to financials. We care about numbers when we want to know how much stuff costs, how much we are earning, and, to borrow an oft-used phrase, "at the end of the day, does the Excel line up?" If it doesn't, regardless of how good our product is, it won't be around long. Much as we want to believe that metrics aren't relevant, sadly they are, in the correct context.

Testing is a cost. Make no mistake about it. We don't make money for the company. We can hedge against losing money, but as testers, unless we are selling testing services, testing is a cost center, not a revenue center. To the financial people, any change in our activities and approaches is often looked at in terms of the costs those changes will incur. Their metric is "how much will this cost us?" Our answer needs to articulate "this cost will be leveraged by securing and preserving this current and future income". Glamorous? Not really, but it's essential.

What metrics do we as testers actually care about, or should we care about? In my world view, I use the number of bugs found vs. the number of bugs fixed. That ratio tells me a lot. This is, yet again, a drum I hammer regularly, and it should surprise no one when I say I personally value the tester whose ratio of bugs reported to bugs fixed is closest to 1:1. Why? It means to me that testers are not just reporting issues, but that they are advocating for their being fixed. Another metric often asked about is the number of test cases run. To me, it's a dumb metric, but there's an expectation outside of testing that it is informative. We may know better, but how do we change the perspective? In my view, the better discussion is not "how many test cases did you run" but "what tests did you develop and execute relative to our highest risk factors?" Again, in my world view, I'd love to see a ratio of business risks to test charters completed and reported that is as close to 1:1 as possible.
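If that sounds abstract, the computation itself is trivial. This toy sketch (entirely my own framing, not anything from Brad's slides) shows the advocacy ratio I mean:

```python
# Toy bug-tracker export: the advocacy ratio is bugs driven to "fixed"
# over bugs reported. Closer to 1:1 means issues are being advocated
# for, not just filed and forgotten.
bugs = [
    {"id": 101, "status": "fixed"},
    {"id": 102, "status": "open"},
    {"id": 103, "status": "fixed"},
    {"id": 104, "status": "wontfix"},
]
fixed = sum(1 for b in bugs if b["status"] == "fixed")
print(f"advocacy ratio: {fixed}:{len(bugs)} ({fixed / len(bugs):.2f})")
```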

In the world of metrics, everything tends to get boiled down to Daft Punk's "Harder Better Faster Stronger". I use that lyrical quote not just to stick an ear-worm in your head (though if I have, you're welcome or I'm sorry, take your pick), but because it's really what metrics mean to convey. Are we faster at our delivery? Are we covering more areas? Do we finish our testing faster? Does our deployment speed factor out to greater revenue? Once we answer Yes or No, the next step is "how much or how little, how frequent or infrequent? What's the number?"

Ultimately, when you get to the C level execs, qualitative factors are tied to quantitative numbers, and most of the time, the numbers have to point to a positive and/or increasing revenue. That's what keeps companies alive. Not enough money, no future, it's that simple.

Brad suggests that, if we need to quantify our efforts, these are the ten areas that will be the most impactful.


It's a pretty good list. I'd add my advocacy and risk ratios, too, but the key to all of this is that these numbers don't matter if we don't know them, and they don't matter if we don't share them.

---

2:00 pm - 3:00 pm
TESTING IS YOUR BRAND. SELL IT!
Kate Falanga


One of the phrases oft heard among software testers and about software testing is that we are misunderstood. Kate Falanga is in some ways a Don Draper of the testing world. She works with Huge; Huge is like Mad Men, just with more computers, though perhaps equal amounts of alcohol ;). Seriously, though, Kate approaches software testing as though it were a brand, because it is, and she's alarmed at the way the brand is perceived. The fact is, every one of us is a brand unto ourselves, and what we do or do not do affects how that brand is perceived.



Testers are often not very savvy about marketing themselves. I have come to understand this a great deal lately. The truth is, many people interpret my high levels of enthusiasm, my booming voice, and my aggressive passion and drive as good marketing and salesmanship. It's not. It can be contagious, it can be effective, but that doesn't translate to good marketing. Once that shine wears off, if I can't effectively carry objectives and expectations to completion, or encourage continued confidence, then my attributes matter very little, and can actually become liabilities.

I used to be a firebrand about software testing and discussing all of the aspects about software testing that were important... to me. Is this bad? Not in and of itself, but it is a problem if I cannot likewise connect this to aspects that matter to the broader organization. Sometimes my passion and enthusiasm can set an unanticipated expectation in the minds of my customers, and when I cannot live up to that level of expectation, there's a let down, and then it's a greater challenge to instill confidence going forward. Enthusiasm is good, but the expectation has to be managed, and it needs to align with the reality that I can deliver.

Another thing testers often do is emphasize that they find problems and that they break things. I do agree with the finding problems part, but I don't talk about breaking things very much. Testers, generally speaking, don't break things; we find where they are broken. Regardless of how that's termed, it is perceived as a negative. It's an important negative, but it's still seen as something that is not pleasant news. Let's face it, nobody wants to hear their product is broken. Instead, I prefer, and it sounds like Kate does too, emphasizing more positive portrayals of what we do. Rather than say "I find problems", I emphasize that "I provide information about the state of the project, so that decision makers can make informed choices to move forward". Same objective, but totally different flavor and perception. Yes, I can vouch for the fact that the latter approach works :).

The key takeaway is that each of us, and by extension our entire team, sells to others an experience, a lifestyle and a brand. How we are perceived is both individual and collective, and sometimes one member of the team can impact the entire brand, for good or for ill. Start with yourself, then expand. Be the agent of change you really want to be. Ball's in your court!

---

3:15 pm - 4:15 pm
BUILDING LOAD PROFILES FROM OBJECTIVE DATA
James Pulley

Wow, last talk of the day! It's fun to be in a session with James, because I've been listening to him via PerfBytes for the past two years, so much of this feels familiar, but more immediate. While I am not a performance tester directly, I have started making strides toward getting into this world, because I believe it to be valuable in my quest to be a "specializing generalist" or a "generalizing specialist" (service mark Alan Page and Brent Jensen ;) ).



In his talk earlier, Eric Proegler discussed pushing performance testing earlier in the development and testing process. To continue with that idea, I was curious to get some ideas as to how to build a profile to actually run performance and load testing. Can we send boatloads of requests to our servers and simulate load? Sure, but will that actually be representative of anything meaningful? In this case, no. What we really want to create is a representative profile of traffic and interactions that actually approaches the real use of our site. To do that, we need to think about what will actually represent our users' interactions with our site.

That means workflows should be captured, but how can we do that? One way is analysis of previous transactions in our logs, recreating steps and procedures. Another is to look at access or error logs to see what people want to find but can't, or to see if there are requests that don't seem to make any sense (i.e. potential attacks on the system). The database admin, the web admin and the CDN administrator are all good people to cultivate relationships with, to discuss these needs and encourage them to become allies.

Ultimately, the goal of all of this is to steer clear of "ugly baby syndrome" and the urge to cast or dodge blame, and to do that, we really need to be as objective as possible. With a realistic load of transactions that are representative, there's less of a chance for people to say "that test is not relevant" or "that's not a real-world representation of our product".

Logs are valuable to help gauge what actually matters and what is junk, but those logs have to be filtered. There are many tools available to help make that happen, some commercial, some open source, but the goal is the same: look for payload that is relevant and real. James encourages looking at individual requests to see who generated each request, who referred the request, what request was made, and what user agent made the request (web vs. mobile, etc.). What is interesting is that we can see patterns that show us what paths users take to get to our system and what they traverse in our site to get to that information. Looking at these traversals, we can visualize pages and page relationships, and perhaps identify where the "heat" is in our system.
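For a feel of what that mining looks like, here's a minimal sketch of my own (not one of the tools James mentioned), assuming the common "combined" access-log format that Apache and nginx write by default:

```python
# Pull the fields James calls out (who, what, referrer, user agent) from
# an access log and tally the "hot" paths for a load profile.
import re
from collections import Counter

LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

paths = Counter()
agents = Counter()
with open("access.log") as log:
    for line in log:
        m = LINE.match(line)
        if not m:
            continue  # junk and partial lines get filtered out here
        parts = m.group("request").split()
        if len(parts) == 3:          # e.g. "GET /pricing HTTP/1.1"
            paths[parts[1]] += 1
        agents["mobile" if "Mobile" in m.group("agent") else "other"] += 1

print(paths.most_common(10))  # candidate hot pages for the load profile
print(agents)                 # rough web vs. mobile split
```

Real log-analysis tools do far more (sessionization, referrer chains, bot filtering), but even this much starts to show where the heat is.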

---

Wow, that was an intense and very fast two full days. My thanks to everyone at STP for putting on what has been an informative and fun conference. My gratitude to all of the speakers who let me invade their sessions, type way too loudly at times (I hope I've been better today) and inject my opinions here and there.

As we discussed in Kate's session, change comes in threes. The first step is with us, and if you are here and reading this, you are looking to be the change for yourself, as I am looking to be the change in myself.

The next step is to take back what we have learned to our teams, openly if possible, by stealth if necessary, but change your organization from the ground up.

Finally, if this conference has been helpful and you have done things that have proven to be effective, had success in some area, or are in an area you feel is underrepresented, take the third step and engage at the community level. Conferences need fresh voices, and the fact is, experience reports of real-world application and observation have immense value, and they are straightforward talks to deliver. Consider putting your hat in the ring to speak at a future session of STP-CON, or another conference near you.

Testing needs all of us, and it will only be as good as the contributors that help build it. I look forward to the next time we can get together in this way, and see what we can build together :).


In My Own Time: A Live Blog from #STPCON, Spring 2015

Monday, April 13, 2015 14:34


To borrow a favorite line from my daughters... 

"What does snow turn into when it melts? It becomes spring!" 

It's a line from one of their all time favorite Anime series (Fruits Basket), and it's a cute ring of optimism that points to warmer days, longer daylight and at least for me, the start of the presenting and writing season. 

Over the past few years, I've enjoyed a three times per year experience of a conference in spring, CAST in the summer, and a conference a little farther afield in the fall. I've been invited to both Sweden and Ireland in the past year, and I'm keeping my fingers crossed that I might have an opportunity to present in Germany in the fall of 2015. Right now, though, that's up in the air, but I can hope and dream.

Today, though, brings me to slightly cloudy but very pleasant San Diego for the regular sessions of STPCON Spring 2015. Monday and Tuesday were special sessions and tutorials, and due to work commitments, I couldn't get here until last night. A group of us were gathered around the fire pit outside in the later evening, though, talking about our respective realities, and that familiar "conferring before conferring" experience that we all know and love. 

A great surprise this morning was to see my friend Mike Ngo, a tester who worked with me a decade ago at Konami Digital Entertainment. Many of us have stayed in touch with each other over the years; we jokingly refer to ourselves as being part of the "ExKonamicated". He told me he came with a number of co-workers, and that several of them would be attending my talk this morning (oh gee, no pressure there ;) ).


Mike Lyles started off the program today, welcoming testers and attendees who came from far and wide (apparently Sweden sent the most people, with 12 attendees), with many companies sending contingents (Mike's team from Yammer being a good example). The food was good this morning, the company has been great, and as always, it's such great fun to see people that I really only get to see online or at these events. By the way, a quick plug for Fall 2015 STPCON: it will be on a cruise ship ("Hey Matt, it's happened. They're having a testing conference... on a boat ;)"... HA HA, April Fools!!! Ah well, therein lies the danger of presenting on April 1st ;) ).


---

Bob Galen starts off the day with a keynote address defining and describing the three pillars of Agile Quality and Testing. As many of us can attest (and did so, loudly), there are challenges with the way Agile development and Agile testing take place in the real world. Interestingly enough, one of the summing points is that we have a lot of emphasis on "testing", but we have lost the focus on actual "quality". In some ways, the fetish with automated testing and test-driven development (which admittedly isn't really testing in the first place, but the wording and the language tend to make us think it is) has made for "lots of testing, but really no significant differences in the perceived quality of the applications as they make their way out in the world". We have Selenium and Robot Framework and Gherkin, oh my, but could these same teams write anything that resembled a genuine acceptance test that really informed the quality of the product? So many of these tests provide coverage, but what are they actually covering?


The biggest issue with all of our testing is that we are missing the real "value perspective". Where are our conversations leading? What does our customer want? What are we actually addressing with our changes? Story writing is connected to the automation, and that's not essentially a bad thing, but it needs to be a means to an end, not the end in and of itself.

The three pillars that Bob considers to be the core of successfully balanced Agile practices are Development & Test Automation, Software Testing, and Cross Functional Team Practices. Agile teams do well on the first pillar, are lacking in the second, and in many ways are really missing the third. The fetish of minimizing needless and wasteful documentation has caused some organizations to jettison documentation entirely. I frequently have to ask "how has that worked for you?". I'm all for having conversations rather than voluminous documentation, but the problem arises when we remove the documentation and still don't have the conversations. Every organization deals with this in some way when they make an Agile transition. 



Beyond the three pillars, there's also the sense that there is a foundation that all Agile teams need to focus on. Is there a whole team ownership of quality? Are we building the right things? Are we actually building it right? If we are using metrics, are they actually useful, and do they make sense? How are we steering the quality initiatives? Are we all following the same set of headlights? Are we actually learning from our mistakes, or are we still repeating the same problems over and over again, but doing it much faster than ever before?

Agile is a great theory (used in the scientific manner and not the colloquial, I genuinely mean that), and it offers a lot of promise for those who embrace it and put it into practice (and practice is the key word; it's not a Ronco product, you don't just "set it and forget it"). It's not all automation, it's not all exploratory testing, and it's not all tools. It's a rhythm and a balance, and improvising is critical. Embrace it, learn from it, use what works, discard what doesn't. Rinse and repeat :).

---

The last time I spoke at STP-CON, I was the closing speaker for the tracks. This time around, I was one of the first out the gate. If I have to choose, I think I prefer the latter, though I'll speak whenever and wherever asked ;). I've spoken about Accessibility in a number of places, and on this blog. Albert Gareev and I have made it a point of focus for this year, and we have collaborated on talks that we are both delivering. When you see me talking about Accessibility, and when you see Albert talk about it, we are usually covering the same ground but from different perspectives. I delivered a dress rehearsal of this talk at the BAST meetup in January in San Francisco, and I received valuable feedback that made its way into this version of the talk.



The first change that I wanted to make sure to discuss right off the bat is that we discuss Accessibility from a flawed premise. We associate it with a negative. It's something we have to do, because a government contract mandates it or we are being threatened with a lawsuit. Needless to say, when you approach features and changes from that kind of negative pressure, it's easy to be less than enthusiastic, and to take care of it after everything else. Through discussions I had with a number of people at that dress rehearsal talk, we realized that a golden opportunity to change the narrative was already happening. An attendee held up their smart phone and said "this, right here, will drive more changes to accessible design than any number of government regulations", and they were absolutely right! We see it happening today with microservices being developed to intake data from wearables like the Apple Watch, the FitBit, or other peripheral devices, where general text and screens just aren't the way the product needs to interact.

While I was developing this talk, and sent my slides to be included in the program, a brilliant tweet was made by @willkei on Twitter that encapsulated this interesting turning point:



Yeah, that!

The same steps and processes being used to make content accessible on our smart phones, our smart watches, and the Internet of Things are really no different from designing for accessibility from the outset. The form factor necessitates simplicity and direct communication, in a way that a regular interface cannot and will not provide. What's great about these developments is that new methods of interaction have to be developed and implemented. By this same process, we can use this opportunity to incorporate those design ideas for everyone.

A heuristic that Albert and I (well, mostly Albert) have developed to help testers think about testing for accessibility is called HUMBLE.


HUMBLE

Humanize: be empathetic; understand the emotional components.
Unlearn: step away from your default [device-specific] habits. Be able to switch into different habit modes.
Model: use personas that help you see, hear and feel the issues. Consider behaviors, pace, mental state and system state.
Build: knowledge, testing heuristics, core testing skills, testing infrastructure, credibility.
Learn: what are the barriers? How do users Perceive, Understand and Operate?
Experiment: put yourself into literal situations. Collaborate with designers and programmers, and provide feedback.


In short, Accessible design isn't something special, or an add-on for "those other people" out there. It's designing in a way that allows the most people the ability to interact with a product effectively, efficiently, and in a way that, we should hope, includes rather than excludes. Designing for this up front is much better, and easier to create and test, than having to retrofit products to be accessible. Let's stop treating accessibility as a negative, something we do only because we have to, and instead focus on making a web that is usable and welcoming for all.


---

Maaret Pyhäjärvi covered a topic that intrigued me greatly. While there are many talks about test automation and continuous delivery, Maaret is approaching the idea of "Continuous Delivery WITHOUT Automation". Wait, what?



Some background on Maaret's product would probably be helpful. She's working on a web-based .NET product, on a team with one tester for each developer, and a Scrum-like monthly release cadence that ran for two years. As might be guessed, releases were often late and not tested well enough. This created a vicious cycle: failed estimates damaged team morale, which, you guessed it, caused more failed estimates.

What if, instead of trying to keep working against these failed estimates and artificial timelines, we were to move to continuous deployment instead? What was interesting to me was the way that Maaret described this. They don't want to go to continuous delivery, but instead, continuous deployment. The build, unit test, functional test and integration test components are all automated, but the last step, the actual deployment, is manually handled. Instead of waiting for a bunch of features to be ready and pushing a release, each feature is individually released to production as it is tested, deemed worthy, and ready to be released.
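To make the distinction concrete, here's a minimal sketch of my own (this is an illustration, not Maaret's actual pipeline): every stage up to deployment runs automatically, and the deploy itself waits for a human decision. The stage commands are hypothetical placeholders.

    # Every stage before deployment runs automatically; the deploy waits
    # for a human. Stage commands are placeholders for your own tooling.
    PIPELINE = {
      'build'             => 'make build',
      'unit tests'        => 'rake test:unit',
      'functional tests'  => 'rake test:functional',
      'integration tests' => 'rake test:integration'
    }

    PIPELINE.each do |stage, command|
      puts "Running #{stage}..."
      abort("#{stage} failed -- stopping here") unless system(command)
    end

    print 'All checks green. Deploy this feature to production? (y/n) '
    if $stdin.gets.to_s.strip.downcase == 'y'
      system('rake deploy:production')  # the one manual, human-triggered step
    else
      puts 'Holding the release; the feature stays staged.'
    end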


This model made me smile, because I realized that in our environment we do this, at least partially. Our staging environment is our internal production server. It's what we as a company actively use to make sure that we are delivering each finished feature (or sometimes several features) and deploying the application on a regular basis.

Maaret discussed the value of setting up a Kanban board to track features as a one-piece flow. We likewise do this at Socialtext. Each story makes its way through the process, and as we work on each story and find issues, we address them until all of the acceptance criteria pass. From there, we merge to staging, create a build, and then physically update the staging server, sometimes daily, occasionally multiple times a day. This helps us evaluate the issues we discover in a broader environment and in real time, versus inside our more sterile and generalized test environments. It seems that Maaret and her team do something very similar.

What I like about Maaret's model, and by extension ours, is that there remains a human touch to the deployments, so that we can interact with these changes and see how they affect our environment in real time. We remain aware of deployment issues if they develop, and additionally, we share that role each week. We have a pumpking who handles the deployment of releases each week, and everyone on the engineering team takes their turn as the pumpking. I take my turn as well, so I am familiar with all of the steps to deploy the application, as well as the places where things could go wrong. Having this knowledge allows me to approach not just my testing but the actual deployment process from an exploratory mindset. It lets me anticipate potential issues, see where we might have problems with services we depend on, and consider what could go wrong if we have to independently update elements that interact with those services.


---

Paul Grizzaffi has focused on creating and deploying automated test strategies and frameworks, and his talk emphasizes changing the conversation we are having about test automation and how we approach it. In "Not Your Parents’ Automation – Practical Application of Non-Traditional Test Automation", we have to face the fact that traditional automation does matter. We use it for quantitative items and for both high- and low-level checks. However, there are other places that automation can be applied. How can we think differently when it comes to software test automation?


Often, we think of traditional automation as a way to take the human out of the equation entirely. That's not necessarily a bad thing. There are a lot of tedious actions that we do that we really wish we didn't have to. When I build a test environment, there are a handful of commands I now run, and I've put those commands into a script so I can do it all in one step. That's a good use of traditional automation, and it's genuinely valuable. We often say we want to take out the drudgery so the human can focus on the interesting. Let's extend this idea: can we use automation to help us with the interesting as well?
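For illustration, here's roughly what that kind of one-step setup script looks like in Ruby. The commands themselves are hypothetical stand-ins for whatever handful of steps your own environment actually needs.

    #!/usr/bin/env ruby
    # One-step test environment setup; each command is a placeholder.
    STEPS = [
      'git pull --rebase',
      'bundle install',
      'rake db:reset db:seed',
      'rake services:restart'
    ]

    STEPS.each do |cmd|
      puts "==> #{cmd}"
      abort("Setup failed at: #{cmd}") unless system(cmd)
    end
    puts 'Test environment ready.'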

Paul asks a fundamental question: "How can automation help me do my job?" When we think of it that way, we open new vistas and consider different angles where our automation can help us look for egregious errors, or run a script on several machines in parallel. What if we were to take old machines that we could run headless, create a variety of test environments, and run the same scripts across a matrix of machines? Think that might prove valuable? Paul said that yes, it certainly was for their organization.
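Here's a rough sketch of that fan-out idea in Ruby, assuming a handful of headless machines reachable over ssh. The host names and script path are hypothetical.

    # Fan the same check script out to a matrix of headless machines and
    # collect pass/fail per host. Hosts and script path are made up here.
    HOSTS  = %w[testbox01 testbox02 testbox03 testbox04]
    SCRIPT = '/opt/checks/run_smoke.sh'

    threads = HOSTS.map do |host|
      Thread.new { [host, system('ssh', host, SCRIPT)] }
    end

    threads.map(&:value).each do |host, ok|
      puts format('%-10s %s', host, ok ? 'PASS' : 'FAIL')
    end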

Paul uses a term I like, similar to one I have used as a substitute for automation. I say Computer Assisted Testing; Paul calls it Automation Assist, but overall it's the same thing. We're not just looking for atomic pass/fail check validations. Instead, we are looking for ways to observe and use our gut instincts to help us consider whether we are actually seeing what we hope to see, or to determine that we are not seeing what we want to.

At Socialtext, I have been working on a script grouping with this in mind. We call it Slideshow, and it's a very visible automation tool. For those who recall my talk about balancing Automation and Exploratory Testing, I used the metaphor of a taxi cab in place of a railroad train. The value of a taxi cab is that it can go many unique places, and then let you stop and look around. Paul's philosophy fits very well with that approach and use. Granted, it's fairly labor intensive to make and maintain these tests, because we want to keep looking for unique places to go and new pages to find.

A key idea that Paul asks us to consider, and one I feel makes a lot of sense, is that automation is important, but it gets way too much focus and attention for what it actually provides. If we don't want to scale back our traditional automation efforts, then we should look for ways to use it in unique and different ways. One of my favorite statements is "forget the tool for now, and start with defining the problem". By doing that, we can then determine what tools actually make sense, or whether a tool is even needed (there's a lot you can do with bash scripts, loops and curl commands... just sayin' ;) ). Also, any tool can be either an exceptional benefit or a huge stumbling block. It depends entirely on the person using the tool, and their expertise or lack thereof. Greater knowledge makes more creative uses likely, but don't sell your non-toolsmith testers short. They can still help you find interesting and creative uses for your automation assistance.

---
I think Smita Mishra deserves an award for overachieving. Not content to deliver an all-day workshop, she's also throwing down with a track talk as well. For those who don't know Smita, she is the CEO and Chief Test Consultant with QAzone Infosystems. She's also an active advocate, writer and engaged member of our community, as well as a rock-solid software tester. In her track talk, she discussed "Exploratory Testing: Learning the Security Way".


Smita asks us to consider what it would take to perform a reasonably good security test. Can we manually test security to a reasonable level? Can exploratory testing give us an advantage?

First off, Exploratory Testing needs to be explained in terms of what it is not. It's not random, and it's not truly ad hoc, at least not if it is being done correctly. A favorite way I like to look at Exploratory Testing is to compare it to interviewing someone: asking them questions, and based on their answers, asking follow-up questions that might lead into interesting and informative areas. As in a personal interview, if we ask safe, boring questions with predictable answers, we don't learn very much. However, when we ask a question that makes someone wince, or see that they are guarded with their answers, or even evasive, we know we have hit on something juicy, and our common desire is to keep probing and find out more. Exploratory testing is much the same, and security testing is a wonderful way to apply these ideas.

One of the most interesting ways to attack a system is to determine the threat model and the prevalence of the attack. SQL injections are one way we can either confirm the security of form fields or see if we can compromise the fields and gain access.
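To show the flavor of this, here's a small sketch of my own: a few classic SQL injection probe strings posted to a hypothetical login form with Ruby's net/http. The endpoint and field names are assumptions, and of course, only ever run something like this against a system you are authorized to test.

    require 'net/http'
    require 'uri'

    TARGET = URI('https://staging.example.com/login')  # hypothetical endpoint
    PROBES = [
      "' OR '1'='1",
      "'; DROP TABLE users; --",
      '" OR ""="'
    ]

    PROBES.each do |probe|
      res = Net::HTTP.post_form(TARGET, 'username' => probe, 'password' => 'x')
      # A successful login page, or a raw database error leaking into the
      # response body, both mean this field deserves a much closer look.
      puts "#{probe.inspect} => HTTP #{res.code}"
    end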



What is interesting about security testing as a metaphor for exploratory testing is the fact that attacks are rarely one-and-done. If an issue doesn't manifest with one attack, a hacker will likely try several more. As testers, we need to think the same way and see if we can approach the system in a similar manner. It's back to the interviewing metaphor, with a little paparazzi determination for good measure.

Ask yourself what aspects of the application would make it vulnerable. What services reside in one place, and what services reside in another? Does your site use iFrames? Are you querying different systems? Are you exposing the query payload? Again, each answer will help inform the next question we want to ask. Each avenue we open can yield a potential embarrassment of riches.

Smita encourages use of the Heuristic Test Strategy Model, which breaks the product into multiple sections (Quality Criteria, Project Environment, Product Elements and Test Techniques, as well as the Perceived Quality that the product should exhibit). Each section has a further breakdown of criteria to examine, and each breakdown thread allows us to ask a question, or group of questions. This model can be used for something as small as a button element or as large as an entire application. Charters can be developed around each of the questions we develop, and our observations can encourage us to ask further clarifying questions. It's like a puzzle when we start putting the pieces together. Each successful link-up informs the next piece to fall into place, and each test can help us determine which further tests to perform, or which interlocking areas to examine.

---

Our closing keynote talk today is being delivered courtesy of Mark Tomlinson, titled "It’s Non-Functional, not Dysfunctional". There's functional testing and the general direct interaction we are all used to considering, and then there's all that "other" stuff: the "ilities", the para-functional items. Performance, Load, Security, Usability, Accessibility, and other items all play into the testing puzzle, and people need to have these skills. However, we don't have the skills readily available in enough places to actually do these areas the justice they deserve.



Programmers are good at focusing on the what and the where, and can perhaps offer a fighting chance at the why, but the when and the how sometimes fall by the wayside. Have we considered what happens when we scale up the app? Does the database allow enough records? Can we create conditions that bring the system to its knees (and yes, I know the answer is "yes" ;) )?

There are a variety of dysfunctions we need to address, and Mark broke down a few here:


Dysfunction 1: Unless the code is stable, I can't test performance.

What determines stable? Who determines it? How long will it take to actually make it stable? Can you answer any of those? Functional testers may not be able to, but performance testers may be able to help make unstable areas more stable (mocks, stubs, virtual services, etc.). We can't do a big run of everything, but we can allocate some effort for some areas.

Dysfunction 2: As a product manager, I've allocated 2 weeks at the end of the project for load testing.

Why two weeks? Is that anywhere near enough? Is it arbitrary? Do we have a plan to actually do this? What happens if we discover problems? Do we have any pivoting room?

Dysfunction 3: The functional testers don't know how to use the tools for performance or security.

Considering how easy it is to download tools like JMeter, Metasploit, or Kali Linux, there are plenty of opportunities to do "ility" testing for very close to free, if not outright free. If you happen to get some traction, share what you know with others, and suddenly, surprise, you have a tool stack that others can get involved with.


Dysfunction 4: We don't have the same security configuration in TEST that we do in PROD.

Just because they don't match exactly doesn't mean that the vulnerabilities are not there. There are a number of ways to look at penetration testing and utilize the approaches these tools use. What if the bizarre strings we use for security testing were actually used in our automated tests? Insane? Hardly. It may not be a 1:1 comparison, but you can likely use it.

Dysfunction 5: Our test environment is shared with other environments and hosted on over-provisioned virtual server hosts.

OK, yeah, that's real, and that's going to give us false positives (even latency in the cloud can muck some of this up). This is also a chance to communicate times and slices to access the machines without competing for resources.

Dysfunction 6: We do not have defined SLAs or non-functional requirements specifications.

Fair enough, but do we even have scoping for one user? Ten users? One hundred users? What's the goal we are after? Can we quantify something and compare it with expectations? Can we use heuristics that make sense to determine if we are even in the ballpark?

Question A:


Question B:



Would we approach these questions the exact same way? Ha! Trick question! Of course we wouldn't, but at least by structuring the questions this way (as in Question B), we can make decisions that ensure we are asking the right questions in the first place. More to the point, we can reach out to people who have a clue as to what we should be doing, but we can't do that if we don't know what to ask.

---
This concludes the Wednesday session of STP-CON. Join me tomorrow for "Delivering the Goods: A Live Blog from #STPCON".

Communications and Breakfast: An Amusing Cultural Moment

Monday, March 30, 2015 23:06 PM

Last week, my family had the pleasure of hosting three girls from Narita, Japan, as part of our sister-city cultural exchange program. We first took part in this exchange in 2013, when our older daughter was selected to be part of our city's delegation to Japan. Delegates' families are encouraged to host the delegates coming from Japan, and we did that two years ago. We greatly enjoyed the experience, so when it came time for our youngest daughter to see if she could participate, we of course said yes, and thus we welcomed three young women, a long way from their homes, into our home.

These girls did not know each other well prior to this trip. Unlike our city, which has one Intermediate school within its city limits, Narita has several schools represented, and the delegates chosen typically do not have hosting assignments with students from their school. Therefore, it was not just a learning experience for the girls to relate to us, but to each other.

On Friday, as I was getting ready to take my eldest daughter to her school, one of the girls came downstairs with some prepackaged single-serving rice packets. In what English she could muster, she motioned to me that she wanted the rice cooked. I figured "OK, sure, I'd be happy to do that for you", and did so. After it was cooked, I opened the small container, put it in a bowl, and then asked her if she wanted a fork or chopsticks. She looked at me quizzically for a moment, and then she said "chopsticks". I handed her the chopsticks, and figured "OK, the girls want to have rice as part of their breakfast. No problem." I then lined up the other packets, explained to Christina how to cook them in the microwave (40 seconds seems fine at our microwave's power), and then went to take my daughter to school. The rest of this story comes courtesy of texts and after-the-fact revelation ;).

While I was out and Christina was helping get the rest of the packets of rice prepared, she pulled out a few bowls, put the rice in the bowls with a few more chopsticks, and placed them on the table for the girls. The young lady who had come down to make the request then said "no, this is for you". Christina smiled, said thank you, and then started to eat the rice from one of the bowls. What she noticed after a few bites was that the girl was staring at her, frozen in place, and looking very concerned. At this point, Christina stopped, put down the bowl, and asked if everything was OK. Our Japanese exchange student tried to grasp for the words to explain what she wanted to say, and as this was happening, another of the exchange students, who was much more fluent in English, saw what was happening.

"Oh, this rice is for a breakfast dish we planned to make for you. Each of the packages has enough rice to make one Onigiri (which is to say, rice ball, a popular food item in Japan). At this, Christina realized what had happened, and texted me what she did. She felt mortified, but I assured her it was OK, and I'd happily split mine with her to make up for it. With that, we were able to work out the details of what they wanted and needed from us so that they could make the Onigiri for us (which they did, and which was delicious, I might add!).

I smiled a little bit at this, because I have felt this situation a few times in my career, although it wasn't trying to communicate from English to Japanese and back. Instead, I've had moments like this where I've had to explain software testing concepts to programmers or others in the organization, and they have tried to explain their processes to me. It's very likely that I have had more than a few moments of my own where I must have stood there, paralyzed and watching things happen, wanting to say "no, stop, don't do that, I need to explain more" but feeling like the world was whizzing past me. As my wife explained the situation, I couldn't help but feel for both of them. Fortunately, in this case, all it meant was one fewer rice ball. In programming and testing, these miscommunications or misunderstandings are often where things can go ridiculously sideways, albeit usually not in an amusing way. The Larsen Onigiri Incident, I'm sure, will become a story of humor on both sides of the Pacific for the participants. It's a good reminder to make sure, up front, that I understand what others in my organization are thinking before we start doing.

Register NOW for #CAST2015 in Grand Rapids, Michigan

Wednesday, March 25, 2015 18:15 PM



After some final tweaks, messages sent out to the speakers and presenters, keynotes, and other behind-the-scenes machinations... we are pleased to announce that registration for AST's tenth annual conference, CAST 2015, to be held August 3-5, 2015 in Grand Rapids, Michigan, is now open!

The pre-conference Tutorials have been published, and these tutorials are being offered by actual practitioners, thought leaders, and experts in testing. All four of them are worth attending, but alas, we only have room for twenty-five attendees in each, so if you want to get in, well, now is the time :)!

The tutorials we are offering this year are:

"Speaking Truth to Power" – Fiona Charles

"Testing Fundamentals for Experienced Testers" – Robert Sabourin

"Mobile App Coverage Using Mind Maps" – Dhanasekar Subramaniam

"'Follow-your-nose' Testing – Questioning Rules and Overturning Convention" – Christin Wiedemann

Of course, there is also the two-day conference itself that you can register for, and yes, we wholeheartedly encourage you to register soon.

If you register by June 5th, you can save up to $400 with our "Early Bird" discount. We limit attendance to 250 attendees, and CAST sold out last year, so register soon to secure your seat.

Additionally, we will be offering webCAST again this year. Really want to attend, but you just can't make it those days? We will be live streaming keynotes, full sessions, and "CAST Live" again this year!
Come join us this summer for our tenth annual conference in downtown Grand Rapids at the beautiful Grand Plaza Hotel August 3-5, or join us online for webCAST. Together, let's start “Moving Testing Forward.”

More Automation Does Not Equal Repaired Testing

Monday, March 23, 2015 19:10 PM

Every once in a while, I get an alert from a friend or a colleague about an article or blog post. These posts give me a chance to discuss a broader point that deserves to be made. One of those happened to arrive in my inbox a few days ago. The alert was to a blog post titled Testing is Broken by Philip Howard.

First things first; yes, I disagree with this post, but not for the reasons people might believe.

Testing is indeed "broken" in many places, but I don't automatically arrive at the final analysis of this article, which is "it needs far more automation". What it needs, in my opinion, is better quality automation, the kind that addresses the tedious and the mundane, so that testers can apply their brains to much more interesting challenges. The post talks about how there's a conflict of interest: recruiters and vendors are more interested in selling bodies than in selling solutions and tools, and therefore they push manual testing and manual testing tools over automated tools. From my vantage point, this has not been the case, but for the sake of discussion, I'll run with it ;).

Philip uses a definition of Manual Testing that I find inadequate. It’s not entirely his fault; he got it from Wikipedia. Manual testing as defined by Wikipedia is "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases”.

That’s correct, in a sense, but it leaves out a lot. What it doesn't mention is that manual testing is a skill requiring active thought, consideration, and discernment, the kind of qualities that quantitative measures (which automation is quite adept at handling) fall short of capturing.

I answered back on a LinkedIn forum post related to this blog entry by making a comparison. I focused on two questions related to accessibility:

1. Can I identify that the wai-aria tag for the element being displayed has the defined string associated with it?

That’s an easy "yes" for automation, because we know what we are looking for. We're looking for an attribute (a wai-aria tag) within a defined item (a stated element), and we know what we expect (a string like "Image Uploaded. Press Next to Continue"). Can this example benefit from improved automation? Absolutely!
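As a sketch of that first check, assuming the selenium-webdriver gem and a hypothetical page URL and CSS selector (the expected string is the one from the example above), the automation is little more than:

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :firefox
    driver.get 'https://example.com/upload'   # hypothetical page
    label = driver.find_element(css: '#upload-status').attribute('aria-label')
    expected = 'Image Uploaded. Press Next to Continue'
    puts(label == expected ? 'PASS' : "FAIL: got #{label.inspect}")
    driver.quit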

2. Can I confirm that the experience for a sight-impaired user of a workflow is on par with the experience of a sighted user?

That’s impossible for automation to handle, at least with the tech we have today. Why? Because this isn’t quantifiable. It’s entirely subjective. To make the conclusion that it is either an acceptable or not acceptable workflow, a real live human has to address and test it. They must use their powers of discernment, and say either “yes, the experience is comparable” or “no, the experience is unacceptable”.

To reiterate, I do believe testing, and testers, need some help. However, more automation for the sake of more automation isn't going to do it. What's needed is the ability to leverage the tools available, to offload as much of the repetitive busywork we know we have to do every time, and get those quantifiable elements sequenced. Once we have a handle on that, we have a fighting chance to look at the qualitative and interesting problems, the ones that we haven't figured out how to quantify yet. Real human beings will be needed to take on those objectives, so don't be so quick to be rid of the bodies just yet.

The Case for "Just a Little More" Documentation

Friday, March 20, 2015 17:28 PM

The following comment was posted to Twitter yesterday by Aaron Hodder, and I felt compelled not just to retweet it, but to write a follow up about it.

The tweet:



"People that conflate being anti wasteful documentation with anti documentation frustrate me."

I think this is an important area for us to talk about. This is where the pendulum, I believe, has swung too far in both directions during my career. I remember very well the 1990s, and having to use ISO-9001 standards for writing test plans. For those not familiar with this approach, the idea was that you had a functional specification, and that functional specification had an accompanying test plan. In most cases, that test plan mapped exactly to the functional specification. We often referred to this as "the sideways spec". That was a joking term we used to basically say "we took the spec, and we added words like confirm or verify to each statement." If you think I'm kidding, I assure you I'm not. It wasn't exactly that way, but it was very close. I remember all too well writing a number of detailed test plans, trying to be as specific as possible, only to have them turned back to me with "it's not detailed enough." When I finally figured out that what they really meant was "just copy the spec", I dutifully followed. It made my employers happy, but it did not result in better testing. In fact, I think it's safe to say it resulted in worse testing, because we rarely followed what we had written. We did what we felt we had to do with the time that we had, and the document existed to cover our butts.

Fast forward now a couple of decades, and we are in the midst of the "Agile age”. In this Agile world, we believe in not providing lots of "needless documentation”. In many environments, this translates to "no documentation" or "no spec” outside of what appears on the Kanban board or Scrum tracker. A lot is left to the imagination. As a tester, this can be a good thing. It allows me the ability to open up different aspects, and go in directions that are not specifically scripted. That's the good part.

The bad part is that, because there's sparse documentation, we don't necessarily explore all the potential avenues, because we just don't know what they are. Often, I create my own checklists and add ideas of where I looked, and in the past I've received replies like "You're putting too much in here, you don't need to do that, this is overkill." I think it's important for us to differentiate between avoiding overthinking and overplanning on the one hand, and making sure we give enough information and provide enough detail to be successful on the other. It's never going to be perfect. The idea is that we communicate, we talk, we don't just throw documents at each other. We have to make sure that we have communicated enough, and that we really do understand what needs to happen.

I'm a fan of the "Three Amigos" model. Early in the story, preferably at the very beginning, three people come together: the product owner, the programmer and the tester. That is where these details can be discussed and hammered out. I do not expect a full spec or implementation at this point, but it is important that everybody who comes to this meeting shares their concerns, considerations and questions. There's a good chance we might still miss a number of things if we don't take the time here to talk out what could happen. Is it possible to overdo it? Perhaps, if we're getting bogged down in minutiae and details, but I don't think it is a bad thing to press for real questions such as "Where is this product going to be used? What are the permutations that we need to consider? What subsystems might this also affect?" If we don't have the answer right then and there, we still have the ability to say "Oh, yeah, that's important. Let's make sure that we document that."

There's no question, I prefer this more nimble and agile method of developing software to yesteryear's. I would really rather not go back to what I had to do in the 90s. However, even in trying to be lean, and in our quest for minimum viable products, let's be sure we are also communicating effectively about what our products need to be doing. My guess is, the overall quality of what comes out the other end will be much better for the effort.

It's Not Your Imagination, It's REALLY Broken

Thursday, March 19, 2015 18:26 PM

One of the more interesting aspects of automation is that it can free you up from doing a lot of repetitive work, and it can check your code to make sure something stupid has not happened. The vast majority of the time, the test runs go by with little fanfare. Perhaps there's an occasional change that causes tests to fail, and they need to be reworked to be in line with the present product. Sometimes there are more numerous failures, and those can often be chalked up to flaky tests or infrastructure woes.

However, there are those days where things go terribly wrong, where the information radiator is bleeding red. Your tests have caught something spectacular, something devastating. What I find ironic about this particular situation is that we don't jump for joy and say "wow, we sure dodged a bullet there, we found something catastrophic!" Instead, what we usually say is "let's debug this session, because there's no way this could be right. Nothing fails that spectacularly!"

I used to think the same thing, until I witnessed that very thing happen. It was a typical release cycle for us: stories being worked on as normal, work on a new feature tested and deemed to work as we hoped it would, with a reasonable enough quality to say "good enough to play with others". We merged the changes to the branch, and then we ran the tests. The report showed a failed run. I opened the full report and couldn't believe what I was seeing. More than 50% of the spun-up machines were registering red. Did I at first think "whoa, someone must have put in a catastrophically bad piece of code!"? No, my first reaction was to say "ugh, what went wrong with our servers?!" This is the danger we face when things just "kind of work" on their own for a long time. We are so used to little hiccups that we know what to do with them. We are totally unprepared when we are faced with a massive failure. In this case, I went through and checked all of the failed states of the machines, looking for either a network failure or a system failure... only none was to be found. I looked at the test failure statements expecting them to be obvious configuration issues, but they weren't. I took individual tests and ran them in real time and watched the console to see what happened. The screens looked like what we'd expect to see, but we were still failing tests.

After an hour and a half of digging, I had to face a weird fact... someone had committed a change that fundamentally broke the application. Whoa! In the Continuous Integration world I live in now, that's not something you see every day. We gathered together to review the output, and as we looked over the details, one of the programmers said "oh, wow, I know what I did!" He then explained that he had made a change to the way we fetched cached elements, and that change was having a ripple effect on multiple subsystems. In short, it was a real and genuine issue, and it was so big an error that we were willing to disbelieve it before we were able to accept that, yep, this problem was totally real.

As a tester, sometimes I find myself getting tied up in minutiae, and minutiae become the modus operandi. We react when we expect to. When a major sinkhole in a program is delivered to us, we are more likely not to trust what we are seeing, because we believe such a thing is just not possible any longer. I'm here to tell you that it does happen, it happens more than I want to believe or admit, and I really shouldn't be so surprised that it happens.

If I can make any recommendation to testers out there, it's this: if you are faced with a catastrophic failure, take a little time to see if you can understand what it is and what causes it. Do your due diligence, of course. Make sure that you're not wasting other people's time, but also realize that, yes, even in our ever so interconnected and streamlined world, it is still possible to introduce a small change that has a monumentally big impact. It's more common than you might think, and very often, really, it's not your imagination. You've hit something big. Move accordingly.

TESTHEAD Turns Five Today

Tuesday, March 10, 2015 16:49 PM

With thanks to Tomasi Akimeta
for making me into "TESTHEAD" :)!!!
On March 10, 2010, I stepped forward with the boldest boast I had ever made up to that point. I started a blog about software testing, and in the process, I decided I would try to see if I could say something about the state of software testing, my role in it, and what I had learned through my career. Today, this blog turns five years old. In the USA, were this blog a person, we would say I'm just about done with pre-school, and in the fall, I'd be getting ready to go into Kindergarten ;).

So how has this blog changed over the past five years? For starters, it's been a wonderful learning ground for me. Notice how I said that: a learning ground for me. When I started it, I had intended it to be a teaching ground to others. Yeah, that didn't last long. I realized pretty quickly how little I actually knew, and how much I still had to learn (and am still learning). I've found it to be a great place to "learn in public" and to, in many ways, be a springboard for many opportunities. During its first two years, most of the writing that I did in any capacity showed up here. I could talk about anything I wanted to, so long as it loosely fit into a software testing narrative.

From there, I've been able to explore a bunch of different angles, try out initiatives, and write for other venues, the most recent being a permanent guest blog with IT Knowledge Exchange as one of the Uncharted Waters authors. While I have other places I am writing and posting articles, I still love the fact that I have this blog and that it still exists as an outlet for ideas that may be "not entirely ready for prime time", and I am appreciative of the fact that I have a readership that values that and allows me to experiment openly. Were it not for the many readers of this blog, along with their forwards, shares, retweets, plus-one's and mentions in their own posts, I wouldn't have near the readership I currently have, and I am indeed grateful to all of you who take the time to read what I write on TESTHEAD.

So what's the story behind the picture you see above? My friend Tomasi Akimeta offered to make me some fresh images that I could associate with my site. He asked me what my site represented to me, and how I hoped it would be remembered by others. I laughed and said that I hoped my site would be seen as a place where we could go beyond the expected when it comes to software testing. I'd like to champion thinking rather than received dogma, experimentation rather than following a path by rote, and champion the idea that there is intelligence that goes into testing. He laughed and said "so what you want to do is show that there's a thinking brain behind that crash test dummy?" He was referring to the favicon that's been part of the site for close to five years. I said "Yes, exactly!" He then came back a few weeks later and said "So, something like this?" After I had a good laugh and a smile at the ideas he had, I said "Yes, this is exactly what I would like to have represent this site; the emergence of the human and the brain behind the dummy!"

My thanks to Tomasi for helping me usher in my sixth year with style, and to celebrate the past five with reflection, amusement and a lot of gratitude for all those who regularly read this site. Here's to future days!

Taming Your E-Mail Dragon

Friday, March 06, 2015 19:18 PM

Over on Uncharted Waters, I wrote a post about out-of-control E-mail titled "Is Your Killer App Killing You?" That title may be a bit of hyperbole, but there is no question that E-mail can be exhausting, demoralizing, and just really hard to manage.

One thing that I think is really needed, and would make E-mail much more effective, is some way to extend messages to automatically start new processes. Some of this can be done at a fairly simple level. Most of the time, though, what ends up happening is that I get an email, or a string of emails, I copy the relevant details, and then I paste them somewhere else (calendar, a wiki, some document, Slack, a blog post, etc.). What is missing, and what I think would be extremely helpful, would be a way to register key applications with your email provider, whoever that may be, and then have key commands or right-click options that would let you take a message, choose what you want to do with it, and then move to that next action.

Some examples... if you get a message in which someone writes that they'd like to get together at 3:00 p.m., having the ability to schedule an appointment right there and lock the details of the message in place seems like it would be simple (note the choice of words; I said it seems it would be simple, I'm not saying it would be easy ;) ). If a message includes a dollar amount, it would be awesome to right-click or use a key command to record the transaction in my financial software or create an invoice (either would be a legitimate choice, I'd think).

Another option that I didn't mention in the original piece, but that I have found to be somewhat helpful, is to utilize tools that will allow you to aggregate messages that you can review later. For me, there are three levels of email detail that I find myself dealing with.

1. E-mail I genuinely could not care any less about, but doesn't rise to the level of outright SPAM.

I am unsentimental. Lots of stuff from sites I use regularly comes to my inbox, and I genuinely do not want to see it. My general habit is to delete it without even opening it. If you find yourself doing this over and over again, just unsubscribe and be done with it. If the site in question doesn't give you a clear option for that, then make rules that will delete those messages so you don't have to. So far, I've yet to find myself saying "aww, man, I really wish I'd seen that offer that I missed, even though I deleted the previous two hundred that landed in my inbox." Cut them loose and free your mind. It's easy :).

2. Emails with a personal connection that matter enough for me to review and consider, though I may well not actually do anything with them. Still, much of the time, I probably will.

These are the messages I let drop into my inbox, usually subject to various filter rules that sort them into the buckets I want to deal with; I want to see them, but not let them sit around.

3. That stuff that falls between #1 and #2.

For these messages, I am currently using an app called Unroll.me. It's a pretty basic tool, in that it creates a folder in my IMAP account (called Unroll.Me), and any email that I have decided to "roll up" and look at later goes into this app and this folder. There are some other features the app offers for each sender: Unsubscribe (if the API of the service is set up to do that), include in the roll up, or leave in your Inbox. Each day, I get a message that tells me what has landed in my roll up, and I can review each of them at that point in time.

I will note that this is not a perfect solution. The Unsubscribe works quite well, and the push to Inbox also has no problems. It's the Roll up step that requires a slight change in thinking. If you have hundreds of messages each day landing in the roll up, IMO, you're doing it wrong. The problem with having the roll up collect too many messages is that it becomes easy to put off, or deal with another day, which causes the backlog to grow ever larger, and in this case, out of sight definitely means out of mind. To get the best benefit, I'd suggest a daily read and a weekly manage, where you decide which items should be unsubscribed, which should remain in the roll up, and which should go straight to your inbox.

In any event, I know that E-mail can suck the joy out of a person, and frankly, that's just no way to live. If you find yourself buried in E-mail, check out the Uncharted Waters article, give Unroll.me a try, or better yet, sound off below with what you use to manage the beast that is out of control email. As I said in the original Uncharted Waters post, I am genuinely interested in ways to tame this monster, so let me know what you do.

All or Nothing, or Why Ask Then?

Thursday, March 05, 2015 23:36 PM

This is a bit of a rant, and I apologize to people who are going to read this and wonder what I am getting on about. Since I try to tie everything to software testing at some point and in some way, hopefully, this will be worth your time.

I have a drug/convenience store near my place of work. I go there semi-regularly to pick up things that I need or just plain want. I'm cyclical when it comes to certain things, but one of my regular purchases is chewing gum. It helps me blunt hunger, and it helps me get into flow when I write, code or test. I also tend to pick up little things here and there, because the store is less than 100 steps from my desk. As is often the case, I get certain deals. I also get asked to take surveys on their site. I do these from time to time because, hey, why not? Maybe my suggestions will help them.

Today, as I was walking out the door, I was given my receipt and the cashier posted the following on it.


Really, I get why they do this. If they can't score a five, then it doesn't matter; you weren't happy, end of story. Still, I can't help but look at this as a form of emotional blackmail. It screams "we need you to give us a five for everything, so please score us a five!" Hey, if I can do so, I will, but what if the experience was just shy of a five? What if I was in a hurry, and there were just a few too many people in the store? The experience was a true four, but hey, it was still pretty great. Now that experience is going to count for nothing. Does this encourage me to give more fives? No. What it does is tell me "I no longer have any interest in giving feedback", because unless my feedback says "Yay, we're great!", it's considered worthless. It's a way to collect kudos, and it discounts all other experiences.

As a software tester, I have often faced this attitude. We tend to be taught that bugs are absolute. If something isn't working right, then you need to file a bug. My question always comes down to "what does 'not working right' actually mean?" There are situations where the way a program behaves is not "perfect", but it's plenty "good enough" for what I need to do. Does it delight me in every way possible? No. Would a little tweak here and there be nice? Sure. By the logic above, either everything has to be 100% flawless (good luck with that), or the experience is fundamentally broken and a bug that needs to be addressed. The problem arises when we realize that "anything less than a five is a bug" means the vast majority of interactions with systems are bugs... does that make sense? Even if, at some ridiculously overburdened fundamental level, it is true, it means that the number of "bugs" in the system is so overwhelming that they will never get fixed. Additionally, if anything less than a five counts as zero, what faith do I have that areas I actually consider to be a one or a two, or even a zero, will actually be considered or addressed? The long-term tester and cynic in me knows the answer: they won't be looked at.

To the stores out there looking for honest feedback: begging for fives isn't going to get it. You will either get people who post fives because they want to be nice, or people who avoid the survey entirely. Something tells me this is not the outcome you are after, if quality of experience is really what you want. Again, the cynic in me thinks this is just a way to put numbers to how many people feel you are awesome, and to give little to no attention to the other 80% of responses. I hope I'm wrong.

-----

ETA: Michael Bolton points out below that I made a faulty assumption with my closing remark. It was meant as a quip, and not to be taken literally, but he's absolutely right. I anchored on the five, and it made me mentally consider an even distribution of the other four numbers. There absolutely is nothing that says that is the case, it's an assumption I jumped to specifically to make a point. Thanks for the comment, Michael :).

Book Review: Ruby Wizardry

Thursday, March 05, 2015 19:45 PM



Continuing with the “Humble Brainiac Book Bundle” that was offered by NoStarch Press and Humble Bundle, I am sharing books that my daughter and I are exploring as we take on a project of learning how to write code and test software. Part of that process has been to spend time in the "code and see" environment that is Codecademy. If you have done any part of the Ruby track on Codecademy, you are familiar with Eric Weinstein’s work, as he’s the one who wrote and does updates to the Ruby section (as well as the Python, JavaScript, HTML/CSS and PHP modules).

With "Ruby Wizardry", Eric takes his coding skills and puts them into the sphere of helping kids get excited about coding, and in particular, excited about coding in Ruby. Of all the languages my daughter and I are covering, Ruby is the one I’ve had the most familiarity with, as well as the most extended interaction, so I was excited to get into this book and see how it would measure up, both as a primer for kids and for adults. So how does Ruby Wizardry do on that front?

As you might guess from the title and the cover, this book is aimed squarely at kids, and it covers the topics by telling a story of two children in a magical land dealing with a King and a Court that is not entirely what it seems, and next to nothing seems to work. To remedy this, the children of the story have to learn some magic to help make the kingdom work properly. What magic, you ask? Why, the Ruby kind, of course :)!

Chapter 1 basically sets the stage and reminds us why we bought this book in the first place. It introduces us to the language by showing us how to get it ("all adults on deck!”), how to install it, make sure it’s running and write our first "Hello World” program. It also introduces us to irb, our command interpreter, and sets the stage for further creative bouts of genius.

Chapter 2 looks at "The King's Strings" and handles, well, strings. Strings have methods, which we call by appending names like length and reverse with a dot ("string".length or 'string'.reverse). Variables are also covered, and the chapter shows that the syntax that works on raw strings also works on assigned variables.
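A quick taste of what that looks like (my own example, not the book's):

    greeting = "ahoy, matey"
    puts greeting.length     # => 11
    puts greeting.reverse    # => "yetam ,yoha"
    name = greeting          # the same methods work on a variable
    puts name.length         # => 11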

Chapter 3 is all about Pipe Dreams, or put more typically, flow control, with a healthy dose of booleans, string interpolation, and the use of structures like if, elsif, else and unless to make our Ruby scripts act one way or another based on values in variables and how those values relate to other things (such as being equal, not equal, combined with other conditions or in spite of them, etc.).
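A small example of my own to show the shape of it:

    weather = 'rainy'
    if weather == 'sunny'
      puts 'To the deck!'
    elsif weather == 'rainy'
      puts 'Below deck, everyone.'
    else
      puts 'Await further orders.'
    end
    puts 'Hoist the sails!' unless weather == 'stormy'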

Chapter 4 introduces us to a monorail, which is a train that travels in an endless loop. Likewise, the chapter is about looping constructs and how we can construct loops (using while as long as something is true, until up to the point something becomes false, and for to iterate over things like arrays) to run important processes and repeat them as many times as we want or need. It also warns us not to write our code in such a way as to create an infinite loop, one that will just keep running, like our monorail if it never stops.
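All three loop styles in a quick sketch of mine:

    laps = 0
    while laps < 3               # repeats as long as the condition is true
      laps += 1
      puts "Monorail lap #{laps}"
    end
    laps += 1 until laps == 5    # repeats until the condition becomes true
    for station in ['market', 'castle', 'harbor']
      puts "Now arriving at the #{station}."
    end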

Chapter 5 introduces us to the hash, which is a collection where each item has two parts, a key and a value, as well as the ability to add and remove array items with push, pop, shift and unshift. Additionally, we learn how to play with ranges and other methods that give us insight into how elements in both arrays and hashes can be accessed and displayed.
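In practice, that looks something like this (my example; the names are made up):

    crew = ['Ruben', 'Scarlet']
    crew.push('Captain')      # add to the end
    crew.unshift('Cook')      # add to the front
    crew.pop                  # remove from the end
    crew.shift                # remove from the front
    ranks = { 'Scarlet' => 'first mate', 'Ruben' => 'navigator' }
    puts ranks['Scarlet']     # => "first mate"
    range = (1..4)
    puts range.to_a.inspect   # => [1, 2, 3, 4]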

Chapter 6 goes into talking about symbols (which is just another word for names), and the notation needed to access them. Generally, the content of something is a string; the name of something is a symbol. Utilizing methods allows us to change values, convert symbols to strings, and perform other operations that let us manipulate data and variables.
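For example:

    # The content of something is a string; the name of something is a symbol.
    heading = :north               # a symbol, written with a leading colon
    puts heading.to_s              # symbol to string  => "north"
    puts 'south'.to_sym.inspect    # string to symbol  => :south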

Chapter 7 goes into how to define and create our own methods, including the use of splat (*) parameters, which allow us to accept any number of arguments. We can also define methods that take blocks by using "yield".
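A tiny sketch of my own showing both ideas:

    def roll_call(*names)          # the splat accepts any number of arguments
      names.each { |name| puts "#{name} is aboard!" }
    end
    roll_call('Ruben', 'Scarlet', 'Captain')

    def with_fanfare               # a method that takes a block, via yield
      puts 'Ta-da!'
      yield
    end
    with_fanfare { puts 'The kingdom is saved.' }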

Chapter 8 extends us into objects, and how we can create and use them. We also learn about object IDs to tell objects apart, as well as classes, which allow us to make objects with similar attributes. We also see how we can have variables with different levels of focus (local, global, instance and class) and how we can tell each apart (variable, $variable, @variable and @@variable, respectively).
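Here's a compact example of my own with all four variable types at once (class and names are invented):

    $kingdom = 'the realm'         # $  global variable

    class Ship
      @@count = 0                  # @@ class variable, shared by every Ship
      def initialize(name)
        @name = name               # @  instance variable, one per Ship
        @@count += 1
      end
      def describe
        speed = 'swift'            # no sigil: a local variable
        puts "#{@name}, a #{speed} ship of #{$kingdom} (#{@@count} afloat)"
      end
    end

    Ship.new('Purple Porpoise').describe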

Chapter 9 focuses on inheritance, which is how Ruby classes can share information with each other. This inheritance option allows us to create subclasses and child classes, and inherit attributes from parent/superclasses.
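A minimal illustration (again, my example rather than the book's):

    class Vessel
      def float
        puts 'Bobbing along nicely.'
      end
    end

    class Submarine < Vessel       # Submarine is a child class of Vessel
      def dive
        puts 'Down we go!'
      end
    end

    sub = Submarine.new
    sub.float                      # inherited from the parent class
    sub.dive                       # defined on the child class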

Chapter 10 shows us a horse of a different color, or in this case, modules, which are a bit like classes, except they can't be instantiated with the new method. Modules are somewhat like storage containers that let us organize code when methods, objects or classes won't do what we want. By using include or extend, we can add a module's behavior to instances or classes.
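For instance (module and class names invented):

    module Navigable               # a container for behavior, not new-able
      def set_course(heading)
        puts "Setting course: #{heading}"
      end
    end

    class Rowboat
      include Navigable            # instances of Rowboat gain set_course
    end

    Rowboat.new.set_course('due east')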

Chapter 11 shows us that sometimes the second time's the charm, or more specifically, we start getting the hang of refactoring: using or (||=) operators to assign variables, using ternary operators for short actions, using case statements instead of stacked if/elsif/else statements, returning boolean values, and most important, removing duplicate code and keeping methods small and manageable.
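A quick sample of my own with those idioms side by side:

    name = nil
    name ||= 'stranger'            # or-equals: assign only if unset
    mood = name == 'stranger' ? 'wary' : 'cheerful'  # ternary for short choices
    case mood                      # case instead of stacked if/elsif/else
    when 'wary'     then puts 'Who goes there?'
    when 'cheerful' then puts 'Welcome back!'
    else                 puts 'Hmm.'
    end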

Chapter 12 shows us the nitty-gritty of dealing with files. Opening, reading from, writing to, closing and deleting files are all critical if we want to actually save the work we do and the changes we make.
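All of those operations fit in a few lines:

    File.open('log.txt', 'w') { |f| f.puts 'Day 1: set sail.' }    # write
    File.open('log.txt', 'a') { |f| f.puts 'Day 2: saw a whale.' } # append
    puts File.read('log.txt')                                      # read it back
    File.delete('log.txt')                                         # clean up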

Chapter 13 encourages us to follow the WEBrick Road, or how we can use Ruby to read data from the Internet, write it to files, or send it back to Internet servers using the open-uri gem, which is a set of files that we can use to make writing programs easier (and there are lots of Ruby gems out there :) ).
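A bare-bones fetch-and-save, noting that URI.open is the modern spelling (on Rubies before 2.5, a plain open(...) call does the same job); the URL is just a placeholder:

    require 'open-uri'

    page = URI.open('https://example.com').read
    File.write('example.html', page)
    puts "Saved #{page.length} bytes."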

Chapter 14 gives some hints as to where you, dear reader, might want to go next. This section includes a list of books, online tutorials, and podcasts, including one of my favorites, the Ruby Rogues podcast. It also includes interactive resources such as Codecademy, Code School, and Ruby Koans, and a quick list of additional topics.

The book ends with two appendices that talk about how to install Ruby on a Mac or a Linux workstation and some of the issues you might encounter in the process.

Bottom Line:

For a "kids book", there's a lot of meat in here, and to get through all of the examples, you'll need to take some time and see how everything fits together. The story is engaging and provides the context needed for the examples to make sense, and each section provides a review so that you can really see if you get the ideas. While aimed for kids, adults would be well suited to follow along as well. Who knows, some wizarding kids might teach you a thing or three about Ruby, and frankly, that's not a bad deal at all!

Less Eeyore, More Spock

Thursday, February 26, 2015 21:30 PM

Today's entry is inspired by a post that appeared on the QATestLab blog called "Why Are Software Testers Disliked By Others?" I posted an answer to the original blog, but that answer started me thinking... why does this perception exist? Why do people feel this way about software testing or software testers?

There are a lot of jokes and bits of commentary that go with being a software tester, one of the most common being "if no one is talking about you, you're doing a great job". Historically, Quality Assurance and Software Testing were considered somewhat like plumbing. If we do our jobs, and we find the issues and they get resolved, it's like the plumbing that drains our sinks, tubs, showers and toilets. Generally speaking, we give the pipes in our walls next to no thought; that is, until something goes wrong. Once there's a backup, or a rupture, or a leak, suddenly plumbing becomes tremendously present in our thoughts, and usually in a negative way. We have to get this fixed, and now! Likewise, software testers are rarely considered or thought about when things are going smoothly, but we are often looked at when something bad has happened (read this as "an issue has made its way out into the wild, been discovered, and been complained about").

This long time perception has built some defensiveness for software testers and software testing, to the point where many testers feel that they are the "guardians of quality". We are the ones that have to find these horrible problems, and if we don't, it's our heads! It sounds melodramatic, but yes, I've lived this. I've been the person called out on the carpet for missing something fundamental and obvious... except for the fact that it wasn't fundamental or obvious two days before, and interestingly, no one else thought to look for it, either.

We can be forgiven, perhaps, for bringing on ourselves what I like to call an "Eeyore Complex". For those not familiar, Eeyore is a character created by A.A. Milne, and figures in the many stories of "Winnie the Pooh". Eeyore is the perpetual black raincloud, the one who finds the bad in everything. Morose, depressing, and in many ways, cute and amusing from a distance. We love Eeyore because we all have a friend that reminds us of him.

The problem is when we find ourselves actually being Eeyore, and for many years, software testing deliberately put itself in that role. We are the eternal pessimist. The product is broken, and we have to find out why. Please note, I am not actually disagreeing with this; the software is broken. All software is, at a fundamental level. It actually is our job to find out where, and advocate that it be fixed. However, I have to say that this is where the similarities must end. Finding issues and reporting/advocating for them is not in itself a negative behavior, but it will be seen as such if we are the ones who present it that way.

Instead, I'd like to suggest we model ourselves after a different figure: Spock. Yes, the Spock of Star Trek. Why? Spock is logical. He is, relatively speaking, emotionless, or at least has tremendous control over his emotions (he's half human, so he's not devoid of them). Think about any Star Trek episode where Spock evaluates a situation. He examines what he observes, makes a hypothesis about it, tests the hypothesis to see if it makes sense, and then shares the results. Spock is anything but a black raincloud. He's not a downer. In fact, he just "is". He presents findings and data, and then lets others do with that information what they will. Software testers, that is exactly what we do. Unless we have ship/no-ship decision-making authority, or the ability to change the code to fix what is broken (make no mistake, some of us do), presenting our findings dispassionately is exactly what we need to be doing.

If people disagree with our findings, that is up to them. We do what we can to make our case and convince them of the validity of our concerns, but ultimately, the decision to move forward or not is not ours; it belongs to those with the authority to make it. In my work life, discovering this, and actually living by it, has made a world of difference in how I am perceived by my engineering teams, and in my interactions with them. It ceases to be an emotional tug of war. It's not a matter of being liked or disliked. Instead it is a matter of providing information that is either actionable or not. How it is used is ultimately not important, so long as I do my best to make sure it is surfaced and available.

At the end of the day, software testers have work to do that is important, and it ultimately needs to be done. We provide visibility into risks and issues. We find ways in which workflows do not work as intended. We notice points that could be painful to our users. None of this is about us as people, or our interactions with others as people. It's about looking at a situation with clear understanding, knowing the objectives we need to meet, and determining if we can actually meet those objectives effectively. Eeyore doesn't know how to do that. Spock does. If you are struggling with the idea that you may not be appreciated, understood, or liked by your colleagues, my recommendation is "less Eeyore, more Spock".

If You Have $20, You Can Have a Standing Desk

Friday, February 20, 2015 18:35 PM

I've had an on-again, off-again involvement with standing desks over the years. I've made cheap options, and had expensive options purchased for me. I've even made a standing desk out of a treadmill, but it was best suited to passive activities; it's great for reading or watching videos, not so great for actually typing, and it limited what I could put on it (no multi-monitor setup). Additionally, I haven't wanted to build a system that would be too permanent, since I like the option of being flexible and moving things around.

I'm back to a standing desk solution for both home and work because I'm once again dealing with recurring back pain. No question, back pain is the best motivation for getting back to standing while working. It's also a nice way to kick-start one's focus and burn a few more calories each day. I'll give a shout to Ben Greenfield, otherwise known as "Get Fit Guy" at quickanddirtytips.com, for a piece of information that made me smile a bit. He recommends a "sit only to eat" philosophy. In other words, with the exception of having to drive or fly, my goal should be to sit only when I eat. All other times, I should aim to stand, kneel, lie down, or do anything but sit slumped in a chair. The piece of data that intrigued me was that, by doing this, I'd burn 100 additional calories each day. In a week's time, that's the equivalent of running a 10K.

For those who have access to an IKEA store, or who can order online, the IKEA LACK square side table, in black or white, can currently be purchased for $7.99 each. I chose to do exactly that, and thus, for less than $20 including tax, I have set up a very functional standing desk at work.




The upturned wastebasket is an essential piece of this arrangement. It allows me to shift my weight from leg to leg, giving different muscles in my back some work and resting other areas from time to time. I've also set up a similar arrangement at home, but added an EKBY ALEX shelf (I'll update with a picture when I get home :) ). This gives me a little extra height and some additional storage for small items. The true beauty of this system is that it can be broken down quickly and moved anywhere with very little effort, and it is much less expensive than comparable systems I have seen. If you'd like something a little more customized, a pair of shelf brackets and a matching 48" shelf will make a keyboard tray, though for me personally, the table height works perfectly.

What I find most beneficial about a standing desk, outside of the relief from back pain, is that it is incredibly focusing. When I sit down, it's easy to slip into passive moments and lose track of time reading or idly browsing. When I stand, there is no such thing as "passive time". It really helps me get into the zone and flow of what I need to do. For those looking to do something similar, seriously, this is a great and very inexpensive way to set up a standing desk.

Packt Publishing says "FREE LEARNING - HELP YOURSELF!"

Monday, February 16, 2015 21:39 PM

Something's in the air, I think. Maybe it's a new year of possibilities, maybe it's the nature of technical advancement and growth that happens each year, but there seems to be a groundswell of opportunities for programmers, testers and DIYers to learn new stuff all the time, often very close to free, and in many cases, entirely free.

Packt Publishing recently did this with their "24 gifts for Christmas" promotion, in which they offered a free ebook each day for a limited time, and they are back at it again with their "FREE LEARNING - HELP YOURSELF!" initiative, again available for a limited time.




"Packt is here to help you develop, and that's why we're offering you and other IT professionals 18 days of Free Learning -- from today (February 16, 2015) until March 5, 2015, you can claim a free eBook every day here.

Each eBook will only be available for free for 24 hours, so make sure you come back every day to grab your Free Learning fix! We'll be featuring a diverse range of titles from our extensive library, so don't miss out on this amazing opportunity to develop some new skills and try out new tech.

Happy learning!

The Packt Team"


Starting today, they are offering a title related to developing in Drupal. Not something you're currently interested in? That's OK, check back tomorrow; there may be something else that interests you.

Packt Publishing and a number of other vendors have been super helpful in giving me free materials throughout the past five years, and in a sense they contribute greatly to the overall content of the TESTHEAD blog, so I'm happy to help spread the word about these promotions when they run them. If you want to take advantage of this one, check in each day over at Packt and see if they have something interesting for you. I certainly will be!



Book Review: Build Your Own Website

Thursday, February 12, 2015 05:55 AM

With the "Humble Braniac Book Bundle" still underway, I felt it only fitting to keep the trend going and review books that are geared towards kids and those who want to have a quick introduction to the worlds of programming, web design and engineering. My daughter and I are exploring a lot of these titles at the moment, and one that caught my eye was "Build Your Own Website", primarily because it promised to be "A Comic Guide to HTML, CSS and WordPress", and that indeed it is.

I should probably mention that "a comic guide" does not mean comic in the funny sense (though some of it is), but in the illustrated, graphic-novel sense. For those familiar with NoStarch Press and their "The Manga Guide to..." series of books, Nate Cooper's writing and Kim Gee's artwork fit very well in that space. What's more, "Build Your Own Website" follows the same template that "The Manga Guide to..." books do, in that each section starts with an illustrated graphic-novel treatment of its topics, and then follows with a more in-depth prose treatment of each area.

So what's in store for the reader who wants to start on a mission to make their own site from scratch?

Chapter 1 starts with our protagonist Kim looking forward to her first web design class, and shows that inspired and excited first-timer's desire to get in and do something. It's followed by an explanation of the tools that need to be downloaded to complete the examples in the book. All of the exercises and examples can be done for free; all you need is a web browser or two, a text editor, an FTP client (the book recommends FileZilla; it's the one I use as well), and a free WordPress account from http://www.wordpress.com.


Chapter 2 talks about The Trouble with HTML, and how Kim and her dog Tofu meet up with the Web Guru, who introduces them to the basics of HTML: paths and naming conventions, loading pictures and following links, the hierarchy of files and directories that make up a basic web site, and a dragon called "404". The second section goes into detail about all of these, including document structure, HEAD and BODY tags and the items that go in each, embedding images, and a basic breakdown of the most common HTML tags.


Chapter 3 shows us how Kim Makes Things Look Great with CSS. Well, Glinda, the Good Witch of CSS, helps Kim do that (graphic novel for kids, gang. Work with me, here ;) ). Glinda shows Kim the basics of CSS, including classes and IDs, and inline styles and external stylesheets that can be referenced together, effectively creating a "cascade of styles" (CSS == "Cascading Style Sheets"). The chapter also discusses using divs to create separate sections and blocks that CSS can be applied to, and ends with commonly used CSS properties.


Chapter 4 is where Kim Arrives in WordPress City, and where the examples focus on, of course, WordPress as a composition platform. Kim learns that WordPress is a Content Management System (CMS), and learns the conventions of creating both blogs and websites. She is introduced to the Dashboard, creating posts, using the Visual editor, structuring her site, using Categories and Tags, using the Media Library to store media items, and choosing the overall Theme for the site. Each of these is covered in greater detail, with examples, in the second prose part.


Chapter 5 takes us to Customizing WordPress, and the myriad options that Kim can use to make her site look the way she wants it to. She is introduced to the Appearance panel, the difference between free and premium Themes, plugins that add features such as Buy buttons or sharing posts on social media, Widgets that perform special functions and can be repeated across sections or pages, changing the navigation options to get to your pages quickly, and how each element of WordPress is built on HTML and CSS (along with things like JavaScript and PHP).


Chapter 6 brings us to The Big Launch, where Kim and Tofu navigate the realities of hosting and how to set it up so they can display their finished site to the world. There are lots of options, and most cost some money, but not very much (plans ranging from $5-$10 a month are readily available). Registering a domain is covered, and many hosts offer an option to install WordPress and use it there.


Bottom Line:
"Build Your Own Website" starts with some basic HTML and CSS, and then spends the bulk of the second half of the book introducing you, the user, to WordPress. For those looking to see the nuts and bolts of making a web site from scratch, including making the navigation elements, more involved interactions, and other esoteric features of web sites outside of the CMS system that WordPress provides, you will be disappointed. Having said that, if the goal is to get a site up and running and using a well designed and quick to use interface, WordPress is a pretty good system, with lots of flexibility and ways to make the basic formatting of a site nearly automatic. Younger web development acolytes can get in and feel what it's like to design and manage a site. To that end, "Build Your Own Website" does a very good job, and does it in an entertaining and engaging manner.

I would recommend this as a good companion to "Lauren Ipsum", which adds some high-level computer science context. Both books show that young programmers can get in, see what is happening, and become curious about what comes next. From there, moving on to books that deal with JavaScript or site-building frameworks will be less of a jump, because we can look back at WordPress and its CMS and say "oh yeah, that looks familiar". "Build Your Own Website" sets up a nice foundation, and makes a good jumping-off point for those further explorations.

Writing the "Michael Larsen" Way?

Monday, February 09, 2015 21:24 PM

First off, I have to say "thank you" to my friend Jokin Aspiazu for inspiring this post today. He wrote a piece about writing a blog from your own experiences and how he found inspiration from a number of people, including me.

The statement that prompted today's blog post was the following. In it, Jokin is explaining different styles and approaches to writing blog posts, and the ways those posts get written:

"The Michael Larsen way. He wakes up early, so he has time to write before his daily routines start. The thing is that if you write late at night, when the day is gone, you are going to be tired in body and mind. And your writing won’t be as fluid as you would like, and the next day, when you read your text, you will hear “I’m so tired…” in the background."



I have to say that this is mostly correct. If given a choice, I much prefer writing in the early morning over any other time of day. I think Jokin was referencing a comment I made to him at EuroSTAR when we were talking about the ideal times to do certain things, and that for me, that sweet spot is early in the morning, and by early, I mean any time before 6:00 a.m. The quote, as I remember saying it (because I do say this a lot ;) ), is that "I can get a lot done while everyone else I know and relate with is asleep".

Having said that, there are a few other things that I do, and I find them helpful as well.

The Voice Recorder on my Phone is a Really Good Friend

Oh, Voice Recorder, how kind you are to me. You make it possible for me to capture any insane and rambling thought I might have, and not lose it to the ether. I should also add that I am grateful for the headphone-and-mic combinations now available with smartphones, because they make me look like less of a lunatic while I am walking down the street.

Some great opportunities to let loose with my thoughts come during the legs of my daily commute. I park my car about a half mile from the train station I use, partly because I'm cheap and don't want to pay for a parking permit, but also because it gives me a bit of a walk each day. That walk is often a golden time to think out loud (something I do regularly), and the voice recorder lets me capture all of it. Also, it gives people walking past me the impression I'm just talking to someone on the phone, so I'm not looked at as though I'm crazy ;).

Often, the ramblings don't amount to much, and those I tend to delete at the end of each week, but every so often, I'll find something that sticks, or that gives me a reason to say "OK, I want to explore that some more". Often that will result in something I want to write about.

Every Book Has a Story to Tell

Generally speaking, I love to read. Though my book-buying habits have changed with Kindle apps and e-books, I still love having a lot of choices to read from. My smartphone has become my secondary reading device, but my primary one is my laptop. I often find myself reading books and grabbing passages that I find interesting, and I pull them into a Notes app that I use. Sometimes they sit there for months, but every once in awhile, something will happen, or I'll see something that jumps out at me, and I'll say "hey, that fits into a problem I'm dealing with".

Very often, those discoveries are not limited to technical books or books about testing. I'm a fan of history, and I love reading about interesting things that have happened in the past, both distant and recent. I often borrow the Dan Carlin line that "history has all but ruined fiction for me", and that shows up in the things that I read. It's rare that you will find me reading fiction, though I do from time to time. Most of the time, it's nonfiction from a historical, technological, or business perspective. Those lessons often make their way into my posts.

"That Reminds me of a Testing Story..."

I owe this phrase to the Cartoon Tester, aka Andy Glover. He used it as a comical way of showing the different types of testers out there, but it illustrates something I frequently try to do. Even in the mundane aspects of my life, I find things that inform my testing, and that also inform my view of the world, which in turn informs my testing. Something as simple as a way to mow the lawn, or dealing with pests in the yard, or managing the delicate balance of life in my fish tanks, or the daily dilemmas my kids face, or the often interesting and noteworthy events that happen in my role as a Scout Leader: all of these shape what often becomes an analogy to software testing. They may be helpful to others, they may not, but overall, I remind myself that they are helpful to me. Whether they relate to ideas, events, or interactions with individuals, each of them informs what I do, and gives me ideas for ways I can do it better... maybe :).

Again, I want to thank Jokin for helping me consider this, and for giving me a chance to respond a little more in depth. While everything Jokin says is accurate, I hope this gives you a little more of a glimpse into how I think and work, and how I like to gather material that ultimately makes its way into my blog. If these ideas help you write better, more frequently, or at least think a little differently about how you write, awesome. If not, well, it still felt good to give some additional thoughts as to what writing "the Michael Larsen way" actually means, at least to me.

Book Review: Python for Kids

Friday, February 06, 2015 22:12 PM

As I have been working with my daughter to learn programming, we have been focusing on Python. We chose Python primarily because it was something I felt we could both come at from a similar frame of reference. I had worked with Python, but only in a limited capacity, and Amber had not really worked with any programming language. By working on Python together, we could be somewhat closely matched in what we were learning.

NoStarch Press has a number of books that are ideally suited for younger programmers, and to that end we both decided to look at the books together. We will be doing three "for Kids" book reviews during February, provided we can get through all of them. All journeys have to start somewhere, though, and this journey starts with Python for Kids. I should also mention that, for a limited time, you can get Python for Kids as part of the NoStarch and Humble Bundle "Humble Brainiac Book Bundle" offer, which comes with a number of additional books.

Python for Kids is written in a way that lets the user, whether they are kids or adults, readily get into the material and see how the examples work and try them out for themselves.

Chapter 1 starts out with installing and using Python 3, so right away Amber and I had to adjust to new syntax (Codecademy's Python modules are still based on Python 2). Be aware that there are some subtle differences in Python 3, one of the main ones being the parentheses required when using the print("text goes here") function (yeah, like that ;) ).
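For instance (my own quick example, not one from the book), here's what that difference looks like:

    # Python 3: print is a built-in function, so the parentheses are required
    print("text goes here")

    # In Python 2, the same thing could be written as a statement:
    #     print "text goes here"
    # That form is a SyntaxError in Python 3.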

Chapter 2 introduces the user to calculating values with simple equations and operators, using parentheses to ensure the desired order of operations, and creating and using variables to store values for use in those calculations.
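A tiny sketch of my own to illustrate (these aren't the book's examples):

    # Operators follow the usual order of operations...
    print(2 + 3 * 4)      # 14

    # ...and parentheses change that order when we need them to
    print((2 + 3) * 4)    # 20

    # Variables store values for use in later calculations
    toys = 5
    price = 3
    print(toys * price)   # 15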

Chapter 3 introduces strings, lists, tuples, and maps, each of which lets the user store and manipulate data in a variety of ways. Each approach has rules for how items are represented and moved around, ranging from simple strings, which are just quoted text, up to maps, which have keys and values that can be called as needed.
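Here's a rough sketch of my own showing each of these in action (again, not the book's examples):

    fruit = "apple"                       # a string: quoted text
    colors = ["red", "green", "blue"]     # a list: ordered and changeable
    point = (10, 20)                      # a tuple: ordered but unchangeable
    ages = {"Kim": 12, "Sam": 14}         # a map (a Python dict): keys and values

    print(fruit)
    print(colors[0])      # items come out by position...
    print(ages["Kim"])    # ...or, for maps, by key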

Chapter 4 introduces the user to drawing, and in this case, that involves a module called turtle (which in turn uses a toolkit called tkinter). Using this module, we can draw with the turtle's movements: turning left and right, moving forward and backward, and starting and stopping lines with the pen's up and down commands.
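A minimal example of my own of the kind of turtle session the chapter walks through:

    import turtle

    t = turtle.Pen()    # create a turtle to draw with
    t.forward(100)      # move forward, drawing a line
    t.left(90)          # turn 90 degrees to the left
    t.forward(100)
    t.up()              # lift the pen: movement stops drawing
    t.forward(50)
    t.down()            # lower the pen to draw again
    t.forward(50)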

Chapter 5 focuses on asking questions and responding based on the answers the program gets. For experienced programmers, this is about if statements and their related keywords (elif, else), how conditions can be combined (using and/or), and how values can be converted (using int, str, float and None).
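Something like this little example of mine captures the ideas:

    age = int(input("How old are you? "))   # int() converts the typed string

    if age >= 13 and age <= 19:              # 'and' combines two conditions
        print("You are a teenager")
    elif age < 13:
        print("You are a kid")
    else:
        print("You are an adult")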

Chapter 6 talks about “Going Loopy” which, as you might guess, deals with handling loop conditions and repetitive tasks. for and while loops are covered, and break conditions are shown to, well, break out of loops.
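Another quick sketch of my own, rather than the book's:

    # A for loop repeats over a known range of values
    for number in range(1, 6):
        print(number)           # prints 1 through 5

    # A while loop repeats until its condition changes,
    # and break jumps out of a loop early
    count = 0
    while count < 100:
        count = count + 1
        if count == 3:
            break
    print(count)                # 3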

Chapter 7 talks about ways to recycle and reuse code by defining and using functions and importing modules. Variable scope is also covered, so we can see where values are assigned, used, and passed back and forth.
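My own toy example of both ideas together:

    import math                          # import pulls in a module

    def circle_area(radius):             # def creates a reusable function
        area = math.pi * radius ** 2     # 'area' is local: it only exists in here
        return area

    print(circle_area(5))                # call the function with an argument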

Chapter 8 focuses on the "things" in Python, otherwise known as classes and objects. This chapter covers the details of classes and how they can be used to create instances, how objects are created, how functions are called to work with objects, and how object variables are assigned to save values in those objects.
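A bare-bones example of my own (the book builds much more interesting classes):

    class Animal:
        def __init__(self, name):    # runs when an instance is created
            self.name = name         # an object variable saves a value

        def speak(self):             # a function that works with the object
            print(self.name, "makes a sound")

    pet = Animal("Rex")              # create an instance of the class
    pet.speak()                      # Rex makes a sound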

Chapter 9 covers the various built-in functions available in Python, those we can use directly in any program without importing anything (functions like abs, bool, print, float, int, len, etc.), as well as how to open and manipulate files.
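For a flavor of this (my example, not the book's):

    # Built-in functions work without importing anything
    print(abs(-10))       # 10
    print(len("hello"))   # 5
    print(float(3))       # 3.0

    # open() is built in too; write to a file, then read it back
    f = open("test.txt", "w")
    f.write("this is a test")
    f.close()

    f = open("test.txt")
    print(f.read())       # this is a test
    f.close()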

Chapter 10 covers a variety of helpful modules, such as copy, keyword, and random (home of the shuffle function), and the sys module that lets the user control the Python shell itself, reading input with stdin and writing output with stdout. It also covers telling time with the time module, and using pickle to save user information in files (yes, there's a module called pickle ;) ).
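A few of these in action, in a quick sketch of my own:

    import random
    import time
    import pickle

    print(random.randint(1, 6))    # a random number, like a die roll
    print(time.asctime())          # the current date and time as a string

    # pickle saves a Python object to a file and loads it back later
    scores = {"bounce": 100, "stickman": 250}
    with open("save.dat", "wb") as f:
        pickle.dump(scores, f)
    with open("save.dat", "rb") as f:
        print(pickle.load(f))      # {'bounce': 100, 'stickman': 250}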

Chapter 11 revisits turtle, and how we can use it to draw a variety of items, including basic geometric shapes, changing the pen color, filling shapes, and reusing steps in functions to combine drawing shapes and using multiple colors in one function call.

Chapter 12 covers using tkinter to make "better graphics", and my oh my does this take me back. I used Tcl/Tk back in the '90s for a number of things, and the syntax for tkinter is very familiar. Simple shapes, event bindings so objects move when keys are pressed, and using identifying numbers to move shapes and change colors are all covered here.
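A tiny taste, using my own example rather than the book's:

    from tkinter import Tk, Canvas   # Python 3 spells the module 'tkinter'

    tk = Tk()
    canvas = Canvas(tk, width=400, height=400)
    canvas.pack()

    # the create_* functions return an identifying number for each shape
    ball = canvas.create_oval(10, 10, 60, 60, fill="red")
    canvas.move(ball, 50, 50)        # use that id to move the shape

    tk.mainloop()                    # hand control to the window's event loop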

Part II gives the user a chance to practice the programming chops they developed in Part I by making a ball-bouncing game called, appropriately enough, Bounce!

Chapter 13 starts out with the tkinter module, with which we create a class for a ball and move it around the screen. Coordinates can be set so that the ball bounces off the top and sides of the screen, with a randomized starting direction so the ball's movement is dynamic.

Chapter 14 continues with creating a class for the paddle, and using coordinates to see if the ball has hit the paddle. Event bindings tie the left and right arrow keys to the paddle's controls, and if the player misses the ball, the game ends when the ball hits the bottom of the screen.

Part III continues the ideas so far shown with “Mr. Stick Man Races for the Exit”.

Chapter 15 shows how to make graphical images of, you guessed it, a stick man (using GIMP to make the actual images), and how to make the backgrounds of the images transparent so they won't cover up other things.

Chapter 16 covers creating a Game class and adding a background image. Using functions called within_x and within_y, we can determine if items have collided with other items. We also discover sprites, and a subclass called PlatformSprite, to draw the platforms on the screen.

Chapter 17 walks through the details for taking the pictures we created for Mr. Stick Man back in Chapter 15 and setting up options for the figure to move, turn and jump.

Chapter 18 finishes off the Stick Man game, and shows that putting together a working game is not trivial, but it's not a huge endeavor either. We focus on using the images we made for Mr. Stick Man so he appears to be running, using collision detection to tell if he hits the sides of the canvas or another object, and having Mr. Stick Man drop if he runs off the edge of a platform. We also make a door that Mr. Stick Man can go through, ending the game.

The book also provides an Afterword that discusses some additional options for basic game development (Alice, Scratch, and Unity3D, as well as PyGame, a dedicated library for making games), plus a discussion of other programming languages with a super quick look at what they do, where to get them, and what it takes to make a "Hello, World!" program in each. The Appendix highlights Python's keywords, and the glossary defines a number of words kids might not hear in their everyday interactions.

Bottom Line:

Python for Kids clocks in at a little over 300 pages, but it won't feel like it as you work through it. There are numerous small projects and samples to examine, and all of the solutions to the puzzles and challenges are available at http://python-for-kids.com/. While this book is geared towards kids, there's plenty here to keep adults engaged and focused. Programming books are often plagued by the "implied knowledge" trap, where small projects give way to bigger ones without a clear explanation of how the jump was made. Python for Kids does a very good job of avoiding that problem. With kids as the primary audience, the author takes their time explaining key areas, and yes, for programmers with a little work under their belts, this is probably going to seem remedial, but as an introduction to Python, I think it's first-rate.

If you are looking to give your kids a kick-start in learning how to program, if Python seems like an interesting language to start with, and if you'd like a couple of tangible projects to play with and augment as you see fit, then Python for Kids should be on your short list of programming books... provided having fun while learning is a priority.

How I am Overcoming "Expectational Debt"

Thursday, February 05, 2015 21:59 PM

A phrase I have personally grown to love, and at times dread, is one that Merlin Mann and Dan Benjamin coined in 2011 on their Back to Work podcast. That phrase is "expectational debt". Put simply, it's the act of "writing checks your person can't cash" (there's a more colorful metaphor that is often used for this, but I think you understand what I mean ;) ).

Expectational debt is tricky, because it is very easy to get into, and it is almost entirely emotional in nature. Financial debt is when you borrow money to purchase something you want or need, and it invariably involves a financial contract, or in the simplest sense a "personal agreement" that the money owed will be paid back. It's very tangible. Technical debt is what happens in software all the time, when we have to take a shortcut or compromise on making something work in the ideal way so that we can get a product out the door, with the idea that we will "fix it later". Again, it's tangible, in the sense that we can see the work we need to do. Expectational debt, on the other hand, is almost entirely emotional. It's associated with a promise, a goal, or a desire to do something. Sometimes that desire is public, sometimes it is private. In all cases, it's a commitment of the mind, and a commitment of time and attention.

I know full well how easy it is to get myself into Expectational Debt, and I can do it surprisingly quickly. People often joke that I have a complete inability to say "no". That's not entirely true, but it's close enough to reality that I don't protest. I enjoy learning new things and trying new experiences, so I am often willing to jump in and say "yeah, that's awesome, I can do that!" With just those words, I am creating an expectational debt, a promise to do something in the future that I fully intend to fulfill, but I have not done the necessary footwork or put the time in to fully understand what I am taking on. Human beings, in general, do this all the time. We also frequently underestimate or overestimate how much these expectations matter to other people. Something we've agreed to do could be of great importance to others, or of minor importance. It's also possible that we ourselves are the only ones who consider that expectation to be valuable. Regardless of the "weight" of the expectation, they all take up space in our head, and every one of them puts a drag on our effectiveness.

Before I offer my solution, I need to say that this is very much something I'm currently struggling with. These suggestions are what I am doing now to rein in my expectational debt. It's entirely possible these approaches will be abandoned by yours truly in the future, or determined not to work. As of now, they're helping, so I'm going to share them.

Identify your "Commitments"

Get a stack of note cards, use an electronic note file, or grab a page-a-day calendar, whatever method you prefer, and sit down and write out every promise you have made to yourself and to others that you have every intention of fulfilling. Don't just do this for things in your professional life; do this for everything you intend to do (family goals, personal goals, exercise, home repairs, car repairs, social obligations, work goals, personal progress initiatives, literally everything you want to accomplish).

Categorize Your "Commitments"

Once you have done this, put them in priority order. I like to use the four quadrants approach Stephen Covey suggests in "The Seven Habits of Highly Effective People". Those quadrants are labeled as follows:

I. Urgent and Important

II. Important but Not Urgent

III. Urgent but Not Important

IV. Not Urgent and Not Important



My guess is that, after you have done this, you will have a few items that are in the I category (Urgent and Important), possibly a few in the III category (Urgent but Not Important), and most will fall into Categories II and IV.

Redouble or Abandon Your "Commitments"

These are questions you need to ask yourself for each commitment:

- Why do I think it belongs here?
- Will it be of great benefit to me or others if I accomplish the goal?
- Is it really my responsibility to do this?
- If I were to not do this, what would happen?

This will help you determine very quickly where each of the items falls. Most expectational debt will, again, fall into Categories II and IV.

For your sanity, as soon as you identify something that falls into Category IV (Not Urgent and Not Important), tell yourself "I will not be doing this" and make it visible. Yes, make a list of the things you will NOT be doing.

Next is to look at the items that fall into Category III. These are Urgent but Not Important, or perhaps a better way to put this is that they are Urgent to someone else (and likely Important). They may not be important to you, but another person's anxiety about it, and their visible distress, is making it Urgent to you. It's time for a conversation or two. You have to decide now if you are going to commit time and attention to this, and figure out why you should. Is it because you feel obligated? Is it because it will solve a problem for someone else? Is it because you're really the only one who can deal with the situation? All of these need to be spelled out, and in most cases, they should be handled with a mind to train up someone else to do them, so that you can get out of the situation.

The great majority of things you'll want to do, and want to commit to, will fall in Category II. They are Important, but they are not Urgent (if they were Urgent and Important, you'd be doing them... probably RIGHT NOW!!!). Lose weight for the summer. Learn a new programming language. Discover and become proficient with a new tool. Plan a vacation for next year. Read a long anticipated book. Play a much anticipated video game. These are all items that will, in some way, give us satisfaction, help us move forward and progress on something, or otherwise benefit us, but they don't need to be done "right now". Your goal here is to start scoping out the time to do each of these, and give it a quantifiable space in your reality. I believe in scheduling items that I want to make progress on. In some cases, getting a friendly "accountability partner" to check in on me to make sure I'm doing what I need to do is a huge incentive. A common tactic that I am using now is to allocate four hours for any "endeavor space". I also allocate four hours in case I need to "change tracks" and take care of something else. This may seem like overkill (and often, it is), but it's a shorthand I use so I don't over-commit or underestimate how long it will take to do something. Even with this model, I still underestimate a lot of things, but with experience I get a bit better each time.

This of course leaves the last area (Category I), the Urgent and Important. Usually, it's a crisis. It's where everything else ends up getting bagged for a bit. If you are ever in an automobile accident and are injured, guess what: your treatment and recovery rocket to Category I. In a less dire circumstance, if you are the network operations person for your company and your network goes down, then for the duration of that outage, getting the network back up is Category I.

I hate making promises I can't keep, but the truth is, I do it all the time. We all do, usually in small ways. Unless we are pathological liars, we don't intend to get into this situation, but sometimes, yes, the expectations we place on ourselves, or the promises we make to others, grow out of proportion to what we can actually accomplish. Take the time to jettison those things that you will not do. Clear them from your mind. Make exit plans for the things you'd really rather not do, if there is a way to do so. Commit to scheduling those items that provide the greatest benefit, and if at all possible, do what you can to not get into crisis situations. Trust me, your overall sanity will be greatly enhanced, and what's more, you'll start to develop the discipline to grow an expectation surplus. I'm working towards that goal, in any event ;).

Giving Kids a Jump Start: The Humble Brainiac Book Bundle (UPDATED)

Wednesday, February 11, 2015 20:17 PM


As many of you are no doubt aware, I have a very long list of books that are in the hopper to be read, processed, applied and reviewed. Those books all come from somewhere. Some of them I buy, but a fairly large percentage of them are given to me (with the proviso that I review them). NoStarch Press has been a partner with me in this process for the past four years. I am very appreciative of their contributions to my technical knowledge and understanding.

It's with this in mind that I want to alert TESTHEAD readers to a special offer. NoStarch Press and the folks at Humble Bundle have teamed up to offer a jump start for kids who want to learn programming and other technical skills. This is really significant for me right now, as I have a 14-year-old daughter in the early stages of learning how to program.

What could that special offer be? I'm glad you asked ;).



Until February 18, 2015, 2:00 p.m. EST, Humble Bundle will be offering the "Humble Brainiac Book Bundle". This is a variety of e-books geared towards kids, making technical subjects more accessible to them. What makes this bundle interesting is that YOU get to decide how much you want to pay for it. Yes, you read that right: you get to name your price for a selection of books that, if purchased separately, would cost more than $250.


What do you get with this deal? There are three levels:

1. You can pay any amount of money (seriously, you decide) and for that payment, you will receive:
  • Ruby Wizardry: An Introduction to Programming for Kids
  • Lauren Ipsum: A Story About Computer Science and Other Improbable Things
  • The Manga Guide to Electricity
  • Snip, Burn, Solder, Shred: Seriously Geeky Stuff to Make with Your Kids
  • The LEGO Adventure Book, Volume 1: Cars, Castles, Dinosaurs & More!

2. Humble Bundle keeps track of the average payment. For those willing to pay more than the average (which at the time of this writing is $13.08), NoStarch will sweeten the deal and throw in:
  • LEGO Space: Building the Future
  • The Manga Guide to Physics
  • Python for Kids: A Playful Introduction to Programming
  • Incredible LEGO Technic: Cars, Trucks, Robots & More!
  • Build Your Own Website: A Comic Guide to HTML, CSS, and WordPress

3. Customers who pay $15 or more will receive all of the above, plus:
  • Steampunk LEGO
  • JavaScript for Kids: A Playful Introduction to Programming
  • The LEGO Neighborhood Book: Build Your Own Town!

For less than the cost of one retail NoStarch book, you can purchase and download thirteen books that will entertain and educate kids and adults alike. As for my personal recommendations, the "The Manga Guide to..." series of books are great fun and super informative. I have also read or am reading Ruby Wizardry, Python for Kids and JavaScript for Kids (reviews pending), and each is individually worth the price of purchase. Bundled in this format, along with all the other titles being offered, this really is a sweet offer.

As if that weren't enough, all Humble Bundle promotions let purchasers choose how much of their money goes to the publisher (NoStarch), Humble Bundle, and charity. The two charities supported by this bundle are the Electronic Frontier Foundation (EFF) and the Freedom of the Press Foundation.

NoStarch has been great to me for many years. I really appreciate their willingness to provide me books to review. To be transparent, I have many of the books in the list already (that NoStarch gave me for free to write reviews), but there are several in this bundle that I don't have, and I have bought in for the Full Monty :).

This Humble Brainiac Book Bundle ends February 18, 2015 at 2:00 p.m. EST, so if you want to take advantage of it, best get a move on!


UPDATE: As of February 11, 2015, The Humble Brainiac Book Bundle just got a little sweeter. Three new books just got added to the "pay more than the average" bundle. Those new books are:

- The Manga Guide to Calculus
- Beautiful LEGO
- The LEGO Build it Book, Volume 1: Amazing Vehicles

If you have already purchased your Humble Brainiac Book Bundle, go to the confirmation email and click on the download link... the new books have been added and you can download them from there. Yes, they've been added to orders already purchased. Nice job, Humble Bundle and NoStarch... well done :)!!!