For Your Listening Pleasure: TESTHEAD’s Podcasts

Friday, December 19, 2014 2:07 PM

It’s the end of the year, and it seems people will be getting all sorts of new devices for the holidays. Smartphones, tablets, computers, and more will likely be purchased by many, and with those purchases comes the need to fill them. In my personal opinion, while video tutorials and such are cool, nothing is as portable, as engaging, or as well suited to short bursts of calm time, semi-active physical exertion, or long road/air trips as a podcast. 

Thus, in the spirit of giving, I would like to give everyone a taste of my current listening habits: my podcasts of choice. Some of these have been with me for years, some are relatively new, and all of them are informative and engaging (well, at least I think so). 

I have also included the websites that host background info and archives of previous episodes. All are free for newer episodes, though some sites charge for back-catalog items. Most, of course, appreciate donations to help keep the shows going, so by all means, if you can offer support to keep them producing stuff you like, hey, show a little love :)!

And now, without further ado…

AB Testing
Alan Page and Brent Jensen (AB Testing, get it ;)?) talk about a variety of software testing topics, drawing on their combined four decades of experience, as well as whatever meandering thoughts and topics happen to come into their heads. Both Alan and Brent are long-time Microsoft veterans, so there is, of course, a Microsoft flavor to the podcast, but don’t think that just because you don’t live and work in a Microsoft shop this isn’t for you. There’s plenty to keep a tester (or anyone interested in software quality, regardless of role) engaged. What I love about this one is that it’s done in a very casual tone, as though they are just kicking back with a couple of beers and ranting about whatever topics come up (please note: I have no idea if they are actually kicking back with beers as they record, it’s just the vibe it gives me, and I love it :) ). 

Back to Work
Two hundred episodes in, this is still one of my favorite guilty pleasures. Dan Benjamin and Merlin Mann generate a lot of banter in their podcasts, and for those who want the business end of the podcast each week, here’s a tip: just fast-forward twenty minutes and you’ll be much happier. However, if you’ve been with the show since the beginning, you’ll know that some of the most fun parts of the show actually take place in those twenty minutes, as does the banter that carries over from show to show. It’s goofy, messy, and often unfocused, but filled with gems if your goal is to really get into the things that make for creative work. Oh, and don’t be surprised if, after a few weeks of listening, you start to look forward to the rambling intros, because you feel like you are getting a glimpse of two of your friends just kicking it and getting caught up on what matters to them, only part of which is the actual podcast.

Common Sense with Dan Carlin
This is the first of two “indispensable” podcasts for me, ones I have listened to for several years, and that I consider the “gold standard” of what the podcast medium can deliver. Dan Carlin is a cantankerous, fast-talking, highly caffeinated and animated commentator who bills himself as being “political by way of Mars”. Dan is based in the U.S., and many of the topics he covers are from a U.S. perspective, but what makes Dan different is that he refuses to approach problems or issues from a partisan position. For testers looking to exercise their critical thinking skills on political topics, this podcast is a jewel. 

Freakonomics Radio
The subtitle of this podcast is “The Hidden Side of Everything”, and it tends to remain true to that tagline. Stephen Dubner and Steven Levitt host this eclectic show that mostly looks at economics, but it is wonderful for testers because it teases out what the data tells us about a variety of topics. We think we understand how things work, and why the world looks the way it does, but very often the hidden realities run counter to the agreed-upon narrative. Often controversial, always fascinating.

Grammar Girl: Quick and Dirty Tips for Better Writing
This is the first of the “Quick and Dirty Tips” podcasts that I follow, and given that I fancy myself a bit of a writer on the side of being a software tester, it should come as no surprise that I appreciate a regular infusion of grammar and interesting ways to improve my game as a writer. Grammar Girl is hosted by Mignon Fogarty and is full of interesting tips and tidbits about language, grammar construction, writing tropes, and etymology. There are, as of this writing, 446 episodes. That’s a lot of potential grammar tips, and while you may not want to listen to every one of them, you will not surprise me in the slightest if you come back and tell me you have heard them all ;).

Hardcore History with Dan Carlin
Of all the podcasts I listen to, most episodes get listened to once or twice, and then I delete them. I do not delete episodes of Hardcore History! I have saved every one of them since the beginning, and I have listened to each episode multiple times (no small feat, considering several Hardcore History episodes are multiple hours long). Dan Carlin takes the same “Martian” approach to thinking he uses in the Common Sense podcast and applies it to historical events. Earlier podcasts were brief entries, while the most recent ones would count as full audiobooks. Dan has a style of delivery that is dramatic, intense, and, for me personally, enthralling. Many of the podcasts are multi-part series. He’s currently doing a series on the causes and effects of the First World War. The combined running time for this series (titled “Blueprint for Armageddon”) is currently almost fourteen hours, with more episodes still to come. Do not be surprised if you find yourself listening for hours at a time. In my opinion, Hardcore History is the perfect long-drive or long-flight companion, and is still, to me, the gold standard of podcasting.

Philosophize This!
Stephen West hosts this podcast, which endeavors to be a chronological exploration of philosophy and epistemology, and, if listened to in order, it does a very good job of being exactly that. Stephen goes to great lengths to keep a consistent narrative between episodes, flashing back to what previous philosophers have said to help develop the context for the current one. He mixes in modern metaphors to help make some of the esoteric bits make sense, and it all adds up to a very enjoyable and interesting series.

Planet Money
Produced by NPR, and born from the wreckage of the 2008 financial crisis, Planet Money has a similar feel and approach to Freakonomics. It’s a very economics-based program, with a smattering of finance and a lot of current events, and much like Freakonomics, it ventures into territory that is unexpected and open to interpretation. If you think you know what’s going on, you may be surprised to realize how different things look when you “follow the money”.

Ruby Rogues
Ruby Rogues features a revolving cast of commentators, many of whom are well known in the Ruby world, along with frequent guests who talk about a variety of programming topics. I started listening to Ruby Rogues when I worked at a primarily Ruby and Rails shop, and while I have moved on to a work environment that uses different languages, I still listen regularly to this well-done and well-presented podcast. You do not have to be a Rubyist to reap benefits from the show, but if you are familiar with Ruby, it makes each episode that much more interesting.

Savvy Psychologist: Quick and Dirty Tips for Better Mental Health
This is another Quick and Dirty Tips podcast I enjoy following; I became interested in it due to some of the challenges I’ve dealt with around my own diagnosis of, and repeated wrestling with, ADHD. Dr. Ellen Hendriksen tackles a variety of psychological challenges and puts them in terms that everyday people can understand and relate to, covering topics such as procrastination, anxiety, depression, motivation, mood and creativity. Understanding the challenges we all face helps us relate better to those we interact with, as well as to ourselves. 

Still Untitled: The Adam Savage Project 
You may know Adam Savage from his ten-plus-year run on the Discovery Channel’s “Mythbusters”, but this series goes beyond his best-known career choice and talks about his involvement with special effects, the maker movement, and tests and explorations related to oddball interests. If you enjoy the Mythbusters approach and delivery, you’ll probably enjoy this podcast, too.

Stuff You Missed in History Class
As my interest in Hardcore History will likely tell you, I am a fan of history. I love the obscure and interesting bits and their connection to us today. However, Dan Carlin puts so much time and attention into each episode that it can be months between them. This podcast (part of the “How Stuff Works” family of podcasts) is a twice-a-week history snack fest. Shows cover a wide variety of topics, often focusing on the weird or unusual. It’s great fun and a must-listen for the avid history buff.

Stuff You Should Know
How did leper colonies come into being? What is terraforming? What did the Enlightenment actually “enlighten”? If you’ve ever found yourself wondering about these things, and literally hundreds of other areas, then this podcast is right up your alley. Twice a week, the podcast covers interesting and unique topics; taken together, they span all sorts of areas I may know a little bit about, but I always learn just a little bit more. If the idea of learning more about rogue waves, X-rays, and stem cells interests you, then this is a great destination.

TED Radio Hour
If you’ve ever seen or heard a TED talk, you know the format and approach. TED stands for “Technology, Entertainment and Design”, and each hour-long episode is built around a theme, with talks from one or several presenters. Programs range from games and how they relate to psychological development, to self-determined education in Africa, to working with perceptions, to our relationship with animals. Guy Raz hosts this weekly podcast, and each week is a unique exploration into the unexpected.


Again, these are my favorites for this year and as of today. Next year, this list may be entirely different (though some will probably be perennial favorites). Here’s hoping some of these will be interesting to you, and hey, if you have some favorites you’d like to share, please post them in the comments below.

Spreading Some E-Book Holiday Cheer: #Packt5Dollar Deal

Saturday, December 20, 2014 8:21 PM

For the life of this blog, I have deliberately avoided any direct advertisement, or made any attempt to "monetize" it. I plan to keep it that way, since not accepting advertising allows me to say exactly what I want to say the way I want to say it.

Having said that, there's no question that several book publishers have been more than kind, giving me several dozen titles for free over the years as review copies. More than half my current technical library consists of these books, and many have been great helps over the years. If I can return the favor, even just a little bit, I am happy to do so.

Packt Publishing, located in the United Kingdom, is one of those publishers, and they are currently holding a special $5 E-book sale. Any and all e-book titles from Packt are available for $5 for a limited time (until January 6, 2015).


Again, Packt has been very generous to me over the years, and I appreciate the fact that they are offering this opportunity. I've already taken advantage of it... and have added even more books to my Tsundoku list (LOL!). For those curious, I decided to purchase the following:
  1. Backbone.js Cookbook
  2. JavaScript Security
  3. JMeter Cookbook
  4. Kali Linux Network Scanning Cookbook
  5. Learning JavaScript Data Structures and Algorithms
  6. Learning Python Testing
  7. Responsive Web Design By Example
  8. Selenium Design Patterns and Best Practices
  9. Selenium WebDriver Practical Guide
  10. Web Development with Django Cookbook
  11. Wireshark Essentials
Happy reading :)!!!

A "Linchpin" Challenge Revisited: New in "Uncharted Waters"

Thursday, December 18, 2014 7:27 PM

Yes, this is shameless self-promotion, and yes, I would love to have you all read my latest post over at the Uncharted Waters blog, titled "You Can Live Without a Resume!" I'm kidding, of course. Not about wanting you to read the article, but about the shameless self-promotion part. If I were really all about shameless self-promotion, that's all I would say, but you all know me better than that (or at least I hope so ;) ).

When I say "Live without a Resume", I do not mean live without any reference to you or what you do. What I mean is "spend less time on the resume and spend more time on visible, tangible aspects of your work and what you can do".

If I were to try to convince someone I know my stuff about testing, and all I sent in was my resume, that may or may not get anyone's attention. What I do know is that, if it doesn't match their particular filtering criteria, it may never even get looked at. Personally, I don't want to start a conversation from that position. Frankly, I don't even want to have a conversation stemming from "hey, I looked over your resume, and..."

So what do I want to have happen? Personally, I'd prefer any of the following:

"Hey, I was looking over your LinkedIn profile and I noticed you had several talks posted. I listened to a couple of them, and I'm interested in talking more about what you said"

or

"I was looking at the Weekend Testing site, and I noticed your name listed on many of the sessions. I read a few, and hey, I think we might have something to discuss"

or

"I read several of your published articles. Could we get together and talk?"

or

"I spent an afternoon reading several posts from your blog. I think you're insane, but you might be a good fit for a friend of mine's company. Can I have them contact you?"

For the record, I have had every one of those things happen. No, this is not a way for me to strut and act cocky. Instead, it illustrates the power of having your work be on display in a way that steps outside of having a resume.

I owe this whole experiment to Seth Godin, and I've tried it now for almost five years. Granted, I could be proven wrong tomorrow. The bottom may fall out of the market, I could find myself unemployed, and then all of what I'm suggesting may not work any longer. That is of course possible. So far, though, Seth's hypothesis has proven to be sound, and for that, I am seriously grateful :).

Teaching by Example: Slowing Down to Get Ahead

Wednesday, December 17, 2014 9:20 PM

It's been an interesting month with my daughter, Amber. We have been engaged in a little "tug of war" over the past several weeks. I am realizing that I have to be very careful here, because enthusiasm on the part of the "teacher" can either inspire a student or completely demoralize them and turn them off.

We've agreed to a simple goal, and for now, that goal is the Codecademy streak. I've told her I don't care if she only does one lesson on a given day, as long as she at least does that one lesson. Of course, I'd be really happy if she did two, or three, or ten, or heck, even do fifty like I did about seven months ago. Yes, I spent a free Saturday and completely pounded through the entire jQuery course. No, I'm not convinced that was the best use of my time, considering how much I still have to look up to write anything today.

What I realized from that particular experience was that my own enthusiasm to "push through" caused me to take shortcuts. I violated what I jokingly refer to as "The First Zed Commandment", which is "thou shalt not copy and paste". Those who have read any of Zed Shaw's "Learn [Language] The Hard Way" books know that he espouses this pretty heavily, and I do too... most of the time. However, my own impatience often gets the better of me, and yes, I find myself cheating by copying and pasting. If I were getting in physical shape for a run, would I get the same benefits if I agreed to run 5K each day, but, whenever I got impatient, hopped on my bike and rode the rest of the way? Would I get the same conditioning? The answer is, of course, no. Writing code is the same way.

At this stage, I may wish my daughter were moving faster, but if the net result is that she scores a lot of points and gets a lot of badges, but doesn't remember fundamental syntax or hasn't put in the time to recognize where she has made a mistake, what am I really teaching her? This has prompted me, instead, to ask her what time she wants to sit down with me, her with her computer, me with mine, and work either face to face or side by side. This way, she sees what I am doing and how I'm doing it. It may encourage her to do likewise, or she may say "hey, that looks odd, what are you doing?" Either way, it will start a conversation, and then we can discuss the fundamental details. I have to realize that it's better for her to pull from me than for me to push to her.

So fundamentally simple, so easy to say, yet so hard to do when "Dad Brain" wants to carry her along as fast as he can. On the positive side, this is also allowing me to step back and make sure I really understand what I think I understand, with the often neat realization that, hey, I really didn't know that as well as I thought I did. Consider this my solid recommendation to anyone wanting to learn how to code... teach your kid how to code. If you don't have one, see if you can volunteer at a school's computer club and offer your time. What I'm realizing anew is that the example of working through problems for them is a pretty amazing teacher in its own right.

The Weekend Testing Family is Expanding

Tuesday, December 16, 2014 10:00 PM

It is with great pleasure, and a little bit of familial pride, that I announce there is a new kid on the Weekend Testing block.

During the first couple years of Weekend Testing Americas, Albert Gareev assisted me in getting the group off the ground, working on some more interesting and technical topics, and even worked with me to propose and try out the "Project Sherwood" project. We decided that the Weekend Testing model as it was designed at the time wasn't the best model for this expanded idea, but Albert didn't give up the fight.

We've talked a number of times over the past couple of years about how the Weekend Testing model works not only as a distributed communication model, but also as a live, in-person event. We realized that what "Project Sherwood" was missing was that in-person, direct give-and-take element. Albert figured that for it to work, it would make sense to try it with a group he already had familiarity with, in a community where such events could be developed and given focus. Since Albert is in Toronto, he figured "why not set up a Weekend Testing chapter in Toronto?"

Why not, indeed :)?!

For those on the East Coast, and especially those in the immediate Toronto area, you are going to get a cool opportunity. Albert is a dedicated and accomplished tester, with a wealth of knowledge and a fountain of ideas. For those wondering if this is going to dilute the current Weekend Testing Americas offerings and sessions, I'm going to say "unlikely", because those who enjoy participating in these events will go to the ones that make sense for them and their availability. I've attended events in just about every one of the chapters over the past five years, so in my mind, the more the merrier. The real winners are the testers who want to participate and learn some cool things.

So what should you do? For starters, go and join the Weekend Testing Toronto Meetup. Follow @WT_Toronto on Twitter. Check Weekend Testing's main site for scheduled sessions. Most of all, get ready for a fun and engaging opportunity with an interesting and passionate advocate for testing.

Companies Abhor a Vacuum

Monday, December 15, 2014 10:53 PM

Over the past several years, I have seen time and time again that companies frequently say one thing but act in a totally different way. One of the key places I have noticed this is when someone leaves, especially if they are responsible for a critical area.

The common line is that they will hire someone to take over that role, but I've rarely seen it work that way. In fact, the large majority of the time, it falls to someone on the team who either has the misfortune of being assigned the task, or is crazy enough to jump into the breach.

I would recommend to any software tester: if you see an area that is about to become vacant, even if it's not a skill or area you are entirely comfortable working in, take the person who is leaving out to lunch and ask them about that key area. It's possible they may not have time for you, but it's also possible they may be more than happy to show you how to work in that area.

As an example, we recently received word that one of our programmers was going to be leaving. There was some discussion in the Engineering meeting about some of the things this engineer was responsible for, and one of those areas (the management of virtual instances) was discussed as something that would need to be reassigned. I threw up my hand and said "I realize I may be out of the main flow of this at the moment, but in my past life I was responsible for handling the Hyper-V virtualization servers at a previous company. I wouldn't mind learning what might be different and where I could be helpful here".

Expecting to hear "Oh, that's OK, we can manage", I received an answer of "Wow, that's great! Yes, could you two get together and make that transfer happen?"

Truth be told, I felt pretty confident that this would be the case. Part of the reason was that I didn't ask permission, I just said "hey, I'd like to learn that". I relied on the fact that my company would have to either look for a person to fill the role, or let someone who volunteered try to take it on first. By voicing my interest, I solved a problem for them; they don't need to look for a new person to do that work. I also gave myself a jolt of electricity to take on something I wasn't currently doing, but felt would both help the organization and, additionally, help me ;). What I also did was couch the opportunity by saying I had a similar experience that I could use to draw upon. I wasn't saying "I know nothing about this, but I can learn". Instead, I phrased it as "I have some experience from this other domain that I think might help me bridge the gaps".

I can only speak for myself here, but I think we tend to wait for others to say it's OK for us to take on a particular responsibility, or to have someone tell us we will be doing something. We likewise might feel that we don't have the seniority or technical ability to take something on, or we may perceive our organization may believe we don't. My recommendation is, if you want to up the odds in your favor of getting an opportunity, ask after someone has announced they are leaving. I'm willing to bet that they would give you the benefit of the doubt. The alternative is to leave a vacuum, and companies abhor a vacuum ;).

Book Review: Lauren Ipsum

Monday, December 15, 2014 1:04 AM

As part of my current experience teaching my daughter how to write code, I am finding myself getting into territory that I somewhat understand at various levels, but struggle to explain or make clear enough for a thirteen-year-old to likewise understand. How does someone explain recursion without causing a bunch of confusion in the process? In the past I have found myself struggling for ways to explain certain topics that help ground the ideas of computer science, computing and programming, and how they actually work.
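To make the recursion question concrete: the explanation I find kids can actually trace by hand is a function that does one small step and then hands a slightly smaller version of the same job back to itself. Here's a minimal sketch in Python (my own illustrative example, not from any particular curriculum):

```python
def countdown(n):
    """Return the numbers from n down to 1, followed by "Liftoff!"."""
    if n == 0:                    # base case: nothing left to count
        return ["Liftoff!"]
    # one step now, then a smaller copy of the problem handles the rest
    return [n] + countdown(n - 1)

print(countdown(3))  # a kid can trace each call on paper
```

The point of the base case (`n == 0`) is the part that usually needs the most explaining: without it, the function would call itself forever.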

Carlos Bueno feels my pain, and to help answer it, he has written a book that is a perfect companion for a young person learning to code. That book is “Lauren Ipsum”. Its subtitle is “A Story About Computer Science and Other Improbable Things”. More to the point, it is a computer science book without a computer. Wait, what? How does that work?

Carlos prints the following in the pages before the story starts:

I feel I should warn you: You won’t find any computers in this book. If the idea of a computer science book without computers upsets you, please close your eyes until you’ve finished reading the rest of this page.

[...]

You can also play with computer science without you-know-what. Ideas are the real stuff of computer science. This book is about those ideas and how to find them. In fact, most of the characters, places, and thingamajigs in Userland are actually based on those ideas. Check out the Field Guide at the back of the book to learn more about them!

“Lauren Ipsum” is the name of a girl who goes for a walk after a fight with her mother, finds herself lost, and in the process, meets an improbable cast of characters in a magical world called Userland. Through her travels, she solves various problems for herself and others and tries to find her way back home. Each of the people and creatures she meets personifies a different problem in computer science, and ways that can be used to help solve problems related to them. We are introduced to the “traveling salesman” problem, logic and choices, algorithms, cryptography, heuristics, abstraction, construction and deconstruction, networking, and branching paths, to name but a few.
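The “traveling salesman” problem, for instance, is easy to play with in a few lines of code once the story has planted the idea. This is my own sketch (not code from the book) of the classic nearest-neighbor heuristic: always hop to the closest city you haven't visited yet. It's fast and simple, but not guaranteed to find the best route, which is exactly the kind of trade-off the story dramatizes.

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy tour: start at the first city, always visit the closest unvisited one."""
    unvisited = list(cities[1:])
    tour = [cities[0]]
    while unvisited:
        here = tour[-1]
        # pick the unvisited city with the smallest straight-line distance
        nxt = min(unvisited, key=lambda c: math.dist(here, c))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

# four (x, y) points; the greedy tour visits the near ones first
print(nearest_neighbor_tour([(0, 0), (5, 5), (1, 0), (1, 1)]))
```

A nice follow-up exercise for a young reader is to construct a set of cities where this greedy approach gives a clearly worse route than the best one, which is the “aha” the book is aiming for.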

At the end of the book is a section called the Field Guide to Userland, which goes into additional detail about each of the chapters, the concepts mentioned in each section, and what they represent. If you are an adult looking for a quick reference to the book and the concepts being covered, this is it, and it is frankly worth the purchase price of the book by itself. Having said that, don’t think that you can’t learn from the story itself. In fact, I'd be surprised if you didn't find yourself enchanted by the main story as well. 


Bottom Line: This is a fun way to introduce problem solving and logic to kids who want to learn how to program. While we have lots of tutorials that cover the syntax of code or the ways to build a program to do something, we often skip these other important topics until later, and then struggle to understand or explain them. To that end, “Lauren Ipsum” does a great job of breaking down difficult-to-explain topics in a way a teenager can understand, but also in a way that gives grown-ups who should know this stuff, but struggle with it, some new stories to work into their understanding. If you have a kid looking to learn how to code, share this book with them. Have them read it, of course, but take the time to read it yourself, too. You might find yourself much better equipped to explain the concepts as time goes on.

Guest Post: Inside an "Hour of Code" with Amber Larsen

Saturday, December 13, 2014 10:13 PM

This blog entry comes courtesy of my daughter. On Saturday, December 13, 2014 in San Bruno, she and a number of kids from her intermediate school and the local elementary schools participated in "Hour of Code". Amber signed up for the "Intro to JavaScript" class. The materials used for this class can be seen at Khan Academy: an "Intro to Drawing" using JavaScript syntax.

I'll let Amber take it from here:

I would say that the web site we used was very "child friendly". It helped to make it possible for children and teenagers to work on code. It had videos, so instead of reading it, you could see it happen in front of you. We also worked on projects where we were able to make shapes and add colors and work with a palette. It was a good introduction and it was easily doable in an hour.


The course was listed as an "Intro to JavaScript", and while I learned how to make shapes and fill in colors, and yes, I know that I was using JavaScript to make those changes, it felt like I was working with a very small area of the code. I don't feel like I learned how to make websites or an actual program, but then I don't think that was the point. They had a full programming tutorial for JavaScript that we could continue with at the end of the hour.

I think some of the explanations needed to be listened to a couple of times. Some of the other kids I was working with got stuck, but we were able to talk together and straighten it out. It reminded me of the HTML and CSS modules I have been working with in Codecademy.

Speaking of Codecademy, I think my having spent the last month working through the projects there helped me a lot, maybe too much. I finished the set of videos and projects 25 minutes before everyone else.

One of the nice things about the Intro to Drawing course was that the videos could run while we changed the code as it was put on the screen. Also, there wasn't a big setup time. When I downloaded Python I had to install it, set up my IDE to work with it, make sure the interpreter worked, that my PATH variable knew about Python, and that the IDE could use the interpreter. If we all had to do that, we would have easily spent an hour just getting everything set up.

If I had to say there was anything I didn't like, it's that right away it told me if I made a simple mistake (well, sure, but I'm not finished yet, hang on!). Maybe it's because I'm used to the Codecademy approach, where you fill in what you want to write, and then submit the whole thing, and if there's an error, the screen shows it and it makes a suggestion, and you have to figure out what you did wrong. In a way, that felt more like "testing". With this, it came right out and told you what you were doing wrong. I think I might have learned more without the frequent reminders, but it was an intro, so I understand.

I think that instead of calling it an Intro to JavaScript, it should be called an Intro to Drawing (using JavaScript), because we focused more on the drawing (making lines, rectangles and circles, and filling them in) than we did on the language. Having said all that, I think it makes sense to do what they did, because they want to make kids interested in learning more, and with that, I think they did a pretty good job.



Time is an Asterisk

Saturday, December 13, 2014 1:07 AM

It is that time again. It's the end of another year, and time for my typical retrospective on the year that was: what I wrote, what I learned, what I did, and what I didn't do. I did a little search on my "Retrospective" tag and smiled, realizing that this is the fifth entry in the series, capping an (almost) fifth year of writing this blog. I think that's somewhat noteworthy, as I have very few endeavors I can point to that have survived for five years, much less thrived. Outside of my marriage and family, and a couple of jobs, this may well be the single longest-running endeavor I've ever managed. No, that's not a sign I'm looking to end this; in fact, I'm just getting warmed up. Also, yes, the title is, once again, a nod to the Talking Heads song "Once in a Lifetime". I'm not sure how many more years I'll be able to keep this streak going, but it worked again for 2014.


The year started with a lot of promise, and in some ways, a need to recuperate. Last year, I took on the daunting challenge of writing 99 action plans for what a software tester can do to become a better software tester. I have the contents of what could be a pretty cool book, but it needs editing, curating and a lot of revision. If there's anything I discovered about myself in 2014, it's that huge, looming projects can easily get derailed because I see and feel that they are huge and looming. I did the same thing with an idea to approach technical testing with Noah Sussman's guidance. In some ways, the sheer size and magnitude of these projects spooked me, and they got pushed to the back of my focus.

At the end of the year, I realized I was perhaps too ambitious, and needed to step back a bit and rethink my approach. The saving grace for both of these projects is that they have the potential to turn into an interesting collaboration with my youngest daughter. Because of Google's "Made with Code" events, she has decided she wants to learn how to code. This brings back both of these initiatives, and several others, but now it puts them in a much clearer focus. The ideas I had are interesting, but unfocused. Helping my daughter learn how to code and test, that's a focus. I anticipate those earlier initiatives will get some fresh air and the embers will be stirred and blown back to life. It's no longer just about me and my musings; now I have to put up or shut up ;).

This year has been an interesting transition, in that I have been receiving a lot of requests to write for other publications. I am grateful to sites like Smartbear, Zephyr, Techwell and IT Knowledge Exchange, among others, in that they have given me a platform to write about my experiences and pay me for them, too. Of course, that creates an odd tension. Do I hold back and publish for those who will pay me? That's great, but what about the articles that don't fit what they want to publish? What about the things that really only interest me and the readers here? Am I shortchanging my audience by holding back from this blog? I had to give this some serious thought and see what made sense to do, and ultimately, I decided that I needed to come back and ask "what is TESTHEAD ultimately about?" It's really about the education of a software tester, and part of that includes learnings that come from unexpected places. My experiences have a value, and people enjoy reading them. Even more surprising is just how many people still visit my blog even when I haven't posted anything in a while. What also fascinates me is to see what posts consistently show up as perennial favorites. As of today, my top ten posts are:
  • Testing as a Service? A Post-POST Post (workshop review)
  • Read Articles, Blogs, Forum Posts: 99 Ways Workshop (how-to guide)
  • Introvert? Extrovert? Or Both? (exploring diversity)
  • Learning to Tell Different Stories (exploring diversity)
  • Exercise 5: More Variables And Printing: Learn Ruby the Hard Way (how-to guide)
  • Inflicting Help (lessons from home)
  • BOOK CLUB: How We Test Software at Microsoft (5/16) (book review)
  • Onboarding and Not Getting Mau Mau'd (interpersonal relationships)
  • When Things Just Aren't What They Seem (interpersonal relationships)
  • I Used To Be a Staffer… (volunteering and leading)
What this shows me is that there is no clear theme as to which posts are most appreciated. It's not like there's a "type" of post that specifically gets more traffic than others. The one telltale sign I do see, though, is that of these top ten reads, most of them have to do with my own personal takes on things. Not some authoritative commentary, but just my fallible opinion of why things seem to be the way they are. Also, it seems the posts where I try something and it doesn't work out well, or where I step in with guarded enthusiasm, are the ones you all tend to come back to or tell others about. It means my goofy optimism and occasional cluelessness are appreciated and entertaining. I think I can mine that vein for a very long time ;).

This year saw me bring the testing message to a few different venues, some of which were not testing related. I spoke at the ALM Forum 2014 in Seattle, WA, Developer 2014 in Burlingame, CA, I shared the stage with Harrison Lovell at CAST 2014 in New York City, and went to be a participant and correspondent at EuroSTAR 2014 in Dublin, Ireland. During all of those events, I had a chance to meet many new people, start new friendships, discover new opportunities, and generally expand my world just a bit larger than it was before.

2014 was a year of transitions for me. It was a year that saw my eldest child move from high school to college. It was a year where I lost several close friends. It was a year where I stepped down as the Chair of the Education Special Interest Group and relinquished my role as Treasurer within the Association for Software Testing, and accepted the role of President of the organization. It was a year that saw a Meetup group grow and flourish in San Francisco, then drift a little bit, and then weather a hostile takeover attempt, which the core community fought back against. It reminded me of the amazing connection I have with many people, and how, when it looked like we might have to walk away, they stood together and said "Oh hell no you won't!" Weekend Testing Americas turned four years old, and has a healthy core of interested facilitators and participants who eagerly ask us "when is the next session?" BTW, December being so jam packed with other events related to families and other groups, we are taking December off, but we will be back in January, and we have a lot of cool new ideas to explore.

Most of all, I have to give my thanks and gratitude for this little forum, what it's become, and how it continues to surprise me, both with what I post here, and with how people react to it. Seriously, to whoever is reading this, whenever you read it, the fact that you took the time to come to my blog, to read something I wrote, to leave me a comment or share a link on a social media site, that you engage with me year after year, it's touching and humbling. Were it not for you, I'd have no reason to do this. Also, so many of the opportunities that come my way start here. Thank you for following up, for asking questions, for holding me accountable and for keeping me honest. It makes writing this blog a whole lot more fun because of that.

As the title says, Time is an Asterisk. It's not just a line to continue a theme (though it does that quite well ;)); it also reminds me that, truthfully, I don't know what next year's letter will look like, or what forces are going to shape the next year, or what flavor the posts to come will have, though I can probably offer some guesses. I have a lot of books I want to review. I have a lot of ideas I want to test out with my daughter to see if they work or not. I have a lot of goals I want to see myself obtain. Which ones I will actually cover, and which ones will be written about here, that remains to be seen, but I will do my best to make sure it's something interesting and unique to my own experiences. That I can pretty much guarantee. The rest is a wildcard ;).

Book Review: Zero to One

Friday, December 12, 2014 17:57 PM

As a birthday/Christmas present, a friend sent me Peter Thiel's "Zero to One" in a four-CD audio format. Peter reads through and presents nearly four hours of audio that goes much faster than the elapsed time would indicate. On audio production and delivery, it does very well. Having said that, how about the book as a whole?

Zero to One is a book about being an entrepreneur. Many of us may stop right there and think "ehh, I'm not an entrepreneur, so this book isn't for me". I would encourage anyone with that attitude not to think that way. Whether we work for a company, are the founder of a company, or do freelance work in various capacities, all of us are entrepreneurs. In the curation of our own careers, absolutely we are. To that end, we want to create, to do something interesting, and maybe, dare we say it, change the world.

Most businesses tend to copy someone else in some capacity; they are content to replicate what has been successful for others. This is what Peter refers to as "One to N" improvement. It's incremental, it's a shaving of time, it's an improved efficiency, it's a streamlining of process. It may keep you afloat, but it will not rocket you ahead. For that, you need a different approach, a true sense of innovation, a mindset that will bring you from Zero to One.

Thiel presents many anecdotes from the past thirty years in Silicon Valley, with many familiar stories, ups and downs, and memories, oh the memories (having been at Cisco Systems in the 1990s, and with several smaller companies through the ensuing fourteen years, Thiel’s stories are not just memorable, they are my history, and some of the stories hit a little too close to home ;) ).

The book is structured around seven questions that any company (and any individual) should be ready and willing to ask themselves before they commit to a venture or creating what they believe will be a “killer startup”. Those questions are:

  • Zero to One: Can you create something new and revolutionary, rather than copy the work of others and improve upon it?
  • Timing: Is NOW the time to start your business? If so, why? If not, why not?
  • Market Share: Are you starting as a big player in a small or underserved market?
  • People: Do you have the right people to help you meet your vision?
  • Channel: Can you create and effectively sell your product?
  • Defensibility: Can you hold your market position 10 and 20 years from now?
  • Secrets: Have you found a unique opportunity or niche others don't know about?

Additionally, Thiel argues that any product that qualifies as a Zero to One opportunity will not just compete with other options; it will offer a 10X improvement over what has come before. If it doesn't, then competition may overtake and erode anything you offer. Harsh, but perfectly understandable.

Thiel addresses topics like success and failure, disruption and collaboration, replacement and complementarity, and the fact that any technology, no matter how good, needs to be sold and marketed. Engineers believe that if their product is as good as they think it is, it will sell itself. History shows time and time again that this is not the case, and Thiel comes down hard on the side of sales being a driver, and that sales must be shepherded.

Bottom Line: Zero to One makes the case that true entrepreneurship needs to start from the idea of doing something unique, and being willing to look at the seven questions realistically and determinedly. If you cannot answer all seven of the questions with a YES, your odds of success are greatly diminished. Even if all seven can be answered with a yes, there are no guarantees. This book is not a tell-all guide as to how to be an entrepreneur, but it does give some concrete suggestions as to how to approach that goal. It's a what and a why book, not a how book, at least not a "formula" how book. It does, however, offer a lot of suggestions that the future entrepreneur, company worker, or freelance creator could learn a lot from. If you get the book, read it twice. If you get the audio version, listen to it three times. I think you'll find the time well spent.

Diversity is More than Skin Deep: New Uncharted Waters Post

Saturday, December 13, 2014 01:08 AM

A short post to say that I have a new entry up over on the IT Knowledge Exchange Uncharted Waters blog. It's titled, "You Keep Saying Diversity, Does it Mean What You Think it Means?"

As I discussed in my live blog posts at EuroSTAR a couple weeks back, many of the discussions are based around "external diversity". Understand, the current environment for many companies and events makes that conversation extremely relevant. I am sensitive to the fact that there is not a broad representation in the computer sciences or in information technology, and yes, gender, ethnicity, and mobility are important areas to focus on.

Less discussed are the areas inside of each of us that make us unique, and dare I say it, possibly hard to manage. This article is meant to keep the conversation going and look at the other less obvious areas where diversity may be taking a hit, even while we are focusing on external factors.

Please have a look, share your comments on the article page, and hey, while you're at it, perhaps consider making Uncharted Waters a regular stop. Between Matt, Justin and myself, we post a number of interesting articles about software delivery, technology, work, the changing technical landscape, and yes, even some software development and software testing, too.

Libretto: The Father/Daughter Coding/Testing Project Gets a Name

Wednesday, December 10, 2014 15:16 PM

As I posted late last month, my daughter Amber has embarked on a journey to learn about software development and software testing, and has proven willing, most of the time, to work through the examples in Codecademy and some other example projects to learn more about code and how it works.



Tuesday marked a milestone point for her in that she has now set a 30-day coding streak. As of right now, she is most of the way through the HTML and CSS course, and is about 10% through the Python course on Codecademy. I figured it would be a good time to have Amber tell us a little bit about what she has been doing and her reaction to all of this so far.

It's been 30 days, and in that time, I've worked through about 150 exercises, and I have installed my own Python compiler and a program called Geany, which is a nice little window where I can write all of my code for different stuff. I can write HTML, CSS and Python code in this tool, and I can compile and run the Python code I am writing.

Geany was a new tool to me, I learned about it when I was in Dublin, and it was used for the Programming for Testers workshop. What I like about it is that it is fairly unpretentious; it's a simple editor that gives the user color coding of different languages, the ability to hook up different compilers for different languages, and a stripped down interface that focuses on the core needs of the person using it. As an exercise for her, I asked her to download the code, get it installed, and get it working and try to do so with as little input from me as possible. By doing so, she had the chance to look at available sources, make a choice about what she downloaded and utilized, and then put it into use.

When I saw the different versions, I decided to download the newest one, which was Python 3.4.2. It was easy to install (I just followed the screens) and it was easy to put in the code and see the coloring of the letters. I noticed that the code I was working on in Codecademy wasn't working in Geany. I read a bit about Python 3 and found out that the print statement now needs parentheses. Since I'm just starting off, and I've only been doing this for a few days, I haven't really gotten into the habit of writing the code a certain way, so making the shift from Codecademy and Python 2 to Geany with Python 3... it's not hard, it's just something I need to do, and I just do it. It's a little bit of a challenge, but it's easy to handle.
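The change Amber ran into is small, but it's easier to see side by side. A minimal sketch (the greeting text is just for illustration):

```python
# In Python 2, print was a statement, so this worked:
#     print "Hello, Amber!"
# In Python 3, print is an ordinary function, so parentheses are required:
print("Hello, Amber!")

# Because print() is just a function in Python 3, it also gains keyword
# arguments, such as end, which replaces the default trailing newline:
greeting = ", ".join(["Hello", "Amber"])
print(greeting, end="!\n")
```

Making print a function is one of the better-known backward-incompatible changes in Python 3, which is why the Python 2 style code from Codecademy wouldn't run unmodified under Python 3.4.2.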

One of the things I encouraged Amber to do with the materials she was working on and reviewing was to take the time to set up files in Geany that reflect what she is learning. Sure, there's a lot you can learn in Codecademy, and there's a list of all the exercises, but going back to review can take a long time, and locating the different items takes a while. When I worked through "Learn Ruby the Hard Way" a couple years back, I made local copies of each assignment, but even then, it was a litter of code and took time to find what I was looking for. As I was talking to Amber about organizing the things she was learning, I suggested that she make a personal project so that each item she learns, she can write it into the project and have it be part of a larger whole. I'm of the mind that we can learn a lot of things, but if we don't have an authentic problem to solve, or a way to put what we are learning into daily use, we will lose what we learn, or we will struggle to find it later.

I'll let Amber pick up from here...

As we were talking about gathering up each lesson, the idea was that I would make a set of files (a web site for HTML and CSS, some longer program in Python) and we'd make it grow and show examples, both on the screen itself and in comments. As we were talking about it, we were joking about the need for "sheet music" for the project. We both started talking about writing a "libretto" for the project, and we said "hey, that's a neat name", so we are now calling our project "Libretto". We are also using Dropbox to keep everything up to date so we can both look at it and "work on it" together. 



As Amber said, we happened on the term "Libretto" to describe what we are making. Borrowing from Wikipedia:

Libretto (pl. libretti), from Italian, is the diminutive of the word libro (book). A libretto is distinct from a synopsis or scenario of the plot, in that the libretto contains all the words and stage directions, while a synopsis summarizes the plot.

This seemed the perfect metaphor for this project. Instead of words and stage directions, this project would contain examples of what she is working on, as well as comments in the code to explain the code itself. It will be verbose, it will have lots of comment space. It won't be particularly elegant, but over time, we want it to become a collection of classes, methods, IDs, pseudo-classes, and other aspects that she can look back to and say "Oh, yeah, I made a pseudo-class that does this one thing, and I can use that as a framework to make something else". The idea is that, as she learns each language, the "Libretto" will tell the same story for that language. By doing so, she can work with the pieces and see how different technologies would solve similar tasks, and do so in a space that can be periodically refactored. The cool thing is that, at least for the time being, Geany looks to be up to the task.
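As a purely hypothetical sketch (none of this is from the actual project), one Python "verse" of the Libretto might look something like this: each lesson becomes a small, heavily commented example that can be found and reread later.

```python
# Hypothetical Libretto entry -- each lesson gets a short, heavily
# commented example, so it can be found and reread later.

# Lesson: functions and string concatenation
def greet(name):
    """Return a greeting for name -- the Python verse of the story."""
    return "Hello, " + name + "!"

# Lesson: loops and lists
# The same greeting, repeated for a whole cast of characters.
def greet_all(names):
    return [greet(n) for n in names]

print(greet_all(["Amber", "Dad"]))
```

The HTML/CSS verse of the Libretto would then tell the same "greeting" story in markup, so the two can be compared side by side.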

Also, completely on her own, Amber signed up for a weekend class as part of the "Made with Code" initiative. She'll be spending a fair part of this Saturday working on something related to coding with others, most likely an intro to JavaScript, but there will be other opportunities as well. Perhaps our next check in will be to see what her experiences from that event are. Until then, happy coding :).


Book Review: Good Math

Friday, December 12, 2014 17:57 PM

I am going to get this out of the way at the outset. Mathematics has often been a struggle for me. I managed to make my way through to my first semester of Calculus, and then I hit a wall. Having said that, I've always been fascinated with mathematics, and the history of how discoveries have been made always interests me.

Also, as a software tester, I often have to look at the calculations being made, and make sure they are right. For 90% of what I do, the math I have learned to date is fine, but there's still that crazy rush I get when I look at higher order math, or math in areas I'm not familiar with, and that door creaks open just enough for me to get a glimpse of something I didn't understand before, or I feel I've come a bit closer to understanding it.

"Good Math" by Mark C. Chu-Carroll has proven to be very helpful in creaking that door open and giving me a glimpse into things I thought I understood, but didn't really understand. The subtitle of the book is "A Geek's Guide to the Beauty of Numbers, Logic, and Computation" and, to be fair, the latter two areas are the meat of the book. Yes, it goes into the classic popular number topics like the development of our numbering system, Roman numerals, the discovery of pi, the concept of zero, etc., but it moves on from those pretty quickly.

Mark is the author of the "Good Math, Bad Math" blog. Many of the topics that are covered in this book (and many more that are not) can be read there. Those debates, and some of the confusion regarding different areas of math and computation inform much of this book.

Part 1 deals with Numbers, the ones we are most familiar with, but does so in a way that focuses on the rigor of mathematics to make the case for them. Natural numbers, Peano Induction, Integers, Real Numbers, and Irrational numbers get covered here.

Part 2 focuses on Funny Numbers, including the concepts of Zero, e, the Golden Ratio, and i (the imaginary number, what it does and what it means).

Part 3 focuses on the written numbers throughout history, including Roman Numerals, Egyptian Fractions, Fibonacci numbers, and the development of the arithmetic we so often take for granted.

Part 4 delves into Logic, or more specifically, the mathematical proofs that define logic. This section also covers ways of programming using logic and Peano arithmetic, with examples written in Prolog. Note: these examples are not comprehensive, but merely to give a taste of how to use the concepts.
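The book's logic-programming examples are written in Prolog; as a rough illustration of the same Peano idea in Python (my sketch, not taken from the book), a natural number n can be represented as n nested applications of a successor function to zero:

```python
# Peano naturals, sketched in Python: zero is a base value, and the
# number n is n nested applications of succ() to it.
ZERO = ()

def succ(n):
    return (n,)  # S(n)

def add(a, b):
    # Peano addition: a + 0 = a ; a + S(b) = S(a + b)
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

def to_int(n):
    # Count the nested successors to recover an ordinary integer.
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

The point, as in the book, is that addition falls out of nothing more than the definitions of zero and successor.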

Part 5 deals with Sets, and the variety of ways to look at sets in mathematics, including Axiomatic Set Theory, Models, Infinite Sets, and Group Theory/Symmetry.

Part 6 focuses on Mechanical Math, or Computation. Here is where the Computer Science aficionados may check out, but I was intrigued with the clear descriptions of Finite State Machines, Turing Machines, implementations of systems that are Turing Complete, and how Lambda Calculus is at the heart of the development of programming languages, especially Lisp and Scala.
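As a taste of the lambda calculus material (again a Python sketch of my own, not the book's examples), Church numerals encode the number n as a function that applies another function n times:

```python
# Church numerals: the number n is "apply f, n times, to x".
zero = lambda f: lambda x: x
church_succ = lambda n: lambda f: lambda x: f(n(f)(x))
church_add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def church_to_int(n):
    # Recover an ordinary integer by applying "+1" n times to 0.
    return n(lambda k: k + 1)(0)

two = church_succ(church_succ(zero))
three = church_succ(two)
print(church_to_int(church_add(two)(three)))  # 5
```

Arithmetic built out of nothing but single-argument functions is exactly the kind of thing that makes the Lisp and Scala connection click.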

As one who studied computers from an IT perspective rather than a CS perspective, I never fully got into the logical nuts and bolts of how computers really work. The sections on computation, as well as those defining logic as a computer sees it and in the strict notation of mathematical proofs, were fascinating.

Good Math is geared towards programmers, and is meant to introduce topics of higher math in ways that will help programmers understand how to implement the ideas, as well as to help them understand how computers actually go about doing what they do. Truth be told, for many of us who use higher level interpreted languages like Ruby, we can go a long way in writing code without ever being exposed to any of these concepts. There are several examples shown, but none worked out in detail. Again, that's OK, as this is meant to be a survey book about mathematical concepts as they relate to computing and logic, not a full-blown course in how computer languages implement these models.

Bottom Line: If I were to suggest a core audience for this book, it would be to people like myself, people who have worked with programming languages, may even know a few cool tricks and have some system internals knowledge, but don’t have a strong foundation in higher math. Having said that, if you made your way through typical high school math courses, most of this will feel familiar and accessible. Some of it will feel strange, too (lots of symbols I’ve personally not used in years, if ever), but Mark writes about them in a way that makes them accessible. You still may find yourself reading various sections a few times. Well, I did in any event.

Don't Know What You've Got Until Someone Tries to Take It

Tuesday, December 09, 2014 16:41 PM

The past three days have been, shall we say, emotional, frustrating, and wonderful, all at the same time. Most of all, it has shown me that a group of vocal people can make a difference, but more on that in a bit.

First, some perspective. Last year, Curtis Stuehrenberg and I, along with Josh Meier, decided it was high time there was a software testing Meetup group in San Francisco. May not sound all that revolutionary, and wait, weren't there plenty of software testing groups already? Well, yes, if your main focus was on a specific tool. The Selenium Meetup group has been a de-facto testing group for years, and it's where a lot of software testers go to participate and share in testing topics from time to time, but what we noticed was missing was a focus on the "ilities" of software testing, as well as the areas that go beyond the programming aspects of software (or "paradev" aspects, to borrow from Alister Scott). Thus, we decided to inaugurate the Bay Area Software Testers group, or BAST, as we colloquially refer to it, with the following goals in mind:

The Bay Area Software Test Association originally formed as an impromptu attempt to provide software testers and quality engineers from San Francisco and the East Bay a place to meet and network without having to brave the Silicon Valley traffic snarl. Currently we meet on a monthly basis, providing a chance to hear people talk about general topics and socialize/network/schmooze some of the more interesting people on any software development team .... the testers!

We enjoyed several months of great talks, great conversations, and great activities, and then the summer and fall time frame saw less activity, due to scheduling challenges all of us were dealing with and the general difficulty of getting everyone on the same page. Sometime in this time frame, my Meetup account was hacked, and I stopped receiving updates. The sad thing is, so much else was going on that I didn't get a feeling for what was happening until just a few days ago. A post to Twitter, though, definitely got my attention:



My first reaction was confusion, then bewilderment, then anger. How could someone hijack our group? Well, we discovered how quickly enough. In the interim, as we were looking to transition out of the previous payment and registration (which Curtis had held) and to our new payment model (which Josh and Salesforce had offered to help us with), a member of the group jumped in and made the payment for the next year. In the process, they set themselves up as the organizer, effectively blocking us from being able to administer our own group. What followed was a conversation between Curtis, the person who shall remain nameless, and the folks at Meetup. We were, of course, frustrated that this new entity took over the group from us. We were doubly frustrated when they wouldn't give it back, and we were triply frustrated, and then enraged, when the group started to get spammed with commercial training dates posing as Meetup events.

To add to the frustration, Meetup gave us the standard policy line of "sorry, it's not up to us, you need to work this out with the new organizer, as they have paid for the group". It was at this point that Curtis, Josh and I, as well as many other people, went on the offensive and started to look for other options. Would we abandon BAST? Make a new group? Drop Meetup altogether and go to another service (Eventbrite perhaps?). Several people took to the interwebs and started calling out the individual who had hijacked the group, and Eventbrite responded to us publicly and said that, if Meetup couldn't help us, they'd be happy to help us establish a new home.

This story, I am happy to say, has a happy ending. Due to the pressure and vocal comments from several members of our broader community (many of whom aren't even in San Francisco), Eventbrite's willingness to help us make a new home, and our own members posting to the BAST forum that they would not stand for their group being hijacked and would therefore leave (myself being one who posted such a comment), Meetup came back to us, allowed us to reinstate Curtis, Josh and myself as organizers, and we were able to ban the person who hijacked the group from further participation.

Several lessons came out of this experience, but none more poignant than the fact that we realized we had a great thing, and that it could be taken away from us far more easily than we ever imagined. Second to that was the fact that we have a great community that cares about what we have created and was willing to fight on our behalf, not just to say how sorry they were that this was happening, but to actually step in and help us resolve it. Third is that people vote with their feet, and if you abuse their trust, you can lose their support very quickly. Fourth, it showed me that we can't take this group for granted. It needs to be cared for and it needs to be nurtured. To that end, for those who left BAST because of this recent mutiny, and for those who were unhappy with the change of events and chose to abandon ship, we want to let you know that everything is back to normal (perhaps even better than normal) and that Josh, Curtis and I are looking forward to making 2015 an active and involved year of discussions and get-togethers. Perhaps we let time and momentum take over, but this recent upheaval showed us all how much this group mattered to us, and we are not going to let such a thing happen again. If you left us, please come back. If you want to help us develop events for 2015, we'd love to hear from you. Most of all, we look forward to getting together to socialize/network/schmooze with some of the more interesting people on any software development team .... the testers ;)!

Crash Course: Making "Dry Subjects" Fun

Friday, December 05, 2014 02:41 AM

As a software tester, this next recommendation is probably going to seem a little out there, but I think if you spend some quality time with it, you'll feel differently. Also, I'm a total fanboy of this series, and I want to make sure as many people know about it as possible.

Have you ever wanted to have an introduction to, and then a continued consideration of, topics that are big, meaty and maybe just a little terrifying? Would you like to have those topics be fun to listen to, and be something you'd like to go back to again and again? Finally, would you like something that would be a great catalyst to help you change up the way you think about the world and, well, how you think in general?

Then Crash Course is for you :).

What's Crash Course, you ask? It's the brainchild of brothers John and Hank Green. Put simply, it's a series of video collections that cover a variety of course areas. World History, U.S. History and Literature are taught by John. Chemistry, Biology, Ecology and Psychology are taught by Hank. A collection called "Big History" is taught by both of them with input from Emily Graslie of "The Brain Scoop". All of these "courses" are available via the Crash Course YouTube channel. Below is a sample video that explains the series:

OK, so this is the preview they used to launch the series in early 2011.
They have *lots* of videos up now :).



So why would I bring this to a testing audience's attention? Because it has been my experience that the more literate we are, the more engaged we are with our work. We talk a mean game about being epistemologists, so learning how we learn and what we learn should always be a priority. Additionally, we bemoan at times that testers are lacking in scientific education and scientific rigor.

Also, and my personal favorite reason, we don't know where we are going unless we know where we have been. Perhaps it's my own personal projection going on here, but it's possible that I enjoy these so much because I NEED them to help fill out stuff I would have learned at a younger age had I actually been a diligent student.

Regardless, I think these are great examples of how to take potentially tough topics and make them engaging. They may not work for you the same way they work for me, but give them a try, and let me know if you don't find them as fun and as addicting as I do :).

Book Review: Accessibility Handbook

Friday, December 12, 2014 17:57 PM

The past couple of years have been telling ones for me, in that I took on the responsibility at a new job to oversee the testing and updating of a group of stories that were the focus of an accessibility audit. By doing so, I walked into the world of Accessibility Testing and site development with Accessibility as its focus.

There are a handful of tools out there, and some books that describe what Accessibility means and things to consider when testing sites, but I was confused as to how to actually make the sites I was testing accessible in the first place. Katie Cunningham, the author of the Accessibility Handbook, felt the same way. Her goal was to make a book so that people who were programming websites would have a quick reference as to the what and the how of making sites accessible, with an emphasis on Section 508 compliance, which is the primary standard for Accessibility in the United States. Cunningham stated her goal for the book as follows:


"I decided to write a book that focused on the disabilities rather than the patches. Yes, alt text should always be used and tables should always be scoped. What’s even more important to understand is how poor alt text or tables with no scopes affect the experience of a user. Understanding a user’s tools and limitations helps developers and designers make the next generation of web applications without excluding anyone.”

So how does the "Accessibility Handbook” measure up to that stated goal?

The book devotes each chapter to a different physical challenge, and what defines that challenge as per the recommendations spelled out in Section 508. Each chapter describes a set of “Annoyances” that would be present for the user without accessibility considerations, a list of tools available to users with that challenge, and the methods used to remedy those issues.

Chapter One focuses on “Complete Blindness” and the primary tool for those who are legally or medically blind: the screen reader. Drawing on a variety of tools depending on the platform, this chapter explores how to optimize HTML and CSS to work well with screen readers. WAI-ARIA tags are discussed, along with ways the product can be tested.
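As a tester, one of the quickest wins in this area is simply catching images with no alt text before a screen reader user ever hits them. This isn't from the book; it's just a minimal sketch of that kind of check using Python's standard library `html.parser` (the sample page markup is made up for illustration):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

def find_images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

# Hypothetical page fragment: one image is labeled, one is not.
page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_images_missing_alt(page))  # → ['chart.png']
```

A real audit tool would do far more (empty alt on meaningful images, table scopes, heading order), but even a toy script like this makes the "annoyance" concrete: whatever it flags is content a screen reader user simply loses.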

Chapter Two focuses on other types of Visual Accessibility, including issues related to Color Blindness and contrasting colors, as well as issues dealing with low vision (where the challenge is that text is too small rather than completely unreadable).
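Color contrast is one of the few accessibility checks that is purely mechanical. The sketch below (mine, not the book's) computes the WCAG 2.0 contrast ratio between two colors, which is the number the common "4.5:1 for normal text" rule refers to:

```python
def _luminance(rgb):
    # WCAG 2.0 relative luminance from 8-bit sRGB channel values
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast; light gray on white fails.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))     # → 21.0
print(contrast_ratio((200, 200, 200), (255, 255, 255)) >= 4.5)  # → False
```

Running your palette's foreground/background pairs through something like this is a cheap way to find the combinations that will frustrate low vision and color blind users before a designer ever has to argue about it.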

Chapter Three deals with Audio Accessibility, which could be for those who are deaf or seriously hearing impaired.

Chapter Four focuses on physical disabilities, and alternative ways to navigate around the page.

Chapter Five deals with a variety of Cognitive Disabilities, including Dyslexia and ADD/ADHD, and how a variety of formatting options can make working with these individuals easier.

Chapter Six is about Selling accessibility to the organization.

Chapter Seven is for Additional Resources to help get the most out of developing for accessibility, including resources for information, for testing, for design, and about the various tools available for them.

Bottom Line: This is a thin book, coming in at 98 pages total (80 pages of specific content), but don’t let its size fool you. This book will pay for itself with the first accessibility issues you find. As you get better, you will be tempted to start creating unique personas for each of the areas, and by all means, do so. The process of seeing how solutions are presented, and how to make changes to those solutions, is well worth the purchase price.

A Yankee Let Loose In Eire: Some Less "Testy" Reflections

Monday, December 01, 2014 19:45 PM

As it has probably been abundantly clear, I spent most of last week in Ireland, and the remainder of the work week getting there and back. Some of the situations I witnessed were humorous, sometimes frustrating, often educational, and also very eye opening.

Let's start with the actual travel. As is routinely the case, we are advised to get to the airport three hours ahead of time when any international travel is involved. I heeded that call, and was, of course, in and out of security in less than twenty minutes. Having the freedom to therefore relax and just wait for my flight, I did exactly that. As a little bit of housekeeping, I went through and got everything together I could for the flight, and made sure my adapter was working. I reminded myself that last time, I brought a power strip with me to help make things easier, and the net result was blowing a fuse in my hotel room in Malmö. This time I learned: one device at a time. I also greatly reduced my packing footprint. Just my MacBook Pro, my iPhone, their power cords, and the converter to handle UK/Ireland monster plugs. Through judicious packing, I managed to get everything I needed into a single bag I could carry on my back. It was thick, but it met requirements to get under the seat, so I was golden.

I am grateful that transcontinental and transoceanic flights now have power outlets readily available in most seat rows, if not for each and every seat. I finally had the ability to keep power to my devices for the entire trip, which was wonderful. Keeping the devices actually plugged in? That's another story. I think with the frequency of use of these receptacles, it's near impossible to keep an adapter plugged in without it dropping out at some point. I quickly became adept at rigging up various jigs to hold the adapter plugs in place so they wouldn't disconnect mid flight.

My two hour layover in Washington, D.C. turned into a five hour layover because the oven in our trans-Atlantic flight was having problems. I'm sure some of you might be thinking "an oven caused that much trouble?" Actually, yes. Without an oven, a trans-Atlantic flight cannot heat food, and going seven hours without food makes for a cranky set of passengers, so we were all shuttled off the plane and onto another plane at another terminal, along with its requisite checks (yes, the oven worked this time :) ). One fantastic bonus was the fact that I had no seat mates for this flight. Not that I dislike conversation with my fellow travelers, but the fact I could actually lie down across three seats, stretch out, and get some actual sleep? Awesome!!!

Our flight landed in Dublin at just a little after 12:00 noon. A quick step through Customs and Immigration, including explaining why I was in Dublin ("a software testing conference"... "a what?!" "yes, a software testing conference". I'm now three for three on having to explain that ;) ), and a trot over to get on the double decker green air bus, and I was whisked away to the Dublin Convention Center. I kid you not, I walked in, got my badge, walked up to the third level auditorium, sat down, opened my MacBook, and the official conference program and opening keynote started, right then and there. Sure, I'd have liked to have checked into my hotel first, and maybe changed clothes, but in a pinch, this was fine.

The half day program went quickly (my full running commentary of day one of EuroSTAR can be seen here) and at the close of the official session, we were treated to a drum corps that got our attention and then led us downstairs for the evening reception. There was a dinner being held at Trinity College, but I didn't realize until it was too late that it required a separate registration, so I was happy to go out and explore Poet's Corner with Michael Bolton, Zeger Van Hese, Jokin Aspiazu, Ruud Cox and several other testers, including helpers in The Test Lab. We settled on The Bachelor Inn, which was a nice pub with good food and drink (and even something for a teetotaler like myself :) ). As is always the case, the conversation was wonderful, and it is so hard to resist the temptation to hang out all night at these events. Alas, I was missing eight hours from my day, and I knew if I didn't make at least an effort to get some sleep, I would be struggling the next day.

I stayed at a little place called the Maldron Pearse, which was about half a mile from the Convention Center, across the Samuel Beckett Bridge, and inside an area of Dublin that was an interesting mix of old and new. Many fresh new buildings stood next to those that looked like they were built in the early 1800s (or earlier). The Maldron Pearse is an older hotel, but undergoing some modernization. For the first time in a while, I was hit with being charged separately for WiFi service, so I agreed to pay the first night, but not thereafter (it worked out to close to $20 a day!). Another factor that took a little getting used to is the latitude. At this time of year, sunrise wasn't until 8:00 a.m. and sunset happened just before 4:00 p.m. Also, though it was chilly in the mornings and the later evenings, it never felt frigid. I was able to make do with regular street clothes and a light snowboard jacket most days (it mostly stayed in the 40s F).

Day two covered a lot of ground. It would be the one full day end to end, so I made sure I was actively engaged in each session. My comments on day two can be seen here. Additionally, I took some time to check out the Test Lab and see some of what they constructed and how they were encouraging participants to get involved. Outside of that, I will confess to walking over to chat with some friends over at SmartBear, but I spent very little time in the expo itself. If there was any one thing I was able to take away from the conference (and this may come down as blasphemy to some), it's that the tools and the peripheral software rarely solve the real issues facing a company or an initiative. There are so many issues that are more important to focus on up front that, frankly, if your biggest problem is that you don't have the right CRM solution or you need a different test management suite, you're probably doing awesome.

For me, the bigger view was the fact that, even in Europe, the problems tend to be universal. They are issues with communication, with culture, with hiring, with getting a disparate group to mesh. In fact, if there was any one takeaway that I could sum up from this conference, it would be the fact that we are focusing a lot of attention on "physical diversity" (which is great, do not get me wrong, and I'm happy to see that happening), but we are still failing at recognizing the "emotional diversity" that our teams carry. We can do everything right on the hiring front as far as the gender, ethnic background, and sexual orientation, and yet we can still build teams that are remarkably homogeneous, because we tend to hire people like us, all external aspects considered and factored out. Getting a truly diverse team means you have to go into harder areas to quantify, such as emotional connection, communication styles, verbal and written expression, analytical and creative thinking, and being willing to be fluid with roles and responsibilities. Seriously, good luck getting a software tool to help you with that.

One of my favorite aspects of any conference is the ability to meet people I've never met before, but have had some communication with through other mediums. Software testing has taken to social media like few other disciplines I have seen. Through blogs, Twitter, Facebook, Quora and other initiatives I participate in, there are so many software testers that I "know" but hadn't met in person. This event really drove home how many people "knew" me, by reputation, by prose, by initiatives I have been part of. I was able to meet someone in person for the first time and realize they knew a tremendous amount about me, about how I think, and about how I communicate. It was a thrilling feeling, and at times, I will confess, a little unsettling. Granted, I opened myself up for that by having a blog and a presence in social media, but I never quite get used to the feeling of talking to someone and looking to explain how I feel about an issue, only to have them answer that they already know how I feel about it; they'd read my comments on it just last month. Still, I really appreciate that so many people actually tune in to what I have to say. It's really humbling.

Our evening entertainment for Wednesday was an awards dinner and reception at Croke Park, which is home to the Gaelic Athletic Association and the two national sports (outside of football) for the Irish: Hurling and Gaelic Football. Before this evening, I had no idea how big a deal these two games were in Ireland, and seeing the history, the names, and the highlight reels certainly drilled that point home. Croke Park Stadium felt huge to me. I cannot say whether or not it was the size of our American football stadiums back in the States, but it certainly felt like it. Having had a wonderful night of conversation with friends old and new, a very filling traditional Irish dinner, and some deserved awards (knowing that Rikard Edgren won the best paper award put it at the top of my list for items to review when I got home), the buses brought us back to the convention center, and we all made our way back to the comforts of our rooms, to sleep and prepare for another day.

For my final full day in Dublin, we took part in Day Three of the conference (all of which from my perspective can be read about here), and several talks that, again, dealt with the real issues that teams face. Again, I will emphasize: the problems with software and products that ultimately fail have less to do with technology and tools, and more to do with people and interactions. Unless we get that part right, ultimately what we do on the product front will be less effective than it can be. Additionally, we all need to realize that the problems on the people front are the hardest to solve, and take the most time, talent and energy. I appreciated very much Shmuel Gershon stepping in to do a last minute keynote (which he did a fantastic job with) and Zeger Van Hese's closing keynote about the interconnectedness of everything we do. I also have to thank Zeger for giving me a term that now looms large in my reality (Tsundoku), and a fighting desire to do something about it :).

After the closing keynote, we had a session that was about programming for testers. While I have had some experience with programming, I often appreciate these workshops because I like to see how they go from zero to sixty in however long it takes, and what we walk away with in the process. For me, I walked away with a free IDE I'd never used called Geany, and some quick and dirty tips as to how to get people who had never programmed up with some quick wins and the desire to keep going. My intention is to use the same ideas as I pair program with my daughter in the coming weeks. Geany seems a good tool to do what I am hoping to.

Our final night together had a bunch of us making our way over to St. James's Gate, with a tour of the Guinness Storehouse, and a museum dedicated to what is quite possibly the most iconic of Irish products. The tour was a great deal of fun, with a lot of history, some insights into a nearly 300 year old company, and what they have done to remain both profitable and relevant. Some good lessons overall in the tenacity of vision, and the willingness to play the long game (they have as one of their first exhibits a document that is a 9,000 year lease for the St. James's Gate property. Now that is long term thinking!). After a lesson in how to "pull the perfect pint from a nitrogen tap" and some breathtaking views from the top of the Storehouse in the Gravity Bar, we went to get some dinner and continue the conversations in Crown Alley at one of the better known and packed full pubs, where I was able to get an Irish translation of an American Thanksgiving dinner (which was quite enjoyable, I have to say :) ). A little walking around, and a little more conversation, then came the realization that there would be an early morning cab ride to the airport, some more waiting, and a long stretch to get home. I took my leave and got a couple of hours of shuteye, then packed up, got into a cab and made my way to the Dublin Airport.

A quick note for people in the States who complain about how much things cost. I found myself regularly doing the mental conversion of typical meals, costs for cabs, general purchases, etc., and I can honestly say that Ireland (or at least Dublin) has a higher cost of living than my home town. I could chalk it up to being in touristy areas, but overall, I was still surprised at the cost of many items that I get for much less back home. Traveling to other parts of the world always opens one's eyes to the differences, even in little things.

The flight home I knew was going to be long. I'd have a seven hour layover in Toronto, as well as an almost seven hour flight from Dublin to Toronto, and a six and a half hour flight from Toronto to San Francisco. Interestingly, with my battle plan in force (reading through my collection of e-books and taking notes) coupled with the time it took to go through Immigration and customs in Canada (which to my surprise, meant I had no visit with customs in the U.S. when I arrived back home), those seven hours went much faster than I anticipated. The flight from Toronto to San Francisco, honestly, I slept most of the way. I landed just a little after 11:20 p.m. and by the time I walked into my house at two minutes to midnight (cue Iron Maiden in the background ;) ), I had spent twenty seven hours traveling door to door. Needless to say, I spent most of the weekend recuperating and getting myself back onto Pacific Standard Time. Today, I feel like I'm mostly back to normal.

To the organizers of EuroSTAR, I wish to say thank you for inviting me to your conference, and for giving me a free conference pass as "The Green Tester". It was an interesting situation to be a Yankee abroad, and to realize that I was one of the few people from the U.S. at this event. To hear so many different accents, so many different stories and situations, and to feel a part of a slightly bigger world, I am grateful for the experience. Additionally, to be a delegate without any other obligations, without having to speak, work a booth, do some background work, or other involvement that I have done the past five years, it was a terrific experience to just be at liberty to seek out and find answers to my own questions. In many ways, I did just that. For several questions, I didn't find answers, but I did find new avenues to explore and consider. I'd say that makes for a successful week :).

Book Review: Pride and Paradev

Friday, December 12, 2014 17:57 PM

As I have been looking through and wanting to dig out of my deep hole of collected and unread book titles, I also wanted to look at and give attention to books that I have that are a little less common or advertised.

I completely support the model of self-publishing and services like LeanPub and Lulu, and the opportunities it gives to those in the field to publish their ideas without having to wait for a formal publisher to put it out. Additionally, I wanted to start having a give and take between books that are recent (like my review for “More Agile Testing”) and titles I’ve had for a year and more and haven’t yet reviewed. To that end, I am excited to give some time and attention today to Alister Scott’s "Pride and Paradev”.

“Pride and Paradev” is a short e-book, clocking in at just under 100 pages, yet it is probably the most concise and specific “what” book yet written about Agile testing and the contradictions that exist. In fact, the book’s subtitle is “A collection of agile software testing contradictions". Every question that is posed is answered with a yes and a no, a should and a shouldn’t, with clear explanations as to why both make sense in given contexts.

Alister is the author of the WatirMelon blog, and all of these contradictions are explored there. Wait, if that’s the case, then why should we go and get this book? Lots of reasons, really. For starters, they are all gathered together in one place, which makes it convenient. It can also be loaded onto your favorite reading device, or downloaded to your computer, and used offline as you see fit. Also, 10% of the proceeds from the sales of the e-book go towards helping the homeless in Australia, which I think is a perfectly awesome goal :).


Back to the book, and specifically the title, Alister states that a “paradev” is anyone on a software team that doesn't just do programming. That definitely includes software testers. As he pointed out in the introduction of the book:

“…some quick etymology […] para is also used to indicate “beyond, past, by” (think paradox: which translates to "beyond belief" ). This same reasoning translates paradev into "beyond dev" or "past dev". […] paradevs are the people on the team that don’t box themselves into a narrow definition, happy to be flexible, and actually are happy to work on different things.”

The contradictions that Alister focuses on in the book are as follows:
  • Do agile teams even need a software tester?
  • Do agile software testers need technical skills?
  • Are software testers the gatekeepers or guardians of quality?
  • Should agile testers fix the bugs they find?
  • Should testers write the acceptance criteria?
  • Is software testing a good career choice?
  • Is it beneficial to attend software testing conferences? 
  • Should testers get a testing certification?
  • Should acceptance criteria be implicit or explicit?
  • Should your acceptance criteria be specified as Given/When/Then or checklists? 
  • Are physical or virtual story walls better?
  • Which is better: manual or automated testing?
  • Can we just test it in production?
  • What type of test environment should we test in?
  • Should you use test controllers for testing?
  • Should you use production data or generate test data for testing?
  • Should you test in old versions of Internet Explorer?
  • Should you use a tool to track bugs?
  • Should you raise trivial bugs?
  • Should you involve real users in testing?
  • Do you need an automated acceptance testing framework?
  • Who should write your automated acceptance tests?
  • What language should you use for your automated acceptance tests?
  • Should you use the Given/When/Then format to specify automated acceptance tests?
  • Should your element selectors be text or value based? 

Each of these questions is bolstered with quotes from programmers, testers, writers, celebrities, politicians, philosophers, and others to help make the case for each of the points where appropriate (and yes, it adds a dose of fun to the sections).

The book ends with three non-contradictions, which sum up the rest of the book pretty handily:
  • You can only grow by changing your mind.
  • Everything is contextual.
  • You can always choose your reaction.

Bottom Line: Testing is often more than just testing. It involves many disciplines, and in that way, testers go beyond just the programming of software. If you chafe at the title of “tester” and feel in the mood to provoke some interesting conversations, start referring to yourself as a “paradev” and see where the conversations go. If you do that, I would highly recommend getting this book, reading through its contradictions, and deciding when and where they are ones you should heed or ignore, do or do not. It’s ultimately up to you. As for me and my testers, including my daughter, I’m going to encourage discussions around being a “paradev”, and I’m going to use this book to do exactly that.

Teaching My Daughter to Code and Test: Beginnings

Sunday, November 30, 2014 23:30 PM

Earlier this month, I brought my 13-year-old daughter Amber home from a presentation at Google/YouTube called "Made With Code". Amber came out of the presentation energized, excited, and saying how cool it was what they showed her.


This is the first time that Amber has been "excited" about how computers work outside of a "user's" perspective, thinking about computers and computing from a "maker's" perspective instead. As we talked, I was thinking about some freelance coding that I do, and how it might be fun to have her learn a bit about web development, do some stuff and "push it to production", and pay her a bit for her efforts.


It also seems that there might be some interesting "discoveries" and reactions from two perspectives, mine and hers, that might make for a cool blog series. With that, we both decided to work towards building some practice times into our days, and see if the concepts that I have been learning over the years are easily teachable, or if I might learn more from her interactions than she does from me.


Additionally, I figured it would be more interesting to see the experiments and the realizations we come to in somewhat real time rather than wait several months to do a more formal synopsis, so that's also what we will be doing over the coming weeks and months (for those who wondered if my Tsundoku post owed in part to this initiative, the answer is "definitely yes" :) ).


So you will start to see some "shared posts" in this space. If it's my perspective, it will be in a standard layout and font color. If you see highlighted green text, that's Amber, speaking in her own words. Over time, she may choose to make full blog posts here, and those will reflect that in the title, but for now, just know when you see green highlighted text, that is her.


One of the things we decided to start with was to get her focused on something simple, where it would be easy to see and make changes. To that end, we thought it would make sense to have her practice with Codecademy to learn basic details of web formatting and style. Early on, we both decided that a little each day would be a better approach than trying to do a whole bunch at one time.


For the past three weeks I have been working on learning the basics of HTML and CSS in Codecademy. I have found that it is easier to do a little of it every day and keep the streak up than do it all at once in one big shot. I have learned the very, very basics.


One of the things that has been fun, if not a little annoying, has been to have my Dad sitting next to me and helping me with some of the assignments and examples. I say fun because I like the fact that he can help me understand what is happening. I say annoying because he's my Dad. What I mean by that is that sometimes he's a little too quick to tell me when I am doing something wrong without me learning myself. We finally made an agreement that I would work on my own computer and that I would call him over only when I felt stuck or confused. While I appreciate his input, I told him I was not going to learn anything with him always hovering over me and telling me what to do.


Other than that, I am happy to say that, since joining Codecademy on November 10th, I have done a little bit each day, and I have a 19 day streak as of now. My Dad checked up on me every day that he was away in Ireland to make sure I kept my streak alive, and I was happy to say I did.

It has been interesting to see the ways that Amber interacts with me as we work together on the Codecademy projects. Speaking of which, we have a session scheduled for a little later today (and that will extend her streak to twenty days ;) ). I am really curious to see where this journey will lead us both, and what we both learn from the experience.

Book Review: More Agile Testing

Friday, December 12, 2014 17:57 PM



In honor of running into Janet Gregory at EuroSTAR and listening to her talk about "Testing Traps to Avoid in Agile Teams", I told her I felt it only proper to break my Tsundoku and commit to reading “More Agile Testing” on my flight from Ireland to Toronto and during my layover for my flight back to SFO and review it before I got home. Did I succeed? I did, and I am glad I had the dedicated time to do exactly that. This book is so rich with information that you will need to spend some quality time with it.

First, let’s set some context. This is the sequel to the "Agile Testing" book that Lisa Crispin and Janet Gregory wrote back in 2008, and that I did a review of in the early days of the TESTHEAD blog back in 2010. In that review, I said that I hadn't spent enough time on an Agile team to do the book justice, so I reviewed it based on how Agile seemed to me and the advice given. This time, with More Agile Testing, I have four and a half years of experience with Agile teams, and I can categorically say yes, this book addresses many of the challenges Agilists go through, especially Agile Testers.

Agile has grown and matured over the past several years. Some may say it has a clearer picture of itself; others may say it's become fragmented and just another marketing gimmick. Some may complain that Agile programming is a thing, but Agile Testing? All of this points to the fact that there are questions, dilemmas and issues in the world of Agile, and nowhere is that more clear (or more muddled) than for the Agile Tester. Are we an appendage? Are we an integrated member of the team? Are we an anachronism? What about DevOps? Continuous Delivery? Testing in Production? Lisa and Janet take on all of these issues, and more.

More Agile Testing is not a “how” book. It’s not filled with recipes of how to be an Agile tester… at least not on the surface. Don’t get me wrong, there is a ton of actionable stuff in this book, and anyone working with Agile teams will learn a lot and develop some new appreciation and approaches. What I mean about it not being a “how” book is that it doesn’t tell you specifically what to do. Instead, it is a “what” book, and there’s a whole lot of “what” in its pages. Like its predecessor, More Agile Testing does not need to be read cover to cover (though that’s a perfectly good way to read it, and the first time through, I’d highly recommend doing just that). Instead, each section can stand on its own, and each chapter is formatted to address specific challenges Agile teams face. 

The book is broken up into eight sections. The first is an overview of where Agile has evolved, and the new aspects that are in play that were not so prevalent in 2008 when Agile Testing came out. In addition, it takes a look at the ways that organizations have changed, and the new landscape of software development for applications that span the gamut from desktop to web to mobile to embedded to the Internet of Things.

Section Two is all about Learning for Better Testing. From determining roles and adapting to new needs, to developing T-shaped team members to make box shaped teams, and helping testers (and those interested in testing) develop more in depth thinking skills and work habits to be more effective. 

Section Three focuses on planning. No, not the massive up front planning of traditional development, but the fact that even the just in time and just enough process crowd does more planning than they give themselves credit for, and that the ways we do it can be pretty hit and miss. This section also goes back to the Agile testing quadrants and reviews how each has its own planning challenges. 

Section Four focuses on Testing Business Value. In short, are we building the right thing? Are we getting the right people involved? Do we have a clear vision of what our customer wants, and are we engaging and provoking the conversations necessary to help deliver on that promise? This section focuses on developing examples and using methodologies like ATDD and BDD, and identifying what we do know and what we don’t know.

Section Five places an emphasis on Exploratory Testing. What it is, what it’s not, developing testing charters, working with personas and tours, and working with the other varieties of testing needs, helping make sure our explorations also include territory not typically considered the realm of the explorer (such as Concurrency, Localization, Accessibility, UX, etc.).

Section Six focuses on Test Automation. Note, this talks about the concepts of test automation, not a prepackaged approach to doing test automation or a specific framework to use and modify based on examples, though it gives plenty of links to help the interested party find what they are looking for and lots more. 

Section Seven is all about context, specifically, what happens when we address testing in different organizations and with different levels of maturity and tooling? Version control, CI, and working with other teams and customers are addressed here, as are questions of Agile in a distributed environment. 

Section Eight is Agile Testing in Practice, and focusing on giving testing the visibility it needs to be successful.

Appendix A shows examples of Page Object based automation using Selenium/WebDriver, and Appendix B is a list of “provocation starters”. In other words, if you are not sure what questions you want to ask your product or your programmers as you are testing, here are some open-ended options to play with.
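For those who haven't run into the Page Object idea before, the gist is that each page gets a class that hides locators and low-level driver calls behind intention-revealing methods, so tests read like user actions and only one place changes when the page does. This is not the book's actual example; it's a minimal sketch of the pattern in Python, with hypothetical names and a stub driver standing in for WebDriver so it runs without a browser:

```python
class FakeElement:
    """Stands in for a WebDriver element so the sketch runs without a browser."""
    def __init__(self):
        self.typed = None
        self.clicked = False

    def send_keys(self, text):
        self.typed = text

    def click(self):
        self.clicked = True

class FakeDriver:
    """Hands back a FakeElement for any locator, remembering each one."""
    def __init__(self):
        self.elements = {}

    def find_element_by_id(self, locator):
        return self.elements.setdefault(locator, FakeElement())

class LoginPage:
    """The page object: locators live here, tests never see them."""
    USERNAME, PASSWORD, SUBMIT = "username", "password", "submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element_by_id(self.USERNAME).send_keys(user)
        self.driver.find_element_by_id(self.PASSWORD).send_keys(password)
        self.driver.find_element_by_id(self.SUBMIT).click()

driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
print(driver.elements["username"].typed)  # → alice
print(driver.elements["submit"].clicked)  # → True
```

With a real Selenium driver in place of the stub, the test itself stays a one-liner (`LoginPage(driver).log_in(...)`), which is exactly the maintainability win the pattern is after.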

In addition to the aggregate of Lisa and Janet’s experience, there are dozens of sidebars throughout the book with multiple guest contributors explaining how they implement Agile in their organizations, and the tapestry of similarities and differences they have seen trying to make Agile work in organizations as diverse and different as each of the contributors.


Bottom Line: If you are brand new to Agile software development and Agile testing, this may not be the best place to start, as it expects that you already know about Agile practices. Having said that, I didn't see anything in this book that would be too hard for a beginner, with team guidance, to consider, implement and experiment with. However, if they have already read Agile Testing, and are hankering for more ideas to consider, then More Agile Testing will definitely help scratch that itch. Again, this is not a “how” book. This is a “what” and “why” book, but it has lots of great jumping off points for the interested Agile tester to go and find the “how” that they are looking for. As a follow-on and sequel to an already solid first book, this is a welcome update, and IMO worth the time to read and reread.

Solving my Tsundoku: The Return of TESTHEAD Book Reviews

Thursday, December 18, 2014 11:51 PM

As I was settling in and preparing for my journey home from Ireland (which will include a flight from Dublin to Toronto (roughly eight hours), plus a seven-hour layover, and then a flight from Toronto to San Francisco (roughly another seven hours)), I figured it was a great time to dig in and work through some of the books I have received to review, as well as some I have picked up due to the work that I do as a tester and occasional programmer.

I receive a number of e-books in PDF format from a variety of sources. Some are offered to me free to review; many more are purchased by me to work through and bulk up my geek brain (well, that was the goal, in any event). At some point, the desire to read and apply got overtaken by the real life aspects of work, family, testing initiatives and other things I do. All the while, my book pile keeps getting bigger and bigger.

Zeger van Hese gave a great keynote talk at EuroSTAR this week, and in the process, he talked a bit about those great linguistic terms that English doesn't have a succinct single word for. One example that resonated with me (to the point of causing physical discomfort, to be honest) was the Japanese word "Tsundoku" (積ん読, hiragana つんどく), which is, according to Wiktionary, "(informal) the act of leaving a book unread after buying it, typically piled up together with other such unread books".

The first way to deal with a problem is to realize you have a problem, and to that end, I have decided I am going to do something about that problem. How so? By making one of my "bold boasts" I make from time to time. Since we are still a few weeks before New Year's, I cannot be accused of making a New Year's Resolution (since I don't make them ;) ), but I can declare a new goal, and that new goal is that I shall henceforth and forthwith start whittling down my book collection and actually read, apply and review the stack of books that I have. To that end, you may expect, in no particular order, reviews to start appearing for the following:
  1. Accessibility Handbook
  2. Apache JMeter
  3. Application Testing with Capybara
  4. Backbone.js Cookbook
  5. Beginning PHP 6 Apache MySQL 6 Web Development
  6. Computer Science Programming Basics in Ruby
  7. Confident Ruby
  8. Crackproof Your Software
  9. Design Accessible Web Sites
  10. Design Driven Testing
  11. Eloquent JavaScript, 2nd Edition
  12. Everyday Scripting with Ruby
  13. Exceptional Ruby
  14. Good Math
  15. Head First Ajax
  16. Head First HTML and CSS
  17. Head First HTML5 Programming
  18. Head First JavaScript
  19. Head First JavaScript Programming
  20. Head First Mobile Web
  21. Head First PHP and MySQL
  22. Head First SQL
  23. Head First jQuery
  24. Higher Order Perl
  25. How Linux Works
  26. Jasmine JavaScript Testing
  27. JavaScript for Kids
  28. JavaScript Security
  29. JavaScript Testing Beginner's Guide
  30. JavaScript and JSON Essentials
  31. JMeter Cookbook
  32. jQuery Cookbook
  33. Kali Linux Network Scanning Cookbook
  34. Lauren Ipsum: A Story About Computer Science and Other Improbable Things
  35. Learning JavaScript Data Structures and Algorithms
  36. Learning Metasploit Exploitation and Development
  37. Learning Python Testing
  38. Manga Guide to Physics
  39. Manga Guide to Statistics
  40. Mastering Regular Expressions
  41. Metaprogramming Ruby
  42. Metasploit Penetration Testing Cookbook
  43. Metasploit The Penetration Testers Guide
  44. Modern Perl
  45. More Agile Testing
  46. PHP MySQL JavaScript HTML5 All in One for Dummies
  47. Pride and Paradev
  48. Pro HTML5 Accessibility
  49. Rails Crash Course
  50. Regular Expressions Cookbook
  51. Responsive Web Design By Example
  52. Robot Framework Test Automation
  53. Ruby Wizards
  54. Running Lean
  55. Selenium Design Patterns and Best Practices
  56. Selenium WebDriver Practical Guide
  57. Specification by Example
  58. Test Driven Web Development with Python
  59. TestComplete Cookbook
  60. TestComplete Made Easier
  61. The Art of Application Performance Testing
  62. The Art of Software Testing, 3rd Edition
  63. The Selenium Guidebook
  64. The Well Grounded Rubyist
  65. Web Development with Django Cookbook
  66. Web Penetration Testing with Kali Linux
  67. Webbots Spiders and Screen Scrapers 2nd edition
  68. Wicked Cool PHP
  69. Wicked Cool Ruby Scripts
  70. Wireshark Essentials
  71. Zero to One
Yes, this is a line in the sand. Yes, I intend to fix this problem of mine. No, I cannot say which order these reviews will appear, but be sure, they are coming. Yes, I intend to have at least one posted either before I leave Toronto or by the time I leave SFO (i.e. my home airport). Yes, I encourage you all to call me on it if I slack off.

One way or another, this begins today, and it will not finish until all of the stack is read, worked and commented on. That may take a while ;). 

Green Grass and High Tides: Day Three at #esconfs

Thursday, November 27, 2014 6:58 PM

It's been a great few days here in my briefly adopted home. So many great opportunities to speak with friends from the Twittersphere, blogosphere and other arenas where personal "meetspace" has not been a factor; having that in-person opportunity has proven to be so worthwhile.

I've had the pleasure of meeting so many virtual friends in person, with a highlight being meeting and enjoying dinner with Julie Gardiner (new in person acquaintance) and Dawn Haynes (who I've known in person and have worked with in various capacities the past few years), among others. The awards dinner was held last night at Croke Park, home to a rather large stadium dedicated to Hurling and Gaelic Football. Below for your amusement is a shot of yours truly trying his hand at hitting a ball with a hurling stick (and if I'm mangling the lingo, please forgive me ;) ).

A bit about Dublin. The city center is small, easily walkable, with lots to see and do. The blending of old and new is everywhere apparent. One of the buildings that I saw had an inscription over the top saying it was for the "British and Irish Steam Packet Company". I am not even going to pretend to know what a Steam packet is, but it was cool to see these old buildings now being the current homes of telecommunications and web design companies, among other things.

Today is the final day of the conference proper, and there's a change in the program. One of the keynote speakers had to drop out at the last minute, so my friend Shmuel Gershon will be delivering the morning address. Speaking of which, I should get up there so I can actually report on it.

---

Shmuel started his talk with the fact that our world is completely dependent on software. Fifty years ago, this was not the case. Seventy-five years ago, there was no software to speak of for the vast majority of people (and for those who did interact with it, it was in its infancy). Today, we cannot imagine living our lives without it. In some ways, we as people have a chance to have a taste of virtual immortality. It's both macabre and fascinating to think that things like my Twitter account, my Facebook feed, or this blog could conceivably live on after I do. Our ability to "persist" outside of the memories of our families and friends has, up until now, been limited to a small number of celebrated people (politicians, philosophers, scientists, celebrities). Today, common everyday people have a chance to have a piece of their ideas and ideals live on after them.

We look at books as a way to retain knowledge and transfer knowledge. This has been a means of transfer for hundreds of years. They are permanent, solid, and transmissible, but they are difficult to change (books need to be reprinted to get new versions into people's hands). Today, with the development of software and electronic books, updating those titles is much easier, and the distribution is as simple as a button click. In previous years, if a publisher wanted to send me a physical book to review, it incurred a printing cost and a shipping cost to get it to me. Today, most of the books I review come in the form of PDFs and through email messages, with links to download. There is a cost associated with it, but it is much smaller than before, and getting updated copies is, again, just a button click away.

Software is more than just a product. It is now a primary means of transferring knowledge and information. Having a product by itself is no longer sufficient. Now we need to be clear that the software we are producing is actually capturing and transferring the knowledge that we have and understand. This change is also filtering into the way that we test. Having the functions work the way we expect them to is not enough. We need to make sure that we are transferring knowledge and sharing information with our software creations. Programmers are filling software with knowledge, both tacit and explicit. We need to be clear that we are able to make sure that both the knowledge and the mechanisms to transfer it are intact. That's an interesting shift in mental paradigm.

Shmuel used the example of how Portugal was looking for a way to get to India via another route than around Africa. The mission was to find a new route to India, but in the process, the explorers discovered the South American country of Brazil. The technical mission could be considered a failure. What are you doing wasting time on this place called Brazil? You need to find a new route to India. Fortunately for Portugal, they saw that the discovery of Brazil was a fundamental change to their world and their interests. In other words, the stated mission could be seen as a failure, but the discovery and new opportunities that come from it can be substantial. Let's not be so myopic that we miss the great opportunities that we may literally stumble upon.

Most software that has staying power (think of many of the most prolific and long-lasting UNIX tools) starts as a need to "scratch a personal itch" for the programmers creating it. Most of the tools we use that are successful, especially the ones that have deep penetration and are freely available, have staying power specifically because the needs they met are generally universal. They scratch a personal itch, to be sure, but it's a good bet that the itch being scratched affects a lot of other people. Getting that feedback from others helps to determine which products will stand the test of time. The products that scratch the highest numbers of personal itches will have staying power.

When we are testing software, it may feel strange to think that we are really testing the transfer of knowledge, but once we do get that, our very vernacular of how we talk about the work we do changes. We live in an era where knowledge can be nurtured and developed in a variety of ways. Ultimately, we need to get out of the mindset of "have we shipped the product yet?" and into "are we providing our customers with the best way to help them transfer knowledge that is essential to how they do business?" Shmuel gives us a bold statement to consider: "We are not shipping a product, we are sustaining and preserving civilization". Try that on for size for a while, and see if that doesn't make you think about things a bit differently ;).

---

Next up is "Testing Traps to Avoid in Agile Teams" with Janet Gregory.  Having spent the past few years working as an embedded tester in an agile team, both as a Lone Tester and as a part of a broader testing team, I definitely have lived through much of this.

One of the aspects that has been a big change is the idea that we have to wait for a build before we can do any testing. I remember this well in my previous team, because we had a push to a demo machine that needed to be done. The term mini-waterfall gets thrown around, but perhaps a better term is the "ketchup effect", where the testers are tapping on the ketchup bottle and waiting for the ketchup to come out. When it finally does, it hits the food in one big glob. Testing is like this when we have to wait for the build to do our testing. In my current environment, we have set it up so that all of us have access to the build environment and build machines. I am able to load a build within minutes of a programmer committing changes. It's really cool that, on any given day, I can have a chat with a programmer about something they are working on, have them alert me that they have committed a change, and load that build within a few minutes of that change.

More to the point, there is a lot of testing that can be done between builds, and ways that we can loop around on the testing needed with the idea that, once we get the most recent build, we can get back to see the changes.

I know the feeling of having the programmers be the Agile component, where I was on the outside of the development process. I have had to wait until the stories come together before I can do any meaningful testing. This can still be a problem where I currently am, but we have been encouraged to be part of the process as early as possible. We practice the Three Amigos model, and that Three Amigos approach allows us to get involved very early in the process. Still, even with that, there are many times where we are waiting for DevComplete to occur before we get involved on a story beyond that first kickoff. At times, we have been able to do direct paired programming and testing, but it is more common to have the programmer do the initial work on the story before we get into the main testing. At times it can be valuable to get in immediately, at times it makes sense to wait until everything is in place for the first round of testing. We don't have to always be inserting ourselves into the initial programming, but if we can be helpful to the programmer during that early phase, then by all means, let's do that.

I've long struggled with the idea of being the "Quality Police" and trying to get out of the mindset of being the person who says "go or no go". By getting the team to focus on the quality of the product, rather than just the testers doing the legwork, we are able to get everyone's eyes involved and engaged. One of the things we do in our stories is we get into looking at acceptance criteria and the implementation. We don't file bugs for stories in process. Instead, we work through issues we discover and put the story back on the line. It's been a system that has worked pretty well, but there are ways we could probably do it more efficiently or with less turnaround time. A tighter feedback loop would certainly help with this. Additionally, testers should become more technically aware and understand the programmer's vernacular.

Automation is important, but there's a lot of testing that should be done manually. Having said that, there's a lot of repetitive work that should be automated, and keeping on top of that is a big job. Currently, my efforts are focused on more manual testing, but everyone on the team has both the ability and the chops to do some automation work. We have a dedicated automation toolsmith to handle the bulk of that work, but we all have the ability and the expectation to help out where we can. Still, there's a fair amount of automation that I think we could all be doing on the team to help us get ahead of the curve.

Having a large suite of automated tests, both at the unit test level and at the integration level, helps us keep on top of a number of things. There's also a fair amount of modification that has to be done from time to time. We try our best to make sure that the testability is there, and that the automation doesn't need to be modified regularly, but of course, things happen and new features get added all the time. The key goal is to focus on getting our most general tests and workflows automated, so that we can look at the special cases, or allow us to explore and look with fresh eyes on the areas that we may not be covering yet (am I sounding like a broken record with that yet ;)?).

One of the bigger dangers is that we sometimes forget the big picture. When we work with big systems (or when we do some large scale update), we can get myopic and focus on something too intently. Sometimes these large focuses conflict with what other people do. Some changes for accessibility, as an example, have had to be reconsidered because large swaths of automated tests ended up breaking when we shut off access to elements that we didn't intend to. Making thorough workflows, and making sure we can complete them from start to finish and in parallel with other workflows, can be a big step in helping us see how workflows interact and have a carry-on effect on other workflows. In our environment, every story has a unique branch that gets merged to the main branch, and going through and making sure that these dozens of parallel branches keep the peace with each other is an interesting process.

So overall, I recognize areas where my team can improve here, but I think we do pretty well, all things considered. I'm looking forward to seeing how we can refine this list over the coming weeks and months.

---

Next up, "Diversity in your Test Team: Embrace it or lose the best thing you have" with Julie Gardiner. Julie has been someone I have seen multiple times, but never in person. She's been recorded for various conferences, and I've enjoyed her delivery and sense of humor, and the people aspect she delivers in her message.

Just like there is no one-size-fits-all in test cases, there's no one-size-fits-all in testers, either. Diversity is a hot topic at the moment, but too often, we talk about diversity when it comes to the makeup of the team members alone. It's not just about their genetic makeup and the variety thereof, but also the skill sets that they bring to the table. When we look at getting different genders, different cultures, different life experiences, and then insist that they all do the same work the same way, we are totally missing the point. True efficiency and effectiveness from the team needs to consider what each person brings to the table, and works to maximize those efforts and abilities, while respecting the fact that not everyone is interchangeable. Some are familiar with the Dreyfus Model of Skill Acquisition. It's a good bet that, even if we get a roomful of people with similar skills and technical background, each person's skills would plot at different levels on the scale from one to five (Novice, Advanced Beginner, Competent, Proficient, and Expert). If we are truly honest, there will be a continuum with all of the skills represented. By being honest about what each person can do, we can give them the guidance they need, or we can give them the freedom to do what they do best.

Group dynamics are also part of this equation, and the ways that people communicate vary. Julie put up a questionnaire and some examples of communication styles. Julie's examples are listed in the pictures. I'll post mine when I have a little more time ;).



The idea is that, if we take the number for each side (x and y axes), we can plot where each person lands on the XY grid. As you might guess, few people will be in the same place. There will be a broad distribution, and the more people we have on the team, the broader that distribution is likely to be. The general breakdown has four quadrants, and in those quadrants, we have The Pragmatist, The Facilitator, The Analyst, and The Pioneer.
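The plotting idea can be sketched in a few lines. Note the axis orientation and which quadrant gets which label here are my guesses at the scheme, not Julie's actual scoring model:

```python
# Hypothetical sketch of mapping questionnaire scores into the four quadrants.
# The quadrant-to-label assignments are an assumption for illustration only.

def quadrant(x_score, y_score):
    """Place one person's (x, y) communication-style scores into a quadrant."""
    if x_score >= 0 and y_score >= 0:
        return "Pioneer"
    if x_score < 0 and y_score >= 0:
        return "Facilitator"
    if x_score < 0 and y_score < 0:
        return "Analyst"
    return "Pragmatist"

# A made-up team: notice how even four people scatter across all four quadrants.
team = {"Ana": (3, 4), "Ben": (-2, 1), "Cat": (-1, -3), "Dee": (2, -2)}
for name, (x, y) in team.items():
    print(name, "->", quadrant(x, y))
```

The point the exercise makes survives even in this toy version: plot a whole team and you rarely get two dots in the same place.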

Each is an archetype, and if we are honest, each person falls on a continuum of each (few are totally one quadrant). The point is, each person is going to approach their work, their communication, and their methodology differently. Managing these people is going to require different approaches, and each will have different needs.

Change and process improvement also fall on a continuum with people. Not everyone is comfortable with dramatic changes. The Pioneers are more likely to be, while the Pragmatists are less likely to be. The Facilitators are game if they can collaborate on the process, and the Analysts will want to see the theory and work the problems to be sure they are on the right track. Do you recognize yourself in these representations? Do you recognize your team members? Are they duplicates of you? Of course they aren't. They have their own distributions and their own avenues for how they like to work.

The key takeaway is that we need to be more aware of the fact that diversity is more than just gender, ethnic background, sexual orientation and the typical breakdown that we keep hearing about. Don't get me wrong, those are very important, and the greater the distribution of those items, the more likely you will get diversity in the areas of thought, personality, problem solving and skills. If we do all of this work to get people with differences, only to insist they shape themselves to be replaceable cogs, we are doing both them and our teams a huge disservice.

---

The closing keynote with Zeger van Hese, titled "Everything is Connected - Exploring Diversity, Innovation & Leadership", started with an explanation of Myers-Briggs types, Zeger's personal distribution, and what those things mean. The inspiration for the theme looks at a tribar of diversity, leadership and innovation, and the interconnectedness of those aspects.

There's linguistic diversity, which introduced some interesting terms I had never heard before, and how those terms are unique to their cultures (it's cool to see there's a single word in Japanese that stands for "the condition of buying books but not reading them, so that they pile up in a great big stack on your desk". I needed that sentence; Japanese has one word ;) ). A variety of responses are needed to meet the demands of an organization. Hiring for variety allows for the ability to get people from a variety of backgrounds to make a diverse and creative team. If we focus too hard on getting people like us, or that think and act like us, we should not be surprised when what we get is a generic and bland response, because everyone is basically the same.

Randomness and serendipity are also important, since there is an interesting variety of options that can take place when we are open to allowing that randomness to happen. However, don't misconstrue randomness and serendipity with just winging it. Preparation and readiness are necessary for randomness and serendipity to be effective. It takes time and effort to be prepared, but when you are ready for anything, then anything can make its appearance ;).

Ask yourself, are you creative? Most people will probably say they aren't (about a third of the room said they were). I think we shortchange ourselves. Many of us are able to be very creative at times, but we assume that, unless we are insanely creative and productive all the time, we do not have true creativity. That's a false equivalence. Being creative is a state of mind and action based on stimulus and a willingness to respond. We equate creativity with quality, and frankly, most of us do not start out with quality creations. We frequently suck when we start something. I appreciate the long time readers of my blog, and those who think I write cool things. As of today, I've put 926 posts up on this blog, and that does not include the dozens of entries that I deleted midway, and perhaps the hundreds of entries that never made it into a post at all because I thought they were junk.

Creativity is not just a spark of inspiration. Often it starts that way, but if you haven't put in the time and energy to develop the skills necessary to use it, it won't matter very much. Don't take that to be too pessimistic; it's not meant to be. What I mean is that many of our efforts are going to be less than masterpieces. Of those 926 posts, half of them are below average (by definition ;) ). Half of them fall below the median of quality as well. Still, if you were to ask ten different people which of my posts deserve to be above or below that median line, you might get a broad variety of answers. That's because what is good matters to the person who is consuming the data, not the person writing it. You may think The Princess Bride is one of the greatest movies of all time. Someone else may decide it is a completely corny movie. What matters is not what other people think, but what you think. Your desires and motivations will help decide how you feel about certain things.

Leadership needs to be able to handle the diversity mix for it to be effective. Again, leadership is something that we as a general population seem to have a problem with. Part of this is the fact that leadership is sold as this insanely altruistic or hyper-focused attitude. We automatically think that the leader is the alpha dog, and that's not necessarily the case. Everyone has a bit of leadership ability in them, and under the right circumstances, those leadership opportunities can be sought out and applied without feeling one has to be a general or a manager/director.

Sometimes we suffer from the status quo bias, where we tend to struggle with reconciling "new" with "useful". We may miss opportunities because we do not see the benefit. If we were more honest, we might even say that the opportunity scares us. We can't turn off that fear, but perhaps we can channel those feelings more effectively. Sure, there will be abrasion, there may even be genuine fear and frustration, but by embracing and making room for that ambiguity, we can let real creativity develop.

With that, the official program ends, and an announcement has been made that EuroSTAR will be held next November in Maastricht, Netherlands. My congratulations to Ruud Teunissen on being named conference chair for next year, and my thanks and gratitude to Paul Gerrard for a yeoman's work as conference chair this year. To Paul and the staff that helped put on EuroSTAR this year, may I say "well done and thank you".

---


You thought I would be finished, but you'd be wrong. There was an after-conference session about "Programming for Testers" that I found too compelling to pass up.

Anyone familiar with my blog knows I have answered this question many times. Do testers need to be professional programmers? No. Is it advantageous? Absolutely! Do I code? Enough to be dangerous. Am I a professional coder? Nope, but I strive to learn enough to be both dangerous and effective.


Therefore, I'm glad to put myself into a play-time situation to see how the two instructors want to cover this topic. More than just writing "Hello World", we're actually going to control some external devices, such as a robotic arm, using a Raspberry Pi device.



A hop, skip and a jump, a Python distribution download and a Geany download later, I am ready to go... I think ;).

Python installed, Geany installed.


A few lines of basic code, a compile and an execute, and here we are:
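(I didn't keep the exact code from the session, so this is a reconstruction of what those first few lines looked like:)

```python
# The classic first Python program, much like the session's opening example.
greeting = "Hello, World!"
print(greeting)
```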

Wow, this feels like "Learn Ruby the Hard Way" all over again (LOL!)

Create two numbers? Sure, we can do that :):

Declare two numbers and print them. Whee!!!
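(Again a reconstruction rather than the session's actual code; the values are just whatever I typed in:)

```python
# Declare two numbers and print them back out.
a = 7
b = 6
print(a, b)
```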

And now let's multiply two numbers and print out the string:
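(Something along these lines; the exact variable names are mine, not the instructors':)

```python
# Multiply two numbers and print a sentence describing the result.
a = 7
b = 6
product = a * b
print("The product of", a, "and", b, "is", product)
```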

It works, and it feels good :)!!!

Something a little more interesting? OK, here's a loop.
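(The screenshot didn't survive, but the loop exercise was roughly of this shape, a running total over a small range:)

```python
# A simple loop: count from 1 to 5, keeping and printing a running total.
total = 0
for i in range(1, 6):
    total += i
    print(i, total)
```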


And here's a comparison:
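(Reconstructing from memory again, the comparison exercise was a basic if/elif/else branch:)

```python
# A simple comparison: branch on which of two numbers is bigger.
a = 7
b = 6
if a > b:
    result = "a is bigger"
elif a < b:
    result = "b is bigger"
else:
    result = "they are equal"
print(result)
```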



Moving along very fast, here's a Raspberry Pi, running a robotic arm, controlled by a Wii Controller. Yep, a bit of a code jump, but not too insane ;).


And with that, I'm off to tour the Guinness Storehouse... cue the jokes about the guy who doesn't drink touring a world famous beer factory. It's OK, I'm used to it ;). Again, thanks very much for playing along, it's been a fun several days. Dublin, you've been fantastic, Eurostar, you have likewise been amazing. Happy Thanksgiving to all of my U.S. friends, and to all else, enjoy the rest of your Thursday :).

Making testers very happy... OK, who wants mine (LOL!)?!

Green Thoughts: Day Two at #esconfs

Wednesday, November 26, 2014 5:43 PM

It really puts into perspective how much of a time difference eight hours is. I was mostly good yesterday, but this morning came way too fast, and I am definitely feeling the time change. A bit of a foggy morning today, but a brisk walk from the Maldron Pearse and a snack on a peanut bar, and I'm ready to go :).

Convention Center Dublin, aka Where the Action is :)
The first keynote for today is being given by Isabel Evans and it's titled "Restore to Factory Settings", or to put it simply, what happens when a change program goes wrong. We always want to think that making changes will always be positive. The truth is, there are some strange things that happen, and it's entirely possible that we might find ourselves in situations we never considered. Isabel works with Dolphin Systems, which works on Accessibility issues (hey, something in my wheelhouse, awesome :) ).

The initial problems were related to quality issues, and their idea was that improved testing would solve these problems. Isabel figured "30 years of testing, sure I can do this, but there are not just testing issues here". Starting with issue #1, improve testing: first, there were no dedicated testers. Testing was done by whoever was able to do it. An obvious first step seemed to be working on defect recognition, developing skills to help them discover defects and actually talk about them.

Isabel suggested that she sit with the developers and work with them, and even that was a request that was at first a difficult transition. She had to fight for that position, but ultimately they decided it made sense to work with that arrangement. By recruiting some of the support team and getting others involved, they were able to put together a test team.

With a variety of initiatives, they were able to improve defect recognition, and requirements acquisition also improved. Sounds great, right? Well, the reality is that the discovery of issues was actually adding to the time to release. They improved the testing, which identified more bugs, which added a greater workload, which added pressure to the release schedule, which meant more mistakes were made, and more bugs were introduced. Now, for those of us who are familiar with testing, this is very logical, but the point Isabel is making is that testing alone, and a greater emphasis on testing, will not automatically mean better quality. In fact, it has a significant chance of making the issues worse at first, because they are now being shown the light of day.

The point of the talk title "Restore to Factory Settings" is that, when things get tough, the natural reaction is to go back to doing what everyone always did. There are enthusiastic adopters, people against the whole idea, and then there are waverers in the middle. The waverers are the ones who hold the power. They will revert back to their SOP when things get tough. Even the enthusiastic adopters, if they are not encouraged, will revert back to SOP. The people against will go back to the old ways the second they get a chance. Management, meanwhile, is getting agitated that this big movement to change everything is not getting traction. Sounds absurd, but it happens all the time, and I'm sure we've all experienced this in one form or another.

The key takeaway that Isabel was describing is that changes to testing are often icing on a much thicker and denser cake. Changing testing will not change the underlying culture of a company. It will not change the underlying architecture of a product. Management not being willing to change their approach also adds to the issues. If the rest of the cake is not being dealt with, testing alone will not improve quality. In fact, it's likely in the short term to make quality issues even worse, because now there is clarity of the issues, but no motivation to change the behaviors that brought everyone there.

This all reminds me of the four stages of team development (forming, storming, norming and performing); the testing changes fit clearly into the storming stage. If the organization doesn't embrace the changes, the situation never really gets out of storming, and morale takes perpetual hits. Plans describe what likely won't happen in the future, but we still plan so we have a basis to manage change. Risk management is all about stuff that hasn't happened, but we still need to consider it so we are prepared if it actually does happen. In short: "Say the Same, Do the Same, Get the Same".

Change is hard, and every change in the organization tends to cause disruption. Change programs bring all of the ugly bits to the surface, and the realizations tumble like dominos. To quote Roland Orzabal's "Goodnight Song", nothing ever changes unless there's some pain. As the pain is recognized, the motivation for change becomes clearer. Prioritization takes center stage. Change has a real fighting chance of succeeding.

Ultimately, there is a right time for implementing changes, and no one thing is going to solve all the problems. Continuous improvement is a great pair of buzzwords, but the process itself is really tricky to implement and, more importantly, to sustain.

---

Next up, "Every Software Tester has a PRICE" with Michael Bolton. I came to this session because I am often curious about how we present the information we find, and the ways we can gather that information. Every test should have an expected, predicted result. That makes sense for checks, but it doesn't really hold up for testing. Testing is more involved, and may lead you to completely different conclusions. Often, the phrase we hear is "I don't have enough information to test". Is that true? It may well be, but the more fundamental question is "where do we get the information we need in the first place?"

Our developers, our company web site, our user guide, our story backlog, our completed stories, our project charter, our specifications, our customers, other products in our particular space, etc. Additionally, the elusive requirements that we are hoping will inform us are often not anything written down. Tacit knowledge that resides collectively in the heads of our organization is what ultimately makes up the requirements that matter. The tricky part is gathering together all of the miscellaneous parts so they can be made actionable. Think about it this way: for those of us who have kids, do we know the exact address of our kids' schools or where they go for their extracurricular activities? I'm willing to bet most of us don't, but we know how to get there. It only becomes an issue when we have to explain it to someone else. As testers, we need to consider ourselves the person who has to work out those addresses for all of those collective kids and where they need to go.

The fact is, there's lots of information that is communicated to us by body language and by inference. Requirements are ultimately "collective tacit knowledge". What we require of a product cannot be completely coded or ever truly known. That doesn't mean that we cannot come close, or get to a reasonable level that will help generate a good enough working model. One of the interesting aspects of the iPhone, for example, is "charisma"... what makes an iPhone an iPhone, and what makes it compelling? Is it its technical capabilities, or does it just "feel good"? How do we capture that charisma as a product element, as a feature to be tested?

One of the best sources of information, and one not talked about very often, is the process of "experimentation". In other words, we develop the requirements by actively testing and experimenting with the product, or with the people responsible for the product. Interviewing individuals associated with the product will help inform what we want to be building (think customer support, customers, manufacturing, sales, marketing, subject matter experts, senior management, etc.), and our experimenting with their input will give us even more ideas to focus on. We also develop oracles to help us see potential issues (in the sense that an oracle is some mechanism that helps us determine if there is an issue before us). The product itself can inform us of what it could do. We can also do thought experiments about what a product might do.


What this shows us is that there are many sources of information for test ideas and test artifacts in ways that most of us never consider. We place artificial limits on our capabilities. So many of our ideas are limited by our own imaginations and our own tenacity. If we really want to get deep on a topic, we can do so effectively. Often, though, we suffer not from a lack of imagination, but from a lack of will to use it. So much of what we want to do is dictated by a perceived lack of time, so we limit ourselves to the areas that will be the quickest and most accessible. This is not a bad thing, but it points out the limitations in our efforts. We trade effectiveness for efficiency, and in the process, we cut off many usable avenues that would help us define and determine how to guide our efforts.

---
Next up, "How Diversity Challenged me to be Innovative as a Test Team Leader" with Nathalie Rooseboom de Vries - van Delft.

What does diversity really mean? What does it mean to embrace and utilize diversity? What happens when you go from being a team of one as a consultant to wanting to be a team leader and manage people? How can we get fifteen unique and different people to work together and become a single team? What's more, what happens when you have to work with a team and a culture that is ossified in older practices? This is the world Nathalie jumped into. I frankly don't envy her.

One of the biggest benefits of being a consultant is that, after a period of doing a particular job or focus, you can leave; the focus is temporary, and you don't have to live with the aftermath of the decisions that follow. When we make a commitment to become part of a team long term, we inherit all of the dysfunction, oddity, and unique factors that the team is built from. The dynamics of each organization are unique, but they tend to be variations on a theme. The ultimate goal of an organization is to release a product that makes money. Testers are there to help make sure the product that goes out is of the highest quality possible, but make no mistake, testers do not make a company money (well, they do if you are selling testing services, but generally speaking, they don't). Getting a team on the same page is a challenge, and when you aim to get everyone working together, part of that balance is understanding how to emphasize the strengths of your teammates.

Nathalie uses a "Parent/Adult/Child" metaphor for transactions. The Parent role can be overly positive or overly negative: it can nurture, but it can also be controlling; it can be consoling and yet blaming. The Child role is both docile and rebellious, unresponsive and insecure. In some early interactions there may well be Parent/Child transactions, but the goal is to move over time to a more Adult/Adult interaction. To get that equality of behavior, sometimes you have to use the Parent relationship to get the behavior from the "Child", or, if you want the Parent to respond differently, the Child needs to use a different technique to get that behavior to manifest.

The ability to challenge members of your team will require different methods. Diversity of the team makes it impossible to use the same technique for every member; they each have unique approaches and unique interests and motivations. One of Nathalie's approaches is a jar of lollipops and a question-and-answer methodology. If you post a question, you get a lollipop; if you answer a question, you get a lollipop, too. The net result is that people realize they can answer each other's questions. They can learn from each other, and they can improve the overall team's focus by adding to the knowledge of the entire team and getting a little recognition for doing that. She also uses a simple game called "grababall", which has a number of tasks and things that need to be done. When you grab a ball, you have a goal inside the ball to accomplish. If you accomplish the goal, you get a point. At the end of the year, the highest point accrual gets a prize. By working on these small goals and getting points, the team gets engaged, and it becomes a bit more fun.

Diversity is more than just the makeup of the team, of having different genders, life experiences or ethnic backgrounds. Diversity goes farther. Understanding the ways that your team members are motivated, and the different ways that they can be engaged can give huge benefits to the organization. Take the time to discover how they respond, and what aspects motivate them, then play to those aspects.

---

Next up, "Introducing Operational Intelligence into Testing" with Albert Witteveen. Albert has had a dual career, spending time in both testing and operations (specifically in the telco space). Testers are all familiar with the issues that happen after a product goes live: the delay, the discovery, the finger pointing... yet Operations discovers the problem in a short period of time.

What is the secret? Why do the operations people find things testers don't? It's not as simple as the testers missing stuff (though that is part of the answer); it's also that the operations folks actually utilize the product and manage and monitor the business processes. Operations people have different tools and different focuses.
Testers can be a bit myopic at times. If our tests pass, we move on to other things. Small errors may be within the margin of error for us. In Ops, the errors need to be addressed and considered. Operations doesn't have an expected result, they are driven by the errors and the issues. In the Ops world, "every error counts".

Operations managers have log entries and other issues that are reported. With that, they work backwards to help get the systems to tell them where the issues are occurring. In short, logs are a huge resource, and few testers are tapping them for their full value.
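Just to make the idea concrete, here's a rough sketch of my own (not from Albert's talk; the log format and error names are invented) of what "working backwards from the logs" might look like:

```python
import re
from collections import Counter

# Hypothetical log lines in a made-up format:
# "<timestamp> <level> <ErrorType>: <detail>"
LOG_LINES = """\
2014-11-26 10:15:02 ERROR TimeoutError: upstream timed out
2014-11-26 10:15:09 INFO request served in 120ms
2014-11-26 10:16:44 ERROR TimeoutError: upstream timed out
2014-11-26 10:17:01 ERROR KeyError: missing 'user_id'
""".splitlines()

def tally_errors(lines):
    """Count occurrences of each error signature, most frequent first."""
    pattern = re.compile(r"ERROR\s+(\w+):")
    counts = Counter(m.group(1) for line in lines
                     if (m := pattern.search(line)))
    return counts.most_common()

print(tally_errors(LOG_LINES))
# [('TimeoutError', 2), ('KeyError', 1)]
```

Trivial as it looks, this is the Ops mindset in miniature: start from the errors the system actually reported, rank them, and work backwards from there.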

So what does this mean? Does it mean we need Operations people on the testing team? Actually, that's not a bad idea. If possible, have a liaison working with the testers. If that's not a reasonable option, then have the operations people teach/train the testers how to use logs and look for issues.

Sharing the tools that operations uses for monitoring and examining the systems would go a long way toward seeing what is happening to the servers under real load, with real analytics of what is happening in the systems over time. If there is any one silver bullet I can see from doing ops-level monitoring and testing, it's that we can objectively see the issues, and we can see them as they actually happen, not just when we want to see them happen.

---

I'm in Adam Knight's talk "Big Data, Small Sprint". What is big data, other than a buzzword for storing a lot of details and records? Who, three years ago, even really knew what "big data" was? When you talk about big data, you are talking about large bulk-load data. Adam's product specifically deals with database archiving.

This model deals with tens of millions of records, dozens of partitions and low-frequency ingestion of data (perhaps once a week). Their new challenge was to handle millions of records per hour, with tens of thousands of partitions. By working within Agile and targeting the specific use cases of this customer, they were able to deliver the basic building blocks within one sprint. Now imagine storing tens of billions of records each day (I'm trying to, really, and it's a bit of a struggle). Adam showed a picture of an elephant, then a Titanosaurus, and then the Death Star. This is not meant to represent the size of the increase in records, but the headaches that testers are now dealing with.

In a big data system, can we consider the individual? Yes, but we cannot effectively test every individual record uniquely. Can the data be manipulated? Yes, but it needs to be done in a different way. We also can't manage an entire dataset on a single machine. We can back up a system, but the backup will be too big for testing purposes. Is big data too big to wrap one's head around? It requires a different order of magnitude to discuss (think moving from kilometers or miles to astronomical units or light years to describe distances in space).

OK, so this stuff is big. We get that now. But how can we test something this big? We start by changing our perspective: we shift from focusing on every record to focusing on the structures and how they are populated with representative data (from records to partitions, from data to metadata, from single databases to clusters). Queries would not be made to pull a row from every conceivable table. Instead, we'd be looking at pulling representative data over multiple partitions. Testers working on big data projects need to develop special skills beyond classic testing. Multiple skills are needed, but finding testers who each have all of the needed skills in one person is highly unlikely. Adam discusses developing the people on the test team to be "T" shaped. A T-shaped tester has many broad, rudimentary-to-good test skills, as well as a few core competencies they know very deeply. By combining complementary T-shaped testers, you can make a fully functional square-shaped team.
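To make the "records to partitions" shift concrete, here's a little sketch of my own (the partition counts and the sampling scheme are hypothetical, not from Adam's talk): rather than exercising every record, pick a representative handful of partitions per run, always including the edge cases.

```python
import random

def sample_partitions(partition_ids, per_bucket=2, seed=42):
    """Pick a representative sample: always test the edges (first and
    last partitions), plus a few random ones in between, instead of
    trying to touch all of them."""
    rng = random.Random(seed)  # fixed seed so a test run is reproducible
    edges = [partition_ids[0], partition_ids[-1]]
    middle = partition_ids[1:-1]
    sampled = rng.sample(middle, min(per_bucket, len(middle)))
    return sorted(set(edges + sampled))

# Hypothetical: 10,000 partitions, but we only exercise a handful per run.
partitions = list(range(10_000))
print(sample_partitions(partitions))
```

The fixed seed is a deliberate choice: a failing run can be replayed against exactly the same partitions, while changing the seed between releases still varies the coverage over time.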

Adam mentioned using Ganglia as a way to monitor a cluster of machines (there's that word again ;) ) so that the data, the logs and other details can be examined in semi-real time. To be frank, my systems don't come anywhere close to these levels of magnitude, but we still have a fairly extensive amount of data, and these approaches are interesting, to say the least :).

---

I promised my friend Jokin Aspiazu that I would give him a testing challenge while we were here at EuroSTAR. Jokin humored me for the better part of an hour and a half, showing me how to tackle an issue in the EuroSTAR test lab. I asked him to evaluate a time tracking application and either sell me on its implementation or convince me it was not shippable, and to find a scenario significant enough to kill the project.

He succeeded :).

Of course, this app is designed to be a Test Lab app to be puzzled through and with, but I asked him to look beyond the requirements of the Test Lab's challenge, look at the product holistically, and give me a yes/no in a limited amount of time (all the while articulating his reasoning as he went, which I'm sure must have been mildly annoying ;) ).

With that, I determined Jokin earned advancement as a Brown Belt in the Miagi-do School of Software Testing.  For those of you here, high five him and buy him a drink when you see him, he's earned it!

---
A last-minute substitution caused me to jump into another talk, in this case "The Silent Assassins" by Geoff Thompson. These are the mistakes that can kill even the best-planned projects.

Silent Assassin #1: Focus on Speed, not on Quality. Think of a production floor where more space is taken up fixing problems coming off the line than is allocated to actually building new product well.

Silent Assassin #2: The Blindness to the True Cost of Quality. Supporting and maintaining the software costs a lot more than putting it together initially. Consider the amount of money it will take to maintain a system.

Silent Assassin #3: Go by Feel, Not by Facts. Metrics can be abused, and we can measure all sorts of worthless stuff, but data and actual deliverables are real things, and therefore, we need to make sure we have the facts on our side to say if we are going to be able to deliver a product on time. In short, we don't know what we don't know, so get our relevant facts in order.

Silent Assassin #4: Kicking Off a Project Before the Business is Ready. Do our customers actually understand what they will be getting? It's not enough for us to deliver what "we" think the appropriate solution is; the customers need to have a say, and if we don't give them one, adoption may be minimal (or even non-existent). Which leads to...

Silent Assassin #5: Lack of Buy-in From Users. Insufficient preparation, a lack of training, no demonstration of new features and methods, will likewise kill the adoption of a project with users.

Silent Assassin #6: Faulty Design. Software design defects compound as they go. The later a fundamental design issue is discovered, the harder, and in many cases exponentially harder, it will be to fix.

OK, that's all great, but what can we actually do about it? To disarm the assassins, you need to address the areas these problems fall into. The first area is Process: the way you do the work and the auto-pilot aspects of what we do. The next is People: getting people on the teams to work with each other, getting buy-in from customers, communicating regularly, and taking the people into consideration in our efforts. The last area is Tools, listed last because we often reach for the tool first; if we haven't figured out the other two, the tools are not going to be effective (or at least not as effective as they could be). Focus on effectiveness first, then shoot for efficiency.

Shift Left & Compress: put a clear focus on delivering the highest quality solution to customers at the lowest possible cost. In my vernacular, this comes down to "review the work we do and get to the party early". Focus on the root causes of issues, and actually do something to stop the issues from happening. The Compress point is to do it early, prioritize up front, and spend your efforts in the most meaningful areas. Easy to say, often really difficult to do correctly. Again, the organization as a whole needs to buy in for this to be effective. This may also... actually, scratch that, it will need investments in time, money, energy, emotion and commitment to get past these assassins. These are difficult, costly issues, but tackling them head on may give you a leg up on delivering a product that will be much less costly to maintain later. The money will be spent. The question is how and when ;).

---

Yes, this happened, and yes, it was glorious :)!!!

...and as an added bonus, Smartbear sings a Tester's Lament to the score of Frozen's "Let It Go" :)
---

Wednesday's closing Keynote is with Julian Harty; the topic is "Software Talks - Are You Listening?"

First question... why bother? Don't we know what we need to do? Of course we do, or so we think. However, I totally relate to the fact that software tells us a lot of things. It has its own language, and without listening, we can do many things that are counter to our best intentions.

The first way our software talks to us is through our user base and their interactions. If we remove features they like, especially in the app world, the software will tell us through lower ratings (perhaps much lower ratings). Analytics can help us, but there's much more we can learn early (and actually do something about) than late, after release.

Logs are our friend, and in many ways, the logs are the most loquacious artifact of all. So much information is available, and most testers don't avail themselves of it, if they look at the logs at all. Analytics can be filtered from devices and churned into a number of different breakdowns, and then we try to understand what is happening in real time. The information we gather can be helpful, but we need to develop insights from it. We want to gather design events, implementation events, field test data, evaluations, things that will tell us who is using what, when and where. A/B testing fits very well in this space: we can see how one group of users reacts compared to another group. We can gauge precision and accuracy, so long as we don't conflate the two automatically. It's entirely possible to be incredibly precise while missing the target completely.

There are dark sides to analytics, too. One cardinal rule is "Do No Harm": your app should not do things that have negative effects (such as a flashlight app tracking your every move while in use and uploading that location data). We can look at the number of downloads, the number of crashes, and the percentage of users on a particular revision. If a particular OS is in significant use, and that OS has a number of crashes, we can deduce the priority of working on that issue and its effect on a large population of users.
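Here's a toy illustration of that kind of prioritization (the OS names, user counts and crash numbers are entirely made up by me, just to show the shape of the idea):

```python
# Hypothetical analytics rollup: (os_version, active_users, crashes)
ROLLUP = [
    ("Android 4.4", 120_000, 3_600),
    ("Android 4.1",  15_000, 1_200),
    ("iOS 8",       200_000, 1_000),
]

def prioritize(rollup):
    """Rank OS versions by impact: absolute crash count first (how many
    users are actually hurting), with crash rate as the tie-breaker."""
    scored = [(os, crashes / users, crashes) for os, users, crashes in rollup]
    return sorted(scored, key=lambda t: (t[2], t[1]), reverse=True)

for os, rate, crashes in prioritize(ROLLUP):
    print(f"{os}: {crashes} crashes, {rate:.1%} of sessions crashing")
```

In this invented data set, Android 4.1 has the worst crash *rate*, but Android 4.4 hurts far more users in absolute terms, which is exactly the "significant use plus crashes" judgment call the talk describes.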

The key takeaway is that we can learn a lot about our users and what they do, what they don't do and what they really wish they could do. We leave a lot on the table when we don't take advantage of what the code tells us... so let's make sure we are listening ;).

Well, that's it for today, not counting socializing, schmoozing and dinner. Hope you had fun following along today, and I'll see you again tomorrow morning.

So Many Shades of Green: Day One at #esconfs

Thursday, November 27, 2014 10:26 AM

A first glimpse of Ireland from the air and in the airport.
Hello all, and welcome to another Live Blogging extravaganza from yours truly. It's been a while since I've done one of these, but I am happy to be back in the saddle once again. My journey has already been an eventful one, starting with a flight from San Francisco to Washington DC, changing to another plane, then changing to yet another plane because an oven was on the fritz in the plane, and then a trans-Atlantic flight which literally just ended about an hour ago. A trip through Immigration, a hop on the Green Bus, and now I'm sitting in a large auditorium with Paul Gerrard chatting about what to expect over the next three days.


---

The first talk I am witnessing is courtesy of Andy Stanford-Clark, and it's about "The Internet of Things ...it's Coming!" What is the Internet of Things? It's the interconnection of devices and information sources that are not necessarily what we typically refer to as Internet-enabled devices. For the past three decades we've been talking about computers, phones, tablets and communication devices. Those are things we are now well used to seeing, but what about the lights in our house? Our refrigerator? Our home thermostat? The train station display? Many of these devices use homebrew tools (think Arduino/Raspberry Pi or other devices that can be set up as servers we can query, modify and update). To some, this is the epitome of nerdy, and to others, it's genuine and valuable information that helps us make decisions about what we can do (hmmm, that sounds familiar :) ). As it stands today, this initial and primitive "Internet of Things" is more of a fun curiosity for nerdy early adopters, but the promise of what it can offer is very compelling. What if we could actually put together a clear understanding of how we use energy, as an example? We know we use water, electricity and gas for various purposes, but do we really know when we are using water? What really causes the largest percentage of usage? Is it family showers? Laundry? Me changing out the water of the fish tanks? Garden maintenance? How cool would it be if I could get an hour-to-hour breakdown of water usage each day and drill down to see various times? That is a perfect application of the Internet of Things, if we choose to set it up and look at it.
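As a quick thought experiment on that water-usage idea, here's a tiny sketch of my own (the readings and their format are invented, not anything from the talk) of rolling raw smart-meter readings into an hour-by-hour breakdown:

```python
from collections import defaultdict

# Hypothetical smart-meter readings: (hour_of_day, litres used)
READINGS = [(6, 40), (6, 35), (7, 60), (12, 10), (18, 25), (18, 80)]

def hourly_breakdown(readings):
    """Roll raw meter readings up into a per-hour usage profile."""
    totals = defaultdict(int)
    for hour, litres in readings:
        totals[hour] += litres
    return dict(sorted(totals.items()))

print(hourly_breakdown(READINGS))
# {6: 75, 7: 60, 12: 10, 18: 105}
```

Even this trivial aggregation answers the "when are we actually using water?" question: in this made-up data, the morning showers and the evening garden watering jump right out.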

Shall we get even more crazy? How about a mousetrap that actually tells you when it has caught a mouse? Sound weird? Well, with an Arduino board and a mechanical mousetrap, it's doable, and Andy and his family did it.

All interesting insights and novel uses, but how will this engage the average everyday user, and when will we see the move away from the "nerdy" crowd toward everyday people using these devices? More to the point, how will we actually be able to test this stuff? Embedded systems knowledge will certainly help, but in many ways, Arduino/Raspberry Pi gives those who want to work in these areas some of the best up-front training imaginable. Many of the systems use simple languages like Scratch or JavaScript (well, simple by comparison ;) ), and there will certainly need to be a change of focus. Awareness and familiarity with circuit boards helps, but the interfaces are not *that* foreign, thank goodness :). Some interesting issues still need to be considered, such as how to power all of these devices, and ways to limit how they are powered (the goal is to make these options available without adding a large additional power load; many of these devices are being positioned as solutions for helping people save power, so that's a creative challenge to consider). Additionally, there's the question of how to simulate thousands of devices running. How will we do that? How will we test that? These are questions the next few years will probably start to answer for us. No matter how you look at it, this will not be boring ;).

---

Next up, "Adapting Automation to the Available Workforce" with Colm Harrington. This is a topic that has long interested me and likewise vexed me in a variety of workplaces. Colm started with an anecdote about Einstein and the examinations he gave his students: the questions were the same as before, and when called on it, he replied, "the questions are the same, but the answers have changed". Automation has changed in the last several years as well. The commercial tools have lost a lot of ground; WebDriver is currently king. The need for automation is growing, and manual testers who do exclusively manual testing are becoming more and more rare. All of us are doing some level of automation, but not all of us are seasoned programmers and software developers. We need to do a better job of enabling more people in the organization to use and modify automation so it's useful to more of the organization. It's a great promise and a wonderful goal, but how do we bring to the table those who do not already have a strong automation background? More to the point, can we get the software out on time, at or under budget, without embarrassing errors getting into the hands of our customers? Automation is secondary to that, but still very important.

Colm's goal is not to have people get too detailed in the code, which makes sense with the topic. The goal is not to force testers to write automation, but to encourage them to get involved in a meaningful way and at a level they can be comfortable with. Automation can cover a lot of ground, but for me, the biggest win is handling the tedious stuff of setup, population and traversal. Automation that addresses that area alone makes me very happy. Yes, it takes time to set a lot of this up, but at least we have the ability to set everything up from start to finish so we can get more deeply into the corner areas. When we have to set up everything manually, by the time we get to the places that are interesting, we end up exhausted, and less likely to find interesting things. To that end, automation can be written that doesn't require the programmer to have in-depth knowledge of all the internals. Instead, we can focus on the traversal steps we know we do all the time.

One of the biggest challenges organizations have is that they do not have the ability or the time to take a full team and train them from scratch. However, if the team has taken the time to implement a framework that is easily modified, or that allows individuals on the team to get some quick wins, that will definitely help speed the success of individuals getting involved with automation. Using a Domain Specific Language or API, many of the steps can be compartmentalized so that the whole team can communicate in the same language. Will the toolsmiths have an advantage? Of course they will, but they will also be able to make a system that all of the participants can leverage (think of Cucumber and the ability to write statements that are well understood by everyone). When the testers write the tests and cover various test cases, the testers' knowledge is being used in the way that is most effective, with the programmers able to help fill in the blanks so the testers can better focus on test design and implementation, rather than trying to wrench the tool to work for their benefit.
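To illustrate the kind of domain-specific layer Colm is describing, here's a toy sketch of my own (the step phrases and functions are invented; a real team would more likely use something like Cucumber): testers write plain-language steps, and toolsmiths supply the implementations behind them.

```python
# Registry mapping plain-language step phrases to their implementations.
STEPS = {}

def step(phrase):
    """Decorator that registers a function under a plain-language phrase."""
    def register(fn):
        STEPS[phrase] = fn
        return fn
    return register

@step("log in as {user}")
def log_in(state, user):
    state["user"] = user  # stand-in for driving a real UI or API

@step("open the reports page")
def open_reports(state):
    state["page"] = "reports"

def run_scenario(lines):
    """Interpret a scenario written in the shared language."""
    state = {}
    for line in lines:
        for phrase, fn in STEPS.items():
            prefix = phrase.split("{")[0].strip()
            if line.startswith(prefix):
                arg = line[len(prefix):].strip()
                fn(state, arg) if "{" in phrase else fn(state)
                break
    return state

print(run_scenario(["log in as alice", "open the reports page"]))
# {'user': 'alice', 'page': 'reports'}
```

The point of the design is exactly what the talk argues: the tester composes scenarios in the shared vocabulary, while the messy details of driving the product live behind each registered step.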

Some of the best ways to make it possible for testers and others to be effective: keep things simple, consistent and intuitive. Keep data and test scenarios as separate from each other as possible, do your best to encourage a common language between the code and the test implementations (use labels and methods that make sense by how they are named and what they actually do), and keep tests as atomic as possible (one test from beginning to end, in as few steps as necessary to accomplish the goal). Additionally, a key consideration is balancing the tests and methods being humane over minimal. Refactoring to where the intention is obfuscated is much less helpful than allowing a little more verbosity to give all participants a clear understanding of what's happening. Also, use the option to create soft assertions, which allow the user to check 50 different fields, notice the one place it fails, and report at the end of the test rather than stopping cold at the first error discovered with a hard assert.
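A minimal sketch of what a soft-assertion helper might look like (the class and field names here are my own invention, not from the talk):

```python
class SoftAssert:
    """Collect failures instead of stopping at the first one, then
    report everything at the end of the test."""
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        """Record a failure but keep the test running."""
        if not condition:
            self.failures.append(message)

    def assert_all(self):
        """Raise once, at the end, listing every failed check."""
        if self.failures:
            raise AssertionError(
                f"{len(self.failures)} check(s) failed:\n" +
                "\n".join(f"  - {m}" for m in self.failures))

# Check many fields in one pass; only fail at the end.
form = {"name": "Ada", "email": "", "age": 36}
soft = SoftAssert()
soft.check(form["name"], "name should not be empty")
soft.check(form["email"], "email should not be empty")
soft.check(form["age"] >= 18, "age should be at least 18")
try:
    soft.assert_all()
except AssertionError as e:
    print(e)  # reports only the email failure, after all checks ran
```

One failing field no longer hides the other 49; the report at the end shows everything that went wrong in a single run.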

Other important considerations: don't make something reusable if all you are doing is adding to its "reuselessness". Let the client code shape the API, and don't have the API be set in stone. Devs and QA need to work together, in either proximity or communication. If you can sit together, sit together; if not, share screens and talk together in the same time and space, even if not the same proximity.

---

Next up is Rikard Edgren and "Trying to Teach Testing Skills & Judgment". This particular topic is near and dear to my heart for a variety of reasons, the most important being that my daughter has started learning how to write code with me. More than teach her how to code, though, I want to teach her how to test, and more to the point, how to test effectively and learn about what is important, not just do the busywork associated with general testing. The model Rikard is describing is a 1-2 year education arrangement, with internships and other opportunities to actually get into real situations. Rikard's approach and philosophy is as follows:

- Motivation is key
- Not for the money
- It's not about us
- Encourage new ideas
- Don't be afraid

Rikard mentions the value of focusing on Tacit Skills and Judgment, including asking good questions, applying critical thinking, understanding what is important to the customers, learning quickly, looking at a variety of perspectives, utilizing effective test strategies, and looking for those opportunities to "catch lightning in a bottle" from time to time (i.e. serendipity) and of course, knowing when good enough actually is ;).

Rikard shared a story about working with a programmer who decided to try being a tester: while Rikard could see problems left and right, the programmer didn't necessarily see the issues, or made assumptions based on how the code was being used. The key was that the tester wanted to see the problems. Programmers often want to get the work finished and off their plates (I totally understand this, and believe me, I make the same excuses when I am the one writing the code). This is also why I find it imperative to have someone else test the code I write, and to have them not be afraid to tell me my creation is ugly (or at the very least, could be substantially improved ;) ).

Critical thinking isn't just questioning everything. We need to be discerning in the things we question. Start with "What if..." to get the ball rolling. This will help the tester start thinking creatively and get into unique areas they might not automatically consider. Also, be aware of the biases that enter your purview (and don't ever say you don't have biases, everyone does ;) ).

Everyone thinks differently, and the ability as a teacher to explain things in a variety of ways is critical. Likewise, we want to encourage those we are teaching to try a variety of things, even if the attempts are not successful or lead to frustration. We need to step back, regroup, and give them a chance to look at what they did well, where they could improve, and how they can get the most out of the experience. There will be theory and hard topics, and those are important, but always couch the concepts in practical uses. The names are not essential; the use and understanding of how things work are (well, the names are seriously helpful for making sure we do what we need to and can communicate effectively, but focus on what is being done more than on what it is called, at least at first. Once they get what is happening, the names will make sense ;) ).

Rikard has a paper that covers this topic. I'll reference it as soon as I get time to get to the link and update this stream of consciousness. Oh, and lookie lookie... green and yellow cards to manage question flow... now where have I seen that before ;)?

---

Next up, the closing keynote for Tuesday, with Rob Lambert on "Continuous Delivery and DevOps: Moving from Staged To Pervasive Testing". I've often heard of this mystical world of DevOps; I've even heard of Continuous Deployment, and rumors of people actually doing it. We do pretty well where we are, but we don't currently have a full-scale Continuous Delivery system in place. Still, there is a sense of wonder and appreciation whenever I hear about this in practice.

Rob spent the first part of the talk discussing what many of us know all too well: the long slog of staged development, testing, and release. Don't get me wrong, I am not a fan of this approach at all (too many years suffering through it), especially because, at the tail end, they would pull in everyone humanly possible to test a release (which ultimately becomes an indictment of software testing as ineffective, slow, and boring). Yet ironically, the next project gets run exactly the same way.

This brought the big question to the fore: "why do we keep doing these massive, slow-running releases?" When customer needs change, we need to change with them, and big cumbersome releases don't allow for that. Major releases also require testing a lot of code at one time, and that invariably means slow, cumbersome, and most likely not fully covered. Releasing in smaller and more frequent chunks means that less code has to go out, less overall thrashing takes place, and the feedback loop is much tighter.

How to do that? Rob's company chose to do the following:

- Adopt Agile
- Prioritize work
- Bring Dev and Ops together (DevOps, get it ;)?)
- Everyone tests. Testing all the time.
- The team needs to become one with the data, and understand what the servers are telling us about our apps and services

They removed testers from the center of the team. Note, they didn't remove testing from the center; in fact, that's the very switch they made. Testing always happens, and the programmers get into it as well. This goes beyond Test Driven Development. It means that automation is used to verify tests where possible, with an aggressive approach to canning as many tests as possible and a progressive march to get more coverage and more tests in place with each story. This is very similar to what we do on my team. The automated tests are the things we want to run every time we do a build, so we emphasize making those tests reliable, understandable, and easy to modify when we need to. The idea is to automate as much of the drudgery as we can, so that we have fresh eyes and fresh energy to look at actual new features and learn what those new features actually do.

Cycle times vary, and each organization can modify and tweak its cycle times as it chooses. If weekly is the shipping schedule you want to use, then your cycle time needs to be somewhere between four and five days. Dogfooding (or pre-production) is a life that we understand very well. It helps us see the real performance, the actual workflows and how they are processed, and the good, bad, and ugly that surrounds them. Rob emphasizes using exploratory testing along with the focus on automation, with an emphasis on the testing that is most critical. The key to success is a focus on continuous improvement and questioning the effectiveness of what you are doing. There will be political battles, and often, there will be issues with people rather than with technology or process. Additionally, everyone knows how to test, but not everyone knows how to test with relevance. Remove the drudgery where you can, so that testers have open eyes and fresh energy to tackle the real and important problems. If "anyone can test", then examine the tests that anyone can do, and ask critically whether that testing is providing value.

---

More to come, stay tuned :)!!!

TESTHEAD Paddling Through "Uncharted Waters"

Thursday, November 20, 2014 17:06 PM

A couple of months ago, I agreed to come on as part of the team that writes an IT blog over at IT Knowledge Exchange called "Uncharted Waters". This blog was started by Matt Heusser a couple of years ago, and Matt invited Justin Rohrman and myself to write articles for the blog as well.

Since September, I have contributed a few pieces to the blog, and so far the reaction has been quite positive.

What did I not do? I didn't tell anyone HERE that I was writing them (well, outside of my little Twitter feed that appears in the corner of my blog). Needless to say, I am not doing well in the sphere of self promotion, but I aim to change that going forward.

For obvious reasons, I want to encourage people to read them. If you read them, and enjoy them, and comment on them and share them with others, that gives IT Knowledge Exchange, and other outlets, reasons to have me write more articles for them. It also gives me a chance to do more research, learn about different things, and develop ideas that I hope can benefit all of us.

So to that end, here are a few links to recent entries.

Twenty Years, Seven Companies, Nine Different Styles of Testing

Your First 30 Days in the New Gig (and if You’re on the Old Gig, They Begin TODAY)

Create Your Own Career Trajectory

Using Collaborative Tools To Improve Software Work

The Art of “Making Time”

My View of the Future: Mixed, but Guardedly Optimistic


These articles have been interesting opportunities for me to go into areas I don't normally talk about, and share my message with people who may not otherwise read it here. I would also like to encourage my regular readers to check out these articles as I write them, and perhaps add Uncharted Waters as a regular part of your daily or weekly read (Matt and Justin publish some great stuff). It's been a pleasure to get into the groove of this initiative with them, and I look forward to future entries, both there and here.