There's Nothing Wrong with "Ordinary"

Wednesday, October 22, 2014 2:15 PM

One of the things I have found interesting over the past few years is the perception people have of their own potential, and how that perception shapes what they can actually do. I've been struggling lately with a harsh realization. I am not brilliant. I am not amazing. I am not a genius. In most ways, I am noteworthy for the fact that I am so very "ordinary". Yet many would rebut that statement and say "that's not true, you are far from ordinary", as though ordinary were a curse word or an epithet.

I've been thinking a lot about this over the past several weeks as I have watched my daughter and her growing tribe of people who appreciate her efforts and abilities in the artistic sphere. Here I risk no hyperbole. My daughter is an excellent artist, for her age or any age. She has gone from being someone who loves to draw and makes cute pictures, to making really good pictures, to making "oh my goodness, where in the world did that come from?!" pictures. Her Instagram account has 30,000+ followers. That's more followers than I have across every social media account I own combined ;). So many people comment on her pictures and say "wow, that is incredible, I could never do that!" She handles the compliments with a very sweet grace and courtesy, because only she knows the truth. In most ways, she is a very ordinary girl, and in some ways, she has several unique challenges that put her at a deficit compared to many of her peers.

My daughter has a triple whammy of vision issues. She has amblyopia, strabismus and astigmatism. On top of that, she is also far-sighted. Most people don't realize that, when she wears her general-purpose contact lenses, she can't make out who they are from more than fifteen feet away. The glasses she needs to wear when she doesn't have her contacts in, which give her somewhat better distance vision, give me a headache just to look through. Yet with all these frustrations, she has one very neat positive effect... she can see up close very well. It's because of this little quirk that she has been able to develop an eye for close details, one that has enabled her to become exceptional as an artist over the past few years. Still, even with the talent, and the close-up ability, what has set her apart is the fact that she has put in an incredible amount of time to hone and perfect what she does. She stays up way too late for my personal comfort most nights. She sometimes struggles with other aspects of her life, such as school work and organization, and at times her health has taken a hit or two. Yet she perseveres, because she knows that she is ordinary, and she has decided that the only way to transcend ordinary is to work at what she can, and use every day to get a little better.

She's a stellar reminder to me that we are all ordinary, and there's nothing wrong with being ordinary. What's wrong is using "ordinary" as an excuse, as a way to say "oh, I could never do that, I don't have the talent or a special gift". Here's the good news... you don't really need special talent or genetic blessings, at least not for most things. Being an NBA basketball player, OK, you may have some problems if you are 5'4", but that doesn't mean you can't still play an excellent game. I remind myself of this whenever I find myself frustrated with my own technical skills, or lack thereof, and want to wallow in the comfort of being "ordinary". It's OK to be ordinary, everyone is, but it's the ones who work hard enough to push past ordinary who become "extra-ordinary". My thanks to my daughter for the continual reminder, and for a desire to keep pushing the level of what "ordinary" can do.




An Alternative Approach to Teaching History?

Wednesday, October 01, 2014 5:37 PM

Over the past several years, I've found Dan Carlin to be one of the most entertaining and thought-provoking podcasters I've listened to. For some, he is grating, irritating, and frustrating. He doesn't use the standard narrative. In fact, he steadfastly refuses to. Both of his podcasts, Common Sense and Hardcore History, strive to look at current events and history from what he calls a "martian" perspective. In many ways, I consider Dan to be the most "testerly" of podcasters. He strives to take views and commentary from all sides, consider the possibilities and the arguments made, and then present them in a way that boils down to central themes and core ideas.

In his most recent podcast (Common Sense 281 – Controlling the Past), he makes a few points that I think would be immensely helpful in regards to not just K-12 history education, but the entire way that we teach any subject. Perhaps it's the contrarian in me, or perhaps it reflects my own frustrations and misgivings with the way school subjects are taught, but I think we do several things wrong, and the net result is that many children develop a serious aversion to actual learning and never discover the joy of education.

I am currently living this reality with my three children. I now have a college freshman, a high school sophomore, and an eighth grader. I see what they are currently trying to do to get through their days in their various schools, and the adaptations each has to make. Ultimately, though all three are slightly different, they tend to suffer from the same problem. We operate our schools on the notion of facts and figures and dates and formulas that need to be memorized and spit out on tests; an evaluation is made, and then we move on to the next bit. Sometimes this works well. Sometimes it doesn't. As an adult who works in an ever-changing landscape, I've had to embrace a different approach to learning. Also, as a software tester, I've often had to approach learning from a skeptical, even cynical, viewpoint. I'm not paid to say the product works. I'm paid to find out where it might be broken. My entire workaday life is a process of disproving and refutation... and I get paid for that ;).

Back to Dan and this podcast... one of the things Dan highlights, especially in history, is that we tend to go through waves of revisionism. Fifty years ago, the Founding Fathers were near-mythical deities. Today, in many circles, they are seen as greedy, despotic "white men" who built a society on a veneer of freedom at the cost of slavery and the subjugation of others. Every few years, there seems to be a tug of war over whether we should be exposing everyone's sins or instilling virtue through printed hagiography. Dan's thought, and one I share, is "why are we doing either?" In other words, if we truly want to teach history and what has come before, why are we giving one narrative more air time than the others? What if, instead, we did something similar to what the news magazine The Week does? For those not familiar, The Week presents many of its stories and headlines as a distillation of a variety of views from different sources. If a topic is going to be presented, it takes headlines and stories from both "liberal" and "conservative" pundits, publications and writers, and generally avoids editorializing on its own, with the exception of the actual editorials it publishes, which it clearly labels as such.

What is the purpose of this type of presentation? It allows the reader to synthesize what they are reading, see the various viewpoints, the pros and the cons, and even the inherent biases of each side, and then leaves it to the reader to reason out what it all actually means. It also helps give a more balanced view of the events and the key players. Rather than force a viewpoint based on an ideology, it allows readers to process what they are seeing, apply their own litmus test to the material, and look for the coherence or the inconsistencies, something testers are very familiar with doing. Think about what history would look like if we allowed this same approach. We don't tell the story of George Washington or Geronimo or Martin Luther King from just one side. It isn't hagiography or character assassination. It isn't sanitized or prettied up to meet an agenda. It's given as is, with the idea that the reader discovers who these people actually were, and that they really were just that, people. Possibly extraordinary, possibly flawed, almost always misrepresented. Gather multiple views, present them as is, and then let the student actually practice some critical thinking skills, synthesize the data presented, and then (gasp!) actually give an opinion or lead a discussion on what they've covered.

It's possible I'm completely insane for proposing such a thing, but ultimately, I think the benefits would be huge. We talk a mean game about the importance of critical thinking. Wouldn't it be awesome to actually let students, I don't know... critically think?! Also, and I may just be speaking for myself here, but wouldn't this also make studying history (or any other subject) way more fun? As a tester, ferreting out causes and effects, and advocating for the information discovered, is a huge part of the fun of testing. How great would it be to let students experience that in their everyday learning?

Again, it's a scary and bold proposition, but I'm just crazy enough to think teenage students are able to handle it, and might actually learn to enjoy these subjects in a way they've never really been able to before. What do you think? Realistic objective? Pie in the sky dream? If you had the chance to reshape how primary and secondary education were presented, what would you do?


Does Net Neutrality Matter to You? It Does to Me!

Wednesday, September 10, 2014 6:53 PM

As I'm sure many of you already know, today is the "Internet Slowdown".

Tens of thousands of websites are currently running the slow-loading icon to symbolize what the Internet would be like without Net Neutrality. TESTHEAD is one of those tens of thousands. I'm happy to join sites like Netflix, Vimeo, Google, reddit, Kickstarter, Meetup, Upworthy and countless others who are also taking part.

The FCC's comment period ends in five days! If you want to make your voice heard, please feel free to use the Internet Slowdown widget that loads when this page loads to do so.

In fact, don't just feel free to do it, I urge you to do it :)!!!

If you've already commented, you can also use the widget to call Congress.

Below is the statement from Team Internet, one I am happy to share:

We believe in the free and open Internet, 
with no arbitrary fees or slow lanes for sites that can't pay. 
Take a stand for "Title II reclassification," the only option 
that lets the FCC stop Team Cable from breaking 
the key principles of the Internet we love. 

Will you join us? 

Take Action.


When Things Just Aren't What They Seem

Tuesday, September 09, 2014 2:34 AM

Truthfully, I hoped I would never have to write a piece like this. However, the last couple of weeks have made it so that I can't seem to write anything without this taking up the forefront of my mind.

Twenty-seven years ago, I met a guy, a little older than I was, in the San Francisco Bay Area music scene. We were both relative newcomers, working on getting our first bands into the clubs. We had similar backgrounds and similar family lives, we were elder siblings with wild dreams, and we were both a bit on the gaudy and goofy side. I mean that in the absolute best way possible. We both had a penchant for garish clothing and big poofy hair, enjoyed being creative, and liked making things that we felt spoke to our individuality. We also both loved combing through the bargain bins at Tandy Leather Company, picking up whatever inexpensive notions we could find or afford, and making our own stage clothes with them. One night, I went out to promote one of my band's early shows and ran into this guy, doing the same for his band. We looked at each other, pointed, laughed and said "dude, nice Tandy Leather bargain find"... and then laughed some more. We started talking. That's when I first got to know a wonderful man, and a friendship developed that would span several years and several bands, for both of us.

We became good friends, albeit in a "band rivalry" kind of way at first. We'd always high-five each other for the shows the other was playing, but always in the back of our minds, we kept a tally:

"Hey, congrats on your first Friday night gig!" (we just nailed a Saturday slot).
"Dude, you're opening for Band X? Awesome!" (we just inked to play support for Band Y).
"Hey congratulations for headlining at the Omni!" (we're headlining at The Stone).
"Aww, dude, we're playing the same night? What a bummer!" (I'm totally stealing your crowd).
"Dude, you're gig drew 700 people? That's awesome!" (ours drew 800).

and on and on and on... and we would always laugh about it. We started as rivals, became friends, and in time, became like brothers.

We'd been there through each other's ups and downs, the high points and the low points, the transitions that looked like bright futures and the valleys where we felt we'd screwed everything up. In 1989, I really felt like I had missed everything, that I'd made a terrible mistake leaving a band that had a solid track record and history and venturing out on my own. It all looked so promising at first, but then, misfire after misfire, in the latter part of the year, I was band-less and alone, and felt like I might never perform again. He jumped on top of me at a club one night while I was feeling down, and in his ever sunny, ultra-cheerful way, gave me a pep talk, told me I just needed to have some faith, keep pushing, and not give up. He believed in me, and knew I'd find something much better than anything I'd been through thus far.

He was right. About two months after that particular pep talk, I agreed to go check out some friends from another band that I'd actually written off before, but decided to give them another look due to the fact that they had two new guys join the group, and they brought a whole new sound and energy to what they had been doing. With that, I decided I had to give them a listen. Subsequently, I joined High Wire, the band that would ultimately be the best group I'd ever perform with.

As things tend to happen, life wanders and meanders, and ultimately, my time with High Wire and with professional performance would come to an end. I'd get married, start a family and change careers, becoming a software tester. My friend would likewise move in different circles, relocate to Los Angeles, and have his own meandering river to follow. We'd reconnect several years later, first through mySpace, and then through Facebook. We'd touch base a time or two, here and there, we'd talk about old gigs, new challenges, where each of us was in life, and how we'd gotten knocked down but always got up again. Through it all, my friend had that super sunny disposition, that "Shaka" sign we'd always throw each other, telling each other to just "hang loose". We'd been buffeted, we'd tumbled and been knocked about, but we always knew we were all right and we'd deal.

On August 26, 2014, the biggest miss of my life took place. Me, the software tester, the one who says never to take anything truly at face value, woke up early to work on a project, and while reading Facebook, saw a post that shook me to my core. It was my friend, my long-time, happy-go-lucky, super-positive friend, saying goodbye. Goodbye to everyone. Goodbye to life. And with that, all went quiet. I couldn't believe what I was reading. I wanted to believe it was a joke. It had to be. Maybe something had set him off, and he'd come back and apologize, say he was sorry for getting carried away, and all would be well. We who were his friends checked with his family and other friends, and those of us who could spent the next few days talking to each other, getting the word out, trying to do anything, by whatever means necessary, to find him. We donated money to Search and Rescue efforts. We asked friends to share details about him with their friends. Most of all, we did not sleep, we did not eat, we stood at hair-trigger attention. We had to know. Was he OK, or not? Was he with us, or not?

Sadly, on Friday, August 29, 2014, we discovered the news. He had been found, on a remote beach in Maui. He was gone. Our efforts to alert the media, to get Search and Rescue teams out to look for him, to get flyers all over the island of Maui and other islands in Hawaii, were what had primed the Park Ranger to be looking for him. When he was discovered, they were able to make an identification; it was indeed him. A bright candle in many of our lives had been put out.

The days since have been challenging. I have so many questions. My heart aches to realize that a friend who was always there for so many of us, to cheer us up in our darkest moments, reached out to none of us in his. We had no clue. Many of us would never in a million years have believed this would happen, certainly not to him. It just made no sense. Yet once we started looking back, over the years, to the challenges and issues he faced, and the ups and downs that went with them, we realized a deeper pattern, one we hadn't seen, or really, one we didn't want to see. That bright and sunny disposition was masking what were to him years of deep disappointment and unfulfilled dreams, of recurring problems and painful memories, to a point where he believed that it would be better to end it instead of moving forward.

I'm left here today thinking "What could I have done? How could I have helped? Couldn't he have called me?", and realizing that I was expecting a step toward a friendship that was, by this point, casual at best. Of all people, why would he have called me? If he didn't reach out to so many much closer friends, why would he choose to reach out to me, an old band acquaintance from twenty years earlier? The fact is, my connection with him wasn't strong enough to be that kind of a lifeline... and that realization stings now.

For those who have kept reading through all of this, I thank you. It's not my usual topic, and I'm afraid I'm stumbling here, but if there is anything these past two weeks have taught me, it's to trust those weird feelings we sometimes get. It's to not believe the gloss and the sheen on the surface. I know this from software. The prettiest products often hide the most devastating bugs. I fundamentally know that about the products I test, but applying that same reasoning to human interactions is much harder. Still, part of me now feels, after looking back at conversations and interactions, that the clues were there. They seem obvious now, but I missed them then. In software, when we miss a catastrophic bug, we learn, we regroup, and we try again, to make sure it doesn't happen in another release. Sadly, this miss can't be called back. It can't be patched. We cannot release an upgrade.

My hope with this post today is that I can ask my friends, my readers, and anyone who comes across this post in the future to do me a favor. Never take those relationships you have for granted. Be they your friends, your family, your co-workers or even casual pals from a scene you may share, look to see if there is something that might be a warning sign. If you think you may sense something, and your gut reaction tells you things may not be all right, reach out to them. Talk to them. Let them unburden if they choose to. If they don't, let them know you are there for them anyway. Do everything you can when things just aren't what they seem. You may do more than just cheer someone up. You may save a life. My friend has shed the pain he was feeling. It is no more for him. For those of us who are left behind, we will always wonder "What could we have done?" The simple answer at this point, and the painful one, is "Nothing, but maybe, just maybe, we can be more alert to help see that it doesn't happen again".

A TESTHEAD Wayback Machine Find: ALM Forum Talk From April 2014

Thursday, August 21, 2014 6:58 AM

I am grateful for a variety of friends and acquaintances in the testing world who keep me alert to things they discover. What makes it even more fun is when I'm alerted to things I did and forgot about, or someone discovers something I didn't know was there. Today is one of those days.

Back in April, I gave a talk about "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". I posted the slides on my LinkedIn profile and my SlideShare account, and then went on with my reality. It was pointed out that a video of my talk was recorded, and now that I know where it is, I can share it here :).

http://vimeopro.com/user27088109/almforum14test/video/95238828


I give a shout out to SummerQAmp, PerScholas, Weekend Testing, and Miagi-do as exemplars of what we can do to help empower future testers with real skills. There are many others, to be sure, but these are the ones I'm actively engaged in, so hey, I'm biased :).

I hope you enjoy the talk, and if you do, please share the message with others.

Coyote Teaching: Watch How It All Came Together

Friday, August 15, 2014 9:16 PM

Harrison Lovell and I decided to try an experiment.

What if a mentoring pair (a person relatively new to the software testing world and a longtime practitioner) were to work together and look at the way that mentoring is performed?

Could we learn something in the process?

What if we tried something novel, and looked at mentoring relationships all over the world, both current and ancient?

What would we find, and could we learn from them in a way that might prove to be useful to us today?


With that, we embarked on a several-month voyage (mostly conducted over Skype and email) and decided we'd try a method that takes its cues from ancient cultures. That method is called "Coyote Teaching", and we opted to be the Coyotes :).

During CAST 2014, which was held this past week at the Helen and Martin Kimmel Center in New York City, we had the chance to present this topic and approach, and Huib Schoots, a friend of ours, was kind enough to record the whole talk. For those who would like to see it, it is here in its entirety:



I want to congratulate Harrison on his first conference talk, and to thank him for his enthusiasm, as well as his hospitality while showing me around mid-town Manhattan during the week (I should also mention that this was the first opportunity we have had to meet face to face).


I am also thankful to those who gave us valuable feedback to make the talk even better than we originally envisioned. My thanks especially to Alessandra Moreira for helping me go over the fine points of the talk and acting as the counter debater to help poke holes in the ideas we were going to present.

With that, please watch "Coyote Teaching: a New (Old?) Take on the Art of Mentorship". If you like what you see, please comment below and let us know what you liked. If you don't like what you see, please comment below and tell us that, too :). Either way, we'd love to hear what you think.

The Conferring Continues at #CAST2014

Friday, August 15, 2014 9:52 PM


Hello everyone! It's wild to think that my week in New York will be ending tomorrow morning. I've had so many great experiences, conversations and interactions with so many great people. I've met both of my PerScholas mentees, and I've enjoyed watching them take in this experience. It's also been great fun talking to so many new friends, and I will genuinely miss this "gathering of the tribe", but let's not lament leaving when there is a whole day of interaction and conferring (not to mention my own talk ;) ) still to happen.

Lean Coffee this morning faced an interesting challenge: a water main broke in front of the Marlton, and the water for most of 8th Street was shut off. Water was only restored a little after 7:00 a.m., so the coffee part of Lean Coffee is only just starting to come in for the participants.

One of the topics we started with was the transient nature of software testing, and why so few people who come into testing stay with testing. There are lots of reasons for this. In my career, many of the people I have met who were software testers have gone on to do other things. Some became programmers, some became project managers, some became system administrators, and some became managers. Of the people I knew who were testers for more than five years, most of them remained testers going forward. That's just my observation, but I think that, by the time people have reached the five-year mark as testers, they have decided that they either like testing or are good at it. Of course, this may be an observation bias, since I am seeing people who followed my own path. It's an interesting topic to look at further.

The second topic focused on how software testers train and learn about their jobs, and how they find the time to do it. Many of the participants make an effort to carve out time for material that is relevant to them. For me personally, I like to use my time on the train when I commute to and from work to read, think, and ponder ideas. Others use materials like Coursera or uTest University. Some people enjoy writing blogs (like me ;) ). The key takeaway: everyone has a different way to learn.

The third topic focused on how people get through the large amount of material at a conference. How do they capture it all? There are lots of techniques. My favorite is writing blog posts in semi-real time (like this one). Instead of writing a plain summary of what I am hearing, I try to write a summary plus a personal take on the details, so that it is more personal and actionable. Others use sketch notes, doodles, mind maps, recordings of talks, etc. The key takeaway is not the method you use to capture, but that you make what you capture actionable.

The fourth topic was moving into consulting or contracting, and what it takes to make that work. Several of the participants shared how they made the transition, and the challenges they faced. In addition to doing the work, they have to handle finding gigs, collecting money, paperwork, tax filing, etc. At times there will be unusual fits, and engagements that may not be ideal, but a benefit of being a consultant is that you are there for a specific purpose and a specific time, and at the end of it, you can leave. You need to be able to deal with a high level of ambiguity, and that ambiguity tends to be a big hurdle for many. On the other side, there is a mindset of learning and continuous pivoting, where there's always something new to learn. The pay can be good, but it can be sporadic. Ultimately, at the end of the day, we are all consultants, even if we are employed by a company and collecting a paycheck.

The final topic was "are numbers evil or maligned?" Some people look at numbers as a horrible waste of time. Don't count test cases, don't count bugs, don't count story points. In some ways, this quantitative accounting is both aggravating and somewhat necessary. Numbers provide information. Whether that information is good or bad depends a lot on what we want to measure. If we are dealing with things that are consistent (network throughput, megabytes of download, etc.), collecting those numbers matters. Numbers of test cases, numbers of stories, etc. are much less helpful, because there is so much variation in what those values mean and how much we can control them. When a measurement cannot be made actionable, or is really vague, the numbers don't make much difference. Numbers can be informative, or they can be noise. It's up to our organizations (and us) to figure out what is relevant and why.

Thanks to all the participants for being involved at this early hour :).

---

The morning keynote today was delivered by Carol Strohecker of the Rhode Island School of Design. The topic was "From STEM to STEAM: Advocacy to Curricula", and the focus was on the fact that, while we have been emphasizing STEM (Science, Technology, Engineering and Mathematics), we miss a lot of important details when we do not include "Art" in that process.

STEAM emphasizes the importance of art and design, and how they matter to innovation and economic growth. Tied into this is the maker movement, which likewise emphasizes not just the functional but the aesthetic. There are a lot of neat efforts and initiatives taking place at the academic level, as well as the legislative level, to get these ideas into schools so that we can emphasize this balance.

There are many tools that we can use to help us look at the world in a more artistic and aesthetic way. Art is nebulous and subjective. It does not have the same level of solid, concrete syntax that science or language has (and language is pushing it). Much of the variance in artistic work leads to the development of a particular skill or attribute that we call "intuition". It's not concrete, it's not focused, it's not based on hard data, but it informs in ways that are just as valuable. Artistic endeavors help to develop these traits. Cutting the arts out of our economic vision puts us at a significant disadvantage.

[I had to duck out at this point to take care of some AST business, so I can only give you my take on the actual details I heard discussed. Sorry for the gap.]

One of the quotes shared that makes great sense to me comes from Immanuel Kant:

"The intellect can intuit nothing, the senses can think nothing. Only through their union can knowledge arise."

It feels like this talk is resonating with many people, as the Open Season for this talk is vigorous and active. An emphasis on synthesizing the inputs and the areas that we interact with makes for a richer and greater whole. It takes a different level of thinking to make it happen, but there are amazing opportunities for those willing to stretch into new disciplines. Many books have been suggested, including "Inventing Kindergarten", which talks about the importance of play and discovery to learning and skill acquisition/development.

James Bach made an interesting comment. With the emphasis on STEM, and now STEAM, who gets left behind now? Are there other areas that are now orphaned and unfunded, or is it that these areas have been unfunded for so long that they cry out for help? Does it make sense to work with a small group, or do we need to consider that STEAM is meant to be an all-encompassing discipline focus?


---


Up next is Harrison's and my talk... for obvious reasons, I will not be live blogging during it ;). I do, however, encourage everyone to tweet comments or even ask questions, and I'll be happy to follow up and answer them :).


---

During lunch we had the results of the test challenge and we also had the results of the Board of Directors election.

Returning members of the board:
- Markus Gärtner
- Keith Klain (re-elected)
- Michael Larsen
- Pete Walen

New Members:
- Erik Davis
- Alessandra Moreira
- Justin Rohrman

Congratulations to everyone, I think 2014-2015 will be an awesome year :).

Ben Simo took the stage after lunch to talk about the messy rollout of the healthcare.gov web site and all of the problems that he, alone, was able to find with the site. Ben set up a blog to record the issues he found, and it received a *lot* of attention from the media, and from the government as well.

For the details of each of the areas that he explored, you can see the examples he posted on http://blog.isthereaproblemhere.com/. What I found interesting was that, as Ben tested and logged his discoveries, it became clear just how messed up so many of the areas were; his efforts surfaced some strange issues without even trying. Ben was not asked to speak to Congress or to testify, and he did not see any government action come from his efforts, but he became the target of DDoS attacks, and media outlets were calling him very regularly.

On the side of the "Is There a Problem Here?" site, Ben lists a variety of test heuristics, and he applied most of them to help uncover the bugs he found. Many of the issues discovered fit into specific heuristics. Listed here, they are:

CONSISTENCY HEURISTICS
from James Bach and Michael Bolton

H istory
I mage
C omparable Products
C laims
U ser Expectations
P roduct
P urpose
S tatutes
F amiliar Problems

FAILURE HEURISTICS
from Ben Simo

F unctional
A ppropriate
I mpact
L og
U ser Interface
R ecovery
E motions

SECURITY VULNERABILITIES
the OWASP Top 10

1. Injection
2. Broken authentication and session management
3. Cross-Site Scripting
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards

What was also amazing to see was that all of these issues were discovered and reported using nothing more than his own data. He said he would not try to do anything to access other people's information, and he did not. Even then, he still found plenty of issues that should have made the healthcare.gov team both very nervous and very grateful.
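To make concrete how lightweight some of these probes can be, here is a minimal sketch, in Python, of a check for unvalidated redirects (item 10 on the OWASP list above). To be clear, this is my own illustration, not Ben's actual method, and the endpoint and parameter name are hypothetical placeholders.

# A minimal sketch of probing for an unvalidated redirect (OWASP Top 10, item 10).
# The target URL and the "next" parameter are hypothetical placeholders.
import requests

TARGET = "https://example.com/login"      # hypothetical endpoint
EVIL = "https://attacker.example.net/"    # a domain the site should never redirect to

def check_open_redirect(target, param="next"):
    """Pass a foreign URL in the redirect parameter and see whether the
    server sends us there without validating it first."""
    resp = requests.get(target, params={param: EVIL}, allow_redirects=False)
    location = resp.headers.get("Location", "")
    if resp.status_code in (301, 302, 303, 307, 308) and location.startswith(EVIL):
        print(f"Possible unvalidated redirect: {target} -> {location}")
    else:
        print(f"No open redirect observed for parameter '{param}'.")

if __name__ == "__main__":
    check_open_redirect(TARGET)

The same pattern (send one slightly hostile input, inspect one response header) scales to several of the other items on the list, which may be part of why Ben could find so much with so little.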

---

Geoff Loken tickled my interest with his talk "The history of reason; arts, science, and testing". As one who finds philosophy and all of its iterations over the millennia fascinating, I've found the different ways that reason and intuition developed over the ages to be a worthwhile area to study and learn about. Ultimately, philosophy comes down to epistemology, and epistemology led to the scientific method (observation/conjecture, hypothesis, prediction, testing, analysis).

Observation cannot tell us everything. There are things we just can't see, hear, touch, smell, or taste. At those times, we have to use reasoning skills to go farther. The scientific method does not prove that things exist, but it can, to a point, disprove claims about the nature of an item's existence.

Geoff played a clip from Monty Python and the Holy Grail (the witch scene) which showed what could be considered a case of bad test design. Ironically, based on the laws and understanding of the world at the time, they performed tests that were actually in line with their standards of rigor. We simply have codified our understanding of more disciplines since their time.

One of the other aspects that comes into play when we are testing is that science can quantify, but it can't qualify. There are aspects of testing that we need to look at that go beyond very definable and specific data.

Overall, I found this to be a lot of fun to discuss, and it reminded me of many of the historical dilemmas that we have faced over the centuries. We look at what appears to be totally irrational actions in centuries past. It makes me wonder what from our current time will look irrational 500 years from now ;).

---

The last session I attended today was Justin Rohrman's talk "Looking to Social Science for Help With Metrics". Metrics is considered a dirty word in many places, and that disparaging attitude is not entirely unjustified. Metrics are not entirely useless, but measuring the right things in the right context is important. Used in the wrong context, metrics can be benign at best and downright counter-productive at worst.

By focusing on the metrics that actually matter, we can look at measurements that help us learn about the systems we use and where we are in the life of the product. Some of the context-driven measurements that we could (and should) be looking at include:


  • work in progress
  • cycle time
  • lead time
  • touch time
  • slack
  • takt time
  • time slicing
  • variation
  • Find -> Fix -> Retest loop

These measurements intrigue me, and they seem to be much more in line with what could actually help an organization. They fit well into what is referred to as the Lean model. Lean focuses on measurement for improvement, in contrast with measurement for control.

I'm currently fortunate to work for a company that does not require me to chase down a number of useless metrics, though we have a few core metrics that we look at. These examples give me hope that we can get even more focused on measurement values that are meaningful. I'll definitely bring the context-driven list to my engineering team, or better yet, try to see if I can derive them and report them myself :).
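To make a couple of these concrete, here is a small sketch of how lead time (request to delivery) and cycle time (work started to delivery) might be derived from the timestamps most issue trackers already record. The ticket data is invented purely for illustration.

# A small sketch of deriving lead time and cycle time from ticket timestamps.
# The ticket data below is invented for illustration.
from datetime import datetime
from statistics import mean

tickets = [
    # (created, work_started, delivered)
    ("2014-08-01", "2014-08-04", "2014-08-08"),
    ("2014-08-02", "2014-08-02", "2014-08-05"),
    ("2014-08-05", "2014-08-11", "2014-08-14"),
]

def days_between(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Lead time: from the moment the request was made until it shipped.
lead_times = [days_between(created, delivered) for created, _, delivered in tickets]

# Cycle time: from the moment someone started the work until it shipped.
cycle_times = [days_between(started, delivered) for _, started, delivered in tickets]

print(f"Mean lead time:  {mean(lead_times):.1f} days")
print(f"Mean cycle time: {mean(cycle_times):.1f} days")

The appeal of measurements like these is that they describe the flow of the work rather than the workers, which fits the "measurement for improvement" framing above.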

---

Tim Coulter and Paul Holland wandered about the venue and checked out as many of the talks as they could, so they could share some "TimBITS" and takeaways. Some takeaways:

- Direct your testing keeping the business goals in mind
- Tighten up the feedback loop, it will make everyone happy
- Write your bug reports as if they were for a memory-wiped future you
- When you add a test specialist into a development team, everybody wins
- Skills atrophy: Testing skills must be used or you will lose them
- Social sciences all play a part in testing, it's not just technology
- Testers appear to be hardwired to play games
- Art has an impact on Software Testing
- Good mentoring is hard. Answer your mentees' questions with more questions
- It's a sin to test mobile apps sitting at your desk
- To be a good tester you need to be a "sort of skilled hacker"
- Pair with people in all roles; they will all give you different insights
- You can't prove quality with science, but you can prove facts that may alter judgement
- Try to show the need for what you want to teach before you try to teach it
- "Hey, these aren't just testers but real people!"

Many of the participants are sharing their own takeaways, and they are covering many of the ones already mentioned, but they are showing that many people are seeing that "there is an amazing community that looks out for one another and actively encourages each other to do their best work".

---

It's been fun, but all good things must come to an end. We are now at the closing keynote, given by Matt Heusser, and the title for this one is "Software Testing State of the Practice (And Art! And Science!)".

The first thing Matt says he is seeing is that there is a swing toward programming, and back again. The debate about what testers should be doing, what kind of work they should be doing, and who should be doing it is still raging. Should testers be programmers? Some will embrace that, some will fight it, and some will find a place in the middle.

Automation and tech are increasing in our lives (there are supermarkets where self-checkout is becoming the norm). The problem with this prevalence is that we are losing the human touch and interaction. Another issue is that testing looks to be a transient career. It's entirely likely that more than 50% of the people attending their first CAST this week will not even be in test seven years from now.

Another issue that we are seeing is fragmentation within testing. What does "test" even mean? What is testing, and who has the final say on the actual definition? We have a wide variety of codebases and strategies for how we code and how we test. All of this leads to a discipline where many people don't really understand what test teams do. We are a check box that needs to be marked.

The Agile community in the early 2000s was in a similar situation, and they dared to suggest they had a better way to write software. That became the Agile movement, and it has changed much of the software development world. Scrum is now the default development environment for a large percentage of the software development population. Scrum calls for testing to be done by embedded members of the team, and not by a separate entity.

Matt refers to three words that he felt would change the world of software testing. Those words are:

- Honesty
- Capable
- Reach

We need to be honest with our dealings and we need to show and demonstrate integrity. We need to prove that we are capable and competent, and we need to reach out to those who don't understand what we do or why we do it.

During open season, I added a comment when asked why testers tend to move out of testing: I think there's something to be said for the fact that testers are broad generalists. We have to be. We need to look at the product from a wide variety of angles, and because of that, we have a broad skill set that allows us to pivot into different positions, either temporarily or as a new job. I personally have done stints as a network engineer, an application engineer, a customer support representative, and even a little tech writing and training, all the while having software testing as the majority of my job or as a peripheral component of it. I'm sure I'm not an isolated case.

There's a lot to be said about the fact that we are a community that offers a lot to each other, and we are typically really good at that (giving back to others). As I said in a tweet reply, if we inspire you, please bring the message back to your friends or your team. Let them know we are here :).


Live from New York, It's... #CAST2014

Wednesday, August 13, 2014 1:03 PM


Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live blogging as much as I can of the proceedings, if WiFi will let me.

CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on topics they want to talk about, and then discussion revolves around the topics that get the most votes.

---

This morning we started out with the topic of capacity planning and the attempt to manage and predict team capacity. Variance between teams can be dramatic (a three-person test team is going to have lower capacity than a fifty or one hundred person team). One interesting factor is that estimation for stories is often wrong. Story points and the like often regress to the mean. One attendee asked, "what would happen if we just got rid of points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.

Another interesting measurement is "mean time to fix the build". Another idea is to see which files get checked out and checked in most frequently, to see where the largest number of modifications are taking place and how often. Some organizations look to measure everything they can measure. One quip was "are they measuring how much time they spend measuring?" While some measurements are red herrings, there are often valid areas where it makes sense to measure and learn what needs to be remedied. A general consensus was that the desire to gather lots of finely granular measurements is less effective than just targeting effort at getting the release stable and fixing issues as they happen.
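As a quick illustration of how "mean time to fix the build" might be computed, here is a sketch in Python over an invented build history: for each stretch of red builds, measure the time from the first failure to the next green build, then average those stretches.

# A sketch of computing "mean time to fix the build" from a build history.
# The build records below are invented; real data would come from a CI server.
from datetime import datetime
from statistics import mean

builds = [  # (timestamp, passed?)
    ("2014-08-13 09:00", True),
    ("2014-08-13 10:30", False),   # the build breaks here...
    ("2014-08-13 11:45", False),
    ("2014-08-13 13:10", True),    # ...and is fixed here
    ("2014-08-13 15:00", False),
    ("2014-08-13 15:40", True),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

fix_times = []
broken_since = None
for ts, passed in builds:
    if not passed and broken_since is None:
        broken_since = parse(ts)      # start of a red streak
    elif passed and broken_since is not None:
        fix_times.append((parse(ts) - broken_since).total_seconds() / 3600)
        broken_since = None           # back to green

print(f"Mean time to fix the build: {mean(fix_times):.1f} hours")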

---

Another topic was the challenge of what happens when a company has devalued or lost the exploratory tester skill set by focusing on "technical testers". A debate came up over what "technical tester" actually means; in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and who meets the same level/bar as the software engineers. The question is, what is being lost by making this the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not their primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting. "Technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.

---

The last topic we covered was "testing kata", a topic of great interest to me because of ideas that we who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "randori", an open combat scenario where the participant faces multiple challengers and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori part is an open area that is ripe for discussion. The question is, how do we put it into practice? I'd *love* to hear other people's thoughts on that :).

---

After breakfast, we got things started with Keith Klain welcoming everyone to the event, and Rich Robinson explaining the facilitation process. As many know (and I guess a few don't ;) ), CAST uses a facilitation method that optimizes the way the audience can participate. The K-cards we give out let people signal when and how they want to question, and make sure everyone who wants a say can get their say.

James Bach is the first keynote, and he has opened with the reality that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often there is a delayed reaction (Analytical Lag Time) where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them. The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance. The actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has not failed at this point in time.

The testing we perform can be informed by some basic ideas, some quick suggestions, and then following the threads that those suggestions give us. Every act of genuine testing involves several layers. There's a narrative or explanation, there's an enactment of test ideas, there's the knowledge of the product that we have, there's the tester's role and integration in the team, there's the skill the tester brings to the table, and there's the tester's demeanor and temperament. All of these aspects come into play and help inform us as to what we can do when we test.

The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting that we can get a sequence of steps so that anyone can step in and be Paul Stanley of KISS. We all can sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated, not that there aren't many tribute band performers that really try ;).

James shared the variety of processes that he uses to test. He shared the idea of a Lévy flight, where we sample and cover a space very meticulously, then jump up and "fly" to some other location and do another meticulous search. The Lévy flight heuristic is meant to represent the way that birds and insects scour areas, then fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance, it seems random, but if we observe closely, we see that the seemingly random flying around is not random at all; instead, it's a systematic method of exploration. Other areas James talked about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
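For the curious, here is a rough sketch of what a Lévy-flight-style search looks like as code: mostly short local steps, punctuated by the occasional very long jump, with step lengths drawn from a heavy-tailed distribution. This is my own illustration of the idea, not anything James presented.

# A rough illustration of a Lévy-flight-style search: mostly small local
# steps, with the occasional very long jump, drawn from a heavy-tailed
# (Pareto) distribution.
import math
import random

random.seed(42)  # fixed seed so the walk is repeatable

def levy_step(alpha=1.5):
    """Draw a step length from a Pareto distribution. Most draws are
    short, but a few are very long, which produces the long 'flights'."""
    return random.paretovariate(alpha)

x, y = 0.0, 0.0
for i in range(20):
    length = levy_step()
    angle = random.uniform(0, 2 * math.pi)   # direction chosen uniformly
    x += length * math.cos(angle)
    y += length * math.sin(angle)
    kind = "LONG JUMP " if length > 3 else "local step"
    print(f"step {i:2d}: {kind} length={length:6.2f} -> ({x:7.2f}, {y:7.2f})")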

James created a "spec" based on his observations, but recognizes that his observations could be wrong, so he will share these notes with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions about the project". The more specific we are, the less wiggle room there will be. Issues will be highlighted, and easier to confirm as issues, if we are consistent with the terms we use. The test cases are not all that interesting or important; the tester's spec, and the questions we develop and present at the end, are what matter. However, just as with Paul Stanley singing "Got to Choose" at Cobo Hall in Michigan in 1975, the performance of the same song in Los Angeles in 1977 will not sound exactly the same. Likewise, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.

Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them in the hierarchy where they actually belong. There is a level of focus and there are ways that we want to interact with the system. Having a list of areas to look at, so as to not forget where we want to go, certainly helps. Walking through a city is immensely easier if we have a map, but a map is not essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. A willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.

Ultimately, it comes down to the fact that testing is a performance. Fretting about test cases gets in the way of the performance. Back to watching KISS (or The Cure, or The Weeknd, if we want to be more inclusive): they have songs, they have lyrics, they have emotive passages, but at the end of the day, an instance of a performance can be encoded, yet it represents only one moment in time. Every performance is different; every performance is on a continuum. You can capture a single performance, but not all of the performances that could be made. Cases can guide us, but if we want to perform optimally, we have to get beyond the test cases. We can capture the actual notes and words. There is no automation for "showmanship" ;).

This comes down to tacit and explicit knowledge. I remember talking about this with James when we were at Øredev in Sweden, discussing how to teach something when we can't express it in words. There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it comes down to actively thinking about what you are looking at, and doing what you can to deliver a performance that is both memorable and stands up to scrutiny.

---

As in all of these sessions, there are so many places to go and talks to see that it is difficult to decide what to attend. For that reason, I am deliberately going to let the WebCAST stream speak for itself. If you can view the WebCAST presentations, please do so; once the recordings are available, they will be posted to the AST channel for later viewing. I am going to focus on sessions that are not being recorded, and that are relevant to my own interests and aspirations. With that, I was happy to join Alessandra Moreira (@testchick) and her talk "My Manager Would Never Go For That", or more succinctly, how to apply context-driven principles to the art and act of persuasion. I always think of the scene in the film "Amadeus" where Mozart is trying to convince the Emperor to let him go forward with the staging and presentation of "The Marriage of Figaro". The Emperor at one point says "You are passionate, Mozart, but you do not persuade". This is a key reminder to me, and I'm guessing Ale is very familiar with it. Being passionate is not enough; we have to persuade others to see the value in what we are passionate about.


Sometimes this comes down to a decision of "should I stay or should I go?" Do I change my organization, or do I change my organization? Persuasion may or may not come about, but odds are we can do a better job of persuading if we are ourselves willing to be persuaded. Conversations are two-way streets. We learn new things all the time. Are we willing to adapt our position given new information? If not, why not? If we think that our way is the best way, and we are not willing to bend, why should anyone else be persuaded by us? Influence is not coercion, and it's not manipulation; it's a process of guiding and suggesting, offering information and examples, and "walking the walk" we want to inspire in others. There is a three-step process in the art of persuasion. First, we need to discover something that we feel is important. Second, we need to prepare, to get our ducks in a row, so to speak. We need to have supporting evidence that we understand what we are doing and that we have a compelling case. From there, we need to communicate, and embark on an honest and frank dialog about the outcome we want to see.

In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you the benefit of the doubt if you come prepared. If you can convince the person who needs to be persuaded that you have worked to be ready for them to do their part, they are much more likely to go along with it. Start small, and get some little successes. That will often get the first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or refine and regroup to try again.

I'm fortunate in that I have a manager who is very willing to let me try just about anything if it will help us get to better testing and higher skill, but it's likewise important to do my homework first, as it helps build my credibility on the topic at hand. Credibility goes a long way toward persuading others, and credibility takes time to build. With credibility comes believability, and with believability comes a willingness to let you try an idea or experiment. If the experiments are successful in their eyes, they will be more likely to let you do more in the areas where you are aiming to persuade. If the experiments fail, do not despair, but it may mean you have to adapt your approach and make sure you understand what you need to do and how it fits in your own organization.

One of the key areas that people fail on when it comes to persuasion is "compromise". Compromise has become a bad word to many. It's not that you are losing, it's that you are willing to work with another person to validate what they are thinking and to see what you are thinking. It also helps to start small, pick one area, or a particular time box, and work out from there.

---

During the lunch break, Trish Khoo stepped on stage to talk about "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the code she had written was disregarded by the programmers, since they felt what she was doing was just some other thing that had to be done so the programmers could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point she had never seen a programmer do any type of testing. Since many of the efforts she was used to doing were now being taken care of by the programmers, her role became more challenging, and she had to be a lot more inquisitive and aggressive in looking for new areas to explore.

We often think of the developer and the tester as being responsible for finding and fixing bugs, and the product owner and the tester as being responsible for verifying expectations. The bigger challenge is that these loops chew up hours, days and weeks as we cycle through finding and fixing bugs and verifying expectations. Interestingly, when we ask developers to focus on writing tests to help them write better code, the common answer is "yeah, that makes sense, but we don't have time to do that now, we'll do that next sprint", and then they say the same thing the next time it comes up. How do we convince an organization to consider a different approach and get developers more involved in testing? Trish looked at a number of different organizations to see how they did it.

One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi Dave ;) ). While he is a tester, he also does as much testing as the programmers, and as much code writing as the programmers.

Another person she talked to was Alan Page of Microsoft (Hi, Alan ;) ). Some of the teams at Microsoft have moved to a model where everyone has dispensed with job titles. Ask Alan what his job title is and he'll say "generic employee" or, if pushed, "software engineer". The idea is that people are not confined to a specific specialty. Instead of having people in roles, the goal is to open up opportunities for people to do what their skill set and passion provide. The net result is that managers orchestrate projects based on skill. Instead of hiring "testers who code", they are looking to hire "people who can solve problems with code". The idea of tester as a distinct role is not relevant; everyone codes, everyone tests.

The third case study was with Michael Bachman at Google. In a previous incarnation, Google outsourced a lot of manual testing to vendors, mostly to look at the front end UI. The coverage those testers provided ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and another team, called Engineering Productivity, helped to teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".

What did they all have in common? Efficiency was the main driver. Teams that have gone to this model have done so for efficiency reasons. There are lots of other words associated with this (Education, Feedback, Upskill, Culture, No Safety Net, etc.). One word that is missing, and that I'd be curious to see, is effectiveness. Overall, based on the presentation, I would say that effectiveness was also part of this process. Efficiency without effectiveness will ultimately cause an organization to crash and burn; since these organizations are thriving, there must be real value in these changes.

So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. I don't have a "developer" background, but that doesn't automatically put me at a disadvantage. It does mean I would be well served to learn a bit about programming and get involved in that capacity. It may start small, but we can all do some of it if we give it a chance. We may never get good enough at it to become full time programmers, but this model doesn't really require that. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than be on the tail end of the experience and have it happen to you, perhaps it may help to get in front of the wave and be part of the transition :).

---

The next session I opted to attend was "The Psychology and Engineering of Testing", presented by Jan Eumann and Ilari Aegerter. Both Jan and Ilari work for eBay, but they are part of the European team, and the European market and engineering realities are different from what goes on in Silicon Valley. There is a group of testers based in London and Berlin that gets software from the U.S. to test, while the European team has software testers embedded in the development teams.

Who works as an embedded tester on an Agile team? Overall, they look for individuals with strong engineering skills, but they also want to see the passion, interest and curiosity that help make an embedded tester formidable. The important distinction is that the embedded testers at eBay Europe are not thinking of programming and testing as either/or, but as "as well as". They are encouraged to develop a broad set of skills to help solve real problems alongside the best people who can solve them, rather than to focus on just one area.

When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time the developers became test infected, and the testers became programming savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.

Though there are integrated teams, the testers still report to testing managers, so while there are still traditional reporting structures, there is a strong interconnected sense between the programmers and testers in their current culture. The Product Test Engineering team has their own Agile manifesto that helps define the integration and importance of the role of test, and how it's a role that is shared through the whole team. If the goal of an embedded tester is to be part of a team, then it makes sense, in Jan's view, to be with the team in space, attitude and purpose. Sitting with the programmers, hanging with the programmers, meeting with the programmers, all of these help to make sure that the tester is involved right from the start.

Additionally, testers can help teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness of other domains. They also have the ability to help guide testing efforts in early stage development, and to inform and encourage areas of coverage that programmers might not set up were there not a tester involved. It sounds like an exciting place to be a part of, and an interesting model to aspire to.

---

I love the cross pollination that occurs between the social sciences and software testing, and Huib Schoots gave a talk that addressed exactly that.

We often treat software testing and computer science as though they are hard sciences like mathematics or physics or chemistry. They have similar principles and components, but in many ways the systems that make software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. The fact is, timing, variance, user interactions, congestion and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests that failed on one run passed the second time they were run, without any parameters being changed. Something caused the first run to fail and the second to pass. There can be lots of reasons, but usually they are not physics or mechanical details; they are coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.
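To make that concrete, here is a minimal sketch of the kind of timing-dependent check I'm describing. Python is my own choice here (Huib's talk named no code), and fetch_status is a hypothetical stand-in for a service call with variable latency, not anything from a real suite:

import random
import time
import unittest

def fetch_status():
    """Simulate a dependency whose response time varies from run to run."""
    time.sleep(random.uniform(0.0, 0.1))  # latency jitter, like a busy CI box
    return "ok"

class FlakyDeadlineTest(unittest.TestCase):
    def test_status_within_deadline(self):
        start = time.monotonic()
        status = fetch_status()
        elapsed = time.monotonic() - start
        self.assertEqual(status, "ok")
        # This assertion encodes a timing assumption. Whenever the jitter
        # exceeds 50 ms, the test fails with no code or parameter change.
        self.assertLess(elapsed, 0.05)

if __name__ == "__main__":
    unittest.main()

Run it a few times in a row and you will see passes and failures with nothing changed in between. The "experiment" is not repeatable, and the flaw is in how the check was written, not in the laws of physics.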

Huib shifted over to the ideas in the book "Thinking, Fast and Slow", in which simple things are calculated or evaluated very quickly, and more complicated matters require a different kind of thinking. Karl Marx developed theories about how people should interact, and while the theories he prescribed have been shown to be less than ideal, they are still based on the realities of how humans interact with one another. The science of Sociology informs many aspects of the way that we work and interact with others, which in turn informs our designs of systems. Claude Lévi-Strauss represents Anthropology, which deals with the way that different cultures are structured and the environmental factors that help to shape those structures. Maria Montessori represents Didactics and Pedagogy, a.k.a. learning and the methodology that informs how we learn. Huib used his girlfriend to represent Communication studies, and the fact that the way we talk to one another informs the way we design systems, because the communication aspect is often what gets in the way of what an application does (or should I say, the inability to communicate smoothly gets in the way).

Science and research inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago, when I stopped considering a long list of written test cases to be effective testing. By going back to the scientific method, I was able to reframe testing as a scientific discipline in and of itself. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they tell us about how we communicate and interact.

We focus so much attention on trying to prove we are right. That's a misconception; we cannot prove we are right about anything. We can disprove, and when we say we have "proven" something, what we really mean is that we have not yet found anything that disproves what we have seen. Over time, and with repeated observation, we can come close to saying it is "right", but only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).

Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.

---

Martin Hynie and Christin Wiedemann gave a talk that was all about games. Well, to be more specific, "Why testers love playing – Exploring the science behind games". Games are fundamental to the way that we interact with our environment and the way we interact with others. Games help us develop cognitive abilities. The more we play, the more cognitive development occurs. This reminds me a lot of the work and talks I have seen from Jane McGonigal on how gaming and game culture affect both our thinking and our psychological being.

OK, that's all cool, but what does this have to do with testing?

Testing is greatly informed by the way we interact with systems. We try out ideas and see if they will work based on what we think a system might do. While the specific skills learned from games do not transfer, the ways of looking at situations, and the inspiration that gives us ideas, do. Game play has been shown to modify and change our cortical networks. It was fascinating to see the way in which Martin and the other testers on the team approached this as a testing challenge.

The test subjects had their brains scanned while they were playing games, and the results showed that gaming had an actual impact on the brain. Those who played games frequently showed cooler areas of brain activity than those who did not, which suggests that gaming optimizes neural networks in many cases. Martin also took this process further, focusing on a more specific game, i.e. Mastermind, and what that game did to his brain.

So are games good for testers? The jury is out, but the small sample certainly seems to indicate that yes, games do indeed help testers, and that the culture of tester games, and other games, is indeed healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).

---

A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!

From Mid-Town Manhattan, it's #TestRetreatNYC

Sunday, August 10, 2014 17:52 PM


In the offices of LiquidNet, a group of intrepid, ambitious testers met to discuss the aspects of testing that mattered to us. Test Retreat is an Open Space conference, where sessions begin (and end) when the participants want them to, the people who are there are the people who need to be there, and the law of two feet rules. We took some time to propose topics, pull together similar topics and threads, and then headed off to our spaces to talk about the stuff that was burning us up inside.

---

The first session I attended was based around developing a testing community within a city or region. As many of us were part of different communities around the world, there were a variety of experiences, ranging from very small markets with perhaps a hundred testers total, to large regions with hundreds of thousands of testers, but little in the way of community engagement. 

Rich Robinson led the discussion and shared his own experiences with growing and scaling the community in Sydney, Australia. Rich shared that there were three things attendees consistently asked for and looked at as goals: looking for work, finding out the latest trends, and honing and sharpening skills. Rather than try to be all things to all people, they developed a committee and separate initiatives, with committee members focusing on the initiatives that mattered to them. For the group that wanted to find work, they created an initiative called “opportunity seekers”, and focused energy on those who had that as their biggest objective. For those who want to focus on the latest trends, there is an avenue for that as well.

Different regions have unique challenges. In the Bay Area, we have an overabundance of meetups for technical topics, of which a handful of them relate to software testing. As a founder of Bay Area Software Testers (BAST), Curtis Stuehrenberger, Josh Meier and I have chosen to try to focus on areas that are not as often discussed (we tend to steer clear of automation tools and techniques, since there are dozens of other meetup groups that cover those topics). Another challenge is frequency. In some cases, regular and frequent meetings are key. In others, having them less frequently works, but the key is that they meet regularly (monthly, every six weeks, or quarterly seem to be the most common models).

Rich also shared that their initiatives branch away from the formal meetup sessions, and have other opportunities that they initiate that occur outside of the formal meetup times. By having each initiative have people committed to it and resources to help drive those initiatives, buzz gets generated and more people get involved. One of the key things that Rich emphasized was getting people involved and engaged for these initiatives. The Sydney tester group has a committee of ten members that helps make sure that these initiatives are staffed and supported.

Another challenge is geography. Some cities sprawl; others are difficult to get to in a timely manner due to traffic and population density. For example, the San Francisco Bay Area has four general regions: San Francisco (and the upper Peninsula), Silicon Valley (and the lower Peninsula), the East Bay, and the North Bay. With a few exceptions, people who participate in one generally do not regularly participate in the others. Reaching a broader community across these sub-regions may require using technology and remote-access options for people to participate.

Ultimately, growing a community takes time, dedicated people, and a range of topics that matter to the attendees (including making sure that food and drink are there ;) ). To get the people you want involved, it helps to be very specific about what is needed. Saying “I need help with this” is less effective than saying “I need this specific thing to be done at this time for this purpose”. Specificity helps a lot when recruiting helpers.


The next session was “Sleep No More”, presented by Claire Moss, and focused on the model of the performance/play called Sleep No More (which is, in some ways, described as “an immersive performance of Macbeth”). It’s a darkened environment, all participants wear masks, no photography, no talking, just experience as the person sees it. Exploratory Testing, in many ways, has similarities to this particular experience. Claire used a number of cards to help display the ideas, and one of the first ideas she shared was “fortune favors the bold”. Curiosity and a willingness to go in without fear and deal with a substantial amount of “vague” is a huge plus. If you already have that, you have a strong advantage. If this is not natural for you, it can be developed.

Each room in the Sleep No More experience was part of the performance, and at any time, rooms could be empty or have people filter in during the performance. There are “minders” in the event that help to make sure that people don’t completely lose track of where they are. At times, there are very personal experiences that take place based on your tracks and where you go. Claire described a very intense experience of the performance based on where she went and what she observed and chose to follow up on. She also said that, up to this point, no one that she knew had anything like the experience she had.

The experience of Sleep No More was bizarre, creepy, full of strange triggers, with the potential to go in wildly unexpected directions. Software testing in many ways mirrors this experience. While there may be familiar areas and ideas, very often our choices and angles take us to very unexpected places. To give a sense of the scope of the space, this was in a six story building that used to be a hotel (whole areas of the building were gutted, and in some spaces multiple floors were open to the air and visible).

Claire described a feeling of “amazement fatigue”, where the level of stimulus is so high that there is no way to take it all in. The participants have to make conscious choices as to where they will go, and many of the participants will have wildly different experiences. Sometimes, they would follow a character, only to watch them go through a door and close and lock it, so that they couldn’t be followed any longer. This reminds me of following threads of a feature, and being brought to a dead end. People will observe different things, and they will also observe what other people do, and what they focus on. This can give us clues as to areas we want to explore next. 

This experience sounds amazing, and I am definitely interested in going and doing it myself, if time and commitments permit me to do so. Looks like I will be attending the August 10 performance :).


The next session was based on “Leadership”, and Natalie Bennett led the session with the idea that she wanted to see where individuals felt their experiences or needs for leadership were, as opposed to her telling us what she felt about Leadership and how to do it. Questions that Natalie wanted to discuss were:

- What is the purpose of a test team lead?
- What is it for?
- What makes it different from being a test manager?

The discussion shifted from there into the ways that test team leads and test managers are similar and where they differ. Some of the participants talked about how they led by example, and how they divvied up the work among the group based on the people involved and what they were expected to do. Team leads in general do not have hiring/firing authority, and they typically do not write reviews or have salary decision input. In other environments, the team manager and team lead are one and the same. Some are cynical about the effectiveness of this arrangement, while others feel it is possible to be both a team lead and a team manager. One attendee who is a Director of Q.A. for her company said that she was “the face of Q.A.” to the organization, and as such, she was setting the direction and expectations for the organization, as well as for her own direct reports.

Team leads are expected to teach and coach the members of their group, as well as be the point of contact for the group. It’s seen as important that they be able to focus on and develop their own role and make it responsive to their own environment. The team lead stands up for the group, and defends them from encroaching issues and initiatives that are counter-productive to their success. Responsibility and authority tend to be on a sliding scale. Different companies allow different levels of authority for their leads. Some give a level of authority that is just short of being an actual manager. In others, the lead is considered a “first contact” among equals.

One of the bigger challenges is to deal effectively when team members are failing. Failing in and of itself is not bad. It’s important to learn, and failing is how you learn, but when the failing is chronic or insurmountable, there needs to be a different level of interaction. Lean Coffee, direct mentoring, or even a serious re-consideration of experiences and goals can be hugely beneficial, both for the individual and the team as a whole.


Matt Heusser led a session about “Teaching Testing”, and some of the challenges we face when we teach software testing to others. When we have an engaged and focused person, this usually isn’t a problem. When the person in question isn’t engaged, or is just going through the motions, it’s a little more difficult. The question we focused on first was “what methods of teaching have worked for you?” Testing is a tactile experience, rather than an abstract question. We are familiar with questions like “how do you test a stapler?” or “how can you test a Rubik’s Cube?”. The presentation of this challenge may be the most important aspect. Some might look at “how do you test a stapler?” as demeaning: they are professionals, so what is this exercise going to teach them?

In my experience, one of the things I found to be helpful is to actually spell out how challenging the exercise could be. Rather than ask “How do you test a stapler?”, I might instead say “Tell me the 120 ways that you can test a stapler to confirm it’s fit for use?” This sets a very different expectation. Instead of saying “oh, this is trivial”, by seeding a high number, they may want to try to see how they might be able to meet or exceed that number. They become engaged.

To borrow a bit from Seth Godin, there are two primary goals for every learner, regardless of the discipline. The first is to focus on authentic problems. The second is to be able to lead. Domain knowledge is a huge factor in helping to identify authentic problems. It’s not the only means, but getting to really know the domain can help inform testing ability. Another important aspect is to understand how people learn. Everyone goes about learning a bit differently, and helping each person learn how they learn can be a huge step in teaching them. Sometimes the ripest area for learning is to wade into an area where people disagree, or where there is dysfunction: where team members don’t talk to each other, or there’s simmering hostility between people. If there’s hostility between two programmers, and they write software that interacts, it’s a good bet that there might be a goldmine of issues at their interaction points (I think this is a very interesting idea, btw :) ).

Key to teaching testing is the ability to reflect and confirm what has been taught and learned, and for me, I think that Weekend Testing does this very well. The benefit of Weekend Testing, beyond just doing the exercise, is that we can see lightbulbs turning on, and there’s a record of it that others can see and learn from. Creating HowTo’s can also be a helpful mechanism for this. 


This slot was the talk that Smita Mishra and I gave about “Hiring Testers and Keeping Them Engaged Once We’ve Hired Them”. I recorded this session, and I will transcribe it later ;).


Claire Moss led a session on “Communicating to Management” and we went through and considered a list of questions that are important to frame the conversation(s):

- What does quality look like to our organization?
- Why spend money on testing?
- What does testing do?
- What value are we getting out of testing?
- “I read this about QA, and it says we should do this… why aren’t we doing this?”

These are all questions that we need to be prepared to answer. The question is, how do we do that?

There are several methods we can use, but first and foremost, we need to determine what we need to speak with management about, and if possible, use the opportunities to help educate them about what it is we can do, and at the same time, get a clear understanding about what their view of the world is.

Looking to standards and practices can give us guidance, but they don’t always represent our reality. Information needs to be specific to where we stand at a given time. Testing is primarily focused on giving quality information to the executives so that they can make informed decisions. That is first and foremost our mission. Information that we can effectively provide includes:

- Framing of the ecosystem on a global scale (browser standards, trends, data usage histories)
- Impact on customers (client feedback, analytics data)
- Clarify issues and questions (heading off the executive freakout)
- Managing expectations (especially when dealing with something new)
- Explaining how likely it is that issues brought to their attention are really problems worth investing in
- Explaining risk factors and methods to mitigate those risks

---

At the end of the day we had a lot of new ideas, feedback for some new initiatives, an emphasis on better communication, more focused due diligence, and the sense that many participants had a lot they felt they could contribute. This was a fun and active day, with a lot of learning and connecting. One of the things that always impresses me about these events is that we really have a lot of solid people in the testing community, but we need even more.

I encourage every tester who admires craftsmanship, skill, and thinking to make it a point to come to these now annual events (this is the third of them, so I think it’s safe to say it’s a thing now ;) ). Once again, thanks Matt (Heusser) and Matt (Barcomb) for organizing what has become my favorite Open Space event. May there be many more.


Silence Can Be Powerful

Friday, August 08, 2014 13:00 PM

This past Tuesday evening, the Boy Scout troop I serve as Scoutmaster (Troop 250 in San Bruno) held its big annual Homecoming Court of Honor. This is typically a big affair, in that it encompasses our Scout Camp week and all the awards earned while there (which is to say, a lot of them).

One of the things I have been trying to do with the boys in my troop is encourage them to take on challenges and take chances. Ultimately, a well run Troop is run by the boys, not by the adult leaders. Still, it's very common for them to ask me a lot of questions about what they should do or shouldn't do, and normally, I'm ready and willing to provide them answers.

This time, though, I decided to do something unprecedented, at least as far as a Court of Honor was concerned. The scouts are familiar with what we call a "silent" campout. At a silent campout, the adult leaders camp in a site adjacent to the one the scouts are camping in, and we follow all rules of safety and emergency preparedness, but other than that, we stay in our camp site, and they stay in theirs. They set up, cook, clean, make and break down fires, and anything else they need, all without any input from the adult leaders. The goal is to have them learn from their own mistakes, and to have them work with each other to solve their problems rather than have the adult leaders do it for them.

I decided to take this one step further and declared that our Court of Honor would be a "silent" Court of Honor. In other words, the scouts would run it, they would do all of the specific details (give out awards, recognize rank advancements, etc.) and I and the other adult leaders would sit back and watch. We would not speak, we would not direct, we would not answer questions.

So how did it work out? Splendidly!

Granted, there were several times when I had to sit on my hands and keep my mouth shut when I so wanted to say "no, not that way, do it like this!" That, however, was not the point. It wasn't my Court of Honor, it was their Court of Honor. If they neglected to bring something out, put something on display, or do something they had seen me do dozens of times, that didn't matter. What I wanted to see from them was what they felt was important. I wanted to see which aspects of a Court of Honor they wanted to do. They jumped at the chance to do a skit. They liked doing silly one liners. They enjoyed the awards part, and they left me a little time at the end to speak my piece. Which I did, but only at the end.

Many times, I think we do a disservice to those who are learning and trying to figure out what's important and what isn't. Coyote Teaching focuses on leveraging the environment and addressing real needs, as well as focusing on the art of questioning, or asking more questions rather than giving direct answers. I'd like to add to that the very real teaching tool of "be quiet". Sometimes it's best to not answer, or to remove ourselves entirely. Sure, there may be stumbling, there may be things said that are not perfect, or there may be some key stuff that gets forgotten, but that's OK.

What's important is to give the people you are working with a chance to discover what is important to them, and let them reach that conclusion themselves. It would have been very efficient to correct them and tell them what to do, but it would have been far less effective than giving them the chance to run with the program all on their own. They've participated in several Courts of Honor over the years that I have run, and regardless of how flawlessly I may have done them in the past, none of them will be as memorable, or mean as much, as this one will. The reason? By giving them silence, they got to experience and do for themselves what they wanted to do, and honor the troop in the way that actually mattered to them.

I'm super proud of all of them... and yes, I took notes ;).

Stepping Back, Taking a Breath, Letting Go, and Saying "NO"

Thursday, August 07, 2014 20:01 PM

Many will, no doubt, notice that my contributions to this blog have been spotty the past few months. There's a very specific reason.

A few months back, I did an experiment. I decided to sit down and really see how long it took me to do certain things. I've been reading a lot lately about the myth of multi-tasking (as in, we humans cannot really do it, no matter what we may think to the contrary). I'd been noticing that a lot of my email conversations started to have a familiar theme to them: "yeah, I know I said I'd do that, and I'm sorry I'm behind, but I'll get to that right away".

Honestly, I meant that each and every time I wrote it, but I realized that I had done something I am far too prone to do. I too frequently say "yes" to things that sound like fun, sound like an adventure, or otherwise would interest and engage me. In the boundless optimism of the moment, I say "sure" to those opportunities, knowing in the back of my mind there's going to be a time cost, but it's really fuzzy, and I couldn't quantify it in a meaningful enough way to guard myself.

I decided I needed to do something specific. I purchased a 365 day calendar (the kind with tear off pages for each day), and I took all of the pages from January through May (May being the current month when I did this). I wrote down, on each sheet, something I had said or promised someone I would do. Some of them were trivial, some were more involved, and some were big ticket items like researching an entire series of blog posts or working through a full course of study for a programming language. As I started jotting them down, I realized that each time I wrote one down, another popped into my head, and I dutifully wrote that one down too, and another, and another, until I had mostly used up the sheets of paper.

WOW!!!

I came to the conclusion that I would have to do some drastic time management to actually get through all of these, and part of that was to find out where I actually spent my time and how much time it took to actually complete these tasks. I also told myself that, until I got through a bunch of these, I was going to curtail my blog writing until they were done. I've often used my blog as a "healthy procrastination", but I decided that, unless I was discussing something time sensitive or I was at an event, the blog would have to take a back seat. That's the long and short of why I have written so little these past three months.

In addition, I came to a realization that matched a lot of what I had been reading about multi-tasking and effectively transitioning from one task to another. For every two tasks I tried to accomplish at the same time, the turnaround time to get them done was four hours above and beyond what it would have taken to do those tasks individually. That was the absolute best case scenario, with me firing on all cylinders and my brain in "hot mode". As I've said in the past, to borrow from James Bach, my brain is not like a well oiled machine. Instead, it's very much like an unruly tiger. I can have all the desire in the world, and all the incentives to want to get something done, but unless "the tiger" is in the mood, it just isn't going to be a product I, or anyone else, will be happy with.

The stimuli that had the best effect were an absolute drop dead date, and another person in need of what I was doing to make it happen. Even then, I found myself delivering so close to the drop dead date that it was making both me and the people I was collaborating with anxious.

Frankly, that's just no way to live!

Next week is CAST. I am excited about the talk I am delivering. It's about mentoring using a method called Coyote Teaching, and the rich (but often expensive) way it allows for not just transfer of skills, but also truly effective understanding. In the process of writing and working on this talk with my co-presenter, Harrison Lovell, I decided to use it on myself, a little bit of "Physician, heal thyself". I came to realize that my expectational debt was growing out of control again. In the effort to try to please everyone, I was pleasing no one, least of all myself. Additionally, I have been looking at what the next year or so is shaping up to look like, and where my time and energy are going to be needed, and I came to the stark realization that I really had to cut back my time and attention on a variety of things that, while they sounded great on the surface, were just going to take up too much time for me to be effective.

I've already conversed with several people and started the process of tying up and winding down some things. I want to be good to my word, but I have to be clear as to what I can really do and what time I actually have to do those things. Time and attention are finite. We really cannot make or delay time. No one has yet made the magic device from "The Girl, The Gold Watch and Everything", and time travel is not yet possible. That means all I can do is use the precious 24 hours I am granted each day to meet the objectives that really matter. It also means I really and honestly have to exercise the muscles that control the answer "NO" much more often than I am comfortable doing. I have to remind myself that I would rather do fewer things really well than do a lot of things mediocrely or poorly.

I am appreciative of those who have willingly and understandingly helped by stepping in and taking over areas that I needed to step back from. Others will follow, to be certain. For the most part, though, people are actually OK with it when you say "NO". It's far better than saying "YES" and having that yes disappear into a black hole of time, needing consistent prodding and poking to bring it back to the surface.

I still have some things to deliver, and once they are delivered, I'm going to tie off the loose ends and move on where I can, hand off what I must, and focus on the areas that are the most important (of which I realize, that list can change daily). Here's looking to a little less cluttered, but hopefully more focused and effective few months ahead, and what I hope is also a more regular blog posting schedule ;).

A Time and a Season to all Things

Wednesday, July 16, 2014 21:50 PM

Today I sent the following message to the members of the Education Special Interest Group of the Association for Software Testing:


Hello everyone!

Three years ago at this time, I took on a challenge that no one else wanted to take on. I realized that there was a lot at stake if someone didn't [added: the AST BBST classes might cease], and thus a practitioner, with little academic experience, took over a role that Cem Kaner had managed for several years. I stepped into the role of being the Education SIG Chair, and through that process, I learned a lot, we as a SIG have done a lot, and some interesting projects have come our way to be part of (expansion of AST BBST classes and offerings, SummerQAmp materials, PerScholas mentoring program, etc.). It's been a pleasure to be part of these opportunities and represent the members of AST in this capacity.

However, there is a time and a season for all things, and I feel that my time as an effective Chair has reached its end. As of July 15, 2014, I have officially resigned as the Chair of the Education Special Interest Group. This does not mean that I will stop being involved, or stop teaching BBST courses, or stop working on the SummerQAmp materials. In fact, it's [my] desire to work on those things that has prompted me to take this step. Even my hyper-involved self has to know his limitations.

I have asked Justin Rohrman to be the new Chair of the Education Special Interest Group, and he has graciously accepted. Justin is more than capable of doing the job. In many ways, I suspect he will do a better job than I have. I intend to work with him over the next few weeks to provide an orderly transition of roles and authority so that he can do what I do, and ultimately, so I can stop doing some of it :).

Justin, congratulations, and thank you. EdSIG, I believe wholeheartedly you shall be in good hands.

Regards,
Michael Larsen
Outgoing EdSIG Chair


To everyone I've had the chance to work with in this capacity over the past three years, thank you. Thank you for your patience as I learned how to make everything work, for some definition of "work". Thank you for helping me learn and dare to try things I wasn't aware I could even do. Most of all, thanks for teaching me more than I am sure I have ever taught any of you over these past three years.

As I said above, I am not going away. I am not going to stop teaching the BBST courses, but this will give me more of an opportunity to be able to teach them, or assist others in doing so, which is a more likely outcome, I think. It also frees me up so I can give more attention to participating in programs that matter a great deal to me, such as SummerQAmp and PerScholas. As I said above, I believe Justin will be fantastic, and I'll be just a phone call or email message away from help if he should need it ;).

Packt Publishing Turns Ten - We Get The Presents :)

Friday, July 04, 2014 13:13 PM

Over the past several years, I have been given many books to review by Packt Publishing, and it's safe to say a substantial part of my technical library comes from them. As a blogger who receives these books, usually for free, I like to help spread the word for publishers when they make special offers for their readers.

Below is a message from Packt Publishing that celebrates their tenth anniversary as a publisher, and an announcement that all of their ebook and video titles are, until Saturday, July 5, 2014, available for $10 each.

To take advantage, go to http://bit.ly/1mWoyq1

Packt celebrates 10 years with a special $10 offer

This month marks 10 years since Packt Publishing embarked on its mission to deliver effective learning and information services to IT professionals. In that time it’s published over 2000 titles and helped projects become household names, awarding over $400,000 through its Open Source Project Royalty Scheme.

To celebrate this huge milestone, from June 26th, every e-book and video title is $10 each for 10 days – this promotion covers every title and customers can stock up on as many copies as they like until July 5th.

Dave Maclean, Managing Director, explains: ‘From our very first book published back in 2004, we’ve always focused on giving IT professionals the actionable knowledge they need to get the job done. As we look forward to the next 10 years, everything we do here at Packt will focus on helping those IT professionals, and the wider world, put software to work in innovative new ways.

We’re very excited to take our customers on this new journey with us, and we would like to thank them for coming this far with this special 10-day celebration, when we’ll be opening up our comprehensive range of titles for $10 each.’

If you’ve already tried a Packt title in the past, you’ll know this is a great opportunity to explore what’s new and maintain your personal and professional development. If you’re new to Packt, then now is the time to try our extensive range – we’re confident that in our 2000+ titles you’ll find the knowledge you really need, whether that’s specific learning on an emerging technology or the key skills to keep you ahead of the competition in more established tech.

More information is available at: http://bit.ly/1mWoyq1

Coyote Teaching, Lousy Estimates, and Making a Pirate

Wednesday, July 02, 2014 20:23 PM

Several weeks ago I made a conscious decision: to focus on one goal and one goal only. I'll admit it's a strange goal, and it's difficult to put into simple words or explain in a way that many people will understand, but for long time readers of this blog, that shouldn't come as a surprise ;).

In August, I'm going to be presenting a talk with Harrison Lovell about Coyote Teaching, and the ways in which this type of teaching can help inform us better than rote example and imitation. As part of this process, I thought it would be fun to take something that I do that is completely outside the realm of software testing, and see what would happen if I applied or examined Coyote Teaching ideas and techniques in that space. Personally, I found the results very interesting, and over the next few days, I'm going to share some of what I learned, and how those lessons can be applied.

What was this unusual project? It's all about "playing pirate" ;).

OK, wait, let me back up a bit...

One of the things I've been famous for, over several Halloweens, has been my elaborate and very involved pirate costumes. Why pirates? They fascinate me, always have. They are the outsiders, the ones who dared to subvert a system that was tyrannical, and to make a world that was built on their own terms. Granted, that world was often bloody, violent, deceptive, and very dangerous, with a strong likelihood the pirates would be killed outright or publicly executed, yet it has captured the imaginations of generations through several centuries.

Here in Northern California, there is an annual affair called the Northern California Pirate Festival. Many people dress up and “play pirate” at this event, and last year I made a commitment that I would be one of them. More to the point, I decided I wanted to go beyond just “playing pirate”; I wanted to get in on the action. Now, in this day and age, getting in on the action doesn't mean “become an actual pirate”, it means joining the ranks of the re-enactors. This year was a small step, in that I chose to volunteer for the festival and work in whatever capacity they needed. With this decision, I also opted to go beyond the Halloween tropes of pirates, and actually research and bring to life a composite character from the period, paying special attention to the clothes, the mannerisms, and the details of the particular era.

When most people look at popular representations of pirates, they're looking at tropes from the Golden Age of Piracy, that period in the early 1700s where many of the famous stories are set (the Pirates of the Caribbean franchise, Treasure Island, Black Sails, etc.). What this ignores is that piracy had been around for millennia, and there were other eras with a rich history, and an interesting look, all their own.

To this end, I decided that I wanted to represent an Elizabethan Sea Dog. My goal was to have people walk up to me, and say “hey, you look different than most of the people here”, and then I could discuss earlier ages of piracy, or in my case, privateering (and really, if you were on the side of the people being attacked, that difference was mostly irrelevant).

To make this a little more interesting, I decided to make my outfit from scratch. The only items I did not make from scratch were my boots, my hat, and the sword and dagger that I chose to carry. Everything else would be hand made, and here is where our story really begins.

The first order of business, if you choose to be a re-enactor, is research. If your character is a real person, you need to know as much as possible about not just their personal history, but their time period, where they came from, the mores of the day, the situations that may have driven someone to be on the high seas in the first place, and the decisions that might potentially lead them to become privateers or pirates. Even if the character you are re-enacting is fictitious, you still want to be able to capture these details. I spent several months reading up on and examining all of these aspects, but I gave the clothes of the era special attention. What did a mariner in the mid-1500s actually wear? In time, I came up with a mental picture of what I wanted my Sea Dog to look like. My Sea Dog would have high cavalry style boots, long pumpkin breeches, a billowy Renaissance style shirt, a close-fitting jacket (referred to as a “doublet”), and a thicker outer jacket, called a jerkin, and would wear what was called a “Tudor Cap”. I would also make a wide belt capable of carrying both a rapier and a main gauche (a parrying dagger used in two handed dueling, common for the time period). I would make the “frogs”, or the carriers for the sword and dagger. I’d also make a simple pouch to hold valuables. Just a handful of items. It didn't seem that complicated. As one who already knew how to sew, and had experience making clothes in the past, I figured this was a project I could knock out in a weekend.

Wow, was I ever wrong!

I thank you if you have stuck with me up to this point, and you may be forgiven if you are thinking "wow, that's quite a buildup, but what does this have to do with the Coyote Teaching method?" Well, let's have a look, starting with the first part of the project, the pumpkin breeches.

Through my research I decided I wanted to create something that looked dashing, and a little dangerous, and I decided that I would use leather and suede in many of the pieces. The problem with leather and suede is that it doesn't come on a regular sized bolt of fabric. In fact, real leather and suede is some of the most irregular material you can work with, since everything depends on the particular hide you are examining. I quickly realized that I had no pieces large enough to cut a full leg portion from any of my pattern pieces. What to do? In this case, I decided to piece long strips of three inch wide suede together. This would give the look of “panel seams”, and the sectional look that is common for pumpkin breeches.

So let’s think about the easy part. Make a pair of pants. Just cut some material, and stitch it together, right?

Here are the steps that making these pants really entailed:

- take out multiple suede hides and examine them
- cut away the sections that would be unusable (too thin, too thick, holes or angles that couldn’t be used, etc.)
- lay out the remaining pieces and utilize a template to cut the strips needed. Repeat 40 times.
- take regular breaks because cutting through suede is tiring.

Irregular suede pieces cut to a uniform width and length.

- baste the pieces together and stitch them down the length of the strips, so as to make panels that were ten strips wide.
- make four of these panels.

Stitched composite panels. Each section of ten strips is used for half of each leg (4 panels total).

- size the pattern for the dimensions of the pants desired.
- cut the suede panels into the desired shapes (being careful to minimize the need to cut over the stitched sections)
- cut matching pattern pieces out of linen to act as a lining for the suede.
- pin and piece together the lining and outer suede and sew them together.
- piece the leather panels together and sew them so they actually resemble breeches
- wrestle with a sewing machine that is sewing through four to six layers of suede at a time, as well as the thickness of the lining material
- replace broken needles, since suede is murder on sewing machine needles, even when using leather needles.
- unstitch areas that bunched up, or where the thread was visible and not cleanly pulled, or where thread broke while stitching.
- make a cutaway and stitch a fly so that the pants can be opened and closed (so as to aid putting on and taking off, and of course, answering the call of nature).
- punch holes to place grommets in the waistband (since breeches of this period were tied to the doublet).

The weekend I had set aside to do the whole project actually gave me just enough time to size the suede and cut the strips I would need. That’s all I managed to do in that time, because I discovered a variety of contingent steps that were needed first. I had to get my tools together, determine which of my tools were up to the task and which tools I didn’t even own, and clear space and set up my work area to be effective. These issues took way more time than I anticipated.

How long did it take me to actually make these breeches? When all was said and done, a week. Using whatever time I could carve out, I estimated I spent close to 14 hours getting everything squared away to make these.
Completed pumpkin breeches... or so I thought at the time.


I liked how they turned out; I thought they looked amazing… that is, until I gave them a test run outside in the heat of the day and realized that I would probably die of heat exhaustion.

Since the weekend of the event was looking to be very hot (mid 90s, historically speaking, with one year reaching 104 degrees), I realized these pants would be so uncomfortable as to be unbearable. What could I do now? I had put so much time into them that I didn’t have time to start over. Fortunately, research came to the rescue. It turns out that there was a style of pumpkin breeches that, instead of being stitched together, had strips of material that acted as guards and hung loosely. After looking at a few examples, and seeing how they were made, I decided to cut open all of the seams I had spent so much time putting together, and reinforce the sections at the waist and at the bottom of the leg. It was a long and tedious change, but it allowed air to escape, and me to not die of heat exhaustion.

Jumping a little ahead, but here is the finished open air version of the pumpkin breeches.


OK, let's talk Coyote Teaching now...

This whole process brought into stark relief the idea of estimating our efforts, and how we, even when we are experienced, can be greatly misled by our enthusiasm for a project.

Was I completely off base to think I could get this project done in a weekend? Turns out, yes! The estimate might have been accurate if I had been using standard bolt fabric, but I wasn’t. I chose to do something novel, and that “novel” approach took five times longer to complete. What’s more, I had to undo much of the work I had done to make the end product viable.

My estimate was dead wrong, even though I had experience making pants and making items to wear. I knew how to sew, I knew how to piece items together, and I had actually made clothing before, so I felt all of that gave me the confidence to make an estimate that would be accurate.

I’ve come to appreciate that, when I try to make an estimate on something I think I know how to do, I am far more likely to underestimate the time requirements needed when I am enthusiastic about the project. In contrast, if I am pessimistic about a project, I am likely to overestimate how long it will take. Our own internal biases, whether they be the “rose colored glasses” of optimism, or the depletion of energy that comes with pessimism, both prevent us from making a real and effective estimate.

Knowing what I know now, how would I guide someone else through a similar project? I could tell them all of the pitfalls I faced, but that might not be helpful unless they are doing exactly what I did. Most of the time, we are not all doing the exact same thing, and my suggestions may prove to be a hindrance. Knowing what I now know about preparing suede in sections, I would likely walk the person through defining what might be done. In the process, it’s possible they might come up with answers I didn’t (“why are we using real suede, when we can buy suede-cloth that comes regular sized on a bolt?”, “couldn’t we just attach the strips to a ready made pair of pants?”). By giving them the realities of the issues they might face, and letting them think those issues through, we can help foster avenues and solutions that they might not otherwise find, and that perhaps we wouldn’t, either.

One thing is decided, though… the next outfit I make (and yes, be assured, I will make another one ;) ) is going to be made with regular bolts of wool, cotton or linen, rakish good looks be darned.

Weekend Testing Americas is Looking to "Go Deep" This Saturday

Monday, June 30, 2014 23:04 PM

Yes, it's been quiet here for quite a while. Too quiet, and I needed to break the silence. What better way than with an announcement of a Weekend Testing Americas session, you say? I agree wholeheartedly!

With that, I would like to cordially invite each and every one of you reading this to come and join us this Saturday, July 5, 2014.

Weekend Testing Americas #52 - Going Deep with "Deep Testing"
Date: Saturday, July 5, 2014
Time: 09:00 a.m. - 11:00 a.m. Pacific Daylight Time
Facilitator: Michael Larsen


So what does "deep testing" mean? Well, if I told you all that now, it wouldn't make much sense to hold the session, now would it? Still, I can't just leave it at that. If that were all I was going to do, I might just as well have you look at the Weekend Testing site announcement and be done with it.


Justin Rohrman and I were discussing what would make for a good session for July. He suggested the idea of "deep testing" and, in the process, a few questions to consider:


What if, instead of just having an unfocused bug hunt, we as a group decided to take a look at a specific feature (or two or three, depending on the size of the group) and do what we could to drill down as far as possible with that particular feature? Heck, why not just dig into a single screen and see what we could find? We have an exercise in the AST BBST Test Design class that does exactly this, only it takes it to the component level (we're talking one button, one dial, one element, period). We're not looking to be that restrictive, but it's an interesting way of looking at a problem (well, we thought so, in any event).


As part of the session, we are going to focus on some of these ideas (there may be lots more, but expect us, for sure, to talk about these):

- how do we know what we are doing is deep testing?

- what do we do differently (thought process, approach, techniques, etc.)?

- how do we actually perform deep testing (hint: staring at a feature longer doesn't make it "deep")?

- how do we know when enough is enough?

If this sounds like an interesting use of part of your Saturday, then please, come join us July 5, 2014. We will be starting at 9:00 a.m. Pacific time for this session, and it will run until 11:00 a.m. Pacific.


For those who have done this before, you already know the procedure. For those who have not, please add "weekendtestersamericas" to your Skype ID list and send us a contact request. More specifically, tell us via Skype that you would like to participate in this upcoming session. We will build a preliminary list from those who send us those requests.


On Saturday, at least 15 minutes before the session starts, please be on Skype and ready to join the session (send us a message to say you're ready; it makes it that much easier to build the session) and we will take it from there.

Here's hoping to see you Saturday.

Listening to a Cowboy: Live at Climate Corp, It's BAST!!!

Thursday, May 29, 2014 02:38 AM

Hello everyone, and sorry for the delay in posting. There are a lot of reasons for that, and really, I'll explain in a lengthy post (or a small series of them) exactly why that has been the case. However, tonight, I am emerging from my self-imposed exile to come out and give support for Curtis Stuehrenberg and his talk about "ACCellerating Your Test Planning".

From the BAST meetup post:

"One of the most pervasive questions we're asked by people testing within an agile environment is how to perform test planning when you've only got two weeks for a sprint - and you're usually asked to start before specifications and other work is solidified. This evening we plan on exploring one of the most effective tools your speaker has used to get a test team started working at the beginning of a sprint and perhaps even earlier. We'll be conducting a working session using the ACC method first proposed by James Whittaker and developed over actual practice in mobile, web, and "big data" application development."

For those not familiar with Curtis (and if you aren't, well, where have you been? ;) ):

Curtis is currently leading mobile application testing at the Climate Corporation, located in San Francisco, Seattle, and Kansas City. When not trying to help farmers and growers deal with weather and changing climate conditions, he devotes what little free time he can muster to using his 15 years of practical experience to promote agile software testing and contextual quality assurance at conferences like SFAgile, STPCon, ALM-Forum, and CAST, as well as in publications like Tea-time with Testers and Better Software magazine.

This is an extension of Curtis' talk from the ALM Forum in April. One of the core ideas is to ask "can you write your test plan in ten minutes? If not, why not?"

Curtis showed some examples from his own product (including having each of us download the Climate Corp mobile app), and brought us into an example testing scenario and requirements gathering session. Rather than trying to make an exhaustive document, we had to be very quick and nimble about what we could cover and in how much time. In this case, we had the duration of the talk to define the areas of the product, the components that were relevant, and the attributes that mattered to our testing.

Session Based Test Management fits really well in this environment, and helps focus attention for a given session. By using a very focused mission and a small time box (30 minutes or so), each test session lets the tester look at the attributes and components that make sense in that specific session. By writing down and reporting what they see, testers document their test cases as they are being run. Along the way, they surface areas where they may have totally new testing ideas based on the session they just went through, and these in turn inform other testing sessions. In some ways, this method of exploring and reporting simultaneously allows for the development of a matrix that is denser and more complete than one generated up front before actively testing.

The dynamic this time around was more personal and more focused. Since it was not a formal conference presentation, questions came more often, and we were able to address them immediately rather than waiting until the talk was finished. Jon Bach's idea of threads was presented and described: it helps capture interesting data and helps us consciously stay "on task", while still noting interesting areas to explore later (OK, I piped in on that, but hey, it deserved to be said :) ).

It's been a few months since we were able to get everyone together, and my thanks to Curtis for taking the lead and getting us together this month. We are looking forward to next month's Meetup, and we'll share the details as soon as we know what it is (and who is presenting it ;) ).

Going "Coyote": Overcoming Fear and Uncertainty with "The Craddick Effect"

Thursday, May 01, 2014 20:00 PM

For those who have been following my comments about Harrison Lovell's and my CAST 2014 talk ("Coyote Teaching: A new take on the art of mentorship"), this fits very nicely into the ideas we will be discussing. We've been looking back at interactions we have had over the years where mentorship played an important role in skill development. During one of our late night Skype calls, we were talking about skateboard and snowboard skills, and how we were able to get from one skill level to another.

One aspect we both agreed was a common challenge was "the fear factor" that we all face. In a broader sense, we both appreciate that snowboarding and skateboarding are inherently dangerous. Push the envelope on either, and the risk of injury, and even death, is very real. Human beings tend to work very hard at an unconscious level to keep ourselves alive. The amygdala is among the most ancient parts of our brain. It deals with emotions, and it also deals with fear and aggression. It's our "fight or flight" instinct. On one level, it's perfectly rational to listen to it in many circumstances, but if we want to develop a technical skill like jumping or riding at speed, we have to overcome it.

About fifteen years ago, I first met Sean Craddick, a fellow snowboarder who was my age and was, to put it simply, amazingly talented. Whenever I saw Sean at a competition, I would joke, "oh well, there goes my shot at a Gold Medal!" He humored me the first couple of times, but the third time I said it, he surprised me. He answered, "Dude, don't say that. Don't ever say that! I could try to throw some trick and land it badly, and scrub my entire run. I could miss my groove entirely, or miss a gate on a turn, or I could catch an edge and bomb the whole thing. Every event is up in the air, and every event has the potential of having an outcome we'd least expect. Don't say you don't have a chance. You always have a chance, but you'll never get the chance if you don't believe you have it."

Because we were both the same age and had fairly similar life experiences, I'd hang out with Sean at many of these events, and sometimes run into him on off days when I was just up at the mountain practicing. One time, he noticed that I kept riding toward a tall rail, and at the last moment, I'd veer off or turn and ride past it. After seeing this a few times, when he saw me about to veer off again, he yelled, "Hey Michael! The next time you veer off, stop dead in your tracks, unbuckle your board and walk back up here. I want to talk to you about something." Sure enough, I went down, veered off course, and slammed to a stop. I took off my board, and then I walked up the hill. Sean looked at me and said:

“Take it straight on, and line your nose with the lip and where the beginning of the rail is.”

I nodded, buckled in, and then went down to the rail transition. I veered off. I stopped. I walked back up the hill.

"Lean back on your rear heel just a bit. It will give you a more comfortable balance when you first get on the rail."

I nodded, buckled back in, went down again, and again I veered off. I stopped, unbuckled, and walked back to the top of the hill again. By this point I was winded, my calves were aching, my heart was pounding, and I was getting rather frustrated.

“One final thing. Do an ollie at the end of the rail.”

What? I hadn't even gotten on the rail, so why was he telling me what to do when I got off of it? I shrugged, buckled in, went for the hit, and this time, I went straight, I lined the nose up, I set my weight back just a little bit, I slid down the rail, and I did a passably adequate ollie off the end of the rail, and landed the trick. When I did, Sean whooped and hollered, then came down after me and hit the same rail.

"Awesome, let's go hit the chairlift!"

As we did, Sean looked at me and said:

 “You can understand all the mechanics in the world, but if your brain tells you 'you can’t do it, it’s too dangerous, it’s too risky', you need to get your body to shut your brain up! That’s what I had you do. I knew why you were sketching the last few feet. You were afraid. It felt beyond you. You might crash. It might hurt real bad if you do. The brain understands all that. It wants to keep you safe. Safe, however, doesn’t help you get better. Whenever I find myself giving in to the fear, I stop what I’m doing, right there, and I walk up the hill, and I try it again. If I pull back again, I walk the hill again, and again, and again. What happens is the body gets so fatigued that every fiber of your being starts screaming to your brain ‘just shut up already and let me do this!' Exertion and exhaustion can often help you overcome any fear, and then you can put your mechanics to good use.”

Yeah, I paraphrased a lot of that, but that's the gist of what Sean was trying to get across to me. Our biggest enemy is not that we can't do something, but that we are afraid we can't do something. That fear is powerful, it's ancient, and it can be paralyzing. That ultra-primitive brain can't be reasoned with very well, unless we give it another pain to focus on. At some point, the physical pain of exertion and exhaustion will out-shout the feelings of fear, and then we can do what we need to do.

In a nutshell, that's "The Craddick Effect". There may be a much fancier name for it, but that's how I've always approached mentorship where I have to overcome fear and doubt in a person. When someone is afraid, it's easy to retreat. As a mentor, we have to recognize when that fear is present, and somehow work with it.

You may not do something as extreme as what Sean did with me, but you may well find other, more subtle ways to accomplish the same thing. Imagine someone having to take on a new testing tool where there's a lot to learn up front. We could let them go on their own and let them poke around. We can take their word that they are getting and understanding what they need to, or we can prod and test them to see what's really happening. If we see that they don't understand enough, or maybe understand very little, don't assume a lack of aptitude or drive; look for fear. If you can spot fear, try to coax them into putting their energy somewhere else for a time, so that they can get to a point of shouting down the fear. It may be having them do a variety of simpler tasks, still fruitful, but somewhat repetitive and tedious. After a while, they will get a bit irritated; then give them a slight push to move farther forward. Repeat as necessary. Over time, you may well see that they have slid past the pain and frustration point, and they just "get" what they are working with. It just clicks.

As a mentor, look to help foster that interaction. As a person receiving mentoring, know that this may very well be exactly what your mentor is trying to do. Allow yourself to go with it. In the end, both of you may learn a lot more about yourselves and your potential than you thought possible. It’s pretty cool when that happens ;).

Django 1.7 and… ME (yet another Live Blog)

Thursday, May 01, 2014 05:32 AM

So this might seem an odd spot, but this has become something of a mission on my part for 2014. I've decided that I want to try to become multi-lingual when it comes to web frameworks. We have all sorts of interesting frameworks to play with to make web apps and web sites, and Django is the Python-centric web framework, and therefore one I want to know more about and get more experience with. That seems a great reason to come into San Francisco and see what the Pythonistas are doing and how they are doing it with Django.

Yes, this is going to be live blogged, and as usual, it may be messy at first. Forgive the stream of consciousness, I promise I’ll clean it up later :).


A bit about our topic this evening (courtesy of Meetup):

Django 1.7 is one of the biggest releases in recent years for Django; several major new features, innumerable smaller improvements, and some big changes to parts of Django that have lain unchanged since before version 1.0. Come and learn about new app loading, system checks, customized select_related, custom lookups, and, of course, migrations. We'll cover both the advantages these new features bring as well as the issues you might have when upgrading from 1.6 or below.


A bit about our presenter this evening (also from Meetup):

Andrew Godwin is a Django core developer, the author of South and the new django.db.migrations framework, and currently works for Eventbrite as a Senior Software Engineer, working on system architecture. He's been using Django since 2007, and has worked on far too many Django websites at this point. In his spare time, he also enjoys flying planes, archery, and cheese.


Lightning Talks 

#1 Randall Degges - Django & Bcrypt

Randall kicked things off right away with a talk about how Django does password hashing and securing of passwords, with the estimated cost of what it takes to crack a password (hint: it's not that hard). If you want to be more security-conscious, Randall recommends considering BCrypt. It's been around a while, and it allows for transparent password upgrading (users' hashes are upgraded the first time they log in; no muss, no fuss :) ). Sounds kinda cool, to tell the truth; I'm looking forward to playing with it for a bit.
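
If I understood Randall right, opting in is mostly a settings change. A minimal sketch, assuming the bcrypt library is installed (hasher names as I understand them from django.contrib.auth; treat this as my notes, not his slides):

# settings.py -- a sketch of preferring bcrypt for password hashing.
# The first hasher is used for new passwords; the later entries keep
# existing hashes verifiable, and Django transparently re-hashes a
# user's password with the preferred hasher on their next login.
PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
    'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
)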

#2 Venkata - Django Rest Framework w/ in-line resource expanding

The second talk covered a bit of the Django REST Framework. Some of the cool methods to handle drop-down, pop-open and other events were very quickly covered, along with quick details as to what each item can do. It was a fast discussion with fast flashes of code. I caught some of the details, but I'll be the first to admit, a lot of this flew right past me (which gives me a better idea as to areas where I need to get a little more familiar). Granted, this is a lightning talk, so that should be expected, but hey, I pride myself on being able to keep up ;).

#3 Django Meetup Recap

The third lightning talk was a recap of what the Django group has covered in previous meetups (Ansible, Office Entrance Theme Music, Integrating Django & NoSQL, etc.). Takeaway: if we want resources after the meetups are over, we have a place to go (and I thank you for that :) ).

---

Andrew Godwin's Talk

This seems like a great time to say that I'm relatively new to Django, so a lot of what's being discussed is kind of exciting; it makes me feel like I'll be able to get into what's being offered without having to worry about unlearning a lot of things to feel comfortable with the new details. Part of the new code is an update to South (which, as mentioned above, is something Andrew is intimately involved in).

Andrew walked through details as to how apps are loaded, and how the new system checks can warn programmers about what may happen with an upgrade. Having suffered through a few updates where things worked, then didn't, with no clue as to why, I find this very appealing.

Another new aspect is an adjustable and tunable prefetch option, so that instead of all or nothing, there's a spectrum of choices that can be looked up and tuned based on context.
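
If I followed that part correctly, this is the new Prefetch object for prefetch_related. A quick sketch with hypothetical Author/Book models (the app and field names are mine, purely for illustration):

from django.db.models import Prefetch
from myapp.models import Author, Book  # hypothetical models

# Old behavior: prefetch all related books, or none. New in 1.7: tune
# the prefetch queryset, and optionally park the result on its own attr.
authors = Author.objects.prefetch_related(
    Prefetch("books",
             queryset=Book.objects.filter(published=True),
             to_attr="published_books"))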

A rather ominous slide has flashed across saying "Important Upgrade Notes", and a new detail is that all field classes need to have a deconstruct() method. It's now required for all fields. Additionally, initial_data is dead; apps should use data migrations instead. In short, don't automatically assume that older modules that use initial_data will cleanly work. I will take Andrew's word on that ;).
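
For reference, here's roughly what that looks like, modeled on the HandField example in the Django docs (so treat this as a sketch rather than gospel):

from django.db import models

class HandField(models.CharField):
    def __init__(self, *args, **kwargs):
        kwargs['max_length'] = 104
        super(HandField, self).__init__(*args, **kwargs)

    def deconstruct(self):
        # Return the field's attribute name, import path, and the args
        # needed to recreate it, so migrations can serialize the field.
        name, path, args, kwargs = super(HandField, self).deconstruct()
        del kwargs['max_length']  # hardcoded above, so don't serialize it
        return name, path, args, kwargs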

So what's coming up in Django 1.8? Definitely improvements in interactions with PostgreSQL, as well as migrations for contrib apps. But that's getting a bit ahead of the race at the moment. Expect Django 1.7 to hit the scene around May 15th, give or take a few days. Again, I will take Andrew's word on that ;).

---

There's no question, I feel a little bit like a fish out of water, and frankly, that's great! This reminds me well that there is so much I need to learn, especially if my goal of becoming a technical tester is going to advance farther than just wishful thinking or following pre-written recipes. It's not enough to just "know a framework" or "know my framework".

As was aptly demonstrated to me a year and a half ago, I spent a lot of time in the Rails stack, and then I went to work for a company that didn't use Rails at all. Did that mean all that time and learning was wasted? Of course not. It gave me a way to look at how frameworks are constructed and how they interact. I think of it like learning Spanish when I was younger. Don't get me wrong, I'm no great shakes when it comes to Spanish, but I understand a fair amount, and can follow along in many conversations. What's really cool is the add-on benefit: I can follow a little bit of both French and Italian as well, since they are closely related. That's how I feel about learning a variety of web frameworks. The more of them I learn, the easier it will be to move between them, and to understand the challenges they all face.

In any event, this was an interesting and whirlwind tour of some new stuff happening in Django, and I plan to come back and learn more, with an eye to understanding more next time than I did today. Frankly, that shouldn't be too hard to accomplish ;).


Thanks for hanging out with me. Have a good rest of the evening, wherever you are.


TECHNICAL TESTER FRIDAY - Getting UnGraphical with lynx and grep

Thursday, May 01, 2014 18:26 PM

Use lynx --dump to retrieve the contents of your Web site. Just hardcode all the page URLs. Redirect all the content to flat files, then use grep to look for patterns in your content. Start by looking for mistakes you commonly make. Save your greps in a file.

Wow, now this brings back some memories :). 

I first loaded up a lynx browser back in 1993, and it was my introduction to what the non-graphical World Wide Web looked like. Truth be told, I fairly quickly abandoned lynx as an everyday platform when NCSA Mosaic and the first version of Netscape came out, but there is indeed value in using lynx. It's a nice tool to add to accessibility tests, so that you can see how your super pretty graphical page looks to those who don't have that option. For those curious... it looks like this (well, mine looks like this):

[screenshot: my project site as rendered in lynx]

Yep, that's what the Web looked like in 1993. Cool, huh?


lynx --dump does exactly what it sounds like.

Here’s an example from my own little site project:

lynx --dump http://127.0.0.1/web/orchestra/index.php

This prints the following to the screen:

[screenshot: the lynx --dump output for the index page]

Adding a redirect operator ('>') puts it in a file for us. Repeat a bunch of times, and you can pull down details on every page in your site.

OK, cool, that’s interesting, but what does that do for us? It allows us to go through and pull out data that we’d want to analyze. Granted, the site as it exists right now isn’t all that spectacular, but it does give us a basis for how we can construct some simple greps. 

For those not familiar with this tool, "grep" is an old UNIX standby. The term comes from the syntax of the "ed" editor, where the command used was g/re/p ("globally search for a regular expression and print it to stdout"). Those of you with Windows machines can download Grep for Windows at http://gnuwin32.sourceforge.net/packages/grep.htm, or you can find a variety of fun and interesting versions. For me, since my system is in a virtual environment, I'm just going to save the files to my shared folder space and play with grep on my Mac :).

The main benefit of using grep is to look for things that show up in your pages that you may find interesting, or things that might be errors. Searching for basic strings in the files can show a lot of interesting details in the content of the pages. As a quick set of starting points, I recommend poking around on this page for 15 command examples of ways you can use grep to get interesting data.

Once you find a few greps that prove useful, it's a good idea to save them in a file so that you can run them over and over again as you add content to the site and get more information to mine from your site.
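
To pull the whole exercise together, here's a minimal sketch in Python (the page names and patterns are my own made-up examples; it assumes lynx is on your PATH and the site is running locally):

import re
import subprocess

# Hardcode the page URLs, per the assignment (these are hypothetical).
BASE = "http://127.0.0.1/web/orchestra"
PAGES = ["index.php", "about.php", "contact.php"]

# Mistakes I commonly make; grow this list over time.
PATTERNS = [r"teh", r"recieve", r"TODO", r"lorem ipsum"]

for page in PAGES:
    # lynx --dump renders the page as plain text, links included.
    text = subprocess.check_output(
        ["lynx", "--dump", "%s/%s" % (BASE, page)]).decode("utf-8", "replace")

    # Redirect the content to a flat file for later re-greps.
    with open(page + ".txt", "w") as f:
        f.write(text)

    # Grep-style pass: print any line matching one of our patterns.
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern in PATTERNS:
            if re.search(pattern, line, re.IGNORECASE):
                print("%s:%d: %s" % (page, lineno, line.strip()))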

This is meant to be a really basic first step in getting into the details of what your pages show and help get you away from using the browser as a main interaction source. Yes, there's a lot that can be done just with the files and the content that is in them. How you choose to look at them and what interesting details they show will be my focus for next week.

A Leaner and Cleaner Codecademy

Thursday, April 24, 2014 14:37 PM

A couple of years ago, I posted that I was excited to see the initiative that would become Codecademy get off the ground. At the time, it was limited in what it offered. It featured a course on JavaScript and some other small project ideas, and after a little poking around, I went on to other things. A year later I came back and saw that there was some new material, this time on Ruby and Python. A little more poking, and then I went on to do other things.

I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place where novice testers could go and learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.

For starters, Codecademy has refreshed everything on the site. They talk about it at length in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.

They've opened up a number of courses and updated several of their older offerings. The original JavaScript track has been deprecated (though it's still there if you want to work through it), and a new JavaScript track has been put in its place. The site has been augmented with a jQuery track, a freshened HTML/CSS track, and updates to the Ruby, Python, and PHP tracks as well.

In addition, there are several small project areas where users can practice and make "Codebits" to show what they have learned. Some of the Codebits are already assembled (examples include animating your name, making a solar system model, and a simple web site template), along with open-format Codebits that users can share. Additionally, there are a variety of projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.

Another cool section is the API track. Currently, there are 29 APIs listed that users can experiment with, making applications that interact with the various APIs. The offerings range from YouTube to Twitter to Evernote, and each listing also shows the languages best suited to using that particular API (JavaScript, Ruby and Python).

So how's the actual learning process? It's pretty solid, to tell the truth. Each track has a variety of initiatives, and a range of lessons and small projects interspersed throughout to keep the participant's attention. The editor can be finicky at times, but usually a page refresh will solve most of the odd problems. One of the nice attributes of having an account and working through the exercises is that your progress is saved. All of the steps from the first lesson to the last are recorded as part of your progress. That means you can go back and see your "cleared" examples and exercises.

Additionally, there are Q&A Forums associated with each project, and so far, even when I've been stuck in some places, I've been able to find answers in each of the forums. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource. If I have to be nit-picky, it's the fact that, often, many of the Q&A Forum answers are jumbled together. Though the interface allows you to filter on the particular module and section by name, number and description, it would be really helpful to have a header for each question posted that says which module the question represents. Many do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).

Overall, I think Codecademy has come a long way from when I first took a look at it about two and a half years ago. They have put a lot of effort into the site and their updates, and it shows. If you are already playing around at Codecademy, you already know everything I've written here. If you haven't been there in a while, I recommend a return trip. It's really become a nice learning hub. If you have never been there, and are someone who wants to learn how to program front-end and back-end web apps, and you like the idea of FREE, then seriously, go check the site out and get into a track that interests you. I'd suggest HTML/CSS, JavaScript, and jQuery first. From there, if you'd like to focus just on making web sites with little in the way of entry criteria, then check out the PHP track; otherwise, branch out into the Ruby or Python tracks, and work through the site at your own pace. It's not going to be the be-all and end-all destination to learn about programming, but seriously, you can make a pretty big dent with what you can learn here.

Selenium SF Live: An Evening With Dave Haeffner

Wednesday, April 23, 2014 19:26 PM

It's been about three years since I first met Dave. He was, at the time I met him, working with The Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio with for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn't usable enough for a releasable podcast, but I remembered well the conversation, specifically Dave's goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of "The Selenium Guidebook", and tonight a couple of different Meetup groups (the San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I've been a subscriber to Dave's Elemental Selenium newsletter for the past couple of years, and I've enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea and approach they might not have considered before. I'm looking forward to seeing where Dave's head is at now on these topics.

Here are some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time to clean up and organize the thoughts after a little time and space. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to make tests that are business-valuable, and then do what you can to package those tests in an automated framework. This then frees the tester to look for more business-valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items it needs and confirm their presence, or determine what to do next based on their existence or non-existence. Class and ID are the most helpful locators long-term. CSS and XPath may be needed from time to time, but if that's more rule than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.

Page Object models let the user tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (the move from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction, so that a change to the Selenium driver minimizes the need to change multiple page object files.
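
A bare-bones sketch of that layering in Python (the class names and locators here are mine, not Dave's):

from selenium.webdriver.common.by import By

class BasePage(object):
    # Every page object funnels Selenium calls through this wrapper, so
    # a breaking driver change means editing one file, not every page.
    def __init__(self, driver):
        self.driver = driver

    def _find(self, locator):
        return self.driver.find_element(*locator)

    def _click(self, locator):
        self._find(locator).click()

    def _type(self, locator, text):
        element = self._find(locator)
        element.clear()
        element.send_keys(text)

class LoginPage(BasePage):
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def log_in(self, username, password):
        self._type(self.USERNAME, username)
        self._type(self.PASSWORD, password)
        self._click(self.SUBMIT)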

Explicit waits put a time bound on problems like page loading or network latency. Defining a "wait for" option is both more helpful and more efficient: instead of hard-coding a 10-second delay, the wait sets a maximum time limit but moves on as soon as the item needed actually appears.
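
In Python it looks something like this (the example page is from Dave's practice site, the-internet, if I remember right; assumes Firefox and its driver are set up):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://the-internet.herokuapp.com/dynamic_loading/1")
driver.find_element(By.CSS_SELECTOR, "#start button").click()

# Poll for up to 10 seconds, but return the moment the element appears;
# contrast with a hard-coded sleep, which always burns the full delay.
finish = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "finish")))
print(finish.text)
driver.quit()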

If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files
  • Tagging (test packs; subsets of tests such as wip, critical, component name, slow tests, story groupings)
  • A reporting mechanism (or borrow one that works for you; have it be human-readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • Wrap it all up so that it can be plugged into a CI server.

Scaling our efforts should be a long-term goal, and there are a variety of ways we can do that. Cloud execution has become a very popular method. It's great for parallelization and for completing large test runs in a short period of time, if that is a primary goal. One definitely valuable recommendation: enforce random execution order of tests. By doing so, we can weed out hidden dependencies, and find errors early and often :).

Another idea is "code promotion". Commit code, then check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix it there and test again before allowing the change to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 

TECHNICAL TESTER, err, Saturday: Pain Fog and Objective Completion

Saturday, April 19, 2014 19:45 PM

Yes, I know, I'm a day late with this. Actually, I'm closer to a week and a day late with this, but reality decided to remind me that I'm not 21 any longer.

Last week, April 9th, as I was getting off the train, I stood up and reached over to grab my bag. The "twinge" I felt above my hip on my right side was a tell-tale reminder. I have sciatica, and if I feel that twinge, I am not going to be in for a fun week or two. Sure enough, my premonition became reality. Within 48 hours, I was flat on my back, with little ability to move, and the very act of doing anything (including sleeping) became a monumental chore. That meant my progress on anything that was not "mission critical" pretty much stopped. There was no update last Friday because there was nothing to report. I spent most of the last week with limited movement, a back brace, copious amounts of Ibuprofen, and typing only when I had to. I'm happy to report I'm getting much better, but sitting for long stretches to code or write is still painful, though less so each day.

Since I'm eager to move forward, I decided to make a push in the latter part of this week to clear the Codecademy site's three courses related to web development: PHP, HTML/CSS and JavaScript. Late last night, I finished up the course for JavaScript. Yay me :)!!!

As an overall course and level of coverage, I have to give Codecademy credit; they have put together a platform that is actually pretty good for a self-directed learner. It's not perfect by any means, and the editor can be finicky at times, but it's flexible enough to allow for a lot of answers that qualify as correct, so you don't get frustrated if you don't pick their exact way of doing something.

Many of the hints offered for each of the lessons are also helpful and don't spoon-feed too much to the participant, making you actually stretch and think. While I've known about and tinkered with JavaScript off and on for years, I will honestly say I've learned quite a bit from these courses. I would recommend these three courses (HTML/CSS, PHP and JavaScript) as a very good no-cost first stop to learn about these topics. Each does a good job of explaining the topics and concepts individually.

If there's any criticism, it's that there is little in the course examples that integrates the ideas (at least so far). There is a course on jQuery, and I anticipate that it will have more to do with actual web component interaction and integration, so that's my next goal to complete. After that, I plan to go back and complete the Ruby and Python modules, and explore the API module as well.

For now, consider this a modest victory dance, or in this case, a slow-moving fist pump. I may need a week or two before I can actually dance ;). Also, next week I'll have some meat to add to this, since I'm going to start covering some command-line tools to play with and interact with, and those are a lot more fun to write about!

Become a "Coyote" at CAST 2014

Wednesday, April 09, 2014 16:10 PM

Now that the full program has been announced, and my talk is posted and described, I can say with certainty what I'll be talking about at CAST 2014 this August in New York City.

Actually, I need to qualify that. It's not what I'm going to talk about, it's what "we" are going to talk about.

Harrison Lovell is an up-and-coming tester with copious amounts of wit, humor and energy. Seriously, he gives me a run for my money in the energy department. I met Harrison through the PerScholas mentorship program, and we have been communicating and working together regularly on a number of initiatives since we first met in September of 2013. The results of those interactions, experiments, and a variety of hits (and yes, some misses here and there) are the core of the talk we will be doing together.

Here's the basics from the sched.org site:

"Coyote Teaching: A new take on the art of mentorship"

Too often, new software testers are dropped into the testing world with little idea as to what to do, how to do it, and where to get help if they need it. Mentors are valuable, but too often, mentors try to shoe-horn these new testers into their way of seeing the world. Often, the result is frustration on both sides.

“Coyote Teaching” emphasizes answering questions with questions, using the environment as examples, and allowing those being mentored the chance to create their own unique learning experience. Coyote Teaching lets new testers learn about the product, testing, the world in which their product works, and the contexts in which those efforts matter.

We will demonstrate the Coyote Teaching approach. Through examples from our own mentoring relationship, we show ways in which both mentors (and those being mentored) can benefit from this arrangement.

“When raised by a coyote, one becomes a coyote”.


Speakers

Michael Larsen
Senior Quality Assurance Engineer, Socialtext
Michael Larsen is a Senior Tester located in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches, virtual machines, capacitance touch devices, and video games, to distributed database applications that service the legal and entertainment industries.

Harrison C. Lovell
Associate Engineer, QA, Virtusa
Harrison C. Lovell is an Associate Engineer at Virtusa’s Albany office. He is a proud alumnus from Per Scholas’ ‘IT-Ready Training’ and STeP (Software Testing education Program) courses. For the past year, he has thrown himself into various environments dealing with testing, networking and business practices with a passion for obtaining information and experience.


Yes, I think this is going to be an amazing talk. Of course, I would say that, because I'm part of the duo giving it, but really, I think we have something unique and interesting to share, and perhaps a few interesting tricks that might help you if you are looking to be a mentor to others, or if you are one who wants to be mentored. One thing I can guarantee, considering the combination of personalities that Harrison and I will bring to the talk... you will not be bored ;)

Technical Tester Friday: Ladies and Gentleman, JavaScript has Entered the Building

Saturday, April 05, 2014 01:45 AM

There's nothing like the mild terror one feels when getting back from several days away and thinking, "aw man, I have to post something today!" Too many of my posts have stretched over two weeks, and while I had perfectly valid reasons for that, I said that I'd post every Friday, whether I had a lot to talk about or just a little. I've decided the volume of the delivery is less important than the regularity and reliability of having an entry every Friday, and that's what's driving me today.

With the ALM Forum and all that surrounded it, as well as getting ready for my talk and presenting my slides, I really didn't have a whole lot of time to go through and push my way into learning more about JavaScript and implementing as much of it as I wanted to. I started the Codecademy JavaScript module and worked through several of the initial entries. When I realized that I wasn't going to have something to show by the end of this week... okay, I cheated. Well, I didn't really cheat. I just went out on the web, looked at some sample JavaScript projects, and tried to see if I could make sense of what they were doing.

The good news: I found a simple JavaScript project that I could apply to the site's navigation bar. If you remember from last week's example, the navigation bar was really just a couple of links displayed horizontally, with brackets on the ends to simulate "buttons". This time, we made real buttons with a little interactivity to them. They give a visual indication of which page has been selected (the selected button is a little larger than the others):

[screenshot: the updated navigation buttons]
and here's the CSS and JavaScript code that makes the site look better than 1995 ;).

[screenshot: the CSS and JavaScript for the navigation buttons]
I should stop here and say, again, there are a lot of neat little distractions you can get into with JavaScript. There's potentially a slight barrier to entry for a brand-new web developer. HTML is pretty easy. CSS has some rules, but once you learn them, it doesn't feel that different from native HTML. JavaScript is very similar to PHP, in that you can learn the basics pretty quickly. How to actually use the basics effectively, and in a meaningful way on your pages... that's a bit of an art form, and it's one of the things you're going to have to practice doing. Start small and work out from there.

So there you have it. Again, because I was out of town and completely consumed by the ALM Forum conference, I did not get a chance to do as much JavaScript hacking as I wanted to, but that gives me a chance to get a little deeper next week. Perhaps I can pop a little bit of eye candy into the site to make it a little more interesting. As always, crawl before you walk, walk before you run, and maybe run before you get on a bicycle or drive a car. Little steps get you in, and I think the project will ultimately become a little more interesting as the defined, repeatable things get more and more canned, so I can focus on other things :).

Testing the Limits at #ALMForum: Day Three

Friday, April 04, 2014 09:28 AM

Wow, what a week this has been. We're now on day three, the last day, and I'm up in an hour! I'm excited, a little frazzled, but I think we're going to do well. I'm also excited that the four speakers in the breakout today are all good friends; Curtis Stuehrenberg, Seth Eliot and Mark Tomlinson are gonna help me close out this conference, and we look forward to chatting with as many people as possible who want to look at ways to change the face and state of software testing. If you are here at ALM Forum, come join us. If you are not able to be, please read on here and take in as much as you can from my notes and observations.

-----

"Transforming Software Development in a World of Services" with Sam Guckenheimer is the first session, and we are starting out with a thought experiment around Airbnb (the online service to rent rooms, houses, etc. in different cities). A boat on Puget Sound is available, so a company can host all of their team members on the boat. What will the experience be? Will it be a fun stay? Will it be too cramped? We don't know, but one thing's for sure: it will be open, it will be public, and good or bad, if people want to talk about it, they will.

This makes for an interesting comparison to Agile development, and the way that agile has shaken out. What had been intended as a relatively private, internal housekeeping model has become a public viewing. We are social, we are open, we use systems that are often out of our control in the 100% sense of the word. A lot of our practices and actions are not quiet and hidden; they are visible to all who would care to see them. It's a little daunting, but it's also tremendously liberating.

This talk looks at a Microsoft ideal of "cloud cadence". Customers want regular improvements, we want to maximize the value we provide to our customers, and we know that their feedback is not just for developers; it's seen by everyone. Get it right, and we have app store five-star reviews. Get it wrong, and we can have considerably lower reviews (and don't for a second think those reviews don't matter; they can be the difference between adoption and being totally forsaken).

The DevOps life cycle comes together in three aspects: we have development, we have production, and in between we have the collaboration piece. What's the most important element there? Well, without good development, we have a product that is sub-par. With bad deployment, we might have a great product, but it won't really work the way we intend it to. The middle piece is the critical aspect, and that collaboration element is really difficult to pin down. It's not a simple prescription or a set checklist; each organization and project will be different, and many times, the underpinnings will change (from our own servers to the cloud, from a dedicated and closed application to a socially aware application). Sometimes the changes are made deliberately, sometimes a little more forcefully. Either way, none of it comes together without a sense of shared purpose and collaboration between the development and production groups, including the tooling necessary to accomplish the goals.

The ability to do all of these things in the Visual Studio team is the core of Sam's talk, along with the interactions with their clients and the variety of changes that drive many of their decisions. They learn from their customers and change direction. They focus on a human-to-human feedback model (which may sound a little unusual for a giant company like Microsoft, but Sam makes a convincing case :) ).


-----

So this is my talk. No, I can't talk about my talk while I'm giving it, so this is a little canned ;). My topic is "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". If I were to be a little more literal with my title, I'd call it "What you want the new testers you hire to know and to be, so that they can be genuinely effective for you and your team... oh, and those may not be the obvious areas you think they need to be".

Software development, and software testing, is undergoing a radical change. We've embraced the idea of changes in development and delivery, but we tend to still look at old-school "best practices" in software testing. We're not still testing the software the previous generation wrote. Development has changed, and testing is changing. It's still as relevant as it was before, but we need to approach it differently than we have.

I’m involved in a variety of initiatives that are specifically geared towards teaching software testing to a new generation of testers (and hey, current testers may find the ideas useful, too).

Programs like SummerQAmp, PerScholas, Weekend Testing, the Miagi-do School of Software Testing, and the BBST series of classes are all designed to help software testers develop not just ideas, but real-world skills that can help them do their jobs effectively. The community that surrounds testing (on Twitter, G+, and specialty forums) is doing amazing work to move testing forward.


So what's wrong with the old model? We still hear about testing teams, even in so-called Agile organizations, that are doing heavy-process, heavy-scripting series of tests. It's as though the development team is Agile, but the test team is still expected to be a waterfall team. Automation makes a lot of promises, and don't get me wrong, I am pro-automation for many things. I use automation. I write automation. I prefer the term Computer Aided Testing, but Automation will suffice. It's a tool, but it's not the only tool, and it has been oversold on what it can accomplish. It's great for repetitive tasks. It's great for configuration and iteration stepping. It's lousy at making informed decisions. Though it's not a problem I've personally dealt with or had to experience, I know that "certification" has been sold as a way to "pre-qualify" testers. As a practical outcome, I think we have failed here, because most of the certifications offered are heavy on passing a test, and light on demonstration of real-world skills and the effectiveness thereof.


I believe the New Testers need to focus on a new toolkit and a new attitude. It's not really new; in fact, in many ways, it's ancient, but it's been woefully underutilized. We need testers who are sapient (stealing that from James Bach), by which I basically mean testers who are actively and critically thinking about what they are doing and observing. Testers need to do more than find bugs; they need to sell those bugs. Really, what's more important: lots of bugs, or the championing of important bugs that actually get fixed?

Testers need to return to, and have a solid understanding of, both the Scientific and Socratic methods. I believe that New Testers will be less button-pushers and more scientists, philosophers and skeptics. These are not just testing traits; they need to be embraced by everyone in development. New Testers don't want to prove the software works. They want to find how it is broken. They want badly to lose the stigma of being the bug shield. They are much better utilized as "beat reporters" sharing a clear story of your product. A thought experiment from Elisabeth Hendrickson that I personally love: "What is the most terrifying headline about your company you could imagine seeing in the paper?" Wouldn't you want your testers to not only find out that terrifying headline, but inform you, so that you could prevent it?

OK, that's great. So where can I find these New Testers? You can find them in Computer Science departments at universities. Yes, I'm daring to say it. Most testers have historically fallen into the job, but I am seeing people who now self-select to be software testers, and it's *WONDERFUL*. They are not also-ran programmers, or people who couldn't hack programming. Some are great programmers, but they have decided there are other challenges they'd like to take on rather than stringing code together. And that's *ALSO* great. The point is, they are not considering testing a consolation prize; they are selecting testing on its own merits, and we should recruit them with the same philosophy. Where else can we find great up-and-coming testers (and, to be fair, current great testers)? Check out people with degrees in Humanities, or Journalism, or Psychology. Look for actual scientists who might be looking for a change of pace. Do you have really good Customer Service Representatives? It's a good bet you have some fantastic testers in that group.

Programs like SummerQAmp, PerScholas, Weekend Testing, BBST courses, and a thriving ecosystem of bloggers, newsgroups, online magazines and Twitter (yes, Twitter :) ) are at the vanguard of bringing this new paradigm of testing to the fore. Each of these, in its sphere, is looking to help bring real, tangible testing skills to its participants, and give them a chance to show what they can do and improve their craft. Weekend Testing is not just a movement; it's also a portable model that anyone can use. All you need is Skype, a topic of discussion, a product or project, a mission and some charters, and two hours to interact, instruct and facilitate. If you want to see some amazing testing insights, I encourage you to review just about any Weekend Testing transcript.

Testers are not a single group; they have many interests and their own niches. Some testers will be good at some, better at others, and probably not stellar in all. With a broad team that recognizes this, you may be surprised at the powerhouse you can develop when you get the Explorers, the Performance Tweakers, the Toolsmiths (automation, CI, deployment tools, etc.), the Evil Masterminds (Security), the Humanists (human factors, usability), and the Storytellers together. Just don't make the mistake of thinking you can get all of this in one person. You may get attributes of all in one tester, but none of us can be experts at all of these, or should I say, very few of us can be (I'm certainly not one of them).

In all, the new testers will be focused on:

  • less scripting, more active thinking
  • less checking, more real testing
  • less blind faith, more scientific skepticism
  • creative, inventive, intuitive, mindful

In short, the future is now, and I can introduce you to hundreds of them ;). Better yet, why not come join us and see for yourself?


  • SummerQAmp: hire an intern
  • PerScholas: have a chat with recent STeP graduates and their mentors
  • Weekend Testing: Come join us for a session or two and see the magic happen
  • Miagi-do: Do a web search for the term “Miagi-do School of Software Testing”. Or better yet, just ask me ;). 

-----

Curtis Stuehrenberg is talking about how to "ACCellerate Your Agile Test Planning". He chucked the PowerPoint entirely and gave a crash course in Agile testing on a live product... specifically, his product (well, Climate Corp's mobile app, to be specific). His point was to ask, "What if we have to test a product in two weeks? How about one week? How about three days? What are you going to do?"

Rather than talk about it, we all participated in an active testing session, downloading the app to our mobile devices (iPhone and Android only; sorry, Windows Phone users :( ). By walking through the steps and the test areas, and using an idea from James Whittaker and Google called the ACC model, we all in real time put together sections of risk and areas we would want to make sure we tested. In many ways, ACC is a variation on a theme of Session Based Test Management (SBTM). It informs our tests, we act on the guidance, and we pivot and adapt based on what we learn, and we do it quickly.
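
To give a flavor of the model, here's a toy sketch in Python (the attributes, components and capabilities are invented, not Climate Corp's actual grid):

# ACC: Attributes (adjectives the product should embody), Components
# (the major nouns of the system), and Capabilities (testable claims
# that tie an attribute to a component). All entries are hypothetical.
attributes = ["Fast", "Accurate", "Secure"]
components = ["Login", "Field Map", "Weather Feed"]

capabilities = {
    ("Login", "Secure"): ["Locks the account after repeated bad passwords"],
    ("Field Map", "Fast"): ["Renders a large farm in under two seconds"],
    ("Weather Feed", "Accurate"): ["Matches station data for the field"],
}

# Rank the grid cells by how much is riding on them, then charter short
# test sessions against the riskiest cells first.
for cell, caps in sorted(capabilities.items(), key=lambda kv: -len(kv[1])):
    print("%s / %s -> %s" % (cell[0], cell[1], "; ".join(caps)))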

Much of the interaction was just things we did in real time, and for my money, this was a brilliant way to emphasize the approach. Instead of just talking about it, we all did it. Even if a formal test plan is not something you have to deal with, give this approach a try. I know I'm going to play with it when I get back home :).
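
For those who want to tinker with the idea at home, here's a minimal sketch of how an ACC grid might be captured in code. The attributes, components and capabilities below are placeholders I invented for illustration, not anything from Curtis's actual app:

    # A tiny ACC (Attributes, Components, Capabilities) grid.
    # Attributes are adjectives the product should embody, components are the
    # nouns (major pieces), and capabilities live at their intersections.
    attributes = ["Fast", "Accurate", "Secure"]
    components = ["Login", "Field Map", "Weather Feed"]

    # Hypothetical capabilities; each (component, attribute) cell lists what
    # we'd want to verify there.
    acc = {
        ("Login", "Secure"): ["Credentials are never stored in plain text"],
        ("Field Map", "Fast"): ["Map tiles render in usable time on 3G"],
        ("Weather Feed", "Accurate"): ["Forecast matches the upstream source"],
    }

    # Walk the grid; empty cells are candidate risk areas to explore first.
    for comp in components:
        for attr in attributes:
            caps = acc.get((comp, attr))
            note = "; ".join(caps) if caps else "NO COVERAGE -- possible risk gap"
            print(f"{comp} x {attr}: {note}")

The win is less the code and more the forcing function: empty cells in the grid jump out at you as risks nobody has talked about yet.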

-----

Now it's time for Seth Eliot and "Your Path to Data Driven Quality", a roadmap for how to use the data you are gathering to help guide you to your ultimate destination. Seth wants to make the point that testing is measurement, and you can't measure if you don't have data (well, you can, but it won't really be worth much). Seth asks if we are HiPPO driven (meaning, is our strategy defined by the "Highest Paid Person's Opinion"?) or are we making decisions based on hard data. Engineering data can help a little bit (test results, bug counts, pass/fail rates). It can give us a picture, but not a complete one (in fact, not even close to a complete one). There's a lot of stuff we are leaving on the table. Seth says that leveraging production data (or "near production data") gives us a richer and more dynamic data set. Testers try to be creative, but we can't come close to the wacko randomness of the real-world users that interact with our product.

First step: Determine your questions. Use the Goal-Question-Metric approach: start with what you ultimately want to accomplish. Don't just grab data and go looking for answers; the data will taint the questions you ask if you don't ask the questions first. You may develop a confirmation bias if you look at data that seems to point to a question you haven't asked. The data may show a correlation to something, yet not actually tell you anything important. Starting with the question helps de-bias your expectations, and then gives you guidance as to what the data actually tells you.
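
As a rough illustration of the ordering Seth is advocating (goal first, then questions, and only then metrics), here's a toy sketch; the goal, questions and metrics are all invented by me, not Seth's examples:

    # Goal-Question-Metric, top down. The discipline is in the ordering:
    # never metrics first.
    gqm = {
        "goal": "Understand whether the new signup flow improves conversion",
        "questions": {
            "Do more visitors complete signup?": [
                "signup_completions / signup_starts, per day",
            ],
            "Where do users abandon the flow?": [
                "drop-off count per step",
                "median time spent per step",
            ],
        },
    }

    print(gqm["goal"])
    for question, metrics in gqm["questions"].items():
        print(f"  Q: {question}")
        for m in metrics:
            print(f"     metric: {m}")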

Then: Design for production-data quality. There are two types of data we can access: active and passive. Active data could be test cases or synthetic data from a simulated user. Passive data comes from real-world usage and real user interactions. Synthetic data is safer, but it's by definition incomplete. Passive data is more complete, but there's a danger to using it (compromising identifying data, etc.). Staging the data acquisition lets us start with synthetic data (reminds me of my "Attack on Titan" account group that I have lovingly put together for testing Socialtext... yes, I have one. Don't judge me ;) ), then move to copying actual accounts shared on our production site (much richer data, but it needs to be scrubbed of anything that could compromise individuals' privacy... which in turn gets us back to synthetic data of sorts, but a richer set). Bulk up and repeat. Over time, we can go from a small set of sample data to a much larger and beefier data set, with lots more interesting data points.
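
Just to make the "scrubbing" step concrete, here's a toy pass over a production-style record. The field names are hypothetical, and a real pipeline would need a vetted list of sensitive fields and an actual privacy review (hashing, as used here, is pseudonymization rather than true anonymization):

    import hashlib

    def scrub(record):
        """Replace direct identifiers with stable pseudonyms, drop the rest."""
        cleaned = dict(record)
        # Stable pseudonyms keep relationships between records intact while
        # hiding the identities behind them.
        for field in ("name", "email"):
            if field in cleaned:
                digest = hashlib.sha256(cleaned[field].encode()).hexdigest()[:8]
                cleaned[field] = f"user_{digest}"
        # Some fields shouldn't survive the copy at all.
        cleaned.pop("ssn", None)
        return cleaned

    prod_row = {"name": "Ada Lovelace", "email": "ada@example.com",
                "ssn": "000-00-0000", "plan": "premium"}
    print(scrub(prod_row))  # identities gone, shape of the data preserved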

Then: Select data sources. There are a number of ways to gather and accumulate data. We can export from user accounts, or we can actively aggregate user data and collect those details (reminds me of the days of NetFlow FlowCollection at Cisco). We need to be clear about what we are gathering and the data-handling privacy that goes with it. Anonymous data is typically safe; sensitive personally identifiable info requires protocols to gather, most likely scrub, or not touch with a ten-foot pole. Will we be using infrastructure data, app data, usage, account details, etc.? Each area has its unique challenges. Plan accordingly.

Then: Use the right data tools. What are you going to use to store this data? Databases are of course common, but for big data apps we need something a little more robust (Hadoop is hip in this area). Where do you store a Hadoop instance? Split it up into smaller chunks (note, splitting the data makes it vulnerable, so we need to replicate it. Wow, big data gets bigger :) ). Using map-reduce tools, we can crunch down to a smaller data set for analysis purposes. I'm going to take Seth's word for it, as Hadoop is not one of my strong suits, but I appreciated the 60-second guided tour :). Regardless of the data collection and storage, ultimately that data needs to be viewed, monitored, aggregated and analyzed. The tools that do that are wide and varied, but the goal is to drill down to the data that matters to you, and to have the ability to interpret what you are seeing.
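
For anyone else who is Hadoop-curious, the core map/reduce idea fits in a few lines; here's a toy version over some made-up log lines (Hadoop does the same thing, just distributed across many machines):

    from collections import Counter
    from itertools import chain

    logs = [
        "GET /home 200", "GET /login 200", "POST /login 500",
        "GET /home 200", "POST /login 200",
    ]

    # Map phase: turn each raw line into (key, 1) pairs.
    def map_phase(line):
        method, path, status = line.split()
        yield (path, status), 1

    # Reduce phase: sum the counts per key.
    mapped = chain.from_iterable(map_phase(line) for line in logs)
    reduced = Counter()
    for key, count in mapped:
        reduced[key] += count

    # The big, messy data set is now small enough to eyeball.
    for (path, status), total in sorted(reduced.items()):
        print(f"{path} {status}: {total}")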

Then: Get answers to your questions. Ultimately, we hope that the real data we have gathered will help us either support or dispute our hypothesis (back to the scientific method; testing is asking questions and then, based on the answers we receive, considering and proposing more interesting questions). Does our data show us interesting points to focus our attention on? Do we know a bit more about user sentiment? Have we figured out where our peak traffic times are? If we have asked these questions, gathered data appropriate to them, and focused on aggregating and analyzing that data, we should be able to say "yes, we have support for our hypothesis" or "no, this data refutes our hypothesis". Of course, that leads to even more questions, which means we go to...

Lather. Rinse. Repeat.

Hmmm, Mark Tomlinson just passed me a note that reads "Computer Aided Exploratory Testing"? I hadn't considered it quite that way, but yes, this certainly fits the description. An intriguing prospect, and one I need to play with a bit more :).

-----

Lightning talks! Woo!!! We have four presenters looking to rifle through some quick talks.

Mark Prichard is discussing "Complete Continuous Integration and Testing for Mobile and Web Applications". Mark is with CloudBees, and he's explaining how they do exactly what the title describes. Some interesting ideas surrounding how to use Jenkins and other tools to make it possible to build multiple releases and leverage a variety of common tools, so as to not have to replicate everything for each environment. Leverage the cloud and Platform-as-a-Service for Continuous Delivery. Key takeaway, attributed to Kurt Bittner: "ALM in the cloud will become the rule, not the exception".

--

Mike Ostenberg from SOASTA is next, and he's talking about "Performance Testing in Production, and what you'll find there". Which begs the question... *WHY* would we want to do performance testing in production (isn't that what we call a "customer freak out"? Well, yeah, but that's an after-effect, and we really want to not go there ;) ). Real systems, real load, real profiling. There are ways we can simulate load on a test environment, but it's not really going to match what happens in the real space. Additionally, we want to do our load testing earlier than we traditionally do it. At the end of the cycle, we're a little too far gone to actually pivot based on what we learn.

Load testing in production, Mike points out, can be done in stages and on different levels. Just as we use unit tests for components, integration tests for bigger systems, and feature/acceptance tests to tie it all together, we can deconstruct load tests to match a similar paradigm. Earlier load tests deal with errors, page loads, garbage collection, data management, etc. Regardless of the stage, there are some critical things to look at.

Bandwidth is #1: can everyone reach what they need? Load balancing, or making sure every node pulls its weight, is also high priority. Application issues: there's no such thing as perfect code, and earlier tests can shake out the system to expose inefficient code, sync issues, etc. Database performance fits under application issues, but it's a special set of test cases; the database, as Mike points out, is the core of performance, where locking and contention, index issues, memory management, connection management, etc. all come into play. Architecture is imperative; think of matching the right engine to the appropriate car. Connectivity comes into play as well: latency, lack of redundancy, firewall capacity, DNS, etc. Configuration means that if we've customized the system, we need to verify the custom settings actually do what we intend. Shared environments... watch out for those noisy neighbors :). Random stuff comes into play when things are shared in the real world, so pay attention to what they can do for you (or to you ;) ).

I like this staggered approach, it makes the idea of "testing in production" not seem so overwhelming.
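
A real tool like SOASTA's does vastly more, but to show the basic shape of a load probe, here's a toy sketch that fires concurrent requests and records latency. The URL is a placeholder, so point it at something you own:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"  # placeholder endpoint

    def timed_get(_):
        """Issue one request, return (latency_seconds, succeeded)."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        return time.monotonic() - start, ok

    # 20 workers x 200 requests of gentle, very synthetic "load".
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(timed_get, range(200)))

    latencies = sorted(r[0] for r in results)
    errors = sum(1 for _, ok in results if not ok)
    print(f"median: {latencies[len(latencies) // 2]:.3f}s, errors: {errors}/200")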

--

Now on deck is Dori Exterman, and he's talking about "Reducing the Build-Test-Deploy Cycle from Hours to Minutes at Cellebrite". Hmmm, color me mildly skeptical, but OK, tell me more :). I'm very familiar with the idea of a serial build-test-deploy cycle, and I know that it does not bode well. Multi-core systems can certainly help with this, and leveraging multi-core environments allows for a much tighter build-test-deploy pipeline. Parallel processing speeds things up, but there's a system limit, and those system limits are also very costly at their higher end.

So what's the option when we max out the cores on a single system? It seems that going parallel with more servers would make sense. Rather than one machine with 32 cores, how about 8 machines with four cores each? Same number of cores, maybe similar throughput gains (and potentially better, since system resources are spread over multiple machines). This approach is referred to as a CI cluster farm. Cool, but we're still in a similar ballpark. Can we do better? Dori says yes, and his answer is to use distributed computing within your own network of machines. If I'm hearing this correctly, it's kind of like letting your machine be used for "protein folding" experiments while it sits idle (anyone else remember signing up to do stuff like that? :) ). I'm not sure that's exactly what Dori means, but it seems this could be really viable, and the protein folding projects show the model can work.

How wild would it be to be able to wire up your entire network, everyone's machines, so that they can help speed up the build process? It's a fascinating model. I'd be curious to see if this really comes to fruition.
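
To play with the math of Dori's idea, here's a toy stand-in where the "farm" is just local processes; in the distributed version he describes, the workers would be idle machines across the office network instead:

    import multiprocessing as mp
    import time

    def compile_unit(name):
        """Pretend to compile one translation unit."""
        time.sleep(0.1)
        return f"{name}.o"

    if __name__ == "__main__":
        units = [f"module_{i}" for i in range(32)]
        start = time.monotonic()
        # 8 workers standing in for 8 otherwise-idle machines.
        with mp.Pool(processes=8) as pool:
            objects = pool.map(compile_unit, units)
        elapsed = time.monotonic() - start
        # Serially this would take ~3.2s; 8 workers cut it to roughly 0.4s.
        print(f"built {len(objects)} units in {elapsed:.2f}s")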

--

We had another lightning talk added that came from a Birds of a Feather session about CI/CD, so this one is a bit of a surprise. The idea was to see how we could leverage pipelines (mini-builds that run in sequence and individually). Mini-builds also help us build individual components, with the goal of integrating the elements later on. Often, all we want is a yes/no as to whether a change is good or not (gated check-ins).
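
A gated check-in can be as blunt as this sketch: run the fast suite against the candidate change and answer yes or no. The test command is a placeholder for whatever your pipeline actually runs:

    import subprocess
    import sys

    def gate() -> bool:
        """Return True if the smoke suite passes for the candidate change."""
        result = subprocess.run(["python", "-m", "pytest", "tests/smoke", "-q"])
        return result.returncode == 0

    if __name__ == "__main__":
        if gate():
            print("Yes: change is good, allow the merge")
            sys.exit(0)
        print("No: reject the check-in")
        sys.exit(1)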

This blends into Dori's talk on distributed computing and utilizing down time to make an almost unlimitedly parallel build engine. So this is interesting, but what's management going to say about all of this? Well, what is it costing us not to do this? Are we losing time, and in effect losing money, in the process? Will this help us fix some of our technical debt? If so, it may well be worth considering. If it adds more technical debt, it's a much harder sell.

Another point is that good CI infrastructure will bubble up issues in design and architecture of both the process and the application. Innovation and motivation will potentially increase when changes can be made more frequently, and subsequently, more atomically.

By using information radiators, we can get a clearer sense as to who did what to cause the build to fail. Gadgets (lights, sounds, sensory input) can help make it more apparent and in real time. Not sure if this would be a major plus, but I'm not necessarily the best judge of what developers consider to be fun ;).
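
As a sketch of the radiator idea: poll the CI server and surface red/green somewhere everyone can see it. The URL and JSON shape below are modeled loosely on what Jenkins exposes, but treat them as assumptions; a fancier version would drive a lamp or a siren instead of printing:

    import json
    import time
    import urllib.request

    # Hypothetical CI status endpoint (Jenkins-style lastBuild JSON assumed).
    STATUS_URL = "http://ci.example.com/job/main/lastBuild/api/json"

    while True:
        try:
            with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
                build = json.load(resp)
            color = "GREEN" if build.get("result") == "SUCCESS" else "RED"
        except OSError:
            color = "UNKNOWN"  # can't reach the server
        # Stand-in for lights/sounds: put the state where the team can see it.
        print(f"build status: {color}")
        time.sleep(60)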


-----

The final test track talk, the anchor session, goes to Mark Tomlinson, as he discusses "Roles and Revelations: Embracing and Evolving our Conceptions of Testing". With a title like that, let's just say "you had me at 'hello'" ;).

Mark is a fun guy to listen to (check out his podcast "PerfBytes" to get a feel), and thus it's fun to hear him do a more narrative talk as opposed to a techy talk. We start out with the idea of what testing is, at least how we have looked at it historically. We find bugs, we validate to a spec, we try to reduce costs, and we aim to mitigate risks. Overall, I think if you gave that list to any lay person and said "that's what testing is", they'd probably have little difficulty understanding it. Those definitions are valid, but they're also somewhat limiting. We've seen some interesting milestones over the past 50 years: Debugging, Demonstration, Destruction, Evaluation and Prevention can all be seen as "eras of testing". Mark points out that there are 10 different schools of testing (Domain, Stress, Specification, Risk, Random/Statistical, Function, Regression, Scenario, User, and Exploratory).

That's all cool... but what if one day everything changed? Well, one would say that the past 14 years, or since the Agile Manifesto, the Universe did Change... to steal a little from James Burke. We are less likely today to have isolated test groups. We have a lot more alphabet soup when it comes to our titles. I've had lots of titles, lots of combinations, but ultimately all of them could be distilled to a "tester" of some flavor. Some teams have no dedicated testers, or just one dedicated tester. Test Driven Development is an unfortunate term choice, in the fact that what is a design process often gets mistaken for "testing" (nope, it's not. It's checking for correctness, but it is not testing). Out time to be interactive and effective is happening earlier, and I love this fact.

Continuous Integration, Continuous Deployment, Continuous Delivery and even Continuous Testing have entered the vernacular. What does this mean? It's all about trying to automate as many of the steps as humanly possible: Build-Check-Deploy-Monitor-Repeat. Conceive of a time and place where we go from end to end without a person involved, just machines. Sounds great, huh? In some ways it's awesome, but there's an unfortunate side effect, in that many processes are billed as testing that are not. Checking is what automation does. It's great for a lot of things, but it can't really think. Testing, real testing, requires thinking and judgment. There's been a devaluing of testing in some organizations, where just doing testing is considered a liability: unless we are all coding toolsmiths, we are of a lesser order... and that's bunk!!!

Ultimately, testing is a cost... seriously. Testing does not make money. Testing is a cost center. It's an important cost center, but it is a cost. Think of health insurance. It is not an investment; it's a cost you have to pay... but when you crash a car or break a leg, then the insurance kicks in, and I'll bet you're happy when you have it (and really frustrated if you don't). That's what testing is. It's insurance. It's a hedge. It's a cost to prevent calamity. With all of the changing going on, we need to be clear about what we are and what we provide.

What we generate, and what real value we provide, is feedback and information. We are not critics. We are not nay-sayers; we are honest (we hope) reporters of the state of reality, or at least as close to it as we can get. The really valuable things that we provide are not automate-able. Yes, I dared to say that :). Computers can evaluate variable values and they can confirm or deny state changes, but they cannot really think, and they cannot make an informed judgment call. They can only do what we as people tell them to.

Change is constant, and we will see more change as we continue. Testers need to be open to change, and realize that, while there is always value we provide, the way we provide that value, and the mechanisms and institutions that surround it, will evolve. If we do not evolve with them, we will be left behind.

Mark emphasizes that software testers are "Facilitators of Quality". Testing is not just limited to dedicated testers; it's dispersing. Therefore, we need to emphasize where we can be effective, and that may mean going in totally different directions. Testing provides diversity, if we are willing to have it be a diversifying role. Think of new techniques, expand the ways we can ask questions, learn more about the infrastructure, and figure out ways to keep asking questions. The day we stop asking questions is the day testing dies, for real.

Testing can actually accelerate development. I believe this, and I have seen it happen in my own experience. This is where paired developer-tester arrangements can be great. Think of the programmer as the pilot, and the tester as the navigator. Yes, if all we ask is "are we there yet?", we don't offer much, but if we watch the terrain, and ask whether some of the routes we've mapped may be better or worse for the time we want to arrive, now we're adding value, and in some ways we can help fix issues before they've even been committed. Testers provoke reactions. Not to be jerks, but to get people to think and consider what they really should be doing. Do you think you can't do that? If so, why? Give it a try. You may surprise yourself (and maybe a few programmers) with how much you deliver. In short, be the Devil's Advocate as often as possible, and be prepared to embrace the devils you don't know ;).

Consider that every tester is an Analyst. It may be formal or informal, but we all are, deep down. We can research quality efforts, we can drill down into data and see patterns and trends, we can spot efficiencies to add to our repertoire, and we can adapt, adapt, adapt!

-----

Sorry for the delay on this last bit, but I had a rather meta post-presentation call with Mark Tomlinson (we did a conference call about how to do podcasts, and in the process recorded the session... so yeah, we made a podcast about how to do podcasts as an artifact of a meeting about how to do podcasts). Main takeaway: it's fun, but there's more to doing them than many people consider. We just hope we didn't scare everyone off after we were done (LOL!). After that, all of the speakers descended upon Tango Restaurant and had a fabulous dinner courtesy of the ALM Forum organizing staff. Great conversation with Scott Ambler, Curtis Stuehrenberg, Peter Varhol, and Seth Eliot, as well as several others. The nerd brain power in that small room was probably off the charts, and I was honored to have been included in this event. Seattle, thank you for a very busy and truly enjoyable week. For those who have been keeping track of this rather long missive, my thanks to you, too. To everyone who came to my talk and tweeted or retweeted my comments, and who commented back to me about my talk and gave me your impressions: feedback is a gift, and I've received many gifts today. Truly, thank you so much.

With this, I must return to reality and head back to San Francisco early this morning. I've enjoyed our time together, and I hope that, in some small way, this meandering three days of live blogging has given you a flavor of the event and what I've learned these past few days. Let's do it all again some time :)!!!