A TESTHEAD Wayback Machine Find: ALM Forum Talk From April 2014

Thursday, August 21, 2014 06:58 AM

I am grateful for a variety of friends and acquaintances in the testing world who keep me alert to things they discover. What makes it even more fun is when I'm alerted to things I did and forgot about, or someone discovers something I didn't know was there. Today is one of those days.

Back in April, I gave a talk about "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". I posted the slides on my LinkedIn profile and my SlideShare account, and then went on with my reality. It was pointed out that a video of my talk was recorded, and now that I know where it is, I can share it here :).

http://vimeopro.com/user27088109/almforum14test/video/95238828


I give a shout out to SummerQAmp, PerScholas, Weekend Testing, and Miagi-do as exemplars of what we can do to help empower future testers with real skills. There are many others, to be sure, but these are the ones I'm actively engaged in, so hey, I'm biased :).

I hope you enjoy the talk, and if you do, please share the message with others.

Coyote Teaching: Watch How It All Came Together

Friday, August 15, 2014 09:16 PM

Harrison Lovell and I decided to try an experiment.

What if a mentoring pair (a person relatively new to the software testing world and a longtime practitioner) were to work together and look at the way that mentoring is performed?

Could we learn something in the process?

What if we tried something novel, and looked at mentoring relationships all over the world, both current and ancient?

What would we find, and could we learn from them in a way that might prove to be useful to us today?


With that, we embarked on a several-month voyage (conducted mostly over Skype and email) and decided we'd try a method that takes its cues from ancient cultures. That method is called "Coyote Teaching", and we opted to be the Coyotes :).

During CAST 2014, which was held this past week at the Helen and Martin Kimmel Center in New York City, we had the chance to present this topic and approach, and Huib Schoots, a friend of ours, was kind enough to record the whole talk. For those who would like to see it, it is here in its entirety:



I want to congratulate Harrison on his first conference talk, and to thank him for his enthusiasm, as well as his hospitality while showing me around mid-town Manhattan during the week (I should also mention that this was the first opportunity we'd had to meet face to face).


I am also thankful to those who gave us valuable feedback to make the talk even better than we originally envisioned. My thanks especially to Alessandra Moreira for helping me go over the fine points of the talk and acting as the counter debater to help poke holes in the ideas we were going to present.

With that, please watch "Coyote Teaching: a New (Old?) Take on the Art of Mentorship". If you like what you see, please comment below and let us know what you liked. If you don't like what you see, please comment below and tell us that, too :). Either way, we'd love to hear what you think.

The Conferring Continues at #CAST2014

Friday, August 15, 2014 09:52 PM


Hello everyone! It's wild to think that my week in New York will be ending tomorrow morning. I've had so many great experiences, conversations, and interactions with wonderful people. I've met both of my PerScholas mentees, and I've enjoyed watching them take in this experience. It's also been great fun talking to so many new friends, and I will genuinely miss this "gathering of the tribe", but let's not lament leaving when there is a whole day of interaction and conferring (not to mention my own talk ;) ) still to happen.

Lean Coffee this morning dealt with an interesting challenge: a water main broke in front of the Marlton, and the water for most of 8th Street was shut off. Water was only restored a little after 7:00 a.m., so the coffee part of Lean Coffee is only just starting to arrive for the participants.

One of the topics we started with was the transient nature of software testing, and why so few people who come into testing stay with it. There are lots of reasons for this. In my career, many of the software testers I have met have gone on to do other things. Some became programmers, some became project managers, some became system administrators, and some became managers. Of the people I knew who were testers for more than five years, most remained testers going forward. That's just my reaction, but I think that, by the time people have reached the five-year mark as testers, they have decided that they either like testing or are good at it. Of course, this may be an observation bias, since I am seeing people who followed my own path. It's an interesting topic to look at further.

The second topic focused on how software testers train and learn about their jobs, and how they find the time to do it. Many of the participants make an effort to carve out time for learning that is relevant to them. For me personally, I like to use my time on the train when I commute to and from work to read, think, and ponder ideas. Others use materials like Coursera or uTest University. Some people enjoy writing blogs (like me ;) ). Key takeaway: everyone has a different way to learn.

The third topic focused on how people get through the large amount of material at a conference. How do they capture it all? There are lots of techniques. My favorite is writing blog posts in semi-real time (like this one). Instead of just summarizing what I am hearing, I try to add a personal take on the details I have learned, so that it is more personal and actionable. Others use sketch notes, doodles, mind maps, recordings of talks, etc. The key takeaway is not the method you use to capture, but that you make what you capture actionable.

The fourth topic was moving into consulting or contracting, and what it takes to make that work. Several of the participants shared how they made the transition and the challenges they faced. In addition to doing the work, they had to handle finding gigs, collecting money, doing paperwork, filing taxes, etc. At times there will be unusual fits, and engagements that may or may not be ideal, but a benefit of being a consultant is that you are there for a specific purpose and for a specific time, and at the end of it, you can leave. There's a need to be able to deal with a high level of ambiguity, and that ambiguity tends to be a big hurdle for many. On the other side, there is a mindset of learning and continuous pivoting, where there's always something new to learn. The pay can be good, but it can be sporadic. Ultimately, at the end of the day, we are all consultants, even if we are employed by a company and getting a paycheck.

The final topic was "are numbers evil or maligned?". Some people look at numbers as a horrible waste of time: don't count test cases, don't count bugs, don't count story points. In some ways, this quantitative accounting is both aggravating and somewhat necessary. Numbers provide information. Whether that information is bad or good depends a lot on what we want to measure. If we are dealing with things that are consistent (network throughput, megabytes of download, etc.), collecting those numbers matters. Numbers of test cases, numbers of stories, etc. are much less helpful, because there is so much variation in what those values mean and how we can control them. When the measurement cannot be made actionable, or it's really vague, the numbers don't really make a difference. Numbers can be informative, or they can be noise. It's up to our organizations (and us) to figure out what is relevant and why.

Thanks to all the participants for being involved at this early hour :).

---

The morning keynote for today was delivered by Carol Strohecker of the Rhode Island School of Design. The topic was "From STEM to STEAM: Advocacy to Curricula", and the focus was on the fact that, while we have been emphasizing STEM (Science, Technology, Engineering and Mathematics), we miss a lot of important details when we do not include "Art" in that process.

STEAM emphasizes the importance of art and design, and how they matter to innovation and economic growth. Tied into this is the maker movement, which likewise emphasizes not just the functional but the aesthetic. There are a lot of neat efforts taking place at the academic level as well as the legislative level to get these initiatives into schools so that we can emphasize this balance.

There are many tools we can use to help us look at the world in a more artistic and aesthetic way. Art is nebulous and subjective. It does not have the same concrete syntax that science or language has (and language is pushing it). Much of that variance in the artistic leads to the development of a particular skill or attribute that we call "intuition". It's not concrete, it's not focused, it's not based on hard data, but it informs in ways that are just as valuable. Artistic endeavors help develop these traits. Cutting the arts out of our economic vision puts us at a significant disadvantage.

[I had to duck out at this point to take care of some AST business, so I can only give you my take on the actual details I heard discussed. Sorry for the gap.]

One of the quotes shared that makes great sense to me comes from Immanuel Kant:

"The intellect can intuit nothing, the senses can think nothing. Only through their union can knowledge arise."

It feels like this talk is resonating with many people; the Open Season is vigorous and active. An emphasis on synthesizing the inputs and the areas that we interact with makes for a richer and greater whole. It takes a different level of thinking to make it happen, but there are amazing opportunities for those willing to stretch into new disciplines. Many books have been suggested, including "Inventing Kindergarten", which talks about the importance of play and discovery to learning and skill acquisition/development.

James Bach made an interesting comment. With the emphasis on STEM, and now STEAM, who gets left behind now? Are there other areas that are now orphaned and unfunded, or is it that these areas have been unfunded for so long that they cry out for help? Does it make sense to work with a small group, or do we need to consider that STEAM is meant to be an all-encompassing discipline focus?


---


Up next is Harrison's and my talk... for obvious reasons, I will not be live blogging then ;). I do, however, encourage everyone to tweet comments or even ask questions, and I'll be happy to follow up and answer them :).


---

During lunch we had the results of the test challenge and we also had the results of the Board of Directors election.

Returning members of the board:
- Markus Gärtner
- Keith Klain (re-elected)
- Michael Larsen
- Pete Walen

New Members:
- Erik Davis
- Alessandra Moreira
- Justin Rohrman

Congratulations to everyone, I think 2014-2015 will be an awesome year :).

Ben Simo took the stage after lunch to talk about the messy rollout of the healthcare.gov web site and all of the problems that he, alone, was able to find with the site. Ben created a blog to record the issues he found, and it received a *lot* of attention from the media and from the government as well.

For the details of each of the areas that he explored, you can see the examples he posted on http://blog.isthereaproblemhere.com/. What I found interesting was that, as Ben tested and logged his discoveries, it showed just how messed up so many of the areas were, and how his efforts uncovered some strange issues without even trying. Ben was not asked to speak to Congress or to testify, and he did not see any government action result from his efforts, but he became the target of DDoS attacks, and media outlets were calling him very regularly.

On the side of the "Is there a problem here?" site, Ben lists a variety of test heuristics, and he applied most of them to help uncover the bugs he found. Many of the issues he discovered fit into these specific heuristics. Listed here, they are:

CONSISTENCY HEURISTICS
from James Bach and Michael Bolton

H istory
I mage
C omparable Products
C laims
U ser Expectations
P roduct
P urpose
S tatutes
F amiliar Problems

FAILURE HEURISTICS
from Ben Simo

F unctional
A ppropriate
I mpact
L og
U ser Interface
R ecovery
E motions

SECURITY VULNERABILITIES
the OWASP Top 10

1. Injection
2. Broken authentication and session management
3. Cross-Site Scripting
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards

What was also amazing to see was that all of these issues were discovered and reported using nothing more than his own data. He said he would not try to do anything to access other people's information, and he did not. Even then, he still found plenty of issues that should have made the healthcare.gov team both very nervous and very grateful.

---

Geoff Loken tickled my interest with the talk titled "The history of reason; arts, science, and testing". As one who finds philosophy and all of its iterations over the millennia fascinating, I've found the different ways that reason and intuition developed over the ages to be a worthwhile area to study and learn about. Ultimately, philosophy comes down to epistemology, and epistemology led to the scientific method (observation/conjecture, hypothesis, prediction, testing, analysis).

Observation cannot tell us everything. There are things we just can't see, hear, touch, smell, or taste. At those times, we have to use reasoning skills to go farther. The scientific method does not prove that things exist, but it can, to a point, disprove the nature of an item's existence.

Geoff played a clip from Monty Python and the Holy Grail (the witch scene) which showed what could be considered a case of bad test design. Ironically, based on the laws and understanding of the world at the time, they performed tests that were actually in line with their standards of rigor. We simply have codified our understanding of more disciplines since their time.

One of the other aspects that comes into play when we are testing is that science can quantify, but it can't qualify. There are aspects of testing we need to look at that go beyond the very definable and specific data aspects.

Overall, I found this to be a lot of fun to discuss, and it reminded me of many of the historical dilemmas that we have faced over the centuries. We look at what appear to be totally irrational actions in centuries past. It makes me wonder what from our current time will look irrational 500 years from now ;).

---

The last session I attended today was Justin Rohrman's talk "Looking to Social Science for Help With Metrics". Metrics is considered a dirty word in many places, and that disparaging attitude is not entirely unjustified. Metrics are not entirely useless, but measuring them in the right context is important. If they are not used in the correct context, they can be benign at best and downright counter-productive at worst.

By focusing on the metrics that actually matter, we can look at measurements that tell us about the systems we use and where we are in the life of the product. Some of the context-driven measurements that we could/should be looking at include:


  • work in progress
  • cycle time
  • lead time
  • touch time
  • slack
  • takt time
  • time slicing
  • variation
  • Find -> Fix -> Retest loop

These measurements intrigue me, and they seem to be much more in line with what could actually help an organization. They fit well into what is referred to as the Lean model. Lean focuses on measurement for improvement, in contrast with measurement for control.

I'm currently fortunate to work for a company that does not require me to chase down a number of useless metrics, though we have a few core metrics that we look at. These examples give me hope that we can get even more focused on measurement values that are meaningful. I'll definitely bring the context-driven list to my engineering team, or better yet, try to see if I can derive them and report them myself :).
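As a rough illustration of what "deriving them myself" might look like, here is a minimal, hypothetical Python sketch (my own example, not from Justin's talk) that computes lead time and cycle time from made-up ticket timestamps pulled out of a tracker:

from datetime import datetime

# lead time  = done - requested   (how long the requester waited overall)
# cycle time = done - started     (how long the work was actively in flight)

tickets = [
    # (requested,          started,             done) -- made-up example data
    ("2014-08-01 09:00", "2014-08-04 10:00", "2014-08-06 16:00"),
    ("2014-08-02 11:00", "2014-08-02 13:00", "2014-08-05 09:30"),
]

fmt = "%Y-%m-%d %H:%M"
for requested, started, done in tickets:
    r, s, d = (datetime.strptime(t, fmt) for t in (requested, started, done))
    print(f"lead time: {d - r}, cycle time: {d - s}")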

---

Tim Coulter and Paul Holland wandered about the venue and checked out as many of the talks as they could, so they could share some "TimBITS" and takeaways. Some takeaways:

- Direct your testing keeping the business goals in mind
- Tighten up the feedback loop, it will make everyone happy
- Write your bug reports as if they were for a memory-wiped future you
- When you add a test specialist into a development team, everybody wins
- Skills atrophy: Testing skills must be used or you will lose them
- Social sciences all play a part in testing, it's not just technology
- Testers appear to be hardwired to play games
- Art has an impact on Software Testing
- Good mentoring is hard. Answer your mentees questions with more questions
- It's a sin to test mobile apps sitting at your desk
- To be a good tester you need to be a "sort of skilled hacker"
- Pair with people in all roles; they will all give you different insights
- You can't prove quality with science, but you can prove facts that may alter judgement
- Try to show the need for what you want to teach before you try to teach it
- "Hey, these aren't just testers but real people!"

Many of the participants are sharing their own takeaways, and they are covering many of the ones already mentioned, but they are showing that many people are seeing that "there is an amazing community that looks out for one another and actively encourages each other to do their best work".

---

It's been fun, but all good things must come to an end. We are now at the closing keynote, given by Matt Heusser, and the title for this one is "Software Testing State of the Practice (And Art! And Science!)".

The first thing Matt says he is seeing is a swing toward programming, and back again. The debate over what testers should be doing, what kind of work they should be doing, and who should be doing it is still raging. Should testers be programmers? Some will embrace that, some will fight it, and some will find a place in the middle.

Automation and tech are increasing in our lives (there are supermarkets where self-checkout is becoming the norm). The problem with this prevalence is that we are losing the human touch and interaction. Another issue is that testing looks to be a transient career. It's entirely likely that more than 50% of the people here attending their first CAST will not even be in testing seven years from now.

Another issue that we are seeing is fragmentation within testing. What does "test" even mean? What is testing, and who has the final say on the actual definition? We have a wide variety of codebases and strategies for how we code and how we test. All of this leads to a discipline where many people don't really understand what it is that test teams do. We are a check box that needs to be marked.

The Agile community in the early 2000s was in a similar situation, and they dared to suggest they had a better way to write software. That became the Agile movement, and it has changed much of the software development world. Scrum is now the default development environment for a large percentage of the software development population. Scrum calls for testing to be done by embedded members of the team, and not by a separate entity.

Matt refers to three words that he felt would change the world of software testing. Those words are:

- Honesty
- Capable
- Reach

We need to be honest with our dealings and we need to show and demonstrate integrity. We need to prove that we are capable and competent, and we need to reach out to those who don't understand what we do or why we do it.

I added a comment to this talk in open season when asked why testers tend to move out of testing, and I think there's something to be said for the idea that testers are broad generalists. We have to be. We need to look at the product from a wide variety of angles, and because of that, we have a broad skill set that allows us to pivot into different positions, either temporarily or as a new job. I personally have done stints as a network engineer, an application engineer, a customer support representative, and even a little tech writing and training, all the while having software testing as either the majority of my job or a peripheral component of it. I'm sure I'm not an isolated case.

There's a lot to be said about the fact that we are a community that offers a lot to each other, and we are typically really good at that (giving back to others). As I said in a tweet reply, if we inspire you, please bring the message back to your friends or your team. Let them know we are here :).


Live from New York, It's... #CAST2014

Wednesday, August 13, 2014 01:03 PM


Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live blogging as much as I can of the proceedings, if WiFi will let me.

CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on topics they want to talk about, and then discussion revolves around the topics that get the most votes.

---

This morning we started out with the topic of capacity planning and the attempt to manage and predict capacity. Variance in teams can be dramatic (a three-person test team is going to have lower capacity than a fifty or one hundred person team). One interesting factor is that estimation for stories is often wrong; story points and the like often regress to the mean. One attendee asked, "What would happen if we just got rid of the points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.

Another interesting measurement is "mean time to fix the build". Another idea is to see which files get checked out and checked in most frequently, to see where the largest number of modifications are taking place and how often. Some organizations look to measure everything they can measure. One quip was "are they measuring how much time they are spending measuring?" While some measurements are red herrings, there are often valid areas where it makes sense to measure and learn what is needed as a remedy. The general consensus is that chasing lots of finely granulated measurements is less effective than simply targeting effort at getting the release stable and fixing issues as they happen.
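As a quick, hypothetical illustration of the "which files change most frequently?" idea (my own sketch, not something shared at the table), a few lines of Python run inside a git working copy will surface the hot spots:

from collections import Counter
import subprocess

# Count how often each file appears across the commit history.
# Assumes this is run inside a git working copy.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter(line for line in log.splitlines() if line.strip())
for path, n in counts.most_common(10):
    print(f"{n:5d}  {path}")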

---

Another topic was the challenge of what happens when a company has devalued or lost the exploratory tester skill set due to focusing on "technical testers". A debate came up over what "technical tester" actually means, and in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and that they meet the same level/bar that the software engineers meet. The question is, what is being lost by having this be the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not the primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting. "Technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.

---

The last topic we covered was "testing kata", which is of great interest to me because of ideas that those of us who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "randori", an open combat scenario where the participant faces multiple challengers and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori part is an open area that is ripe for discussion. The question is, how do we put it into practice? I'd *love* to hear other people's thoughts on that :).

---

After breakfast, we got things started with Keith Klain welcoming everyone to the event and Rich Robinson explaining the facilitation process. As many know (and I guess a few don't ;) ), CAST uses a facilitation method that optimizes the way the audience can participate. The K-cards that we give out let people determine how and where they can question, and make sure everyone who wants a say can get their say.

James Bach is the first keynote, and he opened with the reality that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often there is a delayed reaction (Analytical Lag Time) where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them.

The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues that James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance. The actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has failed at this point in time.

The testing we perform can be informed by some basic ideas, some quick suggestions, and then following the threads that those suggestions give us. Every act of genuine testing involves several layers. There's a narrative or explanation, there's an enactment of test ideas, there's the knowledge of the product that we have, there's the tester's role and integration in the team, there's the skill the tester brings to the table, and there's the tester's demeanor and temperament. All of these aspects come into play and help inform us as to what we can do when we test.

The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting that we can get a sequence of steps so that anyone can step in and be Paul Stanley of KISS. We all can sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated, not that there aren't many tribute band performers that really try ;).

James shared the variety of processes that he uses to test. He shared the idea of a Lévy flight, where we sample and cover a space very meticulously, then jump up and "fly" to some other location and do another meticulous search. The Lévy flight heuristic is meant to represent the way that birds and insects scour an area, fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance it seems random, but if we observe closely, we see that even the seemingly random flying around is not random at all; it's a systematic method of exploration. Other areas James talks about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
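For the curious, here is a tiny, hypothetical Python sketch (my own illustration, not something James showed) of what a Lévy-flight-style walk looks like: many small local steps punctuated by the occasional long jump, produced by drawing step lengths from a heavy-tailed distribution:

import math
import random

def levy_flight(steps=200, alpha=1.5, seed=1):
    """Random walk whose step lengths follow a heavy-tailed Pareto distribution."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        length = rng.paretovariate(alpha)    # mostly ~1, occasionally very large
        angle = rng.uniform(0, 2 * math.pi)  # uniform direction
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_flight()
longest = max(math.dist(a, b) for a, b in zip(path, path[1:]))
print(f"{len(path)} points visited; longest single jump: {longest:.1f}")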

James created a "spec" based on his observations, but recognizes that his observations could be wrong, so he will look to share them with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions about the project". The more specific we are, the less likely that wiggle room will be there. Issues will be highlighted and easier to confirm as issues if we are consistent with the terms we use. The test cases are not all that interesting or important; the tester's spec and the questions we develop and present at the end are what matter. However, just as Paul Stanley sang "Got to Choose" at Cobo Hall in Michigan in 1975, the performance of the same song in Los Angeles in 1977 will not sound exactly the same. Likewise, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.

Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them in the hierarchy where they actually belong. There is a level of focus and there are ways that we want to interact with the system. Having a list of areas to look at, so as not to forget where we want to go, certainly helps. Walking through a city is immensely more helpful if we have a map, but it's not entirely essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. Having a willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.

Ultimately, it comes down to the fact that testing is a performance. Fretting about test cases gets in the way of the performance. Back to watching KISS (or The Cure or The Weeknd, if we want to be more inclusive): they have songs, they have lyrics, they have emotive passages, but at the end of the day, an instance of a performance can be encoded, yet it represents only one instant in time. Every performance is different; every performance is on a continuum. You can capture a single performance, but not all of the performances that could be made. Test cases can guide us, but if we want to perform optimally, we have to get beyond them. We can capture the actual notes and words, but there is no automation for "showmanship" ;).

This comes down to tacit and explicit knowledge. I remember talking about this with James when we were at Øredev in Sweden: how do you teach something when you can't express it in words? There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it comes down to actively thinking about what you are looking at, and doing what you can to make for a performance that is both memorable and stands up to scrutiny.

---

As in all of these sessions, there are so many places to go and talks to see that it is difficult to decide where to be. For that purpose, I am deliberately going to let the WebCAST stream speak for itself. If you can view the WebCAST presentations, please do so; after the recordings are available, they will be posted to the AST Channel for later viewing. For that reason, I am going to focus on sessions I can attend that are not going to be recorded, and that are relevant to my own interests and aspirations. With that, I was happy to join Alessandra Moreira (@testchick) and her talk "My Manager Would Never Go For That", or more succinctly, how to apply context-driven principles to the art and act of persuasion. I always think of the scene in the film "Amadeus" where Mozart is trying to convince the Emperor to let him go forward with the staging and presentation of "The Marriage of Figaro". The Emperor at one point says, "You are passionate, Mozart, but you do not persuade". This is a key reminder to me, and I'm guessing Ale is very familiar with it. Being passionate is not enough; we have to persuade others to see the value in what we are passionate about.


Sometimes this comes down to a decision of "should I stay or should I go?" Do I change my organization, or do I change my organization? Persuasion may or may not come about, but the odds are we can do a better job of persuading if we are ourselves willing to be persuaded. Conversations are two-way streets. We learn new things all the time. Are we willing to adapt our position given new information? If not, why not? If we think that our way is the best way, and we are not willing to bend with what we are told, why should anyone else be persuaded by us? Influence is not coercion, and it's not manipulation; it's a process of guiding and suggesting, offering information and examples, and "walking the walk" that we want to influence in others. There is a three-step process in the art of persuasion. First, we need to discover something that we feel is important. Second, we need to prepare, and get our ducks in a row, so to speak: we need to know and have supporting evidence that we understand what we are doing and that we have a compelling case. From there, we need to communicate and embark on an honest and frank dialog about the outcome we want to see.

In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you latitude and the benefit of the doubt if you come prepared. If you can present what you do in a way that convinces the person who needs to be persuaded that you have done the work to be ready for them to do their part, they are much more likely to go along with it. Start small and get some little successes. That will often get these first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or you can refine and regroup to try again.

I'm fortunate in that I have a manager who is very willing to let me try just about anything if it will help us get to better testing and higher skill, but it's likewise important to do my homework first, as it helps to build my credibility on the topic at hand. Credibility goes a long way toward helping persuade others, and credibility takes time to build. With credibility comes believability, and with believability comes a willingness to let you try an idea or experiment. If the experiments are successful in their eyes, they will be more likely to let you do more in those areas where you are aiming to persuade. If the experiments fail, do not despair, but it may mean you have to adapt your approach and make sure you understand what you need to do and how it fits in your own organization.

One of the key areas where people fail when it comes to persuasion is "compromise". Compromise has become a bad word to many. It's not that you are losing; it's that you are willing to work with another person to validate what they are thinking and to have them see what you are thinking. It also helps to start small, pick one area or a particular time box, and work out from there.

---

During the lunch break, Trish Khoo stepped on stage to talk about "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the code she had written was not regarded by any of the programmers, since they felt what she was doing was just some other thing that had to be done so they could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point she had never seen a programmer do any type of testing. Since many of the efforts she was used to doing were now being taken care of by the programmers, her role became more challenging, and she had to be a lot more inquisitive and aggressive in looking for new areas to explore.

We often think of the developer and tester as being responsible for finding and fixing bugs, and the product owner and the tester as responsible for verifying expectations. The bigger challenge is that, with these loops, we end up chewing up hours, days, and weeks constantly cycling through finding and fixing bugs and verifying expectations. Interestingly, when we ask developers to focus on writing tests to help them write better code, the common answer is "yeah, that makes sense, but we don't have time to do that now, we'll do that next sprint", and then they say the same thing the next time, if it comes up again. How do we convince an organization to consider a different approach and have developers get more involved in test? Trish looked to a number of different organizations to see how they did it.

One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi Dave ;) ). While he is a tester, he does as much testing as the programmers, and as much code writing as the programmers.

Another person she talked to was Alan Page of Microsoft (Hi, Alan ;) ). Some of the teams at Microsoft have moved to a model where everyone has dispensed with job titles. Ask Alan what his job title is and he'll say "generic employee" or, if pushed, "software engineer". The idea is that they are not confined to a specific specialty. The goal is that, instead of having people in roles, they open up the opportunities for people to do what their skill set and passion provide. The net result is that managers orchestrate projects based on skill. Instead of hiring "testers who code", they are looking to hire "people who can solve problems with code". The idea of tester as a role is not relevant; everyone codes, everyone tests.

The third case study was with Michael Bachman at Google. In a previous incarnation, Google would outsource a lot of the manual testing to vendors, mostly to look at the front-end UI. Much of the coverage that those testers were addressing ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and there was another team called Engineering Productivity, who helped to teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".

What did they all have in common? Efficiency was the main driver. Teams that have gone to this model have done so for efficiency reasons. There are lots of other words associated with this (education, feedback, upskill, culture, no safety net, etc.). One word that is missing, and that I'd be curious to see, is effectiveness. Overall, based on the presentation, I would say that effectiveness was also part of this process. Efficiency without effectiveness will ultimately cause an organization to crash and burn; therefore, there is value in these changes.

So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool-assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. As one who doesn't have a "developer" background, I'm not automatically at a disadvantage, but it does mean I would be well served to learn a bit about programming and get involved in that capacity. It may start small, but we can all do some of it if we give it a chance. We may never get good enough at it to become full-time programmers, but this model doesn't really require it. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than be on the tail end of the experience and have it happen to you, perhaps it helps to get in front of the wave and be part of the transition :).

---

The next session I opted to attend was "The Psychology and Engineering of Testing", presented by Jan Eumann and Ilari Aegerter. Both Jan and Ilari work with eBay, but they are part of the European team, and the European market and engineering realities are different from what goes on in Silicon Valley. There is a group of testers based in London and Berlin that gets software from the U.S. to test, while the European team has software testers embedded in the development team.

Who works as an embedded tester in an Agile team? Overall, they look for individuals with strong engineering skills, but they also want to see the passion, interest, and curiosity that help make an embedded tester formidable. The important distinction is that the embedded testers at eBay Europe are not thinking of programming and testing as either/or, but as "as well as". They are encouraged to develop a broad set of skills to help solve real problems with the best people who can solve them, rather than focusing on just one area.

When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time, developers became test infected, and testers became programming savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.

Though there are integrated teams, the testers still report to testing managers, so while there are still traditional reporting structures, there is a strong interconnected sense between the programmers and testers in their current culture. The Product Test Engineering team has their own Agile manifesto that helps define the integration and importance of the role of test, and how it's a role that is shared through the whole team. If the goal of an embedded tester is to be part of a team, then it makes sense, in Jan's view, to be with the team in space, attitude and purpose. Sitting with the programmers, hanging with the programmers, meeting with the programmers, all of these help to make sure that the tester is involved right from the start.

Additionally, testers can teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness of other domains. They also have the ability to help guide testing efforts in early-stage development, and to inform and encourage setting up areas that programmers might not set up were there not a tester involved. It sounds like an exciting place to be a part of, and an interesting model to aspire to.

---

I love the cross pollination that occurs between the social sciences and software testing, and Huib Schoots has a talk that addresses exactly that.

We often treat software testing and computer science as though they are hard sciences like mathematics or physics or chemistry. They have principles and components that are similar, but in many ways, the systems that make up software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. In fact, timing, variance, user interactions, congestion, and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests that failed at one time worked the second time they were run, without changing any parameters. Something caused the first run to fail and the second one to pass. There can be lots of reasons, but usually they are not physics or mechanical details; they are coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.

Huib shifted over to the ideas in the book "Thinking, Fast and Slow", in which simple things are calculated or evaluated very quickly, and more complicated matters require a different kind of thinking. Karl Marx developed theories about how people should interact, and while the theories he prescribed have been shown to not be ideal, they are still based on the realities of how humans interact with one another. The science of sociology informs many aspects of the way we work and interact with others, which generally informs our designs of systems. Lévi-Strauss represents anthropology, which deals with the way that different cultures are structured and the environmental factors that help shape those structures. Maria Montessori represents didactics and pedagogy, aka learning and the methodology that helps inform how we learn. He used his girlfriend to represent communication studies, and the fact that the way we talk to one another informs the way we design systems, because the communication aspect is often what gets in the way of what an application does (or should I say, the inability to communicate smoothly gets in the way).

Science and research inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago when I stopped considering a long series of test cases to be effective testing. Going back to the scientific method gave me the ability to reframe testing as though it were a scientific discipline in and of itself. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they tell us about how we communicate and interact.

We focus so much attention on trying to prove we are right. That's misguided; we cannot prove we are right about anything. We can disprove, but when we say we have "proven" something, what we mean is that we have not found anything that disproves what we have seen. Over time, and with repeated observation, we can come close to saying it is "right", but that's only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).

Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.

---

Martin Hynie and Christin Wiedemann focused their talk on games. Well, to be more specific, "Why testers love playing – Exploring the science behind games". Games are fundamental to the way we interact with our environment and with others. Games help us develop cognitive abilities; the more we play, the more cognitive development occurs. This reminds me a lot of the work and talks I have seen given by Jane McGonigal on how gaming and game culture affect both our thinking and our psychological being.

OK, that's all cool, but what does this have to do with testing?

Testing is greatly informed by the way we interact with systems. We try out ideas and see if they will work based on what we think a system might do. While the specific skills learned from games do not transfer, our way of looking at situations, and the inspiration that may give us ideas, do. Game play has been scientifically shown to modify and change our cortical networks. It was fascinating to see the way in which Martin and the other testers on the team approached this as a testing challenge.

The test subjects had their brains scanned while they were playing games, and the results showed that gaming had an actual impact on the brain. Those who played games frequently showed cooler areas of the brain than those who were not playing games, which suggests that gaming optimizes neural networks in many cases. Martin also took this process further, focusing on a more specific game, i.e. Mastermind, and what that game did to his brain.

So are games good for testers? The jury is still out, but the small sample set certainly seems to indicate that games do indeed help testers, and that the culture of tester games, and games in general, is healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).

---

A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!

From Mid-Town Manhattan, it's #TestRetreatNYC

Sunday, August 10, 2014 05:52 PM


In the offices of LiquidNet, a group of intrepid, ambitious testers met to discuss what aspects of testing matter to us. Test Retreat is an Open Space conference, where sessions begin (and end) when the participants want them to, the people who are there are the people who need to be there, and the law of two feet rules. We took some time to propose topics, pull together similar topics and threads, and then head into our places to talk about the stuff that is burning us up inside.

---

The first session I attended was based around developing a testing community within a city or region. As many of us were part of different communities around the world, there were a variety of experiences, ranging from very small markets with perhaps a hundred testers total, to large regions with hundreds of thousands of testers, but little in the way of community engagement. 

Rich Robinson led the discussion and shared his own experiences with growing and scaling the community in Sydney, Australia. Rich shared that three areas were consistently asked for and looked at as the goals of the attendees: looking for work, finding out about the latest trends, and honing and sharpening skills. Rather than try to be all things to all people, they developed a committee and separate initiatives, with committee members focusing on the initiatives that mattered to them. For the group that wanted to get work, they made an initiative called "opportunity seekers", and focused energy on those who had that as their biggest objective. For those who want to focus on the latest trends, there is an avenue for that as well.

Different regions have unique challenges. In the Bay Area, we have an overabundance of meetups for technical topics, of which only a handful relate to software testing. As founders of Bay Area Software Testers (BAST), Curtis Stuehrenberger, Josh Meier and I have chosen to focus on areas that are not as often discussed (we tend to steer clear of automation tools and techniques, since there are dozens of other meetup groups that cover those topics). Another challenge is frequency. In some cases, regular and frequent meetings are key. In others, less frequent meetings work, as long as they happen on a regular cadence (monthly, every six weeks, or quarterly seem to be the most common models).

Rich also shared that their initiatives branch out beyond the formal meetup sessions, with other activities that occur outside of the formal meetup times. By having each initiative staffed with committed people and resources to help drive it, buzz gets generated and more people get involved. One of the key things Rich emphasized was getting people involved and engaged in these initiatives. The Sydney tester group has a committee of ten members that helps make sure that these initiatives are staffed and supported.

Another challenge is local geography. Some cities have sprawl; others are difficult to get to in a timely manner due to traffic and population density. For example, the San Francisco Bay Area has four general regions: San Francisco (and the upper San Francisco Peninsula), Silicon Valley (and the lower San Francisco Peninsula), the East Bay, and the North Bay. With a few exceptions, people who participate in one generally do not regularly participate in the others. To reach a broader community across these sub-regions, it may require using technology and remote-access options for people to participate.

Ultimately, growing a community takes time, it takes dedicated people, and it takes a range of topics that matter to the attendees (including making sure that food and drink are there ;) ). To get the people you want involved, it helps to be very specific about what is needed. Saying "I need help with this" is less effective than saying "I need this specific thing to be done at this time for this purpose". Specificity helps a lot when recruiting helpers.


The next session was “Sleep No More”, presented by Claire Moss, and focused on the model of the performance/play called Sleep No More (which is, in some ways, described as “an immersive performance of Macbeth”). It’s a darkened environment, all participants wear masks, no photography, no talking, just experience as the person sees it. Exploratory Testing, in many ways, has similarities to this particular experience. Claire used a number of cards to help display the ideas, and one of the first ideas she shared was “fortune favors the bold”. Curiosity and a willingness to go in without fear and deal with a substantial amount of “vague” is a huge plus. If you already have that, you have a strong advantage. If this is not natural for you, it can be developed.

Each room in the Sleep No More experience was part of the performance, and at any time, rooms could be empty or have people filter in during the performance. There are “minders” in the event that help to make sure that people don’t completely lose track of where they are. At times, there are very personal experiences that take place based on your tracks and where you go. Claire described a very intense experience of the performance based on where she went and what she observed and chose to follow up on. She also said that, up to this point, no one that she knew had anything like the experience she had.

The experience of Sleep No More was bizarre, creepy, full of strange triggers, and had the potential to go in wildly unexpected directions. Software testing in many ways mirrors this experience. While there may be familiar areas and ideas, very often our choices and angles may take us into very unexpected places. To give an example of the scope of the space, this was in a six-story building that used to be a hotel (whole areas of the building were gutted; in some spaces, multiple floors were open to the air and visible).

Claire described a feeling of “amazement fatigue”, where the level of stimulus is so high that there is no way to take it all in. The participants have to make conscious choices as to where they will go, and many of the participants will have wildly different experiences. Sometimes, they would follow a character, only to watch them go through a door and close and lock it, so that they couldn’t be followed any longer. This reminds me of following threads of a feature, and being brought to a dead end. People will observe different things, and they will also observe what other people do, and what they focus on. This can give us clues as to areas we want to explore next. 

This experience sounds amazing, and I am definitely interested in going and doing it myself, if time and commitments permit me to do so. Looks like I will be attending the August 10 performance :).


The next session was based on “Leadership”, and Natalie Bennett led the session with the idea that she wanted to see where individuals felt their experiences or needs for leadership were, as opposed to her telling us what she felt about Leadership and how to do it. Questions that Natalie wanted to discuss were:

- What is the purpose of a test team lead?
- What is it for?
- What makes it different than being a test manager?

The discussion shifted from there into ways that test team leads and test managers were similar and where they differed. Some of the participants talked about how they led by example, and how they divvied up the work among the group based on the people involved and what they were expected to do. Team leads in general do not have hiring/firing authority, and they typically do not write reviews or have input into salary decisions. In other environments, the team manager and team lead are one and the same. There are some who are cynical about the effectiveness of this arrangement, while others feel that it is possible to be both a team lead and a team manager. One attendee who is a Director of Q.A. for her company said that she was “the face of Q.A.” to the organization, and as such, she was setting the direction and expectations for the organization, as well as for her own direct reports.

Team leads are expected to teach and coach the members of their group, as well as be the point of contact for the group. It’s seen as important that they be able to focus on and develop their own role and make it responsive to their own environment. The team lead stands up for the group, and defends them from encroaching issues and initiatives that are counter-productive to their success. Responsibility and authority tend to be on a sliding scale. Different companies allow for different levels of authority for their leads. Some give a level of authority that is just short of being an actual manager. In others, the lead is considered a “first contact” among equals.

One of the bigger challenges is dealing effectively with team members who are failing. Failing in and of itself is not bad. It’s important to learn, and failing is how you learn, but when the failing is chronic or insurmountable, there needs to be a different level of interaction. Lean Coffee, direct mentoring, or even a serious reconsideration of experiences and goals can be hugely beneficial, both for the individual and the team as a whole.


Matt Heusser led a session about “Teaching Testing”, and some of the challenges that we face when we teach software testing to others. When we have an engaged and focused person, this usually isn’t a problem. When the person in question isn’t engaged, or is just going through the motions, then it’s a little more difficult. The question we focused on at first was “what methods of teaching have worked for you?” Testing is a tactile experience, rather than an abstract question. We are familiar with questions like “how do you test a stapler?” or “how can you test a Rubik’s Cube?”. The presentation of this challenge may be the most important aspect. Some might look at “how do you test a stapler?” as demeaning: they are professionals, so what is this going to teach them?

In my experience, one of the things I found to be helpful is to actually spell out how challenging the exercise could be. Rather than ask “How do you test a stapler?”, I might instead say “Tell me the 120 ways that you can test a stapler to confirm it’s fit for use?” This sets a very different expectation. Instead of saying “oh, this is trivial”, by seeding a high number, they may want to try to see how they might be able to meet or exceed that number. They become engaged.

To borrow a bit from Seth Godin, there are two primary goals for everyone, the important things we need to learn regardless of the discipline. The first is to focus on authentic problems. The second is to be able to lead. Domain knowledge is a huge factor in helping to identify authentic problems. It’s not the only means, but getting to really know the domain can help inform testing ability. Another important aspect is to understand how people learn. Everyone goes about learning a bit differently, and helping each person learn how they learn can be a huge step in helping to teach them. Sometimes the ripest area for learning is to wade into an area where people disagree, or where there is dysfunction between people or groups: team members who don’t talk to each other, or simmering hostility between people. If there’s hostility between two programmers, and they write software components that interact with each other, it’s a good bet that there’s a goldmine of issues at their interaction points (I think this is a very interesting idea, btw :) ).

Key to teaching testing is the ability to reflect and confirm what has been taught and learned, and for me, I think that Weekend Testing does this very well. The benefit of Weekend Testing, beyond just doing the exercise, is that we can see lightbulbs turning on, and there’s a record of it that others can see and learn from. Creating HowTo’s can also be a helpful mechanism for this. 


This section is the talk that Smita Mishra and I gave about “Hiring Testers and Keeping them Engaged Once We’ve Hired Them”. I recorded this session, and I will transcribe it later ;).


Claire Moss led a session on “Communicating to Management” and we went through and considered a list of questions that are important to frame the conversation(s):

- What does quality look like to our organization?
- Why spend money on testing?
- What does testing do?
- What value are we getting out of testing?
- “I read this about QA, and it says we should do this… why aren’t we doing this?”

These are all questions that we need to be prepared to answer. The question is, how do we do that?

There are several methods we can use, but first and foremost, we need to determine what we need to speak with management about, and if possible, use the opportunities to help educate them about what it is we can do, and at the same time, get a clear understanding about what their view of the world is.

Looking to standards and practices that are helpful can give us guidance, but they don’t always represent our reality. Information needs to be specific to explaining where we stand at a given time. Testing is primarily focused on giving quality information to the executives so that they can make qualified decisions. That is first and foremost our mission. Information that we can effectively provide includes:

- Framing of the ecosystem on a global scale (browser standards, trends, data usage histories)
- Impact on customers (client feedback, analytics data)
- Clarify issues and questions (heading off the executive freakout)
- Managing expectations (especially when dealing with something new)
- Explaining how likely issues brought to their attention really are problems worth investing in
- Explaining risk factors and methods to mitigate those risks

---

At the end of the day we had a lot of new ideas, feedback on some new initiatives, an emphasis on better communication and more focused due diligence, and a sense that many participants felt they had a lot to contribute. This was a fun and active day, with a lot of learning and connecting. One of the things I am always impressed by when it comes to these events is that we really have a lot of solid people in the testing community, but we need even more.

I encourage every tester that admires craftsmanship, skill, and thinking to make it a point to come to these now annual events (this is the third of these, so I think it’s safe to say that it’s a thing now ;) ). Once again, thanks Matt (Heusser) and Matt (Barcomb) for organizing what has become my favorite Open Space event. May there be many more.


Silence Can Be Powerful

Friday, August 08, 2014 13:00 PM

This past Tuesday evening, the Boy Scout troop that I am Scoutmaster for (Troop 250 in San Bruno), held its big annual Homecoming Court of Honor. This is typically a big affair, in that it encompasses our Scout Camp week and all the awards earned while there (which is to say, a lot of them).

One of the things I have been trying to do with the boys in my troop is encourage them to take on challenges and take chances. Ultimately, a well run Troop is run by the boys, not by the adult leaders. Still, it's very common for them to ask me a lot of questions about what they should do or shouldn't do, and normally, I'm ready and willing to provide them answers.

This time, though, I decided to do something unprecedented, at least as far as a Court of Honor was concerned. The scouts are familiar with what we call a "silent" campout. At a silent campout, the adult leaders camp in a site adjacent to the one the scouts are camping in, and we follow all rules of safety and emergency preparedness, but other than that, we stay in our camp site, and they stay in theirs. They set up, cook, clean, make and break down fires, and anything else they need, all without any input from the adult leaders. The goal is to have them learn from their own mistakes, and to have them work with each other to solve their problems rather than have the adult leaders do it for them.

I decided to take this one step further and declared that our Court of Honor would be a "silent" Court of Honor. In other words, the scouts would run it, they would do all of the specific details (give out awards, recognize rank advancements, etc.) and I and the other adult leaders would sit back and watch. We would not speak, we would not direct, we would not answer questions.

So how did it work out? Splendidly!

Granted, there were several times where I had to sit on my hands and keep my mouth shut when I so wanted to say "no, not that way, do it like this!" That, however, was not the point. It wasn't my Court of Honor, it was their Court of Honor. If they neglected to bring something out, put something on display, or do something that they had seen me do dozens of times, that didn't matter. What I wanted to see from them was what they felt was important. I wanted to see what aspects of a Court of Honor they wanted to do. They jumped at the chance to do a skit. They liked doing silly one liners. They enjoyed the awards part, and they left me a little time at the end to speak my piece. Which I did, but only at the end.

Many times, I think we do a disservice to those who are learning and trying to figure out what's important and what isn't. Coyote Teaching focuses on leveraging the environment and addressing real needs, as well as focusing on the art of questioning, or asking more questions rather than giving direct answers. I'd like to add to that the very real teaching tool of "be quiet". Sometimes it's best to not answer, or to remove ourselves entirely. Sure, there may be stumbling, there may be things said that are not perfect, or there may be some key stuff that gets forgotten, but that's OK.

What's important is to give the people you are working with a chance to discover what is important to them, and let them reach that conclusion themselves. It would have been very efficient to correct them and tell them what to do, but it would have been far less effective than giving them the chance to run with the program all on their own. They've participated in several Courts of Honor over the years that I have run, and regardless of how flawlessly I may have done them in the past, none of them will be as memorable, or mean as much, as this one will. The reason? By giving them silence, they got to experience and do for themselves what they wanted to do, and honor the troop in the way that actually mattered to them.

I'm super proud of all of them... and yes, I took notes ;).

Stepping Back, Taking a Breath, Letting Go, and Saying "NO"

Thursday, August 07, 2014 20:01 PM

Many will, no doubt, notice that my contributions to this blog have been spotty the past few months. There's a very specific reason.

A few months back, I did an experiment. I decided to sit down and really see how long it took me to do certain things. I've been reading a lot lately about the myth of multi-tasking (as in, we humans cannot really do it, no matter what we may think to the contrary). I'd been noticing that a lot of my email conversations started to have a familiar theme to them: "yeah, I know I said I'd do that, and I'm sorry I'm behind, but I'll get to that right away".

Honestly, I meant that each and every time I wrote it, but I realized that I had done something I am far too prone to do. I too frequently say "yes" to things that sound like fun, sound like an adventure, or otherwise would interest and engage me. In the boundless optimism of the moment, I say "sure" to those opportunities, knowing in the back of my mind there's going to be a time cost, but it's really fuzzy, and I couldn't quantify it in a meaningful enough way to guard myself.

I decided I needed to do something specific. I purchased a 365 day calendar (the kind with tear off pages for each day), and I took all of the dates from January through May (at the time I did this, that was the current date). I wrote down, on each sheet, something I said or promised someone I would do. Some of them were trivial, some were more involved, some were big ticket items like researching an entire series of blog posts or working through a full course of study for a programming language. As I started jotting them down, I realized that each time I wrote one down, another one popped into my head, and I dutifully wrote that one down too, and another, and another, until I had mostly used up the sheets of paper.

WOW!!!

I came to the conclusion that I would have to do some drastic time management to actually get through all of these, and part of that was finding out where I actually spent my time and how long it took to actually complete these tasks. I also told myself that I was going to curtail my blog writing until I got through a bunch of these. I've often used my blog as "healthy procrastination", but I decided that, unless I was discussing something time sensitive or I was at an event, the blog would have to take a back seat. That's the long and short of why I have written so little these past three months.

In addition, I came to a realization that matched a lot of what I had been reading about multi-tasking and effectively transitioning from one task to another. For every two tasks I tried to accomplish at the same time, the turnaround time to get them done was roughly four hours above and beyond what it would have taken to do those tasks individually. That was the absolute best case scenario, with me firing on all cylinders and my brain in "hot mode". As I've said in the past, to borrow from James Bach, my brain is not like a well oiled machine. Instead, it's very much like an unruly tiger. I can have all the desire in the world, and all the incentives to want to get something done, but unless "the tiger" was in the mood, it just wasn't going to be a product I, or anyone else, would be happy with.

The stimuli that had the best effect were an absolute drop-dead date, and another person depending on what I was doing to make it happen. Even then, I found myself delivering so close to the drop-dead date that it was making both me and the people I was collaborating with anxious.

Frankly, that's just no way to live!

Next week is CAST. I am excited about the talk I am delivering. It's about mentoring, using a method called Coyote Teaching, and the rich (but often expensive) way in which it allows for not just the transfer of skills, but also truly effective understanding. In the process of writing and working on this talk with my co-presenter, Harrison Lovell, I decided to use it on myself, a little bit of "Physician, heal thyself". I came to realize that my expectational debt was growing out of control again. In the effort to try to please everyone, I was pleasing no one, least of all myself. Additionally, I have been looking at what the next year or so is shaping up to look like, and where my time and energy are going to be needed, and I came to the stark realization that I really had to cut back my time and attention for a variety of things that, while they sounded great on the surface, were just going to take up too much time for me to be effective.

I've already conversed with several people and started the process of tying up and winding down some things. I want to be good to my word, but I have to be clear about what I can really do and what time I actually have to do those things. Time and attention are finite. We really cannot make or delay time. No one has yet made the magic device from "The Girl, The Gold Watch and Everything", and time travel is not yet possible. That means that all I can do is use the precious 24 hours I get granted each day to meet the objectives that really matter. That means I really and honestly have to exercise the muscles that control the answer "NO" much more often than I am comfortable with doing. I have to remind myself that I would rather do fewer things really well than do a lot of things in a mediocre or poor way.

I am appreciative of those who have willingly and understandingly helped by stepping in and taking over areas that I needed to step back from. Others will follow, to be certain. For the most part, though, people are actually OK with it when you say "NO". It's far better than saying "YES" and having that yes disappear into a black hole of time, needing consistent prodding and poking to bring it back to the surface.

I still have some things to deliver, and once they are delivered, I'm going to tie off the loose ends and move on where I can, hand off what I must, and focus on the areas that are the most important (of which I realize, that list can change daily). Here's looking to a little less cluttered, but hopefully more focused and effective few months ahead, and what I hope is also a more regular blog posting schedule ;).

A Time and a Season to all Things

Wednesday, July 16, 2014 21:50 PM

Today I sent the following message to the members of the Education Special Interest Group of the Association for Software Testing:


Hello everyone!

Three years ago at this time, I took on a challenge that no one else wanted to take on. I realized that there was a lot at stake if someone didn't [added: the AST BBST classes might cease], and thus a practitioner, with little academic experience, took over a role that Cem Kaner had managed for several years. I stepped into the role of being the Education SIG Chair, and through that process, I learned a lot, we as a SIG have done a lot, and some interesting projects have come our way to be part of (expansion of AST BBST classes and offerings, SummerQAmp materials, PerScholas mentoring program, etc.). It's been a pleasure to be part of these opportunities and represent the members of AST in this capacity.

However, there is a time and a season for all things, and I feel that my time as an effective Chair has reached its end. As of July 15, 2014, I have officially resigned as the Chair of the Education Special Interest Group. This does not mean that I will stop being involved, or stop teaching BBST courses, or stop working on the SummerQAmp materials. In fact, it's [my] desire to work on those things that has prompted me to take this step. Even I and my hyper-involved self has to know his limitations.

I have asked Justin Rohrman to be the new Chair of the Education Special Interest Group, and he has graciously accepted. Justin is more than capable of doing the job. In many ways, I suspect he will do a better job than I have. I intend to work with him over the next few weeks to provide an orderly transition of roles and authority so that he can do what I do, and ultimately, so I can stop doing some of it :).

Justin, congratulations, and thank you. EdSIG, I believe wholeheartedly you shall be in good hands.

Regards,
Michael Larsen
Outgoing EdSIG Chair


To everyone I've had the chance to work with in this capacity over the past three years, thank you. Thank you for your patience as I learned how to make everything work, for some definition of "work". Thank you for helping me learn and dare to try things I wasn't aware I could even do. Most of all, thanks for teaching me more than I am sure I have ever taught any of you over these past three years.

As I said above, I am not going away. I am not going to stop teaching the BBST courses, but this will give me more of an opportunity to be able to teach them, or assist others in doing so, which is a more likely outcome, I think. It also frees me up so I can give more attention to participating in programs that matter a great deal to me, such as SummerQAmp and PerScholas. As I said above, I believe Justin will be fantastic, and I'll be just a phone call or email message away from help if he should need it ;).

Packt Publishing Turns Ten - We Get The Presents :)

Friday, July 04, 2014 13:13 PM

Over the past several years, I have been given many books to review by Packt Publishing, and it's safe to say a substantial part of my technical library comes from them. As a blogger who receives these books, usually for free, I like to help spread the word for publishers when they make special offers for their readers.

Below is a message from Packt Publishing that celebrates their tenth anniversary as a publisher, and an announcement that all of their ebook and video titles are, until Saturday, July 5, 2014, available for $10 each.

To take advantage, go to http://bit.ly/1mWoyq1

Packt celebrates 10 years with a special $10 offer

This month marks 10 years since Packt Publishing embarked on its mission to deliver effective learning and information services to IT professionals. In that time it’s published over 2000 titles and helped projects become household names, awarding over $400,000 through its Open Source Project Royalty Scheme.

To celebrate this huge milestone, from June 26th, every e-book and video title is $10 each for 10 days – this promotion covers every title and customers can stock up on as many copies as they like until July 5th.

Dave Maclean, Managing Director, explains: ‘From our very first book published back in 2004, we’ve always focused on giving IT professionals the actionable knowledge they need to get the job done. As we look forward to the next 10 years, everything we do here at Packt will focus on helping those IT professionals, and the wider world, put software to work in innovative new ways.

We’re very excited to take our customers on this new journey with us, and we would like to thank them for coming this far with this special 10-day celebration, when we’ll be opening up our comprehensive range of titles for $10 each.’

If you’ve already tried a Packt title in the past, you’ll know this is a great opportunity to explore what’s new and maintain your personal and professional development. If you’re new to Packt, then now is the time to try our extensive range – we’re confident that in our 2000+ titles you’ll find the knowledge you really need, whether that’s specific learning on an emerging technology or the key skills to keep you ahead of the competition in more established tech.

More information is available at: http://bit.ly/1mWoyq1

Coyote Teaching, Lousy Estimates, and Making a Pirate

Wednesday, July 02, 2014 20:23 PM

Several weeks ago I made a conscious decision. That decision was to focus on one goal and one goal only. I'll admit it's a strange goal, and it's difficult to put into simple words or explain in a way that many people will understand, but for long time readers of this blog, that shouldn't come as a surprise ;).

In August, I'm going to be presenting a talk with Harrison Lovell about Coyote Teaching, and the ways in which this type of teaching can help inform us better than rote example and imitation. As part of this process, I thought it would be fun to take something that I do that is completely outside the realm of software testing, and see what would happen if I applied or examined Coyote Teaching ideas and techniques in that space. Personally, I found the results very interesting, and over the next few days, I'm going to share some of what I learned, and how those lessons can be applied.

What was this unusual project? It's all about "playing pirate" ;).

OK, wait, let me back up a bit...

One of the things I've been famous for, over several Halloweens, has been my elaborate and very involved pirate costumes. Why pirates? They fascinate me, always have. They are the outsiders, the ones who dared to subvert a system that was tyrannical, and to make a world that was built on their own terms. Granted, that world was often bloody, violent, deceptive, and very dangerous, with a strong likelihood the pirates would be killed outright or publicly executed, yet it has captured the imaginations of generations through several centuries.

Here in Northern California, there is an annual affair called the Northern California Pirate Festival. Many people dress up and “play pirate” at this event, and last year I made a commitment that I would be one of them. More to the point, I decided I wanted to go beyond just “playing pirate”; I wanted to get in on the action. Now, in this day and age, getting in on the action doesn't mean “become an actual pirate”, it means joining the ranks of the re-enactors. This year was a small step, in that I chose to volunteer for the festival, and work in whatever capacity they needed. With this decision, I also opted to go beyond the Halloween tropes of pirates, and actually research and bring to life a composite character from that time, paying special attention to the clothes, the mannerisms, and the details of the particular era.

When most people look at popular representations of pirates, they're looking at tropes that represent the Golden Age of Piracy, that period in the early 1700s when many of the famous stories are set (The Pirates of the Caribbean franchise, Treasure Island, Black Sails, etc.). What this ignores is the fact that piracy had been around for millennia, and there were other eras that had a rich history, and an interesting look, all their own.

To this end, I decided that I wanted to represent an Elizabethan Sea Dog. My goal was to have people walk up to me, and say “hey, you look different than most of the people here”, and then I could discuss earlier ages of piracy, or in my case, privateering (and really, if you were on the side of the people being attacked, that difference was mostly irrelevant).

To make this a little more interesting, I decided to make my outfit from scratch. The only items I did not make from scratch were my boots, my hat, and the sword and dagger that I chose to carry. Everything else would be hand made, and here is where our story really begins.

The first order of business, if you choose to be a re-enactor, is to do research. If your character is a real person, you need to know as much as possible about not just their personal history, but about their time period, where they came from, the mores of the day, the situations that may have driven someone to be on the high seas in the first place, and the decisions that might potentially lead them to being privateers or pirates. Even if the character you are reenacting is fictitious, you still want to be able to capture these details. I spent several months reading up and examining all of these aspects, but I gave the clothes of the era special attention. What did a mariner in the mid-1500s actually wear? To this end, I came up with a mental picture of what I wanted my Sea Dog to look like. My Sea Dog would have high cavalry style boots, long pumpkin breeches, a billowy Renaissance style shirt, a close-fitting jacket (referred to as a “doublet”), a thicker outer jacket called a jerkin, and would wear what was called a "Tudor Cap". I would also make a wide belt capable of carrying both a rapier and a main gauche (a parrying dagger used in two handed dueling, common for the time period). I would make the “frogs”, or the carriers for the sword and dagger. I’d also make a simple pouch to hold valuables. Just a handful of items. It didn't seem that complicated. As one who already knew how to sew, and had experience making clothes in the past, I figured this was a project I could knock out in a weekend.

Wow, was I ever wrong!

I thank you if you have stuck with me up to this point, and you may be forgiven if you are thinking "wow, that's quite a buildup, but what does this have to do with the Coyote Teaching method?" Well, let's have a look, starting with the first part of the project, the pumpkin breeches.

Through my research I decided I wanted to create something that looked dashing, and a little dangerous, and I decided that I would use leather and suede in many of the pieces. The problem with using leather and suede is that it doesn't come on a regular sized bolt of fabric. In fact, real leather and suede is some of the most irregular material you can work with, since it entirely depends on the particular hide you are examining. I quickly realized that I had no pieces large enough to cut a full leg portion from any of my pattern pieces. What to do? In this case, I decided to piece long strips of three inch wide suede together. This would give the look of “panel seams”, and give the sectional look that is common for pumpkin breeches.

So let’s think about the easy part. Make a pair of pants. Just cut some material, and stitch it together, right?

Here are the steps that making these pants really entailed:

- taking out multiple suede hides and examining them
- cutting away the sections that would be unusable (too thin, too thick, holes or angles that couldn’t be used, etc.)
- lay out the remaining pieces and utilize a template to cut the strips needed. Repeat 40 times.
- take regular breaks because cutting through suede is tiring.

Irregular suede pieces cut to a uniform width and length.

- baste the pieces together and stitch them down the length of the strips, so as to make panels that were ten strips wide.
- make four of these panels.

Stitched composite panels. Each section of ten strips
is used for half of each leg (4 panels total)

- size the pattern for the dimensions of the pants desired.
- cut the suede panels into the desired shapes (being careful to minimize the need to cut over the stitched sections)
- cut matching pattern pieces out of linen to act as a lining for the suede.
- pin and piece together the lining and outer suede and sew them together.
- piece the leather panels to each other and sew them together so they actually resembled breeches
- wrestle with a sewing machine that is sewing through four to six layers of suede at a time, as well as the thickness of the lining material
- replace broken needles, since suede is murder on sewing machine needles, even when using leather needles.
- unstitch areas that bunched up, or where the thread was visible and not cleanly pulled, or where thread broke while stitching.
- make a cutaway and stitch a fly so that the pants can be opened and closed (so as to aid putting on and taking off, and of course, answering the call of nature).
- punch holes to place grommets in the waistband (since breeches of this period were tied to the doublet).

The weekend I had set aside to do the whole project actually gave me just enough time to size the suede and cut the strips I would need. That's all I managed to do in that time, because I discovered a variety of contingent steps were needed. I had to get my tools together, determine which of my tools were up to the task and which tools I didn't even own, and clear space and set up my work area to be effective. These issues took way more time than I anticipated.

How long did it take me to actually make these breeches? When all was said and done, a week. Using whatever time I could carve out, I estimated I spent close to 14 hours getting everything squared away to make these.
Completed pumpkin breeches...
or so I thought at the time.


I liked how they turned out, I thought they looked amazing… that is, until I gave them a test run outside in the heat of the day, and realized that I would probably die of heat exhaustion.

Since the weekend of the event was looking to be very hot (mid 90s, historically speaking, with one year reaching 104 degrees), I realized these pants would be so uncomfortable as to be unbearable. What could I do now? I had put so much time into these, I didn't have time to start over. Fortunately, research came to the rescue. It turns out that there was a style of pumpkin breeches that, instead of being stitched together, had strips of material that acted as guardes and hung loosely. After looking at a few examples, and seeing how they were made, I decided to cut open all of the seams I had spent so much time putting together, and reinforce the sections at the waist and at the bottom of each leg. It was a long and tedious change, but it allowed air to escape, and me to not die of heat exhaustion.

Jumping a little ahead, but here is the finished
open air version of the pumpkin breeches.


OK, let's talk Coyote Teaching now...

This whole process brought into stark relief the idea of estimating our efforts, and how we, even when we are experienced, can be greatly misled by our enthusiasm for a project.

Was I completely off base to think I could get this project done in a weekend? Turns out, yes! The estimate might have been accurate if I had been using standard bolt fabric, but I wasn't. I chose to do something novel, and that "novel" approach took five times longer to complete. What's more, I had to undo much of the work that I did in order to make the end result viable.

My estimate was dead wrong, even though I had experience making pants and making items to wear. I knew how to sew, I knew how to piece items together, and I had actually made clothing before, so I felt that gave me good confidence that my estimate would be accurate.

I’ve come to appreciate that, when I try to make an estimate on something I think I know how to do, I am far more likely to underestimate the time requirements needed when I am enthusiastic about the project. In contrast, if I am pessimistic about a project, I am likely to overestimate how long it will take. Our own internal biases, whether they be the “rose colored glasses” of optimism, or the depletion of energy that comes with pessimism, both prevent us from making a real and effective estimate.

Knowing what I know now, how would I consider guiding someone else through a similar project? I could tell them all of the pitfalls I faced, but those might not be helpful unless they are doing exactly the same thing I am doing. Most of the time, we are not all doing the exact same thing, and my suggestions may prove to be a hindrance. Knowing what I now know about the process of preparing suede in sections, I would likely walk the person through defining what might be done. In the process, it's possible they might come up with answers I didn't ("Why are we using real suede, when we can buy suede-cloth that is regular sized on a bolt?" "Couldn't we just attach the strips to a ready made pair of pants?"). By giving them the realities of issues they might face, or allowing them to think through those issues themselves, we can help foster avenues and solutions that they might not otherwise find, or that perhaps we wouldn't, either.

One thing is decided, though… the next outfit I make (and yes, be assured, I will make another one ;) ) is going to be made with regular bolts of wool, cotton or linen, rakish good looks be darned.

Weekend Testing Americas is Looking to "Go Deep" This Saturday

Monday, June 30, 2014 23:04 PM

Yes, it's been quiet here for quite a while. Too quiet, and I needed to break the silence. What better way than with an announcement of a Weekend Testing Americas session, you say? I agree wholeheartedly!

With that, I would like to cordially invite each and every one of you reading this to come and join us this Saturday, July 5, 2014.

Weekend Testing Americas #52 - Going Deep with "Deep Testing"
Date: Saturday, July 5, 2014
Time: 09:00 a.m. - 11:00 a.m. Pacific Daylight Time
Facilitator: Michael Larsen


So what does "deep testing" mean? Well, if I told you all that now, it wouldn't make much sense to hold the session, would it? Still, I can't just leave it at that. If that's all I was going to do, I might just as well have you look at the Weekend Testing site announcement and be done with it.


Justin Rohrman and I were discussing what would make for a good session for July, and he suggested the idea of "deep testing", and in the process, he suggested that we consider a few questions:


What if, instead of just having an unfocused bug hunt, we as a group decided to take a look at a specific feature (or two or three, depending on the size of the group) and really drill down as far as we could with that particular feature? Heck, why not just dig into a single screen and see what we could find? We have an exercise in the AST BBST Test Design class that does exactly this, only it takes it to the component level (we're talking one button, one dial, one element, period). We're not looking to be that restrictive, but it's an interesting way of looking at a problem (well, we thought so, in any event).


As part of the session, we are going to focus on some of these ideas (there may be lots more, but expect us, for sure, to talk about these):

- how do we know what we are doing is deep testing?

- what do we do differently (thought process, approach, techniques, etc.)?

- how do we actually perform deep testing (hint: staring at a feature longer doesn't make it "deep")?

- how do we know when enough is enough?

If this sounds like an interesting use of part of your Saturday, then please, come join us July 5, 2014. We will be starting at 9:00 a.m. Pacific time for this session, and it will run until 11:00 a.m. Pacific.


For those who have done this before, you already know the procedure. For those who have not, please add "weekendtestersamericas" to your Skype ID list and send us a contact request. More specifically, tell us via Skype that you would like to participate in this upcoming session. We will build a preliminary list from those who send us those requests.


On Saturday, at least 15 minutes before the session starts, please be on Skype and ready to join the session (send us a message to say you're ready; it makes it that much easier to build the session) and we will take it from there.

Here's hoping to see you Saturday.

Listening to a Cowboy: Live at Climate Corp, It's BAST!!!

Thursday, May 29, 2014 02:38 AM

Hello everyone, and sorry for the delay in posting. There's a lot of reasons for that, and really, I'll explain in a lengthy post (or small series of posts) exactly why that has been the case. However, tonight, I am emerging from my self imposed exile to come out and give support for Curtis Stuehrenberg and his talk about "ACCellerating Your Test Planning".

From the BAST meetup post:

"One of the most pervasive questions we're asked by people testing within an agile environment is how to perform test planning when you've only got two weeks for a sprint - and you're usually asked to start before specifications and other work is solidified. This evening we plan on exploring one of the most effective tools your speaker has used to get a test team started working at the beginning of a sprint and perhaps even earlier. We'll be conducting a working session using the ACC method first proposed by James Whittaker and developed over actual practice in mobile, web, and "big data" application development."

For those not familiar with Curtis (and if you aren't, well, where have you been ;)? ):

Curtis is currently leading mobile application testing at the Climate Corporation located in San Francisco, Seattle, and Kansas City. When not trying to help farmers and growers deal with weather and changing climate conditions he devotes what little free time he can muster to using his 15 years of practical experience to promote agile software testing and contextual quality assurance at conferences like SFAgile, STPCon, ALM-Forum, and CAST as well as publications like Tea Time for Testers and Better Software magazine.

This is an extension of Curtis' talk from the ALM Forum in April. One of the core ideas is to ask "can you write your test plan in ten minutes? If not, why not?"

Curtis displayed some examples of his own product (including having each of us download the Climate Corp mobile app), and brought us into an example testing scenario and requirements gathering session. Again, rather than trying to make an exhaustive document, we had to be very quick and nimble with regard to what we could cover and how much time we had to cover it. In this case, we had the duration of the talk to define the areas of the product, the components that were relevant, and the attributes that mattered to our testing.

Session Based Test Management fits really well in this environment, and helps to really focus attention for a given session. By using a very focused mission, and a small time box (30 minutes or so), each test session allows the tester to look at the attributes and components that make sense in that specific session. By writing down and reporting what they see, testers document their test cases as they are being run, and in addition, surface a variety of areas where they may have totally new testing ideas based on the session they just went through; these in turn inform other testing sessions. In some ways, this method of exploring and reporting simultaneously allows for the development of a matrix that is more dense and more complete than one generated up front before actively testing.

The dynamic this time around was more personal and more focused. Since it was not a formal conference presentation, questions were more frequent, and we were able to address them immediately rather than waiting until the talk was finished. Jon Bach's idea of threads was presented and described: how it can help capture interesting data, help us consciously stay "on task", and still capture interesting areas to explore later (OK, I piped in on that, but hey, it deserved to be said :) ).

It's been a few months since we were able to get everyone together, and my thanks to Curtis for taking the lead and getting us together this month. We are looking forward to next month's Meetup, and we'll share the details as soon as we know what it is (and who is presenting it ;) ).

Going "Coyote": Overcoming Fear and Uncertainty with "The Craddick Effect"

Thursday, May 01, 2014 20:00 PM

For those who have been following my comments about mine and Harrison Lovell’s CAST 2014 talk ("Coyote Teaching: A new take on the art of mentorship") this fits very nicely into the ideas we will be discussing. We’ve been looking back at interactions that we have had over the years where mentorship played an important role in skill development. During one of our late night Skype calls, we were talking about skateboard and snowboard skills, and how we were able to get from one skill level to another.

One aspect we both agreed was a common challenge was “the fear factor” that we all face. In a broader sense, we both appreciate that snowboarding and skateboarding are inherently dangerous. Push the envelope on either, and the risk of injury, or even death, is very real. Human beings tend to work very hard at an unconscious level to keep ourselves alive. The amygdala is the most ancient part of our brain development. It deals with emotions, and it also deals with fear and aggression. It’s our “fight or flight” instinct. On one level, it’s perfectly rational to listen to it in many circumstances, but if we want to develop a technical skill like jumping or riding at speed, we have to overcome it.

About fifteen years ago, I first met Sean Craddick, a fellow snowboarder who was my age and was, to put it simply, amazingly talented. I used to joke whenever I saw Sean at a competition that I would say “oh well, there goes my shot at a Gold Medal!” He humored me the first couple of times, but the third time I said it, he surprised me. He answered “Dude, don’t say that. Don’t ever say that! I could try to throw some trick and land it badly, and scrub my entire run. I could miss my groove entirely, or miss a gate on a turn, or I could catch an edge and bomb the whole thing. Every event is up in the air, and every event has the potential of having an outcome we’d least expect. Don’t say you don’t have a chance, you always have a chance, but you’ll never get the chance if you don’t believe you have it.”

Because we were both the same age and had fairly similar life experiences, I’d hang out with Sean at many of these events, and sometimes run into him on off days when I was just up at the mountain practicing. One time, he noticed that I kept going by a tall rail and at the last moment, I’d veer off or turn and ride past it. After seeing this a few times, when he saw me about to veer off again he yelled “Hey Michael! The next time you veer off, stop dead in your tracks, unbuckle your board and walk back up here. I want to talk to you about something.” Sure enough, I went down, veered off course, and slammed to a stop. I took off my board, and then I walked up the hill. Sean looked at me and said:

“Take it straight on, and line your nose with the lip and where the beginning of the rail is.”

I nodded, buckled in, and then went down to the rail transition. I veered off. I stopped. I walked back up the hill.

“Lean back on your rear heel just a bit. It will give you a more comfortable balance when you first get on the rail.”

I nodded, buckled back in, went down again, and again I veered off. I stopped, unbuckled, and walked back to the top of the hill again. By this point I was winded, my calves were aching, my heart was pounding, and I was getting rather frustrated.

“One final thing. Do an ollie at the end of the rail.”

What? I hadn’t even gotten on the rail, why is he telling me what to do when I get off of it? I shrugged, buckled in, went for the hit, and this time, I went straight, I lined the nose up, I set my weight back just a little bit, I slid down the rail, and I did a passably adequate ollie off the end of the rail, and landed the trick. When I did, Sean whooped and hollered, then came down after me and hit the same rail.

“Awesome, let’s go hit the chairlift!”

As we did, Sean looked at me and said:

 “You can understand all the mechanics in the world, but if your brain tells you 'you can’t do it, it’s too dangerous, it’s too risky', you need to get your body to shut your brain up! That’s what I had you do. I knew why you were sketching the last few feet. You were afraid. It felt beyond you. You might crash. It might hurt real bad if you do. The brain understands all that. It wants to keep you safe. Safe, however, doesn’t help you get better. Whenever I find myself giving in to the fear, I stop what I’m doing, right there, and I walk up the hill, and I try it again. If I pull back again, I walk the hill again, and again, and again. What happens is the body gets so fatigued that every fiber of your being starts screaming to your brain ‘just shut up already and let me do this!' Exertion and exhaustion can often help you overcome any fear, and then you can put your mechanics to good use.”

Yeah, I paraphrased a lot of that, but that’s the gist of what Sean was trying to get across to me. Our biggest enemy is not that we can’t do something, but that we are afraid that we can’t do something. That fear is powerful, it’s ancient, and it can be paralyzing. That ultra primitive brain can’t be reasoned with very well, unless we give it another pain to focus on. At some point the physical pain of exertion and exhaustion will outshout the feelings of fear, and then we can do what we need to do.

In a nutshell, that’s “The Craddick Effect”. There may be a much fancier name for it, but that’s how I’ve always approached mentorship where I have to overcome fear and doubt in a person. When someone is afraid, it’s easy to retreat. As a mentor, we have to recognize when that fear is present, and somehow work with it.

You may not do something as extreme as what Sean did with me, but you may well find other, more subtle ways to accomplish the same thing. Imagine having to take on a new testing tool where there’s a lot that needs to be learned up front. We could just let the person go off on their own and let them poke around. We can take their word that they are getting and understanding what they need to, or we can prod and test them to see what’s really happening. If we see that they don’t understand enough, or maybe understand very little, don’t assume lack of aptitude or drive; look for fear. If you can spot fear, try to coax them in a way that lets them put their energy somewhere else for a time, so that they can get to a point where they can shout down the fear. It may be having them do a variety of simpler tasks, still fruitful, but somewhat repetitive and tedious. After a while, they will get a bit irritated, and then give them a slight push to move farther forward. Repeat as necessary. Over time, you may well see that they have slid past the pain and frustration point, and they just “get” what they are working with. It just clicks.

As a mentor, look to help foster that interaction. As a person receiving mentoring, know that this may very well be exactly what your mentor is trying to do. Allow yourself to go with it. In the end, both of you may learn a lot more about yourselves and your potential than you thought possible. It’s pretty cool when that happens ;).

Django 1.7 and… ME (yet another Live Blog)

Thursday, May 01, 2014 05:32 AM

So this might seem an odd spot, but this has become something of a mission on my part for 2014. I've decided that I want to try to become multi-lingual when it comes to web frameworks. We have all sorts of interesting frameworks to play with to make web apps and web sites, and Django is the Python-centric web framework, and therefore one I want to know more about and get more experience with. It seems a great reason to come into San Francisco and see what the Pythonistas are doing and how they are doing it with Django.

Yes, this is going to be live blogged, and as usual, it may be messy at first. Forgive the stream of consciousness, I promise I’ll clean it up later :).


A bit about our topic this evening (courtesy of Meetup):

Django 1.7 is one of the biggest releases in recent years for Django; several major new features, innumerable smaller improvements, and some big changes to parts of Django that have lain unchanged since before version 1.0. Come and learn about new app loading, system checks, customized select_related, custom lookups, and, of course, migrations. We'll cover both the advantages these new features bring as well as the issues you might have when upgrading from 1.6 or below.


A bit about our presenter this evening (also from Meetup):

Andrew Godwin is a Django core developer, the author of South and the new django.db.migrations framework, and currently works for Eventbrite as a Senior Software Engineer, working on system architecture. He's been using Django since 2007, and has worked on far too many Django websites at this point. In his spare time, he also enjoys flying planes, archery, and cheese.


Lightning Talks 

#1 Randall Degges - Django & Bcrypt

Randall kicked things off right away with a talk about how Django does password hashing and securing of passwords, with the estimated cost of what it takes to crack a password (hint, it's not that hard). If you want to be more security conscious, Randall recommends that we consider using BCrypt. It's been around a while, and it allows for transparent password upgrading (users' hashes get updated the first time they log in; no muss, no fuss :) ). Sounds kinda cool, to tell the truth, and I'm looking forward to playing with it for a bit.
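
For reference, switching Django over to bcrypt is mostly a settings change. This is a minimal sketch based on my reading of the Django docs (it assumes you have installed a bcrypt library via pip, e.g. "pip install bcrypt"), so double check it against your own Django version:

# settings.py: list the bcrypt hashers first so they become the default,
# while keeping the older PBKDF2 hashers so existing passwords still verify
PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
    'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
)

Because the older hashers stay in the list, existing passwords keep working, and Django re-hashes them with bcrypt the next time each user logs in, which is the transparent upgrade Randall described.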

#2 Venkata - Django Rest Framework w/ in-line resource expanding

The second talk discussed a bit on the Django REST framework. Some of the cool methods to handle drop down, pop open and other events were very quickly covered, and some quick details as to what each item can do. A quick discussion with fast flashes of code. I caught some of the details, but I'll be the first to admit, a lot of this flew right past me (gives me a better idea as to areas I need to get a little more familiar with). Granted, this is a lightning talk, so that should be expected, but hey, I pride myself on being able to keep up ;).

#3 Django Meetup Recap

The third lightning talk basically covered a recap of what the Django group has been covering and some quick recaps of what has been discussed in the previous meetups (Ansible, Office Entrance Theme Music, Integrating Django & NoSQL, etc.). Takeaway, if we want resources after the meetups are over, we have a place to go (and I thank you for that :) ).

---

Andrew Godwin's Talk

This seems like a great time to say that I'm relatively new to Django, so a lot of what's being discussed is kind of exciting, because it makes me feel like I'll be able to get into what's being offered without having to worry about unlearning a lot of things to feel comfortable with the new details. Part of the new code is an update to South (which, as is mentioned above, is something Andrew is intimately involved in).

Andrew covered details as to how apps are loaded, and how the new system checks can warn programmers about what may happen with an upgrade. Having suffered through a few updates where things worked, then didn't, with no clue as to why, this is very appealing.

Another new aspect is an adjustable and tunable prefetch option, so that instead of all or nothing, there's a spectrum of choices that can be looked up and used based on context.
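
If I followed the slide correctly, this is the new Prefetch object, which lets you attach your own queryset to a prefetch rather than pulling every related row. A rough sketch, with hypothetical Musician and Concert models just to show the shape of the call:

from django.db.models import Prefetch

# only prefetch the upcoming concerts, not every related row
musicians = Musician.objects.prefetch_related(
    Prefetch('concerts', queryset=Concert.objects.filter(upcoming=True))
)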

A rather ominous slide flashed across saying "Important Upgrade Notes", and a new detail is that all field classes need to have a deconstruct() method; it's now required for all fields. Additionally, initial_data is dead. It's important to have modules use data migrations instead. In short, don't automatically assume that older modules that use initial_data will work cleanly. I will take Andrew's word on that ;).
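
For anyone with custom fields, deconstruct() is what tells the new migrations framework how to recreate the field later. A minimal sketch of the pattern as I understand it from the Django docs (the HandField name is just an example):

from django.db import models

class HandField(models.CharField):
    def __init__(self, *args, **kwargs):
        kwargs['max_length'] = 104
        super(HandField, self).__init__(*args, **kwargs)

    def deconstruct(self):
        # return the name, import path, args and kwargs that migrations
        # need in order to rebuild this field from scratch
        name, path, args, kwargs = super(HandField, self).deconstruct()
        del kwargs['max_length']
        return name, path, args, kwargs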

So what's coming up in Django 1.8? Definitely improvements in interactions with PostgreSQL, as well as migrations for contrib apps. But that's getting a bit ahead of the race at the moment. Expect Django 1.7 to hit the scene around May 15th, give or take a few days. Again, I will take Andrew's word on that ;).

---

There's no question, I feel a little bit like a fish out of water, and frankly, that's great! This reminds me well that there is so much I need to learn, especially if my goal of becoming a technical tester is going to advance farther than just wishful thinking or following pre-written recipes. It's not enough to just "know a framework" or "know my framework".

As was aptly demonstrated to me a year and a half ago, I spent a lot of time in the Rails stack, and then I went to work with a company that didn't use Rails at all. Did that mean all that time and learning was wasted? Of course not. It gave me a way to look at how frameworks are constructed and how they interact. I'm thinking of it like learning Spanish when I was younger. Don't get me wrong, I'm no great shakes when it comes to Spanish, but I understand a fair amount, and can follow along in many conversations. What's really cool is that it gives me the added benefit of being able to follow a little bit of both French and Italian as well, since they are closely related. That's how I feel about learning a variety of web frameworks. The more of them I learn, the easier it will be to move between them, and to understand the challenges that they all face.

In any event, this was an interesting and whirlwind tour of some new stuff happening in Django, and I plan to come back and learn more, with an eye to understanding more next time than I did today. Frankly, that shouldn't be too hard to accomplish ;).


Thanks for hanging out with me. Have a good rest of the evening, wherever you are.


TECHNICAL TESTER FRIDAY - Getting UnGraphical with lynx and grep

Thursday, May 01, 2014 18:26 PM

Use lynx --dump to retrieve the contents of your Web site. Just hardcode all the page URLs. Redirect all the content to flat files, then use grep to look for patterns in your content. Start by looking for mistakes you commonly make. Save your greps in a file.

Wow, now this brings back some memories :). 

I first loaded up a lynx browser back in 1993, and this was my introduction to what the non-graphical World Wide Web looked like. Truth be told, I fairly quickly abandoned lynx as an everyday platform when both NCSA Mosaic and the first version of Netscape came out, but there is indeed a value to using lynx. It’s a nice tool to add to accessibility tests, so that you can see what your super pretty graphical page looks like to those who don’t have that option. For those curious... it looks like this (well, mine looks like this):

Yep, that's what the Web looked like in 1993. Cool, huh?


lynx --dump does exactly what it sounds like.

Here’s an example from my own little site project:

lynx --dump http://127.0.0.1/web/orchestra/index.php

This prints the following to the screen:

Adding a redirect operator ('>') puts it in a file for us. Repeat that for each page, and you can pull down details on every page in your site.

OK, cool, that’s interesting, but what does that do for us? It allows us to go through and pull out data that we’d want to analyze. Granted, the site as it exists right now isn’t all that spectacular, but it does give us a basis for how we can construct some simple greps. 

For those not familiar with this tool, "grep" is an old UNIX standby. The term comes from the syntax of the “ed” editor, and the command that was used was g/re/p (or “globally search for a regular expression and print it to stdout”). Those of you with Windows machines can download Grep for Windows at http://gnuwin32.sourceforge.net/packages/grep.htm, or you can find a variety of fun and interesting versions. For me, since my system is in a virtual environment, I'm just going to save the files to my shared folder space and play with grep on my Mac :).

The main benefit of using grep is to look for things that show up in your pages that you may find interesting, or things that might be errors. Searching for basic strings in the dumped files can show a lot of interesting details in the content of the pages. As a quick set of examples of what we can do using grep, I recommend poking around on this page for 15 command examples showing ways you can use grep to get interesting data.

Once you find a few greps that you find useful, it's a good idea to save them in a file so that you can run them over and over again as you add content to the site and have more information to mine.
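Since I'm already hopping over to my Mac to do the searching anyway, here's a small Python sketch of the same dump-and-search workflow, kept in one place. The page URLs and the patterns are just placeholders from my little orchestra site project; swap in your own pages and the mistakes you commonly make.

import re
import subprocess

pages = [
    "http://127.0.0.1/web/orchestra/index.php",
    "http://127.0.0.1/web/orchestra/about.php",  # hypothetical second page
]

# Patterns for mistakes I commonly make; grow this list over time.
patterns = [r"teh", r"recieve", r"TODO", r"lorem ipsum"]

for url in pages:
    # lynx --dump renders the page as plain text, links and all.
    text = subprocess.check_output(["lynx", "--dump", url]).decode("utf-8")
    fname = url.rstrip("/").split("/")[-1] + ".txt"
    with open(fname, "w") as f:
        f.write(text)  # keep the flat file around for later greps
    for pattern in patterns:
        for line in text.splitlines():
            if re.search(pattern, line, re.IGNORECASE):
                print("%s: [%s] %s" % (fname, pattern, line.strip()))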

This is meant to be a really basic first step in getting into the details of what your pages show and help get you away from using the browser as a main interaction source. Yes, there's a lot that can be done just with the files and the content that is in them. How you choose to look at them and what interesting details they show will be my focus for next week.

A Leaner and Cleaner Codecademy

Thursday, April 24, 2014 14:37 PM

A couple of years ago, I posted that I was excited to see the initiative that would become Codecademy get off the ground. At the time, it was limited in what it offered. It featured a course on JavaScript and some other small project ideas, and after a little poking around, I went on to other things. A year later I came back and saw that there was some new material, this time on Ruby and Python. A little more poking, and then I went on to do other things.

I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place that novice testers could go to learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.

For starters, Codecademy has refreshed everything on the site. They talk about it at length in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.

They've opened up a number of courses and updated several of their older offerings. The original JavaScript track has been deprecated (but it is still there if you want to work through it), and a new JavaScript track has been put in its place. The site has been augmented with a jQuery track, a freshened HTML/CSS track, and updates to the Ruby, Python, and PHP tracks as well.

In addition, there are several small project areas where users can practice and make "Codebits" to show what they have learned. Some of the Codebits are already assembled (examples include animating your name, making a solar system model, and a simple web site template), and there are open-format Codebits that users can share. Additionally, there are a variety of projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.

Another cool section is the API track. Currently, there are 29 APIs listed that users can experiment with, making their applications interact with the various APIs. The offerings range from YouTube to Twitter to Evernote, and the track also shows the languages best suited to using each particular API (JavaScript, Ruby and Python).

So how's the actual learning process? It's pretty solid, to tell the truth. Each track has a variety of initiatives, and a range of lessons and small projects interspersed throughout to keep the participant's attention. The editor can be finicky at times, but usually a page refresh will solve most of the odd problems. One of the nice attributes of having an account and working through the exercises is that your progress is saved. All of the steps from the first lesson to the last are recorded as part of your progress. That means you can go back and see your "cleared" examples and exercises.

Additionally, there are Q&A Forums associated with each project, and even when I've been stuck in some places, I've been able to find answers in the forums thus far. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource. If I have to be nit-picky, it's the fact that many of the Q&A Forum answers are often jumbled together. Though the interface allows you to filter on the particular module and section by name, number and description, it would be really helpful to have a header for each question posted that says which module the question refers to. Many do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).

Overall, I think Codecademy has come a long way from when I first took a look at it about two and a half years ago. They have put a lot of effort into the site and their updates, and it shows. If you are already playing around at Codecademy, you already know everything I've written here. If you haven't been there in a while, I recommend a return trip. It's really become a nice learning hub. If you have never been there, and are someone who wants to learn how to program front end and back end web apps, etc., and you like the idea of FREE, then seriously, go check the site out and get into a track that interests you. I'd suggest HTML/CSS, JavaScript, and jQuery first. From there, if you'd like to focus just on making web sites with little in the way of entry criteria, then check out the PHP track; otherwise, branch out into the Ruby or Python tracks, and work through the site at your own pace. It's not going to be the be-all and end-all destination to learn about programming, but seriously, you can make a pretty big dent with what you can learn here.

Selenium SF Live: An Evening With Dave Haeffner

Wednesday, April 23, 2014 19:26 PM

It’s been about three years since I first met Dave. He was, at the time I met him, working with the Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio with for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn’t as usable as I had hoped for a releasable podcast, but I remembered the conversation well, specifically Dave’s goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of “The Selenium Guidebook”, and tonight a couple of different Meetup groups (the San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I’ve been a subscriber to Dave’s Elemental Selenium newsletter for the past couple of years, and I’ve enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea and approach they might not have considered before. I’m looking forward to seeing where Dave's head is at now on these topics.

Here's some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time to clean up and organize the thoughts after a little time and space. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to try to make tests that are business valuable, and then do what you can to package those tests in an automated framework. This then frees the tester to look for more business-valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items that it needs and confirm their presence, or determine what to do next based on their existence/non-existence. Class and ID are the most helpful locators long term. CSS and XPath may be needed from time to time, but if they're more "rule" than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.

Page Object models allow the user to tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction, so that changes to the Selenium driver minimize the need to change multiple page object files.
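Dave's own examples lean toward Ruby; here's a rough Python sketch of the same base-page idea as I understand it. The page, locators, and URL are all made up for illustration.

from selenium import webdriver
from selenium.webdriver.common.by import By


class BasePage(object):
    """Wrap the raw Selenium calls so a driver API change touches one file."""

    def __init__(self, driver):
        self.driver = driver

    def visit(self, url):
        self.driver.get(url)

    def find(self, locator):
        return self.driver.find_element(*locator)

    def type_into(self, locator, text):
        element = self.find(locator)
        element.clear()
        element.send_keys(text)


class LoginPage(BasePage):
    USERNAME = (By.ID, "username")  # hypothetical locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def log_in(self, username, password):
        self.type_into(self.USERNAME, username)
        self.type_into(self.PASSWORD, password)
        self.find(self.SUBMIT).click()


# Usage: the test only talks to the page object, never to raw locators.
driver = webdriver.Firefox()
login = LoginPage(driver)
login.visit("http://127.0.0.1/web/orchestra/login.php")  # hypothetical URL
login.log_in("michael", "super-secret")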

Explicit waits help with timing problems like page loading or network latency. Defining a "wait for" option is more helpful, as well as more efficient. Instead of hard-coding a 10 second delay, the "wait for" sets a maximum time limit, but moves on as soon as the actual item needed appears.
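A minimal Python sketch of that "wait for" idea using an explicit wait; the element id is a made-up example.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_flash_message(driver, timeout=10):
    # Cap the wait at 10 seconds, but return the moment the element shows up.
    # "flash" is a hypothetical id for a success/notice banner.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.ID, "flash"))
    )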

If you want to build your own framework, remember the following to help make it less brittle and more robust (a bare-bones sketch follows the list):
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files
  • Tagging (test packs, subsets of tests: wip, critical, component name, slow tests, story groupings)
  • A reporting mechanism (or borrow one that works for you; have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • Wrap it all up so that it can be plugged into a CI server.
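Here's what a bare-bones version of the central setup/teardown and config file pieces could look like in Python with pytest; the file names, config values, and the "critical" tag are assumptions on my part, not Dave's prescription.

# conftest.py
import json
import pytest
from selenium import webdriver

with open("config.json") as f:       # a well-defined, central config file (hypothetical)
    CONFIG = json.load(f)            # e.g. {"base_url": "http://127.0.0.1", "browser": "firefox"}

@pytest.fixture
def driver():
    # Central setup: every test gets a fresh browser pointed at the right environment.
    drv = webdriver.Firefox() if CONFIG["browser"] == "firefox" else webdriver.Chrome()
    drv.get(CONFIG["base_url"])
    yield drv
    # Central teardown: no individual test has to remember to clean up.
    drv.quit()

# test_home.py
@pytest.mark.critical                # tagging lets you run subsets like "critical" or "wip"
def test_home_page_title(driver):
    assert "Orchestra" in driver.title   # hypothetical assertion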

Scaling our efforts should be a long term goal, and there are a variety of ways that we can do that. Cloud execution has become a very popular method. It's great for parallelization and for running large test runs in a short period of time, if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies, and find errors early and often :).
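A toy sketch of what "enforce random execution" can mean in practice: shuffle the order tests run in on every pass so hidden dependencies surface. (In the real world a plugin such as pytest-randomly can handle this for you; this unittest version just shows the idea.)

import random
import unittest

class OrderShufflingLoader(unittest.TestLoader):
    def getTestCaseNames(self, testCaseClass):
        # unittest normally returns test names in sorted order; shuffle them
        # so any test that secretly depends on another fails loudly, and soon.
        names = list(super(OrderShufflingLoader, self).getTestCaseNames(testCaseClass))
        random.shuffle(names)
        return names

if __name__ == "__main__":
    unittest.main(testLoader=OrderShufflingLoader())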

Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix there and test again before allowing to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 

TECHNICAL TESTER, err, Saturday: Pain Fog and Objective Completion

Saturday, April 19, 2014 19:45 PM

Yes, I know, I'm a day late with this. Actually, I'm closer to a week and a day late with this, but reality decided to remind me that I'm not 21 any longer.

Last week, April 9th, as I was getting off the train, I stood up and reached over to grab my bag. The "twinge" I felt above my hip on my right side was a tell-tale reminder. I have sciatica, and if I feel that twinge, I am not going to be in for a fun week or two. Sure enough, my premonition became reality. Within 48 hours, I was flat on my back, with little ability to move, and the very act of doing anything (including sleeping) became a monumental chore. To that end, that meant that my progress on anything that was not "mission critical" pretty much stopped. There was no update last Friday because there was nothing to report. I spent most of the last week with limited movement, a back brace, copious amounts of Ibuprofen, and typing only when I had to. I'm happy to report I'm getting much better, but sitting for long stretches to code or write was still painful, though less so each day.

Since I'm greatly desirous to move forward, I decided to make a push in the later part of this week to clear the Codecademy site's three courses related to web development: PHP, HTML/CSS and JavaScript. Late last night, I finished up the course for JavaScript. Yay me :)!!!

As an overall course and level of coverage, I have to give Codecademy credit; they have put together a platform that is actually pretty good for a self-directed learner. It's not perfect, by any means, and the editor can be finicky at times, but it's flexible enough to allow for a lot of answers that would qualify as correct, so you don't get frustrated if you don't pick their exact way of doing something.

Many of the hints offered for each of the lessons are also helpful and don't spoon feed too much to the participant, making you actually stretch and think. While I've known about and tinkered with JavaScript off and on for years, I will honestly say I've learned quite a bit from these courses. I would recommend these three courses (HTML/CSS, PHP and JavaScript) as a very good no-cost first stop to learn about these topics. Each does a good job of explaining the topics and concepts individually.

If there is any criticism, it's that there is little in the course examples that integrates the ideas (at least so far). There is a course on jQuery, and I anticipate that it will probably have more to do with actual web component interaction and integration, so that's my next goal to complete. After that, I plan to go back and complete the Ruby and Python modules, and explore their API module as well.

For now, consider this a modest victory dance, or in this case, a slow moving fist pump. I may need a week or two before I can actually dance ;). Also, next week I'll have some meat to add to this, since I'm going to start covering some command-line level tools to play with and interact with, and those are a lot more fun to write about!

Become a "Coyote" at CAST 2014

Wednesday, April 09, 2014 16:10 PM

Now that the full program has been announced, and my talk is posted and described, I can now say, with a certainty, what I'll be talking about at CAST 2014 this August in New York City.

Actually, I need to qualify that. It's not what I'm going to talk about, it's what "we" are going to talk about.

Harrison Lovell is an up and coming tester with copious amounts of wit, humor and energy. Seriously, he gives me a run for my money in the energy department. I met Harrison through the PerScholas mentorship program, and we have been communicating and working together regularly on a number of initiatives since we first met in September of 2013. The results of those interactions, experiments, and a variety of hits (and yes, some misses here and there), are the core of the talk we will be doing together.

Here's the basics from the sched.org site:

"Coyote Teaching: A new take on the art of mentorship"

Too often, new software testers are dropped into the testing world with little idea as to what to do, how to do it, and where to get help if they need it. Mentors are valuable, but too often, mentors try to shoe-horn these new testers into their way of seeing the world. Often, the result is frustration on both sides.

“Coyote Teaching” emphasizes answering questions with questions, using the environment as examples, and allowing those being mentored the chance to create their own unique learning experience. Coyote Teaching lets new testers learn about the product, testing, the world in which their product works, and the contexts in which those efforts matter.

We will demonstrate the Coyote Teaching approach. Through examples from our own mentoring relationship, we show ways in which both mentors (and those being mentored) can benefit from this arrangement.

“When raised by a coyote, one becomes a coyote”.


Speakers

Michael Larsen
Senior Quality Assurance Engineer, Socialtext
Michael Larsen is a Senior Tester located in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches, virtual machines, capacitance touch devices, video games and distributed database applications that service the legal and entertainment industries.

Harrison C. Lovell
Associate Engineer, QA, Virtusa
Harrison C. Lovell is an Associate Engineer at Virtusa’s Albany office. He is a proud alumnus from Per Scholas’ ‘IT-Ready Training’ and STeP (Software Testing education Program) courses. For the past year, he has thrown himself into various environments dealing with testing, networking and business practices with a passion for obtaining information and experience.


Yes, I think this is going to be an amazing talk. Of course, I would say that, because I'm part of the duo giving it, but really, I think we have something unique and interesting to share, and perhaps a few interesting tricks that might help you if you are looking to be a mentor to others, or if you are one who wants to be mentored. One thing I can guarantee, considering the combinations of personalities that Harrison and I will bring to the talk... you will not be bored ;)

Technical Tester Friday: Ladies and Gentleman, JavaScript has Entered the Building

Saturday, April 05, 2014 01:45 AM

There's nothing like the mild terror one feels when they get back from several days away and then think "aw man, I have to post something today!" Too many of my posts have stretched over two weeks, and while I had perfectly valid reasons for that, I said that I'd post every Friday, whether I had a lot to talk about or just a little. I've decided the volume of the delivery is less important than the regularity and reliability of having an entry every Friday, and that's what's driving me today.

With the ALM Forum and all that surrounded it, as well as getting ready for and presenting my talk, I really didn't have a whole lot of time to go through and push my way into learning more about JavaScript and implementing as much of it as I wanted to. I started the Codecademy JavaScript module, and worked through several of the initial entries. When I realized that I wasn't going to be able to have something to show by the end of this week... okay, I cheated. Well, I didn't cheat. I just went out on the web and looked at some sample JavaScript projects, and tried to see if I could make sense of what they were doing.

The good news is I found a simple JavaScript project that I could apply to the site's navigation bar. If you remember from last week's example, the navigation bar was really just a couple of links horizontally displayed, with brackets on the ends to simulate "buttons". This time, we made real buttons with a little interactivity to them. They give a visual indication of which page has been selected (the button is a little larger than the others):






and here's the CSS and JavaScript code that makes the site look better than 1995 ;).




I should stop here and say, again, there are a lot of neat little distractions you can get into with JavaScript. There's potentially a slight barrier to entry for a brand-new web developer. HTML is pretty easy. CSS has some rules, but once you learn them, they don't feel that different compared to native HTML. JavaScript is very similar to PHP, in that you can learn the basics pretty quickly. How to actually use the basics effectively, and in a meaningful way on your pages... that's a bit of an art form, and it's one of the things you're going to have to practice doing. Start small and work out from there.

So there you have it. Again, because I was out of town and completely consumed by the ALM Forum conference, I did not get a chance to do as much JavaScript hacking as I wanted to, but that gives me a chance to go a little deeper next week. Perhaps I can pop a little bit of eye candy into the site, so we make it a little more interesting. As always, crawl before you walk, walk before you run, and maybe run before you get on a bicycle or drive a car. Little steps get you in, and I think the project will ultimately become a little more interesting as the defined repeatable things get more and more canned, so I can focus on other things :).

Testing the Limits at #ALMForum: Day Three

Friday, April 04, 2014 09:28 AM

Wow, what a week this has been. We're now on day three, the last day, and I'm up in an hour! I'm excited, a little frazzled, but I think we're going to do well. I'm also excited that the four speakers in the breakout today are all good friends; Curtis Stuehrenberg, Seth Eliot and Mark Tomlinson are gonna help me close out this conference, and we look forward to chatting with as many people as possible who want to look at ways to change the face and state of software testing. If you are here at ALM Forum, come join us. If you are not able to, please read on here and take in as much as you can from my notes and observations.

-----

Transforming Software Development in a World of Services with Sam Guckenheimer is the first session, and we are starting out with a thought experiment around Airbnb (the online service to rent rooms, houses, etc. in different cities). A boat on Puget Sound is available, so a company can host all of their team members on the boat. What will the experience be? Will it be a fun stay? Will it be too cramped? We don't know, but one thing's for sure: it will be open, it will be public, and good or bad, if people want to talk about it, they will.

This makes for an interesting comparison to Agile development, and the way that agile has shaken out. What had been intended as a relatively private, internal housekeeping mode has become a much more public viewing. We are social, we are open, we use systems that are often out of our control in the 100% sense of the word. A lot of our practices and actions are not quiet and hidden; they are visible to all who would care to see them. It's a little daunting, but it's also tremendously liberating.

This talk is looking at a Microsoft ideal of "cloud cadence". Customers want regular improvements, we want to maximize the value we provide to our customers, and we know that their feedback is not just for developers; it's seen by everyone. Get it right, and we have app store five star reviews. Get it wrong, and we can have considerably lower reviews (and don't for a second think those reviews don't matter; they can be the difference between adoption and being totally forsaken).

The DevOps life cycle comes together with three aspects: we have development, we have production, and in between we have the collaboration piece. What's the most important element there? Well, without good development, we have a product that is sub-par. With bad deployment, we might have a great product, but it won't really work the way we intend it to. The middle piece is the critical aspect, and that collaboration element is really difficult to pin down. It's not a simple prescription or a set checklist; each organization and project will be different, and many times, the underpinnings will change (from our own servers to the cloud, from a dedicated and closed application to a socially aware application). Sometimes the changes are made deliberately, sometimes a little more forcefully. Either way, none of it holds together without a sense of shared purpose and collaboration between the development and production groups, including the tooling necessary to accomplish the goals.

The ability to do all of these things in the Visual Studio team is the core of Sam's talk, along with the interactions with their clients, and the variety of changes that occur drive many of their decisions. They learn from their customers and change direction. They focus on a human-to-human feedback model (which may sound a little unusual for a giant company like Microsoft, but Sam makes a convincing case :) ).


-----

So this is my talk. No, I can’t talk about my talk while I’m giving it, so this is a little canned ;). My topic is "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed”. If I were to be a little more literal with my title, I’d call it “What you want to have the new testers that you hire know and want to be so that they can be genuinely effective for you and your team… oh, and they may not be the obvious areas you think they need to be".

Software development, and software testing, is undergoing a radical change. We’ve embraced the idea of changes in development and delivery, but we tend to still look at old school “best practices” in software testing. We’re not still testing the software the previous generation wrote. Development has changed, testing is changing, and it’s still as relevant as it was before, but we need to approach it differently than we have.

I’m involved in a variety of initiatives that are specifically geared towards teaching software testing to a new generation of testers (and hey, current testers may find the ideas useful, too).

Programs like SummerQAmp, PerScholas, Weekend Testing, the Miagi-do School of Software Testing, and the BBST series of classes are all designed to help software testers not just develop ideas, but real world skills that can help them do their jobs effectively. The community that surrounds testing (on Twitter, G+, and specialty forums) is doing amazing work to move testing forward.


So what’s wrong with the old model? We still hear about testing teams, even in so-called Agile organizations, that are still doing heavy-process, heavy-scripting series of tests. It’s like the development team is Agile, but the test team is expected to still be a waterfall team. Automation makes a lot of promises, and don’t get me wrong, I am pro automation for many things. I use automation. I write automation. I prefer the term Computer Aided Testing, but Automation will suffice. It’s a tool, but it’s not the only tool, and it has been oversold on what it can accomplish. It’s great for repetitive tasks. It’s great for configuration and iteration stepping. It’s lousy at making informed decisions. Though it’s not been a problem I’ve personally dealt with or had to experience, I know that “certification” has been sold as a way to “pre-qualify” testers. As a practical outcome, I think we have failed here, because most of the certifications offered are heavy on passing a test, and light on demonstration of real world skills and the effectiveness thereof.


I believe the New Testers need to focus on a new toolkit and a new attitude. It’s not really new; in fact, in many ways, it’s ancient, but it’s been woefully underutilized. We need testers who are sapient (stealing that from James Bach), basically meaning we need testers who are actively and critically thinking about what they are doing and observing. Testers need to do more than find bugs; they need to sell those bugs. Really, what’s more important: lots of bugs, or the championing of important bugs that actually get fixed?

Testers need to return to and have a solid understanding of both the Scientific and Socratic methods. I believe that New Testers will be less button pushers and more scientists, philosophers and skeptics. These are not just testing traits; these need to be embraced by everyone in development. New Testers don’t want to prove the software works. They want to find how it is broken. They want badly to lose the stigma of being the bug shield. They are much better utilized as “beat reporters” sharing a clear story of your product. A thought experiment from Elisabeth Hendrickson that I personally love is “what is the most terrifying headline about your company you could imagine seeing in the paper? Wouldn’t you want your testers to not only find out that terrifying headline, but inform you so that you could prevent it?"

OK, that’s great. So where can I find these New Testers? You can find them in Computer Science departments at universities. Yes, I’m daring to say it. Most testers have historically fallen into the job, but I am seeing people who are now self selecting to be software testers, and it’s *WONDERFUL*. They are not also-ran programmers, or people who couldn’t hack programming. Some are great programmers, but they have decided that there are other challenges they’d like to deal with rather than stringing code together. And that’s *ALSO* great. The point is, they are not considering testing as a consolation prize; they are selecting testing on its own merits, and we should recruit them with the same philosophy. Where else can we find great up and coming testers (and to be fair, current great testers)? Check out people with degrees in Humanities, or Journalism, or Psychology. Look for actual scientists who might be looking for a change of pace. Do you have really good Customer Service Representatives? It’s a good bet you have some fantastic testers in that group.

Programs like SummerQAmp, PerScholas, Weekend Testing, BBST Courses, a thriving ecosystem of Bloggers, Newsgroups and Online Magazines and Twitter (yes, Twitter :) ) are at the vanguard of bringing this new paradigm of testing to the fore. Each of these, in their sphere, is looking to help bring real, tangible testing skills to their participants, and give them a chance to show what they can do and improve their craft. Weekend Testing is not just a movement, it’s also a portable model that anyone can use. All you need is Skype, a topic of discussion, a product or project, a mission and some charters, and two hours to interact, instruct and facilitate. If you want to see some amazing testing insights, I encourage you to review just about any Weekend Testing transcript. 

Testers are not a single group; they have many interests and they have their own niches. Some testers will be good at some, better at others, probably not stellar in all, but with a broad team that recognizes this, you may be surprised at the powerhouse you can develop when you get the Explorers, the Performance Tweakers, the Toolsmiths (automation, CI, deployment tools, etc.), the Evil Masterminds (Security), the Humanists (human factors, usability), and the Storytellers together. Just don’t make the mistake of thinking you can get this all in one person. You may get attributes of all in one tester, but none of us can be experts at all of these, or should I say, very few of us can be (I’m certainly not one of them).

In all, the new testers will be focused on:

  • less scripting, more active thinking
  • less checking, more real testing
  • less blind faith, more scientific skepticism
  • creative, inventive, intuitive, mindful

In short, the future is now, and I can introduce you to hundreds of them ;). Better yet, why not come join us and see for yourself?


  • SummerQAmp: hire an intern
  • PerScholas: have a chat with recent STeP graduates and their mentors
  • Weekend Testing: Come join us for a session or two and see the magic happen
  • Miagi-do: Do a web search for the term “Miagi-do School of Software Testing”. Or better yet, just ask me ;). 

-----

Curtis Stuehrenberg is talking about how to "ACCelerate Your Agile Test Planning". He chucked the PowerPoint entirely and gave a crash course in Agile testing on a live product... specifically, his product (well, Climate Corp's mobile app, to be specific). His point was to ask: "What if we have to test a product in two weeks? How about one week? How about three days? What are you going to do?"

Rather than talk it, we all participated in an active testing session, downloading the app to our mobile devices (iPhone and Android only, sorry Windows Phone users :( ). By walking through the steps and the test areas, and using an idea from James Whittaker and Google called the ACC model (Attribute, Component, Capability), we all, in real time, put together sections of risk and areas we would want to make sure we tested. In many ways, ACC is a variation on a theme of Session Based Test Management (SBTM). It informs our tests, we act on the guidance, and we pivot and adapt based on what we learn, and we do it quickly.
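For those who haven't seen an ACC grid before, here's a toy sketch of the shape of the idea; the attributes, components and capabilities below are my own inventions, not anything from Curtis's session.

# Attributes are the adjectives (promises the product makes), components are
# the nouns (the big pieces), and capabilities live at their intersections.
attributes = ["Fast", "Accurate", "Secure"]
components = ["Login", "Field Map", "Weather Forecast"]

acc = {
    ("Login", "Secure"): ["Credentials are never sent in plain text"],
    ("Field Map", "Fast"): ["Map tiles render within a few seconds on 3G"],
    ("Weather Forecast", "Accurate"): ["Forecast matches the regional service"],
}

# Risk ranking then becomes a question of which cells matter most and are
# least covered by testing so far.
for (component, attribute), capabilities in acc.items():
    print("%s x %s: %s" % (component, attribute, "; ".join(capabilities)))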

Much of the interaction was just things we did in real time, and for my money, this was a brilliant way to emphasize this approach. Instead of just talking about it, we all did it. Even if the idea of a formal test plan is not something you have to deal with, give this approach a try. I know I'm going to play with this when I get back home :).

-----

Now it's time for Seth Eliot and "Your Path to Data Driven Quality", a roadmap toward how to use the data that you are gathering to help guide you to your ultimate destination. Seth wants to make the point that testing is measurement, and you can't measure if you don't have data (well, you can, but it won't really be worth much). Seth asks if we are HiPPO driven (meaning, is our strategy defined by the "Highest Paid Person's Opinion") or are we making decisions based on hard data. Engineering data can help a little bit (test results, bug counts, pass/fail rates). They can give us a picture, but maybe not a complete one (in fact, not even close to a complete one). There's a lot of stuff we are leaving on the table. Seth says that leveraging production data (or "near production data") gives us a richer and more dynamic data set. Testers try to be creative, but we can't come close to the wacko randomness of the real world users that interact with our product.

First step: Determine your questions. Use Goal Question Metrics. Start at the beginning and see what you ultimately want to do. Don't just get data and look for answers. Your data will taint the questions you ask if you don't ask the questions first. You may develop a confirmation bias if you look at data that may seem to point to a question you haven't asked. Instead, the data may give you a correlation to something, but it may not actually tell you anything important. Starting with the question helps to de-bias your expectations, and then it gives you guidance as to what the data actually tells you.

Then: Design for production-data quality. There are two types of data we can access: active and passive. Active data could be test cases or synthetic data from a simulated user. Passive data is real world data and real user interactions. Synthetic data is safer, but it's by definition incomplete. Passive data is more complete, but there's a danger to using it (compromising identification data, etc.). Staging the data acquisition lets us start with synthetic data (reminds me of my "Attack on Titan" account group that I have lovingly put together when I test Socialtext... yes, I have one. Don't judge me ;) ), then move to copying my actual account and sharing on our production site (much richer data, but it needs to be scrubbed of anything that could compromise individual privacy... which in turn gets us back to synthetic data of sorts, but a richer set). Bulk up and repeat. Over time, we can go from having a small set of sample data to a much larger and beefier data set, with lots more interesting data points.

Then: Select data sources. There are a number of ways to gather and accumulate data. We can export from user accounts, or we can actively aggregate user data and collect those details (reminds me of the days of NetFlow FlowCollection at Cisco). We need to be clear as to what we are gathering and the data handling privacy that goes with it. Anonymous data is typically safe; sensitive personally identifiable info requires protocols to gather, most likely scrub, or not touch with a ten foot pole. Will we be using infrastructure data, app data, usage, account details, etc.? Each area has its unique challenges. Plan accordingly.

Then: Use the right data tools. What are you going to use to store this data? Databases are of course common, but for big data apps, we need something a little more robust (Hadoop is hip in this area). Where do you store a Hadoop instance? Split it up into smaller chunks (note: splitting it makes it vulnerable, so we need to replicate it. Wow, big data gets bigger :) ). Using map-reduce tools, we can crunch down to a smaller data set for analysis purposes. I'm going to take Seth's word for it, as Hadoop is not one of my strong suits, but I appreciated the 60 second guided tour :). Regardless of the data collection and storage, ultimately that data needs to be viewed, monitored, aggregated and analyzed. The tools that do that are wide and varied, but the goal is to drill down to the data that matters to you and have the ability to interpret what you are seeing.

Then: Get answers to your questions. Ultimately, we hope that we are able to get answers, based on the real data we have gathered, that will help us either support or dispute our hypothesis (back to the scientific method; testing is asking questions and then, based on the answers we receive, considering and proposing more interesting questions). Does our data show us interesting points to focus our attention? Do we know a bit more about user sentiment? Have we figured out where our peak traffic times are? If we have asked these questions, gathered data that is appropriate for those questions, and focused on aggregating and analyzing the appropriate data, we should be able to say "yes, we have support for our hypothesis" or "no, this data refutes our hypothesis". Of course, that leads to even more questions, which means we go to...

Lather. Rinse. Repeat.

Hmmm, Mark Tomlinson just passed me a note with a statement that says "Computer Aided Exploratory Testing"? Hadn't considered it quite that way, but yes, this certainly fits the description. An intriguing prospect, and one I need to play with a bit more :).

-----

Lightning talks! Woo!!! We have four presenters looking to rifle through some quick talks.

Mark Prichard is discussing "Complete Continuous Integration and Testing for Mobile and Web Applications". Mark is with Cloudbees, and he's explaining how they do exactly what the title describes. Some interesting ideas surrounding how to use Jenkins and other tools to make it possible to build multiple releases and leverage a variety of common tools so as to not have to replicate everything for each environment. Leverage the cloud and Platform-as-a-Service for Continuous Delivery. Key takeaway... "ALM in the cloud will become the rule, not the exception". Quote attributed to Kurt Bittner.

--

Mike Ostenberg from SOASTA is next, and he's talking about "Performance Testing in Production, and what you'll find there". This begs the question... *WHY* do we want to do performance testing in production (isn't that what we call a "customer freak out"? Well, yeah, but that's an after effect, and we really want to not go there ;) ). Real systems, real load, real profiling. There are ways we can simulate load on a test environment, but it's not really going to match what happens in the real space. Additionally, we want to do our load testing earlier than we traditionally do it. At the end of the cycle, we're a little too far gone to actually pivot based on what we learn.

Load testing in production, Mike points out, can be done in stages and on different levels. Just as we use unit tests for components, integration tests for bigger systems, and feature/acceptance tests to tie it all together, we can deconstruct load tests to match a similar paradigm. Earlier load tests deal with errors, page loads, garbage collection, data management, etc. Regardless of the stage, there are some critical things to look at.

Bandwidth is #1: can everyone reach what they need? Load balancing, or making sure every node pulls its weight, is also a high priority. Application issues: there's no such thing as perfect code. Earlier tests can shake out the system to help show inefficient code, sync issues, etc. Database performance fits under application issues, but it's a special set of test cases. The database, as Mike points out, is the core of performance. Locking and contention, index issues, memory management, connection management, etc. all come into play. Architecture is imperative. Think of matching the right engine to the appropriate car. Connectivity comes into play as well: latency, lack of redundancy, firewall capacity, DNS, etc. Configuration means we need to look at our custom settings and actually verify they are what we mean. Shared environments... watch out for those noisy neighbors :). Random stuff comes into play when things are shared in the real world. Pay attention to what they can do for you (or to you ;) ).

I like this staggered approach, it makes the idea of "testing in production" not seem so overwhelming.

--

Now on deck is Dori Exterman, and he's talking about "Reducing the Build-Test-Deploy Cycle from Hours to Minutes at Cellebrite". Hmmm, color me mildly skeptical, but OK, tell me more :). I'm very familiar with the idea of serial build-test-deploy, and I know that that does not bode well. Multi-core systems can certainly help with this, and leveraging multi-core environments can allow us to do a much tighter build-test-deploy pipeline. Parallel processing speeds things up, but there's a system limit, and those system limits are also very costly at their higher end.

So what's the option when we max out the cores on a single system? It seems that going parallel across more servers would make sense. Rather than one machine with 32 cores, how about 8 machines with four cores each? Same number of cores, maybe similar throughput gains (and potentially better, since system resources are spread over multiple machines). This approach is referred to as a CI cluster farm. Cool, but we're still in a similar ball-park. Can we do better? Dori says yes, and his answer is to use distributed computing within your own network of machines. If I'm hearing this correctly, it's kind of like the idea of letting your machine be used for "protein folding" experiments while it sits in more idle states (anyone else remember signing up to do stuff like that? :) ). I'm not sure that's exactly what Dori means, but it seems this could be really viable, and we already have an example of that happening (i.e. "signing up for protein folding").

How wild would it be to be able to wire up your entire network, everyone's machines, so that they can help speed up the build process? It's a fascinating model. I'd be curious to see if this really comes to fruition.

--

We had another lightning talk added that came from a Birds of a Feather session about CI/CD, so this one is a bit of a surprise. The idea was to see how we could leverage pipelines (mini-builds that run in sequence and individually). Mini-builds also help us build individual components, with a goal of integrating the elements later on. Often, all we want is a yes/no to see if the change is good or not (gated check-ins).

This blends into the talk Dori just gave on distributed computing and utilizing down time to make an almost unlimitedly parallel build engine. So this is interesting, but what's management going to say about all of this? Well, what is it costing us not to do this? Are we losing time, and in effect losing money, in the process? Will this help us fix some of our technical debt? If so, it may well be worth considering. If it adds more technical debt, it's less likely that option will sell.

Another point is that good CI infrastructure will bubble up issues in design and architecture of both the process and the application. Innovation and motivation will potentially increase when changes can be made more frequently, and subsequently, more atomically.

By using information radiators, we can get a clearer sense as to who did what to cause the build to fail. Gadgets (lights, sounds, sensory input) can help make it more apparent and in real time. Not sure if this would be a major plus, but I'm not necessarily the best judge of what developers consider to be fun ;).


-----

The final test track talk, the anchor session, goes to Mark Tomlinson, as he discusses "Roles and Revelations: Embracing and Evolving our Conceptions of Testing". With a title like that, let's just say "you had me at 'hello'" ;).

Mark is a fun guy to listen to (check out his podcast "PerfBytes" to get a feel), and thus, it's fun to hear him do a more narrative talk as opposed to a techy talk. We start out with the idea of what testing is, at least how we look at it historically. We find bugs, we see that we can validate to a spec, we try to reduce costs, and we aim to mitigate risks. Overall, I think if you gave that list to any lay person and said "that's what testing is", they'd probably have little difficulty understanding it. Those definitions are valid, but they're also somewhat limiting. We've seen some interesting milestones over the past 50 years. Debugging, Demonstration, Destruction, Evaluation and Prevention can all be seen as "eras of testing". Mark points out that there are 10 different schools of testing (Domain, Stress, Specification, Risk, Random/Statistical, Function, Regression, Scenario, User, and Exploratory).

That's all cool... but what if one day everything changed? Well, one would say that in the past 14 years, or since the Agile Manifesto, the Universe did Change... to steal a little from James Burke. We are less likely today to have isolated test groups. We have a lot more alphabet soup when it comes to our titles. I've had lots of titles, lots of combinations, but ultimately all of them could be distilled to a "tester" of some flavor. Some teams have no dedicated testers, or just one dedicated tester. Test Driven Development is an unfortunate term choice, in that what is a design process often gets mistaken for "testing" (nope, it's not. It's checking for correctness, but it is not testing). Our time to be interactive and effective is happening earlier, and I love this fact.

Continuous Integration, Continuous Deployment, Continuous Delivery and even Continuous Testing have entered the vernacular. What does this mean? It's all about trying to automate as many of the steps as humanly possible. Build-Check-Deploy-Monitor-Repeat. Conceive of a time and place where we go from end to end without a person involved, just machines. Sounds great, huh? In some ways, it's awesome, but there's an unfortunate side effect, in that many processes are billed as testing that are not. Checking is what automation does. It's great for a lot of things, but it can't really think. Testing, real testing, requires thinking and judgment. There's been a devaluing of testing in some organizations, or just doing testing is considered a liability. Unless we are all coding toolsmiths, we are of a lesser order... and that's bunk!!!

Ultimately, testing is a cost... seriously. Testing does not make money. Testing is a cost center. It's an important cost center, but it is a cost. Think of health insurance. It is not an investment. It's a cost you have to pay... but when you crash a car or break a leg, then the insurance kicks in, and I'll bet you're happy when you have it (and really frustrated if you don't). That's what testing is. It's insurance. It's a hedge. It's a cost to prevent calamity. With all of the changing going on, we need to be clear about what we are and what we provide.

What we generate, and what real value we provide, is feedback and information. We are not critics. We are not nay-sayers; we are honest (we hope) reporters of the state of reality, or at least as close to it as we can be. The really valuable things that we can provide are not automate-able. Yes, I dared to say that :). Computers can evaluate variable values and they can confirm or deny state changes, but they cannot really think, and they cannot make an informed judgment call. They can only do what we as people tell them to.

Change is constant, and we will see more change as we continue. Testers need to be open to change, and realize that, while there is always value that we provide, the way we provide that value, and the mechanisms and institutions that surround them, will evolve. If we do not evolve with them, we will be left behind.

Mark emphasizes that software testers are "Facilitators of Quality". Testing is not just limited to dedicated testers; it's dispersing. Therefore, we need to emphasize where we can be effective, and that may mean going in totally different directions. Testing provides diversity, if we are willing to have it be a diversifying role. Think of new techniques, expand the way that we can ask questions, learn more about the infrastructure, and figure out ways that we can keep asking questions. The day we stop asking questions is the day testing dies, for real.

Testing can actually accelerate development. I believe this, and have seen it happen in my own experience. This is where paired developer-tester arrangements can be great. Think of the programmer being the pilot, and the tester being the navigator. Yes, if all we ask is "are we there yet?", we don't offer much, but if we watch the terrain, and ask whether some routes we've mapped may be better or worse for the time we want to arrive, now we're adding value, and in some ways, we can help them fix issues before they've even been committed. Testers provoke reactions. Not to be jerks, but to get people to think and consider what they really should be doing. Do you think you can't do that? If so, why? Give it a try. You may surprise yourself (and maybe a few programmers) with how much you deliver. In short, be the Devil's Advocate as often as possible, and be prepared to embrace the devils you don't know ;).

Consider that every tester is an analyst. It may be formal or informal, but we all are, deep down. We can research quality efforts, we can drill down into data and see patterns and trends, we can spot efficiencies we can add to our repertoire, and we can adapt, adapt, adapt!

-----

Sorry for the delay on this last bit, but I got caught up in a rather meta post-presentation call with Mark Tomlinson (we did a conference call about how to do podcasts, and in the process, recorded the session... so yeah, we made a podcast about how to do podcasts as an artifact of a meeting about how to do podcasts). Main takeaway: it's fun, but there's more to doing them than many people consider. We just hope we didn't scare everyone off after we were done (LOL!). After that, all of the speakers descended upon Tango Restaurant and had a fabulous dinner courtesy of the ALM Forum organizing staff. Great conversation with Scott Ambler, Curtis Stuehrenberg, Peter Varhol, and Seth Eliot, as well as several others. The nerd brain power in that small room was probably off the charts, and I was honored to have been included in this event. Seattle, thank you for a very busy and truly enjoyable week. For those who have been keeping track of this rather long missive, my thanks to you, too. To everyone who came to my talk and tweeted or retweeted my comments, and who commented back to me about my talk and gave me your impressions: feedback is a gift, and I've received many gifts today. Truly, thank you so much.

With this, I must return to reality and head back to San Francisco early this morning. I've enjoyed our time together, and I hope that, in some small way, this meandering three days of live blogging has given you a flavor of the event and what I've learned these past few days. Let's do it all again some time :)!!!

A San Franciscan in Seattle: #ALMForum Day Two Reflections

Wednesday, April 02, 2014 21:47 PM

Last night, Adam Yuret invited me out to see what the wild world of Seattle Lean Coffee is all about. Having heard from a number of the people who have participated in these events, I decided I wanted to play as well, so my morning was centered around Lean Coffee and meeting a great group of Seattleites and their various roles and areas of expertise.

We covered some interesting topics including the use of Pomodoro and how to make the best use of it (I added the Procrastination Dash to the mix of discussions), the use of SenseMaker and whether or not the adherence to it as a paradigm bordered on religion (it's a framework for helping realize and see results, but it's not magic), some talk about the challenges of defining what technical testing really means (yes, I introduced that topic ;) ), sharing some thoughts on what defines a WIP limit for an organization, and some thoughts about "Motivation 3.0" (based on Daniel Pink's book "Drive").


Great discussions, lots of interesting insights, and an appreciation for the fact that, over time, we see the topics change from being technical to being more humanistic. The humanistic questions are really the more interesting ones, in my estimation. Again, my thanks to Adam and the rest of the Seattle Lean Coffee group for having me attend with them today.

-----

Cloud Testing in the Mainstream is a panel discussion with Steve Winter, Ashwin Kothari, Mark Tomlinson, and Nick Richardson. The discussion has ranged across a variety of topics, starting with what drove these organizations to start doing cloud based solutions (and therefore, cloud based testing), how they have to focus on more than just the application in their own little environment, and how much they need to be aware of the in-between hops to make their application work in the cloud (and how it works in the cloud). As an example, latency becomes a very real challenge, and tests that work in a dedicated lab environment will potentially fail in a cloud environment, mainly because of the distance and time necessary to complete the configuration and setup steps for tests.

Additional technical hurdles have been getting into the idea of continuous integration and needing to test code in production, as well as pushing to production regularly. Steve works with FIS Mobile, which caters to banking and financial clients. He talked about a client that was resistant to the idea of continuous deployment, but certain aspects are indeed able to be managed and tested in this way, or at least a conversation is happening where it wasn't before.

Performance testing now takes on additional significance in the cloud, since the environment has aspects that are not as easily controlled (read: gamed) as they would be if the environment were entirely contained in their own isolated lab.

Nike was an organization that went through a time where they didn't have the information they needed to make a decision. In-house lab infrastructure was proving to be a limitation, since it couldn't cover the aspects of their production environment or give a real example of how the system would work on the open web. Once Ops was able to demonstrate some understanding through monitoring of services in the cloud, that helped the QA team decide to collaborate, understand how to leverage the cloud for testing, and see how leveraging the cloud made for a different dialect of testing, so to speak.

A question that came up was to ask if cloud testing was only for production testing, and of course the answer is "no", but it does open up a conversation about how "testing in production" can be performed intentionally and purposefully, rather than something to be terrified about and say "oh man, we're testing in PRODUCTION?!" Of course, not every testing scenario makes sense to be tested in production (many would be just plain insane), but there are times when it does make a lot of sense to do certain tests in production (a live site performance profile, monitoring of a deployment, etc.).

Overall an interesting discussion and some worthwhile pros and cons as to why it makes sense to test in the cloud. Having made this switch recently, I really appreciate the flexibility and the value that it provides, so you'll hear very few complaints from me :).

-----
Mike Brittain is talking about Principles and Practices of Continuous Deployment, and his experiences at Etsy. Companies that are small can spin up quickly, and can outmaneuver larger companies. Larger companies need to innovate or die. There are scaling hurdles that need to be overcome, and they are not going to be solved overnight. There also needs to be a quick recovery time in the event something goes wrong. Quality is not just about testing before release; it also includes adaptability and response time. Even though the ideas of Continuous Deployment are meant to handle small releases frequently performed, there still needs to be a fair amount of talent in the engineering team to handle that. The core idea behind being successful in Continuous Deployment is "rapid experimentation".

Continuous Delivery and Continuous Deployment share a number of principles. First is to keep the build green, no failed tests. Second is to have a "one button" option: push the button, and all deployment steps are performed. Continuous Deployment breaks a bit with the fact that every passing build is deployed to production, whereas Continuous Delivery means that the feature is delivered with a business need. Most of the builds deploy "dark changes", meaning code is pushed, but little to no change is visible to the end user (CSS rules not yet referenced, unreferenced code, back end changes, etc.). A check-in triggers a test run. If that's clean, it triggers automated acceptance tests. If those pass, it triggers user acceptance tests. If that's green, then it pushes the release. At any point, if a step is red, it will flag the issue and stop the deploy train.
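To make that gating concrete, here is a minimal sketch in PHP (not Etsy's actual tooling; the stage names and closures are illustrative) of a "stop the train on red" pipeline, where each stage has to come back green before the next one runs:

<?php
// Minimal sketch of the "deploy train" gating described above.
function run_pipeline(array $stages)
{
    foreach ($stages as $name => $stage) {
        if (!$stage()) {
            echo "Stage '{$name}' is red -- stopping the deploy train.\n";
            return false;
        }
        echo "Stage '{$name}' is green.\n";
    }
    echo "All stages green -- pushing the release.\n";
    return true;
}

// Illustrative stages; in practice each closure would run the real test
// suite and report its pass/fail status.
run_pipeline(array(
    'check-in tests'             => function () { return true; },
    'automated acceptance tests' => function () { return true; },
    'user acceptance tests'      => function () { return false; }, // simulate a red step
));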

Going from one environment to another can have unexpected changes. How many times have you heard "what do you mean it's not working in production? I tested that before we released!"? Well, that's not entirely surprising, since our test environment is not our production environment. The question, of course, is: where's the bug? Is it in the check-ins? Are we missing a unit test (or several)? Are we missing automated UA tests (or manual UA tests)? Do we have a clear way of being notified if something goes wrong? What does a roll back process look like? All of these are still issues, even in Continuous Deployment environments. One avenue Etsy has provided to help smooth this transition is a setup that does pre-production validation. Smoke tests, integration tests, functional and UA tests are performed with hooks into some production environment resources, and active monitoring is performed. All of this without having to commit the entire release to production, or doing so in stages.

Mike made the point that Etsy pushes approximately 50,000 lines of code each month. With a single big release, there are a lot of chances for bugs to be clustered in that one release. By making many releases over the course of days, weeks or months, the odds of a cluster of bugs appearing are minimal. Instead, the bugs that do appear are isolated and considered within their release window, and their fix likewise tightly mirrors their release.

This is an interesting model. My company is not quite to the point that we can do what they are describing, but I realized we are also not way out of the ballpark to consider it. It allows organizations to iterate rapidly, and also to fix problems rapidly (potentially, if there is enough risk tolerance built into the system). Lots to ponder ;).

-----
Peter Varhol is covering one of my favorite topics, which is Bias in Testing (specifically, cognitive bias). Peter started his talk by correlating the book "Moneyball" to testing: often, the stereotypical best "hitter/pitcher/runner/fielder/player" does not necessarily correlate to winning games. By overcoming the "bias" that many of the talent scouts had, the team was able to build a consistently solid roster by going beyond the expectations.

There's a fair amount of bias in testing. That bias can contribute to missing bugs, or testers not seeing bugs, for a variety of reasons. Many of the easy to fix items (missing test cases, missing automated checks, missing requirement parameters) can be added and covered in the future. The more difficult one is our own bias as to what we see. Our brains are great at ambiguity. They love to fill in the blanks and smooth out rough patches. Even when we have a "great eye for detail", we can often plaster over and smooth out our own experience, without even knowing it.

Missed bugs are errors in judgment. We make a judgment call, and sometimes we get it wrong, especially when we tend to think fast. When we slow down our thinking, we tend to see things we wouldn't otherwise see. Case in point: if I just read through my blog to proof-read the text, it's a good bet I will miss half a dozen things, because my brain is more than happy to gloss over and smooth out typos; I get what I mean, so it's good enough... well, no, not really, since I want to publish and have a clean and error-free output.

Contrast that with physically reading out, and vocalizing, the text in my blog as though I am speaking it to an audience. This act alone has helped me find a large number of typos that I would otherwise totally miss. The reason? I have to slow down my thinking, and that slowdown helps me recognize issues I would have glossed over completely (this is the premise of Daniel Kahneman's "Thinking, Fast and Slow"). To keep with the Kahneman nomenclature, we'll use System 1 for fast thinking and System 2 for slow thinking.

One key thing to remember is that System 1 and System 2 may not be compatible, and they may even be in conflict. It's important to know when we might need to dial in one thought approach or the other. Our biases could be personal. They could be interactional. They could be historical. They may be right a vast majority of the time, and when they are, we can get lazy. We know what's coming, so we expect it to come. When it doesn't, we are either caught off guard, or we don't notice it at all. "Representative Bias" is a more formal way of saying this.

When we are "experts" in a particular area, we can have that expertise work against us as well. We may fail to look at things from another perspective, perhaps that of a new user. This is called "The Curse of Knowledge".

"Congruence Bias" is where we plan tests based on a particular hypothesis, whereas we may not have alternative hypotheses . If we think something should work, we will work on the ways to support that a system works, instead of looking at areas where a hypothesis might be proven false.

"Confirmation Bias" is what happens when we search for information or feedback that confirms our initial perceptions.

"The Anchoring Effect" is what happens when we become to convinced on a particular course of action that we become locked into a particular piece of information, or a number, where we miss other possibilities. Numbers can fixate us, and that fixation can cause biases, too.

" Inattentional Blindness" is the classic example where we focus on a particular piece of information that they miss something right in front of them (not a moonwalking bear, but a gorilla this time ;) ). there are other visual images that expand on this.

The "Blind Spot Bias" comes from when we evaluate our decision making process compared to others. With a few exceptions, we tend to think we make better decisions than others in most areas, especially those we feel we have a particular level of expertise.

Most of the time, when we find a bug, it's not because we have missed a requirement or missed a test case (not to say that those don't lead to bugs, but they are less common). Instead, it's a subjective parameter. We're not looking at something in a way that could be interpreted as negative or problematic. This is an excellent reminder of just how much we need to be aware of what and where we can be swayed by our own biases, even by this small and limited list. There's lots more :).

-----

More to come, stay tuned.

Live From Seattle, it's #ALMForum: A TESTHEAD Live Blog

Wednesday, April 02, 2014 00:21 AM

Good morning everyone. I'll be coming at you live from Seattle at various times of the day. This is a live blog, and as such, it's going to be stream of consciousness, it may contain mistakes, and it may also have gaps in logical flow. If you want to see the real time feed, an ability to handle ambiguity will help. If you can't handle a touch of ambiguity, wait until later in the day when I get a chance to clean things up a bit ;).

We start out with Scott Ambler (@scottwambler on Twitter) and a discussion of Disciplined Agile Delivery and how to scale Agile practices in larger organizations. Scott made a few points about the fact that Agile is a process with a lot of variations on the theme. Methodologies and methods are all nice, but each organization has to piece together for themselves which of the methods will actually work. Scott has written a book called Disciplined Agile Delivery (DAD). The acronym of DAD is not an accident. Key aspects of DAD are that it is people first, goal driven, a hybrid approach, learning oriented, utilizes a full delivery lifecycle, and tries to emphasize the solution, not just the software. In short, DAD tries to be the parent; it gives a number of "good ideas" and then lets the team try to grow up with some guidance, rather than an iron hand.

Questions to ask: what are the variety of methods used? What is the big picture? While we can look at a lot of terminology, and we can say that Scrum or agile processes are loose form and just kind of happen, that's not really the case at all.  Solution delivery is complex, and there's a lot of just plain hard reality that takes place. Most of us are not working on the cool new stuff. We're more commonly looking at adding new features or enhanced features to stuff that already exists. Each team will probably have different needs, and each team will probably work in different ways. DAD is OK with that.

Scott thankfully touched on a statement in his keynote that made me want to throw the "devil horns" and yell "right on!": there is no such thing as a best practice; there are good practices in some circumstances, and those same practices could be the kiss of death in another situation. Granted, for those of us who are part of the context-driven testing movement, this is a common refrain. The fact that this is being said at a conference that is not a testing conference per se brought a big smile to my face. The point is, there are many lean and agile options for all aspects of software delivery. The advice we are going to get is going to conflict at times, it's going to fit some places and not others, and again, that's OK.

Disciplined agile delivery comes down to asking the questions around Inception (How do we start?), Construction (What is the solution we need to provide?), Transition (How do we get the software to our customers?) and Ongoing (What do we do throughout all of these processes?).

For years, we used to be individually focused. We all would do our "best practices" and silo ourselves in our disciplines. Agile teams try to break down those silos, and that's a great start, but there's more to it than that. Our teams need to work with other teams, and each team is going to bring their own level of function (and dysfunction). This is where context comes into play, and it's one of the ways that we can get a handle on how to scale our methods. While we like the idea of co-location, the fact is that many teams are distributed. Some teams are partially dispersed, others are totally dispersed (reminds me of Socialtext as it was originally implemented; there was no "home office" in the early days). Teams can range from small (just a few people), to medium (10-30 people), to large (we think 30+ is large; other companies look at anything less than 50 people as a small team). The key point is that there are advantages and disadvantages regarding the size of your team. Architecture may have a full architecture team with representatives in each functional group. Product owners and product managers might also be part of an overarching team where representatives come from smaller groups and teams.

The key point to take away from this is that Agile transformations are not easy. They require work, they take time to put into place, there will be mis-steps, and there will be variations that don't match what the best practices models represent. The biggest challenge is one of culture, not technology. Tools and scrum meetings are fairly easy. Making these a real part of the flow and life of the business takes time, effort and consistent practice. Don't get too caught up in the tools doing everything for you. They won't. Agile/Scrum is a good starting point, but we need to move beyond this. Disciplined Agile Delivery helps us up our game, and gets us on a firmer footing. Ultimately, if we get these challenges under control with a relatively small team, we can look to pulling this off with a large enterprise. If we can't get the small team stuff working, Agile scaling will be pretty much irrelevant.

My thanks to Scott for a great first talk, and now it's time to get up and see what else ALM forum has to offer.
-----

I'm going to be spending a fair amount of my time in the Changing Face of Testing track. I've already connected with some old friends and partners in crime. Mark Tomlinson and I are probably going to be doing a fair amount of cross commenting, so don't be surprised if you see a fair amount of Mark in my comments ;).

Jeff Sussna is taking the lead for us testers and talking about how QA is changing, and how we need to make a change along with it. We're leaving industrialism (in many ways) and we are embarking on a post-industrial world, where we share not necessarily things, but we share experiences. We are moving from a number of paradigms into new paradigms:

from products to services: locked in mechanisms are giving way to experiences that speak to us individually. The mobile experience is one of the key places to see this. People who have negative experiences don't live with it, they drop the app and find something else.

from silos to infusion: being an information silo used to give a sense of job security. It doesn't any longer. Being able to interact with multiple organizations and to be adaptable is more valuable than being someone who has everything they know under lock and key.

from complicated to complex: complicated is predictable, it's bureaucratic, it's heavy. Complex is fragmented. It's independent, it doesn't necessarily follow the rules, and as such it's harder to control (if control is possible at all).

from efficient to adaptive: efficiency is only efficient when the process is well understood, and the expectations are clearly laid out. Disruption kills this, and efficiency gives way when you can't predict what is going to happen. This is why adaptability is more valuable than just efficiency. Learn how to be adaptive and efficient? Now you've got something ;).

The disruption that we see in our industry is accelerating. Companies that had huge leads and leverage that could take years to erode are eroding much faster. Disruption is not just happening, it's happening in far more places. Think about Cloud computing. Why is it accelerating as a model? Is it because people are really all that interested in spinning up a bunch of Linux instances? No, not really. The real benefit is that we can create solutions (file sharing, resource options, parallel execution) where we couldn't before. We don't necessarily care about the structure of what makes the solution, we care that we can run our tests in parallel in far less time than it would take to run them on a single machine in serial. Dropbox is genius not because it's in the cloud, it's genius because any file I really care about I can get to anywhere, at any time, on any device, and I can do it with very little physical setup and maintenance (changes delivered in an "absorbable manner").


Think of Netflix and their "chaos monkey". They go in and turn instances off. They deliberately break stuff. They want to see what they might be able to find. "I don't always test my code, but when I do, I do it in production." That's supposed to be a joke, but believe it or not, there is a great benefit to testing in production. This is why I am very invested in using my company's product on their production servers, and looking at issues based on workflows I depend upon.

So what does this all mean for testers and testing? Does this mean that our role is being usurped? No, but it does mean our role is changing. Instead of having to babysit machines and be the isolated gatekeeper, we can test more intelligently and with a greater sense of adventure. We can also emphasize that testing goes beyond just performing scripted steps. We can also test more than just the code that we receive, when we receive it. We can test requirements. We can provoke questions. More to the point, we can be a feedback loop to the organization. If an organization believes in being truly adaptive, then it is, effectively, an environment that is friendly to QA.

Mark and I had a little fun considering some of the ramifications as presented, and since Mark said he has some debatable comments he'll be sharing in his talk, I'm going to hold off and not comment until then (stay tuned for further details ;) ).  Suffice it to say, testers are notorious for not necessarily agreeing across the board. That's also part of testing. If we agreed 100%, I'd be deeply worried about the state of our profession.

Testing covers a lot of areas. User testing validates usability. Unit tests can cover code functionality, but there's a lot of space in between those two areas that doesn't get as much attention. There are lots of "ilities" we need to be paying attention to.

Retros are a good opportunity to see what went well and what can go better, but the technique only works when it's done on a frequent enough level, and the feedback is substantive.

What we definitely need to get away from is "Discontinuous Quality". Let's stop talking about QA wagging the dog. Let's not save testing until the end, where we find problems and tell people about them, only to be told that we are the bottleneck stopping the organization from releasing. Instead, let's get to the party earlier. Let's check out ideas earlier. Let's understand what we are able to contribute, and in as many places as we can. Ultimately, we are not delivering functionality, we are delivering the ability to help accomplish goals and objectives. How we do it is not nearly as important as the fact that we actually do it, and do it in a way that is both effective and adaptable.

For me, the most common thing I can think of to help this is the term "QA". I do my best to not use that term at all if I can get away with it. If I'm asked if I'm in QA, I always answer "yes, I'm a software tester". We have to get out of the business of assuring quality, because we really can't do that. We can inform, we can evangelize, we can enlighten, but we really can't assure anything. What we can do is test, and weave a compelling story. Ultimately, the story is the most important thing we can deliver, as it's the narrative that really defines whether a solution goes out or doesn't.

-----

Ken Johnston (@rkjohnston) is talking about EaaSY, or "Everything as a Service, Yes!". Ken wants to help us see what the role of testing actually is. It's not really about quality assurance, but more about risk assessment and management. I agree with this, in the sense that, in the old school environments I used to work in, especially when I worked for a game publisher, when a bug shipped to production, unless it was particularly egregious, it was eternal. In the services world, and the services model, since software is much more pliable, and much more manageable, there's no such thing as a "dated ship". We can update all the time, and with that, problems can be addressed much more quickly. With this model, we are less forced into slotted times. We can fix a bug the same day. We can release a new feature in a week where it used to take a quarter or a year.

EaaSY covers a number of parts that need to come together to be effective.

Componentization: break out as much of the functionality from external dependencies as possible.

Continuous Delivery: Requires continuous stability. It needs a targeted set of tests, an atomic level of development, and is likely an area that can be deployed/fixed with a low number of people being impacted by the change. The more mission critical, the less likely a Continuous Delivery model will be the desired approach; not impossible, but probably not the best focus (IMO).

User Segmentation: When we think of how to deploy to users, we can use a number of methods to do that. We can create concentric rings, with the smallest ring being the most risk tolerant users, expanding out to larger sets of users; the farther out we get, the more risk averse the users. Additionally, we can use tools like A/B testing to see how two groups of people react to a change as structured one way or another (structure A vs. structure B). This is a way to put a change into production, but have a small group of people see it and react to it.
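As a rough illustration of both ideas (the ring percentages, IDs and group names here are invented, not anything Ken described in detail), bucketing users by a stable ID lets a change reach the risk-tolerant rings first, and splits an A/B experiment evenly:

<?php
// Hypothetical sketch of ring-based rollout and A/B bucketing.

// Concentric rings: the most risk tolerant users see the change first,
// and the audience widens as confidence grows.
function ring_for_user($userId, array $ringPercents)
{
    $bucket = $userId % 100; // stable 0-99 bucket per user
    $cumulative = 0;
    foreach ($ringPercents as $ring => $percent) {
        $cumulative += $percent;
        if ($bucket < $cumulative) {
            return $ring;
        }
    }
    return 'general availability';
}

// A/B: half the users see structure A, the other half structure B.
function ab_group($userId)
{
    return ($userId % 2 === 0) ? 'structure A' : 'structure B';
}

echo ring_for_user(4217, array('inner ring (risk tolerant)' => 1, 'second ring' => 9)) . "\n";
echo ab_group(4217) . "\n";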

Runtime Flags: Layers can be updated independently. We can fork traffic through the production path and at key areas, data can be forked and routed through a different setup, and then reconvene with the production flow (this is pretty cool, actually :) ). Additionally, code can be pushed, but it can be "pushed dark", meaning it can be put in place but turned on at a later time.
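A bare-bones sketch of the "pushed dark" idea (no particular flag library is implied, and the flag name and code paths are made up): the new code ships with the release, but stays dormant until the runtime flag is flipped, with no re-deploy needed.

<?php
// Hypothetical runtime flag: the value would normally come from a config
// store that can be changed independently of a code push.
$flags = array('new_checkout_flow' => false);

function checkout(array $flags)
{
    if (!empty($flags['new_checkout_flow'])) {
        return 'new checkout path';      // dark code: deployed, but off
    }
    return 'existing checkout path';
}

echo checkout($flags) . "\n"; // prints "existing checkout path" until the flag is flipped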

Big Data: Five "Vs" (Volume, Variety, Velocity, Verification, Value). These need to be considered for any data driven project. The better each of these is, the more likely we will be successful in utilizing big data solutions.

Minimum Viable Product: Mark called up Seth Eliot's "Big Up Front Testing" (BUFT) and says "say no to BUFT". With a minimum viable product, we need to scale our testing to a point where we can have an MVP, and appropriate testing for the scale of the MVP. Additionally, there are options where we can test in production (not full scale, of course).

Overall, this was a very interesting approach and idea. Many of the ideas and approaches described sound very similar to activities we are already doing in Socialtext, but it also gives me areas where I can see that we can do better.

-----

James Whittaker (@docjamesw) is doing the next plenary session, called "A Future Worth Wanting". First we start with our own devices, our own apps, we own them, they're ours, but they aren't particularly useful if they don't connect to a data source somewhere (call it the web and the cloud for simplicity). James is making the point that there's a fair amount of stuff in between that we are not including. The Web browser is one of these middle point items. The app store is another. We know what to do and how to do it, we don't give it much thought. Could we be doing better?

Imagine getting an email, then having to research where an event is, how much tickets are, and how to handle the transaction. What if those details were exposed as "entities", so we could use those entities to find information and perform transactions directly? Frankly, this would be cool :).

What about a calendar? We are planning to do something, some kind of activity that we need to be time focused for. What do we naturally do? We jump to a browser and go figure out what we need. What if our calendar could use those entity relationships and do the search for us, or better yet, return what has already been searched for based on the calendar parameters? Think of writing code: wouldn't it be cool to find a library that could expand on what you are doing or do what you are hoping to do?

The idea here is to be able to track "entities" to "intents", and execute those intents. Think about being able to call up a fact checking app in PowerPoint, and based on what you type, you get a return specific to your text entry. Again, very interesting. The key takeaway is that our apps, our tools, our information needs are getting tailored to exactly the data we want, from the section of the web or cloud that we actually need.

This isn't a new concept, really. This is the concept of "agents" that's been talked about for almost two decades. The goal we want is to be able to have our devices, our apps, our services, etc, be able to communicate with each other and tell us what we need to know when we need to know it. It's always been seen as a bit of a pipe dream, but every week it seems like we are getting to see and know more examples that make that pipe dream look a little less far fetched.

Goals we want to aim for:

- Stop losing the stuff we've already found
- Localize the data and localize the monetization
- Apps can understand intent, and if they don't, they should. Wouldn't it be great if based on a search or goal, we can download the appropriate apps directly?
- Make it about me, not my device


Overall, these are all cool ideas, and yes, these are ideas I can get behind (a bit less branding, but I like the sentiment ;) ).

-----

Alexander Podelko (@apodelko) wants to have us see a "Bigger Picture" when it comes to load testing. There's a lot of terminology that goes into load testing, and the terms are often used interchangeably, but not always. The most common image we have of load testing (and yes, I've lived this personally) is the last minute before deployment: we put some synthetic tests together in our lab, try to run a bunch of connections, see what happens, and call it a day and push to production. As you might guess, hilarity ensues.

The problem with this is not just the lateness, or the inability to really match our environment, but that we miss a lot of stuff. There are a lot of options in load testing that can give us a broader picture (as the talk suggests). Some of the other issues that load testing brings are the fact that each tool has limitations to what it can cover, as well as the robustness that can be provided by the various tools (as you might guess, JMeter does not solve every load testing problem... I know, contain your shock and dismay ;) ).

As Alexander points out quite appropriately, web sites were simple for a very brief window of time. They are expanding to be more complex and less controllable through standard and simple tools that would cover everything in one place. There are a variety of tools that can be used, ranging from open source to commercial tools. The more complicated the system, the less likely one tool will be able to answer the needs.

Overall, load testing looks to have some of the broadest challenges for the systems that are meant to be tested, at least if we want to create load that is not completely synthetic and generally meaningless. Making load tests that are complex, heterogeneous, and indicative of real world traffic is possible, but the more unique and real world the traffic you wish to emulate, the more difficult the process of actually providing that simulated traffic becomes.


-----

In the mid afternoon, they held a number of Birds of a Feather sessions to provide some more interactive conversations, and one of them was specifically about how to use Git. Granted, I'm pretty familiar with Git, but I always appreciate seeing how people use it and seeing different ways to use it that I may not have considered.

One of the tools they used for the demonstration was "Learn Git Branching", which displays a graphical representation of a variety of commits, and shows what commands actually do when they are run (git commit, git merge, git rebase, etc.).

-----

The last session of the day is being delivered courtesy of Allan Wagner, and the focus is on continuous testing, or why we would want to consider doing continuous testing. The labor costs are getting higher, even with outsourcing options considered, test lab complexity is increasing, and the amount of testing required keeps growing and growing. OK, so let's suppose that Continuous Testing is the approach you want to go with (I hope it's not the only approach, but cool, I can go with it for this paradigm), where do you start?

For testers to be able to do continuous testing, they need:

- production-like test environments (realistic and complete)
- automated tests that can run unattended
- orchestration from build to production which is reliable, repeatable and traceable

One very good question to ask is "how much time do you spend doing repetitive set up and tear down of your test environments?" In my own environment, we have gotten considerably better in this area, but we do still spend a fair amount of time setting up our test environments. I'm not entirely sure that, even with service virtualization, there would be a tremendous increase in time saved for doing spot visual testing. While I do feel that having automated tests is important, I do not buy into the idea that automated-only testing is a good idea. It certainly is a big plus and a necessary methodology for unit tests, but beyond that, trying to automate all of the tests seems to fall under the law of diminishing returns. I don't think that that is what Allan is suggesting, but I'm adding my commentary just the same ;).

Service Virtualization looks to create, as its name describes, the ability to make elements that are unavailable available for testing. It requires mocks and stubs to work, where you can simulate the transactions rather than try to configure big data hardware or front end components that don't yet exist for our applications.

Virtual Components need to fit certain parameters. They need to be simple, non-deterministic, data-driven, using a stateful data model, and have functionality where we can easily determine their behavioral aspects.

The key idea is that, as development continues, the virtual components are replaced with the real components, and attention moves on to additional pieces of later functionality. In other examples, the virtualized components may be those that simulate a third party service that would be too expensive to have as a regular part of the development environment.
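As a simplified sketch of that stub idea (the interface and class names here are invented for illustration, not taken from the talk), the virtual component stands in for the unavailable service, and the real implementation can replace it later without the calling code changing:

<?php
// Invented names, for illustration only.
interface PaymentGateway
{
    public function charge($account, $amount);
}

// The virtual component: canned, data-driven responses instead of a real
// third-party call. Later it is swapped out for the real gateway.
class VirtualPaymentGateway implements PaymentGateway
{
    public function charge($account, $amount)
    {
        return $amount > 0; // simulate success for any positive amount
    }
}

function place_order(PaymentGateway $gateway, $account, $amount)
{
    return $gateway->charge($account, $amount) ? 'order placed' : 'payment declined';
}

echo place_order(new VirtualPaymentGateway(), 'test-account', 25.00) . "\n";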

Allan made the point in his talk that Continuous Testing is not intended to be the be-all and end-all of your testing, but it is meant to be a way to perform automated testing as early as possible and as focused as possible, so that the drudge work of set-up, tear down, configuration changes and all of the other time consuming steps can be automated as much as possible. This is meant to allow thinking testers to do the work that really matters, which is to perform exploratory testing and let the tester genuinely think. That's always a positive outcome :).

-----

From here, it's a reception, some drinks, and some milling about, not to mention dinner and chilling with the attendees. I'll call this a day at this point and let you all take a break from these updates, at least for today. Tomorrow, I'm going to combine two events, in that I'll be taking part in SEALEAN (a Lean Coffee event) and then picking up with the ALM Forum conference again after that. Have a good night testing friends, see you tomorrow morning :).

End of Entry: 04/01/2014: 05:20 p.m. PDT

TECHNICAL TESTER FRIDAY: And After Awhile... You Can Work on Points for Style

Saturday, March 29, 2014 01:44 AM

Last week was a bit of a whirlwind. I took my son out for a quarter cross country trip to check out a University he may be going to. Needless to say, that took a few days out of my reality. Between being in airplanes, rental cars and dealing with less than stellar rural WiFi in spots, let's just say I'm a week behind where I should be. On the bright side, I do have some stuff to show, and a little bit of extending on the theme of PHP being a site driver.

One of the interesting paradigms that shifted for me with some of the abilities that PHP provides is that I now look at pages differently. I used to use simple tools like SeaMonkey or some other cheap WYSIWYG editing tool, chopping out anything that would be overhead, and making some simple frame-ups that I could use to quickly drop in text, make changes, etc. It was intensely manual, but hey, I was rolling my own, so I was OK with that.

Then PHP came along and messed up everything, and bless it for doing exactly that :).

The last time we were together I started looking at ways that I could clean up the pages, put some basic style ideas together, and make a site that would be less of a pain to maintain. Did I succeed? In some ways yes, but in others, I’ve traded one series of challenges for another. Some things are much easier with PHP, and some things allow you a wide latitude to do things that are hard to track down later. The key point, though, is that we can emphasize interactivity and automation for the pages rather than brute force updating and modifications. 



I've taken some time and cleared the game on the HTML and CSS module that Codecademy offers, and I have to say it's actually a pretty good synopsis. The HTML is pretty basic, but it made for a nice layering to teach the ideas behind CSS and where to use them. There are a lot of options that allow for fine-grained control of the pages, but I think the best part is the rapid templating it can give to a site designer.

Here's my current index.php file, which is what I now use as my base template for all pages of the proposed "Youth Orchestra" site:


Yep, that's it. Pretty much every page in the site now has this as its basic template. What does the main site look like now?


Yeah, I know, I've moved from 1995 to about 2002 ;). Again, this was so that I could focus on using a few key attributes and not drowning in them. The main idea that I wanted to focus on was how to create a fundamental "box model" that would be easy to use as a base for all of the pages, regardless of what they were to display. 

Each of the main sections of the page is spelled out as a div, each with a unique class or ID. We have the header, we have a nav bar, we have a left side bar, a right main content area and a footer at the bottom. Each of those areas is defined (more or less) by the placement of the individual div elements.
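For illustration, a stripped-down template along these lines (the IDs, page title and include file names here are made up, not the actual site files) might look something like this:

<?php
// Illustrative template only -- not the actual Youth Orchestra files.
// Each div maps to one of the regions described above, and each region's
// content lives in its own small PHP file.
$pageTitle = 'Youth Orchestra';
?>
<!DOCTYPE html>
<html>
<head>
  <title><?php echo $pageTitle; ?></title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="header"><?php include 'header.php'; ?></div>
  <div id="navbar"><?php include 'navbar.php'; ?></div>
  <div id="sidebar"><?php include 'sidebar.php'; ?></div>
  <div id="content"><?php include 'content.php'; ?></div>
  <div id="footer"><?php include 'footer.php'; ?></div>
</body>
</html>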

The style sheet is not terribly expansive, but it gives a little better view than it did before. Also, there's a whole bunch of things that we can still do with these pages, and I am not even close to done. Added to that, there's still no JavaScript in any of these pages. I'm aiming to boost that next week, when I start getting more aggressive with forms and media displays.

Here's a look at the CSS file as it currently exists:



Again, these are pretty basic commands. Not a lot of fancy footwork is going on in here just yet, but that's a good thing. My goal is to try to look to see where I can decouple dense HTML and atomize the items, perhaps in small PHP pages,  or perhaps make it purely database driven (or as much as possible). 

Also, while I am starting to use HTML5 conventions for the base HTML structure, I'm using the standard div structure and giving each div a unique name or attributes, so that I can, again, simplify what each page needs and what I need to maintain. At the moment, the separate PHP files are currently echo statements with raw HTML listings. It's a step up on the maintainability scale, but part of me hears Paul Stanley from his "100,000 Years" banter on KISS Alive saying "Now come on! I KNOW you can do better than that!!" On the bright side, it gives me something to look forward to and tweak a bit more.
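For example, one of those little partials (a made-up sidebar.php, purely for illustration) is nothing more than an echo of raw HTML today, with a database-driven version being the obvious next tweak:

<?php
// sidebar.php (illustrative): currently just an echo of raw HTML.
// A later pass could build this list from a database query instead.
echo '<h3>Upcoming Concerts</h3>
<ul>
  <li>Spring Showcase</li>
  <li>Summer Pops in the Park</li>
</ul>';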

For those who want to follow the ideas from this session, my recommendation is to work through the examples on Codecademy, practice all of the examples, and as you perform each step, grab a snippet here and there and use the ideas to build your CSS file. Try to arrange the items as much as possible from the most expansive to the most localized, and if possible, try to keep elements that have an impact or relation on each other close together.

At some point, if the site gets sufficiently complex, that will become more difficult, but seeing each of the CSS elements, classes, pseudo classes and options in play, it really becomes easier to separate the structure from the style points. I still have a fair amount of decoupling to do, specifically the use of tables, and I also need to look at more granular positioning and locking of elements in place (almost there, but there's still a little fudge factor going on with the divs). On the bright side, we're moving forward and making progress, and that's what matters :).






Congratulations to Packt for 2000 Titles

Tuesday, March 25, 2014 12:23 PM

As many of you know, one of the things I put a lot of focus on are books and book reviews in this blog. Several publishers have been very generous to me and have given me access to a variety of titles to read, apply and review. Packt Publishing is one of those companies, and as a celebration of the fact that they have released their 2000th title, I want to help them celebrate and encourage those who like their titles to take advantage of their current offer.

I feel bad that I couldn't do this sooner, as I've been out of town the past few days (another post is coming on that point, don't worry), but I still want to get the word out about the Packt 2000 title celebration, and there's a limited amount of time to take advantage of it.

So what is this all about?

It's simple. Until the end of the day today, if you buy any Packt Publishing EBook title, you will get another Packt Publishing EBook for FREE.

Click on either of the links above and you can take advantage of this opportunity, but don't delay, as it ends today (Mar. 25, 2014).

Again, I tend to not use this site for advertising purposes, but I appreciate the fact that Packt has provided a lot of titles to me over the past few years to review, and I would like to return the favor. If you appreciate open source software books, and want to support a company that produces many solid titles, then head over to Packt and get your BOGO on :).