Tuesday, April 21, 2015 9:42 PM
The past two and a half weeks have been very difficult for me to process. Part of me is numb, part of me is frustrated, and part of me is deeply sad. All of these feelings have conspired to render my writing nearly non-existent, because I couldn't produce anything until I addressed the overwhelming elephant in my reality.
I don't want to talk about losing Ken. Instead, I want to talk about how he lived, and what he meant to me. I met Ken for the first time in December, 2010. Matt Heusser invited me to a "tester's dinner" at Jing Jing's in Palo Alto. The resulting conversations turned into TWIST podcast #30. I remember being impressed with Ken right off the bat. He was a little gruff, and no-nonsense with his answers, but he was a straight shooter, and he possessed a wealth of skill and a practical approach that he was happy to share.
Over the following two years, I would run into Ken at various meetups, workshops and conferences. Each time, we'd talk more in depth, getting to know each other better. In the summer of 2012, I had the chance to sit in with him and Cem Kaner at CAST as they discussed ways to balance automation and exploration. As I was set to give a talk in two days on the same subject, that interaction and deep questioning caused me to discard much of my original talk and start fresh. Ken took the time to answer dozens of questions I had, and in the process, helped me make a talk I would deliver several times over the ensuing years. That final product was hugely inspired by Ken.
A few months later, when I expressed an interest in a change of scenery, and a desire to work with a broader testing team rather than keep going it alone, Ken lobbied me to join his team at Socialtext. I accepted, and thus began a daily interaction with him that lasted for two and a half years. I learned a great deal from Ken and his unique style. Ken was not one to idly chat or play games. If he felt you were doing something good, he told you, immediately. If he felt you were losing focus or going off the rails, he told you... immediately :)! You always knew exactly where you stood with him, what was working for him, and what wasn't. He also took great care in representing his team to upper management, both within Socialtext itself, and when we were acquired by PeopleFluent. Ken was fearless when it came to telling people what could be done and what couldn't. He didn't care if upper management was frustrated or irritated with an answer, he'd rather give them the straight truth than make a promise we couldn't deliver, or push something that would be sub-par.
During many of the stand-up meetings we'd have each morning, Ken would have his oldies station playing over the phone system (half our team is distributed). Some mornings, he'd start singing at the start of stand-up, and often, he'd ask me to sing along with him, since we were both vocal performers (he sang with a number of choir groups, and had a wonderful baritone voice). Over time, though, due to the cancer treatments, the singing voice was quieted, and we heard it less and less. Now, the singing has stopped. I won't hear it in my day to day work life again. I think I will miss that most of all.
I want to invite my friends who are wine connoisseurs (because Ken was definitely one of those) to raise a glass to Ken. He was a man who understood testing, and represented that understanding very well. He inspired a lot of love and admiration from his team, and from everyone that knew him. He was generous with his time, energy, and knowledge. He loved to roll up his sleeves and do the day to day work that, by virtue of his position, he absolutely did not have to do. Still, more than just doing the day to day work, he reveled in helping his team learn something new, do something they'd never done before, and encourage them to go farther and get better each and every day. It's a trait I hope I can exhibit when I work with others in the future.
Thank you, Ken, for just plain being amazing. Our team has a massive hole in it, and I doubt we will ever truly fill it, but we will do our level best to make you proud of us nonetheless.
Friday, April 03, 2015 8:27 PM
One of the key messages I received loud and clear about the world of software testing and those who are involved in it is that there is a bit of an identity crisis happening. The world of software development is changing. Long-standing biases, beliefs and processes are being disrupted regularly. Things that worked for years are not working so well any longer. Business models are being upended. The rules of the game, and the very game itself, are being rewritten, and many people are waiting to be shown the way.
One of the things I have witnessed time and time again is that there is no end of people happy to consume materials created. I'm very much one of those people, and I enjoy doing so. However, at some point, we reach the limit of what can be consumed, and a void appears: something is lacking, a need is not being fulfilled. It's very easy to ask that someone fill that need, and complain when that person or group does not do so. I remember very well being called to task about this very thing. Back in 2010, I lamented that no one had brought Weekend Testing to the Americas, and that I had to attend sessions hosted in India, Europe and Australia, often either late at night or early in the morning for me. Joe Strazzere set me straight very quickly when he said (I'm paraphrasing):
"Perhaps it's because no one in the USA values it enough to make it happen. YOU obviously don't value it enough either. If you did, you would have already made it happen!"
That was a slap, but it was an astute and accurate slap. I was lamenting the fact that something I wanted to consume wasn't readily available for me in a way that I wanted to have it. I was waiting for someone else to create it, so I could come along and feed at the counter. Instead, I realized someone had to prepare the food, and since I wanted to eat it, I might as well prepare it, too. The rest, as they say, is history, and to be continued.
I want to make a suggestion to those out there who see a need, an empty space, an unfulfilled yearning for something that you have identified... is there any reason why you are not doing something to bring it to life? Are you waiting for someone else to give you permission? Are you afraid your idea may be laughed at or criticized? If you are a software tester, are you suffering from the malady that you see a problem for every solution? If so, I want to assure you that I understand completely, as I have been there and find myself in that position frequently. Perhaps you feel that you are alone in your frustration, that you are the only one who finds it frustrating that something you care about is not being addressed. While I was at STP-CON, and during Kate Falanga's session, we discussed the three layers of engagement and change/influence. The first layer is ourselves, the second is our team or organization, and the third is the broader community. There's a very good chance that any void you are seeing, any initiative that you hope to see championed, has many who likewise want to see a champion emerge.
My advice to all out there is to stop waiting for someone else to Build the Perfect Beast, and instead, start building your own. Once you start, it's a good bet others will want to get involved as well. No one wanted to do Weekend Testing in the Americas until I was willing to throw my hat in the ring. In short order, others decided they wanted to get involved as well. Some have come and gone, but we are still here and many have helped us solidify what we do. Our Beast is not yet perfect, but it's getting there, and we've learned much along the way. Same goes for every other organization I am involved in. Major movements do not happen by timidly waiting for someone else to take the lead, and they don't come about by asking for permission to do them, either. If you see a need that is not being met, try to create something that will meet that need, even if you have to do it alone at first. My guess is, over time, others will see what you are doing and want to be part of it, too. Do be warned, though, the desire to create is addicting, and you could find yourself a bit over-extended. On the other hand, you may be having too much fun to care :).
Thursday, April 02, 2015 11:00 PM
Two days go by very fast when you are live-blogging each session. It's already Thursday, and at least for me, the conference will end at 5:00 p.m. today, followed by a return to the airport and a flight back home. Much gets packed into these couple of days, and many of the more interesting conversations we have had have been follow-ups outside of the sessions, including a fun discussion that happened during dinner with a number of the participants (sorry, no live blog of that, unless you count the tweet where I am lamenting a comparison of testers to hipsters ;) ). I'll include a couple of after hours shots just to show that it's not all work and conferring at these things:
9:00 am - 10:00 am
KEYNOTE: THINKING FAST AND SLOW – FOR TESTERS’ EVERYDAY LIFE
Joseph based his talk on the Daniel Kahneman book "Thinking, Fast and Slow". The premise of the book is that we have two fundamental thinking systems. The first is the "fast" one, where we can do things rapidly and with little need for extended thought. It's instinctive. By contrast, there's another thinking approach that makes us slow down to work through the steps. That is our non-instinctual thinking; it requires deeper thought and more time. Both of these approaches are necessary, but there's a cost to switch between the two. It helps to illustrate how making that jump can lose us time, productivity and focus. I appreciate this acutely, because I do struggle with context-switching in my own reality.
One of the tools I use if I have to deal with an interruption is to ask myself if I'm willing to lose four hours to take care of it. Does that sound extreme? Maybe, but it helps me really appreciate what happens when I am willing to jump out of flow. By scheduling things in four hour blocks, or even two hour blocks, I can make sure that I don't lose more time than I intend to. Even good and positive interruptions can kill productivity because of this context switch (jumping out of testing to go sit in a meeting for a story kickoff). Sure, the meeting may have only been fifteen minutes, but getting back into my testing flow, to that optimal focus, might take forty-five minutes or more.
Joseph used a few examples to illustrate the times when certain things were likely to happen or be more likely to be effective (I've played with this quite a bit over the years, so I'll chime in with my agreement or disagreement).
• When is the best time to convince someone to change their mind?
This was an exercise where we saw words that represented colors, and we needed to call out the words based on a selected set of rules. When there was just one color to substitute with a different word, it was easy to follow along. When there were more words to substitute, it went much slower and it was harder to make the substitutions. In this we found our natural resistance to changing our mind about what we are perceiving. Our ability to work through the exercise is much more likely to be successful in the morning, after breakfast, than later in the day when we are a little fatigued. Meal breaks tend to allow us to change our opinions or minds, because blood sugar gives us the energy to consider other options. If we are low on blood sugar, the odds of being persuaded to a different view are much lower.
• How do you optimize your tasks and schedule?
Is there a best time for creativity? I know a bit of this, as I've written on it before, so spoilers, there are times, but they vary from person to person. Generally speaking, there are two waves that people ride throughout the day, and the way that we see things is dependent on these waves. I've found for myself that the thorniest problems and the writing I like to do I can get done early in the morning (read this as early early, like 4 or 5 am) and around 2:00 p.m. I have always used this to think that these are my most creative times... and actually, that's not quite accurate. What I am actually doing is using my most focused and critical thinking time to accomplish creative tasks. That's not the same thing as when I am actually able to "be creative". What I am likely doing is actually putting to output the processing I've done on the creative ideas I've considered. When did I consider those ideas? Probably at the times when my critical thinking is at a low. I've often said this is the time I dod my busywork because I can't really be creative. Ironically, the "busy work time" is likely when I start to form creative ideas, but I don't have that "oh, wow, this is great, strike now" moment until those critical thinking peaks. What's cool is that these ideas do make sense. By chunking time around tasks that are optimized for critical thinking peaks and scheduling busy work for down periods, I'm making some room for creative thought.
• Does silence work for or against you?
Sometimes when we are silent when people speak, we may create a tension that causes people to react in a variety of different ways. I offered to Joseph that silence, as a received communication from a listener back to me, tends to make me talk more. This can be good, or it can cause me to give away more than I intend to. The challenge is that silence doesn't necessarily mean that people disagree, are mad, or are aloof. They may just genuinely be thinking, withholding comment, or perhaps showing they don't have an opinion. The key is that silence is a tool, and sometimes it can work in unique and interesting ways. As a recipient, it lets you reflect. As a speaker, it can draw people out. The trick is to be willing to use it, in both directions.
10:15 am - 11:15 am
RISK VS COST: UNDERSTANDING AND APPLYING A RISK BASED TEST MODEL
In an ideal world, we have plenty of time, plenty of people, and plenty of system resources, and assisting tools to do everything we need to do. Problem is, there's no such thing as that ideal environment, especially today. We have pressures to release more often, sometimes daily. While Agile methodologies encourage us to slice super thin, the fact is, we still have the same pressures and realities. Instead of shipping a major release once or twice a year, we ship a feature or a fix each day. The time needs are still the same, and the fact is, there is not enough time, money, system resources or people to do everything comprehensively, at least not in a way that would be economically feasible.
Since we can't guarantee completeness in any of these categories, there are genuine risks to releasing anything. We operate at a distinct disadvantage if we do not acknowledge and understand this. As software testers, we may or may not be the ones to do a risk assessment, but we absolutely need to be part of the process, and we need to be asking questions about the risks of any given project. Once we have identified the risks, we can prioritize them, and from there, we can start considering how to address or mitigate them.
Scope of a project will define risk. User base will affect risk. Time to market is a specific risk. User sentiment may become a risk. Comparable products behaving in a fundamentally different manner than what we believe our product should do is also a risk. We can mess this up royally if we are not careful.
In the real world, complete and comprehensive testing is not possible for any product. That means that you will always leave things untested. It's inevitable. By definition, there's a risk you will miss something important, and leave yourself open to the Joe Strazzere Admonition ("Perhaps they should have tested that more!").
Test plans can be used effectively, not as a laundry list of what we will do, but as a definition and declaration of our risks, with prescriptive ideas as to how we will test to mitigate those risks. With the push to remove wasteful documentation, I think this would be very helpful. Lists of test cases that may or may not be run aren't very helpful, but developing charters based on risks identified? That's useful and not wasteful documentation. In addition, have conversations with the programmers and fellow testers. Get to understand their challenges and areas that are causing them consternation. It's a good bet that if they tell you a particular area has been giving them trouble, or has taken more time than they expected, that's a good indication that you have a risk area to test.
It's tempting to think that we can automate much of these interactions, but the risk assessment, mitigation, analysis and game plan development is all necessary work that we need to do before we write line one of automation. All of those are critical, sapient tasks, and critical thinking, sapient testers are valuable in this process, and if we leverage the opportunities, we can make ourselves indispensable.
11:30 am - 12:30 pm
PERFORMANCE TESTING IN AGILE CONTEXTS
The other title for this talk is "Early Performance Testing", and a lot of the ideas Eric is advocating are about looking for ways to front-load performance testing rather than waiting until the end and then worrying about optimization and rework. This makes a lot of sense when we consider that getting performance numbers early in development means we can get real numbers and real interactions. It's a great theory, but of course the challenge is in "making it realistic". Development environments are by their very nature not as complete or robust as a production environment. In most cases, the closest we can come is an artificial simulation and a controlled experiment. It's not a real life representation, but it can still inform us and give us ideas as to what we can and should be doing.
One of the valuable systems we use in our testing is a duplicate of our production environment. In our case, when I say production, what I really mean is a duplicate of our staging server. Staging *is* production for my engineering team, as it is the environment that we do our work on, and anything and everything that matters to us in our day to day efforts resides on staging. It utilizes a lot of the things that our actual production environment uses (database replication, HSA, master slave dispatching, etc.) but it's not actually production, nor does it have the same level of capacity, customers and, most important, customer data.
Having this staging server as a production basis, we can replicate that machine and, with the users, data and parameters as set, we can experiment against it. Will it tell us performance characteristics for our main production server? No, but it will tell us how our performance improves or degrades around our own customer environment. In this case, we can still learn a lot. By developing performance tests against this duplicate staging server, we can get snapshots and indications of problem areas we might face in our day to day exercising of our system. What we learn there can help inform changes our production environment may need.
Production environments have much higher needs, and replicating performance, scrubbing data, setting up a matching environment and using that to run regular tests might be cost prohibitive, so the ability to work in the small and get a representative look can act as an acceptable stand-in. If our production system is meant to run on 8 parallel servers and handle 1000 concurrent users, we may not be able to replicate that, but creating an environment with one server and determining if we can run 125 concurrent connections and observe the associated transactions can provide a representative value. We may not learn what the top end can be, but we can certainly determine if problems occur below the single server peak. If we discover issues here, it's a good bet production will likewise suffer at its relative percentage of interactions.
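The arithmetic behind that scaling is worth making explicit. Here is a minimal Python sketch using the example numbers from this paragraph; it assumes roughly linear scaling across servers, which real systems rarely deliver, so treat the result as a floor for a single test server, not a spec:

```python
# Back-of-the-envelope scaling for a single-server load test.
# Assumes load spreads evenly and scales linearly across servers --
# a simplification, but useful for sizing a scaled-down experiment.

def per_server_target(total_users, server_count):
    """Concurrent users one server should handle if load spreads evenly."""
    return total_users // server_count

# Production is specced for 1000 concurrent users across 8 servers,
# so a single test server should sustain its proportional share:
target = per_server_target(1000, 8)
print(target)  # 125
```

If the single server struggles well below that 125-connection mark, that is a strong hint the full fleet will struggle at its corresponding multiple.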
How about Performance Testing in CI? Can it be done? It's possible, but there are also challenges. In my own environment, were we to do performance tests in our CI arrangement, what we would really be doing is testing the parallel virtualized servers. It's not a terrible metric, but I'd be leery of assigning authoritative numbers, since the actual performance of the virtualized devices cannot be guaranteed. In this case, we can use trending to see if we either get wild swings, or if we get consistent numbers with occasional jumps and bounces.
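As a sketch of that trending idea: rather than asserting hard numbers against virtualized CI runners, we can look at run-to-run variation and only flag a series when it swings wildly. This is an illustrative Python fragment with made-up samples and an arbitrary 25% threshold, not a prescription:

```python
import statistics

def classify_trend(samples_ms, swing_threshold=0.25):
    """Label a series of CI timing samples as 'stable' or 'noisy'.

    Virtualized CI runners make absolute numbers unreliable, so instead
    of asserting hard limits we look at run-to-run variation: if the
    standard deviation exceeds swing_threshold of the mean (25% by
    default), the trend is too noisy to draw conclusions from.
    """
    mean = statistics.mean(samples_ms)
    stdev = statistics.pstdev(samples_ms)
    return "noisy" if stdev > swing_threshold * mean else "stable"

print(classify_trend([510, 495, 505, 520, 500]))   # stable
print(classify_trend([300, 900, 250, 1100, 400]))  # noisy
```

A "stable" series with one sudden jump is worth investigating; a "noisy" series mostly tells you about the CI hardware, not the product.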
Also, we can do performance tests that don't require hard numbers at all. We can use a stopwatch, watch the screens render, and use our gut intuitions as to whether or not the system is "zippy" or "sluggish". These are not quantitative values, but they have value, and we should leverage our own senses to encourage further explorations.
The key takeaway is that there is a lot we can do and there's a lot of options we have so that we can make changes and calibrate our interactions and areas we are interested in. We may not be able to be as extensive as we might be with a fully finished and prepped performance clone, but there's plenty we can do to inform our programmers as to the way the system is behaving under pressure.
1:15 pm - 1:45 pm
KEYNOTE: THE MEASURES OF QUALITY
One of the biggest challenges we all face in the world of testing is that quality is wholly subjective. There are things that some people care about passionately that are far less relevant to others. The qualitative aspects are not enumerable, regardless of how hard we try to make them so. Having said that, there are some areas where counts, values, and numbers are relevant. To borrow from my talk yesterday, I can determine if an element exists or if it doesn't. I can measure the load time of a page. "Fast" or "slow" are entirely subjective, but if I can determine that it takes 54 milliseconds, on average over 50 loads, for an element on a page to load, that does give me a number. The next question, of course, is "is that good enough?" It may be if it's a small page with only a few elements. If there are several elements on the page that each take the same amount of time to load serially, that may prove to be "fast" or "slow".
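To make that concrete, here is a small, hypothetical Python sketch. The 54 millisecond average and the 250 millisecond page "budget" are invented numbers, but they show how a measured value only becomes "fast" or "slow" once context is applied, in this case, how many elements load serially:

```python
# A number by itself ("54 ms") says nothing about fast or slow; the
# judgment comes from context. These figures are made up for
# illustration, not measurements from any real product.

AVG_ELEMENT_MS = 54    # hypothetical average over 50 loads
PAGE_BUDGET_MS = 250   # hypothetical "feels fast" budget for the page

def verdict(element_count):
    """Fast or slow, assuming elements load one after another."""
    total = AVG_ELEMENT_MS * element_count  # serial worst case
    return "fast" if total <= PAGE_BUDGET_MS else "slow"

print(verdict(3))   # 162 ms total -> fast
print(verdict(10))  # 540 ms total -> slow
```

The same 54 ms measurement yields opposite verdicts depending on the page, which is the whole point: the number is objective, the quality judgment is contextual.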
Metrics are a big deal when it comes to financials. We care about numbers when we want to know how much stuff costs, how much we are earning, and, to borrow an oft used phrase, "at the end of the day, does the Excel line up?" If it doesn't, regardless of how good our product is, it won't be around long. Much as we want to believe that metrics aren't relevant, sadly they are, in the correct context.
Testing is a cost. Make no mistake about it. We don't make money for the company. We can hedge against losing money, but as testers, unless we are selling testing services, testing is a cost center, not a revenue center. To the financial people, any change in our activities and approaches is often looked at in terms of the costs those changes will incur. Their metric is "how much will this cost us?" Our answer needs to articulate "this cost will be offset by securing and preserving current and future income". Glamorous? Not really, but it's essential.
What metrics do we as testers actually care about, or should we care about? In my world view, I use the number of bugs found vs. the number of bugs fixed. That ratio tells me a lot. This is, yet again, a drum I hammer regularly, and it should surprise no one when I say I personally value the tester whose ratio of bugs reported to bugs fixed is closest to 1:1. Why? It means to me that testers are not just reporting issues, but that they are advocating for their being fixed. Another metric often asked about is the number of test cases run. To me, it's a dumb metric, but there's an expectation outside of testing that it is informative. We may know better, but how do we change the perspective? In my view, the better discussion is not "how many test cases did you run?" but "what tests did you develop and execute relative to our highest risk factors?" Again, in my world view, I'd love to see the ratio of business risks to test charters completed and reported be as close to 1:1 as possible.
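The ratio itself is trivial to compute; the hard part is the advocacy behind it. A toy Python illustration (the counts are invented, and the 1:1 target is my own heuristic from above, not an industry standard):

```python
def advocacy_ratio(bugs_fixed, bugs_reported):
    """Ratio of fixed to reported bugs; values near 1.0 suggest the
    tester is advocating for fixes, not just filing reports."""
    if bugs_reported == 0:
        return 0.0  # nothing reported, nothing to advocate for
    return bugs_fixed / bugs_reported

# Hypothetical tester: 50 bugs reported, 45 driven through to a fix.
print(advocacy_ratio(45, 50))  # 0.9 -- close to the 1:1 ideal
```

The same shape works for the risk metric: business risks identified vs. test charters completed and reported against them.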
In the world of metrics, everything tends to get boiled down to Daft Punk's "Harder Better Faster Stronger". I use that lyrical quote not just to stick an ear-worm in your head (though if I have, you're welcome, or I'm sorry, take your pick), but because it's really what metrics mean to convey. Are we faster at our delivery? Are we covering more areas? Do we finish our testing faster? Does our deployment speed factor out to greater revenue? Once we answer yes or no, the next step is "how much or how little, how frequent or infrequent? What's the number?"
Ultimately, when you get to the C level execs, qualitative factors are tied to quantitative numbers, and most of the time, the numbers have to point to a positive and/or increasing revenue. That's what keeps companies alive. Not enough money, no future, it's that simple.
Brad suggests that, if we need to quantify our efforts, these are the ten areas that will be the most impactful.
It's a pretty good list. I'd add my advocacy and risk ratios, too, but the key to all of this is these numbers don't matter if we don't know them, and they don't matter if we don't share them.
2:00 pm - 3:00 pm
TESTING IS YOUR BRAND. SELL IT!
One of the oft heard phrases among software testers and about software testing is that we are misunderstood. Kate Falanga is in some ways a Don Draper of the testing world. She works at Huge, and Huge is like Mad Men, just with more computers, though perhaps equal amounts of alcohol ;). Seriously, though, Kate approaches software testing as though it were a brand, because it is, and she's alarmed at the way the brand is perceived. The fact is, every one of us is a brand unto ourselves, and what we do or do not do affects how that brand is perceived.
Testers are often not very savvy about marketing themselves. I have come to understand this a great deal lately. The truth is, many people interpret my high levels of enthusiasm, my booming voice, and my aggressive passion and drive to be good marketing and salesmanship. It's not. It can be contagious, it can be effective, but that doesn't translate to good marketing. Once that shine wears off, if I can't effectively carry objectives and expectations to completion, or to encourage continued confidence, then my attributes matter very little, and can actually become liabilities.
I used to be a firebrand about software testing and discussing all of the aspects about software testing that were important... to me. Is this bad? Not in and of itself, but it is a problem if I cannot likewise connect this to aspects that matter to the broader organization. Sometimes my passion and enthusiasm can set an unanticipated expectation in the minds of my customers, and when I cannot live up to that level of expectation, there's a let down, and then it's a greater challenge to instill confidence going forward. Enthusiasm is good, but the expectation has to be managed, and it needs to align with the reality that I can deliver.
Another thing testers often do is emphasize that they find problems and that they break things. I do agree with the finding problems, but I don't talk about breaking things very much. Testers, generally speaking, don't break things; we find where they are broken. Regardless of how that's termed, it is perceived as a negative. It's an important negative, but it's still seen as something that is not pleasant news. Let's face it, nobody wants to hear their product is broken. Instead, I prefer, and it sounds like Kate does too, emphasizing more positive portrayals of what we do. Rather than say "I find problems", I emphasize that "I provide information about the state of the project, so that decision makers can make informed choices to move forward". Same objective, but totally different flavor and perception. Yes, I can vouch for the fact that the latter approach works :).
The key takeaway is that each of us, and by extension our entire team, sells to others an experience, a lifestyle and a brand. We are perceived both individually and collectively, and sometimes one member of the team can impact the entire brand, for good or for ill. Start with yourself, then expand. Be the agent of change you really want to be. Ball's in your court!
3:15 pm - 4:15 pm
BUILDING LOAD PROFILES FROM OBJECTIVE DATA
Wow, last talk of the day! It's fun to be in a session with James, because I've been listening to him via PerfBytes for the past two years, so much of this feels familiar, but more immediate. While I am not a performance tester directly, I have started making strides to get into this world, because I believe it to be valuable in my quest to be a "specializing generalist" or a "generalizing specialist" (service mark Alan Page and Brent Jensen ;) ).
Eric Proegler, in his talk earlier, talked about the ability to push performance testing earlier in the development and testing process. To continue with that idea, I was curious to get some ideas as to how to build a profile to actually run performance and load testing. Can we send boatloads of requests to our servers and simulate load? Sure, but will that actually be representative of anything meaningful? In this case, no. What we really want to create is a representative profile of traffic and interactions that actually approaches the real use of our site. To do that, we need to think about what will actually represent our users' interactions with our site.
That means that workflows should be captured, but how can we do that? One way is to analyze previous transactions in our logs and recreate steps and procedures. Another is to look at access or error logs to see what people want to find but can't, or to see if there are requests that don't seem to make any sense (i.e. potential attacks on the system). The database admin, the web admin and the CDN administrator are all good people to cultivate relationships with, so we can discuss these needs and encourage them to become allies.
Ultimately, the goal of all of this is to steer clear of "ugly baby syndrome" and of looking to cast or deflect blame, and to do that, we really need to be as objective as possible. With a realistic load of transactions that is representative, there's less of a chance for people to say "that test is not relevant" or "that's not a real world representation of our product".
Logs are valuable to help gauge what actually matters and what is junk, but those logs have to be filtered. There are many tools available to help make that happen, some commercial, some open source, but the goal is the same: look for payload that is relevant and real. James encourages looking at individual requests to see who generated a request, who referred it, what request was made, and what user agent made it (web vs. mobile, etc.). What is interesting is that we can see patterns that show us what paths users take to get to our system and what they traverse in our site to get to that information. Looking at these traversals, we can visualize pages and page relationships, and perhaps identify where the "heat" is in our system.
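As a rough illustration of that kind of log mining, here is a minimal Python sketch that assumes the common Apache/nginx "combined" log format (an assumption on my part, not something from the session) and counts which paths get the most hits. Real logs need far more robust parsing and filtering of bots, attack noise, and static assets, so treat this as a starting point only:

```python
from collections import Counter
import re

# Matches the Apache/nginx "combined" log format:
# ip ident user [timestamp] "METHOD /path PROTO" status bytes "referrer" "agent"
LINE_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'\d+ \d+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def hot_paths(log_lines, top=3):
    """Count requested paths to see where the 'heat' is."""
    hits = Counter()
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:  # silently skip lines that don't parse
            hits[m.group("path")] += 1
    return hits.most_common(top)

# Fabricated sample lines for illustration:
sample = [
    '1.2.3.4 - - [02/Apr/2015:10:00:00 -0700] "GET /dashboard HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '1.2.3.4 - - [02/Apr/2015:10:00:01 -0700] "GET /dashboard HTTP/1.1" 200 512 "/login" "Mozilla/5.0"',
    '5.6.7.8 - - [02/Apr/2015:10:00:02 -0700] "GET /reports HTTP/1.1" 200 2048 "-" "Mobile Safari"',
]
print(hot_paths(sample))  # [('/dashboard', 2), ('/reports', 1)]
```

The captured referrer and user agent fields are what let you go further, reconstructing the traversal paths and web-vs-mobile splits described above.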
Wow, that was an intense and very fast two full days. My thanks to everyone at STP for putting on what has been an informative and fun conference. My gratitude to all of the speakers who let me invade their sessions, type way too loudly at times (I hope I've been better today) and inject my opinions here and there.
As we discussed in Kate's session, change comes in threes. The first step is with us, and if you are here and reading this, you are looking to be the change for yourself, as I am looking to be the change in myself.
The next step is to take back what we have learned to our teams, openly if possible, by stealth if necessary, but change your organization from the ground up.
Finally, if this conference has been helpful and you have done things that have proven effective, had success in some area, or work in an area that you feel is underrepresented, take the third step and engage at the community level. Conferences need fresh voices, and the fact is, experience reports of real world application and observation have immense value, and they are straightforward talks to deliver. Consider putting your hat in the ring to speak at a future session of STP-CON, or another conference near you.
Testing needs all of us, and it will only be as good as the contributors that help build it. I look forward to the next time we can get together in this way, and see what we can build together :).
Monday, April 13, 2015 14:34 PM
Monday, March 30, 2015 23:06 PM
These girls did not know each other well prior to this trip. Unlike our city, which has one Intermediate school within its city limits, Narita has several schools represented, and the delegates chosen typically do not have hosting assignments with students from their school. Therefore, it was not just a learning experience for the girls to relate to us, but to each other.
On Friday, as I was getting ready to take my eldest daughter to her school, one of the girls came downstairs with some prepackaged single-serving rice packets. She motioned to me, in what English she could muster, that she wanted to cook the rice. I figured "OK, sure, I'd be happy to do that for you", and did so. After it was cooked, I opened the small container, put it in a bowl, and then asked her if she wanted a fork or chopsticks. She looked at me quizzically for a moment, and then she said "chopsticks". I handed her the chopsticks, and figured "OK, the girls want to have rice as part of their breakfast. No problem." I then lined up the other packets, explained to Christina how to cook them in the microwave (40 seconds seems fine at our microwave's power), and then went to take my daughter to school. The rest of this story comes courtesy of texts and after-the-fact revelation ;).
While I was out and Christina was helping get the rest of the packets of rice prepared, she pulled out a few bowls, put the rice in the bowls with a few more chopsticks, and placed them on the table for the girls. The young lady who had come down to make the request then said "no, this is for you". Christina smiled, said thank you, and then started to eat the rice from one of the bowls. What she noticed after a few bites was that the girl was staring at her, frozen in place, and looking very concerned. At this point, Christina stopped, put down the bowl, and asked if everything was OK. Our Japanese exchange student tried to grasp for the words to explain what she wanted to say, and as this was happening, another of the exchange students, who was much more fluent in English, saw what was happening.
"Oh, this rice is for a breakfast dish we planned to make for you. Each of the packages has enough rice to make one Onigiri (which is to say, a rice ball, a popular food item in Japan)." At this, Christina realized what had happened, and texted me what she had done. She felt mortified, but I assured her it was OK, and that I'd happily split mine with her to make up for it. With that, we were able to work out the details of what they wanted and needed from us so that they could make the Onigiri for us (which they did, and which was delicious, I might add!).
I smiled a little bit at this, because I have felt this situation a few times in my career, although it wasn't trying to communicate from English to Japanese and back. Instead, I've had moments like this where I've had to explain software testing concepts to programmers or others in the organization, and they have tried to explain their processes to me. It's very likely that I've had more than a few moments of my own where I must have stood there, paralyzed and watching things happen, where I wanted to say "no, stop, don't do that, I need to explain more" but felt like the world was whizzing past me. As my wife explained the situation, I couldn't help but feel for both of them. Fortunately, in this case, all it meant was one fewer rice ball. In programming and testing, these miscommunications or misunderstandings are often where things go ridiculously sideways, albeit usually not in an amusing way. The Larsen Onigiri Incident, I'm sure, will become a story of humor on both sides of the Pacific for the participants. It's a good reminder to make sure, up front, that I understand what others in my organization are thinking before we start doing.
Wednesday, March 25, 2015 18:15 PM
After some final tweaks, messages being sent out to the speakers and presenters, keynotes and other behind the scenes machinations... we are pleased to announce that registration for AST's tenth annual conference, CAST 2015, to be held August 3-5, 2015 in Grand Rapids, Michigan, is now open!
The pre-conference tutorials have been published, and they are being offered by actual practitioners, thought leaders, and experts in testing. All four of them are worth attending, but alas, we only have room for twenty-five attendees in each, so if you want to get in, well, now is the time :)!
The tutorials we are offering this year are:
"Speaking Truth to Power" – Fiona Charles
"Testing Fundamentals for Experienced Testers" – Robert Sabourin
"Mobile App Coverage Using Mind Maps" – Dhanasekar Subramaniam
"'Follow-your-nose' Testing – Questioning Rules and Overturning Convention" – Christin Wiedemann
Of course, there is also the two day conference by itself that you can register for, and yes, we wholeheartedly encourage you to register soon.
If you register by June 5th, you can save up to $400 with our "Early Bird" discount. We limit attendance to 250 attendees, and CAST sold out last year, so register soon to secure your seat.
Additionally, we will be offering webCAST again this year. Really want to attend, but you just can't make it those days? We will be live streaming keynotes, full sessions, and "CAST Live" again this year!
Come join us this summer for our tenth annual conference in downtown Grand Rapids at the beautiful Grand Plaza Hotel August 3-5, or join us online for webCAST. Together, let's start “Moving Testing Forward.”
Monday, March 23, 2015 19:10 PM
Testing is Broken by Philip Howard.
First things first; yes, I disagree with this post, but not for the reasons people might believe.
Testing is indeed "broken" in many places, but I don't automatically reach the final conclusion of this article, which is "it needs far more automation". What it needs, in my opinion, is better quality automation, the kind that addresses the tedious and the mundane, so that testers can apply their brains to much more interesting challenges. The post claims there's a conflict of interest: recruiters and vendors are more interested in selling bodies than in selling solutions and tools, and therefore push manual testing and manual testing tools over automated tools. From my vantage point, this has not been the case, but for the sake of discussion, I'll run with it ;).
Philip uses a definition of Manual Testing that I find inadequate. It’s not entirely his fault; he got it from Wikipedia. Manual testing as defined by Wikipedia is "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases”.
That's correct, in a sense, but it leaves out a lot. What it doesn't mention is that manual testing is a skill requiring active thought, consideration, and discernment, the kind of judgment that quantitative measures (which automation handles quite adeptly) fall short of.
I answered back on a LinkedIn forum post related to this blog entry by making a comparison. I focused on two questions related to accessibility:
1. Can I identify that the wai-aria tag for the element being displayed has the defined string associated with it?
That’s an easy "yes" for automation, because we know what we are looking for. We're looking for an attribute (a wai-aria tag) within a defined item (a stated element), and we know what we expect (a string like “Image Uploaded. Press Next to Continue”). Can this example benefit from improved automation? Absolutely!
2. Can I confirm that the experience for a sight-impaired user of a workflow is on par with the experience of a sighted user?
That’s impossible for automation to handle, at least with the tech we have today. Why? Because this isn’t quantifiable. It’s entirely subjective. To make the conclusion that it is either an acceptable or not acceptable workflow, a real live human has to address and test it. They must use their powers of discernment, and say either “yes, the experience is comparable” or “no, the experience is unacceptable”.
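To make the contrast concrete, the first question (the quantifiable one) can be sketched in a few lines. This is my own illustration using only Ruby's standard library and hypothetical markup; a real suite would drive a browser with a tool like Selenium or Watir, and this naive regex assumes the id attribute appears before aria-label:

```ruby
# Check that a given element carries the expected aria-label text.
# Naive sketch: assumes id appears before aria-label in the markup.
def aria_label_for(html, element_id)
  m = html.match(/id="#{Regexp.escape(element_id)}"[^>]*aria-label="([^"]*)"/)
  m && m[1]
end

page = '<div id="upload-status" aria-label="Image Uploaded. Press Next to Continue">...</div>'
expected = "Image Uploaded. Press Next to Continue"
puts aria_label_for(page, "upload-status") == expected  # an easy automated "yes"
```

The second question has no equivalent sketch, which is exactly the point: there is no assertion we can write for "the experience is comparable."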
To reiterate, I do believe testing, and testers, need some help. However, more automation for the sake of more automation isn't going to do it. What's needed is the ability to leverage the tools available, to offload as much of the repetitive busywork we know we have to do every time, and get those quantifiable elements sequenced. Once we have a handle on that, then we have a fighting chance to look at the qualitative and interesting problems, the ones we haven't figured out how to quantify yet. Real human beings will be needed to take on those objectives, so don't be so quick to be rid of the bodies just yet.
Friday, March 20, 2015 17:28 PM
The following comment was posted to Twitter yesterday by Aaron Hodder, and I felt compelled not just to retweet it, but to write a follow up about it.
"People that conflate being anti wasteful documentation with anti documentation frustrate me."
I think this is an important area for us to talk about. This is where the pendulum, I believe, has swung too far in both directions during my career. I remember very well the 1990s, and having to use ISO-9001 standards for writing test plans. For those not familiar with this approach, the idea was that you had a functional specification, and that functional specification had an accompanying test plan. In most cases, that test plan mapped exactly to that functional specification. We often referred to this as "the sideways spec". That was a joking term we used to basically say "we took the spec, and we added words like confirm or verify to each statement." If you think I'm kidding, I assure you I'm not. It wasn't exactly that way, but it was very close. I remember all too well writing a number of detailed test plans, trying to be as specific as possible, only to have them turned back to me with "it's not detailed enough." When I finally figured out that what they really meant was "just copy the spec", I dutifully followed. It made my employers happy, but it did not result in better testing. In fact, I think it's safe to say it resulted in worse testing, because we rarely followed what we had written. We did what we felt we had to do with the time that we had, and the document existed to cover our butts.
Fast forward now a couple of decades, and we are in the midst of the "Agile age”. In this Agile world, we believe in not providing lots of "needless documentation”. In many environments, this translates to "no documentation" or "no spec” outside of what appears on the Kanban board or Scrum tracker. A lot is left to the imagination. As a tester, this can be a good thing. It allows me the ability to open up different aspects, and go in directions that are not specifically scripted. That's the good part.
The bad part is, because there's sparse documentation, we don't necessarily explore all the potential avenues because we just don’t know what they are. Often, I create my own checklists and add ideas of where I looked, and in the past I’ve received replies like “You're putting too much in here, you don't need to do that, this is overkill." I think it's important for us to differentiate between not overthinking and over planning, and making sure that we are giving enough information and providing enough details to be successful. It's never going to be perfect. The idea is that we communicate, we talk, we don't just throw documents at each other. We have to make sure that we have communicated enough, and that we really do understand what needs to happen.
I'm a fan of the "Three Amigos" model. Early in the story, preferably at the very beginning, three people come together: the product owner, the programmer and the tester. That is where these details can be discussed and hammered out. I do not expect a full spec or implementation at this point, but it is important that everybody who comes to this meeting share their concerns, considerations and questions. There's a good chance we'll still miss a number of things if we don't take the time here to talk out what could happen. Is it possible to overdo it? Perhaps, if we're getting bogged down in minutiae and details, but I don't think it is a bad thing to press for real questions such as "Where is this product going to be used? What are the permutations that we need to consider? What subsystems might this also affect?" If we don't have the answer right then and there, we still have the ability to say "Oh, yeah, that's important. Let's make sure that we document that."
There's no question, I prefer this more nimble and agile method of developing software to yesteryear. I would really rather not go back to what I had to do in the 90s. However, even in our trying to be lean, and in our quest for minimum viable products, let's be sure we are also communicating effectively about what our products need to be doing. My guess is, the overall quality of what comes out the other end will be much better for the effort.
Thursday, March 19, 2015 18:26 PM
However, there are those days where things look to go terribly wrong, where the information radiator is bleeding red. Your tests have caught something spectacular, something devastating. What I find ironic about this particular situation is not the fact that we jump for joy and say "wow, we sure dodged a bullet there, we found something catastrophic!" Instead, what we usually say is "let's do a debug of this session, because there's no way that this could be right. Nothing fails that spectacularly!"
I used to think the same thing, until I witnessed that very thing happen. It was a typical release cycle for us, stories being worked on as normal, work on a new feature tested, deemed to work as we hoped it would, with a reasonable enough quality to say "good enough to play with others". We merged the changes to the branch, and then we ran the tests. The report showed a failed run. I opened the full report and couldn't believe what I was seeing. More than 50% of the spun up machines were registering red. Did I at first think "whoah, someone must have put in a catastrophically bad piece of code!" No, my first reaction was to say "ugh, what went wrong with our servers?!" This is the danger we face when things just "kind of work" on their own for a long time. We are so used to little hiccups that we know what to do with them. We are totally unprepared when we are faced with a massive failure. In this case, I went through to check all of the failed states of the machines to look for either a network failure or a system failure... only none was to be found. I looked at the test failure statements expecting them to be obvious configuration issues, but they weren't. I took individual tests and ran them in real time and watched the console to see what happened. The screens looked like what we'd expect to see, but we were still failing tests.
After an hour and a half of digging, I had to face a weird fact... someone committed a change that fundamentally broke the application. Whoa! In this Continuous integration world I live in now, the fact is, that's not something you see every day. We gathered together to review the output, and as we looked over the details, one of the programmers said "oh, wow, I know what I did!" He then explained that he had made a change to the way that we fetched cached elements and that change was having a ripple effect on multiple sub systems. In short, it was a real and genuine issue, and it was so big an error that we were willing to disbelieve it before we were able to accept that, yep, this problem was totally real.
As a tester, sometimes I find myself getting tied up in minutiae, and minutiae becomes the modus operandi. We react to what we expect. When a major sinkhole in a program gets delivered to us, we are more likely to not trust what we are seeing, because we believe such a thing is just not possible any longer. I'm here to tell you that it does happen, it happens more than I want to believe, or admit, and I really shouldn't be as surprised as I am when it happens.
If I can make any recommendation to a tester out there, it's this: if you are faced with a catastrophic failure, take a little time to see if you can understand what it is, and what causes it. Do your due diligence, of course. Make sure that you're not wasting other people's time, but also realize that, yes, even in our ever so interconnected and streamlined world, it is still possible to introduce a small change that has a monumentally big impact. It's more common than you might think, and very often, really, it's not your imagination. You've hit something big. Move accordingly.
Tuesday, March 10, 2015 16:49 PM
With thanks to Tomasi Akimeta for making me into "TESTHEAD" :)!!!
So how has this blog changed over the past five years? For starters, it's been a wonderful learning ground for me. Notice how I said that: a learning ground for me. When I started it, I had intended it to be a teaching ground to others. Yeah, that didn't last long. I realized pretty quickly how little I actually knew, and how much I still had to learn (and am still learning). I've found it to be a great place to "learn in public" and to, in many ways, be a springboard for many opportunities. During its first two years, most of the writing that I did in any capacity showed up here. I could talk about anything I wanted to, so long as it loosely fit into a software testing narrative.
From there, I've been able to explore a bunch of different angles, try out initiatives, and write for other venues, the most recent being a permanent guest blog with IT Knowledge Exchange as one of the Uncharted Waters authors. While I have other places I am writing and posting articles, I still love the fact that I have this blog and that it still exists as an outlet for ideas that may be "not entirely ready for prime time", and I am appreciative of the fact that I have a readership that values that and allows me to experiment openly. Were it not for the many readers of this blog, along with their forwards, shares, retweets, plus-ones and mentions in their own posts, I wouldn't have anywhere near the readership I currently have, and I am indeed grateful to all of you who take the time to read what I write on TESTHEAD.
So what's the story behind the picture you see above? My friend Tomasi Akimeta offered to make me some fresh images that I could associate with my site. He asked me what my site represented to me, and how I hoped it would be remembered by others. I laughed and said that I hoped my site would be seen as a place where we could go beyond the expected when it comes to software testing. I'd like to champion thinking rather than received dogma, experimentation rather than following a path by rote, and champion the idea that there is intelligence that goes into testing. He laughed and said "so what you want to do is show that there's a thinking brain behind that crash test dummy?" He was referring to the favicon that's been part of the site for close to five years. I said "Yes, exactly!" He then came back a few weeks later and said "So, something like this?" After I had a good laugh and a smile at the ideas he had, I said "Yes, this is exactly what I would like to have represent this site; the emergence of the human and the brain behind the dummy!"
My thanks to Tomasi for helping me usher in my sixth year with style, and to celebrate the past five with reflection, amusement and a lot of gratitude for all those who regularly read this site. Here's to future days!
Friday, March 06, 2015 19:18 PM
"Is Your Killer App Killing You?" Even if that title is a bit too much hyperbole, there is no question that E-mail can be exhausting, demoralizing, and just really hard to manage.
One area that I think is really needed, and would make E-mail much more effective, is some way to extend messages to automatically start new processes. Some of this can be done at a fairly simple level. Most of the time, though, what ends up happening is that I get an email, or a string of emails, I copy the relevant details, and then I paste them somewhere else (calendar, a wiki, some document, Slack, a blog post, etc.). What is missing, and what I think would be extremely helpful, would be to have ways to register key applications with your email provider, whoever it may be, and then have key commands or right click options that would let you take that message, choose what you want to do with it, and then move to that next action.
Some examples... if you get a message and someone writes that they'd like to get together at 3:00 p.m., having the ability to right there schedule an appointment and lock the details of the message in place seems like it would be simple (note the choice of words, I said it seems it would be simple, I'm not saying it would be easy ;) ). If a message includes a dollar amount, it would be awesome to be able to right click or key command so that I could record the transaction in my financial software or create an invoice (either would be legitimate choices, I'd think).
Another option that I didn't mention in the original piece, but that I have found to be somewhat helpful, is to utilize tools that will allow you to aggregate messages that you can review later. For me, there are three levels of email detail that I find myself dealing with.
1. E-mail I genuinely could not care less about, but that doesn't rise to the level of outright SPAM.
I am unsentimental. Lots of stuff from sites I use regularly comes to my inbox, and I genuinely do not want to see it. My general habit is to delete it without even opening it. If you find yourself doing this over and over again, just unsubscribe and be done with it. If the site in question doesn't give you a clear option for that, then make rules that will delete those messages so you don't have to. So far, I've yet to find myself saying "aww, man, I really wish I'd seen that offer that I missed, even though I deleted the previous two hundred that landed in my inbox." Cut them loose and free your mind. It's easy :).
2. Emails with a personal connection that matter enough for me to review and consider, but that I may well not actually do anything with. Still, much of the time, I probably will.
These are the messages I let drop into my inbox, usually to be subject to various filter rules and to get sorted into the buckets I want to deal with, but I want to see them and not let them sit around.
3. That stuff that falls between #1 and #2.
For these messages, I am currently using an app called Unroll.me. It's a pretty basic tool in that it creates a folder in my IMAP account (called Unroll.Me), and any email that I have decided to "roll up" and look at later goes into this app, and this folder. There are some other features the app offers, such as unsubscribing from a sender (if the service's API is set up to do that), including a sender in the roll up, or leaving its messages in your Inbox. Each day, I get a message that tells me what has landed in my roll up, and I can review each item at that point in time.
I will note that this is not a perfect solution. The Unsubscribe works quite well, and the push to Inbox also has no problems. It's the Roll up step that requires a slight change in thinking. If you have hundreds of messages each day landing in the roll up, IMO, you're doing it wrong. The problem with having the roll up collect too many messages is that it becomes easy to put off, or deal with another day, which causes the backlog to grow ever larger, and in this case, out of sight definitely means out of mind. To get the best benefit, I'd suggest a daily read and a weekly manage, where you can decide which items should be unsubscribed, which should remain in the roll up, and which should just go straight to your inbox.
In any event, I know that E-mail can suck the joy out of a person, and frankly, that's just no way to live. If you find yourself buried in E-mail, check out the Uncharted Waters article, give Unroll.me a try, or better yet, sound off below with what you use to manage the beast that is out of control email. As I said in the original Uncharted Waters post, I am genuinely interested in ways to tame this monster, so let me know what you do.
Thursday, March 05, 2015 23:36 PM
This is a bit of a rant, and I apologize to people who are going to read this and wonder what I am getting on about. Since I try to tie everything to software testing at some point and in some way, hopefully, this will be worth your time.
I have a drug/convenience store near my place of work. I go there semi-regularly to pick up things that I need or just plain want. I'm cyclical when it comes to certain things, but one of my regular purchases is chewing gum. It helps me blunt hunger, and it helps me get into flow when I write, code or test. I also tend to pick up little things here and there because the store is less than 100 steps from my desk. As is often the case, I get certain deals. I also get asked to take surveys on their site. I do these from time to time because, hey, why not? Maybe my suggestions will help them.
Today, as I was walking out the door, I was given my receipt and the cashier posted the following on it.
Really, I get why they do this. If they can't score a five, then it doesn't matter. You weren't happy, end of story. Still, I can't help but look at this as a form of emotional blackmail. It screams "we need to have you give us a five for everything, so please score us a five!" Hey, if I can do so, I will, but what if the experience was just shy of a five? What if I was in a hurry, and there were just a few too many people in the store? The experience was a true four, but hey, it was still pretty great. Now that experience is going to count for nothing. Does this encourage me to give more fives? No. What it does is tell me "I no longer have any interest in giving feedback", because unless it is something that says "Yay, we're great!" then it's considered worthless. It's a way to collect kudos, and it discounts all other experiences.
As a software tester, I have often faced this attitude. We tend to be taught that bugs are absolute. If something isn't working right, then you need to file a bug. My question always comes down to "what does 'not working right' actually mean?" There are situations where the way a program behaves is not "perfect", but it's plenty "good enough" for what I need to do. Does it delight me in every way possible? No. Would a little tweak here and there be nice? Sure. By the logic above, either everything has to be 100% flawless (good luck with that), or the experience is fundamentally broken and a bug that needs to be addressed. The problem arises when we realize that "anything less than a five is a bug" means the vast majority of interactions with systems are bugs... does that make sense? Even if, at some ridiculously fundamental level, that is true, it means that the number of "bugs" in the system is so overwhelming that they will never get fixed. Additionally, if anything less than a five counts as zero, what faith do I have that areas I actually consider to be a one or a two, or even a zero, will actually be considered or addressed? The long term tester and cynic in me knows the answer; they won't be looked at.
To stores out there looking for honest feedback, begging for fives isn't going to get it. You will either get people who will post fives because they want to be nice, or they will avoid the survey entirely. Something tells me this is not the outcome you are after, if quality of experience is really what you want. Again, the cynic in me thinks this is just a way to put numbers to how many people feel you are awesome, and give little to no attention to the other 80% of responses. I hope I'm wrong.
ETA: Michael Bolton points out below that I made a faulty assumption with my closing remark. It was meant as a quip, and not to be taken literally, but he's absolutely right. I anchored on the five, and it made me mentally consider an even distribution of the other four numbers. There absolutely is nothing that says that is the case, it's an assumption I jumped to specifically to make a point. Thanks for the comment, Michael :).
Thursday, March 05, 2015 19:45 PM
With "Ruby Wizardry", Eric takes his coding skills and puts them into the sphere of helping kids get excited about coding, and in particular, excited about coding in Ruby. Of all the languages my daughter and I are covering, Ruby is the one I’ve had the most familiarity with, as well as the most extended interaction, so I was excited to get into this book and see how it would measure up, both as a primer for kids and for adults. So how does Ruby Wizardry do on that front?
As you might guess from the title and the cover, this book is aimed squarely at kids, and it covers the topics by telling a story of two children in a magical land dealing with a King and a Court that is not entirely what it seems, and next to nothing seems to work. To remedy this, the children of the story have to learn some magic to help make the kingdom work properly. What magic, you ask? Why, the Ruby kind, of course :)!
Chapter 1 basically sets the stage and reminds us why we bought this book in the first place. It introduces us to the language by showing us how to get it ("all adults on deck!”), how to install it, make sure it’s running and write our first "Hello World” program. It also introduces us to irb, our command interpreter, and sets the stage for further creative bouts of genius.
Chapter 2 looks at "The King's Strings" and handles, well, strings. Strings have methods, such as length and reverse, that we call with dot notation ("string".length or 'string'.reverse). Variables are also covered, and we see that the syntax that works on string literals works on assigned variables as well.
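For instance (my own snippet, not the book's):

```ruby
# Dot-notation string methods, on a literal and on a variable.
puts "string".length     # => 6
puts "string".reverse    # => "gnirts"

greeting = "Hello, Ruby!"
puts greeting.length     # => 12
puts greeting.reverse    # => "!ybuR ,olleH"
```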
Chapter 4 introduces us to a monorail, a train that travels in an endless loop. Fittingly, the chapter is about looping constructs and how we can build loops (using while, as long as something is true; until, up to the point something becomes false; and for, to iterate over things like arrays) to run important processes and repeat them as many times as we want or need. It also warns us not to write our code in such a way as to create an infinite loop, one that just keeps running, like our monorail if it never stops.
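The three constructs look like this (my own example, not the book's):

```ruby
laps = 0
while laps < 3        # keeps going as long as the condition is true
  laps += 1
end

countdown = 3
until countdown == 0  # keeps going until the condition becomes true
  countdown -= 1
end

stations = []
for stop in ["north", "east", "south"]  # iterate over an array
  stations << stop
end

puts laps       # => 3
puts countdown  # => 0
```

Change `while laps < 3` to `while laps >= 0` and you have the monorail that never stops.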
Chapter 5 introduces us to the hash, a collection of key/value pairs, as well as the ability to add and remove items in arrays with shift, unshift, push and pop. Additionally, we learn how to play with ranges and other methods that give us insights into how elements in both arrays and hashes can be accessed and displayed.
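A quick sketch of those operations (the crew names are borrowed from the book's nautical setting, the rest is my own):

```ruby
# A hash maps keys to values; arrays grow and shrink at both ends.
crew = { captain: "Ruben", first_mate: "Scarlet" }
crew[:navigator] = "Marco"         # add a key/value pair

line = ["first"]
line.push("last")                  # append to the end
line.unshift("zeroth")             # prepend to the front
last  = line.pop                   # remove from the end   => "last"
first = line.shift                 # remove from the front => "zeroth"

range = (1..5).to_a                # a range expanded to [1, 2, 3, 4, 5]
```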
Chapter 6 talks about symbols (which is just another word for a name) and the notation needed to access them. Generally, the content of something is a string; the name of something is a symbol. Methods let us change values, convert symbols to strings, and otherwise manipulate data and variables.
Chapter 7 goes into how to define and create our own methods, including the use of splat (*) parameters, that allows us to use any number of arguments. We can also define methods that take blocks by using "yield".
Chapter 8 moves us into objects, and how we can create and use them. We also learn about object IDs to tell them apart, as well as classes, which allow us to make objects with similar attributes. We also see how we can have variables with different levels of focus (Local, Global, Instance and Class) and how we can tell each apart (variable, $variable, @variable and @@variable, respectively).
Chapter 9 focuses on inheritance, which is how Ruby classes can share information with each other. Inheritance allows us to create subclasses (also called child classes) that inherit attributes from their parent classes (superclasses).
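Inheritance works much the same way outside Ruby. A minimal sketch in Python, with Animal and Dog as invented stand-ins for the book's own classes:

```python
# The parent (super) class defines shared attributes and behavior...
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return self.name + " makes a sound"

# ...and the child (sub) class inherits them, overriding only what differs.
class Dog(Animal):
    def speak(self):
        return self.name + " says woof"

rex = Dog("Rex")    # Dog inherited __init__ from Animal
```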
Chapter 10 shows us a horse of a different color, or in this case, modules, which are a bit like classes, but they can't be created using the new method. Modules are somewhat like storage containers so that we can organize code if methods, objects or classes won't do what we want to do. By using include or extend, we can add the module to existing instances or classes.
Chapter 11 shows us that sometimes the second time's the charm; more specifically, we start getting the hang of refactoring: using or operators to assign variables, using ternary operators for short actions, using case statements instead of multiple if/elsif/else chains, returning boolean values, and most important, removing duplicate code and keeping methods small and manageable.
Chapter 12 shows us the nitty-gritty of dealing with files. Opening, reading from, writing to, closing and deleting files are all critical if we want to actually save the work we do and the changes we make.
Chapter 13 encourages us to follow the WEBrick Road, or how we can use Ruby to read and write data from the Internet to files or back to Internet servers using the open-uri gem, which is a set of files that we can use to make writing programs easier (and there are lots of Ruby gems out there :) ).
Chapter 14 gives some hints as to where you, dear reader, might want to go next. This section includes a list of books, online tutorials, and podcasts, including one of my favorites, the Ruby Rogues Podcast. It also includes interactive resources such as Codecademy, Code School, and Ruby Koans, and a quick list of additional topics.
The book ends with two appendices that talk about how to install Ruby on a Mac or a Linux workstation and some of the issues you might encounter in the process.
For a "kids book", there's a lot of meat in here, and to get through all of the examples, you'll need to take some time and see how everything fits together. The story is engaging and provides the context needed for the examples to make sense, and each section provides a review so that you can really see if you get the ideas. While aimed at kids, adults would do well to follow along, too. Who knows, some wizarding kids might teach you a thing or three about Ruby, and frankly, that's not a bad deal at all!
Thursday, February 26, 2015 21:30 PM
Today's entry is inspired by a post that appeared on the QATestLab blog called "Why Are Software Testers Disliked By Others?" I posted an answer to the original blog, but that answer started me thinking... why does this perception exist? Why do people feel this way about software testing or software testers?
This long-standing perception has bred some defensiveness among software testers, to the point where many testers feel that they are the "guardians of quality". We are the ones who have to find these horrible problems, and if we don't, it's our heads! It sounds melodramatic, but yes, I've lived this. I've been the person called out on the carpet for missing something fundamental and obvious... except for the fact that it wasn't fundamental or obvious two days before, and interestingly, no one else thought to look for it, either.
We can be forgiven, perhaps, for bringing on ourselves what I like to call an "Eeyore Complex". For those not familiar, Eeyore is a character created by A.A. Milne, and figures in the many stories of "Winnie the Pooh". Eeyore is the perpetual black raincloud, the one who finds the bad in everything. Morose, depressing, and in many ways, cute and amusing from a distance. We love Eeyore because we all have a friend that reminds us of him.
The problem is when we find ourselves actually being Eeyore, and for many years, software testing deliberately put itself in that role. We are the eternal pessimist. The product is broken, and we have to find out why. Please note, I am not actually disagreeing with this; the software is broken. All software is, at a fundamental level. It actually is our job to find out where, and advocate that it be fixed. However, I have to say that this is where the similarities must end. Finding issues and reporting/advocating for them is not in itself a negative behavior, but it will be seen as such if we are the ones who present it that way.
If people disagree with our findings, that is up to them. We do what we can to make our case and convince them of the validity of our concerns, but ultimately, the decision to move forward or not is not ours; it belongs to those with the authority to make it. In my work life, discovering this, and actually living by it, has made a world of difference in how I am perceived by my engineering teams and in my interactions with them. It ceases to be an emotional tug of war. It's not a matter of being liked or disliked. Instead, it is a matter of providing information that is either actionable or not actionable. How it is used is ultimately not important, so long as I do my best to make sure it is there and surfaced.
At the end of the day, software testers have work to do that is important, and ultimately needs to be done. We provide visibility into risks and issues. We find ways in which workflows do not work as intended. We notice points that could be painful to our users. None of this is about us as people, or our interactions with others as people. It's about looking at a situation with clear understanding and knowing the objectives we need to meet, and determining if we can actually meet those objectives effectively. Eeyore doesn't know how to do that. Spock does. If you are struggling with the idea that you may not be appreciated, understood, or liked by your colleagues, my recommendation is "less Eeyore, more Spock".
Friday, February 20, 2015 18:35 PM
I've had a back and forth involvement with standing desks over the years. I've made cheap options, and had expensive options purchased for me. I've made a standing desk out of a treadmill, but it has been best for passive actions. It's great for reading or watching videos, not so great for actually typing, plus it was limiting as to what I could put on it (no multi monitor setup). Additionally, I've not wanted to make a system that would be too permanent, since I like the option of being flexible and moving things around.
I'm back to a standing desk solution for both home and work because I'm once again dealing with recurring back pain. No question, the best motivation for getting back to standing while working is back pain. It's also a nice way to kick start one's focus and to get involved in burning a few more calories each year. I'll give a shout to Ben Greenfield, otherwise known as "Get Fit Guy" at quickanddirtytips.com for a piece of information that made me smile a bit. He recommends a "sit only to eat" philosophy. In other words, with the exception of having to drive or fly, my goal should be to sit only when I eat. All other times, I should aim to stand, kneel, lie down or do anything but sit slumped in a chair. The piece of data that intrigued me was that, by doing this, I'd be able to burn 100 additional calories each day. In a week's time, that's the equivalent of running a 10K.
For those who have access to an IKEA, or can order online, at this moment, the Ikea LACK square side table, in black or white, can be purchased for $7.99 each. I chose to do exactly that, and thus, for less than $20, including tax, I have set up this very functional standing desk at work:
What I find most beneficial about a standing desk, outside of the relief from back pain, is the fact that it is incredibly focusing. When I sit down, it's easy to get into passive moments and lose track of time reading stuff or just passively looking at things. When I stand, there is no such thing as "passive time". It's very focusing, and it really helps me to get into the zone and flow of what I need to do. For those looking to do something similar, seriously, this is a great and very inexpensive way to set up a standing desk.
Monday, February 16, 2015 21:39 PM
Something's in the air, I think. Maybe it's a new year of possibilities, maybe it's the nature of technical advancement and growth that happens each year, but there seems to be a groundswell in opportunities for programmers, testers and DIYers to learn new stuff all the time, often very close to free, and in many cases, just plain outright for free.
Packt Publishing recently did this with their 24 gifts for Christmas, in which they offered ebooks for free each day for a limited time, and they are back at it again with their "FREE LEARNING - HELP YOURSELF!" initiative, and again, available for a limited time.
Thursday, February 12, 2015 05:55 AM
With the NoStarch Press and Humble Bundle "Humble Brainiac Book Bundle" still underway, I felt it only fitting to keep the trend going and review books that are geared towards kids and those who want a quick introduction to the worlds of programming, web design and engineering. My daughter and I are exploring a lot of these titles at the moment, and one that caught my eye was "Build Your Own Website", primarily because it promised to be "A Comic Guide to HTML, CSS and WordPress", and that indeed it is.
I should probably mention that by "a comic guide", it does not mean in the funny sense (though some of it is), but in the illustrated, graphic novel sense. For those familiar with NoStarch Press and their "The Manga Guide to..." series of books, Nate Cooper's writing and Kim Gee's artwork fits very well in that space. What's more, "Build Your Own Website" follows the same template that "The Manga Guide to..." books do, in that each section starts with an illustrated graphic novel treatment of topics, and then follows on with a more in depth prose treatment of each area.
So what's in store for the reader who wants to start on a mission to make their own site from scratch?
Chapter 1 starts with our protagonist Kim looking forward to her first web design class, and shows that inspired and excited first-timer's desire to get in and do something. It's followed by an explanation of the tools that need to be downloaded to do the work necessary to complete the examples in the book. All of the exercises and examples can be done for free: all you need is a web browser or two, a text editor, an FTP client (the book recommends FileZilla; it's the one I use as well), and a free WordPress account from http://www.wordpress.com.
Chapter 2 talks about The Trouble with HTML, and how Kim and her dog Tofu meet up with the Web Guru, who introduces them to the basics of HTML, paths and naming conventions, loading pictures and following links, the hierarchy of files and directories that make up a basic web site, and a dragon called "404". The second section goes into details about all of these including explaining about document structure, HEAD and BODY tags, the items that go in each, embedding images, and a basic breakdown of the most common HTML tags.
Chapter 3 shows us how Kim Makes Things Look Great with CSS. Well, Glinda, the Good Witch of CSS helps Kim do that (graphic novel for kids, gang. Work with me, here ;) ). Glinda shows Kim the basics of CSS, including classes and IDs, inline styles, and external stylesheets that can be referenced along with inline styles, effectively creating a "cascade of styles" (CSS == "Cascading Style Sheets"). The chapter also discusses using divs for creating separate sections and blocks that CSS can be applied to, and ends with commonly used CSS properties.
Chapter 4 is where Kim Arrives in WordPress City, and where the examples focus on, of course, WordPress as a composition platform. Kim learns what WordPress is (a Content Management System, or CMS) and the conventions of creating both blogs and websites. Kim is introduced to the Dashboard, creating posts, using the Visual editor, structuring her site, using Categories and Tags, using the Media Library to store media items, and the overall Theme to be used for the site. Each of these is covered in greater detail with examples in the second, prose part.
Chapter 6 brings us to The Big Launch, where Kim and Tofu navigate the realities of hosting the site and how to set up hosting so that they can display their finished site to the world. There are lots of options, and most cost some money, but not very much (plans ranging from $5 to $10 a month are readily available). Registering a domain is covered, and many hosts have an option to install WordPress and use it there.
"Build Your Own Website" starts with some basic HTML and CSS, and then spends the bulk of the second half of the book introducing you, the user, to WordPress. For those looking to see the nuts and bolts of making a web site from scratch, including making the navigation elements, more involved interactions, and other esoteric features of web sites outside of the CMS system that WordPress provides, you will be disappointed. Having said that, if the goal is to get a site up and running and using a well designed and quick to use interface, WordPress is a pretty good system, with lots of flexibility and ways to make the basic formatting of a site nearly automatic. Younger web development acolytes can get in and feel what it's like to design and manage a site. To that end, "Build Your Own Website" does a very good job, and does it in an entertaining and engaging manner.
Monday, February 09, 2015 21:24 PM
First off, I have to say "thank you" to my friend Jokin Aspiazu for inspiring this post today. He wrote a piece about writing a blog from your own experiences and how he found inspiration from a number of people, including me.
The statement that prompted today's blog post was the following. In it, Jokin is explaining different styles and approaches to writing a blog post, and ways in which those posts are done:
"The Michael Larsen way. He wakes up early, so he has time to write before his daily routines start. The thing is that if you write late at night, when the day is gone, you are going to be tired in body and mind. And your writing won’t be as fluid as you would like, and the next day, when you read your text, you will hear “I’m so tired…” in the background."
Having said that, there are a few other things that I do, and I find them helpful as well.
The Voice Recorder on my Phone is a Really Good Friend
Oh, Voice Recorder, how kind you are to me. You make it possible for me to capture any insane and rambling thought I might have, and not lose it to the ether. I should also add I am grateful for the headphone and mic combinations now available with SmartPhones, because they make me look like less of a lunatic while I am walking down the street.
Some great spots where I can let loose with my thoughts are on my daily commute legs. I park my car about a half mile from the train station I use because I'm cheap and don't want to pay for a parking permit, but also because it gives me a bit of a walk each day. That walk is often a golden time to think out loud (something I do regularly), and having the voice recorder lets me capture all of that. Also, it gives people walking past me the impression I'm just talking to someone on the phone, so I'm not looked at as though I'm crazy ;).
Often, the ramblings don't amount to much, and those I tend to delete at the end of each week, but every so often, I'll find something that sticks, or that gives me a reason to say "OK, I want to explore that some more". Often that will result in something I want to write about.
Every Book Has a Story to Tell
Generally speaking, I love to read. Though my book buying habits have changed with Kindle apps and e-books, I still love having a lot of choices to read from. My SmartPhone has become my secondary reading device, but my first is my laptop computer. I often find myself reading books and grabbing passages or parts that I find interesting, and I pull them over into a Notes app that I use. Sometimes they sit there for months, but every once in a while, something will happen, or I'll see something that jumps out at me and I'll say "hey, that fits into a problem I'm dealing with".
Very often, those discoveries are not limited to technical books or books about testing. I'm a fan of history, and I love reading about interesting things that have happened in the past, both distant and more recent. I often borrow the Dan Carlin quote of "history has all but ruined fiction for me", and that shows up in the things that I read. It's rare that you will find me reading fiction, though I do from time to time. Most of the time, it's non fiction of a historical, technological, or business perspective. Those lessons often make their way into my posts.
"That Reminds me of a Testing Story..."
I owe this phrase to the Cartoon Tester, aka Andy Glover. He used it as a comical way of showing the different type of testers out there, but it illustrated something I frequently try to do. Even in the mundane aspects of my life, I find things that both inform my testing, and also inform my view of the world as I see it, which in turn informs my testing. Something as simple as a way to mow the lawn, or deal with pests in the yard, or trying to manage the delicate balance of life in my fish tanks, or the daily dilemmas my kids face, or the often interesting and noteworthy events that happen in my role as a Scout Leader, all of these shape what often becomes an analogy to software testing. They may be helpful to others, they may not, but overall, I remind myself that they are helpful to me. Whether they be specific to ideas, events, or interactions with individuals, each of them informs what I do, and gives me ideas of ways that I can do it better... maybe :).
Again, I want to thank Jokin for helping me consider this, and give me a chance to respond a little more in depth. While everything Jokin says is accurate, I hope this gives you a little more of a glimpse into how I think and work, and how I like to gather material that ultimately makes its way into my blog. If these ideas help you to write better, more frequently, or at least think a little differently about how you write, awesome. If not, again, it felt good to give some additional thoughts as to what writing "the Michael Larsen way" actually means, at least to me.
Friday, February 06, 2015 22:12 PM
NoStarch Press has a number of books that are ideally suited for younger programmers, and to that end my daughter and I decided to look at the books together. We will be doing three "for Kids" book reviews during February, provided we can get through all of them. All journeys have to start somewhere, though, and this journey will start with Python for Kids. I should also mention that, for a limited time, you can get Python for Kids as part of the NoStarch and Humble Bundle "Humble Brainiac Book Bundle" offer, which comes with a lot of additional books.
Python for Kids is written in a way that lets the user, whether they are kids or adults, readily get into the material and see how the examples work and try them out for themselves.
Chapter 1 starts out with installing and using Python 3, so right away Amber and I had to adjust to the new syntax (Codecademy’s Python modules are still based on Python 2). Be aware that there are some subtle differences with Python 3, one of the main ones being the parentheses required when using the print(“text goes here”) function (yeah, like that ;) ).
Chapter 2 introduces the user to calculating values with simple equations via operators, using parentheses to ensure the desired order of operations, and creating and using variables to store values for use in those calculations.
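A quick sketch of what that looks like in practice (the price and quantity example is mine, not the book's):

```python
# Multiplication binds tighter than addition...
plain = 2 + 3 * 4        # evaluates as 2 + (3 * 4)

# ...but parentheses force the order you actually want.
grouped = (2 + 3) * 4

# Variables store values for reuse in later calculations.
price = 5
quantity = 3
total = price * quantity
```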
Chapter 3 introduces strings, lists, tuples, and maps, each of which allows the user to handle data in a variety of ways. Each of the approaches has rules for how items can be moved and how they are represented, ranging from simple strings, which are just quoted text, up to maps, which have keys and values that can be called up as needed.
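A small illustration of the four structures (all of the sample data here is invented):

```python
# A string: quoted text.
greeting = "hello there"

# A list: an ordered collection you can change.
fruits = ["apple", "banana"]
fruits.append("cherry")

# A tuple: ordered, but fixed once created.
point = (3, 4)

# A map (a dict, in Python terms): values looked up by key.
ages = {"amber": 10, "tofu": 3}
```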
Chapter 4 introduces the user to drawing, and in this case, that involves using a module called turtle (which in turn uses a toolkit called tkinter). Using this tool, we can actually draw with the turtle's movements: turning left and right, moving forward and backward, and starting and stopping the drawing with the pen up and down commands.
Chapter 5 focuses on asking questions and responding based on the response the system gets. For experienced programmers, this is about if statements and their related keywords (elif, else), how to combine conditions (using and/or), and how to convert values (using int, str, float and None).
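Putting those pieces together, a tiny sketch (the age-checking rules are my own invention, just to show the branching):

```python
def describe_age(age_text):
    # Convert the typed answer (a string) into a number first.
    age = int(age_text)
    # if / elif / else choose one branch; and / or combine conditions.
    if age < 0 or age > 120:
        return "that can't be right"
    elif age >= 13 and age < 20:
        return "teenager"
    else:
        return "not a teenager"
```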
Chapter 6 talks about “Going Loopy” which, as you might guess, deals with handling loop conditions and repetitive tasks. for and while loops are covered, and break conditions are shown to, well, break out of loops.
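A minimal sketch of the two loop styles, plus break (the examples are mine, not the book's):

```python
# for: repeat over a known sequence of values.
squares = []
for n in range(1, 5):
    squares.append(n * n)

# while with break: keep looping until a condition inside says stop.
guesses = 0
while True:
    guesses += 1
    if guesses == 3:
        break            # jump out of the loop immediately
```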
Chapter 7 talks about ways to recycle and reuse code by defining and using functions and importing modules. Variable scope is also covered, so we can see where values are assigned, used, and passed back and forth.
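For instance (add_tax is an invented function, not one from the book):

```python
import math  # a module: import it to reuse code someone else wrote

def add_tax(price, rate=0.1):
    tax = price * rate   # 'tax' is local: it only exists inside the function
    return price + tax

total = add_tax(100)     # uses the default rate
root = math.sqrt(16)     # calling a function from an imported module
```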
Chapter 8 focuses on the "things" in Python, otherwise known as classes and objects. This chapter covers the details of classes and how they can be used to create instances, how objects are created, how functions are called to work with objects, and how object variables save values in those objects.
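A bare-bones illustration (Robot is my example class, not the book's):

```python
class Robot:
    """A class: a blueprint for making similar objects."""

    def __init__(self, name):
        # Object variables: each instance keeps its own copies.
        self.name = name
        self.steps = 0

    def walk(self, n):
        # Methods are functions that work with the object's own data.
        self.steps += n

bob = Robot("Bob")       # create an instance of the class
bob.walk(3)
```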
Chapter 9 covers the various built-in functions available in Python, those we can use directly in Python programs (functions like abs, bool, print, float, int, len, etc.), as well as how to open and manipulate files.
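A few of those built-ins, plus a round trip through a file (written to a temporary path so nothing real gets touched; the note text is invented):

```python
import os
import tempfile

# A few built-ins in action:
size = abs(-7)            # absolute value
truth = bool("")          # an empty string converts to False
width = len("hello")      # number of characters

# Opening, writing to, and reading back a file:
path = os.path.join(tempfile.mkdtemp(), "note.txt")
with open(path, "w") as f:
    f.write("remember the milk")
with open(path) as f:
    contents = f.read()
```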
Chapter 10 covers a variety of helpful modules, such as copy, keyword, random, shuffle, and the sys module that allows the user to control the Python shell itself, reading input with stdin, writing output with stdout, telling time with the time module, and using pickle to save user information in files (yes, there’s a module called pickle ;) ).
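A quick tour of several of those modules (the data is made up, and the shuffle is seeded so the run is repeatable):

```python
import copy
import pickle
import random
import time

# copy.deepcopy makes a fully independent duplicate.
original = {"scores": [1, 2, 3]}
duplicate = copy.deepcopy(original)
duplicate["scores"].append(4)      # the original is untouched

# random.shuffle rearranges a list in place.
random.seed(0)                     # seeded so the result is repeatable
deck = list(range(5))
random.shuffle(deck)

# pickle saves a Python object as bytes and loads it back later.
saved = pickle.dumps({"player": "amber", "level": 3})
restored = pickle.loads(saved)

# time.time() reports the current time in seconds since the epoch.
now = time.time()
```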
Chapter 11 revisits turtle, and how we can use it to draw a variety of items, including basic geometric shapes, changing the pen color, filling shapes, and reusing steps in functions to combine drawing shapes and using multiple colors in one function call.
Chapter 12 covers using tkinter to make "better graphics", and my oh my does this take me back. I used Tcl/Tk back in the 90s for a number of things, and the syntax to use tkinter is very familiar. Simple shapes, event bindings so objects can move when keys are pressed, and using identifying numbers to move shapes and change colors are all covered here.
Part II gives the user a chance to practice the programming chops they developed in Part I by making a ball-bouncing game called, appropriately enough, Bounce!
Chapter 13 starts out using the tkinter module, with which we can create a class for a ball and move it around the screen. Coordinates can be set so that the ball bounces off the top and sides of the screen, and a randomized starting direction gives it dynamic movement.
Chapter 14 continues by creating a class for the paddle and using coordinates to see if the ball has hit it. Event bindings tie the left and right arrow keys to the paddle's movement, and if the player misses and the ball hits the bottom of the screen, the game ends.
Part III continues the ideas so far shown with “Mr. Stick Man Races for the Exit”.
Chapter 15 shows how to make graphical images of, you guessed it, a stick man (using GIMP to make the actual images) and making the backgrounds of the images transparent so they won’t cover up other things.
Chapter 16 covers creating a Game class and adding a background image. By using functions called within_x and within_y, we can determine if items have collided with other items. We also discover sprites, and use a sub-class, PlatformSprite, to draw the platforms on the screen.
Chapter 17 walks through the details for taking the pictures we created for Mr. Stick Man back in Chapter 15 and setting up options for the figure to move, turn and jump.
Chapter 18 finishes off the Stick Man game, and shows that putting together a working game is not trivial, but it's not a huge endeavor either. We focus on using the images we made for Mr. Stick Man so he appears to be running, use collision detection to tell if he hits the sides of the canvas or another object, and have Mr. Stick Man drop if he runs off the edge of a platform. We also make a door that Mr. Stick Man can go through, ending the game.
The book also provides an Afterword that discusses some additional options for basic game development (Alice, Scratch, Unity3D, and PyGame, a dedicated library for making games), along with a discussion of other programming languages and a super quick look at what they do, where to get them, and what it takes to write a "Hello, World!" program in each. The Appendix highlights keywords in Python, and the glossary defines a number of words kids might not hear in their everyday interactions.
Python for Kids clocks in at a little over 300 pages, but it won't feel like it as you work through it. There are numerous small projects and samples to examine, and all of the solutions to the puzzles and challenges are available at http://python-for-kids.com/. While this book is geared towards kids, there's plenty in here to keep adults engaged and focused. Programming books are often plagued by the "implied knowledge" trap, where small projects give way to bigger ones without a clear understanding of how the jump was made. Python for Kids does a very good job of avoiding that problem. With kids as the primary audience, the book takes its time explaining key areas, and yes, for programmers with a little work under their belts, this is probably going to seem remedial, but as an introduction to Python, I think it's first rate.
If you are looking to give your kids a kick start in learning how to program, if Python seems like an interesting language to start with, and if you’d like to have a couple of tangible projects to play with and augment as you see fit, then Python for Kids should be on your short list of programming books… provided having fun while learning is a priority.
Thursday, February 05, 2015 21:59 PM
Today's post was inspired by a phrase I heard on the Back to Work podcast. That phrase is "expectational debt". Put simply, it's the act of "writing checks your person can't cash" (there's a more colorful metaphor that can be and is often used for this, but I think you understand what I mean ;) ).
Expectational Debt is tricky, because it is very easy to get into, and it is almost entirely emotional in nature. Financial debt is when you have to borrow money to purchase something you want or need, and it invariably involves a financial contract, or in the simplest sense a "personal agreement", that the money owed will be paid back. It's very tangible. Technical debt is what happens in software all the time, when we have to take a shortcut or compromise on making something work in the ideal way so that we can get a product out the door, with the idea that we will "fix it later". Again, it's tangible, in the sense that we can see the work we need to do. Expectational debt, on the other hand, is almost entirely emotional. It's associated with a promise, a goal, or a desire to do something. Sometimes that desire is public, sometimes it is private. In all cases, it's a commitment of the mind, and a commitment of time and attention.
I know full well how easy it is to get myself into Expectational Debt, and I can do it surprisingly quickly. People often joke that I have a complete inability to say "no". That's not entirely true, but it's close enough to reality that I don't protest. I enjoy learning new things and trying new experiences, so I am often willing to jump in and say "yeah, that's awesome, I can do that!" With just those words, I am creating an expectational debt, a promise to do something in the future that I fully intend to fulfill, but I have not done the necessary footwork or put the time in to fully understand what I am taking on. Human beings, in general, do this all the time. We also frequently underestimate or overestimate how much these expectations matter to other people. Something we've agreed to do could be of great importance to others, or of minor importance. It's also possible that we ourselves are the only ones who consider that expectation to be valuable. Regardless of the "weight" of the expectation, they all take up space in our head, and every one of them puts a drag on our effectiveness.
Before I offer my solution, I need to say that this is very much something I'm currently struggling with. These suggestions are what I am doing now to rein in my expectational debt. It's entirely possible these approaches will be abandoned by yours truly in the future, or determined not to work. As of now, they're helping, so I'm going to share them.
Identify your "Commitments"
Get a stack of note cards, use an electronic note file, or grab a page-a-day calendar, whatever method you prefer, and sit down and write out every promise you have made to yourself and to others that you have every intention of fulfilling. Don't just do this for things in your professional life; do this for everything you intend to do (family goals, personal goals, exercise, home repairs, car repairs, social obligations, work goals, personal progress initiatives, literally everything you want to accomplish).
Categorize Your "Commitments"
Once you have done this, put them in a priority order. I like to use the four quadrants approach Stephen Covey suggests in "The 7 Habits of Highly Effective People". Those quadrants are labeled as follows:
I. Urgent and Important
II. Important but Not Urgent
III. Urgent but Not Important
IV. Not Urgent and Not Important
My guess is that, after you have done this, you will have a few items that are in the I category (Urgent and Important), possibly a few in the III category (Urgent but Not Important), and most will fall into Categories II and IV.
Redouble or Abandon Your "Commitments"
These are questions you need to ask yourself for each commitment:
- Why do I think it belongs here?
- Will it be of great benefit to me or others if I accomplish the goal?
- Is it really my responsibility to do this?
- If I were to not do this, what would happen?
This will help you determine very quickly where each of the items fall. Most expectational debt will, again, fall in categories II and IV.
For your sanity, as soon as you identify something that falls into Category IV (Not Urgent and Not Important), tell yourself "I will not be doing this" and make it visible. Yes, make a list of the things you will NOT be doing.
Next is to look at the items that fall into Category III. These are Urgent but Not Important, or perhaps a better way to put this is that they are Urgent to someone else (and likely Important). They may not be important to you, but another person's anxiety about it, and their visible distress, is making it Urgent to you. It's time for a conversation or two. You have to decide now if you are going to commit time and attention to this, and figure out why you should. Is it because you feel obligated? Is it because it will solve a problem for someone else? Is it because you're really the only one who can deal with the situation? All of these need to be spelled out, and in most cases, they should be handled with a mind to train up someone else to do them, so that you can get out of the situation.
The great majority of things you'll want to do, and want to commit to, will fall into Category II. They are Important, but they are not Urgent (if they were Urgent and Important, you'd be doing them... probably RIGHT NOW!!!). Lose weight for the summer. Learn a new programming language. Discover and become proficient with a new tool. Plan a vacation for next year. Read a long-anticipated book. Play a much-anticipated video game. These are all items that will, in some way, give you satisfaction, help you move forward and progress on something, or otherwise benefit you, but they don't need to be done "right now". Your goal here is to start scoping out the time to do each of these, and give each one a quantifiable space in your reality. I believe in scheduling items that I want to make progress on. In some cases, getting a friendly "accountability partner" to check in on me and make sure I'm doing what I need to do is a huge incentive. A common tactic I am using now is to allocate four hours for any "endeavor space", plus another four hours in case I need to "change tracks" and take care of something else. This may seem like overkill (and often, it is), but it's a shorthand I use so I don't over-commit or underestimate how long something will take. Even with this model, I still underestimate a lot of things, but with experience I get a bit better each time.
This of course leaves the last area (Category I), the Urgent and Important. Usually, it's a crisis, and it's where everything else ends up getting bagged for a bit. If you are ever in an automobile accident and injured, guess what: getting treatment and recovering rockets to Category I. In a less dire circumstance, if you are the network operations person for your company and your network goes down, then for the duration of that outage, getting the network working again is Category I.
I hate making promises I can't keep, but the truth is, I do it all the time. We all do, usually in small ways. Unless we are pathological liars, we don't intend to get into this situation, but sometimes, yes, the expectations we place on ourselves, or the promises we make to others, grow out of proportion to what we can actually accomplish. Take the time to jettison those things that you will not do. Clear them from your mind. Make exit plans for the things you'd really rather not do, if there is a way to do so. Commit to scheduling those items that provide the greatest benefit, and if at all possible, do what you can to not get into crisis situations. Trust me, your overall sanity will be greatly enhanced, and what's more, you'll start to develop the discipline to grow an expectation surplus. I'm working towards that goal, in any event ;).
Wednesday, February 11, 2015 20:17 PM
It's with this in mind that I want to alert TESTHEAD readers to a special offer. NoStarch Press and the folks at Humble Bundle have teamed up to offer a jump start for kids who want to learn programming and other technical skills. This is really significant for me right now, as I have a 14-year-old daughter in the early stages of learning how to program.
What could that special offer be? I'm glad you asked ;).
Until February 18, 2015, 2:00 p.m. EST, Humble Bundle will be offering the "Humble Brainiac Book Bundle". This is a variety of e-books geared toward kids, making technical subjects more accessible to them. What makes this bundle interesting is that YOU get to decide how much you want to pay for it. Yes, you read that right: you get to name your price for a selection of books that, purchased separately, would cost more than $250.
What do you get with this deal? There are three levels:
1. You can pay any amount of money (seriously, you decide) and for that payment, you will receive:
- Ruby Wizardry: An Introduction to Programming for Kids
- Lauren Ipsum: A Story About Computer Science and Other Improbable Things
- The Manga Guide to Electricity
- Snip, Burn, Solder, Shred: Seriously Geeky Stuff to Make with Your Kids
- The LEGO Adventure Book, Volume 1: Cars, Castles, Dinosaurs & More!
2. Humble Bundle keeps track of the average payment. For those willing to pay more than the average (which at the time of this writing is $13.08), NoStarch will sweeten the deal and throw in:
- LEGO Space: Building the Future
- The Manga Guide to Physics
- Python for Kids: A Playful Introduction to Programming
- Incredible LEGO Technic: Cars, Trucks, Robots & More!
- Build Your Own Website: A Comic Guide to HTML, CSS, and WordPress
3. Customers who pay $15 or more will receive all of the above, plus:
- Steampunk LEGO
- The LEGO Neighborhood Book: Build Your Own Town!
As if that weren't enough, all Humble Bundle promotions let purchasers choose how much of their money goes to the publisher (NoStarch), Humble Bundle, and charity. The two charities supported by this bundle are the Electronic Frontier Foundation (EFF) and the Freedom of the Press Foundation.
NoStarch has been great to me for many years. I really appreciate their willingness to provide me books to review. To be transparent, I have many of the books in the list already (that NoStarch gave me for free to write reviews), but there are several in this bundle that I don't have, and I have bought in for the Full Monty :).
This Humble Brainiac Book Bundle ends February 18, 2015 at 2:00 p.m. EST, so if you want to take advantage of it, best get a move on!
UPDATE: As of February 11, 2015, The Humble Brainiac Book Bundle just got a little sweeter. Three new books just got added to the "pay more than the average" bundle. Those new books are:
- The Manga Guide to Calculus
- Beautiful LEGO
- The LEGO Build it Book, Volume 1: Amazing Vehicles
If you have already purchased your Humble Brainiac Book Bundle, go to the confirmation email and click on the download link... the new books have been added and you can download them from there. Yes, they've been added to orders already purchased. Nice job, Humble Bundle and NoStarch... well done :)!!!
Tuesday, February 03, 2015 20:42 PM
Imagine my smile when I read "The Incredibly Obvious Secret: Finding (and Fixing) Product Bugs Through a Close Relationship Between Test and Customer Support", written by Jay Kremer and posted to the Zoosk Engineering Blog. How, you might ask, would I think to be looking at an online dating web site's engineering blog? For that, I have to thank the great group of software testers who attend the Bay Area Software Testers Meetup group. One of them is the Test Manager at Zoosk, and he told me about this article and suggested I have a look. OK, yeah, that test manager is Jay ;).
In any event, I felt a lot of kinship with this post, and I wanted to have a chance to reply to it and say that, yes, there are organizations that do have this kind of relationship in addition to Zoosk. I work for one of them.
At Socialtext, if a customer reports an issue, the first line of contact is a real human being who walks through the issue with them. The support engineer can often see the potential value of the bug in question (it's real, it's reproducible, it's potentially impactful, and it has a high probability of affecting a number of people). At this point, that support person can contact us, or we can go over to them and say "hey, anything interesting coming in?", and they will tell us what they are working on.
Socialtext has the approach that bugs get on the Kanban board as soon as we can identify and reproduce the issue (again, the easier it is to quantify the problem, the easier it is to fix, generally speaking). It is not at all uncommon to have a bug be reported one day, scheduled in the Kanban and picked up the second day, a fix committed and tested, and then merged to our staging environment in short order thereafter. We are a company that runs most of our business processes on our product. This makes us, essentially, the primary alpha and beta testers, which in turn allows us to address bugs quickly, at least most of the time. It seems that Zoosk has a similar philosophy.
I have often said that customer support engineers make for awesome testers, and software testers can often make amazing support engineers. Like Jay, I believe this is a symbiotic relationship that needs to be enhanced and encouraged, and more testing organizations should make that connection and work closely with their support teams, not just for when they report issues. Software testers can learn a lot about the true state of their product and the workflows that really matter by spending some quality time with the support team. I think you will be pleasantly surprised with what you learn.
Friday, January 23, 2015 16:12 PM
There are two parts to each of my "Uncharted Waters" blog posts. The first is the thought and consideration that goes into the lead-up to, the writing of, and the making public of the post. The second is what usually happens within the first twenty-four to forty-eight hours after the post goes live, because I will get a comment or a thought from someone that will make me slap my head and say "Oh! Yes! That! That would have made sense to say!"
In my most recent Uncharted Waters entry, "Heroes, Hubris and Nemesis", I decided to take on the "hero culture" that tends to exist in various companies, and the fact that, rather than being shining examples of excellence, those so-called heroes are very likely the bottleneck for their team. I then spent the rest of the article giving advice about how to deal with all of that.
"Oh! Yes! That! That would have made sense to say!"
Here's where I think we really could make a difference. What if, instead of calling the person who does these things a hero, we used different terminology? What if, for the people who hoard expertise, we called them out for doing exactly that? What if we made the perpetually online and self-flagellating martyr of the team out to be the problem rather than the savior of the organization? Would behavior change? I believe the answer is yes.
Alan Page and Brent Jenson talked about this in Episode 14 of "AB Testing" back in December. They likewise described a "hero" on their team and the way that person did things. Brent described the challenge he had with a person who had no desire to teach others, because if others were taught, they wouldn't be special any longer. What was really happening was that, due to the existence of this particular hero on the team, management didn't have to invest time or emotional energy in the rest of the team, because their hero was there to save the day.
What if, instead of extolling the hero, this team had called out that behavior as anathema to their success? What if management had insisted that advancement be based on how well and how frequently this person cross-trained the team, and made it clear that any opportunity for advancement would hinge on their ability to teach others and bring the whole team up to their level? What if, instead of praising the "hero" instincts, management had called out their opaque approach and unwillingness to share as fatal detriments? Do you think the entire culture of that team would have changed? I certainly do. One of two things would have happened: either that person would have worked hard to train up the team, or that person would have left of their own accord, realizing that their "heroics" were not going to be their differentiation. Either way, the team would have been better off, because the team as a whole would have had to confront the imbalance of skills. By relying on the hero, management was taking the lazy way out. They didn't have to invest time and energy in the rest of their team, the hero got to play the public martyr, and everyone was happy... at least until a crisis came where the hero couldn't step up.
My solution was to make a team of heroes, but perhaps the better, more appropriate way is to remove the hero moniker from those who are not working to help the entire organization succeed. It would take a lot of guts, but I think the teams willing to do the latter will, long term, be better off than those who don't.
Monday, January 26, 2015 22:11 PM
The Association for Software Testing (AST) is holding its tenth annual conference this year. The Conference for the Association for Software Testing (CAST) will be held in downtown Grand Rapids, Michigan on August 3-5, 2015. Ten years is a cause for celebration in my book, and we are throwing a party. I say "we" because I am part of the Board of Directors and the President of AST, and therefore this conference is a big part of what 2015 will be shaping up to be for me.
The theme of the conference this year is "Moving Testing Forward", and to that effect, I want to reach out and encourage my fellow software testers, programmers, quality advocates, or whatever area you see yourself in, to put in a proposal for this year's conference.
My belief is that the best way to move testing forward is to do so with more voices, and CAST is well known for being the conference where new and unique voices are heard. Several great speakers have developed over the years, and many of them have said CAST was where they first had the opportunity to present. We want to encourage that approach further, so I am sending out a personal call to those software testers who might still be on the fence.
- Maybe you are a relative newcomer.
- Maybe you have not spoken in front of a large group before.
- Maybe you work in an industry and an environment where you think “oh, nobody would be interested in hearing what I have to say”.
Let me assure you, that last one is categorically false. In my opinion, the best talks and presentations are not built around theories or tools. They are built around real world experiences, the good, the bad, the occasionally ugly, and the often unintentionally hilarious.
I had the pleasure of having dinner last night with a couple of friends, both involved in various levels of software delivery, including software testing, and the stories we were sharing, and frequently laughing about, came from our core experiences, what we have witnessed, and what we have learned from those experiences. As I listened and participated with my friends, I mentally ticked off six or seven talk ideas and said “wow, these stories, and the lessons learned from them, would be so great if we could get them out to more people”. I have a pretty good feeling that several of these conversations will become talks at CAST, but the main point I want to encourage is that “your real world experiences make for great talks!”
The Call for Participation for CAST 2015 ends on January 31, 2015. That's a little under ten days from now. I'd like to encourage all of you, if you are able, to propose a talk for CAST. How are you moving testing forward? There's a very good bet that what you are doing (or perhaps wish you were doing, or perhaps not doing) will be of great value to your fellow software testers. We encourage people from all arenas, all experiences, and all backgrounds to come share in each other's ideas and approaches. There will be a lot of fun things happening at CAST 2015. We want you to be part of it :).
Wednesday, January 21, 2015 21:55 PM
My thanks to Ryan Arsenault and the folks over at uTest for showcasing me in their first "Ask the Expert" blog entry. The questions I was asked centered around career choices for testers and ways that we can succeed, or at least do better than we are now.
I cannot help it; part of me feels very strange using the word "expert" to describe myself at anything. I'm happy to use words like experienced, educated, practiced, or even proficient, but "expert" carries a strange weight to it. It's so subjective, and it feels like, once you've been branded one, there's only one way to go from there, and that's down. Moreover, I don't really believe there is such a thing as an "expert", because that implies the person has learned all there is to learn and mastered all there is to master... and that's just fundamentally wrong on so many levels.
I've come to realize that we all own our experiences, and that we all have opportunities to learn from our successes and our mistakes (oh, how much I have learned from my mistakes). This is why I have no problem talking about my experiences or my observations. They are mine, and as such, are certainly open to interpretation, or debate, or scrutiny, or even outright ridicule at times, but they are wholly mine. Expertise, however, is a judgment call. I personally have very little trust in people who proclaim themselves to be "experts" at anything. However, I place a lot of credence in other people telling me that someone is an expert. Why? Because they are witnesses to the skill, acumen, and judgment being displayed, and they can then decide if the term "expert" makes sense.
It also often comes down to "expert compared to whom?" I have many interests, and things that I spend a lot of time getting into. When I tell people I was a competitive snowboarder for several years, it conjures up an image in their minds; I must be an expert snowboarder. They may even watch me ride, and come to that conclusion because of the technique I can muster and the terrain I can ride on. Yet put me alongside other riders I used to compete with, and any questions of my so-called "expert" level go right out the window. That doesn't take away from what I have learned, the events I've participated in, and the medals I've won, but to use those hallmarks to say I am an "expert" is, in my mind, misleading. Still, to others who have never raced, or are newcomers to the sport, I am an expert, insomuch as I can show or teach them things that they do not know.
Again, I thank uTest for giving me an opportunity to share my experiences, and I am honored to be part of their "Expert" panel. I don't know if I deserve the moniker, but they seem to think so, and so do their readers, and ultimately, I guess that just means it's up to me from here on out to either prove them right, or prove them wrong. Here's hoping my actions and efforts do more to strengthen the belief in the former, rather than proving the latter ;).