Wednesday, July 16, 2014 09:50 PM
Today I sent the following message to the members of the Education Special Interest Group of the Association for Software Testing:
Three years ago at this time, I took on a challenge that no one else wanted to take on. I realized that there was a lot at stake if no one did (the AST BBST classes might well have ceased), and thus a practitioner with little academic experience took over a role that Cem Kaner had managed for several years. I stepped into the role of Education SIG Chair, and through that process I learned a lot, we as a SIG have done a lot, and some interesting projects have come our way (expansion of AST BBST classes and offerings, SummerQAmp materials, the PerScholas mentoring program, etc.). It's been a pleasure to be part of these opportunities and represent the members of AST in this capacity.
However, there is a time and a season for all things, and I feel that my time as an effective Chair has reached its end. As of July 15, 2014, I have officially resigned as the Chair of the Education Special Interest Group. This does not mean that I will stop being involved, or stop teaching BBST courses, or stop working on the SummerQAmp materials. In fact, it's my desire to work on those things that has prompted me to take this step. Even my hyper-involved self has to know his limitations.
I have asked Justin Rohrman to be the new Chair of the Education Special Interest Group, and he has graciously accepted. Justin is more than capable of doing the job. In many ways, I suspect he will do a better job than I have. I intend to work with him over the next few weeks to provide an orderly transition of roles and authority so that he can do what I do, and ultimately, so I can stop doing some of it :).
Justin, congratulations, and thank you. EdSIG, I believe wholeheartedly you shall be in good hands.
Outgoing EdSIG Chair
To everyone I've had the chance to work with in this capacity over the past three years, thank you. Thank you for your patience as I learned how to make everything work, for some definition of "work". Thank you for helping me learn and dare to try things I wasn't aware I could even do. Most of all, thanks for teaching me more than I am sure I have ever taught any of you over these past three years.
As I said above, I am not going away. I am not going to stop teaching the BBST courses; in fact, this will give me more of an opportunity to teach them, or to assist others in doing so, which is the more likely outcome, I think. It also frees me up to give more attention to programs that matter a great deal to me, such as SummerQAmp and PerScholas. As I said above, I believe Justin will be fantastic, and I'll be just a phone call or email message away if he should need help ;).
Friday, July 04, 2014 01:13 PM
Below is a message from Packt Publishing that celebrates their tenth anniversary as a publisher, and an announcement that all of their ebook and video titles are, until Saturday, July 5, 2014, available for $10 each.
To take advantage, go to http://bit.ly/1mWoyq1
Packt celebrates 10 years with a special $10 offer
This month marks 10 years since Packt Publishing embarked on its mission to deliver effective learning and information services to IT professionals. In that time it’s published over 2000 titles and helped projects become household names, awarding over $400,000 through its Open Source Project Royalty Scheme.
To celebrate this huge milestone, from June 26th, every e-book and video title is $10 each for 10 days – this promotion covers every title and customers can stock up on as many copies as they like until July 5th.
Dave Maclean, Managing Director, explains: "From our very first book published back in 2004, we've always focused on giving IT professionals the actionable knowledge they need to get the job done. As we look forward to the next 10 years, everything we do here at Packt will focus on helping those IT professionals, and the wider world, put software to work in innovative new ways.
"We're very excited to take our customers on this new journey with us, and we would like to thank them for coming this far with this special 10-day celebration, when we'll be opening up our comprehensive range of titles for $10 each."
If you’ve already tried a Packt title in the past, you’ll know this is a great opportunity to explore what’s new and maintain your personal and professional development. If you’re new to Packt, then now is the time to try our extensive range – we’re confident that in our 2000+ titles you’ll find the knowledge you really need, whether that’s specific learning on an emerging technology or the key skills to keep you ahead of the competition in more established tech.
More information is available at: http://bit.ly/1mWoyq1
Wednesday, July 02, 2014 08:23 PM
Several weeks ago I made a conscious decision: to focus on one goal and one goal only. I'll admit it's a strange goal, and it's difficult to put into simple words or explain in a way that many people will understand, but for long-time readers of this blog, that shouldn't come as a surprise ;).
In August, I'm going to be presenting a talk with Harrison Lovell about Coyote Teaching, and the ways in which this type of teaching can help inform us better than rote example and imitation. As part of this process, I thought it would be fun to take something that I do that is completely outside the realm of software testing, and see what would happen if I applied or examined Coyote Teaching ideas and techniques in that space. Personally, I found the results very interesting, and over the next few days, I'm going to share some of what I learned, and how those lessons can be applied.
What was this unusual project? It's all about "playing pirate" ;).
OK, wait, let me back up a bit...
One of the things I've been famous for, over several Halloweens, has been my elaborate and very involved pirate costumes. Why pirates? They fascinate me, always have. They are the outsiders, the ones who dared to subvert a system that was tyrannical, and to make a world that was built on their own terms. Granted, that world was often bloody, violent, deceptive, and very dangerous, with a strong likelihood the pirates would be killed outright or publicly executed, yet it has captured the imaginations of generations through several centuries.
Here in Northern California, there is an annual affair called the Northern California Pirate Festival. Many people dress up and “play pirate” at this event, and last year I made a commitment that I would be one of them. More to the point, I decided I wanted to go beyond just “playing pirate”, I wanted to get in on the action. Now, in this day and age, get in on the action doesn't mean “become an actual pirate”, it means join the ranks of the re-enactors. This year was a small step, in that I chose to volunteer for the festival, and work in whatever capacity they needed. With this decision, I also opted to go beyond the Halloween tropes of pirates, and actually research and bring to life a composite character from that time, and to pay special attention to the clothes, the mannerisms, and the details of the particular era.
When most people look at popular representations of pirates, they're looking at tropes from the Golden Age of Piracy, that period in the early 1700s where many of the famous stories are set (the Pirates of the Caribbean franchise, Treasure Island, Black Sails, etc.). What this ignores is the fact that piracy had been around for millennia, and there were other eras with a rich history, and an interesting look, all their own.
To this end, I decided that I wanted to represent an Elizabethan Sea Dog. My goal was to have people walk up to me, and say “hey, you look different than most of the people here”, and then I could discuss earlier ages of piracy, or in my case, privateering (and really, if you were on the side of the people being attacked, that difference was mostly irrelevant).
To make this a little more interesting, I decided to make my outfit from scratch. The only items I did not make from scratch were my boots, my hat, and the sword and dagger that I chose to carry. Everything else would be hand made, and here is where our story really begins.
The first order of business, if you choose to be a re-enactor, is to do research. If your character is a real person, you need to know as much as possible about not just their personal history, but about their time period, where they came from, the mores of the day, the situations that may have driven someone to be on the high seas in the first place, and the decisions that might potentially lead them to being privateers or pirates. Even if the character you are re-enacting is fictitious, you still want to be able to capture these details. I spent several months reading up on and examining all of these aspects, but I gave the clothes of the era special attention. What did a mariner in the mid-1500s actually wear? To this end, I came up with a mental picture of what I wanted my Sea Dog to look like. My Sea Dog would have high cavalry-style boots, long pumpkin breeches, a billowy Renaissance-style shirt, a close-fitting jacket (referred to as a "doublet"), a thicker outer jacket called a jerkin, and would wear what was called a "Tudor cap". I would also make a wide belt capable of carrying both a rapier and a main gauche (a parrying dagger used in two-handed dueling, common for the time period). I would make the "frogs", or the carriers for the sword and dagger. I’d also make a simple pouch to hold valuables. Just a handful of items. It didn't seem that complicated. As one who already knew how to sew, and had experience making clothes in the past, I figured this was a project I could knock out in a weekend.
Wow, was I ever wrong!
I thank you if you have stuck with me up to this point, and you may be forgiven if you are thinking "wow, that's quite a buildup, but what does this have to do with the Coyote Teaching method?" Well, let's have a look, starting with the first part of the project, the pumpkin breeches.
Through my research I decided I wanted to create something that looked dashing, and a little dangerous, so I decided I would use leather and suede in many of the pieces. The problem with using leather and suede is that it doesn't come on a regular sized bolt of fabric. In fact, real leather and suede is some of the most irregular material you can work with, since it entirely depends on the particular hide you are examining. I quickly realized that I had no pieces that would give me a size to cut a full leg portion from any of my pattern pieces. What to do? In this case, I decided to piece long strips of three inch wide suede together. This would give the look of “panel seams”, and give the sectional look that is common for pumpkin breeches.
So let’s think about the easy part. Make a pair of pants. Just cut some material, and stitch it together, right?
Here are the steps that making these pants really entailed:
- take out multiple suede hides and examine them
- cut away the sections that would be unusable (too thin, too thick, holes or angles that couldn’t be used, etc.)
- lay out the remaining pieces and utilize a template to cut the strips needed. Repeat 40 times.
- take regular breaks because cutting through suede is tiring.
|Irregular suede pieces cut to a uniform width and length.|
- baste the pieces together and stitch them down the length of the strips, so as to make panels that were ten strips wide.
- make four of these panels.
|Stitched composite panels. Each section of ten strips is used for half of each leg (4 panels total).|
- size the pattern for the dimensions of the pants desired.
- cut the suede panels into the desired shapes (being careful to minimize the need to cut over the stitched sections)
- cut matching pattern pieces out of linen to act as a lining for the suede.
- pin and piece together the lining and outer suede and sew them together.
- piece the leather panels to each other to sew them together so they actually resembled breeches
- wrestle with a sewing machine that is sewing through four to six layers of suede at a time, as well as the thickness of the lining material
- replace broken needles, since suede is murder on sewing machine needles, even when using leather needles.
- unstitch areas that bunched up, or where the thread was visible and not cleanly pulled, or where thread broke while stitching.
- make a cutaway and stitch a fly so that the pants can be opened and closed (so as to aid putting on and taking off, and of course, answering the call of nature).
- punch holes to place grommets in the waistband (since breeches of this period were tied to the doublet).
The weekend I had set aside to do the whole project actually gave me just enough time to size the suede and cut the strips I would need. That’s all I managed to do in that time, because I discovered a variety of contingent steps. I had to get my tools together, determine which of my tools were up to the task (and which tools I didn’t even own), and clear space and set up my work area to be effective. These issues took way more time than I anticipated.
How long did it take me to actually make these breeches? When all was said and done, a week. Using whatever time I could carve out, I estimated I spent close to 14 hours getting everything squared away to make these.
|Completed pumpkin breeches... or so I thought at the time.|
I liked how they turned out, I thought they looked amazing… that is, until I gave them a test run outside in the heat of the day, and realized that I would probably die of heat exhaustion.
Since the weekend of the event was looking to be very hot (mid 90s, historically speaking, with one year reaching 104 degrees), I realized these pants would be so uncomfortable as to be unbearable. What could I do now? I had put so much time into these, and I didn’t have time to start over. Fortunately, research came to the rescue. It turns out that there was a style of pumpkin breeches that, instead of being stitched together, had strips of material ("guardes") that hung loosely. After looking at a few examples, and seeing how they were made, I decided to cut open all of the seams I had spent so much time putting together, and reinforce the sections at the waist and at the bottom of each leg. It was a long and tedious change, but it allowed air to escape, and me to not die of heat exhaustion.
|Jumping a little ahead, but here is the finished open-air version of the pumpkin breeches.|
OK, let's talk Coyote Teaching now...
This whole process brought into stark relief the idea of estimating our efforts, and how we, even when we are experienced, can be greatly misled by our enthusiasm for a project.
Was I completely off base to think I could get this project done in a weekend? Turns out, yes! The estimate might have been accurate had I been using standard bolt fabric, but I wasn’t. I chose to do something novel, and that “novel” approach took five times longer to complete. What’s more, I had to undo much of the work that I did to actually make it viable.
My estimate was dead wrong, even though I had experience making pants and making items to wear. I knew how to sew, I knew how to piece together items, I had actually made items before, so I felt that gave me good grounds to make an estimate that would be accurate.
I’ve come to appreciate that, when I try to make an estimate on something I think I know how to do, I am far more likely to underestimate the time requirements needed when I am enthusiastic about the project. In contrast, if I am pessimistic about a project, I am likely to overestimate how long it will take. Our own internal biases, whether they be the “rose colored glasses” of optimism, or the depletion of energy that comes with pessimism, both prevent us from making a real and effective estimate.
Knowing what I know now, how would I guide someone else through a similar project? I could tell them all of the pitfalls I faced, but those might not be helpful unless they are doing exactly the same thing I did. Most of the time, we are not all doing the exact same thing, and my suggestions may prove to be a hindrance. Knowing what I now know about the process of preparing suede in sections, I would likely walk the person through defining what might be done. In the process, it’s possible they might come up with answers I didn’t ("Why are we using real suede, when we can buy suede-cloth that comes on a regular-sized bolt?" "Couldn’t we just attach the strips to a ready-made pair of pants?"). By giving them the realities of the issues they might face, or letting them think through those issues themselves, we can help foster avenues and solutions that they might not otherwise find, or that perhaps we wouldn’t, either.
One thing is decided, though… the next outfit I make (and yes, be assured, I will make another one ;) ) is going to be made with regular bolts of wool, cotton or linen, rakish good looks be darned.
Monday, June 30, 2014 11:04 PM
I would like to cordially invite each and every one of you reading this to come and join us this Saturday, July 5, 2014.
Weekend Testing Americas #52 - Going Deep with "Deep Testing"
Date: Saturday, July 5, 2014
Time: 09:00 a.m. - 11:00 a.m. Pacific Daylight Time
Facilitator: Michael Larsen
So what does "deep testing" mean? Well, if I told you all that now, it wouldn't make much sense to hold the session, would it? Still, I can't just leave it at that. If that's all I was going to do, I might just as well have you look at the Weekend Testing site announcement and be done with it.
Justin Rohrman and I were discussing what would make for a good session for July, and he suggested the idea of "deep testing", and in the process, he suggested that we consider a few questions:
What if, instead of just having an unfocused bug hunt, we as a group decided to take a look at a specific feature (or two or three, depending on the size of the group) and do what we could to really drill down as far as we could with that particular feature? Heck, why not just dig into a single screen and see what we could find? We have an exercise in the AST BBST Test Design class that does exactly this, only it takes it to the component level (we're talking one button, one dial, one element, period). We're not looking to be that restrictive, but it's an interesting way of looking at a problem (well, we thought so, in any event).
As part of the session, we are going to focus on some of these ideas (there may be lots more, but expect us to, for sure, talk about these):
- how do we know what we are doing is deep testing?
- what do we do differently (thought process, approach, techniques, etc.)?
- how do we actually perform deep testing (hint: staring at a feature longer doesn't make it "deep")?
- how do we know when enough is enough?
If this sounds like an interesting use of part of your Saturday, then please, come join us July 5, 2014. We will be starting at 9:00 a.m. Pacific time for this session, and it will run until 11:00 a.m. Pacific.
For those who have done this before, you already know the procedure. For those who have not, please add "weekendtestersamericas" to your Skype ID list and send us a contact request. More specifically, tell us via Skype that you would like to participate in this upcoming session. We will build a preliminary list from those who send us those requests.
On Saturday, at least 15 minutes before the session starts, please be on Skype and ready to join the session (send us a message to say you're ready; it makes it that much easier to build the session) and we will take it from there.
Here's hoping to see you Saturday.
Thursday, May 29, 2014 02:38 AM
Hello everyone, and sorry for the delay in posting. There are a lot of reasons for that, and really, I'll explain in a lengthy post (or small series of posts) exactly why that has been the case. However, tonight, I am emerging from my self-imposed exile to come out and give support for Curtis Stuehrenberg and his talk about "ACCellerating Your Test Planning".
From the BAST meetup post:
"One of the most pervasive questions we're asked by people testing within an agile environment is how to perform test planning when you've only got two weeks for a sprint - and you're usually asked to start before specifications and other work is solidified. This evening we plan on exploring one of the most effective tools your speaker has used to get a test team started working at the beginning of a sprint and perhaps even earlier. We'll be conducting a working session using the ACC method first proposed by James Whittaker and developed over actual practice in mobile, web, and "big data" application development."
For those not familiar with Curtis (and if you aren't, well, where have you been ;)? ):
Curtis is currently leading mobile application testing at the Climate Corporation, located in San Francisco, Seattle, and Kansas City. When not trying to help farmers and growers deal with weather and changing climate conditions, he devotes what little free time he can muster to using his 15 years of practical experience to promote agile software testing and contextual quality assurance at conferences like SFAgile, STPCon, ALM-Forum, and CAST, as well as in publications like Tea-time with Testers and Better Software magazine.
This is an extension of Curtis' talk from the ALM Forum in April. One of the core ideas is to ask "can you write your test plan in ten minutes? If not, why not?"
Curtis showed some examples from his own product (including having each of us download the Climate Corp mobile app), and brought us into an example testing scenario and requirements gathering session. Rather than trying to make an exhaustive document, we had to be very quick and nimble about what we could cover and how much time we had to cover it. In this case, we had the duration of the talk to define the areas of the product, the components that were relevant, and the attributes that mattered to our testing.
Session Based Test Management fits really well in this environment, and helps to focus attention for a given session. By using a very focused mission and a small time box (30 minutes or so), each test session gives the tester the ability to look at the attributes and components that make sense in that specific session. By writing down and reporting what they see, testers document their test cases as they are being run; in addition, they surface areas where they may have totally new testing ideas based on the session they just went through, and these in turn inform other testing sessions. In some ways, this method of exploring and reporting simultaneously allows for the development of a matrix that is more dense and more complete than one generated up front, before actively testing.
The dynamic this time around was more personal and more focused. Since it was not a formal conference presentation, questions came more freely, and we were able to address them immediately rather than waiting until the talk was finished. Jon Bach's idea of threads was presented and described: how it can help capture interesting data, help us consciously stay "on task", and still capture interesting areas to explore later (OK, I piped in on that, but hey, it deserved to be said :) ).
It's been a few months since we were able to get everyone together, and my thanks to Curtis for taking the lead and getting us together this month. We are looking forward to next month's Meetup, and we'll announce it as soon as we know what it is (and who is presenting it ;) ).
Thursday, May 01, 2014 08:00 PM
For those who have been following my comments about Harrison Lovell’s and my CAST 2014 talk ("Coyote Teaching: A new take on the art of mentorship"), this fits very nicely into the ideas we will be discussing. We’ve been looking back at interactions that we have had over the years where mentorship played an important role in skill development. During one of our late night Skype calls, we were talking about skateboard and snowboard skills, and how we were able to get from one skill level to another.
One aspect that we both agreed was a common challenge was “the fear factor” that we all face. In a broader sense, we both appreciate that snowboarding and skateboarding are inherently dangerous. Push the envelope on either and the risk of injury, and even death, is definitely possible. Human beings tend to work very hard at an unconscious level to keep ourselves alive. The amygdala is the most ancient part of our brain development. It deals with emotions, and it also deals with fear and aggression. It’s our “fight or flight” instinct. On one level, it’s perfectly rational to listen to it in many circumstances, but if we want to develop a technical skill like jumping or riding at speed, we have to overcome it.
About fifteen years ago, I first met Sean Craddick, a fellow snowboarder who was my age and was, to put it simply, amazingly talented. Whenever I saw Sean at a competition, I would joke, “Oh well, there goes my shot at a Gold Medal!” He humored me the first couple of times, but the third time I said it, he surprised me. He answered, “Dude, don’t say that. Don’t ever say that! I could try to throw some trick and land it badly, and scrub my entire run. I could miss my groove entirely, or miss a gate on a turn, or I could catch an edge and bomb the whole thing. Every event is up in the air, and every event has the potential of having an outcome we’d least expect. Don’t say you don’t have a chance. You always have a chance, but you’ll never get the chance if you don’t believe you have it.”
Because we were both the same age and had fairly similar life experiences, I’d hang out with Sean at many of these events, and sometimes run into him on off days when I was just up at the mountain practicing. One time, he noticed that I kept going by a tall rail, and at the last moment, I’d veer off or turn and ride past it. After seeing this a few times, when he saw me about to veer off again he yelled, “Hey Michael! The next time you veer off, stop dead in your tracks, unbuckle your board, and walk back up here. I want to talk to you about something.” Sure enough, I went down, veered off course, and slammed to a stop. I took off my board, and then I walked up the hill. Sean looked at me and said:
“Take it straight on, and line your nose with the lip and where the beginning of the rail is.”
I nodded, buckled in, and then went down to the rail transition. I veered off. I stopped. I walked back up the hill.
“Lean back on your rear heel just a bit. It will give you a more comfortable balance when you first get on the rail.”
I nodded, buckled back in, went down again, and again I veered off. I stopped, unbuckled, and walked back to the top of the hill again. By this point I was winded, my calves were aching, my heart was pounding, and I was getting rather frustrated.
“One final thing. Do an ollie at the end of the rail.”
What? I hadn’t even gotten on the rail, why is he telling me what to do when I get off of it? I shrugged, buckled in, went for the hit, and this time, I went straight, I lined the nose up, I set my weight back just a little bit, I slid down the rail, and I did a passably adequate ollie off the end of the rail, and landed the trick. When I did, Sean whooped and hollered, then came down after me and hit the same rail.
“Awesome, let’s go hit the chairlift!”
As we did, Sean looked at me and said:
“You can understand all the mechanics in the world, but if your brain tells you 'you can’t do it, it’s too dangerous, it’s too risky', you need to get your body to shut your brain up! That’s what I had you do. I knew why you were sketching the last few feet. You were afraid. It felt beyond you. You might crash. It might hurt real bad if you do. The brain understands all that. It wants to keep you safe. Safe, however, doesn’t help you get better. Whenever I find myself giving in to the fear, I stop what I’m doing, right there, and I walk up the hill, and I try it again. If I pull back again, I walk the hill again, and again, and again. What happens is the body gets so fatigued that every fiber of your being starts screaming to your brain ‘just shut up already and let me do this!' Exertion and exhaustion can often help you overcome any fear, and then you can put your mechanics to good use.”
Yeah, I paraphrased a lot of that, but that’s the gist of what Sean was trying to get across to me. Our biggest enemy is not that we can’t do something, but that we are afraid that we can’t do something. That fear is powerful, it’s ancient, and it can be paralyzing. That ultra-primitive brain can’t be reasoned with very well, unless we give it another pain to focus on. At some point the physical pain of exertion and exhaustion will out-shout the feelings of fear, and then we can do what we need to do.
In a nutshell, that’s “The Craddick Effect”. There may be a much fancier name for it, but that’s how I’ve always approached mentorship where I have to overcome fear and doubt in a person. When someone is afraid, it’s easy to retreat. As a mentor, we have to recognize when that fear is present, and somehow work with it.
You may not do something as extreme as what Sean did with me, but you may well find other, more subtle ways to accomplish the same thing. Imagine having to take on a new testing tool where there’s a lot that needs to be learned up front. We could let people go off on their own and poke around. We can take their word that they are getting and understanding what they need to, or we can prod and test them to see what’s really happening. If we see that they don’t understand enough, or maybe understand very little, don’t assume lack of aptitude or drive; look for fear. If you can spot fear, try to coax them into putting their energy somewhere else for a time, so that they can get to a point where they can shout down the fear. It may be having them do a variety of simpler tasks, still fruitful, but somewhat repetitive and tedious. After a while, they will get a bit irritated, and then you can give them a slight push to move farther forward. Repeat as necessary. Over time, you may well see that they have slid past the pain and frustration point, and they just “get” what they are working with. It just clicks.
As a mentor, look to help foster that interaction. As a person receiving mentoring, know that this may very well be exactly what your mentor is trying to do. Allow yourself to go with it. In the end, both of you may learn a lot more about yourselves and your potential than you thought possible. It’s pretty cool when that happens ;).
Thursday, May 01, 2014 05:32 AM
Yes, this is going to be live blogged, and as usual, it may be messy at first. Forgive the stream of consciousness, I promise I’ll clean it up later :).
A bit about our topic this evening (courtesy of Meetup):
Django 1.7 is one of the biggest releases in recent years for Django; several major new features, innumerable smaller improvements, and some big changes to parts of Django that have lain unchanged since before version 1.0. Come and learn about new app loading, system checks, customized select_related, custom lookups, and, of course, migrations. We'll cover both the advantages these new features bring as well as the issues you might have when upgrading from 1.6 or below.
A bit about our presenter this evening (also from Meetup):
Andrew Godwin is a Django core developer, the author of South and the new django.db.migrations framework, and currently works for Eventbrite as a Senior Software Engineer, working on system architecture. He's been using Django since 2007, and has worked on far too many Django websites at this point. In his spare time, he also enjoys flying planes, archery, and cheese.
#1 Randall Degges - Django & Bcrypt
Randall kicked things off right away with a talk about how Django does password hashing and securing of passwords, with the estimated cost of what it takes to crack a password (hint: it's not that hard). If you want to be more security conscious, Randall recommends considering BCrypt. It's been around a while, and it allows for transparent password upgrading (users update their hash the first time they log in; no muss, no fuss :) ). Sounds kinda cool, to tell the truth; I'm looking forward to playing with it for a bit.
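For reference, here's a minimal sketch of what opting in looks like (my illustration, not Randall's actual code, and it assumes the bcrypt package is installed):

```python
# settings.py -- a minimal sketch of opting in to bcrypt hashing in
# Django (assumes the "bcrypt" package is installed). Hashers are tried
# in order: the first is used for new passwords, the rest keep older
# hashes (e.g. PBKDF2) verifiable, and Django transparently upgrades a
# user's hash to the first scheme the next time they log in.
PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
    'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
)
```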
#2 Venkata - Django Rest Framework w/ in-line resource expanding
The second talk covered a bit on the Django REST Framework. Some of the cool methods to handle drop-down, pop-open, and other events were very quickly covered, along with some quick details as to what each item can do. It was a quick discussion with fast flashes of code. I caught some of the details, but I'll be the first to admit, a lot of this flew right past me (which gives me a better idea as to areas I need to get a little more familiar with). Granted, this is a lightning talk, so that should be expected, but hey, I pride myself on being able to keep up ;).
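I didn't capture the exact code, but the "in-line resource expanding" idea maps to nested serializers in the Django REST Framework. Here's a minimal sketch under that assumption, with hypothetical Author/Book models of my own invention:

```python
# serializers.py -- a sketch of in-line resource expansion with Django
# REST Framework. Author and Book are hypothetical models; Book is
# assumed to have a ForeignKey to Author with related_name='books'.
from rest_framework import serializers
from myapp.models import Author, Book

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = ('id', 'title')

class AuthorSerializer(serializers.ModelSerializer):
    # Embed each related book in-line in the author payload, instead of
    # returning a flat list of book IDs.
    books = BookSerializer(many=True, read_only=True)

    class Meta:
        model = Author
        fields = ('id', 'name', 'books')
```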
#3 Django Meetup Recap
The third lightning talk covered a recap of what the Django group has been doing and some quick recaps of what has been discussed in previous meetups (Ansible, Office Entrance Theme Music, Integrating Django & NoSQL, etc.). Takeaway: if we want resources after the meetups are over, we have a place to go (and I thank you for that :) ).
Andrew Godwin's Talk
This seems like a great time to say that I'm relatively new to Django, so a lot of what's being discussed is kind of exciting, because it makes me feel like I'll be able to get into what's being offered without having to worry about unlearning a lot of things to feel comfortable with the new details. Part of the new code is an update to South (which, as mentioned above, is something Andrew is intimately involved in).
Andrew walked through details of how apps are loaded, and how the new system checks can catch problems and warn programmers about what may happen with an upgrade. Having suffered through a few updates where things worked, then didn't, without any clue as to why, I find this very appealing.
Another new aspect is an adjustable and tunable prefetch option, so that instead of all or nothing, there's a spectrum of choices that can be tuned based on context.
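If I followed correctly, this is the new Prefetch object. A minimal sketch under that assumption (hypothetical Author/Book models again):

```python
# A sketch of Django 1.7's tunable prefetching with hypothetical models.
# Instead of prefetching every related row, a Prefetch object lets you
# customize the queryset that gets fetched and where it lands.
from django.db.models import Prefetch
from myapp.models import Author, Book

authors = Author.objects.prefetch_related(
    Prefetch('books',
             queryset=Book.objects.filter(published=True),
             to_attr='published_books'))

for author in authors:
    print(author.name, [book.title for book in author.published_books])
```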
A rather ominous slide flashed by saying "Important Upgrade Notes": all field classes now need to have a deconstruct() method. It's a required method for all fields. Additionally, initial_data is dead; modules should use data migrations instead. In short, don't automatically assume that older modules that use initial_data will work cleanly. I will take Andrew's word on that ;).
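For reference, here's a minimal sketch of the deconstruct() requirement, patterned on the example in the Django documentation (HandField is the docs' hypothetical custom field):

```python
# A sketch of the deconstruct() contract for custom fields, patterned
# on the Django docs' HandField example. Migrations serialize a field
# by asking it how to reconstruct itself.
from django.db import models

class HandField(models.Field):
    def __init__(self, *args, **kwargs):
        kwargs['max_length'] = 104  # fixed for this field type
        super(HandField, self).__init__(*args, **kwargs)

    def deconstruct(self):
        # Return (name, import path, positional args, keyword args) so
        # the migrations framework can recreate this field later.
        name, path, args, kwargs = super(HandField, self).deconstruct()
        del kwargs['max_length']  # hard-coded above, so don't serialize it
        return name, path, args, kwargs
```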
So what's coming up in Django 1.8? Definitely improvements in interactions with PostgreSQL, as well as migrations for contrib apps. But that's getting a bit ahead of the race at the moment. Expect Django 1.7 to hit the scene around May 15th, give or take a few days. Again, I will take Andrew's word on that ;).
There's no question, I feel a little bit like a fish out of water, and frankly, that's great! This reminds me well that there is so much I need to learn, especially if my goal of becoming a technical tester is going to advance farther than just wishful thinking or following pre-written recipes. It's not enough to just "know a framework" or "know my framework".
As was aptly demonstrated to me a year and a half ago, I spent a lot of time in the Rails stack, and then I went to work for a company that didn't use Rails at all. Did that mean all that time and learning was wasted? Of course not. It gave me a way to look at how frameworks are constructed and how they interact. I think of it like learning Spanish when I was younger. Don't get me wrong, I'm no great shakes when it comes to Spanish, but I understand a fair amount, and can follow along in many conversations. What's really cool is that it gives me the added benefit of being able to follow a little bit of both French and Italian as well, since they are closely related. That's how I feel about learning a variety of web frameworks. The more of them I learn, the easier it will be to move between them, and to understand the challenges that they all face.
In any event, this was an interesting and whirlwind tour of some new stuff happening in Django, and I plan to come back and learn more, with an eye to understanding more next time than I did today. Frankly, that shouldn't be too hard to accomplish ;).
Thanks for hanging out with me. Have a good rest of the evening, wherever you are.
Thursday, May 01, 2014 06:26 PM
|Yep, that's what the Web looked like in 1993. Cool, huh?|
I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place that novice testers could go to learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.
Codecademy describes the overhaul in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.
There are projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.
Additionally, there are Q&A forums associated with each project; even when I've been stuck in some places, I've been able to find answers in each of the forums thus far. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource. If I have to be nit-picky, it's the fact that, often, many of the Q&A forum answers are jumbled together. Though the interface allows you to filter on the particular module and section by name, number, and description, it would be really helpful to have a header for each question posted that says which module the question represents. Many do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).
Wednesday, April 23, 2014 07:26 PM
The ultimate goal, according to Dave, is to make tests that are business valuable, and then do what you can to package those tests in an automated framework. This then frees the tester to look for more business-valuable tests with their own eyes and senses. Rinse, lather, repeat.
The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.
Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!
Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items it needs and confirm their presence, or determine what to do next based on their existence or non-existence. Class and ID are the most helpful locators long term. CSS and XPath may be needed from time to time, but if they're more "rule" than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out; which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.
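To illustrate, here's a minimal sketch in Python (my example against the login page of the-internet, Dave's public practice app, not code from the talk):

```python
# A sketch of locator strategies with Selenium's Python bindings,
# against the login page of the-internet (Dave's practice app).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://the-internet.herokuapp.com/login")

# ID locators are the most durable:
driver.find_element(By.ID, "username").send_keys("tomsmith")
driver.find_element(By.ID, "password").send_keys("SuperSecretPassword!")

# CSS (or XPath) when you must, ideally as the exception:
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# Class name locator to confirm the result message appeared:
flash = driver.find_element(By.CLASS_NAME, "flash")
print(flash.text)

driver.quit()
```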
Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.
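Here's a minimal sketch of that idea (mine, not Dave's): a login page object that keeps the locators in one place, plus two atomic tests that each check a single behavior:

```python
# A sketch of a page object plus atomic tests (hypothetical structure;
# driver setup/teardown is assumed to live in central fixtures).
from selenium.webdriver.common.by import By

class LoginPage(object):
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")
    FLASH = (By.CLASS_NAME, "flash")

    def __init__(self, driver):
        self.driver = driver
        self.driver.get("http://the-internet.herokuapp.com/login")

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

    def flash_text(self):
        return self.driver.find_element(*self.FLASH).text

def test_valid_login(driver):
    page = LoginPage(driver)
    page.log_in("tomsmith", "SuperSecretPassword!")
    assert "secure area" in page.flash_text()

def test_invalid_login(driver):
    page = LoginPage(driver)
    page.log_in("nosuchuser", "SuperSecretPassword!")
    assert "invalid" in page.flash_text()
```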
Page Object models allow the user to tie Selenium commands to page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we add a layer of abstraction so that changes to the Selenium driver minimize the need to change multiple page object files.
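A minimal sketch of that abstraction layer (my illustration): page objects call these wrappers instead of Selenium directly, so a breaking change in the driver API gets absorbed in one file:

```python
# A sketch of a "base page object": the one place that talks to
# Selenium directly. Page objects inherit from this, so a WebDriver
# API change means editing one file, not every page object.
class BasePage(object):
    def __init__(self, driver):
        self.driver = driver

    def visit(self, url):
        self.driver.get(url)

    def find(self, locator):
        return self.driver.find_element(*locator)

    def click(self, locator):
        self.find(locator).click()

    def type_into(self, locator, text):
        element = self.find(locator)
        element.clear()
        element.send_keys(text)

    def text_of(self, locator):
        return self.find(locator).text
```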
Explicit waits help time-bound problems with page loading or network latency. Defining a "wait for" option is more helpful, as well as efficient. Instead of hard coding a 10 second delay, the wait for allows a max length time limit, but moves on when the actual item needed appears.
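In Python's Selenium bindings, that looks roughly like this (a sketch, not code from the talk):

```python
# A sketch of an explicit "wait for": poll up to `timeout` seconds,
# but proceed the moment the element shows up (vs. a hard-coded sleep).
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for(driver, locator, timeout=10):
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(locator))

# usage: flash = wait_for(driver, (By.CLASS_NAME, "flash"))
```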
If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
- Central setup and teardown
- Central folder structure
- Well-defined config files
- Tagging (test packs; subsets of tests such as WIP, critical, component name, slow tests, story groupings; see the sketch after this list)
- A reporting mechanism (or borrow one that works for you; have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
- Wrap it all up so that it can be plugged into a CI server.
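As one possible shape for the tagging item above (a sketch using pytest markers; most runners have an equivalent):

```python
# A sketch of test tagging with pytest markers: one of several ways to
# carve a suite into packs like "wip", "critical", or "slow".
import pytest

@pytest.mark.critical
def test_login_succeeds_with_valid_credentials():
    assert True  # placeholder for a real check

@pytest.mark.slow
@pytest.mark.wip
def test_bulk_import_of_many_records():
    assert True  # placeholder for a real check

# Run just the critical pack:    pytest -m critical
# Run everything except slow:    pytest -m "not slow"
```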
Scaling our efforts should be a long-term goal, and there are a variety of ways that we can do that. Cloud execution has become a very popular method. It's great for parallelization of tests and running large test runs in a short period of time if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies and find errors early, and often :).
Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix there and test again before allowing to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).
Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home.
Saturday, April 19, 2014 07:45 PM
Last week, April 9th, as I was getting off the train, I stood up and reached over to grab my bag. The "twinge" I felt above my hip on my right side was a telltale reminder. I have sciatica, and if I feel that twinge, I am not going to be in for a fun week or two. Sure enough, my premonition became reality. Within 48 hours, I was flat on my back, with little ability to move, and the very act of doing anything (including sleeping) became a monumental chore. That meant my progress on anything that was not "mission critical" pretty much stopped. There was no update last Friday because there was nothing to report. I spent most of the last week with limited movement, a back brace, copious amounts of ibuprofen, and typing only when I had to. I'm happy to report I'm getting much better, but sitting for long stretches to code or write is still painful, though less so each day.
As an overall course and level of coverage, I have to give Codecademy credit; they have put together a platform that is actually pretty good for a self-directed learner. It's not perfect, by any means, and the editor can be finicky at times, but it's flexible enough to allow for a lot of answers that would qualify as correct, so you don't get frustrated if you don't pick their exact way of doing something.
If there was any criticism, it's that there is little in the course examples that integrate the ideas (at least so far). There is a course on jQuery, and I anticipate that that will probably have more to do with actual web component interaction and integration, so that's my next goal to complete. After that, I plan to go back and complete the Ruby and Python modules, and explore their API module as well.
For now, consider this a modest victory dance, or in this case, a slow-moving fist pump. I may need a week or two before I can actually dance ;). Also, next week I'll have some meat to add to this, since I'm going to start covering some command-line-level tools to play with and interact with, and those are a lot more fun to write about!
Wednesday, April 09, 2014 04:10 PM
I'm happy to announce what I'll be talking about at CAST 2014 this August in New York City.
Actually, I need to qualify that. It's not what I'm going to talk about, it's what "we" are going to talk about.
Harrison Lovell is an up and coming tester with copious amounts of wit, humor and energy. Seriously, he gives me a run for my money in the energy department. I met Harrison through the PerScholas mentorship program, and we have been communicating and working together regularly on a number of initiatives since we first met in September of 2013. The results of those interactions, experiments, and a variety of hits (and yes, some misses here and there), are the core of the talk we will be doing together.
Here's the basics from the sched.org site:
"Coyote Teaching: A new take on the art of mentorship"
Too often, new software testers are dropped into the testing world with little idea as to what to do, how to do it, and where to get help if they need it. Mentors are valuable, but too often, mentors try to shoe-horn these new testers into their way of seeing the world. Often, the result is frustration on both sides.
“Coyote Teaching” emphasizes answering questions with questions, using the environment as examples, and allowing those being mentored the chance to create their own unique learning experience. Coyote Teaching lets new testers learn about the product, testing, the world in which their product works, and the contexts in which those efforts matter.
We will demonstrate the Coyote Teaching approach. Through examples from our own mentoring relationship, we show ways in which both mentors (and those being mentored) can benefit from this arrangement.
“When raised by a coyote, one becomes a coyote”.
Michael Larsen
Senior Quality Assurance Engineer, Socialtext
Michael Larsen is a Senior Tester located in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches, virtual machines, capacitance touch devices, and video games to distributed database applications that service the legal and entertainment industries.
Harrison C. Lovell
Associate Engineer, QA, Virtusa
Harrison C. Lovell is an Associate Engineer at Virtusa’s Albany office. He is a proud alumnus from Per Scholas’ ‘IT-Ready Training’ and STeP (Software Testing education Program) courses. For the past year, he has thrown himself into various environments dealing with testing, networking and business practices with a passion for obtaining information and experience.
Yes, I think this is going to be an amazing talk. Of course, I would say that, because I'm part of the duo giving it, but really, I think we have something unique and interesting to share, and perhaps a few interesting tricks that might help you if you are looking to be a mentor to others, or if you are one who wants to be mentored. One thing I can guarantee, considering the combinations of personalities that Harrison and I will bring to the talk... you will not be bored ;)
Friday, April 04, 2014 09:28 AM
Transforming Software Development in a World of Services with Sam Guckenheimer is the first session, and we are starting out with a thought experiment around Airbnb (the online service to rent rooms, houses, etc. in different cities). A boat on Puget Sound is available, so a company can host all of their team members on the boat. What will the experience be? Will it be a fun stay? Will it be too cramped? We don't know, but one thing's for sure: it will be open, it will be public, and good or bad, if people want to talk about it, they will.
This makes for an interesting comparison to Agile development, and the way that agile has shaken out. What was intended as a relatively private, internal housekeeping mode has become open to public viewing. We are social, we are open, we use systems that are often out of our control in the 100% sense of the word. A lot of our practices and actions are not quiet and hidden; they are visible to all who would care to see them. It's a little daunting, but it's also tremendously liberating.
This talk is looking at a Microsoft ideal of "cloud cadence". Customers want regular improvements, we want to maximize the value we provide to our customers, and we know that their feedback is not just for developers; it's seen by everyone. Get it right, and we have app store five-star reviews. Get it wrong, and we can have considerably lower reviews (and don't for a second think those reviews don't matter; they can be the difference between adoption and being totally forsaken).
The DevOps life cycle comes together with three aspects: we have development, we have production, and in between we have the collaboration piece. What's the most important element there? Well, without good development, we have a product that is sub-par. With bad deployment, we might have a great product, but it won't really work the way we intend it to. The middle piece is the critical aspect, and that collaboration element is really difficult to pin down. It's not a simple prescription or a set checklist; each organization and project will be different, and many times the underpinnings will change (from our servers to the cloud, from a dedicated and closed application to a socially aware application). Sometimes the changes are made deliberately, sometimes a little more forcefully. Either way, the life cycle doesn't hold together without a sense of shared purpose and collaboration between the development and production groups, including the tooling necessary to accomplish the goals.
The ability to do all of these things in the Visual Studio team is the core of Sam's talk, and the interactions with their clients, and the variety of changes that occur, drive many of their decisions. They learn from their customers and change direction. They focus on a human-to-human feedback model (which may sound a little unusual for a giant company like Microsoft, but Sam makes a convincing case :) ).
- less scripting, more active thinking
- less checking, more real testing
- less blind faith, more scientific skepticism
- creative, inventive, intuitive, mindful
- SummerQAmp: hire an intern
- PerScholas: have a chat with recent STeP graduates and their mentors
- Weekend Testing: Come join us for a session or two and see the magic happen
- Miagi-do: Do a web search for the term “Miagi-do School of Software Testing”. Or better yet, just ask me ;).
Curtis Stuehrenberg is talking about how to "ACCellerate Your Agile Test Planning". He chucked the PowerPoint entirely, and decided to give a crash course in Agile testing on a live product... specifically, his product (well, Climate Corp's mobile app, to be specific). His point was to ask: what if we have to test a product in two weeks? How about one week? How about three days? What are you going to do?
Rather than talk it, we all participated in an active testing session, downloading the app to our mobile devices (iPhone and Android only; sorry, Windows Phone users :( ). By walking through the steps and the test areas, and using an idea from James Whittaker and Google called the ACC model, we all in real time put together sections of risk and areas we would want to make sure we tested. In many ways, ACC is a variation on a theme of Session Based Test Management (SBTM). It informs our tests, we act on the guidance, and we pivot and adapt based on what we learn, and we do it quickly.
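For reference, ACC stands for Attributes, Components, and Capabilities. A toy sketch of what a grid might look like (entries invented by me for a hypothetical field-and-weather app, not Curtis's actual grid):

```python
# A toy ACC (Attribute/Component/Capability) grid for a hypothetical
# weather app; every entry here is invented for illustration. Rows are
# components, columns are attributes, and each cell holds capabilities
# (things the product must do, i.e., risks worth testing).
acc_grid = {
    ("Field Map", "Accurate"): ["shows the correct parcel boundaries"],
    ("Field Map", "Responsive"): ["pans and zooms without stalling"],
    ("Forecast", "Accurate"): ["matches the regional weather feed"],
    ("Alerts", "Timely"): ["severe-weather push arrives promptly"],
}

# Rank the cells by risk to decide where a two-week (or two-day)
# testing effort should go first.
for (component, attribute), capabilities in acc_grid.items():
    print("%s x %s -> %s" % (component, attribute, capabilities))
```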
Much of the interaction was just things we did in real time, and for my money, this was a brilliant way to emphasize this approach. Instead of just talking about it, we all did it. Even if the idea of a formal test plan is not something you have to deal with, give this approach a try. I know I'm going to play with this when I get back home :).
Now it's time for Seth Eliot and "Your Path to Data Driven Quality", a roadmap for how to use the data that you are gathering to help guide you to your ultimate destination. Seth wants to make the point that testing is measurement, and you can't measure if you don't have data (well, you can, but it won't really be worth much). Seth asks if we are HiPPO driven (meaning our strategy is defined by the "Highest Paid Person's Opinion") or whether we are making decisions based on hard data. Engineering data can help a little bit (test results, bug counts, pass/fail rates). They can give us a picture, but not a complete one (in fact, not even close to a complete one). There's a lot of stuff we are leaving on the table. Seth says that leveraging production data (or "near production data") gives us a richer and more dynamic data set. Testers try to be creative, but we can't come close to the wacko randomness of the real-world users that interact with our product.
First step: Determine your questions. Use the Goal-Question-Metric approach. Start at the beginning and see what you ultimately want to do. Don't just get data and look for answers. Your data will taint the questions you ask if you don't ask the questions first. You may develop a confirmation bias if you look at data that seems to point to a question you haven't asked. Instead, the data may give you a correlation to something, but it may not actually tell you anything important. Starting with the question helps to de-bias your expectations, and then it gives you guidance as to what the data actually tells you.
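To make that concrete, here's a hedged illustration of a Goal-Question-Metric chain (my example, not Seth's):
Goal: reduce checkout abandonment in our mobile app.
Question: at which step of the checkout flow do users drop off?
Metric: per-step completion rate, segmented by device type and release.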
Then: Design for production-data quality. There are two types of data we can access: active and passive. Active data could be test cases or synthetic data from a simulated user. Passive data comes from real-world data and real user interactions. Synthetic data is safer, but it's by definition incomplete. Passive data is more complete, but there's a danger to using it (compromising identification data, etc.). Staging the data acquisition lets us go from synthetic data (reminds me of my "Attack on Titan" account group that I have lovingly put together for testing Socialtext... yes, I have one. Don't judge me ;) ) to copying my actual account and sharing on our production site (much richer data, but it needs to be scrubbed of anything that could compromise individuals' privacy... which in turn gets us back to synthetic data of sorts, but a richer set). Bulk up and repeat. Over time, we can go from having a small set of sample data to a much larger and beefier data set, with lots more interesting data points.
Then: Select data sources. There are a number of ways to gather and accumulate data. We can export from user accounts, or we can actively aggregate user data and collect those details (reminds me of the days of NetFlow FlowCollection at Cisco). We need to be clear as to what we are gathering and the data-handling privacy that goes with it. Anonymous data is typically safe; sensitive personally identifiable info requires protocols to gather, most likely scrub, or not touch with a ten-foot pole. Will we be using infrastructure data, app data, usage, account details, etc.? Each area has its unique challenges. Plan accordingly.
Then: Use the right data tools. What are you going to use to store this data? Databases are of course common, but for big data apps, we need something a little more robust (Hadoop is hip in this area). Where do you store a Hadoop instance? Split it up into smaller chunks (note: splitting it makes it vulnerable, so we need to replicate it. Wow, big data gets bigger :) ). Using map-reduce tools, we can crunch down to a smaller data set for analysis purposes. I'm going to take Seth's word for it, as Hadoop is not one of my strong suits, but I appreciated the 60-second guided tour :). Regardless of the data collection and storage, ultimately that data needs to be viewed, monitored, aggregated, and analyzed. The tools that do that are wide and varied, but the goal is to drill down to the data that matters to you, and to have the ability to interpret what you are seeing.
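As a toy illustration of the map/reduce idea (plain Python standing in for what Hadoop does at scale):

```python
# A toy map/reduce in plain Python (a stand-in for what Hadoop does at
# scale): boil raw web logs down to hits per URL for analysis.
from collections import defaultdict

raw_logs = [
    "GET /home 200", "GET /cart 500", "GET /home 200", "GET /cart 200",
]

# Map: emit a (key, value) pair per record.
mapped = [(line.split()[1], 1) for line in raw_logs]

# Reduce: aggregate values by key.
hits = defaultdict(int)
for url, count in mapped:
    hits[url] += count

print(dict(hits))  # {'/home': 2, '/cart': 2}
```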
Then: Get answers to your questions. Ultimately, we hope that we are able to get answers, based on the real data we have gathered, that will help us either support or dispute our hypothesis (back to the scientific method; testing is asking questions and then, based on the answers we receive, considering and proposing more interesting questions). Does our data show us interesting points to focus our attention? Do we know a bit more about user sentiment? Have we figured out where our peak traffic times are? If we have asked these questions, gathered data that is appropriate for those questions, and focused on aggregating and analyzing the appropriate data, we should be able to say "yes, we have support for our hypothesis" or "no, this data refutes our hypothesis". Of course, that leads to even more questions, which means we go to...
Lather. Rinse. Repeat.
Hmmm, Mark Tomlinson just passed me a note with a statement that says "Computer Aided Exploratory Testing"? Hadn't considered it quite that way, but yes, this certainly fits the description. An intriguing prospect, and one I need to play with a bit more :).
Lightning talks! Woo!!! We have four presenters looking to rifle through some quick talks.
Mark Prichard is discussing "Complete Continuous Integration and Testing for Mobile and Web Applications". Mark is with CloudBees, and he's explaining how they do exactly what the title describes. Some interesting ideas surrounding how to use Jenkins and other tools to make it possible to build multiple releases and leverage a variety of common tools so as to not have to replicate everything for each environment. Leverage the cloud and Platform-as-a-Service for Continuous Delivery. Key takeaway, a quote attributed to Kurt Bittner: "ALM in the cloud will become the rule, not the exception".
Mike Ostenberg from SOASTA is next and he's talking about "Performance Testing in Production, and what you'll find there". This begs the question: *WHY* do we want to do performance testing in production (isn't that what we call a "customer freak out"? Well, yeah, but that's an after effect, and we really want to not go there ;) ). Real systems, real load, real profiling. There are ways we can simulate load on a test environment, but it's not really going to match what happens in the real space. Additionally, we want to do our load testing earlier than we traditionally do it. At the end of the cycle, we're a little too far gone to actually pivot based on what we learn.
Load testing in Production, Mike points out, can be done in stages and can be done on different levels. Just as we use Unit tests for components, integration tests for bigger systems, and feature/acceptance tests to tie it all together, we can deconstruct load tests to match a similar paradigm. Earlier load tests are dealing with errors, page loads, garbage collection, data management, etc. Regardless of the stage, there are some critical things to look at.
Bandwidth is #1: can everyone reach what they need? Load balancing, or making sure every machine pulls its weight, is also high priority. Application issues: there's no such thing as perfect code. Earlier tests can shake out the system to help show inefficient code, sync issues, etc. Database performance fits under application issues, but it's a special set of test cases. The database, as Mike points out, is the core of performance. Locking and contention, index issues, memory management, connection management, etc. all come into play. Architecture is imperative; think of matching the right engine to the appropriate car. Connectivity comes into play as well: latency, lack of redundancy, firewall capacity, DNS, etc. Configuration means we need to go beyond the defaults and actually verify the settings do what we think they do. Shared environments... watch out for those noisy neighbors :). Random stuff comes into play when things are shared in the real world. Pay attention to what they can do for you (or to you ;) ).
I like this staggered approach, it makes the idea of "testing in production" not seem so overwhelming.
Now on deck is Dori Exterman, and he's talking about "Reducing the Build-Test-Deploy Cycle from Hours to Minutes at Cellebrite". Hmmm, color me mildly skeptical, but OK, tell me more :). I'm very familiar with the idea of a serial build-test-deploy, and I know that it does not bode well. Multi-core systems can certainly help with this, and leveraging multi-core environments can allow us to run a much tighter build-test-deploy pipeline. Parallel processing speeds things up, but there's a system limit, and those system limits are also very costly at their higher end.
So what's the option when we max out the cores on a single system? Seems that going parallel to more servers would make sense. Rather than one machine with 32 cores, how about 8 machines with four cores? Same number of cores, maybe similar throughput gains (and potentially better, since system resources are shared over multiple machines). This approach is referred to as a CI cluster farm. Cool, but we're still in a similar ballpark. Can we do better? Dori says yes, and his answer is to use distributed computing within your own network of machines. If I'm hearing this correctly, it's kind of like the idea of letting your machine be used for "protein folding" experiments while your machine is in more idle states (anyone else remember signing up to do stuff like that? :) ). I'm not sure that's exactly what Dori means, but it seems this could be really viable, and we already have an example of that happening (i.e. "signing up for protein folding").
How wild would it be to be able to wire up your entire network, everyone's machines, so that they can help speed up the build process? It's a fascinating model. I'd be curious to see if this really comes to fruition.
We had another lightning talk added that came from a Birds of a Feather session about CI/CD, so this is a bit of a surprise. The idea was to see how we could leverage pipelines (mini-builds that run in sequence and individually). Mini-builds also help us build individual components, with a goal of integrating the elements later on. Often, all we want is a yes/no to see if the change is good or not (gated check-ins).
This blends into Dori's talk just given on distributed computing and utilizing down time to make an almost unlimitedly parallel build engine. So this is interesting, but what's management going to say about all of this? Well, what is it costing us not to do this? Are we losing time, and in effect losing money, in the process? Will this help us fix some of our technical debt? If so, it may well be worth considering. If it adds more technical debt, it's a much harder option to sell.
Another point is that good CI infrastructure will bubble up issues in design and architecture of both the process and the application. Innovation and motivation will potentially increase when changes can be made more frequently, and subsequently, more atomically.
By using information radiators, we can get a clearer sense as to who did what to cause the build to fail. Gadgets (lights, sounds, sensory input) can help make it more apparent and in real time. Not sure if this would be a major plus, but I'm not necessarily the best judge of what developers consider to be fun ;).
The final test track talk, the anchor session, goes to Mark Tomlinson, as he discusses "Roles and Revelations: Embracing and Evolving our Conceptions of Testing". With a title like that, let's just say "you had me at 'hello'" ;).
Mark is a fun guy to listen to (check out his podcast "PerfBytes" to get a feel), and thus, it's fun to hear him do a more narrative talk as opposed to a techy talk. We start out with the idea of what testing is, at least how we look at it historically. We find bugs, we see that we can validate to a spec, we try to reduce costs, and we aim to mitigate risks. Overall, I think if you gave that list to any lay person and said "that's what testing is", they'd probably have little difficulty understanding it. Those definitions are valid, but they're also somewhat limiting. We've seen some interesting milestones over the past 50 years. Debugging, Demonstration, Destruction, Evaluation and Prevention can all be seen as "eras of testing". Mark points out that there are 10 different schools of testing (Domain, Stress, Specification, Risk, Random/Statistical, Function, Regression, Scenario, User, and Exploratory).
That's all cool... but what if one day everything changed? Well, one could say that over the past 14 years, or since the Agile Manifesto, the Universe did Change... to steal a little from James Burke. We are less likely today to have isolated test groups. We have a lot more alphabet soup when it comes to our titles. I've had lots of titles, lots of combinations, but ultimately all of them could be distilled to a "tester" of some flavor. Some teams have no dedicated testers, or just one dedicated tester. Test Driven Development is an unfortunate term choice, in that what is a design process often gets mistaken for "testing" (nope, it's not. It's checking for correctness, but it is not testing). Our time to be interactive and effective is happening earlier, and I love this fact.
Continuous Integration, Continuous Deployment, Continuous Delivery and even Continuous Testing have entered the vernacular. What does this mean? It's all about trying to automate as many of the steps as humanly possible. Build-Check-Deploy-Monitor-Repeat. Conceive of a time and place where we go from end to end without a person involved, just machines. Sounds great, huh? In some ways, it's awesome, but there's an unfortunate side effect, in that many processes are billed as testing that are not. Checking is what automation does. It's great for a lot of things, but it can't really think. Testing, real testing, requires thinking and judgment. There's been a devaluing of testing in some organizations, where just doing testing is considered a liability; unless we are all coding toolsmiths, we are of a lesser order... and that's bunk!!!
Ultimately, testing is a cost... seriously. Testing does not make money. Testing is a cost center. It's an important cost center, but it is a cost. Think of health insurance. It is not an investment. It's a cost you have to pay... but when you crash a car or break a leg, then the insurance kicks in, and I'll bet you're happy when you have it (and really frustrated if you don't). That's what testing is. It's insurance. It's a hedge. It's a cost to prevent calamity. With all of the changing going on, we need to be clear about what we are and what we provide.
What we generate, and what real value we provide, is feedback and information. We are not critics. We are not nay-sayers; we are honest (we hope) reporters of the state of reality, or at least as close to it as we can be. The really valuable things that we provide are not automatable. Yes, I dared to say that :). Computers can evaluate variable values and they can confirm or deny state changes, but they cannot really think, and they cannot make an informed judgment call. They can only do what we as people tell them to.
Change is constant, and we will see more change as we continue. Testers need to be open to change, and realize that, while there is always value that we provide, the way we provide that value, and the mechanisms and institutions that surround it, will evolve. If we do not evolve with them, we will be left behind.
Mark emphasizes that software testers are "Facilitators of Quality". Testing is not just limited to dedicated testers; it's dispersing. Therefore, we need to emphasize where we can be effective, and that may mean going in totally different directions. Testing provides diversity, if we are willing to have it be a diversifying role. Think of new techniques, expand the way that we can ask questions, learn more about the infrastructure, and figure out ways that we can keep asking questions. The day we stop asking questions is the day testing dies, for real.
Testing can actually accelerate development. I believe this, and have seen it happen in my own experiences. This is where paired developer-tester arrangements can be great. Think of the programmer being the pilot, and the tester being the navigator. Yes, if all we ask is "are we there yet?", we don't offer much, but if we watch the terrain, and ask if some ways we've mapped may be better or worse for the time we want to arrive, now we're adding value, and in some ways, we can help them fix issues before they've even been committed. Testers provoke reactions. Not to be jerks, but to get people to think and consider what they really should be doing. Do you think you can't do that? If so, why? Give it a try. You may surprise yourself (and maybe a few programmers) with how much you deliver. In short, be the Devil's advocate as often as possible, and be prepared to embrace the devils you don't know ;).
Consider that every tester is an analyst. It may be formal or informal, but we all are, deep down. We can research quality efforts, we can drill down into data and see patterns and trends, we can spot efficiencies we can add to our repertoire, and we can adapt, adapt, adapt!
Sorry for the delay on this last bit, but I had a rather meta post-presentation call with Mark Tomlinson (we did a conference call about how to do podcasts, and in the process, recorded the session... so yeah, we made a podcast about how to do podcasts as an artifact of a meeting about how to do podcasts). Main takeaway: it's fun, but there's more to doing them than many people consider. We just hope we didn't scare everyone off after we were done (LOL!). After that, all of the speakers descended upon Tango Restaurant and had a fabulous dinner courtesy of the ALM Forum organizing staff. Great conversation with Scott Ambler, Curtis Stuehrenberg, Peter Varhol, and Seth Eliot, as well as several others. The nerd brain power in that small room was probably off the charts, and I was honored to have been included in this event. Seattle, thank you for a very busy and truly enjoyable week. For those who have been keeping track of this rather long missive, my thanks to you, too. To everyone who came to my talk and tweeted or retweeted my comments, and who commented back to me about my talk and gave me your impressions: feedback is a gift, and I've received many gifts today. Truly, thank you so much.
With this, I must return back to reality and back to San Francisco early this morning. I've enjoyed our time together, and I hope that, in some small way, this meandering three days of live blogging has given you a flavor of the event and what I've learned these past few days. Let's do it all again some time :)!!!
Wednesday, April 02, 2014 21:47 PM
Great discussions, lots of interesting insights, and an appreciation for the fact that, over time, we see the topics change from being technical to being more humanistic. The humanistic questions are really the more interesting ones, in my estimation. Again, my thanks to Adam and the rest of the Seattle Lean Coffee group for having me attend with them today.
Cloud Testing in the Mainstream is a panel discussion with Steve Winter, Ashwin Kothari, Mark Tomlinson, and Nick Richardson. The discussion has ranged across a variety of topics, starting with what drove these organizations to start doing cloud based solutions (and therefore, cloud based testing), how they have to focus on more than just the application in their own little environment, and how much they need to be aware of the in-between hops to make their application work in the cloud (and how it works in the cloud). As an example, latency becomes a very real challenge, and tests that work in a dedicated lab environment will potentially fail in a cloud environment, mainly because of the distance and time necessary to complete the configuration and setup steps for tests.
Additional technical hurdles have been to get into the idea of continuous integration and needing to test code in production, as well as to push to production regularly. Steve works with FIS Mobile, which caters to banking and financial clients. Talk about a clientele resistant to the idea of continuous deployment! Still, certain aspects are indeed able to be managed and tested in this way, or at least a conversation is happening where it wasn't before.
Performance testing now takes on additional significance in the cloud, since the environment has aspects that are not as easily controlled (read: gamed) as they would be if the environment were entirely contained in their own isolated lab.
Nike was an organization that went through a time where they didn't have the information that they needed to make a decision. In-house lab infrastructure was proving to be a limitation, since they couldn't cover the aspects of their production environment or get a real example of how the system would work on the open web. Once Ops was able to demonstrate some understanding through monitoring of services in the cloud, that helped the QA team decide to collaborate, to understand how to leverage the cloud for testing, and to see how leveraging the cloud made for a different dialect of testing, so to speak.
A question that came up was to ask if cloud testing was only for production testing, and of course the answer is "no", but it does open up a conversation about how "testing in production" can be performed intentionally and purposefully, rather than being something to be terrified about and say "oh man, we're testing in PRODUCTION?!" Of course, not every testing scenario makes sense to be tested in production (many would be just plain insane), but there are times when it does make a lot of sense to do certain tests in production (a live site performance profile, monitoring of a deployment, etc.).
Overall an interesting discussion and some worthwhile pros and cons as to why it makes sense to test in the cloud. Having made this switch recently, I really appreciate the flexibility and the value that it provides, so you'll hear very few complaints from me :).
Mike Brittain is talking about Principles and Practices of Continuous Deployment, and his experiences at Etsy. Companies that are small can spin up quickly, and can outmaneuver larger companies. Larger companies need to innovate or die. There are scaling hurdles that need to be overcome, and they are not going to be solved overnight. There also needs to be a quick recovery time in the event something goes wrong. Quality is not just about testing before release; it also includes adaptability and response time. Even though the ideas of Continuous Deployment are meant to handle small releases frequently performed, there still needs to be a fair amount of talent in the engineering team to handle that. The core idea behind being successful with Continuous Deployment is "rapid experimentation".
Continuous Delivery and Continuous Deployment share a number of principles. First is to keep the build green, with no failed tests. Second is to have a "one button" option: push the button, and all deployment steps are performed. Continuous Deployment differs a bit in that every passing build is deployed to production, where Continuous Delivery means that the feature is delivered when there's a business need. Most of the builds deploy "dark changes", meaning code is pushed, but little to no change is visible to the end user (CSS rules, unreferenced code, back end changes, etc.). A check-in triggers a test run. If clean, that triggers automated acceptance tests. If those pass, that triggers user acceptance tests. If that's green, then the release is pushed. At any point, if a step is red, it will flag the issue and stop the deploy train.
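To picture that "deploy train" gating, here's a minimal sketch of the logic (in PHP, just to have something concrete; the stage names and stubbed checks are my own invention, not Etsy's actual setup): each stage runs only if the one before it was green, and the first red stage stops the train.

<?php
// Stubbed checks so the sketch runs end to end; real versions would
// shell out to the test runner and the deployment tooling.
function run_suite($name) { return true; }
function push_to_production() { return true; }

// Each stage is a callable returning true (green) or false (red).
$stages = array(
    "commit tests"     => function () { return run_suite("commit"); },
    "acceptance tests" => function () { return run_suite("acceptance"); },
    "UA tests"         => function () { return run_suite("ua"); },
    "deploy"           => function () { return push_to_production(); },
);

foreach ($stages as $name => $check) {
    if (!$check()) {
        echo "RED at '$name' -- stopping the deploy train\n";
        exit(1);
    }
    echo "GREEN: $name\n";
}
echo "Every gate green, the release goes out.\n";
?>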
Going from one environment to another can have unexpected changes. How many times have you heard "what do you mean it's not working in production? I tested that before we released!" Well, that's not entirely surprising, since our test environment is not our production environment. The question, of course, is: where's the bug? Is it in the check-ins? Are we missing unit tests? Are we missing automated UA tests (or manual UA tests)? Do we have a clear way of being notified if something goes wrong? What does a rollback process look like? All of these are still issues, even in Continuous Deployment environments. One avenue Etsy has provided to help smooth this transition is a setup that does pre-production validation. Smoke tests, integration tests, functional and UA tests are performed with hooks into some production environment resources, and active monitoring is performed. All of this without having to commit the entire release to production, or doing so in stages.
Mike made the point that Etsy pushes approximately 50,000 lines of code each month. With a single release, there would be a lot of chances for bugs to be clustered in that one release. By making many releases over the course of days, weeks or months, the odds of a cluster of bugs appearing are minimal. Instead, the bugs that do appear are isolated and considered within their release window, and their fix likewise tightly mirrors their release.
This is an interesting model. My company is not quite to the point that we can do what they are describing, but I realized we are also not way out of the ballpark to consider it. It allows organizations to iterate rapidly, and also to fix problems rapidly (potentially, if there is enough risk tolerance built into the system). Lots to ponder ;).
Peter Varhol is covering one of my favorite topics, which is bias in testing (specifically, cognitive bias). Peter started his talk by correlating the book "Moneyball" to testing: often, the stereotypical best "hitter/pitcher/runner/fielder/player" does not necessarily correlate to winning games. By overcoming the "bias" that many of the talent scouts had, the team's general manager was able to build a consistently solid team by going beyond the expectations.
There's a fair amount of bias in testing. That bias can contribute to missing bugs, or testers not seeing bugs, for a variety of reasons. Many of the easy-to-fix omissions (missing test cases, missing automated checks, missing requirement parameters) can be added and covered in the future. The more difficult one is our own biases as to what we see. Our brains are great at ambiguity. They love to fill in the blanks and smooth out rough patches. Even when we have a "great eye for detail", we can often plaster over and smooth out our own experience, without even knowing it.
Missed bugs are errors in judgment. We make a judgment call, and sometimes we get it wrong, especially when we tend to think fast. When we slow down our thinking, we tend to see things we wouldn't otherwise see. Case in point: if I just read through my blog to proofread the text, it's a good bet I will miss half a dozen things, because my brain is more than happy to gloss over and smooth out typos; I get what I mean, so it's good enough... well, no, not really, since I want to publish and have a clean and error-free output.
Contrast that with physically reading out, and vocalizing, the text in my blog as though I am speaking it to an audience. This act alone has helped me find a large number of typos that I would otherwise totally miss. The reason? I have to slow down my thinking, and that slowdown helps me recognize issues I would have glossed over completely (this is the premise of Daniel Kahneman's "Thinking, Fast and Slow"). To keep with the Kahneman nomenclature, we'll use System 1 for fast thinking and System 2 for slow thinking.
One key thing to remember is that System 1 and System 2 may not be compatible, and they may even be in conflict. It's important to know when we might need to dial in one thought approach or the other. Our biases could be personal. They could be interactional. They could be historical. They may be right a vast majority of the time, and when they are, we can get lazy. We know what's coming, so we expect it to come. When it doesn't, we are either caught off guard, or we don't notice it at all. "Representative Bias" is a more formal way of saying this.
When we are "experts" in a particular area, we can have that expertise work against us as well. We may fail to look at things from another perspective, perhaps that of a new user. This is called "The Curse of Knowledge".
"Congruence Bias" is where we plan tests based on a particular hypothesis, whereas we may not have alternative hypotheses . If we think something should work, we will work on the ways to support that a system works, instead of looking at areas where a hypothesis might be proven false.
"Confirmation Bias" is what happens when we search for information or feedback that confirms our initial perceptions.
"The Anchoring Effect" is what happens when we become to convinced on a particular course of action that we become locked into a particular piece of information, or a number, where we miss other possibilities. Numbers can fixate us, and that fixation can cause biases, too.
" Inattentional Blindness" is the classic example where we focus on a particular piece of information that they miss something right in front of them (not a moonwalking bear, but a gorilla this time ;) ). there are other visual images that expand on this.
The "Blind Spot Bias" comes from when we evaluate our decision making process compared to others. With a few exceptions, we tend to think we make better decisions than others in most areas, especially those we feel we have a particular level of expertise.
Most of the time, when we find a bug, it's not because we have missed a requirement or missed a test case (not to say that those don't lead to bugs, but they are less common). Instead, it's a subjective parameter: we're not looking at something in a way that could be interpreted as negative or problematic. This is an excellent reminder of just how much we need to be aware of what and where we can be swayed by our own biases, even by this small and limited list. There's lots more :).
More to come, stay tuned.
Wednesday, April 02, 2014 00:21 AM
We start out with Scott Ambler (@scottwambler on Twitter) and a discussion of Disciplined Agile Delivery and how to scale Agile practices in larger organizations. Scott made a few points about the fact that Agile is a process with a lot of variations on the theme. Methodologies and methods are all nice, but each organization has to piece together for themselves which of the methods will actually work. Scott has written a book called Disciplined Agile Delivery (DAD). The acronym of DAD is not an accident. Key aspects of DAD are that it is people first, goal driven, a hybrid approach, learning oriented, and built on a full delivery lifecycle, trying to emphasize the solution, not just the software. In short, DAD tries to be the parent; it gives a number of "good ideas" and then lets the team try to grow up with some guidance, rather than an iron hand.
Questions to ask: what are the variety of methods used? What is the big picture? While we can look at a lot of terminology, and we can say that Scrum or agile processes are loose form and just kind of happen, that's not really the case at all. Solution delivery is complex, and there's a lot of just plain hard reality that takes place. Most of us are not working on the cool new stuff. We're more commonly looking at adding new features or enhanced features to stuff that already exists. Each team will probably have different needs, and each team will probably work in different ways. DAD is OK with that.
Scott thankfully touched on a statement in a keynote that made me want to throw the "devil horns" and yell "right on!": there is no such thing as a best practice; there are good practices in some circumstances, and those same practices could be the kiss of death in another situation. Granted, for those of us who are part of the context-driven testing movement, this is a common refrain. The fact that this is being said at a conference that is not a testing conference per se brought a big smile to my face. The point is, there are many lean and agile options for all aspects of software delivery. The advice we are going to get is going to conflict at times, it's going to fit some places and not others, and again, that's OK.
Disciplined Agile Delivery comes down to asking the questions around Inception (How do we start?), Construction (What is the solution we need to provide?), Transition (How do we get the software to our customers?) and Ongoing (What do we do throughout all of these processes?).
For years, we used to be individually focused. We all would do our "best practices" and silo ourselves in our disciplines. Agile teams try to break down those silos, and that's a great start, but there's more to it than that. Our teams need to work with other teams, and each team is going to bring their own level of function (and dysfunction). This is where context comes into play, and it's one of the ways that we can get a handle on how to scale our methods. While we like the idea of co-location, the fact is that many teams are distributed. Some teams are partially dispersed, others are totally dispersed (reminds me of Socialtext as it was originally implemented; there was no "home office" in the early days). Teams can range from small (just a few people), to medium (10-30 people), to large (we think 30+ is large; other companies look at anything less than 50 people as a small team). The key point is that there are advantages and disadvantages regarding the size of your team. Architecture may have a full architecture team with representatives in each functional group. Product owners and product managers might also be part of an overarching team where representatives come from smaller groups and teams.
The key point to take away from this is that Agile transformations are not easy. They require work, they take time to put into place, there will be missteps, and there will be variations that don't match what the best practice models represent. The biggest challenge is one of culture, not technology. Tools and scrum meetings are fairly easy. Making these a real part of the flow and life of the business takes time, effort and consistent practice. Don't get too caught up in the tools doing everything for you. They won't. Agile/Scrum is a good starting point, but we need to move beyond this. Disciplined Agile Delivery helps us up our game, and gets us on a firmer footing. Ultimately, if we get these challenges under control with a relatively small team, we can look to pulling this off with a large enterprise. If we can't get the small team stuff working, Agile scaling will be pretty much irrelevant.
My thanks to Scott for a great first talk, and now it's time to get up and see what else ALM forum has to offer.
I'm going to be spending a fair amount of my time in the Changing Face of Testing track. I've already connected with some old friends and partners in crime. Mark Tomlinson and I are probably going to be doing a fair amount of cross commenting, so don't be surprised if you see a fair amount of Mark in my comments ;).
Jeff Sussna is taking the lead for us testers and talking about how QA is changing, and how we need to make a change along with it. We're leaving industrialism (in many ways) and we are embarking on a post-industrial world, where we share not necessarily things, but we share experiences. We are moving from a number of paradigms into new paradigms:
from products to services: locked in mechanisms are giving way to experiences that speak to us individually. The mobile experience is one of the key places to see this. People who have negative experiences don't live with it, they drop the app and find something else.
from silos to infusion: being an information silo used to give a sense of job security. It doesn't any longer. Being able to interact with multiple organizations and to be adaptable is more valuable than being someone who has everything they know under lock and key.
from complicated to complex: complicated is predictable, it's bureaucratic, it's heavy. Complex is fragmented. It's independent, it doesn't necessarily follow the rules, and as such it's harder to control (if control is possible at all).
from efficient to adaptive: efficiency is only efficient when the process is well understood, and the expectations are clearly laid out. Disruption kills this, and efficiency gives way when you can't predict what is going to happen. This is why adaptability is more valuable than just efficiency. Learn how to be adaptive and efficient? Now you've got something ;).
The disruption that we see in our industry is accelerating. Companies that had huge leads and leverage that could take years to erode are eroding much faster. Disruption is not just happening, it's happening in far more places. Think about Cloud computing. Why is it accelerating as a model? Is it because people are really all that interested in spinning up a bunch of Linux instances? No, not really. The real benefit is that we can create solutions (file sharing, resource options, parallel execution) where we couldn't before. We don't necessarily care about the structure of what makes the solution, we care that we can run our tests in parallel in far less time than it would take to run them on a single machine in serial. Dropbox is genius not because it's in the cloud, it's genius because any file I really care about I can get to anywhere, at any time, on any device, and I can do it with very little physical setup and maintenance (changes delivered in an "absorbable manner").
Ken Johnston (@rkjohnston) is talking about EaaSY, or "Everything as a Service, Yes!". Ken wants to help us see what the role of testing actually is. It's not really about quality assurance, but more about risk assessment and management. I agree with this, in the sense that, in the old school environments I used to work in, especially when I worked for a game publisher, when a bug shipped to production, unless it was particularly egregious, it was eternal. In the services world, and the services model, since software is much more pliable, and much more manageable, there's no such thing as a "dated ship". We can update all the time, and with that, problems can be addressed much more quickly. With this model, we can be less forced into slotted times. We can fix a bug the same day. We can release a new feature in a week where it used to take a quarter or a year.
EaaSY covers a number of parts that need to be in place to be effective:
Componentization: break out as much of the functionality from external dependencies as possible.
Continuous Delivery: requires continuous stability. It needs a targeted set of tests and an atomic level of development, and it works best for an area that can be deployed/fixed with a low number of people being impacted by the change (the more mission critical, the less likely a Continuous Delivery model will be the desired approach; not impossible, but probably not the best focus, IMO).
User Segmentation: when we think of how to deploy to users, we can use a number of methods. We can create concentric rings, with the smallest ring being the most risk tolerant users, expanding out to larger sets of users; the farther out we get, the more risk averse the users. Additionally, we can use tools like A/B testing to see how two groups of people react to a change structured one way or another (structure A vs. structure B). This is a way to put a change into production, but have a small group of people see it and react to it (there's a small sketch of this after the list).
Runtime Flags: layers can be updated independently. We can fork traffic through the production path, and at key areas, data can be forked and routed through a different setup, and then reconvene with the production flow (this is pretty cool, actually :) ). Additionally, code can be pushed, but it can be "pushed dark", meaning it can be put in place but turned on at a later time.
Big Data: Five "Vs" (Volume, Variety, Velocity, Verification, Value). These need to be considered for any data driven project. The better each of these is, the more likely we will be successful in utilizing big data solutions.
Minimum Viable Product: Ken calls back to Seth Eliot's "Big Up Front Testing" (BUFT) and says "say no to BUFT". With a minimum viable product, we need to scale our testing to a point where we can have an MVP, and appropriate testing for the scale of the MVP. Additionally, there are options where we can test in production (not at full scale, of course).
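Since the user segmentation and runtime flag ideas dovetail so nicely, here's one small sketch of both together (all names invented by me; this is the general shape of the idea, not anything Ken showed on a slide): a stable hash puts each user in a ring, and a flag setting says how far out a feature is currently lit up.

<?php
// Bucket each user into a rollout ring: 0 = most risk tolerant
// (dogfooders), 1 = early adopters, 2 = everyone else.
// abs(crc32(...)) gives the same bucket for the same user every time.
function rollout_ring($userId) {
    $bucket = abs(crc32($userId)) % 100;
    if ($bucket < 5)  return 0; // inner ring, roughly 5% of users
    if ($bucket < 25) return 1; // middle ring
    return 2;                   // general population
}

// Runtime flag: how far out is this feature enabled? In real life this
// lives in config that can be flipped without pushing any code.
$flags = array("new_checkout" => 1); // lit for rings 0 and 1 only

function feature_on($feature, $userId, $flags) {
    return isset($flags[$feature])
        && rollout_ring($userId) <= $flags[$feature];
}

// The code for both paths is deployed; the flag decides who sees what.
echo feature_on("new_checkout", "user-42", $flags)
    ? "show the new checkout\n"
    : "show the old checkout\n"; // pushed dark for this user
?>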
Overall, this was a very interesting approach and idea. Many of the ideas and approaches described sound very similar to activities we are already doing in Socialtext, but it also gives me areas where I can see that we can do better.
James Whittaker (@docjamesw) is doing the next plenary session, called "A Future Worth Wanting". First we start with our own devices, our own apps, we own them, they're ours, but they aren't particularly useful if they don't connect to a data source somewhere (call it the web and the cloud for simplicity). James is making the point that there's a fair amount of stuff in between that we are not including. The Web browser is one of these middle point items. The app store is another. We know what to do and how to do it, we don't give it much thought. Could we be doing better?
Imagine getting an email, then having to research where an event is, how much tickets cost, and how we could handle transactions. Using "entities", we can find out information and perform transactions based on those entities. Frankly, this would be cool :).
What about our calendars? We are planning to do something, some kind of activity that we need to be time focused for. What do we naturally do? We jump to a browser and go figure out what we need. What if our calendar could use those entity relationships and do the search for us, or better yet, return what has already been searched for based on the calendar parameters? Think of writing code. Wouldn't it be cool to find a library that could expand on what you are doing or do what you are hoping to do?
The idea here is to be able to track "entities" to "intents", and execute those intents. Think about being able to call up a fact checking app in PowerPoint, and based on what you type, you get a return specific to your text entry. Again, very interesting. The key takeaway is that our apps, our tools, our information needs are getting tailored to exactly the data we want, from the section of the web or cloud that we actually need.
This isn't a new concept, really. This is the concept of "agents" that's been talked about for almost two decades. The goal we want is to be able to have our devices, our apps, our services, etc, be able to communicate with each other and tell us what we need to know when we need to know it. It's always been seen as a bit of a pipe dream, but every week it seems like we are getting to see and know more examples that make that pipe dream look a little less far fetched.
Goals we want to aim for:
- Stop losing the stuff we've already found
- Localize the data and localize the monetization
- Apps can understand intent, and if they don't, they should. Wouldn't it be great if based on a search or goal, we can download the appropriate apps directly?
- Make it about me, not my device
Overall, these are all cool ideas, and yes, these are ideas I can get behind (a bit less branding, but I like the sentiment ;) ).
Alexander Podelko (@apodelko) wants to have us see a "Bigger Picture" when it comes to load testing. There's a lot of terminology that goes into load testing, and the terms are often used interchangeably, but not always accurately. The most common image we have of load testing (and yes, I've lived this personally) is the last minute before deployment: we put some synthetic tests together in our lab, try to run a bunch of connections, see what happens, call it a day and push to production. As you might guess, hilarity ensues.
In the mid afternoon, they held a number of Birds of a Feather sessions, to provide some more interactive conversations, and one of them was specifically about how to use Git. Granted, I'm pretty familiar with Git, but I always appreciate seeing how people use it and seeing different ways to use it that I may have not considered.
One of the tools that they used for the demonstration was "Learn Git Branching", which displays a graphical representation of a variety of commits, and shows what commands actually do when they are run (git commit, git merge, git rebase, etc.).
The last session of the day is being delivered courtesy of Allan Wagner, and the focus is on continuous testing, or why we would want to consider doing continuous testing. The labor costs are getting higher, even with outsourcing options considered, test lab complexity is increasing, and the amount of testing required keeps growing and growing. OK, so let's suppose that Continuous Testing is the approach you want to go with (I hope it's not the only approach, but cool, I can go with it for this paradigm), where do you start?
For testers to be able to do continuous testing, they need:
- production-like test environments (realistic and complete)
- automated tests that can run unattended
- orchestration from build to production which is reliable, repeatable and traceable
One very good question to ask is "how much time do you spend doing repetitive set up and tear down of your test environments?" In my own environment, we have gotten considerably better in this area, but we do still spend a fair amount of time to set up our test environments. I'm not entirely sure that, even with service virtualization, there would be a tremendous increase in time saved for doing spot visual testing. While I do feel that having automated tests is important, I do not buy into the idea that automated testing only is a good idea. It certainly is a big plus and a necessary methodology for unit tests, but beyond that, trying to automate all of the tests seems to fall under the law of diminishing returns. I don't think that that is what Allan is suggesting, but I'm adding my commentary just the same ;).
Service virtualization looks to provide, as its name describes, the ability to make elements that are unavailable available for testing. It relies on mocks and stubs, where you can simulate the transactions rather than try to configure big data hardware or front end components that don't yet exist for our applications.
Virtual Components need to fit certain parameters. They need to be simple, non-deterministic, data-driven, using a stateful data model, and have functionality where we can easily determine their behavioral aspects.
The key idea is that, as development continues, the virtual components will be replaced with the real components and start looking at additional pieces of later functionality. In other examples, the virtualized components may be those that simulate a third party service that would be too expensive to have part of the environment as a regular part of the development process.
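As a tiny illustration of the mock/stub side of this (in PHP, with names I invented; a real setup like the one Allan describes would be far more capable): the application codes against a contract, and until the real, expensive third-party service is available, a canned stand-in answers in its place.

<?php
// The application depends on this contract, not on the real service.
interface CreditCheckService {
    public function scoreFor($customerId);
}

// Virtual component: data-driven canned answers, no network required.
class StubCreditCheck implements CreditCheckService {
    private $scores;
    public function __construct(array $scores) {
        $this->scores = $scores;
    }
    public function scoreFor($customerId) {
        return isset($this->scores[$customerId])
            ? $this->scores[$customerId]
            : 600; // default for customers we didn't stage
    }
}

// Tests run against the stub today; later, the real client drops in
// behind the same interface with no changes to the application code.
$creditCheck = new StubCreditCheck(array("alice" => 780, "bob" => 540));
echo $creditCheck->scoreFor("alice") . "\n"; // 780
?>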
Allan made the point in his talk that Continuous Testing is not intended to be the be-all and end-all of your testing, but it is meant to be a way to perform automated testing as early as possible and as focused as possible, so that the drudge work of set-up, tear-down, configuration changes and all of the other time consuming steps can be automated as much as possible. This is meant to allow the thinking testers to do the work that really matters, which is to perform exploratory testing and let the tester genuinely think. That's always a positive outcome :).
From here, it's a reception, some drinks, and some milling about, not to mention dinner and chilling with the attendees. I'll call this a day at this point and let you all take a break from these updates, at least for today. Tomorrow, I'm going to combine two events, in that I'll be taking part in SEALEAN (a Lean Coffee event) and then picking up with the ALM Forum conference again after that. Have a good night, testing friends, see you tomorrow morning :).
End of Entry: 04/01/2014: 05:20 p.m. PDT
Saturday, March 29, 2014 01:44 AM
Here's a look at the CSS file as it currently exists:
Tuesday, March 25, 2014 12:23 PM
Packt Publishing is one of those companies, and as a celebration of the fact that they have released their 2000th title, I want to help them celebrate and encourage those who like their titles to take advantage of their current offer.
I feel bad that I couldn't do this sooner, but I've been out of town the past few days (another post is coming on that point, don't worry). Still, I want to get the word out about the Packt 2000th title celebration, as there's a limited amount of time to take advantage of it.
So what is this all about?
It's simple. Until the end of the day today, if you buy any Packt Publishing EBook title, you will get another Packt Publishing EBook for FREE.
Click on either of the links above and you can take advantage of this opportunity, but don't delay, as it ends today (Mar. 25, 2014).
Again, I tend to not use this site for advertising purposes, but I appreciate the fact that Packt has provided a lot of titles to me over the past few years to review, and I would like to return the favor. If you appreciate open source software books, and want to support a company that produces many solid titles, then head over to Packt and get your BOGO on :).
Tuesday, March 18, 2014 23:01 PM
James Burke is one of my favorite historical authors, and I am a big fan of his ideas behind “connected thought and events”, which make the case that history is not a series of isolated events, but that events and discoveries coming from previous generations (and even eras) can give rise to new ideas and modes of thinking. In other words, change doesn’t happen in a vacuum, or in the mind of a single solitary genius. Instead it’s the actions and follow-on achievements of a variety of people throughout history that make certain changes in our world possible (from the weaving of silk to the personal computer, or the stirrup to the atomic bomb).
“Connections” is the companion book to the classic BBC series first filmed in the late 70s, with additional series being created up into the 1990s. If you haven’t already seen the Connections series of programs, please do; they are highly entertaining and engaging (ETA: the first series, aired in 1978, is the best of the three). The original print edition of this book had been out of print for some time, but I was overjoyed to discover that there is a current, and updated, paperback version (as well as a Kindle edition) of this book. The Kindle version is the one I am basing this review on.
The subtitle of the book and series is “An Alternative View of Change”. Rather than serendipitous forces coming together and “eureka” moments of discovery happening, Burke makes the case that, just as today, invention often happens as a market force determines the benefit and necessity of that invention, with adoption and use stemming from both the practical and cultural needs of the community. From there, refinements and other markets often determine how ideas from one area can impact development of other areas. Disparate examples like finance, accounting, cartography, metallurgy, mechanics, water power and automation are not separate disciplines, but rely heavily on each other and the inter-connectedness of these disciplines over time.
The book starts with an explanation of the Northeastern Blackout of 1965, as a way to draw attention to the fact that we live in a remarkably interdependent world today. We are not only the beneficiaries of technology’s gifts, but in many ways, we are also at the mercy of them. Technology is wonderful, until it breaks down. At that point, many of the systems that we rely heavily on, when they stop working, can make our lives not just sub-optimal, but dangerous.
Connections uses examples stretching all the way back to Roman times and the ensuing “Dark Ages”. Burke contends that they were never “really dark”, and makes the case of communication being enabled through Bishop-to-Bishop post to show that many of the institutions defined in Roman times continued on unabated. Life did become much more local when the overseeing and overarching power of a huge government state had ended. The pace of change and the needs of change were not so paramount on this local scale, and thus, many of the engineering marvels of the Roman Empire were not so much “lost” (aqueducts and large scale paved roads) as they just weren’t needed on the scale that the Romans used them. Still, even in the localized world of the early Middle Ages, change happened, and changes in one area often led to changes in other areas.
This program changed the way I look at the world, and taught me to look at the causal movers as more than just single moments, or single people, but as a continuum that allows ideas to be connected to other ideas. Is Burke’s premise a certainty? No, but he makes a very compelling case, and the connections from one era to another are certainly both credible and reasonable. There is a lot of detail thrown at the reader, and many of those details may seem tangential, but he always manages to come back and show how some arcane development in an isolated location, perhaps centuries ago, came to be a key component in our technologically advanced lives, and how it played a part in our current subordination to technology today. Regardless of the facts, figures and pictures (and there are indeed a lot of them), Connections is a wonderful ride. If you are as much of a fan of history as I am, then pretty much anything James Burke has written will prove to be worthwhile. Connections is his grand thesis, and it’s the concept that is most directly tied to him. This book shows very clearly why that is.
Monday, March 17, 2014 16:48 PM
I decided to pick up “The Manga Guide to Databases” as a refresher and, maybe, to teach me a few new things.
I’m already a fan of the “Manga Guide to” series, so I figured that their take on databases would be in the same vein as their other titles (an accompanying storyline, an emphasis on practical topic coverage, and an emphasis on “kawaii”). To that end, we are introduced to Princess Raruna, heir apparent to the Kingdom of Cod. We also meet her attendant, Cain, and a fairy named Tico who teaches them about databases… and anyone familiar with Manga has not batted an eye at that kind of a description (and sure, if you looked at the cover, you could probably have figured that out as well ;) ). For those not already familiar with Manga and its tropes, this might seem a bit strange, but go with it. Seriously.
I can see some of you already thinking “OK, sure, I can see teaching about stars, or maybe even math using Manga, but databases? That’s a bit of a stretch, isn’t it?" Well, let’s take a closer look.
Chapter One introduces us to the Kingdom of Cod’s main export… fruit (yeah, you were thinking fish. Everyone thinks fish, but no, it’s fruit). Through the bureaucratic and messy system that they have in place, the case is made for why a database is important in the first place: to reduce errors, keep track of important data, and make sure that data isn’t duplicated inappropriately or left un-updated when and where it needs to be. This chapter also sets the stage with the scenarios and back story to help define how the database will need to be set up and managed.
Chapter Two takes the idea of a database further and discusses what relational databases are, and how they differ from other systems such as hierarchical and networked databases. Fields are explained as vertical columns (attributes of a relation), and records as individual collections of those attributes, each relating to one given entity at a time. Tables hold these fields and records, and a variety of operations can be performed to both input and extract/format the data to be viewed.
Chapter Three goes into the process of designing a database, starting with creating an entity-relationship (E-R) model, and establishing the types of relationships an entity can have (one to one, one to many, many to many). From there, a table is designed, and by examining the relationships, we can see where data is duplicated. We can divide the big table into smaller, interrelated tables in a process called normalization. The concepts of primary and foreign keys are also introduced.
Chapter Four introduces us to the Structured Query Language, or SQL. SQL allows users to define, operate on, and control data. SELECT statements allow users to select specific fields to display and show the values of those fields. The WHERE clause allows users to specify conditions as to which records are displayed. INSERT, UPDATE, and DELETE statements let users insert, update and delete data. CREATE TABLE lets a user create a new table, while DROP TABLE lets a user remove (drop) an existing table.
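For anyone who'd like to see those statements in one place, here's roughly what the chapter's commands look like in action (run here through PHP's SQLite driver so the snippet is self-contained; the fruit-export table is my own nod to the Kingdom of Cod, not an example from the book):

<?php
// An in-memory SQLite database via PDO, so nothing needs installing.
$db = new PDO("sqlite::memory:");

// CREATE TABLE defines the fields; INSERT adds records.
$db->exec("CREATE TABLE exports (fruit TEXT, kingdom TEXT, tons INTEGER)");
$db->exec("INSERT INTO exports VALUES ('apples', 'Cod', 50)");
$db->exec("INSERT INTO exports VALUES ('oranges', 'Cod', 20)");

// UPDATE and DELETE modify and remove existing records.
$db->exec("UPDATE exports SET tons = 60 WHERE fruit = 'apples'");
$db->exec("DELETE FROM exports WHERE fruit = 'oranges'");

// SELECT ... WHERE pulls back just the fields and records we ask for.
$rows = $db->query("SELECT fruit, tons FROM exports WHERE tons > 10");
foreach ($rows as $row) {
    echo $row['fruit'] . ": " . $row['tons'] . " tons\n"; // apples: 60 tons
}

// And DROP TABLE removes the table entirely.
$db->exec("DROP TABLE exports");
?>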
Chapter Five focuses on how to operate a database, including how to set user privileges, how locking ensures consistency with multiple users, setting up indexes to perform faster searches, examining transactions and how they can be “rolled forward or rolled back”, and options for disaster recovery and database repair capabilities.
Chapter Six shows us how the proliferation of databases affects everyday things that we do, and that we are likely dealing with them in areas we otherwise would not consider (every site that this book review will appear on has a database to store it, and that’s at the simplest level). The chapter also shows us examples of distributed databases, database partitioning, two-phase commits, database replication and the use of stored procedures and triggers to perform commonly repeated tasks.
The book ends with a short Appendix summarizing the most commonly used SQL commands (which would probably make for a nice little project: dynamically serving up those commands on an example web site page).
If you are an old hand at using databases in general and SQL commands in particular, there’s probably not a whole lot of new material for you here. For those who are just getting into working with databases, this is a much more fun and straightforward way of teaching the ideas than I’ve seen, well, just about anywhere. Do note that this is not going to be the be all and end all of learning about databases, SQL queries or how to effectively design databases. It will, however, go a long way in giving those people who want to learn how to make or manage relational, SQL based databases a simple framework to hang future ideas and learning from.
Friday, March 14, 2014 16:51 PM
I'm going to go on the record and say, for this first part of the process, if you want to have the quickest, start to finish, least amount of resistance approach to working with PHP to build a basic web site, a VM with EasyPHP may very well fit the bill nicely. With it you get the latest PHP, MySQL, Apache Server and a host of other nice features to help you navigate and manage what you need to do. All of which will help give you spare cycles to actually build a site using PHP.
The first few weeks have been spent looking at PHP and understanding where I can use it, and why I might want to. At the absolute simplest level, it's a great way to template base pages with pieces that you know you will use over and over again. Headers and footers? Not likely to change very much, so why copy and paste a bunch of code? Make a base PHP script that echoes out what you want to have appear, and have that item appear in the location you want on each page. Change the script, and it instantly changes everywhere it's being called. Seems really obvious after you do it, but there's that moment where you realize "Wait, are you serious? It CAN'T be THAT simple!" Actually, yes it can be!
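In case that sounds too good to be true, here's about all there is to it (the names are just my example; in practice the header and footer would live in separate files pulled in with include(), but functions in one file keep this sketch runnable):

<?php
// The shared banner, written exactly once.
function site_header($title) {
    echo "<html><head><title>" . htmlspecialchars($title) . "</title></head>\n";
    echo "<body><h1>My Site</h1>\n";
}

// Same trick for the closing boilerplate.
function site_footer() {
    echo "<p>&copy; 2014</p></body></html>\n";
}

// Each page just asks for the shared pieces where it wants them.
site_header("About Us");
echo "<p>Page-specific content goes here.</p>\n";
site_footer();
?>

Change site_header() once, and every page that calls it changes right along with it.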
Friday, March 14, 2014 15:05 PM
For those of us who do software testing for a living, we know that we have a number of artifacts that come from our testing. One of those classes of artifacts is data. Tons and tons of data. How do we make sense of it all? What is worth looking at? Why is it worth looking at? What decisions can we make if we compile, analyze and distill the data we receive? More to the point, how do we analyze the data so that we can distill it? That’s where Statistics comes in handy.
I’ll be blunt. I took one statistics class when I was in college. I hated it. In fact, I never finished it. Please understand when I say “I have an aversion to statistics as something I have to actually do”, I am not kidding. As a software tester, that puts me in a bit of a bind. If I can’t make some sense of the data I receive, I can’t do as effective a job. At best, I need to farm that work out to someone else on my team who can do the statistical analysis, meaning I need to get their take and explanation to make decisions. That causes delays. Overall, it would be better to just suck it up and learn a bit about statistics. It’s a core piece of domain knowledge any good software tester should possess, if not immediately, then at some point in their career.
There are lots of ways to learn about Statistics, and frankly, most of them are a bit painful. College courses, textbooks, online videos, etc. can help, but they are often slow, or assume that you have some background in the ideas already. What do you do when you want to get the gist of the idea before you tackle the hairier details? That’s where “The Manga Guide to Statistics” comes in handy.
A caveat: this should absolutely not be your only guide to learning statistics. If that’s what you are looking for, then this book will not deliver on that promise. It is, however, a good primer to get you started and help you look at statistics in a way that’s fun and engaging, especially if Manga tropes appeal to you.
To set the stage: our protagonist, Rui, has a chat with the dreamy co-worker of her father (Mr. Igarashi) about understanding statistics. As Rui expresses interest to her dad about learning statistics, he agrees to get her help. Rui creates a fantasy of being tutored by Mr. Igarashi, only to have her hopes dashed when an employee of her father, Mamoru Yamamoto (read: drawn to not be dreamy), comes to teach her about statistics. Hilarity ensues. There, that’s the Manga trope, and yes, “kawaii” abounds.
Chapter One focuses on understanding data types, and how we can put terms like Categorical (Qualitative) Data and Numerical (Quantitative) Data into terms that are easier to understand (using a high school slice-of-life Manga drama as the basis for the comparisons). By looking at examples like reader questionnaires, we get to the idea of what these data types are (categorical data cannot be directly measured, while numerical data can be). It also shows how categorical data can be given a point value and treated as numerical data.
Chapter Two gets more into numerical data and discusses some key statistics concepts, such as looking at Frequency Distributions and Histograms (conveniently described by looking for the best ramen shop in the city, by varying definitions of “best”, and by comparing a team’s bowling scores). By looking at data points and other criteria, and examining how those criteria can be condensed into a table of values, Rui and Mamoru show us how we can calculate the Mean (or average), the Median (the actual midpoint of the samples) and the Standard Deviation (the “fudge factor” of what’s been collected).
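For reference, the formulas behind the mean and standard deviation are compact enough to jot down here (my notation, not the book's panels; whether you divide by n or n-1 depends on whether you treat the data as the whole population or as a sample):

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2} \]

As a quick made-up example, for the three scores 80, 90 and 100, the mean and the median are both 90, and s = sqrt((100 + 0 + 100)/3), or roughly 8.2.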
Chapter Three goes into categorical data. By its nature, categorical data, or qualitative data, cannot be boiled down to a number as-is, but certain aspects of it can be categorized, and that categorization can be turned into quantitative (numeric) data and calculated. Using a cross tabulation, some numerical analysis can be performed, and therefore qualitative data can be measured, albeit imprecisely.
Chapter Four goes into the ideas of Standard Score and Deviation Score, or how to look at a specific data point and see how it relates to the rest of your data, and how to compare data points in a variety of ranges or with different units of measurement.
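Once you have the mean and standard deviation from Chapter Two, the standard score is a one-liner, and the deviation score (if I'm reading it right, this is the "hensachi" familiar from Japanese school exams, which fits the book's setting) is just a rescaling of it so that the mean lands at 50:

\[ z = \frac{x - \bar{x}}{s}, \qquad \text{deviation score} = 50 + 10z \]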
Chapter Five talks about Probability, and the ways in which we can predict an outcome based on the data on hand (more correctly, make an educated guess as to the outcome, which is what probability is meant to do). Data can be plotted on a graph, and with enough data points that graph can be converted into a curve. That curve (the distribution) shifts and stretches based on the mean and standard deviation. Using a number of different models (Normal distribution, Standard normal distribution, Chi-square distribution, t distribution and F distribution), we can make the curve “move”. By taking into account the way that the curve moves, we can calculate a ratio, or probability, which in turn can allow us to make a variety of predictions.
Chapter Six looks at comparing the relationship, or correlation, between two variables. By charting variable values on a scatter plot, we can eyeball the values and see if we have a positive or negative correlation, or if there is little to no correlation. If we sense there is a correlation, we can use a variety of indexes (spelled out here as Correlation Coefficient for numerical-numerical data, Correlation Ratio for numeric-categorical data and Cramer’s Coefficient for categorical-categorical data) to determine the overall strength or weakness of that correlation. This chapter also points out that these indexes are “fuzzy”, but they are better than nothing.
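For the numerical-numerical case, the index in question is the standard Pearson correlation coefficient (again, my notation rather than the book's):

\[ r = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}\;\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}} \]

The result always lands between -1 and +1; values near either extreme suggest a strong linear relationship, and values near 0 suggest little or none, which is the "eyeball the scatter plot" intuition made numeric.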
Chapter Seven examines Hypothesis Tests, which are used to help clarify, or understand, whether a hypothesis made by examining sample data is correct. The chapter shows how to test variables for independence, how critical regions come into play, and walks through examples of performing a statistical analysis with a variety of tests to see whether variables are independent or homogeneous, and the degree to which they are either, both, or neither. Lather, rinse, repeat.
The book closes with an Appendix that describes how to use Excel to set up the examples explained in the book, and how to get to the functions and create the formulas necessary to do the measurements described in the previous chapters. This is a wonderful addition; it gives even neophytes to statistics a way to play with the data, analyze their results, practice the hypothesis tests, and work out the probability of future events or actions.
Statistics can be fun, if you plot the story right. If following the antics of Rui and Mamoru sounds like a good time to you, and if gaining a fundamental understanding of some key statistics concepts is your end goal, then this is a nice format in which to learn those ideas. Note, I said “fundamental ideas”. Do not come away from this guide thinking “OK, great, now I know all the statistics I need to know”. Granted, you may learn enough about statistics to be useful, and it may give you additional insights, but this is not an in-depth study. Having said all that, for those who want to get into the nitty-gritty stuff, the Appendix about setting up tables and examples using statistical functions and formulas is worth the purchase price alone.
On the Manga story front… does our intrepid heroine Rui master the art of Statistics? Will her unrequited love for Mr. Igarashi remain as such? Will Mamoru be able to take the place in Rui’s heart where she holds affection for Mr. Igarashi? Even if he does, is such a relationship just a little bit creepy? Ahh yes, all of this, and more, shall be answered. For those who read manga, well, you probably already know the answers to all of those questions... but it’s still a fun read. For those curious as to whether or not a Manga can teach you a thing or three about statistics, the answer is “yes”, but you’ll need to look elsewhere to build on what’s covered here. As for my target market (i.e. my fellow software testers), if statistics is not your strong suit, this makes for a very practical introduction, with plenty of takeaways to make you just a bit more dangerous at work, and I mean that in the best possible way.
Monday, March 10, 2014 16:17 PM
As I find myself having conversations with a hosting company trying to find out why a database provisioning tool isn't working, a number of modules for SummerQAmp are nearing final drafts and re-visioning, and a few talks are being finalized for conference presentations both near and a few months from now, I came to a realization today. TESTHEAD just turned four years old.
Back on March 10, 2010, I posted the very first message on this blog. It was a bit hesitant, but it made the point of what I hoped it would be:
Welcome to TESTHEAD
"OK, why the need for a blog like this? Well, truth be told, I don’t know that there really is a “need” for this blog, but I’m doing this as a challenge to myself. I consider this an opportunity to “give back” to a community that has helped me over the course of many years, as I have been the beneficiary of many great insights and learned a lot from a number of people and sources over nearly two decades.
This will be a site where I share my own experiences, both good and bad, and what I've learned from them. Expect there to be talk about tools, both proprietary and open source. Expect some talk about test case design (and how I so hate to do it at times). Expect to hear me vent about some frustrations at times, because like all people, I have my share of frustrations when things don't seem to work correctly or go the way that I planned them to."
Friday, March 07, 2014 23:34 PM
I did something a little bit bold this week: I decided to bring my daughters with me up to Boreal ski area for a snowboarding trip. That's not all that unusual, really, but it is considering I did it on a Wednesday, and yes, I voluntarily took my daughters out of school to come up with me for the trip. Yes, I did make sure that they knew what their assignments were, and that work was done in advance, as much as possible. No, this is not something I plan on making a habit of, but I did feel this was an important thing for us to do. I really wanted to focus on my daughters' snowboarding skills, and to be able to teach them in an environment, and on a day, that wouldn't be crowded and would be most conducive to learning.
One of the things I've come to realize is that you can tell people exactly what they need to do. You can articulate everything down to the last detail. You can give them an exact blueprint as to what they have to accomplish. None of that is going to matter if they are afraid, unsure or resistant. My youngest daughter is now 13 years old. She's probably had the least amount of time on the hill compared to all my kids. Once you have three kids and they're in school and doing their various activities, you start to realize that going up snowboarding with everybody, whenever you want to, becomes a little more difficult. Over the years, I've had to balance my trips and take them either individually, or we'd agree to go for maybe one or two days a season as a whole family. Fun, sure, but not really good for making sure anyone advances in skills. We live three hours away from the snow at the best clip. That means it's a big deal, and a big time commitment, to go riding. A day trip is often 18 hours door to door. Much as I love it, and much as it is something I am willing to spend a great deal of time doing, I can't expect my kids to put up with the same things that I would. Therefore, we haven't gone as often these last few years. That means that, while I was able to develop many of my skills in a short period of time, my kids have not had the same opportunity. I wanted to make sure that we had a day devoted to better skills, and better riding, because let's face it, if you can ride proficiently, the mountain is much more fun, and there are many more options open to you.
My youngest daughter is not timid. She's willing to ride down just about any terrain a mountain can offer, but she suffers from the same thing that many snowboarders do. It's a condition called "heel-side-itis". So many riders never get past the level of going straight, or braking and turning on their heel-side edge. Riding toe side just tends to scare them. Personally, I had the opposite problem. At first I *only* rode toe side; getting over to my heel side regularly was more difficult. In both cases, though, if you only ride one direction (toe side or heel side), you are somewhat at the mercy of gravity. You're also only using half of your effective muscle mass in your legs, and the half that you are using, you are stressing almost all the time, which leads to greater and faster fatigue. Linked turns are efficient, they give you more control over your trajectory, and frankly, they just make riding that much more fun.
So why are so many people resistant to learning how to turn toe side? The simple answer is that, especially on steeper terrain, you have to commit to going straight down the hill on a pitch, and then make the turn happen. Frankly, that's unnerving for a lot of people. The irony is, it's easier to turn on steeper terrain than it is on flatter terrain. Gravity does the hard work for you. Getting over that mental hurdle is still a challenge.
My youngest daughter is a fighter. She tends to want to do things her way, but at the same time she also wants to get better, and frankly, she wants me to give her a "high five" and say she's doing a good job. She'll definitely try, but she gets frustrated and irritated easily, and often that results in fighting between me being the instructor and her being the student (note: this dynamic is not limited to snowboarding ;) ).
Because of this, I've found some interesting incentives to encourage her along the way. Sometimes, those incentives are just kind of off the wall. Case in point: my daughter is a big Korean pop music (K-pop) fan. She loves playing the music from her iTouch in the car on our road trips. Recently, my son, since he now has his own car and his own iPad to play music, decided that he wanted the cassette-deck adapter so that he could listen to music in his car. Because of that, we had no cassette-deck adapter, which meant my poor little girl had to suffer through listening to my old CDs of ska, new wave, hip hop, punk rock and heavy metal during the trip. During our day of riding, as I was trying to get her to better link her turns, and be more casual and natural on steeper terrain, she resisted. I thought about how I might be able to get her to respond. At that moment, I just smiled and said, "OK, I'll make you a deal. You give me three picture-perfect runs, and on the way back, at the very first electronics outlet I find, I will buy a cassette tape adapter, so that you can listen to K-Pop over the car stereo the rest of the way home." It worked like a charm.
Yes, this post is indulgent, and yes, I'm covering a topic that seems completely out of place, but for those who are regular readers, when has that ever been a surprise? The reason I mention this is that there are many incentives that drive our behavior, and when you get right down to it, they are just not rational. Actually, they are rational, they're just weird. We're constantly in a state where we have to trick our brains into doing what we want them to do. When we use these incentives, and we know which incentives actually matter to people, we can make amazing things happen. We can also figure out how to streamline the process, and see if it really does work for them.
If we can lower the barriers that cause resistance, and if we can get people to enthusiastically take on a challenge, we can then really see how they adapt to the situation. If they are reticent, or rebellious, or just not in the mood to do something, then no matter what we do, no matter how well we teach it, we'll have problems with them accomplishing the goal. The next time you decide you need to take on a testing challenge, or you have something staring you in the face that's just irksome or difficult, and you need to get over the hurdle to make it happen... pick an incentive that will actually motivate you. Don't be surprised if that incentive is a little bizarre. Sometimes bizarre incentives are the ones that really spur us on.
Friday, February 28, 2014 21:49 PM
[Image caption: For some reason, this picture just sums up my past two weeks perfectly ;).]
First, I've been looking at a variety of resources available online to learn about and practice using PHP. PHP can do a lot of interesting things: display output, pull in a variety of information sources, and simplify some tasks. To do all that, though, a fair amount of tinkering is involved.
Second, just like HTML all by itself will not give a web site a nice look and feel, PHP will not be the be all and end all of interactivity, either. Setting a site up from scratch means that there is a fair amount of interplay to work out, configuration details to tweak, and a lot of refreshing to see changes. Also, without a back end database, much of what is being done in the pages is superficial and not very interesting, though it does help to hammer out syntax.
[Image caption: It's a small victory, but hey, I'll take it!]
There's some oddity with Codecademy's interface when it comes to completing certain assignments and exercises. I have found myself unable to complete a module that I had been actively working on, even though the "code" was correct for the context. I have also closed down my browser, reopened it, gone back to the section I was just working on, clicked Save again, and gotten a "Success".
Why do I mention this? Because I'm willing to bet others might be struggling with some examples and scratching their heads wondering why Codecademy isn't accepting their results. Often there are typos, and those are easy to fix, but if you find yourself in a spot where you cannot get it to work, no matter what you do, try closing your browser and coming back to the module and saving again.
Noah stated in the initial comment that we should be prepared to spend a few weeks on this initial project, and to be open to the fact that we will be doing a lot of mucking around to get it to work. Just being able to manipulate pictures would be considered a positive milestone. I thought this would be relatively quick; I mean, how hard could it be to just put up a simple site with PHP? But the point is not to just put up a site with PHP; there are lots of ways to do that superficially. The image manipulation challenge is what sets it apart. Noah gives us an authentic problem, and asks us to solve it, without guidance as to how, or what to use, to do it.
This process led me down several paths and experiments. I set up a local stack on my personal machine. I set up a LAMP server in a virtual space. I set up a site already on the open web to use PHP and experimented with commands and syntax. I rolled several pages of my own to see how it all fits together. I downloaded a ready-packaged "template" to get some ideas and save me some keystrokes. I swapped ideas between the home-grown pages and the template pages. In short, I tried things, I tweaked them, I went down several dead ends, and I predict I'm going to go down several more.
One thing I learned from my time releasing a podcast every week was how long it actually takes to do something. At first, I was wildly over-optimistic. I figured my skills with writing music and doing audio editing would make something as simple as editing a podcast a breeze. A clip here, a snip there, and all would come together. If I wanted a slap-dash product, with little regard for the end experience of the listener, that was true. It took little time at all to edit a program that sounded hacked and choppy, but hey, it got the main points across. To make it sound good, to make the audio flow naturally, to remove pause words (the ums, ahs, likes and you-knows), to make the transitions sound smooth and clean, and to preserve the natural narrative so that the interviews and programs were comfortable to listen to, took a considerable amount of time.
I realized I couldn't put in one four-hour editing session and have a product that sounded good, but I could put in four one-hour editing sessions spread over several days and make a podcast that sounded great. The difference? Spreading out the effort is vital, because discernment and clarity come with repeated practice, and with some down time to let the brain reflect offline. We don't get that same level of clarity when we try to push everything into one night to put it all together. My mistake has been more of the latter and less of the former. When we cram, we seek shortcuts. We look for quick hacks that "work", for some definition of "work". Our standards for what is "acceptable" go way down, and we repeatedly say "oh, heck with it, it's working, it's good enough". Doing it a little bit at a time, and coming back to reflect on what we are doing, lets us see things that could be done better, and realize we really can do better, without a lot of extra pain and effort. Last week, I tried to put it all together at one time, and was frustrated. This week, I managed a little more spacing, and got closer to a level of skill where I feel like I'm doing something useful, but I know there's lots more to do before I even have something basic in place.
So yeah, the past two weeks have been hectic, scattered, and less focused and more "bunched up" than I want them to be. It feels like the way I see many programmers having to work because of issues and changing priorities, and I have a greater empathy for them and what they go through, even if, in my case, it's only to meet my own arbitrary "deadlines". If that is part of the "lessons learned" that Noah wants to encourage, I think it's working very well.