A Leaner and Cleaner Codecademy

Thursday, April 24, 2014 14:37

A couple of years ago, I posted that I was excited to see the initiative that would become Codecademy get off the ground. At the time, it was limited in what it offered. It featured a course on JavaScript and some other small project ideas, and after a little poking around, I went on to other things. A year later I came back and saw that there was some new material, this time on Ruby and Python. A little more poking, and then I went on to do other things.

I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place where novice testers could go and learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.

For starters, Codecademy has refreshed everything on the site. They talk about it at length in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.

They've opened up a number of courses and updated several of their older offerings. The original JavaScript track has been deprecated (but it is still there if you want to work through it), and a new JavaScript track has been put in its place. The site has been augmented with a jQuery track, a freshened HTML/CSS track, and updates to the Ruby, Python, and PHP tracks as well.

In addition, there are several small project areas where users can practice and make "Codebits" to show what they have learned. Some of the Codebits are already assembled (examples include animating your name, making a solar system model, and a simple web site template), and there are open-format Codebits that users can share. There are also a variety of projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.

Another cool section is the API track. Currently, there are 29 APIs listed that users can experiment with, making applications that interact with the various services. The offerings range from YouTube to Twitter to Evernote, and each listing shows the languages best suited to using that particular API (JavaScript, Ruby, and Python).

So how's the actual learning process? It's pretty solid, to tell the truth. Each track has a variety of initiatives, and a range of lessons and small projects interspersed throughout to keep the participant's attention. The editor can be finicky at times, but usually a page refresh will solve most of the odd problems. One of the nice attributes of having an account and working through the exercises is that your progress is saved. All of the steps from the first lesson to the last are recorded as part of your progress. That means you can go back and see your "cleared" examples and exercises.

Additionally, there are Q&A Forums associated with each project, and even when I've been stuck in some places, I've been able to find answers in the forums thus far. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource. If I have to be nit-picky, it's the fact that, often, many of the Q&A Forum answers are jumbled together. Though the interface allows you to filter on the particular module and section by name, number, and description, it would be really helpful to have a header for each question posted that says which module the question refers to. Many users do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).

Overall, I think Codecademy has come a long way from when I first took a look at it about two and a half years ago. They have put a lot of effort into the site and their updates, and it shows. If you are already playing around at Codecademy, you already know everything I've written here. If you haven't been there in a while, I recommend a return trip. It's really become a nice learning hub. If you have never been there, and are someone who wants to learn how to program front end and back end web apps, and you like the idea of FREE, then seriously, go check the site out and get into a track that interests you. I'd suggest HTML/CSS, JavaScript, and jQuery first. From there, if you'd like to focus just on making web sites with little in the way of entry criteria, check out the PHP track; otherwise, branch out into the Ruby or Python tracks, and work through the site at your own pace. It's not going to be the be-all and end-all destination to learn about programming, but seriously, you can make a pretty big dent with what you can learn here.

Selenium SF Live: An Evening With Dave Haeffner

Wednesday, April 23, 2014 19:26

It’s been about three years since I first met Dave. He was, at the time I met him, working with the Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn’t as usable as I had hoped for a releasable podcast, but I remembered well the conversation, specifically Dave’s goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of “The Selenium Guidebook”, and tonight a couple of different Meet-up groups (San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I’ve been a subscriber to Dave’s Elemental Selenium newsletter for the past couple of years, and I’ve enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea and approach they might not have considered before. I’m looking forward to seeing where Dave's head is at now on these topics.

Here are some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time and space afterward to clean up and organize the thoughts. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to write tests that are business-valuable, and then package those tests in an automated framework. This frees the tester to look for more business-valuable issues with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items it needs and confirm their presence, or determine what to do next based on their existence or non-existence. Class and ID are the most helpful locators over the long term. CSS and XPath selectors may be needed from time to time, but if they're more "rule" than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on the superiority of one approach over the other.
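To make that concrete, here is roughly what those locator strategies look like in selenium-webdriver's JavaScript bindings (the element they point at is a made-up example):

```javascript
const { By } = require('selenium-webdriver');

// The same hypothetical element, located three ways. The first two tend to
// survive page restructuring far better than a long positional XPath.
const byId = By.id('login-button');
const byCss = By.css('form#login button.primary');
const byXpath = By.xpath('//form[@id="login"]//button[contains(@class, "primary")]');
```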

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.
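As a sketch of what "atomic" means here, a single test that checks exactly one behavior, a failed login, might look something like this (JavaScript bindings again; the page and locators are borrowed from "the-internet", Dave's public practice app):

```javascript
const { Builder, By, until } = require('selenium-webdriver');

// One atomic test: a bad password shows an error, and that's all it checks.
(async function badPasswordShowsError() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://the-internet.herokuapp.com/login');
    await driver.findElement(By.id('username')).sendKeys('tomsmith');
    await driver.findElement(By.id('password')).sendKeys('not the password');
    await driver.findElement(By.css('button[type="submit"]')).click();
    // Pass only if the error flash actually appears.
    await driver.wait(until.elementLocated(By.css('.flash.error')), 10000);
    console.log('PASS: bad password was rejected');
  } finally {
    await driver.quit();
  }
})();
```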

Page Object models allow the user to tie Selenium commands to page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction, so that changes to the Selenium driver minimize the need to change multiple page object files.
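A rough sketch of that layering (the method names are my own, not Dave's): page objects talk to the base page, and only the base page talks to Selenium, so a driver API change lands in one file instead of every page object.

```javascript
const { By } = require('selenium-webdriver');

// Only BasePage touches the Selenium API directly.
class BasePage {
  constructor(driver) { this.driver = driver; }
  async visit(url)          { await this.driver.get(url); }
  async type(locator, text) { await this.driver.findElement(locator).sendKeys(text); }
  async click(locator)      { await this.driver.findElement(locator).click(); }
}

// Page objects express user intent in terms of BasePage helpers.
class LoginPage extends BasePage {
  async logInWith(username, password) {
    await this.visit('http://the-internet.herokuapp.com/login');
    await this.type(By.id('username'), username);
    await this.type(By.id('password'), password);
    await this.click(By.css('button[type="submit"]'));
  }
}

module.exports = { BasePage, LoginPage };
```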

Explicit waits help put a time bound on problems with page loading or network latency. Defining a "wait for" option is more helpful, as well as more efficient: instead of a hard-coded 10-second delay, the wait sets a maximum time limit but moves on as soon as the item needed actually appears.
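Something like this (the locator is a made-up example): the ten seconds is a ceiling, not a price, so only a no-show costs the full wait.

```javascript
const { By, until } = require('selenium-webdriver');

// Unlike a hard-coded sleep, this returns the moment the element appears;
// it only throws if nothing shows up within the 10-second ceiling.
async function waitForResults(driver) {
  return driver.wait(until.elementLocated(By.id('search-results')), 10000);
}
```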

If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files (see the sketch after this list)
  • Tagging (test packs; subsets of tests such as wip, critical, component name, slow tests, story groupings)
  • A reporting mechanism (or borrow one that works for you; have it be human-readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • Wrap it all up so that it can be plugged into a CI server.
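For the config file point, a minimal sketch of what "well defined" might look like (every key here is an assumption on my part, not Dave's actual layout):

```javascript
// config.js — one central place for settings, overridable per environment.
module.exports = {
  baseUrl: process.env.BASE_URL || 'http://localhost:4567',
  browser: process.env.BROWSER || 'firefox',
  defaultTimeoutMs: 10000,
  // Tags let you run subsets: wip, critical, slow, a component, a story group.
  tags: (process.env.TAGS || '').split(',').filter(Boolean),
};
```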

Scaling our efforts should be a long-term goal, and there are a variety of ways to do that. Cloud execution has become a very popular method. It's great for parallelization and for running large test runs in a short period of time, if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies. Find errors early, and often :).
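If your test runner can't randomize order on its own, even a simple shuffle over the spec list will do (the file names here are made up): hidden test-order dependencies then surface as intermittent failures you can chase down.

```javascript
// Fisher-Yates shuffle: feed the randomized list to your runner.
function shuffle(items) {
  for (let i = items.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [items[i], items[j]] = [items[j], items[i]];
  }
  return items;
}

console.log(shuffle(['login.test.js', 'search.test.js', 'checkout.test.js']));
```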

Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix it there and test again before allowing it to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 

TECHNICAL TESTER, err, Saturday: Pain Fog and Objective Completion

Saturday, April 19, 2014 19:45

Yes, I know, I'm a day late with this. Actually, I'm closer to a week and a day late with this, but reality decided to remind me that I'm not 21 any longer.

Last week, April 9th, as I was getting off the train, I stood up and reached over to grab my bag. The "twinge" I felt above my hip on my right side was a telltale reminder: I have sciatica, and if I feel that twinge, I am not going to be in for a fun week or two. Sure enough, my premonition became reality. Within 48 hours, I was flat on my back, with little ability to move, and the very act of doing anything (including sleeping) became a monumental chore. That meant my progress on anything that was not "mission critical" pretty much stopped. There was no update last Friday because there was nothing to report. I spent most of the last week with limited movement, a back brace, copious amounts of Ibuprofen, and typing only when I had to. I'm happy to report I'm getting much better, but sitting for long stretches to code or write is still painful, though less so each day.

Since I'm greatly desirous to move forward, I decided to make a push in the later part of this week to clear the Codecademy site's three courses related to web development: PHP, HTML/CSS, and JavaScript. Late last night, I finished up the course for JavaScript. Yay me :)!!!

As an overall course and level of coverage, I have to give Codecademy credit: they have put together a platform that is actually pretty good for a self-directed learner. It's not perfect by any means, and the editor can be finicky at times, but it's flexible enough to allow for a lot of answers that would qualify as correct, so you don't get frustrated if you don't pick their exact way of doing something.

Many of the hints offered for each of the lessons are also helpful and don't spoon-feed too much to the participant, making you actually stretch and think. While I've known about and tinkered with JavaScript off and on for years, I will honestly say I've learned quite a bit from these courses. I would recommend these three courses (HTML/CSS, PHP, and JavaScript) as a very good no-cost first stop to learn about these topics. Each does a good job of explaining the topics and concepts individually.

If there is any criticism, it's that there is little in the course examples that integrates the ideas (at least so far). There is a course on jQuery, and I anticipate it will have more to do with actual web component interaction and integration, so that's my next goal to complete. After that, I plan to go back and complete the Ruby and Python modules, and explore the API module as well.

For now, consider this a modest victory dance, or in this case, a slow-moving fist pump. I may need a week or two before I can actually dance ;). Also, next week I'll have some meat to add to this, since I'm going to start covering some command-line tools to play with and interact with, and those are a lot more fun to write about!

Become a "Coyote" at CAST 2014

Wednesday, April 09, 2014 16:10

Now that the full program has been announced, and my talk is posted and described, I can now say, with a certainty, what I'll be talking about at CAST 2014 this August in New York City.

Actually, I need to qualify that. It's not what I'm going to talk about, it's what "we" are going to talk about.

Harrison Lovell is an up and coming tester with copious amounts of wit, humor and energy. Seriously, he gives me a run for my money in the energy department. I met Harrison through the PerScholas mentorship program, and we have been communicating and working together regularly on a number of initiatives since we first met in September of 2013. The results of those interactions, experiments, and a variety of hits (and yes, some misses here and there), are the core of the talk we will be doing together.

Here are the basics from the sched.org site:

"Coyote Teaching: A new take on the art of mentorship"

Too often, new software testers are dropped into the testing world with little idea as to what to do, how to do it, and where to get help if they need it. Mentors are valuable, but too often, mentors try to shoe-horn these new testers into their way of seeing the world. Often, the result is frustration on both sides.

“Coyote Teaching” emphasizes answering questions with questions, using the environment as examples, and allowing those being mentored the chance to create their own unique learning experience. Coyote Teaching lets new testers learn about the product, testing, the world in which their product works, and the contexts in which those efforts matter.

We will demonstrate the Coyote Teaching approach. Through examples from our own mentoring relationship, we show ways in which both mentors (and those being mentored) can benefit from this arrangement.

“When raised by a coyote, one becomes a coyote”.


Speakers

Michael Larsen
Senior Quality Assurance Engineer, Socialtext
Michael Larsen is a Senior Tester located in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches, virtual machines, capacitance touch devices, and video games to distributed database applications that service the legal and entertainment industries.

Harrison C. Lovell
Associate Engineer, QA, Virtusa
Harrison C. Lovell is an Associate Engineer at Virtusa’s Albany office. He is a proud alumnus from Per Scholas’ ‘IT-Ready Training’ and STeP (Software Testing education Program) courses. For the past year, he has thrown himself into various environments dealing with testing, networking and business practices with a passion for obtaining information and experience.


Yes, I think this is going to be an amazing talk. Of course, I would say that, because I'm part of the duo giving it, but really, I think we have something unique and interesting to share, and perhaps a few interesting tricks that might help you if you are looking to be a mentor to others, or if you are one who wants to be mentored. One thing I can guarantee, considering the combinations of personalities that Harrison and I will bring to the talk... you will not be bored ;)

Technical Tester Friday: Ladies and Gentlemen, JavaScript has Entered the Building

Saturday, April 05, 2014 01:45 AM

There's nothing like the mild terror one feels getting back from several days away and thinking "aw man, I have to post something today!" Too many of my posts have stretched over two weeks, and while I had perfectly valid reasons for that, I said that I'd post every Friday, whether I had a lot to talk about or just a little. I've decided the volume of the delivery is less important than the regularity and reliability of having an entry every Friday, and that's what's driving me today.

With the ALM Forum and all that surrounded it, as well as getting ready for my talk and presenting my slides, I really didn't have a whole lot of time to push my way into learning more about JavaScript and implementing as much of it as I wanted to. I started the Codecademy JavaScript module and worked through several of the initial entries. When I realized that I wasn't going to be able to have something to show by the end of this week... okay, I cheated. Well, I didn't really cheat. I just went out on the web, looked at some sample JavaScript projects, and tried to see if I could make sense of what they were doing.

The good news: I found a simple JavaScript project that I could apply to the site's navigation bar. If you remember from last week's example, the navigation bar was really just a couple of links horizontally displayed, with brackets on the end to simulate "buttons". This time, we made real buttons with a little interactivity to them. They give a visual indication of which page has been selected (the button is a little larger than the others).

And here's the kind of CSS and JavaScript that makes the site look better than 1995 ;).
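Something along these lines (a minimal sketch of the idea; the class names, styles, and page names here are illustrative, not the exact project code):

```html
<!-- Buttons that link between pages, with the current page's button
     drawn slightly larger. All names and styles are placeholders. -->
<ul id="nav">
  <li><button data-page="index.html">Home</button></li>
  <li><button data-page="about.html">About</button></li>
  <li><button data-page="contact.html">Contact</button></li>
</ul>

<style>
  #nav li { display: inline; list-style: none; }
  #nav button { font-size: 14px; padding: 4px 10px; }
  #nav button.active { font-size: 17px; padding: 6px 12px; } /* the "selected" look */
</style>

<script>
  // Enlarge the button for the page we're on, and wire up navigation.
  var here = window.location.pathname.split('/').pop() || 'index.html';
  var buttons = document.querySelectorAll('#nav button');
  for (var i = 0; i < buttons.length; i++) {
    if (buttons[i].getAttribute('data-page') === here) {
      buttons[i].className = 'active';
    }
    buttons[i].onclick = function () {
      window.location.href = this.getAttribute('data-page');
    };
  }
</script>
```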
I should stop here and say, again, there are a lot of neat little distractions you can get into with JavaScript. There's potentially a slight barrier to entry for a brand-new web developer. HTML is pretty easy. CSS has some rules, but once you learn them, they don't feel that different compared to native HTML. JavaScript is very similar to PHP, in that you can learn the basics pretty quickly. How to actually use the basics effectively, and in a meaningful way on your pages... that's a bit of an art form, and it's one of the things you're going to have to practice doing. Start small and work out from there.

So there you have it. Again, because I was out of town and completely consumed by the ALM Forum conference, I did not get a chance to do as much JavaScript hacking as I wanted to, but that gives me a chance next week to get a little deeper. Perhaps I can pop a little bit of eye candy into the site, so we make it a little more interesting. As always, crawl before you walk, walk before you run, and maybe run before you get on a bicycle or drive a car. Little steps get you in, and I think the project will ultimately become a little more interesting as the defined repeatable things get more and more canned so I can focus on other things :).

Testing the Limits at #ALMForum: Day Three

Friday, April 04, 2014 09:28 AM

Wow, what a week this has been. We're now on day three, the last day, and I'm up in an hour! I'm excited, a little frazzled, but I think we're going to do well. I'm also excited that the four speakers in the breakout today are all good friends; Curtis Stuehrenberg, Seth Eliot, and Mark Tomlinson are gonna help me close out this conference, and we look forward to chatting with as many people as possible who want to look at ways to change the face and state of software testing. If you are here at ALM Forum, come join us. If you are not able to be, please read on here and take in as much as you can from my notes and observations.

-----

Transforming Software Development in a World of Services with Sam Guckenheimer is the first session, and we are starting out with a thought experiment around Airbnb (the online service to rent rooms, houses, etc. in different cities). A boat on Puget Sound is available, so a company can host all of their team members on the boat. What will the experience be? Will it be a fun stay? Will it be too cramped? We don't know, but one thing's for sure: it will be open, it will be public, and good or bad, if people want to talk about it, they will.

This makes for an interesting comparison to Agile development, and the way that Agile has shaken out. What had been intended as a relatively private, internal housekeeping mode has become a more public viewing. We are social, we are open, we use systems that are often out of our control in the 100% sense of the word. A lot of our practices and actions are not quiet and hidden; they are visible to all who would care to see them. It's a little daunting, but it's also tremendously liberating.

This talk is looking at a Microsoft ideal of "cloud cadence". Customers want regular improvements, we want to maximize the value we provide to our customers, and we know that their feedback is not just for developers; it's seen by everyone. Get it right, and we have app store five-star reviews. Get it wrong, and we can have considerably lower reviews (and don't for a second think those reviews don't matter; it can be the difference between adoption and being totally forsaken).

The DevOps life cycle comes together from three aspects: we have development, we have production, and in between we have the collaboration piece. What's the most important element there? Well, without good development, we have a product that is sub-par. With bad deployment, we might have a great product, but it won't really work the way we intend it to. The middle piece is the critical aspect, and that collaboration element is really difficult to pin down. It's not a simple prescription or a set checklist. Each organization and project will be different, and many times, the underpinnings will change (from our servers to the cloud, from a dedicated and closed application to a socially aware application). Sometimes the changes are made deliberately, sometimes a little more forcefully. Either way, none of it works without a sense of shared purpose and collaboration between the development and production groups, including the tooling necessary to accomplish the goals.

The ability to do all of these things in the Visual Studio team is the core of Sam's talk; the interactions with their clients, and the variety of changes that occur, drive many of their decisions. They learn from their customers and change direction. They focus on a human-to-human feedback model (which may sound a little unusual for a giant company like Microsoft, but Sam makes a convincing case :) ).


-----

So this is my talk. No, I can’t talk about my talk while I’m giving it, so this is a little canned ;). My topic is “The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed”. If I were to be a little more literal with my title, I’d call it “What you want the new testers that you hire to know and want to be so that they can be genuinely effective for you and your team… oh, and they may not be the obvious areas you think they need to be”.

Software development, and software testing, is undergoing a radical change. We’ve embraced the idea of changes in development and delivery, but we tend to still look at old-school “best practices” in software testing. We’re not still testing the software the previous generation wrote. Development has changed, and testing is changing; it’s still as relevant as it was before, but we need to approach it differently than we have.

I’m involved in a variety of initiatives that are specifically geared towards teaching software testing to a new generation of testers (and hey, current testers may find the ideas useful, too).

Programs like SummerQAmp, PerScholas, Weekend Testing, the Miagi-do School of Software Testing, and the BBST series of classes are all designed to help software testers not just develop ideas, but real world skills that can help them do their jobs effectively. The community that surrounds testing (in the Twitter, G+, and special forum space) are all doing amazing work to move testing forward. 


So what’s wrong with the old model? We still hear about testing teams, even in so-called Agile organizations, that are still running heavy-process, heavy-scripting series of tests. It’s as if the development team is Agile, but the test team is expected to still be a waterfall team. Automation makes a lot of promises, and don’t get me wrong, I am pro-automation for many things. I use automation. I write automation. I prefer the term Computer Aided Testing, but Automation will suffice. It’s a tool, but it’s not the only tool, and it has been oversold on what it can accomplish. It’s great for repetitive tasks. It’s great for configuration and iteration stepping. It’s lousy at making informed decisions. Though it’s not been a problem I’ve personally dealt with or had to experience, I know that “certification” has been sold as a way to “pre-qualify” testers. As a practical outcome, I think we have failed here, because most of the certifications offered are heavy on passing a test, and light on demonstration of real-world skills and the effectiveness thereof.


I believe the New Testers need to focus on a new toolkit and a new attitude. It’s not really new; in fact, in many ways, it’s ancient, but it’s been woefully underutilized. We need testers who are sapient (stealing that from James Bach), basically meaning testers who are actively and critically thinking about what they are doing and observing. Testers need to do more than find bugs; they need to sell those bugs. Really, what’s more important: lots of bugs, or the championing of important bugs that actually get fixed?

Testers need to return to, and have a solid understanding of, both the Scientific and Socratic methods. I believe that New Testers will be less button-pushers and more scientists, philosophers, and skeptics. These are not just testing traits; these need to be embraced by everyone in development. New Testers don’t want to prove the software works. They want to find how it is broken. They want badly to lose the stigma of being the bug shield. They are much better utilized as “beat reporters” sharing a clear story of your product. A thought experiment from Elisabeth Hendrickson that I personally love is: “What is the most terrifying headline about your company you could imagine seeing in the paper?” Wouldn’t you want your testers to not only find out that terrifying headline, but inform you so that you could prevent it?

OK, that’s great. So where can I find these New Testers? You can find them in Computer Science departments at universities. Yes, I’m daring to say it. Most testers have historically fallen into the job, but I am seeing people who are now self-selecting to be software testers, and it’s *WONDERFUL*. They are not also-ran programmers, or people who couldn’t hack programming. Some are great programmers, but they have decided that there are other challenges they’d like to deal with rather than stringing code together. And that’s *ALSO* great. The point is, they are not considering testing as a consolation prize; they are selecting testing on its own merits, and we should recruit them with the same philosophy. Where else can we find great up-and-coming testers (and, to be fair, current great testers)? Check out people with degrees in Humanities, or Journalism, or Psychology. Look for actual scientists who might be looking for a change of pace. Do you have really good Customer Service Representatives? It’s a good bet you have some fantastic testers in that group.

Programs like SummerQAmp, PerScholas, Weekend Testing, BBST Courses, a thriving ecosystem of Bloggers, Newsgroups and Online Magazines and Twitter (yes, Twitter :) ) are at the vanguard of bringing this new paradigm of testing to the fore. Each of these, in their sphere, is looking to help bring real, tangible testing skills to their participants, and give them a chance to show what they can do and improve their craft. Weekend Testing is not just a movement, it’s also a portable model that anyone can use. All you need is Skype, a topic of discussion, a product or project, a mission and some charters, and two hours to interact, instruct and facilitate. If you want to see some amazing testing insights, I encourage you to review just about any Weekend Testing transcript. 

Testers are not a single group, they have many interests and they have their own niches. Some testers will be good at some, better at others, probably not stellar in all, but with a broad team with recognition of this, you may be surprised at the powerhouse you can develop when you get the Explorers, the Performance Tweakers, the Toolsmiths (automation, CI, deployment tools, etc), the Evil Masterminds (Security), the Humanists (human factors, usability), and the Storytellers together. Just don’t make the mistake in thinking you can get this all in one person. You may get attributes of all in one tester, but none of us can be experts at all of these, or should I say, very few of us can be (I’m certainly not one of them).

In all, the new testers will be focused on:

  • less scripting, more active thinking
  • less checking, more real testing
  • less blind faith, more scientific skepticism
  • creative, inventive, intuitive, mindful

In short, the future is now, and I can introduce you to hundreds of them ;). Better yet, why not come join us and see for yourself?


  • SummerQAmp: hire an intern
  • PerScholas: have a chat with recent STeP graduates and their mentors
  • Weekend Testing: Come join us for a session or two and see the magic happen
  • Miagi-do: Do a web search for the term “Miagi-do School of Software Testing”. Or better yet, just ask me ;). 

-----

Curtis Stuehrenberg is talking about how to "ACCellerate Your Agile Test Planning". He decided to chuck the PowerPoint entirely and give a crash course in Agile testing on a live product... specifically, his product (Climate Corp's mobile app). His point was to ask: what if we have to test a product in two weeks? How about one week? How about three days? What are you going to do?

Rather than talk it, we all participated in an active testing session, downloading the app to our mobile devices (iPhone and Android only, sorry Windows Phone users :( ). By walking through the steps and the test areas, and using an idea from James Whittaker and Google called the ACC (Attribute Component Capability) model, we all, in real time, put together sections of risk and areas we would want to make sure we tested. In many ways, ACC is a variation on a theme of Session Based Test Management (SBTM). It informs our tests, we act on the guidance, and we pivot and adapt based on what we learn, and we do it quickly.
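To give a flavor of the shape of an ACC grid (these entries are my own invention, not the ones from the session): attributes are the qualities the product promises, components are its parts, and each intersection is a risk worth exploring.

```javascript
// A hypothetical slice of an ACC grid for a mobile field app.
const accGrid = {
  'Login':     { Secure:   'credentials are never cached in plain text',
                 Fast:     'sign-in completes on a weak cell connection' },
  'Field Map': { Accurate: 'acreage and boundary math is correct',
                 Fast:     'map tiles render on a slow network' },
  'Sync':      { Reliable: 'offline edits survive an app restart' },
};
```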

Much of the interaction was just things we did in real time, and for my money, this was a brilliant way to emphasize this approach. Instead of just talking about it, we all did it. Even if the idea of a formal test plan is not something you have to deal with, give this approach a try. I know I'm going to play with this when I get back home :).

-----

Now it's time for Seth Eliot and "Your Path to Data Driven Quality", a roadmap for how to use the data that you are gathering to help guide you to your ultimate destination. Seth wants to make the point that testing is measurement, and you can't measure if you don't have data (well, you can, but it won't really be worth much). Seth asks if we are HiPPO driven (meaning: is our strategy defined by the "Highest Paid Person's Opinion"?) or are we making decisions based on hard data. Engineering data can help a little bit (test results, bug counts, pass/fail rates). They can give us a picture, but not a complete one (in fact, not even close to a complete one). There's a lot of stuff we are leaving on the table. Seth says that leveraging production data (or "near production data") gives us a richer and more dynamic data set. Testers try to be creative, but we can't come close to the wacko randomness of the real-world users that interact with our product.

First step: Determine your questions. Use the Goal-Question-Metric approach. Start at the beginning and see what you ultimately want to do. Don't just get data and look for answers. Your data will taint the questions you ask if you don't ask the questions first. You may develop a confirmation bias if you look at data that seems to point to a question you haven't asked. Instead, the data may give you a correlation to something, but it may not actually tell you anything important. Starting with the question helps to de-bias your expectations, and then it gives you guidance as to what the data actually tells you.
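Sketched as data, Goal-Question-Metric might look like this (the goal, questions, and metrics are invented for illustration; the point is the direction, goal first, metrics last):

```javascript
const gqm = {
  goal: 'Know whether the new checkout flow helps or hurts conversion',
  questions: [
    { ask: 'Do more sessions that start checkout finish it?',
      metrics: ['checkout completion rate', 'drop-off count per step'] },
    { ask: 'Is the new flow slower for real users?',
      metrics: ['95th-percentile page load time in production'] },
  ],
};
```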

Then: Design for production-data quality. There are two types of data we can access: active and passive. Active data could be test cases or synthetic data from a simulated user. Passive data comes from real-world data and real user interactions. Synthetic data is safer, but it's by definition incomplete. Passive data is more complete, but there's a danger to using it (compromising identification data, etc.). Staging the data acquisition lets us start with synthetic data (reminds me of my "Attack on Titan" account group that I have lovingly put together when I test Socialtext... yes, I have one. Don't judge me ;) ), then move to copying my actual account and sharing on our production site (much richer data, but it needs to be scrubbed of anything that could compromise individuals' privacy... which in turn gets us back to synthetic data of sorts, but a richer set). Bulk up and repeat. Over time, we can go from having a small set of sample data to a much larger and beefier data set, with lots more interesting data points.

Then: Select data sources. There are a number of ways to gather and accumulate data. We can export from user accounts, or we can actively aggregate user data and collect those details (reminds me of the days of NetFlow FlowCollection at Cisco). We need to be clear as to what we are gathering and the data-handling privacy that goes with it. Anonymous data is typically safe; sensitive personally identifiable info requires protocols to gather, most likely scrub, or not touch with a ten-foot pole. Will we be using infrastructure data, app data, usage, account details, etc.? Each area has its unique challenges. Plan accordingly.

Then: Use the right data tools. What are you going to use to store this data? Databases are of course common, but for big-data apps, we need something a little more robust (Hadoop is hip in this area). Where do you store a Hadoop instance? Split it up into smaller chunks (note: splitting it makes it vulnerable, so we need to replicate it. Wow, big data gets bigger :) ). Using map-reduce tools, we can crunch down to a smaller data set for analysis purposes (I'm going to take Seth's word for it, as Hadoop is not one of my strong suits, but I appreciated the 60-second guided tour :) ). Regardless of the data collection and storage, ultimately that data needs to be viewed, monitored, aggregated, and analyzed. The tools that do that are wide and varied, but the goal is to drill down to the data that matters to you, and to have the ability to interpret what you are seeing.

Then: Get answers to your questions. Ultimately, we hope that we are able to get answers, based on the real data we have gathered, that will help us either support or dispute our hypothesis (back to the scientific method; testing is asking questions and then, based on the answers we receive, considering and proposing more interesting questions). Does our data show us interesting points to focus our attention? Do we know a bit more about user sentiment? Have we figured out where our peak traffic times are? If we have asked these questions, gathered data appropriate for those questions, and focused on aggregating and analyzing it properly, we should be able to say "yes, we have support for our hypothesis" or "no, this data refutes our hypothesis". Of course, that leads to even more questions, which means we go to...

Lather. Rinse. Repeat.

Hmmm, Mark Tomlinson just passed me a note with a statement that says "Computer Aided Exploratory Testing"? Hadn't considered it quite that way, but yes, this certainly fits the description. An intriguing prospect, and one I need to play with a bit more :).

-----

Lightning talks! Woo!!! We have four presenters looking to rifle through some quick talks.

Mark Prichard is discussing "Complete Continuous Integration and Testing for Mobile and Web Applications". Mark is with CloudBees, and he's explaining how they do exactly what the title describes. Some interesting ideas surrounding how to use Jenkins and other common tools to make it possible to build multiple releases without having to replicate everything for each environment. Leverage the cloud and Platform-as-a-Service for Continuous Delivery. Key takeaway: "ALM in the cloud will become the rule, not the exception" (quote attributed to Kurt Bittner).

--

Mike Ostenberg from SOASTA is next, and he's talking about "Performance Testing in Production, and What You'll Find There". It begs the question: *WHY* do we want to do performance testing in production? (Isn't that what we call a "customer freak out"? Well, yeah, but that's an after-effect, and we really want to not go there ;) ). Real systems, real load, real profiling. There are ways we can simulate load on a test environment, but it's not really going to match what happens in the real space. Additionally, we want to do our load testing earlier than we traditionally do it. At the end of the cycle, we're a little too far gone to actually pivot based on what we learn.

Load testing in production, Mike points out, can be done in stages and on different levels. Just as we use unit tests for components, integration tests for bigger systems, and feature/acceptance tests to tie it all together, we can deconstruct load tests to match a similar paradigm. Earlier load tests deal with errors, page loads, garbage collection, data management, etc. Regardless of the stage, there are some critical things to look at.

Bandwidth is #1: can everyone reach what they need? Load balancing, or making sure everyone pulls their weight, is also high priority. Application issues: there's no such thing as perfect code, and earlier tests can shake out the system to help show inefficient code, sync issues, etc. Database performance fits under application issues, but it's a special set of test cases. The database, as Mike points out, is the core of performance. Locking and contention, index issues, memory management, connection management, etc. all come into play. Architecture is imperative; think of matching the right engine to the appropriate car. Connectivity comes into play as well: latency, lack of redundancy, firewall capacity, DNS, etc. Configuration means we need to examine our custom settings and confirm they actually do what we intend. Shared environments... watch out for those noisy neighbors :). Random stuff comes into play when things are shared in the real world. Pay attention to what they can do for you (or to you ;) ).

I like this staggered approach, it makes the idea of "testing in production" not seem so overwhelming.

--

Now on deck is Dori Exterman, talking about "Reducing the Build-Test-Deploy Cycle from Hours to Minutes at Cellebrite". Hmmm, color me mildly skeptical, but OK, tell me more :). I'm very familiar with the idea of a serial build-test-deploy cycle, and I know that it does not bode well. Multi-core systems can certainly help with this, and leveraging multi-core environments can allow us to run a much tighter build-test-deploy pipeline. Parallel processing speeds things up, but there's a system limit, and those system limits are also very costly at the higher end.

So what's the option when we max out the cores on a single system? It seems that going parallel across more servers would make sense. Rather than one machine with 32 cores, how about 8 machines with four cores? Same number of cores, maybe similar throughput gains (and potentially better, since system resources are shared over multiple machines). This approach is referred to as a CI cluster farm. Cool, but we're still in a similar ballpark. Can we do better? Dori says yes, and his answer is to use distributed computing within your own network of machines. If I'm hearing this correctly, it's kind of like the idea of letting your machine be used for "protein folding" experiments while it is in more idle states (anyone else remember signing up to do stuff like that? :) ). I'm not sure that's exactly what Dori means, but it seems this could be really viable, and we already have an example of it happening (i.e., "signing up for protein folding").

How wild would it be to be able to wire up your entire network, everyone's machines, so that they can help speed up the build process? It's a fascinating model. I'd be curious to see if this really comes to fruition.

--

We had another Lightning talk added that came from a Birds of a Feather session about CI/CD, so this is a bit of a surprise. The idea was to see how we could leverage pipelines (mini-builds that run in sequence and individually). Mini-builds also help us build individual components, with a goal of integrating the elements later on. Often, all we want is a yes/no to see if the change is good or not (gated check-ins).

This blends into Dori's talk on distributed computing and utilizing down time to make an almost unlimitedly parallel build engine. So this is interesting, but what's management going to say about all of this? Well, what is it costing us not to do this? Are we losing time, and in effect losing money, in the process? Will this help us fix some of our technical debt? If so, it may well be worth considering. If it adds more technical debt, it's a much less likely sell.

Another point is that good CI infrastructure will bubble up issues in design and architecture of both the process and the application. Innovation and motivation will potentially increase when changes can be made more frequently, and subsequently, more atomically.

By using information radiators, we can get a clearer sense as to who did what to cause the build to fail. Gadgets (lights, sounds, sensory input) can help make it more apparent and in real time. Not sure if this would be a major plus, but I'm not necessarily the best judge of what developers consider to be fun ;).


-----

The final test track talk, the anchor session, goes to Mark Tomlinson, as he discusses "Roles and Revelations: Embracing and Evolving our Conceptions of Testing". With a title like that, let's just say "you had me at 'hello'" ;).

Mark is a fun guy to listen to (check out his podcast "PerfBytes" to get a feel), and thus, it's fun to hear him do a more narrative talk as opposed to a techy talk. We start out with the idea of what testing is, at least how we look at it historically. We find bugs, we see that we can validate to a spec, we try to reduce costs, and we aim to mitigate risks. Overall, I think if you gave that list to any lay person and said "that's what testing is", they'd probably have little difficulty understanding it. Those definitions are valid, but they're also somewhat limiting. We've seen some interesting milestones over the past 50 years: Debugging, Demonstration, Destruction, Evaluation, and Prevention can all be seen as "eras of testing". Mark points out that there are 10 different schools of testing (Domain, Stress, Specification, Risk, Random/Statistical, Function, Regression, Scenario, User, and Exploratory).

That's all cool... but what if one day everything changed? Well, one would say that over the past 14 years, or since the Agile Manifesto, the Universe did Change... to steal a little from James Burke. We are less likely today to have isolated test groups. We have a lot more alphabet soup when it comes to our titles. I've had lots of titles, lots of combinations, but ultimately all of them could be distilled to a "tester" of some flavor. Some teams have no dedicated testers, or just one dedicated tester. Test Driven Development is an unfortunate term choice, in that what is a design process often gets mistaken for "testing" (nope, it's not. It's checking for correctness, but it is not testing). Our time to be interactive and effective is happening earlier, and I love this fact.

Continuous Integration, Continuous Deployment, Continuous Delivery, and even Continuous Testing have entered the vernacular. What does this mean? It's all about trying to automate as many of the steps as humanly possible. Build-Check-Deploy-Monitor-Repeat. Conceive of a time and place where we go from end to end without a person involved, just machines. Sounds great, huh? In some ways, it's awesome, but there's an unfortunate side effect, in that many processes are billed as testing that are not. Checking is what automation does. It's great for a lot of things, but it can't really think. Testing, real testing, requires thinking and judgment. There's been a devaluing of testing in some organizations, where just doing testing is considered a liability. Unless we are all coding toolsmiths, we are of a lesser order... and that's bunk!!!

Ultimately, testing is a cost... seriously. Testing does not make money. Testing is a cost center. It's an important cost center, but it is a cost. Think of health insurance. It is not an investment. It's a cost you have to pay... but when you crash a car or break a leg, the insurance kicks in, and I'll bet you're happy when you have it (and really frustrated if you don't). That's what testing is. It's insurance. It's a hedge. It's a cost to prevent calamity. With all of the changes going on, we need to be clear about what we are and what we provide.

What we generate, and what real value we provide, is feedback and information. We are not critics. We are not nay-sayers; we are honest (we hope) reporters of the state of reality, or at least as close to it as we can be. The really valuable things that we can provide are not automatable. Yes, I dared to say that :). Computers can evaluate variable values, and they can confirm or deny state changes, but they cannot really think, and they cannot make an informed judgment call. They can only do what we as people tell them to.

Change is constant, and we will see more change as we continue. Testers need to be open to change, and realize that, while there is always value that we provide, the way we provide that value, and the mechanisms and institutions that surround it, will evolve. If we do not evolve with them, we will be left behind.

Mark emphasizes that software testers are "Facilitators of Quality". Testing is not just limited to dedicated testers; it's dispersing. Therefore, we need to emphasize where we can be effective, and that may mean going in totally different directions. Testing provides diversity, if we are willing to have it be a diversifying role. Think of new techniques, expand the way that we can ask questions, learn more about the infrastructure, and figure out ways that we can keep asking questions. The day we stop asking questions is the day testing dies, for real.

Testing can actually accelerate development. I believe this, and have seen it happen in my own experience. This is where paired developer-tester arrangements can be great. Think of the programmer being the pilot, and the tester being the navigator. Yes, if all we ask is "are we there yet?", we don't offer much, but if we watch the terrain, and ask whether some routes we've mapped may be better or worse for the time we want to arrive, now we're adding value, and in some ways, we can help fix issues before they've even been committed. Testers provoke reactions. Not to be jerks, but to get people to think and consider what they really should be doing. Do you think you can't do that? If so, why? Give it a try. You may surprise yourself (and maybe a few programmers) with how much you deliver. In short, be the Devil's Advocate as often as possible, and be prepared to embrace the devils you don't know ;).

Consider that every tester is an analyst. It may be formal or informal, but we all are, deep down. We can research quality efforts, we can drill down into data and see patterns and trends, we can spot efficiencies to add to our repertoire, and we can adapt, adapt, adapt!

-----

Sorry for the delay on this last bit, but it comes after a rather meta post-presentation call with Mark Tomlinson (we did a conference call about how to do podcasts, and in the process, recorded the session... so yeah, we made a podcast about how to do podcasts as an artifact of a meeting about how to do podcasts). Main takeaway: it's fun, but there's more to doing them than many people consider. We just hope we didn't scare everyone off after we were done (LOL!). After that, all of the speakers descended upon Tango Restaurant and had a fabulous dinner courtesy of the ALM Forum organizing staff. Great conversation with Scott Ambler, Curtis Stuehrenberg, Peter Varhol, and Seth Eliot, as well as several others. The nerd brain power in that small room was probably off the charts, and I was honored to have been included in this event. Seattle, thank you for a very busy and truly enjoyable week. For those who have been keeping track of this rather long missive, my thanks to you, too. To everyone who came to my talk and tweeted or retweeted my comments, and who commented back to me about my talk and gave me your impressions: feedback is a gift, and I've received many gifts today. Truly, thank you so much.

With this, I must return to reality and back to San Francisco early this morning. I've enjoyed our time together, and I hope that, in some small way, this meandering three days of live blogging has given you a flavor of the event and what I've learned these past few days. Let's do it all again some time :)!!!

A San Franciscan in Seattle: #ALMForum Day Two Reflections

Wednesday, April 02, 2014 21:47

Last night, Adam Yuret invited me out to see what the wild world of Seattle Lean Coffee is all about. Having heard from a number of the people who have participated in these events, I decided I wanted to play as well, so my morning was centered around Lean Coffee and meeting a great group of Seattleites and their various roles and areas of expertise.

We covered some interesting topics including the use of Pomodoro and how to make the best use of it (I added the Procrastination Dash to the mix of discussions), the use of SenseMaker and whether or not the adherence to it as a paradigm bordered on religion (it's a framework for helping realize and see results, but it's not magic), some talk about the challenges of defining what technical testing really means (yes, I introduced that topic ;) ), sharing some thoughts on what defines a WIP limit for an organization, and some thoughts about "Motivation 3.0" (based on Daniel Pink's book "Drive").


Great discussions, lots of interesting insights, and an appreciation for the fact that, over time, we see the topics change from being technical to being more humanistic. The humanistic questions are really the more interesting ones, in my estimation. Again, my thanks to Adam and the rest of the Seattle Lean Coffee group for having me attend with them today.

-----

Cloud Testing in the Mainstream is a panel discussion with Steve Winter, Ashwin Kothari, Mark Tomlinson, and Nick Richardson. The discussion ranged across a variety of topics, starting with what drove these organizations to start doing cloud-based solutions (and therefore, cloud-based testing), and how they have to focus on more than just the application in their own little environment, including how much they need to be aware of in-between hops to make their application work in the cloud (and how it works in the cloud). As an example, latency becomes a very real challenge, and tests that work in a dedicated lab environment will potentially fail in a cloud environment, mainly because of the distance and time necessary to complete the configuration and setup steps for tests.

Additional technical hurdles have come from getting into the idea of continuous integration and needing to test code in production, as well as to push to production regularly. Steve works with FIS Mobile, which caters to banking and financial clients. Talk about clients resistant to the idea of continuous deployment! Still, certain aspects are indeed able to be managed and tested in this way, or at least a conversation is happening where it wasn't before.

Performance testing now takes on additional significance in the cloud, since the environment has aspects that are not as easily controlled (read: gamed) as they would be if the environment were entirely contained in their own isolated lab.

Nike was an organization that went through a time when they didn't have the information they needed to make a decision. In-house lab infrastructure was proving to be a limitation, since it couldn't cover the aspects of their production environment or give a real example of how the system would work on the open web. Once Ops was able to demonstrate some understanding through monitoring of services in the cloud, that helped the QA team decide to collaborate, to learn how to leverage the cloud for testing, and to see how leveraging the cloud made for a different dialect of testing, so to speak.

A question that came up was whether cloud testing is only for production testing, and of course the answer is "no", but it does open up a conversation about how "testing in production" can be performed intentionally and purposefully, rather than being something to be terrified about and say "oh man, we're testing in PRODUCTION?!" Of course, not every testing scenario makes sense to run in production (many would be just plain insane), but there are times when it makes a lot of sense to do certain tests there (a live-site performance profile, monitoring of a deployment, etc.).

Overall an interesting discussion and some worthwhile pros and cons as to why it makes sense to test in the cloud. Having made this switch recently, I really appreciate the flexibility and the value that it provides, so you'll hear very few complaints from me :).

-----
Mike Brittain is talking about Principles and Practices of Continuous Deployment, and his experiences at Etsy. Companies that are small can spin up quickly, and can outmaneuver larger companies. Larger companies need to innovate or die. There are scaling hurdles that need to be overcome, and they are not going to be solved overnight. There also needs to be a quick recovery time in the event something goes wrong. Quality is not just about testing before release; it also includes adaptability and response time. Even though the ideas of Continuous Deployment are meant to handle small releases frequently performed, there still needs to be a fair amount of talent on the engineering team to handle that. The core idea behind being successful with Continuous Deployment is "rapid experimentation".

Continuous Delivery and Continuous Deployment share a number of principles. First is to keep the build green: no failed tests. Second is to have a "one button" option: push the button, and all deployment steps are performed. Continuous Deployment breaks a bit with Continuous Delivery in that every passing build is deployed to production, whereas Continuous Delivery means the feature is delivered when there's a business need. Most of the builds deploy "dark changes", meaning code is pushed, but little to no change is visible to the end user (think CSS rules, unreferenced code, back end changes, etc.). A check-in triggers a test run. If that's clean, it triggers automated acceptance tests. If those pass, that triggers user acceptance tests. If that's green, the release is pushed. At any point, if a step goes red, it flags the issue and stops the deploy train.
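To make that flow concrete, the gating logic boils down to something like the toy sketch below (written in PHP since that's my language of the moment, with a stubbed-out stage runner; this is the gist, not Etsy's actual tooling):

    <?php
    // Toy deploy gate: each stage must come back green before the next runs.
    // runStage() is a stand-in for whatever really executes each stage.
    function runStage($stage) {
        echo "Running {$stage}...\n";
        return true; // pretend the stage passed
    }

    $stages = array('unit tests', 'automated acceptance tests',
                    'user acceptance tests', 'deploy to production');

    foreach ($stages as $stage) {
        if (!runStage($stage)) {
            echo "RED at {$stage}: stop the deploy train.\n";
            exit(1);
        }
    }
    echo "All green: release pushed.\n";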

Going from one environment to another can bring unexpected changes. How many times have you heard "what do you mean it's not working in production? I tested that before we released!"? Well, that's not entirely surprising, since our test environment is not our production environment. The question, of course, is: where's the bug? Is it in the check-ins? Are we missing unit tests? Are we missing automated UA tests (or manual UA tests)? Do we have a clear way of being notified if something goes wrong? What does a rollback process look like? All of these are still issues, even in Continuous Deployment environments. One avenue Etsy has provided to help smooth this transition is a setup that does pre-production validation. Smoke tests, integration tests, functional and UA tests are performed with hooks into some production environment resources, and active monitoring is performed. All of this without having to commit the entire release to production, or doing so in stages.

Mike made the point that Etsy pushes approximately 50,000 lines of code each month. With a single big release, there are a lot of chances for bugs to cluster in that one release. By making many releases over the course of days, weeks or months, the odds of a cluster of bugs appearing are minimal. Instead, the bugs that do appear are isolated and considered within their release window, and their fixes likewise tightly mirror their releases.

This is an interesting model. My company is not quite to the point where we can do what they are describing, but I realized we are also not way out of the ballpark to consider it. It allows organizations to iterate rapidly, and also to fix problems rapidly (potentially, if there is enough risk tolerance built into the system). Lots to ponder ;).

-----
Peter Varhol is covering one of my favorite topics, which is bias in testing (specifically, cognitive bias). Peter started his talk by correlating the book "Moneyball" to testing: often, the stereotypical best "hitter/pitcher/runner/fielder/player" does not necessarily correlate to winning games. By overcoming the "bias" that many of the talent scouts had, Billy Beane was able to build a consistently solid team by going beyond the expectations.

There's a fair amount of bias in testing. That bias can contribute to missing bugs, or testers not seeing bugs, for a variety of reasons. Many of the easy to fix items (missing test cases, missing automated checks, missing requirement parameters) can be added and covered in the future. The more difficult one is our own bias as to what we see. Our brains are great with ambiguity. They love to fill in the blanks and smooth out rough patches. Even when we have a "great eye for detail", we can often plaster over and smooth out our own experience, without even knowing it.

Missed bugs are errors in judgment. We make a judgment call, and sometimes we get it wrong, especially when we tend to think fast. When we slow down our thinking, we tend to see things we wouldn't otherwise see. Case in point: if I just read through my blog to proofread the text, it's a good bet I will miss half a dozen things, because my brain is more than happy to gloss over and smooth out typos; I get what I mean, so it's good enough... well, no, not really, since I want to publish and have a clean and error-free output.

Contrast that with physically reading out, and vocalizing, the text in my blog as though I am speaking it to an audience. This act alone has helped me find a large number of typos that I would otherwise totally miss. The reason? I have to slow down my thinking, and that slowdown helps me recognize issues I would have glossed over completely (this is the premise of Daniel Kahneman's "Thinking, Fast and Slow"). To keep with the Kahneman nomenclature, we'll use System 1 for fast thinking and System 2 for slow thinking.

One key thing to remember is that System 1 and System 2 may not be compatible, and they may even be in conflict. It's important to know when we might need to dial in one thought approach or the other. Our biases could be personal. They could be interactional. They could be historical. They may be right a vast majority of the time, and when they are, we can get lazy. We know what's coming, so we expect it to come. When it doesn't, we are either caught off guard, or we don't notice it at all. "Representative Bias" is a more formal way of saying this.

When we are "experts" in a particular aspect, we can have that expertise work against us as well. We may fail to look at things from another perspective, perhaps that of a new user. This is called "The Curse of Knowledge".

"Congruence Bias" is where we plan tests based on a particular hypothesis, without considering alternative hypotheses. If we think something should work, we will work on ways to support the idea that the system works, instead of looking at areas where the hypothesis might be proven false.

"Confirmation Bias" is what happens when we search for information or feedback that confirms our initial perceptions.

"The Anchoring Effect" is what happens when we become so convinced of a particular course of action that we become locked into a particular piece of information, or a number, and we miss other possibilities. Numbers can fixate us, and that fixation can cause biases, too.

"Inattentional Blindness" is the classic example where we focus so intently on one piece of information that we miss something right in front of us (not a moonwalking bear, but a gorilla this time ;) ). There are other visual images that expand on this.

The "Blind Spot Bias" comes from when we evaluate our decision making process compared to others. With a few exceptions, we tend to think we make better decisions than others in most areas, especially those we feel we have a particular level of expertise.

Most of the time, when we miss a bug, it's not because we missed a requirement or a test case (not to say that those don't lead to bugs, but they are less common). Instead, it's a subjective parameter: we're not looking at something in a way that could be interpreted as negative or problematic. This is an excellent reminder of just how much we need to be aware of what and where we can be swayed by our own biases, even by this small and limited list. There's lots more :).

-----

More to come, stay tuned.

Live From Seattle, it's #ALMForum: A TESTHEAD Live Blog

Wednesday, April 02, 2014 00:21 AM

Good morning everyone. I'll be coming at you live from Seattle at various times of the day. This is a live blog, and as such, it's going to be stream of consciousness, it may contain mistakes, and it may also have gaps in logical flow. If you want to see the real time feed, an ability to handle ambiguity will help. If you can't handle a touch of ambiguity, wait until later in the day when I get a chance to clean things up a bit ;).

We start out with Scott Ambler (@scottwambler on Twitter) and a discussion of Disciplined Agile Delivery and how to scale Agile practices in larger organizations. Scott made a few points about the fact that Agile is a process with a lot of variations on the theme. Methodologies and methods are all nice, but each organization has to piece together for themselves which of the methods will actually work. Scott has written a book called Disciplined Agile Delivery (DAD), and the acronym is not an accident. Key aspects of DAD are that it is people first, goal driven, a hybrid approach, and learning oriented; it utilizes a full delivery lifecycle, and it tries to emphasize the solution, not just the software. In short, DAD tries to be the parent: it gives a number of "good ideas" and then lets the team grow up with some guidance, rather than an iron hand.

Questions to ask: what are the variety of methods used? What is the big picture? While we can look at a lot of terminology, and we can say that Scrum or agile processes are loose form and just kind of happen, that's not really the case at all.  Solution delivery is complex, and there's a lot of just plain hard reality that takes place. Most of us are not working on the cool new stuff. We're more commonly looking at adding new features or enhanced features to stuff that already exists. Each team will probably have different needs, and each team will probably work in different ways. DAD is OK with that.

Scott thankfully touched on a statement in a keynote that made me want to throw the "devil horns" and yell "right on!": there is no such thing as a best practice; there are good practices in some circumstances, and those same practices could be the kiss of death in another situation. Granted, for those of us who are part of the context-driven testing movement, this is a common refrain. The fact that this is being said at a conference that is not a testing conference per se brought a big smile to my face. The point is, there are many lean and agile options for all aspects of software delivery. The advice we are going to get is going to conflict at times, it's going to fit some places and not others, and again, that's OK.

Disciplined Agile Delivery comes down to asking questions around Inception (How do we start?), Construction (What is the solution we need to provide?), Transition (How do we get the software to our customers?) and Ongoing activities (What do we do throughout all of these processes?).

For years, we used to be individually focused. We all would do our "best practices" and silo ourselves in our disciplines. Agile teams try to break down those silos, and that's a great start, but there's more to it than that. Our teams need to work with other teams, and each team is going to bring its own level of function (and dysfunction). This is where context comes into play, and it's one of the ways that we can get a handle on how to scale our methods. While we like the idea of co-location, the fact is that many teams are distributed. Some teams are partially dispersed, others are totally dispersed (reminds me of Socialtext as it was originally implemented; there was no "home office" in the early days). Teams can range from small (just a few people) to medium (10-30 people) to large (we think 30+ is large; other companies look at anything less than 50 people as a small team). The key point is that there are advantages and disadvantages regarding the size of your team. Architecture may have a full architecture team with representatives in each functional group. Product owners and product managers might also be part of an overarching team where representatives come from smaller groups and teams.

The key point to take away from this is that Agile transformations are not easy. They require work, they take time to put into place, there will be missteps, and there will be variations that don't match what the best practice models represent. The biggest challenge is one of culture, not technology. Tools and scrum meetings are fairly easy. Making these a real part of the flow and life of the business takes time, effort and consistent practice. Don't get too caught up in the tools doing everything for you. They won't. Agile/Scrum is a good starting point, but we need to move beyond this. Disciplined Agile Delivery helps us up our game, and gets us on a firmer footing. Ultimately, if we get these challenges under control with a relatively small team, we can look to pulling this off with a large enterprise. If we can't get the small team stuff working, Agile scaling will be pretty much irrelevant.

My thanks to Scott for a great first talk, and now it's time to get up and see what else ALM forum has to offer.
-----

I'm going to be spending a fair amount of my time in the Changing Face of Testing track. I've already connected with some old friends and partners in crime. Mark Tomlinson and I are probably going to be doing a fair amount of cross commenting, so don't be surprised if you see a fair amount of Mark in my comments ;).

Jeff Sussna is taking the lead for us testers and talking about how QA is changing, and how we need to make a change along with it. We're leaving industrialism (in many ways) and we are embarking on a post-industrial world, where we share not necessarily things, but we share experiences. We are moving from a number of paradigms into new paradigms:

from products to services: locked in mechanisms are giving way to experiences that speak to us individually. The mobile experience is one of the key places to see this. People who have negative experiences don't live with it, they drop the app and find something else.

from silos to infusion: being an information silo used to give a sense of job security. It doesn't any longer. Being able to interact with multiple organizations and to be adaptable is more valuable than being someone who has everything they know under lock and key.

from complicated to complex: complicated is predictable, it's bureaucratic, it's heavy. Complex is fragmented. It's independent, it doesn't necessarily follow the rules, and as such it's harder to control (if control is possible at all).

from efficient to adaptive: efficiency is only efficient when the process is well understood, and the expectations are clearly laid out. Disruption kills this, and efficiency gives way when you can't predict what is going to happen. This is why adaptability is more valuable than just efficiency. Learn how to be adaptive and efficient? Now you've got something ;).

The disruption that we see in our industry is accelerating. Companies that had huge leads and leverage that could take years to erode are eroding much faster. Disruption is not just happening, it's happening in far more places. Think about Cloud computing. Why is it accelerating as a model? Is it because people are really all that interested in spinning up a bunch of Linux instances? No, not really. The real benefit is that we can create solutions (file sharing, resource options, parallel execution) where we couldn't before. We don't necessarily care about the structure of what makes the solution, we care that we can run our tests in parallel in far less time than it would take to run them on a single machine in serial. Dropbox is genius not because it's in the cloud, it's genius because any file I really care about I can get to anywhere, at any time, on any device, and I can do it with very little physical setup and maintenance (changes delivered in an "absorbable manner").


Think of Netflix and their "chaos monkey". They go in and turn instances off. They deliberately break stuff. They want to see what they might be able to find. "I don't always test my code, but when I do, I do it in production". That's supposed to be a joke, but believe it or not, there is a great benefit to testing in production. This is why I am very invested in using my company's product on their production servers, and looking at issues based on workflows I depend upon.

So what does this all mean for testers and testing? Does this mean that our role is being usurped? No, but it does mean our role is changing. Instead of having to babysit machines and be the isolated gatekeeper, we can test more intelligently and with a greater sense of adventure. We can also emphasize that testing goes beyond just performing scripted steps. We can also test more than just the code that we receive, when we receive it. We can test requirements. We can provoke questions. More to the point, we can be a feedback loop to the organization. If an organization believes in being truly adaptive, then it is, effectively, an environment that is friendly to QA.

Mark and I had a little fun considering some of the ramifications as presented, and since Mark said he has some debatable comments he'll be sharing in his talk, I'm going to hold off and not comment until then (stay tuned for further details ;) ).  Suffice it to say, testers are notorious for not necessarily agreeing across the board. That's also part of testing. If we agreed 100%, I'd be deeply worried about the state of our profession.

Testing covers a lot of areas. User testing validates usability. Unit tests can cover code functionality, but there's a lot of space in between those areas that get so much attention. There are lots of "ilities" we need to be paying attention to.

Retros are a good opportunity to see what went well and what can go better, but the technique only works when it's done frequently enough, and when the feedback is substantive.

What we definitely need to get away from is "Discontinuous Quality". Let's stop talking about QA wagging the dog. Let's not save testing until the end, where we find problems and tell people about them, only to be told that we are the bottleneck stopping the organization from releasing. Instead, let's get to the party earlier. Let's check out ideas earlier. Let's understand what we are able to contribute, and in as many places as we can. Ultimately, we are not delivering functionality, we are delivering the ability to help accomplish goals and objectives. How we do it is not nearly as important as the fact that we actually do it, and do it in a way that is both effective and adaptable.

For me, the one most common thing I can think of to help this is the term "QA". I do my best to not use that term at all if I can get away with it. If I'm asked if I'm in QA, I always answer "yes, I'm a software tester". We have to get out of the business of assuring quality, because we really can't do that. We can inform, we can evangelize, we can enlighten, but we really can't assure anything. What we can do is test, and weave a compelling story. Ultimately, the story is the most important thing we can deliver, as it's the narrative that really defines whether a solution goes out or doesn't.

-----

Ken Johnston (@rkjohnston) is talking about EaaSY, or "Everything as a Service, Yes!". Ken wants to help us see what the role of testing actually is. It's not really about quality assurance, but more about risk assessment and management. I agree with this, in the sense that, in the old school environments I used to work in, especially when I worked for a game publisher, when a bug shipped to production, unless it was particularly egregious, it was eternal. In the services world, and the services model, since software is much more pliable, and much more manageable, there's no such thing as a "dated ship". We can update all the time, and with that, problems can be addressed much more quickly. With this model, we are less forced into slotted times. We can fix a bug the same day. We can release a new feature in a week where it used to take a quarter or a year.

EaaSY has a number of parts that need to be in place to be effective.

Componentization: break out as much of the functionality from external dependencies as possible.

Continuous Delivery: requires continuous stability. It needs a targeted set of tests and an atomic level of development, and it works best in areas that can be deployed/fixed with a low number of people being impacted by the change (the more mission critical, the less likely a Continuous Delivery model will be the desired approach; not impossible, but probably not the best focus, IMO).

User Segmentation: When we think of how to deploy to users, we can use a number of methods. We can create concentric rings, with the smallest ring being the most risk tolerant users, and expand out to larger sets of users; the farther out we get, the more risk averse the users. Additionally, we can use tools like A/B testing to see how two groups of people react to a change structured one way or another (structure A vs. structure B). This is a way to put a change into production, but have only a small group of people see it and react to it.
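For what it's worth, the concentric ring idea doesn't require exotic tooling. Hypothetically, hashing each user into a stable bucket and widening the eligible range does the trick (a minimal PHP sketch; the ring sizes and user id are invented):

    <?php
    // Hash a user id into a stable bucket from 0-99.
    function rolloutBucket($userId) {
        return abs(crc32((string)$userId)) % 100;
    }

    // Hypothetical rings: 1% innermost (most risk tolerant), then 10%, then all.
    $ringPercent = 10; // the change is currently live for the 10% ring

    $userId = 'user-42';
    if (rolloutBucket($userId) < $ringPercent) {
        echo "show the new experience\n";      // structure B, in A/B terms
    } else {
        echo "show the existing experience\n"; // structure A
    }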

Runtime Flags: Layers can be updated independently. We can fork traffic through the production path, and at key areas, data can be forked and routed through a different setup, then reconvene with the production flow (this is pretty cool, actually :) ). Additionally, code can be pushed, but it can be "pushed dark", meaning it is put in place but turned on at a later time.
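The "pushed dark" part is really just a conditional around the new code path. Something like this toy sketch (the flag name is invented for illustration):

    <?php
    // The new code ships with the release, but stays dark until the flag
    // is flipped (in a config file, database, admin tool, etc.).
    $flags = array('new_checkout_path' => false);

    function isEnabled($name, $flags) {
        return !empty($flags[$name]);
    }

    if (isEnabled('new_checkout_path', $flags)) {
        echo "route through the new checkout code\n";
    } else {
        echo "route through the existing checkout code\n";
    }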

Big Data: Five "Vs" (Volume, Variety, Velocity, Verification, Value). These need to be considered for any data driven project. The better each of these is handled, the more likely we are to be successful in utilizing big data solutions.

Minimum Viable Product: Ken called out Seth Eliot's "Big Up Front Testing" (BUFT) and said "say no to BUFT". With a minimum viable product, we need to scale our testing to the point where we have an MVP and testing appropriate for the scale of the MVP. Additionally, there are options where we can test in production (not at full scale, of course).

Overall, this was a very interesting approach and set of ideas. Many of the ideas and approaches described sound very similar to activities we are already doing at Socialtext, but it also shows me areas where we can do better.

-----

James Whittaker (@docjamesw) is doing the next plenary session, called "A Future Worth Wanting". First we start with our own devices, our own apps; we own them, they're ours, but they aren't particularly useful if they don't connect to a data source somewhere (call it the web and the cloud for simplicity). James is making the point that there's a fair amount of stuff in between that we are not including. The web browser is one of these middle point items. The app store is another. We know what to do and how to do it, and we don't give it much thought. Could we be doing better?

Imagine getting an email, then having to research where an event is, how much tickets cost, and how to handle the transaction. James's suggestion is to treat those pieces of information as "entities", so that we can use those entities to find information and perform transactions based on them. Frankly, this would be cool :).

What about our calendar? We are planning to do something, some kind of activity that we need to be time focused for. What do we naturally do? We jump to a browser and go figure out what we need. What if our calendar could use those entity relationships and do the search for us, or better yet, return what has already been searched for based on the calendar parameters? Think of writing code: wouldn't it be cool to find a library that could expand on what you are doing, or do what you are hoping to do?

The idea here is to be able to track "entities" to "intents", and execute those intents. Think about being able to call up a fact checking app in PowerPoint, and based on what you type, you get a return specific to your text entry. Again, very interesting. The key takeaway is that our apps, our tools, our information needs are getting tailored to exactly the data we want, from the section of the web or cloud that we actually need.

This isn't a new concept, really. This is the concept of "agents" that's been talked about for almost two decades. The goal we want is to be able to have our devices, our apps, our services, etc, be able to communicate with each other and tell us what we need to know when we need to know it. It's always been seen as a bit of a pipe dream, but every week it seems like we are getting to see and know more examples that make that pipe dream look a little less far fetched.

Goals we want to aim for:

- Stop losing the stuff we've already found
- Localize the data and localize the monetization
- Apps can understand intent, and if they don't, they should. Wouldn't it be great if, based on a search or goal, we could download the appropriate apps directly?
- Make it about me, not my device


Overall, these are all cool ideas, and yes, these are ideas I can get behind (a bit less branding, but I like the sentiment ;) ).

-----

Alexander Podelko (@apodelko) wants us to see a "Bigger Picture" when it comes to load testing. There's a lot of terminology that goes into load testing, and the terms are often used interchangeably, but not always accurately. The most common image we have of load testing (and yes, I've lived this personally) is the last minute before deployment: we put some synthetic tests together in our lab, try to run a bunch of connections, see what happens, call it a day and push to production. As you might guess, hilarity ensues.

The problem with this is not just the lateness, or the inability to really match our environment, but that we miss a lot of stuff. There are a lot of options for load testing that can give us a broader picture (as the talk suggests). Another issue is that each load testing tool has limitations to what it can cover, and the tools vary in how much robustness they can provide (as you might guess, JMeter does not solve every load testing problem... I know, contain your shock and dismay ;) ).

As Alexander points out quite appropriately, web sites were simple for a very brief window of time. They are expanding to be more complex and less controllable through standard and simple tools that would cover everything in one place. There are a variety of tools that can be used, ranging from open source to commercial. The more complicated the system, the less likely one tool will be able to answer the needs.

Overall, load testing looks to have some of the broadest challenges for the systems that are meant to be tested, at least if we want to create load that is not completely synthetic and generally meaningless. Making load tests that are complex, heterogeneous, and indicative of real world traffic is possible, but the more unique and real world the traffic you wish to emulate, the more difficult it is to actually provide that simulated traffic.


-----

In the mid afternoon, they held a number of Birds of a Feather sessions to provide some more interactive conversations, and one of them was specifically about how to use Git. Granted, I'm pretty familiar with Git, but I always appreciate seeing how people use it, and seeing different ways to use it that I may not have considered.

One of the tools they used for the demonstration was "Learn Git Branching", which displays a graphical representation of a variety of commits, and shows what commands actually do when they are run (git commit, git merge, git rebase, etc.).

-----

The last session of the day is being delivered courtesy of Allan Wagner, and the focus is on continuous testing, or why we would want to consider doing continuous testing. Labor costs are getting higher, even with outsourcing options considered; test lab complexity is increasing; and the amount of testing required keeps growing and growing. OK, so let's suppose that Continuous Testing is the approach you want to go with (I hope it's not the only approach, but cool, I can go with it for this paradigm). Where do you start?

For testers to be able to do continuous testing, they need:

- production-like test environments (realistic and complete)
- automated tests that can run unattended
- orchestration from build to production which is reliable, repeatable and traceable

One very good question to ask is "how much time do you spend doing repetitive set up and tear down of your test environments?" In my own environment, we have gotten considerably better in this area, but we do still spend a fair amount of time setting up our test environments. I'm not entirely sure that, even with service virtualization, there would be a tremendous time saving for doing spot visual testing. While I do feel that having automated tests is important, I do not buy into the idea that automated-only testing is a good idea. It certainly is a big plus and a necessary methodology for unit tests, but beyond that, trying to automate all of the tests seems to fall under the law of diminishing returns. I don't think that's what Allan is suggesting, but I'm adding my commentary just the same ;).

Service virtualization looks to create, as its name describes, the ability to make elements that are unavailable available for testing. It relies on mocks and stubs, where you can simulate the transactions rather than try to configure big data hardware or front end components that don't yet exist for our applications.

Virtual components need to fit certain parameters. They need to be simple, non-deterministic, data-driven, using a stateful data model, and they need functionality where we can easily determine their behavioral aspects.

The key idea is that, as development continues, the virtual components will be replaced with the real components and start looking at additional pieces of later functionality.  In other examples, the virtualized components may be those that simulate a third party service that would be too expensive to have part of the environment as a regular part of the development process.
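As a concrete (if contrived) illustration, a stub for a pricey third party service can be as small as a single PHP script that returns canned, predictable data. The endpoint and payload here are invented, not from Allan's talk:

    <?php
    // stub_rates.php: stands in for a third party exchange rate service that
    // is too expensive (or not yet built) to hit during routine test runs.
    // The real service gets swapped back in for later-stage testing.
    header('Content-Type: application/json');

    $canned = array('USD' => array('EUR' => 0.72, 'GBP' => 0.60));

    $base = isset($_GET['base']) ? $_GET['base'] : 'USD';

    if (isset($canned[$base])) {
        echo json_encode(array('base' => $base, 'rates' => $canned[$base]));
    } else {
        http_response_code(404);
        echo json_encode(array('error' => 'unknown base currency'));
    }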

Allan made the point in his talk that Continuous Testing is not intended to be the be-all and end-all of your testing; it is meant to be a way to perform automated testing as early and as focused as possible, so that the drudge work of set-up, tear-down, configuration changes and all of the other time consuming steps can be automated as much as possible. This is meant to free the thinking testers to do the work that really matters, which is to perform exploratory testing and let the tester genuinely think. That's always a positive outcome :).

-----

From here, it's a reception, some drinks, and some milling about, not to mention dinner and chilling with the attendees. I'll call this a day at this point and let you all take a break from these updates, at least for today. Tomorrow, I'm going to combine two events: I'll be taking part in SEALEAN (a Lean Coffee event) and then picking up with the ALM Forum conference again after that. Have a good night, testing friends, see you tomorrow morning :).

End of Entry: 04/01/2014: 05:20 p.m. PDT

TECHNICAL TESTER FRIDAY: And After Awhile... You Can Work on Points for Style

Saturday, March 29, 2014 01:44 AM

Last week was a bit of a whirlwind. I took my son out for a quarter cross country trip to check out a University he may be going to. Needless to say, that took a few days out of my reality. Between being in airplanes, rental cars and dealing with less than stellar rural WiFi in spots, let's just say I'm a week behind where I should be. On the bright side, I do have some stuff to show, and a little bit of extending on the theme of PHP being a site driver.

One of the interesting paradigms that shifted for me with some of the abilities that PHP provides is that I now look at pages differently. I used to use simple tools like SeaMonkey or some other cheap WYSIWYG editing tool, chopping out anything that would be overhead, and making some simple frame-ups that I could use to quickly drop in text, make changes, etc. It was intensely manual, but hey, I was rolling my own, so I was OK with that.

Then PHP came along and messed up everything, and bless it for doing exactly that :).

The last time we were together I started looking at ways that I could clean up the pages, put some basic style ideas together, and make a site that would be less of a pain to maintain. Did I succeed? In some ways yes, but in others, I’ve traded one series of challenges for another. Some things are much easier with PHP, and some things allow you a wide latitude to do things that are hard to track down later. The key point, though, is that we can emphasize interactivity and automation for the pages rather than brute force updating and modifications. 



I've taken some time and cleared the game on the HTML and CSS module that Codecademy offers, and I have to say it's actually a pretty good synopsis. The HTML is pretty basic, but it made for a nice layering to teach the ideas behind CSS and where to use them. There are a lot of options that allow for fine control of the pages, but I think the best part is the rapid templating it can give a site designer.

Here's my current index.php file, which is what I now use as my base template for all pages of the proposed "Youth Orchestra" site:
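In simplified form (the include file names here are illustrative, but the shape is the same), it's just a handful of includes wrapped around a content area:

    <?php include 'header.php'; ?>
    <div id="wrapper">
      <?php include 'navbar.php'; ?>
      <div id="sidebar"><?php include 'sidebar.php'; ?></div>
      <div id="content"><?php include 'content.php'; ?></div>
    </div>
    <?php include 'footer.php'; ?>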


Yep, that's it. Pretty much every page in the site now has this as its basic template. What does the main site look like now?


Yeah, I know, I've moved from 1995 to about 2002 ;). Again, this was so that I could focus on using a few key attributes and not drown in them. The main idea I wanted to focus on was how to create a fundamental "box model" that would be easy to use as a base for all of the pages, regardless of what they were to display.

Each of the main sections of the page is spelled out as a div, each with a unique class or ID. We have the header, a nav bar, a left side bar, a right main content area, and a footer at the bottom. Each of those areas is defined (more or less) by the placement of the individual div elements.

The style page is not terribly expansive, but it gives a little better view than it did before. Also, there's a whole bunch of things that we can still do with these pages, and I am not even close to done. Added to that, there's still no JavaScript in any of these pages. I'm aiming to boost that next week, when I start getting more aggressive with forms and media displays.

Here's a look at the CSS file as it currently exists:



Again, these are pretty basic commands. Not a lot of fancy footwork is going on in here just yet, but that's a good thing. My goal is to look at where I can decouple dense HTML and atomize the items, perhaps into small PHP pages, or perhaps make it purely database driven (or as much as possible).

Also, while I am starting to use HTML5 conventions for the base HTML structure, I'm using the standard div structure and giving each div a unique name or attributes, so that I can, again, simplify what each page needs and what I need to maintain. At the moment, the separate PHP files are just echo statements wrapping raw HTML. It's a step up on the maintainability scale, but part of me hears Paul Stanley from his "100,000 Years" banter on KISS Alive saying "Now come on! I KNOW you can do better than that!!" On the bright side, it gives me something to look forward to and tweak a bit more.

For those who want to follow the ideas from this session, my recommendation is to work through the examples on Codecademy, practice all of the examples, and as you perform each step, grab a snippet here and there and use the ideas to build your CSS file. Try to arrange the items as much as possible from the most expansive to the most localized, and if possible, try to keep elements that have an impact or relation to each other close together.

At some point, if the site gets sufficiently complex, that will become more difficult, but seeing each of the CSS elements, classes, pseudo-classes and options in place, it really becomes easier to separate the structure from the style points. I still have a fair amount of decoupling to do, specifically around the use of tables, and I also need to look at more granular positioning and locking of elements in place (almost there, but there's still a little fudge factor going on with the divs). On the bright side, we're moving forward and making progress, and that's what matters :).






Congratulations to Packt for 2000 Titles

Tuesday, March 25, 2014 12:23 PM

As many of you know, one of the things I put a lot of focus on in this blog is books and book reviews. Several publishers have been very generous to me and have given me access to a variety of titles to read, apply and review. Packt Publishing is one of those companies, and as a celebration of the fact that they have released their 2000th title, I want to help them celebrate and encourage those who like their titles to take advantage of their current offer.

I feel bad that I couldn't do this sooner, as I've been out of town the past few days (another post is coming on that point, don't worry), but I still want to get the word out about the Packt 2000 title celebration. There's only a limited amount of time to take advantage of it, though.

So what is this all about?

It's simple. Until the end of the day today, if you buy any Packt Publishing eBook title, you will get another Packt Publishing eBook for FREE.

Click on either of the links above and you can take advantage of this opportunity, but don't delay, as it ends today (Mar. 25, 2014).

Again, I tend to not use this site for advertising purposes, but I appreciate the fact that Packt has provided a lot of titles to me over the past few years to review, and I would like to return the favor. If you appreciate open source software books, and want to support a company that produces many solid titles, then head over to Packt and get your BOGO on :).

Retro Book Review: Connections

Tuesday, March 18, 2014 23:01 PM

History has the tendency of being seen as static and frozen when we view it from a later time. What happened is what happened, and nothing else could have happened because, again, at that point, it is set in stone. Once upon a time, however, history could have gone any number of ways, and much of the time, it's the act of change and transition that helps drive history through various eras.

James Burke is one of my favorite historical authors, and I am a big fan of his ideas behind "connected thought and events", which make the case that history is not a series of isolated events, but that events and discoveries coming from previous generations (and even eras) can give rise to new ideas and modes of thinking. In other words, change doesn't happen in a vacuum, or in the mind of a single solitary genius. Instead it's the actions and follow-on achievements by a variety of people throughout history that make certain changes in our world possible (from the weaving of silk to the personal computer, or the stirrup to the atomic bomb).

"Connections" is the companion book to the classic BBC series first filmed in the late 70s, with additional series being created up into the 1990s. If you haven't already seen the Connections series of programs, please do; they are highly entertaining and engaging (ETA: the first series, aired in 1978, is the best of the three). The original print edition of this book had been out of print for some time, but I was overjoyed to discover that there is a current, and updated, paperback version (as well as a Kindle edition) of this book. The Kindle version is the one I am basing this review on.

The subtitle of the book and series is "An Alternative View of Change". Rather than serendipitous forces coming together and "eureka" moments of discovery happening, Burke makes the case that, just as today, invention often happens as a market force determines the benefit and necessity of that invention, with adoption and use stemming from both the practical and cultural needs of the community. From there, refinements and other markets often determine how ideas from one area can impact the development of other areas. Disparate examples like finance, accounting, cartography, metallurgy, mechanics, water power and automation are not separate disciplines; they rely heavily on each other and on the inter-connectedness of these disciplines over time.

The book starts with an explanation of the Northeastern Blackout of 1965, as a way to draw attention to the fact that we live in a remarkably interdependent world today. We are not only the beneficiaries of technology's gifts; in many ways, we are also at the mercy of them. Technology is wonderful, until it breaks down. At that point, many of the systems that we rely heavily on, when they stop working, can make our lives not just sub-optimal, but dangerous.

Connections uses examples stretching all the way back to Roman times and the ensuing "Dark Ages". Burke contends that they were never "really dark", and makes the case of communication being enabled through Bishop-to-Bishop post to show that many of the institutions defined in Roman times continued on unabated. Life did become much more local when the over-seeing and overarching power of a huge government state ended. The pace of change and the needs for change were not so paramount on this local scale, and thus, many of the engineering marvels of the Roman Empire were not so much "lost" (aqueducts and large scale paved roads) as they just weren't needed on the scale that the Romans used them. Still, even in the localized world of the early Middle Ages, change happened, and changes in one area often led to changes in other areas.

Bottom Line:

This program changed the way I look at the world, and taught me to look at the causal movers as more than just single moments, or single people, but as a continuum that allows ideas to be connected to other ideas. Is Burke's premise a certainty? No, but he makes a very compelling case, and the connections from one era to another are certainly both credible and reasonable. There is a lot of detail thrown at the reader, and many of those details may seem tangential, but he always manages to come back and show how some arcane development in an isolated location, perhaps centuries ago, came to be a key component in our technologically advanced lives, and how it played a part in our current subordination to technology today. Regardless of the facts, figures and pictures (and there are indeed a lot of them), Connections is a wonderful ride. If you are as much of a fan of history as I am, then pretty much anything James Burke has written will prove to be worthwhile. Connections is his grand thesis, and it's the concept that is most directly tied to him. This book shows very clearly why that is.

Retro Book Review: The Manga Guide to Databases

Monday, March 17, 2014 16:48 PM

Since I’m in the process of doing Noah Sussman’s Technical Tester Challenge, and part of that involves setting up and maintaining a server with an actively running database, I figured now was as good a time as any to take a look at Mana Takahashi and Shoko Azuma's "The Manga Guide to Databases” as a refresher and, maybe, teach me a few new things.


I'm already a fan of "The Manga Guide to" series, so I figured that their take on databases would be in the same vein as their other titles (an accompanying storyline, an emphasis on practical topic coverage, and an emphasis on "kawaii"). To meet that end, we are introduced to Princess Raruna, heir apparent to the Kingdom of Kod. We also meet her attendant, Cain, and a fairy named Tico who teaches them about databases... and anyone familiar with Manga has not batted an eye at that kind of description (and sure, if you looked at the cover, you could probably have figured that out as well ;) ). For those not already familiar with Manga and its tropes, this might seem a bit strange, but go with it. Seriously.


I can see some of you already thinking “OK, sure, I can see teaching about stars, or maybe even math using Manga, but databases? That’s a bit of a stretch, isn’t it?" Well, let’s take a closer look.


Chapter One introduces us to the Kingdom of Kod's main export... fruit (yeah, you were thinking fish. Everyone thinks fish, but no, it's fruit). Through the bureaucratic and messy system that they have in place, the case is made for why a database is important in the first place: to reduce errors, keep track of important data, and make sure that data isn't duplicated inappropriately or left un-updated when and where it needs to be. This chapter also sets the stage with the scenarios and back story that help define how the database will need to be set up and managed.


Chapter Two takes the idea of a database further and discusses what relational databases are, and how they differ from other systems such as hierarchical and networked databases. Fields are explained as vertical columns (attributes of a relation), and records as individual collections of various attributes relating to one given entity at a time. Tables hold these fields and records, and a variety of operations can be performed to both input and extract/format the data to be viewed.


Chapter Three goes into the process of designing a database, starting with creating an entity-relationship (E-R) model, and establishing the types of relationships an entity can have (one to one, one to many, many to many). From there, a table is designed, and by examining the relationships, we can see where data is duplicated. We can divide the big table into smaller, interrelated tables in a process called normalization. The concepts of both primary and foreign keys are also introduced.


Chapter Four introduces us to the Structured Query Language, or SQL. SQL allows users to perform functions that define, operate on, and control data. SELECT statements allow users to choose specific fields to display and show the values of those fields. The WHERE clause allows users to specify conditions as to which records are displayed. INSERT, UPDATE, and DELETE statements let users insert, update and delete data. CREATE TABLE lets a user create a new table, while DROP TABLE lets a user remove (drop) an existing table.
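For flavor, here is roughly what those statements look like in practice, using a hypothetical fruit exports table in the spirit of the book's examples (the table and column names are mine, not the book's; this sketch assumes PHP's SQLite PDO driver so it runs without any server setup):

    <?php
    $db = new PDO('sqlite::memory:');
    $db->exec("CREATE TABLE exports (id INTEGER PRIMARY KEY, fruit TEXT, price INTEGER)");
    $db->exec("INSERT INTO exports (fruit, price) VALUES ('apple', 100)");
    $db->exec("INSERT INTO exports (fruit, price) VALUES ('melon', 300)");
    $db->exec("UPDATE exports SET price = 120 WHERE fruit = 'apple'");

    // SELECT plus a WHERE condition to filter which records come back.
    foreach ($db->query("SELECT fruit, price FROM exports WHERE price > 110") as $row) {
        echo "{$row['fruit']}: {$row['price']}\n";
    }

    $db->exec("DELETE FROM exports WHERE fruit = 'apple'");
    $db->exec("DROP TABLE exports");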


Chapter Five focuses on how to operate a database, including how to set user privileges for a database, how to use locking to ensure consistency with multiple users, setting up indexes to perform faster searches, examining transactions and how they can be "rolled forward or rolled back", and options for disaster recovery and database repair capabilities.


Chapter Six shows us how the proliferation of databases affects everyday things that we do, and that we are likely dealing with them in areas we otherwise would not consider (every site that this book review will appear on has a database to store it, and that's at the simplest level). The chapter also shows us examples of distributed databases, database partitioning, two-phase commits, database replication and the use of stored procedures and triggers to perform commonly repeated tasks.


The book ends with a short appendix with a summary of the most commonly used SQL commands (which would probably make for a nice little project: a dynamically generated command reference for an example web site page).


Bottom Line:


If you are an old hand at using databases in general and SQL commands in particular, there’s probably not a whole lot of new material for you here. For those who are just getting into working with databases, this is a much more fun and straightforward way of teaching the ideas than I’ve seen, well, just about anywhere. Do note that this is not going to be the be all and end all of learning about databases, SQL queries or how to effectively design databases. It will, however, go a long way in giving those people who want to learn how to make or manage relational, SQL based databases a simple framework to hang future ideas and learning from.

TECHNICAL TESTER FRIDAY: A Bare Bones PHP Site and a Foundation for Further Work

Friday, March 14, 2014 16:51 PM

Last week was a comedy of errors. Between trying to focus on getting a live site to behave itself, having numerous back and forth support calls and explanations of "oh, we're sorry, but you'll need to upgrade to our new hosting service to get those features", I finally said "the heck with this" and built a brand new virtual machine (Windows 7) using EasyPHP (just because I saw some friends mention it and I figured "Hmmm, why not?").

I'm going to go on the record and say, for this first part of the process, if you want to have the quickest, start to finish, least amount of resistance approach to working with PHP to build a basic web site, a VM with EasyPHP may very well fit the bill nicely. With it you get the latest PHP, MySQL, an Apache server and a host of other nice features to help you navigate and manage what you need to do, all of which helps give you spare cycles to actually build a site using PHP.

The first few weeks have been spent looking at PHP and understanding where I can use it, and why I might want to. At the absolute simplest level, it's a great way to template base pages with pieces that you know you will use over and over again. Headers and footers? Not likely to change very much, so why copy and paste a bunch of code? Make a base PHP script that echoes out what you want to have appear, and have that item appear in the location you want it to on each page. Change the script, and it instantly changes everywhere it's being called. Seems really obvious after you do it, but there's that moment where you realize "Wait, are you serious? It CAN'T be THAT simple!" Actually, yes, it can be!
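A minimal sketch of the idea (the file names are just for illustration):

    <?php
    // header.php: the shared header, defined exactly once.
    echo "<div id='header'><h1>My Site</h1></div>";
    ?>

    <?php
    // index.php (or any other page): pull in the shared pieces.
    include 'header.php';
    echo "<p>Page-specific content goes here.</p>";
    include 'footer.php'; // same trick for the footer
    ?>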





Another nice thing that you can do with PHP is take your database connection strings, as well as your common SQL commands, and store them in a script. By inclusion, you don't have to go and edit a bunch of scripts to issue the same commands over and over, or change statements in a bunch of files. This is where that whole "red, green, refactor" aspect of Test Driven Development can be felt, and wow, it's kinda cool.
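Something along these lines (the names and credentials are invented for the example):

    <?php
    // db.php: one place for the connection string and the common queries.
    $link = mysqli_connect('localhost', 'site_user', 'secret', 'site_db');

    $sql_list_events = "SELECT title, event_date FROM events ORDER BY event_date";
    ?>

    <?php
    // events.php: any page that needs the data just includes db.php.
    require 'db.php';
    $result = mysqli_query($link, $sql_list_events);
    while ($row = mysqli_fetch_assoc($result)) {
        echo "<li>{$row['title']} on {$row['event_date']}</li>";
    }
    ?>

Change the query once in db.php, and every page that includes it picks up the change.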






Another thing that I like about EasyPHP, as opposed to working with the live site I was playing with, is that the debugging process is much quicker. On my live site, I would get errors and have no clue what was happening. I'd have to comb through several log files to figure out what was going on. Here, the system is set up to put all debug messages on your response pages. It may seem a bit obtrusive, but really, it's hugely helpful.

Also, programmers, I've always known this, but the past couple of weeks have reminded me of it again: debugging programs can be a real challenge. You can go through what look to be identical programs, line for line, and not for the life of you figure out what the heck is wrong. I had one of those situations this week where I kept looking at why I couldn't update my database with a submission from a page, and it took me 45 minutes to realize I'd posted a comment with a contraction. That's a perfectly normal thing for someone to do, and yes, the answer is to substitute an escape character to handle the single quote, but that's where I ended up losing 45 minutes of my life wondering "why won't this thing work?!!" This emphasizes why programmers are often the worst testers of their own code. It's the same reason that writers/bloggers are often the worst proofreaders of their own posts (and oh, am I ever guilty of THAT).
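For the curious, the fix is small: escape the input, or better, let a prepared statement handle the quoting entirely (a sketch using the hypothetical db.php include from above):

    <?php
    require 'db.php'; // provides $link, as sketched earlier

    $comment = "Don't you love contractions?"; // the apostrophe that cost me 45 minutes

    // Option 1: escape the string before building the query.
    $safe = mysqli_real_escape_string($link, $comment);
    mysqli_query($link, "INSERT INTO comments (body) VALUES ('$safe')");

    // Option 2 (better): a prepared statement, with no manual quoting at all.
    $stmt = mysqli_prepare($link, "INSERT INTO comments (body) VALUES (?)");
    mysqli_stmt_bind_param($stmt, 's', $comment);
    mysqli_stmt_execute($stmt);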

So have you had enough of me prattling on about all this? Where's my site? What did I put together? OK, here ya' go :).





Please, contain your excitement (LOL!).

Yes, I know what a lot of you are thinking. "Seriously? You spent three weeks and this is the best you could come up with?" Correction. I spent three weeks trying to make sense out of what PHP can do, reading up on it, practicing it and playing around with it, while I spent a week getting a MySQL instance to try to behave itself, and then gave up and created an environment from scratch (EasyPHP, from start to finish, install to fully up and running, a total of fifteen minutes. I kid you not).

OK, but the results look a little... well... 1995!

Right!!!

This set of pages is almost 100% pure HTML and PHP. I have exactly two CSS statements. One defines the size of the Submit button in two forms. The other makes for a sans serif font base for the whole site. That's it. Why? Because the prettified version, using CSS, HTML5 and JavaScript, is the next part of the challenge (consider it Noah's "Chapter 2"). 

I've got to give him credit: Noah Sussman is smart with this first step. CSS and JavaScript are big topics; they can take users in lots of different directions. I wondered why Noah wanted us to focus on just PHP for the first part of this project. I think it's because, with that constraint, we can actually consider just how much we can do using only PHP.

- Want to have a particular part of your site appear at a particular time or set of days? (There's a sketch of this one below.)
- Want to have a number of simple components, like your header and footer, be 100% predictable?
- Want to have a single place to put SQL variables and a collection of commonly used queries, and only have to change the function calls and parameters based on input?

In fact, with enough forward thinking, almost all of a site's content could be made up of just include and require statements. 
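The first of those questions, for instance, needs nothing more than PHP's date() function wrapped around an include (the file name is made up):

    <?php
    // Only show the rehearsal notice Monday through Friday.
    $day = date('N'); // 1 = Monday ... 7 = Sunday
    if ($day >= 1 && $day <= 5) {
        include 'rehearsal_notice.php';
    }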

Again, these are simple steps, nothing Earth-shattering going on here, but they show that, with a little bit of work (and maybe stashing some syntax examples to use later), you can get a feel for what it takes to make a site that mostly uses PHP scripts for content creation.


So what can be gleaned from the first part of this series?

- Technical testers need to realize that programming, any kind of programming, really does take a different mind set and paradigm than manual or exploratory testing does. A tester looks at a finished Lego model and says "how can I see the structure?" A programmer takes a bunch of pieces and thinks "how can I take all of these pieces and put together the finished Lego model?" Programming requires an ability to string small pieces together, and the patience to "accrete" something from very small components into a larger whole. Testers tend to take items in their larger whole and break them apart to get to their individual components and find the relationships. I've intuitively known that, but these past few weeks have made that a lot more obvious.

- PHP alone gives a site programmer some neat tools to gather up ideas and make little bits of reusable code that can appear in lots of places, or create contexts in which the code can be used in a variety of ways. The syntax and the mechanics are similar to many other languages, and getting the feel for the language and what it can do can be done in a fairly short order just by working through the examples at Codecademy. Syntax alone, though, doesn't do much for you. You need to start looking at your pages and start thinking "hmmm, could I use PHP there?" As I've found out, the answer is, well, yeah, just about anyplace, and for a whole variety of things.

- PHP does give you the ability to automate some controls, send information to and receive it from a database, display data on the page, and do a number of cool manipulations. Without CSS or other "sweeteners", though, it looks just like the old school raw HTML of yesteryear. We could embed CSS statements inside of PHP, but that gets ugly really quickly, and then it makes it murder to change later. PHP is good for quite a few things, but style isn't one of them. Fortunately, I'll get the chance to play with style in the upcoming week.

Oh, one other mention, and I'm going to do a full review on it in the next couple of days. While the point of these posts is to highlight things that are free, open source or easily obtainable through a web search, I do want to mention that I found the "Head First PHP & MySQL" book to be very valuable for this part of the process. The individual concepts for things like configuring Apache, using MySQL and getting syntax for PHP can all be handled with a Google search, but the Head First book gives a variety of little projects to do, all of which help to get the programmer into the mode of coding up working examples, including mistakes that can be fixed, designs that can be refactored, and projects that can be easily ported to other uses. I have some other comments, both pros and cons, but you'll just have to wait for my more complete review ;).

The wheels are now on the car, I have a little bit of gas in the tank, the vehicle starts and can be driven, at least a little. Now let's see if we can clean it up a bit and make it a bit more presentable. Hope you'll join me as I continue the journey :). 

Retro Book Review: The Manga Guide to Statistics

Friday, March 14, 2014 15:05 PM

This is yet another in my series of “books I’ve had from NoStarch that I need to finish reviewing before I feel I can ask them to send me more titles”. I’m saying this because NoStarch has been incredibly generous to me over the years, and has provided me with so many titles I sometimes feel I’ll never get through all of them, at least not to a level that is deserving of a proper and thorough review. On the bright side, I am getting close to clearing my queue, and soon, I will be able to ask again if they can send me some more titles :).


For those of us who do software testing for a living, we know that we have a number of artifacts that come from our testing. One of those classes of artifacts is data. Tons and tons of data. How do we make sense of it all? What is worth looking at? Why is it worth looking at? What decisions can we make if we compile, analyze and distill the data we receive? More to the point, how do we analyze the data so that we can distill it? That’s where Statistics comes in handy.


I’ll be blunt. I took one statistics class when I was in college. I hated it. In fact, I never finished it. Please understand when I say “I have an aversion to statistics as something I have to actually do”, I am not kidding. As a software tester, that puts me in a bit of a bind. If I can’t make some sense of the data I receive, I can’t do as effective a job. At best, I need to farm that work out to someone else on my team who can do the statistical analysis, meaning I need to get their take and explanation to make decisions. That causes delays. Overall, it would be better to just suck it up and learn a bit about statistics. It’s a core piece of domain knowledge any good software tester should possess, if not immediately, then at some point in their career.


There are lots of ways to learn about Statistics, and frankly, most of them are a bit painful. College courses, text books, online videos, etc. can help, but they are often slow, or assume that you have some background in the ideas already. What to do when you want to get the gist of the idea before you tackle the hairier details? That’s where “The Manga Guide to Statistics” comes in handy.


A caveat: this should absolutely not be your only guide to learning statistics. If that's what you are looking for, then this book will not deliver on that promise. It is, however, a good primer to get you started, and it helps you look at statistics in a way that's fun and engaging, especially if Manga tropes appeal to you.


To set the stage: our protagonist, Rui, has a chat with the dreamy co-worker of her father (Mr. Igarashi) about understanding statistics. As Rui expresses interest to her dad about learning statistics, he agrees to get her help. Rui creates a fantasy of being tutored by Mr. Igarashi, only to have her hopes dashed when an employee of her father, Mamoru Yamamoto (read: drawn to not be dreamy), comes to teach her about statistics. Hilarity ensues. There, that's the Manga trope, and yes, "kawaii" abounds.


If you are familiar with Manga, you know I've already spoken volumes about what to expect ;). For those not familiar with Manga, the treatment of the topics is generally amusing, usually at the expense of the dignity of either our protagonist or our long-suffering tutor, but the lighthearted humor is meant to help us relate to the material better. In between the story line, a number of key statistical analysis ideas and concepts are discussed, in a way that makes them accessible and quite a bit less scary than what normally appears in textbooks. Also, as in the other "Manga Guide To" books, the material is presented in a way that covers a lot of ground. It's made accessible, but it's not "watered down" or made to be trivial. The examples actually require the reader to understand some underlying mathematics concepts. If you've gotten through at least Intermediate Algebra, most of the math will be easy to follow.


Chapter One focuses on understanding data types, and how we can more readily put terms like Categorical (Qualitative) Data and Numerical (Quantitative) Data into aspects that are easier to understand (using a High School Slice-of-Life Manga Drama as the basis for the comparisons). By looking at examples like reader questionnaires, we get to the idea of what these data types are (Categorical Data cannot be directly measured, while Numerical Data can be). It also shows how categorical data can be given a point value and treated as numerical data.


Chapter Two gets more into numerical data and discusses some key statistics concepts, such as looking at Frequency Distributions and Histograms (conveniently described by looking for the best ramen shop in the city, by varying definitions of "best", and by comparing a team's bowling scores). By looking at data points and other criteria, and examining how those criteria can be condensed into a table of values, Rui and Mamoru show us how we can calculate the Mean (or average), the Median (the actual midpoint of the samples) and the Standard Deviation (the "fudge factor" of what's been collected).
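
Since this series has me thinking in PHP these days, here is a minimal sketch of those three measures, using invented bowling scores (note that this computes the population flavor of standard deviation; the sample flavor divides by n - 1 instead):

    <?php
    // Invented bowling scores, standing in for the book's team data.
    $scores = [86, 73, 124, 111, 90, 38];
    $n = count($scores);

    $mean = array_sum($scores) / $n;

    sort($scores);
    $median = ($n % 2)
        ? $scores[intdiv($n, 2)]                       // odd count: the middle value
        : ($scores[$n / 2 - 1] + $scores[$n / 2]) / 2; // even count: average the middle pair

    $sumSquares = 0;
    foreach ($scores as $s) {
        $sumSquares += ($s - $mean) ** 2;
    }
    $stddev = sqrt($sumSquares / $n); // population standard deviation

    printf("mean: %.1f, median: %.1f, standard deviation: %.1f\n",
           $mean, $median, $stddev);
    ?>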


Chapter Three goes into categorical data. By its nature, categorical data, or qualitative data, cannot be boiled down to a number as is, but there are ways that certain aspects of qualitative data can be categorized and that categorization can be made into quantitative (numeric) data and calculated. Using a cross tabulation, some numerical analysis can be performed, and therefore qualitative data can be measured, albeit imprecisely.


Chapter Four goes into the ideas of Standard Score and Deviation Score, or how to look at a specific data point and see how it relates to the rest of your data, or how to examine data points in a variety of ranges or with different units of measurement.
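
In code, both scores are one-liners. This sketch reuses the $mean and $stddev from the snippet above, and assumes the common convention that a deviation score recenters the mean at 50 with 10 points per standard deviation:

    <?php
    // Standard score (z-score): how many standard deviations a single
    // data point sits away from the mean of its data set.
    $score = 111; // one of the invented bowling scores from above
    $z = ($score - $mean) / $stddev;

    // Deviation score: the same idea, rescaled so the mean lands at 50
    // and each standard deviation is worth 10 points.
    $deviationScore = 50 + 10 * $z;
    ?>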


Chapter Five talks about Probability, and the ways in which we can predict an outcome based on the data on hand (more correctly, make an educated guess as to the outcome, which is what probability is meant to do). Data can be plotted on a graph, and that graph can be converted into a curve with enough data points. That curve (the standard distribution) can be moved based on the mean and standard deviation. Using a number of different models (Normal distribution, Standard normal distribution, Chi-square distribution, t distribution and F distribution), we can make the curve "move". By taking into account the way that the curve moves, we can calculate a ratio, or probability, which in turn can allow us to make a variety of predictions.


Chapter Six looks at comparing the relationship, or correlation, between two variables. By charting variable values on a scatter plot, we can eyeball the values and see if we have a positive or negative correlation, or if there is little to no correlation. If we sense there is a correlation, we can use a variety of indexes (spelled out here as the Correlation Coefficient for numerical-numerical data, the Correlation Ratio for numerical-categorical data and Cramer's Coefficient for categorical-categorical data) to determine the overall strength or weakness of that correlation. This chapter also points out that these indexes are "fuzzy", but they are better than nothing.
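
As a sketch of the first of those indexes, here is a correlation coefficient (Pearson's r) in PHP, with invented paired data; values near +1 or -1 indicate a strong correlation, and values near 0 indicate little or none:

    <?php
    // Correlation coefficient (Pearson's r) for two numerical variables.
    function correlation(array $xs, array $ys): float {
        $n  = count($xs);
        $mx = array_sum($xs) / $n;
        $my = array_sum($ys) / $n;
        $cov = $sx = $sy = 0;
        for ($i = 0; $i < $n; $i++) {
            $cov += ($xs[$i] - $mx) * ($ys[$i] - $my);
            $sx  += ($xs[$i] - $mx) ** 2;
            $sy  += ($ys[$i] - $my) ** 2;
        }
        return $cov / sqrt($sx * $sy);
    }

    // Invented data: hours studied vs. test score.
    echo correlation([1, 2, 3, 4], [55, 68, 74, 92]); // prints a value close to +1
    ?>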


Chapter Seven examines Hypothesis Tests, which are used to help clarify, or understand, whether a hypothesis made by examining sample data is correct. The chapter shows how to test for the independence of variables and whether our tests are looking at variables in critical regions, gives examples of how to perform a statistical analysis of those tests, and offers a variety of tests to see if variables are independent or homogeneous, and the degree to which they are either, both, or neither. Lather, rinse, repeat.


The book closes with an Appendix that describes how to use Excel to set up the examples explained in the book, and how to get to the functions and create the formulas necessary to do the measurements described in the previous chapters. This is a wonderful addition, and it gives even neophytes to statistics a way to play with the data, analyze their results, and practice the hypothesis tests or determine the probability of future events/actions.


Bottom Line:


Statistics can be fun, if you plot the story right. If following the antics of Rui and Mamoru sounds like a good time to you, and if gaining a fundamental understanding of some key statistics concepts is your end goal, then this is a nice format in which to learn those fundamental ideas. Note, I said "fundamental ideas". Do not think that this would be an appropriate guide to say "OK, great, now I know all the statistics I need to know". Granted, you may learn enough about statistics to be useful, and it may give you additional insights, but this is not an in-depth study. Having said all that, for those who want to get into the nitty-gritty stuff, the Appendix about setting up tables and examples using statistical functions and formulas is worth the purchase price alone.


On the Manga story front… does our intrepid heroine Rui master the art of Statistics? Will her unrequited love for Mr. Igarashi remain as such? Will Mamoru be able to replace the spot in Rui's heart where she holds an affection for Mr. Igarashi? Even if he does, is such a relationship just a little bit creepy? Ahh yes, all of this, and more, shall be answered. For those who read manga, well, you probably already know the answers to all of those questions... but it's still a fun read. For those curious as to whether or not a Manga can teach you a thing or three about statistics, the answer is "yes", but you'll need to look elsewhere to build on what's covered here. As to my target market (i.e. my fellow software testers), if statistics is not your strong suit, this makes for a very practical introduction, with plenty of takeaways to make you just a bit more dangerous at work, and I mean that in the best possible way.

Happy 4th Birthday, TESTHEAD :)!!!

Monday, March 10, 2014 16:17 PM

As I find myself having conversations with a hosting company trying to find out why a database provisioning tool isn't working, a number of modules for SummerQAmp are nearing final drafts and re-visioning, and a few talks are being finalized for conference presentations both near and a few months from now, I came to a realization today. TESTHEAD just turned four years old.

Back on March 10, 2010, I posted the very first message on this blog. It was a bit hesitant, but it made the point of what I hoped it would be:

Welcome to TESTHEAD

"OK, why the need for a blog like this? Well, truth be told, I don’t know that there really is a “need” for this blog, but I’m doing this as a challenge to myself. I consider this an opportunity to “give back” to a community that has helped me over the course of many years, as I have been the beneficiary of many great insights and learned a lot from a number of people and sources over nearly two decades.

[...]


this will be a site where I share my own experiences, both good and bad, and what I've learned from them. Expect there to be talk about tools, both proprietary and open source. Expect some talk about test case design (and how I so hate to do it at times). Expect to hear me vent about some frustrations at times, because like all people, I have my share of frustrations when things don't seem to work correctly or go the way that I planned them to. 

[...]

Most of all, expect to get a real person's perspective on these things and an attempt to communicate them in plain English, whenever I possibly can."


So how have I done on that front? Well, it looks as though many of the initial ideas that I had fell by the wayside pretty quickly. This blog does talk about tools, but it's less specific than I think I initially intended it to be, and I think that's for the better. This blog definitely has taken on a human touch and dealt with a number of the things that I have found to be frustrating and interesting, and sometimes both. 

Overall, this blog has become a repository of a lot of interconnected experiences, and the fact is, one thing leads to another. So many of the things I've talked about these past four years I had no idea I'd be discussing when I first started this challenge. I had no idea I'd join AST, become a BBST instructor, spearhead facilitation for Weekend Testing in the Americas, become a guest blogger at a variety of sites, be invited to speak at conferences halfway around the world, be asked to contribute to a body of knowledge for QA Interns, or be a mentor to other up and coming testers. The opportunities that come my way never cease to amaze me, and very often, those opportunities have stemmed, in some way, shape or form, from what I talk about here, at TESTHEAD.

Four years, 889 posts (including this one), zeroing in on half a million page views, and a lot of great friends and interactions. Thank you, everyone, who has in any way been touched by this blog. Thank you for your Likes, your re-tweets, your favorites, your re-pins, your email forwards and any other ways you've helped to share my blog with other testers and interested parties. It seriously means a lot to me, and I hope to keep making this a destination that you will want to keep coming back to. It looks like pre-school is about to end. Time to get TESTHEAD ready for Kindergarten :)!!!

Strange Incentives

Friday, March 07, 2014 23:34 PM

I did something a little bit bold this week, in that I decided to bring my daughters with me up to Boreal ski area for a snowboarding trip. That's not all that unusual, really, but it is considering I did it on a Wednesday, and yes, I actually voluntarily took my daughters out of school to come up with me for the trip. Yes, I did make sure that they knew what their assignments were, and that work was done in advance, as much as possible. No, this is not something I plan on making a habit of, but I did feel this was an important thing for us to do. Still, I really wanted to focus on my daughters' snowboarding skills, and to be able to teach them in an environment, and on a day, that wouldn't be crowded, so as to be most conducive to learning.

One of the things I've come to realize is that you can tell people exactly what they need to do. You can articulate everything down to the last detail. You can give them an exact blueprint as to what they have to accomplish. None of that is going to matter if they are afraid, unsure or resistant. My youngest daughter is now 13 years old. She's probably had the least amount of time on the hill compared to all my kids. Once you have three kids and they're in school and doing their various activities, you start to realize that going up snowboarding with everybody, whenever you want to, becomes a little more difficult. Over the years, I've had to balance my trips and take them either individually, or we'd agree to go for maybe one or two days a season as a whole family. Fun, sure, but not really good for making sure you can advance in skills. We live three hours away from the snow at the best clip. That means it's a big deal and a real time commitment to go riding. A day trip is often 18 hours door to door. Much as I love it, and much as it is something that I am willing to spend a great deal of time doing, I can't expect my kids to put up with the same things that I would. Therefore, we haven't gone as often these last few years. That means that, while I was able to develop many of my skills in a short period of time, my kids have not had the same opportunities. I wanted to make sure that we had a day that was devoted to better skills, and better riding, because let's face it, if you can ride proficiently, the mountain is much more fun, and there are many more options open to you.

My youngest daughter is not timid. She's willing to ride down just about any terrain a mountain can offer, but she suffers from the same thing that many snowboarders do. It's a condition called "heel-side-itis". So many riders never get past the level of going straight, or braking and turning on their heel side edge. Riding toe side just tends to scare them. Personally, I had the opposite problem. At first I *only* rode toe side. Getting over to my heel side regularly was more difficult. In both cases, though, if you only ride one direction (toe side or heel side), you are somewhat at the mercy of gravity. You're also only using half of your effective muscle mass in your legs, and the half that you are using, you are stressing almost all the time, which leads to greater and faster fatigue. Linked turns are efficient, they give you more control over your trajectory, and frankly, they just make riding that much more fun.

So why are so many people resistant to learning how to turn toe side? The simple answer is, especially on steeper terrain, you have to commit to going straight down the hill on a pitch, and then make the turn happen. Frankly, that's unnerving for a lot of people. The irony is, it's easier to turn on steeper terrain than it is on flatter terrain. Gravity does the hard work for you. Getting over that mental hurdle is still a challenge.

My youngest daughter is a fighter. She tends to want to do things her way, but at the same time she also wants to get better, and frankly, she wants me to give her a "high five" and say she's doing a good job. She'll definitely try, but she gets frustrated easily, and irritated, and often that results in fighting between me being the instructor and her being the student (note: this dynamic is not limited to snowboarding ;) ).

Because of this, I've found some interesting incentives to encourage her along the way. Sometimes, those incentives are just kind of off the wall. Case in point: my daughter is a big Korean Pop Music (K-pop) fan. She loves playing the music from her iTouch in the car on our road trips. Recently, my son, since he now has his own car and his own iPad to play music, decided that he wanted to have the cassette-deck adapter so that he could listen to music in his car. Because of that, we had no cassette deck adapter, which meant my poor little girl had to suffer through listening to my old CDs of ska, new wave, hip hop, punk rock and heavy metal during the trip. During our day of riding, as I was trying to get her to better link her turns, and be more casual and natural on steeper terrain, she resisted. I thought about how I might be able to get her to respond. At that moment, I just smiled and said "OK, I'll make you a deal. You give me three picture-perfect runs, and on the way back, at the very first electronics outlet I find, I will buy a cassette tape adapter, so that you can listen to K-Pop over the car stereo the rest of the way home". It worked like a charm.

Yes, this post is indulgent, and yes, I'm covering a topic that seems completely out of place, but for those who are regular readers, when has that ever been a surprise? The reason I mention this is that there are many incentives that drive our behavior, and when you get right down to it, they are just not rational. Actually, they are rational; they're just weird. We're constantly in a state where we have to trick our brains into doing what we want them to do. When we use these incentives, and we know which incentives actually matter to people, we can make amazing things happen. We can also figure out how to streamline the process, and see if it really does work for them.

If we can lower barriers to resistance, and if we can get people to enthusiastically take on a challenge, we can then really see how they adapt to the situation. If they are reticent, or rebellious, or just not in the mood to do something, no matter what we do, no matter how well we teach it, we'll have problems with them accomplishing the goal. The next time you decide you need to take on a testing challenge, or you have something staring you in the face that's just irksome or difficult, and getting over the hurdle to make it happen is a struggle... pick an incentive that will actually motivate you. Don't be surprised if that incentive is a little bizarre. Sometimes bizarre incentives are the ones that really spur us on.

TECHNICAL TESTER FRIDAY: Immersing in PHP and a Little Each Day

Friday, February 28, 2014 21:49 PM

For some reason, this picture just sums up my past two weeks perfectly ;).
Last week was crunch time at work, and the need to take care of something really important took me away from a timely update, so this is sort of a two-in-one post.

First, I've been looking at a variety of resources available online to learn about and practice using PHP. PHP can do a lot of interesting things: display output, pull in a variety of information sources, and simplify some tasks. To do any of that, though, there needs to be a fair amount of tinkering involved.

Second, just like HTML all by itself will not give a web site a nice look and feel, PHP will not be the be all and end all of interactivity, either. Setting a site up from scratch means that there is a fair amount of interplay to work out, configuration details to tweak, and a lot of refreshing to see changes. Also, without a back end database, much of what is being done in the pages is superficial and not very interesting, though it does help to hammer out syntax.
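
To show what I mean by "superficial", here is a throwaway sketch; the variables are invented and there is no database behind them, but pages like this are plenty for hammering out syntax:

    <?php
    // A page that is dynamic in only the most superficial way,
    // but it's enough to practice the mechanics.
    $visitor = 'fellow tester';   // stand-in value; no back end yet
    $today   = date('l, F jS');
    ?>
    <html>
      <body>
        <p>Hello, <?php echo $visitor; ?>! Today is <?php echo $today; ?>.</p>
      </body>
    </html>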

It's a small victory, but hey, I'll take it!
I've finished the Codecademy modules that cover PHP (YAY ME!!!).

There's some oddity with their interface when it comes to completing certain assignments and exercises. I have found myself unable to complete a module that I have been actively working on, even though the "code" is correct for the context. I have also closed down my browser, reopened it, gone back to the section I was just working on, clicked Save again, and gotten a "Success".

Why do I mention this? Because I'm willing to bet others might be struggling with some examples and scratching their heads wondering why Codecademy isn't accepting their results. Often there are typos, and those are easy to fix, but if you find yourself in a spot where you cannot get it to work, no matter what you do, try closing your browser and coming back to the module and saving again.

Noah stated in the initial comment that we should be prepared to spend a few weeks on this initial project, and to be open to the fact that we will be doing a lot of mucking around to get it to work. Just being able to manipulate pictures would be considered a positive milestone. I thought this would be relatively quick; I mean, how hard could it be to just put up a simple site with PHP? The point is not to just put up a site with PHP; there are lots of ways to superficially do that. The image manipulation challenge is what sets it apart. Noah gives us an authentic problem, and asks us to solve it, without guidance as to how or what to use to do it.

This process led me down several paths and experiments. I set up a local stack on my personal machine. I set up a LAMP server in a virtual space. I set up a site already on the open web to use PHP and experimented with commands and syntax. I rolled several pages of my own to see how it all fits together. I downloaded a ready packaged "template" to get some ideas and save me some keystrokes. I swapped ideas between the home grown pages and the template pages. In short, I tried things, I tweaked them, I went down several dead ends, and I predict I'm going to go down several more.

One thing I learned from my time when I was releasing a podcast every week was that I had to learn just how long it would take to do something. At first, I was wildly over optimistic. I figured my skills with writing music and doing audio editing would make doing something as simple as editing a podcast a breeze. A clip here, a snip there and all would come together. If I wanted to have a slap-dash product, with little regard to the end experience of the listener, that was true. It took little time at all to edit a program that sounded hacked and choppy, but hey, it got the main points across. To make it sound good, to make the audio flow naturally, to remove pause words (the ums, ahs, likes and you knows) and to make the transition sound smooth and clean, to preserve the natural narrative so that the interviews and programs were comfortable to listen to, took a considerable amount of time to do.

I realized I couldn't put in one four-hour editing session and have a product that sounded good, but I could put in four one-hour editing sessions spread over several days and make a podcast that sounded great. The difference? Spreading out the effort is vital, because discernment and clarity come with repeated practice, and some down time to let the brain reflect offline. We don't get that same level of clarity when we try to push everything into one night to put it all together. My mistake has been more of the latter and less of the former. When we cram, we seek short cuts. We look for quick hacks that "work", for some definition of "work". Our standards for what is "acceptable" go way down, and we repeatedly say "oh, heck with it, I have it working, it's good enough". Doing it a little bit at a time, and coming back to reflect on what we are doing, lets us see things that could be done better, and realize that we really can do better, without a lot of extra pain and effort. Last week, I tried to put it all together at one time, and was frustrated. This week, I managed a little more spacing, and got closer to a level of skill where I could feel like I was doing something useful, but I know there's lots more I need to do to even have something basic in place.

So yeah, the past two weeks have been hectic, scattered, and less focused and more "bunched up" in my efforts than I want them to be. It feels like how I see many programmers having to work because of issues and changing priorities, and I have a greater empathy for them and what they go through, even to meet my own arbitrary "deadlines". If that is part of the "lessons learned" that Noah wants to encourage, I think it's working very well.

A Weekend Testing Follow Up: Ubertesters Wants to Talk To You :)

Saturday, March 01, 2014 04:12 AM

Earlier in February, the Weekend Testing Americas chapter held a session for and with an app called Ubertesters. This is a wrapper/SDK around an app that allows people on mobile devices to report what they see and send in their feedback to programmers and stakeholders without having to move to a different machine.

One of the benefits of Weekend Testing is that we also get feedback from the organizations providing the apps to test. This was shared with me after the session:


I was amazed with the passion of your team to testing and new technologies and tried to stay on track on the way to the airport. I carefully read all the comments and feedback and appreciate them a lot. All the members did a great job, and it will help us to improve Ubertesters user experience for sure.


Additionally, Ubertesters contacted me and asked if I'd be willing to share this with the Weekend Testing community, and those who are part of this "passionate" group of testers. I said I'd be happy to.

From Ubertesters:


'Would you like to become part of a global testing provider and join Ubertesters team (http://ubertesters.com/)? 
Ubertesters is announcing enrollment to their team for testers who are passionate about testing, want to take part in various interesting projects and make some money in the process. For more details, please, contact info@ubertesters.com.'


I think this is pretty cool, and it's becoming part of a neat trend I've seen as of late. When the product owners and stakeholders of products take part in the sessions, they see and learn a great deal. Not just about their product in action, but their product in action in the hands of testers who really care about their craft. A number of people who have participated in sessions have said afterwards that they've been contacted by stakeholders asking if they'd be interested in talking further about other opportunities. This is a continuation of that, and it's a result I'm happy to see.


To those who would like to follow up with Ubertesters, a favor: if you do, please mention you are coming to them via Weekend Testers. I can't guarantee that it will give you a better chance of getting in on what they are looking to do, but judging from the feedback and the direct request, I'd say the odds are pretty good ;).


Again, thanks to all who participate in these monthly events. Your energy and enthusiasm are what make them worthwhile and fun to do. Additionally, as I hope the above illustrates, that energy gets noticed.

Test Retreat 2014: Why I'm Going, and Why You Should, Too :)

Tuesday, February 25, 2014 21:45 PM

First, I need to preface this with something you will be seeing a lot from me in the coming weeks:


CAST 2014 will be held in New York City August 11-13, 2014.


I will be giving a talk along with Harrison Lovell. Details on this will follow, but not until it gets officially posted.


I want to see as many of you as possible come attend CAST 2014, because I feel it is one of the best, if not THE best, software testing conferences a software testing practitioner can attend and come away with real value for their time and investment. As I said previously, you will see me doing a lot more commentary and promotion for CAST going forward.


Having said all that, I want to talk about something that is a preamble to CAST, and looks to be turning into an annual event that I support and want to see thrive. That event is called "Test Retreat".


Test Retreat is rapidly becoming one of my favorite events to attend each year. This year it will be held Saturday, August 9, 2014 from 8:30 AM to 4:30 PM (EDT). I attended the inaugural event at CAST 2012 in San Jose, CA, and again in Madison, WI during CAST 2013.

What makes Test Retreat worthwhile? 

Many other events allow a select few to present on ideas that have to be highly structured. The Open Conference option that Test Retreat offers allows me, and others, to present ideas that might be very preliminary and embryonic. Through the event, these preliminary ideas often develop into calls for action, with input from many other participants, that are significantly better than anything I would have proposed on my own. It's this rich level of interaction, conferring with peers who are all willing to work together to develop "better ideas", that makes this format a success. Several of my better talks ("Let's Stop Faking It", "Balancing ATDD, GUI Automation and Exploratory Testing", and others) that I have given at Meet-Ups and conferences, and have written up as published articles and papers, had their genesis in Test Retreat.


For those wondering if it makes sense, or if it's worth it to attend a Saturday event (yes, I know Saturdays are precious), I say "yes"! So far, it has proven to be every bit worth it these past two years. I've already signed up for year three.

Will I see you there? Will we, perhaps, come up with better ideas together than we would separately? I hope you will come attend and find out!

Book Review: A Web For Everyone

Wednesday, February 19, 2014 21:46 PM

When I started working at Socialtext, I came in right at a time when we were working on a large Accessibility project. For those not familiar with the term, Accessibility covers the variety of standards, tools, and devices that collectively allow individuals with disabilities to get access to information, either on their systems or on the web, and interact with it as seamlessly as everyday users who do not have disabilities.


There are several standards that can be referenced and used as starting points for understanding accessibility, and a variety of tools, both free and commercial, exist to help the programmer and tester address accessibility issues, create fixes, and test them to see if they work as intended. Over the past year, Accessibility has become a focal point of my testing practice, one I didn’t spend much time thinking about or doing prior to working here.


While it's important to understand accessibility, it would be even better if more people gave thought to accessibility, and testing for accessibility, in their design decisions, and made the case early in the process that accessibility for all users (or as many as possible) is an important part of our mission as product owners, creators, designers and testers. Sarah Horton and Whitney Quesenbery approach this challenge and this mission with their book "A Web For Everyone". More than just ways to code and test for accessibility, this book attempts to help anyone who creates software applications to develop the understanding and the empathy necessary to make design decisions that truly help to make "A Web for Everyone" possible.


So how do Horton and Quesenbery score on this front?


Chapter 1 lays out the case for why we all should consider creating A Web for Everyone, using the "Principles of Inclusive Design" (POUR) at the outset. Inclusive design can be seen as the cross section of good design, usability and accessibility. The Web Content Accessibility Guidelines (WCAG) 2.0 standard is introduced, which encompasses various accessibility standards in use in the U.S., the U.K., the European Union, etc. POUR uses seven principles for helping design products that work for the widest range of abilities: Equitable, Flexible, Simple, Intuitive, Perceptible, Tolerant and Considerate of Space and Effort make up the core of POUR. This chapter also focuses on Design Thinking, which emphasizes understanding the human needs first, rather than letting the technology dictate the scope or direction. Combining WCAG, POUR, universal design, and design thinking, starting with the user experience, we can make great strides in designing sites and applications that allow for the greatest possibility of use by a broad variety of users and ability levels.


Chapter 2 introduces us to the idea of People First Design by introducing us to eight different people. To those unfamiliar with the idea of "personas", this is a great introduction, and a really nice modeling of the idea. Rather than make eight abstract personas, the book outlines eight people in quite specific detail; their physical and cognitive abilities, their skill with technology and understanding/knowledge, and their attitudes (motivation, emotions & determination) are fleshed out so that we can relate to them, as well as their unique challenges and abilities/disabilities. By having so much detail, we can empathize with them as though we actually know them. A personal interjection here: even with this level of detail, personas are, by necessity, incomplete. They are stand-ins for real people. While we will have to "fill in the blanks" for a fair number of things, the more complete a persona we can make and identify with, the more likely we will be able to consider their interaction and design for them effectively.


Chapter 3 sets the stage for the idea of "Clear Purpose". Clear Purpose starts with understanding our audience, putting "Accessibility First" to ensure that the broadest group of people can use the product effectively. Universal design and equivalent use are presented as more desirable than accommodation, since accommodation usually makes for a less fulfilling experience.


Chapter 4 focuses on Solid Structure. An emphasis is placed on the various markup and presentation options, including those associated with HTML, HTML5 and WAI-ARIA, separating the content from the presentation (yes, CSS is useful for accessibility as well as for eye candy), and organizing content in a way that lets screen readers and assistive technologies get to the most important content first. Most important, sites with well defined structure help to remove barriers, and give users confidence they can find what they need when they use a site or an application.


Chapter 5 covers Easy Interaction. By focusing on making interactions easy for those with disabilities, we go a long way toward developing a product that is easier to interact with for everyone. That means supporting keyboard interactions, and having code in both HTML and CSS that leverages assistive technologies to provide more details, outline areas where the keyboard has focus, or speak to the user about where the keyboard focus currently is. Easy interaction enables users to control the interface, with large enough controls, and it avoids taking unexpected actions on behalf of users that they could take on their own. Easy interaction also includes both preventing and handling errors in an accessible way.


Chapter 6 is all about orientation and navigation, or as the book puts it, "Helpful Wayfinding". This consists of being consistent with design elements, mapping items similarly on pages so that the look and feel remains consistent, differentiating where it really makes sense to differentiate, and including information as to where a user actually is in the site (think of the bread crumb trail we often take for granted). ARIA roles work with HTML and HTML5 tags to help define where on the page the user is, so that assistive technology can reference those areas and provide meaningful alerts and signposts (a small sketch follows below). It also gives some good suggestions as to how to pattern links and navigation items, such as using action words as links, presenting links in an obvious and consistent way, including images as clickable elements, keeping the navigation process simple, and resisting the temptation to bury content in layers of submenus.
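
Here is a rough sketch of what those wayfinding hooks look like; the page and its labels are invented, but role and aria-label are the actual ARIA mechanisms the chapter describes:

    <?php /* a hypothetical template fragment; the markup below is what the PHP page emits */ ?>
    <header role="banner">TESTHEAD Book Club</header>
    <nav role="navigation" aria-label="Breadcrumb">
      <!-- the "where am I?" trail described above -->
      <a href="/">Home</a> &gt; <a href="/reviews/">Reviews</a> &gt; A Web For Everyone
    </nav>
    <main role="main">
      <h1>A Web For Everyone</h1>
      <!-- page content -->
    </main>
    <footer role="contentinfo">Contact and copyright details</footer>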


Chapter 7 focuses on Clean Presentation, or placing a focus on the visual layout, images and fonts for easy perception. To get the best effect, though, Clean Presentation takes into consideration a variety of visual disabilities, as well as allowing users the ability to customize the look and feel, using native browser options or physical controls in the site or app itself (think of the font enlarging/shrinking of a Kindle eBook as an example). Stylesheets can help considerably in this regard, and can allow the visuals to be modified both at the user preference level and at the device level (laptop vs. tablet vs. phone). Making a contrast between the text and background, and considering font size, style, weight, and spacing, as well as the contrast of images, can also help make a site more usable.


Chapter 8 gets to the heart of the matter regarding "Plain Language". This is not to be confused with "dumbing down" content; it means writing to the intended audience with words and terms they will understand. Plain language also helps drive content presentation: clear headings, columns to break up text, small paragraphs, bullet points, and use of bolding, italics and links. In general, make it a point to read your site regularly, and incorporate your personas to see what they would think of your presentation.


Chapter 9 covers Accessible Media. Much of what we post requires visual cues. Images and audio/video are the most obvious, and there are differing challenges depending on the individual and the disability. For users without sight, image rich sites can be little more than large blank canvases with a few words here and there. Audio files cannot easily be consumed by those who cannot hear. This is where aspects like alt tags for images, closed captioning for video files, transcripts for audio files, and other ways to describe the content on the page will go a long way towards letting more people get involved with the content.


Chapter 10 brings us to Universal Usability, or, put a different way, the idea that a focus on a great user experience for all can go a long way towards incorporating many accessibility ideas for everyone. Technology should not have to be fought with to get a job done, or to accomplish a goal. Well designed sites and apps anticipate user interaction, and help guide users to completion, without requiring a lot of redirection or memorizing of options to get a task completed. A popular phrase is "Don't make me think!" The site or app should allow the user to focus on their goal, not on trying to figure out how the site or app needs to have them achieve it.


Chapter 11, In Practice, looks to unify the concepts in the book into a holistic approach. More than just using the techniques described, the organization as a whole needs to buy into the value of accessibility as a lifestyle and a core business goal. Start by evaluating your current site and seeing where you're doing well and where changes would be valuable. Determine what training, skills, people power and infrastructure will help you get from A to B. Also, while personas are a great heuristic, they do not take the place of flesh and blood people dealing with the various disabilities we want to make our software accessible to. Look to interact with and develop relationships with real people who can give a much clearer understanding of the challenges, so that you can come up with better solutions.


Chapter 12 considers The Future, and what A Web for Everyone might look like. Overall goals include a web that is ubiquitous, where accessibility is part of design from the ground up. Another goal is flexibility as a first principle of design. Our interaction should be invisible, or at least stay out of our way as much as possible. The ways we get to that place will involve being more inclusive and diverse in our understanding of who uses our products, and how they use them. We'll need to make accessibility part of the way we think, not an afterthought after we've done everything else.

The book ends with three appendices. Appendix A is a brief listing of all the principles discussed in each chapter, and would make for an excellent starting point for any story workshop or test design session. This is a great set of "What if?" questions for software testers. Appendix B is a summary of the WCAG 2.0 standard and how the various sections map to the chapters of the book. If you want to get into the nitty-gritty details of the spec, or get access to other links and supporting documentation, that's all here. Appendix C gives a lengthy list of additional reading (books, links, etc.) about the topics covered in the book. If you want to know more about any specific area or heading, check here.


Bottom Line:


"A Web For Everyone" gives a lot of attention to the personas and real world examples of accessible design, which are interspersed through all of the chapters. We see their plights, and we empathize with them. These persona examples are humanizing and very helpful. They help us see the what and the why of accessibility. We see many interesting design ideas shared, but we get only a few examples of the how. Implementation is discussed, but only a few ideas are fleshed out beyond the basic prose. This is not necessarily a criticism, because A Web for Everyone does a really good job explaining the what's and the why's of accessibility design. High level design ideas and technology briefs are handled quite nicely. A variety of coding examples to show the ideas in actual practice, not so much. If you are looking for a guide to coding sites for accessibility, with exercises and examples to create, this isn't that book. If, however, you want to get inspired to make, test or promote software that can be used by a broader variety of people, this book does an admirable job.

There's Always Something You Didn't Consider

Wednesday, February 19, 2014 17:29 PM

This past weekend, I had the honor and pleasure of celebrating three new Eagle Scouts in my Troop at a Court of Honor that we held for them this past Saturday. Since the boys in question were all associated with Order of the Arrow, they have the right to have a special "Four Winds" ceremony performed for them. Since I'm associated with the Dance Team that our O.A. Lodge hosts (my primary role in O.A. is "Dance Team Advisor"), I figured it would make sense to present our Dance Team's version of this presentation.

Our Dance Team ceremony is pretty well known and regarded. We have a recorded narrative that mixes in spoken word and Native American Pow Wow songs, as well as ambient background music. We mix in multiple dance styles, representing both female and male dancers and dance styles (typically Jingle Dress, Fancy Shawl, Fancy Dance, and Grass Dance). The outfits that we have are elaborate, and they take a lot of time to put together, put on and take off. The preparation time can often run 45 to 60 minutes for a presentation that rarely lasts longer than fifteen or twenty minutes. Thus, my goal has been to engineer the process so that the materials can be put together quickly, taken apart quickly, and most important, put on and taken off quickly. To this end, I modified all of the clothing items I could to use side clip fasteners, and made them as adjustable as possible. I cut out the back of an old school backpack and attached it to the top cape of the dance outfit so that the neck bustle could be more easily put on and taken off. To make the fancy dance outfit even easier, I stitched the "angoras" (lower leg decorations that are made from sheep's hair) and the dance bells together into one piece, with the side clips and webbing to make them super easy to take on and off. I tested them on me, and on another analog (a younger scout), and figured it would work well for all concerned.

I'm guessing some of you already know where this is going, don't you ;)?

The day of the performance, we get everyone together, and I assemble everything and show them how to get into and out of the gear. Everything works flawlessly... except for one thing. The angoras for the fancy dance outfit, with their wrap sleeve, side-ring clips, and webbing, even when closed down to the absolute tightest level, were still loose on the scout doing the dance. I had figured I'd covered the skinniest possible kid I could think of. Truth be told, no I hadn't, and here he was, right in front of me, wondering what to do. I told him to grab a pair of bandannas and tie them below the bells to add some extra support and pressure. When he went out to dance, even with the added support of the bandanna, one of the sets of bells and angoras started sliding down his leg. At this point he looked at me with a mix of bewilderment and horror... "what do I do now?!" The only answer I could telegraph to him was "keep going". He saw that stopping to adjust was not an option, so he adapted his steps to minimize the view of the drooping bells, and after his performance was finished, he went to the area where he was to "stand as sentry" and stood still while the rest of the performers did their parts.

Afterwards, many of the attendees walked up to the dancer and congratulated him on an excellent performance. Not a one of them mentioned the "mishap", though his Mom later pointed out that he looked to be struggling, but adapted effectively under the circumstances. I learned that as we get closer to the end of a project or a hard deadline, we sometimes make totally innocent lapses in our thinking, and make choices that seem to be perfectly rational, but miss something important. Sometimes these events can be embarrassing, but at the same time, I told the boy in question "sure, the bells drooped, and you couldn't put on a 'perfect' performance. On the other hand, of all the boys in the Troop and the Lodge that could have been out there, you were the one who actually suited up to dance." Most people will not remember that his bells drooped. They will remember that he stepped up and did something hard, something intricate, and did a pretty darned good job.

We had a quick "retrospective" on the event, and we all talked about what we could do to make it work better the next time. I got some valuable feedback on the attachment designs I used, and how to modify them to make them even more effective and with a broader range for use. Most of all, though, I was reminded that, no matter how hard you try, no matter how much ground you cover, there's always something you didn't consider.

Book Review: The Modern Web

Friday, February 14, 2014 23:38 PM

I am zeroing in on clearing out my backlog of books that came with me on my flight to Florida. I have a few more to get through, some decidedly "retro" by now, and a few that some might find amusing. NoStarch publishes "The Manga Guide to..." series, and I have three titles that I'm working through, related to Databases, Statistics and Physics. Consider these the "domain knowledge in a nutshell" books (I'll be posting those reviews in a couple of weeks). With that out of the way ;)...

The web has become a rather fragmented beast these past twenty some odd years. Once upon a time, it was simple. Well, relatively simple. Three-tiered architecture was the norm, HTML was blocking, some frames could make for structure, and a handful of CGI scripts would give you some interactivity. Add a little JavaScript for eye candy and you were good.

Now? There's a different flavor of web framework for any given day of the week, and then some. JavaScript has grown to the point where we don't even really talk about it, unless it's to refer to the particular library we are using (jQuery? Backbone? Ember? Angular? All of the above?). CSS and HTML have blended, and the simple structure of old has given way to a myriad of tagging, style references, script references, and other techniques to manage the mishmash of parts that make up what you see on your screen. Oh yeah, lest we forget, "what you see on your screen" has also taken on a whole new meaning. It used to mean a computer screen. Now it's computer, tablet, embedded screen, mobile phone, and a variety of other devices with sizes and shapes we were only dreaming about two decades ago.

Imagine yourself a person wanting to create a site today. I don't mean going to one of those all-in-one site hosting shops and turning the crank on their template library (though there's nothing wrong with that), I mean the "start from bare metal, roll your own, make a site from scratch" kind of thing. With the dizzying array of options out there, what's an aspiring web developer to do?

Peter Gasston (author of "The Book of CSS3") has effectively asked the same questions, and his answer is "The Modern Web". Peter starts with the premise that the days of making a site for just the desktop are long gone. Any site that doesn't consider mobile as an alternate platform (and, truth be told, for many people their only platform) is going to miss out on a lot of people. Therefore, the multi-platform (device-agnostic) ideal is set up front, and the explanations of available options take that mobile-inclusive model into account. Each chapter looks at a broad array of possible options and available tools, and provides a survey of what they can do. Each chapter ends with a Further Reading section that will take you to a variety of sites and reference points to help you wrap your head around all of these details.

So what does “The Modern Web” have to say for itself?

Chapter 1 describes the Web Platform, sets the stage, and talks a bit about the realities that have led us to what I described in the opening paragraphs. It's a primer for the ideas that will be covered in the rest of the book. Gasston encourages the idea of the "web platform" and that it contains all of the building blocks to be covered, including HTML5, CSS3 and JavaScript. It also encourages the user to keep up to date on the developments of browsers: what they are doing, what they are not doing, and what they have stopped doing. Gasston also says "test, test, and then test again", which is a message I can wholeheartedly appreciate.

Chapter 2 is about Structure and Semantics, or to put a finer point on it, the semantic options available now for structuring documents using HTML5. One of them has become a steady companion of late, and that's the Web Accessibility Initiative's Accessible Rich Internet Applications, or WAI-ARIA (usually shortened to ARIA by yours truly). If you have ever wanted to understand Accessibility and the broader 508 standard, and what you can do to get a greater appreciation of how to enable it, ARIA tags are a must. The ability to segment the structure of documents based on content and platform means that we spend less time trying to shoehorn our sites into specific platforms, and instead make a ubiquitous platform that can be accessed depending on the device, creating the content to reside in that framework.


Chapter 3 talks about Device Responsive CSS, and at the heart of that is the ability to perform "media queries". What that means is, "tell me what device I am on, and I'll tell you the best way to display the data." This is a mostly theoretical chapter, showing what could happen with a variety of devices, and leveraging options like mobile first design.

Chapter 4 discusses New Approaches to CSS Layouts, including how to set up multi-column layouts, taking a look at the Flexbox tool and the way it structures content, and leveraging the Grid layout so familiar to professional print publishing (defining what a space is, where the space is, and how to allocate content to a particular space).

Chapter 5 brings us to the current (as of the book's writing) state of JavaScript: today's JavaScript has exploded with available libraries (Gasston uses the term "Cambrian" to describe the proliferation and fragmentation of JavaScript libraries and capabilities). Libraries can be immensely useful, but be warned, they often come at a price, typically in the performance of your site or app. However, there is a benefit to having a lot of capabilities and features that can be referenced under one roof.

Chapter 6 covers device APIs that are now available to web developers thanks to HTML5 and friends: options such as Geolocation, utilizing Web Storage, using utilities like drag and drop, accessing the device's camera and manipulating the images captured, connecting to external sites and apps, etc. Again, this is a broad survey, not a detailed breakdown. Explore the further reading if any of these items is interesting to you.

Chapter 7 looks at Images and Graphics, specifically Scalable Vector Graphics (SVG) and the canvas option in HTML5. While JPEGs, PNGs and GIFs are certainly still used, these newer techniques allow for the ability to draw vector and bitmap graphics dynamically. Each has its uses, along with some sample code snippets to demonstrate them in action.

Chapter 8 is dedicated to forms; more to the point, it is dedicated to the ways that forms can take advantage of the new HTML5 options to help drive rich web applications. A variety of new input options exist to leverage phone and tablet interfaces, where the input type (search box, URL, phone number, etc.) determines in advance what input options are needed and what to display to the user. The ability to auto-display choices to a user based on a data list is shown, as are a variety of input options, such as sliders for numerical values, spin-wheels for choosing dates, and other aspects familiar to mobile users, all of which can now be called by assigning their attributes to forms and applications. One of the nicer HTML5 options related to forms is that we can now get client side form validation; where before we needed to rely on secondary JavaScript, now it's just part of the form field declarations (cool!).
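
A quick sketch of those declarative fields, as they might appear in one of my own PHP pages (the field names and the signup.php target are invented; the type, required, min and max attributes are the HTML5 features doing the validating):

    <?php /* a signup form fragment from a hypothetical PHP page */ ?>
    <form action="signup.php" method="post">
      <input type="email"  name="contact" required>        <!-- browser insists on an email shape -->
      <input type="number" name="copies" min="1" max="10"> <!-- numeric spinner on many devices -->
      <input type="range"  name="rating" min="0" max="5">  <!-- slider -->
      <input type="date"   name="arrival">                 <!-- date picker / spin-wheel on mobile -->
      <input type="submit" value="Sign up">
    </form>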

Chapter 9 looks at how HTML5 handles multimedia directly, using the audio and video tags, and the options that allow the user to display a variety of players, controls and options, as well as to utilize a variety of audio and video formats. Options like subtitles can be added, as well as captions displayed at key points (think of those little pop-ups in YouTube; yep, those). There are several formats, and of course, not all are compatible with all browsers, so the ability to pick and choose, or use a system's default, adds to the robustness of the options (and also adds to the complexity of providing video and audio natively via the browser).

Chapter 10 looks at the difference between a general web and mobile site, and the processes used to package a true "web app" that can be accessed and downloaded from a web marketplace like the Google Store. In addition, options like PhoneGap, which allows for a greater level of integration with a particular device, and AppCache, which lets a user store data on their device so they can use the app offline, get some coverage and examples.

Chapter 11 can be seen as an epilogue to the book as a whole, in that it is a look to the future and some areas that are still baking but may well become available in the not too distant future. Web Components allow blocks of markup to be reused and enhanced while living in a space protected from standard CSS and JavaScript. CSS is also undergoing some changes, with regions and exclusions allowing more customizable layout options. A lot of this is still in the works, but some of it is available now. Check the Further Reading sections to see what, and how far along.

The book ends with two appendices. Appendix A covers browser support for each of the sections in the book, while Appendix B is a chapter-by-chapter gathering of Further Reading links and sources.

Bottom Line:


The so-called Modern Web is a mish-mash of technologies, standards, practices and options that overlap and cover a lot of areas. There is a lot of detail crammed into this one book, and there's a fair amount of tinkering to be done to see what works and how. Each section has a variety of examples and ways to see just what the page/site/app is doing. For the web developer who already has a handle on these technologies, this will be a good reference-style book to examine, with further details in the Further Reading (really, there's a lot of "Further Reading" that can be done!).

The beginning web programmer may feel a bit lost in some of this, but with time and practice with each option, it gets more comfortable. It's not meant to be a how-to book, but more of a survey course, with some specific examples spelled out here and there. I do think this book has a special niche that can benefit from it directly, and I'm lucky to be part of that group. Software testers, if you'd like a book that covers a wide array of "futuristic" web tech, with the positives, negatives, and potential pitfalls that matter to a software tester, this is a wonderful addition to your library. It's certainly been a nice addition to mine :).

TECHNICAL TESTER FRIDAY: In Praise of Virtual Machines, and Tweaking with PHP

Friday, February 14, 2014 19:14 PM

It's Friday, one week into this project, and as I mentioned in an earlier comment, I reserve the right to go back and change my mind about any of the options I've worked with and what I've put together. Today, I am doing exactly that.

For those who read last week's entry, I said it would be worth your time to install the needed software on your base machine, so that you could get a feel for each of the components and what it takes to do that. While I still think there's value in doing that, after having to uninstall, reinstall, unconfigure, reconfigure, modify, point somewhere else, change options again, and then notice that my hardware machines just don't quite line up the way I expect them to, I have decided to heed the call of so many who left comments on my post from last week.

In this first block of stuff, I hereby wholeheartedly recommend that you set up a virtual machine to do this work. Set up several if you'd like. Your sanity will be preserved, and you'll have a few extra benefits:

- you can play "what if" with multiple machines if you choose
- if you decide to use a Linux virtual machine, the CPU, memory and disk footprint needed to run the VM is really small
- applications like VirtualBox and VMware Server allow for saving states and taking snapshots. It's sort of an on-the-cheap version control, and it can save you from shooting yourself in the foot, much more so than using your base environment to do all of this (see the sketch just after this list)
- set up Dropbox or some other file share location, and you're golden; everything that matters gets placed in a spot where it can be accessed as you need it
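
To make that "on-the-cheap version control" point concrete, here's a minimal sketch using VirtualBox's VBoxManage command line tool (the VM name "webdev" and the snapshot name are placeholders of mine, not anything from Noah's material):

VBoxManage snapshot "webdev" take "fresh-install"      # save the current machine state
VBoxManage snapshot "webdev" list                      # see what snapshots exist
VBoxManage snapshot "webdev" restore "fresh-install"   # roll back after an experiment goes sideways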

I had every intention of bringing VMs into the conversation at some point, but this past week made me decide now was the best time. If you want to do these exercises on both a hardware machine and a VM, you'll learn a lot. You'll learn a lot by sticking your tongue on an icy pole in the dead of winter, too. I'll leave it as an exercise to the reader to decide if some learning is best done vicariously. In any event, if you've taken the VM route with this, smart move; you won't regret it.

Next step is to set up a site with PHP. That would be great if I knew enough PHP to set up a site. Today, I can say I almost know enough to do that. PHP insertion is super easy. It's just a pair of tags, in this case <?php to open and ?> to close; anything between the tags is PHP doing the work, and the basic code would look very familiar to anyone who has ever written a sample program in C or another language:

<?php
// the dot (".") is the concatenation operator (like + in JavaScript)
echo "Say" . " something" . " witty!";
$teabags = 0;
if ($teabags > 0) {
  echo "There are $teabags tea bags! I'll have a cup!";
} else {
  echo "No more tea! I guess I won't have a cup.";
}
?>

If you create the front-end page to evaluate PHP, it will happily do so. Note: you will need to save your pages with a .php extension, instead of .html, to do this. If there are other ways to do this, be patient with me, I haven't gotten that far yet ;).

There are a lot of different resources for PHP available, and one of the quickest to play with and try out is the Codecademy course. It covers the basics of the language, as well as how to put snippets of PHP into a web page. Another quick tutorial for building basic site elements can be found at W3Schools, that perennial old school favorite of web arcana, and the PHP site itself likewise has a fairly quick overview of how to make pages with PHP. Right now I'm playing around with a few elements to see what I can do to create some dynamic content and reference things like images and navigation elements.

Noah says to allow yourself a couple of weeks to get familiar with the language elements and practice making some simple pages. I'm going to reiterate that advice. It's really tempting to get greedy quick and want to do too much, or to try to accomplish too many things at the same time. Noah's lesson 2 focuses on finessing HTML, CSS and JavaScript, so save those for lesson 2. Focus in the short term on learning PHP basics and on incorporating a whole bunch of the options into a few web pages.

Also, save these snippets in a side file with some visible explanations, either in the HTML source or on the page itself. Why? Because this can be a running note tab to remind you of things that work and why. Make a tutorial page for PHP, using mostly PHP. Get a little meta. Put the page in a place you can readily access and review. Right now, I'm doing a bit of copy/paste to add elements and examples (oh, for shame! I know, I'm breaking the Zed rule #1 here). Focus on understanding what you are doing first, *then* go back and see if there are ways to move these into include files to make things more efficient and refactor. Yes, I'm saying incur a little technical debt right here. That's OK, it'll make the refactoring portion of this project a little more interesting ;).

My goal for this extended weekend and into next week is to get a mockup of a site, not just a page, that includes these PHP aspects, and start playing with them. Now, of course, comes the tough part... how to make a site that will interest me enough to engage, but not be so detailed as to cause me to climb down too many rat holes. Stay tuned for next week's edition of TECHNICAL TESTER FRIDAY to see how well I did ;).

What Force Would it take to Shatter an Ice Cube?

Thursday, February 13, 2014 21:19 PM

The question above was prompted by a Skype Coaching session I held a couple of days ago. A new tester contacted me and asked how they might get started with software testing, and what they should do first.

Almost immediately the person who contacted me (I haven't asked their permission to share all the details, or their name, so I won't) asked me questions about automation tools, and what programming languages they should know and work on. I asked them to stop, and I shared my philosophy about testing, which should have as little to do with programming as possible. Don't get me wrong, programming is a perfectly wonderful skill, and I dabble in it at times, but I think we do a disservice to those who want to be testers when we make the primary requirement "must be a programmer with these skills and history". What I want to know is "How does a person think? What kind of avenues do they follow? Are they random, or do they hang things on a structure that others can examine, review, and comment on or critique?"

It was at this point that I decided to ask about something I believe every tester should know, and that's "what do you know about the scientific method?" Yes, for those who have followed me for a while, you already know that this is one of the topics we are developing for SummerQAmp, but one of the things I wondered was "how could someone actually demonstrate they understand this?" To that end, the question popped into my head, and I figured I might as well run with it.

"What Force Would it take to Shatter an Ice Cube?"

It's a random question, and that's what I wanted, something random and mostly removed from software testing. It's often too difficult to step in and think about making a science experiment out of software, because it feels so intangible. Physical objects, though, are perfect for these thought experiments, so I figured we'd use an ice cube as a starting point for the conversation.

What Force Would it take to Shatter an Ice Cube?

How would someone answer that?
They could make a guess.
They could throw an ice cube at a wall and say "that much force"... and they'd be right.... sort of ;).

But if we really wanted to know, with actual data, at what point an ice cube shatters...

We'd probably set up an experiment, right?
What would we want our experiment to tell us?
Could we determine the point where it happens?
How would we do so?
What would we need to make the experiment happen?
How would we make measurements? Do we actually need measurements?
What will our feedback be? What will tell us how we are doing?
How are we gathering data?
How would we explain our results?
Can we "defend" our methodology?
What if new information came our way; could we account for that?
Would we need to repeat or redesign the experiment?

I explained that the process of thinking this through, or even setting up an experiment and actually doing it, would inform them more about their own understanding and curiosity of things than any primer on testing I could give them.

I remember last year discussing this for the first time with James Pulley, and how he said that the Scientific Method has to be Lesson Zero for any potential software tester. The more I consider this, the more I agree. Time will tell if the person who started this conversation follows up, but I think if they actually do this, and really think about this process in depth, as well as do some reading on the Scientific Method and why it matters (and that it's not helpful for everything), then we'll be quite a ways down the road to understanding some of the foundations of what it takes to even start a conversation on what makes a good tester.

Agree? Disagree? Better experiments to suggest? I'm all ears :).

Book Review: Perl One-Liners

Thursday, February 13, 2014 06:56 AM

Remember when I said I was going to be on a plane for a combined total of 12 hours, and I was going to use it to work through a bunch of books? This is a continuation of that airline readathon. NoStarch has been really generous and given me a bunch of books to review, so in case you are wondering why there are so many NoStarch titles in a row, well, now you know why :). 

There’s a certain cachet that comes with being able to hack up Linux, Darwin or UNIX boxes. Being able to write scripts is immensely helpful, but there’s always that knowing glance, that little nod, that holdover from the days of “Name That Tune”, where instead of saying “I can name that tune in one note”, the command line geek smiles and says "I can take care of that task with one line”.

Granted, those "one-liners" are often rather involved. Lots of pipes and tees and redirects, to be sure, but somehow, they can still be said to be "one line fixes" or "one line scripts". I currently work in an environment where Perl is still in active rotation. I used to write CGI programs in Perl once upon a time (and still maintain some of them to this day). I've always appreciated the ability to do things in one line, and in many ways, it's a neat way to learn some of the more oddball syntax options of both the shell and a given utility, and actually put them into use. All of this is me building up to the fact that, when I saw the listing for Peteris Krumins' "Perl One-Liners: 130 Programs That Get Things Done", it just begged for me to say "Oh please, let me review this!"

The back cover makes the following claim:

Save time and sharpen your coding skills as you learn to conquer those pesky tasks in a few precisely placed keystrokes with Perl One-Liners.

So how does it stack up?


Chapter 1 is meant to orient the reader toward the idea of a one-line fix or utility. Fact is, a lot of what we do are one-offs, or tasks that we may need to do one time, but for dozens or even hundreds of files. Some knowledge of Perl is helpful, but many of the commands can just be typed in as is; examine the changes made, and work backwards. Windows users need to do a little tweaking to some of the commands, and Appendix B is there to help you do exactly that. Also, if you see examples that make you want to scratch your head (trust me, you will), there's always perldoc, which will explain those areas you struggle with.
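
For instance, if a flag or special variable in an example leaves you scratching your head, these two standard perldoc pages cover most of what the one-liners lean on (these are stock Perl documentation, nothing specific to the book):

perldoc perlrun   # explains command line switches like -e, -n, -p and -l
perldoc perlvar   # explains special variables like $_ and $.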

Chapter 2 looks at spacing, or more to the point, giving you control over just how much of it you want or can see. Each example explains the steps and the actions each command will perform. Various command line options get put to work, like -e, which lets you supply code right on the command line, and the looping switches that run that code against every line in a file, so a single flag sets up an entire while loop for you. Lots of fascinating variations, each one doing something interesting, some easy to understand, and some bringing cryptic to a new level ("perl -00pe0" anyone? Yes, Peteris explains it, and yes, it is pretty cool).
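
To give a flavor of the spacing tricks, here are a couple of quick sketches in the same spirit (my own examples, not necessarily the book's exact ones; file.txt is a placeholder):

perl -pe '$_ .= "\n"' file.txt          # double-space a file
perl -ne 'print unless /^$/' file.txt   # strip out all blank lines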

Chapter 3 covers Numbering, and a variety of quick methods to play with and tweak the numbering of lines and words. The "$." special variable gets covered (thinking it might have something to do with the line number of whatever you are interacting with? You'd be right!), as well as a game called "Perl golfing", which I guess is the Perl equivalent of my aforementioned "Name That Tune".
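
A couple of sketches of the sort of thing $. makes easy (again, my own examples, with file.txt as a stand-in):

perl -ne 'print "$. $_"' file.txt          # number every line, a poor man's cat -n
perl -ne 'END { print "$.\n" }' file.txt   # print the line count, a poor man's wc -l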

Chapter 4 deals with Calculations. Want to do on-the-fly counting? Want to shuffle elements? Figure out if numbers are prime or non-prime? Determine a date based on an input? This chapter's got you covered.
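
In the same vein, here's roughly what those calculation one-liners look like (my sketches; numbers.txt is a hypothetical file with one number per line):

perl -le 'print 2 ** 32'                                  # quick arithmetic at the prompt
perl -lne '$sum += $_; END { print $sum }' numbers.txt    # sum a column of numbers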

Chapter 5 focuses on Arrays and Strings. Want to create your own password generator? Figure out ranges? Determine what a decimal value is in hex, and vice versa? Oooh, I know, how about creating strings of various characteristics to use for text input (come on, testers, you were all thinking it ;) )? Maybe generating your own personal offset code to semi-encrypt your messages sounds like fun. There's plenty here to help you do that. Of course, if that last one seems interesting, the next chapter offers some better options ;).
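
For example, a quick-and-dirty password generator and a decimal-to-hex conversion might look like this (my sketches, and please don't use the first one for real passwords):

perl -le 'print map { ("a".."z", 0..9)[rand 36] } 1..12'   # 12 random alphanumeric characters
perl -e 'printf "%x\n", 255'                               # decimal 255 in hex: ff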

Chapter 6 deals with Text Conversion and Substitution. Apply base64 encoding and decoding (OK, much more effective, but maybe not quite as fun as making your own raw version), create or deconstruct HTML, build up or break apart URLs, and meet a bunch of operators that allow for all sorts of interesting string manipulations.
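
A taste of the conversions, using the core MIME::Base64 module (my own sketches; file.txt is a placeholder):

perl -MMIME::Base64 -e 'print encode_base64("secret message")'          # encode a string
perl -MMIME::Base64 -le 'print decode_base64("c2VjcmV0IG1lc3NhZ2U=")'   # and decode it again
perl -pe 'y/a-zA-Z/A-Za-z/' file.txt                                    # swap upper and lower case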

Chapter 7 covers Selectively Printing and Deleting Lines. Typically, a one-liner that can selectively print can be tweaked to selectively remove, and vice versa. Look for and print lines that repeat, or those that match a particular pattern (and of course, apply similar rules to remove lines that meet your criteria as well).
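
For instance (my sketches; app.log is a placeholder, and the -i.bak switch keeps a backup of the original before editing in place):

perl -ne 'print if /ERROR/' app.log              # print only the matching lines
perl -i.bak -ne 'print unless /DEBUG/' app.log   # delete matching lines in place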

Chapter 8 brings us to Useful Regular Expressions, and Perl has a monster of a regular expression engine. Rather than give an exhaustive rundown, Peteris focuses on regex patterns that we might use, well, regularly. Finding and matching IP addresses and subnet masks, parsing HTTP headers, validating email addresses, extracting and changing values and many others. This chapter alone is worth the price of purchase.
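
As a rough illustration of the kinds of patterns covered (my own simplified versions; the chapter's real patterns are more robust, and the file names are placeholders):

perl -ne 'print if /^(\d{1,3}\.){3}\d{1,3}$/' ips.txt             # naive IPv4 match (doesn't validate the 0-255 range)
perl -ne 'print "$1\n" if /(\w[\w.+-]*@[\w.-]+\.\w+)/' mail.txt   # rough email address extraction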

The book ends with three appendices. Appendix A gives a listing of Perl's Special Variables and how to use them in context (with more one-liner examples). Appendix B is all about Perl One-Liners on Windows, including setting up Perl, getting Bash on Windows, running the one-liners from the command prompt or PowerShell, and the oddities that make Windows, well, Windows, plus how to leverage the book in that ever so fascinating environment. Appendix C is basically a printout of all the Perl one-liners in the book. It can be downloaded and examined in one file as a quick "how do I do that again?" reference.

Bottom Line:


This is a tinkerer's dream. What's more, it's a book that you can grab at any old time and play with for the fun of it, and yes, this book is FUN. You may be an old hand at Perl, you may be a novice, or maybe you've never touched a line of Perl code in your life. If you want to accomplish some irksome tasks, it's a good bet there's something in here that will help you do what you need to. If the exact match isn't to be found, it takes very few jumps to get to something you can use and work with, novice and guru alike. You might think it would be ridiculous to say to yourself "hey, I have a few minutes to spare, I think I'll try out some Perl One-Liners for fun". Maybe I'm weird, but that's exactly what I find myself doing. It's the coder's equivalent of a Facebook game, but be warned, a few minutes can become a couple of hours if you're not careful. Yes, that's hyperbole, and at the same time, it isn't. Perl One-Liners is seriously fun, and a wonderful "tech book" for the short attention span readers among us. You may find yourself turning to it again and again. My advice: just roll with it :).