Back to Basics: How to use the Windows Command Line

Monday, February 17, 2014 16:16 PM

Those of us who have worked with computers for most of our lives take the command line for granted. We know it exists, we know basically how to use it, and we know how to find the commands we need even if we can't remember them.

But not everyone knows how to use the command line. I've had quite a few questions on the various courses I conduct from people who have no familiarity with it. And the worst part was, I could not find a good resource to send them to in order to learn it.

As a result, I created a short, six-minute video that shows how to start the Windows command line, change to a specific directory, run some commands, and find out more information.



Start the command line by:

  • clicking Start \ "Command Prompt"
  • Start \ Run, type "cmd"
  • Start \ search for "cmd"
  • Win+R, type "cmd"
  • the Windows PowerToy "Open Command Window Here"
  • Shift + Right Click on a folder - "Open Command Prompt Here"
  • typing "cmd" in the Explorer address bar (Win+E, navigate, type "cmd")
  • the Windows 8 dashboard - type "cmd"
Change to a directory using "cd /d " then copy and paste the absolute path from Windows Explorer.

Basic Commands:
  • dir - show directory listing
  • cd .. - move up a directory
  • cd directoryname  - change to a subdirectory
  • cls - clear the screen
  • title name - retitle a command window
  • help - what commands are available
  • help command - information on the command
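Putting the basics together, a typical short session might look something like this (the directory names and user name are made-up examples):

```
C:\Users\Alan> cd /d C:\work\project
C:\work\project> dir
 ... directory listing appears ...
C:\work\project> cd subfolder
C:\work\project\subfolder> cd ..
C:\work\project> help cd
 ... usage information for cd appears ...
C:\work\project> cls
```

The prompt always shows the current directory, so you can see where you are at each step.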

If anyone wants more videos like this then please either leave comments here, or on YouTube and let me know. Or if you know of any great references to point beginners at then I welcome those comments as well.

Introducing Virtualbox modern.ie Turnkey Virtual Machines for Web Testing

Thursday, February 13, 2014 22:02 PM

My install of VirtualBox prompted me to update today. And I realised that I hadn't written much about VirtualBox, nor could I find any videos I had created about it.

Which surprised me since I use Virtual Machines. A lot.


No matter, since I created the above video today.

In it, I show the basic install process for VirtualBox, a free virtualisation platform from Oracle which runs on Windows, Mac and Linux.

I also cover Modern.IE, which I know I have mentioned before: the Microsoft site where you can download virtual machines for each version of MS Windows - XP through to Windows 8 - with a variety of IE versions.

Perfect for 'compatibility' testing - the main use case I think Microsoft envisioned for the site. Or for creating sandbox environments and for running automation against different browsers, which I often use it to do.

I even mention TurnkeyLinux, where you can find pre-built virtual machines for numerous open source tools.

In fact, the version of RedMine that I used on the Black Ops Testing Workshops, to demonstrate the quick automation I created, was installed via a TurnkeyLinux virtual machine.

Oracle even hosts a set of pre-built virtual machines.

A New Feature in VirtualBox (that I only noticed today)

I noticed that some new functionality had crept into VirtualBox today.

The cool 'Seamless Mode', which I had previously seen in Parallels on the Mac (as 'Coherence' mode) and in VMware Fusion on the Mac (as 'Unity' mode). This allows windows on the virtual machine to run as though they were 'normal' windows on your machine - not constrained within the virtual machine window.

I love this feature. It means I no longer have to keep switching in and out of a VM Window and can run the virtualised apps alongside native apps. And with shared clipboard and drag and drop, it seems too easy to forget that I ran the app from a VM.

If you haven't tried this yet: download VirtualBox, install the Win XP with IE6 VM, and then run it in 'Seamless' mode so you have IE6 running on the desktop of your shiny whiz-bang monster desktop. Try it. Testing with IE6 becomes a fun thing to do - how often do you hear that?



How to emulate mobile devices using Chrome browser

Saturday, February 08, 2014 07:56 AM

Google Chrome continually changes, which usually means good news as new features appear. Unfortunately sometimes it means changes to our existing workflow.
This happened recently when Google released a new version of Chrome, but moved the Emulator settings.
I eventually found them, and show you how in the video below:

Or for those of you that prefer to read, read on. I've added references at the bottom.
We have to start by using the Overrides in the Chrome developer tools settings. All the emulation used to exist here, but it has moved.
Right click and Inspect to show the developer tools. Then click the cog on the right to show the Settings, and view the Override settings.
So the first thing we do is make sure that we have checked "Show 'Emulation' view in console drawer".
Great.
So now where is the console drawer?
Close the settings and in the dev tools on any of these tabs, we can display the "Console drawer" by pressing the escape key, and lo the drawer did appear and an emulation tab was present.
And we can use the emulation tab to help us test.
In the demo video I show this in action on the bbc site.
Choose a device to emulate. I pick the "Samsung Galaxy Note II" because I have a physical device for that on my desk, and if I encounter any issues I can try the same functionality on my device.
Choose Device, Click Emulate, and you can see the screen size refreshes to a scaled smaller size.
You can amend the display settings using the 'Screen' options. By default it is shown scaled, but you can make it full size if you want.
But we still don't have the mobile site yet. So I refresh the screen. Using Ctrl+F5. And because Chrome is now sending the correct mobile headers for the Note II, we are directed to the Mobile site.
And now the issues.
I try and use the site. Click on the links. And nothing happens.
So, I change sensors and switch off the emulate touch screen. And we have a working site again.
This works on the Note, so it might be a BBC issue, or it might be a Chrome issue. But really it shows us the problem with testing through emulation: when we find suspected issues, we have to replicate them on a better emulator or a physical device.
But the Chrome emulation is so convenient on the desktop that, for a first run check on the site, and certainly for checking how your server responds to mobile headers, it is a great first step.
And you can stop the emulation by clicking the [Reset] button.
In the video I show a bonus, which I thought was an emulator bug, but seems to be by design by the BBC, where the Weather page does not redirect.
Chrome emulation? Very easy way to run a first check on the site, if you know how to access the functionality.
Additional References:

Software Development Summit 2013

Tuesday, December 17, 2013 14:03 PM

I attended the Software Development Summit December 2013 in Helsinki.

I was fortunate in being asked to perform a keynote, and asked to fill in for a keynote speaker who unfortunately couldn't attend, so I did two keynotes. Lucky me.

You can find slides for the talks listed over on my Compendium Developments site.

I managed to catch up with Kristian Karl and learn more about the CI and testing regime at Spotify; you can watch his Eurostar conference Experiences of Test Automation webinar online (Q&A).

I was able to quickly hang out with a few of the Twitter-enabled attendees: Johan Atting, Johan Jonasson, Aleksis Tolonen, and my fellow Eurostar committee member Maaret Pyhäjärvi.

I did receive a pointer to FreeNest - an open source platform put together by students which is designed to help teams get up to speed with a set of collaboration tools on Ubuntu quickly.

But it was all over very quickly and I had very little time to chat with Ilari Aegerter or Gojko Adzic.

The only downside I had was that I had to miss Gojko's Keynote because I thought I was flying out early. Of course the London Fog had other ideas so instead of learning from Gojko, I was stuck at the airport instead of enjoying the end of the conference :).

SIGIST 2013 Panel - Should Testers Be Able to Code

Tuesday, December 17, 2013 13:24 PM

I attended the SIGIST in December because I was asked to be part of a Panel with the starting discussion title of "Should testers be able to code?".

I was on the panel with Dorothy Graham, Paul Gerrard and Dr Stuart Reid



Initial Notes

I wrote the following in an email to the other panel members during the run up to the SIGIST. It wasn't polished but represented my notes and pre-conf prep.

....

I pretty much have to ignore the title "Should testers be able to code?"

In my mind "Should", at that point in the sentence equals "An obligation to..."

We don't work in an industry where testers have an obligation to code. So any question about obligation has no place in my reality.

I've met developers who do not seem to feel they have an obligation to be able to code. Similarly I've met testers who do not seem to feel they have an obligation to be able to test, or managers who do not seem to feel they have an obligation to be able to 'manage'. The process of Software Development has a lot of fungibility built in and can work with many different skill sets and skill levels on the project.

Personally, I do know how to code, and while I can code some things as well as professional developers I consider my coding skills intermediate. Therefore I hope to phrase my answers in the form "When testers can code the advantages are ...", "When testers can not code the disadvantages are ...", and "My experience of having up-to-date and intermediate coding ability has been ...".

I do hold some opinions that might pop up:
  • "Testers who can not code, should not write automation code" 
  • "Testers who can not code well, will write worse automation code than testers who can code well."
  • and I suspect that "Many failed test automation programmes are a result of testers not knowing how to code". 
  • "I've also seen developers write awful automation code, I think automation code may require some different coding and design styles than application code."
I've worked with enough teams and reviewed enough automation code over the years to have some evidence base for those opinions. But we have an industry that has pretty low expectations around automation skillsets and automation, and for some reason has lumped most 'automation' in the 'testing' realm.

But this panel isn't about automation. Automation != Testing.

Much of my recent on-line work has been about lowering the barriers to entry for people who do want to code or develop more technical skills. I prefer to help someone do something, if they express an interest.

Personally I try to learn as much as possible about the process, and skill sets involved in, Software Development. One small part of that involves 'coding'. Other parts include "Architecture", "Design", "Databases", "Modelling", "Protocols", "Tools", "Estimation", "Planning", "Communication" etc. etc. etc.

I don't think that any role ("Tester", "Developer", "Analyst") has exclusive right to a set of skills, techniques and knowledge: "testing", "coding", "modelling", "analysis" etc.

I value diverse skills across the team.

On the Panel

My brain, working the way it does, has forgotten most of the questions and answers, so I revisit the memory with some degree of trepidation - false memories may well rear their heads.

I think many of the above notes were covered during the Q&A. I was able to pull on some of the material from my Helsinki Talk on 'Experiences with Exploratory Testing...' because someone in the audience said "Developers should not test their own code", and so I was able to riff off the T-Shirt slogan slide from that presentation.

Basically - statements like "Developers should not test their own code" and "Developers do not make good testers" are the kind of statements people post on internet forums, and we should relegate them to T-Shirt slogans so we can mock them and laugh at them. Developers do test their own code, and they can learn to test better. The sooner the 'test community' wipes this nonsense from its collective meme pool the better.

Because we were sitting down on the panel, my opinions were phrased in a less confrontational manner and with more humour than might appear from the written form on this page.

I remember saying things like:

  • Projects depend on teams. So we need the team to have a diverse set of skills. And when we build teams, look at the gaps in the skillsets.
  • Keep investing in your staff to make sure they keep expanding and improving their skillsets.
  • Becoming a better programmer has helped me test better.
  • Becoming better at testing has helped me write better code.
  • I recommend the book "Growing Object-Oriented Software, Guided by Tests"
  • Teams are systems. As soon as you add a team member, or mandate that they do something, you change the system. Keep looking at and evaluating the system.
  • Programming means lots of different approaches because there are different styles: OO, Functional, Procedural, etc. They require different skills and models
  • Modelling is a vital skill for testers

We on the panel certainly had fun, I hope the discussions and alternative view points added value to the audience.

End Notes

I think one message that came through from everyone on the panel was that testers need to have the ability to demonstrably add value to projects.

Having the role 'tester' does not mean you automatically add value.

Having the ability to write code does not mean you automatically add value (you might write really bad code).

Each tester needs to identify how they can add value.

The current market vogue is for testers with coding skills.

If that isn't your thang. Then it might mean becoming an expert in UX and psychology so that you can add more value for user focused testing. It might mean a whole bunch of things.

Each tester needs to figure out what they can do to add value, and what they can do to demonstrate their capabilities to the teams or potential employers.

And keep improving.


How to use Jira to subjectively track and report daily on your testing?

Friday, November 22, 2013 12:36 PM

A long time ago I coded a now defunct modelling tool to help me with my testing. Half the battle with managing and reporting testing involves deciding how you will model it for the project you work on.

Generic Modelling


I often map the generic set of formal modelling techniques I use on to:
  • Entities
  • Lists
  • Hierarchies
  • Graphs
When using Jira, I have access to Entities and Lists.

Lightweight Subjective Status Reporting


On a recent project we wanted a lightweight way of tracking progress/thoughts/notes over time. I really wanted a subjective 'daily' summary report which gave interested viewers insight into the testing without their having to ask.

As part of my normal routine I have become used to creating a daily log and updating it throughout the day. Ofttimes creating a summary section that I can offer to anyone who asks.

How to do this using Jira?


We created a custom entity called something similar to "Status Tracking Summary".

Every day, someone on the team would create one of these, and title it with the date, e.g. "20 November 2013".

We only really cared about the title and the description attributes on the entity.

The description took the form of a set of bullets that we maintained over the day to document the status e.g.

- waiting for db schema to configure environment
- release 23.45 received - not deployed yet
- ... etc.

Over the day we would maintain this, so at the end of the day it might look like

- db schema and release 23.45 deployed to environment
- initial sanity testing started see Jira-2567
- ... etc.

I initially thought that the title would change at the end of the day to represent a summary of the summary e.g. "Environment setup and sanity testing", "Defect retesting after new release". But this never felt natural and added no real value so the title normally reflected the date.

Typically, as a team of 3-4, we had 5 - 15 bullets on the list.

Use Dashboards to make things visible


To make it visible, we added a "Filter" on this entity, and added a Filter display gadget to the testing dashboard which displayed the last 2 status updates.

This meant that anyone viewing the testing dashboard could see subjective statements of progress throughout the day, and historical end of day summaries throughout the project.
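As a sketch, such a filter's JQL might look something like the following - assuming the summaries were created as issues of a custom type named "Status Tracking Summary" (the project key here is a made-up example):

```
project = MYPROJECT AND issuetype = "Status Tracking Summary" ORDER BY created DESC
```

A gadget configured to show the first two results of that filter then displays the current and previous day's summaries on the dashboard.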

But people don't like writing reports


I have grown so used to tracking my day through bullets and actions that I take it for granted that everyone can do this. Still, I had initial concerns that not everyone on the team would add to the status and that I might have to chase.

Fortunately that didn't happen.

The team used the Dashboard throughout the day to see what defects they had allocated to them, and to work on tasks and defects in the appropriate state. Therefore they always saw the subjective daily status report when they visited the Dashboard and updating it became a natural task during the day. 

You can report daily, with minimal overhead


Very often stakeholders ask us to prepare daily reports. I find that creating, and updating, a summary log throughout the day often satisfies that requirement. 

As a team, building it into our documentation process throughout the day added very little overhead and made a big difference to the visibility stakeholders had of our testing.

iOS Screen Capture, Streaming and ScreenRecording tools for Mobile Testing

Friday, November 01, 2013 18:40 PM

I listed the results of my investigation into Android Screen Capture, Streaming and ScreenRecording tools for Mobile Testing, now time to turn to iOS.

Note that I'm not really covering static screen capture here, since shortcuts for static capture are built into each operating system.

iOS is pretty locked down. And without jail breaking, your options are limited.

However, iOS has a built in screen sharing capability called AirPlay, designed to be used for streaming to your Apple TV. But that hasn't stopped some enterprising developers building AirPlay servers for both Windows and Mac OS computers.

Both applications are easy to use and offer much the same capability, so which you choose will depend on your evaluation on the machines you use.

Both are insanely affordable. And both have a version for Windows and Mac OS.

AirServer offers more configuration options, although it worked fine out of the box for me. AirServer offers a 7 day trial.

Reflector offers a trial where you can start as many sessions as you want, but each session only lasts for 10 minutes.

To capture your on-iOS testing, 'airplay' the iOS screen to your desktop or laptop computer, and then use a screen recording tool like Camtasia or BB FlashBack (or its big brother BB TestAssistant) to capture the screen movies there.

This doesn't let you interact with the actual iOS device from your computer, but goes some way to making your testing recordable and reportable.

FAQ: How do your books "integrate" with your courses?

Friday, November 01, 2013 17:07 PM

Dear Alan
I'd like to embark on learning from your books and online course but should I do one before the other? Or does one set of materials supersede another?
Thanks, 
A Correspondent 
I receive this question often enough that I'm going to try and answer it fully on the blog.

On a timeline, I created the following products:
If you still want to learn Selenium-RC using Java then the Selenium Simplified book is the one to get: I walk you through learning the basics of Java, setting up the environments, and Selenium-RC in a single book. In my mind the WebDriver courses and Java For Testers supersede the Selenium Simplified book, but if you want to use Selenium-RC then the book remains valid - just remember that Selenium-RC has been deprecated in favour of WebDriver.

Feedback I received on the Selenium Simplified book suggested that it was overly oriented to the beginner. Many people already knew how to code and setup the tools, and they just wanted to learn the API.
So, for Selenium WebDriver I created 3 products:
If you don't know how to code, but are a self starter and can learn from online resources when you get started, I recommend:
If you know you're going to need help working through the API then

And since it seemed top heavy on Automation, when that only represents part of what I do in my daily work life, I created the Technical Web Testing 101 online course to introduce people to the tools and thought processes I use when testing Web Applications.

I created Java For Testers independently of the Selenium 2 WebDriver Basics online course. I use much of the Java from Java For Testers on the WebDriver course, but don't explain the use of the Java constructs in detail.
I think they complement each other rather than directly overlap or supersede each other. Java For Testers is designed as a standalone introduction to Java programming, and the WebDriver course doesn't spend a lot of time explaining the Java used.

Hope that helps. And "Thank You" to the most recent set of correspondents that asked the question.

How to connect your iOS device to an HTTP proxy on your desktop or laptop

Friday, November 01, 2013 12:14 PM

Connecting an iOS device to an HTTP Proxy is much the same as we demonstrated on Android devices.

In your iOS Settings:

  1. Go to Wi-Fi
  2. Select the (i) information icon next to your Wi-Fi network
  3. At the bottom of the screen are the HTTP Proxy settings
  4. Set this to Manual
  5. Type in the server IP address and port of your proxy
Done.

Now your HTTP traffic should flow through the desktop proxy.

iOS does a pretty good job of caching, so before I start testing I either clear the cache via the Safari settings and "Clear Cookies and Data", or use the "Private" link at the bottom of Safari to start a new session.

I found iOS doesn't like connecting to some networks, and so I setup a local hotspot with connectify.me and that made my life a little easier.

Remember to switch off the proxy when you are done.

Create a local WiFi hotspot for testing using connectify.me

Friday, October 25, 2013 10:02 AM

We took delivery of the new iOS devices for mobile testing. In order to activate them, you need to connect over Wi-Fi. But they wouldn't connect to the Wi-Fi network during the activation wizard. What to do?

I decided to use my laptop as a local Wi-Fi hotspot. That way I could configure it with different passwords and encryption types to try the different options and hope that the iOS device would connect.

I had trouble sharing the Wi-Fi connection using the built-in Windows functionality (ably explained on lifehacker).
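For reference, the built-in Windows route (the one that gave me trouble) uses the hosted network feature from an administrator command prompt - a sketch, with made-up ssid and key values:

```
REM configure and start the hosted network (ssid and key are example values)
netsh wlan set hostednetwork mode=allow ssid=TestHotspot key=Secret123
netsh wlan start hostednetwork

REM when finished
netsh wlan stop hostednetwork
```

You also have to share your internet connection with the new hosted network adapter via the adapter's Sharing tab, which is exactly the fiddly part that a tool like connectify.me takes care of for you.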

So I installed connectify.me to help me use my laptop as a hotspot. And LO! The iOS devices happily connected through my local hotspot and activation could ensue.

Of course - once I had activated them, the iOS devices had no problems connecting to the Wi-Fi network I originally wanted to use for activation.

I suspect that the local hotspot approach might have some useful secondary benefits that I may have to research:

  • monitor traffic without having to setup a proxy
  • throttle the network speed

This also means that I can configure my mobile devices to connect over a single Wi-Fi network, and route the requests over other Wi-Fi networks by configuring the laptop connection rather than on multiple mobile devices.

Anyone else doing this? Any hints and tips or other uses you want to share?

How to chain HTTP Debug Proxies

Thursday, October 24, 2013 14:57 PM

I chain HTTP debug proxies.

That way I can use features from all of them, at the same time:

  • Fiddler Autoresponders
  • BurpSuite passive sitemap building
  • ZAP's multiple breakpoints
  • etc.

Fiddler

I usually work on Windows, so the first proxy I start is Fiddler. Fiddler hooks into the Windows system seamlessly without any additional config. I point all other proxies at Fiddler as the downstream proxy.

When Fiddler is running, test your setup by pointing your browser through Fiddler.
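If you have curl available, you can also check the proxy from the command line rather than the browser - a sketch, assuming Fiddler is listening on its default port 8888:

```
curl -x http://localhost:8888 -I http://example.com
```

If the request shows up in Fiddler's session list, the proxy is working.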

BurpSuite

In the BurpSuite Options tab, under Upstream Proxy Servers, add an entry for Fiddler:

  • Destination Host: *
  • Proxy Host: localhost
  • Proxy Port: 8888

At this point - test your setup again. Don't chain everything together and then try to figure out where the problem is. Point your browser at BurpSuite and check that you can see traffic flowing through both proxies.

If you get stuck, use the Alerts tab in BurpSuite to check for errors.

Hint: Firefox and Opera maintain their proxy settings independently from the Windows settings so test your setup with Firefox or Opera.

ZAP

In ZAP, Tools \ Options \ Connection. Then point it at BurpSuite:
  • Port: 8082 (or whatever you bound BurpSuite to)
Find the port you have bound ZAP to in Tools \ Options \ Local Proxy.

And point the browser at this port now.
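The whole chain can be checked the same way by sending a request at the ZAP port - a sketch, assuming you bound ZAP's local proxy to port 8080 (substitute whatever port you found above):

```
REM request flows: curl -> ZAP (8080) -> BurpSuite (8082) -> Fiddler (8888) -> server
curl -x http://localhost:8080 -I http://example.com
```

The same request should appear in all three proxies.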

End

Voila, you should have it all chained.

If not, just revisit the last step. And don't panic.

Step by step, check each part of the journey. If it's not working, it will probably just be some stupid error caused by config left over from a previous session.

Just remember to unwind them when you are done.

I have a video in the Technical Web Testing 101 course that shows this in more detail.

Links:

How to view http traffic on your mobile phone device via a computer proxy

Wednesday, October 23, 2013 12:46 PM

Viewing the HTTP traffic from your mobile browsers doesn't take that long to set up, but there are a few gotchas to be aware of:
  • You need to find the right IP address on your desktop
  • You need to change your proxy settings on your phone
  • Make sure your proxy allows external connections
  • Make sure both Mobile Device and Proxy Machine are on the same network



The setup I normally use:
  • Phone connected to wifi network
  • Windows Desktop connected to same network via wired connection (or laptop connected via wireless)
As an example, using Fiddler as the debug proxy, on the desktop:
  • Start Fiddler
  • Check that Fiddler has "Allow remote computers to connect" (via Tools \ Fiddler Options \ Connections)
  • Start a command prompt and use "ipconfig" to show the current list of IP addresses for your computer
On the mobile device, details below are for android (but the principle is the same for other operating systems):
  • Open Settings
  • Wi-Fi settings
  • Long press on the wireless network you are using to access the connection settings
  • Modify Network Config
  • Proxy Settings - Manual
    • change the Proxy HostName to the IP Address of your desktop
    • Add the port for your debug proxy e.g. 8888 for Fiddler
Then your browser should be connecting to the proxy.
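Before blaming the phone settings, it can help to confirm the proxy accepts remote connections, e.g. with curl from another machine on the same network - a sketch, using a made-up example IP address for the desktop running Fiddler:

```
curl -x http://192.168.1.10:8888 -I http://example.com
```

If this fails, revisit the "Allow remote computers to connect" option and any firewall prompts first.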

Other applications may not use the proxy in this way. You might need to set up port forwarding using adb to feed them through to your proxy (don't leave comments asking how to do this - check on Google. I haven't had to do this for some time, so my memory of hacking around on the Android device to get it working is hazy.)

On Android: Chrome, Firefox, Dolphin and the inbuilt browser all worked without issue. I had some hassle connecting Opera, but didn't try and diagnose why.

Try it and see how you get on.

 It makes a big difference to the visibility of your testing when working through mobile.

Additional Notes: 
  • For burpsuite, in "Proxy \ Options" edit the proxy listener to Bind to address "All interfaces"

Android Screen Capture, Streaming and ScreenRecording tools for Mobile Testing

Friday, October 25, 2013 09:43 AM

I looked at my mobile testing options and I realised that I didn't have a full toolbox to help me.

I first wanted to identify screen capture and screen recording options for my Android devices.

Most of the tools I found required rooted devices. When testing, you may not have this option, and people might get paranoid about you interfering with the device state.

So I limited myself to tools which did not require root access. They all pretty much work the same way, using ADB with debugging over USB enabled.

If you bought all the tools that I recommend here, the total cost would hit the dizzying height of £8.98, so I don't see a lot of point in trying to roll my own solution.

To use these tools you pretty much need a working SDK setup, so work on that first. If you can connect to your device with adb or monitor.bat then you're probably good to go.

http://developer.android.com/sdk/index.html
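As a quick sanity check of the SDK setup - a sketch, assuming USB debugging is enabled and an Android build recent enough to have the on-device screencap tool:

```
REM list connected devices - yours should appear with a serial number
adb devices

REM take a static screenshot on the device, then copy it to the desktop
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png
```

If adb can see the device, the capture and streaming tools below should be able to as well.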

In order to record the screen for some of these, I use them in combination with a desktop screen recording tool like Camtasia or the Blueberry Software tools BB FlashBack or BB TestAssistant.

Free and Open Source Tools

Both Droid@Screen and Android Screen Monitor offer much the same functionality. I think your ultimate choice will depend on which GUI you prefer.

Droid@Screen



Droid@Screen is a pretty good wrapper around the adb.

The main GUI display shows a continually refreshed view of the device.
  • You can take a screenshot very easily. 
  • The main GUI has easy orientation buttons to adjust the GUI display for landscape or portrait.
  • You can capture screenshots to a folder automatically.
  • You can view device properties
  • You can scale the output view
On my Samsung Galaxy Note II the refresh rate was a little slow (about 1 - 2 frames per second), but it is a pretty high res screen. For lower resolution screens you might find that you can use this for screen recording as well.

Android Screen Monitor



Much the same as Droid@Screen, but the GUI is simpler, with a right-click menu instead of icons.

Sometimes this is a little faster than droid@screen, sometimes droid@screen is a little faster.

Commercial

ASC - Advanced Screen Capture



ASC performs on device screen capture, so it writes a movie file to the phone's memory. It has a bunch of options to adjust framerate. What I particularly like is that it will highlight the taps you make on the screen so you can view the interaction on the device.

On non-rooted devices it requires you to use an 'activation' program on the PC or Mac. The desktop activator program acts as a simple way of making a connection to your device and taking a screenshot - an easy way of accessing some of the SDK functionality.

Looking at the popups as the screen 'activates', it is using adb in some way - I assume to enable the Android screenshot API.

The application description on the Play store says it only works on non-Tegra devices. The trial worked fine on my Samsung Galaxy Note II.

Buy through an in-app purchase for £3.99

Activation notes: I had some trouble activating it after purchase, but a few emails with the developer sorted it out. I had to uninstall it, then re-install it, then click 'buy' again (I wasn't charged twice). The activation does work, but it is a bit more fiddly than it needs to be.

VMLite VNC Server

Costs £4.99

If you work non-rooted, you use a desktop program to start the server on your phone.

Once the server runs I can head off to http://<deviceip>:5801 to use the HTTP interface.

Or connect a desktop VNC viewer to <deviceip>:5901.

The HTML5 viewer was about the same speed as Droid@Screen or Android Screen Monitor.

The Java Applet VNC is a little faster, and the best out of the desktop tools I tried.

The video was not as smooth as ASC, but remember that this has the advantage that I can interact with the Android device from the desktop, using my mouse and keyboard.


Used But Can't Recommend Fully

I also tried a couple of other open source tools, but they didn't work well on my machine. That doesn't mean they won't work on yours, so I list them below:

Summary

A mix of tools there:
  • Desktop Connection for Screenshots and low frame rate streaming
  • VNC for higher framerate and interaction
  • On Device Capture for High Frame Rate
What do you use when you test on mobile devices to record the testing you perform? Leave a comment and let myself and the world know, so we can evaluate the tools you recommend and expand our options.

WinMerge Revisited - my default file and directory comparison tool

Friday, September 27, 2013 08:00 AM

I seem to default to WinMerge for my file and directory comparisons.

Whenever I need to:

  • compare two directories
  • compare two files for differences
  • copy a set of files between directories
  • compare contents of zip files or rar files

When you install WinMerge:

  • You can choose to use it as the merge view in Tortoise SVN. I tend not to do this because the built-in Tortoise SVN diff works fine for me. 
  • Add WinMerge to your system path, which allows you to call it from the command line easily and use the command line options. I choose this option.
  • Enable explorer context menu integration - an essential option.
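With WinMerge on your path you can launch comparisons from scripts too. A minimal sketch - the paths are invented, and /r (recurse into subfolders) is the only command-line flag used here:

```python
import subprocess

def winmerge_command(left, right, recursive=True):
    """Build a WinMerge command line; /r recurses into subfolders."""
    cmd = ["WinMergeU"]  # on the PATH after ticking the installer option
    if recursive:
        cmd.append("/r")
    return cmd + [left, right]

cmd = winmerge_command(r"C:\backup\site", r"C:\live\site")
print(cmd)
# On Windows, launch the comparison with: subprocess.run(cmd)
```

See the WinMerge command-line documentation for the other flags, such as filters and output paths.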
Read the online documentation, or the help file.

When you install, have a quick flick through the quick tour to get up to speed.

If you download the .zip file then you essentially have a portable install so can use it from a USB stick.

A few things I especially like:
  • You can drag and drop like a crazy man
    • drag and drop folders on to the desktop icon for comparison
    • drag and drop on to the main pane to start a compare
    • drag and drop into the input fields
  • If you install 7-Zip and the 7-Zip plugin you can compare rar and zip files
    • 7-zip acts as my default archive management tool
A few things to note:
  • The tree mode makes life excellent, just make sure you enable it
  • Look through the options and switch on all the "Shell Integration" options - particularly the "Include subfolders by default"
Why would I use this?
  • I create a backup folder before testing. I test. I can compare and see what changed.
  • I want to revert back to previous files selectively, so I compare dirs and selectively move changed files
  • I have an oracle file and want to compare results
What do you use for your comparison tools? In built version control tools? Commercial tools? Leave a comment if you want to share your secret weapon of choice.


Alternative Uses for Fiddler

Wednesday, September 25, 2013 09:28 AM

A few days ago I realised that one of my use cases for Fiddler did not stem from a need for debug proxying.


Since I work on a variety of client sites, I periodically have issues with restrictive proxy servers where a .pac file somewhere sets up the browser config, and you have username and password issues etc. etc.

In those circumstances some tools need configuring to use the client site proxy - I recently tried to use httpgraph based on a suggestion from James Lyndsay.

Like many tools that allow you to configure the proxy, httpgraph has a URL and port. Nothing else. No HTTP or HTTPS config, no usernames, no passwords.

And like many tools, it didn't work out of the box with a restrictive client side proxy.

So, choose your own adventure time:
  1. Game Over? Stop using the tool?
  2. Mess around with config settings for a few hours trying to get it to work?
  3. Use Fiddler?
1. Stop using the tool

You sit back in your chair and sigh. Nothing ever works out of the box. Why don't vendors make technical tools easy to use? Oh well, nothing lost except a few minutes downloading the tool. Time to get on with the humdrum day to day work with no advantages.

You finish your tasks, and complete the day, but leave with a nagging sense of unfulfilled potential. 

Game Over.

2. Mess around with the settings

You try everything you know. You tweak the registry. You set up system variables. You run the app from the command line with -D proxy settings. You even mess around with the Windows routing table. Sadly, to no avail. You look at the clock. Crikey, it's 4pm already, and you still have to finish the test strategy, distribute it, add it to version control, set up the review meeting, and respond to emails. 

Looks like you need to get your priorities in order and stay here till midnight.

You Lose. Game Over.

3. Use Fiddler

"I'll do what I always do first" you think. 

Start up fiddler.

Fiddler hooks into the Windows internet controls seamlessly; it handles all the proxy stuff, including passwords. Then I'll configure the new tool to use Fiddler as its proxy. This way the tool has a simple proxy server to connect to, where I control the port and protocol config, and as a bonus I can see the traffic sent.

So you start up fiddler, and point your new tool's proxy to "127.0.0.1:8888", the Fiddler defaults. 

You know you can use other Fiddler urls if this doesn't work 
But it does work.
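For a scripted tool, wiring it to Fiddler takes only a couple of lines. A sketch using Python's standard library and Fiddler's default listening address:

```python
import urllib.request

# Fiddler's default listening address
FIDDLER = {"http": "http://127.0.0.1:8888", "https": "http://127.0.0.1:8888"}

proxy_handler = urllib.request.ProxyHandler(FIDDLER)
opener = urllib.request.build_opener(proxy_handler)
print(proxy_handler.proxies)
# opener.open("http://example.com")  # the request now shows up in Fiddler
```

Fiddler then handles the upstream client-site proxy and its authentication, while you watch the traffic go past.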

Leaving you plenty of time to get on with today's necessary todo list items.

A Double point win. Congratulations. Adventure completed.

Do you have any use cases for debug proxies that you take for granted, that don't immediately involve debug proxying? If so, leave a comment to let me and the world know.

How to Turn on and off JavaScript in Firefox

Wednesday, September 11, 2013 08:40 AM

Whoa, I turn my back for a couple of months and Mozilla remove the option to switch off JavaScript in Firefox.

Short version: Install QuickJava or type "about:config" as the URL then search for "javascript.enabled"

We spent a good 5 or 10 minutes thinking we were crazy. "I'm sure the option used to live here..."

As ever, Google came to the rescue..

Why did Mozilla do this? Because of "Checkboxes that kill your product".

Unfortunately, as a tester I still experience moments where I need to kill the product, so how can I do that now?

"about:config"

Firefox has the built in "about:config" which you can type into the URL entry field. Then search for "JavaScript" and click on the "javascript.enabled" to switch JavaScript on and off.

Add-ons

One of the official forum Q&A items mentions an add-on called QuickJava; just make sure you enable the "Add-on bar" from the toolbar menu "View \ Toolbars \ Add-on bar"

Other plugins are mentioned in this other official forum Q&A item

I chose to install QuickJava because it lets me toggle a bunch of things quickly.

End Notes

In my opinion it seems a bit odd: browsers make ever more developer functionality available to every user at the click of a mouse - editing the DOM, executing arbitrary JavaScript, amending and deleting cookies, etc. - so I wouldn't make switching JavaScript off a hard thing to do. This is one of the few things I actually do, as a user, to make a misbehaving website behave. 

But hey ho. Testers have had it worse - remember when we used to have to save HTML to the desktop to edit it? And edit actual cookie files? We have it easy - particularly when you get to discover a new helpful add-on that you didn't know about - QuickJava.


10 Experiments to Improve Your Exploratory Testing Note Taking

Friday, September 06, 2013 10:35 AM

I have some 'rules' that I apply when I take notes as I perform exploratory testing.
When I look back over how I took notes in the past I can see that I tried different experiments with my approach when building those 'rules'.
I recommend some of my experiments to you now:

  1. In Memory
  2. Only use pen and paper
  3. Only use a text editor
  4. Use a text editor and screenshot tool
  5. Record the screen and talk as you test
  6. Use a tool designed for exploratory testing
  7. Use a Mind Map
  8. Draw a diagram
  9. Automate the capture of logs
  10. Use a Spreadsheet

1 - In Memory

Only use your memory to track your exploratory testing.
This experiment mainly helps me remember that I need to do more. I now feel very uncomfortable testing with just my memory, but when I started this felt natural. I changed.

2 - Only use pen and paper

Yup, you use your computer to test, but you make notes on pen and paper.

Variants:
  • different pens, 
  • different colours, 
  • different sized paper,
  • notebooks, 
  • loose paper, 
  • text, 
  • diagrams, 
  • mind maps, 
  • scribbles.

I find this works well with a single screen and for intense moments in the testing, but I try to re-transcribe the notes, or take a photo with my phone, and keep the image in Evernote.

3 - Only use a text editor

Experiment with different text editors and find an editor you like - I've pretty much settled on Notepad++ and Sublime Text, and I import the text files into Evernote for later searching.
Different styles of note taking:

  • Prose, 
  • Notes, 
  • Time Stamped Entries, 
  • Annotations like #test #bug etc.

Can you parse your logs at a later date automatically? Would that benefit you?
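If you timestamp and tag your entries, then yes - a few lines of Python can pull them back out later. A sketch, assuming a made-up note format of "HH:MM #tags free text" (adjust the pattern to whatever convention you actually use):

```python
import re

# Assumed note format: "HH:MM #tag1 #tag2 free text"
NOTE = re.compile(r"^(?P<time>\d{2}:\d{2})\s+(?P<tags>(?:#\w+\s*)*)(?P<text>.*)$")

def parse_notes(lines):
    """Pull timestamped, #tagged entries out of plain-text session notes."""
    entries = []
    for line in lines:
        m = NOTE.match(line)
        if m:
            tags = set(m.group("tags").split())
            entries.append((m.group("time"), tags, m.group("text").strip()))
    return entries

notes = [
    "10:05 #test login with expired password",
    "10:09 #bug #blocker error page shows stack trace",
    "random untimestamped musing",
]
for entry in parse_notes(notes):
    print(entry)
```

The untimestamped line is simply skipped, so free-form scribbles don't break the parse - you just lose them from the summary.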

Do your own search and find a text editor that works for you at the moment.
Touch typing helps. Learn to touch type if you can't already.

4 - Use a text editor and a screenshot tool

Sometimes you need to capture the moment as a screenshot.
Do you just use Ctrl+Print? Do you use an image editor? Do you use a dedicated screenshot tool?
I tend to use SnagIt or Jing now. I've used lots of others in the past.
  • Where do you store the images?
  • What filename standard do you use?
  • How do you cross reference your text edit notes to the screenshot?
  • Do you think you would benefit from using a Word Processor and embedding the screenshots along side your text?
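One answer to the filename question is to generate names a script can cross reference back to your notes. A sketch of one possible convention - the session name and numbering scheme are invented, not a standard:

```python
from datetime import datetime

def screenshot_name(session, note_id, when=None):
    """One convention: session name + note number + timestamp, easy to grep."""
    when = when or datetime.now()
    return f"{session}_{note_id:03d}_{when:%Y%m%d_%H%M%S}.png"

print(screenshot_name("login-session", 7, datetime(2013, 9, 6, 10, 35)))
# login-session_007_20130906_103500.png
```

Put the same note number in your text file ("007: error dialog appeared") and the images and notes line up when you write the session report.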


5 - Record the screen and talk as you test

Environment can get in the way for this if you work in a shared office.
Do you have equipment that you can comfortably use for long periods of time?
Talking, and thinking and doing takes practice and time.



6 - Use a tool designed for exploratory testing

People have created a whole bunch of tools designed, or marketed, as helping exploratory testing.
Try them. See if they work for you.
If your style clashes with the tool, consider whether the tool's benefits warrant a change of style from you.
Do your own search and find other tools designed for exploratory testing
If you could write your own, what features would it have? Perhaps you could use a combination of tools to gain those features now?

7 - Use a mind map 

Everyone loves creating mind maps. Few people use mind maps like Buzan suggests. Who cares, use mind maps and do it your way.
Do your own search and find a mind map tool that works for you.
What do you represent in the model?
  • Ideas?
  • Steps?
  • TimeStamps?
  • Images?
  • Screenshots?
  • Links?
  • Observations?
  • Questions?
  • To Dos?
  • ?

Over time, learn the features of the tool. Consider which features you don't use. Should you? Would they help?
Perhaps you don't use enough of the features? Try a less featured tool and see if it still works for you?

8 - Draw a diagram

Pen and Paper works well for diagrams.
What do you diagram?
  • Structure? 
  • Flow? 
  • Entities? 
  • Notes? 
  • All?

GraphViz lets you write text files that it compiles into automatically positioned graphics.
You can use draw.io as an online diagrammer.
At last count there existed a Bazillion diagramming approaches and tools. Try some of them.
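As a taste of the GraphViz route, its dot language is plain text, so you can even generate a diagram from your notes. A sketch - the node names are invented for illustration:

```python
# Edges observed during a session - names invented for illustration
edges = [("Login", "Dashboard"), ("Dashboard", "Report"), ("Report", "Export")]

lines = [f'  "{a}" -> "{b}";' for a, b in edges]
dot = "digraph flow {\n" + "\n".join(lines) + "\n}"
print(dot)
# Save as flow.dot, then render with: dot -Tpng flow.dot -o flow.png
```

GraphViz does all the layout for you, which makes it handy for flows you discover incrementally during a session.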

9 - Automate the capture of logs

You can't argue with logs right? Why bother making notes when the logs will do it for you?
  • Fiddler - for HTTPsessions
  • tail system logs (logtail, multitail, etc)
What do you make notes of when you use logs?
Do the logs capture everything you need?
How do you cross reference your notes to your logs and to your screenshots?

10 -  Use a spreadsheet

What about a grid?
Would that help?
Try it and see.
https://docs.google.com/spreadsheet

Repeat

The above covers a lot of note-taking styles:
  • Visual
  • Tabular
  • Outline, Tree
  • Sequential
  • Adhoc
  • Formal
Evaluate what worked and what didn't.
Take care with your judgement: some of it didn't work due to your lack of experience - try it again. Some of it didn't work because it doesn't fit you, your environment, your system, etc.
Having done them all, try them again. Some of them will seem offensive. Some will feel restrictive. You gain more insight when you try them again.

Summary

Even though I titled this "10 Experiments to Improve Your Exploratory Testing Note Taking", I have not given you a quick fix.

  • If you tried one each day, this would take you 2 working weeks.
  • If you try variants in each of the experiments (different tools, different paper sizes, etc.), this could take a month or more.
  • If you repeat them and challenge yourself to master them, and change them, this could take up to 6 months.
  • I still experiment with my approach. I have done for years.


"What experiments would you recommend?" or "What experiments have you conducted?" Let me know in the comments below.

You might want to watch "What is Good Evidence" by Griffin Jones, which reminded me I needed to write about this. Griffin's talk overlaps very nicely with this post. I recommend you watch it.

How would you check that a www web site redirects to a mobile site?

Friday, August 30, 2013 12:43 PM

Normally I add my automation posts to SeleniumSimplified.com but this particular case study demonstrates how I think about testing and incorporate automation into my test approach.



The scenario you face as a tester:

  • You have a main web site www.eviltester.com
  • You have a new mobile site m.eviltester.com
  • You have a set of redirection rules that take you from www to m. based on the device
  • And the device is identified by the user-agent header string
e.g. the bbc.co.uk redirects to m.bbc.co.uk if you have the user-agent set to a mobile device.

The first thought for testing?

  • We need to get a bunch of devices to test this on.
And you probably do. But you limit the scope of your testing to a small subset of the possible set of user-agents out there in the real world.

Second thought?

  •  We could spoof the user-agent.
It becomes a technical test where we use the implementation details and check the scope of the implementation coverage.

But how?

  • Well, Chrome has the override settings where we could choose a different user-agent.
  • We could have our debug proxy change the user-agent for us.
Great. Both of those would work, but they require us to do this stuff manually, and it will be slow. We probably still want to do this though, to make sure it renders and that the approach works.

Where will we find the user-agents?

We need an oracle source for our data set of user-agents. Fortunately there are a few sites out there that track what user-agents are in use:
I tend to use useragentstring.com - if you have a preference that differs then leave a comment and let me know.
So I wrote some code. And I know about all the "testers shouldn't code", "tester's don't need to code", "blah blah blah" discussions.
I can code. It increases my ability to respond to the variety of conditions on a project. Requisite Variety. I encourage you to learn how to code. (Hey I'm writing a book about that.)
So I code.
I wrote a simple set of Java code that:
  • Uses GhostDriver - the new headless driver wrapper around PhantomJS
  • Visits useragentstring.com and scrapes off the user-agent strings
  • Filters the user-agent strings to those that I consider 'mobile' devices
  • Iterates over all those user-agents
  • Creates a new GhostDriver with that user-agent and visits the www site
  • Checks that I redirect to the mobile site
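The same flow can be sketched in a few lines of Python's standard library. To be clear, the post's actual code uses Java and GhostDriver - this is an illustrative alternative, and the user-agent strings and domain below are placeholders:

```python
import urllib.request
from urllib.parse import urlparse

# Placeholder strings - the real list is scraped from useragentstring.com
MOBILE_AGENTS = [
    "Mozilla/5.0 (Linux; Android 4.1; GT-N7100) AppleWebKit/535.19 Mobile Safari",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) Mobile/10A403",
]

def is_mobile_host(url):
    """Did the final URL land on the m. subdomain?"""
    host = urlparse(url).hostname or ""
    return host.startswith("m.")

def final_url(url, user_agent):
    """Fetch url with a spoofed User-Agent; urllib follows redirects for us."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()

# for ua in MOBILE_AGENTS:
#     print(ua, is_mobile_host(final_url("http://www.eviltester.com", ua)))
print(is_mobile_host("http://m.eviltester.com/"))  # True
```

The trade-off against the WebDriver version is the same one discussed below: raw HTTP runs faster, but WebDriver also exercises the rendering and any JavaScript-driven redirects.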
You can find the code over on github:

    Surely it would be faster to use direct HTTP calls?

    • Yes - faster to run, but not necessarily faster to write. 
      • See I can use the WebDriver findElements commands when scraping the page and not have to remember how to parse XML in Java or download another Java library.
      • I can use the WebDriver to visit the site and handle all the redirection for me, rather than write some redirect handling code for the Apache HTTP libraries.
    I want to get some automation done fast. That adds value. That augments my manual testing.
    I tidied it up a little for release to github so it isn't completely embarrassing, but hey ho, it added value. I'll use it again. It looks pretty nasty, but it works.
    Sometimes that's the type of automation I write when I test.

    But that wasn't the requirement scope!

    • True. It wasn't.
    • The requirement scope was small.
    • Sometimes we have to explore.
    • I look for external oracles and comparative sites and rules to help me evaluate if the requirements meet the actual user need.
    • In this instance I found a lot of user-agents that the redirect rules didn't cover. 

      But if it wasn't in the requirements we can't justify the testing!

      • I can use a comparison with other sites' handling of the user-agents (e.g. bbc or tfl)
      • I can see if the gaps in the system under test are better or worse than theirs. 
      • BBC didn't handle 1 user-agent I found,
      • TFL didn't handle 3,
      • The system under test didn't handle 100+
      I use external oracles, as well as internal oracles. I use the competition to evaluate the system under test. I use multiple sources of information and look from multiple angles.

      What would you do?

      Do let me know how you would have done it differently.

      Don't go live with simple security problems - 10 tips to help

      Tuesday, July 23, 2013 20:40 PM

      I feel anger when I stumble across very, very, very simple security issues. Especially when they compromise my data.

      Yes I do. And I hope, as a tester, that you do too.

      But I face a problem... As a tester, I can't say "Did no-one test this!" because I know that they might have done, and someone else might have chosen to go live anyway.

      But on the off chance that no-one did 'test this', I offer you this post.

      Security by obscurity


      If I visit your site and I can gain access to top level pages that I shouldn't have, then I get angry, because that should never happen.

      Even if you haven't told me about the URL, I can try to guess it from your other naming conventions.

      Please make sure you secure your URLs.
      Security by obscurity doesn't work for very long.

       

      Validate Parameters on the server


      And if I see parameters in your URLs. I'll change them.

      Yes I will.

      I will:
      • Because I don't want the list of items to stay limited at 25, I want to see 2500
        • No I don't care about the performance impact on your database - fix that a different way
      • Because I want to skip ahead more pages than you have listed on the page. "I want to go to page 15 now!"
        • No I don't care about the user experience you want me to have, I care about the user experience that I want to have
      • Because I can
        • Yes, because I can

       

      Security by ignorance


      When I visit your site, I look at the traffic you issue when I hit a top level URL.

      I look at what requests you make to build the page.

      Yes I do.

      Most browsers have the functionality to view traffic built in now. Any user can view the traffic and see the URLs that you access.

      And then I use that information...

      I take the URLs you've used. And I use them.

      Sometimes I change them. Simple idea, but sadly all too effective.
      • So if you access  
        • http://site/api/123456/report  (note: not a real domain)
      • I'll access 
        • http://site/api/123455/report  (note: I changed the number)
      Yes I will.

      And if you haven't locked down the API to specific users, with specific role level controls, then I'll probably see another user's report. Then I get annoyed, because as a user it means that other people can see my reports. And I don't like that.

      No I don't.
      Assume that anything you do automatically someone else will do manually.
      Just because they can.


      Make sure you have low level permissions on your API, don't assume that no-one will notice it.

      I frequently do this because I want to bypass the horrendous GUIs that web sites put in my face when I want to achieve a result, rather than experience your broken GUI. So I script it. Or if I can get away with posting a URL with some data, then I'll do it.

      I bet other people do this too.

      Testers should do this too, because...
      Security by ignorance doesn't work.

       

      Security through sequential numbering


      And if you're using sequential IDs for reports, or users, or accounts, etc., you have actively encouraged people to hack you.

      Yes you did.

      No-one has ever recommended - Security through Sequential numbering.

      No they haven't. Never Ever.
      Security through sequential numbering doesn't work.

       

      Tips for Testing


      So now, the inevitable 10 tips for testing:
      1. Play with the URLs
      2. Change URL Parameters
        1. to check that permissions surround the public level
        2. to check that request validation takes place
      3. Try URLs when logged out to make sure permissions apply
      4. Guess some URLs
      5. Use an HTTP Debug proxy and look at the traffic
      6. Investigate the traffic and see what the requests do
      7. Issue the traffic requests out of context on a page to understand the 'real' state rules in place
      8. Change the URL parameters  in the traffic URLs
        1. to check that permissions surround the API
        2. to check that request validation acts at the API level, not just the GUI level
      9. Issue the requests when logged out to check the permissions still apply
      10. And if you do test like this, and your organisation keeps ignoring these types of defects, check if you reported them effectively, and if you did, then leave because that company doesn't deserve you.
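Tips 2 and 8 lend themselves to a small script. A sketch - the URL template echoes the post's deliberately fake example domain, and `probe` simply reports the HTTP status for neighbouring ids (expect 401/403/404 on a locked-down API, and treat a 200 for someone else's id as a bug):

```python
import urllib.request
import urllib.error

def probe(url_template, resource_id, headers=None):
    """Request a resource by id and report the HTTP status code."""
    url = url_template.format(id=resource_id)
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status      # 200 for someone else's id = a bug
    except urllib.error.HTTPError as err:
        return err.code             # expect 401/403/404 when locked down

def adjacent_ids(known_id, spread=2):
    """The ids a curious user would try next."""
    return [known_id + d for d in range(-spread, spread + 1) if d != 0]

print(adjacent_ids(123456))  # [123454, 123455, 123457, 123458]
# for rid in adjacent_ids(123456):
#     print(rid, probe("http://site/api/{id}/report", rid))  # not a real domain
```

Run it logged in with your own session headers, then again logged out, and compare the status codes - that covers tips 3, 8 and 9 in one pass.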

       

      You wouldn't like me when I'm Angry


      I didn't even describe security testing above. I described functionality testing.

      And really basic functionality testing at that, just simple input variations. I haven't messed with cookies, I haven't done anything hard (because cookie editing ain't easy, 'right kids).

      If you don't include this "really really simple stuff" level of test activity, then please let me know so that I can avoid your site and find a competitor quickly before we develop a user/supplier relationship.

      I really don't like getting angry when I act as a user.

      Trust me, you wouldn't want me as an Angry user.

      PS:
      • Yes, this blog post does describe problems found at a specific web site.
      • No I will not name that site.
      • Yes, I have already told them... and more than once.
      • Yes I have started looking at alternatives, sigh.

      "Java For Testers" released

      Friday, June 14, 2013 22:37 PM



      I've been working on a lot of stuff in the first 6 months of this year.

      Most of this hasn't been converted into usable product form yet, and is still working through my drafting process.

      But the first thing has made it to the 'public' stage - a new book called "Java For Testers".

      I've released this as beta, the same way I did "Selenium Simplified" when I was writing that.
      The price will gradually increase as the book nears 'done' status. So get in early if you want a bargain.

      A Bugzilla exploratory testing session from the vaults

      Monday, October 07, 2013 21:33 PM

      Back in 2011, I decided to try recording an exploratory testing session.
      I uploaded the results privately to Youtube and essentially forgot about it.
      It was the first time I tried to think out loud, and record, and conduct exploratory testing, and make notes, all at the same time.
      Despite its rough edges, and horrible editing, I'm going to make it available.
      I have long lost the original recording so I can't recreate a full unedited version, and I can't really adjust this video too much, so I'll let it stand as a time bound representation of where I was with the multi tasking approach to exploratory testing back in 2011.



      If you want to learn more about Technical Exploratory Testing then I have a free course that covers more of the basics.

      Technical Web Testing Course

      "I was a zombie tester" a true story

      Friday, April 12, 2013 06:08 AM

      I was a testing zombie. And I confirm that the details herein conform to the truth as I recall it.

      Well a revenant actually

      This is a still from the 2002 film "Mad Dogs", specifically 1 hour, 23 minutes and 44 seconds into the film. And that's me on the left. In full zombie mode.

      And you can't tell from the film still, but all of us in this picture are actually covered in fake blood and dirt. The whole segment is on screen for about 10 seconds but took about 6 hours to film.
      I'm not sure if there are any parallels to testing in this story, but I can at least say "I was a zombie tester, and I have the photographic evidence to support it".
      If this film was a project, it would have run massively over budget. It was wildly inefficient (6 hours for 10 seconds) and was filled with waste - which you'll hopefully identify as the story goes on.

      Wot, no Script!

      As an extra - which I was - we didn't receive a script. We were told to turn up, that we would be playing the part of a zombie, and that we should wear clothes that we didn't mind getting messed up.
      Being a closet thespian I turned up in a suit, shirt and tie, after all, I've just risen from the grave, and when you are buried you wear a suit. Or at least, I plan to.
      I was a little surprised then to see all my fellow extras in jeans and T-shirt.
      • Didn't they read the requirements? 
      • Didn't they interpret the requirement? 
      • What kind of zombies were they? 
      • Had they really just risen from the grave in their scruffy T-shirts?
      Clearly my testing powers were in full force because I was the only one that had interpreted the requirement in that way.
      The make up and costume people were also a tad miffed at me.
      "We told you to wear something you could mess up", they said.
      "Yes", said I. "I am", said I.
      "But you are wearing a suit", they said.
      "And you can mess it up", said I
      "Hmmmph", they said.

      Regarding Domain Knowledge

      Because they were the professionals, who clearly knew what they were doing. What did I - a reader of horror novels and comics from a young age, and watcher of horror films my entire life - know?
      I suspect a lesson can be found here regarding the value of tacit domain knowledge. Possibly having more value than the explicit contextual script and staging knowledge that the project team had.
      It was a suit that I had purchased at a charity shop for the princely sum of less than 5 pounds, for the express purpose of events such as these, and other theatrical play acting in the woods that we don't need to go into right now.
      To recap, the people involved in the film hadn't intended the requirement to be interpreted in the manner that I had.
      Consequently, I looked forward to seeing what costume they actually had in mind then.
      Actually, they didn't have anything in mind.
      Nothing.
      Not a thing.

      Scripts require interpretation

      It wasn't in the script. The script just called for some revenants to be scarily visible through some dry ice at the back of a garden (Highgate Cemetery).
      The professionals covered us in fake blood, dirtied us up a bit. But that didn't work.
      And at one point they even put sheets over us (remember this is a true story relating to a professional film with a budget in excess of a million dollars (1,400,000 according to IMDB)).
      The sheets came off when the writer came across and put his foot down, or started crying, I forget exactly which.
      "Revenants, not ***** zombies or ****** ghosts" I heard him say.
      Because he had written a script you see.
      And it plainly described revenants approaching from Highgate cemetery. We were filming in Crouch End, which is some way from Highgate Cemetery, but due to the magic of film making and by summoning the spirit of Ed Wood, that would make no difference. They could edit in Highgate Cemetery, and indeed they did, for you can briefly see a gravestone, which then turns into a bush.
      I did meet the writer upstairs. He looked a bit sad. I tried to cheer him up by mentioning that I was wearing a suit because I knew what a revenant was. But it didn't seem to cheer him up.
      As it got chillier. And darker. I was quite glad of my suit. Because it has a jacket you see, not just bare arms, which is what you have if you're wearing a T-shirt.
      And we zombie/revenants got shuffled off to the back of the garden.

      Action, doesn't mean much without motivation

      We didn't actually see the script. We were just told to walk forward slowly when they told us to.
      They shouted action. And we just stood there.
      Because we weren't real actors you see, and we were expecting to be told "walk forward slowly" or some other instructional phrase.
      Instead we had to wing it. But slowly, because we were really dead.
      So we all shuffled forward in a "we don't really know if we are zombies or what" type of fashion.
      I thought we must have done well because we were given a round of applause and promised we could turn up for the film premiere.
      I assume the film went straight to video because no premiere invitation ever actually arrived.

      Little things count

      But, not to be too down on the professional make-up costume people... when I was walking home I certainly looked particularly gruesome because everyone I passed looked at me in a truly horrified fashion - covered in fake blood and dirt as I was, and dressed in a suit and tie.
      It's the details that are important. Even if you can't see them on the screen.
      Take for instance our triumphant close up at 1 hour, 24 minutes and 8 seconds into the film.

      It just looks like a black blob to the uninitiated. But no, there are 5 or 6 zombies in that picture. Can you see them?
      This is like a horror version of "Where's Wally?", if "Where's Wally" was inked by someone with a 6 inch emulsion paint brush instead of a 0.5mm pen.
      So there you go. I was a zombie tester. The last picture acts as evidence. And testers value evidence.

      Think of a Word - a 99 Second Talk

      Thursday, March 28, 2013 09:04 AM

      I've said in various talks that I don't enjoy creating, justifying, or applying, definitions.
      I think creating your own definition does work well as an exercise, because you can explore your vocabulary and try and create an encompassing statement of intent to cover what you mean when you use a word. And there exist, people who do the 'definition' thing really well. James Bach and Michael Bolton act as exemplars of this approach and freely share, discuss and debate their definitions via blogs and twitter.
      I do not appear to fit into that group, my definitions do not work well, and when I adopt a definition it feels stifling. I find that my definitions change, not because I have changed, or the situation changed, but because I created a definition that didn't encompass everything I needed it to cover.
      Fortunately, I found an exercise that works better for me: using words as symbols, and identifying words that apply to the concept or term I want to explore. These words might act as attributes, or characteristics, or high level abstractions, or symbols.
      When used as symbols we deliberately read into them. We deliberately don't try and tie them down. We deliberately explore them from different angles and take from them what we need at the time. The symbol doesn't have a definition. You find and explore the relationship between yourself and the word, at the time and place you find yourself now.
      I phrase it slightly differently in the 99 second talk. Different medium. Different message.
      I prepared this 99 second talk in advance of TestBash 2.0 but in the end the talk didn't feel right on the day. So I created another one instead. Since I prepared the talk in advance I have a recorded practice session, which I release now.

      Next Steps:

      • Create your own definitions - see if that works for you, see how you feel about it
      • Identify some symbols, explore them - see if that works for you, see how you feel about it
      Mind map: 99 seconds: Words As Symbols on MindMeister.

      99 Seconds at TestBash 2.0

      Wednesday, May 15, 2013 22:17 PM

      I presented a 99 second talk at the TestBash 2.0
      I went to TestBash with a different talk prepared, but it didn't feel right on the day, so I created something else while I was there. As a result I forgot a quarter of it, and only hit about 70 seconds.
      I don't think anyone noticed, but I'll link to the recorded video should it ever find its way online. (The actual video is contained within this Vimeo.)
      So that I have a record of what I meant to say, I recorded the 99 second talk at home.
      The basic theme revolved around the same concepts as the talk I didn't do: ownership of the words that we use to describe testing. Something that I've talked about and blogged about before. But I say it again because I think the testing world will transform into something more effective when we take responsibility for the words we use and the testing we do.


      This talk above came out differently from the talk at TestBash, which came out differently from the one in my head, because despite having some notes on what I meant to say, I reinterpreted those notes differently each time.

      Mind map: Test Bash 99 Seconds: Contrast, Analyse, Ridicule on MindMeister.

      What does a technical exploratory test session look like?

      Wednesday, March 06, 2013 16:52 PM

      As part of "Technical Web Testing 101" I wanted to provide an example of what an exploratory test session with additional "Technical" focus might look like, at the same time demonstrating some of the capabilities of modern browsers whilst comparing them to proxy servers. Phew, a bit of a mouthful, and you can see the resulting video below.
      I uploaded the video to YouTube, as well as it forming part of the course, because I think it has interesting elements that can stand alone. And I don't see many examples of exploratory testing on the web, so I wanted to try to provide an example of 'doing' exploratory testing, and the type of notes I took.
      As testers we can provide harsh criticism but I won't let that stop me sharing. If you don't think this provides a good example then I encourage you to share your own. I do welcome constructive comments and critique.
      I'm getting better at thinking aloud as I test, so my verbal narration actually makes sense in this video.

      A few things I want to point out.

      • I try to explain the thought processes and decisions I'm making
      • You can hear me verbally describe risks that I want to investigate
      • You see me spin off track a little to investigate some 'interesting' ideas and then get back on track
      • Modern browsers have a lot of impressive functionality built in that we once needed proxy servers to achieve
      • I'm "Tool Augmented" not "Tool Driven" so the tool helps me do what I identify I want to do, not what the tool allows me to do
      • I'm testing http://google-gruyere.appspot.com/
      • I'm using the "Edit This Cookie" chrome plugin 
      • I did this as two sessions. The first was to get my bearings - and I made notes during it, which you can read below. The second was to record the video. The second session was slightly different as you can see if you compare the video with the notes, which shows that even when we repeat sessions, we learn additional things and do the testing differently.

      Regarding the notes
      • This was an informal session so I didn't timestamp anything - which I would do if I were testing on site professionally.
      • The notes were mainly to guide me in replay so aren't formatted with any annotations e.g. @Bug or headings
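The annotation convention mentioned above can be mechanised: if the notes had carried tags like @Bug, a short script could pull the tagged lines out of a session sheet afterwards. A minimal sketch, assuming a simple one-tag-per-line convention (the sample notes and the lines_tagged helper are illustrative, not part of my actual session):

```python
# Sketch: pull annotated lines out of a plain-text session sheet.
# The @Bug / @Idea tags are one possible convention, not a fixed format.
notes = """\
Try changing the cookie value to admin
@Bug saveprofile accepts a GET request
@Idea probe is_admin on the profile form
User already exists - ok fine
"""

def lines_tagged(text, tag):
    """Return the note lines that start with the given annotation tag."""
    return [line for line in text.splitlines() if line.startswith(tag)]

bugs = lines_tagged(notes, "@Bug")
print(bugs)  # the @Bug line only
```

Even something this small makes a debrief faster: one pass over the sheet gives you the bug list and the follow-up ideas separately.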
      And here are the notes in all their glory. I used Evernote as my note taking repository.

      Testing with Gruyere with Google Chrome
      Create a new account
      "bob" "bob"
      I can see the new account is created with a "GET"?!?
      http://google-gruyere.appspot.com/804259209683/saveprofile?action=new&uid=bob&pw=bob&is_author=True
      perhaps I can use different actions?
      Perhaps I can amend? and change password?
      Perhaps is_author has other alternatives
      e.g. is_admin?
      Having created an account - check storage
      And I have a cookie - have I logged in automatically? I have. I'd like to amend the cookie and check if the name can allow me to login as someone else, or if the permission field can change - but I can't do that out of the box with Chrome
      Technique - means I have to look for a tool to do that - fortunately I already have one installed, but if I didn't - this would prompt me to do so.
      Try changing the cookie value to admin, refresh, and I no longer appear to be logged in
      Perhaps that is a key? to the ID?
      Repeat the get request and see what happens
      http://google-gruyere.appspot.com/804259209683/saveprofile?action=new&uid=bob&pw=bob&is_author=True
      User already exists - ok fine.
      Try and use the url for different actions e.g. "amend" gives me an invalid action
      Let's see what profile does
      I'll inspect the form and I can see an update value and it is a get request again
      So instead of "amend" try "update" in the url
      incorrect password? But it is the same one?
      Ah - perhaps it is looking for the validation password as seen in the profile update form
      for update it probably needs oldpw as well
      If I take out pw then what happens? request accepted
      but presumably didn't update anything
      what about the is_admin risk?
      hmm nothing seemed to happen, - what if I logout and login again?
      Woohoo admin
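The parameter tampering in the notes above - replaying the saveprofile GET with a different action, or adding an is_admin flag - can be sketched offline with Python's urllib.parse, rewriting the query string before you ever send a request. The URL and parameter names come straight from the notes; the tamper helper is mine, and is_admin is a guess to probe, not a documented parameter:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def tamper(url, **changes):
    """Return a copy of url with query parameters added or replaced."""
    parts = urlparse(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    params.update(changes)
    return urlunparse(parts._replace(query=urlencode(params)))

original = ("http://google-gruyere.appspot.com/804259209683/saveprofile"
            "?action=new&uid=bob&pw=bob&is_author=True")

# Replay with the "update" action and the is_admin risk from the notes
probe = tamper(original, action="update", is_admin="True")
print(probe)
```

This is the same move as editing the address bar by hand, just repeatable - handy once you want to try a dozen action values or flags rather than one.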