My search for easy to use, free, local HTTP servers

Friday, December 19, 2014 09:14 AM


I have lost count of the number of times I've had to look for a local HTTP server.
  • Experimenting with an open source app
  • Writing some HTML, JavaScript, PHP
  • Testing some flash app for a client
  • Running some internal client code
  • etc. etc.
And since this isn't something I do every day, I forget how to do it each and every time I start.

I forget:
  • Which servers I already have installed
  • Where I installed them
  • Which directory I configured them to use
  • What local names did I give them to make it 'easy' for me to work with them
  • etc. etc.
Now it might just be me that faces this problem.

If so, I expect you have already stopped reading.

So to cut to the chase, my current favourites are Mongoose (Windows, Mac, Linux) and VirtualHostX (Mac).


Other HTTP Stacks

I have used some of the biggies (XAMPP and the like), and I probably still have them installed.

And some of the tinies, and some others that I can't remember.

All have been useful at the time. Sometimes I tried to install one but couldn't get it working on client machines because of permissions etc. etc.

I started looking around for alternatives that I could use during training courses, webinars etc.

Some I have not used

Prior to writing this post I was aware that Python had the capability to start up a small http server from the command line, but I hadn't used it. After publication, Brian Goad tweeted his usage of Python to do this.

Brian continued:
     could be easily used as a function that takes the dir as argument: simple-server(){ cd $1; python -m SimpleHTTPServer; }
     just go to localhost:8000 and you're set!
After Brian's reminder I had a quick look to see what other languages can do this out of the box.
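For example, here are a few one-liner servers that I know ship with their language runtimes; each serves the current directory over HTTP, and the port numbers are just illustrative:
     python -m SimpleHTTPServer 8000    # Python 2, as Brian described
     python3 -m http.server 8000        # Python 3 renamed the module
     php -S localhost:8000              # PHP 5.4+ has a built-in development server
     ruby -run -e httpd . -p 8000       # Ruby, using the bundled 'un' library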

If you know of any more languages that have this as part of their default then leave a comment and I'll add them here.

Virtual Machine Stacks

One thing I started using was virtual machines that have the software installed already and don't require me to set up a web server (Bitnami stacks, for example).
These are great for getting started quickly, but require a little download overhead - which can be painful over conference internet connections.

Sometimes I set up machines in the cloud, preinstalled.
As an additional backup, I like to have a local version that I can share.

VirtualHostX for the Mac

Since I mainly travel with a Mac Laptop I started using VirtualHostX for that.

VirtualHostX is basically a GUI that helps me work with the existing Mac installed LAMP stack.

I can avoid the Mac and command line config. I can avoid installing everything else, and just use VirtualHostX to configure and start/stop everything.

This saved a massive amount of time for me and I do recommend it. But it is Mac only.

Mongoose for Mac, Windows and Linux

I recently encountered Mongoose. It works on Mac, Windows and Linux. 

I used the free version to quickly experiment with some downloaded open source libraries. 

All you do is download the small executable into the directory, run it, and you get the traditional XAMPP style taskbar tooltip icon and easy to use config. 

You can run multiple versions by having them listen on different ports.

I paid $8 for the Windows dev version which allows me to view the HTTP traffic easily as well. This $8 also gives me access to the Linux Pro version. For an extra $5 I could get access to the MacOS pro version.

Summary

I suspect that 'proper' web developers will always prefer an XAMPP installation. But they will also use it more and be completely familiar with it.

For someone like me, who jumps between apps, configs, machines, sites, etc., the simpler tools make more sense.

I suspect that at some point I'll probably jump back to XAMPP due to some future client needs. But for my own work, VirtualHostX and Mongoose are my current easy-to-use solutions.

What do you use?

Agile Testing Days 2014 - Workshop and Tutorial

Friday, November 28, 2014 15:50 PM

At Agile Testing Days 2014, I presented a full day workshop on "Technical Testing" in Agile and was part of the Black Ops Testing Workshop with Steve Green and Tony Bruce.

Note: there are limited spaces left on our Black Ops Testing Full Day tutorial in London in January 2015.

Both of these were hands on events.

In the tutorial I present examples of Technical Testing, and how to integrate Technical Testing into Agile; the participants also test a real, live application and apply the techniques, mindsets and tools that I describe.

Since it describes Technical Testing in an Agile system, we also spent time discussing the injection points for the technical testing process and thought processes.

The Black Ops Testing workshop took a similar approach but with less 'talking' since it was a much shorter time period with more people.

We started with a 5 minute lightning talk from myself, Tony, and Steve. During this we emphasized something important that we hoped the participants would focus on during their testing. We then let the participants loose on the system as we mingled. We coached, asked questions and observed. Then, during the debrief, we extemporized on our observations and asked questions about what we saw, to pull out the insights from the participants. We repeated this process over the session.

Both the Black Ops Testing Workshop and my Tutorial used Redmine as the application under test.

We picked Redmine for a number of reasons, and I'll list some below:

  • Virtual Machines are available which make it easy to install
  • The Virtual Machines can be deployed easily to Amazon cloud servers
  • It is available as an Amazon Cloud Marketplace system, making it easy to install
There are some words in there you might notice: "easy to install", "deployed easily".

Wouldn't it be great if all the apps we had to test were easy to install and configure and gain access to?

Yes it would. And as testers, when we work on projects, we can stress this to the project team, or work on it ourselves so that we don't spend a lot of time messing about with environments.

I used bitnami and their dashboard to automatically deploy and configure the environment. Tony used the amazon aws marketplace and worked with their dashboard. James Lyndsay helped us out, and went all old school, and deployed the base install to the machine.

I learned from this, that my exploratory note taking approach has permeated all my work. As I was installing the environment and configuring it, I made notes of 'what' I wanted to do, 'where' I was finding the information I needed, 'how' to do the steps I took, what data I was using (usernames, passwords, environment names, etc.). And when I needed to repeat the installation (I installed to a VM in Windows, on Mac, and in the cloud), I had all my notes from the previous installations.

When something went wrong in the environment, and I didn't have access to the parts of the system I needed, I was able to look through my notes and I could see that there were activities I had not performed. I thought I had, but my notes don't lie to me as much as my memory does.

It should come as no surprise to you then, that I stress note taking in my tutorial, and in the Black Ops Testing Workshop.


You can find the slides for the Black Ops Testing Workshop on slideshare. You can find more details about Black Ops Testing over on our .com site


Black Ops Testing Workshop from Agile Testing Days 2014 from eviltester

Agile Testing Days 2014 - Keynote

Friday, November 28, 2014 15:13 PM

I presented a keynote at Agile Testing Days 2014, and took part in the Black Ops Testing Workshop, and presented a one day tutorial on "Technical Testing in Agile". This post covers the Keynote.

The keynote was underpinned by the notion that 'Agile' is not a 'thing'; instead, 'Agile' provides the context within which our project operates and is therefore part of the Weltanschauung of the project. Or as I referred to it, the "System Of Development".

Because really I concentrate on 'Systems'. I think that I view 'context' as an understanding of the System. And remember that we, the testers, form part of that system and the context as well. Therefore our 'beliefs' about testing become important, as they impact how we interact with the people, and the system, and the process, in place on the project.

As ever, I made a lot of notes before the Keynote, and I present those below.

The slides are available on slideshare. The talk was recorded, but has not yet appeared online. My notes below might help you make sense of the slides.


Agile Testing Days 2014 Keynote - Helping Testers Add Value on Agile Projects from eviltester

A few things to note. Many of the keynotes overlapped: talking about role identification and beliefs, communicating from your own experience and models, building models of your process and testing, etc.

And prior to Agile Testing Days, Michael Bolton presented a Eurostar webinar on 'Agile', and James Bach presented "Skilled Testing and Agile Development" at Oredev 2014; both are worth watching. I include links to both for your delectation because they represent other testers applying Systems Thinking to create a model of 'Testing in Agile'.


Abstract:

Every Agile project is different, we know this, we don't do things 'by the book' on Agile projects. We learn, we interact, we change, we write the book as we go along. Throughout all of this, testing needs to remain viable, and it needs to add value. Remaining viable in this kind of environment can be hard.

Fortunately, we can learn to add value. In this keynote, Alan will describe some of the approaches and models he has used to help testing remain viable. Helping testers analyze the 'system of development' so the test approach can target process risks. Helping testers harness their own unique skills and approaches. The attitudes that the testing process often needs to have driving it, and the skill sets that teams need to ensure are applied to their testing.

At a simple level, this is just Systems Thinking and Modeling. In practice this can prove highly subversive and deliberately provocative. Because we're not talking about 'fitting in', we're talking about survival.

Notes:

Warren Zevon wrote “Ain’t that pretty at all” in 1982.

Warren Zevon wrote a song called “ain’t that pretty at all"

Warren Zevon was one of those singers who, when he comes on, I pretty much have to listen to; his voice and songs drag me in, rather than sitting as background music.

In this song, Mr Zevon describes a character who is pretty jaded.

     Well, I've seen all there is to see
     And I've heard all they have to say
     I've done everything I wanted to do . . .
     I've done that too

I know what that feels like. I’ve done management, performance testing, exploratory testing, agile testing, security testing, acceptance testing, UAT, automation, etc.

I’ve worked on Agile, waterfall, heavy weight documentation, lean, yada yada yada. None of it has ever fully worked or been perfect.

Feels like I’ve done everything. the danger is I become jaded or fixed in my ways.

People want Agile to be perfect.

And this Warren Zevon character doesn’t like what he sees.

And Agile isn’t perfect.

Reality doesn’t seem to match up with his expectations, or desires, or wants.

     And it ain't that pretty at all
     Ain’t that pretty at all

Agile can be messy. It doesn’t always match the books or talks. And when you are new to it sometimes you don’t like what you see.

I went to the Grand Canyon; it took a bus a couple of hours to get there. When I got out, everyone else seemed to see a wondrous example of mother nature.

I saw some cliffs.

We may not have the strategies to cope

I was back on the bus in 10 minutes. I might be jaded but I can avoid the sunk cost fallacy.

But we might not have the strategies we need to deal with the situation.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

If you come from a waterfall background you might not know how to handle the interactions on an agile project. And if your prep was books and blogs, you might find they described an ideal where the strategies they used are not in place.

Some of our strategies for coping might be self destructive and we might not notice, and other people might not tell us. Because systems are self-healing and they can heal by excluding the toxic thing in the system.

Without the right strategy we make the wrong choice

And when you don’t have a lot of strategies you end up making choices and taking actions that aren’t necessarily the most appropriate for the situation.

Testers telling developers that the project would be better if they just did TDD and paired, or if we all worked on automated acceptance tests together, might not get the outcome that you want.

You might fall back on strategies that worked on other projects. But don’t fit in this one.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

You end up wanting to write lots of documentation up front because you’ve always done it, or you want to test in places where the stories don’t go, or you want to remind everyone that ‘Agile’ isn’t done like that.

Whatever...

So very often, my job….

    I've been to Paris
    And it ain't that pretty at all
    I've been to Rome
    Guess what?

Is to help people, with their beliefs and expectations. To work with the people and system around them.

Because when I walked away from the Grand Canyon it wasn’t a reflection on the grand canyon, it was a reflection of me. My expectations were different from the reality. My belief in being wowed, because of all the things I’d heard about it, stepped in the way of seeing what was on the ground in front of me. Or in the ground in front of me.

So they don’t do something stupid

   I'd like to go back to Paris someday and visit the Louvre Museum
   Get a good running start and hurl myself at the wall
   Going to hurl myself against the wall
   'Cause I'd rather feel bad than feel nothing at all
   And it ain't that pretty at all
   Ain't that pretty at all

So they don’t do something stupid on the project that stops them maximising their effectiveness, and puts blockers in the way to working effectively with the team.

I help testers survive in Agile projects

That’s never my role title. Test Manager, Test Consultant, Automation Specialist. Agile Tester. Blah Blah Blah.

But I seem to help testers survive, and help testing survive. So that we don’t fall prey to just automating acceptance criteria, we actually test the product and explore its capabilities.

And I say ’survive’, because that's what I had to learn to do.

We survive when we add value

I think we survive when we add value, learn, and make ourselves a viable part of the project in ways that are unique to us.

In “The Princess Bride”, the hero Westley is caught by The Dread Pirate Roberts, and offers to be his valet for 5 years.

The Dread Pirate Roberts agrees to try it but says he will most probably kill him in the morning. He’s never had a valet before so doesn’t know how a valet would add value to him, or how he could use him.

And when the morning arrives and The Dread Pirate Roberts comes to kill poor Westley. Westley thanks him for the opportunity to have learned so much about the workings of a pirate ship the previous day.

Westley explains to the dread pirate roberts how much he learned about the workings of the ship, and how he had helped the cook by pairing with him, and how he had reorganised the items in the cargo hold.

And every day Westley survives his capture by the Dread Pirate Roberts by working and adding value during the day. And every day he demonstrates the value that he can add to the operation of the pirate ship. And every day he learns more about piracy and the skills of pirates.

Until, years later, The Dread Pirate Roberts explains that The Dread Pirate Roberts is a role, not a person, and so they dock in a port, take on a new crew, and Westley adopts the role of Dread Pirate Roberts.

And testers need to act such that people don't ask "What does testing do on Agile?", because they know what their testers do on the project, and they see value in those activities. They know specifically what "Bob and Eris, or Dick and Jane, or Janet and John", their testers, actually do.

This isn’t fluffy people stuff

When I look online and see the type of strategies that people describe for surviving and fitting in on Agile projects, it isn’t quite what I do or recommend, so I know there are alternative paths and routes in.

I’m not here to make you look good

I’m really not here to make you look good.

I mean I’m not here to make you look bad… but I might.

And I’m not here to make the product look bad… but I might.

And I’m not here to make the process, or the testing, or the ‘whatever’ look bad… but I might.

If it ain't that pretty, then we can do something about it when we recognise that, and negative feedback can help.

I’m really here to help raise the bar and improve the work we all do together. If that makes you look good, then that’s great, if that makes you look bad and you improve as a result, then that’s great too. Looking good is a side-effect.

Survival does not mean 'fitting in'

I’m not talking about fitting in. Buying Lunch. Making people look good. yada yada.

I don’t think you get bonus points for doing that.

But if that’s you, don’t stop. I think its great if people want to buy doughnuts for everyone.

It's not me. So there are other ways to survive on projects. I don’t have many strategies for 'social success'.

Personally I think team work means helping other people contribute the best stuff they can, and the stuff that only they can, and helping pass that knowledge around the team. And everyone needs to do that.

Testers survive by doing testing stuff

It's pretty much as complicated as that. Learn to test as much as you can, and drop all the bureaucracy and waste that adds no value. Then you’ve covered the basics.

Then you learn how to help others improve by passing on your knowledge and mindsets.

And we are taught how to do ‘testing stuff’ so if we do that well and improve our testing skills then we have a good chance of survival.

Testers are taught to work with the System Under Development

Essentially we are taught how to understand and work with the System Under Development

We learn techniques to analyse the system and model it and then build questions from the model which we ask of the system.

We might build a data domain model, then build questions around its boundaries, and then ask the system those questions by phrasing them in a form the system understands.

We learn how to do that as testers with techniques and analysis and blah de blah.. testing stuff.

We survive when we adapt to the System Of Development

Testing survives when it learns to work with the Systems in place. And two obvious systems are the System under development, and the System of development. We have to adapt to both.

And fortunately testers are often good at analysing systems. Modeling them. Viewing them from different perspectives. Breaking them into chunks. Viewing the flow of data. Working out what data becomes information to subsystems. Looking for data transformations. Looking for risk in process and communication. Then figuring out different injection points.

When we work with systems under development we don't just input data at one end and see what happens out the other. Same with Systems of development, we don't just work at the stories that come in, and then check the system that comes out. We learn to look for different injection points and create feedback earlier.

So one of the main things I help testers do, is analyse the System Of Development, and adapt to it.

Many of the testing survival tricks and techniques we learn relate to Waterfall projects.

Annie Edson Taylor thought she would become rich and famous if she could survive a trip over Niagara Falls in a barrel.

She survived. She wasn’t rich and famous.

She survived by padding out her barrel and increasing the air pressure in the barrel.

You can get yourself ready before you hit Agile.

I survived waterfall by removing all padding, and taking responsibility for what I did.

I analysed the system of development and did what this specific implementation needed, so I could survive, and I took responsibility so that I could add value.

So I was in a better place than many people on waterfall projects when we started working on Agile.

Blake wrote "I must create a system..."

I must create a system or be enslaved by another man's. My business is not to reason and compare, my business is to create.
Jerusalem The Emanation of The Giant Albion

This is one of the first texts I ever read on system design and modelling, and I read it when I was about 19 or 20, and stuff stays with you when you encounter it early.

But this is my meta model of modelling. And I know I have to own and take responsibility for my view of the world and the systems that I work with. Otherwise I’ll fall prey to ‘other’ people’s strategies.

I remember the first time I worked on an ‘Agile’ project

I had to learn the strategies to survive. And that’s how I can recognise the same blockers in other testers or projects new to Agile.

I fell prey to the Agile Hype

I had a whole set of beliefs about what agile was going to be like:

- the pairing
- the TDD
- the ATDD
- the BDD
- the ADBDBTBD - I might have made that one up

But there was a lot of stuff.

And it ain't that pretty at all

Reality wasn’t as pretty as I imagined.

I got stuck.

Remember I knew how to handle Waterfall. I had that down.

I could work the system of development and work around the blocks and annoyances that it threw at me.

But here was a new thing. A new system.

Stuck on...

I knew what we were ‘supposed’ to do. But people weren’t really doing that. And I didn’t know how to do this… hybrid thing.

I realised that people didn’t know what to do.

And I realised I simply didn’t have the basic skills to work in this new system of development.

I was the worst ‘pair’ in the world. You know when deer get caught in headlights. That was me the first time I was given the keyboard in a pairing session.

So I did what I always do...

Try to take over the world.

No - of course not. I’ve tried that before. Trying to impose your model on top of everyone else’s and making it work, is hard.

It's easier to treat the system as a unique system and model it. Observe it. Understand it.

Look at the parts, the subsystems, the relationships, the feedback flows, the amplifiers, the attenuators.

See I think this is what I do, I work with systems

That’s why my social strategies are… different.

I see the people systems in front of me. The processes. The politics. etc.

And that led to me changing… me

And much of what I did, I teach testers to do on projects.

I worked on my coding skills so I could pair

Every Agile Project is different. 

So we learn to analyse the system that is in place. The Weltanschauung. The context.

Systems have common concepts and elements, in general: entities, relationships, data flows, subsystems etc.

But each specific system, is different, and we can view it in different ways, with different models.

Confessions of An Accidental Security Tester

Thursday, November 06, 2014 22:01 PM

At Oredev 2014 I presented "Confessions of an Accidental Security Tester".

The slides are on slideshare, the video below and on vimeo:


"Alan Richardson does not describe himself as a security tester. He's read the books so he knows enough to know he doesn't know or do that stuff properly. But he has found security issues, on projects, and on live sites that he depends on for his business.

You want to know user details? Yup, found those. You want to download the paid for assets from the site without paying for them? Yup, can do. You want to see the payment details for other people? OK, here they are. All of this, and more, as Alan stumbled, shocked, from one security issue to the next.

In this session Alan describes examples of security issues, and how he found them: the tools he used, why he used them, what he observed and what that triggered in his thought processes.

Perhaps most shocking, is not that the issues were live, and relatively easy to find and exploit. But that the companies were so uninterested in them. So this talk also covers how to 'advocate' for these issues. It also warns you not to expect rewards and gratitude. Companies with these type of issues typically do not have bug bounty schemes.

Nowadays, many of the tools you need to find and exploit these issues are built in to the browser. Anyone could find them. But testers have a head start. So in this session Alan shows how you can build on the knowledge and thought processes you already have, to find these types of issues.

This is a talk about pushing your functional testing further, deeper, and with more technical observation, so you too can 'accidentally' discover security issues."

CONFESSIONS OF AN ACCIDENTAL SECURITY TESTER - "I DIDN'T BREAK IN, YOU LEFT THE DOOR OPEN" from Øredev Conference on Vimeo.

An exploratory testing example explored: Taskwarrior

Friday, September 26, 2014 14:05 PM

or "Why I explored Taskwarrior the way I did".

In a previous post I discussed the tooling environment that I wanted to support my testing of Taskwarrior for the Black Ops Testing webinar of 22nd September 2014.

In this post, I'll discuss the 'actual testing' that I performed, and why I think I performed it the way I did. At the end of the post I have summarised some 'principles' that I have drawn from my notes.

I didn't intend to write such a long post, but I've added sections to break it up. Best of luck if you make it all the way through.


First Some Context

First, some context:
  • 4 of us, are involved in the testing for the webinar
  • I want the webinars to have entertainment and educational value
  • I work fast to try and find 'talking points' to support the webinar

The above means that I don't always approach the activity the same way I would a commercial project, but I use the same skills and thought processes. 

Therefore to meet the context:
  • I test to 'find bugs fast' because they are 'talking points' in the webinar
  • I explore different ways of testing because we learn from that and can hopefully talk about things that are 'new' for us, and possibly also new for some people on the webinar

I make these points so that you don't view the testing in the webinar as 'this is how you should do exploratory testing' but you understand 'why' I perform the specific exploration I talk about in the webinar.

And then I started Learning


So what did I do, and what led to what?

(At this point I'm reading my notes from github to refresh my memory of the testing)

After making sure I had a good enough toolset to support my testing...

I spent a quick 20 minutes learning the basics of the application using the 30 second guide. At this point I knew 'all you need to know' to use the application and create tasks, delete tasks, complete tasks and view upcoming tasks.
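For anyone following along, a minimal sketch of that '30 second guide' level of usage looks something like this (the descriptions and ids here are illustrative, not taken from my notes):
     task add "write the webinar notes"   # create a task
     task list                            # view upcoming tasks
     task 1 done                          # complete a task by id
     task 2 delete                        # delete a task by id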

Because of the tooling I set up, I can now see that there are 4 files used in the application to store data (in version 2.2)

  • pending.data
  • history
  • undo.data
  • completed.data

From seeing the files using multitail, I know:
  • the format of the data
  • items move from one file to another
  • undo.data has a 'before' and 'after' state
  • history would normally look 'empty' when viewing the file, but because I'm tailing it, I can see data added and then removed, so it acts as a temporary buffer for data

In the back of my head now, I have a model that:
  • there might be risks moving data from one file to another if a file was locked etc. 
  • text files can be amended, 
    • does the app handle malformed files? truncated files? erroneous input in files?
  • state is represented in the files, so if the app crashed midway then the files might be out of sync
  • undo would act on a task action by task action
  • as a user, the text files make it simpler for me to recover from any application errors and increase the flexibility open to me as a technical user
  • as a tester, I could create test data pretty easily by creating files from scratch or amending the files
  • as a tester, I can put the application into the state I want by amending the files and bypassing the GUI

All of these things I might want to test for if I was doing a commercial exploratory session - I don't pursue these in this session because 'yes they might make good viewing' but 'I suspect there are bugs waiting in the simpler functions'.

After 20 minutes:
  • I have a basic understanding of the application, 
  • I've read minimal documentation, 
  • I've had hands on experience with the application,
  • I have a working model of the application storage mechanism
  • I have a model of some risks to pursue

I take my initial learning and build a 'plan'


I decide to experiment with minimal data for the 'all you need to know' functionality and ask myself a series of questions, which I wrote down as a 'plan' in my notes.

  • What if I do the minimal commands incorrectly?

Which I then expanded as a set of sub questions to explore this question:
  • if I give a wrong command?
  • if I miss out an id?
  • if I repeat a command?
  • if a task id does not exist?
  • if I use a priority that does not exist?
  • if I use an attribute that does not exist?
This is not a complete scope expansion for the commands and attributes I've learned about, but this is a set of questions that I can ask of the application as 'tests'.
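My notes don't record the exact command lines, but the sort of probes those questions turn into look roughly like this - the spellings, ids and attribute names below are my illustrative guesses, not a transcript:
     task app "extra text"        # a mistyped command ('app' instead of 'add')
     task done                    # a command with the id missed out
     task 1 done; task 1 done     # repeating a command
     task 999 done                # an id that does not exist
     task add "x" priority:Z      # a priority that does not exist (valid values are H, M, L)
     task add "x" notreal:1       # an attribute that does not exist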

I gain more information about the application and learn stuff, and cover some command and data scope as I do so.

I start to follow the plan


10 minutes later I start 'testing' by asking these questions to the application.

Almost immediately my 'plan' of 'work through the questions and learn stuff' goes awry.

I chose 'app' as a 'wrong command' instead of 'add', but the application prompts me that it will modify all tasks. I didn't expect that. So I have to think:
  • the message by the application suggests that 'app' == 'append'
  • I didn't know there was an append command, I didn't spot that. A quick 'task help | grep append' tells me that there is an append command. I try to grep for 'app' but I don't see that 'app' is a synonym for append. Perhaps commands can be truncated? (A question about truncation to investigate for later)
  • I also didn't expect to see 3 tasks listed as being appended to, since I only have one pending task, one completed task, and one deleted task. (Is that normal? A question to investigate in the future)

And so, with one test I have a whole new set of areas to investigate and research. Some of this would lead to tests. Some of this would lead to reading documentation. Some of this would lead to conversations with developers and analysts and etc. if I was working on a project.

I conduct some more testing using the questions, and add the answers to my notes.

I also learn about the 'modify' command.

Now I have CRUD commands: 'add', 'task', 'modify' | 'done', 'delete'

I can see that I stop my learning session at that point - it coincidentally happens to be 90 mins. Not by design. Just, that seemed to be the right amount of time for a focused test learning session.

I break and reflect


When I come back from my break, I reflect on what I've done and decide on an approach to ask questions of:
  • how can I have the system 'show' me the data? i.e. deleted data
  • how can I add a task with a 'due' date to explore more functionality?
These questions lead me to read the documentation a little more. And I discover how to show the internal data from the system using the 'info' command. I also learn how to see 'deleted' tasks and a little more about filtering.
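As a rough sketch, the commands I mean are along these lines (my guesses at the syntax, with illustrative ids and dates):
     task 1 info                                 # show the internal data for a task, including fields the normal reports hide
     task status:deleted all                     # filter a report so that deleted tasks are shown
     task add "pay the invoice" due:2014-10-31   # a task with a due date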

Seeing the data returned by 'info' makes me wonder if I can 'modify' the data shown there. 

I start testing modify


I already know that I can delete a task with the delete command. But perhaps I can modify the 'status' and create a 'deleted' task that way?

And I start to explore the modify command using the attributes that are shown to me, by the application. (you can see more detail in the report)

And I can 'delete' a task using the modify, but it does not exercise the full workflow i.e. the task is still in the pending file until I issue a 'report'. 
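A sketch of the kind of thing I mean (the id is illustrative):
     task 2 modify status:deleted    # set the status directly, bypassing the 'delete' command
     cat ~/.task/pending.data        # the task is still sitting in pending.data at this point
     task list                       # running a report triggers the clean up and moves it out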

My model of the application changes:
  • reporting - actually does a data clean up, prior to the report and moves deleted tasks and hanging actions so that the pending file is 'clean'
  • 'modify' bypasses some of the top level application controls so might be a way of stressing the functionality a little
At this point, I start to observe unexpected side-effects in the files. And I find a bug where I can create a blank undo record that I can't undo (we demonstrate this in the webinar).

This is my first 'bug' and it is a direct result of observing a level lower than the GUI, i.e. at a technical level.

I start modifying calculated fields


I know, from viewing the storage mechanism, that some fields shown on the GUI, i.e. ID and Urgency, do not actually exist. They do not exist in the storage mechanism in the file. They are calculated fields and exist in the internal model of the data, not in the persistent model.

So I wonder if I can modify those?

The system allows me to modify them, and add them to the persistence mechanism, but ignores them for the internal model.

This doesn't seem like a major bug, but I believe it to be a bug since other attributes I try to modify, which do not exist in the internal model, e.g. 'newAttrib' and 'bob', are not accepted by the modify command, but 'id' and 'urgency' are.
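A sketch of those probes (ids and values are illustrative):
     task 1 modify urgency:99    # accepted, and written to the data file, but ignored by the internal model
     task 1 modify id:42         # same again
     task 1 modify bob:1         # rejected - an unrecognised attribute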

I start exploring the 'entry' value


The 'entry' value is the date the task was entered. This is automatically added:

  •  Should I be able to amend it?
  • What happens if I can?
So I start to experiment.

I discover that I can amend it, and that I can amend it to be in the future.

I might expect this with a 'due' date. But not a 'task creation date' i.e. I know about this task but it hasn't been created yet.

I then check if this actually matters. i.e. if tasks that didn't officially exist yet are always less urgent than tasks that do, then it probably didn't matter.

But I was able to see that a task that didn't exist yet was more urgent than a task that did.
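The probe itself was a one-liner along these lines (my reconstruction - the id and date are illustrative):
     task 1 modify entry:2015-06-01    # push the 'created' date into the future
     task 1 info                       # then check what the system reports for entry and urgency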

Follow-on questions that I didn't pursue would then relate to other attributes on the task:
  •  if the task doesn't exist yet, should priority matter?
  • Can I have a due date before the entry exists?
  • etc.
Since I was following a defect seam, I didn't think about this at the time.

And that's why we review our testing. As I'm doing now. To reflect on what I did, and identify new ways of using the information that I found.

Playtime


I'd been testing for a while, so I wanted a 'lighter' approach.

I decided to see if I could create a recurring task, that recurred so quickly, that it would generate lots of data for me.

Recurring tasks are typically weekly or daily. But what if a task repeated every second? In a minute I could create 60 and have more data to play with.

But I created a Sorcerer's Apprentice moment. Tasks were spawning every second, but I didn't know how to stop them. The application would not allow me to mark a 'parent' recurring task as done, or delete a 'parent' recurring task. I would have to delete all the children, but they were spawning every second. What could I do?

I could probably amend the text files, but I might introduce a referential integrity or data error. I really wanted to use the system to 'fix' this.

Eventually I turned to the 'modify' command. If I can't mark the task as 'deleted' using the 'delete' command, perhaps I can bypass the controls and modify it to 'status:deleted'.

And I did. So a 'bug' that I identified about bypassing controls was actually useful for my testing. Perhaps the 'modify' command is working as expected and is actually for admins etc.

Immediate Reflection


I decided to finish my day by looking at my logs.

I decided:

  • If only there were a better way of tracking the testing.

Which was one of the questions I identified in my 'tool setup session', but I had decided I had a 'good enough' approach to start.

And now, having added some value with testing, and having learned even more about what I need the tooling to support, I thought I could justify some time to improve my tooling.

I had a quick search, experimented with the 'script' command, but eventually found that using a different terminal which logged input and output would give me an even better 'good enough' environment.

Release


We held the Black Ops Testing Webinar and discussed our testing, where I learned some approaches from the rest of the team.

I released my test notes to the wild on github.

Later Reflection


This blog post represents my more detailed reflection on what I did.

This reflection was not for the purpose of scope expansion, clearly I could do 'more' testing around the areas I mentioned in my notes.

This reflection was for the purpose of thinking through my thinking, and trying to communicate it to others as well as myself. Because it is all very well seeing what I did, with the notes. And seeing the questions that I wrote allows you to build a model of my test thinking. But a meta reflection on my notes seemed like a useful activity to pursue.

If you notice any principles or approaches in my notes that I didn't document here, then please let me know in the comments.

Principles

  • Questions can drive testing...
    • You don't need the answers, you just need to know how to convert the question into a form the application can understand. The application knows all the answers, and the answers it gives you are always correct. You have to decide if those 'correct' answers were the ones you, or the spec, or the user, or the data, or etc. etc., actually wanted.
  • The answers you receive from your questions will drive your testing
    • Answers lead to more questions. More questions drive more testing.
  • The observations you make as you ask questions and review your answers, will drive your testing.
    • The level to which you can observe, will determine how far you can take this. If you only observe at the GUI layer then you are limited to a surface examination. I was able to observe at the storage layer, so I was able to pursue different approaches than someone working at the GUI.
  • Experiment with internal consistency
    • Entity level with consistency between attributes
    • Referential consistency between entities
    • Consistency between internal storage and persistent storage
    • etc.
  • Debriefs are important to create new plans of attack
    • personal debriefs allow you to learn from yourself, identify gaps and new approaches
    • your notes have to support your debriefs, otherwise you will work from memory and won't give yourself all the tools you need to do a proper debrief.
  • Switch between 'serious' testing and 'playful' testing
    • It gives your brain a rest
    • It keeps you energised
    • You'll find different things.


Lessons learned testing Command Line Applications from Black Ops Testing Webinar

Friday, September 26, 2014 14:26 PM

For the Black Ops Testing Webinar on 22nd September 2014 we were testing Taskwarrior, a CLI application.

I test a lot of Web software, hence the "Technical Web Testing 101" course, but this was a CLI application so I needed to get my Unix skills up to date again, and figure out what supporting infrastructure I needed.

By the way, you can see the full list of notes I made at github. In this post I'm going to explain the thought process a little more.

Before I started testing I wanted to make sure I could meet my 'technical testing' needs:

  • Observe the System
  • Control the Environment
  • Restore the App to known states
  • Take time stamp logs of my testing
  • Backup my data and logs from the VM to my main machine
And now I'll explain how I met those needs... and the principles the process served.



You will also find here a real life exploratory testing log that I wrote as I tested.

Observe the System

Part of my Technical Testing approach requires me to have the ability to Observe the system that I test.

With the web this is easy: I use proxies and developer tools.

How to do this with a CLI app? Well in this case Taskwarrior is file based. So I hunted around for some file monitoring tools.

I already knew about Multitail, so that was my default.  'Tail' allows you to monitor the changes to the end of a file. Multitail allows me to 'tail' multiple files in the same window.

I looked around for other file monitoring tools, but couldn't really find any.

With Multitail I was able to start it with a single command that was 'monitor all files in this directory'
  • multitail -Q 1 ~/.task/*

I knew that the files would grow larger than tail could display, so I really wanted a way to view the files.

James Lyndsay used Textmate to see the files changing. I didn't have time to look around for an editor that would do that (and I didn't know James was using Textmate because we test in isolation and debrief later so we can all learn from each other and ask questions). 

So I used gedit. The out of the box editing tool on Linux Ubuntu. This will reload the file if it has changed on the disk when I change tabs. And since I was monitoring the files using Multitail, I knew when to change tabs.

OK, so I'm 'good' on the monitoring front.

Control the Environment

Next thing I want the ability to do? Reset my environment to a clean state.

With Taskwarrior that simply involves deleting the data files:
  • rm ~/.task/*

Restore the App to known states

OK. So now I want the ability to backup my data, and restore it.

I found a backup link on the Taskwarrior site. But I wanted to zip up the files rather than tar them, simply because it made it easier to work cross platform.
  • zip -r ~/Dropbox/shared/taskData.zip ~/.task
The above zips up, recursively, a directory. I used recursive add, just in case Taskwarrior changed its file setup as I tested. Because my analysis of how it stored the data was based on an initial 'add a few tasks' usage I could not count on it remaining true for the life of the application usage. Perhaps as I do more complicated tasks it would change? I didn't know, so using a 'recursive add' gave me a safety net.
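For the restore half I would expect something like the following to work, assuming Info-ZIP's default of storing absolute paths with the leading '/' stripped (so extracting relative to '/' puts the files back under the home directory):
     unzip -o ~/Dropbox/shared/taskData.zip -d /    # -o overwrites the current .task files with the backup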

Same as the Multitail command, which will automatically add any new files it finds in the directory, so it added a history file after I started monitoring.

Backup my data and logs from the VM to my main machine

I would do most of my testing in a VM, and I really wanted the files to exist on my main machine because I have all the tools I need to edit and process them there. And rather than messing about with a version control system or shared folders between VM etc.

I decided the fastest way of connecting my machines was by using Dropbox. So I backup the data and my logs to Dropbox on the VM and it automatically syncs to all my other machines, including my desktop.

Take time stamp logs of my testing

I had in the back of my mind that I could probably use the 'history' function in Bash to help me track my testing.

Every command you type into the Bash shell is recorded in the history log. And you can see the commands you type if you use the 'history' command. You can even repeat them if you use '!' i.e. '!12' repeats the 12th command in the history.

I wanted to use the log as my 'these are the actual commands I typed in over this period of time' record.
So I had to figure out how to add time stamps to the history log:
  • export HISTTIMEFORMAT='%F %T    '

I looked for a simple way to extract items from the history log to a text file, but couldn't find anything in time so I eventually settled on a simple redirect e.g. redirect the output from the history command to a text file (in dropbox of course to automatically sync it)
  • history > ~/Dropbox/shared/task20140922_1125_installAndStartMultiTail.txt

And since I did not add the HISTTIMEFORMAT to anything global I tagged it on the front of my history command
  • export HISTTIMEFORMAT='%F %T    '; history > ~/Dropbox/shared/task20140922_1125_installAndStartMultiTail.txt

I added 'comments' to the history using the 'echo' command, which displays the text on screen and appears in the history as an obviously non-testing command.

I also found with history (see the combined sketch below):
  • add a ' ' space before the command and it doesn't appear in the history - great for 'non core' actions
  • you can delete 'oops' actions from the history with 'history -d #number'
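A combined sketch of those tricks - this assumes the leading-space trick is enabled, i.e. HISTCONTROL includes 'ignorespace', which Ubuntu's default .bashrc sets via 'ignoreboth':
     echo "START: exploring the modify command"    # a 'comment' that shows up in the history log
      ls ~/.task                                   # note the leading space - kept out of the history
     history -d 42                                 # delete an 'oops' entry (42 is an illustrative entry number)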

And that was my basic test environment setup. If I tracked this as session based testing then this would have been a test environment setup session.

All my notes are in the full testing notes I made.

I could have gone further:
  • ideally I wanted to automatically log the output of commands as well as the command input
  • I could have figured out how to make the HISTTIMEFORMAT stick
  • I could have figured out how to have the dropbox daemon run automatically rather than running in a command window
  • I could have found a tool to automatically refresh a view of the full file (i.e. as James did with textmate)
  • etc.

But I had done enough to meet my basic needs for Observation, Manipulation and Reporting.
Oh, and I decided on a Google doc for my test notes very early on, because, again, that would be real time accessible from all the machines I was working from. I exported a pdf of the docs to the github repo.

And what about the testing?

Yeah, I did some testing as well. You can see the notes I made for the testing in the github repo.

So how did the tools help?


As I was testing, I was able to observe the system as I entered commands.

So I 'task add' a new task and I could see an entry added to the pending.data, and the history.data, and the undo.data.

As I explored, I was able to see 'strange' data in the undo.data file that I would have missed had I not had visibility into the system.

You are able to view the data if you want, because I backed it up as I went along, and it automatically saved to dropbox, which I later committed to github.

What else?

At the end of the day, I spent a little time on the additional tooling tasks I had noted earlier.

I looked at the 'script' command which allows you to record your input and output to a file, but since it had all the display esc characters in it, it wasn't particularly useful for reviewing.

Then I thought I'd see what alternative terminal tools might be useful.

I initially thought of Terminator, since I've used it before. And when I tried it, I noticed the logger plug-in, which would have been perfect, had I had that at the start of my testing since the output is human readable and captures the command input and the system output as you test. You can see an example of the output in the github repo.

I would want to change the bash prompt so that it had the date time in it though so the log was more self documenting.
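For example, using bash's standard prompt escapes (\d for the date, \t for the time):
     export PS1='[\d \t] \u@\h:\w\$ '    # prefix every prompt with the date and time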

You can see the change in my environment in the two screenshots I captured.

  • The initial setup was just a bunch of terminals 
  • The final setup has Terminator for the execution, with gedit for viewing the changed files, and multitail on the right showing me the changes in the system after each command

Principles

And as I look over the preceding information I can pull out some principles that underpin what I do.
  • Good enough tooling
    • Ideally I'd love a fully featured environment and great tools, but so long as my basic needs are met, I can spend time on testing. There is no point building a great environment and not testing. But similarly, if I don't have the tools, I can't see and do certain things that I need in my testing.
  • Learn to use the default built in tools
    • Gedit, Bash, History - none of that was new. By using the defaults I have knowledge I can transfer between environments. And can try and use the tools in more creative ways e.g. 'echo' statements in the history
  • Automated Logging doesn't trump note-taking
    • Even if I had Terminator at the start of my testing, and a full log. I'd still take notes in my exploratory log. I need to write down my questions, conclusions, thoughts, research links, etc. etc.
    • If I only had one, I'd take my exploratory log above any automated log taking tool.
  • Write the log to be read
    • When you read the exploratory log, I think it is fairly readable (you might disagree). But I did not go back and edit the content. That is all first draft writing.
      • What do I add in the edit? I answer any questions that I spot that I asked and answered. I change TODOs that I did. etc. I try not to change the substance of the text; even if my 'conclusions' as to how something is working are wrong at the start, but change by the end, I leave them as is, so I have a log of my changing model.
You might identify other principles in this text. If you do, please leave a comment and let me know.

Why didn't you use tool X?

If you have particular tools that you prefer to use when testing CLI applications then please leave a comment so I can learn from your experience.


Using Wireshark to observe Mobile HTTP Network traffic

Thursday, September 11, 2014 11:58 AM

You can find Wireshark online - it is a free tool.

https://www.wireshark.org/

Note that you may not be able to capture the mobile traffic on Windows because of WinPCap limitations. You may need to buy an additional adapter to do this. I'm using Mac to show you this functionality.
http://wiki.wireshark.org/CaptureSetup/WLAN#windows

What is it?

Wireshark is a tool for monitoring network traffic. Unlike an HTTP proxy server, where you have to configure your machine to point to the proxy server in order to monitor the traffic, with Wireshark you tell it to capture traffic from your network card, and it can then capture any traffic going through that network.
So if your mobile device is on the same wifi network as your Wireshark machine's wifi card, then you can capture the wifi traffic, filter it, and then monitor the HTTP traffic from your mobile device.

Why would I want to do that?

Because sometimes the mobile app you are testing does not honour the proxy settings of the device and goes direct, so you don't see the traffic.
And because you can start learning more about the network traffic layers being used by your application and your device in general.
(It's also fun to hook into hotel wifi and airport lounge wifi - but don't tell anyone.)
But the serious point is that we know we want to observe the HTTP traffic. If we can't, because we can't configure the app to point to the proxy, then we need other options. We need to increase our flexibility in approaching the observation. So we have a new option - work at the network traffic level, rather than through a proxy.
Our aim is to keep looking for new ways of achieving our outcomes. Not finding tools, for the sake of tools. But finding new approaches.

Installing Wireshark

On Windows

The Windows install is simple. Just download and run.
https://www.wireshark.org/download.html

On Mac

The Mac install was a little harder for me and it didn't work out of the box, so I had to do the extra steps to add the application to XQuartz.
If it doesn't work then you could try this: start XQuartz, and in the Applications menu of XQuartz, customize it and "Add Item" with the command:
  • "open /Applications/Wireshark.app/Contents/MacOs/Wireshark"
  • or "open wireshark"
  • or "wireshark"
Then you could try running Wireshark from the Applications menu in XQuartz, or from the application icon directly.
You might find these links helpful if you are on a Mac.

On Linux

I haven't tried the install on linux - I imagine the instructions on the Wireshark website work fine.

First Usage

Wireshark can seem intimidating to work with initially.
It is a complicated tool and there is a lot to learn about it.

Start a Capture

On the main page, select your network card hooked to the wifi network. Then click "Capture Options".
In the Capture Options table, check that "Mon. Mode" says enabled for the interface you want to use. If it doesn't, you'll only see your own traffic.
To change "Mon. Mode", double click the item in the table, choose "Capture packets in promiscuous mode" and "Capture packets in monitor mode", then press [OK].
Then [Start] the capture.
If you are on an encrypted network then you might need to decrypt the traffic.
http://wiki.wireshark.org/HowToDecrypt802.11
I sometimes have to fiddle with the IEEE 802.11 preferences: changing them, hitting apply, changing them, etc. Until I see the actual http traffic.
I also have to disconnect the android device from the network, and then reconnect it, so that it sends the initial network connection and decryption packets. Feel free to test on open networks, where you don't have these types of issues (because they are insecure), if you want to.

Filter the capture

At this point you're going to start seeing a lot of traffic flowing through your network.
So you want to filter it.
In the filter text editor type "ip.addr eq 192.168.1.143", or whatever the IP of your device is, to start seeing that traffic.
Then if you just want to see the HTTP traffic you can do
"ip.addr eq 192.168.1.143 and http"
or, to see just the GET requests:
"ip.addr eq 192.168.1.143 and http.request.method eq "GET""
This is a useful tool to have in your toolbox, for those moments where you have less control over the application under test, but still want to observe the traffic in your testing.

You can see examples of Wireshark in action to help test an Android app in our Technical Web Testing 101 online course.

StarEast 2014 Lightning Talk: "A Sense of Readiness"

Wednesday, November 26, 2014 16:53 PM

At StarEast 2014, I presented a Lightning Talk as part of their "Lightning Strikes the Keynotes" session.

You can watch all the keynotes here or see just mine.


I make quite a lot of notes and prep for my talks before I present them, and so in this post I will walk you through some of the notes, and the process I used to get ready for the talk.
And I'll use the medium of the blog to expand on the topic a little with additional lessons learned from pulp authors, relating to test planning and preparation.


I think a lightning talk is 'as hard' as a full talk; in some ways it is harder. For those people who present a lightning talk as their 'first talk' because they think it will be easier: the only 'easy' part is that you are on and off the stage faster. I find I have to work much harder to condense my message into such a small time frame.
I made notes on a few different topics, but eventually decided upon the theme:
  • "Are you ready to start testing tomorrow?"
with the title 
  • "A sense of urgency"? or "A sense of readiness"?
Because... "Are you ready to start testing tomorrow?" was a question I use when evaluating my strategy and planning process as a test manager, and when I'm a tester. I always want to know that I am ready to test tomorrow (or now) if I have to. And because I don't think everyone else adopts this frame of mind, I wanted to explore it a little.


My first step when preparing a talk is...
  1. to just talk
Given the title, I talk to the wall, record it, and make notes.
For me, this is a purely temporary measure, and I delete these artifacts afterwards.
Then I collate my notes into a small, first-person, script-like essay.
Which in this instance came out as a single thread below. By single thread I mean one topic, one path through, no 'asides' or references to analogous material.

Over the years, I've been on site and people have been talking about a "sense of urgency"; 'people' generally means management. And "sense of urgency" generally means "why aren't these people working harder to meet the deadlines that we have arbitrarily imposed upon them".
When I get involved, I don't usually try and solve that problem. What I like to focus on is a sense of readiness.
Because I often see testers - not ready to test the software. They are writing the strategy and the approach and everything else they are asked to, but they aren't getting ready. 
This is really basic but I ask people "could you start testing the software tomorrow"? If you could then you're in a pretty good place. If you're not ready, then you're in a pretty good place because you have your todo list to get ready, by asking 'what do we need in order to be ready'. Everything else - strategy, policy, approach, etc. is a bonus. Because if you're ready - you can communicate your readiness, and your 'documentation' is a result of taking the time to write it down.
And you need to be ready to test at different levels: functionality, requirements, domain, technology. But all of that is for nothing if you don't yet have the attitude that you could test this thing at the drop of a hat. Mentally build that 'testing' sense of readiness so that you could test it now, if you had to.
So I encourage you all: build a sense of readiness. Are you ready to test a week from now, tomorrow, an hour from now? Can you test it now? If not - work out why not, stick that on your preparation list, and get ready.


This had the basics of what I wanted to cover. And I left it to sit for a while. Because really, I wanted to try multi-threading the talk. Adding in some analogous threads, creating and closing open loops as I talked.  I hadn't tried this approach for a short talk before, but since this wasn't a 'Lightning Talk', it was supposed to be a 'Lightning Keynote', I wanted to add more texture to the presentation.
And during the 'sitting' period I read a Novel called "Silvertip's Search" by Max Brand.
You can see my copy above. The London, Hodder and Stoughton edition, first published in 1948. (Max Brand died in 1944)

And in here, I found a passage that I thought fit my topic. A conversation between the head bad guy and one of his minions. 
Throughout the book, both the head bad guy, and the hero, are 'ready for anything' and 'at any time'. And In this paragraph, the head bad guy explains his secret.
"Are you laughing at me, chief?" he asked. "You know that nobody in the world can stand up to you."
"Nobody? Ah, ah, the world is larger than we are," said the criminal. "I should never pretend that nobody can stand up against me. All I know is that I keep myself in practice, patiently, every day, working away my hours." He sighed. "A little natural talent, and constant preparation. That's all it needs. You fellows are my equals, every one of you. Taking a little pains is all the difference between us..."
This is on page 70 of the edition I own.


Given this, I thought I could weave into the presentation: the text of the book, and additionally, Max Brand and his writing strategies. 
That would then give me at least three threads. One personal, one fictional, and one cross domain.
So my next set of notes looked like this.

Some managers like to talk about a “Sense of Urgency”, which in management speak means - why are my staff not working hard enough to meet these arbitrary deadlines we’ve set.
I read a lot of pulp novels. Most written to very tight deadlines. Generally filled with life and death decisions, made quickly, based on minimal information and minimal planning. Urgency in a pulp novel gets you killed, or lets the villain get away. Readiness defines the best heroes.
Max Brand knew a lot about deadlines. His official biography lists over 500 novels and 400 short stories. He was so prolific that new books based on his outlines continued to be written and published after his death.
I see a lot of testers on site being busy, writing stuff like policies, strategies, plans, approaches etc. They think they are getting ‘ready’, most often they are complying with a ‘sense of urgency’ that says we need a strategy, or we need a plan. They are getting ready. They’re getting ready for their next meeting. But they are not getting ready for their testing.
And if you ask them, "Are you ready?", they'll typically tell you about all the things they are waiting for; they are in a holding pattern.
And that’s not what I mean by readiness.
Readiness works at different levels. Could you test an application that you don't know anything about, but where you understand the technology it is built on? Or where you understand the domain it sits in? There are lots of models around readiness: skills, domain, the app requirements, techniques, technology, and these models all overlap.
And if you were ready, you could test the app from the point of view of any of these models and add value. And gain enough time to develop one of the other models and test from another perspective. Your strategies, and plans and policies become a communication and explanation of your readiness.
A Sense of Readiness leads to a confidence and flexibility that you could test something if it was delivered to you tomorrow, or now.
So back to Max Brand, and specifically his novel "Silvertip’s Search". 
One of the bad guys has betrayed his gang, and he’s up before the head bad guy trying to convince them not to kill him.
And I’ll paraphrase, here. Max Brand is a better writer than I’m making him sound like here.
The bad guy persuading for is life says “I wouldn't betray you boss, nobody can stand up to you”
And the boss disagrees, and Max Brand, or Faust, then has the lead bad guy describe his approach to writing. and it is “Nope, I don’t promise no-one is better than me. I just keep myself in practice, a little every day, constant preparation. We’re the same. Taking pains is all the difference between us”
Max Brand is describing his prolific approach to writing. And it is how we go about developing a sense of readiness, because we don't know what is going to come at us. All we can do is work on ourselves so that we have the confidence to tackle it if it comes in next week, or tomorrow, or today, or ten minutes from now.


I emboldened the first part of the sentence because that becomes the outline that I commit to memory to inform my talk.


At this point, I discovered the Internet Archive contains a version of the novel. The quoted paragraph is on page 86 of the Internet Archive version. Yes, if you want to read this novel, you can.
So I decided to download the novel to my Kindle and wrap the hardback cover around the Kindle as a 'prop' for the talk. Since I didn't want slides, and I was talking about the novel, having a physical representation of it seemed like a useful stage device. 
And I could possibly build some tension by 'teasing' a reveal early in the talk, then reading from the novel at the end of the talk. And I added the following lines into the outline.
I brought along a pulp novel for you. This is Silvertip’s Search. A western published in 1945, based on his pulp story published in 1933, written by Max Brand. Or Frederick Faust - his real name. He created the character of Destry, and probably most famously Dr Kildare.
And hidden In this novel, Max Brand describes the secret of his writing success.

You can watch the talk and see how closely it matches the outline above. I think I missed out some stuff and I think I added a little more.
 And now, in this blog, I can expand a little further - with information I wouldn't include in a lightning talk. 
If I was using this in a longer talk then I might well include the information I'm about to give you below.

Additional reasons I like pulp as an example of readiness... 
The pulp authors worked from small outlines:
  • A Title
  • A Paragraph
  • A Blurb
  • A short plot outline
They did this for a number of reasons:
  • They wrote for money.
    • So they had to pitch the story, and didn't want to spend the time writing a full treatment, so they pitched outlines. Sometimes they pitched titles, to see what grabbed attention.
  • They could expand them, quickly, when needed. 
    • Sometimes they would be asked to contribute a story to a magazine with only a few days notice because some other author had let the editor down. And out would come, either an earlier story that hadn't sold, or an expanded form from the outline. 
"Silvertip's Search" is a good example of this process. The novel, was an expanded version of one of Max Brand's short stories. So a published (and previously paid for) work, was expanded into a novel, which was then sold again. 
And because pulp authors worked like this, they often left behind lots of outlines, scraps of ideas, blurbs etc. Which hadn't been sold, or used, or expanded. Which is how pulp authors continue to publish work long after they are dead. Someone manages to take their fast preparation and turn it into something else.
 
And so, to relate this back to testing...
A lot of the time in testing, we see promoted the idea that you have to prepare in advance, and that your advance prep has to have copious detail.
I don't think you have to. My experience of testing tells me otherwise.
I work to be 'ready' as fast as I can. I know there will be gaps in my readiness. But I know I could start, and add value, fast. And with each passing moment that allows me more time to prepare I increase my readiness, until at some point, I test.
And one external source of validation I use for this, is the work, and approach, of pulp authors.
Pulp authors used "a sense of readiness" to help them. We can too.



How to convert VirtualBox to VMWare and install the ethernet device drivers

Thursday, August 14, 2014 21:22 PM

I couldn't work around the recent bug in VirtualBox version 4.3.14. It conflicts with my anti-virus software under Windows.

When I upgraded to version 4.3.15, I no longer experienced the anti-virus crash, but my networking was trashed and I couldn't get it working. So I decided to try and migrate over to VMWare.

I use VMWare Fusion on the Mac to run my Windows and Linux VMs, and VMWare VMs are cross platform. It seems like a sensible move, even though it will cost me some cash to buy the VMWare Player on Windows.

  • Step 1 - convert VirtualBox to an Appliance
  • Step 2 - load the appliance into VMWare
  • Step 3 - uninstall the VirtualBox addons
  • Step 4 - install the VMWare addons
  • Step 5 - edit the .vmxf file to change the network settings
  • Step 6 - reauthenticate the Windows license
  • Step 7 - enjoy your ported VM
  • Step 8 - delete the VirtualBox VMs

Step 1 - convert VirtualBox to an Appliance

In VirtualBox I exported the VM as an appliance using the "File \ Export Appliance..." menu.

Step 2 - load the appliance into VMWare

Using VMWare Player, I open the appliance using "Open a Virtual Machine".

VMWare complains about certificates, but we tell it to retry and continue.

Then it prompts for a location to save the VM.

Voila, the VM is converted - but it doesn't work yet.

Step 3 - uninstall the VirtualBox addons

Start the VMWare machine, and uninstall the VirtualBox addons.

Step 4 - install the VMWare addons

Install the VMWare tools.

At this point, I should have been finished, but I wasn't. I had to fix the networking.

It took me a while to find the right online resource for the next step.

Step 5 - edit the .vmxf file to change the network settings

I could not get the drivers for the ethernet adapters to work with the VMWare Windows XP VM.

I had to edit the .vmxf file that configures the virtual machine, and remove the "e1000" line.
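For reference, the virtual network adapter line in the configuration file usually looks something like this sketch - the adapter index (ethernet0 here) may differ on your conversion:

  ethernet0.virtualDev = "e1000"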

Then, when I restart the machine, the correct drivers from the VMWare tools are used for the ethernet device.

I also had to make sure that the "Automatic Bridging Settings" in the VM Settings were set to bridge the correct networks, including the "Microsoft Wi-Fi Direct Virtual Adapter" and the "Microsoft Hosted Network Virtual Adapter".

Step 6 - re-authenticate the Windows license

At this point Windows complains that the hardware configuration has changed, and needs to re-authenticate.

For some reason, it wouldn't re-authenticate when I was logged in, so I had to restart the machine, and re-authenticate prior to the login.

Step 7 - enjoy your ported VM

I couldn't face a re-install of Windows, and downloading all the service packs etc. So I wanted to get this conversion working, which is why I persisted, and you now see my notes as a blog post.

So now, I'll use VMWare for my VMs on Windows, as well as on the Mac.

Hopefully, Oracle will fix VirtualBox fully, as I like to have options.

But I think VirtualBox has lost me as a user for my licensed Windows VMs. The conversion from VMWare to VirtualBox is not as easy as the conversion from VirtualBox to VMWare.

Step 8 - delete the VirtualBox VMs

Since I use licensed versions of Windows, in addition to the modern.ie VMs, I had to remember to delete the VirtualBox VMs.


Back to Basics: How to use the Windows Command Line

Monday, February 17, 2014 16:16 PM

Those of us that have worked with computers for most of our lives take the command line for granted. We know it exists, we know basically how to use it, and we know how to find the commands we need even if we can't remember them.

But, not everyone knows how to use the command line. I've had quite a few questions on the various courses I conduct because people have no familiarity with the command line. And the worst part was, I could not find a good resource to send them to, in order to learn the command line.

As a result, I created a short 6 minute video that shows how to start the Windows command line, change to a specific directory, run some commands, and find out more information.



Start Command line by:

  • clicking on Start \ "Command Prompt"
  • Start \ Run, "cmd"
  • Start \ search for "cmd"
  • Win+R, "cmd"
  • Windows Powertoy "Open Command Window Here"
  • Shift + Right Click - "Open Command Prompt Here"
  • type "cmd" in the Explorer address bar (Win+E, navigate, "cmd")
  • Windows 8: "cmd" from the dashboard
Change to a directory using "cd /d " then copy and paste the absolute path from Windows Explorer.

Basic Commands:
  • dir - show directory listing
  • cd .. - move up a directory
  • cd directoryname  - change to a subdirectory
  • cls - clear the screen
  • title name - retitle a command window
  • help - what commands are available
  • help command - information on the command
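Putting a few of those together, a typical first session might look something like this - the paths and folder names are purely illustrative:

  rem change drive and directory in one step
  cd /d C:\Users\me\projects
  dir
  cd reports
  cd ..
  cls
  help dir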

If anyone wants more videos like this then please either leave comments here, or on YouTube and let me know. Or if you know of any great references to point beginners at then I welcome those comments as well.

Introducing Virtualbox modern.ie Turnkey Virtual Machines for Web Testing

Thursday, February 13, 2014 22:02 PM

My install of VirtualBox prompted me to update today. And I realised that I hadn't written much about VirtualBox, and I couldn't find any videos I had created about it.

Which surprised me since I use Virtual Machines. A lot.


No Matter, since I created the above video today.

In it, I show the basic install process for VirtualBox. A free Virtualisation platform from Oracle which runs on Windows, Mac and Linux.

Also Modern.IE, which I know I have mentioned before: the Microsoft site where you can download virtual machines for each version of MS Windows - XP through to Windows 8 - with a variety of IE versions.

Perfect for 'compatibility' testing - the main use case I think Microsoft envisioned for the site. Or for creating sandbox environments and for running automation against different browsers, which I often use it to do.
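If you prefer to script the setup, the downloaded appliances can usually be imported from the VirtualBox command line as well - a sketch, where the .ova file name is illustrative:

  VBoxManage import "IE8 - WinXP.ova"
  VBoxManage list vms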

I even mention TurnkeyLinux, where you can find pre-built virtual machines for numerous open source tools.

In fact, the version of RedMine that I used on the Black Ops Testing Workshops, to demonstrate the quick automation I created, was installed via a TurnkeyLinux virtual machine.

Oracle even hosts a set of pre-built virtual machines.

A New Feature in VirtualBox (that I only noticed today)

I noticed some functionality had crept in to VirtualBox today.

The cool 'Seamless Mode', which I had previously noticed in Parallels on the Mac (as 'Coherence' mode) and in VMware Fusion on the Mac (as 'Unity' mode). This allows 'windows' on the virtual machine to run as though they were 'normal' windows on your machine - so not constrained within the virtual machine window.

I love this feature. It means I no longer have to keep switching in and out of a VM Window and can run the virtualised apps alongside native apps. And with shared clipboard and drag and drop, it seems too easy to forget that I ran the app from a VM.

If you haven't tried this yet. Download VirtualBox, install the Win XP with IE6 VM, and then run it in 'Seamless' Mode so you have IE6 running on the desktop of your shiny whiz bang monster desktop. Try it. Testing with IE6 becomes a fun thing to do - how often do you hear that?



How to emulate mobile devices using Chrome browser

Saturday, February 08, 2014 07:56 AM

Google Chrome continually changes, which usually means good news as new features appear. Unfortunately sometimes it means changes to our existing workflow.
This happened recently when Google released a new version of Chrome, but moved the Emulator settings.
I eventually found them, and show you how in the video below:

Or for those of you that prefer to read, read on. I've added references at the bottom.
We have to start by using the Overrides in the Chrome developer tools settings. All the emulation used to exist here, but it has moved.
Right click, and Inspect, to show the developer tools. Then click the cog on the right to show the Settings. And show the Override settings.
So the first thing we do is make sure that we have checked "Show 'Emulation' view in console drawer".
Great.
So now where is the console drawer?
Close the settings and in the dev tools on any of these tabs, we can display the "Console drawer" by pressing the escape key, and lo the drawer did appear and an emulation tab was present.
And we can use the emulation tab to help us test.
In the demo video I show this in action on the bbc site.
Choose a device to emulate. I pick the "Samsung Galaxy Note II" because I have a physical device for that on my desk, and if I encounter any issues I can try the same functionality on my device.
Choose Device, Click Emulate, and you can see the screen size refreshes to a scaled smaller size.
You can amend the display settings using the 'Screen' options. By default it is shown scaled, but you can make it full size if you want.
But we still don't have the mobile site yet. So I refresh the screen. Using Ctrl+F5. And because Chrome is now sending the correct mobile headers for the Note II, we are directed to the Mobile site.
And now the issues.
I try and use the site. Click on the links. And nothing happens.
So, I change sensors and switch off the emulate touch screen. And we have a working site again.
This works on the Note, so it might be a BBC issue, or it might be a Chrome issue. But really it shows us the problem of testing through emulation: when we find suspected issues, we have to replicate them on a better emulator or a physical device.
But the Chrome emulation is so convenient on the desktop that, for a first run check on the site, and certainly for checking how your server responds to mobile headers, it is a great first step.
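You can also check how your server responds to mobile headers without a browser at all - a sketch using curl, where the User-Agent string is an illustrative mobile header rather than an exact device signature:

  curl -I -A "Mozilla/5.0 (Linux; Android 4.3; GT-N7100) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0 Mobile Safari/537.36" http://www.bbc.co.uk/

The -I option shows just the response headers, so any redirect to the mobile site shows up in the Location header.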
And you can stop the emulation by clicking the [Reset] button.
In the video I show a bonus, which I thought was an emulator bug, but seems to be by design by the BBC, where the Weather page does not redirect.
Chrome emulation? Very easy way to run a first check on the site, if you know how to access the functionality.
Additional References:

Software Development Summit 2013

Tuesday, December 17, 2013 14:03 PM

I attended the Software Development Summit December 2013 in Helsinki.

I was fortunate in being asked to perform a keynote, and asked to fill in for a keynote speaker who unfortunately couldn't attend, so I did two keynotes. Lucky me.

You can find slides for the talks listed over on my Compendium Developments site.

I managed to catch up with Kristian Karl and learn more about the CI and testing regime at Spotify; you can watch his Eurostar conference "Experiences of Test Automation" webinar online (Q&A).

I was able to quickly hang out and see a few of the Twitter-enabled attendees: Johan Atting, Johan Jonasson, Aleksis Tolonen, and my fellow Eurostar Committee member Maaret Pyhäjärvi.

I did receive a pointer to FreeNest - an open source platform put together by students which is designed to help teams get up to speed with a set of collaboration tools on Ubuntu quickly.

But it was all over very quickly and I had very little time to chat with Ilari Aegerter or Gojko Adzic.

The only downside I had was that I had to miss Gojko's Keynote because I thought I was flying out early. Of course the London Fog had other ideas so instead of learning from Gojko, I was stuck at the airport instead of enjoying the end of the conference :).

SIGIST 2013 Panel - Should Testers Be Able to Code

Tuesday, December 17, 2013 13:24 PM

I attended the SIGIST in December because I was asked to be part of a Panel with the starting discussion title of "Should testers be able to code?".

I was on the panel with Dorothy Graham, Paul Gerrard and Dr Stuart Reid



Initial Notes

I wrote the following in an email to the other panel members during the run up to the SIGIST. It wasn't polished but represented my notes and pre-conf prep.

....

I pretty much have to ignore the title "Should testers be able to code?"

In my mind "Should", at that point in the sentence equals "An obligation to..."

We don't work in an industry where testers have an obligation to code. So any question about obligation has no place in my reality.

I've met developers who do not seem to feel they have an obligation to be able to code. Similarly I've met testers who do not seem to feel they have an obligation to be able to test, or managers who do not seem to feel they have an obligation to be able to 'manage'. The process of Software Development has a lot of fungibility built in and can work with many different skill sets and skill levels on the project.

Personally, I do know how to code, and while I can code some things as well as professional developers, I consider my coding skills intermediate. Therefore I hope to phrase my answers in the form "When testers can code the advantages are ...", "When testers can not code the disadvantages are...", and "My experience of having up to date and intermediate coding ability has been...".

I do hold some opinions that might pop up:
  • "Testers who can not code, should not write automation code" 
  • "Testers who can not code well, will write worse automation code than testers who can code well."
  • and I suspect that "Many failed test automation programmes are a result of testers not knowing how to code". 
  • "I've also seen developers write awful automation code, I think automation code may require some different coding and design styles than application code."
I've worked with enough teams and reviewed enough automation code over the years to have some evidence base for those opinions. But we have an industry that has pretty low expectations around automation skillsets and automation, and for some reason has lumped most 'automation' in the 'testing' realm.

But this panel isn't about automation. Automation != Testing.

Much of my recent on-line work has been about lowering the barriers to entry for people who do want to code or develop more technical skills. I prefer to help someone do something, if they express an interest.

Personally I try to learn as much as possible about the process, and skill sets involved in, Software Development. One small part of that involves 'coding'. Other parts include "Architecture", "Design", "Databases", "Modelling", "Protocols", "Tools", "Estimation", "Planning", "Communication" etc. etc. etc.

I don't think that any role ("Tester", "Developer", "Analyst") has exclusive right to a set of skills, techniques and knowledge: "testing", "coding", "modelling", "analysis" etc.

I value diverse skills across the team.

On the Panel

My brain, working the way it does, has forgotten most of the questions and answers, so I revisit the memory with some degree of trepidation; false memories may well rear their heads.

I think many of the above notes were covered during the Q&A. I was able to pull on some of the material from my Helsinki Talk on 'Experiences with Exploratory Testing...' because someone in the audience said "Developers should not test their own code", and so I was able to riff off the T-Shirt slogan slide from that presentation.

Basically - statements like "Developers should not test their own code" and "Developers do not make good testers" are the kind of statements people post on internet forums, and we should relegate them to T-Shirt slogans so we can mock them and laugh at them. Developers do test their own code, and they can learn to test better. The sooner the 'test community' wipes this nonsense from its collective meme pool the better.

Because we were sitting down on the panel, my opinions were phrased in a less confrontational manner and with more humour than might appear from the written form on this page.

I remember saying things like:

  • Projects depend on teams. So we need the team to have a diverse set of skills. And when we build teams look at the gaps in the skillsets.
  • Keep investing in your staff to make sure they keep expanding and improving their skillsets.
  • Becoming a better programmer has helped me test better.
  • Becoming better at testing, has helped me write code better
  • I recommend the book "Growing Object Oriented Software Guided by Tests"
  • Teams are systems. As soon as you add a team member, or mandate that they do something, you change the system. Keep looking at and evaluating the system.
  • Programming means lots of different approaches because there are different styles: OO, Functional, Procedural, etc. They require different skills and models
  • Modelling is a vital skill for testers

We on the panel certainly had fun, I hope the discussions and alternative view points added value to the audience.

End Notes

I think one message that came through from everyone on the panel was that testers need to have the ability to demonstrably add value to projects.

Having the role 'tester' does not mean you automatically add value.

Having the ability to write code does not mean you automatically add value (you might write really bad code).

Each tester needs to identify how they can add value.

The current market vogue is for testers with coding skills.

If that isn't your thang. Then it might mean becoming an expert in UX and psychology so that you can add more value for user focused testing. It might mean a whole bunch of things.

Each tester needs to figure out what they can do to add value, and what they can do to demonstrate their capabilities to the teams or potential employers.

And keep improving.


How to use Jira to subjectively track and report daily on your testing?

Friday, November 22, 2013 12:36 PM

A long time ago I coded a now defunct modelling tool to help me with my testing. Half the battle with managing and reporting testing involves deciding how you will model it for the project you work on.

Generic Modelling


I often map the generic set of formal modelling techniques I use on to:
  • Entities
  • Lists
  • Hierarchies
  • Graphs
When using Jira, I have access to Entities and Lists.

Lightweight Subjective Status Reporting


On a recent project we wanted a lightweight way of tracking progress/thoughts/notes over time. I really wanted a subjective 'daily' summary report which provided interested viewers with insight into the testing without them having to ask.

As part of my normal routine I have become used to creating a daily log and updating it throughout the day. Ofttimes creating a summary section that I can offer to anyone who asks.

How to do this using Jira?


We created a custom entity called something similar to "Status Tracking Summary".

Every day, someone on the team would create this, and title it with the date "20 November 2013".

We only really cared about the title and the description attributes on the entity.

The description took the form of a set of bullets that we maintained over the day to document the status e.g.

- waiting for db schema to configure environment
- release 23.45 received - not deployed yet
- ... etc.

Over the day we would maintain this, so at the end of the day it might look like

- db schema and release 23.45 deployed to environment
- initial sanity testing started see Jira-2567
- ... etc.

I initially thought that the title would change at the end of the day to represent a summary of the summary e.g. "Environment setup and sanity testing", "Defect retesting after new release". But this never felt natural and added no real value so the title normally reflected the date.

Typically, as a team of 3-4, we had 5 - 15 bullets on the list.

Use Dashboards to make things visible


To make it visible, we added a "Filter" on this entity, and added a Filter display gadget to the testing dashboard which displayed the last 2 status updates.

This meant that anyone viewing the testing dashboard could see subjective statements of progress throughout the day, and historical end of day summaries throughout the project.
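As a sketch of how that filter might look in JQL - the project key and the custom issue type name here are illustrative, not the exact values we used:

  project = ABC AND issuetype = "Status Tracking Summary" ORDER BY created DESC

Save that as a filter, then a "Filter Results" gadget on the dashboard can be limited to show the two most recent entries.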

But people don't like writing reports


I have grown so used to tracking my day through bullets and actions that I take it for granted that everyone can do this. Still, I had initial concerns that not everyone on the team would add to the status and I might have to chase.

Fortunately that didn't happen.

The team used the Dashboard throughout the day to see what defects they had allocated to them, and to work on tasks and defects in the appropriate state. Therefore they always saw the subjective daily status report when they visited the Dashboard and updating it became a natural task during the day. 

You can report Daily, with minimal overhead


Very often stakeholders ask us to prepare daily reports. I find that creating, and updating, a summary log throughout the day often satisfies that requirement. 

As a team, building it into our documentation process throughout the day added very little overhead and made a big difference to the visibility stakeholders had into our testing.

iOS Screen Capture, Streaming and ScreenRecording tools for Mobile Testing

Friday, November 01, 2013 18:40 PM

I listed the results of my investigation into Android Screen Capture, Streaming and ScreenRecording tools for Mobile Testing, now time to turn to iOS.

Note that I'm not really covering static screen capture here, since short cuts are built into each operating system for static capture.

iOS is pretty locked down. And without jail breaking, your options are limited.

However, iOS has a built in screen sharing capability called AirPlay, designed to be used for streaming to your Apple TV. But that hasn't stopped some enterprising developers building AirPlay servers for both Windows and Mac OS computers.

Both applications (AirServer and Reflector) are easy to use and offer much the same capability, so which you choose will depend on your evaluation on the machines you use.

Both are insanely affordable. And both have a version for Windows and Mac OS.

AirServer offers more configuration options, although it worked fine out of the box for me. AirServer offers a 7 day trial.

Reflector offers a trial where you can start as many sessions as you want. But each session only lasts for 10 minutes.

To capture your on-iOS-testing, 'airplay' the iOS screen to your desktop or laptop computer, and then use a screen recording tool like Camtasia, BB Flashback (or its big brother BB Test Assistant) and capture the screen movies there.

This doesn't let you interact with the actual iOS device from your computer, but goes some way to making your testing recordable and reportable.

FAQ: How do your books "integrate" with your courses?

Friday, November 01, 2013 17:07 PM

Dear Alan
I'd like to embark on learning from your books and online course but should I do one before the other? Or does one set of materials supersede another?
Thanks, 
A Correspondent 
I receive this question often enough that I'm going to try and answer it fully on the blog.

On a timeline, I created the following products:
If you still want to learn Selenium-RC using Java then the Selenium Simplified Book is the one to get. I walk you through learning the basics of Java, setting up the environments and Selenium-RC in a single book. In my mind the WebDriver courses and Java For Testers, supersede  the Selenium Simplified Book, but if you want to use Selenium-RC then the book remains valid, but remember Selenium-RC has been deprecated in favour of WebDriver.

Feedback I received on the Selenium Simplified book suggested that it was overly oriented to the beginner. Many people already knew how to code and setup the tools, and they just wanted to learn the API.
So, for Selenium WebDriver I created 3 products:
If you don't know how to code, but are a self starter and can learn from online resources when you get started, I recommend:
If you know you're going to need help working through the API then

And since it seemed top heavy on Automation, when that only represents part of what I do in my daily work life, I created the Technical Web Testing 101 online course to introduce people to the tools and thought processes I use when testing Web Applications.

I created  Java For Testers independently of Selenium 2 WebDriver Basics online Course. I use much of the Java in Java For Testers, on the WebDriver course, but don't explain the use of the Java constructs in detail.
I think they complement each other rather than directly overlap or supersede each other. Java For Testers is designed as a stand alone introduction to Java Programming and the WebDriver course doesn't spend a lot of time explaining the Java used.

Hope that helps. And "Thank You" to the most recent set of correspondents that asked the question.

How to connect your iOS device to an HTTP proxy on your desktop or laptop

Friday, November 01, 2013 12:14 PM

Connecting an iOS device to an HTTP Proxy is much the same as we demonstrated on Android devices.

On your iOS settings:

  1. In Wifi
  2. Select the (i) information icon next to your wifi network
  3. At the bottom of the screen are the HTTP Proxy settings
  4. Set this to Manual
  5. Type in the Server IP address and port of your proxy
Done.

Now your HTTP traffic should flow through the desktop proxy.

iOS does a pretty good job of caching, so before I start testing I either clear the cache using the Safari Settings ("Clear Cookies and Data") or use the "Private" link at the bottom of Safari to start a new session.

I found iOS doesn't like connecting to some networks, so I set up a local hotspot with connectify.me and that made my life a little easier.

Remember to switch off the proxy when you are done.

Create a local WiFi hotspot for testing using connectify.me

Friday, October 25, 2013 10:02 AM

We took delivery of the new iOS devices for mobile testing. And in order to activate them, you need to connect over Wi-Fi. But they wouldn't connect to the Wi-Fi network during the activation wizard. What to do?

I decided to use my laptop as a local Wi-Fi hotspot. That way I could configure it with different passwords and encryption types to try the different options and hope that the iOS device would connect.

I had trouble sharing the Wi-Fi connection using the built-in Windows functionality (ably explained on lifehacker).
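For reference, that built-in Windows approach boils down to the hosted network commands - a sketch, run from an administrator prompt, with an illustrative SSID and key; you still need to enable Internet Connection Sharing on the adapter with the real internet connection:

  netsh wlan set hostednetwork mode=allow ssid=TestNet key=SomePassphrase
  netsh wlan start hostednetwork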

So I installed connectify.me to help me use my laptop as a hotspot. And LO! The iOS devices happily connected through my local hotspot and activation could ensue.

Of course - once I had activated them, the iOS devices had no problems connecting to the Wi-Fi network I originally wanted to use for activation.

I suspect that the local hotspot approach might have some useful secondary benefits that I may have to research:

  • monitor traffic without having to setup a proxy
  • throttle the network speed

This also means that I can configure my mobile devices to connect over a single Wi-Fi network, and route the requests over other Wi-Fi networks by configuring the laptop connection rather than on multiple mobile devices.

Anyone else doing this? Any hints and tips or other uses you want to share?

How to chain HTTP Debug Proxies

Thursday, October 24, 2013 14:57 PM

I chain HTTP debug proxies.

That way I can use features from all of them, at the same time:

  • Fiddler Autoresponders
  • BurpSuite passive sitemap building
  • ZAP's multiple breakpoints
  • etc.

Fiddler

I usually work on Windows, so the first proxy I start is Fiddler. Fiddler hooks into the Windows system seamlessly without any additional config. All the other proxies I point at Fiddler as their upstream proxy.

When Fiddler is running, test your setup by pointing your browser through Fiddler.

BurpSuite

BurpSuite Options tab. Upstream Proxy Servers. Add an entry for Fiddler:

  • Destination Host: *
  • Proxy Host: localhost
  • Proxy Port: 8888

At this point - test your setup again. Don't chain everything together and then try and figure out where the problem is. Point your browser at BurpSuite and check you can see traffic through them all.

If you get stuck, use the Alerts tab in BurpSuite to check for errors.

Hint: Firefox and Opera maintain their proxy settings independently from the Windows settings so test your setup with Firefox or Opera.

ZAP

Tools \ Options \ Connection. Then point it at BurpSuite
  • Address: localhost
  • Port: 8082 (or whatever port you bound BurpSuite to)
Find the port you have bound ZAP to in Tools \ Options \ Local Proxy

And point the browser at this port now.
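So, using the port numbers above (and assuming ZAP's default local proxy port of 8080 - check yours in Tools \ Options \ Local Proxy), the finished chain looks roughly like this:

  Browser -> ZAP (localhost:8080) -> BurpSuite (localhost:8082) -> Fiddler (localhost:8888) -> the site under test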

End

Voila, you should have it all chained.

If not, just revisit the last step. And don't panic.

Step by step, check each part of the journey. If it's not working, it will probably just be some stupid error where you have config left over from a previous session.

Just remember to unwind them when you are done.

I have a video in the Technical Web Testing 101 course that shows this in more detail.

Links:

How to view http traffic on your mobile phone device via a computer proxy

Wednesday, October 23, 2013 12:46 PM

Viewing the HTTP traffic from your mobile browsers doesn't take that long to set up, but there are a few gotchas to be aware of:
  • You need to find the right IP address on your desktop
  • You need to change your proxy settings on your phone
  • Make sure your proxy allows external connections
  • Make sure both Mobile Device and Proxy Machine are on the same network



The setup I normally use:
  • Phone connected to wifi network
  • Windows Desktop connected to same network via wired connection (or laptop connected via wireless)
As an example, using Fiddler as the debug proxy, on the desktop:
  • Start Fiddler
  • Check that Fiddler has "Allow remote computers to connect" (via Tools \ Fiddler Options \ Connections)
  • Start a command prompt and use "ipconfig" to show the current list of IP addresses for your computer (see the example after this list)
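The ipconfig output has a block per adapter, something like the sketch below (the addresses are illustrative); the IPv4 Address of the adapter on the same network as the phone is the one to use:

  Wireless LAN adapter Wi-Fi:
     IPv4 Address. . . . . . . . . . . : 192.168.1.101
     Subnet Mask . . . . . . . . . . . : 255.255.255.0
     Default Gateway . . . . . . . . . : 192.168.1.1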
On the mobile device, details below are for android (but the principle is the same for other operating systems):
  • Open Settings
  • Wi-Fi settings
  • Long press on the wireless network you are using to access the connection settings
  • Modify Network Config
  • Proxy Settings - Manual
    • change the Proxy HostName to the IP Address of your desktop
    • Add the port for your debug proxy e.g. 8888 for Fiddler
Then your browser should be connecting to the proxy.

Other applications may not use the proxy in this way. You might need to set up port forwarding using adb to feed them through to your proxy (don't leave comments asking how to do this, check on Google. I haven't had to do this for some time, so my memory of hacking around on the Android device to get it working is hazy.)

On Android: Chrome, Firefox, Dolphin and the inbuilt browser all worked without issue. I had some hassle connecting Opera, but didn't try and diagnose why.

Try it and see how you get on.

It makes a big difference to the visibility of your testing when working through mobile.

Additional Notes: 
  • For burpsuite, in "Proxy \ Options" edit the proxy listener to Bind to address "All interfaces"

Android Screen Capture, Streaming and ScreenRecording tools for Mobile Testing

Friday, October 25, 2013 09:43 AM

I looked at my mobile testing options and I realised that I didn't have a full toolbox to help me.

I first wanted to identify screen capture and screen recording options for my Android devices.

Most of the tools I found wanted rooted devices. When testing, you may not have this option, and people can be understandably wary about interfering with the device state.

So I limited myself to tools which do not require root access. They all pretty much work the same way, using ADB with USB debugging enabled.

If you bought all the tools that I recommend here, the total cost would hit the dizzying height of £8.98, so I don't see a lot of point trying to roll my own solution.

To use these tools you pretty much need to have a working SDK setup. So work on that first. And if you can connect to your device with adb or monitor.bat then you're probably good to go.

http://developer.android.com/sdk/index.html
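A quick way to check that setup, plus a handy built-in capture option on reasonably recent Android versions - a sketch, with an illustrative file path:

  adb devices
  adb shell screencap -p /sdcard/screen.png
  adb pull /sdcard/screen.png

If "adb devices" lists your phone, the tools below should be able to connect to it.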

In order to record the screen for some of these, I use them in combination with a desktop screen recording tool like Camtasia, or the Blueberry Software tools BB FlashBack or BB Test Assistant.

Free and Open Source Tools

Both Droid@Screen and Android Screen Monitor offer much the same functionality. I think your ultimate choice will depend on which GUI you prefer.

Droid@Screen



Droid@Screen is a pretty good wrapper around the adb.

The main GUI display shows a continually refreshed view of the device.
  • You can take a screenshot very easily. 
  • The main GUI has easy orientation buttons to adjust the display for landscape or portrait.
  • You can capture screenshots to a folder automatically.
  • You can view device properties
  • You can scale the output view
On my Samsung Galaxy Note II the refresh rate was a little slow (about 1 - 2 frames per second), but it is a pretty high res screen. For lower resolution screens you might find that you can use this for screen recording as well.

Android Screen Monitor



Much the same as droid@screen, the GUI is simpler with a right click menu instead of icons.

Sometimes this is a little faster than droid@screen, sometimes droid@screen is a little faster.

Commercial

ASC - Advanced Screen Capture



ASC performs on device screen capture, so it writes a movie file to the phone's memory. It has a bunch of options to adjust framerate. What I particularly like is that it will highlight the taps you make on the screen so you can view the interaction on the device.

On non-rooted devices it requires you to use an 'activation' program on the PC or Mac. The desktop activator program acts as a simple way of making a connection to your device and taking a screenshot, so it is an easy way of accessing some of the SDK functionality.

Looking at the popups as the screen 'activates', it is using adb in some way - I assume to enable the Android screenshot API.

Application description on the play store says it only works on non-Tegra devices. The trial worked fine on my Samsung Galaxy Note II.

Buy through an in-app purchase for £3.99

Activation Notes: I had some trouble activating it after purchase, but a few emails with the developer sorted it out. I had to uninstall it, then re-install it, then click 'buy' again (I wasn't charged twice). The activation does work, but it was a bit more fiddly than it needed to be.

VMLite VNC Server

Costs £4.99

A desktop program to start the server on your phone if you work non-rooted.

Once the server runs I can head off to http://<deviceip>:5801 to use the HTTP interface.

Or connect a desktop VNC viewer to <deviceip>:5901

The HTML5 viewer was about the same as Droid@Screen or Android Screen Monitor. 

The Java Applet VNC is a little faster, and the best out of the desktop tools I tried.

The video was not as smooth as ASC, but remember that this has the advantage that I can interact with the Android device from the desktop, using my mouse and keyboard.


Used But Can't Recommend Fully

I also tried a couple of other open source tools, but they didn't work well on my machine; that doesn't mean they won't work on yours, so I list them below:

Summary

A mix of tools there:
  • Desktop Connection for Screenshots and low frame rate streaming
  • VNC for higher framerate and interaction
  • On Device Capture for High Frame Rate
What do you use when you test on mobile devices to record the testing you perform? Leave a comment and let myself and the world know, so we can evaluate the tools you recommend and expand our options.

WinMerge Revisited - my default file and directory comparison tool

Friday, September 27, 2013 08:00 AM

I seem to default to WinMerge for my file and directory comparisons.

Whenever I need to:

  • compare two directories
  • compare two files for differences
  • copy a set of files between directories
  • compare contents of zip files or rar files

When you install WinMerge:

  • You can choose to use it as the merge view in Tortoise SVN. I tend not to do this because the built in Tortoise SVN diff works fine for me. 
  • Add WinMerge to your system path, which allows you to call it from the command line easily and use the command line options (see the example after this list). I choose this option.
  • Enable explorer context menu integration - an essential option.
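As a sketch of that command line use - on my installs the executable is WinMergeU.exe, the /r switch recurses into subfolders, and the paths are illustrative:

  WinMergeU /r "C:\backups\app-before-testing" "C:\app"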
Read the online documentation, or the help file.

When you install, have a quick flick through the Quick Tour to get up to speed quickly.

If you download the .zip file then you essentially have a portable install so can use it from a USB stick.

A few things I especially like:
  • You can drag and drop like a crazy man
    • drag and drop folders on to the desktop icon for comparison
    • drag and drop on to the main pane to start a compare
    • drag and drop into the input fields
  • If you install 7-Zip and the 7-Zip plugin you can compare rar and zip files
    • 7-zip acts as my default archive management tool
A few things to note:
  • The tree mode makes life excellent, just make sure you enable it
  • Look through the options and switch on all the "Shell Integration" options - particularly the "Include subfolders by default"
Why would I use this?
  • I create a backup folder before testing. I test. I can compare and see what changed.
  • I want to revert back to previous files selectively, so I compare dirs and selectively move changed files
  • I have an oracle file and want to compare results
What do you use for your comparison tools? In built version control tools? Commercial tools? Leave a comment if you want to share your secret weapon of choice.


Alternative Uses for Fiddler

Wednesday, September 25, 2013 09:28 AM

A few days ago I realised that one of my use cases for Fiddler, did not stem from a need for debug proxying.


Since I work on a variety of client sites, I periodically have issues with restrictive proxy servers where a .pac file somewhere sets up the browser config, and you have username and password issues etc. etc.

In those circumstances some tools need configuring to use the client site proxy - I recently tried to use httpgraph based on a suggestion from James Lyndsay

Like many tools that allow you to configure the proxy, httpgraph has a URL and port. Nothing else. No http or https config, no usernames, no passwords.

And like many tools, it didn't work out of the box with a restrictive client side proxy.

So, choose your own adventure time:
  1. Game Over? Stop using the tool?
  2. Mess around with config settings for a few hours trying to get it to work?
  3. Use Fiddler?
1. Stop using the tool

You sit back in your chair and sigh. Nothing ever works out of the box. Why don't vendors make technical tools easy to use. Oh well, nothing lost except a few minutes downloading the tool. Time to get on with the hum drum day to day work with no advantages.

You finish your tasks, and complete the day, but leave with a nagging sense of unfulfilled potential. 

Game Over.

2. Mess around with the settings

You try everything you know. You tweak the registry. You set up system variables. You run the app from the command line with -D proxy settings. You even mess around with the Windows routing table. Sadly to no avail. You look at the clock. Crikey, it's 4pm already, and you still have to finish the test strategy, distribute it, add it to version control, set up the review meeting, and respond to emails.

Looks like you need to get your priorities in order and stay here till midnight.

You Lose. Game Over.

3. Use Fiddler

"I'll do what I always do first" you think. 

Start up fiddler.

Fiddler hooks in to the Windows internet controls seamlessly; it handles all the proxy stuff, including passwords. Then I'll configure the new tool to use Fiddler as its proxy. This way the tool has a simple proxy server to connect to, where I control the port and protocol config, and as a bonus, I can see the traffic sent.

So you start up fiddler, and point your new tool's proxy to "127.0.0.1:8888", the Fiddler defaults. 

You know you can use other Fiddler urls if this doesn't work 
But it does work.

Leaving you plenty of time to get on with today's necessary todo list items.

A Double point win. Congratulations. Adventure completed.

Do you have any use cases for debug proxies that you take for granted, that don't immediately involve debug proxying? If so, leave a comment to let me and the world know.

How to Turn on and off JavaScript in Firefox

Wednesday, September 11, 2013 08:40 AM

Whoa, I turn my back for a couple of months and Mozilla remove the option to switch off JavaScript in Firefox.

Short version: Install QuickJava or type "about:config" as the URL then search for "javascript.enabled"

We spent a good 5 or 10 minutes thinking we were crazy. "I'm sure the option used to live here..."

As ever, Google came to the rescue..

Why did Mozilla do this? Because of "Checkboxes that kill your product".

Unfortunately, as a tester I still experience moments where I need to kill the product, so how can I do that now?

"about:config"

Firefox has the built in "about:config" which you can type into the URL entry field. Then search for "JavaScript" and click on the "javascript.enabled" to switch JavaScript on and off.

Add-ons

One of the official forum Q&A items mentions an add-on called QuickJava; just make sure that you enable the "Add-on bar" from the toolbar menu "View \ Toolbars \ Add-on bar".

Other plugins are mentioned in this other official forum Q&A item.

I chose to install QuickJava because it lets me toggle a bunch of things quickly.

End Notes

As an opinion, I find it a bit odd, since browsers make ever more developer functionality available at the click of a mouse to every user: allowing them to edit the DOM, execute arbitrary JavaScript, or amend and delete cookies, etc. etc. I don't think I'd make switching JavaScript off a hard thing to do - this is one of the few things I actually do, as a user, to make a misbehaving website behave.

But hey ho. Testers have had it worse - remember when we used to have to save HTML to the desktop to edit it? And edit actual cookie files? We have it easy - particularly when you get to discover a new helpful add-on that you didn't know about - QuickJava.