Many people have asked me to explain how TestSpicer works. This post explains how TestSpicer can be used for manual and automated testing.
Let me start with manual testing.
For manual testing, TestSpicer can be extremely useful for experimenting with data. For example, if you are testing a username field and run out of ideas, you can quickly use TestSpicer to generate a random username. Along the same lines, if you need currency values, a few paragraphs of text, or a big Unicode string, you can get all of them at TestSpicer. It is free, and you do not need to sign up or create an account to generate random data manually. Just follow these steps:
- Go to TestSpicer.com/docs
- Click on the appropriate GET endpoint.
- Specify parameters if required
- Get the data and off you go
This will ensure that you are not using static data, even subconsciously.
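The same steps can also be scripted. The endpoint path and parameter name below are my assumptions, not TestSpicer's documented API (check TestSpicer.com/docs for the real ones); this sketch only shows how such a GET request URL might be assembled:

```python
from urllib.parse import urlencode

# NOTE: the "/api/" prefix, "username" resource, and "length" parameter are
# hypothetical -- consult TestSpicer.com/docs for the actual endpoints.
BASE_URL = "https://testspicer.com/api/"

def build_request_url(resource, **params):
    """Assemble a GET URL for a random-data resource."""
    url = BASE_URL + resource
    if params:
        # Sort parameters so the generated URL is deterministic
        url += "?" + urlencode(sorted(params.items()))
    return url

print(build_request_url("username", length=12))
```

From here, any HTTP client can fetch the data and feed it straight into a manual test session.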
Let’s see ...
Some of you might know that I have been working on my pet project, TestSpicer, for some time. TestSpicer still has a long way to go; however, I am happy to announce that it is now live.
So what is TestSpicer?
TestSpicer is a collection of RESTful web services which can be used to make test automation more efficient and effective.
Please have a look at this (roughly four-minute) video to understand TestSpicer.
TestSpicer is in beta and free to use. It would be great if you could sign up, give it a spin, and let me know what you think about it.
With TestSpicer, I hope to make randomisation mainstream, as it will take the pain out of data generation, logging, and reporting, and will provide invaluable insight into the data used by test automation. Right now I have focused on data generation, but reporting, logging and visualisation ...
How can I make test automation more effective? This simple question can lead to many interesting things.
The dictionary defines effective as "capable of accomplishing a purpose" or "capable of producing intended or expected results".
In order to understand effectiveness in the context of test automation, we need to answer the following questions:
- What do we want to accomplish with test automation?
- What do we expect test automation to produce?
For me, automation should yield the following two things:
- It should give confidence
- It should find defects
We strive to gain confidence, and we hope to find defects with test automation. However, most of the time test automation becomes repetition: execution with the same data and the same steps. There is immense value in this repetition, but should we stop there? What can we do to make test automation more effective?
In my opinion, testing is a sampling exercise. If we ...
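To make the sampling idea concrete, here is a minimal sketch in Python. The `normalise_username` function is an invented stand-in for the system under test; the point is that instead of asserting against one fixed input, each run checks an invariant against many random samples:

```python
import random

def normalise_username(name):
    # Invented stand-in for the system under test
    return name.strip().lower()

def check_random_sample(runs=100, seed=None):
    # A fixed seed makes a failing run reproducible
    rng = random.Random(seed)
    alphabet = "abcXYZ 123_-"
    for _ in range(runs):
        raw = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 20)))
        out = normalise_username(raw)
        # Invariants: no surrounding whitespace, no uppercase letters
        assert out == out.strip()
        assert out == out.lower()

check_random_sample(seed=42)
```

A hundred random inputs per run is a far larger sample than one hand-picked value, at essentially no extra cost.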
How can we improve the efficiency of test automation projects?
I work as an independent consultant and get involved in a variety of test automation projects. I am fascinated by test automation and constantly think of ways to improve its efficiency. But what is efficiency? What do we mean by improving the efficiency of test automation projects?
According to Wikipedia, efficiency is the extent to which time, effort, or cost is well used for the intended task or purpose. Efficiency is often a measurable concept. For example, if it takes 5 minutes to use standard libraries such as String, Math, etc., and a couple of days to replicate their functionality, it is more efficient to use the standard libraries.
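That library example can be made concrete (using Python here purely for illustration): the one-line standard-library call and a hand-rolled equivalent compute the same result, but only one of them costs days to write, test, and maintain.

```python
import math

# Five minutes: reuse the standard library
def hypotenuse(a, b):
    return math.hypot(a, b)

# "A couple of days" (plus the bugs): replicate it by hand
def hypotenuse_by_hand(a, b):
    return (a * a + b * b) ** 0.5

print(hypotenuse(3, 4))          # 5.0
print(hypotenuse_by_hand(3, 4))  # 5.0
```

Same answer either way; the efficient choice is the one that used the least time and effort to get it.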
In my opinion, efficiency of test automation projects can be described in two different ways.
- Execution efficiency - how fast the test automation suite executes. This is usually improved by things like ...
I have been thinking about randomization and test data for quite some time. If you are interested, you can find my views on randomization here. I strongly believe that testing is a sampling exercise and that randomization increases the sample size. If used properly, test automation would not be mere repetition and would have the potential to uncover something new in every run.
Despite its numerous benefits, I haven't seen randomization used in many automation projects. This could be because of the lack of infrastructure around it. Randomization needs reliable test data generation, logging, reporting, visualization, etc., and teams often do not have the bandwidth, motivation, or skills to build it.
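One low-cost piece of that infrastructure is seed logging: record the random seed of every run so that any failure can be replayed exactly. A minimal sketch, assuming Python and the standard `random` and `logging` modules (the function name is mine, not TestSpicer's):

```python
import logging
import random

logging.basicConfig(level=logging.INFO)

def new_run_rng(seed=None):
    """Create a per-run RNG and log its seed so failures are reproducible."""
    if seed is None:
        seed = random.randrange(2**32)
    logging.info("test-run random seed: %d", seed)
    return random.Random(seed)

# A fresh run uses a new seed; replaying with the logged
# seed regenerates identical test data.
rng = new_run_rng(seed=1234)
data = [rng.randint(0, 9) for _ in range(5)]
```

With the seed in the log, "it failed once on random data" stops being an unreproducible mystery.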
My vision for TestSpicer is to increase the efficiency and effectiveness of testing & test automation projects.
If you would like to ...
How much automation is enough in software testing?
It is possible to answer this question in many different ways, such as:
- As much as possible in the available time and budget.
- As much as needed - fight for budget and time if required.
- Enough to cover all the acceptance criteria - make it part of delivery.
- Should cover all happy paths, odd cases and boundary conditions.
- The classic answer would be a specific number: 75.67% code coverage, 80.23% feature coverage, 66.66% branch coverage, or whatever.
- And there may be other interesting answers
I agree that it is difficult to answer this question without looking at the specifics; however, over time I have realised that there is value in keeping automated suites small and simple. Automation code, like any other code base, can have serious maintenance problems. If not handled properly, automation can have a big (and negative ...
I mentioned in my previous post that I will focus on testing mobile applications and will share tips, tricks, and tools that might be useful. Today I am covering a topic which is very important for the user. This feature, however, is invisible (most of the time) and is often not covered by conventional non-functional testing types (accessibility, security, performance, etc.).
In my previous article I briefly mentioned that unconventional non-functional requirements are one of the main differentiators between mobile and desktop applications. Let’s explore one such requirement, power consumption, and answer two key questions:
- Why is it important to test the power consumption of mobile applications?
- How can you get insight into an application's power consumption and improve it?
Let’s get started.
Battery - If you are not careful, I will drain
We do not need any research to prove that battery life is ...
We have witnessed the transition from desktop to web, and we are now witnessing another transition from web to mobile. I have been thinking about a blog series on testing mobile applications for a while, and this is the first post in the series. In the coming weeks, I will try to cover various topics, products, and approaches related to testing mobile applications. I will focus on Android to start with and then move on to other platforms.
Before I delve deeper into the subject, it is important to understand how testing mobile applications differs from testing browser/desktop applications. If we understand the distinctions and challenges of testing mobile apps, it will be a bit easier to tackle them.
1. Supported platforms & devices - you have more combinations to test
Desktop apps were usually targeted at specific platforms, and it was relatively easy to access those platforms. Web based applications ...