How much automation is enough in software testing?
This question can be answered in many different ways, such as:
- As much as possible in the available time and budget.
- As much as needed - fight for budget and time if required.
- Enough to cover all the acceptance criteria - make it part of delivery.
- Enough to cover all happy paths, edge cases and boundary conditions.
- The classic answer: some number, such as 75.67% code coverage, 80.23% feature coverage, 66.66% branch coverage, or whatever.
- And there may be other interesting answers.
I agree that it's difficult to answer this question without looking at the specifics. However, over time I have realised that there is value in keeping automated suites small and simple. Automation code, like any other code base, can have serious maintenance problems. If not handled properly, automation can have a big (and negative) impact on a project and can confuse an otherwise good testing team.
I have painfully witnessed the "let's rewrite the automation framework" exercise (or its variants: change the infrastructure, versioning, branching, CI, build pipeline, etc. around it) a few times, and often it did not matter much. More time was usually spent on the rewrite than was initially planned. This article from Joel Spolsky may be old, but it's still relevant, and you should read it if you have the time and interest.
So how should automation be approached? What can we do?
It’s much easier to have decent, manageable and robust automation than you might imagine - you could always hire me, which would be the quickest way :-) But if you can’t do that, then maybe you can try to avoid the following things -
- STOP every time you copy-paste code and ask yourself: can I do this in any other way? If I need this code more than once, should it live somewhere else? You can use tools like Checkstyle or PMD's copy-paste detector (CPD) and configure them to flag duplicated code.
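As a hedged illustration of that extraction (all names below are hypothetical, not from any real suite): suppose the same URL-joining snippet had been copy-pasted into several test classes; moving it into one shared method gives the duplication a single home.

```java
public class EndpointHelper {
    // Hypothetical helper: this one-liner used to be copy-pasted into
    // several test classes; now every test calls the shared version.
    static String endpoint(String base, String path) {
        return base.endsWith("/") ? base + path : base + "/" + path;
    }

    public static void main(String[] args) {
        // Both call sites now behave identically, with or without a trailing slash.
        System.out.println(endpoint("https://api.example.com", "users"));
        System.out.println(endpoint("https://api.example.com/", "users"));
    }
}
```

The point is not this particular helper, but that extracting at the first copy-paste keeps the fix cheap; the third copy is always harder to unwind.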
- STOP whenever you name a class or package "helper", "utility", etc. - you might as well call it NoIdeaWhatWillGoInThere or ItIsADumpYard. If you can't think of a decent name, ask others. Often, once you explain what you intend to put in these classes, names appear magically.
- STOP every time you add a new test and ask: do I really need this test? Could it be covered by adding just one more assertion to an existing test and renaming it? Remember, numbers are not important. Granularity is important, but don't add lots of tests for the sake of it. Keep maintenance and execution overhead in mind every time you add a new test.
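A minimal sketch of that merge, with an entirely hypothetical system under test: two near-identical tests that rebuilt the same fixture collapse into one renamed test carrying one extra assertion.

```java
public class DiscountTestSketch {
    // Hypothetical system under test.
    static int applyDiscount(int priceCents, int percent) {
        return priceCents - (priceCents * percent / 100);
    }

    // Before: two tests ("discountReducesPrice" and "fullDiscountIsNotNegative"),
    // each building the same fixture. After: one renamed test, two assertions.
    static void discountIsAppliedAndNeverGoesNegative() {
        int price = 1000;
        if (applyDiscount(price, 10) != 900)
            throw new AssertionError("10% off 1000 cents should be 900");
        if (applyDiscount(price, 100) < 0)
            throw new AssertionError("a full discount must not go negative");
    }

    public static void main(String[] args) {
        discountIsAppliedAndNeverGoesNegative();
        System.out.println("ok");
    }
}
```

One fixture instead of two means one place to update when the fixture changes, and one fewer test to schedule and babysit in CI.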
- STOP when you have to make lots of changes in the core of your framework to accommodate tests for just one piece of functionality. Always ask: is it worth the complexity it would bring? Can it be handled somewhere else - in dev tests, manual tests, or maybe a separate suite just for this? CI tools are mature enough to handle multiple test suites, projects, etc., and wherever possible you should keep the framework core simple.
- STOP when you spot tests testing the same thing over and over, perhaps through different interfaces. Investigate whether you could keep a few end-to-end tests and focus on the things that matter at each specific layer, instead of duplicating effort.
- STOP before you write any complicated code as part of your automation, and check whether there are libraries or existing solutions for the problem you are solving. Always check with developers and take their help. Never reinvent the wheel.
So, in a nutshell, try to write clean code and keep your automated suite as small (in size, not in coverage) as possible. Remember, the value of an automated suite is not in its size, complexity or number of tests, but in how much it improves the productivity of the team.
What are your thoughts? What goals do you have when you work on an automation project - coverage, test count, confidence, productivity or something else? Let’s discuss.