This is a guest post from Adam Carmi. He is a Co-founder and CTO of Applitools, a startup company focused on delivering innovative cloud-based Automated Visual Testing solutions for front-end developers, automation engineers and QA teams.
Applitools Eyes validates the correctness of GUI layout, content, functionality and appearance of web, mobile and hybrid apps across all browsers, devices, screen resolutions and operating systems, automating GUI testing that was previously entirely manual. It integrates seamlessly with all major test automation frameworks, such as Selenium, Protractor, CodedUI, QTP/UFT and Appium, and is compatible with all major programming languages.
Visual Software Testing is the process of validating the visual aspects of an application’s User Interface (UI). In addition to validating that the UI displays the correct content or data, Visual Testing focuses on validating the Layout and Appearance of each visual element of the UI and of the UI as a whole. Layout correctness means ...
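To make the idea concrete, here is a minimal sketch of the pixel-comparison step at the heart of visual testing. It is not Applitools' actual algorithm: screenshots are modelled as 2D lists of RGB tuples, and a real tool would capture them from the browser (e.g. via Selenium's `get_screenshot_as_png`) and apply a perceptual diff rather than raw pixel equality.

```python
def visual_diff_ratio(baseline, current):
    """Return the fraction of pixels that differ between two images,
    each modelled as a 2D list of RGB tuples."""
    if len(baseline) != len(current) or len(baseline[0]) != len(current[0]):
        return 1.0  # a size mismatch is a layout change: treat as full diff
    total = len(baseline) * len(baseline[0])
    mismatched = sum(
        1
        for row_b, row_c in zip(baseline, current)
        for px_b, px_c in zip(row_b, row_c)
        if px_b != px_c
    )
    return mismatched / total

def assert_visually_equal(baseline, current, threshold=0.01):
    """Fail the check if more than `threshold` of the pixels changed."""
    ratio = visual_diff_ratio(baseline, current)
    assert ratio <= threshold, f"{ratio:.1%} of pixels differ"

# Example: a 2x2 image with one changed pixel -> 25% difference
white, red = (255, 255, 255), (255, 0, 0)
baseline = [[white, white], [white, white]]
current = [[white, white], [white, red]]
print(visual_diff_ratio(baseline, current))  # 0.25
```

In practice the threshold and comparison model (exact, perceptual, layout-aware) are what distinguish a usable visual-testing tool from a brittle screenshot diff.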
This is the introductory article to a series of occasional articles on testing, from the perspective of a developer. These articles are intended to pass on some simple techniques to help a QA team debug and test application builds.
In many instances these will be the kinds of technique that might also be used by developers during development or even for debugging customer issues.
What we aim to do
Pass on examples of high-bang-for-the-buck tips in the most digestible format. In some cases this will take the form of a case study that illustrates a key concept; this format helps exposition by allowing multiple "bites at the cherry" to get a key rule of thumb across. Everyone learns differently; the aim is to try every possible route in.
In other cases what is required is a ...
This post was shared with us by Patrick Martin.
This is a very interesting real-life example of a classic concurrency issue, and it neatly illustrates the abstract process of fault-finding a race condition. What makes it particularly interesting is that rather than involving high-performance multi-threaded code, it arises in an automated user interface test.
A test developer is in the process of working on an automated TestComplete test which is part of a suite that is run on a number of benchmark virtual machines. He discovers he has an interesting problem: his test fails quite regularly with an odd error on his machine but works on the test virtual machines. In an even more confusing development, when he debugs the test script in the test tool IDE the bug does not reproduce.
The error is "invalid operation" when the fault reproduces and happens ...
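The classic symptom described here (fails on a fast machine, passes on slower VMs and under the debugger) usually points to a timing assumption in the test. The standard cure is to replace fixed delays with an explicit wait that polls for readiness. The sketch below is generic Python, not TestComplete script, and `control_exists` / `click` are hypothetical stand-ins for whatever the automation tool actually provides.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Instead of a fixed delay, which races on machines of different speed:
#     time.sleep(2)
#     click("SaveButton")
# poll for the precondition explicitly:
#     if wait_until(lambda: control_exists("SaveButton")):
#         click("SaveButton")
#     else:
#         raise TimeoutError("SaveButton never became available")
```

The same pattern exists natively in most UI automation tools (e.g. `WaitWindow`/`WaitProperty` in TestComplete, explicit waits in Selenium); the point is that the test should synchronise on observable state, never on elapsed time.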
Bugs, Errors, Defects, Faults, Failures – All are indicators of customer dissatisfaction, especially when found in the production environment. With the increasing complexity of the software being developed, it is imperative to catch the high priority bugs before the application goes into production.
The purpose of this article is to provide a practical approach to identifying test cases so as to catch the maximum number of bugs before the product goes into the production environment.
There are several established ways of designing test cases, such as Boundary Value Analysis, Equivalence Partitioning and Error Guessing (intuition). In this article, however, we will learn how test cases can be derived from bugs that have already been identified in existing software.
The article follows a bug-to-test-case approach. For every bug, one or more corresponding test cases are defined. The bugs and test cases listed are not specific to any application, but may apply to one or ...
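One way to operationalise the bug-to-test-case approach is to pin every escaped bug down as a regression case in a table the test suite replays. This is a minimal sketch; `parse_quantity` and the bug numbers are hypothetical illustrations, not from the article.

```python
def parse_quantity(text):
    """Parse a user-entered quantity; each guard below was added
    after a bug escaped to production."""
    text = text.strip()                      # bug #101: trailing space crashed the parse
    if not text:
        raise ValueError("empty quantity")   # bug #102: empty input silently became 0
    value = int(text)
    if value < 0:
        raise ValueError("negative quantity")  # bug #103: negatives were accepted
    return value

# One or more test cases per historical bug:
REGRESSION_CASES = [
    ("5", 5),      # happy path
    (" 5 ", 5),    # from bug #101
]

def run_regression():
    for raw, expected in REGRESSION_CASES:
        assert parse_quantity(raw) == expected, raw
    for bad in ["", "-1"]:                   # from bugs #102 and #103
        try:
            parse_quantity(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"expected failure for {bad!r}")
    return "all regression cases pass"

print(run_regression())
```

The value of the table form is that each entry traces back to a concrete bug, so the suite grows exactly where the product has historically been weakest.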
This article covers the following areas of Performance testing:
- Determine the type of Performance tests to be conducted
- Determine the transaction mix
- Methodology for Tool selection
- Considerations while scripting those transactions
The take-away from this article is all of the above, plus the risks and benefits of each type of performance test, which will help you understand the kind of tests you need to conduct based on your client's specifications.
Determine the type of Performance test to be conducted
In order to address a performance need, it is very important to know the types of performance tests that can be conducted to fit the customer's requirements.
Performance tests can be broadly classified into:

- Load tests
- Stress tests
- Endurance tests
- Spike tests
- Capacity tests

Load tests
Load tests are performance tests which are focused on determining or validating performance characteristics of the product under test ...
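As a concrete illustration of what a load test measures, here is a minimal sketch: a fixed number of concurrent virtual users repeatedly drive a transaction, and we report the transaction count and a latency percentile. The `transaction` function is a stand-in for a real request (e.g. an HTTP call); real tools such as JMeter or LoadRunner add ramp-up, think time and richer reporting.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one unit of the transaction mix."""
    time.sleep(0.01)  # simulate a 10 ms server round-trip

def run_load_test(users=10, iterations=20):
    """Drive `users` concurrent workers, each running the transaction
    `iterations` times; return (transaction count, p95 latency in s)."""
    latencies = []
    def worker():
        for _ in range(iterations):
            start = time.monotonic()
            transaction()
            latencies.append(time.monotonic() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95)]
    return len(latencies), p95

count, p95 = run_load_test()
print(f"{count} transactions, p95 latency {p95 * 1000:.1f} ms")
```

Stress, endurance and spike tests all reuse this skeleton and vary only the load shape: stress keeps raising `users` until something breaks, endurance extends the duration, and a spike test jumps the user count abruptly.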
There are some basic rules of thumb which will serve you well in testing any application that deals with lists of data (and which applications don't?).
1. "1; few; many"
2. "don't make any assumptions"
3. "remember to mix it up a little"
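Rule 1 ("1; few; many") can be sketched as a simple parametrised check: exercise the same list-handling operation at the empty, single, small and large sizes, rather than only at the size the developer happened to try. The function under test, `summarise`, is a hypothetical example, and the "mix it up" rule shows up in the deliberately unsorted "many" case.

```python
def summarise(items):
    """Return (count, total) for a list of numbers."""
    return len(items), sum(items)

# "1; few; many" -- plus empty, which bites surprisingly often.
# "mix it up a little": vary content as well as size; don't assume sorted input.
SIZES = {
    "empty": [],
    "one": [7],
    "few": [3, 1, 2],
    "many": list(range(1000, 0, -1)),  # large and deliberately unsorted
}

for label, items in SIZES.items():
    count, total = summarise(items)
    assert count == len(items)
    assert total == sum(items)
    print(f"{label}: count={count}, total={total}")
```

The same scaffold works against any list-bearing UI or API: only the operation under test changes, never the size ladder.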
This case study covers a very interesting example of where following the rules of thumb exactly paid dividends.
There was an interactive reporting solution that was having performance issues: essentially there was some pathological performance degradation under some circumstances. The code contributing the biggest slice of time wastage was in a 3rd party component that really could not be
There were multiple passes at the problem and the usual things were done:
- Direct: get the problem project and simply pause the debugger at the obvious pain point
- Indirect: scrub the code looking for opportunities to
- (Eventually) Validation through testing: write an automated script ...
The scope of this article is limited to the need for performance testing, the approach to requirement collection, and the steps involved.
We all know and understand that one size does not fit all. The approach discussed in this article fits most scenarios, but it is neither exhaustive nor the only way to go about it.
Performance is a "must have" feature. No matter how rich your product is functionally, if it fails to meet the performance expectations of your customer the product will be branded a failure.
Application architectural design decisions may be greatly influenced by the importance placed by the customer on one or more specific requirements. Incorrect design decisions, made at the outset of a project as a result of invalid assumptions, may become impossible to remedy downstream. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression ...
This article is written by Anamika Chowdhury from HCL. She can be contacted at Anamika.firstname.lastname@example.org
In this white paper, we will focus ...