White Box Testing

By Anand Ramdeo on May 5, 2011

White box testing is very different in nature from black box testing. In black box testing, the focus of all activities is solely on the functionality of the system, not on what is happening inside it.

The purpose of white box testing is to make sure that:

  • the functionality is correct
  • information is gathered on code coverage

White box testing is primarily the development team's job, but test engineers have now also started helping the development team in this effort by contributing to unit test cases, generating data for unit tests, and so on.

White box testing can be performed at various levels, from unit testing up to system testing. The only distinction between black box and white box testing is knowledge of the system: in white box testing you select or execute your test cases based on knowledge of the code or architecture of the system under test.

Even if you are executing an integration or system test, if the data is chosen so that one particular code path is exercised, it should fall under white box testing.

There are different types of coverage that can be targeted in white box testing:

  • Statement coverage
  • Function coverage
  • Decision coverage
  • Decision and Statement coverage
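As an illustration of the difference between these coverage targets, consider the following sketch. It is a hypothetical example in Python (the article's own fragments are Ada-flavoured): a single test can achieve full statement coverage while leaving a decision outcome untested.

```python
def classify(x):
    """Hypothetical function under test."""
    label = "small"
    if x > 10:
        label = "large"
    return label

# A single test with x = 15 executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision.
# Decision coverage additionally requires a test where x <= 10, so the
# False outcome (which skips the assignment) is also taken.
statement_coverage_tests = [15]       # every statement runs
decision_coverage_tests  = [15, 5]    # both decision outcomes run

assert classify(15) == "large"
assert classify(5) == "small"
```

This is why decision coverage is a strictly stronger target than statement coverage: every test set achieving it also achieves statement coverage, but not vice versa.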

Conditional Testing

The first improvement to white box techniques is to ensure that the Boolean controlling expressions are adequately tested, a process known as condition testing.

The process of condition testing ensures that a controlling expression has been adequately exercised whilst the software is under test, by constructing a constraint set for every expression and then ensuring that every member of the constraint set is included in the values which are presented to the expression. This may require additional test runs to be included in the test plan.

To introduce the concept of constraint sets the simplest possible Boolean condition, a single Boolean variable or a negated Boolean variable, will be considered. These conditions may take forms such as:

if DateValid then

while not DateValid loop

The constraint set for both of these expressions is {t, f}, which indicates that to adequately test these expressions they should be tested twice, with DateValid having the values True and False.
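In an executable form, condition testing over the constraint set {t, f} simply means driving the condition both ways. A minimal Python sketch, with a hypothetical process function mirroring the DateValid guard above:

```python
def process(date_valid):
    # Hypothetical guard built around a single Boolean condition,
    # mirroring "if DateValid then" from the text.
    if date_valid:
        return "accepted"
    return "rejected"

# The constraint set for a single Boolean variable is {t, f}:
# the condition must be exercised with both values.
constraint_set = [True, False]
results = [process(v) for v in constraint_set]
```

Both outcomes of the condition are now covered by the two test runs.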

Perhaps the next simplest Boolean condition consists of a simple relational expression of the form value operator value, where the operator can be one of: is equal to ( = ), is not equal to ( /= ), is greater than ( > ), is less than ( < ), is greater than or equal to ( >= ) and is less than or equal to ( <= ). The constraint set for a relational expression is {=, <, >}, which indicates that to adequately test a relational expression it must be tested three times: with values which ensure that the two values are equal, that the first value is less than the second value and that the first value is greater than the second value.
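The three members of the {=, <, >} constraint set translate directly into three test cases. A sketch, assuming a hypothetical within_limit function built around the relational condition count <= limit:

```python
def within_limit(count, limit):
    # Hypothetical relational condition under test: count <= limit.
    return count <= limit

# Constraint set {=, <, >}: test with the operands equal, with the
# first operand less than the second, and with it greater.
cases = [(10, 10),   # '=' : operands equal
         (9, 10),    # '<' : first operand less than second
         (11, 10)]   # '>' : first operand greater than second
outcomes = [within_limit(a, b) for a, b in cases]
```

Note that the equal and less-than cases both evaluate to True here; the point of the constraint set is to catch a mistyped operator (for example < written instead of <=), which only the boundary case with equal operands would reveal.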

More complex control expressions involve the use of the Boolean operators and, or and xor, which combine two Boolean values. To construct a constraint set for a simple Boolean expression of the form BooleanValue operator BooleanValue, all possible combinations of True and False have to be considered. This gives the constraint set for the expression as {{t,t} {t,f} {f,t} {f,f}}. If both BooleanValues are simple or negated Boolean variables then no further development of this set is required. However, if one or both of the BooleanValues are relational expressions, then the constraint set for the relational expression will have to be combined with this constraint set. The combination takes the form of noting that the equality condition is equivalent to true and both inequality conditions are equivalent to false. Thus every true condition in the constraint set is replaced with '=' and every false is replaced twice, once with '>' and once with '<'.

Thus, if only the left hand BooleanValue is a relational expression, the condition set would be {{=,t} {=,f} {>,t} {>,f} {<,t} {<,f}}. If both BooleanValues are relational expressions, the condition set would be {{=,=} {=,>} {=,<} {>,=} {>,>} {>,<} {<,=} {<,>} {<,<}}.
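The replacement rule can be applied mechanically. The following Python sketch (expand is a hypothetical helper, not part of any standard technique's API) substitutes '=' for each t and both '>' and '<' for each f, reproducing the six-term and nine-term condition sets:

```python
# Substitution rule from the text: for a relational operand, 't'
# becomes '=', while 'f' is replaced twice, once with '>' and once
# with '<'.
def expand(term):
    return ['='] if term == 't' else ['>', '<']

boolean_pairs = [('t', 't'), ('t', 'f'), ('f', 't'), ('f', 'f')]

# Left operand relational, right operand a plain Boolean variable:
left_relational = [(l, r) for lt, r in boolean_pairs for l in expand(lt)]

# Both operands relational:
both_relational = [(l, r) for lt, rt in boolean_pairs
                   for l in expand(lt) for r in expand(rt)]
```

The first list has six members and the second nine, matching the sets given above.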

An increase in the complexity of the Boolean expression by the addition of more operators will introduce implicit or explicit bracketing of the order of evaluation which will be reflected in the condition set and will increase the number of terms in the set. For example a Boolean expression of the following form:

BooleanValue1 operator1 BooleanValue2 operator2 BooleanValue3

Has the implicit bracketing:

(BooleanValue1 operator1 BooleanValue2) operator2 BooleanValue3

The constraint set for the complete expression would be {{e1,t}{e1,f}}, where e1 is the condition set of the bracketed sub-expression and when it is used to expand this constraint set gives {{t,t,t} {t,f,t} {f,t,t} {f,f,t} {t,t,f} {t,f,f} {f,t,f} {f,f,f}}. If any of the BooleanValues are themselves relational expressions this will increase the number of terms in the condition set. In this example the worst case would be if all three values were relational expressions and would produce a total of 27 terms in the condition set. This would imply that 27 tests are required to adequately test the expression. As the number of Boolean operators increases the number of terms in a condition set increases exponentially and comprehensive testing of the expression becomes more complicated and less likely. It is this consideration which led to the advice, initially given in Section 1, to keep Boolean control expressions as simple as possible, and one way to do this is to use Boolean variables rather than expressions within such control conditions.
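The growth in the number of terms can be checked with a short calculation, sketched here in Python using itertools.product: each Boolean operand contributes the two outcomes {t, f}, and each relational operand the three outcomes {=, <, >}.

```python
from itertools import product

# Number of terms needed to exercise an expression combining n
# operands: 2^n when every operand is a simple Boolean variable,
# 3^n when every operand is a relational expression.
def terms(n, outcomes):
    return len(list(product(outcomes, repeat=n)))

boolean_terms    = terms(3, ['t', 'f'])       # 2^3 = 8 terms
relational_terms = terms(3, ['=', '<', '>'])  # 3^3 = 27 terms
```

This confirms the 8-term expansion for three Boolean variables and the worst case of 27 terms when all three operands are relational, and makes the exponential growth with each added operator explicit.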

Data Life Cycle Testing

Keeping control expressions simple can be argued simply to distribute the complexity from the control expressions into the other parts of the subprogram, and an effective testing strategy should recognise this and account for it. One approach to this consideration is known as data lifecycle testing, and it is based upon the consideration that a variable is at some stage created, and subsequently may have its value changed or used in a controlling expression several times before being destroyed. If only locally declared Boolean variables used in control conditions are considered, then an examination of the source code will indicate the place in the source code where the variable is created, the places where it is given a value, where the value is used as part of a control expression and the place where it is destroyed.

This approach to testing requires all possible feasible lifecycles of the variable to be covered whilst the module is under test. In the case of a Boolean variable this should include the possibility of a Boolean variable being given the values True and False at each place where it is given a value. A typical outline sketch of a possible lifecycle of a controlling Boolean variable within a subprogram might be as follows:

---- SomeSubProgram( ---- ) is

   ControlVar : BOOLEAN := FALSE;

begin -- SomeSubProgram

   ----

   while not ControlVar loop
      ----
      ControlVar := SomeExpression;
   end loop;

   ----

end SomeSubProgram;

In this sketch ---- indicates the parts of the subprogram which are not relevant to the lifecycle. In this example there are two places where the variable ControlVar is given a value: the location where it is created and the assignment within the loop. Additionally there is one place where it is used as a control expression. There are two possible lifecycles to consider. One can be characterised as {f, t}, indicating that the variable is created with the value False and given the value True upon the first iteration of the loop. The other can be characterised as {f, f, ..., t}, which differs from the first by indicating that the variable is given the value False on the first iteration, following which there is the possibility of more iterations where it is also given the value False, before being given the value True on the last iteration.

Other possible lifecycles, such as {f} or {f, t, f, ..., t}, can be shown from a consideration of the source code not to be possible. The first is not possible because the default value will ensure that the loop iterates, causing the variable to experience at least two values during its lifecycle. The second is not possible because, as soon as the variable is given the value True within the loop, the loop, and subsequently the subprogram, will terminate, causing the variable to be destroyed.
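The two feasible lifecycles can be demonstrated by instrumenting the variable. A Python rendering of the Ada sketch above (some_subprogram and expression_values are hypothetical stand-ins for the subprogram and for SomeExpression):

```python
def some_subprogram(expression_values):
    """Python rendering of the sketch: ControlVar is created False and
    reassigned inside the loop until it becomes True. The history list
    records every value the variable takes, i.e. its lifecycle."""
    history = []
    control_var = False            # creation: always starts False
    history.append(control_var)
    i = 0
    while not control_var:
        control_var = expression_values[i]   # stand-in for SomeExpression
        history.append(control_var)
        i += 1
    return history

# Lifecycle {f, t}: True on the first iteration.
assert some_subprogram([True]) == [False, True]
# Lifecycle {f, f, ..., t}: False values before the final True.
assert some_subprogram([False, False, True]) == [False, False, False, True]
```

The recorded histories show that every lifecycle begins with False and ends with exactly one True, which is precisely why {f} and {f, t, f, ..., t} are infeasible.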

This quick look at the variable lifecycle approach only indicates the basis of this approach to white box testing. It should also indicate that this is one of the most laborious and thus expensive and difficult techniques to apply. As it is expensive and does not add a great deal to the testing considerations which have already been discussed, it is not widely used.

Loop Testing

The final white box consideration which will be introduced is the testing of loops, which have been shown to be the most common cause of faults in subprograms. If a loop, definite or indefinite, is intended to iterate n times, then the test plan should include the following considerations and possible faults.

  • That the loop might iterate zero times
  • That the loop might iterate once
  • That the loop might iterate twice
  • That the loop might iterate several times
  • That the loop might iterate n - 1 times
  • That the loop might iterate n times
  • That the loop might iterate n + 1 times
  • That the loop might iterate infinitely

All feasible possibilities should be exercised whilst the software is under test. The last possibility, an infinite loop, is a very noticeable and common fault. All loops should be constructed in such a way that they are guaranteed to come to an end at some stage. However, this does not necessarily guarantee that they come to an end after the correct number of iterations: a loop which iterates one time too many or one time too few is probably the most common loop fault. Of these possibilities, an additional iteration may, with luck, cause a CONSTRAINT_ERROR exception to be raised, announcing its presence. Otherwise the n - 1 and n + 1 loop faults can be very difficult to detect and correct.

A loop executing zero times may be part of the design, in which case it should be explicitly tested that it does so when required and does not do so when it is not required. Otherwise, if the loop should never execute zero times and it does, this can also be a very subtle fault to locate. The additional considerations, once, twice and many, are included to increase confidence that the loop is operating correctly.
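A minimal sketch of such a loop-boundary test plan, assuming a hypothetical total function as the loop under test (with n chosen as 5 purely for illustration):

```python
def total(values):
    # Hypothetical loop under test: sums a list of values.
    result = 0
    for v in values:
        result += v
    return result

# Boundary-value test plan for the loop: zero, one, two, several,
# n - 1, n and n + 1 iterations.
n = 5
for count in [0, 1, 2, 3, n - 1, n, n + 1]:
    data = [1] * count
    # An off-by-one fault in the loop (one iteration too many or
    # too few) would fail at the n - 1, n or n + 1 cases.
    assert total(data) == count
```

The zero-iteration case deserves particular attention, since as noted above it is either a designed behaviour to confirm or a subtle fault to expose.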

The next consideration is the testing of nested loops. One approach to this is to combine the test considerations of the innermost loop with those of the outermost loop. As there are seven iteration-count considerations for a simple loop (setting the infinite case aside), this would give 49 considerations for two levels of nesting and 343 for a triply nested loop; this is clearly not a feasible proposition.

What is possible is to start by testing the innermost loop, with all other loops set to iterate the minimum number of times. Once the innermost loop has been tested, it should be configured so that it will iterate the minimum number of times, and the next outermost loop tested. Testing of nested loops can continue in this manner, effectively testing each nested loop in sequence rather than in combination, which means the number of required tests increases arithmetically ( 7, 14, 21 .. ) rather than geometrically ( 7, 49, 343 .. ).
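The arithmetic versus geometric growth can be made concrete with a toy calculation, assuming seven considerations per loop as above:

```python
# Tests required for nested loops, given a fixed number of
# considerations per loop. Testing loops in sequence (each tested
# with the others at minimum iterations) grows arithmetically;
# testing every combination of considerations grows geometrically.
CONSIDERATIONS = 7

def sequential_tests(depth):
    return CONSIDERATIONS * depth        # 7, 14, 21, ...

def combined_tests(depth):
    return CONSIDERATIONS ** depth       # 7, 49, 343, ...
```

At three levels of nesting the sequential strategy needs 21 tests against 343 for the combined one, which is why testing each nested loop in turn is the practical choice.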
