Software Testing
Ensuring that Computers Do What They Should, Nothing More, Nothing Less
Prof. Gideon Samid, PhD, PE


Computers are powerful and therefore need to be tightly controlled, lest their power turn harmful. Software drives computers, and therefore software is the object to be carefully controlled, which in practice means that one has to contemplate carefully what one expects software to do. This generally leads to first preparing a 'requirement document', followed by a 'system design', which is subsequently implemented through software coding. These three steps must be carried out with great care, and to ensure the quality of the outcome, it is critically important to develop a test plan.

A test plan follows the same stages as software development: (i) a requirement document, (ii) a design, and (iii) carrying out the tests.

An important element of software development and software testing is the set of underlying assumptions. Unfortunately, many assumptions are made tacitly and are never specified. This is a hidden source of many harmful software issues.

Software testing has three successive parts:

  • Testing Software Against the Software Requirement

  • Testing Software Against System Design

  • Testing The Per-Se Performance of the Implemented Software

First is to check whether a computer system does what the implemented software intends it to do (e.g. if a subroutine re-orders a list in ascending order, one needs to check whether, for various input orders, the output is always properly ordered). Next one must check software behavior under an array of errors, mistakes and omissions, as well as under a malicious adversarial attack.
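The implementation-level check described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `sort_ascending` subroutine (here a stand-in built on Python's `sorted`): the test exercises many input orders, verifies the output is always ordered and preserves the input's elements, and pins down behavior under erroneous input.

```python
import random

def is_ascending(seq):
    """True when every element is <= its successor."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

def sort_ascending(items):
    # Stand-in for the subroutine under test.
    return sorted(items)

# Exercise the subroutine with many input orders, including
# edge cases: empty list, one element, duplicates, reversed.
cases = [[], [1], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
cases += [random.sample(range(1000), 20) for _ in range(100)]

for case in cases:
    result = sort_ascending(case)
    assert is_ascending(result), f"not ordered: {result}"
    assert result == sorted(case)  # same elements, ascending order

# Behavior under erroneous input must also be checked: here we
# require that the subroutine rejects non-comparable data.
try:
    sort_ascending([1, "a"])
    raise AssertionError("expected a TypeError for mixed types")
except TypeError:
    pass
```

Note that the test covers not only "happy path" inputs but also the error case, in the spirit of checking behavior under mistakes and omissions.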

When software passes this implementation test, one must compare its performance with the design of the system. For example, if the design specifies a high degree of randomness for a particular variable, it is important to check whether that variable qualifies. An implementation test will not catch such a shortfall.
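A design-level randomness check of the kind mentioned above might look like the sketch below. The details are illustrative assumptions, not taken from the source: a crude frequency test that flags a variable whose observed distribution strays too far from the uniform ideal the design calls for (a serious test would use a statistical battery, e.g. a chi-square test).

```python
import random
from collections import Counter

def looks_uniform(samples, bins, tolerance=0.2):
    """Crude design-level check: every bin's observed count must be
    within `tolerance` (relative) of the uniform expectation."""
    counts = Counter(samples)
    expected = len(samples) / bins
    return all(
        abs(counts.get(b, 0) - expected) / expected < tolerance
        for b in range(bins)
    )

# The design specifies a high degree of randomness for this variable;
# a uniform generator serves as a stand-in for the real source.
values = [random.randrange(8) for _ in range(100_000)]
assert looks_uniform(values, bins=8)

# A heavily biased source must fail the same check.
biased = [0] * 90_000 + [random.randrange(8) for _ in range(10_000)]
assert not looks_uniform(biased, bins=8)
```

The point is that this shortfall is invisible to an implementation test: the biased source still produces syntactically valid values, and only a check against the design specification exposes it.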

Having passed the design test, the software will have to be checked against the articulated requirements -- are they all satisfied? For example, a requirement may state that a database search must complete in less than 250 ms. The design may have been specified to meet this requirement, and the software coded for that purpose, but whether or not this is actually the case is a matter of software testing.
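The 250 ms requirement above can be tested directly by timing the operation, as in this sketch. The `db_search` function and the query list are hypothetical stand-ins; a real test would invoke the actual system under realistic load and data volumes.

```python
import time

def db_search(query):
    # Hypothetical stand-in for the database search under test;
    # the sleep simulates a roughly 10 ms lookup.
    time.sleep(0.01)
    return []

# Requirement: every database search completes in under 250 ms.
LIMIT_MS = 250

worst = 0.0
for query in ["alpha", "beta", "gamma"]:
    start = time.perf_counter()
    db_search(query)
    elapsed_ms = (time.perf_counter() - start) * 1000
    worst = max(worst, elapsed_ms)
    assert elapsed_ms < LIMIT_MS, f"{query!r} took {elapsed_ms:.1f} ms"

print(f"worst case: {worst:.1f} ms (limit {LIMIT_MS} ms)")
```

Measuring against the requirement, rather than against the design or the code, is what distinguishes this level of testing: the test would fail even if the design and the implementation are each internally consistent.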

There are plenty of testing methodologies, mostly named after their proposers. They all fall on a spectrum that spans from "waterfall testing" to "continuous testing". The former assumes an orderly progression: requirements, design, implementation; the latter assumes a "ping pong of new ideas" in which the code, the design and the requirements keep changing throughout the life of the system. It is easier to test a 'waterfall' system, but modern software projects have too much built-in chaos in them, and the software tester is challenged to keep up.

Ahead we first discuss the 'underlying assumptions', and then we address the three testing modes.