Wednesday, August 25, 2010

When to Stop Testing

Almost every tester has come across this question in an interview. We all know that testing is never really complete; it simply stops at a particular stage. Stopping does not mean that the system is bug free, but it does indicate that the system can be expected to work reliably at that stage.
There are several criteria and assumptions on which the decision to stop is based.

* Stop the testing when the committed / planned testing deadlines are about to expire.
* Stop the testing when we are unable to detect any more errors, even after executing all the planned test cases.

We can see that neither of the above statements carries much weight on its own: the first can be satisfied even by doing nothing, while the second is equally unreliable, since it cannot by itself ensure the quality of our test cases.

Pinpointing the time to stop testing is difficult. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.

The most common factors helpful in deciding when to stop testing are:

* Stop the testing when deadlines, such as release or testing deadlines, have been reached.
* Stop the testing when the test cases have been completed with a prescribed pass percentage.
* Stop the testing when the testing budget is exhausted.
* Stop the testing when code coverage and functionality requirements reach the desired level.
* Stop the testing when the bug rate drops below a prescribed level.
* Stop the testing when the alpha / beta testing period is over.

Keeping Track of Testing Progress:

Testing metrics help testers make better, more accurate decisions: when to stop testing, when the application is ready for release, how to track testing progress, and how to measure the quality of the product at a given point in the testing cycle.

The best approach is to have a fixed number of test cases ready well before the test execution cycle begins. Testing progress can then be measured by recording the total number of test cases executed and applying the following metrics, which are quite helpful in measuring the quality of the software product:

1) Percentage Completion: (Number of executed test cases) / (Total number of test cases)

2) Percentage of Test Cases Passed: (Number of passed test cases) / (Number of executed test cases)

3) Percentage of Test Cases Failed: (Number of failed test cases) / (Number of executed test cases)

A test case is declared Failed when even a single bug is found while executing it; otherwise it is considered Passed.
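
As an illustration, these metrics are straightforward to compute from raw pass/fail records. Here is a minimal Python sketch; the names (testing_progress, executed_results, total_planned) and the sample data are illustrative assumptions, not part of any standard.

# Minimal sketch of the three progress metrics above. Each executed test
# case is assumed (for illustration) to be recorded as a pass/fail boolean.

def testing_progress(executed_results, total_planned):
    """Return (completion %, pass %, fail %) for a test cycle."""
    executed = len(executed_results)
    passed = sum(1 for ok in executed_results if ok)
    failed = executed - passed
    completion_pct = 100.0 * executed / total_planned
    pass_pct = 100.0 * passed / executed if executed else 0.0
    fail_pct = 100.0 * failed / executed if executed else 0.0
    return completion_pct, pass_pct, fail_pct

# Example: 8 of 10 planned cases executed, 6 passed, 2 failed.
results = [True, True, False, True, True, False, True, True]
print(testing_progress(results, total_planned=10))  # (80.0, 75.0, 25.0)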

Scientific Methods to Decide When to Stop Testing:
1) Decision based on the number of passed / failed test cases:

a) Prepare a predefined number of test cases before the test execution cycle.

b) Execute all test cases in every testing cycle.

c) Stop the testing process when all the test cases pass.

d) Alternatively, testing can be stopped when the percentage of failures observed in the last testing cycle is extremely low, as sketched below.
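
Here is a hedged Python sketch of this stopping rule; the 2% failure threshold is an invented placeholder, not a standard value.

# Sketch of the pass/fail stopping rule above: stop when every case passes,
# or when the failure rate in the last cycle falls below a threshold.
# max_failure_rate=0.02 is an assumed example value.

def should_stop(cycle_results, max_failure_rate=0.02):
    """cycle_results: list of booleans (True = passed) for the latest cycle."""
    failed = cycle_results.count(False)
    failure_rate = failed / len(cycle_results)
    return failed == 0 or failure_rate <= max_failure_rate

print(should_stop([True] * 100))                # True: all cases passed
print(should_stop([True] * 99 + [False]))       # True: 1% failure rate
print(should_stop([True] * 90 + [False] * 10))  # False: 10% failure rate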

2) Decision based on metrics:

a) Mean Time Between Failures (MTBF): record the average operational time before a system failure.

b) Coverage metrics: record the percentage of instructions executed during tests.

c) Defect density: record defects relative to the size of the software, such as "defects per 1000 lines of code", or the number of open bugs and their severity levels.
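
For illustration, here is how MTBF and defect density are typically computed; the figures used below are made up for the example, not taken from any real project.

# Illustrative calculations for two of the metrics above.

def mtbf(operational_hours, failure_count):
    """Mean Time Between Failures: total operational time / number of failures."""
    return operational_hours / failure_count

def defect_density(defect_count, lines_of_code):
    """Defects per 1000 lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000.0)

print(mtbf(500.0, 4))             # 125.0 hours of operation per failure
print(defect_density(18, 45000))  # 0.4 defects per KLOC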

Finally, How to Decide:
Stop the testing if:

1) Code coverage is good.

2) The mean time between failures is quite large.

3) The defect density is very low.

4) The number of high-severity open bugs is very low.

Here 'good', 'large', 'low' and 'high' are subjective terms that depend on the type of product being tested. Ultimately, the risk of moving the application into production, as well as the risk of not moving forward, must be taken into consideration.
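
One way to make these criteria concrete is a simple release gate that combines all four checks, as in the sketch below. Every threshold here is an invented placeholder; real values depend on the product and its risk profile, as noted above.

# Hedged sketch of a combined stop/release decision. All four thresholds
# are assumed example values, not recommendations.

def ready_to_stop(coverage_pct, mtbf_hours, defects_per_kloc, open_high_sev):
    return (coverage_pct >= 85.0          # "good" coverage (assumed threshold)
            and mtbf_hours >= 100.0       # "large" MTBF (assumed threshold)
            and defects_per_kloc <= 0.5   # "low" defect density (assumed)
            and open_high_sev <= 2)       # few high-severity bugs (assumed)

print(ready_to_stop(90.0, 125.0, 0.4, 1))  # True: all gates satisfied
print(ready_to_stop(90.0, 125.0, 0.4, 7))  # False: too many severe open bugs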

A broad / universal statement of when to stop testing is when:
All the test cases derived from equivalence partitioning, cause-effect analysis and boundary-value analysis have been executed without detecting errors.
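
To illustrate the test-design techniques named above, consider a hypothetical input field that accepts ages 18 to 60 (the field and its range are assumptions for the example). Equivalence partitioning picks one representative value per partition; boundary-value analysis adds the edges of the valid range and their immediate neighbours.

# Hypothetical example: an age field valid for 18..60 (assumed range).
LOW, HIGH = 18, 60

equivalence_values = [10, 35, 70]             # below / inside / above the range
boundary_values = [LOW - 1, LOW, LOW + 1,     # around the lower boundary
                   HIGH - 1, HIGH, HIGH + 1]  # around the upper boundary

def is_valid_age(age):
    return LOW <= age <= HIGH

for age in equivalence_values + boundary_values:
    print(age, is_valid_age(age))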
