Friday, April 17, 2009

Risk Based Testing

What exactly is risk?
It’s the possibility of a negative or undesirable outcome.
Risk can be defined as the chance of an event, hazard, threat or situation occurring and producing undesirable consequences; in short, a potential problem. The level of risk is determined by the likelihood of the adverse event happening and the impact (the harm resulting from that event).

In the future, a risk has some likelihood between 0% and 100%; it is a possibility, not a certainty. In the past, however, either the risk has materialized and become an outcome or issue, or it has not; the likelihood of a risk in the past is either 0% or 100%.
The likelihood of a risk becoming an outcome is one factor to consider when thinking about the level of risk associated with its possible negative consequences. The more likely the outcome is, the worse the risk. However, likelihood is not the only consideration.
For example, most people are likely to catch a cold in the course of their lives, usually more than once. The typical healthy individual suffers no serious consequences, so the overall level of risk associated with colds is low for such a person. But the risk of a cold for an elderly person with breathing difficulties would be high. The potential consequence, or impact, is also an important consideration affecting the level of risk.

Classification of Risk
1. Product Risks (Factors relating to what is produced by the work, i.e. the thing we are testing)
2. Project Risks (Factors relating to the way the work is carried out, i.e. the test project)

1. Product Risks:


Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product, such as:

o Failure-prone software delivered.
o The potential that the software/hardware could cause harm to an individual or company.
o Poor software characteristics (e.g. functionality, reliability, usability and performance).
o Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

You can think of a product risk as the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation. (Product risks are sometimes referred to as 'quality risks', as they are risks to the quality of the product.) Unsatisfactory software might omit some key functions that the customers specified, the users required or the stakeholders were promised. Unsatisfactory software might be unreliable and frequently fail to behave normally. Unsatisfactory software might fail in ways that cause financial or other damage to a user or the company that user works for. Unsatisfactory software might have problems related to a particular quality characteristic, which might not be functionality, but rather security, reliability, usability, maintainability or performance.

Risk-based testing is the idea that we can organize our testing effort in a way that reduces the residual level of product risk when the system ships. It uses risk to prioritize and emphasize the appropriate tests during test execution, but it's about more than that. Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide test planning, specification, preparation and execution. It involves both mitigation (testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects) and contingency (testing to identify work-arounds to make the defects that do get past us less painful). Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas. It can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.



Risk-based testing starts with product risk analysis. One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items. Another technique is brainstorming with many of the project stakeholders. Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company. Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.

While you could perform the risk analysis by asking, 'What should we worry about?', usually more structure is required to avoid missing things. One way to provide that structure is to look for specific risks in particular product risk categories. You could consider risks in the areas of functionality, localization, usability, reliability, performance and supportability. You might have a checklist of typical or past risks that should be considered. You might also want to review the tests that failed and the bugs that you found in a previous release or a similar product. These lists and reflections serve to jog the memory, forcing you to think about risks of particular kinds, as well as helping you structure the documentation of the product risks.
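To make that concrete, here is a minimal sketch in Python of how such a category checklist might be kept and walked through during a risk-identification session. The categories are the ones named above; the prompts and the data structure are illustrative assumptions, not a prescribed format.

# A minimal sketch of a product-risk checklist built on the
# categories named above. The prompts are illustrative assumptions.
RISK_CATEGORIES = {
    "functionality": "Could any feature compute or behave incorrectly?",
    "localization": "Could translations, formats or encodings be wrong?",
    "usability": "Could users misunderstand or misuse the interface?",
    "reliability": "Could the system crash, hang or lose data?",
    "performance": "Could response times or throughput be unacceptable?",
    "supportability": "Could the system be hard to install or maintain?",
}

# Walk the checklist in a risk-identification session, recording the
# specific risks the team raises under each category.
identified_risks = {category: [] for category in RISK_CATEGORIES}
for category, prompt in RISK_CATEGORIES.items():
    print(f"[{category}] {prompt}")
    # ...append the team's answers to identified_risks[category]...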

When we talk about specific risks, we mean a particular kind of defect or failure that might occur. For example, if you were testing the calculator utility that is bundled with Microsoft Windows, you might identify 'incorrect calculation' as a specific risk within the category of functionality. However, this is too broad. Consider incorrect addition. This is a high-impact kind of defect, as everyone who uses the calculator will see it, but it is unlikely, since addition is not a complex algorithm. Contrast that with an incorrect sine calculation. This is a low-impact kind of defect, since few people use the sine function on the Windows calculator, but a defect here is more likely, since sine functions are hard to calculate.

After identifying the risk items, you and, if applicable, the stakeholders should review the list to assign the likelihood of problems and the impact of problems associated with each one. There are many ways to go about this assignment of likelihood and impact. You can do it with all the stakeholders at once, or you can have the business people determine impact and the technical people determine likelihood, and then merge the determinations. Either way, the reason for identifying the risks first and then assessing their levels is that the risks are relative to each other.

The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it's often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.

Given two classifications of risk levels, likelihood and impact, we have a problem, though: we need a single, aggregate risk rating to guide our testing effort. As with rating scales, practices vary. One approach is to convert each risk classification into a number and then either add or multiply the numbers to calculate a risk priority number. For example, suppose a particular risk has a high likelihood and a medium impact. Converting very high through very low to the numbers 1 through 5, high becomes 2 and medium becomes 3, so the risk priority number would be 6 (2 times 3), where lower numbers indicate higher-priority risks.
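Here is a minimal sketch in Python of the conversion just described. It assumes the five-point scale maps very high = 1 down to very low = 5 (which is what makes high times medium come out as 2 times 3), so a lower risk priority number means a higher-priority risk; the ratings given to the calculator examples are assumptions drawn from the discussion above.

# Convert the five-point ratings to numbers and multiply them.
SCALE = {"very high": 1, "high": 2, "medium": 3, "low": 4, "very low": 5}

def risk_priority(likelihood: str, impact: str) -> int:
    """Lower numbers indicate riskier items on this mapping."""
    return SCALE[likelihood] * SCALE[impact]

# The example from the text: high likelihood, medium impact.
print(risk_priority("high", "medium"))    # 6

# The calculator examples, scored with assumed ratings:
print(risk_priority("low", "very high"))  # 4 - incorrect addition
print(risk_priority("high", "low"))       # 8 - incorrect sine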

Armed with a risk priority number, we can now decide on the various risk-mitigation options available to us. Do we use formal training for programmers or analysts, rely on cross-training and reviews, or assume they know enough? Do we perform extensive testing, cursory testing or no testing at all? Should we ensure unit testing and system testing coverage of this risk? These options and more are available to us.

As you go through this process, make sure you capture the key information in a document. We're not fond of excessive documentation, but this quantity of information simply cannot be managed in your head. We recommend a lightweight table, which we usually capture in a spreadsheet.
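As a hypothetical illustration of such a table, the sketch below keeps the risk items as rows, computes the risk priority number for each, and sorts so the riskiest items come first. The column set, the sample rows and the cut-offs for extent of testing are assumptions for the sake of the example, not a prescribed layout.

# A hypothetical lightweight risk table, sorted riskiest-first.
SCALE = {"very high": 1, "high": 2, "medium": 3, "low": 4, "very low": 5}

rows = [
    # (risk item, likelihood, impact) - illustrative entries only
    ("Incorrect addition", "low", "very high"),
    ("Incorrect sine calculation", "high", "low"),
    ("Garbled display of long results", "medium", "medium"),
]

print("Risk item\tLikelihood\tImpact\tRPN\tExtent of testing")
for item, lik, imp in sorted(rows, key=lambda r: SCALE[r[1]] * SCALE[r[2]]):
    rpn = SCALE[lik] * SCALE[imp]
    # Assumed cut-offs: the lower the RPN, the more testing it gets.
    extent = "extensive" if rpn <= 4 else "broad" if rpn <= 10 else "cursory"
    print(f"{item}\t{lik}\t{imp}\t{rpn}\t{extent}")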



Let's finish this section with two quick tips about product risk analysis. First, remember to consider both likelihood and impact. While it might make you feel like a hero to find lots of defects, testing is also about building confidence in key functions. We need to test the things that probably won't break but would be catastrophic if they did.
Second, risk analyses, especially early ones, are educated guesses. Make sure that you follow up and revisit the risk analysis at key project milestones. For example, if you're following a V-model, you might perform the initial analysis during the requirements phase, then review and revise it at the end of the design and implementation phases, as well as prior to starting unit test, integration test and system test. We also recommend revisiting the risk analysis during testing. You might find that you have discovered new risks, that some risks weren't as risky as you thought, or that your confidence in the risk analysis has increased.

2. Project Risks:
Project risks are the risks that surround the project’s capability to deliver its objectives, such as:

o Organizational factors:
- skill and staff shortages;
- personal and training issues;
- political issues, such as:
. problems with testers communicating their needs and test results;
. failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
- improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing).

o Technical issues:
- problems in defining the right requirements;
- the extent that requirements can be met given existing constraints;
- the quality of the design, code and tests.

o Supplier issues:
- failure of a third party;
- contractual issues.

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The 'Standard for Software Test Documentation' (IEEE 829) outline for test plans requires risks and contingencies to be stated.

To deal with the project risks that apply to testing, we can use the same concepts we apply to identifying, prioritizing and managing product risks.

Remembering that a risk is the possibility of a negative outcome, what project risks affect testing? There are direct risks, such as the late delivery of the test items to the test team or availability issues with the test environment. There are also indirect risks, such as excessive delays in repairing defects found in testing or problems with getting professional system administration support for the test environment.

To discover project risks, ask yourself and other project participants and stakeholders:
- What could go wrong on the project to delay or invalidate the test plan, the test strategy and the test estimate?
- What are unacceptable outcomes of testing or in testing?
- What are the likelihood and impact of each of these risks?
This process is very much like the risk analysis process for products.

For any risk, product or project, you have four typical options:
• Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
• Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
• Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk.
• Ignore: Do nothing about the risk, which is usually a smart option only when there's little that can be done or when the likelihood and impact are low.

There is another typical risk-management option, buying insurance, which is not usually pursued for project or product risks on software projects, though it is not unheard of.
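As a quick sketch of how these options might be recorded against specific risks, the Python below keeps a small risk register. The four response names come from the list above, while the risks and the choices attached to them are hypothetical examples.

# A hypothetical register mapping project risks to the chosen response.
from enum import Enum

class Response(Enum):
    MITIGATE = "mitigate"        # reduce likelihood (and maybe impact) in advance
    CONTINGENCY = "contingency"  # have a plan ready to reduce the impact
    TRANSFER = "transfer"        # hand the risk to another stakeholder
    IGNORE = "ignore"            # sensible only when likelihood and impact are low

risk_register = {
    "Late delivery of test items": Response.MITIGATE,
    "Test environment outage during execution": Response.CONTINGENCY,
    "Misleading results from a limited environment": Response.TRANSFER,
    "Cosmetic typo on a rarely seen screen": Response.IGNORE,
}

for risk, response in risk_register.items():
    print(f"{risk}: {response.value}")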

Here are some typical risks along with some options for managing them.
• Logistics or product quality problems that block tests:
These can be mitigated through careful planning, good defect triage and management, and robust test design.

• Test items that won't install in the test environment:
These can be mitigated through smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous integration. Having a defined uninstall process is a good contingency plan.

• Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments:
These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transference of the risk by escalation to management is often in order.

• Insufficient or unrealistic test environments that yield misleading results:
One option is to transfer the risk to management by explaining the limits on test results obtained in limited environments. Mitigation, sometimes even complete alleviation, can be achieved by outsourcing tests, such as performance tests, that are particularly sensitive to proper test environments.

Here are some additional risks to consider and perhaps to manage:

• Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, bad expectations of what testing can achieve, and complexity of the project team or organization.

• Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract, or failure to properly respond to the issues when they arise.

• Technical problems related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity, and quality problems with the design, the code or the tests.

There may be other risks that apply to your project, and not all projects are subject to the same risks.

Finally, don't forget that test items can also have risks associated with them. For example, there is a risk that the test plan will omit tests for a functional area, or that the test cases do not exercise the critical areas of the system.
