Monday, October 26, 2009

Mercury Quality Centre: Introduction

Mercury Quality Centre is a web-based test management tool. It gives you centralized control over the entire testing life cycle and an easy interface for managing and organizing activities such as requirements coverage, test case management, test execution reporting, defect management, and test automation. All of these activities are provided by a single web-based tool that can be accessed from anywhere, making the work of testers and managers much easier.

Mercury Quality Centre can be divided into two parts:
- Site Administration Bin
- Quality Centre Bin

Site Administration Bin:
This is the starting point for using Mercury Quality Centre and is where all administrative activities are performed. The site administrator password is defined during installation, so make sure you remember it. From this part of Mercury Quality Centre, we generally do the following activities:

- Creating the projects
- Assigning users to the projects
- Creating specific roles
- Configuring QTP or WinRunner scripts for use from Mercury Quality Centre
- Configuring the mail servers
- Verifying licensing information
- Viewing database information

Note: If you are using WinRunner, you need to make sure that the backward-compatibility property of the application is set to true.

Quality Centre Bin:
This part of Mercury Quality Centre provides almost everything that a tester or test manager needs to do in day-to-day activity. It is the interface most commonly used by customers and end users. In this part, we generally do the following activities:

- Creating test plans
- Defining requirements
- Creating test cases
- Creating test labs
- Associating requirements with defects

Mercury Quality Centre is installed as a service in a Microsoft Windows environment. Before you start working with it, make sure the Mercury Quality Centre service is running.

As soon as you access the application, the first screen is a login screen, where you provide the administrator credentials defined during the installation of Mercury Quality Centre. Once you are logged on to the SABin, you can perform all the administrative tasks mentioned above.

Note: If Mercury Quality Centre is listening on the default port, you can access the application using the following URL:

http://{yourmachinename}:8080/sabin/SiteAdmin.htm

Define your projects in SABin. Mercury Quality Centre provides role-based access to projects: for example, a Test Manager can create projects, a Test Lead can prepare test plans, and a Tester can write test cases. This role-based access makes it very easy to control access to the various artifacts of the project and to distribute responsibility among team members. The following four things can be managed in Mercury Quality Centre:
- Requirements
- Test Plan
- Test Lab
- Defects

Once you have created a project in SABin, log on to QCBin with your credentials and access the project you created. Here you will notice separate tabs for Requirements, Test Plan, Test Lab and Defects.
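
The same login-and-connect flow can also be driven programmatically through Quality Centre's OTA (Open Test Architecture) COM interface. Below is a minimal sketch, assuming Windows, the pywin32 package, a registered OTA client, and the QC 9.x ProgID; the server URL, credentials, domain and project name are placeholders, not values from this article:

    import win32com.client

    # ProgID of the QC 9.x OTA client; it must be registered on the machine.
    td = win32com.client.Dispatch("TDApiOle80.TDConnection")
    td.InitConnectionEx("http://qcserver:8080/qcbin")   # placeholder server URL
    td.Login("alice", "secret")                         # placeholder credentials
    td.Connect("DEFAULT", "MyProject")                  # domain/project created in SABin

    print("Connected:", td.Connected)

    td.Disconnect()
    td.Logout()
    td.ReleaseConnection()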

Under the Requirements tab, you can organize the project requirements. You can also create a folder hierarchy to represent the various features in your project; this is accomplished by right-clicking and choosing the appropriate options. After creating the requirements, move on to the next tab, Test Plan.

The Test Plan tab holds the test cases. These test cases can also be mapped to the requirements created in the earlier steps, forming the foundation for traceability metrics. Each requirement can be mapped to one or more test cases.

After creating a new test case, you will see it in the left-hand pane. The right-hand pane has tabs for writing the steps, mapping to requirements, the description, the expected result, and so on. Every test case consists of steps, and for every step you can specify the expected behavior.

The test cases written here can also be linked to QTP or WinRunner scripts. This provides better management of your automation and the capability of executing automation scripts from Mercury Quality Centre itself. Once you are done with test plan preparation, move on to the next tab, Test Lab.

To manage test execution for a specific release, you create a Test Lab. Test Labs can be created per release, so the execution of test cases for that release can be managed very easily. In the Test Lab you identify the set of test cases, already written under the Test Plan, to include for execution.

If the test cases are already linked to the requirements, then after each test cycle the management will be able to trace what requirements have been tested.

When you choose manual test execution, a window opens containing the steps to execute. As you work through them, you mark each step as passed or failed. Mercury Quality Centre also allows parameterized manual test execution, where default parameters such as username and password can be read automatically during the manual run. If a step fails and you raise a defect, it is logged automatically into the defect tracking system of Mercury Quality Centre. Once you are done with test execution, move on to the next tab, Defects.
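
Defects can likewise be raised through the OTA interface. Continuing the sketch above (after td.Connect), with field values that are purely illustrative and must match the project's own lists:

    # Log a defect for a failed manual step via the project's BugFactory.
    bug = td.BugFactory.AddItem(None)   # create a new, empty defect
    bug.Summary = "Step 3: login button unresponsive"
    bug.DetectedBy = "alice"            # must be a valid project user
    bug.Status = "New"
    bug.Priority = "2-Medium"           # illustrative value from the project's list
    bug.Post()                          # commit the defect to the project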

Report generation is one of the most important parts of the test management process. Once you are done with planning and execution, it's REPORTING time. Mercury Quality Centre provides very good reporting features, with certain pre-defined reports and the capability to create your own.

Tuesday, October 20, 2009

Agile Testing

Agile testing is a software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective. In other words, the emphasis is shifted from "testers as quality police" to something more like "entire project team working toward demonstrable quality."

Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
Since working increments of the software are released often in agile software development, there is also a need to test often. This is commonly done by using automated acceptance testing to minimize the amount of manual labor involved. Doing only manual testing in agile development may result in either buggy software or slipping schedules because it may not be possible to test the entire build manually before each release.
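
As an illustration of what such an automated acceptance test can look like, here is a minimal pytest sketch written from the customer's perspective for an invented "a registered user can log in" story; register_user and login are stand-ins for the real system under test, not part of any real library:

    import pytest

    _users = {}

    def register_user(email, password):      # stand-in application code
        _users[email] = password

    def login(email, password):              # stand-in application code
        if _users.get(email) != password:
            raise PermissionError("invalid credentials")
        return {"email": email, "authenticated": True}

    def test_registered_user_can_log_in():
        register_user("alice@example.com", "s3cret")
        session = login("alice@example.com", "s3cret")
        assert session["authenticated"]

    def test_login_is_rejected_with_a_wrong_password():
        register_user("bob@example.com", "s3cret")
        with pytest.raises(PermissionError):
            login("bob@example.com", "wrong")

Because such tests are cheap to run, they can be executed on every build, which is what makes testing "often" feasible.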

Testing Agenda in Agile Projects:
Agile methodologies are designed to break the software down into manageable parts that can be delivered earlier to the customer. The aim of any Agile project is to deliver a basic working product as quickly as possible and then to go through a process of continual improvement. An Agile project is characterized by a large number of short delivery cycles (sprints), with priority given to the feedback loops from one cycle to the next. The feedback loops drive continuous improvement and allow the issues that inevitably occur (including the way requirements have been defined or the quality of the solution provided) to be dealt with much earlier in the development life cycle. To achieve this, companies need to re-think their approach to delivery and have their previously independent contributors (business analysts, developers, testers, end users, etc.) work together in teams.
The testing agenda during the development phase is very different to the traditional approaches that support the Waterfall or V-Model development methodologies because Agile uses a Test Driven Development (TDD) model. However, many of the requirements of the extended testing model will still apply.

Project Initiation Phase:
For an Agile project to succeed, it requires effective planning and management. As part of the project team the Test Manager is responsible for establishing the quality processes, identifying test resources and delivering the test strategy. The test strategy will include details of the Agile development process being used. It will also include details of test phases not directly related to the Agile development for example, Operational Acceptance or Performance Testing.

Development Phase:
True Agile development uses a “Test Driven Development” (TDD) approach. The testers, developers, business analysts and project stakeholders all contribute to kick-off meetings where the “user stories” are selected for the next sprint. Sprints are usually of fixed duration and selecting the right number of stories and estimating the time to deliver them is the key to success. The estimation process can often be inaccurate at the beginning of an Agile project but improves as the team’s experience grows within the specific implementation they are working on. Once the user stories are selected for the sprint they are used as the basis for a set of tests. The testers create test scenarios which are presented to the business analysts and project stakeholders for their approval and signoff. These test scenarios are then broken down to test cases that offer adequate test coverage for the given functionality.
The developers then write code that will pass the tests. In this approach the development and testing take place continuously throughout the sprint – there is no separate testing phase. In the TDD approach the features are tested throughout the sprint and the results presented to the stakeholders for immediate feedback. The test scenarios defined are not limited to functional testing but can include other types of testing including performance and integration testing when the product is mature enough.
While the development is underway the user stories for the next sprint are written. These include the stories specified in the delivery plan but will also include additional stories required to cover any issues that have been identified as part of the feedback process from previous sprints. Sprints in an Agile project can extend to multiple levels in a complex system. A sprint might not lead to a product release if it does not add enough functionality to the product being developed. The stakeholders take a decision on when the application should be moved to the release phase depending on the market need or the level of key functionality being added to the system. While multiple iterations may be required to release a product, there may also be cases where releases are more regular owing to the additional value delivered at each iteration level. Whichever release approach is adopted, the testing team’s goal is to have a release available with minimal defects and low implementation risk at the end of the Sprint.
As functionality grows with each iteration, regression testing must be performed to ensure that existing functionality has not been impacted by the introduction of new functionality in each iteration cycle. Defect fixes should also be followed by extensive regression testing. The scale of the regression testing grows with each sprint and to ensure that this remains a manageable task the test team should use test automation for the regression suite and focus their manual testing effort towards locating new defects during the build phase.
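
One common way to keep that growing regression suite automated (a sketch of mine, not something prescribed by the Agile literature) is to tag regression tests so the whole pack can run on every build while manual effort goes to new features; the marker name and the checkout() stub are invented for illustration:

    import pytest

    def checkout(cart, payment):             # stand-in for earlier-sprint code
        return "confirmed" if cart and payment == "card" else "declined"

    @pytest.mark.regression
    def test_checkout_from_an_earlier_sprint_still_works():
        assert checkout(cart=["book"], payment="card") == "confirmed"

    @pytest.mark.regression
    def test_empty_cart_is_declined():
        assert checkout(cart=[], payment="card") == "declined"

The tagged pack can then run unattended on every build with pytest -m regression (after registering the marker in pytest.ini), for example as a nightly job.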

Release Phase:
The aim of any Agile project is to deliver a basic working product as quickly as possible and then to go through a process of continual improvement. This means that there is no single release phase for a product. The product will move into production when enough of the functionality is working and the quality of the product is acceptable to the stakeholders. Prior to release, a final acceptance test is performed before transitioning the application into production. The testing activities listed above are not exhaustive but broadly cover the areas which the testing team contributes to the Agile approach.

Getting Equipped for Agile Testing:
Agile projects present their own challenges to the testing team. Unclear project scope, multiple iterations, minimal documentation, early and frequent testing needs and active stakeholder involvement all demand special and diverse skills from the testing team. Some of the essential skills are illustrated here:

Resource Management
The Agile approach requires a mixture of test skills, usually held across one team. Test resources will be required to define the scenarios and test cases, conduct manual testing alongside the developers, write automated regression tests, and execute the automated regression packs. As the project progresses, specialist skills will be required to cover further test areas such as integration and performance testing. The availability of a pool of professional test resources offers scope for effective resource management across the Agile project life cycle. There should be an appropriate mix of domain specialists who plan and gather requirements, in addition to test engineers who have multiple skill sets and own the test execution.

Communication
The benefits of independent testing will not be realized unless good communication exists between the testers, developers, business analysts and project stakeholders. The iterative model of development and the frequency of releases demand that all teams have a common understanding of the user requirements. The testing teams should be skilled in the use of change management tools. The communication model used by the testing team must enable both regular and ad-hoc project updates from the various parties engaged in the project. The testing team should adopt the most efficient and effective methods of conveying information to the developers, project stakeholders and domain specialists, using a combination of face-to-face conversation, meetings and workshops, phone calls, email and WebEx meetings.

Processes
Another key success factor for Agile Development is the implementation of quality governance processes such as configuration management. Typically, an organization that has no processes in place will be in chaos. As formal processes are implemented the organization will be able to follow a repeatable and consistent delivery method. Organizations considering the use of Agile should ensure that configuration management, change management, project management and release management are in place. Testing teams which bring with them best practices and are accredited with globally recognized certifications (e.g. TMMi, CMMi, ISO etc.) will be able to help organizations accelerate testing and enable lower cost of quality to be achieved.

Conclusion:
Companies which adopt Agile projects should note the importance of engaging test teams at project initiation. If Agile projects are to achieve customer satisfaction and ROI, then time, cost and quality must be controlled and balanced. To ensure accelerated delivery of working software which conforms to the desired quality, the testing team should be involved from the beginning of every iterative development cycle, not just after the first couple of sprints. The testing team must develop the necessary mindset for an Agile project; their own agility and flexibility are key to their success.

Tuesday, October 6, 2009

Incident Management

Terms:
Incident logging, incident management.

Description:
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as “Help” or installation guides.

Incident reports have the following objectives:

o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
o Provide test leaders with a means of tracking the quality of the system under test and the progress of the testing.
o Provide ideas for test process improvement.

Details of the incident report may include (a data-structure sketch follows the list):

o Date of issue, issuing organization, and author.
o Expected and actual results.
o Identification of the test item (configuration item) and environment.
o Software or system life cycle process in which the incident was observed.
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.
o Scope or degree of impact on stakeholder(s) interests.
o Severity of the impact on the system.
o Urgency/priority to fix.
o Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).
o Conclusions, recommendations and approvals.
o Global issues, such as other areas that may be affected by a change resulting from the incident.
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed.
o References, including the identity of the test case specification that revealed the problem.
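
As an illustration, the details above map naturally onto a record type. A minimal Python sketch with the fields from the list (names and default values are my own rendering):

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class IncidentReport:
        issue_date: date
        issuing_organization: str
        author: str
        test_item: str                      # configuration item and environment
        lifecycle_phase: str                # process in which it was observed
        description: str                    # enough detail to reproduce
        expected_result: str
        actual_result: str
        scope: str                          # impact on stakeholder interests
        severity: str                       # impact on the system
        priority: str                       # urgency to fix
        status: str = "open"                # open, deferred, duplicate, fixed, closed...
        conclusions: str = ""
        global_issues: List[str] = field(default_factory=list)
        change_history: List[str] = field(default_factory=list)
        references: List[str] = field(default_factory=list)  # e.g. test case spec id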

Configuration Management

Configuration management (CM) is a field of management that focuses on establishing and maintaining consistency of a system's or product's performance and its functional and physical attributes with its requirements, design, and operational information throughout its life. For information assurance, CM can be defined as the management of security features and assurances through control of changes made to hardware, software, firmware, documentation, test, test fixtures, and test documentation throughout the life cycle of an information system.

Software configuration management:
The traditional software configuration management (SCM) process is looked upon as the best solution to handling changes in software projects. It identifies the functional and physical attributes of software at various points in time, and performs systematic control of changes to the identified attributes for the purpose of maintaining software integrity and traceability throughout the software development life cycle.

The SCM process further defines the need to trace changes, and the ability to verify that the final delivered software has all of the planned enhancements that are supposed to be included in the release. It identifies four procedures that must be defined for each software project to ensure that a sound SCM process is implemented. They are:

1. Configuration identification
2. Configuration control
3. Configuration status accounting
4. Configuration audits

These terms and definitions change from standard to standard, but are essentially the same.

* Configuration identification is the process of identifying the attributes that define every aspect of a configuration item. A configuration item is a product (hardware and/or software) that has an end-user purpose. These attributes are recorded in configuration documentation and baselined. Baselining an attribute forces formal configuration change control processes to be effected in the event that these attributes are changed.

* Configuration change control is a set of processes and approval stages required to change a configuration item's attributes and to re-baseline them.

* Configuration status accounting is the ability to record and report on the configuration baselines associated with each configuration item at any moment of time.

* Configuration audits are broken into functional and physical configuration audits. They occur either at delivery or at the moment of effecting the change. A functional configuration audit ensures that functional and performance attributes of a configuration item are achieved, while a physical configuration audit ensures that a configuration item is installed in accordance with the requirements of its detailed design documentation.

Configuration management is widely used by many military organizations to manage the technical aspects of complex systems, such as weapon systems, vehicles, and information systems. The discipline combines the capability aspects that these systems provide to an organization with the management of changes to these systems over time.

Outside of the military, CM is equally appropriate to a wide range of fields and industry and commercial sectors.

In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

SCM concerns itself with answering the question "Somebody did something, how can one reproduce it?" Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and of analysing their differences. Traditional configuration management typically focused on controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.
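
To make the "how can one reproduce it?" question concrete, here is a minimal sketch of configuration identification in code: recording a baseline of a source tree as a manifest of content hashes and later diffing against it. Real SCM tools do this far more completely; the "src" path and file names are assumptions for illustration.

    import hashlib
    import json
    from pathlib import Path

    def baseline(tree: str) -> dict:
        """Map each file's relative path to the SHA-256 of its contents."""
        root = Path(tree)
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()
        }

    # Record the baseline once...
    Path("baseline.json").write_text(json.dumps(baseline("src"), indent=2))

    # ...and later compare the current state against it to see what changed.
    old = json.loads(Path("baseline.json").read_text())
    new = baseline("src")
    changed = [f for f in new if old.get(f) != new[f]]
    print("changed since baseline:", changed)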

Purposes
The goals of SCM are generally:

* Configuration identification - Identifying configurations, configuration items and baselines.
* Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
* Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
* Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
* Build management - Managing the process and tools used for builds.
* Process management - Ensuring adherence to the organization's development process.
* Environment management - Managing the software and hardware that host the system.
* Teamwork - Facilitating team interactions related to the process.
* Defect tracking - Making sure every defect has traceability back to the source.

Computer hardware configuration management:

Computer hardware configuration management is the process of creating and maintaining an up-to-date record of all the components of the infrastructure, including related documentation. Its purpose is to show what makes up the infrastructure and illustrate the physical locations and links between each item, which are known as configuration items.

Computer hardware configuration goes beyond the recording of computer hardware for the purpose of asset management, although it can be used to maintain asset information. The extra value provided is the rich source of support information that it provides to all interested parties. This information is typically stored together in a configuration management database (CMDB).

The scope of configuration management is assumed to include, at a minimum, all configuration items used in the provision of live, operational services.

Computer hardware configuration management provides direct control over information technology (IT) assets and improves the ability of the service provider to deliver quality IT services in an economical and effective manner. Configuration management should work closely with change management.

All components of the IT infrastructure should be registered in the CMDB. The responsibilities of configuration management with regard to the CMDB are (a small illustrative sketch follows this list):
# identification
# control
# status accounting
# verification
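
Purely as an illustration (no specific CMDB product is assumed), a configuration item record and a tiny in-memory "CMDB" covering identification and status accounting might look like this:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ConfigurationItem:
        name: str
        category: str                 # e.g. server hardware, OS, application
        version: str
        location: str
        linked_to: List[str] = field(default_factory=list)  # related CIs
        status: str = "live"

    cmdb: Dict[str, ConfigurationItem] = {}

    def register(ci: ConfigurationItem) -> None:
        cmdb[ci.name] = ci            # identification: every CI is recorded

    register(ConfigurationItem("web01", "server hardware", "rev-2", "rack 4"))
    register(ConfigurationItem("web01-os", "operating system", "RHEL 5.3",
                               "rack 4", linked_to=["web01"]))

    # status accounting: report what is registered and how items are linked
    for ci in cmdb.values():
        print(ci.name, ci.version, "->", ci.linked_to)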

The scope of configuration management is assumed to include:

* physical client and server hardware products and versions
* operating system software products and versions
* application development software products and versions
* technical architecture product sets and versions as they are defined and introduced
* live documentation
* networking products and versions
* live application products and versions
* definitions of packages of software releases
* definitions of hardware base configurations
* configuration item standards and definitions

The benefits of computer hardware configuration management are:

* helps to minimize the impact of changes
* provides accurate information on CIs
* improves security by controlling the versions of CIs in use
* facilitates adherence to legal obligations
* helps in financial and expenditure planning

Maintenance systems:
Configuration management is used to maintain an understanding of the status of complex assets with a view to maintaining the highest level of serviceability for the lowest cost. Specifically, it aims to ensure that operations are not disrupted due to the asset (or parts of the asset) overrunning its planned lifespan or falling below quality levels.

In the military, this type of activity is often classed as "mission readiness", and seeks to define which assets are available and for which type of mission; a classic example is whether aircraft onboard an aircraft carrier are equipped with bombs for ground support or missiles for defense.

Preventive maintenance
Understanding the "as is" state of an asset and its major components is an essential element in preventative maintenance as used in maintenance, repair, and overhaul and enterprise asset management systems.

Complex assets such as aircraft, ships and industrial machinery depend on many different components being serviceable. This serviceability is often defined in terms of the amount of usage the component has had since it was new, since fitted, since repaired, the amount of use it has had over its life, and several other limiting factors. Until recent developments in software, understanding how near the end of its life each of these components is was a major undertaking involving labor-intensive record keeping.

Predictive maintenance
Many types of component use electronic sensors to capture data which provides live condition monitoring. This data is analyzed on board or at a remote location by computer to evaluate its current serviceability and increasingly its likely future state using algorithms which predict potential future failures based on previous examples of failure through field experience and modeling. This is the basis for "predictive maintenance".

Availability of accurate and timely data is essential in order for CM to provide operational value and a lack of this can often be a limiting factor. Capturing and disseminating the operating data to the various support organizations is becoming an industry in itself.

The consumers of this data have grown more numerous and complex with the growth of programs offered by original equipment manufacturers (OEMs). These are designed to offer operators guaranteed availability and make the picture more complex with the operator managing the asset but the OEM taking on the liability to ensure its serviceability. In such a situation, individual components within an asset may communicate directly to an analysis center provided by the OEM or an independent analyst.

Standards:
* ANSI/EIA-649-1998 National Consensus Standard for Configuration Management
* EIA-649-A 2004 National Consensus Standard for Configuration Management
* ISO 10007:2003 Quality management systems - Guidelines for configuration management
* Federal Standard 1037C
* GEIA Standard 836-2002 Configuration Management Data Exchange and Interoperability
* IEEE Std. 828-1998 IEEE Standard for Software Configuration Management Plans
* MIL-STD-973 Configuration Management (cancelled on September 20, 2000)
* STANAG 4159 NATO Material Configuration Management Policy and Procedures for Multinational Joint Projects
* STANAG 4427 Introduction of Allied Configuration Management Publications (ACMPs)
* CMMI® for Development, Version 1.2, Configuration Management process area

Guidelines:
* IEEE Std. 1042-1987 IEEE Guide to Software Configuration Management
* MIL-HDBK-61A CONFIGURATION MANAGEMENT GUIDANCE 7 February 2001
* ISO 10007 Quality management - Guidelines for configuration management
* GEIA-HB-649 - Implementation Guide for Configuration Management
* ANSI/EIA-649-1998 National Consensus Standard for Configuration Management
* EIA-836 Consensus Standard for Configuration Management Data Exchange and Interoperability
* ANSI/EIA-632-1998 Processes for Engineering a System

Test Strategy (OR Test Approaches)

A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.

The test strategy describes how the product risks of the stakeholders are mitigated at the test levels, which test types are performed at the test levels, and which entry and exit criteria apply.

The test strategy is created based on development design documents. The system design document is the main one used, and occasionally the conceptual design document may be referred to. The design documents describe the functionality of the software to be enabled in the upcoming release. For every set of design documents, a corresponding test strategy should be created to test the new feature sets.

One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
o Preventative approaches, where tests are designed as early as possible.
o Reactive approaches, where test design comes after the software or system has been produced.

Typical approaches or strategies include:

* Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk.
* Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).
* Methodical approaches, such as failure-based (including error guessing and fault-attacks), experience-based, check-list based, and quality characteristic based.
* Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.
* Dynamic and heuristic approaches, such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
* Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
* Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.
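
To make the analytical, risk-based approach concrete, the sketch below scores invented product areas by likelihood and impact of failure and orders them so testing effort is directed to the areas of greatest risk first; the areas and scores are purely illustrative.

    # risk = likelihood x impact, each scored 1 (low) to 5 (high)
    areas = {
        "payment processing": (4, 5),
        "report generation":  (3, 2),
        "user preferences":   (2, 1),
    }

    by_risk = sorted(areas.items(),
                     key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for area, (likelihood, impact) in by_risk:
        print(f"{area}: risk = {likelihood * impact}")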

The selection of a test approach should consider the context, including:

* Risk of failure of the project, hazards to the product, and risks of product failure to humans, the environment and the company.
* Skills and experience of the people in the proposed techniques, tools and methods.
* The objective of the testing endeavour and the mission of the testing team.
* Regulatory aspects, such as external and internal regulations for the development process.
* The nature of the product and the business.

Test Levels

The test strategy describes the test levels to be performed. There are primarily three levels of testing: unit testing, integration testing, and system testing. In most software development organizations, the developers are responsible for unit testing, while individual testers or test teams are responsible for integration and system testing.

Roles and Responsibilities


The roles and responsibilities of the test leader, the individual testers, and the project manager are to be clearly defined at the project level in this section. Names need not be associated with the roles, but each role has to be very clearly defined.

Testing strategies should be reviewed by the developers. They should also be reviewed by test leads for all levels of testing to make sure the coverage is complete but not overlapping. Both the testing manager and the development managers should approve the test strategy before testing begins.

Environment Requirements

Environment requirements are an important part of the test strategy. This section describes which operating systems are used for testing and clearly states the necessary OS patch levels and security updates. For example, a test plan may require Service Pack 2 on Windows XP as a prerequisite for testing.

Testing Tools


There are two methods of executing test cases: manual and automated. Depending on the nature of the testing, a combination of manual and automated testing is usually the most effective approach. The planner should find appropriate automation tools to reduce total testing time.

Risks and Mitigation


Any risks that may affect the testing process must be listed along with their mitigations. By documenting the risks here, we can anticipate them well ahead of time and proactively prevent them from occurring. Sample risks are dependency on the completion of coding done by sub-contractors, or the capability of the testing tools.

Test Schedule


A test plan should include an estimate of how long it will take to complete the testing phase. There are many requirements for completing a testing phase. First, testers have to execute all test cases at least once. Furthermore, if a defect is found, the developers will need to fix the problem, and the testers should then re-test the failed test case until it functions correctly. Last but not least, the testers need to conduct regression testing towards the end of the cycle to make sure the developers did not accidentally break other parts of the software while fixing one part; this can happen to test cases that were previously functioning properly.

The test schedule should also document the number of testers available for testing. If possible, assign test cases to each tester.

It is often difficult to make an accurate approximation of the test schedule, since the testing phase involves many uncertainties. Planners should take into account the extra time needed to accommodate contingent issues. One way to make this approximation is to look at the time needed by previous releases of the software. If the software is new, multiplying the initial testing schedule approximation by two is a good way to start.
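
Stated as code, that heuristic is a tiny calculation; the numbers below are purely illustrative.

    def estimate_test_days(base_days, new_product=False):
        """Start from history (or an initial approximation for new software)."""
        estimate = base_days
        if new_product:
            estimate *= 2   # no history to calibrate against, so be generous
        return estimate

    print(estimate_test_days(15))                    # mature product: 15 days
    print(estimate_test_days(15, new_product=True))  # new product: 30 days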

Regression Test Approach


When a particular problem is identified, the program is debugged and a fix is applied to it. To make sure the fix works, the program is tested again against those criteria. Regression testing makes sure that one fix does not create other problems in that program or in any other interface. So, a set of related test cases may have to be repeated to make sure that nothing else is affected by a particular fix. How this is going to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one unit, all unit test cases for that unit are repeated to achieve a higher level of quality.

Test Groups

From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is one functional group and anything related to report generation is another. In the same way, we have to identify the test groups based on the functionality aspect.

Test Priorities

Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated, and they may be mapped to the test groups as well.

Test Status Collections and Reporting

When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know this, the inputs from the individual testers must come to the test leader: which test cases were executed, how long they took, how many passed and how many failed, etc. How often the status is collected must also be clearly stated; some companies have a practice of collecting the status on a daily or weekly basis.
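
As a small illustration, the sketch below aggregates invented per-tester execution records into the counts a test leader would roll up.

    from collections import Counter

    executions = [
        # (tester, test case, result, minutes taken) - illustrative records
        ("asha", "TC-101", "pass", 12),
        ("asha", "TC-102", "fail", 30),
        ("ravi", "TC-103", "pass", 8),
    ]

    results = Counter(result for _, _, result, _ in executions)
    total_minutes = sum(minutes for *_, minutes in executions)
    print(f"executed={len(executions)} passed={results['pass']} "
          f"failed={results['fail']} time={total_minutes}min")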

Test Records Maintenance


When the test cases are executed, we need to keep track of the execution details: when each case was executed, who executed it, how long it took, and what the result was. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories. The naming convention for the documents and files must also be mentioned.

Requirements Traceability Matrix


Ideally, each piece of software developed must satisfy its set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, the LLD, source code, unit test cases, integration test cases and system test cases. In a Requirements Traceability Matrix, the rows hold the requirements and there is a separate column for every document (HLD, LLD, etc.). Each cell states which section of that document addresses the particular requirement, as in the illustrative sample below. If every requirement is addressed in every single document, all the cells contain valid section IDs or names, and we know that every requirement is covered. If a requirement is missed in a document, we go back to that document and correct it so that it addresses the requirement.

Requirement | HLD | LLD | Unit TC | Integration TC | System TC   (sample values for illustration)
REQ-1       | 3.1 | 4.2 | UT-07   | IT-03          | ST-01
REQ-2       | 3.2 | 4.5 | UT-09   | IT-04          | ST-02
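
The completeness check described above can also be automated. A minimal sketch with invented requirement IDs and section numbers:

    documents = ["HLD", "LLD", "Unit TC", "System TC"]
    matrix = {
        "REQ-1": {"HLD": "3.1", "LLD": "4.2", "Unit TC": "UT-07", "System TC": "ST-01"},
        "REQ-2": {"HLD": "3.2", "LLD": "",    "Unit TC": "UT-09", "System TC": "ST-02"},
    }

    # Flag every empty cell: a requirement not yet addressed in some document.
    for req, cells in matrix.items():
        missing = [doc for doc in documents if not cells.get(doc)]
        if missing:
            print(f"{req} is not addressed in: {', '.join(missing)}")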

Test Summary

The senior management may like to have a test summary on a weekly or monthly basis; if the project is very critical, they may need it on a daily basis. This section must address what kind of test summary reports will be produced for the senior management, and how frequently.
The test strategy must give a clear vision of what the testing team will do for the whole project, for its entire duration. This document may also be presented to the client if needed. The person who prepares it must be functionally strong in the product domain and well experienced, as this is the document that is going to drive the entire team's testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.

Monday, October 5, 2009

Volume Testing

Definition:
Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you expand your database to that size and then test the application's performance on it. Another example is when your application is required to interact with an interface file (any file type, such as .dat or .xml); this interaction could be reading from and/or writing to the file. You create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.

Objective:
o Find problems with maximum amounts of data.
o System performance or usability often degrades when large amounts of data must be searched, ordered, etc.

Test Procedure:
• The system is run with maximum amounts of data.
• Internal tables, databases, files, disks, etc. are loaded with a maximum of data.
• Maximal length of external input.
• Important functions where data volume may lead to trouble.

Result wanted:
• No problems, no significant performance degradation, and no lost data.

Considerations:
• Data generation may need analysis of a usage profile and may not be trivial. (Same as in stress testing.)
• Copy of production data or random generation.
• Use data generation or extraction tools.
• Data variation is important!
• Memory fragmentation is important!

A volume test checks whether there are any problems when running the system under test with realistic amounts of data, or even the maximum or more. A volume test is necessary, as ordinary function testing normally does not use large amounts of data, rather the opposite.

A special task is to work out the real maximum amounts of data that are possible in extreme situations, for example on days with extremely large amounts of processing to be done (new year, campaigns, tax deadlines, disasters, etc.). Typical problems are full or nearly full disks, databases, files and buffers, and counters that may overflow. Maximal data amounts in communications may also be a concern.

Part of the test is to run the system over a certain time with a lot of data. This is in order to check what happens to temporary buffers, and to timeouts caused by long access times.

One variant of this test is using especially low volumes, such as empty databases or files, empty mails, no links etc. Some programs cannot handle this either.

One last variant is measuring how much space is needed by a program. This is important if a program is sharing resources with other ones. All programs taken together must not use more resources than available.

Examples:


Online system: Input fast, but not necessarily as fast as possible, from different input channels. This is done for some time in order to check whether temporary buffers tend to overflow or fill up, and whether execution time degrades. Use a blend of create, update, read and delete operations.

Database system: The database should be very large, with every object occurring with its maximum number of instances. Batch jobs are run with large numbers of transactions, for example where something must be done for ALL objects in the database. Run complex searches with sorting through many tables. Many or all objects are linked to other objects, up to the maximum number of such objects. Use large or the largest possible numbers in sum fields.
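
As a self-contained illustration of such a database volume test, the sketch below uses SQLite from Python's standard library to load a million rows and time a search with sorting; a real volume test would target the production DBMS with realistic data, and the schema and row count here are illustrative.

    import sqlite3
    import time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE booking (id INTEGER PRIMARY KEY, name TEXT, fare REAL)")
    con.executemany(
        "INSERT INTO booking (name, fare) VALUES (?, ?)",
        ((f"passenger-{i}", float(i % 500)) for i in range(1_000_000)),
    )
    con.commit()

    # Time a search with sorting over the full data volume.
    start = time.perf_counter()
    rows = con.execute(
        "SELECT name, fare FROM booking WHERE fare > 400 ORDER BY fare DESC LIMIT 10"
    ).fetchall()
    print(f"query over 1M rows took {time.perf_counter() - start:.3f}s, top: {rows[0]}")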

File exchange: Especially long files. Maximal lengths. Lengths longer than typical maximum values in communication protocols (1, 2, 4, 8, ... megabytes and gigabytes), for example lengths that are not supported by mail protocols. Also especially MANY files, even in combination with large lengths (1024, 2048, etc. files). Email with the maximum number of attached files. Lengths of files that make input buffers overflow or trigger timeouts. Large lengths in general, in order to trigger timeouts in communications.

Disk space: Try to fill the disk space everywhere there are disks. Check what happens if there is no more space left and even more data is fed into the system. Is there any kind of reserve, like overflow buffers? Are there any alarm signals or graceful degradation? Will there be reasonable warnings? Data loss? This can be tested by making less space available and testing with smaller volumes.

File system: Maximal numbers of files for the file system and/or maximum lengths.

Internal memory: Minimum amount of memory available (installed). Open many programs at the same time, at least on the client platform.

General points to check:

• Check error messages and warnings: do they appear at all for volume problems, and are they helpful and understandable?
• Is data lost?
• Does the system slow down too much?
• Do timeouts happen? In that case failures may also happen.
• If it looks like everything goes fine, are really ALL data processed or stored? Even the end of files, objects or tables?
• Are data stored wrongly?
• Are data lost or overwritten without warning?

How to find test conditions:

Test conditions are data that could turn into a problem:

Find which input data and output data occur in an application or function. Find restrictions on the number of data, especially maximum and minimum. Especially include data which are stored temporarily or are read from temporary storage.

For every data element, check whether larger volumes than allowed can occur. Check what happens if data elements are counted: can the maxima go out of bounds? What about sums, if many objects are summed up? Can that cause out-of-bounds values?

If a data element is stored temporarily, how much space is necessary? If there are many such elements, can the space be too little?

If there is any form of numbering or coding, are there restrictions in it that can preclude growth? For example, a two-character code field allows no more than 26*26 possible codes.

Find boundaries on system-wide data, for example maximum disk volume, maximum number of files for the file system, maximal file lengths, buffer lengths, etc., and look whether any of them can turn into a problem.

Find restrictions in temporary storage media like maximal length of mail, CD, DVD, tape etc.

Find restrictions in communications, for example timeouts and maximal lengths of messages, files, etc. Check how much data can be transferred before a timeout occurs.

Find places where data is stored temporarily. Find the functions storing data there, and the functions reading and removing these data. Make test scenarios (soap operas) going through both storing and deleting. Try to find out whether the place can fill up, and whether overflow problems occur. This requires long scenarios, maybe even randomly generated function calls. Check whether an audit trail may lead to problems after logging many transactions.

Can volume testing be left out?
The precondition for leaving out the volume test is that volume questions have been checked in earlier tests and answered sufficiently well. This means volume is tested or checked in lower-level tests or reviews, and the results can be verified.

Be cautious when integrating several independent systems to be executed on the same platform: it must be guaranteed that every system has enough resources for itself. If one cannot guarantee that the platform delivers the necessary resources, in this case all kinds of memory, then a volume test of all systems together should be executed with maximal data volumes for every system.

Checklist:
If at least one question is answered with NO, a volume test is of interest.

• Has a volume test been executed before, on the whole system, and can the result be checked?
• Can we guarantee that the system always has the necessary memory resources?
• Can we guarantee this if several systems share the hardware?
• Is it guaranteed that no larger data volumes than specified will occur?
• Is the risk low if data volumes turn out greater than specified but the system does not work well enough then?

Problems when generating data (a small generation sketch follows this list):

• Fragmentation of memory is difficult to generate.
• Relational integrity of generated data.
• Dynamic generation of keys.
• Data should follow the usage profile.
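
A minimal sketch addressing two of these points, relational integrity and the usage profile, with an invented customer/order schema; the proportions and row counts are illustrative.

    import csv
    import random

    random.seed(42)                      # reproducible data sets

    # Generate parent keys first so child rows keep relational integrity.
    customers = [(cid, f"customer-{cid}") for cid in range(1, 10_001)]

    # Usage profile: a few heavy users create most of the orders.
    heavy = [cid for cid, _ in customers[:500]]
    orders = []
    for oid in range(1, 100_001):
        cid = (random.choice(heavy) if random.random() < 0.8
               else random.choice(customers)[0])
        orders.append((oid, cid, round(random.uniform(5, 500), 2)))

    with open("orders.csv", "w", newline="") as f:
        csv.writer(f).writerows(orders)  # every order references a valid customer key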