Tuesday, July 15, 2008

Heuristics of Software Testing

Testability
Software testability is how easily, completely and conveniently a computer program can be tested.
Software engineers design a computer product, system or program with its testability in mind. Good programmers are willing to do things that help the testing process, and a checklist of possible design points, features and so on can be useful when negotiating with them.
Here are the two main heuristics of software testing.
1. Visibility
2. Control

1.Visibility
Visibility is our ability to observe the states and outputs of the software under test. Features that improve visibility include:
• Access to Code
Developers must provide full access (source code, infrastructure, etc.) to testers. The code, change records and design documents should be provided to the testing team, and the testing team should read and understand the code.
• Event logging
The events to log include user events, system milestones, error handling and completed transactions. The logs may be stored in files, in ring buffers in memory, and/or sent to serial ports. Things to log include a description of the event, a timestamp, the subsystem, resource usage and the severity of the event. Logging should be adjustable by subsystem and type. Log files report internal errors, help in isolating defects, and give useful information about context, tests, customer usage and test coverage.
The more readable the Log Reports are, the easier it becomes to identify the defect cause and work towards corrective measures.
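As an illustration of the kind of event logging described above, here is a minimal sketch using Python's standard logging module; the subsystem names and logged events are invented for the example.

import logging

# One logger per subsystem makes it easy to adjust logging by subsystem and type.
def make_logger(subsystem, level=logging.INFO):
    logger = logging.getLogger(subsystem)
    logger.setLevel(level)
    if not logger.handlers:
        handler = logging.FileHandler(f"{subsystem}.log")
        handler.setFormatter(logging.Formatter(
            "%(asctime)s | %(name)s | %(levelname)s | %(message)s"))
        logger.addHandler(handler)
    return logger

ui_log = make_logger("ui")          # user events
db_log = make_logger("database")    # system milestones, completed transactions

# Each record carries a description, a timestamp (added by the formatter),
# the subsystem name and the severity of the event.
ui_log.info("user clicked 'Save' on the order form")
db_log.info("transaction 4711 committed in 12 ms")
db_log.error("connection pool exhausted; retrying")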
• Error detection mechanisms
Data integrity checking and system-level error detection (e.g. Microsoft Appviewer) are useful here. In addition, assertions and probes with the following features are particularly helpful:
- Code is added to detect internal errors.
- Assertions abort on error.
- Probes log errors.
- Design by Contract theory: this technique requires that assertions be defined for functions. Preconditions apply to inputs, and violations implicate the calling function; post-conditions apply to outputs, and violations implicate the called function. This effectively solves the oracle problem for testing.
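As a concrete sketch of the assertion and probe ideas above, the function below checks a precondition on its inputs and a postcondition on its output; the discount function and its contract are invented purely for illustration.

def apply_discount(price, percent):
    # Precondition: violations implicate the calling code.
    assert price >= 0, "precondition failed: price must be non-negative"
    assert 0 <= percent <= 100, "precondition failed: percent must be in [0, 100]"

    result = price * (100 - percent) / 100.0

    # Postcondition: violations implicate this (the called) function.
    assert 0 <= result <= price, "postcondition failed: discounted price out of range"
    return result

# A probe-style variant logs the violation instead of aborting:
def probed_apply_discount(price, percent, error_log):
    try:
        return apply_discount(price, percent)
    except AssertionError as err:
        error_log.append(str(err))   # record the internal error and keep running
        return None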

• Resource Monitoring
Memory usage should be monitored to find memory leaks. The states of running methods, threads or processes should be watched (profiling interfaces may be used for this). In addition, configuration values should be dumped. Resource monitoring is of particular concern in applications where the real-time load on the application is expected to be considerable.
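A minimal sketch of memory monitoring using Python's built-in tracemalloc module; the leaking function is a made-up stand-in for real application code.

import tracemalloc

_cache = []

def leaky_operation():
    # Stand-in for application code that accidentally retains memory.
    _cache.append(bytearray(1024 * 1024))  # 1 MB kept alive per call

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(10):
    leaky_operation()

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)   # top allocation growth, pointing at the leaking line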

2.Control
Control refers to our ability to provide inputs and reach states in the software under test.
The features to improve controllability are:
• Test Points
Test points allow data to be inspected, inserted or modified at particular points in the software. They are especially useful for dataflow applications. In addition, a pipes-and-filters architecture provides many opportunities for test points.
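The following sketch shows a toy pipes-and-filters pipeline with a test point between two filters; all of the names are invented, and a real dataflow system would expose similar hooks in its own terms.

def strip_filter(lines):
    for line in lines:
        yield line.strip()

def upper_filter(lines):
    for line in lines:
        yield line.upper()

def test_point(lines, observer=None, injector=None):
    # Observer lets a tester inspect data between filters;
    # injector lets a tester replace it with prepared test data.
    if injector is not None:
        lines = injector
    for line in lines:
        if observer is not None:
            observer(line)
        yield line

seen = []
pipeline = upper_filter(
    test_point(strip_filter(["  hello ", " world "]), observer=seen.append))
print(list(pipeline))   # ['HELLO', 'WORLD']
print(seen)             # data captured at the test point: ['hello', 'world']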
• Custom User Interface controls
Custom UI controls often raise serious testability problems with GUI test drivers. Ensuring testability usually requires:
- Adding methods to report necessary information.
- Customizing test tools to make use of these methods.
- Getting a tool expert to advise developers on testability and to build the required support.
- Asking third-party control vendors about support by test tools.

• Test Interfaces
Interfaces may be provided specifically for testing (e.g. Excel and Xconq).
Existing interfaces may be able to support significant testing (e.g. InstallShield, AutoCAD, Tivoli).
• Fault injection
Error seeding, i.e. instrumenting low-level I/O code to simulate errors, makes it much easier to test error handling. Faults can be injected at both the system and the application level.
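A hedged sketch of application-level error seeding: a low-level read function is wrapped so that tests can force I/O failures. The wrapper, the failure_rate parameter and the retry logic are assumptions made for illustration.

import random

def read_record(path):
    # Stand-in for real low-level I/O code.
    with open(path, "rb") as f:
        return f.read()

def seeded_read_record(path, failure_rate=0.3, rng=random.random):
    # Error seeding: simulate I/O errors so that error-handling paths get exercised.
    if rng() < failure_rate:
        raise IOError(f"injected fault while reading {path}")
    return read_record(path)

def load_with_retry(path, attempts=3, reader=seeded_read_record):
    for attempt in range(attempts):
        try:
            return reader(path)
        except IOError:
            continue   # the error-handling code under test
    return None

In a real test run the seeded reader would be swapped in for the normal one, so that retry and recovery logic can be exercised deterministically.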
• Installation and setup
Testers should be notified when installation has completed successfully. They should be able to verify installation, programmatically create sample records and run multiple clients, daemons or servers on a single machine.


A BROADER VIEW

Below is a broader set of characteristics (commonly known as the James Bach heuristics) that lead to testable software.


Categories of Heuristics of software testing


•Operability
The better it works, the more efficiently it can be tested.
The system should have few bugs, no bugs should block the execution of tests and the product should evolve in functional stages (simultaneous development and testing).
•Observability
What we see is what we test.
- Distinct output should be generated for each input.
- Current and past system states and variables should be visible during testing.
- All factors affecting the output should be visible.
- Incorrect output should be easily identified.
- Source code should be easily accessible.
- Internal errors should be automatically detected (through self-testing mechanisms) and reported.

•Controllability
The better we control the software, the more the testing process can be automated and optimized.
Check that
- All outputs can be generated and all code can be executed through some combination of inputs.
- Software and hardware states can be controlled directly by the test engineer.
- Input and output formats are consistent and structured.
- Tests can be conveniently specified, automated and reproduced.
•Decomposability
By controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing.
The software system should be built from independent modules which can be tested independently.
•Simplicity
The less there is to test, the more quickly we can test it.
The points to consider in this regard are functional (e.g. minimum set of features), structural (e.g. architecture is modularized) and code (e.g. a coding standard is adopted) simplicity.
•Stability
The fewer the changes, the fewer are the disruptions to testing.
Changes to the software should be infrequent, controlled, and should not invalidate existing tests. The software should recover well from failures.
•Understandability
The more information we have, the smarter we test.
The testers should be able to understand well the design, changes to the design and the dependencies between internal, external and shared components.
Technical documentation should be instantly accessible, accurate, well organized, specific and detailed.
•Suitability
The more we know about the intended use of the software, the better we can organize our testing to find important bugs.

The above heuristics can be used by a software engineer to develop a software configuration (i.e. program, data and documentation) that is convenient to test and verify.

Monday, July 14, 2008

Testing artifacts

The software testing process can produce several artifacts.

Test case

A test case is a software testing document which consists of an event, action, input, output, expected result and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. Optional fields include a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
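As an illustration only, the fields listed above map naturally onto a simple record; the dataclass below is one possible layout, not a prescribed format.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    test_case_id: str
    related_requirements: List[str]
    prerequisites: List[str]
    steps: List[str]                  # often kept in a separate test procedure
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None   # filled in when the test is executed
    automatable: bool = False
    automated: bool = False

tc = TestCase(
    test_case_id="TC-042",
    related_requirements=["REQ-7"],
    prerequisites=["user account exists"],
    steps=["open login page", "enter valid credentials", "press Login"],
    input_data="user=jdoe, password=secret",
    expected_result="dashboard page is displayed",
)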


Test script


A test script is the combination of a test case, a test procedure and test data. Initially the term was derived from the work products created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
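A minimal automated test script combining a test case, a test procedure and test data, written with Python's unittest module; the login function under test is a hypothetical stand-in.

import unittest

def login(username, password):
    # Hypothetical unit under test.
    return username == "jdoe" and password == "secret"

class LoginTestScript(unittest.TestCase):
    # Test data: (input, expected result) pairs drawn from the test case.
    cases = [
        (("jdoe", "secret"), True),
        (("jdoe", "wrong"), False),
        (("", ""), False),
    ]

    def test_login(self):
        # Test procedure: apply each input and compare against the expected result.
        for args, expected in self.cases:
            self.assertEqual(login(*args), expected)

if __name__ == "__main__":
    unittest.main()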

Test suite

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test plan

A test specification is called a test plan. The developers are made aware of which test plans will be executed, and this information is available to them in advance. This makes the developers more careful when developing their code and ensures that their code is not subjected to any surprise test cases or test plans.

Test harness

The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.

Test harnesses allow for the automation of tests. They can call functions with supplied parameters and compare the results to the desired values. The test harness is a hook into the developed code that can be exercised through an automation framework.

A test harness should allow specific tests to run (this helps in optimising), orchestrate a runtime environment, and provide a capability to analyse results.
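A toy version of the two parts named above (a test script repository and a test execution engine) might look like the following sketch; it is a simplification for illustration, not a real framework, and it also shows how a harness can allow specific tests to run.

# Test script repository: named tests mapped to (callable, args, expected result).
def add(a, b):
    return a + b

repository = {
    "add_small_numbers": (add, (2, 3), 5),
    "add_negatives":     (add, (-1, -1), -2),
}

# Test execution engine: runs selected tests, compares results, reports.
def run(selected=None):
    results = {}
    for name, (func, args, expected) in repository.items():
        if selected and name not in selected:
            continue          # allows specific tests to run
        try:
            actual = func(*args)
            results[name] = "PASS" if actual == expected else f"FAIL (got {actual})"
        except Exception as err:
            results[name] = f"ERROR ({err})"
    return results

for name, outcome in run().items():
    print(f"{name}: {outcome}")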

The typical objectives of a test harness are to:

* Automate the testing process.
* Execute test suites of test cases.
* Generate associated test reports.

A test harness typically provides the following benefits:

* Increased productivity due to automation of the testing process.
* Increased probability that regression testing will occur.
* Increased quality of software components and application.

Test Report

A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in a way that any person who has not worked on the project would also get a good overview of the testing effort.

Contents of a Test Report

The contents of a test report are as follows:

Executive Summary
Overview
   Application Overview
   Testing Scope
Test Details
   Test Approach
   Types of testing conducted
   Test Environment
   Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

These sections are explained as follows:

Executive Summary

This section comprises general information regarding the project, the client, the application, the tools and the people involved, presented in such a way that it can be taken as a summary of the Test Report itself; all the topics mentioned here are elaborated in the various sections of the report.

1. Overview


This comprises two sections: Application Overview and Testing Scope.

Application Overview – This would include detailed information on the application under test, the end users and a brief outline of the functionality as well.

Testing Scope – This would clearly outline the areas of the application that would / would not be tested by the QA team. This is done so that there would not be any misunderstandings between customer and QA as regards what needs to be tested and what does not need to be tested.
This section would also contain information of Operating System / Browser combinations if Compatibility testing is included in the testing effort.

2.Test Details

This section would contain the Test Approach, Types of Testing conducted, Test Environment and Tools Used.

Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between Onsite and Offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client etc.

Types of testing conducted – This section would mention any specific types of testing performed (i.e.) Functional, Compatibility, Performance, Usability etc along with related specifications.

Test Environment – This would contain information on the Hardware and Software requirements for the project (i.e.) server configuration, client machine configuration, specific software installations required etc.

Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.


3.Metrics


This section would include details on the total number of test cases executed in the course of the project, the number of defects found, etc. Calculations like defects found per test case or the number of test cases executed per day per person would also be entered in this section. These figures can be used to gauge the efficiency of the testing effort.
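For example, the calculations mentioned above reduce to a few simple ratios; the numbers below are invented.

test_cases_executed = 480
defects_found = 96
testers = 4
working_days = 20

defects_per_test_case = defects_found / test_cases_executed
cases_per_day_per_person = test_cases_executed / (testers * working_days)

print(f"Defects per test case: {defects_per_test_case:.2f}")              # 0.20
print(f"Test cases per day per person: {cases_per_day_per_person:.1f}")   # 6.0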

4.Test Results

This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated and depicted in this section, for example defects per build, defects by severity, or defects by status (how many were fixed and how many rejected).



5.Test Deliverables

This section would include links to the various documents prepared in the course of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release Report etc.

6.Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team so that they can be taken care of in the next release of the application.

TEST PLAN

1.What is a Test Plan?
A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
The main purpose of preparing a Test Plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement on the contents of the test plan, which also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Purpose of preparing a Test Plan

A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a software product.
The completed document will help people outside the test group understand the 'why' and 'how' of product validation.
It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.

Contents of a Test Plan
1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
a. Procedures
b. Templates
c. Standards/Guidelines
14. Annexure
15. Sign-Off

2. Contents (in detail)

Purpose
This section should contain the purpose of preparing the test plan.

Scope
This section should talk about the areas of the application which are to be tested by the QA team and specify those areas which are definitely out of scope (screens, database, mainframe processes etc).

Test Approach
This would contain details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).


Entry Criteria

This section explains the various steps to be performed before the start of a test (i.e.) pre-requisites. For example: Timely environment set up, starting the web server / app server, successful implementation of the latest build etc.

Resources
This section should list out the people who would be involved in the project and their designation etc.

Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the various members in the project.


Exit criteria

Contains tasks like bringing down the system / server, restoring system to pre-test environment, database refresh etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.


Hardware / Software Requirements

This section would contain the details of PC’s / servers required (with the configuration) to install the application or perform the testing; specific software that needs to be installed on the systems to get the application running or to connect to the database; connectivity related issues etc.

Risks & Mitigation Plans
This section should list all the possible risks that can arise during the testing and the mitigation plans that the QA team plans to implement in case a risk actually materializes.

Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project (e.g.) WinRunner, Test Director, PCOM, WinSQL.

Deliverables
This section contains the various deliverables that are due to the client at various points of time (i.e.) daily, weekly, start of the project, end of the project etc. These could include Test Plans, Test Procedure, Test Matrices, Status Reports, Test Scripts etc. Templates for all these could also be attached.

References
Procedures
Templates (Client Specific or otherwise)
Standards / Guidelines (e.g.) QView
Project related documents (RSD, ADD, FSD etc)

Annexure
This could contain embedded documents or links to documents which have been / will be used in the course of testing (e.g.) templates used for reports, test cases etc. Referenced documents can also be attached here.

Sign-Off
This should contain the mutual agreement between the client and the QA team with both leads / managers signing off their agreement on the Test Plan.

Tuesday, July 8, 2008

Specific Field Tests

1.Date Field Checks
1. Assure that leap years are validated correctly & do not cause errors/miscalculations.
2. Assure that month code 00 and 13 are validated correctly & do not cause errors/miscalculations.
3. Assure that month values 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly & does not cause errors/ miscalculations.
8. Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations.
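The date checks above can be exercised against a small validator such as the following sketch; the validator itself is an assumed implementation, shown only to make the checks concrete.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_month(year, month):
    if month == 2:
        return 29 if is_leap_year(year) else 28
    return 30 if month in (4, 6, 9, 11) else 31

def validate_date(year, month, day):
    # Month codes 00 and 13 and day values 00 and 32 must be rejected, not miscalculated.
    if not 1 <= month <= 12:
        return False
    if not 1 <= day <= days_in_month(year, month):
        return False
    return True

assert validate_date(2008, 2, 29)        # leap year handled correctly
assert not validate_date(2007, 2, 29)    # Feb 29 invalid in a non-leap year
assert not validate_date(2008, 2, 30)    # Feb 30 reported as an error
assert not validate_date(2008, 13, 1)    # month 13 rejected
assert not validate_date(2008, 0, 1)     # month 00 rejected
assert not validate_date(2008, 1, 32)    # day 32 rejected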

2.Numeric Fields
1. Assure that lowest and highest values are handled correctly.
2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported as an error.
5. Assure that fields with a blank in the last position are processed or reported as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.
8. Include value zero in all calculations.
9. Include at least one in-range value.
10. Include maximum and minimum range values.
11. Include out of range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.
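One quick way to cover several of the numeric checks above is to generate boundary and out-of-range values around the allowed range; the range 1 to 100 and the validator below are arbitrary examples.

def boundary_values(minimum, maximum):
    # Minimum, maximum, just inside, just outside, zero and an in-range value.
    return sorted({minimum - 1, minimum, minimum + 1, 0,
                   (minimum + maximum) // 2,
                   maximum - 1, maximum, maximum + 1})

def accepts(value, minimum=1, maximum=100):
    # Hypothetical field validator under test.
    return minimum <= value <= maximum

for value in boundary_values(1, 100):
    print(value, "accepted" if accepts(value) else "rejected")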

3.Alpha Field Checks
1. Use blank and non-blank data.
2. Include lowest and highest values.
3. Include invalid characters & symbols.
4. Include valid characters.
5. Include data items with first position blank.
6. Include data items with last position blank.

Screen Validation Checklist

1.Aesthetic Conditions:
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

2.Validation Conditions:
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries, which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field, which initially was mandatory, has become optional then check whether null values are allowed in this field.)

3.Navigation Conditions:
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e.) Is the user prevented from accessing other functions when this screen is active and is this correct?
7. Can a number of instances of this screen be opened at the same time and is this correct?

4.Usability Conditions:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated and should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tab's to another application does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30-character field should be noticeably longer than a shorter field.


5.Data Integrity Conditions:

1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters?
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions, which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding of numeric values.


6.Modes (Editable Read-only) Conditions:

1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

7.General Conditions:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence, which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure proper usage of the escape key, which should undo any changes that have been made and generate a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window or a particular dialog box are present, i.e. make sure they don't act on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button, which makes sense according to the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence, which traverses the screens, does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be no need to scroll
38. Errors on continue will cause user to be returned to the tab and the focus should be on the field causing the error. (i.e the tab is opened, highlighting the field with the error on it)
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate), generating "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open

Windows Compliance Testing

1.Application
Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon. No login is necessary. The main window of the application should have the same caption as the caption of the icon in Program Manager. Closing the application should result in an "Are you sure?" message box. Attempt to start the application twice; this should not be allowed - you should be returned to the main window. Try to start the application twice as it is loading. On each window, if the application is busy, then the hour glass should be displayed; if there is no hour glass, then some 'enquiry in progress' message should be displayed. All screens should have a Help button, and the F1 key should work the same.

If the window has a Minimize button, click it. The window should return to an icon on the bottom of the screen, and this icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size. The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially at the top of the screen. Check whether the title of the window makes sense. If the screen has a Control menu, then use all un-grayed options.

Check all text on window for Spelling/Tense and Grammar.
Use TAB to move focus around the window. Use SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field. The text in the Micro Help line should change - check it for spelling, clarity, non-updateable fields etc. If a field is disabled (grayed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every grayed control.

Never-updateable fields should be displayed with black text on a gray background with a black label. All text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status. List boxes always have a white background with black text whether they are disabled or not; all others are gray.

In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard. All tab buttons should have a distinct letter.

2.Text Boxes
Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar; if it doesn't, then the text in the box should be gray or non-updateable (refer to the previous section). Enter text into the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws. Enter invalid characters - letters in amount fields; try strange characters like +, -, * etc. in all fields. SHIFT and arrow keys should select characters. Selection should also be possible with the mouse. Double clicking should select all text in the box.

3.Option (Radio Buttons)
Left and Right arrows should move 'ON' Selection. So should Up and Down. Select with mouse by clicking.

4.Check Boxes

Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same.

5.Command Buttons
If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots. All buttons except for OK and Cancel should have a letter access to them, indicated by a letter underlined in the button text; pressing ALT+letter should activate the button. Make sure there is no duplication. Click each button once with the mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and press RETURN - this should activate it. The above are very important, and should be done for every command button. Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border); pressing Return in any non-command-button control should activate it.
If there is a Cancel button on the screen, then pressing ESC should activate it. If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

6.Drop Down List Boxes
Pressing the arrow should give a list of options; this list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing ‘Ctrl - F4’ should open/drop down the list box. Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. A drop down with an item selected should display the list with the selected item on the top. Make sure only one space appears and that there isn't a blank line at the bottom.


7.Combo Boxes

Should allow text to be entered. Clicking Arrow should allow user to choose from list.


8. List Boxes
Should allow a single selection to be chosen, by clicking with the mouse or using the Up and Down arrow keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear and make sure all the data can be seen in the box.

GUI Testing

What is GUI Testing?

GUI is the abbreviation for Graphical User Interface. It is essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI Testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved.

The following is a set of guidelines to ensure effective GUI Testing and can be used even as a checklist while testing a product / application.

Sections Of GUI Testing:
1.Windows Compliance Testing
2.Screen Validation Checklist
3.Specific Field Tests
4.Validation Testing - Standard Actions

Monday, July 7, 2008

Verification Strategies

What is ‘Verification’?
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

What is the importance of the Verification Phase?
Verification process helps in detecting defects early, and preventing their leakage downstream. Thus, the higher cost of later detection and rework is eliminated.

1. Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
The main goal of reviews is to find defects. Reviews are a good complement to testing and help assure quality. A few purposes of SQA reviews are as follows:
• Assure the quality of deliverables before the project moves to the next stage.
• Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.
What are the various types of reviews?
Types of reviews include Management Reviews, Technical Reviews, Inspections, Walkthroughs and Audits.

Management Reviews
Management reviews are performed by those directly responsible for the system in order to monitor progress, determine status of plans and schedules, confirm requirements and their system allocation.
Therefore the main objectives of Management Reviews can be categorized as follows:
• Validate from a management perspective that the project is making progress according to the project plan.
• Ensure that deliverables are ready for management approvals.
• Resolve issues that require management’s attention.
• Identify any project bottlenecks.
• Keep the project in control.
Decisions supported during such reviews include corrective actions, changes in the allocation of resources, or changes to the scope of the project.
In management reviews the following Software products are reviewed:
Audit Reports
Contingency plans
Installation plans
Risk management plans
Software Q/A
The participants of the review play the roles of Decision-Maker, Review Leader, Recorder, Management Staff, and Technical Staff.

Technical Reviews
Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification.

The main objectives of Technical Reviews can be categorized as follows:
• Ensure that the software conforms to the organization's standards.
• Ensure that any changes in the development procedures (design, coding, testing) are implemented per the organization's pre-defined standards.
In technical reviews, the following software products are reviewed:
• Software requirements specification
• Software design description
• Software test documentation
• Software user documentation
• Installation procedure
• Release notes

The participants of the review play the roles of Decision-maker, Review leader, Recorder, Technical staff.

What is Requirement Review?

A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include system requirements review, software requirements review.
Who is involved in Requirement Review?
• Product management leads the Requirement Review. Members from every affected department participate in the review.

Input Criteria
Software requirement specification is the essential document for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification that these have been incorporated in the documents.

What is Design Review?
A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include critical design review, preliminary design review, and system design review.

Who is involved in Design Review?

• QA team member leads design review. Members from development team and QA team participate in the review.

Input Criteria
Design document is the essential document for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification that these have been incorporated in the documents.


What is Code Review?
A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

Who is involved in Code Review?
• A QA team member leads the code review (in case the QA team is involved only in black box testing, the development team lead chairs the review team). Members from the development team and QA team participate in the review.

Input Criteria

The Coding Standards Document and the Source file are the essential documents for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification that these have been incorporated in the documents.

2. Walkthrough
A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems.
The objectives of Walkthrough can be summarized as follows:
• Detect errors early.
• Ensure that (re)established standards are followed.
• Train and exchange technical information among project teams which participate in the walkthrough.
• Increase the quality of the project, thereby improving morale of the team members.
The participants in Walkthroughs assume one or more of the following roles:
a) Walk-through leader
b) Recorder
c) Author
d) Team member
To consider a review as a systematic walk-through, a team of at least two members shall be assembled. Roles may be shared among the team members. The walk-through leader or the author may serve as the recorder. The walk-through leader may be the author.
Individuals holding management positions over any member of the walk-through team shall not participate in the walk-through.

Input to the walk-through shall include the following:
a) A statement of objectives for the walk-through
b) The software product being examined
c) Standards that are in effect for the acquisition, supply, development, operation, and/or maintenance of the software product
Input to the walk-through may also include the following:
d) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
e) Anomaly categories

The walk-through shall be considered complete when
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walk-through output has been completed

3. Inspection

A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc.
The participants in Inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector

All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list
Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories
The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate rework and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only minor rework (for example, rework that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies the rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as side effects of those changes.

When should Testing stop?

"When to stop testing" is one of the most difficult questions to a test engineer.
The following are few of the common Test Stop criteria:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under acceptable limit.

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project the risk can be gauged simply by:

• Measuring Test Coverage.
• Number of test cycles.
• Number of high priority bugs.

When Testing should occur?

Wrong Assumption

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored, throughout the development process. If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs: not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity; rather, testing should be involved throughout the SDLC in order to produce a quality product.

Testing Activities in Each Phase

The following testing activities should be performed during the phases
• Requirements Analysis - (1) Determine correctness (2) Generate functional test data.
• Design - (1) Determine correctness and consistency (2) Generate structural and functional test data.
• Programming/Construction - (1) Determine correctness and consistency (2) Generate structural and functional test data (3) Apply test data (4) Refine test data.
• Operation and Maintenance - (1) Retest.

Now we consider these in detail.

Requirements Analysis

The following test activities should be performed during this stage.
•Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.
The requirements statement should record the following information and decisions:
1. Program function - What the program must do?
2. The form, format, data types and units for input.
3. The form, format, data types and units for output.
4. How exceptions, errors and deviations are to be handled.
5. For scientific computations, the numerical method or at least the required accuracy of the solution.
6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.
• Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data. In addition, following should also be included in the data set: (1) boundary values (2) any non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.
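For instance, if a requirement states that an order quantity must be between 1 and 999 (an invented requirement, used only for illustration), partitioning the input domain and picking representatives and boundary values might look like this:

# Equivalence classes for "quantity must be an integer between 1 and 999":
#   valid:   1..999
#   invalid: below 1, above 999, non-numeric input
test_data = {
    "valid representative": 500,
    "lower boundary": 1,
    "upper boundary": 999,
    "just below minimum": 0,
    "just above maximum": 1000,
    "non-numeric input": "ten",   # invalid input needs the same analysis as valid input
}

for description, value in test_data.items():
    print(f"{description}: {value!r}")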

• The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Design


The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:
• Principal data structures.
• Functions, algorithms, heuristics or special techniques used for processing.
• The program organization, how it will be modularized and categorized into external and internal interfaces.
• Any additional information.

Here the testing activities should consist of:
• Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

• Analysis of design to check whether it satisfies the requirements - check whether both requirements and design document contain the same form, format, units used for input and output and also that all functions listed in the requirement document have been included in the design document. Selected test data which is generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

• Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.

• Reexamination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.

Programming/Construction

Here the main testing points are:

• Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.
• Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

• Ask a colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

• Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

• Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

• Test one at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation - the insertion of some code into the program solely to measure various program characteristics - can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.

• Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
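The following is a rough illustration of statement coverage measurement using Python's trace hook, enough to see which lines of a small function a test set actually reaches; it is a teaching sketch, not a substitute for a real coverage tool.

import sys

def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

executed = set()

def tracer(frame, event, arg):
    # Record each line of classify() that runs at least once.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)      # covers the 'positive' path
classify(-3)     # covers the 'negative' path; the 'zero' path stays uncovered
sys.settrace(None)

print(f"lines of classify() executed at least once: {sorted(executed)}")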

Operations and maintenance

Corrections, modifications and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should already exist. They must be modified to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.
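
As a minimal sketch of such a regression run (reusing the hypothetical test classes from the earlier sketches), a JUnit 4 suite simply re-runs every relevant test class after a change, including tests that passed before the change was made:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical regression suite: after a change, the whole suite is re-run,
// including tests that previously resulted in success.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    OrderTotalDesignDataTest.class,
    ClassifierBranchCoverageTest.class
})
public class RegressionSuite {
    // Intentionally empty: the annotations above tell JUnit which test classes to re-run.
}
```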

Friday, July 4, 2008

Defect Resolution

Once the developers have acknowledged a valid defect, the resolution process begins.
The steps involved in defect resolution are as follows:
Prioritize Risk -- Developers determine the importance of fixing a particular defect.
Schedule Fix and Fix Defect -- Developers schedule when to fix a defect. Then developers should fix defects in order of importance.
Report Resolution -- Developers notify all relevant parties how and when the defect was repaired.

1. Prioritize Risk:

The purpose of this step is to answer the following questions and initiate any immediate action that might be required:

• Is this a previously reported defect, or is it new?

• What priority should be given to fixing this defect?

• What steps should be taken to minimize the impact of the defect prior to a fix? For example, should other users be notified of the problem? Is there a work-around for the defect?

A suggested prioritization method is a three-level method, as follows (a small code sketch follows the list):

• Critical: Would cause the software to stop.

• Major: Would cause an output of the software to be incorrect.

• Minor: Something wrong, but it does not directly affect the user of the system, such as a documentation error or cosmetic GUI error.
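
A hedged sketch of how such a three-level scheme might be captured in code; the enum name and ordering are illustrative assumptions only:

```java
// Illustrative three-level defect priority, highest first.
public enum DefectPriority {
    CRITICAL("Would cause the software to stop"),
    MAJOR("Would cause an output of the software to be incorrect"),
    MINOR("Something wrong that does not directly affect the user, e.g. a documentation or cosmetic GUI error");

    private final String description;

    DefectPriority(String description) {
        this.description = description;
    }

    public String getDescription() {
        return description;
    }
}
```

Defects could then be sorted by this priority when scheduling fixes, which matches the "fix defects in order of importance" step above.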

2. Schedule Fix and Fix Defect:
Schedule Fix
Based on the priority of the defect, the fix should be scheduled. It should be noted that some organizations treat lower-priority defects as changes. Not all defects are created equal from the perspective of how quickly they need to be fixed.

Fix Defect

This step involves correcting and verifying one or more deliverables (e.g., programs, documentation) required to remove the defect from the system. In addition, test data, checklists, etc. should be reviewed, and perhaps enhanced, so that in the future this defect would be caught earlier.

3. Report Resolution:
Once the defect has been fixed and the fix verified, appropriate developers, users, and testers must be notified that the defect has been fixed along with other pertinent information such as:

• The nature of the fix,
• When the fix will be released, and
• How the fix will be released.

As in many aspects of defect management, this is an area where automation of the process can help. Most defect management tools capture information on who found and reported the problem, and therefore provide an initial list of who needs to be notified.

The developer identifies and fixes problems that caused the defects. Then the developer must record resolution information with the defects. The resolution information is then passed back to impacted parties. Computer forums and electronic mail can help notify users of widely distributed software.

Defect Discovery

If technology cannot guarantee that defects will not be created, then defects should be found quickly, before the cost to fix becomes expensive. For the purposes of this model, a defect has been discovered when it has been formally brought to the attention of the developers and the developers acknowledge that it is valid. A defect has not necessarily been discovered when the user simply finds a problem with the software. The user must also report the defect, and the developers must acknowledge that it is valid. Since it is important to minimize the time between defect origination and defect discovery, strategies need to be implemented that uncover defects and facilitate their reporting and acknowledgement.
To make it easier to recognize defects, organizations should predefine defects by category. This is a one-time event, or one that could be repeated annually. It would involve knowledgeable, respected individuals from all major areas of the IS organization, and the group should be run by a facilitator. The objective is to identify the errors/problems that occur most frequently in the IS organization and then get agreement that they are, in fact, defects. A name should be attached to each category of defect. The purpose of this activity is to minimize conflicts over the validity of defects. For example, developers may not want to acknowledge that a missing requirement is a defect, but if it has been previously defined as a defect category then that conflict can be avoided.
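
As an illustration only (these category names are assumptions, not taken from the text), the agreed categories could be captured as simply as an enumeration shared across the organization:

```java
// Illustrative, organization-agreed defect categories; the point is that each category
// has an agreed name, so there is no argument later about whether something is a defect.
public enum DefectCategory {
    MISSING_REQUIREMENT,
    INCORRECT_LOGIC,
    INTERFACE_ERROR,
    DATA_HANDLING_ERROR,
    PERFORMANCE_PROBLEM,
    DOCUMENTATION_ERROR,
    COSMETIC_GUI_ERROR
}
```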

The steps involved in defect discovery are as follows:
1. Find Defect: Discover defects before they become major problems.
2. Report Defect: Report defects to developers so that they can be resolved.
3. Acknowledge Defect: Obtain development acknowledgement that the defect is valid and should be addressed.

1. Find Defect:
Defects are found either by preplanned activities specifically intended to uncover defects (e.g., quality control activities such as inspections, testing, etc.) or by accident (e.g., users in production).

Techniques to find defects can be divided into three categories:

Static techniques: Testing that is done without physically executing a program or system. A code review is an example of a static testing technique.

Dynamic techniques: Testing in which system components are physically executed to identify defects. Execution of test cases is an example of a dynamic testing technique.

Operational techniques: An operational system produces a deliverable containing a defect found by users, customers, or control personnel -- i.e., the defect is found as a result of a failure.

While it is beyond the scope of this study to compare and contrast the various static, dynamic, and operational techniques, the research did arrive at the following conclusions:

Both static and dynamic techniques are required for an effective defect management program. In each category, the more formally the techniques were integrated into the development process, the more effective they were.

Since static techniques will generally find defects earlier in the process, they are more efficient at finding defects.

When Shell Oil followed the inspection process, they recorded the following results:

For each staff-hour spent in the inspection process, ten hours were saved!

More informal (and less effective) reviews saved as much time as they cost. In other words, worst case (informal reviews) -- no extra cost, best case (formal inspections) -- a 10-1 savings.

Their defect removal efficiency with inspections was 95-97% versus roughly 60% for systems that did not use inspections.

Shell Oil also emphasized the more intangible, yet very significant, benefits of inspections. They found that if the standards for producing a deliverable were vague or ambiguous (or nonexistent), the group would attempt to define a best practice and develop a standard for the deliverable. Once the standard became well defined, checklists would be developed. (NASA also makes extensive use of checklists and cross references defects to the checklist item that should have caught the defect). Inspections were a good way to train new staff in both best practices and the functioning of the system being inspected.

2. Report Defect:
Once discovered, defects must be brought to the developers' attention. Defects discovered by a technique specifically designed to find them can be reported by a simple written or electronic report. However, some defects are discovered more by accident -- i.e., by people who are not trying to find defects. These may be development personnel or users. In these cases, techniques that facilitate the reporting of the defect may significantly shorten the defect discovery time. As software becomes more complex and more widely used, these techniques become more valuable. Such techniques include computer forums, email, help desks, etc.

It should also be noted that there are some human factors/cultural issues involved with the defect discovery process. When a defect is initially uncovered, it may be very unclear whether it is a defect, a change, user error, or a misunderstanding. Developers may resist calling something a defect because that implies "bad work" and may not reflect well on the development team. Users may resist calling something a "change" because that implies that the developers can charge them more money. Some organizations have skirted this issue by initially labeling everything by a different name -- e.g., "incidents" or "issues." From a defect management perspective, what they are called is not an important issue. What is important is that the defect be quickly brought to the developers' attention and formally controlled.

3. Acknowledge Defect:
Once a defect has been brought to the attention of the developer, the developer must decide whether or not the defect is valid. Delays in acknowledging defects can be very costly. The primary cause of delays in acknowledging a defect appears to be an inability to reproduce the defect. When the defect is not reproducible and appears to be an isolated event ("no one else has reported anything like that"), there will be an increased tendency for the developer to assume the defect is invalid -- that the defect is caused by user error or misunderstanding. Moreover, with very little information to go on, the developer may feel that there is nothing he or she can do anyway. Unfortunately, as technology becomes more complex, defects which are difficult to reproduce will become more and more common. Software developers must develop strategies to more quickly pinpoint the cause of a defect.

Strategies to pinpoint cause of defect:
One strategy to pinpoint the cause of a defect is to instrument code to trap the state of the environment when anomalous conditions occur. Microsoft's Dr. Watson concept would be an example of this technique. In the Beta release of Windows 3.1, Microsoft included features (i.e., Dr. Watson) to trap the state of the system when a significant problem occurred. This information was then available to Microsoft when the problem was reported and helped them analyze the problem.
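
In the same spirit as the Dr. Watson idea (this handler is a hedged sketch, not Microsoft's implementation), an application can register a hook that dumps the state of the environment whenever an anomalous condition goes unhandled:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.logging.Logger;

// Sketch of trapping the state of the environment when an anomalous condition occurs,
// so that the information is available when the problem is later reported.
public class CrashTrap {

    private static final Logger LOG = Logger.getLogger(CrashTrap.class.getName());

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            StringWriter stack = new StringWriter();
            error.printStackTrace(new PrintWriter(stack));

            // Capture whatever context might help reproduce the defect later.
            LOG.severe("Uncaught error in thread " + thread.getName() + ": " + error
                    + "\nStack trace:\n" + stack
                    + "\nFree memory: " + Runtime.getRuntime().freeMemory()
                    + "\nOS: " + System.getProperty("os.name")
                    + "\nJava version: " + System.getProperty("java.version"));
        });
    }

    public static void main(String[] args) {
        install();
        throw new IllegalStateException("demonstration of the trap");
    }
}
```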

Writing code to check the validity of the system is another way to pinpoint the cause of a defect. This is actually a very common technique for hardware manufacturers. Unfortunately diagnostics may give a false sense of security -- they can find defects, but they cannot show the absence of defects. Virus checkers would be an example of this strategy.
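
A hedged sketch of this second strategy follows: a small self-check routine that validates internal data (here a hypothetical list of order line totals) and logs any violation. As noted above, a clean run only shows that this check found nothing, not that the system is defect-free.

```java
import java.util.List;
import java.util.logging.Logger;

// Sketch of a diagnostic routine that checks the validity of internal data at run time.
public class OrderDiagnostics {

    private static final Logger LOG = Logger.getLogger(OrderDiagnostics.class.getName());

    // Returns true if no violations were found; a true result does NOT prove the absence of defects.
    public static boolean checkTotals(List<Double> lineTotals, double grandTotal) {
        double sum = 0.0;
        for (Double lineTotal : lineTotals) {
            if (lineTotal == null || lineTotal < 0.0) {
                LOG.warning("Invalid line total: " + lineTotal);
                return false;
            }
            sum += lineTotal;
        }
        if (Math.abs(sum - grandTotal) > 0.01) {
            LOG.warning("Grand total " + grandTotal + " does not match sum of lines " + sum);
            return false;
        }
        return true;
    }
}
```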

Finally, analyzing reported defects to discover the cause of a defect is very effective. While a given defect may not be reproducible, quite often it will appear again (and again) perhaps in different guises. Eventually patterns may be noticed which will help in resolving the defect. If the defect is not logged, or if it is closed prematurely, then valuable information can be lost. In one instance reported to the research team, a development team was having difficulty reproducing a problem. Finally, during a visit to the location, they discovered how to reproduce the problem. The problem was caused when one of the users fell asleep with her finger on the enter key. In order to protect the user, the circumstances surrounding the problem were not reported to the developers until the on-site visit.

A resolution process needs to be established for use in the event there is a dispute regarding a defect. For example, if the group uncovering the defect believes it is a defect but the developers do not, a quick-resolution process must be in place. While many approaches can address this situation, the two most effective are:

Arbitration by the software owner -- the customer using the software determines whether or not the problem is a defect.

Arbitration by a software development manager -- a senior manager of the software development department will be selected to resolve the dispute.

Tuesday, July 1, 2008

Test Deliverables & Metrics

Test Deliverables

There are different test deliverables at every phase of the SDLC. Some deliverables are provided, based on the requirements, before the start of a test phase, while others are produced towards the end of, or after completion of, each test phase. Several test metrics are also collected at each phase of testing. Below are the details of the various test deliverables corresponding to each test phase, along with their test metrics.

The standard deliverables provided as part of testing are:

* Test Traceability Matrix
* Test Plan
* Testing Strategy
* Test Cases (for functional testing)
* Test Scenarios (for non-functional testing)
* Test Scripts
* Test Data
* Test Results
* Test Summary Report
* Release Notes
* Tested Build
______________________________________________________________________________________

Test Metrics


Several test metrics are identified as part of the overall testing activity in order to track and measure the entire testing process. These metrics are collected at each phase of the testing life cycle/SDLC, analyzed, and used to determine and implement appropriate process improvements. Metric collection and evaluation runs as a parallel activity alongside testing, for both manual and automated testing, irrespective of the type of application. The test metrics can be broadly classified into the following three categories (two of the simpler ones are sketched in code after the list):

1. Project Related Metrics – such as Test Size, # of Test Cases tested per day –Automated (NTTA), # of Test Cases tested per day –Manual (NTTM), # of Test Cases created per day – Manual (TCED), Total number of review defects (RD), Total number of testing defects (TD), etc

2. Process Related Metrics – such as Schedule Adherence (SA), Effort Variance (EV), Schedule Slippage (SS), Test Cases and Scripts Rework Effort, etc.

3. Customer related Metrics – such as Percentage of defects leaked per release (PDLPR), Percentage of automation per release (PAPR), Application Stability Index (ASI), etc.
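
The exact definitions of these metrics vary between organizations, so the following is only a hedged illustration of how two of them are commonly computed; the formulas are conventional assumptions, not taken from the text:

```java
// Illustrative metric calculations; the formulas shown are common conventions,
// not necessarily the exact definitions used by any particular organization.
public class TestMetrics {

    // Effort Variance (EV), as a percentage of the planned effort.
    public static double effortVariancePercent(double actualEffortHours, double plannedEffortHours) {
        return (actualEffortHours - plannedEffortHours) / plannedEffortHours * 100.0;
    }

    // Percentage of defects leaked per release (PDLPR): defects found after release
    // relative to all defects found for that release.
    public static double defectLeakagePercent(int defectsFoundAfterRelease, int defectsFoundBeforeRelease) {
        int total = defectsFoundAfterRelease + defectsFoundBeforeRelease;
        return total == 0 ? 0.0 : 100.0 * defectsFoundAfterRelease / total;
    }

    public static void main(String[] args) {
        System.out.println(effortVariancePercent(120.0, 100.0) + "%"); // 20.0%
        System.out.println(defectLeakagePercent(5, 95) + "%");         // 5.0%
    }
}
```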

Technology Based Software Testing

1. GUI testing
Testing Considerations for GUI
1. Communication; aspects to be tested are:
• Tool tips and status bar
• Missing information
• Enable/ Disable toolbar buttons
• Wrong/ misleading/ confusing information
• Help text and Error messages
• Training documents

2. Dialog Boxes; aspects to be tested are:
• Keyboard actions
• Mouse actions
• Canceling
• Okaying
• Default buttons
• Layout error
• Modal
• Window buttons
• Sizable
• Title/ Icon
• Tab order
• Display layout
• Boundary conditions
• Sorting
• Active window
• Memory leak

3. Command structure; aspects to be tested are:
• Menus
• Popup menus
• Command Line Parameters
• State transitions

4. Program rigidity; aspects to be tested are:
• User options
• Control
• Output

5. Preferences; aspects to be tested are:
• User tailor-ability
• Visual preferences
• Localization

6. Usability; aspects to be tested are:
• Accessibility
• Responsiveness
• Efficiency
• Comprehensibility
• User scenarios
• Ease of use

7. Localization; aspects to be tested are:
• Translation
• English-only dependencies
• Cultural dependencies
• Unicode
• Currency
• Date/ Time
• Constants
• Dialog contingency

2. Application Testing

Testing Considerations for Application Testing

C, C++ Applications; aspects to be tested:

* Memory leak detection
* Code coverage
* Static and dynamic testing
* Test coverage analysis
* Runtime error detection
* Automated component testing
* Measurement of maintainability, portability, complexity, standards compliance
* Boundary conditions testing

Java Applications/ Applets; aspects to be tested:

* Automated component testing
* Functional testing
* Performance testing
* Applet/ application testing
* Code coverage
* Boundary conditions testing

Win 32-based Applications; aspects to be tested:

* Memory leak detection of Win32 programs
* Performance testing
* Stress testing of Windows applications and system clients

3. Application Programming Interface (API) testing

Testing Considerations/ Issues

The following tasks can be automated:

1. Test-code builds
2. Test-suite execution
3. Report generation
4. Report publishing
5. Report notification
6. Periodic execution of test suites

In addition:

* A tool for test automation should be identified very early in the project.
* A simulator (that services the API calls) should be developed by the testing team. In the absence of this simulator, there is a dependency on a server not developed by the API development team (a minimal simulator sketch follows this list).
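
A hedged sketch of such a simulator is shown below, assuming a hypothetical InventoryApi interface (none of these names come from the text). A stub that services the API calls lets the automated test suite run without the real server:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical API under test.
interface InventoryApi {
    int getStockLevel(String productId);
}

// Simulator developed by the testing team: services the API calls
// so the tests do not depend on the real server.
class InventoryApiSimulator implements InventoryApi {
    @Override
    public int getStockLevel(String productId) {
        // Canned responses chosen by the testing team.
        return "WIDGET-1".equals(productId) ? 42 : 0;
    }
}

public class InventoryApiTest {

    private final InventoryApi api = new InventoryApiSimulator();

    @Test
    public void returnsStockLevelForKnownProduct() {
        assertEquals(42, api.getStockLevel("WIDGET-1"));
    }

    @Test
    public void returnsZeroForUnknownProduct() {
        assertEquals(0, api.getStockLevel("NO-SUCH-ID"));
    }
}
```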

Validating the APIs for user-friendliness:

* Meaningful names
* Concise and short names
* Number of arguments
* Easy-to-read names

4. Middleware Testing

* Functional testing
* Interoperability testing
* Performance testing
5. Database/ Backend (and database applications) testing
Testing Considerations/ Issues

Apart from verifying/validating the application itself, the following needs to be considered for databases:

* Distributed Environment
* Performance
* Integration with other systems
* Security Aspects

Description/ Sub-tasks

Boundary conditions testing (Data); aspects to be tested:

* Dataset
* Numeric
* Alpha
* Numerosity
* Field size
* Data structures
* Timeouts

Accuracy/ Integrity testing (Data); aspects to be tested:

* Calculations - Reports
* Calculations - Backend
* Divide by zero
* Truncate
* Compatibility
* Test data
* Data consistency

Database connectivity; aspects to be tested:

* Save
* Retrieval
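
A hedged sketch of such a save-and-retrieval check follows; the JDBC URL, table, columns and credentials are placeholders for illustration, not part of the original text:

```java
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.junit.Test;

public class DatabaseConnectivityTest {

    // Placeholder connection details; replace with the environment under test.
    private static final String URL = "jdbc:postgresql://localhost:5432/testdb";
    private static final String USER = "tester";
    private static final String PASSWORD = "secret";

    @Test
    public void savedRowCanBeRetrieved() throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD)) {
            // Save
            try (PreparedStatement insert =
                     conn.prepareStatement("INSERT INTO customers (id, name) VALUES (?, ?)")) {
                insert.setInt(1, 1001);
                insert.setString(2, "Ada");
                insert.executeUpdate();
            }
            // Retrieval
            try (PreparedStatement query =
                     conn.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
                query.setInt(1, 1001);
                try (ResultSet rs = query.executeQuery()) {
                    rs.next();
                    assertEquals("Ada", rs.getString("name"));
                }
            }
        }
    }
}
```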

Database schema testing; aspects to be tested:

* Databases and devices
* Tables, Fields, Constraints, Defaults
* Keys and Indices
* Stored procedures
* Error messages
* Triggers, Update
* Triggers, Insert
* Triggers, Delete
* Schema comparisons

Security testing; aspects to be tested:

* Login and User security


6. Web Site/ Page Testing

Link and HTML Testing; aspects to be tested:

* Detecting HTML compatibility problems
* Checking Cascading Style Sheets
* Checking link and content
* Checking HTML syntax/ Validating HTML documents
* Detecting broken/ dead links (a minimal checker is sketched after this list)
* Web site performance analysis

Functional Testing; aspects to be tested:

* Web site functional testing
* Testing for completeness and consistency of web pages

Performance Testing; aspects to be tested:

* Load testing of web-based systems
* Reliability, performance and scalability testing of web applications
* Load and performance testing of the web server
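
A hedged sketch of the simplest of these checks, detecting broken/dead links by issuing an HTTP HEAD request to each URL; the link list is a placeholder assumption:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

// Sketch of a dead-link check: any link answering with HTTP 400 or above is reported as broken.
public class DeadLinkChecker {

    static boolean isBroken(String link) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
            conn.setRequestMethod("HEAD");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() >= 400;
        } catch (Exception e) {
            return true; // unreachable host, malformed URL, timeout, etc.
        }
    }

    public static void main(String[] args) {
        // Placeholder links extracted from the pages under test.
        List<String> links = Arrays.asList(
                "https://example.com/",
                "https://example.com/no-such-page");
        for (String link : links) {
            System.out.println(link + (isBroken(link) ? " -> BROKEN" : " -> OK"));
        }
    }
}
```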