Monday, August 11, 2008

Bug Tracking

-Locating and repairing software bugs is an essential part of software development.
-Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process.
-Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.

Bug Tracking involves two main stages: reporting and tracking.
1. Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application.
When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.
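To make this concrete, a bug report can be modeled as a record that captures everything needed to reproduce and fix the problem, plus a notification step. The sketch below is a minimal illustration in Python; the field names, the in-memory list standing in for the bug database, and the notify helper are hypothetical and not the interface of any particular tracking tool.

from dataclasses import dataclass
from typing import List

# Hypothetical bug record; the field names are illustrative only.
@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: List[str]
    expected_result: str
    actual_result: str
    detected_by: str
    status: str = "New"      # New -> Open -> Fixed -> Closed / Reopened
    assigned_to: str = ""    # set when the bug is given the status Open

# A plain list stands in for the bug database in this sketch.
bug_database: List[BugReport] = []

def notify(recipients: List[str], message: str) -> None:
    # Placeholder for an e-mail or tracker notification.
    for person in recipients:
        print(f"[notify {person}] {message}")

def report_bug(bug: BugReport) -> None:
    """Store the bug and notify the people who need to act on it."""
    bug_database.append(bug)
    notify(["qa-manager", "dev-lead"], f"New bug reported: {bug.summary}")

report_bug(BugReport(
    summary="Letter accepted in a numeric field",
    steps_to_reproduce=["Open the order form", "Type 'a' into the Quantity field"],
    expected_result="The input is rejected",
    actual_result="The input is accepted",
    detected_by="tester1",
))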

2. Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.
-First you report New bugs to the database and provide all the information necessary to reproduce, fix, and follow up on the bug.
-The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team.
-Software developers fix the Open bugs and assign them the status Fixed.
-QA personnel test a new build of the application. If a bug does not recur, it is Closed. If a bug is detected again, it is Reopened.
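This lifecycle can be viewed as a small state machine over the statuses named above (New, Open, Fixed, Closed, Reopened). A minimal Python sketch, with an assumed transition table that mirrors the steps described:

# Allowed status transitions, following the lifecycle described above.
ALLOWED_TRANSITIONS = {
    "New":      {"Open"},                # manager decides the bug should be fixed
    "Open":     {"Fixed"},               # a developer fixes the bug
    "Fixed":    {"Closed", "Reopened"},  # QA verifies the fix in a new build
    "Reopened": {"Fixed"},               # the bug goes back to development
    "Closed":   set(),                   # terminal state
}

def change_status(current: str, new: str) -> str:
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

status = "New"
for step in ("Open", "Fixed", "Reopened", "Fixed", "Closed"):
    status = change_status(status, step)
    print(status)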

Communication is an essential part of bug tracking; all members of the development and quality assurance teams must be kept well informed to ensure that bug information is up to date and that the most important problems are addressed.
The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
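Even without a dedicated reporting tool, tallying bugs by status gives a rough version of this quality indicator. A minimal Python sketch, using an assumed snapshot of bug statuses:

from collections import Counter

# Assumed snapshot of statuses pulled from the bug database.
bug_statuses = ["New", "Open", "Open", "Fixed", "Closed", "Reopened", "Open"]

summary = Counter(bug_statuses)
for status in ("New", "Open", "Fixed", "Reopened", "Closed"):
    print(f"{status:9s}{summary[status]}")

# Open and Reopened bugs are unresolved; Fixed bugs still await verification.
print("Unresolved:", summary["New"] + summary["Open"] + summary["Reopened"])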

Change Request

1. Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Whether the request is major or minor, it is entered as a change request.

2. Types of Change Request
Bug: the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).
Change: a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).
Enhancement: new functionality or an item added to the application (for example, a new report, a new field, or a new button).


3. Priority for the Request

Low: the application works, but the change would make the function easier or more user-friendly.
High: the application works, but the change is necessary to perform a job.
Critical: the application does not work, job functions are impaired, and there is no workaround. This also applies to any Section 508 infraction.
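The request types and priorities above map naturally onto two small enumerations plus a request record. In the Python sketch below, the enum members come straight from this section, while the ChangeRequest fields and the example values are assumptions.

from dataclasses import dataclass
from enum import Enum

class RequestType(Enum):
    BUG = "Bug"                  # the application works incorrectly
    CHANGE = "Change"            # modification of the existing application
    ENHANCEMENT = "Enhancement"  # new functionality or item added

class Priority(Enum):
    LOW = "Low"            # works, but the change would make the function easier
    HIGH = "High"          # works, but the change is necessary to perform a job
    CRITICAL = "Critical"  # does not work, no workaround (or a Section 508 issue)

# Hypothetical change request record combining both classifications.
@dataclass
class ChangeRequest:
    title: str
    description: str
    request_type: RequestType
    priority: Priority
    requested_by: str

cr = ChangeRequest(
    title="Sort files alphabetically by the second field",
    description="Alphabetical sorting makes the files easier to find.",
    request_type=RequestType.CHANGE,
    priority=Priority.LOW,
    requested_by="user42",
)
print(cr.request_type.value, "/", cr.priority.value)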

Test Execution

Test Execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

1. Create Test Cycles
During this stage, you decide which subset of tests from your test database to execute.
Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle, and it can include both manual and automated tests.
Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.
Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.
To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified.
Following are examples of some general categories of test cycles to consider:
Sanity cycle: checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
Normal cycle: tests the system in a little more depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
Advanced cycle: tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth) and also test advanced options in the application (depth).
Regression cycle: tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for the entire software, as well as in-depth tests for the specific area of the application that was modified.
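One way to picture a test cycle is as a named subset of the test database that mixes manual and automated tests. In the Python sketch below, the test names, the tagging scheme, and the select_cycle helper are all hypothetical:

# Assumed test database: each test is tagged with the cycles it belongs to
# and with whether it is manual or automated.
test_database = [
    {"name": "login_smoke",         "kind": "automated", "cycles": {"sanity", "normal", "advanced", "regression"}},
    {"name": "order_form_basic",    "kind": "manual",    "cycles": {"sanity", "normal", "advanced"}},
    {"name": "order_form_negative", "kind": "automated", "cycles": {"normal", "advanced"}},
    {"name": "reporting_in_depth",  "kind": "manual",    "cycles": {"advanced"}},
    {"name": "billing_regression",  "kind": "automated", "cycles": {"regression"}},
]

def select_cycle(cycle_name):
    """Return the subset of tests that make up the named cycle."""
    return [t for t in test_database if cycle_name in t["cycles"]]

for cycle in ("sanity", "normal", "advanced", "regression"):
    names = [t["name"] for t in select_cycle(cycle)]
    print(f"{cycle:10s} -> {names}")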

2. Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the tests in each cycle. You perform manual tests using the test steps. Testing Tools executes automated tests for you. A test cycle is complete only when all tests, automated and manual, have been run.
With Manual Test Execution, you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step, you assign either a pass or fail status.
During Automated Test Execution, you create a batch of tests and launch the entire batch at once. Testing Tools runs the tests one at a time and then imports the results, providing an outcome summary for each test.
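A cycle run therefore combines manually logged results with results from an automated batch. The Python sketch below is a toy runner, not the behavior of any particular testing tool; the test functions and the result dictionaries are stand-ins:

# Stand-ins for real automated checks.
def login_smoke():
    return True

def order_form_negative():
    return False

# The automated batch maps test names to functions returning pass/fail.
automated_batch = {
    "login_smoke": login_smoke,
    "order_form_negative": order_form_negative,
}

# Manual results are logged by the tester per test step (True = pass).
manual_results = {
    "order_form_basic / step 1": True,
    "order_form_basic / step 2": False,
}

results = dict(manual_results)
for name, test in automated_batch.items():
    results[name] = test()  # the batch runs one test at a time

for name, passed in results.items():
    print(f"{'PASS' if passed else 'FAIL'}  {name}")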


3. Analyze Test Results

After every test run, you analyze and validate the test results. You identify all the failed steps in the tests and determine whether a bug has been detected or whether the expected result needs to be updated.
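In practice, analysis amounts to collecting the failed steps and queuing each one for a human decision: report a bug, or update the expected result. A minimal Python sketch, using an assumed results dictionary:

# Assumed run results: step name -> (passed, note from the tester).
run_results = {
    "login_smoke":                 (True,  ""),
    "order_form_basic / step 2":   (False, "letter accepted in a numeric field"),
    "reporting_in_depth / step 4": (False, "layout changed intentionally in this build"),
}

failed = {name: note for name, (passed, note) in run_results.items() if not passed}

for name, note in failed.items():
    # The decision itself is human judgment; the code only queues the work.
    print(f"Triage needed: {name} ({note}) -> report a bug OR update the expected result")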