Thursday, May 8, 2008

Software Testing Life Cycle

The Software Testing Life Cycle consists of the following phases:

1. Test Planning
2. Test Development/Design
3. Test Execution
4. Result Analysis
5. Bug Tracking
6. Reporting

1. Test Planning

A test plan is defined as the strategic document that describes how testing will be carried out on the entire application effectively, efficiently, and with optimal use of resources.

Content of Test Plan:

i. Scope and the objective of the test plan.
ii. Areas of the functionalities to be tested.
iii. Areas of the functionalities not to be tested.
iv. Resource planning.
v. Scheduling.
vi. Test Deliverables.
vii. Test Strategy.
viii. Testing terminology and the defect metrics definition.
ix. Areas of testing to be automated.
x. Evaluation of the automated tools and the list of the automated tools used.
xi. Entry and Exit criteria
xii. The details of the author, reviewer and approval authority with respect to the test plan document.
xiii. Risks and contingencies plan.

Scope: It defines the portion of the project to be addressed and focused on by the test plan.

Objective: Defines the testing services to be provided for the specific portion of the project identified in the scope.

Areas of the functionalities to be tested: This field describes the major functional areas, in terms of modules, that are to be tested as per the scope.

Areas of the functionalities not to be tested: Some functional areas are not tested, for the following reasons:

1. If the functionalities are out of scope.
2. If the functionalities are not covered by the test strategy provided by the customer.

Resource Planning: This section maps tasks to the testing manpower, i.e., it decides which testing professional does which work.

Scheduling: This field details the time factors of each task: its duration, start date, and end date.

Test Deliverables: The list of all documents that originate from the testing department or testing effort. Some of the deliverables are given below:

Review Report
Test Report Document (TRD)
Defect Profile Document (DPD)
Detailed Test Case Document (DTCD)
Master Test Case Document (MTCD)
Function Point Document (FPD)
Test Metrics Document
Test Matrix Document
Periodic Project Report (PPR)
Software Delivery Note (SDN)
Certification Document.

Test Strategy: Describes all activities associated with testing: the kinds of testing, methods of testing, levels of testing, types of testing, and ways of testing.

Note: In an organization, documents are produced under three classifications: organizational-level, project-level, and module-level documents. The test process (test strategy) document is an organizational-level document, whereas the test plan document is a project-level document and the test case document is a module-level document.

Testing terminology and the defect metrics definition: All the terms that belong to the testing domain are listed here. The expressions used to classify defects, based on their degree of seriousness, are also defined here.

Areas of testing to be automated: Having carefully analyzed the scope for automation, one has to list all the testing efforts that can be carried out with automated tools. In other words, the portions of testing to be automated and not to be automated are detailed under this section.

Evaluation of tools: Depending on the features of the application and the transactions to be tested, the automation experts must carefully evaluate and select the appropriate automated tools and list them under this section.

Entry Criteria: The criteria that are required to initiate/start the testing activity can be detailed here as entry criteria.

Exit Criteria: The criteria that are required to terminate the testing activities can be detailed here as exit criteria.

The details of the author, reviewer and approval authority with respect to the test plan document: For every document there are three roles involved: the author (who actually creates it), the reviewer (who verifies it), and the approval authority (who finalizes it). The details of all these roles are given under this section.

Author – TE, Review – Peer reviewer, Approval – BA

Risks and contingencies plan: The test managers must carefully assess all the risks in advance and be prepared with solutions, in the form of a contingency plan, to resolve the risks if at all they arise.

Note: The test plan is usually prepared by either the SQM or the Quality Lead (QL). The project plan is prepared by the project manager, and it can serve as an input or base document for the preparation of the test plan document.

2. Test Development/Design

This stage is meant for developing a strategy for testing, specifically for testing each functionality in a detailed fashion, in the form of test case document preparation. Hence the test engineers prepare the test case documents in this phase.

The test case document is always prepared with the help of a template, which is given below:

i. Test objective/test scope
ii. Test scenario
iii. Test procedure
iv. Test cases
v. Test data

Note: There will be multiple test case documents prepared for a single project most often at the module level.

i. Test objective/scope: This section details what portion of the application is tested with the help of this test case document.
ii. Test scenario: This section details the situation in which the testing is carried out. In other words, the specific context in which the test case document can be executed has to be described.
iii. Test procedure: Each functionality is planned to be tested such that its look and feel is covered under GUI testing, its positive behavior under positive testing, and its negative behavior under negative testing, with the help of the corresponding GUI, positive, and negative test cases.
iv. Test Cases: This section details the list of various test cases in terms of their corresponding test steps. The information is described in a tabular format under the classification of GUI, positive, and negative test cases.

When test cases are drafted the test engineer must follow some of the standards as given below:

a. It should be precise and to the point.
b. It must be clear and easy to understand.
c. It must be written as instructions rather than statements.
d. It must be reproducible and reusable.
e. It can use hyperlinking to fetch the test data with which testing proceeds.

v. Test Data: It is defined as the data that is utilized at the time of execution of the test cases to test a specific functionality.
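
As an illustration, a single entry from such a test case document can be sketched as a small data record. The field names and values below (test_id, steps, the demo credentials) are hypothetical, chosen only to mirror the template above:

```python
# One illustrative test case record following the template:
# objective/scope, scenario, procedure category, test steps, and test data.
login_test_case = {
    "test_id": "TC_LOGIN_01",                              # illustrative identifier
    "objective": "Verify login with valid credentials",    # test objective/scope
    "scenario": "User is on the login page",               # test scenario
    "category": "positive",                                # GUI / positive / negative
    "steps": [                                             # instructions, not statements
        "Enter the user name in the User Name field",
        "Enter the password in the Password field",
        "Click the Login button",
    ],
    "test_data": {"username": "demo_user", "password": "demo_pass"},  # hypothetical data
    "expected": "Home page is displayed",
}

print(login_test_case["test_id"], "-", login_test_case["objective"])
```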

3. Test Execution

This phase is meant for implementing the test strategy developed in the previous phase, test design. In other words, the prepared test case documents are executed on the developed and released functionality.

4. Result Analysis

It is the process of analysis in which the result is determined. Usually the test results are expressed in terms of pass or fail.

In result analysis the expected value is compared with the actual value: if they match, the result is "pass"; if they mismatch, the result is "fail".

Pass means the requirement is justified properly, on the other hand fail means there is a deviation from the requirements.
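
The comparison described above can be sketched in a few lines; the function name and the sample messages are illustrative:

```python
# Result analysis: compare the expected value with the actual value.
def analyze(expected, actual):
    """Return 'pass' when the values match, 'fail' on any deviation."""
    return "pass" if expected == actual else "fail"

print(analyze("Home page is displayed", "Home page is displayed"))  # pass
print(analyze("Home page is displayed", "Error 500"))               # fail
```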

5. Bug Tracking

Soon after test execution is over and the results are analyzed, all the failed cases are mostly considered defects. These defects are not only identified but also tracked in a specific document known as the defect profile document. Hence this phase is meant for preparing the defect profile document.
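
A minimal sketch of one entry in a defect profile document, assuming a small field set (id, summary, severity, status) that real organizational templates would extend:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative entry of a Defect Profile Document (DPD).
# The field set is an assumption; actual DPD templates vary by organization.
@dataclass
class DefectRecord:
    defect_id: str
    summary: str
    severity: str               # e.g. critical / major / minor
    status: str = "new"         # typical flow: new -> open -> fixed -> closed
    reported_on: date = field(default_factory=date.today)

bug = DefectRecord("DEF-001", "Login fails for valid user", severity="critical")
print(bug.defect_id, bug.status)
```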

6. Reporting

The defect profile document prepared in the previous phase is sent by the test engineer to the development team; this process is known as reporting, also referred to as the bug reporting process.

Apart from the defect profile document (DPD), the test engineer prepares a high-level document known as the Test Report Document (TRD). This document is sent to the higher management and to the customer (if required) in order to let them know the status of testing and the stability of the functionality. It contains a high-level description of status and stability, with pictorial representation to make them easy to understand. Hence the reporting phase deals with two documents: the DPD and the TRD.

Friday, April 25, 2008

Levels of Testing

Unit Testing: It is defined as the first level of testing: soon after a unit (a smaller part of the project, or sub-module) is developed, it is tested to verify that it works properly as per the design.
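
A minimal unit-testing sketch using Python's unittest module; the unit here is a hypothetical apply_discount function, checked against its design:

```python
import unittest

# Hypothetical unit under test: a small function from a sub-module.
def apply_discount(price, percent):
    """Return the price after deducting the given percentage."""
    return round(price * (1 - percent / 100), 2)

# Unit tests verifying the unit works as per its design.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

unittest.main(argv=["ignored"], exit=False)
```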

Module Testing: It is another level of testing in which, once all the sub-modules or units are developed to form a module, the module is checked to see whether it meets the requirements.

Module testing belongs to black box testing, because at this level a GUI is usually available on which the functionality can be seen and tested; hence it is always done by test engineers.

Integration Testing: It is another important level of testing in which, soon after the individual modules are developed and tested, they are brought together to make an application or part of an application. Testing performed on these integrated modules is known as integration testing.

The purpose of integration testing:

To test the individual modules, to see whether their correct functionality is affected by the integration.
To test the net functionality of the integrated modules.
To check whether the data flows among the modules as per the DFDs.
To check the high-level navigation, whether it is as per the design.

Integration Approaches:

Integration approach is the process that defines how the modules are integrated with each other. On this basis there are three types of approaches:

1. Top down Approach
2. Bottom up Approach
3. Sandwich Approach

1. Top-down Approach: This approach is usually proposed whenever the customer does not interfere and the sequence of development is not affected. In this approach, as and when the child modules are developed, they are integrated with their parents; hence the direction of integration is from top to bottom, and it is called the top-down approach.

In this approach there is every possibility that some child modules are missing (not yet developed), hindering the integration of the remaining modules. To handle this situation, a temporary dummy program known as a "STUB" is employed in place of the missing child module to make the integration succeed.
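
A stub can be sketched as follows; the billing/tax module names and the canned value are illustrative:

```python
# Top-down integration sketch: the parent module is ready, a child is not,
# so a STUB stands in for the missing child.

def child_tax_module_stub(amount):
    """STUB: temporary dummy replacing the undeveloped tax child module."""
    return 0.0  # canned response so the parent can be integration-tested

def parent_billing_module(amount, tax_module):
    """Parent under test; it calls down into a child module."""
    return amount + tax_module(amount)

# Integration proceeds top-down using the stub in place of the real child:
print(parent_billing_module(100.0, child_tax_module_stub))  # 100.0
```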

2. Bottom-up Approach: This approach is proposed whenever the customer interferes in the beginning and changes the sequence of development. In this approach, as and when the parent modules are developed, they are integrated with the child modules. In this process there is every possibility that a parent is missing, hindering the integration; as before, a temporary dummy program, known as a "DRIVER", is used as the solution.
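
A driver can be sketched the same way; again the module names and figures are illustrative:

```python
# Bottom-up integration sketch: the child module is ready, its parent is not,
# so a DRIVER stands in for the missing parent and invokes the child.

def child_tax_module(amount):
    """Real child module, developed first in the bottom-up approach."""
    return round(amount * 0.05, 2)

def parent_driver():
    """DRIVER: temporary dummy parent that calls the child with test inputs."""
    for amount in (100.0, 250.0):
        print(amount, "->", child_tax_module(amount))

parent_driver()
```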

3. Sandwich Approach: This approach is proposed whenever the customer interferes in between the development process. It is basically a combination of the top-down and bottom-up approaches; hence both stubs and drivers are seen in this approach.

System Testing: It is another crucial level of testing in which, once the application is developed, it is deployed into the customer-specified, simulated environment to prepare the complete system. Testing performed on this entire system, to check whether the application functions as per the requirements, is known as system testing.

System testing can be considered full-fledged testing, as it covers GUI testing, functional testing (positive and negative), performance testing, load testing, and stress testing.

User Acceptance Testing: It is defined as the final level of testing, in which the application is tested in the presence of the customer or the customer's representative, to check whether the user's acceptance criteria are satisfied.

User acceptance testing belongs to black box testing, and it is done by either the test engineer or the customer himself.

Load Testing: It is defined as a type of testing done under system testing, in which load is applied to the application sequentially, with a constant increment, in order to determine the load-bearing capacity of the application.
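
The constant-increment idea can be sketched with a synthetic service model; the timings, the 2-second limit, and the 5-user step are invented for illustration, not taken from any real tool:

```python
# Load-testing sketch: increase load in constant increments until the
# response time exceeds an acceptable limit, revealing the load-bearing capacity.

def simulated_service(concurrent_users):
    """Pretend service whose response time grows linearly with load."""
    return 0.1 * concurrent_users  # seconds (synthetic model)

LIMIT = 2.0          # acceptable response time in seconds (assumed)
users, STEP = 0, 5   # constant increment of 5 virtual users (assumed)

while True:
    users += STEP
    if simulated_service(users) > LIMIT:
        break

print("Capacity reached near", users - STEP, "users")  # last load that passed
```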

Performance Testing: It is defined as a type of testing done under system testing, in which a set of predefined end-to-end transactions is performed on the application to determine the respective response times. The performance of the application is considered acceptable if the actual response times are well within the expected response times. Hence performance means "the time factor", i.e., the response time.
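
Timing one transaction against an expected response time can be sketched as follows; the transaction body and the 0.5-second limit are assumptions:

```python
import time

# Performance-testing sketch: measure the actual response time of a
# predefined end-to-end transaction and compare it with the expected one.

EXPECTED_RESPONSE_TIME = 0.5  # seconds, as stated in the requirements (assumed)

def end_to_end_transaction():
    """Placeholder for a predefined transaction, e.g. search-and-checkout."""
    time.sleep(0.01)

start = time.perf_counter()
end_to_end_transaction()
actual = time.perf_counter() - start

verdict = "pass" if actual <= EXPECTED_RESPONSE_TIME else "fail"
print(f"actual={actual:.3f}s verdict={verdict}")
```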

Stress Testing: It is defined as a type of testing in which abnormal actions, beyond-capacity operations, and high-volume data operations are performed on the application by multiple users, in order to check whether the application remains stable in spite of the abnormal behavior and stress. Hence stress testing checks stability.

Thursday, April 24, 2008

Testing Definitions

Smoke Testing (Cursory Testing): It is a type of testing in which one performs an initial, non-detailed test on an application, preferably in a short span of time, to check whether all the desired objects, features, and windows are available, in order to perform detailed testing on them.

Sanity Testing: It is a type of testing carried out initially as non-detailed testing, in a short span of time, just to make sure that the application is proper (to see that all the entities are available) in order to carry on detailed testing. Hence sanity testing is the same as smoke testing.
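
A smoke/sanity check can be sketched as a simple availability comparison; the window inventory below is hypothetical, not a real GUI API:

```python
# Smoke-testing sketch: a quick, non-detailed check that all desired windows
# exist on the build before detailed testing starts.

required_windows = {"Login", "Home", "Search", "Reports"}    # from the design
available_windows = {"Login", "Home", "Search", "Reports"}   # found on the build

missing = required_windows - available_windows
print("smoke test:", "pass" if not missing else f"fail, missing {sorted(missing)}")
```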

Regression Testing: It is defined as a type of testing in which an already tested functionality is tested again and again, both to make sure that the functionality is defect free (bug regression testing) and to ensure that the existing correct functionality is not affected by the addition of new functionality (functional regression testing).

Regression testing is of two types:

Bug Regression Testing: It is a type of regression testing in which a specific functionality is repeatedly tested to see whether any new bugs arise as a result of the rectification of old bugs. Since the focus is on bugs, it is known as bug regression testing.

Functional Regression Testing: It is another type of regression testing in which the existing correct functionality is tested again whenever a change is added to it, to check whether it is affected by the new change.

Re-Testing: It is defined as a type of testing in which an already tested functionality is tested again and again to make sure that the defect, if any, is reproducible, to rule out environmental issues, and to ensure the robustness (strength) of the functionality.

Note: The difference between regression and re-testing is that regression testing cannot be done in the first release and is possible only from the second release onwards, whereas re-testing is possible in all the releases, including the first.

Static Testing: It is a type of testing in which one can perform testing on an application without executing the application.
Example: GUI Testing, document testing, etc.

Dynamic Testing: It is a type of testing in which one can perform testing on an application while the application is being executed.
Example: Functionality testing.

Alpha Testing: It is a type of user acceptance testing that is conducted on the product as the final testing within the development company, just before it is delivered to the customer. Alpha testing is done by either the test engineer or the customer's representative.

Beta Testing: It is a type of user acceptance testing that is conducted on the product as the first testing within the customer's environment, after the product is delivered and deployed into the customer's environment and is being used by the real-time users. Beta testing is done by the real-time users.

The advantage of alpha testing is that if any defects are encountered, they can be rectified immediately, as the product is not yet delivered. The disadvantage of beta testing is that immediate rectification is not possible, and it is always time consuming, as it needs to follow a formal process.

Installation Testing (Deployment Testing): It is a type of testing in which, once the module is delivered to the testing department, it is deployed into the testing environment as per the guidelines defined in the deployment document. The developer checks whether the deployment is done successfully as per the guidelines.

Compatibility Testing: It is a type of testing in which products are tested in various environments, created and simulated from combinations of several environmental components such as clients, application servers, database servers, and operating systems, in order to check whether the products are compatible with these environments and so address the business needs of various enterprises.

Usability Testing: It is a type of testing in which the test engineer checks the “user friendliness” factor of an application apart from the functionality of it.

Exploratory Testing: It is a type of testing in which the test engineer initially has no functional knowledge; through exploration of the application he comes to know the functionality and performs testing on it at the same time. Hence in this type of testing, learning and testing are performed simultaneously. Since exploration is combined with testing, it is known as exploratory testing.

Mutation Testing: It is a type of white box testing in which an initial version of the program is modified into multiple versions, each changed version being known as a "mutant". As each mutant is generated, the corresponding test data is used to check whether the mutant behaves as per the requirement. Since mutants are used in this testing, it is known as mutation testing.
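
A tiny mutation-testing sketch: one hand-written mutant of an addition function, and test data that "kills" (detects) it. The functions and data are illustrative:

```python
# Mutation-testing sketch: mutate the original program and check whether the
# prepared test data detects the change.

def original(a, b):
    return a + b          # original version

def mutant(a, b):
    return a - b          # mutant: '+' mutated into '-'

# Test data prepared for the function: (inputs, expected output)
test_data = [((2, 3), 5), ((0, 4), 4)]

def killed(program):
    """A mutant is killed when at least one test case fails on it."""
    return any(program(*args) != expected for args, expected in test_data)

print("original killed:", killed(original))  # False: tests pass on the original
print("mutant killed:  ", killed(mutant))    # True: tests detect the mutation
```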

Monkey Testing: It is a type of testing in which one intentionally performs abnormal, beyond-capacity, and high-volume data operations on the application to check whether it is stable in spite of the user's abnormal behavior.

End-to-End Testing (Environment Testing): It is a type of testing in which a full-fledged, complete transaction is performed on an application in order to check whether all the environmental components are operationally available to complete the transaction successfully.

Forced Error Testing: It is a type of testing in which the application is tested with invalid inputs in order to check whether the error messages it displays are appropriate.
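
Forcing errors with invalid inputs and checking the displayed messages can be sketched as follows; the validator, its messages, and the inputs are all illustrative:

```python
# Forced-error-testing sketch: feed invalid input and verify that the
# error message produced is the appropriate one.

def validate_age(value):
    """Hypothetical field validator returning an error message, or None if valid."""
    if not value.isdigit():
        return "Age must be a number"
    if not (0 < int(value) < 130):
        return "Age must be between 1 and 129"
    return None

# Force errors with invalid inputs and compare against the expected messages:
cases = [("abc", "Age must be a number"), ("500", "Age must be between 1 and 129")]
for bad_input, expected_msg in cases:
    actual_msg = validate_age(bad_input)
    print(bad_input, "->", "pass" if actual_msg == expected_msg else "fail")
```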

Scalability Testing: It is a type of testing in which one checks and ensures that the application can be scaled or enhanced with respect to new functionality or changes in external factors, without demanding a re-design or a change of environment.

Reliability Testing: It is a type of testing in which one performs a combination of normal and abnormal activities on the application, preferably for a long duration, in order to check whether the application remains stable in spite of the abnormality over that duration. This testing is also known as Soak Testing.

Security Testing: It is a type of testing in which the vital information of the application, and the application itself, is tested to verify that it is accessible to valid users and protected from invalid users and destructive agents such as viruses. In other words, security testing ensures the protection of vital information from illegal access and undesirable transactions.

Accessibility Testing: Accessibility is the extension of user friendliness to users with disabilities, apart from normal users. Precisely, accessibility is the extension of usability. Accessibility testing is a type of testing in which the test engineer checks whether the application satisfies the accessibility criteria.

Heuristic Testing: It is a type of testing in which the test engineer relies on his past experience to cover the hidden areas where the possibility of defects is higher, apart from the normal testing that he performs driven by the test case document.

Ad hoc Testing: It is a type of testing in which the test engineer performs random, informal testing on an application without using a test case document, unlike formal testing, in order to cover the areas not covered by the test case document.

Testing done without any formal testing technique is called ad hoc testing.

Tuesday, April 15, 2008

Testing Methodologies

Depending on what aspect of an application is tested, and on when that specific part is tested, the testing methodology is divided into the following testing methods:

Black Box Testing: It is defined as a method of testing in which one can perform testing on an application without having internal structural knowledge of the application.

Usually this testing is done by the test engineer without internal structural knowledge but with thorough functional knowledge.

White Box Testing: It is defined as a method of testing in which one performs testing on an application (the program part of it) with internal structural knowledge.

This testing is done by the developers. It focuses on the programming part of an application, whereas black box testing focuses on the functional part. Hence the test engineer must be a functional expert and the developer must be a programming expert.

Gray Box Testing: It is defined as a derived method of testing in which both white box and black box techniques are applied.

In other words, gray box testing is basically done by the test engineer, but with internal structural knowledge, so that the testing is effective enough to hint the developer and point out the error straight away, optimizing the developer's rectification process.