Test Levels


Test levels comprise:

  • Component Testing
  • Integration Testing
  • System Testing
  • Acceptance Testing


Component testing

Component testing, also known as unit, module or program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that is separately testable.

Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system.

Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner.


A stub is called from the software component to be tested; a driver calls a component to be tested.
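
To make the distinction concrete, here is a minimal sketch in Python (all names hypothetical): the test class acts as the driver, because it calls the component under test, while the hand-written stub stands in for a dependency that does not exist yet.

```python
import unittest

class TaxServiceStub:
    """Stub: called *from* the component under test; returns a
    canned answer in place of the unfinished tax service."""
    def tax_rate(self, region):
        return 0.20  # fixed value, just enough to let the test run

class BillingComponent:
    """The component under test (hypothetical)."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net_amount, region):
        return net_amount * (1 + self.tax_service.tax_rate(region))

class BillingComponentTest(unittest.TestCase):
    """Driver: it *calls* the component under test."""
    def test_total_includes_tax(self):
        billing = BillingComponent(TaxServiceStub())
        self.assertAlmostEqual(billing.total(100.0, "EU"), 120.0)

if __name__ == "__main__":
    unittest.main()
```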

Component testing may include testing of functionality and specific non-functional characteristics, such as resource behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing (e.g. decision coverage).
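
As an illustration of structural testing, the sketch below (hypothetical function and tests) shows two test cases that together achieve decision coverage of a single if-statement: one exercises the true branch, the other the false branch.

```python
import unittest

def shipping_cost(order_value):
    # One decision; each test below exercises one of its outcomes.
    if order_value >= 50:
        return 0.0   # true branch: free shipping
    return 4.95      # false branch: flat fee

class ShippingCostDecisionCoverage(unittest.TestCase):
    def test_true_branch_free_shipping(self):
        self.assertEqual(shipping_cost(60), 0.0)

    def test_false_branch_flat_fee(self):
        self.assertEqual(shipping_cost(20), 4.95)

if __name__ == "__main__":
    unittest.main()
```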

 Test cases are derived from work products such as the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and in practice usually involves the programmer who wrote the code.

Defects are typically fixed as soon as they are found, without formally recording the incidents found.

Integration testing 

Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware) and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size. For example:


  • component integration testing tests the interactions between software components and is done after component testing (a minimal sketch follows this list);
  • system integration testing tests the interactions between different systems and may be done after system testing.
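
As a rough sketch of component integration testing (all names hypothetical), the test below wires two real, already unit-tested components together and checks the interaction across their interface, rather than stubbing either side:

```python
import unittest

class Repository:
    """Real component A: simple in-memory storage."""
    def __init__(self):
        self._items = {}
    def save(self, key, value):
        self._items[key] = value
    def load(self, key):
        return self._items[key]

class OrderService:
    """Real component B: depends on Repository through its interface."""
    def __init__(self, repository):
        self._repository = repository
    def place_order(self, order_id, amount):
        self._repository.save(order_id, amount)
        return self._repository.load(order_id)

class OrderServiceIntegrationTest(unittest.TestCase):
    def test_order_round_trips_through_real_repository(self):
        # Both components are real; the test targets their interaction.
        service = OrderService(Repository())
        self.assertEqual(service.place_order("A-1", 99.50), 99.50)

if __name__ == "__main__":
    unittest.main()
```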

In the case of system integration, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems that can even run on different platforms.

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which increases risk. This leads to varying approaches to integration testing.

Big-bang integration 


One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole.

This is called 'big-bang' integration testing. Big-bang testing has the advantage that everything is finished before integration testing starts.

There is no need to simulate (as yet unfinished) parts. The major disadvantage is that in general it is time-consuming and difficult to trace the cause of failures with this late integration.

Big-bang integration may therefore seem like a good idea when planning the project, if one is optimistic and expects to find no problems.

Incremental integration  


If one believes integration testing will find defects, it is good practice to consider whether time might be saved by breaking the integration down into steps. The other extreme is that all programs are integrated one by one, with a test carried out after each step (incremental integration testing).

Between these two extremes, there is a range of variants.

The incremental approach has the advantage that the defects are found early in a smaller assembly when it is relatively easy to detect the cause.

A disadvantage is that it can be time-consuming, since stubs and drivers have to be developed and used in the test.

Within incremental integration testing a range of possibilities exist, partly depending on the system architecture:

  • Top-down: testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs (see the sketch after this list).
  • Bottom-up: testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
  • Functional incremental: integration and testing take place on the basis of the functions or functionality, as documented in the functional specification.
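
For example, one top-down step might integrate and test a menu layer first, with the report layer below it replaced by a stub until it is ready (a hypothetical sketch; in a bottom-up approach the report layer would instead be tested first, using a driver):

```python
import unittest

class ReportStub:
    """Stands in for the lower-level report component, not yet integrated."""
    def render(self):
        return "stub report"

class MainMenu:
    """Upper-level component, tested first in top-down integration."""
    def __init__(self, report):
        self._report = report
    def show_report(self):
        return "MENU> " + self._report.render()

class TopDownStepTest(unittest.TestCase):
    def test_menu_delegates_to_report_layer(self):
        menu = MainMenu(ReportStub())
        self.assertEqual(menu.show_report(), "MENU> stub report")

if __name__ == "__main__":
    unittest.main()
```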

The best choice is to start integration with those interfaces that are expected to cause the most problems. Doing so helps avoid major defects being discovered only at the end of the integration test stage.

In order to reduce the risk of late defect discovery, integration should normally be incremental rather than 'big bang'.

Ideally testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, they can be developed in the order required for most efficient testing. 

Both functional and structural approaches may be used. Testing of specific non-functional characteristics (e.g. performance) may also be included in integration testing.

Integration testing may be carried out by the developers, but can be done by a separate team of specialist integration testers, or by a specialist group of developers/integrators including non-functional specialists.

System testing 

System testing is concerned with the behavior of the whole system/product, as defined by the scope of a development project or product. System testing is most often the final test performed on behalf of development to verify that the system to be delivered meets the specification, and its purpose may be to find as many defects as possible.

Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager.

System testing should investigate both functional and non-functional requirements of the system. Typical non-functional tests include performance and reliability. 
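
As a minimal illustration of such a non-functional check (hypothetical operation and an assumed two-second budget), a performance test might assert that a response completes within the agreed time:

```python
import time
import unittest

def search(term):
    """Stand-in for a call to the system under test (hypothetical)."""
    time.sleep(0.05)          # simulate some work
    return [term + "-result"]

class SearchPerformanceTest(unittest.TestCase):
    def test_search_responds_within_budget(self):
        start = time.perf_counter()
        results = search("invoice")
        elapsed = time.perf_counter() - start
        self.assertTrue(results)
        # The 2-second budget is an assumed requirement, for illustration.
        self.assertLess(elapsed, 2.0)

if __name__ == "__main__":
    unittest.main()
```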

Testers may also need to deal with incomplete or undocumented requirements.

System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. 
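
For instance, given a specified rule such as 'a discount applies to orders of 100 or more' (a hypothetical requirement), boundary value analysis derives test values just below, on and just above the boundary from the specification alone, without looking at the code:

```python
import unittest

def discount_applies(order_value):
    """Behavior under test; specified as: discount from 100 upwards."""
    return order_value >= 100

class DiscountBoundaryTest(unittest.TestCase):
    def test_boundary_values(self):
        # Boundary value analysis: below, on, and above the boundary.
        for value, expected in [(99, False), (100, True), (101, True)]:
            with self.subTest(value=value):
                self.assertEqual(discount_applies(value), expected)

if __name__ == "__main__":
    unittest.main()
```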

System testing requires a controlled test environment with regard to, among other things, control of the software versions, testware and the test data.

A system test is executed by the development organization in a (properly controlled) environment. The test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found by testing.


Acceptance testing

When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing.

Acceptance testing is most often the responsibility of the user or customer, although other stakeholders may be involved as well. Executing the acceptance test requires a test environment that is, in most respects, representative of the production environment ('as-if production').

The goal of acceptance testing is to establish confidence in the system, a part of the system or specific non-functional characteristics (e.g. usability) of the system.

Acceptance testing is most often focused on a validation type of testing, whereby we are trying to determine whether the system is fit for purpose. Finding defects should not be the main focus in acceptance testing. 
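
A user acceptance check therefore tends to walk a business scenario end to end rather than probe for defects; a rough sketch (hypothetical workflow and names) might read:

```python
import unittest

class WebShop:
    """Hypothetical facade over the delivered system."""
    def __init__(self):
        self._basket = []
    def add_to_basket(self, item):
        self._basket.append(item)
    def checkout(self):
        return {"status": "confirmed", "items": list(self._basket)}

class PlaceOrderAcceptanceTest(unittest.TestCase):
    """Validates fitness for purpose: can a user complete a purchase?"""
    def test_user_can_place_an_order(self):
        shop = WebShop()
        shop.add_to_basket("blue mug")
        confirmation = shop.checkout()
        self.assertEqual(confirmation["status"], "confirmed")
        self.assertIn("blue mug", confirmation["items"])

if __name__ == "__main__":
    unittest.main()
```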

Although it assesses the system's readiness for deployment and use, it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance of a system.


  • The user acceptance test focuses mainly on functionality, validating the fitness-for-use of the system by the business user.
  • The operational acceptance test (also called production acceptance test) validates whether the system meets the requirements for operation.

The user acceptance test is performed by the users and application managers. In terms of planning, the user acceptance test usually links tightly to the system test and will, in many cases, be organized partly overlapping in time. 



  • Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance should be formally defined when the contract is agreed. 


  • Compliance acceptance testing or regulation acceptance testing is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.



  • If the system has been developed for the mass market, e.g. commercial off-the-shelf software (COTS), then testing it for individual users or customers is not practical or even possible in some cases. Feedback is needed from potential or existing users in their market before the software product is put out for sale commercially. 

Very often this type of system undergoes two stages of acceptance test:
  • Alpha testing takes place at the developer's site. A cross-section of potential users and members of the developer's organization are invited to use the system. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.


  • Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization where the defects are repaired. 

Note that organizations may use other terms, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.




