Test Types


Test types are introduced as a means of clearly defining the objective of a certain test level for a program or project. 

We need to think about different types of testing because testing the functionality of the component or system alone may not be sufficient at each level to meet the overall test objectives.

A test type is focused on a particular test objective, which could be the testing of:
  • a function to be performed by the component or system; 
  • a non-functional quality characteristic, such as reliability or usability; 
  • the structure or architecture of the component or system; or 
  • related to changes, i.e. confirming that defects have been fixed (confirmation testing, or re-testing) and looking for unintended changes (regression testing).

Depending on its objectives, testing will be organized differently. For example, component testing aimed at performance would be quite different from component testing aimed at achieving decision coverage.

  • Functional Testing
  • Non-Functional Testing
  • Structural Testing
  • Regression/Confirmation Testing

Functional Testing

The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.


Functional tests are based on these functions, described in documents or understood by the testers and may be performed at all test levels. Functional testing considers the specified behavior and is often also referred to as black-box testing.
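As a minimal sketch of the black-box idea, the hypothetical `is_leap_year` function below is tested purely against its specification (inputs and expected outputs), with no reference to how it is implemented:

```python
def is_leap_year(year: int) -> bool:
    """Implementation under test. A black-box tester sees only the spec:
    a year is a leap year if divisible by 4, except centuries,
    which must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived purely from the specification, not the code.
test_cases = [
    (2024, True),   # divisible by 4
    (1900, False),  # century not divisible by 400
    (2000, True),   # century divisible by 400
    (2023, False),  # not divisible by 4
]

for year, expected in test_cases:
    assert is_leap_year(year) == expected, f"failed for {year}"
print("all functional tests passed")
```

The same test cases would remain valid if the implementation were rewritten, which is exactly the point of specification-based testing.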



Testing functionality can be done from two perspectives: 

  • requirements-based 
  • business-process-based 

Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests. The requirements can also be prioritized using risk criteria, and that prioritization used to prioritize the tests.

Business-process-based testing uses knowledge of the business processes. Business processes describe the scenarios involved in the day-to-day business use of the system. Use cases are a very useful basis for test cases from a business perspective.


The techniques used for functional testing are often specification-based, but experience-based techniques can also be used. Test conditions and test cases are derived from the functionality of the component or system.

As part of test design, a model may be developed, such as:
  • a process model,
  • a state transition model, or
  • a plain-language specification.
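For instance, a state transition model can be expressed as a transition table, and test cases derived by walking valid paths and probing invalid ones. The order workflow below is a hypothetical example:

```python
# Hypothetical state transition model for an order's life cycle.
transitions = {
    ("created", "pay"):     "paid",
    ("paid", "ship"):       "shipped",
    ("shipped", "deliver"): "delivered",
    ("created", "cancel"):  "cancelled",
    ("paid", "cancel"):     "cancelled",
}

def apply_event(state: str, event: str) -> str:
    """Return the next state, or raise on an invalid transition."""
    key = (state, event)
    if key not in transitions:
        raise ValueError(f"invalid event {event!r} in state {state!r}")
    return transitions[key]

# A positive test case is a path through the model:
# a sequence of events with the expected state after each.
state = "created"
for event, expected in [("pay", "paid"), ("ship", "shipped"),
                        ("deliver", "delivered")]:
    state = apply_event(state, event)
    assert state == expected

# A negative test case: shipping an unpaid order must be rejected.
try:
    apply_event("created", "ship")
    assert False, "expected an invalid transition"
except ValueError:
    pass
```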

Non-Functional Testing
The target of non-functional testing is the testing of the quality characteristics, or non-functional attributes, of the system. Here we are interested in how well or how fast something is done.



The characteristics and their sub-characteristics (as defined in the ISO/IEC 9126 quality model) are, respectively:

  • functionality
    • suitability
    • accuracy
    • security
    • interoperability
    • compliance
  • reliability
    • maturity (robustness)
    • fault-tolerance
    • recoverability
    • compliance
  • usability
    • understandability
    • learnability
    • operability
    • attractiveness
    • compliance
  • efficiency
    • time behavior (performance)
    • resource utilization
    • compliance
  • maintainability
    • analyzability
    • changeability
    • stability
    • testability
    • compliance
  • portability
    • adaptability
    • installability
    • co-existence
    • replaceability
    • compliance
Structural Testing

If we are talking about the structure of a system, we may call it the system architecture. Structural testing is often referred to as 'white-box' or 'glass-box' testing because we are interested in what is happening 'inside the box'.

Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items.

At the component integration level, it may be based on the architecture of the system, such as a calling hierarchy. At the system, system integration or acceptance testing levels, the test basis could be a business model or menu structure.

At component level, and to a lesser extent at component integration testing, there is good tool support to measure code coverage. 

Coverage measurement tools assess the percentage of executable elements that have been exercised by a test suite.

If coverage is not 100%, then additional tests may need to be written and run to cover those parts that have not yet been exercised. This of course depends on the exit criteria.

The techniques used for structural testing are structure-based techniques, also referred to as white-box techniques. Control flow models are often used to support structural testing.
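As an illustration of one such coverage item, the hypothetical function below contains a single decision; exercising both of its outcomes achieves 100% decision coverage with just two tests:

```python
def apply_discount(total: float) -> float:
    """Hypothetical function: orders over 100 get a 10% discount."""
    if total > 100:          # the single decision
        return total * 0.9   # true branch
    return total             # false branch

# Two tests exercise both outcomes of the decision,
# giving 100% decision coverage of this function.
assert apply_discount(200.0) == 180.0  # true branch
assert apply_discount(50.0) == 50.0    # false branch
```

In practice, a tool such as coverage.py (run with its `--branch` option) would report whether both outcomes of each decision were taken, rather than checking this by hand.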

Testing Related to Changes (Confirmation and Regression Testing)
If you have made a change to the software, you will have changed the way it functions, the way it performs (or both), and its structure.



Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software that has had the defect fixed. 

In this case we will need to execute the test again to confirm that the defect has indeed been fixed.

It is important to ensure that the test is executed in exactly the same way as it was the first time, using the same inputs, data and environment.




Regression testing
Like confirmation testing, regression testing involves executing test cases that have been executed before. The difference is that, for regression testing, the test cases probably passed the last time they were executed.

More specifically, the purpose of regression testing is to:
  • verify that modifications in the software or the environment have not caused unintended adverse side effects; and
  • confirm that the system still meets its requirements.
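A regression test pins down behavior that previously passed, so that an unintended change is caught on re-execution. A minimal sketch, assuming a hypothetical `format_price` function whose known-good outputs are kept as a baseline:

```python
def format_price(amount: float) -> str:
    """Hypothetical function under regression test."""
    return f"${amount:,.2f}"

# Baseline of previously verified input/output pairs. After a
# modification, any deviation from the baseline is an unintended
# side effect -- a regression.
baseline = {
    0.0: "$0.00",
    19.99: "$19.99",
    1234567.5: "$1,234,567.50",
}

for amount, expected in baseline.items():
    actual = format_price(amount)
    assert actual == expected, (
        f"regression: {amount} -> {actual!r}, expected {expected!r}"
    )
print("no regressions detected")
```

Because the same cases are re-run after every change, regression suites are strong candidates for automation.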


Maintenance Testing
Once deployed, a system is often in service for years or even decades. During this time the system and its operational environment are often corrected, changed or extended.

Testing that is executed during this life cycle phase is called 'maintenance testing'. 

Note that maintenance testing is different from maintainability testing, which evaluates how easy it is to maintain the system.



As stated, maintenance testing is done on an existing operational system.

It is triggered by:
  • modifications, 
  • migration, or 
  • retirement of the system. 

Modifications include:
  • planned enhancement changes, 
  • corrective and emergency changes, and 
  • changes of environment, such as planned operating system or database upgrades, or 
  • patches to newly exposed or discovered vulnerabilities of the operating system. 

Maintenance testing for migration should include:
  • operational testing of the new environment, as well as the changed software.

Maintenance testing for the retirement of a system may include:
  • the testing of data migration or archiving, if long data-retention periods are required.




Planned modifications
  • perfective modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance);
  • adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation);
  • corrective planned modifications (deferrable correction of defects).

The standard structured test approach is almost fully applicable to planned modifications.


On average, planned modifications represent over 90% of all maintenance work on systems.


Ad-hoc corrective modifications
Ad-hoc corrective modifications are concerned with defects requiring an immediate solution. Organizations have different rules and procedures for handling problems of this kind, and it is usually impossible to take all the steps required for a structured approach to testing.








