Identifying Test Conditions

  • Test analysis is the process of looking at something that can be used to derive test information. 

  • This basis for the tests is called the 'test basis'. It could be a system requirement, a technical specification, the code itself (for structural testing), or a business process. Sometimes tests can be based on an experienced user's knowledge of the system, which may not be documented. The test basis includes whatever the tests are based on. 

  • From a testing perspective, we look at the test basis in order to see what could be tested - these are the test conditions. A test condition is simply something that we could test. 

  • If we are looking to measure coverage of code decisions (branches), then the test basis would be the code itself, and the list of test conditions would be the decision outcomes (True and False). If we have a requirements specification, the table of contents can be our initial list of test conditions. 
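As a minimal sketch (the function and its tests are hypothetical, not from any particular system), each decision in the code yields two test conditions, one per outcome:

```python
def apply_discount(total):
    """Hypothetical function containing a single decision (branch)."""
    if total >= 100:           # one decision -> two test conditions: True and False
        return total * 0.9     # True outcome
    return total               # False outcome

# One test per decision outcome achieves full decision (branch) coverage here.
assert apply_discount(150) == 135.0   # exercises the True outcome
assert apply_discount(50) == 50       # exercises the False outcome
```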

  • A good way to understand requirements better is to try to define tests that would meet those requirements. Although listing test conditions is not intended to imply that everything must be tested, it is too easily interpreted in that way.

  • Some authors use 'test inventory' for the list of things that could be tested; others use 'test objectives' for broad categories of things to test and 'test inventories' for the actual list of things that need to be tested.

  • When identifying test conditions, we want to 'throw the net wide' to identify as many as we can, and then we will start being selective about which ones to take forward to develop in more detail and combine into test cases. We could call them 'test possibilities'.

  • Testing everything is known as exhaustive testing, and that is an impractical goal. Therefore, a subset of possible tests should be selected. In practice the subset we select may be a very small subset, and yet it has to have a high probability of finding most of the defects in a system.
  • We need some intelligent thought processes to guide our selection; test techniques (i.e. test design techniques) are such thought processes. A testing technique helps us select a good set of tests from the total number of all possible tests for a given system.
  • Different techniques offer different ways of looking at the software under test, possibly challenging assumptions made about it. Each technique provides a set of rules or guidelines for the tester to follow in identifying test conditions and test cases.
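One such rule-based technique is equivalence partitioning. As an illustration (the age field and its valid range are hypothetical), the technique's rule is to pick one representative value per partition rather than testing every possible input:

```python
def is_valid_age(age):
    """Hypothetical validation rule: ages 18-65 are accepted."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below valid range": (10, False),   # invalid partition
    "within valid range": (30, True),   # valid partition
    "above valid range": (70, False),   # invalid partition
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name
```

Three tests stand in for every possible age value, which is the point of a design technique: a small, justified selection from the total set of possible tests.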
  • The test conditions that are chosen will depend on the test strategy or detailed test approach. They might be based on:
    •  risk
    • models of the system
    • likely failures
    • compliance requirements
    • expert advice or heuristics
  • A heuristic is a way of directing your attention, a common-sense rule useful in solving a problem. Test conditions should be able to be linked back to their sources in the test basis - this is called traceability.
  • Traceability can be either horizontal through all the test documentation for a given test level (e.g. system testing, from test conditions through test cases to test scripts) or vertical through the layers of development documentation (e.g. from requirements to components). 
  • Why is traceability important? 
    • The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.
    • A set of tests that has run OK in the past has started to have serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.
    • Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed - was every requirement tested?

  • Having identified a list of test conditions, it is important to prioritize them, so that the most important test conditions are identified before a lot of time is spent in designing test cases based on them.
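The traceability questions above can be sketched as a simple mapping from requirements to tests; all requirement and test IDs below are hypothetical:

```python
# Minimal sketch of horizontal traceability: requirement IDs -> test case IDs.
trace = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                # not yet covered by any test
}

# "This requirement changed - which tests are affected?"
affected = trace["REQ-001"]
assert affected == ["TC-01", "TC-02"]

# "Before release: was every requirement tested?"
untested = [req for req, tests in trace.items() if not tests]
assert untested == ["REQ-003"]
```

In practice this mapping lives in a test management tool rather than code, but the lookups it has to answer are exactly these two.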

  • It is a good idea to try to think of twice as many test conditions as you need; then you can throw away the less important ones, and you will have a much better set of test conditions!
  • Note that spending some extra time now, while identifying test conditions, is cheap, as we are only listing things that we could test. This is a good investment of our time - we don't want to spend time implementing poor tests!
  • Test conditions can be identified for test data as well as for test inputs and test outcomes, for example, different types of record, different distribution of types of record within a file or database, different sizes of records or fields in a record. The test data should be designed to represent the most important types of data, i.e. the most important data conditions.
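A sketch of enumerating data conditions for records in a file (the record types and size categories are hypothetical examples, not a fixed scheme):

```python
# Candidate test data conditions: record types crossed with field-size categories.
record_types = ["header", "detail", "trailer"]
field_sizes = ["empty field", "typical length", "maximum length"]

# Enumerate every combination first ('throw the net wide'),
# then select and prioritize the most important data conditions.
candidates = [(rtype, size) for rtype in record_types for size in field_sizes]
assert len(candidates) == 9   # 3 record types x 3 size categories
```

Distribution conditions (e.g. a file of all one record type versus a realistic mix) would be listed the same way before deciding which are worth turning into test cases.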
