7 tips for improving load speed

Plan for performance

Are you building a new website? Be sure to discuss the importance of performance early on and set targets. That way, you have a faster load speed from the beginning and don’t have to implement fixes later.

Step 1: test, step 2: test…

Are you seeing a pattern here? 😉 Testing is crucial! Before you launch, load test your website multiple times to make sure it can handle the traffic of real site visitors. This is especially important for sites with complex hosting, such as a load-balanced configuration.

Implement some “quick wins”

To be clear, there’s no “get fast quick” scheme for site load speeds. But there is a tried-and-true template that will put you ahead of the curve: use modern image formats, enable compression on the server via Gzip, and leverage browser caching. You’ll find more low-hanging fruit in the tips that follow.
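
If (say) a small Python service happens to sit behind your site, here is a minimal sketch of those two quick wins using only the standard library; the port, content, and cache lifetime are illustrative assumptions, not a production setup:

```python
# Minimal sketch: gzip-compress responses and send a long-lived Cache-Control header.
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

HTML = b"<html><body><h1>Hello</h1></body></html>"

class CompressingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        accepts_gzip = "gzip" in self.headers.get("Accept-Encoding", "")
        body = gzip.compress(HTML) if accepts_gzip else HTML
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        if accepts_gzip:
            self.send_header("Content-Encoding", "gzip")
        # Encourage browsers to cache this content for a day (assumed lifetime).
        self.send_header("Cache-Control", "public, max-age=86400")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), CompressingHandler).serve_forever()
```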

Be careful with your images!

Good websites have great graphic content – but they also take into account how images impact load speed. You can improve image performance by considering file formats, image compression, and lazy loading.
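
As a rough illustration, assuming the Pillow library is installed, batch-converting images to a modern format such as WebP could look like the sketch below; the folder, size cap, and quality setting are made-up values. Lazy loading itself is usually just the loading="lazy" attribute on the img tag.

```python
# Minimal sketch: downscale JPEGs and re-encode them as WebP (assumes Pillow).
from pathlib import Path
from PIL import Image

MAX_SIZE = (1600, 1600)   # never ship images larger than they are displayed

for source in Path("static/img").glob("*.jpg"):
    with Image.open(source) as img:
        img.thumbnail(MAX_SIZE)  # downscale in place, keeps aspect ratio
        img.save(source.with_suffix(".webp"), "WEBP", quality=80)
        print(f"{source.name} -> {source.with_suffix('.webp').name}")
```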

Think of your mobile visitors

More and more people surf the web on their phone these days, which makes mobile-optimized sites a huge priority! Since mobile users tend to use slower, less stable Internet connections, Accelerated Mobile Pages (AMPs) are a great way to get them content faster.

Prioritize above-the-fold

First impressions matter – and your above-the-fold content can make or break them! Consider inlining critical styles for above-the-fold content, then loading the rest of your code in chunks. This type of asynchronous loading can create a faster perceived load time for the user.

Assess your external scripts

Third-party scripts are a great tool – but can make your website feel a little crowded. Assess the performance of external scripts on your site load speed, and replace or remove those that are negatively impacting user experience.

DevOps preface

If you’re old, don’t try to change yourself, change your environment. —B. F. Skinner

One view of DevOps is that it helps take on that last mile problem in software: value delivery. The premise is that encouraging behaviors such as teaming, feedback, and experimentation will be reinforced by desirable outcomes such as better software, delivered faster and at lower cost. For many, the DevOps discourse then quickly turns to automation. That makes sense as automation is an environmental intervention that is relatively actionable. If you want to change behavior, change the environment!

In this context, automation becomes a significant investment decision with strategic import. DevOps automation engineers face a number of design choices. What level of interface abstraction is appropriate for the automation tooling? Where should you separate automation concerns of an infrastructure nature from those that should be more application centric?

These questions matter because automation tooling that is accessible to all can better connect all the participants in the software delivery process. That is going to help foster all those positive teaming behaviors we are after. Automation that is decoupled from infrastructure provisioning events makes it possible to quickly tenant new project streams. Users can immediately self-serve without raising a new infrastructure requisition.

We want to open the innovation process to all, be they 10x programmers or citizen developers. Doing DevOps with such a platform makes this possible, and this blog will show you how.

This is a practical guide that will show how to easily implement and automate powerful cloud deployment patterns using a container management platform that offers self-service to users. Its natively container-aware approach allows us to present an application-centric view of automation.

Basics of Testing

What is Testing?

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, or business reputation, and even injury or death. Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. As described, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analysing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.

Another common misperception of testing is that it focuses entirely on verification of requirements, user stories, or other specifications. While testing does involve checking whether the system meets specified requirements, it also involves validation, which is checking whether the system will meet user and other stakeholder needs in its operational environment(s).

Test activities are organised and carried out differently in different lifecycles.

Typical Objectives of Testing

For any given project, the objectives of testing may include: 

  • To prevent defects by evaluating work products such as requirements, user stories, design, and code
  • To verify whether all specified requirements have been fulfilled 
  • To check whether the test object is complete and validate if it works as the users and other stakeholders expect
  • To build confidence in the level of quality of the test object 
  • To find defects and failures, and thus reduce the level of risk of inadequate software quality
  • To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
  • To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:

  • During component testing, one objective may be to find as many failures as possible so that the underlying defects are identified and fixed early. Another objective may be to increase code coverage of the component tests.
  • During acceptance testing, one objective may be to confirm that the system works as expected and satisfies requirements. Another objective of this testing may be to give information to stakeholders about the risk of releasing the system at a given time.

Testing and Debugging

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyses, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and associated component and component integration testing (continuous integration). However, in Agile development and in some other software development lifecycles, testers may be involved in debugging and component testing.
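
To make the distinction concrete, here is a minimal sketch with a made-up discount function: executing the test exposes a failure; locating and fixing the defect is debugging; re-running the same test afterwards is the confirmation test.

```python
# Minimal sketch: a test exposes a failure caused by a defect in the code.
import unittest

def apply_discount(price: float, percent: float) -> float:
    # Defect: the discount is added instead of subtracted.
    return price + price * (percent / 100)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Test execution reveals a failure: 90.0 expected, 110.0 returned.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    # Debugging (a development activity) locates and fixes the defect;
    # re-running this same test afterwards is the confirmation test.
    unittest.main()
```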

Why is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s Contributions to Success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include: 

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

In addition to these examples, the achievement of defined test objectives contributes to overall software development and maintenance success.

Quality Assurance and Testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organisation with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing. As described early on, testing contributes to the achievement of quality in a variety of ways.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
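
For instance, here is a minimal sketch (the function and inputs are made up) of a defect that stays hidden until a specific input occurs:

```python
# Minimal sketch: a defect that only causes a failure for one specific input.
def average_of(values):
    # Defect: dividing by len(values) fails when the list is empty.
    return sum(values) / len(values)

print(average_of([2, 4, 6]))   # Works: prints 4.0 -- the defect stays hidden.
print(average_of([]))          # Fails: ZeroDivisionError -- the defect causes a failure.
```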

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused due to defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other test-ware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that do not detect defects that they should have detected; false positives are reported as defects, but aren’t actually defects.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analysed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced. 

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.

Seven Testing Principles

A number of testing principles have been suggested over the past 50 years and offer general guidelines common for all testing. 

1. Testing shows the presence of defects, not their absence 

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness. 

2. Exhaustive testing is impossible 

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts. 
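
A quick back-of-the-envelope sketch (with assumed field sizes) shows how fast the input space explodes even for a small form:

```python
# Minimal sketch: counting the input combinations of a small, made-up form.
from math import prod

field_values = {
    "country": 195,              # possible countries
    "age": 120,                  # plausible ages
    "payment_method": 5,         # supported payment options
    "free_text_name": 26 ** 10,  # just 10 lowercase letters
}

combinations = prod(field_values.values())
years_at_1000_per_sec = combinations / 1000 / 3600 / 24 / 365
print(f"{combinations:,} input combinations for one simple form")
print(f"~{years_at_1000_per_sec:,.0f} years at 1,000 tests per second")
# Hundreds of millions of years for one form -- hence risk analysis,
# test techniques, and priorities must focus the test effort instead.
```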

3. Early testing saves time and money 

To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.

4. Defects cluster together 

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in principle 2).

5. Beware of the pesticide paradox 

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

6. Testing is context dependent 

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

7. Absence-of-errors is a fallacy 

Some organisations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations, or that is inferior compared to other competing systems.

Test Process

There is no one universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives. These sets of test activities are a test process. The proper, specific software test process in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organisation’s test strategy.

Test Process in Context 

Contextual factors that influence the test process for an organisation include, but are not limited to:

  • Software development lifecycle model and project methodologies being used
  • Test levels and test types being considered
  • Product and project risks
  • Business domain
  • Operational constraints, including but not limited to:
    • Budgets and resources
    • Timescales
    • Complexity
    • Contractual and regulatory requirements 
  • Organisational policies and practices 
  • Required internal and external standards

The following sections describe general aspects of organisational test processes in terms of the following: 

  • Test activities and tasks 
  • Test work products 
  • Traceability between the test basis and test work products

It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
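
A minimal sketch of that idea, with made-up requirement IDs and device names, tracks which elements of the test basis are covered by at least one test case:

```python
# Minimal sketch: coverage criterion "at least one test case per test basis element".
requirements = {"REQ-01", "REQ-02", "REQ-03"}
devices = {"Pixel 8", "iPhone 15", "Galaxy S24"}
test_basis = requirements | devices

# Each test case records which elements of the test basis it covers.
test_cases = {
    "TC-01": {"REQ-01", "Pixel 8"},
    "TC-02": {"REQ-02", "iPhone 15"},
    "TC-03": {"REQ-03", "Pixel 8"},
}

covered = set().union(*test_cases.values())
uncovered = test_basis - covered
print(f"Coverage: {len(covered & test_basis)}/{len(test_basis)} test basis elements")
print("Not yet covered:", sorted(uncovered))   # e.g. the Galaxy S24 device
```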

Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design 
  • Test implementation
  • Test execution
  • Test completion

Each main group of activities is composed of constituent activities, which will be described in the subsections below. Each constituent activity consists of multiple individual tasks, which would vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning. So test activities are also happening on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main groups of activities will involve overlap, combination, concurrency, or omission, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models. For example, the evaluation of exit criteria for test execution as part of a given test level may include: 

  • Checking test results and logs against specified coverage criteria
  • Assessing the level of component or system quality based on test results and logs
  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.
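
As a rough illustration, evaluating exit criteria can be as simple as comparing measured values against agreed thresholds; the criteria and numbers below are assumptions:

```python
# Minimal sketch: evaluating exit criteria during test monitoring and control.
exit_criteria = {"min_requirement_coverage": 0.95, "max_unresolved_defects": 3}
measured = {"requirement_coverage": 0.91, "unresolved_defects": 2}

checks = {
    "requirement coverage": measured["requirement_coverage"] >= exit_criteria["min_requirement_coverage"],
    "unresolved defects": measured["unresolved_defects"] <= exit_criteria["max_unresolved_defects"],
}

for name, ok in checks.items():
    print(f"{name}: {'met' if ok else 'NOT met'}")

if not all(checks.values()):
    # A test control action follows: e.g. write and run additional tests,
    # or report the deviation to stakeholders in the test progress report.
    print("Exit criteria not yet satisfied -> more testing or a control action is needed")
```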

Test analysis

During test analysis, the test basis is analysed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities: 

  • Analysing the test basis appropriate to the test level being considered, for example:
    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behaviour
    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces
    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
  • Evaluating the test basis and test items to identify defects of various types, such as: 
    • Ambiguities
    • Omissions
    • Inconsistencies
    • Inaccuracies
    • Contradictions
    • Superfluous statements
  • Identifying features and sets of features to be tested
  • Defining and prioritising test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. Techniques such as behaviour-driven development (BDD) and acceptance test-driven development (ATDD), for example, involve generating test conditions and test cases from user stories and acceptance criteria prior to coding; they also verify, validate, and detect defects in the user stories and acceptance criteria.
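
As a rough illustration of the ATDD idea, here is a minimal sketch in which acceptance criteria from a made-up user story are written as tests before the code exists; writing them also exposes an ambiguity in the story:

```python
# Minimal ATDD-style sketch: tests derived from acceptance criteria, written first.
# User story (made up): "As a shopper, I want free shipping on orders of 50 EUR or more."
import unittest

def shipping_cost(order_total_eur: float) -> float:
    raise NotImplementedError("to be implemented after the tests are agreed")

class ShippingAcceptanceCriteria(unittest.TestCase):
    def test_free_shipping_at_threshold(self):
        self.assertEqual(shipping_cost(50.00), 0.00)

    def test_standard_shipping_below_threshold(self):
        # The story does not say what the standard rate is -- an ambiguity
        # the team must resolve before implementation (here we assume 4.95).
        self.assertEqual(shipping_cost(49.99), 4.95)

if __name__ == "__main__":
    unittest.main()
```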

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other test-ware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritising test cases and sets of test cases 
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques.

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the test-ware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?” 

Test implementation includes the following major activities:

  • Developing and prioritizing test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts 
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  • Building the test environment (including, potentially, test harnesses, service virtualisation, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites

Test design and test implementation tasks are often combined.
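
For example, using Python's standard unittest module (the test names and ordering are illustrative assumptions), test procedures can be collected into suites and arranged into a simple execution schedule:

```python
# Minimal sketch: building test suites and a simple execution schedule with unittest.
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(True)   # placeholder for a real test procedure

class CheckoutTests(unittest.TestCase):
    def test_payment_declined(self):
        self.assertTrue(True)   # placeholder for a real test procedure

def build_schedule() -> unittest.TestSuite:
    loader = unittest.TestLoader()
    smoke = loader.loadTestsFromTestCase(LoginTests)        # scheduled to run first
    regression = loader.loadTestsFromTestCase(CheckoutTests)
    return unittest.TestSuite([smoke, regression])          # execution order

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_schedule())
```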

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented. 

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and test-ware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results
  • Analysing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked)
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.

Test completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalising and archiving the test environment, the test data, the test infrastructure, and other test-ware for later reuse
  • Handing over the test-ware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analysing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Test Work Products

Test work products are created as part of the test process. Just as there is significant variation in the way that organisations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organised and managed, and in the names used for those work products.

Many of the test work products described in this section can be captured and managed using test management tools and defect management tools.

Test planning work products 

Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control.

Test monitoring and control work products

Test monitoring and control work products typically include various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones. All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarising the test execution results once those become available. 

Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. 

Test monitoring and control, and the work products created during these activities, are further explained on this site.

Test analysis work products

Test analysis work products include defined and prioritised test conditions, each of which is ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test analysis may involve the creation of test charters. Test analysis may also result in the discovery and reporting of defects in the test basis. 

Test design work products

Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results. Such high-level test cases are reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s) it covers.

Test design also results in:

  • the design and/or identification of the necessary test data
  • the design of the test environment
  • the identification of infrastructure and tools

However, the extent to which these results are documented varies significantly.

Test implementation work products

Test implementation work products include:

  • Test procedures and the sequencing of those test procedures
  • Test suites
  • A test execution schedule

Ideally, once test implementation is complete, achievement of coverage criteria established in the test plan can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions.

In some cases, test implementation involves creating work products using or used by tools, such as service virtualisation and automated test scripts.

Test implementation also may result in the creation and verification of test data and the test environment. The completeness of the documentation of the data and/or environment verification results may vary significantly.

The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results which are associated with concrete test data are identified by using a test oracle.
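
Here is a minimal sketch of this, with assumed VAT rates and amounts: one high-level test case plus concrete test data yields several executable low-level test cases, and the expected results act as a simple test oracle.

```python
# Minimal sketch: a high-level test case made executable by binding concrete test data.
import unittest

def gross_price(net: float, vat_rate: float) -> float:
    return round(net * (1 + vat_rate), 2)

# High-level test case: "gross price equals net price plus VAT".
# Concrete test data (net, rate, expected) turns it into low-level test cases.
TEST_DATA = [
    (100.00, 0.19, 119.00),
    (50.00, 0.07, 53.50),
    (0.00, 0.19, 0.00),
]

class GrossPriceTests(unittest.TestCase):
    def test_gross_price(self):
        for net, rate, expected in TEST_DATA:
            with self.subTest(net=net, rate=rate):
                self.assertAlmostEqual(gross_price(net, rate), expected)

if __name__ == "__main__":
    unittest.main()
```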

In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.

Test conditions defined in test analysis may be further refined in test implementation.

Test execution work products

Test execution work products include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  • Defect reports
  • Documentation about which test item(s), test object(s), test tools, and test-ware were involved in the testing

Ideally, once test execution is complete, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). For example, we can say which requirements have passed all planned tests, which requirements have failed tests and/or have defects associated with them, and which requirements have planned tests still waiting to be run. This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations, change requests or product backlog items, and finalised test-ware.

Traceability between the Test Basis and Test Work Products

As mentioned earlier, test work products and the names of those work products vary significantly. Regardless of these variations, in order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element, as described above. In addition to the evaluation of test coverage, good traceability supports:

  • Analysing the impact of changes
  • Making testing auditable
  • Meeting IT governance criteria
  • Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
  • Relating the technical aspects of testing to stakeholders in terms that they can understand
  • Providing information to assess product quality, process capability, and project progress against business goals

Some test management tools provide test work product models that match part or all of the test work products outlined in this section. Some organisations build their own management systems to organise the work products and provide the information traceability they require.
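
For organisations that build their own lightweight tooling, bi-directional traceability can start out as nothing more than a mapping queried in both directions. A minimal sketch, with made-up requirement and test case IDs:

```python
# Minimal sketch: bi-directional traceability between requirements and test cases.
from collections import defaultdict

test_case_to_requirements = {
    "TC-01": ["REQ-01"],
    "TC-02": ["REQ-01", "REQ-02"],
    "TC-03": ["REQ-03"],
}
results = {"TC-01": "pass", "TC-02": "fail", "TC-03": "not run"}

# Derive the reverse direction: requirement -> test cases.
requirement_to_test_cases = defaultdict(list)
for tc, reqs in test_case_to_requirements.items():
    for req in reqs:
        requirement_to_test_cases[req].append(tc)

# Report the status of each element of the test basis in stakeholder terms.
for req, tcs in sorted(requirement_to_test_cases.items()):
    statuses = {results[tc] for tc in tcs}
    if statuses == {"pass"}:
        verdict = "all planned tests passed"
    elif "fail" in statuses:
        verdict = "has failing tests"
    else:
        verdict = "tests still waiting to run"
    print(f"{req}: {verdict} ({', '.join(tcs)})")
```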

The Psychology of Testing

Software development, including software testing, involves human beings. Therefore, human psychology has important effects on software testing.

Human Psychology and Testing 

Identifying defects during a static test such as a requirement review or user story refinement session, or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality. To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.

Testers and test managers need to have good interpersonal skills to be able to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues. Ways to communicate well include the following examples:

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
  • Emphasise the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organisation, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
  • Communicate test results and other findings in a neutral, fact-focused way without criticising the person who created the defective item. Write objective and factual defect reports and review findings.
  • Try to understand how the other person feels and the reasons they may react negatively to the information.
  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier. Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviours with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester’s and Developer’s Mindsets

Developers and testers often think differently. The primary objective of development is to design and build a product. As discussed earlier, the objectives of testing include verifying and validating the product, finding defects prior to release, and so forth. These are different sets of objectives which require different mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.

A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to become aware of errors in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organising the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different from that of the work product authors (i.e., business analysts, product owners, designers, and developers), since they have different cognitive biases from the authors.

Test management

Test Organisation

Independent Testing

Testing tasks may be done by people in a specific testing role, or by people in another role (e.g., customers). A certain degree of independence often makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. 

Degrees of independence in testing include the following (from low level of independence to high level):

  • No independent testers; the only form of testing available is developers testing their own code 
  • Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues’ products 
  • Independent test team or group within the organisation, reporting to project management or executive management 
  • Independent testers from the business organisation or user community, or with specialisations in specific test types such as usability, security, performance, regulatory/compliance, or portability 
  • Independent testers external to the organisation, either working on-site (in-house) or off-site (outsourcing)

For most types of projects, it is usually best to have multiple test levels, with some of these levels handled by independent testers. Developers should participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work.

The way in which independence of testing is implemented varies depending on the software development lifecycle model. For example, in Agile development, testers may be part of a development team. In some organisations using Agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organisations, product owners may perform acceptance testing to validate user stories at the end of each iteration.

Potential drawbacks of test independence include:

  • Isolation from the development team may lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team
  • Developers may lose a sense of responsibility for quality
  • Independent testers may be seen as a bottleneck
  • Independent testers may lack some important information (e.g., about the test object)

Many organisations are able to successfully achieve the benefits of test independence while avoiding the drawbacks.

Tasks of a Test Manager and Tester 

In this article, two test roles are covered, test managers and testers. The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organisation.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organisations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  • Develop or review a test policy and test strategy for the organisation 
  • Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management
  • Write and update the test plan(s) 
  • Coordinate the test plan(s) with project managers, product owners, and others 
  • Share testing perspectives with other project activities, such as integration planning 
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities 
  • Prepare and deliver test progress reports and test summary reports based on the information gathered 
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control 
  • Support setting up the defect management system and adequate configuration management of test-ware 
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s) 
  • Decide about the implementation of test environment(s) 
  • Promote and advocate the testers, the test team, and the test profession within the organisation 
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organisation, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans 
  • Analyse, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis) 
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis 
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management 
  • Design and implement test cases and test procedures 
  • Prepare and acquire test data
  • Create the detailed test execution schedule 
  • Execute tests, evaluate the results, and document deviations from expected results 
  • Use appropriate tools to facilitate the test process 
  • Automate tests as needed (may be supported by a developer or a test automation expert)
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability 
  • Review tests developed by others

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

Test Planning and Estimation

Purpose and Content of a Test Plan

A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organisation, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources. 

As the project and test planning progress, more information becomes available and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product’s lifecycle. (Note that the product’s lifecycle may extend beyond a project’s scope to include the maintenance phase.) Feedback from test activities should be used to recognise changing risks so that planning can be adjusted. Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing. Test planning activities may include the following and some of these may be documented in a test plan:

  • Determining the scope, objectives, and risks of testing
  • Defining the overall approach of testing
  • Integrating and coordinating the test activities into the software lifecycle activities
  • Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out
  • Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)
  • Selecting metrics for test monitoring and control
  • Budgeting for the test activities
  • Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)

The content of test plans varies, and can extend beyond the topics identified above.

Test Strategy and Test Approach

A test strategy provides a generalised description of the test process, usually at the product or organisational level. Common types of test strategies include:

  • Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritised based on the level of risk.
  • Model-Based: In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
  • Methodical: This type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages. 
  • Process-compliant (or standard-compliant): This type of test strategy involves analysing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organisation. 
  • Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organisation itself.
  • Regression-averse: This type of test strategy is motivated by a desire to avoid regression of existing capabilities. This test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.
  • Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.

An appropriate test strategy is often created by combining several of these types of test strategies. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together.

While the test strategy provides a generalised description of the test process, the test approach tailors the test strategy for a particular project or release. The test approach is the starting point for selecting the test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively). The tailoring of the strategy is based on decisions made in relation to the complexity and goals of the project, the type of product being developed, and product risk analysis. The selected approach depends on the context and may consider factors such as risks, safety, available resources and skills, technology, the nature of the system (e.g., custom-built versus COTS), test objectives, and regulations.

Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done)

In order to exercise effective control over the quality of the software, and of the testing, it is advisable to have criteria which define when a given test activity should start and when the activity is complete. Entry criteria (more typically called definition of ready in Agile development) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, more time-consuming, more costly, and more risky. Exit criteria (more typically called definition of done in Agile development) define what conditions must be achieved in order to declare a test level or a set of tests completed. Entry and exit criteria should be defined for each test level and test type, and will differ based on the test objectives.

Typical entry criteria include: 

  • Availability of testable requirements, user stories, and/or models (e.g., when following a model-based testing strategy)
  • Availability of test items that have met the exit criteria for any prior test levels
  • Availability of test environment
  • Availability of necessary test tools
  • Availability of test data and other necessary resources

Typical exit criteria include:

  • Planned tests have been executed
  • A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks, code) has been achieved 
  • The number of unresolved defects is within an agreed limit 
  • The number of estimated remaining defects is sufficiently low
  • The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient
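
As a rough illustration only (the criteria names and thresholds below are hypothetical, not prescribed by any standard), such exit criteria can be captured as data and checked automatically at the end of a test level:

```python
# Hypothetical exit criteria expressed as data, each with a check against
# measurements gathered at the end of the test level.
exit_criteria = {
    "requirement coverage >= 95%": lambda m: m["requirements_covered"] / m["requirements_total"] >= 0.95,
    "no open critical defects":    lambda m: m["open_critical_defects"] == 0,
    "all planned tests executed":  lambda m: m["tests_executed"] >= m["tests_planned"],
}

measurements = {
    "requirements_covered": 96, "requirements_total": 100,
    "open_critical_defects": 0,
    "tests_executed": 240, "tests_planned": 240,
}

unmet = [name for name, check in exit_criteria.items() if not check(measurements)]
print("All exit criteria met" if not unmet else f"Exit criteria not met: {unmet}")
```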

Even without exit criteria being satisfied, it is also common for test activities to be curtailed due to the budget being expended, the scheduled time being completed, and/or pressure to bring the product to market. It can be acceptable to end testing under such circumstances, if the project stakeholders and business owners have reviewed and accepted the risk to go live without further testing.

Test Execution Schedule

Once the various test cases and test procedures are produced (with some test procedures potentially automated) and assembled into test suites, the test suites can be arranged in a test execution schedule that defines the order in which they are to be run. The test execution schedule should take into account such factors as prioritizations, dependencies, confirmation tests, regression tests, and the most efficient sequence for executing the tests.

Ideally, test cases would be ordered to run based on their priority levels, usually by executing the test cases with the highest priority first. However, this practice may not work if the test cases have dependencies or the features being tested have dependencies. If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first. Similarly, if there are dependencies across test cases, they must be ordered appropriately regardless of their relative priorities. Confirmation and regression tests must be prioritised as well, based on the importance of rapid feedback on changes, but here again dependencies may apply.

In some cases, various sequences of tests are possible, with differing levels of efficiency associated with those sequences. In such cases, trade-offs between efficiency of test execution versus adherence to prioritisation must be made.
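
The sketch below shows one simple way to build such a schedule: a greedy ordering that always runs the highest-priority test case whose dependencies have already been executed. The test case names, priorities, and dependencies are invented for the example, and real schedulers may weigh the trade-offs differently.

```python
import heapq

def execution_order(tests):
    """Order test cases so that dependencies always run first; among the tests
    whose dependencies are satisfied, run the highest priority (lowest number) next.
    `tests` maps a test case id to (priority, [ids it depends on])."""
    dependents = {tid: [] for tid in tests}
    unmet = {tid: len(deps) for tid, (_, deps) in tests.items()}
    for tid, (_, deps) in tests.items():
        for dep in deps:
            dependents[dep].append(tid)

    ready = [(prio, tid) for tid, (prio, deps) in tests.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        prio, tid = heapq.heappop(ready)
        order.append(tid)
        for nxt in dependents[tid]:
            unmet[nxt] -= 1
            if unmet[nxt] == 0:
                heapq.heappush(ready, (tests[nxt][0], nxt))
    if len(order) != len(tests):
        raise ValueError("circular dependency between test cases")
    return order

# Invented suite: TC2 has the highest priority (1) but depends on the
# lower-priority TC5, so TC5 must still run before it.
suite = {"TC1": (2, []), "TC2": (1, ["TC5"]), "TC5": (3, [])}
print(execution_order(suite))  # ['TC1', 'TC5', 'TC2']
```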

Factors Influencing the Test Effort

Test effort estimation involves predicting the amount of test-related work that will be needed in order to meet the objectives of the testing for a particular project, release, or iteration. Factors influencing the test effort may include characteristics of the product, characteristics of the development process, characteristics of the people, and the test results, as shown below.

Product characteristics

  • The risks associated with the product
  • The quality of the test basis
  • The size of the product
  • The complexity of the product domain
  • The requirements for quality characteristics (e.g., security, reliability) 
  • The required level of detail for test documentation 
  • Requirements for legal and regulatory compliance

Development process characteristics

  • The stability and maturity of the organisation
  • The development model in use
  • The test approach
  • The tools used
  • The test process
  • Time pressure

People characteristics

  • The skills and experience of the people involved, especially with similar projects and products (e.g., domain knowledge)
  • Team cohesion and leadership

Test results

  • The number and severity of defects found
  • The amount of re-work required

Test Estimation Techniques

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:

  • The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values
  • The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts

For example, in Agile development, burn-down charts are examples of the metrics-based approach as effort remaining is being captured and reported, and is then used to feed into the team’s velocity to determine the amount of work the team can do in the next iteration; whereas planning poker, also called scrum poker, is an example of the expert-based approach, as team members are estimating the effort to deliver a feature based on their experience.

Within sequential projects, defect removal models are examples of the metrics-based approach, where volumes of defects and time to remove them are captured and reported, which then provides a basis for estimating future projects of a similar nature; whereas the Wideband Delphi estimation technique is an example of the expert-based approach in which a group of experts provides estimates based on their experience.
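
A minimal sketch of the metrics-based technique, assuming (purely for illustration) that test effort scales with the number of requirements and that figures from three similar past projects are available:

```python
# Hypothetical historical data: (number of requirements, test effort in person-days)
history = [(120, 95), (80, 60), (150, 130)]

# Metrics-based estimate: average effort per requirement from similar past
# projects, scaled to the size of the new project.
effort_per_req = sum(effort for _, effort in history) / sum(reqs for reqs, _ in history)
new_project_reqs = 100
estimate = effort_per_req * new_project_reqs
print(f"Estimated test effort: {estimate:.0f} person-days")  # ~81 person-days
```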

Test Monitoring and Control

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and should be used to assess test progress and to measure whether the test exit criteria, or the testing tasks associated with an Agile project’s definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported. Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include: 

  • Re-prioritising tests when an identified risk occurs (e.g., software delivered late)
  • Changing the test schedule due to availability or unavailability of a test environment or other resources
  • Re-evaluating whether a test item meets an entry or exit criterion due to rework

Metrics Used in Testing

Metrics can be collected during and at the end of test activities in order to assess:

  • Progress against the planned schedule and budget
  • Current quality of the test object
  • Adequacy of the test approach
  • Effectiveness of the test activities with respect to the objectives

Common test metrics include:

  • Percentage of planned work done in test case preparation (or percentage of planned test cases implemented)
  • Percentage of planned work done in test environment preparation
  • Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
  • Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results)
  • Test coverage of requirements, user stories, acceptance criteria, risks, or code
  • Task completion, resource allocation and usage, and effort
  • Cost of testing, including the cost compared to the benefit of finding the next defect or the cost compared to the benefit of running the next test
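
For illustration, several of these metrics can be derived directly from raw execution figures; the numbers below are invented:

```python
# Hypothetical raw figures gathered during test monitoring
planned_test_cases = 200
implemented = 180
executed = 150
passed = 135
defects_found = 42
defects_fixed = 30
size_kloc = 25   # size of the test object in thousands of lines of code

print(f"Test case preparation: {implemented / planned_test_cases:.0%}")        # 90%
print(f"Execution progress:    {executed / planned_test_cases:.0%}")           # 75%
print(f"Pass rate:             {passed / executed:.0%}")                       # 90%
print(f"Defect density:        {defects_found / size_kloc:.1f} defects/KLOC")  # 1.7
print(f"Defects resolved:      {defects_fixed / defects_found:.0%}")           # 71%
```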

Audiences, Contents, and Purposes for Test Reports

The purpose of test reporting is to summarise and communicate test activity information, both during and at the end of a test activity (e.g., a test level). The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report.

During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. In addition to content common to test progress reports and test summary reports, typical test progress reports may also include:

  • The status of the test activities and progress against the test plan
  • Factors impeding progress
  • Testing planned for the next reporting period
  • The quality of the test objects

When exit criteria are reached, the test manager issues the test summary report. This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.

Typical test summary reports may include:

  • Summary of testing performed
  • Information on what occurred during a test period
  • Deviations from plan, including deviations in schedule, duration, or effort of test activities
  • Status of testing and product quality with respect to the exit criteria or definition of done
  • Factors that have blocked or continue to block progress
  • Metrics of defects, test cases, test coverage, activity progress, and resource consumption
  • Residual risks
  • Reusable test work products produced

The contents of a test report will vary depending on the project, the organisational requirements, and the software development lifecycle. For example, a complex project with many stakeholders or a regulated project may require more detailed and rigorous reporting than a quick software update. As another example, in Agile development, test progress reporting may be incorporated into task boards, defect summaries, and burn-down charts, which may be discussed during a daily stand-up meeting.

In addition to tailoring test reports based on the context of the project, test reports should be tailored based on the report’s audience. The type and amount of information that should be included for a technical audience or a test team may be different from what would be included in an executive summary report. In the first case, detailed information on defect types and trends may be important. In the latter case, a high-level report (e.g., a status summary of defects by priority, budget, schedule, and test conditions passed/failed/not tested) may be more appropriate.

Configuration Management

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.

To properly support testing, configuration management may involve ensuring the following:

  • All test items are uniquely identified, version controlled, tracked for changes, and related to each other
  • All items of testware are uniquely identified, version controlled, tracked for changes, related to each other, and related to versions of the test item(s) so that traceability can be maintained throughout the test process
  • All identified documents and software items are referenced unambiguously in test documentation

During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.

Risks and Testing

Definition of Risk

Risk involves the possibility of an event in the future which has negative consequences. The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.
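
One common, simple way to quantify this (the 1 to 5 ordinal scales below are illustrative, not mandated) is to multiply the likelihood rating by the impact rating:

```python
def risk_level(likelihood, impact):
    """Both factors rated on a simple 1 (low) to 5 (high) scale."""
    return likelihood * impact

print(risk_level(likelihood=4, impact=5))  # 20 -> a high level of risk
print(risk_level(likelihood=1, impact=2))  # 2  -> a low level of risk
```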

Product and Project Risks

Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders. When the product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), product risks are also called quality risks. Examples of product risks include:

  • Software might not perform its intended functions according to the specification
  • Software might not perform its intended functions according to user, customer, and/or stakeholder needs
  • A system architecture may not adequately support some non-functional requirement(s)
  • A particular computation may be performed incorrectly in some circumstances
  • A loop control structure may be coded incorrectly
  • Response times may be inadequate for a high-performance transaction processing system
  • User experience (UX) feedback might not meet product expectations

Project risk involves situations that, should they occur, may have a negative effect on a project’s ability to achieve its objectives. Examples of project risks include:

  • Project issues:
    • Delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done 
    • Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organisation may result in inadequate funding 
    • Late changes may result in substantial re-work
  • Organisational issues: 
    • Skills, training, and staff may not be sufficient 
    • Personnel issues may cause conflict and problems 
    • Users, business staff, or subject matter experts may not be available due to conflicting business priorities
  • Political issues:
    • Testers may not communicate their needs and/or the test results adequately
    • Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
    • There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)
  • Technical issues: 
    • Requirements may not be defined well enough 
    • The requirements may not be met, given existing constraints 
    • The test environment may not be ready on time 
    • Data conversion, migration planning, and their tool support may be late 
    • Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases
    • Poor defect management and similar problems may result in accumulated defects and other technical debt
  • Supplier issues:
    • A third party may fail to deliver a necessary product or service, or go bankrupt
    • Contractual issues may cause problems to the project

Project risks may affect both development activities and test activities. In some cases, project managers are responsible for handling all project risks, but it is not unusual for test managers to have responsibility for test-related project risks.

Product Quality and Risk-based Testing

Risk is used to focus the effort required during testing. It is used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce the probability of an adverse event occurring, or to reduce the impact of an adverse event. Testing is used as a risk mitigation activity, to provide information about identified risks, as well as providing information on residual (unresolved) risks. 

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk’s likelihood and impact. The resulting product risk information is used to guide test planning, the specification, preparation and execution of test cases, and test monitoring and control. Analysing product risks early contributes to the success of a project. 

In a risk-based approach, the results of product risk analysis are used to:

  • Determine the test techniques to be employed
  • Determine the particular levels and types of testing to be performed (e.g., security testing, accessibility testing)
  • Determine the extent of testing to be carried out
  • Prioritise testing in an attempt to find the critical defects as early as possible 
  • Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis. To ensure that the likelihood of a product failure is minimised, risk management activities provide a disciplined approach to:

  • Analyse (and re-evaluate on a regular basis) what can go wrong (risks)
  • Determine which risks are important to deal with
  • Implement actions to mitigate those risks
  • Make contingency plans to deal with the risks should they become actual events

In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks.
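
As a small, hypothetical sketch of how product risk analysis results might guide prioritisation, the test suites covering the highest-scoring risks can simply be ordered to run first (the risk items, ratings, and suite names are invented):

```python
# Invented product risk analysis results: likelihood and impact ratings (1-5)
# plus the test suite that addresses each risk.
risks = [
    {"risk": "Interest calculated incorrectly", "likelihood": 3, "impact": 5, "suite": "billing"},
    {"risk": "Search responses too slow",       "likelihood": 4, "impact": 3, "suite": "performance"},
    {"risk": "Broken links in help pages",      "likelihood": 2, "impact": 1, "suite": "ui_links"},
]

# Run the suites covering the highest-scoring risks first.
for item in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(item["likelihood"] * item["impact"], item["suite"], "-", item["risk"])
# 15 billing - Interest calculated incorrectly
# 12 performance - Search responses too slow
# 2 ui_links - Broken links in help pages
```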

Defect Management

Since one of the objectives of testing is to find defects, defects found during testing should be logged. The way in which defects are logged may vary, depending on the context of the component or system being tested, the test level, and the software development lifecycle model. Any defects identified should be investigated and should be tracked from discovery and classification to their resolution (e.g., correction of the defects and successful confirmation testing of the solution, deferral to a subsequent release, acceptance as a permanent product limitation, etc.). In order to manage all defects to resolution, an organisation should establish a defect management process which includes a workflow and rules for classification. This process must be agreed with all those participating in defect management, including architects, designers, developers, testers, and product owners. In some organisations, defect logging and tracking may be very informal. 

During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects. For example, a test may fail when a network connection is broken or times out. This behaviour does not result from a defect in the test object, but is an anomaly that needs to be investigated. Testers should attempt to minimise the number of false positives reported as defects. 

Defects may be reported during coding, static analysis, reviews, or during dynamic testing, or use of a software product. Defects may be reported for issues in code or working systems, or in any type of documentation including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, or installation guides. In order to have an effective and efficient defect management process, organisations may define standards for the attributes, classification, and workflow of defects.

Typical defect reports have the following objectives: 

  • Provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s), as needed, or otherwise resolve the problem
  • Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)
  • Provide ideas for development and test process improvement

A defect report filed during dynamic testing typically includes:

  • An identifier
  • A title and a short summary of the defect being reported
  • Date of the defect report, issuing organization, and author
  • Identification of the test item (configuration item being tested) and environment
  • The development lifecycle phase(s) in which the defect was observed
  • A description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings (if found during test execution)
  • Expected and actual results
  • Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
  • Urgency/priority to fix
  • State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)
  • Conclusions, recommendations and approvals
  • Global issues, such as other areas that may be affected by a change resulting from the defect
  • Change history, such as the sequence of actions taken by project team members with respect to the defect to isolate, repair, and confirm it as fixed
  • References, including the test case that revealed the problem

Some of these details may be automatically included and/or managed when using defect management tools, e.g., automatic assignment of an identifier, assignment and update of the defect report state during the workflow, etc. Defects found during static testing, particularly reviews, will normally be documented in a different way, e.g., in review meeting notes.
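
For illustration only, the attributes listed above could be captured in a simple structure like the following; the field names and example values are hypothetical, not a standard schema:

```python
from dataclasses import dataclass, field
from typing import List

# One possible way a defect management tool might model the report attributes above.
@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_item: str            # configuration item being tested
    environment: str
    lifecycle_phase: str
    description: str          # steps to reproduce, logs, screenshot references, etc.
    expected_result: str
    actual_result: str
    severity: str             # impact on stakeholder interests
    priority: str             # urgency to fix
    state: str = "open"       # e.g., open, deferred, duplicate, closed
    references: List[str] = field(default_factory=list)  # e.g., failing test case ids

report = DefectReport(
    identifier="DEF-1042",
    title="Interest understated for leap years",
    date="2024-03-01",
    author="tester@example.org",
    test_item="billing-service 2.3.1",
    environment="system test environment",
    lifecycle_phase="system testing",
    description="Create an account on 2024-01-01, post 1000.00, run the year-end job ...",
    expected_result="Interest calculated for 366 days",
    actual_result="Interest calculated for 365 days",
    severity="major",
    priority="high",
    references=["TC-INT-007"],
)
```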

Why is testing necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s contributions to success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality assurance and testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
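
A small, invented example of such a defect: the boundary condition below uses > instead of >=, so a failure occurs only for the single input value 100.

```python
def bulk_price_cents(quantity, unit_price_cents):
    """Specification: orders of 100 or more items get a 10% discount."""
    total = quantity * unit_price_cents
    if quantity > 100:          # defect: should be >= 100
        return total * 90 // 100
    return total

print(bulk_price_cents(99, 1000))   # 99000  - correct (no discount below 100)
print(bulk_price_cents(101, 1000))  # 90900  - correct (discount applied)
print(bulk_price_cents(100, 1000))  # 100000 - failure: the specification expects 90000
```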

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused by defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that fail to detect defects they should have detected; false positives are tests that report defects when no defect is actually present.
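
A small, invented illustration of a false positive caused by a defect in the testware rather than in the code under test (the test functions are written in a style that a runner such as pytest would collect):

```python
import datetime

# Code under test (correct).
def is_weekend(day: datetime.date) -> bool:
    return day.weekday() >= 5

# Defect in the testware, not in the code under test: the check depends on the
# date the test happens to run on, so it fails on most weekdays even though
# is_weekend() behaves correctly -> a false positive.
def test_is_weekend_false_positive():
    assert is_weekend(datetime.date.today())

# A deterministic version of the same check avoids the false positive.
def test_is_weekend_deterministic():
    assert is_weekend(datetime.date(2024, 6, 1))       # a Saturday
    assert not is_weekend(datetime.date(2024, 6, 3))   # a Monday
```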

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.
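
A hypothetical reconstruction of that single line of incorrect code, purely to make the chain from error to defect to failure concrete (the figures and the assumed calculation methods are invented):

```python
def yearly_interest(balance, annual_rate):
    return balance * annual_rate                            # defect: simple interest

def yearly_interest_intended(balance, annual_rate):
    return balance * ((1 + annual_rate / 365) ** 365 - 1)   # intended: daily compounding

print(f"{yearly_interest(1000, 0.05):.2f}")           # 50.00 -> incorrect payment (the failure)
print(f"{yearly_interest_intended(1000, 0.05):.2f}")  # 51.27 -> what the customer expected
```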

Tasks of a Test Manager and Tester

The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organization.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  •  Develop or review a test policy and test strategy for the organization
  •  Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management
  • Write and update the test plan(s)
  • Coordinate the test plan(s) with project managers, product owners, and others
  • Share testing perspectives with other project activities, such as integration planning
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities
  • Prepare and deliver test progress reports and test summary reports based on the information gathered
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control
  • Support setting up the defect management system and adequate configuration management of testware
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s)
  • Decide about the implementation of test environment(s)
  • Promote and advocate for the testers, the test team, and the test profession within the organisation
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organization, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans
  • Analyse, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis)
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management
  • Design and implement test cases and test procedures
  • Prepare and acquire test data
  • Create the detailed test execution schedule
  • Execute tests, evaluate the results, and document deviations from expected results
  • Use appropriate tools to facilitate the test process
  • Automate tests as needed (may be supported by a developer or a test automation expert)
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability
  • Review tests developed by others

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

What are the testing objectives?

What we should test in a project may vary, and the objectives of testing can include:

  • Evaluating work products such as requirements, user stories, designs, and code
  • Verifying whether the test object is complete and validating that it works as users and other stakeholders expect
  • Building confidence in the level of quality of the test object
  • Preventing errors and defects
  • Finding defects that could lead to failures
  • Providing stakeholders with sufficient information to allow them to make informed decisions regarding the quality of the test object
  • Reducing the level of risk of inadequate software quality
  • Complying with contractual, legal, or regulatory requirements or standards, and verifying that the test object complies with those requirements or standards

The objectives of testing may vary from system to system, depending on the context of the component or system being tested, the test level, and the software development lifecycle model being used.