7 tips for improving load speed

Plan for performance

Are you building a new website? Be sure to discuss the importance of performance early on and set targets. That way, you have a faster load speed from the beginning and don’t have to implement fixes later.

Step 1: test, step 2: test…

Are you seeing a pattern here? 😉 Testing is crucial! Before you launch, load test your website multiple times to make sure it can handle the traffic of real site visitors. This is especially important for sites with complex hosting, such as a load-balanced configuration.

Implement some “quick wins”

To be clear, there’s no “get fast quick” scheme for site load speeds. But there is a tried-and-true template that will put you ahead of the curve. That includes making use of modern image formats, enabling compression on the server via Gzip, and leveraging browser caching. Find some more low-hanging fruit here.
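
If you want a rough sense of how much Gzip can shave off a text payload before you touch server configuration, a quick local check is easy to script. The sketch below uses Python’s standard gzip module on a hypothetical HTML string; actual savings depend on your server setup (for example, gzip or Brotli enabled in nginx or Apache) and on the content itself.

    import gzip

    # Hypothetical HTML payload; substitute a real page body to check your own site.
    html = ("<html><body>"
            + "<p>Repeated markup compresses very well.</p>" * 200
            + "</body></html>").encode("utf-8")

    compressed = gzip.compress(html, compresslevel=6)  # level 6 is a common server default

    print(f"original:   {len(html)} bytes")
    print(f"compressed: {len(compressed)} bytes")
    print(f"savings:    {100 * (1 - len(compressed) / len(html)):.1f}%")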

Be careful with your images!

Good websites have great graphic content – but they also take into account how images impact load speed. You can improve image performance by considering file formats, image compression, and lazy loading.
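
As one illustration of the file-format and compression points, here is a minimal sketch that batch-converts JPEGs to WebP using the third-party Pillow library (an assumption; an image CDN or build-tool plugin can do the same job). The folder names and quality setting are hypothetical starting points, and lazy loading itself is handled in your page markup rather than in a script like this.

    from pathlib import Path

    from PIL import Image  # Pillow: pip install Pillow

    SRC = Path("images/originals")   # hypothetical input folder
    DST = Path("images/optimized")   # hypothetical output folder
    DST.mkdir(parents=True, exist_ok=True)

    for src in SRC.glob("*.jpg"):
        with Image.open(src) as img:
            out = DST / (src.stem + ".webp")
            # quality=80 is a common starting point; tune it for your image set.
            img.save(out, format="WEBP", quality=80, method=6)
            print(f"{src.name}: {src.stat().st_size} -> {out.stat().st_size} bytes")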

Think of your mobile visitors

More and more people surf the web on their phone these days, which makes mobile-optimized sites a huge priority! Since mobile users tend to use slower, less stable Internet connections, Accelerated Mobile Pages (AMPs) are a great way to get them content faster.

Prioritize above-the-fold

First impressions matter – and your above-the-fold content can make or break them! Consider inlining styles for your above-the-fold content, then loading the rest of your code in chunks. This type of asynchronous loading can create a faster perceived load time for the user.

Assess your external scripts

Third-party scripts are a great tool – but can make your website feel a little crowded. Assess the performance of external scripts on your site load speed, and replace or remove those that are negatively impacting user experience.

DevOps preface

If you’re old, don’t try to change yourself, change your environment. —B. F. Skinner

One view of DevOps is that it helps take on that last mile problem in software: value delivery. The premise is that encouraging behaviors such as teaming, feedback, and experimentation will be reinforced by desirable outcomes such as better software, delivered faster and at lower cost. For many, the DevOps discourse then quickly turns to automation. That makes sense as automation is an environmental intervention that is relatively actionable. If you want to change behavior, change the environment!

In this context, automation becomes a significant investment decision with strategic import. DevOps automation engineers face a number of design choices. What level of interface abstraction is appropriate for the automation tooling? Where should you separate automation concerns of an infrastructure nature from those that should be more application centric?

These questions matter because automation tooling that is accessible to all can better connect all the participants in the software delivery process. That is going to help foster all those positive teaming behaviors we are after. Automation that is decoupled from infrastructure provisioning events makes it possible to quickly tenant new project streams. Users can immediately self-serve without raising a new infrastructure requisition.

We want to open the innovation process to all, be they 10x programmers or citizen developers. Doing DevOps with a container management platform makes this possible, and this blog will show you how.

This is a practical guide that will show you how to easily implement and automate powerful cloud deployment patterns using a container management platform. The platform gives users a self-service experience, and its natively container-aware approach lets us take an application-centric view of automation.

FUTURE OF DEVOPS

THE EARLY MAJORITY MOVES TO THE CLOUD

60% of businesses are adopting or expanding DevOps culture and processes, and 80% of businesses are now operating in the cloud.

DEVOPS AND THE CLOUD: A NATURAL PAIR
Let’s start with DevOps.
Forrester Research dubbed 2018 the year of DevOps. And it’s no wonder, with over half of enterprises implementing or expanding existing DevOps practices. So why are they doing that? Here are a few good reasons to consider it:
DEVOPS OFFERS YOUR ORGANIZATION:
• Greater productivity and faster delivery of products
• Greater visibility and collaboration across projects, departments, and individuals
• Less siloing
So, DevOps removes friction; and as a practical environment for DevOps, the cloud just makes sense.
HOW THE CLOUD ENHANCES YOUR DEVOPS ORGANIZATION

• Rapid deployment of new environments
• Reduced IT costs through subscription and SaaS (pay as you go) payment structures
• Moving from CapEx for hardware to OpEx for SaaS
• Fast, agile scalability
So why the urgency to make these changes? The truth is, they’re not really innovations anymore; the shift has already happened.
The bar has been raised and you need a new edge.

GAUGE YOUR DEVOPS PROGRESS
• Institute Agile practices that focus on communication, collaboration, customer feedback, and small, rapid releases. Agile operations remove rigidity from your processes and allow for greater innovation, while keeping accountability and increasing goal focus.
• Deploy a multi-cloud strategy with Kubernetes or another intermediary layer for cloud-agnostic, resilient infrastructure.
• Build cloud-native systems for core products, with lift-and-shift for systems that don’t require much scalability.
• Create microservices in containers rather than monolithic apps to increase your agility and your ability to innovate with less downtime.

Acceptance Criteria, Acceptance Tests and Experience-Based Practices

Writing Acceptance Criteria

Specifying acceptance criteria is an important acceptance testing task. It helps to refine requirements or user stories and provides the basis for acceptance tests. Business analysts and testers should collaborate closely on the specification of these criteria. This collaboration ensures high business value from the acceptance testing phase and increases the chance of a successful iteration or product release. 

Writing acceptance criteria forces business analysts and testers to think about functionality, performance, and other characteristics from a stakeholder or usage perspective. This supports early verification and validation of the related requirement or user story and provides a better chance of detecting inconsistencies, contradictions, missing information or other problems. 

The following good practices should be considered when writing acceptance criteria:

  • Well-written acceptance criteria are precise, measurable and concise. Each criterion must be written in a way that enables the tester to measure whether or not the test object complies with the acceptance criterion.
  • Well-written acceptance criteria do not include technical solution details. They concentrate on the question “What shall be achieved?” rather than on the question “How shall it be achieved?”.
  • Acceptance criteria should address non-functional requirements (quality characteristics) as well as functional requirements.

As with requirements and user stories, acceptance criteria should be reviewed through walkthroughs, technical reviews, iteration planning meetings or other methods (if necessary).

Designing Acceptance Tests

This section addresses the test techniques and approaches frequently used for acceptance testing.

Test Techniques for Acceptance Testing

In a requirements-based approach to acceptance testing, the tester derives test cases from the acceptance criteria related to each requirement or user story using black-box techniques such as equivalence partitioning or boundary value analysis.
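
As a minimal sketch of how this might look in practice, assume a hypothetical user story with the acceptance criterion “orders of 50.00 or more ship for free”. Equivalence partitioning gives two partitions (below and at-or-above the threshold), boundary value analysis adds the values around 50.00, and the resulting test data can be automated, here with pytest as one possible choice.

    import pytest

    def qualifies_for_free_shipping(order_total: float) -> bool:
        """Hypothetical implementation of the acceptance criterion under test."""
        return order_total >= 50.00

    # Test data chosen by equivalence partitioning (one representative per partition)
    # and boundary value analysis (just below, on, and just above the boundary).
    @pytest.mark.parametrize(
        "order_total, expected",
        [
            (0.00, False),    # lower partition, extreme value
            (49.99, False),   # just below the boundary
            (50.00, True),    # on the boundary
            (50.01, True),    # just above the boundary
            (500.00, True),   # upper partition, representative value
        ],
    )
    def test_free_shipping_threshold(order_total, expected):
        assert qualifies_for_free_shipping(order_total) is expected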

Acceptance testing may be augmented with other test techniques or approaches:

  • Business process-based testing, possibly combined with decision table testing, validates business processes and rules.
  • Experience-based testing leverages the tester’s experience, knowledge and intuition.
  • Risk-based testing is based on risk types and levels. Prioritisation and thoroughness of testing depend on previously identified product risks.
  • Model-based testing uses graphical (or textual) models to obtain acceptance tests.

Acceptance criteria should be verified by acceptance tests and traceability between the requirements / user story and related test cases should be managed.

Using the Gherkin Language to Write Test Cases

In ATDD and BDD, acceptance tests are often formulated in a structured language, referred to as the Gherkin language. Using the Gherkin language, test cases are phrased declaratively using a standardised pattern:

  • Given [a situation]
  • When [an action on the system]
  • Then [the expected result]

The pattern allows business analysts, testers and developers to write test cases in a way that is easily shared with stakeholders and may be translated into automated tests. 

The “Given” block aims to put the test object in a state before performing test actions in the “When” block. The “Then” block specifies the consequences that can be observed from the actions defined in the “When” block. Test cases written in Gherkin do not refer to user interface elements but rather to user actions on the system. They are structured natural language test cases that can be understood by all relevant stakeholders. 

In addition, the structure “Given – When – Then” can be parsed in an automated way. This allows automated test script creation using a keyword-driven testing approach. 
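
As a minimal sketch, the snippet below shows a Gherkin scenario (reproduced as a comment) together with the step definitions that bind it to executable code. It assumes the behave framework for Python and a hypothetical cash-withdrawal feature; other BDD tools such as Cucumber, SpecFlow, or pytest-bdd follow the same pattern.

    # features/withdraw.feature (Gherkin scenario, shown here as a comment):
    #   Scenario: Successful cash withdrawal
    #     Given an account with a balance of 100
    #     When the user withdraws 30
    #     Then the remaining balance is 70

    # features/steps/withdraw_steps.py -- step definitions behave matches to the scenario
    from behave import given, when, then

    @given("an account with a balance of {balance:d}")
    def step_account_with_balance(context, balance):
        context.balance = balance

    @when("the user withdraws {amount:d}")
    def step_withdraw(context, amount):
        context.balance -= amount  # stand-in for a call to the real system under test

    @then("the remaining balance is {expected:d}")
    def step_check_balance(context, expected):
        assert context.balance == expected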

Initially, Gherkin was specific to some software tools supporting BDD, but it is now synonymous with the “Given – When – Then” acceptance test design pattern. 

Experience-based Approaches for Acceptance Testing

All experience-based test techniques are relevant for acceptance testing. This section is focused on how exploratory testing can be used for acceptance tests, and on beta testing as a source of feedback on system usage.

Exploratory Testing

Exploratory testing is an experience-based test technique that is not based on detailed predefined test procedures. In exploratory testing, all activities are carried out within an uninterrupted period of time called a session. The testers are domain experts. They are familiar with user needs, requirements and business processes, but they are not necessarily familiar with the product under test. 

During an exploratory testing session, the tester accomplishes the following:

  • Learns how to work with the product
  • Designs the tests
  • Performs the tests
  • Interprets the results

It is a good practice in exploratory testing to use a test charter. The test charter is prepared prior to the testing session (possibly jointly by the business analyst and the tester) and is used by the person in charge of the exploratory session (either a business analyst, tester or another stakeholder). It includes information about the purpose, target, and scope of the exploratory session, the test setup, the duration of the session, and possibly some tactics to be used during the session (such as the type of user that shall be simulated during the exploratory session). Time-boxed sessions help to control the time and effort dedicated to the exploratory session. It is also good practice to perform exploratory testing in pairs or as team work. 

In Agile development, exploratory test sessions can be conducted during an iteration by the product owner and/or the testers for acceptance testing of user stories assigned to the iteration. 

Exploratory testing should be used to complement other more formal techniques in acceptance testing. For example, it may be used to provide rapid feedback on new features before methodical testing is applied. 

Beta Testing

Beta testing is a form of acceptance testing that is often used for Commercial Off-the-Shelf Software (COTS) or for Software as a Service (SaaS) platforms. It is conducted to obtain feedback from the market after development and in-house testing are completed. 

Unlike other forms of acceptance testing, beta testing is performed by potential or existing users at their own locations. Beta tests impose neither predefined test procedures nor a test charter. Apart from the observed findings, the test activities are usually not documented at all.

Because the product is tested in various realistic configurations by actual users in their business process context, beta testing may discover defects that escaped during the development process and previous test levels. Resolving issues found by beta tests helps organisations avoid costly hot-fixes or product recalls on a larger scale. 

Acceptance testing should not be limited to beta testing. Beta testing is not systematic or measurable. There is no guarantee that all requirements or user stories are covered by the tests. Moreover, beta testing is performed late in the development process whereas tests based on acceptance criteria support the “Early Testing” principle. 

Testing Throughout the Software Development Lifecycle

Software Development Lifecycle Models

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

Software Development and Software Testing

It is an important part of a tester’s role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:

  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

This article categorises common software development lifecycle models as follows:

  • Sequential development models
  • Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete. In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.

In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

Unlike the Waterfall model, the V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing. In this model, the execution of tests associated with each test level proceeds sequentially, but in some cases overlapping occurs.

Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally. The size of these feature increments varies, with some methods having larger pieces and some smaller pieces. The feature increments can be as small as a single change to a user interface screen or new query option.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration. Iterations may involve changes to features developed in earlier iterations, along with changes in project scope. Each iteration delivers working software which is a growing subset of the overall set of features until the final software is delivered or development is stopped.

Examples include: 

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features
  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features
  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
  • Spiral: Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Components or systems developed using these methods often involve overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines. Many development efforts using these methods also include the concept of self-organising teams, which can change the way testing work is organised as well as the relationship between testers and developers.

These methods form a growing system, which may be released to end-users on a feature-by-feature basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of whether the software increments are released to end-users, regression testing is increasingly important as the system grows.

In contrast to sequential models, iterative and incremental models may deliver usable software in weeks or even days, but may only deliver the complete set of requirements over a period of months or even years.

Software Development Lifecycle Models in Context

Software development lifecycle models must be selected and adapted to the context of project and product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to-market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile’s brake control system. As another example, in some cases organisational and cultural issues may inhibit communication between team members, which can impede iterative development.

Depending on the context of the project, it may be necessary to combine or reorganise test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete. 

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of Internet of Things system versions. Additionally the software development lifecycle of such objects places stronger emphasis on the later phases of the software development lifecycle after they have been introduced to operational use (e.g., operate, update, and decommission phases). 

Reasons why software development lifecycle models must be adapted to the context of project and product characteristics include:

  • Difference in product risks of systems (complex or simple project)
  • Many business units can be part of a project or program (combination of sequential and agile development)
  • Short time to deliver a product to the market (merge of test levels and/or integration of test types in test levels)

Test Levels

Test levels are groups of test activities that are organised and managed together. Each test level is an instance of the test process, consisting of the activities described in the basics of testing article, performed in relation to software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle. The test levels used in this article are:

  • Component testing
  • Integration testing
  • System testing
  • Acceptance testing

Test levels are characterised by the following attributes:

  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (i.e., what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities

For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.

Component Testing

Objectives of component testing

Component testing (also known as unit or module testing) focuses on components that are separately testable. Objectives of component testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the component are as designed and specified
  • Building confidence in the component’s quality
  • Finding defects in the component
  • Preventing defects from escaping to higher test levels

In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components.

Component testing is often done in isolation from the rest of the system, depending on the software development lifecycle model and the system, which may require mock objects, service virtualisation, harnesses, stubs, and drivers. Component testing may cover functionality (e.g., correctness of calculations), non-functional characteristics (e.g., searching for memory leaks), and structural properties (e.g., decision testing).
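
For example, the hypothetical component below depends on an exchange-rate service that may not be available in the developer’s environment, so a hand-written stub supplies a canned response and lets the component be tested in isolation; mocking libraries or service virtualisation are heavier-weight alternatives for the same purpose.

    class ExchangeRateStub:
        """Stub replacing the real exchange-rate service with a canned answer."""
        def __init__(self, rate: float):
            self.rate = rate

        def get_rate(self, from_currency: str, to_currency: str) -> float:
            return self.rate

    def convert(amount: float, from_currency: str, to_currency: str, rate_service) -> float:
        """Component under test: converts an amount using the injected rate service."""
        return round(amount * rate_service.get_rate(from_currency, to_currency), 2)

    def test_convert_uses_rate_from_service():
        stub = ExchangeRateStub(rate=1.25)
        assert convert(8.00, "EUR", "USD", stub) == 10.00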

Test basis

Examples of work products that can be used as a test basis for component testing include:

  • Detailed design
  • Code
  • Data model 
  • Component specifications

Test objects

Typical test objects for component testing include:

  • Components, units or modules
  • Code and data structures
  • Classes
  • Database modules

Typical defects and failures

Examples of typical defects and failures for component testing include:

  • Incorrect functionality (e.g., not as described in design specifications)
  • Data flow problems
  • Incorrect code and/or logic

Defects are typically fixed as soon as they are found, often with no formal defect management. However, when developers do report defects, this provides important information for root cause analysis and process improvement.

Specific approaches and responsibilities

Component testing is usually performed by the developer who wrote the code, but it at least requires access to the code being tested. Developers may alternate component development with finding and fixing defects. Developers will often write and execute tests after having written the code for a component. However, in Agile development especially, writing automated component test cases may precede writing application code. 

For example, consider test driven development (TDD). Test driven development is highly iterative and is based on cycles of developing automated test cases, then building and integrating small pieces of code, then executing the component tests, correcting any issues, and re-factoring the code. This process continues until the component has been completely built and all component tests are passing. Test driven development is an example of a test-first approach. While test driven development originated in eXtreme Programming (XP), it has spread to other forms of Agile and also to sequential lifecycles.
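
A minimal sketch of one such cycle, assuming pytest and a hypothetical shopping-cart component: the failing tests are written first, just enough code is added to make them pass, and the code is then refactored while the tests are re-run.

    # Red phase: these component tests are written before the production code exists.
    def test_empty_cart_total_is_zero():
        assert Cart().total() == 0

    def test_total_sums_item_prices():
        cart = Cart()
        cart.add("book", 1250)   # prices in cents to avoid floating-point issues
        cart.add("pen", 199)
        assert cart.total() == 1449

    # Green phase: the simplest implementation that makes the tests pass. Further
    # behaviour (discounts, item removal, ...) would be driven by new failing tests,
    # with refactoring done while the suite stays green.
    class Cart:
        def __init__(self):
            self._items = []

        def add(self, name: str, price_cents: int) -> None:
            self._items.append((name, price_cents))

        def total(self) -> int:
            return sum(price for _, price in self._items)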

Integration Testing

Objectives of integration testing

Integration testing focuses on interactions between components or systems. Objectives of integration testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the interfaces are as designed and specified
  • Building confidence in the quality of the interfaces
  • Finding defects (which may be in the interfaces themselves or within the components or systems)
  • Preventing defects from escaping to higher test levels

As with component testing, in some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.

There are two different levels of integration testing described in this article, which may be carried out on test objects of varying size as follows:

  • Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
  • System integration testing focuses on the interactions and interfaces between systems, packages, and micro-services. System integration testing can also cover interactions with, and interfaces provided by external organisations (e.g., web services). In this case, the developing organisation does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organisation’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).

Test basis

Examples of work products that can be used as a test basis for integration testing include:

  • Software and system design
  • Sequence diagrams
  • Interface and communication protocol specifications
  • Use cases
  • Architecture at component or system level
  • Workflows
  • External interface definitions

Test objects

Typical test objects for integration testing include:

  • Subsystems
  • Databases
  • Infrastructure
  • Interfaces
  • APIs
  • Microservices

Typical defects and failures

Examples of typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding
  • Incorrect sequencing or timing of interface calls
  • Interface mismatch
  • Failures in communication between components
  • Unhandled or improperly handled communication failures between components
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components

Examples of typical defects and failures for system integration testing include: 

  • Inconsistent message structures between systems
  • Incorrect data, missing data, or incorrect data encoding
  • Interface mismatch
  • Failures in communication between systems
  • Unhandled or improperly handled communication failures between systems
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems
  • Failure to comply with mandatory security regulations

Specific approaches and responsibilities

Component integration tests and system integration tests should concentrate on the integration itself. For example, if integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing. If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing. Functional, non-functional, and structural test types are applicable.
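
The sketch below illustrates that focus for a component integration test in Python: a hypothetical OrderService is integrated with a payment component, and the test checks the communication across the interface (the data passed and the handling of the response) rather than either component’s internal logic. unittest.mock stands in for the real payment component.

    from unittest.mock import Mock

    class OrderService:
        """Hypothetical component that delegates payment to a collaborating component."""
        def __init__(self, payment_client):
            self.payment_client = payment_client

        def place_order(self, order_id: str, amount_cents: int) -> bool:
            result = self.payment_client.charge(order_id=order_id, amount_cents=amount_cents)
            return result.get("status") == "ok"

    def test_order_service_passes_correct_data_across_the_interface():
        payment_client = Mock()
        payment_client.charge.return_value = {"status": "ok"}

        service = OrderService(payment_client)
        assert service.place_order("A-42", 1999) is True

        # The focus is the communication across the interface, not the payment logic itself.
        payment_client.charge.assert_called_once_with(order_id="A-42", amount_cents=1999)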

Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.

If integration tests and the integration strategy are planned before components or systems are built, those components or systems can be built in the order required for most efficient testing. Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to simplify defect isolation and detect defects early, integration should normally be incremental (i.e., a small number of additional components or systems at a time) rather than “big bang” (i.e., integrating all components or systems in one single step). A risk analysis of the most complex interfaces can help to focus the integration testing.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels.

System Testing

Objectives of system testing

System testing focuses on the behaviour and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviours it exhibits while performing those tasks. Objectives of system testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the system are as designed and specified
  • Validating that the system is complete and will work as expected
  • Building confidence in the quality of the system as a whole
  • Finding defects
  • Preventing defects from escaping to higher test levels or production

For certain systems, verifying data quality may also be an objective. As with component testing and integration testing, in some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.

The test environment should ideally correspond to the final target or production environment.

Test basis

Examples of work products that can be used as a test basis for system testing include:

  • System and software requirement specifications (functional and non-functional)
  • Risk analysis reports
  • Use cases
  • Epics and user stories
  • Models of system behaviour
  • State diagrams
  • System and user manuals

Test objects

Typical test objects for system testing include:

  • Applications
  • Hardware/software systems
  • Operating systems
  • System under test (SUT)
  • System configuration and configuration data

Typical defects and failures

Examples of typical defects and failures for system testing include:

  • Incorrect calculations
  • Incorrect or unexpected system functional or non-functional behaviour
  • Incorrect control and/or data flows within the system
  • Failure to properly and completely carry out end-to-end functional tasks
  • Failure of the system to work properly in the system environment(s)
  • Failure of the system to work as described in system and user manuals

Specific approaches and responsibilities

System testing should focus on the overall, end-to-end behaviour of the system as a whole, both functional and non-functional. System testing should use the most appropriate techniques (see test techniques) for the aspect(s) of the system to be tested. For example, a decision table may be created to verify whether functional behaviour is as described in business rules.

System testing is typically carried out by independent testers who rely heavily on specifications. Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behaviour. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user story refinement or static testing activities, such as reviews, helps to reduce the incidence of such situations.

Acceptance Testing

Objectives of acceptance testing

Acceptance testing, like system testing, typically focuses on the behaviour and capabilities of a whole system or product. Objectives of acceptance testing include:

  • Establishing confidence in the quality of the system as a whole
  • Validating that the system is complete and will work as expected
  • Verifying that functional and non-functional behaviours of the system are as specified

Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.

Common forms of acceptance testing include the following:

  • User acceptance testing
  • Operational acceptance testing
  • Contractual and regulatory acceptance testing
  • Alpha and beta testing

Each is described in the following four subsections.

User acceptance testing (UAT)

User acceptance testing of the system is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfil requirements, and perform business processes with minimum difficulty, cost, and risk.

Operational acceptance testing (OAT)

The acceptance testing of the system by operations or systems administration staff is usually performed in a (simulated) production environment. The tests focus on operational aspects, and may include:

  • Testing of backup and restore
  • Installing, uninstalling and upgrading
  • Disaster recovery
  • User management
  • Maintenance tasks
  • Data load and migration tasks
  • Checks for security vulnerabilities
  • Performance testing

The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.

Contractual and regulatory acceptance testing

Contractual acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Contractual acceptance testing is often performed by users or by independent testers.

Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.

The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing

Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market. Alpha testing is performed at the developing organisation’s site, not by the development team, but by potential or existing customers, and/or operators or an independent test team. Beta testing is performed by potential or existing customers, and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing having occurred. 

One objective of alpha and beta testing is building confidence among potential or existing customers, and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s) to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult to replicate by the development team.

Test basis

Examples of work products that can be used as a test basis for any form of acceptance testing include:

  • Business processes
  • User or business requirements
  • Regulations, legal contract and/or standards
  • Use cases and/or user stories
  • System requirements
  • System or user documentation
  • Installation procedures
  • Risk analysis reports

In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:

  • Backup and restore procedures
  • Disaster recovery procedures
  • Non-functional requirements
  • Operations documentation
  • Deployment and installation instructions
  • Performance targets
  • Database packages
  • Security standards or regulations

Typical test objects

Typical test objects for any form of acceptance testing include:

  • System under test
  • System configuration and configuration data
  • Business processes for a fully integrated system
  • Recovery systems and hot sites (for business continuity and disaster recovery testing)
  • Operational and maintenance processes
  • Forms
  • Reports
  • Existing and converted production data

Typical defects and failures

Examples of typical defects for any form of acceptance testing include:

  • System workflows do not meet business or user requirements
  • Business rules are not implemented correctly
  • System does not satisfy contractual or regulatory requirements
  • Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform

Specific approaches and responsibilities

Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well.

Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:

  • Acceptance testing of a COTS software product may occur when it is installed or integrated
  • Acceptance testing of a new functional enhancement may occur before system testing

In iterative development, project teams can employ various forms of acceptance testing during and at the end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and those focused on validating that a new feature satisfies the users’ needs. In addition, alpha tests and beta tests may occur, either at the end of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contractual acceptance tests also may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations.

Test Types

A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives. Such objectives may include:

  • Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness
  • Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
  • Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified
  • Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behaviour resulting from software or environment changes (regression testing)

Functional Testing

Functional testing of a system involves tests that evaluate functions that the system should perform. Functional requirements may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented. The functions are “what” the system should do. 

Functional tests should be performed at all test levels (e.g., tests for components may be based on a component specification), though the focus is different at each level. 

Functional testing considers the behaviour of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system. 

The thoroughness of functional testing can be measured through functional coverage. Functional coverage is the extent to which some functionality has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and functional requirements, the percentage of these requirements which are addressed by testing can be calculated, potentially identifying coverage gaps.
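
A minimal sketch of that calculation, assuming a hypothetical traceability mapping from requirement identifiers to the test cases that address them:

    # Hypothetical traceability data: requirement ID -> test cases that cover it.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],          # coverage gap: no test addresses this requirement
        "REQ-004": ["TC-104"],
    }

    covered = [req for req, tests in traceability.items() if tests]
    coverage_pct = 100 * len(covered) / len(traceability)

    print(f"Functional coverage: {coverage_pct:.0f}% of requirements")
    print("Coverage gaps:", [req for req, tests in traceability.items() if not tests])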

Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves (e.g., geological modelling software for the oil and gas industries). 

Non-functional Testing

Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance efficiency or security. Non-functional testing is the testing of “how well” the system behaves. 

Contrary to common misperceptions, non-functional testing can and often should be performed at all test levels, and done as early as possible. The late discovery of non-functional defects can be extremely dangerous to the success of a project. 

Black-box techniques may be used to derive test conditions and test cases for non-functional testing. For example, boundary value analysis can be used to define the stress conditions for performance tests. 

The thoroughness of non-functional testing can be measured through non-functional coverage. Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps. 

Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology (e.g., security vulnerabilities associated with particular programming languages) or the particular user base (e.g., the personas of users of healthcare facility management systems).

White-box Testing

White-box testing derives tests based on the system’s internal structure or implementation. Internal structure may include code, architecture, work flows, and/or data flows within the system. 

The thoroughness of white-box testing can be measured through structural coverage. Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered. 

At the component testing level, code coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code (coverage items) such as the percentage of executable statements tested in the component, or the percentage of decision outcomes tested. These types of coverage are collectively called code coverage. At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests. 
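
As a minimal sketch of the difference between these coverage items, consider the hypothetical component and the two tests below: the first test on its own executes every statement (100% statement coverage) but exercises only the True outcome of the decision, while the second test adds the False outcome needed for full decision coverage. A coverage tool (coverage.py is one option for Python) can report both percentages.

    def apply_member_discount(total_cents: int, is_member: bool) -> int:
        if is_member:
            total_cents -= total_cents // 10   # 10% member discount
        return total_cents

    # Executes every statement: the decision's True outcome is taken, so the
    # discount line runs and the function returns.
    def test_member_gets_discount():
        assert apply_member_discount(10_000, is_member=True) == 9_000

    # Adds the False outcome of the decision, which statement coverage alone
    # does not demand but decision (branch) coverage does.
    def test_non_member_pays_full_price():
        assert apply_member_discount(10_000, is_member=False) == 10_000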

White-box test design and execution may involve special skills or knowledge, such as the way the code is built, how data is stored (e.g., to evaluate possible database queries), and how to use coverage tools and to correctly interpret their results.

Change-related Testing

When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.

  • Confirmation testing: After a defect is fixed, the software may be tested with all test cases that failed due to the defect, which should be re-executed on the new software version. The software may also be tested with new tests to cover changes needed to fix the defect. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.
  • Regression testing: It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behaviour of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions. Regression testing involves running tests to detect such unintended side-effects.

Confirmation testing and regression testing are performed at all test levels.

Especially in iterative and incremental development lifecycles (e.g., Agile), new features, changes to existing features, and code refactoring result in frequent changes to the code, which also requires change-related testing. Due to the evolving nature of the system, confirmation and regression testing are very important. This is particularly relevant for Internet of Things systems where individual objects (e.g., devices) are frequently updated or replaced.

Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation. Automation of these tests should start early in the project.
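
One lightweight way to organise such a suite, sketched below under the assumption of pytest, is to tag long-running checks with a marker so a fast regression subset runs on every change while the full suite runs nightly; the marker name and the example function are hypothetical.

    import pytest

    def add_interest(balance_cents: int, rate_basis_points: int) -> int:
        """Hypothetical function kept under regression test (rate given in basis points)."""
        return balance_cents + balance_cents * rate_basis_points // 10_000

    # Fast regression test: runs on every commit in the CI pipeline.
    def test_interest_calculation_unchanged():
        assert add_interest(100_000, 250) == 102_500

    # Long-running regression test: tagged so it can be deselected per commit
    # (pytest -m "not slow") and run in full on the nightly build (pytest -m slow).
    # The "slow" marker would be registered in pytest.ini to avoid warnings.
    @pytest.mark.slow
    def test_interest_compounds_over_many_periods():
        balance = 100_000
        for _ in range(1_000_000):
            balance = add_interest(balance, 1)
        assert balance > 100_000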

Test Types and Test Levels

It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests will be given across all test levels, for a banking application, starting with functional tests: 

  • For component testing, tests are designed based on how a component should calculate compound interest (see the sketch after this list).
  • For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.
  • For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.
  • For system integration testing, tests are designed based on how the system uses an external micro-service to check an account holder’s credit score.
  • For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
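
As a minimal sketch of the first bullet, a component-level functional test for compound interest might look like the following in Python; the formula, rounding rule, and values are hypothetical and serve only to show testing at the component level.

    def compound_interest(principal_cents: int, annual_rate: float, years: int) -> int:
        """Hypothetical banking component: yearly compounding, rounded to whole cents."""
        return round(principal_cents * (1 + annual_rate) ** years)

    def test_two_years_at_five_percent():
        # 10,000.00 at 5% for 2 years -> 11,025.00
        assert compound_interest(1_000_000, 0.05, 2) == 1_102_500

    def test_zero_years_returns_principal_unchanged():
        assert compound_interest(1_000_000, 0.05, 0) == 1_000_000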

The following are examples of non-functional tests:

  • For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation. 
  • For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic. 
  • For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices. 
  • For system integration testing, reliability tests are designed to evaluate system robustness if the credit score micro-service fails to respond. 
  • For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.

The following are examples of white-box tests:

  • For component testing, tests are designed to achieve complete statement and decision coverage for all components that perform financial calculations. 
  • For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.
  • For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.
  • For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score micro-service.
  • For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.

Finally, the following are examples for change-related tests:

  • For component testing, automated regression tests are built for each component and included within the continuous integration framework.
  • For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
  • For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.
  • For system integration testing, tests of the application interacting with the credit scoring micro-service are re-executed daily as part of continuous deployment of that micro-service.
  • For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed.

While this section provides examples of every test type across every level, it is not necessary, for all software, to have every test type represented across every level. However, it is important to run applicable test types at each level, especially the earliest level where the test type occurs.

Maintenance Testing

Once deployed to production environments, software and systems need to be maintained. Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability. 

When any changes are made as part of maintenance, maintenance testing should be performed, both to evaluate the success with which the changes were made and to check for possible side-effects (e.g., regressions) in parts of the system that remain unchanged (which is usually most of the system). Maintenance can involve planned releases and unplanned releases (hot fixes). 

A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on: 

  • The degree of risk of the change, for example, the degree to which the changed area of software communicates with other components or systems
  • The size of the existing system
  • The size of the change

Triggers for Maintenance

There are several reasons why software maintenance, and thus maintenance testing, takes place, both for planned and unplanned changes. 

We can classify the triggers for maintenance as follows: 

  • Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities
  • Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained
  • Retirement, such as when an application reaches the end of its life. When an application or system is retired, this can require testing of data migration or archiving if long data-retention periods are required. Testing of restore/retrieve procedures after archiving may also be needed, and regression testing may be needed to ensure that any functionality that remains in service still works.

For Internet of Things systems, maintenance testing may be triggered by the introduction of completely new or modified things, such as hardware devices and software services, into the overall system. The maintenance testing for such systems places particular emphasis on integration testing at different levels (e.g., network level, application level) and on security aspects, in particular those relating to personal data. 

Impact Analysis for Maintenance

Impact analysis evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change. Impact analysis can also help to identify the impact of a change on existing tests. The side effects and affected areas in the system need to be tested for regressions, possibly after updating any existing tests affected by the change. 

Impact analysis may be done before a change is made, to help decide if the change should be made, based on the potential consequences in other areas of the system. 

Impact analysis can be difficult if: 

  • Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
  • Test cases are not documented or are out of date
  • Bi-directional traceability between tests and the test basis has not been maintained
  • Tool support is weak or non-existent
  • The people involved do not have domain and/or system knowledge
  • Insufficient attention has been paid to the software’s maintainability during development

Basics of Testing

What is Testing?

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, or business reputation, and even injury or death. Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. In reality, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analysing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.

Another common misperception of testing is that it focuses entirely on verification of requirements, user stories, or other specifications. While testing does involve checking whether the system meets specified requirements, it also involves validation, which is checking whether the system will meet user and other stakeholder needs in its operational environment(s).

Test activities are organised and carried out differently in different lifecycles.

Typical Objectives of Testing

For any given project, the objectives of testing may include: 

  • To prevent defects by evaluating work products such as requirements, user stories, design, and code
  • To verify whether all specified requirements have been fulfilled 
  • To check whether the test object is complete and validate if it works as the users and other stakeholders expect
  • To build confidence in the level of quality of the test object 
  • To find defects and failures, thus reducing the level of risk of inadequate software quality
  • To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
  • To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:

  • During component testing, one objective may be to find as many failures as possible so that the underlying defects are identified and fixed early. Another objective may be to increase code coverage of the component tests.
  • During acceptance testing, one objective may be to confirm that the system works as expected and satisfies requirements. Another objective of this testing may be to give information to stakeholders about the risk of releasing the system at a given time.

Testing and Debugging

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyses, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging, associated component and component integration testing (continuous integration). However, in Agile development and in some other software development lifecycles, testers may be involved in debugging and component testing.

Why is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s Contributions to Success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include: 

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

In addition to these examples, the achievement of defined test objectives contributes to overall software development and maintenance success.

Quality Assurance and Testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organisation with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing. As described early on, testing contributes to the achievement of quality in a variety of ways.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
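
The hypothetical sketch below illustrates this: the defect sits in the code from the start, but a failure only appears for one specific input (an empty basket); the function and values are assumptions for illustration.

```python
# Illustrative (hypothetical) defect: the calculation divides by the number of items,
# so the defect only causes a failure when the basket is empty - a precondition that
# may occur rarely in operation.

def average_item_price(total: float, item_count: int) -> float:
    # Defect: no guard for item_count == 0; most inputs never expose it.
    return total / item_count

print(average_item_price(100.0, 4))   # 25.0 - the defect stays dormant

try:
    print(average_item_price(0.0, 0))
except ZeroDivisionError:
    print("failure observed only for the empty-basket precondition")
```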

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused due to defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other test-ware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that do not detect defects that they should have detected; false positives are test results reported as defects when no defect is actually present in the test object.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analysed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced. 

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.
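
A hypothetical reconstruction of this example in code might look like the sketch below, where the single incorrect line is the defect and a test written against the clarified interest rule exposes the failure; the figures and names are illustrative only.

```python
# Hypothetical reconstruction of the example above: a single incorrect line
# (applying the annual rate without converting it to a monthly rate) is the defect;
# the wrong payment amounts it produces are the failures.

def monthly_interest(balance: float, annual_rate: float) -> float:
    # Defect: the annual rate should be divided by 12 for a monthly payment.
    return balance * annual_rate          # incorrect
    # return balance * annual_rate / 12   # intended behaviour

def test_monthly_interest():
    # A test derived from a clarified user story would expose the failure.
    assert abs(monthly_interest(1200.0, 0.05) - 5.0) < 1e-9

if __name__ == "__main__":
    test_monthly_interest()  # raises AssertionError, revealing the failure
```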

Seven Testing Principles

A number of testing principles have been suggested over the past 50 years and offer general guidelines common for all testing. 

1. Testing shows the presence of defects, not their absence 

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness. 

2. Exhaustive testing is impossible 

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts. 

3. Early testing saves time and money 

To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.

4. Defects cluster together 

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in principle 2).

5. Beware of the pesticide paradox 

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

6. Testing is context dependent 

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

7. Absence-of-errors is a fallacy 

Some organisations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations, or that is inferior compared to other competing systems.

Test Process

There is no one universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives. These sets of test activities are a test process. The proper, specific software test process in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organisation’s test strategy.

Test Process in Context 

Contextual factors that influence the test process for an organisation include, but are not limited to:

  • Software development lifecycle model and project methodologies being used
  • Test levels and test types being considered
  • Product and project risks
  • Business domain
  • Operational constraints, including but not limited to:
    • Budgets and resources
    • Timescales
    • Complexity
    • Contractual and regulatory requirements 
  • Organisational policies and practices 
  • Required internal and external standards

The following sections describe general aspects of organisational test processes in terms of the following: 

  • Test activities and tasks 
  • Test work products 
  • Traceability between the test basis and test work products

It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
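
The sketch below shows, under the stated coverage criterion of at least one test case per test basis element, how such coverage could be measured; all requirement, device, and test case identifiers are illustrative.

```python
# Sketch of the coverage criterion described above: at least one test case per
# element of the test basis (each requirement and each supported device).
# All identifiers are illustrative.

requirements = {"REQ-LOGIN", "REQ-SEARCH", "REQ-CHECKOUT"}
devices = {"Phone-A", "Phone-B", "Tablet-C"}

# Each test case records which test basis elements it covers.
test_cases = {
    "TC-01": {"REQ-LOGIN", "Phone-A"},
    "TC-02": {"REQ-SEARCH", "Phone-B"},
    "TC-03": {"REQ-SEARCH", "Tablet-C"},
}

covered = set().union(*test_cases.values())
for name, basis in (("requirements", requirements), ("devices", devices)):
    uncovered = sorted(basis - covered)
    print(f"{name}: {len(basis) - len(uncovered)}/{len(basis)} covered, missing: {uncovered}")
# requirements: 2/3 covered, missing: ['REQ-CHECKOUT']
# devices: 3/3 covered, missing: []
```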

Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design 
  • Test implementation
  • Test execution
  • Test completion

Each main group of activities is composed of constituent activities, which will be described in the subsections below. Each constituent activity consists of multiple individual tasks, which would vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning. So test activities are also happening on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main groups of activities will involve overlap, combination, concurrency, or omission, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models. For example, the evaluation of exit criteria for test execution as part of a given test level may include: 

  • Checking test results and logs against specified coverage criteria
  • Assessing the level of component or system quality based on test results and logs
  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)
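
As a rough sketch, exit criteria such as those above could be evaluated mechanically from test results and coverage figures; the thresholds and field names here are assumptions for illustration, not prescribed values.

```python
# Minimal sketch of evaluating exit criteria from test results and coverage figures.
# The thresholds and field names are illustrative, not prescribed by any standard.

def exit_criteria_met(results: dict, required_coverage: float, max_open_high_defects: int) -> bool:
    coverage_ok = results["coverage_achieved"] >= required_coverage
    defects_ok = results["open_high_priority_defects"] <= max_open_high_defects
    all_run = results["tests_planned"] == results["tests_executed"]
    return coverage_ok and defects_ok and all_run

status = {
    "tests_planned": 120,
    "tests_executed": 118,
    "coverage_achieved": 0.97,
    "open_high_priority_defects": 1,
}
print(exit_criteria_met(status, required_coverage=0.95, max_open_high_defects=0))
# -> False: two tests are still to run and one high-priority defect remains open,
#    so test control would trigger further action (e.g., more execution time).
```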

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.

Test analysis

During test analysis, the test basis is analysed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities: 

  • Analysing the test basis appropriate to the test level being considered, for example:
    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behaviour
    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces
    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
  • Evaluating the test basis and test items to identify defects of various types, such as: 
    • Ambiguities
    • Omissions
    • Inconsistencies
    • Inaccuracies
    • Contradictions
    • Superfluous statements
  • Identifying features and sets of features to be tested
  • Defining and prioritising test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. For example, techniques such as behaviour-driven development (BDD) and acceptance test-driven development (ATDD) involve generating test conditions and test cases from user stories and acceptance criteria prior to coding; these techniques also verify, validate, and detect defects in the user stories and acceptance criteria.
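
For illustration, the sketch below shows an ATDD-style test written from a hypothetical user story and acceptance criterion before the production code exists; the story, function name, and values are assumptions.

```python
# Sketch of an ATDD-style test written from a user story's acceptance criterion
# *before* the production code exists. Story and names are illustrative:
#   "As a customer I can apply a voucher so that my order total is reduced."
#   Acceptance criterion: a 10% voucher on a 50.00 order gives a total of 45.00.

def apply_voucher(total: float, discount_percent: float) -> float:
    raise NotImplementedError  # to be implemented after the test is agreed

def test_ten_percent_voucher_reduces_total():
    # Given an order total of 50.00
    # When a 10% voucher is applied
    # Then the total becomes 45.00
    assert apply_voucher(50.00, 10) == 45.00

# Run with pytest: the test fails until apply_voucher satisfies the agreed criterion,
# which is the intended ATDD cycle.
```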

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other test-ware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritising test cases and sets of test cases 
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques.
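
For example, a black-box technique such as boundary value analysis can turn a test condition like “the system accepts order quantities from 1 to 100” (an assumed condition for illustration) into concrete test cases, as in the sketch below.

```python
# Sketch of elaborating a test condition into test cases with a black-box technique
# (boundary value analysis). The condition and limits are illustrative: "the system
# accepts order quantities from 1 to 100".

import pytest

def is_valid_quantity(quantity: int) -> bool:
    return 1 <= quantity <= 100

# Boundary values around both edges of the valid partition.
@pytest.mark.parametrize("quantity, expected", [
    (0, False),    # just below lower boundary
    (1, True),     # lower boundary
    (100, True),   # upper boundary
    (101, False),  # just above upper boundary
])
def test_quantity_boundaries(quantity, expected):
    assert is_valid_quantity(quantity) is expected
```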

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the test-ware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?” 

Test implementation includes the following major activities:

  • Developing and prioritizing test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts 
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  • Building the test environment (including, potentially, test harnesses, service virtualisation, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites
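
A minimal sketch of two of these activities, grouping test procedures into suites and arranging them in a risk-prioritised test execution schedule, might look as follows; suite names and priorities are illustrative.

```python
# Sketch of arranging test procedures into suites and a simple execution schedule,
# prioritising the suites so that higher-risk tests run first. Names are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestSuite:
    name: str
    risk_priority: int            # lower number = run earlier
    procedures: List[str] = field(default_factory=list)

suites = [
    TestSuite("regression-ui", risk_priority=3, procedures=["TP-20", "TP-21"]),
    TestSuite("payments-critical", risk_priority=1, procedures=["TP-01", "TP-02"]),
    TestSuite("reporting", risk_priority=2, procedures=["TP-10"]),
]

# Test execution schedule: suites ordered by risk, procedures kept in suite order.
schedule = [p for suite in sorted(suites, key=lambda s: s.risk_priority) for p in suite.procedures]
print(schedule)  # ['TP-01', 'TP-02', 'TP-10', 'TP-20', 'TP-21']
```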

Test design and test implementation tasks are often combined.

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented. 

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and test-ware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results
  • Analysing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives also may occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked)
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.
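
A highly simplified sketch of this execution loop, comparing actual with expected results and logging an outcome per test case, is shown below; the test object, data, and tax rate are assumptions for illustration.

```python
# Minimal sketch of the execution loop described above: run each test case, compare
# actual with expected results, and log an outcome per case. Everything here is
# illustrative; real projects usually delegate this to a test execution tool.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def add_tax(net: float) -> float:
    return round(net * 1.20, 2)   # test object (assumed 20% tax rate)

test_cases = [
    {"id": "TC-201", "input": 10.00, "expected": 12.00},
    {"id": "TC-202", "input": 0.00, "expected": 0.00},
    {"id": "TC-203", "input": 2.50, "expected": 3.10},   # deliberately wrong expectation
]

for case in test_cases:
    try:
        actual = add_tax(case["input"])
        outcome = "pass" if actual == case["expected"] else "fail"
    except Exception as exc:          # an anomaly to analyse: defect or false positive?
        actual, outcome = repr(exc), "blocked"
    logging.info("%s expected=%s actual=%s outcome=%s", case["id"], case["expected"], actual, outcome)
```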

Test completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalising and archiving the test environment, the test data, the test infrastructure, and other test-ware for later reuse
  • Handing over the test-ware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analysing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Test Work Products

Test work products are created as part of the test process. Just as there is significant variation in the way that organisations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organised and managed, and in the names used for those work products.

Many of the test work products described in this section can be captured and managed using test management tools and defect management tools.

Test planning work products 

Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control.

Test monitoring and control work products

Test monitoring and control work products typically include various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones. All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarising the test execution results once those become available. 

Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. 

Test monitoring and control, and the work products created during these activities, are further explained on this site.

Test analysis work products

Test analysis work products include defined and prioritised test conditions, each of which is ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test analysis may involve the creation of test charters. Test analysis may also result in the discovery and reporting of defects in the test basis. 

Test design work products

Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results. Such high-level test cases are reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s) it covers.

Test design also results in:

  • the design and/or identification of the necessary test data
  • the design of the test environment
  • the identification of infrastructure and tools

The extent to which these results are documented varies significantly.

Test implementation work products

Test implementation work products include:

  • Test procedures and the sequencing of those test procedures
  • Test suites
  • A test execution schedule

Ideally, once test implementation is complete, achievement of coverage criteria established in the test plan can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions.

In some cases, test implementation involves creating work products using or used by tools, such as service virtualisation and automated test scripts.

Test implementation also may result in the creation and verification of test data and the test environment. The completeness of the documentation of the data and/or environment verification results may vary significantly.

The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results which are associated with concrete test data are identified by using a test oracle.
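
As a sketch, the same high-level test case (“a valid login succeeds or fails as expected”) can be made executable with different concrete test data per release; the credentials and expected results below are hypothetical and stand in for a test oracle.

```python
# Sketch of the same high-level test case ("a valid login succeeds") made executable
# with different concrete test data per release. The oracle here is a simple expected
# value supplied with each data row; all accounts and values are hypothetical.

import pytest

def login(username: str, password: str) -> bool:
    # Stand-in for the test object; assume these demo credentials are valid.
    return (username, password) == ("demo_user", "correct-horse")

# Concrete test data turns the high-level case into executable low-level cases.
release_2024_data = [
    ("demo_user", "correct-horse", True),
    ("demo_user", "wrong-password", False),
]

@pytest.mark.parametrize("username, password, expected", release_2024_data)
def test_login_high_level_case(username, password, expected):
    assert login(username, password) is expected
```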

In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.

Test conditions defined in test analysis may be further refined in test implementation.

Test execution work products

Test execution work products include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  • Defect reports
  • Documentation about which test item(s), test object(s), test tools, and test-ware were involved in the testing

Ideally, once test execution is complete, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). For example, we can say which requirements have passed all planned tests, which requirements have failed tests and/or have defects associated with them, and which requirements have planned tests still waiting to be run. This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations, change requests or product backlog items, and finalised test-ware.

Traceability between the Test Basis and Test Work Products

As mentioned earlier, test work products and the names of those work products vary significantly. Regardless of these variations, in order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element, as described above. In addition to the evaluation of test coverage, good traceability supports:

  • Analysing the impact of changes
  • Making testing auditable
  • Meeting IT governance criteria
  • Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
  • Relating the technical aspects of testing to stakeholders in terms that they can understand
  • Providing information to assess product quality, process capability, and project progress against business goals
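
As a minimal sketch, this traceability information could be used to report the status of each requirement in stakeholder terms; the requirement IDs, test case IDs, and results below are illustrative.

```python
# Sketch of using bi-directional traceability to report the status of each element
# of the test basis (e.g., requirements) in stakeholder terms. Data is illustrative.

requirement_to_tests = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                      # no test yet -> pending
}
test_results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"}

for req, tests in requirement_to_tests.items():
    outcomes = [test_results.get(t, "not run") for t in tests]
    if not outcomes:
        status = "no tests planned"
    elif "fail" in outcomes:
        status = "failed"
    elif all(o == "pass" for o in outcomes):
        status = "passed"
    else:
        status = "in progress"
    print(f"{req}: {status}")
# REQ-001: passed / REQ-002: failed / REQ-003: no tests planned
```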

Some test management tools provide test work product models that match part or all of the test work products outlined in this section. Some organisations build their own management systems to organise the work products and provide the information traceability they require.

The Psychology of Testing

Software development, including software testing, involves human beings. Therefore, human psychology has important effects on software testing.

Human Psychology and Testing 

Identifying defects during a static test such as a requirement review or user story refinement session, or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality. To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.

Testers and test managers need to have good interpersonal skills to be able to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues. Ways to communicate well include the following examples:

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
  • Emphasise the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organisation, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
  • Communicate test results and other findings in a neutral, fact-focused way without criticising the person who created the defective item. Write objective and factual defect reports and review findings.
  • Try to understand how the other person feels and the reasons they may react negatively to the information.
  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier. Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviours with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester’s and Developer’s Mindsets

Developers and testers often think differently. The primary objective of development is to design and build a product. As discussed earlier, the objectives of testing include verifying and validating the product, finding defects prior to release, and so forth. These are different sets of objectives which require different mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.

A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to become aware of errors in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organising the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different from that of the work product authors (i.e., business analysts, product owners, designers, and developers), since they have different cognitive biases from the authors.

Testing throughout the software development lifecycle

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

Software development and software testing

It is an important part of a tester’s role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:

  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

Common software development lifecycle models can be categorised as follows:

  • Sequential development models
  • Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete. In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.

In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

Unlike the Waterfall model, the V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing. In this model, the execution of tests associated with each test level proceeds sequentially, but in some cases overlapping occurs.

Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally. The size of these feature increments varies, with some methods having larger pieces and some smaller pieces. The feature increments can be as small as a single change to a user interface screen or new query option.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration. Iterations may involve changes to features developed in earlier iterations, along with changes in project scope. Each iteration delivers working software which is a growing subset of the overall set of features until the final software is delivered or development is stopped.

Examples include:

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features
  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features
  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
  • Spiral: Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Components or systems developed using these methods often involve overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines. Many development efforts using these methods also include the concept of self-organizing teams, which can change the way testing work is organized as well as the relationship between testers and developers.

These methods form a growing system, which may be released to end-users on a feature-by-feature basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of whether the software increments are released to end-users, regression testing is increasingly important as the system grows.

In contrast to sequential models, iterative and incremental models may deliver usable software in weeks or even days, but may only deliver the complete product, covering the full set of requirements, over a period of months or even years.

Software development lifecycle models in context

Software development lifecycle models must be selected and adapted to the context of project and product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to-market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile’s brake control system. As another example, in some cases organizational and cultural issues may inhibit communication between team members, which can impede iterative development.

Depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of Internet of Things system versions. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the software development lifecycle after they have been introduced to operational use (e.g., operate, update, and decommission phases).

Reasons why software development lifecycle models must be adapted to the context of project and product characteristics include:

  • Differences in product risks of systems (complex or simple project)
  • Many business units can be part of a project or program (combination of sequential and agile development)
  • Short time to deliver a product to the market (merge of test levels and/or integration of test types in test levels)

What are the testing objectives?

What we should test in a project may vary, and testing objectives may include:

  • Testing or evaluating work products such as requirements, user stories, design, and code
  • Validating whether the test object is complete and works as expected by users and other stakeholders
  • Building confidence in the quality of the test object
  • Preventing errors and defects
  • Finding defects which could lead to failures
  • Providing stakeholders with information that lets them make informed decisions regarding the quality of the object under test
  • Reducing the risk of inadequate software quality
  • Complying with legal or regulatory standards, and verifying that the object under test complies with those standards or requirements

The objectives of testing may vary from system to system, depending on the context of the component under test, the test level, and the software development lifecycle model being used.