Acceptance Criteria, Acceptance Tests and Experience-Based Practices

Writing Acceptance Criteria

Specifying acceptance criteria is an important acceptance testing task. It helps to refine requirements or user stories and provides the basis for acceptance tests. Business analysts and testers should collaborate closely on the specification of these criteria. This collaboration ensures high business value from the acceptance testing phase and increases the chance of a successful iteration or product release. 

Writing acceptance criteria forces business analysts and testers to think about functionality, performance, and other characteristics from a stakeholder or usage perspective. This supports early verification and validation of the related requirement or user story and provides a better chance of detecting inconsistencies, contradictions, missing information or other problems. 

The following good practices should be considered when writing acceptance criteria:

  • Well-written acceptance criteria are precise, measurable and concise. Each criterion must be written in a way that enables the tester to measure whether or not the test object complies with the acceptance criterion.
  • Well-written acceptance criteria do not include technical solution details. They concentrate on the question “What shall be achieved?” rather than on the question “How shall it be achieved?”.
  • Acceptance criteria should address non-functional requirements (quality characteristics) as well as functional requirements.

As with requirements and user stories, acceptance criteria should be reviewed through walkthroughs, technical reviews, iteration planning meetings or other methods (if necessary).

Designing Acceptance Tests

This section addresses the test techniques and approaches frequently used for acceptance testing.

Test Techniques for Acceptance Testing

In a requirements-based approach to acceptance testing, the tester derives test cases from the acceptance criteria related to each requirement or user story using black-box techniques such as equivalence partitioning or boundary value analysis.
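
As an illustration, consider a hypothetical acceptance criterion stating that free shipping applies for order totals from 50 to 500, inclusive. A minimal boundary value analysis sketch in Python (the function name and values are illustrative assumptions, not part of any standard):

    # Hypothetical acceptance criterion (illustrative values): free
    # shipping applies for order totals from 50 to 500, inclusive.
    def qualifies_for_free_shipping(total: float) -> bool:
        return 50 <= total <= 500

    # Boundary value analysis: test just below, on, and just above
    # each boundary of the valid partition.
    cases = [
        (49.99, False), (50.00, True), (50.01, True),
        (499.99, True), (500.00, True), (500.01, False),
    ]
    for total, expected in cases:
        assert qualifies_for_free_shipping(total) == expected, total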

Acceptance testing may be augmented with other test techniques or approaches:

  • Business process-based testing, possibly combined with decision table testing, validates business processes and rules (a decision table sketch follows this list).
  • Experience-based testing leverages the tester’s experience, knowledge and intuition.
  • Risk-based testing is based on risk types and levels. Prioritisation and thoroughness of testing depend on previously identified product risks.
  • Model-based testing uses graphical (or textual) models to obtain acceptance tests.

Acceptance criteria should be verified by acceptance tests, and traceability between requirements or user stories and the related test cases should be managed.
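
Traceability is typically managed in a test management tool, but even a lightweight mapping makes coverage gaps visible. A minimal sketch, assuming hypothetical requirement and test case identifiers:

    # Hypothetical traceability matrix: requirement -> acceptance tests.
    traceability = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],  # gap: no acceptance test yet
    }

    uncovered = [req for req, tests in traceability.items() if not tests]
    if uncovered:
        print("Requirements without acceptance tests:", uncovered)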

Using the Gherkin Language to Write Test Cases

In ATDD and BDD, acceptance tests are often formulated in a structured language, referred to as the Gherkin language. Using the Gherkin language, test cases are phrased declaratively using a standardised pattern:

  • Given [a situation]
  • When [an action on the system]
  • Then [the expected result]

The pattern allows business analysts, testers and developers to write test cases in a way that is easily shared with stakeholders and may be translated into automated tests. 

The “Given” block describes the state of the test object before the test actions in the “When” block are performed. The “Then” block specifies the consequences that can be observed from the actions defined in the “When” block. Test cases written in Gherkin do not refer to user interface elements but rather to user actions on the system. They are structured natural language test cases that can be understood by all relevant stakeholders. 

In addition, the structure “Given – When – Then” can be parsed in an automated way. This allows automated test script creation using a keyword-driven testing approach. 
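
As an illustrative sketch of such automation, the behave framework for Python maps each Gherkin step to a step definition function. The scenario text and the Account class below are invented for illustration; in a real project the class would be replaced by calls to the test object:

    # steps/account_steps.py -- step definitions for a scenario such as:
    #   Given the account balance is 100
    #   When the user withdraws 30
    #   Then the account balance is 70
    from behave import given, when, then

    class Account:
        # Stand-in for the real test object (illustrative only).
        def __init__(self, balance: int):
            self.balance = balance

        def withdraw(self, amount: int):
            self.balance -= amount

    @given("the account balance is {balance:d}")
    def account_with_balance(context, balance):
        context.account = Account(balance)

    @when("the user withdraws {amount:d}")
    def withdraw_amount(context, amount):
        context.account.withdraw(amount)

    @then("the account balance is {expected:d}")
    def check_balance(context, expected):
        assert context.account.balance == expected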

Initially, Gherkin was specific to some software tools supporting BDD, but it is now synonymous with the “Given – When – Then” acceptance test design pattern. 

Experience-based Approaches for Acceptance Testing

All experience-based test techniques are relevant for acceptance testing. This section focuses on how exploratory testing can be used for acceptance tests, and on beta testing as a source of feedback on system usage. 

Exploratory Testing

Exploratory testing is an experience-based test technique that is not based on detailed predefined test procedures. In exploratory testing, all activities are carried out within an uninterrupted period of time called a session. The testers are domain experts. They are familiar with user needs, requirements and business processes, but they are not necessarily familiar with the product under test. 

During an exploratory testing session, the tester accomplishes the following:

  • Learns how to work with the product
  • Designs the tests
  • Performs the tests
  • Interprets the results

It is a good practice in exploratory testing to use a test charter. The test charter is prepared prior to the testing session (possibly jointly by the business analyst and the tester) and is used by the person in charge of the session (a business analyst, tester or another stakeholder). It includes information about the purpose, target and scope of the session, the test setup, the duration, and possibly some tactics to be used (such as the type of user to be simulated). Time-boxed sessions help to control the time and effort dedicated to the session. It is also good practice to perform exploratory testing in pairs or as team work. 
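
As an illustration, a test charter for an exploratory acceptance session might look like the following (all contents invented for illustration):

    Charter:  Explore the checkout flow as a first-time customer
    Purpose:  Validate that a new customer can complete a purchase
    Scope:    Checkout, payment and order confirmation; returns excluded
    Setup:    Staging environment with a test payment account
    Duration: 90 minutes (time-boxed)
    Tactics:  Simulate a non-technical user; vary payment methods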

In Agile development, exploratory test sessions can be conducted during an iteration by the product owner and/or the testers for acceptance testing of user stories assigned to the iteration. 

Exploratory testing should be used to complement other more formal techniques in acceptance testing. For example, it may be used to provide rapid feedback on new features before methodical testing is applied. 

Beta Testing

Beta testing is a form of acceptance testing that is often used for commercial off-the-shelf (COTS) software or for Software as a Service (SaaS) platforms. It is conducted to obtain feedback from the market after development and in-house testing are completed. 

Unlike other forms of acceptance testing, beta testing is performed by potential or existing users at their own location. Beta tests impose neither predefined test procedures nor a test charter. Apart from the observed findings, the test activities are usually not documented at all. 

Because the product is tested in various realistic configurations by actual users in their business process context, beta testing may discover defects that escaped detection during the development process and previous test levels. Resolving issues found by beta tests helps organisations avoid costly hot-fixes or product recalls on a larger scale. 

Acceptance testing should not be limited to beta testing. Beta testing is not systematic or measurable. There is no guarantee that all requirements or user stories are covered by the tests. Moreover, beta testing is performed late in the development process whereas tests based on acceptance criteria support the “Early Testing” principle. 

Tool Support for Testing

Test Tool Considerations

Test tools can be used to support one or more testing activities. Such tools include:

  • Tools that are directly used in testing, such as test execution tools and test data preparation tools
  • Tools that help to manage requirements, test cases, test procedures, automated test scripts, test results, test data, and defects, and for reporting and monitoring test execution
  • Tools that are used for analysis and evaluation
  • Any tool that assists in testing (in this sense, a spreadsheet used for testing purposes is also a test tool)

Test Tool Classification

Test tools can have one or more of the following purposes depending on the context: 

  • Improve the efficiency of test activities by automating repetitive tasks or tasks that require significant resources when done manually (e.g., test execution, regression testing)
  • Improve the efficiency of test activities by supporting manual test activities throughout the test process
  • Improve the quality of test activities by allowing for more consistent testing and a higher level of defect reproducibility
  • Automate activities that cannot be executed manually (e.g., large scale performance testing)
  • Increase reliability of testing (e.g., by automating large data comparisons or simulating behaviour)

Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g., commercial or open source), and technology used. Tools are classified in this article according to the test activities that they support.

Some tools clearly support only or mainly one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be provided as an integrated suite.

Some types of test tools can be intrusive, which means that they may affect the actual outcome of the test. For example, the actual response times for an application may be different due to the extra instructions that are executed by a performance testing tool, or the amount of code coverage achieved may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the probe effect.
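
The probe effect can be illustrated with a minimal Python sketch: a wrapper that performs coverage-style bookkeeping on every call slows the measured code down, so timings taken with and without instrumentation differ (all names are illustrative):

    import time
    from functools import wraps

    call_counts = {}

    def instrument(func):
        # Simplified stand-in for the bookkeeping a coverage or
        # profiling tool performs on every call.
        @wraps(func)
        def wrapper(*args, **kwargs):
            call_counts[func.__name__] = call_counts.get(func.__name__, 0) + 1
            return func(*args, **kwargs)
        return wrapper

    def work():
        return sum(range(1000))

    start = time.perf_counter()
    for _ in range(10_000):
        work()
    plain = time.perf_counter() - start

    work = instrument(work)
    start = time.perf_counter()
    for _ in range(10_000):
        work()
    instrumented = time.perf_counter() - start

    # The instrumented run is measurably slower: the probe effect.
    print(f"plain: {plain:.4f}s, instrumented: {instrumented:.4f}s")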

Some tools offer support that is typically more appropriate for developers (e.g., tools that are used during component and integration testing). Such tools are marked with “(D)” in the sections below.

Tool support for management of testing and testware

Management tools may apply to any test activities over the entire software development lifecycle. Examples of tools that support management of testing and testware include:

  • Test management tools and application lifecycle management (ALM) tools
  • Requirements management tools (e.g., traceability to test objects)
  • Defect management tools
  • Configuration management tools
  • Continuous integration tools (D)

Tool support for static testing

Static testing tools are associated with the activities and benefits described in the static testing page. Examples of such tools include:

  • Static analysis tools (D)

Tool support for test design and implementation

Test design tools aid in the creation of maintainable work products in test design and implementation, including test cases, test procedures and test data. Examples of such tools include:

  • Model-based testing tools
  • Test data preparation tools

In some cases, tools that support test design and implementation may also support test execution and logging, or provide their outputs directly to other tools that support test execution and logging.

Tool support for test execution and logging

Many tools exist to support and enhance test execution and logging activities. Examples of these tools include:

  • Test execution tools (e.g., to run regression tests)
  • Coverage tools (e.g., requirements coverage, code coverage (D))
  • Test harnesses (D)

Tool support for performance measurement and dynamic analysis

Performance measurement and dynamic analysis tools are essential in supporting performance and load testing activities, as these activities cannot effectively be done manually. Examples of these tools include:

  • Performance testing tools
  • Dynamic analysis tools (D)

Tool support for specialised testing needs

In addition to tools that support the general test process, there are many other tools that support more specific testing for non-functional characteristics.

Benefits and Risks of Test Automation

Simply acquiring a tool does not guarantee success. Each new tool introduced into an organisation will require effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks. This is particularly true of test execution tools (the use of which is often referred to as test automation).

Potential benefits of using tools to support test execution include:

  • Reduction in repetitive manual work (e.g., running regression tests, environment set up/tear down tasks, re-entering the same test data, and checking against coding standards), thus saving time
  • Greater consistency and repeatability (e.g., test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements)
  • More objective assessment (e.g., static measures, coverage)
  • Easier access to information about testing (e.g., statistics and graphs about test progress, defect rates and performance)

Potential risks of using tools to support testing include:

  • Expectations for the tool may be unrealistic (including functionality and ease of use)
  • The time, cost and effort for the initial introduction of a tool may be under-estimated (including training and external expertise)
  • The time and effort needed to achieve significant and continuing benefits from the tool may be under-estimated (including the need for changes in the test process and continuous improvement in the way the tool is used)
  • The effort required to maintain the test work products generated by the tool may be under-estimated
  • The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better)
  • Version control of test work products may be neglected
  • Relationships and interoperability issues between critical tools may be neglected, such as requirements management tools, configuration management tools, defect management tools and tools from multiple vendors
  • The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
  • The vendor may provide a poor response for support, upgrades, and defect fixes
  • An open source project may be suspended
  • A new platform or technology may not be supported by the tool
  • There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)

Special Considerations for Test Execution and Test Management Tools

In order to have a smooth and successful implementation, there are a number of things that ought to be considered when selecting and integrating test execution and test management tools into an organisation. 

Test execution tools

Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve real benefits. 

  • Capture/playback test approach: Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur and may require ongoing maintenance as the system’s user interface evolves over time. 
  • Data-driven test approach: This test approach separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data.
  • Keyword-driven test approach: In this test approach, a generic script processes keywords describing the actions to be taken (also called action words); the script then calls keyword scripts to process the associated test data.

The above approaches require someone to have expertise in the scripting language (testers, developers or specialists in test automation). When using data-driven or keyword-driven test approaches, testers who are not familiar with the scripting language can also contribute by creating test data and/or keywords for these predefined scripts. Regardless of the scripting technique used, the expected results for each test need to be compared to actual results from the test, either dynamically (while the test is running) or stored for later (post-execution) comparison.
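
A minimal data-driven sketch in Python: a generic script reads rows from a spreadsheet exported as CSV and runs the same test logic for each row. The file name, column names and the login function are illustrative assumptions:

    import csv

    def login(username: str, password: str) -> bool:
        # Stand-in for driving the real test object (illustrative only).
        return username == "alice" and password == "secret"

    # testdata.csv columns (illustrative): username,password,expected
    with open("testdata.csv", newline="") as f:
        for row in csv.DictReader(f):
            expected = row["expected"] == "True"
            assert login(row["username"], row["password"]) == expected, row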

Model-based testing (MBT) tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications, which can then be saved in a test management tool and/or executed by a test execution tool.
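
A minimal model-based sketch, assuming a hypothetical state model of a login dialog: test steps are derived by walking the transitions of the model (states, events and the coverage criterion are invented for illustration):

    # Hypothetical state model of a login dialog: state -> {event: next}.
    model = {
        "LoggedOut": {"login_ok": "LoggedIn", "login_fail": "LoggedOut"},
        "LoggedIn":  {"logout": "LoggedOut"},
    }

    # Simple all-transitions coverage: one test step per transition.
    for state, transitions in model.items():
        for event, target in transitions.items():
            print(f"From {state}, trigger '{event}', expect {target}")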

Test management tools

Test management tools often need to interface with other tools or spreadsheets for various reasons, including:

  • To produce useful information in a format that fits the needs of the organisation
  • To maintain consistent traceability to requirements in a requirements management tool
  • To link with test object version information in the configuration management tool

This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle Management), which includes a test management module, as well as other modules (e.g., project schedule and budget information) that are used by different groups within an organisation.

Effective Use of Tools

Main Principles for Tool Selection

The main considerations in selecting a tool for an organisation include: 

  • Assessment of the maturity of the organisation, its strengths and weaknesses
  • Identification of opportunities for an improved test process supported by tools
  • Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology
  • Understanding the build and continuous integration tools already in use within the organisation, in order to ensure tool compatibility and integration
  • Evaluation of the tool against clear requirements and objective criteria
  • Consideration of whether or not the tool is available for a free trial period (and for how long)
  • Evaluation of the vendor (including training, support and commercial aspects) or support for non-commercial (e.g., open source) tools
  • Identification of internal requirements for coaching and mentoring in the use of the tool
  • Evaluation of training needs, considering the testing (and test automation) skills of those who will be working directly with the tool(s)
  • Consideration of pros and cons of various licensing models (e.g., commercial or open source)
  • Estimation of a cost-benefit ratio based on a concrete business case (if required)

As a final step, a proof-of-concept evaluation should be done to establish whether the tool performs effectively with the software under test and within the current infrastructure or, if necessary, to identify changes needed to that infrastructure to use the tool effectively.

Pilot Projects for Introducing a Tool into an Organisation

After completing the tool selection and a successful proof-of-concept, introducing the selected tool into an organisation generally starts with a pilot project, which has the following objectives:

  • Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses
  • Evaluating how the tool fits with existing processes and practices, and determining what would need to change
  • Deciding on standard ways of using, managing, storing, and maintaining the tool and the test work products (e.g., deciding on naming conventions for files and tests, selecting coding standards, creating libraries and defining the modularity of test suites)
  • Assessing whether the benefits will be achieved at reasonable cost
  • Understanding the metrics that the tool is expected to collect and report, and configuring the tool to ensure these metrics can be captured and reported

Success Factors for Tools

Success factors for evaluation, implementation, deployment, and on-going support of tools within an organisation include:

  • Rolling out the tool to the rest of the organisation incrementally
  • Adapting and improving processes to fit with the use of the tool
  • Providing training, coaching, and mentoring for tool users
  • Defining guidelines for the use of the tool (e.g., internal standards for automation)
  • Implementing a way to gather usage information from the actual use of the tool
  • Monitoring tool use and benefits
  • Providing support to the users of a given tool
  • Gathering lessons learned from all users

It is also important to ensure that the tool is technically and organisationally integrated into the software development lifecycle, which may involve separate organisations responsible for operations and/or third party suppliers.