Quality Assurance Testing: Ensuring Excellence in Software Development

In the fast-paced world of software development, where innovation and efficiency are key, the importance of Quality Assurance (QA) testing cannot be overstated. QA testing plays a crucial role in ensuring that software meets the highest standards of functionality, reliability, and performance. In this article, we will delve into the significance of QA testing, its key principles, and its impact on the overall success of software development projects.

The Essence of QA Testing

At its core, QA testing is the process of systematically evaluating a software product to identify and eliminate any defects or inconsistencies. The primary goal is to deliver a flawless product that not only meets but exceeds user expectations. QA testing encompasses a wide range of activities, including functional testing, performance testing, security testing, and usability testing.

Key Principles of QA Testing

  1. Early Integration of QA in the Development Lifecycle:
  • Successful QA testing starts early in the software development lifecycle. By integrating QA from the initial stages, issues can be identified and resolved before they escalate, saving both time and resources.
  2. Comprehensive Test Planning:
  • A well-thought-out test plan is the foundation of effective QA testing. It outlines the testing approach, objectives, resources, and schedules, ensuring a systematic and organized testing process.
  3. Test Automation:
  • Automation has become a cornerstone of modern QA testing. Automated testing tools not only expedite the testing process but also enhance accuracy and repeatability, especially for repetitive and time-consuming test scenarios (see the sketch after this list).
  4. Realistic Test Environments:
  • QA testing should be conducted in environments that closely mimic real-world conditions. This ensures that the software performs reliably in different scenarios, providing a more accurate representation of its actual behavior.
  5. Continuous Testing:
  • In the era of agile development, continuous testing is essential. It involves ongoing testing throughout the development process, allowing for immediate detection and resolution of issues as they arise.
  6. Collaboration and Communication:
  • Effective communication between development and QA teams is paramount. Collaboration ensures that both teams have a clear understanding of project requirements and objectives, leading to more efficient testing and bug resolution.
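
To make the automation principle concrete, here is a minimal sketch in Python using pytest. The discount function and its rules are invented for illustration; the point is that parameterised checks like these are repeatable, fast, and can run unattended on every build.

```python
# Hypothetical system under test: a simple discount calculation.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),     # no discount
    (100.0, 25, 75.0),     # typical case
    (19.99, 100, 0.0),     # boundary: full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Saved as test_discount.py, the suite runs with a plain pytest command, which is exactly what makes it suitable for repetitive regression scenarios.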

Types of QA Testing

  1. Functional Testing:
  • This type of testing focuses on verifying that the software functions as intended. It involves testing individual functions and features to ensure they meet the specified requirements.
  2. Performance Testing:
  • Performance testing evaluates how well a system performs under various conditions, including load testing to assess its response to high user volumes and stress testing to determine its limits (a simple load-test sketch follows this list).
  3. Security Testing:
  • Security testing identifies vulnerabilities and weaknesses in the software to prevent potential security breaches. It includes testing for data integrity, authentication, authorization, and protection against external threats.
  4. Usability Testing:
  • Usability testing assesses the user-friendliness of the software. It involves evaluating the interface, navigation, and overall user experience to ensure that the software is intuitive and easy to use.
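
As a rough illustration of load testing, the following sketch uses only the Python standard library to fire concurrent requests at a placeholder URL and report latency percentiles. The URL, request count, and concurrency level are assumptions; real performance testing would normally use a dedicated tool such as JMeter or Locust.

```python
# Minimal concurrent load-test sketch (standard library only).
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder target
REQUESTS = 50                  # total requests to send
CONCURRENCY = 10               # simultaneous workers

def timed_request(_: int) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
print(f"p95 latency:    {latencies[int(len(latencies) * 0.95)]:.3f}s")
```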

Impact of QA Testing on Software Development

  1. Enhanced Product Quality:
  • QA testing is the linchpin of delivering high-quality software. By identifying and rectifying defects early in the development process, the end product is more likely to meet user expectations and function reliably.
  2. Cost Savings:
  • Detecting and fixing defects during the early stages of development is significantly more cost-effective than addressing issues post-release. QA testing helps minimize the risk of costly bug fixes and reputation damage.
  3. Customer Satisfaction:
  • A reliable and bug-free software product enhances customer satisfaction. QA testing ensures that the software performs as expected, providing users with a positive experience and fostering loyalty.
  4. Faster Time-to-Market:
  • Continuous testing and early defect detection contribute to a faster development cycle. By resolving issues promptly, software development teams can adhere to project timelines and bring products to market more quickly.

Challenges in QA Testing

Despite its numerous benefits, QA testing comes with its set of challenges. Some common challenges include evolving technology, tight deadlines, and the need for skilled QA professionals. Addressing these challenges requires a proactive approach, ongoing training, and the adoption of innovative testing methodologies.

Conclusion

In the dynamic landscape of software development, QA testing stands as a pillar of assurance, guaranteeing that the end product aligns with the highest standards of quality. By adhering to key principles, embracing various testing methodologies, and recognizing its broader impact, QA testing ensures that software not only meets but exceeds the expectations of users. As technology continues to advance, the role of QA testing remains indispensable, guiding the path toward excellence in software development.

7 tips for improving load speed

Plan for performance

Are you building a new website? Be sure to discuss the importance of performance early on and set targets. That way, you have a faster load speed from the beginning and don’t have to implement fixes later.

Step 1: test, step 2: test…

Are you seeing a pattern here? 😉 Testing is crucial! Before you launch, load test your website multiple times to make sure it can handle the traffic of real site visitors. This is especially important for sites with complex hosting, such as a load-balanced configuration.

Implement some “quick wins”

To be clear, there’s no “get fast quick” scheme for site load speeds. But there is a tried-and-true template that will put you ahead of the curve: make use of modern image formats, enable compression on the server via Gzip, and leverage browser caching. Plenty more low-hanging fruit exists along these lines.
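
As a quick sanity check of the compression win, this standard-library Python snippet gzips a repetitive HTML payload and reports the savings. In practice you would enable compression in the web server or CDN configuration rather than in application code; the payload here is fabricated.

```python
# Demonstrates why gzip is a "quick win" for text assets like HTML, CSS, and JS.
import gzip

html = ("<html><body>"
        + "<p>Lorem ipsum dolor sit amet.</p>" * 500
        + "</body></html>").encode("utf-8")

compressed = gzip.compress(html, compresslevel=6)
print(f"original: {len(html):,} bytes")
print(f"gzipped:  {len(compressed):,} bytes")
print(f"savings:  {1 - len(compressed) / len(html):.0%}")
```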

Careful of your images!

Good websites have great graphic content – but they also take into account how images impact load speed. You can improve image performance by considering file formats, image compression, and lazy loading.
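
As one hedged example, the sketch below uses the Pillow imaging library (an assumption: installed via `pip install Pillow`) to batch-convert images to the WebP format at a reduced quality setting. The directory names and quality value are placeholders to adapt.

```python
# Batch-convert .jpg/.png images to WebP and report the size change.
from pathlib import Path
from PIL import Image

SRC = Path("images")         # hypothetical source directory
OUT = Path("images_webp")    # output directory for optimised copies
OUT.mkdir(exist_ok=True)

for path in list(SRC.glob("*.jpg")) + list(SRC.glob("*.png")):
    with Image.open(path) as img:
        target = OUT / (path.stem + ".webp")
        img.save(target, "WEBP", quality=80)  # lossy WebP at quality 80
    print(f"{path.name}: {path.stat().st_size:,} -> {target.stat().st_size:,} bytes")
```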

Think of the mobile visitors

More and more people surf the web on their phone these days, which makes mobile-optimized sites a huge priority! Since mobile users tend to use slower, less stable Internet connections, Accelerated Mobile Pages (AMPs) are a great way to get them content faster.

Prioritize above-the-fold

First impressions matter – and your above-the-fold content can make or break them! Consider inlining styles for above-the-fold content, then loading the rest of your code in chunks. This type of asynchronous loading can create a faster perceived load time for the user.

Assess your external scripts

Third-party scripts are a great tool – but can make your website feel a little crowded. Assess the performance of external scripts on your site load speed, and replace or remove those that are negatively impacting user experience.

DevOps preface

If you’re old, don’t try to change yourself, change your environment. —B. F. Skinner

One view of DevOps is that it helps take on that last mile problem in software: value delivery. The premise is that encouraging behaviors such as teaming, feedback, and experimentation will be reinforced by desirable outcomes such as better software, delivered faster and at lower cost. For many, the DevOps discourse then quickly turns to automation. That makes sense as automation is an environmental intervention that is relatively actionable. If you want to change behavior, change the environment!

In this context, automation becomes a significant investment decision with strategic import. DevOps automation engineers face a number of design choices. What level of interface abstraction is appropriate for the automation tooling? Where should you separate automation concerns of an infrastructure nature from those that should be more application centric?

These questions matter because automation tooling that is accessible to all can better connect all the participants in the software delivery process. That is going to help foster all those positive teaming behaviors we are after. Automation that is decoupled from infrastructure provisioning events makes it possible to quickly tenant new project streams. Users can immediately self-serve without raising a new infrastructure requisition.

We want to open the innovation process to all, be they 10x programmers or citizen developers. Doing DevOps with a container management platform makes this possible, and this blog will show you how.

This is a practical guide that will show how to easily implement and automate powerful cloud deployment patterns using a container management platform. Such a platform provides self-service capabilities for users, and its natively container-aware approach will allow us to show you an application-centric view of automation.

FUTURE OF DEVOPS

THE EARLY MAJORITY MOVES TO THE CLOUD

60% of businesses are adopting or expanding DevOps culture and processes, and 80% of businesses are now operating in the cloud.

DEVOPS AND THE CLOUD — A NATURAL PAIR

Let’s start with DevOps. Forrester Research dubbed 2018 the year of DevOps. And it’s no wonder, with over half of enterprises implementing or expanding existing DevOps practices. So why are they doing that? Here are a few good reasons to consider it:

DEVOPS OFFERS YOUR ORGANIZATION:

• Greater productivity and faster delivery of products
• Greater visibility and collaboration across projects, departments, and individuals
• Less siloing

So, DevOps removes friction; and as a practical environment for DevOps, the cloud just makes sense.
HOW THE CLOUD ENHANCES YOUR DEVOPS ORGANIZATION

• Rapid deployment of new environments
• Reduced IT costs through subscription and SaaS (pay-as-you-go) payment structures
• Moving from CapEx expenditures for hardware to OpEx expenses for SaaS
• Fast, agile scalability

So why the urgency to make these innovations? The truth is, they’re not really innovative anymore: early adopters in the DevOps world have already raised the bar on collaboration and cross-organizational visibility. You need a new edge.

GAUGE YOUR DEVOPS PROGRESS

• Institute Agile practices that focus on communication, collaboration, customer feedback, and small, rapid releases. Agile operations remove rigidity from your processes and allow for greater innovation, while keeping accountability and increasing goal focus.
• Deploy a multi-cloud strategy with Kubernetes or another intermediary layer for cloud-agnostic and resilient infrastructure.
• Build cloud-native systems for core products, with lift-and-shift for systems that don’t require much scalability.
• Create microservices in containers rather than monolithic apps to increase your agility and your ability to innovate with less downtime.

Acceptance Testing Business Process and Business Rules Modelling

Modelling Business Processes and Rules

Organisations need confidence that critical business processes, such as order-to-cash procedures, human resource on-boarding, or production planning, can be performed without disruption. This is known as “business process assurance” and it is an essential objective of acceptance testing. In this context, two standards exist that provide a common language for business analysts and testers for graphically representing business processes and business rules: Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN). These models support the design and implementation of tests and help to determine the priority for execution.

Business process/rule models describe the business flow and the expected behaviour of the test object. Representing business processes and rules to be tested using a graphical notation helps to establish a common understanding of what is expected. A business process corresponds to a flow of tasks, alternative paths, and the various events at the start, the end or possibly during the control flow. Business rules define explicit criteria for guiding behaviour, shaping judgments, or making decisions. 

Business Process Model and Notation (BPMN), maintained by the Object Management Group (OMG), is a recognised standard for business process modelling which uses a flowcharting technique. In this article, a subset of BPMN is used that is sufficient to draw simple business process models in the context of acceptance testing activities.

Decision Model and Notation (DMN), also standardised by the OMG, is complementary to BPMN. While BPMN is used to represent workflows, DMN is used to represent the decisions, business rules, and outcomes within those workflows. In this article, a subset of DMN is used that is sufficient to define business rules in conjunction with simple business process models in BPMN.

Deriving Acceptance Tests from Business Process/Rule Models

A business process model with business rules, described in BPMN and/or DMN, provides a precise definition of the scenarios to be tested, including the cases related to business rules. It is a good basis for generating acceptance tests using coverage-based test selection criteria as defined in a model-based testing approach.

Coverage-based test selection follows the principle that the business analyst and tester agree on the coverage items that shall be fully tested. Typical coverage items for business process models when generating acceptance tests include the following: 

  • User stories, requirements, and risks annotated in the business process model.
  • Decisions in the decision tables describing the business rules.
  • User scenarios defined by different paths through the business process model.
  • All paths (usually without loops) through the business process model.

Once the coverage items are defined, the tester then identifies a set of test cases that covers those items. Full coverage is achieved if the test suite covers each occurrence of the coverage item in the model at least once during execution.

Different coverage criteria may be combined to meet the acceptance testing objectives. For example, the objective may be to cover all paths of a given main scenario, but only one path of each alternative scenario.
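
To illustrate, here is a minimal Python sketch of coverage-based selection from a DMN-style decision table represented as plain data. The discount rules are invented; the selection criterion shown is "at least one test case per rule", i.e., full decision coverage of the table.

```python
# Each row of the decision table is one business rule:
# (customer_type, minimum_order_total, expected_discount)
DECISION_TABLE = [
    ("regular",   0, 0.00),
    ("regular", 100, 0.05),
    ("gold",      0, 0.10),
    ("gold",    100, 0.15),
]

def derive_test_cases(table):
    """Derive one acceptance test case per rule (full decision coverage)."""
    for customer_type, min_total, discount in table:
        yield {
            "inputs": {"customer_type": customer_type,
                       "order_total": min_total},  # representative boundary value
            "expected_discount": discount,
        }

for case in derive_test_cases(DECISION_TABLE):
    print(case)
```

A stricter criterion (for example, values just below and above each threshold) would simply yield more cases per rule.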

Business Process Modelling for Acceptance Testing

Business process/rule models describe the business flow and the expected behaviour of the test object. The use of business process/rule modelling in the context of acceptance testing is based on good modelling practices and supports visual ATDD practices.

Good Practices for Business Process Modelling for Acceptance Testing

The following good practices should be considered when using Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN) for acceptance testing: 

  • It is not necessary to describe everything in a business process model. The graphical representations of business processes in BPMN should focus on requirements to be tested. Therefore, workflow descriptions that only partially cover the behaviour of related software systems are acceptable, as long as they represent what is to be tested.
  • Especially for rule-based business processes, using decision tables helps manage dependencies. DMN supports the definition of conditions and outcomes corresponding to the business rules under test.
  • Diagrams should be as simple as possible and be structured in sub-processes when needed to limit the number of graphical elements in a single business process diagram. This improves readability and facilitates reviews.
  • Business process modelling for acceptance testing should be a collaborative work between business analysts and testers. Artefacts produced should be shared between and reviewed by both roles. Early and close communication between those two roles improves the quality of requirements or user stories as well as tests. (This is true for all test levels.)
  • Additional information such as links to user stories, requirements, risks, priorities and any other information useful for acceptance testing should be added to the diagrams using annotations. By keeping all relevant information in a single location, it becomes easier to make decisions and reasons are better documented.

Using Business Process Models for ATDD

During the refinement sessions for requirements and user stories, the business process and business rule models will help the team to get into the details of the expected behaviour and the acceptance criteria. The representation of workflows in BPMN and of rules in DMN directly enables testers to design appropriate test cases that verify the acceptance criteria.

Business process modelling for ATDD is based on the following principles:

  • Business analysts and testers collaborate to model workflows and business rules using graphical notations such as BPMN and DMN.
  • These business process/rule models are reviewed with relevant stakeholders and contribute to the validation of the requirements and acceptance criteria.
  • Testers derive tests from these business process/rule models to ensure and demonstrate the required coverage through the different paths and business rules.
  • Business analysts and testers may also use the models to identify changes that necessitate test case maintenance and to select regression test cases.
  • Business process/rule models created and maintained for ATDD can be viewed as living documentation used by business analysts to present the actual behaviour of the test object.
  • Automated test generation techniques can be used to produce and maintain automated test scripts. The model-based testing approach can also be combined with keyword-driven testing and data-driven testing approaches.

Business process/rule modelling in ATDD provides a visualisation of the workflows to be tested. This is the major difference from the Gherkin language used in BDD.

Basics of Testing

What is Testing?

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, or business reputation, and even injury or death. Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. As described, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analysing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.

Another common misperception of testing is that it focuses entirely on verification of requirements, user stories, or other specifications. While testing does involve checking whether the system meets specified requirements, it also involves validation, which is checking whether the system will meet user and other stakeholder needs in its operational environment(s).

Test activities are organised and carried out differently in different lifecycles.

Typical Objectives of Testing

For any given project, the objectives of testing may include: 

  • To prevent defects by evaluating work products such as requirements, user stories, design, and code
  • To verify whether all specified requirements have been fulfilled 
  • To check whether the test object is complete and validate if it works as the users and other stakeholders expect
  • To build confidence in the level of quality of the test object 
  • To find defects and failures, and thus reduce the level of risk of inadequate software quality
  • To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
  • To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:

  • During component testing, one objective may be to find as many failures as possible so that the underlying defects are identified and fixed early. Another objective may be to increase code coverage of the component tests.
  • During acceptance testing, one objective may be to confirm that the system works as expected and satisfies requirements. Another objective of this testing may be to give information to stakeholders about the risk of releasing the system at a given time.

Testing and Debugging

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyses, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and the associated component and component integration testing (continuous integration). However, in Agile development and in some other software development lifecycles, testers may be involved in debugging and component testing.

Why is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s Contributions to Success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include: 

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

In addition to these examples, the achievement of defined test objectives contributes to overall software development and maintenance success.

Quality Assurance and Testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organisation with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing. As described early on, testing contributes to the achievement of quality in a variety of ways.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused due to defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other test-ware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that do not detect defects that they should have detected; false positives are reported as defects, but aren’t actually defects.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analysed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced. 

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.

Seven Testing Principles

A number of testing principles have been suggested over the past 50 years and offer general guidelines common for all testing. 

1. Testing shows the presence of defects, not their absence 

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness. 

2. Exhaustive testing is impossible 

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts. 
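
A quick back-of-the-envelope calculation shows why. The form fields below are invented, but even this small input space is far beyond what any team could execute:

```python
# Number of input combinations for a hypothetical five-field form.
from math import prod

field_values = {
    "country": 195,            # selectable countries
    "age": 120,                # plausible integer ages
    "payment_method": 5,
    "currency": 150,
    "coupon_applied": 2,
}

total = prod(field_values.values())
print(f"{total:,} combinations")   # 35,100,000: over a year at one test per second
```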

3. Early testing saves time and money 

To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.

4. Defects cluster together 

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in principle 2).

5. Beware of the pesticide paradox 

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

6. Testing is context dependent 

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

7. Absence-of-errors is a fallacy 

Some organisations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations, or that is inferior compared to other competing systems.

Test Process

There is no one universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives. These sets of test activities are a test process. The proper, specific software test process in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organisation’s test strategy.

Test Process in Context 

Contextual factors that influence the test process for an organisation include, but are not limited to:

  • Software development lifecycle model and project methodologies being used
  • Test levels and test types being considered
  • Product and project risks
  • Business domain
  • Operational constraints, including but not limited to:
    • Budgets and resources
    • Timescales
    • Complexity
    • Contractual and regulatory requirements 
  • Organisational policies and practices 
  • Required internal and external standards

The following sections describe general aspects of organisational test processes in terms of the following: 

  • Test activities and tasks 
  • Test work products 
  • Traceability between the test basis and test work products

It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
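
A minimal sketch of that coverage criterion as a measurable KPI follows; the requirement IDs, device names, and test-case mapping are all hypothetical.

```python
# Test basis: requirements plus supported devices, each an element to cover.
test_basis = {"REQ-1", "REQ-2", "REQ-3", "PIXEL-8", "IPHONE-15"}

# Bi-directional trace: each test case lists the elements it covers.
test_cases = {
    "TC-01": {"REQ-1", "PIXEL-8"},
    "TC-02": {"REQ-1", "IPHONE-15"},
    "TC-03": {"REQ-3", "PIXEL-8"},
}

covered = set().union(*test_cases.values())
coverage = len(covered & test_basis) / len(test_basis)

print(f"coverage: {coverage:.0%}")                   # 80%
print(f"uncovered: {sorted(test_basis - covered)}")  # ['REQ-2']
```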

Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design 
  • Test implementation
  • Test execution
  • Test completion

Each main group of activities is composed of constituent activities, which will be described in the subsections below. Each constituent activity consists of multiple individual tasks, which would vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning. So test activities are also happening on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main groups of activities will involve overlap, combination, concurrency, or omission, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models. For example, the evaluation of exit criteria for test execution as part of a given test level may include: 

  • Checking test results and logs against specified coverage criteria
  • Assessing the level of component or system quality based on test results and logs
  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.

Test analysis

During test analysis, the test basis is analysed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities: 

  • Analysing the test basis appropriate to the test level being considered, for example:
    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behaviour
    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces
    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
  • Evaluating the test basis and test items to identify defects of various types, such as: 
    • Ambiguities
    • Omissions
    • Inconsistencies
    • Inaccuracies
    • Contradictions
    • Superfluous statements
  • Identifying features and sets of features to be tested
  • Defining and prioritising test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. Examples include techniques such as behaviour driven development (BDD) and acceptance test driven development (ATDD), which involve generating test conditions and test cases from user stories and acceptance criteria prior to coding; these techniques also verify, validate, and detect defects in the user stories and acceptance criteria.

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other test-ware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritising test cases and sets of test cases 
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques.

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the test-ware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?” 

Test implementation includes the following major activities:

  • Developing and prioritising test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts 
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  • Building the test environment (including, potentially, test harnesses, service virtualisation, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites

Test design and test implementation tasks are often combined.

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented. 

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and test-ware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results (a comparison-and-logging sketch follows this list)
  • Analysing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked)
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.
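
Here is the comparison-and-logging sketch promised above. The system under test (a trivial pricing function), the test cases, and the outcomes are invented for illustration:

```python
# Run each test, compare actual to expected, and log pass/fail/blocked.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def price_with_tax(net: float, rate: float) -> float:
    """Hypothetical test object."""
    return round(net * (1 + rate), 2)

test_cases = [
    # (test id, inputs, expected result)
    ("TC-01", (100.0, 0.20), 120.0),
    ("TC-02", (0.0, 0.20), 0.0),
    ("TC-03", (49.99, 0.07), 53.49),
]

for test_id, inputs, expected in test_cases:
    try:
        actual = price_with_tax(*inputs)
    except Exception as exc:   # e.g., environment problems leave the test blocked
        logging.error("%s BLOCKED (%s)", test_id, exc)
        continue
    outcome = "PASS" if actual == expected else "FAIL"
    logging.info("%s %s (expected %s, actual %s)", test_id, outcome, expected, actual)
```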

Test completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalising and archiving the test environment, the test data, the test infrastructure, and other test-ware for later reuse
  • Handing over the test-ware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analysing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Test Work Products

Test work products are created as part of the test process. Just as there is significant variation in the way that organisations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organised and managed, and in the names used for those work products.

Many of the test work products described in this section can be captured and managed using test management tools and defect management tools.

Test planning work products 

Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control.

Test monitoring and control work products

Test monitoring and control work products typically include various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones. All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarising the test execution results once those become available. 

Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. 

Test monitoring and control, and the work products created during these activities, are further explained on this site.

Test analysis work products

Test analysis work products include defined and prioritised test conditions, each of which is ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test analysis may involve the creation of test charters. Test analysis may also result in the discovery and reporting of defects in the test basis. 

Test design work products

Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results. Such high-level test cases are reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s) it covers.

Test design also results in:

  • the design and/or identification of the necessary test data
  • the design of the test environment
  • the identification of infrastructure and tools

The extent to which these results are documented, however, varies significantly.

Test implementation work products

Test implementation work products include:

  • Test procedures and the sequencing of those test procedures
  • Test suites
  • A test execution schedule

Ideally, once test implementation is complete, achievement of coverage criteria established in the test plan can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions.

In some cases, test implementation involves creating work products using or used by tools, such as service virtualisation and automated test scripts.

Test implementation also may result in the creation and verification of test data and the test environment. The completeness of the documentation of the data and/or environment verification results may vary significantly.

The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results which are associated with concrete test data are identified by using a test oracle.
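
As an illustration of this separation, the hypothetical sketch below keeps one high-level login test case and binds it to different rows of concrete test data; every name and value is invented, and the expected-result column plays the role of the test oracle.

```python
# One high-level test case, several low-level (executable) instantiations.
HIGH_LEVEL_CASE = "Login as {username!r} with {password!r} shows the {expected_screen} screen"

TEST_DATA = [
    # Concrete values for this release; a later release can swap in new rows.
    {"username": "alice", "password": "correct-password", "expected_screen": "dashboard"},
    {"username": "alice", "password": "wrong-password",   "expected_screen": "error"},
    {"username": "",      "password": "",                 "expected_screen": "error"},
]

def make_low_level_cases(template: str, rows: list[dict]) -> list[dict]:
    """Bind concrete data to the high-level case, yielding executable cases."""
    return [{"description": template.format(**row), **row} for row in rows]

for case in make_low_level_cases(HIGH_LEVEL_CASE, TEST_DATA):
    print(case["description"])
```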

In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.

Test conditions defined in test analysis may be further refined in test implementation.

Test execution work products

Test execution work products include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  • Defect reports
  • Documentation about which test item(s), test object(s), test tools, and test-ware were involved in the testing

Ideally, once test execution is complete, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). For example, we can say which requirements have passed all planned tests, which requirements have failed tests and/or have defects associated with them, and which requirements have planned tests still waiting to be run. This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations, change requests or product backlog items, and finalised test-ware.

Traceability between the Test Basis and Test Work Products

As mentioned earlier, test work products and the names of those work products vary significantly. Regardless of these variations, in order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element, as described above. In addition to the evaluation of test coverage, good traceability supports:

  • Analysing the impact of changes
  • Making testing auditable
  • Meeting IT governance criteria
  • Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
  • Relating the technical aspects of testing to stakeholders in terms that they can understand
  • Providing information to assess product quality, process capability, and project progress against business goals

Some test management tools provide test work product models that match part or all of the test work products outlined in this section. Some organisations build their own management systems to organise the work products and provide the information traceability they require.

The Psychology of Testing

Software development, including software testing, involves human beings. Therefore, human psychology has important effects on software testing.

Human Psychology and Testing 

Identifying defects during a static test such as a requirement review or user story refinement session, or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality. To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.

Testers and test managers need to have good interpersonal skills to be able to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues. Ways to communicate well include the following examples:

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
  • Emphasise the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organisation, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
  • Communicate test results and other findings in a neutral, fact-focused way without criticising the person who created the defective item. Write objective and factual defect reports and review findings.
  • Try to understand how the other person feels and the reasons they may react negatively to the information.
  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier. Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviours with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester’s and Developer’s Mindsets

Developers and testers often think differently. The primary objective of development is to design and build a product. As discussed earlier, the objectives of testing include verifying and validating the product, finding defects prior to release, and so forth. These are different sets of objectives which require different mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.

A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to notice errors in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organising the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different from that of the work product authors (i.e., business analysts, product owners, designers, and developers), since they have different cognitive biases from the authors.

Test Techniques

Categories of Test Techniques 

The purpose of test techniques, including those discussed in this section, is to help identify test conditions, test cases, and test data.

The choice of which test techniques to use depends on a number of factors, including: 

  • Component or system complexity 
  • Regulatory standards 
  • Customer or contractual requirements 
  • Risk levels and types 
  • Available documentation 
  • Tester knowledge and skills 
  • Available tools 
  • Time and budget 
  • Software development lifecycle model 
  • The types of defects expected in the component or system 

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels. When creating test cases, testers generally use a combination of test techniques to achieve the best results from the test effort.

The use of test techniques in the test analysis, test design, and test implementation activities can range from very informal (little to no documentation) to very formal. The appropriate level of formality depends on the context of testing, including the maturity of test and development processes, time constraints, safety or regulatory requirements, the knowledge and skills of the people involved, and the software development lifecycle model being followed. 

Categories of Test Techniques and Their Characteristics

In this article, test techniques are classified as black-box, white-box, or experience-based.

Black-box test techniques (also called behavioural or behaviour-based techniques) are based on an analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases, user stories, or business processes). These techniques are applicable to both functional and non-functional testing. Black-box test techniques concentrate on the inputs and outputs of the test object without reference to its internal structure. 

White-box test techniques (also called structural or structure-based techniques) are based on an analysis of the architecture, detailed design, internal structure, or the code of the test object. Unlike black-box test techniques, white-box test techniques concentrate on the structure and processing within the test object. 

Experience-based test techniques leverage the experience of developers, testers and users to design, implement, and execute tests. These techniques are often combined with black-box and white-box test techniques.

Common characteristics of black-box test techniques include the following: 

  • Test conditions, test cases, and test data are derived from a test basis that may include software requirements, specifications, use cases, and user stories
  • Test cases may be used to detect gaps between the requirements and the implementation of the requirements, as well as deviations from the requirements 
  • Coverage is measured based on the items tested in the test basis and the technique applied to the test basis

Common characteristics of white-box test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include code, software architecture, detailed design, or any other source of information regarding the structure of the software
  • Coverage is measured based on the items tested within a selected structure (e.g., the code or interfaces) and the technique applied to the test basis

Common characteristics of experience-based test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include knowledge and experience of testers, developers, users and other stakeholders 

This knowledge and experience includes expected use of the software, its environment, likely defects, and the distribution of those defects.

Black-box Test Techniques

Equivalence Partitioning 

Equivalence partitioning divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way. There are equivalence partitions for both valid and invalid values. 

  • Valid values are values that should be accepted by the component or system. An equivalence partition containing valid values is called a “valid equivalence partition.” 
  • Invalid values are values that should be rejected by the component or system. An equivalence partition containing invalid values is called an “invalid equivalence partition.” 
  • Partitions can be identified for any data element related to the test object, including inputs, outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). 
  • Any partition may be divided into sub-partitions if required. 
  • Each value must belong to one and only one equivalence partition.
  • When invalid equivalence partitions are used in test cases, they should be tested individually, i.e., not combined with other invalid equivalence partitions, to ensure that failures are not masked. Failures can be masked when several failures occur at the same time but only one is visible, causing the other failures to be undetected. 

To achieve 100% coverage with this technique, test cases must cover all identified partitions (including invalid partitions) by using a minimum of one value from each partition. Coverage is measured as the number of equivalence partitions tested by at least one value, divided by the total number of identified equivalence partitions, normally expressed as a percentage. Equivalence partitioning is applicable at all test levels.
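
To make this concrete, here is a minimal sketch in Python using pytest. The validate_age function and its accepted range of 18 to 65 are hypothetical, chosen only to illustrate one test value per partition:

    import pytest

    def validate_age(age: int) -> bool:
        """Hypothetical rule: accept ages from 18 to 65 inclusive."""
        return 18 <= age <= 65

    # Three partitions: invalid (too low), valid, invalid (too high).
    # Testing one value from each gives 3/3 = 100% partition coverage.
    @pytest.mark.parametrize("age, expected", [
        (10, False),  # invalid partition: below 18
        (40, True),   # valid partition: 18 to 65
        (70, False),  # invalid partition: above 65
    ])
    def test_age_partitions(age, expected):
        assert validate_age(age) == expected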

Boundary Value Analysis

Boundary value analysis (BVA) is an extension of equivalence partitioning, but can only be used when the partition is ordered, consisting of numeric or sequential data. The minimum and maximum values (or first and last values) of a partition are its boundary values. 

For example, suppose an input field accepts a single integer value, using a keypad to limit inputs so that non-integer inputs are impossible. The valid range is from 1 to 5, inclusive. So, there are three equivalence partitions: invalid (too low); valid; invalid (too high). For the valid equivalence partition, the boundary values are 1 and 5. For the invalid (too high) partition, the boundary value is 6. For the invalid (too low) partition, there is only one boundary value, 0, because this is a partition with only one member. 

In the example above, we identify two boundary values per boundary. The boundary between invalid (too low) and valid gives the test values 0 and 1. The boundary between valid and invalid (too high) gives the test values 5 and 6. Some variations of this technique identify three boundary values per boundary: the values before, at, and just over the boundary. In the previous example, using three-point boundary values, the lower boundary test values are 0, 1, and 2, and the upper boundary test values are 4, 5, and 6. 
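
Those test values can be written out as a short sketch. The accept function below is a hypothetical implementation of the 1-to-5 rule from the example, included only to show the two-point and three-point value sets side by side:

    # Hypothetical implementation of the example rule (valid range 1 to 5).
    def accept(value: int) -> bool:
        return 1 <= value <= 5

    # Two-point BVA: the values on either side of each boundary.
    two_point = [0, 1, 5, 6]

    # Three-point BVA: the values before, at, and just over each boundary.
    three_point = [0, 1, 2, 4, 5, 6]

    for v in two_point:
        print(v, accept(v))  # expect: 0 False, 1 True, 5 True, 6 False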

Behaviour at the boundaries of equivalence partitions is more likely to be incorrect than behaviour within the partitions. It is important to remember that both specified and implemented boundaries may be displaced to positions above or below their intended positions, may be omitted altogether, or may be supplemented with unwanted additional boundaries. Boundary value analysis and testing will reveal almost all such defects by forcing the software to show behaviours from a partition other than the one to which the boundary value should belong. 

Boundary value analysis can be applied at all test levels. This technique is generally used to test requirements that call for a range of numbers (including dates and times). Boundary coverage for a partition is measured as the number of boundary values tested, divided by the total number of identified boundary test values, normally expressed as a percentage.

Decision Table Testing

Decision tables are a good way to record complex business rules that a system must implement. When creating decision tables, the tester identifies conditions (often inputs) and the resulting actions (often outputs) of the system. These form the rows of the table, usually with the conditions at the top and the actions at the bottom. Each column corresponds to a decision rule that defines a unique combination of conditions which results in the execution of the actions associated with that rule. The values of the conditions and actions are usually shown as Boolean values (true or false) or discrete values (e.g., red, green, blue), but can also be numbers or ranges of numbers. These different types of conditions and actions might be found together in the same table.

The common notation in decision tables is as follows:

For conditions:

  • Y means the condition is true (may also be shown as T or 1) 
  • N means the condition is false (may also be shown as F or 0) 
  • — means the value of the condition doesn’t matter (may also be shown as N/A)

For actions: 

  • X means the action should occur (may also be shown as Y or T or 1) 
  • Blank means the action should not occur (may also be shown as – or N or F or 0)

A full decision table has enough columns (test cases) to cover every combination of conditions. The number of test cases can be reduced considerably by deleting columns that do not affect the outcome, for example, columns containing impossible combinations of conditions.

The common minimum coverage standard for decision table testing is to have at least one test case per decision rule in the table. This typically involves covering all combinations of conditions. Coverage is measured as the number of decision rules tested by at least one test case, divided by the total number of decision rules, normally expressed as a percentage.

The strength of decision table testing is that it helps to identify all the important combinations of conditions, some of which might otherwise be overlooked. It also helps in finding any gaps in the requirements. It may be applied to all situations in which the behaviour of the software depends on a combination of conditions, at any test level.
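
As a minimal sketch of the technique, the snippet below encodes a hypothetical discount rule with two conditions and covers each of its four decision rules with one test case; the rule and all names are illustrative:

    # Hypothetical rule: members with orders of 100 or more get a discount.
    def gets_discount(is_member: bool, total: float) -> bool:
        return is_member and total >= 100

    # One test case per decision rule gives 4/4 = 100% rule coverage.
    # Columns: condition 1 (member?), condition 2 (total), expected action.
    rules = [
        (True,  150.0, True),   # R1: Y, Y -> discount (X)
        (True,   50.0, False),  # R2: Y, N -> no discount
        (False, 150.0, False),  # R3: N, Y -> no discount
        (False,  50.0, False),  # R4: N, N -> no discount
    ]

    for is_member, total, expected in rules:
        assert gets_discount(is_member, total) == expected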

State Transition Testing

Components or systems may respond differently to an event depending on current conditions or previous history (e.g., the events that have occurred since the system was initialised). The previous history can be summarised using the concept of states. A state transition diagram shows the possible software states, as well as how the software enters, exits, and transitions between states. A transition is initiated by an event (e.g., user input of a value into a field), and the same event can result in two or more different transitions from the same state. The state change may result in the software taking an action (e.g., outputting a calculation or error message). 

A state transition table shows all valid transitions and potentially invalid transitions between states, as well as the events, and resulting actions for valid transitions. State transition diagrams normally show only the valid transitions and exclude the invalid transitions. 

Tests can be designed to cover a typical sequence of states, to exercise all states, to exercise every transition, to exercise specific sequences of transitions, or to test invalid transitions. 

State transition testing is used for menu-based applications and is widely used within the embedded software industry. The technique is also suitable for modelling a business scenario having specific states or for testing screen navigation. The concept of a state is abstract — it may represent a few lines of code or an entire business process. 

Coverage is commonly measured as the number of identified states or transitions tested, divided by the total number of identified states or transitions in the test object, normally expressed as a percentage.
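
A minimal sketch of state transition testing in Python, assuming a hypothetical login flow with three states; the transition table drives both the implementation stub and the tests, and exercising every entry gives 100% transition coverage:

    # Hypothetical states and valid transitions for a login flow.
    TRANSITIONS = {
        ("logged_out", "good_password"): "logged_in",
        ("logged_out", "bad_password"):  "locked",
        ("logged_in",  "logout"):        "logged_out",
        ("locked",     "reset"):         "logged_out",
    }

    def next_state(state: str, event: str) -> str:
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"invalid transition: {event} in {state}")
        return TRANSITIONS[(state, event)]

    # Exercise every valid transition (4/4 = 100% transition coverage).
    for (state, event), expected in TRANSITIONS.items():
        assert next_state(state, event) == expected

    # An invalid transition should be rejected, not silently accepted.
    try:
        next_state("locked", "logout")
    except ValueError:
        pass  # expected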

Use Case Testing 

Tests can be derived from use cases, which are a specific way of designing interactions with software items. They incorporate requirements for the software functions. Use cases are associated with actors (human users, external hardware, or other components or systems) and subjects (the component or system to which the use case is applied).

Each use case specifies some behaviour that a subject can perform in collaboration with one or more actors. A use case can be described by interactions and activities, as well as preconditions, postconditions and natural language where appropriate. Interactions between the actors and the subject may result in changes to the state of the subject. Interactions may be represented graphically by workflows, activity diagrams, or business process models.

A use case can include possible variations of its basic behaviour, including exceptional behaviour and error handling (system response and recovery from programming, application and communication errors, e.g., resulting in an error message). Tests are designed to exercise the defined behaviours (basic, exceptional or alternative, and error handling). Coverage can be measured by the number of use case behaviours tested divided by the total number of use case behaviours, normally expressed as a percentage.

White-box Test Techniques 

White-box testing is based on the internal structure of the test object. White-box test techniques can be used at all test levels, but the two code-related techniques discussed in this section are most commonly used at the component test level. There are more advanced techniques that are used in some safety-critical, mission-critical, or high integrity environments to achieve more thorough coverage, but those are not discussed here.

Statement Testing and Coverage 

Statement testing exercises the potential executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage. 

Decision Testing and Coverage

Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome). 

Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage.

The Value of Statement and Decision Testing

When 100% statement coverage is achieved, it ensures that all executable statements in the code have been tested at least once, but it does not ensure that all decision logic has been tested. Of the two white-box techniques discussed in this article, statement testing may provide less coverage than decision testing. 

When 100% decision coverage is achieved, it executes all decision outcomes, which includes testing the true outcome and also the false outcome, even when there is no explicit false statement (e.g., in the case of an IF statement without an else in the code). Statement coverage helps to find defects in code that was not exercised by other tests. Decision coverage helps to find defects in code where other tests have not taken both true and false outcomes. 

Achieving 100% decision coverage guarantees 100% statement coverage (but not vice versa).
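
The difference can be sketched in a few lines of Python; the safe_divide function is hypothetical:

    def safe_divide(a: float, b: float) -> float:
        result = 0.0
        if b != 0:          # the only decision point
            result = a / b  # the only statement inside the IF
        return result

    # This single test executes every statement (100% statement coverage)
    # but only the TRUE outcome of the decision; the b == 0 path is untested.
    assert safe_divide(10, 2) == 5.0

    # Adding this test covers the FALSE outcome, reaching 100% decision
    # coverage, which in turn guarantees 100% statement coverage.
    assert safe_divide(10, 0) == 0.0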

Experience-based Test Techniques

When applying experience-based test techniques, the test cases are derived from the tester’s skill and intuition, and their experience with similar applications and technologies. These techniques can be helpful in identifying tests that were not easily identified by other more systematic techniques. Depending on the tester’s approach and experience, these techniques may achieve widely varying degrees of coverage and effectiveness. Coverage can be difficult to assess and may not be measurable with these techniques. 

Commonly used experience-based techniques are discussed in the following sections.

Error Guessing 

Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including: 

  • How the application has worked in the past 
  • What kind of errors tend to be made 
  • Failures that have occurred in other applications

A methodical approach to error guessing is to create a list of possible errors, defects, and failures, and then design tests that will expose those failures and the defects that caused them. These lists of errors, defects, and failures can be built based on experience, defect and failure data, or common knowledge about why software fails.
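
As an illustration, the sketch below turns such a list into executable checks with pytest; parse_quantity and the guessed errors are hypothetical examples, not a prescribed list:

    import pytest

    def parse_quantity(text: str) -> int:
        """Hypothetical handler: parse a positive order quantity."""
        value = int(text.strip())
        if value <= 0:
            raise ValueError("quantity must be positive")
        return value

    # Each case encodes a guessed error: empty input, whitespace-only
    # input, negatives, zero, non-numeric text, and decimal values.
    @pytest.mark.parametrize("bad_input", ["", "   ", "-1", "0", "abc", "1.5"])
    def test_guessed_errors_are_rejected(bad_input):
        with pytest.raises(ValueError):
            parse_quantity(bad_input)

    def test_valid_input_with_surrounding_spaces():
        assert parse_quantity(" 3 ") == 3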

Exploratory Testing

In exploratory testing, informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing. 

Exploratory testing is sometimes conducted using session-based testing to structure the activity. In session-based testing, exploratory testing is conducted within a defined time-box, and the tester uses a test charter containing test objectives to guide the testing. The tester may use test session sheets to document the steps followed and the discoveries made. 

Exploratory testing is most useful when there are few or inadequate specifications or significant time pressure on testing. Exploratory testing is also useful to complement other more formal testing techniques. 

Exploratory testing is strongly associated with reactive test strategies. Exploratory testing can incorporate the use of other black-box, white-box, and experience-based techniques.

Checklist-based Testing

In checklist-based testing, testers design, implement, and execute tests to cover test conditions found in a checklist. As part of analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails. 

Checklists can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can provide guidelines and a degree of consistency. As these are high-level lists, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.
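
To give a flavour of the approach, here is a minimal sketch of a checklist for a hypothetical login form; the entries are illustrative test conditions, not a recommended set:

    # Hypothetical high-level test conditions for a login form.
    LOGIN_FORM_CHECKLIST = [
        "Valid credentials log the user in",
        "Invalid credentials show an error without revealing which field was wrong",
        "The password field masks its input",
        "Empty fields are rejected with a clear message",
        "The 'forgot password' link is present and reachable",
    ]

    # Testers work through the list, recording a result per condition; the
    # same checklist can be reused and expanded across releases.
    for item in LOGIN_FORM_CHECKLIST:
        print(f"[ ] {item}")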

Why is testing necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s contributions to success

Throughout the history of computing, it has been quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise fail to meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, at the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality assurance and testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused by defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives are results reported as defects that are not actually defects; they may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware. The inverse situation can also occur, where similar errors or defects lead to false negatives: tests that do not detect defects they should have detected.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.

What are the testing objectives?

What should be tested in a project may vary, and test objectives can include:

  • Testing and evaluating work products such as requirements, user stories, designs, and code.
  • Validating whether the test object is complete and works as expected by users and other stakeholders.
  • Building confidence in the quality of the test object.
  • Preventing errors and defects.
  • Finding defects that could lead to failures.
  • Providing stakeholders with information so they can make informed decisions regarding the quality of the test object.
  • Reducing the level of risk to software quality.
  • Complying with legal or regulatory standards, and verifying that the test object complies with those standards or requirements.

Test objectives may vary from system to system, depending on the context of the component or system under test, the test level, and the software development lifecycle model being used.

What is testing?

Software systems are an integral part of modern life. Users all over the world use, and in effect test, software without even knowing it: in our daily lives we rely on systems on our phones and desktops for banking, mobile services, healthcare, ordering food, and much more.

Software that does not function properly can lead to many problems, including loss of money, time, and reputation. Software testing, which is part of QA, can reduce the errors, defects, and failures in the software under test.

Software testing is a process which includes many different activities, such as test planning, analysis, design, and implementation, test execution, reporting test progress and results, and evaluating the quality of the test object.