Why is testing necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s contributions to success

Throughout the history of computing, it has been quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality assurance and testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
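As a minimal, hypothetical sketch (the function below is invented for illustration, not taken from any real system), the following code shows a defect that only causes a failure under one specific precondition:

def average(values):
    # Defect: there is no guard for an empty list, so a division by zero
    # occurs, but only when the precondition "values is empty" is met.
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0 -- the defect stays hidden
print(average([]))         # ZeroDivisionError -- the defect causes a failure

With typical inputs the defect is never executed in a way that causes a failure, which is why such defects can remain undetected for a long time.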

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused by defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that fail to detect defects they should have detected; false positives are tests that report defects where no defect actually exists in the test object.
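As a hypothetical sketch (the functions, values, and thresholds below are invented for illustration), the difference between the two situations can be shown with two unittest test cases:

import unittest

def add_tax(net_price):
    # Correct implementation: adds 20% tax.
    return round(net_price * 1.20, 2)

def shipping_cost(weight_kg):
    # Defective implementation: the free-shipping threshold should be 10 kg, not 100 kg.
    return 0.0 if weight_kg >= 100 else 4.99

class ExampleTests(unittest.TestCase):
    def test_tax_false_positive(self):
        # False positive: the code is correct, but the expected value in the
        # test data is wrong (12.00 would be correct), so the test fails and
        # reports a defect that does not actually exist.
        self.assertEqual(add_tax(10.00), 12.50)

    def test_shipping_false_negative(self):
        # False negative: the code is defective, but the chosen input never
        # reaches the faulty threshold, so the test passes and the defect is missed.
        self.assertEqual(shipping_cost(5), 4.99)

if __name__ == "__main__":
    unittest.main()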

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.
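A hedged code sketch of this scenario (the function names and figures are invented for illustration) might look as follows, with the defect confined to a single line:

def annual_interest(balance, rate_percent):
    # Defect, stemming from the ambiguous user story: the rate is applied as a
    # whole number instead of a percentage -- the line should divide by 100.
    return balance * rate_percent        # e.g., 1000 * 5 = 5000 (failure)

def annual_interest_fixed(balance, rate_percent):
    # Corrected calculation after the defect was found and fixed.
    return balance * rate_percent / 100  # e.g., 1000 * 5 / 100 = 50.0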

Tasks of a Test Manager and Tester

The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organization.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  • Develop or review a test policy and test strategy for the organization
  • Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management
  • Write and update the test plan(s)
  • Coordinate the test plan(s) with project managers, product owners, and others
  • Share testing perspectives with other project activities, such as integration planning
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities
  • Prepare and deliver test progress reports and test summary reports based on the information gathered
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control
  • Support setting up the defect management system and adequate configuration management of testware
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s)
  • Decide about the implementation of test environment(s)
  • Promote and advocate the testers, the test team, and the test profession within the organization
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organization, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans
  • Analyze, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis)
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management
  • Design and implement test cases and test procedures
  • Prepare and acquire test data
  • Create the detailed test execution schedule
  • Execute tests, evaluate the results, and document deviations from expected results
  • Use appropriate tools to facilitate the test process
  • Automate tests as needed (may be supported by a developer or a test automation expert); a sketch of such an automated test follows this list
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability
  • Review tests developed by others
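As one possible illustration of the automation and traceability tasks above (the function, user story ID, and values below are hypothetical), an automated test case can record its link to the test condition and the test basis directly in the test code:

import unittest

def gross_price(net_price, vat_rate=0.20):
    # Test object: a hypothetical pricing function under test.
    return round(net_price * (1 + vat_rate), 2)

class PricingTests(unittest.TestCase):
    def test_vat_added_to_net_price(self):
        """Traceability: user story US-42 (hypothetical), test condition TC-7,
        acceptance criterion 'gross price = net price + 20% VAT'."""
        self.assertEqual(gross_price(100.00), 120.00)

if __name__ == "__main__":
    unittest.main()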

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

What are the testing objectives?

What we should test in a project may vary, and testing objectives could include:

  • Testing or evaluating work products such as requirements, user stories, design, and code.
  • Validating whether the test object is complete and works as expected by users and other stakeholders.
  • Building confidence in the quality of the test object.
  • Preventing errors and defects.
  • Finding defects which lead to failures.
  • Providing stakeholders with information that allows them to make informed decisions regarding the quality of the test object.
  • Reducing the level of risk of inadequate software quality.
  • Complying with legal or regulatory standards, and verifying that the test object complies with those standards or requirements.

Test objectives may vary from system to system, depending on the context of the component or system under test, the test level, and the software development lifecycle model being used.

What is testing?

Software applications and systems are part of everyday modern life; users all over the world use them, and sometimes even take part in testing them, without knowing it. In our daily lives we use systems on our phones and desktops for banking, cellular services, medical care, ordering food, and much more.

Software that does not function properly can lead to many problems, including loss of money, time, or reputation. Software testing, as a quality control activity, can reduce errors, defects, and failures in the software under test.

Software testing is a process that includes many different activities, such as test planning, analysis, design, and implementation, test execution, reporting test progress and results, and evaluating the quality of the test object.