Blog

Quality Assurance Testing: Ensuring Excellence in Software Development

In the fast-paced world of software development, where innovation and efficiency are key, the importance of Quality Assurance (QA) testing cannot be overstated. QA testing plays a crucial role in ensuring that software meets the highest standards of functionality, reliability, and performance. In this article, we will delve into the significance of QA testing, its key principles, and its impact on the overall success of software development projects.

The Essence of QA Testing

At its core, QA testing is the process of systematically evaluating a software product to identify and eliminate any defects or inconsistencies. The primary goal is to deliver a flawless product that not only meets but exceeds user expectations. QA testing encompasses a wide range of activities, including functional testing, performance testing, security testing, and usability testing.

Key Principles of QA Testing

  1. Early Integration of QA in the Development Lifecycle:
  • Successful QA testing starts early in the software development lifecycle. By integrating QA from the initial stages, issues can be identified and resolved before they escalate, saving both time and resources.
  2. Comprehensive Test Planning:
  • A well-thought-out test plan is the foundation of effective QA testing. It outlines the testing approach, objectives, resources, and schedules, ensuring a systematic and organized testing process.
  3. Test Automation:
  • Automation has become a cornerstone of modern QA testing. Automated testing tools not only expedite the testing process but also enhance accuracy and repeatability, especially for repetitive and time-consuming test scenarios (see the sketch after this list).
  4. Realistic Test Environments:
  • QA testing should be conducted in environments that closely mimic real-world conditions. This ensures that the software performs reliably in different scenarios, providing a more accurate representation of its actual behavior.
  5. Continuous Testing:
  • In the era of agile development, continuous testing is essential. It involves ongoing testing throughout the development process, allowing for immediate detection and resolution of issues as they arise.
  6. Collaboration and Communication:
  • Effective communication between development and QA teams is paramount. Collaboration ensures that both teams have a clear understanding of project requirements and objectives, leading to more efficient testing and bug resolution.
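
To make the automation principle concrete, here is a minimal sketch of an automated functional check written in Python with pytest. The Cart class is a hypothetical module under test, invented purely for this example; the point is that the same assertions re-run identically on every build.

```python
# Run with: pytest test_cart.py
import pytest


# Hypothetical module under test, invented for illustration.
class Cart:
    def __init__(self):
        self._lines = []  # each line: (name, unit_price, qty)

    def add(self, name, unit_price, qty=1):
        if unit_price < 0 or qty < 1:
            raise ValueError("invalid price or quantity")
        self._lines.append((name, unit_price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self._lines)


def test_total_sums_all_lines():
    cart = Cart()
    cart.add("keyboard", 49.90)
    cart.add("mouse", 19.95, qty=2)
    assert cart.total() == pytest.approx(89.80)


def test_rejects_invalid_quantity():
    with pytest.raises(ValueError):
        Cart().add("monitor", 199.0, qty=0)
```

Because a suite like this is cheap to run, it can be wired into CI so that every commit is verified automatically, which is exactly the repeatability benefit described above.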

Types of QA Testing

  1. Functional Testing:
  • This type of testing focuses on verifying that the software functions as intended. It involves testing individual functions and features to ensure they meet the specified requirements.
  2. Performance Testing:
  • Performance testing evaluates how well a system performs under various conditions, including load testing to assess its response to high user volumes and stress testing to determine its limits (see the load-test sketch after this list).
  3. Security Testing:
  • Security testing identifies vulnerabilities and weaknesses in the software to prevent potential security breaches. It includes testing for data integrity, authentication, authorization, and protection against external threats.
  4. Usability Testing:
  • Usability testing assesses the user-friendliness of the software. It involves evaluating the interface, navigation, and overall user experience to ensure that the software is intuitive and easy to use.
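
To illustrate how performance testing differs from functional testing, the sketch below fires concurrent requests at a single endpoint and reports latency percentiles. The URL and user counts are placeholders; for serious load tests, dedicated tools such as JMeter, Locust or k6 are the usual choice.

```python
# Toy load test: N simulated users, each sending a few sequential requests.
# Requires `pip install requests`; URL and volumes are illustrative only.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/health"  # placeholder endpoint
USERS = 20                          # simulated concurrent users
REQUESTS_PER_USER = 5


def one_user():
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(URL, timeout=10).raise_for_status()
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = pool.map(lambda _: one_user(), range(USERS))
        samples = [t for batch in results for t in batch]
    print(f"requests:       {len(samples)}")
    print(f"median latency: {statistics.median(samples):.3f}s")
    print(f"p95 latency:    {statistics.quantiles(samples, n=20)[-1]:.3f}s")
```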

Impact of QA Testing on Software Development

  1. Enhanced Product Quality:
  • QA testing is the linchpin of delivering high-quality software. By identifying and rectifying defects early in the development process, the end product is more likely to meet user expectations and function reliably.
  2. Cost Savings:
  • Detecting and fixing defects during the early stages of development is significantly more cost-effective than addressing issues post-release. QA testing helps minimize the risk of costly bug fixes and reputation damage.
  3. Customer Satisfaction:
  • A reliable and bug-free software product enhances customer satisfaction. QA testing ensures that the software performs as expected, providing users with a positive experience and fostering loyalty.
  4. Faster Time-to-Market:
  • Continuous testing and early defect detection contribute to a faster development cycle. By resolving issues promptly, software development teams can adhere to project timelines and bring products to market more quickly.

Challenges in QA Testing

Despite its numerous benefits, QA testing comes with its set of challenges. Some common challenges include evolving technology, tight deadlines, and the need for skilled QA professionals. Addressing these challenges requires a proactive approach, ongoing training, and the adoption of innovative testing methodologies.

Conclusion

In the dynamic landscape of software development, QA testing stands as a pillar of assurance, guaranteeing that the end product aligns with the highest standards of quality. By adhering to key principles, embracing various testing methodologies, and recognizing its broader impact, QA testing ensures that software not only meets but exceeds the expectations of users. As technology continues to advance, the role of QA testing remains indispensable, guiding the path toward excellence in software development.

Kubernetes introduction

Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a “Dockershim”; however, the shim has since been deprecated in favor of interfacing directly with containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI) introduced by Kubernetes in 2016.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

Kubernetes API

The design principles underlying Kubernetes allow one to programmatically create, configure, and manage Kubernetes clusters. This function is exposed via an API called the Cluster API. A key concept embodied in the API is the notion that the Kubernetes cluster is itself a resource/object that can be managed just like any other Kubernetes resource. Similarly, the machines that make up the cluster are also treated as Kubernetes resources. The API has two pieces: the core API and a provider implementation. The provider implementation consists of cloud-provider-specific functions that let Kubernetes provide the Cluster API in a fashion that is well-integrated with the cloud provider’s services and resources.
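
As a rough illustration of the “cluster as a resource” idea, here is a minimal sketch using the official Kubernetes Python client. It assumes a kubeconfig pointing at a management cluster with the Cluster API CRDs installed; the group and version shown (cluster.x-k8s.io/v1beta1) may differ in your environment.

```python
# Sketch: ordinary resources and whole clusters, listed through the same API machinery.
# Assumes `pip install kubernetes` and a reachable cluster in ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()  # load credentials from the local kubeconfig

# Core API: nodes are plain Kubernetes resources.
for node in client.CoreV1Api().list_node().items:
    print("node:", node.metadata.name)

# Cluster API: entire clusters are exposed as custom resources.
capi = client.CustomObjectsApi()
clusters = capi.list_cluster_custom_object(
    group="cluster.x-k8s.io", version="v1beta1", plural="clusters"
)
for c in clusters["items"]:
    print("cluster:", c["metadata"]["name"],
          "phase:", c.get("status", {}).get("phase"))
```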

Kubernetes uses

Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.

7 tips for improving load speed

Plan for performance

Are you building a new website? Be sure to discuss the importance of performance early on and set targets. That way, you have a faster load speed from the beginning and don’t have to implement fixes later.

Step 1: test, step 2: test…

Are you seeing a pattern here? 😉 Testing is crucial! Before you launch, load-test your website multiple times to make sure it can handle the traffic of real site visitors. This is especially important for sites with complex hosting, such as load-balanced configurations.

Implement some “quick wins”

To be clear, there’s no “get fast quick” scheme for site load speeds. But there is a tried-and-true template that will put you ahead of the curve: making use of modern image formats, enabling compression on the server via Gzip, and leveraging browser caching (a sketch of the last two follows below). Beyond these, there is plenty more low-hanging fruit.
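
As a toy illustration of those two wins, the Flask sketch below adds a long-lived Cache-Control header and gzip-compresses responses for clients that accept it. In production you would normally configure both in the web server or CDN (for example, Gzip in Nginx) rather than in application code; the endpoint and max-age here are made-up values.

```python
# Sketch: browser caching and gzip compression at the application layer.
# Requires `pip install flask`; for illustration only.
import gzip

from flask import Flask, request

app = Flask(__name__)


@app.get("/styles.css")
def styles():
    return "body { margin: 0; }", {"Content-Type": "text/css"}


@app.after_request
def quick_wins(resp):
    # Leverage the browser cache: let static assets be reused for a week.
    resp.headers.setdefault("Cache-Control", "public, max-age=604800")
    # Compress the body when the client advertises gzip support.
    if "gzip" in request.headers.get("Accept-Encoding", "") and not resp.direct_passthrough:
        resp.set_data(gzip.compress(resp.get_data()))
        resp.headers["Content-Encoding"] = "gzip"
    return resp
```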

Careful of your images!

Good websites have great graphic content – but they also take into account how images impact load speed. You can improve image performance by considering file formats, image compression, and lazy loading.
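
For example, a small batch script can convert oversized JPEGs to WebP ahead of deployment. This sketch uses the Pillow library (`pip install Pillow`); the folder name, width cap and quality setting are illustrative values, not recommendations.

```python
# Sketch: resize oversized JPEGs and convert them to WebP with Pillow.
from pathlib import Path

from PIL import Image

MAX_WIDTH = 1600  # cap huge originals before serving them to browsers

for src in Path("images").glob("*.jpg"):
    with Image.open(src) as img:
        if img.width > MAX_WIDTH:
            ratio = MAX_WIDTH / img.width
            img = img.resize((MAX_WIDTH, round(img.height * ratio)))
        dst = src.with_suffix(".webp")
        img.save(dst, "WEBP", quality=80)
    print(f"{src.name}: {src.stat().st_size // 1024} KB -> {dst.stat().st_size // 1024} KB")
```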

Think of the mobile visitors

More and more people surf the web on their phone these days, which makes mobile-optimized sites a huge priority! Since mobile users tend to use slower, less stable Internet connections, Accelerated Mobile Pages (AMPs) are a great way to get them content faster.

Prioritize above-the-fold

First impressions matter – and your above-the-fold content can make or break them! Consider inlining the critical styles for above-the-fold content, then loading the rest of your code in chunks. This type of asynchronous loading can create a faster perceived load time for the user.

Assess your external scripts

Third-party scripts are a great tool – but can make your website feel a little crowded. Assess the performance of external scripts on your site load speed, and replace or remove those that are negatively impacting user experience.

DevOps preface

If you’re old, don’t try to change yourself, change your environment. —B. F. Skinner

One view of DevOps is that it helps take on that last mile problem in software: value delivery. The premise is that encouraging behaviors such as teaming, feedback, and experimentation will be reinforced by desirable outcomes such as better software, delivered faster and at lower cost. For many, the DevOps discourse then quickly turns to automation. That makes sense as automation is an environmental intervention that is relatively actionable. If you want to change behavior, change the environment!

In this context, automation becomes a significant investment decision with strategic import. DevOps automation engineers face a number of design choices. What level of interface abstraction is appropriate for the automation tooling? Where should you separate automation concerns of an infrastructure nature from those that should be more application centric?

These questions matter because automation tooling that is accessible to all can better connect all the participants in the software delivery process. That is going to help foster all those positive teaming behaviors we are after. Automation that is decoupled from infrastructure provisioning events makes it possible to quickly tenant new project streams. Users can immediately self-serve without raising a new infrastructure requisition.

We want to open the innovation process to all, be they 10x programmers or citizen developers. Doing DevOps with a container platform makes this possible, and this blog will show you how.

This is a practical guide that will show how to easily implement and automate powerful cloud deployment patterns using a container management platform. The platform provides self-service for users, and its natively container-aware approach will allow us to show you an application-centric view of automation.

GETTING AHEAD OF THE DEVOPS AND CLOUD CURVE

Now that we have those newly-raised table stakes covered, let’s talk about how to stand out and deliver faster than your cloud-based DevOps competition. To jump ahead of the tech herd, you need to provide your DevOps team tools that increase your software delivery speed, quality, and security.
To do that in this age of exploding data volumes, those tools need to keep complex processes as simple as possible, while gaining (or maintaining) full control of binary and dependency sets.
Automation is great, but not if it forces your developers out of their existing workflows. Anything that adds speed also needs to integrate instantly with the tech your teams already use.
In other words, the minute you deploy, you boost productivity immediately through integration with your ecosystem and DevOps tools. When you can do that, you also save time and money through easy management of the DevOps pipeline.
Can you see how this is all coming together?

THE WORKINGS OF A SUPERIOR REPOSITORY MANAGER

To achieve all of the above, a universal binary repository manager like JFrog Artifactory gives developers the most powerful solution possible. It provides a searchable and clickable repository for binaries, saving them hours, even days, of reinventing the wheel.
But it’s not that simple. It needs to be more than that.
In the cloud, a superior pipeline tool like Artifactory needs to natively integrate with security scanning and compliance solutions. Enter JFrog Xray.
Through a tool like Xray, you empower developers to identify and mitigate known security vulnerabilities and open source license violations. You give them the tools to assess the impact new components have on your overall system.
It also lets them drill down to identify all dependencies of each build package and Docker layer using deep recursive scanning, allowing them to continuously govern and audit artifacts consumed and produced in your CI/CD pipeline.
And Xray does it all while protecting against open source security vulnerabilities using the most comprehensive vulnerability database in the industry.

FUTURE OF DEVOPS

THE EARLY MAJORITY MOVES TO THE CLOUD

60% of businesses are adopting or expanding DevOps culture and processes, and 80% of businesses are now operating in the cloud. Together, DevOps and the cloud have raised the bar on collaboration and cross-organizational visibility.

DEVOPS AND THE CLOUD: A NATURAL PAIR
Let’s start with DevOps.
Forrester Research dubbed 2018 the year of DevOps. And it’s no wonder, with over half of enterprises implementing or expanding existing DevOps practices. So why are they doing that? Here are a few good reasons to consider it:
DEVOPS OFFERS YOUR ORGANIZATION:
• Greater productivity and faster delivery of products
• Greater visibility and collaboration across projects, departments, and individuals
• Less siloing
So, DevOps removes friction; and as a practical environment for DevOps, the cloud just makes sense.
HOW THE CLOUD ENHANCES YOUR DEVOPS ORGANIZATION

• Rapid deployment of new environments
• Reduced IT costs through subscription and SaaS (pay as you go) payment structures
• Moving from capital expenditures (CapEx) for hardware to operating expenses (OpEx) for SaaS
• Fast, agile scalability
So why the urgency to make these innovations? The truth is, they’re not really innovative anymore; the shift has already happened.
The bar has been raised and you need a new edge.

GAUGE YOUR DEVOPS PROGRESS
• Institute Agile practices that focus on communication, collaboration, customer feedback, and small and rapid releases. Agile operations remove rigidity from your processes and allow for greater innovation, while keeping accountability and increasing goal focus
• Deploy a multi-cloud strategy with Kubernetes or another intermediary layer for cloud-agnostic and resilient infrastructure
• Build cloud-native systems for core products, with lift-and-shift for systems that don’t require much scalability
• Create microservices in containers over monolithic apps to increase your agility and your ability to innovate with less downtime

THE 8 ADVANTAGES YOU SHOULD GET FROM A CLOUD-BASED REPOSITORY

1 A UNIVERSAL, END-TO-END SOLUTION FOR ALL BINARIES
• Compatibility with all build and integration tools on the market
• Supports Maven, npm, Python, NuGet, Gradle, Helm and all other major package formats (25+ and growing), integrating with all the moving parts of the ecosystem

2 SCALABILITY AND REDUNDANCY
• A pay-only-for-what-you-use cloud model
• Security of knowing that all data is stored in multiple locations

3 MANAGEMENT OF MANY BINARIES ACROSS DIFFERENT ENVIRONMENTS AND PROVIDERS THAT SOLVES FOR:
• Lack of metadata context
• Policy enforcement

5 SECURITY, ACCESS CONTROL AND TRACEABILITY
• Information access management through authenticated users and access control
• Full artifact traceability to fully reproduce a build and debug it
• Secure binaries by identifying vulnerabilities
6 RELIABLE REMOTE REPOSITORIES
• Consistent and reliable access to remote artifacts
• Local caching of artifacts eliminates the need to download them again as well as removes the dependency on unreliable networks and remote public repositories
7 ACTS AS A SECURE, ROBUST DOCKER REGISTRY
• Smart search for images
• Full integration with your building ecosystem
• Security and access control

8 A KUBERNETES REGISTRY
• Additional insight into your code-to-cluster process, relating to each layer of each application
• As your main Kubernetes Docker registry, it collects and traces content, dependencies and relationships with other Docker images, which cannot be done using a simple Docker registry

3 expert tips for (new) developers part-3

1 Don’t focus on reinventing the wheel

The expectations of you are probably lower than you think, because, hey, you’re brand new!

You’ll find a wealth of ready-made packages and libraries of code online at your disposal. Do your research and be sure to sense-check the quality, but don’t be afraid to use these resources to help you spend less time “reinventing the wheel” and more time developing your skills and knowledge in other areas.

Which ties in nicely with the next tip:

2 Make Google your friend

Searching online is often the most efficient first step towards a solution.

A great piece of advice is to “get good at Googling”. Someone has almost certainly run into the same problem as you; you just need to find their solution. Once you’ve found it, try to understand the what, why and how before copying and pasting. This is an opportunity to learn and develop your knowledge.

3 Be kind to yourself (and your team!)

It may sound cliché – and perhaps a little cheesy – but it’s important to be kind to yourself when starting out in your development career, as nobody becomes an award-winning developer overnight 🤷‍♀️

While it is sometimes easier said than done, don’t put too much pressure on yourself and make sure you allow yourself the time to learn, grow and most importantly, make mistakes! 

And you will make mistakes – just remember that it’s solving these mistakes that will help you become a stronger developer. And try not to strive for perfection – aim to write clean, reusable and easy-to-read code in a timely manner.

And don’t forget to be kind to your team too and remember nobody comes to work to do a bad job. The key to a successful development team is helping and supporting each other. A happy team will always produce the best work – and it’s less likely to feel like a job!

3 expert tips for (new) developers part-2

1 Expose your ignorance

Ouch – this one can be a tough one for some. It’s only natural that you don’t want to look ignorant but you must fight this urge and speak up. 

If you don’t understand something or haven’t heard of a term or technology – ask. If you don’t, it’s a missed opportunity to learn and verify your understanding. Software development is a multifaceted industry; you can’t know everything and you’re not expected to, but you can always gain knowledge by speaking up.

2 Communication is key

This one might surprise you, but your communication skills are just as important as your software development skills. Take the time to practice writing – you’ll use it more in your job than you might think.

And get comfortable explaining what you do to non-developers. Especially in the world of consulting and cross-team projects, you’ll likely be communicating with people who don’t have the same technical background as you do. 

Miscommunication is perhaps the biggest threat to any project. You need to be able to effectively communicate with other developers, project managers and clients. Clear, concise and timely written or verbal communication can go a long way. It might take some practice, but if you’re aware of this from the start, it will become a strong skill for you going forward!

3 Develop your project management skills

Similar to social skills and communication, you need to be able to communicate your progress on development tasks.

Tools like Trello, Jira and Azure DevOps support developers in task management, planning and scheduling. These skills will help you when you’re fixing a bug or writing a new piece of functionality; breaking down a larger task into smaller pieces makes it more manageable for you and makes it easier to present an overview to your manager or other team members.

3 expert tips for (new) developers part-1

1 Create your own GitHub account

When starting out, create your own GitHub account where you can start adding your own projects and snippets of code as you go along. Not only is this a great place to build up a reference library of code, it also helps when showcasing your work to potential employers too.

You’ll find that when you’re interviewing for roles, most employers appreciate being able to see some code you have written.

2 It’s important to know what’s cooking now – and in the future

Keep yourself up to date with whatever develops within your field – it’s crucial to know what’s cooking.

Explore and try out different areas within web development and different technologies. If you want to work with web development, try working with one CMS and become an expert in that – e.g. SQAEB. It will help you get a better idea of where you want to focus later.

In the long run, I think you need to pick a specific area and master it – and this also means keeping yourself updated on this particular area!

3 Be curious – learn from others

The support you can get from your colleagues, friends and the online developer communities (like us) is invaluable, and you should never be afraid to ask for help.

If you’re struggling with some code, the chances are that someone has struggled before you and has already solved your exact problem! By having the confidence to reach out to those around you or online, you’ll find solutions much more quickly, increasing your knowledge in the process.

METADATA SPELLBOOK FOR CONFLUENCE WIZARDRY

Get practical tips and best practices for using Metadata for Confluence from real-life use cases.

Assemble Your Product Portfolio in a Flash

For users who simply need an overview of the company’s product portfolio, it’s frustrating to have to search through every single product page. They would also need to be familiar with the product name or related keywords to find the content. For new team members, the task becomes harder as they lack the basic information to even begin searching.

The solution is to create a directory page with all the relevant information your users may need about the company products. By adding metadata to individual product pages, you can then populate all the information across those pages through additional macros, filtered by the metadata values.

Whether for marketing initiatives or for customer support, product pages with metadata come in handy whenever you need well-organized information on a product.

Just make sure to set up appropriate metadata sets that your users can fill out every time they create a new product page. To configure your product metadata sets, simply create a form that is required when a product owner or developer creates a new page, as shown below:

You’ll have predefined fields associated with the product page template, which allows teams to easily identify the critical information about the product.

MAKING MAGIC WITH METADATA FOR CONFLUENCE

Learn the basics of metadata and how it makes Confluence great again.


To organize the collective intelligence from multiple business functions, you first need to design an intuitive content structure to make sure that information is discoverable, whether through site navigation or search queries.

User-generated Labels Lead to Content Chaos

You may already be familiar with Confluence’s labelled content feature – the primary method for organizing content.

However, letting your team members freely add page labels can create problems. You’ll end up with a raging storm of tags that only brings more chaos to the wiki space. Not only does this approach require constantly keeping track of all the available tags, you’ll also have to correct misspellings and update teams with the right taxonomy.

Let’s face it. Even with a labelling system in place, with every new page comes a new topic and a plethora of new labels. And having all users consistently follow your labelling rules is wishful thinking.

Page Properties Fall Short

So, aside from labels, what are other ways to help teams effectively manage content?

What your team really wants is to have the right information in front of them when they want it. Much like searching on “Atlassian” in Google and immediately getting a neat summary of all the information about the company Atlassian.

Confluence out-of-the-box comes with basic data categorization via the page properties macro. With this function, your user can generate a table containing key information about the content and have it shown on a “summary page.”

Here’s an example, based on the properties created, including Title, Owner, Due Date, and Status. The user can report the information about all project pages in a table.

However, similar to the limitation of labels, page properties lack the flexibility to present collective information that matters to different users. Plus, it requires tedious macro setup along with user-generated parameters, which means you’ll end up with yet more clutter than before.

This is where metadata comes in.

Metadata Brings Order to It All

In a nutshell, metadata refers to information about a page and its content, such as creator and creation date, among other details.

With metadata, it’s extremely easy to add predefined categories to pages. This allows you to pull information from those pages and display only relevant data in a table format for quick insights into the content.


There are three main categories of metadata for Confluence:

• Descriptive metadata: Information that enables content discoverability
• Structural metadata: Information about the page structure
• Administrative metadata: Information about the source of content

Using Metadata for Confluence, you can skillfully conjure myriad content management capabilities, including:

• Maintain a structural space organization and improve usability
• Enhance content discoverability, regardless of naming conventions
• Implement a more user-friendly Confluence navigation
• Build a directory based on content from multiple sources
• Make sure only relevant content is shown to a particular user

In the next chapter, we’ll let you in on our secrets to building a robust content platform using the Metadata for Confluence app.

Introduction

Overview of Confluence user needs and challenges.

As a Confluence admin, you’re entrusted with a mission-critical system for day-to-day business operations. Your teams count on you to bring institutional information to life on a wiki space, so that everyone can work more efficiently without constraints on knowledge.

While it’s common to think that the more content available, the more reliable your Confluence will be, we beg to differ.

Navigating Confluence can be challenging when you have thousands of pages scattered in many places, in different templates, and with no standardization. New information gets buried and never gets to see the light of day.

In fact, the biggest challenges for any Confluence admin include organizing the magnitude of content, maintaining organizational transparency, and ensuring the smooth flow of work.

What if there was a way to revitalize your Confluence site, organize pages, and maximize usability? That’s exactly why we built the Metadata for Confluence app.

No gimmicks needed. With metadata, creating a structured wiki and organized content is simple. Metadata doesn’t just bring contour and clarity to your team spaces. It gives you new capabilities as a Confluence wizard. Want to embed structured page properties? Check. Need to assemble information from thousands of pages for reporting? Check. You can do all kinds of amazing things with metadata, from building personalized intranet experiences to standardizing workflow implementations.

This series of posts uncovers several use cases for the Metadata for Confluence app, so you can learn how to apply it and build your own Confluence solutions across your organization.

Let the magic begin! 🪄🦄

Pillar 4 | Personalization

Make it relevant and personal
Bring end-user data into the conversation by connecting your mailing list, CRM or customer database to our digital human platform. With the volume of user data available at your fingertips, there are plenty of opportunities to create value through the use of analytics and user preferences. A virtual barista could easily learn and recommend your favorite drink, or a virtual product genius would already know which brands you prefer.

Value-add integrations
UneeQ digital humans integrate into a number of third-party services, such as translation tools, real-time data-driven APIs and knowledge bases that enhance your capabilities at scale. Consider which your users would find most valuable depending on your use case.

Manage latency through optimization
The quality of your technical implementation is imperative to ensure the experience is natural and seamless. In the same way talking to a real human would be jarring if there were large delays, latency in your digital human responses will create a difficult-to-navigate experience. Efficient NLP design and technical architecture will ensure a seamless and humanlike interaction.

Analyze and craft performance
Add your favorite analytics platform or other behavior-tracking tools to capture user sessions, clicks and utterances. Watch for opportunities to improve the digital human’s response in your NLP service and quash any unmatched user questions.

99%
… say personalization has improved their customer relationships.

Pillar 3 | Multimodal UX

Preparing for the next web experience
As Web 4.0, “the symbiotic web,” continues to develop, early signs show it will be about a linked web experience that communicates with us, like we communicate with each other. Web 5.0, “the emotional web,” will create an entirely new human-to-machine experience. Aspects of these trends can be used today by pairing a digital human with an interactive user interface.

Use UI to your advantage
Unlike the human face, your digital human is a multimodal digital interface. Consider the use of speech, on-screen displays, image and video content, interactive elements, and escalation features all as tools to create a balanced and versatile experience. A great example of this is the ability to walk a user through a home loan application, all while answering the user’s questions throughout the process.

Stage and test the journey
Whether it’s escalating to a human, sending the user an email, creating an account for them or helping them fill out a form, plan the many ways your digital human conversations can end. You can then work backwards to find the many ways your users will get there. Iterate on your journey with A/B testing to help smooth over tricky interactions and provide an optimized experience for the end user.

Environmental restraints
Consider the environment in which your users will interact. There may be design and deployment considerations for certain situations: for example, environments that are too loud, lack the necessary privacy or have a poor internet connection.

49%
… of consumers say the best thing brands can do to improve CX is integrate physical and digital channels.

Pillar 2 | Design for conversations

Great conversation design begins with role play
As you begin to think through your conversational design, and especially if you are evolving a chatbot into a digital human experience, review and role play how key interactions should make you feel. Written content is not always appropriate for spoken performance; using simple sentences and amending scripts to be suitable ‘for the ear’ is key. Role playing and documenting your own expressions and sentiment throughout the conversation will help guide the inputs and identify any content that is difficult to perform and misaligns with the desired user experience.

Guiding the conversation
Similar to the customer journey for your website (maybe even in parallel), it’s vital you continue to guide the user through a conversation. The good news is that a digital human, unlike a chatbot or even a voice assistant, is better placed to guide the user, mainly because it’s not just command-driven.

Small talk versus on-topic conversation
As you design the conversational path and role play through the emotional impact, also keep in mind the ability to include small talk. There are several great small talk conversational engines out there including Blenderbot or even GPT-3. The digital human advantage here is the ability to plug into many different “brains” or natural language processing engines, and layer the experience with highly curated content. So while GPT-3 is guiding small talk, Watson (or any other NLP) is the foundation for your guided or on-topic conversation.

Pillar 1 | Personality code

Embody your brand
Your customers love and trust you because your brand, your story and your tone of voice is aligned to their personal values. Make sure all those things come through in your digital human experience design.

Know your audience
Understanding your audience and targeted personas will help to confirm specific details in the conversational tone. For example, a healthcare digital human should focus less on using humor and witty replies, and instead focus on establishing credibility, nurturing and building trust.

Train for emotional connection
Personality is more about expressions and non-verbal cues than it is about the words. Your digital human’s differentiation is about bringing a personality to an experience and doing it better than any other alternative. Use this to your advantage.

82%
… of customers say they want more human interaction as automated technologies continue to proliferate.

Create moments
If a smile was currency, how would you make the experience as lucrative as possible? Put a smile on users’ faces by delighting them and surprising them. Make solving their problems fun.

Conversational AI is a journey, not a destination

Whether you are creating a virtual product expert, automating a complex financial form or introducing a virtual life health coach, it’s vital to the project’s success that you take into consideration each of the pillars we’ve outlined in this blog.

“Be very clear in what you want to achieve from this digital human – because the potential is limitless.”
Shashank Shekhar, CEO of Arcus Lending

We suggest that as you go through this blog, take some notes, jot down some questions and let us assist you in your journey. Our conversational AI specialists are eager to connect and help you implement best practices at each step.

So in line with that, let’s jump in and get started. The four pillars of an amazing digital human experience cover both digital and tech imperatives, while highlighting the need for conversation, interaction and fun.

Digital humans | Introduction

Often referred to as avatars, artificial humans, or even virtual assistants, digital humans are AI-powered lifelike beings that look, sound and interact like real people.

Accessible 24-7-365 and fluent in over 70 languages, digital humans add empathy, compassion, engagement and a personality to any experience. Powered by conversational AI from Google, Amazon, Microsoft, IBM and other global tech leaders, digital humans are revolutionizing how we interface with brands, educators, healthcare workers, financial experts and other professions on a daily basis.

We’ve created these posts as a best practice guide to building amazing and engaging digital human experiences. Of course, best practices are always evolving, so we’d love to hear from you and what you’ve learned in your own journey. As always, visit us often for more information and connect with us on social media.

Salary structure in an agency

Perks and benefits that save employees money in the long run are always a valuable addition to a paycheck. Addition being the keyword here.

Because no amount of pizza parties can supplement the 10% increase in salary that people could get at the other agency across the street. Except that’s not the case; the statistics surrounding this point in the exact opposite direction:

  • 32% of people polled in the US would take a 10% pay cut to work at a company where they like the culture
  • 58% of workers will stay at a lower-paying job if it means having a great boss
  • And 60% of workers would even take half of the potential paycheck if it meant working at a job they love

So if culture makes up for the differences in salary between your agency and the agency next door, how do you structure the salaries in your company to both attract and retain top talent?

  • Don’t buy stars, build them – Partner with local media and technical schools to provide internships and part-time positions for promising students. If you follow our onboarding tips and you build a functional onboarding program, after a couple of weeks, your time investment in onboarding them should already be paying you back. And in a few months? You might just have your hands on your newest superstar.
  • Have a clear progression path – be upfront and transparent with the salary structure. It will eventually become the biggest motivator for the employees in the lower tiers. If you split your progression path into layers where everyone gets paid the same, you can skip long management discussions like: ’’Is a Senior Backend Developer with 4 years of experience worth the same as a Senior Art Director with 5?’’ An example of how to structure your progression path could be:
  1. Intern > unpaid, but gaining real-life skills and experience from an agency by working on real projects
  2. Trainee > paid, part-time or full time; self-taught, certified or freshly graduated
  3. Apprentice > Same credentials as a trainee, but with some successful commercial projects
  4. Junior > Proven 1-3 years of experience with commercial projects
  5. Senior > 3+ years of experience with commercial projects, proficient with project management and delegating tasks
  6. Management > If you’re doing linear progression, this step is simple. But if you want to do non-linear progression, it’s worth differentiating at the management level:
  a. Senior members with multiple specializations and experience with managing teams
  b. Senior members with extra non-managerial responsibilities (product development, decision making, etc.)
  7. Equity tier > Management whose investment with the company is substantial enough to warrant equity in the company
  • Promotions, raises and employees who feel undervalued – if you adopt the aforementioned salary structure, your employees should have a clear overview of where they fall and what they need to achieve to move up to the next salary level. But as it goes with highly ambitious people, you will always have individuals who take on more than their fair share of responsibility and then don’t feel adequately compensated. The answer should be obvious: if the employee performs above the set expectations, has the data to back it up, and asks for an increase in pay, they should get one. Sadly, when working with more than one person, it will never be that easy. Ben Horowitz summed it up best in his lecture in Y Combinator’s How to Start a Startup course.

A point he brings up is: If you give that employee a raise, will you give everyone else who is also performing well a raise as well? What about the employees who are performing just as well, but their personality prevents them from asking directly?

Apart from being approachable overall, managers and senior agency members can adopt these two methods to focus these conversations and help employees feel more valued and heard:

1. Monthly walk and talk: A manager and employee go for a half-hour walk outside of the office, talking about current projects, plans for future projects, the progress of the employee and any problems they might be having

2. Yearly progress conversation: Performance reviews are usually seen as a negative process because of the associations people have with them. Walk and talks remove the need for quarterly performance reviews at a scary meeting room table.

But a walk and talk is not really the place to sign contracts and obsess over spreadsheets. So how about a yearly progress review, close to the end of the year, talking strictly about the employee’s progression path and salary?

That way, both current problems can be addressed from month to month, and larger issues or achievements can be accumulated over time.

Non-linear progression

When hearing the words ’’non-linear’’, if your mind immediately jumps towards video games, you already sort of get the point.

In a non-linear game progression system, you start at the same spot as every other player. But when you arrive at a crossroads, instead of going straight down the first path like you usually would, you get to choose if you want to go left, right, or even take a step back and see if you can get to your current position again, by taking another path. This progression helps you pick up new skills and new experiences that will make the path ahead much easier.

This is also how the current trend in career progression looks. Companies no longer expect people to stay in the same career path for decades, slowly working their way up the corporate ladder. This rings especially true for agencies, where skills from different career paths transfer almost seamlessly and complement each other with a broader outlook on the problems being solved.

As an example, if you have a frontend developer who discovered she likes designing more than she likes coding, you should give her a chance because:

  • She already knows the limitations that code can have on some designs
  • She can design with systems and reusable assets in mind
  • She can give better estimates on project length and the overall development time
  • If she wants to progress further into something like art direction, the added coding skills are always a plus when communicating to both clients and developers alike

If your agency has people who have invested in their craft to the point where they are considered experts, top talent, or masters, their progression will eventually hit a plateau.

And while just existing at the top and using your skills to their full potential is a fantastic feeling… ultimately, the need for self-improvement and innovation that got them to the top of the talent pool will make them want to progress further. But you can’t really go further up than the top, so where do you go?

This is where people start considering switching jobs or pursuing entrepreneurship because it seems like the only challenging way forward.

The classic solution to this “problem” is to promote them to the management level. Clearly, if someone is performing exceptionally well as a specialist they will automatically become an exceptional manager… Right?

The solution is not always that simple, and pushing someone to become a manager (or a manager of a bigger team than before) is not for everyone. Some top talent enjoy being specialists and would rather spend their time performing their tasks than managing a team.

“In a hierarchy, every employee tends to rise to his level of incompetence.”

– Laurence J. Peter, Author of The Peter Principle

The previous quote refers to what is known as the Peter principle, a concept of management developed by Laurence J. Peter. The principle suggests that people tend to get promoted outside of their skillset and competence, based on previous success.

Meaning: Your best front-end developer is first and foremost… a front-end developer. Having 10 award-winning projects under his belt does not make him an instant candidate for managing the next project. That requires knowledge of front-end and an additional management skill set, the lack of which could lead to disaster down the line.

The modern solution to the problem is working with non-linear progression and promotion. Instead of the career path only going one way – towards management – you can set an alternative path. This could be anything from giving your top talent more influence on projects or a seat at the table when tough decisions are made to simply giving more freedom to perform tasks their own way. Once you start thinking outside the box you’ll be amazed at the possibilities there are for non-linear progression.

And the result?
Happier top talent that gets a truly unique position at your agency, which they won’t be able to find anywhere else.

At SQAEB, most of our junior employees start out in the SWAT department, helping our users with day-to-day issues. This helps them naturally and quickly get an overview of all the other departments, the products, and how everything fits together. Later they can choose to transition into newly opened positions in the company that they find interesting or get placed in completely new positions based on their specializations.

Are you having any fun?

Fun is a fickle thing. Everyone inherently knows what fun is, but if you had to define fun at the workplace, it would not be as easy as it first sounds. Looking up the definition of fun will also get you reprimanded by the dictionary, and there is no one sure way to define it. The only sure thing is that if the most interesting thing at the office on the first day is the photocopier, the new employee getting the tour will probably start looking for another job during the lunch break.

The overall feeling of fun at the workplace impacts productivity. And so it’s a topic without any specific bullet points, but a topic to think about and discuss nonetheless.

If you want to have fun at the workplace but can’t manage to play chess on one screen while maintaining your focus on coding… or your keyboard shortcut hand is also your balloon tying and juggling hand… you will probably need to interact with other people eventually. But there is only a limited level of friendship and camaraderie that you can build with people when talking about code and sending each other design files.

When was the last time someone asked a different water cooler question than: ’’So, how was the weekend/any plans for the weekend?’’ In most agencies, it has probably been a while. And that’s expected. If you work in a consistent and focused environment, there are only so many topics that can come to mind.

But if you change up the setting, if you do different activities together, you might build more than just classic coworker bonds. You might build friendships. And what could be nicer than looking forward to Monday morning at the office to see your friends?

But not everyone comes to work looking for friendship. Especially top performers who just want to put on their headphones and forget that they are in an office environment.

Sadly headphones run out of battery, the wifi goes down, and progress meetings exist. Eventually, even the most focused people have to talk to their coworkers. And since you spend most of your day at work, people would prefer to cut down on the dry, corporate jargon and instead discuss or do something… fun.

This again brings us to the topic of shared values. The job of a back-end developer and the job of a UX designer require different personalities. So if your agency wants to have a varied offering of skills and backgrounds, you will have to find values that connect with every group.

But not just the ’’standard’’ values that are put on the agency “about us” page. The values that make up the constantly evolving personality of your agency. If you do this, you will eventually have an agency full of like-minded individuals who don’t need to act corporate 24/7 and might even joke around from time to time.

Sadly, there is a thin line between having fun at the workplace and being overly quirky and disrupting everyone’s work. Unfortunately, you can also never get full value-alignment with every person that has been hired. But an agency where people think of each other as nothing more than colleagues and only spend time together at work is an agency that will have trouble scaling and keeping up with the more friendly teams later on.

Your culture and environment both have an impact on the quality of your work.

Talent Investment

You have to spend money to make money. And you have to invest in top talent to retain top talent. Achieving maximum focus in an office setting where a million things are gunning for your attention is tough.

All of that can be managed with a good work culture and processes. But if you don’t have the right equipment and tools, you’ll never be as efficient as you could be.

Maybe a chair is not comfortable. Maybe you can still hear your sales team in the other room, even with your headphones on. Maybe you found a SaaS tool that would save you hours upon hours of repetitive tasks.

If someone asks for a new keyboard, new tool, or new screen, it’s never a good idea to dismiss them right away. The person asking rarely brings up an issue like this on a whim; it has to be premeditated in some way, and that means that the problem they are facing is a recurring one.

“The way management treats their associates is exactly how the associates will treat the customers.”

– Sam Walton, Founder of Walmart

A one-time investment, no matter how large, is actually pretty small when looking at it as a long-term investment in focus and productivity. If an agency shows that it cares about its employees in all the ways that matter, the employees will return it multiple times over. Here are some small or large things in no particular order that could make or break an employee relationship with the company:

  • IT equipment. If you ask someone to work in front of a computer 8 hours each day, you better make sure they have the proper equipment to do their job. This includes everything from computer hardware to noise-cancelling headphones and the online tools they need.
  • Chair and desk. This one is connected to the one above; spending a third of their day in uncomfortable working conditions will severely hurt their productivity and health.
  • Coffee, refreshments and snacks. We know it might not sound like much, but making sure that your employees have access to all the basics like coffee, cold water (or soda) and some fruit can drastically increase their productivity and improve health.
  • Indoor climate. The stereotype of a developer might be: someone sitting in a dark basement with a hoodie on – but nothing could be further from the truth if you want them to be productive. Proper lighting, some plants and good ventilation are all tiny details that have a huge impact.

Talent Professional growth

A promotion: While most talented people love what they do, as they repeat the same tasks day after day, eventually, they will find ways of improving the process or get ideas for new ventures that the team should pursue. And there is only so much one can do from the bottom of the corporate ladder. Career growth is a key part of goal-setting strategies for high performers, and agencies need to provide these opportunities if they want to retain their top talent. Otherwise those people might look for those higher positions elsewhere. Please note that a “regular” promotion is not always the best option; we’ll cover that later in our post “Non-linear progression”.

A raise: Usually going hand in hand with a promotion. However, while every promotion should come with a raise, not every raise has to come with a promotion. Many people are not after the responsibility that comes with a promotion; they just like what they do, and so they take on more tasks, spend more time at the office or even work weekends. But maybe they aren’t looking to delegate their tasks to their would-be replacements. Maybe they just want to feel like their extra time is seen as valuable by the agency. And seeing as time is money, sometimes the answer is as simple as that.

While all of the above will probably make your agency employees happy and get your agency valuable, educated and dedicated employees for a long time to come, there are also smaller ways to improve productivity faster.

Talent Personal growth

Courses and conferences: There are always new books and courses popping up, covering the latest and greatest developments in the industry.

If your top performers ask you to help fund their education, saying yes is one of the best ways to show them that you are counting on them in the future.

Maybe there is a developer conference coming up that would help them meet some like-minded people and gather industry knowledge?

While it may seem like a big investment to send one or multiple developers away for a few days, the new knowledge and energy they bring back will pay dividends now as well as in the future. If they have valid arguments for going, why not give it a shot?

Schools and degrees: A similar approach to the one about courses and conferences, to an even higher degree (forgive the pun), should be taken if an employee asks about the possibility of returning to school.

Maybe they got this job straight after finishing their bachelor’s degree. Maybe they want to go for a manager position and think that an MBA would greatly improve their outlook.

Or maybe they want to slowly transition to another position, but wish to stay at the agency. Customer lifetime value and return on investment are some of the most important metrics that agencies need to keep an eye on. But try to imagine the “employee lifetime value” of someone who you helped put through school.

Personal and professional growth

Every movie about an office work environment has managed to, in one way or another, demonize the monotony of sitting at a cubicle doing the same work every single day. And who can blame them? Doing the same thing over and over again is widely referred to as the definition of insanity.

No one wants to feel like they aren’t progressing in their job. And this rings especially true when we are talking about top talent. If someone wants to stay at the top (where you probably want to keep them), they need to continually have an eye on the newest developments in their field.

The information gathering and processing is on them – allowing for an environment where they can test new ideas, that’s on the agency.

There are many ways to help talented employees fuel their passion for their work. Every person is looking for something different, but we have a few ideas that should be universally interesting for most people.

Is ’’When and Where’’ Important?

Allowing for a full five-day remote work schedule is not something that can be implemented instantly; it’s something that agencies have to build towards over time.

For a large portion of agencies, a full week of remote work might not even make sense at all. But giving people the freedom to work from home as needed on special occasions can remove a lot of unnecessary stress. If a person needs to take care of some errands, look after the kids, or maybe they are not feeling well enough to drive to the office, but well enough to work, why not have the option of working from home?

Let’s say you have a single developer dedicated to taking care of your agency website. He has tasks that he doesn’t actively collaborate with anyone else on. He gets a mockup of the website, some copy, and gets to work. He might also be actively trying to sell his apartment. In most companies, this would mean that he has to run back and forth between the apartment and the office, sometimes multiple times a day, to deal with the buyers, real estate agents and contractors. But does he really have to?

Would it not be more comfortable for him to stay at home and work between meetings? And would it not make it easier for his team members and managers not to have to keep track of his travel schedule? And if the work gets done in the right time frame, does his physical presence at the office really matter? I’ll discuss this further in the “Is it time to go fully remote?” post.

SQAEB TIP

At SQAEB, everyone has a setup that allows for secure remote work, and in case of sickness, family emergencies, schoolwork or other unforeseen events, they are always welcome to work from home. We give people the benefit of the doubt / assume positive intent, and so far, it has always paid off.

Talent Freedom

Freedom is often hailed as the ultimate solution to happy employees. But most people have an easier time being creative when there are some restrictions in place.

Example: your agency needs you to write as many slogans as possible selling pineapples in the next 10 minutes. When do you think you will produce more? A) If the 10 minutes is the only restriction. B) If you have the 10-minute restriction, you cannot use the word pineapple, and every slogan has to be 10 words or fewer?

Studies show that B is the right answer – even though you have more freedom in A. Sidenote: We tried it at our office and we are currently considering a new venture in ’’Spiky yellow fruit’’ advertising.

So does this prove that freedom may not be the answer to an infinitely creative and productive workplace culture?

Of course not – because we had the freedom to choose those restrictions.

Client expectations and agency needs dictate the tasks that have to be solved. Every agency also needs to have some time and budget restrictions to prevent a project getting out of hand.

Other than that, the freedom to solve the problem in any way they see fit is one of the most significant benefits you can grant your employees. Consider what gets lost when pure efficiency is mandated instead:

  • Prescribing the most efficient way to a problem takes all the learning and experimentation out of the process
  • Using fewer billable hours and achieving maximum efficiency will inevitably mean that the client should probably expect cookie-cutter deliverables instead of innovative solutions
  • If there is a framework, guideline or brand book for everything, proposing new solutions and approaches might be perceived as too much of a hassle to even suggest

If you find the perfect balance in the above, you should have the How and Why of task management covered. But freedom in the workplace is a complicated thing. The How and Why are questions that have to be answered or the work will never get done. So why not take more weight off people’s shoulders by not having them stress over the When and Where as well?

NURTURING AND RETAINING TOP TALENT

Hiring and onboarding new employees is one thing. But as we know, the costs of employee turnover are high. If you don’t work on having a great environment where your employees thrive, then it’s going to be very costly for you to keep replacing everyone.

Employees changing jobs is impossible to stop – especially in the tech industry – but there are things you can do to keep your turnover rate low.

This post could just be called ’’culture in the agency space’’ because that is the true key to acquiring and keeping top talent.

But what is company culture?

The 17-word, aka the short answer: Company culture is the combination of all the values, social interactions, and psychological behavior in an organization.

The 340-word, aka the long answer: Company culture is hard to define in specific terms because, unlike most essential things in business, it is entirely intangible, a feeling. Branding is closely intertwined with culture in every interaction that the company makes with any of its outside stakeholders. And if you want your brand to be consistent across all channels, you have to work towards a work culture that aligns with your corporate messaging.

A brand is a reflection of your company in the minds of your stakeholders.

That is why it takes on new forms in every piece of content shared on social media, every meeting with a possible client, and every shared lunch break with Debbie from the agency next door. A brand consists of many moving parts, some tangible, some not. The tangible can be boiled down to visual identity, messaging, and imagery, if need be. These can all be changed with a new set of guidelines, a new designer, or a new marketing department, but how do you control a culture?

Culture is not just a code of conduct, communication strategy, or a list of processes. Company culture includes all the small details:

  • The tone of voice the CEO uses to address a reporter while discussing a new acquisition
  • If your employees feel comfortable to talk about non-work related issues with their manager
  • If the new sales intern feels like waking up in the morning on his second week on the job

And that’s why culture is one of the hardest things to get right in an agency, as it cannot be acquired, mandated or forced.

Culture has to be built and continuously monitored and maintained.

You can tell a lot about an agency culture:

  • By the way your company treats employees, customers and the surrounding community
  • By the degree to which your employees are committed to the company values and goals
  • By how comfortable employees are with innovating, making decisions and expressing their opinions
  • By how information flows from one department to another and from the higher-ups to the lower-level employees

Day one onboarding

There are many things a person needs to know on their first day at a company. And there are a lot of things that they will definitely not remember. To prevent information overload, it’s preferable to keep some essential things for the rest of the week so the fresh hire will pick them all up eventually. So what should they know on their first day?

  1. Give them an “onboarding buddy”. This should be someone from their team whom they can ask any and all questions, without feeling like they are bothering anyone
  2. The values or the ’’WHY’’ of the company
  3. The names of their closest coworkers
  4. The tech stack your department is using
  5. Where to find the best coffee machine in the building, as well as any other refreshments they can get (fruit, cold water, etc.)
  6. How the company intranet or CMS works
  7. The most efficient way to get to their desk
  8. The information and communication flow of your company (emails, chat, phone calls, etc.)
  9. Where the bathrooms are (you’d be surprised how often this is an issue)
  10. What task management solution your team uses to keep track of tasks
  11. When lunch is
  12. Their first real work-related task

That’s about it; any other information would probably be too much. And as we all know, if you go for a handshake tour with every department immediately, you forget the first person’s name while shaking the third one’s hand.

Onboarding that rocks

Onboarding a new person to the team is a masterclass in taking your own medicine for a lot of agencies. Every good agency prides itself on an in-depth understanding of user journeys and user experience, but what is the experience of joining your agency like?

Placing someone behind a desk, giving them access to your password manager, and asking them to start developing right away is the equivalent of ordering a pizza and giving the delivery guy just your zip code. It takes so much more, and a good onboarding experience can make or break your company’s ability to foster new top talent.

Interview a talent

Tech companies have generally adopted a ’’multiple interview approach’’ that not only gives applicants a coding test or some homework, but also goes over their background and culture fit in the same depth. More and more agencies are now doing the same. This is where our hiring journey once again splits into two paths, this time based on whether you chose the internal hiring strategy or the headhunter/recruiter strategy.

The recruiter can take care of the searching, first impressions and the technical fit, but you should always have the most promising candidates meet the current team for a short and sweet meet and greet before you consider hiring them.

If the agency conducts the entire hiring process in-house, there is a lot of leeway in the process. Try new approaches and strategies, and eventually, you will find what works for you. But if you want a hint from a company that put culture first and has been doing so for 3 years, here’s how we do it at SQAEB:

  1. Collaborative effort to identify skills required. Once we are sure we need a new addition to a department, the team goes over the exact skills we are looking for. This ensures that the team knows which new skills are coming in, instead of a manager deciding it themselves.
  2. Job posting. When the manager has the final job posting ready, it is posted and shared online internally as well as externally. We know the value of a good network, so employees from all departments are asked to share it with anyone they might think is a good fit. To help gauge personality in the first screening process we usually ask for a short video introduction, along with a resumé, just to get an idea of who you are as a person even before we meet you.
  3. Screening of candidates. As soon as we have enough candidates, the first screening process starts. This consists of sorting out any that do not have the required skills or did not adequately show that they would be a good cultural fit.
  4. First interview. All candidates that pass our first screening are invited to a first interview. The purpose of the first interview is to get to know them as a person and figure out if they would be a good cultural fit. This includes having a current team member talk to them for 10 minutes one-on-one, without those involved with the hiring present. If the personality is a match to our culture, they are given homework and invited to a second interview.
  5. Homework. While the first interview is focused on the cultural fit, the second is about technical skills. And to judge that, each candidate is given homework to complete before the second interview. This consists of various work-related tasks where they have a chance to showcase their skills. The homework also includes writing a movie review. This is an added curveball to see how they approach problem solving of tasks they probably haven’t done since high school.
  6. Second interview. We have the second interview to go over the homework and technical questions. This is where their skills are assessed and the main goal is to ensure that the chosen candidate has the necessary skills to handle the tasks they would be given in the position.
  7. Hiring. After the second round of interviews it is often clear which candidate is the best cultural fit and whether or not they have the necessary skills.

Now that you’re done recruiting and have hired the right person, the real work starts: onboarding. Hiring the right candidate is one thing, but if you don’t manage to give them a proper onboarding experience, they will not perform as well as they could. Onboarding is the first step towards nurturing top talent.

Talent: Takes one to know one

Agencies have a lot of ways to get new talent in the door. You might do all the recruitment in-house, outsource it to a headhunter/recruiter or grow to a point where a dedicated HR department or in-house recruitment person is the way to go.

But no matter which option is the most viable for you, always keep the cultural fit in mind. You might find out that the person with the most extensive resume might be too far in their career to adapt to the workflow that works for the rest of the team. There are also cases of people with less impressive qualifications, who fit in so well with the rest of the team, that they hit the ground running and start producing work way above their estimated skill-level right away.

Making your agency a cultural paradise for top talent pays off in more than one way:

On one hand, you will attract those who have already proven to be top talent, which can give the quality and speed of work an instant boost. And if they are the ones who come to you looking to join, you’ll have a much larger talent pool to choose from.

On the other hand, you will be nurturing potential top performers from their career infancy and helping them grow into top talent with the right personality traits to perform at your company. That has an ROI that can only be beaten by time travelers going back and buying Apple stock.

This whole train of thought is where agencies might learn something from the world of sports, where it’s a common philosophy in some football clubs (or soccer, if that’s the term you prefer):

”We don’t sign superstars, we make them”.
– Arsène Wenger, manager of Arsenal F.C.

But how do you make sure that your candidates are a cultural fit? And how can you make sure that they can do the work once they get hired?

Contrary to what you might think from our previous arguments about “personality > skills”, it’s important to start with the skills first. At the end of the day you need to know which skills you’re looking for before you can start evaluating personality and cultural fit.

When the hiring process is handled by the department or team that is looking for a new member, the senior members or managers are usually in charge of the process. If there is an obvious need for a specialist that the team doesn’t yet have, creating the requirements should be as easy as simply writing down the tasks that need to be done and translating them into skills. However, if there is just more work coming in for a specific skill set (UX, .NET Developer, etc.), the existing team members should be consulted so that the new hire can complement their skill set.

Once you are settled on the skills, it’s time to consider the personality you’re looking for. Are you looking for a person with an extraordinary drive to grind it out 50 hours a week? Or maybe a true team player who makes everyone around them better? There are no right or wrong answers here – but it’s important to have an idea of which personalities you’re looking for.

The tone of voice varies from agency to agency and even from team to team, and the structure of a job posting can vary quite a bit. But there are still some evergreen tips that could save you and potential candidates some time:

  • When a job has language or certification requirements that make or break the application, start with those
  • Don’t get too caught up in the technical requirements and skills needed for the job
  • Present the personality traits you are looking for on equal footing with skills, education and experience
  • When dealing with entry-level jobs, a portfolio of work could be supplemented with school projects that have a similar scope
  • Don’t put unnecessary year requirements on non-senior jobs
  • For software with a steeper learning curve, ask for the specific platform your team uses (Sketch/Adobe XD/InVision) instead of listing experience with prototyping software in general
  • Don’t ask for 8 years of experience in a language that has been around for 3 years

Job posting for a Talent

It’s fair to assume that people who can be considered top talent in their respective disciplines probably got there through a combination of hard work, dedication, and professionalism. Then it would be more than fair if they expect the same qualities from their potential new employer.

This is why you need to have an in-depth look at every part of the job posting, so both parties know if they are a match even before they finally meet face to face.

Talent Career page

A good starting point for your ’’first point of recruitment’’ (not the first point of contact, because that’s probably your landing page) is to create a clear value proposition for the inbound job candidates. Until your agency reaches a certain size, you can’t cater to everyone’s wishes concerning work-life balance. Your hiring decisions should always be based on a cultural fit more than a technical fit.

While technical skills are clearly important, it’s much easier to improve a skill than it is to change a personality. If we want to go into specifics, we can go back to the user experience analogy. When writing a value proposition on the careers page, you need to think about what kind of agency you really are.

’’We are looking for dedicated people to help bring the most innovative web solutions to life for our clients by day, and help us put up new shelves for all these awards by night…’’

That statement will attract a certain kind of applicant:

  • Fresh graduates with a lot of ambition looking for validation of their skills
  • Experienced professionals who want an environment for their talents to be utilized
  • People looking for a challenge who don’t even consider crunch time a negative word
  • Career-building professionals looking for a place that adds more awards to their resume
  • People who live for their jobs and look forward to evenings and Saturdays at the office filled with pizza and fixing the kinks in the code

Then on the other side of the spectrum, you could have:

’’You bring the talent, we bring the perks. At AUE Inc. (Agency Used as an Example), we value strategy and planning above everything else. And thanks to our in-depth research and planning, clients always get the solution they need, instead of the solution they think they want. This also means that our employees never have to worry about scope creep or staying at work past 5 PM. Oh, and did we mention possibilities of working from home or the 4 day work week?”

A few sentences like this on your career page could go a long way towards attracting people that:

  • Love their jobs, but don’t want to sacrifice time with their family for work
  • Are perfect for the job, but would have had to relocate or travel multiple hours every day
  • Are motivated for the job, but also have other ambitions and are trying to run some sort of side-hustle or project on the side

Sections like ”International Workplace” or ”Fun Squad” show that we care about an open and fun work environment, where your colleagues also become your friends.

Personality vs. skills

Before we get any further it’s time to address the tiny elephant in the room:

What’s more important – personality or skills?

To answer that question, you only need to scroll back up a few pages to find our list of characteristics for top talent. Notice how only one of them is called skill, while the rest are primarily based on personality?

That’s no coincidence. While skill alone is incredibly important (it’s what makes them capable of doing their job, after all), it’s not necessarily the thing that makes them top talent. If they are an amazing coder, but can’t be depended on to meet deadlines or have issues working with their team, it’s hard to call them top talent.

At the end of the day it’s important to remember that skills can be taught and improved, but personality and culture can’t. And if you want your entire team to perform – not just the individual – it’s important to have the right mix of personalities and culture. If the right culture is there, you’ll see skills improve for everyone and soon you’ll have a team full of top talent that performs day in and day out.

SQAEB TIP

For 99% of our job postings we use this to highlight our people-first focus:

”We care about people. That’s why the most important qualification is your personality: who you are, what values you have and how you interact with other people. We are looking for people with passion and energy to be part of something bigger than themselves and who are willing to dedicate their time and skills towards building great products and services in collaboration with talented and friendly colleagues.”

TALENT RECRUITMENT

Recruitment. Love it or hate it, this is where it all starts if you want to attract top talent for your agency. But there’s so much more to recruitment than job postings and hiring recruiters. It’s in the recruitment phase that the first bit of onboarding starts. While it is 100% the candidate’s responsibility to find out as much as possible about the agency they want to join, why not show your values and culture even at the earliest stages and make it easier for them?

”We are drawn to leaders and organizations that are good at communicating what they believe. Their ability to make us feel like we belong, to make us feel special, safe and not alone is part of what gives them the ability to inspire us.”

– Simon Sinek, Author of ’’Start with Why’’

It’s no secret that even the most basic one-page websites have an “about us” section. But imagine being a top talent developer or specialist looking for new opportunities. They might go through 50 “about us” pages every day. Does your mission and vision statement stand out from the crowd? Do you communicate having a culture that provides a constant stream of challenging problems to solve? Do you have a hilarious video of your founder switching places with your human-sized-rabbit-office-mascot and shooting confetti at your unsuspecting support staff?

SQAEB TIP

Do you want to show your values to potential candidates? Then video is the way to go. It doesn’t have to be a big production – the only thing it has to do is show your company values and culture.

Letting your mission, vision and culture shine through in your recruiting process helps you immensely in not only standing out from the crowd, but also in attracting the right people for your company.

What is top talent?

Before we start our deep dive into the obvious and not-so-obvious ways of attracting and retaining top talent, let’s take a moment to define:

What exactly is top talent?

Top talent is one of those terms that does not have a clear cut definition that people can point to. However, when talking about the agency world, there are certain characteristics that come up time and time again when discussing high performers:

Skill – The go-to metric for determining top talent. Whether it’s due to natural talent or 10,000 hours of practice, if someone is exceptionally skilled, they are on the best possible path to be considered top talent at any agency.

Ambition – The goal to become the top of their field. Ambition drives people to always keep up with the newest trends and developments in their field and continuously improve their skills.

Integrity – When they say something will get done, it gets done at all costs. And if both the managers and team members know they can count on someone when the going gets tough, that person becomes irreplaceable.

Communication – Knowing how to clearly communicate with managers and executives that speak the language of money on one side, while communicating with the technical team members who speak in code and high fidelity mockups on the other is a skill that should be paid in gold.

Teamwork – Everyone can excel at their individual tasks, but sharing a task or working efficiently in a team is a must-have for those who want to become the top performers in any agency.

Creativity – Some creatives are a constant source of ideas during a brainstorming session. Some always see a problem from 3 more angles than everyone else. And while creativity manifests in a lot of ways, sometimes it’s the main thing behind a person’s top-talent status.

Leadership – Leadership is not just a skill for managers or team leads. People who join fresh out of college can find themselves at the top of the pyramid in any team within a few months, even with no direct effort. If an individual is approachable, facilitates a good workflow, or solves problems with a level head, they will soon become respected by their peers as a leader, even with no title involved.

Devotion – The green ’’you can talk to me’’-light next to the monitor turns red. The headphones go on. 6 hours, 3 cups of coffee, 1 missed lunch, and a single stretching session later, one individual just saved a 10-person project from being one week late. That’s how people become legends. And top talent.

Being considered top talent does not mean that a person has to have all of these qualities fully formed. It doesn’t even mean that top talent and top performers have to achieve all of these qualities eventually. A person who fully masters 3-4 of these qualities should quickly rise to become a prime asset to any agency. And if your agency finds itself hiring a person that displays most or all of these qualities, then you should do everything you can to keep them around until they decide it’s time to retire.

Why talent is more valuable than ever

Every day we are moving towards a world that is both more efficient and more digital than any sci-fi cartoon from the 70s could have predicted. Among the forces at the helm of this digital revolution are the creative, design, and web agencies that are facilitating this change for everyone else.

Whether it’s by helping businesses that previously had no digital presence be represented in the digital space or taking established businesses and expanding their opportunities with new online solutions… the role that agencies play is undeniable.

But to fuel this innovation, agencies need a constant supply of developers to fill a multitude of general and specialist roles. And while the demand for developers is at an all-time high, the supply of both university graduates and self-trained professionals is not even close to enough.

Multiple surveys over the last couple of years have pointed at a worldwide shortage of developers. The top three issues software businesses face are a mix of:

  • Not having enough people
  • Sharing experience across seniority levels
  • Hiring suitable candidates

With almost 9 out of 10 IT businesses saying that hiring new talent is ”hard” (and 36% calling it ”very hard”), it’s starting to become evident that calling this a developer shortage might be an understatement.

Recruiters often describe this situation in terms along the lines of ’’worldwide developer shortage crisis’’. So if hyperboles are on the table, what if you wanted to make your recruitment even more selective? Say you don’t want to settle for just any ol’ developer, but instead want to attract the top talent in the industry, with all the perks they might bring to your agency. Well then, you must be prepared to rethink or tweak some things about the way you operate.

If that sounds like a hassle, or you already have a team filled with top of the line developers, you might want to think about retention instead because employee turnover costs you more than you know, both directly and indirectly:

  • Teams that are in constant flux and have an unstable structure are obviously going to be less productive
  • The employees that leave are always going to leave with crucial experience/knowledge that is completely removed from the company
  • The brand might get damaged from bad reviews on employer-rating sites and word of mouth, or bad press in general
  • The cost of losing an employee can range anywhere from 16% to 213% of their annual salary in some cases – for a $60,000 salary, that’s roughly $9,600 to $128,000!

Now that talented developers are more scarce than ever… you might be wondering:

How does one identify this ’’top talent’’? And once you’ve done so, how do you recruit, onboard and retain them?

GOOGLE TAG MANAGER Tracking

If your web agency handles digital marketing for your clients, you’ll need a specific marketing tech stack. But since that could be a whole white paper in itself, we’ve chosen to focus on two categories that no agency can go without: website tracking and reporting.

Every successful agency has some sort of key performance indicators (KPIs) that they use to track the success of their activities for end clients. To do so, you’ll need to track anything from more complex metrics, like customer acquisition cost (total sales and marketing spend divided by the number of new customers acquired) or customer lifetime value, to simpler ones like ’’which link did the user click to get to the shop?’’

If you want your marketers to have complete control of what information you gather about your users, Google Tag Manager is the way to go. With Tag Manager, you get a tag management platform where you can set up tracking for pretty much anything your users do on your website and pass that data to any analytics and advertising tools you use. Google Tag Manager also comes with multiple pre-built tags that make your life easier by letting you customize and implement tags without any coding knowledge, especially within the Google Marketing Platform suite of apps.
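To make this concrete, here is a minimal sketch (in TypeScript) of the mechanism Tag Manager builds on: pushing events into the dataLayer from your site. The event name (“add_to_cart”), its parameters, and the button selector are our own illustrative assumptions, not anything GTM prescribes.

const w = window as unknown as { dataLayer: Record<string, unknown>[] };
w.dataLayer = w.dataLayer || [];

function trackAddToCart(productId: string, price: number): void {
  w.dataLayer.push({
    event: "add_to_cart", // a GTM trigger can listen for this event name
    productId,            // exposed to tags as a dataLayer variable
    price,
  });
}

// Hypothetical usage: fire the event when a button is clicked
document.querySelector("#add-to-cart")?.addEventListener("click", () => {
  trackAddToCart("SKU-1234", 19.95);
});

Once an event like this is in the dataLayer, any tag you configure in the GTM interface can fire on it and forward the data to your analytics or advertising tools.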

Free option: Yes
Pricing starts at: Whatever you can negotiate with Google’s sales team, but the free option will take you a long way.
Notable features:

  • Covers everything you could ever want to track
  • Built-in tags and templates speed up the process
  • Multi-platform support

ADOBE CREATIVE CLOUD

Adobe has been in the creative software market for about 30 years, and they are still the industry gold standard for all things design. Their ever-expanding cloud suite of creative products makes them a one-stop-shop for any agency looking for a full suite of tools and resources. These include Photoshop and Illustrator for your raster and vector image needs, or Premiere Pro and After Effects for video editing and motion graphics. In addition to more than 20 different creative apps, your subscription to Creative Cloud can also provide you with useful resources such as fonts, stock images, and tutorials, or even a portfolio page.

Free option: No
Pricing starts at: $79.99/month/per user for the entire suite or $33.99/month/per user/per app

Notable features:

  • The suite is a one-stop-shop for all creative needs
  • Asset collaboration and sharing features for business plans
  • Stock images and fonts included in business plans

MICROSOFT ONEDRIVE File Sharing

If you use at least one other Microsoft 365 solution in your business, you might want to consider adding Microsoft OneDrive as well. In 2017 the Microsoft blog published some impressive numbers:

’’What do our customers think? With over 85 percent of the Fortune 500 companies having OneDrive and SharePoint, across 250,000 organizations worldwide, we are delivering on our vision of a more connected workplace. In fact, usage of OneDrive for Business has more than doubled in the last year alone.’’

Numbers like that usually have a pretty amazing product behind them, and OneDrive is no exception. With seamless integration into both the existing Microsoft 365 suite and your workflow, you will have your files available anytime, anywhere.

Free option: No
Pricing starts at: $5.00/month/per user
Notable features: 

  • Seamless integrations with the Office 365 suite
  • Auto-sync features
  • 1 TB of OneDrive storage in the basic plan

GOOGLE DATA STUDIO Reporting

So if step one is to track data, what do you do once you have it? Well, unless your client specifically asks for an Excel document, it would be nice if you could somehow visualize the data. This is where data reporting dashboards enter the process.

Google Data Studio is a very customizable, straightforward to use, free data reporting dashboard. And for most agencies, Google Data Studio provides everything you need. You can use it to pull your data from most of your existing analytics software and dashboards. And you can then take all of this data and visualize it together in a way that gives you the best overview of your business-critical metrics. Everything from the Google dashboards is easily shareable, so the distribution of the reports should not be a problem, and collaboration in real-time feels seamless.

Free option: Yes
Pricing starts at: No paid options, but you might need paid data connectors to pull in data from all the platforms you use.

Notable features:

  • Completely free, web-based reporting tool
  • Quickly and easily gather and visualize data
  • Real-time collaboration, sharing and easily embeddable

DROPBOX File Sharing

File sharing tools are another important part of every agency tech stack. File sharing tools help you organize and distribute media files with your colleagues and clients. They are critical to streamlining workflows and building up processes, and in most cases, reinforcing security at the same time.

Dropbox is an industry staple, and for a good reason. It’s reliable, secure, fast, and for the value that combination provides, relatively cheap. It makes it easy to sort all your files based on projects/clients in a very familiar and intuitive user interface. You can also have your files sync automatically with the cloud and never worry about accidentally losing work. Accompanying its best-of-breed features, there are integrations with most tools you could want and the possibility to scale along with your team and your needs. And since it has wide adoption on a personal level, you can be sure that clients and employees are already familiar with the interface.

Free option: Yes
Pricing starts at: €10.00/month/per user – starting at 3 users

Notable features:

  • Reliable, secure, fast
  • Auto-sync features
  • Standard plan has 5 TB of storage

LASTPASS Password Management

The onboarding process in an agency mostly centers around getting you up and running with the currently used tech stack. This includes sharing passwords for every piece of software, which can be a very tedious task without a password manager. Every website/software provider has its own guidelines for what a strong password is nowadays. For the user, that results in creating variations of one master password for every piece of software in the tech stack. This gets extremely frustrating without keeping a spreadsheet – and a solution like that becomes a huge security threat over time. Luckily there are alternatives to encrypted USB sticks locked inside safes behind office paintings.

Reliable, secure, convenient – exactly what you would want from a password manager. LastPass makes keeping track of current passwords, creating and sharing new passwords, or on/off-boarding team members a breeze. And thanks to a combination of securely generated passwords, an overall security score, and two-factor authentication, you will never have to get another ’’Forgot your password, eh?’’ email ever again. In addition to taking care of your passwords, LastPass can also remember your credit card information for faster checkouts or even fill out contact forms for you automatically.

Free option: Yes
Pricing starts at: $4.00/month/per user (for 5-50 users) 

Notable features: 

  • Useful browser extension
  • Two-factor authentication
  • Autofill for forms and credit cards

AIRTABLE

Airtable is to spreadsheets what Trello is to bullet lists. And just like Trello, teams can go months or years using Airtable without even realizing that they are on the free plan. Unlike Trello, Airtable’s strength lies in the complexity it is able to achieve while still keeping a straightforward user interface.

The pricing plans are set up in a way where small teams don’t even need to upgrade based on the number of entries, as long as they clean up their backlog every now and then. There are, of course, some very beneficial features in the higher pricing tiers.

Free option: Yes
Pricing starts at: $10/month/per user
Notable features: 

  • Spreadsheet style task overview
  • Multiple overview styles
  • Generous free plan

THE POMODORO TECHNIQUE

To be clear, this won’t help you know how many billable hours you have on a project or a client. But sometimes you don’t need to track how much time you spend on a project or task. Sometimes, you just need to focus. This is where Pomodoro comes in. The Pomodoro technique has a very straightforward ruleset to help you stay focused and maximize your productivity.

HOW TO DO IT

  1. You press the timer
  2. You work for 25 minutes
  3. You have a 5 or 10-minute break
  4. That’s it.

After that, just rinse and repeat. (Get it? Because Pomodoro is the Italian word for tomato… moving on.)
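For the programmatically inclined, the whole ruleset fits in a few lines. Here is a minimal sketch in TypeScript (for Node.js), assuming the classic 25/5 split described above; the round count of four is our own illustrative choice.

const WORK_MINUTES = 25;
const BREAK_MINUTES = 5;

// Resolve after the given number of minutes
const sleep = (minutes: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, minutes * 60_000));

async function pomodoro(rounds: number): Promise<void> {
  for (let i = 1; i <= rounds; i++) {
    console.log(`Round ${i}: focus for ${WORK_MINUTES} minutes.`);
    await sleep(WORK_MINUTES);
    console.log(`Round ${i}: take a ${BREAK_MINUTES}-minute break.`);
    await sleep(BREAK_MINUTES);
  }
  console.log("Done – rinse and repeat.");
}

pomodoro(4);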
Free option: Yes

Popular Pomodoro tracker sites:

https://pomodoro-tracker.com/

https://tomato-timer.com/

https://pomodoro.cc/

CLOCKIFY Time Management

Clockify is the free time tracking tool for any company that just needs the most basic features for a lot of users. With Clockify, you only pay for extra features, and there are no volume restrictions on the basic tracking features. You get unlimited time tracking for unlimited users. This also includes unlimited projects… and unlimited reports. If you need something more, there are some very friendly app and API integrations.

Free option: Yes
Pricing starts at: $9.99/month

Notable features:

  • Unlimited time tracking on the free plan
  • Unlimited users and projects on the free plan
  • Unlimited reports on the free plan

SLACK Internal Communication

Ever since mankind realized that average walking speed could carry a message to the other floor of the office building faster than the fax machine, we have been looking for a more convenient way to connect with our coworkers while successfully avoiding human contact. Email became the go-to solution for office communication for a long time, but in an agile and fast-paced work environment, chat and instant messaging have taken over.

Slack helps you segment your company communication into multiple chat channels that can be used for communication/file sharing inside teams, or across projects. There are, of course, person-to-person conversation options or smaller group chats as well. These feel just like any non-corporate messaging app that you are already used to. This makes conversations quicker than email and makes them feel a little more personal, which may… or may not be, exactly what you are looking for.

Free option: Yes
Pricing starts at: €6.25/month/per 5 users
Notable features: 

  • Person-to-person, group or project channels
  • Quick and easy messaging, calling and file sharing
  • Generous free plan

MONDAY.COM

There are specific use cases for every type of project management software. But if you want every department to be unified under one project management roof, while keeping the necessary customization options, you should be looking at Monday. Monday can be best described as a suite approach to project management software, thanks to all the different types of boards that you can create and manage from a single main dashboard. While there is no free tier, you can always give Monday a try through their two-week free trial and see if it fits with your company.

Free option: No
Pricing starts at: $39/month/per 5 users 

Notable features: 

  • Quick onboarding
  • Extensive dashboard customization
  • Workflow automation possibilities

TRELLO Task Management

While time tracking is certainly essential, it’s just as important to keep track of WHAT you need to be doing.

There are a lot of different styles and frameworks to set up your taskboard, but which providers are the best? Quick note: we did not include any development task boards like Azure Boards, but only looked at tools that can be used by all teams in an agency.

If you need a simple project board set up within a couple of minutes, Trello is the right place to be. Trello is one of those magical tools that you can use for years at scale without spending a single cent (if you don’t need to expand past the basic feature set and storage, of course). Also, compared to a tool like Airtable, the app is much simpler. This benefits projects that require quick navigation, or teams that mainly use Trello through its mobile app. For projects that require more complicated overviews, we would suggest something like Airtable.

Free option: Yes
Pricing starts at: $9.99/month

Notable features:

  • Quick mobile-friendly UI
  • Workflow automation with bots
  • Generous free plan

TOGGL Time Management

Time is literally money when you’re working on client projects. Therefore, making sure that you are spending your time on the right things is the single most important metric in any agency. Proper time management can help speed up your development time, make planning much easier, and even help you charge more per project. So what are the industry standard tools when it comes to time tracking?

Toggl is a simple time tracking tool with all the productivity and reporting features you could ever want. You can follow all your tasks separately or get an overview of the entire project at once. You have the option to gather actionable insights from your data/team dashboards and create other useful data visualizations. If you are not a fan of real-time time tracking, you can also input your entries manually or integrate Toggl with your calendar. There’s also an option to put in your billable rates and figure out just how much your time is worth.
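That last calculation is simple enough to sketch. The entry shape, rates and project names below are made up for illustration; a tool like Toggl just does this bookkeeping for you at scale.

interface TimeEntry {
  project: string;
  hours: number;
  hourlyRate: number; // billable rate per hour
}

// Sum hours × rate per project
function billableValue(entries: TimeEntry[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    totals.set(e.project, (totals.get(e.project) ?? 0) + e.hours * e.hourlyRate);
  }
  return totals;
}

console.log(billableValue([
  { project: "Website redesign", hours: 6.5, hourlyRate: 90 },
  { project: "Website redesign", hours: 2.0, hourlyRate: 90 },
  { project: "SEO audit",        hours: 3.0, hourlyRate: 75 },
]));
// Map { "Website redesign" => 765, "SEO audit" => 225 }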

Free option: Yes
Pricing starts at: $10/month/per user

Notable features: 

  • Time data visualization
  • Real-time or manual time tracking
  • Time-cost calculation

Usability Testing Basic Concepts

Fundamentals

This section considers the following fundamental concepts: 

  • Usability
  • User experience
  • Accessibility

Usability

Usability is the extent to which a software product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Usability testers should be aware that other definitions may be used in organisations.

The user interface consists of all components of a software product that provide information and controls for the user to accomplish specific tasks with the system.

Usability evaluation includes the following principal activities:

  • Usability reviews
  • Usability testing
  • User surveys

A usability problem is a software defect which results in difficulty in performing tasks via the user interface. This affects the user’s ability to achieve their goals effectively, or efficiently, or with satisfaction. Usability problems can lead to confusion, error, delay or outright failure to complete some task on the part of the user. In safety-critical systems such as medical systems, usability problems can also lead to injuries or death. 

A software product can work exactly to specification and still have serious usability problems, as shown by the following examples: 

  • A car rental mobile app has a dead link. This is a defect which results in a usability problem.
  • A car rental mobile app allows users to cancel a reservation, but the users perceive the cancellation procedure as unreasonably complicated. This is a usability problem which affects the efficiency of the mobile app.
  • A car rental mobile app conforms to the specification and works both effectively and efficiently, but users think it looks unprofessional. This is a usability problem which affects user satisfaction when using the mobile app.

Usability always relates to the context of use and can be considered in different components. As the following examples show, user expectations of usability are rather different for these components. 

The context of use comprises four components:

1. Users – A user is a person who interacts with a software product by providing inputs, or by using the output of the software product.

2. Tasks – Particular activities performed by users or particular groups of users (e.g., inexperienced users, administrators).

3. Equipment – The hardware, software and materials required to use a software product.

4. Environment – The physical, social and technical conditions in which a user interacts with a software product. The social conditions include the organisational conditions.

The following scenarios describe different contexts of use for the same software product: 

  • Administrative staff use Microsoft Word® to write documents in a consultancy firm
  • An elderly person uses Microsoft Word® for the first time to write an invitation to her birthday

User Experience Concepts

User experience describes a person’s perceptions and responses that result from the use and/or anticipated use of a product, system or service.

User experience includes the following user characteristics that occur before, during and after use: 

  • emotions
  • beliefs
  • preferences
  • perceptions
  • physical and psychological responses 
  • behaviours and accomplishments

User experience is influenced by: 

  • brand image (i.e., the users’ trust in the manufacturer)
  • presentation (i.e., the appearance of the software product, including packaging and documentation)
  • functionality
  • software product performance
  • interactive behaviour
  • the helpfulness of the software product, including help system, support and training
  • learnability
  • the user’s internal and physical state resulting from prior experiences, attitudes, skills, personality, education and intelligence
  • the context of use

Usability criteria such as effectiveness, efficiency and satisfaction can be used to assess aspects of user experience such as brand image and presentation (satisfaction), functionality (effectiveness) and software product performance (efficiency).

Accessibility

Accessibility is the degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.

Evaluating Usability, User Experience and Accessibility

The key objectives of usability evaluation, user experience evaluation and accessibility evaluation are compared below and discussed in more detail in subsequent sections.

Usability evaluation
  • Target group: All users
  • Key objective: Evaluate the direct interaction between users and the software product.

User experience evaluation
  • Target group: All users
  • Key objective: Evaluate the services received prior to the use of the software product, the direct interaction between users and the software product, and the services received after the use of the software product.

Accessibility evaluation
  • Target group: Users with disabilities or limitations
  • Key objective: Evaluate the direct interaction between users and the software product, focusing on understanding problems related to accessibility barriers, rather than general efficiency or satisfaction.

The principal techniques applied in usability evaluation, user experience evaluation and accessibility evaluation are outlined below and discussed in more detail in later chapters.

Usability review
  • Users involved: Optionally
  • Key characteristic: Experts (and optionally users) evaluate the user interface of a software product for usability problems; the evaluation is based on their experience.
  • Specific techniques: Informal usability review, expert usability review, heuristic evaluation

Usability testing
  • Users involved: Yes
  • Key characteristic: Users are observed while they perform typical tasks with the software product.
  • Specific techniques: Think aloud testing

User surveys
  • Users involved: Yes
  • Key characteristic: Users fill out questionnaires regarding their satisfaction with the software product.


Usability Evaluation

A process through which information about the usability of a system is gathered in order to improve the system (known as formative evaluation) or to assess the merit or worth of a system (known as summative evaluation). 

There are two types of usability evaluation: 

  • Formative (or “exploratory”) evaluation is conducted to understand usability issues. Formative evaluation is often conducted early on in the development lifecycle during the design and prototyping stages to get ideas and to guide (or “form”) the design by identifying usability design problems.
  • Summative evaluation is conducted late in the development lifecycle shortly before or after implementation to measure the usability of a component or software product. Summative usability testing is quantitative; it focuses on obtaining measurements for the effectiveness, efficiency or satisfaction of a software product. A summative usability evaluation can be used to evaluate a design based on usability requirements so that the design’s acceptability can be established from the users’ point of view.

Both types of evaluation can be conducted iteratively.

Usability evaluation typically relates to software products, but it can also be applied to other products or services where usability is important, such as user guides, vending machines, aircraft cockpits, medical systems and train stations.

Usability evaluation addresses the direct interaction between users and the software product. The direct interaction occurs via a screen dialogue or other form of system use. Usability evaluation can be based on a software application, on design documents and on prototypes.

The objectives of usability evaluation are: 

  • to assess whether usability requirements have been met
  • to uncover usability problems so they can be corrected
  • to measure the usability of a software product (see below)

Usability evaluation addresses the following three aspects (a small measurement sketch follows the list):

Effectiveness:

  • The extent to which correct and complete goals are achieved
  • Answers the question: “Does the software product do what I want?” 

Efficiency:

  • Resources expended to achieve specified goals
  • Answers the question: “Does the software product solve my tasks quickly?”

Satisfaction:

  • Freedom from discomfort, and positive attitudes towards the use of the software product
  • Answers the question: “Do I feel comfortable while using the software product?”
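As a rough illustration of how these three aspects can be quantified in a summative, measurement-oriented evaluation, here is a small sketch. The session fields and the 1–5 satisfaction scale are illustrative assumptions, not part of any standard.

interface TestSession {
  taskCompleted: boolean;     // effectiveness input
  timeOnTaskSeconds: number;  // efficiency input
  satisfactionScore: number;  // e.g., 1 (poor) to 5 (great)
}

function summarize(sessions: TestSession[]) {
  const n = sessions.length;
  return {
    // effectiveness: share of participants who completed the task
    completionRate: sessions.filter((s) => s.taskCompleted).length / n,
    // efficiency: average time spent on the task
    avgTimeOnTaskSeconds:
      sessions.reduce((sum, s) => sum + s.timeOnTaskSeconds, 0) / n,
    // satisfaction: average self-reported score
    avgSatisfaction:
      sessions.reduce((sum, s) => sum + s.satisfactionScore, 0) / n,
  };
}

// Three observed sessions for the same task
console.log(summarize([
  { taskCompleted: true,  timeOnTaskSeconds: 95,  satisfactionScore: 4 },
  { taskCompleted: false, timeOnTaskSeconds: 240, satisfactionScore: 2 },
  { taskCompleted: true,  timeOnTaskSeconds: 120, satisfactionScore: 5 },
]));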

If users are involved, a usability evaluation can be carried out by performing usability testing, conducting user surveys and performing usability reviews. If users are not present, usability reviews may still be performed. If the software will be used by individuals with disabilities (e.g., colour-blind users), include them early in usability reviews.

A qualitative usability evaluation enables identification and analysis of usability problems, focusing on understanding user needs, goals and reasons for the observed user behaviour. 

A quantitative usability evaluation focuses on obtaining measurements for the effectiveness, efficiency or satisfaction of a software product.

User Experience Evaluation

User experience describes a person’s perceptions and responses resulting from the use or anticipated use of a software product. 

Usability is part of the user experience. Consequently, usability evaluation is a part of user experience evaluation. The principal techniques used for user experience evaluation are the same as those used for usability evaluation. 

User experience evaluation addresses the whole user experience with the software product, not just the direct interaction. User experience includes: 

  • Advertisements that make users aware of the software product
  • Training in the use of the software product
  • Touch-points with the software product other than screen dialogue, such as encounters with support, letters or goods received as a result of interaction with the software product
  • Problems that are not handled by the user interface of the software product, such as the notifications of delays, handling of complaints and unsolicited calls

User experience can be evaluated using the principal techniques outlined above. In a user experience test, time gaps in the overall experience (for example, between ordering goods and receiving them) can be bridged during a usability test session.

Accessibility Evaluation

Accessibility evaluation is a usability evaluation which focuses on the accessibility of a software product. It addresses the direct interaction between a user with disabilities or limitations and the software product. 

The following advice applies specifically to accessibility evaluation: 

1. Define the ambition level for accessibility
The Web Content Accessibility Guidelines (WCAG) define three conformance levels for accessibility: A, AA and AAA. It is recommended to adopt conformance level AA, which means satisfying the most basic requirements for web accessibility and removing the biggest barriers for users with disabilities.

2. Create or adapt guidelines for accessible design.
These guidelines should comply with legal requirements. They should also be in accordance with the chosen ambition level for accessibility. Additionally, the usability of the guidelines for developers should be verified.

  • Review the guidelines for accuracy
  • Establish an accessibility hotline, where accessibility questions from development teams can be answered competently within an agreed time limit

3. Train development teams in order to prevent as many accessibility problems as possible. This includes factors such as: 

  • Legal requirements for accessibility
  • Guidelines for accessible design and how to interpret and apply them
  • Tools and techniques to use when evaluating accessibility
  • The relationship between usability and accessibility

4. Accessibility testing focuses on the following aspects:

  • Use of a think aloud technique to understand the test participant’s thoughts and vocabulary during accessibility testing
  • Focus on understanding mistakes related to accessibility barriers, rather than on efficiency or satisfaction
  • Use tasks that concentrate on specific areas of concern for potential accessibility problems, rather than on general software product usage

Accessibility evaluation should consider relevant accessibility standards.
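To give a flavour of what tool support for such an evaluation can look like, here is a minimal sketch of one automated check: flagging images that have no alternative text, one of the most basic WCAG level A requirements. A real evaluation combines many such checks with manual review and testing with the affected users.

// Flag <img> elements with no alt attribute at all. An empty alt=""
// is deliberately allowed, since that is the correct markup for
// purely decorative images.
function findImagesWithoutAlt(root: Document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll("img")).filter(
    (img) => !img.hasAttribute("alt")
  );
}

findImagesWithoutAlt(document).forEach((img) =>
  console.warn("Image without alt text:", img.src)
);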

Usability Evaluation in Human-Centred Design

Human-centred design is an approach to design that aims to make software products more usable by focusing on the use of the software products and applying human factors, ergonomics, and usability knowledge and techniques.

The human-centred design process can be summarised as follows:

  • Analyze: Talk with people and discover “what is the problem?”
  • Design: Prototype what you assume is a solution
  • Evaluate: Watch people use the prototype and learn from their experiences
  • Iterate: Repeat until the usability requirements are achieved
[Figure: Human-centred design activities and their interdependence]

The human-centred design activities are based on the following three key elements:

1. Users 

Observe and interview users in their work environment. Users are involved throughout the design stage by discussing designs and alternatives with them directly (where possible), or with representative users. In agile software development, representative users are typically the product owners, who are an integral part of the development team and enable frequent feedback to be given to designers and developers on usability issues. 

2. Evaluation 

Perform usability evaluation on the software product. A usability evaluation may take place at any time during human-centred design, from early analysis through software product delivery and beyond. A usability evaluation may be based on a prototype, as mentioned above, or on a completed software product. Usability evaluations that are conducted in the design phase can be cost effective by finding usability problems early. 

3. Iterations 

Iterate between design and usability evaluation. 

Considering the human-centred design process, the most frequent iterations take place between the activities “Produce design solutions” and “Evaluate design solutions”. This generally involves the successive development of a prototype, which is a representation of all or part of a software product’s user interface. Although prototypes are limited in some way, they can be useful for usability evaluation. Prototypes may take the form of paper sketches or display mock-ups, as well as software products under design. Starting with an initial prototype, the following activities are performed:

  • The prototype is evaluated. The person who performs the evaluation conducts usability testing on the prototype.
  • The prototype is improved and refined based on the results of the evaluation. The person who performs the evaluation helps the developers evolve the prototype by incorporating user feedback into the design.

These activities are repeated until the usability requirements are achieved. When prototypes are developed in iterations, the steady refinement gives the user a more realistic impression of how the finished product will look and feel. Additionally, the risk of forgetting or ignoring usability issues is reduced.
Both usability and accessibility must be considered during the design phase. Usability testing often takes place during system integration and continues through system testing and into acceptance testing. 

Usability Requirements

A usability requirement is a requirement on the usability of a component or system. 

It provides the basis for the evaluation of a software product to meet identified user needs. Usability requirements may have a variety of sources:

  • They may be stated explicitly, such as in requirements documentation or a user story
  • They may be implicit, undocumented user expectations (e.g., a user might implicitly expect that an application provides shortcut keys for particular user actions)
  • They may be included in adopted or required standards

Examples of usability requirements (in this case described as user stories) are:

  • “As a frequent user of the airline’s booking portal, an overview of my currently booked flights shall be automatically shown after I log on. This shall enable me to get a quick overview of my booked flights and quickly make any updates.”
    This usability requirement is about the effectiveness component of usability.
  • “As a help-desk assistant, I must be able to enter and log the details of a customer request into the Customer Relations database in no more than two simple steps. This shall enable me to focus on the customer request and provide them with optimum support.”
    This usability requirement is about the efficiency component of usability.

Agile Usability Evaluation

Usability evaluations are also suitable in agile software development. 

Agile software development is a group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between members of a self-organising team. 

In agile software development, teams work in short iterations, each of which has the goal of designing, implementing and testing a group of features. 

The following usability evaluation approaches work well with agile software development: 

  • Rapid Iterative Testing and Evaluation (RITE) is a qualitative usability test method where changes to the user interface are made as soon as a usability problem is identified and a solution is clear. The RITE method focuses on instant redesign to fix problems and then confirming that the solution works with new test participants (real users or representative users). Changes can occur after observing as few as one test participant. Once the data for a test participant has been collected, the usability tester and the stakeholders decide if any changes are needed prior to the next test participant. The modified user interface is then tested with the remaining test participants.
  • Informal and quick usability test sessions are useful where many potential users can be accessed (e.g., a cafe, a conference or a trade show). Such forms of usability test sessions typically last less than fifteen minutes and apply techniques such as think aloud and heuristic evaluation.
  • Weekly testing. Test participants are recruited well in advance and scheduled for a particular day of the week (e.g., each Tuesday), so that the software build can be usability tested on that day. Usability tasks are prepared shortly before the scheduled testing day and may include exploratory testing sessions, where the knowledge of the tester and heuristic checklists are used to focus on usability issues.
  • Usability reviews.

Embarrassing mishap:

Thousands of Tesla car owners were locked out of their vehicles
According to reports in US media, a simple software update caused a malfunction that locked thousands of the company’s car owners out of their vehicles.

Tesla, unlike other automakers that are only now entering the field, takes the approach that the car is an upgradeable technology product, meaning that through software updates it is possible to increase engine power, raise speed and charging limits, and install new applications.

This approach, despite its considerable benefits, can sometimes become problematic. In a case that happened in the US, a technical malfunction in the mobile application that the company uses completely disconnected thousands of car owners from the car itself and from Tesla. Not only could the phones not communicate with the car, which is normally able to “talk” to the smartphone and unlock it; they could not communicate with the company either. The immediate result was vehicle owners being locked out of their cars for long periods of time.

Upon learning of the problem, thousands of Tesla owners in the United States tried to get into their cars and rushed to report on social media that the car refused to unlock via their cell phones and, in some cases, locked them out entirely.

Tesla itself has gone through a difficult week, after its shares fell following Elon Musk’s statements during the annual shareholders’ meeting. Although Musk noted that Tesla will produce its first mass-market car in three years at a price of $25,000, he also stressed that Tesla’s revolutionary Cybertruck will be produced in over 300,000 units.

This way you will get a complete and clear picture of the QA findings

Smart integration of the worlds of manual, automated and crowd testing is a weakness in many companies, and it has a direct impact on the decision whether to release a product or version to market or return it for further work. A new approach offers a holistic solution and optimisation in the world of testing. A software company must have a good idea, great developers and creative interface designers, but without strict and thorough software testers all that work can go down the drain. QA departments are responsible for ensuring that the software or application leaves the company gates without any glitches, no matter who the user is, which operating system is installed on the device and in which language it is running. They are an important part of a chain that determines the customer experience, and with it the success of the product and its compliance with consumer expectations.

Software testers perform tests in a variety of methods and ways. Some are designed to make sure there are no glitches, some are usability tests, some are done manually by the company’s QA team, some are performed by crowds (crowd testing) and some use automated tools. In order to enter the market with a polished product, companies must perform a combination of tests: test automation provides more reliable and faster results, enabling versions of applications to be released to the market more quickly. When manual tests are added to this, amplified by crowd testing, a picture is obtained that complements the coverage gaps of the automated tests, helps to verify faults and provides a complete and reliable overview of the product quality.

Indeed, most companies and organisations rely on automated and manual testing in their testing strategy, but most of them run the tests simultaneously yet separately, without synchronisation, so unnecessary and lower-quality tests are often performed. At the end of the process the QA managers need to concentrate all the feedback from all the tests in one place, and only then can they start working on improving and correcting the problems found.

This fragmented process leads to unnecessary and cumbersome work, wasted time and resources, and the need for double budgeting. A little disorder in the results obtained is enough for the conclusions to harm the company’s business goals and the chances of success of the software or application. To date, QA departments have required a great deal of effort to manage the vast amount of information flowing from the variety of quality tests performed and to examine that information on many dashboards, in order to get a snapshot of the tests, their quality and their results.

All the information in one place

To consolidate all of these results, the integrated testing approach has been developed, which allows for a holistic perspective on all test results so that decisions can be made quickly and efficiently. This approach combines in one place all the information and results of all types of tests, manual or automated: design tests, usability, accessibility and more. This way the QA teams have visibility into and complete control over all the testing processes and all the results, and they can easily conduct an in-depth investigation of each problem.

An integrated solution can significantly reduce the complexity and load often created as a result of the multiplicity of testing platforms, and even makes it possible to grow and expand the work environment for all business units.

An integrated solution includes several components and features. The dashboard, the main screen in the system, shows the complete list of tests, manual or automated, and whether they succeeded or failed. When this information appears in one place, it is easier to identify patterns of faults and the connections between different failed tests, so that managers can make a quicker and more informed decision about releasing the product or continuing to work on it.

The integrated approach seamlessly supports the CI/CD workflow, and when a new software version is ready for testing you can quickly create a new test cycle and see the results on the screen in real time.

The results of the automated tests appear first, as they are the fastest, followed by the manual tests and the crowd tests. From there you can perform manual tests or repeat crowd tests in order to rule out false negatives from the automated tests, and the retest results are also updated on the dashboard. All test history is available at all times and can be used to understand trends and strengthen the testing strategy, and test results and bugs can also be exported to other systems such as JIRA.
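
As a rough illustration of the idea, the sketch below aggregates results from automated, manual and crowd test cycles into one summary and applies a simple release rule. The data structures, result values and the release rule are hypothetical, invented purely to show the shape of such a consolidated view.

```python
# Minimal sketch: aggregating results from automated, manual and crowd test
# cycles into a single view, as an integrated dashboard might.
# All names, values and the release rule here are hypothetical.

from collections import Counter

test_cycles = {
    "automated": ["pass", "pass", "fail", "pass"],
    "manual":    ["pass", "fail"],
    "crowd":     ["pass", "pass", "pass"],
}

# Count passes and failures per test source.
summary = {source: Counter(results) for source, results in test_cycles.items()}
total_failures = sum(counts["fail"] for counts in summary.values())

for source, counts in summary.items():
    print(f"{source}: {counts['pass']} passed, {counts['fail']} failed")
print("Release candidate" if total_failures == 0 else
      f"Return for further work ({total_failures} failures)")
```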

Each company must conduct tests before the product arrives on the market, and each one makes its own considerations in choosing the types of tests. But test management, which leads to decisions that affect the product and the company, appears to be a weakness in many of them; it is a point that can be strengthened using tools that already exist in the market and can save valuable time in all departments and management.

Acceptance Testing Business Process and Business Rules Modelling

Modelling Business Processes and Rules

Organisations need confidence that critical business processes, such as order-to-cash procedures, human resource on-boarding, or production planning, can be performed without disruption. This is known as “business process assurance” and it is an essential objective of acceptance testing. In this context, two standards exist that provide a common language for business analysts and testers for graphically representing business processes and business rules: Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN). These models support the design and implementation of tests and help to determine the priority for execution.

Business process/rule models describe the business flow and the expected behaviour of the test object. Representing business processes and rules to be tested using a graphical notation helps to establish a common understanding of what is expected. A business process corresponds to a flow of tasks, alternative paths, and the various events at the start, the end or possibly during the control flow. Business rules define explicit criteria for guiding behaviour, shaping judgments, or making decisions. 

Business Process Model and Notation (BPMN), maintained by the Object Management Group (OMG), is a recognised standard for business process modelling which uses a flowcharting technique. In this article, a subset of the Business Process Model and Notation (BPMN) notation is used that is sufficient to draw simple business process models in the context of acceptance testing activities.

Decision Model and Notation (DMN), also standardised by the Object Management Group (OMG), is complementary to the BPMN standard. While Business Process Model and Notation (BPMN) is used to represent workflows, DMN is used to represent decisions, business rules and outcomes/output within the workflow. In this article, a subset of the Decision Model and Notation (DMN) notation is used that is sufficient to define business rules in conjunction with simple business process models in Business Process Model and Notation (BPMN).

Deriving Acceptance Tests from Business Process/Rule Models

A business process model with business rules, described with the Business Process Model and Notation (BPMN) and/or Decision Model and Notation (DMN) notations, provides a precise definition of the scenarios to be tested, including the cases related to business rules. It is a good basis for generating acceptance tests using coverage-based test selection criteria as defined in a model-based testing approach. 

Coverage-based test selection follows the principle that the business analyst and tester agree on the coverage items that shall be fully tested. Typical coverage items for business process models when generating acceptance tests include the following: 

  • User stories, requirements, and risks annotated in the business process model.
  • Decisions in the decision tables describing the business rules.
  • User scenarios defined by different paths through the business process model.
  • All paths (usually without loops) through the business process model.

Once the coverage items are defined, the tester then identifies a set of test cases that covers those items. Full coverage is achieved if the test suite covers each occurrence of the coverage item in the model at least once during execution.

Different coverage criteria may be combined to meet the acceptance testing objectives. For example, the objective may be to cover all paths of a given main scenario, but only one path of each alternative scenario.
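
To make coverage-based selection concrete, the sketch below enumerates all loop-free paths through a toy process model, treating each complete path as a candidate acceptance test scenario. The workflow, its task names and the Python representation are illustrative assumptions, not part of the BPMN standard.

```python
# Minimal sketch: enumerate loop-free paths through a toy business process
# model, treating each path as a candidate acceptance test scenario.
# The "order handling" workflow below is hypothetical.

from typing import Dict, List

# Adjacency list: task -> possible next tasks (a much-simplified BPMN flow).
PROCESS: Dict[str, List[str]] = {
    "start": ["check_stock"],
    "check_stock": ["reserve_items", "notify_out_of_stock"],
    "reserve_items": ["take_payment"],
    "take_payment": ["ship_order", "cancel_order"],
    "notify_out_of_stock": ["end"],
    "ship_order": ["end"],
    "cancel_order": ["end"],
    "end": [],
}

def all_paths(node: str, path: List[str]) -> List[List[str]]:
    """Depth-first enumeration of all paths from node to 'end', skipping loops."""
    path = path + [node]
    if node == "end":
        return [path]
    paths: List[List[str]] = []
    for nxt in PROCESS[node]:
        if nxt not in path:  # ignore loops, per the "all paths" criterion
            paths.extend(all_paths(nxt, path))
    return paths

if __name__ == "__main__":
    for i, scenario in enumerate(all_paths("start", []), 1):
        print(f"Test scenario {i}: {' -> '.join(scenario)}")
```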

Business Process Modelling for Acceptance Testing

Business process/rule models describe the business flow and the expected behaviour of the test object. The use of business process/rule modelling in the context of acceptance testing is based on good modelling practices and supports visual ATDD practices.

Good Practices for Business Process Modelling for Acceptance Testing

The following good practices should be considered when using Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN) for acceptance testing: 

  • It is not necessary to describe everything in a business process model. The graphical representations of business processes in BPMN should focus on requirements to be tested. Therefore, workflow descriptions that only partially cover the behaviour of related software systems are acceptable, as long as they represent what is to be tested.
  • Especially for rule-based business processes, using decision tables helps manage dependencies. DMN supports the definition of conditions and outcomes corresponding to the business rules under test (a minimal sketch of such a decision table follows this list).
  • Diagrams should be as simple as possible and be structured in sub-processes when needed to limit the number of graphical elements in a single business process diagram. This improves readability and facilitates reviews.
  • Business process modelling for acceptance testing should be a collaborative work between business analysts and testers. Artefacts produced should be shared between and reviewed by both roles. Early and close communication between those two roles improves the quality of requirements or user stories as well as tests. (This is true for all test levels.)
  • Additional information such as links to user stories, requirements, risks, priorities and any other information useful for acceptance testing should be added to the diagrams using annotations. By keeping all relevant information in a single location, it becomes easier to make decisions and reasons are better documented.
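
To illustrate the decision table practice referenced above, here is a minimal sketch of a DMN-style decision table expressed as plain data, together with one test case per rule to reach full decision coverage. The customer types, thresholds and discount outcomes are hypothetical.

```python
# Minimal sketch: a DMN-style decision table as plain data, plus full
# "decision coverage" by exercising each rule (row) at least once.
# The discount rules and thresholds are invented for illustration.

RULES = [
    # (customer_type, order_total_condition, outcome)
    ("regular", lambda total: total < 100,  "no discount"),
    ("regular", lambda total: total >= 100, "5% discount"),
    ("premium", lambda total: total < 100,  "5% discount"),
    ("premium", lambda total: total >= 100, "10% discount"),
]

def decide(customer_type: str, total: float) -> str:
    """Evaluate the decision table: the first matching rule wins."""
    for ctype, condition, outcome in RULES:
        if ctype == customer_type and condition(total):
            return outcome
    raise ValueError("No rule matched - the table is incomplete")

# One test case per rule gives full decision coverage of the table.
assert decide("regular", 50) == "no discount"
assert decide("regular", 150) == "5% discount"
assert decide("premium", 50) == "5% discount"
assert decide("premium", 150) == "10% discount"
```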

Using Business Process Models for ATDD

During the refinement sessions for requirements and user stories, the business process and business rule models will help the team to get into the details of the expected behaviour and the acceptance criteria. The representation of workflows in Business Process Model and Notation (BPMN) and of rules in Decision Model and Notation (DMN) directly enable testers to design appropriate test cases that verify the acceptance criteria.

Business process modelling for ATDD is based on the following principles:

  • Business analysts and testers collaborate to model workflows and business rules using graphical notations such as BPMN and DMN.
  • These business process/rule models are reviewed with relevant stakeholders and contribute to the validation of the requirements and acceptance criteria.
  • Testers derive tests from these business process/rule models to ensure and demonstrate the required coverage through the different paths and business rules.
  • Business analysts and testers may also use the models to identify changes that necessitate test case maintenance and to select regression test cases.
  • Business process/rule models created and maintained for ATDD can be viewed as living documentation used by business analysts to present the actual behaviour of the test object.
  • Automated test generation techniques can be used to produce and maintain automated test scripts. The model-based testing approach can also be combined with keyword-driven testing and data-driven testing approaches.

Business process/rule modelling in ATDD provides a visualisation of the workflows to be tested. This is the major difference from the Gherkin language used in BDD.

Acceptance Criteria, Acceptance Tests and Experience-Based Practices

Writing Acceptance Criteria

Specifying acceptance criteria is an important acceptance testing task. It helps to refine requirements or user stories and provides the basis for acceptance tests. Business analysts and testers should collaborate closely on the specification of these criteria. This collaboration ensures high business value from the acceptance testing phase and increases the chance of a successful iteration or product release. 

Writing acceptance criteria forces business analysts and testers to think about functionality, performance, and other characteristics from a stakeholder or usage perspective. This supports early verification and validation of the related requirement or user story and provides a better chance of detecting inconsistencies, contradictions, missing information or other problems. 

The following good practices should be considered when writing acceptance criteria:

  • Well-written acceptance criteria are precise, measurable and concise. Each criterion must be written in a way that enables the tester to measure whether or not the test object complies with the acceptance criterion.
  • Well-written acceptance criteria do not include technical solution details. They concentrate on the question “What shall be achieved?” rather than on the question “How shall it be achieved?”.
  • Acceptance criteria should address non-functional requirements (quality characteristics) as well as functional requirements.

As with requirements and user stories, acceptance criteria should be reviewed through walkthroughs, technical reviews, iteration planning meetings or other methods (if necessary).

Designing Acceptance Tests

This section addresses the test techniques and approaches frequently used for acceptance testing.

Test Techniques for Acceptance Testing

In a requirements-based approach to acceptance testing, the tester derives test cases from the acceptance criteria related to each requirement or user story using black-box techniques such as equivalence partitioning or boundary value analysis.
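
As a small illustration, the sketch below derives boundary values for a hypothetical acceptance criterion such as “a booking may contain 1 to 9 passengers”; the range and the two-value boundary convention are assumptions made for the example.

```python
# Minimal sketch: boundary value analysis for a hypothetical acceptance
# criterion such as "a booking may contain 1 to 9 passengers".
# The two-value variant tests each boundary and its nearest invalid neighbour.

def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Return the classic boundary test inputs for a closed integer range."""
    return [minimum - 1, minimum, maximum, maximum + 1]

print(boundary_values(1, 9))  # -> [0, 1, 9, 10]
```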

Acceptance testing may be augmented with other test techniques or approaches:

  • Business process-based testing, possibly combined with decision table testing, validates business processes and rules.
  • Experience-based testing leverages the tester’s experience, knowledge and intuition.
  • Risk-based testing is based on risk types and levels. Prioritisation and thoroughness of testing depends on previously identified product risks.
  • Model-based testing uses graphical (or textual) models to obtain acceptance tests.

Acceptance criteria should be verified by acceptance tests and traceability between the requirements / user story and related test cases should be managed.

Using the Gherkin Language to Write Test Cases

In ATDD and BDD, acceptance tests are often formulated in a structured language, referred to as the Gherkin language. Using the Gherkin language, test cases are phrased declaratively using a standardised pattern:

  • Given [a situation]
  • When [an action on the system]
  • Then [the expected result]

The pattern allows business analysts, testers and developers to write test cases in a way that is easily shared with stakeholders and may be translated into automated tests. 

The “Given” block aims to put the test object in a state before performing test actions in the “When” block. The “Then” block specifies the consequences that can be observed from the actions defined in the “When” block. Test cases written in Gherkin do not refer to user interface elements but rather to user actions on the system. They are structured natural language test cases that can be understood by all relevant stakeholders. 

In addition, the structure “Given – When – Then” can be parsed in an automated way. This allows automated test script creation using a keyword-driven testing approach. 
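
For instance, the sketch below shows one naive way such parsing could work, grouping scenario steps under their keyword. The scenario text (adapted from the booking-portal user story earlier in this article), the handling of “And” steps, and the parsing rules are simplified assumptions, not the behaviour of any particular BDD tool.

```python
# Minimal sketch: parsing "Given - When - Then" test cases automatically,
# as a keyword-driven test script generator might. The scenario text is a
# simplified example, not taken from a real feature file.

import re

SCENARIO = """
Given a registered user with one booked flight
When the user logs on to the booking portal
Then an overview of the booked flights is shown
And the flights can be updated
"""

def parse_gherkin(text: str) -> dict[str, list[str]]:
    """Group scenario steps under their Given/When/Then keyword.

    An 'And' step is attached to the most recent keyword block.
    """
    steps: dict[str, list[str]] = {"Given": [], "When": [], "Then": []}
    current = None
    for line in text.strip().splitlines():
        match = re.match(r"(Given|When|Then|And)\s+(.*)", line.strip())
        if not match:
            continue
        keyword, step = match.groups()
        if keyword != "And":
            current = keyword
        if current:
            steps[current].append(step)
    return steps

if __name__ == "__main__":
    for keyword, block in parse_gherkin(SCENARIO).items():
        for step in block:
            print(f"{keyword}: {step}")
```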

Initially, Gherkin was specific to some software tools supporting BDD, but it is now synonymous with the “Given – When – Then” acceptance test design pattern. 

Experience-based Approaches for Acceptance Testing

All experience-based test techniques are relevant for acceptance testing. This section focuses on how exploratory testing can be used for acceptance tests, and on beta testing as a source of feedback on system usage. 

Exploratory Testing

Exploratory testing is an experience-based test technique that is not based on detailed predefined test procedures. In exploratory testing, all activities are carried out within an uninterrupted period of time called a session. The testers are domain experts. They are familiar with user needs, requirements and business processes, but they are not necessarily familiar with the product under test. 

During an exploratory testing session, the tester accomplishes the following:

  • Learns how to work with the product
  • Designs the tests
  • Performs the tests
  • Interprets the results

It is a good practice in exploratory testing to use a test charter. The test charter is prepared prior to the testing session (possibly jointly by the business analyst and the tester) and is used by the person in charge of the exploratory session (either a business analyst, tester or another stakeholder). It includes information about the purpose, target, and scope of the exploratory session, the test setup, the duration of the session, and possibly some tactics to be used during the session (such as the type of user that shall be simulated during the exploratory session). Time-boxed sessions help to control the time and effort dedicated to the exploratory session. It is also good practice to perform exploratory testing in pairs or as team work. 

In Agile development, exploratory test sessions can be conducted during an iteration by the product owner and/or the testers for acceptance testing of user stories assigned to the iteration. 

Exploratory testing should be used to complement other more formal techniques in acceptance testing. For example, it may be used to provide rapid feedback on new features before methodical testing is applied. 

Beta Testing

Beta testing is a form of acceptance testing that is often used for Commercial Off-the-Shelf Software (COTS) or for Software as a Service (SaaS) platforms. It is conducted to obtain feedback from the market after development and in-house testing are completed. 

Unlike other acceptance testing forms, beta testing is performed by potential or existing users at their own location. Beta tests neither impose predefined test procedures nor a test charter. Apart from the observed findings, the test activities are usually not documented at all. 

Because the product is tested in various realistic configurations by actual users in their business process context, beta testing may discover defects that escaped during the development process and previous test levels. Resolving issues found by beta tests helps organisations avoid costly hot-fixes or product recalls on a larger scale. 

Acceptance testing should not be limited to beta testing. Beta testing is not systematic or measurable. There is no guarantee that all requirements or user stories are covered by the tests. Moreover, beta testing is performed late in the development process whereas tests based on acceptance criteria support the “Early Testing” principle. 

The Gaming Industry Ecosystems

Testing phases within the Gaming Software Development Lifecycle

Test types during the gaming quality assurance phase

Compliance testing is a very involved process. There are lengthy submission forms to be filled and ITL compliance fees are often much higher than gaming quality assurance testing fees. Therefore, prior to submitting a product to an ITL, a machine manufacturer should ensure the quality of the product being submitted by performing gaming quality assurance testing.

Gaming quality assurance testing activities align with the fundamental test process. The following test activities are performed to ensure the product is tested for quality:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design
  • Test implementation
  • Test execution
  • Test completion

Gaming quality assurance testing is an iterative process with development. Defects are logged and then the product is returned to development to fix the logged defects. Once the logged defects are fixed, the product comes back to gaming quality assurance for further testing. This cycle continues until the product reaches the quality levels desired, as defined by the gaming quality assurance testing exit criteria.

The test types performed during gaming quality assurance testing include, but are not limited to:

  • Localisation testing – testing to determine that all screens have proper language translations and any other localisation items such as date/time or numbering formats are done correctly.
  • Functional testing – testing to determine that all functions work as designed and intended.
  • Performance testing – testing to determine that response time is acceptable.
  • Memory leak testing – testing to determine that no performance issues arise due to a lack of proper memory management by the software.
  • Install-ability testing – testing to determine that the product can be easily installed by the casino operator without any issues.
  • Portability testing – testing to determine that the product can be installed on all platforms it needs to run on.
  • System integration testing – testing to determine that the product can integrate with systems/components and different versions of an application interface, and that it can communicate with a system.
  • Operational testing – testing to determine that the product can function and be operated in a production environment, including reliability, security, recovery and failover testing.
  • Pre-compliance testing – testing to determine that all the regulations and standards are met prior to submitting the product to the ITL. This helps ensure a minimal number of submissions of the product to compliance testing.
  • Customer acceptance testing – Prior to submitting to compliance testing, some products are submitted to the client (i.e., lottery or casino) for customer acceptance testing to ensure all features function as the client expected.

Compliance testing

Compliance testing occurs when a machine manufacturer wants to enter a jurisdiction with a new or modified product, be it a game, platform, hardware or system. Based on the jurisdiction, the machine manufacturer needs to submit the product with a request for certification to either an ITL or government-based compliance test lab. In some jurisdictions, it goes through both. 

The formal submission documents fully detail what is being submitted and which certifications are being requested for the product.

Once the product has passed compliance testing, the test lab will provide a certificate of compliance evidencing that the applicable rules and regulations have been met.

Once the regulatory commission has seen proof of the required certifications, it will allow the product to be installed in the gambling establishments in their jurisdictions.

When performing compliance testing, standard test plans are created and specific compliance checklists are used. These may include, but are not limited to: 

  • Jurisdictional specifications – usually defined by a governing body such as the federal, state and/or provincial government.
  • ITL defined standards – defined by an ITL in the gambling industry.
  • Other gaming related standards – some jurisdictions require other standards be adhered to. For example, some jurisdictions may require that gaming machines and systems in a jurisdiction are Game to System (G2S) protocol compliant. The G2S compliance checklist is defined by the Gaming Standards Association, the association that has defined the G2S protocol.

Many areas of the compliance testing will be the same as those performed in gaming quality assurance testing, but they are tested against the jurisdictional specifications and not the game specifications.

Some of the areas that are covered during compliance testing include: 

  • Rules of play – testing to determine that the rules meet the jurisdictional specifications.
  • RNG, payout percentages, odds and non-cash awards – testing to determine that the payout percentage is within the range regulated in that jurisdiction.
  • Bonus games – testing to determine that the game meets bonus regulations.
  • Electronic metering – testing to determine that all meters required to be monitored within that jurisdiction are being reported.
  • Game history – testing to determine that the game history tracks, at a minimum, the number of games required by the jurisdiction.
  • Power-up and power-down – testing to determine that the power up and down functionality works as per the jurisdictional specifications.
  • Setup and Configuration – testing to determine that only configurations that are permitted within the jurisdiction can be enabled.

The Gaming Ecosystem

The gaming industry ecosystem overview

The gaming industry ecosystem is composed of the following organisations: 

  • Game Developers – develop casino games not specific to a gaming machine model. These games are usually distributed by a manufacturer or casino.
  • Machine Manufacturers – make and sell the hardware, platforms, operating systems and games, developed in house or sub-contracted.
  • Independent Test Labs – test and certify that the game software, hardware, firmware, platform and operating system follow all the jurisdictional rules for each location where the game will be played.
  • Regulatory Commissions – approve every game played in their jurisdiction after the ITL certifies that the game meets the commission’s jurisdictional specifications.

The regulatory commission licenses the machine manufacturer to deploy the game in casinos or on online gaming sites in that specific jurisdiction. A game may be shipped to a casino before licensing; however, it cannot be deployed. The game must be licensed by the regulatory commission before it is deployed into the jurisdiction. Should any major defects be found in the casino, the regulatory commission can force the machine manufacturer to pull their game out of all casinos or demand that the online sites remove access to the game in that jurisdiction.

Video lottery terminals and their ecosystem

As indicated by the name, VLTs always have a video display for the game. VLTs either have standalone or server-based outcome architectures. In the standalone model, each VLT contains an RNG from which game outcomes are generated. In the server-based outcome architecture, VLTs obtain their outcomes from the server. This architecture has two possible models: the RNG model or the pre-determined finite pool model. In the server-based RNG model, the server generates the outcome it will provide to the VLT using an RNG located in the host. In the pre-determined finite pool model, the server obtains the outcome from a database of pre-determined outcomes. This model is similar to instant tickets and is often referred to as electronic instant tickets.
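
The sketch below contrasts the two server-based outcome models in simplified form: a server-side RNG that generates each outcome on demand versus a pre-determined finite pool that is consumed like electronic instant tickets. The class names, outcome space and pool contents are invented for illustration.

```python
# Minimal sketch of the two server-based VLT outcome models described above.
# All names and values here are hypothetical.

import random

class RngOutcomeServer:
    """Server-based RNG model: each outcome is generated on demand."""

    def draw(self) -> int:
        return random.randint(0, 9999)  # hypothetical outcome space

class FinitePoolOutcomeServer:
    """Pre-determined finite pool model: outcomes come from a fixed,
    pre-shuffled set and are consumed exactly once, like instant tickets."""

    def __init__(self, outcomes: list[int]):
        self._pool = list(outcomes)
        random.shuffle(self._pool)

    def draw(self) -> int:
        if not self._pool:
            raise RuntimeError("Pool exhausted - a new pool must be loaded")
        return self._pool.pop()

rng_server = RngOutcomeServer()
pool_server = FinitePoolOutcomeServer(outcomes=[0, 100, 0, 500, 0])
print(rng_server.draw(), pool_server.draw())
```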

The types of games typically found on a VLT are: mechanical reel games, poker games and keno games. Most VLTs are multi-game machines, meaning multiple games are available for a player to choose from through a screen menu.

VLTs are frequently operated in a distributed environment over a Wide Area Network. For example, a few VLTs deployed in bars and/or pubs are connected to a central server through a Wide Area Network connection. 

The VLT ecosystem is composed of: 

  • The EGM
  • The site controller and/or bank controller
  • The systems/servers used for monitoring and/or managing functionality

The EGMs are the machines on which the players choose to play the games. Each machine communicates with a site controller and/or bank controller and one or more central servers through a communication interface board using an electronic communication language referred to as a protocol. When VLTs are installed in a distributed environment, each retail location has a site controller to which the VLTs at that location are connected. The site controller serves multiple functions:

  • Communicates with and monitors VLTs to ensure they are online.
  • Records game play transactions, cash-in/cash-out transactions and security exceptions.
  • May act as a protocol converter by translating the protocol implemented on the VLT to the protocol understood by the central server.
  • Provides retailers with the ability to:
    • Register players for player tracking cards
    • Validate and pay out cash tickets

When VLTs are installed in a venue environment (i.e., a non-distributed environment), they are connected to a bank controller which functions like a site controller minus the retailer functions. A bank controller can support connection of several hundred VLTs, whereas one site controller typically supports fewer than 100 connected VLTs. 

The VLTs and bank controllers and/or site controllers are connected to various central servers based on the functionality offered by a jurisdiction. At a minimum, VLTs installed in a venue environment include the following: 

  • A casino accounting system, which is responsible for monitoring the amounts wagered and paid on each VLT.
  • A VLT CMS, which provides the ability to monitor game play, track, record and report security exceptions at the VLT and/or site controller, and monitor network availability in order to ensure continuous VLT operations in the event of communication loss.

Other central servers may include additional features, not limited to:

  • A cashless wagering server, which allows for cashless transactions either through ticket-in/ticket-out (TITO) functionality or through electronic funds transfer (EFT).
  • A distributed game content management server, which controls the selection, scheduling, distribution and auditing of VLT software to VLTs at remote retail sites.
  • A player services server, which supports player loyalty, player rewards and responsible gaming functionality.
  • A progressive server, which manages progressive game play.
  • A business intelligence server, which provides data warehousing and business analytics.

The other servers available are based on the functionality offered in the jurisdiction.

Slot machines and their ecosystem

Slot machines may have a video display or mechanical reels, i.e., actual physical reels that spin. 

Slot machine outcome architectures derive from the Indian Gaming Regulatory Act, a 1988 US federal law that establishes the jurisdictional framework governing Indian gaming in the US. This law provides definitions for Class I, Class II and Class III architectures. Class I relates to traditional Indian gaming and will not be discussed further in the context of casino gaming. Class II and Class III define the two outcome architectures used by slot machines. Class II (also known as electronic bingo) is defined in the Act as “the game of chance commonly known as bingo whether or not electronic, computer or other technological aids are used”. Class III (also known as traditional slot machines) has a broad definition in the Act: “all forms of games that are neither Class I nor Class II”. Games commonly played at casinos, such as slot machines and table games (e.g., blackjack, craps, roulette), fall into the Class III category. 

The types of games typically found on a slot machine are: mechanical reel games, bingo games, poker games and keno games. Many slot machines are single-game machines, meaning only one game is available for play on the gaming machine. 

Slot machines are typically operated in a venue environment such as a casino. 

Slot machines (also known as Vegas style slot machines) are: 

  • Casino gaming machines with mechanical reels or a video display.
  • Machines that have an RNG that is local to that machine.

Machines usually include a currency input device, such as a coin acceptor or a note acceptor, and a currency output device, such as a coin hopper.

The slot machine ecosystem is composed of:

  • The slot machines
  • A slot machine interface board (SMIB)
  • A data collection unit or bank controller
  • Central servers

Each slot machine contains an SMIB that is linked to the data collection unit or bank controller. Historically, an SMIB was a small board that was put into every mechanical or electro-mechanical machine. These early SMIBs connected to a wiring harness that would detect when a mechanical meter was incremented or a mechanical door switch was opened. As time passed, these SMIBs evolved and now communicate electronically with the gaming machine; they are often responsible for implementing the protocol used to communicate with the data collection unit or bank controllers and the remote central servers. The SMIBs, at a minimum, capture:

  • The amounts wagered by the player.
  • The amounts paid out to the player.
  • If the player is using a player card, any player data tracked by the casino.

Data collection units or bank controllers, as suggested by the name, are used to collect and store the data obtained from the SMIBs.

The data collected by the data collection unit or bank controller is communicated to the servers to update the data needed for the functionality provided by the servers.

Bingo machines (also known as electronic bingo machines) are: 

  • Machines that look and feel like slot games but are actually a game of electronic bingo.
  • Machines on which the outcomes are obtained from a centralised bingo server.
  • Machines that offer cashless input methods such as TITO or EFT. These gambling machines do not have currency input/output devices.
  • Machines on which the games:
    • Are played exclusively against other players rather than against the house or against a player acting as a bank.
    • Are based on multiple players playing for a common prize. 
    • Continue until there is a winner.

Each bingo machine contains an SMIB that is linked to the bingo server and other servers. The SMIBs, at a minimum, capture: 

  • The amounts wagered by the player.
  • The outcome obtained from the server and the corresponding results.
  • The amount paid out to the player.

Network switches are used to provide multi-player capabilities. Once the minimum number of players required for a specific game is met, the actual bingo game can start. The bingo server is the system that allows players to join a game until the group reaches the required minimum, and it provides the outcomes to the machines.

There are other central servers, such as the casino accounting server that tracks the amounts wagered and amount won, and the reporting server that allows the casino operators to report on the collected data.

Lottery and its ecosystem

The lottery ecosystem is composed of systems and devices deployed at the lottery and at each retail location. 

The main device is the point of sale (POS) lottery terminal. The POS lottery terminal facilitates the sale of traditional lottery tickets by allowing the retail employee to either scan a selection slip containing the player-selected numbers, or to select a Quick Pick option where the POS lottery terminal randomly selects numbers for the player. The POS lottery terminal then prints the tickets on the attached printer. The POS lottery terminal facilitates the sale of instant tickets by scanning the instant ticket sold. The attached customer display unit (CDU) allows the customer to view all steps of the sales transaction. The POS lottery terminal must coordinate all lottery ticket sale transactions with the lottery CMS. The POS lottery terminal, printer and CDU/PDU unit are either separate devices or, in some cases, integrated into one unit. When a player is ready to validate a ticket, the player can choose to have the retail employee scan the ticket using the POS lottery terminal or perform the validation themselves on a POS self-serve terminal. 

The final device at the retail location is the multimedia display. The multimedia display is used for in-store advertising of lottery products, upcoming lottery promotions and winning numbers. 

Once the numbers are drawn, the numbers are entered into the lottery CMS. Using the data stored in the database of the CMS, reports can be generated indicating how many winning traditional lottery tickets were sold and which retail location sold the ticket(s). For instant tickets, the barcode data of each ticket is stored in the CMS database. This allows the lottery employees to generate reports indicating how many tickets remain unsold and manage replenishment of physical tickets to retailers. The lottery CMS is responsible for storing all transactional data of tickets sold at each retail location. It also manages the advertising content to be displayed on the multimedia display units at the retail locations and downloads the appropriate content to the POS lottery terminal from which the multimedia display unit displays the content. 

Lotteries are beginning to introduce alternative means by which to purchase lottery tickets. For example, purchases can be made at self-serve vending machines for instant tickets, at self-serve ATM-like kiosks for traditional lottery tickets, or online at the lottery’s website. These alternative devices and components must also coordinate all sale transactions with the lottery CMS. 

Some of the areas that are covered during functional testing of the lottery ecosystem include:

  • Online game rules and functionality.
  • Scratch ticket management.
  • Player account management.
  • Geolocation functionality.
  • Player services manager.
  • Multi-player gaming.
  • Player loyalty and rewards.
  • Account based play (“cashless”).
  • Responsible gaming functionality.
  • Game play functionality and playability.
  • RNG algorithm and math rules.
  • Artwork versus requirements.
  • Host accounting and reporting – determine that the game pays out what it should and that the money at play goes to the client if they win, or to the casino if the client loses.
  • Tournament and real time event setup and management.
  • Multiple game engines functionality and capabilities.
  • Integrations with external gaming sites and mobile devices.

Introduction to the Gaming Industry

Objectives and Overview

Understand the objective of the gaming tester

Being a gaming industry tester means that you must understand both testing in general and the unique set of skills required for the gaming industry ecosystem. This ecosystem is filled with proprietary, complex, multifaceted gaming software, hardware, platforms, firmware and operating systems. The objective of this article is to provide tester graduates with the specific knowledge that is required for a career in gaming industry testing. 

Why the gaming industry requires a specialist tester

Some of the specific types of testing for the gaming industry, not present in other testing areas, include the following: 

1. Gaming industry ecosystem – The unique hardware, firmware and operating systems that are proprietary to the gaming industry. 

2. Gaming industry compliance testing – There are over 440 different certification boards worldwide for gaming industry games. These boards have rules that games in the gaming industry must comply with. These rules impact hardware, software, platform, operating systems, visual and auditory functionality, mathematics, and return to player (RTP) calculations. One gaming industry game can be played in multiple gaming jurisdictions and needs to comply with the laws of each location. 

3. Fun factor or player perspective testing – This is something unique to gaming industry games, since they are an entertainment product. Not only are casino games supposed to work intuitively and provide the player pleasure, they must also be fun to play. This requires a unique insight into game design, with experience and information about the user group and what that group enjoys. 

4. Math testing – Testing the multitude of pay tables, permutations, Random Number Generator (RNG) results and RTP computations. This type of testing requires the tester to understand what triggers different types of payout behaviour, the financial return to the player, and how these triggers can be affected by different parameters. Understanding math testing is critical to succeed in this field (a minimal RTP sketch follows this list). 

5. Audio testing – Creating sound or playing media is common in software. However, gaming industry game music must engage the user in the game and enhance the game play. Not only should the audio play without stuttering or missing elements, it should also add to the game play. This requires extensive audio skills and specific understanding of game audio. 

6. Multiplayer testing – This type of testing is performed when many players are simultaneously interacting with casino games, with computer-controlled opponents, with game servers, and with each other. Typical risk-based testing is followed to avoid spending unlimited amounts of time testing different scenarios. Understanding multiplayer game design, and how to test it efficiently, is required knowledge for this type of testing. 

7. Interoperability testing – This is common in all software that communicates with other software, systems and/or components. Casino/video lottery games have a unique aspect in that they must implement interoperability using gaming industry open protocol standards or proprietary protocols, as per the specifications of the central server in the jurisdiction where the game is deployed.
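
As referenced in point 4 above, here is a minimal sketch of an RTP computation for a hypothetical pay table: the theoretical RTP is the sum of each outcome’s probability multiplied by its payout per unit wagered. The probabilities and payouts below are invented; real pay tables are derived from the game’s reel strips and rules.

```python
# Minimal sketch: computing the theoretical return to player (RTP) of a
# hypothetical pay table. All probabilities and payouts are invented.

PAY_TABLE = [
    # (probability of outcome, payout per unit wagered)
    (0.000001, 10000),  # jackpot
    (0.0005,   500),
    (0.01,     25),
    (0.05,     5),
    (0.20,     1),      # stake returned
]

# Expected return per unit wagered: sum of probability * payout.
rtp = sum(probability * payout for probability, payout in PAY_TABLE)
print(f"Theoretical RTP: {rtp:.2%}")  # -> Theoretical RTP: 96.00%
```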

Gaming Activities and Artefacts

Background 

To understand gaming industry testing and its ecosystem specificities requires a review of the business model, activities, and artefacts as they pertain to the gaming industry. 

What is gaming?

Gaming can be defined as follows: 

  • The wagering of money or something of value, also called stakes, on an event
  • Where the outcome of the event is unknown
  • Where the whole intent is winning additional money, material goods or trips 

What is a gaming machine? A gaming machine is a machine that enables the wagering of money or something of value. Examples of gaming machines are: electronic or mechanical slot machines, a roulette table or even a computer for online gaming. 

Types of Gaming

Casino games

There are three categories of casino games: table games, electronic gaming machines (EGMs) and random number ticket games. 

Examples of table games are roulette, blackjack, baccarat or poker, which typically are not tested unless they are an electronic table game version of these games. 

The second group are EGMs, typically known as video lottery terminals (VLTs) or slot machines. These are usually played by one player at a time and do not require the involvement of casino employees to play. These games need to be tested, i.e., the game software, the machines, the operating systems, and platforms that they are based on. 

VLTs and slot machines are both gaming machines that allow players to bet on the outcome of a game. Physically, VLTs and slot machines are very similar in nature. The main difference between a VLT and a slot machine is that VLTs are gaming machines that are operated by government lotteries while slot machines are gaming machines operated by private organisations such as casinos. 

Both VLTs and slot machines are regulated and require licenses to be operated within their jurisdictions. Many countries around the world offer legalised VLT or slot play. For example: 

  • In the United States, a 1988 federal law established three classes of games for Native American casinos, with different regulatory schemes for each. Each state government follows variations of these classes to define their regulations.
  • In Canada, the provincial or territorial governments are responsible for regulating gaming operations. All provinces offer the ability to play, each with their own regulations.
  • In Australia, the laws regulating the use of gaming machines are the responsibility of the state governments.

Other terms by which VLTs and slot machines are referred to: EGM, Video Gaming Terminal, Video Gaming Device, Video Slot Machine and Interactive Video Terminal.

The third casino game category is random number ticket games such as Keno and simulated racing. These games are based on the selection of random numbers, either from a computerised RNG or from other gaming equipment. 

Lottery systems

A lottery is a form of gaming that involves selling numbered tickets and giving prizes to holders of winning tickets. The prize can be a fixed amount of cash or goods, but more commonly, the prize fund is a fixed percentage of the revenues from the tickets sold. 

There are typically two forms of lottery products sold: traditional lottery tickets and instant tickets. 

Traditional lottery tickets are numbered tickets that are sold for regularly scheduled draws, most often weekly. On the draw date, random numbers are drawn either using a ball drop machine or electronically. Most lotteries that have moved to electronic draws still have ball drop machines as a backup in case of failures with the software solution. Once the numbers are drawn from the ball drop machine, they are entered into the lottery central management system (CMS). 

The chances of winning a lottery jackpot can vary widely depending on the lottery design, and are determined by several factors, including: 

  • The count of possible numbers
  • The count of winning numbers drawn
  • Whether or not the order is significant
  • Whether drawn numbers are returned for the possibility of further drawing
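
For example, in a hypothetical “choose 6 of 49” lottery where order does not matter and drawn numbers are not returned, the jackpot odds are one in the number of possible combinations:

```python
# Minimal sketch: jackpot odds for a hypothetical "choose 6 of 49" lottery
# where order is not significant and numbers are not returned after drawing.

import math

possible_numbers = 49
winning_numbers_drawn = 6

combinations = math.comb(possible_numbers, winning_numbers_drawn)
print(f"Jackpot odds: 1 in {combinations:,}")  # -> 1 in 13,983,816
```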

Instant tickets are numbered tickets from a pre-determined finite pool of outcomes. The most common form of instant tickets is the scratch card. Scratch cards are typically made of paper, with the outcome printed and hidden by an opaque substance that needs to be scratched off, hence the name of these tickets. The cards usually present the information in the form of a game, such as Tic-Tac-Toe, Bingo, Crossword or some other puzzle, to help add entertainment value. A variation of the scratch card is the break-open (also known as pull-tab) ticket in which, instead of scratching off an opaque substance to reveal the outcome, the player opens a perforated cardboard cover that hides the outcome. Since outcomes of scratch and break-open tickets are pre-determined, the cards do not need to be scratched or opened to be validated.

A barcode on the ticket can be scanned by the lottery CMS to determine if it is a winner or not. The scratching or breaking open is there for entertainment value to the player only. 

The chances of winning on a scratch card are typically much higher than on a traditional lottery, but prize amounts are typically much smaller. The probability of winning on a scratch card can be calculated using the odds found on the back of the scratch ticket. 

When it comes to lottery operations, it is critical that all parties are confident in the process. For everyone involved, including players, to feel confident, those running the lottery operations must establish and uphold a secure environment that is documented and accessible. To address this, the Security Control Standard was put in place by the World Lottery Association, and lottery organisations are audited against this standard on a regular basis. 

Race and sports gaming

Race and sports wagering is also called sports betting. It is the activity of predicting sports results and placing a wager on the outcome. Although most sports betting wagers are placed against amateur and professional level sports, sports betting is sometimes extended to non-athletic events such as reality show contests and political elections, or sometimes to non-human athletics such as horse racing and greyhound racing. 

Sports betting can be performed at the sports betting outlet in a casino, with bookmakers (also known as a sportsbook) or online through a computer or mobile device. The types of sports bets include: 

  • Money-line Bet
  • Spread Betting
  • Proposition Bet
  • Over / Under Bet
  • Parlay
  • Progressive Parlay
  • Future Wager

Money-line bets (also known as win bets) are among the most popular wagers in sports betting and are easy to understand. They are used in almost every sport a player can bet on and are a wager on who the player thinks is going to win a match, game or other event. A money-line bet does not have a spread or handicap (explained below). It should be noted that the predicted winner, i.e., the competitor expected to win, pays lower odds than an underdog does. 

Spread betting is defined as wagers that are made against the spread. The spread is a number assigned by the bookmaker which handicaps one team and favours another. This type of betting is similar to the money-line win, in that the player is choosing which team he/she thinks will win, but there is a significant difference. A point spread is created to effectively make the two teams equal favourites in terms of betting. This means the player either backs the favourite to win by at least the size of the spread, or backs the underdog to win or lose by no more than the size of the spread. For example, the odds for this week’s National Football League games are posted and the point spread in the Washington Redskins versus Dallas Cowboys game looks like this: Dallas -4.5, Washington +4.5. The favourite team is associated with a minus (-) value, so Dallas is favoured by 4.5 points in this game. Consequently, the underdog is shown with a plus (+) value, which means Washington is a 4.5-point underdog. A wager on Dallas would be made if a player believes Dallas can win the game by 5 points or more. So, if Dallas wins the game 20-14, the team not only wins by 6 points but also covers the 4.5-point spread as the favourite. However, if Dallas wins the game 20-17, they win by 3 points and have NOT covered the 4.5 points, but Washington has, because they stayed within the spread. 
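
The sketch below settles that example programmatically; the function name and signature are illustrative.

```python
# Minimal sketch: settling the spread bet example above
# (Dallas -4.5 versus Washington +4.5).

def favourite_covers(favourite_score: int, underdog_score: int,
                     spread: float) -> bool:
    """True if the favourite wins by more than the spread."""
    return (favourite_score - underdog_score) > spread

# Dallas 20 - Washington 14: wins by 6, covers the 4.5-point spread.
print(favourite_covers(20, 14, 4.5))  # True
# Dallas 20 - Washington 17: wins by 3, does not cover the spread.
print(favourite_covers(20, 17, 4.5))  # False
```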

Proposition bets (also known as Props or Specials) are wagers made on events that are not related to the final outcome. Example events are: who will win the first round of a boxing match or which team will score first in a match. 

Over/Under bets (also known as Totals) are wagers made on whether an outcome will be under or over an estimated outcome set by the bookmaker. For example, how many three-point shots will LeBron James make tonight?

– Over 2.5
– Under 2.5

In this example, notice how the prop takes the form of a traditional game total wager. This is a simple wager to understand: if the person making the wager thinks that LeBron James can make three or more three-point shots tonight, they bet the over. If they think he cannot, they take the under. 

There are specific odds for both the over and under bet. Payments depend on the odds at the time the bet is made. 

Parlays (also known as accumulators) involve multiple bets and reward a successful player with a large payout. These bets are hard to predict because they involve making more than one selection as part of a single wager. For example, the player might place a single wager on the winners of the next five football matches. If the wager succeeds, the payout is substantially higher than if the player had wagered on each game separately. The downside is that the player loses the complete wager if the selected team loses any one of the five games. Based on the number of selections, the parlay can receive a unique name. For example, “Double” when it contains two games, or “Treble” when it is composed of three games. 
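
A common way to compute an accumulator payout is to multiply the decimal odds of the individual selections; the sketch below applies this convention with hypothetical odds and stake.

```python
# Minimal sketch: a parlay payout as the product of the individual selections'
# decimal odds. The odds and stake are hypothetical; an all-or-nothing parlay
# pays only if every selection wins.

from math import prod

stake = 10.0
decimal_odds = [1.8, 2.1, 1.5]  # five selections would work the same way

payout_if_all_win = stake * prod(decimal_odds)
print(f"Payout if all selections win: {payout_if_all_win:.2f}")  # -> 56.70
```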

Progressive parlays are similar to parlays in that they involve making more than one selection as part of a single wager. However, they differ from a parlay in that the player is rewarded even if some of the bets lose. If all bets are won, the player is awarded the full payout, which is not as large as that of a regular parlay; if some of the selections within the parlay lose, the player receives a reduced payout. 

Future Wagers (also known as Outright wagers) are wagers placed on future events. Although all sports wagers are on future events, with a future bet, there is a long-term horizon measured in weeks or months. Future wagers usually are made before the season starts. Winning bets will not pay off until the end of the season. For example, the player might make a futures wager on a team winning the National Hockey League (NHL) Stanley Cup. The wager must be placed before the regular NHL season begins and the payoff will not be made until after the Stanley Cup playoffs end. 

Online and mobile gaming

Online gaming includes all areas of gaming offered via the Internet, mobile, wireless in-venue, and interactive-TV channels. The online gaming space contains all the different types of gaming that have been discussed thus far, i.e., slot games, table games, lottery, and sports betting. 

Online gaming has become one of the most popular and lucrative businesses on the internet. Legalisation of online gaming varies based on the type of online gaming product and the jurisdiction in which it is offered. For example, purchasing traditional lotto tickets through online websites is legal in many jurisdictions. However, not all jurisdictions have legalised casino-style gaming, such as poker or slot games, through online gaming websites. 

Mobile gaming is online gaming on a mobile device such as a tablet or smartphone. There are two types of mobile gaming. The first is online gaming at casino websites, accessed on a mobile device either through a browser or through a mobile app. The second is in-venue mobile gaming, which allows land-based casinos to add mobile technology and content to their existing offerings. Products are accessible to players on the gaming machines on the casino floor and on mobile devices inside the casino. 

For the online and mobile gaming ecosystem, the player needs to be able to access the casino’s online gaming products. This can be done in two ways: 

  • Browser-based
  • Downloadable application

If the player chooses to play through a browser-based casino website, the games are available through the player’s browser while on the online casino’s website. 

If the player chooses to play through a downloadable application, he/she must first install the online casino’s software on his/her computer or mobile device. This option usually offers better graphics, sound and game play than the browser-based option. In either case, to play at the online casino, the player must have a means of transferring money to and from it. This can be accomplished with an electronic wallet (also known as a digital wallet), such as PayPal. For mobile in-venue gaming, some casinos have internal electronic wallets as part of the casino management system, often associated with a player’s account. In this scenario, the player deposits funds into or withdraws funds from the casino’s electronic wallet solution at the cashier booth.

To ensure online or mobile gaming is performed only where it is legal, geolocation, micro-location technology and triangulation are used to confirm the location of the player. Geolocation is the estimation of the real-world geographic location of an object, i.e., the computer or mobile device a player is using to play online gaming. Micro-location technology is used for in-venue mobile gaming. This technology works by using the casino’s existing Wi-Fi network or Bluetooth beacons to pinpoint a player’s location to within a few feet. For out-of-venue online gaming, some jurisdictions have decided on mobile phone triangulation to confirm the location of players. This triangulation method determines which cellular towers are closest to the player’s mobile phone and ensures that the player is in the right geographical location. Mobile phone triangulation is accurate to within roughly a mile of the player’s actual location. Other jurisdictions have decided to use Wi-Fi to verify geolocation for out-of-venue online gaming; this technology is accurate to within a few feet of the player’s actual location. 
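
A test for this function might assert that play is blocked outside a permitted area. The sketch below uses a simplified radius check (real deployments use jurisdiction-specific geofences, so the distance threshold here is purely an assumption):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def play_allowed(player: tuple, venue: tuple, max_km: float = 1.6) -> bool:
    """Allow play only if the estimated position lies within max_km of the
    venue -- a stand-in for a jurisdiction's real geofencing rules."""
    return haversine_km(*player, *venue) <= max_km
```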

Individuals looking to circumvent these location restrictions use technical measures such as proxy servers to try to bypass the limits imposed by geolocation software. Some online gaming sites can detect the use of proxies and anonymisers and block their access to the online gaming systems.

Key Concepts in the Gaming Industry

Progressive jackpots

A progressive jackpot is a prize or payout which increases each time the game is played but the jackpot is not won. A small percentage of each wager placed by a player on the game contributes to the jackpot award amount. The game that the progressive jackpot is attached to can be any type of game (e.g., mechanical reels, poker, etc.). 

When the progressive jackpot is won, the jackpot for the next play is reset to a predetermined value, and resumes increasing under the same conditions. The progressive jackpot win is often associated with the highest winning combination on the gaming machine in which it is being played. In order to win the progressive jackpot, in most games, the player needs to have placed a maximum bet as the wager for the play. 
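
A minimal model of these mechanics follows; the seed value and contribution rate are illustrative assumptions, not industry figures:

```python
class ProgressiveJackpot:
    """A fixed share of every wager feeds the pot; a win resets the pot
    to its predetermined seed value and accumulation resumes."""

    def __init__(self, seed: float = 10_000.0, contribution_rate: float = 0.01):
        self.seed = seed
        self.contribution_rate = contribution_rate
        self.amount = seed

    def record_wager(self, wager: float) -> None:
        self.amount += wager * self.contribution_rate

    def pay_out(self) -> float:
        won, self.amount = self.amount, self.seed   # reset for the next play
        return won

pot = ProgressiveJackpot()
pot.record_wager(5.0)    # pot grows by 0.05
print(pot.pay_out())     # 10000.05, then the pot resets to 10000.0
```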

Progressive jackpots are available both on VLTs and slot machines. There are three types of progressive jackpots: 

  • Standalone progressive
  • Local area linked progressive
  • Multi-site linked progressive

A standalone progressive has a jackpot on the individual EGM. Only bets placed on that specific EGM increment the jackpot.
Local area linked progressives are games within a venue that are linked together to contribute to a common progressive jackpot. This type of jackpot is usually found in a casino. Such a network can include as few as a dozen EGMs or as many as hundreds.
Multi-site (also known as Wide Area) linked progressives link gaming machines from multiple venues to participate in the progressive jackpot. Because jurisdictional rules differ, multi-site linked progressives usually only link machines within the same jurisdiction, often across casinos operated by the same organisation. However, some examples of multi-jurisdiction progressive jackpots exist. For example, in July 2006, the Multi-State Lottery Association in the US introduced the first multi-jurisdictional progressive jackpot, called Ca$hola. This progressive jackpot linked EGMs at nine lottery-run casinos: three in Delaware, two in Rhode Island, and four in West Virginia. This linked progressive was replaced in 2011 by the MegaHits jackpot and now includes two additional states: Ohio and Maryland.

A linked progressive jackpot solution adds some additional devices to VLT and slot machine ecosystems: 

  • A progressive jackpot display or sign
  • A progressive jackpot controller
  • A progressive jackpot server 

The progressive jackpot display or sign is used to display the current amount of the progressive jackpot. 

The progressive jackpot controller is used by the venue to manage the progressive jackpot. The jackpot controller links the games contributing to the progressive jackpot and communicates the jackpot value to the progressive jackpot display.

The progressive jackpot server is used to manage multiple jackpot controllers and different progressive jackpot games that may exist across a venue. It will also monitor and collect all progressive related data to allow for analytics and auditing of progressive jackpots.

Random Number Generator (RNG)

The Random Number Generator is a computational or physical device designed to generate a sequence of numbers that lacks any pattern, i.e., the numbers are random or appear unrelated. RNGs are used in gaming, statistical sampling, computer simulations and other areas where producing an unpredictable result is desirable. Any machine-based gaming involves an RNG. 

The RNG is a vital part of all gaming machine operations. Where unpredictability is essential, such as in security applications, hardware generators are generally preferred over pseudo-random algorithms. 

The RNG is certified by either an independent test lab (ITL) or the jurisdiction’s regulatory board. 

The win selection flow

The selection process, or the “did I win?” process, is another key concept of the gaming industry. All gaming machines such as EGMs use some type of win selection process to determine and display the outcome of the game. This means if the player pulls a lever or presses a button, something happens on the screen and then there’s an outcome that says “Yeah! I’ve won!” or “No, I’ve lost!”. 

What is also important about the selection process is that it can be performed on the EGM itself or on a server. In some cases, the whole process, from “spin the wheel” to “get a response” to “you won or lost”, is done on a standalone EGM. 

The technology being used and the specific jurisdictional rules of where the game is being played will influence the selection process and whether it is performed on the EGM or on the servers.

This selection process will involve the following: 

  1. Start of spin
  2. A raw random number is generated by the RNG
  3. The raw random number is scaled to a usable number
  4. The number is mapped to a game element (e.g., is it a star? is it a 7? is it Wheel of Fortune?)
  5. There is an evaluation of the outcome of the results of that random number generation
  6. The prize is awarded to the player with that outcome. Either credits are taken away from the player in the case of a loss or credits are given because of a win.
  7. There is a display of the outcome to the player
  8. The prize is paid, if applicable
  9. End of spin
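
A minimal sketch of steps 2 through 6 follows; the symbols and paytable are invented for illustration, and a certified RNG and jurisdiction-approved math model would be used in practice:

```python
import secrets

SYMBOLS = ["7", "star", "bell", "cherry"]                        # hypothetical reel strip
PAYTABLE = {("7", "7", "7"): 100, ("star", "star", "star"): 25}  # hypothetical prizes

def spin(wager: int) -> int:
    """Steps 2-6 of the flow above, for a three-reel game."""
    # Steps 2-3: raw random numbers, scaled to usable indices.
    # secrets.randbelow avoids modulo bias in the scaling step.
    indices = [secrets.randbelow(len(SYMBOLS)) for _ in range(3)]
    # Step 4: each number is mapped to a game element.
    reels = tuple(SYMBOLS[i] for i in indices)
    # Step 5: the outcome is evaluated against the paytable.
    prize = PAYTABLE.get(reels, 0) * wager
    # Step 6: credits change -- the wager is taken, any prize is added.
    return prize - wager
```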

Player privacy and geolocation

Privacy laws in most jurisdictions mandate that any player information being tracked, whether for responsible gaming or player loyalty program purposes, is stored and used in accordance with the personal-data regulations set forth by those laws. An example of testing player privacy is verifying that the solution makes player information available only to those who should have access, and that any such information is encrypted when being transferred between devices and systems. 

Some responsible gaming and player loyalty programs require knowing where the player is located. Testing this function consists of ensuring the geolocation functions accurately restrict play based on the rules mandated by the location from which the player is playing. 

Regulatory commissions, jurisdictions and associations

Compliance testing is also called jurisdictional testing. Each jurisdiction has its own rules, regulations, and guidelines (also known as regulatory or jurisdictional specifications or rules) that must be tested. This testing is usually performed by an ITL. 

In the United States, there are over four hundred regulators and jurisdictions. Canada has at least one per province. South America has at least one jurisdiction per country that has legalised gaming. Europe, Asia and Africa also usually have one jurisdiction per country. Germany has lottery companies organised by federal state. Australia has at least one per state. Within these jurisdictions, there is usually an organisation that is responsible for issuing licences and regulating the licensees, i.e., the people or organisations that hold a licence. These organisations are typically known as licensing authorities. 

Every jurisdiction controls the potential manufacturers, who need a licence to operate in that jurisdiction. Manufacturers cannot legally operate in any jurisdiction where they do not hold a licence. If a product fails compliance testing, it must be fixed and returned to the ITL for certification testing until it passes 100% of the mandatory certification tests. The product may be returned many times before it passes the compliance tests. 

Before gaming products are ready for compliance testing, a full range of gaming QA testing must occur. Some examples of test types and test techniques used in the gaming industry include exploratory testing, functional testing, regression testing, pre-compliance testing, system integration testing, performance testing, penetration testing and failover testing.

Gaming Industry Metrics

Background 

Gaming industry testing uses many of the common test metrics. However, there are a few that are specific to the gaming industry. 

First pass percentage

First pass percentage identifies the percentage of games that receive certification from the ITL on the first submission of the product. 

The importance of receiving a first pass for a gaming product is related to both product cost and its time-to-market. If the product does not receive a first pass, there are extra costs for additional development, testing and product certification. A gaming product that does not receive a first pass is delayed from entering the market until it is certified. 

Escaped compliance defects

These metrics measure data relating to escaped defects that do not comply with jurisdictional rules or regulations and are found by the ITL or in the field. 

The resubmit factor is the number of times a game must be resubmitted to the ITL to pass certification testing. For example, if on average each game is resubmitted 4.5 times to achieve certification, the resubmit factor would be equal to 4.5. 

The number of revocations tracks how many games have been pulled from the field per time period, due to escaped compliance defects. For example, if two games have been removed from the field in a year, that would mean two revocations for the year. If a jurisdiction asks for a game to be removed due to an escaped compliance defect the manufacturer has a limited amount of time to remove the game. 
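
A small sketch of how these metrics might be computed from a submission log (the records and field names are hypothetical):

```python
# Each record: number of submissions needed for certification and whether
# the game was later pulled from the field (revoked).
submissions = [
    {"game": "A", "attempts": 1, "revoked": False},
    {"game": "B", "attempts": 3, "revoked": False},
    {"game": "C", "attempts": 1, "revoked": True},
]

first_pass_pct = 100 * sum(s["attempts"] == 1 for s in submissions) / len(submissions)
resubmit_factor = sum(s["attempts"] - 1 for s in submissions) / len(submissions)
revocations = sum(s["revoked"] for s in submissions)

print(f"First pass: {first_pass_pct:.1f}%")       # 66.7%
print(f"Resubmit factor: {resubmit_factor:.2f}")  # 0.67
print(f"Revocations: {revocations}")              # 1
```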

These two metrics are important because escaped defects in a jurisdiction can put a manufacturer’s right to operate in that jurisdiction at risk, negatively impact its brand, and cost it revenue while the EGM or table game is not working on the casino floor. There are a fixed number of EGMs and table games in any casino, and manufacturers fight amongst themselves for floor space, so a revocation might also mean that a manufacturer loses that floor space to a competitor. 

Gaming Software Development Lifecycle

The gaming software development lifecycle overview

The Gaming Software Development Lifecycle follows the sequential development model. 

Game Concept and Design is the first phase of the gaming software development lifecycle. It starts with a game idea that is storyboarded and reviewed. Game and sound designers, artists, video and gaming experts, software architects, game developers, and gaming jurisdictional experts create a game prototype. The prototype is then scrutinised for innovation and playability by the targeted audience focus group. This group may be composed of internal (IT professionals), external (non-employees, sometimes non-IT professionals), or a mix of both resources. The Game Concept and Design phase is an iterative process. Its ultimate deliverables are documents which become the blueprint for the development team, artists, mathematicians, and sound designers. 

The Game Concept and Design documents include the following: 

  • Game Concept 
  • Game and Technical Design

The Alpha phase, not to be confused with alpha testing, is next. During this phase, game play functionality is developed and implemented, math functionality is completed, video and audio components are partially finished, and the game contains the major features. Black-box testing occurs, especially functional testing, usability testing, exploratory testing, regression testing, math testing, and return-to-player (RTP) testing.

The Code Complete phase is next. All features, audio, video and math components are finalised. At this phase, code is no longer added to the game, unless a change is needed to fix defects. Standard black-box and white-box testing is typically performed at this phase. The emphasis is on test automation, testing for memory leaks, confirmation testing, and regression testing.

The Beta Build phase, not to be confused with beta testing, continues until no failures occur that prevent the game from being certified. Pre-certification testing is performed by the internal gaming quality assurance test team to assess the game against the requirements of each jurisdiction. This phase is not a formal certification test cycle; it is a precursor to the ITL certification testing. Any defects discovered at this time are corrected, the new builds are tested, and regression tests are performed. 

The Release Build phase produces the build that is sent to the ITLs to ensure that the game complies with the requirements of each required jurisdiction. This build receives the final certification sign-off, which allows the game to be sent to casinos or made available online. If the game fails this certification, it is sent back to the game developer and the process starts over. 

The role of the independent test lab (ITL)

Once the pre-certification phase is completed by the machine manufacturer, the game is ready to be certified by an ITL (also known as an Authorised Test Facility). If the game will be played in North America and in Australia, it must be tested for all applicable jurisdictions, which means approximately 450 jurisdictions for these two parts of the world.

Once the ITL has tested the game for all applicable jurisdictions, if it fails in any of them, the game is returned to the machine manufacturer or game developer, who makes the changes in the game or in the EGM and returns it for another ITL certification test.

The only way to be an accredited ITL is to be accepted by each gaming regulatory commission. This is a lengthy and costly process and thus there are only a few ITLs who can certify games world-wide. A few of the jurisdictions have government-based certification test labs that play the role of the ITL. 

The role of regulatory commissions

Once the ITL has certified a game, the regulatory commission allows the game to be played in all casinos in their jurisdiction. However, the regulatory commission will revoke or pull a game from all its casinos if a major field issue arises. A major field issue is usually a defect that stops the game from playing, provides erroneous payouts or deviates from any of the rules of engagement that are required for certification. The machine manufacturer will have to immediately remove that game from every installation in the jurisdiction.

There are also minor field issues that will force the machine manufacturer to modify a game that is in the field, within a given timeframe. In this case the game must be certified again at an ITL and approved by the regulatory commission. 

Acceptance Testing Introduction and Basics

Fundamental Relationships

While it is certainly true that the roles and responsibilities of the tester and the business analyst are different, it is also true that their activities are complementary; work done by one group may greatly affect, either positively or negatively, that of the other. This is especially true in acceptance testing which is performed to assess the system’s readiness for deployment and its use by the customer (end-user). Good collaboration between business analysts and testers is particularly important for a proper consideration of the business implications at this test level.

Business Goals, Business Needs and Requirements

Business analysts first must understand the organisation’s overall business goals and identify current business processes and stakeholders. Once that is done, they describe specific business needs and determine a business case that addresses those needs. Once this high-level work has been completed, requirements can be elicited for the business solution that shall be developed.

Business goals, business needs, business requirements, and product requirements describe, at different levels of abstraction, what shall be achieved. In Agile development, the same principles apply, but different terms may be used (for example features and user stories).

In this document, the term “requirements” refers both to business requirements and to product requirements.

Requirements / User Stories, Acceptance Criteria and Acceptance Tests

During requirements elicitation, business analysts and testers (possibly together with developers) should begin to create specific acceptance criteria and develop acceptance tests as a joint effort. This ensures that there is a mutual understanding of what “acceptable” means from the business, development, and testing perspectives, right from the beginning of the project.

Acceptance criteria relate directly to a specific requirement or user story. They are either part of the detailed description or an attribute of the related requirement. If user stories are used, acceptance criteria are part of the user story’s definition and extend the story. 

In all cases, acceptance criteria are measurable criteria, formulated as a statement (or a set of statements), which can be either true or false. They are used to check whether a requirement or user story has been implemented as expected. Acceptance criteria represent the test conditions which determine “what” to test. They do not contain the detailed test procedures. 

Acceptance test cases are derived from acceptance criteria. These tests specify how the verification of the acceptance criteria should be performed. 
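
As a brief, hypothetical illustration of this derivation: for the user story “As a player, I can withdraw funds from my wallet”, one acceptance criterion might be the true/false statement “a withdrawal larger than the wallet balance is rejected”, and the acceptance test below specifies how that criterion is verified.

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Illustrative system under test for the criterion above."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdrawal_exceeding_balance_is_rejected():
    # Acceptance criterion: a withdrawal larger than the balance is rejected.
    with pytest.raises(ValueError):
        withdraw(balance=50.0, amount=60.0)

def test_withdrawal_within_balance_reduces_it():
    assert withdraw(balance=50.0, amount=20.0) == 30.0
```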

The Importance of the Quality of the Requirements

If acceptance criteria and tests are based on requirements, user stories, and/or acceptance criteria that are vague or ambiguous, it is likely that testers will make assumptions about stakeholder expectations and business needs. In this case, the resulting acceptance tests may be flawed. This will lead to rework or, even worse, the running of invalid tests, thus creating unnecessary costs as well as risks and uncertainty about product quality. 

It is critical for testers to work closely with business analysts to make sure that requirements are clear and well understood by all stakeholders concerned. Ambiguities should be resolved and assumptions should be clarified so that the resulting acceptance tests are valid and are a meaningful way to determine the product’s readiness for release. 

In Agile development, the INVEST criteria define a set of criteria, or checklist, to assess the quality of a user story. These may be used by business analysts / product owners, developers, and testers to ensure the quality of user stories.

Business Analysis and Acceptance Testing

Too often, business analysts and testers work in their own separate silos, which can lead to misunderstandings about business and customer expectations. Those misunderstandings may stay hidden until the release approaches. By taking advantage of the complementary skills and by working together, business analysts and testers can positively affect the development process. This can be accomplished both by considering acceptance criteria and acceptance testing as early as possible and by coordinating efforts to make sure that the product has been tested appropriately prior to release at acceptance test level. 

Relationship between Business Analysis and Testing Activities

The following are the main elements of the business analysis activities: 

  • Strategy definition
  • Management of the business analysis processes
  • Requirements engineering in business analysis
  • Solution evaluation and optimisation

The business analyst is responsible for identifying business needs of stakeholders and for determining solutions to business problems with the aim of introducing change which adds value to the business. An important aspect of the business analyst’s role is to establish consensus between quality engineers, testers, developers, system integrators, product managers and project managers.

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design
  • Test implementation
  • Test execution
  • Test completion

Quite a few of the associated activities and tasks relate to both business analysis and testing. The following examples illustrate the relationship between the two disciplines in the context of acceptance testing:

Requirements engineering in business analysis vs. test planning, test analysis and test design:

  • During the requirements engineering activities in business analysis, business analysts prepare detailed business and product requirements. These requirements are part of the test basis for the test planning, test analysis and test design activities, as testers define their objectives and plan their work, evaluate the specifications and requirements, identify test conditions and design test cases and test procedures.
  • Testers can contribute to the definition and verification of acceptance criteria as part of test analysis and test design activities. Working together, the two roles ascertain that there is proper understanding of the solution and agree on the appropriate approach to acceptance testing.
  • When requirements change, business analysts and testers can work together to assess the impact of the changes.

Solution evaluation in business analysis vs. test implementation, test execution and test completion:

  • During the solution evaluation phase in business analysis, business analysts support test implementation and test execution activities. They review the testers’ procedures/scripts, clarify issues and potentially help with creation of test data to support business-related tests.
  • Business analysts can assist with the implementation and execution of the acceptance tests. They may also support testers by evaluating test results. In addition, they may assist testers in test completion activities.

There is a strong and symbiotic relationship between the two roles and their respective activities, starting at the very beginning of a project and continuing until acceptance or release of the solution.

Collaboration between Business Analysts and Testers in Acceptance Testing

The common goal for business analysts and testers is to support the production of products with the highest possible value for the customer. Given their position within the organisation, business analysts and testers have various opportunities to collaborate during the acceptance testing activities described in the previous section. Apart from joint discussions and reviews of generated artefacts, business analysts and testers collaborate in other areas. For example, collaboration on test planning based on risk analysis is a good opportunity to ensure that the appropriate test cases will be developed and prioritised.

In addition to the direct benefits of working together and supporting each other’s efforts during acceptance testing, there is an important opportunity to cross-train team members. The more testers know about business needs and stakeholder requirements, and the more business analysts know about structured testing, the more likely the two groups will understand and appreciate each other’s work and better collaborate within the project.

How Acceptance Testing Can Drive the Development Process: ATDD and BDD

The wide acceptance of Agile software development practices has influenced how acceptance testing relates to requirements elicitation and other business analysis activities. In sequential lifecycle models, acceptance test analysis, design, and implementation are activities to be handled by the testers after the requirements are finalised. With the Agile lifecycle model, acceptance criteria and acceptance test cases are created during requirements analysis, requirements refinement sessions, and product backlog refinement. This allows the implementation of the “Early Testing” principle by using the design of test cases as part of the requirements definition activities. 

In the following two approaches, acceptance test analysis and design are formally part of the requirements engineering process: 

  • In Acceptance Test-Driven Development (ATDD), acceptance tests are produced collaboratively during requirements analysis by business analysts, product owners, testers and developers.
  • Behaviour-Driven Development (BDD) uses a domain-specific scripting language, Gherkin, that is based on natural language statements. The requirements are defined in a ‘Given – When – Then’ format. These requirements become the acceptance test cases and also serve as the basis for test automation.
Both of these approaches engage the entire Agile team and help to focus the development efforts on the business goals. The approaches also treat the acceptance test cases as living documentation of the product because they can be read and understood by business analysts and other stakeholders. Acceptance test cases represent scenarios of usage of the product.

The two approaches are similar and the two terms are sometimes used interchangeably. In practice, BDD is associated with the use of Gherkin to support writing acceptance tests, while ATDD relies on different forms of textual or graphic acceptance test design. For example, the graphical representation of application workflows may be used to implement a visual ATDD approach.

Agile Testing Methods, Techniques, and Tools

Agile Testing Methods

There are certain testing practices that can be followed in every development project (agile or not) to produce quality products. These include writing tests in advance to express proper behaviour, focusing on early defect prevention, detection, and removal, and ensuring that the right test types are run at the right time and as part of the right test level. Agile practitioners aim to introduce these practices early. Testers in Agile projects play a key role in guiding the use of these testing practices throughout the lifecycle. 

Test-Driven Development, Acceptance Test-Driven Development, and Behaviour-Driven Development

Test-driven development, acceptance test-driven development, and behaviour-driven development are three complementary techniques in use among Agile teams to carry out testing across the various test levels. Each technique is an example of a fundamental principle of testing, the benefit of early testing and QA activities, since the tests are defined before the code is written. 

Test-Driven Development

Test-driven development (TDD) is used to develop code guided by automated test cases. The process for test-driven development is:

  • Add a test that captures the programmer’s concept of the desired functioning of a small piece of code
  • Run the test, which should fail since the code doesn’t exist
  • Write the code and run the test in a tight loop until the test passes
  • Refactor the code after the test is passed, re-running the test to ensure it continues to pass against the refactored code
  • Repeat this process for the next small piece of code, running the previous tests as well as the added tests

The tests written are primarily unit level and are code-focused, though tests may also be written at the integration or system levels. Test-driven development gained its popularity through Extreme Programming, but is also used in other Agile methodologies and sometimes in sequential lifecycles. It helps developers focus on clearly defined expected results. The tests are automated and are used in continuous integration.
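
A single red-green-refactor micro-cycle might look like the following sketch (pytest-style, with an invented payout function; the article does not prescribe a tool):

```python
# Red: write the test first -- it fails because payout() does not exist yet.
def test_payout_truncates_to_whole_credits():
    assert payout(credits=3, multiplier=2.5) == 7   # 7.5 truncated to 7

# Green: write just enough code to make the test pass.
def payout(credits: int, multiplier: float) -> int:
    return int(credits * multiplier)

# Refactor: clean up the code, re-running the test to keep it passing,
# then repeat the cycle for the next small piece of code.
```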

Acceptance Test-Driven Development

Acceptance test-driven development defines acceptance criteria and tests during the creation of user stories. Acceptance test-driven development is a collaborative approach that allows every stakeholder to understand how the software component has to behave and what the developers, testers, and business representatives need to ensure this behaviour.

Acceptance test-driven development creates reusable tests for regression testing. Specific tools support creation and execution of such tests, often within the continuous integration process. These tools can connect to data and service layers of the application, which allows tests to be executed at the system or acceptance level. Acceptance test-driven development allows quick resolution of defects and validation of feature behaviour. It helps determine if the acceptance criteria are met for the feature.

Behaviour-Driven Development

Behaviour-driven development allows a developer to focus on testing the code based on the expected behaviour of the software. Because the tests are based on the exhibited behaviour from the software, the tests are generally easier for other team members and stakeholders to understand.

Specific behaviour-driven development frameworks can be used to define acceptance criteria based on the given/when/then format:

Given some initial context,

When an event occurs,

Then ensure some outcomes. 

From these requirements, the behaviour-driven development framework generates code that can be used by developers to create test cases. Behaviour-driven development helps the developer collaborate with other stakeholders, including testers, to define accurate unit tests focused on business needs. 
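
For example, a ‘Given – When – Then’ scenario can drive step definitions like the ones sketched below. The sketch uses the behave framework for Python, which is one common choice rather than one mandated by the approach, and the scenario itself is invented:

```python
# features/steps/login_steps.py -- step definitions matching this
# hypothetical feature file (features/login.feature):
#
#   Scenario: Successful login
#     Given a registered player named "alice"
#     When the player logs in with a valid password
#     Then the lobby screen is shown

from behave import given, when, then

@given('a registered player named "{name}"')
def step_registered_player(context, name):
    context.player = {"name": name, "registered": True}

@when("the player logs in with a valid password")
def step_login(context):
    context.logged_in = context.player["registered"]

@then("the lobby screen is shown")
def step_lobby(context):
    assert context.logged_in
```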

The Test Pyramid

A software system may be tested at different levels. Typical test levels are, from the base of the pyramid to the top, unit, integration, system, and acceptance. The test pyramid emphasises having a large number of tests at the lower levels (bottom of the pyramid) and, as development moves to the upper levels, the number of tests decreases (top of the pyramid). Usually unit and integration level tests are automated and are created using API-based tools. At the system and acceptance levels, the automated tests are created using GUI-based tools. The test pyramid concept is based on the testing principle of early QA and testing (i.e., eliminating defects as early as possible in the lifecycle). 

Testing Quadrants, Test Levels, and Testing Types

Testing quadrants align the test levels with the appropriate test types in the Agile methodology. The testing quadrants model, and its variants, helps to ensure that all important test types and test levels are included in the development lifecycle. This model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives. 

In the testing quadrants, tests can be business (user) or technology (developer) facing. Some tests support the work done by the Agile team and confirm software behaviour. Other tests can verify the product. Tests can be fully manual, fully automated, a combination of manual and automated, or manual but supported by tools. The four quadrants are as follows: 

  • Quadrant Q1 is unit level, technology facing, and supports the developers. This quadrant contains unit tests. These tests should be automated and included in the continuous integration process.
  • Quadrant Q2 is system level, business facing, and confirms product behaviour. This quadrant contains functional tests, examples, story tests, user experience prototypes, and simulations. These tests check the acceptance criteria and can be manual or automated. They are often created during the user story development and thus improve the quality of the stories. They are useful when creating automated regression test suites.
  • Quadrant Q3 is system or user acceptance level, business facing, and contains tests that critique the product, using realistic scenarios and data. This quadrant contains exploratory testing, scenarios, process flows, usability testing, user acceptance testing, alpha testing, and beta testing. These tests are often manual and are user-oriented.
  • Quadrant Q4 is system or operational acceptance level, technology facing, and contains tests that critique the product. This quadrant contains performance, load, stress, and scalability tests, security tests, maintainability, memory management, compatibility and interoperability, data migration, infrastructure, and recovery testing. These tests are often automated.

During any given iteration, tests from any or all quadrants may be required. The testing quadrants apply to dynamic testing rather than static testing.

The Role of a Tester

Throughout this article, general reference has been made to Agile methods and techniques, and the role of a tester within various Agile lifecycles. This subsection looks specifically at the role of a tester in a project following a Scrum lifecycle. 

Teamwork 

Teamwork is a fundamental principle in Agile development. Agile emphasises the whole-team approach consisting of developers, testers, and business representatives working together. The following are organisational and behavioural best practices in Scrum teams:

  • Cross-functional: Each team member brings a different set of skills to the team. The team works together on test strategy, test planning, test specification, test execution, test evaluation, and test results reporting.
  • Self-organising: The team may consist only of developers, but, as noted before, ideally there would be one or more testers.
  • Co-located: Testers sit together with the developers and the product owner.
  • Collaborative: Testers collaborate with their team members, other teams, the stakeholders, the product owner, and the Scrum Master.
  • Empowered: Technical decisions regarding design and testing are made by the team as a whole (developers, testers, and Scrum Master), in collaboration with the product owner and other teams if needed.
  • Committed: The tester is committed to question and evaluate the product’s behaviour and characteristics with respect to the expectations and needs of the customers and users.
  • Transparent: Development and testing progress is visible on the Agile task board.
  • Credible: The tester must ensure the credibility of the strategy for testing, its implementation, and execution, otherwise the stakeholders will not trust the test results. This is often done by providing information to the stakeholders about the testing process.
  • Open to feedback: Feedback is an important aspect of being successful in any project, especially in Agile projects. Retrospectives allow teams to learn from successes and from failures.
  • Resilient: Testing must be able to respond to change, like all other activities in Agile projects.

These best practices maximise the likelihood of successful testing in Scrum projects.

Sprint Zero 

Sprint zero is the first iteration of the project where many preparation activities take place. The tester collaborates with the team on the following activities during this iteration:

  • Identify the scope of the project (i.e., the product backlog)
  • Create an initial system architecture and high-level prototypes
  • Plan, acquire, and install needed tools (e.g., for test management, defect management, test automation, and continuous integration)
  • Create an initial test strategy for all test levels, addressing (among other topics) test scope, technical risks, test types, and coverage goals
  • Perform an initial quality risk analysis
  • Define test metrics to measure the test process, the progress of testing in the project, and product quality
  • Specify the definition of “done”
  • Create the task board
  • Define when to continue or stop testing before delivering the system to the customer

Sprint zero sets the direction for what testing needs to achieve and how testing needs to achieve it throughout the sprints.

Integration 

In Agile projects, the objective is to deliver customer value on a continuous basis (preferably in every sprint). To enable this, the integration strategy should consider both design and testing. To enable a continuous testing strategy for the delivered functionality and characteristics, it is important to identify all dependencies between underlying functions and features.

Test Planning

Since testing is fully integrated into the Agile team, test planning should start during the release planning session and be updated during each sprint. Test planning for the release and for each sprint should address the same core concerns, such as scope, risks, resources, and schedule.

Sprint planning results in a set of tasks to put on the task board, where each task should have a length of one or two days of work. In addition, any testing issues should be tracked to keep a steady flow of testing.

Agile Testing Practices

Many practices may be useful for testers in a Scrum team, some of which include: 

  • Pairing: Two team members (e.g., a tester and a developer, two testers, or a tester and a product owner) sit together at one workstation to perform a testing or other sprint task.
  • Incremental test design: Test cases and charters are gradually built from user stories and other test bases, starting with simple tests and moving toward more complex ones.
  • Mind mapping: Mind mapping is a useful tool when testing. For example, testers can use mind mapping to identify which test sessions to perform, to show test strategies, and to describe test data.

These practices are in addition to other practices discussed in this article and previous articles on the basics pages.

Assessing Quality Risks and Estimating Test Effort

A typical objective of testing in all projects, Agile or traditional, is to reduce the risk of product quality problems to an acceptable level prior to release. Testers in Agile projects can use the same types of techniques used in traditional projects to identify quality risks (or product risks), assess the associated level of risk, estimate the effort required to reduce those risks sufficiently, and then mitigate those risks through test design, implementation, and execution. However, given the short iterations and rate of change in Agile projects, some adaptations of those techniques are required.

Assessing Quality Risks in Agile Projects

One of the many challenges in testing is the proper selection, allocation, and prioritisation of test conditions. This includes determining the appropriate amount of effort to allocate in order to cover each condition with tests, and sequencing the resulting tests in a way that optimises the effectiveness and efficiency of the testing work to be done. Risk identification, analysis, and risk mitigation strategies can be used by the testers in Agile teams to help determine an acceptable number of test cases to execute, although many interacting constraints and variables may require compromises.

Risk is the possibility of a negative or undesirable outcome or event. The level of risk is found by assessing the likelihood of occurrence of the risk and the impact of the risk. When the primary effect of the potential problem is on product quality, potential problems are referred to as quality risks or product risks. When the primary effect of the potential problem is on project success, potential problems are referred to as project risks or planning risks.

In Agile projects, quality risk analysis takes place at two points:

  • Release planning: business representatives who know the features in the release provide a high-level overview of the risks, and the whole team, including the tester(s), may assist in the risk identification and assessment.
  • Iteration planning: the whole team identifies and assesses the quality risks.

Examples of quality risks for a system include:

  • Incorrect calculations in reports (a functional risk related to accuracy)
  • Slow response to user input (a non-functional risk related to efficiency and response time)
  • Difficulty in understanding screens and fields (a non-functional risk related to usability and understandability)

As mentioned earlier, an iteration starts with iteration planning, which culminates in estimated tasks on a task board. These tasks can be prioritised in part based on the level of quality risk associated with them. Tasks associated with higher risks should start earlier and involve more testing effort. Tasks associated with lower risks should start later and involve less testing effort.

An example of how the quality risk analysis process in an Agile project may be carried out during iteration planning is outlined in the following steps:

  1. Gather the Agile team members together, including the tester(s).
  2. List all the backlog items for the current iteration (e.g., on a task board).
  3. Identify the quality risks associated with each item, considering all relevant quality characteristics.
  4. Assess each identified risk, which includes two activities: categorising the risk and determining its level of risk based on the impact and the likelihood of defects.
  5. Determine the extent of testing proportional to the level of risk.
  6. Select the appropriate test technique(s) to mitigate each risk, based on the risk, the level of risk, and the relevant quality characteristic.

The tester then designs, implements, and executes tests to mitigate the risks. This includes the totality of features, behaviours, quality characteristics, and attributes that affect customer, user, and stakeholder satisfaction. 

Throughout the project, the team should remain aware of additional information that may change the set of risks and/or the level of risk associated with known quality risks. Periodic adjustment of the quality risk analysis, which results in adjustments to the tests, should occur. Adjustments include identifying new risks, re-assessing the level of existing risks, and evaluating the effectiveness of risk mitigation activities.

Quality risks can also be mitigated before test execution starts. For example, if problems with the user stories are found during risk identification, the project team can thoroughly review user stories as a mitigating strategy.

Estimating Testing Effort Based on Content and Risk

During release planning, the Agile team estimates the effort required to complete the release. The estimate addresses the testing effort as well. A common estimation technique used in Agile projects is planning poker, a consensus-based technique. The product owner or customer reads a user story to the estimators. Each estimator has a deck of cards with values similar to the Fibonacci sequence (i.e., 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …), or any other progression of choice (e.g., shirt sizes ranging from extra-small to extra-extra-large). The values represent the number of story points, effort days, or other units in which the team estimates. The Fibonacci sequence is recommended because the numbers in the sequence reflect that uncertainty grows proportionally with the size of the story. A high estimate usually means that the story is not well understood or should be broken down into multiple smaller stories. 

The estimators discuss the feature, and ask questions of the product owner as needed. Aspects such as development and testing effort, complexity of the story, and scope of testing play a role in the estimation. Therefore, it is advisable to include the risk level of a backlog item, in addition to the priority specified by the product owner, before the planning poker session is initiated. When the feature has been fully discussed, each estimator privately selects one card to represent his or her estimate. All cards are then revealed at the same time. If all estimators selected the same value, that becomes the estimate. If not, the estimators discuss the differences in estimates after which the poker round is repeated until agreement is reached, either by consensus or by applying rules (e.g., use the median, use the highest score) to limit the number of poker rounds. These discussions ensure a reliable estimate of the effort needed to complete product backlog items requested by the product owner and help improve collective knowledge of what has to be done. 

Techniques in Agile Projects

Many of the test techniques and testing levels that apply to traditional projects can also be applied to Agile projects. However, for Agile projects, there are some specific considerations and variances in test techniques, terminologies, and documentation that should be considered. 

Acceptance Criteria, Adequate Coverage, and Other Information for Testing

Agile projects outline initial requirements as user stories in a prioritised backlog at the start of the project. Initial requirements are short and usually follow a predefined format. Non-functional requirements, such as usability and performance, are also important and can be specified as unique user stories or connected to other functional user stories. Non-functional requirements may follow a predefined format or standard, such as [ISO25000], or an industry specific standard.

The user stories serve as an important test basis. Other possible test bases include:

  • Experience from previous projects
  • Existing functions, features, and quality characteristics of the system
  • Code, architecture, and design
  • User profiles (context, system configurations, and user behaviour)
  • Information on defects from existing and previous projects
  • A categorisation of defects in a defect taxonomy
  • Applicable standards (e.g., [DO-178B] for avionics software)
  • Quality risks

During each iteration, developers create code which implements the functions and features described in the user stories, with the relevant quality characteristics, and this code is verified and validated via acceptance testing. To be testable, acceptance criteria should address the following topics where relevant:

  • Functional behaviour: The externally observable behaviour with user actions as input operating under certain configurations.
  • Quality characteristics: How the system performs the specified behaviour. The characteristics may also be referred to as quality attributes or non-functional requirements. Common quality characteristics are performance, reliability, usability, etc.
  • Scenarios (use cases): A sequence of actions between an external actor (often a user) and the system, in order to accomplish a specific goal or business task.
  • Business rules: Activities that can only be performed in the system under certain conditions defined by outside procedures and constraints (e.g., the procedures used by an insurance company to handle insurance claims).
  • External interfaces: Descriptions of the connections between the system to be developed and the outside world. External interfaces can be divided into different types (user interface, interface to other systems, etc.).
  • Constraints: Any design and implementation constraint that will restrict the options for the developer. Devices with embedded software must often respect physical constraints such as size, weight, and interface connections.
  • Data definitions: The customer may describe the format, data type, allowed values, and default values for a data item in the composition of a complex business data structure (e.g., the ZIP code in a US mail address).

In addition to the user stories and their associated acceptance criteria, other information is relevant for the tester, including:

  • How the system is supposed to work and be used
  • The system interfaces that can be used/accessed to test the system
  • Whether current tool support is sufficient
  • Whether the tester has enough knowledge and skill to perform the necessary tests

Testers will often discover the need for additional information (e.g., code coverage) throughout the iterations and should work collaboratively with the rest of the Agile team members to obtain that information. Relevant information plays a part in determining whether a particular activity can be considered done. This concept of the definition of done is critical in Agile projects and applies in a number of different ways as discussed in the following sub-subsections. 

Test Levels

Each test level has its own definition of done. The following list gives examples that may be relevant for the different test levels.

  • Unit testing
    • 100% decision coverage where possible, with careful reviews of any infeasible paths
    • Static analysis performed on all code
    • No unresolved major defects (ranked based on priority and severity)
    • No known unacceptable technical debt remaining in the design and the code
    • All code, unit tests, and unit test results reviewed
    • All unit tests automated
    • Important characteristics are within agreed limits (e.g., performance)
  • Integration testing
    • All functional requirements tested, including both positive and negative tests, with the number of tests based on size, complexity, and risks
    • All interfaces between units tested
    • All quality risks covered according to the agreed extent of testing
    • No unresolved major defects (prioritised according to risk and importance)
    • All defects found are reported
    • All regression tests automated, where possible, with all automated tests stored in a common repository
  • System testing
    • End-to-end tests of user stories, features, and functions
    • All user personas covered
    • The most important quality characteristics of the system covered (e.g., performance, robustness, reliability)
    • Testing done in a production-like environment(s), including all hardware and software for all supported configurations, to the extent possible
    • All quality risks covered according to the agreed extent of testing
    • All regression tests automated, where possible, with all automated tests stored in a common repository
    • All defects found are reported and possibly fixed
    • No unresolved major defects (prioritised according to risk and importance)

User Story

The definition of done for user stories may be determined by the following criteria: 

  • The user stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
  • All the elements of the user story, including the user story acceptance tests, have been specified, reviewed, and completed
  • Tasks necessary to implement and test the selected user stories have been identified and estimated by the team

Feature

The definition of done for features, which may span multiple user stories or epics, may include:

  • All constituent user stories, with acceptance criteria, are defined and approved by the customer
  • The design is complete, with no known technical debt
  • The code is complete, with no known technical debt or unfinished refactoring
  • Unit tests have been performed and have achieved the defined level of coverage
  • Integration tests and system tests for the feature have been performed according to the defined coverage criteria
  • No major defects remain to be corrected
  • Feature documentation is complete, which may include release notes, user manuals, and on-line help functions

Iteration 

The definition of done for the iteration may include the following:

  • All features for the iteration are ready and individually tested according to the feature level criteria
  • Any non-critical defects that cannot be fixed within the constraints of the iteration added to the product backlog and prioritised 
  • Integration of all features for the iteration completed and tested 
  • Documentation written, reviewed, and approved 

At this point, the software is potentially releasable because the iteration has been successfully completed, but not all iterations result in a release. 

Release

The definition of done for a release, which may span multiple iterations, may include the following areas:

  • Coverage: All relevant test basis elements for all contents of the release have been covered by testing. The adequacy of the coverage is determined by what is new or changed, its complexity and size, and the associated risks of failure.
  • Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found compared to the number of user stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., their severity and priority) are understood and acceptable; and the residual level of risk associated with each identified quality risk is understood and acceptable.
  • Time: If the pre-determined delivery date has been reached, the business considerations associated with releasing and not releasing need to be considered.
  • Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance cost should be considerably lower than the expected total sales of the product). The main part of the lifecycle cost often comes from maintenance after the product has been released, due to the number of defects escaping to production. 

Applying Acceptance Test-Driven Development

Acceptance test-driven development is a test-first approach. Test cases are created prior to implementing the user story. The test cases are created by the Agile team, including the developer, the tester, and the business representatives, and may be manual or automated. The first step is a specification workshop where the user story is analysed, discussed, and written by developers, testers, and business representatives. Any incompleteness, ambiguities, or errors in the user story are fixed during this process. 

The next step is to create the tests. This can be done by the team together or by the tester individually. In any case, an independent person such as a business representative validates the tests. The tests are examples that describe the specific characteristics of the user story. These examples will help the team implement the user story correctly. Since examples and tests are the same, these terms are often used interchangeably. The work starts with basic examples and open questions. 

Typically, the first tests are the positive tests, confirming the correct behaviour without exception or error conditions, comprising the sequence of activities executed if everything goes as expected. After the positive path tests are done, the team should write negative path tests and cover non-functional attributes as well (e.g., performance, usability). Tests are expressed in a way that every stakeholder is able to understand, containing sentences in natural language involving the necessary preconditions, if any, the inputs, and the related outputs. 

The examples must cover all the characteristics of the user story and should not add to the story. This means that an example should not exist which describes an aspect of the user story not documented in the story itself. In addition, no two examples should describe the same characteristics of the user story. 
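
To make this concrete, the sketch below shows a pair of ATDD-style acceptance tests: a positive path test confirming the expected behaviour and a negative path test covering an error condition, with the Given/When/Then structure visible in the comments. This is a minimal illustration using pytest; the user story (applying a discount code), the Checkout class, and its API are hypothetical and exist only to keep the example self-contained.

```python
# ATDD-style acceptance tests, written before the feature is implemented.
# The Checkout class below stands in for the real system under test.
import pytest

class Checkout:
    """Hypothetical checkout, included only to make the example runnable."""
    VALID_CODES = {"SAVE10": 0.10}

    def __init__(self, total):
        self.total = total

    def apply_code(self, code):
        if code not in self.VALID_CODES:
            raise ValueError("unknown discount code")
        self.total *= 1 - self.VALID_CODES[code]

def test_valid_code_reduces_the_total():
    # Given a checkout with a total of 100.00
    checkout = Checkout(100.00)
    # When the customer applies a valid discount code
    checkout.apply_code("SAVE10")
    # Then the total is reduced accordingly
    assert checkout.total == pytest.approx(90.00)

def test_unknown_code_is_rejected():
    # Given a checkout, when an unknown code is applied,
    # then it is rejected (a negative path test)
    with pytest.raises(ValueError):
        Checkout(100.00).apply_code("BOGUS")
```

Because tests like these read as examples of the story’s behaviour, business representatives can review them before any production code is written.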

Functional and Non-Functional Black Box Test Design

In Agile testing, many tests are created by testers concurrently with the developers’ programming activities. Just as the developers are programming based on the user stories and acceptance criteria, so are the testers creating tests based on user stories and their acceptance criteria. (Some tests, such as exploratory tests and other experience-based tests, are created later, during test execution.) Testers can apply traditional black box test design techniques such as equivalence partitioning, boundary value analysis, decision tables, and state transition testing to create these tests. For example, boundary value analysis could be used to select test values when a customer is limited in the number of items they may select for purchase. 
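
As a hedged illustration of that last example, suppose the purchase limit is 10 items (an assumed value). Boundary value analysis selects test values on and immediately either side of each boundary, which maps naturally onto a parameterised test:

```python
# Boundary value analysis for a hypothetical purchase limit of 1 to 10 items.
# Both the limit and the validation function are assumptions for illustration.
import pytest

MAX_ITEMS = 10  # in practice this value comes from the user story

def is_valid_quantity(quantity):
    """Hypothetical validation rule under test."""
    return 1 <= quantity <= MAX_ITEMS

@pytest.mark.parametrize("quantity, expected", [
    (0, False),   # just below the lower boundary
    (1, True),    # on the lower boundary
    (2, True),    # just above the lower boundary
    (9, True),    # just below the upper boundary
    (10, True),   # on the upper boundary
    (11, False),  # just above the upper boundary
])
def test_quantity_boundaries(quantity, expected):
    assert is_valid_quantity(quantity) == expected
```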

In many situations, non-functional requirements can be documented as user stories. Black box test design techniques (such as boundary value analysis) can also be used to create tests for non-functional quality characteristics. The user story might contain performance or reliability requirements. For example, a given execution may not exceed a time limit, or a given function may fail no more than a specified number of times. 
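
Such a performance criterion can be checked with an ordinary automated test. The sketch below assumes a hypothetical search operation and an assumed two-second limit taken from the user story’s acceptance criteria:

```python
# A non-functional check derived from a user story's performance criterion.
# search_catalogue() and the 2-second limit are assumptions for illustration.
import time

def search_catalogue(term):
    """Hypothetical operation under test."""
    time.sleep(0.1)  # stands in for real work
    return [term]

def test_search_meets_response_time_criterion():
    start = time.perf_counter()
    search_catalogue("discount")
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0  # acceptance criterion from the user story
```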

Exploratory Testing and Agile Testing

Exploratory testing is important in Agile projects due to the limited time available for test analysis and the limited details of the user stories. In order to achieve the best results, exploratory testing should be combined with other experience-based techniques as part of a reactive testing strategy, blended with other testing strategies such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing. Test strategies and test strategy blending are discussed in the basics pages. 

In exploratory testing, test design and test execution occur at the same time, guided by a prepared test charter. A test charter provides the test conditions to cover during a time-boxed testing session. During exploratory testing, the results of the most recent tests guide the next test. The same white box and black box techniques can be used to design the tests as when performing pre-designed testing. 

A test charter may include the following information (a filled-in sketch follows the list): 

  • Actor: intended user of the system
  • Purpose: the theme of the charter including what particular objective the actor wants to achieve, i.e., the test conditions
  • Setup: what needs to be in place in order to start the test execution
  • Priority: relative importance of this charter, based on the priority of the associated user story or the risk level
  • Reference: specifications (e.g., user story), risks, or other information sources
  • Data: whatever data is needed to carry out the charter
  • Activities: a list of ideas of what the actor may want to do with the system (e.g., “Log on to the system as a super user”) and what would be interesting to test (both positive and negative tests)
  • Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user’s manual)
  • Variations: alternative actions and evaluations to complement the ideas described under activities
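
Putting these fields together, the following is a hypothetical filled-in charter. It is captured here as a simple Python structure purely for illustration; in practice, charters are more often kept in a wiki or a test management tool.

```python
# A hypothetical test charter; every value below is invented for illustration.
charter = {
    "actor": "registered customer",
    "purpose": "explore the discount-code feature at checkout",
    "setup": "test environment with a seeded product catalogue and user accounts",
    "priority": "high, because the associated user story is high risk",
    "reference": "user story US-123 (hypothetical ID), pricing rules",
    "data": "valid, expired, and malformed discount codes",
    "activities": [
        "log on as a registered customer",
        "apply each kind of code to carts of varying sizes",
        "attempt to apply two codes at once (negative test)",
    ],
    "oracle_notes": "compare on-screen totals with the pricing rules in the story",
    "variations": "repeat with a newly registered account and an empty cart",
}
```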

To manage exploratory testing, a method called session-based test management can be used. A session is defined as an uninterrupted period of testing which could last from 60 to 120 minutes. Test sessions include the following:

  • Survey session (to learn how it works)
  • Analysis session (evaluation of the functionality or characteristics)
  • Deep coverage (corner cases, scenarios, interactions)

The quality of the tests depends on the testers’ ability to ask relevant questions about what to test. Examples include the following:

  • What is most important to find out about the system?
  • In what way may the system fail?
  • What happens if…?
  • What should happen when…?
  • Are customer needs, requirements, and expectations fulfilled?
  • Can the system be installed (and removed, if necessary) across all supported upgrade paths?

During test execution, the tester uses creativity, intuition, cognition, and skill to find possible problems with the product. The tester also needs to have good knowledge and understanding of the software under test, the business domain, how the software is used, and how to determine when the system fails.

A set of heuristics can be applied when testing. A heuristic can guide the tester in how to perform the testing and to evaluate the results [Hendrickson]. Examples include:

  • Boundaries
  • CRUD (Create, Read, Update, Delete)
  • Configuration variations
  • Interruptions (e.g., log off, shut down, or reboot)

It is important for the tester to document the process as much as possible. Otherwise, it would be difficult to go back and see how a problem in the system was discovered. The following list provides examples of information that may be useful to document:

  • Test coverage: what input data have been used, how much has been covered, and how much remains to be tested
  • Evaluation notes: observations made during testing, such as whether the system and feature under test seem stable, whether any defects were found, what is planned as the next step based on the current observations, and any other ideas
  • Risk/strategy list: which risks have been covered and which ones remain among the most important ones, will the initial strategy be followed, does it need any changes
  • Issues, questions, and anomalies: any unexpected behaviour; questions regarding the efficiency of the approach; concerns about the test ideas/attempts, test environment, test data, or test scripts; or misunderstandings of the function or the system under test
  • Actual behaviour: recording of actual behaviour of the system that needs to be saved (e.g., video, screen captures, output data files)

The information logged should be captured and/or summarised into some form of status management tools (e.g., test management tools, task management tools, the task board), in a way that makes it easy for stakeholders to understand the current status for all testing that was performed.

Tools in Agile Projects

Tools described in the basics pages are relevant and used by testers on Agile teams. Not all tools are used the same way, and some tools have more relevance for Agile projects than they have in traditional projects. For example, although test management tools, requirements management tools, and incident management tools (defect tracking tools) can be used by Agile teams, some Agile teams opt for an all-inclusive tool (e.g., application lifecycle management or task management) that provides features relevant to Agile development, such as task boards, burn-down charts, and user stories. Configuration management tools are important to testers in Agile teams due to the high number of automated tests at all levels and the need to store and manage the associated automated test artefacts.

In addition to the tools described in the basics pages, testers on Agile projects may also utilise the tools described in the following subsections. These tools are used by the whole team to ensure team collaboration and information sharing, which are key to Agile practices.

Task Management and Tracking Tools

In some cases, Agile teams use physical story/task boards (e.g., whiteboard, cork-board) to manage and track user stories, tests, and other tasks throughout each sprint. Other teams will use application lifecycle management and task management software, including electronic task boards. These tools serve the following purposes:

  • Record stories and their relevant development and test tasks, to ensure that nothing gets lost during a sprint
  • Capture team members’ estimates on their tasks and automatically calculate the effort required to implement a story, to support efficient iteration planning sessions
  • Associate development tasks and test tasks with the same story, to provide a complete picture of the team’s effort required to implement the story
  • Aggregate developer and tester updates to the task status as they complete their work, automatically providing a current calculated snapshot of the status of each story, the iteration, and the overall release
  • Provide a visual representation (via metrics, charts, and dashboards) of the current state of each user story, the iteration, and the release, allowing all stakeholders, including people on geographically distributed teams, to quickly check status
  • Integrate with configuration management tools, which can allow automated recording of code check-ins and builds against tasks, and, in some cases, automated status updates for tasks

Communication and Information Sharing Tools

In addition to e-mail, documents, and spoken communication, Agile teams often use three additional types of tools to support communication and information sharing: wikis, instant messaging, and desktop sharing.

Wikis allow teams to build and share an online knowledge base on various aspects of the project, including the following:

  • Product feature diagrams, feature discussions, prototype diagrams, photos of whiteboard discussions, and other information
  • Tools and/or techniques for developing and testing found to be useful by other members of the team
  • Metrics, charts, and dashboards on product status, which are especially useful when the wiki is integrated with other tools such as the build server and task management system, since those tools can update product status automatically
  • Conversations between team members, similar to instant messaging and email, but in a way that is shared with everyone else on the team

Instant messaging, audio teleconferencing, and video chat tools provide the following benefits:

  • Allow real time direct communication between team members, especially distributed teams
  • Involve distributed teams in standup meetings
  • Reduce telephone bills by use of voice-over-IP technology, removing cost constraints that could reduce team member communication in distributed settings

Desktop sharing and capturing tools provide the following benefits:

  • In distributed teams, product demonstrations, code reviews, and even pairing can occur
  • Capturing product demonstrations at the end of each iteration, which can be posted to the team’s wiki

These tools should be used to complement and extend, not replace, face-to-face communication in Agile teams.

Software Build and Distribution Tools

As discussed earlier in this article, daily build and deployment of software is a key practice in Agile teams. This requires the use of continuous integration tools and build distribution tools. The uses, benefits, and risks of these tools were described earlier on the basics of agile page. 

Configuration Management Tools

On Agile teams, configuration management tools may be used not only to store source code and automated tests; manual tests and other test work products are often stored in the same repository as the product source code as well. This provides traceability between which versions of the software were tested with which versions of the tests, and allows for rapid change without losing historical information. The main types of version control systems are centralised source control systems and distributed version control systems. The team’s size, structure, location, and requirements to integrate with other tools will determine which version control system is right for a particular Agile project.

Test Design, Implementation, and Execution Tools

Some tools are useful to Agile testers at specific points in the software testing process. While most of these tools are not new or specific to Agile, they provide important capabilities given the rapid change of Agile projects.

  • Test design tools: The use of tools such as mind maps to quickly design and define tests for a new feature has become more popular.
  • Test case management tools: The type of test case management tools used in Agile may be part of the whole team’s application lifecycle management or task management tool.
  • Test data preparation and generation tools: Tools that generate data to populate an application’s database are very beneficial when a lot of data and combinations of data are necessary to test the application. These tools can also help re-define the database structure as the product undergoes changes during an Agile project and refactor the scripts to generate the data, allowing quick updating of test data as changes occur. Some test data preparation tools use production data sources as raw material and use scripts to remove or anonymise sensitive data. Other test data preparation tools can help with validating large data inputs or outputs. (A small data generation sketch follows this list.)
  • Test data load tools: After data has been generated for testing, it needs to be loaded into the application. Manual data entry is often time consuming and error prone, but data load tools are available to make the process reliable and efficient. In fact, many of the data generator tools include an integrated data load component. In other cases, bulk-loading using the database management systems is also possible.
  • Automated test execution tools: There are test execution tools which are more aligned to Agile testing. Specific tools are available via both commercial and open source avenues to support test first approaches, such as behaviour-driven development, test-driven development, and acceptance test-driven development. These tools allow testers and business staff to express the expected system behaviour in tables or natural language using keywords.
  • Exploratory test tools: Tools that capture and log activities performed on an application during an exploratory test session are beneficial to the tester and developer, as they record the actions taken. This is useful when a defect is found, as the actions taken before the failure occurred have been captured and can be used to report the defect to the developers. Logging steps performed in an exploratory test session may prove to be beneficial if the test is ultimately included in the automated regression test suite.
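
As promised above, here is a small illustration of test data generation. The sketch uses the open-source Faker library (assumed to be installed) to produce anonymised, realistic-looking records; the customer schema is hypothetical.

```python
# Generating anonymised test data with Faker; the schema is hypothetical.
from faker import Faker

fake = Faker()
Faker.seed(1234)  # seeding makes the generated data reproducible across runs

def generate_customers(count):
    """Generate realistic-looking customer records for test databases."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address(),
            "joined": fake.date_this_decade().isoformat(),
        }
        for _ in range(count)
    ]

if __name__ == "__main__":
    for customer in generate_customers(3):
        print(customer)
```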

Cloud Computing and Virtualisation Tools

Virtualisation allows a single physical resource (server) to operate as many separate, smaller resources. When virtual machines or cloud instances are used, teams have a greater number of servers available to them for development and testing. This can help to avoid delays associated with waiting for physical servers. Provisioning a new server or restoring a server is more efficient with snapshot capabilities built into most virtualisation tools. Some test management tools now utilise virtualisation technologies to snapshot servers at the point when a fault is detected, allowing testers to share the snapshot with the developers investigating the fault.

Basics of Agile Testing Practices, Principles, and Processes

The Differences between Testing in Traditional and Agile Approaches

As described in the basics pages, test activities are related to development activities, and thus testing varies in different lifecycles. Testers must understand the differences between testing in traditional lifecycle models (e.g., sequential such as the V-model or iterative such as RUP) and Agile lifecycles in order to work effectively and efficiently. The Agile models differ in terms of the way testing and development activities are integrated, the project work products, the names, entry and exit criteria used for various levels of testing, the use of tools, and how independent testing can be effectively utilised. 

Testers should remember that organisations vary considerably in their implementation of lifecycles. Deviation from the ideals of Agile lifecycles may represent intelligent customisation and adaptation of the practices. The ability to adapt to the context of a given project, including the software development practices actually followed, is a key success factor for testers. 

Testing and Development Activities

One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short iterations, each iteration resulting in working software that delivers features of value to business stakeholders. At the beginning of the project, there is a release planning period. This is followed by a sequence of iterations. At the beginning of each iteration, there is an iteration planning period. Once iteration scope is established, the selected user stories are developed, integrated with the system, and tested. These iterations are highly dynamic, with development, integration, and testing activities taking place throughout each iteration, and with considerable parallelism and overlap. Testing activities occur throughout the iteration, not as a final activity. 

Testers, developers, and business stakeholders all have a role in testing, as with traditional lifecycles. Developers perform unit tests as they develop features from the user stories. Testers then test those features. Business stakeholders also test the stories during implementation. Business stakeholders might use written test cases, but they also might simply experiment with and use the feature in order to provide fast feedback to the development team. 

In some cases, hardening or stabilisation iterations occur periodically to resolve any lingering defects and other forms of technical debt. However, the best practice is that no feature is considered done until it has been integrated and tested with the system. Another good practice is to address defects remaining from the previous iteration at the beginning of the next iteration, as part of the backlog for that iteration (referred to as “fix bugs first”). However, some complain that this practice results in a situation where the total work to be done in the iteration is unknown and it will be more difficult to estimate when the remaining features can be done. At the end of the sequence of iterations, there can be a set of release activities to get the software ready for delivery, though in some cases delivery occurs at the end of each iteration. 

When risk-based testing is used as one of the test strategies, a high-level risk analysis occurs during release planning, with testers often driving that analysis. However, the specific quality risks associated with each iteration are identified and assessed in iteration planning. This risk analysis can influence the sequence of development as well as the priority and depth of testing for the features. It also influences the estimation of the test effort required for each feature. 

In some Agile practices (e.g., Extreme Programming), pairing is used. Pairing can involve testers working together in twos to test a feature. Pairing can also involve a tester working collaboratively with a developer to develop and test a feature. Pairing can be difficult when the test team is distributed, but processes and tools can help enable distributed pairing. 

Testers may also serve as testing and quality coaches within the team, sharing testing knowledge and supporting quality assurance work within the team. This promotes a sense of collective ownership of quality of the product. 

Test automation at all levels of testing occurs in many Agile teams, and this can mean that testers spend time creating, executing, monitoring, and maintaining automated tests and results. Because of the heavy use of test automation, a higher percentage of the manual testing on Agile projects tends to be done using experience-based and defect-based techniques such as software attacks, exploratory testing, and error guessing. While developers will focus on creating unit tests, testers should focus on creating automated integration, system, and system integration tests. This leads to a tendency for Agile teams to favour testers with a strong technical and test automation background. 

One core Agile principle is that change may occur throughout the project. Therefore, lightweight work product documentation is favoured in Agile projects. Changes to existing features have testing implications, especially regression testing implications. The use of automated testing is one way of managing the amount of test effort associated with change. However, it’s important that the rate of change not exceed the project team’s ability to deal with the risks associated with those changes. 

Project Work Products

Project work products of immediate interest to Agile testers typically fall into three categories: 

  1. Business-oriented work products that describe what is needed (e.g., requirements specifications) and how to use it (e.g., user documentation)
  2. Development work products that describe how the system is built (e.g., database entity-relationship diagrams), that actually implement the system (e.g., code), or that evaluate individual pieces of code (e.g., automated unit tests)
  3. Test work products that describe how the system is tested (e.g., test strategies and plans), that actually test the system (e.g., manual and automated tests), or that present test results (e.g., test dashboards)

In a typical Agile project, it is a common practice to avoid producing vast amounts of documentation. Instead, focus is more on having working software, together with automated tests that demonstrate conformance to requirements. This encouragement to reduce documentation applies only to documentation that does not deliver value to the customer. In a successful Agile project, a balance is struck between increasing efficiency by reducing documentation and providing sufficient documentation to support business, testing, development, and maintenance activities. The team must make a decision during release planning about which work products are required and what level of work product documentation is needed. 

Typical business-oriented work products on Agile projects include user stories and acceptance criteria. User stories are the Agile form of requirements specifications, and should explain how the system should behave with respect to a single, coherent feature or function. A user story should define a feature small enough to be completed in a single iteration. Larger collections of related features, or a collection of sub-features that make up a single complex feature, may be referred to as “epics”. Epics may include user stories for different development teams. For example, one user story can describe what is required at the API-level (middleware) while another story describes what is needed at the UI-level (application). These collections may be developed over a series of sprints. Each epic and its user stories should have associated acceptance criteria. 

Typical developer work products on Agile projects include code. Agile developers also often create automated unit tests. These tests might be created after the development of code. In some cases, though, developers create tests incrementally, before each portion of the code is written, in order to provide a way of verifying, once that portion of code is written, whether it works as expected. While this approach is referred to as test first or test-driven development, in reality the tests are more a form of executable low-level design specifications rather than tests. 
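
As a minimal sketch of that test-first rhythm (the function and its rules are hypothetical): the test is written first and fails, then the simplest code that makes it pass is added.

```python
# Test-first: this test is written before slugify() exists, so it fails
# on the first run; the implementation below is then added to make it pass.
def test_slugify_lowercases_and_replaces_spaces():
    assert slugify("Agile Testing Basics") == "agile-testing-basics"

# The simplest implementation that satisfies the test (hypothetical rules):
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```

Read this way, the test doubles as a low-level specification of the intended behaviour, which is the point made above.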

Typical tester work products on Agile projects include automated tests, as well as documents such as test plans, quality risk catalogs, manual tests, defect reports, and test results logs. The documents are captured in as lightweight a fashion as possible, which is often also true of these documents in traditional lifecycles. Testers will also produce test metrics from defect reports and test results logs, and again there is an emphasis on a lightweight approach. 

In some Agile implementations, especially regulated, safety critical, distributed, or highly complex projects and products, further formalisation of these work products is required. For example, some teams transform user stories and acceptance criteria into more formal requirements specifications. Vertical and horizontal traceability reports may be prepared to satisfy auditors, regulations, and other requirements. 

Test Levels

Test levels are test activities that are logically related, often by the maturity or completeness of the item under test. 

In sequential lifecycle models, the test levels are often defined such that the exit criteria of one level are part of the entry criteria for the next level. In some iterative models, this rule does not apply. Test levels overlap. Requirement specification, design specification, and development activities may overlap with test levels. 

In some Agile lifecycles, overlap occurs because changes to requirements, design, and code can happen at any point in an iteration. While Scrum, in theory, does not allow changes to the user stories after iteration planning, in practice such changes sometimes occur. During an iteration, any given user story will typically progress sequentially through the following test activities: 

  • Unit testing, typically done by the developer
  • Feature acceptance testing, which is sometimes broken into two activities: 
    • Feature verification testing, which is often automated, may be done by developers or testers, and involves testing against the user story’s acceptance criteria
    • Feature validation testing, which is usually manual and can involve developers, testers, and business stakeholders working collaboratively to determine whether the feature is fit for use, to improve visibility of the progress made, and to receive real feedback from the business stakeholders

In addition, there is often a parallel process of regression testing occurring throughout the iteration. This involves re-running the automated unit tests and feature verification tests from the current iteration and previous iterations, usually via a continuous integration framework.

In some Agile projects, there may be a system test level, which starts once the first user story is ready for such testing. This can involve executing functional tests, as well as non-functional tests for performance, reliability, usability, and other relevant test types.

Agile teams can employ various forms of acceptance testing. Internal alpha tests and external beta tests may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contract acceptance tests may also occur at these same points.

Testing and Configuration Management

Agile projects often involve heavy use of automated tools to develop, test, and manage software development. Developers use tools for static analysis, unit testing, and code coverage. Developers continuously check the code and unit tests into a configuration management system, using automated build and test frameworks. These frameworks allow the continuous integration of new software with the system, with the static analysis and unit tests run repeatedly as new software is checked in. 

These automated tests can also include functional tests at the integration and system levels. Such functional automated tests may be created using functional testing harnesses, open-source user interface functional test tools, or commercial tools, and can be integrated with the automated tests run as part of the continuous integration framework. In some cases, due to the duration of the functional tests, the functional tests are separated from the unit tests and run less frequently. For example, unit tests may be run each time new software is checked in, while the longer functional tests are run only every few days. 
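
One lightweight way to realise this split is sketched below with pytest markers; the marker name and the commands are assumptions for illustration, not a prescribed setup.

```python
# Separating fast unit tests from slower functional tests with a marker.
import pytest

def test_price_calculation():
    # Fast unit test: suitable for running on every check-in
    assert round(19.99 * 2, 2) == 39.98

@pytest.mark.functional  # slower end-to-end test, run on a schedule instead
def test_end_to_end_checkout():
    ...  # would drive the deployed system through a full purchase

# Register the marker in pytest.ini so pytest does not warn about it:
#   [pytest]
#   markers = functional: slower end-to-end tests
#
# The continuous integration server can then run the two sets at
# different frequencies:
#   pytest -m "not functional"   # on every check-in
#   pytest -m functional         # nightly, or every few days
```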

One goal of the automated tests is to confirm that the build is functioning and installable. If any automated test fails, the team should fix the underlying defect in time for the next code check-in. This requires an investment in real-time test reporting to provide good visibility into test results. This approach helps reduce expensive and inefficient cycles of “build-install-fail-rebuild-reinstall” that can occur in many traditional projects, since changes that break the build or cause software to fail to install are detected quickly. 

Automated testing and build tools help to manage the regression risk associated with the frequent change that often occurs in Agile projects. However, over-reliance on automated unit testing alone to manage these risks can be a problem, as unit testing often has limited defect detection effectiveness. Automated tests at the integration and system levels are also required. 

Organisational Options for Independent Testing

As discussed in the basics section, independent testers are often more effective at finding defects. In some Agile teams, developers create many of the tests in the form of automated tests. One or more testers may be embedded within the team, performing many of the testing tasks. However, given those testers’ position within the team, there is a risk of loss of independence and objective evaluation.

Other Agile teams retain fully independent, separate test teams, and assign testers on-demand during the final days of each sprint. This can preserve independence, and these testers can provide an objective, unbiased evaluation of the software. However, time pressures, lack of understanding of the new features in the product, and relationship issues with business stakeholders and developers often lead to problems with this approach. 

A third option is to have an independent, separate test team where testers are assigned to Agile teams on a long-term basis, at the beginning of the project, allowing them to maintain their independence while gaining a good understanding of the product and strong relationships with other team members. In addition, the independent test team can have specialised testers outside of the Agile teams to work on long-term and/or iteration-independent activities, such as developing automated test tools, carrying out non-functional testing, creating and supporting test environments and data, and carrying out test levels that might not fit well within a sprint (e.g., system integration testing). 

Status of Testing in Agile Projects

Change takes place rapidly in Agile projects. This change means that test status, test progress, and product quality constantly evolve, and testers must devise ways to get that information to the team so that they can make decisions to stay on track for successful completion of each iteration. In addition, change can affect existing features from previous iterations. Therefore, manual and automated tests must be updated to deal effectively with regression risk. 

Communicating Test Status, Progress, and Product Quality

Agile teams progress by having working software at the end of each iteration. To determine when the team will have working software, they need to monitor the progress of all work items in the iteration and release. Testers in Agile teams utilise various methods to record test progress and status, including test automation results, progression of test tasks and stories on the Agile task board, and burn-down charts showing the team’s progress. These can then be communicated to the rest of the team using media such as wiki dashboards and dashboard-style emails, as well as orally during stand-up meetings. Agile teams may use tools that automatically generate status reports based on test results and task progress, which in turn update wiki-style dashboards and emails. This method of communication also gathers metrics from the testing process, which can be used in process improvement. Communicating test status in such an automated manner also frees testers’ time to focus on designing and executing more test cases.

Teams may use burn-down charts to track progress across the entire release and within each iteration. A burn-down chart represents the amount of work left to be done against time allocated to the release or iteration.
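
For illustration, the data behind such a chart is simple: remaining work recorded per day, compared with an ideal straight-line burn-down. All numbers below are hypothetical.

```python
# The data behind an iteration burn-down chart (hypothetical numbers).
TOTAL_POINTS = 40      # story points committed for the iteration
ITERATION_DAYS = 10
remaining_by_day = [40, 38, 35, 33, 30, 26, 21, 15, 8, 0]  # end-of-day totals

for day, remaining in enumerate(remaining_by_day, start=1):
    ideal = TOTAL_POINTS * (1 - day / ITERATION_DAYS)  # straight-line reference
    print(f"day {day:2}: remaining={remaining:2}  ideal={ideal:4.1f}")
```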

To provide an instant, detailed visual representation of the whole team’s current status, including the status of testing, teams may use Agile task boards. The story cards, development tasks, test tasks, and other tasks created during iteration planning are captured on the task board, often using colour-coordinated cards to determine the task type. During the iteration, progress is managed via the movement of these tasks across the task board into columns such as to do, work in progress, verify, and done. Agile teams may use tools to maintain their story cards and Agile task boards, which can automate dashboards and status updates. 

Testing tasks on the task board relate to the acceptance criteria defined for the user stories. As test automation scripts, manual tests, and exploratory tests for a test task achieve a passing status, the task moves into the done column of the task board. The whole team reviews the status of the task board regularly, often during the daily stand-up meetings, to ensure tasks are moving across the board at an acceptable rate. If any tasks (including testing tasks) are not moving or are moving too slowly, the team reviews and addresses any issues that may be blocking the progress of those tasks. 

The daily stand-up meeting includes all members of the Agile team including testers. At this meeting, they communicate their current status. The agenda for each member is: 

  • What have you completed since the last meeting?
  • What do you plan to complete by the next meeting?
  • What is getting in your way?

Any issues that may block test progress are communicated during the daily stand-up meetings, so the whole team is aware of the issues and can resolve them accordingly.

To improve the overall product quality, many Agile teams perform customer satisfaction surveys to receive feedback on whether the product meets customer expectations. Teams may use other metrics similar to those captured in traditional development methodologies, such as test pass/fail rates, defect discovery rates, confirmation and regression test results, defect density, defects found and fixed, requirements coverage, risk coverage, code coverage, and code churn to improve the product quality.

As with any lifecycle, the metrics captured and reported should be relevant and aid decision-making. Metrics should not be used to reward, punish, or isolate any team members. 

Managing Regression Risk with Evolving Manual and Automated Test Cases

In an Agile project, as each iteration completes, the product grows. Therefore, the scope of testing also increases. Along with testing the code changes made in the current iteration, testers also need to verify no regression has been introduced on features that were developed and tested in previous iterations. The risk of introducing regression in Agile development is high due to extensive code churn (lines of code added, modified, or deleted from one version to another). Since responding to change is a key Agile principle, changes can also be made to previously delivered features to meet business needs. In order to maintain velocity without incurring a large amount of technical debt, it is critical that teams invest in test automation at all test levels as early as possible. It is also critical that all test assets such as automated tests, manual test cases, test data, and other testing artefacts are kept up-to-date with each iteration. It is highly recommended that all test assets be maintained in a configuration management tool in order to enable version control, to ensure ease of access by all team members, and to support making changes as required due to changing functionality while still preserving the historic information of the test assets. 

Because complete repetition of all tests is seldom possible, especially in tight-timeline Agile projects, testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations to select test cases that may be candidates for the regression test suite, and to retire test cases that are no longer relevant. Tests written in earlier iterations to verify specific features may have little value in later iterations due to feature changes or new features which alter the way those earlier features behave. 

While reviewing test cases, testers should consider suitability for automation. The team needs to automate as many tests as possible from previous and current iterations. This allows automated regression tests to reduce regression risk with less effort than manual regression testing would require. This reduced regression test effort frees the testers to more thoroughly test new features and functions in the current iteration. 

It is critical that testers have the ability to quickly identify and update test cases from previous iterations and/or releases that are affected by the changes made in the current iteration. Defining how the team designs, writes, and stores test cases should occur during release planning. Good practices for test design and implementation need to be adopted early and applied consistently. The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor test design and implementation practices. 

Use of test automation, at all test levels, allows Agile teams to provide rapid feedback on product quality. Well-written automated tests provide a living document of system functionality. By checking the automated tests and their corresponding test results into the configuration management system, aligned with the versioning of the product builds, Agile teams can review the functionality tested and the test results for any given build at any given point in time.

Automated unit tests are run before source code is checked into the mainline of the configuration management system to ensure the code changes do not break the software build. To reduce build breaks, which can slow down the progress of the whole team, code should not be checked in unless all automated unit tests pass. Automated unit test results provide immediate feedback on code and build quality, but not on product quality. 

Automated acceptance tests are run regularly as part of the continuous integration full system build. These tests are run against a complete system build at least daily, but are generally not run with each code check-in as they take longer to run than automated unit tests and could slow down code check-ins. The test results from automated acceptance tests provide feedback on product quality with respect to regression since the last build, but they do not provide status of overall product quality. 

Automated tests can be run continuously against the system. An initial subset of automated tests to cover critical system functionality and integration points should be created immediately after a new build is deployed into the test environment. These tests are commonly known as build verification tests. Results from the build verification tests will provide instant feedback on the software after deployment, so teams don’t waste time testing an unstable build. 

Automated tests contained in the regression test set are generally run as part of the daily main build in the continuous integration environment, and again when a new build is deployed into the test environment. As soon as an automated regression test fails, the team stops and investigates the reasons for the failing test. The test may have failed due to legitimate functional changes in the current iteration, in which case the test and/or user story may need to be updated to reflect the new acceptance criteria. Alternatively, the test may need to be retired if another test has been built to cover the changes. However, if the test failed due to a defect, it is a good practice for the team to fix the defect prior to progressing with new features. 

In addition to test automation, the following testing tasks may also be automated: 

  • Test data generation
  • Loading testing data into systems
  • Deployment of builds into the test environments
  • Restoration of a test environment (e.g., the database or website data files) to a baseline
  • Comparison of data outputs

Automation of these tasks reduces the overhead and allows the team to spend time developing and testing new features.
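
As one example, comparison of data outputs (the last task in the list above) is straightforward to automate. The sketch below diffs an actual output file against an expected baseline; the file names are hypothetical.

```python
# Comparing a data output against an expected baseline; a non-zero exit
# code lets a CI job flag the mismatch. File names are hypothetical.
import difflib
import sys

def compare_outputs(expected_path, actual_path):
    with open(expected_path) as f:
        expected = f.readlines()
    with open(actual_path) as f:
        actual = f.readlines()
    return list(difflib.unified_diff(expected, actual,
                                     fromfile=expected_path,
                                     tofile=actual_path))

if __name__ == "__main__":
    differences = compare_outputs("expected_report.csv", "actual_report.csv")
    if differences:
        sys.stdout.writelines(differences)
        sys.exit(1)
    print("outputs match the baseline")
```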

Role and Skills of a Tester in an Agile Team

In an Agile team, testers must closely collaborate with all other team members and with business stakeholders. This has a number of implications in terms of the skills a tester must have and the activities they perform within an Agile team.

Agile Tester Skills

Agile testers should have all the skills mentioned in the basics section. In addition to these skills, a tester in an Agile team should be competent in test automation, test-driven development, acceptance test-driven development, white-box, black-box, and experience-based testing.

As Agile methodologies depend heavily on collaboration, communication, and interaction between the team members as well as stakeholders outside the team, testers in an Agile team should have good interpersonal skills. Testers in Agile teams should:

  • Be positive and solution-oriented with team members and stakeholders
  • Display critical, quality-oriented, sceptical thinking about the product
  • Actively acquire information from stakeholders (rather than relying entirely on written specifications)
  • Accurately evaluate and report test results, test progress, and product quality
  • Work effectively to define testable user stories, especially acceptance criteria, with customer representatives and stakeholders
  • Collaborate within the team, working in pairs with programmers and other team members
  • Respond to change quickly, including changing, adding, or improving test cases
  • Plan and organise their own work

Continuous skills growth, including interpersonal skills growth, is essential for all testers, including those on Agile teams. 

The Role of a Tester in an Agile Team

The role of a tester in an Agile team includes activities that generate and provide feedback not only on test status, test progress, and product quality, but also on process quality. In addition to the activities described elsewhere in this article, these activities include: 

  • Understanding, implementing, and updating the test strategy
  • Measuring and reporting test coverage across all applicable coverage dimensions
  • Ensuring proper use of testing tools
  • Configuring, using, and managing test environments and test data
  • Reporting defects and working with the team to resolve them
  • Coaching other team members in relevant aspects of testing
  • Ensuring the appropriate testing tasks are scheduled during release and iteration planning
  • Actively collaborating with developers and business stakeholders to clarify requirements, especially in terms of testability, consistency, and completeness
  • Participating proactively in the team retrospectives, suggesting and implementing improvements

Within an Agile team, each team member is responsible for product quality and plays a role in performing test-related tasks.

Agile organisations may encounter some test-related organisational risks:

  • Testers work so closely with developers that they lose the appropriate tester mindset
  • Testers become tolerant of or silent about inefficient, ineffective, or low-quality practices within the team
  • Testers cannot keep pace with the incoming changes in time-constrained iterations

To mitigate these risks, organisations may consider the options for preserving test independence discussed earlier in this article.

Tool Support for Testing

Test Tool Considerations

Test tools can be used to support one or more testing activities. Such tools include:

  • Tools that are directly used in testing, such as test execution tools and test data preparation tools
  • Tools that help to manage requirements, test cases, test procedures, automated test scripts, test results, test data, and defects, and for reporting and monitoring test execution
  • Tools that are used for analysis and evaluation
  • Any tool that assists in testing (in this sense, a spreadsheet is also a test tool)

Test Tool Classification

Test tools can have one or more of the following purposes depending on the context: 

  • Improve the efficiency of test activities by automating repetitive tasks or tasks that require significant resources when done manually (e.g., test execution, regression testing)
  • Improve the efficiency of test activities by supporting manual test activities throughout the test process
  • Improve the quality of test activities by allowing for more consistent testing and a higher level of defect reproducibility
  • Automate activities that cannot be executed manually (e.g., large scale performance testing)
  • Increase reliability of testing (e.g., by automating large data comparisons or simulating behaviour)

Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g., commercial or open source), and technology used. Tools are classified in this article according to the test activities that they support.

Some tools clearly support only or mainly one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be provided as an integrated suite.

Some types of test tools can be intrusive, which means that they may affect the actual outcome of the test. For example, the actual response times for an application may be different due to the extra instructions that are executed by a performance testing tool, or the amount of code coverage achieved may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the probe effect.

Some tools offer support that is typically more appropriate for developers (e.g., tools that are used during component and integration testing). Such tools are marked with “(D)” in the sections below.

Tool support for management of testing and test-ware

Management tools may apply to any test activities over the entire software development lifecycle. Examples of tools that support management of testing and test-ware include:

  • Test management tools and application lifecycle management (ALM) tools
  • Requirements management tools (e.g., traceability to test objects)
  • Defect management tools
  • Configuration management tools
  • Continuous integration tools (D)

Tool support for static testing

Static testing tools are associated with the activities and benefits described in the static testing page. Examples of such tools include:

  • Static analysis tools (D)

Tool support for test design and implementation

Test design tools aid in the creation of maintainable work products in test design and implementation, including test cases, test procedures and test data. Examples of such tools include:

  • Model-Based testing tools
  • Test data preparation tools

In some cases, tools that support test design and implementation may also support test execution and logging, or provide their outputs directly to other tools that support test execution and logging.

Tool support for test execution and logging

Many tools exist to support and enhance test execution and logging activities. Examples of these tools include:

  • Test execution tools (e.g., to run regression tests)
  • Coverage tools (e.g., requirements coverage, code coverage (D))
  • Test harnesses (D)

Tool support for performance measurement and dynamic analysis

Performance measurement and dynamic analysis tools are essential in supporting performance and load testing activities, as these activities cannot effectively be done manually. Examples of these tools include:

  • Performance testing tools
  • Dynamic analysis tools (D)

Tool support for specialised testing needs

In addition to tools that support the general test process, there are many other tools that support more specific testing for non-functional characteristics.

Benefits and Risks of Test Automation

Simply acquiring a tool does not guarantee success. Each new tool introduced into an organisation will require effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks. This is particularly true of test execution tools (the use of which is often referred to as test automation).

Potential benefits of using tools to support test execution include:

  • Reduction in repetitive manual work (e.g., running regression tests, environment set up/tear down tasks, re-entering the same test data, and checking against coding standards), thus saving time
  • Greater consistency and repeatability (e.g., test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements)
  • More objective assessment (e.g., static measures, coverage)
  • Easier access to information about testing (e.g., statistics and graphs about test progress, defect rates and performance)

Potential risks of using tools to support testing include:

  • Expectations for the tool may be unrealistic (including functionality and ease of use)
  • The time, cost and effort for the initial introduction of a tool may be under-estimated (including training and external expertise)
  • The time and effort needed to achieve significant and continuing benefits from the tool may be under-estimated (including the need for changes in the test process and continuous improvement in the way the tool is used)
  • The effort required to maintain the test work products generated by the tool may be under-estimated
  • The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better)
  • Version control of test work products may be neglected
  • Relationships and interoperability issues between critical tools may be neglected, such as requirements management tools, configuration management tools, defect management tools and tools from multiple vendors
  • The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
  • The vendor may provide a poor response for support, upgrades, and defect fixes
  • An open source project may be suspended
  • A new platform or technology may not be supported by the tool
  • There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)

Special Considerations for Test Execution and Test Management Tools

In order to have a smooth and successful implementation, there are a number of things that ought to be considered when selecting and integrating test execution and test management tools into an organisation. 

Test execution tools

Test execution tools execute test objects using automated test scripts. These tools often require significant effort in order to achieve significant benefits. 

  • Capturing test approach: Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur, and require ongoing maintenance as the system’s user interface evolves over time. 
  • Data-driven test approach: This test approach separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data.
  • Keyword-driven test approach: In this approach, a generic script processes keywords describing the actions to be taken (also called action words), and then calls keyword scripts to process the associated test data.

The above approaches require someone to have expertise in the scripting language (testers, developers, or specialists in test automation). When using data-driven or keyword-driven test approaches, testers who are not familiar with the scripting language can also contribute by creating test data and/or keywords for these predefined scripts. Regardless of the scripting technique used, the expected results for each test need to be compared to the actual results from the test, either dynamically (while the test is running) or stored for later (post-execution) comparison.
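
A minimal sketch of the keyword-driven idea: a generic script reads rows of keywords and arguments and dispatches them to keyword implementations. In practice the rows would come from a spreadsheet maintained by testers; every keyword and value below is hypothetical.

```python
# A generic keyword-driven runner; the keywords, arguments, and test
# table are all hypothetical. Real keyword scripts would drive the
# application under test rather than print.
def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def check_text(field, expected):
    print(f"verifying that {field} shows '{expected}'")

KEYWORDS = {
    "open_app": open_app,
    "enter_text": enter_text,
    "check_text": check_text,
}

# Each row pairs an action word with its test data, as a tester would
# write it in a spreadsheet.
test_table = [
    ("open_app", ["checkout"]),
    ("enter_text", ["discount_code", "SAVE10"]),
    ("check_text", ["total", "90.00"]),
]

for keyword, args in test_table:
    KEYWORDS[keyword](*args)  # the generic script interprets each row
```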

Model-Based testing (MBT) tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications which can then be saved in a test management tool and/or executed by a test execution tool.

Test management tools

Test management tools often need to interface with other tools or spreadsheets for various reasons, including:

  • To produce useful information in a format that fits the needs of the organisation
  • To maintain consistent traceability to requirements in a requirements management tool
  • To link with test object version information in the configuration management tool

This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle Management), which includes a test management module, as well as other modules (e.g., project schedule and budget information) that are used by different groups within an organisation.

Effective Use of Tools

Main Principles for Tool Selection

The main considerations in selecting a tool for an organisation include: 

  • Assessment of the maturity of the organisation, including its strengths and weaknesses
  • Identification of opportunities for an improved test process supported by tools
  • Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology
  • Understanding the build and continuous integration tools already in use within the organisation, in order to ensure tool compatibility and integration
  • Evaluation of the tool against clear requirements and objective criteria
  • Consideration of whether or not the tool is available for a free trial period (and for how long)
  • Evaluation of the vendor (including training, support and commercial aspects) or support for non-commercial (e.g., open source) tools
  • Identification of internal requirements for coaching and mentoring in the use of the tool
  • Evaluation of training needs, considering the testing (and test automation) skills of those who will be working directly with the tool(s)
  • Consideration of pros and cons of various licensing models (e.g., commercial or open source)
  • Estimation of a cost-benefit ratio based on a concrete business case (if required)

As a final step, a proof-of-concept evaluation should be done to establish whether the tool performs effectively with the software under test and within the current infrastructure or, if necessary, to identify changes needed to that infrastructure to use the tool effectively.

Pilot Projects for Introducing a Tool into an Organisation

After completing the tool selection and a successful proof-of-concept, introducing the selected tool into an organisation generally starts with a pilot project, which has the following objectives:

  • Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses
  • Evaluating how the tool fits with existing processes and practices, and determining what would need to change
  • Deciding on standard ways of using, managing, storing, and maintaining the tool and the test work products (e.g., deciding on naming conventions for files and tests, selecting coding standards, creating libraries and defining the modularity of test suites)
  • Assessing whether the benefits will be achieved at reasonable cost
  • Understanding the metrics that you wish the tool to collect and report, and configuring the tool to ensure these metrics can be captured and reported

Success Factors for Tools

Success factors for evaluation, implementation, deployment, and on-going support of tools within an organisation include:

  • Rolling out the tool to the rest of the organisation incrementally
  • Adapting and improving processes to fit with the use of the tool
  • Providing training, coaching, and mentoring for tool users
  • Defining guidelines for the use of the tool (e.g., internal standards for automation)
  • Implementing a way to gather usage information from the actual use of the tool
  • Monitoring tool use and benefits
  • Providing support to the users of a given tool
  • Gathering lessons learned from all users

It is also important to ensure that the tool is technically and organisationally integrated into the software development lifecycle, which may involve separate organisations responsible for operations and/or third party suppliers.

Testing Throughout the Software Development Lifecycle

Software Development Lifecycle Models

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

Software Development and Software Testing

It is an important part of a tester’s role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:

  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

This article categorises common software development lifecycle models as follows:

  • Sequential development models
  • Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete. In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.

In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

Unlike the Waterfall model, the V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing. In this model, the execution of tests associated with each test level proceeds sequentially, but in some cases overlapping occurs.

Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally. The size of these feature increments varies, with some methods delivering larger pieces and others smaller ones. The feature increments can be as small as a single change to a user interface screen or a new query option.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration. Iterations may involve changes to features developed in earlier iterations, along with changes in project scope. Each iteration delivers working software which is a growing subset of the overall set of features until the final software is delivered or development is stopped.

Examples include: 

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features
  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features
  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
  • Spiral: Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Components or systems developed using these methods often involve overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines. Many development efforts using these methods also include the concept of self-organising teams, which can change the way testing work is organised as well as the relationship between testers and developers.

These methods form a growing system, which may be released to end-users on a feature-by-feature basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of whether the software increments are released to end-users, regression testing is increasingly important as the system grows.

In contrast to sequential models, iterative and incremental models may deliver usable software in weeks or even days, but may take months or even years to deliver the product containing the complete set of requirements.

Software Development Lifecycle Models in Context

Software development lifecycle models must be selected and adapted to the context of project and product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to-market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile’s brake control system. As another example, in some cases organisational and cultural issues may inhibit communication between team members, which can impede iterative development.

Depending on the context of the project, it may be necessary to combine or reorganise test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete. 

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for developing new versions of an Internet of Things system. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the software development lifecycle, after the objects have been introduced to operational use (e.g., the operate, update, and decommission phases).

Reasons why software development lifecycle models must be adapted to the context of project and product characteristics include:

  • Difference in product risks of systems (complex or simple project)
  • Many business units can be part of a project or program (combination of sequential and agile development)
  • Short time to deliver a product to the market (merge of test levels and/or integration of test types in test levels)

Test Levels

Test levels are groups of test activities that are organised and managed together. Each test level is an instance of the test process, consisting of the activities described in the basics of testing article, performed in relation to software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle. The test levels used in this article are:

  • Component testing
  • Integration testing
  • System testing
  • Acceptance testing

Test levels are characterised by the following attributes:

  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (i.e., what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities

For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.

Component Testing

Objectives of component testing

Component testing (also known as unit or module testing) focuses on components that are separately testable. Objectives of component testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the component are as designed and specified
  • Building confidence in the component’s quality
  • Finding defects in the component
  • Preventing defects from escaping to higher test levels

In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components.

Component testing is often done in isolation from the rest of the system, depending on the software development lifecycle model and the system, which may require mock objects, service virtualisation, harnesses, stubs, and drivers. Component testing may cover functionality (e.g., correctness of calculations), non-functional characteristics (e.g., searching for memory leaks), and structural properties (e.g., decision testing).
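
A minimal sketch of such isolation, using Python’s standard unittest.mock; the CurrencyConverter component and its rates service are invented for the example.

    # Sketch: component testing in isolation using a mock object.
    # The dependency (a live exchange-rate service) is replaced with a
    # mock so that only the component's own logic is exercised.

    from unittest.mock import Mock

    class CurrencyConverter:
        """Hypothetical component under test."""
        def __init__(self, rates_service):
            self.rates = rates_service

        def convert(self, amount, currency):
            return amount * self.rates.get_rate(currency)

    def test_convert_uses_current_rate():
        stub_rates = Mock()
        stub_rates.get_rate.return_value = 1.25  # canned answer, no network
        converter = CurrencyConverter(stub_rates)
        assert converter.convert(100, "EUR") == 125.0
        stub_rates.get_rate.assert_called_once_with("EUR")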

Test basis

Examples of work products that can be used as a test basis for component testing include:

  • Detailed design
  • Code
  • Data model 
  • Component specifications

Test objects

Typical test objects for component testing include:

  • Components, units or modules
  • Code and data structures
  • Classes
  • Database modules

Typical defects and failures

Examples of typical defects and failures for component testing include:

  • Incorrect functionality (e.g., not as described in design specifications)
  • Data flow problems
  • Incorrect code and/or logic

Defects are typically fixed as soon as they are found, often with no formal defect management. However, when developers do report defects, this provides important information for root cause analysis and process improvement.

Specific approaches and responsibilities

Component testing is usually performed by the developer who wrote the code, but it at least requires access to the code being tested. Developers may alternate component development with finding and fixing defects. Developers will often write and execute tests after having written the code for a component. However, in Agile development especially, writing automated component test cases may precede writing application code. 

For example, consider test driven development (TDD). Test driven development is highly iterative and is based on cycles of developing automated test cases, then building and integrating small pieces of code, then executing the component tests, correcting any issues, and re-factoring the code. This process continues until the component has been completely built and all component tests are passing. Test driven development is an example of a test-first approach. While test driven development originated in eXtreme Programming (XP), it has spread to other forms of Agile and also to sequential lifecycles.
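
One TDD micro-cycle might look like the following sketch (pytest style); the compound_interest function is an invented example, not a prescribed implementation.

    # Step 1: write a failing automated test before the code exists.
    def test_compound_interest_one_year():
        # 1000 at 5% compounded annually for 1 year -> 1050.00
        assert compound_interest(1000, 0.05, years=1) == 1050.00

    # Step 2: write just enough code to make the test pass.
    def compound_interest(principal, rate, years):
        return round(principal * (1 + rate) ** years, 2)

    # Step 3: re-run all component tests, refactor, and repeat the cycle
    # with the next case (e.g., several years, a zero rate) until the
    # component is complete and all its tests pass.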

Integration Testing

Objectives of integration testing

Integration testing focuses on interactions between components or systems. Objectives of integration testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the interfaces are as designed and specified
  • Building confidence in the quality of the interfaces
  • Finding defects (which may be in the interfaces themselves or within the components or systems)
  • Preventing defects from escaping to higher test levels

As with component testing, in some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.

There are two different levels of integration testing described in this article, which may be carried out on test objects of varying size as follows:

  • Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
  • System integration testing focuses on the interactions and interfaces between systems, packages, and micro-services. System integration testing can also cover interactions with, and interfaces provided by external organisations (e.g., web services). In this case, the developing organisation does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organisation’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).

Test basis

Examples of work products that can be used as a test basis for integration testing include:

  • Software and system design
  • Sequence diagrams
  • Interface and communication protocol specifications
  • Use cases
  • Architecture at component or system level
  • Workflows
  • External interface definitions

Test objects

Typical test objects for integration testing include:

  • Subsystems
  • Databases
  • Infrastructure
  • Interfaces
  • APIs
  • Microservices

Typical defects and failures

Examples of typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding
  • Incorrect sequencing or timing of interface calls
  • Interface mismatch
  • Failures in communication between components
  • Unhandled or improperly handled communication failures between components
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components

Examples of typical defects and failures for system integration testing include: 

  • Inconsistent message structures between systems
  • Incorrect data, missing data, or incorrect data encoding
  • Interface mismatch
  • Failures in communication between systems
  • Unhandled or improperly handled communication failures between systems
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems
  • Failure to comply with mandatory security regulations

Specific approaches and responsibilities

Component integration tests and system integration tests should concentrate on the integration itself. For example, if integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing. If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing. Functional, non-functional, and structural test types are applicable.

Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.

If integration tests and the integration strategy are planned before components or systems are built, those components or systems can be built in the order required for most efficient testing. Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to simplify defect isolation and detect defects early, integration should normally be incremental (i.e., a small number of additional components or systems at a time) rather than “big bang” (i.e., integrating all components or systems in one single step). A risk analysis of the most complex interfaces can help to focus the integration testing.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels.

System Testing

Objectives of system testing

System testing focuses on the behaviour and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviours it exhibits while performing those tasks. Objectives of system testing include:

  • Reducing risks
  • Verifying whether the functional and non-functional behaviours of the system are as designed and specified
  • Validating that the system is complete and will work as expected
  • Building confidence in the quality of the system as a whole
  • Finding defects
  • Preventing defects from escaping to higher test levels or production

For certain systems, verifying data quality may also be an objective. As with component testing and integration testing, in some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.

The test environment should ideally correspond to the final target or production environment.

Test basis

Examples of work products that can be used as a test basis for system testing include:

  • System and software requirement specifications (functional and non-functional)
  • Risk analysis reports
  • Use cases
  • Epics and user stories
  • Models of system behaviour
  • State diagrams
  • System and user manuals

Test objects

Typical test objects for system testing include:

  • Applications
  • Hardware/software systems
  • Operating systems
  • System under test (SUT)
  • System configuration and configuration data

Typical defects and failures

Examples of typical defects and failures for system testing include:

  • Incorrect calculations
  • Incorrect or unexpected system functional or non-functional behaviour
  • Incorrect control and/or data flows within the system
  • Failure to properly and completely carry out end-to-end functional tasks
  • Failure of the system to work properly in the system environment(s)
  • Failure of the system to work as described in system and user manuals

Specific approaches and responsibilities

System testing should focus on the overall, end-to-end behaviour of the system as a whole, both functional and non-functional. System testing should use the most appropriate techniques (see test techniques) for the aspect(s) of the system to be tested. For example, a decision table may be created to verify whether functional behaviour is as described in business rules.
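
As a sketch, such a decision table can be expressed directly in an automated test; the credit-approval rule below is invented for illustration.

    # Sketch: verifying a business rule against a decision table (pytest).
    import pytest

    def decide(credit_score_ok, income_ok):
        """Hypothetical business rule under test."""
        return "approve" if credit_score_ok and income_ok else "decline"

    # Each row: conditions -> expected action.
    DECISION_TABLE = [
        (True,  True,  "approve"),
        (True,  False, "decline"),
        (False, True,  "decline"),
        (False, False, "decline"),
    ]

    @pytest.mark.parametrize("score_ok, income_ok, expected", DECISION_TABLE)
    def test_credit_decision(score_ok, income_ok, expected):
        assert decide(score_ok, income_ok) == expected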

System testing is typically carried out by independent testers who rely heavily on specifications. Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behaviour. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user story refinement or static testing activities, such as reviews, helps to reduce the incidence of such situations.

Acceptance Testing

Objectives of acceptance testing

Acceptance testing, like system testing, typically focuses on the behaviour and capabilities of a whole system or product. Objectives of acceptance testing include:

  • Establishing confidence in the quality of the system as a whole
  • Validating that the system is complete and will work as expected
  • Verifying that functional and non-functional behaviours of the system are as specified

Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.

Common forms of acceptance testing include the following:

  • User acceptance testing
  • Operational acceptance testing
  • Contractual and regulatory acceptance testing
  • Alpha and beta testing

Each is described in the following four subsections.

User acceptance testing (UAT)

User acceptance testing of the system is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfil requirements, and perform business processes with minimum difficulty, cost, and risk.

Operational acceptance testing (OAT)

The acceptance testing of the system by operations or systems administration staff is usually performed in a (simulated) production environment. The tests focus on operational aspects, and may include:

  • Testing of backup and restore
  • Installing, uninstalling and upgrading
  • Disaster recovery
  • User management
  • Maintenance tasks
  • Data load and migration tasks
  • Checks for security vulnerabilities
  • Performance testing

The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.

Contractual and regulatory acceptance testing

Contractual acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Contractual acceptance testing is often performed by users or by independent testers.

Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.

The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing

Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market. Alpha testing is performed at the developing organisation’s site, not by the development team, but by potential or existing customers, and/or operators or an independent test team. Beta testing is performed by potential or existing customers, and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing.

One objective of alpha and beta testing is building confidence among potential or existing customers, and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s) to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult to replicate by the development team.

Test basis

Examples of work products that can be used as a test basis for any form of acceptance testing include:

  • Business processes
  • User or business requirements
  • Regulations, legal contract and/or standards
  • Use cases and/or user stories
  • System requirements
  • System or user documentation
  • Installation procedures
  • Risk analysis reports

In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:

  • Backup and restore procedures
  • Disaster recovery procedures
  • Non-functional requirements
  • Operations documentation
  • Deployment and installation instructions
  • Performance targets
  • Database packages
  • Security standards or regulations

Typical test objects

Typical test objects for any form of acceptance testing include:

  • System under test
  • System configuration and configuration data
  • Business processes for a fully integrated system
  • Recovery systems and hot sites (for business continuity and disaster recovery testing)
  • Operational and maintenance processes
  • Forms
  • Reports
  • Existing and converted production data

Typical defects and failures

Examples of typical defects for any form of acceptance testing include:

  • System workflows do not meet business or user requirements
  • Business rules are not implemented correctly
  • System does not satisfy contractual or regulatory requirements
  • Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform

Specific approaches and responsibilities

Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well.

Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:

  • Acceptance testing of a COTS software product may occur when it is installed or integrated
  • Acceptance testing of a new functional enhancement may occur before system testing

In iterative development, project teams can employ various forms of acceptance testing during and at the end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and those focused on validating that a new feature satisfies the users’ needs. In addition, alpha tests and beta tests may occur, either at the end of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contractual acceptance tests also may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations.

Test Types

A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives. Such objectives may include:

  • Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness
  • Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
  • Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified
  • Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behaviour resulting from software or environment changes (regression testing)

Functional Testing

Functional testing of a system involves tests that evaluate functions that the system should perform. Functional requirements may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented. The functions are “what” the system should do. 

Functional tests should be performed at all test levels (e.g., tests for components may be based on a component specification), though the focus is different at each level. 

Functional testing considers the behaviour of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system. 

The thoroughness of functional testing can be measured through functional coverage. Functional coverage is the extent to which some functionality has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and functional requirements, the percentage of these requirements which are addressed by testing can be calculated, potentially identifying coverage gaps.
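
For example, under the assumption that traceability is recorded as a simple mapping from tests to requirement IDs (all identifiers below are invented), the calculation could look like this:

    # Sketch: computing functional coverage from traceability data.
    requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
    tests_to_requirements = {
        "test_login": {"REQ-1"},
        "test_transfer": {"REQ-2", "REQ-3"},
    }

    covered = set().union(*tests_to_requirements.values())
    coverage = 100 * len(covered & requirements) / len(requirements)
    print(f"functional coverage: {coverage:.0f}%")             # 75%
    print(f"coverage gaps: {sorted(requirements - covered)}")  # ['REQ-4']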

Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves (e.g., geological modelling software for the oil and gas industries). 

Non-functional Testing

Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance efficiency or security. Non-functional testing is the testing of “how well” the system behaves. 

Contrary to common misperceptions, non-functional testing can and often should be performed at all test levels, and done as early as possible. The late discovery of non-functional defects can be extremely dangerous to the success of a project. 

Black-box techniques may be used to derive test conditions and test cases for non-functional testing. For example, boundary value analysis can be used to define the stress conditions for performance tests. 
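
A small sketch of that idea, assuming a specified limit of 1,000 concurrent users (the number is invented): boundary value analysis suggests load levels just below, at, and just above the limit.

    # Sketch: boundary value analysis applied to a performance limit.
    MAX_CONCURRENT_USERS = 1000  # hypothetical specified limit

    def load_levels(limit):
        """Load levels just below, at, and just above the boundary."""
        return [limit - 1, limit, limit + 1]

    print(load_levels(MAX_CONCURRENT_USERS))  # [999, 1000, 1001]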

The thoroughness of non-functional testing can be measured through non-functional coverage. Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps. 

Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology (e.g., security vulnerabilities associated with particular programming languages) or the particular user base (e.g., the personas of users of healthcare facility management systems).

White-box Testing

White-box testing derives tests based on the system’s internal structure or implementation. Internal structure may include code, architecture, work flows, and/or data flows within the system. 

The thoroughness of white-box testing can be measured through structural coverage. Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered. 

At the component testing level, code coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code (coverage items) such as the percentage of executable statements tested in the component, or the percentage of decision outcomes tested. These types of coverage are collectively called code coverage. At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests. 
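
The difference between statement and decision coverage can be seen on a deliberately tiny, invented component; the figures in the comments are reasoned by hand, though a coverage tool measuring branches would report the same picture.

    # Sketch: statement vs. decision coverage on a tiny component.
    def fee(amount):
        if amount > 100:          # a decision with two outcomes
            return amount * 0.01  # statement A
        return 0.0                # statement B

    def test_fee_large():
        assert fee(200) == 2.0    # covers the True outcome and statement A

    def test_fee_small():
        assert fee(50) == 0.0     # covers the False outcome and statement B

    # Running only test_fee_large leaves statement B and the False
    # outcome unexercised; adding test_fee_small brings both statement
    # and decision coverage of fee() to 100%.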

White-box test design and execution may involve special skills or knowledge, such as the way the code is built, how data is stored (e.g., to evaluate possible database queries), and how to use coverage tools and to correctly interpret their results.

Change-related Testing

When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.

  • Confirmation testing: After a defect is fixed, the software may be tested with all test cases that failed due to the defect, which should be re-executed on the new software version. The software may also be tested with new tests to cover changes needed to fix the defect. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.
  • Regression testing: It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behaviour of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions. Regression testing involves running tests to detect such unintended side-effects.

Confirmation testing and regression testing are performed at all test levels.

Especially in iterative and incremental development lifecycles (e.g., Agile), new features, changes to existing features, and code refactoring result in frequent changes to the code, which also requires change-related testing. Due to the evolving nature of the system, confirmation and regression testing are very important. This is particularly relevant for Internet of Things systems where individual objects (e.g., devices) are frequently updated or replaced.

Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation. Automation of these tests should start early in the project.
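
One common convention (sketched below; the marker name and the login helper are project-specific inventions, not a standard) is to tag automated tests so the regression suite can be selected and run on every change:

    # Sketch: selecting a regression suite with pytest markers.
    import pytest

    def login(user, password):
        """Hypothetical stand-in for existing, unchanged functionality."""
        return bool(user) and bool(password)

    @pytest.mark.regression
    def test_login_still_works():
        assert login("alice", "secret") is True

    # The suite can then be run on every change, e.g. in a CI pipeline:
    #   pytest -m regression
    # (Registering the marker in the pytest configuration avoids warnings.)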

Test Types and Test Levels

It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests will be given across all test levels, for a banking application, starting with functional tests: 

  • For component testing, tests are designed based on how a component should calculate compound interest.
  • For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.
  • For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.
  • For system integration testing, tests are designed based on how the system uses an external micro-service to check an account holder’s credit score.
  • For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.

The following are examples of non-functional tests:

  • For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation. 
  • For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic. 
  • For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices. 
  • For system integration testing, reliability tests are designed to evaluate system robustness if the credit score micro-service fails to respond. 
  • For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.

The following are examples of white-box tests:

  • For component testing, tests are designed to achieve complete statement and decision coverage for all components that perform financial calculations. 
  • For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.
  • For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.
  • For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score micro-service.
  • For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.

Finally, the following are examples for change-related tests:

  • For component testing, automated regression tests are built for each component and included within the continuous integration framework.
  • For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
  • For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.
  • For system integration testing, tests of the application interacting with the credit scoring micro-service are re-executed daily as part of continuous deployment of that micro-service.
  • For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed.

While this section provides examples of every test type across every level, it is not necessary, for all software, to have every test type represented across every level. However, it is important to run applicable test types at each level, especially the earliest level where the test type occurs.

Maintenance Testing

Once deployed to production environments, software and systems need to be maintained. Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability. 

When any changes are made as part of maintenance, maintenance testing should be performed, both to evaluate the success with which the changes were made and to check for possible side-effects (e.g., regressions) in parts of the system that remain unchanged (which is usually most of the system). Maintenance can involve planned releases and unplanned releases (hot fixes). 

A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on: 

  • The degree of risk of the change, for example, the degree to which the changed area of software communicates with other components or systems
  • The size of the existing system
  • The size of the change

Triggers for Maintenance

There are several reasons why software maintenance, and thus maintenance testing, takes place, both for planned and unplanned changes. 

We can classify the triggers for maintenance as follows: 

  • Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities
  • Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained
  • Retirement, such as when an application reaches the end of its life. When an application or system is retired, this can require testing of data migration or archiving if long data-retention periods are required. Testing of restore/retrieve procedures after archiving may also be needed, and regression testing may be needed to ensure that any functionality that remains in service still works.

For Internet of Things systems, maintenance testing may be triggered by the introduction of completely new or modified things, such as hardware devices and software services, into the overall system. The maintenance testing for such systems places particular emphasis on integration testing at different levels (e.g., network level, application level) and on security aspects, in particular those relating to personal data. 

Impact Analysis for Maintenance

Impact analysis evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change. Impact analysis can also help to identify the impact of a change on existing tests. The side effects and affected areas in the system need to be tested for regressions, possibly after updating any existing tests affected by the change. 

Impact analysis may be done before a change is made, to help decide if the change should be made, based on the potential consequences in other areas of the system. 

Impact analysis can be difficult if: 

  • Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
  • Test cases are not documented or are out of date
  • Bi-directional traceability between tests and the test basis has not been maintained
  • Tool support is weak or non-existent
  • The people involved do not have domain and/or system knowledge
  • Insufficient attention has been paid to the software’s maintainability during development

Agile Software Development

Basics of Agile Software Development

A tester on an Agile project will work differently than one working on a traditional project. Testers must understand the values and principles that underpin Agile projects, and how testers are an integral part of a whole-team approach together with developers and business representatives. The members in an Agile project communicate with each other early and frequently, which helps with removing defects early and developing a quality product. 

Agile Software Development and the Agile Manifesto 

In 2001, a group of individuals, representing the most widely used lightweight software development methodologies, agreed on a common set of values and principles which became known as the Manifesto for Agile Software Development or the Agile Manifesto [Agile-manifesto]. The Agile Manifesto contains four statements of values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The Agile Manifesto argues that although the concepts on the right have value, those on the left have greater value.

Individuals and Interactions

Agile development is very people-centred. Teams of people build software, and it is through continuous communication and interaction, rather than a reliance on tools or processes, that teams can work most effectively.

Working Software

From a customer perspective, working software is much more useful and valuable than overly detailed documentation and it provides an opportunity to give the development team rapid feedback. In addition, because working software, albeit with reduced functionality, is available much earlier in the development lifecycle, Agile development can confer significant time-to-market advantage. Agile development is, therefore, especially useful in rapidly changing business environments where the problems and/or solutions are unclear or where the business wishes to innovate in new problem domains.

Customer Collaboration

Customers often find great difficulty in specifying the system that they require. Collaborating directly with the customer improves the likelihood of understanding exactly what the customer requires. While having contracts with customers may be important, working in regular and close collaboration with them is likely to bring more success to the project.

Responding to Change 

Change is inevitable in software projects. The environment in which the business operates, legislation, competitor activity, technology advances, and other factors can have major influences on the project and its objectives. These factors must be accommodated by the development process. As such, having flexibility in work practices to embrace change is more important than simply adhering rigidly to a plan.

Principles 

The core Agile Manifesto values are captured in twelve principles:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, at intervals from a few weeks to a few months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity—the art of maximising the amount of work not done—is essential.
  • The best architectures, requirements, and designs emerge from self-organising teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

The different Agile methodologies provide prescriptive practices to put these values and principles into action.

Whole-Team Approach

The whole-team approach means involving everyone with the knowledge and skills necessary to ensure project success. The team includes representatives from the customer and other business stakeholders who determine product features. The team should be relatively small; successful teams have been observed with as few as three people and as many as nine. Ideally, the whole team shares the same workspace, as co-location strongly facilitates communication and interaction. The whole-team approach is supported through the daily stand-up meetings involving all members of the team, where work progress is communicated and any impediments to progress are highlighted. The whole-team approach promotes more effective and efficient team dynamics.

The use of a whole-team approach to product development is one of the main benefits of Agile development. Its benefits include:

  • Enhancing communication and collaboration within the team
  • Enabling the various skill sets within the team to be leveraged to the benefit of the project
  • Making quality everyone’s responsibility

The whole team is responsible for quality in Agile projects. The essence of the whole-team approach lies in the testers, developers, and the business representatives working together in every step of the development process. Testers will work closely with both developers and business representatives to ensure that the desired quality levels are achieved. This includes supporting and collaborating with business representatives to help them create suitable acceptance tests, working with developers to agree on the testing strategy, and deciding on test automation approaches. Testers can thus transfer and extend testing knowledge to other team members and influence the development of the product.

The whole team is involved in any consultations or meetings in which product features are presented, analysed, or estimated. The concept of involving testers, developers, and business representatives in all feature discussions is known as the power of three.

Early and Frequent Feedback

Agile projects have short iterations enabling the project team to receive early and continuous feedback on product quality throughout the development lifecycle. One way to provide rapid feedback is by continuous integration.

When sequential development approaches are used, the customer often does not see the product until the project is nearly completed. At that point, it is often too late for the development team to effectively address any issues the customer may have. By getting frequent customer feedback as the project progresses, Agile teams can incorporate most new changes into the product development process. Early and frequent feedback helps the team focus on the features with the highest business value, or associated risk, and these are delivered to the customer first. It also helps manage the team better since the capability of the team is transparent to everyone. For example, how much work can we do in a sprint or iteration? What could help us go faster? What is preventing us from doing so? 

The benefits of early and frequent feedback include:

  • Avoiding requirements misunderstandings, which may not have been detected until later in the development cycle when they are more expensive to fix.
  • Clarifying customer feature requests, making them available for customer use early. This way, the product better reflects what the customer wants. 
  • Discovering (via continuous integration), isolating, and resolving quality problems early.
  • Providing information to the Agile team regarding its productivity and ability to deliver.
  • Promoting consistent project momentum.

Aspects of Agile Approaches

There are a number of Agile approaches in use by organisations. Common practices across most Agile organisations include collaborative user story creation, retrospectives, continuous integration, and planning for each iteration as well as for overall release. This subsection describes some of the Agile approaches.

Agile Software Development Approaches

There are several Agile approaches, each of which implements the values and principles of the Agile Manifesto in different ways. In this article, three representative Agile approaches are considered: Extreme Programming (XP), Scrum, and Kanban.

Extreme Programming

Extreme Programming (XP) is an Agile approach to software development described by certain values, principles, and development practices.

XP embraces five values to guide development: communication, simplicity, feedback, courage, and respect.

XP describes a set of principles as additional guidelines: humanity, economics, mutual benefit, self-similarity, improvement, diversity, reflection, flow, opportunity, redundancy, failure, quality, baby steps, and accepted responsibility.

XP describes thirteen primary practices: sit together, whole team, informative workspace, energised work, pair programming, stories, weekly cycle, quarterly cycle, slack, ten-minute build, continuous integration, test first programming, and incremental design. 

Many of the Agile software development approaches in use today are influenced by XP and its values and principles. For example, Agile teams following Scrum often incorporate XP practices.

Scrum 

Scrum is an Agile management framework which contains the following constituent instruments and practices: 

  • Sprint: Scrum divides a project into iterations (called sprints) of fixed length (usually two to four weeks).
  • Product Increment: Each sprint results in a potentially releasable/shippable product (called an increment).
  • Product Backlog: The product owner manages a prioritised list of planned product items (called the product backlog). The product backlog evolves from sprint to sprint (called backlog refinement).
  • Sprint Backlog: At the start of each sprint, the Scrum team selects a set of highest priority items (called the sprint backlog) from the product backlog. Since the Scrum team, not the product owner, selects the items to be realised within the sprint, the selection is referred to as being on the pull principle rather than the push principle.
  • Definition of Done: To make sure that there is a potentially releasable product at each sprint’s end, the Scrum team discusses and defines appropriate criteria for sprint completion. The discussion deepens the team’s understanding of the backlog items and the product requirements.
  • Time-boxing: Only those tasks, requirements, or features that the team expects to finish within the sprint are part of the sprint backlog. If the development team cannot finish a task within a sprint, the associated product features are removed from the sprint and the task is moved back into the product backlog. Time-boxing applies not only to tasks, but in other situations (e.g., enforcing meeting start and end times).
  • Transparency: The development team reports and updates sprint status on a daily basis at a meeting called the daily scrum. This makes the content and progress of the current sprint, including test results, visible to the team, management, and all interested parties. For example, the development team can show sprint status on a whiteboard.

Scrum defines three roles:

  • Scrum Master: ensures that Scrum practices and rules are implemented and followed, and resolves any violations, resource issues, or other impediments that could prevent the team from following the practices and rules. This person is not the team lead, but a coach.
  • Product Owner: represents the customer, and generates, maintains, and prioritises the product backlog. This person is not the team lead.
  • Development Team: develops and tests the product. The team is self-organised: there is no team lead, so the team makes the decisions. The team is also cross-functional.

Scrum (as opposed to XP) does not dictate specific software development techniques (e.g., test first programming). In addition, Scrum does not provide guidance on how testing has to be done in a Scrum project.
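
To make the pull principle and velocity-based sprint backlog selection described above more concrete, here is a minimal sketch in Python. The story names, point estimates, and velocity figure are invented for illustration and are not prescribed by Scrum itself.

    # Minimal sketch: pull-based sprint backlog selection, capped by velocity.
    # Story names, point estimates, and the velocity figure are invented.
    from dataclasses import dataclass

    @dataclass
    class Story:
        name: str
        points: int  # the team's size estimate for the story

    def select_sprint_backlog(product_backlog: list[Story], velocity: int) -> list[Story]:
        """The team pulls the highest-priority stories it expects to finish
        within the sprint; whatever does not fit stays in the product backlog."""
        sprint_backlog, committed = [], 0
        for story in product_backlog:  # assumed to be ordered by priority
            if committed + story.points <= velocity:
                sprint_backlog.append(story)
                committed += story.points
        return sprint_backlog

    backlog = [Story("Login", 5), Story("Interest calculation", 8), Story("Audit log", 3)]
    print([s.name for s in select_sprint_backlog(backlog, velocity=10)])
    # -> ['Login', 'Audit log']; the 8-point story waits for a later sprint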

Kanban

Kanban is a management approach that is sometimes used in Agile projects. The general objective is to visualise and optimise the flow of work within a value-added chain. Kanban utilises three instruments:

  • Kanban Board: The value chain to be managed is visualised by a Kanban board. Each column shows a station, which is a set of related activities, e.g., development or testing. The items to be produced or tasks to be processed are symbolised by tickets moving from left to right across the board through the stations.
  • Work-in-Progress Limit: The number of tasks active in parallel is strictly limited. This is controlled by the maximum number of tickets allowed for a station and/or globally for the board. Whenever a station has free capacity, the worker pulls a ticket from the predecessor station.
  • Lead Time: Kanban is used to optimise the continuous flow of tasks by minimising the (average) lead time for the complete value stream.

Kanban features some similarities to Scrum. In both frameworks, visualising the active tasks (e.g., on a public whiteboard) provides transparency of content and progress of tasks. Tasks not yet scheduled are waiting in a backlog and moved onto the Kanban board as soon as there is new space (production capacity) available.

Iterations or sprints are optional in Kanban. The Kanban process allows its deliverables to be released item by item, rather than as part of a release. Time-boxing as a synchronising mechanism is therefore optional, unlike in Scrum, which synchronises all tasks within a sprint.
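
As a rough illustration of the board and the work-in-progress limit, here is a minimal Python sketch. The station names, WIP limits, and ticket identifiers are invented for illustration.

    # Minimal sketch: a Kanban board with per-station work-in-progress limits.
    # Station names, WIP limits, and ticket identifiers are invented.
    class KanbanBoard:
        def __init__(self, limits: dict[str, int]):
            self.limits = limits                          # station -> WIP limit, left to right
            self.columns = {name: [] for name in limits}  # tickets currently at each station

        def add(self, station: str, ticket: str) -> bool:
            """Place a ticket on a station only if its WIP limit allows it."""
            if len(self.columns[station]) < self.limits[station]:
                self.columns[station].append(ticket)
                return True
            return False  # station is full; the ticket waits upstream

        def pull(self, into: str) -> None:
            """A station with free capacity pulls the oldest ticket from its
            predecessor (assumes 'into' is not the leftmost station)."""
            stations = list(self.limits)
            prev = stations[stations.index(into) - 1]
            if self.columns[prev] and len(self.columns[into]) < self.limits[into]:
                self.columns[into].append(self.columns[prev].pop(0))

    board = KanbanBoard({"development": 3, "testing": 2, "done": 99})
    for ticket in ("T1", "T2", "T3"):
        board.add("development", ticket)
    board.pull(into="testing")  # testing has free capacity, so it pulls T1
    print(board.columns)        # {'development': ['T2', 'T3'], 'testing': ['T1'], 'done': []}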

Collaborative User Story Creation

Poor specifications are often a major reason for project failure. Specification problems can result from the users’ lack of insight into their true needs, absence of a global vision for the system, redundant or contradictory features, and other miscommunications. In Agile development, user stories are written to capture requirements from the perspectives of developers, testers, and business representatives. In sequential development, this shared vision of a feature is accomplished through formal reviews after requirements are written; in Agile development, this shared vision is accomplished through frequent informal reviews while the requirements are being written.

The user stories must address both functional and non-functional characteristics. Each story includes acceptance criteria for these characteristics. These criteria should be defined in collaboration between business representatives, developers, and testers. They provide developers and testers with an extended vision of the feature that business representatives will validate. An Agile team considers a task finished when a set of acceptance criteria have been satisfied.

Typically, the tester’s unique perspective will improve the user story by identifying missing details or non-functional requirements. A tester can contribute by asking business representatives open-ended questions about the user story, proposing ways to test the user story, and confirming the acceptance criteria.

The collaborative authorship of the user story can use techniques such as brainstorming and mind mapping. The tester may use the INVEST technique [INVEST]:

  • Independent
  • Negotiable
  • Valuable
  • Estimable
  • Small
  • Testable

According to the 3C concept, a user story is the conjunction of three elements:

  • Card: The card is the physical media describing a user story. It identifies the requirement, its criticality, expected development and test duration, and the acceptance criteria for that story.
    The description has to be accurate, as it will be used in the product backlog.
  • Conversation: The conversation explains how the software will be used. The conversation can be documented or verbal. Testers, having a different point of view than developers and business representatives, bring valuable input to the exchange of thoughts, opinions, and experiences. Conversation begins during the release-planning phase and continues when the story is scheduled.
  • Confirmation: The acceptance criteria, discussed in the conversation, are used to confirm that the story is done. These acceptance criteria may span multiple user stories. Both positive and negative tests should be used to cover the criteria. During confirmation, various participants play the role of a tester. These can include developers as well as specialists focused on performance, security, interoperability, and other quality characteristics. To confirm a story as done, the defined acceptance criteria should be tested and shown to be satisfied.

Agile teams vary in terms of how they document user stories. Regardless of the approach taken to document user stories, documentation should be concise, sufficient, and necessary.
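
To illustrate the confirmation element described above, acceptance criteria can often be expressed directly as automated checks. Below is a minimal pytest-style sketch for a hypothetical story ("as an account holder, I earn monthly interest on my balance"); the interest function, the rate, and the criteria are invented for illustration.

    # Minimal sketch: acceptance criteria for a hypothetical user story,
    # expressed as positive and negative automated checks (pytest style).
    # The function, rate, and criteria are invented for illustration.
    import pytest

    MONTHLY_RATE = 0.01  # assumed criterion: 1% interest per month

    def monthly_interest(balance: float) -> float:
        if balance < 0:
            raise ValueError("interest is not paid on negative balances")
        return round(balance * MONTHLY_RATE, 2)

    def test_interest_on_positive_balance():    # positive test
        assert monthly_interest(1000.00) == 10.00

    def test_zero_balance_earns_nothing():      # boundary test
        assert monthly_interest(0.0) == 0.0

    def test_negative_balance_is_rejected():    # negative test
        with pytest.raises(ValueError):
            monthly_interest(-50.0)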

Retrospectives

In Agile development, a retrospective is a meeting held at the end of each iteration to discuss what was successful, what could be improved, and how to incorporate the improvements and retain the successes in future iterations. Retrospectives cover topics such as the process, people, organisations, relationships, and tools. Regularly conducted retrospective meetings, when appropriate follow-up activities occur, are critical to self-organisation and continual improvement of development and testing.

Retrospectives can result in test-related improvement decisions focused on test effectiveness, test productivity, test case quality, and team satisfaction. They may also address the testability of the applications, user stories, features, or system interfaces. Root cause analysis of defects can drive testing and development improvements. In general, teams should implement only a few improvements per iteration. This allows for continuous improvement at a sustained pace.

The timing and organisation of the retrospective depend on the particular Agile method followed. Business representatives and the team attend each retrospective as participants while the facilitator organises and runs the meeting. In some cases, the teams may invite other participants to the meeting.

Testers should play an important role in the retrospectives. Testers are part of the team and bring their unique perspective. Testing occurs in each sprint and vitally contributes to success. All team members, testers and non-testers, can provide input on both testing and non-testing activities.

Retrospectives must occur within a professional environment characterised by mutual trust. The attributes of a successful retrospective are the same as those for any other review, as discussed in previous articles.

Continuous Integration

Delivery of a product increment requires reliable, working, integrated software at the end of every sprint. Continuous integration addresses this challenge by merging all changes made to the software and integrating all changed components regularly, at least once a day. Configuration management, compilation, software build, deployment, and testing are wrapped into a single, automated, repeatable process. Since developers integrate their work constantly, build constantly, and test constantly, defects in code are detected more quickly.

Following the developers’ coding, debugging, and check-in of code into a shared source code repository, a continuous integration process consists of the following automated activities:

  • Static code analysis: executing static code analysis and reporting results
  • Compile: compiling and linking the code, generating the executable files
  • Unit test: executing the unit tests, checking code coverage and reporting test results
  • Deploy: installing the build into a test environment
  • Integration test: executing the integration tests and reporting results
  • Report (dashboard): posting the status of all these activities to a publicly visible location or e-mailing status to the team
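
As a rough sketch, these activities can be chained in a driver script that fails fast and reports status; in practice they are usually defined in a CI server's own configuration format. The tool commands below (linter, build, test runner, deployment script) are placeholders, not a prescribed toolchain.

    # Minimal sketch: a continuous integration driver chaining the activities
    # above. The commands are placeholders for whatever tools a team uses.
    import subprocess
    import sys

    STAGES = [
        ("static analysis",   ["flake8", "src/"]),
        ("compile/build",     ["python", "-m", "build"]),
        ("unit tests",        ["pytest", "tests/unit"]),
        ("deploy",            ["./deploy_to_test_env.sh"]),  # placeholder script
        ("integration tests", ["pytest", "tests/integration"]),
    ]

    def run_pipeline() -> None:
        for name, command in STAGES:
            print(f"--- {name}: {' '.join(command)}")
            if subprocess.run(command).returncode != 0:
                print(f"FAILED at '{name}'; reporting status to the team")
                sys.exit(1)  # fail fast so integration defects surface quickly
        print("Build green: posting status to the team dashboard")

    if __name__ == "__main__":
        run_pipeline()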

An automated build and test process takes place on a daily basis and detects integration errors early and quickly. Continuous integration allows Agile testers to run automated tests regularly, in some cases as part of the continuous integration process itself, and send quick feedback to the team on the quality of the code. These test results are visible to all team members, especially when automated reports are integrated into the process. Automated regression testing can be continuous throughout the iteration. Good automated regression tests cover as much functionality as possible, including user stories delivered in the previous iterations. Good coverage in the automated regression tests helps support building (and testing) large integrated systems. When the regression testing is automated, the Agile testers are freed to concentrate their manual testing on new features, implemented changes, and confirmation testing of defect fixes.

In addition to automated tests, organisations using continuous integration typically use build tools to implement continuous quality control. In addition to running unit and integration tests, such tools can run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code, and facilitate manual quality assurance processes. This continuous application of quality control aims to improve the quality of the product as well as reduce the time taken to deliver it by replacing the traditional practice of applying quality control after completing all development.

Build tools can be linked to automatic deployment tools, which can fetch the appropriate build from the continuous integration or build server and deploy it into one or more development, test, staging, or even production environments. This reduces the errors and delays associated with relying on specialised staff or programmers to install releases in these environments.

Continuous integration can provide the following benefits:

  • Allows earlier detection and easier root cause analysis of integration problems and conflicting changes
  • Gives the development team regular feedback on whether the code is working
  • Keeps the version of the software being tested within a day of the version being developed
  • Reduces regression risk associated with developer code refactoring due to rapid re-testing of the code base after each small set of changes
  • Provides confidence that each day’s development work is based on a solid foundation
  • Makes progress toward the completion of the product increment visible, encouraging developers and testers
  • Eliminates the schedule risks associated with big-bang integration
  • Provides constant availability of executable software throughout the sprint for testing, demonstration, or education purposes
  • Reduces repetitive manual testing activities
  • Provides quick feedback on decisions made to improve quality and tests

However, continuous integration is not without its risks and challenges:

  • Continuous integration tools have to be introduced and maintained
  • The continuous integration process must be defined and established
  • Test automation requires additional resources and can be complex to establish
  • Thorough test coverage is essential to achieve automated testing advantages
  • Teams sometimes over-rely on unit tests and perform too little system and acceptance testing

Continuous integration requires the use of tools, including tools for testing, tools for automating the build process, and tools for version control.

Release and Iteration Planning 

As mentioned in this article, planning is an on-going activity, and this is the case in Agile lifecycles as well. For Agile lifecycles, two kinds of planning occur, release planning and iteration planning. 

Release planning looks ahead to the release of a product, often a few months ahead of the start of a project. Release planning defines and re-defines the product backlog, and may involve refining larger user stories into a collection of smaller stories. Release planning provides the basis for a test approach and test plan spanning all iterations. Release plans are high-level. 

In release planning, business representatives establish and prioritise the user stories for the release, in collaboration with the team. Based on these user stories, project and quality risks are identified and a high-level effort estimation is performed.

Testers are involved in release planning and especially add value in the following activities:

  • Defining testable user stories, including acceptance criteria
  • Participating in project and quality risk analyses
  • Estimating testing effort associated with the user stories
  • Defining the necessary test levels
  • Planning the testing for the release

After release planning is done, iteration planning for the first iteration starts. Iteration planning looks ahead to the end of a single iteration and is concerned with the iteration backlog.

In iteration planning, the team selects user stories from the prioritised release backlog, elaborates the user stories, performs a risk analysis for the user stories, and estimates the work needed for each user story. If a user story is too vague and attempts to clarify it have failed, the team can refuse to accept it and use the next user story based on priority. The business representatives must answer the team’s questions about each story so the team can understand what they should implement and how to test each story.

The number of stories selected is based on established team velocity and the estimated size of the selected user stories. After the contents of the iteration are finalised, the user stories are broken into tasks, which will be carried out by the appropriate team members.

Testers are involved in iteration planning and especially add value in the following activities:

  • Participating in the detailed risk analysis of user stories
  • Determining the testability of the user stories
  • Creating acceptance tests for the user stories
  • Breaking down user stories into tasks (particularly testing tasks)
  • Estimating testing effort for all testing tasks
  • Identifying functional and non-functional aspects of the system to be tested
  • Supporting and participating in test automation at multiple levels of testing

Release plans may change as the project proceeds, including changes to individual user stories in the product backlog. These changes may be triggered by internal or external factors. Internal factors include delivery capabilities, velocity, and technical issues. External factors include the discovery of new markets and opportunities, new competitors, or business threats that may change release objectives and/or target dates. In addition, iteration plans may change during an iteration. For example, a particular user story that was considered relatively simple during estimation might prove more complex than expected.

These changes can be challenging for testers. Testers must understand the big picture of the release for test planning purposes, and they must have an adequate test basis and test oracle in each iteration for test development purposes as discussed in earlier articles. The required information must be available to the tester early, and yet change must be embraced according to Agile principles. This dilemma requires careful decisions about test strategies and test documentation.

Release and iteration planning should address test planning as well as planning for development activities. Particular test-related issues to address include:

  • The scope of testing, the extent of testing for those areas in scope, the test goals, and the reasons for these decisions.
  • The team members who will carry out the test activities.
  • The test environment and test data needed, when they are needed, and whether any additions or changes to the test environment and/or data will occur prior to or during the project.
  • The timing, sequencing, dependencies, and prerequisites for the functional and non-functional test activities (e.g., how frequently to run regression tests, which features depend on other features or test data, etc.), including how the test activities relate to and depend on development activities.
  • The project and quality risks to be addressed.

In addition, the larger team estimation effort should include consideration of the time and effort needed to complete the required testing activities.

Basics of Testing

What is Testing?

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, or business reputation, and even injury or death. Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. As described later in this article, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analysing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.

Another common misperception of testing is that it focuses entirely on verification of requirements, user stories, or other specifications. While testing does involve checking whether the system meets specified requirements, it also involves validation, which is checking whether the system will meet user and other stakeholder needs in its operational environment(s).

Test activities are organised and carried out differently in different lifecycles.

Typical Objectives of Testing

For any given project, the objectives of testing may include: 

  • To prevent defects by evaluating work products such as requirements, user stories, design, and code
  • To verify whether all specified requirements have been fulfilled 
  • To check whether the test object is complete and validate if it works as the users and other stakeholders expect
  • To build confidence in the level of quality of the test object 
  • To find defects and failures, thus reducing the level of risk of inadequate software quality
  • To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
  • To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:

  • During component testing, one objective may be to find as many failures as possible so that the underlying defects are identified and fixed early. Another objective may be to increase code coverage of the component tests.
  • During acceptance testing, one objective may be to confirm that the system works as expected and satisfies requirements. Another objective of this testing may be to give information to stakeholders about the risk of releasing the system at a given time.

Testing and Debugging

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyses, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and the associated component and component integration testing (continuous integration). However, in Agile development and in some other software development lifecycles, testers may be involved in debugging and component testing.

Why is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s Contributions to Success

Throughout the history of computing, it has been quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include: 

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

In addition to these examples, the achievement of defined test objectives contributes to overall software development and maintenance success.

Quality Assurance and Testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organisation with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing. As described earlier, testing contributes to the achievement of quality in a variety of ways.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
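
A contrived Python example of such a defect: the code below is wrong for every February of a leap year, yet it passes for most inputs, so the failure may occur rarely or never during testing.

    # Contrived sketch: a defect that only a specific input exposes.
    def days_in_month(month: int, year: int) -> int:
        if month == 2:
            return 28  # defect: leap years are ignored
        return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

    assert days_in_month(2, 2023) == 28  # passes: the defect stays hidden
    assert days_in_month(2, 2024) == 29  # raises AssertionError: only a
                                         # leap-year February triggers the failure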

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused by defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other test-ware, or for other reasons. The inverse situation, a false negative, can also occur. False negatives are tests that fail to detect defects they should have detected; false positives are test results reported as defects when no defect is actually present.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analysed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced. 

For example, suppose incorrect interest payments, caused by a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.

Seven Testing Principles

A number of testing principles have been suggested over the past 50 years and offer general guidelines common for all testing. 

1. Testing shows the presence of defects, not their absence 

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness. 

2. Exhaustive testing is impossible 

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts. 
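
A quick back-of-the-envelope calculation shows why. Even a function taking just three 32-bit integers has an input space that no amount of test execution could cover:

    # Why exhaustive testing is infeasible: three 32-bit integer inputs alone
    # give roughly 7.9e28 combinations.
    combinations = (2 ** 32) ** 3
    tests_per_second = 1_000_000
    years = combinations / tests_per_second / (60 * 60 * 24 * 365)
    print(f"{combinations:.2e} combinations ≈ {years:.1e} years at a million tests per second")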

3. Early testing saves time and money 

To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.

4. Defects cluster together 

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in principle 2).

5. Beware of the pesticide paradox 

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

6. Testing is context dependent 

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

7. Absence-of-errors is a fallacy 

Some organisations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations, or that is inferior compared to other competing systems.

Test Process

There is no one universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives. These sets of test activities are a test process. The proper, specific software test process in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organisation’s test strategy.

Test Process in Context 

Contextual factors that influence the test process for an organisation include, but are not limited to:

  • Software development lifecycle model and project methodologies being used
  • Test levels and test types being considered
  • Product and project risks
  • Business domain
  • Operational constraints, including but not limited to:
    • Budgets and resources
    • Timescales
    • Complexity
    • Contractual and regulatory requirements 
  • Organisational policies and practices 
  • Required internal and external standards

The following sections describe general aspects of organisational test processes in terms of the following: 

  • Test activities and tasks 
  • Test work products 
  • Traceability between the test basis and test work products

It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
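
A minimal sketch of such a coverage check follows; the requirement identifiers, device names, and traceability data are invented for illustration.

    # Minimal sketch: checking a "one test case per test basis element" criterion.
    # Requirement IDs, device names, and traceability data are invented.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}
    devices = {"Phone-A", "Phone-B"}
    test_basis = requirements | devices

    # Each test case records which elements of the test basis it covers.
    test_cases = {
        "TC-01": {"REQ-1", "Phone-A"},
        "TC-02": {"REQ-2", "Phone-A", "Phone-B"},
    }

    covered = set().union(*test_cases.values())
    uncovered = test_basis - covered
    print(f"Coverage: {len(covered)}/{len(test_basis)} elements of the test basis")
    print("Not yet covered:", sorted(uncovered))  # -> ['REQ-3']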

Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design 
  • Test implementation
  • Test execution
  • Test completion

Each main group of activities is composed of constituent activities, which will be described in the subsections below. Each constituent activity consists of multiple individual tasks, which would vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning. So test activities are also happening on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main groups of activities will involve overlap, combination, concurrency, or omission, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models. For example, the evaluation of exit criteria for test execution as part of a given test level may include: 

  • Checking test results and logs against specified coverage criteria
  • Assessing the level of component or system quality based on test results and logs
  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.
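
As a small illustration of comparing actual against planned progress, consider the sketch below; the planned figures and daily counts are invented.

    # Minimal sketch: monitoring actual test execution against the plan.
    # Planned and actual cumulative counts are invented for illustration.
    planned_by_day = {1: 10, 2: 20, 3: 30}   # cumulative test cases planned
    executed_by_day = {1: 8, 2: 15, 3: 28}   # cumulative test cases executed

    for day, planned in planned_by_day.items():
        executed = executed_by_day[day]
        gap = planned - executed
        status = "on track" if gap <= 0 else f"{gap} behind plan"
        print(f"Day {day}: executed {executed}/{planned} ({status})")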

Test analysis

During test analysis, the test basis is analysed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities: 

  • Analysing the test basis appropriate to the test level being considered, for example:
    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behaviour
    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces
    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
  • Evaluating the test basis and test items to identify defects of various types, such as: 
    • Ambiguities
    • Omissions
    • Inconsistencies
    • Inaccuracies
    • Contradictions
    • Superfluous statements
  • Identifying features and sets of features to be tested
  • Defining and prioritising test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. Examples include techniques such as behaviour driven development (BDD) and acceptance test driven development (ATDD), which involve generating test conditions and test cases from user stories and acceptance criteria prior to coding. These techniques also verify, validate, and detect defects in the user stories and acceptance criteria.

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other test-ware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritising test cases and sets of test cases 
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques.

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the test-ware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?” 

Test implementation includes the following major activities:

  • Developing and prioritising test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts 
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  • Building the test environment (including, potentially, test harnesses, service virtualisation, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites

Test design and test implementation tasks are often combined.

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented. 

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and test-ware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results
  • Analysing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked)
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.

Test completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalising and archiving the test environment, the test data, the test infrastructure, and other test-ware for later reuse
  • Handing over the test-ware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analysing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Test Work Products

Test work products are created as part of the test process. Just as there is significant variation in the way that organisations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organised and managed, and in the names used for those work products.

Many of the test work products described in this section can be captured and managed using test management tools and defect management tools.

Test planning work products 

Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control.

Test monitoring and control work products

Test monitoring and control work products typically include various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones. All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarising the test execution results once those become available. 

Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. 

Test monitoring and control, and the work products created during these activities, are further explained on this site.

Test analysis work products

Test analysis work products include defined and prioritised test conditions, each of which is ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test analysis may involve the creation of test charters. Test analysis may also result in the discovery and reporting of defects in the test basis. 

Test design work products

Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results. Such high-level test cases are reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s) it covers.

Test design also results in:

  • the design and/or identification of the necessary test data
  • the design of the test environment
  • the identification of infrastructure and tools

However, the extent to which these results are documented varies significantly.

Test implementation work products

Test implementation work products include:

  • Test procedures and the sequencing of those test procedures
  • Test suites
  • A test execution schedule

Ideally, once test implementation is complete, achievement of coverage criteria established in the test plan can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions.

In some cases, test implementation involves creating work products using or used by tools, such as service virtualisation and automated test scripts.

Test implementation also may result in the creation and verification of test data and the test environment. The completeness of the documentation of the data and/or environment verification results may vary significantly.

The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results which are associated with concrete test data are identified by using a test oracle.
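
As an illustration, a single high-level test case ("a valid transfer debits the source account") becomes several executable low-level test cases once concrete data is bound to it, for example with parametrised tests. The transfer function and the values below are invented:

    # Minimal sketch: one high-level test case bound to several sets of
    # concrete test data (pytest style). Function and values are invented.
    import pytest

    def transfer(balance: float, amount: float) -> float:
        if amount <= 0 or amount > balance:
            raise ValueError("invalid transfer amount")
        return balance - amount

    @pytest.mark.parametrize(
        "balance, amount, expected",  # concrete values come from a test oracle
        [
            (100.0, 40.0, 60.0),
            (100.0, 100.0, 0.0),      # boundary: transfer the whole balance
            (500.0, 0.01, 499.99),
        ],
    )
    def test_valid_transfer_debits_source(balance, amount, expected):
        assert transfer(balance, amount) == pytest.approx(expected)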

In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.

Test conditions defined in test analysis may be further refined in test implementation.

Test execution work products

Test execution work products include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  • Defect reports
  • Documentation about which test item(s), test object(s), test tools, and test-ware were involved in the testing

Ideally, once test execution is complete, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). For example, we can say which requirements have passed all planned tests, which requirements have failed tests and/or have defects associated with them, and which requirements have planned tests still waiting to be run. This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.
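
A minimal sketch of deriving that per-requirement status from execution results via traceability (all identifiers and results invented):

    # Minimal sketch: reporting test status per requirement via traceability.
    # Requirement IDs, test case IDs, and results are invented.
    trace = {  # requirement -> test cases that cover it
        "REQ-1": ["TC-01", "TC-02"],
        "REQ-2": ["TC-03"],
        "REQ-3": ["TC-04"],
    }
    results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"}  # TC-04 not yet run

    for req, cases in trace.items():
        outcomes = [results.get(tc, "not run") for tc in cases]
        if all(outcome == "pass" for outcome in outcomes):
            status = "all planned tests passed"
        elif "fail" in outcomes:
            status = "failing tests and/or associated defects"
        else:
            status = "planned tests still waiting to be run"
        print(f"{req}: {status}")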

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations, change requests or product backlog items, and finalised test-ware.

Traceability between the Test Basis and Test Work Products

As mentioned earlier, test work products and the names of those work products vary significantly. Regardless of these variations, in order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element, as described above. In addition to the evaluation of test coverage, good traceability supports:

  • Analysing the impact of changes
  • Making testing auditable
  • Meeting IT governance criteria
  • Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
  • Relating the technical aspects of testing to stakeholders in terms that they can understand
  • Providing information to assess product quality, process capability, and project progress against business goals

Some test management tools provide test work product models that match part or all of the test work products outlined in this section. Some organisations build their own management systems to organise the work products and provide the information traceability they require.

The Psychology of Testing

Software development, including software testing, involves human beings. Therefore, human psychology has important effects on software testing.

Human Psychology and Testing 

Identifying defects during a static test such as a requirement review or user story refinement session, or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality. To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.

Testers and test managers need to have good interpersonal skills to be able to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues. Ways to communicate well include the following examples:

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
  • Emphasise the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organisation, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
  • Communicate test results and other findings in a neutral, fact-focused way without criticising the person who created the defective item. Write objective and factual defect reports and review findings.
  • Try to understand how the other person feels and the reasons they may react negatively to the information.
  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier. Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviours with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester’s and Developer’s Mindsets

Developers and testers often think differently. The primary objective of development is to design and build a product. As discussed earlier, the objectives of testing include verifying and validating the product, finding defects prior to release, and so forth. These are different sets of objectives which require different mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.

A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to become aware of errors in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organising the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different from that of the work product authors (i.e., business analysts, product owners, designers, and developers), since they have different cognitive biases from the authors.

Test management

Test Organisation

Independent Testing

Testing tasks may be done by people in a specific testing role, or by people in another role (e.g., customers). A certain degree of independence often makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. 

Degrees of independence in testing include the following (from low level of independence to high level):

  • No independent testers; the only form of testing available is developers testing their own code 
  • Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues’ products 
  • Independent test team or group within the organisation, reporting to project management or executive management 
  • Independent testers from the business organisation or user community, or with specialisations in specific test types such as usability, security, performance, regulatory/compliance, or portability 
  • Independent testers external to the organisation, either working on-site (in-house) or off-site (outsourcing)

For most types of projects, it is usually best to have multiple test levels, with some of these levels handled by independent testers. Developers should participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work.

The way in which independence of testing is implemented varies depending on the software development lifecycle model. For example, in Agile development, testers may be part of a development team. In some organisations using Agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organisations, product owners may perform acceptance testing to validate user stories at the end of each iteration.

Potential drawbacks of test independence include:

  • Isolation from the development team, which may lead to a lack of collaboration, delays in providing feedback, or an adversarial relationship with the development team
  • Developers may lose a sense of responsibility for quality
  • Independent testers may be seen as a bottleneck
  • Independent testers may lack some important information (e.g., about the test object)

Many organisations are able to successfully achieve the benefits of test independence while avoiding the drawbacks.

Tasks of a Test Manager and Tester 

In this article, two test roles are covered: test managers and testers. The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organisation.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organisations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  • Develop or review a test policy and test strategy for the organisation 
  • Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management
  • Write and update the test plan(s) 
  • Coordinate the test plan(s) with project managers, product owners, and others 
  • Share testing perspectives with other project activities, such as integration planning 
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities 
  • Prepare and deliver test progress reports and test summary reports based on the information gathered 
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control 
  • Support setting up the defect management system and adequate configuration management of testware
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s) 
  • Decide about the implementation of test environment(s) 
  • Promote and advocate the testers, the test team, and the test profession within the organisation 
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organisation, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans 
  • Analyse, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis) 
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis 
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management 
  • Design and implement test cases and test procedures 
  • Prepare and acquire test data
  • Create the detailed test execution schedule 
  • Execute tests, evaluate the results, and document deviations from expected results 
  • Use appropriate tools to facilitate the test process 
  • Automate tests as needed (may be supported by a developer or a test automation expert)
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability 
  • Review tests developed by others

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

Test Planning and Estimation

Purpose and Content of a Test Plan

A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organisation, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources. 

As the project and test planning progress, more information becomes available and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product’s lifecycle. (Note that the product’s lifecycle may extend beyond a project’s scope to include the maintenance phase.) Feedback from test activities should be used to recognise changing risks so that planning can be adjusted. Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing. Test planning activities may include the following, some of which may be documented in a test plan:

  • Determining the scope, objectives, and risks of testing
  • Defining the overall approach of testing
  • Integrating and coordinating the test activities into the software lifecycle activities
  • Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out
  • Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)
  • Selecting metrics for test monitoring and control
  • Budgeting for the test activities
  • Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)

The content of test plans varies and can extend beyond the topics identified above.

Test Strategy and Test Approach

A test strategy provides a generalised description of the test process, usually at the product or organisational level. Common types of test strategies include:

  • Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritised based on the level of risk.
  • Model-Based: In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
  • Methodical: This type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages. 
  • Process-compliant (or standard-compliant): This type of test strategy involves analysing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organisation. 
  • Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organisation itself.
  • Regression-averse: This type of test strategy is motivated by a desire to avoid regression of existing capabilities. This test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.
  • Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.

An appropriate test strategy is often created by combining several of these types of test strategies. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together.

While the test strategy provides a generalised description of the test process, the test approach tailors the test strategy for a particular project or release. The test approach is the starting point for selecting the test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively). The tailoring of the strategy is based on decisions made in relation to the complexity and goals of the project, the type of product being developed, and product risk analysis. The selected approach depends on the context and may consider factors such as risks, safety, available resources and skills, technology, the nature of the system (e.g., custom-built versus COTS), test objectives, and regulations.

Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done)

In order to exercise effective control over the quality of the software, and of the testing, it is advisable to have criteria which define when a given test activity should start and when the activity is complete. Entry criteria (more typically called definition of ready in Agile development) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, more time-consuming, more costly, and more risky. Exit criteria (more typically called definition of done in Agile development) define what conditions must be achieved in order to declare a test level or a set of tests completed. Entry and exit criteria should be defined for each test level and test type, and will differ based on the test objectives.

Typical entry criteria include: 

  • Availability of testable requirements, user stories, and/or models (e.g., when following a model-based testing strategy)
  • Availability of test items that have met the exit criteria for any prior test levels
  • Availability of test environment
  • Availability of necessary test tools
  • Availability of test data and other necessary resources

Typical exit criteria include:

  • Planned tests have been executed
  • A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks, code) has been achieved 
  • The number of unresolved defects is within an agreed limit 
  • The number of estimated remaining defects is sufficiently low
  • The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient

Even without exit criteria being satisfied, it is also common for test activities to be curtailed due to the budget being expended, the scheduled time being completed, and/or pressure to bring the product to market. It can be acceptable to end testing under such circumstances, if the project stakeholders and business owners have reviewed and accepted the risk to go live without further testing.
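
Exit criteria of this kind can be checked mechanically once the relevant metrics are collected. Below is a minimal sketch; the metric names and thresholds are invented for illustration and would be agreed per project in practice:

```python
# A minimal sketch of checking exit criteria against collected metrics.
# Metric names and thresholds are invented for illustration.
metrics = {
    "tests_executed": 482,
    "tests_planned": 490,
    "requirements_coverage": 0.97,  # fraction of requirements covered
    "open_defects_high": 0,         # unresolved high-severity defects
    "open_defects_total": 7,
}

exit_criteria = [
    ("All planned tests executed", metrics["tests_executed"] >= metrics["tests_planned"]),
    ("Requirements coverage at least 95%", metrics["requirements_coverage"] >= 0.95),
    ("No open high-severity defects", metrics["open_defects_high"] == 0),
    ("Open defects within agreed limit of 10", metrics["open_defects_total"] <= 10),
]

for name, met in exit_criteria:
    print(f"{'MET' if met else 'NOT MET'}: {name}")

if not all(met for _, met in exit_criteria):
    print("Exit criteria not yet satisfied; continue testing or escalate the risk.")
```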

Test Execution Schedule

Once the various test cases and test procedures are produced (with some test procedures potentially automated) and assembled into test suites, the test suites can be arranged in a test execution schedule that defines the order in which they are to be run. The test execution schedule should take into account such factors as prioritisation, dependencies, confirmation tests, regression tests, and the most efficient sequence for executing the tests.

Ideally, test cases would be ordered to run based on their priority levels, usually by executing the test cases with the highest priority first. However, this practice may not work if the test cases have dependencies or the features being tested have dependencies. If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first. Similarly, if there are dependencies across test cases, they must be ordered appropriately regardless of their relative priorities. Confirmation and regression tests must be prioritised as well, based on the importance of rapid feedback on changes, but here again dependencies may apply.

In some cases, various sequences of tests are possible, with differing levels of efficiency associated with those sequences. In such cases, trade-offs between efficiency of test execution versus adherence to prioritisation must be made.
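
One way to make such trade-offs concrete is to schedule by priority while honouring dependencies, so that a test becomes eligible to run only once everything it depends on is scheduled. The sketch below does this for a handful of invented test cases; the names, priorities, and dependencies are assumptions for illustration:

```python
import heapq

# Hypothetical test cases: name -> (priority, dependencies).
# Lower number = higher priority; all names are invented.
tests = {
    "TC_login":   (1, []),
    "TC_profile": (2, ["TC_login"]),
    "TC_payment": (1, ["TC_profile"]),  # top priority, but dependent
    "TC_logging": (3, []),
}

def schedule(tests):
    """Order tests by priority, never running a test before its dependencies."""
    deps_left = {name: set(deps) for name, (_, deps) in tests.items()}
    ready = [(prio, name) for name, (prio, _) in tests.items() if not deps_left[name]]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
        for other, deps in deps_left.items():
            if name in deps:
                deps.remove(name)
                if not deps:  # all dependencies scheduled; now eligible
                    heapq.heappush(ready, (tests[other][0], other))
    return order

# TC_payment has the highest priority but must wait for its dependencies.
print(schedule(tests))  # ['TC_login', 'TC_profile', 'TC_payment', 'TC_logging']
```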

Factors Influencing the Test Effort

Test effort estimation involves predicting the amount of test-related work that will be needed in order to meet the objectives of the testing for a particular project, release, or iteration. Factors influencing the test effort may include characteristics of the product, characteristics of the development process, characteristics of the people, and the test results, as shown below.

Product characteristics

  • The risks associated with the product
  • The quality of the test basis
  • The size of the product
  • The complexity of the product domain
  • The requirements for quality characteristics (e.g., security, reliability) 
  • The required level of detail for test documentation 
  • Requirements for legal and regulatory compliance

Development process characteristics

  • The stability and maturity of the organisation
  • The development model in use
  • The test approach
  • The tools used
  • The test process
  • Time pressure

People characteristics

  • The skills and experience of the people involved, especially with similar projects and products (e.g., domain knowledge)
  • Team cohesion and leadership

Test results

  • The number and severity of defects found
  • The amount of re-work required

Test Estimation Techniques

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:

  • The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values
  • The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts

For example, in Agile development, burn-down charts illustrate the metrics-based approach: the effort remaining is captured and reported, and then feeds into the team’s velocity, which determines the amount of work the team can take on in the next iteration. Planning poker (also called Scrum poker) is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience.

Within sequential projects, defect removal models illustrate the metrics-based approach: the volumes of defects and the time needed to remove them are captured and reported, providing a basis for estimating future projects of a similar nature. The Wideband Delphi technique is an example of the expert-based approach, in which a group of experts provides estimates based on their experience.
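
Both techniques can be reduced to simple calculations. The sketch below shows a metrics-based estimate scaled from an invented past project and an expert-based estimate combining several invented individual estimates:

```python
from statistics import median

# Metrics-based: scale the test effort of a similar past project (figures invented).
past_test_hours = 320   # test effort spent on a comparable past release
past_size = 40          # size of that release, e.g., in user stories
new_size = 55           # size of the upcoming release
metrics_estimate = past_test_hours / past_size * new_size
print(f"Metrics-based estimate: {metrics_estimate:.0f} hours")  # 440 hours

# Expert-based: combine independent estimates, e.g., from a planning poker round.
expert_estimates = [380, 420, 500, 450]  # hours, one per expert (invented)
print(f"Expert-based estimate: {median(expert_estimates):.0f} hours")  # 435 hours
```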

Test Monitoring and Control

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and should be used to assess test progress and to measure whether the test exit criteria, or the testing tasks associated with an Agile project’s definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported. Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include: 

  • Re-prioritising tests when an identified risk occurs (e.g., software delivered late)
  • Changing the test schedule due to availability or unavailability of a test environment or other resources
  • Re-evaluating whether a test item meets an entry or exit criterion due to rework

Metrics Used in Testing

Metrics can be collected during and at the end of test activities in order to assess:

  • Progress against the planned schedule and budget
  • Current quality of the test object
  • Adequacy of the test approach
  • Effectiveness of the test activities with respect to the objectives

Common test metrics include the following (a short sketch computing a few of them appears after the list):

  • Percentage of planned work done in test case preparation (or percentage of planned test cases implemented)
  • Percentage of planned work done in test environment preparation
  • Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
  • Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results)
  • Test coverage of requirements, user stories, acceptance criteria, risks, or code
  • Task completion, resource allocation and usage, and effort
  • Cost of testing, including the cost compared to the benefit of finding the next defect or the cost compared to the benefit of running the next test
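
Several of these metrics are simple ratios over data that most test management tools already track. A minimal sketch with invented figures:

```python
# Invented figures for one reporting period.
tests_planned, tests_executed, tests_passed = 200, 180, 171
defects_found, product_size_kloc = 42, 12.5  # size in thousand lines of code

print(f"Test execution: {tests_executed / tests_planned:.0%} of planned tests run")  # 90%
print(f"Pass rate: {tests_passed / tests_executed:.0%}")                             # 95%
print(f"Defect density: {defects_found / product_size_kloc:.1f} defects per KLOC")   # 3.4
```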

Audiences, Contents, and Purposes for Test Reports

The purpose of test reporting is to summarise and communicate test activity information, both during and at the end of a test activity (e.g., a test level). The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report.

During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. In addition to content common to test progress reports and test summary reports, typical test progress reports may also include:

  • The status of the test activities and progress against the test plan
  • Factors impeding progress
  • Testing planned for the next reporting period
  • The quality of the test objects

When exit criteria are reached, the test manager issues the test summary report. This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.

Typical test summary reports may include:

  • Summary of testing performed
  • Information on what occurred during a test period
  • Deviations from plan, including deviations in schedule, duration, or effort of test activities
  • Status of testing and product quality with respect to the exit criteria or definition of done
  • Factors that have blocked or continue to block progress
  • Metrics of defects, test cases, test coverage, activity progress, and resource consumption
  • Residual risks
  • Reusable test work products produced

The contents of a test report will vary depending on the project, the organisational requirements, and the software development lifecycle. For example, a complex project with many stakeholders or a regulated project may require more detailed and rigorous reporting than a quick software update. As another example, in Agile development, test progress reporting may be incorporated into task boards, defect summaries, and burn-down charts, which may be discussed during a daily stand-up meeting.

In addition to tailoring test reports based on the context of the project, test reports should be tailored based on the report’s audience. The type and amount of information that should be included for a technical audience or a test team may be different from what would be included in an executive summary report. In the first case, detailed information on defect types and trends may be important. In the latter case, a high-level report (e.g., a status summary of defects by priority, budget, schedule, and test conditions passed/failed/not tested) may be more appropriate.

Configuration Management

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.

To properly support testing, configuration management may involve ensuring the following:

  • All test items are uniquely identified, version controlled, tracked for changes, and related to each other
  • All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process
  • All identified documents and software items are referenced unambiguously in test documentation

During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.

Risks and Testing

Definition of Risk

Risk involves the possibility of an event in the future which has negative consequences. The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.
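
A common way to make this operational is to score likelihood and impact on small ordinal scales and multiply them. The sketch below does this for a few invented product risks; the scales and scores are assumptions, not a standard:

```python
# Level of risk = likelihood x impact, each scored here from 1 (low) to 5 (high).
# The risk items and scores are invented for illustration.
risks = {
    "Payment total rounded incorrectly": (2, 5),  # unlikely, but severe
    "Search results load slowly":        (4, 2),  # likely, moderate harm
    "Help link broken":                  (3, 1),  # fairly likely, minor harm
}

# Rank risks by level to decide where to focus the test effort first.
for name, (likelihood, impact) in sorted(
        risks.items(), key=lambda item: item[1][0] * item[1][1], reverse=True):
    print(f"level {likelihood * impact:>2}: {name}")
# level 10: Payment total rounded incorrectly
# level  8: Search results load slowly
# level  3: Help link broken
```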

Product and Project Risks

Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders. When the product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), product risks are also called quality risks. Examples of product risks include:

  • Software might not perform its intended functions according to the specification
  • Software might not perform its intended functions according to user, customer, and/or stakeholder needs
  • A system architecture may not adequately support some non-functional requirement(s)
  • A particular computation may be performed incorrectly in some circumstances
  • A loop control structure may be coded incorrectly
  • Response times may be inadequate for a high-performance transaction processing system
  • User experience (UX) feedback might not meet product expectations

Project risk involves situations that, should they occur, may have a negative effect on a project’s ability to achieve its objectives. Examples of project risks include:

  • Project issues:
    • Delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done 
    • Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organisation may result in inadequate funding 
    • Late changes may result in substantial re-work
  • Organisational issues: 
    • Skills, training, and staff may not be sufficient 
    • Personnel issues may cause conflict and problems 
    • Users, business staff, or subject matter experts may not be available due to conflicting business priorities
  • Political issues:
    • Testers may not communicate their needs and/or the test results adequately
    • Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
    • There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)
  • Technical issues: 
    • Requirements may not be defined well enough 
    • The requirements may not be met, given existing constraints 
    • The test environment may not be ready on time 
    • Data conversion, migration planning, and their tool support may be late 
    • Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases
    • Poor defect management and similar problems may result in accumulated defects and other technical debt
  • Supplier issues:
    • A third party may fail to deliver a necessary product or service, or go bankrupt
    • Contractual issues may cause problems to the project

Project risks may affect both development activities and test activities. In some cases, project managers are responsible for handling all project risks, but it is not unusual for test managers to have responsibility for test-related project risks.

Product Quality and Risk-based Testing

Risk is used to focus the effort required during testing: to decide where and when to start testing, and to identify areas that need more attention. Testing is used to reduce the likelihood of an adverse event occurring, or to reduce its impact. It is a risk mitigation activity that provides information about identified risks as well as about residual (unresolved) risks.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk’s likelihood and impact. The resulting product risk information is used to guide test planning, the specification, preparation and execution of test cases, and test monitoring and control. Analysing product risks early contributes to the success of a project. 

In a risk-based approach, the results of product risk analysis are used to:

  • Determine the test techniques to be employed
  • Determine the particular levels and types of testing to be performed (e.g., security testing, accessibility testing)
  • Determine the extent of testing to be carried out
  • Prioritise testing in an attempt to find the critical defects as early as possible 
  • Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis. To ensure that the likelihood of a product failure is minimised, risk management activities provide a disciplined approach to:

  • Analyse (and re-evaluate on a regular basis) what can go wrong (risks)
  • Determine which risks are important to deal with
  • Implement actions to mitigate those risks
  • Make contingency plans to deal with the risks should they become actual events

In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks.

Defect Management

Since one of the objectives of testing is to find defects, defects found during testing should be logged. The way in which defects are logged may vary, depending on the context of the component or system being tested, the test level, and the software development lifecycle model. Any defects identified should be investigated and should be tracked from discovery and classification to their resolution (e.g., correction of the defects and successful confirmation testing of the solution, deferral to a subsequent release, acceptance as a permanent product limitation, etc.). In order to manage all defects to resolution, an organisation should establish a defect management process which includes a workflow and rules for classification. This process must be agreed with all those participating in defect management, including architects, designers, developers, testers, and product owners. In some organisations, defect logging and tracking may be very informal. 

During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects. For example, a test may fail when a network connection is broken or times out. This behaviour does not result from a defect in the test object, but is an anomaly that needs to be investigated. Testers should attempt to minimise the number of false positives reported as defects. 

Defects may be reported during coding, static analysis, reviews, dynamic testing, or use of a software product. Defects may be reported for issues in code or working systems, or in any type of documentation including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, or installation guides. In order to have an effective and efficient defect management process, organisations may define standards for the attributes, classification, and workflow of defects.

Typical defect reports have the following objectives: 

  • Provide developers and other parties with information about any adverse event that occurred, enabling them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s) as needed, or to otherwise resolve the problem
  • Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)
  • Provide ideas for development and test process improvement

A defect report filed during dynamic testing typically includes:

  • An identifier
  • A title and a short summary of the defect being reported
  • Date of the defect report, issuing organisation, and author
  • Identification of the test item (configuration item being tested) and environment
  • The development lifecycle phase(s) in which the defect was observed
  • A description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings (if found during test execution)
  • Expected and actual results
  • Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
  • Urgency/priority to fix
  • State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)
  • Conclusions, recommendations and approvals
  • Global issues, such as other areas that may be affected by a change resulting from the defect
  • Change history, such as the sequence of actions taken by project team members with respect to the defect to isolate, repair, and confirm it as fixed
  • References, including the test case that revealed the problem

Some of these details may be automatically included and/or managed when using defect management tools, e.g., automatic assignment of an identifier, assignment and update of the defect report state during the workflow, etc. Defects found during static testing, particularly reviews, will normally be documented in a different way, e.g., in review meeting notes.
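
As an illustration of how such attributes might be represented in a defect management tool, here is a minimal sketch; all field names and values are invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    """A minimal defect report; all field names are illustrative only."""
    identifier: str
    title: str
    reported_on: date
    test_item: str        # configuration item being tested, and environment
    severity: str         # degree of impact on stakeholder interests
    priority: str         # urgency to fix
    status: str = "open"  # e.g., open, deferred, duplicate, closed
    steps_to_reproduce: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)  # e.g., revealing test case

report = DefectReport(
    identifier="DEF-1042",
    title="Checkout total ignores member discount",
    reported_on=date(2024, 3, 7),
    test_item="webshop 2.3.1 / staging environment",
    severity="major",
    priority="high",
    steps_to_reproduce=["Log in as member", "Add discounted item", "Open checkout"],
    references=["TC_checkout_017"],
)
print(report.identifier, report.status)  # DEF-1042 open
```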

Test Techniques

Categories of Test Techniques 

The purpose of a test technique, including those discussed in this section, is to help in identifying test conditions, test cases, and test data.

The choice of which test techniques to use depends on a number of factors, including: 

  • Component or system complexity 
  • Regulatory standards 
  • Customer or contractual requirements 
  • Risk levels and types 
  • Available documentation 
  • Tester knowledge and skills 
  • Available tools 
  • Time and budget 
  • Software development lifecycle model 
  • The types of defects expected in the component or system 

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels. When creating test cases, testers generally use a combination of test techniques to achieve the best results from the test effort.

The use of test techniques in the test analysis, test design, and test implementation activities can range from very informal (little to no documentation) to very formal. The appropriate level of formality depends on the context of testing, including the maturity of test and development processes, time constraints, safety or regulatory requirements, the knowledge and skills of the people involved, and the software development lifecycle model being followed. 

Categories of Test Techniques and Their Characteristics

In this article, test techniques are classified as black-box, white-box, or experience-based.

Black-box test techniques (also called behavioural or behaviour-based techniques) are based on an analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases, user stories, or business processes). These techniques are applicable to both functional and non-functional testing. Black-box test techniques concentrate on the inputs and outputs of the test object without reference to its internal structure. 

White-box test techniques (also called structural or structure-based techniques) are based on an analysis of the architecture, detailed design, internal structure, or the code of the test object. Unlike black-box test techniques, white-box test techniques concentrate on the structure and processing within the test object. 

Experience-based test techniques leverage the experience of developers, testers and users to design, implement, and execute tests. These techniques are often combined with black-box and white-box test techniques.

Common characteristics of black-box test techniques include the following: 

  • Test conditions, test cases, and test data are derived from a test basis that may include software requirements, specifications, use cases, and user stories
  • Test cases may be used to detect gaps between the requirements and the implementation of the requirements, as well as deviations from the requirements 
  • Coverage is measured based on the items tested in the test basis and the technique applied to the test basis

Common characteristics of white-box test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include code, software architecture, detailed design, or any other source of information regarding the structure of the software
  • Coverage is measured based on the items tested within a selected structure (e.g., the code or interfaces) and the technique applied to the test basis

Common characteristics of experience-based test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include knowledge and experience of testers, developers, users and other stakeholders 

This knowledge and experience includes expected use of the software, its environment, likely defects, and the distribution of those defects.

Black-box Test Techniques

Equivalence Partitioning 

Equivalence partitioning divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way. There are equivalence partitions for both valid and invalid values. 

  • Valid values are values that should be accepted by the component or system. An equivalence partition containing valid values is called a “valid equivalence partition.” 
  • Invalid values are values that should be rejected by the component or system. An equivalence partition containing invalid values is called an “invalid equivalence partition.” 
  • Partitions can be identified for any data element related to the test object, including inputs, outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). 
  • Any partition may be divided into sub partitions if required. 
  • Each value must belong to one and only one equivalence partition.
  • When invalid equivalence partitions are used in test cases, they should be tested individually, i.e., not combined with other invalid equivalence partitions, to ensure that failures are not masked. Failures can be masked when several failures occur at the same time but only one is visible, causing the other failures to be undetected. 

To achieve 100% coverage with this technique, test cases must cover all identified partitions (including invalid partitions) by using a minimum of one value from each partition. Coverage is measured as the number of equivalence partitions tested by at least one value, divided by the total number of identified equivalence partitions, normally expressed as a percentage. Equivalence partitioning is applicable at all test levels.
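
As a minimal sketch, suppose a field accepts ages from 18 to 65 inclusive (an invented requirement). That gives three partitions, each tested with one representative value, and the coverage measure described above:

```python
# Invented requirement: the age field accepts values from 18 to 65 inclusive.
def accepts_age(age):
    return 18 <= age <= 65

# One representative value per equivalence partition, with the expected outcome.
partitions = {
    "invalid (too low)":  (10, False),
    "valid":              (30, True),
    "invalid (too high)": (80, False),
}

covered = 0
for name, (value, expected) in partitions.items():
    outcome = "PASS" if accepts_age(value) == expected else "FAIL"
    print(f"{name}: input {value} -> {outcome}")
    covered += 1

print(f"Equivalence partition coverage: {covered / len(partitions):.0%}")  # 100%
```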

Boundary Value Analysis

Boundary value analysis (BVA) is an extension of equivalence partitioning, but can only be used when the partition is ordered, consisting of numeric or sequential data. The minimum and maximum values (or first and last values) of a partition are its boundary values. 

For example, suppose an input field accepts a single integer value as input, using a keypad to limit inputs so that non-integer inputs are impossible. The valid range is from 1 to 5, inclusive. So, there are three equivalence partitions: invalid (too low); valid; invalid (too high). For the valid equivalence partition, the boundary values are 1 and 5. For the invalid (too high) partition, the boundary value is 6. For the invalid (too low) partition, there is only one boundary value, 0, because this is a partition with only one member.

In the example above, we identify two boundary values per boundary. The boundary between invalid (too low) and valid gives the test values 0 and 1. The boundary between valid and invalid (too high) gives the test values 5 and 6. Some variations of this technique identify three boundary values per boundary: the values before, at, and just over the boundary. In the previous example, using three-point boundary values, the lower boundary test values are 0, 1, and 2, and the upper boundary test values are 4, 5, and 6. 

Behaviour at the boundaries of equivalence partitions is more likely to be incorrect than behaviour within the partitions. It is important to remember that both specified and implemented boundaries may be displaced to positions above or below their intended positions, may be omitted altogether, or may be supplemented with unwanted additional boundaries. Boundary value analysis and testing will reveal almost all such defects by forcing the software to show behaviours from a partition other than the one to which the boundary value should belong. 

Boundary value analysis can be applied at all test levels. This technique is generally used to test requirements that call for a range of numbers (including dates and times). Boundary coverage for a partition is measured as the number of boundary values tested, divided by the total number of identified boundary test values, normally expressed as a percentage.
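
The boundary test values in the example above can be derived mechanically from the valid range. The small helper functions below are our own illustration, using the range 1 to 5 from the example:

```python
def two_point_values(low, high):
    """Boundary test values on each side of both boundaries of [low, high]."""
    return sorted({low - 1, low, high, high + 1})

def three_point_values(low, high):
    """Boundary test values just below, at, and just above each boundary."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(two_point_values(1, 5))    # [0, 1, 5, 6]
print(three_point_values(1, 5))  # [0, 1, 2, 4, 5, 6]
```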

Decision Table Testing

Decision tables are a good way to record complex business rules that a system must implement. When creating decision tables, the tester identifies conditions (often inputs) and the resulting actions (often outputs) of the system. These form the rows of the table, usually with the conditions at the top and the actions at the bottom. Each column corresponds to a decision rule that defines a unique combination of conditions which results in the execution of the actions associated with that rule. The values of the conditions and actions are usually shown as Boolean values (true or false) or discrete values (e.g., red, green, blue), but can also be numbers or ranges of numbers. These different types of conditions and actions might be found together in the same table.

The common notation in decision tables is as follows:

For conditions:

  • Y means the condition is true (may also be shown as T or 1) 
  • N means the condition is false (may also be shown as F or 0) 
  • — means the value of the condition doesn’t matter (may also be shown as N/A)

For actions: 

  • X means the action should occur (may also be shown as Y or T or 1) 
  • Blank means the action should not occur (may also be shown as – or N or F or 0)

A full decision table has enough columns (test cases) to cover every combination of conditions. By deleting columns that do not affect the outcome (for example, impossible combinations of conditions), the number of test cases can decrease considerably.

The common minimum coverage standard for decision table testing is to have at least one test case per decision rule in the table. This typically involves covering all combinations of conditions. Coverage is measured as the number of decision rules tested by at least one test case, divided by the total number of decision rules, normally expressed as a percentage.

The strength of decision table testing is that it helps to identify all the important combinations of conditions, some of which might otherwise be overlooked. It also helps in finding any gaps in the requirements. It may be applied to all situations in which the behaviour of the software depends on a combination of conditions, at any test level.
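
A minimal sketch of decision table testing for an invented discount rule: each combination of condition values forms a decision rule, the expected action is recorded per rule, and every rule is covered by at least one test:

```python
# Invented business rule: a discount applies only to members whose order is at least 100.
def discount(is_member, order_total):
    return 0.10 if is_member and order_total >= 100 else 0.0

# Decision table: (member?, large order?) -> expected discount, one rule per column.
rules = {
    (True,  True):  0.10,
    (True,  False): 0.0,
    (False, True):  0.0,
    (False, False): 0.0,
}

covered = 0
for (member, large_order), expected in rules.items():
    order_total = 150 if large_order else 50  # any value satisfying the condition
    assert discount(member, order_total) == expected
    covered += 1

print(f"Decision rule coverage: {covered / len(rules):.0%}")  # 100%
```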

State Transition Testing

Components or systems may respond differently to an event depending on current conditions or previous history (e.g., the events that have occurred since the system was initialised). The previous history can be summarised using the concept of states. A state transition diagram shows the possible software states, as well as how the software enters, exits, and transitions between states. A transition is initiated by an event (e.g., user input of a value into a field), and the same event can result in two or more different transitions from the same state. The state change may result in the software taking an action (e.g., outputting a calculation or error message).

A state transition table shows all valid transitions and potentially invalid transitions between states, as well as the events, and resulting actions for valid transitions. State transition diagrams normally show only the valid transitions and exclude the invalid transitions. 

Tests can be designed to cover a typical sequence of states, to exercise all states, to exercise every transition, to exercise specific sequences of transitions, or to test invalid transitions. 

State transition testing is used for menu-based applications and is widely used within the embedded software industry. The technique is also suitable for modelling a business scenario having specific states or for testing screen navigation. The concept of a state is abstract — it may represent a few lines of code or an entire business process. 

Coverage is commonly measured as the number of identified states or transitions tested, divided by the total number of identified states or transitions in the test object, normally expressed as a percentage.
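
A minimal sketch of state transition testing for an invented three-state login flow: valid transitions live in a table, every valid transition is exercised once, and a (state, event) pair missing from the table is probed as an invalid transition:

```python
# Invented state model: (current state, event) -> next state.
transitions = {
    ("logged_out", "login_ok"):     "logged_in",
    ("logged_out", "login_failed"): "locked",
    ("logged_in",  "logout"):       "logged_out",
    ("locked",     "reset"):        "logged_out",
}

def step(state, event):
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: event '{event}' in state '{state}'")

# Exercise every valid transition once (100% transition coverage of this model).
for (state, event), expected in transitions.items():
    assert step(state, event) == expected

# Probe an invalid transition: logging out while already logged out.
try:
    step("logged_out", "logout")
except ValueError as error:
    print(error)
```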

Use Case Testing 

Tests can be derived from use cases, which are a specific way of designing interactions with software items. They incorporate requirements for the software functions. Use cases are associated with actors (human users, external hardware, or other components or systems) and subjects (the component or system to which the use case is applied).

Each use case specifies some behaviour that a subject can perform in collaboration with one or more actors. A use case can be described by interactions and activities, as well as preconditions, postconditions and natural language where appropriate. Interactions between the actors and the subject may result in changes to the state of the subject. Interactions may be represented graphically by work flows, activity diagrams, or business process models.

A use case can include possible variations of its basic behaviour, including exceptional behaviour and error handling (system response and recovery from programming, application and communication errors, e.g., resulting in an error message). Tests are designed to exercise the defined behaviours (basic, exceptional or alternative, and error handling). Coverage can be measured by the number of use case behaviours tested divided by the total number of use case behaviours, normally expressed as a percentage.

White-box Test Techniques 

White-box testing is based on the internal structure of the test object. White-box test techniques can be used at all test levels, but the two code-related techniques discussed in this section are most commonly used at the component test level. There are more advanced techniques that are used in some safety-critical, mission-critical, or high integrity environments to achieve more thorough coverage, but those are not discussed here.

Statement Testing and Coverage 

Statement testing exercises the potential executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage. 

Decision Testing and Coverage

Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome). 

Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage.

The Value of Statement and Decision Testing

When 100% statement coverage is achieved, it ensures that all executable statements in the code have been tested at least once, but it does not ensure that all decision logic has been tested. Of the two white-box techniques discussed in this article, statement testing may provide less coverage than decision testing.

When 100% decision coverage is achieved, it executes all decision outcomes, which includes testing the true outcome and also the false outcome, even when there is no explicit false statement (e.g., in the case of an IF statement without an else in the code). Statement coverage helps to find defects in code that was not exercised by other tests. Decision coverage helps to find defects in code where other tests have not taken both true and false outcomes. 

Achieving 100% decision coverage guarantees 100% statement coverage (but not vice versa).
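
The difference is easiest to see in code. In the invented function below, a single test with a negative input executes every statement (100% statement coverage) but never exercises the false outcome of the decision, so decision coverage is only 50%:

```python
def absolute(x):
    if x < 0:   # one decision with a true and a false outcome
        x = -x
    return x

# A single test with a negative input runs every statement:
# 100% statement coverage, but only 50% decision coverage.
assert absolute(-3) == 3

# A second test takes the false outcome (no else branch in the code),
# bringing decision coverage to 100%.
assert absolute(4) == 4
```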

Experience-based Test Techniques

When applying experience-based test techniques, the test cases are derived from the tester’s skill and intuition, and their experience with similar applications and technologies. These techniques can be helpful in identifying tests that were not easily identified by other more systematic techniques. Depending on the tester’s approach and experience, these techniques may achieve widely varying degrees of coverage and effectiveness. Coverage can be difficult to assess and may not be measurable with these techniques. 

Commonly used experience-based techniques are discussed in the following sections.

Error Guessing 

Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including: 

  • How the application has worked in the past 
  • What kind of errors tend to be made 
  • Failures that have occurred in other applications

A methodical approach to the error guessing technique is to create a list of possible errors, defects, and failures, and to design tests that will expose those failures and the defects that caused them. These error, defect, and failure lists can be built based on experience, defect and failure data, or common knowledge about why software fails.
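
Such a list translates naturally into tests. The sketch below runs an invented parsing function against inputs that experience suggests are error-prone:

```python
# Invented function under test: parse a quantity entered in a form field.
def parse_quantity(text):
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Error guessing: inputs that commonly expose defects.
suspect_inputs = ["", "  ", "0", "-1", "1.5", "999999999999", "ten", None]

for raw in suspect_inputs:
    try:
        # An unexpectedly accepted value (e.g., the huge number) can itself be a finding.
        print(repr(raw), "->", parse_quantity(raw))
    except (ValueError, TypeError) as error:
        print(repr(raw), "-> rejected:", error)
```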

Exploratory Testing

In exploratory testing, informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing. 

Exploratory testing is sometimes conducted using session-based testing to structure the activity. In session-based testing, exploratory testing is conducted within a defined time-box, and the tester uses a test charter containing test objectives to guide the testing. The tester may use test session sheets to document the steps followed and the discoveries made. 

Exploratory testing is most useful when there are few or inadequate specifications or significant time pressure on testing. Exploratory testing is also useful to complement other more formal testing techniques. 

Exploratory testing is strongly associated with reactive test strategies. Exploratory testing can incorporate the use of other black-box, white-box, and experience-based techniques.

Checklist-based Testing

In checklist-based testing, testers design, implement, and execute tests to cover test conditions found in a checklist. As part of analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails.

Checklists can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can provide guidelines and a degree of consistency. As these are high-level lists, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.

Static Testing

Static Testing Basics

In contrast to dynamic testing, which requires the execution of the software being tested, static testing relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products (i.e., static analysis). Both types of static testing assess the code or other work product being tested without actually executing the code or work product being tested. 

Static analysis is important for safety-critical computer systems (e.g., aviation, medical, or nuclear software), but static analysis has also become important and common in other settings. For example, static analysis is an important part of security testing. Static analysis is also often incorporated into automated software build and distribution tools, for example in Agile development, continuous delivery, and continuous deployment.

Work Products that Can Be Examined by Static Testing

Almost any work product can be examined using static testing (reviews and/or static analysis), for example: 

  • Specifications, including business requirements, functional requirements, and security requirements 
  • Epics, user stories, and acceptance criteria 
  • Architecture and design specifications 
  • Code 
  • Testware, including test plans, test cases, test procedures, and automated test scripts
  • User guides 
  • Web pages 
  • Contracts, project plans, schedules, and budget planning 
  • Configuration set up and infrastructure set up 
  • Models, such as activity diagrams, which may be used for model-based testing

Reviews can be applied to any work product that the participants know how to read and understand. Static analysis can be applied efficiently to any work product with a formal structure (typically code or models) for which an appropriate static analysis tool exists. Static analysis can even be applied with tools that evaluate work products written in natural language such as requirements (e.g., checking for spelling, grammar, and readability). 

Benefits of Static Testing

Static testing techniques provide a variety of benefits. When applied early in the software development lifecycle, static testing enables the early detection of defects before dynamic testing is performed (e.g., in requirements or design specification reviews, backlog refinement, etc.). Defects found early are often much cheaper to remove than defects found later in the lifecycle, especially compared to defects found after the software is deployed and in active use. Using static testing techniques to find defects and then fixing those defects promptly is almost always much cheaper for the organisation than using dynamic testing to find defects in the test object and then fixing them, especially when considering the additional costs associated with updating other work products and performing confirmation and regression testing.

Additional benefits of static testing may include:

  • Detecting and correcting defects more efficiently, and prior to dynamic test execution 
  • Identifying defects which are not easily found by dynamic testing 
  • Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies in requirements 
  • Increasing development productivity (e.g., due to improved design, more maintainable code) 
  • Reducing development cost and time 
  • Reducing testing cost and time 
  • Reducing total cost of quality over the software’s lifetime, due to fewer failures later in the lifecycle or after delivery into operation 
  • Improving communication between team members in the course of participating in reviews

Differences between Static and Dynamic Testing

Static testing and dynamic testing can have the same objectives, such as providing an assessment of the quality of the work products and identifying defects as early as possible. Static and dynamic testing complement each other by finding different types of defects.

One main distinction is that static testing finds defects in work products directly rather than identifying failures caused by defects when the software is run. A defect can reside in a work product for a very long time without causing a failure. The path where the defect lies may be rarely exercised or hard to reach, so it will not be easy to construct and execute a dynamic test that encounters it. Static testing may be able to find the defect with much less effort.

Another distinction is that static testing can be used to improve the consistency and internal quality of work products, while dynamic testing typically focuses on externally visible behaviours.

Compared with dynamic testing, typical defects that are easier and cheaper to find and fix through static testing include:

  • Requirement defects (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies) 
  • Design defects (e.g., inefficient algorithms or database structures, high coupling, low cohesion) 
  • Coding defects (e.g., variables with undefined values, variables that are declared but never used, unreachable code, duplicate code); a deliberately flawed sketch follows this list 
  • Deviations from standards (e.g., lack of adherence to coding standards) 
  • Incorrect interface specifications (e.g., different units of measurement used by the calling system than by the called system) 
  • Security vulnerabilities (e.g., susceptibility to buffer overflows) 
  • Gaps or inaccuracies in test basis traceability or coverage (e.g., missing tests for an acceptance criterion)
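
To make the coding defects above concrete, here is a small, deliberately flawed Python function. A typical static analysis tool (e.g., pylint or flake8) can report most of these issues without ever running the code; the function itself is invented for illustration.

```python
# A deliberately flawed function; every issue marked below is one that
# static analysis can typically find before any dynamic test is run.
def apply_discount(price, customer_type):
    discount = 0
    unused_rate = 0.15  # defect: variable declared but never used

    if customer_type == "member":
        discount = price * 0.10
    elif customer_type == "member":  # defect: duplicate condition, so
        discount = price * 0.20      # this branch is unreachable

    return price - discount
    print("done")  # defect: unreachable code after the return statement
```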

Moreover, most types of maintainability defects can only be found by static testing (e.g., improper modularisation, poor reusability of components, code that is difficult to analyse and modify without introducing new defects).

Review Process

Reviews vary from informal to formal. Informal reviews are characterised by not following a defined process and not having formal documented output. Formal reviews are characterised by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the software development lifecycle model, the maturity of the development process, the complexity of the work product to be reviewed, any legal or regulatory requirements, and/or the need for an audit trail.

The focus of a review depends on the agreed objectives of the review (e.g., finding defects, gaining understanding, educating participants such as testers and new team members, or discussing and deciding by consensus).

Work Product Review Process 

The review process comprises the following main activities: 

Planning 

  • Defining the scope, which includes the purpose of the review, what documents or parts of documents to review, and the quality characteristics to be evaluated 
  • Estimating effort and timeframe 
  • Identifying review characteristics such as the review type with roles, activities, and checklists 
  • Selecting the people to participate in the review and allocating roles 
  • Defining the entry and exit criteria for more formal review types (e.g., inspections) 
  • Checking that entry criteria are met (for more formal review types)

Initiate review 

  • Distributing the work product (physically or by electronic means) and other material, such as issue log forms, checklists, and related work products 
  • Explaining the scope, objectives, process, roles, and work products to the participants 
  • Answering any questions that participants may have about the review 

Individual review (i.e., individual preparation) 

  • Reviewing all or part of the work product 
  • Noting potential defects, recommendations, and questions 

Issue communication and analysis 

  • Communicating identified potential defects (e.g., in a review meeting) 
  • Analysing potential defects, assigning ownership and status to them 
  • Evaluating and documenting quality characteristics
  • Evaluating the review findings against the exit criteria to make a review decision (reject; major changes needed; accept, possibly with minor changes)

Fixing and reporting

  • Creating defect reports for those findings that require changes to a work product 
  • Fixing defects found (typically done by the author) in the work product reviewed 
  • Communicating defects to the appropriate person or team (when found in a work product related to the work product reviewed) 
  • Recording updated status of defects (in formal reviews), potentially including the agreement of the comment originator 
  • Gathering metrics (for more formal review types) 
  • Checking that exit criteria are met (for more formal review types) 
  • Accepting the work product when the exit criteria are reached
    

The results of a work product review vary, depending on the review type and formality.

Roles and responsibilities in a formal review

A typical formal review will include the roles below: 

Author 

  • Creates the work product under review 
  • Fixes defects in the work product under review (if necessary) 

Management 

  • Is responsible for review planning 
  • Decides on the execution of reviews 
  • Assigns staff, budget, and time 
  • Monitors ongoing cost-effectiveness 
  • Executes control decisions in the event of inadequate outcomes 

Facilitator (often called moderator) 

  • Ensures effective running of review meetings (when held) 
  • Mediates, if necessary, between the various points of view 
  • Is often the person upon whom the success of the review depends 

Review leader 

  • Takes overall responsibility for the review 
  • Decides who will be involved and organises when and where it will take place

Reviewers

  • May be subject matter experts, persons working on the project, stakeholders with an interest in the work product, and/or individuals with specific technical or business backgrounds 
  • Identify potential defects in the work product under review 
  • May represent different perspectives (e.g., tester, developer, user, operator, business analyst, usability expert, etc.)

Scribe (or recorder)

  • Collates potential defects found during the individual review activity 
  • Records new potential defects, open points, and decisions from the review meeting (when held)

In some review types, one person may play more than one role, and the actions associated with each role may also vary based on review type. In addition, with the advent of tools to support the review process, especially the logging of defects, open points, and decisions, there is often no need for a scribe.

Review Types

Although reviews can be used for various purposes, one of the main objectives is to uncover defects. All review types can aid in defect detection, and the selected review type should be based on the needs of the project, available resources, product type and risks, business domain, and company culture, among other selection criteria.

A single work product may be the subject of more than one type of review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, to ensure the work product is ready for a technical review.

The types of reviews described below can be done as peer reviews, i.e., done by colleagues qualified to do the same work.

The types of defects found in a review vary, depending especially on the work product being reviewed. Reviews can be classified according to various attributes. The following lists the four most common types of reviews and their associated attributes.

Informal review (e.g., buddy check, pairing, pair review)

  • Main purpose: detecting potential defects 
  • Possible additional purposes: generating new ideas or solutions, quickly solving minor problems 
  • Not based on a formal (documented) process 
  • May not involve a review meeting 
  • May be performed by a colleague of the author (buddy check) or by more people 
  • Results may be documented 
  • Varies in usefulness depending on the reviewers 
  • Use of checklists is optional 
  • Very commonly used in Agile development 

Walkthrough

  • Main purposes: find defects, improve the software product, consider alternative implementations, evaluate conformance to standards and specifications 
  • Possible additional purposes: exchanging ideas about techniques or style variations, training of participants, achieving consensus 
  • Individual preparation before the review meeting is optional 
  • Review meeting is typically led by the author of the work product 
  • Scribe is mandatory 
  • Use of checklists is optional 
  • May take the form of scenarios, dry runs, or simulations 
  • Potential defect logs and review reports are produced 
  • May vary in practice from quite informal to very formal

Technical review

  • Main purposes: gaining consensus, detecting potential defects 
  • Possible further purposes: evaluating quality and building confidence in the work product, generating new ideas, motivating and enabling authors to improve future work products, considering alternative implementations 
  • Reviewers should be technical peers of the author, and technical experts in the same or other disciplines 
  • Individual preparation before the review meeting is required 
  • Review meeting is optional, ideally led by a trained facilitator (typically not the author) 
  • Scribe is mandatory, ideally not the author 
  • Use of checklists is optional 
  • Potential defect logs and review reports are produced

Inspection

  • Main purposes: detecting potential defects, evaluating quality and building confidence in the work product, preventing future similar defects through author learning and root cause analysis 
  • Possible further purposes: motivating and enabling authors to improve future work products and the software development process, achieving consensus 
  • Follows a defined process with formal documented outputs, based on rules and checklists 
  • Uses clearly defined roles, such as those specified earlier, which are mandatory, and may include a dedicated reader (who reads the work product aloud during the review meeting, often paraphrasing it, i.e., describing it in their own words) 
  • Individual preparation before the review meeting is required 
  • Reviewers are either peers of the author or experts in other disciplines that are relevant to the work product 
  • Specified entry and exit criteria are used 
  • Scribe is mandatory 
  • Review meeting is led by a trained facilitator (not the author) 
  • Author cannot act as the review leader, reader, or scribe 
  • Potential defect logs and review report are produced
  • Metrics are collected and used to improve the entire software development process, including the inspection process 

Applying Review Techniques

There are a number of review techniques that can be applied during the individual review (i.e., individual preparation) activity to uncover defects. These techniques can be used across the review types described above. The effectiveness of the techniques may differ depending on the type of review used. Examples of different individual review techniques for various review types are listed below. 

Ad hoc 

In an ad hoc review, reviewers are provided with little or no guidance on how this task should be performed. Reviewers often read the work product sequentially, identifying and documenting issues as they encounter them. Ad hoc reviewing is a commonly used technique needing little preparation. This technique is highly dependent on reviewer skills and may lead to many duplicate issues being reported by different reviewers. 

Checklist-based 

A checklist-based review is a systematic technique, whereby the reviewers detect issues based on checklists that are distributed at review initiation (e.g., by the facilitator). A review checklist consists of a set of questions based on potential defects, which may be derived from experience (e.g., "Are all inputs validated?" or "Is every error condition handled?"). Checklists should be specific to the type of work product under review and should be maintained regularly to cover issue types missed in previous reviews. The main advantage of the checklist-based technique is the systematic coverage of typical defect types. Care should be taken not to simply follow the checklist in individual reviewing, but also to look for defects outside the checklist. 

Scenarios and dry runs 

In a scenario-based review, reviewers are provided with structured guidelines on how to read through the work product. A scenario-based review supports reviewers in performing “dry runs” on the work product based on expected usage of the work product (if the work product is documented in a suitable format such as use cases). These scenarios provide reviewers with better guidelines on how to identify specific defect types than simple checklist entries. As with checklist-based reviews, in order not to miss other defect types (e.g., missing features), reviewers should not be constrained to the documented scenarios. 

Perspective-based 

In perspective-based reading, similar to a role-based review, reviewers take on different stakeholder viewpoints in individual reviewing. Typical stakeholder viewpoints include end user, marketing, designer, tester, or operations. Using different stakeholder viewpoints leads to more depth in individual reviewing with less duplication of issues across reviewers.

In addition, perspective-based reading also requires the reviewers to attempt to use the work product under review to generate the product they would derive from it. For example, a tester would attempt to generate draft acceptance tests if performing a perspective-based reading on a requirements specification to see if all the necessary information was included. Further, in perspective-based reading, checklists are expected to be used. 

Empirical studies have shown perspective-based reading to be the most effective general technique for reviewing requirements and technical work products. A key success factor is including and weighing different stakeholder viewpoints appropriately, based on risks. 

Role-based 

A role-based review is a technique in which the reviewers evaluate the work product from the perspective of individual stakeholder roles. Typical roles include specific end user types (experienced, inexperienced, senior, child, etc.), and specific roles in the organization (user administrator, system administrator, performance tester, etc.). The same principles apply as in perspective-based reading because the roles are similar.

Success Factors for Reviews

In order to have a successful review, the appropriate type of review and the techniques used must be considered. In addition, there are a number of other factors that will affect the outcome of the review. 

Organizational success factors for reviews include: 

  • Each review has clear objectives, defined during review planning, and used as measurable exit criteria 
  • Review types are applied which are suitable to achieve the objectives and are appropriate to the type and level of software work products and participants 
  • Any review techniques used, such as checklist-based or role-based reviewing, are suitable for effective defect identification in the work product to be reviewed 
  • Any checklists used address the main risks and are up to date 
  • Large documents are written and reviewed in small chunks, so that quality control is exercised by providing authors early and frequent feedback on defects 
  • Participants have adequate time to prepare 
  • Reviews are scheduled with adequate notice 
  • Management supports the review process (e.g., by incorporating adequate time for review activities in project schedules)
  • Reviews are integrated in the company’s quality and/or test policies.

People-related success factors for reviews include: 

  • The right people are involved to meet the review objectives, for example, people with different skill sets or perspectives, who may use the document as a work input 
  • Testers are seen as valued reviewers who contribute to the review and learn about the work product, which enables them to prepare more effective tests, and to prepare those tests earlier 
  • Participants dedicate adequate time and attention to detail 
  • Reviews are conducted on small chunks, so that reviewers do not lose concentration during individual review and/or the review meeting (when held) 
  • Defects found are acknowledged, appreciated, and handled objectively 
  • The meeting is well-managed, so that participants consider it a valuable use of their time 
  • The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants 
  • Participants avoid body language and behaviors that might indicate boredom, exasperation, or hostility to other participants 
  • Adequate training is provided, especially for more formal review types such as inspections 
  • A culture of learning and process improvement is promoted

Testing throughout the software development lifecycle

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

Software development and software testing

It is an important part of a tester’s role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:

  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

Common software development lifecycle models can be categorized as follows:

  • Sequential development models
  • Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete. In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.

In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

Unlike the Waterfall model, the V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing. In this model, the execution of tests associated with each test level proceeds sequentially, but in some cases overlapping occurs.

Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally. The size of these feature increments varies, with some methods having larger pieces and some smaller pieces. The feature increments can be as small as a single change to a user interface screen or new query option.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration. Iterations may involve changes to features developed in earlier iterations, along with changes in project scope. Each iteration delivers working software which is a growing subset of the overall set of features until the final software is delivered or development is stopped.

Examples include:

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features
  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features
  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
  • Spiral: Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Components or systems developed using these methods often involve overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines. Many development efforts using these methods also include the concept of self-organizing teams, which can change the way testing work is organized as well as the relationship between testers and developers.

These methods form a growing system, which may be released to end-users on a feature-by-feature basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of whether the software increments are released to end-users, regression testing is increasingly important as the system grows.

In contrast to sequential models, iterative and incremental models may deliver usable software in weeks or even days, but may take months or even years to deliver the complete set of requirements.

Software development lifecycle models in context

Software development lifecycle models must be selected and adapted to the context of project and product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to-market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile’s brake control system. As another example, in some cases organizational and cultural issues may inhibit communication between team members, which can impede iterative development.

Depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of IoT system versions. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the lifecycle, after the objects have been introduced to operational use (e.g., the operate, update, and decommission phases).

Reasons why software development lifecycle models must be adapted to the context of project and product characteristics include:

  • Difference in product risks of systems (complex or simple project)
  • Many business units can be part of a project or program (combination of sequential and agile development)
  • Short time to deliver a product to the market (merge of test levels and/or integration of test types in test levels)

Agile software development

The fundamentals of agile software development

A tester on an Agile project will work differently than one working on a traditional project. Testers must understand the values and principles that underpin Agile projects, and how testers are an integral part of a whole-team approach together with developers and business representatives. The members in an Agile project communicate with each other early and frequently, which helps with removing defects early and developing a quality product.

Agile software development and the agile manifesto

In 2001, a group of individuals, representing the most widely used lightweight software development methodologies, agreed on a common set of values and principles which became known as the Manifesto for Agile Software Development or the Agile Manifesto [Agilemanifesto]. The Agile Manifesto contains four statements of values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The Agile Manifesto argues that although the concepts on the right have value, those on the left have greater value.

Individuals and interactions

Agile development is very people-centered. Teams of people build software, and it is through continuous communication and interaction, rather than a reliance on tools or processes, that teams can work most effectively.

Working software

From a customer perspective, working software is much more useful and valuable than overly detailed documentation and it provides an opportunity to give the development team rapid feedback. In addition, because working software, albeit with reduced functionality, is available much earlier in the development lifecycle, Agile development can confer significant time-to-market advantage. Agile development is, therefore, especially useful in rapidly changing business environments where the problems and/or solutions are unclear or where the business wishes to innovate in new problem domains.

Customer collaboration

Customers often find great difficulty in specifying the system that they require. Collaborating directly with the customer improves the likelihood of understanding exactly what the customer requires. While having contracts with customers may be important, working in regular and close collaboration with them is likely to bring more success to the project.

Responding to change

Change is inevitable in software projects. The environment in which the business operates, legislation, competitor activity, technology advances, and other factors can have major influences on the project and its objectives. These factors must be accommodated by the development process. As such, having flexibility in work practices to embrace change is more important than simply adhering rigidly to a plan.

Agile principles

The core Agile Manifesto values are captured in twelve principles:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, at intervals of between a few weeks to a few months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity—the art of maximizing the amount of work not done—is essential.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

The different Agile methodologies provide prescriptive practices to put these values and principles into action.

Why is testing necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s contributions to success

Throughout the history of computing, it has been quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise fail to meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality assurance and testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
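
As a hypothetical illustration, the off-by-one defect in the sketch below causes a failure only for one class of inputs, so the code can run correctly for a long time before the failure surfaces. The function and test values are invented for illustration.

```python
# A hypothetical defect that causes a failure only for specific inputs.
def max_of(values):
    """Intended to return the largest value in a non-empty list."""
    largest = values[0]
    # Defect: off-by-one; the loop never inspects the last element.
    for i in range(1, len(values) - 1):
        if values[i] > largest:
            largest = values[i]
    return largest

print(max_of([3, 9, 4]))  # 9: correct, the defect stays hidden
print(max_of([3, 4, 9]))  # 4: failure, triggered only when the
                          # largest value sits in the last position
```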

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused due to defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. A false positive is reported as a defect but is not actually a defect; a false negative is a test that fails to detect a defect it should have detected.
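
As a hypothetical sketch of a false positive, the pytest example below fails even though the code under test is correct; the defect lives in the test's expected value. The function, values, and file name are all invented for illustration.

```python
# test_vat.py; run with: pytest test_vat.py
# A hypothetical false positive: the code under test is correct,
# but a defect in the test's expected value makes the test fail.
def net_price(gross, vat_rate=0.20):
    """Code under test: remove VAT from a gross price (correct)."""
    return gross / (1 + vat_rate)

def test_net_price():
    # The expected value wrongly assumes net = gross * (1 - vat_rate),
    # so this test reports a "defect" that does not exist.
    assert round(net_price(120.0), 2) == 96.0  # correct result is 100.0
```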

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.
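
A hypothetical sketch of what that single line of incorrect code might look like: the ambiguous user story was implemented as simple interest, while the business expected monthly compounding. All names and numbers below are invented for illustration.

```python
# The defect (simple interest) versus the intended behaviour
# (monthly compounding), as one possible reading of the example.
def yearly_interest(balance, annual_rate):
    return balance * annual_rate  # defect: simple interest

def expected_yearly_interest(balance, annual_rate):
    return balance * ((1 + annual_rate / 12) ** 12 - 1)  # intended

print(yearly_interest(10_000, 0.05))                     # 500.0 paid out
print(round(expected_yearly_interest(10_000, 0.05), 2))  # 511.62 expected
```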

Tasks of a Test Manager and Tester

The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organization.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  • Develop or review a test policy and test strategy for the organization
  • Plan the test activities by considering the context and understanding the test objectives and risks. This may include selecting test approaches; estimating test time, effort, and cost; acquiring resources; defining test levels and test cycles; and planning defect management
  • Write and update the test plan(s)
  • Coordinate the test plan(s) with project managers, product owners, and others
  • Share testing perspectives with other project activities, such as integration planning
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities
  • Prepare and deliver test progress reports and test summary reports based on the information gathered
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control
  • Support setting up the defect management system and adequate configuration management of testware
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product (a small sketch follows this list)
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s)
  • Decide about the implementation of test environment(s)
  • Promote and advocate the testers, the test team, and the test profession within the organisation
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)
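
As a minimal sketch of what such metrics might look like in practice, the snippet below computes a few common figures; all names and numbers are invented for illustration, not a prescribed set of measures.

```python
# Hypothetical examples of simple metrics a test manager might track;
# all figures are invented for illustration.
tests_planned, tests_executed, tests_passed = 200, 150, 138
defects_found, size_kloc = 42, 12.5  # size in thousand lines of code

progress = tests_executed / tests_planned * 100   # test progress (%)
pass_rate = tests_passed / tests_executed * 100   # pass rate (%)
defect_density = defects_found / size_kloc        # defects per KLOC

print(f"Progress: {progress:.0f}% executed, pass rate {pass_rate:.0f}%")
print(f"Defect density: {defect_density:.1f} defects/KLOC")
```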

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organization, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans
  • Analyse, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis)
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management
  • Design and implement test cases and test procedures
  • Prepare and acquire test data
  • Create the detailed test execution schedule
  • Execute tests, evaluate the results, and document deviations from expected results
  • Use appropriate tools to facilitate the test process
  • Automate tests as needed (may be supported by a developer or a test automation expert); a brief sketch follows this list
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability
  • Review tests developed by others
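
A minimal sketch of what such an automated test might look like with pytest; the cart_total function, its behaviour, and the file name are invented for illustration.

```python
# test_cart.py; run with: pytest test_cart.py
# The cart_total function is a hypothetical object under test.
def cart_total(prices, voucher=0.0):
    """Sum item prices and subtract a voucher, never going below zero."""
    return max(sum(prices) - voucher, 0.0)

def test_total_sums_items():
    assert cart_total([10.0, 5.5]) == 15.5

def test_voucher_cannot_make_total_negative():
    # Boundary check: a voucher larger than the cart value.
    assert cart_total([10.0], voucher=25.0) == 0.0
```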

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

What are the testing objectives?

What we should test in a project may vary, and testing objectives can include:

  • Testing or evaluating work products such as requirements, user stories, design, and code 
  • Validating whether the test object is complete and works as expected by users and stakeholders 
  • Building confidence in the quality of the test object 
  • Preventing errors and defects 
  • Finding defects that could lead to failures 
  • Providing stakeholders with information so they can make informed decisions regarding the quality of the test object 
  • Reducing the risk of inadequate software quality 
  • Complying with legal or regulatory standards, and verifying that the test object meets those standards or requirements

Testing objectives may vary from system to system, depending on the context of the component or system under test, the test level, and the software development lifecycle model being used.

What is testing?

Software systems are an integral part of modern life. Users all over the world use, and even test, systems without knowing that they are part of the testing. In our daily lives we use systems on our phones and desktops for banking, mobile services, healthcare, ordering food, and much more.

Software that does not function properly can lead to many problems, including loss of money, time, or reputation. Software testing, which is part of QA, can reduce the number of errors, defects, and failures in the software under test.

Software testing is a process that includes many different activities: planning, analysing, designing, and implementing tests, executing tests, reporting progress and results, and evaluating the quality of the test object.

About

The Quality Assurance blog was created to offer a better way into the world of testing and critical thinking, skills that are important for any QA tester. We are happy to share the best knowledge about the future of QA.

Everything we do is 💯% QA testing, including this blog.

Made with ❤️