Kubernetes introduction

Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a “Dockershim”; however, the shim has since been deprecated in favor of directly interfacing with the container runtime through containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI) introduced by Kubernetes in 2016.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

Kubernetes API

The design principles underlying Kubernetes allow one to programmatically create, configure, and manage Kubernetes clusters. This function is exposed via an API called the Cluster API. A key concept embodied in the API is the notion that the Kubernetes cluster is itself a resource / object that can be managed just like any other Kubernetes resource. Similarly, the machines that make up the cluster are also treated as Kubernetes resources. The API has two pieces – the core API and a provider implementation. The provider implementation consists of cloud-provider-specific functions that let Kubernetes provide the Cluster API in a fashion that is well-integrated with the cloud provider’s services and resources.
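As a sketch of the idea, a Cluster API manifest declares the cluster itself as a resource and points at a provider implementation; the names, network range, and AWS-flavored infrastructure reference below are illustrative:

```yaml
# The cluster is itself a Kubernetes resource, managed through the core Cluster API
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # hypothetical cluster name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # The provider implementation: a cloud-specific resource that integrates
  # with the provider's own services (here, an AWS-backed cluster)
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster
```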

Kubernetes uses

Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.
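For example, each microservice in such an architecture is typically declared as a Deployment that Kubernetes keeps running at the desired scale; this minimal manifest is a sketch, with a hypothetical service name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                  # hypothetical microservice
spec:
  replicas: 3                   # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0   # illustrative image
          ports:
            - containerPort: 8080
```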

7 tips for improving load speed

Plan for performance

Are you building a new website? Be sure to discuss the importance of performance early on and set targets. That way, you have a faster load speed from the beginning and don’t have to implement fixes later.

Step 1: test, step 2: test…

Are you seeing a pattern here? 😉 Testing is crucial! Before you launch, load test your website multiple times to make sure it can handle the traffic of real site visitors. This is especially important for sites with complex hosting, such as load-balanced configurations.

Implement some “quick wins”

To be clear, there’s no “get fast quick” scheme for site load speeds. But there is a tried-and-true template that will put you ahead of the curve. That includes making use of modern image formats, enabling compression on the server via Gzip, and leveraging the browser cache. Find some more low-hanging fruit here.
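As a sketch of two of those quick wins in an nginx server block (assuming nginx; values such as the 30-day cache lifetime are illustrative):

```nginx
# Quick win 1: Gzip-compress text-based responses
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;

# Quick win 2: let browsers cache static assets
location ~* \.(css|js|png|jpg|webp|woff2)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```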

Careful of your images!

Good websites have great graphic content – but they also take into account how images impact load speed. You can improve image performance by considering file formats, image compression, and lazy loading.
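A minimal sketch of those techniques in plain HTML (file names are illustrative): modern formats with a fallback, plus native lazy loading:

```html
<!-- Serve a modern, better-compressed format when the browser supports it -->
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <!-- loading="lazy" defers off-screen images until the user scrolls near them -->
  <img src="hero.jpg" alt="Hero image" width="800" height="400" loading="lazy">
</picture>
```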

Think of the mobile visitors

More and more people surf the web on their phone these days, which makes mobile-optimized sites a huge priority! Since mobile users tend to use slower, less stable Internet connections, Accelerated Mobile Pages (AMPs) are a great way to get them content faster.

Prioritize above-the-fold

First impressions matter – and your above-the-fold content can make or break them! Consider inlining styles for above-the-fold content, then loading the rest of your code in chunks. This type of asynchronous loading can create a faster perceived load time for the user.
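One way to sketch that pattern (paths are illustrative): inline the critical styles, then defer the rest of the CSS and JavaScript:

```html
<head>
  <!-- Critical above-the-fold styles, inlined so they render immediately -->
  <style>
    header { background: #fff; min-height: 60px; }
  </style>
  <!-- Load the full stylesheet without blocking the first paint -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null; this.rel='stylesheet'">
  <!-- defer: download in parallel, execute after the document is parsed -->
  <script src="/js/app.js" defer></script>
</head>
```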

Assess your external scripts

Third-party scripts are a great tool – but can make your website feel a little crowded. Assess the performance of external scripts on your site load speed, and replace or remove those that are negatively impacting user experience.

DevOps preface

If you’re old, don’t try to change yourself, change your environment. —B. F. Skinner

One view of DevOps is that it helps take on that last mile problem in software: value delivery. The premise is that encouraging behaviors such as teaming, feedback, and experimentation will be reinforced by desirable outcomes such as better software, delivered faster and at lower cost. For many, the DevOps discourse then quickly turns to automation. That makes sense as automation is an environmental intervention that is relatively actionable. If you want to change behavior, change the environment!

In this context, automation becomes a significant investment decision with strategic import. DevOps automation engineers face a number of design choices. What level of interface abstraction is appropriate for the automation tooling? Where should you separate automation concerns of an infrastructure nature from those that should be more application centric?

These questions matter because automation tooling that is accessible to all can better connect all the participants in the software delivery process. That is going to help foster all those positive teaming behaviors we are after. Automation that is decoupled from infrastructure provisioning events makes it possible to quickly tenant new project streams. Users can immediately self-serve without raising a new infrastructure requisition.

We want to open the innovation process to all, be they 10x programmers or citizen developers. Doing DevOps with a container management platform makes this possible, and this blog will show you how.

This is a practical guide that will show how to easily implement and automate powerful cloud deployment patterns using a container management platform that provides a self-service platform for users. Its natively container-aware approach will allow us to show you an application-centric view of automation.


Now that we have those newly raised table stakes covered, let’s talk about how to stand out and deliver faster than your cloud-based DevOps competition. To jump ahead of the tech herd, you need to give your DevOps team tools that increase your software delivery speed, quality, and security.

In this age of exploding data volumes and complex processes, you need to do that while gaining (or maintaining) full control of your binary and dependency sets. Automation is great, but not if it forces your developers to work around it; anything that adds speed also needs to integrate instantly with the tech your teams already use.

In other words, the minute you deploy, you boost productivity immediately through integration with your ecosystem and DevOps tools. When you can do that, you also save time and money through easy management of the DevOps pipeline.

Can you see how this is all coming together?


To achieve all of the above, a universal binary repository manager like JFrog Artifactory gives developers a powerful advantage. It provides a searchable and clickable repository for binaries, saving them hours, even days, of reinventing the wheel.
But it’s not that simple. It needs to be more than that.
In the cloud, a superior pipeline tool like Artifactory needs to natively integrate with security scanning and compliance solutions. Enter JFrog Xray.
Through a tool like Xray, you empower developers to identify and mitigate known security vulnerabilities and open source license violations. You give them the tools to assess the impact that new components have on your overall system.
It also lets them drill down to identify all dependencies of each build package and Docker layer using deep recursive scanning, allowing them to continuously govern and audit artifacts consumed and produced in your CI/CD pipeline.
And Xray does it all while protecting against open source security vulnerabilities using the most comprehensive vulnerabilities database in the industry.



The DevOps world and the cloud have raised the bar on collaboration and cross-organizational visibility:

• 60% of businesses are adopting or expanding DevOps culture and processes
• 80% of businesses are now operating in the cloud
Let’s start with DevOps.
Forrester Research dubbed 2018 the year of DevOps. And it’s no wonder, with over half of enterprises implementing or expanding existing DevOps practices. So why are they doing that? Here are a few good reasons to consider it:
• Greater productivity and faster delivery of products
• Greater visibility and collaboration across projects, departments, and individuals
• Less siloing
So, DevOps removes friction; and as a practical environment for DevOps, the cloud just makes sense.

• Rapid deployment of new environments
• Reduced IT costs through subscription and SaaS (pay as you go) payment structures
• Moving from CapEx expenditures for hardware to OpEx expenses for SaaS
• Fast, agile scalability
So why the urgency to make these innovations? The truth is, they’re not really innovative anymore. It’s already happened.
The bar has been raised and you need a new edge.

• Institute Agile practices that focus on communication, collaboration, customer feedback, and small and rapid releases. Agile operations remove rigidity from your processes and allow for greater innovation, while keeping accountability and increasing goal focus
• Deploy a multi-cloud strategy with Kubernetes or another intermediary layer for cloud-agnostic and resilient infrastructure
• Build cloud-native systems for core products, with lift-and-shift for systems that don’t require much scalability
• Create microservices in containers rather than monolithic apps to increase your agility and your ability to innovate with less downtime
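As a sketch of the container route, each microservice typically ships with its own small image; this Dockerfile assumes a hypothetical Python service:

```dockerfile
# One small image per microservice keeps deploys independent and fast
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```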


Artifactory highlights:

• Compatibility with all build and integration tools on the market
• Support for Maven, npm, Python, NuGet, Gradle, Helm, and all other major package formats (25+ and growing), integrating with all the moving parts of the ecosystem
• A pay-only-for-what-you-use cloud model
• Security, with all data stored in multiple locations and providers

It also addresses the pain points of ad hoc binary management, such as lack of metadata context and weak policy enforcement, by providing:

• Information access management through authenticated users and access control
• Full artifact traceability to fully reproduce a build and debug it
• Secure binaries, by identifying vulnerabilities and license violations
• Consistent and reliable access to remote artifacts
• Local caching of artifacts, which eliminates the need to download them again and removes the dependency on unreliable networks and remote public repositories

As a Docker registry, it adds:

• Smart search for images
• Full integration with your build ecosystem
• Security and access control
• Additional insight into your code-to-cluster process, relating each layer to each application
• As your main Kubernetes Docker registry, it traces content, dependencies, and relationships with other Docker images, which cannot be done with a simple Docker registry

3 expert tips for (new) developers part-3

1 Don’t focus on reinventing the wheel

The expectations of you are probably lower than you think, because, hey, you’re brand new!

You’ll find a wealth of ready-made packages and code libraries online at your disposal. Do your research and be sure to sense-check the quality, but don’t be afraid to use these resources to help you spend less time “reinventing the wheel” and more time developing your skills and knowledge in other areas.

Which ties nicely with the next tip:

2 Make Google your friend

Seeking help online is often the most efficient first step towards a solution.

A great piece of advice is to “get good at Googling”. Someone has run into the same problem as you; you just need to find their solution. Once you’ve found it, try to understand the what, why and how before copying and pasting it. This is an opportunity to learn and develop your knowledge.

3 Be kind to yourself (and your team!)

It may sound cliché – and perhaps a little cheesy – but it’s important to be kind to yourself when starting out in your development career, as nobody becomes an award-winning developer overnight 🤷‍♀️

While it is sometimes easier said than done, don’t put too much pressure on yourself and make sure you allow yourself the time to learn, grow and most importantly, make mistakes! 

And you will make mistakes – just remember that it’s solving these mistakes that will help you become a stronger developer. And try not to strive for perfection – aim to write clean, reusable and easy to read code in a timely manner. 

And don’t forget to be kind to your team too and remember nobody comes to work to do a bad job. The key to a successful development team is helping and supporting each other. A happy team will always produce the best work – and it’s less likely to feel like a job!

3 expert tips for (new) developers part-2

1 Expose your ignorance

Ouch – this one can be a tough one for some. It’s only natural that you don’t want to look ignorant but you must fight this urge and speak up. 

If you don’t understand something or haven’t heard of a term or technology – ask. If you don’t, it’s a missed opportunity to learn and verify your understanding. Software development is a multifaceted industry, you can’t know everything and you’re not expected to, but you can always gain knowledge by speaking up.

2 Communication is key

This one might surprise you, but your communication skills are just as important as your software development skills. Take the time to practice writing – you’ll use it more in your job than you might think.

And get comfortable explaining what you do to non-developers. Especially in the world of consulting and cross-team projects, you’ll likely be communicating with people who don’t have the same technical background as you do. 

Miscommunication is perhaps the biggest threat to any project. You need to be able to effectively communicate with other developers, project managers and clients. Clear, concise and timely written or verbal communication can go a long way. It might take some practice, but if you’re aware of this from the start, it will become a strong skill for you going forward!

3 Develop your project management skills

Similar to social skills and communication, you need to be able to communicate your progress on development tasks.

Tools like Trello, Jira and Azure DevOps support developers in task management, planning and scheduling. These skills will help you when you’re fixing a bug or writing a new piece of functionality; breaking down a larger task into smaller pieces making it more manageable for you as well as making it easier to present an overview to your manager or other team members.


Get practical tips and best practices for using Metadata for Confluence from real-life use cases.

Assemble Your Product Portfolio in a Flash

For users who simply need an overview of the company’s product portfolio, it’s frustrating to have to search through every single product page. They would also need to be familiar with the product name or related keywords to find the content. For new team members, the task becomes harder as they lack the basic information to even begin searching.

The solution is to create a directory page with all the relevant information your users may need about the company products. By adding metadata to individual product pages, you can then populate all the information across those pages through additional macros, filtered by the metadata values.

Whether for marketing initiatives or for customer support, product pages with metadata come in handy whenever you need well-organized information on a product.

Just make sure to set up appropriate metadata sets that your users can fill out every time they create a new product page. To configure your product metadata sets, simply create a form that is required when a product owner or developer creates a new page, as shown below:

You’ll have predefined fields associated with the product page template, which allows teams to easily identify the critical information about the product.


Learn the basics of metadata and how it makes Confluence great again.

To organize the collective intelligence from multiple business functions, you first need to design an intuitive content structure to make sure that information is discoverable, whether through site navigation or search queries.

User-generated Labels Lead to Content Chaos

You may already be familiar with Confluence’s labelled content feature – the primary method for organizing content.

However, letting your team members freely add page labels can create problems. You’ll end up with a raging storm of tags that only brings more chaos to the wiki space. Not only does this approach require constantly keeping track of all the available tags, you’ll also have to correct misspellings and update teams with the right taxonomy.

Let’s face it. Even with a labelling system in place, with every new page comes a new topic and a plethora of new labels. And having all users consistently follow your labelling rules is wishful thinking.

Page Properties Fall Short

So, aside from labels, what are other ways to help teams effectively manage content?

What your team really wants is to have the right information in front of them when they want it. Much like searching on “Atlassian” in Google and immediately getting a neat summary of all the information about the company Atlassian.

Out of the box, Confluence comes with basic data categorization via the page properties macro. With this function, your user can generate a table containing key information about the content and have it shown on a “summary page.”

Here’s an example, based on created properties including Title, Owner, Due Date, and Status: the user can report the information about all project pages in a table.

However, similar to the limitation of labels, page properties lack the flexibility to present collective information that matters to different users. Plus, they require tedious macro setup along with user-generated parameters, which means you’ll end up with yet more clutter than before.

This is where metadata comes in.

Metadata Brings Order to It All

In a nutshell, metadata refers to information about a page and its content, such as creator and creation date, among other details.

With metadata, it’s extremely easy to add predefined categories to pages. This allows you to pull information from those pages and display only relevant data in a table format for quick insights into the content.

There are three main categories of metadata for Confluence:

Descriptive metadata: information that enables content discoverability
Structural metadata: information about the page structure
Administrative metadata: information about the source of content

Using Metadata for Confluence, you can skillfully conjure myriad content management capabilities, including:

Maintain a structural space organization and improve usability
Enhance content discoverability, regardless of naming conventions
Implement a more user-friendly Confluence navigation
Build a directory based on content from multiple sources
Make sure only relevant content is shown to a particular user

In the next chapter, we’ll let you in on our secrets to building a robust content platform using the Metadata for Confluence app.

Introduction

Overview of Confluence user needs and challenges.

As a Confluence admin, you’re entrusted with a mission-critical system for day-to-day business operations. Your teams count on you to bring institutional information to life on a wiki space, so that everyone can work more efficiently without constraints on knowledge.

While it’s common to think that the more content available, the more reliable your Confluence will be, we beg to differ.

Navigating Confluence can be challenging when you have thousands of pages scattered in many places, in different templates, and with no standardization. New information gets buried and never gets to see the light of day.

In fact, the biggest challenges for any Confluence admin include organizing the magnitude of content, maintaining organizational transparency, and ensuring the smooth flow of work.

What if there was a way to revitalize your Confluence site, organize pages, and maximize usability? That’s exactly why we built the Metadata for Confluence app.

No gimmicks needed. With metadata, creating a structured wiki and organized content is simple. Metadata doesn’t just bring contour and clarity to your team spaces. It gives you new capabilities as a Confluence wizard. Want to embed structured page properties? Check. Need to assemble information from thousands of pages for reporting? Check. You can do all kinds of amazing things with metadata, from building personalized intranet experiences to standardizing workflow implementations.

This series of posts uncovers several use cases for the Metadata for Confluence app, so you can learn how to apply them and create your own Confluence solutions across your organization.

Let the magic begin! 🪄🦄

Pillar 4 | Personalization

Make it relevant and personal
Bring end-user data into the conversation by connecting your mailing list, CRM or customer database to our digital human platform. With the volume of user data available at your fingertips, there are plenty of opportunities to create value through the use of analytics and user preferences. A virtual barista could easily learn and recommend your favorite drink, or a virtual product genius would already know which brands you prefer.

Value-add integrations
UneeQ digital humans integrate into a number of third-party services, such as translation tools, real-time data-driven APIs and knowledge bases that enhance your capabilities at scale. Consider which your users would find most valuable depending on your use case.

Manage latency through optimization
The quality of your technical implementation is imperative to ensure the experience is natural and seamless. In the same way talking to a real human would be jarring if there were large delays, latency in your digital human responses will create a difficult-to-navigate experience. Efficient NLP design and technical architecture will ensure a seamless and humanlike interaction.

Analyze and craft performance
Add your favorite analytics platform or other behavior-tracking tools to capture user sessions, clicks and utterances. Watch for opportunities to improve the digital human’s response in your NLP service and quash any unmatched user questions.

… say personalization has improved their customer relationships.

Pillar 3 | Multimodal UX

Preparing for the next web experience
As Web 4.0, “the symbiotic web,” continues to develop, early signs show it will be about a linked web experience that communicates with us the way we communicate with each other, while Web 5.0, “the emotional web,” will create an entirely new human-to-machine experience. Aspects of these trends can be used today by pairing a digital human with an interactive user interface.

Use UI to your advantage
Unlike the human face, your digital human is a multimodal digital interface. Consider the use of speech, on-screen displays, image and video content, interactive elements, and escalation features all as tools to create a balanced and versatile experience. A great example of this is the ability to walk a user through a home loan application, all while answering the user’s questions throughout the process.

Stage and test the journey
Whether it’s escalating to a human, sending the user an email, creating an account for them or helping them fill out a form, plan the many ways your digital human conversations can end. You can then work backwards to find the many ways your users will get there. Iterate on your journey with A/B testing to help smooth over tricky interactions and provide an optimized experience for the end user.

Environmental restraints
Consider the environment in which your users will interact. There may be design and deployment considerations for certain situations, including environments that are too loud, lack the necessary privacy, or have a poor internet connection.

… of consumers say the best thing brands can do to improve CX is integrate physical and digital channels.

Pillar 2 | Design for conversations

Great conversation design begins with role play
As you begin to think through your conversational design, and especially if you are evolving a chatbot into a digital human experience, review and role play how key interactions should make you feel. Written content is not always appropriate for spoken performance; using simple sentences and amending scripts to be suitable ‘for the ear’ is key. Role playing and documenting your own expressions and sentiment throughout the conversation will help guide the inputs and identify any content that is difficult to perform and mis-aligns with the desired user experience.

Guiding the conversation
Similar to the customer journey for your website (maybe even in parallel), it’s vital you continue to guide the user through a conversation. The good news is that a digital human, unlike a chatbot or even a voice assistant, is better at guiding the user, mainly because it’s not just command driven.

Small talk versus on-topic conversation
As you design the conversational path and role play through the emotional impact, also keep in mind the ability to include small talk. There are several great small talk conversational engines out there including Blenderbot or even GPT-3. The digital human advantage here is the ability to plug into many different “brains” or natural language processing engines, and layer the experience with highly curated content. So while GPT-3 is guiding small talk, Watson (or any other NLP) is the foundation for your guided or on-topic conversation.

Pillar 1 | Personality code

Embody your brand
Your customers love and trust you because your brand, your story and your tone of voice is aligned to their personal values. Make sure all those things come through in your digital human experience design.

Know your audience
Understanding your audience and targeted personas will help to confirm specific details in the conversational tone. For example, a healthcare digital human should focus less on using humor and witty replies, and instead focus on establishing credibility, nurturing and building trust.

Train for emotional connection
Personality is more about expressions and non-verbal cues than it is about the words. Your digital human’s differentiation is about bringing a personality to an experience and doing it better than any other alternative. Use this to your advantage.

… of customers say they want more human interaction as automated technologies continue to proliferate.

Create moments
If a smile was currency, how would you make the experience as lucrative as possible? Put a smile on users’ faces by delighting them and surprising them. Make solving their problems fun.

Conversational AI is a journey,
not a destination

Whether you are creating a virtual product expert, automating a complex financial form or introducing a virtual life health coach, it’s vital to the project’s success that you take into consideration each of the pillars we’ve outlined in this blog.

“Be very clear in what you want to achieve from this digital human – because the potential is limitless.”
Shashank Shekhar, CEO of Arcus Lending

We suggest that as you go through this blog, take some notes, jot down some questions and let us assist you in your journey. Our conversational AI specialists are eager to connect and help you implement best practices at each step.

So in line with that, let’s jump in and get started. The four pillars of an amazing digital human experience cover both digital and tech imperatives, as well as the need for conversation, interaction and fun.

Digital humans | Introduction

Often referred to as avatars, artificial humans, or even virtual assistants, digital humans are AI-powered lifelike beings that look, sound and interact like real people.

Accessible 24-7-365 and fluent in over 70 languages, digital humans add empathy, compassion, engagement and a personality to any experience. Powered by conversational AI from Google, Amazon, Microsoft, IBM and other global tech leaders, digital humans are revolutionizing how we interface with brands, educators, healthcare workers, financial experts and other professions on a daily basis.

We’ve created these posts as a best practice guide to building amazing and engaging digital human experiences. Of course, best practices are always evolving, so we’d love to hear from you and what you’ve learned in your own journey. As always, visit us often for more information and connect with us on social media.

Salary structure in an agency

Perks and benefits that save employees money in the long run are always a valuable addition to a paycheck. Addition being the keyword here.

Because no amount of pizza parties can supplement the 10% increase in salary that people could get at the other agency across the street. Except that’s not the case – the statistics surrounding this point in the exact opposite direction:

  • 32% of people polled in the US would take a 10% pay cut to work at a company where they like the culture
  • 58% of workers will stay at a lower-paying job if it means having a great boss
  • And 60% of workers would even take half of the potential paycheck if it meant working at a job they love

So if culture makes up for the differences in salary between your agency and the agency next door, how do you structure the salaries in your company to both attract and retain top talent?

  • Don’t buy stars, build them – Have a partnership with the local media and technical schools that provides internships and part-time positions for promising students. If you follow our onboarding tips and you build a functional onboarding program, after a couple of weeks, your time investment in onboarding them should already be paying you back. And in a few months? You might just have your hands on your newest superstar.
  • Have a clear progression path – Be upfront and transparent with the salary structure. It will eventually become the biggest motivator for the employees in the lower tiers. If you split your progression path into layers where everyone gets paid the same, you can skip long management discussions like: “Is a Senior Backend Developer with 4 years of experience worth the same as a Senior Art Director with 5?” An example of how to structure your progression path could be:
  1. Intern > unpaid, but gaining real-life skills and experience from an agency by working on real projects
  2. Trainee > paid, part-time or full time; self-taught, certified or freshly graduated
  3. Apprentice > Same credentials as a trainee, but with some successful commercial projects
  4. Junior > Proven 1-3 years of experience with commercial projects
  5. Senior > 3+ years of experience with commercial projects and proficient with project management and delegating tasks
  6. Management > If you’re doing linear progression, this step is simple. But if you want to do non-linear progression, it’s worth differentiating at management level:
     a. Senior members with multiple specializations and experience with managing teams
     b. Senior members with extra non-managerial responsibilities (product development, decision making, etc.)
  7. Equity tier > Management whose investment with the company is substantial enough to warrant equity in the company
  • Promotions, raises and employees who feel undervalued – If you adopt the aforementioned salary structure, your employees should have a clear overview of where they fall and what they need to achieve to move up to the next salary level. But as it goes with highly ambitious people, you will always have individuals who take on more than their fair share of responsibility and then don’t feel adequately compensated. The answer should be obvious: if an employee performs above the set expectations, has the data to back it up, and asks for an increase in pay, they should get one. Sadly, when working with more than one person, it will never be that easy. Ben Horowitz summed it up best in his class in Y Combinator’s How to Start a Startup.

A point he brings up is: If you give that employee a raise, will you give everyone else who is also performing well a raise as well? What about the employees who are performing just as well, but their personality prevents them from asking directly?

Apart from being approachable overall, managers and senior agency members can adopt these two methods to focus these conversations and help employees feel more valued and heard:

1. Monthly walk and talk: A manager and employee go for a half-hour walk outside of the office, talking about current projects, plans for future projects, the employee’s progress and any problems they might be having

2. Yearly progress conversation: Performance reviews are usually dreaded because of the negative associations people have with them, and regular walk and talks already remove the need for quarterly performance reviews at a scary meeting-room table.

But a walk and talk is not really the place to sign contracts and obsess over spreadsheets. So how about a yearly progress review, close to the end of the year, talking strictly about the employee’s progression path and salary?

That way, both current problems can be addressed from month to month, and larger issues or achievements can be accumulated over time.

Non-linear progression

When hearing the words ’’non-linear’’, if your mind immediately jumps towards video games, you already sort of get the point.

In a non-linear game progression system, you start at the same spot as every other player. But when you arrive at a crossroads, instead of going straight down the first path like you usually would, you get to choose if you want to go left, right, or even take a step back and see if you can get to your current position again, by taking another path. This progression helps you pick up new skills and new experiences that will make the path ahead much easier.

This is also how the current trend in career progression looks. Companies no longer expect people to stay in the same career path for decades, slowly working their way up the corporate ladder. This rings especially true for agencies, where skills from different career paths transfer almost seamlessly and complement each other with a broader outlook on the problems being solved.

As an example, if you have a frontend developer who discovered she likes designing more than she likes coding, you should give her a chance because:

  • She already knows the limitations that code can have on some designs
  • She can design with systems and reusable assets in mind
  • She can give better estimates on project length and the overall development time
  • If she wants to progress further into something like art direction, the added coding skills are always a plus when communicating to both clients and developers alike

If your agency has people who have invested in their craft to the point where they are considered experts, top talent, or masters, their progression will eventually hit a plateau.

And while just existing at the top and using your skills to their full potential is a fantastic feeling… ultimately, the need for self-improvement and innovation that got them to the top of the talent pool will make them want to progress further. But you can’t really go further up than the top, so where do you go?

This is where people start considering switching jobs or pursuing entrepreneurship because it seems like the only challenging way forward.

The classic solution to this “problem” is to promote them to the management level. Clearly, if someone is performing exceptionally well as a specialist they will automatically become an exceptional manager… Right?

The solution is not always that simple, and pushing someone to become a manager (or a manager of a bigger team than before) is not for everyone. Some top talent enjoy being specialists and would rather spend their time performing their tasks than managing a team.

“In a hierarchy, every employee tends to rise to his level of incompetence.”

– Laurence J. Peter, Author of The Peter Principle

The previous quote refers to what is known as the Peter principle, a concept of management developed by Laurence J. Peter. The principle suggests that people tend to get promoted outside of their skillset and competence, based on previous success.

Meaning: Your best front-end developer is first and foremost… a front-end developer. Having 10 award-winning projects under his belt does not make him an instant candidate for managing the next project. That requires front-end knowledge plus an additional management skill set, the lack of which could lead to disaster down the line.

The modern solution to the problem is working with non-linear progression and promotion. Instead of the career path only going one way – towards management – you can set an alternative path. This could be anything from giving your top talent more influence on projects or a seat at the table when tough decisions are made to simply giving more freedom to perform tasks their own way. Once you start thinking outside the box you’ll be amazed at the possibilities there are for non-linear progression.

And the result?
Happier top talent that gets a truly unique position at your agency, which they won’t be able to find anywhere else.

At SQAEB, most of our junior employees start out in the SWAT department, helping our users with day-to-day issues. This helps them naturally and quickly get an overview of all the other departments, the products, and how everything fits together. Later they can choose to transition into newly opened positions in the company that they find interesting, or get placed in completely new positions based on their specializations.

Are you having any fun?

Fun is a fickle thing. Everyone inherently knows what fun is, but if you had to define fun at the workplace, it would not be as easy as it first sounds. Looking up the definition of fun will also get you reprimanded by the dictionary, and there is no one sure way to define it. The only sure thing is that if the most interesting thing at the office on the first day is the photocopier, the new employee getting the tour will probably start looking for another job during the lunch break.

The overall feeling of fun at the workplace impacts productivity. And so it’s
a topic without any specific bullet points, but a topic to think about and discuss nonetheless.
If you want to have fun at the workplace but can’t manage to play chess
on one screen while maintaining your focus on coding… or your keyboard shortcut hand is also your balloon tying and juggling hand… you will probably need to interact with other people eventually. But there is only a limited level of friendship and camaraderie that you can build with people when talking about code and sending each other design files.

When was the last time someone asked a different water cooler question than: ’’So, how was the weekend/any plans for the weekend?’’ In most agencies, it has probably been a while. And that’s expected. If you work in a consistent and focused environment, there are only so many topics that can come to mind.

But if you change up the setting, if you do different activities together, you might build more than just classic coworker bonds. You might build friendships. And what could be nicer than looking forward to Monday morning at the office to see your friends?

But not everyone comes to work looking for friendship. Especially top performers who just want to put on their headphones and forget that they are in an office environment.

Sadly, headphones run out of battery, the Wi-Fi goes down, and progress meetings exist. Eventually, even the most focused people have to talk to their coworkers. And since you spend most of your day at work, people would prefer to cut down on the dry, corporate jargon and instead discuss or do something… fun.

This again brings us to the topic of shared values. The job of a back-end developer and the job of a UX designer require different personalities. So if your agency wants to have a varied offering of skills and backgrounds, you will have to find values that connect with every group.

But not just the ’’standard’’ values that are put on the agency “about us” page – the values that make up the constantly evolving personality of your agency. If you do this, you will eventually have an agency full of like-minded individuals who don’t need to act corporate 24/7 and might even joke around from time to time.

Sadly, there is a thin line between having fun at the workplace and being overly quirky and disrupting everyone’s work. Unfortunately, you can also never get full value-alignment with every person that has been hired. But an agency where people think of each other as nothing more than colleagues and only spend time together at work is an agency that will have trouble scaling and keeping up with the more friendly teams later on.

Your culture and environment both have an impact on the quality of your work.

Talent Investment

You have to spend money to make money. And you have to invest in top talent to retain top talent. Achieving maximum focus in an office setting where a million things are gunning for your attention is tough.

All of that can be managed with a good work culture and processes. But if you don’t have the right equipment and tools, you’ll never be as efficient as you could be.

Maybe a chair is not comfortable. Maybe you can still hear your sales team in the other room, even with your headphones on. Maybe you found a SaaS tool that would save you hours upon hours of repetitive tasks.

If someone asks for a new keyboard, new tool, or new screen, it’s never a good idea to dismiss them right away. The person asking rarely brings up an issue like this on a whim, it has to be premeditated in some way, and that means that the problem they are facing is a recurring one.

“The way management treats their associates is exactly how the associates will treat the customers.”

– Sam Walton, Founder of Walmart

A one-time investment, no matter how large, is actually pretty small when looking at it as a long term investment in focus and productivity. If an agency shows that it cares about its employees in all the ways that matter, the employees will return it multiple times over. Here are some small or large things in no particular order that could make or break an employee relationship with the company:

  • IT equipment. If you ask someone to work in front of a computer 8 hours each day, you better make sure they have the proper equipment to do their job. This includes everything from computer hardware to noise-cancelling headphones and the online tools they need.
  • Chair and desk. This one is connected to the one above; spending a third of their day in uncomfortable working conditions will severely hurt their productivity and health.
  • Coffee, refreshments and snacks. We know it might not sound like much, but making sure that your employees have access to all the basics like coffee, cold water (or soda) and some fruit can drastically increase their productivity and improve health.
  • Indoor climate. The stereotype of a developer might be: someone sitting in a dark basement with a hoodie on – but nothing could be further from the truth if you want them to be productive. Proper lighting, some plants and good ventilation are all tiny details that have a huge impact.

Talent Professional growth

A promotion: While most talented people love what they do, as they repeat the same tasks day after day, they will eventually find ways of improving the process or get ideas for new ventures that the team should pursue. And there is only so much one can do from the bottom of the corporate ladder. Career growth is a key part of goal-setting strategies for high performers, and agencies need to provide these opportunities if they want to retain their top talent. Otherwise those people might look for those higher positions elsewhere. Please note that a “regular” promotion is not always the best option; we’ll cover that later in our post “Non-linear progression”.

A raise: Usually going hand in hand with a promotion. However, while every promotion should come with a raise, not every raise has to come with a promotion. Many people are not after the responsibility that comes with a promotion; they just like what they do, and so they take on more tasks, spend more time at the office or even work weekends. But maybe they aren’t looking to delegate their tasks to their would-be replacements. Maybe they just want to feel like their extra time is seen as valuable by the agency. And seeing as time is money, sometimes the answer is as simple as that.

While all of the above will probably make your agency employees happy and get your agency valuable, educated and dedicated employees for a long time
to come, there are also smaller ways to improve productivity faster.

Talent Personal growth

Courses and conferences: There are always new books and courses popping up, covering the latest and greatest developments in the industry.

If your top performers ask you to help fund their education, saying yes is one of the best ways to show them that you are counting on them in the future.

Maybe there is a developer conference coming up that would help them meet some like-minded people and gather industry knowledge?

While it may seem like a big investment to send one or multiple developers away for a few days, the new knowledge and energy they bring back will pay dividends now as well as in the future. If they have valid arguments for going, why not give it a shot?

Schools and degrees: A similar approach to the one about courses and conferences, to an even higher degree (forgive the pun), should be taken if an employee asks about the possibility of returning to school.

Maybe they got this job straight after finishing their bachelor’s degree. Maybe they want to go for a manager position and think that an MBA would greatly improve their outlook.

Or maybe they want to slowly transition to another position, but wish to stay at the agency. Customer lifetime value and return on investment are some of the most important metrics that agencies need to keep an eye on. But try to imagine the “employee lifetime value” of someone you helped put through school.

Personal and professional growth

Every movie about an office work environment has managed to, in one way or another, demonize the monotony of sitting at a cubicle doing the same work every single day. And who can blame them? Doing the same thing over and over again is widely referred to as the definition of insanity.

No one wants to feel like they aren’t progressing in their job. And this rings especially true when we are talking about top talent. If someone wants
to stay at the top (where you probably want to keep them), they need to continually have an eye on the newest developments in their field.

The information gathering and processing is on them – allowing for an environment where they can test new ideas, that’s on the agency.

There are many ways to help talented employees fuel their passion for their work. Every person is looking for something different, but we have a few ideas that should be universally interesting for most people.

Is ’’When and Where’’ Important?

Allowing for a full five-day remote work schedule is not something that can be implemented instantly, it’s something that agencies have to build towards over time.

For a large portion of agencies, a full week of remote work might not even make sense at all. But giving people the freedom to work from home as needed on special occasions can remove a lot of unnecessary stress. If a person needs to take care of some errands, look after the kids, or maybe they are not feeling well enough to drive to the office, but well enough to work, why not have the option of working from home?

Let’s say you have a single developer dedicated to taking care of your agency website. He has tasks that he doesn’t actively collaborate with anyone else on. He gets a mockup of the website, some copy, and gets to work. He might also be actively trying to sell his apartment. In most companies, this would mean that he has to run back and forth between the apartment and the office, sometimes multiple times a day, to deal with the buyers, real estate agents and contractors. But does he really have to?

Would it not be more comfortable for him to stay at home and work between meetings? And would it not make it easier for his team members and managers not to have to keep track of his travel schedule? And if the work gets done in the right time frame, does his physical presence at the office really matter? I’ll discuss this further in the “Is it time to go fully remote?” post.


At SQAEB, everyone has a setup that allows for secure remote work, and in case of sickness, family emergencies, schoolwork or other unforeseen events, they are always welcome to work from home. We give people the benefit of the doubt / assume positive intent, and so far, it has always paid off.

Talent Freedom

Freedom is often hailed as the ultimate solution to happy employees. But most people have an easier time being creative when there are some restrictions in place.

Example: Say your agency needs you to write as many slogans as possible selling pineapples in the next 10 minutes. When do you think you will produce more? A) If the 10 minutes is the only restriction, or B) if you have the 10-minute restriction, you cannot use the word ’’pineapple’’, and every slogan has to be 10 words or fewer?

Studies show that B is the right answer – even though you have more freedom in A. Sidenote: We tried it at our office and we are currently considering a new venture in ’’Spiky yellow fruit’’ advertising.

So does this prove that freedom may not be the answer to an infinitely creative and productive workplace culture?

Of course not – because we had the freedom to choose those restrictions.

Client expectations and agency needs dictate the tasks that have to be solved. Every agency also needs to have some time and budget restrictions to prevent a project getting out of hand.

Other than that, the freedom to solve the problem in any way possible is one of the most significant benefits you can grant your employees:

  • Always demanding the most efficient route to a solution takes all the learning and experimentation out of the process
  • Using fewer billable hours and chasing maximum efficiency will inevitably mean that the client should probably expect cookie-cutter deliverables instead of innovative solutions
  • If there is a framework, guideline or brand book for everything, proposing new solutions and approaches might be perceived as too much of a hassle to even suggest

If you find the perfect balance in the above, you should have the How and Why of task management covered. But freedom in the workplace is a complicated thing. The How and Why are questions that have to be answered or the work will never get done. But why not take more weight off of people’s shoulders by not having them stress over the When and Where as well?


Hiring and onboarding new employees is one thing. But as we know, the cost of employee turnover is high. If you don’t work on having a great environment where your employees thrive, then it’s going to be very costly for you to keep replacing everyone.

Employees changing jobs is impossible to stop – especially in the tech industry – but there are things you can do to keep your turnover rate low.

This post could just be called ’’culture in the agency space’’ because that is the true key to acquiring and keeping top talent.

But what is company culture?

The 17-word, aka the short answer: Company culture is the combination of all the values, social interactions, and psychological behavior in an organization.

The 340-word, aka the long answer: Company culture is hard to define in specific terms, because unlike most essential things in business, it is entirely intangible, a feeling. Branding is closely intertwined with culture in every interaction that the company makes with any of its outside stakeholders. And if you want your brand to be consistent across all channels, you have to work towards a work culture that aligns with your corporate messaging.

A brand is a reflection of your company in the minds of your stakeholders.

That is why it takes on new forms in every piece of content shared on social media, every meeting with a possible client, and every shared lunch break with Debbie from the agency next door. A brand consists of many moving parts, some tangible, some not. The tangible can be boiled down to visual identity, messaging, and imagery, if need be. These can all be changed with a new set of guidelines, a new designer, or a new marketing department, but how do you control a culture?

Culture is not just a code of conduct, communication strategy, or a list of processes. Company culture includes all the small details:

  • The tone of voice the CEO uses to address a reporter while discussing a new acquisition
  • If your employees feel comfortable talking about non-work related issues with their manager
  • If the new sales intern feels like waking up in the morning on his second week on the job

And that’s why culture is one of the hardest things to get right in an agency, as it cannot be acquired, mandated or forced.

Culture has to be built and continuously monitored and maintained.

You can tell a lot about an agency’s culture:

  • In the way your company treats employees, customers and the surrounding community
  • In the degree to which your employees are committed to the company values and goals
  • By how comfortable employees are with innovating, making decisions and expressing their opinions
  • In how information flows from one department to another and from the higher-ups to the lower-level employees

Day one onboarding

There are many things a person needs to know on their first day at a company. And there are a lot of things that they will definitely not remember. To prevent information overload, it’s preferable to save some things for the rest of the week; the fresh hire will pick them all up eventually. So what should they know on their first day?

  1. Give them an “onboarding buddy”. This should be someone from their team, whom they can ask any and all questions, without feeling like they are a bother
  2. The values or the ’’WHY’’ of the company
  3. The names of their closest coworkers
  4. The tech stack your department is using
  5. Where to find the best coffee machine in the building, as well as any other refreshments they can get (fruit, cold water, etc.)
  6. How the company intranet or CMS works
  7. The most efficient way to get to their desk
  8. The information and communication flow of your company (emails, chat, phone calls, etc.)
  9. Where the bathrooms are (you’d be surprised how often this is an issue)
  10. What task management solution your team uses to keep track of tasks
  11. When lunch is
  12. Their first real work-related task

That’s about it; any other information would probably be too much. And as we all know, if you go for a handshake tour with every department immediately, you forget the first person’s name while shaking the third one’s hand.

Onboarding that rocks

Onboarding a new person to the team is a masterclass in taking your own medicine for a lot of agencies. Every good agency prides itself on an in-depth understanding of user journeys and user experience, but what is the experience of joining your agency like?

Placing someone behind a desk, giving them access to your password manager, and asking them to start developing right away is the equivalent of ordering a pizza and giving the delivery guy just your zip code. It takes so much more, and a good onboarding experience can make or break your company’s ability to foster new top talent.

Interview a talent

Generally, tech companies have adopted a ’’multiple interview approach’’ that not only gives applicants a coding test or some homework, but also goes over their background and culture fit in the same depth. More and more agencies are now doing the same. This is where our hiring journey once again splits into two paths, this time based on whether you chose the internal hiring strategy or the headhunter/recruiter strategy.

The recruiter can take care of the searching, first impressions and the technical fit, but you should always have the most promising candidates meet the current team for a short and sweet meet and greet before you consider hiring them.

If the agency conducts the entire hiring process in-house, there is a lot of leeway in the process. Try new approaches and strategies, and eventually, you will find what works for you. But if you want a hint from a company that put culture first and has been doing so for 3 years, here’s how we do it at SQAEB:

  1. Collaborative effort to identify skills required. Once we are sure we need a new addition to a department, the team goes over the exact skills we are looking for. This ensures that the team knows which new skills are coming in, instead of a manager deciding it themselves.
  2. Job posting. When the manager has the final job posting ready, it is posted and shared online internally as well as externally. We know the value of a good network, so employees from all departments are asked to share it with anyone they might think is a good fit. To help gauge personality in the first screening process we usually ask for a short video introduction, along with a resumé, just to get an idea of who you are as a person even before we meet you.
  3. Screening of candidates. As soon as we have enough candidates, the first screening process starts. This consists of sorting out anyone who does not have the required skills or did not adequately show that they would be a good cultural fit.
  4. First interview. All candidates that pass our first screening are invited
    to a first interview. The purpose of the first interview is to get to know them as a person and figure out if they would be a good cultural fit. This includes having a current team member talk to them for 10 minutes one-on-one, without those involved with the hiring present. If the personality is a match to our culture, they are given homework and invited to a second interview.
  5. Homework. While the first interview is focused on the cultural fit, the second is about technical skills. And to judge that, each candidate is given homework to complete before the second interview. This consists of various work-related tasks where they have a chance to showcase their skills. The homework also includes writing a movie review. This is an added curveball to see how they approach problem solving of tasks they probably haven’t done since high school.
  6. Second interview. We have the second interview to go over the homework and technical questions. This is where their skills are assessed and the main goal is to ensure that the chosen candidate has the necessary skills to handle the tasks they would be given in the position.
  7. Hiring. After the second round of interviews it is often clear which candidate is the best cultural fit and whether or not they have the necessary skills.

Now that you’re done recruiting and have hired the right person, the real work starts: onboarding. Hiring the right candidate is one thing; but if you don’t manage to give them a proper onboarding experience they will not perform as well as they could. Onboarding is the first step towards nurturing top talent.

Talent, Takes one to know one

Agencies have a lot of ways to get new talent in the door. You might do all the recruitment in-house, outsource it to a headhunter/recruiter or grow to a point where a dedicated HR department or in-house recruitment person is the way to go.

But no matter which option is the most viable for you, always keep the cultural fit in mind. You might find out that the person with the most extensive resume might be too far in their career to adapt to the workflow that works for the rest of the team. There are also cases of people with less impressive qualifications, who fit in so well with the rest of the team, that they hit the ground running and start producing work way above their estimated skill-level right away.

Making your agency a cultural paradise for top talent pays off in more than one way:

On one hand, you will attract those who have already proven to be top talent, which can give the quality and speed of work an instant boost. And if they are the ones who come to you looking to join, you’ll have a much larger talent pool to choose from.

On the other hand, you will be nurturing potential top performers from their career infancy and helping them grow into top talent with the right personality traits to perform at your company. That has an ROI that can only be beaten by time travelers going back in time and buying stocks in Apple.

This whole train of thought is where agencies might learn something from the world of sports, where it’s a common philosophy in some football clubs (or soccer if that’s the term you prefer to use):

”We don’t sign superstars, we make them”.
– Arsène Wenger, Manager of the Arsenal F.C.

But how do you make sure that your candidates are a cultural fit? And how can you make sure that they can do the work once they get hired?

Contrary to what you might think from our previous arguments about “personality > skills”, it’s important to start with the skills first. At the end of the day you need to know which skills you’re looking for before you can start evaluating personality and cultural fit.

When the hiring process is handled by the department or team that is looking for a new member, the senior members or managers are usually in charge of the process. If there is an obvious need for a specialist that the team doesn’t yet have, creating the requirements should be as easy as simply writing down the tasks that need to be done and translating them into skills. However, if there is just more work coming in for a specific skill set (UX, .NET Developer, etc.), the existing team members should be consulted so that the new hire can complement their skill set.

Once you are settled on the skills, it’s time to consider the personality you’re looking for. Are you looking for a person with an extraordinary drive to grind it out 50 hours a week? Or maybe a true team player who makes everyone around them better? There are no right or wrong answers here – but it’s important to have an idea of which personalities you’re looking for.

The tone of voice varies from agency to agency and even from team to team, and the structure of a job posting can vary quite a bit. But there are still some evergreen tips that could save you and potential candidates some time:

  • When a job has language or certification requirements that make or break the application, start with those
  • Don’t get caught up in listing every technical requirement and skill imaginable – focus on those the job actually needs
  • Present the personality traits you are looking for on equal footing with skills, education and experience
  • When dealing with entry-level jobs, a portfolio of work could be supplemented with school projects that have a similar scope
  • Don’t put unnecessary year requirements on non-senior jobs
  • With software that has a steeper learning curve, ask for a specific platform that your team uses (Sketch/Adobe XD/InVision) instead of listing experience with prototyping software in general
  • Don’t ask for 8 years of experience in a language that has been around for 3 years

Job posting for a Talent

It’s fair to assume that people who can be considered top talent in their respective disciplines probably got there through a combination of hard work, dedication, and professionalism. Then it would be more than fair if they expect the same qualities from their potential new employer.

This is why you need to have an in-depth look at every part of the job posting, so both parties know if they are a match even before they finally meet face to face.

Talent Career page

A good starting point for your “first point of recruitment” (not the first point of contact, because that’s probably your landing page) is to create a clear value proposition for inbound job candidates. Until your agency reaches a certain size, you can’t cater to everyone’s wishes concerning work-life balance. Your hiring decisions should always be based on cultural fit more than technical fit.

While technical skills are clearly important, it’s much easier to improve a skill than it is to change a personality. If we want to go into specifics, we can go back to the user experience analogy. When writing a value proposition on the careers page, you need to think about what kind of agency you really are.

“We are looking for dedicated people to help bring the most innovative web solutions to life for our clients by day, and help us put up new shelves for all these awards by night…”

That statement will attract a certain kind of person:

  • Fresh graduates with a lot of ambition looking for validation of their skills
  • Experienced professionals who want an environment for their talents to be utilized
  • People looking for a challenge who don’t even consider crunch time a negative word
  • Career-building professionals who are looking for a place that adds more awards to their resume
  • People who live for their jobs and look forward to evenings and Saturdays at the office filled with pizza and fixing the kinks in the code

Then on the other side of the spectrum, you could have:

“You bring the talent, we bring the perks. At AUE Inc. (Agency Used as an Example), we value strategy and planning above everything else. And thanks to our in-depth research and planning, clients always get the solution they need, instead of the solution they think they want. This also means that our employees never have to worry about scope creep or staying at work past 5 PM. Oh, and did we mention possibilities of working from home or the 4 day work week?”

A few sentences like this on your career page could go a long way towards attracting people that:

  • Love their jobs, but don’t want to sacrifice time with their family for work
  • Are perfect for the job, but would have had to relocate or travel multiple hours every day
  • Are motivated for the job, but also have other ambitions and are trying to run some sort of side-hustle or project on the side

Sections like “International Workplace” or “Fun Squad” show that you care about an open and fun work environment, where colleagues also become friends.

What is top talent?

Before we start our deep dive into the obvious and not-so-obvious ways of attracting and retaining top talent, let’s take a moment to define:

What exactly is top talent?

Top talent is one of those terms that does not have a clear-cut definition that people can point to. However, when talking about the agency world, there are certain characteristics that come up time and time again when discussing high performers:

Skill – The go-to metric for determining top talent. Whether it’s due to natural talent or 10,000 hours of practice, if someone is exceptionally skilled, they are on the best possible path to be considered top talent at any agency.

Ambition – The goal of reaching the top of their field. Ambition drives people to always keep up with the newest trends and developments in their field and continuously improve their skills.

Integrity – When they say something will get done, it gets done at all costs. And if both the managers and team members know they can count on someone when the going gets tough, that person becomes irreplaceable.

Communication – Knowing how to clearly communicate with managers and executives who speak the language of money on one side, while also communicating with technical team members who speak in code and high-fidelity mockups on the other, is a skill that is worth its weight in gold.

Teamwork – Everyone can excel at their individual tasks, but sharing a task or working efficiently in a team is a must-have for those who want to become top performers in any agency.

Creativity – Some creatives are a constant source of ideas during a brainstorming session. Some always see a problem from 3 more angles than everyone else. And while creativity manifests in a lot of ways, sometimes it’s the main thing behind a person’s top-talent status.

Leadership – Leadership is not just a skill for managers or team leads. People who join fresh out of college can find themselves at the top of the pyramid in any team within a few months, even with no direct effort. If an individual is approachable, facilitates a good workflow, or solves problems with a level head, they will soon become respected by their peers as a leader, even with no title involved.

Devotion – The green “you can talk to me” light next to the monitor turns red. The headphones go on. 6 hours, 3 cups of coffee, 1 missed lunch, and a single stretching session later, one individual just saved a 10-person project from being one week late. That’s how people become legends. And top talent.

Being considered top talent does not mean that a person has to have all of these qualities fully formed. It doesn’t even mean that top talent and top performers have to achieve all of these qualities eventually. A person who fully masters 3-4 of these qualities should quickly rise to become a prime asset to any agency. And if your agency finds itself hiring a person that displays most or all of these qualities, then you should do everything you can to keep them around until they decide it’s time to retire.

Why talent is more valuable than ever

Every day we are moving towards a world that is both more efficient and more digital than any sci-fi cartoon from the 70s could have predicted. Among the forces at the helm of this digital revolution are the creative, design, and web agencies that are facilitating this change for everyone else.

Whether it’s by helping businesses that previously had no digital presence be represented in the digital space or taking established businesses and expanding their opportunities with new online solutions… the role that agencies play is undeniable.

But to fuel this innovation, agencies need a constant supply of developers to fill a multitude of general and specialist roles. And while the demand for developers is at an all-time high, the supply of both university graduates and self-trained professionals is not even close to enough.

Multiple surveys over the last couple of years have pointed to a worldwide shortage of developers. The top three issues software businesses face are a mix of:

  • Not having enough people
  • Sharing experience across seniority levels
  • Hiring suitable candidates

With almost 9 out of 10 IT businesses saying that hiring new talent is “hard” (and 36% calling it “very hard”), it’s starting to become evident that calling this a developer shortage might be an understatement.

Recruiters often describe this situation in terms like “worldwide developer shortage crisis”. So if hyperboles are on the table, what if you wanted to make your recruitment even more selective? If you don’t want to settle for just any ol’ developer, but instead want to attract the top talent in the industry, with all the perks they might bring to your agency, then you must be prepared to rethink or tweak some things about the way you operate.

If that sounds like a hassle, or you already have a team filled with top of the line developers, you might want to think about retention instead because employee turnover costs you more than you know, both directly and indirectly:

  • Teams that are in constant flux and have an unstable structure are obviously going to be less productive
  • The employees that leave are always going to leave with crucial experience/knowledge that is completely removed from the company
  • The brand might get damaged from bad reviews on employer-rating sites and word of mouth, or bad press in general
  • The cost of losing an employee can range anywhere from 16% to 213% of their annual salary in some cases!

Now that talented developers are more scarce than ever… you might be wondering:

How does one identify this “top talent”? And once you’ve done so, how do you recruit, onboard and retain them?


If your web agency handles digital marketing for your clients, you’ll need a specific marketing tech stack. But since that could be a whole white paper in itself, we’ve chosen to focus on two categories that no agency can go without.

Website tracking and reporting

Every successful agency has some sort of key performance indicators (KPIs) that they use to track the success of their activities for end clients. To do so, you’ll track anything from more complex metrics, like customer acquisition cost or customer lifetime value, to simpler ones like “which link did the user click to get to the shop?”

If you want your marketers to have complete control of what information you gather about your users, Google Tag Manager is the way to go. With Tag Manager, you get a tag management platform where you can set up tracking for pretty much anything your users do on your website and pass that data to any analytics and advertising tools you use. Google Tag Manager also comes with multiple pre-built tags that make your life easier by letting you customize and implement tags without any coding knowledge, especially within the Google Marketing Platform suite of apps.

Free option: Yes
Pricing starts at: Whatever you can negotiate with Google’s sales team, but the free option will take you a long way.
Notable features:

  • Covers everything you could ever want to track
  • Built-in tags and templates speed up the process
  • Multi-platform support


Adobe has been in the creative software market for about 30 years, and they are still the industry gold standard for all things design. Their ever-expanding cloud suite of creative products makes them a one-stop-shop for any agency looking for a full suite of tools and resources. These include Photoshop and Illustrator for your raster and vector image needs, or Premiere Pro and After Effects for video editing and motion graphics. In addition to more than 20 different creative apps, your subscription to Adobe Cloud can also provide you with useful resources such as fonts, stock images, and tutorials, or even a portfolio page.

Free option: No
Pricing starts at: $79.99/month/per user for the entire suite or $33.99/month/per user/per app

Notable features:

  • The suite is a one-stop-shop for all creative needs
  • Asset collaboration and sharing features for business plans
  • Stock images and fonts included in business plans


If you use at least one other Microsoft 365 solution in your business, you might want to consider adding Microsoft OneDrive as well. In 2017, the Microsoft blog published some impressive numbers:

“What do our customers think? With over 85 percent of the Fortune 500 companies having OneDrive and SharePoint across 250,000 organizations worldwide, we are delivering on our vision of a more connected workplace. In fact, usage of OneDrive for Business has more than doubled in the last year alone.”

Numbers like that usually have some pretty amazing products behind them, and OneDrive is no exception. With seamless integration into both the existing Microsoft 365 suite and your workflow, you will have your files available anytime, anywhere.

Free option: No
Pricing starts at: $5.00/month/per user
Notable features: 

  • Seamless integrations with the Office 365 suite
  • Auto-sync features
  • 1 TB of OneDrive storage in the basic plan


So if step one is to track data, what do you do once you have it? Well, unless your client specifically asks for an Excel document, it would be nice if you could somehow visualize the data. This is where data reporting dashboards enter the process.

Google Data Studio is a very customizable, straightforward to use, free data reporting dashboard. And for most agencies, Google Data Studio provides everything you need. You can use it to pull your data from most of your existing analytics software and dashboards. And you can then take all of this data and visualize it together in a way that gives you the best overview of your business-critical metrics. Everything from the Google dashboards is easily shareable, so the distribution of the reports should not be a problem, and collaboration in real-time feels seamless.

Free option: Yes
Pricing starts at: No paid options, but you might need paid data connectors to pull in data from all the platforms you use.

Notable features:

  • Completely free to use
  • Quickly and easily gather and visualize data
  • Real-time collaboration, sharing and easily embeddable

DROPBOX File Sharing

File sharing tools are another important part of every agency tech stack. File sharing tools help you organize and distribute media files with your colleagues and clients. They are critical to streamlining workflows and building up processes, and in most cases, reinforcing security at the same time.

Dropbox is an industry staple, and for a good reason. It’s reliable, secure, fast, and for the value that combination provides, relatively cheap. It makes it easy to sort all your files based on projects/clients in a very familiar and intuitive user interface. You can also automatically have your files sync with the cloud and never worry about accidentally losing some work. Accompanying its best-of-breed features, there are integrations with most tools you could want and the possibility to scale along with your team and your needs. And since it has wide adoption on a personal level, you can be sure that clients and employees are already familiar with the interface.

Free option: Yes
Pricing starts at: €10.00/month/per user – starting at 3 users
Notable features:

  • Reliable, secure, fast
  • Auto-sync features
  • Standard plan has 5 TB of storage


LASTPASS Password Management

The onboarding process in an agency mostly centers around getting you up and running with the currently used tech stack. This includes sharing passwords for every piece of software, which can be a very tedious task without a password manager. Every website/software provider has its own guidelines for what they think a strong password is nowadays. And for the user, that results in creating variations of one master password for every piece of software in the tech stack. This can get extremely frustrating, and workarounds like keeping a spreadsheet become a huge security threat over time. Luckily, there are alternatives to encrypted USB sticks locked inside safes behind office paintings.

Reliable, secure, convenient, and exactly what you would want from a password manager. LastPass makes keeping track of current passwords, creating and sharing new passwords, or on/off-boarding team members a breeze. And thanks to a combination of securely generated passwords, an overall security score, and two-factor authentication, you will never have to get another “Forgot your password, eh?” email ever again. In addition to taking care of your passwords, LastPass can also remember your credit card information for faster checkouts or even fill out contact forms for you automatically.

Free option: Yes
Pricing starts at: $4.00/month/per user (for 5-50 users) 

Notable features: 

  • Useful browser extension
  • Two-factor authentication
  • Autofill for forms and credit cards


Airtable is to spreadsheets what Trello is to bullet lists. And just like Trello, teams can go months or years using Airtable without even realizing that they are on the free plan. Unlike Trello, Airtable’s strength lies in the complexity it is able to achieve while still keeping a straightforward user interface.

The pricing plans are set up in a way where small teams don’t even need to upgrade based on the number of entries, as long as they clean up their backlog every now and then. There are, of course, some very beneficial features in the higher pricing tiers.

Free option: Yes
Pricing starts at: $10/month/per user
Notable features: 

  • Spreadsheet style task overview
  • Multiple overview styles
  • Generous free plan


To be clear, this won’t help you know how many billable hours you have on a project or a client. But sometimes you don’t need to track how much time you spend on a project or task. Sometimes, you just need to focus. This is where Pomodoro comes in. The Pomodoro technique has a very straightforward ruleset to help you stay focused and maximize your productivity.


  1. You start the timer
  2. You work for 25 minutes
  3. You take a 5 or 10-minute break
  4. That’s it.

After that, just rinse and repeat. (Get it? Because Pomodoro is the Italian word for tomato… moving on.)
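The loop above can be sketched as a tiny Python script (a hypothetical illustration, not an official Pomodoro tool; the 25/5-minute intervals are the classic defaults):

```python
import time

def pomodoro_schedule(cycles=4, work=25, rest=5):
    """Return the ordered (phase, minutes) intervals for `cycles` rounds."""
    schedule = []
    for _ in range(cycles):
        schedule.append(("work", work))
        schedule.append(("break", rest))
    return schedule

def run(cycles=4, sleep_fn=time.sleep):
    """Walk the schedule, announcing each phase and waiting it out."""
    for phase, minutes in pomodoro_schedule(cycles):
        print(f"{phase} for {minutes} minutes")
        sleep_fn(minutes * 60)  # one uninterrupted block per phase
    print("Rinse and repeat.")
```

Calling `run()` walks through four work/break rounds; the sleep function is injectable so the loop can be exercised without actually waiting.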
Free option: Yes

Popular Pomodoro tracker sites:




CLOCKIFY Time Management

Clockify is the free time tracking tool for any company that just needs the most basic features for a lot of users. With Clockify, you only pay for extra features, and there are no volume restrictions on the basic tracking features: you get unlimited time tracking for unlimited users, and that also includes unlimited projects and unlimited reports. If you need something more, there are some very friendly app and API integrations.

Free option: Yes
Pricing starts at: $9.99/month

Notable features:

  • Unlimited time tracking on the free plan
  • Unlimited users and projects on the free plan
  • Unlimited reports on the free plan

SLACK Internal Communication

Ever since mankind realized that average walking speed could carry a message to the other floor of the office building faster than the fax machine, we have been looking for a more convenient way to connect with our coworkers while successfully avoiding human contact. Email became the go-to solution for office communication for a long time, but in an agile and fast-paced work environment, chat and instant messaging have taken over.

Slack helps you segment your company communication into multiple chat channels that can be used for communication and file sharing inside teams, or across projects. There are, of course, person-to-person conversation options or smaller group chats as well. These feel just like any non-corporate messaging app that you are already used to. This makes conversations quicker than email and makes them feel a little more personal, which may… or may not be, exactly what you are looking for.

Free option: Yes
Pricing starts at: €6.25/month/per 5 users
Notable features: 

  • Person-to-person, group or project channels
  • Quick and easy messaging, calling and file sharing
  • Generous free plan



There are specific use cases for every type of project management software. But if you want every department to be unified under one project management roof, while keeping the necessary customization options, you should be looking at Monday. Monday can be best described as a suite approach to project management software, thanks to all the different types of boards that you can create and manage from a single main dashboard. While there is no free tier, you can always give Monday a try through their two-week free trial and see if it fits with your company.

Free option: No
Pricing starts at: $39/month/per 5 users 

Notable features: 

  • Quick onboarding
  • Extensive dashboard customization
  • Workflow automation possibilities


TRELLO Task Management

While time tracking is certainly essential, it’s just as important to keep track of WHAT you need to be doing.

There are a lot of different styles and frameworks to set up your taskboard, but who are the best providers? Quick note: we did not include any development task boards like Azure Boards, but only looked at tools that could be used by all teams in an agency.

If you need a simple project board set up within a couple of minutes, Trello is the right place to be. Trello is one of those magical tools that you can use for years at scale, never having to spend a single cent (if you don’t need to expand past the basic feature set and storage, of course). Also, compared to a tool like Airtable, the app is much simpler. This benefits projects that require quick navigation, or teams that mainly use Trello through its mobile app. For projects that require more complicated overviews, we would suggest something like Airtable.

Free option: Yes
Pricing starts at: $9.99/month

Notable features:

  • Quick mobile-friendly UI
  • Workflow automation with bots
  • Generous free plan

TOGGL Time Management

Time is literally money when you’re working on client projects, so making sure you are spending your time on the right things matters more than almost anything else in an agency. Proper time management can help speed up your development time, make planning much easier, and even help you charge more per project. So what are the industry-standard tools when it comes to time tracking?

Toggl is a simple time tracking tool with all the productivity and reporting features you could ever want. You can follow all your tasks separately or get an overview of the entire project at once. You have the option to gather actionable insights from your data/team dashboards and create other useful data visualizations. If you are not a fan of real-time time tracking, you can also input your entries manually or integrate Toggl with your calendar. There’s also an option to put in your billable rates and figure out just how much your time is worth.

Free option: Yes
Pricing starts at: $10/month/per user

Notable features: 

  • Time data visualization
  • Real-time or manual time tracking
  • Time-cost calculation

Embarrassing mishap:

Thousands of Tesla car owners were locked out of their vehicles

According to reports in the US media, a simple software update caused a malfunction that locked thousands of the company’s car owners out of their vehicles.

Tesla, unlike other automakers that are just now entering the field, takes the approach that the car is a technologically upgradeable product, meaning that through software updates it is possible to increase engine power, raise speed and charging limits, and even install new applications.

This approach, despite its considerable benefits, can sometimes become problematic. In a case that happened in the US yesterday, a technical malfunction in the mobile application the company uses completely disconnected thousands of car owners from their cars and from Tesla. Since the app is what lets the car “talk” to the smartphone, not only could the phones not communicate with the car, they could not communicate with the company either. The immediate result was vehicle owners being locked out of their cars for long periods of time.

Upon learning of the problem, thousands of Tesla owners in the United States who tried to get into their cars rushed to report on social media that the car refused to let them in via their cell phone, and in some cases locked them out entirely.

Tesla itself has gone through a difficult week, with its shares falling after Elon Musk’s statements during the annual shareholders’ meeting. Although Musk noted that Tesla will produce its first mass-market car in three years at a price of $25,000, he also stressed that Tesla’s revolutionary Cybertruck will be produced in over 300,000 units.

Basics of Testing

What is Testing?

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, or business reputation, and even injury or death. Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.

A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. As described, software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analysing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.

Some testing does involve the execution of the component or system being tested; such testing is called dynamic testing. Other testing does not involve the execution of the component or system being tested; such testing is called static testing. So, testing also includes reviewing work products such as requirements, user stories, and source code.
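The distinction can be illustrated with a toy Python sketch (the function and values are hypothetical, not from any syllabus): the dynamic test executes the component and checks its result, while the static check only analyses the source text, never running it, the way a reviewer or a linter would.

```python
import ast

def discount_price(price, percent):
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

# Dynamic testing: the component is executed and its output is checked.
assert discount_price(200, 10) == 180

# Static testing: the source is examined without execution;
# parsing it proves the code was analysed but never run.
source = "def discount_price(price, percent):\n    return price * (1 - percent / 100)\n"
tree = ast.parse(source)
assert isinstance(tree, ast.Module)  # analysed, not executed
```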

Another common misperception of testing is that it focuses entirely on verification of requirements, user stories, or other specifications. While testing does involve checking whether the system meets specified requirements, it also involves validation, which is checking whether the system will meet user and other stakeholder needs in its operational environment(s).

Test activities are organised and carried out differently in different lifecycles.

Typical Objectives of Testing

For any given project, the objectives of testing may include: 

  • To prevent defects by evaluating work products such as requirements, user stories, design, and code
  • To verify whether all specified requirements have been fulfilled 
  • To check whether the test object is complete and validate if it works as the users and other stakeholders expect
  • To build confidence in the level of quality of the test object 
  • To find defects and failures, thus reducing the level of risk of inadequate software quality
  • To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
  • To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:

  • During component testing, one objective may be to find as many failures as possible so that the underlying defects are identified and fixed early. Another objective may be to increase code coverage of the component tests.
  • During acceptance testing, one objective may be to confirm that the system works as expected and satisfies requirements. Another objective of this testing may be to give information to stakeholders about the risk of releasing the system at a given time.

Testing and Debugging

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. Debugging is the development activity that finds, analyses, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and the associated component and component integration testing (continuous integration). However, in Agile development and in some other software development lifecycles, testers may be involved in debugging and component testing.

Why is Testing Necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s Contributions to Success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include: 

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

In addition to these examples, the achievement of defined test objectives contributes to overall software development and maintenance success.

Quality Assurance and Testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organisation with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing. As described early on, testing contributes to the achievement of quality in a variety of ways.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.
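A minimal (hypothetical) Python illustration of this point: the defective line below is executed on every call, but only the specific precondition `count == 0` turns the defect into a failure.

```python
def average(total, count):
    """Return total / count. Defect: the count == 0 case is not handled."""
    return total / count  # the defect lies dormant for almost all inputs

# Most executions pass straight through the defective line with no failure...
assert average(10, 2) == 5

# ...but one specific input triggers the failure.
try:
    average(10, 0)
    failure_observed = False
except ZeroDivisionError:
    failure_observed = True
assert failure_observed
```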

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused by defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other test-ware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False positives are reported as defects but are not actually defects; false negatives are tests that fail to detect defects they should have detected.
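As a concrete (and entirely hypothetical) sketch of a false positive, consider a test whose expected result was computed incorrectly: the test fails even though the code under test is correct.

```python
# Hypothetical interest calculator, used only to illustrate a false positive:
# the production code is correct, but the test's expected value is defective.
def monthly_interest(balance: float, annual_rate: float) -> float:
    """One month of simple interest on a balance."""
    return balance * annual_rate / 12


def broken_test() -> bool:
    """A false positive: the mismatch comes from the test data, not the code."""
    actual = monthly_interest(1200.0, 0.05)
    expected = 1200.0 * 0.05 / 10   # defect in the test: divided by 10, not 12
    return actual == expected        # False, so a "failure" is reported
```

A false negative would be the mirror image: for example, an assertion too weak to detect a real defect in `monthly_interest`.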

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analysed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced. 

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.

Seven Testing Principles

A number of testing principles have been suggested over the past 50 years and offer general guidelines common to all testing.

1. Testing shows the presence of defects, not their absence 

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness. 

2. Exhaustive testing is impossible 

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts. 
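A quick back-of-the-envelope calculation shows why exhaustive testing is infeasible even for modest systems; the field and value counts below are, of course, illustrative.

```python
fields = 10               # independent input fields
values_per_field = 20     # meaningful values for each field
total_combinations = values_per_field ** fields

# Even at 1,000 automated test executions per second, covering just the
# input combinations (ignoring preconditions) would take centuries:
seconds = total_combinations / 1_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{total_combinations:,} combinations, about {years:,.0f} years")
```

This is why risk analysis and prioritisation, rather than exhaustiveness, must drive the selection of tests.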

3. Early testing saves time and money 

To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.

4. Defects cluster together 

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in principle 2).
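Observed defect clusters can be made visible with a simple tally of defects per module; the defect log below is invented purely for illustration.

```python
from collections import Counter

# Hypothetical defect log: the module in which each reported defect was found.
defect_log = [
    "billing", "billing", "auth", "billing", "reports", "billing",
    "auth", "billing", "billing", "ui", "billing", "auth",
]

# Rank modules by defect count to expose the cluster.
for module, count in Counter(defect_log).most_common():
    print(f"{module:8s} {count:2d}  ({count / len(defect_log):.0%})")
# "billing" dominates, so a risk analysis would focus further testing there.
```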

5. Beware of the pesticide paradox 

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

6. Testing is context dependent 

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

7. Absence-of-errors is a fallacy 

Some organisations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations, or that is inferior compared to other competing systems.

Test Process

There is no one universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives. These sets of test activities are a test process. The proper, specific software test process in any given situation depends on many factors. Which test activities are involved in this test process, how these activities are implemented, and when these activities occur may be discussed in an organisation’s test strategy.

Test Process in Context 

Contextual factors that influence the test process for an organisation include, but are not limited to:

  • Software development lifecycle model and project methodologies being used
  • Test levels and test types being considered
  • Product and project risks
  • Business domain
  • Operational constraints, including but not limited to:
    • Budgets and resources
    • Timescales
    • Complexity
    • Contractual and regulatory requirements 
  • Organisational policies and practices 
  • Required internal and external standards

The following sections describe general aspects of organisational test processes in terms of the following: 

  • Test activities and tasks 
  • Test work products 
  • Traceability between the test basis and test work products

It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
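The mobile-application example above can be sketched in a few lines; the requirement and device identifiers are hypothetical, and the criterion is simply "at least one test case per test-basis element".

```python
# Every requirement and every supported device is an element of the test basis.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
devices = {"device-a", "device-b"}

# Each test case records which test-basis elements it covers.
test_cases = [
    {"id": "TC-1", "covers": {"REQ-1", "device-a"}},
    {"id": "TC-2", "covers": {"REQ-2", "device-a"}},
    {"id": "TC-3", "covers": {"REQ-2", "device-b"}},
]

covered = set().union(*(tc["covers"] for tc in test_cases))
uncovered = (requirements | devices) - covered
print("Elements still needing a test case:", sorted(uncovered))  # ['REQ-3']
```

Once the uncovered set is empty, the coverage criterion is met and execution results can be reported per element.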

Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning
  • Test monitoring and control
  • Test analysis
  • Test design 
  • Test implementation
  • Test execution
  • Test completion

Each main group of activities is composed of constituent activities, which will be described in the subsections below. Each constituent activity consists of multiple individual tasks, which would vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning. So test activities also happen on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main groups of activities will involve overlap, combination, concurrency, or omission, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models. For example, the evaluation of exit criteria for test execution as part of a given test level may include: 

  • Checking test results and logs against specified coverage criteria
  • Assessing the level of component or system quality based on test results and logs
  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)
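The three bullet points above can be mechanised roughly as follows; the conditions, logs, and the 100% coverage threshold are all illustrative assumptions.

```python
# Hypothetical execution logs: one entry per test run, newest last.
test_logs = [
    {"condition": "C1", "outcome": "pass"},
    {"condition": "C2", "outcome": "pass"},
    {"condition": "C3", "outcome": "fail"},
    {"condition": "C3", "outcome": "pass"},  # corrected test, re-run
]
conditions_planned = {"C1", "C2", "C3", "C4"}

# 1. Check results against the specified coverage criterion.
conditions_run = {log["condition"] for log in test_logs}
coverage = len(conditions_run) / len(conditions_planned)

# 2. Assess quality from the latest outcome per condition.
latest = {}
for log in test_logs:
    latest[log["condition"]] = log["outcome"]
still_failing = sorted(c for c, outcome in latest.items() if outcome == "fail")

# 3. Determine whether more tests are needed.
not_yet_run = sorted(conditions_planned - conditions_run)

exit_criteria_met = coverage == 1.0 and not still_failing
print(coverage, still_failing, not_yet_run, exit_criteria_met)
# 0.75 [] ['C4'] False -> more tests are needed before exit
```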

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.

Test analysis

During test analysis, the test basis is analysed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities: 

  • Analysing the test basis appropriate to the test level being considered, for example:
    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behaviour
    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure
    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces
    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system
  • Evaluating the test basis and test items to identify defects of various types, such as: 
    • Ambiguities
    • Omissions
    • Inconsistencies
    • Inaccuracies
    • Contradictions
    • Superfluous statements
  • Identifying features and sets of features to be tested
  • Defining and prioritising test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks
  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions
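Bi-directional traceability, as in the last bullet, is essentially a pair of mappings that must stay in sync; deriving one direction from the other (sketched below with invented IDs) is one way to guarantee that.

```python
# Forward direction: test-basis element -> test conditions derived from it.
forward = {
    "REQ-1": ["COND-1", "COND-2"],
    "REQ-2": ["COND-3"],
}

# Backward direction, derived rather than maintained by hand.
backward = {}
for element, conditions in forward.items():
    for condition in conditions:
        backward.setdefault(condition, []).append(element)

print(backward)
# {'COND-1': ['REQ-1'], 'COND-2': ['REQ-1'], 'COND-3': ['REQ-2']}
```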

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing. When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. For example, techniques such as behaviour driven development (BDD) and acceptance test driven development (ATDD) involve generating test conditions and test cases from user stories and acceptance criteria prior to coding. These techniques also verify, validate, and detect defects in the user stories and acceptance criteria.

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other test-ware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritising test cases and sets of test cases 
  • Identifying necessary test data to support test conditions and test cases
  • Designing the test environment and identifying any required infrastructure and tools
  • Capturing bi-directional traceability between the test basis, test conditions, and test cases

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques.

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the test-ware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?” 

Test implementation includes the following major activities:

  • Developing and prioritising test procedures, and, potentially, creating automated test scripts
  • Creating test suites from the test procedures and (if any) automated test scripts 
  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  • Building the test environment (including, potentially, test harnesses, service virtualisation, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  • Preparing test data and ensuring it is properly loaded in the test environment 
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites
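The first three bullets (procedures, suites, schedule) can be sketched as data transformations; the suite names and priorities are illustrative, with a lower number meaning higher priority.

```python
# Prioritised test procedures, each assigned to a suite.
procedures = [
    {"id": "TP-3", "suite": "regression", "priority": 2},
    {"id": "TP-1", "suite": "smoke",      "priority": 1},
    {"id": "TP-2", "suite": "smoke",      "priority": 1},
]

# Create test suites from the procedures...
suites = {}
for proc in procedures:
    suites.setdefault(proc["suite"], []).append(proc)

# ...then arrange the suites into an execution schedule, highest priority first.
schedule = sorted(suites, key=lambda s: min(p["priority"] for p in suites[s]))
print(schedule)   # ['smoke', 'regression']
```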

Test design and test implementation tasks are often combined.

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented. 

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and test-ware
  • Executing tests either manually or by using test execution tools
  • Comparing actual results with expected results
  • Analysing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur)
  • Reporting defects based on the failures observed
  • Logging the outcome of test execution (e.g., pass, fail, blocked)
  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.
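The execute/compare/log cycle above can be sketched minimally; the test object here is a stand-in function, and one mismatch is seeded deliberately.

```python
# Stand-in test object.
def add(a: int, b: int) -> int:
    return a + b

test_cases = [
    {"id": "TC-1", "inputs": (2, 3), "expected": 5},
    {"id": "TC-2", "inputs": (-1, 1), "expected": 0},
    {"id": "TC-3", "inputs": (2, 2), "expected": 5},   # deliberately wrong
]

log = []
for tc in test_cases:
    actual = add(*tc["inputs"])                                # execute
    outcome = "pass" if actual == tc["expected"] else "fail"   # compare
    log.append((tc["id"], outcome))                            # log the outcome

print(log)   # [('TC-1', 'pass'), ('TC-2', 'pass'), ('TC-3', 'fail')]
```

In a real process the TC-3 anomaly would be analysed before a defect is reported: here the expected value, not the code, is defective, so reporting it as a code defect would be a false positive.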

Test completion

Test completion activities collect data from completed test activities to consolidate experience, test-ware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  • Creating a test summary report to be communicated to stakeholders
  • Finalising and archiving the test environment, the test data, the test infrastructure, and other test-ware for later reuse
  • Handing over the test-ware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  • Analysing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  • Using the information gathered to improve test process maturity

Test Work Products

Test work products are created as part of the test process. Just as there is significant variation in the way that organisations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organised and managed, and in the names used for those work products.

Many of the test work products described in this section can be captured and managed using test management tools and defect management tools.

Test planning work products 

Test planning work products typically include one or more test plans. The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done) which will be used during test monitoring and control.

Test monitoring and control work products

Test monitoring and control work products typically include various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones. All test reports should provide audience-relevant details about the test progress as of the date of the report, including summarising the test execution results once those become available. 

Test monitoring and control work products should also address project management concerns, such as task completion, resource allocation and usage, and effort. 

Test monitoring and control, and the work products created during these activities, are further explained on this site.

Test analysis work products

Test analysis work products include defined and prioritised test conditions, each of which is ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test analysis may involve the creation of test charters. Test analysis may also result in the discovery and reporting of defects in the test basis. 

Test design work products

Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. It is often a good practice to design high-level test cases, without concrete values for input data and expected results. Such high-level test cases are reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s) it covers.

Test design also results in:

  • the design and/or identification of the necessary test data
  • the design of the test environment
  • the identification of infrastructure and tools

The extent to which these results are documented varies significantly.

Test implementation work products

Test implementation work products include:

  • Test procedures and the sequencing of those test procedures
  • Test suites
  • A test execution schedule

Ideally, once test implementation is complete, achievement of coverage criteria established in the test plan can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions.

In some cases, test implementation involves creating work products using or used by tools, such as service virtualisation and automated test scripts.

Test implementation also may result in the creation and verification of test data and the test environment. The completeness of the documentation of the data and/or environment verification results may vary significantly.

The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results which are associated with concrete test data are identified by using a test oracle.
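One way to picture this, with invented values, is a high-level test case whose placeholders are bound to concrete data sets; the expected results would come from a test oracle.

```python
# High-level test case: documents scope, but holds no concrete values.
high_level = {
    "id": "TC-INTEREST",
    "condition": "interest is calculated on a positive balance",
    "inputs": ("<balance>", "<rate>"),   # placeholders
}

# Concrete test data; expected results derived from a test oracle.
data_sets = [
    {"balance": 1000.0, "rate": 0.05, "expected": 50.0},
    {"balance": 250.0,  "rate": 0.04, "expected": 10.0},
]

# Binding the data turns one high-level case into executable low-level cases.
low_level = [
    {**high_level, "inputs": (d["balance"], d["rate"]), "expected": d["expected"]}
    for d in data_sets
]
print(len(low_level), "executable test cases from one high-level test case")
```

The same high-level case could be re-bound to different data sets for a different release of the test object.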

In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.

Test conditions defined in test analysis may be further refined in test implementation.

Test execution work products

Test execution work products include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  • Defect reports
  • Documentation about which test item(s), test object(s), test tools, and test-ware were involved in the testing

Ideally, once test execution is complete, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). For example, we can say which requirements have passed all planned tests, which requirements have failed tests and/or have defects associated with them, and which requirements have planned tests still waiting to be run. This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations, change requests or product backlog items, and finalised test-ware.

Traceability between the Test Basis and Test Work Products

As mentioned earlier, test work products and the names of those work products vary significantly. Regardless of these variations, in order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element, as described above. In addition to the evaluation of test coverage, good traceability supports:

  • Analysing the impact of changes
  • Making testing auditable
  • Meeting IT governance criteria
  • Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
  • Relating the technical aspects of testing to stakeholders in terms that they can understand
  • Providing information to assess product quality, process capability, and project progress against business goals

Some test management tools provide test work product models that match part or all of the test work products outlined in this section. Some organisations build their own management systems to organise the work products and provide the information traceability they require.

The Psychology of Testing

Software development, including software testing, involves human beings. Therefore, human psychology has important effects on software testing.

Human Psychology and Testing 

Identifying defects during a static test such as a requirement review or user story refinement session, or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author. An element of human psychology called confirmation bias can make it difficult to accept information that disagrees with currently held beliefs. For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news.

As a result of these psychological factors, some people may perceive testing as a destructive activity, even though it contributes greatly to project progress and product quality. To try to reduce these perceptions, information about defects and failures should be communicated in a constructive way. This way, tensions between the testers and the analysts, product owners, designers, and developers can be reduced. This applies during both static and dynamic testing.

Testers and test managers need to have good interpersonal skills to be able to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues. Ways to communicate well include the following examples:

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
  • Emphasise the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organisation, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
  • Communicate test results and other findings in a neutral, fact-focused way without criticising the person who created the defective item. Write objective and factual defect reports and review findings.
  • Try to understand how the other person feels and the reasons they may react negatively to the information.
  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier. Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviours with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester’s and Developer’s Mindsets

Developers and testers often think differently. The primary objective of development is to design and build a product. As discussed earlier, the objectives of testing include verifying and validating the product, finding defects prior to release, and so forth. These are different sets of objectives which require different mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.

A mindset reflects an individual’s assumptions and preferred methods for decision making and problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to become aware of errors in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organising the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different from that of the work product authors (i.e., business analysts, product owners, designers, and developers), since they have different cognitive biases from the authors.

Test management

Test Organisation

Independent Testing

Testing tasks may be done by people in a specific testing role, or by people in another role (e.g., customers). A certain degree of independence often makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. 

Degrees of independence in testing include the following (from low level of independence to high level):

  • No independent testers; the only form of testing available is developers testing their own code 
  • Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues’ products 
  • Independent test team or group within the organisation, reporting to project management or executive management 
  • Independent testers from the business organisation or user community, or with specialisations in specific test types such as usability, security, performance, regulatory/compliance, or portability 
  • Independent testers external to the organisation, either working on-site (in-house) or off-site (outsourcing)

For most types of projects, it is usually best to have multiple test levels, with some of these levels handled by independent testers. Developers should participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work.

The way in which independence of testing is implemented varies depending on the software development lifecycle model. For example, in Agile development, testers may be part of a development team. In some organisations using Agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organisations, product owners may perform acceptance testing to validate user stories at the end of each iteration.

Potential drawbacks of test independence include:

  • Isolation from the development team may lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team
  • Developers may lose a sense of responsibility for quality
  • Independent testers may be seen as a bottleneck
  • Independent testers may lack some important information (e.g., about the test object)

Many organisations are able to successfully achieve the benefits of test independence while avoiding the drawbacks.

Tasks of a Test Manager and Tester 

In this article, two test roles are covered, test managers and testers. The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organisation.

The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organisations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester.

Typical test manager tasks may include:

  • Develop or review a test policy and test strategy for the organisation 
  • Plan the test activities by considering the context, and understanding the test objectives and risks. This may include selecting test approaches, estimating test time, effort and cost, acquiring resources, defining test levels and test cycles, and planning defect management
  • Write and update the test plan(s) 
  • Coordinate the test plan(s) with project managers, product owners, and others 
  • Share testing perspectives with other project activities, such as integration planning 
  • Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done) and facilitate test completion activities 
  • Prepare and deliver test progress reports and test summary reports based on the information gathered 
  • Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control 
  • Support setting up the defect management system and adequate configuration management of testware 
  • Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
  • Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s) 
  • Decide about the implementation of test environment(s) 
  • Promote and advocate the testers, the test team, and the test profession within the organisation 
  • Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organisation, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

  • Review and contribute to test plans 
  • Analyse, review, and assess requirements, user stories and acceptance criteria, specifications, and models for testability (i.e., the test basis) 
  • Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis 
  • Design, set up, and verify test environment(s), often coordinating with system administration and network management 
  • Design and implement test cases and test procedures 
  • Prepare and acquire test data
  • Create the detailed test execution schedule 
  • Execute tests, evaluate the results, and document deviations from expected results 
  • Use appropriate tools to facilitate the test process 
  • Automate tests as needed (may be supported by a developer or a test automation expert)
  • Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability 
  • Review tests developed by others

People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers. At the acceptance test level, the role of a tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of a tester is often done by an independent test team. At the operational acceptance test level, the role of a tester is often done by operations and/or systems administration staff.

Test Planning and Estimation

Purpose and Content of a Test Plan

A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organisation, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources. 

As the project and test planning progress, more information becomes available and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product’s lifecycle. (Note that the product’s lifecycle may extend beyond a project’s scope to include the maintenance phase.) Feedback from test activities should be used to recognise changing risks so that planning can be adjusted. Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing. Test planning activities may include the following and some of these may be documented in a test plan:

  • Determining the scope, objectives, and risks of testing
  • Defining the overall approach of testing
  • Integrating and coordinating the test activities into the software lifecycle activities
  • Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out
  • Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)
  • Selecting metrics for test monitoring and control
  • Budgeting for the test activities
  • Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)

The contents of test plans vary, and can extend beyond the topics identified above.

Test Strategy and Test Approach

A test strategy provides a generalised description of the test process, usually at the product or organisational level. Common types of test strategies include:

  • Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritised based on the level of risk.
  • Model-Based: In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
  • Methodical: This type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages. 
  • Process-compliant (or standard-compliant): This type of test strategy involves analysing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organisation. 
  • Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organisation itself.
  • Regression-averse: This type of test strategy is motivated by a desire to avoid regression of existing capabilities. This test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.
  • Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.

An appropriate test strategy is often created by combining several of these types of test strategies. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together.

While the test strategy provides a generalised description of the test process, the test approach tailors the test strategy for a particular project or release. The test approach is the starting point for selecting the test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively). The tailoring of the strategy is based on decisions made in relation to the complexity and goals of the project, the type of product being developed, and product risk analysis. The selected approach depends on the context and may consider factors such as risks, safety, available resources and skills, technology, the nature of the system (e.g., custom-built versus COTS), test objectives, and regulations.

Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done)

In order to exercise effective control over the quality of the software, and of the testing, it is advisable to have criteria which define when a given test activity should start and when the activity is complete. Entry criteria (more typically called definition of ready in Agile development) define the preconditions for undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more difficult, more time-consuming, more costly, and more risky. Exit criteria (more typically called definition of done in Agile development) define what conditions must be achieved in order to declare a test level or a set of tests completed. Entry and exit criteria should be defined for each test level and test type, and will differ based on the test objectives.

Typical entry criteria include: 

  • Availability of testable requirements, user stories, and/or models (e.g., when following a model-based testing strategy)
  • Availability of test items that have met the exit criteria for any prior test levels
  • Availability of test environment
  • Availability of necessary test tools
  • Availability of test data and other necessary resources

Typical exit criteria include:

  • Planned tests have been executed
  • A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks, code) has been achieved 
  • The number of unresolved defects is within an agreed limit 
  • The number of estimated remaining defects is sufficiently low
  • The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient
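Once the relevant metrics are being collected, checking exit criteria can be largely mechanical. The following sketch evaluates a few of the criteria listed above; the metric names and threshold values are illustrative assumptions, not standard figures:

```python
# Sketch: evaluating agreed exit criteria from collected test metrics.
# All threshold values below are illustrative, not mandated by any standard.

def exit_criteria_met(metrics: dict) -> bool:
    """Return True only if every agreed exit criterion is satisfied."""
    checks = [
        metrics["tests_executed"] >= metrics["tests_planned"],  # planned tests executed
        metrics["requirement_coverage"] >= 0.95,                # agreed coverage level
        metrics["open_defects"] <= 5,                           # unresolved defects within limit
    ]
    return all(checks)

metrics = {
    "tests_planned": 120,
    "tests_executed": 120,
    "requirement_coverage": 0.97,
    "open_defects": 3,
}
print(exit_criteria_met(metrics))  # True: all criteria satisfied
```

In practice, the criteria and thresholds come from the test plan and must be agreed with stakeholders; the code only makes the evaluation repeatable and visible.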

Even without exit criteria being satisfied, it is common for test activities to be curtailed due to the budget being expended, the scheduled time having elapsed, and/or pressure to bring the product to market. It can be acceptable to end testing under such circumstances if the project stakeholders and business owners have reviewed and accepted the risk to go live without further testing.

Test Execution Schedule

Once the various test cases and test procedures are produced (with some test procedures potentially automated) and assembled into test suites, the test suites can be arranged in a test execution schedule that defines the order in which they are to be run. The test execution schedule should take into account such factors as prioritisation, dependencies, confirmation tests, regression tests, and the most efficient sequence for executing the tests.

Ideally, test cases would be ordered to run based on their priority levels, usually by executing the test cases with the highest priority first. However, this practice may not work if the test cases have dependencies or the features being tested have dependencies. If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first. Similarly, if there are dependencies across test cases, they must be ordered appropriately regardless of their relative priorities. Confirmation and regression tests must be prioritised as well, based on the importance of rapid feedback on changes, but here again dependencies may apply.

In some cases, various sequences of tests are possible, with differing levels of efficiency associated with those sequences. In such cases, trade-offs between efficiency of test execution versus adherence to prioritisation must be made.
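The ordering rule described above, run the highest-priority test first unless a dependency forces a lower-priority test ahead of it, can be expressed as a priority-aware topological sort. A minimal sketch, with hypothetical test names and priority values:

```python
import heapq

# Sketch: ordering test cases by priority while respecting dependencies.
# Test names, priorities, and dependencies below are illustrative.

def schedule(tests: dict, deps: dict) -> list:
    """tests: name -> priority (lower number = higher priority).
    deps:  name -> set of tests that must run before it."""
    indegree = {t: len(deps.get(t, set())) for t in tests}
    dependents = {t: [] for t in tests}
    for t, before in deps.items():
        for b in before:
            dependents[b].append(t)
    # Among tests whose dependencies are satisfied, pick the highest priority.
    ready = [(p, t) for t, p in tests.items() if indegree[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for d in dependents[t]:
            indegree[d] -= 1
            if indegree[d] == 0:
                heapq.heappush(ready, (tests[d], d))
    return order

tests = {"login": 2, "checkout": 1, "search": 3}
deps = {"checkout": {"login"}}  # checkout (priority 1) depends on login (priority 2)
print(schedule(tests, deps))   # ['login', 'checkout', 'search']
```

Note how the lower-priority `login` test runs first because the higher-priority `checkout` test depends on it, exactly the situation the text describes.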

Factors Influencing the Test Effort

Test effort estimation involves predicting the amount of test-related work that will be needed in order to meet the objectives of the testing for a particular project, release, or iteration. Factors influencing the test effort may include characteristics of the product, characteristics of the development process, characteristics of the people, and the test results, as shown below.

Product characteristics

  • The risks associated with the product
  • The quality of the test basis
  • The size of the product
  • The complexity of the product domain
  • The requirements for quality characteristics (e.g., security, reliability) 
  • The required level of detail for test documentation 
  • Requirements for legal and regulatory compliance

Development process characteristics

  • The stability and maturity of the organisation
  • The development model in use
  • The test approach
  • The tools used
  • The test process 
  • Time pressure

People characteristics

  • The skills and experience of the people involved, especially with similar projects and products (e.g., domain knowledge)
  • Team cohesion and leadership

Test results

  • The number and severity of defects found
  • The amount of re-work required

Test Estimation Techniques

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:

  • The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values
  • The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts

For example, in Agile development, burn-down charts are examples of the metrics-based approach: effort remaining is captured and reported, and is then fed into the team’s velocity to determine the amount of work the team can do in the next iteration. Planning poker, also called Scrum poker, is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience.

Within sequential projects, defect removal models are examples of the metrics-based approach, where volumes of defects and time to remove them are captured and reported, which then provides a basis for estimating future projects of a similar nature; whereas the Wideband Delphi estimation technique is an example of the expert-based approach in which a group of experts provides estimates based on their experience.
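The core of the metrics-based technique is simple arithmetic over historical data. A minimal sketch of the velocity calculation mentioned above, using hypothetical story-point figures:

```python
# Sketch of the metrics-based technique: deriving velocity from past
# iterations to estimate how much work fits in the next one.
# The story-point figures below are illustrative.

def velocity(completed_points: list) -> float:
    """Average story points completed per past iteration."""
    return sum(completed_points) / len(completed_points)

past_iterations = [21, 19, 23]    # points completed in the last three sprints
print(velocity(past_iterations))  # 21.0 points expected for the next iteration
```

Expert-based techniques such as planning poker or Wideband Delphi replace this historical average with converging judgments from the people who will do the work.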

Test Monitoring and Control

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and should be used to assess test progress and to measure whether the test exit criteria, or the testing tasks associated with an Agile project’s definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported. Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include: 

  • Re-prioritising tests when an identified risk occurs (e.g., software delivered late)
  • Changing the test schedule due to availability or unavailability of a test environment or other resources
  • Re-evaluating whether a test item meets an entry or exit criterion due to rework

Metrics Used in Testing

Metrics can be collected during and at the end of test activities in order to assess:

  • Progress against the planned schedule and budget
  • Current quality of the test object
  • Adequacy of the test approach
  • Effectiveness of the test activities with respect to the objectives

Common test metrics include:

  • Percentage of planned work done in test case preparation (or percentage of planned test cases implemented)
  • Percentage of planned work done in test environment preparation
  • Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
  • Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results)
  • Test coverage of requirements, user stories, acceptance criteria, risks, or code
  • Task completion, resource allocation and usage, and effort
  • Cost of testing, including the cost compared to the benefit of finding the next defect or the cost compared to the benefit of running the next test

Audiences, Contents, and Purposes for Test Reports

The purpose of test reporting is to summarise and communicate test activity information, both during and at the end of a test activity (e.g., a test level). The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report.

During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. In addition to content common to test progress reports and test summary reports, typical test progress reports may also include:

  • The status of the test activities and progress against the test plan
  • Factors impeding progress
  • Testing planned for the next reporting period
  • The quality of the test objects

When exit criteria are reached, the test manager issues the test summary report. This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.

Typical test summary reports may include:

  • Summary of testing performed
  • Information on what occurred during a test period
  • Deviations from plan, including deviations in schedule, duration, or effort of test activities
  • Status of testing and product quality with respect to the exit criteria or definition of done
  • Factors that have blocked or continue to block progress
  • Metrics of defects, test cases, test coverage, activity progress, and resource consumption
  • Residual risks
  • Reusable test work products produced

The contents of a test report will vary depending on the project, the organisational requirements, and the software development lifecycle. For example, a complex project with many stakeholders or a regulated project may require more detailed and rigorous reporting than a quick software update. As another example, in Agile development, test progress reporting may be incorporated into task boards, defect summaries, and burn-down charts, which may be discussed during a daily stand-up meeting.

In addition to tailoring test reports based on the context of the project, test reports should be tailored based on the report’s audience. The type and amount of information that should be included for a technical audience or a test team may be different from what would be included in an executive summary report. In the first case, detailed information on defect types and trends may be important. In the latter case, a high-level report (e.g., a status summary of defects by priority, budget, schedule, and test conditions passed/failed/not tested) may be more appropriate.

Configuration Management

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle. 

To properly support testing, configuration management may involve ensuring the following:

  • All test items are uniquely identified, version controlled, tracked for changes, and related to each other
  • All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process
  • All identified documents and software items are referenced unambiguously in test documentation

During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.

Risks and Testing

Definition of Risk

Risk involves the possibility of an event in the future which has negative consequences. The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.
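One common convention, assumed here for illustration, scores likelihood and impact on ordinal 1–5 scales and takes their product as the risk level:

```python
# Sketch: a simple quantitative risk level, defined here as
# likelihood x impact on 1-5 ordinal scales (an illustrative convention,
# not the only way organisations score risk).

def risk_level(likelihood: int, impact: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

print(risk_level(4, 5))  # 20: likely event with severe impact
print(risk_level(1, 2))  # 2: rare event with minor harm
```

Qualitative scales (e.g., low/medium/high matrices) are equally common; what matters is that likelihood and impact are assessed separately and then combined.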

Product and Project Risks

Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders. When the product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), product risks are also called quality risks. Examples of product risks include:

  • Software might not perform its intended functions according to the specification
  • Software might not perform its intended functions according to user, customer, and/or stakeholder needs
  • A system architecture may not adequately support some non-functional requirement(s)
  • A particular computation may be performed incorrectly in some circumstances
  • A loop control structure may be coded incorrectly
  • Response-times may be inadequate for a high-performance transaction processing system
  • User experience (UX) feedback might not meet product expectations

Project risk involves situations that, should they occur, may have a negative effect on a project’s ability to achieve its objectives. Examples of project risks include:

  • Project issues:
    • Delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done 
    • Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organisation may result in inadequate funding 
    • Late changes may result in substantial re-work
  • Organisational issues: 
    • Skills, training, and staff may not be sufficient 
    • Personnel issues may cause conflict and problems 
    • Users, business staff, or subject matter experts may not be available due to conflicting business priorities
  • Political issues:
    • Testers may not communicate their needs and/or the test results adequately
    • Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
    • There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)
  • Technical issues: 
    • Requirements may not be defined well enough 
    • The requirements may not be met, given existing constraints 
    • The test environment may not be ready on time 
    • Data conversion, migration planning, and their tool support may be late 
    • Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases
    • Poor defect management and similar problems may result in accumulated defects and other technical debt
  • Supplier issues:
    • A third party may fail to deliver a necessary product or service, or go bankrupt
    • Contractual issues may cause problems to the project

Project risks may affect both development activities and test activities. In some cases, project managers are responsible for handling all project risks, but it is not unusual for test managers to have responsibility for test-related project risks.

Product Quality and Risk-based Testing

Risk is used to focus the effort required during testing. It is used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce the probability of an adverse event occurring, or to reduce the impact of an adverse event. Testing is used as a risk mitigation activity, to provide information about identified risks, as well as providing information on residual (unresolved) risks. 

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk’s likelihood and impact. The resulting product risk information is used to guide test planning, the specification, preparation and execution of test cases, and test monitoring and control. Analysing product risks early contributes to the success of a project. 

In a risk-based approach, the results of product risk analysis are used to:

  • Determine the test techniques to be employed
  • Determine the particular levels and types of testing to be performed (e.g., security testing, accessibility testing)
  • Determine the extent of testing to be carried out
  • Prioritise testing in an attempt to find the critical defects as early as possible 
  • Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)
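Using the risk analysis to prioritise testing can be as simple as sorting the identified risk items by their risk level. A sketch, with hypothetical features and scores:

```python
# Sketch: using product risk analysis results to prioritise testing so the
# highest-risk areas are tested first. Feature names and scores are illustrative.

risks = {                    # feature -> (likelihood, impact) on 1-5 scales
    "payment": (3, 5),
    "profile_page": (2, 2),
    "data_export": (4, 4),
}

def prioritise(risks: dict) -> list:
    """Order features by descending risk level (likelihood x impact)."""
    return sorted(risks, key=lambda f: risks[f][0] * risks[f][1], reverse=True)

print(prioritise(risks))  # ['data_export', 'payment', 'profile_page']
```

The resulting order then drives where test effort, deeper test techniques, and earlier execution slots are allocated.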

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis. To ensure that the likelihood of a product failure is minimised, risk management activities provide a disciplined approach to:

  • Analyse (and re-evaluate on a regular basis) what can go wrong (risks)
  • Determine which risks are important to deal with
  • Implement actions to mitigate those risks
  • Make contingency plans to deal with the risks should they become actual events

In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks.

Defect Management

Since one of the objectives of testing is to find defects, defects found during testing should be logged. The way in which defects are logged may vary, depending on the context of the component or system being tested, the test level, and the software development lifecycle model. Any defects identified should be investigated and should be tracked from discovery and classification to their resolution (e.g., correction of the defects and successful confirmation testing of the solution, deferral to a subsequent release, acceptance as a permanent product limitation, etc.). In order to manage all defects to resolution, an organisation should establish a defect management process which includes a workflow and rules for classification. This process must be agreed with all those participating in defect management, including architects, designers, developers, testers, and product owners. In some organisations, defect logging and tracking may be very informal. 

During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects. For example, a test may fail when a network connection is broken or times out. This behaviour does not result from a defect in the test object, but is an anomaly that needs to be investigated. Testers should attempt to minimise the number of false positives reported as defects. 

Defects may be reported during coding, static analysis, reviews, or during dynamic testing, or use of a software product. Defects may be reported for issues in code or working systems, or in any type of documentation including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, or installation guides. In order to have an effective and efficient defect management process, organisations may define standards for the attributes, classification, and workflow of defects.

Typical defect reports have the following objectives: 

  • Provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s) as needed, or to otherwise resolve the problem
  • Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)
  • Provide ideas for development and test process improvement

A defect report filed during dynamic testing typically includes:

  • An identifier
  • A title and a short summary of the defect being reported
  • Date of the defect report, issuing organization, and author
  • Identification of the test item (configuration item being tested) and environment
  • The development lifecycle phase(s) in which the defect was observed
  • A description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings (if found during test execution)
  • Expected and actual results
  • Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
  • Urgency/priority to fix
  • State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)
  • Conclusions, recommendations and approvals
  • Global issues, such as other areas that may be affected by a change resulting from the defect
  • Change history, such as the sequence of actions taken by project team members with respect to the defect to isolate, repair, and confirm it as fixed
  • References, including the test case that revealed the problem

Some of these details may be automatically included and/or managed when using defect management tools, e.g., automatic assignment of an identifier, assignment and update of the defect report state during the workflow, etc. Defects found during static testing, particularly reviews, will normally be documented in a different way, e.g., in review meeting notes.
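The defect report attributes above can be sketched as a simple record type. This is a hypothetical illustration (the class and field names are mine, not from any specific defect management tool); it shows how a tool might auto-assign an identifier and an initial state, as described above.

```python
from dataclasses import dataclass, field
from datetime import date
from itertools import count

_ids = count(1)  # sketch of a tool's automatic identifier assignment

@dataclass
class DefectReport:
    """Minimal subset of the defect report attributes listed above."""
    title: str
    description: str          # enough detail to reproduce and resolve
    severity: str             # degree of impact on stakeholders
    priority: str             # urgency to fix
    state: str = "open"       # open, deferred, duplicate, closed, ...
    reported_on: date = field(default_factory=date.today)
    identifier: int = field(default_factory=lambda: next(_ids))

report = DefectReport(
    title="Interest rounding error",
    description="Expected 10.00, actual 9.99 for a 1-year deposit",
    severity="major",
    priority="high",
)
print(report.identifier, report.state)  # identifier and state set automatically
```

A real tool would also manage the workflow (state transitions, change history, references); this sketch only covers automatic field assignment.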

Test Techniques

Categories of Test Techniques 

The purpose of a test technique, including those discussed in this section, is to help in identifying test conditions, test cases, and test data.

The choice of which test techniques to use depends on a number of factors, including: 

  • Component or system complexity 
  • Regulatory standards 
  • Customer or contractual requirements 
  • Risk levels and types 
  • Available documentation 
  • Tester knowledge and skills 
  • Available tools 
  • Time and budget 
  • Software development lifecycle model 
  • The types of defects expected in the component or system 

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels. When creating test cases, testers generally use a combination of test techniques to achieve the best results from the test effort.

The use of test techniques in the test analysis, test design, and test implementation activities can range from very informal (little to no documentation) to very formal. The appropriate level of formality depends on the context of testing, including the maturity of test and development processes, time constraints, safety or regulatory requirements, the knowledge and skills of the people involved, and the software development lifecycle model being followed. 

Categories of Test Techniques and Their Characteristics

In this article, test techniques are classified as black-box, white-box, or experience-based. 

Black-box test techniques (also called behavioural or behaviour-based techniques) are based on an analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases, user stories, or business processes). These techniques are applicable to both functional and non-functional testing. Black-box test techniques concentrate on the inputs and outputs of the test object without reference to its internal structure. 

White-box test techniques (also called structural or structure-based techniques) are based on an analysis of the architecture, detailed design, internal structure, or the code of the test object. Unlike black-box test techniques, white-box test techniques concentrate on the structure and processing within the test object. 

Experience-based test techniques leverage the experience of developers, testers and users to design, implement, and execute tests. These techniques are often combined with black-box and white-box test techniques.

Common characteristics of black-box test techniques include the following: 

  • Test conditions, test cases, and test data are derived from a test basis that may include software requirements, specifications, use cases, and user stories
  • Test cases may be used to detect gaps between the requirements and the implementation of the requirements, as well as deviations from the requirements 
  • Coverage is measured based on the items tested in the test basis and the technique applied to the test basis

Common characteristics of white-box test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include code, software architecture, detailed design, or any other source of information regarding the structure of the software
  • Coverage is measured based on the items tested within a selected structure (e.g., the code or interfaces) and the technique applied to the test basis

Common characteristics of experience-based test techniques include:

  • Test conditions, test cases, and test data are derived from a test basis that may include knowledge and experience of testers, developers, users and other stakeholders 

This knowledge and experience includes expected use of the software, its environment, likely defects, and the distribution of those defects.

Black-box Test Techniques

Equivalence Partitioning 

Equivalence partitioning divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way. There are equivalence partitions for both valid and invalid values. 

  • Valid values are values that should be accepted by the component or system. An equivalence partition containing valid values is called a “valid equivalence partition.” 
  • Invalid values are values that should be rejected by the component or system. An equivalence partition containing invalid values is called an “invalid equivalence partition.” 
  • Partitions can be identified for any data element related to the test object, including inputs, outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). 
  • Any partition may be divided into sub partitions if required. 
  • Each value must belong to one and only one equivalence partition.
  • When invalid equivalence partitions are used in test cases, they should be tested individually, i.e., not combined with other invalid equivalence partitions, to ensure that failures are not masked. Failures can be masked when several failures occur at the same time but only one is visible, causing the other failures to be undetected. 

To achieve 100% coverage with this technique, test cases must cover all identified partitions (including invalid partitions) by using a minimum of one value from each partition. Coverage is measured as the number of equivalence partitions tested by at least one value, divided by the total number of identified equivalence partitions, normally expressed as a percentage. Equivalence partitioning is applicable at all test levels.
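The partition and coverage ideas above can be sketched in a few lines. The partitions here are hypothetical (an input field accepting integers 1 to 100); one representative value per partition suffices for 100% equivalence partition coverage, and coverage is computed exactly as defined above.

```python
# Hypothetical partitions for an input field accepting integers 1..100.
partitions = {
    "invalid_too_low": range(-1000, 1),    # values <= 0
    "valid": range(1, 101),                # values 1..100
    "invalid_too_high": range(101, 1000),  # values >= 101
}

def representatives(partitions):
    """Pick one value from each partition (here simply the first)."""
    return {name: next(iter(values)) for name, values in partitions.items()}

def ep_coverage(tested_partitions, all_partitions):
    """Partitions exercised by at least one value / total partitions."""
    return len(set(tested_partitions) & set(all_partitions)) / len(all_partitions)

reps = representatives(partitions)
print(reps)                                # one test value per partition
print(ep_coverage(["valid"], partitions))  # 1 of 3 partitions covered
```

Testing all three representative values would bring coverage to 100%; note that each invalid representative should appear in its own test case so that failures are not masked.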

Boundary Value Analysis

Boundary value analysis (BVA) is an extension of equivalence partitioning, but can only be used when the partition is ordered, consisting of numeric or sequential data. The minimum and maximum values (or first and last values) of a partition are its boundary values. 

For example, suppose an input field accepts a single integer value, using a keypad to limit inputs so that non-integer inputs are impossible. The valid range is from 1 to 5, inclusive. So, there are three equivalence partitions: invalid (too low); valid; invalid (too high). For the valid equivalence partition, the boundary values are 1 and 5. For the invalid (too high) partition, the boundary value is 6. For the invalid (too low) partition, there is only one boundary value, 0, because this is a partition with only one member. 

In the example above, we identify two boundary values per boundary. The boundary between invalid (too low) and valid gives the test values 0 and 1. The boundary between valid and invalid (too high) gives the test values 5 and 6. Some variations of this technique identify three boundary values per boundary: the values before, at, and just over the boundary. In the previous example, using three-point boundary values, the lower boundary test values are 0, 1, and 2, and the upper boundary test values are 4, 5, and 6. 

Behaviour at the boundaries of equivalence partitions is more likely to be incorrect than behaviour within the partitions. It is important to remember that both specified and implemented boundaries may be displaced to positions above or below their intended positions, may be omitted altogether, or may be supplemented with unwanted additional boundaries. Boundary value analysis and testing will reveal almost all such defects by forcing the software to show behaviours from a partition other than the one to which the boundary value should belong. 

Boundary value analysis can be applied at all test levels. This technique is generally used to test requirements that call for a range of numbers (including dates and times). Boundary coverage for a partition is measured as the number of boundary values tested, divided by the total number of identified boundary test values, normally expressed as a percentage.
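The two-point and three-point variants from the 1-to-5 example above can be sketched as a small helper. The function name and signature are illustrative, not from any standard library.

```python
def boundary_values(lo, hi, three_point=False):
    """Boundary test values for an ordered valid partition [lo, hi]."""
    if three_point:
        # values before, at, and just over each boundary
        return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})
    # two-point variant: the value at and just outside each boundary
    return sorted({lo - 1, lo, hi, hi + 1})

# Valid partition 1..5 from the example above
print(boundary_values(1, 5))                    # [0, 1, 5, 6]
print(boundary_values(1, 5, three_point=True))  # [0, 1, 2, 4, 5, 6]

# Boundary coverage as defined above: tested values / identified values
tested = {0, 1, 5}
print(len(tested) / len(boundary_values(1, 5)))  # 3 of 4 values = 0.75
```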

Decision Table Testing

Decision tables are a good way to record complex business rules that a system must implement. When creating decision tables, the tester identifies conditions (often inputs) and the resulting actions (often outputs) of the system. These form the rows of the table, usually with the conditions at the top and the actions at the bottom. Each column corresponds to a decision rule that defines a unique combination of conditions which results in the execution of the actions associated with that rule. The values of the conditions and actions are usually shown as Boolean values (true or false) or discrete values (e.g., red, green, blue), but can also be numbers or ranges of numbers. These different types of conditions and actions might be found together in the same table.

The common notation in decision tables is as follows:

For conditions:

  • Y means the condition is true (may also be shown as T or 1) 
  • N means the condition is false (may also be shown as F or 0) 
  • — means the value of the condition doesn’t matter (may also be shown as N/A)

For actions: 

  • X means the action should occur (may also be shown as Y or T or 1) 
  • Blank means the action should not occur (may also be shown as – or N or F or 0)

A full decision table has enough columns (test cases) to cover every combination of conditions. By deleting columns that do not affect the outcome (for example, impossible combinations of conditions), the number of test cases can be reduced considerably.

The common minimum coverage standard for decision table testing is to have at least one test case per decision rule in the table. This typically involves covering all combinations of conditions. Coverage is measured as the number of decision rules tested by at least one test case, divided by the total number of decision rules, normally expressed as a percentage.

The strength of decision table testing is that it helps to identify all the important combinations of conditions, some of which might otherwise be overlooked. It also helps in finding any gaps in the requirements. It may be applied to all situations in which the behaviour of the software depends on a combination of conditions, at any test level.
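A decision table can be represented directly in code, with each rule as one column of conditions plus its action. The discount rules below are hypothetical, invented purely to illustrate the Y/N/— notation described above.

```python
# Hypothetical decision table: should an order receive a discount?
# Each dict is one decision rule (one column); "Y"/"N" as in the notation
# above, "-" meaning the condition's value doesn't matter for this rule.
rules = [
    {"member": "Y", "large_order": "Y", "coupon": "-", "discount": True},
    {"member": "Y", "large_order": "N", "coupon": "Y", "discount": True},
    {"member": "Y", "large_order": "N", "coupon": "N", "discount": False},
    {"member": "N", "large_order": "-", "coupon": "-", "discount": False},
]

def matches(rule, member, large_order, coupon):
    actual = {"member": member, "large_order": large_order, "coupon": coupon}
    return all(v == "-" or v == actual[k]
               for k, v in rule.items() if k != "discount")

def decide(member, large_order, coupon):
    """Return the action of the first rule matching the given conditions."""
    for rule in rules:
        if matches(rule, member, large_order, coupon):
            return rule["discount"]
    raise ValueError("no rule covers this combination of conditions")

print(decide("Y", "Y", "N"))  # True: rule 1, coupon doesn't matter
print(decide("N", "Y", "Y"))  # False: rule 4, only membership matters
```

The minimum coverage standard above (one test case per rule) would here mean four test cases, one matching each entry in `rules`.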

State Transition Testing

Components or systems may respond differently to an event depending on current conditions or previous history (e.g., the events that have occurred since the system was initialised). The previous history can be summarised using the concept of states. A state transition diagram shows the possible software states, as well as how the software enters, exits, and transitions between states. A transition is initiated by an event (e.g., user input of a value into a field). The same event can result in two or more different transitions from the same state. The state change may result in the software taking an action (e.g., outputting a calculation or error message). 

A state transition table shows all valid transitions and potentially invalid transitions between states, as well as the events, and resulting actions for valid transitions. State transition diagrams normally show only the valid transitions and exclude the invalid transitions. 

Tests can be designed to cover a typical sequence of states, to exercise all states, to exercise every transition, to exercise specific sequences of transitions, or to test invalid transitions. 

State transition testing is used for menu-based applications and is widely used within the embedded software industry. The technique is also suitable for modelling a business scenario having specific states or for testing screen navigation. The concept of a state is abstract — it may represent a few lines of code or an entire business process. 

Coverage is commonly measured as the number of identified states or transitions tested, divided by the total number of identified states or transitions in the test object, normally expressed as a percentage.
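A state transition table maps (state, event) pairs to next states; pairs with no entry are invalid transitions. The login-screen model below is a hypothetical example used to show how a test sequence exercises transitions and how transition coverage is measured as defined above.

```python
# Hypothetical state transition table for a login screen:
# (current_state, event) -> next_state. Missing pairs are invalid transitions.
transitions = {
    ("logged_out", "correct_password"): "logged_in",
    ("logged_out", "wrong_password"): "locked_warning",
    ("locked_warning", "correct_password"): "logged_in",
    ("locked_warning", "wrong_password"): "locked_out",
    ("logged_in", "logout"): "logged_out",
}

def run(events, state="logged_out"):
    """Drive the model; an event with no table entry is an invalid transition."""
    covered = set()
    for event in events:
        key = (state, event)
        if key not in transitions:
            raise ValueError(f"invalid transition: {key}")
        covered.add(key)
        state = transitions[key]
    return state, covered

# One test sequence, and the transition coverage it achieves
state, covered = run(["wrong_password", "correct_password", "logout"])
print(state)                            # logged_out
print(len(covered) / len(transitions))  # 0.6: 3 of 5 transitions exercised
```

A test for an invalid transition would assert that `run` raises the error, e.g. `logout` while in state `logged_out`.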

Use Case Testing 

Tests can be derived from use cases, which are a specific way of designing interactions with software items. They incorporate requirements for the software functions. Use cases are associated with actors (human users, external hardware, or other components or systems) and subjects (the component or system to which the use case is applied).

Each use case specifies some behaviour that a subject can perform in collaboration with one or more actors. A use case can be described by interactions and activities, as well as preconditions, postconditions and natural language where appropriate. Interactions between the actors and the subject may result in changes to the state of the subject. Interactions may be represented graphically by work flows, activity diagrams, or business process models.

A use case can include possible variations of its basic behaviour, including exceptional behaviour and error handling (system response and recovery from programming, application and communication errors, e.g., resulting in an error message). Tests are designed to exercise the defined behaviours (basic, exceptional or alternative, and error handling). Coverage can be measured by the number of use case behaviours tested divided by the total number of use case behaviours, normally expressed as a percentage.

White-box Test Techniques 

White-box testing is based on the internal structure of the test object. White-box test techniques can be used at all test levels, but the two code-related techniques discussed in this section are most commonly used at the component test level. There are more advanced techniques that are used in some safety-critical, mission-critical, or high integrity environments to achieve more thorough coverage, but those are not discussed here.

Statement Testing and Coverage 

Statement testing exercises the executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage. 

Decision Testing and Coverage

Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome). 

Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage.

The Value of Statement and Decision Testing

When 100% statement coverage is achieved, it ensures that all executable statements in the code have been tested at least once, but it does not ensure that all decision logic has been tested. Of the two white-box techniques discussed in this syllabus, statement testing may provide less coverage than decision testing. 

When 100% decision coverage is achieved, it executes all decision outcomes, which includes testing the true outcome and also the false outcome, even when there is no explicit false statement (e.g., in the case of an IF statement without an else in the code). Statement coverage helps to find defects in code that was not exercised by other tests. Decision coverage helps to find defects in code where other tests have not taken both true and false outcomes. 

Achieving 100% decision coverage guarantees 100% statement coverage (but not vice versa).
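The classic illustration of this difference is an IF statement without an else. The function below is a hypothetical example, not taken from the text: a single test with a negative input executes every statement (100% statement coverage) yet exercises only the True outcome of the decision, so decision coverage is only 50%.

```python
def absolute(x):
    """Return |x| using an if with no else (an implicit False branch)."""
    if x < 0:
        x = -x
    return x

# One test: every statement runs, but only the True decision outcome.
# -> 100% statement coverage, 50% decision coverage.
print(absolute(-3))  # 3

# Adding a non-negative input exercises the implicit False outcome,
# reaching 100% decision coverage, which guarantees 100% statement coverage.
print(absolute(4))   # 4
```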

Experience-based Test Techniques

When applying experience-based test techniques, the test cases are derived from the tester’s skill and intuition, and their experience with similar applications and technologies. These techniques can be helpful in identifying tests that were not easily identified by other more systematic techniques. Depending on the tester’s approach and experience, these techniques may achieve widely varying degrees of coverage and effectiveness. Coverage can be difficult to assess and may not be measurable with these techniques. 

Commonly used experience-based techniques are discussed in the following sections.

Error Guessing 

Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including: 

  • How the application has worked in the past 
  • What kind of errors tend to be made 
  • Failures that have occurred in other applications

A methodical approach to the error guessing technique is to create a list of possible errors, defects, and failures, and design tests that will expose those failures and the defects that caused them. These error, defect, failure lists can be built based on experience, defect and failure data, or from common knowledge about why software fails.

Exploratory Testing

In exploratory testing, informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing. 

Exploratory testing is sometimes conducted using session-based testing to structure the activity. In session-based testing, exploratory testing is conducted within a defined time-box, and the tester uses a test charter containing test objectives to guide the testing. The tester may use test session sheets to document the steps followed and the discoveries made. 

Exploratory testing is most useful when there are few or inadequate specifications or significant time pressure on testing. Exploratory testing is also useful to complement other more formal testing techniques. 

Exploratory testing is strongly associated with reactive test strategies. Exploratory testing can incorporate the use of other black-box, white-box, and experience-based techniques.

Checklist-based Testing

In checklist-based testing, testers design, implement, and execute tests to cover test conditions found in a checklist. As part of analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails. 

Checklists can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can provide guidelines and a degree of consistency. As these are high-level lists, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.

Testing throughout the software development lifecycle

A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically. There are a number of different software development lifecycle models, each of which requires different approaches to testing.

Software development and software testing

It is an important part of a tester’s role to be familiar with the common software development lifecycle models so that appropriate test activities can take place.

In any software development lifecycle model, there are several characteristics of good testing:

  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available

No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.

Common software development lifecycle models can be categorized as follows:

  • Sequential development models
  • Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities. This means that any phase in the development process should begin when the previous phase is complete. In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.

In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

Unlike the Waterfall model, the V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing. In this model, the execution of tests associated with each test level proceeds sequentially, but in some cases overlapping occurs.

Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally. The size of these feature increments varies, with some methods having larger pieces and some smaller pieces. The feature increments can be as small as a single change to a user interface screen or new query option.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration. Iterations may involve changes to features developed in earlier iterations, along with changes in project scope. Each iteration delivers working software which is a growing subset of the overall set of features until the final software is delivered or development is stopped.

Examples include:

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features
  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features
  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
  • Spiral: Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Components or systems developed using these methods often involve overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines. Many development efforts using these methods also include the concept of self-organizing teams, which can change the way testing work is organized as well as the relationship between testers and developers.

These methods form a growing system, which may be released to end-users on a feature-by-feature basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of whether the software increments are released to end-users, regression testing is increasingly important as the system grows.

In contrast to sequential models, iterative and incremental models may deliver usable software in weeks or even days, but may only deliver the complete product over a period of months or even years.

Software development lifecycle models in context

Software development lifecycle models must be selected and adapted to the context of project and product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (e.g., time-to-market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system such as an automobile’s brake control system. As another example, in some cases organizational and cultural issues may inhibit communication between team members, which can impede iterative development.

Depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of Internet of Things system versions. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the software development lifecycle after they have been introduced to operational use (e.g., operate, update, and decommission phases).

Reasons why software development models must be adapted to the context of project and product characteristics can be:

  • Difference in product risks of systems (complex or simple project)
  • Many business units can be part of a project or program (combination of sequential and agile development)
  • Short time to deliver a product to the market (merge of test levels and/or integration of test types in test levels)

Why is testing necessary?

Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected, and subsequently fixed, this contributes to the quality of the components or systems. In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.

Testing’s contributions to success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.
  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.
  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests.
  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality assurance and testing

While people often use the phrase quality assurance (or just QA) to refer to testing, quality assurance and testing are not the same, but they are related. A larger concept, quality management, ties them together. Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product. An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product. For example, a requirements elicitation error can lead to a requirements defect, which then results in a programming error that leads to a defect in the code.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances. For example, some defects require very specific inputs or preconditions to trigger a failure, which may occur rarely or never.

Errors may occur for many reasons, such as:

  • Time pressure
  • Human fallibility
  • Inexperienced or insufficiently skilled project participants
  • Miscommunication between project participants, including miscommunication about requirements and design
  • Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
  • Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number
  • New, unfamiliar technologies

In addition to failures caused due to defects in the code, failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False positives are test results reported as defects that are not actually defects; false negatives are tests that fail to detect defects they should have detected.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

In this example, the customer complaints are effects. The incorrect interest payments are failures. The improper calculation in the code is a defect, and it resulted from the original defect, the ambiguity in the user story. The root cause of the original defect was a lack of knowledge on the part of the product owner, which resulted in the product owner making an error while writing the user story.
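The chain from root cause to effect can be sketched as follows. The function and figures below are hypothetical reconstructions of the example above, not an actual system: a single defective line, traceable to the ambiguous user story.

```python
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Intended rule: pay one month's interest, i.e. annual_rate / 12.

    Defect: the ambiguous user story ("apply the interest rate") led the
    developer to apply the full annual rate every month.
    """
    return round(balance * annual_rate, 2)  # defect: missing `/ 12`

# The failure the customer observes (the effect is the complaint it provokes):
payment = monthly_interest(1000.0, 0.06)
print(payment)  # 60.0, but the correct monthly payment is 5.0
```

Fixing the single line removes the defect, but only training the product owner (the root cause) prevents similar defects in future user stories.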

What is testing?

In the modern age, software systems are part of everyday life. Users all over the world use, and even test, these systems without knowing that they are part of the testing. In our daily lives we use systems on our phones or desktops for banking, cellular service, medical care, ordering food, and much more.

Software that does not function properly can lead to many problems, including loss of money, time, and reputation. Software testing, which is part of quality assurance, can reduce errors, defects, and failures in the software under test.

Software testing is a process that includes many different activities: test planning, analysis, design, implementation, test execution, reporting progress and results, and evaluating the quality of the test object.