This section considers the following fundamental concepts:
- Usability
- User experience
- Accessibility
Usability is the extent to which a software product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Usability testers should be aware that other definitions may be used in organisations.
The user interface consists of all components of a software product that provide information and controls for the user to accomplish specific tasks with the system.
Usability evaluation includes the following principal activities:
- Usability reviews
- Usability testing
- User surveys
A usability problem is a software defect which results in difficulty in performing tasks via the user interface. This affects the user’s ability to achieve their goals effectively, or efficiently, or with satisfaction. Usability problems can lead to confusion, error, delay or outright failure to complete some task on the part of the user. In safety-critical systems such as medical systems, usability problems can also lead to injuries or death.
A software product can work exactly to specification and still have serious usability problems, as shown by the following examples:
- A car rental mobile app has a dead link. This is a defect which results in a usability problem.
- A car rental mobile app allows users to cancel a reservation, but the users perceive the cancellation procedure as unreasonably complicated. This is a usability problem which affects the efficiency of the mobile app.
- A car rental mobile app conforms to the specification and works both effectively and efficiently, but users think it looks unprofessional. This is a usability problem which affects user satisfaction when using the mobile app.
Usability always relates to the context of use, which consists of the following components. As the examples below show, user expectations of usability differ considerably depending on these components.
- Users: persons who interact with a software product by providing inputs, or by using the output of the software product.
- Tasks: particular activities performed by users or particular groups of users (e.g., inexperienced users, administrators).
- Equipment: the hardware, software and materials required to use a software product.
- Environment: the physical, social and technical conditions in which a user interacts with a software product. The social conditions include the organisational conditions.
The following scenarios describe different contexts of use for the same software product:
- Administrative staff use Microsoft Word® to write documents in a consultancy firm
- An elderly person uses Microsoft Word® for the first time to write an invitation to her birthday party
User Experience Concepts
User experience describes a person’s perceptions and responses that result from the use and/or anticipated use of a product, system or service.
User experience includes the following user responses, which occur before, during and after use:
- physical and psychological responses
- behaviours and accomplishments
User experience is influenced by:
- brand image (i.e., the users’ trust in the manufacturer)
- presentation (i.e., the appearance of the software product, including packaging and documentation)
- software product performance
- interactive behaviour
- the helpfulness of the software product, including help system, support and training
- the user’s internal and physical state resulting from prior experiences, attitudes, skills, personality, education and intelligence
- the context of use
Usability criteria such as effectiveness, efficiency and satisfaction can be used to assess aspects of user experience such as brand image and presentation (satisfaction), functionality (effectiveness) and software product performance (efficiency).
Accessibility is the degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
Evaluating Usability, User Experience and Accessibility
The key objectives of usability evaluation, user experience evaluation and accessibility evaluation are compared in the following table and discussed in more detail in subsequent sections.
| Type of evaluation | Key objectives |
|---|---|
| Usability evaluation | Evaluate the direct interaction between users and the software product. |
| User experience evaluation | Evaluate the services received prior to the use of the software product, the direct interaction between users and the software product, and the services received after the use of the software product. |
| Accessibility evaluation | Evaluate the direct interaction between users and the software product, focusing on understanding problems related to accessibility barriers, rather than general efficiency or satisfaction. |
The principal techniques applied in usability evaluation, user experience evaluation and accessibility evaluation are shown in the following table and discussed in more detail in later chapters.

| Technique | Description |
|---|---|
| Usability review (e.g., informal usability review, expert usability review) | Experts and users evaluate the user interface of a software product for usability problems; the evaluation is based on their experience. |
| Usability testing (e.g., think aloud testing) | Users are observed while they perform typical tasks with the software product. |
| User survey | Users fill out questionnaires regarding their satisfaction with the software product. |

Each technique can be applied as a qualitative usability evaluation (Qual) or a quantitative usability evaluation (Quant).
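User surveys are often scored with a standardised instrument. As an illustration, the following sketch computes a System Usability Scale (SUS) score; the choice of SUS here is an assumption for illustration, not a technique mandated by this section.

```python
# Illustrative sketch: scoring the standard 10-item SUS questionnaire,
# where each item is answered on a 1-5 scale.

def sus_score(responses):
    """Convert ten 1-5 answers into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, answer in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered negatively.
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible answers -> 100.0
```

A score like this supports quantitative (Quant) evaluation of satisfaction across test participants.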
Usability evaluation is a process through which information about the usability of a system is gathered in order to improve the system (known as formative evaluation) or to assess the merit or worth of a system (known as summative evaluation).
There are two types of usability evaluation:
- Formative (or “exploratory”) evaluation is conducted to understand usability issues. Formative evaluation is often conducted early on in the development lifecycle during the design and prototyping stages to get ideas and to guide (or “form”) the design by identifying usability design problems.
- Summative evaluation is conducted late in the development lifecycle shortly before or after implementation to measure the usability of a component or software product. Summative usability testing is quantitative; it focuses on obtaining measurements for the effectiveness, efficiency or satisfaction of a software product. A summative usability evaluation can be used to evaluate a design based on usability requirements so that the design’s acceptability can be established from the users’ point of view.
Both types of evaluation can be conducted iteratively.
This section focuses on usability evaluation of software products. Usability evaluation can also be applied to other products or services where usability is important, such as user guides, vending machines, aircraft cockpits, medical systems and train stations.
Usability evaluation addresses the direct interaction between users and the software product. The direct interaction occurs via a screen dialogue or other form of system use. Usability evaluation can be based on a software application, on design documents and on prototypes.
The objectives of usability evaluation are:
- to assess whether usability requirements have been met
- to uncover usability problems so they can be corrected
- to measure the usability of a software product (see below)
Usability evaluation addresses the following:
- Effectiveness: the extent to which correct and complete goals are achieved. It answers the question: "Does the software product do what I want?"
- Efficiency: the resources expended to achieve specified goals. It answers the question: "Does the software product solve my tasks quickly?"
- Satisfaction: freedom from discomfort, and positive attitudes towards the use of the software. It answers the question: "Do I feel comfortable while using the software product?"
If users are involved, a usability evaluation can be carried out by performing usability testing, conducting user surveys and performing usability reviews. If users are not available, usability reviews can still be performed. If the software will be used by people with disabilities (e.g., colour-blind users), they should be included in usability reviews from an early stage.
A qualitative usability evaluation enables identification and analysis of usability problems, focusing on understanding user needs, goals and reasons for the observed user behaviour.
A quantitative usability evaluation focuses on obtaining measurements for the effectiveness, efficiency or satisfaction of a software product.
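As a minimal sketch of how such measurements might be derived from usability test sessions (all data and field names here are hypothetical):

```python
# Hypothetical test-session records: whether the task was completed
# (effectiveness), time taken in seconds (efficiency), and a 1-5
# satisfaction rating from a post-session questionnaire.
from statistics import mean

sessions = [
    {"completed": True,  "seconds": 95,  "rating": 4},
    {"completed": True,  "seconds": 120, "rating": 5},
    {"completed": False, "seconds": 300, "rating": 2},
    {"completed": True,  "seconds": 80,  "rating": 4},
]

effectiveness = sum(s["completed"] for s in sessions) / len(sessions)
efficiency = mean(s["seconds"] for s in sessions if s["completed"])
satisfaction = mean(s["rating"] for s in sessions)

print(f"Task completion rate: {effectiveness:.0%}")  # 75%
print(f"Mean time on task:    {efficiency:.1f} s")
print(f"Mean rating:          {satisfaction:.2f}")
```

Such measurements can then be compared against stated usability requirements, for example a required task completion rate of at least 90%.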
User Experience Evaluation
User experience describes a person’s perceptions and responses resulting from the use or anticipated use of a software product.
Usability is part of the user experience. Consequently, usability evaluation is a part of user experience evaluation. The principal techniques used for user experience evaluation are the same as those used for usability evaluation.
User experience evaluation addresses the whole user experience with the software product, not just the direct interaction. User experience includes:
- Advertisements that make users aware of the software product
- Training in the use of the software product
- Touch-points with the software product other than screen dialogue, such as encounters with support, letters or goods received as a result of interaction with the software product
- Problems that are not handled by the user interface of the software product, such as the notifications of delays, handling of complaints and unsolicited calls
User experience can be evaluated using the principal techniques outlined in the tables above. In a user experience test, time gaps between touch-points (e.g., the days between a booking and the arrival of a confirmation letter) can be bridged during a usability test session, for example by simulating the intervening events.
Accessibility evaluation is a usability evaluation which focuses on the accessibility of a software product. It addresses the direct interaction between a user with disabilities or limitations and the software product.
The following advice applies specifically to accessibility evaluation:
1. Define the ambition level for accessibility
The Web Content Accessibility Guidelines (WCAG) define three conformance levels for accessibility: A, AA and AAA. It is recommended to adopt conformance level AA. This requires satisfying all level A success criteria, which cover the most basic requirements for web accessibility and remove the biggest barriers for users with disabilities, as well as all level AA success criteria.
2. Create or adapt guidelines for accessible design.
These guidelines should comply with legal requirements. They should also be in accordance with the chosen ambition level for accessibility. Additionally, the usability of the guidelines for developers should be verified.
- Review the guidelines for accuracy
- Establish an accessibility hotline, where accessibility questions from development teams can be answered competently within an agreed time limit
3. Train development teams in order to prevent as many accessibility problems as possible. This includes factors such as:
- Legal requirements for accessibility
- Guidelines for accessible design and how to interpret and apply them
- Tools and techniques to use when evaluating accessibility
- The relationship between usability and accessibility
4. Accessibility testing focuses on the following aspects:
- Use of a think aloud technique to understand the test participant’s thoughts and vocabulary during accessibility testing
- Focus on understanding mistakes related to accessibility barriers, rather than on efficiency or satisfaction
- Use tasks that concentrate on specific areas of concern for potential accessibility problems, rather than on general software product usage
Accessibility evaluation should consider relevant accessibility standards.
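Some individual accessibility checks can be automated. As an assumption-laden illustration (not a technique prescribed by this section), the sketch below uses Python's standard `html.parser` to flag images without a text alternative, which relates to WCAG success criterion 1.1.1 (non-text content, level A). Automated checks find only a small subset of accessibility problems; evaluation with users remains essential.

```python
# Hedged sketch: flag <img> elements that lack an alt attribute.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the current tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.problems.append(self.getpos())  # (line, column) of the defect

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="car.png" alt="Rental car"></p>')
print(checker.problems)  # one problem: the logo image has no alt text
```

A check like this could run in a development team's build pipeline, supporting the training and prevention activities described above.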
Usability Evaluation in Human-Centred Design
(Figure: human-centred design activities and their interdependence)
Human-centred design is an approach to design that aims to make software products more usable by focusing on the use of the software products and applying human factors, ergonomics, and usability knowledge and techniques.
The human-centred design process can be summarised as follows:
- Analyze: Talk with people and discover “what is the problem?”
- Design: Prototype what you assume is a solution
- Evaluate: Watch people use the prototype and learn from their experiences
- Iterate: Repeat until the usability requirements are achieved
The human-centred design activities are based on the following three key elements:
Observe and interview users in their work environment. Users are involved throughout the design stage by discussing designs and alternatives with them directly (where possible), or with representative users. In agile software development, representative users are typically the product owners, who are an integral part of the development team and enable frequent feedback to be given to designers and developers on usability issues.
Perform usability evaluation on the software product. A usability evaluation may take place at any time during human-centred design, from early analysis through software product delivery and beyond. A usability evaluation may be based on a prototype, as mentioned above, or on a completed software product. Usability evaluations conducted in the design phase can be cost-effective by finding usability problems early.
Iterate between design and usability evaluation.
Considering the human-centred design process, the most frequent iterations take place between the activities “Produce design solutions” and “Evaluate design solutions”. This generally involves the successive development of a prototype, which is a representation of all or part of a software product’s user interface. Although prototypes are limited in some ways, they can be useful for usability evaluation. Prototypes may take the form of paper sketches or display mock-ups, as well as software products under design. Starting with an initial prototype, the following activities are performed:
- The prototype is evaluated. The person who performs the evaluation conducts usability testing on the prototype.
- The prototype is improved and refined based on the results of the evaluation. The person who performs the evaluation helps the developers evolve the prototype by incorporating user feedback into the design.
These activities are repeated until the usability requirements are achieved. When prototypes are developed in iterations, the steady refinement gives the user a more realistic impression of how the finished product will look and feel. Additionally, the risk of forgetting or ignoring usability issues is reduced.
Both usability and accessibility must be considered during the design phase. Usability testing often takes place during system integration and continues through system testing and into acceptance testing.
A usability requirement is a requirement on the usability of a component or system.
It provides the basis for the evaluation of a software product to meet identified user needs. Usability requirements may have a variety of sources:
- They may be stated explicitly, such as in requirements documentation or a user story
- They may be implicit, undocumented user expectations (e.g., a user might implicitly expect that an application provides shortcut keys for particular user actions)
- They may be included in adopted or required standards
Examples of usability requirements (in this case described as user stories) are:
- “As a frequent user of the airline’s booking portal, an overview of my currently booked flights shall be automatically shown after I log on. This shall enable me to get a quick overview of my booked flights and quickly make any updates.”
This usability requirement is about the effectiveness component of usability.
- “As a help-desk assistant, I must be able to enter and log the details of a customer request into the Customer Relations database in no more than two simple steps. This shall enable me to focus on the customer request and provide them with optimum support.” This usability requirement is about the efficiency component of usability.
Agile Usability Evaluation
Usability evaluations are also suitable in agile software development.
Agile software development is a group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between members of a self-organising team.
In agile software development, teams work in short iterations, each of which has the goal of designing, implementing and testing a group of features.
The following usability evaluation approaches work well with agile software development:
- Rapid Iterative Testing and Evaluation (RITE) is a qualitative usability test method where changes to the user interface are made as soon as a usability problem is identified and a solution is clear. The RITE method focuses on instant redesign to fix problems and then confirming that the solution works with new test participants (real users or representative users). Changes can occur after observing as few as one test participant. Once the data for a test participant has been collected, the usability tester and the stakeholders decide if any changes are needed prior to the next test participant. The modified user interface is then tested with the remaining test participants.
- Informal and quick usability test sessions are useful where many potential users can be accessed (e.g., a cafe, a conference or a trade show). Such forms of usability test sessions typically last less than fifteen minutes and apply techniques such as think aloud and heuristic evaluation.
- Weekly testing. Test participants are recruited well in advance and scheduled for a particular day of the week (e.g., each Tuesday), so that the software build can be usability tested on that day. Usability tasks are prepared shortly before the scheduled testing day and may include exploratory testing sessions, where the knowledge of the tester and heuristic checklists are used to focus on usability issues.
- Usability reviews.