The following section presents a set of measures that can be applied at various stages of the development process and shows how efficient, effective, and usable systems can be realized for a changing and increasingly diverse user population.
ISO 9241-11 defines effectiveness, efficiency, and user satisfaction as the three central criteria for the usability of interactive products. Although this standard provides essential usability criteria, it neither presents methods for realizing systems with high usability, nor does it provide off-the-shelf measures to quantify the usability of systems. So how can user-friendly systems be realized?
The foundations for designing usable systems can be found at the dawn of graphical user interfaces: In 1985, just a year after Apple introduced the Macintosh, Gould and Lewis proposed three key principles for building usable software (Gould and Lewis 1985): early focus on the users' tasks, empirical measurement of product usage, and iterative design. Many stakeholders are involved in the specification and development of new software systems: software engineers, user interface designers, and software architects on one side, as well as domain experts and managers who are responsible for the introduction of the new software on the other.
Of course, the end users also belong in this circle of stakeholders, but often enough they are neglected or consulted only in the late stages of development.
Early focus on the users' tasks The decision to invest in new software often comes from the managers of a company or from external consultants. As they are not the actual users, this often leads to ill-defined or insufficiently defined task descriptions. Hence, users must be included in the earliest stages of the design of a system, their tasks must be well understood, and their wishes, abilities, and motives must be captured by the design team.
Empirical measurement of product usage To ensure that the developed system supports the tasks users want or have to perform, the execution of prototypical tasks by actual users must be observed. These task executions can be quantified (e.g., by measuring task completion and accuracy) and will identify the critical parts of the software that do not support users in their work and need to be revised.
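As an illustration, such a quantification of observed task executions can be sketched in a few lines. The session records and metric names below are hypothetical and serve only to demonstrate the idea, not to reproduce any particular study:

```python
# Sketch: aggregating observed task executions into usability metrics.
# The session records below are invented for illustration.

def usability_metrics(sessions):
    """Compute completion rate, mean time of completed tasks,
    and mean error count per task from observed sessions."""
    completed = [s for s in sessions if s["completed"]]
    completion_rate = len(completed) / len(sessions)
    mean_time = sum(s["seconds"] for s in completed) / len(completed)
    error_rate = sum(s["errors"] for s in sessions) / len(sessions)
    return {"completion_rate": completion_rate,
            "mean_time_s": mean_time,
            "errors_per_task": error_rate}

sessions = [
    {"completed": True,  "seconds": 42.0, "errors": 0},
    {"completed": True,  "seconds": 55.5, "errors": 2},
    {"completed": False, "seconds": 90.0, "errors": 5},
    {"completed": True,  "seconds": 38.2, "errors": 1},
]
m = usability_metrics(sessions)
```

Tracking such metrics across design iterations shows whether a revision actually improved the critical parts of the software.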
Iterative design It is not sufficient to attest a certain level of usability at a certain point in time; system usability must be a continuous focus during the whole design process. Ideally, a design team starts with an early prototype of a planned system and evaluates its usability with typical tasks performed by typical users. These quantitative evaluations eliminate the most severe usability issues early in the development. Future iterations with functional prototypes can then focus on the remaining, minor usability issues, again with actual users who perform typical tasks with the planned system. Figure 14.1 shows a schematic presentation of this cyclic process.
14.2.1 Metrics, Procedures and Empirical Approaches
This section outlines a few of the most central usability methods for user-centred design. In-depth descriptions and additional methods can be found in Dix et al. (2003) or Courage and Baxter (2005).
Methods for developing and evaluating user interfaces can be divided into methods with and without the involvement of prospective users. Methods without user involvement, such as heuristic evaluation (experts evaluate whether interfaces meet certain heuristics), GOMS (a method for estimating expected performance, similar to Methods-Time Measurement, MTM), or cognitive walkthroughs (human factors experts identify usability issues by predicting how users would solve tasks), are not covered in this article. They are a valuable addition to every development process, yet many issues will remain uncovered if prospective users are not included during the design phases.
Paper prototyping Paper prototyping is a low-fidelity prototyping technique that allows the gathering of user feedback on an interface in early stages of the design and before the software implementation starts. The proposed user interface is drawn on paper and discussed with users. Their feedback can be integrated at once, and interface suggestions can be drawn and discussed immediately. Replacing parts of the simulated screen with new layers can simulate interactive interfaces and the traversal through multiple screens. Paper prototyping is best applied in early stages of the design, when the general interface is designed. But even in later stages, individual changes, new dialogs, and screens can quickly be designed and evaluated using this technique.

Fig. 14.1 Schematic presentation of an iterative development process as proposed by Gould and Lewis (1985)
Rapid prototyping This method carries forward the concept of paper prototyping. It may include "clickable" interface mock-ups in a presentation tool, or functional screen prototypes.
Wizard-of-Oz This method allows the evaluation of interfaces even if the respective backend functionality is not yet available: a hidden operator observes the user's interactions with the prototype and simulates the system's responses. For example, a paper prototype can "control" a machine if a hidden observer executes the user's inputs on a remote control.
A/B tests To understand whether different interface alternatives result in higher speed, higher accuracy, or higher user satisfaction, design alternatives can be compared by presenting them to a set of users, either within-subjects (every user uses every interface) or between-subjects (each user evaluates just one interface). Measures may be objective, such as task performance and learnability, or subjective, such as users' perceived effort.
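A minimal between-subjects comparison of two interface alternatives might look as follows. The completion times are invented for illustration, and Welch's t statistic is just one of several applicable test statistics:

```python
# Hypothetical between-subjects A/B comparison of task completion times.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (unequal variances allowed)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

times_a = [41.2, 39.8, 45.1, 44.0, 38.5]  # seconds, interface A
times_b = [52.3, 49.9, 55.4, 51.0, 48.7]  # seconds, interface B
t = welch_t(times_a, times_b)
# A negative t means interface A was faster on average; compare |t|
# against a t distribution to judge statistical significance.
```

Within-subjects designs would instead pair each user's times on both interfaces and analyse the differences, which typically needs fewer participants.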
Note that the feedback acquired changes with the perceived completeness of the interface: paper prototypes are perceived as easy to change, so users often request fundamental changes, while applications that appear complete are perceived as difficult to change; hence the articulated feedback is often restricted to wording, colour choices, and other trivialities.
The methods presented are a necessity for building usable, understandable, learnable, goal-oriented, task-adaptive, efficient, and satisfying software for production systems. They should belong to the standard toolbox of all stakeholders involved in the technical development process. To identify and eliminate usability pitfalls and to ensure that the software captures the users' actual needs, these and similar methods must be applied frequently during the design process. However, to unfold their full potential, human factors experts must also be included in this process, as they are able to disentangle individual differences in effectiveness, efficiency, and user satisfaction that are caused, for example, by motivation, personality, or cognitive abilities. This then allows fine-grained tuning or individually tailored user interfaces.
14.2.2 Case Studies — Examples of the Potential of Exploring Human Factors
The following three exemplary cases present “success stories” in which the methodology of user-centred design and human factors research was applied in the area of production systems.
The first case quantifies the influence of poor usability on efficiency while interpreting large data sets in supply chain management. The second case outlines how human factors relevant for good job performance in supply chain management can be identified. The third case describes how an adequately designed worker support system can shift the focus between speed and accuracy depending on the task.
Case 1—Visual and Cognitive Ergonomics of Information Presentation Managing the flow of material in supply chain management depends both on the ability to perceive, understand, and interpret given data correctly and on the presentation of the data. To understand how insufficient data presentation and poor usability impact decision quality, we conducted a formal A/B test. We measured decision speed and decision quality as a function of human information processing speed and the presentation form (poor and good usability, operationalized as small or medium font sizes in a tabular presentation of supply chain data). The study revealed that, as expected, poor usability decreases overall performance.
Striking, though, is the finding that poor usability disproportionately impairs people with lower information processing speed, while faster information processors can compensate for poor information presentation. This finding highlights the necessity of user-centred and participatory design: software developers, interface designers, and contributing mechanical engineers usually do not realize the negative effects of poor interfaces on decision speed and decision quality, as they are able to compensate for these effects, while the end users often are not.
Frequent user tests with methods such as those presented above will reveal these barriers even in early stages of the design.
Case 2—A Game-Based Business Simulation to Understand Human Factors in Supply Chain Management
Supply chains are sociotechnical systems with high-dimensional and nonlinear solution spaces. The performance of a supply chain is not only determined by technical factors (e.g., shipping times, replacement times, delivery strategies, lot sizes, order costs) but also by the abilities of the human operators who need to oversee the possible choices and make good decisions in this complex solution space. To identify the factors that contribute to a better understanding of the supply chain, and to develop methods that help supply chain managers make better decisions in shorter time, we developed a series of supply chain games (Brauner et al. 2013; Stiller et al. 2014). The business simulation games are virtual test beds with multiple uses: First, they are a flexible tool to identify and quantify human factors that contribute to efficiency and effectiveness in managing information in logistics and supply chain management, by systematically varying the difficulty of the game and investigating how different human factors (capacity, processing speed, motivation, self-efficacy, personality traits) relate to game performance.
Second, through experimental variation of the user interface and/or the provided decision support tools, their benefits or costs can be evaluated and quantified within the test bed before the proposed changes are implemented in commercial applications. Third, the relationship between in-game performance and job performance as a supply chain manager can be used to develop interactive methods for personnel selection with increased accuracy.
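The first use, relating measured human factors to game performance, can be sketched as a simple correlation analysis. The processing-speed and profit scores below are invented for demonstration and do not stem from the cited studies:

```python
# Sketch: correlating a measured human factor with in-game performance.
# All data points below are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

processing_speed = [18, 22, 25, 30, 34, 40]       # e.g., symbols per minute
game_profit = [120, 150, 160, 210, 230, 280]      # simulated company profit
r = pearson_r(processing_speed, game_profit)
# A strongly positive r would suggest that faster information
# processors achieve higher profits in the simulation.
```

In the actual studies, such relationships were examined for several factors (capacity, motivation, self-efficacy, personality traits) across varying game difficulty.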
The design of this research and evaluation framework followed the design principles outlined above and applied several of the presented usability methods. The following sections describe the iterative development process and some of the methods used during the process.
The development of the business simulation game was a collaborative effort between four disciplines: mechanical engineering, communication science, computer science, and psychology. Each discipline contributed methods to ensure the simulation model's validity on the one hand and the game's usability and suitability for psychometric evaluations on the other.
At the beginning of the project, the experts from each discipline discussed the game model and the relevant indicators for inferring the simulated company's status.
Then the paper prototyping technique was used to arrange the indicators to form the user interface of the game. In a second step, a low-fidelity software prototype of the game simulation was realized, the previously selected indicators were populated with data from the simulation model, and the experts evaluated the suitability of the indicators and the simulation model. Third, the game was implemented as a web application; the design of the user interface was strictly based on the earlier prototypes and influenced by technical considerations. Fourth, during one user study, feedback from external usability experts was gathered and their suggestions were integrated into the game. A subsequent user study attested that the user interface refinements led to an increased profit of the simulated company, as the users had a better overview of the performance indicators and were able to make better decisions. Throughout the design process, feedback was gathered from other experts and test users, and the user interface and the game model were refined accordingly. Figure 14.2 shows the development progress of the game across three prototype levels.
Case 3—Augmented Reality Worker Support Systems
The third case of including human factors research and design methodologies in production engineering is the design and evaluation of a worker support system for carbon-fibre reinforced plastic (CFRP) manufacturing (Brauner et al. 2014).
The CFRP production process relies on manual production steps in which multiple layers of carbon fibre cloth have to be aligned in specific orientations. The overall stability of a CFRP part is sensitive to misalignments: a mismatch of 5° reduces the mechanical stability of a component by 50%. These defects can only be detected late in the production process, which incurs extra costs for the process steps between the origin of the error and its detection.

Fig. 14.2 Different development stages of the Supply Chain Simulation Game (left: paper prototype, first layout of the interface; centre: rapid prototype in a spreadsheet application for evaluating the game's model; right: final user interface of the game)
To increase the stability of the process, we designed a worker support system that reduces the variance in the orientation of the carbon fibre cloths. The system captures the orientation of the currently placed layer and provides auditory and visual feedback to the worker. Later iterations of the prototype may be realized as stationary installations in the assembly cell or on mobile augmented reality systems, such as Google Glass.
An evaluation of a rapid prototype of the system with the Wizard-of-Oz technique compared four different feedback modalities against each other (no feedback, auditory, visual, and combined feedback). The key finding is that providing no feedback yielded the highest speed and the lowest accuracy, while combined auditory and visual feedback led to the slowest speed and the highest accuracy. Hence, a target-oriented design of worker support systems can nudge workers to either increase the speed of the production process or to increase its accuracy (speed-accuracy trade-off).
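Such a comparison of feedback modalities can be summarized, in simplified form, as mean speed and mean misalignment per condition. The trial data below are hypothetical and do not reproduce the study's measurements:

```python
# Sketch: summarizing a speed-accuracy trade-off per feedback condition.
# All trial values are invented for illustration.
from collections import defaultdict

trials = [
    ("none",      8.1, 3.5), ("none",      7.9, 4.1),
    ("auditory",  9.4, 2.2), ("auditory",  9.0, 2.6),
    ("visual",    9.8, 1.9), ("visual",   10.1, 1.7),
    ("combined", 11.2, 0.6), ("combined", 11.6, 0.4),
]  # (feedback condition, layup time in s, misalignment in degrees)

def summarize(trials):
    """Group trials by condition and return (mean time, mean error)."""
    groups = defaultdict(list)
    for cond, t, err in trials:
        groups[cond].append((t, err))
    return {c: (sum(t for t, _ in g) / len(g),   # mean layup time
                sum(e for _, e in g) / len(g))   # mean misalignment
            for c, g in groups.items()}

summary = summarize(trials)
# "none" shows the shortest times but the largest misalignment,
# "combined" the opposite: the speed-accuracy trade-off.
```

Depending on whether a production step prioritizes throughput or precision, the feedback design can thus be chosen to nudge workers toward the desired end of the trade-off.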
Summarizing all three cases, utilizing methods from user-centred design and human factors research reveals a deeper understanding of the behaviour of human workers at different levels of production processes. The methods facilitate the development of better systems and applications for production processes in shorter time. Adequately designed systems can shift the trade-off between speed and accuracy through carefully designed feedback. Furthermore, some methods allow an explanation and prediction of individual differences in speed, accuracy, performance, and motivational factors, which might contribute either to better designed and better targeted software systems or to more specialized recruiting and training processes for employees.