TESTING IN THE ABSENCE OF USERS

Part of the document: John Wiley & Sons, Mobile Interaction Design, Feb 2006 (pages 132–136)

Regardless of whether they are brought into the lab, or followed in the field, involving users is costly – it takes up your and their time, and you often have to pay them to participate. Some evaluation techniques do not need a user to be present and can provide rapid insights into the design qualities.

Scott MacKenzie demonstrates, for example, the usefulness of analytical models in determining the efficiency of different text entry methods for mobile devices (MacKenzie, 2002). He shows how to calculate the KSPC (keystrokes per character), which is the average number of basic input actions, such as touching a key – like a mobile phone button, for instance – or gesturing with a pen-like stylus, needed to enter a text character. We won't explain here how the calculation is made; the point is that while these sorts of model approaches do not replace user testing, they can cut down wasteful repeated studies. As MacKenzie puts it, "… potential text entry techniques can undergo analysis, comparison and redesign prior to labor-intensive implementations and evaluations."
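To give a flavour of the idea, the core of KSPC is a frequency-weighted average of keystrokes per character. The sketch below is a toy illustration for multi-tap only: it ignores timeouts, the space key and the corpus word frequencies that MacKenzie's full model weighs in, and the sample text and helper names are ours, not his.

```python
# Toy KSPC sketch for multi-tap entry (illustrative only; not MacKenzie's
# full corpus-weighted model).
# Idea: KSPC = total keystrokes needed / total characters entered.

# Standard phone keypad: a letter's position on its key = presses needed.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

# Map each letter to its multi-tap press count (e.g. 'c' -> 3 presses of '2').
MULTITAP_PRESSES = {ch: pos + 1
                    for letters in KEYPAD.values()
                    for pos, ch in enumerate(letters)}

def kspc(text: str) -> float:
    """Average keystrokes per character for multi-tap over a sample text."""
    presses = [MULTITAP_PRESSES[ch] for ch in text if ch in MULTITAP_PRESSES]
    return sum(presses) / len(presses)

print(round(kspc("hello world"), 3))  # → 2.4
```

A sample of running text would sit near the 2.03 figure in Table 4.1; the short phrase here just shows the mechanics of the calculation.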

In passing, if you are interested in text-entry efficiency, it is worth looking at the KSPC for a range of common entry mechanisms, as shown in Table 4.1. As MacKenzie notes, it is likely that the methods with the lowest KSPC – the least effort for users – will give the best throughput of characters; from his results, then, the best approach is the unistroke method (gestures with a stylus) used together with word-level prediction.

TABLE 4.1

Keystrokes per character (KSPC) comparison of text entry methods (adapted from MacKenzie, 2002). The smaller the value of KSPC, the more efficient the input technique

Interaction technique               KSPC
Multi-tap                           2.0342
T9                                  1.0072
QWERTY                              1
Word prediction (keypad, n = 10)∗   0.8132
Word prediction (keypad, n = 2)     0.7086
Word prediction (stylus, n = 10)    0.5000

∗ n is the number of alternative word options displayed in schemes that use word prediction.

In addition to model-based, performance-predictive schemes, other popular non-user-based testing methods include the heuristic review, where the design is checked against good design rules-of-thumb, and the expert-based review, where someone who has lots of experience in designing mobile interactions uses their knowledge of what works to spot weaknesses in the design (see Box 4.6).

BOX 4.6 EXPERT INSIGHTS

An interview with Scott Jenson, Head of Jenson Design (a product design consultancy) and former director of product design at Symbian

MJ: How did you get into mobile design?

SJ: I was working for Apple and had the chance to be on the Newton concept. Then, when I moved to London, I knew I just had to get into the mobile phone area as that’s where all the exciting design was – and is – going on. So, I ended up leading the DesignLab, working on projects such as the QUARTZ, which turned into the interface for devices like the Sony Ericsson P800 and P900.

MJ: What did the DesignLab do?

SJ: There was a lot of usability testing and design refinement. So, we had three full-time people whose job was to create functional prototypes using Macromedia Director; then we’d review, user-test and change these designs. Sometimes this included taking the prototypes to different countries to see how they worked, or not, with a broad range of people. We’d also work on trying to define new concepts for mobile interaction and services and in these cases we’d think more about the design semantics.

MJ: ‘Semantics’?

SJ: Yes, design is about semantics and syntax. First, you need to see what people do and what they want – the semantics – and then you have to find a way of making this possible – the syntax. I was involved in thinking about lots of early WAP services; the trouble was that the industry in general focused on and promoted a sort of syntax-focused design; worse still, lots of the offerings had many syntax errors, with users having to carry out many steps just to get a simple piece of information.

MJ: In doing all these activities within a fast-paced, commercial environment, what did you learn about design?

SJ: Well, first, design needs to know its place. It's not the only thing that's needed for a successful product – functionality, price, marketing, fashion, brand: lots of things have a major impact too. As part of this 'team player' attitude, you have to accept that design is the art of compromise.

MJ: Can you give some examples of this pragmatic approach?

SJ: It comes in all sorts of forms. So, take the Sony Ericsson P800/P900 – we wanted to build a device that was a PDA and phone combined. Now Palm already had a very successful UI concept that really worked, but we couldn't over-copy this as we wanted a distinct but also effective interface. Then, there were the rapid changes and additions to the platforms we were working with – so, we started with one set of devices in mind and then a company came along with another platform with a smaller display and a jog-wheel instead of a stylus. And, of course, when the design goes into production there are lots of times when something just can't be done given the time and technical constraints; as a designer you then need to work with the engineers to try and retain as much of your idea as possible. So, for example, on a phone platform we worked on, for technical reasons, we had to have a 'sync' folder for email synchronized with the desktop client, and an 'inbox' for messages received when mobile. What we actually wanted was a single view of the 'inbox' combining all the user's mail; so, we had to produce a 'hack' with a process that read mail from both sources and put it in a third on the device.

MJ: How did the DesignLab work to be what you’ve called a ‘team player’ within the company?

SJ: Design is an inherently social process. From the very start we'd ensure we had buy-in from key groups – our initial brainstorm sessions would, for instance, include software engineers. Then, when we produced the final design, before the development began, we'd draw up a 'design manifesto' spelling out our interaction concept in terms of purpose, functionality, marketing and the like, and get this signed off by a 'champion' from marketing, the technical team and the DesignLab.

MJ: Now that you’ve moved into independent consulting, what sorts of approach do you get from potential clients?

SJ: A whole range. The classic case is a company that phones you up and says ‘‘we’ve got two weeks to ship, can you look at the icons’’. At the other end of the extreme, I’ll be asked to generate ‘blue-skies’ concepts for future services.

MJ: When you’re shown concepts or designs, how do you assess them?

SJ: First, I'll be asking "what's the value of this?" – that is, will people really want it? Take the iPod music player. It's really easy to articulate what the value is – "it lets me take all my music with me"; compare that with mobile Instant Messaging – "it allows me to be online and show you my availability if I've set it". So, a quick test is to try and sum up the value in one sentence.

Next, I assess it from my strong belief in simplicity. I was looking at a service recently that had all sorts of functionality with many, many nested menus. I sat down with the design team and tried to figure out the three key things that people wanted to do; then we hid all the rest of the functionality for the power users.

MJ: When you see UI ‘bugs’ in the systems, how do you help companies work on them?

SJ: First, you need to judge how important the bug is. Some usability people can be too pedantic and perfectionist: if you look at any mobile, you'll find lots and lots of problems, but not all of them will have a noticeable impact on the users' experience. One technique I use is to think in terms of a 'red card' (this bug needs sorting out) and a 'yellow card' (it's a problem, but a lower-priority one). Once you've spotted the problems, it depends how near to shipping the product is – if it's late, then quick hacks, help text and manuals can be a fix until version 2; if it's early on, you then need to persuade the developers why change is needed. ■

4.6 ITERATIVE DEVELOPMENT

One of the first software development models to be developed was called the Waterfall Model (Royce, 1970). As water leaps off a ledge it falls quickly downwards, dragged inexorably to the base; similarly, in the waterfall software model, development flows top-down, the system design moving from the abstract to the implementation through a series of well-defined phases. As each phase is completed it is 'signed off' and the next is commenced.

With interaction design there are some dependencies between the activities outlined in the previous sections. So it would be unwise to attempt to generate scenarios, the stories of use, before you have been immersed in the users’ world, for instance; and evaluation can obviously happen only after there is something to evaluate.

However, design is certainly not a case of following a simple checklist, moving with certainty from problem to solution. The design process does not proceed in a one-way, clear-cut fashion. Rather, many prototypes are built, designs being formed and reformed over and over again. Your skills in understanding users' needs and your ability to shape and manage the design space, as well as your prototyping and evaluation expertise, may all be needed as each new design emerges.

In many of the projects we've worked on, then, the development process has proceeded thus. First, we do some ethnography and other types of in-situ study. Back at the office, we sift through our observations, extracting themes and issues. We call in users to interview, cross-checking with what we found earlier, and we return to the field to do more focused studies, guided by the initial fieldwork. Next come some prototypes that we take back to the field and put in the hands of potential users, both to check our assumptions and to further draw out possible opportunities and needs (the prototypes here thus acting as a fact-finding tool). More prototypes follow which are critiqued by us, other expert designers and users back at the lab. We usually develop a couple of competing proposals and get them to a status that allows them to be evaluated in a controlled test. After refining the 'winning' system further, we might then deploy it in the field to see how it performs in context over time.

Although this oversimplifies the case a little, engineering and interaction design differ in the way they approach problems and solutions. In engineering, the aim is to take a clearly defined problem – say a bridge that can withstand gale-force winds and heavy vehicles – and produce an effective, robust, stress-tested solution. As an interaction designer, though, you will often find yourself trying to form a view about what the problem actually is as much as producing the solution. As designs develop you might well experience self-doubt, needing to reassess your certainties about what your users value, leading you to go back to further field observations, for example.

4.7 MULTIPLE VIEWPOINTS

Interaction design is a stimulating and demanding job. Part of the interest, and some frustration, results from the need to employ diverse techniques, to accommodate discordant attitudes and viewpoints, and to actively involve people with differing backgrounds and sets of skills.

