5Es of usability

I recently had to revise an existing PowerPoint file on usability for a demo, and I’d like to share some of the key points here. I’d place the references in i:\ but at the moment it’s either full or write-protected. Anyway, the presentation describes 5 qualities — the 5Es — of a usable product: Effective, Efficient, Engaging, Error Tolerant, and Easy to Learn.

Effective. “Capable of producing an effective result.” This addresses whether the product is useful and helps users achieve their goals accurately. A failure here will frustrate users more than anything else.

Possible design approaches (as listed in the PowerPoint file):
– provide feedback on all critical actions
– eliminate opportunities for error
– provide sufficient information for user decisions
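As a toy illustration of the first two approaches, a prompt that accepts only values from a fixed list eliminates one class of input errors, while echoing the choice back provides feedback on the action. This is just a sketch of the idea, not anything from the presentation; `read` and `write` are injectable only so the function is easy to test.

```python
# Sketch: "eliminate opportunities for error" by restricting input to a
# known set of options, and "provide feedback" by confirming the choice.

def choose(prompt, options, read=input, write=print):
    """Ask until the answer is one of the allowed options, then confirm it."""
    while True:
        answer = read(f"{prompt} {options}: ").strip().lower()
        if answer in options:
            write(f"OK: {answer}.")  # feedback on the critical action
            return answer
        write(f"'{answer}' is not an option; choose one of {options}.")
```

The same principle is behind drop-down lists and radio buttons: the user cannot type an invalid value in the first place.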

Efficient. “Being effective without wasting time or effort or expense.” This refers to the speed (with accuracy) with which work can be done using the product. Aside from providing a streamlined workflow, we need to avoid anything that interrupts users’ work or slows them down.

Possible design approaches:
– design navigation for ideal and alternate work flows
– provide shortcuts
– use interaction styles and design widgets that support speed
– minimize extraneous elements on the screen
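The “provide shortcuts” point can be sketched as a tiny keymap where the same commands are reachable by menu or by keystroke, so experienced users can work faster. The command names and bindings here are made up for illustration.

```python
# Sketch: keyboard shortcuts mapped onto the same commands the menus call.

def save():
    return "saved"

def new_doc():
    return "new document"

SHORTCUTS = {"ctrl+s": save, "ctrl+n": new_doc}

def handle_key(key):
    """Dispatch a keystroke to its command, or ignore unbound keys."""
    action = SHORTCUTS.get(key.lower())
    return action() if action else None
```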

Engaging. “Attracting or delighting.” This concerns how pleasant, satisfying, or interesting an interface is to use. An engaging interface encourages use of the system; a poor one is a turn-off that makes the system painful to use.

Possible design approaches:
– use clear language and appropriate terminology
– set a helpful tone, with a level of conversation suitable for the users
– structure functions to match users’ tasks

Error Tolerant. This involves how well the product prevents errors and helps users recover from any that do occur. No one is perfect, so errors cannot be helped: you select the wrong option, click the wrong link, or get into the wrong elevator. When the user makes a mistake, the system must help lead them back on track; otherwise, you can expect a lot of support calls asking for data amendments.

Possible design approaches:
– transform “errors” into alternate paths
– use controls that aid in accurate selection
– be sure actions are easily reversible
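One common way to make actions easily reversible is to record, for each action, how to undo it, so the interface can offer Undo instead of relying on confirmation dialogs. A minimal sketch of that idea (names are my own, not from the presentation):

```python
# Sketch: each user action is stored as a (do, undo) pair so the most
# recent actions can be reversed, making "errors" cheap to recover from.

class UndoStack:
    def __init__(self):
        self._undos = []

    def perform(self, do, undo):
        """Run an action and remember how to reverse it."""
        do()
        self._undos.append(undo)

    def undo(self):
        """Reverse the most recent action, if any."""
        if self._undos:
            self._undos.pop()()
            return True
        return False

# Example: deleting an item becomes reversible.
items = ["report.doc", "draft.doc"]
stack = UndoStack()
stack.perform(lambda: items.remove("draft.doc"),
              lambda: items.append("draft.doc"))
stack.undo()  # the "deleted" item comes back
```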

Easy to Learn. This concerns how well the product supports both initial orientation and deeper learning. If you are using a product for the first time, it should be easy to figure out how to use it. And if you use it only at long intervals, it should not take much effort to relearn.

Possible Design Approaches:
– make the interface helpful, with minimal prompts and instructions provided where they are needed
– create “guided” interfaces for difficult or infrequent tasks
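A “guided” interface for an infrequent task is essentially a wizard: the task is broken into explicit steps, each with its own prompt, so there is little to remember or relearn. A minimal sketch, with made-up step names:

```python
# Sketch: a wizard walks the user through an infrequent task one
# explicit step at a time, collecting one answer per step.

STEPS = [
    ("Choose the data file", "file"),
    ("Pick the output format", "format"),
    ("Confirm and run the export", "confirm"),
]

def run_wizard(answer_for):
    """Walk through the steps; answer_for(prompt) supplies each answer."""
    collected = {}
    for prompt, key in STEPS:
        collected[key] = answer_for(prompt)
    return collected

# Example: answers scripted in a dict instead of asked interactively.
answers = {"Choose the data file": "sales.csv",
           "Pick the output format": "pdf",
           "Confirm and run the export": "yes"}
result = run_wizard(answers.get)
```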

Main reference: Whitney Quesenbery, “Balancing the 5Es: Usability,” Cutter IT Journal, 2004


JV usability test

Usability testing isn’t quite a usual thing here in [company name]. Sure, we can log usability comments in our buglists, but an actual usability test is a different thing altogether. When I first spotted a certain book on the subject, my initial thought was to apply it to designing for usability rather than testing for it.

Unexpectedly, the need to consider and emphasize usability, and to actually conduct a usability test, came up in JV. Fortunately, I came across the book on usability testing again. “Don’t Make Me Think” is quite comprehensive, and it’s only around 200 pages long. Although it mainly targets web applications and web sites, its main points can be adapted for JV’s shrink-wrapped product. After getting some ideas from the book and reading more about the topic on the net, I tried to apply the little that I know by running an actual set of usability test sessions.

I didn’t prepare a step-by-step, detailed script of how the sessions would go. I simply prepared an outline of the screens we needed to go through and the tasks that could be tried out, which allowed for flexibility. I essentially wanted to cover the book’s prescribed methods. One is “get it” testing, wherein you (the facilitator) present the screens to the usability testers and see whether they get it: What are their first impressions of the screen? Do they immediately get what the available controls are for, what the screen is for, and how it works? The other is “key task” testing, wherein you ask them to do certain tasks using the product and observe how well they do and whether they encounter any difficulties: Which parts took more time than expected? In which parts were they prone to errors? Which parts frustrated them?
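For the key-task part, even a very small record per participant and task helps answer those questions afterwards. A sketch of what I mean (the field names and numbers are illustrative, not from the book):

```python
# Sketch: per participant and task, note the time taken, errors
# observed, and free-form comments, then query for trouble spots.

from dataclasses import dataclass, field

@dataclass
class TaskResult:
    participant: str
    task: str
    minutes: float            # time to complete (or give up)
    errors: int = 0           # wrong clicks, wrong screens, etc.
    notes: list = field(default_factory=list)

def slow_tasks(results, expected_minutes):
    """Return (task, participant) pairs that took longer than expected."""
    return [(r.task, r.participant) for r in results
            if r.minutes > expected_minutes.get(r.task, float("inf"))]

results = [
    TaskResult("P1", "create invoice", minutes=6.0, errors=2),
    TaskResult("P2", "create invoice", minutes=3.0),
    TaskResult("P1", "print report", minutes=1.5),
]
trouble = slow_tasks(results, {"create invoice": 4.0})
# -> [('create invoice', 'P1')]
```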

I went through 5 sessions, with one non-JV participant each and an average of 2.5 hrs per session. During the sessions, I encouraged the usability testers to think out loud and to be candid in giving their feedback, negative or positive. Afterwards, we sifted through the comments and chose which ones we could and should apply before our deadline. We basically went for fixes requiring minimal effort but with obvious results, and for fixes to items we accepted as valid usability flaws that could frustrate our potential users. Frustration brought about by a difficult-to-use product is a deterrent: chances are you’d brand the provider as evil and think twice about using their products, or you’d complain a lot, or you’d look for something better.