Some of the best usability advice comes directly from users. In some recent usability testing, a user said something I found very perceptive.
Continue reading "Is your design Evolutionary or Revolutionary?" »
Posted by Frank Spillers on March 22, 2011 at 10:16 PM in Usability Methodology | Permalink | Comments (0) | TrackBack (0)
Tags: redesign, usability, user experience best practice, web design
If you read nothing else on Demystifying Usability last year, read these top 3 blog posts from 2010...
Continue reading "My top 3 User Experience Blog posts of 2010" »
Posted by Frank Spillers on January 26, 2011 at 08:29 PM in Usability Methodology | Permalink | Comments (0) | TrackBack (0)
Tags: top usability blog posts 2010, user experience blog, ux blog posts
How well you get your customers to their destinations with your design, and help users do what they need to do, is the difference that makes a difference in customer experience. If you are not doing it well, I guarantee you your competitors are or are trying to find a way to. In this post, I'll cover 5 strategic patterns that you need to lead the pack.
Posted by Frank Spillers on June 28, 2010 at 09:41 AM in Usability Methodology | Permalink | Comments (3) | TrackBack (0)
"Imitation is the highest form of flattery"...so the saying goes. The problem with borrowing design and user interface metaphors from other applications, websites or brands is that what works in one place might not work in another.
In this post we'll look at the pitfalls of copying design elements from other designs and what to do instead.
Continue reading "5 Problems with Borrowing Design Ideas from Apple (and others)" »
Posted by Frank Spillers on March 18, 2010 at 11:21 PM in Usability Methodology | Permalink | Comments (0) | TrackBack (0)
Maybe you have heard the saying "we'll take care of that in user training". The notion that the problems users have can be resolved by training is severely flawed. Yet entire departments rally around this belief, and worse, many companies seem to wrap product management around it.
Continue reading "How relying on 'user education' is a failed strategy" »
Posted by Frank Spillers on April 06, 2009 at 12:54 PM in Usability Methodology | Permalink | Comments (0) | TrackBack (0)
Tags: help systems, usability, user assistance, user experience
Next week is World Usability Day, a day when the usability community gets out to raise awareness and visibility about the field and goals of usability engineering and user centered design.
"World Usability Day 2006 promotes the value of usability engineering and user-centered design and the belief that every user has the responsibility to ask for things that work better". (from the WUD site)
Here are a couple of items related to World Usability Day that we are doing at Experience Dynamics, a leading usability and user centered design consultancy.
1. Special Event: Usability Testing methods - What are we observing and why?
Nov 14th 2006 12pm (Americas and Europe) and 6pm (Asia)
When conducting usability testing, what do you measure and why? How do you capture metrics, and what should you be measuring?
In this World Usability Day exclusive web seminar, we will discuss usability testing observation metrics and best practices.
Agenda:
1. Usability testing metrics: What are the things you should be measuring? How do you handle qualitative vs. quantitative data (e.g. satisfaction vs. effort)? (A small illustrative sketch follows this agenda.)
2. Usability testing observation best practices: Do you measure time on task every time? What do you need to capture metrics well when doing "quick and dirty" discount usability or "guerilla" testing, without undermining your own efforts?
3. New tool for usability testing logging: LiveLogger. We will review this usability test logging application, just released this week: the new LiveLogger interface, what the tool does, and how it captures and reports on usability testing metrics.
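To make the qualitative vs. quantitative distinction above concrete, here is a minimal sketch in Python of how an observer's test log might be reduced to metrics. The field names and numbers are hypothetical - this is an illustration, not LiveLogger:

```python
# Minimal sketch: reducing a usability test observer's log to metrics.
# All field names and values are invented for illustration.
from statistics import mean

# One record per participant for a single task.
observations = [
    {"success": True,  "seconds": 48, "satisfaction": 4,
     "note": "hesitated at top nav"},
    {"success": False, "seconds": 95, "satisfaction": 2,
     "note": "expected a 'Pricing' link; fell back to search"},
    {"success": True,  "seconds": 62, "satisfaction": 5, "note": ""},
]

completed = [o for o in observations if o["success"]]
print(f"Task success rate: {len(completed) / len(observations):.0%}")
# Time on task is usually reported for successful attempts only;
# failed attempts are better mined qualitatively (read the notes).
print(f"Mean time on task (successes): {mean(o['seconds'] for o in completed):.0f}s")
print(f"Mean satisfaction (1-5 scale): {mean(o['satisfaction'] for o in observations):.1f}")
```

The success rate and time on task are the countable (quantitative) part; the notes column is where the qualitative insight lives.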
Summary: In this 1 hour live web seminar (held twice on World Usability Day), we will review usability testing observation best practices.
Length: 60 minutes
Who should attend: People new to usability testing or who want to conduct rapid usability testing; usability managers; user experience teams; anyone responsible for user advocacy or usability testing.
Learn more about this exclusive web seminar
Register by simply sending an email (limited seating)
2. Multi-language Translations of The Importance of User Experience (poster).
The poster has now been translated into 20 languages and is being presented at numerous locations by usability consultants and practitioners around the globe (so I'm told) ;-)
The poster is available for download in the following languages (or for purchase in English here):
French; Dutch; Spanish; Bulgarian; Swedish; Portuguese (Brazilian); Chinese (Mandarin); Danish; Arabic; Greek; German; Hebrew; Portuguese; Italian; Norwegian; Finnish
Coming soon: Turkish, Polish, Russian, Icelandic.
The poster has also appeared on UX Mag as picked up from Angie McKaig's site, where she said:
"Great little visual overview of UX and why it's important. If I still worked in an office, I'd totally want this for my cubicle. And my boss's. And his boss's". (thanks Angie ;-)
Thanks and Best Wishes,
Frank Spillers
p.s. Also check out The Importance of User Experience in B2B Enterprise Environments (JPEG, altered version)
Posted by Frank Spillers on November 09, 2006 at 10:32 AM in Usability Methodology | Permalink | Comments (1) | TrackBack (0)
The Importance of User Experience
Here's a poster that reflects some thoughts about user experience...all of the bottom row items (outcomes of positive user experiences) in the poster are based on empirical research. Let's review some of that research, a brief glimpse at the science behind what the poster is communicating...
Details about the poster project and its translations into many languages appear below all these quotes (bottom of the post)...
Elements that contribute to a positive user experience: (the bottom row of the poster)
Loyalty > Trust > Perceived Credibility > Profitability > Intent to Return > Intent to Purchase > User Satisfaction > Word of Mouth
A few quotes that I think summarize the research nicely:
Loyalty
- "We discovered that visitors will return to websites to which they have no loyalty simply because they're familiar with the interface. As soon as someone directs the individual to a competitor's website and the individual determines the competitor's website is less painful to navigate, they're gone". Usability Studies 101: Brand Loyalty by Joseph Carrabis
- "Research findings point out that it takes more effort to develop new markets than to keep existing customers, and that existing customers tend to spend more money than new customers do. Repeat purchase behaviors occur after products are used. Hence, how to manage customer loyalty by means of product design becomes a critical issue to product designers and a key for company prosperity".A Preliminary Research on Product Design Strategies for Managing Customer Loyalty (PDF) Dr. Ding-Bang Luh, National Cheng Kung University, Taiwan
Trust
- "In short, it appears, as many suspect, that distrust of the Internet undermines e-commerce. Specifically, those who perceive greater risks on the Internet are less likely to shop online. In turn, perceptions of risks are associated with bad experiences online". Trust in the Internet: The Social Dynamics of an Experience Technology. (PDF) by William Dutton and Adrian Shepherd, Oxford University
- "The key finding is that trust is a long-term proposition that builds slowly as people use a site, get good results, and don't feel let down or cheated. In other words, true trust comes from a company's actual behavior towards customers experienced over an extended set of encounters. It's hard to build and easy to lose: a single violation of trust can destroy years of slowly accumulated credibility". "Trust or Bust: Communicating trustworthiness in web design" by Jakob Nielsen
Perceived Credibility
- "Guideline #7: Make your site easy to use -- and useful". Stanford Guidelines for Web Credibility, Stanford University
Profitability
- "The rule of thumb in many usability-aware organizations is that the cost-benefit ratio for usability is $1:$10-$100. Once a system is in development, correcting a problem costs 10 times as much as fixing the same problem in design. If the system has been released, it costs 100 times as much relative to fixing in design." (Gilb, 1988)
- "The average UI has some 40 flaws. Correcting the easiest 20 of these yields an average improvement in usability of 50%. The big win, however, occurs when usability is factored in from the beginning. This can yield efficiency improvements of over 700%." (Landauer, 1995)
- "IBM's Web presence has traditionally been made up of a difficult-to-navigate labyrinth of disparate subsites, but a redesign made it more cohesive and user-friendly. According to IBM, the massive redesign effort quickly paid dividends. The company said in the month after the February 1999 re-launch that traffic to the Shop IBM online store increased 120 percent, and sales went up 400 percent." (Battey, 1999) Selected quotes from: The ROI of Usability from UPA
- "In our first year we didn't spend a single dollar on advertising... the best dollars spent are those we use to improve the customer experience."- Jeff Bezos, Amazon.com
- "Improving user experience can increase both revenue and customer satisfaction while lowering costs." - "Get ROI from Design", Forrester Research, June 2001
Intention to Purchase, Intention to Return
In my experience, it is wise to measure this from a web analytics AND usability research perspective. Usability tests are a great way to expose a design to all measurements (ease of use, ease of understanding, user satisfaction, perceived pleasure, purchase intention and intent to return). Contact with users provides that *context* that pure web analytics measurements do not.
- "On the web, customer retention can be defined as whether or not a customer decides to return to a website. In terms of metrics, this can be quantified as the number of customers who a) intend to return and b) intend to purchase again from the website".
- "Understanding "intention of return and return purchase"hedges on one action: the decision the user makes based on their experience with the site, during and immediately after the session". How exactly is website usability, customer retention and brand perception linked? by Frank Spillers
User Satisfaction (or the measurability of it)
User satisfaction is often not studied in detail. It is usually just referred to in a paper or article. I am guilty of that, as is Jakob Nielsen in his writings.
- "Two important aspects of the overall consumer satisfaction are: (i) the level of satisfaction associated with the final chosen product (e.g., Day, 1984; Spreng et al., 1996), and (ii) the level of satisfaction associated with the purchasing process (e.g., Arnould and Price, 1993; Oliver, 1993). The former has been referred to as the product satisfaction and the latter has been referred to as the process satisfaction. The product satisfaction can be measured in two aspects: (i) a holistic satisfaction towards a chosen product (Spreng et al., 1996) and (ii) the specific levels of satisfaction towards the product attributes (Oliver, 1993). A typical means to evaluate product satisfaction is to measure rated consumers’ affective responses to the selected products (Cole and Balasubramanian, 1993; Westbrook, 1987; Mano and Oliver, 1993; Westbrook and Oliver, 1991)".
- Another excellent study shows the link between user satisfaction and the perception of pleasure and emotional aspects of a design (aka Emotion Design - see my earlier post), but it is too long to quote... "A systematic approach for coupling user satisfaction with product design" by Sung H. Han and Sang W. Hong, Ergonomics (2003), vol. 46, no. 13/14.
Word of Mouth
- "Jupiter Communications reports that word-of-mouth is second only to a strong offline brand in building consumer trust.Almost half of consumers surveyed by Jupiter, cite word-of-mouth as a key influence in their online shopping habits...
The average U.S. adult online shopper now tells about 12 other people including family, friends, relatives and co-workers about their online shopping experiences.Contrast this to the average of nine people who hear rave movie reviews or six who are told about great restaurants". Reported May 27, 1999, Iconocast
- "Word-of-Mouth expands the purchase cycle. Word-of-Mouth impacts customer value. Post-purchase actions drive evangelism. Advertising vs. Word-of-Moth "When Consumers Control the Message: When Real People are the Biggest Advertisers". Dave Evans et. Al. 2005, Word of Mouth Marketing Association conference slides. More info at the Word of Mouth organization: WOMMA
- "A recent survey by Opinion Research discovered that online shopping escapades start more tongues wagging than either movies or restaurants". Latest research (2006) on Word of Mouth impact: http://jcmc.indiana.edu/vol11/issue4/sun.html
About the Poster Project
I am really happy with how the poster turned out! Bryce Glass and I collaborated on this together. I was impressed by his earlier efforts to illustrate a "Flickr user model". Bryce's mastery of Illustrator is noteworthy, even if you think the poster is cluttered (if you do, take your time with it and don't take it too seriously - it's an inspiration piece).
How the poster was made
Users were interviewed, and Bryce's earlier design was analyzed for what works and what doesn't. We learned a lot from each other about visual design and the usability of flows... and the result is what you see above.
We have had some interesting feedback from users on Flickr. That feedback led me to have the poster printed to help teams evangelize usability! (Production and design costs were paid for by my company, Experience Dynamics.)
The point of the poster is to provide a learning piece (currently used by over a dozen universities throughout the world) and inspiration to design and development teams. Having this type of collateral on your wall might cause someone to actually pay closer attention to your efforts ;-)
This poster is an upgrade, if you will, to the UPA poster that hung on walls in team areas where I worked in the past and also the little IBM posters that you see around people's cubes.
Translations of the Poster
If you are interested in translating this poster, I will send you a free printed English version ;-) (Inquire about poster translation). Since putting out this shout-out a few weeks ago, many people have expressed interest in translating it - the results are below.
Download a free translation of this poster in the following languages: (see bottom left for latest additions)
- French
- Dutch
- Spanish
- Bulgarian
- Swedish
- Portuguese (Brazilian)
- Chinese (Simplified)
- Danish
- Arabic
- Greek
- German
Coming soon: Turkish, Polish, Hebrew, Portuguese, Russian.
Buy a Poster!
Buy a poster, and support the poster project.
Thanks and Best Wishes,
Frank Spillers
p.s. Is there a usability topic or theme that you would like to see clarified with a visual like this?
Posted by Frank Spillers on September 21, 2006 at 01:37 PM in Usability Methodology | Permalink | Comments (5) | TrackBack (4)
Tags: usability poster, user experience poster
Whom this applies to: Designers, Marketers, Developers, CEOs
If you design something for your company, organization or department, or help influence the direction of a design, it can become very difficult to separate yourself from that design. And chances are, most of the time you are not even aware of it!
This entry looks at why this seems to happen and what you can do about it (if anything at all).
Identifying the problem
One possible answer as to why we lose objectivity when we create or contribute to a design is rooted in the Gestalt Psychology phenomenon of figure and ground:
The phenomenon of figure and ground in perception has been explored extensively by gestalt psychologists. A classic example is that of a picture that either appears to be a light colored chalice on a dark background, or two dark faces against a light background, depending on what aspect of the picture is focused on as ‘figure’ and what is perceived as ‘ground’. (see Figure 1)
Figure 1: The "Vase Faces" illustrating the "Figure-Ground" phenomenon. Is it a face or a vase?
The closer you get to an object (figure), the more blurred it becomes, until it merges with its background (ground). Figure/ground reminds us that perception is relative, not absolute. Put another way: the more time you spend in internal company meetings discussing a design, the more blurred your objectivity becomes.
It's a symptom that is probably responsible for 95% of poor usability design choices.
Let's call it Heisenberg's Rule of Design: The closer you are to a design the less objective you become.
The more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa. --Heisenberg, uncertainty paper, 1927
When knowing "too much" can blind you
Every designer goes through this process of becoming consumed by his or her design ideas and assumptions dictated by style, taste or personal preference when creating the look and feel of an application.
Every developer experiences this when he or she tries to "skin the UI", code the GUI or add the User Interface to an application after a long day of coding.
Every marketer experiences this when he or she tries to map new features, new ideas, new ways to engage the customer to the functionality requirements.
Every business analyst experiences this when he or she tries to specify requirements based on business processes, system responses and user/group work flow.
Every VP or CEO experiences this when he or she drops in on the design team and projects the original vision, strategic direction, or business needs onto the design (mixed in with a little personal preference or as Jeroen van Erp put it at last year's Design and Emotion conference, design can be directed by "the CEO's wife").
To figure out how our perception blinds us, let's look at the stages of this "Forest for the Trees Syndrome"...
The Stages of "Forest for the Trees" Syndrome
Translation for International Readers: "Forest for the Trees" means you lose sight of the "big picture" because you are too close to the details.
Stage 1: Attached to the design
During this stage you become attached to your design. This is typically caused by spending too much time with the design and its refinements. In a sense the design becomes a part of you, and you naturally feel like defending it because it makes sense to you.
Motto: "I don't see anything wrong with it".
Action: Argue for the design.
Stage 2: Blinded by the design
During this stage you are so exposed to the design (company objectives, brand, issues, constraints, history) that you can't even see that you are biased. Having argued for the design, you are now completely bought into it and are completely blinded from any other information.
Motto: "This is the only way to go".
Action: Fight for the design.
Stage 3: Hypnotized by the design
During this stage, you are so far gone that the design has become second nature - like the furniture in your office. You don't question it; you don't even think about it or feel that anything is wrong. You can't look at the design with a fresh set of eyes either, because you are too patterned from overexposure; by now it seems perfectly fine and justified.
Motto: "This way seems normal".
Action: See any criticism as unfounded and unfair.
Is there a light at the end of this tunnel or are we stuck with tunnel vision?
The field of Human Computer Interaction (HCI) - from which Usability, Information Architecture and User Centered Design emerged - represents a way out. User-centered Design (aka UCD) combines a set of methods, techniques and approaches that creates more objectivity in design by leveraging user data, user needs, user issues, user insights and user advocacy. User-centered design is a methodology (popularized by Donald Norman, e.g. see his early book User Centered System Design) that triangulates technology-centered (systems) and marketing-centered (features) approaches with an outside look at the user's wants, needs, expectations, desires and requirements.
The User Centered Design approach (an industry standard usability methodology) provides several techniques to help "see the forest for the trees". From a usability standpoint, the forest is the user group. The trees are the features that sit between the application architecture and the user.
What does User Centered Design do to bring more objectivity to a design? Here are some of the ways we have found to help our clients at Experience Dynamics, a user centered design firm based in Portland, Oregon:
Usability reviews: Analyzing a design from the perspective of users and their tasks with best practices (research based)
Outcome: Advocate for user needs around confusing, annoying, frustrating or difficult to use design elements in order to make better decisions about the direction of the user experience.
Usability testing: Having customers assess a design to detect confusion points and uncover areas of the design that mismatch their expectations.
Outcome: Bring users' verbatim feedback from usability testing data directly into the design room.
Field Studies: Going to the user's natural environment and observing their world: seeing, hearing and feeling what they think, want, need...and learning how they construct and prioritize experiences.
Outcome: Incorporate research-based customer personas into the interaction design by seeing how Persona "X" or Persona "Y" will use the design.
Is User Centered Design a surefire way to prevent missing the forest for the trees?
No. Especially not with your own design. That is what motivated me to share this with you. Every time I work on a design for my own company (Experience Dynamics) I run into "Heisenberg's Rule of Design" or "Forest for the Trees Syndrome". At least at this point I know what it is.
Which reminds me to ask: is the glass half full or half empty?
Best Wishes,
Frank Spillers, MS
Posted by Frank Spillers on June 01, 2005 at 11:54 PM in Usability Methodology | Permalink | Comments (0)
Question: How many users do you need to test with for a usability test?
Answer 1: = 5 users (Jakob Nielsen and Thomas Landauer, 1993).
Answer 2: = 15 users (Laurie Faulkner, 2004), PDF file.
So, which is it, 5 or 15? And why are we arguing over an extra 10 users - doesn't one need to test with at least 100 users for statistical significance, accuracy and validity?
Statistical Validity in Usability Testing
Usability research is largely qualitative, or driven by insight (why users don't understand, or why they are confused). Qualitative research follows different research rules from quantitative research, and it is typical for the sample size to be low (e.g. 15 or 20 participants).
The end result of usability testing is not statistical validity per se (the outcome of quantitative research) but verification of insights and assumptions based on behavioral observation (the outcome of qualitative research).
Why don't we do large numbers in usability testing?
Behavior vs. Opinion
Usability research is behavior-driven: You observe what people do, not what they say.
In contrast, market research is largely opinion-driven: You ask people what they think and what they think they think. Because of this, you need big samples for market research (though focus groups bend this rule because they are somewhat qualitative). This is why phone or web surveys require hundreds or thousands of responses. Behavior-driven research is more predictable: basically, if 10 of 15 users are confused, you can assume that many more will be confused as well.
Example:
If you ask someone "what do you think of this homepage?", you will need several hundred responses to gain statistical validity, because you are validating opinion-driven data. Asking someone their opinion does not constitute usability requirements, since usability testing is about isolating "how they will actually use" the design, not just "what they think" of it.
If you give a small set of users a scenario that forces them to interact with home page elements, observe their behavior, and listen to their unsolicited reactions, you will get a better idea of what they think and need. The driver here is expectation (governed by cognitive factors) vs. opinion, which can be driven solely by emotional, social or personal factors.
Suggested Sample Sizes for Research
Corporate Usability Research:
Academic Usability Research:
Samples are usually larger, depending on the size, scope and objectives of the research (e.g. 15 users per segment, or 40-100 users in a usability test).
Jakob Nielsen's "test with 5 users" assumption
I think it is important to understand that Jakob Nielsen was trying to promote usability testing as a regular usability research activity in corporate environments. I believe he conducted this research (using a call center software application in the early 90's, rumor has it) in order to demystify the perceived complexity of setting up and running a usability test.
Remember, in the early 1990s only the hard-core research and development labs at Apple, Bell Labs, Microsoft, IBM and Sun were doing usability testing. In Nielsen's much respected and equally criticized article "Why You Only Need to Test With 5 Users" (written in 2000), he recommends (based on the early 1990s analysis) that instead of opting for higher accuracy, you go for the "fast and dirty" approach of conducting three tests instead of one "elaborate" study.
Later on in the article Nielsen says that the rule only applies if your users are comparable. If you have other segments or user types, you will need to test more users.
Translation: 5 users per audience segment or target user group - so for a website with 3 diverse segments, you will need 15 users for a single test.
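For reference, the curve behind the 5-user recommendation is the problem-discovery model Nielsen and Landauer published in 1993: the proportion of problems found by n users is 1 - (1 - L)^n, where L is the chance that a single user exposes a given problem (about 0.31 on average in their data, though it varies by project). A quick sketch:

```python
# Nielsen-Landauer problem-discovery model: share of usability problems
# found by n test users, assuming each user independently exposes a given
# problem with probability L (~0.31 on average across Nielsen's projects).
L = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> ~{found:.0%} of problems found")
```

With L = 0.31 this gives roughly 84% at 5 users - the source of the rule - but note how sensitive the curve is: for a rarer problem (say L = 0.1), 5 users find only about 41%.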
Magic Number 15 for Usability Testing...or Why 5 Users is Not Enough
Laurie Faulkner (PDF, 2004) has conducted new empirical research showing the benefits of increased sample sizes. In her study, "Beyond the five-user assumption: Benefits of increased sample sizes in usability testing", she wrote:
It is widely assumed that 5 participants suffice for usability testing. In this study, 60 users were tested and random sets of 5 or more were sampled from the whole, to demonstrate the risks of using only 5 participants and the benefits of using more. Some of the randomly selected sets of 5 participants found 99% of the problems; other sets found only 55%. With 10 users, the lowest percentage of problems revealed by any one set was increased to 80%, and with 20 users, to 95%.
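Faulkner's resampling approach is easy to reproduce in spirit with a small Monte Carlo simulation. The sketch below invents per-problem detection probabilities, whereas her study resampled real test data, so treat it as an illustration of the method, not of her numbers:

```python
# Monte Carlo sketch of Faulkner-style resampling: simulate a 60-user
# panel, then ask how much of the known problem set random subsets of
# n users uncover. Detection probabilities are invented.
import random

random.seed(1)
N_PROBLEMS, PANEL_SIZE, TRIALS = 100, 60, 500

# Each problem has its own chance of being hit by any one user.
p_detect = [random.uniform(0.05, 0.6) for _ in range(N_PROBLEMS)]
# users[u] = set of problem ids that user u ran into.
users = [{i for i, p in enumerate(p_detect) if random.random() < p}
         for _ in range(PANEL_SIZE)]
all_found = set().union(*users)

for n in (5, 10, 20):
    coverage = [len(set().union(*random.sample(users, n))) / len(all_found)
                for _ in range(TRIALS)]
    print(f"n={n:2d}: worst subset {min(coverage):.0%}, mean {sum(coverage)/len(coverage):.0%}")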
At Experience Dynamics (a usability consultancy), we have found that the cost savings of using fewer users are negligible. In other words, after you spend the time and money to set up, facilitate and report on the test, adding a few more users does not add "that much" time and money to the overall project.
The benefit you get from adding a few more users to the total (or, in the case of 5 users, doubling the amount) is far greater than what the small test with its "quick and dirty" results gives you. If you are running a series of usability tests or iterating your testing process (recommended for refinements based on evolving design decisions), you may want to choose a smaller number of users: I recommend no fewer than 8.
Best Wishes,
Frank Spillers, MS
Posted by Frank Spillers on January 21, 2005 at 05:29 PM in Usability Methodology | Permalink | Comments (2)
Eye-Tracking: following user eye patterns
Eye-tracking studies are a type of usability test in which concentrations of user gaze are recorded in thermal-like "heat zone maps" that track eye movements across the screen. Eye tracking makes usability testing look really interesting, sophisticated, high-tech and scientific, and eye-tracking data appears more valuable or empirical because it is recorded with gaze capture instruments.
The reality is that eye-tracking, while valuable, doesn't make usability testing any more powerful. It's what you do with the observations and the usability test data that counts.
Bottom line: If you are using eye-tracking, to make it meaningful, you must:
1. Have a trained observer or usability professional observing. Eye-tracking vendors are not necessarily experts in interpreting usability research. So users looked over there - who cares? What is motivating their gazing activity?
2. Focus on what it is you are trying to learn. What aspect of user behavior are you trying to understand? What will eye-tracking offer that other methods won't?
3. Match what users are actually doing and feeling with the eye-tracking data reports. Data is just data unless it is meaningful and informative.
4. Be aware of what eye-tracking is, what types of technologies exist and how your tests should be set up for maximum effectiveness. See the Problems Reported... section of this article below for discussion of this issue.
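For the curious, here is a minimal sketch in Python of the mechanics behind the "heat zone maps" mentioned at the top of this post: raw gaze samples binned into a coarse screen grid. Real eye-tracking software also clusters samples into fixations and weights them by duration; the coordinates below are invented:

```python
# Minimal gaze-to-heatmap sketch: bin (x, y) screen samples into a grid
# and count hits per cell. Sample coordinates are invented.
SCREEN_W, SCREEN_H, CELL = 1024, 768, 128  # pixels

gaze_samples = [(130, 90), (140, 95), (135, 88),   # a cluster near top-left
                (600, 400), (610, 390)]            # a cluster mid-page

grid = [[0] * (SCREEN_W // CELL) for _ in range(SCREEN_H // CELL)]
for x, y in gaze_samples:
    grid[y // CELL][x // CELL] += 1

for row in grid:  # crude text rendering of the "heat zones"
    print(" ".join(str(hits) for hits in row))
```

Even this toy version makes the article's point: the grid tells you where people looked, but says nothing about why - that interpretation is the observer's job.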
What Eye Tracking tells us about website usability
One of the big studies to come out this year was the Poynter Institute's "EyeTrack III" 2004 eye-tracking study, the third eye-tracking study conducted by Poynter since 1991.
Here's what Poynter has found from their eye-tracking studies relating to website content usability, page layout, navigation and design (my comment follows each finding):
1. Users spend a good deal of time initially looking at the top left and upper portion of the page before moving down and rightward.
Comment: Another thing to think about is how this user behavior mirrors search engine traffic (i.e. the Google bot visiting your site). Search engines read starting at the top left, then downward in a left-to-right column fashion.
2. Normal initial eye movement around the page focuses on the upper left portion of the screen.
Comment: Not surprising when you consider that users are patterned by all the other software and websites they use, which share a standard menu start point (e.g. File, Edit, View...). Note: For Japanese or Arabic it would be the mirror reverse.
3. Ads perform better in the left-hand column than in the right column of a page.
Comment: The right column is treated by users as an "afterthought" area and should be designed with that in mind.
4. Smaller type encourages focused viewing behavior.
Comment: This is especially true of older or elderly users. For the rest of your users, stick with 9-12 point sans serif (Arial, Helvetica, Verdana), averaging 10-11. FYI: Only developers appreciate miniature fonts!
5. Larger type promotes lighter scanning.
Comment: Most reading behavior consists of skimming and scanning. If you want to slow your users down, use smaller fonts in the body of your content. Use larger fonts to help them cover more territory.
6. Dominant headlines most often draw the eye first upon entering the page - especially in the upper left.
Comment: Remember, Poynter's focus was a newspaper website. However, bear this in mind for portal-type design and intranet design.
7. Users only look at a sub-headline if it engages them.
Comment: So make sub-headlines relevant, and of course make them the keyword phrases users and search engines use.
8. Navigation placed at the top of a homepage performed best.
Comment: Again, if you understand how users are patterned by the other tools they use (Word, IE, Outlook Express), the goodies are at the top of the page.
9. People's eyes typically scan lower portions of a page seeking something to grab their attention.
Comment: This seems consistent with "Information Foraging Theory", where users are said to hunt for information by "scent" - navigation and content of the highest value to them.
10. Shorter paragraphs performed better than longer ones.
Comment: Attention is clipped on the Internet. Short bursts of attention are the environment you are designing for at all times. Note: Longer product descriptions do better than shorter ones in ecommerce situations. As with all usability findings, context is key.
11. The standard one-column format performed better in terms of number of eye fixations.
Comment: Most users are so overwhelmed by the average web page that they deflect information as a coping strategy. It is the same phenomenon that occurs at a party when you focus on one conversation and ignore the other conversations around you by categorizing them as "noise".
12. Ads in the top and left portions of a homepage received the most eye fixations.
Comment: Interesting, but I wouldn't recommend putting ads there. *Just because they receive eye fixations doesn't mean they put a smile on the user's face*. This is one of the main points of this article!
13. Close proximity to popular editorial content really helped ads get seen.
Comment: One of the golden "rules" of usability is that anytime you satisfy the user's task (interest, goal, objective), you increase the likelihood - or create the conditions - that they will be open to other stimuli (advertising, cross-selling etc.)
14. Text ads were viewed the most intently of all types tested.
Comment: Text ads are popular because they are less distracting, camouflage well with the page, and are often not recognized as ads - and therefore not annoyances to the user. Oh, and since Google "pioneered" them, they are the de facto standard in effective web advertising.
15. Bigger ads had a better chance of being seen.
Comment: Also, repeat advertising on a page by the same company is used on many sites to reinforce exposure.
16. The bigger the image, the more time people took to look at it.
Comment: Using larger image file sizes is easier these days since 20% or more of users (USA) are on high-speed connections, but using thumbnails that link to large images is always a safer bet.
17. Clean, clear faces in images attract more eye fixations on homepages.
Comment: Humans are wired to recognize patterns, and hard-wired to respond to other human faces.
18. Higher recall of facts, names, and places occurred when people were presented with that information in text format.
Comment: Good recall depends on the level of relevancy, good copywriting and content usability (structure and display).
19. New, unfamiliar, conceptual information was more accurately recalled when participants received it in a multimedia graphic format.
Comment: It is known in the field of cognitive science that the more emotion involved in a learning transaction, the higher the retention and recall.
20. Story information about processes or procedures seemed to be comprehended well when presented using animation and text.
Comment: And the animation or text must be clear, easy to understand and in the language or conceptual world of the audience.
Types of Eye Tracking Technologies
1. Head-mounted tracker: Head-mounted tracking devices, as pictured at the start of this article (image from Poynter's earlier study), consist of a wire-frame helmet worn on the user's head to stabilize head and eye movement.
2. Gaze-detection: This technique, like head-mounted tracking, has been around since the mid-1990s and featured as an interface device in virtual reality research. What's new with gaze detection is the improved technology.
In the Poynter EyeTrack III study, the Stanford University-derived www.EyeTools.com used new eye-gaze technology developed by Sweden's www.Tobii.se. In this system, the computer screen itself detects, captures and tracks the user's eye gaze patterns. Other vendors, like Australia's www.SeeingMachines.com, Germany's www.Eye-Square.com or America's www.EyeTracking.com, offer headset and headset-free kits.
Measuring more sophisticated variables...
I expect the next 15 years will see an increase in physiological measurements being used in consumer and usability research. Not because so-called "traditional usability" techniques are inadequate, but because the "hard proof" offered by the new technology will help the field prove itself. Already, Eye-Square offers an additional skin-conductivity sensor to help detect factors such as shifts in sweat, temperature and heart rate. Eye Tracking Inc. offers pupil diameter measurement as another way to gauge and track emotional response.
Further movement can be seen in physiological research at Harvard University's lab run by Gerald Zaltman (author of the amazing book How Customers Think), where fMRI (functional magnetic resonance imaging) is being used to determine where data is processed in specific regions of the brain. Now we're talking "hard proof". It's not what they see but what they think about what they see!
Problems reported with using eye-tracking for Usability Testing
Many eye-tracking firms (including the original companies in the space) emerged from academic settings and moved toward selling commercial research services. The use of eye-tracking in usability research is fairly new, and it is recognized to lack empirical evidence regarding its effectiveness.
Eye-tracking technology changes every few years. However, most of the vendor websites do not offer detailed information about their specific technologies or approach (scientific basis, trials or double-blind studies with their technology). The attitude seems to be: "eye-tracking is cool, so just do it"!
Take a quick look at the research literature on eye-tracking and a different story emerges.
Schnipke and Todd (2000) at George Mason University reported extensive problems in properly collecting eye-tracking data, despite vendor training and a year's experience. They identified a host of obstacles, such as the ease of use of the system, calibration stability, pupil fluctuations and pupil condition quality, as well as the need to omit users who wear glasses. The authors used a remote eye-gaze system.
Goldberg et al. (2002) at Stanford University and Oracle Corporation identified two styles of eye-tracking studies: top-down (task oriented) and bottom-up (behavioral inferences). The researchers found that both styles must be adopted if eye-tracking is to become a routine usability methodology.
Pan et al. (2004) at Columbia University confirmed previous work by Rayner (1998), finding that both the individual characteristics of the viewer and the stimuli contribute to a viewer's eye movement behavior.
Eye-tracking seems to have a promising future. As the technology improves, so too will the research applications, methods and actionability of eye-tracking data. However, eye-tracking does not seem to be the holy grail of usability testing. The two biggest practical problems are calibration and complex reporting and analysis. In corporate usability settings, easy test set-up and quick design insight, guidance and recommendations are the most valuable elements of the usability research activity. If eye-tracking jeopardizes those elements, then it loses some appeal.
Scrutinizing the quality and end results of new eye-tracking technologies and methods will become a bigger challenge as academic spin-offs compete to sell eye-tracking services in a commercial capacity. As with any new technology, it is important to remember why you are using it and what it can do for you. The Poynter research, like many eye-tracking studies, provides another data point to validate or challenge your existing assumptions about user behavior. As Poynter's Howard Finberg put it, "Eyetrack III is a tool, not a solution".
The reality is that eye-tracking, while valuable, doesn't make usability testing any more powerful. It's what you do with the observations and the usability test data that counts.
Best Wishes,
Frank Spillers, MS
Posted by Frank Spillers on December 10, 2004 at 05:15 PM in Usability Methodology | Permalink | Comments (6)
Methodology Madness: or "caveat emptor" (buyer beware)
What you buy, or "buy into", influences how you think about something; how you represent that information in your mind is what cognitive scientists refer to as an "internal representation". Whether you buy usability services or not, at some point along the way you will encounter, or already have encountered, "methodology madness" - and maybe you don't even know it.
What is "methodology madness"?
Methodology madness in the usability services and products area refers to the espousing of convenient beliefs, "truths" and proclamations about the right way or new way to do things. The methodology is typically proprietary or masked, and is typically part of some form of sales pitch - either for a report or for a "customer experience management" solution. The "right or new way" implies that the approach is more refined, more advanced or a best practice.
Methodology madness is not new to usability consulting; in fact it exists in many industries. In terms of the usability industry, the problem with proprietary methodologies is that they are often inaccurate or distorted versions of the truth. The other obvious problem is that proprietary usability methodology, techniques, or research serves that company's interests, with a clear commercial bias.
A fair degree of usability nonsense seems to be emerging as the industry grows, and its main motive is sales and competitive differentiation. Further, it is hard to tell what is nonsense and what is valid - witness the fact that I have met many usability consultants who believed certain methodology myths. Like many of my colleagues, I have even fallen for some of the mythology because it sounds so convincing.
Let's face it: it's hard to think critically about something when it's packaged in a compelling way and important details are withheld in the name of confidentiality.
To help identify the madness, let's look at just a few common methodology myths still in circulation today:
Claim 1: "Usability testing must be conducted in the user's natural setting".
Source: This one comes from a leading provider of a semi-proprietary online customer experience solution that uses panels of users in their homes, traffic log data and analysts to generate reports.
Problem: There is no evidence for this claim in the Human Computer Interaction literature (the field usability comes from). While the claim makes sense, it dissolves when you trace it to the usability technique it was borrowed from: field studies. In field studies (aka task analysis, ethnographic studies, contextual interviews) it is absolutely essential that the user's environment be observed and assessed; the point is to note the interactions and influences of the environment. In usability testing, the point is to gauge whether the website or software works to expectations. This has very little to do with the user's setting, PC settings, etc.
Claim 2:"You need to test your website with hundreds of users".
Source: Same as above.
Problem: This belief sells statistics, not usability insights. Since the majority of people are most comfortable with statistical (quantitative) data, this claim again sounds convincing. The flaw, however, is that usability testing is a qualitative research technique (observation is the metric, not numbers). In qualitative research the research rules are different, and it is normal to have small sample sizes, e.g. 15-40 users. Usability testing is about observing actual user behavior and capturing expectations. Insight is the indicator, not statistical significance.
Claim 3: "If it takes more than three clicks, forget it".
Source: Unknown. It probably went around dozens of startups in Silicon Valley in the late Roaring '90s.
Problem: This "3 click rule" metric is e-commerce centric. Three clicks to the user destination is a metaphor for saying "don't take the user down the garden path to do something". The 3-click rule losses validity in other domains where users will naturally click 10 times to research an issue or purchase.
Claim 4: "Navigation is not important. Users don't care where they are in the website".
Source: A popular "customer experience" guru and evangelist.
Problem: This is a new one (Feb 16th 04), cycled back from something guru Jakob Nielsen said a few years ago to the effect that navigation was "overdone" on many sites. In the new version, we are told "consistency is NOT necessary" and does not apply to websites. Beyond falling down on the floor with laughter, the problem here is that while users don't appear to be consciously concerned with navigation, their unconscious behavior indicates otherwise. A simple fact that every seasoned usability practitioner knows is that consistency increases ease of use (whatever the medium). Again, the prescription references insights from "listening labs" (a reframed usability testing lab with questionable methodology of its own).
Skilled observation by professionals who understand consumer cognition can go a long way toward preventing sweeping generalizations about user behavior. For more on understanding unconscious customer behavior, see Gerald Zaltman's new book How Customers Think, where research shows physiological evidence of consumer behavior using brain scans.
Claim 5: "Website usability can be measured by proprietary software, agents or algorithms".
Source: a) a now defunct company and b) a new consultancy with a similar story.
Problem: Because usability involves the understanding of complex, dynamic, state-dependent cognition, it is virtually impossible to model user behavior with a bot, agent or algorithm. For example, how can a machine model semantic interpretation? It can't. I worked for a time with a company that claimed it had invented a "technique that models human perceptual, cognitive, and motor behavior, and is programmed with a set of characteristics and Web-browsing behavior that represents the way an average user sees, thinks, and moves through a Web site".
The claim is completely false and was disproved by a world authority at Xerox PARC. I even compared four automated tests to four equivalent real usability tests, and the automated algorithm approach consistently and dramatically failed. I realized that it is impossible to model how a user makes sense of a website or interprets content, or to predict their expectations and train of thought. Yet a new usability consultancy (which refuses to provide basic details about its methodology) has "invented" a proprietary algorithm for assessing competitive usability performance, capturing data such as scrolling, scanning, typing in data, reading text, clicking and annoyance. Sounds too good to be true. You don't get to find out unless you become their client, the company's President told me...
What is the antidote to methodology madness? As the Latin term "caveat emptor" (let the buyer beware) implies, the best thing you can do is think for yourself, do your homework, and compare and contrast the information. Ask a seasoned practitioner if you are not sure.
For the usability consulting industry, the agenda ought to include the following:
1) Clarifying and providing rigorous detail about proprietary methodologies (including peer reviewing).
2) Promoting integrity by serving prospects and clients with non-biased and non-partisan information.
3) Building and expanding upon existing agenda-free techniques and methods that serve the greater good of the community.
I personally don't believe that "new" usability methodologies should be kept proprietary under the auspices of commercial protocols. Best practice research is not like a new technology or invention. Usability is about understanding user behavior and there is nothing proprietary about human behavior.
I also don't think it serves the industry, or the pursuit of integrity for that matter, to claim that a technique is the "secret sauce". That's like one lawyer claiming a better methodology for practicing law than another attorney. There are people and companies who are competent and skilled, and there are those who are not.
Best Wishes,
FS
Posted by Frank Spillers on February 26, 2004 at 02:09 PM in Usability Methodology | Permalink | Comments (1)