No more mediocre UX, and that means we have to stop failing terribly first! Not shipping broken UX isn’t the ultimate goal; this piece is about how to prevent, or at least stop, horrible UX and abject failure.
http://www.measuringusability.com/five-users.php Compliments of Gabe Brown’s FB post.
Here is the takeaway: GET at least 5 GLOBAL users to interact with your UX, in physical or mockup form, early on; then another 5 on iteration 2, and so on. Each round of user testing gives you roughly an 85% chance of finding the problems in the interface you are presenting. The math is deceptively simple and the cost is low, so don’t overthink the problem and talk yourself out of something you *can* do so easily.
If you are interested in the derivation, or you want to investigate the math:
· Data from State Farm, IBM, and some PhD-level analysis all tell you the same thing: 5 users is most likely sufficient to catch most major errors in a UX
· If a problem lives in a corner case (only 5-10% of users will hit it), you need to aim for 18 test users to have an 85% chance of seeing it reported
· The article linked above demonstrates that somewhere beyond 30 or so test users, the incremental return on investment falls off. So do the right thing; it’s easy, and don’t overthink yourself out of the exercise or overcomplicate it.
· The best strategy is to bring in a set of field users and WATCH, don’t talk, as they attempt to use the existing UI or paper mockups. LISTEN. Don’t contaminate the lesson by demonstrating how to use the existing complex UI. Find the problems they hit, fix those problems, then bring in another set of users as part of an iterative design-and-test strategy. Although you’re never testing more than 5 users at a time, in total you might test 15 or 20 users. This is in fact what Nielsen recommends in his article: iterate, rather than test just 5 users in total.
· To drive up your certainty of covering core scenarios with non-failing UX, he suggests this strategy: pick a problem-occurrence percentage, say 20%, and a likelihood of discovery, say 85%; that means you’d plan on testing 9 users. After testing 9 users, you’d know you’ve seen most of the problems that affect 20% or more of users. If you need to be surer of the findings, increase the likelihood of discovery, for example to 95%; doing so increases your required sample size to 13.
· They also have a cool coin-flip experiment showing that 3 flips give you roughly an 85% chance of having seen a specific side (a head, say) at least once in those 3 flips. Same with dice. Both are binomial probability calculations. The article later discusses the Poisson distribution, which he calls an equivalent way to model the same thing.
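The sample sizes quoted in the bullets above can all be checked with the same at-least-once binomial formula. A quick sketch (the specific percentages are the ones quoted above; nothing else is assumed):

```python
# P(problem seen at least once in n users) = 1 - (1 - p)^n,
# where p is the fraction of users who hit the problem.
def discovery_chance(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(round(discovery_chance(0.10, 18), 3))  # corner case hit by 10% of users, 18 testers -> ~0.85
print(round(discovery_chance(0.20, 9), 3))   # 20% occurrence, 9 testers  -> ~0.87
print(round(discovery_chance(0.20, 13), 3))  # 20% occurrence, 13 testers -> ~0.95
print(round(discovery_chance(0.50, 3), 3))   # the 3-coin-flip case       -> 0.875
```

Note the coin-flip case comes out at 87.5%, which is the "roughly 85%" figure the article uses for intuition.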
About 3 cool real-world examples are included, and the Hertz.com failure is the funniest (screenshot below).