by Steve Krug
I, Michael Parker, own this book and took these notes to further my own learning. If you enjoy these notes, please purchase the book!
- pg 8: Small, informal, do-it-yourself usability testing is also called discount usability testing.
- pg 13: Quantitative tests prove things through rigorous measurement; qualitative tests simply acquire insights to improve what you're building.
- pg 16: The serious problems are usually easy to find, so focus on those; you can get rid of “hidden” ones when you have more resources.
- pg 19: Web analytics can tell you what people are doing on your site, but they can't tell you why.
- pg 24: Try testing a morning a month; the shortness simplifies recruiting, while the recurrence eliminates having to decide when to test as you just test whatever you have.
- pg 28: Do all the tests back-to-back in a half day so the debrief can be conducted with details still fresh in everyone's mind.
- pg 32: The worse shape something is in, the less you want to show it, but the more you can benefit if you do.
- pg 33: Usability test on other sites with the same kind of content or features you'll implement, and then learn from their mistakes.
- pg 35: If you sketch on a napkin, ask someone what they think it is; don't ask for their opinion or their feedback.
- pg 37: Visual design can introduce usability problems, so test “comps,” or visual treatments of your wireframes.
- pg 41: Requiring participants with domain knowledge excludes first-time and new users; instead, recruit loosely and grade on a curve.
- pg 43: You need three participants; any more yields diminishing returns, increases tedium, and surfaces more nits that make triaging difficult.
- pg 47: If you put out an invitation for testing, screen respondents to confirm they really meet your qualifications and are comfortable talking out loud.
- pg 49: Have a substitute participant “on-call” in case a participant doesn't show up, so that you're not wasting your observers' time.
- pg 49: Don't use participants again in a later round, as they already know too much.
- pg 52: A good “filler” task for a participant who finishes early is to have them do a task on a competitor's site.
- pg 55: Phrase tasks without using uncommon or unique words that appear on the screen, or else it's a simple game of word-finding.
- pg 57: On the day of testing, checklists get mundane details out of your head so you can give your full attention to the participant.
- pg 63: The facilitator must tell the participants what to do and keep them moving without answering their questions, and use the think-aloud protocol: have them verbalize their thoughts as they work.
- pg 67: Set the screen resolution to something the average user likely has, not what a developer uses.
- pg 69: Turn off software that might interrupt the test, add bookmarks for any pages you'll need to open, and clear all browsing data.
- pg 75: Start by asking them what they make of the home page, such as what you can do there; don't ask for their opinion of it.
- pg 77: If the participant is miserable, the task is taking too long, or you aren't learning anything more, move on to the next task.
- pg 78: Ask substantial questions such as why the participant made particular choices at the end, after the tasks are completed, so you don't interrupt the flow or accidentally give clues.
- pg 79: Users aren't designers, and they don't always know what they need, or even what they really want.
- pg 82: Ask what the participant is thinking only if you're not entirely sure; don't ask on a timer, interrupting them while they're reading or making progress.
- pg 86: It's okay to be persistent and a bit ruthless; you're paying the participant for their time, and if you don't get what you need, you're wasting everyone's time.
- pg 92: Have key stakeholders watch tests live instead of recordings; it's more compelling, and you benefit from the shared group experience and comparing observations.
- pg 93: Observers should take notes, find the three most important usability problems they saw in the session, and suggest questions for the facilitator to ask the participant.
- pg 100: The only time a team should be troubled by testing is when it's done so late in development that there's no time to fix any problems.
- pg 104: You'll have more usability problems than resources to fix, so focus ruthlessly on the most serious problems.
- pg 106: In the debriefing, list all observed problems, identify the ten worst, and then discuss what simple fixes to make within the next month.
- pg 111: Make the smallest and simplest change you can; you can spend the time to implement the perfect solution later.
- pg 115: If a tweak doesn't work, try a stronger version of it; if that doesn't work, try another tweak before resorting to redesigning.
- pg 117: If you're adding something to address a usability problem, question it; usually removing anything distracting is better.
- pg 123: Someone who starts off lost on your web site will stay lost; don't let too many stakeholders add too many disorienting bits.
- pg 124: Subtle visual distinctions that work in print don't work on the web; focus on making the important parts really stand out.
- pg 132: If you need buy-in from key stakeholders, ROI arguments are weak; make them observers in a live usability test.
- pg 138: You can't ask questions or probe with unmoderated remote testing, but it is cheap and still helpful.
- pg 139: Don't try any form of remote testing until you have some in-person tests under your belt.