I still hear about people conducting surveys of users as the sole or primary means of evaluating their product's design and usability. Surveying users seems simple enough, particularly on the web: your users are already within reach and your product is fresh in their minds, so asking them to complete a quick survey seems like a no-brainer.

I have also been told that the comments users make during a user-based usability evaluation are the "real findings" from that form of evaluation. Writing down those comments is certainly easy, and they come across as compelling information to act on. After all, it was your user who made the comment, so it is hard to consider ignoring it. But is relying on user comments really getting us what we want? What users tell us is what they are aware of, but that is not the whole story. They are also influenced by unconscious thoughts and by things they don't consciously notice, and it is the combination of the two that shapes their behavior and performance.

Consider a survey conducted by Consumers Union, the nonprofit publisher of Consumer Reports. Through its Web Watch program, the well-known magazine surveyed 2,700 web users on the subject of medical websites. In the report, the authors stated that consumers rely too much on "style over substance" and care too much about a site's "look and feel." They stated that consumers "paid far more attention to superficial aspects of the information – the graphics or visual cues – than the content." As a result, the authors of the study stated: "Consumers should be a little more savvy when they go online" and that consumers "may be exposing themselves to misleading or biased information." This does sound pretty scary.

The authors of this study even compared the consumers' self-reported criteria to the criteria supposedly used by healthcare professionals to evaluate the same websites. The professionals reported that they cared more about the content and the credentials of the authors than about the "superficial" elements of the websites.

The problem is that all of the data collected is self-reported. It reflects the respondents' assumptions about the criteria they use, not necessarily the full set of criteria actually at work. If we assume that there are influences outside of users' awareness that could be affecting performance, perhaps this survey's "finding" is not as bad as it sounds. We need to look beyond self-reported data and look at performance as well. Then we can better understand the influence of consumers' beliefs about the criteria they use for evaluating our products.

Luckily, performance data was also provided when the survey results were reported. In addition to reporting the criteria supposedly used, both groups of users (consumers and healthcare professionals) ranked the ten medical websites that were part of the study. The results are interesting:

- Consumers rated the Mayo Clinic site as their number one choice; healthcare professionals rated it number two.
- Healthcare professionals rated the National Institutes of Health website as their number one choice, while consumers rated it number three (stating that they ranked it lower because the information on it was a bit overwhelming).
- Consumers rated the Aetna InteliHealth website as their number two choice; healthcare professionals rated it number four.
- Consumers rated MDchoice (since rebranded as HealthCentral) as their number four choice, whereas healthcare professionals rated the same site number five.
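One way to get a feel for how far apart the two groups' rankings sit is to compute a rank correlation over the four sites quoted above. The sketch below is purely my own illustration, not an analysis from the study itself: it re-ranks each group's positions within just these four sites and applies the Spearman formula for untied ranks.

```python
# Positions reported in the survey excerpt (within the full ten-site
# list; only four sites are quoted here).
consumer = {"Mayo Clinic": 1, "NIH": 3, "Aetna InteliHealth": 2, "MDchoice": 4}
professional = {"Mayo Clinic": 2, "NIH": 1, "Aetna InteliHealth": 4, "MDchoice": 5}

def rerank(scores):
    """Convert raw list positions into 1..n ranks over just these sites."""
    ordered = sorted(scores, key=scores.get)
    return {site: i + 1 for i, site in enumerate(ordered)}

def spearman(a, b):
    """Spearman rank correlation (no ties) between two rank dicts."""
    n = len(a)
    d_sq = sum((a[s] - b[s]) ** 2 for s in a)
    return 1 - 6 * d_sq / (n * (n * n - 1))

rho = spearman(rerank(consumer), rerank(professional))
print(f"Spearman rho = {rho:.2f}")  # → 0.40: positive but far from perfect
```

A rho of 1.0 would mean the two groups ordered the sites identically; the moderate value here reflects the visible disagreements, such as the NIH site being first for professionals but third for consumers.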