It's human nature to seek the quick answer, the simple solution, or the simple rules; to want to believe in guarantees. We want to believe that a new golf club will take several strokes off our game, or that a pill will help us lose weight without having to diet, or that an apple a day will keep the doctor away. The same is true when designing and testing. We want to believe that five users can uncover 80% of our usability issues, or that a website will be usable if everything is three clicks from the home screen, or that an open card sort will generate the information architecture for a website. Of course, none of these are true, but let's focus on the last one.
A card sort is a traditional tool the design community uses to address website organization. In an open card sort, participants are given content on index cards and asked to arrange the cards into groups and subgroups, name the groups, and thereby create a structure that will hold all of the content. Claims about how well this approach generates a final architecture vary. You can find references claiming that fifteen people will show a very high correlation of .90. You can read that the task can be done on the web in an unmoderated study, and that the results can be analyzed with automated cluster analysis software. Sounds simple, sounds great, sounds too good to be true. And it is too good to be true. If it were true, all information architectures, menu structures, and other things that need to be grouped would be perfect.
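To make the "automated cluster analysis" concrete, here is a minimal sketch of what such software typically does with card-sort data: count how often each pair of cards lands in the same group, turn those counts into distances, and run hierarchical clustering. The four cards and three participants' sorts below are invented for illustration; real studies have far more of both.

```python
# Sketch of automated cluster analysis of open card-sort data.
# Cards and sorts are made-up illustration data, not a real study.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["shirts", "coats", "boots", "sandals"]
# Each participant's sort: a list of groups (each group a set of cards).
sorts = [
    [{"shirts", "coats"}, {"boots", "sandals"}],  # by garment type
    [{"shirts", "sandals"}, {"coats", "boots"}],  # by season
    [{"shirts", "coats"}, {"boots", "sandals"}],  # by garment type again
]

n = len(cards)
index = {c: i for i, c in enumerate(cards)}

# Co-occurrence matrix: how many participants grouped each pair together.
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            co[index[a], index[b]] += 1
            co[index[b], index[a]] += 1

# Distance = fraction of participants who did NOT group the pair together.
dist = 1.0 - co / len(sorts)
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering, cut into two clusters.
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(cards, labels)))
```

The point of the essay stands even here: the software happily reports one clustering, but the data above actually contain two competing structures (garment type and season), and the majority vote simply buries the minority one. Only a person watching the sorts would notice that.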
Open card sorts are for generating data, not for providing answers. Here are three truisms of an open card sort:
Even more significant than these issues is that there is no single structure that works when sorting items; multiple structures could work. Clothes sorted by men, women, and children may be just as easy to find as clothes sorted by summer, winter, fall, and bad weather. Health data organized by organ might be just as usable as the same data sorted by disease stage. Mixing the data from the different possible structures an open card sort can generate not only makes no sense; it can easily lose the real finding of the sort: that multiple workable structures were produced.
Reaching a very high correlation is not really our goal, and cluster analysis is not all that useful. Learning is our goal. We're looking for ideas. We want to learn why our participants grouped things the way they did. We need to decide whether their sort will scale to all of the content and whether the approach will work for everyone.
Even those who suggest such enticing data as "fifteen people will generate a .95 correlation" say that listening to the comments of participants provides valuable insight. There's no magic formula, number of participants, or automated analysis that will prevent us from having to think. Maybe it will take fifteen to gain the insight needed (if you have the money, that's as good a number to shoot for as any), but maybe you'll learn what you need from five or ten participants. It might even take more. In any case, we need to observe, interact, listen, learn, and (most importantly) think. Yes, it's harder than accepting the analysis of some piece of software, or believing that correlation means we have the correct answers, or that correlation even exists just because we have fifteen people perform the sorting task.
The story goes that Einstein was once asked by a reporter if they could see his lab. He took a pen out of his pocket and pointed at his head. The value of thinking should not be underrated.