Saturday, October 9, 2010

Research 101

A friend just shared this Penn & Teller video with me. WARNING - NOT SAFE TO PLAY ON THE LOUD SPEAKERS AT WORK. While not completely academically correct, or a thorough two-sided investigation (not that there are ever many of those), it does quickly and easily explain some basics of survey writing and analysis, and how it can all go wrong. And it is entertaining. Some very basic things to watch for when writing a survey:

  1. Non-response bias. Who does not answer can sometimes be more telling than who does. Quick example: do a phone survey Monday night on TV watching habits. The results? Hardly anyone watches football anymore! No, not really - the people watching football did not answer the phone. You have a big non-response problem. That is why, for all surveys, you want to manage replicates (how much sample you release), contact design (how many times and over how many days you try to reach a record before you call it dead) and always take non-response bias into account.
  2. Scales: Likert scales (good Wikipedia overview here) are the standard. But, even how you use these varies widely. A few quick comments:
    1. Using 4-point, 5-point, 7-point, 10-point scales. Always a hot debate here, quite enthralling and very sexy. But, in a nutshell, I favor 5-point for several reasons. When you use 10-point, you typically roll up "top-box" as 9+10, so why not just go with 5 and make it easier? 10-point causes more respondent fatigue - larger survey questions (literally twice as many options), etc. Additionally, what is the difference between an 8 and a 9? Choosing between a 4 and a 5 seems a lot clearer from an analysis standpoint and from a respondent standpoint.
    2. End-labeled or label all points? If you can, always label all points. The human mind is much better and more consistent at deciding between "somewhat satisfied" and "very satisfied" than between a "3" and a "4". This is another argument for keeping scales short - you can't anchor every point on a 10-point scale, but you can on a 5-point scale.
    3. Neutral point? Typically I say no. The whole point of a scale is to differentiate respondents - to separate the most satisfied from the most dissatisfied and find those levers you can pull as a business to make a difference. Allowing a neutral typically wastes a scale point. Instead I prefer a positively weighted 5-point scale (e.g., not at all satisfied, not very satisfied, satisfied, very satisfied, extremely satisfied). We want to make that "top box" the cream of the crop; hard for people to check, but if someone checks it you know you want to figure out how to get more of them. The goal is to move your 3s and 4s to this 5 spot. And, to figure out how to limit the bleeding with your 1s and 2s.
  3. Don't Know: The inclusion of "Don't Know" as an answer choice can drastically skew survey results. The classic example is in voter polls. If you only give the option of the two candidates, with no "undecided" option, your survey results can be drastically different than if you allow for an "undecided" option. Last-minute voter turnout drives, the results of primary races, the overall propensity for "undecided" voters to lean towards a given party - all throw off results depending on the inclusion or exclusion of this one "little" answer choice.
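The football example in point 1 is easy to make concrete with a quick simulation. All the numbers here are hypothetical - a 40% true football-watching rate, and watchers answering the phone far less often than everyone else - but the mechanism is exactly the non-response bias described above:

```python
import random

random.seed(42)

POPULATION = 100_000
TRUE_FOOTBALL_RATE = 0.40    # hypothetical: 40% of households watch the game
ANSWER_RATE_FOOTBALL = 0.10  # watchers rarely pick up during the game
ANSWER_RATE_OTHER = 0.50     # everyone else answers half the time

responses = []
for _ in range(POPULATION):
    watches = random.random() < TRUE_FOOTBALL_RATE
    answer_rate = ANSWER_RATE_FOOTBALL if watches else ANSWER_RATE_OTHER
    if random.random() < answer_rate:       # only answerers enter the data
        responses.append(watches)

observed = sum(responses) / len(responses)
print(f"True share watching football:   {TRUE_FOOTBALL_RATE:.0%}")
print(f"Share among survey respondents: {observed:.0%}")
```

The survey estimate comes out around 12%, nowhere near the true 40%, because the people most relevant to the question are exactly the ones who did not respond.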
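The top-box rollup from point 2.1 is just a count, and a tiny sketch (with made-up ratings) shows why rolling up 9s and 10s on a 10-point scale buys you nothing over a 5-point top box:

```python
def top_box_share(ratings, top_values):
    """Share of respondents whose rating falls in the top-box set."""
    return sum(1 for r in ratings if r in top_values) / len(ratings)

# Hypothetical responses on a 10-point scale...
ten_point = [10, 9, 7, 10, 5, 8, 9, 3, 10, 6]
# ...and the same respondents collapsed onto a 5-point scale (1-2 -> 1, ..., 9-10 -> 5).
five_point = [(r + 1) // 2 for r in ten_point]

print(top_box_share(ten_point, {9, 10}))  # 10-point top box: 9s and 10s rolled up
print(top_box_share(five_point, {5}))     # 5-point top box: identical share
```

Both calls report the same share, so the extra resolution of the 10-point scale evaporates the moment you analyze it as top-box.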
There are plenty of other ways that survey design can dramatically change how data looks upon completion. You have to take into account cultural differences in scale usage (including colors - there are some great survey platforms that allow visual selections and provide visual cues, but remember that while red means stop or bad in the US, it is a positive color in China), respondent propensity to check "Don't Know" (much higher in Japan than the US) and a myriad of other little factors.

Okay, here is the darn video - enjoy!
