Friday, October 29, 2010

Marketing 101

Believe in what you are doing, build great stuff and sell the hell out of it.

That is what Apple does. For a quick and funny snapshot of their recent release of the new MacBook Air, check out the video below. And then read the quick summary on TechCrunch.

In today's market - product is marketing, and marketing is product. When you realize that and break down the silos between product development, design, research and marketing, you get a team working seamlessly together with a shared vision and a focused goal. The result is the video below.

Friday, October 22, 2010

Velvet Rope Clients

Admittedly this is a weak blog post - I am posting about another blog post, which was itself about a book. However, it has been a very busy few weeks, and the other post is really good and gets to a core tenet here at Sentient Services - we do great work, for great people and great companies. Life is too short to work on junk or with not-so-nice people.

We are very fond of saying "We don't have clients, we have friends that happen to pay us money." And it is very true. Our "clients" are great friends - we travel the world together (tea in Turkey, hikes through the alleys of Hong Kong, beer in Germany), work long hours, create game-changing products, and have a lot of fun along the way...good friends + good clients = great times!

Read about what makes great clients and some guidelines on the GetSatisfaction blog. It covers Michael Port's books and philosophy.

The key takeaway:

Do you have your own red velvet rope policy that allows in only the most ideal clients, the ones who energize and inspire you? If you don’t, you will shortly. Why?
First, because when you work with clients you love, you’ll truly enjoy the work you’re doing; you’ll love every minute of it. And when you love every minute of the work you do, you’ll do your best work, which is essential to book yourself solid.

Enjoy and have a great weekend!

Saturday, October 9, 2010

Research 101

A friend just shared this Penn & Teller video with me. WARNING - NOT SAFE TO PLAY ON THE LOUDSPEAKERS AT WORK. While not completely academically rigorous, or a thorough two-sided investigation (not that there are ever many of those), it does quickly and easily explain some basics of survey writing and analysis, and how it can all go wrong. And it is entertaining. Some very basic things to watch for when writing a survey:

  1. Non-response bias. Who does not answer can sometimes be more compelling than who does. Quick example: do a phone survey on a Monday night about TV watching habits. The results? Hardly anyone watches football anymore! No, not really - the people watching football just did not answer the phone. You have a big non-response problem. That is why, for all surveys, you want to manage replicates (how much sample you release), contact design (how many times and over how many days you try to reach a record before you call it dead) and always take into account non-response bias. (A quick sketch of how this plays out appears after this list.)
  2. Scales: Likert scales (good Wikipedia overview here) are the standard. But even how you use them varies widely. A few quick comments:
    1. Using 4-point, 5-point, 7-point, 10-point scales. Always a hot debate here, quite enthralling and very sexy. But in a nutshell, I favor 5-point for several reasons. When you use 10-point, you typically roll up "top-box" as 9+10, so why not just go with 5 and make it easier? (See the top-box comparison after this list.) 10-point also causes more respondent fatigue - larger survey questions (actually twice as long), more options, etc. Additionally, what is the difference between an 8 and a 9? Moving from a 4 to a 5 seems a lot clearer, both from an analysis standpoint and from a respondent standpoint.
    2. End-labeled or label all points? If you can, always label all points. The human mind is much better and more consistent at deciding between "somewhat satisfied" and "very satisfied" than between a "3" and a "4". This is another argument to keep scales short - you can't anchor every point on a 10-point scale, but you can on a 5-point scale.
    3. Neutral point? Typically I say no. The idea of using a scale is to differentiate respondents; to differentiate the most satisfied or dissatisfied - to find those levers you can pull as a business to make a difference. Allowing a neutral typically wastes a scale point. Instead I prefer a positively weighted 5-point scale (e.g., not at all satisfied, not very satisfied, satisfied, very satisfied, extremely satisfied). We want to make that "top box" the cream of the crop; hard for people to check, but if someone does check it, you know you want to figure out how to get more respondents like them. The goal is to move your 3s and 4s up to that 5 spot, and to figure out how to stop the bleeding with your 1s and 2s.
  3. Don't Know: The inclusion of "Don't Know" as an answer choice can drastically skew survey results. The classic example is in voter polls. If you only give the option of the two candidates, with no "undecided" option, your results can be drastically different than if you allow for one. Last-minute voter turnout drives, the results of primary races, the overall propensity of "undecided" voters to lean toward a given party - all of it can be thrown off by the inclusion or exclusion of this one "little" answer choice. (The last sketch after this list shows how much the numbers can move.)
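
To make item 1 concrete, here is a minimal Python sketch with completely made-up numbers for football viewership and phone response rates. It shows how the raw survey estimate collapses when one group screens its calls, and how a simple post-stratification weight (which assumes you know the true population mix from some outside source, like census or frame data) pulls the estimate back.

  # Toy example of non-response bias (all numbers are hypothetical).
  # Suppose 40% of people were watching football Monday night, but football
  # watchers only answered the phone 10% of the time vs. 60% for everyone else.
  population = {"watching_football": 0.40, "not_watching": 0.60}
  response_rate = {"watching_football": 0.10, "not_watching": 0.60}

  # Share of completed interviews coming from each group.
  completes = {g: population[g] * response_rate[g] for g in population}
  total = sum(completes.values())
  sample_share = {g: completes[g] / total for g in completes}

  # Naive, unweighted estimate of "% watching football" from the survey alone.
  print(f"Unweighted estimate: {sample_share['watching_football']:.0%}")  # ~10%

  # Post-stratification weight = population share / sample share; this requires
  # knowing the population mix from outside the survey (census, frame data, etc.).
  weight = population["watching_football"] / sample_share["watching_football"]
  print(f"Weighted estimate:   {sample_share['watching_football'] * weight:.0%}")  # 40%
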
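For the top-box point under item 2, here is another toy comparison (again, made-up scores): on a 10-point scale the 9s and 10s usually get rolled up into a single top-box number anyway, so a 5-point scale gets you to the same headline metric with less work for the respondent.

  # Hypothetical satisfaction scores from the same ten respondents,
  # once on a 10-point scale and once on a 5-point scale.
  ten_point = [10, 9, 7, 8, 9, 4, 10, 6, 9, 5]
  five_point = [5, 5, 4, 4, 5, 2, 5, 3, 5, 3]

  top_box_10 = sum(1 for r in ten_point if r >= 9) / len(ten_point)   # roll up 9 + 10
  top_box_5 = sum(1 for r in five_point if r == 5) / len(five_point)  # just the 5s

  print(f"10-point top-box (9 or 10): {top_box_10:.0%}")  # 50%
  print(f"5-point top-box (5 only):   {top_box_5:.0%}")   # 50%
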
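Finally, for item 3, a hypothetical two-candidate poll showing how much the headline numbers can move depending on whether "undecided" is offered. The poll shares and the 2-to-1 break toward Candidate B are assumptions made up purely for the illustration.

  # With an "undecided" option on the ballot question (hypothetical numbers).
  with_undecided = {"Candidate A": 0.42, "Candidate B": 0.38, "Undecided": 0.20}

  # Force a choice instead, and assume the undecideds break 2-to-1 toward B.
  forced_choice = {
      "Candidate A": with_undecided["Candidate A"] + with_undecided["Undecided"] * (1 / 3),
      "Candidate B": with_undecided["Candidate B"] + with_undecided["Undecided"] * (2 / 3),
  }

  for name, share in forced_choice.items():
      print(f"{name}: {share:.1%}")
  # Candidate A: 48.7%   <- the 4-point lead from the first question has flipped
  # Candidate B: 51.3%
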
There are plenty of other ways that survey design can dramatically change how the data looks upon completion. You have to take into account cultural differences in scale usage (including colors - there are some great survey platforms that allow visual selections and provide visual cues, but remember that while red means stop or bad in the US, it is a positive color in China), respondent propensity to check "Don't Know" (much higher in Japan than in the US) and a myriad of other little factors.

Okay, here is the darn video - enjoy!