

How Non-Response Bias Can Ruin Your Mail Survey

Many marketers rely on mail surveys to measure customer satisfaction, or to gather information about the marketplace. Unfortunately, their confidence in such research is often misplaced because they fail to compensate for certain limitations of the mail methodology.

The key to accurate survey research is that the sample is “representative” of the population as a whole. Think of a large pot of soup as an illustration. If you put the ladle in the pot and get only broth – or only chunks of vegetables – then you don’t have a representative sample.

Most researchers (for example, Babbie, The Practice of Social Research) recommend a 50 percent or better return rate in order to be confident that the sample is representative. So if you send out 100 surveys, you want to get at least 50 back. The same applies if you’re making phone calls or sending Survey Monkey invitations. Since almost all single-wave, non-incentive, mail-out/mail-back surveys get a low response rate, there is a strong probability that such samples are not representative. A non-representative sample will not produce valid results.
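
As a quick sanity check on that arithmetic, here is a minimal sketch (the counts are hypothetical, not from any study cited here) that flags a survey wave falling below the 50 percent rule of thumb:

```python
# Minimal sketch: does a mail-out meet the 50% return-rate rule of thumb?
# The counts below are hypothetical, used only to illustrate the arithmetic.

def response_rate(sent: int, returned: int) -> float:
    """Response rate as a fraction of surveys sent."""
    return returned / sent

sent, returned = 100, 13            # e.g., a typical single-wave mail survey
rate = response_rate(sent, returned)
print(f"Response rate: {rate:.0%}")
if rate < 0.50:
    print("Below the 50% rule of thumb; the sample may not be representative.")
```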

This is not to say there isn’t a place for mail surveys, just that – as with any methodology or tactic – you need to know what you’re doing.

The key issue to understand when using mail-based research methodologies is the problem of “non-response bias.” This type of bias occurs when some segment of the sample doesn’t respond in the proportion needed for a representative sample. It may be that men don’t respond, or young people, or people who are dissatisfied with your services. All of these examples would result in underrepresentation of a certain segment of the population. According to Burns and Bush, “non-response has been labeled the marketing research industry’s biggest problem.”

The Impact on Satisfaction Research
In satisfaction research, one can sometimes recognize non-response bias by scores that are skewed – results that are especially high, especially low, or a combination of both. The latter is called a bi-modal response – in other words, compared with the normal bell curve, the bi-modal response looks like a camel with two humps. The people who respond are those who really love your organization or those who really hate you, while the silent majority is “silently satisfied” and underrepresented. The reason appears to be that mail (and Internet) methodologies are self-selecting approaches which encourage a higher representation of the extremes.
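
To make that self-selection effect concrete, here is a small simulation sketch (all numbers are invented for illustration) in which the extremes are far more likely to mail a survey back than the “silently satisfied” majority, producing a two-humped distribution of returned scores:

```python
# Illustrative simulation of non-response bias producing a bi-modal sample.
# All probabilities and scores are made up for demonstration only.
import random

random.seed(42)

# Hypothetical population: most customers are quietly satisfied (6-8 on a 10-point scale).
population = [random.choice([6, 7, 7, 7, 8]) for _ in range(900)]
population += [random.choice([1, 2]) for _ in range(50)]     # a few very unhappy customers
population += [random.choice([9, 10]) for _ in range(50)]    # a few delighted customers

def responds(score: int) -> bool:
    """Assume the extremes are far more likely to mail the survey back (self-selection)."""
    if score <= 2 or score >= 9:
        return random.random() < 0.60    # 60% of the extremes respond
    return random.random() < 0.08        # only 8% of the silent majority responds

responders = [s for s in population if responds(s)]

print(f"True population mean:  {sum(population) / len(population):.2f}")
print(f"Mean among responders: {sum(responders) / len(responders):.2f}")
print(f"Response rate:         {len(responders) / len(population):.0%}")
# The responders' scores pile up at the low and high ends -- the "camel with two humps".
```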

Typical Telephone & Mail Response Rates
What type of response rates are we talking about for mail and telephone methodologies? In my experience, a mail survey sent out once, with no money or reward involved, will frequently generate about a 13 percent response rate. Marketing Research by Burns & Bush states that “Typically, mail surveys of households achieve response rates of less than 20%.” Likewise, rates cited by one well-known customer satisfaction firm specializing in mail methodology range from 10 to 32 percent. In contrast, telephone methodologies – even in this age of caller ID – can easily produce a sufficient response rate, especially with a standard 3-attempt approach.

In a comparative study, Thomas Burroughs (Patient Satisfaction Measurement Strategies: A Comparison of Phone and Mail Methods) found telephone response rates ranging from the low 40s to over 50 percent, compared with a low of 21 to a high of 47 percent with mail.

Who is Not Responding?
It is not uncommon for a higher percentage of older people to respond to a mail survey – and for a large number of people under age 35 not to respond at all. The problem is the same with newer, Internet-based methodologies: not all segments of the population respond equally. Thus, when the response rate is low, the survey may not be any more valid than CNN’s engaging but unscientific “Quick Vote” feature.

The Jackson Organization (now HealthStream Research) phrases the problem this way:

In low-response (below 50%) surveys, such as most patient satisfaction surveys conducted by mail, there is a significant likelihood that those who respond to the survey are different (demographically and psychographically) from those who do not respond. This is called non-response bias – that those who respond are materially different from those who do not – and it compromises the validity of the results. The objective academic literature tells us that if response rates fall below fifty percent, the probability of introducing non-response bias is unacceptably high.

Addressing Low Response Rates
Although one can produce an invalid sample using any methodology, written surveys are more likely to suffer from non-response bias than telephone surveys. However, there are ways to increase mail response rates to 50 percent or greater and thus avoid non-response bias. The most common are:

  • Sending a follow-up reminder in the form of a postcard or letter
  • Mailing the survey multiple times (preferably to non-responders)
  • Including or offering an incentive for completing the survey
  • Personalizing the mailing with hand-addressing, a real signature in ink, or a personalized cover letter
  • Giving preliminary notification, by letter, postcard or phone call, that the survey is coming
  • Using special postage, such as a commemorative stamp
  • Providing return postage in the form of a stamped envelope or BRE

Of course, these efforts take extra time and money, often driving costs well above the comparable expense of telephone methodologies.

A less expensive – but unfortunately less sound – approach is for the research firm to “weight” the data to adjust for under-sampled segments. In these cases, 5 responses from under-age-35 respondents might be “weighted” to represent the 10 that are needed to match that segment’s percentage in the population as a whole. The problem with this approach is that the margin of error still applies to the smaller number – so overall confidence is not really improved.
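
One way to see why weighting doesn’t buy back confidence: the margin of error for a segment depends on the actual number of completed responses, not the weighted count. A minimal sketch (with assumed figures, purely for illustration) comparing the two:

```python
# Illustrative only: weighting 5 under-35 responses up to represent 10
# does not shrink the margin of error, which depends on the real n.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p based on n actual responses."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.5            # assume a 50/50 split, the worst case for the margin of error
actual_n = 5       # real completed surveys from the under-35 segment
weighted_n = 10    # the count the segment is weighted up to represent

print(f"Margin of error with the real n of {actual_n}:      +/- {margin_of_error(p, actual_n):.0%}")
print(f"Margin of error if {weighted_n} had truly responded: +/- {margin_of_error(p, weighted_n):.0%}")
# Weighting restores the segment's share of the overall average,
# but the uncertainty is still governed by the 5 real responses.
```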

Don’t Telephone Methodologies Also Have Bias?
Telephone methodologies also have the potential for bias, but generally of a different type. As Melvin F. Hall explains in “Patient satisfaction or acquiescence? Comparing mail and telephone survey results,” respondents contacted by telephone may have a tendency to give a socially acceptable answer to the interviewer, regardless of the content of the question. This is called acquiescence bias, but it is not often addressed in the literature. One reason may be that acquiescence bias is a systematic bias – one that potentially skews the results but doesn’t threaten their validity in the same way non-response bias does.

The Bottom Line
Good marketing begins with research. But marketers need to know enough about the tools they’re using to ensure that they’re getting good results. When it comes to mail research, it’s important to plan for techniques that will provide a sufficient response rate, or to consider whether other methodologies, such as telephone, would actually provide a more economical approach. These issues are especially important for ongoing research projects such as customer satisfaction tracking, where invalid data could lead staff to chase rabbit trails unrelated to the true core issues facing the organization.

