*"A man with one watch always knows what time it is; a man with two watches always searches to identify the correct one; a man with ten watches is always reminded of the difficulty of measuring time"* - Anonymous (quoted in Groves, 1989, p. 295).
Tuesday, July 8, 2014
A beginner's guide to response rates
One of the most common types of questions I get in survey practice is "What is a good response rate?" or "Is my survey's response rate good enough? Do I have nonresponse bias?" Survey methodologists reading this are probably taking a deep breath and figuring out where to start their response. Here are a few things I think everyone should know about response rates (non-technical...I will post later on AAPOR response rate calculation).
1) The answer depends on what you mean by "good".
"Good" can mean "high enough to publish in a specific journal," or "high overall (e.g., 80-90% or more)," or, what people usually want to know, "Are my results biased?"
"Good" might also mean "Do (will) I have enough cases for key analyses?".
In my mind, "good" should mean "good relative to other surveys in the same mode with similar design features." We just can't expect 50% response rates from RDD surveys, and we shouldn't get upset when we don't see them.
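To make the "what is my rate, exactly?" part concrete, here is a minimal sketch of a basic response rate calculation. It loosely follows the shape of AAPOR's RR1 (completed interviews divided by all eligible and possibly-eligible sample units), but it is a simplification; the disposition counts below are hypothetical, and the full AAPOR definitions have more categories (I'll cover those in the later post).

```python
# Minimal sketch of a basic response rate, roughly in the spirit of
# AAPOR RR1: completes / (completes + refusals + noncontacts +
# unknown-eligibility cases). This is a simplification, not the full
# AAPOR formula, and the counts below are made up for illustration.

def simple_response_rate(completes, refusals, noncontacts, unknown_eligibility):
    """Completed interviews divided by all (possibly) eligible sample units."""
    denominator = completes + refusals + noncontacts + unknown_eligibility
    return completes / denominator

# Hypothetical RDD sample disposition:
rr = simple_response_rate(completes=900, refusals=2100,
                          noncontacts=1500, unknown_eligibility=1500)
print(f"Response rate: {rr:.1%}")  # 900 / 6000 -> "Response rate: 15.0%"
```

Note that where you put the unknown-eligibility cases (all in the denominator, partly in via an eligibility estimate, or excluded) is exactly what distinguishes the different AAPOR rates, and it can move the number substantially.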
2) Any statement about survey "goodness" or data quality has to be conditional on the amount of resources spent or available to spend.
3) Response rates are good for some things, but not others.
a) Tracking an ongoing survey's performance over time
b) Comparing surveys that are conducted under the same or very similar "essential survey conditions" (e.g., mode, contact materials and protocols, costs/resources)
c) Planning survey costs, inference (CIs and power analyses), and the number of completed cases
d) Making initial assessments of approximate representation of key subgroups
Not good for:
a) Assessing nonresponse bias. See work by Groves and by Groves & Peytcheva, among others. This is lesson number 1 or 2 in survey methodology training, but it often isn't intuitive outside our field until explained. Statisticians usually understand it inherently, but substantive researchers may not. It's easily taught, though.
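The reason the response rate alone can't tell you about bias can be shown with the classic deterministic identity (the one Groves and Groves & Peytcheva work from): the bias of the respondent mean is the nonresponse rate times the gap between respondents and nonrespondents. A quick sketch with hypothetical numbers:

```python
# Sketch of the deterministic nonresponse-bias identity:
#   bias(ybar_r) = (nonresponse rate) * (ybar_r - ybar_nr)
# Two surveys with the SAME response rate can have very different
# biases, depending on how respondents differ from nonrespondents.
# All means below are hypothetical.

def nr_bias(response_rate, resp_mean, nonresp_mean):
    """Bias of the respondent mean relative to the full-sample mean."""
    return (1 - response_rate) * (resp_mean - nonresp_mean)

# Both surveys have a 40% response rate:
print(round(nr_bias(0.40, resp_mean=52.0, nonresp_mean=50.0), 2))  # 1.2
print(round(nr_bias(0.40, resp_mean=60.0, nonresp_mean=40.0), 2))  # 12.0
```

Same 40% rate, a tenfold difference in bias. That's why a response rate by itself is a poor proxy for nonresponse bias.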
On a related note, I was just reviewing notes from Jill Montaquila and Kristen Olson's short course "Practical Tools for Nonresponse Bias Studies." If you want to learn more about how to assess NR bias, I recommend taking the course.