In the survey methodology and statistics worlds, it can be easy to become dismissive of response rates as a way of evaluating survey quality. We know that they don't indicate bias per se (and are often poor indicators of it). We know that nonresponse error really applies to individual questions or estimates, yet response rates are usually reported only at the survey level. We know there are more accurate and statistically satisfying ways to measure nonresponse error, not to mention overall survey and estimate quality.
Yet I think there is still an important role for response rates. I've been reading a lot of technical survey documentation lately, which has me reflecting on our practices.
1) Formalizing response rates (as AAPOR and CASRO have done) gives the field a starting guidepost for evaluating surveys. Even if response rates aren't the final word on quality, standardization makes comparisons across surveys easier. Having a shorthand (e.g., "we used AAPOR RR4") also makes scientific communication quicker: we avoid having to explain exactly how a response rate was calculated every time we present one. (The RR4 formula is spelled out after this list.) This has to be better than the days before the AAPOR Standard Definitions.
2) Defining the rate, that is, deciding what to include in or exclude from a response rate (or a similar rate), helps clarify what type of "participation" is being measured. Is it participation among cases that are eligible and contacted (e.g., a cooperation rate)? A cooperation rate can be used to measure how effective contact materials or interviewers are at gaining cooperation, because it sets aside the task of making contact; a response rate would be a less clean measure for that purpose. Are respondents counted only if they complete the entire survey, or are partially-complete interviews included in the numerator as equivalent to complete interviews (AAPOR RR2, RR4, and RR6, and COOP2 and COOP4)? If we want a measure of "sampled units from which we have any data (or the key variables)," including partials makes sense. If we want a measure of "response among committed question answerers" or "fully complete data," excluding them is better. (The sketch after this list shows how these choices change the computed rates.)
3) Having a formalized set of disposition and response rate terminology makes it easy to explain our methods and practices to people in other fields (including clients who might not be familiar with survey sampling and data collection). A close read of the Standard Definitions is a great tutorial on the logistics of survey response and response rate calculation.
4) More than just a tool for scientific communication, standardized disposition definitions and response rate formulas embody our field's ethic of clarity and disclosure in methods. They are akin to reporting question and response option wording whenever reporting survey research.
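To make the shorthand in (1) concrete: in the Standard Definitions, RR4 counts partial interviews in the numerator and discounts unknown-eligibility cases by an estimated eligibility rate e:

RR4 = (I + P) / [(I + P) + (R + NC + O) + e(UH + UO)]

where I is complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, and UH and UO cases of unknown eligibility. Citing "AAPOR RR4" communicates all of that at once.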
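And to see how the definitional choices in (2) play out, here is a minimal sketch in Python. The disposition counts and the eligibility estimate e are made up for illustration; the rate formulas follow the Standard Definitions:

```python
# Hypothetical final disposition counts, using AAPOR notation.
I = 700    # complete interviews
P = 50     # partial interviews
R = 150    # refusals and break-offs
NC = 80    # non-contacts
O = 20     # other eligible non-interviews
UH = 60    # unknown if household/occupied
UO = 40    # unknown, other
e = 0.8    # assumed share of unknown-eligibility cases that are eligible

# Response rates: denominators include non-contacts and unknowns.
rr1 = I / ((I + P) + (R + NC + O) + (UH + UO))            # completes only
rr4 = (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))  # partials count

# Cooperation rates: contacted cases only, so NC, UH, and UO drop out.
coop1 = I / ((I + P) + R + O)          # completes only
coop2 = (I + P) / ((I + P) + R + O)    # partials count

print(f"RR1={rr1:.3f}  RR4={rr4:.3f}  COOP1={coop1:.3f}  COOP2={coop2:.3f}")
```

Note how the cooperation rates remove the contact problem from the denominator entirely, and how moving partials into the numerator (RR4, COOP2) mechanically raises the rate. Neither version is "right"; they answer different questions.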
While we continue to look for new indicators of data quality and estimate representativeness, I think response rates are here to stay.