Tuesday, March 19, 2013

Thoughts on paradata costs, sparked by James Wagner

I wanted to post a comment on my friend/colleague James' blog, but found that what I wanted to say took up too many characters (not uncommon!). Here's the original post...my response below is written as if it were a comment...http://jameswagnersurv.blogspot.com/2013/03/cost-of-paradata.html


I love the train of thought (just shows we're "nerds of a feather")!

I think first breaking things into fixed and variable costs is helpful, as usual: the cost of setting up the system vs. the cost of interviewer time to carry out the observations.

I also think you can get reasonable cost estimates (or at least data for those estimates) by good old-fashioned pilot testing. Don't even program it in CATI to start, just have interviewers try it and record how long it takes (e.g., look at average interview duration with and without the new obs). Or if you're not worried about Hawthorne effects, send an observer. At Census (and most large-scale operations, I imagine), it was hard to impart the idea that little pretests are possible and helpful (I'd argue essential) for anything that's worth doing full-scale. There was always a bias toward "on/off" thinking...and toward assessing the effects of a new method or tool on ALL the interviewing staff at once (and with little data to back up the assessment). That made it hard to try things like what you're talking about.
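The pilot-test idea boils down to a simple before/after comparison. A minimal sketch, with entirely hypothetical duration numbers, of what that estimate looks like:

```python
# Hypothetical pilot data: interview durations in minutes for a handful
# of interviews, without and with the new interviewer observations added.
# All numbers are illustrative, not from any real study.
from statistics import mean

without_obs = [42.0, 38.5, 45.2, 40.1, 39.8]
with_obs = [43.5, 40.0, 46.8, 41.9, 41.2]

# The difference in means is a rough estimate of the marginal
# interviewer time (a variable cost) the new observations add.
added_minutes = mean(with_obs) - mean(without_obs)
print(f"Estimated extra time per interview: {added_minutes:.1f} min")
```

Multiply that per-interview estimate by the planned number of interviews and an hourly wage, and you have a first-pass variable-cost figure to weigh against the fixed cost of programming the observations into the instrument.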

A few other reflections on iwr obs (not costs per se)...

-If the iwr obs come before the interview (like an assessment of the neighborhood or HU), they shouldn't distract from the interview.

-It's easy to spin stories about how these obs will be hard for interviewers to do, mess with their flow, etc., but your fine institution has shown us that this is naive thinking (of course it's also naive to throw too much new stuff at iwrs without prep, forethought, and testing).

-In trying to pitch additions to Census's CHI, I tried to answer the new questions we were proposing for my own neighborhood. I found that it took a few tens of seconds to actually answer the questions, and the "observation" required was all done just by walking through the neighborhood anyway. We know that interviewers "scope out" (at least perceive) the neighborhoods they're in, so asking them to record their obs shouldn't be too hard if the questions are good and few. We know there is variability b/c they're ratings. A reliability sample could establish how "good" the ratings are. If we want detailed obs like LA FANS, or really technical ratings (e.g., how many feet is it from the HH's door to the nearest corner), it's a different story.
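For the reliability-sample idea, one standard way to quantify how "good" the ratings are is agreement between two raters who observe the same units, corrected for chance (Cohen's kappa). A minimal sketch with made-up ratings:

```python
# Hypothetical reliability sample: two raters independently rate the
# same 10 neighborhoods on a 3-point scale (1 = well kept ... 3 = run
# down). Ratings are illustrative only.
from collections import Counter

rater_a = [1, 2, 2, 3, 1, 2, 3, 1, 2, 2]
rater_b = [1, 2, 3, 3, 1, 2, 3, 2, 2, 2]

n = len(rater_a)
# Observed agreement: share of neighborhoods where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement implied by each rater's marginal distribution.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n**2

# Cohen's kappa: agreement beyond chance, scaled to [~0, 1].
kappa = (observed - expected) / (1 - expected)
print(f"Agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

With ordinal ratings like these, a weighted kappa or an intraclass correlation would be the more refined choice, but even raw agreement from a small reliability sample tells you whether the questions are working.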

-For HU obs, interviewers are likely making the observations as they drive into or walk around the neighborhood, so their laptop may not even be open yet. Timing data would look different than if you required them to start the obs with the laptop open. Though the latter seems like overkill (I'd rather catch the obs the iwrs are doing naturally than set up something strict that really feels like "a new job to do").

-If the iwr is dovetailing "assessment" of the area with driving/walking in, there's some overlap in the costs (travel v. rating).

-Obs during the interview are a completely different story. Whatever's done has to fit within the flow of the CAPI screens (e.g., a pop-up screen that asks "does the R seem bored right now?") and the interviewer's personal style and flow. At the same time, it can't throw the interviewer off pace. An interesting UI task, but very possible I think, depending on the CAPI system. Interviewers' memories of interactions degrade just like respondents' memories of facts and experiences. So it might be possible for iwrs to record some high-level perceptions/feelings about the interview IMMEDIATELY after the interview...global assessments about R involvement, interest, boredom, etc. If you want more detail, post-hoc coding from audio or video (or use of a second observer) may be better. This would all cost more, of course :)  But again, if cost is the concern, you'd have to consider (calculate) whether recording and coding audio using some student coders costs more than designing, developing, and implementing coding that would be done live by interviewers right in the interview. If there's one thing I've learned from survey meth, it's "it depends" — the specifics are so important in making any design decision.
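That last "consider (calculate)" step can be sketched as back-of-envelope arithmetic. Every number below is hypothetical; the point is just the structure of the comparison (fixed costs vs. per-interview variable costs for each design):

```python
# Back-of-envelope cost comparison (all figures hypothetical):
# option A = record audio and have student coders code it post hoc;
# option B = build live in-interview coding into the CAPI instrument.
n_interviews = 1000

# Option A: post-hoc coding of recordings.
recording_setup = 2000.0    # fixed: set up audio recording
coder_wage = 15.0           # $/hour for student coders
coding_hours_per_int = 0.5  # hours to code one recorded interview
posthoc_cost = recording_setup + n_interviews * coding_hours_per_int * coder_wage

# Option B: live coding by interviewers during the interview.
capi_dev_cost = 12000.0     # fixed: design/program/test the CAPI screens
iwr_wage = 20.0             # $/hour interviewer wage
extra_min_per_int = 2.0     # added interviewer time per interview
live_cost = capi_dev_cost + n_interviews * (extra_min_per_int / 60) * iwr_wage

print(f"Post-hoc coding: ${posthoc_cost:,.0f}   Live coding: ${live_cost:,.0f}")
```

Which option wins flips with the sample size: fixed development costs dominate for small studies, while per-interview coding labor dominates for large ones — which is exactly the "it depends" point.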

Neat stuff!
