Tuesday, June 3, 2014

Training in survey methodology and practice


There's an upcoming DC AAPOR event on survey methodology training on June 13 (http://dc-aapor.org/upcomingevents.php). I can't attend, so I thought I'd share some of my own thoughts on the matter here. I think about this topic from three different perspectives.

As a survey methodology instructor and trainer of future methodologists: 
  1. Instructors should distinguish clearly whether their training (e.g., their course or degree curriculum) is about "survey research practice" or "survey methodology," or what mix of the two. Those seeking practical training can be turned off by methodological debates and esoterica, and the line between esoterica and fundamentals isn't always clear, particularly in an interdisciplinary field like survey methodology. Survey methodology is an applied yet scientific field, and should balance both perspectives.
  2. While a methodology focus trains the next generation of scientists and leaders, it may not provide broad enough training in concrete techniques, because the focus is on isolating and filling gaps in narrow areas of the field. That doesn't mean graduate programs can't do both. For example, an MS program could have an applied track for those who want to enter the workforce after training, and a "theory" (for lack of a better word) track for those who want to continue to a PhD.
  3. Include official, sanctioned specializations outside of survey methodology programs (see student point 3 below).
  4. Use Bloom's taxonomy of learning when planning courses. I've used it in my own courses, and it helps operationalize clear course outcomes and goals and structure the course to meet them. Otherwise we just end up teaching what we happened to learn, in the way we happened to learn it, and may not be optimizing instruction and the student experience for the outcomes we want students to have.
As a student:
  1. Stats v. social science focus: I'm sure opinions are split on this (my own opinion is split depending on the context). Groves (as you might expect) wanted each of us to be strong in all of it ("Do it better than we did." A tall order). On the other hand, the broader you go in topics, the less focused you can be. I'm glad I pushed my statistical boundaries and learned things I never thought I would learn. Although I still consider myself more of a social scientist, I can practice at a level of statistics I never thought I would. The counterargument is that it's been hard to focus on one or two problems and get research done. If you're going to go broad, make sure you get things out and published regularly so you don't end up with a scattered CV.
  2. Talks are fun, but only pubs really matter (I can't emphasize that enough now that I'm out). Take extra time to publish before life gets in the way (e.g., an extra six months or a year before defending, or a postdoc instead of a "regular" job). This also means faculty should be giving you room to publish (either co-authoring or solo papers based on class projects). MPSM/JPSM have good models for this in the Practicum, TSE, and Design Seminar courses.
  3. Read and study outside survey methodology: not just to find your field of application, but to find areas that will advance survey methodology. For example, I took social psych courses and read the communications and linguistics literature in my graduate work. I still try to keep an eye on decision science and other psychological and behavioral economics research that has something to say about measurement and nonresponse "decisions". I'm sure there are parallels in statistical work (e.g., estimation techniques or applied problems that aren't in the main view of usual survey statistics).

As an employer:
  1. I expect (or hope) that students coming out of formal survey methodology training (v. social, psych, or education research methods in another field/discipline, or stats programs) will have a balance of conceptual perspective and concrete skills. For example, I expect JPSM, MPSM, and SRAM students (or those who take my course) to have a handle on the total survey error (TSE) framework and its terminology, at least at a level that facilitates discussion, so we can quickly and easily zero in on whether we're talking about coverage error, sampling error, or something else. I don't know if every SM program teaches this (or a similar) model, but we need something that moves us from niche jargon to relatively standard technical terminology. I'm probably biased, but I feel TSE does that (as do some of the other frameworks out there). Terminology and models are a core part of the science of survey methodology in my mind, but I also expect grads to be able to DO things.
  2. I expect both soc- and stat-side students to have decent quant skills (both interpretation and production), more so on the stats side. It doesn't seem right to me (given the current social science paradigm) to turn out students who can't do basic analysis or basic experiment design, or who don't understand the basics of survey weights and variance estimation. Students should seek this kind of training if their program doesn't provide it. I would expect even MS students to have a working knowledge of these things and be able to refresh as needed.
  3. If I were hiring an MS-level soc-side person, I would expect these classes:
    1. Data collection methods
    2. Questionnaire design
    3. Applied Sampling
    4. Cognition, or a course on the social aspects of measurement
    5. Practicum courses
      1. Covering nonresponse avoidance and sampling techniques...really "how to"
    6. Intro stats (2 semesters, through at least linear and logistic regression)
    7. Analysis of complex sample survey data
  4. If I were hiring an MS-level stats person, I would expect these classes:
    1. Data collection methods
    2. Applied Sampling
    3. Sampling theory (or something more mathematical than applied sampling)
    4. Missing data/imputation
    5. Practicum courses
      1. Covering analysis
    6. Intro stats (3 semesters, through at least linear and logistic regression)
    7. Analysis of complex sample survey data
    8. Advanced variance estimation
    9. Introduction to latent variable models
      1. Pref with some exposure to complex survey data
    10. Introduction to longitudinal analysis
      1. Pref with some exposure to complex survey data
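
The weights-and-variance-estimation basics I expect above can be made concrete with a small sketch. Here is a minimal, hypothetical Python illustration (function name and toy data are mine, not from any curriculum) of a weighted (Hájek) mean with a Taylor-linearized standard error, treating PSUs as sampled with replacement within strata:

```python
import numpy as np

def weighted_mean_and_se(y, w, strata, psu):
    """Hajek (weighted) mean of y with a Taylor-linearized,
    with-replacement design-based standard error.
    strata and psu are per-respondent stratum and PSU labels."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    strata, psu = np.asarray(strata), np.asarray(psu)
    wsum = w.sum()
    mean = np.sum(w * y) / wsum
    # Linearized values for the ratio estimator (weighted mean)
    z = w * (y - mean) / wsum
    var = 0.0
    for h in np.unique(strata):
        in_h = strata == h
        # PSU totals of the linearized values within stratum h
        totals = np.array([z[in_h & (psu == j)].sum()
                           for j in np.unique(psu[in_h])])
        n_h = len(totals)
        if n_h > 1:  # between-PSU variance within the stratum
            var += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
    return mean, np.sqrt(var)

# Toy example: 2 strata, 2 PSUs each, equal weights
m, se = weighted_mean_and_se(
    y=[1, 2, 3, 4], w=[1, 1, 1, 1],
    strata=[0, 0, 1, 1], psu=[0, 1, 2, 3],
)
```

This is roughly the linearization logic behind design-based analysis tools such as Stata's svy commands or the R survey package, stripped to its core; real designs also involve finite-population corrections, certainty PSUs, and more general estimators.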

1 comment:

  1. I don't think any of the existing MS programs in Statistics offers the 12 courses you outlined. That's a great program, don't get me wrong, and I'd be happy to hire as many MS graduates of it as I could have openings for (currently, zero). The courses you list take up 36 credits, and a Master's-level program often requires only 30 credits. Within those 36 credits, you also have to set aside some 6-9 credits for the core probability + distributions + statistical inference sequence (unless that's what you meant by item 6, but I am not sure).

    Also, at least for a statistician, I would expect familiarity with 2+ packages out of Stata, R, SAS, SPSS, Python -- you are probably taking that for granted, but I would spell it out explicitly. Some programs offer courses in statistical software, but these don't really teach computational thinking (see a recent discussion on LinkedIn that I contributed to).

    You might also want to update your course list with a recommendation of a textbook or two for each course: "As of summer 2014, good published textbooks include ..." Even coming from the statistics side, I'd be hard-pressed to recommend anything on advanced variance estimation -- Kirk Wolter's book is definitely Ph.D.-level material, and even at that it requires something like Sarndal, Swennson & Wretman or Fuller to start off :). Latent variable model people and complex survey data people don't talk to each other (or I'd be one of the first to know, having worked and published in both areas), Willem Saris being probably the sole exception, so it will be up to the instructor to make connections to, say, questionnaire design, if that's what you have in mind.

    It is worth noting that your stats sequence is nearly 50% longer than the soc sequence (the latter can arguably be squeezed into 30 credits, if not for the wide spread of the courses across at least three mainstream academic departments). Do you think this reflects the relative difficulty of training statistics-oriented survey methodologists, as compared to soc-side-oriented survey methodologists? I think there's a shorter supply of stat-side people compared to soc-side people, but I may be mistaken. Don't forget that Stat departments would typically expect their entering graduate students to have taken calculus and linear algebra as prerequisites, which places an additional burden of proof of one's worthiness on applicants, making it harder still for, say, poli sci students who discovered a love of survey methodology.

    Yet another possible training "side" is the project-management side, which would require even more specific training in project management, budgeting, and probably HR, most likely at the expense of most of the stat side. I think some of that may be covered by the emerging Professional Science Master's programs, although I don't think there are any in survey methodology yet.
