Research Good Enough for Judicial Review
I recently came across a strange, hefty book: the *Reference Manual on Scientific Evidence*, compiled by the Federal Judicial Center and the National Research Council. It’s not fun reading, but it provides an enormously useful and fascinating overview of what makes social and marketing research (and other kinds of scientific research, too) sufficiently rigorous to withstand scrutiny in a court of law.
According to this manual, in the early 1990s “the Supreme Court instructed trial judges to serve as ‘gatekeepers’ in determining whether the opinion of a proffered expert is based on scientific reasoning and methodology.” The *Reference Manual* was developed to help legal professionals, most of them non-experts in the type of research we do, assess the quality and usefulness of research presented in the courts. There are whole chapters on statistics, multiple regression, and survey research.
Here are the section titles of the chapter on survey research, all of them framed as questions that judges are urged to consider, with guidelines on how to assess each:
- Was the survey designed to address relevant questions?
- Was participation in the design, administration, and interpretation of the survey appropriately controlled to ensure the objectivity of the survey?
- Are the experts who designed, conducted, or analyzed the survey appropriately skilled and experienced?
- Are the experts who will testify about surveys conducted by others appropriately skilled and experienced?
- Was an appropriate universe or population identified?
- Did the sampling frame approximate the population?
- Does the sample approximate the relevant characteristics of the population? (see the first sketch after this list)
- What is the evidence that nonresponse did not bias the results of the survey?
- What procedures were used to reduce the likelihood of a biased sample?
- What precautions were taken to ensure that only qualified respondents were included in the survey?
- Were questions on the survey framed to be clear, precise, and unbiased?
- Were some respondents likely to have no opinion? If so, what steps were taken to reduce guessing?
- Did the survey use open-ended or closed-ended questions? How was the choice in each instance justified?
- If probes were used to clarify ambiguous or incomplete answers, what steps were taken to ensure that the probes were not leading and were administered in a consistent fashion?
- What approach was used to avoid or measure potential order or context effects? (see the second sketch after this list)
- If the survey was designed to test a causal proposition, did the survey include an appropriate control group or question?
- What limitations are associated with the mode of data collection used in the survey, including in-person interviews, telephone interviews, mail questionnaires, and Internet surveys?
- For surveys involving interviewers, were the interviewers appropriately selected and trained?
- What did the interviewers know about the survey and its sponsorship, and what procedures were used to ensure and determine that the survey was administered to minimize error and bias?
- What was done to ensure that the data were recorded accurately?
- What was done to ensure that the grouped data were classified consistently and accurately?
- When was information about the survey methodology and results disclosed?
- Does the survey report include complete and detailed information on all relevant characteristics?
- In surveys of individuals, what measures were taken to protect the identities of individual respondents?
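
Two of these questions are concrete enough to illustrate with code. First, the sample-versus-population check: a common way to probe whether a sample approximates the population is to compare the sample’s demographic mix against known benchmarks, such as census figures. Here is a minimal Python sketch of that idea; the age brackets, counts, and population proportions are all hypothetical.

```python
# Compare the sample's age distribution against (hypothetical) population
# benchmarks with a chi-square goodness-of-fit test.
from scipy.stats import chisquare

# Observed counts in the survey sample, by age bracket (hypothetical).
sample_counts = [180, 240, 210, 120, 50]  # 18-29, 30-44, 45-59, 60-74, 75+

# Population proportions for the same brackets, e.g. from census data
# (also hypothetical here). They must sum to 1.
population_props = [0.21, 0.27, 0.25, 0.18, 0.09]

n = sum(sample_counts)
expected_counts = [p * n for p in population_props]

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected_counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value flags a mismatch worth explaining or weighting for; a non-significant result is not, by itself, proof of representativeness, which is part of why the manual also asks about the sampling frame and nonresponse.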
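
Second, order and context effects: one standard precaution is to randomize the question order per respondent and record the order actually shown, so responses can later be compared across orderings. A minimal sketch, with hypothetical question identifiers:

```python
import random

# Hypothetical question identifiers.
QUESTIONS = [
    "q_brand_awareness",
    "q_purchase_intent",
    "q_price_sensitivity",
]

def build_questionnaire(respondent_id: int, seed: int = 42) -> list[str]:
    """Return a per-respondent question order, reproducible from the seed."""
    rng = random.Random(seed * 1_000_003 + respondent_id)  # one stream per respondent
    order = QUESTIONS[:]  # copy, so the master list stays untouched
    rng.shuffle(order)
    return order

# Two respondents will typically see different orders; re-running with the
# same seed reproduces the assignment, which matters if the design is
# later challenged.
print(build_questionnaire(1))
print(build_questionnaire(2))
```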
Some of the questions may seem obvious to professional researchers with years of experience, but how often do you really feel confident that your survey research could withstand courtroom or even corporate boardroom scrutiny on all of these issues? If your answer is anything less than “most of the time,” then download this manual for a useful compendium of core issues worth revisiting on your next project.