Versta Research Newsletter

Dear Reader,

Writing survey questionnaires, like any skilled professional task, is both easier than you might think and harder than you might think. It is easy in that any intelligent person, with enough time and training, can learn how to do it. No rare talents are required. It is hard, however, in that it takes years of training, on-the-job experience, and quite a few mistakes before you will feel comfortable and confident doing it.

In this newsletter we share with you Five Last Steps in Writing a Questionnaire, which offers Versta Research’s recommended best practices for finalizing a survey before it goes to programming for design or layout. It gives you an easy set of guidelines for the last crucial components that can make or break the success of your survey.

Other items of interest in this newsletter include Stories from the Versta Blog and Versta Research in the News, which showcases some of our recent work about narcolepsy for Harmony Biosciences and elder financial protection for Wells Fargo.

As always, feel free to reach out with an inquiry or with questions you may have. We would be pleased to consult with you on your next research effort.
Happy fall!

The Versta Team

Five Last Steps in Writing a Questionnaire

In quantitative survey research, the questionnaire affects nearly everything else that is critical to a study: the data, the analysis, and the findings. If the questionnaire is no good, all else is lost.

While not especially hard to write, questionnaires demand surprisingly deep experience. They require thinking on multiple levels, even as one writes. You must ask yourself: How will respondents interpret and react to my words? How are they likely to respond or be swayed? How will I use their answers, and what kinds of analyses do I anticipate? How will I report the responses, and to whom?

Textbooks and methods courses in survey research teach you standard best practices for writing survey questions (use neutral language, avoid double-barreled questions, and so on). But beyond that, we find there are crucial “non-standard” best practices that researchers lose track of unless they have constant reminders or a checklist at the end.

Here is our end-of-process checklist, with helpful hints about the five last steps we recommend in writing a questionnaire. These final run-throughs will ensure that you have given careful consideration to important issues in survey design that likely got pushed aside in early drafts as you focused on all those standard best practices in getting your questions right.

Review Your Don’t-Knows

A perennial debate among survey researchers is whether respondents should be allowed to answer “don’t know” or “not sure.” This is something we at Versta Research wrestle with (and argue about) internally almost every day. We have seen strong evidence in our own work that a large majority of don’t-know responses reflect respondents’ unwillingness to answer, not genuine uncertainty. This is corroborated by a larger body of literature on survey research, as well (see our 2016 article: Don’t Know Is Not an Option).

Therefore we recommend against inserting don’t-know options unless absolutely needed. In fact, we suggest trying to write a questionnaire so that they are not needed at all. How? First, do not ask people for information they do not have. If “don’t know” seems like a reasonable response, write a better question to measure what you need. Second, look at your answer options. They should be exhaustive, comprehensive, mutually exclusive, and based on information from other research or expertise. If it seems that respondents will not find an answer that reflects their thinking, write better ones so that they will.

After you have done everything possible to revise your questions and answer options, run through the entire survey item-by-item once again to decide whether don’t-know is still a reasonable response, or just a lazy response. Add the option if it truly makes sense, but do so sparingly.

Review Your Other-Specify Boxes

It is tempting to add an open box at the end of every survey question so that respondents can write in their own answer if the pre-listed answers do not fit them. Our clients do this, and they ask us to do it, all the time. They are worried that we might miss something, and so we ought to let respondents tell us what we missed.

But there are big downsides to doing this. First, it makes data harder to work with. Every question with an other-specify box will require manual review: you need to read what people wrote, and then decide what to do. Second, it adds ambiguity. Often people write answers similar to what you already supplied. Will you assume they were lazy, and change their answer to the one you supplied? Or will you assume they meant something different? Third, other-specify boxes do not work very well. If you decide to code responses into categories and report new answer options, the percentages will be low and probably not worth reporting. And those low percentages will likely be misleading, because they represent unaided responses, unlike the pre-listed options, which were aided.

The biggest reason other-specify boxes do not work well, however, is that respondents will not use them. As much as we want them to help us, few will tell us what we missed. They assume it is our job to supply a comprehensive list of answer options (they’re right), and will answer as best they can within the constraints we offer.

So after your survey is drafted, take a closer look at every question with an other-specify box. Keep it if it seems essential, but otherwise know that it will not deliver much in terms of “data,” so you can safely let it go.

Review Your Randomization

Randomizing lists of response options is a good idea to even out the biased ways in which respondents search for answers to questions. Items at the top of a list are more likely to be selected, and items in long lists are likely to be overlooked. Why? Mostly because respondents read from top to bottom, search quickly for a good answer (even if not the best answer), and move on to the next question in the survey.

But randomization of response options is not always a best practice, so you should never randomize without careful thought. When you design answer options in Google Surveys, for example, you will be told: “Randomization produces best quality results.” This is not true. If answer options are a Likert scale, scrambling the order of the scale is a terrible idea. If answer options represent any logical progression (for example, local to global, general to specific, work-related to family-related) then randomizing creates confusion.

The same goes for whole sets of questions, especially items that are being rated in grid-like formats. Sometimes it is best to randomize, but not always. As you finalize your questionnaire, go through every set of questions and answer options and consider whether randomization is valuable to reduce likely biases. But also consider whether randomization will create confusion, and if it will, don’t do it.
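
As a rough illustration only (a minimal Python sketch with made-up answer options, not how any particular survey platform works), the underlying decision is simply whether to shuffle an unordered list or leave an ordered one alone:

import random

def display_order(options, randomize):
    """Return the options in the order a respondent will see them."""
    if not randomize:
        return list(options)   # scales and logical progressions keep their written order
    shuffled = list(options)
    random.shuffle(shuffled)   # a fresh order for each respondent
    return shuffled

# Shuffle an unordered list of reasons...
print(display_order(["Price", "Quality", "Convenience", "Habit"], randomize=True))

# ...but never a Likert scale.
print(display_order(["Strongly disagree", "Disagree", "Agree", "Strongly agree"],
                    randomize=False))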

Review Your Scales

Good researchers use a variety of scales, because every scale should be designed with specific needs in mind. Those needs vary from survey to survey, and from question to question. Sometimes we should use just two points (to boost reliability), sometimes four points (for PR-driven research), sometimes seven, nine, or eleven (for statistical modeling).

We always try to build such thinking into our drafts as we write survey questions and lay out answer options. But it’s easy to lose track, and easier still to unthinkingly cut and paste scales from elsewhere in a survey. That’s why at the end, we review every question that uses ordinal scales. We ask ourselves: What is the minimum number of scale points we need for our analysis and reporting needs? Do we want to capture neutrality, or should we require respondents to lean one way or another? Should we offer a don’t-know or not-applicable option, or should we revise our questions and skip patterns instead?

Another piece to review is the order in which scales are laid out. Generally we want to list response options from low to high, or from bad to good. We start with things like terrible, zero, poor, disagree, and so on. Then we move towards spectacular, excellent, ten, and agree. This helps counteract a social desirability bias in which respondents lean towards giving answers that make them look favorable, pleasant, and positive.

Of course this is a general rule only! Sometimes negative responses are more socially desirable than positive ones. Sometimes flipping the sequence of response options causes confusion. So it is always essential to give careful thought, question by question, with focused attention on your scales, as you put the finishing touches on your questionnaire draft.

Review Your Skip Logic and Programming Instructions

In your very final review of a questionnaire, go through skip logic and programming instructions in one focused pass, undistracted by the other issues you considered above. Good skip logic helps avoid the need for don’t-know response options, as you carefully adjudicate whether questions are applicable to every respondent. Good programming instructions ensure high quality data by further constraining (or not constraining) what respondents can see and do.

With skip logic, first confirm there is no ambiguity for analysts, data coders, and programmers by specifying every skip pattern in both words and programming language (example: IF WOMEN, Q2=2). Second, ask a colleague to proofread and imagine what the survey will be like for every type of respondent. If questions seem weird or inappropriate for some, add skip logic accordingly.
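
To show what an unambiguous specification buys you, here is a minimal sketch (in Python, using the Q2 example above and a hypothetical Q5) of how a programmer would translate “ASK IF WOMEN (Q2=2)” into a rule:

def ask_q5(respondent):
    """Skip logic for a hypothetical Q5: ASK IF WOMEN (Q2=2); otherwise skip."""
    return respondent.get("Q2") == 2

respondent = {"Q2": 2}   # this respondent selected option 2 (women) at Q2
print("Ask Q5" if ask_q5(respondent) else "Skip Q5")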

Then review and specify all programming instructions, and specify instructions for respondents as well. Review whether each question should be multiple response or a single response. In lists of multiple items, consider whether some should be exclusive, or always shown on the bottom (such as “none of the above”).
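
An “exclusive” instruction, for example, means “none of the above” cannot be selected alongside other items. Here is a minimal sketch (Python, with hypothetical options) of the kind of check a programmer or analyst might run to confirm it:

def violates_exclusivity(selected, exclusive="None of the above"):
    """Flag a multiple-response answer where the exclusive option was
    selected alongside other items -- something good programming prevents."""
    return exclusive in selected and len(selected) > 1

print(violates_exclusivity(["None of the above"]))              # False: a clean answer
print(violates_exclusivity(["Brand A", "None of the above"]))   # True: flag for review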

For open-end answer boxes, decide, specify, and explain constraints. If you ask for a number, can respondents enter text? Can they enter as many numbers as they want, and any range they want? Can they enter commas? We mostly advise against constraints, instead allowing respondents to enter whatever they want. It makes data cleaning harder, but it usually provides a better respondent experience, and has the added benefit of giving us more information to identify bad respondents.
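
As a rough sketch of what that cleaning can look like (Python, with made-up entries), an unconstrained numeric open-end can usually be salvaged with lenient parsing, and anything that cannot be parsed becomes a flag worth reviewing:

import re

def parse_open_end_number(raw):
    """Leniently parse an unconstrained numeric open-end: drop commas and
    dollar signs, pull out the first number, and return None if there is
    none (a useful data-quality flag)."""
    cleaned = raw.replace(",", "").replace("$", "")
    match = re.search(r"-?\d+(?:\.\d+)?", cleaned)
    return float(match.group()) if match else None

print(parse_open_end_number("1,200"))       # 1200.0
print(parse_open_end_number("about $50"))   # 50.0
print(parse_open_end_number("no idea"))     # None -- worth a closer look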

Ready to get started? No, wait. These are five best practices for a final review of your questionnaire. So the question is: Are you ready to wrap up? When you are, here’s that final checklist:

☐ Review your don’t-knows
☐ Review your other-specify boxes
☐ Review your randomization
☐ Review your scales
☐ Review your skip logic and programming instructions

Check these things carefully and consistently, and we guarantee your research will be measurably improved.

Stories from the Versta Blog

Here are several recent posts from the Versta Research Blog. Click on any headline to read more.

Response Rates for Phone Surveys Plummet to 1%

Phone survey response rates have dropped 70 percentage points in just 35 years. Take a look at these numbers to see why phone surveys are just about dead.

Cut the Marketing Jargon from Your Surveys

Please do not ask consumers about your “brand,” their “product experience,” and about “sharing their voice.” This is not how consumers talk or think.

Nine Ways to Rescue a Failing Phone Survey

If you are fielding a phone survey and hitting a wall because response rates are terrible, here are nine ideas to consider that may save you from failure.

Polling Trends on Food, Biotechnology, and More

A meta-analysis of polling on food safety reveals trends relevant to all industries: declining trust, information overload, and a need for guidance.

The Strange Survival of the Focus Group

Contrary to many predictions, focus groups are thriving in a world where new technologies are ubiquitous and tons of data is easy to get. What gives?

Beware Bad Sample from Crowdsourcing

Academic psychologists were among the first to venture into research crowdsourcing by using MTurk for survey sampling. Now they may be regretting it.

What Makes a Survey Scientific?

If you want to know what “scientific” surveys are, think beyond fun polls and old-fashioned suggestion boxes and complaint lines. Here is what it takes.

Easy Surveys Generate Tons of Errors

We strongly recommend against using “auto-advance” features for online surveys because if respondents make input errors, there is no way to correct them.

Surveys on Cell Phones Are Just as Good

Despite worries that surveys on cell phones are too difficult for people, research shows otherwise. Respondents give detailed, thoughtful, accurate answers.

Don’t Fall for the Neutrality Trap

Using a neutral middle point on your survey scales (like NEITHER agree nor disagree) will lower measurement reliability. We suggest a different approach.

Versta Research in the News

Surveys on How to Prevent Elder Financial Abuse

Wells Fargo has been publicizing the work it commissioned through Versta Research on its Wells Fargo Stories website, and has provided a full suite of materials for download, which includes a snapshot of the research findings plus a full whitepaper.

New Findings from Versta Research on Narcolepsy

Harmony Biosciences commissioned a research study about narcolepsy with Versta Research. It involved three audiences: patients, physicians, and the general public. Findings from the research are highlighted on Harmony Biosciences’ new Know Narcolepsy website, and are being submitted for presentation at the American Academy of Neurology (AAN) annual meeting to be held in Philadelphia in May 2019.

MORE VERSTA NEWSLETTERS