A Checklist for Testing Surveys
We recently offered an article about needing to “punish test” surveys lest you end up with a mess of mistakes like text piping that fails, or survey layouts that do not adapt to user devices. But how do you actually punish test a survey? The best way, we think, is to start with a checklist.
Go through your survey and create a list of every element that should be reviewed and checked for accuracy and functionality. Your checklist will likely vary for every survey, because every survey is custom designed for a specific need. But as you develop a library of items for checklists, you will find yourself pulling in many of them over and over again.
To get you started, here is a checklist for checklists, with a description of each broad category of survey elements you should be checking:
- Check the Display. Click through every page of your survey in test mode and make sure all questions load quickly and correctly. All text and images should be fully visible, not truncated. Ideally they will not require users to scroll (and users should never have to scroll horizontally). Check that all buttons display and function correctly, and that they allow the survey to advance forward (and, where intended, backward).
- Check the Text. In live-link testing, when you’re almost ready to launch, go through the survey at a painstakingly slow pace and check that all text matches the questionnaire exactly. We use a macro in Word to physically strike through every word, one by one, to confirm that each has been verified (a scripted comparison can also give you a fast first pass; see the first sketch after this list). Check the question text, check the response text, and check all instructions. Also check that any lists appearing in multiple questions match each other, as last-minute changes sometimes make it onto one list but not others.
- Check the Question Controls. Many programming specifications are set at the level of each question, and all of them need to be checked individually. If response options should randomize, make sure they do. If they are supposed to rotate instead, make sure of that. Rows marked as exclusive should automatically uncheck any other selected rows. Multiple-response questions should allow more than one row to be selected. Check all text piping. Check required versus not-required specifications. Check what is allowed in open-end text and numeric boxes, including any range restrictions or validation protocols.
- Check the Top-Level Functionality. Here we are referring to things like quota controls, survey paths, skip functionality, and so on. Many of these elements are difficult to check manually, one question at a time, but easy to check with aggregate data from a random data generator. Be sure to run one and confirm that every skip and specified restriction works as intended (see the second sketch after this list). Note each component on your survey checklist, and tick the box when you are done.
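For the text check above, a short script can flag mismatches before you do the word-by-word strike-through. Here is a minimal sketch, assuming you can export both the approved questionnaire and the programmed survey as plain text files; the file names are hypothetical.

```python
# A minimal sketch of automated text matching, assuming the approved
# questionnaire and the programmed survey can both be exported as plain
# text files. File names here are hypothetical.
import difflib

def compare_texts(questionnaire_path: str, survey_path: str) -> None:
    """Print every line where the programmed survey differs from the questionnaire."""
    with open(questionnaire_path, encoding="utf-8") as f:
        questionnaire = f.read().splitlines()
    with open(survey_path, encoding="utf-8") as f:
        survey = f.read().splitlines()

    # unified_diff prints only the lines that differ, with a little context
    for line in difflib.unified_diff(
        questionnaire, survey,
        fromfile="questionnaire", tofile="survey", lineterm=""
    ):
        print(line)

compare_texts("questionnaire.txt", "survey_export.txt")
```

Any lines the diff prints are candidates for closer review; the manual strike-through pass is still the final word.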
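And for the top-level functionality check, here is a minimal sketch of what reviewing generated test data might look like, assuming your survey platform’s random data generator can export one row per simulated respondent as a CSV. The file name, column names, and the skip rule itself (Q2 asked only if Q1 is “Yes”) are hypothetical stand-ins for your own survey’s specifications.

```python
# A minimal sketch of checking generated test data against a skip rule.
# Assumes a CSV export with one row per simulated respondent; the file
# name, columns, and rule (Q2 asked only if Q1 == "Yes") are hypothetical.
import csv

def check_skip(path: str) -> None:
    """Flag any simulated respondent who answered Q2 without qualifying at Q1."""
    violations = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # A non-empty Q2 from someone who did not say "Yes" at Q1
            # means the skip failed for that record.
            if row["Q1"] != "Yes" and (row.get("Q2") or "").strip():
                violations.append(row)
    if violations:
        print(f"Skip logic failed for {len(violations)} simulated respondents")
    else:
        print("Skip from Q1 to Q2 held for every simulated respondent")

check_skip("random_test_data.csv")
```

Run one such check for every skip and restriction on your list; any violation points to a programming error worth tracing before launch.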
The point is that testing a survey should involve more than running through it a few times after it is programmed. It should be subjected to an exceptionally thorough review, with a checklist, on multiple devices (yes, you should test it on a phone) and with multiple browsers. It must be a focused, time-intensive, and deliberate task, and the checklist is there to keep you from getting distracted, because there are so many things to check! Work your way through the list one item at a time, from the beginning of the survey to the end, then move to the next item and do it all again.
Oh, and one more thing. If you use a survey tool that promises to save you the trouble of running through all these checks with an AI-driven expert review, don’t believe it! It’s a clever idea that will lead you astray, because for now at least, it just doesn’t work.
—Joe Hopper, Ph.D.