Web-Based Survey Testing: Seven Tips for More Effective Questionnaires

by Scott D. Crawford | Sep 12, 2016 | Survey Methodology, Survey Operations

Testing a web-based survey is one of the most detested activities in the survey research business. It requires patience, persistence, and tremendous focus. Someone who enjoys testing and is good at it is a valuable asset to any survey research team.

A wonderful resource for the details of questionnaire testing (including a chapter on web-based surveys) is Methods for Testing and Evaluating Survey Questionnaires.

Here are seven tips for better web-based survey testing on your next study:

Develop detailed questionnaire specifications – and use them as the testing benchmark for your web-based survey.

Do not rely on catching logic or other errors simply by “taking the survey.” That approach is good for some elements of testing, but to ensure that the programmer built what was intended, you must start with clear instructions describing that intent. Ideally, these are the same specifications provided to the web-based survey programmer. Use the specifications as a guide to identify and document problems.
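The spec becomes a much stronger benchmark when its routing rules are written in a form you can check mechanically. Here is a minimal sketch of that idea in Python; the question IDs, answer values, and spec format are hypothetical placeholders, not a prescription for your own documents.

```python
# A minimal sketch of spec-driven checking. The spec format and all
# question IDs/answer values here are hypothetical placeholders.

# Routing spec: question -> {answer_value: next_question}
SPEC = {
    "Q1": {"yes": "Q2", "no": "Q3"},
    "Q2": {"*": "Q3"},   # any answer routes to Q3
    "Q3": {"*": "END"},
}

def check_route(responses):
    """Walk one tester's recorded path and flag routing that differs from the spec.

    `responses` is an ordered list of (question, answer) pairs from one test run.
    """
    problems = []
    for (q, answer), (next_q, _) in zip(responses, responses[1:]):
        rules = SPEC.get(q, {})
        expected = rules.get(answer, rules.get("*"))
        if expected is not None and expected != next_q:
            problems.append(f"{q}={answer!r}: expected {expected}, saw {next_q}")
    return problems

# Example: per the spec, Q1 = "no" should skip Q2 and go straight to Q3.
print(check_route([("Q1", "no"), ("Q2", "oops"), ("Q3", "done")]))
# -> ["Q1='no': expected Q3, saw Q2"]
```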

Use multiple testers and test cases.

Everyone approaches testing slightly differently and brings different experiences to the task. Having multiple testers gives you a range of experiences to draw on. While this is anecdotal, we have found that about seven testers saturate the range of approaches; more is better, but having at least seven will give you good assurance that your survey is working as intended. This, of course, depends a lot on the complexity of the instrument. Pair those testers with scripted test cases (see the sketch below) so the routing branches get covered, not just the straight path.
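A test case can be as simple as a named answer set that forces one branch of the routing. A small sketch, with hypothetical question IDs and answer values:

```python
# Scripted test cases, one per routing branch we care about. All question
# IDs and answer values below are hypothetical placeholders.
TEST_CASES = [
    {"name": "smoker path",     "answers": {"Q1": "yes", "Q2": "daily"}},
    {"name": "non-smoker skip", "answers": {"Q1": "no"}},   # Q2 should never display
    {"name": "item refusal",    "answers": {"Q1": None}},   # how is nonresponse handled?
]

for case in TEST_CASES:
    print(f"Run case {case['name']!r}: enter {case['answers']} and record what displays.")
```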

Look at the data that you generate in testing.

True or not, a legend circulates in the survey research world about a brand-new computer-assisted survey system introduced at a federal agency: the system functioned beautifully during data collection, except that once all was done, it was discovered that the data had never been saved. We’re in the data generation business, folks: look at your test data. Make sure that it is complete, that all items have data, and that there are no oddities that require exploration.
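Those three checks (completeness, item coverage, oddities) are easy to script. A minimal sketch using pandas; the export file name and item names are hypothetical, and your survey platform’s export format will differ.

```python
# A minimal post-test data check. File name and item names are hypothetical.
import pandas as pd

df = pd.read_csv("test_run_export.csv")

# 1. Did every test run land in the export at all?
print(f"{len(df)} records exported")

# 2. Does every expected item appear as a column?
expected_items = ["Q1", "Q2", "Q3"]
missing = [c for c in expected_items if c not in df.columns]
print("missing items:", missing or "none")

# 3. Any items with no data, or odd values worth exploring?
present = [c for c in expected_items if c in df.columns]
print(df[present].isna().sum())          # per-item missing counts
for item in present:
    print(item, df[item].unique()[:10])  # eyeball the recorded values
```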

Test your web-based survey on different browsers, operating systems, and device types.

Back when survey research first moved to the web, this was an entirely new consideration. Gone were the days of uniform computing equipment purchased and configured by the research organization. Just as our study participants vary, so do their devices. In most situations you do not need to test every possible combination, but you should hit the big ones. Use web monitoring sites like W3Counter to identify which browsers and devices are most common, and test on those.

While differences in how browsers display surveys on a computer screen have diminished in recent years, the variability among mobile devices has made this a real issue again. Do you expect your participants to take the survey on their mobile devices? (If not, are you sure?) Ensuring that your survey displays well on mobile may be critical.
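Much of this checking can be scripted as a smoke test across browsers and window sizes. Below is a rough sketch using Selenium (my choice for illustration, not something any particular survey platform requires); it assumes the browser drivers are installed locally, and the survey URL and the rendered-text check are placeholders.

```python
# A cross-browser/viewport smoke test sketch using Selenium. The survey
# URL and the "Q1" content check are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

SURVEY_URL = "https://example.org/survey/start"

# (label, window size) pairs approximating a common desktop and phone screen
VIEWPORTS = [("desktop", (1366, 768)), ("phone", (375, 667))]

for name, factory in (("Chrome", webdriver.Chrome), ("Firefox", webdriver.Firefox)):
    driver = factory()  # requires the matching browser driver installed locally
    try:
        for label, (width, height) in VIEWPORTS:
            driver.set_window_size(width, height)
            driver.get(SURVEY_URL)
            # Crude first check: did the opening question render at all?
            body_text = driver.find_element(By.TAG_NAME, "body").text
            print(name, label, "first question visible:", "Q1" in body_text)
    finally:
        driver.quit()
```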

Define a primary purpose for each test.

When it comes to setting a primary purpose for your testing, I have found that a two-step approach works best. First, set a primary test purpose of overall functionality and display. Is the web survey doing what I expected it to do? Are there usability or other design issues? Once changes are made, introduce a second primary purpose: correctness. Now is the time to catch the typos, the incorrect logic, and the broken validations. Drill into the details. Whatever primary purpose fits your research needs is fine; just make sure that you have one. Testing without a primary purpose will get you scattered results.

Find testers who come from different baseline knowledge domains.

This idea comes from the software testing industry, where it is known as black box / white box testing. In a web-based survey, failures may come from the questionnaire or from the web-based survey system. Get someone to test who knows the questionnaire well but doesn’t know much about the web survey system; they will focus on where the questionnaire may fail. Also get someone who knows the web survey system well but not so much about the questionnaire; their focus will be on the survey system’s weaknesses. And lastly, get someone who is not familiar with either the questionnaire or the web-based survey system. This tester comes in with no preconceived ideas of what they will find, and often picks up on things the rest of us cannot see.

Gather web survey comprehension / usability input as well as correctness.

Testing is not just about correctness (typos, logic, etc.); it is also about ensuring that we are measuring the desired construct. To do that, the survey must be easy to use and understandable. Encourage testers to comment on comprehension and usability issues. While sometimes hard to hear, these comments can be study savers. There are no “dumb tester comments” in this field.

About the Author

Scott D. Crawford

Scott D. Crawford is the Founder and Chief Vision Officer at SoundRocket. He is also often found practicing being a husband, father, entrepreneur, forever-learner, survey methodologist, science writer & advocate, and podcast lover. While he doesn’t believe in reincarnation, he’s certain he was a Great Dane (of the canine type) in a previous life.