Health professions educators are increasingly engaged in educational scholarship, and surveys are often suggested as a way to collect data for scholarly activities related to teaching, learner assessment, program evaluation, and research, to name just a few. However, survey design is both an art and a science, and poorly designed surveys are unlikely to provide credible data. In this blog post, I offer six principles to guide the design and development of high-quality surveys in health professions education, with the ultimate goal of helping readers design better surveys for collecting better data.

Principle #1: The questions shape the answers

As survey designers, how we frame our questions largely determines the answers we get. Schwarz made this point more completely when he noted: “self-reports of behaviors and attitudes are strongly influenced by features of the research instrument, including question wording, format, and context.” For example, a teacher could likely get a large proportion of her students to answer a survey question in a “positive way” by simply providing an unbalanced set of response options. In the example below, three of the four response options are positive:

How would you rate the instructional quality of this course?

  • outstanding
  • very good
  • good
  • fair

By contrast, a balanced scale would offer an equal number of positive and negative options (e.g., very good, good, fair, poor, very poor).
Presumably, however, as ethical educators and scientists, obtaining the answers we want is not our primary goal when designing and administering a survey. As such, it is important to follow evidence-informed best practices when writing items and formatting the visual layout of our surveys.

Principle #2: A great deal of cognitive work is required to generate optimal answers to a survey

Stated more simply, it takes a lot of energy on the part of respondents to be thoughtful and complete when answering a survey.

One of the models used to describe the cognitive work associated with taking a survey is called the cognitive response process model. It says that when respondents take a survey, they work through four cognitive processes. First, they have to comprehend the question; that is, they must interpret the meaning of the words on the page. Next, they have to retrieve the relevant information from their long-term memory. That information could be specific dates for things they’ve done, or it could be an attitude or opinion about a topic. Next, they have to integrate that information into a judgment and, in some cases, might also have to make an estimation. For example, respondents asked to report how often they gave blood in the last year might not remember all those instances and therefore may need to estimate that number based on how often blood drives are conducted at their work site. Finally, once respondents have an answer in mind, they need to report that answer on the survey; in addition, they may have to adjust their answer to fit the response options provided.

Importantly, things can go wrong at any one of these processing steps. For instance, respondents might misunderstand the question because the visual layout is confusing; fail to retrieve the relevant information because they’ve forgotten it; or be unable to make a good judgment because they lack the information needed to give an informed answer. In all of these examples, response errors are likely to occur, and the answers provided will be difficult, if not impossible, for the survey designer to interpret.

Principle #3: Respondents are generally unmotivated to take a survey

Respondents are often busy professionals who are just not that motivated to sit down and work their way through a survey that is likely quite boring. Instead, respondents usually agree to take part in a survey out of the goodness of their hearts, because we’ve asked them nicely or promised them some sort of small incentive for their participation.

When considering respondent motivation, we can think of it in at least two different ways. First is the motivation to even begin the survey. Second is the motivation to answer the questions thoughtfully, accurately, and completely. In most instances, however, potential respondents are unmotivated both to begin the survey and to put forth the cognitive energy needed to carefully work through the four cognitive processes described above. With this in mind, we as survey designers should do everything we can to bolster respondent motivation. In most cases, motivation can be bolstered by doing several things, including designing a high-quality survey that is easy to understand, providing up-front incentives to potential respondents, and obtaining sponsorship from a trusted person or organization. In addition, keeping the survey short can pay big dividends because, all else being equal, a shorter survey is always better than a longer survey when it comes to respondent motivation. 

Principle #4: A survey is a conversation between the survey designer and the respondents

Because a survey is a kind of conversation, it is important to remember that respondents will make a number of assumptions when taking the survey, assumptions that are based on conversational norms. There are four basic assumptions, sometimes referred to as Gricean maxims, and they are all interrelated. The first maxim is that of quantity: in a conversation, we assume that the person speaking with us is providing the right quantity of information (and we all know people who give us much more information than we really wanted). Next is the maxim of quality: we assume that people are being truthful, unless we have reason to believe otherwise. Next is relevance: we assume the person is giving us information that is relevant to the conversation at hand. And finally, the fourth maxim is that of clarity or manner: we assume the person is trying to communicate clearly.

So, as survey designers in conversation with our respondents, we should try to honor these conversational norms. If, for example, we provide confusing or inconsistent information to respondents and then ask for their opinions about that information, we should not be surprised if we get strange answers. After all, the person on the other end of the conversation (the respondent) is assuming the information we have provided is truthful, relevant, and clear, and so they will do their best to provide an appropriate answer.

Principle #5: One can never know exactly how a survey will function until it is pretested

Practically speaking, this principle may be the most important idea. That is because, despite our best efforts in designing a high-quality survey using evidence-informed best practices, we just never know how well a survey will function until individuals start taking it.

There are two primary ways to pretest a survey. The first is through expert reviews where we ask experts to read and provide written feedback on the survey prior to its use. Those experts can be content experts or survey design experts (or both), and they can provide feedback on, among other things, item clarity and whether or not we are missing key aspects of what it is we are trying to measure with our survey. The second way to pretest a survey is to conduct cognitive interviews. A cognitive interview is a qualitative analytic approach where we sit down with individuals who are similar to the sample of interest and ask them to work through a draft of the survey. For example, if we ultimately intend to survey medical students, we might do a cognitive interview with a few current students or recent graduates. During those cognitive interviews, the students would take the survey and provide verbal feedback to the designer on what is and is not working on the survey.

Once completed, the results from expert reviews and cognitive interviews can be used as validity evidence, which can bolster the survey’s credibility as a useful measurement tool. What is more, these pretesting efforts should be reported in any articles that discuss the survey.  

Principle #6: Good decisions cannot be made from bad surveys

When we create a survey, we are usually trying to make a decision based in part on the survey results. If we are surveying residents, maybe we are trying to decide if the trainees are satisfied with their graduate medical education. Or, if we are conducting an end-of-course evaluation, maybe we are trying to decide if the students felt the course achieved its objectives. Or, if we are surveying practicing physicians as part of a research project, maybe we are trying to decide if our predictions about self-reported practice patterns are correct. In all of these cases, we are attempting to make a decision. However, if the quality of the survey is low and the respondents do not understand the questions being asked, then the likelihood we have collected meaningful data to inform our decision is quite low, and our survey efforts will have been wasted.

What are your tips for survey design and development? Share your experiences below and join the conversation!

 

The original artwork featured here is printed with permission and was created by Doug Dworkin. Doug can be followed on Twitter.

 

Disclaimer: "The views expressed in this blog are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government."

 

Did you know that the Harvard Macy Institute Community Blog has had more than 165 posts? Previous blog posts have explored topics including bridging leaders, peer instruction, and health systems science.

 

Anthony R. Artino, Jr.

Anthony R. Artino, Jr., PhD, is a health professions educator and researcher; he is also a Captain in the U.S. Navy. Tony currently serves as Professor and Deputy Director for the Graduate Programs in Health Professions Education in the Department of Medicine at the Uniformed Services University of the Health Sciences. Tony’s areas of professional interest include survey design, self-regulated learning, and the responsible conduct of research. Tony can be followed on Twitter or LinkedIn.