SMART Survey Design

While many companies use popular online tools to launch web-based surveys, the results these tools generate automatically cannot always be taken at face value. To produce reliable insights from survey data, it is critical to design your survey strategically and to understand the limitations of these tools’ automated outputs. This may seem easy on the surface, but good survey design requires forethought and expertise.

SMART goals were developed by George Doran* as a means of giving more concrete, implementable criteria to business objectives. Principles of the SMART model can also be extended to inform better survey design. Below are some questions created around each SMART criterion (i.e. specific, measurable, attainable, relevant, and time-bound) applied to survey design and analysis. These questions help to confirm that you are measuring the right variable the right way – resulting in a more impactful and effective survey.

Principles of SMART Survey Design

Specific:

  • Are your survey questions specific enough to only be interpreted in one way?
    • Survey questions should be specific, concise, and clear – not complex, ambiguous, double-barreled, leading, or loaded. Answer sets should be equally clear with mutually exclusive responses and if needed “other” or “not applicable” options.
  • Does your survey include specific demographic and/or other questions to segment your respondents appropriately?
    • Segmentation can go beyond demographic variables such as age, gender, ethnicity, or customer type. Sometimes segmentation is determined after survey results are collected through data analysis and the creation of new variables.
  • What are your specific research hypotheses?
    • Starting with the end in mind (i.e. what do you need – versus – want to know) can help focus your survey and bring depth instead of exploratory breadth.

Measurable:

  • Are you using the right scale of measurement for your hypotheses?
    • The type of response scale you use (i.e. nominal, ordinal, interval, or ratio) dictates what kind of statistical analyses you can perform with your data. For example, if you wanted to know if someone searches for product information online, you could ask:
      • Nominal: Do you search for product information online? Y / N
      • Ordinal: How often do you search for product information online? Never, Rarely, Sometimes, Frequently, or Always
      • Interval: Roughly, what times of day do you search for product information? 8:00AM, 9:00AM, 10:00AM, 11:00AM, 12:00PM, etc.
      • Ratio: How many hours a week do you search for product information online?

Each of these types of responses is analyzed in distinct ways – providing distinct types of information on customer behavior.
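
To make the distinction concrete, here is a minimal Python sketch (the responses are made up for illustration) showing how each scale supports different summary statistics – counts for nominal data, a median for ordinal data, and a true mean for ratio data:

```python
from statistics import mean, median

# Hypothetical responses to the same underlying question at different scales.
nominal = ["Y", "N", "Y", "Y", "N"]  # yes/no answers
ordinal = ["Never", "Rarely", "Sometimes", "Frequently", "Sometimes"]
ratio = [2.0, 5.5, 0.0, 3.0, 7.5]  # hours per week

# Nominal data supports only frequency counts (or the mode).
yes_share = nominal.count("Y") / len(nominal)

# Ordinal data supports a median once categories are mapped to ranks;
# a mean would wrongly assume equal spacing between categories.
rank = {"Never": 0, "Rarely": 1, "Sometimes": 2, "Frequently": 3, "Always": 4}
ordinal_median = median(rank[r] for r in ordinal)

# Ratio data has a true zero and equal intervals, so a mean is meaningful.
avg_hours = mean(ratio)
```

Running a mean over ordinal labels, by contrast, would require the equal-spacing assumption that interval or ratio scales provide – which is exactly why the choice of scale dictates the analysis.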

  • How are you measuring more abstract concepts? … And, will this require data transformations?
    • Concepts such as “customer loyalty” can be measured with a common metric like the Net Promoter Score (NPS). While NPS is based on a single survey question, the data requires a minor transformation and cannot be used “as is” from the direct output of a survey tool. Often data may require certain transformations (i.e. using systematic formulas to recode current variables and/or create new ones). One good starting point is to look at frequency distributions of key variables – are they normal, skewed, bimodal? Sometimes in calculating a mean or average, you lose granularity in the data – and consequently lose insight into your customers’ responses.
  • What patterns are you looking to measure or explore between survey questions?
    • When analyzing survey data, sometimes the biggest insights arise from how certain questions interrelate with each other –assessing data patterns between questions. This kind of analysis often goes beyond basic segmentation or cross tabs.
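
As an illustration of the kind of transformation mentioned above, the standard NPS calculation recodes 0–10 likelihood-to-recommend ratings into promoters (9–10) and detractors (0–6) and reports the percentage-point difference. A minimal sketch, with hypothetical ratings:

```python
def nps(scores):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Respondents scoring 9-10 are promoters, 0-6 are detractors;
    NPS = (% promoters) - (% detractors), reported in points.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical raw ratings as exported from a survey tool:
ratings = [10, 9, 9, 8, 7, 6, 3, 10]
score = nps(ratings)  # 4 promoters, 2 detractors, n = 8 -> 25.0
```

Note that passives (7–8) count toward the denominator but not toward either group – one reason the raw question output cannot simply be averaged.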

Attainable (i.e. realistic):

  • Is your ability to obtain survey results attainable and/or realistic?
    • For example, do you have an active recruiting or mailing list of motivated participants, or do you need to use a vendor’s list? Is your survey too long? Long surveys increase attrition rates. Are you providing enough incentives? Raffle-based incentives will motivate the general population, but harder-to-recruit participants may require individual payments.
  • Are your survey questions realistic for participants to understand?
    • In other words, do your survey questions minimize participants’ cognitive load? Customers would have trouble ranking 10 items (as opposed to 3-5). Having simple and clear questions increases the chance that participants will complete your entire survey.
  • Are your survey questions from a real-world “outside-in” perspective?
    • Do you use language that customers will understand and not internal business or company terms, acronyms, or jargon?

Relevant:

  • Is your survey sample relevant to the outcomes you want to assess?
    • Do you have screening measures in place to ensure that you are targeting the right participant sample? Even when using a screener, it is necessary to manually check the data to ensure that participants who select options such as “other” still qualify for your survey.
  • Are your survey questions relevant to the intersection of your customers’ goals and your business goals?
    • How will the knowledge gained from your survey advance your business goals? If your survey also aligns with your customers’ goals, they will respond more earnestly. Do some survey questions tap into the emotional experience of your customers? Often innovation occurs at this goal intersection.
  • Is each of your data points relevant to your analysis method?
    • Data cleaning is often overlooked, but critical to the success of your survey. Automatically generated outputs from online survey tools do not necessarily correct for skewed data. Cleaning can involve removing outliers (e.g. if participants accidentally input $2400 instead of $240), incorrect or corrupt patterns of data (e.g. if participants speed through the survey – entering all 1s), and irrelevant or incomplete data. Sometimes analyzing patterns of where data is missing will lead to an insight in itself.
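
The cleaning rules above can be sketched in a few lines of Python – the data, thresholds, and rules here are illustrative assumptions, not a prescription:

```python
from statistics import median

# Hypothetical raw data: each row is (reported spend, list of 1-5 ratings).
raw = [
    (210, [4, 3, 5, 4, 2]),
    (240, [3, 3, 4, 2, 4]),
    (2400, [4, 4, 3, 5, 3]),  # likely typo: $2400 entered instead of $240
    (230, [1, 1, 1, 1, 1]),   # straight-liner: every rating identical
    (255, [5, 4, 4, 3, 4]),
    (245, [2, 3, 3, 4, 3]),
]

# Rule 1: drop respondents who gave the same rating to every item
# (a common sign of speeding through the survey).
kept = [row for row in raw if len(set(row[1])) > 1]

# Rule 2: flag spend outliers with a modified z-score based on the median
# absolute deviation, which (unlike mean/stdev) is robust to the outlier itself.
spends = [s for s, _ in kept]
med = median(spends)
mad = median(abs(s - med) for s in spends)
cleaned = [row for row in kept if 0.6745 * abs(row[0] - med) / mad <= 3.5]
```

On larger samples, rules such as Tukey’s 1.5 × IQR fences or a review of missing-data patterns can complement checks like these.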

Time Bound:

While “time bound” in goal setting refers to giving goals a target date for completion, here, the term is more loosely applied regarding how surveys may be influenced by time-related factors.

  • Do you expect any seasonality effects in your data?
    • The time of year your survey launches could generate biased responses (e.g. launching a survey on purchase behavior in mid-December).
  • Do your survey questions ask about recent opinions, attitudes, or events?
    • Customers are not good at projecting five years out or remembering five years back. Keeping your survey questions targeted to the present will result in more accurate responses. Adding time-locked boundaries also makes your survey more specific (e.g. asking “how many times in the past month” something occurred rather than “how often” it occurs on a scale from “never” to “always”).
  • Are you anticipating the time of launch as well as the timing of a reminder email?
    • Launching your survey at 4PM on a Friday may not yield the highest response rate. Planning your survey’s launch day / time and how a reminder email will be sent will maximize the likelihood of reaching your desired response rate.

The most impactful surveys, like most designs, are developed through thoughtful iteration. To avoid collecting unusable or skewed data, build time into your project for testing and revising your survey before launch. Also consider working with an analyst who can guide you past the numerous obstacles and ensure that you are collecting the right data to address your business needs.

*Doran, G. T. (1981). There’s a S.M.A.R.T. way to write management’s goals and objectives. Management Review, 70(11), 35–36.
