Innovation Challenge - Point me to Survey Design resources?

My students have been digging into the Innovation Challenge and are developing a survey to get insights into their community. They’re pretty excited!

…it currently has 50 questions, almost all of which are free answer…

Do y’all have any resources either targeted to a middle/high school reading level or in Spanish that discuss how to design a survey, in particular how to estimate how long a survey will take a user to complete? My students are telling me “Oh like twenty minutes” and I’m thinking hmmmmmmmmm.

What I’m getting from a casual Google is a bunch of college-level survey design resources that are going to take hours for my students to digest, and they don’t really commit to a “How To” approach or concrete estimates like “A 100-character free answer takes about three minutes”.
Is there a good FLL curriculum on this somewhere that I can start with?
I’ve seen a couple webinars advertised, but didn’t herd the cats well enough to put our students on them - are there recordings I could send out?

(Fortunately, I did just successfully talk them down from “Survey Everyone” to “Survey High School students in our city”. :stuck_out_tongue: )

I’m not aware of guidelines/curriculum for creating a survey… but here are some thoughts relating to surveys I’ve done recently:

  • There is an inverse relationship between the length of a survey and a participant’s likelihood of completing it: the shorter it is, the more participants you’ll have.
  • The data you pull from a survey generally decreases in quality the more questions you add. In my experience, people have short attention spans and the quality of answers will deteriorate as the survey goes on.
  • I don’t know what your survey is pertaining to, but if your students think they need to ask 50 open-ended questions to draw meaningful information from participants, it’s possible they should do more research on their target demographic. A lot of studies have been done on a lot of different topics, and it’s likely they can answer many of their own questions from existing research.
  • For me, I have better luck getting responses to surveys when I can advertise them as “5 minutes or less”. This requires the team to boil their questions down to the absolutely critical ones, which is good practice anyway.
  • In order to collect quantitative data that you can use to justify decisions, it’s generally best to ask participants to rate something on a scale from 1 to X. These questions are also easier to fill out and you can ask more of them in a 5 minute survey.
  • Most people suck at giving open-ended answers. Oftentimes you’ll end up with a bunch of subjective, useless word soup. You’ll also get some nice testimonials that you could use, but there are more effective ways to get those.
  • It’s hard to take actionable steps based on open-ended questions.

Yeah, I’ll probably start Monday with “What are we trying to learn about [target demographic] with this survey?”

Shameless plug. Here’s a half-hour recorded webinar we did with FRC 8027 where we talked about writing survey questions, targeting respondents, and more. It doesn’t offer the metrics you seem to be looking for, but the advice seems applicable nonetheless.

I wholeheartedly agree with @Ryan_Swanson – your survey needs to be much shorter. As a current high schooler myself, I speak from experience when I say I really don’t want to answer 50 questions. To get more responses, make your questions specific and easy (yes/no, 1-5), and make nearly everything optional.

If you’re really trying to dig deep into some of these, set up some times for your team to speak with other high school students in a “formal” interview to learn about their needs. You’ll be able to dig deeper with a hand-picked audience and the conversation will be more natural and enjoyable than typing in text boxes.


I think this is where we ask @jaredhk if he knows of any resources (or would be willing to be one).

Also note: How you FRAME the survey is important. If you frame it as “we are looking to do a project to benefit your community, and this is where we’d like your input at this time”, you’ll probably get better results than “Please fill out this survey, it’s important to us”.


That’s really tempting. Survey design is such a large topic, I’m not sure what I could get done in a timely manner. That said, let me spitball for a few minutes in this thread:

Please let me know if there are specific things you want to know more about, or if you’d like resources for particular parts of research design!

And please pardon all of the political examples, it just makes more sense to use them than to not.

Step 1: Identify a research question

You’ll want to do some sort of literature review. In an academic paper, this would be looking for a bunch of scholarly sources related to your topic to get the full picture of what data is already out there. With something like this, you won’t need to go quite so big (or academic), but you should still have some sources showing what’s out there already.

Then, identify a gap in the literature. What is it that is unclear, out of date, or incomplete that you want to solve? For an academic paper, you’re looking at all of this in terms of data and research. For an innovative solution project, it can be a bit different. For example, say you are trying to look into mobility for people in wheelchairs on stairs. Your lit review will look for all the innovations you can find that solve related problems. Does something solve it, but not well enough? Is there no solution? Is the current solution good but you can improve on it? This is the gap in the literature.

Then, you write your research question. For academics, it’s simply about filling that gap. For example, if you’re researching how political party correlates to voter turnout and the last time a study was conducted was 2012, the gap is the lack of newer data, and your research question becomes “How does party ID correlate…”

Step 2: Create a hypothesis

This is the answer to your research question. You can create a directional or non-directional hypothesis. For example, a directional hypothesis would be “Democrats have higher voter turnout than Republicans” (or the opposite). A non-directional would be “There is a difference in voter turnout between Democrats and Republicans.”
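To show how that distinction plays out later: a directional hypothesis maps to a one-sided statistical test, and a non-directional one to a two-sided test. Here’s a minimal Python sketch, assuming you have scipy installed; the turnout numbers are invented purely for illustration, and a t-test on vote/no-vote data is a simplification:

```python
from scipy import stats

# Invented data: 1 = voted, 0 = did not vote. Purely for illustration.
democrats = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
republicans = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]

# Non-directional hypothesis -> two-sided test:
# "There is a difference in voter turnout between the two parties."
_, p_two_sided = stats.ttest_ind(democrats, republicans, alternative="two-sided")

# Directional hypothesis -> one-sided test:
# "Democrats have higher voter turnout than Republicans."
_, p_one_sided = stats.ttest_ind(democrats, republicans, alternative="greater")

print(f"two-sided p = {p_two_sided:.3f}, one-sided p = {p_one_sided:.3f}")
```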

Since you’re essentially doing market research, your hypothesis is basically “will my customers/audience like this product/solution?” Keep that hypothesis in mind as we move on to step 3.

Step 3: Operationalize your hypothesis (the fun part!)

Maybe I’m just a nerd but I think this is where research starts to get really fun. So, you have a question, and you have a proposed answer (hypothesis). Now we get to test it!

Step 3A: Identify your variables

What variables do you want to test? Are they quantitative or qualitative? What is the independent variable and what is the dependent variable? Are there any control variables?

Sticking with the voter turnout question, our IV is political party affiliation (qualitative, nominal) and our DV is voter turnout (quantitative, interval-ratio). That is, when we change political party, we want to see what happens to voter turnout. If you want to get fancy (and you do!) you would control for things like age. For example, you wouldn’t want to compare a 25-year-old Democrat with a 75-year-old Republican to see how voter turnout differs, because you wouldn’t know what is causing the change: the age or the party.

For the purpose of market research, you can have a few different variables. For example, maybe you want to create a “favorability score” to measure general impressions or a “necessity score” to demonstrate interest/demand for the product.

Step 3B: Operationalize the heck out of those variables

Having fun yet? Okay, we have some variables. How do we measure them? You have a ton of options for measuring data. Surveys are only the beginning. I’ll add a section at the end about other ways to measure data, but for these purposes, let’s assume you want to use a survey.

If you measure something like political party (and use a survey), all you need to do is ask political party as a multiple choice question. If you want to identify favorability of a product, your best bet is almost always a Likert scale. A Likert scale is simply one of those “With 1 being strongly disagree and 5 being strongly agree…” type questions.

You can ask a series of questions that contribute to one variable, too! So, maybe you decide that favorability is best measured by a combination of (1) excitement for the product, (2) helpfulness of the product, and (3) willingness to purchase the product.* You pose these three statements on a Likert scale and take the combined total to get a favorability score out of 15:
- This product is exciting to me.
- I find this product helpful.
- I would purchase this product.

*This is not what makes up favorability, necessarily. This is arbitrary. Decide on your own metrics.
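To make the scoring concrete, here’s a tiny Python sketch of turning those three Likert items into one favorability score out of 15 (the field names and response values are made up):

```python
# Each response holds the three Likert items (1-5 each) described above.
# Field names and values are hypothetical.
responses = [
    {"exciting": 4, "helpful": 5, "would_purchase": 3},
    {"exciting": 2, "helpful": 3, "would_purchase": 2},
]

for r in responses:
    favorability = r["exciting"] + r["helpful"] + r["would_purchase"]
    print(f"Favorability: {favorability}/15")
```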

Open-ended questions are generally not good when you’re collecting large amounts of data. You’ll need to code the responses or sift through them by hand. Open-ended questions are better for generating ideas or getting very specific questions answered. If you really feel the need to have open-ended questions, limit them as much as possible.

Step 4: Collecting the data

This is usually the easiest part. You have all of your questions and you know the format you want to collect them in. If it’s a survey, now is the time to create it. Make it as easy as possible for people to submit their responses, don’t influence them (see the section at the end with more info on this), and sit back and wait for the responses to roll in.

If you’re doing academic or professional research, you’ll need a certain response rate to have confidence in your data. If you have a budget, consider a tool like SurveyMonkey which lets you market your survey to random people for a fee or Amazon MTurk (way more work but cheaper).
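If you want a ballpark for how many responses you need, the standard sample-size formula for a proportion is a decent starting point. A rough sketch; the population size below is a placeholder, so plug in your own:

```python
import math

# Sample size for estimating a proportion at 95% confidence:
#   n = z^2 * p * (1 - p) / e^2
# then a finite-population correction, since the population here
# (one city's high school students) is small.
z = 1.96   # z-score for 95% confidence
p = 0.5    # worst-case proportion (maximizes required n)
e = 0.05   # +/- 5% margin of error
N = 2000   # placeholder population size; use your city's actual number

n = (z**2 * p * (1 - p)) / e**2      # ~384 for a very large population
n_adjusted = n / (1 + (n - 1) / N)   # finite-population correction
print(math.ceil(n_adjusted))         # -> 323 with these numbers
```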

Don’t forget to collect some demographic data. It rarely hurts to collect data you’re not sure you’ll use, as long as you don’t collect so much that you turn people away from the survey. Maybe you don’t think age matters for your question, but you might see some trends later – you never know!

Step 5: Interpret your results

I am not the most qualified person on this forum to tell you about statistical methods, so I won’t. I will say that there are a ton of resources out there to help you do all kinds of statistical tests on your data and produce meaningful results.
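Even without fancy statistics, a simple group-by summary gets you surprisingly far. A sketch with pandas (the column names and numbers are invented):

```python
import pandas as pd

# Hypothetical exported survey responses; column names are made up.
df = pd.DataFrame({
    "age_bracket": ["14-15", "14-15", "16-17", "16-17", "18+"],
    "favorability": [12, 9, 14, 11, 7],
})

# Mean favorability (out of 15) per age bracket: a quick first look for trends.
print(df.groupby("age_bracket")["favorability"].mean())
```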

Additional topics of note

Completion time

OP asked about completion time. The best way to measure this is to take the survey. Have five or ten students take a draft of your survey and see how long it takes them. Don’t forget to clear the data if they’re not part of the survey population.

Assuming you are not offering respondents any incentive for completing the survey, most will give up after about 5 minutes. Obviously this varies, but if you’re asking more than 15-20 questions, you’re not going to get very many responses. 15-20 is already pushing it and should only be done if necessary.
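If you want a rough pre-pilot estimate, you can sum per-question timings. The seconds-per-question numbers below are my own guesses, not published benchmarks, so treat the result as a sanity check and still run the pilot:

```python
# Back-of-the-envelope completion-time estimate. The per-question timings
# are rough assumptions, not published benchmarks; verify with a pilot run.
SECONDS_PER_QUESTION = {
    "multiple_choice": 10,
    "likert": 8,
    "short_answer": 60,    # roughly a 100-character free response
    "long_answer": 120,
}

# Example question mix: 10 Likert, 5 multiple choice, 2 short answers.
questions = ["likert"] * 10 + ["multiple_choice"] * 5 + ["short_answer"] * 2

total_seconds = sum(SECONDS_PER_QUESTION[q] for q in questions)
print(f"Estimated completion time: ~{total_seconds / 60:.1f} minutes")
```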

Remember, the more data you collect the more data there is to interpret. This is not always a bad thing, but if you have mounds and mounds of data, it will take you a while to go through it.

Methods for data collection

There are benefits to lots of different means of collecting data. Surveys are not the only answer and I wouldn’t discount the other options. Here are a few:

Surveys

We know surveys. These are conducted either online, in-person, or by phone. It is a series of questions which respondents are asked to answer.

Focus group

A focus group is great if you are trying to generate ideas. You get a bunch of people in a room (or Zoom), ask them some questions, and let a discussion flow. You can generate some great ideas when people who don’t know each other are in conversation with each other and can also get the general consensus on something pretty quickly.

Ethnography

You probably can’t/won’t do this for FIRST, but in an ethnography, researchers immerse themselves in an environment to collect research firsthand. They might go to a foreign country and embed themselves in a community. Or maybe someone researching something about Congress actually gets a job or shadows someone in a Congressional office.

Interview

If you need to collect mostly in-depth qualitative data, and the quality and length of the responses are more important than the quantity of the responses, conduct 1:1 interviews with your sample population. For example, maybe you want to hear about issues facing a small community of 100 people. Rather than trying to get 75 of them to participate in a survey so that you have enough data points, you could have in-depth conversations with 10 of them. Even though the 10% response rate is obviously much lower than the 75%, the quality and utility of your data will probably be much better with the 10.

Be careful

A few things to be careful of when conducting research:

Use screening questions

I can’t tell you the number of times I’ve clicked on a survey only to find that there is a required question which does not apply to me. For example, I would see a question like “What time do you wake up for school?” when I’m not in school and don’t wake up at a specified time. That’s maybe not the best example, but there are a ton of irrelevant questions like this.

You can fix this easily with screening questions. For example, I was doing research a few months ago on attitudes towards vote by mail among the electorate. I decided that I wanted my population to be eligible voters in the United States as of the 2016 presidential election. To do this, I implemented two screening questions. First, I asked respondents for their date of birth. If my logic showed they were at least 18 on the date of the 2016 general election, I asked if they were eligible to vote (this was easier than asking questions about citizenship, criminal history–since this varies by state, etc.).
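In case it helps to see that first screen spelled out, here’s a sketch of the age logic (Election Day 2016 was November 8; the function name is my own):

```python
from datetime import date

ELECTION_DAY = date(2016, 11, 8)  # the 2016 U.S. general election

def was_18_by_election_day(dob: date) -> bool:
    """True if the respondent turned 18 on or before Election Day 2016."""
    # Age on Election Day, accounting for whether the birthday had passed.
    age = ELECTION_DAY.year - dob.year - (
        (ELECTION_DAY.month, ELECTION_DAY.day) < (dob.month, dob.day)
    )
    return age >= 18

print(was_18_by_election_day(date(1998, 11, 8)))  # True: turned 18 that day
print(was_18_by_election_day(date(1999, 1, 1)))   # False: still 17
```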

Don’t use leading questions.

Leading questions lead the respondent to the answer you want them to provide. For example, a leading question would be “What do you think about FIRST’s inappropriate fundraising tactics expressed towards volunteers?” That question is framing the subject in question in a very specific light. For reliable data, rephrase it as “What do you think about FIRST’s fundraising tactics towards volunteers?” Easy fix. Just be careful.

The non-attitude problem

Think about the questions you plan on asking. Do the people who are likely to fill out your survey (a) know enough about the topic to have an informed opinion and (b) care enough about the topic to have a coherent opinion?

This comes up a lot in politics when doing public opinion research on down-ballot candidates. If I asked you your opinion of your city councilmember, most of you wouldn’t even know who that is, let alone have an informed opinion of them.

Sometimes, you can provide some additional context before asking the question if that context would help the respondent without leading them to the answer. For example, if I asked you all to answer “What is your opinion of House Resolution 24?”, most of you probably would not have an answer.

However, I could rephrase it like this: “On January 11th, the U.S. House of Representatives introduced a resolution to impeach Donald John Trump, President of the United States, for high crimes and misdemeanors. What is your opinion of this resolution?” My bet is that WAY more of you would have an opinion.


My team is also working on designing a survey right now. I don’t have any formal resources to share about survey design, but I can share how we got to where we are now.

  1. Our initial brainstorm came up with 24 ideas
  2. We eliminated 10 of them at the next meeting - ones that didn’t quite fit what the challenge was asking for, or which already existed
  3. We split into 2 groups, each of which did a weighted decision matrix for half of the remaining 14 ideas (there’s a sketch of a decision matrix after this list). We had to guess a fair amount because we were still at a pretty early stage with all of them, so we didn’t want to take the winners of the matrices as gospel
  4. We chose 5 ideas that scored well on the matrices and each student took one idea to research further. I gave them a set of questions to look into and find evidence for their answers.
  5. We went over their research at the next meeting and eliminated 2 more ideas that didn’t seem that great after further research. For the remaining 3 ideas, as I expected, there were a number of questions they couldn’t find evidence for, which lent themselves nicely to a market research survey. Questions like “How often do you encounter [problem]?” and “How much extra would you be willing to pay for cleats that _____________?”, and then we added a few demographic/filtering questions such as age, location, whether they play a sport, etc. This left us with an 11-question survey, which is mostly multiple choice with maybe a few short answers (we’re still finalizing it).
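For anyone who hasn’t seen a weighted decision matrix, step 3 boils down to something like this sketch (the criteria, weights, and scores are all invented for illustration):

```python
# Minimal weighted decision matrix, as in step 3 above.
# Criteria, weights, and scores are invented for illustration.
criteria_weights = {"impact": 3, "feasibility": 2, "novelty": 1}

ideas = {
    "idea_a": {"impact": 5, "feasibility": 2, "novelty": 5},
    "idea_b": {"impact": 3, "feasibility": 5, "novelty": 2},
}

# Weighted total per idea: higher is better, but don't take it as gospel.
for name, scores in ideas.items():
    total = sum(weight * scores[c] for c, weight in criteria_weights.items())
    print(f"{name}: {total}")
```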

So my advice:

  1. Reduce the field of ideas as much as you reasonably can before sending out a survey.
  2. For each question you have about the ideas, really think about whether it needs to go in a general survey, or could be answered through research or interviewing an expert.
  3. Any kind of multiple-choice, linear-scale, etc. question is going to go a lot faster than one where respondents have to type anything (one reason lots of surveys use multiple-choice age brackets instead of asking you to type in your age).
  4. Be very conservative about making questions mandatory (the kind where you can’t submit the form unless you’ve selected an answer). When I take surveys, the #1 cause of me closing one without finishing is being required to answer a question I don’t know or care about, especially if it requires a short answer. If you would rather someone complete half the survey than none of it, don’t take that option away.
