That’s really tempting. Survey design is such a large topic that I’m not sure what I could get done in a timely manner. That said, let me spitball for a few minutes in this thread:
Please let me know if there are specific things you want to know more about, or if you want resources for particular parts of research design!
And please pardon all of the political examples, it just makes more sense to use them than to not.
Step 1: Identify a research question
You’ll want to do some sort of literature review. In an academic paper, this would be looking for a bunch of scholarly sources related to your topic to get the full picture of what data is already out there. With something like this, you won’t need to go quite so big (or academic), but you should still have some sources showing what’s out there already.
Then, identify a gap in the literature. What is unclear, out of date, or incomplete that you want to solve? For an academic paper, you’re looking at all of this in terms of data and research. For an innovative solution project, it can be a bit different. For example, say you are looking into mobility on stairs for people in wheelchairs. Your lit review will look for all the innovations you can find that solve related problems. Does a solution exist but fall short? Is there no solution at all? Is the current solution good, but one you can improve on? That is your gap in the literature.
Then, you write your research question. For academics, it’s simply about addressing the gap. For example, if you’re researching how political party correlates to voter turnout and the last such study was conducted in 2012, the gap is the newer data and your research question becomes “how does party ID correlate…”
Step 2: Create a hypothesis
This is the answer to your research question. You can create a directional or non-directional hypothesis. For example, a directional hypothesis would be “Democrats have higher voter turnout than Republicans” (or the opposite). A non-directional would be “There is a difference in voter turnout between Democrats and Republicans.”
Since you are essentially doing market research, your hypothesis boils down to “my customers/audience will like this product/solution.” Keep that hypothesis in mind as we move on to step 3.
Step 3: Operationalize your hypothesis (the fun part!)
Maybe I’m just a nerd but I think this is where research starts to get really fun. So, you have a question, and you have a proposed answer (hypothesis). Now we get to test it!
Step 3A: Identify your variables
What variables do you want to test? Are they quantitative or qualitative? What is the independent variable and what is the dependent variable? Are there any control variables?
Sticking with the voter turnout question, our IV is political party affiliation (qualitative, nominal) and our DV is voter turnout (quantitative, interval-ratio). That is, when political party changes, we want to see what happens to voter turnout. If you want to get fancy (and you do!), you would control for things like age. For example, you wouldn’t want to compare a 25-year-old Democrat with a 75-year-old Republican to see how voter turnout differs, because you wouldn’t know what is causing the change: the age or the party.
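To make the age-control idea concrete, here’s a rough sketch of comparing within age brackets at analysis time. All of the records below are made up, and real studies typically use proper statistical controls (like regression) rather than simple bracketing:

```python
from collections import defaultdict
from statistics import mean

# Made-up respondent records: (age bracket, party, voted in last election?)
records = [
    ("18-34", "D", 1), ("18-34", "R", 0), ("18-34", "D", 1),
    ("65+",   "D", 1), ("65+",   "R", 1), ("65+",   "R", 1),
]

# Group turnout by (age bracket, party) so every comparison
# stays within a single age bracket.
turnout = defaultdict(list)
for bracket, party, voted in records:
    turnout[(bracket, party)].append(voted)

for key in sorted(turnout):
    print(key, mean(turnout[key]))
```

Now you compare 18–34 Democrats against 18–34 Republicans (and 65+ against 65+), instead of mixing the age effect into the party effect.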
For the purpose of market research, you can have a few different variables. For example, maybe you want to create a “favorability score” to measure general impressions or a “necessity score” to demonstrate interest/demand for the product.
Step 3B: Operationalize the heck out of those variables
Having fun yet? Okay, we have some variables. How do we measure them? You have a ton of options for measuring data. Surveys are only the beginning. I’ll add a section at the end about other ways to measure data, but for these purposes, let’s assume you want to use a survey.
If you measure something like political party (and use a survey), all you need to do is ask political party as a multiple choice question. If you want to identify favorability of a product, your best bet is almost always a Likert scale. A Likert scale is simply one of those “With 1 being strongly disagree and 5 being strongly agree…” type questions.
You can ask a series of questions that contribute to one variable, too! Say you decide that favorability is best measured by a combination of (1) excitement for the product, (2) helpfulness of the product, and (3) willingness to purchase the product*. You pose these three statements on a Likert scale and take the combined total to get a favorability score out of 15:
-This product is exciting to me.
-I find this product helpful.
-I would purchase this product.
*This is not what makes up favorability, necessarily. This is arbitrary. Decide on your own metrics.
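If you want to see what that tallying looks like in code, here’s a minimal sketch. The item names and sample responses are made up, and this metric is just as arbitrary as the one above:

```python
# Hypothetical example: combine three 1-5 Likert items into a
# favorability score out of 15 for each respondent.
responses = [
    {"exciting": 4, "helpful": 5, "would_purchase": 3},  # respondent 1
    {"exciting": 2, "helpful": 3, "would_purchase": 2},  # respondent 2
]

def favorability(resp):
    """Sum the three Likert items (each 1-5) into a score out of 15."""
    return resp["exciting"] + resp["helpful"] + resp["would_purchase"]

scores = [favorability(r) for r in responses]
print(scores)  # [12, 7]
```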
Open-ended questions are generally not good when you’re collecting large amounts of data, since you’ll need to code the responses or sift through them by hand. Open-ended questions are better for generating ideas or getting very specific questions answered. If you really feel the need to have open-ended questions, limit them as much as possible.
Step 4: Collecting the data
This is usually the easiest part. You have all of your questions and you know the format you want to collect them in. If it’s a survey, now is the time to create it. Make it as easy as possible for people to submit their responses, don’t influence them (see the section at the end with more info on this), and sit back and wait for the responses to roll in.
If you’re doing academic or professional research, you’ll need a certain response rate to have confidence in your data. If you have a budget, consider a tool like SurveyMonkey, which lets you market your survey to random people for a fee, or Amazon MTurk (way more work, but cheaper).
Don’t forget to collect some demographic data. It never hurts to collect data you don’t know if you’ll use or not, as long as you don’t collect so much as to turn people away from the survey. Maybe you don’t think age matters for your question, but you might see some trends later – you never know!
Step 5: Interpret your results
I am not the most qualified person on this forum to tell you about statistical methods, so I won’t. I will say that there are a ton of resources out there to help you run all kinds of statistical tests on your data and produce meaningful results.
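That said, here’s one concrete illustration of where you might start with the non-directional turnout hypothesis from step 2: a two-sample (Welch’s) t-test on the group means. The turnout numbers below are made up, and in practice you’d reach for a stats library (e.g. `scipy.stats.ttest_ind`) and check its assumptions rather than hand-rolling it like this:

```python
import math
from statistics import mean, variance

# Made-up turnout data (fraction of elections voted in) for two groups.
group_a = [0.62, 0.71, 0.65, 0.80, 0.58]
group_b = [0.55, 0.60, 0.52, 0.68, 0.49]

def welch_t(a, b):
    """Welch's t statistic for the difference between two sample means."""
    # Standard error of the difference, using each group's sample variance.
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(group_a, group_b)
print(round(t, 2))
```

A statistic far from zero suggests a real difference between the groups; a stats resource will tell you how to turn that into a p-value for your sample size.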
Additional topics of note
OP asked about completion time. The best way to measure this is to take the survey. Have five or ten students take a draft of your survey and see how long it takes them. Don’t forget to clear the data if they’re not part of the survey population.
Assuming you are not offering respondents any incentive for completing the survey, most will give up after about 5 minutes. Obviously this varies, but if you’re asking more than 15-20 questions, you’re not going to get very many responses. 15-20 is already pushing it and should only be done if necessary.
Remember, the more data you collect the more data there is to interpret. This is not always a bad thing, but if you have mounds and mounds of data, it will take you a while to go through it.
Methods for data collection
There are benefits to lots of different means of collecting data. Surveys are not the only answer and I wouldn’t discount the other options. Here are a few:
We know surveys: a series of questions which respondents are asked to answer, conducted online, in person, or by phone.
A focus group is great if you are trying to generate ideas. You get a bunch of people in a room (or Zoom), ask them some questions, and let a discussion flow. You can generate some great ideas when people who don’t know each other are in conversation with each other and can also get the general consensus on something pretty quickly.
You probably can’t/won’t do this for FIRST, but in an ethnography, a researcher immerses themselves in an environment to collect data firsthand. They might go to a foreign country and embed themselves in a community. Or maybe someone researching Congress actually gets a job or shadows someone in a Congressional office.
If you need to collect mostly in-depth qualitative data, and the quality and length of the responses matter more than the quantity, conduct 1:1 interviews with your sample population. For example, maybe you want to hear about issues facing a small community of 100 people. Rather than trying to get 75 of them to participate in a survey so that you have enough data points, you could have in-depth conversations with 10 of them. Even though a 10% response rate is obviously much lower than 75%, the quality and utility of your data will probably be much better with the 10.
A few things to be careful of when conducting research:
Use screening questions
I can’t tell you the number of times I’ve clicked on a survey only to find that there is a required question which does not apply to me. For example, I would see a question like “What time do you wake up for school?” when I’m not in school and don’t wake up at a specified time. That’s maybe not the best example, but there are a ton of irrelevant questions like this.
You can fix this easily with screening questions. For example, I was doing research a few months ago on attitudes towards vote by mail among the electorate. I decided that I wanted my population to be eligible voters in the United States as of the 2016 presidential election. To do this, I implemented two screening questions. First, I asked respondents for their date of birth. If my logic showed they were at least 18 on the date of the 2016 general election, I then asked whether they were eligible to vote (this was easier than asking about citizenship, criminal history, which varies by state, etc.).
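If you’re curious what that date-of-birth logic looks like outside of a survey tool, here’s a minimal Python sketch. The cutoff date is election day, November 8, 2016; the function name and example birthdates are just for illustration:

```python
from datetime import date

ELECTION_2016 = date(2016, 11, 8)  # 2016 U.S. general election day

def was_18_by_election(dob):
    """True if a respondent born on `dob` was at least 18 on election day."""
    # Subtract one year if their birthday hadn't happened yet that year.
    years = ELECTION_2016.year - dob.year - (
        (ELECTION_2016.month, ELECTION_2016.day) < (dob.month, dob.day)
    )
    return years >= 18

print(was_18_by_election(date(1998, 11, 8)))  # True: turned 18 on election day
print(was_18_by_election(date(1999, 6, 1)))   # False: still 17 in Nov 2016
```

Survey tools like Google Forms or Qualtrics let you express the same check with branching/skip logic, so ineligible respondents never see the rest of the questions.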
Don’t use leading questions.
Leading questions lead the respondent to the answer you want them to provide. For example, a leading question would be “What do you think about FIRST’s inappropriate fundraising tactics expressed towards volunteers?” That question is framing the subject in question in a very specific light. For reliable data, rephrase it as “What do you think about FIRST’s fundraising tactics towards volunteers?” Easy fix. Just be careful.
The non-attitude problem
Think about the questions you plan on asking. Do the people who are likely to fill out your survey (a) know enough about the topic to have an informed opinion and (b) care enough about the topic to have a coherent opinion?
This comes up a lot in politics when doing public opinion research on down-ballot candidates. If I asked for your opinion of your city councilmember, most people wouldn’t even know who that is, let alone have a formed opinion of them.
Sometimes, you can provide some additional context before asking the question, if that context would help the respondent without leading them to the answer. For example, if I asked you all to answer "What is your opinion of House Resolution 24?" most of you probably would not have an answer.
However, I could rephrase it like this: "On January 11th, the U.S. House of Representatives introduced a resolution to impeach Donald John Trump, President of the United States, for high crimes and misdemeanors. What is your opinion of this resolution?" My bet is that WAY more of you would have an opinion.