FRC Blog - How We’re Doing and FIRST Babies

http://www3.usfirst.org/roboticsprograms/frc/blog-How-Were-Doing-and-FIRST-Babies

How We’re Doing

With much of the regular competition season behind us, I thought it would be a good idea to share some of the feedback we’ve received from teams. A link to a team survey is emailed to the main and alternate contacts for every team competing in a given week. These contacts are encouraged to forward the survey to everyone from their team who participated in that week’s event, so we get responses from students as well as mentors. One of the questions in the survey, Question 13, asks how teams would rate the overall quality of the 2014 Aerial Assist game. This question is basically a generic “What do you think of the game?” Possible responses are ‘Very Poor’, ‘Poor’, ‘Fair’, ‘Good’, and ‘Very Good’. In the graph below, we group the ‘Good’ and ‘Very Good’ ratings together in what could be considered positive ratings of the game. Similarly, we group the ‘Poor’ and ‘Very Poor’ ratings together in what could be considered negative ratings. The rest of the responses, those not shown on the graph, were ‘Fair’.

http://www3.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2014/Q13SurveyResults.jpg
2014 Survey Results - All Teams
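
As an aside, the grouping shown in the graph is simple to reproduce. Here is a minimal Python sketch, not our actual survey tooling, with made-up placeholder counts, just to show how the five-point scale collapses into the positive, negative, and ‘Fair’ buckets:

[code]
from collections import Counter

# Made-up placeholder responses; the real data comes from the weekly team survey.
responses = (["Very Good"] * 120 + ["Good"] * 180 + ["Fair"] * 40
             + ["Poor"] * 8 + ["Very Poor"] * 2)

counts = Counter(responses)
total = sum(counts.values())

# Collapse the five-point scale into the buckets used in the graph.
positive = counts["Good"] + counts["Very Good"]   # positive ratings
negative = counts["Poor"] + counts["Very Poor"]   # negative ratings

print(f"Positive: {positive / total:.1%}")
print(f"Negative: {negative / total:.1%}")
print(f"Fair (not shown on the graph): {counts['Fair'] / total:.1%}")
[/code]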

If you are interested in a finer breakdown between the positive and negative ratings for this last week of competition, here are the current responses (the Week 5 survey is open until Wednesday of this week): 1.2% of respondents rated the game ‘Very Poor’, while 34.8% of respondents rated the game ‘Very Good’. As a point of comparison with last year’s game, Ultimate Ascent, arguably one of our most popular in recent history, 91.3% rated the game positively over all weeks of the regular competition season, with 48.7% rating it ‘Very Good’, while 1.7% rated the game negatively, with 0.3% rating it ‘Very Poor’. (Yes, even for Ultimate Ascent, we had a handful of survey respondents who strongly disliked the game.)

The usual caveats with this kind of survey apply. These results are a straight reporting of those who decided to respond; this is not a scientific survey in which we attempt to get a representative sample of our community. For this last week, as an example, 55.9% of respondents were students, with the balance being mentors and a few other participants (alumni, parents, etc.), and we know our teams as a whole have a higher ratio of students to mentors than this.

We’d love for every game to achieve Ultimate Ascent-like popularity, and we did not reach that level this year. Aerial Assist was a very different game for FRC, with our attempt to have a more sports-like game and strongly encourage teamwork on alliances. Some aspects of the game are working well, and some, such as the burden placed on our volunteer referees, are not. Your feedback is critically important as we work to incorporate the lessons learned from this game into our future game design efforts. Please keep filling in those surveys! Interestingly, the number of responses to this season’s surveys so far, over 3,600, already greatly exceeds the roughly 2,300 responses we received to last year’s weekly surveys over the whole season.

Here’s another interesting graph: Q13 responses just from Rookies.

http://www3.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2014/Q13SurveyResults-Rookie.jpg
2014 Survey Results - Rookie Teams Only

90.7% of our Rookie respondents gave a positive rating to this game last week, with 43.3% giving the game a ‘Very Good’ rating. As a comparison, for Ultimate Ascent, 91.7% gave the game a positive rating over all weeks of the regular competition season, with 64.6% giving the game a ‘Very Good’ rating.

As I said above, we really need to hear from you on those surveys. It’s an important tool for us as we continuously work to improve the quality of our games.

FIRST Babies

I saw a great post on Chief today about ‘FIRST Babies’. Check it out: http://www.chiefdelphi.com/forums/showthread.php?threadid=128377

Rumor has it that Team 88, TJ^2, had a student on the team early on whose child is now a current student member of the team. A second-generation team member! Also, see the post for some photos of youngsters who are ‘growing up FIRST’!

Frank

Fighting the sentiments of a few with data from the majority. Go Frank!

I did not respond to this survey for either of our events because I didn’t have the time necessary to properly and fairly communicate all of the things that I dislike about this game.

Whether or not people realize it’s a mess has no bearing on whether or not it is actually a mess. Fundamentally, it plays poorly, the referees do a poor job of officiating it and the field is broken. FIRST should strive to achieve better than that, irrespective of whether or not people think they’re doing a good job.

I had no idea there was a survey.

A link to the survey is sent only to the main and alternate contacts for the team. That’s how I was aware of it.

I’d be curious to learn more about how closely main/alternate contacts are tied, typically, to the evaluation of their team’s on-field performance. That is, what is the ratio of NEMOs to engineering folks filling the role? Are NEMOs less likely to perceive problems with the implementation of the game?

A link to a team survey is emailed to the main and alternate contacts for every team competing in a given week.

I had no idea either, but I’m obviously not a main or alternate contact.

This feels a bit defensive to me. I believe it is fairly well known that Frank and/or other members of FIRST HQ monitor Chief Delphi, and I feel like this blog was written to try to quell the wave of criticism over various game issues. While I certainly appreciate that HQ may want to show the results, I wish the graphs showed a bit more information. Specifically, I believe the “Fair” responses may make up a significant portion of the total; a graph of absolute quantities for each response would tell a larger story, in my opinion. This sort of blog seems to say “Look at what we have done well” when many people are talking about the major issues in this game. The data here does not seem to match the feedback I have heard from others.

All that said, I still have the utmost respect for those in FIRST HQ and on the GDC. This is not meant as a criticism of any one person; I just wish the data were a bit more complete.

Perhaps they should publish some of the raw survey data and let the people on CD draw their own conclusions.

^

I actually just earlier today filled out a survey in which I rated the game “Good”, simply because I like the concept and the focus on teamwork. However, the rest of my survey was dedicated to discussing all of the issues, which I happen to see as less inherent to the game design than some do, such as refereeing inconsistency, as well as some particularly contentious issues at our own event.

So it’d be nice if FIRST gave a little broader sample of data to inform us of the feedback, rather than show a little preview to calm things down.

It seems this was only one question (explicitly called out as #13) in a multi-question survey, sent to only two contacts per team, who are probably already inundated with FRC-related email.

Results seem valid. :rolleyes:

The game is fun to watch, and I can imagine the students enjoy the sports-like atmosphere of the game. And the blog emphasizes one of the biggest issues with the game (the ref problem), so it’s pretty clear some good will come of this.

All mentors registered in TIMS (and possibly also students in STIMS) can log in, click “Edit My Account,” and scroll down to Email Broadcast Opt-In, to sign up for the email blast that all team leaders receive. Team leaders really should be forwarding them to the entire team (or at least the relevant information), but this is a good way to make sure you don’t miss anything [strike](such as an event survey).[/strike]

EDIT: I checked my archived mail, and it looks like I only got a forwarded message from our team lead. I guess they only send the survey links to Main/Alternate Contacts of teams who competed that week.

I can’t tell if the data is an April Fools joke or legitimate.

So you assume Frank is attempting to mislead us by cherry-picking the statistics he shares? Do you have some evidence for this, or is it based on personal bias and a very vocal few on CD?

I believe the evidence is that the survey was only sent to main and alternate contacts (probably accounting for most of the student:mentor ratio discrepancy), leaving the majority of FIRST participants unaware of its existence, and also that FIRST only posted limited stats on one particular question on the survey, which may or may not fully reflect participants’ satisfaction.

The survey was sent to main and alternate contacts with INSTRUCTIONS that those people send the surveys out to the rest of their team members.

I received an email from one of my team leads asking to fill out the survey for our event, and I did so.

I did not assert he was lying. I merely provided some interesting reading on the topic of statistics and using them to convey a point.

If you’d like my opinion I’d currently be quite unlikely to share it given your habit of accusing me of things that I didn’t say.

I would assume FIRST sends it out to these contacts hoping they forward it to the rest of the team as necessary, rather than try to send it to every single person in STIMS/TIMS.

Ours forwarded it to all students and mentors as per the instructions.

Greetings Teams:
Please don’t forget to send this link to everyone from your team who participated in a Week 5 event: https://www.surveymonkey.com
Your feedback is very important to us! Please be sure to tell us about your experience by Wednesday, April 2nd.

EDIT: I do agree the couple of days after a competition are hectic, so emails may not be forwarded until it is too late to respond. There probably is a disparity; I just don’t know a better way to handle this than the method they currently use.

What I see in this blog post is roughly a 29% relative drop in ‘Very Good’ ratings (48.7% down to 34.8%) and a fourfold (300%) increase in ‘Very Poor’ ratings (0.3% up to 1.2%).

I don’t think this is something to brag about.
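
For reference, here’s the arithmetic behind that comparison, using the percentages quoted in the blog (keeping in mind it sets week 5 of Aerial Assist against all weeks of Ultimate Ascent):

[code]
# 'Very Good' and 'Very Poor' percentages quoted in the blog post.
very_good_2013, very_good_2014 = 48.7, 34.8  # Ultimate Ascent (all weeks) vs. Aerial Assist (week 5)
very_poor_2013, very_poor_2014 = 0.3, 1.2

def relative_change(old, new):
    """Relative change from old to new, as a fraction of old."""
    return (new - old) / old

print(f"'Very Good': {relative_change(very_good_2013, very_good_2014):+.0%}")  # about -29%
print(f"'Very Poor': {relative_change(very_poor_2013, very_poor_2014):+.0%}")  # +300%, i.e. fourfold
[/code]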

We’d love for every game to achieve Ultimate Ascent-like popularity, and we did not reach that level this year.

Emphasis mine. Nowhere in the blog post did I see anything even close to resembling bragging. In fact, he even states that the results this year did not reach Ultimate Ascent’s popularity.