Because I was curious after I wrote this post, I compiled all of the match data from week 1 to see what it took to win a qualifying match across the country this week.
To use this graph, pick a score on the bottom axis, then look up until you hit the red line (the red line is the empirical data; the blue line is MINITAB trying to fit a normal curve to it), then look over to the vertical axis. That value is the percentage of matches you would’ve won in week 1 if you scored that many points. Reading the table below the same way: if your alliance scored a single point, that would’ve been enough to win 35% of week 1 matches.
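For anyone who wants to build the same kind of curve from their own set of raw results, here is a minimal sketch of one way to do it; it is not the actual script behind the graph. It assumes you have a vector opponentScore holding, for each match, the score you would have needed to beat (taken here as the losing alliance's score); that choice, the variable names, and the score grid are all illustrative.

```matlab
% Minimal sketch (not the original script): empirical "% of week 1 matches
% you would have won" as a function of your alliance's score.
% opponentScore is assumed to be an M-by-1 vector, one entry per match.
scoreAxis = 0:5:120;                      % hypothetical score grid
pctWon = zeros(size(scoreAxis));
for k = 1:numel(scoreAxis)
    % fraction of matches where this score beats the opposing score
    pctWon(k) = 100 * mean(scoreAxis(k) > opponentScore);
end
plot(scoreAxis, pctWon, 'r');
xlabel('Alliance score');
ylabel('% of week 1 qualification matches won');
```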
There are lots of variables this doesn’t consider, so it isn’t the be-all and end-all. Since minibot points depend on how the other alliance’s minibots perform, this data is not conclusive. It is a very strong indicator, however, that a working minibot is probably the way you want to spend your time.
Sorry for the double post, but I didn’t want to tie up CD-Media. Below is the link to the same graph, but for winning alliances. You can see that there is a wide gap between the average winning and losing scores.
I’m shocked by the data, even for a week 1 tournament.
How is it that if you score 1 point, it accounts for 35% of the wins?
Considering that this game is similar to 2007, I find it hard to believe an alliance couldn’t score on the high peg even once. Minibot? Yes, it’s tough to deploy reliably. Scoring on any peg just once in a 2-minute teleop period? C’mon.
A RED CARD determines the winner of a MATCH only in eliminations. (Except for the rare case where all 3 ALLIANCE members get a RED CARD, either by their own actions or because an uninspected TEAM is participating.)
I saw several matches this week where PENALTIES knocked the score to zero, or where PENALTIES were the deciding factor between winning and losing.
Not sure I agree with that in practice for an overall season of competitions. I think it’s still too early to draw any holistic statistical conclusions. It will be interesting to watch 2 things:
1.) How each week’s individual data changes
2.) How the combined data averages out over the overall season
Bottom row: still not a great strategy overall, but it IS the only tube strategy that can be built from only the KOP, free stuff via FIRST, and a few other minor things (speed controllers or valves, take your pick). Karthik talked about a team from Chicago that really stretches a minimum budget every year with GREAT success, and I suspect that’s what they chose this year if they still have the same constraints.
Which leads me to my next point about teams knowing their limits during week 1 (JVN/Karthik/others preach this too). Any team can take these graphs for all the weeks, and then for the overall season, and apply them to the decisions made during week 1. Which assumptions turned out to be irrelevant, dead on, or plainly incorrect? What strategy concessions could have been made to build a simpler robot yet still win 75, 80, or 90% of matches in the week you compete?
For example, we have to skimp on our minibot deployment because our lift is so heavy. It’s so heavy because it has to reach the top row (2 stages). It wasn’t really within our capability this year to make the lift lighter, though there are other options for us to pursue if weight is an issue. Yet on build day 2, we decided that since we were competing at a Week 4 regional, and it would be the 2nd regional for many teams, our NEED was to put 1+ logos on the top row; the minibot could be secondary. I think we’ll see minibots still be factors in match wins, yet overall scores will become higher because of tubes more than minibots. Thus, the weight and effort are worth the 2nd stage for the lift. If we had attended a week 1 regional, I possibly could have driven the discussion more towards the middle row + a better minibot.
FWIW, I intend to keep compiling this every week. In the off season I’d like to compile it for previous years too to see just where you can draw the line for a “good” FRC robot.
That said, anecdotally, FRC teams are forever optimistic. I have seen nothing that would statistically support tube scoring over minibot scoring at the qualifying level. Using Bongle’s excellent OPR calculator, it would seem that there were about 4 robots per event that averaged 30+ points per match. There were typically about 8 that averaged 20+ points per match.
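For context, OPR (offensive power rating) is essentially a least-squares estimate of each team’s average point contribution, derived from alliance scores. Here is a minimal sketch of that calculation; it is not Bongle’s actual calculator, and the variable names and data layout are assumed for illustration.

```matlab
% Minimal OPR sketch (not Bongle's calculator). Assumes:
%   allianceTeams - M-by-3 matrix, team indices for each alliance appearance
%   allianceScore - M-by-1 vector, that alliance's final score
nAppearances = size(allianceTeams, 1);
nTeams = max(allianceTeams(:));
A = zeros(nAppearances, nTeams);
for m = 1:nAppearances
    A(m, allianceTeams(m, :)) = 1;     % each listed team contributed to this score
end
opr = A \ allianceScore;               % least-squares per-team point contribution
```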
Build a good minibot, and you still might not win every race. However, I find it hard to believe you wouldn’t average somewhere between first and second place, and that puts you in the top 8 robots at the event!
Admittedly, scores drastically increase during the elimination phase. While the average total number of points scored in qualifiers was a little under 50, it seems that a very high percentage of week 1 elimination matches had a total score of over 100. Tube ability certainly factors in at this point, and the chance of having a field deep enough to pick 2 good tube-scoring robots at the regional level is probably pretty slim.
Anecdotally, 3467 seeded 16th and was the first pick of the 5th alliance at BAE with a consistent minibot and bottom-row scoring. I think the fact that they couldn’t place tubes high was a major issue for them in the elims, though.
The average losing/tying alliance scored 12.37 points, and the average winning/tying alliance scored 40.62 points. These were compiled by the same MATLAB script as the week 1 results, so in both cases the categories are losing/tying and winning/tying. The average losing alliance scored a little over a point more than last week, and the average winning alliance scored a little over 3 points more.
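If anyone wants to check numbers like these against the raw results, here is a minimal sketch of the calculation; it is not the actual script referenced above. It assumes redScore and blueScore are per-match vectors, and ties simply land in both categories, matching the losing/tying and winning/tying grouping.

```matlab
% Minimal sketch (not the actual script): average winning/tying and
% losing/tying alliance scores. Assumes redScore and blueScore are
% M-by-1 vectors of final match scores.
winTieScore  = max(redScore, blueScore);   % a tied score counts in both categories
loseTieScore = min(redScore, blueScore);
fprintf('Average winning/tying alliance score: %.2f\n', mean(winTieScore));
fprintf('Average losing/tying alliance score:  %.2f\n', mean(loseTieScore));
```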