Ian Curtis
Because I was curious after I wrote this post, I compiled all of the match data from week 1 to see what it took to win a qualifying match across the country this week.
To use this graph, pick a score on the bottom axis. Then look up until you hit the red line (the red line is the empirical data; the blue line is MINITAB trying to fit a normal curve to it). Then read across to the vertical axis: that is the percentage of matches you would've won in week 1 if you had scored that many points. To read the table below: if your alliance scored a single point, that would've been enough to win 35% of week 1 matches.
By percentiles:
35% of matches: 1 point
50% of matches: 5 points
60% of matches: 8 points
70% of matches: 13 points
80% of matches: 21 points
85% of matches: 30 points
90% of matches: 34 points
95% of matches: 42 points
Highest losing score in week 1: 81 points
There are lots of variables this doesn't consider, so it isn't the be-all and end-all. Since minibot scoring depends on how the other alliance's minibots perform, this data is not conclusive. It is a very strong indicator, however, that a working minibot is probably the way you want to spend your time.
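For reference, the idea behind the lookup is just an empirical distribution of losing-alliance scores. Here is a minimal Python sketch of it; the score list is a placeholder, not the real data (the actual compilation was done with a MATLAB script, and the plot comes from MINITAB), and ties are ignored for simplicity:

```python
# Sketch: for a given score, what fraction of matches it would have won,
# judged against each match's losing alliance. The list below is a
# placeholder, not the real week-1 data.
losing_scores = [0, 0, 0, 1, 2, 5, 6, 8, 13, 18, 21, 30, 34, 42, 81]  # placeholder

def win_fraction(score, losing):
    """Fraction of matches in which `score` beats the losing alliance's score."""
    return sum(1 for s in losing if score > s) / len(losing)

for pts in (1, 5, 8, 13, 21, 30, 34, 42):
    print(f"{pts:2d} points -> beats the losing alliance in "
          f"{100 * win_fraction(pts, losing_scores):.0f}% of these matches")
```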
07-03-2011 17:03
Ian Curtis
Sorry for the doublepost, but I didn't want to tie up CD-Media. Below is the link to the same graph, but for winning alliances. You can see that there is a wide gap between the average winning and losing scores.

07-03-2011 21:45
jason_zielke
Great job on the data!
I am really surprised to see that >50% of the matches could have been won by placing a single ubertube on the top row in autonomous...
and...
an ubertube, one tube in teleop (over the ubertube), and a first-place minibot wins 95% of the time!
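(Checking that arithmetic, assuming the 2011 values of 1/2/3 points for bottom/middle/top-row tubes, doubled by an ubertube scored in autonomous, plus 30 points for first place in the minibot race — a quick sketch:)

```python
# Quick check of the totals above, assuming the 2011 Logo Motion values:
# tubes score 1/2/3 points on the bottom/middle/top row, an ubertube hung
# in autonomous is worth double its row value and doubles a tube placed
# over it in teleop, and first place in the minibot race is worth 30 points.
TOP_ROW = 3
uber_top = 2 * TOP_ROW        # ubertube on the top row in autonomous -> 6 points
tube_over_uber = 2 * TOP_ROW  # teleop tube hung over that ubertube   -> 6 points
minibot_first = 30            # first minibot to the top of its tower -> 30 points

print(uber_top)                                   # 6, which clears the 5-point (50%) mark
print(uber_top + tube_over_uber + minibot_first)  # 42, the 95% mark in the table above
```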
I am really interested to see how this changes in week 2.
08-03-2011 01:45
waialua359
I'm shocked by the data, even for a week 1 tournament.
How is it that if you score 1 point, it accounts for 35% of the wins?
Given that this game is similar to 2007, I find it hard to believe alliances couldn't score on the high pegs even once. Minibot? Yes, it's tough to deploy reliably. But scoring on any peg just once in a two-minute teleop period? C'mon.
08-03-2011 01:55
Joe Ross
Quoting waialua359:
> I'm shocked by the data, even for a week 1 tournament.
> How is it that if you score 1 point, it accounts for 35% of the wins?
08-03-2011 02:27
David Brinza
Quoting waialua359:
> I'm shocked by the data, even for a week 1 tournament.
> How is it that if you score 1 point, it accounts for 35% of the wins?
08-03-2011 02:50
waialua359
Yes, I get the penalties part.
That implies that either it's that bad or there were just too many red-card matches.
08-03-2011 07:10
GaryVoshol
08-03-2011 08:44
Taylor
> I saw several matches this week where PENALTIES knocked the score to zero, or where PENALTIES were the factor between winning and losing.
08-03-2011 10:26
JesseK
> That is by no means a 2011 anomaly; it's the case in every FRC game.
> Play clean, score a logo on the bottom row, you've (statistically) got a winning percentage!
08-03-2011 12:27
Ian Curtis
FWIW, I intend to keep compiling this every week. In the off season I'd like to compile it for previous years too to see just where you can draw the line for a "good" FRC robot.
That said, anecdotally, FRC teams are forever optimistic. I have seen nothing that would statistically support tube scoring over minibot scoring at the qualifying level. Using Bongle's excellent OPR calculator, it would seem that there were about 4 robots per event that averaged 30+ points per match. There were typically about 8 that averaged 20+ points per match.
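For anyone unfamiliar with OPR: it is the usual least-squares estimate of each team's average point contribution, fit from alliance scores. A minimal sketch with made-up team numbers and scores (not Bongle's actual implementation):

```python
# Minimal OPR sketch: each alliance in each qualification match gives one
# equation  rating(t1) + rating(t2) + rating(t3) ~ alliance score, and the
# per-team ratings are the least-squares solution of the whole system.
# Team numbers and scores below are made up for illustration; at a real
# event every team plays many matches, so the system is well overdetermined.
import numpy as np

alliances = [([11, 22, 33], 40), ([44, 55, 66], 12),
             ([11, 44, 77], 35), ([22, 55, 88], 20)]

teams = sorted({t for members, _ in alliances for t in members})
col = {t: i for i, t in enumerate(teams)}

A = np.zeros((len(alliances), len(teams)))  # 1 where a team was on that alliance
b = np.zeros(len(alliances))                # that alliance's score
for row, (members, score) in enumerate(alliances):
    for t in members:
        A[row, col[t]] = 1.0
    b[row] = score

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(f"team {t}: OPR ~ {opr[col[t]]:.1f}")
```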
Build a good minibot, and you still might not win every race. However, I find it hard to believe you wouldn't average somewhere between first and second place, and that puts you in the top 8 robots at the event!
Admittedly, scores drastically increase during the elimination phase. While the average total number of points scored in qualifiers was a little under 50, it seems that a very high percentage of week 1 elimination matches had a total score of over 100. Tube ability certainly factors in at that point, and the chance of having a field deep enough to pick two good tube-scoring robots at the regional level is probably pretty slim.
Anecdotally, 3467 seeded 16th and was the first pick of the 5th alliance at BAE with a consistent minibot and bottom-row scoring. I think the fact that they couldn't place tubes high was a major issue for them in the elims, though.
Can't wait until this weekend!
> Not sure I agree with that in practice for an overall season of competitions. I think it's still too early to draw any holistic statistical conclusions. It will be interesting to watch 2 things:
> 1.) How each week's individual data changes
> 2.) How the combined data averages out over the overall season
> For example, we have to skimp on our minibot deployment because our lift is so heavy. It's so heavy because it has to reach the top row (2 stages). It wasn't really in our capability this year to make the lift lighter, though there are other options for us to pursue if weight is an issue. Yet on build day 2, we decided that since we were competing in a Week 4 regional, and it would be the 2nd regional for many teams, our NEED was to put 1+ logos on the top row; the minibot could be secondary. I think we'll see minibots still be factors in match wins, yet overall scores will become higher because of tubes more than minibots. Thus, the weight and effort are worth the 2nd stage for the lift. If we had attended a week 1 regional, I possibly could have driven the discussion more towards middle row + better minibot.
14-03-2011 01:34
Ian Curtis
Here is the lowdown for Week 2.
The average losing/tying alliance scored 12.37 points. The average winning/tying alliance scored 40.62 points. These were compiled by the same MATLAB script as the week 1 results, so in both cases a tie counts on both sides (losing/tying and winning/tying). The average losing alliance scored a little over a point more than last week, and the average winning alliance scored a little over 3 points more.
The highest losing score was 76 points.
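Roughly, the computation looks like the Python sketch below; the score pairs are placeholders, and a tie counts on both sides, as noted above (the real numbers come from the MATLAB script).

```python
# Sketch of the averages above: each match is a (red_score, blue_score) pair,
# and a tied match is counted on both sides, matching the "winning/tying"
# and "losing/tying" convention. The score pairs here are placeholders.
matches = [(40, 12), (7, 7), (55, 0), (31, 18)]  # placeholder data

winning_or_tying = [max(r, b) for r, b in matches]
losing_or_tying = [min(r, b) for r, b in matches]

print("avg winning/tying:", sum(winning_or_tying) / len(matches))
print("avg losing/tying: ", sum(losing_or_tying) / len(matches))
```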

