[Plot: 10-match moving average of alliance scores, 2011 season]
This is a plot of the 10-match moving average of alliance scores in 2011. The x-axis is matches in the order they were played (i.e., they are not sorted by event, just by timestamp). A moving average averages a block of matches together, so each successive point changes only one number in the average; this smooths out the volatility in the data. The trends are the same without the moving average, it is just much harder to look at. The data is from the @FRCFMS twitter feed; Andrew Schreiber was nice enough to mine it for me.
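For anyone who wants to reproduce the smoothing, here is a minimal sketch of the 10-match moving average in Python; the `scores` list is just a hypothetical stand-in for the alliance scores mined from the @FRCFMS feed.

```python
def moving_average(values, window=10):
    """Average each block of `window` consecutive values.

    Each successive point drops the oldest score and adds the newest one,
    which smooths the match-to-match volatility without changing the trend.
    """
    averages = []
    for i in range(len(values) - window + 1):
        block = values[i:i + window]
        averages.append(sum(block) / window)
    return averages

# Hypothetical alliance scores in the order the matches were played.
scores = [12, 0, 30, 18, 6, 24, 42, 10, 36, 20, 54, 8]
print(moving_average(scores))  # [19.8, 24.0, 24.8]
```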
I think it's neat for a couple of reasons. There is a clear upward progression in alliance score over the course of the season. The spikes are all elimination matches, so you can see that elimination matches are clearly higher scoring. You can also see how large each week is compared to the others by the distance between the elimination peaks. Also interesting is that the Championship is clearly played on a different plane than the other weeks, as its average match score is in line with a typical elimination round!
Worth noting that some of the upward trend in the data between the peaks could be due to a difference in time zones, as some regions may have been playing elimination matches while other regions were still in qualifying rounds.
In case anyone is interested, I did some more work on pinning down the average robot, which EWCP has posted on their blog.
Average points per robot across all qualifying matches was 1.4 in 2010 and 11.3 in 2011. At your typical event, the 50th percentile robot is in the elimination rounds or on the verge of them.
Some of the other interesting trends in that data: in both 2010 and 2011, about 20% of alliances scored zero points after penalties; in both years, penalties reversed the winner about 5% of the time and turned a win into a tie about 10% of the time. I was not expecting to see such similar numbers between two such different games.
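If anyone wants to run the same tally on their own data, here is a rough sketch; the match records and field names below are made up for illustration and are not the actual FMS export format.

```python
# Hypothetical match records: alliance scores before and after penalties.
matches = [
    {"red_raw": 30, "blue_raw": 18, "red_final": 30, "blue_final": 18},
    {"red_raw": 12, "blue_raw": 10, "red_final": 2,  "blue_final": 10},  # penalties flip the winner
    {"red_raw": 6,  "blue_raw": 25, "red_final": 0,  "blue_final": 25},  # a zero-point alliance
]

def winner(red, blue):
    return "red" if red > blue else "blue" if blue > red else "tie"

zero_alliances = sum((m["red_final"] == 0) + (m["blue_final"] == 0) for m in matches)
flipped = sum(
    "tie" not in (winner(m["red_raw"], m["blue_raw"]), winner(m["red_final"], m["blue_final"]))
    and winner(m["red_raw"], m["blue_raw"]) != winner(m["red_final"], m["blue_final"])
    for m in matches
)
win_to_tie = sum(
    winner(m["red_raw"], m["blue_raw"]) != "tie" and winner(m["red_final"], m["blue_final"]) == "tie"
    for m in matches
)

print("alliances scoring zero:", zero_alliances / (2 * len(matches)))
print("winner reversed by penalties:", flipped / len(matches))
print("win turned into a tie:", win_to_tie / len(matches))
```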
15-12-2011 15:12
IKE
Anyone care to do an analysis on how many teams would have to average net 0 pts. in order for 20% of alliances to have a resultant score of 0 pts.?
For example, with dice: if I have 3 dice, the probability of at least 1 of them being a 1 during a roll would be 3*1/6, or 50%. The probability of 2 being 1s would be 3/2*1/36, or 4.5%. The probability of 3 1s would be 0.5%. At a district event with 80 matches, there would be 160 alliances, and thus I would expect 1,1,1 about 0.8 times, or at 80% of events there would be at least 1 alliance that got 1, 1, 1.
If 0 is assumed as the lower limit, then a 0,0,0 should be difficult to get. If FRC were 2 vs. 2, and 50% of the field could score 1 (or more) while 50% of the field could score 0, I believe you would expect 25% of alliances to have a score of 0.
For 3 vs. 3, it should (in theory) be significantly more difficult... in theory. I guess my argument is that while the "average" robot might correspond with your values, the "median" robot may perform significantly lower...
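As a rough back-of-envelope answer to the question itself: if each robot independently scores zero in a given match with probability p, a 3-robot alliance scores zero with probability p^3, so 20% of alliances at zero would require p = 0.2^(1/3), about 0.58, i.e. just under 60% of robots netting nothing in a typical match. That independence assumption ignores defense, penalties, and negative scores, so treat it as a ballpark only.

```python
# Back-of-envelope: per-robot zero probability needed for 20% of 3-robot
# alliances to score zero, assuming robots score independently.
target = 0.20
p = target ** (1 / 3)
print(round(p, 3))   # ~0.585

# Sanity checks against the intuition above:
print(0.5 ** 2)      # 2v2 with half the field at zero -> 25% of alliances at zero
print(0.5 ** 3)      # 3v3 with half the field at zero -> 12.5% of alliances at zero
```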
15-12-2011 15:12
Andrew Schreiber
Here is the direct link to your article, Ian.
http://ewcp.org/blog/2011/12/08/aver...-to-your-team/
15-12-2011 17:49
Ian Curtis
|
Anyone care to do an analysis on how many teams would have to average net 0 pts. in order for 20% of alliances to have a resultant score of 0 pts.?
For example, with dice: if I have 3 dice, the probability of at least 1 of them being a 1 during a roll would be 3*1/6, or 50%. The probability of 2 being 1s would be 3/2*1/36, or 4.5%. The probability of 3 1s would be 0.5%. At a district event with 80 matches, there would be 160 alliances, and thus I would expect 1,1,1 about 0.8 times, or at 80% of events there would be at least 1 alliance that got 1, 1, 1. If 0 is assumed as the lower limit, then a 0,0,0 should be difficult to get. If FRC were 2 vs. 2, and 50% of the field could score 1 (or more) while 50% of the field could score 0, I believe you would expect 25% of alliances to have a score of 0. For 3 vs. 3, it should (in theory) be significantly more difficult... in theory. I guess my argument is that while the "average" robot might correspond with your values, the "median" robot may perform significantly lower... |
It only strengthens the argument that the typical robot isn't as good as most people think on kickoff.
19-12-2011 15:56
Ian Curtis
Not an exact answer to IKE's question, but a move in that direction. He makes a good point: even if the mean robot scores 5 points, the median (or 50th percentile) robot may score significantly less if there are outliers skewing the high end of the field.
I don't have any actual data for how many points robots score per match, so I used OPR, as that should do a decent job of approximating the real distribution. All data is from BAE in 2011; OPR was calculated with Bongle's OPR program.
First up, a histogram of OPR. OPR can be negative, just as a robot's contribution can be negative (more penalties than points). The mean was 10.1 and the median was 6.7. This certainly supports the hypothesis that the median robot is not as good at scoring as the mean robot.
[Attachment: histogram of OPR at BAE 2011]
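For anyone curious where those numbers come from: OPR is conventionally computed by treating each alliance score as the sum of its three teams' unknown contributions and solving the resulting overdetermined system by least squares. A minimal sketch of that calculation (not Bongle's actual code; the mini-event data is made up):

```python
import numpy as np

def compute_opr(alliances):
    """Least-squares OPR: each alliance score is modeled as the sum of the
    contributions of its three teams. `alliances` is a list of
    ([team, team, team], alliance_score) tuples from one event."""
    teams = sorted({t for members, _ in alliances for t in members})
    index = {t: i for i, t in enumerate(teams)}

    A = np.zeros((len(alliances), len(teams)))   # alliance membership matrix
    b = np.zeros(len(alliances))                 # alliance scores
    for row, (members, score) in enumerate(alliances):
        for t in members:
            A[row, index[t]] = 1.0
        b[row] = score

    contributions, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A x = b in the least-squares sense
    return dict(zip(teams, contributions))

# Made-up mini event: 4 matches, 8 alliance scores, 6 teams.
example = [
    ([1, 2, 3], 30.0), ([4, 5, 6], 12.0),
    ([1, 4, 5], 24.0), ([2, 3, 6], 18.0),
    ([1, 5, 6], 21.0), ([2, 3, 4], 27.0),
    ([1, 2, 6], 24.0), ([3, 4, 5], 18.0),
]
print(compute_opr(example))
```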
19-12-2011 16:26
Brandon Holley
|
As an interesting side note, it looks like in 2011 OPR did a surprisingly good job of predicting scores for the top 50% of the field, and was less good with the bottom 50% of the field.
|
19-12-2011 17:38
Ian Curtis
I agree that minibots throw big wrenches into the works; they make the scoring nonlinear and "typical" hard to categorize (is 30/0/30/0 worth the same as 15/15/15/15?). I am not quite sure what you are saying with the 14 vs. 40 though; I'll chew on it some more.
See Chris's post below. FLR is apparently not indicative of the average match because of 6v0. Other regionals shouldn't be as skewed, so I'll do one of those tomorrow.
In the meantime, I ran the same analysis for 2010 just to see what it looks like, and it paints a very different picture. I used FLR because BAE does not have posted scores on FIRST's website, so Bongle's OPR calculator won't work, and I've never quite gotten my MATLAB one to work. FLR is still an older week 1 event though, so I would hope the trends hold (famous last words, right?).
Firstly, the OPR distribution is quite close to normal: the mean and the median differ by less than 10%.
[Attachment: histogram of OPR at FLR 2010]
19-12-2011 17:46
Chris is me
Ian, I think you're failing to account for the fact that FLR was an event that very quickly caught on to the 6v0 strategy. That would explain such a large scoring discrepancy.
19-12-2011 17:47
Ian Curtis
|
Ian, I think you're failing to account for the fact that FLR was an event that very quickly caught on to the 6v0 strategy. That would explain such a large scoring discrepancy.
|
[Attachments: additional plots]
19-12-2011 17:52
Chris is me
19-12-2011 19:41
Chris is me
|
For this set the quartiles line up pretty well, with just a couple of outliers in the real world case. Chris, do you know if these were 6v0, or just exceptional performances?
|
21-12-2011 09:00
IKE
Your OPR alliance score estimator will always create a more normal distribution than the actuals because it is using an average value, not a scoring distribution. The minibot is a great example of this, and your 30/0/30/0 vs. 15/15/15/15 example hits the nail right on the head. Both of those scenarios have the same average and thus feed into the OPR scoring algorithm the same way. In the actuals, though, the 30/0 pattern leads to 2 groupings, which is more accurate for the 2011 scoring.
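A tiny simulation makes this concrete (hypothetical robots and simulated matches only): a robot that scores 30/0/30/0 and one that scores 15/15/15/15 have identical averages, but the alliance scores they produce are distributed completely differently.

```python
import random
from collections import Counter

random.seed(0)

def alliance_scores(robot, n_matches=10000):
    """Simulate alliance totals for three identical robots."""
    return [sum(robot() for _ in range(3)) for _ in range(n_matches)]

def minibot_style():
    return random.choice([30, 0])   # 30/0 pattern: 15-point average

def steady_style():
    return 15                       # always 15: same average

lumpy = alliance_scores(minibot_style)
flat = alliance_scores(steady_style)

print(sum(lumpy) / len(lumpy), sum(flat) / len(flat))  # both averages land near 45
print(Counter(lumpy))  # alliance totals bunch up at 0, 30, 60, and 90
print(Counter(flat))   # alliance totals are always exactly 45
```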
The point of my comment was that the "average" or median robot scores significantly less than most people would estimate.
Our kids did an estimate on the VEX game this year. I think the max possible score was on the order of 60 pts. I then let them work on an estimate of what a good score would be. Initially they came up with a figure in the 40s. After some refining, their new estimate was much closer to 24-25 points. At their tournament this past weekend, that was pretty much exactly where the "good" scores came in; one alliance hit a 29 during a match.
For 2010, the average alliance score was around 4 points, but this was partially skewed by the higher scorers contributing to more alliance scores, because eliminations data was used as well (16 elimination matches at FLR relative to 74 qualification matches, with the best of the best playing in 50% of those elimination matches). If you only use qualification data, the average alliance score is slightly below 3 pts., which means the "average" contribution should be just under 1 pt., with the median slightly below that. To put this into perspective: if you started in the home zone and just scored the 1 ball in the home zone every match, you would be better than 50% of the 2010 field. If you could hang (worth 2 pts.) 100% of the time, you would be over 2x the national "average". If you could put 1 ball in and hang, you would make it to 3 pts. and be able to outscore about 50% of alliances all by yourself. At an event like FLR, this would put you in the top 7 or so teams. Top 7, and you are only pushing 1 ball into the goal and hanging at the end...
If your goal is to be an alliance captain or to be picked, targeting those easy 2-3 points is very reasonable. Notice the strategic difference, though, between these 3 points (which can be accomplished entirely in the home zone) and 3 points from a different zone. Scoring 3 points by kicking means moving 3 balls into the home zone, then moving the robot into the home zone, and then re-collecting and scoring the 3 balls. By my count that is a minimum of 7 actions to get 3 points (if you count acquiring and transferring as separate moves, it can be as many as 13), versus the original strategy, which is 2-3 actions for 3 points...
For 2011, similar analysis shows the average alliance score being under 30 pts. It also showed that minibots were frequently not launched at all. Doing a post-season analysis, if you simply had a good, reliable minibot system (not even a sub-2-second minibot), you would win most of your matches. At an absolute minimum, a scoring minibot was worth 10 points, which was again more than the "average" contribution and well above the median. Compare this to scoring tubes. Top-row tubes are worth 3 points, 2x if you make a logo. If you hang an ubertube, it's 6 in autonomous, and up to an additional 6 points if you make a logo over it. In other words, in order to score 30 points in tubes, you would need to score an ubertube in autonomous, then acquire and hang 3 different-shaped tubes in the right order (one of which would be difficult, as you are hanging it over an ubertube). Again, this is 7 actions just to get to 30 points, versus essentially 2 actions for the minibot (align to the tower and launch the minibot). Using the minibot minimum of 10 points, you would still need to score an ubertube and then acquire and hang another tube over it in order to beat the minibot minimum. If you don't have an autonomous, you would have to hang a minimum of 3 different tubes on the top row to create a logo (6 actions), or 4 top-row tubes without a logo (8 actions), just to beat the minimum minibot contribution...
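Putting rough numbers on that "actions" comparison, using the point values and action counts above:

```python
# Points per action for the 2011 strategies described above (rough counts).
strategies = {
    "minibot (align + launch), minimum":      {"points": 10, "actions": 2},
    "ubertube in auto + 3-tube logo over it": {"points": 30, "actions": 7},
    "3 top-row tubes forming a logo":         {"points": 18, "actions": 6},
    "4 top-row tubes, no logo":               {"points": 12, "actions": 8},
}
for name, s in strategies.items():
    print(f"{name}: {s['points'] / s['actions']:.1f} points per action")
```

Even at its 10-point minimum, the minibot comes out ahead of every tube-only strategy on a per-action basis.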
**************************************************
This season:
1. Do a scoring analysis (all the ways to get and block points), and then prioritize the ways to get those points with the fewest distinct actions.
2. Do some field analysis. The best way to be playing in elims is to win qualifications and be an alliance captain. Be realistic about what a real alliance score will be. Understand that only about 25% of teams will get autonomous bonus points, and only about 25% of teams will hit most end-game bonuses. Being able to get one of those bonuses every time will usually move you toward the top of the field.
3. Be realistic in your goals, and relentless in hitting them.
21-12-2011 13:37
Jim Zondag
The attached graph shows the distribution of individual teams' season OPRs for the 2011 season.
The trend you see here is pretty typical and is important when doing game analysis and strategy: typically about 25% of the FRC population has a season contribution of 0 or less (26% in 2011). The 50% population point is much lower than you think. This has been the case in 2008, 2010, and 2011, since the GDC got "penalty happy". (2009 was an exception, with only about 5% being negative, but the distributions are the same, just shifted to the right.) The average score per team increases quickly as teams play more events: last year the OPR average by experience was 6.1, 18.0, 27.7, 34.4, 39.2 for 1-5 events played. Notice that it nearly triples going from 1 event to 2.
The performance distribution is roughly a Gamma distribution across all teams and is very asymmetrical. Last year 532 teams had a net contribution at or below zero, while only 112 teams were at 30 or higher.
However, this distribution changes dramatically the more events teams play.
If you can achieve half of the season maximum at your first event (an OPR of 35 last year), you will be in the top 5% or so in the world at the beginning of the season. If you keep that same level of performance, by your 3rd event you will be only barely above average relative to other teams with the same level of experience.
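Those percentages line up with the counts above. A quick cross-check using the figures quoted in this thread (2053 teams played at least one event in 2011):

```python
# Cross-checking the quoted percentages against the quoted team counts.
teams_total = 2053            # teams that played at least one event in 2011
at_or_below_zero = 532
at_30_or_higher = 112

print(f"{at_or_below_zero / teams_total:.1%}")  # ~25.9%, the "26% in 2011" figure
print(f"{at_30_or_higher / teams_total:.1%}")   # ~5.5%, i.e. roughly the top 5%
```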
21-12-2011 14:08
Ian Curtis
Jim, this is incredible! I hope lots of people get a chance to look at this and IKE's comments and realize that they are better off aiming low and hitting their goals than shooting for the moon and coming up way short. It also speaks volumes for having a robot done early so you can practice.
Since I assume 33 has pretty decent points per robot per match data, have you ever matched up the OPR with the actual points a robot is worth per match? Can you make any comments as to how good that fit is?
21-12-2011 15:03
Joe Ross
|
Jim, this is incredible! I hope lots of people get a chance to look at this and IKE's comments and realize that they are better off aiming low and hitting their goals than shooting for the moon and coming up way short. It also speaks volumes for having a robot done early so you can practice.
Since I assume 33 has pretty decent points per robot per match data, have you ever matched up the OPR with the actual points a robot is worth per match? Can you make any comments as to how good that fit is? |
21-12-2011 15:51
IKE
|
Since I assume 33 has pretty decent points per robot per match data, have you ever matched up the OPR with the actual points a robot is worth per match? Can you make any comments as to how good that fit is?
|
21-12-2011 17:09
Jim Zondag
The same data graphed by percentage lets you see the trends a little better:
You can see how the population center moves to the right as experience increases.
A little bit about the data set and this method: I have a database with all the OPRs for every team at each event they played, spanning many years. I take all of the OPRs and group them into categories; this year it is segments of 5 points per segment. I have used this "20 slices" method since 2006, which lets me overlay data from several years of competition onto the same chart to analyze multi-year trends, even though the games often have very different scoring systems.
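A minimal sketch of that slicing, assuming a flat list of season OPRs; the gamma-shaped sample below is made up purely so the snippet runs, and the equal-width 20-bin split is one reading of the method described above.

```python
import numpy as np

def twenty_slices(oprs, n_slices=20):
    """Group season OPRs into equal-width slices (5 points per slice in 2011)
    so distributions from different years can be overlaid on one chart."""
    counts, edges = np.histogram(np.asarray(oprs, dtype=float), bins=n_slices)
    return counts, edges

# Made-up, roughly gamma-shaped OPR sample (shifted so some values are negative).
sample_oprs = np.random.gamma(2.0, 8.0, size=500) - 5
counts, edges = twenty_slices(sample_oprs)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:6.1f} to {hi:6.1f}: {c} teams")
```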
Included in the 2011 data set:
Teams who played at least one event = 2053
Teams who played at least two events = 800
Teams who played at least 3 events = 244
Teams who played at least 4 events = 45
Beyond this, the populations are too small to be relevant.
You can see from the chart some of the things IKE mentions: at 4 events, the teams are clearly limiting one another's total performance, as indicated by the big peak at 40-45. With multiple robots of this caliber on an alliance, the per-team score actually goes down.
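A toy model of that saturation effect (entirely made-up numbers): if an alliance can only score so much in a match because of shared game pieces and field time, stacking three high-capability robots caps the total, and each one looks weaker than it really is.

```python
# Hypothetical cap on what one alliance can physically score in a match.
CEILING = 100

def apparent_per_robot(capabilities):
    """Average per-robot contribution once the alliance total is capped."""
    alliance_total = min(sum(capabilities), CEILING)
    return alliance_total / len(capabilities)

print(round(apparent_per_robot([40, 10, 10]), 1))  # 20.0 -> the strong robot's points show through
print(round(apparent_per_robot([40, 40, 40]), 1))  # 33.3 -> capped, so each "40" robot looks weaker
```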
22-12-2011 16:29
Racer26
Observation on the first graph: the stretch from about 6400 to 6600 on the x-axis looks like MSC.
You can see the peaks from the 6 competition weeks, then MSC at a much higher caliber (quals nearly as good as week 6 elims), followed by CMP, with CMP quals at the level of MSC elims.
22-12-2011 17:58
BHS_STopping
|
Anyone care to do an analysis on how many teams would have to average net 0 pts. in order for 20% of alliances to have a resultant score of 0 pts.?
For example, with dice: if I have 3 dice, the probability of at least 1 of them being a 1 during a roll would be 3*1/6, or 50%. The probability of 2 being 1s would be 3/2*1/36, or 4.5%. The probability of 3 1s would be 0.5%. At a district event with 80 matches, there would be 160 alliances, and thus I would expect 1,1,1 about 0.8 times, or at 80% of events there would be at least 1 alliance that got 1, 1, 1. If 0 is assumed as the lower limit, then a 0,0,0 should be difficult to get. If FRC were 2 vs. 2, and 50% of the field could score 1 (or more) while 50% of the field could score 0, I believe you would expect 25% of alliances to have a score of 0. For 3 vs. 3, it should (in theory) be significantly more difficult... in theory. I guess my argument is that while the "average" robot might correspond with your values, the "median" robot may perform significantly lower... |
23-12-2011 11:04
Racer26
|
Just wanting to correct you slightly on your math here.
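Those corrected figures are easy to verify by brute force over all 216 possible rolls of three dice:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=3))  # all 216 outcomes for three dice
total = len(rolls)

at_least_one = sum(1 for r in rolls if r.count(1) >= 1)
exactly_two = sum(1 for r in rolls if r.count(1) == 2)
at_least_two = sum(1 for r in rolls if r.count(1) >= 2)
all_three = sum(1 for r in rolls if r.count(1) == 3)

print(at_least_one / total)  # 91/216  ~ 0.421
print(exactly_two / total)   # 15/216  ~ 0.069
print(at_least_two / total)  # 16/216  ~ 0.074
print(all_three / total)     # 1/216   ~ 0.0046
```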
If you throw three dice, the probability of at least one of them coming up with a 1 is not simply 3 * 1/6. Using this logic we could then assume that if we throw 6 dice then the number 1 is going to appear every single time (which is false, the actual probability in this case is about 66.5%). When throwing three dice, the probability of throwing at least one 1 is equal to 1 - (5/6)^3. This number turns out to be about 42.1%. The probability of throwing exactly two 1's is a little bit trickier, but it's not too difficult. There are 216 possible dice rolls for three dice, and 15 of those rolls have exactly two 1's in them. 15/216 is roughly 6.9%. If we include the 1, 1, 1 case (that is, all situations where at least two 1's come up) then our probability is 16/216, or 7.4%. The probability that all three dice show 1's is 1/216, or .46%, so you were right about that one. |