Is there a tool/spreadsheet on CD that helps compute OPR based on match results? I’d like to do so for the MN FTC Qualifying regionals. I believe I saw a white paper on the topic before, but I’m having trouble finding it now – I tried searching.
The difference for FTC is that qualifying matches are 2v2.
I’d like to explain the math to the kids (at a high level, at least), which I think I can do from what I recall reading about it months ago. (Assume each team always contributes the same points toward each match total, then come up with the per-team values that best approximate the actual match scores.)
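To make that concrete (mostly for my own benefit), the computation I have in mind looks roughly like this, sketched here in Python/numpy with made-up team numbers and scores:

```python
# Minimal sketch of the OPR math described above (numpy; the match data is made up).
# Each row of A is one alliance in one match: 1 for each team on that alliance, 0 otherwise.
# b holds the actual alliance scores.  OPR is the least-squares solution of A x = b.
import numpy as np

teams = [101, 202, 303, 404]              # hypothetical team numbers
matches = [                               # (red alliance, blue alliance, red score, blue score)
    ((101, 202), (303, 404), 120, 95),
    ((101, 303), (202, 404), 110, 105),
    ((101, 404), (202, 303), 130, 90),
]

idx = {t: i for i, t in enumerate(teams)}
rows, scores = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[idx[t]] = 1.0
        rows.append(row)
        scores.append(score)

A, b = np.array(rows), np.array(scores)
opr, *_ = np.linalg.lstsq(A, b, rcond=None)   # per-team values that best fit the match totals
for t in teams:
    print(t, round(float(opr[idx[t]]), 1))
```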
Then I’d like to calculate it using an ‘easy to use’ tool. I don’t think the kids could start from the underlying math explanation and do it themselves, and they don’t really have the time to invest in that exercise (though I may be surprised about that).
I have 73 teams, with up to 32 matches at each of 5 tournaments, so ~160 matches of data total.
I just updated this to handle up to 4000 teams. For large datasets (like World OPR) it takes about 30 seconds. For single-event OPR it is effectively instantaneous.
It’s a 32-bit Windows console app written in Delphi.
I have a spreadsheet with the OPRs from the 7 FTC qualifiers in MN that I would be willing to send you. It calculates OPRs slightly differently - it uses a series of successive approximations: estimate the error, correct, and try again. When I have checked it against the published numbers it seems to come up with the same answers. I found it easier to explain to middle school students than matrix mathematics.
One major issue I have seen so far in this year’s FTC game is that a few penalties can really play havoc with the OPRs across a tournament. For example: on the Saturday of the Burnsville tournament, 9078/9414 got an additional 190 penalty points in a match. That penalty alone moves their OPRs from the mid 20s to the mid 60s. Team 8034 had matches with both of those robots later in the day, so that penalty lowers 8034’s OPR from 31 to 12, even though they were not involved in the original match. The Sunday tournament in Columbia Heights has a similar situation, with 11270/5330 getting a 150-point boost in a match.
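To see the mechanics of this (with completely made-up scores and placeholder opponents, not the actual Burnsville data), you can recompute OPR with and without the extra penalty points and compare:

```python
# Made-up illustration of the penalty effect (not the actual Burnsville data):
# recompute OPR after adding 190 penalty points to one alliance's score in one match.
import numpy as np

teams = [9078, 9414, 8034, 11111, 22222, 33333]   # 11111/22222/33333 are placeholder opponents
matches = [                                       # (alliance 1, alliance 2, score 1, score 2)
    ((9078, 9414), (11111, 22222), 60, 50),
    ((8034, 9078), (22222, 33333), 55, 45),
    ((8034, 9414), (11111, 33333), 50, 40),
    ((8034, 11111), (9078, 33333), 45, 55),
]

def opr(matches, bump=0.0, bump_match=0, bump_alliance=0):
    idx = {t: i for i, t in enumerate(teams)}
    A, b = [], []
    for m, (a1, a2, s1, s2) in enumerate(matches):
        for k, (alliance, score) in enumerate(((a1, s1), (a2, s2))):
            if m == bump_match and k == bump_alliance:
                score += bump                     # simulate the extra penalty points
            row = np.zeros(len(teams))
            row[[idx[t] for t in alliance]] = 1.0
            A.append(row)
            b.append(score)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return {t: float(round(x[idx[t]], 1)) for t in teams}

print("without penalty:", opr(matches))
print("with a 190-pt penalty in the first match:", opr(matches, bump=190.0))
```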
I can see using iteration of linear approximations when solving a non-linear least squares problem… but OPR is a linear problem to begin with, so I’m a bit puzzled about what algorithm you are using*:
How do you compute the successive approximations?
How do you compute the successive corrections?
Please explain it to me the same way you explain it to your middle school students. I won’t be offended.
Regarding the penalty issue: I’m not that familiar with FTC. Are component scores available? If so, you can do OPR-type computations on just the components you are interested in.
*Perhaps Gauss-Seidel? Is that easier for middle school students to understand?
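By Gauss-Seidel I mean something like the sweep below, applied to the OPR normal equations (a rough sketch, not a claim about what your spreadsheet does; A is the usual 0/1 alliance matrix with one row per alliance per match, and b holds the alliance scores):

```python
# Gauss-Seidel sketch on the OPR normal equations N x = c, where N = A'A and c = A'b.
import numpy as np

def opr_gauss_seidel(A, b, sweeps=50):
    N = A.T @ A                  # N[i][i] = matches team i played; N[i][j] = times i and j were allied
    c = A.T @ b                  # c[i] = sum of the scores of the alliances team i was on
    x = np.zeros(len(c))
    for _ in range(sweeps):
        for i in range(len(c)):  # solve equation i for x[i], using the newest values of the others
            x[i] = (c[i] - N[i] @ x + N[i, i] * x[i]) / N[i, i]
    return x

# Usage: opr = opr_gauss_seidel(A, b), with A and b built as in the least-squares setup.
```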
It uses this algorithm:
1. For each robot: (sum of scores) / (matches) / (robots per match) --> use this as the initial OPR
2. Estimate the score for each match using the OPRs
3. Calculate the error using (real score) - (estimated score)
4. Calculate a new OPR using (OPR) + (sum of error) / (robots per match) / (# of matches per robot)
Then keep looping back to 2 until I got sick of copying columns (roughly 50 times).
It seems to pass all the sniff tests - the average OPR converges to the average score / 2, the average adjustment goes to zero, the average error goes to zero, and it matches the OPR examples I could find online.
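Written out in Python instead of spreadsheet columns, the loop is roughly this (my paraphrase of the steps above; the data layout and names are made up):

```python
# Sketch of the spreadsheet iteration described above.
# matches: list of (alliance1, alliance2, score1, score2); each alliance is a tuple of team numbers.
def iterate_opr(matches, robots_per_alliance=2, iterations=50):
    # Step 1: initial OPR = (sum of the robot's alliance scores) / (its match count) / (robots per alliance)
    totals, counts = {}, {}
    for a1, a2, s1, s2 in matches:
        for alliance, score in ((a1, s1), (a2, s2)):
            for t in alliance:
                totals[t] = totals.get(t, 0.0) + score
                counts[t] = counts.get(t, 0) + 1
    opr = {t: totals[t] / counts[t] / robots_per_alliance for t in totals}

    for _ in range(iterations):                          # Steps 2-4, looped until "good enough"
        err_sum = {t: 0.0 for t in opr}
        for a1, a2, s1, s2 in matches:
            for alliance, score in ((a1, s1), (a2, s2)):
                estimate = sum(opr[t] for t in alliance)  # Step 2: estimated alliance score
                error = score - estimate                  # Step 3: real - estimated
                for t in alliance:
                    err_sum[t] += error                   # each robot collects the errors of its own alliances
        for t in opr:                                     # Step 4: nudge each OPR by its share of the error
            opr[t] += err_sum[t] / robots_per_alliance / counts[t]
    return opr

# Example: opr = iterate_opr([((101, 202), (303, 404), 120, 95), ...])
```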
I should probably have asked Whatever to disambiguate his algorithm before commenting on it (which is my usual modus operandi).
But taking his post at face value, this is how I interpreted what he wrote (my comments in brackets):
It uses this algorithm:
1. For each robot: (sum of scores) [sum of scores for all alliances on which that robot participated] / (matches) [number of matches that robot played] / (robots per match) [number of robots on an alliance] --> use this as the initial OPR
2. Estimate the score [the two alliance scores] for each match using [the estimated] OPR [by summing the estimated OPR for each robot on each of the two alliances in that match]
3. Calculate the error [for each alliance score] using (real score) - (estimated score)
4. Calculate a new OPR using (OPR) [OPR from the previous iteration] + (sum of error) [algebraic sum of all 2*M errors (aka residuals) from step 3, where M = # of matches in the event] / (robots per match) [number of robots per alliance] / (# of matches per robot) [number of matches the robot whose OPR you are re-estimating played]
Then keep looping back to 2 until I got sick of copying columns (roughly 50 times).
If the above is the correct interpretation of what Whatever meant, it’s completely different from the wgardner post you linked.
I don’t know about the 2*M part in Ether’s comments - I just take the answers from step 3 and add them up. Otherwise it looks like he is reading it right.
As a sanity check, I just double-checked using the 2016 UNC Asheville event. It agrees with all of the top 15 OPRs listed on The Blue Alliance to two decimal places.
I have a 32-bit Windows console app that computes OPR and other metrics for 2v2 alliances. If you like, I can upload it so you can use it to check your numbers.
Actually, I thought I was replying to the OP because FTC7152 has a tournament this weekend and I thought I could help. I just realized this is a zombie thread; the original post is from 2014.