OPR calculation

Is there a tool/spreadsheet on CD that helps compute OPR based on match results? I’d like to do so for the MN FTC Qualifying regionals. I believe I saw a white paper on the topic before, but I’m having trouble finding it now – I tried searching.

The difference for FTC is that qualifying matches are 2v2.

Thanks.

Do you just want to do the computation, or do you want to learn the underlying mathematics?

Both, if available.

I’d like to explain the math to the kids (at a high level at least), which I think I can do from what I recall reading about it months ago. (Assume each team always contributes the same points toward each match total, then come up with the per-team values that best approximate the actual match scores.)
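
In equation form, that idea looks like this: for a 2v2 match whose red alliance is teams i and j and whose blue alliance is teams k and l, each alliance score gives one linear equation in the unknown per-team contributions. Stacking every alliance score from every match gives an overdetermined system Ax ≈ b (one row per alliance score, a 1 in the column of each team on that alliance, b the actual scores), and OPR is the least-squares fit:

```latex
x_i + x_j \approx S_{\text{red}}, \qquad
x_k + x_l \approx S_{\text{blue}}
\quad\Longrightarrow\quad
\hat{x}_{\text{OPR}} = \arg\min_{x} \lVert A x - b \rVert_2^2
```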

Then I’d like to calculate it using an ‘easy to use’ tool. I don’t think the kids could start from the underlying math explanation and do it themselves, and they don’t really have the time to invest in that exercise (though I may be surprised about that).

I have 73 teams and up to 32 matches at each of 5 tournaments, so roughly 160 matches of data total.

What is the format of the data?

Attached is a simple-to-use tool which accepts data in the following format:

RED1 RED2 RED3 BLUE1 BLUE2 BLUE3 RED_SCORE BLUE_SCORE

… and calculates all sorts of stats, including OPR, and outputs the results to a CSV file which can be opened directly in Excel.

Tell me what format your data is in and maybe I can make a few simple changes so it can process the whole batch for you.
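
Not the attached tool, but for anyone curious what such a tool does under the hood, here is a minimal Python/NumPy sketch that reads the same row format (the file name matches.txt is just a placeholder) and solves the least-squares system for OPR. Setting ALLIANCE_SIZE to 2 handles FTC-style 2v2 qualification matches.

```python
import numpy as np

ALLIANCE_SIZE = 3  # the format above is 3v3; set to 2 for FTC qualification matches

# Each non-empty line of matches.txt (placeholder name) is assumed to be:
#   RED1 RED2 RED3 BLUE1 BLUE2 BLUE3 RED_SCORE BLUE_SCORE
rows = [line.split() for line in open("matches.txt") if line.strip()]

teams = sorted({t for r in rows for t in r[:2 * ALLIANCE_SIZE]})
col = {t: i for i, t in enumerate(teams)}

A, b = [], []
for r in rows:
    red, blue = r[:ALLIANCE_SIZE], r[ALLIANCE_SIZE:2 * ALLIANCE_SIZE]
    for alliance, score in ((red, r[-2]), (blue, r[-1])):
        row = [0.0] * len(teams)
        for t in alliance:
            row[col[t]] = 1.0  # this team contributed to this alliance score
        A.append(row)
        b.append(float(score))

# OPR = least-squares solution of the overdetermined system A x ~= b
opr, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

for team, value in sorted(zip(teams, opr), key=lambda p: -p[1]):
    print(f"{team}\t{value:.2f}")
```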

OPR.zip (29.4 KB)

OPR math:

OPR explained using St Joe event data as example:

“formula” for OPR:

Ed Law’s OPR paper
http://www.chiefdelphi.com/media/papers/2174

Jay Lundy’s OPR & DPR

… I’m sure there are others

Although I asked for better machine-readable data, I was told this is the best it gets:

http://www.hightechkids.org/sites/default/files/FTC/2013/ColumbiaHgts/1-25-14%20ColumbiaHgts%20Match%20Results.pdf

There are four other similar pages for the other tournaments.

I did enter one tournament into Excel with columns:
Tourney#, Match #, Color, Score, Team#
(total of four rows per match)

This format can be easily adjusted, of course.

I also just realized I can copy/paste into a text file and get this:

Q-1 117-161 B
7972 8034
7661 2887
Q-2 11-82 B
6707 7000
4140 8005

That’s:

Q-Match# RedScore-BlueScore WinningColor(R/B)
Red1 Red2
Blue1 Blue2

It gets a little messed up at page breaks in the PDF but that’s trivial to fix.
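
If it helps, here is a small Python sketch of a parser for that pasted layout (assuming the three-line grouping above survives intact once the page-break glitches are cleaned up by hand). It writes one match per line in a RED1 RED2 BLUE1 BLUE2 RED_SCORE BLUE_SCORE format; the file names are placeholders.

```python
# Parse the pasted three-line groups:
#   Q-1 117-161 B   <- match label, RedScore-BlueScore, winning color (unused)
#   7972 8034       <- Red1 Red2
#   7661 2887       <- Blue1 Blue2
lines = [ln.strip() for ln in open("pasted.txt") if ln.strip()]

with open("matches.txt", "w") as out:
    for i in range(0, len(lines), 3):
        header, red_line, blue_line = lines[i:i + 3]
        _label, scores, _winner = header.split()
        red_score, blue_score = scores.split("-")
        red1, red2 = red_line.split()
        blue1, blue2 = blue_line.split()
        out.write(f"{red1} {red2} {blue1} {blue2} {red_score} {blue_score}\n")
```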

I just updated this to handle up to 4000 teams. For large datasets (like World OPR) it takes about 30 seconds. For single-event OPR it is effectively instantaneous.

It’s a 32-bit Windows console app written in Delphi.

[EDIT] added README.TXT

OPRrevA.zip (29.5 KB)
README.TXT (543 Bytes)

I have a spreadsheet with the OPRs from the 7 FTC qualifiers in MN that I would be willing to send you. It calculates OPRs slightly differently - it uses a set of successive approximations: it estimates the error, corrects, and tries again. When I have checked it against the published numbers, it seems to come up with the same answers. I found it easier to explain to middle school students than matrix mathematics.

One major issue I have seen so far in this year’s FTC game is that a few penalties can really play havoc with the OPRs across the tournament. For example: on the Saturday of the Burnsville tournament, 9078/9414 got an additional 190 penalty points in a match. That penalty alone moves their OPRs from the mid-20s to the mid-60s. Team 8034 had matches with both of those robots later in the day, so that penalty lowers 8034’s OPR from 31 to 12 - even though they were not involved in the original match. The Sunday tournament in Columbia Heights has a similar situation, with 11270/5330 getting a 150-point boost in a match.

I can see using iteration of linear approximations when solving a non-linear least-squares problem… but OPR is a linear problem to begin with, so I’m a bit puzzled about what algorithm you are using*:

How do you compute the successive approximations?

How do you compute the successive corrections?

Please explain it to me the same way you explain it to your middle school students. I won’t be offended.

One major issue I have seen so far in this year’s FTC game is that a few penalties can really play havoc with the OPRs across the tournament. For example: on the Saturday of the Burnsville tournament, 9078/9414 got an additional 190 penalty points in a match. That penalty alone moves their OPRs from the mid-20s to the mid-60s. Team 8034 had matches with both of those robots later in the day, so that penalty lowers 8034’s OPR from 31 to 12 - even though they were not involved in the original match. The Sunday tournament in Columbia Heights has a similar situation, with 11270/5330 getting a 150-point boost in a match.

I’m not that familiar with FTC. Are component scores available? If so, you can do OPR-type computations on the components you are interested in.

*Perhaps Gauss-Seidel? Is that easier for middle school students to understand?

It uses this algorithm:

  1. for each robot -> (sum of scores) / matches / (robots per match) --> uses this as initial OPR
  2. Estimates the score for each match using OPR
  3. Calculates error using (real score) - (estimated score)
  4. Calculates a new OPR using (OPR) + (sum of error)/(robots per match)/(# of matches per robot)
  5. Then keep looping back to 2 until I got sick of copying columns (roughly 50 times).

It seems to pass all the sniff tests - the average OPR converges to the average score/2, the average adjustment goes to zero, the average error goes to zero, and it seems to match the OPR examples I could find online.

Step 4 appears to be trying to minimize the algebraic sum of the errors (residuals), not the sum of the squares of the errors.

So I don’t see how it could converge to OPR (the de facto definition of OPR on CD is a least-squares solution to the overdetermined system).
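
For concreteness, with A and b as sketched earlier in the thread (one row per alliance score, b the stacked scores), that least-squares definition means the OPR vector is the x-hat satisfying the normal equations, which is what the closed-form calculators solve directly:

```latex
A^{\mathsf{T}} A \, \hat{x} = A^{\mathsf{T}} b
```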

What he is describing sounds similar to this method, which wgardner verified converges to OPR.

I should probably have asked Whatever to disambiguate his algorithm before commenting on it (which is my usual modus operandi).

But taking his post at face value, this is how I interpreted what he wrote (my comments in brackets):

It uses this algorithm:

  1. for each robot -> (sum of scores)[sum of scores for all alliances on which that robot participated] / matches[number of matches that robot played] / (robots per match[number of robots on an alliance]) --> uses this as initial OPR
  2. Estimates the score [two alliance scores] for each match using [the estimated] OPR [by summing the estimated OPR score for each robot on each of the two alliances in that match]
  3. Calculates error [for each alliance score] using (real score) - (estimated score)
  4. Calculates a new OPR using (OPR) [OPR from previous iteration] + (sum of error [algebraic sum of all 2*M errors (aka residuals) from Step 3, where M = # of matches in the event])/(robots per match [number of robots per alliance])/(# of matches per robot [number of matches the robot whose OPR you are re-estimating played])
  5. Then keep looping back to 2 until I got sick of copying columns (roughly 50 times).

If the above is the correct interpretation of what Whatever meant, it’s completely different from wgardner’s post you linked.

I agree that we need more details from Whatever.

If the above is the correct interpretation of what Whatever meant, it’s completely different from wgardner’s post you linked.

Agreed, I was just reminded of this because it is an iterative calculation of OPR instead of a closed-form solution.

I don’t know about the 2*M part in Ether’s comments - I just take the answers from 3 and add them up. Otherwise it looks like he is reading it right.

For sanity, I just double-checked using the 2016 UNC Asheville event. It agrees with all of the top 15 OPRs listed on The Blue Alliance to the second decimal place.

As I said in my original response, I’m not that familiar with FTC.

So we could be talking past each other.

Here’s what I meant by 2*M:

M is the number of matches in the event. For each match, you have 2 alliance scores (one for red and one for blue). Yes?

So there are a total of 2 times M alliance scores. And you have one error for each alliance score. Yes?

So for example if the event had 80 qual matches, there would be 160 alliance scores, and therefore 160 errors to add up. Yes? No?

I just take the answers from 3 and add them up.

The question to be clarified is, how many “answers from 3” are you adding up? 160 of them if there are 80 matches? Or did you mean something else?

I’m just adding up the errors of the matches a given robot was in. So each robot has a separate error total.

Well that changes everything.

I have a 32-bit Windows console app that computes OPR and other metrics for 2v2 alliances. If you like, I can upload it so you can use it to check your numbers.
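
For completeness, here is a minimal Python sketch of the iteration as clarified above, under the reading that each team’s correction uses only the residuals of its own alliance scores (which makes it essentially a damped Jacobi pass over the least-squares problem, and would explain why it lands on the same numbers as closed-form OPR). The two example matches are just the ones pasted earlier in the thread, to make it runnable.

```python
ALLIANCE_SIZE = 2  # FTC qualification matches are 2v2
N_PASSES = 50      # "until I got sick of copying columns"

# (red teams), (blue teams), red score, blue score -- sample data from above
matches = [
    (("7972", "8034"), ("7661", "2887"), 117.0, 161.0),
    (("6707", "7000"), ("4140", "8005"), 11.0, 82.0),
]

# One entry per alliance score.
alliances = []
for red, blue, red_score, blue_score in matches:
    alliances.append((red, red_score))
    alliances.append((blue, blue_score))

teams = sorted({t for a, _ in alliances for t in a})
played = {t: sum(t in a for a, _ in alliances) for t in teams}

# Step 1: initial OPR = (sum of own alliance scores) / matches / alliance size
opr = {t: sum(s for a, s in alliances if t in a) / played[t] / ALLIANCE_SIZE
       for t in teams}

for _ in range(N_PASSES):
    # Steps 2-3: residual of each alliance score under the current estimate
    residual = [s - sum(opr[t] for t in a) for a, s in alliances]
    # Step 4: nudge each team by the residuals of its own alliances
    opr = {t: opr[t]
              + sum(r for (a, _), r in zip(alliances, residual) if t in a)
              / ALLIANCE_SIZE / played[t]
           for t in teams}

for t in sorted(opr, key=opr.get, reverse=True):
    print(f"{t}\t{opr[t]:.2f}")
```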

Actually I thought I was replying to the OP because FTC7152 has a tournament this weekend and I thought I could help. I just realized this is a zombie thread; the original post is from 2014.

Here’s a spreadsheet posted onto Chief Delphi a while back that calculates OPRs based on match scores.

https://www.chiefdelphi.com/forums/showthread.php?t=152037

Hope this helps.