I don’t think there is going to be any fix quite as easy to implement as the one used in 2012, but I think improvements could be made. If we do implement corrections, though, we need to be very careful with our terminology. Corrections like this should only be used for elimination round matches.
Two potential modifications I can think of are:
1: Increase the value of each crossing to 5 points + (20 points/breach)/(8 crossings/breach) = 7.5 points, and increase the value of each scored boulder by (25 points/capture)/(8 scored boulders/capture) = 3.125 points. A potential drawback to this approach is that doing more than the minimum crossings/boulders required for a breach/capture gets over-valued. For example, 10 successful scored crossings would net 75 points instead of the 70 points they are actually worth.
2: Using the data on breaches and captures, compute each team’s OPR contribution to these. Take (breach OPR) * (20 points/breach) and (capture OPR) * (25 points/capture) and add these onto the team’s nominal OPR (a rough sketch follows below). A big disadvantage to this is that captures happen incredibly infrequently in qualifications, which makes this correction almost negligible for most teams, even though they may actually be able to contribute to a capture once they have better partners in elims.
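A minimal sketch of option 2, assuming the total/breach/capture component OPRs have already been computed from qualification data (all names here are hypothetical, not from any real library):

```python
def eopr2(team, total_opr, breach_opr, capture_opr):
    # Nominal OPR plus the elims-style value of the team's expected
    # breach/capture contributions (20 points per breach, 25 per capture).
    return (total_opr[team]
            + 20.0 * breach_opr[team]
            + 25.0 * capture_opr[team])
```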
I had a similar idea. I decided to write a program that calculates the OPRs, and for each match score I simply add 20/25 for breach/capture. Here are some results:
I have just uploaded a scouting database which contains calculated component contributions. Since component OPRs have the property of linearity, it should be easy for anyone to play around with these. Currently, I calculate eOPR using method 1 described in my original post; that is, eOPR = (total points OPR) + 3.125*(tower strength subtracted OPR) + 2.5*(defense crossing OPR).
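The linearity falls out of the OPR solve itself: the least-squares solution x = (AᵀA)⁻¹Aᵀb is linear in the score column b, so combining the score components before solving or combining the component OPRs afterwards gives the same answer. Here is a minimal Python/NumPy sketch; the row layout and field names ("teams", the per-alliance component totals) are assumptions for illustration only:

```python
import numpy as np

def solve_opr(rows, teams, value_key):
    # Least-squares "OPR" for any per-alliance value column
    # (total points, boulders scored, defense crossings, ...).
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(rows), len(teams)))
    b = np.zeros(len(rows))
    for r, row in enumerate(rows):
        for t in row["teams"]:      # the three teams on this alliance
            A[r, idx[t]] = 1.0
        b[r] = row[value_key]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))

def eopr1(team, total_opr, boulders_opr, crossings_opr):
    # Method 1 from the original post: re-value each scored boulder (+3.125)
    # and each defense crossing (+2.5) on top of the nominal OPR.
    return (total_opr[team]
            + 3.125 * boulders_opr[team]
            + 2.5 * crossings_opr[team])
```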
This is some pretty awesome data, Caleb! I have been working on my own script to pull down the match data, organize it, and run various calculations on it. This is not exactly simple stuff, but it’s given me the chance to practice my (not-so-great) scripting skills, refresh on some linear algebra, and of course play with ROBOT DATA in JMP.
I think using the real match data (microbuns’ method) to add the +20 for a breach and +25 for a capture to qual matches is the best way to calculate this new OPR (I call it “Modified OPR”). It’s also a pretty simple modification to my OPR calculation script, since I already have the data columns there for breach and capture. Obviously I only add this to the final score if the match is not an elims match, since that is most accurate to how elims matches are truly scored. I prefer to let the calculated contributions to tower weakening and defense breaching speak for themselves in the analysis of those columns on their own.
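A rough sketch of that adjustment, assuming each row carries an alliance score plus breached/captured flags and a competition-level field (all hypothetical names); elims scores are left alone because they already include these bonuses:

```python
def modified_scores(rows):
    # Add the elims value of a breach (+20) and a capture (+25) to
    # qualification alliance scores only.
    adjusted = []
    for row in rows:
        bonus = 0
        if row.get("comp_level") == "qm":
            bonus += 20 if row.get("breached") else 0
            bonus += 25 if row.get("captured") else 0
        adjusted.append(dict(row, score=row["score"] + bonus))
    return adjusted

# mod_opr = solve_opr(modified_scores(rows), teams, "score")  # reuse the solver above
```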
microbuns, I get the same results as you for 2016_ONTO with my Perl OPR calculator if I use data from only the qualification matches. Always nice to double-check work :).
One thing I would like to hear opinions on is whether eliminations match data should be used in OPR calculations. My results change significantly when I factor in elims data for my Modified OPR and regular OPR. See below (sorted by ‘Mod OPR Qual’). OPRs tend to drop for robots in elims, probably due to the “cap” on breach points, the alliance partnership with two other good teams, and the fact that defense is a bigger factor in elims.
Perhaps someone who attended this regional could comment on which OPR more accurately reflects true robot performance (specifically looking at 2013, 610, 1241 and 1305 in this list, since they change rank depending on which OPR I rank by).
OPR was first proposed and published on CD by Scott Weingart in April 2006. Since then OPR has always been calculated using qualification match results only. Here are a number of possible reasons.
There were quite a few years where the scoring during qualification rounds and elimination/playoff rounds was different, some years more than others. Adjusting the scores for qualification rounds or elimination/playoff rounds in order to combine these different matches into one OPR calculation is artificial and leads to potentially unnecessary arguments about how the adjustment should be made.
The motivation for teams in qualification rounds versus elimination/playoff rounds is very different. Higher ranked teams want to rank as high as possible by winning matches and earning as many ranking points as possible, which are used for tiebreaks. Lower ranked teams want to showcase their strength, and winning is not the highest priority except for their obligations to their alliance partners. In elimination rounds, the goal is to advance by winning, not necessarily by scoring as many points as possible. In playoff rounds, the goal is to score high to advance, and winning may not be the top priority.
Alliance partners in qualification rounds are randomly selected; that is not the case for elimination/playoff rounds. In some games, with better partners, teams in elimination/playoff rounds are limited in the number of points each of them can score, which can hurt their OPR.
If we also use elimination/playoff matches in the OPR calculation, there may be some undesirable effects when some teams play more matches than others. More data points are not necessarily better when the data represents something different, for the reasons explained above.
However, I am all for teams who want to modify OPR to rank teams based on what they feel is more important. 2012 was the only year that I published a modified OPR. There was a general consensus on CD to give 5 points for each coop point, and I made it so that teams could easily use a number different from 5 if they chose to.
For all of the above reasons, I strongly oppose changing the fundamental way OPR is calculated. If anyone wants to change it, make sure you call it something else so that the community is not confused about what you are publishing.
The problem with this is that it does a horrible job of predicting capture points in playoffs at weak events. Take the Lake Superior regional as an example. There was only a single qual capture at Lake Superior. The number 1 seed captured in 4 of their 6 playoff matches, for a capture rate of 67%, but their expected capture rate, found by summing their individual capture OPRs, would be just 12%. There are simply not enough captures happening at some events for “OPR capture” to mean anything useful. In the worst case, like at Palmetto, “OPR capture” cannot distinguish teams at all because there were zero captures in quals.
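For concreteness, the “expected capture rate” here is just the sum of the three partners’ individual capture OPRs, clipped to a sensible range; a throwaway sketch with hypothetical names:

```python
def predicted_capture_rate(alliance, capture_opr):
    # Sum of the partners' capture OPRs, clipped to [0, 1].
    return min(1.0, max(0.0, sum(capture_opr[t] for t in alliance)))
```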
My guess is that my eOPR1 would do better than eOPR2/modified OPR at predicting playoff scores at weaker events like this, although I have not done a thorough investigation.
I believe that it would be unwise to use elimination data for these calculations for the reasons listed by Ed Law.
Having been at GTRE myself, from the list of 2013, 610, 1241 and 1305, I’d probably give a ranking of 1241 > 2013 > 610 > 1305 in quals matches.
1241’s high goaling is fantastic, and building a strategy around them, feeding them balls and letting them score is a surefire way to do well in quals matches, making them even stronger with good planning. By themselves, they’re also a quality defense crosser and cycler.
2013’s overall a strong robot that can work independently of their alliance partners in quals, and their high goal shot is monstrously accurate when dialed in and firing from a consistent spot, which is pretty likely given the general lack of defensive play in quals matches.
610 is a low goal and defense crossing specialist, and we can ideally function independently of alliance partners in quals, but we suffered at GTRE from some robot issues and the lack of a high goal shot.
1305 is another strong independent robot, similar to 2013, but I think that 2013’s a little bit stronger. It can also be noted that 1305’s schedule had them playing with 2056, 5807, 4476, 1241 and 610 in quals matches, among other notably strong teams.