# Accounting for breaches/captures in OPR

Inspiration from this post.

I don’t think that there is going to be any fix that is quite so easy to implement as the one used in 2012, but I think improvements could be made. If we do implement corrections though, we need to be very careful with our terminology. Corrections like this should only be used for elimination round matches.

Two potential modifications I can think of:
1: Increase the value of each crossing to 5 points + (20 points/breach)/(8 crossings/breach) = 7.5 points, and increase the value of each scored boulder by (25 points/capture)/(8 scored boulders/capture) = 3.125 additional points. A potential drawback to this approach is that doing more than the minimum required for a breach/capture would be over-valued. For example, 10 successful crossings will net 75 points, instead of the 70 points they are actually worth.
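The over-valuation in method 1 can be checked with the numbers from the example above. A quick sketch of the arithmetic:

```python
# Method 1: fold breach/capture value into the per-action point values.
CROSSING_VALUE = 5 + 20 / 8   # 7.5 points per crossing
BOULDER_BONUS = 25 / 8        # 3.125 extra points per scored boulder

def method1_crossing_points(crossings):
    """Points credited to crossings under method 1."""
    return crossings * CROSSING_VALUE

# The drawback: 10 crossings are credited 75 points,
# but are really worth 10*5 + 20 (one breach) = 70.
credited = method1_crossing_points(10)  # 75.0
actual = 10 * 5 + 20                    # 70
```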

2: Using the data on breaches and captures, compute each team’s OPR contribution to these. Take (OPR breach) * (20 points/breach) and (OPR capture) * (25 points/capture) and add these onto the team’s nominal OPR. A big disadvantage to this is that captures happen incredibly infrequently in qualifications, which will make this correction almost negligible for most teams, even though they may well be able to contribute to a capture once they have better partners in elims.
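Method 2 amounts to a linear correction on top of the nominal OPR. A minimal sketch, with hypothetical component-OPR values (the real ones would come from a component OPR solve):

```python
# Method 2: scale breach/capture component OPRs by their elims point
# values and add them to the nominal OPR.
BREACH_POINTS, CAPTURE_POINTS = 20, 25

def method2_eopr(opr, opr_breach, opr_capture):
    """Corrected OPR per method 2."""
    return opr + BREACH_POINTS * opr_breach + CAPTURE_POINTS * opr_capture

# e.g. a hypothetical team with a 40-point OPR that contributes 0.3 of a
# breach and 0.05 of a capture per match:
method2_eopr(40.0, 0.3, 0.05)  # 40 + 6 + 1.25 = 47.25
```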

I had a similar idea. I decided to write a program that calculates the OPRs, and for each match score I simply add 20/25 for breach/capture. Here are some results:

`````` GTRE
#	Old	New(cap+breach)
2056	77.16	99.25
1114	57.41	73.32
118	56.85	69.96
2013	48.66	66.19
1241	46.11	58.63
610	43.7	60.95
1305	36.43	47.19
5807	34.76	43.8
4476	33.27	45.98
3117	32.88	39.04
5596	32.78	40.99
4618	31.23	42.07
1547	30.39	37.5
296	30.38	42.91
5031	30.12	38.59
5036	29.24	34.46
4783	27.57	37.13
4976	27.33	39.1
4732	25.61	31.4
1325	25.3	32.7
2634	24.64	28.16
4939	23.83	21.89
2935	23.51	27.61
2340	22.82	22.18
4308	22.8	25.2
1285	22.79	30.11
6125	22.72	23.46
2228	22.33	27.98
4704	22.14	28.82
746	22.04	29.9
4525	21.7	20.95
2185	18.55	22.47
4248	18.17	23.21
4343	17.75	21.06
1075	17.18	20.06
3710	16.59	15.24
5580	15.94	14.99
6141	15.38	12.72
5428	13.33	13.04
4252	13.23	18.36
4015	11.12	13.24
6140	11.04	6.14
6046	10.69	9.15
1246	9.98	8.54
6070	9.52	10.16
2198	9.45	12.64
3541	9.37	8.04
5076	6.43	5.78
5094	-0.51	-5.58

``````

My code also removes foul points by default, but I included them here so that it is more comparable to The Blue Alliance’s data.
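The calculation described above (solve the usual OPR least-squares system, but add 20 to each alliance’s qual score for a breach and 25 for a capture before solving) can be sketched roughly like this. The match-tuple format is a hypothetical stand-in for data pulled from The Blue Alliance, not the actual script:

```python
import numpy as np

def modified_opr(matches, teams):
    """Solve for per-team contributions with breach/capture points added.

    matches: list of (alliance_team_list, score, breached, captured)
    teams: list of all team keys appearing in the matches
    """
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))  # one row per alliance-score
    b = np.zeros(len(matches))
    for row, (alliance, score, breached, captured) in enumerate(matches):
        for t in alliance:
            A[row, idx[t]] = 1.0
        # Credit the elims value of a breach/capture onto the qual score.
        b[row] = score + 20 * breached + 25 * captured
    # Least-squares solve of A x = b gives one contribution per team.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))
```

The same function computes regular OPR if the breach/capture flags are all false.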

I have just uploaded a scouting database which contains component calculated contributions. Since component OPRs seem to have the property of linearity (although I don’t fully understand why), it should be easy for anyone to play around with these. Currently, I calculate eOPR using method 1 described in my original post. That is, eOPR = (total points) + 3.125*(subtracted tower strength) + 2.5*(cross defense count).
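The linearity noted above is expected: the least-squares OPR solve is a linear operator on the vector of alliance scores, so a weighted sum of component scores produces the same weighted sum of component OPRs. Given component columns, eOPR1 is then just a linear combination; a minimal sketch (parameter names are placeholders for whatever the database calls its columns):

```python
# eOPR (method 1) as a linear combination of component OPRs.
def eopr1(total_points_opr, tower_strength_subtracted_opr, defense_crossings_opr):
    """eOPR1 = total points + 3.125/boulder + 2.5/crossing (component OPRs)."""
    return (total_points_opr
            + 3.125 * tower_strength_subtracted_opr
            + 2.5 * defense_crossings_opr)

# e.g. a hypothetical team averaging a 30-point OPR, 4 boulders scored
# and 2 defense crossings per match:
eopr1(30.0, 4.0, 2.0)  # 30 + 12.5 + 5 = 47.5
```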

This is some pretty awesome data Caleb! I have been working on my own script to pull down the match data, organize it, and run various calculations on it. This is not exactly simple stuff, but it’s given me the chance to practice my (not-so-great) scripting skills, refresh on some linear algebra, and of course play with ROBOT DATA in JMP.

I think using the real match data (microbuns’ method) to add +20 for a breach and +25 for a capture to qual matches is the best way to calculate this new OPR (I call it “Modified OPR”). It’s also a pretty simple modification to my OPR calculation script, since I already have the data columns there for breach and capture. Obviously I only add this to the final score if the match is not an elims match, since that most closely reflects how elims matches are truly scored. I prefer to let the calculated contributions to tower weakening and defense breaching speak for themselves in the analysis of those columns by themselves.

``````
if (breach == 'true') { score += 20; }
if (capture == 'true') { score += 25; }
``````

microbuns, I get the same results as you for 2016_ONTO with my Perl OPR calculator if I use data from only the qualification matches. Always nice to double-check work :).

One thing I would like to hear opinions on is whether eliminations match data should be used in OPR calculations. My results change significantly when I factor elims data into my Modified OPR and Regular OPR. See below (sorted by ‘Mod OPR Qual’). OPRs tend to drop for robots in elims, probably due to the “cap” on breach points, the alliance partnership with two other good teams, and the fact that defense is a bigger factor in elims.

Perhaps someone who attended this regional could comment which OPR more accurately reflects true robot performance (specifically looking at 2013, 610, 1241 and 1305 in this list since they change rank depending on which OPR I rank by).

``````
Team	Mod OPR ALL	OPR ALL		Mod OPR Qual	OPR Qual
2056	87.76		69.46		99.25		77.16
1114	76.15		59.84		73.32		57.41
118	59.85		49.83		69.96		56.85
2013	54.16		41.92		66.19		48.66
610	48.24		36.67		60.95		43.7
1241	58.48		46.65		58.63		46.11
1305	52.56		38.84		47.19		36.43
4476	49.78		36.59		45.98		33.27
5807	46.69		35.65		43.8		34.76
296	43.15		31.33		42.91		30.38
4618	42.65		32.17		42.07		31.23
5596	28.24		25.83		40.99		32.78
4976	36.67		26.41		39.1		27.33
3117	38.74		33.71		39.04		32.88
5031	34.44		28.27		38.59		30.12
1547	37.79		30.53		37.5		30.39
4783	31.54		24.96		37.13		27.57
5036	39.98		32.47		34.46		29.24
1325	38.05		27.82		32.7		25.3
4732	34.1		27.2		31.4		25.61
1285	30.56		23.24		30.11		22.79
746	30.8		22.05		29.9		22.04
4704	31.71		23.86		28.82		22.14
2634	21.9		20.42		28.16		24.64
2228	27.65		22.06		27.98		22.33
2935	27.59		23.73		27.61		23.51
4308	24.77		22.28		25.2		22.8
6125	23.34		22.21		23.46		22.72
4248	23.54		18.12		23.21		18.17
2185	22.2		18.32		22.47		18.55
2340	22.77		23.04		22.18		22.82
4939	21.37		23.62		21.89		23.83
4343	16.26		15.33		21.06		17.75
4525	21.85		22.16		20.95		21.7
1075	22.49		18.58		20.06		17.18
4252	19.4		13.59		18.36		13.23
3710	23.6		21.57		15.24		16.59
5580	12.24		14.17		14.99		15.94
4015	13.23		11.04		13.24		11.12
5428	12.75		14.08		13.04		13.33
6141	18.76		18.79		12.72		15.38
2198	12.85		9.32		12.64		9.45
6070	12.56		10.67		10.16		9.52
6046	11.9		12.28		9.15		10.69
1246	11.73		11.87		8.54		9.98
3541	7.99		9.23		8.04		9.37
6140	6.63		11.06		6.14		11.04
5076	6.73		6.92		5.78		6.43
5094	-4.85		0.26		-5.58		-0.51

``````

OPR was first proposed and published on CD by Scott Weingart in April 2006. Since then OPR has always been calculated using qualification match results only. Here are a number of possible reasons.

1. There were quite a few years where the scoring rules for qualification rounds and elimination/playoff rounds were different, some years more than others. Adjusting the scores of qualification or elimination/playoff rounds in order to combine these different matches into one OPR calculation is artificial and leads to potentially unnecessary arguments about how the adjustment should be made.
2. The motivations for teams in qualification rounds and elimination/playoff rounds are very different. The higher ranked teams want to rank as high as possible by winning matches and earning as many ranking points as possible, since those are used for tiebreaks. Lower ranked teams want to showcase their strength, and winning is not the highest priority beyond their obligations to their alliance partners. In elimination rounds, the goal is to advance by winning, not necessarily by scoring as many points as possible. In playoff rounds, the goal is to score high enough to advance, and winning may not be the top priority.
3. Alliance partners in qualification rounds are randomly assigned; that is not the case for elimination/playoff rounds. In some games, with better partners, teams in elimination/playoff rounds are limited in the number of points each of them can score, which can hurt their OPR.
4. If we also use elimination/playoff matches in the calculation of OPR, there may be undesirable effects when some teams play more matches than others. More data points may not be better when the data represents something different, for the reasons explained above.

However, I am all for teams who want to modify OPR to rank teams based on what they feel is more important. 2012 was the only year that I published a modified OPR. There was a general consensus on CD to give 5 points for each coop point, and I made it so that teams could easily use a number different from 5 if they chose to.
Due to all the above reasons, I strongly oppose changing the fundamental way OPR is calculated. If anyone wants to change it, make sure to call it something else so that the community is not confused by what you are publishing.

The problem with this is that it does a horrible job of predicting capture points in playoffs at weak events. Take the Lake Superior regional as an example. There was only a single qual capture at Lake Superior. The number 1 seed captured in 4 of their 6 playoff matches, for a capture rate of 67%, but their expected capture rate, found by summing their individual capture OPRs, would be just 12%. There are simply not enough captures happening at some events for “OPR capture” to mean anything useful. In the worst case, like at Palmetto, “OPR capture” cannot distinguish teams at all because there were zero captures in quals.
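To make the expected-rate comparison concrete: under this approach, an alliance’s expected captures per match is the sum of its three partners’ capture OPRs. The values below are hypothetical, chosen only to illustrate how a near-zero qual capture count produces a tiny expected rate:

```python
# Expected alliance capture rate as the sum of partner capture OPRs.
def expected_capture_rate(capture_oprs):
    """Sum of individual capture OPRs (captures contributed per match)."""
    return sum(capture_oprs)

# Hypothetical capture OPRs for a three-team alliance at an event with
# almost no qual captures: expected rate ~0.12 vs. an observed 4/6 = 0.67.
expected_capture_rate([0.06, 0.04, 0.02])  # 0.12
```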

My guess is that my eOPR1 would do better than eOPR2/modified OPR at predicting playoff scores at weaker events like this, although I have not done a thorough investigation.

> One thing I would like to hear opinions on is whether eliminations match data should be used in OPR calculations. My results change significantly when I factor elims data into my Modified OPR and Regular OPR. See below (sorted by ‘Mod OPR Qual’). OPRs tend to drop for robots in elims, probably due to the “cap” on breach points, the alliance partnership with two other good teams, and the fact that defense is a bigger factor in elims.

I believe that it would be unwise to use elimination data for these calculations for the reasons listed by Ed Law.

Having been at GTRE myself, from the list of 2013, 610, 1241 and 1305, I’d probably give a ranking of 1241 > 2013 > 610 > 1305 in quals matches.

1241’s high goaling is fantastic, and building a strategy around them, feeding them balls and letting them score is a surefire way to do well in quals matches, making them even stronger with good planning. By themselves, they’re also a quality defense crosser and cycler.

2013’s overall a strong robot that can work independently of their alliance partners in quals, and their high goal shot is monstrously accurate when dialed in and firing from a consistent spot, which is pretty likely given the general lack of defensive play in quals matches.

610 is a low goal and defense crossing specialist, and we can ideally function independently of alliance partners in quals, but we suffered at GTRE from some robot issues and lack of high goal shot.

1305 is another strong independent robot, similar to 2013, but I think that 2013’s a little bit stronger. It can also be noted that 1305’s schedule had them playing with 2056, 5807, 4476, 1241 and 610 in quals matches, among other notably strong teams.