pic: Archimedes Scouting: tote OPR vs. tote scoring



A comparison of tote OPRs and actual totes scored on Archimedes.

Y(tote points) = 0.983517 * X(tote OPR) - 0.300085

Put simply, points below the best-fit line represent teams whose component OPR was greater than their actual scoring, and points above the best-fit line represent teams whose component OPR was lower than their actual scoring.

If you have any general questions about the three scatter plots, feel free to post them here. As most of you probably guessed, they were generated in Tableau. OPRs are sourced from Ether’s component OPR spreadsheet; actual scoring is based on 2338’s scouting data.
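
If anyone wants to reproduce this kind of fit from their own scouting export, here is a minimal sketch of an equivalent ordinary least-squares fit (the arrays are made-up stand-ins, not the real Archimedes numbers, and this is not how the Tableau line was produced); it also reports the r-squared and p-value that come up later in the thread:

```python
import numpy as np
from scipy import stats

# Stand-in data: replace with each team's tote OPR and scouted average tote points.
tote_opr = np.array([12.3, 25.1, 40.2, 8.7, 33.4, 19.8])
scouted_tote_points = np.array([11.9, 24.0, 41.0, 7.5, 32.1, 20.5])

# Ordinary least-squares fit: scouted tote points as a function of tote OPR.
fit = stats.linregress(tote_opr, scouted_tote_points)

print(f"Y(tote points) = {fit.slope:.6f} * X(tote OPR) + {fit.intercept:.6f}")
print(f"r^2 = {fit.rvalue**2:.4f}, p-value = {fit.pvalue:.2e}")
```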

Cool charts!

Does OPR in this case refer to season-long OPR?

That’s a pretty good-looking plot. I’m not sure if whatever draws it will tell you this, but do you know what sort of r-squared value you get for the line?

The line shown on the graph is not a trendline, but instead the “ideal function” if OPR = avg. scoring performance. <- This is incorrect; I was looking at too small a screen to realize it…

As has been said many times, OPR is often a good indicator, but it differs from the actuals. Some interesting theories can be drawn from the deltas.

For instance, each alliance has 3 very easy-to-reach cans. If the alliance is composed of 3 teams that typically utilize 1 can each, and they are all able to utilize that one can in almost all of their matches, then OPR might coincide very well with their average scoring ability.

In Archimedes, I believe I heard that Bedford typically utilized 2 or 3 of those “easy” cans in almost every match. I’ll use them as an example because they were really awesome this year and they help illustrate some points. If they were partnered with 2 teams that typically used 1 of those cans to consistently make a 4-stack plus an RC, then it would make the most sense for the alliance to have 1023 make the 3 stacks, and have the others either stack totes or try to get a can off of the center step (which might take them significantly longer). So, let’s say going into the match 1023 is averaging just below 20 auto points plus three 42-pt. stacks, or somewhere just below 146 total points, and they are partnered with two teams that again typically make 24-pt. stacks (4 totes + RC from the “easy” position). In this match, all 3 machines cannot get their typical score because they want to use some common resources, so 146+24+24 is unlikely (at least in the normal manner). The smart play would likely be to have 1023 run their 146-pt. strategy (they were extremely consistent) and have the others adopt alternate strategies. One might continue to just stack totes: instead of an RC plus 4 totes, they might be able to get 6 total totes for a score of 12 pts. (instead of their normal 24). The other might work on getting the coop stack, which might be a 50/50 shot at 40 points. If they hit it, they get a bump compared to their usual 24-point contribution. If they miss (or the other alliance misses), they would get nothing…
The OPR calculation does not care who does what above; it only compares the actual outcome to the expected outcome and tries to balance it. Thus, if it was expecting 3+1+1=5 and the alliance instead scored 3, it may try to balance that out to 2+0.5+0.5=3.
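
For anyone unfamiliar with the mechanics, here is a minimal sketch of that least-squares “balancing” on a made-up mini-schedule (generic teams and alliances of two for brevity; numpy’s lstsq stands in for whatever Ether’s spreadsheet actually does):

```python
import numpy as np

# Made-up mini-schedule: 3 teams, alliances of two, one combined score per alliance.
# Row i of A marks which teams played together; b[i] is that alliance's total score.
teams = ["A", "B", "C"]
A = np.array([
    [1, 1, 0],   # A + B played together
    [0, 1, 1],   # B + C played together
    [1, 0, 1],   # A + C played together
], dtype=float)
b = np.array([5.0, 2.0, 3.0])

# OPR is the least-squares solution x to A @ x ≈ b: the per-team contributions
# that best "balance" the observed alliance totals, regardless of who did what.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

for team, value in zip(teams, opr):
    print(f"Team {team}: OPR ≈ {value:.2f}")
```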

In the past, when looking at the data, some of the “deltas” have been indicators of interesting strategy. For instance, in 2010, 33 had a higher OPR than the number of points the team actually scored on average. This is because they primarily worked as a back- and mid-fielder, making passes into the home zone when they had a competent home-zone partner. This happened a lot at the 2010 MSC, where 2+2+2=8 or 3+2+1=10… Also that year, they had a very good win/loss record but a very low CCWM compared to many other high-OPR teams. This was due to a deliberate strategy: because of the way ranking points were done that year, close high-scoring matches were much more beneficial than blowing out an opponent. So in a difficult match you might use a more cut-throat strategy to ensure the win, while the next match might look like an easy victory, so you might use a much friendlier strategy so that your opponent could get a better score.

The opposite was true for 2011. It was very difficult to get scores above 120* that year due to the regressive nature of the additional scores (each lower row was worth fewer points, so the high-point rows were filled first; also, 30 pts. went to the first minibot, but each additional minibot was worth fewer points). In 2011, it was not uncommon for 60+60+60=120…

*In absolute terms there were a lot of scores above the 120 mark that year, but the percentage was small. If I remember correctly, 133 was the highest losing score that could be achieved, and thus 134 was the minimum score that guaranteed a victory (this is not possible in some games).

Without access to Evan’s manual scoring data, I can’t verify that it’s a correct least-squares linear fit, but I can say it’s not the “ideal function”: the line does not pass through the point (40,40) nor, if you look very carefully, does it pass through the origin.
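
(For reference, the posted equation behaves the same way: 0.983517 × 40 − 0.300085 ≈ 39.04 rather than 40 at X = 40, and −0.300085 rather than 0 at X = 0.)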

The OPRs here are calculated from Archimedes only; they do not include data from any previous event.

They used all three, every qualifier. They literally did the exact same thing every single qualification match, and did it well. By the way, they’re the point way over in the top right.

Even though it was incredibly dysfunctional, 2010 was a fun game for strategy. You hit the nail on the head, though. OPR can get thrown off by alliance partners and strategies. However, the biggest thing I noticed upon looking at the data is that OPR was surprisingly precise with totes, okay with cans and auto, and completely useless for litter.

Y(tote points) = 0.983517 * X(tote OPR) - 0.3000815, p-value < .0001

It’s Tableau.

Version with a forced y-intercept of zero: https://i.imgur.com/3ok6XWN.jpg

Y = 0.963006 * X
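
In code, the equivalent no-intercept fit is just slope = Σ(x·y) / Σ(x²); a minimal sketch with the same made-up stand-in arrays as above:

```python
import numpy as np

# Stand-in data: swap in the real (tote OPR, scouted tote points) pairs.
tote_opr = np.array([12.3, 25.1, 40.2, 8.7, 33.4, 19.8])
scouted_tote_points = np.array([11.9, 24.0, 41.0, 7.5, 32.1, 20.5])

# Least squares with the y-intercept forced to zero: slope = sum(x*y) / sum(x*x).
slope = np.dot(tote_opr, scouted_tote_points) / np.dot(tote_opr, tote_opr)
print(f"Y = {slope:.6f} * X")
```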