Thanks Ed & Ether. 1370 used the OPR/CCWM spreadsheet this weekend with the auto-update feature and we got our first real solid scouting list going since the team was formed back in 2004. Of course, we then proceeded to lose our last two qualifying matches and become the third pick of the alliance we joined, but I’m not sure that I can completely lay the blame for this on either of you two or the spreadsheet.
I found an error in the World Ranking tab. The number of teams that have played so far exceeds the total number of teams from last year. I thought I had changed all the formulas, but I guess I missed a few.
Absolutely, probably the best since 1477 won Razorback. Easily a top robot worldwide, but this year the 6 and 7 alliances were way stronger than usual (too strong?). There were often more than 16 robots that could score well but fewer than 24, so you’d fairly often see three decent scorers going up against two good ones. This was particularly true at Lone Star.
It’s ridiculous that they probably won’t go to Championships. They could get lucky with the waitlist, but with Wildcards this year, it seems unlikely.
Although unfortunate for 624, it brings up an interesting question:
-What game was the most OPR-friendly? That is: which year’s game had the highest correlation between OPR and Championship attendance? (One rough way to compute such a correlation is sketched after the list below.)
You could probably ask the same thing (correlation with Championship attendance) for other neat parameters:
-Ranking after qualifications
-CCWM
-Auton OPR score (if we keep getting gifted with officially-recorded auton stats, this could get interesting over time, as it would provide a mathematical way to compare “how important autonomous was this year”)
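For what it’s worth, here’s one rough way I’d put a number on that first question (a sketch in Python, not anything official): compute the point-biserial correlation between each team’s season OPR and a 0/1 flag for whether they attended Championship, which is just Pearson correlation with a binary variable. The inputs opr_by_team and champs_teams are hypothetical; you’d have to build them from event data.

```python
import numpy as np

def opr_champs_correlation(opr_by_team, champs_teams):
    """Correlation between season OPR and Championship attendance (0/1)."""
    teams = sorted(opr_by_team)                         # opr_by_team: {team: OPR}
    oprs = np.array([opr_by_team[t] for t in teams], dtype=float)
    attended = np.array([1.0 if t in champs_teams else 0.0 for t in teams])
    # Point-biserial correlation is just Pearson correlation with a binary variable
    return np.corrcoef(oprs, attended)[0, 1]
```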
In which games is OPR the most accurate? Games where robot scoring is minimally interdependent. Good indicators are a large supply of game pieces and very little defense, plus a linear relationship between robot tasks and robot scoring (like this year, as opposed to 2011, when there were multipliers). I’ve heard from multiple reputable people that OPR was really good in 2008. From the numbers I’ve run for the past few years, it was terrible in 2009 and relatively good in 2010. It’s quite good this year as well.
Not sure specifically about the correlation with Championship attendance, but I’d guess that there are too many confounding factors for the difference in those correlations to mean anything.
Thanks a lot for all your time and effort making these. Although my team chooses not to use OPRs and scouting data, I love using it for my own strategic endeavors.
Our team has been looking at the OPR and CCWM data and we are coming away from it slightly confused. Hopefully, someone here (Ed? Ether?) can help me to understand it so that I can explain it to the rest of the team in a marginally coherent manner. Oh, wait …
At TCNJ, we played an OK game and came away with an OPR of 22.3 and a CCWM of 6.2 (both rounded to 1 decimal place via Excel). At Bridgewater, we played tremendously better (or so it felt): our winning margin rose from 3.4 to 3.9 (+14.7%) and our average score rose from 19.6 to 24.1 (+23.0%), yet our OPR dropped 0.5% (22.3 -> 22.2) and our CCWM went down 11 points to -5.0 :eek:
If we’re scoring better, more accurately, and more often while winning more matches, wouldn’t our CCWM and OPR go up?
Assuming your team actually did score higher at Bridgewater than at TCNJ, and all other things being equal, yes.
But what the OPR and CCWM are telling you is that there was some negative synergy (be it random or systemic) between your team and the other teams on the alliances you played with at Bridgewater, such that your alliance scores were a bit lower than would have been expected based on the other teams’ performance on other alliances.
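For anyone who wants to see the mechanics, here’s a minimal sketch (Python, not the spreadsheet’s actual implementation) of how both numbers are fit. I’m assuming matches is a list of (red_teams, blue_teams, red_score, blue_score) tuples and teams is a list of team numbers. The only difference between OPR and CCWM is the regression target: OPR fits alliance scores, CCWM fits winning margins, which is why your CCWM also depends on how your partners and opponents did in their other matches.

```python
import numpy as np

def fit_opr_ccwm(matches, teams):
    idx = {t: i for i, t in enumerate(teams)}
    rows, scores, margins = [], [], []
    for red, blue, red_score, blue_score in matches:
        for alliance, own, opp in ((red, red_score, blue_score),
                                   (blue, blue_score, red_score)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = 1.0            # each team on the alliance contributes once
            rows.append(row)
            scores.append(own)               # OPR target: alliance score
            margins.append(own - opp)        # CCWM target: winning margin
    A = np.vstack(rows)
    opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
    ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)
    return dict(zip(teams, opr)), dict(zip(teams, ccwm))
```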
Just as one data point on the accuracy of this modeling, our Crossroads OPR is calculated at 63.1. Our actual average scoring, based on scouting data, is 63.5.
I was thinking about how to improve OPR and one of the biggest issues with it right now is that it does not take into account team improvement over the course of the event; it is an average.
With OPR we don’t really want to do anything like throwing out early data points, as that would make everything less accurate overall.
So what if we weight the later matches teams play more heavily? To do this (and I’m sure there’s an easier way) you could give each match a multiplier on a scale of 1 to 2, specifically 1 + (x/n), where x is the number of the match the team is currently playing and n is the number of qualification matches they will play. For example, a team’s 9th match out of 12 would get a multiplier of 1 + 9/12 = 1.75, so that match counts toward their OPR that much more than their earlier ones.
To do this with the current method of calculating OPR (and I don’t know if this makes the matrices too unwieldy), you could just count each match a whole number of times in proportion to its weight: in a 12-match schedule, the first match 13 times, the second 14 times, and so on up to the last match 24 times.
I’m probably overcomplicating it but it looks like it should be possible.
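To make the idea concrete, here’s a rough sketch (Python, using the same hypothetical (red_teams, blue_teams, red_score, blue_score) match format as the earlier sketch in this thread, with matches in chronological order). Instead of literally duplicating rows, weighted least squares scales each alliance row and its score by the square root of a weight, which has the same effect. How to combine the three per-team multipliers into a single row weight is a judgment call; this just averages them.

```python
import numpy as np

def weighted_opr(matches, teams, n_quals):
    idx = {t: i for i, t in enumerate(teams)}
    played = {t: 0 for t in teams}                    # matches each team has played so far
    rows, scores = [], []
    for red, blue, red_score, blue_score in matches:  # assumed chronological
        for alliance, score in ((red, red_score), (blue, blue_score)):
            row = np.zeros(len(teams))
            w = 0.0
            for t in alliance:
                x = played[t] + 1                     # this is team t's x-th match
                row[idx[t]] = 1.0
                w += 1.0 + x / n_quals                # the 1 + (x/n) multiplier from above
            w /= len(alliance)                        # average the per-team multipliers
            rows.append(row * np.sqrt(w))             # sqrt-weighting = weighted least squares
            scores.append(score * np.sqrt(w))
        for t in list(red) + list(blue):
            played[t] += 1
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.array(scores), rcond=None)
    return dict(zip(teams, sol))
```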
You have to take your own pictures and rename the pictures to x.jpg where x is the team number.
If you don’t have resources to take a picture of every robot, come to our pit tomorrow after lunch and I can give you a picture of every robot at MSC. You have to put the 64 pictures in the same directory as the scouting database.
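If it helps, here’s a tiny sanity-check script (Python; the folder path and team list are whatever you use locally) to see which teams are still missing an x.jpg in the scouting database folder:

```python
import os

def missing_pictures(db_dir, team_numbers):
    """Return team numbers with no <team>.jpg next to the scouting database."""
    return [t for t in team_numbers
            if not os.path.exists(os.path.join(db_dir, f"{t}.jpg"))]

# Example: print(missing_pictures("C:/scouting", [67, 217, 1023]))
```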
I wonder if it would be reasonable to assume that the top ~100 will make it to elims at Worlds. Of course there are holes in this theory, but I would like to see how accurate it could be as a predictive model.
I’ve found that the best way to get pictures of every robot is to get to the venue early on Friday morning and try to be one of the first people into the pits.
(This is for normal regionals; I’m not sure how it would work at a district or championship event.)
You can also download pictures of most robots from the 2013 FRC tracker app. It will send you an email with all the robot pics available. I have about 200 pictures of robots attending champs.
OPR (and CCWM) use the very simple underlying model that each team’s performance can be approximated as a single constant parameter. This makes the math fairly easy so we can use matrix inversion (or your favorite decomposition method) to find the optimal solution to the regression.
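In code terms (just a sketch, not the spreadsheet’s implementation): with A the 0/1 alliance matrix (one row per alliance per match, one column per team) and b the vector of alliance scores, the matrix-inversion route is the normal equations of that regression.

```python
import numpy as np

def solve_constant_model(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Least-squares solution of A x ~= b via the normal equations (A^T A) x = A^T b.
    # np.linalg.solve factors A^T A rather than explicitly inverting it,
    # which is the "favorite decomposition method" route.
    return np.linalg.solve(A.T @ A, A.T @ b)
```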
But we can extend the underlying model to almost any function and use more general optimization methods to find a solution [1].
In particular, we can use a linear function (a*m + b, where m is the match number) rather than just a constant for the underlying model of each team’s score, so each team is characterized by 2 parameters rather than 1. In this linear model the parameter b is similar to OPR and a is a measure of how much better or worse a team gets each match.
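Here’s a minimal sketch of one way to set that up (same hypothetical (red_teams, blue_teams, red_score, blue_score) match format as the earlier sketches, with matches in chronological order; not the exact code I used): each team gets two columns in the design matrix, one multiplied by its own match number m and one constant.

```python
import numpy as np

def fit_linear_model(matches, teams):
    idx = {t: i for i, t in enumerate(teams)}
    played = {t: 0 for t in teams}                    # matches each team has played so far
    rows, scores = [], []
    for red, blue, red_score, blue_score in matches:  # assumed chronological
        for alliance, score in ((red, red_score), (blue, blue_score)):
            row = np.zeros(2 * len(teams))
            for t in alliance:
                m = played[t] + 1                     # team t's m-th match
                row[2 * idx[t]] = m                   # slope column: contributes a_t * m
                row[2 * idx[t] + 1] = 1.0             # intercept column: contributes b_t
            rows.append(row)
            scores.append(score)
        for t in list(red) + list(blue):
            played[t] += 1
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.array(scores), rcond=None)
    return {t: (sol[2 * i], sol[2 * i + 1]) for t, i in idx.items()}   # team -> (a, b)
```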
I tried this last year for a couple of regionals and while it matched the results better (smaller residuals as expected) it was no better at predicting unseen matches. If time allows I’ll redo the analysis for the 2013 game which is OPR friendly.
More importantly for scouting, having 2 parameters per team also makes it harder to rank them! Is a team where the model predicts 20 points every match (a=0;b=20) better or worse than a team where the model predicts 15 points initially but improving a point each match (a=1;b=15)?
I’ll leave the ranking question to wiser minds.
[1] Note that more complex functions may produce solutions that are not guaranteed to be optimal.