OPR after Week Six Events
The OPR/CCWM numbers through the Week 6 events have been posted; please see
http://www.chiefdelphi.com/media/papers/2174. All events up to Week 6 are now included. If you find any errors or have any questions, please let me know. I will post this again after next week's events, before the World Championship.
Re: OPR after Week Six Events
Of related interest: Max Event OPR achieved by each of the 2,490 teams, Weeks 1-6 (Twitter data spreadsheet).
Re: OPR after Week Six Events
Thanks, Ed & Ether. 1370 used the OPR/CCWM spreadsheet this weekend with the auto-update feature, and we got our first really solid scouting list going since the team was formed back in 2004. Of course, we then proceeded to lose our last two qualifying matches and become the third pick of the alliance we joined, but I'm not sure that I can completely lay the blame for this on either of you two or the spreadsheet.
Although I would like to... :mad:
Re: OPR after Week Six Events
Going by Max Event OPR, is it safe to say that 624 is one of the best teams not to win a regional?
Re: OPR after Week Six Events
I found an error in the World Ranking tab: the number of teams that have played so far exceeded the total number of teams from last year. I thought I had changed all the formulas, but I guess I missed a few.
It is now version 6.1.
Re: OPR after Week Six Events
Quote:
It's ridiculous that they probably won't go to Championships. They could get lucky with the waitlist, but with wildcards this year, it seems unlikely.
Re: OPR after Week Six Events
Quote:
What game was the most OPR-friendly? That is to ask: which year's game had the highest correlation between OPR and championship attendance? You could probably ask the same thing (championship correlation level) for other neat parameters:
- Ranking after qualifications
- CCWM
- Auton OPR score (if we keep getting gifted with officially recorded auton stats, this could get interesting over time, as it would provide a mathematical way to compare "how important autonomous was this year")
Re: OPR after Week Six Events
Quote:
Not sure specifically about the correlation with Championship attendance, but I'd guess that there are too many confounding factors for the difference in those correlations to mean anything.
Re: OPR after Week Six Events
Thanks a lot for all your time and effort making these. Although my team chooses not to use OPRs and scouting data, I love using them for my own strategic endeavors.
Re: OPR after Week Six Events
Our team has been looking at the OPR and CCWM data, and we are coming away from it slightly confused. Hopefully, someone here (Ed? Ether?) can help me to understand it so that I can explain it to the rest of the team in a marginally coherent manner. Oh, wait ...
At TCNJ, we played an OK game and came away with an OPR of 22.3 and a CCWM of 6.2 (both rounded to 1 decimal place via Excel). At Bridgewater, we played tremendously better (or so it felt): our winning margin rose from 3.4 to 3.9 (+14.7%) and our average score rose from 19.6 to 24.1 (+23.0%), yet our OPR dropped 0.5% (22.3 -> 22.2) and our CCWM went down 11 points to -5.0. :eek: If we're scoring better, more accurately, and more often while winning more matches, wouldn't our CCWM and OPR go up?
Re: OPR after Week Six Events
Quote:
But what OPR and CCWM are telling you is that there was some negative synergy (be it random or systemic) between your team and the other teams on the alliances you played with at Bridgewater, such that your alliance scores were a bit lower than would have been expected based on the other teams' performance on other alliances.
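For anyone wanting to see why OPR and CCWM can move in opposite directions from raw averages, here is a minimal sketch of the standard least-squares calculation: each alliance contributes one equation whose unknowns are its teams' contributions, with the alliance score as the OPR target and the winning margin as the CCWM target. The function name and the match-tuple format are my own illustration, not the layout of Ed's spreadsheet.

```python
import numpy as np

def opr_ccwm(matches, teams):
    """Least-squares OPR and CCWM from qualification results.

    matches: list of (red_teams, blue_teams, red_score, blue_score)
    teams:   list of team numbers fixing the column order
    """
    idx = {t: i for i, t in enumerate(teams)}
    rows, scores, margins = [], [], []
    for red, blue, rs, bs in matches:
        # Each alliance yields one row: 1 for each member team.
        for alliance, own, opp in ((red, rs, bs), (blue, bs, rs)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = 1.0
            rows.append(row)
            scores.append(own)         # OPR target: alliance score
            margins.append(own - opp)  # CCWM target: winning margin
    A = np.array(rows)
    opr, *_ = np.linalg.lstsq(A, np.array(scores), rcond=None)
    ccwm, *_ = np.linalg.lstsq(A, np.array(margins), rcond=None)
    return dict(zip(teams, opr)), dict(zip(teams, ccwm))
```

Because the fit is over the whole event, a team's OPR depends on how its partners performed on their *other* alliances, which is exactly how a team can score more points yet see its OPR or CCWM fall.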
Re: OPR after Week Six Events
How do you download pictures into the database? I can't seem to figure it out.
Re: OPR after Week Six Events
Just as one data point on the accuracy of this modeling: our Crossroads OPR is calculated at 63.1, while our actual average score, based on scouting data, is 63.5.
This is a very accurate data system.
Re: OPR after Week Six Events
Ed, Ether,
I was thinking about how to improve OPR, and one of the biggest issues with it right now is that it does not take into account team improvement over the course of the event; it is an average. We don't really want to throw out early data points, as this would make everything less accurate overall. So what if we weight the later matches more heavily?

To do this (and I'm sure there's an easier way), you could give each match a multiplier on a scale of 1 to 2, in increments of 1 + (x/n), x being the current match a team is playing and n being the number of qualification matches it will play. For example, for a team's 9th match out of 12, the multiplier would be 1 3/4, so that match counts toward their OPR that much more than their earlier ones.

To do this with the current method of calculating OPR (and I don't know if this will make the matrices too complex), you would just count the first match in a 12-match schedule 13 times, the second match 14 times, and so on, since the multipliers 13/12, 14/12, ... share a common denominator. I'm probably overcomplicating it, but it looks like it should be possible. Just some thoughts.
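The scheme above doesn't actually require duplicating rows: weighted least squares gets the same answer by scaling each alliance's row and target by the square root of its weight. Here is a hypothetical sketch of that idea (the function name, the approximation of "matches played" by schedule position, and the data format are mine, not anything from the posted spreadsheet):

```python
import numpy as np

def weighted_opr(matches, teams, n_quals):
    """OPR with later matches weighted 1 + x/n, per the proposal above.

    Scaling a row and its target by sqrt(w) makes least squares count
    that equation w times, which is equivalent to duplicating the row.
    """
    idx = {t: i for i, t in enumerate(teams)}
    rows, targets = [], []
    for m, (red, blue, rs, bs) in enumerate(matches, start=1):
        w = np.sqrt(1.0 + m / n_quals)  # weight runs from ~1 up to 2
        for alliance, score in ((red, rs), (blue, bs)):
            row = np.zeros(len(teams))
            for t in alliance:
                row[idx[t]] = w
            rows.append(row)
            targets.append(w * score)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return dict(zip(teams, x))
```

Note the matrices stay the same size as in plain OPR, so there is no blow-up in complexity; only the entries change.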
Re: OPR after Week Six Events
Quote:
If you don't have the resources to take a picture of every robot, come to our pit tomorrow after lunch and I can give you a picture of every robot at MSC. You have to put the 64 pictures in the same directory as the scouting database.
Re: OPR after Week Six Events
I wonder if it would be reasonable to assume that the top ~100 will make it to elims at Worlds. Of course there are holes in this theory, but I would like to see how accurate this can be as a predictive model.
Re: OPR after Week Six Events
Quote:
(This is for normal regionals; I'm not sure how it would work at a district or championship event.)
Re: OPR after Week Six Events
You can also download pictures of most robots from the 2013 FRC Tracker app. It will send you an email with all the robot pics available. I have about 200 pictures of robots attending champs.
Re: OPR after Week Six Events
Quote:
But we can extend the underlying model to almost any function and use more general optimization methods to find a solution.[1] In particular, we can use a linear function (a*m + b) rather than just a constant for the underlying model of each team's score, so each team is characterized by 2 parameters rather than 1. In this linear model, the parameter b is similar to OPR, and a is a measure of how much better or worse a team gets each match.

I tried this last year for a couple of regionals, and while it matched the results better (smaller residuals, as expected), it was no better at predicting unseen matches. If time allows, I'll redo the analysis for the 2013 game, which is OPR-friendly.

More importantly for scouting, having 2 parameters per team also makes it harder to rank them! Is a team where the model predicts 20 points every match (a=0, b=20) better or worse than a team where the model predicts 15 points initially but improves by a point each match (a=1, b=15)? I'll leave the ranking question to wiser minds.

[1] Note that more complex functions may produce solutions which are not guaranteed to be optimal.
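Since the model stays linear in its parameters, it can still be fit with ordinary least squares: give each team two design-matrix columns, one for b (a 1 whenever the team plays) and one for a (that team's match number). The sketch below is my own illustration of the two-parameter idea, not the original poster's code, and it tracks each team's match count from the schedule order:

```python
import numpy as np

def linear_opr(matches, teams):
    """Fit each team's contribution as b + a*m, m = team's match number.

    matches: list of (red_teams, blue_teams, red_score, blue_score)
    Returns {team: (a, b)} -- a is per-match improvement, b is the
    OPR-like baseline.
    """
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    played = {t: 0 for t in teams}  # each team's own match counter
    rows, targets = [], []
    for red, blue, rs, bs in matches:
        for alliance, score in ((red, rs), (blue, bs)):
            row = np.zeros(2 * n)
            for t in alliance:
                played[t] += 1
                row[2 * idx[t]] = 1.0            # column for b
                row[2 * idx[t] + 1] = played[t]  # column for a
            rows.append(row)
            targets.append(score)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return {t: (x[2 * i + 1], x[2 * i]) for t, i in idx.items()}
```

One caveat visible from the matrix sizes: doubling the parameters per team roughly halves the data per parameter, which is consistent with the observation that the fit improves but prediction of unseen matches does not.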
Re: OPR after Week Six Events
It was brought to my attention today that the macros to automatically update OPR and CCWM for MSC and the MAR Championship do not work. For people at MSC, I was able to help them set it up there. For people at the MAR Championship, sorry about the inconvenience. I did not set up the links at the beginning of the season because these two events did not have any teams registered at that time, and I guess I forgot to do it when I published the spreadsheet after the Week 6 events. You can download the new version 6.2 spreadsheet at http://www.chiefdelphi.com/media/papers/2174.