#1
Re: OPR after Week Four Events
FWIW: I was playing around with OPR and put together this spreadsheet. For each of the 3,833 Qual matches played so far, it shows the actual match score and the "expected" score based on the OPR of the teams in each alliance.
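If you want to reproduce the "expected" score column yourself, it is (as I understand it) just the sum of the three alliance members' OPRs. A quick sketch in Python; the team numbers and OPR values are made up for illustration, not taken from the spreadsheet:

Code:
# Predicted alliance score = sum of the alliance members' OPRs.
# The OPR values below are made-up placeholders, not real data.
opr = {33: 42.1, 67: 55.3, 254: 61.0, 148: 48.7, 217: 30.2, 469: 58.9}

def expected_score(alliance):
    """Predicted score for an alliance: the sum of its members' OPRs."""
    return sum(opr[team] for team in alliance)

red, blue = (33, 67, 254), (148, 217, 469)
print(f"Expected: red {expected_score(red):.0f} - blue {expected_score(blue):.0f}")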
#2
Re: OPR after Week Four Events
What was the process used to do this?
#3
Re: OPR after Week Four Events
How soon could this be generated after the qualifying matches are posted?
#4
Re: OPR after Week Four Events
The match outcome predicted from OPR matches the actual outcome 82% of the time (3142 out of 3833).
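For anyone who wants to run this kind of check on their own data: I'm assuming the 82% figure counts matches where the alliance with the higher OPR-predicted score also won on the field. A rough sketch, with made-up numbers:

Code:
# Count matches where the predicted (OPR-sum) winner matches the actual winner.
# All numbers below are made up for illustration; ties are ignored.
matches = [
    # (predicted red, predicted blue, actual red, actual blue)
    (90.4, 77.1, 92, 81),
    (84.2, 88.6, 75, 88),
    (95.0, 70.3, 64, 101),
]

correct = sum(
    1
    for pred_red, pred_blue, act_red, act_blue in matches
    if (pred_red > pred_blue) == (act_red > act_blue)
)
print(f"Predicted the winner in {correct} of {len(matches)} matches "
      f"({100.0 * correct / len(matches):.0f}%)")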
#5
Re: OPR after Week Four Events
Is it more likely to be over the expected result or under the expected result?
#6
Re: OPR after Week Four Events
My question was badly stated. How soon can projected results be generated once the qualifying schedule is published? How easy is this to do? I can do it myself if it's feasible.
#7
Re: OPR after Week Four Events
The over-80% accuracy is after all the qualifying matches are available. A least-squares fit is used, which is why the error is minimized. The number is even more impressive if you consider the close matches that can swing either way. During the qualifying matches, even after 4 to 5 matches, the prediction is not as accurate. But it is the best method there is, and it is what most people use. I have not done any studies, but the accuracy should be over 66% this year, and the later matches will get more and more accurate. Ether, this is another interesting challenge for you.

However, there is another way to do this: using historical OPR or projected OPR. But it only works if all 6 teams in that match have played in another event.
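For anyone asking how the OPR numbers themselves are computed: below is a minimal sketch of the least-squares fit described above, using numpy on a few made-up matches. The team numbers and scores are illustrative assumptions, not data from the spreadsheet.

Code:
# Least-squares OPR: one row per alliance per match, with a 1 in each column
# for a team on that alliance and the alliance score as the right-hand side.
import numpy as np

# (red alliance, blue alliance, red score, blue score) - made-up matches
matches = [
    ((33, 67, 254),  (148, 217, 469),  92, 81),
    ((33, 148, 469), (67, 217, 254),   75, 88),
    ((33, 217, 254), (67, 148, 469),  101, 64),
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
col = {t: i for i, t in enumerate(teams)}

rows, scores = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[col[t]] = 1.0
        rows.append(row)
        scores.append(score)

# Solve A @ opr ~= b in the least-squares sense; opr[i] is team i's OPR.
A = np.array(rows)
b = np.array(scores, dtype=float)
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

for t in teams:
    print(f"Team {t}: OPR = {opr[col[t]]:.1f}")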
#8
Re: OPR after Week Four Events
What I would be interested in is a comparison between OPR for an event and the actual average points scored by each robot. This, of course, means that a very accurate log of all robots in all matches is needed. Anyone have that data? (Before we get flamed for poor scouting: we don't keep track of ALL robots in ALL matches. The majority, yes, but not all.)

We make videos of all of our matches so we can go back and look for improvements in the robot and the driving. Using the videos, I've kept track of all of the points our robot scored during qualifications at both of our districts so far. OPR has been within 10% of our actual average at both events.
#9
Re: OPR after Week Four Events
And of course our "actuals" are only as accurate as the scouts. In general, they do a really good job, but I do frequently find errors (one scout recorded 33 putting up 8 discs in auton).
#10
Re: OPR after Week Four Events
NEW: OPR FIRST 2013 Android App | OPR and CCWM applications

Either way, I use the OPRs to calculate a predicted score in real time. As stated earlier, it is very accurate in terms of who wins/loses. As for the score itself, the accuracy varies. It would be cool to use past OPRs to predict the results. My app uses data specific to that regional and only that regional, so the longer the regional goes on, the more accurate the data is.
#11
Re: OPR after Week Four Events
I saw your thread last night and have already installed and used it. Brilliant! You say you use OPR and not CCWM. Being new to this, what is CCWM, and is it not preferable to OPR? Just asking, because I am fine with the OPR results.
#12
Re: OPR after Week Four Events
CCWM is Calculated Contribution to the Winning Margin. For a given team, instead of using purely the team's alliance score (as OPR does), CCWM uses the team's alliance score minus the opposing alliance's score.
#13
Re: OPR after Week Four Events
It's just like OPR, except instead of using the alliance score as input to the computation, it uses the difference between the two alliance scores for each match. See Ed Law's paper (linked in post #1 in this thread); there's a discussion about it there.
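To make the distinction concrete, here is a minimal sketch of the CCWM fit: the same design matrix as OPR, but the right-hand side for each alliance is its score minus the opposing alliance's score. The match data is made up for illustration.

Code:
# CCWM via least squares: identical setup to OPR, but each alliance's
# right-hand side is its winning margin (own score minus opponent's score).
import numpy as np

# (red alliance, blue alliance, red score, blue score) - made-up matches
matches = [
    ((33, 67, 254),  (148, 217, 469),  92, 81),
    ((33, 148, 469), (67, 217, 254),   75, 88),
    ((33, 217, 254), (67, 148, 469),  101, 64),
]

teams = sorted({t for red, blue, _, _ in matches for t in red + blue})
col = {t: i for i, t in enumerate(teams)}

rows, margins = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, margin in ((red, red_score - blue_score),
                             (blue, blue_score - red_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[col[t]] = 1.0
        rows.append(row)
        margins.append(margin)

ccwm, *_ = np.linalg.lstsq(np.array(rows),
                           np.array(margins, dtype=float), rcond=None)
for t in teams:
    print(f"Team {t}: CCWM = {ccwm[col[t]]:.1f}")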
#14
Re: OPR after Week Four Events
So, I did a quick scrape through our data from Grand Blanc. I found a handful of questionable values, but most of them had relatively low impact on the averages.
The highest delta between OPR and what our scouts provided was 13 points, favorable to that particular team. This team also had a few of the values I question, as it appears that they did not score outside of auton in their final 2 matches (which I find hard to believe, but will verify later). This was one of the top-scoring teams at the event.

The highest "unfavorable" OPR reading was 8.8 off from the scouts' average. This particular team also had some questionable data for one of their matches. Adjusting the values for that match to what I believe were more accurate (second scouting source), this delta went down to 5, and a different team became the most disadvantaged at 7.0. The team with this delta was a lower-scoring team that OPR seems to be especially harsh on, comparing their 9.5 average to their 2.3 OPR.

To get the average error, I took the absolute value of the error and found the average to be 3.5 pts and the median error to be 2.9 pts. Average OPR for the event was 24.8 and median OPR was 18.8. Thus the average and median error for this event seem to be coming in at 15-16%.
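For reference, a small sketch of that error calculation: the absolute difference between each team's OPR and its scouted per-match average, then the mean and median. The numbers are made up, not the Grand Blanc data.

Code:
# Mean and median absolute error between OPR and scouted per-match averages.
# Team numbers and values are made-up placeholders.
import statistics

opr     = {1000: 26.1, 2000: 17.2, 3000: 2.3, 4000: 45.1}
scouted = {1000: 27.0, 2000: 16.5, 3000: 9.5, 4000: 44.0}

errors = [abs(opr[t] - scouted[t]) for t in opr]
print(f"Average absolute error: {statistics.mean(errors):.1f} pts")
print(f"Median absolute error:  {statistics.median(errors):.1f} pts")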
#15
Re: OPR after Week Four Events
Averages out to zero, but that's to be expected: the least-squares fit forces the residuals to sum to zero, so over- and under-predictions cancel out on average.