Re: looking at OPR across events
In defense of OPR this year, I'd like to look at the data: how accurately OPR has predicted teams' performances in the competitions we've seen so far. Here are the top 10 teams as ranked by OPR, along with how they finished at the events where they competed.
1. 1114 – Semifinalist at Greater Toronto East regional (lost in the semifinals partly due to a technical foul by an alliance partner)
2. 2485 – Finalist at San Diego regional (also lost the finals partly due to a tech foul)
3. 3683 – Semifinalist at Greater Toronto East (alliance partner of 1114)
4. 33 – Winner of Southfield district event
5. 987 – Finalist at San Diego (alliance partner of 2485)
6. 624 – Winner of Alamo regional
7. 16 – Winner of Arkansas regional
8. 254 – Winner of Central Valley regional
9. 3147 – Finalist at Crossroads regional
10. 3393 – Finalist at Auburn Mountain district

In fact, the highest-OPR team that didn't make at least the finals of its regional/district event is 3494 in 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event.

As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.
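For anyone who hasn't computed OPR themselves, here's a minimal sketch of the usual least-squares formulation: build a matrix with one row per alliance per qualification match and a 1 in each column for a team on that alliance, then solve it against the alliance scores. The team numbers and scores below are made up for illustration, and this is the textbook formulation rather than the exact pipeline any particular scouting site uses.

```python
import numpy as np

# Hypothetical qualification data: (teams on an alliance, that alliance's score).
# Real data would come from the event's match results.
alliance_results = [
    ([1114, 3683, 4001], 112),
    ([2485, 987, 5012], 104),
    ([33, 16, 254], 98),
    # ... one row per alliance per qualification match
]

teams = sorted({t for alliance, _ in alliance_results for t in alliance})
index = {t: i for i, t in enumerate(teams)}

# Participation matrix A (1 where a team played on that alliance) and score vector s.
A = np.zeros((len(alliance_results), len(teams)))
s = np.zeros(len(alliance_results))
for row, (alliance, score) in enumerate(alliance_results):
    for t in alliance:
        A[row, index[t]] = 1.0
    s[row] = score

# Solve A x ≈ s in the least-squares sense; x is the vector of OPRs.
opr, *_ = np.linalg.lstsq(A, s, rcond=None)

# Rank teams by OPR, highest first.
for t, value in sorted(zip(teams, opr), key=lambda p: -p[1]):
    print(f"{t}: {value:.1f}")
```

With a full event's worth of matches the system is overdetermined and the least-squares fit is what the OPR rankings above are based on.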