
10-03-2014, 19:20
Registered User
 FRC #0254 (The Cheesy Poofs)
Team Role: Engineer
Join Date: Feb 2014
Rookie Year: 2003
Location: Menlo Park, CA
Posts: 57
Re: looking at OPR across events
Quote:
Originally Posted by David8696
In defense of OPR this year, I'd like to look at the data: specifically, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, along with how they finished at the events they competed in.
1. 1114—Finalist at Greater Toronto East regional (lost finals partly due to technical foul by alliance partner)
2. 2485—Finalist at San Diego regional (also lost finals partly due to tech foul)
3. 3683—Finalist at Greater Toronto East (alliance partner with 1114)
4. 33—Winner of Southfield district event
5. 987—Finalist at San Diego (alliance partner with 2485)
6. 624—Winner of Alamo regional
7. 16—Winner of Arkansas regional
8. 254—Winner of Central Valley regional
9. 3147—Finalist at Crossroads regional
10. 3393—Finalist at Auburn Mountain district
In fact, the highest-OPR team that didn't make at least the finals of its regional/district event is 3494 in 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only five of the top 30 teams failed to make the finals of at least one event, and only two failed to make the semifinals. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.
Minor correction: 1114 and 3683 actually lost in the semifinals at Greater Toronto East, not the finals.
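
Since this thread keeps coming back to what OPR actually measures: it's just the least-squares solution to a system of alliance-score equations, where every alliance in every match contributes one equation (sum of the three teams' contributions = alliance score). Here's a minimal sketch of the calculation in Python; the rosters and scores below are made up purely for illustration, not real 2014 data:

import numpy as np

# Each entry is (alliance roster, alliance score). In a real event,
# every qualification match contributes two rows, one per alliance.
# These rosters and scores are invented for the example.
matches = [
    (["1114", "2485", "254"], 172),
    (["1114", "3683", "624"], 158),
    (["2485", "3683", "987"], 150),
    (["254", "624", "987"], 141),
    (["1114", "254", "987"], 166),
    (["2485", "3683", "624"], 148),
    (["254", "3683", "624"], 145),
    (["1114", "2485", "987"], 170),
]

teams = sorted({t for roster, _ in matches for t in roster})
col = {t: j for j, t in enumerate(teams)}

# Incidence matrix A: A[i][j] = 1 if team j played on alliance i,
# and s[i] is that alliance's score.
A = np.zeros((len(matches), len(teams)))
s = np.zeros(len(matches))
for i, (roster, score) in enumerate(matches):
    for t in roster:
        A[i, col[t]] = 1.0
    s[i] = score

# OPR is the least-squares solution of A x = s: each team's
# estimated average point contribution to its alliances' scores.
opr, *_ = np.linalg.lstsq(A, s, rcond=None)

for t, val in sorted(zip(teams, opr), key=lambda p: -p[1]):
    print(f"{t}: {val:.1f}")

With a full event's worth of qualification matches the system is heavily overdetermined, which is why OPR estimates tend to stabilize as quals go on.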