Re: looking at OPR across events
Quote:
You need three things in this game: a really good front-court robot that will make 90%+ of the shots it takes, has a good intake, and isn't afraid to play some defense; a midfielder that can transition from assisting to defense on the fly; and an inbounder that can quickly pass the ball to the midfielder as well as play a bit of D or throw some picks to free the midfielder from defense to execute the pass. An alliance of two high-scoring robots looks great on paper, but unless they can transition into those defensive roles on the fly, they're not going to stand up to the alliance above. I don't think you will see examples of more brutal defense than what we saw in the CVR finals. If you can't counter that with equally effective defense of your own, or counter-defense to free up your front-court robot, you're in a world of hurt.
Re: looking at OPR across events
Quote:
I know you do everything humanly possible to coax the truth out of the Twitter data, and the time and effort you invest is much appreciated. For the Twitter stats reports that I generate, I include the Twitter data even if that data is not 100% complete. http://www.chiefdelphi.com/forums/sh....php?p=1355882
Re: looking at OPR across events
Quote:
1. A great defensive robot doesn't have a big impact in matches where the other alliance wouldn't have been scoring many points anyway.
2. Catching robots require someone to provide a controlled truss shot.
3. Inbounders and assisters require someone to inbound to or assist to.

In each case, OPR/CCWM will tend to systematically underrate these attributes, since they are only utilized in a subset of qualification matches and (in most cases) will have their dependencies met in elims. High-goal scorers and truss shooters can manufacture pretty good scores by themselves and are therefore comparatively overrated by OPR/CCWM.
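For context on why score-independent contributions get no credit: OPR is typically computed by least squares over alliance scores, so a robot only earns rating through points that land in its own alliance's column. A minimal sketch, with made-up teams, scores, and 2-robot alliances for brevity:

```python
# Minimal OPR sketch: solve the least-squares system A @ x = s, where
# each row of A flags the teams on one alliance and s holds that
# alliance's match score. All teams and scores here are invented.
import numpy as np

teams = ["A", "B", "C", "D"]
A = np.array([
    [1, 1, 0, 0],  # teams A+B scored 120
    [0, 0, 1, 1],  # teams C+D scored 60
    [1, 0, 1, 0],  # teams A+C scored 100
    [0, 1, 0, 1],  # teams B+D scored 80
], dtype=float)
s = np.array([120.0, 60.0, 100.0, 80.0])

# lstsq returns the minimum-norm solution when A is rank-deficient,
# as it is here (rows 1+2 sum to the same vector as rows 3+4).
opr, *_ = np.linalg.lstsq(A, s, rcond=None)
for team, rating in zip(teams, opr):
    print(f"{team}: {rating:.1f}")  # A: 65.0, B: 55.0, C: 35.0, D: 25.0
```

Note that the model only ever sees each alliance's own score: a robot that suppresses the *opponents'* score, or that enables a partner's catch or assist, shifts points into columns it doesn't own, which is exactly the systematic underrating described above.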
Re: looking at OPR across events
Quote:
I will continue to agree to disagree with your general premises, and I will continue to use the systems in place until someone provides a better quantitative metric as a substitute.
Re: looking at OPR across events
Anyone know of an app I can get to view this on Android? I've tried OfficeSuite, but it never loads all the way. Any suggestions?
Re: looking at OPR across events
Quote:
I agree that, as a general rule, OPR does not correctly represent teams with some strategies. I also agree that it is a highly interesting and useful metric most of the time. I was just wondering whether it is another general rule that it's easier for a team with good alliance partners to get a high OPR than for a team with not-so-good partners. Quote:
Again, I completely agree. But I'm just wondering whether, in general, a team could get a high OPR more easily by going to a more competitive event?
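One way to probe the partner-strength question above is with a toy simulation (all numbers invented): give each team a fixed "true" contribution, pair teams at random, and check whether least-squares OPR recovers the true values regardless of who each team happened to play with.

```python
# Toy simulation: do strong partners inflate a team's OPR?
# Alliance scores are the sum of the allied teams' fixed "true"
# contributions plus noise; OPR is fit by least squares over many
# randomly scheduled 2-team alliances. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_matches = 12, 200
true = rng.uniform(20, 100, n_teams)  # hypothetical true contributions

A = np.zeros((n_matches, n_teams))
s = np.zeros(n_matches)
for m in range(n_matches):
    pair = rng.choice(n_teams, size=2, replace=False)
    A[m, pair] = 1.0
    s[m] = true[pair].sum() + rng.normal(0, 5)  # score with noise

opr, *_ = np.linalg.lstsq(A, s, rcond=None)
print(f"max |OPR - true|: {np.abs(opr - true).max():.2f}")
```

With random pairings and enough matches, the fit attributes each point to the right column in expectation, so strong partners alone shouldn't systematically inflate OPR; with the short schedules of a real event, though, the estimates are noisy, and that noise is where partner and event effects can leak in.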
Re: looking at OPR across events
Quote:
EDIT: just realized I posted this in the wrong thread, sorry
Re: looking at OPR across events
Quote:
I assert that the distribution of robot capabilities in eliminations tends to be very different from qualifications: across alliances, more robot capabilities tend to be present in total, so partner- or opponent-dependent attributes will contribute more to an alliance than they did on average in qualifications.
Re: looking at OPR across events
In defense of OPR this year, I'd like to look at the data: specifically, how accurately OPR has predicted teams' performances in the competitions we've seen thus far. Here are the top 10 teams as ranked by OPR, accompanied by their finishes at the events where they competed.
1. 1114 - Semifinalist at the Greater Toronto East regional (lost in the semifinals partly due to a technical foul by an alliance partner)
2. 2485 - Finalist at the San Diego regional (also lost the finals partly due to a tech foul)
3. 3683 - Semifinalist at Greater Toronto East (alliance partner of 1114)
4. 33 - Winner of the Southfield district event
5. 987 - Finalist at San Diego (alliance partner of 2485)
6. 624 - Winner of the Alamo regional
7. 16 - Winner of the Arkansas regional
8. 254 - Winner of the Central Valley regional
9. 3147 - Finalist at the Crossroads regional
10. 3393 - Finalist at the Auburn Mountain district

In fact, the highest-OPR team that didn't make at least the finals of its regional/district event is 3494 in 12th, followed by 3476 in 13th (whose pickup broke during the semifinals at San Diego). Overall, only seven of the top 30 teams failed to make the finals, and only two failed to make the semifinals of at least one event. As a robotics and statistics nerd, I think these numbers speak for themselves: OPR may not be a perfect metric, but it seems pretty dang accurate at predicting whether a team will finish well.

EDIT: 1114 and 3683 lost in the semifinals. Thanks to Kevin Sheridan for pointing that out.
Copyright © Chief Delphi