[Plot: OPRs compared via standard deviation distribution, 2008-2015]
Updated plot: OPRs compared via standard deviation distribution, 2008-2015, using final data and coopertition points.
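For anyone reading along who hasn't computed OPR before: it's conventionally the least-squares fit that models each alliance's score as the sum of its members' OPRs. A minimal numpy sketch, assuming a hypothetical `matches` list of `(team1, team2, team3, alliance_score)` tuples rather than any particular data source's format:

```python
import numpy as np

def compute_opr(matches):
    """Least-squares OPR from (team1, team2, team3, alliance_score) rows."""
    teams = sorted({t for m in matches for t in m[:3]})
    index = {t: i for i, t in enumerate(teams)}

    # Each row of A marks the three teams on one alliance;
    # b holds that alliance's score.
    A = np.zeros((len(matches), len(teams)))
    b = np.zeros(len(matches))
    for row, (t1, t2, t3, score) in enumerate(matches):
        for t in (t1, t2, t3):
            A[row, index[t]] = 1.0
        b[row] = score

    # OPR is the x minimizing ||Ax - b||^2.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))
```

The curves above then presumably come from standardizing each season's OPRs ((OPR - mean) / stddev) and plotting them against percentile rank.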

I love these plots - so useful for setting strategy in a few days!

I’m honestly shocked that the curves are so even across so many years and so many varied games. LogoMotion seems to be the season with the least “unequal performance”, but they’re all pretty close. That tells me you only have to beat the average score by 1 standard deviation to outperform 90% of teams, which is way more skewed than your typical normal distribution, where +1 standard deviation only puts you ahead of about 84% of teams.
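As a quick sanity check on that claim (just the normal-distribution arithmetic, nothing taken from the plots themselves), a short scipy snippet:

```python
from scipy.stats import norm

# Under a true normal distribution, beating the mean by 1 standard
# deviation only puts you ahead of about 84% of teams...
print(norm.cdf(1.0))    # ~0.8413

# ...and reaching the 90th percentile would take about +1.28
# standard deviations.
print(norm.ppf(0.90))   # ~1.2816
```

If +1 sigma really clears 90% of the field, that would be consistent with a right-skewed distribution: a few strong outliers inflate the standard deviation, so +1 sigma lands past more of the pack than normality would predict.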

Previously:
http://www.chiefdelphi.com/forums/showthread.php?t=137223&highlight=opr+standard+deviation
http://www.chiefdelphi.com/forums/showthread.php?t=136224&highlight=opr+standard+deviation
http://www.chiefdelphi.com/forums/showthread.php?t=136227&highlight=opr+standard+deviation

I’m curious to know what use you have in mind for these plots.

I suppose if you think your team is in the x-th percentile, you could target a score, but doing this effectively would require knowing both the average score and the standard deviation of scores early in the build season. I’m impressed enough with teams that predict the average score reasonably well early on; I don’t know of any teams that predict standard deviations early on.

You’re right that we would have to guess both the average score and the standard deviation to locate the curve, but our first-week guesses were remarkably accurate last year on both counts. Maybe it was beginner’s luck :slight_smile:

OTOH, this chart suggests that the bottom ~10% and the top ~10% are each usually about 1 standard deviation from the mean (below and above it, respectively), so if you can imagine 10 teams with different abilities and guess what score the highest and the lowest of them will get (on average), you’d have a pretty good estimate of the mean and stddev. Then all you have to do is design a robot to score more than the high-scoring team :slight_smile:
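That back-of-the-envelope estimate is simple enough to write down. A trivial sketch, with `low` and `high` as hypothetical guesses for the average scores of the weakest and strongest of the 10 imagined teams, assuming they sit at roughly -1 and +1 standard deviations per the chart:

```python
def estimate_score_distribution(low, high):
    """Estimate (mean, stddev) from guessed -1 and +1 sigma scores."""
    mean = (low + high) / 2.0     # midpoint of the two 1-sigma points
    stddev = (high - low) / 2.0   # half the spread between them
    return mean, stddev

# e.g. guessing the weakest of 10 teams averages 10 points and the
# strongest averages 50 (made-up numbers):
print(estimate_score_distribution(10, 50))   # (30.0, 20.0)
```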

My rule of thumb is to estimate how many points I think 3 small children could score in a regulation-length match. Or, ask a student what they think the average score will be and divide it by (5 - n)*, where n is years of FRC experience.

This is an interesting metric for comparing games, though. What is the ideal shape of this curve?

For example, you probably don’t want a really sharp tail at the low end (bottom 20%). By this metric, Lunacy was terrible (which makes sense: if you couldn’t reliably move, a field-cleaning Roomba made a better alliance partner). Why is Overdrive the next worst? Penalties?

You definitely want some slope (where would the fun be in identical robots?), but where should the best top out? I think it’s interesting that the top tier was most limited in a game without explicit scoring limits (or could top-tier teams fill all the trailers?).

* The constant is somewhat team-dependent; for some teams I think it’s much closer to 4, and for others significantly higher.