Top bots of Week 1 2019 [Component OPRs]

And on this note, in the FTC world, there’s a popular site called ftcstats.org that now computes “OPRc”, which is OPR constrained to be ≥ 0 and ≤ certain game limits (similar to constraining FRC climbing to ≤ 12 this year). Note that this is NOT just computing the regular OPRs and then clipping them, but actually computing the OPR values that minimize the squared error between the actual alliance scores and the sum of the alliance teams’ OPRs, subject to these additional constraints. This is a fairly new approach that has become popular precisely because it removes the need to explain to folks what a negative OPR means, or what an OPR greater than the max component score means, exactly as happened in this thread. :slight_smile:
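In optimization terms (my notation, not anything ftcstats.org publishes), that means OPRc solves roughly

$$
\min_{x}\;\sum_{\text{alliance scores } a}\Bigl(s_a - \sum_{i \in a} x_i\Bigr)^{2}
\quad\text{subject to}\quad 0 \le x_i \le x_{\max} \text{ for every team } i,
$$

where $s_a$ is the actual score of alliance $a$ in a match, $x_i$ is team $i$’s OPRc, and $x_{\max}$ is the game-specific cap. Regular OPR is the same least-squares problem without the bounds.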

I’m curious how this works, as it’s not particularly well explained on the website how exactly the constraints are being applied. Are they doing some form of constrained least squares, or…?

OPR actually measures how much a team affects the total score of a given alliance, so it is possible for a team to improve the performance of the alliance or to drag it down. I checked this at the 2013 Champs, where our OPR was about 42 but our scouting showed we averaged 67. It turned out that our alliance partners scored about 25 points less when they were with us than they averaged across all of their matches.

Adding constraints to a regression creates inefficiencies in the parameter estimates. But I’ve always viewed OPR as reflecting these other, not-so-visible aspects, like working more closely with alliance mates.


Yes, constrained least squares, which still has a unique solution if the constraints are convex (like 0 ≤ OPR ≤ max).

Yes, constrained least squares will result in higher squared error in the already-played matches than unconstrained least squares (of course).
As I noted in my stupidly long paper from a few years back, though, OPRs are often computed from too few match results, which leads to overfitting (too few matches to “filter out the noise” in the underlying parameters), even when a purely linear OPR model would be perfect (which it isn’t). Usually, negative OPRs or OPRs above the max possible score go away when more matches are played, so they usually aren’t a “real” effect, and constraining the values doesn’t harm the forward-looking predictive power of OPR (in fact, it usually improves it).
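Here’s a quick toy sketch of the difference (the schedule and scores are made up, and I’m using scipy.optimize.lsq_linear for the box constraints; I have no idea what solver ftcstats.org actually uses):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy example: 4 teams, six 2-team alliances (FTC-style schedule).
# Each row of A has a 1 in the column of every team on that alliance.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
scores = np.array([90.0, 40.0, 55.0, 70.0, 95.0, 30.0])  # alliance scores

# Unconstrained OPR: ordinary least squares; values can come out negative
# or above the maximum possible component score.
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)

# Constrained "OPRc": same squared-error objective, but each team's value
# is boxed into [0, max_score] (e.g. 50 for this year's FTC hang).
max_score = 50.0
oprc = lsq_linear(A, scores, bounds=(0.0, max_score)).x

print("OPR :", np.round(opr, 1))   # one value > 50 and one < 0 here
print("OPRc:", np.round(oprc, 1))
# The constrained fit necessarily has at least as much in-sample error:
print("SSE OPR :", np.sum((A @ opr - scores) ** 2))
print("SSE OPRc:", np.sum((A @ oprc - scores) ** 2))
```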

When I say error bar, I am referring to how far OPR is off from a robot’s actual ability.
2013 kind of predates me, but was that -25 from your alliance partners something you were doing (and would carry forward), or something that just happened (and wouldn’t carry forward)? The second option is what I would refer to as an error bar in OPR.

The one that jumped out at me was teams with a 75 component OPR for touchpads in 2017. I find it hard to believe that there were teams with the ability to make their partners 50% more likely to climb. Those extra 25 points of OPR weren’t measuring a forward-looking skill; they were measuring backward-looking luck.


In 2017, we tested every one of our alliance partners before each match and supplied new ropes for many of them. That clearly added substantial points above the 50 climb points attributable to just our robot.

In 2013, I think the issue was that our alliances were so strong (we had an extremely favorable schedule that we took advantage of) that it was difficult for all of the robots to score up to their potential. So there was crowding out of scoring opportunities. It’s exactly these kinds of external factors that get rolled into a regression parameter estimate when the equation is not fully specified (e.g., it omits the level of assistance given to other teams, or constraints on scoring opportunities). It’s a well-known aspect of regression analysis.


See my answer above about parameter estimates. Unless the regression equation captures all explanatory variables, which these models do not and cannot, the parameter estimates will pick up some explanatory power from omitted variables that are correlated with the variables that are included.
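A toy simulation of that omitted-variable effect (the numbers are illustrative, not match data): leave a correlated contributor out of the regression and the included variable’s coefficient absorbs part of it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two positively correlated contributions to an alliance score,
# e.g. a team's own scoring and the help it gives its partners.
own = rng.normal(30, 5, n)
partner_help = 0.5 * own + rng.normal(0, 5, n)   # correlated with `own`
score = own + partner_help + rng.normal(0, 10, n)

# Full model: coefficient on `own` comes out near 1.
X_full = np.column_stack([np.ones(n), own, partner_help])
beta_full = np.linalg.lstsq(X_full, score, rcond=None)[0]

# Omit `partner_help`: the coefficient on `own` absorbs it (~1.5).
X_omit = np.column_stack([np.ones(n), own])
beta_omit = np.linalg.lstsq(X_omit, score, rcond=None)[0]

print("coefficient on own, both regressors   :", round(beta_full[1], 2))
print("coefficient on own, partner_help omitted:", round(beta_omit[1], 2))
```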

The fact that OPR can be affected by “non-robot” actions of a drive team is an important factor that shouldn’t be overlooked. In some cases, a team that is a “good alliance partner” can raise the performance level of their partners in a match. This “positive impact” on the other robots will show up as that team’s OPR being higher than the effect of their robot alone.

Examples of this include the one that Citrus_Dad mentions above (checking and supplying ropes in 2017). Others that I think can make a marked difference include things like

  • pre-match strategy planning (regularly coming up with a better-than-average alliance strategy will boost OPR for your team),
  • communications during a match (e.g. an astute drive coach calling out “20 seconds left, everybody come back to climb”),
  • in-match strategy changes (e.g. communicating and changing pre-agreed routes on the fly to accommodate opposing defense or to avoid congestion with alliance partners).

So, personally, I don’t like “capping” OPR for things like a climb at 12 points just because that’s all the robot itself can do. It may be that a team is doing things that increase their partners’ ability to climb, and that positive influence legitimately shows up as a component OPR higher than 12.

(PS: Conversely, if a team regularly diminishes the effectiveness of its alliance partners, its own OPR should end up lower than the points actually scored by its robot. I think this is actually a good thing about OPR: it does reflect some of the “intangibles.”)


For the record, when I was talking about teams with a 75 component OPR in touchpad points in 2017, I wasn’t talking about 1678. I looked it up, and their component OPR for touchpad points was right around 50 at all of their events (45.8, 46.2, and 53.4 in the database I found).

I am not arguing that teams can’t do things to make their partners better or worse. I am just saying that a quick check on how well OPR is doing for a given year is to take a look at some of the extreme numbers it is producing and see if they make sense.

Edit to add - so far in 2019 OPR looks pretty good. I wouldn’t trust it beyond ±3 at this point though.

Yes, if OPRs are capturing effects of omitted variables or nonlinearities, then negative OPRs and/or OPRs above the max possible score can be reflecting “real” abilities or lack thereof. It can happen. But OPRs can also be negative or above the max possible score simply because the data are noisy and there aren’t enough data points to separate signal from noise. That can happen too.

I’ll also add that using constrained OPRs may be more useful in FTC where there are usually only 5ish qual matches per event per team, compared to FRC with 12ish qual matches per event per team.

Early this FTC season, many teams were showing OPRs for the end game of more than 50 points. The FTC end game this year is just two robots on an alliance each individually latching/hanging/pulling themselves off the ground for 50 points each. It is 100% linear scoring and 99% independent, so in this case OPRs > 50 were almost always reflective of random luck/noise and not true synergistic effects.
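A quick simulation makes the point (the event size, hang probabilities, and random schedule below are all invented for illustration): with a hard 50-point per-robot cap and only ~5 matches per team, plain least-squares endgame OPRs will typically still put a few teams above 50 or below 0 purely from noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teams, n_matches = 24, 60          # ~5 matches per team, 2 teams per alliance

# Every robot's true endgame value is at most 50: it either hangs or it doesn't.
true_hang_prob = rng.uniform(0.2, 0.9, n_teams)

A = np.zeros((n_matches, n_teams))   # one row per alliance-score observation
scores = np.zeros(n_matches)
for m in range(n_matches):
    pair = rng.choice(n_teams, size=2, replace=False)
    A[m, pair] = 1
    scores[m] = 50 * rng.binomial(1, true_hang_prob[pair]).sum()

# Unconstrained endgame "component OPR" from ordinary least squares.
opr = np.linalg.lstsq(A, scores, rcond=None)[0]
print("endgame OPRs > 50:", int((opr > 50).sum()))
print("endgame OPRs < 0 :", int((opr < 0).sum()))
```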

Any chance you plan on creating a similar list of all Week 2 performances?

Everything is now live on our website:
http://viperbotsvalor6800.com/scout

An overall list is available on the All page (give it a few minutes; it is still updating):
http://viperbotsvalor6800.com/scout/all

