Top bots of Week 1 2019 [Component OPRs]

@Whatever, nice comparisons! I hadn’t even bothered to look at that yet, but I am happy to see my data work out.

I might add another column that looks at penalties to see how many penalty points a team contributes in a match. It may or may not be useful.

I agree with you. Curious to see those numbers tonight!

This is correct, and is generally why I remove the foulPoints component when doing any sort of OPR analysis. I would like to look at whether applying regularization methods (e.g., ridge regression or LASSO) would help enforce smoothness and limit the impact of components with extreme outliers.
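
For anyone curious, here is a minimal sketch of what ridge would look like on top of the usual OPR regression; the alliance matrix, scores, and lambda below are all made up:

```python
# Sketch of ridge-regularized component OPR on made-up data.
# Plain OPR solves min ||Ax - b||^2; ridge adds lam * ||x||^2,
# which shrinks extreme component values toward zero.
import numpy as np

# A: one row per alliance per match, one column per team
# (1 if the team played on that alliance). b: alliance component score.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
b = np.array([25.0, 18.0, 22.0, 21.0, 24.0, 19.0])

lam = 1.0  # regularization strength (assumed; would need tuning)

# Closed-form ridge solution: x = (A^T A + lam*I)^-1 A^T b
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Unregularized OPR (ordinary least squares) for comparison
x_opr, *_ = np.linalg.lstsq(A, b, rcond=None)
print("OPR:  ", np.round(x_opr, 2))
print("Ridge:", np.round(x_ridge, 2))
```

Cross-validating lam per game would be the natural way to pick the strength.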

Would treating foul points as a negative for the alliance that committed them, instead of as a positive for their opponents, fix this at all? (e.g., 2015)

Instead of constantly uploading new spreadsheets, I added a live viewer to our website:
http://viperbotsvalor6800.com/scout/
I’ll be making changes throughout the next couple of weeks, but this is a good start.

That’s a reason, but it’s not the biggest one. The Sykes Event Simulator 2019 has team 330 at 43.14 total points (OPR) and 39.34 unpenalized total points. That still leaves around 5 points of difference.

@mray190 It looks like your component OPRs don’t match either the Sykes Event Simulator 2019 or the 2019 Google Sheets Component OPR Calculator.

I don’t see any teams from Del Mar on the list: 4414, 359, 2102, to mention just a few.

Looking at the 2019 Google Sheets COPR calculator, I noticed a few things:

  • The creator chose to add penalties into the overall OPR. I don’t like doing that, since penalties are not consistent.
  • In the Google Sheets calculator, some teams have climbing averages of over 12. That wasn’t possible during Week 1 because no one had done a double level 3 climb yet.
  • The district event I was comparing (miket) had 0s for all cargo scored in rocket level 3. I know for a fact that a team scored a cargo game piece in the top level of the rocket at that event.

Edit: No bugs! Just me not paying close enough attention. See the discussion below.
So although that Google Sheets calculator is an awesome idea, it looks like there are still a few bugs in it (not that mine doesn’t have bugs either!).

I will have to look at the event simulator in a bit to do a comparison on that as well.

Which list are you referring to?

An OPR calculation is statistical, not direct. It’s the effective contribution, not the actual point total, that is calculated. That’s how some teams end up with a negative OPR: they hurt the performance of the other teams on their alliance. Component OPRs have in the past often shown values higher than the possible contribution of a single robot. For example, in this case a team may both contribute 12 points AND increase the likelihood that its alliance partners reach an L1 climb.
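
To make the “effective contribution” point concrete, here is a toy least-squares example (the scores are entirely made up) where the best fit assigns one team a negative contribution:

```python
# Toy example: team 2's alliances consistently underperform what its
# partners manage elsewhere, so the least-squares fit goes negative.
import numpy as np

A = np.array([
    [1, 1, 0],   # teams 0 + 1 together
    [1, 0, 1],   # teams 0 + 2 together
    [0, 1, 1],   # teams 1 + 2 together
], dtype=float)
b = np.array([40.0, 15.0, 12.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 1))  # [21.5, 18.5, -6.5]: team 2 "contributes" -6.5
```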

Top 25 OPR.

Generally I look at things like negative OPRs, or component OPRs above the possible maximum, as a rough guide to the error bar on OPR/component OPR for a given game.

Are you, by any chance, including playoff matches in your OPR calculations? The code I use to calculate OPRs (which excludes playoff matches) produces numbers similar to TBA’s, and when modified to use all matches it produces numbers similar to yours. It’s important to note that OPR is only defined over qualification matches, due to the lack of mixing of alliances in playoff rounds.

Analytically speaking, the OPR model assumes that the interaction terms between specific teams are insignificant (i.e., have a mean of zero), which is generally a safe assumption for qualification matches because teams rarely play with or against each other more than once or twice. However, as teams develop synergy in the playoffs and assume roles on alliances, this becomes an increasingly poor model. In addition, since a playoff alliance contributes the same row vector for each of its matches, you can potentially run into your design matrix (the A in the y = Ax linear model) not being of full rank when it comes time to solve, especially at small events. This is why it’s generally important to remove playoff matches from regression-type CC algorithms.
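
A quick illustration of that rank problem, with hypothetical data: a playoff alliance fields the same three teams every match, so their columns in the design matrix are identical and the normal equations go singular:

```python
# Why playoff matches break the OPR regression: the same alliance plays
# every playoff match, so its teams' columns are indistinguishable.
import numpy as np

# Three playoff matches, all played by the alliance of teams 0, 1, 2:
A_playoff = np.array([
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 1],
], dtype=float)

print(np.linalg.matrix_rank(A_playoff))                # 1, not 3
print(np.linalg.matrix_rank(A_playoff.T @ A_playoff))  # also 1: the
# normal equations are singular and individual contributions are
# unidentifiable. One fix is to keep only qualification matches, e.g.
# matches with comp_level == "qm" when pulling data from TBA.
```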

Ahhh, we found the culprit! Thanks for that! I totally meant to remove elimination matches (I have been doing that for years but forgot this year). I will fix that ASAP.

Yup, on the same page! This year the FMS keeps track of individual robots for the endgame, which means we get exact climbing averages!

Looks like the Google Sheets creator is calculating an OPR for climbing rather than using the FMS data. I opted to avoid calculating a climb OPR and instead use the FMS data to compute the exact average (and similarly for starting configuration).
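
As a rough sketch of that approach (the match results below are made up, and the field names follow my reading of the 2019 TBA score breakdown, so treat them as assumptions):

```python
# Exact per-team climb average from 2019 endgame data; no regression
# needed, since FMS records each robot's HAB level individually.
HAB_POINTS = {"HabLevel1": 3, "HabLevel2": 6, "HabLevel3": 12, "None": 0}

# One list per team: that robot's endgame result in each of its matches
# (as reported in fields like endgameRobot1/2/3; hypothetical data).
endgames = {
    1234: ["HabLevel3", "HabLevel3", "HabLevel1"],
    5678: ["HabLevel1", "None", "HabLevel2"],
}

for team, results in endgames.items():
    avg = sum(HAB_POINTS[r] for r in results) / len(results)
    print(f"{team}: avg {avg:.2f} climb points per match")
```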

I include penalties because I believe OPR should be used mainly as a very rough gauge to compare teams, so I try to adjust the total scores as little as possible. While penalties are certainly inconsistent, so are the alliances teams are facing, and that is not factored into OPR either. This also stays consistent with Caleb’s and TBA’s calculations, which I use to check against for bugs (and last I checked, all three match).

If this is team 27’s climbing OPR at miket, I agree with what @Richard_McCann said. Seeing as they had the most consistent L3 climber there, I think it’s reasonable to assume they would try to ensure at least one alliance partner made it to L1. Alliances unable to get the endgame RP may not have cared as much for those 3 points.

I would also suggest using the endgame % values instead of endgame OPR for evaluating climbing performance, since those are tracked per team instead of per alliance and are likely more accurate.

I’m not sure what you’re talking about here. Matches 10 and 63 both have cargo listed as scored, which matches TBA. If you’re talking about cargo L3 OPR, the teams in those matches have very small (0.08 to 0.25) but non-zero OPRs, which makes sense.

Please let me know if you find bugs, but I don’t think the things you’ve listed are bugs; they’re just side effects of linear regression on small data sets.

I have both. Column G is OPR; columns T:V are averages.

Did not see that! Apologies!

That is actually a pretty cool assumption that I had not thought of. I can imagine that high-tier level 3 climbers actually get more than 12 points for the reason you said, because they also guarantee the level 1 climb.

Same page. I just multiplied the percentage by the point value to get their OPR (e.g., an 80% level 3 rate × 12 points = 9.6) instead of separating the columns out like you did. Smart, though; it gives you multiple datasets to view.

Site update! http://viperbotsvalor6800.com/scout/

I wrongfully accused @Rachel_Lim of having a buggy spreadsheet, but nope, it was me. My bad!!

The issue: I was including elimination matches in my calculations, which is not good.

Keep in mind: I do not use penalties in any calculations.

Is there a way to see all the competitions combined?

Working on that! I’m also working on a page for match predictions, where you can type in your team number and it will show you the stats for your alliance partners and opponents in your next match.

Match predictions and breakdowns now available:
http://viperbotsvalor6800.com/scout/prediction

Note: If a tournament hasn’t started or the next match is unavailable, the previous match is shown.
Browser cookies are used to store which team was last searched.