Evidence of "cooperation" in score analysis?

Hi everyone. Purely out of curiosity, I am wondering whether anyone who has worked on computing OPR or similar measures, or on analyses of match scores, has seen evidence of “cooperation” or synergy in the team breakdowns?

I have been thinking about the scoring and ranking this year. Team rankings do not track metrics like EPA very well; I have seen lots of teams rank low or high counter to their EPA or our scouting (score) data. Also, it really feels like there were more upsets in the Elim matches, with Alliance 8 beating #1 or similar. This seems much different from the previous 3-4 years, where Qual ranking seemed to strongly match perceived “real” rank, and it was pretty hard to upset Alliance 1 or 2.

In thinking about this, I suspect the answer is that there is much more synergy between teams and strategies during matches than previous years. In 2022 and 2023, it was a lot like 3 robots playing 3 separate matches with their scores essentially adding linearly. In 2024, we have the Amp which is a strong modifier on the Speaker score, and having an alliance coordinate their scoring makes a huge difference.

So, basically, I am wondering if there is “numerical” evidence in the score breakdown numbers. For instance, if the analysis were a matrix decomposition of contributions, are the “off-diagonal” elements stronger for 2024? Has anyone looked at this?
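One way to make the question concrete (a hypothetical sketch, not anything I know has been done): standard OPR solves the least-squares system where each row is one alliance-match, each column is a team, and the target is the alliance score. To look for "off-diagonal" synergy, you could append a column per team *pair* that is 1 when both teams played together, and see whether those interaction weights come out large. The toy data below is simulated, with one synergy pair baked in to show the fit can recover it; all names and numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teams = 6
pairs = [(i, j) for i in range(n_teams) for j in range(i + 1, n_teams)]

# Simulated "true" model: individual contributions plus one strong
# synergy term between teams 0 and 1 (the hypothetical Amp/Speaker combo).
base = rng.uniform(10, 30, size=n_teams)
synergy = {(0, 1): 15.0}

# Build the design matrix: one row per alliance-match, with team
# indicator columns followed by pairwise interaction columns.
n_matches = 200
A = np.zeros((n_matches, n_teams + len(pairs)))
s = np.zeros(n_matches)
for m in range(n_matches):
    alliance = rng.choice(n_teams, size=3, replace=False)
    score = base[alliance].sum()
    for k, (i, j) in enumerate(pairs):
        if i in alliance and j in alliance:
            A[m, n_teams + k] = 1.0
            score += synergy.get((i, j), 0.0)
    A[m, alliance] = 1.0
    s[m] = score + rng.normal(0, 2.0)  # scoring noise

# Least-squares fit recovers individual AND pairwise ("off-diagonal") terms.
x, *_ = np.linalg.lstsq(A, s, rcond=None)
indiv, inter = x[:n_teams], x[n_teams:]
best_pair = pairs[int(np.argmax(inter))]
print("strongest fitted interaction:", best_pair, round(float(inter.max()), 1))
```

On real data you would compare the distribution of fitted interaction weights across seasons: if 2024 really is more cooperative, the interaction terms should explain meaningfully more variance there than in 2022-2023.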


See all the “death by serpentine” threads or “is the number 1 alliance OP” threads.

I was talking with 1678 at worlds and they specifically mentioned how they designed Storm Surge to be a game where 2 good bots could take down the number one bot in quals, etc. Lots of strategy options. Now obviously FIRST put their spin on it, but still.

2018 would also be a good year to take a look at, where several better-than-average bots were able to outscore the best bot at an event.

To your point, this sounds like what @Caleb_Sykes has done in the past. They may know more about the historical perspective.


I disagree with this analysis. While I really enjoyed the power ups and the strategy they added, death by auto was so real in 2018. And once a really good team got a lead it was incredibly difficult to recover from. Which is why the poofs were able to go undefeated.

This isn’t really related to the main thread, but I think more underdog alliances at deeper events should have tried triple scale. Of course, trying to win a 2v2 for the scale was really hard when the other two robots were better than you, but 2v3 was a different story.

We (2791) almost upset the #1 alliance in our division using triple scale, even though we had one of the weaker alliances AND our robot broke. We also almost won the IRI mentor tournament against 2056/118 (lost finals on a tie) thanks to triple scale.

In short, I don’t think 2018 was as bad for underdog alliances as folks think it was.