FRCTop25 Week 1 Poll Open. Closes 3/5 7:00pm Eastern

Check out LamBot 3478, who went undefeated at the Monterrey Regional (17-0) with a 226 OPR and can lift 2 robots.

https://www.youtube.com/watch?v=MNKjuMCGcOo

Well, it’s too late now for voting, but most events now have their data published, so here’s the top 25 by Elo:

Team	Elo
148	1895
1678	1853
118	1847
610	1843
2046	1833
3478	1786
3005	1783
5172	1783
525	1777
33	1765
4513	1760
1918	1760
829	1754
379	1754
3538	1753
125	1745
1360	1742
1305	1742
4539	1741
2910	1737
2848	1737
4476	1735
1876	1730
3824	1730
1325	1726

Excuse my ignorance, but what exactly is Elo?

OPR is a lot more useful for this game than many people realize. Because of the nature of the Power Up scoring system, 2018 OPR reflects how well teams control a match rather than how well they cycle game pieces. So it doesn’t reward a team that scores a lot of cubes but is always behind on possession, and it does reward a team that consistently controls possession.
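For anyone unfamiliar with how OPR is actually produced, here's a minimal sketch: one equation per alliance appearance (the three teams' contributions sum to the alliance score), solved in the least-squares sense. The team numbers and scores below are made up purely for illustration.

```python
import numpy as np

# Hypothetical qualification results: each row is one alliance's
# appearance -- (list of team numbers, alliance score).
alliance_scores = [
    ([148, 118, 610], 420),
    ([1678, 2046, 3478], 395),
    ([148, 1678, 3005], 441),
    ([118, 2046, 610], 388),
]

teams = sorted({t for alliance, _ in alliance_scores for t in alliance})
index = {t: i for i, t in enumerate(teams)}

# Build the design matrix A (1 if the team played on that alliance)
# and the score vector b, then solve A x = b in the least-squares sense.
A = np.zeros((len(alliance_scores), len(teams)))
b = np.zeros(len(alliance_scores))
for row, (alliance, score) in enumerate(alliance_scores):
    for t in alliance:
        A[row, index[t]] = 1.0
    b[row] = score

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(t, round(opr[index[t]], 2))
```

With a real event you'd have dozens of alliance appearances per team, which is exactly the sample-size issue discussed later in this thread.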

Which means, to me, that it might reward the team with the nicer match schedule who only plays against teams that can’t score on the Scale, and that it isn’t valuable for comparing teams between events.
At CNY, for example, the teams ranked by OPR are roughly sorted by how good they were, but there’s also a team in there that didn’t make the elimination rounds.

That’s a really good point. Winning the Scale/Switch early and maintaining it is obviously an important skill. It also takes good strategy which is additional value in an alliance partner.

My issue with OPR stems from the fact that it is literally useless when it comes to Everybot-style robots and other specialists that focus on the away Switch (great second-round picks). An Everybot will have a terrible OPR, and if people don’t understand why that is, they may make a poor decision during alliance selections. OPR is mostly useless when it comes to providing teams with valuable information about whom they should select with their 2nd pick. If it isn’t valuable for helping with alliance selections… who cares? It’s very clearly not a number that can be compared between events.

This observation is coming from a really small sample size, but it also seemed like OPR took a long time to sort itself out. I saw some teams on Friday night at the Great Northern Regional in the top 15 for OPR that had no business being there. Although with that said, when I look at the top teams at various events in terms of OPR I say to myself… Yep, that’s a good team.

My favorite scouting metric for this game is total number of cubes scored (regardless of location). This metric is a good indicator for how productive a team is during a match, even when some scoring objectives are easily in hand. Knowing where they scored the cubes is also important obviously.

Scouting for a 1st pick is also an interesting challenge… it’s not just about how many cubes they can place on the Scale. It’s about how accurately they can place cubes on the Scale when there are already 7 of them on there. It’s not something you can see on the field in a lot of cases because 8 cubes rarely end up on the Scale in Quals. In order to really figure out how good a team is at placing, you need to understand the dimensions and functionality of their mechanisms. Which means pit scouting is important this year. Wow that was a long tangent.

Gee thanks! It’s an honor to be named right beside those four teams. But to be fair, we didn’t really have a “solid” weekend. We had some intake trouble, and we found out that our router was defective. That gave us a lot of issues throughout the weekend. Oh, and we did the lowest score possible of 0 in the final :smiley:

But our auto was wicked awesome. It failed only once, during our second qualification match. In the finals we had the task of taking ownership of the scale while 1772 did the switch. We ran it flawlessly in all matches, both opposite and front side. We even did 2 cubes in the scale in our second semi.

Our climbing mechanism also gave us the opportunity to do a lot of double climbs with our 3rd alliance partner, 5443. We were both able to climb on the rung without the help of any ramp.

I totally agree with all this, especially this year. A lot of good scale bots placed low this year. At Montreal, we had 6 rookie teams with switch/vault-type robots that placed in the top 15. The reason this happened is that most scale bots went directly to the scale and ignored the switch. They ended up losing matches due to late ownership of their own switch. All the switch bots had to do was get a cube in early, and voila. We didn’t have much defensive play, so they kept ownership all the way to the end.

Scouting was extremely important, as you couldn’t use ranking or OPR as an indication, and most of the teams ranked 15-45 had the same average cubes-per-match count. Since our first pick (1772) was a highly effective scale bot that couldn’t climb, we went for an OK switch/vault bot (5443) that could climb and was compatible with our climbing style.

We did have one effective Everybot, and it ranked 10th. We had to face them in our quarterfinals since they eventually became the 7th alliance captain. I can tell you that those types of robots are annoying!

My concern with OPR this year is essentially similar to Kevin’s: there’s just not enough sample size. Similar to other games in which teams can contribute meaningfully to the score without obvious quantitative interactions (2009, 2014), there is some inherent value to a metric that seeks to infer the offensive impact of teams beyond the quantity of game pieces they score. However, also similar to 2009 and 2014, so much of the scoring potential of a team depends on their alliance partners and opponents in a given match. I feel there’s simply too much noise in the qualification schedule to get a truly accurate OPR measurement.

Instead of OPR, I’d like to see a metric similar to but not exactly the same as CCWM. Pardon my crude math, but imagine a metric that approximately solves a system of equations, one per match, along the lines of MatchPointSpread = Red1 + Red2 + Red3 - Blue1 - Blue2 - Blue3.

Unlike OPR / DPR, the other alliance (and the other alliance’s score) is included, so it would take into account how the strength of the opponents could depress the margin.
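That margin-based system is straightforward to prototype: one equation per match, with +1 coefficients for the red teams and -1 for the blue, solved against the score spread. The matches below are invented for illustration. Note the system only determines ratings up to an overall constant (adding the same number to every team leaves every spread unchanged), and `lstsq` resolves this by picking the minimum-norm solution, whose ratings sum to zero.

```python
import numpy as np

# Hypothetical matches: (red alliance, blue alliance, red score, blue score).
matches = [
    ([340, 5254, 2791], [319, 639, 4253], 415, 350),
    ([340, 319, 694], [5254, 2791, 287], 402, 389),
    ([2791, 639, 287], [340, 4253, 694], 361, 377),
    ([319, 5254, 4253], [639, 694, 287], 398, 344),
]

teams = sorted({t for red, blue, *_ in matches for t in red + blue})
index = {t: i for i, t in enumerate(teams)}

# One equation per match:
#   spread = red1 + red2 + red3 - blue1 - blue2 - blue3
A = np.zeros((len(matches), len(teams)))
b = np.zeros(len(matches))
for row, (red, blue, red_score, blue_score) in enumerate(matches):
    for t in red:
        A[row, index[t]] += 1.0
    for t in blue:
        A[row, index[t]] -= 1.0
    b[row] = red_score - blue_score

ratings, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in sorted(teams, key=lambda t: -ratings[index[t]]):
    print(t, round(ratings[index[t]], 2))
```

Whether this comes out equivalent to CCWM is debated a few posts down; trying both on the same event data is the quickest way to check.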


As for the actual topic of this thread:

  • I don’t like using Elo for rankings since it picks up where last year left off (right?) rather than making new numbers each year.
  • 2791 is dope as hell and definitely a top team, pardon my bias
  • People aren’t mentioning the traditional powerhouses because they don’t need to mention them in order for others to pay attention

125 managed our 2-cube autonomous at least once.

I’ve been thinking of making this metric for a long time (by that I mean several years), but I haven’t had the time to do it myself. However, some statisticians have assured me that it would come out the same as CCWM. I highly encourage you to try it and find out for sure.

To build off of others on this: FUN is 100% on board with having the FRC Top 25 weighted by some sort of objective metric. Really the question is what would be accepted by the majority, and how I can have a report run instantly so it isn’t any more of a hindrance when compiling the FRC Top 25.

I will say that while teams will always slip through the cracks each week, overall this is one of the more accurate community polls IMHO, and I believe having around 300 submissions helps with this immensely.

I just wanted to say how much I enjoyed the setup of the region recaps last night! It was a great addition to an already great program. My kids were beyond thrilled to get a mention. Keep up the great work FUN!

edit: i may have done ccwm by accident. looking into code now ¯\_(ツ)_/¯

Here’s the results for CNY for this metric (quals only):

340   118.06
5254  112.06
2791  108.15
319   107.2
639   89.16
4253  74.25
694   69.02
287   42.46
145   41.56
27    37.06
3003  33.39
20    25.55
3044  18.88
1518  13.89
6422  11.85
527   -1.22
378   -3.99
4027  -4.07
191   -5.51
5484  -12.22
2016  -18.25
810   -20.83
4122  -29.23
514   -32.12
250   -45.66
358   -54.19
6300  -59.72
7081  -59.93
223   -63.0
2053  -63.32
5030  -69.22
3173  -72.68
6621  -73.48
1450  -91.81
1665  -122.09

Or, including elims:

340   136.36   
2791  126.27   
5254  84.56    
319   74.65    
4253  64.63    
694   56.95    
639   54.75    
287   32.87    
27    26.39    
145   25.65    
1518  23.59    
3044  9.29     
20    5.4      
6422  -0.25    
527   -4.24    
4027  -4.66    
3003  -7.62    
191   -8.78    
378   -16.49   
5484  -19.86   
4122  -26.1    
514   -31.09   
810   -38.58   
6300  -42.17   
2016  -45.29   
223   -47.96   
5030  -48.03   
250   -50.78   
2053  -55.12   
7081  -55.7    
6621  -62.17   
3173  -63.79   
1450  -74.58   
358   -83.73   
1665  -106.12

Sorry for the long post, but I have a lot of thoughts on the topics below.

I’ll be looking in the next few weeks at comparing the predictive power of OPR to Elo for 2018, and comparing that predictive power to previous years. People are free to use any metric they like to rank robots, but essentially the only rankings I care about are metrics that are backed by predictive power. My Elo model was designed purely for this purpose. To my knowledge, OPR was not designed with predictive power in mind, but it has turned out to have by far the best predictive power of any common metric, so it is still the gold standard in my mind. It may be better in some years than others, but I found its predictive power was strictly greater than that of the other metrics for every year from 2008 to 2016. It’s a safe bet it will also be the best metric this year.

I could reset my Elo ratings every year, but nearly everything in my model is done to maximize predictive power for matches, and doing this would severely limit the predictive power of the model, particularly in the early weeks of each season. Each team’s Elo in my current model is found by taking 70% of their 2017 end-of-season Elo and adding 30% of their 2016 end-of-season Elo, and then reverting this sum by 20% toward 1550 (the pseudo-average Elo). If it were more predictive to revert everyone 100% after each season, I would have found this during model tuning. I get that some people might view this as unfair, but my goal is to maximize predictive power, no matter where that takes me, not to make a metric that seems intuitively fair.
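The blend-and-revert step described above is simple enough to write out. This is just a sketch of the stated formula (70/30 blend of the last two end-of-season ratings, then 20% reversion toward 1550), not the actual model code, and the example ratings are hypothetical.

```python
# Pseudo-average Elo toward which ratings are reverted between seasons.
AVERAGE_ELO = 1550


def start_of_season_elo(end_2017: float, end_2016: float) -> float:
    """Blend the last two end-of-season ratings, then revert 20% toward the mean."""
    blended = 0.7 * end_2017 + 0.3 * end_2016
    return blended + 0.2 * (AVERAGE_ELO - blended)


# A team that finished 2017 at 1800 and 2016 at 1700 would start 2018 at:
print(start_of_season_elo(1800, 1700))  # -> 1726.0
```

One consequence worth noting: a team exactly at 1550 in both seasons starts unchanged, while strong teams are pulled down and weak teams pulled up, which is the regression-to-the-mean behavior the tuning apparently favored.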

We just had a conversation on this very topic here. I didn’t actually go and check for equivalence, but a few years back wgardner did and found the measurements to be distinct. I’m seriously concerned about overfitting with this metric though, and its predictive power was very poor in my testing. I like EPR a lot better if you want to incorporate who the opponents are.

The only metric that would qualify for this privilege is OPR, in my opinion. I’ve been working hard to get Elo to be as good or better of a metric than OPR, but considering I’m the only one who calculates it and it doesn’t have nearly as strong of a track record as OPR, it would be a poor choice to use it this year. As I stated above, OPR has consistently been the most predictive metric in FRC. I think it would be cool if the rankings were weighted half by OPR and half by polling. Neither method is anywhere close to perfect, but I think they could complement each other well. I often feel bad when amazing teams with little name recognition attending obscure events don’t make the list, when they are probably better than some teams that did make it.

Coming off the win from MVR, I would definitely agree that 4028 was a robot to contend with. They definitely have a great design, and compared to my team’s, a very similar design haha. I think 379 also has a great robot, with that triple climb working its way into the line-up. 302, being my team’s alliance captain, played smart throughout quals and definitely should be recognized for a great single climb that was fast and sturdy.

Although I haven’t spoken about my team, don’t count us out for coming in strong at our Michigan events :wink: Ohio was just a warm-up.

Overall, had a great time at MVR and I think that a chunk of robots from that regional should not be forgotten.

Thanks for the post. It was highly informative. You make it sound like you have done some serious investigation of the predictive power of different metrics. If that is the case, could you point me in the direction of your findings? I am particularly interested in the relative power of OPR, WMPR, and EPR.

https://www.chiefdelphi.com/media/papers/3315