FRCTop25 is live https://www.twitch.tv/firstupdatesnow
Are you going to post the results of the top 25+ voting?
Yes! We will be posting the Top 40 with Elo this afternoon/evening. I'm running a bit behind and bogged down with real-life work, so it will have to wait until I get home.
Sorry for the delay! Here are all the videos and rankings:
Nor’easter Region Week 1 Recap
SEweet Tea Region Week 1 Recap
InFiMidation Region Week 1 Recap
Mouth of the South Region Week 1 Recap
We the North Region Week 1 Recap
Best of the West Region Week 1 Recap
El micrófono Está Encendido Week 2 Preview
Week 1 FRCTop25 2019 Rankings **Let us know who you think got snubbed or was overrated. If you are not in the Top 40 and want to know where you ended up, post here or PM me.**
|Team|FRCTop25 Rank|ELO Rank|
|---|---|---|
FRC Top 10 by Region, taken from the FRCTop25 voting results. Note: this reflects where the team is located, not the region they played in.
USA Northeast Top 10
USA Southeast Top 10
FIRST in Michigan Top 10
USA South Top 10
USA North/Ontario/Quebec Top 10
USA West/British Columbia/Alberta Top 10
International Top 3
Israel and LATAM/South America next week.
Sad to see that the number 7 ranked Elo team, which had the second-highest OPR this week, didn't make the top 40.
|Top 25|Week 1 - ELO Rankings|
|---|---|
|2910|Jack in the Bot|
|330|The Beach Bots|
|1747|Harrison Boiler Robotics|
Source: Sykes Scouting Database 2019
346 was better than Robonauts this week
To be fair, the Top 25 is not really a competitive ranking so much as a popularity contest. It can also be highly biased by region.
I fully understand; it's just hard to see students put hard work in and not get recognized.
This comes up every year, and there are ways to get people from all over FRC to recognize your team. Just like how Elo puts BBQ in the 400s, it takes consistency and time to be recognized by the over 300 voters who gave their input in Week 1 voting.
I'm sure someone can link this, but I know Caleb did a case study showing that the FRCTop25 is actually pretty accurate. There will always be teams that get left out, and there will be deviations, but overall it's pretty darn close.
Just for a little bit of background on my Elo ratings: they do not look only at single-event performance. Rather, each team goes into an event with a seed, and their rating adjusts throughout the event based on that seed. Normally the seeds are pretty good, and by the end of an event a team's rating is a solid reflection of their ability. Sometimes, though, there are outlier teams that do dramatically better or worse than their seed. For teams like this, one event will eliminate only roughly half of the bias of their seed. So, if we take 4414 as an example, they started the event at a rating of 1450 (rookie/new veteran default) and ended it at ~1610, a really dramatic jump of 160 points! However, their actual rating should probably have improved by about 160*2=320, which would put them at 1770, or about 10th in the world. This is indeed borne out by testing: I found 1770 to be the best rating for them in terms of predictive power when I reran Del Mar a few times.
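The 4414 arithmetic above can be sketched as a tiny helper. This is just my framing of the post's "one event removes about half the seed bias" heuristic, not Caleb's actual Elo code; the function name and numbers are illustrative:

```python
# Illustrative sketch of the seed-bias correction described above.
# Assumption: one event removes only ~half the bias in a team's seed,
# so the full correction is roughly twice the observed rating jump.

def debiased_rating(seed: float, post_event: float) -> float:
    """Estimate a team's 'true' rating given their seed and post-event rating."""
    observed_jump = post_event - seed
    # If half the seed bias remains, the full correction is twice the jump.
    return seed + 2 * observed_jump

# 4414's example: 1450 default seed, ~1610 after the event.
print(debiased_rating(1450, 1610))  # 1770 -- roughly 10th in the world per the post
```

The same correction works in the other direction for teams that underperform a too-high seed.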
Two groups that are helpful to keep an eye on in addition to the top rated Elo teams are the teams that saw big single event Elo jumps and the highest rated rookies. Both of these groups have lower ratings than they probably should due to their poor seeding going into their event.
Top 5 most improved teams/teams with unexpected performances last week were:
And top 5 rookies were:
Here’s an excerpt from an email I sent to the FUN team on this topic a few months ago:
I just looked at the pre-champs FRC Top 25 lists for Houston and Detroit. I also generated top 25 lists using teams’ max OPR at this time and current Elo at that time (pre-champs). I chose to use championships District Points (excluding awards) to quantify championships performance. Overall, the “best” list (i.e. the list that was the best predictor of how many champs DP a team would earn) was the FUN list, followed reasonably closely by Elo, and OPR dragged far behind.
Next, I made 4 new lists by taking weighted averages of the existing lists. Ordering them from most to least predictive of champs performance, I found:
25% OPR, 75% FUN
25% Elo, 75% FUN
50% OPR, 50% FUN
50% Elo, 50% FUN
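The blending step can be sketched like this. The team names, normalized scores, and toy data below are invented for illustration; the actual study combined the 2018 pre-champs FUN Top 25, max OPR, and Elo lists and scored them against champs District Points (awards excluded):

```python
# Hypothetical sketch of building a weighted-average ranking list
# from two existing lists. All data here is made up.

def blend(scores_a, scores_b, weight_a):
    """Weighted average of two normalized per-team scores (higher = better)."""
    teams = scores_a.keys() & scores_b.keys()
    return {t: weight_a * scores_a[t] + (1 - weight_a) * scores_b[t]
            for t in teams}

# Toy normalized scores for four fake teams.
opr = {"A": 1.0, "B": 0.8, "C": 0.5, "D": 0.2}
fun = {"A": 0.9, "B": 0.6, "C": 0.75, "D": 0.3}

# The "25% OPR, 75% FUN" blend from the list above:
blended = blend(opr, fun, weight_a=0.25)
top = sorted(blended, key=blended.get, reverse=True)
print(top)  # ['A', 'C', 'B', 'D']
```

Note how a small OPR weight can reorder teams (here C overtakes B) without overturning the FUN list overall, which matches the email's finding that light metric supplementation helps most.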
To give a sense of magnitude, the 25% OPR, 75% FUN list was about 1 DP better than the 100% FUN list per team, meaning that the teams on that list on average won half a match more than a team in the same position on the FUN list.
We should really keep in mind that this is a pretty small sample size (50 teams), so I'd be wary of drawing too strong a conclusion, but we can pretty clearly say that 100% OPR is the worst of the above lists, and that there's a good chance a slight adjustment of the FUN list could provide a bit more predictive power for champs performance. The FUN list is certainly one of the better ones, though, and we should be wary of adjusting too much.
I'd definitely feel a lot better about drawing sharp conclusions if I had looked at a larger sample size; 50 teams isn't a lot. It would also be useful to look at lists from earlier weeks and from years other than 2018. That said, I do think the predictive power of the FUN lists is comparable to Elo's. There are clear biases in the FUN lists (region bias and big-name bias come to mind), but there are also some very obvious biases in Elo (notably veteran team bias; see my post above).
OPR's predictive strength also varies by game, since scoring, ranking, and several other factors are not accounted for. 2018 may not be a very good year for this kind of analysis: scoring was possession- and time-based, which doesn't show how much a team was actually contributing. For a game like 2015, where penalties were negative and scores were linear in stack height, OPR may be a better indicator.
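Since OPR comes up repeatedly in this thread, a toy illustration of what it actually is may help: a least-squares decomposition of alliance scores into per-team contributions. The match data below is made up, and with three teams in three 2-team alliances the system is exactly determined, so simple algebra stands in for the usual least-squares solve:

```python
# Toy OPR example: decompose alliance scores into per-team contributions.
# All match data is invented for illustration.

# (alliance_members, alliance_score)
matches = [(("A", "B"), 30), (("B", "C"), 25), (("A", "C"), 27)]

# a+b=30, b+c=25, a+c=27  =>  a+b+c = (30+25+27)/2
total = sum(score for _, score in matches) / 2
opr = {
    "A": total - 25,  # everything except b+c
    "B": total - 27,  # everything except a+c
    "C": total - 30,  # everything except a+b
}
print(opr)  # {'A': 16.0, 'B': 14.0, 'C': 11.0}
```

Real OPR solves an overdetermined version of this system by least squares across a whole event, which is why it inherits whatever the game's scoring structure does or doesn't capture about individual contribution.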
It’s pretty obvious that FRC top 25 is influenced heavily by popularity… Is it “okay”? Sure. Is it “pretty accurate”? Well it certainly doesn’t fit my description.
It's totally fine for it to be branded as a for-fun thing to see who people think the best teams in FIRST are, but branding it as anything more than that is pretty preposterous. Citing Elo or OPR comparisons to legitimize the rankings is also silly, given the large issues both of those models have with inter-event comparisons.
FRC top 25 is great for a fun comparison between teams and can cause some great discussions about who the top teams actually are, but don’t sell it as a “fact” or call it “accurate”. Doing that just makes you/it look silly.
@Caleb_Sykes Have you ever had a discussion about using a seeding method in a matchmaking algorithm, either to balance both sides of a match or to create an NFL-style parity schedule where strong teams are matched against strong teams more often?
I follow a lot of Caleb’s work…and he has done exactly that. Check out these 3 TBA blog posts.
I said that a case study showed this, and if I misspoke based on Caleb's response, I apologize. Saying I look silly because, in your words, "it certainly doesn't fit my description" is just saying that your opinion matters more.
View the FRCTop25 however you want; many teams and individuals find value in it and tune in. The goal has been to bring the FRC community together to celebrate these teams as voted on by community members. As with any poll there will be bias and error, but it seems that many people like you completely discount the voters by assuming they only vote for teams based on popularity. There are still many people who actually do research and put thought into their votes.
Is this poll completely accurate? Of course not. I believe it tends to be more accurate than certain people perceive, and this is simply a difference of opinion.
It's so interesting that the subjective opinion of the collective is more predictive than pure metrics, but becomes the best predictor when supplemented with just a little metric data (at least for last year's world champs). Reminds me of how we typically rank teams during our pick-list meetings.
One question I have @Caleb_Sykes would be could presence on the FRC top 25 actually improve team performance, as opposed to being a predictor? It’s a morale boost, no doubt. No idea how one would be able to figure this out (but you might).