CHS Platter - Week 4 - Supreme Pizza

8 Likes

There are officials who get paid a lot of money to officiate sports with rules that are consistent from year to year, and they still make quite a few mistakes or misapply rules. I have been involved at the highest levels of Collegiate Athletics and have seen mistakes made, some that have cost teams games. Never once has a regulating/sanctioning body changed a result once it was final (and these are sports where millions of dollars are involved). I don’t see why that would be any different here. Is it ideal for the teams involved? Obviously not, but one of them went on to win the District Event, so for them it did not make much of a difference. I can’t speak to the other two teams or the benefit the other alliance got from the win.

I am glad there was recognition that the rule was misinterpreted and that it was fixed moving forward, even though there was frustration from a group of people. I hope that both students and adults were able to model behavior that they would be proud of.

I am very proud of robotics in our district and am looking forward to seeing great competitions these next several weeks.

4 Likes

At the 2012 Champs on Archimedes, the refs were awarding co-op bridge points incorrectly. It was brought to their attention Saturday morning (we still played quals on Saturday back then), and they went back and changed all the matches where the points were incorrectly awarded. I don’t see why they couldn’t have done this at any of the events this year…

Big stage = more of an audience?

High profile teams were involved including the team about to be in the HoF?

They’re just volunteers, Eric, please keep in mind mistakes don’t need to be corrected if you’re not getting paid.

1 Like

Precedent does have to be considered. It is unprecedented in most other areas.

1 Like

I don’t know if this deserves its own post, but where does CHS lie in skill level compared to other districts? With other regions having world-famous powerhouses at some events, CHS has always seemed to play every game slightly differently than the rest of the world, because there is not usually one of those teams that takes over an event. Does the lack of a true powerhouse team increase or decrease the average level of play?

1 Like

I think it depends on your definition of a “powerhouse.” There are certainly very strong and consistent teams in CHS, such as 2363, 1885, 1610, sometimes 422, and 1629, who serve as great examples for other teams. Then there’s a very clear layer of teams who are having breakout seasons and really developing their skills thanks to the district system, even from my outside perspective. Teams on the rise include, but are not limited to, 614, 1599, 1262, etc.

While you may not have one of those big powerhouse names around, I think the very top tier in CHS is helping raise the bar for everyone, especially with the help of districts. Definitely keep pushing your fellow teams to improve if you think there is more room to grow!

2 Likes

Using the FRC ELO spreadsheet from @Caleb_Sykes, this is what the average ELOs for each district look like. I’m not sure it’s really a good measure of how competitive a region is, though, because regions with more rookie teams will tend to be lower (e.g., FIM). A distribution of ELO by region would be interesting; that would help sort out the powerhouse teams from the rookie teams who are still developing.
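If anyone wants to build that distribution, here’s a rough sketch of how it could be pulled together. The CSV name and the `team`/`district`/`elo` columns are hypothetical placeholders, not the actual layout of the spreadsheet:

```python
# Rough sketch: average Elo per district, plus the per-district distribution.
# File name and column names are hypothetical, not the real spreadsheet layout.
import pandas as pd
import matplotlib.pyplot as plt

elos = pd.read_csv("team_elos.csv")  # hypothetical export: team, district, elo

# Average Elo per district (what the chart above shows).
print(elos.groupby("district")["elo"].mean().sort_values(ascending=False))

# Distribution of Elo by district (what would separate powerhouses from rookies).
fig, ax = plt.subplots()
for district, group in elos.groupby("district"):
    group["elo"].plot.kde(ax=ax, label=district)
ax.set_xlabel("Elo rating")
ax.legend()
plt.show()
```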


1 Like

Interesting, thanks for making that chart.

Overall, I would say that CHS is a strong but well-balanced district. We have a lot of high-level teams, but no ‘powerhouse teams’ at the World Championship level.

I’d caution against using ELO to compare teams between districts. The reason is that ELO is primarily designed to measure performance relative to the field, not in absolute terms. Because most district teams don’t play out-of-district events (aside from the ~15% that get invited to champs), teams won’t see many matches against those from other districts; the ELO pools remain mostly separate and thus don’t reflect the (hypothetical) talent differences between two districts. If the conclusion you’re trying to reach is “the average team in X is better than the average team in Y,” you’ll find that an average CHS team does as well in CHS as an average FIM team does in FIM: average. Thus, both teams will have a ~1500 ELO, and the only thing this graph will show is noise.
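To make that concrete, here’s a toy simulation (my own sketch using the standard Elo update with K = 32, not the actual rating model): one pool is “truly” much stronger than the other, but because the pools never play each other, both stay centered at 1500.

```python
# Toy illustration: two isolated Elo pools both stay centered at 1500, even
# when one pool's "true" skill is much higher, because rating points only
# trade hands within a pool.
import random

K = 32  # standard Elo K-factor (an assumption for this sketch)

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def simulate_pool(true_skills, n_games=20000):
    ratings = [1500.0] * len(true_skills)
    for _ in range(n_games):
        a, b = random.sample(range(len(true_skills)), 2)
        # The higher true skill wins more often, but only against its own pool.
        score_a = 1.0 if random.random() < expected(true_skills[a], true_skills[b]) else 0.0
        e_a = expected(ratings[a], ratings[b])
        ratings[a] += K * (score_a - e_a)
        ratings[b] += K * ((1.0 - score_a) - (1.0 - e_a))
    return ratings

weak_pool = simulate_pool([random.gauss(1400, 100) for _ in range(60)])
strong_pool = simulate_pool([random.gauss(1700, 100) for _ in range(60)])

# Both print ~1500: the pool averages never drift apart without cross-pool play.
print(sum(weak_pool) / len(weak_pool), sum(strong_pool) / len(strong_pool))
```

The only way a rating gap between the pools could appear is through cross-pool matches, which for district teams mostly means champs.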

4 Likes

Good point. I hadn’t considered the fact that most matches district teams play are against the same teams in their district. But since every district sends some teams to champs to play other teams, isn’t it still useful to compare, since the teams who compete outside the district will raise the average of their district? And if what you’re saying is true, that most of the difference is just noise, wouldn’t that also make using ELO to rank teams and predict championship matches pretty useless too?

ELO is not a good estimate of a team’s actual skill. The good thing about ELO is that it can be applied to different games more easily. Methods such as OPR fluctuate between games based on how scoring works that particular year, and the same goes for DPR and CCWM.
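For reference, OPR is just a least-squares fit of per-team contributions to alliance scores, which is exactly why it moves around with each year’s scoring scheme. Here’s a minimal sketch; the team numbers are borrowed from earlier in the thread, and the schedule and scores are entirely made up:

```python
# Minimal OPR sketch: OPR is the least-squares solution x of A x ≈ s, where
# each row of A flags the three teams on one alliance and s holds that
# alliance's score. A real event uses every qualification match (two rows per
# match); this tiny schedule and its scores are invented for illustration.
import numpy as np

teams = [2363, 1885, 1610, 422, 1629, 614]
col = {t: i for i, t in enumerate(teams)}

alliances = [  # (alliance members, alliance score); numbers are made up
    ([2363, 1885, 1610], 410), ([422, 1629, 614], 335),
    ([2363, 1885, 422], 395), ([1610, 1629, 614], 348),
    ([2363, 1885, 1629], 402), ([1610, 422, 614], 356),
    ([2363, 1885, 614], 388), ([1610, 422, 1629], 361),
    ([2363, 1610, 422], 377), ([1885, 1629, 614], 369),
]

A = np.zeros((len(alliances), len(teams)))
s = np.array([score for _, score in alliances], dtype=float)
for row, (members, _) in enumerate(alliances):
    for t in members:
        A[row, col[t]] = 1.0

opr, *_ = np.linalg.lstsq(A, s, rcond=None)
for t, value in sorted(zip(teams, opr), key=lambda pair: -pair[1]):
    print(t, round(float(value), 1))
```

DPR and CCWM come out of the same system, just with the opponents’ score or the winning margin on the right-hand side instead.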

Yeah, if you’re looking to see who can actually play the current game better, a different metric would be more useful. ELO seems more fitting to me for capturing the unmeasurable stuff that makes powerhouse teams powerhouses: their ability to win hard matches.

When it comes to quals data, CHS is about average this year.

1 Like

As someone who has competed in FIM, IN, and now CHS, I don’t think average ELO passes a gut check.

IN (anecdotally) has a much different distribution than most FRC regions; there are fewer boxes on wheels and a large group of teams that are all pretty evenly competitive at the top. Makes sense that their average would be higher, but the top end is not as strong as other districts.

What we really need to look at, as has already been mentioned, is distributions. That can tell you a number of things that a single number can’t.

Anecdotally, CHS seems pretty average. Of the districts I’ve competed in, FIM felt stronger, IN was comparable, if not slightly stronger, and NC felt weaker.

3 Likes

I agree with your feeling of FIM being stronger and CHS being average. Maybe the number of rookies has something to do with that. Here is what the rookies look like for 2018:

| District | Rookies | Rookie ELO | Vet ELO |
| --- | --- | --- | --- |
| CHS (125 Teams) | 5 | 1448 | 1517 |
| MAR (125 Teams) | 9 | 1436 | 1540 |
| FIM (497 Teams) | 64 | 1438 | 1508 |
| NC (65 Teams) | 8 | 1438 | 1507 |
| PCH (82 Teams) | 12 | 1447 | 1504 |
| IN (49 Teams) | 2 | 1451 | 1550 |
| IS (70 Teams) | 8 | 1453 | 1524 |
| ONT (156 Teams) | 25 | 1440 | 1523 |
| PNW (154 Teams) | 7 | 1447 | 1518 |
| NE (208 Teams) | 15 | 1424 | 1528 |

I guess my theory would be that since FIM adds lots of rookies each year, the opportunity to raise the average ELO for the region is reduced compared to districts that add fewer rookies.
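As a quick sanity check on that theory, you can blend the rookie and veteran averages from the table by head count:

```python
# Weighted average of rookie and veteran Elo, using the 2018 counts above.
def district_average(n_total, n_rookies, rookie_elo, vet_elo):
    n_vets = n_total - n_rookies
    return (n_rookies * rookie_elo + n_vets * vet_elo) / n_total

print("CHS:", round(district_average(125, 5, 1448, 1517), 1))   # ~1514
print("FIM:", round(district_average(497, 64, 1438, 1508), 1))  # ~1499
```

FIM’s rookie share drags its overall average about 9 points below its veteran average, versus about 3 points for CHS, so rookies do move the number, though the veteran averages differ a bit on their own too.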

At this point though I’m pretty much just waiting for Caleb to come in here and school me on how I’m using his metrics wrong :laughing:

:sweat_smile: You can use them however you want. I’m a bit late to the party, but here are my thoughts.

For comparing across regions, you really have to be careful, because everyone defines the “strength” of a region differently (average? median? top 10%? powerhouse count? etc.) and there’s no objectively correct way to do it. I’d recommend looking at pchild’s region rating graphs here, as those give a fuller picture of each region’s Elo distribution, and you can interpret them however you would like based on your own definition of region strength (or not at all if you don’t like Elo). Here’s another way to break down region strength by Elo: just plot the Elo percentile versus the Elo rating for each district (done here for FiM, CHS, MAR, and NE):


From this perspective, MAR looks to win out everywhere except the bottom 10 percentiles, probably due in large part to the age of that region. Another way to look is at Elo rank versus Elo rating, which compares individual teams rather than percentiles:

FiM wrecks the competition on this graph, and my impression is that when people talk about region “strength” they are thinking of something more like this than the first graph. How many super good robots can you name from CHS? Taking Elo > 1700, that’s like 2. What about NE or MAR? Like 10. What about FiM? Probably like 20, and you know you’re forgetting a bunch. Especially by the time DCMPs roll around, FiM looks super good with 160 teams at an amazing event. I tend to think more like the first graph, though, which means to me that FiM often gets overrated and gets a bit of undeserved credit for being competitive when its real strength is being enormous.
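If anyone wants to reproduce those two views, here’s a sketch; the CSV and its `district`/`elo` columns are hypothetical stand-ins for however you load the ratings:

```python
# Sketch of the two views above: Elo percentile vs rating, and Elo rank vs
# rating, per district. The CSV and its column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

elos = pd.read_csv("team_elos.csv")  # columns: team, district, elo
districts = ["FIM", "CHS", "MAR", "NE"]

fig, (ax_pct, ax_rank) = plt.subplots(1, 2, figsize=(10, 4))
for d in districts:
    ratings = elos.loc[elos["district"] == d, "elo"].sort_values(ascending=False)
    n = len(ratings)
    # Percentile view: best team sits near the 100th percentile; this
    # normalizes for district size.
    percentiles = [100 * (n - i) / n for i in range(n)]
    ax_pct.plot(percentiles, ratings.values, label=d)
    # Rank view: raw count of strong teams, which rewards sheer size.
    ax_rank.plot(range(1, n + 1), ratings.values, label=d)

ax_pct.set_xlabel("Elo percentile within district")
ax_pct.set_ylabel("Elo rating")
ax_rank.set_xlabel("Elo rank within district")
ax_pct.legend()
plt.show()
```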

Now, regarding the validity of using Elo ratings to compare regions: my Elo ratings work well (or at least better than any other system I’m aware of) at the champs level, where there is plenty of mixing between regions. I’ve also twice tried to region-balance my Elo ratings, and was unable to get any appreciable predictive power boost out of either of those endeavours. So your conclusion should probably be either that Elo doesn’t have noticeable region bias, or that I’m not clever enough to think of a way to correct for it :upside_down_face:.
