Reshuffling Match Schedules at States and Worlds based on SOS

Given the level of data analysis we currently have with Statbotics.io and its calculation of Strength of Schedule, doesn’t it make sense to optimize schedules to remove the extremes, which I humbly propose to be anything above 0.75 or below 0.25? If we can reshuffle MSC fields to create a better balance, doesn’t it make sense to take it one step further and make sure schedules are also somewhat balanced?

I don’t want to totally remove the randomness and the “anything can still happen” vibe. But I think we can all agree that schedules need to be generated better.

On a side note, it would be interesting to go through all of the years’ data and see which team had the worst luck in schedules, and then we should all send that team a balloon or something.

8 Likes

From the Johnson field (predicted SOS → final SOS):
Top five “hardest” schedules:
7175: 0.86 → 0.88
1296: 0.84 → 0.76
2059: 0.84 → 0.87
4607: 0.83 → 0.75
2521: 0.79 → 0.79

Top five “easiest” schedules:
5654: 0.15 → 0.21
28: 0.15 → 0.27
3794: 0.18 → 0.13
2075: 0.19 → 0.23
6045: 0.19 → 0.24

Looks like the predicted SOS was pretty consistent with final SOS, based on their methodology. Shuffling schedules until the distribution of SOS scores is more clustered around 0.5 wouldn’t be too hard, and could allow for teams to shine a bit brighter.

I used to use a random seating chart generator for my classes. The students always complained that my random seats moved them away from the kids they like to goof off with. That’s because I hit the generate button until I got a random seating chart I liked.
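To put that in code terms, here’s a minimal sketch of the “hit generate until you like it” approach, assuming you already have a pre-event EPA for every team. The `make_random_schedule()` function and the `max_spread` cutoff are made up for illustration, and the per-team SOS proxy here is a stand-in; Statbotics’ real SOS is a percentile against random schedules, not this simple difference.

```python
import statistics

def team_sos(schedule, epa, team):
    """Crude SOS proxy for one team: mean opponent EPA minus mean partner EPA.
    (Statbotics' published SOS is a percentile against random schedules; this
    stand-in just gives the loop below something to measure.)"""
    opponents, partners = [], []
    for red, blue in schedule:                     # each match = (red_teams, blue_teams)
        if team in red:
            partners += [t for t in red if t != team]
            opponents += blue
        elif team in blue:
            partners += [t for t in blue if t != team]
            opponents += red
    return statistics.mean(epa[t] for t in opponents) - statistics.mean(epa[t] for t in partners)

def generate_until_balanced(make_random_schedule, epa, max_spread=5.0, tries=10_000):
    """Keep hitting 'generate' until no team's SOS proxy strays too far from the rest."""
    for _ in range(tries):
        schedule = make_random_schedule()          # hypothetical: returns a list of (red, blue) team lists
        sos = [team_sos(schedule, epa, t) for t in epa]
        if max(sos) - min(sos) <= max_spread:      # everyone's schedule is roughly equally hard
            return schedule
    return None                                    # no luck within the try budget
```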

8 Likes

The single biggest problem with shuffling based on strength of schedule is that nobody knows who the teams are in terms of strength at any given event.

At least, until they’ve actually played a match or two.

And you can’t really use past seasons as a benchmark; things change.

8 Likes

Couldn’t they use data from practice matches?

Yeah, but by the time a team makes it to state/worlds, they’ll have enough matches under their belt to determine a stabilized EPA. I don’t think anyone’s suggesting doing this at regionals/district events.

3 Likes

Teams aren’t required to play on practice day, nor are they required to play only in the matches they’re assigned to. A team playing in several filler matches would dilute this data.

6 Likes

Not to mention, teams that do make use of practice matches may choose to focus on practicing one specific task or tuning auto programs rather than actually playing a match to the best of their ability.

4 Likes

I nominate we send the balloon to #70, More Martians.

Event 1: 0.77 → 0.80
Event 2: 0.53 → 0.61
States: 0.90 → 0.91
Champs: 0.84 → 0.65

4 Likes

Against who, though? I know that if my team was #2 in our area and a major factor in opponents having a tough SOS, then went to Champs, we’d get blown out of the water. It’s a completely different level. Data doesn’t translate well.

2 Likes

They were re-run because they didn’t meet the criteria in the rule book; that was the main issue, not EPA or power.

MSC fields are balanced on district points, not robot strength. While the two are correlated, you can still have teams dramatically out of position.

As far as worlds goes, I am personally a fan of the random system. The fields are big enough that things generally get interesting in there.

1 Like

This depends on EPA being an accurate predictor of team strength. This year, since scoring was highly linear with the exception of link bonuses, EPA was very highly correlated with team strength, but that can change year to year depending on game design. Years with non-linear scoring actions, like 2017 and 2018, led to a much worse correlation between predictors like EPA or OPR and actual team performance.

2 Likes

My team had a similarly bad time at the state champs in PNW. We ended up ranked 31st and got first-picked by the 4th alliance; our SOS was 0.91 as well.

At the 3 events we were at with 2910, we played with them 0 times and against them 7 times.

That’s rough. I know from similar experience, although I’m not complaining about the 0.90 SOS: we fell to 24th as the ~10th-highest-EPA team on the field, but as the 2nd pick of the 5th alliance we ended up narrowly making it onto FIMstein for the first time in the team’s history.

A terrible schedule can sometimes help you in the long run, but it would be overwhelmingly better in most scenarios not to have a bad one at all.

1 Like

Look at what happened at MSC this year when a few teams were shuffled without regard to the stated division strength criteria.

I doubt that this is achievable without a major change to the criteria used by the scheduler. Looking at all championship divisions as well as all district championships, there wasn’t a single schedule that met this criterion. So I don’t think it’s possible to just regenerate schedules until one randomly meets it; rather, SOS would have to be a fairly large weight in the scheduler, as in the sketch below.
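To be concrete about “a fairly large weight,” something like the toy objective below, where SOS spread is traded off against stand-ins for the existing criteria. All function names, weights, and penalty definitions here are made up for illustration; this is not MatchMaker’s actual internals.

```python
from itertools import combinations

def repeat_penalty(schedule):
    """How many times any pair of teams meets more than once (as partners or opponents)."""
    seen = {}
    for red, blue in schedule:                      # each match = (red_teams, blue_teams)
        for pair in combinations(sorted(red + blue), 2):
            seen[pair] = seen.get(pair, 0) + 1
    return sum(n - 1 for n in seen.values() if n > 1)

def separation_penalty(schedule, min_gap=2):
    """How many times a team plays again within `min_gap` matches of its previous match."""
    last_seen, penalty = {}, 0
    for i, (red, blue) in enumerate(schedule):
        for team in red + blue:
            if team in last_seen and i - last_seen[team] < min_gap:
                penalty += 1
            last_seen[team] = i
    return penalty

def schedule_cost(schedule, team_sos, w_repeat=1.0, w_sep=1.0, w_sos=5.0):
    """Toy combined objective (lower is better). `team_sos` maps each team to an
    SOS proxy, computed however you like; the last term squeezes the spread so
    nobody draws an extreme schedule."""
    spread = max(team_sos.values()) - min(team_sos.values())
    return (w_repeat * repeat_penalty(schedule)
            + w_sep * separation_penalty(schedule)
            + w_sos * spread)
```

A swap-based or annealing-style scheduler could then minimize this combined cost instead of just the first two terms.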

That brings in another complication. The Statbotics SOS algorithm compares a given schedule against a number of random schedules. I’m not sure how Statbotics creates the random schedules, but it would need to do so without using this new algorithm that takes SOS into account, or else it would bias the results and make it impossible to ever meet the criterion (since the SOS is a percentile).
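To illustrate the percentile idea (and why the random pool has to stay plain-random), here’s a rough sketch; the raw-difficulty numbers are made up, and Statbotics may define them differently.

```python
import random

def sos_percentile(actual_difficulty, random_difficulties):
    """Share of random schedules that were easier than the team's actual one.
    The pool must come from schedules generated WITHOUT any SOS weighting,
    otherwise the percentile stops meaning anything -- the circularity above."""
    easier = sum(1 for d in random_difficulties if d < actual_difficulty)
    return easier / len(random_difficulties)

# Toy usage: pretend raw difficulty is mean opponent EPA minus mean partner EPA,
# sampled over 1,000 plain-random schedules for the same team.
random_pool = [random.gauss(0.0, 3.0) for _ in range(1_000)]
print(sos_percentile(5.5, random_pool))   # close to 1.0 -> a notably hard schedule
```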

It takes several seconds to generate the SOS for an event on Statbotics. Let’s say that implementing it in C++, rather than however Statbotics does it, gets that down to 0.1 seconds. MatchMaker currently generates 5 million schedules in a few minutes; at 0.1 seconds per schedule, it would take around a week to run 5 million iterations (5,000,000 × 0.1 s ≈ 500,000 s, or just under six days).

Lastly, there’s been a lot of clamoring for precomputing schedules to better optimize the existing criteria (match separation and minimizing repeated partners and opponents). It would not be possible to do this if Statbotics SOS were a factor.

4 Likes