Does anyone have a percentage of all elimination matches, within all divisions and extending to Einstein, that were upsets (a lower seed beating a higher seed)? Competitions this year were insane, and to my knowledge I’d say by far the most unpredictable in FIRST’s history (at least since 2011). Can I get some older vets’ opinions on this? Also, I’d like to hear opinions on why.
I found it really interesting that Curie was 100% upsets in eliminations this year. It really shows how crucial the 2nd pick is to winning in the elimination rounds and the later seeds do get better picks in the second round.
To rephrase, and hopefully garner some more conversation/thoughts, here are the outcomes and calculations.
Each field has 4 Quarterfinal pairings (Q), 2 Semifinal pairings (S), and 1 Final pairing (F), for a total of 7 pairings.
With 8 division fields and Einstein, we have 9 fields total, with 7 pairings each, for a total of 63 pairings [edit: fixed].
Pairings with upsets:
Archimedes
Alliance 3/2 - S - 1
Carson
Alliance 5/4 - Q - 2
Alliance 7/2 - Q - 3
Alliance 4/1 - S - 4
Alliance 4/3 - F - 5
Carver
Alliance 5/4 - Q - 6
Alliance 6/3 - Q - 7
Alliance 2/1 - F - 8
Curie
Alliance 8/1 - Q - 9
Alliance 7/2 - Q - 10
Alliance 6/3 - Q - 11
Alliance 5/4 - Q - 12
Alliance 8/5 - S - 13
Alliance 7/6 - S - 14
Alliance 8/7 - F - 15
Galileo
Alliance 6/3 - Q - 16
Alliance 6/2 - S - 17
Hopper
Alliance 7/2 - Q - 18
Newton
Alliance 7/2 - Q - 19
Alliance 7/3 - S - 20
Alliance 7/1 - F - 21
Tesla
Alliance 3/2 - S - 22
Einstein
Alliance 8/1 - Q - 23
Alliance 7/2 - Q - 24
Alliance 6/3 - Q - 25
Alliance 6/4 - S - 26
Alliance 7/6 - F - 27
Above you can see that 27 out of those 63 pairings [edit: fixed] were upsets, giving us an upset rate of 27/63 ≈ 42.9%.
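For anyone who wants to double-check the arithmetic, here’s a minimal sketch in plain Python; the per-field upset tallies are copied straight from the list above, not from any official source:

```python
# Minimal sketch of the upset-rate arithmetic above; the per-field tallies
# are copied from the pairing list, not pulled from any official source.

FIELDS = 9                        # 8 divisions + Einstein
PAIRINGS_PER_FIELD = 4 + 2 + 1    # quarterfinals + semifinals + final

# Upsets per field, tallied from the list above
upsets = {
    "Archimedes": 1, "Carson": 4, "Carver": 3, "Curie": 7, "Galileo": 2,
    "Hopper": 1, "Newton": 3, "Tesla": 1, "Einstein": 5,
}

total_pairings = FIELDS * PAIRINGS_PER_FIELD   # 63
total_upsets = sum(upsets.values())            # 27
print(f"{total_upsets}/{total_pairings} = {total_upsets / total_pairings:.1%}")
# -> 27/63 = 42.9%
```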
So has anyone in FIRST ever seen anything quite this unpredictable before, or was this the most unpredictable Championship you’ve seen?
Why?
On Galileo 6 also beat 2 in the semis.
Not counting Einstein, that would be 22 upsets out of 56 (7×8), or 39.3%.
I didn’t think I would ever see a reverse perfect bracket. That is ridiculous.
Where does the number 36 come from?
Looks like a typo. Last I knew, 7*9 was 63, not 36.
Yep, fixed it, thanks! Do you guys think poor scouting played a part in this? Teams looking at RP instead of accuracy, shots per match, etc.?
A quick glance at OPR seems to show a very small spread between the teams in the top 15 on Curie. Other divisions show a clear 2-5 robots ahead of the rest in OPR.
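For anyone who hasn’t worked with it, OPR is usually computed as a least-squares fit of alliance scores against per-team contributions. Here’s a rough sketch of both the computation and the “spread” I’m eyeballing; the teams and match scores are entirely made up for illustration:

```python
# Rough sketch of the usual OPR least-squares computation; the teams and
# match scores below are invented purely for illustration.
import numpy as np

teams = ["111", "222", "333", "444"]     # hypothetical team numbers
matches = [                              # (alliance, alliance score)
    (["111", "222"], 90),
    (["333", "444"], 60),
    (["111", "333"], 85),
    (["222", "444"], 70),
]

# Build the design matrix: one row per alliance score, 1s marking members.
A = np.zeros((len(matches), len(teams)))
b = np.array([score for _, score in matches], dtype=float)
for row, (alliance, _) in enumerate(matches):
    for t in alliance:
        A[row, teams.index(t)] = 1.0

opr, *_ = np.linalg.lstsq(A, b, rcond=None)   # one OPR per team
print(dict(zip(teams, opr.round(1))))
print("spread:", round(float(opr.max() - opr.min()), 1))
```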
I didn’t watch Curie, so I’m not sure how things went down. It did seem easier, however, for the lower seeds to scoop up some high goal scorers/scalers in the later picks. I’m going under the assumption that the top half or so were strong enough that no one stood out more than another, so a #5 or #6 pick could match the output of a #1 pick, the 8 captain was strong enough to pick robots that matched the output of the 1 alliance, and so on.
Another interesting thing about Curie is that only one of the top 8 picked another top 8 team as their pick (and the offer was declined). I’m not a scout and don’t pay attention to that, so I asked a scout yesterday. He said he thought it was because many of the top seeds were low goal scorers and didn’t want to pick another low goal scorer.
(I’m a mentor on the 8 team on Curie)
Jeanne, I watched the online streams from home, and I came to the same conclusion. I think the fact that an alliance that won a match but didn’t capture or breach would gain 2 RP, while an alliance that lost but breached and captured would also gain 2 RP, skewed the rankings quite a bit. Some top 8 teams probably had bad scouting because they weren’t prepared to be where they ended up, and I’m certain some lower-ranked teams were pushed out of higher positions. That being said, I think Curie had some really good scouting teams out there. I mean, the perfect reverse bracket speaks for itself. Incredible.
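To make the equalizer concrete, here’s a tiny sketch of the 2016 qualification ranking-point rule as I remember it (win = 2 RP, tie = 1, plus 1 bonus RP each for a breach and a capture; double-check the manual before quoting me):

```python
# Sketch of the 2016 qualification ranking-point rule as I remember it;
# treat the exact values as my recollection, not the official manual.

def ranking_points(won: bool, tied: bool, breached: bool, captured: bool) -> int:
    rp = 2 if won else (1 if tied else 0)   # win/tie/loss
    rp += 1 if breached else 0              # breach bonus
    rp += 1 if captured else 0              # capture bonus
    return rp

# The equalizer: a plain win and a losing breach-and-capture earn the same.
print(ranking_points(won=True, tied=False, breached=False, captured=False))  # 2
print(ranking_points(won=False, tied=False, breached=True, captured=True))   # 2
```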
[Edit] That being said, will FIRST introduce so many possibilities for ranking points next year? Maybe not; I certainly hope not. I remember in 2012 coopertition points were a boost to the good and the bold; 2012, IMO, was a very good example of how dual ranking points could highlight really above-average teams. This year, however, instead of being a boost, it was an equalizer. I think it looked good in theory and on paper, but not in the actual season.
Heck, I wouldn’t care if we went back to seeds being fully reliant on W/L/T. Good scouting will bring out the best.
The fact that the Curie division had a 100% upset bracket proves that the scouting data those teams had was beyond poor. Beyond the fact that we knew the number one seed was unlikely to win because they were not a high goal shooter, it should have been much easier to make stronger alliances by checking for high goal consistency.
Our data, which I think we can release soon, will show that 3339, with a range of 2-9, and 836, with a range of 4-7, were by our own judgment the two best high goal shooters in the division, and that nobody else caught it. They were both second picks, which I find ridiculous considering the number of shots they made and with such consistency. Honorable mentions to 166 (1-7) and 3641 (0-8), who are also really good high goal robots. What it came down to was how the robots were designed and whether a defender could stop them, where they shot from, whether they scored 0s, etc.
It would have been terrifying if 876 and 3339 were on the same alliance, and that almost happened!
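To illustrate what I mean by ranking for consistency rather than raw totals, a sketch along these lines works; only the 2-9 and 4-7 ranges below come from our data, and the per-match counts are invented to fit them:

```python
# Hypothetical consistency ranking; only the 2-9 and 4-7 ranges are from
# our data, the per-match high-goal counts are invented to fit them.
from statistics import mean, pstdev

high_goals = {
    "3339": [2, 5, 7, 9, 6, 8],   # range 2-9
    "836":  [4, 5, 6, 7, 5, 6],   # range 4-7
}

# Rank by average output, breaking ties toward the steadier shooter.
ranked = sorted(high_goals.items(),
                key=lambda kv: (-mean(kv[1]), pstdev(kv[1])))
for team, shots in ranked:
    print(team, f"mean={mean(shots):.1f}",
          f"range={min(shots)}-{max(shots)}", f"stdev={pstdev(shots):.1f}")
```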
Oh, I definitely agree! After watching, I can’t say I was thrilled to see my first 100% upset bracket. It’s cool to see an underdog or two in competition, but my main point is that at this magnitude, all the upsets were maybe a deal-breaker for me. They took Stronghold from “a really good game” to “a really good game, to an extent” in my mind. Maybe that makes me a poor sport? Maybe not.
I saw 1983’s data, and the Skunks were the 2nd-highest-producing bot (after a team with a scoring quirk). They were the 2 pick (and nearly the 1). I expect they offered their data and draft experience to the 1 captain. Note that none of the teams you listed could have been around for the 1 seed, and that 3339 was the 9th pick. So I wouldn’t blame the scouting or drafting.
It was the result of a ranking system that rewarded getting just enough points to win low-scoring matches, plus breaching and capturing, two separate tasks that only became integrated into the score in eliminations. So three separate tasks became one only in the playoffs. As a result, the schedule became even more important: an unbalanced schedule allowed certain teams to accomplish those tasks more easily thanks to the help of stronger teams, and stronger teams were hurt when an alliance mate failed to accomplish a task.
If 686 had not gotten back to the batter in time in the last match for 148, Hopper would have looked a lot more like Curie.
They (1983) shared their data with us during a match, but I feel like a team’s scouts usually do not scout their own team’s performance fairly. I have this problem on my own team, where people will write down awesome things about one of our bad matches and scout our robot as better than it actually is.
I will attach our top 20 posted scores, which we later edited for consistency.
(Not to discredit any team because I love this robot design)
1983 crossed 28 defenses, scored 21 high goals, and scored 7 low goals in qualifications, which, after we ranked for consistency, put them around 15th in overall effectiveness among high goal shooters on Curie in our data. I would be interested in seeing their data, because I loved their graphic color charts and the way the data is organized. Maybe I am misunderstanding the term “producing”, but they crossed a lot fewer defenses and scored half as many high goals as the highest robots in those scoring categories (41 crossings and 48 high goals, respectively).
This is why I was confused at the alliance selection, when there were many other robots that should have bubbled to the top a lot faster, 3339 and 876 being prime examples. What if 1089 had picked one of them! Either way, it was an exciting and interesting turn of events on Curie.
Here is a spreadsheet I made for English class with all the upsets from the past 4 years: https://docs.google.com/spreadsheets/d/1lIFLdP46mvqHLCzifvJbdmGB_2nkQHK9h079qD7Fh-8/edit?usp=sharing
Something I think a lot of teams did across the board for champs that might not be ideal was scouting defense crossings and using them to help calculate overall scores for robots. We opted not to scout crossings at all, since the breach is all but guaranteed at the championship level. That being said, I wish we had scouted crossings, just so we could compare what our overall-scores list would look like with crossings versus without. I would imagine teams that focus more on poaching the opponents’ secret passage would move up in our scouting ranking versus teams that cycle.
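The comparison itself would be straightforward once both datasets exist. A sketch with invented robot stats and assumed 2016-style point values (5 per crossing, 5 per high goal, 2 per low goal; verify against the manual) would look something like this:

```python
# Sketch of the with-vs-without-crossings comparison; the robot stats are
# invented and the point values (5/5/2) are assumed 2016-style weights.
robots = {                 # team: (crossings, high goals, low goals)
    "A": (40, 10, 5),      # crossing-heavy robot
    "B": (15, 30, 2),      # shooting-heavy robot
}
CROSS_PTS, HIGH_PTS, LOW_PTS = 5, 5, 2

def overall_score(stats, count_crossings=True):
    crossings, high, low = stats
    total = high * HIGH_PTS + low * LOW_PTS
    if count_crossings:
        total += crossings * CROSS_PTS
    return total

for count_crossings in (True, False):
    ranking = sorted(robots, reverse=True,
                     key=lambda t: overall_score(robots[t], count_crossings))
    label = "with crossings" if count_crossings else "without crossings"
    print(label, ranking)   # the crossing-heavy robot drops when excluded
```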
The numbers in the total do not match the individuals. Looks like you missed three upsets on Galileo in the total?
Looks like you’re right. Thanks for pointing that out! I’ll update it right now.
I wish I could have done something that cool for English class…