MittenReview: [FiM] Week 5 Recap and Week 6 Analysis

Well, this is it, friends: week 6 of regular competition is finally here, marking the conclusion of the district-level FRC schedule. Most teams have completed their two events and are either looking at improvements for next year, finishing a third event, or preparing to play at the Michigan State Championship (MSC) in week 7. But some teams still have one last shot to secure the points needed to earn their way into what is possibly the most competitive event in all of FRC. As always, our analysis relies almost entirely on data, with as little subjective opinion as possible.

This week we see a total of 5 events, though all things considered, it is effectively an extra practice week for the majority of teams we will be seeing at MSC. First, let's take a step back and look at the week 5 events. The five teams we consider the top 5 in Michigan (2767, 67, 33, 1918, 27) all competed, and perhaps to no one's surprise, they all won their events and took home blue banners. Great work, and congrats to 5460, who took home silver at Troy.

We think that week 5 showed a refined gameplay style: not a lot of new or game-changing strategy, just crisp, efficient, and effective play by the top-ranked teams at each event. We saw a lot of smart decisions about when to go deswitch an opponent, when to break off from the scale to protect the home switch, and when to use vault cubes. Week 5 also reinforced the importance of the buddy climb, as 67, 1918, and 2767 all averaged more than 3.0 RP even with qualification-match losses. That would have been impossible without their consistent buddy-hang capability earning the 4th ranking point.

Last week we talked about stacking efficiency on the scale, i.e., a team's ability to stack cubes neatly so they don't fall off as the stacks get taller. While the top alliances this week were simply a little too much for their competition, there is clearly still room for improvement: many cubes were dropped in the latter half of elimination matches. That needs to be cleaned up, because these are the mistakes that cost you a state championship.

2767 and 1918 at East Kentwood dropped 6 cubes in finals 1, and it almost cost them the match! They dropped 3 cubes in finals 2, but the match was pretty locked up after 2771 ran into mechanical issues.

67 and 4362 at Troy had 4 cubes dropped in finals 1 which DID cost them the match, another 4 in finals 2, and 2 more in finals 3… but the blue alliance dropped 5 in finals 3 which may have cost them the tournament!

27 and 302 were decent at Livonia, with only 2 cubes dropped in finals 1, which may have cost them control of the scale in the last 30 seconds, though the match was already won. Finals 2 had zero dropped cubes, but they only needed 4 cubes to secure the match.

The alliance of 1506 and 6753 was decent as well, with only 2 dropped cubes in finals 1 and 1 in finals 2, and they had solid control of the scale in both matches.

This week we wanted to do something a little different, and introduce to you an algorithm that we have been refining and tweaking throughout the season. We believe we finally have it just right, and we have been using it for our weekly predictions and rankings. We call it our Mitten Power Ranking (MPR) and it utilizes these key metrics which we have been following all season:
- Teleop Switch Ownership %
- Teleop Scale Ownership %
- Teleop Switch Denial %
- Auto Quest RP %
- Auto Switch Ownership %
- Auto Scale Ownership %
- Climb RP %
- Strength of Field (ELO)

We've taken these metrics for each team at each competition, normalized all of the percentages into relative values (e.g., Switch Denial % is naturally a lower raw percentage than Teleop Switch Ownership %, but its effect on match outcome and point distribution is similar), and then applied a weight to each metric that we feel matches the importance of that game task. Long story short, we get a single number, backed by detailed match data, unlike other "power ranking" style metrics. We wanted to talk about MPR this week because: 1. our MSC post next week is already going to be stupid long, 2. it will rely heavily on MPR for division predictions, and 3. we think it's really awesome how much it has helped us this season, and we couldn't wait another week to share!
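
To make that concrete, here is a minimal sketch of the normalize-then-weight computation in Python. The weights and column names below are placeholders for illustration only, not our actual values:

```python
import pandas as pd

# Placeholder weights -- illustrative only. The real MPR weights are not
# published in this post, so do not treat these numbers as the algorithm.
WEIGHTS = {
    "teleop_switch_own_pct": 1.0,
    "teleop_scale_own_pct": 1.5,
    "teleop_switch_denial_pct": 1.0,
    "auto_quest_rp_pct": 0.75,
    "auto_switch_own_pct": 0.5,
    "auto_scale_own_pct": 1.25,
    "climb_rp_pct": 1.0,
    "field_strength_elo": 0.5,
}

def mitten_power_ranking(metrics: pd.DataFrame) -> pd.Series:
    """One MPR-style score per team: normalize each metric, then weight and sum.

    Dividing each column by its max puts every metric on a common 0-1 scale,
    which is what lets a low-raw-percentage metric like Switch Denial % carry
    influence comparable to Teleop Switch Ownership %.
    """
    score = pd.Series(0.0, index=metrics.index)
    for column, weight in WEIGHTS.items():
        score += weight * (metrics[column] / metrics[column].max())
    return score.sort_values(ascending=False)
```

Swapping in a second weight table is all it takes to score qualification play and elimination play separately.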

Blah blah blah, just another ranking system, right? Why not just use OPR or ELO? Mainly because we are not fans of singular metrics that don't account for the specific aspects of this year's game that are critical to success (e.g., Climb RP for qualification ranking, or Auto Scale Ownership for elims). Secondly, we are able to modify the metric weights separately for qualification ranking and for elimination-round play, and look at each individually. To give you an idea of why we feel confident in our algorithm: since week 3, when we first had some data to work with, being in our top 2 by MPR for an event gave a team an 85% chance of ending up in the finals. Based on the graph below, you can see MPR competes with ELO for accuracy, and we feel it's fair to say this just isn't the best year for OPR.
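
For those curious, that 85% figure comes from a simple hit-rate check, sketched below with a hypothetical data layout (each event records its pre-event MPR top 2 and its actual finalist teams; our spreadsheets are organized differently):

```python
def top2_finals_hit_rate(events: list[dict]) -> float:
    """Fraction of pre-event MPR top-2 teams that ended up in the finals.

    Each event dict is assumed to look like
    {"mpr_top2": {67, 2767}, "finalists": {67, 2767, 3707, 5460}}.
    This layout is hypothetical, purely for illustration.
    """
    hits, total = 0, 0
    for event in events:
        for team in event["mpr_top2"]:
            total += 1
            hits += team in event["finalists"]
    return hits / total
```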

MPR is one of the tools we use to analyze the top teams at district events and to build our top 15 rankings each week, but it is definitely not the only one. We rank teams using a combination of each team's MPR, ELO, historical performance, and match video.

Weekly Updated Rankings
So with that explanation out of the way: to establish our weekly rankings, we look at our MPR qualification and MPR elimination rankings each week, then make minor tweaks based on the feel we get from watching match video of each robot. While watching video, we look for a team's potential based on robot features, driving ability, and historical performance in elimination matches.

  1. 2767 - Stryke Force
  2. 67 - HOT
  3. 33 - Killer Bees
  4. 1918 - NC Gears
  5. 27 - RUSH
  6. 3707 - TechnoDogs
  7. 5460 - Strike Zone
  8. 3357 - COMETS
  9. 4003 - TriSonics
  10. 2771 - Code Red Robotics
  11. 4362 - Gems
  12. 3538 - RoboJackets
  13. 494 - Martians
  14. 5561 - Raider Robotics
  15. 2337 - EngiNERDs

Top 15 by MPR so you guys can compare:

| Rank | Team # | Team Name | MPR |
|------|--------|-----------|-----|
| 1 | 67 | The HOT Team | 2.6 |
| 2 | 2767 | Stryke Force | 2.51 |
| 3 | 3707 | TechnoDogs | 2.34 |
| 4 | 1918 | NC Gears | 2.18 |
| 5 | 27 | Team RUSH | 2.12 |
| 6 | 5460 | Strike Zone | 2.04 |
| 7 | 33 | Killer Bees | 1.96 |
| 8 | 3357 | COMETS | 1.94 |
| 9 | 2771 | Code Red Robotics | 1.87 |
| 10 | 3538 | RoboJackets | 1.86 |
| 11 | 5561 | Raider Robotics | 1.84 |
| 12 | 4003 | TriSonics | 1.84 |
| 13 | 494 | Martians | 1.82 |
| 14 | 2337 | EngiNERDs | 1.79 |
| 15 | 85 | B.O.B. | 1.78 |

Week 6 Events
Alpena
This should be a very interesting competition: the only team in attendance with a recent blue banner is 7213, who won as a third pick with a very effective everybot-style robot capable of quickly deswitching opponents. Our favorite is 5505, a perennial powerhouse with an effective scale-and-deswitch robot and a fast deployable hook-style hanger; they were finalists at the Windsor Essex district event in Ontario last week, taking some hardware home to FiM. Teams also need to watch out for 6077, an effective hang-and-scale robot that made the finals at Shepherd.

| Team # | MPR |
|--------|-------|
| 6077 | 1.359 |
| 5505 | 1.321 |
| 6637 | 1.266 |
| 2405 | 1.255 |
| 7213 | 1.242 |
| 5534 | 1.206 |
| 6121 | 1.176 |
| 4983 | 1.145 |
| 3537 | 1.145 |
| 3322 | 1.132 |

Forest Hills
This will be an exciting competition to watch, with many powerhouse teams competing this weekend. Our personal favorite, 3357, is in our view one of Michigan's best scale robots, thanks to the most efficient stacks on the scale we've seen. They are 2/2 on blue banners at their competitions this year, so we expect to see them in the finals on Saturday. Another super-effective scale scorer is 2771, with a fast omnidirectional drive and a 2-cube switch auton that will help them during qualification matches; they were finalists at East Kentwood, so they are probably hungry for a blue banner out of this competition. 2337 is also a very effective scale scorer, a finalist at Troy last weekend, and an MSC finalist in 2017, so watch out for them. More effective scale robots and perennial powerhouses are 910 (with their very fancy cube grabber + elevator) and 1023. We are optimistic about 503, who has serious scale-scoring and buddy-hang potential when firing on all cylinders, and 314, one of the only effective shooter robots in Michigan. Finally, we can't count out 3452, with their super-fast scale scorer and auton, who won a blue banner at St. Joe. Many other teams at this competition deserve a spot on this list, so make sure you watch it on Gameday!

| Team # | MPR |
|--------|-------|
| 3357 | 1.935 |
| 2771 | 1.872 |
| 2337 | 1.795 |
| 3618 | 1.569 |
| 2832 | 1.520 |
| 314 | 1.483 |
| 910 | 1.470 |
| 226 | 1.463 |
| 4381 | 1.459 |
| 1023 | 1.445 |

Lakeview
This event is one of the few in recent weeks where no attending team has yet won a blue banner, so everyone here will be looking to take home their first of the season. 3656 is a favorite as an effective scale and tether-hang robot that was a finalist at Lincoln. 2054 brings a good auton and effective scale scoring to the table, plus some buddy-hang potential; they were semifinalists at both Shepherd and West Michigan. 2834 enters their third event looking to grab their first win of the season; we expect to see a 2 switch auto and more effective scaling to go along with their fast tether hang. Finally, 2959 brings an omnidirectional drive to the competition, along with an effective hang and good scale scoring; they were semifinalists at Shepherd.

| Team # | MPR |
|--------|-------|
| 3656 | 1.659 |
| 2054 | 1.612 |
| 2834 | 1.517 |
| 2959 | 1.366 |
| 5501 | 1.286 |
| 5502 | 1.186 |
| 7222 | 1.178 |
| 7197 | 1.135 |
| 7187 | 1.122 |
| 5535 | 1.100 |

Lake Superior State
While we aren't highlighting many teams at this event, there are three coming in with blue banners who will be fighting to take home another one this year! 141 is our favorite; they brought Michigan some gold from Virginia but only managed to make the semis at both Rocket City and West Michigan. As this will be their 4th competition (wow, they must be tired), we expect them to be ready to compete right out of the bag on Thursday, and we expect to see them in the finals of this event with their fast omnidirectional drive, effective scale scorer, and nice auton. 4391 took home a blue banner at Escanaba this year, 2 blue banners in 2017, and 1 in 2016; with their past championship experience, omnidirectional robot, scale/switch scoring, and fast exchange feeding, we won't be surprised to see them in the finals as well. 5247 won at Traverse City as a third-pick exchange robot, and we are sure they have practiced to do it again. Finally, we can't count out 4970, a finalist at Escanaba, with an interesting intake mechanism that is good at deswitching and scoring on the scale, along with a very effective hang mechanism.

| Team # | MPR |
|--------|-------|
| 141 | 1.558 |
| 4970 | 1.485 |
| 4391 | 1.455 |
| 5084 | 1.123 |
| 4988 | 1.103 |
| 4392 | 1.091 |
| 1596 | 1.085 |
| 5702 | 1.076 |
| 857 | 1.075 |
| 5175 | 1.034 |

Marysville
Last but definitely not least is the Marysville event. With many consistently competitive veteran teams and explosive younger teams, this will be a great one to watch! The perennial powerhouses are 217, our personal favorite, known for their fast scale scoring and consistent hanger, along with 1718, who is looking to show FiM that their buddy-ramp mechanism combined with their scale-scoring abilities can launch them to the top of an MSC division. 5114 has serious potential, assuming they can get their buddy hang working consistently, as it will complement their scale-scoring abilities very well; they are also the only team coming into this competition with a blue banner, from their win at the Gaylord event. Teams need to watch out for 3668, another team with buddy-hang potential, and 4779, who makes quick work of the scale in auton and teleop. Finalists coming in looking to bring back gold this time are 5155 and 7218.

| Team # | MPR |
|--------|-------|
| 217 | 1.717 |
| 5114 | 1.509 |
| 3668 | 1.479 |
| 4779 | 1.400 |
| 1718 | 1.332 |
| 5155 | 1.317 |
| 247 | 1.311 |
| 4130 | 1.290 |
| 3667 | 1.218 |
| 7206 | 1.202 |

Until next week for MSC!
TMR

Would you mind posting the top 50-100 teams according to your MPR?

-Ronnie

Awesome data, great write up once again. Thank you for posting!

+1 or an Ether-esque document with a full list would be amazing as well!

You report a very specific metric by which your MPR is accurate, more accurate than other ratings. I'm curious whether you developed the rating while looking at that accuracy metric. If so, did you separate the event data into training and test sets? If not, that's a pretty big data science no-no, and it would mean your results are unlikely to have much predictive value.
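
To be concrete, the kind of split I mean is just holding out some events, tuning the weights only on the rest, and reporting accuracy only on the held-out events. A minimal sketch (a hypothetical helper, not anything from your workbook):

```python
import random

def split_events(events: list, test_frac: float = 0.3, seed: int = 42):
    """Randomly hold out a fraction of events as a test set.

    Tune the MPR weights using only the returned training events, then
    report the top-2-to-finals accuracy only on the held-out test events.
    Otherwise the reported accuracy is measured on the same data the
    weights were fit to and will look better than it really is.
    """
    rng = random.Random(seed)
    shuffled = list(events)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]  # (train, test)
```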

IDK, that “top two teams making it to finals” metric feels super specific and possibly cherry-picked. I guess the only thing to say is, let’s see how it does this week.

Hey guys, thanks for being so interested in the data. Here are some answers to your questions, along with our workbook of MPR data, the prediction summary, and the data behind those sheets for you to browse. I encourage anyone to look through this information, and if you have suggestions for improvements, let us know! We try to provide the most accurate information possible based on our experience, game knowledge, and the data available.

The workbook has 3 tabs. The first is the MPR data, which covers the entire world but is currently filtered to FiM only; each team has a line for each event, and you can see how the MPR value is calculated. The second tab is the prediction summary, which uses a team's previous-event data and shows how it ranked them for the current event (e.g., week 3 data to rank for a week 4 event). The final tab is the supporting data for the "previous event data" used in the predictions; filter column B by any event to get the data gathered prior to that event.

Thanks Basel, that's a good observation. We did not develop the MPR rating or weights based on that accuracy metric. We set the weights two weeks ago by watching matches and judging the importance of each game task; yesterday was actually the first time we compared our accuracy to ELO and OPR.

We show the top 2, and it may seem cherry-picked, but that's how we designed MPR to work. We know that a very high percentage of district events are won by alliance 1 or alliance 2 (approx. 85%, https://www.chiefdelphi.com/forums/showpost.php?p=1750687&postcount=34, thanks Richard!), so we designed MPR to show us who is most likely to end up as seeding rank 1 and 2. This doesn't mean that MPR rank 1 will necessarily pick MPR rank 2; they may be more likely to pick the best scale robot based on scale-ownership data, autonomous compatibility, past competition performance, etc. But we believed from the start that MPR ranks 1 and 2 would most likely meet in the finals, either together or facing each other.

At the end of the day, we think MPR can be used as an effective tool in conjunction with other methods like OPR, ELO, and watching match video, but we think MPR has the potential to predict who will be the top seeds at each event better than the other metrics available. Believe me, we are also looking forward to seeing how MPR predicts this weekend!

Maybe this is the graph we should have put in the original post, as it may be a better example of what we are using MPR for (i.e., who ends up rank 1 at events vs. just who makes the finals):

https://i.imgur.com/zaae9CK.png

20180405_MPR Data.xlsx (1.14 MB)

Great data analysis as always.

Can you share more information on your Elo model? It doesn’t look like you’re using mine, but it looks pretty similar.

It looks to me like your MPR metric has a strange accuracy benchmark and a very limited sample of events, which makes it hard for me to believe MPR could provide better predictions than my Elo model in the long run. I'll withhold judgment until after the week 6 events complete, though. If no one else beats me to it, I'll post OPR vs. my Elo vs. MPR prediction results for finalists at all week 6 events next week.

I have attached a spreadsheet with data for all week 6 events (minus Long Island 1). The winners and finalists, along with the top 2 teams according to MPR, my Elo model, and OPR prior to each event, are also shown.

Of the 44 finalist spots possible with this metric, MPR predicted 27, my Elo model predicted 30, and max previous OPR predicted 26. None of these metrics are significantly better or worse than the others according to this data. We’d need a larger sample size to be confident that any of them is superior.
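
One rough way to see why none of those counts stands out at this sample size is a normal-approximation confidence interval on each hit rate (a sketch only; it ignores that the 44 spots are not fully independent):

```python
from math import sqrt

def hit_rate_ci(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a hit rate."""
    p = hits / n
    half = z * sqrt(p * (1 - p) / n)
    return (p - half, p + half)

# Finalist spots predicted out of 44, per the attached sheet. The
# intervals overlap heavily, which is the sense in which no metric is
# significantly better or worse than the others on this data.
for name, hits in {"MPR": 27, "Elo": 30, "OPR": 26}.items():
    lo, hi = hit_rate_ci(hits, 44)
    print(f"{name}: {hits}/44 = {hits/44:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```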

I still think this is not a very good accuracy benchmark to use since it is so specific, but at least now we have a more reasonable sample size to look at.

MPR comparison.xlsx (11.1 KB)