1418's Scouting Data from the Chesapeake District Competition

FRC Team 1418 collects detailed, comprehensive data at as many of the competitions we attend as possible. We have almost 500 data points in total, with data on every team in attendance. We hope this helps teams analyze their own and others’ performance across multiple areas of the field.


It’s always fascinating to see what other teams scout for, and what they think is important when building a pick list. One of our most important criteria (points scored in Sandstorm) isn’t even on your list. Our theory was that you should build an eliminations alliance around the undefendable points from Sandstorm and the endgame. Just by selecting the right robots, you could get 54 free points without doing anything in teleop. After that, it was a cycle race during teleop, with the usual tradeoffs between fast-cycling offensive bots and defensive bots slowing down cycles. Alliances incapable of the 54 free points had to make them up by being that much better in teleop.


Don’t forget the all-important “dropped/missed” game piece data. It steered us clear of some teams during alliance selection.

To OP, thank you for sharing your data. Scouting is hard, and there is always room for improvement, but kudos for getting data on every team.


We don’t track that because we figure if you score 5 game pieces, we don’t care if you missed 5 along the way. All that matters to us is the scored total. A team that scores 5/10 is just as useful as one that scores 5/5. Do you guys do something with the “misses” data that can’t be deduced just from “scored” data?
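(Mechanically, what a “misses” column adds is an accuracy figure that scored totals alone can’t recover. A minimal sketch in Python; the team numbers and counts are made up:)

```python
# Two hypothetical teams with identical scored totals.
# Accuracy = scored / attempts needs the "missed" count; it cannot
# be reconstructed from scored totals alone.
records = [
    {"team": 1111, "scored": 5, "missed": 5},  # hypothetical 5/10 team
    {"team": 2222, "scored": 5, "missed": 0},  # hypothetical 5/5 team
]
for r in records:
    attempts = r["scored"] + r["missed"]
    accuracy = r["scored"] / attempts if attempts else 0.0
    print(r["team"], f"scored={r['scored']}", f"accuracy={accuracy:.0%}")
```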


Was there any way of tracking teams’ defensive capabilities?

Does anyone have any plans to release their scouting notes? My observations (largely confirmed by this data) were that many of the big contenders struggled to remain consistent between matches. It would be quite helpful to have a bit more context behind some of these numbers.

Scouters comment on whether a team was defended or was just dropping pieces, had a hard time lining up, etc. I suppose that comment may be sufficient, and the count of drops redundant.

The way I look at it, if a team drops 5 of 10 game pieces each match, they are wasting time fetching new game pieces and are inefficient. A team that can only do 5 game pieces but always gets 5 can be relied on to follow a game plan, especially if you know defense is coming. Have that team score one or two and then go play D, or block a defender.

EDIT: We also tracked whether they scored in Sandstorm (SS), and whether they missed an attempt in SS, which was very important.
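As a back-of-the-envelope sketch of that wasted-time point (the 20-second cycle time is a made-up figure):

```python
# Each drop means an extra trip for a replacement game piece, so the
# effective time per *scored* piece scales with 1 / accuracy.
cycle_time = 20.0  # seconds per scoring attempt (hypothetical)
for accuracy in (1.0, 0.5):
    time_per_score = cycle_time / accuracy
    print(f"{accuracy:.0%} accuracy -> {time_per_score:.0f} s per scored piece")
# At 50% accuracy, every scored piece costs twice the field time.
```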


It’s a way to evaluate upside. A team putting up 5/10 has a ceiling of 10/10, which is what you want if you’re an 8-seed needing to maximize variance to pull an upset against long odds.
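A rough sketch of that tradeoff, with made-up match-by-match scored counts (same average, very different ceiling and variance):

```python
from statistics import mean, stdev

# Hypothetical scored-per-match histories, both averaging 5
steady = [5, 5, 5, 5, 5]     # the reliable 5/5 team
streaky = [2, 8, 3, 10, 2]   # the 5/10 team on good and bad days
for name, scores in (("steady", steady), ("streaky", streaky)):
    print(f"{name}: mean={mean(scores)}, stdev={stdev(scores):.1f}, "
          f"ceiling={max(scores)}")
# An 8-seed chasing an upset may prefer the high-ceiling, high-variance
# pick; a top seed protecting a lead may prefer the steady one.
```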

The main issue I had with scouting dropped game pieces is that scouters tended to forget to record them, so the data was unreliable.


It seems like the database is incomplete, particularly for later matches?


If a team drops 5 hatches but can still cycle 10 elements, they are still going to outscore a team that has never dropped a hatch but can only do 8 cycles (at 2 points per hatch, that’s 20 points to 16, drops or no drops). The bottom line in an elims match is whoever has the most points at the end, no matter what, so it is only logical to pick the 10 over the 8; the dropped hatches are just noise in the data.
Variance in cycles is a whole different metric.

I agree. I was just looking at it as a “can we count on this team” idea.

And I’m sure there is a good scouting philosophy discussion to be had on what to track and what not to track.

And it’s hard to see what happens on the other side of the cargo ship.


Not sure I agree here. Teams that reliably place only 5 game pieces with few drops will necessarily have slow drivetrains and slow mechanisms. In the playoffs, the pace of play increases and defense becomes more of a factor, both of which make slow robots less effective relative to their fast, inconsistent peers. And you can send them back on defense, but they still have a slow drivetrain that can’t adapt to evasive maneuvers by the other alliance. You’re also limiting your alliance’s flexibility, since they can’t get back on offense quickly if one of your main scorers breaks (an issue that came up in the DCMP playoffs), and they won’t be able to attempt a last-second game piece in a close match (something I’ve seen decide many a playoff match).

And this all neglects the biggest problem with slow, consistent robots, which is that they don’t have any obvious ways to improve. Many teams have fixed game-piece-dropping issues with tweaks to their PID or with MacGyvered bits of cardboard, but that doesn’t work for a slow robot. Nobody is gonna swap out their gearbox in the middle of a competition just to increase their top speed, and you can’t just persuade drivers to drive faster.

Or, instead of a slow robot, it could be a cautious driver who lines up carefully and takes a tad too long, which can be improved with practice or by throwing caution to the wind. Perhaps they’ve been targeted by defense the entire competition, been playing defense of their own, or been having mechanical issues limiting their cycles. Point being, this is why we scout: so we don’t have to assume why a team isn’t doing more cycles and blame it on a slow robot as opposed to a million other reasons.

Regardless, scouting data is awesome, and thank you to 1418 for releasing it to the public. Chesapeake tends to be ignored, so having data on it will be useful for teams everywhere, I think.

How did you get the 54 points? What is the breakdown?

We took notes on both Sandstorm and climb/endgame scoring, but we don’t release our notes in our data drops because some of our scouters comment not-so-graciously-professional things.


Liu346, we track defense in comments as well.

Essentially anything not directly measured in our released data is noted down by our scouters. They track metrics like Sandstorm scoring, defensive performance (both playing defense and being defended against), drivetrain, wear and tear, endgame scoring, and climb efficiency.

We use all that data, along with what we can glean from online sources, to construct strategy briefings for our drivers for every match. We assess both our own alliance and the opponents’, then predict which robots will do what and work out effective counter-strategies. I’d say the final product accurately predicts opponent behavior 80% of the time, which gives our drivers a MASSIVE advantage. Being able to anticipate the other alliance’s behavior lets us assign alliance members to disrupt their highest-scoring bots, and lets us know when and by whom we’ll be defended and how to avoid them.

Knowledge is power


Sandstorm Starting Bonus: 6+3+6 = 15
Each robot placing a hatch in Sandstorm over a ball: 5+5+5 = 15
1 HAB3, 2 HAB2 climbs: 6+12+6 = 24

Total: 15 + 15 + 24 = 54 largely un-defendable points.
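(A quick arithmetic check of that breakdown using the 2019 point values: HAB line crossing is 3/6 for Level 1/2, a hatch panel is 2, cargo is 3, and HAB climbs are 3/6/12.)

```python
sandstorm_bonus = 6 + 3 + 6    # two Level 2 crossings + one Level 1
hatch_over_ball = 3 * (2 + 3)  # hatch (2 pts) secures the pre-loaded cargo (3 pts)
hab_climb = 6 + 12 + 6         # HAB2 + HAB3 + HAB2 climbs
print(sandstorm_bonus + hatch_over_ball + hab_climb)  # 54
```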


We don’t scout the last 10-20 matches, to give our strategy team time to decide who we want to build relationships and alliances with, and to let our scouters watch matches for the less measurable data on robots, like the effects of wear and tear. Internet access isn’t reliable at competition either (no data hotspots), so we may not even be able to upload additional data from our scouting program, Victiscout.

Edit: Our scouting also isn’t perfect. We do miss some matches, and delete others due to quality control issues.

I took the data you posted and calculated each team’s average points from hatches, from cargo, and combined. Additionally, I took the average total score of each team’s best three matches, and I found the percentage of matches in which a team got onto HAB 2 or HAB 3 at the end.
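(For anyone who wants to reproduce this, a rough pandas sketch; the column names hatch_points, cargo_points, and hab_level are guesses at the data layout, not 1418’s actual schema:)

```python
import pandas as pd

df = pd.read_csv("scouting_data.csv")  # hypothetical export of the data drop
df["total"] = df["hatch_points"] + df["cargo_points"]

per_team = df.groupby("team").agg(
    avg_hatch=("hatch_points", "mean"),
    avg_cargo=("cargo_points", "mean"),
    avg_total=("total", "mean"),
    high_climb_pct=("hab_level", lambda s: (s >= 2).mean()),  # HAB 2 or 3
)
# Average of each team's three best match totals
per_team["best3_avg"] = df.groupby("team")["total"].apply(
    lambda s: s.nlargest(3).mean()
)
print(per_team.sort_values("avg_total", ascending=False))
```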

This is obviously a rudimentary analysis, but even accounting for that I’m a little puzzled as to why you picked the bots you did. 977 came in 18th for average points, and 2199 came in 25th, with both robots ranking even lower when you only take their best matches. The RoboLions were excellent climbers, sure, but your first pick of 977 was quite poor. What was the reasoning behind your picks? To put the band back together from your high-scoring Q21 match?

I’m wondering about the rest of the top four seeds as well. 1418’s own (not exactly unbiased) data shows them to be the best robot at the event, yet they wound up captaining the 4th-seeded alliance. Given that all of these alliances got upset in the QFs, it seems there’s cause for some reflection.

Google sheets link.

teamNotes.xlsx (17.3 KB)