My team is in the fortunate position of possibly being a top 8 team at the Bayou regional, and we would like some tips on how to you guys make your lists for Alliance selection?
Many thanks.
Scouting. If you guys don’t already have a scouting system set up, I would just use OPR data to select teams. Also start researching how to make a good scouting system for next year, as it is one of the most important things your team can have. There should be plenty of threads on here with teams explaining how they do scouting. Our team records all offensive data for every robot, uploads it to MS Excel, and makes a ranked list of all the teams based on offensive performance. We also have 2 other “super-scouts” (I’m one of them) who record subjective data about robots that the statistical data doesn’t tell us. Most of what the super-scouts record is information about robots’ defensive capabilities, speed, and driving ability. The offensive data is most important for making your first pick; the subjective/defensive data is what we mostly use for making our second pick, since we have usually seen that the best alliances are 2 offensive robots and 1 defensive robot.
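If it helps, here’s a minimal sketch of that kind of ranked list in Python (the CSV layout and column names here are just an assumption, not our actual format):

```python
# Minimal sketch: rank teams by average scouted points per match.
# Assumes a CSV like: team,match,points (layout is hypothetical).
import csv
from collections import defaultdict

points = defaultdict(list)
with open("scouting.csv") as f:
    for row in csv.DictReader(f):
        points[row["team"]].append(float(row["points"]))

# Sort teams by average offensive output, highest first.
ranked = sorted(points, key=lambda t: sum(points[t]) / len(points[t]), reverse=True)
for rank, team in enumerate(ranked, start=1):
    avg = sum(points[team]) / len(points[team])
    print(f"{rank:2d}. Team {team}: {avg:.1f} pts/match")
```

Hope that helps.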
I would start by throwing OPR into the nearest trash can. Skipping over the whole “setting up a scouting system” aspect, which should definitely be done at some point, here is what I would do:
But before picking… talk to some of your potential partners and see how an alliance with them would actually play out.
He’s absolutely right. OPR is calculated from alliance totals in a match, so the data can be very misleading about individual robots. Some people argue that the law of averages ultimately sets everything straight, but as the mentor of a predominantly defense-oriented team, I can tell you outright that it doesn’t give you an accurate picture of individual robots. It also doesn’t account for the fact that this game is played with a three-team alliance: if all three robots do the same thing, the alliance could easily lose to another alliance of lesser robots that are better organized around a more cohesive strategy.
Ideally, I would consider one offensive robot that plays the game differently from my robot and a good defensive robot… not an OK offensive robot pressed into playing defense. One of the best parts of being a lower-end picking seed is that you can basically get a decent offensive robot as your first pick and then choose the best defensive robot, but few low-end seeds actually do that.
As far as scouting goes, we keep a pretty extensive database on each robot at a regional and rank them based on their own individual merits. We didn’t come up with this method on our own, though… we’ve asked for/stolen ideas on how to scout from the best teams we’ve been partnered with. One of the best parts about FIRST is the idea of coopertition (even though it doesn’t always play out that way when it’s been worked into the actual game), so if nothing else, go up to an older, more established team and ask them to share scouting data and tips on how they’ve scouted the teams at Bayou. I can tell you firsthand that 118’s scouting system is very visual, simple to understand, and very powerful and informative.
Thank you everybody for your replies.
I would just like to note that if you are picking a robot for the sole purpose of offense, OPR is the best system you have if you don’t already have a scouting system. The OPR ranking has a .9 correlation with our team’s actual, quantitative offensive ranking of teams (how many points each team actually scores in a match), so when you say OPR isn’t accurate at telling you which robots are good offensive bots, you are wrong; it is 90% accurate. For your third alliance partner you need to completely ignore OPR and seeded ranking and just look at what robots fit your strategy best. That might be a defensive robot with nothing but a drive train, or a robot that can feed you disks half-court from the feeder station (if you have a ground pickup), or a bot that can climb for 30 points and stay out of your way for the rest of the match.
TL;DR: a good scouting system is irreplaceable; however, OPR does tell you, with 90% accuracy, which robots score the most points consistently (though that may not mean they are the best pick).
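If you want to check that correlation against your own data, it’s a one-liner (the arrays below are placeholders for your event’s OPRs and your scouted averages, in the same team order):

```python
# Check how well OPR tracks scouted offensive output.
# These numbers are invented; substitute your own.
import numpy as np

opr         = np.array([55.2, 48.1, 40.3, 33.7, 21.0])
scouted_avg = np.array([52.0, 50.5, 37.9, 30.2, 24.4])

r = np.corrcoef(opr, scouted_avg)[0, 1]  # correlation coefficient
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```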
Is this always the case, or is it 90% accurate at just the regional you attended? I haven’t checked the correlation at the Wisconsin regional, but comparing OPR data to the scouting data we collected on individual robot scoring from Friday shows OPR to be off quite a bit at a quick glance. 4212 was far and away the highest-scoring robot on average at the regional (by at least a 10-point margin) but was 4th in OPR. We (167) were about the 9th-highest scoring robot on average at the regional but are 26th in OPR. 1732 was 8th in OPR but I believe was closer to 3rd in actual robot scoring.
Of course, Wisconsin could also be an outlier in its own right as far as OPR correlation goes. I noticed offhand that the regional tended to be very balanced as far as quality scoring machines were concerned. There was no clearly dominant scoring team like 987 or 2056; instead there were 12 or so top teams that all scored between 40 and 60 points a match on average, and then the averages dropped slowly until, probably around the 24th team, teams were averaging less than 20 points a match.
OPR has been about 90% accurate at all of the last 5 events we attended (6 once nationals rolls around), so if you don’t have a quantitative scouting system to track every team, OPR is your best bet.
Don’t count out rookies, they can surprise you.
At Boilermaker Regional, I calculated OPR as well as match predictions for Saturday (after teams had played 8 or 9 matches). It predicted the winning alliance correctly 20 out of 24 times. Score predictions were low by about 10-20 points on average, but that showed that robots were improving.
The other thing that surprised me about the Boilermaker OPR is that there were a couple of cases that demonstrated its accuracy. There were 2 single-purpose robots in particular - one that hung for 30 points and one that hung for 10 points. Their OPRs after Friday were 29 and 11, respectively. Anecdotal evidence, I’m sure, but it seems that this year’s game is easy to decompose (even easier than last year’s).
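For anyone curious, the prediction itself is as naive as it sounds: sum each alliance’s OPRs and pick the bigger number. A sketch (all team numbers and OPR values below are invented):

```python
# Naive OPR match prediction: the alliance whose OPRs sum higher wins.
opr = {1111: 62.0, 2222: 58.5, 3333: 44.0, 4444: 51.0, 5555: 49.5, 6666: 40.0}

def predict(red, blue):
    red_pred = sum(opr[t] for t in red)
    blue_pred = sum(opr[t] for t in blue)
    return red_pred, blue_pred, ("red" if red_pred >= blue_pred else "blue")

print(predict((1111, 3333, 5555), (2222, 4444, 6666)))  # (155.5, 149.5, 'red')
```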
OPR might be good at identifying top pick candidates, but nothing beats old-fashioned scouting. My advice is to write down 5-10 attributes you think make a robot “good” for the game. This year, accuracy, distance, how quickly they can hang, and floor pickup are good attributes. Attributes that are pretty universal between games describe robot drive trains, like speed and pushing power.
The next step is to find 3 or 6 dedicated students to watch robots from the stands. These students need to focus on a single robot during each match, writing down as much detail as they possibly can.
The last thing you might consider is writing down robot features. When I was watching matches this year, whenever I saw a battery move within a robot, I advised the scouters to note it. It’s a hard lesson to learn, but it sank my team and our alliance during eliminations in 2006 when our battery wasn’t secured properly, knocked open our pneumatic release valve, and disabled the robot for the remainder of the match. There are other things like this that are easy to spot (bumpers dragging on the floor) that can potentially draw lots of fouls or leave a robot disabled.
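If you want somewhere to start, here is one way a per-match scouting record could be structured; every field name below is just an example based on the attributes mentioned above, not a standard:

```python
# Hypothetical per-match scouting record (field names are examples).
from dataclasses import dataclass

@dataclass
class MatchScout:
    team: int
    match: int
    disks_scored: int = 0
    hang_points: int = 0         # 0, 10, 20, or 30 this year
    hang_time_sec: float = 0.0   # how quickly they can hang
    floor_pickup: bool = False
    loose_battery: bool = False  # the kind of red flag noted above
    notes: str = ""

record = MatchScout(team=1234, match=17, disks_scored=8,
                    hang_points=10, hang_time_sec=6.5, floor_pickup=True)
```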
Hope this helps your scouting effort!
Not going to get into a full OPR rant, but I will briefly mention something about this anecdote. Those OPR figures would only be accurate if those robots were 100% successful in their climbing attempts.
What data did you use to determine it was 90% accurate?
I have the same gripe. It sounds like they have found a correlation coefficient of 0.9 between the explanatory variable (OPR) and the dependent variable (Avg. Score). In that case it is poor interpretation to say that this is 90% accurate. With an R^2 value of 0.81, though, it would be acceptable to say that 81% of the variation in Avg. Score can be explained by variation in OPR. Calling it 90% accurate based on a correlation coefficient of 0.9 is a poor reading of a statistical regression (which sounds like the mathematical tool being used).
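To put numbers on it: r = 0.9 gives R^2 = 0.9^2 = 0.81, so variation in OPR explains about 81% of the variation in Avg. Score; the other 19% is exactly the individual-robot information a scouting system is meant to capture.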
Collecting and compiling statistics on individual team contribution can be very helpful. This is the type of system that CORE uses for data collection and it is very labor intensive.
Scouting is a lot like building in that you need to consider the capability of your team in your planning.
For smaller/newer teams I would suggest some kind of partnership with a larger and/or more established team. You can get some data from them and then work on your own strategy using their numbers.
Otherwise, OPR or FRCminer are good sources of impartial data, even if they are not 100% representative of the teams’ individual capabilities.
I hope the OP had a good outcome at Bayou and I would encourage all teams to keep practicing and growing your scout-egy capabilities. If you wait until you are going to need the data you probably will not get up to speed in time.
-mister g
From what I’ve seen, it looks like OPR accounts for the “average” number of hang points that a team will get per match: if they hang for 10 points 100% of the time, their OPR will be higher than if they hang for 10 points 50% of the time.
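To put a number on it: a robot that hangs for 10 points every match should show an OPR contribution near 10, while one that makes the hang only half the time should settle closer to 10 × 0.5 = 5.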
“90% accuracy” was probably the wrong phrase for that. To elaborate, the OPR ranking had a .9 correlation coefficient with robots’ actual offensive performance.
The main point that I’ve been trying to get across is that a good scouting system is irreplaceable, but if you don’t have one, OPR is much better than having nothing. I think we can all agree on that.
I’m thinking we might use OPR as a major element of our scouting process for Championships. We will also have students scouting the teams, but probably won’t scout every team every match.
Our crude 1-sheet-of-paper-per-team scouting during the AZ regional worked fine, and we were also able to get corroborating scouting data from a friendly team.
What exactly does OPR stand for and what is it?
Offensive Power Ranking.
It’s an estimate of how much each team scores per match, or rather their point contribution to their alliance (on average).
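For anyone who wants to compute it, OPR is essentially a least-squares fit: each alliance’s score in each match is modeled as the sum of its three teams’ contributions. A rough sketch (team numbers and scores below are invented):

```python
# Sketch of the usual OPR computation: solve A x = b in the
# least-squares sense, where each row of A marks the three teams on
# one alliance in one match and b holds that alliance's score.
import numpy as np

teams = [111, 222, 333, 444, 555, 666]  # made-up team numbers
# (red alliance, blue alliance, red score, blue score) per match
matches = [
    ((111, 222, 333), (444, 555, 666), 82, 65),
    ((111, 444, 666), (222, 333, 555), 74, 90),
    ((333, 555, 666), (111, 222, 444), 58, 77),
]

idx = {t: i for i, t in enumerate(teams)}
A, b = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = [0.0] * len(teams)
        for t in alliance:
            row[idx[t]] = 1.0
        A.append(row)
        b.append(score)

opr, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
for t in teams:
    print(t, round(float(opr[idx[t]]), 1))
```

With only a few matches the fit is noisy, which is part of why OPR gets more trustworthy as qualification matches pile up.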
The biggest problem with using solely OPR is that 1) it virtually ignores defensive players, and 2) it takes a bit of setup to run the numbers properly. (It also doesn’t distinguish between how points are scored: if you’re a 50-point climber and you pick another 50-point climber because they have a high OPR, you’ll probably be blanked by the 50-point climber that picked a pair of shooter/defense robots that combined for 50 points.)
That is why we use a pit scout/OPR combo.