paper: 2011 Team 1114 Championship Scouting Database

Thread created automatically to discuss a document in CD-Media.

2011 Team 1114 Championship Scouting Database
by: Karthik


The 8th annual Team 1114 Championship Scouting Database. Includes full stats, including some advanced metrics, for every team that competed in Logomotion.

Team 1114 Championship Scouting Database.xlsx (5.37 MB)

Thanks Karthik and 1114!

Thanks Karthik,

This will be very handy next week! :slight_smile:

Greetings all,

Attached is the 2011 Team 1114 Championship Database. This year’s database includes full results for every team who competed in the 2011 season as well. We’ve based this version on the currently posted divisions; if these change, we will update accordingly. (However, if it’s just a couple of additions, we may not release a new version.)

The database includes:

  • An interface allowing you to pull up an individual team’s record
  • Full listing of awards, record & finish
  • Team scoring averages
  • “OPR”: This calculation uses linear algebra to determine what a team’s average contribution to its alliance was at each regional. (Using only qualification match results.) For the first time, we’ve given in and actually called it OPR. We started using this metric back in 2004, before there was any public discussion of it, under the more descriptive name “Calculated Contribution”. In 2007, a few people on CD started using the same method independently from us and named it OPR. The OPR name stuck, and as such we’ve decided to adopt it.
  • A master sheet for a sortable comparison of all FIRST teams
  • Master sheets for each division and full divisional assignments
  • Normalized rankings of many stats. This year we found that a team’s stats could be skewed by attending a stronger regional: if a regional’s average score is higher than the world average, there is the possibility of teams’ scores being artificially inflated by the performance of the other teams at the event. Therefore we normalize our scoring metrics by the relative “strength” of the event, to provide a comparative view from regional to regional. The general formula is: NormScore = Real Score × World Average / Regional Average
  • Overall Regional and Divisional Stats
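For anyone curious how the OPR / Calculated Contribution numbers can be reproduced, here’s a minimal sketch in Python. It accumulates the least-squares normal equations from qualification results and solves them directly. The match data is made up for illustration, and the alliances here have only two robots (real FRC alliances have three); the method is the same either way.

```python
# Sketch of the OPR ("Calculated Contribution") calculation via least squares.
# The match data below is invented for illustration, and alliances have only
# two robots here (real FRC alliances have three); the method is identical.

def solve_linear(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] for row in M]  # work on copies
    v = v[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def opr(num_teams, matches):
    """matches: list of ((team, ..., team), alliance_score) from qualification.
    Returns each team's estimated average contribution to its alliance score."""
    # Normal equations: (A^T A) x = A^T b, where A is the 0/1 alliance matrix
    # (one row per alliance per match, one column per team).
    AtA = [[0.0] * num_teams for _ in range(num_teams)]
    Atb = [0.0] * num_teams
    for alliance, score in matches:
        for t in alliance:
            Atb[t] += score
            for u in alliance:
                AtA[t][u] += 1.0
    return solve_linear(AtA, Atb)

matches = [((0, 1), 30), ((0, 2), 40), ((1, 3), 60),
           ((2, 3), 70), ((0, 3), 50), ((1, 2), 50)]
print(opr(4, matches))  # approximately [10.0, 20.0, 30.0, 40.0]
```

Because the toy data is exactly consistent, the solver recovers each team’s true contribution; with real, noisy match scores the result is the best least-squares fit instead.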

The data was all mined from the FIRST website. There may be some errors, but I’m confident the data is 97.1114% accurate. That being said, much of the alliance selection data was obtained via word of mouth, so that’s where errors are most likely.

Prior to 2008 we never released any of the regression analysis (OPR) that we had been doing since 2004. Since people have become more knowledgeable on the subject, we decided to make the change. Please do not take a poor score as a slight or an insult; we simply used the actual scores from matches to perform a calculation. We feel that this tool is the best available metric if you are unable to watch the actual matches, and since none of us can attend every regional, it should be a valuable tool. Regression analysis is less effective for Logomotion than it was for Breakaway; it’s much better than for Lunacy, but nowhere even close to Overdrive. If you want more details on this, come check out my seminar in St. Louis.

Thanks to Geoff Allan, Ben Bennett and Roberto Rotolo of Team 1114 Stats and Research for creating this year’s database.

If you have any questions, please ask.

I haven’t looked at the numbers from all the regionals, but I feel that it would work the other way: if you attend a very strong regional, your OPR may come out too low. If an alliance has 3 teams that can each score 3 logos and deploy a super fast minibot, there is no way for them to all do that together, so their OPRs would be lowered by the fixed pool of total points, the diminishing marginal returns on tubes scored, and 3 robots getting in each other’s way while scoring. I recall some teams at MSC had lower OPRs than at their district events. Unless I’m reading this wrong, 148’s Dallas OPR is 5 times 217’s from MSC.

Anyway, thanks again. We’ve used this since 2007 and it’s a good way to get a rough idea of how good a team is.

Thanks, Karthik. Between you guys and Ed Law, you save everyone a lot of time and produce great reference resources.

FYI, the Las Vegas Regional “Team Standings” data on FIRST’s web site is incomplete. Every team played 11 matches, but the data does not reflect it. (Team 987 was 10-1, not 9-1) The “Match Results” data is accurate, however.

Thanks very much. The information is well organized and logically put together.

I agree, it is most important that teams scout at the event. Time after time, regional events are won by alliance captains who have diligently scouted at the events.

Relative to past games, would you say the OPR analysis represents a “coarse” ranking of the teams versus a “fine” one? Is a team with an OPR of 65 much different from one with 55, from a statistical point of view? I am interested in your perspective; your comments would suggest this.

Thanks for putting together this nice package. As I consult with 2016, I will recommend using your analysis particularly as they prepare for their scouting approach.

Thank you Karthik and Team 1114! What a great resource, and made public as well!

The issue with using match results when the standings don’t account for a match is that the discrepancy could be either a mistake or a disqualification. There were plenty of both, so you’ll still end up with inaccuracies.

Fair enough. Good point.

As always, an awesome spreadsheet.

However, I think the normalization goes in the wrong direction. It seems this year that strong regionals result in lower OPRs for teams (see MSC, for example). The reason is the diminishing returns of tube scoring, as well as the size of the scoring zone not allowing three excellent scoring robots to all score efficiently simultaneously. Having only two minibot poles for three robots adds to this effect.

As always, excellent database Karthik and Simbotics! We will be sure to put this to good use!


I do agree that the normalization goes the wrong way when it comes to MSC. However, MSC is a complete statistical aberration, due to the overall strength of the state of Michigan and the exclusive nature of the event. Most other regionals never really reached the point of diminishing returns on tube scoring in qualifying, or never consistently had three scoring robots on a qualifying alliance. What we saw more often was weak regionals where only 1-6 teams could score effectively, and as such there were considerably lower scores. (Events with elimination matches where only 7 points were scored, etc.)

Can somebody give a brief rundown of what each of the categories is and how they are calculated?

As always, I think I have to agree with what you say.

Thanks Karthik and 1114, for continuing to lead the FRC community toward excellence, especially in scouting and strategy.

I was at MSC, and at three other events this season – an all time high for me. During the weeks I did not attend events, I was following webcasts. While I agree that MSC is in a class by itself (we can hope to see that level of play again in St. Louis, and at IRI), I wonder if the “point of diminishing returns” might also have been reached during some qualifying matches at Hartford and Knoxville? The fields for both of those events included many good tube scoring robots.

I really look forward to seeing some match-ups with teams that competed at those events against the most powerful Michigan teams. When ubertubes are even, the top and middle rows are filled early, HPs throw accurately and strategically, and everyone has a fast minibot, we begin to see artful defense decide matches. There were glimpses of that in the events I mentioned above. I think there will be more on Einstein.

Karthik, don’t sell yourself short. I ran some regressional linear-bearing algorithms, and I find that your data is 97.234% accurate.


It was due to a red card you folks got in the last qualification match right before lunch. I had asked about it and thought it was incomplete as well.


I think that linearly scaling the entire average contribution or OPR skews it too much at weaker regionals. Based on the two events 330 attended, Arizona was much weaker than LA, as the statistics show. Our performance was also worse at Arizona. Yet with the current scaling, our normalized Average Offensive Score is 113.7 at Arizona and 79.5 at LA. The Arizona value is very much overinflated. I think a better scaling would be

normalized value = (team - regional average) + worldwide average

This results in an AZ offensive score of around 67, which makes sense to me. This way you’re only compensating for the difference between regionals, not rescaling the team’s own performance.
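To make the two scalings concrete, here’s a small sketch comparing the database’s multiplicative normalization with the additive version proposed above. All of the numbers are invented for illustration, not 330’s actual scores.

```python
# Comparing two ways to normalize a team's score for event strength.
# All numbers below are hypothetical, purely for illustration.

def normalize_multiplicative(score, regional_avg, world_avg):
    # The database's approach: scale by relative event strength.
    return score * world_avg / regional_avg

def normalize_additive(score, regional_avg, world_avg):
    # Proposed alternative: shift by the regional's offset from the world
    # average, leaving the team's margin over the field unscaled.
    return (score - regional_avg) + world_avg

score, regional_avg, world_avg = 55.0, 25.0, 37.0
print(normalize_multiplicative(score, regional_avg, world_avg))  # 81.4
print(normalize_additive(score, regional_avg, world_avg))        # 67.0
```

At a weak regional (low regional average), the multiplicative version inflates a strong team’s score much more than the additive one, which matches the Arizona-vs-LA observation above.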

Thank you to Geoff Allan, Ben Bennett and Roberto Rotolo!

Another year, another great 1114 database. Thanks guys!

It appears that the North Star (MN2) award data is missing.

The red card was withdrawn on appeal following the match, so the calculations would be just slightly different as a result, I think…