paper: 2013 Team 1114 Championship Scouting Database

Thread created automatically to discuss a document in CD-Media.

2013 Team 1114 Championship Scouting Database
by: BenB

The 10th annual Team 1114 Championship Scouting Database.

The 10th annual Team 1114 Championship Scouting Database. Includes full stats, including some advanced metrics for every team who competed in Ultimate Ascent.

Team 1114 2013 Championship Scouting Database.xlsx (2.89 MB)
Team_1114_2013_Championship_Scouting_Database v2003.xls (5.79 MB)

Attached is the 2013 Team 1114 Championship Database. This year’s database includes full results for every team that competed in the 2013 season. We’ve based this version on the current divisions posted at usfirst.org. If these change, we will update accordingly. (However, if it’s just a couple of additions, we may not release a new version.)

The database includes:
-An interface to allow you to pull an individual team’s record
-Full listing of awards, record & finish
-Team scoring averages
-Total “OPR” as well as Auto, Teleop and Climb “OPR”. These calculations use linear algebra to estimate a team’s average contribution to its alliance’s score at each regional (using only qualification match results)
-A master sheet for a sortable comparison of all FIRST teams
-Master sheets for each division and full divisional assignments
-A quick Divisional Stats Summary
-New this year, “Calculated World District-System Ranking”
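For those curious how the OPR calculation works, here is a minimal sketch of the linear-algebra approach described above: build a matrix with one row per alliance appearance and one column per team, then least-squares solve against the alliance scores. The team numbers and scores below are made up purely for illustration, and this is not 1114’s actual spreadsheet logic:

```python
import numpy as np

# Made-up qualification data for a tiny 6-team event:
# each entry is (alliance team list, alliance score).
teams = [1114, 2056, 254, 1503, 148, 987]
matches = [
    ([1114, 2056, 254],  120),
    ([1503, 148, 987],    90),
    ([1114, 1503, 148],  100),
    ([2056, 254, 987],   110),
    ([1114, 987, 2056],  115),
    ([254, 1503, 148],    95),
]

idx = {t: i for i, t in enumerate(teams)}
A = np.zeros((len(matches), len(teams)))  # 1 if team played on that alliance
b = np.zeros(len(matches))                # alliance scores
for row, (alliance, score) in enumerate(matches):
    for t in alliance:
        A[row, idx[t]] = 1.0
    b[row] = score

# Each team's least-squares coefficient is its OPR: the average points
# it is estimated to have contributed to its alliance's score.
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(t, round(float(opr[idx[t]]), 1))
```

The Auto, Teleop, and Climb variants simply reuse the same matrix with a different score vector for each phase of the match.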

The Calculated World District-System Ranking uses the district point system from FiM and MAR to calculate a team’s ranking based on qualification W-L-T results, alliance selection, elimination finish, and awards. Full details about the scoring system can be found in the FiM Rules Supplement.

A few things to note when reviewing the “Calculated World District-System Ranking” results:

-The result, draft, win-loss, and award points are normalized to 2 events, since the district system only counts points from a team’s first two events (i.e. if you only attended one event and lost in the semifinals as a first-round pick, you would receive 2*10/1 = 20 elim points; 2 events * total elim points / number of regionals attended)
-The win-loss points were normalized to 12 matches per event (i.e. if you won 8 matches at an event where only 10 matches were played, you would receive 2*12*8/10 = 19.2 points; 2 points for a win * 12 matches * winning percentage)
-All MAR and FiM District Championship data has been excluded, as we feel it would hurt the normalized points for teams competing at these events, since the stronger competition makes points more difficult to obtain
-District Chairman’s Award, Engineering Inspiration, and Rookie All-Star have been treated like their regular regional equivalents, and are listed in the Auto Qualify for World column
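The normalization arithmetic in the notes above can be sketched as follows. The helper names are mine, not 1114’s, and the point values (10 elim points for the semifinal-as-first-pick example, 2 points per win) are taken from the worked examples in this post:

```python
def normalized_points(total_points, events_attended):
    """Scale elim/draft/award points to the two-event district baseline."""
    return 2 * total_points / events_attended

def win_loss_points(wins, matches_played, points_per_win=2, baseline_matches=12):
    """Normalize qualification win points to 12 matches per event."""
    return points_per_win * baseline_matches * (wins / matches_played)

# Lost in the semifinals as a first-round pick at your only event:
print(round(normalized_points(10, 1), 1))   # 2 * 10 / 1
# Won 8 of 10 qualification matches played at an event:
print(round(win_loss_points(8, 10), 1))     # 2 * 12 * 8/10
```

Both adjustments are linear scalings, which is what makes the one- and three-event caveats below matter: doubling a lucky single-event result inflates it, and dropping a third event entirely can deflate a strong team’s total.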

Some of this is comparing apples to oranges, so it’s important to keep the following things in mind as well:

-Weak teams that attended only one event and “got lucky” by making it to the finals will have inflated numbers, as their “result” points were doubled to normalize to two events. Had they attended a second event, it is unlikely their results would have been as favorable, so they would likely have fewer total result points
-Conversely, a strong team that attends three events and gets “upset” in the semifinals at its third event will have deflated numbers, as only the points from its first two events count in the district system; it would otherwise have had higher total result points
-Teams that attend smaller and/or weaker regionals will have inflated numbers as it is easier to obtain points
-Teams that attend larger and/or more competitive regionals will have deflated numbers as it is more difficult to obtain points

Assuming 400 teams were to attend World Championship this year, the top 235 would have qualified on points, with 165 teams below that qualifying based on awards.

The data we have was all mined from the FIRST website, and I’m confident it is 97.1114% accurate. That being said, much of the alliance selection data was obtained via word of mouth, so that is the most likely source of errors.

Prior to 2008 we never released any of our regression analysis (OPR) that we had been doing since 2004. Since people have become more knowledgeable on the subject, we decided to make the change. Please do not take a poor score as a slight or an insult. We simply used the actual scores from matches to perform a calculation. We feel that this tool is the best available metric if you are unable to watch the actual matches. Since none of us can attend every regional, it should be a valuable tool. In our opinion, regression analysis is more effective for Ultimate Ascent than it was for most of the games in recent history, since the game can be played fairly independently from partners (i.e. for most events there is ample room on the field/at the loading zones and scoring objects to utilize). However, there are limitations, and as Karthik said, “OPR in 2013 as a stat that is blindly quoted by those without a fundamental understanding of what it means = Thumbs Way Down”. If you want more details on this, come check out Karthik’s seminar in St. Louis.

Thanks to Karthik Kanagasabapathy, Geoff Allan and Roberto Rotolo of Team 1114 Stats and Research for creating this year’s database. Thanks to all on Chief Delphi who helped with the alliance selection results that are used within this database.

If you have any questions, please ask.

Thank you. This is very helpful.

This is great. Thanks

*Here’s what the interface screen looks like with older versions of Excel, after the M$ XLSM-to-XLS converter has worked its magic.

For those of us with older versions of Excel, would you be willing to try doing a “Save As” to export the spreadsheet as XLS and post it?

I’ve uploaded a 2003 version. We had to remove some formatting and change a couple formulas, but it appears to work. Let me know if you have any issues.

It appears to work. Thank you.

This is great, thanks 1114.

Special Thanks to Roberto

Great tool; thanks for the hard work.

Thought that the stats on the Divisional Summary sheet are very interesting:

Division    Teams  Events    Regional  Total Event  Avg    Max
                   Attended  Winners   Wins         OPR    OPR
Archimedes   99    202       43        57           32.3    92.0
Curie        99    186       46        52           32.8    98.7
Galileo      99    194       49        58           34.2   103.9
Newton       98    198       32        43           28.8    80.9

Makes Gali seem like the toughest and Newton seem a little softer.