Some of you noticed the CD-papers file I posted yesterday titled “Offensive Power Ratings 2006”. The ratings are designed to give a rough estimate of the number of points a robot contributes to its alliance in a given match. A description of the system can be found here.
Now for a mea culpa: the ratings posted yesterday were incorrect because I did a sloppy job writing the code for them (what else is new). This time around I let Mathematica do the heavy lifting, with better results.
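For anyone curious how ratings like these can be computed, here is a minimal sketch of one common approach (an assumption on my part about the method in the linked paper): treat each alliance's score as the sum of its members' contributions, which gives one equation per alliance per match, and solve the overdetermined system by least squares.

```python
# Sketch: offensive power ratings via least squares.
# Assumption: alliance score = sum of member contributions.
import numpy as np

def power_ratings(matches, teams):
    """matches: list of (alliance_team_list, alliance_score) pairs,
    one entry per alliance per match. Returns {team: rating}."""
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))  # one row per alliance-score
    b = np.zeros(len(matches))
    for row, (alliance, score) in enumerate(matches):
        for t in alliance:
            A[row, idx[t]] = 1.0  # this team played on this alliance
        b[row] = score
    # Least-squares solution of A x = b
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))
```

With enough matches per team, the solution spreads each alliance's points across its members in the way that best fits all scores simultaneously.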
The new, corrected (I hope) ratings are posted here.
I will do this for each of the Championship divisions Friday evening and will update them Saturday if I’m up to it. I am willing to do BAE, UTC, or GTR if someone will give me the match data for those regionals.
One issue with the FIRST scheduling algorithm: at Annapolis, a 64-team event, 1446 and 1218 were paired together twice. At smaller events, including New York, Chicago, and Wisconsin, some teams were paired together in three different matches.
That said, I have no doubt that there are a couple of mistakes in the ratings that I have made. In fact, I know of one already: I duplicated half of the last match at VCU. :eek: (This will also be fixed later when I add SBPLI.)
*edit: I just noticed that I duplicated the second half of the last match of a bunch of regionals.
Also, anyone who uses this or any other “objective” rating as a replacement for scouting is a fool. I didn’t think I would have to say that.
Of course, it gives us the 12th highest power rating in our only regional, in which we didn’t get picked for a finals alliance. The other interesting PNW outcome is that perennial regional powerhouse Titan Robotics had the highest power rating in the list, even though their finals alliance did not win. They just quietly keep getting it done.
Thanks for doing the math.
It would be interesting to find out the effect of experience on scores. Can you factor out the competition week (do later weeks score more?) and experience (do teams score more at a second or third regional)?
The power ratings are based on offensive scores alone; they do not take defense, week of competition, or level of experience into account. I want to keep arbitrariness and subjectivity out of the calculation: that’s why you scout. You can look at the effect of experience if you want to, because I rated each team independently at each regional. That’s why, say, 25 has the 2nd and 5th highest ratings on the list (2nd for LV, 5th for NJ).
Note to anyone: the remaining mistakes are those I made transcribing data from the website. If someone could post the number of qualification matches played at each regional so I could double-check my numbers, that would be great.
Let’s compare the offensive power ratings to actual per-team scoring data taken at PNW. The individual data was taken by a rotating team of six people, each of whom kept an eye on a specific robot during each match.
Offensive power rating ; individual data ; rank by ind. data
There is a correlation between good and poor scorers when considering the entire list, but the ranking error can be as large as the number of finals alliances.
You would not want to use the power rating to do any picking if you are lucky enough to be picking at the nationals. Neither would you want to rely on your own scoring from prior regionals, as significant changes in performance can occur. You need to field a scoring team for your division at the nationals. Doing a good job of that will serve you well…
The practice matches were used to practice gathering statistics. Our ranking data only includes qualifying matches.
This particular regional was our first for taking such detailed data. It is very hard work. The data was looked at for alliance selection but was not used to determine choices in a rigid way at Portland. The performance of teams in the finals at Portland validated our data, and also added the requirement to consider another important factor: our assessment of a team’s willingness to play ball with the strategy of the alliance.
Our actual ranking data kept track of autonomous, points scored, defense, getting on the platform, penalties, and win/loss record (seed). Points scored by a team in a match are important, but not the whole story. These independent rankings were merged with a formula that put priorities on what we considered important in forming an alliance specifically with us.
488 was number one in our combined rank at Portland. By our books it was top-ranked in scoring, 7th in autonomous, 19th in defense, 20th for getting on the platform, and 10th for win/loss record.
1569 was number two in our combined rank at Portland: 4th for scoring, 1st for autonomous, 2nd for defense, 1st for getting on the platform, and 14th for win/loss record.
1359 was number three in our combined rank at Portland: 2nd for scoring, 3rd for autonomous, 40th for defense, 8th for getting on the platform, and 1st for win/loss record.
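A merge of independent category rankings like the one described above can be sketched as a weighted rank sum. The weights below are invented for illustration; the team's actual formula and priorities are not stated, so the resulting order need not match their combined rank.

```python
# Hypothetical weighted rank merge; weights are illustrative only,
# not the team's actual formula. Rank 1 = best in each category,
# so a lower combined score means a higher combined rank.
WEIGHTS = {"scoring": 0.35, "autonomous": 0.20, "defense": 0.15,
           "platform": 0.15, "record": 0.15}

def combined_score(ranks):
    """ranks: {category: rank}; returns the weighted rank sum."""
    return sum(WEIGHTS[c] * r for c, r in ranks.items())

# Category ranks from the post above
candidates = {
    "488":  {"scoring": 1, "autonomous": 7, "defense": 19, "platform": 20, "record": 10},
    "1569": {"scoring": 4, "autonomous": 1, "defense": 2,  "platform": 1,  "record": 14},
    "1359": {"scoring": 2, "autonomous": 3, "defense": 40, "platform": 8,  "record": 1},
}
order = sorted(candidates, key=lambda t: combined_score(candidates[t]))
```

Tuning the weights toward scoring, defense, or autonomous changes the pick order, which is exactly the point of merging with a formula that reflects what matters to your own alliance.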
The lesson that we took away from PNW was to go with our data, although there is an important role for hunches with regard to the potentially different play that occurs during finals matches. Sometimes a team reserves a capability as a surprise for the finals, and it pays to evaluate that possibility when inviting alliance partners, even if their ranking in your team’s individual scoring is not that high…
And I spent way too long in Excel today with sooo many nested IF statements, trying to get team averages with an actual formula and then ranking the teams by their averages… My file is attached for anyone interested. It should be easy to update if we get any of the other info.
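The averaging-and-ranking that takes nested IFs in a spreadsheet is a few lines outside one. A sketch, assuming the match data is a list of (team, points) rows (the row format and function names are illustrative):

```python
# Team averages and ranking without spreadsheet nested IFs.
# Assumes input rows of (team, points), one per team per match.
from collections import defaultdict

def team_averages(rows):
    """Return {team: average points per match}."""
    totals = defaultdict(lambda: [0.0, 0])  # team -> [sum, count]
    for team, points in rows:
        totals[team][0] += points
        totals[team][1] += 1
    return {t: s / n for t, (s, n) in totals.items()}

def rank_teams(rows):
    """Return teams sorted best-first by average points."""
    avgs = team_averages(rows)
    return sorted(avgs, key=avgs.get, reverse=True)
```

For example, `rank_teams([("25", 40), ("25", 60), ("67", 30)])` ranks team 25 (average 50) ahead of team 67 (average 30).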
Great list! I remember using this last year in our scouting!
And the reason this data doesn’t correlate with ranking is that it doesn’t factor in the other alliance’s score, just your own points. (I think that’s why it’s called Offensive Power.) The teams that ranked higher with a lower OPR did so because either their opponents scored more, or they scored more for their opponents.
At the time the original list of these offensive power ratings was posted, the match data for the BAE, UTC, and GTR regionals was not available on the FIRST web pages. The data from the BAE and GTR qualifying matches now seems to be available, however. Would it be possible to post an update of the offensive power ratings with these tournaments included?
Thanks! I really appreciate having the OPR info for a regional that we attended, so that we can get a feel for whether the teams that subjectively seemed to have great offensive capabilities also had high OPR numbers.