OPR, or Offensive Power Rating, is a stat that’s often used in the FIRST community to compare the performance of teams on the field. It’s also common to hear things like “this game is a bad game for OPR” or “OPR is a pretty accurate indicator of performance this year.” In order to understand OPR’s strengths and limitations, learning the math behind it is key. This blog post will provide an introduction to the math behind OPR, and a future post will delve into how game design affects OPR’s usefulness as a statistic.
Having taken a decent few math classes in college, I think this write-up does a great job of explaining the math at a high-school level without sacrificing accuracy or rigor. This will definitely be recommended reading for the Shaker Robotics scouting team.
I find myself very jealous of all these FRC kids who are heading into college with a super concrete application of Linear Algebra in their heads. I struggled mightily with LA because I didn’t really perceive the value in it and didn’t put in the necessary work to come to a complete understanding. I might even find a MOOC course and take another stab at learning it…if I can find some free time.
I never thought about multiplying the robot match matrix by its own transpose to get the A matrix. I had always just kept a running sum of match occurrences in an N x N matrix (where N is the number of robots). I like the transpose method better.
When I was first taught linear algebra, I had no idea how ubiquitous it would be in real-world engineering applications. As a result, I had to re-teach it to myself years later once I fully understood its value.
The “robot match matrix” mentioned above is known as the design matrix (of the overdetermined system of equations). Most texts use [A] to represent the design matrix. For FRC OPR, it is a dichotomous (binary) matrix (all ones and zeros).
For M matches involving T teams, the overdetermined system in matrix form is
[A][x] ≈ [b]
where [A] is the 2M-by-T design matrix, [x] is the T-by-1 column vector of team OPRs, and [b] is the 2M-by-1 column vector of alliance scores (two alliances per match, hence 2M rows).
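As a concrete illustration, the design matrix can be built directly from match data. The teams, alliances, and scores below are entirely made up (a tiny 4-team, 3-match event with 2-team alliances):

```python
import numpy as np

# Hypothetical mini-event: 4 teams, 3 matches, 2-team alliances
# (so 2M = 6 alliance-score equations in T = 4 unknowns).
teams = [101, 102, 103, 104]
col = {t: i for i, t in enumerate(teams)}

# One entry per alliance: (teams on the alliance, that alliance's score).
alliance_rows = [
    ([101, 102], 60), ([103, 104], 45),   # match 1
    ([101, 103], 55), ([102, 104], 50),   # match 2
    ([101, 104], 70), ([102, 103], 40),   # match 3
]

A = np.zeros((len(alliance_rows), len(teams)))  # 2M-by-T design matrix
b = np.zeros(len(alliance_rows))                # 2M-by-1 score vector
for r, (members, score) in enumerate(alliance_rows):
    for t in members:
        A[r, col[t]] = 1.0   # dichotomous: 1 iff the team played in that row
    b[r] = score

print(A.shape)   # (6, 4) -- more equations than unknowns: overdetermined
```

Each row of [A] has exactly as many ones as there are teams on an alliance, which is why the matrix is dichotomous and, for large events, very sparse.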
When you left-multiply both sides of the above by the transpose of [A], the result is known as the Normal Equation:
[A]'[A][x] = [A]'[b]
I personally use [N] to represent [A]'[A], since it is the matrix of the Normal Equation:
[N][x] = [d],
where [d] = [A]'[b]
Notice that the system [N][x] = [d] is **not** overdetermined; it is a square system of T equations in T variables (where T is the number of teams in the dataset).
The solution of the Normal Equation is the least-squares approximate solution to the original overdetermined system.
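To make this concrete, here is a small self-contained sketch (the design matrix and scores are made up) showing that solving the square Normal Equation [N][x] = [d] reproduces the least-squares solution of the original overdetermined system:

```python
import numpy as np

# Tiny hand-made example: 6 alliance-score rows, 4 teams.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
b = np.array([60, 45, 55, 50, 70, 40], dtype=float)  # made-up alliance scores

N = A.T @ A                 # [N] = [A]'[A], a square T-by-T matrix
d = A.T @ b                 # [d] = [A]'[b]
x = np.linalg.solve(N, d)   # exact solution of the square system [N][x] = [d]

# Same answer as a direct least-squares solve of the overdetermined [A][x] ~ [b]:
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ls)
```

The entries of `x` are the OPR estimates: the per-team contributions that best explain the alliance scores in the least-squares sense.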
For very large datasets, such as the 2017 FRC “World” OPR computation (25362 equations in 3331 unknowns), building the design matrix (rather than the Normal matrix) from the raw data is superior because it facilitates the use of sparse-matrix techniques, which can vastly speed up computation.
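A sketch of the sparse approach, assuming SciPy is available (the row/column indices and scores are the same made-up toy data as above, stored as coordinate lists the way raw match data naturally arrives). `scipy.sparse.linalg.lsqr` solves the least-squares problem iteratively without ever forming the dense Normal matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical raw data: each alliance row contributes one (row, col) pair
# per team on that alliance, so the design matrix is extremely sparse.
rows = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]   # alliance (equation) index
cols = [0, 1, 2, 3, 0, 2, 1, 3, 0, 3, 1, 2]   # team (column) index
data = np.ones(len(rows))
A = csr_matrix((data, (rows, cols)), shape=(6, 4))   # sparse design matrix
b = np.array([60, 45, 55, 50, 70, 40], dtype=float)  # made-up alliance scores

x = lsqr(A, b)[0]   # iterative sparse least squares on [A][x] ~ [b]
```

At FRC "World" scale the dense design matrix would be over 25000 x 3000 entries, nearly all zero, so the sparse representation saves both memory and time.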
Also, you will need the design matrix if you want to explore options other than OPR, as discussed in this thread and this paper.