Our scouting team is currently working on a way to rank minibots in the minibot race. The best method I've figured out so far is to input win data into a matrix and square it, but we are wondering if there is a more effective way to do this, or if this is what we should use. I tried looking up other scouting methods and all I found was OPR, which doesn't fit what we are trying to find in this case.
As for how specifically we are using the matrix, we are taking how the minibots rank in a match and translating that into wins. For example, let's say there are Minibots A, B, C, and D. If they rank 1, 2, 3, and 4 respectively, then A will have a "win" against B, C, and D, B will have a "win" against C and D, C will have a win against D, and D will have no data. Essentially, it will look like:
O  A  B  C  D
A  0  1  1  1
B  0  0  1  1
C  0  0  0  1
D  0  0  0  0
And to clarify, if the same result happens a second time, it would look like:
O  A  B  C  D
A  0  2  2  2
B  0  0  2  2
C  0  0  0  2
D  0  0  0  0
We would then square that matrix, sum the rows, and whoever has the highest sum gets the highest rank. I'll include the Excel sheet we are using so far. In theory, it will end up being a 50x50 matrix, or however many teams there are at the competition, but for experimentation purposes we are working with an 8x8 matrix.
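To make the procedure concrete, here is a minimal sketch of the build-square-sum pipeline described above. The function names and the sample data are my own; this just reproduces the doubled A/B/C/D example from the tables:

```python
# Build a win matrix from match placings, square it, and sum each row,
# as described above. Bot names and sample data are hypothetical.

def win_matrix(placings, n):
    """placings: list of matches; each match is a list of bot indices
    in finishing order. Returns an n x n matrix where M[i][j] counts
    how many times bot i finished ahead of bot j."""
    M = [[0] * n for _ in range(n)]
    for order in placings:
        for pos, winner in enumerate(order):
            for loser in order[pos + 1:]:
                M[winner][loser] += 1
    return M

def matmul(A, B):
    """Plain matrix multiply, so the sketch needs no external libraries."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Example: bots A=0, B=1, C=2, D=3 finish in order A, B, C, D twice,
# matching the second table above.
M = win_matrix([[0, 1, 2, 3], [0, 1, 2, 3]], 4)
M2 = matmul(M, M)
scores = [sum(row) for row in M2]  # row sums of the squared matrix
# scores == [12, 4, 0, 0], so A ranks first
```

Note that with the pure square, C and D both score 0 even though C beat D directly, since squaring only counts two-step (indirect) wins.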
This may work, but I think there are much easier solutions. We are keeping track of whether the minibot successfully reaches the top and an estimate of the time it takes from when the robot touches the tower.
This way won't give us a perfect list (is a team with 5 successes and a 3-second average better than a team with 8 successes but a 5-second average?), but I don't think that's necessary. Most of the time it's finding the best combination of autonomous, tube placing, minibot, and defense (among many other things) that best fits your strategy.
In the first case, this would be registered as last 3/4 of the time, and likely first or second the other 1/4. The question is how much you'd value the 25% minibot, and whether you even want to know that percentage.
I’m not sure what 2337 is doing this year, but we always try to bring in as many objective numbers as possible, so I imagine it’ll be attempt (yes/no), success (yes/no), place (1-4). This could be used to find average place per success, as well as a percentage of attempts being successes.
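Those three objective numbers are easy to turn into the two summary stats mentioned. A quick sketch with made-up match records (the field names and data are my own, not anything 2337 actually tracks):

```python
# Hypothetical per-match records: attempt (yes/no), success (yes/no),
# and place (1-4, or None when the minibot never reached the top).
matches = [
    {"attempt": True,  "success": True,  "place": 2},
    {"attempt": True,  "success": False, "place": None},
    {"attempt": True,  "success": True,  "place": 1},
    {"attempt": False, "success": False, "place": None},
]

attempts = sum(m["attempt"] for m in matches)
successes = sum(m["success"] for m in matches)

# Percentage of attempts that were successes.
success_rate = successes / attempts if attempts else 0.0

# Average place per success (only successful runs have a place).
places = [m["place"] for m in matches if m["success"]]
avg_place = sum(places) / len(places) if places else None
```

With this sample data the minibot succeeded on 2 of 3 attempts and averaged 1.5 when it did succeed.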
Several people have said they are keeping track of the place of the minibot. I don't think that is the best way to do it, as it is highly dependent on the other robots. A 10-second minibot could get 1st if there are no other minibots, while a 5-second minibot could get 4th if it is up against really fast minibots. Over the course of 8 matches, I don't think things will average out.
But anyway, I think placing is going to be worth just about as much as an estimation. Both will be somewhat off, but I think placing will work just as well with all minibots, whereas estimation won’t work as well with the fastest (but perhaps better with the worst).
Instead of recording something such as the raw rank, it translates that rank into an "I'm better than you" kind of relationship. For example, if Min1 comes in first, the matrix takes into account who it finished 1st against: 2, 3, and 4. I've actually tried to steer away from using time, and instead am using kind of a "who beats who" approach. That's the wonderful thing I've found with this system: by squaring the matrix, if 1 beats 2 and 2 beats 3, the squaring tells the data that 1 indirectly beats 3.
The few problems are that it can't be weighted, and each participant has to play the same number of times for it to be accurate, which is why I was wondering if there was something I could do to improve the accuracy of the system.
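One standard refinement, not from the original post but common in win-matrix power rankings: rank by the row sums of M + M² rather than M² alone, so direct wins still count alongside the indirect ones. A sketch using the doubled example matrix from earlier in the thread:

```python
# Rank by row sums of M + M^2: direct wins plus two-step indirect wins.
# With M^2 alone, a bot that only beat the last-place bot scores the
# same 0 as the last-place bot itself; adding M back in fixes that.
def rank_scores(M):
    n = len(M)
    M2 = [[sum(M[i][k] * M[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [sum(M[i][j] + M2[i][j] for j in range(n)) for i in range(n)]

# The doubled A/B/C/D example matrix from earlier in the thread.
M = [[0, 2, 2, 2],
     [0, 0, 2, 2],
     [0, 0, 0, 2],
     [0, 0, 0, 0]]
print(rank_scores(M))  # [18, 8, 2, 0]
```

Here C now scores 2 instead of tying D at 0, which partially addresses the accuracy concern, though unequal numbers of matches per bot would still need normalizing.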
That's a good point. I think you get to a point where minibots are "fast enough", especially at the regional level. Is there really a significant difference between 1.55 seconds and 2 seconds? Or 2 seconds and 3 seconds? I don't think so, especially considering there are many factors that prevent teams from deploying at the exact same millisecond.
Thus, I think it is important to rank the lower-end minibots, which may range from 8-14 seconds. That time difference is probably more significant.
You might also want to keep track of rule violations which cause the TOWER to be disabled. A 2.7-second MINIBOT isn’t of much value if they try to squeak it to 2.3 seconds by DEPLOYING too early, earning a disablement.
I'd say the best way to scout minibots would be for the scouter to write down the time on the timer when the minibot hits the top. That way, it's totally objective and doesn't depend on the other minibots or what place this particular one got in the minibot race.
By far the best way in my opinion is to write down the time on the clock when the minibot reaches the top. This factors in everything - deployment speed, minibot speed, whether or not teams get to the tower early enough.
We may have to steal this. One downside is the case where a team knows the opponent doesn't have a minibot and thus places a final tube before deploying at the last second.
All you would have to do is keep track of where the minibot places, the number of points scored, and how fast it makes it to the top; the one with the most points is usually the most beneficial.
This doesn’t actually tell you anything other than relative score. It’s analogous to looking at a team’s average score to figure out if they’re a good pick for your alliance. It gives you a general picture of what ballpark they are in, but it doesn’t actually tell you how good they are, only that their opponents were better or worse.
Teams with the fastest minibots, or at least those who understand their worth in the game, will be prepared for deployment at least 5 seconds prior to the end game. Once the towers go live at 10 seconds, they should start deployment. From there, I'd imagine the fastest minibots will reach the top at 8 seconds or so, with other minibots being slower from there.