Re: The merits of treating robotics tourneys like a game of Fire Emblem
So basically, the difference between the two systems (from what I can tell) is that the original system uses quantitative data (how many stacks they made, how many points they scored, or whatever metric you used), while the student's method is qualitative.
Qualitative data is insanely risky. Words like "great" or "good" aren't an effective way to keep data. What does "good" mean? What does "great" mean? How do you decide whether one "great" is better than another? If the reason for using the student's method is that it shows whether a robot improves, why not use the quantitative data (points scored) to show the same thing? Keep a sheet (or an Excel column) for each team, write down how many points they score each match, and you get the same effect as the student's method, but with a much better way to compare teams.
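For anyone who'd rather script it than keep a spreadsheet, here's a rough sketch of the same idea in Python. The team numbers and scores below are made up purely for illustration; the point is just "log every match score per team, then sort by average":

```python
# Minimal sketch: per-team, per-match scores, ranked by average.
# Team numbers and scores here are hypothetical.
from statistics import mean

match_scores = {
    1234: [42, 51, 38],   # points scored in each match
    5678: [47, 44, 55],
    9012: [60, 58, 63],
}

# Rank teams by average points per match, highest first.
for team, scores in sorted(match_scores.items(),
                           key=lambda kv: mean(kv[1]),
                           reverse=True):
    print(f"Team {team}: avg {mean(scores):.1f} over {len(scores)} matches")
```

Even something this simple lets anyone on the team answer "is this robot improving?" and "which of these two teams scores more?" without relying on anyone's memory.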
You're also throwing all of your scouting eggs into one basket. What if the kid forgets a good team? What if he can't show up one day and doesn't know beforehand? If your entire scouting system depends on one kid being there, you're not only risking its failure if he can't make it, you're also left without a system anyone else can take over. Not to mention it's not a system you'll likely be able to keep using after he graduates.
When I scouted my freshman year we had basically the same setup. I memorized all of the teams, and a senior and I were the only scouts. We thought we made great picks at the time, but looking back on it three years later, they were based on our opinions rather than any real data. Because of that, we didn't pick the right teams.
TLDR: I'm not saying the student's system can't work, but it's definitely not an optimal setup. Quantitative data > qualitative data.
Just my $.02
__________________
Student on Team 1058 (2012-2015)
Mentor on Team 229 (2016-Present)
Writer for Blue Alliance Blog