Re: scouting 2011
I like a lot of the work you are putting out. What's unfortunate is that the field management system this year is pretty sweet, but the Twitter feed they released doesn't seem to take advantage of that information, which would have helped eliminate much of the subjectivity.
For the peg model, how are you going to compensate for pegs you can't see? Many of the larger competitions will force you to sit in areas with some blind spots (BAE specifically). Will you dedicate one scouter to each side of the field?
How many teams use laptops for their scouting vs stacks of paper? Both naturally have their pros and cons, and I'm curious about your thoughts for this game.
It also seems one concern is the subjectivity of teams' claims and how individuals perceive a robot. Do you think a rating system could overcome this? If many individuals contributed their scouting to an evaluation of other robots, do you think the ratings would average out to reasonable data, thus making them less subjective?
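The averaging idea can be checked with a quick simulation. This is a minimal sketch, not tied to any actual scouting system: it assumes each scouter's rating is an unbiased but noisy view of a robot's "true" quality, and all names and numbers (`TRUE_SCORE`, `BIAS_SPREAD`, a 1-10 scale) are hypothetical.

```python
import random
import statistics

random.seed(42)

TRUE_SCORE = 7.0   # hypothetical "true" robot quality on a 1-10 scale
BIAS_SPREAD = 2.0  # how far one scouter's perception can drift from the truth

def scout_rating(true_score, spread=BIAS_SPREAD):
    """One scouter's subjective rating: the truth plus personal bias/noise."""
    return true_score + random.uniform(-spread, spread)

def pooled_rating(true_score, n_scouters):
    """Average the independent ratings of n_scouters."""
    return statistics.mean(
        scout_rating(true_score) for _ in range(n_scouters)
    )

# Compare the typical error of a single scouter vs. a pool of 25,
# averaged over many simulated matches.
single_err = statistics.mean(
    abs(pooled_rating(TRUE_SCORE, 1) - TRUE_SCORE) for _ in range(1000)
)
pooled_err = statistics.mean(
    abs(pooled_rating(TRUE_SCORE, 25) - TRUE_SCORE) for _ in range(1000)
)
print(single_err, pooled_err)
```

Under these assumptions the pooled error shrinks roughly like 1/sqrt(N), so 25 scouters cut the typical error to about a fifth of a single scouter's. The catch is the "unbiased" assumption: if everyone shares the same blind spot (e.g. overrating flashy robots), averaging won't remove it.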
I'm very interested in collaborative platforms; in a lot of cases we tend to do 5x the work for the same payoff we could get by working together. I think scouting is a perfect example, and maybe by working together we would get better data.
I think Patrick's (computerteen) idea is a good one. Definitely a step in the right direction. We see so many teams hand out flyers with claims of having the perfect robot, it sure would be nice to have a way to validate those claims.
__________________
"Never let your schooling interfere with your education" -Mark Twain