#8, posted 06-07-2015, 13:22 by IKE (Isaac Rife), Team Role: Mechanical
Re: The merits of treating robotics tourneys like a game of Fire Emblem

Single-person scouting can be effective, but I think the organization could use a little work.

Several years ago, I put together a subjective scouting booklet loosely based on some "tasting" booklets I had seen online. The intent was to fill out each page with some sort of full subjective assessment of each team.
http://www.chiefdelphi.com/media/papers/2572

When using this, I would typically pair it with another sheet and a copy of the match schedule to help ensure that I saw at least one full match's worth of play from each team. One weakness of this approach is that you can closely watch a team's one bad match and miss several good ones, thus classifying what would be a "good" team as a weak one. The reverse is also possible.

The system I mentioned above does tend to work pretty well for match scouting used to influence match strategy, where you watch your partners' and opponents' matches one or two rounds ahead of your match with or against them and use the comments to feed a strategy card.

***************************
In contrast, though, I like the concept of the type of scouting you are discussing.
Let's say I just gave a 1-5 rating (with 0 for a no-show) to every robot based on its performance. I could, in general, watch each match from a holistic viewpoint and give a performance rating to every robot in every match. The average and variance of those ratings would likely give rough results that mirror a lot of pick lists built on some sort of performance-based ranking.
It wouldn't be as accurate, but with two people (one to scout, one to take notes) it is about 1/8th the effort, and it is likely far better than the 12.5% accuracy that a proportional share of the work would suggest. My guess is that if you figured out a rating method and compared your list to other teams' lists, yours would probably be around 75-80% as good. I know this is possible because, without using a "formal" system, I can usually name about 20 of the 24 teams that play in elims at District events just by watching matches as an observer. Often I am on the order of 15/16 for the first 16 teams, with more misses on the back end.
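
To make that concrete, here is a minimal sketch of rolling those ratings up into a rough pick list. The (team, rating) input format and the team numbers are made up for illustration; this isn't any formal system.

[code]
from statistics import mean, pvariance

ratings = {}  # team number -> list of 0-5 ratings, one per match watched

def record(team, rating):
    """Record one holistic 0-5 rating (0 = no-show) for a team."""
    ratings.setdefault(team, []).append(rating)

def pick_list():
    """Rank teams by average rating; lower variance breaks ties (consistency)."""
    return sorted(ratings.items(),
                  key=lambda kv: (-mean(kv[1]), pvariance(kv[1])))

# Illustrative data only: team rated 4, 5, 4 sorts ahead of one rated 5, 4, 3.
record(2337, 4); record(2337, 5); record(2337, 4)
record(1023, 5); record(1023, 4); record(1023, 3)
for team, rs in pick_list():
    print(team, round(mean(rs), 2), round(pvariance(rs), 2))
[/code]

Sorting by average with variance as the tiebreaker rewards consistency, which is roughly what you want when two teams score the same on average.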

With a little practice and a 1-5 or 1-6 scale (again, 0 for no-shows), you could likely get a pretty good list.
If you had a group of about four students and split Red/Blue and Defense/Offense ratings among them, you could likely get really good results, as long as your Red and Blue pairs discussed and synchronized their ratings (i.e., reached concurrence on what a 3, a 4, and a 5 look like).
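
One hypothetical way to sanity-check that synchronization (the group labels and the 0.5-point threshold below are just guesses): over a full event every team plays both alliance colors, so both pairs end up rating roughly the same pool of robots, and their overall averages should come out close.

[code]
from statistics import mean

# Hypothetical (group, rating) log; "red"/"blue" are the two scouting
# pairs. A persistent gap in their averages means the scales have drifted.
log = [("red", 3), ("red", 4), ("red", 5),
       ("blue", 2), ("blue", 3), ("blue", 4)]

by_group = {}
for group, rating in log:
    by_group.setdefault(group, []).append(rating)

means = {g: mean(rs) for g, rs in by_group.items()}
gap = max(means.values()) - min(means.values())
if gap > 0.5:  # threshold is a guess; tune it
    print(f"Groups differ by {gap:.2f} points on average; time to recalibrate.")
[/code]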

If you took this data for all matches, you could likely spot the important trends that would take your list from the 75-80% accurate level to 90%+.
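
As one example of trend-spotting (a sketch, not a formal method), a simple least-squares slope of each team's ratings over match order flags teams that are climbing or fading across the event:

[code]
def slope(rs):
    """Least-squares slope of a team's ratings vs. match index (0, 1, ...)."""
    n = len(rs)
    if n < 2:
        return 0.0
    x_bar = (n - 1) / 2
    y_bar = sum(rs) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(rs))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

print(slope([2, 3, 4, 5]))  # +1.0: improving as the event goes on
print(slope([5, 4, 3, 2]))  # -1.0: fading; worth a second look before picking
[/code]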

You still won't be as accurate as the teams collecting detailed stats who have the ability to process those stats and refine their value. I would venture, though, that your list may be more accurate than the lists of teams that collect wonderful stats but do not have a clear vision of how to use them...