Minibot Scouting

Watching Chief Delphi lately, I have seen two different systems of scouting minibot performance.

  • Option A - Record what time the minibot triggered the tower according to the game clock on the overhead screen

  • Option B - Record where the minibot placed (1-4) in relation to the other minibots in the match

As I see it, these are both valid methods, each with its own pros and cons. I am curious as to which method your team will be using this year. Is there a definitive disadvantage to one of these methods? Are there other methods that I have overlooked? What are they?

Both methods have some failings.

If a match has the 4 slowest minibots in the tournament, someone will still come in first.

And only recording the finish time ignores the fact that in some matches, it is sufficient to finish after the clock runs out.

But is it safe to say that even if a team does have a slow minibot, over the course of the 7-8 matches it plays, that one lucky first-place finish will be countered by its other, lower finishes?

I hadn’t thought about that…but that is definitely a problem. And without running a stopwatch up in the stands alongside the game clock, it’s hard to remedy.

The problem I foresee with Option B, at least at our regional [Oregon, few ‘powerhouse’ teams] is that in many matches there may well be no other minibots, so ANY minibot that scores is going to be ranked 1st. True, this might be balanced out by other matches, but with only ~8 matches, I’d rather not risk it. [Edit, I just noticed that the OP is from Team 360…an exception to the ‘no powerhouses at AOR’ rule bows in respect ]

Personally, I’d like to stopwatch-time each minibot, but I doubt we’ll have the manpower for that. I’m leaning towards just a yes/no response: did a team’s minibot score? But that has the huge disadvantage of being no help when you are trying to pick an alliance partner and ‘Our data shows that this team’s minibot reached the top of the pole 8 times, but then so did this team’s…’

I’m in the early stages of planning our scouting system and I’m interested to know what others are thinking; are the strategy/scouting discussions mainly going on in the General Forum?

It might be interesting to have a scouting team assigned to just watch and time minibots [for teams with more than 9 members…heh]. I wonder if we will see teams picked solely on the basis of their minibot.

What is your opinion of recording a yes/no answer for match scouting, then relying on pit scouting for exact times? Self-reported times, however, will probably not take into account the time needed to maneuver up to the pole and deploy.

Our team is not that large and most of our members are rookies. Because of this we’re trying to keep our scouting simple, so our minibot ranking so far is three columns in a table:

  1. Minibot success/fail (we’ll keep track of all matches to see if it begins to work or breaks badly) with a yes/no response

  2. Ranking the minibot on a scale of 1-5 (0 if the team doesn’t have one). This is a subjective comparison to the other minibots the scout has observed. Not ideal, but simple. Our database will average and be able to rank the values, as well as showing the individual match values so we can see possible improvement.

  3. Ranking the minibot deployment on a scale of 1-5 (again, 0 if not present). We’ve found from our observations that deployment is nearly as important as the actual minibot. After all, if you can’t get it on the tower how can it win?
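The three-column table above can be compiled into per-team averages with very little machinery. This is only a sketch of what the described database might do; the sample rows, team numbers, and tuple layout are my own assumptions:

```python
# Sketch of the three-column minibot table described above, one row per
# (team, success, minibot_rank_0to5, deploy_rank_0to5). Data is made up.
from collections import defaultdict

rows = [
    (360, True, 4, 5),
    (360, True, 5, 4),
    (955, False, 0, 2),
    (955, True, 3, 3),
]

totals = defaultdict(lambda: [0, 0.0, 0.0, 0])  # successes, rank sum, deploy sum, matches
for team, success, rank, deploy in rows:
    t = totals[team]
    t[0] += int(success)
    t[1] += rank
    t[2] += deploy
    t[3] += 1

# Average the two 0-5 scores so teams can be compared across matches.
summary = {
    team: (succ, rank_sum / n, deploy_sum / n)
    for team, (succ, rank_sum, deploy_sum, n) in totals.items()
}
```

Keeping the individual match rows around, as the post suggests, also lets you scan for improvement (or breakage) over the event rather than just looking at the averages.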

Because we’re a small, fairly inexperienced team we’ve found that we work better with simpler calculations and easily learned/recorded systems. In the past we’ve barely had enough scouts to have one scout per team, so we wanted to keep it fairly simple to learn and operate.

I know a lot of older teams have more effective solutions, but I thought I’d share some of our ideas. If you have any suggestions for other things we can do manually (one computer for compiling but the rest is paper) that might work better, that would be a great help.

If you look at both options A and B together, you will get a more accurate view of how well a team’s minibot does.

Option A, sort of.

Option B is irrelevant in my opinion because it’s a relative comparison. It doesn’t do a whole lot of good to know whether it won or lost without accompanying information.

For example, if your minibot deploys and goes up in 2.25 seconds but you happened to be playing 2 robots that both go up in under 2 seconds, you have a pretty darn fast minibot, but you still finished last and the data reflects poorly on you.

If the opposite occurs and you’re playing 2 robots who have no minibots, or each take 10 sec to get to the top, yours could take 8 yet be ranked #1 for the match and on paper look better than the 2.25 sec minibot that lost out to the 2 best teams at the event.
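The distortion described above is easy to demonstrate with made-up numbers. This sketch (all times and team labels hypothetical; `None` means no minibot) ranks minibots within each match the way Option B would:

```python
# Toy illustration of why match placement (Option B) can mislead.
# Times are hypothetical seconds to the top of the pole; None = no minibot.
matches = {
    "match_1": {"A": 2.25, "B": 1.8, "C": 1.9},   # A is fast but outclassed
    "match_2": {"D": 8.0, "E": None, "F": None},  # D is slow but unopposed
}

placements = {}
for match, times in matches.items():
    finishers = sorted(t for t in times.values() if t is not None)
    for team, t in times.items():
        placements[team] = finishers.index(t) + 1 if t is not None else None

# The 2.25 s minibot (A) is recorded as 3rd place,
# while the 8.0 s minibot (D) is recorded as 1st.
```

On paper, team D’s slow minibot now looks better than team A’s fast one, which is exactly the failure mode being discussed.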

Gary makes a valid point about the robot going up after the match ends, which is why I would say A, but instead of the match timer, an actual stop watch time from when the robot begins deploying to when the minibot triggers the target. Then you don’t care about when they actually went to the tower, just how long it took them once they committed to deploying.

I like your ideas very much. In fact, I may well steal them for our own team :wink:
We’ve struggled with a small team as well…including rather stressful years when I was the ONLY scout. Last year we finally had a laptop, and two scouts: I would input my data directly to the computer, my partner would take notes on paper and then I’d input hers after each match. It worked surprisingly well.
Last year we started off ranking things [almost everything about a robot: scoring ability, endgame success, autonomous ability] on a 0-1-2 scale: 0 for non-existent or poor, 1 for fair/good, and 2 for excellent. This worked very well in theory but then we discovered that we really only cared about how many balls were scored in each match…that was it…after that, we just tallied the raw number of balls scored and pretty much scrapped everything else [and we noted if a robot died]. I’m still not sure if it’s easier for a harried scout to decide on a ranking in the 0-2 or the 0-5 ranking system.

For a small scouting team: if you’re going to do pit scouting at all (which I don’t really suggest), do it during lunch breaks or after competition if you have to; it is much more important to make sure you have a well-rested crew on match scouting. If you do have a team member watching each robot, timing each minibot with a stopwatch [cell phone, etc.] would be a great idea.

I suggest networking before competition with another team, and collaborating with them on scouting. This might mean using their scouting system, but I’d recommend just comparing your scouting data and top-ranked teams with theirs once or twice per day, especially Friday night.
Send me a PM if you want any more advice on scouting with a small team…scouting was my first love :slight_smile:

This is one of those areas that teams will need to be highly observant in scouting, and I do not think there is going to be a clear and concise way of tracking the minibots.

Our team is using both methods A and B, along with an average time up the pole.

We went with option B. An exact time would be nice, but we just don’t have the manpower to time it.

Our scouts did a nice job with 0-3 rankings: 0 for no minibot or failed deploy, 1 for slow, 2 for medium, and 3 for fast. No stopwatches were used; we only had two scouts and they collaborated on the definition of “medium”. I think they ended up giving a 2 to any minibot in the 2.5 to 5 second range. They also included some subjective notes about whether a team was improving over the course of the event.
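A binning rule like the one those scouts converged on might look like the sketch below. Only the 2.5-5 second “medium” band is stated in the post; the other cutoffs are my guess at a consistent rule:

```python
# Reduce a minibot's time up the pole to the 0-3 scale described above.
# Only the 2.5-5 s "medium" band is from the post; other cutoffs are guesses.
def minibot_score(time_up_pole):
    """Return a 0-3 score given seconds to the top, or None for a failure."""
    if time_up_pole is None:   # no minibot, or failed deploy
        return 0
    if time_up_pole < 2.5:     # fast
        return 3
    if time_up_pole <= 5.0:    # medium
        return 2
    return 1                   # slow
```

The appeal of a rule like this is that two scouts applying it will agree far more often than two scouts asked to eyeball “fast” versus “medium” from the stands.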

Great job by our scouts by the way. We’re going to work on actually earning an alliance captain or 2nd spot in Troy so all your useful data actually… gets used!

It is also one of those things where the deployment system becomes just as important as the minibot itself.

The real trick is to get the fastest minibot and the fastest deployment on the same robot. How do you track this? Good question. A lot of observation.