Robot's Potential

  1. How many Mb/s do you use?
  2. Which alliance color do you like more?
  3. The classic: how many wheels do you have?
  4. What are your thoughts on RP?
  5. Can you start from HAB3?

/s

2 Likes

One of the rookie teams at CPR made information sheets available in their pit. One of the claimed abilities was to start from HAB3. They struck off the pick list any team that took it at face value.

2 Likes

Some team asked us if we could score balls at any rocket level, then asked if we could score disks at any rocket level, and then proceeded to ask if we could score cargo at any rocket level.

1 Like

Getting at the heart of your question: our primary criterion for judging the capabilities of a robot is demonstrated performance on the field. It’s tempting to simplify this to an average performance metric such as “average number of cycles”. Building a pick list based strictly on average performance over all previous matches, though, isn’t always the best idea. What you truly care about when picking partners is what they will be able to do as a member of your alliance in the upcoming elimination matches. Basing your pick list on average performance relies on the faulty assumption that a team’s future performance correlates with their historical average performance.

In a very general sense, average performance can be used to compare teams with very different capabilities. It’s probably a good bet that a team averaging 6 cycles per match is a better pick than one averaging 2. But this ignores the very real fact that performance changes over time, as teams get things working and/or things break. I’ll pick on ourselves at our previous competition for a couple of examples.

  1. For most of the qualification matches, we quickly climbed to L3. Then we had a failure that compromised our climber, and we stopped climbing for the last couple of matches. Ranking us on our average climbing performance would rank us higher than our actual expected performance in the upcoming elimination matches.
  2. For most of the competition, we were picking and placing game pieces manually, because our vision system wasn’t working. But in our last couple of qualification matches we got our vision system working, and our number of cycles doubled. Ranking us on our average number of cycles would rank us lower than our actual performance during the elimination matches.

So, with average metrics so obviously flawed, why do teams use them? Because it’s easy. You follow a defined process: record robot performance metrics, enter the data into a spreadsheet, do some math, and build a pick list from data that is only marginally relevant.

So, is there a better way? Or at least a refinement to the system that will give you better insight into robot performance during the eliminations? Probably, yes.

The first thing we’ve experimented with is paying attention to maximum robot performance rather than average robot performance. This was especially important last year, when teams played below their true capabilities in individual matches because they only needed to be quicker to win. I don’t have data to back it up, but I’ve noticed that we tend to play our elimination matches at a level closer to our best qualification match, rather than at our statistical average. It only makes sense to apply that observation to other teams, too.
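
The difference between ranking on averages and ranking on best demonstrated performance can be sketched in a few lines. The team numbers and cycle counts below are hypothetical, not real scouting data:

```python
# Rank teams by average cycles vs. by their best single match.
# Team "1234" improves late in quals; team "5678" is steady.
matches = {
    "1234": [2, 3, 2, 6, 7],
    "5678": [5, 5, 4, 5, 4],
}

def average_cycles(scores):
    return sum(scores) / len(scores)

def max_cycles(scores):
    return max(scores)

by_average = sorted(matches, key=lambda t: average_cycles(matches[t]), reverse=True)
by_max = sorted(matches, key=lambda t: max_cycles(matches[t]), reverse=True)
print(by_average)  # ['5678', '1234'] — the steady team wins on average
print(by_max)      # ['1234', '5678'] — the improving team wins on best match
```

The two rankings disagree exactly when a team’s performance has been changing over time, which is the case the post argues the average hides.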

But, finally, we come to the OP’s original inquiry. You need to gather intelligence, as late as possible, about how a team’s performance in the elimination matches might differ from demonstrated performance on the field. These might be specific questions, like asking the L3 climbers, “Can you also do an L2 climb?” But, more generally, you need to find out how their performance has changed over time. Are they doing anything better now than before? Is anything broken now that wasn’t before? It’s the answers to these questions that allow you to identify “dark horses” in the ranked pick list. Without these answers, you are relying on quantitative data which only roughly predicts future performance.

Pit scouting, by and large, is entirely a waste of time.

My team competed this past weekend, and the only piece of data I was at all interested in gathering from the pits was which teams had built a mechanism theoretically capable of climbing HAB Lv3.

From there, match data confirmed whether the system worked. But I just wanted to know the percentage of teams who built one that actually got it working (it was 50% btw).

If we make it to the state championship, we will begin collecting data regarding which teams have mecanum/omni-wheel drive trains (which teams are susceptible to defensive robots). At the district level, most defensive play isn’t good enough for it to matter yet. But at the championship level, it absolutely will matter.

All other data is better collected from the field.

1 Like

We rarely ask questions. We do take pictures. We’ve found a team’s opinion of their own robot to be highly biased and very suspect.

Our data comes from performance-based metrics, like the number of hatches scored, the number of cargo scored, etc. For instance, I can tell you our average cycle time over the entire Gibraltor event was around 17 seconds (fairly pathetic), with our best for one entire match coming in at around 11 seconds. We’re shooting to improve our average by 20% by reviewing those matches and determining where we lost time in each cycle. It’s shocking how much cycle time you can reduce just by fixing bad driving habits.
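
As a back-of-envelope check of that goal (the 17 s average and 11 s best are from the post; the rest is just arithmetic):

```python
# Arithmetic behind the stated 20% cycle-time improvement target.
average_cycle = 17.0                  # seconds, event-wide average
best_cycle = 11.0                     # seconds, best full-match average
target = average_cycle * (1 - 0.20)   # a 20% improvement on the average
print(f"target average: {target} s")  # 13.6 s — still above the demonstrated 11 s best
```

The target sits comfortably above the team’s demonstrated best, which is what makes it a realistic driving-practice goal rather than a hardware change.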

1 Like

I feel like it would be good to go around with someone who understands robot design. Many teams send a non-technical member around with a list of questions; the answers go in one ear and out the other, and what gets written down is useless. Pit scouting should be used along with Thursday scouting to form a who-to-watch list.

This list should be compiled with some categorization like below:

  • Good contender, but most likely Top 8
  • Possible 2nd tier pick
  • Possible 3rd tier pick
  • Works with specific strategy
  • Historically Good Team
  • On the bubble
  • Would be cool if it works

I remember the one year my team had enough people to scout. The scouts were wasting time watching every match and attempting to take down metrics on every robot. This is just a waste of bandwidth.

A. I find pit scouting to be essentially useless.

B. My team was asked multiple times what language we use to program our robot. Answers given included C++, French, and Bulgarian.

C. We’ve run reversible bumpers and bumpers with a covering skirt, and even in years where we were playing heavy defense never had a problem with them coming off or switching colors. Done well, they’re every bit as secure as hard-mounted bumpers of either color.

Pit scouting isn’t used for much other than pictures. Sometimes we will ask weight and a couple other questions depending on the year.

Match play is the primary metric used, but sometimes we pick a robot with the intent to play them in a way they weren’t playing in the qualification rounds. In these cases, we do rely a lot on intuition, experience and gut feel.

Winning elims matches is a learned skill as well, so we will also take into account if a team has members who are particularly good at that skill.

Reversible bumpers are nice (until a hatch panel gets stuck to them). We have made them every year since the beginning, haven’t had them flip in the last 3 years, and have been in the top 8 twice. I think that well-made reversible bumpers are better than badly made single-color ones.

Our team finds a lot of value in pit scouting. Pictures of robots, quality of robot construction, opportunity for quick and dirty improvements.

Different games require us to ask different questions. We try not to ask the standard annoying questions like how many wheels there are on the robot. Usually the questions are tailored towards what we are looking for in a partner. In 2018 we asked teams to measure the distance between their wheels and the clearance under their robot to see if they could buddy climb on us. This year I don’t expect us to ask very many robot specific questions.

This is actually a valid question, especially for teams whose autos aren’t so hot. (Not so much this year.)

A. IMHO it’s useful to the kids who are scouting because it helps them get to know each robot as well as the team, which gives them a connection during match scouting that helps them remember the robot. In strategy and pick-list discussions, they can talk about the quality of the robot and what they noticed about the team in general that might inform its ability to handle problems, e.g., “the students sat back and let the mentor attach pneumatic hoses and, frankly, didn’t seem to understand how to work on the robot at all.” Finally, it helps rookie team members become familiar with the different components of the game and with how other teams’ robots solve the engineering problems. There’s a lot of value in these discussions beyond the exaggerations they hear from the pit they are talking to.

B. That’s good to know that your team is so forthcoming and respectful of the pit scouting process. It’s helpful to other teams to be aware of that so they can take what 1551 says with a grain of salt. I suspect that if you really think it through, you understand why this is being asked. If you can’t imagine why, well, I guess those teams who do know why may have a competitive advantage.

4 Likes

The best way to determine a robot’s potential is to look at its current performance. Back in 2015, our lead scout could predict the matches in which teams would drop a stack, and it played into our picking. We were in the Division semifinals at Worlds, and we were down on points average. With one match left, the leading alliance just had to put up their usual number of points and they would advance to the finals. I thought our season was over. Then the lead scout came to me and said we were going to make the finals, because the data said team XXXX was set to drop a stack in that match. Team XXXX dropped the stack, and we advanced to the finals.

The moral of the story is that data, used correctly, can predict a great deal and tell you what is very likely to happen. Top teams will always go off the data, not potential. One of the only pick-on-potential decisions I have seen succeed was 1678 picking 1671 back in 2015 at Worlds, and that selection was unique: they were next to each other in the pits, and 1671 had proven themselves at regionals.

I think that example might be more an outlier rather than a rule. In general the sample size of matches you have just isn’t enough to make good inferences about what teams will do with any decent confidence level. When we create our picklist we mostly focus on general trends. If a team is improving each match I would be much more inclined to put them higher than a team that has a slightly higher average but has been static or inconsistent with their scoring. A lot of this just comes down to if you can get good data and that takes enthusiastic scouters which is a whole different issue.
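
The “trend over average” idea above can be sketched with a simple least-squares slope over each team’s per-match scores; the numbers here are hypothetical:

```python
# Fit a least-squares slope to per-match scores: a positive slope
# means the team is improving, regardless of its raw average.
def slope(scores):
    n = len(scores)
    mean_x = (n - 1) / 2                 # match indices are 0..n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

improving = [3, 4, 5, 6, 7]   # average 5.0, clearly trending up
static = [6, 5, 6, 5, 6]      # average 5.6, flat
print(slope(improving))  # 1.0 — gaining one cycle per match
print(slope(static))     # ~0.0 — no trend despite the higher average
```

Sorting on slope (or on a weighted blend of slope and average) is one way to push an improving team above a static one with a slightly better mean.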

Pit scouting is all about the networking. It lets you get a feel for other teams and their style. It is also an opportunity to look for complementary teams that you would like to work with.

It allows the mechanically minded to check out robot design and construction, both to see how well built the robot is and to look at alternate ways of solving the same problem.

Same for software. Asking a bunch of canned questions is fairly pointless. Talking about how you control a particular mechanism or what sort of vision your auto uses is not.

I don’t think that pit scouting and talking to teams is a replacement for metric based match scouting. However, it is a useful addition to the hard data.

Beyond that, wandering around the pits looking at robots and talking to teams is one of the things I enjoy most about events.

2 Likes

If your Pit Scouting is asking questions about capabilities, then you’re doing it wrong. If you ask a team how many hatches they can do, they’ll say 12 even though they’ll actually do 2. It’s not malicious, it’s just that on day 1, the main engineer says they can do 3 because they’re optimistic, and then some rookie hears it and decides to say 4 because they don’t know better, and then a marketing kid hears that and tells a judge that their record in practice is 5. It ends up like a game of telephone. Leave that type of information to the scouts watching matches.

Instead, look at their robot and ask about mechanisms and designs. Oh, the bearing blocks on their elevator look a little sketchy; let’s keep an eye on their reliability. On the other hand, they’ve got a 6-NEO drive train with high and low gear, which means that if their elevator stops working, they could still be good on defense.

So, when I wander around the pits doing this, I’m really pit scouting? :slight_smile:

It can definitely be useful to identify robots that haven’t reached their true potential. If you are the number 1 seed your choice is generally pretty simple and you pick the best robot available. If you are a lower seeded alliance captain though, you should probably consider robots with unrealized potential. Sure you could pick the best robot available but what if that robot is already performing at its near max? Instead you can pick a robot that has been struggling with consistency but when they are on their game, they are capable of helping you beat a stronger alliance. It’s a risk either way so I wouldn’t even say it’s a risky strategy.

As for how you can recognize robots with unrealized potential, one way is to look at their scoring output across all matches and see what their max is. A team struggling with consistency might have a few scattered matches where their scoring output is actually pretty high. Another way is to look at the robot itself and try to recognize whether a certain subsystem is hindering performance. For example, if a team’s hatch mechanism is holding them back, ask yourself what their scoring output could be if they did only cargo.
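
That “what if they played cargo-only” question can be sketched as a quick projection. All the figures below are assumed for illustration, not real match data:

```python
# Project a team's output if a failing hatch mechanism were abandoned
# and its cycle time reallocated to cargo.
hatch_cycles = 2           # hatches scored per match
cargo_cycles = 3           # cargo scored per match
hatch_cycle_time = 25.0    # s per hatch cycle, slowed by the balky mechanism
cargo_cycle_time = 15.0    # s per cargo cycle
freed_time = hatch_cycles * hatch_cycle_time       # time recovered per match
extra_cargo = int(freed_time // cargo_cycle_time)  # whole extra cargo cycles
projected = cargo_cycles + extra_cargo
print(projected)  # 6 cargo cycles in a cargo-only match
```

Doubling a team’s cargo output on paper like this is exactly the kind of “dark horse” signal the earlier posts describe: it won’t show up in their averages, only in how they could be used on your alliance.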

Some have called this spying :wink:

1 Like

Absolutely! And that is what I tell my team… though as a design & fab mentor, if I am busy with issues on our own robot at an event, something has gone horribly, horribly wrong.