2018 scouting is over (officially anyway)


#21

At all 3 of our competitions (1 state, 2 district), we had poor seating :frowning: One of them was hit by a snowstorm too. Thanks, Indiana


#22

My team had worse scouting data than we normally do. What caused this? Many factors, but I think a main one was that the game was the same thing every match (put a cube on the switch/scale, maybe the exchange). Last year, you might have seen a gear bot one match, a shooter the next, a defensive robot after that, and so on. This year, you either had a tall bot for the scale or a small bot for the switch. This lack of diversity in robot design and gameplay contributes to boredom, which causes scouts to not pay close enough attention and return worse data.

We have two groups of scouts on the team: subjective scouts, who record the paths of the robots while scouting debatable items (poor climber, fast intake, etc.) and are the ones who make our pick list, and objective scouts, who record concrete numbers (x cubes on the scale, climbed, etc.). The subjective scouts are the better scouts; they have to be, as their data is more important. This year, we had to make subjective scouts do objective scouting more than once because objective scouts, who are oftentimes underclassmen and therefore usually not as engaged by robotics (especially a game like this), were returning bad data.

The game caused us to struggle in scouting, and we paid for it in some matches. However, we can’t call out the GDC for our poor data; it isn’t their job to make a scoutable (is that a word?) game, it’s their job to make a playable one.


#23

This game was not like previous years because you couldn’t put a real number on anything. Most of our data reflected how an alliance decided to play its strategy more than how an individual team performed (though that does matter to an extent). At our first competition, we were alliance captains, and we picked one team that could get the switch in autonomous and attack the opponents’ switch, and another team that could attack the scale. While the numbers were part of the reason we chose those two teams, in the end we wanted to pick teams that would fit the strategy we wanted to play.


#24

We found the opposite at our events. Last year, most teams just ran gears repeatedly, making the most difficult thing determining when they stopped because they had reached their goal (usually 3 rotors) and went on to do another job: defense or shooting. This year, teams were doing all sorts of things in different matches depending on their partners and opponents. Good scale bots were often also good at the switch and vault but never really had a chance to show it in quals, since they had adequate partners to fulfill those roles. Teams would go from placing 7 cubes on the scale one match to only placing 1 in another because the opposing alliance didn’t have any scale bots. This skewed averages and made them much less useful than in recent years.

Instead, we just looked at graphs of everything a team did in a match so we could see how they spent their time. If they had an odd match where their scale cube count suddenly dropped, for example, we’d expect to see that they spent that match placing cubes in the switch or vault. We also used this, along with whether they owned each switch and the scale, to tell if they were strategically adaptable to the situation. If a team put 5 more cubes on the scale than needed to own it but let their switch get taken over, they probably weren’t very capable of switching strategies in-match when needed.
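As a rough illustration of that kind of per-match graph, here is a minimal Python sketch; the CSV layout and column names (team, match, scale_cubes, switch_cubes, vault_cubes) are assumptions for the example, not any team's actual schema.

```python
# Minimal sketch of a per-match "where did their cubes go" graph.
# Assumes a CSV with hypothetical columns: team, match, scale_cubes,
# switch_cubes, vault_cubes -- not a real team's schema.
import pandas as pd
import matplotlib.pyplot as plt

def plot_team_breakdown(csv_path: str, team: int) -> None:
    df = pd.read_csv(csv_path)
    team_df = df[df["team"] == team].sort_values("match")
    # Stacked bars make a sudden drop in scale cubes easy to explain:
    # the switch/vault segments grow in the same match.
    ax = team_df.plot(
        x="match",
        y=["scale_cubes", "switch_cubes", "vault_cubes"],
        kind="bar",
        stacked=True,
    )
    ax.set_ylabel("cubes placed")
    ax.set_title(f"Team {team}: cube placement by match")
    plt.show()

plot_team_breakdown("scouting_data.csv", 1234)
```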


#25

Ooh, I could see how scouting in a snowstorm would really make things difficult. Your tablets get wet so you can’t see what the numbers say, your pencil tallies smear, and the paper starts tearing if the pencil is too sharp. Then the sheets stick together in the accordion binder. What a hassle.


#26

I found scouting when you were the first seed to be simple enough. In our district we just prioritized autonomous actions for DCMP to ensure we got a good start in playoff matches. We had a hard time in playoffs at our last district event because of weaker autonomous modes. After that we did a mostly action-based ranking… which for the most part had positive results.

I will say we were on the flip side, having to be scouted… and having to adapt strategies that sometimes ended with our team not doing the scale in qualification matches at worlds. I wonder how much that affected where our team got picked in the Turing playoff draft. Those who watched the Turing playoffs know that we, as a second pick, were holding our own against or even beating alliance captains at the scale. I wonder if scouts in Turing were mostly just doing categorical action-count scouting. I would imagine so, considering that’s probably what we would have been doing.


#27

Okay everyone,
Great response for ~24 hours of the poll being up. There have been some great points brought up.

24hr poll results:
https://i.imgur.com/x6Tfgj2.png

A couple of things I have noticed: the first option (time-based scoring) was a bit loaded; I fully expected that to be the most common response. As for the other options:

  • I was surprised by the number of votes cast for field visibility being an issue. Is this more a result of the venues attended? Do districts have better visibility than large regional events?
  • I suspected there would be a fair number of responses from people collecting the wrong amount of data. Perhaps we need some sort of white paper on figuring out what you should be scouting and how in-depth certain metrics need to go. This is of course team-specific, but some general guidelines should help with the “how to scout 20XX’s game” threads that appear in this sub-forum around week 2.
  • Yes, there were a lot of similar robots in 2018. Scout eyesight is sometimes a problem; you may need to address this in your individual scouting plans.
  • I am surprised how many of you took my quasi-nonsensical ramblings on the scouting sub-forum as absolute truths and hold me responsible for not having a world championship blue banner hanging in the shop. I guess I am more influential than I thought.
  • As for “What’s a ‘Scouting’”… we need to have a 1-on-1 talk, geez.

What made PLANNING a scouting system/method this year difficult? How do you approach scouting terminology?


#28

I guess I should have been more specific. As the season went on last year, you saw a lot more variance develop. At early competitions it was all about the rotors. Then teams started shooting more rather than being straight gear runners, and you eventually hit matches like the 254 and 2767 qualification match (not elims, but quals!) where together they put up over 100 kPa. This year it was just straight-up the same game the whole time, except the stacks got somewhat taller and the autons got more successful and plentiful as the season went on.


#29

The hardest part for us was that the correct strategic decision was not always the thing that best showcased what we were looking for.

The best scale robot at the event might average 4 cubes on the scale, 3 on the switch, and 3 in the exchange, whereas a mediocre scale robot might put 6 on the scale every match and look like it was better at the scale.
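A tiny sketch of that averages trap, assuming for illustration that the mediocre robot places nothing besides its scale cubes (the post only gives its scale number):

```python
# Tiny illustration of the averages trap: the "best" robot's numbers
# come from the post above; the mediocre robot doing nothing else is
# an assumption made for the example.
robots = {
    "best_scale_bot": {"scale": 4, "switch": 3, "exchange": 3},
    "mediocre_scale_bot": {"scale": 6, "switch": 0, "exchange": 0},
}
by_scale_avg = sorted(robots, key=lambda r: -robots[r]["scale"])
by_total = sorted(robots, key=lambda r: -sum(robots[r].values()))
print(by_scale_avg)  # ['mediocre_scale_bot', 'best_scale_bot'] -- misleading
print(by_total)      # ['best_scale_bot', 'mediocre_scale_bot'] (10 vs 6 cubes)
```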

It made us put a much bigger focus on qualitative scouting of a team’s ability to do certain tasks. Other teams failing to do so did mean we got some great picks though :slight_smile:


#30

You did give me the option…so naturally I couldn’t resist.

In all seriousness though, this year was a tough year for scouting. We made adjustments throughout the year that improved both the scouting and pick list creation processes. Was it “perfect”? Was it the “best” way to do it? Probably not…but it did result in us playing with some great robots and fun teams in Archimedes.


#31

We were pretty happy with our scouting system this year.

We switched from a native iPad app to an offline web app.
The iPad app would export the data as a CSV, and we had a Python script that would combine the CSVs into one and then use that in Tableau.
This year we instead had the app export a QR code that was then saved to the photo roll.
This was because getting pictures off of an iPad is much easier than getting CSVs off of it, plus there wasn’t really a way to save a CSV from the web page.
We then had a Java app that would combine the QR codes into one CSV, excluding duplicates and doing some additional data validation.
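Their combiner was a Java app; purely as an illustration of the merge-and-dedup step, here is a minimal Python sketch that assumes each QR code decodes to a single CSV row whose first two fields are team and match (a hypothetical schema, not theirs):

```python
# Rough illustration of the merge step only (the actual app was Java).
# Assumes each QR payload is one CSV row and the first two fields are
# team and match -- hypothetical; the real schema isn't given.
import csv

def merge_payloads(payloads: list[str], out_path: str) -> None:
    seen = set()
    rows = []
    for payload in payloads:
        fields = payload.strip().split(",")
        # Basic validation: reject malformed rows or non-numeric
        # team/match values instead of poisoning the dataset.
        if len(fields) < 3 or not (fields[0].isdigit() and fields[1].isdigit()):
            print(f"rejected: {payload!r}")
            continue
        key = (fields[0], fields[1])  # one record per team per match
        if key in seen:
            continue  # duplicate scan of the same QR code
        seen.add(key)
        rows.append(fields)
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```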

Overall it worked out well, there are just a few tweaks that I would like to make.
I’d like to improve the efficiency of offloading the data; we use a laptop from 2008 to merge the data, and its speed definitely caused some unneeded stress this year.
I’d also like to enforce more of the data validation in the app itself so it can be corrected during data creation instead of afterward.

The only data we didn’t capture that I wish we had is whether a scale/switch autonomous was attempted; we only captured whether it succeeded.
It would’ve been nice for aggregation to have that additional piece of data so we could get a better idea of true autonomous %.
(We resorted to assuming that if they were lined up in the middle they were doing a switch autonomous)
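As a sketch of that aggregation, using the center-lineup workaround described above (the field names here are assumptions, not their app's):

```python
# Hypothetical sketch of the "true autonomous %" aggregation: treat a
# center starting position as an attempted switch auto, per the
# workaround above. "start_position" and "switch_auto_success" are
# assumed field names.
def switch_auto_rate(records: list[dict]) -> float:
    attempts = [r for r in records if r["start_position"] == "center"]
    if not attempts:
        return 0.0
    successes = sum(1 for r in attempts if r["switch_auto_success"])
    return successes / len(attempts)
```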

I’d like to figure out a better way to capture time-based actions.
This year, with cubes everywhere, it wasn’t as useful as last year, so we ended up using it primarily as a second sort or tiebreaker.
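For instance, a time-based metric can be folded into a compound sort key so it only breaks ties; the field names in this small sketch are made up:

```python
# Small sketch: primary sort on scale average, time-based metric only
# as a tiebreaker. "avg_scale" and "avg_cycle_s" are assumed names.
teams = [
    {"team": 111, "avg_scale": 5.2, "avg_cycle_s": 11.0},
    {"team": 222, "avg_scale": 5.2, "avg_cycle_s": 9.5},
    {"team": 333, "avg_scale": 6.1, "avg_cycle_s": 14.0},
]
# Higher scale average first; faster cycles break ties.
ranked = sorted(teams, key=lambda t: (-t["avg_scale"], t["avg_cycle_s"]))
print([t["team"] for t in ranked])  # [333, 222, 111]
```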

For qualification matches our pre-match strategy sheet was really useful, but when it came to elims it was less useful than in previous years.
I think this was because of the difference in play style between qualification matches and elimination matches.
Defense was almost never worth it in qualification matches, but proved crucial in elimination matches.
It might be nice to brainstorm an “elimination” strategy sheet with a slightly different grouping of metrics, allowing better high-level strategy analysis without all of the individual-robot data we need for qualification matches.


#32

This is by far one of the most interesting database designs I have heard of. I am extremely glad that it worked for you!