The Viper scouting app https://viperscout.com/ is entering its 6th season. In this thread, we will document our development process for the Reefscape version, take feedback, and share releases.
Viper has the following features:
Multiple data upload options: wireless, wired, and QR code
Built-in data analysis with graphs
Alliance selection app
Match planning whiteboard with stats
Pit scouting with robot pictures
API integration for event schedules and scouting accuracy metrics
If the Reefscape rules are compatible, we are planning to use the new techniques that we premiered in the FTC Into the Deep scouting interface:
Remove the down arrows that allow scouters to decrease the stats, leaving just the up arrows to increase them. Have an “Undo” button for when scouters make mistakes.
Disable “picking” and “placing” options based on what the scouter records the robot as currently holding. For example, if the robot isn’t controlling any objects, the “placing” options would be disabled. If the robot is already holding an object and isn’t allowed to pick up another, the “picking” options would be disabled.
Based on what we already know, disabling the pick and place options should be possible (a rough sketch of that gating logic follows below). Removing the down arrows would only be possible if robots can’t de-score placed items.
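For what it’s worth, here is a rough TypeScript sketch of that gating and undo logic. Everything here (ScoutState, record, and so on) is made up for illustration; it isn’t Viper’s actual code, just one way the held-piece state could drive which buttons are enabled:

```typescript
// Illustrative only: names and types are hypothetical, not Viper's actual code.
type GamePiece = "coral" | "algae";

interface ScoutState {
  holding: GamePiece | null;       // what the scouter says the robot controls
  counts: Record<string, number>;  // running stat counters (up arrows only)
  history: ScoutState[];           // snapshots consumed by the Undo button
}

// Picking is only offered while the robot holds nothing;
// placing is only offered while it holds something.
const pickEnabled = (s: ScoutState): boolean => s.holding === null;
const placeEnabled = (s: ScoutState): boolean => s.holding !== null;

// Every recorded action snapshots the previous state, so a single Undo
// button can replace the old down arrows.
function record(s: ScoutState, apply: (next: ScoutState) => void): ScoutState {
  const next: ScoutState = {
    holding: s.holding,
    counts: { ...s.counts },
    history: [...s.history, s],
  };
  apply(next);
  return next;
}

function undo(s: ScoutState): ScoutState {
  return s.history.length > 0 ? s.history[s.history.length - 1] : s;
}

// Example: the robot picks up algae, then places it, then the scouter undoes a mistake.
let state: ScoutState = { holding: null, counts: {}, history: [] };
state = record(state, (n) => { n.holding = "algae"; });                // pick
state = record(state, (n) => {
  n.holding = null;
  n.counts["algaeProcessor"] = (n.counts["algaeProcessor"] ?? 0) + 1;  // place
});
state = undo(state);                                                   // roll back the last action
```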
Without a database, it stores data in CSV files in the www/data directory. This works fine if you are hosting it on a single computer for a single team. In fact, it has a feature that the database doesn’t: it versions the data, shows you history diffs, and allows you to roll back if needed.
A MySQL database is needed only if you are either using a load balancer or hosting multiple teams out of the same app. I only expect those things to be the case for cloud hosting.
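For anyone curious, here is a generic sketch of what file-based versioning can look like. This is not how Viper actually implements it (the history directory and function names are invented); it just shows that timestamped snapshots of the CSVs are enough to support diffs and rollbacks without a database:

```typescript
// Generic illustration of versioned CSV storage; not Viper's actual implementation.
import * as fs from "fs";
import * as path from "path";

const DATA_DIR = "www/data";                        // per the post
const HISTORY_DIR = path.join(DATA_DIR, "history"); // hypothetical location

function saveCsv(name: string, csv: string): void {
  fs.mkdirSync(HISTORY_DIR, { recursive: true });
  const current = path.join(DATA_DIR, `${name}.csv`);
  // Keep a timestamped copy of the existing file before overwriting it,
  // so history can be diffed and rolled back later.
  if (fs.existsSync(current)) {
    const stamp = new Date().toISOString().replace(/[:.]/g, "-");
    fs.copyFileSync(current, path.join(HISTORY_DIR, `${name}-${stamp}.csv`));
  }
  fs.writeFileSync(current, csv);
}

function rollback(name: string, snapshotFile: string): void {
  // Restore a previous snapshot as the current file.
  fs.copyFileSync(path.join(HISTORY_DIR, snapshotFile),
                  path.join(DATA_DIR, `${name}.csv`));
}
```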
Our tentative development schedule for the season is:
Kickoff until the Wednesday before week zero: check all code into the viper reefscape branch. The hosted servers, including the demo, will get updated with the branch code.
Wednesdays before weeks 0 through 6: merge into main, create a downloadable release on Github, update the hosted servers to the latest main.
I’m proposing that if you are scouting a bot that scores in the processor, then you are also responsible for scouting whether the OPPONENT human player makes the shot into the net. If it is done that way:
A team’s score contribution from using the processor will be 6 if the opponent misses, but only 2 if the opponent makes the shot.
If we also record the opponent human player’s team number, we can calculate the human player shooting accuracy for each team.
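To make that arithmetic concrete, here is a small sketch (function and field names are hypothetical) of how the per-cycle contribution and the per-team human player accuracy could be computed from the scouted data:

```typescript
// Hypothetical sketch of the math described above; names are illustrative.

// Net contribution of one processor cycle to the scouted team: 6 points if the
// opposing human player misses, only 2 if they make the net shot afterwards.
function processorContribution(opponentMadeNetShot: boolean): number {
  return opponentMadeNetShot ? 2 : 6;
}

// If the opposing human player's team number is also recorded, per-team
// shooting accuracy is just makes / attempts over all scouted cycles.
interface HpShot {
  hpTeam: number;   // opposing human player's team number
  made: boolean;    // did the net shot go in?
}

function hpAccuracy(shots: HpShot[]): Map<number, number> {
  const totals = new Map<number, { made: number; attempts: number }>();
  for (const s of shots) {
    const t = totals.get(s.hpTeam) ?? { made: 0, attempts: 0 };
    t.attempts += 1;
    if (s.made) t.made += 1;
    totals.set(s.hpTeam, t);
  }
  return new Map(
    [...totals].map(([team, t]): [number, number] => [team, t.made / t.attempts])
  );
}
```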
Auto resources
We’ll have to decide if it’s worth it to record which resources bots attempt to pick up during auto. If we record that info, we could use it when match planning to check auto compatibility within our alliance. I’ve included that in the mockup, but maybe it is worth making it a setting for whether or not your team wants to scout that data.
Knocking algae off the reef
There is an open question of how we want to scout teams that knock algae off the reef without taking possession of it. Do we want to scout that as a collect and then a drop, or does it need special treatment?
After Game Questions
I haven’t thought too hard about what additional data to collect after each match. Please speak up if you have suggestions.
You suspect that putting coral on certain sides of the reef will be harder because of visibility from the driver station? Are you thinking you might want to pick an alliance partner that can place on a particular side of the reef if you can’t place there well?
Recording the side of the reef when placing would significantly clutter the scouting interface. I’d only want to do that if the data were really important.
After 2022 and 2023, I have given up collecting data on the specifics of “where” when there are a lot of identical scoring locations (beyond the absolute basics of high, mid, low, etc.). Collecting that information just takes way too much time and reduces accuracy.
If a team is way better at scoring in one particular area as opposed to others: that is a situation for notes and tags.
The UI should be as decluttered and easy as possible (especially for all the common things you are collecting). Scouts’ eyes belong on the field, not on a cluttered, hard-to-read data input screen.
In theory this could also be calculated from component EPAs after a few matches, like high notes last year, or at least sanity checked against the cEPA.
It is a difficult situation though because this relies on the opponent providing game pieces for a HP to score… so on second thought that cEPA metric may not be the best move due to small and/or fluctuating sample sizes and limited opportunities.
EPA isn’t going to tell you which team provided the human player on the processor. Unlike high notes, robots can score directly into the net this year, so there isn’t going to be a metric from the API that is unique to the human player.
When I made my proposal above, I was thinking human players would generally shoot into the net as soon as they had algae. However, I’m realizing the better strategy is to hold onto all the algae until near the end of the game to let the processor fill up. I’m not sure how best to scout the human player at this point.
Yes, but which scouter is responsible for filling that in? You want the robot scouters to focus on the robots and not have to split their attention and try to watch the human player too.
I was thinking that if the human player were to throw the algae right after it was put in the processor, the scouter for the robot that put it in the processor could switch to watching the human player for a few seconds. But if that isn’t going to happen quickly all the time, that isn’t a good plan.
Well, we do know a few things that could sort of help.
We can fill in what team provides the HP pre-match. At least we know “who” is responsible for the scores/misses/timing strategy.
It does pose an interesting problem… my gut feel is to track who the HP is (which can be done pre-match), then what the result was post-match (which we can pull from the API; that makes misses problematic…). However, we can make an educated guess on misses based on the number the other alliance scored in the processor vs the number in the barge, all off of the API. Sure, this isn’t a true calculation of misses, but un-tossed algae may as well be misses anyway.
So in other words, if we track who is at the HP station, how many were scored in the processor (API data), and how many were in the barge at the end (API data), the difference is misses + unattempted. Since misses and unattempted produce the same score differential (+6 to the opposing alliance), we should be fine (although a miss will reintroduce the algae, so it isn’t perfect). We would maybe also need to track the HP strategy, which isn’t optimal.
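Here is a rough sketch of that estimate (the record fields are invented placeholders, not actual API fields), with the same caveats: a miss reintroduces the algae, and robots can also score into the net directly, so it is only an approximation:

```typescript
// Hypothetical sketch of the misses + unattempted estimate; field names are
// invented, not actual API fields.
interface HpMatchRecord {
  hpTeam: number;               // recorded pre-match: who staffed the HP station
  processedByOpponents: number; // algae the other alliance scored in the processor (API)
  inNetAtEnd: number;           // algae in this alliance's net/barge at the end (API)
}

// Whatever reached the human player from the processor but never ended up in
// the net counts as missed or never attempted -- the same score differential
// either way. It is only an estimate: a miss reintroduces the algae, and
// robots can also score into the net directly.
function missedOrUnattempted(r: HpMatchRecord): number {
  return Math.max(0, r.processedByOpponents - r.inNetAtEnd);
}
```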
I am trying to think of an analog here…
What did we do in 2014? I want to say the HP didn’t really matter so much in that case, and it could be captured with notes.
The real question there is whether or not you can single-handedly get the coopertition point by scoring in both processors. To me, the rules aren’t clear enough right now to answer that question with certainty. If forcing coopertition is allowed, then scoring in the wrong processor will be common enough that it should be scouted.
Imagine you are playing against an opponent that you see from your scouting data typically scores in both processors to ensure coopertition. You might want to hold back on scoring in the processor until the end of the match to see if your opponent goes ahead and does it for you.
The whiteboard is going to be more challenging this season than usual. Usually a cropped top-down field render can be used without modifications. This year there are a few things that won’t show.
There is no indication of where the processor is located once you crop down to just inside the walls of the playing field.
It is hard to indicate shallow and deep cages in a top down perspective.
A top down view of the reef doesn’t show the different levels.
Putting a mark for the processor is pretty easy.
I’m not sure what to do about shallow and deep cages yet.
I started playing around in GIMP with ways to modify the reef to show the levels.
There is also the chemistry notation for towards the viewer / away from the viewer (above/below the reference plane)… It may be helpful somehow. Maybe pretend there is a cross-section plane for the field, offset from the floor at 2.5 feet? Then you could note what is above/below the reference plane.
Another option, which I am more in favor of in general I think, is to have separate inserts for the reef and climb showing the Z axis.
Anyone remember how 2008 strategy played out? What did that look like from a “planning board” perspective?