Quote:
Originally Posted by Michael Corsetto
Can the answer be both? Notice I said our "initial" filtering was based off drivetrain type aka. pit scouting. Our second filter was based on "accurate observations of how the drivetrains (and drivers) performed on the field." The primary filter made our second filter easier to perform.
Richard can talk more about how we implemented our second filter based on field performance, I don't understand half of how they do it! 
|
Right, and that is us in violent agreement; I'll step aside if folks want to discuss whether the resources consumed by their own team's pit scouting are worth the trouble.
Spending a little time and having the downstream work get a lot easier means it is worth the trouble. Spending a lot of time and only getting a minor simplification later would not be. I don't think outside observers can make useful generalizations about that for other teams; there are too many implementation details in play.
Quote:
Originally Posted by Michael Corsetto
Definitely agree, winning in FRC is a many-dimensional challenge.
|
A little more violent agreement.
Quote:
Originally Posted by RoboChair
No, because we do both. We pre-filter based on what their drivetrain is and weight it with their on field performance. It is never black and white, but many different shades on a grey scale when it comes to us comparing robots. Though some of those shades can be practically indistinguishable from black and white, they are in fact at least a little grey.
|
What you wrote here doesn't support your answer of "No".
If a team is prefiltered to the bottom of the heap because of robot implementation, and does well on the field, do you keep them ranked lower than teams that do worse than them on the field?
If a team is prefiltered to the top of the heap because of robot implementation, and does poorly on the field, do you keep them ranked higher than teams that do better than them on the field?
My belief is that as the quals draw to a close, accurately assessing on-field performance, plus making a small investment in the pits near the end of quals to find out whether a team has finally started hitting on all cylinders (they fixed a software bug, completed a mechanical change, swapped drivers, ...), swamps any investment in early pit scouting.
I think Michael wrote that pit scouting helps you a bit with deciding who gets the most scouting attention while on the field. That makes sense.
However, it sounds like you are telling me that in your method, at the end of quals, if you ranked the teams according to the on-field performance your scouts see, you would then also adjust those ranks non-trivially based on pit-scouting data. That sounds a bit odd. I can certainly see a case for it, because the number of qual matches played usually isn't enough to supply an excellent assessment of each team's abilities. But with that in mind, I think we might at least agree that as the number of qual matches increases (and, for the sake of argument, let's assume everything else is constant), the value of pit-scouting data steadily declines.
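To make that declining-value argument concrete, here is a minimal sketch (purely hypothetical; not any team's actual method, and the function name, `prior_weight` parameter, and score scale are all my own assumptions). It treats the pit-scouting impression as a prior worth a fixed number of pseudo-matches, so its influence on a team's blended rating shrinks automatically as real match data accumulates:

```python
# Hypothetical illustration: blend a pit-scouting "prior" score with
# observed on-field match scores. The prior is weighted as if it were
# `prior_weight` matches, so as real matches accumulate, on-field data
# dominates and the pit-scouting estimate fades in importance.

def blended_score(pit_prior, match_scores, prior_weight=3.0):
    """Return a rating that mixes the pit-scouting estimate with the
    average of observed match scores, weighted by match count."""
    n = len(match_scores)
    if n == 0:
        # No on-field data yet: the pit estimate is all we have.
        return pit_prior
    field_avg = sum(match_scores) / n
    return (prior_weight * pit_prior + n * field_avg) / (prior_weight + n)


# Example: a team pit-scouted at 50 that scores 80 on the field.
early = blended_score(50, [80] * 2)   # prior still pulls the rating down
late = blended_score(50, [80] * 12)   # field performance dominates
```

With 2 matches the blend is 62.0; with 12 matches it rises to 74.0, close to the on-field average of 80. This mirrors the point above: hold everything else constant and the pit-scouting prior's effect on the final ranking shrinks steadily as qual matches pile up.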
What I was saying, in support of what I think EricH was saying, is that by the end of a typical tournament I would side with him and be unlikely to let early pit-scouting data significantly alter any ranking I had created using on-the-field scouting.
If you guys do let pit-scouting data significantly affect your end-of-quals rankings, I'm surprised. And if you do, maybe that has helped you win, or maybe you have won despite any harm done by those adjustments. Get out an Ouija board to answer that one.
Regardless, congrats on the wins.
Blake
PS: In all of this I am setting aside aspects of team performance that depend on how well any two teams get along when they need to communicate/cooperate. For the sake of this discussion, let's assume everyone is equal in that regard, and in other similar characteristics.