17-11-2016, 11:22
gblake is offline
6th Gear Developer; Mentor
AKA: Blake Ross
no team (6th Gear)
Team Role: Mentor
 
Join Date: May 2006
Rookie Year: 2006
Location: Virginia
Posts: 1,933
Re: Video Review Needs to Happen Now

Quote:
Originally Posted by Ryan Dognaux View Post
What would hard data for this subject look like to you? Most implementations will only have anecdotal evidence at this point because video review probably only gets used once or twice during an event (so far.)
TL;DR: A) Ask the right question(s). B) Measure the baseline. C) Understand the relationships. D) Tweak the independent variables. E) Measure the results. F) Decide what changes/(re)allocations, if any, to implement.

- - - - - - - - - - - - - - - - - - - - - -

To prepare to answer this, rather than just shooting from the hip, I wanted to refresh my recollection of what has been said over the last few months. So I reviewed this thread and an adjacent thread, and found these posts that I and a few others wrote. There is nothing earth-shaking in them, but they supply some context.

https://www.chiefdelphi.com/forums/s...0&postcount=64
https://www.chiefdelphi.com/forums/s...0&postcount=72
https://www.chiefdelphi.com/forums/s...&postcount=182
https://www.chiefdelphi.com/forums/s...&postcount=193
https://www.chiefdelphi.com/forums/s...&postcount=207
https://www.chiefdelphi.com/forums/s...&postcount=223
https://www.chiefdelphi.com/forums/s...&postcount=265
https://www.chiefdelphi.com/forums/s...&postcount=268
https://www.chiefdelphi.com/forums/s...5&postcount=11
https://www.chiefdelphi.com/forums/s...2&postcount=13
https://www.chiefdelphi.com/forums/s...2&postcount=17
For me, the outline that follows is the way I would want to approach A) creating a solid understanding of the need (or lack thereof) for adding video to the refs' tools, and B) coming up with a first version of a video system, if developing one is warranted.

The "hard data" would be the results (measurements & statistics) produced by the experiments.

Obviously this is a back-of-the-napkin, discussion-forum-quality outline; not even PowerPoint quality yet.

The current system being discussed (FIRST) contains many things, including competition events that contain, at the least, a Playing Field & Game Pieces, the Field Staff (announcers, refs, etc.), the participating Teams, the Match/Game/Robot rules, the Audience, the Matches/Schedule, and the field Computers/Sensors/Software.

We are talking about introducing Video Replays into the FRC (and FTC ...) event part of that FIRST system.

We need to know the pertinent parts of the current baseline system's status/performance, the current system's purpose, and the sensitivity of the system's ability-to-achieve-its-purpose(s) to changes in the independent variables we are going to adjust.
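
For instance, here is a minimal sketch (Python, with entirely made-up scores and an assumed 5-point value for the disputed call type; none of this is real FRC data) of one such sensitivity question: how often would reversing a single disputed foul call have flipped a match winner?

Code:
# Hypothetical sketch: how sensitive are match winners to one class of call?
# The foul value and every score below are invented for illustration only.
FOUL_POINTS = 5  # assumed point value of the disputed call type

# (red_score, blue_score, alliance_charged) for a handful of imaginary matches;
# the alliance charged with the foul gave FOUL_POINTS to its opponent.
matches = [
    (103, 100, "red"),   # reversing the call leaves red the winner
    (98, 101, "red"),    # reversing the call flips the win from blue to red
    (120, 80, "blue"),
    (84, 80, "blue"),    # reversing the call flips the win from red to blue
]

def winner(red, blue):
    return "red" if red > blue else "blue" if blue > red else "tie"

flipped = 0
for red, blue, charged in matches:
    # Reversing the call removes the foul points from the opposing alliance.
    if charged == "red":
        new_red, new_blue = red, blue - FOUL_POINTS
    else:
        new_red, new_blue = red - FOUL_POINTS, blue
    if winner(red, blue) != winner(new_red, new_blue):
        flipped += 1

print(f"{flipped} of {len(matches)} winners change if that one call is reversed")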

Some useful metrics might be (a sketch of one possible record format for capturing these follows the list):
  • Call accuracy
  • Outcomes of current call challenges
    • Was a call changed?
    • Was the result accurate?
  • Match outcomes affected by call accuracy and by challenges
  • Event outcomes affected by call accuracy and by challenges
  • System-purpose outcomes affected by call accuracy and challenges
  • Calls (i.e. rules) that could/would/should be affected
    • Perfect video
    • Less-than-perfect video
      • Video usefulness vs equipment performance/placement
      • Equipment purchase costs vs equipment performance
      • Non-equipment-purchase costs: time, labor, maintenance, shipping, training, etc.
    • Video-alternatives (more humans, rule/game changes, anything else?)
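
To make the bookkeeping for those metrics concrete, here is a minimal sketch (Python; every field name is my invention, not anything official from FIRST) of one record per officiating call of interest, from which most of the tallies above could be computed after the fact:

Code:
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    """One row per call of interest. All field names are hypothetical."""
    event_code: str            # which event (off-season, regular season, championship)
    match_id: str
    rule: str                  # the rule the call was made under
    original_call: str         # what the ref(s) called in real time
    challenged: bool           # did a team formally question the call?
    reviewed_on_video: bool    # was video consulted?
    call_changed: bool         # was the call altered after review/challenge?
    truth_call: Optional[str]  # offline "ground truth" from the separate sensors
    outcome_flipped: Optional[bool]  # did the match winner change because of this call?

# Call accuracy is then just the fraction of records whose real-time call
# matches the offline "truth"; the other metrics are similar tallies.
records: list[CallRecord] = []
accuracy = (sum(r.original_call == r.truth_call for r in records) / len(records)
            if records else float("nan"))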

In the experiments I would want to (see the analysis sketch after this list):
  • Compare and contrast off-season events, regular-season events, and the various championships to identify how they are alike, how they are different, and what effect that has on the results collected in each type of event.
  • Compare and contrast multiple locations, times-of-day, levels of human-training, etc. as part of trying each candidate method for accomplishing the event's purposes (and sub-purposes).
    • These locations would include change-nothing "control" events.
  • Compare and contrast multiple years' (multiple games') results
  • When testing each/any alternative, determine "truth" (the baseline data) by analyzing data (not in real time) collected by a plethora of sensors (these are separate from the sensors/methods that are being tested, and the "truth" is not shared during the event).
  • In some circumstances (off-season events?), purposefully stress each alternative by having teams challenge calls in bursts and/or continuously. Do this to stress-test the alternative, not because of actual disagreements.
    • If necessary have teams create difficult-to-assess situations, so that reviewing calls requires more than a trivial glance at the video records or other evidence.
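
Once those records exist, one minimal sketch (Python again, with made-up counts) of the kind of "hard data" comparison I have in mind, between change-nothing control events and events where the refs could review video, is an ordinary two-proportion z-test on call accuracy:

Code:
from math import sqrt
from statistics import NormalDist

def two_proportion_z(correct_a, total_a, correct_b, total_b):
    """Two-sided z-test for a difference in call-accuracy proportions."""
    p_a, p_b = correct_a / total_a, correct_b / total_b
    pooled = (correct_a + correct_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Invented counts: (calls judged correct vs. the offline "truth", total calls checked)
with_video = (195, 200)  # events where refs could consult video
control = (180, 200)     # change-nothing control events

acc_video, acc_control, z, p = two_proportion_z(*with_video, *control)
print(f"accuracy with video {acc_video:.2%}, control {acc_control:.2%}, "
      f"z = {z:.2f}, p = {p:.3f}")

Whether a difference of that sort would justify the cost is, of course, the separate question in step F of the TL;DR above.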

What's above is a quick-and-dirty outline of what I would *want* to do to produce "hard data". After dealing with real-world constraints, thinking a bit more deeply, and getting some preliminary results, I (or whoever) might decide the experiments could be simplified without violating the integrity of the results, or might add something.

I know there are folks who firmly believe that the need for (or cost of) video reviews is/isn't so obvious that what I outlined above isn't necessary. I don't disagree that they feel that way. I do say that nothing in this thread so far *proves* that the need does/doesn't exist, and/or that a need would justify the investment (instead of investing in satisfying other needs).

Blake

PS: In the past, I and at least one other person have wished for detailed camera/lens specs and placement info. That would be one example of "hard data", and could be used to answer some important questions; but it's just one part of the bigger picture under the heading of "Video Review Needs to Happen Now".

PPS: Above I have some bullets about identifying which calls could/should/would be affected by reviewing video. Complementing that, I'm not sure whether deciding what the effect of a changed call should be is part of designing each/any experiment (it probably is). Regardless, it is certainly something that would factor into any decision to introduce (or not introduce) video replays into the system.
__________________
Blake Ross, For emailing me, in the verizon.net domain, I am blake
VRC Team Mentor, FTC volunteer, 5th Gear Developer, Husband, Father, Triangle Fraternity Alumnus (ky 76), U Ky BSEE, Tau Beta Pi, Eta Kappa Nu, Kentucky Colonel
Words/phrases I avoid: basis, mitigate, leveraging, transitioning, impact (instead of affect/effect), facilitate, programmatic, problematic, issue (instead of problem), latency (instead of delay), dependency (instead of prerequisite), connectivity, usage & utilize (instead of use), downed, functionality, functional, power on, descore, alumni (instead of alumnus/alumna), the enterprise, methodology, nomenclature, form factor (instead of size or shape), competency, modality, provided(with), provision(ing), irregardless/irrespective, signage, colorized, pulsating, ideate

Last edited by gblake : 17-11-2016 at 15:36.