Video Review Needs to Happen Now
Can anyone cite a real world example where this is used and works?
I am struggling to find a situation where this would apply.
Outside of sports? MLB, NHL, NFL, and NBA all have review; MLS doesn't have much, if any.
I'd have to say security video--there are a number of potential uses there. In that sort of case, though, the call is often known, but the reasoning isn't.
Something to add that watching volleyball in the Olympics reminded me of: they have a review system for balls being in/out, and you get as many challenges as you wish as long as your claim isn't proven incorrect.
Perhaps a system could be implemented so that all teams get as many video reviews as they wish until a claim they have made is denied, at which point they lose the privilege to contest via video review.
To me this is a win-win. It discourages claims made without certainty, and if a team has two or more valid issues, they aren't punished for it.
I think a reasonable set of rules regarding the number of challenges could be:
For the Qualification match schedule:
- Each TEAM gets one unsuccessful CHALLENGE for the entire Qualification MATCH schedule.
- For an ALLIANCE to issue a CHALLENGE, all 3 TEAMS on the ALLIANCE must have their CHALLENGE remaining, and all must agree to issue it.
- If a CHALLENGE is successful, all TEAMS keep their CHALLENGE. If a CHALLENGE is unsuccessful, all TEAMS lose their CHALLENGE for the remainder of the Qualification MATCH schedule.
- If a TEAM still has its CHALLENGE at the end of the Qualification MATCH schedule, it does not carry over to the Elimination tournament.
- For a CHALLENGE to be successful, the ranking points awarded at the end of the MATCH must change as a result of review.
For the Elimination tournament (except Final MATCHES):
- Each ALLIANCE gets one unsuccessful CHALLENGE. The ALLIANCE CAPTAIN determines when to issue a CHALLENGE.
- If a CHALLENGE is successful, the ALLIANCE keeps its CHALLENGE. If a CHALLENGE is unsuccessful, the ALLIANCE loses its CHALLENGE for the remainder of the Elimination tournament.
- For a CHALLENGE to be successful, the winner of the MATCH must change as a result of the review.
All non-judgement calls in Finals and Einstein MATCHES are reviewed automatically during the ensuing FIELD TIMEOUT.
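The qualification half of the proposal above can be sketched as a small state tracker. This is purely illustrative: the class and method names are mine, not from any manual, and "success" is reduced to a boolean (did the ranking points change?) supplied by the caller after review.

```python
class QualChallengeTracker:
    """Hypothetical sketch of the proposed qualification CHALLENGE rules."""

    def __init__(self, team_numbers):
        # Each TEAM starts the qualification schedule with its challenge intact.
        self.has_challenge = {t: True for t in team_numbers}

    def can_challenge(self, alliance):
        # All 3 TEAMS on the ALLIANCE must still hold their challenge.
        return all(self.has_challenge[t] for t in alliance)

    def resolve(self, alliance, ranking_points_changed):
        # A CHALLENGE is successful only if the review changes the
        # ranking points awarded for the MATCH.
        if not self.can_challenge(alliance):
            raise ValueError("alliance has no challenge remaining")
        if not ranking_points_changed:
            # Unsuccessful: every TEAM on the alliance loses its challenge.
            for t in alliance:
                self.has_challenge[t] = False
        return ranking_points_changed
```

Note the design consequence of the "all 3 must hold their challenge" rule: one team's unsuccessful challenge also disarms every future alliance that team is randomly assigned to, which is exactly the fairness concern raised below.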
Hitchhiker 42
27-08-2016, 17:48
For a CHALLENGE to be successful, the ranking points awarded at the end of the MATCH must change as a result of review.
I don't know if I'd go this far. This would require teams to calculate in their own heads (or calculators) what the new score would be and if it would swing the game. I think this would just waste more time, and it'd be easier to just say points instead of ranking points.
I don't know if I'd go this far. This would require teams to calculate in their own heads (or calculators) what the new score would be and if it would swing the game. I think this would just waste more time, and it'd be easier to just say points instead of ranking points.
If the score is 100-45, the losing alliance challenging to get an extra 5 or 10 points is a waste of time (assuming total score is not part of the ranking system). I think requiring the review have an effect outside of just the match is not unreasonable.
I don't know if I'd go this far. This would require teams to calculate in their own heads (or calculators) what the new score would be and if it would swing the game. I think this would just waste more time, and it'd be easier to just say points instead of ranking points.

For quals, the requirement that it change one of the aggregates in the ranking algorithm (though I'm not positive that's what cgmv means) would be pretty loose in most years. This seems like a reasonable standard, since it's easy enough to know beforehand ("that would raise our auto score total") and speaks directly to the impact quals are supposed to have. We could probably keep the same condition for elims for simplicity's sake. Alliance Captains should be smart enough by that point not to risk losing a challenge on a ruling that wouldn't actually benefit them in the bracket.
Not sure how I feel about requiring every member of a qual alliance to still have their challenge coupon. I think I'd be okay with "any" instead of "all". It likely means more challenges, but then normal teams can't be burned by being randomly assigned a trigger-happy alliance partner who wasted their coupon on match 1.
I suggest some kind of time limit rule as well, like the completed coupon must be submitted to the head ref by no later than the starting whistle 3? matches after the match in question (match 4 for a challenge in match 1) or before the next elim level. The latter gets tricky if you're the last QF match to play.
I currently envision the challenge coupons including team name, match number, alliance color, specific challenge (from the list of acceptable ones), approximate time and field location, and a FOUO section for review outcome. Anything else? (Refs would archive the submitted slips; if you're right you'd get another blank one.)
If the score is 100-45, the losing alliance challenging to get an extra 5 or 10 points is a waste of time (assuming total score is not part of the ranking system). I think requiring the review have an effect outside of just the match is not unreasonable.
I agree on that. Anything from missing one breach ranking point on up is reasonable. If total score is part of the ranking system, that could be re-thought, but I'd suggest any score changes less than X amount (determined by the game and ranking, but let's call it one penalty of TBD type) would be unreviewable if the score differential was greater than 2X.
I disagree on the Finals being an automatic review, primarily because that means a minimum of 3 minutes where any refs involved aren't doing their between-match stuff (traffic control, overall monitoring), and also because if there's something tough in the Finals every ref is going to be in the huddle discussing the calls--we want to get the calls right the first time. What I'd do instead would be to reset challenges (I'm in favor of LIMITED challenges, and I'll explain why in a minute) to full for finals regardless of prior usage.
The reason I prefer limited challenges (probably 1/alliance in playoffs, with a second if the first is successful) is that by the second challenge from the same alliance, if the Head Ref hasn't shuffled the crew, he or she probably needs to. And it may be obvious from the reviews that one ref or another needs to be shuffled to a break or to another field position if possible. For those who aren't refs: ref crews tend to find their weak links quickly and strengthen them as needed. If there are two challenges, chances are there's a ref who needs more strength--or it's possible that the alliance is just trying to game the system.
For quals, the requirement that it change one of the aggregates in the ranking algorithm (though I'm not positive that's what cgmv means) would be pretty loose in most years. This seems like a reasonable standard, since it's easy enough to know beforehand ("that would raise our auto score total") and speaks directly to the impact quals are supposed to have.
My intent was that ranking points (or the first order ranking sort in subsequent games) would need to be affected. For Stronghold, this would have meant changing the outcome of the match or awarding a point for a missed breach/capture. I'm reconsidering this now, because you correctly point out that the ranking tiebreakers can also matter. I'm not sure where to draw the line, though, because for Stronghold, the only scores that don't factor into at least one tiebreaker are foul points, and I don't think allowing a challenge to every single Qualification match score is worth it. I'm fine with Eliminations going deep into the evening if it takes that much time to get all of the calls right, but I think Qualifications should be expected to stick to the schedule a bit more.
Not sure how I feel about requiring every member of a qual alliance to still have their challenge coupon. I think I'd be okay with "any" instead of "all". It likely means more challenges, but then normal teams can't be burned by being randomly assigned a trigger-happy alliance partner who wasted their coupon on match 1.
Yeah, I wasn't sure how fair/unfair it was for teams to be able to jeopardize future alliance partners by losing their challenges early. I do think it's fair to limit challenges in qualification matches to blatantly obvious errors with ranking implications, and the rules I proposed are written with that in mind. They're just a suggestion that I expect the GDC (Hi GDC!) to tweak before placing in the 2017 manual. :wink:
I disagree on the Finals being an automatic review, primarily because that means a minimum of 3 minutes where any refs involved aren't doing their between-match stuff (traffic control, overall monitoring), and also because if there's something tough in the Finals every ref is going to be in the huddle discussing the calls--we want to get the calls right the first time.
The automatic review wouldn't be done by a referee. It would be done by a "replay official" who would "confirm" most of the calls in the match right away and only call over a ref if they see something questionable.
Quick thought with regard to the 'must affect ranking points'....
If there is a) a time limit to make the call b) you need all 3 teams on the alliance to agree and c) need the ranking points to change then:
1) Is there enough time as robots are being pulled from the field to coordinate with the other two teams?
2) AND in the event that a 100-90 loss was due to a missed breach and would have resulted in a match point tie, is there still enough time to find and calculate the tie-breaker while coordinating with two other teams?
End of match is hectic as it is... If a team/alliance is feeling slighted by a call (or non-call), it's even more so. While I agree there should be reasonable limits the challenging team should have sufficient opportunity to avail itself of the rule.
To that end I would change the rule to be that the challenge, if successful, must either change the ranking points awarded, or change the win/loss/tie result of the match, without regard to the tie-breaking formula.
My intent was that ranking points (or the first order ranking sort in subsequent games) would need to be affected. For Stronghold, this would have meant changing the outcome of the match or awarding a point for a missed breach/capture. I'm reconsidering this now, because you correctly point out that the ranking tiebreakers can also matter.

I'd stick with first-order sort, possibly 2nd-order. Most times the sorting doesn't seem to go beyond 2nd order. 2nd-order sort generally being auto, those would be quick reviews (15 seconds or less).
The automatic review wouldn't be done by a referee. It would be done by a "replay official" who would "confirm" most of the calls in the match right away and only call over a ref if they see something questionable.

And this is where you need to remember something: the replay official is going to need at least some referee training. Otherwise, how are they going to recognize a questionable call? I know we're taking judgement out of it as much as possible, but, for example, when we're looking at crossings, there are an awful lot of nuances to crossings (for robot and for boulder) that sometimes people don't quite grasp--I once had a team complain during practice day after I called them for not finishing a Crossing before returning to the NZ from scoring in the high goal. So if they're going to need referee training anyways, you may as well make them a referee. And if they're a referee, you may as well have them work the field for a few matches, particularly if you're short-handed. So that means that several refs may as well have training on the replay system.
I think the best way to handle replay would be to have the "off" ref in the rotation handle it, or have two "off" refs (one assigned to replay at any given time). Now, finding the refs over and above the field crew can sometimes be difficult. But I think with enough effort, someone could be found... And actually, that would speed up replays a bit--if you've got an off-field referee going through them during a match, then the only thing they need to do is to advise the head ref (NOT a regular ref, BTW, that's another thing--this is a call reversal or not) that this-that-and-the-other is the case, or that he needs to take a look as somebody's asking for a judgement call, or what-have-you.
I'm not sure where to draw the line, though, because for Stronghold, the only scores that don't factor into at least one tiebreaker are foul points
In Quals, yes, fouls don't factor, but in Playoffs they are the 1st tie-break in a tied score. Cleaner played match wins. [5.4.4]
My intent was that ranking points (or the first order ranking sort in subsequent games) would need to be affected. For Stronghold, this would have meant changing the outcome of the match or awarding a point for a missed breach/capture. I'm reconsidering this now, because you correctly point out that the ranking tiebreakers can also matter. I'm not sure where to draw the line, though, because for Stronghold, the only scores that don't factor into at least one tiebreaker are foul points, and I don't think allowing a challenge to every single Qualification match score is worth it. I'm fine with Eliminations going deep into the evening if it takes that much time to get all of the calls right, but I think Qualifications should be expected to stick to the schedule a bit more.

We could hedge it and say "changes Nth (likely 1st/2nd) order ranking points or changes rank". Presumably any team looking at a deeper ranking aggregate is on-the-ball enough to determine whether a changed call would move them. It leaves out edge cases where it'd move you closer to jumping someone without actually passing them, but it does have to stop somewhere (I'd argue).
Caveat -- I'm not trying to reopen the battle about if there should be a video review or not. I just wanted to ask a simple question.
There has been much discussion about the technology needed to do video reviews and some teams have said that they have that level of equipment.
Since we are in the final weeks of off season events, are there any events that are planning to allow for video review?
Thanks!
Ryan's folks have one coming up sometime in October.
I asked about doing one at one of the local offseasons, as a pilot. No-go with the planning committee.
On the other hand, they did approve one of my other ideas... If that goes well, I may find myself needing to write a report on it.
Eric H, if the rest of us can volunteer you for some work, maybe the planning committee will let you place a few cameras at overlapping locations so that you could collect data able to shed light on some of the more technical topics raised earlier in this thread.
Topics like "How many points of view are needed to give an accurate record of things like line-crossings, or of contact between robots, or of contact/positions among/of any other physical parts of a match."
And/or maybe produce some screen shots illustrating:
- How much moving object blur/tearing exists in a single frame of the imagery captured by the cameras you happen to use.
- How far robots and game pieces travel between successive frames of the video at the frame rates you try.
- How much real-world area each pixel represents in the images you capture (along the length and across the width of the field), given the camera settings you try (see next bullet)
- How much the resolution varies throughout the cameras' depth of field, given the focus & aperture settings, and lenses you choose to use.
Etc.
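For the motion and resolution bullets, the back-of-envelope arithmetic is simple enough to script. All the numbers here are illustrative assumptions (a ~5 m/s robot, 30 or 60 fps consumer cameras, 1920 pixels spanning roughly a 16.5 m field length), not measurements from any real setup:

```python
def travel_per_frame_m(robot_speed_mps: float, fps: float) -> float:
    """Distance a robot moves between successive video frames, in meters."""
    return robot_speed_mps / fps

def pixel_footprint_m(span_m: float, pixels_across: int) -> float:
    """Real-world length covered by one pixel when pixels_across pixels
    span span_m of the scene."""
    return span_m / pixels_across

# A ~5 m/s robot at 30 fps covers about 0.17 m between frames;
# doubling the frame rate to 60 fps halves that.
# At 1920 px across ~16.5 m, each pixel covers roughly 8.6 mm --
# before accounting for motion blur, compression, or lens distortion.
```

These are exactly the kinds of figures the camera experiments proposed above would pin down with real hardware.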
You would do a little scurrying around, and maybe pose some robots during down time, but your effect on the event would be hardly noticeable compared to a full replay experiment.
Blake
AllenGregoryIV
23-09-2016, 23:13
Eric H, if the rest of us can volunteer you for some work, maybe the planning committee will let you place a few cameras at overlapping locations so that you could collect data able to shed light on some of the more technical topics raised earlier in this thread.
Topics like "How many points of view are needed to give an accurate record of things like line-crossings, or of contact between robots, or of contact/positions among/of any other physical parts of a match."
And/or maybe produce some screen shots illustrating:
- How much moving object blur/tearing exists in a single frame of the imagery captured by the cameras you happen to use.
- How far robots and game pieces travel between successive frames of the video at the frame rates you try.
- How much real-world area each pixel represents in the images you capture (along the length and across the width of the field), given the camera settings you try (see next bullet)
- How much the resolution varies throughout the cameras' depth of field, given the focus & aperture settings, and lenses you choose to use.
Etc.
You would do a little scurrying around, and maybe pose some robots during down time, but your effect on the event would be hardly noticeable compared to a full replay experiment.
Blake
Feel free to analyze any of the video we have posted on our channel; we have two events' worth from this summer, with three different camera angles in every shot.
https://www.youtube.com/playlist?list=PLTocT0DivsNm7VgMiXSff6pk35HJXrr9n
Eric H, if the rest of us can volunteer you for some work, maybe the planning committee will let you place a few cameras at overlapping locations so that you could collect data able to shed light on some of the more technical topics raised earlier in this thread.

If the event wasn't tomorrow, that might theoretically be possible. But I'd need to clone myself at least once. My effect on the event is rather noticeable in my current role...
However, if you're interested in the typical camera setup, I've put the stream links to TBA (Fall Classic).
Like I said, there's something else I'm up to that has some relation to this topic, but is in a somewhat different vein. If I remember to write it up afterwards, I'll post it.
Feel free to analyze any of the video we have posted on our channel; we have two events' worth from this summer, with three different camera angles in every shot.
https://www.youtube.com/playlist?list=PLTocT0DivsNm7VgMiXSff6pk35HJXrr9n

Not today, but maybe sometime reasonably soon. Doing that would give me a good reason to recreate a good-enough (bare) field model.
What can you tell us about the cameras? Lenses, F Stop, Virtual shutter speed & other "shutter" settings, frame rate, Model number (video chipset), resolution chosen, and, of course, locations & orientations (plus whatever else I'm forgetting)?
Some results can be evaluated reasonably well without that info, some not so much.
Also, I would have guessed that videos posted on the Web for ordinary consumption would have gone through some lossy compression. Is the raw, 100% uncompressed frame-by-frame imagery available for download from the YouTube page at that link (I'm definitely not a YouTube guru)? Thumbs up if it is. If not, do you have a link to a site where I can download some of it?
Blake
Like I said, there's something else I'm up to that has some relation to this topic, but is in a somewhat different vein. If I remember to write it up afterwards, I'll post it.
As it turns out, nobody took advantage... But I'll post the write-up anyways, or something like one.
The offer: Any person who could plausibly be a ref within the next couple of years (high school seniors, parents, mentors) was offered a chance to shadow a referee for a few matches. Now, obviously they wouldn't actually be making any calls, but being with the referees and able to ask questions is big for both the people and the refs, as normally their interaction is one or two kids talking to the head ref in the question box. (I also had an "open box" policy as the head ref for the event: The box is where I am, and the person in it is the person talking to me.)
There were no takers.
And I had a team ask me if they could set up a camera to check the crossings, and at least two teams tried to have me look at a replay. I told them all that I couldn't look at a replay--not that they can't set up a camera, mind you, but that I can't look at a replay.
I did have a team or two ask about crossings being counted--my usual response was to turn to the FMS operator and ask for the appropriate sheet for that match (we used a paper scoring system, and we had scorers working with the refs--HQ, take note, refs could use some scorers just to enter data), then check with the appropriate ref(s) and make corrections as necessary.
Hey off-season folks,
What news do you have for us?
What are the new anecdotes?
What are the new hard data?
Blake
Cothron Theiss
13-11-2016, 19:59
Hey off-season folks,
What news do you have for us?
What are the new anecdotes?
What are the new hard data?
Blake
Small anecdote here.
I reffed a small, one-day off season event recently. We were pretty short-staffed, so I was doing field reset, counting defense crossings, and keeping tabs on all the stuff behind one alliance wall. In two cases, I know I might've missed a crossing. In one case, it would have made the difference between a Breach or not for a team that was in the top 8 after the Qualification matches. I would have REALLY liked the chance to request a video replay, because I'm honestly not sure if I made the right call in a situation that would have affected a team's ranking somewhat significantly.
Not exactly a new or groundbreaking sentiment, but at some point, I'd like to ref again. I'd feel more confident reffing if I knew I could access video replays to give every team the results they earn.
Sperkowsky
13-11-2016, 20:06
Hey off-season folks,
What news do you have for us?
What are the new anecdotes?
What are the new hard data?
Blake
Not exactly related to video review, but it's worth sharing.
Yesterday I tried my hand at reffing at a fairly large off season event. After my experience in that position, I realise how easy it is to miss something like a crossing, a pin, or a G43 violation. I'm sure I missed at least 1 crossing yesterday, and that wasn't for lack of trying or knowing the rules. On an FRC field there is a ton of stuff going on, and despite 1 ref watching courtyard fouls and 1 watching crossings, it's extremely easy to miss something.
So why not have video review? If I missed something that caused a team to lose, I'd feel horrible, and if there were a way for my mistakes to be caught before ruining hundreds of kids' competition, it would be great.
Video review needs to happen. After my experience as a ref, I believe that even more. I've been on both sides of the coin now, and on both sides I want video review. I don't care if it means I leave the competition an hour later.
Bryce Clegg
13-11-2016, 20:57
What if we had video review only for playoffs/eliminations and only the referees can call for video review? This wouldn't slow down the competition too much, and the refs would know exactly what happened. While some argue that "this isn't what FRC is about", I disagree. This allows teams that should have gone to the next level--which could be the District Championship or World Championship--to actually get there. Those championships allow for much more inspiration than just a regional or district event. Also, teams' public opinion of certain events would become more positive, because they would know that no error kept them from advancing to the next level.
Imagine this: there is a finals match like the final Stronghold match we saw this year at an event. The teams are tied, and a referee mistake could allow for a team to advance that maybe shouldn't have.
I believe that this is something we need to experiment with; however, the power should not be overused, and would need to be monitored very closely.
What if we had video review only for playoffs/eliminations and only the referees can call for video review? This wouldn't slow down the competition too much, and the refs would know exactly what happened.

I would make that the Head Referee only, myself--and it's actually easier than you might think to implement, IF you ignore the technological part of the equation. Basically, all the GDC would need to do would be to include video replays in the allowable sources the Head Referee is allowed to consult, for playoffs only, and clarify the "will not review" part of that to only be during qualifications. (The list currently includes GDC members, FIRST personnel, the FTA, and technical staff, per the manual in 5.5.3, just for reference.)
That being said, that's ignoring the technical part--camera placement, clear views, etc. That's been discussed ad nauseum, so I'll just leave the discussion there as solutions exist.
The other half is that by allowing the Head Ref to use it, use is not required. This could lead to variation between events, similar to the classic complaint to the inspectors of "But the Magnolia Regional allowed _______!" Probably some consistent standard would be needed, but I suspect that that would end up in the ref training materials, and thus the teams might not be able to know what it was.
This allows for teams that should have gone to the next level. Those championships allow for much more inspiration than just a regional or district event. Also, the public opinion of teams about certain events would become more positive because they know that there was no error that caused them not to advance to the next level.

First question I have is: who determines which teams "should have gone" to the next level? Honestly, I would hope that that's determined by the teams on the field of play--even refs don't like swinging the winner by a (debatable) penalty, folks!
Second question stems from major sports. They all use replay to some extent, so the question now becomes: why are "bad" calls still making it through the replay system? Paid refs, one game piece (and however many players), a $$$$$ replay system, and calls still get through??? Huh, funny how that happens. How are we sure that no error was made?
Now, there is something that would deal with the public opinion BETTER than instant replay, IMO: Transparency about the call, at least with the affected teams if not the entire event. In the playoffs especially, I would say that if the head referee takes the time to go over why the call was made (or missed) with the teams, it's actually better than a replay--and if there's a score correction that needs to be made, then it needs to be made. The teams might not like the call--at least three of them won't!--but I'd be hopeful that they'd understand why the call was made the way it was made.
Imagine this: there is a finals match like the final Stronghold match we saw this year at an event. The teams are tied, and a referee mistake could allow for a team to advance that maybe shouldn't have.
You'll notice that the only reason for the tie was, in fact, a penalty--and whether that penalty was a good call has been discussed, with the conclusion that there were grounds in the rules for it. Just some food for thought, as that wasn't the only time during the season that a single penalty flipped the winner and loser of an elims match. I think I saw that happen two or three times prior to that.
The other part of the problem is that FIRST may just need to do a better job of determining which games need dedicated scorers. 2014 and 2016 didn't have them (at official events--though by the end of 2014 the number of refs had increased to allow some to be scorers). 2015 did. Take a wild guess which games actually needed the scorers? (I'll go on record as saying that having scorers in 2016 really helped at the offseasons I was at, even if they were just recording a referee call.)
commentingonly
14-11-2016, 00:24
I would like to reiterate match logistics and explain a few reasons why video review is out of the question and should never be considered. I am a field staff volunteer, and have worked with a few different teams.
Let's start with event logistics from the point of view of field staff, and refs for that matter. A district event typically has around 36 to 40 teams. Let's actually use a real event for the logistics: the FIM Southfield (https://www.thebluealliance.com/event/2016misou) event from this year.
The event had 39 teams and 10 hours of time scheduled for qualification matches on the public schedule. Each team was set to play 12 matches, giving 78 matches total. That works out to about 7.5 minutes per match.
Now let's think about game play. What has to happen for a match? The field needs to be configured, robots need to connect, teams need to be announced, the match needs to be played, the scores need to be submitted and then posted, and robots need to clear the field. How long does this all take? About 7 minutes--almost exactly the time allotted to each match, and that's without any problems with robots or the field that delay the start of a match, match replays (when needed), field repairs, or any number of other factors that can delay a match on the part of teams or the field.

This 7-minute cycle time is just about the limit: it takes 30 seconds or so to prep the field for team connections, 3 to 4 minutes for the teams to set up robots on the field and connect, about 2.5 minutes of match time, and then 30 seconds for the refs to confirm the score and for it to be posted. That adds up to about 7 minutes, depending on how long each step takes. Can we make this faster? Not without teams setting up robots faster. Even then, with field configuration, robot connection time, and the time it takes for matches to run and scores to be submitted and posted, the fastest cycles tend to be about 5 and a half minutes, and that gets hit at most once an event, if that. Matches tend to run on 6.5 to 8 minute cycles.
Where in there do you plan to fit video replay? Replaying a full match would take 2.5 minutes, not to mention time to analyze what's being seen and re-watch parts if needed. Maybe add 30 seconds to adjust and verify any scores that need to change after that? That gives you about 3 minutes to add onto each match cycle before scores can be posted and the field configured for the next match. Plus, who is to say there won't be a few additional minutes of discussion about what's being watched and whether those points were counted?

How much time do you really want to add for score review? How much is too much? Well, if you add the three minutes estimated above--probably the low end of the range--that gives us 10 minutes per match, and with 78 matches, as at Southfield, that gives you 13 hours of game play. You now have to add 3 more hours onto the first day of qualification matches to get done in time. That means keeping students at the venue as late as 10 PM in the case of Southfield. Some people would say that's reasonable, but for teams traveling more than an hour to events it isn't: an hour back to the school plus time to get home puts students arriving home around midnight. That won't fly with many school districts--and do you really want students, mentors, and volunteers running on that little sleep? Think about it: Southfield's event opened at 8 AM the last day. With an hour-long bus ride, students need to meet before 7 and wake up at 6 to get there by then, and we all know they don't go to sleep right away.
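The schedule arithmetic above is worth making explicit; this just restates the post's own numbers (78 matches, roughly 7.5-minute cycles today versus 10-minute cycles with review):

```python
def schedule_hours(num_matches: int, cycle_minutes: float) -> float:
    """Total hours of qualification play at a fixed per-match cycle time."""
    return num_matches * cycle_minutes / 60.0

# Southfield-style schedule: 78 qualification matches.
baseline = schedule_hours(78, 7.5)      # 9.75 hours -- fits the 10-hour window
with_review = schedule_hours(78, 10.0)  # 13.0 hours -- about 3 extra hours
```

The whole argument hinges on that ~3 minutes of added cycle time: multiplied across 78 matches, a small per-match delay becomes hours of venue time.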
Now let's think more about how this entire replay thing would work anyway. Okay, so you record the video from the audience screen, right? Well, what's that? A full-field camera? Reasonable, but it can't get everything and can't see close details. Okay, so you record all the cameras? Most setups have 3: full field, plus one for each alliance. Most of the feeds for the walls did not fully catch the defenses, so you don't get to see the exact defense crossed--or, this year, you'd have the portcullis or drawbridge blocking part of the camera's view. So you add more cameras; that works to solve those problems. Say one more for each alliance, covering the parts that the others don't see. Now you end up having to allow for video mixing, feed switching, rewind replay, and watching multiple camera screens. That is extra time in each match review--maybe 30 seconds or a minute? That's reasonable, since the entire point of this is to not miss any detail, right? Well, we have now added at least another hour onto our matches. Where does that fit into the schedule?
Now you have your 11-minute cycle times and your cameras, and you're all ready for match review. What's the cost? Say $500 for a good camera, so $3,000 for all of them. Add $500 for wires and tripods, and another $500 for the screens and controls so the video can be watched. You also need a device to record the video; we can use a TriCaster 40 (https://www.bhphotovideo.com/c/product/996927-REG/newtek_fg_000437_r001_tricaster_40_v2.html), which is $5,000 on B&H. That means our video replay system starts at around $10,000 once you get everything you need. That's not too bad for a video setup; it's fairly low end, but probably all you need to watch the matches. And remember, that needs to be on 20 fields per week. So that is $200,000 for just your video replay system. Let's also remember the additional time and cost that go into training on the system, a volunteer spot to man the computer and help the refs, and the time and resources needed to prepare the system and vet the options. Still think it's reasonable?
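For what it's worth, the line items above can be tallied directly. The figures are the post's rough estimates, not vendor quotes; note the itemized sum actually comes to $9,000 per field and $180,000 for 20 fields, which the post rounds to "around $10,000" and $200,000:

```python
# Tally of the per-field video-replay costs estimated in the post above.
# All line items are the post's rough figures, not vendor quotes.
PER_FIELD_ITEMS = {
    "cameras": 3000,               # "$500 for a good camera, so $3000 for all"
    "wires and tripods": 500,
    "screens and controls": 500,
    "TriCaster 40 recorder": 5000,
}
FIELDS_PER_WEEK = 20               # fields in play per week, per the post

per_field = sum(PER_FIELD_ITEMS.values())
fleet = per_field * FIELDS_PER_WEEK
print(f"per field: ${per_field:,}")              # per field: $9,000
print(f"{FIELDS_PER_WEEK} fields: ${fleet:,}")   # 20 fields: $180,000
```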
Okay, you want to argue that not every second of every match needs to be replayed? So you want to play part of a match: you have to seek through the full footage to find those 10 seconds for review, and maybe you still watch it twice. The time to find the clip might be 20 seconds, then 20 seconds to watch it twice, so that's 40 seconds, plus 20 seconds of thinking about how to proceed (and that would be superfast for most people; try making a decision in 20 seconds). There we have an entire minute added onto game play. Or maybe refs get their own station to watch matches? Okay, fork over an additional $20,000 per field for a more advanced streaming device. And we still have the problem of not watching every second of every match. Arguably, if you are going to do video review, it needs to happen on every match. If you only do it when you think you missed something, or when a team asks a question, that's unfair: what if a team did not realize the ref missed a defense crossing they made? Why should they lose those points to another team just because that team was being picky? So every match gets watched in full. That's the only way to make it fair for every team.
So you want to argue the time problem? Reasonable. Teams can set up for a match while video review is going on. That works, except it doesn't change the hard minimum time it takes the field itself to pre-start and run a match. You need to leave 3 minutes for that, and your field staff (FTA, FTAA, CSA, Scorekeeper) won't have any idea which robots are having connection problems until the field has been pre-started, and fixing any problems could take a minute or two. So you won't save yourself much time here.
You also need to consider the technical aspect of running the equipment. In an ideal world, nothing goes wrong. But that never happens. Cameras will go offline or break, the video streaming boxes won't work, cables will die, and balls will fly out of the field and knock over or break video equipment. What happens when something goes wrong? It's not fair to give a team extra points for a missed defense crossing in one match, but in the next match, when the same thing happens except the opposing alliance hit the camera so it recorded the ground for those 20 seconds, give them none because it "was not on camera." That's not fair, now is it? And don't tell me "it won't happen," because you know it will, and it will be your team that loses points because of it.
All in all, yes, the idea is great. It works for football, soccer, baseball, and other sports with 3-hour games, where 5 minutes don't matter; it's just a commercial break. But in the fast-paced games of FIRST, it's not the time and place, and in the end you will be doing more harm than good. Feel free to pick this apart, but take a chance to see why it's really impractical for this to work.
Wait. Stop. Let's not do what will be the third or fourth rehash of this.
A few events said they would offer replay counts for teams. Blake (and I) are looking for the data from those events.
What does the data show?
Commentingonly, you make great points, all of which have been posted before. But I'll give you points for a first post that has that much detail in it. For someone so new to FRC, I think it's very impressive.
Thanks
commentingonly
14-11-2016, 01:16
Foster, I am not new to FRC. I have volunteered in almost every aspect of events, was a student on a team as a critical part of build and drive, and I help mentor 4 teams. You must understand that I did not wish to rehash the data and information. I have read a good deal of this thread, maybe missing a few posts, so I know it's all been said. I (and I would hope many other field staff members, refs, event coordinators, and GDC/FIRST staff) can't understand how one would expect to achieve this. The reasons I listed are just a few of those I could list to explain how this would end badly and become more of a struggle to implement than it would achieve in the long run.
The data that needs to be analyzed is how long it would take you to walk around the field, seek through a video, find the clip you want to watch, verify what you expect, play it for two other people so you can be sure, discuss it, and then make the adjustments. That process would take at least 2 minutes to complete. And if you're talking about recounting scoring, it takes no less than 2.5 minutes to watch a full match, and for something like this year's game, that's absolutely required in order to confirm crossings. So it would guarantee adding at least 3 minutes onto any match cycle you have, and that's without debate over any other penalties or other field-related problems.
I will say I do agree that every point matters and that missed points suck, but you can't guarantee that a ref won't get distracted, and it can be a disappointment to students and teams. When I was a student drive team member, I was personally on the receiving end of a few bad calls, but never once did we think the wrong call was made after talking with the head ref.
The only feasible solution is making refs more accountable for their actions. This could be as simple as adding a scorer or additional refs in future years. But providing video replay would more likely than not create more problems, by allowing refs to say "we will just review this later," stop watching scores or fouls, and forget what they wanted to review by the end of the match.
MrTechCenter
14-11-2016, 01:53
We did not offer teams the chance to challenge via video replay at CCC (frankly, we had so much going on during the planning this year that we couldn't even think about it).
However, in the elimination rounds, there were two instances where we believe the match was scored incorrectly. We went back and looked at it on video to see and, if necessary, recount boulder scores and/or defense crossings. It made a significant difference having the video review.
In Finals Match 3, the final score that was initially posted showed that the blue alliance did not get a breach and, thus, did not earn the bonus points. Once the score went up, I (Scorekeeper/FTA) immediately knew it was wrong: the blue alliance had not been given credit for the breach, so red had initially won the match and, thus, the tournament. Others who were fieldside weren't so sure whether they got the breach or not. We played the stream back on one of the monitors and, sure enough, the blue alliance did get the breach; not only that, but their autonomous wasn't scored correctly either. The scoring adjustments were made, it put them over the red alliance, and they ended up winning the tournament.
Not the best implementation, but it made a huge difference.
Sperkowsky
14-11-2016, 07:10
snip
While I agree with some of your points, I majorly disagree with others.
First of all, time. You seem to assume every single match will get replayed, and frankly, for lack of better words, that's stupid.
Video replay will probably be used about as much as the question box. A box used, what, probably fewer than 15 times per regional, with at least half being simple questions. Even if we ridiculously assume that every match is replayed in its entirety, let's go to your rebuttal about watching while setup is happening for the next match. You mentioned the match could not be pre-started without the score being in, and you are correct RIGHT NOW. If FIRST does implement video review, I am sure they will make a few tweaks in the FMS code so the match can be pre-started while video review is happening.
Now for cameras. I think video review could fix 99% of calls with 3 wide-angle action cameras (GoPros) costing around $900. You also mentioned a TriCaster. While I love TriCasters, one is not needed; something like a Blackmagic ATEM would entirely suffice, and you can get a nice one for $1,500. After that, there would need to be a streaming computer, which I'll say costs $500, and then another $100 for cables. In the end that is about $3,000 per setup, so to outfit all 20 fields we are talking about $60,000. That may seem like a huge number, but when you realize it's only 12 teams' initial registrations, it seems much smaller. Not to mention that this setup can FINALLY let FIRST standardize livestreams. The volunteer role is simple: livestream operator. Almost all FIRST events already have livestreams, and there is someone already unofficially volunteering to run them, many times without even being recognized as a real volunteer.
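Tallying this counter-estimate the same way (note the $5,000 registration figure below is an assumption inferred from the "12 teams' initial registrations" comparison, not an official fee):

```python
# Tally of the cheaper per-field replay setup proposed in the post above.
# Line items are the post's estimates; the registration fee is an
# assumption inferred from the post's "12 teams" comparison.
PER_FIELD_ITEMS = {
    "3 wide-angle action cameras": 900,
    "Blackmagic ATEM switcher": 1500,
    "streaming computer": 500,
    "cables": 100,
}
FIELDS = 20
REGISTRATION_FEE = 5000  # assumed initial team registration cost

per_field = sum(PER_FIELD_ITEMS.values())
fleet = per_field * FIELDS
registrations = fleet // REGISTRATION_FEE
print(f"per field: ${per_field:,}; fleet of {FIELDS}: ${fleet:,}")
print(f"equivalent to {registrations} initial registrations")
```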
Now, your point about equipment breaking I somewhat understand, although your main center camera should be able to see 90% of what is going on on the field. Not to mention this year is a major outlier; any other year a single wide-angle camera could catch 99.999% of calls.
I, like you, have done a ton of roles at FRC competitions. This year I have done field reset, been the scorekeeper, reffed, and run regional livestreams. I know video review can happen, and I really, really hope FIRST wakes up and makes it happen.
Not the best implementation, but it made a huge difference.
Thank you very much for your data point with the detailed description on how it worked out.
Question, how long did it take you to do the replay and make a decision?
and again, we've beaten this horse to death with opinions, still looking for the events that did it and the FACTS around it.
commentingonly
14-11-2016, 09:23
Sam, you make some good points, ones which I did consider. I have actually done work with video recording systems and have set up a camera arrangement similar to what you're talking about.
First, you suggest GoPros and an ATEM. The ATEM is a good piece of hardware and would do what you need for video mixing in a production environment. In this case, however, you need something better: it only offers one video out, which works for the audience video and web stream but does not allow you to record all 3 of the cameras in your setup.
The second point I would like to make is about the GoPros; they are a good example of a camera for this. I personally have used them in this type of environment, and I believe Mid-Atlantic did this as well with their video system last year. GoPros don't hold up in a streaming environment: they easily overheat when run for a long time, and getting them up and running takes a bit of work. They also did not play nicely with the Blackmagic ATEM I used, and required multiple adapters (costing about $200 per camera) to work reliably.
Third, your video streaming server is severely under budget. A cheap Blackmagic card to handle HD video recording on one feed costs around $200; something to handle 3 or more recorded feeds starts around $300-$500. There goes your budget for that one. Even with cheap hardware, you are still looking at around $1,000 for that setup. You can get cheaper hardware, but I would not expect it to hold up well during a full event.
I should also mention camera setup. I have tried to place cameras around the FRC field to effectively cover the full field. It's hard, especially this year, and don't forget last year too: once your stacks got above 4 high, they blocked robots, chutes, human players, and drivers. There was no way to ensure full video coverage of this year's, or last year's, game with only 3 cameras. And I would not say it's over yet, either. With the increased production value of this year's game, which looks like it will carry over to next year, I think we will see another low-visibility game in our near future.
Second, I do agree that my "every match gets video replay" is a little crazy, but I don't think it's out of the question. And yes, the question box is underused right now, since often there is nothing that can be done to re-score matches. In my years of FIRST, I would expect that if you watched match replays, you could re-score about 50% of matches based on the video recordings. It happens that a ref is watching a robot for a penalty while another robot they are not watching scores points; that can very easily get overlooked. I, as a field staff member, have caught myself focusing on the movements of a single robot for long parts of a match and ignoring the others. The game is exciting, and that's really easy to do when you're 3 feet away. You said it yourself a few posts ago:
I'm sure I missed at least 1 crossing yesterday, and that wasn't for lack of trying or knowing the rules. On an FRC field there is a ton of stuff going on, and despite 1 ref watching courtyard fouls and 1 watching crossings, it's extremely easy to miss something.
How many matches would you have wanted to replay at that off-season event? Are you telling me that you trusted every score you entered and would only have done match review if someone came to the question box? And even then, are you telling me that one team should get their match re-scored just because they came to the question box? If you do it then, it should happen always, in order to make it fair for all teams.
I do agree with you that it's a great idea. Video replay would be a handy tool for refs when scoring matches, but I don't think there is a good implementation that would not create more of a hassle for teams and volunteers. There would be countless delays, and I don't think any event could accurately predict its own running time with this as an added factor.
...
Any reason you felt the need to make an anonymous account for this thread?
Just curious :)
Thank you very much for your data point with the detailed description on how it worked out.
Question, how long did it take you to do the replay and make a decision?
and again, we've beaten this horse to death with opinions, still looking for the events that did it and the FACTS around it.
I'd say it took us around 5-10 minutes to check everything, decide, and edit the scores. That's only because we didn't really have an actual system in place and were just playing back the webcast. If we actually planned it out and came up with a process, dedicated equipment, etc., it would probably take significantly less time.
wilsonmw04
14-11-2016, 10:48
Wow,
This is such a bad idea for so many reasons. Forget the logistics of it (technology required, time and energy, etc.); it puts the emphasis on the wrong part of the event: who won. We need to take the best of the sports model, not the entire thing.
Another possible data point for you all....
It may not be necessary to re-watch an entire 2 1/2 minute match to determine if an error occurred.
At our off-season event we had a situation where, in the last ~15s of our first quarterfinals match, our alliance damaged 2 defenses (2 bots nearly simultaneously) and breached before barely making it to the batter. Prior to the crossings, the lights on the defenses both indicated one crossing had been completed, but prior to the end of the match (when they all went off) only one had been scored as damaged. The failure to score the 2nd crossing and breach was the difference between a win and a loss.
We had no replay (or webcast) in place but we sent our student to the ref to object, to no avail. All in all it took almost 3 minutes between quarterfinals matches to have the discussion and return to the queue.
When we reviewed match video later, we only needed to review the final ~30s of the match, enough to see the lights on the defense prior to the crossing and that two crossings occurred, confirming the scoring error.
To those that say we should focus on the positive of the game, and not emphasize winning or losing, I offer this observation. Coming out of the match we lost, the mentors of all three alliance teams attempted to do just that. In fact the closest to success we had was pointing out that we get to play an extra match as a result (knowing we'd win against this alliance). However, until you stand there with ~15-20 students who all know they took a loss due to a clear error, it's difficult to understand the severe lack of "inspiration" that results.
(To be clear, I hold no ill-will or resentment toward the refs at the event -- I strongly believe they did their level best within the rules FIRST laid out, and it was a fun competition overall. Also, these opinions are mine, and I do not speak for my team.)
commentingonly
14-11-2016, 11:13
Any reason you felt the need to make an anonymous account for this thread?
Just curious :)
Partially because I don't actually have another account (long-time reader, though), and partially to abstract away my background and focus on the information I have to provide as a technical professional and my experience as a volunteer.
I'd say it took us around 5-10 minutes to check everything, decide, and edit the scores.
I think you're right that the time could be improved, but for something like re-scoring a match, you can't do it in less than 3 minutes, and that does not include the time needed to verify that you entered the right scores and that everyone agrees, let alone if you wish to analyze other aspects of the match on video as well.
Wow,
it puts the emphasis on the wrong part of the event: who won. We need to take the best of the sports model, not the entire thing.
I also, as an alumnus and mentor, have to agree 100% with wilsonmw04. Yes, winning the event is a great experience. As someone who never made it to finals while on a team, I hope every student has the chance to get there, but FIRST is not about winning; it's about learning and spreading STEM, business, and professionalism. In college and business, you can't go to your professor, boss, or a client, tell them they did something wrong, and expect them to re-evaluate a grade or business decision just because you have evidence supporting your case. As a freelance software developer, I have lost bids and, on seeing the final product produced by the competition, known that I should have won, but I can't change the client's mind once they have made the decision.
If team members think matches are reffed poorly, then those teams should be providing mentors to volunteer as refs so they have someone who will do a better job, and at that point you might see that making those calls is not as easy as you think.
Chris is me
14-11-2016, 11:47
I know this is another Opinion / Hot Take post, and it does echo some themes in the thread, but essentially: the problem instant replay solves can be solved in other ways, yet instant replay is probably the only way that we as event-planning volunteers can make an impact.
The "right" way to solve the problem is in game design. While I respect and understand the many constraints the GDC are under in the game design process (and thus this whole paragraph is easier said than done), there are certainly changes to the game rules that would make refereeing more fair. Removing tasks that are scored subjectively by humans in real time is the #1 change that can be made. Assists in 2014 and crossings in 2016 are two perfect examples. To some extent these get better if the human scoring is dedicated solely to the task and focused on a small area of the field, but in both years this task was spread out over several positions and could happen at the same time in multiple areas. Other areas of rules ambiguity could be tightened up and made either more objective or removed entirely. There is a tendency for the GDC to "patch" holes in the game design with specific and excessively subjective rules to cover for a variety of convoluted situations, and that leads to a lot of these problems.
The thing is though, we have no pull on the GDC, and no opportunity to change how the game rules are written whatsoever. So we can't solve these problems the "right" way. We can ask and hope, that's it. If we want to get more calls right, in games where calls are done like this, instant replay is worth exploring. The way it's been done at various offseasons is great - let's keep trying stuff at offseasons to balance the constraints between volunteer requirements, equipment, rules interactions, etc. What we don't need is an essential moratorium on even considering the slightest change to a broken process from people used to the status quo. Let people try it at off-seasons! If it is truly doom and gloom as it is made out to be that will become obvious very quickly, and if it's not, we learned something.
Nobody in the entire thread wants video replay because they believe refs are incompetent, not trying hard enough, or biased. There is universal recognition of the difficulty of the task of refereeing. This is not a matter of people going "oh, now that I know being a ref is Hard, I won't complain anymore" - because what comfort is that to a team whose season is over on a blatant missed call that everyone can see but no one can change? You guys say "it's not about winning", which is really easy to say if you get to the Championship every year anyway, but the fact is in FRC winning is more than symbolic - it creates the future opportunity to compete and be inspired. As long as qualification is merit based, winning, and getting the calls right, will absolutely, tangibly matter. We should try and get it right.
Ryan Dognaux
14-11-2016, 12:06
Hey off-season folks,
What news do you have for us?
What are the new anecdotes?
What are the new hard data?
Blake
GRC had one instance of it being used, by a team that wanted to make sure all of their high goal shots were correctly counted. As the next match was being set up, the head referee sat down with me at the webcast table and we pulled up the previous match video. We re-watched the match, and the head referee was able to easily count each ball scored in the high goal. Afterwards they confirmed it with the team, and that was it. There was no delay to the event or anything event-related. Overall I thought it was a success.
Wow,
This is such a bad idea for so many reason, forget the logistics of it: technology required, time and energy, etc; it puts the emphasis on the wrong part of the event: who won. We need to take the best of the sports model, not the entire thing.
At least to your first points - technology and energy required are basically the same if you're already webcasting the event. See my post from another thread on the technology aspect - https://www.chiefdelphi.com/forums/showpost.php?p=1559380&postcount=30 Don't make a mountain out of a molehill on the technology piece.
Sperkowsky
14-11-2016, 12:48
In college you can't go to your professor and tell them that they did something wrong and expect them to re-evaluate a grade just because you have evidence supporting your case.
Off topic a bit but this doesn't work out in my mind. Someone in college can correct me if I am wrong but if a professor made a mistake grading a test that caused hundreds of kids to fail and you alerted them of it you are telling me they wouldn't fix it?
At least to your first points - technology and energy required are basically the same if you're already webcasting the event. See my post from another thread on the technology aspect - https://www.chiefdelphi.com/forums/showpost.php?p=1559380&postcount=30 Don't make a mountain out of a molehill on the technology piece.

Seriously. I would very much like to see this implemented as: 1) improve the webcasts to Make It Louder, 2) experiment with replay methods behind the scenes, 3) go from there.
We all agree that the system is not perfect. We're likely to disagree about the proper effort to devote to various improvements. But this desire to shoot down any potentialities before even investigating their manifested difficulties is very aggravating. The biggest obstacle to experimentation is the technological investment, which largely piggybacks off other vast improvements that go directly to public FInspirationRST. Then experiment with how much different cameras help, how to integrate and navigate feeds, how to handle FMS and turnaround issues, etc, behind the scenes. Then handle actual implementation and restrictions thereon.
I also take serious issue with the "it's unfair if it's not X, and you definitely can't do X" strawmen. The status quo is unfair; insisting that an improvement become perfectly fair is unreasonable. Calls will still be missed whether replay is automatic for every second of every match, available throughout quals, available only by challenge in elims, available only to the head ref, what have you. The fact that calls will be missed doesn't mean missing fewer in some systematic way is equivalent. (Though you're of course free to argue it's irrelevant to the goal of FIRST, which is at best going to land on agree-to-disagree again.)
Off topic a bit but this doesn't work out in my mind. Someone in college can correct me if I am wrong but if a professor made a mistake grading a test that caused hundreds of kids to fail and you alerted them of it you are telling me they wouldn't fix it?

I've been both a grader and a student in this situation, and if it's a black-and-white call, it's far rarer that it won't be changed. And I've of course also corrected my own bosses with evidence; I do not understand this argument. If the evidence is ambiguous they may debate it or ignore you, but if it's a cut-and-dried equivalent of "they crossed the defense" -- and there's still a way to correct it -- every good boss I've had will fix it. And I certainly don't hold the initial call against them if it was reasonable and they correct it responsibly. In fact, the whole thing is very analogous to the way providing video evidence in any situation should work. And I'm saying this as a veteran referee. I have reservations about replay, but that's certainly not one of them.
commentingonly
14-11-2016, 13:12
Off topic a bit but this doesn't work out in my mind. Someone in college can correct me if I am wrong but if a professor made a mistake grading a test that caused hundreds of kids to fail and you alerted them of it you are telling me they wouldn't fix it?
You are both right and wrong. Some professors do, but others don't. I have had more than one professor who refused to regrade assignments due to their own error. It may seem a little silly in concept, but oftentimes it's because they would need to regrade the entire class, or it's a project where the final grade stands as-is and they don't want to make the change. In one instance, the professor did offer to regrade when my grade was incorrectly assigned due to their error, but on regrading they arrived at the same score by finding other reasons to deduct points. So yes, in some cases, but it's not as easy as you would think to get grades changed or re-evaluated.
I know this is another Opinion / Hot Take post, and it does echo some themes in the thread, but essentially the problem instant replay solves can be solved in other ways - but instant replay is probably the only way that we as event planning volunteers can make an impact.
The "right" way to solve the problem is in game design. ...
Chris,
I broadly agree with your comments about game design, but I agree far less with the assertion that off-season event hosts are left with (probably) only video replay in their bag of tricks for making things better.
Off-season events can and do change rules every year, and not just the rule about using video. Changes other than video are definitely on the table.
One option that doesn't require changing any game rules would be to simply scrounge up 6 volunteers from the crowd and give them the job of watching one robot apiece (there are details to be dealt with, but you get my point).
[EDIT]
When I wrote the two paragraphs above, my brain was stuck in a thinking-about-off-season-events/experiments rut; and it just dawned on me that Chris was almost certainly thinking about regular-season events. Doh!
That said, the bigger-picture point is that if off-season experiments, such as getting teams attending events to supply a small handful of students/adults to do a few low-skill scorekeeping tasks (or rule tweaks, or ...), create a dramatic error reduction during off-season events, the GDC would probably notice (notice both the errors and the simple(ish) way to treat the root cause's symptoms).
[/EDIT]
I'm not saying in this post that using video evidence is bad or good, wise or foolish, etc.
I'm also not saying in this post whether or not I think there is a difference between inspiring someone to try something new, and that person later being excited or depressed by the way an FRC competition unfolds.
I am saying that I don't think video is the only lever that can be pulled.
Now, I'm going back to waiting for the "hard" data posts.
Blake
PS: The one time I got to spend some time with an FRC GDC, they seemed like nice people ;). I think they would welcome well-organized feedback from event organizers, especially if it took the form of a video-based post-mortem of a game's rules.
I'm thinking about the sort of review and analysis that would use a large number of hours of video from multiple events to identify the sorts of calls/rules that are hardest for humans to make/enforce correctly.
That sort of info could definitely influence future games (and treat a cause instead of a symptom), especially if it could be put into a simple checklist of things to avoid, or do.
Maybe a pro-video person reading this thread will contact the GDC in order to volunteer to do that for the next 1-3 seasons?
Video replay will probably be used about as much as the question box. A box used, what, probably fewer than 15 times per regional, with at least half being simple questions.

Uh, are you basing that just on your experience reffing an off-season, or do you have some other source? I've seen lines at those boxes in the regular season! I'd put it at a minimum of 1 match in 4 with somebody in the box, averaged over the event (more during playoffs, mind you).
The volunteer role is simple: livestream operator. Almost all FIRST events already have livestreams, and there is someone already unofficially volunteering to run them, many times without even being recognized as a real volunteer.

ONLY for equipment operation. If that person makes the call -- and makes it wrong -- you know that HQ will be swamped with complaints about how instant replay inhales audibly. Any calls resulting from replay need to be made by the ref crew, particularly the Head Ref.
One other data point: I did once review a replay at an off-season -- the webcast, to be exact. I won't go into the details, but what I'll term an "integrity of the tournament" question was raised. A quick look at the right place, and the question was answered to the tune of "integrity of tournament not affected." We had to wait a match or so to get the webcast set in playback mode, though.
I'm not against replays, per se. I'm against unnecessary and excessive use, as well as use without proper equipment (read: without decent video). Basically, what that boils down to is: if replay is available, is used only when necessary to confirm a specific non-judgment call, and team-requested use is kept to a reasonable level, great -- provided that it's legal for the event in question.
I also agree with Siri on a very key item: If it is clear that something wasn't done properly, and the mistake is caught, it should be fixed. See also: Question box.
ratdude747
16-11-2016, 01:14
The "right" way to solve the problem is in game design. While I respect and understand the many constraints the GDC are under in the game design process (and thus this whole paragraph is easier said than done), there are certainly changes to the game rules that would make refereeing more fair. Removing tasks that are scored subjectively by humans in real time is the #1 change that can be made. Assists in 2014 and crossings in 2016 are two perfect examples. To some extent these get better if the human scoring is dedicated solely to the task and focused on a small area of the field, but in both years this task was spread out over several positions and could happen at the same time in multiple areas. Other areas of rules ambiguity could be tightened up and made either more objective or removed entirely. There is a tendency for the GDC to "patch" holes in the game design with specific and excessively subjective rules to cover for a variety of convoluted situations, and that leads to a lot of these problems.
Great idea in theory. In practice, I see this as only a partial solution. One cannot assume the field hardware works perfectly--such as in 2014, when the hot goal indicators occasionally failed to flip during auto. The head ref made the call on whether or not the flippers worked (and if not, called for a replay of the match). That's an automated system with a subjective element; head refs are only human, and I could see one missing a missed flip (especially if it was only on one side) and the resulting questions after the match from the alliances.
When a field scoring fault is suggested, things become subjective by default. Yes, there are places where game design can reduce the number of subjective calls ref's have to make, but subjective calls, at least by the head ref, are unavoidable.
You know, it's really hard for the game designers--and the refs.
If everything is black and white/objective, everybody complains that they got screwed over because there should have been some leeway/room for judgement (or the ref missed the call), and that everything should be a judgement call.
If everything is a judgement/subjective call, everybody complains that they got screwed over because the refs' interpretation of the game rules was bad, and that everything should be black and white.
And if the two are mixed, everybody's complaining that the objective calls should be subjective, and the subjective should be objective, and the refs still miss/misinterpret everything, so the teams are screwed over!
:p:rolleyes:
Oh, and then someone is bound to bring up instant replay as the only cure-all (instead of what it actually is, one possible tool in the toolbox full of solutions to issues that some teams don't even realize exist). Cue everybody repeating their statements from above paragraphs, followed by debates as to feasibility/fairness/volunteer POV.
Ryan Dognaux
16-11-2016, 14:50
Now, I'm going back to waiting for the "hard" data posts.
What would hard data for this subject look like to you? Most implementations will only have anecdotal evidence at this point because video review probably only gets used once or twice during an event (so far.)
Lil' Lavery
16-11-2016, 21:28
What would hard data for this subject look like to you? Most implementations will only have anecdotal evidence at this point because video review probably only gets used once or twice during an event (so far.)
That lack of usage is a valid data point.
Ryan Dognaux
17-11-2016, 09:37
That lack of usage is a valid data point.
What's interesting is I know some argued it would be used so much that it would slow events down, or that would be a risk at least, correct? So far it seems that is not the case for the few events that have tried it out.
Lil' Lavery
17-11-2016, 10:41
What's interesting is I know some argued it would be used so much that it would slow events down, or that would be a risk at least, correct? So far it seems that is not the case for the few events that have tried it out.
Have any events actually had an overturned call yet? There was one rescoring that was mentioned, but otherwise no event has had something overturned or adjusted. The primary concern is that most video review overturns would result in replaying matches, and thus the added schedule risk. It's not fair to penalize alliances for their strategic behavior, based on the score and field conditions presented to them in real time, by going back and adjusting those via video review. For instance, if a team crosses a defense 3 or 4 times to damage it, they can't get back the extra time they spent crossing that defense. Or if a team makes a call to ensure a capture at ~15 seconds rather than scoring an extra couple of balls based on the real-time score, that decision is invalidated when a review gives the other alliance a breach that didn't exist previously. Virtually any case where errors are found in video review should mandate a replayed match. Thus the schedule concern.
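To put a rough number on that schedule concern, here's a back-of-the-envelope sketch. Every figure in it (match count, cycle time, review length, overturn rate) is a made-up assumption for illustration, not FIRST data:

```python
# Back-of-the-envelope schedule impact of replayed matches.
# All numbers are hypothetical assumptions, not measured FIRST data.

MATCHES = 80          # qualification matches at a hypothetical event
CYCLE_MIN = 7.0       # minutes per match cycle
REVIEW_MIN = 5.0      # minutes to conduct one video review
OVERTURN_RATE = 0.02  # assumed fraction of matches overturned and replayed

replays = MATCHES * OVERTURN_RATE           # expected replayed matches
added = replays * (CYCLE_MIN + REVIEW_MIN)  # extra field time, in minutes

print(f"{replays:.1f} replays, ~{added:.0f} extra minutes")
```

Even at a 2% overturn rate, that's roughly 20 extra minutes of field time at one event, which is exactly the kind of sensitivity an event scheduler would want to know before committing to replays.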
What would hard data for this subject look like to you? Most implementations will only have anecdotal evidence at this point because video review probably only gets used once or twice during an event (so far.)
TL;DR: A) Ask the right question(s). B) Measure the baseline. C) Understand the relationships. D) Tweak the independent variables. E) Measure the results. F) Decide what changes/(re)allocations, if any, to implement.
- - - - - - - - - - - - - - - - - - - - - -
To prep to answer this, in addition to shooting from the hip ;) , I wanted to refresh my recollection of what has been said over the last few months. So, I reviewed this thread and an adjacent thread, and found these posts that I and a few others wrote. There is nothing Earth-shaking in them, but they supply some context.
https://www.chiefdelphi.com/forums/showpost.php?p=1556440&postcount=64
https://www.chiefdelphi.com/forums/showpost.php?p=1556450&postcount=72
https://www.chiefdelphi.com/forums/showpost.php?p=1557634&postcount=182
https://www.chiefdelphi.com/forums/showpost.php?p=1558613&postcount=193
https://www.chiefdelphi.com/forums/showpost.php?p=1559443&postcount=207
https://www.chiefdelphi.com/forums/showpost.php?p=1602654&postcount=223
https://www.chiefdelphi.com/forums/showpost.php?p=1608747&postcount=265
https://www.chiefdelphi.com/forums/showpost.php?p=1608753&postcount=268
https://www.chiefdelphi.com/forums/showpost.php?p=1557355&postcount=11
https://www.chiefdelphi.com/forums/showpost.php?p=1557542&postcount=13
https://www.chiefdelphi.com/forums/showpost.php?p=1557782&postcount=17
For me, the outline that follows is the way I would want to approach A) creating a solid understanding of the need (or lack thereof) for adding video to the refs' tools, and B) coming up with a first version of a video system, if developing one is warranted.
The "hard data" would be the results (measurements & statistics) produced by the experiments.
Obviously this is a back of the napkin, discussion-forum-quality sort of an outline - Not even PowerPoint quality yet.
The current system (FIRST) being discussed is a system containing many things, including competition events that contain, at the least, a Playing Field & Game Pieces, the Field Staff (announcers, refs, etc.), the participating Teams, the Match/Game/Robot rules, the Audience, the Matches/Schedule, and the field Computers/Sensors/Software.
We are talking about introducing Video Replays into the FRC (and FTC ...) event part of that FIRST system.
We need to know the pertinent parts of the current baseline system's status/performance, the current system's purpose, and the sensitivity of the system's ability-to-achieve-its-purpose(s) to changes in the independent variables we are going to adjust.
Some useful metrics might be:
- Call accuracy
- Current call challenge outcomes
  - Was a call changed?
  - Was the result accurate?
- Match outcomes affected by call accuracy and by challenges
- Event outcomes affected by call accuracy and by challenges
- System-purpose outcomes affected by call accuracy and challenges
- Calls (i.e. rules) that could/would/should be affected
  - With perfect video
  - With less-than-perfect video
- Video usefulness vs. equipment performance/placement
- Equipment purchase costs vs. equipment performance
- Non-equipment-purchase costs: time, labor, maintenance, shipping, training, etc.
- Video alternatives (more humans, rule/game changes, anything else?)
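Several of those metrics could be captured as one log record per CHALLENGE. Here's a minimal Python sketch; the record type, its field names, and the sample values are all hypothetical, chosen only to mirror the metric list above:

```python
# A sketch of a per-CHALLENGE log record an event could collect.
# The type and field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ChallengeRecord:
    event: str             # event identifier
    match: int             # match number
    call_changed: bool     # was the call changed after review?
    result_changed: bool   # did points/winner change as a result?
    video_quality: str     # e.g. "perfect", "usable", "unusable"
    review_seconds: float  # wall-clock time the review consumed

def overturn_rate(records: list[ChallengeRecord]) -> float:
    """Fraction of challenges that changed the call."""
    if not records:
        return 0.0
    return sum(r.call_changed for r in records) / len(records)

# Two invented sample challenges from a fictional offseason event:
log = [
    ChallengeRecord("OFFSEASON1", 12, True, False, "usable", 90.0),
    ChallengeRecord("OFFSEASON1", 31, False, False, "perfect", 45.0),
]
print(overturn_rate(log))
```

Collecting records in this shape at the "control" and experimental events alike would let the before/after comparisons below be computed mechanically rather than from memory.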
In the experiments, I would want to:
- Compare and contrast off-season events with regular-season events and the various championships, to identify how they are alike, how they differ, and what effect that has on the results collected at each type of event.
- Compare and contrast multiple locations, times of day, levels of human training, etc., as part of trying each candidate method for accomplishing the event's purposes (and sub-purposes).
  - These locations would include change-nothing "control" events.
- Compare and contrast multiple years' (multiple games') results.
- When testing each/any alternative, determine "truth" (the baseline data) by analyzing data (not in real time) collected by a plethora of sensors. These are separate from the sensors/methods being tested, and the "truth" is not shared during the event.
- In some circumstances (off-season events?), purposefully stress each alternative by having teams challenge calls in bursts and/or continuously--to stress-test the alternative, not because of actual disagreements.
- If necessary, have teams create difficult-to-assess situations, so that reviewing calls requires more than a trivial glance at the video records or other evidence.
What's above is a quick-and-dirty outline of what I would *want* to do to produce "hard data". After dealing with real-world constraints, thinking a bit more deeply, and getting some preliminary results, I, or whoever, might decide the experiments could be simplified without violating the integrity of the results, or might add something.
I know there are folks who firmly believe that the need for (or cost of) video reviews is/isn't so obvious, that what I outlined above isn't necessary. I don't disagree that they feel that way. I do say that nothing in this thread so far *proves* that the need does/doesn't exist, and/or that a need would justify the investment (instead of investing in satisfying other needs).
Blake
PS: In the past, I and at least one other person have wished for detailed camera/lens specs and placement info. That would be one example of "hard data", and could be used to answer some important questions; but it's just one part of the bigger picture under the heading of "Video Review Needs to Happen Now".
PPS: Above I have some bullets about identifying which calls could/should/would be affected by reviewing video. Complementing that, I'm not sure whether deciding what the effect of a changed call should be is part of designing each/any experiment (it probably is). Regardless, it is certainly something that would factor into any decision to introduce (or not) video replays into the system.
Have any events actually had an overturned call yet? There was one rescoring that was mentioned, but otherwise no event has had something overturned or adjusted. The primary concern is that most video review overturns would result in replaying matches, and thus the added schedule risk. It's not fair to penalize alliances for their strategic behavior based on the score/field conditions that are presented to them in real time, and then go back and adjust those via video review. For instance, if a team crosses a defense 3 or 4 times to damage it, they can't get the extra time they wasted crossing that defense back. Or if a team makes a call to ensure a capture at ~15 seconds rather than scoring an extra couple balls based on their real time scoring, then have that scoring adjusted when you give the other alliance a breach that didn't exist previously. Virtually any case where errors are found in video review should mandate a replayed match. Thus the schedule concern.

As a coach, I'm happy to agree with this frustration. But we've gone years with this being the case in real-time scoring, since as a referee it's completely normal to correct these things in non-video reviews before posting the final score. It's annoying, but it's been part of the game since the advent of real-time scoring under T20 (identical to T15 last year). As long as it's clear when there will or won't be a replay, it's up to us coaches to factor it into our strategy.
Blake, there's one other data point that I think could be useful in that analysis:
# of matches where video replay was available for use. That is, how many matches at the event in question could have possibly had a review? (Basically, how many matches did the event play? Unless, of course, it's elims-restricted, in which case it's how many elims matches were played.)
And the reason that particular data point could be useful is that it gives a broad-spectrum picture of how heavily video replay could be used, if available. Just as an example: if across 5 offseason events (each offering replay review) with 50 matches apiece, replay is used 5 times and the call on the field is overturned* once, then you could say there's a 2% use rate and, when used, a 20% overturn rate--but the overall overturn rate is 0.4%, because only one call in 250 matches was overturned.
That sort of data can be used to figure out timing of events (and what the limits for review are) and ref performance. Both those numbers could be pretty important, the latter feeding into better ref crews if need be.
*Change of any sort to the result of the match--points, winner, you get the idea.
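The arithmetic in that hypothetical is worth making explicit, since the three rates answer different questions (how often review is invoked, how often it matters when invoked, and how often it matters overall):

```python
# Reproducing the rates from the hypothetical 5-event example above.
events = 5
matches_per_event = 50
reviews_used = 5   # total replay reviews across all 5 events
overturned = 1     # calls changed as a result of review

total_matches = events * matches_per_event      # 250 reviewable matches
use_rate = reviews_used / total_matches         # 5/250  = 2%
overturn_when_used = overturned / reviews_used  # 1/5    = 20%
overall_overturn = overturned / total_matches   # 1/250  = 0.4%

print(f"{use_rate:.1%} {overturn_when_used:.0%} {overall_overturn:.1%}")
```

The gap between the 20% conditional rate and the 0.4% overall rate is the point: replay can look decisive when invoked while still being nearly irrelevant to the event as a whole.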
I'd suggest that "control" events track number of scoring changes (team or field staff initiated), and number of (denied, obviously) requests for review.
Blake, there's one other data point that I think could be useful in that analysis:
...
...
After dealing with real-world constraints, thinking a bit more deeply, and getting some preliminary results; I, or whoever, might decide the experiments could be simplified without violating the integrity of the results, or they might add something.
...
Go for it!
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.