Offseason Video Review Pilot - Volunteers?

I know things on the last thread about video review got heated very quickly, and debating the merits vs drawbacks of the system as of right now isn’t going to help anyone. However, one really important thing that came up was a number of amazing offseason event coordinators volunteering to pilot video review systems at their events. Doing an in-depth trial of such a system is the only way I can think of to end the debate and/or prompt FIRST to implement official review.

Please read as much of this thread as you can get through: http://www.chiefdelphi.com/forums/showthread.php?threadid=145650

Then post here if:

  • You know of or happen to be an offseason event coordinator (shoutouts to JohnFogarty, MrTechCenter, and Ryan Dognaux so far) who would be interested in trying this out (feel free to ask for volunteers)
  • You can assist above event coordinators in implementing such a system
  • You have ideas on rule variations that could be tried at these events and reasons why they might work better

Please do NOT post for:

  • Simply saying you don’t like a proposed rule idea. It’s ultimately up to event coordinators what they want to try, though I’d suggest you guys not all use the same ruleset so we can see what works better. If you have an improvement on a proposed variation, go for it.
  • Continuing the debate about video review without having tested it. That’s what the other thread was for, and nothing really got accomplished. However, if you HAVE tried it out and seen problems, please share them.

From the previously mentioned thread, here is a ruleset proposed by user Donut, who has experience both as a robot driver and as a referee:

  • Each Alliance is allowed one challenge/review in the playoffs. Unlimited challenges would result in reviewing every somewhat-close match, since no one wants their event/season to end. If the system works, it could be expanded to give each team one challenge across all qualification matches as well, but no more, to limit the number of reviews.
  • The Head Ref leads the review process and delivers the final call, with other refs who were involved in that call assisting if necessary. The rest of the refs keep the next match getting set up correctly so that minimal time is added to normal field reset. Adding a review process should not require an increase in referee headcount, given how difficult it already is to find referee volunteers.
  • Only match scoring errors (and penalties that lead to an automatic score) can be reviewed. This year that would mean defense crossings, challenges, scales, autonomous points, G13, G28, and boulders (though I am not sure reviewing an automatically scored element is reasonable; counting balls/disks scored in a match from video is time consuming and more prone to error). Fouls are not reviewable, as it is not easy to determine from video which fouls were assessed, and many involve a judgement call by a referee who has a better view than a camera or a driver in their station will.
  • The score (or missed score) being reviewed must be significant enough to affect the outcome of the match (or an RP being awarded, for games like this year’s). Reviewing whether a crossing was awarded in autonomous or teleop in a 40-point blowout is a waste of time, and the implications for ranking tiebreakers are not significant enough to justify the resources.
  • Video evidence must be indisputable to change a call. The point is to receive credit for an obviously missed score, not debate further a close call that a referee already used their best judgement on (such as barely breaking contact with the Outer Works and Sally Port door).

Variations I’ve seen to this point include: allowing foul reviews; having an extra volunteer watch reviews at a team’s request and only calling the Head Ref over if there’s actually a valid dispute; allowing additional reviews so long as teams are proven correct on each of their challenges; and allowing/disallowing fan/media rep videos. There are plenty of ways to do it, and the best way to see what works best is to try them all out. Let’s make it loud and show what the community can do when we care about an issue.

We would be interested at TRI; we have already gone to video to fix some things that happened in 2014. It’s an offseason, and as the event coordinator I want all teams to leave the event happy, not thinking that they got the raw end of a missed call. Students leaving events feeling good about their accomplishments, rather than down/annoyed about a mistake, is a better way to inspire them.

As a ref this year, if there’s an off-season event anywhere near where I live currently (Northern California), I will volunteer to help test this system.

I will add that I am one of the people who doesn’t necessarily see this as the right solution to the problem, but I want to see what areas of an event are affected positively and negatively and how to improve from there.

We host one in Davis in October, we’d love to have you!

Send a PM my way with more details on keeping up/getting involved and I’ll do my best to be there!
Luckily you guys are only about an hour and a half away from me :)

We actually had a video review at THOR in 2014. In elims, one alliance thought that the refs missed a high goal score. We went back in the stream and watched the match again (it was kinda weird watching something that had just happened) and found that the refs had been right, so nothing was changed. The teams were relatively satisfied with the outcome, iirc.

Since I’m not an offseason coordinator anymore I can’t offer an event for the pilot; I just wanted to show that it has been done at offseasons in the past and has worked.

I did take a look at something (non-match-related) at last year’s Fall Classic. Turned out to be nothing. Used the stream playback.

If I got something worked out, though, I think I could get review without needing to use that stream… but I’d need to talk to the planning committee for the Classic in order to make it work. (I wouldn’t necessarily rely on the stream; with 3 cameras, the one watching the action might not be the one being shown.)

I am a ref and would volunteer for one near Milwaukee.

I’m not in favor of this for a multitude of reasons; however, I’m also a big fan of “don’t knock it if you haven’t tried it”.

As a ref this year (and I think the rules need to state that the reviewer must be a current-season qualified ref - there are A LOT of nuances, especially this year), I’d be happy to volunteer to try this at an offseason event. I may even travel for it (I live in NYC).

This is something I love about FRC - we aren’t afraid to try these things as a community.
FIRST itself looks for proven solutions - and I don’t blame them! Investing a few thousand dollars implementing a system that may or may not actually help the final outcome doesn’t make sense from their perspective.

But we FIRST-ers are tenacious. I’ll see if I can’t assist in this process at any events I help out with this off-season, but I may be too busy helping my teams to help much with this effort.

Lots of excitement here - That’s good.

But, so far, no evaluation criteria have been proposed - that’s a little cart-before-the-horse, but, as you know, it can be remedied.

Captain Obvious says: If you want to maximize each experiment’s value, each one needs to be rigorously evaluated, using criteria that match the conditions and constraints found in a regular-season event. If those criteria are consistent across all the experiments, even better.

Collecting the data correctly is likely to be as boring as watching paint dry, but lots of anecdotes that aren’t accompanied by hard data will prolong the debate, not shorten it.

Also, don’t forget to create proper bills of materials that include everything (HW & SW & people) necessary to make each system work. Just because one site might have some cool tools lying around, or people with specialized skills, doesn’t mean every other site will too.

Blake

I couldn’t have said it better. This effort is awesome and I hope it leads to some positive change down the road.

In order for tests to provide useful results, the capture/review system should be tested separately from variations on the rules for its use. This post addresses capture/review system design and evaluation, with a focus on identifying the minimal capture system that achieves a meaningful outcome.

Variations to evaluate:

  1. 1080p vs 720p
  2. Frame rate differences
  3. Shutter speed differences
  4. Time synced vs non-time synced video
  5. Number and position of cameras

I’d recommend going through every match and having independent teams score matches off of the video system with these variations, and then comparing the results. For example, one team would score a match using high midfield cameras only. This would benefit from a careful experimental design.
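If it helps, here’s a rough sketch (in Python, with a made-up data layout) of what that comparison step could look like: each review team’s score per match under a given camera setup is checked against a reference score, such as the score from a review with every camera available, and the agreement rate per setup is tallied.

```python
from collections import defaultdict

# (match, camera setup) -> score produced by an independent review team.
# The data layout and match labels are hypothetical, for illustration only.
reviewed = {
    ("QF1-1", "midfield_only"): 85, ("QF1-1", "defense_cams"): 92,
    ("QF1-2", "midfield_only"): 60, ("QF1-2", "defense_cams"): 60,
}
# Reference score per match, e.g. from a review with every camera available.
reference = {"QF1-1": 92, "QF1-2": 60}

agreement = defaultdict(lambda: [0, 0])  # setup -> [matches agreed, matches reviewed]
for (match, setup), score in reviewed.items():
    agreement[setup][0] += int(score == reference[match])
    agreement[setup][1] += 1

for setup, (agreed, total) in sorted(agreement.items()):
    print(f"{setup}: {agreed}/{total} matches matched the reference ({agreed / total:.0%})")
```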

Here’s a first cut at a rig for Stronghold that would allow you to test these variations.

4 defense cameras – each one looking across the field at the top of the field barrier, along the leading and trailing edges of the defense shields, to be used to determine whether a crossing was successful.

2 rear-facing castle cameras – looking down from the top of the back of the castle. These would be wide-angle cameras used to determine the ball count in the castle.

2 front-facing castle cameras – these look toward the field from the top of the castle, and each captures about 2/3 of the field. These are used to evaluate which scenarios field-end cameras would help with.

2 high mid-field cameras – these are wide-angle views of the field from the center on each side. These are used to evaluate which scenarios side cameras would help with.

Technical

All cameras should be timecode synchronized, so that the reviewer can be confident that different views represent the same moment in time. Cameras should capture at 1080p, 30 fps, with a shutter speed no longer than 1/500 sec and at least 40’ of depth of field. (Some venues [e.g., HS gyms] may require additional lighting to support this shutter speed, depending on the camera.) Cameras should be mounted in a way that minimizes vibration.

The viewing station needs to support side-by-side viewing of multiple 1080p streams at full resolution, in timecode synchronization. It should be possible to record matches and review previous ones at the same time.
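As a very rough starting point, here’s a minimal capture sketch in Python using OpenCV. Everything in it is an assumption: one USB camera per capture process, capture machines kept in sync via NTP, and a wall-clock timestamp burned into each frame as a stand-in for the true hardware timecode/genlock described above, which would be more precise.

```python
import time
import cv2  # assumes OpenCV is installed (pip install opencv-python)

CAMERA_INDEX = 0                      # hypothetical: which camera this process records
OUTPUT_FILE = "defense_cam_0.mp4"     # hypothetical output file name

cap = cv2.VideoCapture(CAMERA_INDEX)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # request 1080p30; the camera may not honor it
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

# Use the resolution the camera actually delivers so the writer matches.
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(OUTPUT_FILE, cv2.VideoWriter_fourcc(*"mp4v"), 30, (width, height))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Burn a shared wall-clock timestamp into the frame so the separate
        # camera files can be lined up during review.
        stamp = f"{time.time():.3f}"
        cv2.putText(frame, stamp, (20, 60), cv2.FONT_HERSHEY_SIMPLEX,
                    1.5, (0, 255, 0), 3)
        writer.write(frame)
finally:
    cap.release()
    writer.release()
```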

Not that I disagree with the prospect of video replays (though I have my own reservations about it), but where does the budget for the proposed 10 “necessary” 1080p cameras come from? I would much rather see something like Ryan Dognaux’s St. Louis setup, which any event can put together at minimal cost.

Maybe Ryan’s initial estimate of what a successful system would contain was incomplete …?

To the folks planning to experiment with video replay/review of challenged calls this year, please take a look at what Tristan and I wrote here: Related thread.

My proposal isn’t for the actual system that would be deployed to every field. It’s a proposal for what’s needed to determine what sort of video replay system is necessary. The goal is to answer, with data, the question: “what’s the minimal setup needed to correct most incorrect referee calls?”

The best way to do this is to capture a bunch of data and score every match under differing sets of assumptions, using independent reviewers. Setting up a camera or two and only allowing a maximum of 8 samples (one challenge per alliance, playoffs only) won’t provide enough data to convince anyone to act.

The question I’m trying to answer is what percentage of the time each of these issues prevents a review from correcting a missed call:

  • lack of time sync
  • low resolution
  • slow shutter speeds (blurry)
  • bad camera mounting (blurry)
  • lack of depth of field (blurry)
  • is 30fps adequate?
  • bad point of view

For example, consider the FPS value. A robot moving at 10 feet per second moves 4" per frame at 30 frames per second. Is that enough that, in most situations, the correct call can be made?
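For reference, here’s a quick script to sanity-check that arithmetic at a few speeds and frame rates (nothing assumed beyond the 12 in/ft conversion):

```python
# Inches traveled between consecutive frames at a few robot speeds (ft/s)
# and frame rates. 10 ft/s at 30 fps works out to 4" per frame, as above.
def inches_per_frame(speed_ft_per_s: float, frame_rate: float) -> float:
    return speed_ft_per_s * 12.0 / frame_rate

for speed in (5, 10, 15):
    for fps in (30, 60):
        print(f'{speed} ft/s @ {fps} fps -> {inches_per_frame(speed, fps):.1f}" per frame')
```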

I missed one thing on my list – you need the match time, synced with the video.

In tennis, each player gets X challenges. If the video challenge is called in favor of the player, they keep the challenge. If the player is wrong, the challenge is used up.

With the “one challenge each” approach described in the first post of this thread, I think a good variant is “one incorrect challenge.” That way, if a team suffers multiple bad calls, they can keep getting them corrected. However, they can’t waste time with multiple silly challenges.
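In case it’s useful to anyone scripting the scorekeeping for a pilot, here’s a tiny, purely hypothetical bookkeeping sketch of that variant: upheld challenges are “free,” and an alliance only loses the right to challenge after a denied one.

```python
# Hypothetical bookkeeping for the "one incorrect challenge" variant:
# upheld challenges are free; a denied challenge uses up the allowance.
class ChallengeTracker:
    def __init__(self, allowed_incorrect: int = 1):
        self.allowed_incorrect = allowed_incorrect
        self.incorrect_used = 0

    def can_challenge(self) -> bool:
        return self.incorrect_used < self.allowed_incorrect

    def record_result(self, call_overturned: bool) -> None:
        if not call_overturned:
            self.incorrect_used += 1

red = ChallengeTracker()
red.record_result(call_overturned=True)    # missed score credited; challenge kept
print(red.can_challenge())                 # True
red.record_result(call_overturned=False)   # original call stands; challenge spent
print(red.can_challenge())                 # False
```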

One problem in tennis is that towards the end of a match, a player will challenge just in case: because the challenge will soon be worthless anyway, they might as well take a chance and use it. Maybe FIRST can counter that with GP - only challenge if you really think the refs were wrong.

The other thing tennis does is have music and audience participation (clapping for drama) during challenges, which adds some ceremony. I bring up tennis because the sport resisted video replays for a LONG time, and they have worked out well.

Note: I’m not sure what I think of video challenges in FIRST, but that’s irrelevant to my posting ideas.

I’d love to assist anyone in the greater NYC region at offseason events trying to implement this solution.

I can help with any summer events in/near Chicago or, during other seasons, in/near NYC.