3419's Post-Game Video Review Process

This year we implemented a useful new process for reviewing our matches after they're played. We combine four videos:

- A video taken by a student in the stands covering the entire field
- A video taken by a student in the stands focused on our robot
- A recording of the video feed shown on the driver’s laptop (compressed and streamed from a Jetson over GStreamer)
- A video rendered from logging data by a custom visualizer

We then tile them together and watch the full video on a big screen in the pits. It was super useful at the Hudson Valley Regional last weekend and we’ll be doing it again in NYC in two weeks.

Have a look:

If you have any questions about how we do it, please ask.

Sounds like an awesome way to review each match and improve on strategy and the like. But the link is broken. :slight_smile:

Whoops, sorry, it’s public now.

That is pretty awesome. :cool: The on-board view is especially cool, and it’s clear that it would be really helpful to the driver in knowing how accurately they’re lining up when trying to pick up and place gears.

I’m curious as to what big ‘ah-ha!’ moments your team may have had in reviewing your match footage from so many perspectives.

The biggest realization we had after watching the first few games was that we were taking a sub-optimal route between the airship and the gear-drop station. Our original plan was to shuttle back and forth on the side of the field with the gear-loading stations because that is the shortest trip. What we found, though, is that path is often very crowded, while the middle of the field is wide open. Our robot has a pretty fast swerve drive, so going diagonally across the field each time is actually pretty quick and easy - much faster than fighting through the traffic. We also had our programmer add a button to the second joystick that would rotate the robot towards that spring while we were driving across the field - that way when we get to the spring, we’re already facing the right way to drop it on quickly. You can see that happen a few times in the video above.
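
If anyone wants to do something similar, the button logic is roughly this shape. This is a simplified sketch rather than our actual code - the class name, the spring heading, and the gain are all made up - and it assumes a holonomic drive that accepts a rotation rate in [-1, 1] alongside the driver's translation, plus a gyro that reports heading in degrees:

```java
// Simplified sketch of a "rotate toward the spring" button - not our actual robot code.
public class SpringAlignHelper {
    private static final double SPRING_HEADING_DEGREES = 60.0; // made-up field-relative angle toward the spring
    private static final double kP = 0.02;                     // proportional gain; would be tuned on the robot

    // Called every control loop; returns the rotation rate to blend with the driver's translation.
    public static double rotationCommand(boolean alignButtonHeld, double gyroHeadingDegrees) {
        if (!alignButtonHeld) {
            return 0.0; // driver keeps normal rotation control when the button is up
        }
        double error = SPRING_HEADING_DEGREES - gyroHeadingDegrees;
        // Wrap the error into [-180, 180] so the robot turns the short way around.
        error = ((error + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
        return Math.max(-1.0, Math.min(1.0, kP * error));
    }
}
```

While the button is held, the driver keeps translating across the field and the robot spins itself to face the spring, which is what you see happening a few times in the video.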

This system looks really cool, and our team was doing a smaller-scale variant of this at a few of our events. I have a few questions: How did you all go about syncing up the video pieces, or was that automated? Also, what sort of turnaround time did you see from recording the video to the pit analysis?

Do you know if it’s possible to do something like this with the streams created by a USB webcam being shown on the Driver Station (e.g. a Microsoft LifeCam 3000)?

Sweet gear pickup and placer.
One nice thing about going diagonally across the field is that you stay away from the opposing alliance’s loading station and away from potential penalties.

How much time does it take to put one match together with these views?

I’d be curious to know this too. Do you do this after every quals match or only a key few? With some matches within minutes of others, do you always have the time?

I’m picturing the workflow that I would know how to do - grabbing SD cards from people, copying video files to my Mac, importing them into software like Final Cut Pro etc., synchronizing/trimming clips, then outputting them without stuttering (e.g. rendering a final product) could be time consuming. Do you have specialized software? Really fast hardware? Or just some other workflow?

Who watches? Drive team and coach, obviously, I presume. Pit crew? Scouts? Software?

Finally, do you find that your student recording the whole field has a good enough viewing angle? From the low bleachers at Georgian College, there was nowhere to stand where your view of the other side of the airship wasn’t blocked.

Lots of good questions in there! Let me try to respond to them all:

"How did you all go about syncing up the video pieces or was that automated?"
We built a small C# Winforms application where someone on the team can pick each of the videos from SD cards inserted into a laptop. Then there is a “preview” button which pulls up the individual video and the person notes the start time of the game relative to the start of the video. That number is then inputted into the WinForms application and is used in the process of tiling them together. We thought about trying to automate this based on the sound of the horn at the start of the game, but never got around to doing it.

**"What sort of turnaround time did you all see from recording the video to the pit analysis?"**
It takes a few minutes for the video team to gather the SD cards and return to the pits, and the tiling program itself takes a few minutes to run. All told, the video is ready about 15 minutes after the match ends.

**"Do you know if it’s possible to do something like this with the streams created by a USB webcam being shown on the Driver Station (e.g. a Microsoft LifeCam 3000)?"**
We did this last year. Basically, we took the source of the SmartDashboard’s WebCamViewer plugin and modified it to save each frame to a JPG file on the laptop. Afterwards, we ran an ffmpeg command to stitch the JPGs together into a video file. If you want more details, I can try to find the source code for the dashboard extension and the ffmpeg command. Let me know.
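
In the meantime, here’s a rough sketch of the frame-saving part. It isn’t the actual extension code - it just assumes that, wherever the viewer hands you a frame as a BufferedImage, you call something like this:

```java
// Rough sketch of the "save each frame to a JPG" idea - not the actual SmartDashboard extension.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class FrameRecorder {
    private final File outputDir;
    private int frameIndex = 0;

    public FrameRecorder(File outputDir) {
        this.outputDir = outputDir;
        outputDir.mkdirs();
    }

    // Call once per received frame; writes frame_000000.jpg, frame_000001.jpg, ...
    public void record(BufferedImage frame) {
        try {
            File out = new File(outputDir, String.format("frame_%06d.jpg", frameIndex++));
            ImageIO.write(frame, "jpg", out);
        } catch (Exception e) {
            e.printStackTrace(); // a failed write shouldn't take down the dashboard
        }
    }
}
```

Afterwards, a command along the lines of `ffmpeg -framerate 30 -i frame_%06d.jpg driver_cam.mp4` turns the numbered JPGs into a video; the frame rate you pass should roughly match how fast frames actually came in.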

**"Sweet gear pickup and placer."**
Thanks! We really focused on that this year, at the expense of having any ball shooting capabilities.

**"Do you do this after every quals match or only a key few? With some matches within minutes of others, do you always have the time?"**
We tried to do it after each qualification match, but you are correct that sometimes if you have games in quick succession there is not enough time.

**"I’m picturing the workflow that I would know how to do - grabbing SD cards from people, copying video files to my Mac, importing them into software like Final Cut Pro etc., synchronizing/trimming clips, then outputting them without stuttering (e.g. rendering a final product) could be time consuming. Do you have specialized software? Really fast hardware? Or just some other workflow?"**
We built the tool described above to collect the different inputs, sync the start times, and then run ffmpeg to tile them. No crazy hardware, just an IBM laptop that’s a few years old. The actual production of the video takes about 5 minutes once the inputs have been selected.
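
To give a flavor of the tiling step, here’s a rough sketch of what the tool ends up running. This is not our real application (that one is the C#/WinForms tool described above); the clip size, the 2x2 layout, and the class name here are just assumptions, but the idea is the same: shell out to ffmpeg with each video’s match-start offset, scale the clips, and stack them:

```java
// Rough sketch of tiling four clips with ffmpeg - not the team's actual C#/WinForms tool.
import java.util.ArrayList;
import java.util.List;

public class MatchTiler {
    // videos and startOffsetsSeconds are parallel arrays of four entries:
    // full field, robot cam, driver-station capture, and visualizer output.
    public static void tile(String[] videos, double[] startOffsetsSeconds, String output)
            throws Exception {
        List<String> cmd = new ArrayList<>();
        cmd.add("ffmpeg");
        for (int i = 0; i < 4; i++) {
            cmd.add("-ss");
            cmd.add(String.valueOf(startOffsetsSeconds[i])); // skip to the match start in this clip
            cmd.add("-i");
            cmd.add(videos[i]);
        }
        cmd.add("-filter_complex");
        cmd.add("[0:v]scale=960:540[a];[1:v]scale=960:540[b];"
              + "[2:v]scale=960:540[c];[3:v]scale=960:540[d];"
              + "[a][b]hstack[top];[c][d]hstack[bottom];[top][bottom]vstack[out]");
        cmd.add("-map");
        cmd.add("[out]");
        cmd.add(output);
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```

The offsets are exactly the match-start times the student notes in the preview step, so all four views line up when the tiled video plays.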

**"Who watches? Drive team and coach, obviously, I presume. Pit crew? Scouts? Software?"**
Certainly the drive team and coach watch. The pit crew as well since we’re doing this all in the pits. And this year our programmer is our driver, so he’s in on the action.

**"Finally, do you find that your student recording the whole field has a good enough viewing angle? From the low bleachers at Georgian College, there was nowhere to stand where your view of the other side of the airship wasn’t blocked."**
Our first regional was at Hudson Valley last weekend. The photographer for the whole field was right at the top of the bleachers, and you are correct that he still didn’t have a great view of the field. Big areas were blocked by the airships.

Really impressive setup! I particularly love the data side-by-side with video.
How does your custom visualizer work?

Thanks! There are a few components:

  1. In the robot code, in each iteration of the main control loop, we are sending various data items to a Network Table. The data includes electrical data, encoder values, joystick state, and a time stamp of milliseconds since the game started.
  2. On the laptop, we have a SmartDashboard extension with a class that, given a bunch of data and a Graphics object, draws the data to the graphics object. We use that to draw the logging data to the screen in real time.
  3. The extension also writes all of the data to a CSV, one row each time it sees a new time stamp.
  4. After the game, a Java program parses the CSV and uses the same drawing class, but this time writes each line to a separate image file (rather than to the screen).
  5. We use ffmpeg to convert the series of image files into a video. A rough sketch of steps 4 and 5 is below.
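
The sketch below isn’t our exact code - the CSV file name and the drawing are simplified stand-ins, and the real drawing routine is the one shared with the live dashboard in step 2 - but it shows the shape of the render-and-encode step:

```java
// Rough sketch of steps 4 and 5 - render each CSV row to a numbered PNG, then let ffmpeg
// stitch the sequence into a video. Not the team's exact visualizer code.
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import javax.imageio.ImageIO;

public class LogRenderer {
    public static void main(String[] args) throws Exception {
        int frame = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader("match_log.csv"))) {
            reader.readLine(); // skip the header row of column names
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = img.createGraphics();
                drawTelemetry(g, fields); // in the real tool this is the shared dashboard drawing class
                g.dispose();
                ImageIO.write(img, "png", new File(String.format("frame_%06d.png", frame++)));
            }
        }
        // Afterwards, something like:  ffmpeg -framerate 50 -i frame_%06d.png visualizer.mp4
        // where the frame rate matches the robot loop rate that produced the rows.
    }

    // Stand-in for the shared drawing routine described in step 2.
    private static void drawTelemetry(Graphics2D g, String[] fields) {
        g.setColor(Color.WHITE);
        g.drawString(String.join("  ", fields), 10, 20);
    }
}
```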