Hello all, this is the official thread for my 2020 Zebra Data Parser tool! I will be providing updates to its GitHub page as needed throughout the season, and making note of updates in this thread.
This season, I will be using zones as defined in this thread. I have uploaded images showing the zones and zone IDs onto GitHub.
My first update is actually just a quality of life update to the 2019 parser. I will be incorporating 2020 specific zones soon. Changes since v4 (the version used in blog post 3) include:
Added support for direct data download from the TBA API. Thanks to the TBA team for developing this API. Make sure to check out the TBA match playback tool, it’s pretty sweet. Note that for 2019 data, TBA handles start times differently than I did for the local data, so local data will lag TBA data by 1.7 seconds. For 2020, both the TBA data and the csv match start times should be synced with the FMS-reported start time, so hopefully there will be no discrepancy moving forward.
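For anyone who wants to pull the data themselves, here is a rough sketch of grabbing a match’s Zebra MotionWorks payload from the TBA API v3. The endpoint path, header, and JSON field names reflect my reading of the TBA docs and should be double-checked against them; the function names are mine, and you’ll need your own read API key.

```python
import json
import urllib.request

# Hedged sketch: the endpoint path and JSON shape below are my understanding
# of the TBA API v3 and should be verified against the official docs.
TBA_BASE = "https://www.thebluealliance.com/api/v3"

def fetch_zebra(match_key, auth_key):
    """Download the Zebra MotionWorks payload for one TBA match key."""
    req = urllib.request.Request(
        f"{TBA_BASE}/match/{match_key}/zebra_motionworks",
        headers={"X-TBA-Auth-Key": auth_key},  # your personal read key
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def team_track(zebra, team_key):
    """Pull (times, xs, ys) for one team out of a downloaded payload."""
    for color in ("red", "blue"):
        for team in zebra["alliances"][color]:
            if team["team_key"] == team_key:
                return zebra["times"], team["xs"], team["ys"]
    raise KeyError(team_key)
```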
Added support for convex polygon zones with up to 10 vertices. The upper limit is also now easily adjustable, so I can go higher if I need to. This was an important update for 2020, as there are a few pentagon and hexagon zones I am looking to use, and now I have that capability. I doubt I’ll ever add support for concave zones, as it would likely cause a noticeable increase in processing time just to build zones that aren’t intuitive to me.
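For reference, the standard containment test for a convex polygon just checks that the point lies on the same side of every edge. This is a generic sketch (not the parser’s actual code), and it works the same for a triangle zone or a 10-vertex one:

```python
# Convex-polygon containment via edge cross products: for a point inside a
# convex polygon whose vertices are listed in consistent (CW or CCW) order,
# the cross product of each edge with the vector to the point has one sign.

def point_in_convex_polygon(point, vertices):
    """vertices: list of (x, y) tuples in consistent winding order."""
    px, py = point
    sign = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # point is on the wrong side of this edge
    return True  # boundary points count as inside

# A hexagon zone is just six vertices:
hexagon = [(2, 0), (4, 0), (5, 2), (4, 4), (2, 4), (1, 2)]
print(point_in_convex_polygon((3, 2), hexagon))   # True
print(point_in_convex_polygon((10, 2), hexagon))  # False
```

This is also why concave zones would cost more: the one-sign shortcut only holds for convex shapes, so concave regions need a slower general-purpose test or a split into convex pieces.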
Finally, I added two options for data aggregation/smoothing. These work by combining multiple time-adjacent datapoints to achieve better location accuracy and reduce noise. Since we are not processing in real time, I can cheat a little bit and use moving averages incorporating both past and future points. This allows for easy lagless smoothing without the difficulty of building something like a Kalman filter. I may build a Kalman filter in the future if I see a need. I’ve added 3-point and 5-point moving average options. My current recommendation is to use the 5-point moving average, although I would like to investigate more. This option dramatically reduces the noise in the higher movement derivatives (acceleration and jerk) with what I consider to be minimal positional loss. The 5-point moving average effectively eliminates the frequency content we might see between 1 Hz and 5 Hz, but I think these kinds of movements are generally unimportant for the high-level analysis we are currently doing with the Zebra trackers.
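As a minimal sketch of what a lagless centered moving average looks like (my own construction, not the tool’s code): each smoothed sample averages the point with its neighbors on both sides, shrinking the window symmetrically at the start and end of the match so no samples are dropped.

```python
import numpy as np

def centered_moving_average(values, window=5):
    """Centered (past + future) moving average; window must be odd.

    Because the window is centered rather than trailing, the smoothed
    signal is not delayed relative to the raw one ("lagless").
    """
    assert window % 2 == 1, "window must be odd so it can be centered"
    half = window // 2
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for i in range(len(values)):
        lo = max(0, i - half)               # window shrinks near the edges
        hi = min(len(values), i + half + 1)
        out[i] = values[lo:hi].mean()
    return out

noisy_x = [0.0, 0.1, -0.1, 0.2, 0.0, -0.2, 0.1]
print(centered_moving_average(noisy_x, window=5))
```

A trailing 5-point average would delay every feature by two samples; using future points trades that lag away, at the cost of only working offline, which is exactly the cheat described above.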
Here are some plots showing the speed, acceleration, and jerk measurements for the first 12 seconds of 604 in match 1 at Chezy Champs. If you watch that match, you’ll see that 604 sits still for this time, so these graphs should all stay near 0 throughout.
First we have speed:
The 5-point average cuts the average speed in half, from 0.5 ft/s to 0.25 ft/s, or 6 inches per second to 3 inches per second. I think the important thing to note is that you can now have a cleaner cutoff threshold to distinguish stationary robots from moving robots. Unsmoothed, this threshold would probably need to be about 2 ft/s, but with the 5-point average this can be reduced to around 1 ft/s.
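As an illustration of the threshold idea, here is how a stationary/moving classification might look, assuming roughly 10 Hz position samples. The function names and the 1 ft/s default are mine, not the parser’s; `np.gradient` uses central differences, so the speed estimate is lagless in the same way as the smoothing.

```python
import numpy as np

# Illustrative sketch (my naming and defaults, not the parser's code):
# estimate speed from the position stream, then flag slow samples.

def speed(xs, ys, dt=0.1):
    """Speed magnitude in ft/s from x/y positions sampled every dt seconds."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    vx = np.gradient(xs, dt)  # central differences in the interior
    vy = np.gradient(ys, dt)
    return np.hypot(vx, vy)

def stationary_mask(xs, ys, dt=0.1, threshold=1.0):
    """True where the robot looks stationary; ~1 ft/s suits smoothed data."""
    return speed(xs, ys, dt) < threshold
```

On unsmoothed data the threshold would need to be raised to around 2 ft/s, as noted above, or stationary robots will constantly flicker in and out of the "moving" state.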
Next we have acceleration:
The gains start to become much more obvious here. The average acceleration drops from 6 ft/s^2 unsmoothed to 1.5 ft/s^2 smoothed. Remember that acceleration due to gravity is 32 ft/s^2, so the unsmoothed measurements show the robot’s acceleration fluctuating by as much as 0.5 g in random directions. Can you imagine how disorienting it would be to feel gravity’s direction change by up to about 30 degrees? That’s how the robot would “feel” with unsmoothed data. For another reference point, here is some data on the DC Metro Red Line braking to a stop. The maximum acceleration in that case was around 7 ft/s^2, so I think it’s wise to smooth measurements until a stationary robot’s readings get at least below that threshold.
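For context on why the raw numbers get so wild: acceleration and jerk come from differentiating position two and three times, and each numerical differentiation amplifies the measurement noise. A rough sketch of that pipeline (my construction, not the tool’s code):

```python
import numpy as np

def derivatives(xs, ys, dt=0.1):
    """Return (speed, acceleration, jerk) magnitudes from x/y positions."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    vx, vy = np.gradient(xs, dt), np.gradient(ys, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)  # 2nd derivative: noisier
    jx, jy = np.gradient(ax, dt), np.gradient(ay, dt)  # 3rd derivative: noisiest
    return np.hypot(vx, vy), np.hypot(ax, ay), np.hypot(jx, jy)
```

Each differencing step roughly scales any raw position jitter by 1/dt, so at 10 Hz a small position error grows by about an order of magnitude per derivative, which is why smoothing the positions first pays off most in the acceleration and jerk plots.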
Finally we have jerk:
The smoothing really shows its value here. The average unsmoothed jerk is 100 ft/s^3; smoothed, it’s only 16 ft/s^3. Using the same paper above, the highest jerk felt during a train stop was 40 ft/s^3.
I may do even more aggressive smoothing in the future, as even the values derived from 5-point averages seem pretty noisy to me. I’m unsure exactly how much smoothing to do, though. The best approach I can think of would be to correlate Zebra speed and acceleration data with a robot’s own internal odometry measurements from encoders, accelerometers, and the like. If any team at one of the Zebra events has good match odometry logs and is willing to share them with me, please reach out.
That’s it for now, I’ll be adding 2020 specific metrics soon.