Introducing Limelight for FRC

When I do some quick napkin math, the $400 also looks like an efficient way to spend money, even on a team with programmers familiar with Linux. If we figure vision takes two programmers eight hours (which I think is a conservative estimate for most FRC teams, since pure color segmentation is a lot simpler than what Limelight does), we're at 16 man-hours of work. Add 5 hours per programmer for setting up and configuring Linux, and we're at 26 man-hours. At that rate, if you value someone's time at, say, $10/hour, the Limelight has already paid for itself within two seasons.

This is without considering the price of a camera and coprocessor (which adds up very quickly if one is using a high-performance board), the time needed to mount cameras, and the robustness of the system. And this is all for a team with programmers capable of building similar systems; a lot of teams don't have this privilege, which makes the Limelight even more valuable to them. I don't think the $400 is nearly as much as it sounds.

if (PICTURE) {
   for (int a = 0; a < 1000; a++) {
      printf("words ");
   }
}

Beta testing team chipping in here with our feedback. Works really well for sure! Packaging was nice, PoE is super useful, and it took maybe all of twenty minutes to get working with our bot. We’d be happy to help answer any questions teams may have.

Definitely going to have to check this out in the lab :stuck_out_tongue:

Well, since you offered…

Based on your team's experience, roughly how long do you estimate it would take a team with zero experience with vision processing (and thus no existing vision code) to get this set up and running a simple target-lock program (i.e., find the target in view, rotate the robot until the target is centered)?

From what I can gather (and I'm not from a beta testing team, so I can't confirm), Limelight takes care of, or at least makes it easy to implement, the first part: identifying the target. This is the part of vision that is still typically the biggest barrier to entry for teams. Things like GRIP have made it easier, but the networking solution, lighting solution, co-processing, etc. are still not nearly as simple as Limelight appears to make them.

Rotating until the target is centered… well, that's relatively easy as long as you have someone who is loosely familiar with this type of task to begin with and your tolerance isn't too tight. The 90 fps may mean you don't really see the oscillation typical of a poorly implemented P-loop, but a very naive programmer may still take a little while to implement "rotate until target is centered" in a way that you'd like during competition. In the past, when we got vision working with a camera and the Kangaroo PC, we spent a non-trivial amount of time tuning code so that it would quickly and consistently center. We could get it to quickly be close to centered, or to slowly get very centered, but doing both was hard, partially because of the lag in our vision processing.

The nice part is that it lets your frustration shift from "I don't know what components I need or how to do this" to "It's doing a thing, but just not quite as well as I would like it to".

We accidentally hooked up the 12V power backward… no problem, still worked fine :slight_smile:

I'm not the original poster who offered help, but I am a beta tester, and the 30-60 minutes to acquire a target is very reasonable. The longest part we found was getting online with the device using mDNS, and that has a lot to do with using a Windows 7 laptop. To acquire the image, use this very common workflow:


1. Turn down the exposure.
2. Using the threshold view, tune H (hue).
3. Tune S (saturation).
4. Tune V (value).

You can be done for now. Tuning the other filters can be done later, when you're ready to make things even more reliable, and all of these steps and sliders are well documented online. But we want to see the robot react to the data.

Assuming you have a working, driving robot, we won't add that time to the workflow. Now create a new function or edit your existing drive code (a rough code sketch follows these steps):


1. Create a NetworkTable object for "limelight".
2. Get the variable tx from the table.
3. Multiply tx by a simple Kp (proportional) term, maybe 0.5 or 0.05.
4. Insert the result into the turn factor of your drive control.
5. Test: if the robot spins out of control, multiply the result by -1; if it shakes, cut Kp in half or by a factor of 10.
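Here is a minimal sketch of those five steps in Java, assuming the newer NetworkTables API (NetworkTableInstance) and drive code that can accept a turn value; the "limelight" and "tx" names come from the posts above, but the gain is just a starting guess:

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SimpleAim {
    // Starting guess for the proportional gain; halve it (or divide by 10) if the
    // robot shakes, and flip the sign if it spins away from the target.
    private static final double KP_AIM = 0.05;

    private final NetworkTable limelight =
            NetworkTableInstance.getDefault().getTable("limelight");

    // Returns a turn command for your drive code,
    // e.g. drive.arcadeDrive(throttle, aim.getTurn());
    public double getTurn() {
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset to target, degrees
        return KP_AIM * tx;
    }
}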

This is a down-and-dirty way to get things working. From here, more time is spent dialing in and experimenting with the different filters and sliders.

If I weren't on my phone I would post more exact code, but again, there are samples for NetworkTables all over, even on the Limelight web page.

Before your device arrives, download GRIP. Create a simple vision pipeline with only an HSV Threshold filter and Find Contours, then post the results to the robot's NetworkTables at 10.TE.AM.2 or (roborio-frc-team.local — is that right? I always have to look that up).
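If it helps, that GRIP-style pipeline boils down to roughly the following OpenCV (Java) sketch; the HSV bounds are placeholders you'd tune for your own target, not values from this thread:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HsvContourPipeline {
    // Placeholder HSV bounds -- tune these with the threshold sliders for your target.
    private final Scalar lower = new Scalar(60, 100, 100);
    private final Scalar upper = new Scalar(90, 255, 255);

    // Thresholds a BGR frame and returns the contours that survive.
    public List<MatOfPoint> process(Mat frame) {
        Mat hsv = new Mat();
        Mat mask = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, lower, upper, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}

Publishing the resulting offset back to the robot then uses the same NetworkTables entry calls shown earlier, with setDouble in place of getDouble.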

Consider a few things before thinking the Limelight is a magic bullet that will solve that scenario.

Using a camera as the primary feedback sensor for a PID has never been a great idea, and (probably) still isn’t for a Limelight. At 90fps (11ms) +network lag, you’re still potentially on the edge of what an IterativeRobot control loop needs in order to get updated readings every cycle. For control loops faster than 50hz, using the camera for closed-loop PID is even less desirable. Since we don’t want network lag or (e.g.) NetworkTables issues* to be a factor (ever) in our turn-to-angle code, we probably want to use localized sensors (gyro, encoder, potentiometer, etc) to perform turn-to-angle directly. Limelight then reduces the ‘settle’ time to get a good image with an accurate target angle, while also increasing the reliability of the vision system as a whole.

*This isn’t a neg against the NT guys/gals. The code is pretty solid, but there are a few nuances to be aware of.

Agreed. I didn't word it as clearly, but using other sensors in conjunction and/or accounting for the momentum of the robot would likely be important. I can't say how bad the lag will be on Limelight, but I do think it will be an improvement over what we had before.

Yes, true, I didn't get that into my post, mainly because it was getting so long and I was on my phone. For a new-to-vision team, this is the quickest way to see vision work. The better solution is to get the target offset from vision, read the current gyro position, and calculate the desired gyro position.
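A rough sketch of that idea in Java, again assuming the limelight/tx entry from earlier; the gyro and the existing turn-to-angle loop are passed in as plain functional interfaces, since those details aren't from this thread:

import java.util.function.DoubleConsumer;
import java.util.function.DoubleSupplier;

import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionGyroAim {
    private final DoubleSupplier headingDegrees;   // e.g. gyro::getAngle
    private final DoubleConsumer setTurnSetpoint;  // e.g. your existing gyro turn PID's setSetpoint

    public VisionGyroAim(DoubleSupplier headingDegrees, DoubleConsumer setTurnSetpoint) {
        this.headingDegrees = headingDegrees;
        this.setTurnSetpoint = setTurnSetpoint;
    }

    // Snapshot the vision offset once, then let the fast, local gyro loop do the turning.
    public void aim() {
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tx").getDouble(0.0);
        // Desired heading = current heading + how far off the target is.
        setTurnSetpoint.accept(headingDegrees.getAsDouble() + tx);
    }
}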

I suspect most teams will be quite happy with using Limelight as the feedback device in a PID loop.

Delay is the enemy of feedback control. The more delay there is between when a measurement is taken and when your control loop sees it, the less control bandwidth is available. In practical terms, this means you need to use lower gains to avoid your control loop going unstable (oscillating/diverging forever). Lower gains means longer settling time (time until your process variable converges at or very close to the setpoint).
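To put a rough number on that tradeoff (my rule of thumb, not something from this thread): a pure delay of T_d seconds adds phase lag without attenuating anything, so if you reserve about 45 degrees (pi/4 rad) of phase margin just to absorb the delay, the usable crossover frequency is capped at roughly

    \omega_c \lesssim \frac{\pi/4}{T_d} \quad \text{(rad/s)}

With T_d = 20 ms that's about 39 rad/s (roughly 6 Hz); at 100 ms of delay it drops to about 1.25 Hz, which is part of why the old driver-station pipelines needed such low gains.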

Let’s say the worst-case delay of the Limelight is 20ms. 11ms for the frame rate* (assuming your control loop iteration begins juuuust before the next frame arrives) + 6ms for image capture and processing (from Brandon’s specs above) + 3ms for network comms. It’s not clear to me if the 6ms includes every source of delay from start of exposure to output, or just processing, so there could be another 10-20ms hiding in here (Brandon/Greg, please clarify if you get a chance!).

20ms is not that far off from the worst-case delay you have with a NavX gyro or Talon SRX-based feedback sensor if you are doing control on the RoboRIO. 10ms for sensor update rate assuming you use the default settings (*) + communications transit time (0-3 ms?) + whatever delays come from sampling and/or low-pass filtering the sensor.

In both cases, if you are using CAN, you are also subject to up to 10ms of further delay if you are out of sync with the control frame (unless you change frame rates). If you use PWM, I’m not sure what the architecture for updating outputs looks like, but I suspect you are subject to something similar. (I do know that 971 synced their control loops to PWM signal generation in 2017).

So we're looking at something on the order of 1.5-2x the total delay of using a gyro for RoboRIO-based control. That's plenty of bandwidth for 95%+ of FRC teams. You'll likely settle on your setpoint in under a second when reorienting a robot base or turret that's pointed in the right general direction. And with some careful architecting, you may be able to get a clever implementation of Limelight-based feedback down to the same delay as "naive" encoder + gyro feedback.

Back in the days of driver station-based vision processing or discrete webcams, it wasn’t uncommon to have >100ms of delay from exposure to actuation. This was an entirely different ballgame, and you either needed to use encoders/gyros or reallllllly low gains to stabilize your control.

  • If you sync your control loop to an interrupt for new data arriving, and properly account for timing jitter in your control loop (use a measured dt in your I and D terms), you can cut out sensor rate-related delays. This requires triggering an interrupt from NetworkTables/CAN/SPI/whatever, which is left as an exercise for the reader. Alternatively, it might be possible to sync control loop execution to the sending of CAN control frames to cut out that delay instead - depends on what the libraries look like for 2018.
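For the "measured dt" part, a minimal sketch in Java; the gains are placeholders and the update() call would be driven by whatever new-data trigger you settle on:

import edu.wpi.first.wpilibj.Timer;

// Minimal P-I-D calculation that measures dt instead of assuming a fixed loop period.
public class MeasuredDtPid {
    private final double kP, kI, kD;
    private double integral = 0.0;
    private double lastError = 0.0;
    private double lastTime = Timer.getFPGATimestamp();

    public MeasuredDtPid(double kP, double kI, double kD) {
        this.kP = kP;
        this.kI = kI;
        this.kD = kD;
    }

    // Call once per new measurement (e.g. from an NT listener or interrupt handler).
    public double update(double error) {
        double now = Timer.getFPGATimestamp();
        double dt = now - lastTime; // measured, not assumed
        lastTime = now;

        integral += error * dt;
        double derivative = (dt > 0.0) ? (error - lastError) / dt : 0.0;
        lastError = error;

        return kP * error + kI * integral + kD * derivative;
    }
}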

Very cool to have this input. We’ll see what works out this coming year.

The 6ms we’re measuring is our pipeline after copying the image, but it includes all of the contour and text drawing operations that happen just before posting to NT. The current software simply sets the NetworkTables update rate to 100hz, but the next update will utilize NetworkTables::flush() (thanks Thad_House) to force a NetworkTables update at the end of every frame. I don’t think comms takes more than 1ms since this is all happening on the local robot network…
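For anyone who wants the same behavior from their own coprocessor code, the relevant NetworkTables calls look roughly like this (a sketch of the idea, not Limelight's actual source):

import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionPublisher {
    private final NetworkTableInstance nt = NetworkTableInstance.getDefault();

    public VisionPublisher() {
        // Periodic update rate in seconds; 0.01 s matches the 100 Hz mentioned above.
        nt.setUpdateRate(0.01);
    }

    public void publishFrame(double tx) {
        nt.getTable("limelight").getEntry("tx").setDouble(tx);
        // Or push immediately at the end of each frame rather than waiting for the next period.
        nt.flush();
    }
}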

We will need to set up some tests to measure the per-frame copy time. That said, since the pipeline takes 6ms, we maintain 90fps, and we don't keep a queue of old frames, we are "beating" the camera and waiting for each frame to come in.

Are you willing to share an estimate of when we can buy this?

This is our hope! I think this will turn out to be true but we do still need to prove it. For the past two years we have used the approach of taking the reading from the camera and constantly updating the setpoint on a gyro-based PID loop so I know that works well. I think we will be able to just skip the gyro though. That would make the auto-aiming function in your robot code very simple.

Brandon is working on it. Soon we plan to do a pre-order for the units already under construction. He will post all of the details; we don't want to over-promise anything, so we're trying to make sure everything is on track.

In case anyone has subscribed to this thread:

Due to the volume of requests received, limited pre-orders for Limelight are now open! You can pre-order your Limelight directly, or through West Coast Products.

Pre-orders will ship in 3 - 4 weeks.

Best,
The Limelight Team

For reference, here are the latency numbers from navX-MXP, based on 200Hz update rate. These are end-to-end (MEMS Silicon to RoboRIO) latency numbers, including raw gyro sensor data acquisition (25khz), digital filtering, sensor fusion and data transmission to RoboRIO:

  • SPI: 8.3ms
  • USB: 6.5ms

As an aside, the navX-sensor in the VMX-pi generates an interrupt to the host Raspberry Pi when new data arrives - decreasing this to 5ms, end-to-end. My thinking is that’s about as good as the current technology is going to get.

Based on that, I think the upper end of Jared’s latency comparison estimate of 2x is about right, assuming it includes the integration time in the camera and the data transmission times from camera to compute module and over ethernet to RoboRIO.

Working on some testing for latency between a Pi and a RoboRIO, I was able to get the following graph.

This is the latency in ms between setting new data on the Raspberry Pi and receiving that data on the RoboRIO. It was measured by connecting an output from the Pi to an input interrupt on the Rio. Then, when updating data (which I did at an 11ms rate), I would trigger the output on the Pi, which would read as an interrupt on the Rio. I also had a listener set up for NT entry updates on the Rio side. When that update occurred, I would get the current time and compare it to the interrupt time. That is what the graph shows. So on average, there is about 0.4ms between data being updated on the Pi and the Rio receiving that data.

Note this actually won’t take effect on the Limelight side until the next update. With the current NetworkTables setup, the following graph is the data receive time.

The spikes are caused by the 11ms frame time getting out of sync with the 10ms NT update time. With some changes to the NT code on limelight, this is removed and becomes the graph shown above.