As is, the Limelight can track the vision tape with ease; you just need to tune the pipeline for the desired target size and crosshair location and you should be good to go. Once we actually build the vision target in the next few days, I will post a video of tracking it at various angles and lighting conditions and upload the pipeline file we used.
My worry is that since the vision targets are so close together, the device could lock onto two rectangles from different vision targets.
This is a challenge. We are looking at either a custom GRIP pipeline with pattern matching to find two inward-tipping targets, or maybe just using the tape on the ground.
When are the 2019 images expected to be available for download? Very interested in trying out a GRIP pipeline. It’s a great piece of software and makes testing and iterating on the pipeline very easy.
The vision sample images are posted on the wpilib release on github (I think the LabVIEW installer also has them, but this is a nice small zip file). https://github.com/wpilibsuite/allwpilib/releases/tag/v2019.1.1 (the 2019VisionImages.zip file)
Peter, I think they are asking about the limelight images that are marked as “coming soon” on this page https://limelightvision.io/pages/downloads
I just put in a word to see if they will go live with the 2019.1 image. I can tell you they are finalizing some new filters for the cargo ship vision problem and possibly more GRIP utilities. We will then see 2019.2 released.
1st 36 Neo/SMs, and now this! Dude, you’re killing me!
How many do you need?
That's just excessive.
Front and back, three robots…
I noticed that there are two different cam modes depending if you want to use the limelight as a driver camera or targeting camera. Is it feasible to switch back and forth? Also, would it be possible to get the vision targeting mode feed going back to the driver station? This might be a good option for us this year.
Yes, you can switch back and forth in code. Here is the complete API.
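For the camera-mode question specifically, switching comes down to writing the "camMode" entry of the Limelight's NetworkTables table (0 = vision processing, 1 = driver camera; both values are documented Limelight keys). A minimal sketch, assuming the standard table name "limelight" — on the robot you would fetch the real table with wpilib's NetworkTableInstance.getDefault().getTable("limelight"), but a plain map stands in for it here so the snippet runs anywhere:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of toggling the Limelight camera mode. A HashMap stands in for
// the "limelight" NetworkTables table so this compiles without wpilib.
public class LimelightCamMode {
    private final Map<String, Number> limelightTable = new HashMap<>();

    // "camMode" is the documented Limelight key: 0 = vision, 1 = driver cam.
    public void setDriverMode(boolean driver) {
        limelightTable.put("camMode", driver ? 1 : 0);
    }

    public int currentCamMode() {
        return limelightTable.getOrDefault("camMode", 0).intValue();
    }
}
```

Bind setDriverMode(true/false) to a button and the same camera serves both roles. The stream (including the targeting overlay) is served over HTTP from the Limelight, so it can be viewed on the driver station either way.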
If you would like to reference or use our complete Limelight library, it can be found on our GitHub.
- Copy the oi.limelightvision.limelight.frc package into your src/main/java folder, and create a LimeLight object like you would any other robot input.
Is this how one would properly write to the limelight network table?
You can set up multiple pipelines with different settings, one for tracking and one for driving. For example, for tracking you would set the exposure very low and turn on the LEDs, and put that in pipeline slot "0". Then, in slot "1", you can make a pipeline with the LEDs turned off and the exposure set high so you can see the field normally. To switch between them, you simply tell the Limelight to use either pipeline "0" or "1" at the press of a button and it will switch the settings for you.
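As a sketch of the switching itself: you write the slot number to the "pipeline" entry of the Limelight's NetworkTables table (a documented Limelight key selecting slots 0–9). On the robot you would get the real table via wpilib's NetworkTableInstance.getDefault().getTable("limelight"); a plain map stands in for it here so the snippet is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of pipeline switching. A HashMap stands in for the Limelight's
// NetworkTables table so this compiles without wpilib on the classpath.
public class PipelineSwitcher {
    private final Map<String, Number> limelightTable = new HashMap<>();

    // "pipeline" is the Limelight NetworkTables key selecting slots 0-9.
    public void setPipeline(int slot) {
        limelightTable.put("pipeline", slot);
    }

    public int currentPipeline() {
        return limelightTable.getOrDefault("pipeline", 0).intValue();
    }
}
```

Bind setPipeline(0) and setPipeline(1) to buttons in your OI, and the Limelight swaps exposure and LED settings for you on the fly.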
Currently, I have pipelines set up to track the cargo, hatches, vision tape, and to use as a driver cam depending on what we are trying to do.
Here is a pipeline tracking the hatches.