Is our plan to use two Limelights and a Raspberry Pi feasible?

We want to make more use of vision on our robot this year. Last year we only auto-aligned with the HP station, since that was all we needed; this year, our drivers are requesting a bit more.

We currently have a Limelight 3, a Limelight 2, and a Raspberry Pi.

Our plan is to use our Limelight 3 to detect AprilTags, align with them while shooting, and check the distance between our robot and the speaker. We also wanted to use our Limelight 2 to aid our Limelight 3, and so the drive team can see better. Right now we are unsure whether to use the LL software or PhotonVision, but we are leaning towards PhotonVision.
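(For the aiming and distance part, here is a minimal sketch assuming Limelight's standard NetworkTables interface: the "limelight" table with tv/tx/ty entries. The camera height, camera pitch, and tag height below are placeholders you would measure on your own robot.)

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SpeakerAim {
  // Placeholder geometry: measure these on your actual robot / the field drawings.
  static final double CAMERA_HEIGHT_METERS = 0.50;
  static final double TARGET_HEIGHT_METERS = 1.45;
  static final double CAMERA_PITCH_DEGREES = 25.0;

  static final NetworkTable limelight =
      NetworkTableInstance.getDefault().getTable("limelight");

  /** Horizontal offset to the tag in degrees; feed this into a turn controller. */
  public static double aimErrorDegrees() {
    boolean hasTarget = limelight.getEntry("tv").getDouble(0) == 1;
    return hasTarget ? limelight.getEntry("tx").getDouble(0) : 0.0;
  }

  /** Rough distance to the speaker from the vertical offset (ty). */
  public static double distanceMeters() {
    double ty = limelight.getEntry("ty").getDouble(0);
    double angleRad = Math.toRadians(CAMERA_PITCH_DEGREES + ty);
    return (TARGET_HEIGHT_METERS - CAMERA_HEIGHT_METERS) / Math.tan(angleRad);
  }
}
```

The same tx/ty math works whether the pipeline is LL software or PhotonVision; PhotonVision just exposes it through its own vendor library instead of raw NetworkTables entries.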

My team also wanted to use our Raspberry Pi to automate the alignment and intake of Notes on the floor, using the pre-built WPILibPi image. We would use OpenCV to find a Note that we are right next to, turn toward it, and then drive forward. We are hoping that running this on our Pi 3 will be fast enough.
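(A minimal sketch of that OpenCV pipeline, using the OpenCV Java bindings that WPILib ships. The HSV range, the "notevision"/"xOffset" table names, and the team number are all placeholders; on WPILibPi the sample project already handles the NetworkTables client setup shown here.)

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.networktables.NetworkTableInstance;
import java.util.ArrayList;
import java.util.List;

public class NoteFinder {
  // Placeholder HSV range for the orange Note: tune against your own footage.
  static final Scalar ORANGE_LOW  = new Scalar(5, 150, 100);
  static final Scalar ORANGE_HIGH = new Scalar(20, 255, 255);

  public static void main(String[] args) {
    NetworkTableInstance ntinst = NetworkTableInstance.getDefault();
    ntinst.startClient4("notevision"); // connect to the robot's NT server
    ntinst.setServerTeam(1234);        // placeholder team number

    UsbCamera camera = CameraServer.startAutomaticCapture();
    camera.setResolution(320, 240);
    CvSink sink = CameraServer.getVideo();
    var offsetEntry = ntinst.getTable("notevision").getEntry("xOffset");

    Mat frame = new Mat();
    Mat hsv = new Mat();
    Mat mask = new Mat();
    while (!Thread.interrupted()) {
      if (sink.grabFrame(frame) == 0) continue; // grab timed out, try again
      Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
      Core.inRange(hsv, ORANGE_LOW, ORANGE_HIGH, mask);

      List<MatOfPoint> contours = new ArrayList<>();
      Imgproc.findContours(mask, contours, new Mat(),
          Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

      // Pick the largest orange blob and report how far it is from image center.
      double bestArea = 0;
      double bestX = 0;
      for (MatOfPoint c : contours) {
        Rect box = Imgproc.boundingRect(c);
        if (box.area() > bestArea) {
          bestArea = box.area();
          bestX = box.x + box.width / 2.0;
        }
      }
      // Normalized offset in [-1, 1]; the robot turns until this is near zero.
      offsetEntry.setDouble(bestArea > 0 ? (bestX - 160) / 160.0 : 0.0);
    }
  }
}
```

At 320x240 a simple threshold-and-contour loop like this is well within a Pi 3's capabilities.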

Before we make substantial progress on this, is there anything we should be concerned about? Are there any other options to consider? Is our idea feasible? I am also aware that due to the bandwidth limits, we can only stream one of the cameras to the driver station; will switching between the cameras work?
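(On that last question: WPILib's CameraServer lets you keep a single stream and swap which source feeds it, so only one camera's worth of bandwidth is ever used. A minimal sketch with USB cameras for illustration; a Limelight's MJPEG stream would be wrapped in an HttpCamera instead, but the setSource pattern is the same.)

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.cscore.VideoSink;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  UsbCamera frontCamera;
  UsbCamera rearCamera;
  VideoSink server;
  Joystick joystick = new Joystick(0);

  @Override
  public void robotInit() {
    frontCamera = CameraServer.startAutomaticCapture(0);
    rearCamera = CameraServer.startAutomaticCapture(1);
    server = CameraServer.getServer(); // the one stream the dashboard subscribes to
  }

  @Override
  public void teleopPeriodic() {
    // Flip which camera feeds the stream; only one is sent at a time.
    if (joystick.getTriggerPressed()) {
      server.setSource(rearCamera);
    } else if (joystick.getTriggerReleased()) {
      server.setSource(frontCamera);
    }
  }
}
```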

I don’t have a lot of experience with LL, but my team is running one LL3 on our shooter side to auto line up on the speaker, amp, and source, and an LL3 on the intake side to track game pieces, both using LL software. So far, the testing we’ve done on our extra drive base has been very promising with only one Limelight tracking the AprilTags.
As long as you’re not trying to do super accurate pose estimation, I don’t see a need for an extra Limelight to validate your measurements.
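(If you do decide to try full pose estimation later, recent LL firmware publishes a field-space robot pose over NetworkTables. A minimal sketch assuming the botpose_wpiblue entry, which is a 6-element array of x, y, z in meters and roll, pitch, yaw in degrees:)

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionPose {
  /** Reads Limelight's field-space robot pose (blue-alliance origin). */
  public static Pose2d botPose() {
    double[] p = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("botpose_wpiblue")
        .getDoubleArray(new double[6]);
    // Array layout: x, y, z (meters), roll, pitch, yaw (degrees).
    return new Pose2d(p[0], p[1], Rotation2d.fromDegrees(p[5]));
  }
}
```

You would typically feed that pose, along with a timestamp, into something like SwerveDrivePoseEstimator.addVisionMeasurement to fuse it with odometry.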

EDIT: We also have a Google Coral USB accelerator on our intake LL to improve speeds when tracking game pieces.


That should be totally doable. Last year my team used an LL3 and an LL2+ for shooting (one for high cones and one for low & pickup). We also had a Pi 4 running a black-and-white global shutter camera for our AprilTag detection.

What I would actually recommend for your intake (if you can get your hands on a Google Coral) is to use your LL3 running Limelight software (the LL2 could do it too). We did this last year with the LL2+, using a pipeline built on LL’s pretrained models for cones and cubes, which worked perfectly for us. They have not released a pretrained model for Notes yet. With the Limelight doing the detection, you treat the target the same as you would retroreflective tape; see the sketch below.
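(What that looks like in code, assuming the neural-detector pipeline is in slot 1 and the camera is named "limelight-intake"; both are placeholders, since the table name follows whatever hostname you give the camera:)

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class IntakeTracker {
  static final NetworkTable limelight =
      NetworkTableInstance.getDefault().getTable("limelight-intake");

  static final int NOTE_DETECTOR_PIPELINE = 1; // whichever slot holds your NN pipeline

  /** Select the neural-detector pipeline before chasing a Note. */
  public static void useNotePipeline() {
    limelight.getEntry("pipeline").setNumber(NOTE_DETECTOR_PIPELINE);
  }

  /** Same tx-based steering you'd use for retroreflective tape. */
  public static double steerOutput() {
    if (limelight.getEntry("tv").getDouble(0) < 1) return 0.0;
    double tx = limelight.getEntry("tx").getDouble(0);
    return -0.02 * tx; // placeholder P gain; tune on your drivetrain
  }
}
```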

LL’s neural networks docs
