Hey everyone, coming from a week 1 event, one thing I noticed was that many teams were not running vision, such as using the Limelight to align their robots.
One thing I realized was that when trying to align our robot to the Limelight tape for scoring during a match, the robot would go crazy, since it detected a lot of different pieces of tape, which was not something I had expected.
Would anyone happen to know of any good ways of eliminating this problem? Or is it best to stick with human feedback for shelf pickup and scoring?
Every event is lit differently, and some have unfortunate lighting conditions, like an open window with the sun glaring right through. This can cause the LL to pick up something as a target when in reality it's not (especially true for Ref. tape).
Our team used our practice matches to find the “sweet spot” for our LL configuration: just adjust the LED power and exposure until detection is reliable.
IMO, detecting AprilTags was much easier than using Ref. tape; I found AprilTags are less vulnerable to bad lighting conditions.
My biggest problem wasn't the lighting conditions but rather getting the Limelight to know which target to track, since it would see many pieces of reflective tape: it could pick up both the low and high tape from at least two different cone nodes. I tried making it track the largest one, but that was pretty much useless unless I was already right in front of the target.
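For what it's worth, the "track the largest one" approach can be sketched as a simple area comparison. This assumes you can read per-target areas (e.g. parsed out of the Limelight's JSON results dump); the class and method names here are illustrative, not a real Limelight API:

```java
// Minimal sketch: pick the target with the largest reported area (ta).
// Assumes you've already parsed per-target areas from the Limelight's
// JSON results; this is just the selection logic.
public final class TargetPicker {
    // Returns the index of the largest-area target, or -1 if none seen.
    public static int largest(double[] areas) {
        int best = -1;
        double bestArea = 0.0;
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > bestArea) {
                bestArea = areas[i];
                best = i;
            }
        }
        return best;
    }
}
```

As you found, area alone is ambiguous from a distance because two nodes can look nearly the same size, which is why people usually combine this with cropping or a distance check.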
To avoid seeing more than one set of cone nodes, you can increase the zoom on the Limelight to the point where it only shows the node in front of it. Not the most elegant solution, but it should work.
Hello,
We use alignment for AprilTags, especially in autonomous; in teleop it is often quicker to eyeball it. When we do use the Limelight to help with alignment in teleop, it only corrects horizontally, and the driver drives to the right distance. We use PhotonVision for automatically picking up floor game pieces during both auto and teleop.
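The horizontal-only correction described above is usually just a proportional loop on the Limelight's tx (degrees off-center). A minimal sketch, with illustrative (untuned) gain and deadband values:

```java
// Minimal sketch of teleop horizontal alignment: proportional steering on
// the Limelight's tx value while the driver controls forward distance.
// KP and DEADBAND are illustrative numbers, not tuned constants.
public final class Align {
    static final double KP = 0.03;       // turn command per degree of tx
    static final double DEADBAND = 1.0;  // degrees considered "aligned"

    // Returns a turn command in [-1, 1]; 0 when within the deadband.
    public static double steer(double txDegrees) {
        if (Math.abs(txDegrees) < DEADBAND) return 0.0;
        double cmd = KP * txDegrees;
        return Math.max(-1.0, Math.min(1.0, cmd)); // clamp to motor range
    }
}
```

You would feed the result in as the rotation input to your drive while the driver supplies the translation.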
We at Team 144 align during our place-two-game-piece auton. We use the AprilTag to align the robot and account for any drivetrain errors before we place our second game piece. We are running PhotonVision with LabVIEW, and so far it has been surprisingly reliable.