Question about vision targeting

Ok so our intake is always directly on the center of our bot and can’t move left or right, and we have a KOP tank drive. I keep seeing teams tracking the reflective tape for vision targeting, but our team was thinking about doing line tracking with the tape on the floor instead. I haven’t seen much discussion about that approach - teams normally use the reflective tape at the scoring location - and I was just wondering if there’s a reason for this, like the line on the floor being less consistent or harder to use? Thanks for the help!

We chose not to use the floor line (even though we initially invested in expensive color sensors) for the following reasons:

  1. The line is very short, which does not leave much room to maneuver after the sensors detect it.
  2. The line will be white at the start of a competition but will likely turn grayish near the end, which could easily throw off detection.
  3. The sensors need to be extremely close to the ground, which constrains the robot’s design and makes it hard to keep them from being damaged.

Hope our experience with this helps you.
Sincerely, 4338

P.S.
Given that competitions are very soon, I would recommend that you use a driver vision camera connected either to the Rio (through the USB ports) or to an RPi (with the basic FRC Pi Vision package). If you do think that you’ll have enough time for the reflective tape detection, I’d recommend placing green LEDs around the camera and then implementing custom OpenCV code on an RPi - if you do go down that route, I could provide additional tips.
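For the Rio USB option, a driver camera is essentially one call in robotInit(). Here’s a minimal sketch - note that the exact package and the getInstance() form vary a bit between WPILib versions:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  @Override
  public void robotInit() {
    // Captures from a USB camera plugged into the roboRIO and streams it
    // to the dashboard automatically - nothing else needed for a driver cam.
    CameraServer.getInstance().startAutomaticCapture();
  }
}
```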


If you do go the route of tracking the retroreflective vision targets, I’ve made software to track the tapes, so you should be able to start with minimal effort.

Feel free to ask any questions


The main reason for us to track the vision targets is that they are retro-reflective, which makes them much easier to isolate with HSV filtering. The lines on the floor are not - they are just gaffer tape. We thought briefly about line tracking sensors, but you have to be very close in for those, and vision tracking gives us all of the offset and angle information we need to navigate.
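To make that concrete, here’s a minimal sketch of the kind of HSV threshold this enables, using OpenCV’s Java bindings. The bounds below are placeholders for a green LED ring, not tuned values - adjust them against your own camera and lighting:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// 'frame' is a BGR image grabbed from the camera.
Mat hsv = new Mat();
Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

// With a green LED ring, the retro-reflective tape bounces the green light
// straight back at the camera, so it is nearly the only bright green thing
// in the image and a simple range threshold isolates it.
Mat mask = new Mat();
Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

// The contours of the mask are your candidate targets.
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
    Imgproc.CHAIN_APPROX_SIMPLE);
```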

Once you know distance and angle, it’s pretty simple trigonometry to plot out a path to get there, even with a tank drive (just more turns, rather than the strafing possible with a swerve drive).
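One hypothetical way to set that up - a sketch, not our actual code, and every name below is made up: place a waypoint a little way out from the target face, along its normal, so the last leg is square to the target. That reduces the whole path to turn, drive, turn:

```java
// Assumes vision gives the distance d (m) and bearing theta (rad,
// + = target to the right) to the target, plus the heading phi (rad) of
// the target's outward normal, all in the robot frame (x right, y forward).
static double[] turnDriveTurn(double d, double theta, double phi,
                              double standoff) {
  // Target position in robot coordinates.
  double xT = d * Math.sin(theta);
  double yT = d * Math.cos(theta);
  // Waypoint 'standoff' meters out from the target face along its normal.
  double xW = xT + standoff * Math.sin(phi);
  double yW = yT + standoff * Math.cos(phi);
  double turn1 = Math.atan2(xW, yW);           // turn to face the waypoint
  double drive = Math.hypot(xW, yW);           // drive straight to it
  double turn2 = wrap(phi + Math.PI - turn1);  // square up to the target
  return new double[] { turn1, drive, turn2 };
}

// Wraps an angle into (-pi, pi] so the robot takes the short way around.
static double wrap(double a) {
  while (a <= -Math.PI) a += 2 * Math.PI;
  while (a > Math.PI) a -= 2 * Math.PI;
  return a;
}
```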


Disclaimer: Chicken Vision only calculates angle, not distance. It can be extended to calculate distance, but the current distance function doesn’t work.
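For anyone extending it: if the camera is fixed, the usual trick is to derive distance from the target’s vertical angle in the image plus the known camera and target heights. A sketch of that idea (mine, not Chicken Vision’s code - the parameter names are made up):

```java
// With a fixed camera pitch and known heights, the target's vertical angle
// in the image determines its range.
static double distanceMeters(double targetHeightM, double cameraHeightM,
                             double cameraPitchRad, double targetPitchRad) {
  // targetPitchRad is the target's vertical angle relative to the camera's
  // optical axis, as reported by the vision pipeline.
  return (targetHeightM - cameraHeightM)
      / Math.tan(cameraPitchRad + targetPitchRad);
}
```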


We already have a driver camera for vision; we were just hoping to spend the four weeks before our competition implementing some auto-alignment for ease of use. Thanks for the tips - it really helps, and I’ll talk to our coding team about it today. If we have money for a Limelight, would you recommend that over the LED ring, RPi, and second camera? Thanks!

We use Chicken Vision for aim assist (as our driver dubbed it); here is an example with a P loop that I wrote.
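In outline, a P loop like that just reads the target’s yaw error from the vision pipeline and feeds a scaled copy into the turn axis, while the driver keeps the throttle. A minimal sketch of the idea (not the code referenced above - the table and entry names and the gain are placeholders, not Chicken Vision’s actual interface):

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

// Called from teleopPeriodic(); 'drive' is the DifferentialDrive and
// 'stick' is the driver's Joystick.
void aimAssist(DifferentialDrive drive, Joystick stick) {
  // Yaw offset to the target in degrees (+ = target to the right),
  // published by the vision pipeline over NetworkTables.
  double yawDeg = NetworkTableInstance.getDefault()
      .getTable("ChickenVision").getEntry("targetYaw").getDouble(0.0);
  double kP = 0.015;                       // tune on the real robot
  double turn = kP * yawDeg;               // proportional steering correction
  drive.arcadeDrive(-stick.getY(), turn);  // driver keeps the throttle
}
```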

It can switch between being a driver cam, tracking cargo, and tracking reflective tape.

What is the “simple trigonometry” to make a curve? I’ve been experimenting with using vision to line up for a while and I haven’t found anything that works. Unfortunately our design calls for lining up pretty straight, and it would be difficult for our drivers to line up correctly without some type of assistance.

I haven’t used a Limelight, so I can’t comment on how well it will work; that being said, I have heard that it is easy to use out of the box and can effectively track the targets.

However, there is a reason we chose to use an RPi over a Limelight: we believe the Limelight does too much for you. For us, FRC is more about the learning than the actual product (after all, no one really needs a robot that can place flat circular plates on different levels of a “rocket” - what you do need is the experience of building a robot). In the real world, no one is going to hand you a solution like the Limelight that solves these problems for you… and it’s not like you’re suddenly going to beat 254 just by buying one.

Therefore, we believe that the risk associated with creating our own custom programs is significantly outweighed by the learning involved in the process. Furthermore, if you invest enough effort into your own programs, you could 1) stand apart from all the teams buying the same Limelight and 2) possibly even outperform what the Limelight can provide (which I believe we achieved this year through our extensive investment).

I can’t promise that your vision is going to be perfect, or even work, if you pursue a custom vision environment on the RPi. I can say that if you go with a Limelight, you’d be missing out on a lot of potential learning.

Remember that this is my opinion, and others are entitled to their own.


It wouldn’t necessarily be a curve - from a simple drive perspective it would be a series of turns (to an angle) and straight drives forward.
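Concretely, using the turn-drive-turn numbers from the trigonometry sketch earlier in the thread, execution is just three steps run back to back. In this sketch, turnToAngle() and driveDistance() are hypothetical helpers (not WPILib built-ins) that close a loop on your gyro and encoders:

```java
// d, theta, phi come from vision; 0.5 m is an arbitrary standoff distance.
double[] path = turnDriveTurn(d, theta, phi, 0.5);
turnToAngle(path[0]);    // first turn: face the waypoint
driveDistance(path[1]);  // straight segment to the waypoint
turnToAngle(path[2]);    // second turn: square up to the target
driveDistance(0.5);      // final straight approach into the target
```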

We decided against bothering with the floor line and stuck with tracking the retro-reflective target:

  • the target will get dark, so it needs illumination
  • our bumpers would be in the way
  • as said earlier, the lines are rather short and can’t be seen from far away
  • a camera facing forward for the RR target can easily be re-used as a driver camera

If we had done it, we would have used a camera pointing mostly down instead of line follower modules.

The RR targets also allow you to find the full geometry of your robot’s “stance” with respect to the target (distance and multiple angles).
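If you want that full stance, the usual tool is OpenCV’s solvePnP: give it the known corner positions of the tape (from the field drawings) and the matching pixel corners you detect, and it returns the full 3-D translation and rotation of the target relative to the camera. A rough sketch - the corner values are placeholders, and cameraMatrix, distCoeffs, and corners are assumed to come from your calibration and contour code:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

// Known 3-D corner positions of the tape, in target-centered coordinates
// (meters). These values are placeholders - take the real ones from the
// field drawings.
MatOfPoint3f objectPts = new MatOfPoint3f(
    new Point3(-0.2, 0.0, 0.0), new Point3(0.2, 0.0, 0.0),
    new Point3(0.2, -0.15, 0.0), new Point3(-0.2, -0.15, 0.0));

// The matching pixel corners found by your contour code, in the same order.
MatOfPoint2f imagePts = new MatOfPoint2f(
    corners[0], corners[1], corners[2], corners[3]);

// cameraMatrix (3x3) and distCoeffs come from a one-time calibration of
// your camera.
Mat rvec = new Mat();
Mat tvec = new Mat();
Calib3d.solvePnP(objectPts, imagePts, cameraMatrix, distCoeffs, rvec, tvec);

// tvec is the target's position relative to the camera and rvec its
// rotation - together they give distance, bearing, and skew in one shot.
```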