Aiming using reflective tape

We have an Axis camera and the encoders set up on our [tentative] shooter. The camera is detecting the reflective tape shapes, but how are we supposed to have the control system use that information (based on the angle at which it views the rectangle, right?) to tell the encoders what to do?

We have yet to get a working, accurate autonomous, so this would help a lot.

So the general answer is that you need to write an algorithm that figures out where the center of the rectangle is in the image, and uses that offset to determine how much to turn.
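Here's a rough sketch of that last step, assuming you already have the rectangle's center x in pixels (the function name and the kP gain are placeholders, not anything from a library):

```python
def turn_command(target_center_x, image_width, k_p=0.6):
    """Return a normalized turn value in [-1, 1] from the target's offset."""
    # How far the target is from the image center, normalized to [-1, 1].
    error = (target_center_x - image_width / 2.0) / (image_width / 2.0)
    # Simple proportional steering: turn harder the farther off-center you are.
    turn = k_p * error
    return max(-1.0, min(1.0, turn))

# Example: a target at x = 400 in a 640-pixel-wide image gives a small turn.
print(turn_command(400, 640))  # 0.15
```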

The hard part is writing that algorithm. There are a bunch of solutions, none of which are simple. You could use the NI Vision libraries, or you could include OpenCV with your robot project. You could also write your own image recognition algorithm (thresholding, SAD/SSD template matching, etc.).
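For example, if you go the OpenCV route, the usual pipeline is: threshold the image for the bright tape color, find contours, and take the bounding rectangle of the largest blob. Something like this (an untested sketch; the HSV bounds assume a green ring light and are numbers you'd have to tune on your own images):

```python
import cv2
import numpy as np

def find_target_center(bgr_image):
    """Return (center_x, center_y) of the largest bright-green blob, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Threshold for reflective tape lit by a green ring light.
    # These HSV bounds are placeholders; tune them against your own images.
    mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([80, 255, 255]))
    # OpenCV 4.x returns (contours, hierarchy); 3.x adds the image up front.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x + w / 2.0, y + h / 2.0)
```

The center you get back is what feeds the "how much to turn" calculation above.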

What do you mean when you say “encoder”? One typically doesn’t tell an encoder to do anything; it’s a sensor that provides a measured value.

Not at all surprising 53 hours into build season.

I truly hate to say this, but have you searched Chief Delphi at all? I know of several threads that deal with this.

As a disclaimer, our drive team was better at dead-reckoning our catapult last year than our code was at aiming during autonomous. If you don’t have to move (don’t intend to pick up frisbees) during autonomous, you could simply determine what image parameters would indicate a hit, and output text to the dashboard (“On Target”, “Too far left/right/back/forward”).
(They were also better at aiming during the match with a tape cross-hair on the screen, so we removed the whole function.)
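If it helps, the "just report it" version can be as simple as bucketing the target's position and apparent size into a message. A rough sketch with made-up thresholds (on a real robot you'd push the string to your dashboard rather than return it):

```python
def aiming_message(center_x, image_width, target_width_px,
                   expected_width_px=120, x_tol=20, width_tol=15):
    """Turn a detected target into a human-readable aiming hint.
    All of the pixel thresholds here are placeholders to tune on real images."""
    x_error = center_x - image_width / 2.0
    if x_error < -x_tol:
        return "Too far right"   # target appears left of center, so we're aimed right
    if x_error > x_tol:
        return "Too far left"
    # Apparent width stands in for distance: bigger than expected means too close.
    if target_width_px > expected_width_px + width_tol:
        return "Too far forward"
    if target_width_px < expected_width_px - width_tol:
        return "Too far back"
    return "On Target"
```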

Given the above, you weren't very clear about how far into the image processing you've gotten. Here is my code from last year; it's not broken into modules very well, so I'll give you some line numbers:


1313: A function to initialize the camera
795: A function that takes a camera image, does the image processing, and sets up variables for the PID
753: Teleop glue logic
451: Logic in disabled mode to report on/off-target status

My strategy last year was to capture an image every second and use each image to feed parameters to the PID loops. The actual PID loop was driven by a gyroscope (for rotation) and an accelerometer (a not-so-great way of tracking distance). I never got far enough to check whether taking a picture while in motion (every image after the first) was an accurate way to feed the PIDs; my guess is that the delay between capturing the image and setting the new target caused a long settling time, since each updated target after the first would not account for the motion that had happened in the meantime.
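To make that concrete, here is roughly the shape of the loop I'm describing (a proportional-only sketch, not my actual robot code; get_gyro_angle, get_target_offset_deg, and set_turn_output are stand-ins for whatever your control system provides):

```python
import time

def run_aiming_loop(get_gyro_angle, get_target_offset_deg, set_turn_output,
                    k_p=0.02, loop_dt=0.02, vision_period=1.0):
    """Rotate toward the target with a fast gyro loop and slow vision updates.

    get_gyro_angle():        current heading in degrees (your gyro)
    get_target_offset_deg(): degrees from image center to the target (vision)
    set_turn_output(x):      send a turn command in [-1, 1] to the drivetrain
    """
    setpoint = get_gyro_angle()  # hold the current heading until vision updates it
    last_vision_time = 0.0
    while True:
        now = time.time()
        # Only consult the (slow) camera once per second to move the setpoint;
        # the fast inner loop runs off the gyro alone.
        if now - last_vision_time >= vision_period:
            offset = get_target_offset_deg()
            if offset is not None:
                setpoint = get_gyro_angle() + offset
            last_vision_time = now
        error = setpoint - get_gyro_angle()
        set_turn_output(max(-1.0, min(1.0, k_p * error)))
        time.sleep(loop_dt)
```

The settling-time problem I ran into shows up here: by the time the new setpoint is computed, the robot has already rotated, so the vision offset is stale.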

I thought it would work well, and I hope to be able to develop it further this year. It certainly reduced the processing power needed for image processing, and it did sort of work (I hadn't done PIDs before, so that was the primary issue; we ran out of build season time for tuning/guessing).

This might be very useful for you.