Peg targeting

I’ve been trying to use a camera to target the reflective tape around the peg. I’ve looked at tutorials and examples but I can’t get it to recognize the peg tape.

Try adjusting the camera’s exposure time and ISO, or adjusting the threshold levels in your processing. Beyond that, we can’t help you very much without seeing your code or GRIP pipeline and your sample images.
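For what it’s worth, the threshold step in code form looks roughly like this (a Java/OpenCV sketch, since GRIP can generate a similar pipeline; the HSV bounds are placeholders you’d tune against your own sample images):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TapeThreshold {
    /** Returns a binary mask where bright retroreflective tape shows up as white. */
    public static Mat threshold(Mat bgrFrame) {
        // Convert to HSV so brightness and color can be thresholded separately.
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Placeholder bounds for a green LED ring; tune these with your own images.
        Scalar lower = new Scalar(50, 100, 100);
        Scalar upper = new Scalar(90, 255, 255);

        Mat mask = new Mat();
        Core.inRange(hsv, lower, upper, mask);
        return mask;
    }
}
```

If the tape doesn’t show up in the mask, dropping the exposure is usually the first step, so the tape is the only bright thing in the frame.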

This was posted in the LabVIEW subforum, so I would assume they are using NI Vision instead of GRIP. The point still stands that it’s hard for us to help without screenshots of your code, sample images, threshold values, and post-processed output images.

Whoops, didn’t notice that the forum section was LabVIEW ::rtm::

Make sure Enable Vision in Robot Main is true by default.

Okay, so I’ve been working on getting my program to detect the high goal target and then move the robot to line up with it. I got it to recognize the goal with a lot of help from the FRC vision example, but now I’ve got two major problems. First, my vision processing takes forever: whenever I move the robot, it takes about 3 seconds for the values to change, which makes it hard to move the robot accurately. Second, I have no idea how to get my program to recognize the peg target. I’ve attached my vision processing code to this post; any help on making it faster and/or work with the peg would help a ton.

At a high level (as my team uses Java rather than LabView):

Vision processing is usually low-rate and high-latency. Use the results of vision processing to drive action based on encoders or an inertial system (gyro and encoders), which return data on much shorter time scales.
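For example, vision can hand off a distance setpoint that the encoders then close the loop on every 20 ms (a Java sketch, since that’s what we run; the channel numbers, scaling, and gain are placeholders, not anything from this thread):

```java
import edu.wpi.first.wpilibj.Encoder;

public class DriveToVisionDistance {
    // Placeholder DIO channels and scaling; set these for your own drivetrain.
    private final Encoder leftEncoder = new Encoder(0, 1);
    private double targetDistanceInches = 0.0;

    public DriveToVisionDistance() {
        leftEncoder.setDistancePerPulse(0.05); // inches per encoder pulse, tune for your gearing
    }

    /** Call whenever a (slow, laggy) vision result arrives. */
    public void onVisionUpdate(double distanceToTargetInches) {
        leftEncoder.reset();
        targetDistanceInches = distanceToTargetInches;
    }

    /** Call every ~20 ms robot loop; returns a simple proportional forward command. */
    public double forwardCommand() {
        double error = targetDistanceInches - leftEncoder.getDistance();
        return 0.05 * error; // kP placeholder; clamp and tune on the real robot
    }
}
```

Vision only refreshes the setpoint a few times a second; the encoders keep the loop honest in between.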

Where is this code located? On the dashboard? On the Robot? Or is this just the vision example? What resolution are you running? What compression are you using?

This. It’s impossible to stress this enough.

Currently my code is located on the roboRIO, the resolution is set to 360x240, and the compression is whatever the default value is; I’ve never changed it. I’m currently working on processing the image on the dashboard instead, but I haven’t tested it yet. I kind of get what you guys are saying about using another sensor to actually line it up, but I don’t know exactly how to implement that.

Get a cheap gyro, perhaps the ADXRS450, and set it up so that the angle resets every time the vision info changes (in Periodic Tasks.vi in a 10ms loop). That way, you have an angle that, when subtracted from the angle you calculate from your vision data, gives you how far off you are, even when vision doesn’t update. At least that’s how I did it.
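In text form, that wiring looks roughly like this (a Java-style sketch of the same idea, since I can’t paste a block diagram here; the names are placeholders):

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;

public class ResetOnVisionUpdate {
    private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();
    private double visionAngle = 0.0; // angle to target from the last vision frame

    /** Run when new vision info arrives (the 10 ms loop just watches for a change). */
    public void onVisionUpdate(double angleToTargetDegrees) {
        gyro.reset();                       // zero the gyro at the moment of the measurement
        visionAngle = angleToTargetDegrees;
    }

    /** How far off we still are, even while vision hasn't produced a new frame yet. */
    public double remainingErrorDegrees() {
        return visionAngle - gyro.getAngle();
    }
}
```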

Okay, thank you for all your help. I have a gyro currently working on the robot, but the problem I see with using the gyro angle to line it up is distance. Right now my vision code won’t figure out the distance to the target; it just says infinite. If I’m closer or farther away, that changes the angle calculations, but right now I don’t know how to get that distance.

Try the code posted in this thread: https://www.chiefdelphi.com/forums/showthread.php?t=154930. It’s my LabVIEW code that my team won’t be using.
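For reference, one common single-camera way to get distance (not necessarily exactly how that code does it) is from the target’s height in pixels and the camera’s vertical field of view. A rough Java sketch with placeholder constants:

```java
public class TargetDistance {
    // Placeholder camera/target constants; use your camera's spec sheet and the game manual.
    private static final double IMAGE_HEIGHT_PX  = 240.0; // e.g. a 320x240 stream
    private static final double VERTICAL_FOV_DEG = 37.0;  // camera's vertical field of view
    private static final double TAPE_HEIGHT_IN   = 5.0;   // real-world height of the tape strip

    /** Estimate distance (inches) from how many pixels tall the tape appears. */
    public static double estimate(double tapePixelHeight) {
        double halfFovRad = Math.toRadians(VERTICAL_FOV_DEG / 2.0);
        return (TAPE_HEIGHT_IN * IMAGE_HEIGHT_PX) / (2.0 * tapePixelHeight * Math.tan(halfFovRad));
    }
}
```

Note that if nothing is detected and the measured pixel height is zero, a formula like this divides by zero and returns infinity, which may be why the distance reads infinite right now.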

Okay, thank you for that link; it helped me a ton. My code now detects the peg target and has a distance that’s pretty accurate. Thanks for all your help! :slight_smile:

This just occurred to me. If you are a LabVIEW team that wants to target the peg, but is struggling to modify the example code to do it:

Try turning your camera on its side.

You may get a pleasant surprise.

(YMMV, I haven’t tried this myself).

Doesn’t work. Distance measurements do come through, but only if the score cutoff is low enough, and without proper scaling they’re useless. Your frame width is limited, you have to keep track of X being Y and Y being -X, and to top it off, it’s neither gracious nor professional. It sounds like a really good idea until you think it through; making proper edits to the code is the way to go.
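For anyone curious what that X/Y bookkeeping looks like, a tiny sketch (placeholder names; the signs depend on which way the camera is rotated):

```java
public class RotatedCameraCoords {
    /** Map a point from the sideways camera image back to upright coordinates. */
    public static double[] toUpright(double rotatedX, double rotatedY) {
        double uprightX = rotatedY;   // the sideways image's Y is really X
        double uprightY = -rotatedX;  // and its X is really -Y
        return new double[] { uprightX, uprightY };
    }
}
```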