#1
I've been trying to use a camera to target the reflective tape around the peg. I've looked at tutorials and examples but I can't get it to recognize the peg tape.
#2
Re: Peg targeting
Try adjusting the camera's exposure time and ISO, or the threshold levels in your processing. Beyond that, we can't help you very much without seeing your code or GRIP pipeline and some sample images.
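Since this thread is about a LabVIEW/NI Vision pipeline, here is only a rough sketch in plain Java of what the threshold step in any of these tools does: keep a pixel if every channel falls inside its configured bounds. All pixel values and bounds below are invented for illustration; they are not real camera output or recommended settings.

```java
// Sketch of the color-threshold idea behind GRIP / NI Vision pipelines.
// The pixel values and channel bounds are made-up illustration data.
public class ThresholdSketch {
    // Keep a pixel only if each channel is inside [lo, hi] for that channel.
    static boolean inRange(int[] pixel, int[] lo, int[] hi) {
        for (int c = 0; c < pixel.length; c++) {
            if (pixel[c] < lo[c] || pixel[c] > hi[c]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Tiny "image" of RGB pixels: retroreflective tape lit by a green
        // LED ring shows up as bright green; the rest of the frame is dark.
        int[][] image = {
            {20, 30, 25},   // dark background
            {40, 230, 60},  // bright green -> tape
            {35, 210, 50},  // bright green -> tape
            {90, 80, 70},   // dim gray
        };
        int[] lo = {0, 180, 0};     // green channel must be high...
        int[] hi = {120, 255, 120}; // ...red and blue must stay low

        int tapePixels = 0;
        for (int[] px : image) {
            if (inRange(px, lo, hi)) tapePixels++;
        }
        System.out.println("tape pixels: " + tapePixels); // prints "tape pixels: 2"
    }
}
```

Lowering exposure darkens everything except the tape, which widens the gap between target and background and makes these bounds much less fussy to tune.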
#3
Re: Peg targeting
This was posted in the LabVIEW subforum, so I would assume they are using NI Vision instead of GRIP. The point still stands that it's hard for us to help without clear pictures of your code, sample images, threshold values, and post-processing output images.
#4
Re: Peg targeting
Quote:
#5
Re: Peg targeting
Make sure Enable Vision in Robot Main is true by default.
#6
Re: Peg targeting
Okay, so I've been working on getting my program to detect the high goal target and then move the robot to line up with it. I got it to recognize the goal with a lot of help from the FRC vision example, but now I have two major problems. First, my vision processing takes forever: whenever I move the robot, it takes about 3 seconds for the values to change, which makes it hard to move the robot accurately. Second, I have no idea how to get my program to recognize the peg target. I've attached my vision processing code to this post; any help on making it work faster and/or work with the peg would help a ton.
#7
Re: Peg targeting
At a high level (as my team uses Java rather than LabVIEW):
Vision processing is usually low-rate, high-latency. Use the results of vision processing to drive action based on encoders or an inertial system (gyros and encoders), which return data on much shorter time scales.
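The pattern above — a slow vision result sets the goal, while a fast gyro loop steers toward it — can be sketched in plain Java. This is only an illustration under invented assumptions: there is no WPILib here, the gyro is simulated, and the gain and motion model are made up.

```java
// Sketch of "vision sets the goal, the gyro closes the loop".
// All names and numbers are invented; a real robot would read its gyro
// and write motor outputs through its own I/O layer each loop iteration.
public class TurnToTarget {
    // Proportional controller: output is proportional to remaining error,
    // clamped to the motor output range [-1, 1].
    static double turnOutput(double targetDeg, double gyroDeg, double kP) {
        double error = targetDeg - gyroDeg;
        double out = kP * error;
        return Math.max(-1.0, Math.min(1.0, out));
    }

    public static void main(String[] args) {
        // One slow vision result: the target is 10 degrees to the right of
        // where the robot pointed when the frame was captured.
        double visionHeadingDeg = 10.0;
        double kP = 0.05; // made-up gain

        // Fast control loop driven by the gyro, simulated here: assume an
        // output of 1.0 turns the robot 2 degrees per iteration.
        double gyroDeg = 0.0;
        for (int i = 0; i < 200; i++) {
            double out = turnOutput(visionHeadingDeg, gyroDeg, kP);
            gyroDeg += out * 2.0; // stand-in for real robot motion
        }
        System.out.printf("final heading: %.2f deg%n", gyroDeg);
    }
}
```

The key point is that the 3-second-old vision value is only read once to set the target; the loop that actually moves the robot runs on fresh gyro data every iteration, so the stale camera data no longer limits how accurately you can turn.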
#8
Re: Peg targeting
Where is this code located? On the dashboard? On the Robot? Or is this just the vision example? What resolution are you running? What compression are you using?
#9
Re: Peg targeting
Quote:
#10
Re: Peg targeting
Currently my code is located on the roboRIO, the resolution is set to 360x240, and the compression is whatever the default value is; I've never changed it. I'm currently working on processing the image on the dashboard instead, but I haven't tested it yet. I sort of get what you guys are saying about using another sensor to actually line it up, but I don't know exactly how to implement that.
#11
Re: Peg targeting
Quote:
#12
Re: Peg targeting
Quote:
#13
Quote:
Try the code posted in this thread: https://www.chiefdelphi.com/forums/s...d.php?t=154930. It's my LabVIEW code that my team won't be using.
#14
Re: Peg targeting
Quote:
#15
Re: Peg targeting
This just occurred to me. If you are a LabVIEW team that wants to target the peg, but is struggling to modify the example code to do it:
Try turning your camera on its side. You may get a pleasant surprise. (YMMV, I haven't tried this myself.)