Chief Delphi - Programming
More Vision questions (http://www.chiefdelphi.com/forums/showthread.php?t=140489)

marshall 22-12-2015 09:46

Re: More Vision questions
 
Quote:

Originally Posted by jojoguy10 (Post 1514154)
Thanks! The last two links don't show anything. Are they searches of keywords?

I thought I remembered someone talking about using USB to transfer data, but maybe they were talking about the other serial interfaces. With Ethernet, you would use UDP or something similar, I'm guessing?

They were supposed to be white paper searches for "vision" and "opencv". There are a couple of good examples out there.

UDP or TCP, depending on your tolerance for latency and on how important it is that every packet actually arrives, etc...
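
For concreteness, here is a minimal sketch of the UDP side in Python, assuming a coprocessor that has already computed a heading and distance to the target and a robot listening on a fixed address and port. The address, port 5800, and the comma-separated payload format are placeholder assumptions, not anything specified in this thread.

Code:

import socket
import time

ROBOT_ADDRESS = ("10.0.0.2", 5800)  # placeholder robot IP and port; use your own

def send_target(sock, angle_deg, distance_in):
    # Fire-and-forget datagram: if a packet is lost, the next frame replaces it anyway.
    payload = "{:.2f},{:.2f},{:.3f}".format(angle_deg, distance_in, time.time())
    sock.sendto(payload.encode("ascii"), ROBOT_ADDRESS)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # In real use these numbers come from the vision pipeline once per frame.
    send_target(sock, angle_deg=3.5, distance_in=120.0)

Swapping socket.SOCK_DGRAM for socket.SOCK_STREAM (plus connect/accept handling on each end) gives the TCP version, which guarantees delivery at the cost of possible stalls while lost packets are retransmitted.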

Roscoe Plowbots 27-01-2016 10:47

Re: More Vision questions
 
My team has been using NI Vision Assistant and we now know how to track objects on the screen; however, we were wondering how to translate that tracking into motor movement. We want our robot to find the target and then automatically adjust to score. If anyone has any code, websites, or tips for us, that would be great. Thank you.

adciv 27-01-2016 11:05

Re: More Vision questions
 
I have a general algorithm for you. Mind you, it requires an encoder or gyro, depending on what you're moving (turret or robot).

Step 1: Use camera to acquire angle relative to shooter.
Step 2: Use Encoder/Gyro to turn that angle (or close to it).
Step 3: Use camera to check new angle relative to shooter.
If angle requires more adjustment, GOTO Step 2. Else, continue to Step 4.
Step 4: FIRE!
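
A rough Python rendering of that loop, just to make the structure concrete. The camera, gyro, drive and shooter objects and their methods are hypothetical placeholders for your own hardware and vision code; only the measure / turn / re-check / fire structure comes from the steps above, and the tolerance, gain and pass limit are made-up numbers to tune.

Code:

ANGLE_TOLERANCE_DEG = 1.0   # assumed "close enough" window
MAX_PASSES = 5              # stop hunting after a few camera/gyro passes

def turn_by(drive, gyro, delta_deg, k_p=0.02):
    # Step 2: turn by delta_deg, closing the loop on the gyro rather than the camera.
    target = gyro.angle() + delta_deg
    while abs(target - gyro.angle()) > ANGLE_TOLERANCE_DEG:
        error = target - gyro.angle()
        drive.rotate(max(-0.5, min(0.5, k_p * error)))  # clamped proportional turn power
    drive.rotate(0.0)

def aim_and_fire(camera, gyro, drive, shooter):
    for _ in range(MAX_PASSES):
        error_deg = camera.target_angle()   # Steps 1 and 3: camera measures the remaining error
        if abs(error_deg) <= ANGLE_TOLERANCE_DEG:
            shooter.fire()                  # Step 4
            return True
        turn_by(drive, gyro, error_deg)     # Step 2
    return False                            # never converged; hand control back to the driver

The key point is the same as in the steps: the fine turn in Step 2 is closed on the encoder/gyro, and the (slower) camera is only used to re-measure the remaining error between turns.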

GMeyer 27-01-2016 17:13

Re: More Vision questions
 
In answer to the original four questions:

1. We've used a number of them, but most recently we used RoboRealm.
2. We're not sure, but it looks to be OpenCV on a Raspberry Pi, which we're going to use this year.
3. RoboRealm, by far.
4. Both RoboRealm and OpenCV are equally accurate.
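
For the OpenCV-on-a-Raspberry-Pi option, a minimal sketch of the sort of pipeline that setup typically runs: threshold the target colour in HSV and take the centre of the largest contour. The HSV bounds, camera index, and the assumption of a green ring light are guesses to tune for your own camera and lighting, not anything stated in this thread.

Code:

import cv2
import numpy as np

LOWER_HSV = np.array([50, 100, 100])   # assumed lower bound for a green ring light
UPPER_HSV = np.array([90, 255, 255])   # assumed upper bound

def find_target(frame):
    # Threshold the frame in HSV and return the pixel centre of the largest blob.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]   # OpenCV 3 vs 4 return shapes
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x + w / 2.0, y + h / 2.0)

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)          # first USB camera on the Pi
    ok, frame = capture.read()
    if ok:
        print(find_target(frame))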

Greg McKaskle 27-01-2016 21:09

Re: More Vision questions
 
The vision example does some mapping on the target location so that the target coordinates range from -1 to 1 in X and Y. The reason for this is that it makes the code independent of camera resolution and makes the output very similar to a joystick value. You may need to multiply by a scaling number, but this is pretty close to being able to wire the camera result to the Robot Drive.
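
As an illustration of that mapping (plain Python, not the NI/LabVIEW vision example itself): the pixel centre is scaled to the -1..1 range so the same numbers work at any resolution, and a small gain turns it into something you could feed to the rotate axis of a drive. The image size and gain below are assumptions.

Code:

IMAGE_WIDTH = 320        # assumed camera resolution
IMAGE_HEIGHT = 240
TURN_GAIN = 0.5          # the "scaling number" applied before the value reaches the drive

def normalize(cx_pixels, cy_pixels):
    # Map pixel coordinates to the joystick-like -1..1 range, independent of resolution.
    x = 2.0 * cx_pixels / IMAGE_WIDTH - 1.0
    y = 1.0 - 2.0 * cy_pixels / IMAGE_HEIGHT   # flip so +Y is up, like a joystick
    return x, y

def turn_command(cx_pixels):
    # Scaled rotate value that could be fed to an arcade-drive style rotate input.
    x, _ = normalize(cx_pixels, IMAGE_HEIGHT / 2.0)
    return TURN_GAIN * x

# Example: a target centred at pixel x = 240 in a 320-wide image gives x = 0.5,
# so the turn command is 0.25.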

If you want to close the loop faster than your image processing allows, consider using a gyro or IMU to turn by X degrees.

Greg McKaskle

