We are having trouble with vision and working it into the framework in LabVIEW. We have the example and are not sure what to realistically do with it.
We use LabVIEW and would really appreciate help with this.
The vision example shows how you can use a ring light and the camera to identify the column. The paper goes a bit further and shows how to determine distance and even horizontal offset. The examples do not show how to drive the robot based on the camera results.
What do you have so far? What type of robot? What would you like to do with vision?
Greg McKaskle
A 6-wheel robot. I put half of the example into the vision processing loop; now I'm trying to figure out how to get the outputs and what to do with them.
The first thing I’d do is figure out what you want the vision code to do. If your robot drives to the target using the line, is this camera for fine tuning, height measurement, or are you using the camera instead of the line?
The example code was showing how to find the columns using only the camera. It returns the camera X pixel location. To make it more useful to steer the robot, consider doing something like
(X / (half the image width)) - 1.
This will give you a -1 to 1 range for the column position. This is pretty similar to the X input to the arcade RobotDrive VI.
If you scale the column position a bit, possibly negate it, you should be able to get the robot to turn to stare at the column – put the column at the center of the camera image. If you add a small Y value, say -0.4, the robot will steer and move forward trying to keep the column in the center.
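In text form (LabVIEW is graphical, so this is just a sketch of the same math), the normalization and steering described above might look like the following. The image width, gain, and forward value are assumptions you would tune for your own camera and robot, and the sign of the steering term may need to be flipped depending on how your drive is wired:

```python
def column_steering(column_x_px, image_width_px=320, gain=0.5, forward_y=-0.4):
    """Convert the column's X pixel location into arcade-drive inputs.

    column_x_px: X pixel of the detected column (0 .. image_width_px).
    image_width_px, gain, forward_y: assumed values; tune on your robot.
    Returns (x, y) in the -1 .. 1 range an arcade drive expects.
    """
    # Normalize: 0 .. width  ->  -1 .. 1, where 0 means "column centered".
    position = (column_x_px / (image_width_px / 2)) - 1
    # Scale down (and possibly negate) so the robot turns toward the
    # column instead of spinning at full speed.
    steer_x = gain * position
    # A small constant Y makes the robot creep forward while it steers.
    return steer_x, forward_y

# A column at the exact center of a 320-pixel-wide image produces no turn:
# column_steering(160) -> (0.0, -0.4)
```

The full-scale spin warning below is visible here: with a gain of 1.0, a column at the image edge would command a maximum-rate turn, which is why a smaller gain is a safer starting point.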
First, determine if this is what you want to do with the vision, and feel free to ask questions. Also, keep in mind that it is often better to drive the robot a bit first and map out useful joystick values. 1.0 doesn’t sound big, but for the X input, it will make the robot spin very fast. Also, once you start to write code, I recommend putting the robot on blocks and moving the vision stuff around instead. Put a strip of tape on a wheel on each side so that you can better tell how the robot is responding.
Greg McKaskle
OK, basically I want to steer the robot toward the pole, keeping it aligned straight using vision from the reflective tape. I'm having problems taking the example and implementing it into the framework to do that.
Before you drive the robot, did you copy the code to the right of the green line and put it into the vision loop? Then hook up the image from the camera to the input to the Color Threshold. At that point, test to see if your camera is acquiring images and calculating the pole position correctly.
Greg McKaskle
So do I get this output from the VI or the driver station?
The driver station delivers the joystick values and other competition state to the robot. It doesn’t have anything to do with vision.
The dashboard is often run on the same computer. It can request the camera images be sent to it, and it is often used to display the images to the drivers, sometimes overlaying field info over the image.
The VI is, I assume, the vision loop that runs on the cRIO and is typically used to process the images and update any sort of driving target info. This info is often sent to the driver station too.
Greg McKaskle
http://i1230.photobucket.com/albums/ee483/Mstrickland0601/vision2.png Take a look. From this point I basically want to take the columns and keep the robot straight with them. Here is the code in vision processing.