Vision Tracking Help

I am using LabVIEW.

Can someone explain how to go about using the rectangle vision tracking code to track the goals and possibly use it to move a turret to the camera’s tracked position? If someone has code that they would not mind sharing, it would be greatly appreciated.

I’m working on it and am pretty far along; I just don’t know how to do the TCP and UDP communication…

Personally I don’t have code, but there are plenty of FRC examples in the LabVIEW Getting Started window: under Support, choose Find FRC Examples, go to Vision, then Rectangular Target Processing. Hope that helps.

In what site?

The rectangle vision code example provided is a good start. Once you get it integrated into your code, it will find all the targets in its FOV.

You’ll need to write some code to select a single target. In our case, we selected the highest target seen, which, if I remember right, would be the highest “Y” value. Once we selected a single target, we then used the X, Y, and distance values from that target.
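LabVIEW is graphical, so there’s no text code to paste, but the selection logic is simple enough to sketch out. Here it is as a rough Python sketch; the field names are just stand-ins for whatever your rectangle-processing VI reports for each target:

def pick_highest_target(targets):
    # targets: list of dicts with "x", "y", "distance" entries per detected rectangle
    # (placeholder field names; in LabVIEW this would be an array of clusters
    # walked with a For Loop and a shift register holding the best one so far).
    if not targets:
        return None
    # "Highest" target = the one with the largest Y value.
    return max(targets, key=lambda t: t["y"])

# Example: two detected rectangles; the second sits higher in the image.
targets = [{"x": 0.20, "y": -0.10, "distance": 12.0},
           {"x": 0.05, "y": 0.45, "distance": 14.5}]
best = pick_highest_target(targets)   # -> the y = 0.45 target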

The X value is what you want from your target to move your turret. Since the values from the rectangle vision code are scaled from -1 to 1, it makes for some EASY PID programming. If your camera is directly on center with your shooter, your setpoint will be ZERO in your PID function block. Your target “X” information will be your input, and your output will be the motor command to your turret.
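As a rough sketch of that loop in text form (Python here, since LabVIEW diagrams don’t paste well), with the -1 to 1 X value as the process variable, a setpoint of zero, and placeholder gains you’d have to tune on your robot:

class SimplePID:
    # Bare-bones PID, standing in for the LabVIEW PID function block.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

turret_pid = SimplePID(kp=0.8, ki=0.0, kd=0.05)   # placeholder gains, not tuned values

def aim_step(target_x, dt=0.02):
    # target_x is the selected target's X in the -1..1 range; the setpoint is ZERO
    # because 0 means the target is centered (camera assumed centered on the shooter).
    output = turret_pid.update(0.0, target_x, dt)
    return max(-1.0, min(1.0, output))   # clamp to a legal motor command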

That’s the AIMING PART.

Distance can be used to adjust the speed or the angle. In our case, we chose SPEED. There is a trick I learned from one of our team programmers, Luke Pike, about interpolated arrays in LabVIEW. This allowed us to use a “lookup table”.

The basic logic is: “when the distance is ?, look up the speed setpoint in the lookup table, then set the speed of the shooter wheel.” The only problem we saw is that when shooting at an angle, the distance reported by the example code will not be correct, because it’s based on scaling the size of the seen rectangle into a real-world dimension. Very clever, and it works great, but you need to be looking at the target pretty much perpendicularly. In the case of this game, shooting from the key worked in most cases and only skewed the value 0.5 to 1 foot. This wasn’t enough to mess up the setpoint values for the wheel.
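The interpolated-array idea, written out as text (in LabVIEW this is typically done with Threshold 1D Array feeding Interpolate 1D Array); the distance/speed pairs below are made-up placeholders you’d replace with numbers from test shots:

DISTANCES    = [8.0, 10.0, 12.0, 15.0, 18.0]      # feet (placeholder calibration points)
WHEEL_SPEEDS = [2400, 2700, 3000, 3400, 3900]     # wheel setpoints at those distances (placeholders)

def speed_for_distance(d):
    # Clamp to the ends of the table.
    if d <= DISTANCES[0]:
        return WHEEL_SPEEDS[0]
    if d >= DISTANCES[-1]:
        return WHEEL_SPEEDS[-1]
    # Find the bracketing pair and linearly interpolate between them.
    for i in range(1, len(DISTANCES)):
        if d <= DISTANCES[i]:
            frac = (d - DISTANCES[i - 1]) / (DISTANCES[i] - DISTANCES[i - 1])
            return WHEEL_SPEEDS[i - 1] + frac * (WHEEL_SPEEDS[i] - WHEEL_SPEEDS[i - 1])

print(speed_for_distance(13.0))   # 13 ft falls between 12 and 15, so roughly 3133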

In our case, the logic seems to work, but we are dealing with compression problems on the mechanical side, and now we wish we had designed a single shooter wheel with a fixed back plate. We see that a lot of shooters designed this way are more immune to ball compression than the double shooter wheel design. We have noted that a single shooter wheel needs to be turning almost twice as fast as a double shooter wheel. In terms of motor wattage, we are using two Fisher-Price motors at around 350 watts, and most of the single shooter wheel designs need 4 motors, or close to 700 watts of power.

We never solved the angled shots. In the Robonauts video, their software seems to account for them. Glad someone figured that out; we ran out of time.

Hope that gives some basic guidance in “concepts” of control theory.

If you want to get a little better distance approximation on angled shots, you can switch the code to use the vertical measurements instead. For even better estimation, you may want to use the bounding box as a region of interest and do some edge detection or line fits.
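For reference, the math behind that kind of size-to-distance estimate is a simple pinhole model; here’s a sketch of it using the vertical extent (Python; the FOV, resolution, and target-height numbers are placeholders you’d replace with your camera’s spec and the real target dimensions):

import math

TARGET_HEIGHT_FT = 1.5       # assumed real-world height of the reflective rectangle
IMAGE_HEIGHT_PX  = 240       # camera's vertical resolution (example value)
VERTICAL_FOV_DEG = 36.0      # placeholder; use your camera's actual vertical FOV

def distance_from_pixel_height(target_height_px):
    # The full image spans 2 * d * tan(FOV/2) feet of world height at distance d,
    # and the target covers target_height_px / IMAGE_HEIGHT_PX of that, so solve for d.
    fov = math.radians(VERTICAL_FOV_DEG)
    return (TARGET_HEIGHT_FT * IMAGE_HEIGHT_PX) / (2.0 * target_height_px * math.tan(fov / 2.0))

print(distance_from_pixel_height(24))   # e.g. a target measured 24 pixels tall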

Greg McKaskle

Would anyone mind posting some sample code for turret control? To begin, I would be happy to just control the turret with the vision tracking; shooting distance would be a bonus. I have got all the vision processing working from the examples and can track the rectangles. I just need to know what to do with the output. It sounds like it isn’t a lot of work, I just don’t know where to start. We are going to have two limit switches on our turret so it doesn’t try to do a 360. It could pull off a 360, but the wires would be a nightmare.

If you can track the rectangles, then you can find the X coordinate of the target you want to aim at. If your camera is mounted such that it turns with the turret, you need to establish what screen coordinate the turret is actually aiming at. Then you can find the difference between the desired coordinate and the tracked one and use that to turn the turret and make the two values the same.

Do you understand that high-level description?

I think I understand this in words, but putting it into LabVIEW is the difficulty. I think you are saying that if the camera is mounted in the center of the turret, then your target might have a coordinate of (0, 5), and as you move, the coordinate would change, but we want the program and the turret to do everything in their power to keep the X coordinate at 0. The Y may change as you get closer or move farther away, but the program is trying to keep the X the same to keep the target centered on the turret.

Now making that a reality is where I am going to struggle.

Try something similar to this.

You’ll want to use the X coordinate from the camera as the process variable to a PID controller (since this will change as the turret rotates). Your setpoint should be the center of the image (where you want the target to appear), which you can find by dividing the X resolution of the image by two. Note that this also assumes the camera is mounted exactly on the center of the shooter - you might have to tweak the setpoint to make it mesh with your system. You’ll need to tune the PID constants in my snippet, since they’re the default values. There are other threads around for help on PID gain tuning (just search for them).
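In case the snippet doesn’t come through, here is roughly the same idea written out as text (Python, proportional-only for brevity), assuming a raw pixel X coordinate and an image width you’d substitute with your camera’s actual resolution:

IMAGE_WIDTH_PX = 320                    # assumed X resolution of the camera image
SETPOINT = IMAGE_WIDTH_PX / 2.0         # pixel column where the target should sit when centered
KP = 0.01                               # placeholder gain: motor command per pixel of error

def turret_command(target_x_px):
    error = SETPOINT - target_x_px
    return max(-1.0, min(1.0, KP * error))   # clamp to a legal motor command range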

auto aim.png



If you are using the Target location from the LV example, it is a -1 to 1 coordinate with 0 in the center, not 80. If you are using a more raw version, width/2 would be the right target for the PID.

Greg McKaskle

How do you add another camera that is not an Axis camera? Mine is a TRENDnet TV-IP110/A.