Has anyone gotten direction extrapolation working using the included VIs? We’re trying to find a way to determine how far off center the camera is and have the base correct for it. I know that the processing software varies the measured distance with the angle of the camera, and with that measurement it’s definitely possible to have the robot center itself using the ultrasonic sensor as a verification method. What I’m not sure about is how to determine which direction the robot has to turn in order to correct itself.
In addition to the distance, the vision code you’re working with tells you the location of the target in the image. Use the x coordinate of the target to decide which direction you need to turn the robot, and how far.
It really depends on how exactly you code the auto-aiming. If you use PID to get the target into the center of the camera’s field of view, then it’s really just a matter of tuning the PID parameters to get the best results from your system. There are many ways to tune a PID controller, but one of the most common (and easiest to implement in our timeframe) is a combination of guess-and-check and Ziegler–Nichols. I’m not sure there’s really much more information to post.
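For reference, here is a rough sketch of the classic Ziegler–Nichols rules, written in Python only because LabVIEW is graphical: with I and D set to zero, raise P until the system oscillates steadily, note that ultimate gain Ku and the oscillation period Tu, then compute the gains from the textbook constants. The example numbers at the bottom are made up.

# Classic Ziegler-Nichols tuning from the ultimate gain Ku and the
# oscillation period Tu (both found experimentally with I = D = 0).
def ziegler_nichols(Ku, Tu):
    Kp = 0.6 * Ku          # proportional gain
    Ki = 1.2 * Ku / Tu     # integral gain   (Kp / Ti, with Ti = Tu / 2)
    Kd = 0.075 * Ku * Tu   # derivative gain (Kp * Td, with Td = Tu / 8)
    return Kp, Ki, Kd

# Example: steady oscillation began at Ku = 2.0 with a 0.5 s period.
print(ziegler_nichols(2.0, 0.5))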
But if you have specific questions about the logic/methodology behind auto-aiming, ask away, I’d be happy to answer.
Attached is the portion of the code I am using to control a motor to align to a target. I am encountering problems determining the logic to have the motor *focus* on the target chosen.
From the picture, I take the x-value and, when it is between -0.09 and 0.09, assign an output of 0 so the motor stays fixed on the target. When the value is outside that range, I want the motor to move clockwise or counter-clockwise until the value is back within that range. I am having trouble using the x-position to get the motor to center on the target.
We have read through the Vision Target processing and checked FRCMastery’s section on vision processing; however, we are still having trouble.
Any pointers would be greatly appreciated. Thank you in advance for your responses and time.
But all jokes aside, you’re going to want to use a PID Controller in order to get your turret to aim. A PID controller is a feedback control mechanism, which, simply put, means that it will calculate the error between your desired value (the “setpoint”) and the current value (the “process variable”). So, since you have real-life feedback (your camera) as well as a desired value, you can use a PID controller to “lock onto” the target.
In LV, there’s a PID palette, and you place PID.vi onto your block diagram. You will want to wire the offset between the center (x) of the target and the center of the image (
offset = xCenter - (xResolution/2)
) as the process variable to the PID controller, and put the setpoint at 0 (you want the center of the target to be exactly in the center of your image). You will have to make sure that the output range is scaled properly, and also that the input ranges are both in the same units (you shouldn’t have to do any scaling in this case - both are pixel offsets).
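Since LabVIEW is graphical, here is the same wiring sketched in Python so the data flow is explicit. The PID class below is only a simplified stand-in for PID.vi, the 320-pixel width and the gains are placeholder values, and the output is clamped to a -1 to +1 motor range as an assumption about your motor controller.

# Simplified stand-in for PID.vi (not the actual LabVIEW VI).
class PID:
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, process_variable, dt):
        error = setpoint - process_variable
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))  # clamp to motor range

X_RESOLUTION = 320                       # camera image width in pixels (example)
pid = PID(kp=0.01, ki=0.0, kd=0.001)     # placeholder gains -- these need tuning

def turret_command(x_center_pixels, dt=0.02):
    offset = x_center_pixels - (X_RESOLUTION / 2)   # process variable
    return pid.update(setpoint=0.0, process_variable=offset, dt=dt)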
Then comes the hard part. You’ll have to tune the parameters of the controller. There are three parameters: P (proportional), I (integral), and D (derivative). See the links in my last post for some common approaches and read through this section of the Wikipedia page for how the gains work.
And if you can’t get this to work, there is a simpler (but much less robust) method. You could figure out the sign of the offset between the target center and the image center. This tells you whether your turret needs to move left or right to get centered. Then you can send a positive or negative value to your motor, depending on the direction, and keep sending that constant value until the offset is within a threshold of center, at which point you stop the motor. This is similar to the approach you started with.
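Here is a minimal sketch of that sign-based method in Python. The 0.05 dead band and the 0.3 turning power are made-up values, and the sign convention is a guess; flip it if your turret turns the wrong way.

# Simple sign-based aiming: drive at a fixed power toward the target
# and stop once the offset is inside a dead band.
DEAD_BAND = 0.05
TURN_POWER = 0.3

def simple_aim(offset):
    if abs(offset) <= DEAD_BAND:
        return 0.0              # close enough: stop the motor
    elif offset > 0:
        return -TURN_POWER      # target is off to one side: turn toward it
    else:
        return TURN_POWER       # target is off to the other side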
I would recommend going with a PID solution because it’s much more powerful and robust than the other way, but it does take some time and effort to get right. There are a lot of resources out on the internet and around ChiefDelphi for you to use. And as always, feel free to ask questions.
Very funny, lol. With all the reading and work getting done, it was nice to have a little humor.
I read through the documents, thanks for the links. However, I still do not understand how the process is accomplished inside LabVIEW. I set up a case structure so the automation runs when a button is pressed down.
I think I set up the inputs and outputs for the PID block correctly, as you stated. I enclosed a screen capture of what was done.
You mention to use this code:
offset = xCenter - (xResolution/2)
I think that is what I have done, but I’m still unsure.
As you can see from the picture, I manually set the pixel value for the camera in the x direction. Is there a better way of doing this? Maybe as a variable that will update if the settings are changed?
I think that if you look at the Rectangle processing, it has already defined the X and Y positions to be in the range of -1 to 1, more like a joystick. Unless that code has been removed, subtracting 320/2 doesn’t make sense.
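To make the two conventions concrete, here is a small Python sketch of the conversion between a raw pixel coordinate and the -1 to +1 “joystick style” value. If the vision code already gives you the normalized value, that value is the offset from center and can feed the PID directly; only if you have raw pixels do you need the division, and keeping the width as a parameter avoids hard-coding 320.

# Normalize a pixel x coordinate into the -1 .. +1 range so that a later
# change of camera resolution only touches image_width.
def normalized_offset(x_center_pixels, image_width):
    half = image_width / 2.0
    return (x_center_pixels - half) / half   # -1 at left edge, +1 at right

# e.g. normalized_offset(240, 320) -> 0.5, a target halfway to the right edge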
Then what about compensating for the pixel offset as mentioned?
This post clearly states there must be an offset created, and it makes sense. If there is no offset, then how would the PID controller know the error and hence how to correct for it?
From our testing, with the stock kitbot setup and using PWM, it’s nearly impossible to have the robot precisely move to a certain position. The slop in the chain and transmission, and the varied power needed to overcome the moment of inertia, all contribute to the robot overshooting when it attempts to aim.
Our solution: use CAN and encoders on the transmission, tune the motor PID on the carpet, and tweak the auto-aim code. CAN lets us closely monitor the number of rotations or set a speed at which to track, regardless of the resistance the wheels meet (to an extent).
The overshoot that everyone is noticing with the example above is due to the camera being slightly off and the robot trying to correct precisely. This can be improved by using more case structures that vary the motors’ power, instead of having a single power at which the robot tracks, but it won’t solve the problem.
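For what it’s worth, here is a Python sketch of that “more case structures” idea: pick the motor power from the size of the error instead of using one fixed tracking power. The band edges and power levels are made up and would need adjusting on a real robot.

# Graded power bands (the text equivalent of stacking more cases).
def banded_power(offset):
    mag = abs(offset)
    if mag < 0.05:
        power = 0.0      # dead band: on target
    elif mag < 0.2:
        power = 0.15     # small error: creep
    elif mag < 0.5:
        power = 0.3      # medium error
    else:
        power = 0.5      # large error: turn faster
    return power if offset > 0 else -power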
And what auto-aim code are you referring to? The rectangular target processing sample code? I do not see any auto-aim code… There is one, however, for tracking using a servo.
The camera is mounted on an axle moved by a drive motor. It is a turret that just rotates in the x direction. By limiting the speed, I don’t see the motor overshooting the target.
I am hoping the approach documented by plnyyanks will work as mentioned. It seems like a simpler approach. But I would love to take a look at a screen cap of your code so I can see more precisely what you are referring to.
There is enough slop in a kitbot system that you’re very likely to overshoot. We created some autoaim code similar to what has been posted above, which is what I’m referring to.
On the topic of the servo: a servo works by using position feedback. The same idea holds true for a larger system (such as a kitbot), which also requires feedback. You may be able to get the system to function by adding a bit more damping (the carpet), but it won’t perform well enough to aim accurately.
See, I thought this too, but then I noticed that, after a while with the teleop code (in thumbnail), we can aim rather precisely within the right range, and that the overshoot/wobbling we noticed was actually coming from the targets swapping places in the Target Info array.
Granted, it could turn out that we might need to center the camera better to ensure that the center of the camera corresponds to the center of the target, but that seems easy enough to do…
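If the wobble really is from targets trading places in the array, one fix is to pick a single target consistently each frame. Here is a hedged Python sketch of that idea; the record keys (“score”, “x”) are placeholders for whatever your target report actually contains.

# Pick one target consistently so the aim does not jump when entries
# reshuffle in the Target Info array.
def pick_target(targets, previous_x=None):
    if not targets:
        return None
    if previous_x is None:
        # First frame: take the highest-scoring target.
        return max(targets, key=lambda t: t["score"])
    # Afterwards: take the target closest to where the last pick was,
    # so a reordered array still yields the same physical target.
    return min(targets, key=lambda t: abs(t["x"] - previous_x))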
If the coordinate system has zero at its center, and the center is the goal value, then the measured position minus the center is the same as the measured position minus 0; in other words, the measured x value is already the error.
Ideally, I’d then compute the desired rotation and close the loop using a gyro or encoders: use the faster sensors for feedback, and tune a PID loop (probably just PI).
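A minimal Python sketch of that approach, assuming a gyro: use one camera frame to convert the pixel offset into a heading setpoint, then close the loop on the gyro with a PI controller. The 47-degree field of view, 320-pixel width, and gains are all assumptions, not measured values.

CAMERA_FOV_DEG = 47.0    # assumed horizontal field of view
IMAGE_WIDTH = 320        # assumed image width in pixels

def target_heading(current_heading_deg, x_center_pixels):
    # Convert the pixel offset into an angle and add it to the current heading.
    offset_px = x_center_pixels - IMAGE_WIDTH / 2.0
    return current_heading_deg + offset_px * (CAMERA_FOV_DEG / IMAGE_WIDTH)

class TurnPI:
    def __init__(self, kp=0.02, ki=0.002):   # placeholder gains -- tune these
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, target_deg, gyro_deg, dt):
        error = target_deg - gyro_deg
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        return max(-1.0, min(1.0, out))   # clamp to motor output range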