I’m new to vision and am attempting to use GRIP. I’ve already gotten values from a NetworkTable, but I’m not sure where to go from there. I’m also using a Pixy cam over analog, we use Java, and we are not using a coprocessor. Any help would be appreciated.
That is correct, I am running GRIP on the driver station and the Pixy separately; the Pixy is only being used as the camera. To clarify, my main issue is that I’m not sure how to use the values from the NetworkTable for vision, specifically centerX, since I’m trying to track objects on the x-axis.
centerX is the x-coordinate of the center of the object, so all you need to do to center the object relative to the robot is run a PID loop that turns your robot, with centerX as the input. Once centerX is at the center of the range (so for 0-4096, that would be 2048), your robot is facing the object.
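For illustration, here’s a minimal proportional-only sketch of that idea in Java. The table name assumes your GRIP publish step is called myContoursReport, and the image width and gain are placeholders you’d replace with your own values:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionTurnHelper {
    // Assumed image width in pixels; use whatever range your centerX values actually span.
    private static final double IMAGE_WIDTH = 320.0;
    private static final double KP = 0.005; // placeholder gain, tune on the robot

    // Table name assumes the GRIP publish operation is named "myContoursReport".
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("GRIP/myContoursReport");

    /** Returns a turn command in roughly [-1, 1], or 0 if no target is visible. */
    public double getTurnCommand() {
        double[] centerXs = table.getEntry("centerX").getDoubleArray(new double[0]);
        if (centerXs.length == 0) {
            return 0.0; // nothing detected this frame
        }
        double error = centerXs[0] - IMAGE_WIDTH / 2.0; // pixels off-center
        return KP * error; // feed into drive.arcadeDrive(forward, turn) or similar
    }
}
```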
Does that help any?
If you are running GRIP on the driver station, I wouldn’t recommend doing that, since your latency will be high. If you have to run it on the driver station, try getting a gyro on your robot. You can use your camera’s field of view to calculate the angle displacement, then run a PID loop on the robot with the gyro, using the angle displacement as your setpoint.
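The angle calculation is just a linear mapping from pixels to degrees. Something like this, where the image width and horizontal FOV are assumptions you’d swap for whatever camera GRIP is actually processing:

```java
/**
 * Converts a centerX pixel value into an approximate angle offset (degrees)
 * from the center of the image, using a linear approximation of the lens.
 * Both constants are assumptions -- substitute your camera's real values.
 */
public final class VisionAngle {
    private static final double IMAGE_WIDTH = 320.0;    // pixels
    private static final double HORIZONTAL_FOV = 60.0;  // degrees, camera-specific

    private VisionAngle() {}

    public static double pixelToAngle(double centerX) {
        double offsetFromCenter = centerX - IMAGE_WIDTH / 2.0;    // pixels
        return offsetFromCenter * (HORIZONTAL_FOV / IMAGE_WIDTH); // degrees
    }
}
```

The result is how far the robot needs to turn; add it to the current gyro heading and that sum is the setpoint for the gyro PID loop.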
To clarify, centerX should be used to set the setpoint of the PID controller rather than be the feedback signal. Vision processing typically has too much latency and updates too slowly to be useful in that capacity.
Again, centerX should be the PID setpoint. I’ve seen too many people attempt to use it as a sensor and it never works out.
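To make the distinction concrete, here’s a rough sketch of that pattern. It assumes the newer WPILib PIDController class and an ADXRS450 gyro purely for illustration (older releases use a different PIDController/PIDSource API), with placeholder gains, and it reuses the pixelToAngle helper sketched above. The vision reading is sampled once to set the target; the fast gyro loop is what actually closes the loop:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.wpilibj.ADXRS450_Gyro;

public class TurnToTargetExample {
    private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();                         // example gyro
    private final PIDController turnController = new PIDController(0.02, 0.0, 0.0); // placeholder gains

    /** Call once when a fresh vision frame arrives: vision sets the setpoint. */
    public void setTargetFromVision(double centerX) {
        double angleOffset = VisionAngle.pixelToAngle(centerX);      // degrees off-center
        turnController.setSetpoint(gyro.getAngle() + angleOffset);   // absolute heading to face
    }

    /** Call every loop (~20 ms): the gyro, not the camera, is the feedback signal. */
    public double getTurnOutput() {
        return turnController.calculate(gyro.getAngle());
    }
}
```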