Stabilized shooter & calculating distance by vision

Hi everybody,
I’d be glad to know which algorithm / method you used for:

  1. Stabilizing the shooter (at a specific RPM). Did you use an encoder and a PID closed loop? If so, which specific gains in the PID did you use? Otherwise, what other method did you use to stabilize the shooter?

  2. Finding your distance from the vision target. Did you use a table of measurements, or a formula? If you used a formula, I’d be glad if you would explain it and tell me whether it worked precisely.

[We - 4590 - have written our code in C++ and used RoboRealm for the vision processing.]

  1. Used a Pololu wheel encoder with codewheels laser-printed on cardstock. The black strip is about 0.5" wide so it works at 11,000 RPM on our 2-7/8" BaneBots wheels. We’re using C++ and a slightly modified version of the WPILib PIDController. The modification changes how integral anti-windup works, to speed up reducing the integral action in certain cases. I can post the modified class if you’re interested. For tuning, we’re using the P, I, and F terms. F was tuned by driving the motor open loop and measuring the RPM at a specific command value. P was tuned by increasing it until we saw some instability in the RPM, then reducing it slightly. I was tuned to minimize time to settle within the error range, while keeping overshoot low.
  2. No vision processing. We have a few parameter sets to let us shoot from various spots around the field, like behind and in front of the pyramid. Aiming and ranging is done by running into these physical locators. Co-pilot has a selector switch to pick which setup to use. Reduces our flexibility somewhat, but vastly simplifies programming and driving.

We are using a bang-bang controller on our shooter, which keeps us within 3% of our target rpm. Of course, this method has the best response after we shoot and requires no tuning. The RPM is measured with a light sensor and the FPGA timer.

We have a camera with RR code that detects the target and finds the distance with the angle the target occupies in our FOV and the ratio of the width and height of the target. That being said, we like to position ourselves on field objects. We don’t use the camera during practice.


Thanks first for your comments.
Kevin, I would really appreciate it if you’d post the code. I’ve never used F and I’m not familiar with it. It would be nice if you could explain more about how it affects the stabilization.


xmaams - even though you don’t use it in matches - did your formula for finding distance work? And can you post an explanation / the RoboRealm code for it (if you want to)?
Also, since we also use RR - have you experienced lag on the robot when running your code and RR at the same time?

I’ve attached a zip of our testing codebase. This one is currently set up for tuning the PID for the shooter tilting, but it’s easy to switch it over for tuning the shooter wheels.

F term is feedforward. It’s not based on feedback or error, simply on your commanded setpoint. So the output is the traditional PID output + kF * setpoint. This is pretty much required when using a traditional PID to control a velocity. The idea is that if you know the speed controller command it takes to reach a specific speed, you can build that into your controller.

Example: Through open-loop testing, you know it takes a command of 0.75 to reach 7500 RPM. So you set kF = 0.75/7500 = 0.0001. Now, when you give a command of 7500, the output is going to be the usual PID output + 0.75. Your P and I terms only have to compensate for pulling you up to speed and for any error in the kF-predicted open loop command. It makes things react faster and stabilize better, since you don’t need P and I to build up enough output to create the 0.75 you need to be at 7500 RPM.

You’ll see in there that I’m using my own custom AdvPIDController. It does integral anti-windup slightly differently: it pre-calculates the PID output, and if the output is saturated and in the same direction as the error, it doesn’t integrate the error for that cycle. Doing it that way really reduces your overshoot, since you’re not adding integral action at the beginning of a step move, when the output typically saturates instantly.

LeopardCommandBasedRobotTemplate.zip (45.6 KB)



The RR code is in a visual basic script with some trigonometry. After we find the target rectangle, we find the angle that the target takes up on the camera by multiplying the field of view of the camera (radians) by the width of the rectangle (pixels) over the width of the image (pixels). We use half of the result to make the next step simpler.
The distance (inches) is then the width of the target in real life (inches) divided by two (to match the half angle from above) and then divided by the tangent of the above angle.

If you have any questions about how the math works I can draw you a picture with my amazing paint skillz.

The code ends up looking something like this:
halfTargetRad=fovRad*(widthPx/imageWidth)/2
distance=(widthTargetInch/2)*1/(Tan(halfTargetRad))

This method works better if you correct for the distortion in the axis camera and if you use a larger image resolution. For us, this was accurate to within a foot on a 1/5 scale target 12 feet away. (which scales to almost full court for a regular target)

RR runs on our DS laptop, which is not the Classmate. We connect to the camera directly, so there is no load on the cRIO. Then we send the data back to the robot with NetworkTables. (This is about when the vision project got put on hold, but up to that point we noticed no lag.) If you are using the Classmate, you might not have enough power to do a lot of vision and communication. Try to limit your FPS, and remember that grayscale images process more quickly.

Kevin, we need to get to the desired RPM progressively. We could use only PID to get to the desired RPM quickly but progressively, and then hold it with F, P, and I. I didn’t understand, though, what you changed about the I term in your custom PIDController.


xmaams-
About larger image resolution - we use 640x480. But what do you mean by correcting for the distortion in the Axis camera? Can you explain how you do that?
Thanks!

And has anybody else here done these things in a different way?

Second thing first, I’m not changing how I is used, just how/when the error is integrated. It happens in the Calculate function.

void PIDController::Calculate()
{
//SNIP
	if (enabled)
	{
		float input = pidInput->PIDGet();
		float result;
		PIDOutput *pidOutput;
		{
//SNIP
			if(m_I != 0)
			{
				double potentialIGain = (m_totalError + m_error) * m_I;
				if (potentialIGain < m_maximumOutput)
				{
					if (potentialIGain > m_minimumOutput)
						m_totalError += m_error;
					else
						m_totalError = m_minimumOutput / m_I;
				}
				else
				{
					m_totalError = m_maximumOutput / m_I;
				}
			}
			m_result = m_P * m_error + m_I * m_totalError + m_D * (m_error - m_prevError) + m_setpoint * m_F;
			m_prevError = m_error;
//SNIP
}

Versus:

void AdvPIDController::Calculate()
{
// SNIP
	if (enabled)
	{
		float input = pidInput->PIDGet();
		float result;
		float m_interror;
		PIDOutput *pidOutput;
		{
//SNIP
			if(m_I != 0)
			{
				m_result = m_P * m_error + m_I * m_totalError + m_D * (m_error - m_prevError) + m_setpoint * m_F;
				m_interror = m_error;
				// Don't integrate when the output is saturated in the same
				// direction as the error (original post compared m_interror
				// against m_error here, which is always true; it should
				// compare the pre-calculated output against the error)
				if ((m_result <= m_minimumOutput || m_result >= m_maximumOutput) && ((m_result * m_error) > 0))
				{
					m_interror = 0;
				}
				double potentialIGain = (m_totalError + m_interror) * m_I;
				if (potentialIGain < m_maximumOutput)
				{
					if (potentialIGain > m_minimumOutput)
						m_totalError += m_interror;
					else
						m_totalError = m_minimumOutput / m_I;
				}
				else
				{
					m_totalError = m_maximumOutput / m_I;
				}
			}

			m_result = m_P * m_error + m_I * m_totalError + m_D * (m_error - m_prevError) + m_setpoint * m_F;
			m_prevError = m_error;
//SNIP
}

It’s a subtle difference, but important. The stock WPI code keeps integrating until the integral action alone is large enough to saturate the output. In a slow-reacting system, that can leave you with a lot of integral action to dissipate once you get to your setpoint, which leads to a ton of overshoot. My version doesn’t add to the integral action as long as the output is saturated by all the control action combined. So you’re not building up a huge amount of integral action while the system is saturated and getting up to your setpoint as fast as it physically can.

Alright, to your first point. You want to get to your setpoint progressively? Like you want to slowly ramp up to your target RPM for some reason? Most people tune their closed loop speed controllers for three things: minimum time to reach (or re-reach) a setpoint, minimum error at steady state, and good robustness to a changing battery voltage. The idea being that you want to be able to just tell the system a setpoint and know it’s going to get there and stay there. If you’re using a PID controller, you should set up your gains to achieve this and leave them alone after that, for the most part. Changing gains on the fly can cause instabilities if you’re not very careful about things.

Slowly changing your setpoint is certainly achievable, but you should do that separately from tuning your PID. For that, you should have a separate class that limits how quickly the PID setpoint can change. So you’d have a currentSetpoint variable, and compare that to the commandedSetpoint, and if the difference is greater than rampRate, you add or subtract rampRate from the currentSetpoint. Otherwise you set them equal. That means your setpoint will only change by a small amount per cycle, so it will slowly ramp up or down.

The Axis camera has some distortion if what you are looking at is on the edges of the image. This is probably not a big problem if you are usually head on. RR has instructions for using its radial distortion module here.