After Worlds we set out with a plan to make our code public and write a whitepaper on move and shoot: how we worked through design and code decisions and modifications from before Heartland through the Houston Championships.
We (our senior programmer and I) never got around to the whitepaper, but in prep for IRI we did go through, clean things up, and make improvements, so I would still like to share the code in its raw form.
Please see link below to the public Github repo for the code for our 2022 robot “Blue Widow”
This was our first year using WPILIB Command Based programming and I think it worked out quite well.
Please do not hesitate to ask any questions! I will try my best to answer anything, including why we may have done something a certain way. Also, if we are doing something in a weird way, feel free to point it out; it’s still a learning process for the students and for me.
For STL-area teams: I am really hoping to run some programming workshops this fall once recruitment picks back up when school is in session, so please reach out if you are interested in joining.
Hey! This is some very cool code.
I was confused reading the Limelight get-distance function: I’m not sure why pitch - abs(yaw)/12 is used as the input to the interpolation table, since this seems like it would be pretty hard to tune, and just subtracting yaw doesn’t intuitively seem like it would work.
final double tyAdj = ty() - Math.abs(tx() * 57.296) / 12;
final double distance = distTable.getOutput(tyAdj);
So this one missed getting cleaned up (I think we should be using a radians-to-degrees conversion here rather than multiplying by 180/π written out as a decimal).
The whole idea of /12 really doesn’t make direct sense on its own, but let me try to explain.
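For clarity, here is what the cleaned-up version of that line would look like as a standalone method, using Math.toDegrees instead of the hard-coded 57.296 literal (the class and parameter names here are just for illustration):

```java
// Sketch of the cleaned-up yaw correction. tyDegrees is the Limelight ty in
// degrees; txRadians is the tx angle in radians (hence the conversion).
public class YawCorrection {
    static double tyAdj(double tyDegrees, double txRadians) {
        // Subtract the yaw correction factor: |tx| in degrees divided by 12
        return tyDegrees - Math.abs(Math.toDegrees(txRadians)) / 12.0;
    }
}
```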
When using move and shoot we purposely do not keep the goal centered in the image if there is any tangential velocity relative to the goal. Because of this, the raw ty() angle gives a pretty bad distance value. This is easy to see if you rotate your robot/turret in the yaw axis and watch the Limelight image: the center of the goal moves up as you rotate either left or right, making the ty() angle look as if the robot is closer. So we did a quick-and-dirty fix: divide the tx() angle in degrees by 12 and subtract it. We call it our yaw correction factor.
I’m pretty sure there’s a better way to do this, but it worked pretty well. We use vision constantly (every single loop) to solve for robot x,y so we can use the pose and velocities to position a “virtual goal” for move and shoot. It was really important that we could do this while moving, even tangentially, when the target is near the edge of the image. This allows us to successfully move and shoot while approaching top speed from behind the “ball circle”.
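The “virtual goal” idea above can be sketched roughly like this. This is a simplified illustration, not our actual implementation; the constants and names (GOAL_X, GOAL_Y, shotTime) are made up for the example:

```java
// Hedged sketch of positioning a "virtual goal" for move and shoot:
// offset the real goal opposite the robot's field-relative velocity over the
// ball's flight time, then aim and range at that offset point.
public class VirtualGoal {
    // Example hub position on the field in meters (illustrative values)
    static final double GOAL_X = 8.23;
    static final double GOAL_Y = 4.11;

    // vx, vy: robot field-relative velocity (m/s); shotTime: estimated ball
    // flight time (s). Returns {x, y} of the virtual goal to aim at.
    static double[] virtualGoal(double vx, double vy, double shotTime) {
        return new double[] { GOAL_X - vx * shotTime, GOAL_Y - vy * shotTime };
    }
}
```

In practice the flight time itself depends on distance to the virtual goal, so you end up iterating this a couple of times per loop until it converges.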
This is data collected today (every robot will be different) at 4 distances from the hub; at each distance the turret was rotated CCW in approximately 5° increments of tx angle.
The spreadsheet has some algebra built in that spits out a function to convert any (tx, ty) from the Limelight into the equivalent ty the camera would see if the tx angle were zero.
This allows an easy-to-tune linear interpolation table of ty vs. distance to keep working even when the target is not centered in the image (which will happen any time you are moving with move and shoot enabled).
If done correctly the graph on the right should be pretty linear.
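In case it helps anyone, here is a rough sketch of what an easy-to-tune linear interpolation table can look like as a standalone class (hypothetical; our actual distTable implementation lives in the repo):

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal linear-interpolation lookup table: put (tyAdj, distance) calibration
// points in, then getOutput() interpolates between the two nearest points and
// clamps outside the calibrated range.
public class InterpTable {
    private final TreeMap<Double, Double> points = new TreeMap<>();

    public void put(double input, double output) {
        points.put(input, output);
    }

    public double getOutput(double input) {
        Map.Entry<Double, Double> lo = points.floorEntry(input);
        Map.Entry<Double, Double> hi = points.ceilingEntry(input);
        if (lo == null) return hi.getValue();                        // below table: clamp
        if (hi == null) return lo.getValue();                        // above table: clamp
        if (lo.getKey().equals(hi.getKey())) return lo.getValue();   // exact calibration point
        double t = (input - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}
```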
Take a look at this spreadsheet. We basically noticed that the target tends to move upward along a quadratic as we rotate our robot/turret, meaning ty doesn’t give the right distance when the target is not centered in the image. We used algebra to solve a system of equations so we can calculate the “would-be” ty at tx = 0 for any (tx, ty) pair.
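As a rough sketch of what that correction boils down to (the coefficient here is a hypothetical example; you would fit it from data like the spreadsheet above): if the apparent ty rises roughly quadratically with tx, then the equivalent centered ty is just ty minus the fitted quadratic term.

```java
// Sketch: correct ty back to its tx = 0 equivalent, assuming the target's
// apparent vertical angle rises quadratically as the turret yaws off-center.
public class TyCorrection {
    // Fitted coefficient (degrees of ty per degree^2 of tx).
    // This value is illustrative only; fit it from calibration data taken
    // at several distances with the turret swept through a range of tx.
    static final double K = 0.004;

    // tx, ty in degrees; returns the "would-be" ty if tx were zero
    static double tyAtZeroTx(double tx, double ty) {
        return ty - K * tx * tx;
    }
}
```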
I will try to make a write up on this in more detail when I get the chance.
For performKeepAngle, I can understand why you are changing “output” if the conditions of:
“timeSinceRot > 0.5” and the absolute value of “rot” parameter < minRotCommand
But I can’t figure out why you have “timeSinceDrive < 0.25”, since shouldn’t the robot try to keep the same rotation after you drive it, or while you are driving? Also, what happens if you use 1 or 2 seconds instead of 0.25 seconds?
I attached the related part of your code:
if (timeSinceRot < 0.5) {
  // Update keepAngle until 0.5 s after the rotate command stops,
  // to allow the rotation move to finish
  keepAngle = getGyro().getRadians();
} else if (Math.abs(rot) < DriveConstants.kMinRotationCommand && timeSinceDrive < 0.25) {
  // Run the keep-angle PID until 0.25 s after the drive command stops,
  // to combat decel drift
  output = m_keepAnglePID.calculate(getGyro().getRadians(), keepAngle);
}
I think this is so that keep angle can continue to counteract induced rotation from a quick decel without leaving the wheels angled for too long. If you left it at, say, 2.00 s, the robot might rotate the swerve modules with a really small command but never actually get there. It probably does not matter much, but we didn’t want the robot to keep trying to reach the angle while stationary, so whatever correction it could get in that last 0.25 s was good enough. Next year we are going to revisit this, as I think there may be better solutions.
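For anyone trying to replicate the timing windows, here is roughly how the two timers feeding performKeepAngle could be maintained (names and thresholds are approximate; see the repo for the real implementation):

```java
// Sketch of the timers behind timeSinceRot / timeSinceDrive: reset each timer
// whenever the corresponding driver input is active, then the windows in
// performKeepAngle are just "now minus last active time".
public class KeepAngleTimers {
    private double lastRotTime = 0.0;
    private double lastDriveTime = 0.0;

    // Call every loop with the driver inputs and the current time in seconds
    // (on the robot this would come from something like Timer.getFPGATimestamp()).
    public void update(double xSpeed, double ySpeed, double rot, double now) {
        if (Math.abs(rot) >= 0.05) lastRotTime = now;      // 0.05 ~ example kMinRotationCommand
        if (Math.abs(xSpeed) > 0.05 || Math.abs(ySpeed) > 0.05) lastDriveTime = now;
    }

    public double timeSinceRot(double now)   { return now - lastRotTime; }
    public double timeSinceDrive(double now) { return now - lastDriveTime; }
}
```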
Posting the latest links to our 2022 Final and 2023 Beta repos for our B bot (used to begin tinkering with the new PathPlanner), and the latest repo for our 2022 robot “Blue Widow”.
These repos will get the latest code pushed before tomorrow morning. Good luck to all teams in 2023!