3824's rapid fire feature

Improved vision system

Are you still using the RPi based vision system I saw on your robot at Palmetto?

Yup. Painful but effective. Now using Lidar for range to relax error tolerance on auto shooting.

We know about the painful part, we use one too.

I’ve read into lower cost Lidar systems but haven’t tried them yet. Which one are you using?

I’ll get you the part number tomorrow. Really like it.

The accuracy is astonishing. Not a single miss. Very impressed. Can’t wait to see what you guys do at camps.

The lidar is from PulsedLight.

http://pulsedlight3d.com/

It’s a little over $100. In a nutshell, at Palmetto we targeted the shooter manually, hit slightly above 50%, and tried to shoot from the batter.

At Smoky, we got the vision system perfected and hit close to 100% on the high goal in autonomous, and probably 80% from the field, but could shoot from any distance. We had weird communication problems with the Raspberry Pi and didn’t solve them (thanks to another team) until the finals (we changed to a static IP address). We used the vision system to auto-align for shooting, but it was relatively slow (took about 4 to 5 seconds).

Since Smoky, we’ve got the lidar coupled to the vision system. We relax our accuracy tolerance as a function of distance and have tightened up the controls to where it takes between 1 and 3 seconds to line up and fire. The driver just gets close and hits the trigger; auto firing takes care of the rest.
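
To make the idea concrete, distance-based tolerance can look roughly like this. Everything here (names, ranges, the linear ramp) is illustrative, not our actual code or numbers:

// Illustrative only: relax the allowed aiming error as the robot gets closer.
// The goal looks bigger up close, so a larger pixel error still scores.
static final double BASE_PIXEL_TOLERANCE = 4.0;    // tightest tolerance (far away)
static final double NEAR_DISTANCE_INCHES = 60.0;   // placeholder "close" range
static final double FAR_DISTANCE_INCHES  = 180.0;  // placeholder "far" range

static double allowedPixelOffset(double lidarDistanceInches) {
    // Linearly ramp from 3x tolerance at close range down to 1x at far range.
    double t = (lidarDistanceInches - NEAR_DISTANCE_INCHES)
             / (FAR_DISTANCE_INCHES - NEAR_DISTANCE_INCHES);
    t = Math.max(0.0, Math.min(1.0, t));            // clamp to [0, 1]
    return BASE_PIXEL_TOLERANCE * (3.0 - 2.0 * t);
}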

What kind of controller are you using to get the drivetrain to the exact shooting position?

We’ve actually been through a couple of iterations of controllers. At first (way back in the middle of build season) we tried the WPILib PID controller, but didn’t like it: we had a low frame rate coming back from our image-processing code and a not-very-responsive drivetrain, so we ended up with a huge delay between output to the motors and response in the image, among other things.

At Palmetto we used a step-based control that used the gyro to control left/right aiming, and just set our shooter height based on a quadratic equation and the distance from the goal (either from the lidar or the size of the reflective tape, I can’t remember which we actually used now).
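
The height mapping is just a parabola fit to test shots at known distances. Something like this; the coefficients are placeholders, you’d fit your own:

// Illustrative quadratic distance-to-height mapping: height = a*d^2 + b*d + c.
// Coefficients are placeholders, not our actual fit.
static final double QUAD_A = -0.0008;
static final double QUAD_B = 0.35;
static final double QUAD_C = 12.0;

static double shooterHeightFor(double distanceInches) {
    return (QUAD_A * distanceInches + QUAD_B) * distanceInches + QUAD_C;
}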

After we came back from Palmetto we decided to basically drop what we had and start over. We went with something really simple: check which way we need to move, move a little bit in the right direction, repeat.

It’s literally 6 lines of code:


// Step the drivetrain encoder setpoint toward the target based on how far
// off-center the target is in the image. pixelXOffset is the magnitude of
// the horizontal offset; isPixelXOffsetPositive is its sign (+1 or -1).
if (pixelXOffset > Constants.IMAGE_LARGE_PIXEL_OFFSET_X)
    encoderPosition += isPixelXOffsetPositive * Constants.IMAGE_LARGE_STEP_ANGLE_X;   // big step
else if (pixelXOffset > Constants.IMAGE_MEDIUM_PIXEL_OFFSET_X)
    encoderPosition += isPixelXOffsetPositive * Constants.IMAGE_MEDIUM_STEP_ANGLE_X;  // medium step
else if (pixelXOffset > Constants.IMAGE_SMALL_PIXEL_OFFSET_X)
    encoderPosition += isPixelXOffsetPositive * Constants.IMAGE_SMALL_STEP_ANGLE_X;   // small step

We also use encoders connected to our drivetrain now so we get more precise control of our wheel positions. PID controllers control the encoder positions; we just adjust the setpoints up and down.
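
With the 2016 WPILib API that looks roughly like this (gains and ports are placeholders, not our actual values):

import edu.wpi.first.wpilibj.*;

// One side of the drivetrain: a PIDController holds the encoder at a
// position setpoint; the aiming code only nudges the setpoint.
Encoder leftEncoder = new Encoder(0, 1);
Talon leftMotor = new Talon(0);
PIDController leftPid = new PIDController(0.05, 0.0, 0.0, leftEncoder, leftMotor);
leftPid.enable();

// Each vision frame, apply the step from the code above to the setpoint
// instead of commanding the motor directly.
double encoderStep = isPixelXOffsetPositive * Constants.IMAGE_SMALL_STEP_ANGLE_X;
leftPid.setSetpoint(leftPid.getSetpoint() + encoderStep);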

We do basically the same thing with the PID controller for the linear actuator that moves the shooter up and down. (Plus some magic that makes it scan up and down when it can’t see the target)
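
The "magic" is just sweeping the setpoint when vision reports no target. Roughly this; names and limits are made up for illustration:

// Illustrative scan: when the camera can't see the target, sweep the
// shooter setpoint between its travel limits until it reappears.
// shooterPid is the actuator's PIDController.
static final double SCAN_STEP = 0.5;   // placeholder step per loop
static final double SCAN_MIN  = 0.0;   // placeholder actuator limits
static final double SCAN_MAX  = 40.0;

double scanDirection = 1.0;

void updateShooterAim(boolean targetVisible, double visionSetpoint) {
    if (targetVisible) {
        shooterPid.setSetpoint(visionSetpoint);  // track the target normally
        return;
    }
    double next = shooterPid.getSetpoint() + scanDirection * SCAN_STEP;
    if (next > SCAN_MAX || next < SCAN_MIN) {
        scanDirection = -scanDirection;          // reverse at either limit
        next = shooterPid.getSetpoint() + scanDirection * SCAN_STEP;
    }
    shooterPid.setSetpoint(next);
}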

We actually only use the lidar for autonomous positioning.

We were very impressed with the accuracy we had at Smoky, but we wanted to be faster. Turns out, all we needed to do was relax the required tolerance for shooting (so we shoot sooner and spend less time lining up exactly right).
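
In code terms the change was tiny, something like this (illustrative names again, building on the tolerance sketch above):

// Fire as soon as both axes are inside the (now looser) tolerance band,
// rather than waiting for a near-perfect line-up.
boolean readyToFire = pixelXOffset < allowedPixelOffset(lidarDistanceInches)
                   && Math.abs(shooterPid.getError()) < SHOOTER_HEIGHT_TOLERANCE;
if (readyToFire) {
    fire();  // hypothetical trigger for the shooter
}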