Week 2 Recap
Last week we focused heavily on the intake, reasoning that its design would strongly influence the design of everything else. This week we finished the basic intake CAD and started the CAD for everything else.
Collector
The collector is going to be a wide, over-the-bumper roller that assists in picking up cubes and doubles as our climbing mechanism. It’s going to deploy using a motor rather than pneumatics, and we’ve been working on the CAD for it throughout the week. We also started assembling a more advanced prototype so we can fine-tune the gear ratios and make sure it works as intended. Here’s a video of an early collector prototype picking up a cube:
Arm
We’ve decided on a double-jointed arm mechanism to carry the intake, and started work on the CAD for it. To give ourselves a head start on the more advanced programming, though, we built a 1/4-scale simplified double-jointed arm. It’s powered by NEO 550s with 100:1 planetaries and REV Through Bore Encoders. Since the plan for the full-size arm is to use full-size NEOs, this will let us complete much of the complicated programming early.
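Reading those Through Bore Encoders is straightforward with WPILib’s DutyCycleEncoder class (one common setup is to read their absolute duty-cycle output through roboRIO DIO ports). Here’s a minimal sketch; the DIO port numbers and zero offsets are placeholders, not our actual wiring or calibration:

```java
import edu.wpi.first.wpilibj.DutyCycleEncoder;

/** Reads the two joint angles of the model arm from REV Through Bore Encoders. */
public class ModelArmSensors {
  // DIO ports and zero offsets below are placeholders for illustration.
  private final DutyCycleEncoder shoulderEncoder = new DutyCycleEncoder(0);
  private final DutyCycleEncoder elbowEncoder = new DutyCycleEncoder(1);

  /** Shoulder angle in radians, zeroed with the first arm segment horizontal. */
  public double getShoulderRadians() {
    return (shoulderEncoder.getAbsolutePosition() - 0.25) * 2.0 * Math.PI;
  }

  /** Elbow angle in radians, measured relative to the first arm segment. */
  public double getElbowRadians() {
    return (elbowEncoder.getAbsolutePosition() - 0.50) * 2.0 * Math.PI;
  }
}
```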
Here is a video of the model arm in action (before any advanced programming; the operator here is just commanding the percent output of each joint motor):
You may be asking yourself, “what complicated programming? You can just run a PID loop on the motors and have a few preset positions.” That is a valid approach, and we did implement presets before doing anything more advanced, so we have something to fall back on if the complicated approach doesn’t work out (a rough sketch of that fallback follows the list below). But there is more we can do: by using inverse kinematics, we should ideally be able to set the position of the intake (which sits at the end of the arm) using horizontal and vertical coordinates. This opens the door to controlling the arm more intelligently. Here are a few examples:
- The operator can directly move the intake up, down, forwards, or backwards (instead of manually controlling the rotation of each joint). If they need to do something complicated, this will make it much easier for them.
- Motion profiling, i.e. moving the intake along deliberate paths. The way I like to think of it is that we could technically run G-code on the arm. That on its own isn’t all that useful, but it may allow for much more consistent mid and high node cone scoring, since you can deliberately and quickly set the cone down on the node.
- Constraints. Although less important this year thanks to the generous 48" extension limit, this will also let us more easily stay within the frame extension limit and the height limit.
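As an aside, the preset fallback mentioned above doesn’t need much code. Here’s a rough sketch, assuming one WPILib PIDController per joint; the gains and preset angles are illustrative placeholders, not our tuned values:

```java
import edu.wpi.first.math.controller.PIDController;

/** Fallback preset control: one PID loop per joint driving toward fixed angles. */
public class ArmPresets {
  // Gains below are illustrative, not our tuned values.
  private final PIDController shoulderPid = new PIDController(4.0, 0.0, 0.1);
  private final PIDController elbowPid = new PIDController(3.0, 0.0, 0.05);

  // A hypothetical preset, e.g. a mid-node scoring position (radians).
  private double shoulderSetpoint = Math.toRadians(45);
  private double elbowSetpoint = Math.toRadians(-30);

  /** Called periodically with measured joint angles; returns percent outputs. */
  public double[] calculate(double shoulderAngle, double elbowAngle) {
    return new double[] {
      shoulderPid.calculate(shoulderAngle, shoulderSetpoint),
      elbowPid.calculate(elbowAngle, elbowSetpoint)
    };
  }
}
```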
We made it as far as implementing the inverse kinematics based on this educational video, but couldn’t get it completely functional: the intake was definitely going to a point in 2D space, just not the desired point. Sadly, since autonomous routines and general swerve drive tuning will take priority over advanced arm control, this might have to take a back seat, but I will be sure to revisit it if I get the chance.
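For reference, the closed-form math for two-link IK is short; here’s a sketch of the standard law-of-cosines solution (the link lengths are made-up numbers, and a real implementation would also need joint limits and a deliberate elbow-up/elbow-down choice):

```java
/** Closed-form inverse kinematics for a two-jointed (two-link) planar arm. */
public final class TwoLinkIK {
  // Link lengths in meters; these are made-up numbers, not our arm's dimensions.
  private static final double L1 = 0.30; // shoulder to elbow
  private static final double L2 = 0.25; // elbow to intake

  /** Returns {shoulder, elbow} joint angles in radians for an intake target (x, y). */
  public static double[] solve(double x, double y) {
    // Law of cosines gives the elbow angle from the distance to the target.
    double d = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2);
    if (Math.abs(d) > 1.0) {
      throw new IllegalArgumentException("Target (" + x + ", " + y + ") is out of reach");
    }
    // Negating the square root picks one of the two mirrored elbow solutions.
    double elbow = Math.atan2(-Math.sqrt(1.0 - d * d), d);
    double shoulder =
        Math.atan2(y, x) - Math.atan2(L2 * Math.sin(elbow), L1 + L2 * Math.cos(elbow));
    return new double[] {shoulder, elbow};
  }
}
```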
Pose Estimation and AprilTags
Last offseason, we worked on AprilTag recognition using PhotonVision running on an old Limelight, but due to a bug in the Kalman filter implementation used by WPILib’s SwerveDrivePoseEstimator class, the robot code would crash whenever it saw an AprilTag. We did learn a lot in the process, though.
We decided to get an N5095 Beelink Mini PC as our coprocessor this year, along with two global shutter cameras (AR0144 and OV9281) and 2.8mm lenses. The Beelink will run PhotonVision and be powered by a buck-boost converter that accepts an 8-40V input and outputs 12V at 3A, perfect for the Beelink. (I am slightly concerned about the 8V minimum input, but that shouldn’t be an issue in a match.)
Since we had already completed the swerve drive base for our practice robot, we were itching to see the pose estimator work with AprilTags, but there were many things holding us up.
First, we needed to mount the Beelink on the robot and wire it. Thankfully, mounting holes for the Beelink were included on the brainpan, but when we had previously wired everything else on the robot, we neglected to leave the space open for the Beelink. So, we spent an entire meeting rewiring around where the Beelink needed to go.
The next day we needed to wire the Beelink to robot power, but we didn’t have our buck-boost converter yet (in fact, we didn’t order it until that day), so we decided to just use the VRM for now and hope for the best.
It was at about this point that we realized we should probably have the field elements that the AprilTags mount to in the correct positions before getting too excited about recognizing them, so we got started taping out the zones and field element locations on our carpet. We had already completed the co-op grid, so it was just a matter of figuring out its correct location, moving it there, and placing an AprilTag on it.
Next, we had to actually put a camera on the robot. Although we had our global shutter cameras, we had no good way to mount them (our plan is to 3D print cases for them later in the season), and we didn’t have the lenses for them yet either. So we decided to just zip-tie a good ol’ Microsoft LifeCam on and call it a day.
Finally, after a few minor software fixes, we got the whole thing working: a WPILib SwerveDrivePoseEstimator being updated with drivetrain encoders, a navX2, and AprilTag pose information. You can see us playing with it at 0:58 in our week 2 recap video:
https://youtu.be/JqeocUKpMHE?t=58
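For anyone wanting to replicate this, the core of it is only a few WPILib calls. Here’s a minimal sketch; the class and method names around the estimator are stand-ins, with the kinematics, navX2 yaw, module positions, and vision poses coming from our drivetrain and PhotonVision code (not shown):

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;

/** Fuses swerve odometry (drive encoders + navX2 yaw) with AprilTag vision poses. */
public class PoseEstimation {
  private final SwerveDrivePoseEstimator poseEstimator;

  public PoseEstimation(SwerveDriveKinematics kinematics, Rotation2d gyroYaw,
      SwerveModulePosition[] modulePositions) {
    poseEstimator =
        new SwerveDrivePoseEstimator(kinematics, gyroYaw, modulePositions, new Pose2d());
  }

  /** Call every robot loop with the navX2 yaw and the drive encoder positions. */
  public void updateOdometry(Rotation2d gyroYaw, SwerveModulePosition[] modulePositions) {
    poseEstimator.update(gyroYaw, modulePositions);
  }

  /** Call whenever PhotonVision reports a tag sighting, with the capture timestamp. */
  public void addVision(Pose2d visionPose, double timestampSeconds) {
    poseEstimator.addVisionMeasurement(visionPose, timestampSeconds);
  }

  public Pose2d getPose() {
    return poseEstimator.getEstimatedPosition();
  }
}
```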
Since the camera was only loosely zip-tied on, the vision pose wasn’t very accurate, so we couldn’t do much more until we get the real cameras properly mounted. Still, it was really nice to see this all come together after months of work.