Team 1736 2019 Robot Reveal

I don’t think you’d get away with that interpretation.

Also, thanks to all for the notes on the climber. We did revise the geometry of the mechanism after the video was shot. We’re going to go back tonight in our CAD to ensure we’re compliant with G23/R25.


Well, definitely wasn’t expecting #teamtether to be revived, but here we are. Good luck with inspections and see you all in Peoria!


I’m super-psyched you guys have a plan! This black magic is absolutely bonkers to watch. Very well-executed!



Sorry for the late reply! Yup, arm is automated.

It’s driven by a single NEO, and we close the loop around the motor’s internal encoder.

We did some physics simulations early in the season to prove out that a trivial bang-bang control would promptly tip the robot over, so we investigated other options. We’re currently using the “SmartMotion” feature in the SparkMax, which implements trapezoidal motion profiling.
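To make the trapezoidal-profiling idea concrete, here’s a minimal sketch of the kind of position profile SmartMotion generates onboard (ramp up at a capped acceleration, cruise at a capped velocity, ramp down to the target). All the numbers and the function name are hypothetical, not our actual constraints:

```python
# Minimal trapezoidal motion profile sketch. The velocity/acceleration
# limits below are example values, not our real arm constraints.

def trapezoid_profile(distance, max_vel, max_accel, dt=0.02):
    """Return a list of position setpoints stepping from 0 to `distance`."""
    setpoints = []
    pos, vel = 0.0, 0.0
    while pos < distance:
        # Distance needed to decelerate to zero from the current velocity.
        stop_dist = vel * vel / (2 * max_accel)
        if distance - pos <= stop_dist:
            vel = max(vel - max_accel * dt, 0.0)      # ramp down
        else:
            vel = min(vel + max_accel * dt, max_vel)  # ramp up / cruise
        pos = min(pos + vel * dt, distance)
        setpoints.append(pos)
        if vel == 0.0 and pos < distance:
            break  # can't make further progress this tick
    return setpoints

profile = trapezoid_profile(distance=1.0, max_vel=0.5, max_accel=0.25)
```

Feeding setpoints like these into the position loop is what keeps the arm from slamming around the way bang-bang control would.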

We also looked into using the arbitrary feed-forward for cosine-based gravity compensation. However, it turned out it would adjust the motor command by at most ~3%, which was within the noise range for our application, so we ended up turning it off to simplify operation.
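For anyone curious what the cosine compensation looks like: the feed-forward term is largest with the arm horizontal (where gravity fights the motor most) and shrinks to zero with the arm vertical. A quick sketch, with a hypothetical kG constant sized to match the ~3% figure above:

```python
import math

# Cosine gravity-compensation sketch. K_G is the fraction of full motor
# output needed to hold the arm horizontal -- a hypothetical value here,
# sized to the ~3% correction mentioned above.
K_G = 0.03

def gravity_ff(angle_rad):
    """Feed-forward added to the motor command; 0 rad = arm horizontal."""
    return K_G * math.cos(angle_rad)

horizontal = gravity_ff(0.0)        # full 3% correction
vertical = gravity_ff(math.pi / 2)  # correction vanishes
```

With the whole term bounded at 3% of output, it’s easy to see why it disappeared into the noise for us.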

The driver can select between four fixed positions (high/mid/low/ground) or adjust the position manually. The intake and gripper are automatically actuated into safe positions so the gripper can be stowed or extracted within the frame.
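The preset-plus-manual-trim scheme can be sketched as below. The preset names match the post, but the angles, class, and method names are all hypothetical:

```python
# Sketch of driver-selectable arm presets with manual adjustment.
# Angles are made-up example values.
PRESETS = {
    "high": 95.0,    # degrees
    "mid": 60.0,
    "low": 30.0,
    "ground": 0.0,
}

class ArmSetpoint:
    def __init__(self):
        self.target = PRESETS["ground"]

    def select(self, name):
        """Jump to one of the four fixed positions."""
        self.target = PRESETS[name]

    def nudge(self, delta_deg):
        """Manual adjustment on top of the selected preset."""
        self.target += delta_deg

arm = ArmSetpoint()
arm.select("mid")
arm.nudge(-2.5)  # driver trims the preset down slightly
```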

Main takeaway from me: NEO motors are awesome. I am totally down for using them on drivetrain next year.


Paging @Treecko120 :slight_smile:

Well, it’s kind of hard to see in the video, but we use a JeVois camera to help line up with the target. The JeVois runs a program that takes a high-resolution camera stream and filters it for the bright green vision targets. Once only the targets remain, we look at all the rectangles found by cv2.boundingRect and check whether each one and its potential pair fit the parameters of a real target. From there, the camera does some math to find where the target sits within the camera’s field of view.

We also have a more advanced program that uses cv2.solvePnP to obtain coordinates for the target and send them to a path planner, but it is as of yet untested.
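The “where is the target in the field of view” math is just the pinhole-camera model: convert the target’s pixel x-coordinate into a signed yaw angle off image center. A sketch, with hypothetical resolution and FOV numbers (not necessarily what the JeVois stream actually uses):

```python
import math

# Pixel-to-yaw sketch using the pinhole-camera model.
# Resolution and horizontal FOV are hypothetical example values.
IMG_WIDTH = 320   # pixels
HFOV_DEG = 65.0   # horizontal field of view

# Effective focal length in pixels.
FOCAL_PX = (IMG_WIDTH / 2) / math.tan(math.radians(HFOV_DEG / 2))

def yaw_to_target(target_x):
    """Signed yaw (degrees) from image center to the target's x pixel."""
    return math.degrees(math.atan((target_x - IMG_WIDTH / 2) / FOCAL_PX))
```

A target dead center reports 0°, and one at the image edge reports half the horizontal FOV, which is what the drive code needs to turn toward it.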


Super excited to see Casserole in action at Midwest and CIR! Can’t wait to see what great competition you guys cook up this year :slight_smile:
