Michigan Robot in 3 Days 2022

FAMNM’s Robot in 3 Days team, MRi3D, is back for our 3rd year! As in previous years, we’ll be creating a variety of resources for teams throughout the build, which we will be posting here.

We’ll be regularly uploading videos to our YouTube channel (check out the one that just went up!) and adding written resources and documentation to the Google Drive folder linked below and on our website. Additionally, we will have one-hour Twitch streams at 8PM on Saturday, Sunday, and Monday during the build, as well as a post-build stream the weekend after. If that isn’t enough for you, we’ll also be taking our bot to an Ri3D competition (https://firstalumnicollegiatecomp.org/) on February 2nd.

YouTube: youtube.com/c/FAMNM
Facebook: FIRST Alumni and Mentors Network at Michigan
Twitter and Instagram: @theFAMNM
Twitch: twitch.tv/FAMNM
Google Drive: bit.ly/MRi3D2022
GitHub: (coming soon)
Website: https://famnm.club/events/ri3d/index.html
Contact: [email protected]

Questions? Send 'em our way!


We’re live on Twitch until 9PM EST! Join to see the progress the team has made in the first eight hours since the game reveal!

Almost time for our day 2 stream! We’ll be live at 8PM EST at twitch.tv/famnm

Check out our first update video for an overview of our brainstorm and planning process: https://youtu.be/JcKTPsDsQ5Y

As mentioned in the original post, we now have our code on GitHub! Because we want to help as many teams as possible, we decided to do **both** the iterative and command-based Java paradigms. We have published a repository for each paradigm, and the two are functionally equivalent, meaning a driver wouldn’t know whether the robot was running iterative or command-based code. Here are the repositories:

Iterative: https://github.com/FAMNM/ri3d-2022-iterative
Command-based: https://github.com/FAMNM/ri3d-2022-command

You won’t be able to copy all of our code and run it, but certain parts can be used as reference. Key features:

Drive code with thresholding
We are using an Xbox controller to drive the robot, and the joysticks aren’t perfect. They can drift slightly off-center and make the drive motors twitch and stutter. We fixed this minor issue by setting the drive speed to 0 unless the absolute joystick reading is at least 0.05. An example of this can be found in `teleopPeriodic()` in the iterative code and in the `setSpeed()` method of the `DriveTrain` subsystem in the command-based code. This deadband can be increased or decreased depending on the quality of the controller.
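A minimal sketch of that thresholding logic looks like the following. The 0.05 value is the one described above; the class and method names here are hypothetical for illustration, not identifiers from our repositories:

```java
public class DriveUtil {
    // Deadband threshold from the post: readings with absolute value
    // below this are treated as zero. Raise it for a noisier controller.
    static final double DEADBAND = 0.05;

    /** Returns the raw joystick value, or 0 if it is inside the deadband. */
    static double applyDeadband(double raw) {
        return Math.abs(raw) >= DEADBAND ? raw : 0.0;
    }
}
```

WPILib also ships a built-in `MathUtil.applyDeadband()` helper that additionally rescales the output so speeds ramp smoothly from zero at the deadband edge, which is worth considering over a hard cutoff like this one.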

Vision processing
To help ease the work of the drivers during the match, we would like the robot to assist in lining up with the cargo for pickup. This is a sort of computer-aided driving, where the robot controls the rotation speed and the driver controls the forward and reverse speed. We are using a standard USB camera plugged into the roboRIO. The vision processing algorithm will detect the largest group of cargo-colored pixels and return which direction the robot needs to turn to line up with the target. The general pipeline from camera video to motor output is this:

  1. Grab the camera’s current frame and convert it from the RGB color representation to the HSV (Hue, Saturation, Value) representation. This makes it easier later to say “I want red-ish colors”. More information about the HSV representation can be found here: https://www.lifewire.com/what-is-hsv-in-design-1078068
  2. Perform a color mask. This procedure looks at each pixel in the image and determines whether that pixel is cargo-colored or not. The robot’s needed cargo color can be automatically retrieved from the FMS (Field Management System), so the code won’t need to be manually changed between matches. The result of this operation is a new image where cargo-colored pixels are set to white and other pixels are set to black.
  3. Find the contours. This looks at the image as a whole and finds groups of white pixels (which represent cargo-colored regions) that are potential candidates for a target. This operation stores every contour into a list.
  4. Select the largest contour from the list. We iterate over the list of contours and select the one with the largest recorded area. Ideally this will be cargo, and the X and Y coordinates of this contour are recorded for use.
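The mask-and-select steps above (steps 2–4) can be sketched in plain Java. This is a simplified stand-in for illustration only: in real robot code, OpenCV would handle the color conversion and contour finding, whereas here a flood fill finds connected white regions directly, and the HSV thresholds are made-up values you would tune for your actual lighting and cargo color:

```java
import java.util.ArrayDeque;

public class VisionSketch {
    // Hypothetical HSV thresholds for red-ish cargo (step 2). OpenCV-style
    // hue range is 0-179, so red wraps around both ends of the hue axis.
    static boolean isCargoColored(int h, int s, int v) {
        return (h <= 15 || h >= 165) && s >= 100 && v >= 80;
    }

    /** Step 2: build a boolean mask from an hsv[row][col] = {h, s, v} frame. */
    static boolean[][] colorMask(int[][][] hsv) {
        boolean[][] mask = new boolean[hsv.length][hsv[0].length];
        for (int r = 0; r < hsv.length; r++)
            for (int c = 0; c < hsv[0].length; c++)
                mask[r][c] = isCargoColored(hsv[r][c][0], hsv[r][c][1], hsv[r][c][2]);
        return mask;
    }

    /**
     * Steps 3-4: flood-fill each connected white region, track the largest
     * by pixel area, and return its centroid column (-1 if nothing matched).
     */
    static int largestBlobCenterX(boolean[][] mask) {
        int rows = mask.length, cols = mask[0].length;
        boolean[][] seen = new boolean[rows][cols];
        int bestArea = 0, bestCenterX = -1;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (!mask[r][c] || seen[r][c]) continue;
                int area = 0;
                long sumX = 0;
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[] {r, c});
                seen[r][c] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    area++;
                    sumX += p[1];
                    int[][] nbrs = {{p[0] + 1, p[1]}, {p[0] - 1, p[1]},
                                    {p[0], p[1] + 1}, {p[0], p[1] - 1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                                && mask[n[0]][n[1]] && !seen[n[0]][n[1]]) {
                            seen[n[0]][n[1]] = true;
                            stack.push(n);
                        }
                    }
                }
                if (area > bestArea) {
                    bestArea = area;
                    bestCenterX = (int) (sumX / area);
                }
            }
        }
        return bestCenterX;
    }
}
```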

This process is found in the `nextVisionFrame()` method of the iterative code (which is called by `robotPeriodic()`), and in the `VisionProcessor` subsystem of the command-based code. Currently we are not using the output of this procedure as input for the drive train because our robot still needs some mechanical work, but we hope to incorporate it soon.
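Once the vision output is wired in, one plausible way to implement the computer-aided driving described above is a proportional turn term in an arcade-style mix: the driver supplies throttle, and the pixel offset of the target from frame center supplies rotation. Everything here (the gain, frame width, and names) is a hypothetical sketch, not code from our repositories:

```java
public class AimAssist {
    // Hypothetical tuning values: proportional gain per pixel of error,
    // and the camera frame width in pixels. Tune on the real robot.
    static final double KP = 0.005;
    static final int FRAME_WIDTH = 320;

    /**
     * Arcade-style mix: driver supplies forward/reverse throttle, vision
     * supplies the target's centroid x in pixels. Returns {left, right}
     * wheel speeds for a differential drive; a positive pixel error
     * (target to the right) turns the robot right.
     */
    static double[] arcadeWithAim(double throttle, int targetCenterX) {
        double errorPx = targetCenterX - FRAME_WIDTH / 2.0;
        double rotation = KP * errorPx;
        return new double[] {throttle + rotation, throttle - rotation};
    }
}
```

In practice the outputs would also be clamped to [-1, 1] before being handed to the motor controllers, and WPILib's `DifferentialDrive.arcadeDrive()` can do the mixing itself.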

That’s all for now. If anyone has questions, feel free to post them here!


It’s almost time for our day 3 stream! As always, you can find it at twitch.tv/famnm until 9PM EST.

And our next build update video is out, this one focused on the prototyping process: https://youtu.be/WpiBTDhTfR4


Hey @FAMNM is there an events page for the 2022 competition? Would love to follow the event, but the FACC page still shows 2020 info.



Our technical overview is now live! Check out our written documentation, and our YouTube video!

Videos of your Robot shooting and climbing?

Reveal video should be up soon. In the meantime, check out the FACC competition videos that are up now and will continue to come out over the coming days on the FUN YouTube channel.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.