FRC 2370 IBOTS | 2025 Build Thread | Open Alliance

IBOTS FRC Team 2370 is excited to make a splash posting our build blog on the Open Alliance for the first time this year! We are a small, community-based FRC team in Rutland, Vermont. Although we are a small team in a rural area, we strive to have a big impact on our community. We were honored to be selected as a 2024 Impact Award Winner at the NE District Championship last year. One of our main team goals is to continue to grow and expand our impact.

Team Links

9 Likes

FRC KICKOFF 2025

Our Kickoff had a great turnout with teams from around the state of Vermont. There was lots of excitement - everyone dove right into learning the new game challenge and brainstorming ideas. One of our alumni from last year made a great scavenger hunt worksheet to help new students jump in and learn the game. She even had prizes for students who completed it! This scavenger hunt was a great way to introduce students who are completely new to FRC to the game rules and play.

Ultimate FRC Game Manual Scavenger Hunt 2025


10 Likes

WEEK 1 - Working with Algae

On Day 2, a small group of students and mentors started prototyping ways to pick up algae from the floor and deliver it to the processor. We tested at least 3 iterations that day.

  • 1st design - had a passive and an active set of rollers - two 3 inch compliance rollers, 6 inches across. We concluded that the passive set of wheels did not really do much or provide any benefit, so we removed them

  • 2nd design - we used 1 inch sushi wheels all the way across - this worked, but was slower.

  • 3rd design - and best design of the day:
    ½ inch hex shaft mounted 14.5 inches above the top of the bumpers, with the following arrangement of compliance wheels: one 3 inch, one 2 inch, five 1 inch, one 2 inch, and one 3 inch. This design will intake the algae and output it into the processor.

This design is still a work in progress. We’ll post updates on new iterations and tests in the future.

4 Likes

WEEK ONE SUMMARY

We have had a productive (and fun) week. We welcomed many new students to our team who have dived right in learning new skills and contributing to our progress. We have a great group of new students this year - focused and hard working!

Mechanical:
Elevator - making great progress on the elevator - the mechanical structure of the elevator prototype is complete. We still need to design and install the mechanism to actuate it. We are also considering making it a bit wider.

Coral Manipulator - designed, built, and tested an arm/sleeve prototype for delivering coral onto the reef. We tested this with two different wheel sizes. This is still a work in progress - still in the testing and design phase.


Programming - started working on firmware updates, programmed the drivetrain for the Kit Bot, wrote code to test the coral manipulator, and researched using a Limelight calibration board for machine learning.



Kit Bot - Tips for success from some of our students working on the Kit Bot -
“Make sure to read the instructions before starting”
“Look for all orientations before starting anything”
"Measure twice - cut once”

We have finished building and wiring the drivetrain for the Kit Bot. Our programmers have also completed code for the drivetrain. In keeping with the Ocean theme, one of our students created some ship parts and attached them to the Kit Bot - it is now fashioned with a pirate ship helm and anchor.




Scouting - Our scouting system is in the early stages of development. Stay tuned for more - we’re excited about this scouting system!


Game Elements - working on building our reef structure for scoring, using 3D printed parts to join the PVC pipes together in the shape of the reef



8 Likes

Custom PCB - CAN-Power-Encoder Connector
We made this custom PCB to connect CAN and power to our encoder right from the bottom of the Kraken. No power splitters needed - it gets rid of a bunch of wiring, keeps things tidy and safe, and requires no soldering!

11 Likes

Tell me where to order :eyes:

1 Like

Is this something your team would be willing to share the files for? I would buy these for our swerve modules.

1 Like

We’re placing an order right now and will have them up on our website Store — Rutland Area Robotics as soon as they come in.

1 Like

Programming / AI Machine Learning & Vision Tracking -

At the end of week 2, our programmers worked on learning more about setting up AI machine learning and vision tracking for our robot this season. Our programming mentor reviewed several resources and walked through the process of setting up a system for vision tracking, creating data sets, and using those data sets to create an AI vision model for our robot.

We developed the following notes from this discussion and work:

NVIDIA Jetson - NVIDIA Embedded Systems for Next-Gen Autonomous Machines - this is the hardware we use for our vision system; our camera plugs into it on the robot

Jupyter Notebook - https://jupyter.org/

Roboflow - https://roboflow.com/ - for machine vision learning/training. Lots of data and images from FRC teams are already available there - we used these to help train our model. To create a data set for training, you need hundreds of images with varied lighting, settings, and orientations. You can upload your images, draw a box around each object, and label each object to create a data set for training your model (recommended image size: 640x640).

Ultralytics YOLO - You Only Look Once documentation - https://docs.ultralytics.com/
(create your own AI model - they also have pre-trained models for everyday items)
Choose “quickstart”

jtop - on the Jetson, gives info about how much memory is being used. You may not be able to train/create the model on the Jetson itself - ours did not have enough RAM to create our model, so we used Google Colab instead.

Google Colab - https://colab.research.google.com/ - used to train the model; the free T4 GPU provides enough memory to complete the training. Colab allows you to write and execute Python in your browser, with:

  • Zero configuration required
  • Access to GPUs free of charge
  • Easy sharing

Tips/steps for Vision Training & Modeling

  1. Import the photos into Roboflow, then draw an outline around each object and label it - this is used to train the computer to “learn” what it means to be that object
  2. Download the dataset from Roboflow as a zip file to your computer, then extract it - this creates 3 folders: test, train, valid
  3. Use Ultralytics YOLO to create an AI model using the images that were labeled
    They also offer pre-trained models - find the code in the YOLO documentation
  4. Use Google Colab to create the model with the code from YOLO and the Roboflow data if you don’t have enough memory on your device
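Steps 3 and 4 above can be sketched in a few lines of Python, run locally or pasted into a Colab cell. The starting model file, epoch count, and dataset path here are assumptions to adapt, not our exact training script:

```python
IMG_SIZE = 640  # Roboflow's recommended image size from the notes above
EPOCHS = 100    # assumption - tune for your own dataset

def train_model(data_yaml: str = "dataset/data.yaml"):
    """Sketch: train an Ultralytics YOLO model on a Roboflow export.

    `data_yaml` is the data.yaml file included in the Roboflow zip,
    which points at the train / valid / test folders.
    """
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")  # start from a small pre-trained model
    model.train(data=data_yaml, epochs=EPOCHS, imgsz=IMG_SIZE)
    return model
```

On a Colab T4 runtime this is the part that benefits from the free GPU; the trained weights land in a `runs/` folder you can download for the Jetson.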

In your robot code, you need to be aware of the confidence level for identification. We required greater than 78% confidence in our test code - boxes.conf > 0.78 - for the Jetson to identify objects on the field.
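The confidence check itself is simple filtering logic. A minimal stand-alone sketch in plain Python, with detections represented as dicts rather than YOLO result objects:

```python
CONF_THRESHOLD = 0.78  # the cutoff we used in our test code

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections the model is sufficiently confident about."""
    return [d for d in detections if d["conf"] > threshold]

# example: only the first detection passes the 0.78 cutoff
detections = [{"label": "algae", "conf": 0.91},
              {"label": "coral", "conf": 0.42}]
print(filter_detections(detections))  # → [{'label': 'algae', 'conf': 0.91}]
```

Raising the threshold trades missed detections for fewer false positives - worth tuning against field lighting.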

We are just starting to dive into this topic and learn about machine learning and vision tracking so we would welcome any additional tips, suggestions, and resources.

3 Likes

Hey guys, I saw you’re starting to dive into machine learning and figured I’d send this dataset in the hopes that you’ll find it useful. I use PhotonVision running on an Orange Pi 5 for object detection. I’m sure there’s a good way to do it with Jetsons, but with PhotonVision all the backend work has already been done.

If you guys have any datasets that you’ve started curating, I would love if you could send them over to me so I could add them to my big one, I’m trying to create a centralized location for everything.

Good luck!

1 Like

Thanks for the dataset - we’ll get it to our programming mentor and return the favor with any data we can provide!

2 Likes

Glad I could help!

1 Like