FTC 12527 Prototype 2024-2025 Build Thread

Welcome to the Team Prototype 2024-2025 Build Thread, presented by the FTC Open Alliance

What is Team Prototype?

Team Prototype is an FTC team based in Beijing, China. As the oldest FTC team in China, we joined the FTC Open Alliance to foster a culture of collaboration and innovation within FIRST and the Chinese FTC community. We encourage our students to lead the project and gain hands-on experience in science and engineering. True to our team's name, we make progress through experiments and iterations, and we aspire to share our strategy design, CAD, coding, and outreach processes.

Team structure

Our team comprises 15 students from 10th and 11th grades, organized into three departments: Mechanics, Programming, and Outreach. We’ve successfully established a mentoring system that includes a dedicated elective course within our school. In this program, our experienced members serve as mentors to the rookies, fostering skill development and collaboration.

What can you get from us?

Follow our thread closely to make this incredible journey INTO THE DEEP possible with us. Every Monday, members of each subteam will post our latest updates, and you are welcome to ask questions freely.

Team links

To be updated!

11 Likes

Support our leaders Mr. Xiao and Mr. Wenjin!!!

3 Likes

ayyeah captain!!!

4 Likes

Here we go!

3 Likes

Welcome to our first update of this season.

It’s been over a month since kickoff, and we are just getting started. There’s a lot to catch up on, so come and see what we’ve accomplished this week.

Mechanical Update NO.1 (10/28/24)

Although we are a veteran team, most of our members are new to FTC. For our first week, we aim to build subsystems, run tests with parts from GoBilda, and help our rookie members familiarize themselves with the materials available to us.

1. Main Structure

For this year’s game, a slide mechanism is crucial for a high-scoring robot. After analyzing the game, our goal for the slide is to ensure stability while balancing speed and torque to score quickly and climb effectively.

We drew inspiration from team 7974’s RI30H robot and team 16093’s offseason robot from POWERPLAY, incorporating a vertical linear slide and a small pivoting arm with a claw at the end.

After evaluating 7974’s game performance with their RI30H robot, we added a slide to the arm to increase our range for picking up samples.

We ended up with a design in which both the pivot and the slide are powered by linkages.

However, after testing, we found that the servo wasn’t strong enough to lift the entire structure, so we abandoned the front slide.

2. Intake/Claw

We initially implemented an active intake with a horizontal roller powered by two GoBilda servos. The intake speed was ideal, and we successfully mounted it on the pivoting arm, scoring for the first time.

The drawbacks of this active intake are clear: it is inefficient for both intaking and scoring, and it cannot score on the chambers.

Thus, we designed a claw, printed it, and conducted tests.

The claw proved to be very efficient and had a significant margin of error for aligning with the samples. Additionally, we incorporated a rotational degree of freedom to enable self-alignment with vision.

Here’s the CAD for the claw:

5 Likes

Love what you guys are doing, the claw looks very effective!

Software Update NO.1 (11/1/2024)

We decided to go with a claw design for intaking and scoring early on in the season. We focus on the intaking part in this post.

Compared to roller intakes, a claw intake needs to be aligned with the samples really well in order to pick them up effectively and consistently. Relying only on the driver to do that alignment, without automatic assistance, would be very time-consuming and hurt cycle times, so a good vision system is crucial for getting the claw intake to work.

We have seen some amazing claw-based intake designs from Gear Wizards 16917 and 5064 Aperture Science, proving that with a good-enough auto-align, claw intakes can be really reliable and fast.

Now moving on to what we did for the vision:

Vision for autoalign

For the CV part, we tested all our pipelines on a computer first with OpenCV. We took two approaches to the pipeline: a color-based pipeline and an edge-based pipeline.

We can separate the vision process into 3 parts:

  • Color masking: mask out the red, blue, and yellow game pieces.
  • Processing: elaborated in the next sections.
  • Contour finding: find contours and calculate each piece's position and orientation.

Color Masking

For the color masking step, we use the Hue channel in the HSV color space to threshold out the samples. (To keep things simple, we focused on the blue samples; adding new colors follows the same process.)
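
For reference, here is a minimal sketch of this kind of Hue threshold in OpenCV (the bounds below are placeholder assumptions to tune for your camera and lighting, not our tuned values):

    import cv2
    import numpy as np

    # Placeholder HSV bounds for the blue samples; tune for your camera and lighting.
    BLUE_LOWER = np.array([100, 80, 50])
    BLUE_UPPER = np.array([130, 255, 255])

    def mask_blue(frame_bgr):
        """Return a binary mask of the blue samples in a BGR frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, BLUE_LOWER, BLUE_UPPER)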

Contour Finding

After we get the masked frame, we find all the external contours in it and calculate each contour's center of mass as the position and the direction of its best-fit ellipse as the orientation.
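
A rough sketch of that step, reusing the cv2 import from the sketch above (the minimum-area cutoff is an assumption):

    def find_pieces(mask, min_area=500):
        """Return (cx, cy, angle_deg) for each external contour in the mask."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        pieces = []
        for c in contours:
            # Skip small noise contours; fitEllipse also needs at least 5 points.
            if cv2.contourArea(c) < min_area or len(c) < 5:
                continue
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            (_, _), (_, _), angle = cv2.fitEllipse(c)
            pieces.append((cx, cy, angle))
        return pieces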

Processing Things

Now let’s talk about the processing part: what do we do there, and why do we need it?

We initially made a simple pipeline that filters out the game pieces using a Hue threshold for the 3 colors (red, yellow, blue) and then finds contours to detect game pieces.

This worked well for single pieces and for pieces that are not close together, but it failed when pieces are close to one another.


So we needed a method to effectively separate the game pieces from each other when they are touching. For this, we developed two methods.

Color-based pipeline

The color-based pipeline operates on the principle of removing shadows. During testing, we observed that the top of each sample is the brightest part, so if we remove the shadows or dark areas on the samples, we can effectively separate them.

Our first idea was to just mask out the parts that had a Value (HSV color space) that’s less than a certain threshold, but due to the tricky shape of the samples (I swear they designed it like that on purpose!), we were also masking out the dark parts on top of the samples. This made the contour-finding process afterwards really bad.

So we improved it by introducing three more constants: confirmed_shadow_threshold, shadow_dist, and shadow_threshold.

The idea is to avoid removing the dark parts on top of the samples, which are usually not at the edges of the color mask. So now we only remove dark patches (below shadow_threshold) if they are less than shadow_dist away from the edge of the mask. To still separate two samples that are touching side by side (where the dark boundary now sits in the middle of the mask), we also remove any pixels darker than confirmed_shadow_threshold regardless of their distance to the edge.
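
A minimal sketch of that logic, assuming the distance to the mask edge comes from a distance transform and reusing the imports from the earlier sketches (the constant values are placeholders, not our tuned numbers):

    # Placeholder values; tune for your lighting.
    SHADOW_THRESHOLD = 60            # V below this counts as shadow only near the mask edge
    CONFIRMED_SHADOW_THRESHOLD = 35  # V below this counts as shadow anywhere
    SHADOW_DIST = 8                  # distance (px) from the mask edge

    def remove_shadows(mask, hsv):
        """Zero out shadow pixels in the color mask so touching samples separate."""
        v = hsv[:, :, 2]
        # Distance of each mask pixel to the nearest background (zero) pixel, i.e. the mask edge.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
        shadow = ((v < SHADOW_THRESHOLD) & (dist < SHADOW_DIST)) | (v < CONFIRMED_SHADOW_THRESHOLD)
        cleaned = mask.copy()
        cleaned[shadow] = 0
        return cleaned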

Video:

Edge-based pipeline

Another approach is to detect the edges and use them to separate the pieces.

For that, we run Canny edge detection on the grayscale masked frame and then dilate the edges so they connect.
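
A minimal sketch of this idea, assuming the dilated edges are subtracted from the color mask to cut touching pieces apart (the Canny thresholds and kernel size are assumptions; imports as in the earlier sketches):

    def split_mask_with_edges(mask, frame_bgr):
        """Cut the color mask along dilated Canny edges so touching pieces separate."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.bitwise_and(gray, gray, mask=mask)          # grayscale masked frame
        edges = cv2.Canny(gray, 50, 150)
        edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=2)
        return cv2.bitwise_and(mask, cv2.bitwise_not(edges))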

This method gave us more stable results than the color approach, but when testing on our Limelight 3 (we haven’t got a 3A yet :frowning: ), we found it to be horrendously slow, with single-digit framerates at the lowest resolution.

We timed our code and found that the bilateral filter we used was really slow and that we had a lot of unnecessary operations. After removing them, our framerate improved significantly to about 40 fps, even becoming faster than the color-based method.

Timing code we used:
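
(Roughly the pattern we mean: wrap each stage with time.perf_counter(). The helper below is a hypothetical sketch, not our exact harness.)

    import time

    def timed(name, fn, *args):
        """Run fn(*args), print how long it took in milliseconds, and return its result."""
        start = time.perf_counter()
        result = fn(*args)
        print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")
        return result

    # Example: mask = timed("color mask", mask_blue, frame)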

Results (This one is better than the color method):

Finding and separating contours

During testing, we found that even after processing, the samples would not always separate. To fix this, we came up with our own separation algorithm (we tried watershed, but the results were not that good):

We first estimate the number of game pieces in the frame. To do this, we erode the mask a little bit at a time until the contour areas fall below a certain threshold, and we take the maximum number of contours seen along the way as the number of game pieces. We then take the least-eroded mask that already has that maximum number of contours and use it for the downstream tasks.

In addition to that, we also use area thresholding to filter out the noise and any unwanted small contours.

This contour processing is used for both the color-based approach and the edge-based approach.
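
Putting the erosion loop and the area filter together, a rough sketch of what this separation step can look like (kernel size, iteration cap, and minimum area are assumptions; imports as above):

    def count_and_separate(mask, min_area=300, max_iters=20):
        """Erode step by step; keep the least-eroded mask that shows the most contours."""
        kernel = np.ones((3, 3), np.uint8)
        best_mask, best_count = mask, 0
        current = mask
        for _ in range(max_iters):
            contours, _ = cv2.findContours(current, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            contours = [c for c in contours if cv2.contourArea(c) > min_area]
            if len(contours) > best_count:      # first (least-eroded) mask reaching this count
                best_mask, best_count = current, len(contours)
            if not contours:                    # everything has eroded away
                break
            current = cv2.erode(current, kernel)
        return best_mask, best_count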

Code

“Talk is cheap, show me the code!”

The code is available on our GitHub:

10 Likes

Software Update NO.2 (11/4/2024)

This season, we decided to use the GoBilda 4-Bar and Swingarm odometry pods as our localizer. The best odometry package we’ve used!

Later, we will test auto paths with Road Runner and then try combining the wheel-based odometry with AprilTag-based poses to derive more accurate localization of our robot for more advanced functions.

Thanks to our freshman programmer Hardy Ye for his devotion to tuning the new odometry package.

GoBilda Pinpoint Driver

The GoBilda Pinpoint Driver is a hardware class that you configure by setting the related parameters when constructing the object. Click to see the example code for this driver, and here to see the user guide.

To find the xOffset and yOffset in the setOffsets() function, you can follow these steps :arrow_down:

  1. Find the rotation center of the robot, which is the intersection of the red lines in the figure.
  2. Draw perpendicular lines from the odometry wheels to the vertical and horizontal
    centerlines of the chassis respectively, which are the two purple lines.
  3. Measure the lengths of the purple lines in millimeters.

The IMU produced by GoBilda is definitely the best IMU we’ve ever had; it has been reliable no matter how we test it. The “plug and play” odometry pods also give our robot a foundation for more precise auto paths without extra tuning.

Road Runner with GoBilda Pinpoint Driver

To enable localization in Road Runner, we must follow the instructions given in the docs. Based on the documentation, we should write a class for our localizer by extending StandardTrackingWheelLocalizer or TwoWheelTrackingLocalizer, and then pass this localizer to the setLocalizer() method in the constructor of the drive class. (Click the link to see the code.)

What’s different from normal dead-wheel odometry is that we can use a single piece of hardware, the GoBildaPinpointDriver (instead of two encoders + IMU or three encoders), in the Localizer class that extends TwoTrackingWheelLocalizer.

However, we found that if we simply return the processed poses from the GoBilda Pinpoint Driver in the overridden methods such as getWheelPositions() or getWheelVelocities(), these values get recalculated in the superclass (TwoTrackingWheelLocalizer), introducing fatal errors that make the odometry extremely inaccurate while the chassis is rotating and translating simultaneously.

After realizing this problem, we chose to write a class called GoBildaLocalizer that implements the Localizer interface (Localizer.kt) in order to avoid the reprocessing of poses in the superclass. This time, we directly return the Pose2d from getPoseEstimate() and getPoseVelocity() in our implementation by simply transforming the Pose2D provided by GoBildaPinpointDriver into the Pose2d defined by Road Runner. Here, inspired by FRC Team 6328’s GeomUtil, we use lombok and our own GeomUtil to keep the code simple and graceful.

Please click the following link to see our full code :point_down:

The use of lombok in our code

  1. The @ExtensionMethod annotation allows us to use methods from the specified classes as if they were instance methods on the types they accept as their first parameter.

    For example, we have a GeomUtil class that includes various static methods that transform the object types:

    //Pose2D is the class in FTC SDK, and Pose2d is the class of Road Runner
    public class GeomUtil {
        public static Pose2d toPose2d(Pose2D pose) {
            return new Pose2d(pose.getX(DistanceUnit.INCH),
                    pose.getY(DistanceUnit.INCH),
                    pose.getHeading(AngleUnit.RADIANS));
        }
    
        public static Pose2D toPose2D(Pose2d pose) {
            return new Pose2D(DistanceUnit.INCH, pose.getX(), pose.getY(),
                    AngleUnit.RADIANS, pose.getHeading());
        }
    }
    

    Traditionally, we would have to call the static method like this:

    Pose2D pose = GeomUtil.toPose2D(new Pose2d());
    

    However, if we use lombok's @ExtensionMethod annotation, we can use dot notation to avoid confusion in complex code.

    @ExtensionMethod({GeomUtil.class})
    //...
    Pose2D pose = new Pose2d().toPose2D();
    
  2. @Getter and @Setter generate getter and setter methods for fields.

  3. @Builder creates a builder, so constructor arguments can be filled in any order:

    public class Pose {
        private String name;
        private double xPosition;
        private double yPosition;
    
        public Pose(String name, double xPosition, double yPosition) {
            this.name = name;
            this.xPosition = xPosition;
            this.yPosition = yPosition;
        }
    }
    
    //With @Builder Annotation
    
    @Builder
    public class Pose {
        private String name;
        private double xPosition;
        private double yPosition;
    }
    
    // Usage:
    Pose robot = Pose.builder()
        .name("MyRobot")
        .xPosition(10.0)
        .yPosition(20.0)
        .build();
    

Future plan: Trials to implement Pose Estimator

With our FRC experience, we want to utilize the PoseEstimator class to combine wheel-based odometry with AprilTag-based poses, since it embeds a Kalman filter to decrease the noise brought by vision measurements. However, to use this class, we have two main challenges to overcome.

  1. Import WPILib classes to FTC project
  2. Rewrite the class that implements interface WheelPositions to use data provided by odometry wheels instead of encoders of mecanum wheels.

We solved the first problem by importing the WPILib .jar files into the lib directory and then editing build.gradle (:TeamCode) like the following:

After sync, you can freely use these imported lib classes in your project.

7 Likes

:heart:

(Small) Software Update NO.3: All colors now detected! (11/11/2024)

After our first vision post, we moved on to detecting all the samples (the previous code only detected blue pieces).

To do this, we simply have to find the color ranges for the red and yellow pieces. This is easier said than done: because red and yellow are so close to each other in the HSV color model (see picture below), manually guessing and adjusting the thresholds can be very tedious.

To find the thresholds more easily, we made a little program that plots the colors in the video frame in real time in the HSV color model. With this, we can clearly see that the boundary is around 10 on the Hue axis. Very nice :slight_smile:
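
Roughly the kind of tool we mean, sketched with matplotlib (the subsampling step and the Hue/Value axes chosen here are assumptions, not our exact program):

    import cv2
    import matplotlib.pyplot as plt

    def plot_hsv(frame_bgr, step=8):
        """Scatter every Nth pixel's Hue vs Value, colored by its real color."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        rgb = frame_bgr[::step, ::step, ::-1].reshape(-1, 3) / 255.0  # BGR -> RGB in [0, 1]
        hv = hsv[::step, ::step].reshape(-1, 3)
        plt.scatter(hv[:, 0], hv[:, 2], c=rgb, s=4)
        plt.xlabel("Hue (0-179)")
        plt.ylabel("Value")
        plt.show()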

Playing with this tool, we get the optimal thresholds and get this result:

Apart from tuning the hyperparameters, we also removed the Canny edge detection step and used the Sobel gradient magnitude with a threshold to separate the pieces. The Canny edge detection made things worse because it flickered all the time, and removing it resulted in a more stable detection.
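
A minimal sketch of a Sobel-magnitude edge mask that could stand in for the Canny step in the edge-based pipeline (the kernel size and threshold are assumptions):

    import cv2
    import numpy as np

    def sobel_edge_mask(gray, thresh=80):
        """Binary edge mask from the Sobel gradient magnitude."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        return (mag > thresh).astype(np.uint8) * 255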

Code will be open-sourced later once I clean up all my spaghetti code.

EDIT: Code now available at:

1 Like

Mechanical Update NO.2 (11/12/2024)

Brief Work Summary

At the end of the first month since kickoff, the team captains have finalized their work. Over the past two weeks, the teams have focused on different goals based on their progress. Our alpha bot team has completed and upgraded its claw/intake structure; with the help of our programming group, the complete claw structure on the robot now functions well. At the same time, the beta bot team has embarked on a fresh approach with an innovative solution for transporting the specimens. With a new chassis and the rotating transportation solution, we are exploring more possibilities.

Alpha bot claw structure update

The alpha bot group has finalized the intake process design. Over the past two weeks, we transitioned the sliding rail to a single-sided version, reorganized wiring, and revised the horizontal sliding rail setup. Previous updates only demonstrated the claw and vertical sliding rails functioning independently. Our latest work introduced an additional motor to power the horizontal rail. Considering space constraints near the battery packs, we opted to position this motor parallel to the vertical axis, which helped restore previous balance and freed up sufficient space.

We also tested a possible claw routine for the 30-second autonomous period: scoring the specimen on the high chamber. Through testing, this vertical transportation solution worked well. (Video clips are attached below.) What’s more, our creative members even came up with a unique claw alternative: using a gas-powered pump to grab game pieces accurately.

The High Chamber Claw Test

The Vertical intake claw test

Beta bot new startup

The Beta bot team has initiated a fresh design for the intake and transportation system for specimens. After thorough discussion, we developed an idea to merge the intake and basket transport functions into a single integrated mechanism. To enhance maneuverability, we added a rotating base powered by a GoBilda servo, allowing the entire structure to rotate up to 270 degrees. This feature significantly reduces the time needed to turn the chassis, enabling the system to align directly with the basket independently of the chassis orientation. To complete the intake functionality, we designed a quick-grab claw using a GoBilda steel plate and a servo horn, allowing for swift and stable specimen handling. With assistance from our programming team, the new chassis is already operating smoothly, and we’re optimistic that this design will boost our efficiency in upcoming competitions. More iterations will be released.


New solution for upward transporting

New Chassis Test

More things to do

In addition to advancing our own structural designs, we’ve begun analyzing other FTC teams’ exceptional solutions, studying the creative features and innovative approaches that have proven successful in ongoing competitions. By closely examining these designs, we gain insight into what has worked well, identify potential areas for improvement, and find inspiration for new ideas. This comparative analysis has not only expanded our technical knowledge but also deepened our understanding of the diverse engineering approaches within the FTC community.

3 Likes

Just for FUN!

We used a motor to build a super simple pneumatic system and attached a suction cup to the end of the tube.

But this is illegal under rule R207. We hope we can use air power in FTC one day to make mechanisms like FRC 971’s 2019 intake and 254’s suction climber on Backlash.

2 Likes

Here we are! 104-Point TeleOp (13 samples)!

In the past week, we’ve done a lot to tune our programming! Currently, our robot is controlled by a single driver. Check out the following video to see our TeleOp!
:partying_face: :partying_face: :partying_face:

Also, there will be a bunch of update posts coming this weekend. :robot:

2 Likes

We were trying to use your sample detection code with our Limelight (3A) without much success.
Were you able to run it with a Limelight? If so, are you willing to share the code with the modifications needed to run it on the Limelight, or help us implement it ourselves?

Working with Limelights

Below is a semi-detailed guide to running Python Snapscripts on Limelights.

Steps to Modify OpenCV code to run on Limelights

  1. The first thing you have to do is remove any visualization calls, such as cv2.imshow, from the code, since the Limelight won’t be able to display them. So remove lines like this:

  2. The Limelight runs the runPipeline function in your code when running Python Snapscripts, so we have to change our ProcessFrame function to match that: rename it and change its signature to runPipeline(frame, llrobot) (see the sketch after this list).

  3. As you can see, the runPipeline function has two parameters:

  • frame: the input frame from the camera.

  • llrobot: a list of 8 double values that you can pass in from the robot code (we are currently planning to use this to switch between different recognition modes by passing booleans in this list).

    For modifying our current code, just leave the llrobot unused.

  4. The runPipeline function returns 3 values:
  • The first value is info used for the Limelight crosshairs. I don’t know much about this; you can probably find a more detailed explanation in the Limelight docs.
  • The second value is the frame you want to return. In our case, that is the frame with the contours and direction indicators drawn in. This is also the frame the Limelight shows in the camera preview.
  • The last value is called llpython, which, similar to llrobot, is a list of 8 doubles that can be used to store values the robot code can then read and use. We plan to use it to return the position and angle of the recognized target sample, like in the sketch below.
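
Putting steps 2-4 together, here is a minimal runPipeline skeleton (the color bounds and the llpython layout are assumptions for illustration, not our final code):

    import cv2
    import numpy as np

    def runPipeline(frame, llrobot):
        """Called by the Limelight for every camera frame in a Python Snapscript pipeline."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Blue-only threshold as an example; the bounds are assumptions to tune.
        mask = cv2.inRange(hsv, np.array([100, 80, 50]), np.array([130, 255, 255]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        llpython = [0.0] * 8              # 8 doubles sent back to the robot code
        largest = np.array([[]])
        if contours:
            largest = max(contours, key=cv2.contourArea)
            m = cv2.moments(largest)
            if m["m00"] > 0 and len(largest) >= 5:
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                (_, _), (_, _), angle = cv2.fitEllipse(largest)
                llpython = [1.0, cx, cy, angle, 0, 0, 0, 0]   # [found, x, y, angle, ...] (assumed layout)
                cv2.drawContours(frame, [largest], -1, (0, 255, 0), 2)

        # Return: contour info for the crosshair, the frame to show in the preview, and llpython.
        return largest, frame, llpython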

Deploying it on the Limelight 3A

Follow the guide on the Limelight Documentation and switch the pipeline type to Python Snapscript.

Then paste in the code and hit save. The whole thing should be up and running if there are no issues in the code. You can tune the camera’s resolution, exposure etc. to get better results (we used the lowest resolution).

Code

The code adapted to run on Limelights has been committed to GitHub. However, for some reason, we are not getting results as good as the ones we got when testing with a webcam on a computer, with the blue samples not being detected well. We will try to fix that soon. As always, here is our GitHub repo link:

If you have any questions about anything, let me know and I will do my best to answer them!

1 Like