I'm part of team Tec Gear 6106 from Mexico. Since the pandemic hit, I've been trying to prepare some code and do research on my own. I've been asked to develop an auto-aim system for our robot and I have no idea where to start. I know the basics of Java in VSCode, but I'm not an expert, and since our team has no coaches who can help me with this topic, I decided to give Chief Delphi a shot. Any help would be great.
Glad you're trying something like this. The first question is: how are you trying to line up? Do you have an angle to turn to, or do you want to line up with the vision targets?
The first thing you want is a way to detect the retroreflective target around the upper port. You can do that in one of a few ways: a Limelight, Chameleon Vision/OpenSight, or custom vision-processing code on a Raspberry Pi with the FRCVision image.
After you're able to detect the target, it's as simple as putting a P loop on your drivetrain's speed and rotation. For more information, see the Limelight documentation and look at the case studies.
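For concreteness, here's a minimal sketch of that P-loop idea in Java, along the lines of the Limelight "aiming and ranging" case study. The gains, signs, and the `DifferentialDrive` wiring are placeholders you'd tune and hook up on your own robot:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class AimHelper {
  // Placeholder gains -- tune these on your own robot.
  private static final double KP_AIM = 0.03;   // rotation output per degree of tx
  private static final double KP_RANGE = 0.1;  // forward output per degree of ty

  private final DifferentialDrive drivetrain;

  public AimHelper(DifferentialDrive drivetrain) {
    this.drivetrain = drivetrain;
  }

  /** One iteration of the aim-and-range P loop; call this every robot loop. */
  public void aimAndRange() {
    NetworkTable limelight = NetworkTableInstance.getDefault().getTable("limelight");
    double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset to target, degrees
    double ty = limelight.getEntry("ty").getDouble(0.0); // vertical offset to target, degrees

    double steer = KP_AIM * tx;    // turn toward the target
    double drive = -KP_RANGE * ty; // close the distance until ty reaches your desired value

    drivetrain.arcadeDrive(drive, steer);
  }
}
```

You'd typically call `aimAndRange()` from your periodic loop while the driver holds an "aim" button.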
Thanks! What we're trying to do is line up the shooter so that the robot can move but the shooter stays in the right position for shooting and scoring. (By the way, sorry if my English is not good.)
Interesting options, thanks for your help. I'll do my research based on those.
What Alphyte said; that should be your best shot for this. How is your shooter designed? It sounds like it's a turret. Is it a full turret or a partial turret (one that can't spin in a continuous circle)?
Our team (4152) used a flywheel shooter mounted on a turret with a Limelight camera. You can view our code here:
Each subsystem has its own file.
In addition, each command (e.g. AlignTurret) has its own file in the commands folder.
There are other files as well but the ones mentioned cover the important aspects.
The biggest challenge was getting data. We put a grid on our target and took around 1,000 shots. For each shot we recorded the distance, where the ball landed in the grid, the RPM, the voltage, and which ball was used (and how many shots it had been through). This data allowed us to fine-tune the programming.
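As an illustration of how that kind of test data gets used at runtime (the numbers below are made up, not 4152's actual measurements), a simple interpolating table can map measured distance to flywheel RPM:

```java
import java.util.Map;
import java.util.TreeMap;

/** Linearly interpolates shooter RPM from measured (distance, RPM) test data. */
public class ShooterTable {
  private final TreeMap<Double, Double> rpmByDistance = new TreeMap<>();

  public ShooterTable() {
    // Example entries only -- replace these with your own measured data.
    rpmByDistance.put(2.0, 3000.0); // meters -> RPM
    rpmByDistance.put(4.0, 3600.0);
    rpmByDistance.put(6.0, 4300.0);
  }

  public double rpmFor(double distance) {
    Map.Entry<Double, Double> low = rpmByDistance.floorEntry(distance);
    Map.Entry<Double, Double> high = rpmByDistance.ceilingEntry(distance);
    if (low == null) return high.getValue();  // closer than the nearest data point
    if (high == null) return low.getValue();  // farther than the farthest data point
    if (low.getKey().equals(high.getKey())) return low.getValue(); // exact hit
    double t = (distance - low.getKey()) / (high.getKey() - low.getKey());
    return low.getValue() + t * (high.getValue() - low.getValue());
  }
}
```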
Kudos to our team captain and lead programmer (Oliver) and our testing mentor (Ron) for their work on it.
To start, you will need a camera to get the target position. Once your camera can detect the target (you can use a Limelight or your own solution), you can make a command that uses your drivetrain.
This command should take the target and check whether it is in the center of the image. If it isn't, turn the robot (or turret) left or right until it is. You can use PID or any other control method to do this.
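In WPILib's command-based framework, that might look roughly like the sketch below. `Drivetrain` is a stand-in for your own subsystem, the gains are untuned placeholders, and the exact imports differ a bit between WPILib versions (e.g. newer versions use `Command` instead of `CommandBase`):

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj2.command.CommandBase;

/** Turns the drivetrain until the Limelight reports the target centered (tx near 0). */
public class AimAtTarget extends CommandBase {
  private final Drivetrain drivetrain; // stand-in for your own drivetrain subsystem
  private final PIDController pid = new PIDController(0.03, 0.0, 0.001); // placeholder gains

  public AimAtTarget(Drivetrain drivetrain) {
    this.drivetrain = drivetrain;
    pid.setTolerance(1.0); // "centered" here means within 1 degree
    addRequirements(drivetrain);
  }

  @Override
  public void execute() {
    double tx = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("tx").getDouble(0.0);
    // Setpoint is 0 degrees: drive the horizontal offset to zero.
    drivetrain.arcadeDrive(0.0, pid.calculate(tx, 0.0));
  }

  @Override
  public boolean isFinished() {
    return pid.atSetpoint();
  }

  @Override
  public void end(boolean interrupted) {
    drivetrain.arcadeDrive(0.0, 0.0);
  }
}
```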
https://chameleon-vision.readthedocs.io/en/latest/contents.html
https://docs.wpilib.org/en/stable/docs/software/vision-processing/index.html
If you have the camera mounted on the turret, you can just set your hood angle and RPM as a function of distance, and turn the turret using a PID loop with the angle offset as the error.
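A rough sketch of that turret version, with placeholder geometry constants and gains (measure and tune your own); the distance estimate uses the standard camera-angle formula from the Limelight docs:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTableInstance;

public class TurretAim {
  // Placeholder geometry -- measure these on your own robot.
  static final double TARGET_HEIGHT_M = 2.5;  // height of the target center
  static final double CAMERA_HEIGHT_M = 0.6;  // height of the camera lens
  static final double CAMERA_PITCH_DEG = 25.0; // upward tilt of the camera

  private final PIDController turretPid = new PIDController(0.05, 0.0, 0.0); // tune

  /** Returns a turret motor output that drives the Limelight's tx offset to zero. */
  public double turretOutput() {
    double tx = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("tx").getDouble(0.0);
    return turretPid.calculate(tx, 0.0);
  }

  /** Estimates distance to the target from the Limelight's vertical offset ty. */
  public static double distanceMeters(double tyDegrees) {
    return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M)
        / Math.tan(Math.toRadians(CAMERA_PITCH_DEG + tyDegrees));
  }
}
```

You'd then feed `distanceMeters(ty)` into your measured hood-angle and RPM curves (for example, a lookup table like the one shown earlier in the thread).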
I would recommend using a Limelight if you can, as it is very easy to use and runs the vision processing fast enough that you can use the angle it reports as a target for your code without worrying about delay or another sensor.
Another option, if you cannot use a Limelight, is to run OpenCV code in a separate thread on the roboRIO with a USB camera. This option is nice because you don't have to worry about a co-processor, and you can use GRIP to generate the code for you. We used this option in 2017 and had some written for 2019 before we switched to a Limelight. You will only get around 15-20 fps, but that is fast enough if you are just taking one image while not moving and then using that data to aim.
Here is a link to our 2017 camera class https://bitbucket.org/kaleb_dodd/simbot2017public/src/master/src/org/simbotics/frc2017/imaging/SimCamera.java
and also our GRIP generated Code here https://bitbucket.org/kaleb_dodd/simbot2017public/src/master/src/org/simbotics/frc2017/imaging/SimProcessing.java
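If you go the roboRIO + USB camera route, WPILib's `VisionThread` is the usual way to run a GRIP-generated pipeline off the main loop. This is only a sketch: `MyGripPipeline` stands in for whatever class GRIP generates for you, and the exact `CameraServer` calls vary a bit between WPILib versions:

```java
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.vision.VisionThread;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  private final Object visionLock = new Object();
  private double targetCenterX = -1; // pixels; -1 means no target seen yet

  @Override
  public void robotInit() {
    UsbCamera camera = CameraServer.startAutomaticCapture();
    camera.setResolution(320, 240); // keep it small so the roboRIO can keep up

    // MyGripPipeline is the class GRIP generates for you (name it whatever you like).
    VisionThread visionThread = new VisionThread(camera, new MyGripPipeline(), pipeline -> {
      if (!pipeline.filterContoursOutput().isEmpty()) {
        Rect box = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
        // This runs on the vision thread, so hand results to robot code under a lock.
        synchronized (visionLock) {
          targetCenterX = box.x + box.width / 2.0;
        }
      }
    });
    visionThread.start();
  }
}
```

Read `targetCenterX` under the same lock from your robot loop and feed it into your aiming control.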
It is actually a turret, a full turret. It is like a pillar in which 3 to 4 balls are stored while the robot is moving, until the driver activates the shooting mechanism at the top of the turret. It has a rotating base, but it can't spin 360 degrees; more like 180 or less.
Thanks man! I’ll check it out
Actually, earlier this year I heard something about the Limelight, and since we use a mounted camera it seems like a good option. Thanks!
Mmmm, OpenCV sounds like an interesting choice. Let me research it and I'll tell you how it goes. Thank you!
If you want easy vision without the cost of a Limelight, or the (relative) complexity of OpenCV, you should definitely check out OpenSight if you want to build your own pipelines, or Chameleon Vision if you want a plug-and-play option. Both options will work with a Raspberry Pi 4, but you'll need to find a way to mount and power a green LED setup yourself.