Is Fully Autonomous Gameplay Plausible? Possible?

Several teams were fully autonomous in 2015, but obviously that was significantly easier

Oh no, please don’t do this; this will not work. Simply using encoders and the WPILib kinematics classes will take you very far.
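For reference, the math those kinematics/odometry classes handle is simple enough to sketch by hand. This toy class (my own names, not WPILib’s API) integrates differential-drive encoder deltas into an (x, y, heading) pose, which is roughly what WPILib’s odometry does for you (plus gyro fusion):

```java
// Minimal differential-drive odometry from encoder deltas.
// Illustrative only; WPILib's odometry classes do this (and more) for you.
class SimpleOdometry {
    private double x, y, headingRad;

    SimpleOdometry(double x, double y, double headingRad) {
        this.x = x; this.y = y; this.headingRad = headingRad;
    }

    /** Integrate one encoder sample: distance traveled by each side, in meters. */
    void update(double leftDeltaMeters, double rightDeltaMeters, double trackWidthMeters) {
        double dCenter = (leftDeltaMeters + rightDeltaMeters) / 2.0;
        double dTheta  = (rightDeltaMeters - leftDeltaMeters) / trackWidthMeters;
        // Advance along the average heading over the interval (midpoint rule).
        double midHeading = headingRad + dTheta / 2.0;
        x += dCenter * Math.cos(midHeading);
        y += dCenter * Math.sin(midHeading);
        headingRad += dTheta;
    }

    double getX() { return x; }
    double getY() { return y; }
    double getHeadingRad() { return headingRad; }
}
```

Driving both sides forward 1 m advances the pose 1 m along the current heading; spinning the sides in opposite directions turns in place.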


Based on what you all said, I feel like autonomous power cell cycles are totally doable, but I’m still trying to figure out the climb and keeping track of x, y.

What if you get pushed or skid? Encoders will not work over a period greater than 30 seconds, let alone 2:30.

Being on the UW REACT team, I can tell you that it is a VERY* daunting task to make FRC completely autonomous. Given the nature of FRC, it is apparent that humans will have the best decision-making skills in a competition setting.

Sure an AI agent can be trained to make accurate decisions, but that takes tons of development time and a large dataset to do so.

To the title of the thread, making FRC gameplay completely autonomous IS very much possible with resources available, but I definitely do not think it would be feasible for a team to pursue due to the large amount of resources needed.

*insert a LOT of VERYs

Forget about pushing or skidding; acceleration-based odometry will not work for more than 5 seconds of movement. And encoders are in fact good enough for 2 minutes if all the movements are calculated trajectories that the robot can actually achieve.

You can also rezero your odometry every time you get close to a vision target, such as the human player station and the goal.
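A sketch of that rezeroing step, assuming a vision target at a known field position and a camera that reports range and field-relative bearing to it (the coordinates and names here are illustrative, not official field dimensions; in WPILib you would feed the resulting pose back into the odometry class’s reset method):

```java
// Re-zeroing drifting odometry against a vision target at a known
// field location. All numbers and names are illustrative.
class VisionRezero {
    /** Assumed field-relative position of the target (e.g., the goal), in meters. */
    static final double TARGET_X = 15.98, TARGET_Y = 2.44;

    /**
     * Given the camera's measured range to the target and the field-relative
     * bearing from robot to target, return the corrected robot (x, y).
     * Heading is assumed to come from the gyro, which drifts far less than
     * translational odometry.
     */
    static double[] correctedPose(double rangeMeters, double bearingRad) {
        double x = TARGET_X - rangeMeters * Math.cos(bearingRad);
        double y = TARGET_Y - rangeMeters * Math.sin(bearingRad);
        return new double[] { x, y };
    }
}
```

Each time the target is in view, the result replaces the accumulated dead-reckoned position, so encoder drift never builds up for more than one cycle between sightings.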

Honestly, I don’t see a need for much training. All the robot needs to do is get power cells, shoot them, and climb; it can all basically be done with different cases.
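That “different cases” approach is essentially a state machine. A minimal sketch, with made-up states, thresholds, and transition rules:

```java
// An autonomous decision loop as a simple state machine.
// States, ball counts, and the 30-second endgame cutoff are illustrative.
class AutoStateMachine {
    enum State { COLLECT, SHOOT, CLIMB }

    private State state = State.COLLECT;

    /** One iteration of the decision loop (call every robot-code cycle). */
    State step(int ballsHeld, double matchSecondsLeft) {
        switch (state) {
            case COLLECT:
                if (matchSecondsLeft < 30)  state = State.CLIMB;   // endgame: go climb
                else if (ballsHeld >= 5)    state = State.SHOOT;   // magazine full
                break;                                             // else keep collecting
            case SHOOT:
                if (matchSecondsLeft < 30)  state = State.CLIMB;
                else if (ballsHeld == 0)    state = State.COLLECT; // empty: go collect
                break;
            case CLIMB:
                break;                                             // terminal state
        }
        return state;
    }
}
```

The hard part, as others point out below, is not the structure but the transition conditions: deciding when a case applies (defended? bay blocked?) is where human judgment is hard to replace.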

good thinking

True, but think of it in this way by asking the following questions:

How will the power cells be gathered? Power cells are not always in the same location, other robots could be picking up power cells, loading bay could be blocked, cells can be in an opponent restricted zone…

How will the robot shoot cells? Robots don’t shoot from the same location every time, good shooters will be defended, will the robot know to move to a protected zone to shoot if they are defended…

How will the robot climb? There are many different locations to climb, is it a self climb or a multiple climb, when do they start climbing, how will the robot make sure that itself and the other robot are level…

A lot of the questions/statements above can easily be answered by a human, but not so easily by an AI.

can anyone on the UW REACT team give me a link to your resources that you use?

Power cells can be vision tracked easily (see header and link for OpenSight), and the loading bay is only a fallback if it can’t find any.
The robot will shoot using our Limelight and will try to shoot from the trench or the auto line.
Climbing… I’m not sure; maybe just go to the center of the bar and climb early, or leave that part driver controlled.
All of the positioning I will try to manage and keep accurate throughout the match.
Before the robot goes into a protected zone, it will notice its x, y values and I will program it to stay away.
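The stay-away check against the tracked x, y could be as simple as an axis-aligned geofence; a sketch with made-up zone bounds (not official field dimensions):

```java
// Keeping the robot out of a restricted zone with a simple axis-aligned
// geofence around the tracked (x, y). Zone bounds and the bumper margin
// are made-up numbers, not official field dimensions.
class ProtectedZone {
    final double minX, minY, maxX, maxY;

    ProtectedZone(double minX, double minY, double maxX, double maxY) {
        this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
    }

    /** True if a robot centered at (x, y) with the given half-width overlaps the zone. */
    boolean blocks(double x, double y, double robotHalfWidth) {
        return x + robotHalfWidth > minX && x - robotHalfWidth < maxX
            && y + robotHalfWidth > minY && y - robotHalfWidth < maxY;
    }
}
```

The path planner would query this before committing to a waypoint; the half-width margin accounts for bumpers so the zone is avoided before any part of the frame crosses the line.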

Another mentor and I pitched a fully auto robot to our team back in 08 for Overdrive. The kids did not go for it but it would have been awesome.

Our robot, which I will be using:
[robot photos]
I plan on mounting two cameras, front and back, and linking them to a Raspberry Pi 4.


Another fun (but expensive) sensor you could experiment with is a LIDAR. 254 used one in 2018 (I think?) and there are a couple models cheap enough to be robot-legal these days. You could use it to sense other robots and balls, and also to account for drift in your drivepath by referencing locations of field components occasionally.

That could work, but it would be a lot of work, and our team’s robot is already rather expensive; we don’t have the spare budget. However, when we shoot or see a vision target, the robot will reset its odometry to keep it accurate.

When will the robot know to stop looking for balls and then decide to go to the loading bay?

Again, what happens if you’re defended? Will it know that it’s defended and change locations?

Also,

Completely restricting the protected zone will prevent you from gathering balls that have to be cycled back into the field.

We found a LIDAR to be an absolute must for traversing the field and to sense field elements, game pieces and other robots.

About the loading bay: if the robot’s dual-camera vision processing sees no balls, it will go there.
For defense, our shooter can lift up; we have a lift that puts us at 45″.
Additionally for defense, we can push about anybody, and we have a very grippy drivebase (8 wheels, 4 Falcons) and an adjusting turret.
About the protected zone: for now, complete restriction may work, but with two cameras and bumpers being easy to track, I could theoretically go into the protected zone.

EDIT: I might try LIDAR; I have a few months.

This is Tesla-level thinking, and if you do this I feel bad for your programmers.

I see autonomous gameplay as very possible, with conditions. Basically it would benefit from the addition of a 3rd layer of sensor guidance: probably UWB, BLE or IR. There are already cost-effective solutions for these out there that could be easily integrated with the current robot control system. For example, UWB tech enables localization within 1 inch in a small space like an FRC field. Some additional vision targets using tech already common in FRC would help. SLAM algorithms coordinating inputs from the 3 primary sensor layers (e.g., IMU, camera vision-target identification and a new 3rd localization layer) would be required for robot guidance through field obstacles over the 2:30 match timeframe.
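As a toy version of that multi-layer fusion, here is a fixed-gain complementary filter that dead-reckons on odometry and pulls the estimate toward each absolute (e.g., UWB) fix. The gain and names are illustrative; a real SLAM/pose-estimator stack would use a Kalman-style filter with measurement covariances instead of a constant gain:

```java
// Blending drifting odometry with an absolute localization layer
// (e.g., UWB) using a fixed-gain complementary filter. A crude stand-in
// for proper Kalman-style fusion; the gain value is illustrative.
class PoseFusion {
    private double x, y;
    private final double gain; // 0 = trust odometry only, 1 = trust the UWB fix only

    PoseFusion(double x0, double y0, double gain) {
        this.x = x0; this.y = y0; this.gain = gain;
    }

    /** Dead-reckon forward by an odometry delta (meters). */
    void predict(double dx, double dy) { x += dx; y += dy; }

    /** Pull the estimate toward an absolute UWB position measurement. */
    void correct(double uwbX, double uwbY) {
        x += gain * (uwbX - x);
        y += gain * (uwbY - y);
    }

    double getX() { return x; }
    double getY() { return y; }
}
```

Because every `correct` call bounds the accumulated error, the estimate stays usable over the full 2:30 even if the odometry alone would drift after a few seconds of contact or wheel slip.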
Despite my claim at the start that this is “very possible”, it would be a big challenge for most teams. TBA stats indicate that on average only a third of teams are achieving a successful 3-ball auto routine in 2020, and a fully autonomous match is between 10X and 100X more difficult. Plus it would require an accurate full-field setup for testing, which few teams have. And it would have to account for other robots on the field, which is an additional level of complexity. I think a simpler field setup, with a full layer of SLAM software modules in the FRC code library, would be required to make fully autonomous gameplay accessible to more teams.
On a side note, full-auto robot competitions are already done in FLL, and in robo soccer and drones (but these last 2 are primarily at the university level).