Is Full Autonomous Gameplay Plausible? Possible?

So, being from a PNW Team that now has nothing to do because our events are cancelled, and being a programmer, I was thinking… Is full autonomous gameplay a viable option?

My points for it:

  1. we have the resources (Limelight, pathfinding, OpenSight for vision processing)
  2. we have the time and experience
  3. AWARDS!!!

Points against it:

  1. probably very complex and difficult
  2. will have to be tested and prove better than the drive team
  3. collaborative actions (like climbing) will be difficult

TLDR: is full autonomous gameplay a viable option?


any resources that could be helpful for said task?

I’ve always thought using a Jetson Nano and an Intel RealSense would be the next progression in FRC auto. If you have the time and resources, you could make large advancements in FRC.


ok, I’ll look into that. Currently I’m using a Limelight, I have the capability to do acceleration-based odometry for field positioning, and I have a Pi 4 running OpenSight to track balls on the field and maybe other robots?

For finding other robots, the best distinguishing feature would probably be their bumpers. Bumper shade can vary quite a bit between teams, but it would be a fun project to make it recognize and avoid a robot from one of your past seasons.


that was exactly my plan lol

Define gameplay… based on how many teams play the game I’d say autonomous play is possible, simply don’t touch the controls…


I mean being able to score a few cycles, from both the player station and balls on the field, and climb at 30 seconds left.
That’s standard gameplay, but autonomous.

One big thing about running auto for long periods of time is the accumulation of slight inaccuracies in every measurement you make. Having SLAM technology can really help, but constantly homing your values to a fixed position, say running into the wall by the loading bay and reading its vision tape, can also help.
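The homing idea can be sketched in a few lines. This is a minimal Python illustration, not real robot code; the field coordinates, the `home_pose` helper, and the bumper-contact flag are all made-up names:

```python
# Sketch: reset accumulated odometry error by "homing" against a known
# field feature (e.g. the loading bay wall). All numbers are illustrative.

LOADING_BAY_X = 0.0   # known wall X position, meters (assumed)
LOADING_BAY_Y = 5.8   # hypothetical Y of the vision-tape target

def home_pose(pose, bumper_contact, tape_offset_m):
    """When the bumpers touch the wall and the camera sees the tape,
    overwrite the drifting estimate with the known position."""
    x, y, heading = pose
    if bumper_contact:
        x = LOADING_BAY_X                  # wall contact fixes one axis exactly
        y = LOADING_BAY_Y + tape_offset_m  # tape offset fixes the other
    return (x, y, heading)

# Example: the estimate has drifted ~0.3 m by the time we touch the wall
drifted = (0.31, 5.52, 0.0)
print(home_pose(drifted, bumper_contact=True, tape_offset_m=0.0))
# -> (0.0, 5.8, 0.0)
```

The point is that one moment of physical contact with a known feature wipes out all the drift accumulated up to that point.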

yeah, one team we work with, 948, is doing this for semi-auto play

IIRC that’s Team 900’s vision (:wink:) for their robot.


As Tinnittin has said, the biggest challenge will definitely be keeping accurate odometry over the full 2:30 match. Over time, disturbances like wheel slippage and gyro drift will make keeping an accurate pose with just your drivetrain very difficult. You’ll need some way to get a new estimate. You may want to look into things like Kalman filters for comparing your new estimates against your original model.
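A toy example of that idea, assuming a single axis and made-up variances: predict forward with odometry (uncertainty grows), and when a vision measurement arrives, blend it in weighted by relative uncertainty. This is a minimal 1-D sketch, not WPILib’s pose estimator:

```python
# Minimal 1-D Kalman filter sketch: fuse noisy odometry prediction with
# an occasional vision fix. All variances are illustrative, made-up numbers.

def kalman_step(x, p, dx, q, z=None, r=None):
    """Predict with odometry delta dx (process variance q); if a vision
    measurement z (variance r) is available, correct toward it."""
    x, p = x + dx, p + q          # predict: state moves, uncertainty grows
    if z is not None:
        k = p / (p + r)           # Kalman gain: trust measurement vs. model
        x, p = x + k * (z - x), (1 - k) * p
    return x, p

x, p = 0.0, 0.01
for _ in range(50):               # 50 odometry-only steps: p creeps upward
    x, p = kalman_step(x, p, dx=0.1, q=0.001)
x, p = kalman_step(x, p, dx=0.0, q=0.0, z=5.2, r=0.02)  # one vision fix
print(x, p)  # estimate pulled toward the fix, uncertainty collapses
```

Note the gain: after 50 uncorrected steps the filter trusts the vision fix more than the drifted model, which is exactly the behavior you want over a long match.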

Take a look at 971’s code, I believe last year they used 5-6 IR cameras tracking the tape to keep an accurate pose estimate throughout the match.

Can’t wait to see what the Robototes show up to Sundome with! It’s always cool hanging out with your team every year.


Yeah sorry, crappy joke.

The primary issue people are bringing up is accumulated error. One nice way to avoid this is to re-localize periodically. The existing field components may be useful for that, but you may want to look at ArUco markers or other options (in theory they could be placed on your driver station to help). I know 1768 did something like this in 2015 to help align with the feeder station.
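A sketch of what re-localizing off a fiducial marker looks like mathematically, assuming you already have the marker’s offset in the robot frame (from something like OpenCV’s ArUco detector) and a gyro heading. The marker pose and helper names here are made-up:

```python
import math

# Sketch: re-localize from a fiducial (e.g. ArUco) marker whose field pose
# is known. Given the marker's offset in the robot frame, recover the
# robot's field position. All coordinates are illustrative.

MARKER_FIELD = (16.0, 4.0)   # known marker position on the field, meters

def relocalize(marker_in_robot, robot_heading):
    """marker_in_robot: (forward, left) offset of the marker as seen by
    the robot; robot_heading: radians, from the gyro. Returns robot (x, y)."""
    fwd, left = marker_in_robot
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    # rotate the robot-frame offset into the field frame, then subtract
    mx = c * fwd - s * left
    my = s * fwd + c * left
    return (MARKER_FIELD[0] - mx, MARKER_FIELD[1] - my)

# Robot facing +x, marker 2 m ahead and 0.5 m to the left:
print(relocalize((2.0, 0.5), 0.0))  # -> (14.0, 3.5)
```

The heading still has to come from somewhere (gyro, or the marker’s observed orientation), but a single marker sighting fully pins down x and y.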

Tracking balls is certainly doable; tracking goals, also.

One big issue is going to be navigating the center of the field without losing a lot of accuracy in your position estimation.


910 was close to autonomous last year, except for interacting with nearby partners or opponents. They could do everything you need to do autonomously: drive, score, pick up game pieces. Localization was reset when they scored or collected, I believe.

That said, you have to travel farther for cycles this year, and the bumps make inertial navigation much more difficult. But it’s not impossible.

It was doable in FTC (on an open field, mind you), so it’s probably doable in FRC.

See this video from 11115 Gluten Free:

There are some college groups which are trying to build fully autonomous robots for the game. See UW REACT - Fully Autonomous FRC Robots!.

If this became a thing, FIRST could help: install location beacons around the field and provide every team with a receiver. That would allow you to know where the robot is within inches, with no accumulated error. This is (probably) what warehouses do; they don’t simply rely on vision. I know nothing about this, and don’t intend it as an endorsement, but something along this line:

[Edit: Actually, those beacons are ultrasonic, which is cool, but presumably can’t handle multiple robots. But the concept…]
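For the curious, the beacon idea reduces to trilateration: with ranges to three fixed beacons, position falls out of two linear equations (subtract the circle equations pairwise). A minimal sketch with made-up beacon coordinates:

```python
import math

# Sketch: 2-D position from ranges to three fixed beacons (trilateration).
# Beacon positions and the robot's true position are illustrative numbers.

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve the two linear equations obtained by subtracting the circle
    equations |p - pi|^2 = ri^2 pairwise, via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

# Robot at (3, 2); beacons at three field corners
beacons = [(0.0, 0.0), (16.0, 0.0), (0.0, 8.0)]
ranges = [math.dist((3.0, 2.0), b) for b in beacons]
print(trilaterate(*beacons, *ranges))  # ≈ (3.0, 2.0)
```

Real ranging systems add noise and multipath, so in practice you would feed these fixes into the same filter as everything else rather than trusting them raw.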

All of your points are correct. It is doable, and difficult.

If you want to try, look up robot localization. Your robot has to be able to figure out where it is on the field at all times based on sensor, presumably camera, input.

You can’t ignore other robots. They are there, and they move. Plus, once you solve that problem, climbing becomes possible.

For what it’s worth, this game lends itself to full autonomous driving better than some others. The strategies are pretty straightforward.

I know I’ll be working on it. It’s doable.

I’ve spent a little bit of time thinking about this. Would definitely be in the “cool” category.

Agreed, the biggest gap is accurate, timestamped X/Y location data on the field. Camera data is getting us closer, but more fixed reference points around the points of interest on the field will be critical. Still, I personally think the best systems would be like what Hollywood uses for motion capture: high-precision sensors in fixed locations, identifying targets on the robots, and streaming the data back to them.

Once you have that, the floodgates open. Most of the other main headaches have been solved and proven in competition: Real-time path-planning, sequencing robot parts to be autonomous… The supplier base provides large selections of legal and cheap sensors that more than fit the bill to automate the vast majority of tasks.

I imagine a driver station consisting simply of a large touchscreen. A graphical display of the field, with hotspots to tap on to instruct the robot what to do, maybe some ability to draw paths to follow with your finger… Something intuitive enough to be manipulated at a very high speed. Ultimately, the drivers build up a queue of tasks they want the robot doing, and the auto routines on the robot execute the tasks in the most efficient, pre-programmed manner possible.
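The task-queue half of that driver station is simple to sketch. This is an illustrative Python toy with hypothetical task names, not a real FRC control scheme:

```python
from collections import deque

# Sketch of the "tap to queue tasks" idea: drivers enqueue high-level
# tasks and the robot's auto executive drains them in order.

class TaskQueue:
    def __init__(self):
        self.tasks = deque()

    def enqueue(self, task):
        """Driver taps a hotspot on the field display."""
        self.tasks.append(task)

    def run_next(self):
        """Auto executive pops the next task and executes it."""
        if not self.tasks:
            return "idle"
        return f"executing {self.tasks.popleft()}"

q = TaskQueue()
for t in ("collect_balls", "score_low_goal", "climb"):
    q.enqueue(t)
print(q.run_next())  # -> executing collect_balls
```

The interesting design work is all in what happens inside each task (path planning, sequencing mechanisms) and in letting drivers reorder or cancel the queue mid-match.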


If you can gather information about what the other robots are doing, you could start to build up a match strategy algorithm. Robot decides for itself what’s the best way to score points, given the current state of the match. Go a step further and add in your scouting data on other robots from other matches, to skew the strategy toward what the alliances’ strengths and weaknesses are.

No more need for drivers to manually input data, just let the FMS say “go” and the blue banners come flooding in.

Now, I do have some feelings about what FIRST looks like when we actually get to this point. For the very first team that does it, yeah, it’s super cool and awe-inspiring! I’d probably have to spend years just to understand how the system works… if I’m even capable of understanding it.

However, I think there would be something lost when we take humans out of the real-time competition. There’s unique value in the problem statement “you must perform your best during these exact 2 minutes and 15 seconds to win”. When you smear the “excellence window” over multiple weeks, I personally think the competition loses something.


that event is postponed indefinitely like the rest of PNW

I plan on using a NavX accelerometer. If anyone has experience with this, how accurate is it?
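For context on accuracy, the weakness of accelerometer-only positioning is easy to demonstrate: a tiny constant bias, double-integrated, grows quadratically with time. A quick back-of-the-envelope sketch with an assumed 0.01 m/s² bias (actual NavX bias behavior varies; this is not a measured spec):

```python
# Why pure "acceleration based" odometry drifts: double-integrating even
# a small constant accelerometer bias produces huge position error.
# The bias value and loop rate are assumed, illustrative numbers.

def integrate_position(bias_mps2, dt, steps):
    """Double-integrate a constant accelerometer bias, starting from rest."""
    v = x = 0.0
    for _ in range(steps):
        v += bias_mps2 * dt   # bias accumulates into velocity
        x += v * dt           # velocity accumulates into position
    return x

# 0.01 m/s^2 bias (~1 milli-g) over a 135 s match at 50 Hz:
drift = integrate_position(0.01, 0.02, 135 * 50)
print(f"position error after one match: {drift:.1f} m")
```

That is tens of meters of error from a milli-g-class bias, which is why everyone upthread is pushing fusion with vision, wheel odometry, or fixed field references rather than relying on the accelerometer alone.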