Will We See a Full-Auto Robot With ML That Replaces the Human Driver in Teleop?

Well, sorry for the comment. I think I forgot it’s such a sensitive subject.

Anyway… self-driving robots.

Yeah, I think they’ll happen, but I don’t think any of them will be competitive. I think they’ll be demonstration robots. I don’t think a self-driving robot will be able to beat most human drivers without some really huge breakthroughs in artificial intelligence.

2 Likes

If there were ever a competition between actual mentor-designed, mentor-programmed, or mentor-built robots and those done by students (with good guidance and partnership from adults), the clear losers would be the ones the adults did. The way I can tell a fully adult-made robot for what it is, is that they usually suck.

Also I can’t code at all, and my CAD students run circles around me. As the Lead Mentor on my team this year, my contributions largely amounted to buying stuff, choosing paint colors, and dressing as Richard Simmons when in competition.

I am indeed sorry though that my sarcasm wasn’t obvious, I tried to make it so with the registered trademark logo.

23 Likes

By coincidence that’s the part of the code I had decided to work on today. I really need to explore what other teams are doing.

I took an online course on self-driving cars recently, and in the strategy section, they basically said the most common approach is just a finite state machine. Based on input, it decides if the car should be driving straight, stopping, turning, changing lanes, etc., and there were driving patterns associated with each “strategy”. So, I’m taking that approach for translating the field state into an “optimal” driving strategy.

I put “optimal” in quotes because it isn’t so much optimized as just working.

For Rapid React, I have just a few things a robot might be doing. It might be looking for a ball where it is, going someplace else to find a ball, moving to pick up a ball, picking up a ball, moving to a shooting location, or shooting.
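In code, that boils down to something like this (just a sketch with made-up names, not actual team code):

```java
// The Rapid React strategy states described above, as a plain enum
// for a finite state machine to switch over.
public enum Strategy {
  SEARCHING_HERE,       // looking for a ball from the current position
  RELOCATING_TO_SEARCH, // going someplace else to find a ball
  APPROACHING_BALL,     // moving to pick up a ball
  INTAKING,             // picking up a ball
  MOVING_TO_SHOT,       // driving to a shooting location
  SHOOTING              // taking the shot
}
```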

There’s a whole lot that is conspicuous by its absence in the above. Most notably, there’s no reference to other robots. I tried to come up with realistic ideas for reacting to other robots, and it was much too complicated. I decided that for version 1.0, robots were just obstacles. They might block the view of a ball, or pick one up, which would just cause the autonomous robot to look for another. If they actually played defense against the autonomous robot, the poor autonomous robot would just get confused, and keep looking for balls, constantly trying to adjust its state information to reflect the fact that it hadn’t moved.

Within each of those strategies, things still aren’t easy, but they are solvable and relatively well understood. There’s still path planning and localization. If your “state” in the state machine is “moving to shooting location”, you have to know where you are, plan a path to where would be a good place to shoot, update that path, and check constantly to see if you ought to change state to “shooting”.
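As a rough sketch of one such state (assuming a `Strategy` enum like the one above; `planPathTo()` is a hypothetical stand-in for whatever path planner you use):

```java
import edu.wpi.first.math.geometry.Pose2d;

public class ShotApproach {
  /** One loop of the MOVING_TO_SHOT state, fed the latest pose estimate. */
  public Strategy update(Pose2d currentPose, Pose2d shootingSpot, double toleranceMeters) {
    double distance = currentPose.getTranslation()
        .getDistance(shootingSpot.getTranslation());
    if (distance < toleranceMeters) {
      return Strategy.SHOOTING;            // arrived: change state and take the shot
    }
    planPathTo(currentPose, shootingSpot); // otherwise keep replanning from where we are
    return Strategy.MOVING_TO_SHOT;
  }

  private void planPathTo(Pose2d from, Pose2d to) {
    // path planner of choice goes here
  }
}
```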

The above doesn’t cover two-ball pickups or climbing, and version 1 won’t. It won’t be a competitive robot when running autonomously.

Crap, sorry, I replied to the wrong person.
Does anybody know if I can edit that?

I drove the 971 robot in 2019, and the code team was definitely making early threats to replace me with a “full-auto” robot. The idea was that it would basically just be a super long auto to complete a rocket, which would be interrupted if there was defense. Then it would be driver-controlled unless the defense left, in which case the auto would be resumed. 2019 was a very nice game for this because there was a huge abundance of vision targets (we did have complete field localization with a real-time display on the driver station), and you could grab hatch panels and cargo from the human player station, which was completely repeatable. It turns out it is harder than it seems, and this idea sadly never came to fruition. But it is a very fun idea that I hope a team can implement effectively in a non-2015-style game.
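The take-over/hand-back part is the easy bit; a sketch like this (hypothetical command names, not what 971 actually ran) covers it, and the genuinely hard part is resuming the script from wherever it was interrupted rather than restarting it:

```java
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.button.Trigger;

public class FullAutoBindings {
  public static void configure(XboxController driver,
                               Command fullAutoCycle,
                               Command manualDrive) {
    // Driver "takes over" whenever either stick leaves its deadband.
    Trigger driverActive = new Trigger(() ->
        Math.abs(driver.getLeftY()) > 0.1 || Math.abs(driver.getRightX()) > 0.1);

    driverActive.whileActiveOnce(manualDrive) // defense showed up: driver drives
        .whenInactive(fullAutoCycle);         // defense left: kick the auto back off
  }
}
```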

9 Likes

People who are interested in this topic might want to reply in the following thread and push to get more relevant data from the FMS to our robots:

1 Like

For this year, my team’s shooting was almost fully automatic. Press one button to intake, and it will sort and eject wrong-color balls as required; press a different button, and it finds the target and shoots. We also had our bot drive to balls with PhotonVision via a button the driver would press, and then it would pick the ball up. We are planning on using Axon instead of PhotonVision this offseason to make a full-auto robot. We’ll most likely just use a huge state machine for deciding what to do, but I believe my team and I will be able to pull it off if we get time in our shop this summer.
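The button bindings themselves are the easy part; roughly something like this (command names made up for illustration, not our real code):

```java
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.button.JoystickButton;

public class OperatorBindings {
  public static void configure(XboxController operator,
                               Command intakeAndSort,
                               Command aimAndShoot) {
    new JoystickButton(operator, XboxController.Button.kA.value)
        .whileHeld(intakeAndSort); // intake; sorts and ejects wrong-color balls
    new JoystickButton(operator, XboxController.Button.kB.value)
        .whenPressed(aimAndShoot); // find the target and shoot
  }
}
```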

2 Likes

What’s your team? I’d like to watch the video.

93. We don’t have any videos of it, but you could look at some quarterfinal matches from the Wisconsin Regional for our shooting. The driving to a ball we only got working on a tank-drive prototype because of Raspberry Pi issues on the comp bot.

Been there, done that, with a very similar result. The ball detection worked fine, but once attached to the robot it had some failures, from the SD card of all things.

On top of that, I also had alignment issues because of an eight-wheel drive that really didn’t like turning in place. I didn’t get the motion planning to work well when I changed it to a turn-while-moving alignment.

The real world can be so annoying.

Also from 93 here. We got it to the point where we can drive to balls, whether they are moving or not, with a constantly updating custom RAMSETE controller. We also ran into the problem of everything on the field being blue or red, which PhotonVision didn’t love, and our tuning was not fantastic. Our plan for next year is more cameras, and running Axon to identify robots and game objects and their positions in space; from there we will have to figure out a state machine to work with defense. It’s a really interesting concept, but there is such a large barrier to entry: localization and identification integrate so many systems. For us, being a completely student-coded robot makes it even more difficult, with students coming and going every year.
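The constantly-updating part is the key trick: rebuild the goal pose from fresh vision data every loop and feed it to the controller. A rough sketch (not our actual code) using WPILib’s RamseteController:

```java
import edu.wpi.first.math.controller.RamseteController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class BallChaser {
  private final RamseteController ramsete = new RamseteController(); // default gains

  /** Recompute chassis speeds toward the latest ball detection each loop. */
  public ChassisSpeeds chase(Pose2d robotPose, Translation2d ballFieldPosition) {
    // Aim the reference pose at the ball so the intake leads the approach.
    Rotation2d headingToBall =
        ballFieldPosition.minus(robotPose.getTranslation()).getAngle();
    Pose2d goal = new Pose2d(ballFieldPosition, headingToBall);

    // A modest reference velocity; the moving target is handled simply by
    // rebuilding the goal pose from fresh vision data every cycle.
    return ramsete.calculate(robotPose, goal, 2.0, 0.0);
  }
}
```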

Nice targeting, 93.

Not when your mentor can’t understand RobotC half the time, let alone Java.

I’ve seen a few different instances recently of robots on the same alliance coordinating movements such as autos where one robot feeds game pieces to another.

I feel like a fully autonomous robot would be much easier if you could share data between robots. The first thing that comes to mind is Zigbee, but I wonder if that would be equivalent to running an 802.11 AP in the stands.

Or the existing 802.11 connections between the robots and the field network, which wouldn’t require any extra hardware. A hole could be punched in the VLANs for an alliance-wide NetworkTable or something.
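If that hole were ever punched, the robot side could be as trivial as this (purely hypothetical, since this traffic is blocked today; table and key names are made up):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class AlliancePosePublisher {
  private final NetworkTable shared =
      NetworkTableInstance.getDefault().getTable("allianceShared");

  /** Publish our pose so alliance partners could (in theory) read it. */
  public void publishPose(double xMeters, double yMeters, double headingRadians) {
    shared.getEntry("pose").setDoubleArray(
        new double[] {xMeters, yMeters, headingRadians});
  }
}
```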

6 Likes

Just use ICMP. It’s already open and there are loads of POCs out there about how to do it: Ping Tunnel - Send TCP traffic over ICMP

1 Like

Traffic between team VLANs is blocked.

Interesting article though, thanks for posting it.

Sure about that?

Fairly certain; otherwise this section of the FMS white paper is misleading. I will admit I haven’t tried to access another team’s VLAN on the field.

Cheesy Arena’s switch config includes access lists for isolation, allowing only traffic from a team’s VLAN to the FMS to pass.

Our code is iterative year over year, so the framework remains very similar, but typically it is written 100% by students, with some mentor input, mostly for fine-tuning of motion, autos, hood position, flywheel speed, and such. I was blown away by how good the code on our 2019, 2020, and 2022 robots was, and that it was written completely by the students. Likewise for our scouting app. I’m not saying our team is representative or typical, but our team makes Worlds with extreme consistency and was on a high-ranked division alliance in Houston, so it is possible for students to do really impressive work. We also have a grade 9 student (so about 15 years old) who is becoming a master at CAD and may well design a lot of the robot next year.

-edit- to add that while all of the above is true, we also don’t have anything approaching ML/AI or any on-field spatial awareness built into our code, so it is perhaps not at the level being discussed in the thread overall.

2 Likes