Will We See a Full-Auto Robot With ML That Replaces the Human Driver in Teleop?

Albert W. Tucker?

1 Like

Quietly slips this pearl into his quiver of quotes to be quipped to the queen.

3 Likes

lol no

10 Likes

True. I tried to throw enough “probably” in there to cover all the cases. I wasn’t trying to say you can’t have both, or that attempting to be competitive is bad in some way.

1 Like

Having an opinion and sharing it. Rookie mistake. :wink:

3 Likes

I’ll be expanding on @dydx’s post.

Localization

The pose estimator classes in WPILib can be used for this, although they have performance issues we’re fixing for 2023. We still need to fix the UKF numerical stability issues, but they can be largely avoided by throwing away measurements that are more than a meter or so away from the current pose estimate.
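
Roughly, the gating looks like this (a minimal sketch; the estimator is assumed to be an already-configured WPILib pose estimator, and the class/constant names are just illustrative):

```java
// A minimal sketch of gating vision measurements: reject vision poses that
// disagree wildly with the current estimate so they never reach the filter.
import edu.wpi.first.math.estimator.DifferentialDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

public class VisionGate {
  // Illustrative threshold; tune to your vision system's worst-case error.
  private static final double kMaxJumpMeters = 1.0;

  /** Feeds a timestamped vision pose to the estimator only if it's plausible. */
  public static void maybeAddVisionMeasurement(
      DifferentialDrivePoseEstimator estimator, Pose2d visionPose, double timestampSeconds) {
    double jump = estimator.getEstimatedPosition()
        .getTranslation()
        .getDistance(visionPose.getTranslation());
    if (jump < kMaxJumpMeters) {
      estimator.addVisionMeasurement(visionPose, timestampSeconds);
    }
    // Otherwise drop it; a >1 m disagreement is more likely a bad target solve
    // than the robot actually teleporting.
  }
}
```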

There will also be utilities for converting computer vision pitch and yaw and gyro angle to a “vision pose” that can be used with the pose estimator.
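
The underlying math is roughly the following (a hedged sketch rather than the actual utility; it ignores the camera’s offset from the robot center, and the yaw sign convention depends on your camera):

```java
// A sketch of pitch/yaw + gyro -> field-relative robot pose. Assumes a camera
// at a known height and mounting pitch viewing a target of known height at a
// known field position.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class VisionPoseMath {
  public static Pose2d estimateRobotPose(
      Translation2d targetFieldPos,  // Known target location on the field (m)
      double camHeightMeters,
      double targetHeightMeters,
      double camPitchRad,     // Camera mounting pitch, up is positive
      double targetPitchRad,  // Pitch to target reported by vision
      double targetYawRad,    // Yaw to target reported by vision
      Rotation2d gyroAngle) { // Field-relative robot heading
    // Horizontal camera-to-target distance from the stacked pitch angles.
    double distance =
        (targetHeightMeters - camHeightMeters) / Math.tan(camPitchRad + targetPitchRad);

    // Field-relative bearing to the target; flip the yaw sign if things come
    // out mirrored for your camera.
    Rotation2d bearing = gyroAngle.plus(new Rotation2d(-targetYawRad));

    // Walk backwards from the known target position to find the robot.
    Translation2d robotPos = targetFieldPos.minus(new Translation2d(distance, bearing));
    return new Pose2d(robotPos, gyroAngle);
  }
}
```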

Perspective n-point can give you a pose directly without a gyroscope, but you need good target corner detection and a good set of model points. PhotonVision lets you specify those, but Limelight doesn’t (protip: you can run PhotonVision software on Limelight hardware).
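
For reference, a PnP solve with the OpenCV Java bindings (the same OpenCV that ships with WPILib) looks roughly like this; the model points and intrinsics below are placeholders, with real values coming from the field drawings and a camera calibration:

```java
// A rough sketch of a PnP solve with OpenCV's Java bindings.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class PnPExample {
  /** Returns the target's translation in the camera frame. */
  public static Mat solve(MatOfPoint2f detectedCorners) {
    // Target corner locations in the target's own frame (meters); these are
    // the "model points" you'd pull from the field drawings.
    MatOfPoint3f modelPoints = new MatOfPoint3f(
        new Point3(-0.1, 0.1, 0),
        new Point3(0.1, 0.1, 0),
        new Point3(0.1, -0.1, 0),
        new Point3(-0.1, -0.1, 0));

    // Camera intrinsics from calibration (placeholder fx, fy, cx, cy).
    Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
    cameraMatrix.put(0, 0, 600.0);
    cameraMatrix.put(1, 1, 600.0);
    cameraMatrix.put(0, 2, 320.0);
    cameraMatrix.put(1, 2, 240.0);
    MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

    // rvec/tvec are the target's rotation and translation in the camera frame.
    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Calib3d.solvePnP(modelPoints, detectedCorners, cameraMatrix, distCoeffs, rvec, tvec);
    return tvec;
  }
}
```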

Another option is SIFT, like what 971 does.

What would be ideal for a plug-and-play solution is fiducial markers on the field (e.g., ArUco, AprilTag) with a pre-built Pi image that can detect them and send timestamped field-relative poses over NetworkTables. We’d want to find or write a more performant marker detector than what’s in OpenCV.

Mapping

I’ve seen people use factor graphs for this; in particular, GTSAM’s implementation of them. It would need to be cross-compiled for a Pi.

LiDAR would help for environment mapping, but good systems are still way too expensive. Hopefully economies of scale kick in at some point. They also don’t work well with the field’s plexiglass.

Good feature detection is also essential so the measurements can be correlated with each other to build up your environment. That goes back to the 971 SIFT stuff I mentioned.

ML labeling helps with predicting how objects will behave, which may influence your path planning. It can also be used for feature detection, if it’s accurate enough.

Path planning

Options for pathfinding include but aren’t limited to A*, RRT*, and D* lite.
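
For a flavor of the first, here’s a minimal A* sketch on a 2D occupancy grid. Real field-scale planners layer obstacle inflation, path smoothing, and replanning on top of this:

```java
// A minimal A* sketch on a 2D occupancy grid; blocked[x][y] marks obstacles.
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class AStar {
  record Cell(int x, int y) {}

  /** Returns a start-to-goal path, or an empty list if none exists. */
  public static List<Cell> plan(boolean[][] blocked, Cell start, Cell goal) {
    Map<Cell, Double> gScore = new HashMap<>();
    Map<Cell, Cell> cameFrom = new HashMap<>();
    // Frontier ordered by f = g + h, with Euclidean distance as the heuristic.
    PriorityQueue<Cell> open = new PriorityQueue<>(Comparator.comparingDouble(
        (Cell c) -> gScore.getOrDefault(c, Double.MAX_VALUE) + dist(c, goal)));
    gScore.put(start, 0.0);
    open.add(start);

    while (!open.isEmpty()) {
      Cell current = open.poll();
      if (current.equals(goal)) {
        // Walk the parent pointers back to recover the path.
        List<Cell> path = new ArrayList<>();
        for (Cell c = goal; c != null; c = cameFrom.get(c)) {
          path.add(c);
        }
        Collections.reverse(path);
        return path;
      }
      // Expand the 8-connected neighbors.
      for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
          if (dx == 0 && dy == 0) continue;
          Cell next = new Cell(current.x() + dx, current.y() + dy);
          if (next.x() < 0 || next.y() < 0 || next.x() >= blocked.length
              || next.y() >= blocked[0].length || blocked[next.x()][next.y()]) {
            continue;
          }
          double tentative = gScore.get(current) + dist(current, next);
          if (tentative < gScore.getOrDefault(next, Double.MAX_VALUE)) {
            gScore.put(next, tentative);
            cameFrom.put(next, current);
            open.remove(next);  // Re-insert so the queue re-sorts this cell.
            open.add(next);
          }
        }
      }
    }
    return List.of();
  }

  private static double dist(Cell a, Cell b) {
    return Math.hypot(a.x() - b.x(), a.y() - b.y());
  }
}
```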

We’re experimenting with trajectory optimization in CasADi.

It works well and only takes a few seconds on a laptop. We’re still working on ways to speed it up so it can potentially run in real time for simple cases without obstacles; that could replace our existing trajectory generation stack with something much simpler and more general. FRC team 2363 has gotten super fast results for their swerve drive by exploiting the differential flatness property, but R&D for differential drive is still ongoing.
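
For context, the problem we hand CasADi is roughly a minimum-time direct transcription like this (a sketch; the real formulation has drivetrain dynamics, actuator limits, and obstacle constraints):

$$
\begin{aligned}
\min_{x_k,\,u_k,\,T}\quad & T \\
\text{s.t.}\quad & x_{k+1} = x_k + f(x_k, u_k)\,\frac{T}{N},\quad k = 0,\dots,N-1 \\
& x_0 = x_{\text{start}},\quad x_N = x_{\text{goal}} \\
& \lVert u_k \rVert_\infty \le u_{\max}
\end{aligned}
$$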

I still need to cross-compile CasADi for all our platforms; our Gradle build system doesn’t exactly make it easy since there are several dependencies, like MUMPS, that also need to be retrieved and built. I’ll probably follow what the Arch User Repository PKGBUILD files do since those seemed to Just Work when I was installing CasADi locally.

Controls

We’ll be adding more options in 2023 for teams that have a model of their robot and want something more optimal than Ramsete + cascaded velocity control.
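
To give a feel for the model-based route with what’s already in WPILib, here’s a hedged flywheel sketch (the kV/kA values and tolerances are placeholders from a hypothetical characterization):

```java
// A sketch of the model-based stack on a hypothetical flywheel: identified
// model -> Kalman filter -> LQR. All constants are placeholders.
import edu.wpi.first.math.Nat;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.controller.LinearQuadraticRegulator;
import edu.wpi.first.math.estimator.KalmanFilter;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.system.LinearSystem;
import edu.wpi.first.math.system.plant.LinearSystemId;

public class FlywheelControl {
  // kV (V per rad/s) and kA (V per rad/s^2) from system identification.
  private final LinearSystem<N1, N1, N1> plant =
      LinearSystemId.identifyVelocitySystem(0.023, 0.001);

  private final KalmanFilter<N1, N1, N1> observer = new KalmanFilter<>(
      Nat.N1(), Nat.N1(), plant,
      VecBuilder.fill(3.0),   // State stddev: how much we trust the model
      VecBuilder.fill(0.01),  // Measurement stddev: how much we trust the encoder
      0.020);                 // Nominal loop period (s)

  private final LinearQuadraticRegulator<N1, N1, N1> controller =
      new LinearQuadraticRegulator<>(plant,
          VecBuilder.fill(8.0),   // Velocity error tolerance (rad/s)
          VecBuilder.fill(12.0),  // Control effort tolerance (V)
          0.020);
}
```

Wrapping the observer and controller in a LinearSystemLoop is the usual next step.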

Why not ROS?

With NetworkTables 4 (a pubsub framework) and the controls R&D, we’ve been reinventing ROS piece by piece. The main reason we haven’t used ROS is that the architectural overhead isn’t a good fit for the average FRC team. ROS is designed by and for the robotics industry, which generally has a lot of software resources to maintain the middleware; the mid-tier FRC team generally doesn’t. Teams like 900 and 971 do have those resources, though, so using ROS or custom RTOS middleware makes sense for them.
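
For the curious, the NT4 pubsub shape looks roughly like this (the API was still settling as of this writing, and the topic name is made up):

```java
// A sketch of the NT4 pubsub shape.
import edu.wpi.first.networktables.DoublePublisher;
import edu.wpi.first.networktables.DoubleSubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;

public class PubSubExample {
  NetworkTableInstance inst = NetworkTableInstance.getDefault();

  // Coprocessor side: publish measurements to a topic.
  DoublePublisher distancePub = inst.getDoubleTopic("/vision/distance").publish();

  // Robot side: subscribe with a default for when nothing has arrived yet.
  DoubleSubscriber distanceSub = inst.getDoubleTopic("/vision/distance").subscribe(0.0);

  void periodic() {
    distancePub.set(1.23);        // Coprocessor publishes.
    double d = distanceSub.get(); // Robot code reads the latest value.
  }
}
```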

I also think our abstractions for some of the controls stuff are a better fit for our audience, and it’s a core competency we can afford to build in-house. For example, our system modeling, Kalman filter, and model-based control APIs turned out rather clean and close to mathematical convention, so we just need some more ease-of-use wrappers built on top (Java sucks at math and linear algebra due to lack of operator overloading, reified generics, and non-type generic parameters). I’d say our weak points at the moment are more general kinematics (think robot manipulator equations from Lagrangian mechanics) and more general trajectory planning for non-holonomic and holonomic drivetrains.

ROS has the TEB local planner in its navstack, but from what I’ve heard from those familiar with it, people use it because it’s a canned thing that works, not because it’s good or state of the art. There’s more modern stuff like Ch. 10 - Trajectory Optimization.

With all that said, we do reuse external code when it makes sense. The DARE solver we use for model-based linear optimal control came from Drake because it fits our use case (real-time optimal controller synthesis), and we can’t write a better one. Also, a lot of our internal data structures are copied directly from the LLVM compiler infrastructure.

How to help teams use fancy controls easier

The biggest struggle teams still have is hardware configuration (making sure the motors move the desired way with positive voltage, the sensors count the right way, the units are right, etc.). A guide on https://docs.wpilib.org that walks through trajectory tracking for a kitbot with one gear ratio and one encoder choice, plus a complete example project, would be cool.
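
For example, the guide’s first page could start with something like this sketch (placeholder numbers for a hypothetical kitbot; the point is the inversions and unit conversions):

```java
// A sketch of the hardware configuration step for a hypothetical kitbot;
// every number here is a placeholder for that team's actual hardware.
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.motorcontrol.PWMSparkMax;

public class DriveConfig {
  PWMSparkMax leftMotor = new PWMSparkMax(0);
  PWMSparkMax rightMotor = new PWMSparkMax(1);
  Encoder leftEncoder = new Encoder(0, 1);
  Encoder rightEncoder = new Encoder(2, 3);

  public DriveConfig() {
    // Positive voltage must drive the robot forward on BOTH sides; one side
    // is mechanically mirrored, so it gets inverted.
    rightMotor.setInverted(true);

    // Convert encoder pulses to meters: wheel circumference over
    // (pulses per revolution * gear ratio between encoder and wheel).
    double wheelCircumference = Math.PI * 0.1524;  // 6 in wheels
    double pulsesPerRev = 2048.0;
    double gearRatio = 10.71;
    double metersPerPulse = wheelCircumference / (pulsesPerRev * gearRatio);
    leftEncoder.setDistancePerPulse(metersPerPulse);
    rightEncoder.setDistancePerPulse(metersPerPulse);

    // Encoders must count up when the robot drives forward.
    rightEncoder.setReverseDirection(true);
  }
}
```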

Contributions welcome

By the way, help on any of these things would be appreciated. It’s open source. :slight_smile: Here’s the 2023 controls project board. I’ve been one of the main contributors for WPILib controls things lately, but we’ve had student contributions before.

  • Kinematics, odometry, and trajectory classes (student from team 5190)
  • Model-based controls library (students from teams 604, 5940, and another I forget)
  • Trajectory optimization (work in progress, student from team 2363)

Contributing useful, general things to WPILib and writing docs for them helps avoid the wheel reinvention that happens after students graduate. That lets teams focus on solving their particular high-level problem instead of worrying about the low-level details and “how”.

24 Likes

Gotta include 195 in there too now. :slight_smile:

But yeah, it’s hard. I also want to be clear that it isn’t WPILib or ROS, it’s WPILib and ROS. We stand on the shoulders of giants and that includes every single WPILib contributor.

Also, this was just an excellent post on what is happening, where things are going, and where work needs to be done. So cool.

6 Likes

Um, wrong, totally wrong. Public-use GNSS systems can have centimeter-level accuracy, and with an RTK system at a competition, EVERYONE could use it very easily.

To the best of my knowledge, there is no civilian (C/A) GPS receiver that can achieve that kind of accuracy without long-term averaging and/or an augmentation system (WAAS, DGPS, etc.). The former doesn’t work if you’re moving, and the latter are forbidden under current rules.

I don’t know much about it but RTK seems to be very capable: SparkFun RTK Facet - GPS-19029 - SparkFun Electronics

RTK is indeed that sweet, but maybe not that practical. You would need to ship reference stations with the field or pull in someone else’s feed ($$$ + geographically variable). Plus, you need to actually be able to receive the normal GPS signal indoors… by no means guaranteed.

Zebra or a similar system seems more robust to me, especially when considering international competitions.

2 Likes

I think this holds some real possibilities for a software revolution in FIRST. An onboard processor is inherently quicker than a human driver to respond (especially when the robot is on the far side of the field). Additionally, with the advent of Nvidia’s Isaac Gym consolidating the whole simulation and observation/rewarding process onto the GPU, teams with access to university labs or other resources may be able to reasonably pull off a model that excels at its task. We all know that ML defeats humans in specialised tasks.

Now, there is a massive problem: the cost. It’s going to be prohibitively expensive for all but the most fortunate of teams to have access to mentors who can guide students through the process, computers to run the simulations on, money to spend on sensors, etc. It’ll also most likely create an insurmountable lead for teams with solid robots and highly trained models.

Imagine a robot running a Jetson Nano connected to two LiDAR sensors, trained with a penalty for every violation it makes, every time a motor draws more than a set torque, every time an opponent scores, the time it takes to cycle, etc.; and a reward every time it scores, intakes a game object, performs an endgame task, whatever. It’ll get really good at it.

All in all, I think it’ll be an incredible investment for teams who can do it, and potentially a FIRST-ruiner if teams try to be private about it.

4 Likes

It seems like folks are thinking about autonomous robots competing to win events… too ambitious a step IMO. What about a 25th-percentile-performing autonomous bot at a district qualifying event, where you can use high-quality hardware that could win the event if teleoperated? Could it be done? I sort of suspect yes.

QFT. The problem statement is mismatched; human drivers are still better at the task than any currently known form of AI.

I say this because a huge number of folks I talk to… even folks with college degrees… see “machine learning” simply as “we don’t know how to solve this problem, so we throw ML at it, and then the problem gets solved.” It doesn’t matter how generic or unknown the problem statement actually is, or what intermediate steps might be required. ML is just a magic problem-solving box.

I think it’s worthwhile to push towards it, though. There are cool and awesome enabling technologies that remain under-explored in the FRC space. If done well, these open doors far beyond “replace the driver”.

The value is in the journey, not the end.

2 Likes

I believe some day we will see fully auto robots… but it will take a few years. Mainly, the strategy, tactics, and dealing with defending and alliance robots are just too complicated for FRC-level software.

Until then, we’re trying a gradual approach: slowly replacing some of the traditional driver tasks with partial automation, reducing mental load and accelerating repeated moves.

4 Likes

Unless there is something I am missing, how does this solve the known accuracy issues of GPS signals in an indoor setting?

The Rio also feels bad about playing video games when it should be working on homework, so there is that…

5 Likes

I didn’t say it did?

1 Like

+1 for this; it’s the same layering I’ve advocated for in similar conversations before:

ML has some potential application in translating “field and match state” into “optimal next-best robot action to take”.

ML has much less applicability in translating “robot action to take” into “motor voltage commands”.

action->voltage is a much more tractable problem and the one teams are solving. Orbit’s auto-drive is a great example of a necessary building block for these “full-auto” robots. field_state->optimal-next-action is much less tractable, and something I haven’t seen many folks work on yet.

3 Likes

My apologies! I read the line “RTK seems to be very capable” in the wrong context, given this topic’s conversation about GNSS/GPS systems in an FRC setting.

2 Likes