Computational Design in FRC

I recently stumbled across this video from the Disney Research Hub, which explains a process for generating a robot model from high-level descriptions. For example, given a path for an end effector to travel and a library of modular mechanisms, it generates a model optimized to follow that path. At the end of the video they even give some simple constraints for their ‘Tetrabot’ to follow, and the system generates a robot model that ends up being 3D printed and tested in real life! I am gearing this post toward the design side, but I wanted to mention the 3D printing to illustrate the effectiveness of their work. Maybe something manufacturing-related will be in a future post of mine. My questions are:

  1. How feasible is something like this today?
  2. How could one go about implementing the full-stack approach? Maybe only part of it could be generated like this (i.e., instead of an entire robot, one subsystem, or even a gearbox).
  3. How effective would a robot designed by a system like this be in-game? What could some drawbacks of such a system be?

FRC has a large repository of COTS parts and a continuous stream of community designs that could be leveraged. I think it would be very interesting to see a system like this generate a robot for a game with a combination of COTS and custom subsystems.
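To make the gearbox case concrete, here is a minimal sketch of what "generate a subsystem from a COTS library" might look like at its simplest: exhaustively search pairings from a small gear library for a two-stage reduction closest to a target ratio. The tooth counts and the function name are my own illustrative assumptions, not any real vendor's catalog.

```python
# Hypothetical sketch: pick a two-stage reduction from a small COTS-style
# gear library that lands closest to a target overall ratio.
from itertools import product

GEAR_TEETH = [12, 14, 18, 24, 36, 48, 60, 72, 84]  # assumed library

def best_two_stage(target_ratio):
    """Search all pinion/gear pairings for the closest overall reduction."""
    best = None
    for p1, g1, p2, g2 in product(GEAR_TEETH, repeat=4):
        if g1 <= p1 or g2 <= p2:
            continue  # only consider reductions
        ratio = (g1 / p1) * (g2 / p2)
        error = abs(ratio - target_ratio)
        if best is None or error < best[0]:
            best = (error, (p1, g1), (p2, g2), ratio)
    return best

# e.g. best_two_stage(10.0) finds a pairing whose ratio is nearest 10:1
```

A real system would also have to reason about geometry, load ratings, and packaging, which is where the "chained" generative approach gets interesting.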

I know that different CAD applications have different methods of parameterizing designs. Just a week or so ago I came across this post about generating commonly used parts. I wonder if you could ‘chain’ these FeatureScripts together in a higher-level process.

If I’m not mistaken, Woodie Flowers spoke about AI in design in a kickoff video. I believe he talked about how he envisioned AI working with designers to create parts; the simple example he gave was asking the AI to “design me an aluminum L-bracket of this size that will have less than 1.5mm deflection at the end with a load of this size”. Interested in hearing what people think!
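The deflection part of that L-bracket request is already straightforward to compute. As a hedged sketch (treating one leg of the bracket as a simple cantilever, with illustrative numbers that are my own assumptions, not a real design rule), the minimum thickness for a given deflection limit falls out of the standard cantilever formula:

```python
# Model one bracket leg as a cantilever: tip deflection
#   delta = F * L^3 / (3 * E * I), with I = b * t^3 / 12.
# Solving for thickness: t = (4 * F * L^3 / (E * b * delta))^(1/3).

E_ALUMINUM = 69e9  # Young's modulus of 6061 aluminum, Pa

def min_thickness(load_n, length_m, width_m, max_deflection_m):
    """Minimum thickness keeping cantilever tip deflection under the limit."""
    return (4 * load_n * length_m**3
            / (E_ALUMINUM * width_m * max_deflection_m)) ** (1 / 3)

# e.g. a 50 N end load on a 100 mm x 25 mm leg with a 1.5 mm limit:
t = min_thickness(50, 0.100, 0.025, 0.0015)  # roughly 4.3 mm
```

The hard part Flowers was pointing at isn't this arithmetic; it's having the AI choose the model, the boundary conditions, and the geometry on its own.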

Edit: cut a bunch so it’s (less) wall-of-texty


So FWIW, I don’t think this is an entirely new concept. As you saw, there’s a pattern of enlisting computer aid by programming an algorithmic design: from a few critical, unique inputs, derive a larger set of outputs.

The bracket case is simple: define which constraints the user controls and which the computer will assume. Given a set of user constraints, execute the algorithm, and a bracket comes out the back end.

In cases like this, the “real” work becomes describing the process of going from a small number of meaningful constraints to a finished design.
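That "few inputs, many derived outputs" process can be sketched in a few lines. Everything derived here (the clearance rule, the edge-margin rule of thumb, the thickness heuristic) is an assumption I made up for illustration; the point is only the shape of the algorithm, where the user fixes two values and rules fill in the rest:

```python
# Illustrative sketch: user supplies two constraints, assumed rules
# derive the remaining parameters needed to produce the part.
from dataclasses import dataclass

@dataclass
class Bracket:
    leg_length_mm: float   # user input
    bolt_dia_mm: float     # user input
    hole_dia_mm: float     # derived: bolt clearance
    edge_margin_mm: float  # derived: edge-distance rule of thumb
    thickness_mm: float    # derived: stiffness heuristic

def design_bracket(leg_length_mm, bolt_dia_mm):
    hole = bolt_dia_mm + 0.5                   # clearance rule (assumed)
    margin = 2 * bolt_dia_mm                   # 2x-diameter edge distance
    thickness = max(3.0, leg_length_mm / 20)   # floor at 3 mm (assumed)
    return Bracket(leg_length_mm, bolt_dia_mm, hole, margin, thickness)
```

Swapping the hand-written rules for something learned from examples is exactly the leap described next.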

The critical change you’re describing is using AI techniques to bridge that gap. That is to say, rather than have a human describe the specific algorithm, use a more generic algorithm that detects patterns from lots of examples, and extrapolates in useful ways.

In softwareland, the future is now: https://copilot.github.com/ . Preliminary results suggest it is pretty good: certainly not a replacement for a human, but good at providing timely suggestions and saving a few manual Google searches.

The way I view a lot of these things, from the limited exposure I’ve had: they’re time-savers, not technology-creators. They help people move from “turn the crank” work into more “purely creative” work. They augment human ability, not supplant it.

However… since I’m walking myself into the ethics side of it anyway… the risk I see of unbridled progress here is that the technological progression will outpace humanity’s ability to upskill and educate, leaving large masses of the population in the dust. This has to be accounted for in some way.

