Orbit 1690 Presents: 2024 robot reveal - "Doppler"

We currently do not run our swerve and odometry at a higher rate than the rest of the code. This is actually something we've talked about, and we'll definitely try it this off-season.
Regarding your second question, I don’t really understand what you mean. Could you please elaborate?

I noticed you’re doing some impressive automation on Einstein. Are you using the built-in WPILib classes for your odometry, or your own algorithms? If it’s the latter, could you share a bit about it?

Would you be open to sharing some or all of your code for this season? More specifically, I’m interested in how you accommodated the robot’s physics in your trajectories. Did you generate them on the fly? Did you rely on a third-party drive controller like PP, or use your own? How did you handle edge cases, like in-place turns?
We do some of this in our robot already, but our routines are still far from perfect, and we want to find a way to improve them.


We use our own odometry, but I don’t think it’s different in concept from WPILib’s.
We do have one feature that I haven’t seen anywhere else. In practice matches we drive from one end of the field to the other and measure how far the robot thought it drove. Then we multiply our odometry by the ratio of the actual field length to the value we measured. We started doing this this year, because we saw that slight differences between carpets add up. After implementing this method, our accuracy has noticeably improved.
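A minimal sketch of that calibration idea (the function names and the way the correction is applied are my assumptions; the post doesn’t show the actual implementation):

```python
# Hypothetical sketch of the carpet scale-factor calibration described above:
# drive a known distance, compare it to what odometry reported, and scale
# all subsequent odometry deltas by the ratio.

FIELD_LENGTH_M = 16.54  # approximate 2024 field length in meters (assumption)

def carpet_scale_factor(measured_m: float, actual_m: float = FIELD_LENGTH_M) -> float:
    """Ratio to multiply raw odometry distances by so they match reality."""
    return actual_m / measured_m

def corrected_delta(raw_dx: float, raw_dy: float, scale: float) -> tuple[float, float]:
    # Apply the same scalar to both axes; the posts below note they do not
    # compensate differently depending on driving direction.
    return raw_dx * scale, raw_dy * scale

# Example: odometry thought it drove 16.38 m across a 16.54 m field.
scale = carpet_scale_factor(16.38)
dx, dy = corrected_delta(1.0, 0.5, scale)
```

The single scalar is a coarse correction, but per the thread it was enough to noticeably improve accuracy.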


We use our own custom Pathfinder algorithm, which generates a CSV file for the robot to follow; the robot parses it before each match, so we don’t generate trajectories on the fly. We account for the robot’s physics by enforcing maximum velocities, accelerations, etc.
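The post doesn’t show the CSV format their Pathfinder emits, so the column layout below (time, pose, velocity per row) is purely an assumption; this is just a sketch of the “parse the file once before the match” step:

```python
# Hypothetical sketch of pre-match trajectory loading from a CSV file.
# The column order (t, x, y, heading, vx, vy) is an assumption; the actual
# Pathfinder output format is not shown in the thread.
import csv
import io
from dataclasses import dataclass

@dataclass
class Setpoint:
    t: float        # seconds from trajectory start
    x: float        # meters
    y: float        # meters
    heading: float  # radians
    vx: float       # m/s
    vy: float       # m/s

def parse_trajectory(text: str) -> list[Setpoint]:
    """Parse one CSV trajectory into a list of setpoints."""
    return [Setpoint(*(float(v) for v in row))
            for row in csv.reader(io.StringIO(text)) if row]

# Example: a two-row trajectory, as it might appear in the file.
traj = parse_trajectory("0.0,0.0,0.0,0.0,1.0,0.0\n0.02,0.02,0.0,0.0,1.0,0.0\n")
```

Parsing once before the match keeps the control loop free of file I/O, at the cost of not being able to replan on the fly.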
We actually don’t have the ability to turn in place within a trajectory. The times we did turn in place this year were when we skipped a missing note on the midline: in those cases we paused following the path, ran another algorithm on the fly to drive to the note detected by the camera, and then returned to following the precalculated path.
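A rough sketch of that “pause the path, chase the detected note” fallback (the camera interface, the proportional controller, and the gain are my assumptions, not their implementation):

```python
# Hypothetical sketch of switching between path-following and a camera-driven
# note pickup, as described above. Offsets are robot-relative, in meters.

def drive_command(note_offset, path_setpoint, kp=2.0):
    """Return a (vx, vy) command: chase the note if the camera sees one,
    otherwise follow the precalculated path setpoint."""
    if note_offset is not None:
        # Simple proportional drive toward the camera-reported offset.
        # The real on-the-fly algorithm is not shown in the thread.
        dx, dy = note_offset
        return kp * dx, kp * dy
    return path_setpoint  # no detection: resume the precalculated path

# Example: camera sees a note 0.5 m ahead and 0.1 m to the right.
cmd = drive_command((0.5, -0.1), (1.0, 0.0))
```

Once the pickup completes (or the detection is lost), passing `None` hands control back to the stored trajectory.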


That’s an interesting point. We found that the errors start accumulating when a complex holonomic move is combined with linear driving over long distances.

For instance, in auto a robot starts at 30 degrees off straight (plus or minus) and picks up a note about 1 m away while rotating to straight - no problem.
A robot starts at 30 degrees off straight and picks up a note mid-field while rotating to straight as it drives - unacceptably large linear error (10-15 cm) in the X and/or Y axis (usually both). Our camera-aided routines corrected the Y-axis error on pickup in such cases, but not the X axis. In other words, if the robot ends up a bit closer to either wall and the error is around 15 cm, our camera-aided “last mile” routines fix that; but if we end up closer to or farther from the note, our cameras did not try to estimate the actual distance (which we probably could do as a last-mile routine).

One potential issue - we greatly overestimated our robot’s max angular velocity (3 rps vs. the actual 2.23 rps), and I think that was one of the contributing factors.

We will test that, of course. Now that we’re working on precision characterization, it should be useful for next year’s robot as well.

I still think precise in-place turning is possible. I observed this from Team 176: their mid-field shooting was fairly good, and the turn before shooting was a two-move in-place maneuver (they always overshot a bit and then corrected back slightly). Their Z error in such attempts was higher than their Y error, which means their horizontal aim was on target.

What do you plan to do on fields like 2016’s, 2020’s, etc.? Or when you collide with other robots?


We can always measure over a different distance; the longer it is, the better, of course. Specifically, the 2020 field had a clean path across the field, and 2016 had an almost clean one.

From our experience, other robots bumping into us wasn’t really an issue. Also, we can always ask other teams to stay away from us for a couple of seconds - it’s a practice match, and it’s intended for teams to check and calibrate their robots.


Noted for the next time we compete against each other :smiling_imp:

Just kidding lol. That’s good to know, because I always wondered what you guys were doing in practice matches when you just drove around the field.


What does this have to do with the carpet? Is it about the wheels skidding or slipping on the surface, so you multiply by a constant term to account for it?

It likely has to do, at least in part, with friction against the carpet. It’s also affected by the alignment and orientation of the carpet fibers, though we don’t compensate differently depending on driving direction.
That said, it’s more empirical than anything.

Hey Orbiters, nice robot! What type of string did you use for your hooks and for tensioning the climber?

It looks like Dyneema.

Yes… Dyneema