When I asked Cam on March 19th, he sent me this link; I believe we used the 45° and the 90° from this kit. @came20 or @jonahb55 can ack.
Congrats and welcome to the future! Top-down design with Onshape is an awesome development track and something that helps build robust, easily adjustable models!
This is a great question and something that I see a lot. The overall answer is that when we’re drawing the SuperSketch, we’re really trying to build a few things we can then use to get enough context into the part studio to start modeling the subsystem without having to fear that we’re “overstepping” the boundaries of a different subsystem. The second bit is the interfaces between these subsystems and their mate connectors. You used the example of the arm, which I think is perfect for illustrating the method.
In this graphic, you can see how we started the SuperSketch for the side profile of the robot. We knew roughly where everything was going to go from a whiteboard sketch, but we just needed to start drawing everything to scale. The sketch is done on the right plane, in the middle of the robot, and shows the space allocation for the arm/shooter (purple), pivot (brown), elevator (yellow), and the pin the arm was going to pivot about (lime green).
Once we have that allocation, we create two new planes and use the project/convert tool to pull the geometry into 3D space, which you can see below.
Now we have a ton of context to derive into the new part studio for the arm, and we can start extruding parts. The nice part is that the parts will live in the space that is correct for them, so when you put all your different subsystems into a full assembly, they all drop into the correct place, as long as when you build the subsystem assembly you fix at least one part where it is imported from the part studio.
Here’s another example below of a SuperSketch I did of a new robot pallet jack cart that we’re making to easily move the robot around the practice field without manually lifting it up.
Here you can see we got even more detailed, drawing in both faces of the box tube, which makes it even clearer which way things should be extruded.
I quickly made a part studio and derived in just the tube sketches to show how you can then extrude the tubes and have enough context to build the details.
Here’s a quick example of how you can change one dimension and six parts all update without exploding the model.
I think the best way to learn this is by doing some smaller projects, like this cart, to get the hang of it. Doing some smaller, less feature-intensive systems is a great first step and an easy way to get more context for what details you do and don’t need in the SuperSketch. As you go through the process, it will become more and more clear what context belongs in the SuperSketch versus what should be in the subsystem part studio.
We started using SuperDerive before they made a bunch of upgrades to the Derive tool; Derive now allows you to do nearly everything SuperDerive did, so moving forward we’ll just use Derive.
A few bits are important for this to work: you need a clear system for when to communicate with the others working on the sketch, and you need to be consistent with versioning, updating, and syncing the model. We normally just encourage all students to update the SuperSketch whenever they see it has the little blue circle.
But it’s a good habit to update everything before you start working to make sure it’s fresh, and if you make a change, to spend the time to tell everyone to update in whatever communication tool your team uses.
This will vary from team to team, but we typically divide up subsystems by whatever feels like it makes the most sense. I know this is a cop-out answer, but experience will make it clearer how you should divide things up. Our models can get a little more complex, and we can have a larger number of people working on and owning subsystems, so we primarily focus on defining the scope of work for the student or group of students and having them work within that scope in their own document. The arm and arm pivot are a good example: halfway through the year this year, we pulled the two into their own subsystems. I believe the reason was that we had too many people working in a single subassembly because it contained the elevator, pivot, and arm, so pulling them apart let us work on just one bit of it without needing to mess with the whole assembly, and it improved document performance. In hindsight, those should have all been their own subsystems from the start.
We did these a while ago and had the sketches handy, so we just dropped them into the sketch and mated them in the correct location. Less work is better work.
Hope this answers all your questions. We’ve been slowly exploring more and more ways to make Onshape more performant and easily usable by a ton of different students all at once. We’re far from perfect, and still far from where we’d like to be, but we’re making progress. The next big task I’d like to tackle is utilizing the branch and merge features of Onshape more. It will be great to be able to pull a subassembly into a new branch to play around without risking messing up the main assembly, and then, when we’re happy with the changes, merge them back to main, version it, and update the master assembly! Exciting!!
What was the reasoning behind printing the camera mounts out of TPU? Did it help reduce vibrations in vision while the robot was moving? I assume it must be a pretty high shore hardness and infill to not flop around.
In fact, we want camera mounts that can flop around a bit. Here’s what the mounts were like on our 2023 robot (the image to the left, not the right):
Since we prefer to put cameras directly on top of swerve modules, they’re very exposed to collisions with other robots and the field. Our first several iterations of the mount were rigid, and we went through at least one set per driver practice session while colliding with various objects…
The flexible mounts just bend out of the way and bounce back, which makes them very difficult to destroy (we’ve had robots drive up on them before with no issues). Of course, we also use shrouds around the lenses to protect the more sensitive components. The mounts are rigid enough not to move significantly during normal robot movements, so they don’t cause any issues with our pose estimation accuracy. Also keep in mind that since we use a full 3D solver (rather than relying on a highly precise mounting angle), the pose estimator is relatively robust to minor deviations in the camera position.
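For anyone curious what the “full 3D solver” buys you here, a minimal sketch of the transform chain is below. It assumes WPILib’s Pose3d/Transform3d geometry classes, and the names (EstimateRobotPose, robotToCamera) are illustrative rather than our actual code:

```cpp
// Minimal sketch of turning a full 3D tag solve into a robot pose using WPILib
// geometry classes. Names here (EstimateRobotPose, robotToCamera) are illustrative.
#include <frc/geometry/Pose3d.h>
#include <frc/geometry/Transform3d.h>

frc::Pose3d EstimateRobotPose(const frc::Pose3d& fieldToTag,        // tag pose on the field
                              const frc::Transform3d& cameraToTag,  // from the 3D solve
                              const frc::Transform3d& robotToCamera) {  // mount transform
  // Chain the transforms: field -> tag -> camera -> robot.
  frc::Pose3d fieldToCamera = fieldToTag.TransformBy(cameraToTag.Inverse());
  return fieldToCamera.TransformBy(robotToCamera.Inverse());
}
```

Because the solve recovers the camera’s full 6-DOF pose relative to the tag, a little mount flex only shows up as a small error in the fixed robotToCamera term instead of breaking a baked-in viewing-angle assumption.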
Thank you so much for all of the advice! You answered all our questions with so much detail, and as a team new to top down design, this helps us immensely. You guys are the best!
This just means solvePNP here, right?
Yes, as opposed to other techniques (MegaTag 2, pinhole model, etc) that depend heavily on a very precise camera angle.
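For anyone who hasn’t poked at this, here’s a rough sketch of what a single-tag full 3D solve looks like with OpenCV’s solvePnP. This is the generic technique, not necessarily what any particular vendor’s stack runs; the tag size is an assumption and the corner ordering follows OpenCV’s SOLVEPNP_IPPE_SQUARE convention:

```cpp
// Rough sketch of a "full 3D solve" on one square tag with OpenCV.
#include <opencv2/calib3d.hpp>
#include <vector>

// corners: detected tag corners ordered top-left, top-right, bottom-right, bottom-left.
void SolveTagPose(const std::vector<cv::Point2d>& corners,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                  cv::Mat& rvec, cv::Mat& tvec) {
  const double s = 0.1651 / 2.0;  // half the tag edge length in meters (6.5 in tag assumed)
  // Corner order required by SOLVEPNP_IPPE_SQUARE.
  std::vector<cv::Point3d> objectPoints = {
      {-s, s, 0}, {s, s, 0}, {s, -s, 0}, {-s, -s, 0}};
  cv::solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec, tvec,
               /*useExtrinsicGuess=*/false, cv::SOLVEPNP_IPPE_SQUARE);
  // rvec/tvec now describe the tag's full 6-DOF pose in the camera frame; the
  // camera's mounting pitch was never assumed anywhere in the solve.
}
```

The point is that the camera’s mounting angle never appears as an input to the solve; it only comes back in when you transform the result into the robot frame.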
Gotcha! Huh, I didn’t know MegaTag was so sensitive to camera pitch. Something I played with is solving the same camera reprojection minimization problem, but while constraining the robot orientation to be flat and at Z=0 with a given yaw; this was taking on the order of 7 ms for a single tag on a roboRIO, excluding problem setup. Next up is throwing Sleipnir at it and reformulating it as a traditional nonlinear least-squares problem instead of factor graphs for maximum likelihood. I think a sufficiently motivated C++ nerd can make this real-time even on a potato.
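For concreteness, the constrained problem I’m describing is roughly (with yaw either fixed from the gyro or left as a third decision variable):

$$
\min_{x,\,y,\,\theta}\; \sum_{i=1}^{N} \left\lVert \pi\!\left(K,\; \bigl(T_{\text{robot}\to\text{field}}(x, y, \theta)\, T_{\text{cam}\to\text{robot}}\bigr)^{-1} p_i \right) - u_i \right\rVert^2
$$

where $p_i$ are the field-frame 3D tag corners, $u_i$ the matching pixel detections, $\pi(K,\cdot)$ the camera projection with intrinsics $K$ (plus distortion), $T_{\text{cam}\to\text{robot}}$ the fixed mounting transform, and $T_{\text{robot}\to\text{field}}(x,y,\theta)$ the planar robot pose with $z = 0$ and zero roll/pitch.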
Very intriguing. I imagine an approach where the camera pitch is unconstrained (even if other variables are constrained) could potentially still work well in these circumstances. And to be clear, I use MegaTag 2 as a potential example but it’s not something we have a lot of experience with (and I certainly can’t speak to the details of what it does or does not constrain). We have much more experience with simple trig-based approaches to localization, which are extremely sensitive to changes in camera pitch. I’d be very nervous about using flexible mounts with that approach.
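To illustrate why pitch matters so much for the trig approach, here is the standard fixed-angle range formula as a sketch with made-up numbers (along the lines of what the Limelight docs describe, but the geometry values here are assumptions):

```cpp
// The classic fixed-camera-angle trig estimate that makes camera pitch so critical:
// the mount angle appears directly inside the tangent.
#include <cmath>
#include <cstdio>

double DistanceToTarget(double cameraHeightM, double targetHeightM,
                        double cameraPitchRad, double targetPitchRad) {
  // Horizontal distance from the camera to the target.
  return (targetHeightM - cameraHeightM) / std::tan(cameraPitchRad + targetPitchRad);
}

int main() {
  const double deg = M_PI / 180.0;
  // Assumed geometry: camera 0.25 m up, tag 1.45 m up, target seen 20 deg above center.
  double nominal = DistanceToTarget(0.25, 1.45, 15.0 * deg, 20.0 * deg);
  double flexed  = DistanceToTarget(0.25, 1.45, 17.0 * deg, 20.0 * deg);  // mount flexed 2 deg
  std::printf("nominal %.2f m, with 2 deg of flex %.2f m\n", nominal, flexed);
}
```

With those made-up numbers, two degrees of mount flex moves the range estimate by roughly 12 cm, which is exactly the kind of error a flexible mount could introduce into a trig-based pipeline.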
How are those camera mounts assembled? Where do those tiny standoffs go?
The style in Jonah’s gif was on our 2023 robot; those are threaded standoffs that went into heat-set inserts, all with fairly small metric hardware (M3s).
For 2024 we pivoted to installing the camera from the back (meaning we don’t need the standoffs or the separate shroud part); there is an assembly in this post:
If this raises any more specific questions, I’m always happy to clarify things further. I expect we will have more changes come 2025, so keep an eye out for that.
Though not ideal, it may be viable if you run the calculations on a separate thread, no?
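Something like the sketch below is what I have in mind; it’s generic C++ (Observation, PlanarPose, and SolvePlanarPose are hypothetical placeholders), with the slow solve living on a worker thread while the main loop only submits observations and reads back the latest result. Since the answer arrives a few milliseconds late, you’d still want to fuse it against the capture timestamp.

```cpp
// Bare-bones sketch of pushing a slow (~7 ms) solve onto a worker thread so the
// main robot loop never blocks on it. Observation/PlanarPose/SolvePlanarPose are
// hypothetical placeholders for whatever the real solver uses.
#include <atomic>
#include <chrono>
#include <mutex>
#include <optional>
#include <thread>

struct Observation { /* tag corners, capture timestamp, ... */ };
struct PlanarPose { double x = 0, y = 0, theta = 0; };

PlanarPose SolvePlanarPose(const Observation&) {
  return {};  // the ~7 ms optimization would live here
}

class AsyncPoseSolver {
 public:
  AsyncPoseSolver() : worker_([this] { Run(); }) {}
  ~AsyncPoseSolver() { running_ = false; worker_.join(); }

  void Submit(const Observation& obs) {   // called from the main loop
    std::lock_guard<std::mutex> lock(mutex_);
    pending_ = obs;
  }
  std::optional<PlanarPose> Latest() {    // also called from the main loop
    std::lock_guard<std::mutex> lock(mutex_);
    return latest_;
  }

 private:
  void Run() {
    while (running_) {
      std::optional<Observation> obs;
      { std::lock_guard<std::mutex> lock(mutex_); obs.swap(pending_); }
      if (obs) {
        PlanarPose pose = SolvePlanarPose(*obs);  // slow part, off the main loop
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = pose;
      } else {
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
      }
    }
  }

  std::mutex mutex_;
  std::optional<Observation> pending_;
  std::optional<PlanarPose> latest_;
  std::atomic<bool> running_{true};
  std::thread worker_;
};
```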
Making MegaTag 2 work on the Rio in some kind of PhotonVision way would be huge. This is truly the kind of thing that, if it were public, would help tons of teams.
Could you maybe share the original code that took 7ms? I’d love to take a look. (Not sure if that’s the link you sent)
Btw, does anyone know if MegaTag1 (assuming MegaTag2 is built on it) is fundamentally different than MultitagSolvePnP, or is it just the same?
Yep it is - you’ll have to compile using a Rio toolchain, SCP the executable and all other shared libraries it links against over, kill robot code, and run it manually. And make sure to build in Release mode.
Sorry if this is getting too off-topic for this thread, but do you think supplying initial rvecs and tvecs with useExtrinsicGuess could lead to a similar result?
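Concretely, what I mean is something like this (illustrative names; useExtrinsicGuess only applies to OpenCV’s iterative solver):

```cpp
// Sketch of seeding solvePnP with the previous frame's solution. useExtrinsicGuess
// only takes effect with SOLVEPNP_ITERATIVE, where it sets the starting point for
// the refinement instead of starting cold.
#include <opencv2/calib3d.hpp>
#include <vector>

void SolveWithGuess(const std::vector<cv::Point3d>& objectPoints,
                    const std::vector<cv::Point2d>& imagePoints,
                    const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                    cv::Mat& rvec, cv::Mat& tvec, bool havePrevious) {
  // When havePrevious is true, rvec/tvec hold last frame's pose and seed the solve;
  // otherwise the solver initializes from scratch.
  cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec,
               /*useExtrinsicGuess=*/havePrevious, cv::SOLVEPNP_ITERATIVE);
}
```

(Though I realize the guess only initializes the optimization rather than constraining it, so the solver is still free to wander away from the supplied orientation; maybe that’s the catch.)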
Seems like this could be pretty easily incorporated with the current PV structure for multiTagOnCoprocStrategy. Maybe even expose only pitch, roll, and height in the PV UI.
What material are you guys using to print those end caps?
Would something more along the lines of ABS, PETG, or PETG-CF work in its place?
This is the stuff.
In the past we’ve also used PLA Plus, with a decent bit of success. I’ve started growing fond of ASA recently, printing a lot of it at work!