Hello @JamesCH95 , can you share the CAD of the camera mounts please?
Look for “life cam housing” in our OnShape directory.
Link is in the first post.
Also - I have enabled exporting from our OnShape link share. If you use something, post about it! We’d love to hear how you use/improve/catch inspiration from our CAD work.
Had some fun with sheet metal to integrate the hatch and cargo mechanisms.
We are expecting to pick up our first sheet metal parts today. We cut our wood brain-pans last night. Most of our OTS driveline parts are put together. I think we’ll have most of at least one chassis completed tonight.
That’s some fun sheet metal work! Reminds me of what I do at work…
It’s pretty clear how you will be acquiring cargo, but how will you be retaining (and scoring) your cargo with this setup?
The plan is to have a net or flexible frame attached to the two perforated bars, so the cargo remains captured against the one powered roller.
That makes sense. Will you be close enough to the rocket when this system is in a more vertical position? Or will you be using the top of your hatch panel mechanism to give you a bit more distance?
Great question. We think it will be enough to have the mechanism vertical, or slightly tilted. But using the hatch plunger is a secondary (and hopefully avoidable) backup.
With the supports you have for the hatch panel system, I don’t think it will be too much of a problem if you need to use it as a ledge.
I’m picking your brain because our mechanism looks strikingly similar with a few minor differences. This is looking great! Thanks again for sharing!
We’re also considering adding a little shelf to the cross-bar on the fuel collector. Need to model that up tonight…
Bent sheet metal parts arrived today, and the students began assembling the chassis! Pictures to follow soon.
In software world, we’ve made progress on target detection. The pipeline pseudocode goes something like this:
- Blur image (I’m not sure if this is necessary, but it smooths over small glitches)
- Apply an HSV threshold (The camera’s exposure settings are tuned for a saturated green color off our ring light)
- Filter contours very roughly to reject tiny ones (not sure this is really helping, but it came with our GRIP-generated pipeline)
- Compute minimum-area rotated rectangles with minAreaRect().
- Filter those rectangles. Their aspect ratio must be similar to the vision target stripes (5.5"x2"), and they must be mostly full (that is, the rectangle must be mostly filled by the contour - the contour must have been mostly rectangular).
- Classify rectangles based on their angle. Those tilted about 14.5 degrees to the right are the left-hand stripes, those tilted about 14.5 degrees to the left are the right-hand stripes. All other rectangles are discarded.
- Match stripes. For each left-hand stripe, look through the right-hand stripes. Based on the height of the left-hand stripe in pixels, compute about how many pixels represent one inch at that distance. Count about 8" to the right, and see if there’s a stripe from the list of right-hand stripes near that area.
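The matching step above can be sketched in pure Python. This is a hypothetical illustration, not our actual code: the stripe representation, tolerance, and function names are all assumptions. The idea is just that a stripe's on-screen length gives you a pixels-per-inch scale, which tells you where its partner should sit about 8" to the right.

```python
# Hypothetical sketch of the stripe-matching step. Each stripe is a
# dict with its rotated-rect center (cx, cy) and long-side length in
# pixels. The 5.5" stripe length and ~8" spacing come from the post;
# the tolerance is an illustrative assumption.

STRIPE_LEN_IN = 5.5   # physical stripe length, inches
GAP_IN = 8.0          # approximate left-to-right stripe spacing, inches

def match_targets(left_stripes, right_stripes, tol_frac=0.5):
    """Pair each left-hand stripe with a nearby right-hand stripe.

    left_stripes / right_stripes: lists of dicts like
      {"cx": float, "cy": float, "len_px": float}
    Returns a list of (left, right) pairs.
    """
    pairs = []
    for ls in left_stripes:
        # Use the stripe's apparent length to estimate scale at this distance.
        px_per_in = ls["len_px"] / STRIPE_LEN_IN
        expect_x = ls["cx"] + GAP_IN * px_per_in   # where the partner should be
        tol_px = GAP_IN * px_per_in * tol_frac
        for rs in right_stripes:
            if (abs(rs["cx"] - expect_x) < tol_px
                    and abs(rs["cy"] - ls["cy"]) < tol_px):
                pairs.append((ls, rs))
                break
    return pairs
```

An unmatched left-hand stripe (like the green rectangle mentioned below) simply produces no pair, which is the behavior we want at the edges of the frame.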
This pipeline is added to our repo as of this commit.
The white cross shows where we think the center of the target lies:
This seems to work well with all the field demo images as well. In the image below, the yellow rectangle toward the top is one that was rejected due to its angle. The algorithm is tolerant of partial stripe occlusion, as seen on the right-hand target. The green rectangle toward the right was recognized as a left-hand stripe, but was not matched up with any right-hand stripe.
The game piece collector was simplified for manufacturability. We are also starting to add electrical components to the robot assembly.
Here is the camera housing with ring light.
We have been working on manufacturing some of the smaller components, like drive axles, this week. The snow storm and workload at our sponsor delayed delivery of sheet metal parts by a day. It happens! More pictures and hopefully some video tonight.
Lots of sheet metal has arrived; it has been an exciting couple of nights for us.
Lots of stock and McMaster bits too.
I made a mistake in CAD and one of the bearing sets interferes with the mid-cap chassis member. We cleared out the material on a mill, so it was only a minor setback.
A couple of key seams got welded up on a few parts. TIG'ing is fun!
While our sheet metal sponsor can laser cut to about ±0.002in or so, we like a better fit for bearings. So they laser cut every bearing hole to 1.10in and we use a step drill to knock it out to a final 1.125in ID.
We’re taking a look at staining, rather than painting, our wood brain pan. I think it’s looking pretty classy.
A quick coat of paint of some tweaked colors for this year. We are quite pleased with the change to a darker green and blue.
Lots of Cleco usage to fit everything up for initial assembly.
We also started knocking out delrin/acetal gripper prototypes, an upgrade from the plywood ones.
And have assembled the Mk1 hatch gripper.
We’re excited by how it’s working so far, but already have a few more things to tweak/improve on it.
Week 3 is always my favorite, lots of parts getting made and being assembled. Stay tuned for more!
Sorry if I missed it being discussed earlier in this thread, but what are you using for the linear slides on your hatch mechanism? Is that similar to the plastic round stock found in the KoP?
You didn’t miss anything. Those are OTS linear bearings and shaft stock from McMaster.
The PN for the linear bearing is in our OnShape directory. We’re quite happy with how they’re looking/feeling so far.
@ToddF did some experimentation with different drill bits to get a snug bearing fit. Video here. I don’t know if the ‘best’ bit can be used without a drill press, but it may be a good tool to have as an option.
We have seen that tech tip before (it’s pretty darn good).
However, with the Irwin step drill that we have, we get nice, snug bearing pockets without modifying the tool, even when using hand drills (carefully). We have not had good luck with inexpensive step drills like Neiko.
YMMV; testing one's own particular setup is always a good idea!
Today’s Team 95 software update:
We’ve composed two sets of camera settings:
- One for humans to use in first-person view during the sandstorm
- One for the machine vision algorithms to use
At this point, the two modes are the same, except that in human FPV mode the camera exposure is set to auto.
We modified the main loop of the vision coprocessor code, so that it now monitors a flag in the Network Tables to see which mode should be active. When the flag changes, the software applies the appropriate settings to the camera. Settings are loaded from a pair of JSON files at startup.
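The mode-switch logic can be sketched like this. To keep it self-contained, a plain callable stands in for the NetworkTables flag lookup and the camera-settings call, and the settings live in inline JSON strings rather than files on disk; every name here is an illustrative assumption, not our actual code.

```python
# Hypothetical sketch of switching camera settings based on a flag.
# The real code reads the flag from NetworkTables each loop iteration
# and applies settings to the camera (e.g. via v4l2); here those are
# stubbed out so the change-detection logic is easy to see.
import json

# Stand-ins for the two JSON settings files loaded at startup.
HUMAN_JSON = '{"exposure_auto": 1}'
VISION_JSON = '{"exposure_auto": 0, "exposure_absolute": 10}'

class CameraModeSwitcher:
    def __init__(self, apply_fn):
        self.settings = {
            "human": json.loads(HUMAN_JSON),
            "vision": json.loads(VISION_JSON),
        }
        self.apply_fn = apply_fn   # would wrap the real camera-control call
        self.current = None

    def poll(self, human_flag):
        """Call once per main-loop iteration with the NT flag value."""
        mode = "human" if human_flag else "vision"
        if mode != self.current:            # only re-apply on a change
            self.apply_fn(self.settings[mode])
            self.current = mode
        return mode
```

Applying settings only when the flag changes keeps the main loop cheap, since re-sending exposure settings to the camera every iteration would be wasteful.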
You can see the changes required to implement this feature in this commit - the most interesting changes are probably the ones to
Human FPV view:
Machine vision view. Note that you can now see a target show up as detected in the NetworkTables view.
Your elevator design is excellent! Kudos to your design team. A question on the routing of your rope for the continuous rigging you look to be doing, based on your pulley arrangement:
How are you looking to attach the rope to the top and bottom of the carriage? Will the attachment points be off the rear of the top and bottom horizontals, or some other arrangement?
Thanks in advance!
We’re going to connect it to the back of the carriage. We might use a U-bracket, but are more likely to repurpose one of the dozens of extra pivot mounts that we’ve accumulated.
We are currently planning on not using a powered down-rope, relying instead on gravity. This was effective for us last year.
Have you considered using a constant-force spring to help pull the elevator upwards?
It would make the elevator easier to pull up. It would come down a bit slower, but it should still be fast enough.