I love 6328. Keep the posts coming, always a good read!
Already miss NE like crazy, but it’ll be even sweeter when I come home for some events! It’s hard to explain how much the FIRST program did for me growing up, so always pushing to give back as much as I can.
Just wanted to hop in to talk a little bit about how we’re structuring team management this year. With all of us being virtual (and me being 3,000 miles away), it’s been quite a challenge to keep the team engaged and moving in the right direction. I’m extremely happy with some of the progress we’ve made so far and look forward to the next couple of months as we continue working toward our goals.
To make this simple, I figured I’d just share what a typical week looks like for 6328 in terms of meetings and agendas. We’ve tweaked this slightly over the last few months to figure out what works best for us, and of course your mileage may vary, so I suggest trying different methods until you find a way to keep the students and mentors engaged.
Our week starts on Sunday with our weekly mentors and student leads meeting. We just started doing this recently, as we felt it made sense to pull some of the student leaders into the full-team planning; it does a nice job of closing the communication loop and getting everyone on the same page. We’ve found recently that the students do most of the talking and the mentors just give feedback, which is the direction we’ve been wanting to move for a while. The agenda typically consists of going over each subteam’s accomplishments from the prior week, setting goals for the upcoming week, scheduling which days each subteam will be in the shop, and discussing anything else that comes up. If needed, we’ll kick the students off the call and have a mentors-only meeting after we’ve finished the agenda.
Over the course of the week, depending on what the goals are, we schedule meetings for each subteam to go into the shop. Some of the students can talk about the COVID guidelines and rules we’ve set for the team, students, and mentors, but we’ve worked hard to ensure we’re always adhering to all state and federal guidelines.
Most subteams will have a couple of students in the shop working and the rest of the subteam on a Zoom call so we can include everyone in all of our progress. Currently we’ve shifted a lot of focus to giving the programming subteam as much time with the robot as possible, so they’ve been meeting regularly with mechanical going in to do tasks, robot improvements, and maintenance as needed.
Some tools we use to keep everything on track:
Thanks to one of our awesome mentors, Greg McGurrin (an expert at project management), we’ve started to implement some tools to keep everything moving in the right direction. Greg and a few student leads are currently working to develop a weekly report template that the student team leads will use to more formally set weekly goals and then report progress at the student team leads and mentors meeting, and to the full team when necessary. I’ll have Greg and Maddie share those when we get them to a point we feel comfortable sharing.
The second major tool I’ve been pushing to utilize is a Gantt chart. I’m a big fan of these and actually use them in my day-to-day life to track work, personal, and robotics projects and goals. We’re still working on polishing this up, but once it’s in a good place I’ll have Maddie or Hallie share it here.
All in all, this season has been challenging so far, but I’m proud of the progress we’ve made. To make up for this rather boring post about team management, I figured I’d share some videos of the progress we’ve made on a few of the @ home challenges!
Galactic Search (no intake on, though)
Bounce Path - Autonav
Slalom Path - Autonav
Run 2 (my favorite)
Props to the programming team. Looking solid!
I know I’m a bit late to the party, but I’ve been mired in nostalgia lately watching our recap videos from last year. It was really great fun teaming up with you guys in Northern CT. Your bot ran as beautifully as it looks. Congrats on a well-deserved banner!
I know I know, the title is cliché but it’s been a hectic third week of build/work season and I’m running out of ideas ALREADY. I probably just need some sleep, but oH WeLL.
In the shop, unfortunately, there is still only 25% building capacity following COVID precautions so there can only be about four people in the shop at any one time and one person per every other room.
1/31/2021 - The Game Design Challenge team discussed changes to the auto/tele-op/endgame structure for our game, a cooperation mission between the teams, which will include some sort of additional scoring location on the opposite side of each alliance’s driver stations, and how climbs will affect the endgame and robot interactions during that time. We will continue with students creating basic sketches of the field this week and starting rough drafts of the required documentation. -Jonathan M.
2/3/2021 - The Game Design Challenge team finalized the general endgame tasks, but we still need to figure out the specifics of how the climbing mechanism will work and robot interactions during the endgame. We also divided the match time into separate auto/tele-op/endgame time frames. The team broke into small groups to discuss, in more depth, the physical dimensions of the game elements, which included initial CAD sketches with the help of the mechanical team, who also provided their opinions on the game and the design of the game elements and pieces. The other group went further into developing our theme by giving names to game pieces and elements that fit the overarching theme. We decided where we want the cooperation task between the teams to be located, but still need to go into further specifics.
We are planning to continue developing the field sketch in CAD, continue to develop the theme, and start to focus on game balancing (points, penalties, etc.). -Jonathan M.
1/31/2021 - Quick summary from tonight:
- Katie reviewed the executive summary questions and edited them to make sure the same info isn’t presented twice, etc. Maddie and Michelle will review this week to get down to the right number of characters.
- Aryan and Anne have a draft of WF nomination ready for review.
- For 2/7, will have a rough draft of CA essay ready for review.
- Once essay is in review process, will start outlining CA interview/presentation.
- Need to start focusing at the same time on prepping the tech and game design teams for doing virtual interviews/presentations.
2/2/2021 - Here are some pictures of our first blue nitrile wheel! The powder coat chipped a little so we may need to touch up. Tomorrow we’ll be done with the rest, and we’ll try rivets on the remaining two so we have a way to compare and possibly save some effort in future years. Thanks to Maddie, AJ, Shirish, and Jack for doing all the work and making this go so smoothly! -Cameron E.
Definitely didn’t poorly edit out some of the paint chips in these photos with a mouse to make it look prettier… wh- why would I do that? PFFT, definitely NOT ME though.
2/3/2021 - Powder coated. - Luke
2/3/2021 - All 6 wheels ready to go! Spacers were a bit tight, so they are being reprinted; some students are planning to come in tomorrow afternoon to get them installed before programming tests them in the evening.
2/4/2021 - I helped replace all the wheels on the drive today with Lizzy, Shirish, Maddie, and Jack. - Alisha
2/4/2021 - So on arrival tonight the compressor was not remounted, and when we went to do that, we found it doesn’t quite fit between the wheels any more; the new ones are notably bigger, roughly 6 1/4" instead of 5 3/4". - Brett Bonner
2/4/2021 - For CAD and Design, we have been working on designing a new sheet-metal robot cart with some students to allow them to gain knowledge and experience for next year. We only started meeting with these four sophomores last Monday (2/1/2021) and are at the stage where these students are observing the process first; they will begin working on their design in the upcoming week or two. - Me (Hallie)
2/4/2021 - Also, the intake is undergoing a bit of a redraw because we still need to add a spot for the motor (probably a 775 Pro for more power) and gearbox. Overall the basics of the design will stay the same though but more will be added to it. I will try to post some more pictures of it by the middle of next week (if procrastination doesn’t get the best of me first lol). - Me (Hallie)
2/3/2021 - In order to ensure we can accurately hit the target from all of the zones, the software team did a ton of testing to characterize the shooter behavior at different flywheel speeds. We started at our standard 6000 RPM from each zone, and established that (yes) we can mostly hit the outer port but (no) we can’t reliably hit the inner port. - Jonah B.
2/3/2021 - At each hood position, we then manually measured the optimal flywheel speed for a variety of distances through trial and error. We can plot those on a graph and use polynomial regression to fit a quadratic function to each one (see below). The x-axis is the distance from the center of the robot to the target in inches and the y-axis is flywheel speed in RPM. Purple is the wall shot, red is the initiation line, and blue is the trench. These data points represent the maximum range where we could reliably hit the target in each hood position. A few notes: there’s a fairly wide overlap in the ranges for line and trench, so we’ll probably put an arbitrary cutoff between 180 and 200 inches when selecting which hood position to use. On the opposite side, the wall shot data was unexpected. We can fairly reliably hit the inner port from several feet back using low RPMs (even if that isn’t incredibly useful). Unfortunately, it’s extremely difficult to make shots between about 90 inches and 120 inches. That’s not somewhere we’re likely to shoot from, but worth noting. - Jonah B.
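For anyone curious what fitting a quadratic to measured points looks like in code, here’s a minimal sketch of least-squares quadratic regression via the normal equations. The class name is hypothetical and the sample points in the usage below are made up for illustration; this is not our actual analysis code or our measured shooter data.

```java
// Sketch: least-squares fit of y = a*x^2 + b*x + c to measured points,
// e.g. flywheel RPM (y) vs. distance in inches (x). Hypothetical helper,
// not the team's actual tooling.
public class QuadraticFit {
    /** Returns {a, b, c} by solving the 3x3 normal equations. */
    static double[] fit(double[] x, double[] y) {
        double s0 = x.length, s1 = 0, s2 = 0, s3 = 0, s4 = 0;
        double t0 = 0, t1 = 0, t2 = 0;
        for (int i = 0; i < x.length; i++) {
            double xi = x[i], xi2 = xi * xi;
            s1 += xi; s2 += xi2; s3 += xi2 * xi; s4 += xi2 * xi2;
            t0 += y[i]; t1 += xi * y[i]; t2 += xi2 * y[i];
        }
        // Normal equations for coefficients [a, b, c]:
        // [s4 s3 s2][a]   [t2]
        // [s3 s2 s1][b] = [t1]
        // [s2 s1 s0][c]   [t0]
        double[][] m = {{s4, s3, s2}, {s3, s2, s1}, {s2, s1, s0}};
        double[] v = {t2, t1, t0};
        // Gaussian elimination with partial pivoting.
        for (int col = 0; col < 3; col++) {
            int piv = col;
            for (int r = col + 1; r < 3; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[piv][col])) piv = r;
            double[] tmpRow = m[col]; m[col] = m[piv]; m[piv] = tmpRow;
            double tmpVal = v[col]; v[col] = v[piv]; v[piv] = tmpVal;
            for (int r = col + 1; r < 3; r++) {
                double f = m[r][col] / m[col][col];
                for (int c = col; c < 3; c++) m[r][c] -= f * m[col][c];
                v[r] -= f * v[col];
            }
        }
        // Back substitution.
        double[] coeffs = new double[3];
        for (int row = 2; row >= 0; row--) {
            double sum = v[row];
            for (int c = row + 1; c < 3; c++) sum -= m[row][c] * coeffs[c];
            coeffs[row] = sum / m[row][row];
        }
        return coeffs;
    }
}
```

With enough measured (distance, RPM) points per hood position, evaluating the fitted polynomial at the current distance gives the target flywheel speed.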
2/3/2021 - After creating these models, we did a quick test by shooting from a previously untested distance in the line hood position, using the calculated RPM. The ball bounced out, but the freeze frame says it all. - Jonah B.
2/4/2021 - For software work tonight we took it out as we were just driving, but we’ll need a solution before we can shoot anything. Short term, it could mount in the right-hand climber space - until we decide to put a climber back on. It could still go between the wheels if supported about 1" below the bottom of frame rails since the mounting ears are the widest part, but that sort of bottom protrusion probably isn’t ideal (pics below). Also, we’re having some clearance issues between the right-hand pneumatics tank & the hopper which I don’t remember having before, or maybe just hadn’t noticed - may need another custom low-profile clip mount as we have on the left, or something similar. (The right-hand tank has a standard mount, which is taller.) - Brett Bonner
This section is just primarily for challenge videos but I may post some photos in the future.
Robot motor go BRRRRRRRRR
Thanks my friend! We had a great time working with y’all! Hopefully next time we can get those last two wins!
You have NO IDEA how much we wanted that!
absolutely dying at this Hallie, look at this graph will never not be funny
Bahahha facts though, iconic meme moment
The previous few posts have included some information about what the 6328 software team has been working on, but we thought it would be worth going into a little more detail about our adventures with motion profiling.
Our motion profiling system is built on the standard functionality in WPILib. We wrote a custom command which wraps the normal RamseteCommand. This gives us an easier API for setting up the profiles and means we don’t have to define all of the constraints for each one; there’s only one configuration per robot.
For each path, we wrap the motion profiling command in a command group. We do this for a couple of reasons - first, breaking each path into a separate file makes them easier to manage. Also, we can easily add in functionality before and after the profile. Before running, we set the odometry position using our coordinate system (with the origin in the upper right of the field near A11). Because we don’t care about the ending position, we set the ending velocity of the profile as high as possible and chain a command to brake after - the robot can stop quite quickly so we don’t hit a wall. The odometry can’t track accurately through that kind of skidding, but exact positioning doesn’t matter at the end of the profile.
When iterating quickly through versions of the trajectories, testing on the robot can get quite cumbersome. We’ve been using a path visualization tool put together by @came20 . Here’s some more information on that system from him:
I wrote this tool a couple of years ago as a student on FRC Team 401. We were getting into advanced trajectory following (this was before spline trajectories and the Ramsete controller were part of WPILib) using Team 254’s code as a reference. I needed a way to quickly visually examine trajectories before running them on a robot to ensure they had no malformed sections (such as wheels reversing due to tight turns, forgetting to set a trajectory as reversed, etc.), check that constraints were being applied correctly, and verify waypoint geometry.
The tool generates the trajectory using WPILib’s trajectory generator, and then plots the resulting path on the screen. It then uses inverse kinematics to draw the paths of the left and right wheels, and colors these paths on a gradient based on the maximum speed throughout the path. This “heatmap” visualization makes it very easy to see which parts of the path are accelerating or decelerating, and to check that centripetal and region constraints are being applied correctly. The tool also provides the ability to “simulate” the path by drawing a “robot” (black square) which traverses the path at real-time speed. This isn’t a true simulation in that it does not account for any sort of physics; it purely draws what the trajectory is commanding the robot to do. This is useful for getting a visual sense of how the robot should behave, and for identifying areas where the robot appears to be doing something physically impossible. Along with this real-time display, some statistics about the current pose and timestamp are drawn in the upper left corner. The tool uses Swing (the built-in Java GUI API) to draw the graphics, and it was built very rapidly, so not much work was put into making it pretty.
The code for the visualizer can be found in my “FRCKit” library, a work in progress to bring tools like this, as well as simulation and other utility libraries, to more FRC teams. The project is in no way release-ready or even well organized yet, but nevertheless the source code can be found here
As an example, here’s a video from the visualizer of a trajectory we tried for the barrel racing path:
Our plan for Galactic Search is to define the four possible profiles ahead of time and select one to follow when the run starts using vision (more details on that in the coming weeks). Our work thus far has been to start defining each of those profiles. One of the questions we had to answer was whether to start each profile from the same place (which simplifies the vision pipeline) or try to optimize each starting location (potentially saving valuable time). As a proof of concept, we visualized two trajectories for each path - one with a center starting position and another with an optimized off-center starting position. We tested using a cubic spline with waypoints on each power cell, and tuned the starting directions of each. The results of our testing are below. There are also videos included, where left is off-center and right is centered.
- Centered = 4.7589 secs
- Off-center = 4.6807 secs
- Improvement = 0.0782 secs, 1.6%
- Centered = 4.8171 secs
- Off-center = 4.5945 secs
- Improvement = 0.2226 secs, 4.6%
- Centered = 4.6224 secs
- Off-center = 4.2806 secs
- Improvement = 0.3418 secs, 7.4%
- Centered = 4.3433 secs
- Off-center = 4.2065 secs
- Improvement = 0.1368 secs, 3.2%
Given the short length of these profiles, our conclusion is that saving even a fraction of a second is worth the extra complexity. Also note that the center starting positions are still tuned to different angles, so if we used a single starting position (including direction) the discrepancy would be even greater.
For several of the paths (slalom and barrel racing) the robot needs to maneuver in a circle around the waypoints. For our initial testing, we tried to define waypoints such that they formed a continuous curve:
That isn’t terrible, but it’s also clearly not the optimal route. We’ve been doing some work to instead define perfectly circular profiles. Unfortunately, WPILib’s trajectory generator doesn’t support something like this on its own, so we needed to give it a little (a lot) of help. All of this functionality is part of our motion profiling command I linked previously if you’re interested in the details. The path itself is defined by hundreds of waypoints spaced a tenth of an inch apart in a circle. That forms a trajectory which looks something like this:
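As a rough illustration of the waypoint-generation step, here’s a sketch of producing densely spaced points around a circular arc. The class and method names are hypothetical, and the spacing and sign conventions here are illustrative rather than our exact implementation.

```java
// Sketch (not our exact code): generate waypoints around a circle,
// roughly a tenth of an inch apart along the arc, so the trajectory
// generator is forced to trace a circular path.
import java.util.ArrayList;
import java.util.List;

public class CircleWaypoints {
    /** Points on a circle from startDeg to endDeg, ~spacingIn apart. */
    static List<double[]> generate(double centerXIn, double centerYIn,
                                   double radiusIn, double startDeg,
                                   double endDeg, double spacingIn) {
        double arcLength = radiusIn * Math.toRadians(Math.abs(endDeg - startDeg));
        int count = Math.max(2, (int) Math.ceil(arcLength / spacingIn) + 1);
        List<double[]> points = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            double angle = Math.toRadians(
                startDeg + (endDeg - startDeg) * i / (count - 1));
            points.add(new double[]{
                centerXIn + radiusIn * Math.cos(angle),
                centerYIn + radiusIn * Math.sin(angle)});
        }
        return points;
    }
}
```

For a 30-inch-radius full circle at 0.1-inch spacing this produces close to two thousand points, which is why the post-generation curvature fix described below matters; the generator sees each tiny segment as nearly straight.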
Unfortunately, the red color coding indicates that both sides are moving at the same maximum velocity, which a) doesn’t satisfy the centripetal acceleration constraint we defined and b) isn’t how to drive in a circle. On a real robot, you get something like this (https://youtu.be/M1k2Q-ZSMdE). It tries to correct the position, but goes way off course. The issue is that the trajectory generator thinks the curvature of this section is 0 (it’s completely straight), meaning it doesn’t properly apply constraints or compute wheel speeds. Since we know the properties of this circle, our solution was to manually adjust the curvature of each point after the trajectory was generated, which results in this:
That’s a big improvement, but there are still some issues. First of all, the curvature changes abruptly going into the circle, which means the wheels have to accelerate very quickly. We actually found that this wasn’t a major issue in testing, and the drivetrain seems to behave reasonably. However, the larger issue is that our constraints are still not being obeyed, so we’re exceeding the maximum velocity and centripetal acceleration (those are all still calculated using a curvature of 0). The solution? Apply a custom constraint within the circle. To provide maximum velocities and accelerations, this constraint passes calls through to all of the standard constraints that we apply to the whole path. However, it runs the calculations using the corrected curvature. The constraint also determines the maximum chassis velocity that will prevent the outer wheel from exceeding its maximum velocity. Once the curvature is fixed after generation, all of the constraints are obeyed properly:
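The velocity math behind a curvature-aware constraint like this boils down to two caps: keep the outer wheel under its speed limit, and keep centripetal acceleration (v² · κ) under its limit. Here’s a minimal sketch of just those formulas; the class name is made up, and this is not our actual constraint implementation.

```java
// Sketch of the velocity limits a curvature-aware constraint enforces.
// Hypothetical helper, not the team's actual constraint class.
public class CircleConstraintMath {
    /**
     * Max chassis velocity (m/s) at a given curvature (1/m) such that
     * (a) the outer wheel stays under maxWheelVel and
     * (b) centripetal acceleration v^2 * k stays under maxCentripetalAccel.
     */
    static double maxVelocity(double curvature, double maxWheelVel,
                              double maxCentripetalAccel, double trackWidth) {
        double k = Math.abs(curvature);
        // The outer wheel moves faster than the chassis by a factor of
        // (1 + k * trackWidth / 2), so divide the wheel limit by that.
        double outerWheelLimit = maxWheelVel / (1.0 + k * trackWidth / 2.0);
        if (k < 1e-9) {
            return outerWheelLimit; // straight line: no centripetal limit
        }
        // From v^2 * k <= a_max, the centripetal cap is sqrt(a_max / k).
        double centripetalLimit = Math.sqrt(maxCentripetalAccel / k);
        return Math.min(outerWheelLimit, centripetalLimit);
    }
}
```

If the generator is handed a curvature of 0 (as with the straight-looking dense waypoints), both caps collapse to the straight-line wheel limit, which is exactly the bad behavior described above.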
Below are some examples of running these profiles on the robot. This is the first time we ran with proper constraints: the circle looks good but the positioning is a little off. We were able to address that with further tuning.
Here’s another example on the barrel racing path:
We’ll keep everyone updated as we make more improvements in all of these areas, and we’re happy to answer any questions.
Programming team shared a couple videos, just wanted to post them quick!
Super excited about this, hoping we’ll see an 8 ball and maybe more!
As much fun as it would be to think that this just worked the first time, to be fair we should also share the blooper reel! A lot of the useful learning happens when you’re trying to figure out why things aren’t working the way you want them to. (Which I know you know!)
Thanks for sharing your experience and the impressive autonomous code! I do have a question about the CirclePath class. Can you explain a little more about two parameters in the constructor, startingRotation and endingRotation? The comment in the code does not help much. I have some difficulty determining what values should be used for barrel and slalom racing if CirclePath is used. I see you are using 0 and -180 for the first circle of the barrel path, and -160 and 160 for the circle in the slalom path.
Sorry for the confusion on that. As is perhaps obvious, we haven’t spent much time cleaning up this code. The starting and ending rotation parameters set the position along the circumference of the circle to place the start and end points. Those angles are perpendicular to the robot’s direction of travel. A starting rotation of 0° would place the first point directly above the center while 90° would be directly to the left. Here’s a diagram of our configuration for the slalom path, traveling counterclockwise:
The angle of the robot is determined based on the direction of the path. Note that the initial pose for the above circle would be at -70° instead of -160°. Also, this method means switching from clockwise to counterclockwise fundamentally changes the trajectory. The two directions look like this…
…and NOT this.
Hopefully this helps, and let me know if you have any more questions.
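To make the placement convention concrete, here’s a tiny sketch of where a rotation value lands on the circumference under the rule described above (0° directly above the center, 90° directly to the left). The class and method names are hypothetical; this is not the actual CirclePath code.

```java
// Sketch (not the real CirclePath code): map a rotation parameter to a
// point on the circle, using the convention that 0 deg is directly
// above the center and 90 deg is directly to the left.
public class CirclePointMath {
    /** Returns {x, y} of the point at rotationDeg on the circumference. */
    static double[] pointAt(double centerX, double centerY,
                            double radius, double rotationDeg) {
        double theta = Math.toRadians(rotationDeg);
        return new double[]{
            centerX - radius * Math.sin(theta), // 90 deg -> left of center
            centerY + radius * Math.cos(theta)  // 0 deg -> above center
        };
    }
}
```

Under this convention, a barrel-path circle from 0° to -180° starts above the center and ends below it, covering half the circumference.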
Thanks for the detailed explanation. It makes perfect sense now. For the barrel path, the robot is on the circle path for half of the circle (e.g. 0 to -180), correct? Can you also explain how you integrate FRCKit into your project?
Yes, on the barrel racing path the robot follows around half of the circle. We found that it’s slightly faster to create a normal spline on the other side (as opposed to a full circle) since we don’t need to worry about navigating tightly around the marker. In general, we try to use the circles only when it’s necessary to maintain a certain distance to the marker. I’ll let @came20 answer the FRCKit question.
FRCKit is still very much in development, and I haven’t decided on the best way to make it available to teams in an easy way yet (this could take quite a while). Nevertheless, here are some instructions to get you on the right path to using the path visualizer:
I publish FRCKit library files to my own Maven repository for ease of development. Thus, you’ll need to modify your build.gradle file to talk to my server. See here in our 2020 code for how to do that.
Once this is done, you’ll need to pull in the dependency for the visualizer. This line does that (note that it goes in the dependencies block that should already exist in your build.gradle). Note that the latest version at the time of writing is 0.0.30, so I’d recommend switching out the version that’s shown in that line to 0.0.30.
The visualizer is used by creating another main method somewhere in your project which launches the visualizer. VS Code should let you run this method straight from the IDE once you’ve created it. From there, it’s just a matter of creating a TrajectoryVisualizer object, passing it the parameters it asks for (the constructor is documented, so you can just look at that for the parameters needed), and calling start on the object you created.
Please let me know if you have any questions or trouble getting this working!
Thanks for the reply! We will give it a try. The visualization looks cool.
We got the visualization working. The students love it. Your instructions were easy to follow, and we used the sample code from Team 6328 to test it. One PC complained about the VC++ runtime. We used version 28 but failed with version 30 because a data type changed. After that, it works well. FYI, our team uses Talon FX, Pigeon, and meters as units. After some tweaks, we got Team 6328’s motion profile wrapper to work. Thank you all for the help.