FRC 4481 Team Rembrandts | 2024 Build Thread | Open Alliance

Driver Practice

Over the last 4 days we’ve been doing lots of driver practice. We started earlier than normal this season, since we believe that driver practice will be very important in this year’s game and will give us a slight advantage over other teams. In total, we’ve logged about 45 minutes of officially recorded practice (close to 1.5 hours off the record). During these sessions, we recorded all cycles and logged the number of cycles so we can calculate the cycle times.

Recorded Cycle Times

Below, I've added a graph of the cycle times over the last 4 days. At first glance, you might find it strange that there is a large variance between data points. Most peaks / slow cycles were caused by a software or hardware failure, or by notes not staying intact (more on that soon…)

Cycle time (2-7-2024 till 2-10-2024)

We were able to get close to 10-12 second cycles. However, keep in mind that we only have 3/4 of a field, so on a real field full-field cycles would most likely have been around 13 seconds. Check out the video below for a compilation of the cycles we did today:

What have we learned so far?

Since our goal is to reduce cycle time while maintaining scoring accuracy, we are actively looking at each part of the cycle, and we ask ourselves where in the cycle we can save the most time. There are two key points we're going to focus on in the upcoming days to optimize the cycles even more.

Collecting notes
Firstly, it’s pretty hard to see whether the robot is aligned with notes on the other side of the field. We could add a driver feedback camera, but we’re not planning to do that soon because of the streaming delay and because it might be uncomfortable for the driver to use. As an alternative, we are going to add LED feedback on the robot soon, so the driver can react faster when a note is picked up.
Second, we didn’t have wheel guards on the alpha bot the last few days. We noticed pretty quickly that it’s super easy to accidentally drive over a note with the front wheels. This causes the note to get stuck on the wheels while it is being intaked, which leads to notes being shredded instantly. Much truth below (unfortunately)… We have added the wheel guards recently, and they seem to fix the issue!

Scoring notes
We’ve also noticed that shooting notes currently takes too long. Most of the time is spent on revving up the wheels and on the auto-align software, which is not yet tuned. Revving the wheels is mostly a mechanical issue, since the friction on the top rollers is relatively high. We’re going to reduce the friction by reducing the number of belts from 3 to 1. Software is currently working on shooting while driving. The code for this is almost done, and hopefully the robot will slowly evolve into a driving turret over the upcoming days.

Future Steps

We will continue doing a lot of driver practice and keep track of the cycle times. We will also do more match simulations and add defense bots. We will send more updates about driver practice soon, so stay tuned!

Written by:
@Bjorn - Lead 3DM (Data Driven Decision Making)

35 Likes

Thank you so much. Have you found lower FPS to be acceptable? We also were having some difficulty calibrating the camera with the PhotonVision chessboard. After calibrating, the 3D tracking kept seeing the 3D AprilTag as being several centimeters off of the plane of the physical AprilTag. Additionally, we are seeing a lot of latency in the PhotonVision camera stream despite being on the lowest stream resolution and were wondering how we can mitigate it.

We had second thoughts about our code release and decided not to release it for now. We prioritize openness and want to inspire and help people on their way, but we think that releasing our entire code does not contribute to this goal. We believe that the raw code does not tell the full story about why we have done certain things and why we have chosen certain routes. We are eager to answer any questions about our software, and we think that answering them in this thread or in a DM, with the explanation behind them, will help out more than just handing over the code.

3 Likes

ALPHA ROBOT REVEAL 2024 (MEME EDITION :sunglasses:)

:warning: Seizure warning :warning:


Carefully crafted by:

Mr. Weekend
me

54 Likes

How do you go about calculating what angle you are at from the goal, given you know your Pose on the field?

We considered two options for this. The first was to look at the AprilTag under the speaker with the Limelight and rotate the robot until the AprilTag was in the middle of the view. This is the same as how you would aim at retroreflective tape in previous years. For this method you don’t need 3D AprilTag localization and vector calculations, so it should be simpler and more reliable.

The second method is calculating the angle to the goal based on the pose estimation of the robot. This pose estimation is a combination of odometry and the pose we get from the Limelight when it is in 3D mode. This method is a bit more complex. However, we need this accurate localization for autonomous anyway, so we decided to use it for automatic aiming in teleop as well, so we don’t have to switch between 2D retroreflective-tape-style aiming and 3D localization on the go.
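
For reference, this kind of odometry + vision fusion can be done with WPILib’s SwerveDrivePoseEstimator. The snippet below is a simplified sketch (the Limelight helpers are hypothetical placeholders), not our exact code:

    import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.wpilibj.Timer;

    // Constructed once in the drivetrain subsystem: fuses wheel odometry with
    // vision poses, with WPILib weighing the two sources internally.
    SwerveDrivePoseEstimator poseEstimator = new SwerveDrivePoseEstimator(
            kinematics,              // SwerveDriveKinematics of the drivetrain
            gyro.getRotation2d(),    // current gyro heading
            getModulePositions(),    // SwerveModulePosition[] of all modules
            new Pose2d());           // initial pose guess

    // Called every loop: advance the estimate with odometry, then mix in the
    // Limelight 3D pose whenever one is available.
    public void periodic() {
        poseEstimator.update(gyro.getRotation2d(), getModulePositions());

        if (limelightSeesTag()) {                              // hypothetical helper
            Pose2d visionPose = getLimelightBotPose();         // hypothetical helper
            double latencySeconds = getLimelightLatency();     // hypothetical helper
            poseEstimator.addVisionMeasurement(
                    visionPose, Timer.getFPGATimestamp() - latencySeconds);
        }
    }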

The vector math to determine the angle is done in the code using Translation2d objects. We first determine the vector from the robot to the goal (v) by subtracting the vector of the robot (u) from the vector to the speaker (w). This is done using the minus() method of the Translation2d objects. Another way to see this is that if you add up vectors u and v, you get vector w again. Using the getAngle() method, we can then determine the angle of vector v with respect to the X axis (alpha). We then subtract the current rotation of the robot from this angle alpha to get the angle of the robot with respect to the speaker.
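
In code, that looks roughly like the sketch below (variable names are illustrative; robotPose comes from the pose estimator and speakerTranslation from the field layout):

    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Rotation2d;
    import edu.wpi.first.math.geometry.Translation2d;

    /** Angle the robot has to turn to face the speaker (illustrative names). */
    public Rotation2d getAngleToSpeaker(Pose2d robotPose, Translation2d speakerTranslation) {
        Translation2d u = robotPose.getTranslation(); // vector of the robot
        Translation2d w = speakerTranslation;         // vector to the speaker

        // v = w - u, the vector pointing from the robot to the speaker
        Translation2d v = w.minus(u);

        // alpha = angle of v with respect to the field X axis
        Rotation2d alpha = v.getAngle();

        // Subtract the robot's current rotation to get the remaining aiming error
        return alpha.minus(robotPose.getRotation());
    }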

I hope this answers your question.

23 Likes

Do you think that localization error would significantly impact this? My team planned on using localization-based speaker alignment, but switched to aligning to the tag itself w/ crosshairs and tx so our shooting would still work if our localization was off. IIRC this is actually what Limelight recommended, but I can’t seem to find where, so I could be wrong.

Could you not run the more reliable retroreflective-style aiming while also running localization? To my knowledge, as long as you see the tag, you should be able to get the tx value, which you can feed into a PID loop for aligning.
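
Something like this rough sketch (untested; the gains are placeholders and it assumes the default “limelight” NetworkTables table):

    import edu.wpi.first.math.controller.PIDController;
    import edu.wpi.first.networktables.NetworkTableInstance;

    // Constructed once; gains are placeholders that would need tuning.
    private final PIDController aimController = new PIDController(0.02, 0.0, 0.001);

    /** Angular velocity command that drives the Limelight tx offset to zero. */
    public double getAimCommand() {
        // Horizontal offset (degrees) from the crosshair to the tag.
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tx").getDouble(0.0);
        return aimController.calculate(tx, 0.0);
    }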

Our localization isn’t perfect, so this very well may be the better option for you. I’d be curious to see how accurate this method is across different robots.

Fairly certain you can indeed see both at the same time. But we need accurate localization for our autonomous either way. So if those measurements are off, we have bigger problems to worry about.

That being said, we might still include it as a fallback option.

5 Likes

I can’t speak for the LL team but I think this was more a recognition that maintaining accurate odometry throughout the match is difficult and takes significant tuning. So teams that only need to aim at the goal should use the much easier to implement “2D aiming style”. The accuracy should be the same.

Your localization errors should be fixed when looking at the tags (especially 2 of them), so it not being perfect when getting bumped or traveling through the middle of the field doesn’t matter.

2 Likes

They did it at the end of the season last year.

Creating an alt account to post this is a bit disingenuous…

15 Likes

Hey Liam_9999, welcome to the community and great to hear you’ve been a huge fan so far!

We have no secrets and share every thought and design choice. We’re here to help everyone! Let us know if you have a specific question.

As enthusiastic members of the Open Alliance, we value transparency and collaboration in our STEAM journey. Our philosophy, deeply rooted in the concept of “teaching to fish” rather than “giving the fish,” emphasizes empowering teams through the sharing of our design processes and ideas.

While we occasionally share partial CAD files or code snippets and have done so in the past, we typically do not release complete designs. Our aim is to encourage teams to engage with the engineering challenges independently, fostering innovation and critical thinking skills. This approach not only aligns with our educational ethos but also enhances the learning experience for all involved.

We will release our full CAD files and code at the end of the build season to provide a comprehensive resource for reflection and learning. We believe this method allows teams to understand the underlying principles of our designs and apply these insights in their future projects.

This release at the end of the build season, in combination with the ability to ask any question you have in our thread, is what counts for us as open CAD and open code.

65 Likes

So we are finally starting to get good data from the AprilTags, but we were seeing that we needed to add an offset multiplier (6%) in the code versus what PhotonVision was giving us. This is our first real year of using PhotonVision. Are you guys seeing something similar? Also, we are using printed AprilTags and have yet to put up our vinyl stickers.

1 Like

We have noticed that the flatness and angle of the AprilTags is really important. To make sure they are as flat as possible, we taped them to a piece of PVC before putting them on the field. This also makes attaching them to the field easier. We double-check the angle with a spirit level to make sure everything is straight. Also double-check that the surface you are mounting them to is angled correctly.

Also, make sure your calibration is correct. The checkerboard should cover around 90% of the view. To be sure, we also used a white background and made sure the lighting was even. With around 30 photos we got pretty decent results.

3 Likes

Jumping in for some unsolicited advice:

  • especially if you’re comparing multi-tag results to real-world measurements, make positively sure that your setup is perfect.
  • make sure you’re actually measuring what you think you are. Chris’ comment here is a great summary of what really belongs in our docs
  • try designing a science experiment to isolate where error might be coming from. Think about the confounding factors you’ve got at play and how you can isolate them.

The PVC tag idea is also pretty slick! Have y’all considered spray adhesive as another way to ensure flatness?

3 Likes

Keep in mind that they may not be this accurately located on the competition fields. Realistically they shouldn’t be far off, but the field tolerances are hardly “tight.”

4 Likes

We haven’t thought about gluing them, but that does sound like a good idea. What was nice about taping, though, was that we could attach the tag corner by corner and retry if it wasn’t perfectly flat.

1 Like


Shooting on the Move

As we enter the sixth week of build season, we are looking more and more into optimizing our performance as much as possible.

During auto testing and driver practice, we noticed that we were losing considerable time to stopping, shooting and accelerating. Since we want to minimize our cycle time as much as possible, cutting down the time spent on these actions is crucial.

That’s why our software sub-department started developing a system for shooting while moving. To do this, we need to compensate in two dimensions.

Aiming Steps

To determine the dimensions in which we need to compensate, it’s good to first take a step back and look at the steps required for auto aiming:

  1. Make sure the Shamper faces the speaker
  2. Determine the distance to the speaker
  3. Determine setpoints based on this value

For steps 1 and 2, we need to compensate separately. We compensate by adding an offset to the current position based on the robot velocity. This essentially mimics the robot being in the future, at the moment when the Note exits the Shamper.
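
In simplified form, that offset could look like the sketch below (the lookahead time is an assumed constant, and fieldSpeeds are the field-relative chassis speeds of the drivetrain; not our exact code):

    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.kinematics.ChassisSpeeds;

    /** Position the robot will roughly have when the Note leaves the Shamper. */
    public Translation2d getFuturePosition(Pose2d robotPose, ChassisSpeeds fieldSpeeds) {
        // Assumed time between "now" and the Note leaving the Shamper.
        double lookaheadSeconds = 0.3;

        // Shift the current position forward in time by velocity * lookahead.
        return robotPose.getTranslation().plus(
                new Translation2d(fieldSpeeds.vxMetersPerSecond * lookaheadSeconds,
                                  fieldSpeeds.vyMetersPerSecond * lookaheadSeconds));
    }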

Current Implementation - Cartesian Coordinates

Our current implementation is based on the Cartesian plane, with the x dimension facing directly away from the blue driver station, and the y dimension facing to its left.

  • The simulated distance from the speaker to the robot is influenced by the movement in the x direction.
  • The simulated angle offset of the robot looking at the speaker is influenced by movement in the y direction.

This works as a crude estimation, but the system falls apart when used at different angles to the speaker.

Future Implementation - Polar Coordinates

The obvious – and mathematically correct – improvement would be to switch the system to polar coordinates. This changes the (x, y) coordinates to (r, φ), with r representing the length of the vector from the speaker to the robot, and φ representing the angle between that vector and the normal vector of the speaker.

This way, the auto aiming can be influenced more accurately:

  • The simulated distance from the speaker to the robot is influenced by the movement in the r direction.
  • The simulated angle offset of the robot looking at the speaker is influenced by movement in the φ direction.
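
A rough sketch of that decomposition, again with illustrative names (the returned x component would drive the simulated distance, the y component the simulated angle offset):

    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.kinematics.ChassisSpeeds;

    /** Splits the field-relative velocity into a radial (r) and tangential (phi) part. */
    public Translation2d getPolarVelocity(Pose2d robotPose, Translation2d speakerTranslation,
                                          ChassisSpeeds fieldSpeeds) {
        // Vector from the speaker to the robot; its length is r.
        Translation2d speakerToRobot = robotPose.getTranslation().minus(speakerTranslation);

        // Rotate the field-relative velocity into the speaker frame, so that x becomes
        // the radial velocity (changes the distance) and y the tangential velocity
        // (changes the angle offset by roughly tangentialVelocity * lookahead / r).
        Translation2d fieldVelocity = new Translation2d(
                fieldSpeeds.vxMetersPerSecond, fieldSpeeds.vyMetersPerSecond);
        return fieldVelocity.rotateBy(speakerToRobot.getAngle().unaryMinus());
    }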

Test Videos


Written by:
@Nigelientje - Lead Outreach / Software Mentor
@Jochem - Lead Software
@Casper - Software Mentor

30 Likes

Good to know on the flatness, thank you so much. We currently taped them to our field element that was left over from 2019. Luckily we have a ShopSabre, so we are cutting mounting plates today. We’ll let you know how it changes when we get them mounted.

Also, I totally agree that we are going to have to calibrate them to the red and blue side of the field at our events.


1 Like

How do you guys find the velocity your robot is moving at?

1 Like