How Do You Intake Game Pieces Outside of Line of Sight?

As operator, I’ve noticed my team taking too long to intake game pieces outside of our line of sight, whether obscured by a field element or our own robot. Our workflow for these situations is as follows:

  • I watch camera feed until OpenCV sees a note (I have to ensure there are no false positives from red bumpers or notes in the source)
  • Once OpenCV sees a real note, I yell “spotted”
  • Driver holds a button that automatically drives to the note and intakes it

Since the camera FPS is slow, latency is high, and OpenCV only recognizes notes when the bot is quite close to them, there’s only a very small window of time in which we see a note before our own driving takes it out of the camera’s line of sight. This problem is compounded by my and the driver’s reaction times: I have to realize a real note has been spotted, and he has to act on my callout. Once a note does exit the line of sight, I have to tell the driver which way to turn and adjust to see it again, and this process is again slowed by reaction times. Altogether, this process feels clunky and time-consuming.

This feels like it’d be a common problem, so I’d like to know how other teams deal with it.
Thanks!

3 Likes

From what I understand, a big pain point in your system is actually your vision processing.

You mention OpenCV, so I assume you’re using a custom vision system and not Limelight or PhotonVision. Something worth thinking about is whether you would cycle faster if this were a manual process.

  • Would it be faster to not have any processing on the camera?
  • Would it be faster to not have a camera?

If you come to the conclusion that manually intaking these is slower, then the next question is how to make your vision more reliable.

  • Would tuning the vision system help?
  • Would using a different OpenCV model help?
  • Would using a different vision system entirely help?

There is no one right solution to this problem but hopefully answering some of these questions will help your team figure out what works for them.

1 Like

There’s a few ways you can make this easier without any fancy vision processing:

  • Touch it, own it: design your intake such that even the slightest brush with the game piece causes it to be drawn into your robot
  • Maximize intake width to reduce the need for driver accuracy
  • LEDs to indicate when your robot senses it has control of the game piece
  • If using a controller with vibration, vibrate the controller when your robot senses it has control of the game piece (a sketch combining this with the LED idea follows this list)
  • Use a driver camera pointed at the intake side of the robot to eliminate the driver’s blind spot
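For the LED and rumble bullets, here’s a minimal sketch of that feedback in WPILib Java. It assumes a beam-break sensor on DIO 0 detects the game piece and an addressable LED strip on PWM 9; the ports, strip length, and beam-break polarity are all assumptions to adjust for your robot:

```java
import edu.wpi.first.wpilibj.AddressableLED;
import edu.wpi.first.wpilibj.AddressableLEDBuffer;
import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.GenericHID.RumbleType;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.XboxController;

public class Robot extends TimedRobot {
  // Hypothetical ports; change to match your wiring.
  private final DigitalInput m_beamBreak = new DigitalInput(0); // blocked when a note is in the intake
  private final XboxController m_driver = new XboxController(0);
  private final AddressableLED m_led = new AddressableLED(9);
  private final AddressableLEDBuffer m_buffer = new AddressableLEDBuffer(30);

  @Override
  public void robotInit() {
    m_led.setLength(m_buffer.getLength());
    m_led.start();
  }

  @Override
  public void teleopPeriodic() {
    // Many beam breaks read false when blocked; flip this if yours is wired the other way.
    boolean hasNote = !m_beamBreak.get();

    // Rumble both sides while the robot holds the game piece.
    double rumble = hasNote ? 1.0 : 0.0;
    m_driver.setRumble(RumbleType.kLeftRumble, rumble);
    m_driver.setRumble(RumbleType.kRightRumble, rumble);

    // Solid green when we have the note, off otherwise.
    for (int i = 0; i < m_buffer.getLength(); i++) {
      if (hasNote) {
        m_buffer.setRGB(i, 0, 255, 0);
      } else {
        m_buffer.setRGB(i, 0, 0, 0);
      }
    }
    m_led.setData(m_buffer);
  }
}
```

The point is that the driver gets a “got it” cue without ever looking away from the robot.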
14 Likes

I think this is very possible.
Do you get this skill just from practice, or is there anything more specific?

1 Like

Practice goes a long way, but there are a lot of ways to make things easier on the drivers. Something we did in past years was to have a camera plugged into the RIO with no vision processing software running, so instead of having to wait for any vision processing to happen, the driver could just manually drive to the game piece while using the camera as his eyes.
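For reference, a raw, unprocessed driver camera like that takes only a few lines with WPILib’s CameraServer; the camera name, device index, resolution, and FPS below are placeholders to tune for your own stream and bandwidth budget:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  @Override
  public void robotInit() {
    // Start a raw MJPEG stream from a USB camera on the RIO; no vision processing at all.
    UsbCamera intakeCam = CameraServer.startAutomaticCapture("Intake Cam", 0);
    // Keep resolution and FPS modest so the stream stays under the field bandwidth limit.
    intakeCam.setResolution(320, 240);
    intakeCam.setFPS(30);
  }
}
```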

1 Like

If you have an under-the-bumper intake, put the grabby part as close to the outside of the frame perimeter as possible. That gives you more reach, especially with bumpers attached.

Initially our intake was directly in the middle of our chassis. The idea was to make it an equal-access intake from either direction. But if the roller didn’t have good pressure, we wouldn’t grab well against walls or field elements. The size of our chassis meant that, with bumpers, the note just barely started to touch the rollers if it was against a hard wall.

Picking up midfield was no issue. A student added some additional rollers near the outside edge and it was an instant game changer for both midfield and in hard to see spots against walls/field elements.

Our drivers rarely go after obscured game pieces and hate using a driver (or operator) camera, although I remember one year when almost everything was obscured, and another year when the driver drove blind in auto.

We do what has been suggested here: robust intake, driver camera, and the operator pressing buttons when the operator is the one who knows what to do.

If you are trying to optimize your current scheme, it’s hard to make suggestions without more details. Some example possible tweaks:

If you are using the camera to compute a turn-to-note angle and the camera is slow, then use the camera to set the setpoint for a PID controller on the gyro angle; the gyro loop runs much faster and more accurately.
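A minimal sketch of that pattern in WPILib Java, assuming your existing code already supplies a gyro yaw and a camera-reported yaw offset to the note (the method names and PID gains here are hypothetical placeholders):

```java
import edu.wpi.first.math.MathUtil;
import edu.wpi.first.math.controller.PIDController;

public class TurnToNoteHelper {
  // Placeholder gains; tune on your drivetrain.
  private final PIDController m_turnPid = new PIDController(0.02, 0.0, 0.001);
  private boolean m_hasTarget = false;

  public TurnToNoteHelper() {
    m_turnPid.enableContinuousInput(-180.0, 180.0); // headings wrap around
    m_turnPid.setTolerance(2.0); // degrees
  }

  /** Call only when the (slow) camera produces a fresh note measurement. */
  public void onNewCameraMeasurement(double gyroYawDegrees, double noteYawOffsetDegrees) {
    // Convert the camera-relative offset into a heading setpoint once,
    // then let the fast gyro loop do the tracking between camera frames.
    m_turnPid.setSetpoint(gyroYawDegrees + noteYawOffsetDegrees);
    m_hasTarget = true;
  }

  /** Call every robot loop (50 Hz); returns a rotation command in [-0.5, 0.5]. */
  public double calculateTurnOutput(double gyroYawDegrees) {
    if (!m_hasTarget) {
      return 0.0;
    }
    return MathUtil.clamp(m_turnPid.calculate(gyroYawDegrees), -0.5, 0.5);
  }
}
```

The slow camera only refreshes the setpoint; the 50 Hz robot loop keeps running calculateTurnOutput() against the gyro, so the turn stays smooth between camera frames and even after the note leaves the frame.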

If the driver needs quicker feedback on how to turn, then use the joystick rumbles to indicate left or right.
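As a hedged sketch of what that could look like: buzz only the side of the controller the driver should turn toward, scaled by how far off the note is. The 30-degree scaling and the sign convention of the camera’s yaw offset are assumptions:

```java
import edu.wpi.first.wpilibj.GenericHID.RumbleType;
import edu.wpi.first.wpilibj.XboxController;

public class TurnCue {
  private final XboxController m_driver = new XboxController(0);

  /**
   * noteYawOffsetDegrees is a hypothetical camera output: negative when the
   * note is to the robot's left, positive when it is to the right.
   */
  public void update(boolean noteVisible, double noteYawOffsetDegrees) {
    if (!noteVisible) {
      m_driver.setRumble(RumbleType.kLeftRumble, 0.0);
      m_driver.setRumble(RumbleType.kRightRumble, 0.0);
      return;
    }
    // Stronger buzz the farther off-axis the note is, capped at full strength at 30 degrees.
    double strength = Math.min(1.0, Math.abs(noteYawOffsetDegrees) / 30.0);
    m_driver.setRumble(RumbleType.kLeftRumble, noteYawOffsetDegrees < 0 ? strength : 0.0);
    m_driver.setRumble(RumbleType.kRightRumble, noteYawOffsetDegrees > 0 ? strength : 0.0);
  }
}
```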

There might be an opportunity to speed up OpenCV’s note identification. Do you need better lighting on the floor to lower your exposure? Can you lower the exposure and increase the gain? Do you need a fast, monochrome global-shutter camera instead of a color rolling-shutter one? Are there steps in the vision pipeline that don’t add value? Are you missing steps in the vision pipeline that could add value?
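If you do experiment with exposure, WPILib’s cscore exposes those camera controls directly from robot code. The values below are placeholders to tune against your own camera and lighting; gain is only adjustable on some cameras, typically through camera-specific driver properties:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  @Override
  public void robotInit() {
    UsbCamera visionCam = CameraServer.startAutomaticCapture("Vision Cam", 0);

    // Lock exposure low to cut motion blur and avoid long sensor integration times.
    // The right values depend on your camera and lighting; tune while watching the stream.
    visionCam.setExposureManual(10);
    visionCam.setBrightness(30);
    visionCam.setWhiteBalanceManual(4500);
    // Gain, if your camera exposes it, is usually adjusted through camera-specific properties.
  }
}
```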

Are you overlapping camera frame acquisition with object detection (two separate threads for vision)?
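If not, here is a minimal sketch of that producer/consumer split in WPILib Java, assuming your existing OpenCV detection can be wrapped in a detectNotes() method (a hypothetical name):

```java
import java.util.concurrent.atomic.AtomicReference;

import org.opencv.core.Mat;

import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;

public class PipelinedVision {
  private final AtomicReference<Mat> m_latestFrame = new AtomicReference<>();

  public void start() {
    // Attaches to the camera started elsewhere with CameraServer.startAutomaticCapture().
    CvSink sink = CameraServer.getVideo();

    Thread grabber = new Thread(() -> {
      Mat frame = new Mat();
      while (!Thread.currentThread().isInterrupted()) {
        if (sink.grabFrame(frame) != 0) {
          // Publish a copy so the detector never reads a half-written frame.
          m_latestFrame.set(frame.clone());
        }
      }
    }, "frame-grabber");

    Thread detector = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        Mat frame = m_latestFrame.getAndSet(null); // take the newest frame, drop stale ones
        if (frame != null) {
          detectNotes(frame); // hypothetical: your existing OpenCV pipeline goes here
          frame.release();
        } else {
          try {
            Thread.sleep(5);
          } catch (InterruptedException e) {
            return;
          }
        }
      }
    }, "note-detector");

    grabber.setDaemon(true);
    detector.setDaemon(true);
    grabber.start();
    detector.start();
  }

  private void detectNotes(Mat frame) {
    // Placeholder for the team's actual OpenCV note detection.
  }
}
```

Grabbing in one thread and detecting in another means a slow detection pass never blocks frame capture, and the detector always works on the freshest frame instead of a growing backlog.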

I would look at whether you can intake manually for pieces out of sight. Do you have some feature/superstructure/etc. on your robot that would allow the driver to gauge the distance without needing to hold the button to automatically intake?

As a drive coach, I watch our drivers’ reactions, and they never look down at cameras. There are two reasons why:

  1. They have to completely change their perspective when driving. Things appear to move faster on a camera because it’s looking at objects up close. You lose field perspective while monitoring the camera, and it takes a second to regain it when you look back up.

  2. Now that most robots are holonomic and can move in any direction, most controls are field-centric. This becomes a nightmare when looking down at a camera, because that perspective is robot-centric. If you are on the center line trying to pick up a piece on that center line while looking at the camera, your instinct is to push forward to move to the piece. But that will send you toward a driver station. You really have to push left or right to get the piece instead, and it’s super jarring to even think about how you must react to what the camera shows.
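To make that mismatch concrete, here is a small WPILib Java sketch of the two control mappings. Switching to the robot-relative branch while the driver is watching the camera is one workaround some teams try; it is not something described in this post:

```java
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class DriveInputMapper {
  /**
   * Maps the same joystick inputs either field-relative (normal driving) or
   * robot-relative (while the driver is looking at the intake camera), so
   * "push forward" always matches what the driver is currently seeing.
   */
  public ChassisSpeeds map(double xSpeedMps, double ySpeedMps, double omegaRadPerSec,
                           Rotation2d robotHeading, boolean usingCamera) {
    if (usingCamera) {
      // Robot-relative: forward on the stick is forward out of the camera.
      return new ChassisSpeeds(xSpeedMps, ySpeedMps, omegaRadPerSec);
    }
    // Field-relative: forward on the stick is away from the driver station.
    return ChassisSpeeds.fromFieldRelativeSpeeds(xSpeedMps, ySpeedMps, omegaRadPerSec, robotHeading);
  }
}
```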

And on top of the latency issues (I remember about a half-second of latency on streams; maybe it’s better now) and the MJPEG streaming quality and resolution (bandwidth limit), it’s tough to have a real-time streaming camera. (If we had x264/x265 streams for higher quality and lower bandwidth, it might be better, but then you need to worry about encoding latency.)

One of the things we tried this year to counteract the streaming problems was to put the video stream on a screen mounted on the robot and look at it directly. We took a 7” Pi screen connected to the Pi’s HDMI and showed the intake camera stream live on the robot. Since you are on the robot side of the radio, you get full stream bandwidth through the network switch, so it was super fast and responsive. We ran into power issues from battery dips during the match, and the screen was too small and dim at full-field distances, especially with the protective shield on the front of the screen. We still think it would be cool to have live diagnostic data on the screen for testing and pit checks, but we never got that far.

This picture is small, but you can see the screen running on it. I never took a better picture while it still worked :frowning:

We have different ideas we want to try next season.

For us, historically, this is the success formula.

Using driver practice or software techniques to overcome the limits of the intake is somewhere between “extremely difficult” and “impossible”.

1 Like

Agreed.

You need enough robot under your driver to do well. By extension you need enough robot under your software to do well.

You can’t fix poor robot architecture decisions in software.

This often means building a simpler, more robust bot unless you are one of the few teams out there with such a high software and controls ceiling that there is always more to extract within code. Those teams know exactly who they are; if you are questioning whether you are one of them… sorry, you are not. Build simpler and have fewer functions working at 100%.

2 Likes