Integrating Computer Vision with Motion Control - FIRST Championship Conference

Are you having a tough time coordinating vision with motion?

Does buggy vision make your robot grasp at things that aren’t there?

Do you wish your robot had superhuman powers of perception and control?

If so, join us at 7:00 PM on Wednesday, April 27 in Room AC265 of the America’s Center at the FIRST Championship for our session on “Integrating Computer Vision with Motion Control”.

Here’s the abstract:

Often the hardest part of solving an FRC computer vision challenge is figuring out how to integrate a camera-based vision algorithm with closed-loop control to automatically point, steer, or drive a mechanism (or the entire robot!). This presentation walks through techniques and best practices that can be employed to mitigate issues like latency, imperfect cameras, and simplified vision algorithms to achieve lightning-fast, precise, and robust control.

This will be presented by Jared and Tom from Team 254, who (together with Austin from Team 971) brought you Motion Planning and Control for FRC last season. We had a great response (and a packed house!) last year, and have incorporated your feedback to make this session even better. To ensure the session is approachable for all levels of experience, we plan on focusing on concrete FRC problems (like auto-aiming a shooter) while showing how concepts like understanding how your camera works, being smart about tuning your vision algorithms, utilizing kinematics, and applying motion control best practices can make your solution better…we want to help teams compete while also opening your students’ eyes to how professional roboticists approach similar problems.

And the great news is that all Championship Conference talks will be recorded this year, so you don’t need to choose between this session and Karthik’s…you can have both!

Thanks Jared. Hopefully I’ll be able to attend. Last year’s motion profile course was terrific. We were able to incorporate the concepts in an off season project and this year’s robot. Look forward to learning more about vision control.


Sounds cool, I’m fairly new to programming but hoping to learn about vision tracking. Can’t wait!

Does anyone have a time machine?

Otherwise this is going to be a tough choice.

Will it be recorded in case I don’t make it to this session?

As OP said, the sessions will be recorded this year - I’m planning on watching one of the two as well :stuck_out_tongue:

I should really read better. ::ouch:: ::rtm::

YES! This is awesome news. Will they be put up on YouTube or something similar?

There were a whole lot of great looking conferences and since our team didn’t make it to Champs I was very tempted to fly out anyway just to experience the sessions.

If anyone has any particular questions on this topic, feel free to post them here. We think we have a good idea of the problems most FRC teams run into, but it’s always good to learn a bit more. We will do our best to address these topics in our presentation!

Our main problem is how long it takes to process an image. By the time you process the image, the robot is somewhere else.
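One common way to mitigate this (a sketch, not necessarily what the presenters will show) is latency compensation: timestamp each frame, keep a short history of robot headings from your gyro, and when a vision result finally arrives, look up where the robot was pointing *when the frame was captured* rather than where it is now. All names below are hypothetical, and this assumes you have synchronized timestamps between the camera and the robot code:

```python
import bisect

class HeadingHistory:
    """Rolling buffer of (timestamp, heading) samples for latency compensation."""

    def __init__(self, max_samples=100):
        self.times = []
        self.headings = []
        self.max_samples = max_samples

    def add(self, t, heading):
        """Record the current gyro heading; call this every control loop."""
        self.times.append(t)
        self.headings.append(heading)
        if len(self.times) > self.max_samples:
            self.times.pop(0)
            self.headings.pop(0)

    def heading_at(self, t):
        """Linearly interpolate the robot heading at an earlier time t."""
        i = bisect.bisect_left(self.times, t)
        if i == 0:
            return self.headings[0]
        if i == len(self.times):
            return self.headings[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        h0, h1 = self.headings[i - 1], self.headings[i]
        frac = (t - t0) / (t1 - t0)
        return h0 + frac * (h1 - h0)

def goal_heading(history, capture_time, bearing_to_target):
    """Field-relative heading to aim at: the heading the robot had when the
    frame was captured, plus the target bearing measured in that frame."""
    return history.heading_at(capture_time) + bearing_to_target
```

The key idea is that the stale vision measurement is still valid relative to the *old* pose, so the aiming setpoint it implies stays correct even while the robot keeps moving.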

Thanks for recording. I’ll be robot inspecting, and will miss the Wednesday conferences.

We will DEFINITELY deal with this one :slight_smile:

That is great news.

Do you know if the speaker will be fitted with an on-person mic that feeds electronically into the recorder? Audio quality from acoustic pickup from an on-camera mic at the back of the room is close to inaudible for those of us with aging ears.

young ears too :slight_smile:

the recordings will be a nice treat post-finals week. thank you for putting this together!

One question that I hope you will cover is:

When to use vision and when to not use vision. Last year, we had a mentor work on tracking the yellow totes for a couple of weeks and we didn’t end up using it at all. :confused:

However, this year you need vision tracking if you have any hope of shooting a high goal in autonomous.


That’s a strategy problem. You have to choose a task to focus on first, and then move on to how vision can augment or automate performing that task.

I am looking forward to this talk though. I don’t know what questions I’ll have yet but I know that I’m going to have some.

My team’s robot doesn’t have a functioning vision system, and yet still fires a high goal shot in auto. We used a hard-coded position to aim our turret from the spy box.

Not sure, but I’ll bring this up.

While this is not a strategy talk first and foremost, we will most certainly discuss this. In particular, we will look back at the history of vision in FRC and you will quickly notice how often the vision challenge is a diversion for the majority of teams.

Most likely, I won’t be able to attend the conference (either session at 7pm), but I do have a question and a few examples of what I mean.

What are some rules of thumb for the prerequisites to attempting vision?

  • Great PID Control (fast settle, little/no overshoot) or “Good Enough” PID Control
  • Level of precision on drive train turning or (auton) “drive straight” distance offsets
  • For high-load situations, is a braking mechanism necessary?
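On the “good enough PID” question: a useful self-test is whether a bare proportional(-integral-derivative) loop can turn your drivetrain to a commanded angle with small overshoot and fast settling, because vision aiming ultimately just feeds a setpoint into exactly this kind of loop. A minimal sketch (hypothetical gains; every constant here must be tuned per robot):

```python
class SimplePID:
    """Textbook PID loop; dt is the fixed control period in seconds."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        """Return the motor output for one control cycle."""
        error = setpoint - measurement
        self.integral += error * self.dt
        if self.prev_error is None:
            derivative = 0.0  # no derivative on the first cycle
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a turn-to-angle loop running at 50 Hz with placeholder gains.
turn_pid = SimplePID(kp=0.05, ki=0.0, kd=0.005, dt=0.02)
```

If a loop like this can’t hold an angle from a gyro setpoint, closing it around a noisier, laggier vision measurement will only be harder.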

I think it’s interesting that Stronghold seemed to present the “perfect storm” of factors that made vision a viable and extremely valuable asset for a larger-than-ever subset of teams.

Looking forward to this talk!

I thought I’d share a clip we took today prepping for this conference presentation. We will definitely be going through all the pieces it takes to get a robot tracking like this!

Tom and Jared, will you guys be discussing the realities of being able to process frames faster than the camera can produce them?

For example, a camera’s maximum frame rate is listed at 30 frames per second, but your vision processing board can process over 80 frames per second.
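One pattern for that situation (a sketch, not the presenters’ method) is to key off the camera’s frame timestamp and simply skip the pipeline when the grab returns the same frame you already processed, so the fast processor’s spare cycles go to the control loop instead of re-crunching a stale image. This assumes your capture API reports a timestamp that only changes when a genuinely new frame arrives:

```python
class FrameDeduplicator:
    """Drops frames whose timestamp matches the previous one, so a processor
    that outruns the camera doesn't re-run the pipeline on a stale image."""

    def __init__(self):
        self.last_ts = None

    def is_new(self, ts):
        """Return True exactly once per distinct frame timestamp."""
        if ts == self.last_ts:
            return False
        self.last_ts = ts
        return True

# Usage: at 30 fps the camera repeats timestamps between new frames; an
# 80 fps processing loop only does real work when is_new() returns True.
dedup = FrameDeduplicator()
grabs = [0.0, 0.0, 0.033, 0.033, 0.033, 0.066]
processed = [t for t in grabs if dedup.is_new(t)]
```

The leftover loop iterations can also be used to extrapolate the last result forward (e.g. with the latency-compensation idea) rather than sitting idle.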