Integrating Computer Vision with Motion Control - FIRST Championship Conference

Too cool for school!


I didn’t know you guys had ILM as a sponsor. :wink: I kid.

Seriously cool and I’m really looking forward to this talk now.

Some additional questions.
What is the minimum frame rate you feel is acceptable for FRC?
How much lag is acceptable, and how do you cope with it?
What is the most efficient way to get targeting data into the roboRIO (TCP, UDP, serial, etc.)?

I think it will be addressed, but I would be curious to hear exactly how much of 254's control is done by the drivers versus by vision. What are your failsafes if the vision board suddenly crashes/shorts/explodes?

Also, what other sensors do you combine with vision for a more complete picture of your control system? For example, if you want to know the distance to a target, are you using only a camera, or are there other sensors you use to cross-check your measurements?

And, specifically on your robot, how do you relay to the drivers that everything is in position for a shot? Is there a giant green check mark, does the system just not let you fire until it’s ready, is the firing also controlled autonomously, or do you have a different solution?

The operator holds a button saying they want to auto aim, and the driver holds a button saying that it’s okay to shoot. Everything else is autonomous.
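The two-button handoff described above could be sketched roughly like this (the class, field, and method names are hypothetical illustrations, not 254's actual code):

```java
// Hypothetical sketch of the two-button handoff: the operator requests
// auto-aim, the driver confirms it's okay to shoot, and everything else
// (tracking, spin-up, feeding) runs autonomously.
public class ShooterGate {
    boolean aimRequested;  // operator's auto-aim button
    boolean fireConfirmed; // driver's "okay to shoot" button
    boolean onTarget;      // reported by the vision/turret loop

    boolean shouldAutoAim() {
        return aimRequested;
    }

    boolean shouldFire() {
        // Fire only when the operator wants to aim, the driver has
        // confirmed, and the aiming loop reports on-target.
        return aimRequested && fireConfirmed && onTarget;
    }
}
```

The point of gating on all three conditions is that neither human can fire early: the shot only happens once the autonomous aiming loop agrees.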

Hmm - including sensing on the intake and ball pre-positioning?

The intake also has some automation, but no, I was speaking about the aiming and shooting subsystem.

Do you guys find that it’s easier to accurately position the turret than it is to turn the drivetrain? That gif is really impressive, and I’m wondering how much of that is the turret being mechanically easier to control. Do you do any motion profiling on it, or is it just PID?

I’m looking forward to your talk!

The turret is just running position PID (onboard a CAN Talon SRX). Turrets are much easier to control than a drivetrain (inertia, friction, and skidding are basically non-issues).
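For teams newer to this, a bare-bones position PID update looks something like the sketch below. On the robot described above the loop runs onboard the Talon SRX firmware; this off-board version just illustrates the math, and the gains are made-up placeholders, not tuned values:

```java
// Minimal position-PID sketch. Gains are illustrative placeholders;
// real values come from tuning on the actual mechanism.
public class TurretPid {
    static final double KP = 4.0;  // proportional gain (assumed)
    static final double KI = 0.0;  // integral gain (assumed)
    static final double KD = 0.2;  // derivative gain (assumed)
    static final double DT = 0.01; // 100 Hz control loop period, seconds

    double integral = 0.0;
    double lastError = 0.0;

    // Returns a motor command given the desired and measured turret angle.
    double calculate(double setpointDeg, double measuredDeg) {
        double error = setpointDeg - measuredDeg;
        integral += error * DT;
        double derivative = (error - lastError) / DT;
        lastError = error;
        return KP * error + KI * integral + KD * derivative;
    }
}
```

Each loop iteration you would feed `calculate` the vision-derived setpoint and the turret encoder reading, then send the result to the motor. A turret is a friendly plant for this: low inertia, no wheel slip, so plain PID with a decent encoder usually converges quickly.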

I assume you are using the camera to generate the set-point. Are you closing the loop with the camera as well, or are you using some other sensor like a gyro or encoder?

Cannot wait for the talk! My team has struggled implementing vision in the past, and I am sure this conference will help a bunch! Once the video is posted, can someone share the link below?

I’m sorry if I missed it, but do you know where the recordings of these seminars will be made available?

^ Where will the recordings be located???

Last year’s might have been posted elsewhere, but it was on

Let’s keep in mind that these presenters are on top-tier teams that likely need their support over the next couple of days as they compete. Last year’s video was posted about 5-6 days after champs ended, and while I can’t speak for the presenters, I would assume you can expect it sometime in the next week or so, with a link posted in this thread. I wouldn’t want to try to upload that large a video over hotel internet myself.

That being said, if you haven’t watched last year’s video, or if it has been a while, you can pass the time with the 2015 video, which was also very awesome and applicable. I’d argue that for the majority of teams it is maybe even more applicable than this year’s.

Sadly, I wasn’t able to attend the session since I had to fly out Thursday morning. When a video is available, if someone could post a link, I would appreciate it.


One way or another I will make sure a recording of this session (of sufficiently high quality) gets posted online. We will just redo it from home (and answer the questions that we received in person) if necessary.

Thank you. Very much appreciated.

Slides are here: