If so, join us at 7:00 PM on Wednesday, April 27 in Room AC265 of the America’s Center at the FIRST Championship for our session on “Integrating Computer Vision with Motion Control”.
Here’s the abstract:
Often the hardest part of solving an FRC computer vision challenge is figuring out how to integrate a camera-based vision algorithm with closed-loop control to automatically point, steer, or drive a mechanism (or the entire robot!). This presentation walks through techniques and best practices that can be employed to mitigate issues like latency, imperfect cameras, and simplified vision algorithms to achieve lightning-fast, precise, and robust control.
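To give a flavor of the latency-mitigation idea mentioned in the abstract: one common approach is to buffer timestamped robot headings, look up where the robot was pointing when the camera frame was actually captured, and re-express the target bearing relative to the current heading. This is only a minimal sketch under assumed names (`PoseBuffer`, `aim_setpoint` are hypothetical, and a real system would interpolate full 2D poses, not just heading), not the presenters' actual implementation:

```python
from bisect import bisect_left

class PoseBuffer:
    """Buffer of (timestamp, heading) samples used to compensate for
    camera latency. Hypothetical helper for illustration only; real
    implementations typically track full 2D poses."""

    def __init__(self):
        self.samples = []  # sorted list of (time_sec, heading_rad)

    def add(self, t, heading):
        self.samples.append((t, heading))

    def heading_at(self, t):
        # Linearly interpolate between the two samples bracketing time t.
        times = [s[0] for s in self.samples]
        i = bisect_left(times, t)
        if i == 0:
            return self.samples[0][1]
        if i >= len(self.samples):
            return self.samples[-1][1]
        t0, h0 = self.samples[i - 1]
        t1, h1 = self.samples[i]
        frac = (t - t0) / (t1 - t0)
        return h0 + frac * (h1 - h0)

def aim_setpoint(buffer, now, capture_time, target_bearing_in_frame):
    """The camera saw the target at `capture_time`, but the robot may have
    rotated since then. Compute the field-relative bearing using the
    heading at capture time, then express it relative to the current
    heading to get a turret/drivetrain turn setpoint."""
    heading_then = buffer.heading_at(capture_time)
    heading_now = buffer.heading_at(now)
    field_bearing = heading_then + target_bearing_in_frame
    return field_bearing - heading_now
```

For example, if the robot rotated 0.5 rad between frame capture and now, a target that appeared 0.2 rad to the left in the old frame now requires turning -0.3 rad, which is what the lookup-then-re-express step produces.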
This will be presented by Jared and Tom from Team 254, who (together with Austin from Team 971) brought you Motion Planning and Control for FRC last season. We had a great response (and a packed house!) last year, and have incorporated your feedback to make this session even better. To ensure the session is approachable for all levels of experience, we plan on focusing on concrete FRC problems (like auto-aiming a shooter) while showing how concepts like understanding how your camera works, being smart about tuning your vision algorithms, utilizing kinematics, and applying motion control best practices can make your solution better…we want to help teams compete while also opening your students’ eyes to how professional roboticists approach similar problems.
And the great news is that all Championship Conference talks will be recorded this year, so you don’t need to choose between this session and Karthik’s…you can have both!
Thanks Jared. Hopefully I’ll be able to attend. Last year’s motion profile course was terrific. We were able to incorporate the concepts in an off-season project and this year’s robot. I look forward to learning more about vision control.
YES! This is awesome news. Will they be put up on YouTube or something similar?
There were a whole lot of great-looking conference sessions, and since our team didn’t make it to Champs I was very tempted to fly out anyway just to experience them.
If anyone has any particular questions on this topic, feel free to post them here. We think we have a good idea of the problems most FRC teams run into, but it’s always good to learn a bit more. We will do our best to address these topics in our presentation!
Do you know if the speaker will be fitted with an on-person mic that feeds electronically into the recorder? Audio quality from acoustic pickup from an on-camera mic at the back of the room is close to inaudible for those of us with aging ears.
When to use vision and when not to use vision. Last year, we had a mentor work on tracking the yellow totes for a couple of weeks and we didn’t end up using it at all.
However, this year you need vision tracking if you have any hope of shooting a high goal in autonomous.
That’s a strategy problem. You have to choose a task to focus on first, and then decide how to use vision to augment or automate performing that task.
I am looking forward to this talk though. I don’t know what questions I’ll have yet but I know that I’m going to have some.
My team’s robot doesn’t have a functioning vision system, and yet still fires a high goal shot in auto. We used a hard-coded position to aim our turret from the spy box.
While this is not a strategy talk first and foremost, we will most certainly discuss this. In particular, we will look back at the history of vision in FRC and you will quickly notice how often the vision challenge is a diversion for the majority of teams.
I think it’s interesting that Stronghold seemed to present the “perfect storm” of factors that made vision a viable and extremely valuable asset for a larger-than-ever subset of teams.
I thought I’d share a clip we took today prepping for this conference presentation. We will definitely be going through all the pieces it takes to get a robot tracking like this!