Ideas for robot avoidance with computer vision

Hello!

I’ve been watching quite a few matches lately, and I’ve noticed that robots often seem to get stuck mid-field because it is hard to see them from the driver station. My team has all the code down for things like auto-aim and gear placement, so now I am experimenting with other automatic programs to help with teleop.

I’m thinking about working on some robot-detection/obstacle avoidance to help our robot speed down to the other end. This program has to use only one camera (no stereo!) and cannot use any sort of distance sensor. It is also going to be done with OpenCV.

I’m thinking that I could possibly map out the floor (for example, the height of bumper contours could indicate how far away they are), and then the program would check whether those contours intersect with the path directly in front of the bot. The path directly in front of the bot would be represented by a contour (basically a quad that is narrower farther from the bot than it is up close).
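Roughly what I’m picturing, as an untested Python/OpenCV sketch. The trapezoid coordinates assume a 640x480 frame, the blocked-fraction threshold is a guess, and it assumes you already have a binary mask of obstacle pixels from some other step:

```python
import cv2
import numpy as np

# Hypothetical trapezoid describing the path directly ahead of the robot,
# in image coordinates: wide at the bottom of the frame, narrow toward the
# "horizon". These numbers assume a 640x480 frame and would need tuning.
PATH_QUAD = np.array([[150, 280], [490, 280], [620, 470], [20, 470]], dtype=np.int32)

def path_is_blocked(obstacle_mask, blocked_fraction=0.05):
    """Return True if enough obstacle pixels fall inside the path trapezoid.

    obstacle_mask: single-channel binary image (255 = obstacle pixel),
    e.g. the output of cv2.inRange() on bumper colors.
    """
    path_mask = np.zeros(obstacle_mask.shape[:2], dtype=np.uint8)
    cv2.fillPoly(path_mask, [PATH_QUAD], 255)

    overlap = cv2.bitwise_and(obstacle_mask, path_mask)
    path_area = cv2.countNonZero(path_mask)
    return cv2.countNonZero(overlap) > blocked_fraction * path_area
```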

Any ideas on the algorithms or particular functions that could be used to make a program like this?

Also I’ve made a tutorial for using the roboRIO with the BeagleBone (But this can also be applied to the raspberry pi): http://einsteiniumstudios.com/using-the-roborio-with-the-beaglebone.html

Thanks for any feedback,
Alek

I haven’t started any coding on these ideas, but I have some approaches I intend to look at.

  1. Robots have bumpers. If there’s a big blue or red patch down at the bottom of your screen, there’s probably a robot attached to it. If you can find a white patch inside a red or blue patch at the right height, you definitely have a robot.

That’s a “quick and dirty that might work” solution.

For extra credit, try recognizing numbers. If there are numbers near the floor, it’s definitely a robot. Recognizing numbers is hard, but there is at least some built-in support in OpenCV. I don’t know quite the extent of it.
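Here’s an untested Python/OpenCV sketch of the bumper idea. The HSV ranges, minimum blob area, and the “lower half of the frame” cutoff are all guesses you’d have to tune on real match footage (note that red hue wraps around 0, so it needs two ranges):

```python
import cv2
import numpy as np

def find_bumper_boxes(frame_bgr, min_area=800):
    """Return bounding boxes of red/blue blobs in the lower half of the frame.

    HSV ranges and area cutoff are rough guesses; tune on real footage.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    blue = cv2.inRange(hsv, (100, 120, 60), (130, 255, 255))
    # Red hue wraps around 0, so combine two ranges.
    red = cv2.inRange(hsv, (0, 120, 60), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 60), (180, 255, 255))
    mask = blue | red

    # Only trust blobs in the lower half of the image, since bumpers sit
    # near the floor.
    mask[: frame_bgr.shape[0] // 2, :] = 0

    # OpenCV 3 returns (image, contours, hierarchy); OpenCV 4 returns (contours, hierarchy).
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]

    boxes = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            boxes.append(cv2.boundingRect(c))  # (x, y, w, h)
    return boxes
```

A next step along the “white patch inside the colored patch” line would be another cv2.inRange() for low-saturation, high-value pixels restricted to each box.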

  2. Structure from motion.
    This one is a lot harder, as in a whole lot harder, except that I know OpenCV has some built-in support for it. Compare two images, match up points. If a whole lot of them are moving coherently, it’s a robot.
    This one has all sorts of peril attached to it, but I’ve taken some online (MOOC) classes where I’ve done a few exercises on the subject. The practice exercises weren’t that hard, but they were also trivial examples, not nearly as difficult as the real-world problem you will have to solve to recognize a robot. (A rough sketch of the matching idea follows after this list.)

  3. Localization-based solutions

    Instead of just picking out reflective tape, learn to recognize the other vision targets on the field. There are all sorts of fixed points on that field. There are lines drawn on the field, with corners. There are walls, and rails, and hoppers. All of them are at known, fixed locations. You can use them to determine your robot position (using solvePnP) at any time; a sketch of that is below as well.
    Now, knowing where you are, you can know what is supposed to be on the floor in front of you. If you are in a space where there is supposed to be carpet, and instead there’s an unrecognizable thing with lots of contours that don’t match any of the known targets, it’s a robot.
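Here is an untested sketch of the matching idea from item 2. It fits the dominant motion between two frames (the background shifting because your own camera moved) with a RANSAC homography and treats the leftover matches as a possible independently moving object. That is a simplification, the feature count and RANSAC threshold are guesses, and it will struggle with robots moving parallel to you:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def moving_points(prev_gray, curr_gray, min_matches=20):
    """Match ORB features between two grayscale frames, fit the dominant
    motion with a RANSAC homography, and return the matched points that do
    NOT fit it. A tight cluster of such outliers is a candidate moving robot."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []

    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return []

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography inliers are (roughly) the static background moving with the
    # camera; outliers are features that moved some other way.
    _, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    if inlier_mask is None:
        return []
    return [tuple(pts2[i][0]) for i, ok in enumerate(inlier_mask.ravel()) if not ok]
```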
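And for item 3, a minimal solvePnP sketch, assuming you already have a camera matrix from calibration and the pixel coordinates of at least four landmarks whose field coordinates you know. Every number below is a placeholder:

```python
import cv2
import numpy as np

# Hypothetical calibration (from cv2.calibrateCamera) -- placeholder values.
camera_matrix = np.array([[700.0, 0.0, 320.0],
                          [0.0, 700.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Field coordinates (meters) of four known landmarks, e.g. target corners or
# line intersections -- placeholder values.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.5, 0.0, 0.3],
                          [0.0, 0.0, 0.3]], dtype=np.float32)

def robot_position(image_points):
    """Estimate the camera's field position from the pixel locations of the
    landmarks above. image_points: 4x2 float32 array, in the same order."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    # Camera position expressed in field coordinates.
    return (-rot.T @ tvec).ravel()
```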

As I said, those are conceptual approaches that I intend to work on Real Soon Now, so I have no experience actually making any of them work on a real problem.

As an even more trivial example than David Lame’s, it should be possible to just look at the color of an area of pixels. Green? Floor. Not green? Not floor. Since robots are reasonably low, some basic trig would give you a distance estimate. The colored lines might cause some problems but should be easy enough to ignore.
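Untested sketch of that, assuming a flat floor, a camera at a known height with a known downward tilt, and no lens distortion. The HSV range and camera constants are made up and would need to be measured or tuned:

```python
import math
import cv2
import numpy as np

# Hypothetical camera geometry -- measure these on your robot.
CAMERA_HEIGHT_M = 0.50    # lens height above the carpet
CAMERA_PITCH_DEG = 10.0   # tilt below horizontal
FOCAL_PX = 700.0          # vertical focal length in pixels (from calibration)

def first_non_floor_row(frame_bgr, min_floor_fraction=0.7):
    """Scan up from the bottom of the frame; return the first row that is
    mostly NOT carpet-colored, or None if everything looks like floor."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    carpet = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # tune for your carpet
    h, w = carpet.shape
    for row in range(h - 1, h // 2, -1):
        if np.count_nonzero(carpet[row]) < min_floor_fraction * w:
            return row
    return None

def ground_distance_m(row, frame_height):
    """Rough flat-floor distance to whatever sits at this image row."""
    cy = frame_height / 2.0
    angle_below_horizontal = math.radians(CAMERA_PITCH_DEG) + math.atan2(row - cy, FOCAL_PX)
    if angle_below_horizontal <= 0:
        return float("inf")  # at or above the horizon
    return CAMERA_HEIGHT_M / math.tan(angle_below_horizontal)
```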

Solomon from 2898, the Flying Hedgehogs, is actually doing this with deep learning and neural networks, and posted a video of him doing it here. He is also hosting a seminar in Houston called “Deep Neural Networks for Computer Vision: Do RIOs Dream of Electric Sheep?” which will be all about this.

This is all fascinating! I hope the seminar notes and/or video are posted eventually. Our home championship would be St. Louis, on the slim chance that we make it.

For those who are looking into this - to what end? Automatic swerve-out-of-the-way? Path planning? Or just some kind of feedback to the driver?

We noticed with both Steamworks and Stronghold that there are points on the field you have no line of sight to, and you’re kind of guessing at what could be going on. Robot not appearing on the other side of the obstacle? You could be stuck on the obstacle itself, the wall, another robot, etc.

I also wonder if there’s value in providing sensor feedback at the driver station. For example, some fancy cars have 360-degree camera views that show you the obstacles and parking-lot lines all around you. Even simpler than that would be an indicator of which objects are near your front corners, sides, etc., like a car’s parking radar. I don’t know how useful this would be to a driver, though, whose eyes are probably focused on the field and not on the laptop screen.

The great thing about the neural net is that it can be extended fairly easily to many situations. As you mentioned, this can be driver augmentation or even pathfinding. This isn’t tremendously difficult: use the neural net for object recognition and location, then run that data through a normal pathfinding algorithm (a sketch of that second stage is below).
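As an untested sketch of that second stage, here is a plain A* search over an occupancy grid. The grid size and the blocked cells are placeholders, and the detector that fills them in is assumed to exist:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = blocked).
    start and goal are (row, col); returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g = {start: 0}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g[cell] + 1 < g.get(nxt, float("inf")):
                    g[nxt] = g[cell] + 1
                    came_from[nxt] = cell
                    heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
    return None

# Mark cells covered by each detected robot as blocked, then plan a route.
grid = [[0] * 30 for _ in range(15)]        # e.g. 30x15 cells over half the field
for r, c in [(7, 12), (7, 13), (8, 12)]:    # cells from hypothetical detections
    grid[r][c] = 1
print(a_star(grid, start=(14, 0), goal=(0, 29)))
```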

Solomon will have more details about it, and we certainly will record it.

4505 is planning on trying to use our Pixy camera to find and track robots for automated defensive blocking. Along with a swerve, we should be able to stop any team we want if we are forced to defend.