Programming for vision-related things

Hello all, our team has gone through a bit of a drought recently and, consequently, lost our programmers, so this year I'd like to try some vision processing. The only problem is that, beyond some basics, I have no idea how to go about it. The tools I have at my disposal are a Raspberry Pi, 4 USB cameras, lights, 2 IP cams, and the generic electrical things. Also, if you have any way to get multiple camera streams going at once without the FMS giving us grief over bandwidth limitations, that'd be great.

Thanks,
-Mike

What language are you guys using?

Java is on the roboRIO; however, if the Pi has to be C++, that's how it's going to have to be.

There is a RoboRealm voucher code on TIMS… That software is used to process images…

IIRC RoboRealm is Windows-only.

In any case, vision processing is hard. Not only do you need enough usable information from the image, which is battered by changing light conditions and possibly some defensive robots, to distinguish the target's location, angle, and distance, you also have to make some assumptions about the physics of the situation. The program then has to calculate the trajectory of the ball, aim, and stay aimed while the bot is being pushed around by defensive robots.
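To give a flavor of the math involved: one common trick is to estimate distance from the target's vertical angle in the image, given a fixed camera mount. Here's a minimal sketch in Java, where the camera height, target height, and mount angle are placeholder values, not measurements from any real robot:

```java
// Sketch: estimating horizontal distance to a vision target from its
// vertical angle in the camera image. All constants are hypothetical.
public class DistanceEstimate {
    static final double CAMERA_HEIGHT_M = 0.5;   // camera lens above the floor
    static final double TARGET_HEIGHT_M = 2.0;   // center of the reflective tape
    static final double CAMERA_PITCH_DEG = 30.0; // upward tilt of the camera

    /**
     * Given the target's vertical offset from the image center in degrees
     * (positive = above center), return the horizontal distance in meters.
     */
    static double distanceToTarget(double verticalOffsetDeg) {
        double angle = Math.toRadians(CAMERA_PITCH_DEG + verticalOffsetDeg);
        return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(angle);
    }

    public static void main(String[] args) {
        // If the target appears 5 degrees above image center:
        System.out.println(distanceToTarget(5.0)); // ~2.14 m with these numbers
    }
}
```

And that only gives you distance; keeping the shot on target while being pushed around is a control problem on top of it.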

If you have no programming knowledge, it isn't likely that anyone here would have the time to teach you the intricacies of vision processing. Not to mention that the top teams would likely not share any of this year's vision code (for obvious reasons). Unfortunately, you may be out of luck unless you find some competent programmers and finish the build with enough time left (I'd estimate 2-3 weeks to completely test and perfect a vision solution).

It is for that last reason that our team has never had time to try vision. We usually spend all six weeks building, leaving not enough time for practicing and perfecting things like this.

I would :slight_smile: if they were using LabVIEW…

My advice is to use the examples and tutorials for whichever tool/language you choose.

Your first step is to get the camera connected and taking images. Then decide whether you want the drivers to see the images or the computer to make measurements on them. If making measurements, what do you want to do with those measurements? Display them to the driver, influence something on the robot, etc.
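For that first step, WPILib's CameraServer makes streaming a USB camera to the Driver Station short in Java. A minimal sketch assuming the 2016-era API (the exact methods vary between WPILib releases, so check the examples shipped with your version):

```java
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;

public class Robot extends IterativeRobot {
    @Override
    public void robotInit() {
        CameraServer server = CameraServer.getInstance();
        server.setQuality(50); // lower JPEG quality helps stay under the FMS bandwidth cap
        server.startAutomaticCapture("cam0"); // "cam0" = first USB camera on the roboRIO
    }
}
```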

And feel free to ask detailed questions once you decide what you want and get started.
Greg McKaskle

I’ve been messing around with GRIP lately. It’s a graphical drag-and-drop program for vision processing.

https://wpilib.screenstepslive.com/s/4485/m/50711
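GRIP can publish its results to NetworkTables for the robot code to read. A rough sketch of the robot side, assuming a pipeline with a Publish ContoursReport step named myContoursReport (the table and key names depend on how you configure the pipeline):

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripReader {
    /** Returns true if GRIP's pipeline reported at least one matching contour. */
    public static boolean targetSeen() {
        // "GRIP/myContoursReport" must match the name of the Publish step in GRIP.
        NetworkTable grip = NetworkTable.getTable("GRIP/myContoursReport");
        double[] centerXs = grip.getNumberArray("centerX", new double[0]);
        return centerXs.length > 0;
    }
}
```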

If all of your programmers have left the team, your first priority should be to do whatever it takes to support basic functions: having your robot drive around the field, pick up game pieces, and transport them securely. If your robot cannot get into scoring position with the game piece, your vision processing will be unnecessary.

My impression is that most teams that succeed with “advanced techniques” such as vision processing, swerve drive, or other holonomic drive systems learn to use them in the time between competition seasons. In the fall, and especially in the month or two before Kickoff, you tend to see the results of many of these off-season learning exercises being posted.

In the next six weeks, you can really only use what you already know and some things that are quick and simple to learn. As others have pointed out, vision processing is not quick and simple to learn.

I programmed last year's bot, so I'm not a complete newb at this; I just don't have the experience of the old programmers. All I'm aiming for is a system where, when light reflects off the reflective tape, I can run the image through and get back a true or false, primarily for auton. I just got done with school, so I'm going to do some trial and error, as I don't have an actual bot at the moment. So it's entirely possible that I'll get it.
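For a true/false detection like that, an OpenCV pass over each frame is probably enough: threshold on the tape's color in HSV, then check whether any sufficiently large blob survives. A rough sketch in Java; the HSV bounds and area cutoff here are guesses that would need tuning on real images:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class TapeDetector {
    // HSV bounds for a green light reflecting off retroreflective tape.
    // These numbers are placeholders; tune them against your own images.
    // (Remember to call System.loadLibrary(Core.NATIVE_LIBRARY_NAME) once at startup.)
    static final Scalar LOWER_HSV = new Scalar(50, 100, 100);
    static final Scalar UPPER_HSV = new Scalar(90, 255, 255);
    static final double MIN_AREA_PX = 200.0; // ignore specks of noise

    /** Returns true if a blob of tape-colored pixels is visible in the frame. */
    static boolean tapeVisible(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        Core.inRange(hsv, LOWER_HSV, UPPER_HSV, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        for (MatOfPoint contour : contours) {
            if (Imgproc.contourArea(contour) > MIN_AREA_PX) {
                return true;
            }
        }
        return false;
    }
}
```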

Thanks

OK, to EDesbiens: a huge thanks for the tip about RoboRealm, as now I can identify the “U”. I just have some high-intensity LEDs (oh wait, is it a neon tube? People need to start organizing lights.), set the RGB filter to green, and have an “ideal” for my target. What I hope to be able to do tomorrow (why do mentors need to go home?) is actually track the thing. Hopefully this will all be contained on the Driver Station computer (oh great, another thing to fail), and I can just relay to the SmartDashboard via the NetworkTable. (I'm trying to give as much info as possible; it might help some other team out there.)
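In case it helps another team, here's roughly what the robot-side relay can look like: read the value the laptop publishes into NetworkTables and push it to the SmartDashboard. A minimal sketch against the 2016-era NetworkTables Java API; the table name and key are made up, so match them to whatever your tracker actually publishes:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class VisionRelay {
    /**
     * Call this periodically in robot code. The table "vision" and the key
     * "targetVisible" are hypothetical names; use whatever the Driver
     * Station-side tracker writes.
     */
    public static void relayVisionResult() {
        NetworkTable vision = NetworkTable.getTable("vision");
        boolean targetVisible = vision.getBoolean("targetVisible", false);
        SmartDashboard.putBoolean("Target visible", targetVisible);
    }
}
```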

Thanks
-Mike

No problem :slight_smile: Have fun, and don’t hesitate to ask if you have problems!