I’m in no hurry to get an answer to this. I’m merely doing research for next year.
I’m kind of new to using C++ for coding the robot this year. My team is thinking about implementing vision next year via C++. What I am looking for is someone to explain how to write a C++ program that does nothing but vision processing, or at least give me a place to start from. I don’t want to run robot driving code on it, just vision processing code. I’ve read all of the manuals for the C++ stuff and it just confuses me.
I know it’s not exactly what you’re looking for, but you can find an example in WindRiver by going to File -> New -> Example -> VxWorks Downloadable Kernel Module Sample Project -> 2013 Vision (… I think there are two “vision”-related examples). We’ve never actually tried vision tracking ourselves (this is also our first year of C++), but that’s what most people who have will probably recommend. Enjoy!
Our robot has a little Intel NUC on it running Arch Linux, on which we run OpenCV code that grabs images from our cameras, figures out angles, distance, etc., and sends the results to the cRIO. We write it ourselves in C++. You can look at the source code here. I’m looking to port this to C sometime. It’s not the fastest thing on the planet… I also want to clean it up later. It’s ultra messy because we kind of write our code in a hurry all the time.
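For anyone wanting a starting point, here is a rough sketch of what a coprocessor loop like that can look like: grab a frame with OpenCV, threshold for the target color, find contours, and ship a simple result to the cRIO over UDP. The camera index, HSV thresholds, cRIO address, and port below are placeholders for illustration, not our actual values.

```cpp
// Minimal coprocessor vision loop (sketch only; IP, port, and thresholds are placeholders).
#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // first camera on the coprocessor
    if (!cap.isOpened()) return 1;

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in crio{};
    crio.sin_family = AF_INET;
    crio.sin_port = htons(1180);                        // placeholder port
    inet_pton(AF_INET, "10.17.6.2", &crio.sin_addr);    // placeholder cRIO IP

    cv::Mat frame, hsv, mask;
    while (true) {
        cap >> frame;
        if (frame.empty()) break;

        // Threshold for the retroreflective target (tune these ranges for your lighting).
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(60, 100, 100), cv::Scalar(90, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Pick the biggest contour and report its horizontal offset from image center.
        double bestArea = 0;
        int best = -1;
        for (size_t i = 0; i < contours.size(); ++i) {
            double a = cv::contourArea(contours[i]);
            if (a > bestArea) { bestArea = a; best = (int)i; }
        }
        if (best >= 0) {
            cv::Rect box = cv::boundingRect(contours[best]);
            double cx = box.x + box.width / 2.0;
            double offset = cx - frame.cols / 2.0;   // pixels left/right of center

            char msg[64];
            std::snprintf(msg, sizeof(msg), "%.1f %.1f", offset, bestArea);
            sendto(sock, msg, std::strlen(msg), 0, (sockaddr*)&crio, sizeof(crio));
        }
    }
    close(sock);
    return 0;
}
```

On the cRIO side you would just listen on the same port and parse the offset/area pair; the exact message format is whatever you decide on.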
1706 is highly advanced in vision programming. That is my role on the team and I love it. There are only two programmers on the team, me and a senior who does everything else. My program actually runs on a completely different computer on the robot than his, and runs independently. I used Intel’s OpenCV libraries and had to fix some of the functions, such as approxPoly, because they weren’t good enough. I have a paper posted on here that I wrote about it for a science competition, which won me a trip to Houston for I-SWEEEP. If you have any questions, please contact me.
If you notice how poorly it draws the polygons, I don’t rely on the corners it gives me. I adapted aishack’s program (http://www.aishack.in/2011/06/image-moments/) for tracking a coloured ball and applied the moments to the contour, not the whole image. That gives me subpixel accuracy for the centers of the targets that doesn’t budge when everything is still. Last year, to find the center of the target, I averaged the corners from approxPoly. To fix approxPoly’s poor quality, I used the contour tree to temporarily fill the contour in all white (I’m dealing with a binary image), then applied approxPoly; but around each corner it gave me, I applied a region of interest and then used cvCornerHarris. Then I took those new corners, ordered them top to bottom, left to right, and drew the polygon around them. Of course I knew it was a square, but I could adjust the code to handle a polygon of any size. We’ll be at St. Louis and Terre Haute if you by chance happen to be going too.
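For anyone wanting to try the same trick, here is a rough sketch (using the OpenCV C++ API rather than the old C functions) of the two ideas above: contour moments for a stable subpixel center, and a Harris-corner search in a small ROI around each approxPolyDP vertex. The threshold, ROI size, and Harris parameters are values I picked for illustration, not 1706’s actual numbers.

```cpp
// Sketch: contour moments for a subpixel center, plus Harris-refined corners
// around each approxPolyDP vertex. Parameter values are illustrative only.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

// Subpixel center of a single contour from its image moments.
cv::Point2f contourCenter(const std::vector<cv::Point>& contour) {
    cv::Moments m = cv::moments(contour);
    if (m.m00 == 0) return cv::Point2f(0, 0);           // degenerate contour
    return cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
}

// Refine each rough polygon corner by searching a small ROI of the binary
// image for the strongest Harris response.
std::vector<cv::Point2f> refineCorners(const cv::Mat& binary,
                                       const std::vector<cv::Point>& rough,
                                       int roiRadius = 8) {
    std::vector<cv::Point2f> refined;
    for (const cv::Point& c : rough) {
        cv::Rect roi(c.x - roiRadius, c.y - roiRadius, 2 * roiRadius, 2 * roiRadius);
        roi &= cv::Rect(0, 0, binary.cols, binary.rows);  // clamp to the image
        cv::Mat response;
        cv::cornerHarris(binary(roi), response, 2, 3, 0.04);
        cv::Point peak;
        cv::minMaxLoc(response, nullptr, nullptr, nullptr, &peak);
        refined.push_back(cv::Point2f((float)(roi.x + peak.x), (float)(roi.y + peak.y)));
    }
    return refined;
}

int main() {
    // Placeholder input: an already-thresholded binary mask of the targets.
    cv::Mat binary = cv::imread("target_mask.png", cv::IMREAD_GRAYSCALE);
    if (binary.empty()) return 1;

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& contour : contours) {
        cv::Point2f center = contourCenter(contour);      // stable subpixel center

        std::vector<cv::Point> poly;
        cv::approxPolyDP(contour, poly, 0.02 * cv::arcLength(contour, true), true);
        std::vector<cv::Point2f> corners = refineCorners(binary, poly);

        std::printf("center (%.2f, %.2f) with %zu refined corners\n",
                    center.x, center.y, corners.size());
    }
    return 0;
}
```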
Okay, so I looked at OpenCV and there doesn’t seem to be anything on getting images from the camera. Do y’all use the WPI libraries to set up the camera and then use OpenCV to do the processing?
If you execute the stuff on the robot, you’re better off using NI Vision, in which case you would use the WPI libraries to set things up. On the driver station or some other processing platform, there’s a variety of ways to capture images from the camera.
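For example, if the camera is the usual Axis IP camera, OpenCV’s VideoCapture can typically open its MJPEG stream directly over HTTP (this assumes your OpenCV build has FFmpeg support; the address and URL path below are just common defaults, so check what yours actually is):

```cpp
// Sketch: grabbing frames from an Axis camera's MJPEG stream with OpenCV.
// The address/path are typical defaults and may differ on your network.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("http://10.17.6.11/mjpg/video.mjpg");
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... run your vision processing on `frame` here ...
    }
    return 0;
}
```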
I haven’t had an issue with the built-in delays with OpenCV. My program runs at 20 fps; it could be faster, but that isn’t needed. We have four cores on our ODROID-X board but are only using one, so theoretically we could use all of them and make the program run at 80 fps. But again, that is not needed.
The delays only occur at low framerates (like 5 fps) and low resolutions. It sounds like you’re processing on board, so you don’t have to deal with the 7 Mbps limitation for the driver station, which means you’re unlikely to run into the problem.
I guess I didn’t understand the scenario. And yes, you are correct about having an onboard computer. It weighs 6 ounces, so weight is not an issue. It is an ODROID-X board. Very powerful. I highly recommend it.