For the past couple of weeks I have been working on a system for recognizing FRC hot targets and determining the distance to them. This is a work in progress, and any feedback or suggestions people have would be much appreciated.
Are you planning on using OpenCV? Right now it looks like you’re writing everything from scratch and that’s going to take a very long time to do.
I have already figured the whole thing out; at this point I am just implementing what I planned on paper. Also, I looked at the code for OpenCV, and while it is a very powerful framework, using it for this would be a performance nightmare. The first version of this actually used OpenCV. I spent over a week optimizing it, but it still ran slowly. OpenCV is a truck; I need a sedan.
Are you planning to run this code on the roboRIO or the driver station? I know OpenCV got built for the roboRIO but I’m not sure if it was ever benchmarked.
OpenCV should run fine on a decent driver station (so not a Classmate). I got it working last year at about 30 fps on a crappy three-year-old Alienware laptop.
OpenCV has been around since June of 2000 and was initially developed in Russia…
We’re planning on running it on the roboRIO. A major reason we are doing this in the first place is for the autonomous portion, during which communication with the driver station is prohibited.
Crappy Alienware laptop? My team’s drive station is a 2009 white MacBook with a Core 2 Duo processor and 2 GB of RAM. Oh, and to control the robot it needs to be running Windows in a VM :D. I would not be surprised if running the vision code on the roboRIO would be faster anyway (I’m only mostly joking).
This doesn’t happen. The robot always has communication with the driver station.
You’re right. This is my first year being part of FRC, so I only know what the rest of my team has told me. I could have been confused or they could have been confused, but either way, thanks for the information.
At this point I’m not entirely sure whether we will be doing the vision processing on the driver station or the roboRIO. The roboRIO has a slower processor, but sending an image from the camera to the roboRIO, then to the modem, and then to the driver station sounds like a lot of latency.
We tried this last year, and with the right settings it worked OK. The problem is that the FTA had to talk to us because we were exceeding our bandwidth limit: the images we were sending were too high resolution.
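If anyone else hits that, the easy fix is to capture at a lower resolution in the first place so the stream never gets near the limit. A minimal sketch, assuming OpenCV 3.x Java bindings (the camera index and the 320x240 size are placeholder values to tune for your own camera and the bandwidth cap):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;

public class LowResCapture {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder device index; use whatever your camera enumerates as.
        VideoCapture camera = new VideoCapture(0);

        // Request a smaller frame size so the stream stays well under the
        // field bandwidth limit. 320x240 is an assumption, not a magic number.
        camera.set(Videoio.CAP_PROP_FRAME_WIDTH, 320);
        camera.set(Videoio.CAP_PROP_FRAME_HEIGHT, 240);

        Mat frame = new Mat();
        if (camera.read(frame)) {
            System.out.println("Got frame: " + frame.width() + "x" + frame.height());
        }
        camera.release();
    }
}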
The other problem we had was that we didn’t account for the diamond plate on the driver station, so the reflection from the LED ring was tricking our vision tracking software into thinking it was a target.
The OpenCV codebase was compiled to run on the roboRIO. I’m not saying that it was created for the roboRIO.
You don’t have to run your vision stuff in the VM. OS X can still connect to the robot and you can run your Java app there, although I still wouldn’t expect it to process images at any more than 15 fps because of the Duo (for comparison, my laptop was running an i5-2537M). 15 fps should be fine if you don’t try to use it to shoot or aim when the robot’s going full speed.
If that happens, your code should ignore “targets” smaller than some area threshold. Then you’d only be processing the real targets.
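To make that concrete, here’s a rough sketch of the area-threshold filter using OpenCV’s Java bindings (the 400-pixel minimum is a made-up number; you’d tune it against real footage from your own camera and field setup):

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class TargetFilter {
    // Placeholder minimum blob area in pixels; tune against real footage.
    private static final double MIN_TARGET_AREA = 400.0;

    // Returns only the contours large enough to plausibly be targets,
    // dropping small blobs like LED reflections off diamond plate.
    public static List<MatOfPoint> filterSmallBlobs(Mat binaryImage) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binaryImage, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<MatOfPoint> targets = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            if (Imgproc.contourArea(contour) >= MIN_TARGET_AREA) {
                targets.add(contour);
            }
        }
        return targets;
    }
}

Filtering on aspect ratio as well would help, since the retroreflective targets have a known shape.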
Sorry for taking up everyone’s time, but the communication on this team has been rather poor coming into this season. First of all, we are running our vision code on the computer, and I am not worried about processing power or speed. Second of all, we will be using OpenCV for our vision processing. After reading your code, I’m not sure it’s right for the job. In the past, we have had bandwidth issues with our camera, and I think that somehow got miscommunicated to our newer members.