[Ri3D] Help BOOM DONE. Rpi Camera Tips and Tricks

As I’ve said before, Team BOOM DONE. has a stronger group of roboticists than any FIRST team I have been associated with. I am not bragging, I’m just stating facts here: my colleagues know how to make great robots.

BUT… …we are not that deep when it comes to FIRST experience.

The coders on the team have decided they want to do something with the Vision System aspect of the game (distance and angle to target for example).

Their Plan A is to use the IP camera that so many FIRST teams use (the Axis M1013), ship the image over to the laptop via WiFi, process the images with RoboRealm, and send data back to the cRIO via the magic of Network Tables.

But that is not our coders’ “home field,” if you know what I mean. They would much prefer to use OpenCV on Linux. Also, they hate latencies of any kind (let alone unpredictable ones based on network traffic). They’d much prefer to keep everything on the robot if they can. Finally, add to that the fact that I’m forcing them to use a system that is accessible to the 30th-percentile FRC team, and they sent me an e-mail asking to order a bunch of Rpi stuff (see jpg attached).
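For concreteness, the sort of on-board loop they are describing looks roughly like the sketch below. This is my own rough rendering, not their actual code; it assumes Python OpenCV on the Pi, that the camera shows up as a normal video device, and a plain UDP socket back to the cRIO. The IP address, port, and HSV thresholds are placeholders.

```python
# Rough sketch of an on-board vision loop (not real code from our coders).
# Assumes the camera is visible as /dev/video0 and that a plain UDP socket
# carries results back to the cRIO; address, port, and thresholds are
# placeholders only.
import socket

import cv2

CRIO_ADDR = ("10.0.0.2", 1130)     # placeholder cRIO IP and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

cap = cv2.VideoCapture(0)          # assumes the Pi camera is /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        continue

    # Isolate the retroreflective target with a color threshold.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue

    # Report the largest blob's horizontal offset from image center;
    # the cRIO side would turn that into an aiming correction.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    offset = (x + w / 2.0) - frame.shape[1] / 2.0
    sock.sendto(str(offset).encode(), CRIO_ADDR)
```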

SO… …does anyone have any experience using the Raspberry Pi and associated camera to do FIRST type things?

If so, are you willing to share your experience?

Is this a recipe for disaster? If so, is it redeemable or should we formulate another plan?

Is the Rpi Plan B better than the RoboRealm Plan A, and should we make Rpi our Plan A?

Finally, what are the best threads on ChiefDelphi.com where this is talked about?

Help us get smarter faster.

Thanks,

Dr. Joe





I did a lot of work last summer on camera tracking using the Pi and BeagleBone Black. I was using Python OpenCV because I didn’t know C++. On the Pi, just running a morphology and threshold on an image took about 100 ms at stock speed. Overclocking the Pi to around 1 GHz brought that down to 90 ms. So doing any kind of processing on the Pi is most likely going to take around 150-200 ms, including grabbing the image and sending the data to the robot.
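For reference, the per-frame work I was timing was essentially a color threshold plus a morphological open, along the lines of the minimal sketch below. The test image name and HSV thresholds are just stand-ins, not values from a real setup.

```python
# Minimal timing sketch for the per-frame work described above:
# one color threshold plus one morphological open in Python OpenCV.
# "frame.jpg" and the HSV thresholds are stand-ins.
import time

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")
kernel = np.ones((5, 5), np.uint8)

start = time.time()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))   # threshold
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # morphology
print("threshold + morphology took %.1f ms" % ((time.time() - start) * 1000.0))
```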

I never did the full loop on the Pi, so I don’t have an exact answer. I did get it working on the BBB, however, and running the entire tracking takes about 55 ms. This was using the IP camera that comes in the KOP.

Running this on the DS, if it’s powerful enough, can do all the calculations in about 1 ms. If running over WiFi, I calculated about 5 ms transfer time, vs. about 1 ms wired. So from acquiring the image (one every 33 ms), the processed data will hit the robot about 15 ms later. Unless you hit a large lag spike, tracking on the DS will have a lot less latency.

Remember that you can run OpenCV on the DS in Windows, and it should work the same as running on Linux.

If you really want it onboard, you can also mount a laptop directly on the robot and do that too.

Next year we are thinking about doing the BeagleBone Black method, because a 100 ms delay is not much for our purposes. But the code will be runnable on the DS as a backup.

Realistically, is vision the best use of time, given only three days?

We did vision in 2012 on an Rpi with OpenCV.

The team wrote a white paper on it here: http://www.chiefdelphi.com/media/papers/2709

I have not compared on each of the platforms, but from what I have seen, heard, and measured, this is how I’d decide.

The image processing and decompression algorithms are largely integer calculations, so MIPS ratios are a reasonable estimate of image-processing ratios. Sites like http://en.wikipedia.org/wiki/Million_instructions_per_second#Million_instructions_per_second are quite helpful for estimating the tradeoffs.

The IP camera images are compressed, and it takes quite a bit of time simply to decompress them. This time goes down substantially with smaller images. Ditto for many of the typical processing approaches. The primary sizes are small (160x120), medium (320x240, which has 4x the pixels), and large (640x480, which has 16x the pixels).
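If you want a feel for how much the decompression alone costs at each size on your own hardware, a quick benchmark along these lines will tell you. This is just a sketch using Python OpenCV; the random synthetic image stands in for a real camera frame, so treat the absolute numbers as rough.

```python
# Rough benchmark of JPEG decompression cost alone at the three camera
# sizes. The random image is a stand-in for a real frame.
import time

import cv2
import numpy as np

for w, h in [(160, 120), (320, 240), (640, 480)]:
    img = np.random.randint(0, 256, (h, w, 3)).astype(np.uint8)
    ok, jpeg = cv2.imencode(".jpg", img)

    start = time.time()
    for _ in range(100):
        cv2.imdecode(jpeg, 1)      # 1 = decode as color
    ms = (time.time() - start) * 1000.0 / 100
    print("%dx%d: %.2f ms per decode" % (w, h, ms))
```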

The IP camera can supply monochrome images too. So if you don’t need color, don’t use it. But do you need it?

The LV examples and perhaps the other languages work on both laptops and cRIO. So you can directly compare. The driver station has a chart that shows latency for the round trip of the UDP datagrams. This should give an idea of the TCP latency.

Clearly algorithm selection matters as well. How much info do you need to know? How certain do you need to be?

Similarly, a camera is a sensor. How often do you need to read the sensor? How few images can be processed? What sensors can be used to supplement the camera?

Greg McKaskle

Best summary set ever. I never really understood why teams needed 15fps of video. :rolleyes:

Joe, another, but more costly, option would be the Cubieboard family; presently they have the Cubieboard 3 (aka Cubietruck). It’s a dual-core ARM with a ton of ways to get data on and off the board. I’ve used the prior version and was impressed with the added processing power. The dual cores on the Cubie2 and Cubie3 make a difference in processing power, and the extra memory can make a big difference too.

I’m with you here. In order for a vision system to be effective, you need to have perfected whatever mechanism uses the camera feedback. I’m pretty sure that improving your game-piece manipulator is a much better use of time than adding a camera. You won’t really have time to troubleshoot or debug if something goes wrong. Not to dismiss the ability of some of the 3-day robot teams, but I foresee that at least one of the groups won’t have a functioning robot at the end of 72 hours.

I hear you with regard to vision. I have seen many teams focus on vision only to never get it on the robot because they didn’t make weight or their main program (i.e. the one that allows the robot to MOVE) never gets done (or done right).

That said, our goal is a robot that would likely be playing after lunch on Saturday. At least for the past two years, one way to do that has been vision helping locate the goals. Admittedly it is hardly the ONLY way, but it has certainly been part of some teams’ strategies for success.

Are there pitfalls in this approach? Certainly. But we hope to avoid those traps and model ways for others to avoid them as well.

For example, the ME team knows it has to support the CS team by giving them surrogates early in the 72 hour countdown so that they can develop code before they get THE robot. What is more, they have to deliver a robot that is essentially ready to code with enough ticks on the clock left to allow for the final code deployment.

I think that 72 hours with BOOM DONE.’s talent is a reasonable surrogate for what most teams can get done in the 6 weeks. Time scales pretty well. So if teams should give their coders at least a week, then we should have the robot to the coders with 12 hours on the clock or we’re doomed. The ME team is shooting for twice that: 24 hours.

Lofty goals. Let’s see how it turns out.

Dr. Joe

In 2013, most of the great shooters didn’t use a camera. It’s really easy to smash the robot up against the pyramid and shoot. We trained our driver to judge the shot distance without using the pyramid in under 2 hours. It takes more than two hours to program the vision stuff.

That being said, in 2012 an auto-targeting routine could be really great, but time consuming. The team I was with that year had an incredible software team, and let me tell you, it wasn’t easy. The vision stuff was student led (by one of the brightest students I’ve ever met), but the five programming mentors (two of whom design optical-recognition quality-control systems, and another who works with camera feedback to align a robotic arm) agreed that it would have taken us well over a week to do this. We had a team of 3 students plus 2 mentors working on the vision system every build meeting (>35 hours per week) for the entire build season; the fully functional vision system worked at week five, and the tuning for PID, bandwidth, image size/quality/exposure, etc. was finished a week after build ended.

However, I think that for a group of reasonably experienced programmers a vision system of sorts could be written in three days. A second team I helped in 2013 decided to do a vision system, but only had one kid and one mentor for programming. So, they took the example LabVIEW vision code, ported it to run on a computer instead of the robot controller (less latency), and transmitted the values using network tables. A sophomore in high school was able to do this by himself in 4 hours and could get the turret on their 2012 robot tracking a target perfectly.

My advice to you is, before you dismiss driver station vision processing or onboard (on the cRIO) processing as too slow, do some trials. Do you really need to process 15 fps, or can you grab one image and figure out the angle? If you need only one image, use the cRIO; if you need live processing at up to 20 fps, try the driver station vision processing. Miss Daisy (341) had what was in my opinion (and many others’) one of the most accurate auto-aiming systems, and they just processed images on their driver station laptop.
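For the “grab one image and figure out the angle” case, the math is small once you have the target’s pixel position; something like the sketch below works. The image width and field-of-view numbers here are placeholders, so calibrate them against your own camera rather than trusting these values.

```python
# Sketch of the "one image, one angle" idea: from the target's pixel
# x-position in a single processed frame, estimate the azimuth to the
# target. IMAGE_WIDTH and HORIZONTAL_FOV are placeholders; measure your
# own camera before relying on these numbers.
import math

IMAGE_WIDTH = 320        # pixels (medium image size)
HORIZONTAL_FOV = 47.0    # degrees, assumed; calibrate this

def azimuth_to_target(target_x):
    """Angle in degrees from the camera centerline to the target center."""
    # Pinhole model: the image plane sits focal_px pixels in front of the
    # camera, so a pixel offset maps to an angle through atan2.
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HORIZONTAL_FOV / 2.0))
    offset = target_x - IMAGE_WIDTH / 2.0
    return math.degrees(math.atan2(offset, focal_px))

# A target centered at x = 250 in a 320-pixel-wide image is to the right:
print(azimuth_to_target(250))
```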

Besides being more expensive and difficult, using the Raspberry Pi makes your robot really complicated and prone to silly bugs and glitches. The Robonauts in 2012 didn’t move on Einstein at all because their network buffer overflowed from the computer on the robot. Every extra component adds thousands of ways for the system to fail, and a competitive FRC robot has the fewest possible failure points.

It helps if you’ve done this in a prior season or in the preseason, but there are several cookbook papers and a team with the right mix of people could pull this off in the build season – but not without risk or expending additional resources that could possibly be spent elsewhere. There’s no withholding allowance on software, so there are ways to buy time too.

This (http://charmedlabs.com/default/?page_id=211, Pixy/CMUcam5) might be an interesting option, but the general availability doesn’t line up well with this season. Short of this turning out to be a good fit and being generally available, the approach most easily within reach for most teams seems to be running OpenCV or RoboRealm on the driver station, as mentioned previously.

For what it’s worth, the Charmed Labs people have prototypes they have been using and testing. I’m going to guess that if you reached out to them and asked, they would be happy to become part of “5 crazy teams build robots in 72 hours.”

From what they’ve posted, the camera and software look very powerful. I’m in for a pair of them in the Kickstarter; I was hoping (*) that they would get here in December.

(*) Hope isn’t an engineering strategy that I recommend