We were a rookie team last year and didn’t expect such high ping when running vision on the driver station computer. We are considering using a single board computer such as a Raspberry Pi, and we want to know if anyone has experience doing vision on a single board computer, which board you would recommend, and/or what to consider when picking one.
Last year we used a Pandaboard single board computer based on a white paper from another team. The two key issues you need to address no matter what you choose to use are:
Power - You will want to build voltage regulators to power the board off of the main power distribution board.
Communication - Spend a great deal of time defining how your vision program will communicate with the cRIO and the rest of the robot code; the KISS principle applies as always. We also had the board streaming the image back to the DS, so if you want to do that, make sure you can control the bandwidth usage (resolution and frames per second) - a rough sketch of throttling the stream is below.
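To make that bandwidth point concrete, here is a minimal sketch, assuming OpenCV in Python on the board, of capping the resolution and frame rate of whatever you stream back to the DS. The camera index, 320x240 resolution, and 10 fps cap are placeholder values, not something this post prescribed.

```python
import time
import cv2

cap = cv2.VideoCapture(0)                  # assumed USB camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)     # keep streamed frames small
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

MAX_STREAM_FPS = 10                        # assumed cap on frames sent to the DS
last_sent = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # ... run target detection on every frame here ...
    now = time.time()
    if now - last_sent >= 1.0 / MAX_STREAM_FPS:
        last_sent = now
        # ... only encode/send this frame back to the driver station here ...
```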
There are lots of good and cheap single board computers out there; I would just make sure you pick one that has a solid Linux build with all drivers available.
If your weight budget can afford it, I would recommend getting a used laptop (super cheap; maybe add an SSD). The power supply shouldn’t be too difficult to figure out, and you’ll probably be able to get more processing power. Of course, many teams have had success with Raspberry Pis and other Linux-compatible systems such as the ODROID X2 or UDOO (both are quad core and worth looking at for vision processing). I can’t believe you guys did vision processing your rookie year. You must have been pretty dedicated and organized.
I’d recommend using an ODROID product, that is, the X2, X, U2, or U. They are quad-core, ARM-based, 1.7 GHz, and much more powerful than the Pi that many teams seem to use for some reason. As for talking with the cRio, we have been using UDP messages.
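In case a concrete example helps, here is a minimal sketch of that kind of UDP message from the board side, assuming Python on the board; the cRIO address, port, and string layout here are made-up placeholders, not what this team actually uses.

```python
import socket

CRIO_ADDRESS = ("10.0.0.2", 1130)     # hypothetical cRIO IP and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(angle_deg, distance_in):
    # One short, human-readable datagram per processed frame (KISS).
    msg = "%+07.2f,%06.1f" % (angle_deg, distance_in)
    sock.sendto(msg.encode("ascii"), CRIO_ADDRESS)

send_target(-12.5, 148.0)             # e.g. 12.5 degrees left, 148 inches away
```

Since UDP is connectionless, the board just fires off the latest result every frame and the robot code reads whatever arrived most recently; a dropped packet only costs you one frame of data.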
I’d look for at least a dual-core ARM Cortex-A9 processor, RAM, Ethernet, USB host, and I2C, running Linux and OpenCV. This has been well documented, and there are a lot of recent threads on this forum that you will find useful.
As mentioned before, the ODROID (www.hardkernel.com) is a good unit, and I think it has one of the best specification sets (quad core, etc.). I think we will be getting one to experiment with. In the past, we have used one of the single-core MK801 Android PC units. We upgraded to a dual-core MK808, but never used it in competition.
I would not use an onboard laptop unless you absolutely have to - the size and weight concerns far outweigh the small cost of an ARM-based single board computer.
A fast multi-core CPU. ARM-based boards and other small computers won’t be capable of exploiting the GPU for vision processing, so the CPU is pretty much all that matters. After that, I would look for something with USB 3.0 ports to accommodate higher-end cameras (which you don’t necessarily need; it’s just a perk).
There are two major issues that can cause latency when doing vision on the driver station: the bandwidth limit and the driver station’s CPU throughput. Both can have an impact even when doing vision on a separate computer, so it would be helpful to determine which one caused your issue, so you can avoid it with your new architecture (or even get it working with the old one).
The bandwidth limit on the field is 7 Mbps. As the bandwidth used approaches that limit, latency increases. There is very good data about this in the FMS Whitepaper. This only affects data sent over the radio, so if your vision processing is completely contained on your onboard computer, it won’t be an issue. However, you will most likely want vision feedback on the driver station, so you will need to worry about bandwidth. One thing the whitepaper doesn’t cover is that dark pictures compress much more easily and are also easier to process. See http://www.chiefdelphi.com/forums/showpost.php?p=1248042&postcount=44
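To put rough numbers on that budget, here is a sketch (OpenCV in Python, with assumed camera index, JPEG quality, and frame rate) of estimating how much of the 7 Mbps limit a stream would use; darker or more heavily compressed frames show up directly as smaller encoded sizes.

```python
import cv2

cap = cv2.VideoCapture(0)                          # assumed USB camera index
ok, frame = cap.read()
if ok:
    quality = 30                                   # assumed JPEG quality (0-100)
    ok, buf = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    if ok:
        kb_per_frame = len(buf) / 1024.0
        fps = 15                                   # assumed stream rate
        mbps = kb_per_frame * 8.0 * fps / 1024.0
        print("~%.1f KB/frame -> ~%.2f Mbps at %d fps" % (kb_per_frame, mbps, fps))
```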
The Classmate PC only has an Atom processor, which can be overloaded just by displaying high-resolution images. If the CPU is overloaded, it will also affect network latency. You can look at the driver station CPU usage on the Charts tab of the driver station. Whether you do driver station or onboard vision processing, you will need to carefully manage CPU usage, as vision processing very quickly consumes all available CPU time. Limiting the rate at which vision processing occurs is smart no matter what platform you use. Again, if you send processed images back to the driver station and you are using the Classmate, you will need to worry about driver station CPU usage regardless of where the processing occurs.
We actually used our laptop; it had an i7 CPU and 8 GB of RAM, more than enough to run our image processing program. The problem was that we couldn’t compress the image, because we used the camera almost exclusively from the other side of the field, and finding the goals required a higher resolution image. Still, I don’t know whether bandwidth was the problem, because we only grabbed one image to calculate the angle to turn. That’s why we want to try using a board to process images.
Today and tomorrow, 2073 will be using a PCDuino and an MS USB webcam onboard our robot at CalGames. It is running Ubuntu and OpenCV to do our target tracking. It passes the corners and center position of the top target to the cRio as a 24 character string over the local network on the robot. No WiFi bandwidth limitations at all.
This is the first time we have fully implemented this in competition. So, when they return, we will let you know how it performs.
I am also working on vision processing for our team. After finding out about the bandwidth restrictions, I decided it would be better to do all the work onboard the robot. I was loving the Pi until today, when I found out about the ODROID. It just made me nuts. The processor should be faster than my new i3 laptop! OpenCV might like running on the ODROID. I find the Pi good for continuous applications requiring less power, but I think the ODROID is more of a “performance” board, which is what a robot needs. Also, my Pi is overclocked to 1.1 GHz. Use a USB camera to get away from the network. Also, some of these boards let you use I2C instead of the network, reducing network load and even making the system more robust.
From all the reports I got back from the team, the camera tracking worked exactly as intended. The first half of the day we had great success with it.
But… later in the day, something in the robot went south. We lost all ability to drive. Luckily, we have zero indication it was related to the camera system in any way. Most likely we lost a DSC or possibly the PDB.
Either way, we are quite happy with the off-board vision processing.
The biggest issue I’ve been having so far is just getting communications up. What’s your preferred method of interfacing the board with the cRIO? I’ve tried a basic TCP, but I was getting a ton of lag for some reason.
Our approach is to have the offboard processor do all the heavy lifting. It processes the images to determine the “target” location, then places that information, in the form of a string, into a memory location. Every image that generates a valid target overwrites the previous data.
We use a Socket Request handler on the board to respond to Socket Requests from the cRio. The response to a Socket Request is to send the latest string to the cRio and then close the socket. This way only the latest target information is passed to the cRio as a 24 character string. The cRio then uses that information to perform whatever task we have coded it to do.
We are not sending images from the board to the cRio. Doing so would really defeat the purpose of using the offboard processor.
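This is not 2073’s actual code, but a minimal Python sketch of the pattern described above, assuming a made-up port number and string format: the vision loop keeps overwriting a single “latest target” string, and a socket handler answers each request from the cRio with that string and then closes the connection.

```python
import socket
import threading
import time

latest_target = "no_target_seen_yet_0000!"   # placeholder 24-character string

def vision_loop():
    global latest_target
    while True:
        # ... grab a frame, find the target ...
        # every valid result simply overwrites the previous one, e.g.:
        # latest_target = "%04d,%04d,%04d,%04d,%04d" % (x1, y1, x2, y2, cx)
        time.sleep(0.05)                     # stand-in for the frame grab

def serve_requests(port=1180):               # hypothetical port
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen(1)
    while True:
        conn, _addr = server.accept()        # cRio connects whenever it wants data
        conn.sendall(latest_target.encode("ascii"))
        conn.close()                         # one request, one reply, then close

t = threading.Thread(target=vision_loop)
t.daemon = True
t.start()
serve_requests()
```

The cRio side just opens a TCP connection to the board, reads the 24 bytes, and closes; because the board always responds with the most recent result, the robot code never blocks waiting for image processing to finish.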
Pardon me for being a bit dense, but how are the Socket Requests sent? Most of my difficulties have just been getting the Pi and the cRIO to talk to each other, and my efforts with NetworkTables have been met with considerable frustration. Do you guys have any advice?
I also started learning NetworkTables very recently (about 1-2 weeks ago), so my knowledge might not be as thorough as it could be yet. This is pretty much my working knowledge of it.