Co-processor advantages and disadvantages


1389 is looking to tackle vision next season and we were wondering what co-processor we should use. In the past we've used a Raspberry Pi, but we haven't done much with vision in past years. We were wondering what co-processors other teams use and recommend, based on ease of setup, speed, price, and whatever other criteria you feel is relevant.

We write everything in Java, but have one member who is also versed in Python, C, and C++.

Any input is appreciated!

I think the Kangaroo is very handy for this:

It runs native Windows or Linux.
It is an x86 CPU.

Only pitfall: the cooling is a bit limited.
We use a fan from an old Victor and a 3D-printed fan cover over the exhaust port.

Big advantage: you can prototype with a laptop…plug a USB camera into that laptop during development…then move the camera and code over to the Kangaroo for delivery.

Is this specific to the Kangaroo?

The Kangaroo is the easiest to use, the Jetson TX2 is the fastest (as long as you use C++), and the Raspberry Pi is the cheapest.

Unless you’re doing stuff that really does need GPU acceleration to work in real time (optical flow, stereo, SIFT, NNs, etc), it’s not really worth it in terms of code overhead (and you don’t even get much of a speed up). Odroid XU4 is fast enough for most FRC-esque vision tasks you’ll throw at it. The Jetson TX1/2 is great if you need more CPU power or a GPU.

3946 has used a Raspberry Pi for several years. The only caveat/disadvantage we've noted is that you should put communication with your vision coprocessor in a separate thread, so that if you have trouble connecting you don't hang your RIO.
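The threading pattern is the same in any language. Here is a minimal Python sketch of the idea (the `fetch` callable is a placeholder for whatever transport you actually use, e.g. NetworkTables or a UDP socket): a daemon thread polls the coprocessor and caches the result, so the main robot loop only ever reads the latest value and never blocks on the network.

```python
import threading
import time

class VisionClient:
    """Polls the vision coprocessor on a background thread.

    The main robot loop calls get_latest(), which never blocks:
    it only returns whatever the poller thread stored most recently.
    """

    def __init__(self, fetch, period=0.02):
        self._fetch = fetch      # callable that talks to the coprocessor
        self._period = period
        self._lock = threading.Lock()
        self._latest = None
        # Daemon thread: dies with the program, never hangs shutdown.
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            try:
                value = self._fetch()  # may block or raise on a bad connection
            except OSError:
                value = None           # connection trouble: report "no target"
            with self._lock:
                self._latest = value
            time.sleep(self._period)

    def get_latest(self):
        with self._lock:
            return self._latest
```

The key point is that a slow or failed fetch only delays the poller thread, not your robot loop.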

Historically we've used the Pine A64. Its physical footprint is about two-thirds the size of the roboRIO, but it is a quad-core, 64-bit device. We run Debian Linux headless on it and write all our vision code in C++ with OpenCV. The model we use (the Pine A64+ 1GB) is US$19.

Running two 640x480, 30 fps streams from two Kinect cameras, we only hit about 60-70% CPU utilization.

Installation of the Debian environment is fairly straightforward: download the image, run `dd if=imgfile of=/dev/sdcarddisk bs=1M`, and wait.

Be aware that the CPU itself gets quite toasty. Nothing dangerous, it doesn’t throttle, just a bit hot to the touch. You can add a cheapo $2 heatsink if you’re paranoid.

If you'd really like, the board supports Android. It was a bit sluggish, but I've only tried Android on the early development prototype boards, so your mileage may vary as to how polished Android is on this hardware.

Can confirm the thing gets HOT. Like burn you hot.

That is a bit over dramatic in my opinion.

It does get hot because it is a small package and that heat has to go somewhere.

I have 4 of these, covering every version available, and no student has been burned in 3 years of having them. (I weld and own a hobby forge, so I'm used to heat; I have to defer to the students on how hot is hot to less-callused hands.) Students have made them hot enough, particularly by leaving the black case in direct summer sun, that the Kangaroo slows down and will not charge the integrated battery. However, that is obviously a combination of heat sources the designer cannot control.

People made claims like this about the early Apple TV models as well, because those channeled all their heat to the case surface to dissipate it.

If this is really the concern: Victor fans (see EBM Papst) are a simple solution. We only use one, but our 3D-printed mount could easily be changed to hold 2-5 fans powered in parallel through the robot's PDP.

Our very first vision coprocessor was a Gateway netbook with 2GB of RAM, dual cores, and the display stripped off. That was field-legal. However, the Kangaroo is lighter, smaller, cheaper, and more consistently available year to year within the allowed budget for a COTS computing device.

Notably, I had provided my students with the ODroid XU4 because I own over 100 of them for professional purposes. However, the Kangaroo's integrated battery, plus a proper x86 64-bit CPU, won me over; it saved me from trying to keep the electrical students from feeding unstable power to the ODroid. I can just charge the Kangaroo and communicate over Ethernet (either with a USB-to-Ethernet adapter or the integrated Mobile Pro Ethernet) with the roboRIO or other Ethernet-connected devices.

I like the ODroid, but the extra effort of teaching OpenCV, porting the code, and having it sometimes perform very differently made me wonder whether it was worth it in our team's case.

The 64-bit element at first glance seems irrelevant to this issue, but recall that most desktop PCs are now 64-bit. Support for 32-bit OSes from Microsoft, and sometimes for 32-bit Linux libraries, is not as strong as it once was. So while 32-bit versus 64-bit is probably irrelevant to your vision code…a 32-bit platform will force you to use a 32-bit OS.

Also…off topic…but in regards to OpenCV: Packt Publishing has a few good books on learning OpenCV on x86 and the Raspberry Pi. Sometimes they even give them away as part of their daily free-book program.

We used an Android phone based on the solution 254 presented last year. The Android app that 254 developed is rock solid, which lets you focus only on the OpenCV part. We really liked the form factor of having the camera + processor + sensors + internal battery all together and only needing a single cable to connect. It is also nice to be able to debug on the screen and/or connected to a PC before deploying to the roboRIO.

It did take a bit of setup to get running and cost can be a bit higher than some of the other options, especially if you need a backup. You also need to be careful of battery management, as it tends to drain quite quickly with the screen, camera, and processing code constantly on.

The primary advantage of a co-processor in FRC is that there exists a size/weight/power/cost/CPU/programming language/interfaces co-processor option that fits most use-cases, whereas the RoboRIO offers one set of tradeoffs and options that may preclude certain things you want to do (e.g. some forms of vision processing). Secondary advantages are that most popular co-processors lend themselves to being programmed/iterated without a robot, and are also very common platforms in academics/industry.

The disadvantage (besides just materials cost) is that you will have to figure out how to power it, mount it, interface with it, and program it on your own, and you may run into various issues that you’ll have to work around (for example, many co-processors don’t like being turned off suddenly, require configuration to start your program at boot, can’t take a ton of vibration, etc.). It’s a significant step up in the complexity of your control system and introduces new potential points of failure.
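As an example of the "start your program at boot" configuration mentioned above: on a systemd-based Linux distribution (which includes most Pi and ODroid images), this usually comes down to a small unit file. The paths, user, and service name below are placeholders; substitute your own.

```ini
# /etc/systemd/system/vision.service  (example paths and user)
[Unit]
Description=FRC vision processing
After=network-online.target

[Service]
User=pi
ExecStart=/usr/bin/python3 /home/pi/vision/main.py
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Enable it once with `sudo systemctl enable vision.service`; `Restart=always` also helps with the crash-on-boot and sudden power-cycle failure modes.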

Is it difficult to interface with? Are there certain processors that are easier to interface with?

Did you just charge it between matches? Was there no way to power it through the battery on the robot? That might sound naive, but I really don't know much about electronics. :slight_smile:

Our roboRIO was at full utilization with no vision this year, possibly because we had too many threads running. We will redo our architecture, but if we end up with close to the same strain on the RIO and then try to add vision, it won't be able to handle it.

That would be very helpful, will look into it :slight_smile:

We just had a specific routine with our pit crew. When the robot came off the field they would plug in a USB battery pack and close the app to keep it charged at 100%. The battery pack travels with the robot cart for elims, and it is removed before the match.

The battery discharge rate is fine during the match, and the phone's USB is plugged into the RIO, which supplies a small amount of power and slows the discharge rate.

Our team just recently finished a season using a Raspberry Pi for vision. We've actually started creating a repo that has all the tools necessary to set up the Pi, as well as a framework for your processing. It will definitely make your life easier if you choose the Pi, which I would recommend.

We are still in the process of testing the setup files, but try it out and see what you think. This is all in Python using OpenCV. We process at around 20 fps, so it's pretty fast, and the Pi Camera is pretty reliable.
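Once the Pi-side OpenCV code finds the target's pixel coordinates, a common final step is converting the pixel offset into a yaw angle for the roboRIO to act on. A minimal sketch of that math, using a pinhole-camera model (the 640-pixel width and 60° horizontal FOV are just example numbers; substitute your camera's specs):

```python
import math

def pixel_to_yaw(target_x, image_width=640, horizontal_fov_deg=60.0):
    """Convert a target's x pixel coordinate into a yaw angle in degrees.

    The focal length in pixels is derived from the horizontal field of
    view, then the offset from image center is mapped through atan.
    Negative result = target left of center, positive = right.
    """
    center_x = image_width / 2.0
    focal_px = center_x / math.tan(math.radians(horizontal_fov_deg / 2.0))
    return math.degrees(math.atan((target_x - center_x) / focal_px))
```

A target dead-center gives 0°, and one at the image edge gives half the FOV, which is a quick sanity check for whatever FOV number you plug in.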


Cooling is why we made this