Code for beaglebone and network camera

Thought I might as well share our team’s starting code for the beaglebone and the network camera.

It is not quite done yet; we still have to handle multiple targets, add more filters, correct the target measurements, etc.

But it should provide a base for those who want to get started.

Some gotchas to the beaglebone:

  1. If your board is revision A4, you will have to remove a resistor to get the Ethernet port to work. Here is a video (found randomly on the internet) showing how to remove it: http://www.youtube.com/watch?v=Ak30G-shiYY

  2. Sometimes the file system gets a tad corrupted and the beaglebone will neither connect to a network nor show up as a USB drive when connected to the computer. The solution is simple: plug the microSD card into a desktop computer and run fsck on it.

  3. OpenCV did not seem to work with the network camera on our beaglebone, so we ended up going directly through ffmpeg (see the sketch after this list).

  4. The packaged ffmpeg claimed it did not have the MJPEG codec (even though it did). Building ffmpeg from source fixed this.

  5. We compiled everything on the beaglebone itself; cross-compiling was too much of a pain.
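
For gotcha 3, here is roughly what going directly through ffmpeg can look like: spawn ffmpeg as a child process and read raw BGR frames off its stdout. The camera URL, stream path, and resolution below are assumptions (an Axis-style MJPEG URL), not necessarily what is in our zip.

```cpp
// Hedged sketch: grab frames from a network camera by piping ffmpeg's
// rawvideo output into the program. URL and resolution are assumptions.
#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    const int W = 640, H = 480; // assumed camera resolution

    // Hypothetical Axis-style MJPEG URL; substitute your camera's address.
    FILE* pipe = popen(
        "ffmpeg -i http://10.0.0.11/mjpg/video.mjpg "
        "-f rawvideo -pix_fmt bgr24 - 2>/dev/null", "r");
    if (!pipe)
        return 1;

    cv::Mat frame(H, W, CV_8UC3); // one BGR frame, matching -pix_fmt bgr24
    const size_t frameBytes = size_t(W) * H * 3;

    while (fread(frame.data, 1, frameBytes, pipe) == frameBytes) {
        // 'frame' now holds one decoded image; run the vision code on it here.
    }

    pclose(pipe);
    return 0;
}
```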

This is how we are finding the angle to the target (a rough OpenCV sketch follows the list):

  1. Use findContours to find the contours.
  2. Use approxPolyDP to find the corner points of the rectangles.
  3. Use a filter that tosses out misshapen rectangles.
  4. Use solvePnP to obtain the location of the target.
  5. Use atan2 to obtain our angle to the target in the x-z plane.
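
Here is a rough OpenCV sketch of those five steps, assuming the image has already been thresholded to a binary mask; the target dimensions and the corner-ordering bookkeeping for solvePnP are placeholders, so treat it as an outline of the approach rather than our actual code.

```cpp
// Hedged sketch of the five steps above. Target size, filter thresholds,
// and the corner-ordering step are assumptions / simplifications.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Made-up target dimensions (inches); use the real vision target's numbers.
static const float TARGET_W = 24.0f, TARGET_H = 18.0f;

// Returns true and writes the x-z bearing to angleOut if a target is found.
bool angleToTarget(const cv::Mat& binary, const cv::Mat& cameraMatrix,
                   const cv::Mat& distCoeffs, double& angleOut)
{
    // 1. findContours (it modifies its input, hence the clone).
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binary.clone(), contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i) {
        // 2. approxPolyDP to reduce the contour to its corner points.
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contours[i], poly,
                         0.02 * cv::arcLength(contours[i], true), true);

        // 3. Toss out misshapen rectangles: must be a convex quad of real size.
        if (poly.size() != 4 || !cv::isContourConvex(poly) ||
            cv::contourArea(poly) < 500.0)
            continue;

        // 4. solvePnP: known 3D corners vs. detected 2D corners. NOTE: the
        // 2D points must be sorted into the same order as the 3D ones; that
        // bookkeeping is omitted here.
        std::vector<cv::Point3f> objectPts;
        objectPts.push_back(cv::Point3f(-TARGET_W / 2,  TARGET_H / 2, 0));
        objectPts.push_back(cv::Point3f( TARGET_W / 2,  TARGET_H / 2, 0));
        objectPts.push_back(cv::Point3f( TARGET_W / 2, -TARGET_H / 2, 0));
        objectPts.push_back(cv::Point3f(-TARGET_W / 2, -TARGET_H / 2, 0));

        std::vector<cv::Point2f> imagePts;
        for (size_t j = 0; j < 4; ++j)
            imagePts.push_back(cv::Point2f(poly[j].x, poly[j].y));

        cv::Mat rvec, tvec;
        cv::solvePnP(objectPts, imagePts, cameraMatrix, distCoeffs, rvec, tvec);

        // 5. atan2 on the translation's x and z gives the bearing.
        angleOut = std::atan2(tvec.at<double>(0), tvec.at<double>(2));
        return true;
    }
    return false; // no target found
}
```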

As for controlling the beaglebone, we are going to have it act as a TCP server and send messages back and forth with boost::asio; a minimal sketch of the server side is below.
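
For the curious, here is roughly what that server side can look like; the port number and the one-line "angle <value>" reply are illustrative assumptions, not the actual protocol in the attached code.

```cpp
// Minimal sketch of a blocking boost::asio TCP server. The port (5800) and
// the "angle <value>" reply format are assumptions for illustration.
#include <boost/asio.hpp>
#include <iostream>
#include <sstream>
#include <string>

using boost::asio::ip::tcp;

int main()
{
    boost::asio::io_service io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5800)); // assumed port

    for (;;) {
        tcp::socket socket(io);
        acceptor.accept(socket); // block until the robot connects

        try {
            for (;;) {
                // Wait for one request line from the robot.
                boost::asio::streambuf request;
                boost::asio::read_until(socket, request, '\n');

                // Reply with the latest angle (stubbed here; the real value
                // would come from the vision code).
                double angle = 0.0;
                std::ostringstream reply;
                reply << "angle " << angle << "\n";
                boost::asio::write(socket, boost::asio::buffer(reply.str()));
            }
        } catch (std::exception& e) {
            // Robot disconnected; log it and go back to accepting.
            std::cerr << e.what() << std::endl;
        }
    }
}
```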

Hope that this is useful for some teams.

asioSingle.zip (64.5 KB)

Wow, very interesting. So are you actually using the beaglebone on the robot this year, or is this only a proof-of-concept with the camera?

Proof of concept so far. We will put it on the practice bot on Monday, and see how well it works on our target on a stick.

So far it gives very accurate measurements with our printed-out protractor.

Sweet, let us know what happens.

Are you planning on using the Kinect Sensor with it?

One of the cool projects I want to dig into in the offseason is using the Kinect to find things like floors, corners, balls, steps, other robots…

Dream big…

Joe J.

For some reason we were having problems with the USB port on the beaglebone. After a couple of days of tinkering with USB cameras and the Kinect, we decided to cut our losses and move on with the network camera.

The port just didn’t seem to be supplying enough power to the devices.

If you are willing, you can try soldering the wires directly onto the beagleboard. Using a network camera really just defeats the purpose of getting the board. From what Min Soo told me, you guys originally planned on using the Kinect. Remember, the Kinect needs its own 12V power from the cRIO; the USB only gives you 5V. Since the Beagle Board runs Linux, you can give video4linux a try, but it seems as if you have had success already without it.

We provided the 12V power source with the adapter. We also measured 5V across the pins from the USB port. The lights on the Kinect and the USB camera still don’t turn on, and /dev/video0 does not show up for the USB camera. The strange thing is that flash drives still work.

Something very funky is obviously going on.

We will look into the Kinect (and USB webcams) later, but the IP camera will work perfectly for now.

There were actually three reasons why we got the beaglebone.

  1. A lot more possibilities, with OpenCV, full C++ support, etc. We can do practically whatever we wish (and OpenCV does come with some pretty impressive functions, solvePnP being one of them).

  2. We want all this technology to be separate from the drive system. This way, if the code is slow or fails, at least the robot will continue to drive and the driver can shoot manually.

  3. We wanted the ability to use USB devices such as USB cameras (much cheaper and faster) and the Kinect.

Two out of three is still pretty good as far as I can see.

But there is a bandwidth/latency problem you have to overcome. I mean, it is milliseconds of time we are working with here, but you will have to run tests to see if offloading the calculations will actually save you clock cycles. It might honestly be faster to crunch the data on the cRIO than to transfer it over to the board, calculate, and then send it back. Your number two is valid, from what I see. But considering the short time frame of a match, will you pick up on a failure quickly enough to go into manual override? Just some potential issues I see. Also, you have a very short time to figure all that out; you need a way to interface with the cRIO seamlessly.

I am sure that reading 8 bytes of “the angle is blah” from the beaglebone is much faster than reading 640 * 380(?) * 4 bytes of an image, converting the image to the right colorspace, and then running filters and other operations to detect and locate the targets.

When the robot goes through robot-init, it tries to connect to the beaglebone and sets a flag if it fails. We could simply turn on a light to show that the connection failed; something like the sketch below.
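
Something along these lines (the address and port are made up, and the real version would live in the robot class rather than a free function):

```cpp
// Hedged sketch of the robot-init connection attempt: try once, report
// success or failure via the return value. IP and port are assumptions.
#include <boost/asio.hpp>

bool connectToBeaglebone(boost::asio::io_service& io,
                         boost::asio::ip::tcp::socket& socket)
{
    using boost::asio::ip::tcp;
    boost::system::error_code ec;
    tcp::endpoint beaglebone(
        boost::asio::ip::address::from_string("10.0.0.12"), 5800); // assumed
    socket.connect(beaglebone, ec);
    return !ec; // on failure, set the flag / light the indicator
}
```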

As for the speed of doing this on the beaglebone: it has a 700 MHz processor and seems to do these calculations almost in real time. We could of course have it cache the previous calculation and immediately return the “stale” value if needed (roughly as sketched below), but I doubt that will be necessary.
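
The cache would just be a shared value between the vision loop and the server thread, roughly like this (names made up):

```cpp
// Sketch of the "stale value" cache: the vision loop publishes each new
// angle, and the server replies instantly with whatever is newest.
#include <boost/thread/mutex.hpp>

struct AngleCache {
    boost::mutex mtx;
    double angle;
    bool   valid; // false until the first frame has been processed

    AngleCache() : angle(0.0), valid(false) {}

    void publish(double a) {            // called by the vision loop
        boost::mutex::scoped_lock lock(mtx);
        angle = a;
        valid = true;
    }

    bool latest(double& out) {          // called by the TCP server
        boost::mutex::scoped_lock lock(mtx);
        out = angle;                    // possibly stale, but never blocks
        return valid;
    }
};
```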