#16
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Quote:
), and we just leave both running at the same time. We don't even max out the CPU free-running at over 30 FPS for both programs (admittedly with the same code that was originally optimized for the BeagleBoard). In short, that's why we chose (and love) a solution like this.
#17
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Has anyone looked into using the Odroid X? It's made by a South Korean company, so shipping might be expensive. It has a quad-core CPU and a GPU, and runs Android. It supports up to four cameras, for any teams wanting to do stereoscopic vision. It has a built-in hardware-accelerated JPEG encoder/decoder, which is a plus. It also has GPIO, I2C, and SPI interfaces for additional sensor inputs. I know there were rules about the cRIO being the sole board allowed to actuate any mechanisms, but it has PWM/ADC output for your personal projects.
A great alternative to any of these boards, IMHO.
#18
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
If you are running your Driver Station on a relatively new laptop, and not the Classmate, you have more than enough processing power available to do your image processing there. The image processing routines included with LabVIEW are impressively efficient and powerful. I actually use them in my professional life, and have found them to be among the fastest such routines available. You can then send whatever computed values you need back to the robot via a UDP connection.
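The "send computed values back via UDP" step is easy to sketch. The packet layout and port below are assumptions for illustration, not anything from the LabVIEW code; for a self-contained demo, the "robot" end is simulated on localhost rather than the cRIO's usual 10.TE.AM.2 address:

```python
import socket
import struct

def pack_target(x_offset, distance):
    """Pack the two computed values (e.g. aiming offset and range)
    as big-endian floats for the robot end to unpack."""
    return struct.pack(">ff", x_offset, distance)

# Simulated robot side: a UDP socket on an ephemeral localhost port.
# On a real field network this would be a socket listening on the cRIO.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

# Driver Station side: compute, pack, send one datagram per processed frame.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(pack_target(-3.2, 11.5), rx.getsockname())

x_offset, distance = struct.unpack(">ff", rx.recv(8))
```

UDP is a good fit here: a lost datagram just means the robot uses the next frame's result, with no retransmission stalls.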
#19
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Our team used a BeagleBone during the competition season and tested a Raspberry Pi at two off-season events. Both work well enough: the frame rate is low, but fast enough to get back accurate data. The main problem with these boards, unless you get a high-end one, is the lack of USB drivers fast enough to support large or high-frame-rate streams from USB webcams. Using the network camera can get around this, but pyopencv uses ffmpeg for network video streaming, which is very slow to connect and is a major bottleneck; IIRC, C and C++ implementations can get around this by using their own image-acquisition functions. Take your video source into account when picking a board. Of course, your best option is offloading processing to a fast Driver Station.
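For what it's worth, the "own image-acquisition functions" workaround is also possible in Python: read the camera's MJPEG HTTP stream yourself and split out complete JPEG frames, bypassing ffmpeg entirely. A rough sketch of just the frame-splitting part (the stream URL and the network read loop around it are left out):

```python
def extract_jpegs(buf):
    """Split complete JPEG frames out of an MJPEG byte buffer.

    Returns (frames, remainder): each frame runs from the JPEG
    start-of-image marker (FF D8) through end-of-image (FF D9); the
    remainder should be kept and prepended to the next network read.
    """
    frames = []
    while True:
        start = buf.find(b"\xff\xd8")
        if start == -1:
            break
        end = buf.find(b"\xff\xd9", start + 2)
        if end == -1:
            break
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]
    return frames, buf
```

Each returned frame can then be decoded with `cv2.imdecode`. Note this naive marker search can be fooled by FF D8/FF D9 byte pairs inside entropy-coded data; MJPEG-over-HTTP streams also carry multipart boundary strings you can split on instead, which is more robust.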
#20
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Has anyone used an off-board processor (BeagleBoard/BeagleBone/Pi/Arduino, etc.) with a directly connected camera (not using USB or Ethernet)?
For example, I think the BeagleBoard-xM has a camera connector and modules one can buy. The Pi has a 5 MP camera module in development, but it looks like it will not go into production in time for FRC 2013 use. Since it is December, we need to buy a canned hardware/software solution that requires only the porting of OpenCV (or equivalent). TIA
#21
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Our team has used a simple but effective vision system on the cRIO, and during the off-season I've tried using the driver station for vision processing. As far as I could tell, the driver station (running the default 2012 LabVIEW vision code, modified to work on a PC) could grab the image from the robot camera, process it, find the target, and send the coordinates back to the cRIO over TCP. I'm not positive it was faster, but it was much smoother than running the default LabVIEW vision code on the robot. Our DS laptop was an older Toshiba with a dual-core 1.5 GHz processor and 2 GB of RAM, and CPU and RAM usage weren't very high.

I haven't tried a BeagleBoard, but I've seen my friend try to use a Kinect with a Raspberry Pi. The USB is so slow that he couldn't even get a good frame rate with a color 120x160 image. Also, I'm not sure how fast the Pi can actually filter/process an image.

It seems to me that the easiest way to do vision is with the driver laptop. If you don't like LabVIEW or Vision Assistant, you are free to use whatever you want, like OpenCV. To me it seems like too much of a hassle to set up your own single-board computer on the robot just to get vision.
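The "send the coordinates back over TCP" step can be sketched in a few lines. This is not the team's actual code, just an illustration using a newline-terminated ASCII message (an easy format to parse on the robot end), with the cRIO side simulated by a localhost listener:

```python
import socket

def format_target(x, y):
    """Encode one target result as a newline-terminated ASCII line."""
    return f"{x:.2f},{y:.2f}\n".encode("ascii")

def parse_target(line):
    """Inverse of format_target, run on the robot end."""
    x_s, y_s = line.decode("ascii").strip().split(",")
    return float(x_s), float(y_s)

# Simulated cRIO side: listen on an ephemeral localhost port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

# Driver Station side: connect once, then send one line per processed frame.
client = socket.create_connection(server.getsockname())
client.sendall(format_target(12.34, -5.60))

conn, _ = server.accept()
x, y = parse_target(conn.makefile("rb").readline())
```

The newline framing matters: TCP is a byte stream, so without a delimiter two results sent back-to-back can arrive fused in one read.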
#22
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
This is a large risk to take so late in the off-season. Given that, mitigate the risks by going with as much of a known system as possible (i.e., go with the PandaBoard if you have the budget). That way you'll spend less time pulling your hair out on non-vision stuff and more time actually refining your vision algorithm. Plus, you have to figure out how the vision calculations will play into the overall robot logic code.
#23
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
The first few years of FRC robot vision used a CMUCam. It was an integrated camera and dedicated processor, communicating with the Robot Controller using a serial data connection. We had moderate success with the green plastic vision tetras in Triple Play, and great success with it finding the lit vision targets in the Rack & Roll and Aim High games (with great or little success, respectively, in having the robot do something useful once the target was located).
#24
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
So over the past few days I have been working on converting 341's vision code to Python so I could do some testing on the Raspberry Pi for our purposes. Running their code just up to the morphology step took 1/3 of a second at 640x480 and 1/10 of a second at 320x240, pulling images directly from the SD card with no networking or other processing. I have the old 256 MB version, but I do not think the extra memory would make enough difference for the Raspberry Pi to be worthwhile.
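To reproduce that kind of measurement yourself, a simple timing harness is enough. The sketch below is not 341's code: it stands in a deliberately pure-Python threshold plus 3x3 erosion (the front end of a typical target pipeline) just to show the harness, and incidentally why per-pixel Python loops are hopeless. Real code would use vectorized routines such as OpenCV's `cv2.inRange` and `cv2.morphologyEx`:

```python
import time
import random

def threshold_and_erode(img, lo, hi):
    """Binary-threshold a grayscale frame (a list of pixel rows),
    then apply one 3x3 erosion. A pixel survives the erosion only
    if its entire 3x3 neighborhood passed the threshold."""
    h, w = len(img), len(img[0])
    mask = [[lo <= v <= hi for v in row] for row in img]
    out = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = all(mask[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

# Time one 320x240 frame of random pixels.
frame = [[random.randrange(256) for _ in range(320)] for _ in range(240)]
t0 = time.perf_counter()
threshold_and_erode(frame, 100, 200)
print(f"{(time.perf_counter() - t0) * 1000:.0f} ms per 320x240 frame")
```

Timing just this front end in isolation, the way the post does (images from disk, no networking), separates the pipeline's cost from acquisition overhead.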
#25
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Quote:
Last edited by dellagd : 24-12-2012 at 22:31.
#26
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Memory can make a huge difference.
I have a 256 MB Pi that hosts my Java-based wiki; it's pretty slow. Moving to a 512 MB Pi improved speed by 20%, and a little overclock to 900 MHz put it 30% over the original Pi. A new Cubieboard arrived with 1 GB of memory, and it is now 50% faster than the 256 MB Pi; board cost is up to $49. So memory will make a difference. Remember, too, that the USB pins also drive the Ethernet, so you may have a bottleneck there.
#27
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Quote:
Just a heads-up: if you are attempting it, it will be quite difficult, as no one has done it successfully yet.
#28
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
What makes you say that? If I recall correctly, the SoC on the Pi uses a Synopsys USB core and is supported by the dwc_otg driver, which is pretty robust. (Although I have seen different vendors fork this driver and make a mess of it).
#29
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
Quote:
It accepts a 12-volt DC input and drives the motherboard directly. Sure, you'd probably want some power conditioning to ensure a constant 12-volt input, but it's a step in the right direction. Just rip out the AC-DC power supply (and save weight while doing it!) and replace it with this.
#30
Re: Best board for vision processing (beagle/panda/beaglebone/etc?)
I'm just going off what I've heard on the Raspberry Pi forums and within our own controls team.