#1
Optimal board for vision processing

Hello, I'm a new (first-year) member of a rookie team (this will be the 3rd year for the team). As an enthusiastic developer, I will be part of the programming subteam. We program the roboRIO in C++ and the SmartDashboard in Java or C# (using IKVM to port the Java binaries to .NET).

In this period before the competition starts, I'm learning as much material as I can. A friend of mine and I were thinking about developing a vision processing system for the robot, and we pretty much figured that utilizing the roboRIO (or the cRIO we have from last year) isn't any good because, well, it's just too weak for the job.

We thought about sending the video live to the driver station (Classmate/another laptop), where it would be processed, with the results sent back to the roboRIO. The problem is the 7 Mbit/s networking bandwidth limit and, of course, the latency. So we thought about employing an additional board, which would connect to the roboRIO and do the image processing on the robot. We thought about using an Arduino or a Raspberry Pi, but we are not sure they are strong enough for the task either.

So, to sum up: what is the best board to use in FRC vision systems? Also, if we connect, for example, a Raspberry Pi to the robot's router and the router to the IP camera, the 7 Mbit/s bandwidth limit does not apply, right? (Because the camera and the Pi are connected via LAN.)

P.S. I am aware that this question has been asked in this forum already, but that was a year ago, so today there may be better/other options.

Last edited by matan129 : 15-10-2014 at 09:53.
#2
Re: Optimal board for vision processing
The most powerful board in terms of raw power is the Jetson TK1. It utilizes an Nvidia GPU, which is orders of magnitude more powerful than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge is more than most if not any other single-board computer on the market. It is, however, $192 and much larger than an R-Pi.
http://elinux.org/Jetson_TK1

P.S. Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson

Last edited by jman4747 : 15-10-2014 at 10:19.
#3
Re: Optimal board for vision processing
Quote:
Also, is developing for CUDA any different from 'normal' development?

Quote:
Last edited by matan129 : 15-10-2014 at 10:32.
#4
Re: Optimal board for vision processing
Quote:
Our first tests were conducted on Dell Mini 9s running Ubuntu 8.04 LTS, which I had loaded on mine while doing development work on another unrelated project. The Dell Mini 9 has a single-core Atom processor. Using Video4Linux and OpenJDK (Java), the programming captain crafted his own recognition code. I believe that helped get him into college. It was very interesting. We then tried a dual-core Atom Classmate, and it worked better once his code was designed to use that extra resource.

Between seasons I slammed together a vision system using two cameras on a Lego Mindstorms PTZ rig and used OpenCV with Python. With that you could locate yourself on the field using geometry, not parallax. Other students have since worked on other Java-based and Python-based solutions using custom and OpenCV code. I have stripped parts out of OpenCV and loaded them into ARM processors to create a camera with vision processing within it. It was mentioned in the proposal I helped to submit to FIRST. I think using an old phone is probably more cost effective (they make lots of a single model of phone, and when they are old they plummet in price).

OpenCV wraps Video4Linux, so from the 'use a USB camera' perspective the real upside of OpenCV is that it takes care of things like detecting the camera being attached and setting the modes (a minimal example of that loop is sketched at the end of this post). Still, Video4Linux is pretty well documented, and the only grey area you will find is if you pick a random camera. Every company that tries to USB-interface a CMOS or CCD camera does its own little thing with the configuration values. So I suggest finding a camera you can understand (Logitech or PS3-Eye) and not worrying about the other choices. A random cheapo camera off Amazon or eBay might be a huge pain when you can buy a used PS3-Eye at GameStop.

Last edited by techhelpbb : 15-10-2014 at 10:53.
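To make that concrete, here is a minimal sketch of the capture-and-threshold loop OpenCV gives you on top of Video4Linux (OpenCV 2.x C++ API; the camera index, resolution, and HSV bounds are placeholder values you would tune for your own camera and target):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);   // first V4L device, e.g. /dev/video0
    if (!cap.isOpened()) {
        std::cerr << "Could not open camera" << std::endl;
        return 1;
    }
    // OpenCV does the V4L mode negotiation you would otherwise write by hand.
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, CV_BGR2HSV);          // BGR -> HSV
        cv::inRange(hsv, cv::Scalar(0, 100, 100),      // placeholder bounds
                    cv::Scalar(10, 255, 255), mask);   // for a red-ish target
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, CV_RETR_EXTERNAL,
                         CV_CHAIN_APPROX_SIMPLE);
        // ... score contours against the target's known shape here ...
    }
    return 0;
}
```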
#5
Re: Optimal board for vision processing
Quote:
It is soooooooooooooo easy to use the GPU. After the setup (installing libraries, updating, etc.) we were tracking red 2014 game pieces on the GPU within the next 30 minutes.

We used the code from here:
http://pleasingsoftware.blogspot.com...-computer.html

Read through this and the related GitHub repo linked in the article:
https://github.com/aiverson/BitwiseAVCBalloons

OpenCV has GPU libraries that basically work automatically with the Jetson (a sketch of what those calls look like is at the end of this post):
http://docs.opencv.org/modules/gpu/doc/gpu.html

You can see in the GitHub repo of the above example the different compile command for activating GPU usage:
https://github.com/aiverson/BitwiseA...aster/build.sh

If you ever get to use that code on the Jetson, note: the program in the above link opens up a display window for each step of the process, and closing the displays speeds up the program from 4 fps with all of them open to 16 fps with only the final output open. I presume that with the final output closed and no GUI open (a.k.a. how it would be on a robot) it would be much faster.

Also, we used this camera, set to 1080p, for the test:
http://www.logitech.com/en-us/produc...ro-webcam-c920

Last edited by jman4747 : 15-10-2014 at 10:59.
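For a flavor of the gpu module in practice, here is a sketch of the usual color-threshold step moved onto the GPU, using the OpenCV 2.4-era cv::gpu API that ships for the TK1. The HSV bounds are made-up placeholders, and since the gpu module has no inRange(), the channels are thresholded separately and ANDed:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);    // camera index is a placeholder
    cv::Mat frame, result;
    cv::gpu::GpuMat d_frame, d_hsv;
    std::vector<cv::gpu::GpuMat> d_ch;

    while (cap.read(frame)) {
        d_frame.upload(frame);                          // host -> GPU copy
        cv::gpu::cvtColor(d_frame, d_hsv, CV_BGR2HSV);
        cv::gpu::split(d_hsv, d_ch);                    // H, S, V planes

        cv::gpu::GpuMat h_ok, s_ok, v_ok, mask;
        // Placeholder bounds for a red-ish target: H < 10, S > 100, V > 100
        cv::gpu::threshold(d_ch[0], h_ok, 10, 255, CV_THRESH_BINARY_INV);
        cv::gpu::threshold(d_ch[1], s_ok, 100, 255, CV_THRESH_BINARY);
        cv::gpu::threshold(d_ch[2], v_ok, 100, 255, CV_THRESH_BINARY);
        cv::gpu::bitwise_and(h_ok, s_ok, mask);
        cv::gpu::bitwise_and(mask, v_ok, mask);

        mask.download(result);   // GPU -> host; contour finding stays on CPU
        // ... cv::findContours(result, ...) as in the CPU version ...
    }
    return 0;
}
```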
#6
Re: Optimal board for vision processing
Quote:
Quote:
#7
Re: Optimal board for vision processing
Quote:
Quote:
If you do it in compiled code you can easily achieve 5 fps or more (with reduced color depth). If your CPU is slow or your code is inefficient, then things might not work out so well.

Anything you send over TCP, TCP will try to deliver, and once it starts, it is hard to stop (hence 'reliable transport'). With UDP you control the protocol, so you can choose to give up. This means with UDP you need to do more work (a sketch of the give-up approach is at the end of this post). Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements. I would rather see a good cooperative solution that people can leverage and discuss/document than a lot of people rediscovering how to do this in a vacuum over and over.

I will put in an honorable mention here for VideoLAN (VLC) as far as unique and interesting ways to send video over a network go. Anyone interested might want to look it over.

Last edited by techhelpbb : 15-10-2014 at 11:22.
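As an illustration of the 'choose to give up' point, here is a sketch of a UDP sender that fires each JPEG-compressed frame at the driver station as a single datagram and simply drops frames that will not fit, instead of queueing them the way TCP would. The address, port, JPEG quality, and size cutoff are all placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <vector>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest;
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5800);                      // placeholder port
    inet_pton(AF_INET, "10.0.0.5", &dest.sin_addr);   // placeholder DS address

    cv::VideoCapture cap(0);
    cv::Mat frame;
    std::vector<uchar> jpeg;
    std::vector<int> opts;
    opts.push_back(CV_IMWRITE_JPEG_QUALITY);
    opts.push_back(40);            // low quality keeps datagrams small

    while (cap.read(frame)) {
        cv::imencode(".jpg", frame, jpeg, opts);
        if (jpeg.size() > 60000)   // won't fit in one datagram:
            continue;              // give up on this frame, grab the next
        sendto(sock, &jpeg[0], jpeg.size(), 0,
               (sockaddr*)&dest, sizeof(dest));       // fire and forget
    }
    close(sock);
    return 0;
}
```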
#8
Re: Optimal board for vision processing
Quote:
Is it 'safe' to develop a high-res/high-fps vision system whose parts are all physically on the robot (i.e. the camera and the RPi)? By this I mean: will I suddenly discover at the field that all the communication actually goes through the field WiFi, and hence the vision system is unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place)?

Quote:
Last edited by matan129 : 15-10-2014 at 11:32.
#9
Re: Optimal board for vision processing
Quote:
Also, you do not need to send much over the D-Link switch unless you are sending video to the driver's station. In fact, you can avoid Ethernet entirely if you use I2C, the digital I/O, or something like that. So you should be good.

Just be careful to realize that if you do use Ethernet for this, you are using some of the bandwidth to the cRIO/roboRIO, and if you do this too much you can cause issues. You do not have complete control over what the cRIO/roboRIO does on Ethernet, especially when on a regulation FIRST field. I believe there is a relevant example of what I mean in the Einstein report from years past.

Quote:
D-Link has had issues with this in the past; hence they deprecated the bridge feature on the DIR-655. There are hints to this floating around, like this:
http://forums.dlink.com/index.php?topic=4542.0
Also this (odd, is it not, that the described broadcast does not pass...):
http://forums.dlink.com/index.php?to..._next=next#new

Last edited by techhelpbb : 15-10-2014 at 19:35.
#10
Re: Optimal board for vision processing
Quote:
#11
Re: Optimal board for vision processing
Quote:
http://www.andymark.com/product-p/am-0866.htm

I do not want to hijack your topic on this extensively, so I will simply point you here:
http://en.wikipedia.org/wiki/I%C2%B2C
It is basically a simple two-wire serial bus for digital communication between devices. To use it from a laptop you would probably need a USB-to-I2C interface, and they do make things like that COTS. A sketch of the byte-level coprocessor side is below.

Last edited by techhelpbb : 15-10-2014 at 11:41.
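For what that looks like in code, here is a sketch of a Linux coprocessor (RPi/BeagleBone) pushing two bytes of target data through the kernel's i2c-dev interface. The bus number, device address, and two-byte message format are made up for illustration, and note the Pi would be acting as the I2C master here, so the other end has to be set up to match:

```cpp
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Bus 1 is where the I2C pins land on a Raspberry Pi; adjust per board.
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // 0x04 is a made-up address for the device on the other end of the bus.
    if (ioctl(fd, I2C_SLAVE, 0x04) < 0) { perror("ioctl"); return 1; }

    // Hypothetical 2-byte message: target x offset, then distance in inches.
    // The format is whatever you define on both ends.
    unsigned char msg[2] = { 120, 96 };
    if (write(fd, msg, sizeof(msg)) != (ssize_t)sizeof(msg))
        perror("write");

    close(fd);
    return 0;
}
```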
#12
Re: Optimal board for vision processing
Quote:
What I would do is try to get a DAC (digital-to-analog converter) on the RPi or BeagleBone. That way you can hook it up straight to the analog input on the roboRIO and use a function to change the analog signal back into a digital value. I feel like it would be an easy thing to do, especially if you are doing an auto center/aim system: you could hook a PID loop right up to the analog signal (okay, maybe a PI loop, but that can still give you good auto-aiming). A sketch of what that might look like is at the end of this post.

I also completely forgot about the laptop driver station option. Although it is not the fastest method, vision tracking on the driver station is probably the easiest method.

Also, the roboRIO has 2 cores, so maybe you can dedicate one core to the vision tracking, and that way there is practically 0 latency (at least for communications).
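A rough sketch of that receiving end, using the roboRIO-era WPILib C++ API: the DAC voltage encodes how far off center the target is, and a simple P loop (add I if needed) turns the robot toward it. The channel numbers, the 2.5 V 'centered' convention, and the gain are all assumptions:

```cpp
#include "WPILib.h"

class Robot : public SampleRobot {
    AnalogInput targetOffset{0};   // DAC from the RPi/BeagleBone wired to AI0
    Talon left{0};
    Talon right{1};

public:
    void OperatorControl() {
        while (IsOperatorControl() && IsEnabled()) {
            // Assume 2.5 V means the target is centered; the error is signed.
            double error = targetOffset.GetVoltage() - 2.5;
            double turn = 0.4 * error;   // placeholder P gain
            left.Set(turn);              // equal same-sign outputs spin the
            right.Set(turn);             // robot in place (sign depends on
                                         // your drivetrain's inversion)
            Wait(0.005);                 // ~200 Hz loop
        }
    }
};

START_ROBOT_CLASS(Robot);
```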
#13
Re: Optimal board for vision processing
Quote:
#14
Re: Optimal board for vision processing
I got in contact with another team who seemed to like the Pixy. They liked it not only because it does vision, but because it processes the images itself with very little outside programming. We plan to use it next year; you can find it at http://www.amazon.com/Charmed-Labs-a...&keywords=pixy or you can go to their website, http://charmedlabs.com/default/. We have yet to try it, but they seem to really like it. It's definitely a start for vision and vision processing. Hope it helps!
#15
Re: Optimal board for vision processing
I really don't know how good of an option the Pixy would be for FRC, though. There would be quite a bit of motion blur, I'd imagine, and I do not think the Pixy lets you calculate more advanced things such as distances.
For the same price, you could get an ARM dev board that can run a full-blown suite of vision tools, such as OpenCV!