Optimal board for vision processing

#1
Hello, I'm a new (first-year) member of a rookie team (this will be the 3rd year for the team). As an enthusiastic developer, I will be part of the programming sub-team. We program the RoboRIO in C++ and the SmartDashboard in Java or C# (using IKVM to port the Java binaries to .NET).
In this period, before the competition starts, I'm learning as much material as I can. A friend of mine and I were thinking about developing a vision processing system for the robot, and we pretty much figured that using the RoboRIO (or the cRIO we have from last year) is no good because, well, it's just too weak for the job. We thought about sending the video live to the driver station (Classmate/another laptop), where it would be processed and the results sent back to the RoboRIO. The problem is the 7 Mbit/s networking bandwidth limit and, of course, the latency. So we thought about employing an additional board, which would connect to the RoboRIO and do the image processing there. We thought about using an Arduino or a Raspberry Pi, but we are not sure they are strong enough for the task either. So, to sum up: what is the best board to use in an FRC vision system? Also, if we connect, for example, a Raspberry Pi to the robot's router and the router to the IP camera, the 7 Mbit/s bandwidth limit does not apply, right? (Because the camera and the Pi are connected via LAN.) P.S. I am aware that this question has been asked on this forum already, but that was a year ago, so today there may be other/better options.

Last edited by matan129 : 15-10-2014 at 09:53.
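For a rough sense of why the 7 Mbit/s cap matters for live video, here is a back-of-the-envelope estimate. The resolution, frame rate, and compression ratio below are illustrative assumptions, not figures from the FRC rules:

```python
# Rough bandwidth estimate for a video stream (illustrative numbers).
width, height = 640, 480      # assumed camera resolution
fps = 30                      # assumed frame rate
bytes_per_pixel = 3           # 24-bit RGB

raw_bits_per_sec = width * height * bytes_per_pixel * 8 * fps
print(f"Raw: {raw_bits_per_sec / 1e6:.0f} Mbit/s")        # ~221 Mbit/s

jpeg_ratio = 20               # assumed ~20:1 MJPEG compression
compressed = raw_bits_per_sec / jpeg_ratio
print(f"MJPEG ~20:1: {compressed / 1e6:.1f} Mbit/s")      # ~11 Mbit/s, still over 7
```

Even with generous compression, a full-resolution 30 fps stream overshoots the limit, which is why processing on the robot's LAN side is attractive.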
#2
Re: Optimal board for vision processing
The most powerful board in terms of raw power is the Jetson TK1. It has an Nvidia GPU, which is orders of magnitude faster than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge is more than most, if not all, other SBCs on the market. It is, however, $192 and much larger than an R-Pi.
http://elinux.org/Jetson_TK1

P.S. Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson

Last edited by jman4747 : 15-10-2014 at 10:19.
#3
Re: Optimal board for vision processing
So far, MORT11's vision processing has been done with a stripped-down dual-core AMD mini-laptop (bigger than a netbook) on the robot, worth less than $200 on the open market. It has the display and keyboard removed. It has proven to be legal in the past, but we have rarely relied on vision processing, so it is often removed from the robot mid-season. It has also been driven over the bumps in the field 200+ times with an SSD inside, and it still works fine. For cameras we used USB cameras like the PS3 Eye, which has a professional vision library on Windows and can handle 60 frames per second on Linux (though you hardly need that).
That laptop is heavier than the single-board computers, in part because of the battery. However, I would suggest that the battery is worth the weight. As the laptop is COTS, the extra battery is legal. This means the laptop can be running while the robot is totally off.

The tricky part is not finding a single-board or embedded system that can do vision processing; the tricky part is powering it reliably, and the battery fixes that issue while providing enormous computing power in comparison. Very likely none of the embedded and single-board systems that will invariably be listed in this topic can compete on cost/performance with a general-purpose laptop; the market forces in the general computing industry drive prices differently.

The cRIO gets around the power issue because it takes boosted 19 V from the PDU and then bucks it down to the internal low voltages it needs. As the battery sags under motor loads, dropping 19 V is no big deal if you only need 3.3 V, and since switching regulators are generally closed-loop, they adapt to these changing conditions. So just be careful: the 5 V regulated outputs on the robot PDU may not behave the way you want, or may not provide the wattage you need, and then you have to think about how you intend to power this accessory. People have worked around this in various ways: largish capacitors, COTS power supplies, or just using the PDU. I figure that since electronics engineering is not really a requirement for FIRST, using a COTS computing device with a reliable, production-grade power system asks less of teams.

Keep in mind that I see no reason an Apple/Android device like a tablet or cell phone would not have been legal on the robot in past competitions, as long as the various radio parts are properly turned off.
It is possible someone could create a vision processing system on an old phone using the phone's camera, and use the phone's audio jack (think Square credit card reader), display (put a phototransistor against the display and toggle pixels), or charging/docking port (USB/debugging; with Apple, be warned they have a licensed chip you might need to work around) to connect it to the rest of the system. I've been playing around with ways to do this since I helped create a counter-proposal against the NI RoboRIO, and it can and does work. In fact, I can run the whole robot off an Android device itself (no cRIO or RoboRIO).

Last edited by techhelpbb : 15-10-2014 at 10:36.
#4
Re: Optimal board for vision processing
Quote:
Also, is developing for CUDA any different from 'normal' development? Quote:

Last edited by matan129 : 15-10-2014 at 10:32.
#5
Re: Optimal board for vision processing
I would say the most bang for your buck is the BeagleBone Black. 987 used it way back in 2012 with the Kinect sensor. Very powerful, and if I remember correctly, it got about 20 fps. Maybe somebody can give a more accurate number, but it is plenty powerful. It's the same type of computer (RPi-style microcomputer) and has Ethernet for UDP communication.
The ODROID and pcDuino are both good options too. RPis are okay; I hear most teams get anywhere from 2 fps to 10 fps (again, all depending on what you are doing). I would say for simple target tracking, you would get about 5 fps.

I also want to start doing some vision tracking this year on another board. I would end up using the regular dashboard (or maybe modified a slight bit) with LabVIEW, and a BeagleBone or maybe an RPi just to start off. I don't know how to use Linux, which is my biggest problem. Does anyone have information on how to auto-start and run vision tracking on Linux? I need something simple to follow.
#6
Re: Optimal board for vision processing
Quote:
Our first tests were conducted on Dell Mini 9s running Ubuntu Linux LTS version 8, which I had loaded on mine while doing development work on another, unrelated project. The Dell Mini 9 has a single-core Atom processor. Using Video4Linux and OpenJDK (Java), the programming captain crafted his own recognition code; I believe that helped get him into college. It was very interesting. We then tried a dual-core Atom Classmate, and it worked better once his code was designed to use that extra resource.

Between seasons I slammed together a vision system using two cameras on a Lego Mindstorms PTZ and used OpenCV with Python; with that you could locate yourself on the field using geometry, not parallax. Other students have since worked on other Java- and Python-based solutions using custom and OpenCV code. I have also stripped parts out of OpenCV and loaded them into ARM processors to create a camera with vision processing inside it; it was mentioned in the proposal I helped submit to FIRST. I think using an old phone is probably more cost-effective (they make lots of a single model of phone, and when they are old they plummet in price).

OpenCV wraps Video4Linux, so the real upside of OpenCV from the 'use a USB camera' perspective is that it takes care of things like detecting the camera being attached and setting the modes. Still, Video4Linux is pretty well documented, and the only grey area you will find is if you pick a random camera: every company that tries to USB-interface a CMOS or CCD camera does its own little thing with the configuration values. So I suggest finding a camera you can understand (Logitech or PS3 Eye) and not worrying about the other choices. A random cheapo camera off Amazon or eBay might be a huge pain when you can buy a used PS3 Eye at GameStop.

Last edited by techhelpbb : 15-10-2014 at 10:53.
#7
Re: Optimal board for vision processing
Quote:
Also, can someone answer my question about the bandwidth limit?

And I might be able to assist you with Linux. If I remember correctly, open the Terminal and run
Code:
sudo crontab -e
then add a line of the form
Code:
@reboot AND_THEN_A_COMMAND
This tells cron to run the given command once at every boot.
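As a concrete illustration of such an entry, the script path and log file below are invented for the example, not real files:

```shell
# Hypothetical crontab entry: launch a vision script at boot and log its output.
# /home/pi/vision/track.py is a made-up example path.
@reboot python3 /home/pi/vision/track.py >> /home/pi/vision/boot.log 2>&1
```

One caveat: `@reboot` jobs can fire before networking is up, so a script that opens sockets may need to retry its first connection.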
#8
Re: Optimal board for vision processing
Quote:
If you can avoid sending live video that you depend on over the WiFi, please do (I speak for myself, not FIRST or 11/193, when I write this). I can assure you that the bandwidth you think you have, you probably do not have; I can back that up with various experiences and evidence I have collected over the years. If you must send something to the driver station, send pictures one at a time over UDP if you can, and if you miss one, do not send it again. I have no interest in hijacking this topic with any dispute over this (so if someone disagrees, feel free to take it up with me in private).

Last edited by techhelpbb : 15-10-2014 at 11:00.
#9
Re: Optimal board for vision processing
Quote:
It is soooooooooooooo easy to use the GPU. After the setup (installing libraries, updating, etc.) we were tracking red 2014 game pieces on the GPU within 30 minutes. We used the code from here:
http://pleasingsoftware.blogspot.com...-computer.html
Read through this and the related GitHub repo linked in the article:
https://github.com/aiverson/BitwiseAVCBalloons
OpenCV has GPU libraries that basically work automatically with the Jetson:
http://docs.opencv.org/modules/gpu/doc/gpu.html
You can also see, in the GitHub repo of the above example, the different compile command for activating GPU usage:
https://github.com/aiverson/BitwiseA...aster/build.sh
If you ever get to use that code on the Jetson, note: the program in the above link opens a display window for each step of the process, and closing the displays speeds it up from 4 fps with all of them open to 16 fps with only the final output open. I presume that with the final output closed and no GUI open (a.k.a. how it would be on a robot) it would be much faster. Also, we used this camera, set to 1080p, for the test:
http://www.logitech.com/en-us/produc...ro-webcam-c920

Last edited by jman4747 : 15-10-2014 at 10:59.
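This is not the linked repo's code, but the core threshold-and-locate step such trackers perform can be sketched in plain NumPy (so it runs without a GPU; OpenCV's CUDA module would do the same work on the Jetson). The threshold values and image are invented for the example:

```python
import numpy as np

def track_red_centroid(rgb):
    """Find the pixel centroid of 'red enough' pixels in an RGB image.

    A minimal stand-in for the threshold-and-locate step a GPU tracker
    performs; real code would use OpenCV (CPU or CUDA) on camera frames.
    """
    r = rgb[:, :, 0].astype(np.int16)
    g = rgb[:, :, 1].astype(np.int16)
    b = rgb[:, :, 2].astype(np.int16)
    mask = (r > 150) & (r - g > 60) & (r - b > 60)  # crude red threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no target in view
    return (float(xs.mean()), float(ys.mean()))  # (x, y) centroid

# Synthetic 100x100 frame with a red square at rows/cols 40..59.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 40:60, 0] = 200  # strong red channel only
print(track_red_centroid(frame))  # → (49.5, 49.5)
```

On the robot, the centroid's horizontal offset from the image center is what feeds the aiming loop.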
#10
Re: Optimal board for vision processing
Quote:
Quote:
#11
Re: Optimal board for vision processing
Quote:
Quote:
If you do it in compiled code you can easily achieve 5 fps or more (with reduced color depth). If your CPUs are slow or your code is bad, then things might not work out so well.

Anything you send over TCP/IP, TCP/IP will try to deliver, and once it starts, it is hard to stop (hence 'reliable transport'). With UDP you control the protocol, so you can choose to give up; this means that with UDP you need to do more work. Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements. I would rather see a good cooperative solution that people can leverage and discuss/document than a lot of people rediscovering how to do this in a vacuum, over and over.

I will put in an honorable mention here for VideoLAN (VLC) as far as unique and interesting ways to send video over a network; anyone interested might want to look it over.

Last edited by techhelpbb : 15-10-2014 at 11:22.
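To make the 'send one picture, give up if it's lost' idea concrete, here is a minimal fire-and-forget UDP sketch over loopback. The payload is a fake frame invented for the example; a real system would send JPEG bytes from the camera to the driver station's address:

```python
import socket

def send_frame(sock, addr, frame_bytes):
    """Send one frame as a single datagram and move on.

    No retransmit: if the datagram is lost we simply send the next
    frame, instead of stalling the way TCP's reliable delivery would.
    """
    sock.sendto(frame_bytes, addr)

# Loopback demonstration with a fake 'frame'. On a robot the sender
# would target the driver station's IP and a team-use UDP port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
rx.settimeout(1.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
fake_jpeg = b"\xff\xd8 fake jpeg \xff\xd9"   # stand-in for camera bytes
send_frame(tx, rx.getsockname(), fake_jpeg)

data, _ = rx.recvfrom(65535)
print(data == fake_jpeg)  # → True: the frame arrived intact
tx.close()
rx.close()
```

A single datagram tops out around 64 KB, so a real implementation would split each JPEG across several numbered datagrams and drop the whole frame if any piece goes missing, which is exactly the 'choose to give up' behavior described above.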
#12
Re: Optimal board for vision processing
Quote:
When we tried going down to 480p, our frame rate did not improve per se; however, the capture time went down, which is very important for tracking. That said, our test wasn't extensive, so there are other factors at play, and it may or may not improve overall performance.

Last edited by jman4747 : 15-10-2014 at 11:24.
#13
Re: Optimal board for vision processing
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRIO or (b) your driver station laptop. No new hardware costs, no worry about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the robot controller exist and are fairly straightforward to use.
In all seriousness, in the retroreflective-tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).

I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure-time settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send the relevant bits back to your robot. Round-trip latency including processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you use tricks like sending gyro data along with your image so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback/testing, and even do on-the-fly tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.

If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and about what your goals really are. If your goal is to build the most competitive robot that you possibly can, there is almost always lower-hanging fruit that is just as inspirational to your students.
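The gyro trick mentioned above amounts to keeping a short history of (timestamp, heading) samples and looking up the heading at the image's capture time, so aiming error is measured against the pose the robot actually had when the frame was taken. A minimal sketch follows; the class name, 50 Hz sample data, and latency figures are invented for illustration:

```python
from bisect import bisect_left

class HeadingHistory:
    """Log of (timestamp_ms, heading_deg) samples for latency compensation.

    When a processed frame comes back tens of milliseconds later, we look
    up what the gyro read at the moment the frame was captured.
    """
    def __init__(self):
        self._samples = []  # appended in timestamp order

    def record(self, t_ms, heading_deg):
        self._samples.append((t_ms, heading_deg))

    def heading_at(self, t_ms):
        """Heading of the recorded sample closest in time to t_ms."""
        times = [s[0] for s in self._samples]
        i = bisect_left(times, t_ms)
        candidates = self._samples[max(0, i - 1):i + 1]
        return min(candidates, key=lambda s: abs(s[0] - t_ms))[1]

# Fake 50 Hz gyro log while the robot turns at 100 deg/s (invented numbers).
hist = HeadingHistory()
for k in range(11):
    hist.record(20 * k, 2.0 * k)   # t = 0..200 ms, heading 0..20 degrees

# A frame captured at t=100 ms arrives back after processing at t=200 ms;
# we aim relative to the heading at capture time, not arrival time.
print(hist.heading_at(100))  # → 10.0
```

The same idea extends to full (x, y, heading) pose if you log odometry alongside the gyro.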
#14
Re: Optimal board for vision processing
Quote:
Is it 'safe' to develop a high-res/high-fps vision system all of whose parts are physically on the robot (i.e. the camera and the RPi)? By this question I mean: will I suddenly discover at the field that all the communication actually goes through the field WiFi, and hence the vision system is unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place)? Quote:

Last edited by matan129 : 15-10-2014 at 11:32.
#15
Re: Optimal board for vision processing
Quote: