Re: Optimal board for vision processing
Quote:
Also, you do not need to send much over the D-Link switch unless you are sending video to the driver's station. In fact, you can avoid Ethernet entirely if you use I2C, the digital I/O, or something like that, so you should be good. Just be aware that if you do use Ethernet for this, you are using some of the bandwidth to the cRIO/RoboRio, and if you use too much you can cause issues. You do not have complete control over what the cRIO/RoboRio does on Ethernet, especially on a regulation FIRST field. I believe there is a relevant example of what I mean in the Einstein report from years past. Quote:
D-Link has had issues with this in the past; hence they deprecated the bridge feature on the DIR-655. There are hints of this floating around, like this: http://forums.dlink.com/index.php?topic=4542.0 And also this (odd, is it not, that the described broadcast does not pass....): http://forums.dlink.com/index.php?to..._next=next#new
Re: Optimal board for vision processing
Quote:
Re: Optimal board for vision processing
Quote:
http://www.andymark.com/product-p/am-0866.htm I do not want to hijack your topic on this extensively, so I will simply point you here: http://en.wikipedia.org/wiki/I%C2%B2C It is basically a form of digital communication. To use it from a laptop you would probably need a USB-to-I2C interface, and things like that are available COTS.
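As a rough illustration of the robot side of such a link, here is a minimal sketch using the roboRIO-era WPILib Java I2C class to poll a vision coprocessor; the 0x42 device address and the two-byte offset payload are assumptions for illustration only, not anything specified in this thread.
Code:
import edu.wpi.first.wpilibj.I2C;

// Minimal sketch: poll a vision coprocessor over I2C from the roboRIO.
// The 0x42 address and the 2-byte payload format are made up for illustration.
public class VisionI2C {
    private static final int COPROCESSOR_ADDRESS = 0x42;
    private final I2C bus = new I2C(I2C.Port.kOnboard, COPROCESSOR_ADDRESS);

    /** Returns the target x-offset in pixels, or 0 if the read fails. */
    public int readTargetOffset() {
        byte[] data = new byte[2];
        // readOnly() returns true when the transfer is aborted.
        if (bus.readOnly(data, data.length)) {
            return 0;
        }
        // Assume the coprocessor sends a signed 16-bit offset, big-endian.
        return (short) (((data[0] & 0xFF) << 8) | (data[1] & 0xFF));
    }
}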
Re: Optimal board for vision processing
Quote:
What I would do is try to get a DAC (digital-to-analog converter) on the RPi or Beaglebone. That way you can hook it up straight to the analog in on the roboRIO and use a function to change the analog signal back into a digital value. I feel like it would be an easy thing to do, especially if you are doing an auto center/aim system: you could hook a PID loop right up to the analog signal (okay, maybe a PI loop, but that can still give you good auto-aiming). I also completely forgot about the laptop driver station option. Although it is not the fastest method, vision tracking on the driver station is probably the easiest. Also, the roboRIO has 2 cores, so maybe you can dedicate one core to the vision tracking and that way there is practically zero latency (at least for communications).
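If it helps to picture the roboRIO side of that DAC idea, here is a minimal WPILib Java sketch; the channel numbers, the 2.5 V "centered" convention, the gain, and the Talon turret motor are all placeholders, not anything prescribed in this thread.
Code:
import edu.wpi.first.wpilibj.AnalogInput;
import edu.wpi.first.wpilibj.Talon;

// Sketch of the DAC idea: the coprocessor encodes target offset as a 0-5 V
// analog level, and the roboRIO closes a simple proportional aim loop on it.
// Channel numbers, the voltage mapping, and the gain are placeholders.
public class AnalogAim {
    private final AnalogInput visionOffset = new AnalogInput(0); // DAC output wired here
    private final Talon turret = new Talon(2);                   // turret/aim motor
    private static final double CENTER_VOLTS = 2.5;              // 2.5 V = target centered
    private static final double KP = 0.4;                        // proportional gain (tune!)

    /** Call periodically (e.g. from teleopPeriodic) to keep the turret on target. */
    public void aim() {
        double errorVolts = visionOffset.getVoltage() - CENTER_VOLTS;
        double output = -KP * errorVolts;
        // Clamp so a bad reading cannot command full turret speed.
        output = Math.max(-0.5, Math.min(0.5, output));
        turret.set(output);
    }
}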
Re: Optimal board for vision processing
Quote:
On Team 11 we had very little luck getting the cRIO, running Java, to handle vision on top of everything else we asked of it; in some cases we could not even get the examples to work. The RoboRio is faster, so that will help. It is less picky, so that will also help. I believe I have asked around on ChiefDelphi in the past for Java vision examples for the cRIO that actually work. I would love to see working Java vision examples on the RoboRio, perhaps with video of them working. I have not involved myself with the beta work MORT is doing on the RoboRio and Java in this regard, so such examples may exist. Quote:
I also spent part of each competition, at the request of the FTA, chasing around teams that were sending video back to the driver's station in ways that messed with the field. Very competent people were having issues with this, so I do not think it is quite so cut and dried. If anyone wanted, I could pull that data together for the events at which I volunteered in MAR.

Additionally, I would like to add that there is a hidden cost to the embedded and single-board computers. It is the same hidden cost as the cRIO, especially 3 or more years ago when the 8-slot cRIO was the FIRST-approved system: how many of these do you have knocking around to develop on? Think about it: general-purpose laptops are plentiful, and therefore anyone with a laptop (increasingly all the students in a school) can grab a USB camera for <$30 and start writing vision code. If you are using old phones you can get the whole package for probably $100 or less, and your students are probably already glued to the phones they use every day. On the other hand, if you buy a development board for nearly $200, how many people can actively develop and test on it? If a student takes that board home, how many other students can keep working and be confident that what they are doing will port to that system?

Vision is a big project, and more importantly you can often play a FIRST game without it. Is it better to use laptops you probably already have, or to commit to more proprietary hardware you might have to buy multiples of, and then, if that product goes out of production, do it all over again, if you even really use it? Is the cost justifiable?
Re: Optimal board for vision processing
Quote:
Re: Optimal board for vision processing
Quote:
Secondly, CUDA development is sometimes leveraged in the financial industries in which I often work. Some problems are not well suited for it; luckily OpenCV does leverage it, but I can certainly see various ways it could be poorly utilized regardless of support. As you say, OpenCV supports it, but what if you don't want to use OpenCV?

Thirdly, the Jetson is actually more expensive than you might think. It lacks an enclosure and, again, the battery that you'd get with a laptop or a phone. Once you add those items, the laptop or phone is cheaper. If you don't care about the battery, then the Jetson wins on price over the laptop, because then you don't need to deal with the power-supply issue the laptop would create without its battery, but a used phone would still likely beat the Jetson on price with or without the battery.

Fourthly, the phone is smaller than 5" x 5" and very likely lighter, even with the battery. You might even already have the phone development experience, because teams like MORT write scouting apps that are in the Android store.

Fifthly, the Jetson does not have a camera, and an old phone probably does: maybe even 2 cameras facing different directions, or in a small number of cases 2 cameras facing the same direction. What the Jetson does have is a single USB 2 port and a single USB 3 port, while a laptop might have 4 or even more USB ports (yes, laptops often put integrated USB hubs on some of those ports, but you would have to add a USB hub to the Jetson and you would run out of non-hubbed ports fast). That might matter a lot if you intend not to use Ethernet (an I2C USB adapter, or USB digital I/O like an FTDI chip or an Atmel/PIC MCU). To put this in a fair light I will refer here: http://elinux.org/Jetson/Cameras If you need expensive FireWire or Ethernet cameras, each one already consumes the cost of possibly 3 USB cameras. Worse, you might be back on the D-Link switch or dealing with TCP/IP-based video, which for this application is, in my opinion, not a good idea.

Finally, I will acknowledge that the Tegra TK1 is basically a general-purpose computer with a GPU, so you can leverage the tools you mention. Still, all the testing needs to end up on it. You could develop up to that point, but then you'd need to buy it to test; maybe buy more than one if you have a practice robot, and maybe even more if you have multiple developers. Students usually do not work like professional programmers, as the sheer number of cRIO reloads I have seen can demonstrate. On the plus side, you could build up to the point where you have it working, then load it on the Jetson, and if it doesn't work, take your laptop apart. So there's that.

For a different dimension to this analysis: which skill is probably worth more, the ability to write Android and Apple apps that you can bring to market while still a high school student, or the ability to write CUDA apps? Both could analyze video, but which one would you mentor if you wanted to give your student the most immediately marketable skill they can use without your guidance? My bet is that the Android and Apple app skills would more immediately help a student earn a quick buck and be empowered. Mining Bitcoins on CUDA is not as profitable as you think ;).
Re: Optimal board for vision processing
Quote:
What helped the most in getting it to work was that we first wrote a standalone app in .NET C#. I seem to recall that the NI install included an NIVision DLL, or we downloaded it for free. Using the examples as a guide, we were able to learn the libraries much faster than by dealing with the cRIO. An added bonus was that we could quickly diagnose issues at competitions without tying up the robot/cRIO. We thought about using it in 2013 and 2014 but, as others have said, it was a low priority and the extra effort/failure points made it even less important. Cheesy Vision sealed the decision. If we do it in the future it will most likely be on the roboRIO or Driver Station.
Re: Optimal board for vision processing
I'd encourage you to use the examples and white paper to compare your processing options. The MIPS rating of the processors is a pretty good estimate of raw horsepower. I don't have a good Tegra, so I can't measure where the CUDA cores are a huge win and where they are not.
Finally, it isn't the board you pick, but how you use it. I suggest you pick the one that lets you iterate and experiment quickly and confidently.

Greg McKaskle
Re: Optimal board for vision processing
Quote:
The main thing is that you pick something and then stick with it until you've developed a solution. If you are new to FRC or to any of this then your best bet is using the code and examples that NI/WPI have made available to teams. Don't get me wrong, if you want to try new things then do it and ask lots of questions too! Just be prepared for it not to always work out.
Re: Optimal board for vision processing
Another facet of this issue:
If in doubt about the stability of your vision code/system, make sure it is properly modular. I've seen some really tragic losses over the years because vision wasn't working right, yet removing it was as bad as leaving it in, especially when the code is awkwardly interlaced with the rest of the cRIO code. Putting the system external to the cRIO keeps it more contained. It is often possible to make the whole thing mechanically removable when it is external (a few fewer active inputs here or there). Remember, things you never saw in testing can happen on a competition field.
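One way to read "properly modular" in code, sketched below under invented names: the rest of the robot code only talks to a thin wrapper that falls back to a neutral value whenever the coprocessor stops reporting, so the vision hardware really can just be unplugged. The class name and the 0.5 s staleness window are arbitrary choices for illustration.
Code:
import edu.wpi.first.wpilibj.Timer;

// The rest of the robot code only ever sees this wrapper, which falls back to
// a neutral value whenever the coprocessor stops reporting. Unplugging the
// vision system then costs nothing but the auto-aim feature itself.
public class VisionGuard {
    private static final double STALE_SECONDS = 0.5;
    private double lastOffset = 0.0;
    private double lastUpdateTime = -1.0;

    /** Called by whatever code actually receives data from the coprocessor. */
    public void update(double targetOffset) {
        lastOffset = targetOffset;
        lastUpdateTime = Timer.getFPGATimestamp();
    }

    /** Returns the latest offset, or 0 (no correction) if vision is unplugged or stale. */
    public double getOffsetOrNeutral() {
        boolean stale = lastUpdateTime < 0
                || Timer.getFPGATimestamp() - lastUpdateTime > STALE_SECONDS;
        return stale ? 0.0 : lastOffset;
    }
}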
Re: Optimal board for vision processing
Quote:
I believe that when people have trouble with this setup, it can usually be traced back to choosing camera settings poorly. Crank the exposure time down along with brightness, raise contrast, and you will find that images are almost entirely black except for your vision target (if necessary, provide more photons from additional LED rings to improve your signal-to-noise ratio). A mostly-black image with only your target illuminated is advantageous for a bunch of reasons:

1) JPEG compression can be REALLY effective, and each image will be ~20-40KB, even at 640x480. Large patches of uniform color are what JPEG loves best.

2) Your detection algorithm has far fewer false alarms, since most of the background is simply black.

3) You can get away with conservative HSL/HSV/RGB thresholds, so you are more robust to changes in field lighting conditions.

We won 6 of those 9 in-season competitions (and more than half of the offseasons) using 100% camera-driven auto-aim, and never once touched our vision system parameters other than extrinsic calibration (e.g. if the camera got bumped or our shooter was repaired). In my experience, the vast majority of teams don't provide enough photons and/or don't crank down their exposure time aggressively enough. Also, I strongly suspect (but do not know for sure) that the Bayer pattern on the Axis camera effectively makes it twice as sensitive to green light, so you might find that green LEDs work much better than other colors. We used green LEDs both years.

It is also possible that if your vision processing code falls behind, your laptop will get sluggish and bad things will happen. Tune your code (plus camera settings, including resolution) until you can guarantee that you will process faster than you are acquiring.
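As a sketch of the kind of thresholding pipeline described above, here is a minimal example assuming the OpenCV 3.x Java bindings; the HSV range for a green LED ring is a placeholder that would need tuning on real images from your camera.
Code:
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

// Rough sketch of the "mostly-black image" pipeline: threshold on the LED
// color, find contours, keep the largest blob. HSV bounds are placeholders.
public class TargetFinder {
    /** Returns the bounding box of the largest bright-green blob, or null if none. */
    public static Rect findTarget(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // With exposure and brightness cranked down, almost everything falls
        // outside this range and the mask is nearly all black.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        Rect best = null;
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (best == null || box.area() > best.area()) {
                best = box;
            }
        }
        return best;
    }
}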
Re: Optimal board for vision processing
Quote:
A similar concept applies to reducing the frame rate and/or video resolution and increasing the compression, all of which make the video detail less and less useful to human eyes. All of these options reduce the bandwidth required to send the video in the end, so whether the video is sent with TCP or UDP is less important: even if TCP sends the video poorly, there is just less of it to send. So I would wonder, with this being the compromise, whether the drivers trying to watch the video on the driver's station (for example, if the vision software was in doubt) would find it nearly as useful as just using a targeting laser/light to illuminate the target visually for the drivers and not using the video at all.

In the years before FIRST started using QoS and prioritizing traffic (before the Einstein that caused the uproar), just sending video could put you in a situation where someone on the network might get robbed of FMS packets. We can only assume, as teams, that the bandwidth controls we have now will actually allow 2-4 Mbps of video without disruption. Since I know for sure that timing out the FMS packets will stop the robot until FMS can deliver a packet, this is a real balancing act.

One of the most concerning things to me personally is being faced with a situation like I was last year, where someone who worked on the Einstein issue was having trouble sending video despite the expectations they rightly had based on the results of that work; they were finding, basically, that they had less bandwidth than they might expect. So in cases like these I point out that FIRST fields are very dynamic and things might change without notice. What was without issue on the field network in 2012 might have issues in 2015. It really depends on settings you can neither control nor see in this environment until you have access to that field.

I believe there is generally bandwidth to send some video from the robot to the driver's station, even using TCP, but you will have to make compromises, and they might not be compromises you'd like to make. Hence, at least to me personally, if you can avoid putting the burden of video over the WiFi on the field, you just should. It will be one less variable that can change on you between your test environment and the competition field.
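To put rough numbers on that trade-off, here is a back-of-the-envelope estimate; the ~25 KB/frame figure is borrowed from the "mostly black" 640x480 JPEGs described earlier in the thread, and the frame rates are just illustrative assumptions.
Code:
// Back-of-the-envelope MJPEG bandwidth estimate. The ~25 KB/frame figure
// matches a well-thresholded 640x480 JPEG; frame rates are illustrative.
public class BandwidthEstimate {
    public static void main(String[] args) {
        double kbPerFrame = 25.0;
        for (int fps : new int[] {10, 15, 30}) {
            double mbps = kbPerFrame * 1024 * 8 * fps / 1_000_000.0;
            System.out.printf("%2d fps -> %.1f Mbps%n", fps, mbps);
        }
        // Roughly: 10 fps ~ 2.0 Mbps, 15 fps ~ 3.1 Mbps, 30 fps ~ 6.1 Mbps,
        // so frame rate alone can push you past a 2-4 Mbps comfort zone.
    }
}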
Re: Optimal board for vision processing
Quote:
Quote:
Re: Optimal board for vision processing
Quote:
Not sending real-time video payloads over the WiFi does not remove the possibility that you could send so much data to the cRIO/RoboRio via the Ethernet port that you still prevent it from getting FMS packets. If, for example, one interfaced to the cRIO/RoboRio over the digital I/O, then the coprocessor could send all the data it wants; the cRIO/RoboRio might not get all of it from the coprocessor, but it will continue to get FMS packets, so your robot does not suddenly stop. This effectively gives the coprocessor a lower priority than your FMS packets (and that is likely the situation you really want).

If the RoboRio stops using the Ethernet port for the field radio, then this may be less of an issue because the FMS packets would not be competing on the Ethernet port (they would be on a separate network stream). I know some alpha testing for the RoboRio was around the Asus USB-N53 Dual-band Wireless N600. At that point the issue is purely one of the RoboRio software keeping up with the combined traffic from the Ethernet port and the USB networking device (only real testing would show how well that works out, and for that you need robots on a competition field, test equipment, and things to throw data at the RoboRio: laptops, Jetson boards, etc.).
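For concreteness, here is what that digital I/O path could look like on the roboRIO side, as a minimal WPILib Java sketch; the channel numbers and the 2-bit "left/centered/right/not found" encoding are invented for illustration.
Code:
import edu.wpi.first.wpilibj.DigitalInput;

// Sketch of the digital I/O approach: the coprocessor drives two DIO lines to
// report a coarse target state, and the roboRIO just polls them. No network
// traffic, so FMS packets are never competing with vision data.
public class VisionDioLink {
    public enum TargetState { NOT_FOUND, LEFT, CENTERED, RIGHT }

    private final DigitalInput bit0 = new DigitalInput(0);
    private final DigitalInput bit1 = new DigitalInput(1);

    /** Decodes the coprocessor's 2-bit status lines. */
    public TargetState read() {
        int code = (bit1.get() ? 2 : 0) | (bit0.get() ? 1 : 0);
        switch (code) {
            case 1:  return TargetState.LEFT;
            case 2:  return TargetState.RIGHT;
            case 3:  return TargetState.CENTERED;
            default: return TargetState.NOT_FOUND;
        }
    }
}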