View Full Version : Optimal board for vision processing
matan129
15-10-2014, 09:47
Hello, I'm a new (first-year) member of a rookie team (this will be the team's 3rd year). As an enthusiastic developer, I will be part of the programming sub-team. We program the RoboRio in C++ and the SmartDashboard in Java or C# (using IKVM to port the Java binaries to .NET).
In this period before the competition starts, I'm learning as much material as I can. A friend of mine and I were thinking about developing a vision processing system for the robot, and we pretty much figured that utilizing the RoboRio (or the cRio we have from last year) isn't any good because, well, it's just too weak for the job. We thought about sending the video live to the driver station (Classmate/another laptop), where it would be processed and the results sent back to the RoboRio. The problem is the 7 Mbit/s networking bandwidth limit and, of course, the latency.
So, we thought about employing an additional board, which would connect to the RoboRio and do the image processing there. We thought about using an Arduino or Raspberry Pi, but we are not sure they are powerful enough for the task either.
So, to sum up: what is the best board to use in FRC vision systems?
Also, if we connect, for example, a Raspberry Pi to the robot's router and the router to the IP camera, the 7Mbit/s bandwidth limit does not apply, right? (because the camera and the Pi are connected via LAN)
P.S. I am aware that this question has been asked in this forum already, but it was a year ago. So today there may be better/other options.
jman4747
15-10-2014, 10:14
The most powerful board in terms of raw power is the Jetson TK1. It utilizes an Nvidia GPU, which is orders of magnitude more powerful than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge is more than most if not any other SBLC on the market. It is, however, $192 and much larger than an R-Pi.
http://elinux.org/Jetson_TK1
PS Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/showthread.php?t=130777&highlight=Jetson
http://www.chiefdelphi.com/forums/showthread.php?t=129827&highlight=Jetson
techhelpbb
15-10-2014, 10:24
So far the vision processing MORT11 has done has been done with a stripped-down dual-core AMD mini-laptop (bigger than a netbook) on the robot that is worth less than $200 on the open market. It has the display and keyboard removed. It has proven to be legal in the past, but we have rarely relied on vision processing, so it often is removed from the robot mid-season. It has also been driven 200+ times over the bumps in the field with an SSD inside it, and it still works fine. For cameras we used USB cameras like the PS3 Eye, which has a professional vision library on Windows and can handle 60 frames a second in Linux (though you hardly need that).
That laptop is heavier than the single-board computers, in part because of the battery. However, I would suggest that the battery is worth the weight. As the laptop is COTS, the extra battery is legal. This means the laptop can be running while the robot is totally off.
The tricky part is not finding a single-board or embedded system that can do vision processing. The tricky part is powering it reliably, and the battery fixes that issue while providing enormous computing power in comparison.
Very likely none of the embedded and single-board systems that will invariably be listed in this topic will be able to compete on cost/performance with a general-purpose laptop. The market forces in the general computing industry drive prices differently.
The cRIO gets around this issue because the cRIO gets boosted 19V from the PDU and then bucks it down to the internal low voltage it needs. As the battery sags under the motor loads, dropping from 19V is no big deal if you need 3.3V. As switching regulators are generally closed loop, they adapt to these changing conditions.
So just be careful. The 5V regulated outputs on the robot PDU may not operate in the way you desire or provide the wattage you need, and then you need to think about how you intend to power this accessory.
People have worked around this in various ways: largish capacitors, COTS power supplies, just using the PDU. I figure that since electronics engineering is not really a requirement for FIRST, using a COTS computing device with a reliable, production power system is asking less.
Keep in mind that I see no reason an Apple/Android device like a tablet or cell phone would not have been legal on the robot in past competitions, as long as the various radio parts are properly turned off. It is possible someone could create a vision processing system on an old phone using the phone's camera and connect it to the rest of the system using the phone's audio jack (think Square credit card reader), display (put a phototransistor against the display and toggle the pixels), or charging/docking port (USB/debugging; with Apple, be warned they have a licensed chip you might need to work around). I've been playing around with ways to do this since I helped create a counter-proposal against the NI RoboRio, and it can and does work. In fact, I can run the whole robot off an Android device itself (no cRIO or RoboRio).
matan129
15-10-2014, 10:26
The most powerful board in terms of raw power is the Jetson TK1. It utilizes an Nvidia GPU, which is orders of magnitude more powerful than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge is more than most if not any other SBLC on the market. It is, however, $192 and much larger than an R-Pi.
http://elinux.org/Jetson_TK1
PS Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/showthread.php?t=130777&highlight=Jetson
http://www.chiefdelphi.com/forums/showthread.php?t=129827&highlight=Jetson
Thanks for the suggestion! But it's kind of pricey compared to the Pi. Is it worth it?
Also, is developing for CUDA any different from 'normal' development?
So far the vision processing MORT11 has done has been done with a stripped-down dual-core AMD mini-laptop (bigger than a netbook) on the robot that is worth less than $200 on the open market. It has the display and keyboard removed. It has proven to be legal in the past, but we have rarely relied on vision processing, so it often is removed from the robot mid-season.
That laptop is heavier than the single-board computers, in part because of the battery. However, I would suggest that the battery is worth the weight. As the laptop is COTS, the extra battery is legal. This means the laptop can be running while the robot is totally off.
The tricky part is not finding a single-board or embedded system that can do vision processing. The tricky part is powering it reliably, and the battery fixes that issue while providing enormous computing power in comparison.
Very likely none of the embedded and single-board systems that will invariably be listed in this topic will be able to compete on cost/performance with a general-purpose laptop. The market forces in the general computing industry drive prices differently.
The cRIO gets around this issue because the cRIO gets boosted 19V from the PDU and then bucks it down to the internal low voltage it needs. As the battery sags under the motor loads, dropping from 19V is no big deal if you need 3.3V. As switching regulators are generally closed loop, they adapt to these changing conditions.
So just be careful. The 5V regulated outputs on the robot PDU may not operate in the way you desire or provide the wattage you need, and then you need to think about how you intend to power this accessory.
People have worked around this in various ways: largish capacitors, COTS power supplies, just using the PDU. I figure that since electronics engineering is not really a requirement for FIRST, using a COTS computing device with a reliable, production power system is asking less.
Keep in mind that I see no reason an Apple/Android device like a tablet or cell phone would not have been legal on the robot in past competitions, as long as the various radio parts are properly turned off. It is possible someone could create a vision processing system on an old phone using the phone's camera and connect it to the rest of the system using the phone's audio jack, display, or charging/docking port.
Thanks for the detailed info! But in this case, I guess I can just use a stripped-down Classmate (we have 2 of those), or any other mini-laptop, in order to do so (I guess the Atom processor is more than powerful enough in terms of computing power). Also, what platform did you use to develop the image processing code?
Chadfrom308
15-10-2014, 10:40
I would say the most bang for your buck is the BeagleBone Black. 987 used it way back in 2012 with the Kinect sensor. Very powerful, and if I remember correctly, it got about 20 fps. Maybe somebody can give a more accurate number, but it is plenty powerful. It is the same type of computer (an RPi-style microcomputer) and has Ethernet for UDP communications.
The ODROID and pcDuino are both good options too.
RPis are okay. I hear most teams get anywhere from 2 fps to 10 fps (again, all depending on what you are doing). I would say for simple target tracking, you would get about 5 fps.
I want to start doing some vision tracking this year on another board too. I would end up using the regular dashboard (or maybe modified a slight bit) with LabVIEW. I would be using a BeagleBone or maybe an RPi just to start off. I don't know how to use Linux, which is my biggest problem. Does anyone have any information on how to auto-start and use vision tracking on Linux? I need something simple to follow.
techhelpbb
15-10-2014, 10:43
Thanks for the detailed info! But in this case, I guess I can just use a stripped-down Classmate (we have 2 of those), or any other mini-laptop, in order to do so (I guess the Atom processor is more than powerful enough in terms of computing power). Also, what platform did you use to develop the image processing code?
We started testing this idea several years ago (more than 3 years ago), when the COTS rules began to allow a computing device.
Our first tests were conducted on Dell Mini 9s running Ubuntu Linux LTS version 8, which I had loaded on mine while doing development work on another, unrelated project. The Dell Mini 9 has a single-core Atom processor.
Using Video4Linux and OpenJDK (Java), the programming captain crafted his own recognition code. I believe that helped get him into college. It was very interesting.
We then tried a dual-core Atom Classmate, and it worked better once his code was designed to use that extra resource.
Between seasons I slammed together a vision system using 2 cameras on a Lego Mindstorms PTZ and used OpenCV with Python. With that you could locate yourself on the field using geometry, not parallax.
Other students have since worked on other Java-based and Python-based solutions using custom and OpenCV code.
I have stripped parts out of OpenCV and loaded them onto ARM processors to create a camera with vision processing inside it. It was mentioned in the proposal I helped submit to FIRST. I think using an old phone is probably more cost effective (they make lots of a single model of phone, and when they are old they plummet in price).
OpenCV wraps Video4Linux, so the real upside of OpenCV from the 'use a USB camera' perspective is that it removes things like detecting the camera being attached and setting the modes. Still, Video4Linux is pretty well documented, and the only grey area you will find is if you pick a random camera. Every company that tries to USB-interface a CMOS or CCD camera does its own little thing with the configuration values. So I suggest finding a camera you can understand (Logitech or PS3 Eye) and not worrying about the other choices. A random cheapo camera off Amazon or eBay might be a huge pain when you can buy a used PS3 Eye at GameStop.
matan129
15-10-2014, 10:50
I would say the most bang for your buck is the BeagleBone Black. 987 used it way back in 2012 with the Kinect sensor. Very powerful, and if I remember correctly, it got about 20 fps. Maybe somebody can give a more accurate number, but it is plenty powerful. It is the same type of computer (an RPi-style microcomputer) and has Ethernet for UDP communications.
The ODROID and pcDuino are both good options too.
RPis are okay. I hear most teams get anywhere from 2 fps to 10 fps (again, all depending on what you are doing). I would say for simple target tracking, you would get about 5 fps.
I want to start doing some vision tracking this year on another board too. I would end up using the regular dashboard (or maybe modified a slight bit) with LabVIEW. I would be using a BeagleBone or maybe an RPi just to start off. I don't know how to use Linux, which is my biggest problem. Does anyone have any information on how to auto-start and use vision tracking on Linux? I need something simple to follow.
Thanks for the info - actually, I have a friend who has an RPi (Model B) lying around; I guess he will let me test with it. If it doesn't do the job, I'll check out the BeagleBone.
Also, can someone answer my question about the bandwidth limit?
And I might be able to assist you with Linux:
If I remember correctly, open the terminal and run sudo crontab -e. Then you will be able to edit the crontab, which is basically a file that automates tasks on Linux systems. Add the following line to it:
@reboot AND_THEN_A_COMMAND - the command you typed should be executed on every startup.
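For example, assuming a hypothetical compiled tracker at /home/pi/vision/vision_tracker (the path and log file are placeholders):
    @reboot /home/pi/vision/vision_tracker >> /home/pi/vision/vision.log 2>&1
Redirecting the output to a log file makes it much easier to see why the program failed to start when the board is running headless.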
techhelpbb
15-10-2014, 10:55
Also, can someone answer my question about the bandwidth limit?
I have been CSA at several competitions over the years.
If you can avoid sending live video you depend on over the WiFi, please do (I speak for myself, not FIRST or 11/193, when I write this).
I can assure you what you think you have for bandwidth you probably do not have.
I can back that up with various experiences and evidence I have collected over the years.
If you must send something to the driver's station send pictures one at a time over UDP if you can.
If you miss one - do not send it again.
I have no interest in hijacking this topic with any dispute over this (so if someone disagrees feel free to take this up with me in private).
jman4747
15-10-2014, 10:57
Thanks for the suggestion! But it's kind of pricey compared to the Pi. Is it worth it?
Also, is developing for CUDA any different from 'normal' development?
It is worth it IF you want to process video and track an object continuously. As for power, the new voltage regulator's 12 V, 2 A port will be more than enough. The Jetson needs 12 V, and people have tested this thing running heavy-duty vision applications on the GPU/CPU without cracking 1 amp.
It is so easy to use the GPU. After the setup (installing libraries, updating, etc.), we were tracking red 2014 game pieces on the GPU within 30 minutes. We used the code from here: http://pleasingsoftware.blogspot.com/2014/06/identifying-balloons-using-computer.html Read through this and the related GitHub repo linked in the article.
https://github.com/aiverson/BitwiseAVCBalloons
OpenCV has GPU libraries that basically work automatically with the Jetson.
http://docs.opencv.org/modules/gpu/doc/gpu.html
You can also see, in the GitHub repo of the above example, the different compile command for activating GPU usage.
https://github.com/aiverson/BitwiseAVCBalloons/blob/master/build.sh
If you ever get to use that code on the Jetson, note: the program in the above link opens a display window for each step of the process, and closing the displays speeds the program up from 4 fps with all of them open to 16 fps with only the final output open. I presume that with the final output closed and no GUI open (i.e., how it would be on a robot) it would be much faster. Also, we used this camera, set to 1080p, for the test: http://www.logitech.com/en-us/product/hd-pro-webcam-c920
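To give a rough idea of what that looks like, here is a minimal sketch of the upload/process/download pattern using the OpenCV 2.4 gpu module that ships for the Jetson; it is an illustration, not the BitwiseAVCBalloons program, and the camera index and threshold are assumptions you would tune:
    // Minimal sketch of the upload/process/download pattern with the OpenCV 2.4
    // gpu module. Camera index and threshold value are assumptions to tune.
    #include <opencv2/opencv.hpp>
    #include <opencv2/gpu/gpu.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                 // assumed camera index
        if (!cap.isOpened()) return 1;
        cv::Mat frame, mask;
        cv::gpu::GpuMat d_frame, d_gray, d_mask;
        while (cap.read(frame)) {
            d_frame.upload(frame);                                // copy frame to GPU
            cv::gpu::cvtColor(d_frame, d_gray, CV_BGR2GRAY);      // grayscale on GPU
            cv::gpu::threshold(d_gray, d_mask, 200, 255, CV_THRESH_BINARY); // keep bright pixels
            d_mask.download(mask);                                // copy small result back
            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
            // ...score the contours and send the chosen target's centroid to the robot...
        }
        return 0;
    }
The pattern is always the same: upload the frame, do the per-pixel work on the GPU, download the small result, and finish the cheap steps (contours, scoring) on the CPU.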
matan129
15-10-2014, 11:06
I have been CSA at several competitions over the years.
If you can avoid sending live video you depend on over the WiFi, please do (I speak for myself, not FIRST or 11/193, when I write this).
I can assure you what you think you have for bandwidth you probably do not have.
I can back that up with various experiences and evidence I have collected over the years.
If you must send something to the driver's station send pictures one at a time over UDP if you can.
If you miss one - do not send it again.
I have no interest in hijacking this topic with any dispute over this (so if someone disagrees feel free to take this up with me in private).
Yeah, people have told me basically what you said here, but I asked whether local LAN traffic (the camera is connected with an Ethernet cable to the router on the robot, which also connects with Ethernet to the RPi/Nvidia board/any other board) counts against the network bandwidth limit. (It does not seem likely, but I want to be sure about this.)
It is worth it IF you want to process video and track an object continuously. As for power, the new voltage regulator's 12 V, 2 A port will be more than enough. The Jetson needs 12 V, and people have tested this thing running heavy-duty vision applications on the GPU/CPU without cracking 1 amp.
It is so easy to use the GPU. After the setup (installing libraries, updating, etc.), we were tracking red 2014 game pieces on the GPU within 30 minutes. We used the code from here: http://pleasingsoftware.blogspot.com/2014/06/identifying-balloons-using-computer.html Read through this and the related GitHub repo linked in the article.
https://github.com/aiverson/BitwiseAVCBalloons
OpenCV has GPU libraries that basically work automatically with the Jetson.
http://docs.opencv.org/modules/gpu/doc/gpu.html
You can also see, in the GitHub repo of the above example, the different compile command for activating GPU usage.
https://github.com/aiverson/BitwiseAVCBalloons/blob/master/build.sh
If you ever get to use that code on the Jetson, note: the program in the above link opens a display window for each step of the process, and closing the displays speeds the program up from 4 fps with all of them open to 16 fps with only the final output open. I presume that with the final output closed and no GUI open (i.e., how it would be on a robot) it would be much faster. Also, we used this camera, set to 1080p, for the test: http://www.logitech.com/en-us/product/hd-pro-webcam-c920
Cool, thanks! I'll look into this. If I'm doing processing at a resolution of, let's say, 1024x768 (more seems like too much), how many FPS will I get (approximately)?
techhelpbb
15-10-2014, 11:12
Yeah, people have told me basically what you said here, but I asked whether local LAN traffic (the camera is connected with an Ethernet cable to the router on the robot, which also connects with Ethernet to the RPi/Nvidia board/any other board) counts against the network bandwidth limit. (It does not seem likely, but I want to be sure about this.)
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, and therefore traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.
Cool, thanks! I'll look into this. If I'm doing processing at a resolution of, let's say, 1024x768 (more seems like too much), how many FPS will I get (approximately)?
This question greatly depends on how you achieve it.
If you do it in compiled code you can achieve 5 fps or more easily (with reduced color depth).
If your CPU is slow or your code is bad, then things might not work out so well.
Anything you send over TCP/IP, TCP/IP will try to deliver, and once it starts, it is hard to stop (hence 'reliable transport'). With UDP you control the protocol, so you can choose to give up. This means that with UDP you need to do more work. Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements. I would rather see a good cooperative solution that people can leverage and discuss/document than a lot of people rediscovering how to do this in a vacuum over and over.
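As a rough sketch of the 'send it once and move on' idea (the address, port, and JPEG quality below are placeholders, not anything FIRST-specific):
    // Sketch: JPEG-encode one frame and send it in a single UDP datagram, no retries.
    // IP/port/quality are placeholders; a heavily compressed small frame should fit
    // in one datagram (UDP payload limit is about 65 KB), but check your sizes.
    #include <opencv2/opencv.hpp>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>
    #include <vector>

    void sendFrameOnce(const cv::Mat& frame, int sock, const sockaddr_in& dest) {
        std::vector<uchar> jpeg;
        std::vector<int> params;
        params.push_back(CV_IMWRITE_JPEG_QUALITY);
        params.push_back(30);                                  // heavy compression on purpose
        cv::imencode(".jpg", frame, jpeg, params);
        if (jpeg.size() < 65000) {                             // skip frames too big for one datagram
            sendto(sock, &jpeg[0], jpeg.size(), 0,
                   (const sockaddr*)&dest, sizeof(dest));      // fire and forget
        }
    }

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dest;
        std::memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5800);                           // placeholder port
        inet_pton(AF_INET, "10.0.0.5", &dest.sin_addr);        // placeholder DS address
        cv::VideoCapture cap(0);
        cv::Mat frame;
        while (cap.read(frame)) sendFrameOnce(frame, sock, dest);
        close(sock);
        return 0;
    }
Note the single sendto() and the absence of any acknowledgement: if the datagram is lost, you simply wait for the next frame.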
I will put in an honorable mention here for VideoLAN (VLC) as far as unique and interesting ways to send video over a network go.
Anyone interested might want to look it over.
jman4747
15-10-2014, 11:21
Cool, thanks! I'll look into this. If I'm doing processing at a resolution of, let's say, 1024x768 (more seems like too much), how many FPS will I get (approximately)?
When we tried going down to 480p, our frame rate did not improve per se; however, the capture time went down, which is very important for tracking. That said, our test wasn't extensive, so there are other factors at play. It may or may not improve overall performance.
Jared Russell
15-10-2014, 11:21
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRio or (b) your driver station laptop. No new hardware costs, no worry about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the cRIO exist and are fairly straightforward to use.
In all seriousness, in the retroreflective tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).
I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure-time settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send back the relevant bits to your robot. Round-trip latency with processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you utilize tricks like sending gyro data along with your image so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback/testing, and even do "on-the-fly" tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should try to aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.
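To make the gyro trick concrete, a minimal sketch (the names and units here are made up for illustration) looks something like this:
    // Minimal latency-compensation sketch; all names are illustrative.
    struct VisionResult {
        double headingAtCapture;  // gyro heading sent alongside the image (degrees)
        double targetOffset;      // target bearing relative to the camera at capture (degrees)
    };

    // On the robot: turn a late-arriving result into an error at the current time.
    double aimError(const VisionResult& r, double headingNow) {
        double targetFieldBearing = r.headingAtCapture + r.targetOffset;
        return targetFieldBearing - headingNow;  // feed this to the turret/drive loop
    }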
If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and what your goals really are. If your goal is to build the most competitive robot that you possibly can, there is almost always lower hanging fruit that is just as inspirational to your students.
matan129
15-10-2014, 11:23
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, and therefore traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.
This question greatly depends on how you achieve it.
If you do it in compiled code you can achieve 5 fps or more easily (with reduced color depth).
If your CPU is slow or your code is bad, then things might not work out so well.
Anything you send over TCP/IP, TCP/IP will try to deliver, and once it starts, it is hard to stop (hence 'reliable transport'). With UDP you control the protocol, so you can choose to give up. This means that with UDP you need to do more work. Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements.
So... to summarize, I will always be able to choose NOT to send data over WiFi?
Is it 'safe' to develop a high-res/high-fps vision system whose parts are all physically on the robot (i.e., the camera and the RPi)? By this question I mean: will I suddenly discover on the field that all the communication actually goes through the field WiFi, and hence the vision system is unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place)?
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRio or (b) your driver station laptop. No new hardware costs, no worry about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the cRIO exist and are fairly straightforward to use.
In all seriousness, in the retroreflective tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).
I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure-time settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send back the relevant bits to your robot. Round-trip latency with processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you utilize tricks like sending gyro data along with your image so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback/testing, and even do "on-the-fly" tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should try to aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.
If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and what your goals really are. If your goal is to build the most competitive robot that you possibly can, there is almost always lower hanging fruit that is just as inspirational to your students.
Wow, thanks! And yes, in the beginning I intend to develop only recognition of the retroreflective strips. Well, I'll talk with the other guys on the programming team and we'll see about this. The major goal (at least for now) of my planned vision system is to assist the driver in scoring: it will slightly correct the position of the robot and therefore be more precise.
Joe Ross
15-10-2014, 11:30
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, and therefore traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.
Do you have any evidence of this? Bridge mode does not mean that the D-Link acts as a hub and blasts data anywhere. It still has an internal switch and will only send the data to the appropriate port. If both connections are Ethernet, it won't send the data through WiFi. The only exception is broadcast packets.
techhelpbb
15-10-2014, 11:30
So... to summarize, I will always be able to choose NOT to send data over WiFi?
Is it 'safe' to develop a high-res/high-fps vision system whose parts are all physically on the robot (i.e., the camera and the RPi)? By this question I mean: will I suddenly discover on the field that all the communication actually goes through the field WiFi, and hence the vision system is unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place)?
Even if the D-Link were to repeat packets over the WiFi that were intended only for the wired ports, if the packets are UDP it has no effect on the onboard interconnection. The worst that happens is you hit the field side with UDP, and frankly, if they are monitoring remotely as they should, they can filter that if it really came down to it.
Also, you do not need to send much over the D-Link switch unless you are sending video to the driver's station. In fact, you can avoid the Ethernet entirely if you use I2C, the digital I/O, or something like that.
So you should be good. Just be careful to realize that if you do use Ethernet to do this, you are using some of the bandwidth to the cRIO and RoboRio, and if you do this too much you can cause issues. You do not have complete control over what the cRIO/RoboRio does on Ethernet, especially when on a regulation FIRST field.
I believe there is a relevant example of what I mean in the Einstein report from years past.
Do you have any evidence of this? Bridge mode does not mean that the D-Link acts as a hub and blasts data anywhere. It still has an internal switch and will only send the data to the appropriate port. If both connections are Ethernet, it won't send the data through WiFi. The only exception is broadcast packets.
Yes, a properly working MAC address table in a switch should work that way.
D-Link has had issues with this in the past. Hence they deprecated the bridge feature on the DIR-655.
There are hints to this floating around, like this:
http://forums.dlink.com/index.php?topic=4542.0
Also this (odd, is it not, that the described broadcast does not pass...):
http://forums.dlink.com/index.php?topic=43999.0;prev_next=next#new
matan129
15-10-2014, 11:35
Even if the D-Link were to repeat packets over the WiFi that were intended only for the wired ports, if the packets are UDP it has no effect on the onboard interconnection. The worst that happens is you hit the field side with UDP, and frankly, if they are monitoring remotely as they should, they can filter that if it really came down to it.
Also, you do not need to send much over the D-Link switch unless you are sending video to the driver's station. In fact, you can avoid the Ethernet entirely if you use I2C, the digital I/O, or something like that.
So you should be good. Just be careful to realize that if you do use Ethernet to do this, you are using some of the bandwidth to the cRIO and RoboRio, and if you do this too much you can cause issues. You do not have complete control over what the cRIO/RoboRio does on Ethernet, especially when on a regulation FIRST field.
I believe there is a relevant example of what I mean in the Einstein report from years past.
Yes, a properly working MAC address table in a switch should work that way.
D-Link has had issues with this in the past. Hence they deprecated the bridge feature on the DIR-655.
There are hints to this floating around, like this:
http://forums.dlink.com/index.php?topic=4542.0
Can you elaborate on what I2C is? (Again, I'm new to FRC. Sorry if this is a noob question!)
techhelpbb
15-10-2014, 11:37
Can you elaborate on what I2C is? (Again, I'm new to FRC. Sorry if this is a noob question!)
I2C is the telephone-style jack on the bottom left in this picture of the Digital Sidecar.
http://www.andymark.com/product-p/am-0866.htm
I do not want to hijack your topic on this extensively.
So I will simply point you here:
http://en.wikipedia.org/wiki/I%C2%B2C
It is basically a form of digital communication.
To use it from a laptop, you would probably need a USB-to-I2C interface, and they do make things like this COTS.
Chadfrom308
15-10-2014, 12:52
Can you elaborate on what I2C is? (Again, I'm new to FRC. Sorry if this is a noob question!)
I²C (or I2C) is a really simple way to communicate. Most of the time it's between 2 microcontrollers, because it is easy and fast to do. I hear it is pretty hard to do on the cRIO (hopefully it is easier on the roboRIO).
What I would do is try to get a DAC (digital-to-analog converter) on the RPi or BeagleBone. That way you can hook it up straight to the analog input on the roboRIO and use a function to change the analog signal back into a digital value. I feel like it would be an easy thing to do, especially if you are doing an auto-center/aim system: you could hook a PID loop right up to the analog signal (okay, maybe a PI loop, but that can still give you good auto-aiming).
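A minimal sketch of the scaling involved (the numbers are assumptions, and the actual DAC write is omitted because it depends on the part you pick):
    // Illustrative scaling only: pack a signed pixel error into 0-5 V on the
    // coprocessor and recover a normalized error from the analog reading on the
    // roboRIO side.
    const double MAX_ERROR_PX = 160.0;        // half of a 320-wide image (assumption)

    double errorToVolts(double errorPx) {     // coprocessor side: -160..+160 px -> 0..5 V
        return 2.5 + 2.5 * (errorPx / MAX_ERROR_PX);
    }

    double voltsToNormalizedError(double volts) {  // roboRIO side: 0..5 V -> -1..+1
        return (volts - 2.5) / 2.5;
    }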
I also completely forgot about the laptop driver station option. Although it is not the fastest method, vision tracking on the driver station is probably the easiest method.
Also, the RoboRIO has 2 cores, so maybe you can dedicate one core to the vision tracking and that way there is practically 0 latency (at least for communications)
techhelpbb
15-10-2014, 13:39
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRio
I would hope the Java vision examples for the RoboRio have improved.
On Team 11 we had very little luck getting the cRIO to handle everything we asked of it with Java doing vision and everything else, and in some cases we couldn't even get the examples to work.
The RoboRio is faster, so that will help. It is less picky, so that will also help.
I believe I asked around on ChiefDelphi in the past for Java vision examples for the cRIO that actually work.
I would love to see working Java vision examples on the RoboRio. Perhaps video of them working.
I have not involved myself with the beta work MORT is doing on the RoboRio and Java in this regard so it may exist.
Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).
I did a survey last year at several events regarding what people were using for vision processing. It was not part of my CSA duties, but since I was asking other questions anyway, I asked or looked. I think you would be surprised at how many teams made the coprocessor work without major headaches.
I also spent part of each competition chasing around teams that were sending video back to the driver's station in ways that messed with the field, at the request of the FTA. Very competent people were having issues with this, so I do not think it is quite so cut and dried. If anyone wanted, I could toss that data together for the events at which I volunteered in MAR.
Additionally:
I would like to add there is a hidden cost to the embedded and single board computers.
It is the same hidden cost of the cRIO especially back 3 or more years ago when the 8 slot cRIO was the FIRST approved system.
How many of these do you have knocking around to develop on?
Think about it: general-purpose laptops are plentiful, and therefore anyone with a laptop (increasingly, all the students in a school) could snag a USB camera for <$30 and start writing vision code. If you are using old phones, you can get the whole package for probably $100 or less, and your students are probably already glued to the phones they use too often every day now.
On the other hand, if you buy a development board for nearly $200, how many people can actively develop/test on it?
If a student takes that board home how many other students can work and be confident that what they are doing will port to that system?
Vision is a big project and more importantly you can often play a FIRST game without it.
Is it better to use laptops you probably already have, or to commit to more proprietary stuff you might have to buy multiples of and then, if that product goes out of production, do it all over again (if you even really use it)? Is the cost justifiable?
jman4747
15-10-2014, 17:10
I would like to add there is a hidden cost to the embedded and single board computers.
It is the same hidden cost of the cRIO especially back 3 or more years ago when the 8 slot cRIO was the FIRST approved system.
How many of these do you have knocking around to develop on?
Think about it: general-purpose laptops are plentiful, and therefore anyone with a laptop (increasingly, all the students in a school) could snag a USB camera for <$30 and start writing vision code. If you are using old phones, you can get the whole package for probably $100 or less, and your students are probably already glued to the phones they use too often every day now.
On the other hand, if you buy a development board for nearly $200, how many people can actively develop/test on it?
If a student takes that board home how many other students can work and be confident that what they are doing will port to that system?
Vision is a big project and more importantly you can often play a FIRST game without it.
Is it better to use laptops you probably already have, or to commit to more proprietary stuff you might have to buy multiples of and then, if that product goes out of production, do it all over again (if you even really use it)? Is the cost justifiable?
Not necessarily. Most people use the OpenCV libraries with C++ on the Jetson TK1 and other SBLCs. The libraries are written the same for it, Linux PCs, laptops, Windows PCs, etc. Even the GPU libraries are functionally the same and look almost identical. Furthermore, the standard OpenCV GPU libraries work with Nvidia GPUs in general, not just the Jetson. If you mean Linux vs. Windows: before the Jetson I had never used Linux, but the GUI desktop is very easy to get into, and I was able to install everything necessary having never used Linux. Thus anyone with a computer or other SBLC running Windows/Linux who can develop in C++ can write code that can be used on the co-processor.
techhelpbb
15-10-2014, 17:55
Not necessarily. Most people use the OpenCV libraries with C++ on the Jetson TK1 and other SBLCs. The libraries are written the same for it, Linux PCs, laptops, Windows PCs, etc. Even the GPU libraries are functionally the same and look almost identical. Furthermore, the standard OpenCV GPU libraries work with Nvidia GPUs in general, not just the Jetson. If you mean Linux vs. Windows: before the Jetson I had never used Linux, but the GUI desktop is very easy to get into, and I was able to install everything necessary having never used Linux. Thus anyone with a computer or other SBLC running Windows/Linux who can develop in C++ can write code that can be used on the co-processor.
Firstly, I will not deny that the Jetson TK1 is a great bit of kit.
Secondly, CUDA development is sometimes leveraged in the financial industries in which I often work. Some problems are not well suited for it; luckily OpenCV does leverage it, but I can certainly see various ways it could be poorly utilized regardless of support. As you say, OpenCV supports it, but what if you don't want to use OpenCV?
Thirdly, the Jetson is actually more expensive than you might think. It lacks an enclosure and, again, the battery that you'd get with a laptop or a phone. Once you add those items, the laptop or phone is cheaper. If you don't care about the battery, then the Jetson wins on price over the laptop, because then you don't need to deal with the power supply issue the laptop would create without its battery, but a used phone would still likely overtake the Jetson on price, with or without the battery.
Fourthly, the phone is smaller than 5" x 5" and very likely lighter, even with the battery. You might even have the phone development experience already, because teams like MORT write scouting apps that are in the Android store.
Fifthly, the Jetson does not have a camera, and an old phone probably does (maybe even 2 cameras facing different directions, or in a small number of cases 2 cameras facing the same direction). What the Jetson does have is a single USB 2.0 port and a single USB 3.0 port, while a laptop might have 4 or even more USB ports (yes, laptops often have integrated USB hubs on some of these ports, but you would have to add a USB hub to the Jetson, and you would run out of non-hubbed ports fast that way). That might matter a lot if you intend not to use Ethernet (an I2C USB adapter, or USB digital I/O like an FTDI chip or an Atmel/PIC MCU). To put this in a fair light, I will refer here:
http://elinux.org/Jetson/Cameras
If you need expensive FireWire or Ethernet cameras, you have already consumed the cost of possibly 3 USB cameras for each.
Worse, you might be back on the D-Link switch or dealing with TCP/IP-based video, which for this application is, in my opinion, not a good idea.
Finally, I will acknowledge that the Tegra TK1 is basically a general-purpose computer with a GPU, so you can leverage tools as you say. Still, all the testing needs to end up on it. You could develop up to that point, but then you'd need to buy it to test. Maybe buy more than one if you have a practice robot. Maybe even more if you have multiple developers. Students usually do not work like professional programmers, as the sheer number of cRIO reloads I have seen can demonstrate.
On the plus side, you could build up to the point where you have it working, then load it on the Jetson, and if it doesn't work, take your laptop apart. So there's that.
For a different dimension to this analysis: which skill is probably worth more, the ability to write Android and Apple apps that you can bring to market while still a high school student, or the ability to write CUDA apps? Both could analyze video, but which one would you mentor if you wanted to give your student the most immediately marketable skill they can use without your guidance? My bet is the Android and Apple app skills would more immediately help a student earn a quick buck and be empowered. Mining Bitcoins on CUDA is not as profitable as you think ;).
Brian Selle
15-10-2014, 18:38
I would hope the Java vision examples for the RoboRio have improved.
On Team 11 we had very little luck getting the cRIO to handle everything we asked of it with Java doing vision and everything else, and in some cases we couldn't even get the examples to work.
The RoboRio is faster, so that will help. It is less picky, so that will also help.
I believe I asked around on ChiefDelphi in the past for Java vision examples for the cRIO that actually work.
I would love to see working Java vision examples on the RoboRio. Perhaps video of them working.
In 2012, we used Java for vision processing on the cRIO using the NIVision libraries. It calculated the target goal distance/angle from a single frame, adjusted the turret/shooter speed and unloaded the ball supply. It worked quite well.
What helped the most in getting it to work was that we first wrote a standalone app in .NET C#. I seem to recall that the NI install included an NIVision DLL, or we downloaded it for free. Using the examples as a guide, we were able to learn the libraries much faster than by dealing with the cRIO. An added bonus was we could quickly diagnose issues at competitions without tying up the robot/cRIO.
We thought about using it in 2013 and 2014 but, as others have said, it was a low priority and the extra effort/failure points made it even less important. Cheesy Vision sealed the decision. If we do it in the future it will most likely be on the roboRIO or Driver Station.
Greg McKaskle
15-10-2014, 21:26
I'd encourage you to use the examples and white paper to compare your processing decisions. The MIPS rating of the processors is a pretty good estimate of their raw horsepower. I don't have a good Tegra, so I can't measure where the CUDA cores are a huge win and where they are not.
Finally, it isn't the board you pick, but how you use it. I suggest you pick the one that lets you iterate and experiment quickly and confidently.
Greg McKaskle
marshall
15-10-2014, 22:41
I'd encourage you to use the examples and white paper to compare your processing decisions. The MIPS rating of the processors is a pretty good estimate of their raw horsepower. I don't have a good Tegra, so I can't measure where the CUDA cores are a huge win and where they are not.
Finally, it isn't the board you pick, but how you use it. I suggest you pick the one that lets you iterate and experiment quickly and confidently.
Greg McKaskle
Agreed. It is all about how you use the tools you have. We're doing development with the TK1 board now and it's a lot of fun, but there are some drawbacks. Most of them have been outlined above (be mindful of using X11 on it; it's not stable).
The main thing is that you pick something and then stick with it until you've developed a solution. If you are new to FRC or to any of this then your best bet is using the code and examples that NI/WPI have made available to teams.
Don't get me wrong, if you want to try new things then do it and ask lots of questions too! Just be prepared for it not to always work out.
techhelpbb
15-10-2014, 23:00
Another facet of this issue: if in doubt as to the stability of your vision code/system, make sure it is properly modular.
I've seen some really tragic losses over the years because vision wasn't working right, but removing it was as bad as leaving it in, especially when the code is interlaced awkwardly into the rest of the cRIO code.
Putting the system external to the cRIO can make it more contained.
It is often possible to make the whole thing just mechanically removable when it is external (a few less active inputs here or there).
Remember things you never saw in testing can happen on a competition field.
Jared Russell
15-10-2014, 23:37
I also spent part of each competition chasing around teams that were sending video back to the driver's station in ways that messed with the field, at the request of the FTA. Very competent people were having issues with this, so I do not think it is quite so cut and dried. If anyone wanted, I could toss that data together for the events at which I volunteered in MAR.
In 2012-2013, 341 competed at 9 official events and numerous other offseason competitions. We never had any issues with streaming video. There were a handful of matches where things didn't work, but they all traced back to user error or an early match start (before our SmartDashboard had connected).
I believe that when people have trouble with this setup, it can usually be traced back to choosing camera settings poorly. Crank the exposure time down along with brightness, raise contrast, and you will find that images are almost entirely black except for your vision target (if necessary, provide more photons from additional LED rings to improve your signal to noise ratio). A mostly-black image with only your target illuminated is advantageous for a bunch of reasons:
1) JPEG compression can be REALLY effective, and each image will be ~20-40KB, even at 640x480. Large patches of uniform color are what JPEG loves best.
2) Your detection algorithm has far fewer false alarms since most of the background is simply black.
3) You can get away with conservative HSL/HSV/RGB thresholds, so you are more robust to changes in field lighting conditions. We won 6 of those 9 on-season competitions (and more than half of the offseasons) using 100% camera driven auto-aim, and never once touched our vision system parameters other than extrinsic calibration (ex. if the camera got bumped or our shooter was repaired).
In my experience, I find that the vast majority of teams don't provide enough photons and/or don't crank down their exposure time aggressively enough. Also, I strongly suspect (but do not know for sure) that the Bayer pattern on the Axis camera effectively makes it twice as sensitive to green light, so you might find that green LEDs work much better than other colors. We used green LEDs both years.
It is also possible that if your vision processing code falls behind, your laptop will get sluggish and bad things will happen. Tune your code (+ camera settings, including resolution) until you can guarantee that you will process faster than you are acquiring.
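For anyone wondering what this looks like in code, here is a minimal OpenCV sketch; whether the exposure/brightness properties are honored depends on your camera and driver (you may need v4l2 controls or the Axis web interface instead), and the HSV bounds are placeholders to tune:
    // Sketch: dark exposure plus a conservative HSV threshold for a green-lit target.
    // Whether set() is honored depends on the camera and driver; the values and HSV
    // bounds are placeholders to tune.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);
        cap.set(CV_CAP_PROP_BRIGHTNESS, 0.1);  // placeholder; scale is driver-specific
        cap.set(CV_CAP_PROP_EXPOSURE, -10);    // placeholder; meaning is driver-specific
        cv::Mat frame, hsv, mask;
        while (cap.read(frame)) {
            cv::cvtColor(frame, hsv, CV_BGR2HSV);
            // With the image mostly black, a wide-ish hue band around green and a
            // high value floor is usually enough.
            cv::inRange(hsv, cv::Scalar(50, 100, 200), cv::Scalar(90, 255, 255), mask);
            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
            // ...filter contours by size/aspect ratio and compute the aiming error...
        }
        return 0;
    }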
techhelpbb
16-10-2014, 04:58
...
Basically, this advice produces a reduction in color depth and image content. The end result is that the settings will effectively remove large portions of the video image before the video is sent, reducing the bandwidth required to send the video over the field network. Using retroreflective tape with a light source makes this easier, because the high brightness at the camera sensor lets the target survive the process of reducing the camera output. I'm not sure this advice will work out so well if the goal is to track things that are not retroreflective.
It is a similar concept to reducing the frame rate and/or video resolution and increasing the compression, all of which make the video detail less and less useful to human eyes.
All of these options reduce the bandwidth required to send the video in the end, so whether the video is sent with TCP or UDP is less important. Even if TCP sends the video poorly, there is just less of it to send.
So I would wonder, with this being the compromise, whether the drivers trying to see the video on the driver's station (for example, if the vision software was in doubt) would find it nearly as useful as just using a targeting laser/light to illuminate the target visibly for the drivers and not using the video at all.
In the years before FIRST started using QoS and prioritizing traffic (before the Einstein that caused the uproar), just sending video could put you in a situation where someone on the network might get robbed of FMS packets. We can only assume, as teams, that the bandwidth controls we have now will actually allow 2-4 Mbit/s of video without disruption. Since I know for sure that timing out the FMS packets will stop the robot until FMS can deliver a packet, this is a real balancing act.
One of the most concerning things to me personally is when you are faced with a situation like I was last year, where someone who worked on the Einstein issue was having trouble sending video despite the expectations they should have had based on the results of that work; they were finding, basically, that they had less bandwidth than they might expect. So in cases like these I point out that FIRST fields are very dynamic and things might change without notice. What was without issue on the field network in 2012 might have issues in 2015. It really depends on settings you can neither control nor see in this environment until you have access to that field. I believe there is generally bandwidth to send some video from the robot to the driver's station, even using TCP, but you will have to make compromises, and they might not be compromises you'd like to make. Hence, at least to me personally, if you can avoid putting that burden on the field by sending video over the WiFi, you just should. It will just be one less variable to change on you from your test environment to the competition field.
marshall
16-10-2014, 08:01
We can only assume, as teams, that the bandwidth controls we have now will actually allow 2-4 Mbit/s of video without disruption. Since I know for sure that timing out the FMS packets will stop the robot until FMS can deliver a packet, this is a real balancing act.
I would not make that assumption. We ran into a lot of issues trying to get compressed 640x480 images back to the robot. We ended up ditching streaming entirely and instead just grabbing a single frame for our targeting system this past year.
Hence, at least to me personally, if you can avoid putting that burden on the field by sending video over the WiFi, you just should. It will just be one less variable to change on you from your test environment to the competition field.
I agree 100% with this. That's why we have started down the road with the Tegra boards. Despite what FIRST says, FMS Lite != FMS. There are some oddities with FMS that only occur when you are on a field with full FMS and not in a lab.
techhelpbb
16-10-2014, 09:50
I agree 100% with this. That's why we have started down the road with the Tegra boards. Despite what FIRST says, FMS Lite != FMS. There are some oddities with FMS that only occur when you are on a field with full FMS and not in a lab.
I am not clear on the TCP/IP stack performance of the RoboRio, but on the cRIO, if you used the Ethernet on the robot to interface your coprocessor (for vision in this case), even if the goal was to send information only to the cRIO locally, you could overwhelm the cRIO. There is a fine write-up of the details in the Einstein report. So just be careful: the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could have this issue.
Not sending real-time video payloads over the WiFi does not remove the possibility that you could send so much data to the cRIO/RoboRio via the Ethernet port that you still prevent it from getting FMS packets.
If, for example, one interfaced to the cRIO/RoboRio over the digital I/O, then the coprocessor could send all the data it wants, but the cRIO/RoboRio might not get it all from the coprocessor and will continue to get FMS packets, so your robot does not suddenly stop. This effectively gives the coprocessor a lower priority than your FMS packets (and that is likely the situation you really desire).
If the RoboRio stops using the Ethernet port for the field radio, then this may be less of an issue, because the FMS packets would not be competing on the Ethernet port (they would be on a separate network stream). I know some alpha testing for the RoboRio was done around the Asus USB-N53 Dual-band Wireless-N600. At that point the issue is purely one of the RoboRio software keeping up with the combined traffic from the Ethernet port and the USB networking device (only real testing would show how well that works out, and for that you need robots on a competition field, test equipment, and things to throw data at the RoboRio (laptops, Jetson boards, etc.)).
NotInControl
16-10-2014, 13:25
I am not clear on the TCP/IP stack performance of the RoboRio, but on the cRIO, if you used the Ethernet on the robot to interface your coprocessor (for vision in this case), even if the goal was to send information only to the cRIO locally, you could overwhelm the cRIO. There is a fine write-up of the details in the Einstein report. So just be careful: the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could have this issue.
I feel like this information is misleading to some and might cause people to shy away from trying a solution that works for them because they feel the hardware we have cannot support it. What is your definition of "lots of data"? This just doesn't happen the way you make it seem with the statements "even if the goal was to send information only to the cRIO locally, you could overwhelm the cRIO" or "the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could have this issue". The problem is not with the cRIO or with the RoboRio; it is with the operating systems they run and the network drivers included with those systems. The cRIO runs VxWorks 6.3. That OS has a single FIFO network buffer for all sockets. It is possible to fill up the network stack, thus causing any new packets to be dropped, but only if you are constantly sending packets to the controller's NIC without reading them off the queue. This happened to our good friends on Einstein that year because they had a condition where they could enter an infinite loop in their code, which prevented the code that read the data (sent from their BeagleBone White) off the queue from being executed.
As long as you are reading from the queue faster than you write, and your code doesn't halt, you should never run into this problem on the cRIO. A properly threaded TCP or UDP communication protocol, programmed by the user for controller-to-off-board-processor communication, can't overwhelm the network. We used a bi-directional TCP communication protocol sending data from our off-board processor at a rate of 20 times a second without any packet loss or communication issues in the 5 events we have played in 2014 so far.
At the end of the day, as long as you can read data off the NIC faster than you send it (which should be easy to achieve), you should never have the problem above. It's that simple. The Rio is Linux-based and operates a bit differently, but it is still possible to run into this issue. The benefit of the Rio being Linux is that more users are familiar with Linux and can diagnose whether the stack is full. The user should be able to see the buffer state by reading
/proc/net/tcp
or
/proc/net/udp
Those files provide a ton of info on the corresponding protocol, including the amount of data queued in-bound and out-bound. You can read the spec to see what each field of data means. http://search.cpan.org/~salva/Linux-Proc-Net-TCP-0.05/lib/Linux/Proc/Net/TCP.pm
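For example, running cat /proc/net/udp prints one row per socket; the tx_queue:rx_queue column is the amount of data waiting in each direction, and an rx_queue that keeps growing on your listening socket is the sign that your code is not reading fast enough (the exact column layout varies a bit by kernel, so check the spec above).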
The Einstein report from 2012, where this was documented, is here: http://www3.usfirst.org/sites/default/files/uploadedFiles/Robotics_Programs/FRC/Game_and_Season__Info/2012_Assets/Einstein%20Investigation%20Report.pdf
Not sending real-time video payloads over the WiFi will not remove the possibility that you could send so much data to the cRIO/RoboRio via the Ethernet port that you still prevent it from getting FMS packets.
Why do you state that sending real-time video can aid in flooding the cRIO queue, even though the packet destination is the driver station and not the cRIO/RoboRio? What you are describing sounds like the D-Link's ports act like a hub instead of a switch; do you have evidence of this?
What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as is expected. It is not unreasonable.
We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this to be real-time, and we don't have issues; because of the small image size, with a mostly black background, we are well under 3 Mbit/s of bandwidth (a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts with our off-board processor and ends with a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and routes based on MAC address, the pictures destined for the driver station should not flood the cRIO. A properly working switch, as the D-Link is advertised to be, only routes packets to individual ports, not all ports. If you have evidence of your claim to the contrary, please provide it.
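If anyone wants to sanity-check numbers like that for their own camera before an event, a few lines of OpenCV (C++ here, using current OpenCV constant names) will measure roughly how big each compressed frame comes out and what that works out to at a chosen frame rate. The camera index, resolution, JPEG quality, and frame rate below are just example values:

// Rough sketch: measure compressed frame sizes and estimate the resulting bandwidth.
// Camera index, resolution, JPEG quality, and frame rate are example values only.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // example camera index
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 320);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);

    std::vector<int> params = {cv::IMWRITE_JPEG_QUALITY, 30};  // aggressive compression
    const double fps = 20.0;                                    // intended send rate

    cv::Mat frame;
    std::vector<uchar> jpeg;
    for (int i = 0; i < 100 && cap.read(frame); ++i) {
        cv::imencode(".jpg", frame, jpeg, params);
        double mbitPerSec = jpeg.size() * 8.0 * fps / 1e6;
        std::cout << "frame " << i << ": " << jpeg.size() << " bytes, about "
                  << mbitPerSec << " Mbit/s at " << fps << " fps\n";
    }
    return 0;
}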
If one interfaced to the cRIO/RoboRio over the digital I/O, for example, then the coprocessor could send all the data it wants, but the cRIO/RoboRio might not get it all from the coprocessor and will continue to get FMS packets, so your robot does not suddenly stop. That effectively gives the coprocessor a lower priority than your FMS packets (and that is likely the situation you really desire).
If you plan to bit-bang data over I/O, or even multiple I/O lines, your throughput will suffer. This may work for small communications, but it will be a hindrance for more data, or for scalability. I also believe it is more complicated than Ethernet if the user requires bi-directional communication. If flooding the network queue were a concern even after designing a proper communication protocol, I would recommend people reduce the Time To Live on the packet, so that if it does get queued up, it is not sitting in the queue for more than 2 seconds.
If the RoboRio stops using the Ethernet port for the field radio then this may be less of an issue, because the FMS packets would not be competing on the Ethernet port (they would be on a separate network stream). I know some alpha testing for the RoboRio was around the Asus USB-N53 Dual-band Wireless N600. At that point the issue is purely one of the RoboRio software keeping up with the combined traffic from the Ethernet port and the USB networking device. Only real testing would show how well that works out, and for that you need robots on a competition field, test equipment, and things to throw data at the RoboRio (laptops, Jetson boards, etc.).
As an Alpha and Beta tester: the ASUS proved too problematic and will not be approved for use. We will continue to use the D-Link for at least the 2015 competition season.
I second what Jared states on camera and network settings. I will be releasing a comparison of RoboRio vs BeagleBone Black vs Jetson (CPU) vs Jetson (w/GPU) sometime in the near future, as I have access to all of those boards, and they are all Linux/ARM and can run the exact same code.
We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams with dead robots in Auto because their camera was unplugged when the robot was put on the field, and the code started with a null reference.
I realize our rationale is different from your post, which has a tone of more bad than good in my opinion, but I would definitely advocate for an off-board processor. Another reason we go with an off-board processor is what happened in 2012 and 2013, where certain events actually disabled streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea. If you must rely on vision, doing the processing locally on the RoboRio or on an off-board processor is a way to avoid this, because the data stays on the local LAN and doesn't need to be transmitted to the DS. Although I am open to any evidence that this is not true for the D-LINK DAP 1522, as is suggested in this thread.
For these reliability reasons, even though the RoboRio probably holds enough juice for our needs, we most likely will still continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012, and ranked 1st in both of our in-season competitions.
Our robot is and always will be developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or make an action quicker, not replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately, and the robot still operates to complete the match without much handicap.
Regards,
Kevin
techhelpbb
16-10-2014, 14:15
I feel like this information is misleading to some and might deter people from trying a solution that works for them, because they feel the hardware we have cannot support it.
1. I've referenced the Einstein report several times during this topic.
2. Without a doubt lots of people are not even going to read this and try it anyway.
3. I've had this discussion over...and over...for years.
The bottom line is if someone asked how to test it that is one thing.
Simply throwing the details at them seems to tune them out (and really that's a common human trait).
I have to read your post again later when I have time, but offhand most of what you wrote there seems fine.
Why do you state that sending real-time video can aid in flooding the cRIO queue, even though the packet destination is the driver station and not the cRIO/RoboRio? What you are describing sounds like the D-Link's ports act like a hub instead of a switch; do you have evidence of this?
If you read it again, I did not write that real-time video over WiFi can flood the cRIO queue.
I wrote that sending real-time video over the WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to prevent a timed disable.
I used simple language when I wrote it because I hoped that I said it in a way less experienced people could understand.
What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as is expected. It is not unreasonable.
I think you misunderstood the previous point so this does not make sense for me to address.
We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this to be real-time, and we don't have issues; because of the small image size, with a mostly black background, we are well under 3 Mbit/s of bandwidth (a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts with our off-board processor and ends with a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and routes based on MAC address, the pictures destined for the driver station should not flood the cRIO.
Again the direction you are going with this does not match what I communicated.
You reduced your video bandwidth so you could send it in the bandwidth FIRST actually has on the competition field.
You did so because it worked.
If you go back and look - you say 3Mb and others wrote 7Mb.
So what do you think would happen if you used up 7Mb of bandwidth? It would be a problem.
I find it difficult to tell people to read the manual when that manual tells them something less than transparent.
A properly working switch, as the D-Link is advertised to be, only routes packets to individual ports, not all ports. If you have evidence of your claim to the contrary, please provide it.
I do not see why I should provide evidence of something I never wrote.
However the ARP function the D-Link products implement is questionable.
I provided links earlier if you would like to see what D-Link has to say about their own bridge function.
I have to say that I am not inclined to waste lots of time or energy on proving things to FIRST.
It seems to accomplish very little because there is no reasonable way for them to address some of these problems cost effectively.
Worse it might be exploitable if I go into too much detail.
If you plan to bit-bang data over I/O, or even multiple I/O lines, your throughput will suffer. This may work for small communications, but it will be a hindrance for more data, or for scalability.
I am very curious what data your video coprocessor is sending that is so large that it needs high throughput? What are you sending to the cRIO/RoboRio if the coprocessor is doing the vision part?
Is there really some reason you cannot have digital pins or even simple binary data for things like: 'move up more', 'move down more', 'on target', 'not on target'...?
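For reference, the coprocessor side of that idea can be as small as toggling a sysfs GPIO line on a Linux board, with the robot controller reading the matching pin as an ordinary digital input. A minimal sketch is below; the GPIO number is hypothetical and entirely board- and wiring-dependent.

// Minimal sketch: drive one "on target" flag out a Linux sysfs GPIO line.
// The GPIO number (60) is hypothetical; it depends on the board and the wiring.
#include <fstream>
#include <string>

static void exportAsOutput(const std::string& gpio) {
    std::ofstream("/sys/class/gpio/export") << gpio;
    std::ofstream("/sys/class/gpio/gpio" + gpio + "/direction") << "out";
}

static void setPin(const std::string& gpio, bool high) {
    std::ofstream("/sys/class/gpio/gpio" + gpio + "/value") << (high ? "1" : "0");
}

int main() {
    const std::string gpio = "60";   // hypothetical pin
    exportAsOutput(gpio);

    bool onTarget = true;            // would come from the vision loop
    setPin(gpio, onTarget);
    return 0;
}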
I also believe it is more complicated than Ethernet if the user requires bi-directional communication.
If you had a lot to communicate possibly. Again what are you communicating that requires a whole protocol?
If flooding the network queue were a concern even after designing a proper communication protocol, I would recommend people reduce the Time To Live on the packet, so that if it does get queued up, it is not sitting in the queue for more than 2 seconds.
Sure....but on the other hand...TCP/IP is not exactly a simple protocol either.
Seems odd to me that the fix for not writing a simple protocol (or virtually no protocol at all) is to use a protocol whose fine details, like TCP congestion mechanisms, people often do not understand.
Even more strange when they tune out anyone that is trying to explain potential issues.
So what are you sending from your vision coprocessor to your cRIO/RoboRio that you need to deal with all that?
As an Alpha and Beta tester: the ASUS proved too problematic and will not be approved for use. We will continue to use the D-Link for at least the 2015 competition season.
First I heard of it. Thanks.
I second what Jared states on camera and network settings. I will be releasing a comparison of RoboRio vs BeagleBone Black vs Jetson (CPU) vs Jetson (w/GPU) sometime in the near future, as I have access to all of those boards, and they are all Linux/ARM and can run the exact same code.
Me likely data :). Just saying.
Also if you release the test code perhaps we can try that against a laptop that would be legal on a FIRST robot.
We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams with dead robots in Auto because their camera was unplugged when the robot was put on the field, and the code started with a null reference.
I realize our rationale is different from your post, which has a tone of more bad than good in my opinion, but I would definitely advocate for an off-board processor.
I do not understand how a post I made previous to this, suggesting making vision systems modular, was interpreted this way.
Another reason we go with an off-board processor is what happened in 2012 and 2013, where certain events actually disabled streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea. If you must rely on vision, doing the processing locally on the RoboRio or on an off-board processor is a way to avoid this, because the data stays on the local LAN and doesn't need to be transmitted to the DS. Although I am open to any evidence that this is not true for the D-LINK DAP 1522, as is suggested in this thread.
I am still utterly perplexed by this.
If I prove to you there can be an impact on the field network when you send data on 2 switch ports locally what exactly do you think you are going to do about that?
I did not say that this impact would detract from that local communication.
Besides you just told us the D-Link is the only option in town.
So that means FIRST is past the point of changing course because it is nearly November.
For these reliability reasons, even though the RoboRio probably holds enough juice for our needs, we most likely will still continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012, and ranked 1st in both of our in-season competitions.
Our robot is and always will be developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or make an action quicker, not replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately, and the robot still operates to complete the match without much handicap.
Regards,
Kevin
See nothing else to address there.
Brian
NotInControl
16-10-2014, 17:42
1. I've referenced the Einstein report several times during this topic.
2. Without a doubt lots of people are not even going to read this and try it anyway.
3. I've had this discussion over...and over...for years.
The bottom line is if someone asked how to test it that is one thing.
Simply throwing the details at them seems to tune them out (and really that's a common human trait).
I am of the mindset that if you are posting as an authority or advocate for a solution, technology, or other reason, you should provide only the facts and let the user decide what they will do with the information. As a mentor, I try to be as factually correct in my posts as humanly possible. You don't know who is reading, or how something can be interpreted if you leave room for interpretation. If you are providing advice or opinion, then state as much, so as to not confuse anyone about what is fact vs. opinion. Your statements confused me and were open to some interpretation, which is why I offered what I thought would be clarification. I am not trying to offend anyone, and if I did I apologize. The goal is to help answer the OP's question, and also to be factual about what we have at hand, so that anyone using this post as a reference can make the right decision, now or in the future.
If you read it again, I did not say that real-time video over WiFi can flood the cRIO queue.
I said that sending real-time video over the WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to prevent a timed disable.
What other network issues can arise on the cRIO that will stop it from reading DS packets, if not a filled buffer? As far as I am aware, the thread priority is set such that the DS protocol has the highest priority and the user thread is lower on the cRIO. This means that even if the robot code is in an infinite loop, it should still be able to read and execute on a DS packet. The only way I know to STOP the cRIO from reading the DS packets is to flood the network buffer with USER packets, in which case all DS packets are thrown away by the NIC because there is no room for them. The robot cannot execute on a DS packet, because it's not getting it.
You state you were not saying that sending data over WiFi causes the buffer to fill, but that sending data over WiFi can prevent the cRIO from reading a DS packet. Please elaborate for us on what is going on here.
You reduced your video bandwidth so you could send it in the bandwidth FIRST actually has on the competition field.
So what do you think would happen if you used up 7Mb of bandwidth?
This is the kind of information I want to convey. The message I get from your post is that sending video over WiFi is bad, and you correctly identify a problem, but you don't really give it the caveat it deserves. Whether you explicitly state it or not, it is very much implied, which I believe is the wrong message. Instead of using vague terms like "lots of data" or "real-time" video, I provided concrete values that work. If you were trying to state "try to avoid sending real-time data which approaches anything over 5 Mbit/s", for example, then the message would be much different, and it just didn't read that way to me. I believe that is a good recommendation for a person asking about the link limitations.
I do not see why I should provide evidence of something I never wrote.
However the ARP function the D-Link products implement is questionable.
I provided links earlier if you would like to see what D-Link has to say about their own bridge function.
It was implied by your statement that sending data over WiFi can hinder the cRIO's ability to receive DS packets. I do not see how that's possible, unless you were flooding your network with broadcast packets. In a properly switched network, as on the robot with the D-Link, the data traffic should be mutually exclusive. Your statement suggests that packets sent over WiFi somehow make their way into the queue of the cRIO NIC and help fill it up, because that is the confirmed way to stop cRIO comms, unless you have evidence of another network anomaly that causes the cRIO to lose DS packets when transmitting data over WiFi.
I am very curious what data your video coprocessor is sending that is so large that it needs high throughput? What are you sending to the cRIO/RoboRio if the coprocessor is doing the vision part?
We sent from the BeagleBone to the cRIO: hot target status, left or right target, state of the BeagleBone, state of the camera, and some other values. This showed the drive team the health of the vision system as it was running. It was about 15 bytes of data, 20 times a second.
We sent from the cRIO to the BeagleBone: when the match started (to signal when to grab the hot target frame) and when the match reached 5 s (signaling a left-to-right hot goal switch). We could also send other cal values, which allowed us to tune our filter params from the driver station if we needed to. These were all async transmissions.
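To make the "15 bytes, 20 times a second" concrete for anyone designing something similar, one simple approach is a packed, fixed-size C++ struct sent as the UDP/TCP payload. The field layout below is an illustrative guess, not the actual wire format described above:

// Illustrative sketch of a small fixed-size status message (not the actual team format).
#include <cstdint>

#pragma pack(push, 1)
struct VisionStatus {
    uint8_t  hotTargetVisible;  // 0 or 1
    uint8_t  targetSide;        // 0 = left, 1 = right
    uint8_t  boardHealthy;      // coprocessor heartbeat flag
    uint8_t  cameraHealthy;     // camera-present flag
    uint16_t targetCenterX;     // example extra values
    uint16_t targetCenterY;
    uint32_t frameCounter;      // lets the receiver detect a stalled vision loop
    uint16_t sequence;          // lets the receiver detect dropped packets
    uint8_t  reserved;          // pad to a round 15 bytes
};
#pragma pack(pop)

static_assert(sizeof(VisionStatus) == 15, "keep the message small and fixed-size");

At 15 bytes per message and 20 messages a second, that is on the order of 300 bytes per second of payload, which is negligible next to the link limits discussed in this thread.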
Is there really some reason you can not have digital pins or even simple binary data for things like: Move up more, Move down more, On target, Not on target?
This is a valid solution. The only downside I see is how many I/O pins you need to use in order to do that. I/O is not scalable, but if it works for you, or for others, then by all means go ahead and use it. My advice would be to try Ethernet first, because I do not see a problem using it, and when done correctly you can have a robust, fully scalable vision system that you can use in future years no matter the challenge.
I am still utterly perplexed by this. If I prove to you there can be an impact on the field network when you send data on 2 switch ports locally what exactly do you think you are going to do about that?
I personally would like to see evidence of this, and I am sure other teams would too, because I think a lot of teams are under the impression that this is not true and that they can use the full potential of the LAN onboard the robot. Your evidence would correct teams' usage of the local network. The OP also asked this question in their original post.
We had a lot of network issues in 2012. We got over most of them in 2013, and had virtually no issues in 2014. If you have evidence of issues that can arise on the system we all use, then it should be posted for the greater community to understand. I believe for a lot of teams most of the details around FMS are based on what we "think" vs. what we "know". However, I can't change what I "think" without supporting data as I am sure you can appreciate.
I believe the OP has received their answer, and our conversation is just sidetracking now. If you have any evidence to help the community at large, I think it would be beneficial to post it; if not, we can take this conversation offline if you wish to continue. Please feel free to PM me if you wish.
Thanks.
Regards,
Kevin
techhelpbb
16-10-2014, 17:58
I am of the mindset that if you are posting as an authority or advocate for a solution, technology, or other reason, you should provide only the facts and let the user decide what they will do with the information. As a mentor, I try to be as factually correct in my posts as humanly possible. You don't know who is reading, or how something can be interpreted if you leave room for interpretation. If you are providing advice or opinion, then state as much, so as to not confuse anyone about what is fact vs. opinion. Your statements confused me and were open to some interpretation, which is why I offered what I thought would be clarification. I am not trying to offend anyone, and if I did I apologize. The goal is to help answer the OP's question, and also to be factual about what we have at hand, so that anyone using this post as a reference can make the right decision, now or in the future.
I have to say that when I read this I think back to the advertised bandwidth on the network and immediately recognize that multiple people from multiple regions have determined that the best way to get video to a driver's station from a robot reliably is to use quite a bit less than the advertised bandwidth in official sources.
So if this is about being factually correct and supported by evidence, we all have a problem. It clearly is not exclusive to what I wrote. The question is why I should do any more than I have, only to then (as I have for years) go clean up after it anyway, both as a mentor and a volunteer.
It is increasingly supported by evidence that such an effort is literally a waste of my time regardless of what nonsense is used to push me to expend the effort.
Let me take that a step further...for all of this...can anyone please provide a detailed and complete analysis of a field 'christmas tree' and the correct procedure to eliminate it?
Cause I see cycling the power of fielded robots at different moments in my future.
Furthermore, the whole 'who do you think you are to speak for FIRST bit' is old and without merit.
I specifically and directly said it was my opinion in several places.
I even took both FIRST and Team 11/193 off the hook.
Every year someone tries these tactics with me it gets predictable and old.
Kind of like trying to get video to driver's stations.
You state you were not saying that sending data over WiFi causes the buffer to fill, but that sending data over WiFi can prevent the cRIO from reading a DS packet. Please elaborate for us on what is going on here.
I refuse to let you devalue the core points by making this overcomplicated.
I ask you instead to set your Axis camera to 640x480, at the highest color depth you can find, minimal compression and 30 frames per second then send that to your driver's station and drive your robot on a competition field. Then come to me when you start having strange issues with your robot and tell me there's nothing that can happen over the WiFi sending video to a driver's station that can have an impact and end up with a disabled robot here and there.
Oh wait *SMACKS FOREHEAD* never mind I do that test every year at a competition.
All I have to do to test it this year is show up.
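For a rough sense of scale (ballpark figures only, not measurements from any particular field): a 640x480 MJPEG frame at minimal compression commonly lands somewhere around 30-60 KB, and 40 KB x 8 bits x 30 fps is roughly 9.6 Mbit/s, already past the advertised 7 Mbit/s cap before any other robot traffic. By contrast, the 320x240 stream discussed earlier, heavily compressed, might be closer to 8-15 KB per frame, or roughly 1.3-2.4 Mbit/s at 20 fps.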
This is the kind of information I want to convey. The message I get from your post is that sending video over WiFi is bad, and you correctly identify a problem, but you don't really give it the caveat it deserves. Whether you explicitly state it or not, it is very much implied, which I believe is the wrong message. Instead of using vague terms like "lots of data" or "real-time" video, I provided concrete values that work. If you were trying to state "try to avoid sending real-time data which approaches anything over 5 Mbit/s", for example, then the message would be much different, and it just didn't read that way to me. I believe that is a good recommendation for a person asking about the link limitations.
I have some really bad news for you.
If the published values are wrong.
If the channel bonding and settings on the fields change during the competition to 'adapt' (and they do).
I have more than noticed that and reported it.
In a heartbeat that specific recommendation can instantly fail.
So I can either confront FIRST about that and face it that's worthless....or....
If you don't want to have video problems sending video to your driver's station:
Try not to send video to your driver's station (pretty logical for someone as silly as me).
If you must send video to your driver's station then just do the best you can, and realistically that is all that you have done by halving the bandwidth you use.
If you don't believe me I await the first time your recommendation fails for you because I have seen what will happen.
All that it would take to make this fail as well is a subtle change in the field load balancer.
We sent from the BeagleBone to the cRIO: hot target status, left or right target, state of the BeagleBone, state of the camera, and some other values. This showed the drive team the health of the vision system as it was running. It was about 15 bytes of data, 20 times a second.
We sent from the cRIO to the BeagleBone: when the match started (to signal when to grab the hot target frame) and when the match reached 5 s (signaling a left-to-right hot goal switch). We could also send other cal values, which allowed us to tune our filter params from the driver station if we needed to. These were all async transmissions.
This does not sound like the kind of data that would be all that hard to move even over raw digital I/O.
You obviously have a fine grasp of TCP/IP mechanics so I am sure it's no big deal to send it and service it for your team.
Problem is, a lot of teams do not have as great a grasp on the subject.
I find it hard to tell teams to develop that while they tackle vision.
Seems to me like it is asking quite a bit - a great challenge if you can rise to it - or a pain if you stumble.
This is a valid solution. The only downside I see is how many I/O pins you need to use in order to do that. I/O is not scalable, but if it works for you, or for others, then by all means go ahead and use it. My advice would be to try Ethernet first, because I do not see a problem using it, and when done correctly you can have a robust, fully scalable vision system that you can use in future years no matter the challenge.
Firstly CAN had the same promise of being future proof.
There were not that many teams using it at the competitions I was at, and I have the records from them.
This is not a dig at CAN or the Jaguar - just saying I sometimes wonder if CAN is FIRST's Beta/VHS.
Secondly to make more pins without getting fancy you can use addressing and multiplexing.
One of the problems I see with FIRST not really requiring electronics knowledge is that it seems we use TCP/IP like a hammer and forget that it depends on simple digital concepts. I realize that students do not have to use discrete TTL/CMOS anymore, but I wonder if the logic or the fine details of TCP/IP are more difficult to grasp.
You know when I was in school - you learned digital logic first - then took college courses on the network physical layer and then later courses on TCP/IP. It almost sounds like you advocate the reverse.
I personally would like to see evidence of this, and I am sure other teams would too, because I think a lot of teams are under the impression that this is not true and that they can use the full potential of the LAN onboard the robot. Your evidence would correct teams' usage of the local network. The OP also asked this question in their original post.
My response:
I did not want this topic sidetracked because, as you can see, that is what is going on.
So I asked very early to take the details offline, or at least, into another topic.
If I continue to dig into this - like I do professionally - you will eventually get your answers at the expense of my time.
However if I am not *very* careful I might be providing knowledge that someone can abuse.
Further, as you have said, the D-Link is back this year.
It is too late to alter course because of the way FIRST gathers the KOP.
So what we have here is something that is just bad for me personally any way you look:
1. No field I can test on without disrupting something.
2. Lots of time/money further documenting issues for problems other professionals have seen.
3. A distraction from my mentoring for Team 11.
4. A distraction from my personal projects.
5. A distraction from my professional projects.
6. Something I will still be cleaning up when I volunteer.
7. The potential it gets abused.
8. Helping D-Link fix their commercial product at my personal expense.
9. All the monetary, social and political headaches that come with all of the above.
I can have all that - so that we can use TCP/IP like a hammer on every nail.
Hmmm....
Walks over and turns off my light switch.
Did not have to worry about a dead iPhone battery to turn off the light.
Sorry if that was rough on you/FIRST/the guy next door/the aliens in orbit/the NSA whatever.
Sometimes you just gotta say what is on your mind, especially when you help pay the bills.
[DISCLAIMER]
THIS SARCASM IS PROVIDED BY BRIAN CAUSE SOMETIMES PEOPLE GRIND MY GEARS.
BRIAN's SARCASM IS NOT NECESSARILY THE VIEW OF FIRST, TEAM 11, TEAM 193 OR THE PEOPLE THAT MAKE TIN FOIL.
SMILE CAUSE IT IS GOOD FOR YOU.
Team118Joseph
14-11-2014, 09:33
The BeagleBone Black is perfect for the job. If you want an example of some vision code you can download it from our website.
https://ccisdrobonauts.org/?p=robots
I got in contact with another team who seemed to like the "Pixy". They seemed to like it not only because it did vision but because it processed the vision itself with very little outside programming. We plan to use it next year; you can find it at http://www.amazon.com/Charmed-Labs-and-CMU-CMUcam5/dp/B00IUYUA80/ref=sr_1_1?ie=UTF8&qid=1415976208&sr=8-1&keywords=pixy or you can go to their website http://charmedlabs.com/default/. We have yet to try it, but he seems to really like it. It's definitely a start for vision and vision processing. Hope it helps!
I really don't know how good of an option the pixy will be for FRC though. There would be quite a bit of motion blur, I could imagine, and I do not think the Pixy allows you to calculate more advanced things such as distances.
For the same price, you could get an ARM Dev board that can run a full-blown suite of vision tools, such as OpenCV!
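For anyone starting from zero on one of those boards, the core of a retroreflective-target detector in OpenCV (C++, current constant names) is only a handful of calls. The HSV threshold and size-filter values below are placeholders you would tune for your own camera, LED ring, and lighting:

// Minimal sketch of an OpenCV target detector; threshold values are placeholders to tune.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // example camera index
    cv::Mat frame, hsv, mask;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Keep only bright green-ish pixels, typical of an LED-lit retroreflective target.
        cv::inRange(hsv, cv::Scalar(50, 100, 100), cv::Scalar(90, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours) {
            cv::Rect box = cv::boundingRect(c);
            if (box.area() > 200) {  // placeholder size filter to ignore noise
                std::cout << "candidate target at x=" << box.x + box.width / 2
                          << " y=" << box.y + box.height / 2 << "\n";
            }
        }
    }
    return 0;
}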