#31
Re: Optimal board for vision processing
Quote:
As long as you are reading from the queue faster than you write, and your code doesn't halt, you should never run into this problem on the cRIO. A properly threaded TCP or UDP communication protocol, programmed by the user for controller-to-off-board-processor traffic, can't overwhelm the network. We used a bi-directional TCP communication protocol sending data from our off-board processor at a rate of 20 times a second without any packet loss or communication issues in the 5 events we have played in 2014 so far. At the end of the day, as long as you can read data off the NIC faster than you can send (which should be easy to achieve), you should never have the problem above. It's that simple. The Rio is Linux-based and operates a bit differently, but it is still possible to run into this issue. The benefit of the Rio being Linux is that more users are familiar with Linux and can diagnose whether the stack is full. The user should be able to see the buffer state by reading Code:
/proc/net/tcp or /proc/net/udp The EINSTEIN report from 2012, when this issue was noted, is documented here: http://www3.usfirst.org/sites/defaul...n%20Report.pdf Quote:
What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as is expected. It is not unreasonable. We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this to be real-time, and we don't have issues; because of the small image size, with mostly black background, we are way under 3 Mbit/s of bandwidth (which is a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts with our off-board processor and ends with a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and routes based on MAC address, the pictures destined for the driver station should not flood the cRIO. A properly working switch, as the D-Link is advertised to be, only routes packets to individual ports, not all ports. If you have evidence of your claim to the contrary, please provide it. Quote:
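The `/proc/net/tcp` check mentioned earlier can be scripted. A minimal sketch, assuming the standard Linux proc(5) layout (the fifth column of each row is `tx_queue:rx_queue` in hex); the helper names here are illustrative, not from any team's actual code:

```python
# Sketch of checking socket buffer backlog on a Linux controller.
# Assumes the standard /proc/net/tcp layout documented in proc(5).

def parse_queue_sizes(proc_row):
    """Return (tx_queue, rx_queue) in bytes from one /proc/net/tcp row."""
    fields = proc_row.split()
    tx_hex, rx_hex = fields[4].split(":")  # 5th column is tx_queue:rx_queue
    return int(tx_hex, 16), int(rx_hex, 16)

def backlogged_sockets(path="/proc/net/tcp"):
    """Yield (local_address, rx_queue) for sockets the app isn't draining."""
    with open(path) as f:
        next(f)  # skip the header row
        for row in f:
            tx, rx = parse_queue_sizes(row)
            if rx > 0:
                yield row.split()[1], rx
```

A steadily growing `rx_queue` on your listening socket is exactly the "reading slower than you write" symptom described above.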
Quote:
I second what Jared states on camera and network settings. I will be releasing a comparison of RoboRio vs BeagleBone Black vs Jetson (CPU) vs Jetson (w/GPU) sometime in the near future, as I have access to all of those boards, and they are all Linux/ARM and can run the exact same code. We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams with dead robots in Auto because their camera was unplugged when the robot was put on the field, and the code started with a null reference. I realize our rationale is different from your post, which has a tone of more bad than good in my opinion, but I would definitely advocate for an off-board processor. Another reason we go with an off-board processor is because of what happened in 2012 and 2013, where certain events actually disabled streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea.
If you must rely on vision, doing the processing locally on the RoboRio or an off-board processor is a way to avoid this (because the data stays on the local LAN) and doesn't need to be transmitted to the DS. Although I am open to any evidence that this is not true for the D-Link DAP-1522, as is suggested in this thread. For these reliability reasons, even though the RoboRio probably holds enough juice for our needs, we most likely will still continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should only be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012 and ranked #1 in both of our in-season competitions. Our robot is and always will be developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or make an action quicker, not to replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately and the robot still operates to complete the match without much handicap. Regards, Kevin
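The fail-safe described here (a dead camera degrades to a backup auto routine instead of crashing on a null reference) can be sketched in a few lines. All names and the 0.5 s staleness window below are hypothetical, not any team's actual code:

```python
# Hypothetical sketch of graceful vision failure: a missing or stale
# vision result selects a backup routine rather than raising an error.

class VisionResult:
    def __init__(self, hot_goal_left, timestamp):
        self.hot_goal_left = hot_goal_left  # True if the left goal is hot
        self.timestamp = timestamp          # seconds, robot clock

def choose_auto_routine(latest_result, now, max_age=0.5):
    """Pick an auto routine; never dereference a dead vision system."""
    if latest_result is None:                    # camera unplugged / process dead
        return "backup_shoot_both"
    if now - latest_result.timestamp > max_age:  # stale data: treat as dead
        return "backup_shoot_both"
    return "shoot_left" if latest_result.hot_goal_left else "shoot_right"
```

The key design point is that the robot code only ever reads the latest result through one guarded function, so an unplugged camera at field setup can never produce a null dereference in Auto.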
#32
Re: Optimal board for vision processing
Quote:
2. Without a doubt, lots of people are not even going to read this and will try it anyway. 3. I've had this discussion over...and over...for years. The bottom line is, if someone asked how to test it, that is one thing. Simply throwing the details at them seems to tune them out (and really, that's a common human trait). I'll have to read your post again later when I have time, but offhand most of what you wrote there seems fine. Quote:
I wrote that sending real-time video over the WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to prevent a timed disable. I used simple language when I wrote it because I hoped to say it in a way less experienced people could understand. Quote:
Quote:
You reduced your video bandwidth so you could send it within the bandwidth FIRST actually has on the competition field. You did so because it worked. If you go back and look, you say 3 Mbit and others wrote 7 Mbit. So what do you think would happen if you used up 7 Mbit of bandwidth? It would be a problem. I find it difficult to tell people to read the manual when that manual tells them something less than transparent. Quote:
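The arithmetic behind the 3 Mbit vs 7 Mbit point can be sanity-checked with a rough motion-JPEG estimate. The compression ratios here are assumptions, since real JPEG frame sizes depend heavily on scene content and encoder settings:

```python
# Rough MJPEG bandwidth estimate (compression ratios are assumed;
# real frame sizes vary a lot with scene content).

def mjpeg_bandwidth_mbps(width, height, fps, bytes_per_pixel=3, compression=20):
    """Approximate stream bandwidth in megabits per second."""
    frame_bytes = width * height * bytes_per_pixel / compression
    return frame_bytes * fps * 8 / 1_000_000

# A heavily compressed 320x240 @ 20 fps stream sits under 3 Mbit/s,
# while lightly compressed 640x480 @ 30 fps blows far past 7 Mbit/s.
low = mjpeg_bandwidth_mbps(320, 240, 20)                  # roughly 1.8 Mbit/s
high = mjpeg_bandwidth_mbps(640, 480, 30, compression=5)  # roughly 44 Mbit/s
```

Quadrupling the pixel count, raising the frame rate, and easing off the compression each multiply the bandwidth, which is why the two camera configurations argued about in this thread land on opposite sides of the field limit.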
However, the ARP function the D-Link products implement is questionable. I provided links earlier if you would like to see what D-Link has to say about their own bridge function. I have to say that I am not inclined to waste lots of time or energy on proving things to FIRST. It seems to accomplish very little because there is no reasonable way for them to address some of these problems cost-effectively. Worse, it might be exploitable if I go into too much detail. Quote:
Is there really some reason you cannot have digital pins, or even simple binary data, for things like: 'move up more', 'move down more', 'on target', 'not on target'...? Quote:
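The "simple binary data" suggested here can be as small as one byte on the wire, leaving essentially no protocol to get wrong. A sketch; this bit layout is invented purely for illustration:

```python
# Illustrative one-byte status protocol: pack the whole aiming state
# into a single byte, so the receiver has nothing complex to parse.

ON_TARGET = 0b001
MOVE_UP   = 0b010
MOVE_DOWN = 0b100

def encode_status(on_target, move_up, move_down):
    """Pack three flags into one byte for transmission."""
    b = 0
    if on_target:
        b |= ON_TARGET
    if move_up:
        b |= MOVE_UP
    if move_down:
        b |= MOVE_DOWN
    return bytes([b])

def decode_status(data):
    """Unpack the flags on the receiving side."""
    b = data[0]
    return bool(b & ON_TARGET), bool(b & MOVE_UP), bool(b & MOVE_DOWN)
```

With a payload this small, even losing a packet costs nothing: the next byte a fraction of a second later carries the complete current state, so there is no stream to resynchronize.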
Quote:
Seems odd to me that the fix for not writing a simple protocol (or virtually no protocol at all) is to use a protocol where people often do not understand fine details like TCP congestion mechanisms. Even stranger when they tune out anyone who is trying to explain potential issues. So what are you sending from your vision coprocessor to your cRIO/RoboRio that you need to deal with all that? Quote:
Quote:
Just saying. Also, if you release the test code, perhaps we can try that against a laptop that would be legal on a FIRST robot. Quote:
Quote:
If I prove to you there can be an impact on the field network when you send data on 2 switch ports locally, what exactly do you think you are going to do about that? I did not say that this impact would detract from that local communication. Besides, you just told us the D-Link is the only option in town. So that means FIRST is past the point of changing course, because it is nearly November. Quote:
Brian

Last edited by techhelpbb : 16-10-2014 at 17:50.
#33
Re: Optimal board for vision processing
Quote:
Quote:
You state you were not saying that sending data over WiFi causes the buffer to fill, but that sending data over WiFi can prevent the cRIO from reading a DS packet. Please elaborate for us on what is going on here. Quote:
Quote:
Quote:
We sent from the cRIO to the BeagleBone when the match started (to signal when to grab the hot-target frame), and when the match reached 5s (signaling a left-to-right hot goal switch). We could also send other cal values, which allowed us to tune our filter params from the driver station if we needed to. These were all async transmissions. Quote:
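A minimal sketch of such one-shot async signals, using UDP on loopback so it runs anywhere; the port number and message strings are invented for illustration, not the team's actual protocol:

```python
# Hypothetical fire-and-forget event signals between a robot controller
# and a vision coprocessor. Port and message names are illustrative;
# loopback stands in here for the robot's local LAN.

import socket

COPROCESSOR = ("127.0.0.1", 5805)

def send_signal(sock, name):
    """One small datagram per event; nothing for the sender to block on."""
    sock.sendto(name.encode("ascii"), COPROCESSOR)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # controller side
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # coprocessor side
rx.bind(COPROCESSOR)
rx.settimeout(2.0)  # a lost packet must never hang the vision thread

send_signal(tx, "MATCH_START")  # autonomous began: grab the hot-goal frame
send_signal(tx, "HOT_SWITCH")   # t = 5s: hot goal switched sides
first, _ = rx.recvfrom(64)
second, _ = rx.recvfrom(64)
```

Because each datagram carries a complete, self-describing event, there is no stream state to corrupt, which fits the reliability argument made throughout this post: the vision side can die and restart without either end needing a reconnect handshake.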
Quote:
We had a lot of network issues in 2012. We got over most of them in 2013, and had virtually no issues in 2014. If you have evidence of issues that can arise on the system we all use, then it should be posted for the greater community to understand. I believe that for a lot of teams, most of the details around FMS are based on what we "think" vs. what we "know". However, I can't change what I "think" without supporting data, as I am sure you can appreciate. I believe the OP has received their answer, and our conversation is just sidetracking now. If you have any evidence to help the community at large, I think it would be beneficial to post it; if not, we can take this conversation offline if you wish to continue. Please feel free to PM me. Thanks. Regards, Kevin

Last edited by NotInControl : 16-10-2014 at 17:55.
#34
Re: Optimal board for vision processing
Quote:
So if this is about being factually correct and supported by evidence, we all have a problem. It clearly is not exclusive to what I wrote. The question is: why should I do any more than I have, so that I can then (as I have for years) go clean up after it anyway, both as a mentor and a volunteer? It is increasingly supported by evidence that such an effort is literally a waste of my time, regardless of what nonsense is used to push me to expend the effort. Let me take that a step further: for all of this, can anyone please provide a detailed and complete analysis of a field 'christmas tree' and the correct procedure to eliminate it? Because I see cycling the power of fielded robots at different moments in my future. Furthermore, the whole 'who do you think you are to speak for FIRST' bit is old and without merit. I specifically and directly said it was my opinion in several places. I even took both FIRST and Team 11/193 off the hook. Every year someone tries these tactics with me; it gets predictable and old. Kind of like trying to get video to driver's stations. Quote:
I ask you instead to set your Axis camera to 640x480, at the highest color depth you can find, minimal compression and 30 frames per second then send that to your driver's station and drive your robot on a competition field. Then come to me when you start having strange issues with your robot and tell me there's nothing that can happen over the WiFi sending video to a driver's station that can have an impact and end up with a disabled robot here and there. Oh wait *SMACKS FOREHEAD* never mind I do that test every year at a competition. All I have to do to test it this year is show up. Quote:
If the published values are wrong, or if the channel bonding on the fields and the settings change during the competition to 'adapt' (and they do; I have more than noticed that and reported it), then that specific recommendation can instantly fail. So I can either confront FIRST about that (and face it, that's worthless)....or.... If you don't want to have video problems sending video to your driver's station: try not to send video to your driver's station (pretty logical for someone as silly as me). If you must send video to your driver's station, then just do the best you can, and realistically that is all you have done by halving the bandwidth you use. If you don't believe me, I await the first time your recommendation fails for you, because I have seen what will happen. All it would take to make this fail as well is a subtle change in the field load balancer. Quote:
You obviously have a fine grasp of TCP/IP mechanics so I am sure it's no big deal to send it and service it for your team. Problem is - a lot of teams do not have as a great a grasp on the subject. I find it hard to tell teams to develop that while they tackle vision. Seems to me like it is asking quite a bit - a great challenge if you can rise to it - or a pain if you stumble. Quote:
There were not that many teams using it at the competitions I was at, and I have the records from them. This is not a dig at CAN or the Jaguar; I just sometimes wonder if CAN is FIRST's Beta/VHS. Secondly, to make more pins without getting fancy, you can use addressing and multiplexing. One of the problems I see with FIRST not really requiring electronics knowledge is that we seem to use TCP/IP like a hammer and forget that it depends on simple digital concepts. I realize that students do not have to use discrete TTL/CMOS anymore, but I wonder whether the logic or the finite issues of TCP/IP are more difficult to grasp. You know, when I was in school you learned digital logic first, then took college courses on the network physical layer, and then later courses on TCP/IP. It almost sounds like you advocate the reverse. Quote:
I did not want this topic sidetracked, because as you can see, that is what is going on. So I asked very early to take the details offline, or at least into another topic. If I continue to dig into this, like I do professionally, you will eventually get your answers at the expense of my time. However, if I am not *very* careful, I might be providing knowledge that someone can abuse. Further, as you have said, the D-Link is back this year. It is too late to alter course because of the way FIRST gathers the KOP. So what we have here is something that is just bad for me personally any way you look at it:
1. No field I can test on without disrupting something.
2. Lots of time/money further documenting issues for problems other professionals have seen.
3. A distraction from my mentoring for Team 11.
4. A distraction from my personal projects.
5. A distraction from my professional projects.
6. Something I will still be cleaning up when I volunteer.
7. The potential it gets abused.
8. Helping D-Link fix their commercial product at my personal expense.
9. All the monetary, social and political headaches that come with all of the above.
I can have all that, so that we can use TCP/IP like a hammer on every nail. Hmmm.... Walks over and turns off my light switch. Did not have to worry about a dead iPhone battery to turn off the light. Sorry if that was rough on you/FIRST/the guy next door/the aliens in orbit/the NSA, whatever. Sometimes you just gotta say what is on your mind, especially when you help pay the bills. [DISCLAIMER] THIS SARCASM IS PROVIDED BY BRIAN CAUSE SOMETIMES PEOPLE GRIND MY GEARS. BRIAN'S SARCASM IS NOT NECESSARILY THE VIEW OF FIRST, TEAM 11, TEAM 193 OR THE PEOPLE THAT MAKE TIN FOIL. SMILE CAUSE IT IS GOOD FOR YOU.

Last edited by techhelpbb : 17-10-2014 at 15:04.
#35
Re: Optimal board for vision processing
The BeagleBone Black is perfect for the job. If you want an example of some vision code, you can download it from our website.
https://ccisdrobonauts.org/?p=robots
#36
I got in contact with another team who seemed to like the Pixy. They liked it not only because it did vision, but because it processed the vision itself with very little outside programming. We plan to use it next year. You can find it at http://www.amazon.com/Charmed-Labs-a...&keywords=pixy or you can go to their website, http://charmedlabs.com/default/. We have yet to try it, but they seem to really like it. It's definitely a start for vision and vision processing. Hope it helps!
#37
Re: Optimal board for vision processing
I really don't know how good of an option the Pixy will be for FRC, though. There would be quite a bit of motion blur, I could imagine, and I do not think the Pixy allows you to calculate more advanced things such as distances.
For the same price, you could get an ARM dev board that can run a full-blown suite of vision tools, such as OpenCV!
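For example, range estimation needs only the target's apparent size plus basic pinhole-camera geometry, which a full contour pipeline gives you but a bare blob tracker does not. A sketch with illustrative numbers (the field of view and target size are assumptions, not any specific camera's spec):

```python
import math

# Pinhole-camera range estimate from the target's apparent height.
# FOV and target dimensions in any real use are calibration values;
# the numbers used in the comments below are purely illustrative.

def distance_to_target(target_px, image_px, vertical_fov_deg, target_height_m):
    """Estimate range in meters from the target's height in pixels."""
    # Focal length in pixels, derived from the vertical field of view.
    focal_px = (image_px / 2) / math.tan(math.radians(vertical_fov_deg / 2))
    # Similar triangles: real_height / distance == pixel_height / focal.
    return target_height_m * focal_px / target_px
```

With a 480-pixel-tall image and a 90-degree vertical FOV, a 1 m target that spans 120 pixels works out to roughly 2 m away; the same relation inverted gives pixel height from distance, which is handy for sanity-checking a pipeline.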