Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Optimal board for vision processing (http://www.chiefdelphi.com/forums/showthread.php?t=130817)

NotInControl 16-10-2014 13:25

Re: Optimal board for vision processing
 
Quote:

Originally Posted by techhelpbb (Post 1404532)
I am not clear on the TCP/IP stack performance of the RoboRio, but on the cRIO, if you used the Ethernet on the robot to interface your coprocessor (for vision in this case), even if the goal was to send information only to the cRIO locally, you could overwhelm the cRIO. There is a fine write-up of the details in the Einstein report. So just be careful: the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could have this issue.

I feel like this information is misleading to some and might deter people from trying a solution that works for them, because they feel the hardware we have cannot support it. What is your definition of "lots of data"? This just doesn't happen the way you make it seem with statements like "even if the goal was to send information only to the cRIO locally, you could overwhelm the cRIO" or "the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could have this issue". The problem is not with the cRIO or with the RoboRio; it is with the operating systems they run and the network drivers included with those systems. The cRIO runs VxWorks 6.3. That OS has a single FIFO network buffer for all sockets. It is possible to fill up the network stack, causing any new packets to be dropped, but only if you are constantly sending packets to the controller's NIC without reading them off the queue. This happened to our good friends on Einstein that year, because a condition in their code could put it into an infinite loop, so the code that read data off the queue (data sent from their BeagleBone White) never executed.

As long as you are reading from the queue faster than you write, and your code doesn't halt, you should never run into this problem on the cRIO. A properly threaded TCP or UDP communication protocol, written by the user for controller-to-off-board-processor communication, can't overwhelm the network. We used a bi-directional TCP protocol sending data from our off-board processor at a rate of 20 times a second without any packet loss or communication issues in the 5 events we have played in 2014 so far.
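
As a rough illustration of that pattern (a sketch, not our actual robot code; the port number and buffer size are arbitrary examples), a non-blocking receive loop can drain everything queued each cycle and keep only the newest packet, so the OS buffer never backs up:

Code:

import socket

# Non-blocking UDP receiver sketch: drain the socket every loop iteration
# and keep only the most recent datagram. Port 5800 is an arbitrary example.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5800))
sock.setblocking(False)

def poll_latest():
    """Return the newest datagram received since the last call, or None."""
    latest = None
    while True:
        try:
            latest, _addr = sock.recvfrom(1024)
        except BlockingIOError:   # nothing left in the queue
            return latest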

At the end of the day, as long as you can read data off the NIC faster than you send it (which should be easy to achieve), you should never have this problem. It's that simple. The Rio is Linux-based and operates a bit differently, but it is still possible to run into this issue. The benefit of the Rio being Linux is that more users are familiar with Linux and can diagnose whether the stack is full. The user should be able to see the buffer state by reading

Code:

/proc/net/tcp

or

/proc/net/udp

Those files provide a ton of info on the corresponding protocol, including the number of bytes queued in-bound and out-bound on each socket. You can read the spec to see what each field of data means. http://search.cpan.org/~salva/Linux-...roc/Net/TCP.pm
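
For example, a quick script along these lines (a sketch, assuming the standard Linux /proc layout) prints the transmit and receive queue depths for every UDP socket:

Code:

# Print tx/rx queue depths (in bytes) for every UDP socket on a Linux target.
# Field positions follow the standard /proc/net/udp layout.
def dump_udp_queues(path="/proc/net/udp"):
    with open(path) as f:
        next(f)                      # skip the header row
        for line in f:
            fields = line.split()
            local = fields[1]        # hex-encoded local address:port
            tx_hex, rx_hex = fields[4].split(":")
            print("socket %s  tx_queue=%d  rx_queue=%d"
                  % (local, int(tx_hex, 16), int(rx_hex, 16)))

if __name__ == "__main__":
    dump_udp_queues()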


The Einstein report from 2012, where this was noted, is documented here: http://www3.usfirst.org/sites/defaul...n%20Report.pdf

Quote:

Originally Posted by techhelpbb (Post 1404532)
Not sending real-time video payloads over the WiFi does not remove the possibility that you send so much data to the cRIO/RoboRio via the Ethernet port that you still prevent it from getting FMS packets.

Why do you state that sending real-time video can aid in flooding the cRIO queue, even though the packet destination is the driver station and not the cRIO/RoboRio? What you are describing sounds like the D-Link's ports act like a hub instead of a switch; do you have evidence of this?

What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as is expected. That is not unreasonable.

We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this to be real-time, and we don't have issues; because of the small image size, with a mostly black background, we are well under 3 Mbit/s of bandwidth (a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts with our off-board processor and ends with a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and forwards based on MAC address, the pictures destined for the driver station should not flood the cRIO. A properly working switch, as the D-Link is advertised to be, only forwards packets to the relevant port, not all ports. If you have evidence of your claim to the contrary, please provide it.
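
For rough scale, here is a back-of-envelope estimate (the assumed JPEG frame size is typical for a mostly black 320x240 image, not a measured value; it varies with compression settings):

Code:

# Back-of-envelope bandwidth estimate for the stream described above.
fps = 20
approx_jpeg_bytes = 15000        # assumed size of one mostly black 320x240 JPEG
mbit_per_s = approx_jpeg_bytes * 8 * fps / 1e6
print("approx. %.1f Mbit/s" % mbit_per_s)   # ~2.4 Mbit/s, comfortably under 3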

Quote:

Originally Posted by techhelpbb (Post 1404532)
If one interfaced to the cRIO/RoboRio over the digital I/O, for example, then the coprocessor could send all the data it wants; the cRIO/RoboRio might not get it all from the coprocessor, but it will continue to get FMS packets, so your robot does not suddenly stop. Effectively this gives the coprocessor a lower priority than your FMS packets (and that is likely the situation you really want).

If you plan on bit-banging data over I/O, even multiple I/O lines, your throughput will suffer. This may work for small amounts of data, but it will be a hindrance for larger payloads or for scalability. I also believe bi-directional communication is more complicated over I/O than over Ethernet. If flooding the network queue were still a concern even after designing a proper communication protocol, I would recommend reducing the time-to-live on the packets, so that if they do get queued up they are not sitting in the queue for more than 2 seconds.

Quote:

Originally Posted by techhelpbb (Post 1404532)
If the RoboRio stops using the Ethernet port for the field radio then this may be less an issue because the FMS packets would not be competing on the Ethernet port (they would be on a separate network stream). I know some alpha testing for the RoboRio was around the Asus USB-N53 Dual-band Wireless N600. At least then the issue is purely one of the RoboRio software keeping up with the combined traffic from the Ethernet port and the USB networking device (only real testing would show how well that works out and for that you need robots on a competition field, test equipment and things to throw data at the RoboRio (laptops, Jetson boards, etc.)).

As an alpha and beta tester, I can say the ASUS proved to be too problematic and will not be approved for use. We will continue to use the D-Link for at least the 2015 competition season.


I second what Jared states on camera and network settings. I will be releasing a comparison of the RoboRio vs. BeagleBone Black vs. Jetson (CPU) vs. Jetson (w/GPU) sometime in the near future, as I have access to all of those boards; they are all Linux/ARM and can run the exact same code.


We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams with dead robots in auto because their camera was unplugged when the robot was put on the field, and the code started with a null reference.

I realize our rationale is different from your post, which has a tone of more bad than good in my opinion, but I would definitely advocate for an off-board processor. Another reason we go with an off-board processor is what happened in 2012 and 2013, when certain events actually disabled camera streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea. If you must rely on vision, doing the processing locally on the RoboRio or on an off-board processor is a way to avoid this (because the data stays on the local LAN and doesn't need to be transmitted to the DS), although I am open to any evidence that this is not true for the D-Link DAP 1522, as is suggested in this thread.

For these reliability reasons, even though the RoboRio probably has enough horsepower for our needs, we most likely will continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012 and ranked 1st in both of our in-season competitions.

Our robot is, and always will be, developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or to make an action quicker, not to replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately and the robot can still complete the match without much of a handicap.

Regards,
Kevin

techhelpbb 16-10-2014 14:15

Re: Optimal board for vision processing
 
Quote:

Originally Posted by NotInControl (Post 1404560)
I feel like this information is misleading to some and might deter people from trying a solution that works for them, because they feel the hardware we have cannot support it.

1. I've referenced the Einstein report several times during this topic.
2. Without a doubt lots of people are not even going to read this and try it anyway.
3. I've had this discussion over...and over...for years.

The bottom line is, if someone asked how to test it, that is one thing.
Simply throwing the details at them seems to make them tune out (and really that's a common human trait).

I will have to read your post again later when I have time, but offhand most of what you wrote there seems fine.

Quote:

Originally Posted by NotInControl (Post 1404560)
Why do you state that sending real-time video can aid in flooding the cRIO queue, even though the packet destination is the driver station and not the cRIO/RoboRio? What you are describing sounds like the D-Link's ports act like a hub instead of a switch; do you have evidence of this?

If you read it again, I did not write that real-time video over WiFi can flood the cRIO queue.
I wrote that sending real-time video over the WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to avoid a timed disable.
I used simple language when I wrote it because I hoped to say it in a way less experienced people could understand.

Quote:

Originally Posted by NotInControl (Post 1404560)
What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as is expected. That is not unreasonable.

I think you misunderstood the previous point so this does not make sense for me to address.

Quote:

Originally Posted by NotInControl (Post 1404560)
We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this to be real-time, and we don't have issues; because of the small image size, with a mostly black background, we are well under 3 Mbit/s of bandwidth (a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts with our off-board processor and ends with a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and forwards based on MAC address, the pictures destined for the driver station should not flood the cRIO.

Again, the direction you are going with this does not match what I communicated.
You reduced your video bandwidth so you could fit within the bandwidth FIRST actually has on the competition field.
You did so because it worked.

If you go back and look, you say 3 Mbit/s and others wrote 7 Mbit/s.
So what do you think would happen if you used up 7 Mbit/s of bandwidth? It would be a problem.

I find it difficult to tell people to read the manual when that manual tells them something less than transparent.

Quote:

A properly working switch, as the D-Link is advertised to be, only forwards packets to the relevant port, not all ports. If you have evidence of your claim to the contrary, please provide it.
I do not see why I should provide evidence of something I never wrote.
However the ARP function the D-Link products implement is questionable.
I provided links earlier if you would like to see what D-Link has to say about their own bridge function.

I have to say that I am not inclined to waste lots of time or energy on proving things to FIRST.
It seems to accomplish very little because there is no reasonable way for them to address some of these problems cost effectively.
Worse it might be exploitable if I go into too much detail.

Quote:

If you plan on bit-banging data over I/O, even multiple I/O lines, your throughput will suffer. This may work for small amounts of data, but it will be a hindrance for larger payloads or for scalability.
I am very curious what data your video coprocessor is sending that is so large it needs high throughput. What are you sending to the cRIO/RoboRio if the coprocessor is doing the vision part?

Is there really some reason you cannot have digital pins or even simple binary data for things like: 'move up more', 'move down more', 'on target', 'not on target'...?
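
As a sketch of what I mean (the channel numbers and the write_dio() helper are placeholders for whatever digital-output call your platform provides), four such states fit on just two lines:

Code:

# Illustrative only: encode four vision states on two digital output lines.
STATES = {
    "no_target":  (0, 0),
    "on_target":  (1, 1),
    "move_up":    (0, 1),
    "move_down":  (1, 0),
}

def publish_state(name, write_dio):
    # write_dio(channel, level) stands in for your platform's DIO call.
    low_bit, high_bit = STATES[name]
    write_dio(0, low_bit)    # channel numbers are arbitrary examples
    write_dio(1, high_bit)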

Quote:

I also believe bi-directional communication is more complicated over I/O than over Ethernet.
If you had a lot to communicate, possibly. Again, what are you communicating that requires a whole protocol?

Quote:

If flooding the network queue were still a concern even after designing a proper communication protocol, I would recommend reducing the time-to-live on the packets, so that if they do get queued up they are not sitting in the queue for more than 2 seconds.
Sure....but on the other hand...TCP/IP is not exactly a simple protocol either.
It seems odd to me that the fix for not writing a simple (or virtually nonexistent) protocol is to use a protocol whose fine details, like TCP congestion mechanisms, people often do not understand.
It is even stranger when they tune out anyone who is trying to explain potential issues.

So what are you sending from your vision coprocessor to your cRIO/RoboRio that you need to deal with all that?

Quote:

As an alpha and beta tester, I can say the ASUS proved to be too problematic and will not be approved for use. We will continue to use the D-Link for at least the 2015 competition season.
First I heard of it. Thanks.

Quote:

I second what Jared states on camera and network settings. I will be releasing a comparison of the RoboRio vs. BeagleBone Black vs. Jetson (CPU) vs. Jetson (w/GPU) sometime in the near future, as I have access to all of those boards; they are all Linux/ARM and can run the exact same code.
Me likely data :). Just saying.

Also if you release the test code perhaps we can try that against a laptop that would be legal on a FIRST robot.

Quote:

We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams with dead robots in auto because their camera was unplugged when the robot was put on the field, and the code started with a null reference.

I realize our rationale is different from your post, which has a tone of more bad than good in my opinion, but I would definitely advocate for an off-board processor.
I do not understand how my previous post, which suggested making vision systems modular, was interpreted this way.

Quote:

Another reason we go with an off-board processor is what happened in 2012 and 2013, when certain events actually disabled camera streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea. If you must rely on vision, doing the processing locally on the RoboRio or on an off-board processor is a way to avoid this (because the data stays on the local LAN and doesn't need to be transmitted to the DS), although I am open to any evidence that this is not true for the D-Link DAP 1522, as is suggested in this thread.
I am still utterly perplexed by this.
If I prove to you there can be an impact on the field network when you send data on 2 switch ports locally, what exactly do you think you are going to do about that?
I did not say that this impact would detract from that local communication.

Besides you just told us the D-Link is the only option in town.
So that means FIRST is past the point of changing course because it is nearly November.

Quote:

For these reliability reasons, even though the RoboRio probably has enough horsepower for our needs, we most likely will continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012 and ranked 1st in both of our in-season competitions.

Our robot is, and always will be, developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or to make an action quicker, not to replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately and the robot can still complete the match without much of a handicap.

Regards,
Kevin
See nothing else to address there.

Brian

NotInControl 16-10-2014 17:42

Re: Optimal board for vision processing
 
Quote:

Originally Posted by techhelpbb (Post 1404564)
1. I've referenced the Einstein report several times during this topic.
2. Without a doubt lots of people are not even going to read this and try it anyway.
3. I've had this discussion over...and over...for years.

The bottom line is, if someone asked how to test it, that is one thing.
Simply throwing the details at them seems to make them tune out (and really that's a common human trait).

I am of the mindset that if you are posting as an authority or advocate for a solution, technology, or anything else, you should provide only the facts and let the user decide what to do with the information. As a mentor, I try to be as factually correct in my posts as humanly possible. You don't know who is reading, or how something will be interpreted if you leave room for interpretation. If you are providing advice or opinion, then say so, so as not to confuse anyone about what is fact vs. opinion. Your statements confused me and were open to some interpretation, which is why I offered what I thought would be clarification. I am not trying to offend anyone, and if I did, I apologize. The goal is to help answer the OP's question and to be factual about what we have at hand, so that anyone using this post as a reference can make the right decision, now or in the future.

Quote:

Originally Posted by techhelpbb (Post 1404564)
If you read it again, I did not write that real-time video over WiFi can flood the cRIO queue.
I wrote that sending real-time video over the WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to avoid a timed disable.

What other network issues can arise on the cRIO that will stop it from reading DS packets, if not a filled buffer? As far as I am aware, the thread priorities on the cRIO are set such that the DS protocol has the highest priority and the user thread is lower. This means that even if the robot code is in an infinite loop, it should still be able to read and act on a DS packet. The only way I know of to STOP the cRIO from reading DS packets is to flood the network buffer with USER packets, in which case DS packets are thrown away by the NIC because there is no room for them. The robot cannot act on a DS packet because it never receives it.

You state that you were not saying sending data over WiFi causes the buffer to fill, but that sending data over WiFi can prevent the cRIO from reading a DS packet. Please elaborate on what is going on here.


Quote:

Originally Posted by techhelpbb (Post 1404564)
You reduced your video bandwidth so you could fit within the bandwidth FIRST actually has on the competition field.
So what do you think would happen if you used up 7 Mbit/s of bandwidth?

This is the kind of information I want to convey. The message I take from yours is that sending video over WiFi is bad; you correctly identify a problem, but don't really give it the caveat it deserves. Whether you explicitly state it or not, it is very much implied, which I believe is the wrong message. Instead of using subjective terms like "lots of data" or "real-time" video, I provided concrete values that work. If you were trying to say, for example, "try to avoid sending real-time data that approaches anything over 5 Mbit/s," then the message would have been much different; it just didn't read that way to me. I believe that would be a good recommendation for a person asking about the link limitations.

Quote:

Originally Posted by techhelpbb (Post 1404564)
I do not see why I should provide evidence of something I never wrote.
However the ARP function the D-Link products implement is questionable.
I provided links earlier if you would like to see what D-Link has to say about their own bridge function.

It was implied by your statement that sending data over WiFi can hinder the cRIO's ability to receive DS packets. I do not see how that is possible unless you were flooding your network with broadcast packets. In a properly switched network, as the D-Link provides on the robot, the two traffic flows should be mutually exclusive. Your statement suggests that packets sent over WiFi somehow make their way into the queue of the cRIO's NIC and help fill it up, because that is the confirmed way to stop cRIO comms, unless you have evidence of another network anomaly that causes the cRIO to lose DS packets when data is transmitted over WiFi.


Quote:

Originally Posted by techhelpbb (Post 1404564)
I am very curious what data your video coprocessor is sending that is so large it needs high throughput. What are you sending to the cRIO/RoboRio if the coprocessor is doing the vision part?

From the BeagleBone to the cRIO we sent hot-target status, left or right target, state of the BeagleBone, state of the camera, and some other values. This showed the drive team the health of the vision system as it was running. It was about 15 bytes of data, 20 times a second.

From the cRIO to the BeagleBone we sent when the match started (to signal when to grab the hot-target frame) and when the match reached 5 s (signaling a left-to-right hot-goal switch). We could also send other cal values, which allowed us to tune our filter params from the driver station if we needed to. These were all async transmissions.
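
As a sketch of what a message that small can look like (the field names, sizes, and format string here are illustrative, not our actual wire format):

Code:

import struct

# Hypothetical fixed-size status message: hot-goal flag, target side,
# BeagleBone state, camera state, frame counter, and image age in seconds.
STATUS_FMT = "<BBBBIf"           # packs to 12 bytes, little-endian

def pack_status(hot, side, bone_state, cam_state, frame_count, age_s):
    return struct.pack(STATUS_FMT, hot, side, bone_state, cam_state,
                       frame_count, age_s)

def unpack_status(payload):
    return struct.unpack(STATUS_FMT, payload)

# At 20 Hz this is only a few hundred bytes per second on the wire.
assert struct.calcsize(STATUS_FMT) == 12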

Quote:

Originally Posted by techhelpbb (Post 1404564)
Is there really some reason you cannot have digital pins or even simple binary data for things like: 'move up more', 'move down more', 'on target', 'not on target'...?

This is a valid solution. The only downside I see is how many I/O pins you need to use in order to do that. I/O is not scalable, but if it works for you, or for others, then by all means go ahead and use it. My advice would be to try Ethernet first, because I do not see a problem using it, and when done correctly you can have a robust, fully scalable vision system that you can reuse in future years no matter the challenge.

Quote:

Originally Posted by techhelpbb (Post 1404564)
I am still utterly perplexed by this. If I prove to you there can be an impact on the field network when you send data on 2 switch ports locally, what exactly do you think you are going to do about that?

I personally would like to see evidence of this, and I am sure other teams would too, because I think a lot of teams are under the impression that this is not true and that they can use the full potential of the LAN onboard the robot. Your evidence would correct teams' usage of the local network. The OP also asked this question in their original post.

We had a lot of network issues in 2012. We got over most of them in 2013, and had virtually no issues in 2014. If you have evidence of issues that can arise on the system we all use, then it should be posted for the greater community to understand. I believe for a lot of teams most of the details around FMS are based on what we "think" vs. what we "know". However, I can't change what I "think" without supporting data as I am sure you can appreciate.

I believe the OP has received their answer, and our conversation is just sidetracking now. If you have any evidence to help the community at large, I think it would be beneficial to post it; if not, we can take this conversation offline if you wish to continue. Please feel free to PM me.

Thanks.

Regards,
Kevin

techhelpbb 16-10-2014 17:58

Re: Optimal board for vision processing
 
Quote:

Originally Posted by NotInControl (Post 1404615)
I am of the mindset that if you are posting as an authority or advocate for a solution, technology, or anything else, you should provide only the facts and let the user decide what to do with the information. As a mentor, I try to be as factually correct in my posts as humanly possible. You don't know who is reading, or how something will be interpreted if you leave room for interpretation. If you are providing advice or opinion, then say so, so as not to confuse anyone about what is fact vs. opinion. Your statements confused me and were open to some interpretation, which is why I offered what I thought would be clarification. I am not trying to offend anyone, and if I did, I apologize. The goal is to help answer the OP's question and to be factual about what we have at hand, so that anyone using this post as a reference can make the right decision, now or in the future.

I have to say that when I read this, I think back to the advertised bandwidth on the network and immediately recognize that multiple people from multiple regions have determined that the best way to reliably get video from a robot to a driver's station is to use quite a bit less than the bandwidth advertised in official sources.

So if this is about being factually correct and supported by evidence, we all have a problem; it clearly is not exclusive to what I wrote. The question is why I should do any more than I have, only to then (as I have for years) go clean up after it anyway, both as a mentor and a volunteer.

It is increasingly supported by evidence that such an effort is literally a waste of my time regardless of what nonsense is used to push me to expend the effort.

Let me take that a step further...for all of this...can anyone please provide a detailed and complete analysis of a field 'Christmas tree' and the correct procedure to eliminate it?
Cause I see cycling the power of fielded robots at different moments in my future.

Furthermore, the whole 'who do you think you are to speak for FIRST' bit is old and without merit.
I specifically and directly said it was my opinion in several places.
I even took both FIRST and Team 11/193 off the hook.

Every year someone tries these tactics with me, and it gets predictable and old.
Kind of like trying to get video to driver's stations.

Quote:

Originally Posted by NotInControl (Post 1404615)
You state that you were not saying sending data over WiFi causes the buffer to fill, but that sending data over WiFi can prevent the cRIO from reading a DS packet. Please elaborate on what is going on here.

I refuse to let you devalue the core points by making this overcomplicated.

I ask you instead to set your Axis camera to 640x480, at the highest color depth you can find, minimal compression, and 30 frames per second, then send that to your driver's station and drive your robot on a competition field. Then come to me when you start having strange issues with your robot and tell me that nothing sent over the WiFi to a driver's station can have an impact and end up with a disabled robot here and there.

Oh wait *SMACKS FOREHEAD* never mind I do that test every year at a competition.
All I have to do to test it this year is show up.

Quote:

Originally Posted by NotInControl (Post 1404615)
This is the kind of information I want to convey. The message I take from yours is that sending video over WiFi is bad; you correctly identify a problem, but don't really give it the caveat it deserves. Whether you explicitly state it or not, it is very much implied, which I believe is the wrong message. Instead of using subjective terms like "lots of data" or "real-time" video, I provided concrete values that work. If you were trying to say, for example, "try to avoid sending real-time data that approaches anything over 5 Mbit/s," then the message would have been much different; it just didn't read that way to me. I believe that would be a good recommendation for a person asking about the link limitations.

I have some really bad news for you.
The published values can be wrong, and the channel bonding and settings on the fields change during the competition to 'adapt' (and they do).
I have more than noticed that and reported it.

In a heartbeat that specific recommendation can instantly fail.
So I can either confront FIRST about that and face it that's worthless....or....

If you don't want to have video problems sending video to your driver's station:
Try not to send video to your driver's station (pretty logical for someone as silly as me).
If you must send video to your driver's station then just do the best you can, and realistically that is all that you have done by halving the bandwidth you use.

If you don't believe me, I await the first time your recommendation fails for you, because I have seen what will happen.
All that it would take to make this fail as well is a subtle change in the field load balancer.

Quote:

Originally Posted by NotInControl (Post 1404615)
From the BeagleBone to the cRIO we sent hot-target status, left or right target, state of the BeagleBone, state of the camera, and some other values. This showed the drive team the health of the vision system as it was running. It was about 15 bytes of data, 20 times a second.

From the cRIO to the BeagleBone we sent when the match started (to signal when to grab the hot-target frame) and when the match reached 5 s (signaling a left-to-right hot-goal switch). We could also send other cal values, which allowed us to tune our filter params from the driver station if we needed to. These were all async transmissions.

This does not sound like the kind of data that would be all that hard to move even over raw digital I/O.
You obviously have a fine grasp of TCP/IP mechanics, so I am sure it's no big deal to send it and service it for your team.
The problem is that a lot of teams do not have as great a grasp of the subject.

I find it hard to tell teams to develop that while they tackle vision.
Seems to me like it is asking quite a bit - a great challenge if you can rise to it - or a pain if you stumble.

Quote:

Originally Posted by NotInControl (Post 1404615)
This is a valid solution. The only downside I see is how many I/O pins you need to use in order to do that. I/O is not scalable, but if it works for you, or for others, then by all means go ahead and use it. My advice would be to try Ethernet first, because I do not see a problem using it, and when done correctly you can have a robust, fully scalable vision system that you can reuse in future years no matter the challenge.

Firstly, CAN had the same promise of being future-proof.
There were not that many teams using it at the competitions I was at, and I have the records from them.
This is not a dig at CAN or the Jaguar - just saying I sometimes wonder if CAN is FIRST's Beta/VHS.

Secondly, to make more pins available without getting fancy, you can use addressing and multiplexing.

One of the problems I see with FIRST not really requiring electronics knowledge is that it seems we use TCP/IP like a hammer and forget that it depends on simple digital concepts. I realize that students do not have to use discrete TTL/CMOS anymore, but I wonder whether the logic or the finer points of TCP/IP are more difficult to grasp.

You know when I was in school - you learned digital logic first - then took college courses on the network physical layer and then later courses on TCP/IP. It almost sounds like you advocate the reverse.

Quote:

Originally Posted by NotInControl (Post 1404615)
I personally would like to see evidence of this, and I am sure other teams would too, because I think a lot of teams are under the impression that this is not true and that they can use the full potential of the LAN onboard the robot. Your evidence would correct teams' usage of the local network. The OP also asked this question in their original post.

My response:

I did not want this topic sidetracked, because as you can see that is what is going on.
So I asked very early to take the details offline, or at least, into another topic.

If I continue to dig into this - like I do professionally - you will eventually get your answers at the expense of my time.
However if I am not *very* careful I might be providing knowledge that someone can abuse.

Further, as you have said, the D-Link is back this year.
It is too late to alter course because of the way FIRST gathers the KOP.

So what we have here is something that is just bad for me personally any way you look:

1. No field I can test on without disrupting something.
2. Lots of time/money further documenting issues for problems other professionals have seen.
3. A distraction from my mentoring for Team 11.
4. A distraction from my personal projects.
5. A distraction from my professional projects.
6. Something I will still be cleaning up when I volunteer.
7. The potential it gets abused.
8. Helping D-Link fix their commercial product at my personal expense.
9. All the monetary, social, and political headaches that come with all of the above.

I can have all that - so that we can use TCP/IP like a hammer on every nail.
Hmmm....

Walks over and turns off my light switch.
Did not have to worry about a dead iPhone battery to turn off the light.

Sorry if that was rough on you/FIRST/the guy next door/the aliens in orbit/the NSA whatever.
Sometimes you just gotta say what is on your mind, especially when you help pay the bills.

[DISCLAIMER]
THIS SARCASM IS PROVIDED BY BRIAN CAUSE SOMETIMES PEOPLE GRIND MY GEARS.
BRIAN's SARCASM IS NOT NECESSARILY THE VIEW OF FIRST, TEAM 11, TEAM 193 OR THE PEOPLE THAT MAKE TIN FOIL.
SMILE CAUSE IT IS GOOD FOR YOU.

Team118Joseph 14-11-2014 09:33

Re: Optimal board for vision processing
 
The BeagleBone Black is perfect for the job. If you want an example of some vision code you can download it from our website.
https://ccisdrobonauts.org/?p=robots

dash121 14-11-2014 09:43

Re: Optimal board for vision processing
 
I got in contact with another team who seemed to like the Pixy. They seemed to like it not only because it does vision, but because it processes the vision itself with very little outside programming. We plan to use it next year; you can find it at http://www.amazon.com/Charmed-Labs-a...&keywords=pixy or you can go to their website, http://charmedlabs.com/default/. We have yet to try it, but they seem to really like it. It's definitely a start for vision and vision processing. Hope it helps!

yash101 14-11-2014 13:22

Re: Optimal board for vision processing
 
I really don't know how good an option the Pixy will be for FRC, though. I could imagine quite a bit of motion blur, and I do not think the Pixy allows you to calculate more advanced things such as distance.

For the same price, you could get an ARM Dev board that can run a full-blown suite of vision tools, such as OpenCV!
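
For a sense of what that looks like, here is a minimal sketch (the HSV threshold values are placeholders you would tune for your target and lighting):

Code:

import cv2

# Grab one frame from the first USB camera and count bright-green blobs,
# the usual starting point for finding retroreflective tape.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))  # placeholder bounds
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print("found %d candidate targets" % len(contours))
cap.release()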

