Chief Delphi > Technical > Programming
16-10-2014, 14:15
techhelpbb
Registered User
FRC #0011 (MORT - Team 11)
Team Role: Mentor
 
Join Date: Nov 2010
Rookie Year: 1997
Location: New Jersey
Posts: 1,624
Re: Optimal board for vision processing

Quote:
Originally Posted by NotInControl
I feel like this information is misleading to some and might deter people from trying a solution that works for them, because they feel the hardware we have cannot support it.
1. I've referenced the Einstein report several times during this topic.
2. Without a doubt, lots of people are not even going to read this and will try it anyway.
3. I've had this discussion over...and over...for years.

The bottom line: if someone asks how to test it, that is one thing.
Simply throwing the details at them seems to tune them out (and really that's a common human trait).

I'll have to read your post again later when I have time, but offhand most of what you wrote there seems fine.

Quote:
Originally Posted by NotInControl
Why do you state that sending real-time video can aid in flooding the cRIO queue, even though the packet destination is the driver station and not the cRIO/RoboRIO? What you are describing sounds like the D-Link's ports act like a hub instead of a switch; do you have evidence of this?
If you read again, I did not write that real-time video over WiFi can flood the cRIO queue.
I wrote that sending real-time video over WiFi can cause network issues that prevent your cRIO from getting FMS packets in time to avoid a timed disable.
I used simple language when I wrote it because I hoped to say it in a way less experienced people could understand.

Quote:
Originally Posted by NotInControl
What do you consider real-time? Again, the problem you are mentioning is a worst-case scenario: if you only send packets to the cRIO but never read them, you will fill up its network buffer, as expected. It is not unreasonable.
I think you misunderstood the previous point so this does not make sense for me to address.

Quote:
Originally Posted by NotInControl
We transmit 320x240 images at 20 frames per second from our off-board processor to our driver station. I consider this real-time, and we don't have issues; because of the small image size, with a mostly black background, we are well under 3 Mbit/s of bandwidth (a safety factor of more than 2 on the link limit). Since we use an off-board processor, the transmission starts at our off-board processor and ends in a separate thread running on our driver station. The cRIO is not aware of the transmission; because the D-Link acts as a switch and routes based on MAC address, the pictures destined for the driver station should not flood the cRIO.
Again the direction you are going with this does not match what I communicated.
You reduced your video bandwidth so you could fit within the bandwidth FIRST actually provides on the competition field.
You did so because it worked.

If you go back and look - you say 3 Mbit/s and others wrote 7 Mbit/s.
So what do you think would happen if you used the full 7 Mbit/s? It would be a problem.
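To put rough numbers on that (my arithmetic, not from the thread): a raw 320x240 24-bit stream at 20 fps is several times larger than the field link, which is why a stream like the one described only works compressed and throttled.

```python
# Back-of-the-envelope check on the 320x240 @ 20 fps figures from the thread.
WIDTH, HEIGHT, FPS = 320, 240, 20

# Uncompressed 24-bit RGB rate: ~36.9 Mbit/s, several times the ~7 Mbit/s link.
raw_bits_per_sec = WIDTH * HEIGHT * 24 * FPS

# Per-frame budget if you cap the stream at 3 Mbit/s (half the link limit):
# 150,000 bits, i.e. roughly 18 KiB per compressed frame.
budget_bits_per_frame = 3_000_000 / FPS

print(raw_bits_per_sec / 1e6, budget_bits_per_frame / 8 / 1024)
```

So the 3 Mbit/s figure only holds because JPEG compression (and a mostly black background) squeezes each frame into that ~18 KiB budget.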

I find it difficult to tell people to read the manual when that manual tells them something less than transparent.

Quote:
A properly working switch, as the D-Link is advertised to be, only routes packets to individual ports, not all ports. If you have evidence of your claim to the contrary, please provide it.
I do not see why I should provide evidence of something I never wrote.
However, the ARP function the D-Link products implement is questionable.
I provided links earlier if you would like to see what D-Link has to say about their own bridge function.

I have to say that I am not inclined to waste lots of time or energy on proving things to FIRST.
It seems to accomplish very little, because there is no reasonable way for them to address some of these problems cost-effectively.
Worse, it might be exploitable if I go into too much detail.

Quote:
If you plan on bit-banging data over an IO pin, or even multiple IO pins, your throughput will suffer. This may work for small communications, but will be a hindrance for more data, or for scalability.
I am very curious what data your vision coprocessor is sending that is so large it needs high throughput. What are you sending to the cRIO/RoboRIO if the coprocessor is doing the vision part?

Is there really some reason you cannot use digital pins, or even simple binary data, for things like: 'move up more', 'move down more', 'on target', 'not on target'...?
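For what it's worth, the "simple binary data" idea fits in a few lines. This is a hypothetical encoding of my own, not anything from the thread: two digital lines are enough to carry four aiming states, and the robot side just reverses the table.

```python
# Hypothetical two-line encoding of a vision verdict. The state table and
# the idea of mapping it onto DIO lines are illustrative only -- adapt to
# whatever IO the coprocessor and the cRIO/RoboRIO actually expose.

AIM_STATES = {
    "on_target": (0, 0),   # line A low,  line B low
    "move_up":   (0, 1),   # line A low,  line B high
    "move_down": (1, 0),   # line A high, line B low
    "no_target": (1, 1),   # line A high, line B high
}

def encode(state):
    """Coprocessor side: map a vision verdict onto two DIO levels."""
    return AIM_STATES[state]

def decode(a, b):
    """Robot side: recover the verdict from the two input lines."""
    return {v: k for k, v in AIM_STATES.items()}[(a, b)]
```

Two pins, no protocol stack, and a disconnected coprocessor just reads as one of the four states instead of crashing anything.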

Quote:
I also believe it is more complicated if the user requires bi-directional communication than using Ethernet.
If you had a lot to communicate, possibly. Again, what are you communicating that requires a whole protocol?

Quote:
If flooding the network queue were a concern even after designing a proper communication protocol, I would recommend people reduce the Time To Live on the packet, so that if it does get queued up, it is not sitting in the queue for more than 2 seconds.
Sure...but on the other hand, TCP/IP is not exactly a simple protocol either.
It seems odd to me that the fix for not writing a simple (or virtually no) protocol is to use a protocol whose fine details, like TCP congestion-control mechanisms, people often do not understand.
Even stranger when they tune out anyone trying to explain the potential issues.

So what are you sending from your vision coprocessor to your cRIO/RoboRIO that you need to deal with all that?
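One middle ground between bit-banged pins and a full TCP protocol is a single small UDP datagram per vision result: stateless, so a lost packet is simply superseded by the next frame's result and nothing accumulates in a queue. The address, port, and payload layout below are my own assumptions for illustration, not anything specified in the thread or by FIRST.

```python
# Sketch: one tiny UDP datagram per vision result. Address, port, and the
# payload layout are hypothetical -- pick your own on the robot LAN.
import socket
import struct

ROBOT_ADDR = ("10.0.11.2", 5800)  # hypothetical robot-side listener

def pack_result(on_target, offset_deg):
    # 1-byte flag + 32-bit float = 5 bytes, negligible next to a video stream.
    return struct.pack("<Bf", int(on_target), offset_deg)

def send_result(sock, on_target, offset_deg):
    sock.sendto(pack_result(on_target, offset_deg), ROBOT_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
```

Because each datagram is self-contained and fire-and-forget, there is no connection state to manage and no backlog to drain if the robot-side reader stalls.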

Quote:
As an Alpha and Beta tester, I found the ASUS proved too problematic, and it will not be approved for use. We will continue to use the D-Link for at least the 2015 competition season.
First I've heard of it. Thanks.

Quote:
I second what Jared states on camera and network settings. I will be releasing a comparison of RoboRIO vs BeagleBone Black vs Jetson (CPU) vs Jetson (w/GPU) sometime in the near future, as I have access to all of those boards and they are all Linux/ARM and can run the exact same code.
Me likey data. Just saying.

Also, if you release the test code, perhaps we can try it against a laptop that would be legal on a FIRST robot.

Quote:
We used an off-board processor mainly for reliability, which is why I wanted to chime in. We like the segregation of keeping the vision system separate from our main robot. If the vision fails, we get a null image, or any of a myriad of things happen on the vision side, the way we have the robot coded it does not affect the robot's ability to communicate and play the game (without vision, of course). We ran into an issue at an off-season event where the SD card on our BeagleBone White began to fail and would not load the OS properly on startup; all this meant was that we could not detect the hot goal, and a backup auto routine was performed (just shoot 2 balls in the same goal and one will be hot). It did not bring down the robot at all. If the camera becomes unplugged or unpowered, it does not create a null reference on the robot. I have seen many teams have dead robots in Auto because their camera was unplugged when the robot was put on the field and the code started with a null reference.

I realize our rationale is different from your post, which in my opinion has a tone of more bad than good, but I would definitely advocate for an off-board processor.
I do not understand how my earlier post suggesting that vision systems be made modular was interpreted this way.

Quote:
Another reason we go with an off-board processor is because of what happened in 2012 and 2013, where certain events actually disabled streams in order to guarantee up-time on the field. This handicapped any team that depended on vision processing on their driver station. I still like to believe that the 7 Mbit/s stream is not a guarantee from FIRST, so depending on it is a bad idea. If you must rely on vision, doing the processing locally on the RoboRIO or on an off-board processor is a way to avoid this (because the data stays on the local LAN and doesn't need to be transmitted to the DS). Although I am open to any evidence that this is not true for the D-Link DAP 1522, as is suggested in this thread.
I am still utterly perplexed by this.
If I prove to you that sending data between two switch ports locally can have an impact on the field network, what exactly do you think you are going to do about it?
I did not say that this impact would detract from that local communication.

Besides, you just told us the D-Link is the only option in town.
That means FIRST is past the point of changing course, because it is nearly November.

Quote:
For these reliability reasons, even though the RoboRIO probably holds enough juice for our needs, we will most likely continue to use an off-board processor to keep the major software of our system segregated, if we even have vision on our bot! As others have said, vision should only be a last resort, not the first; there are more reliable ways to get something done, like more driver practice! We didn't use vision at all in 2012 and ranked 1st in both of our in-season competitions.

Our robot is, and always will be, developed to play the game without relying heavily on vision. We do this with dedicated driver training. Vision is only added to help, or to make an action quicker, not to replace the function entirely. Our drivers still train for a failure event, and we try to design our software so that if any sensor fails, the driver is alerted immediately and the robot can still complete the match without much handicap.

Regards,
Kevin
I see nothing else to address there.

Brian

Last edited by techhelpbb : 16-10-2014 at 17:50.
 

