#8   15-10-2014, 11:23
matan129
FRC #4757 (Talos)
Team Role: Programmer
Join Date: Oct 2014
Rookie Year: 2015
Location: Israel
Posts: 19
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
Depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode the D-Link is a gateway, so traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.



This greatly depends on how you achieve it.
If you do it in compiled code you can easily achieve 5 fps or more (with reduced color depth).
If your CPU is slow or your code is inefficient, things might not work out so well.

Anything you send over TCP, the stack will try to deliver, and once it starts it is hard to stop (hence "reliable transport"). With UDP you control the protocol, so you can choose to give up on stale data. This means that with UDP you have to do more work yourself. Really, someone should do this and release a library so it can be tuned for FIRST-specific requirements.
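
As a rough illustration of that "choose to give up" idea, here is a minimal, untested Python sketch for a Raspberry Pi coprocessor: compress each frame to JPEG, send it as one UDP datagram, and simply drop frames rather than queue them when you are ahead of your send budget. The receiver address, port, resolution, and quality values are placeholder assumptions, not anything FIRST-specific.

[code]
# Sketch only: send reduced-size JPEG frames over UDP and drop frames when
# sending too fast, instead of letting a reliable transport queue them up.
# Assumes OpenCV (cv2) on the coprocessor; address/port are placeholders.
import socket
import time

import cv2

RECEIVER = ("10.47.57.5", 5800)   # placeholder address and port
MAX_DATAGRAM = 60000              # stay under the ~64 KB UDP datagram limit
MIN_PERIOD = 0.2                  # cap at ~5 fps, matching the figure above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
camera = cv2.VideoCapture(0)
last_send = 0.0

while True:
    ok, frame = camera.read()
    if not ok:
        continue

    now = time.time()
    if now - last_send < MIN_PERIOD:
        continue                  # "give up": drop this frame, do not queue it

    # Reduce resolution and JPEG quality to keep the datagram small.
    small = cv2.resize(frame, (320, 240))
    ok, jpeg = cv2.imencode(".jpg", small, [cv2.IMWRITE_JPEG_QUALITY, 40])
    if not ok or len(jpeg) > MAX_DATAGRAM:
        continue                  # too big for one datagram; skip it rather than fragment

    sock.sendto(jpeg.tobytes(), RECEIVER)
    last_send = now
[/code]

The receiving side would still need its own conventions on top of this (frame boundaries, ordering, detecting loss), which is the extra work being referred to.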
So... to summarize, I will always be able to choose NOT to send data over WiFi?
And is it 'safe' to develop a high-res/high-fps vision system whose parts are all physically on the robot (i.e. the camera and the RPi)? What I'm worried about is suddenly discovering on the field that all the communication actually goes through the field WiFi, making the vision system unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place).

Quote:
Originally Posted by Jared Russell View Post
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the roboRIO or (b) your driver station laptop. No new hardware costs, no worry about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the cRIO exist and are fairly straightforward to use.

In all seriousness, in the retroreflective tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).

I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send back the relevant bits to your robot. Round-trip latency with processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you utilize tricks like sending gyro data along with your image so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback/testing, and even do "on-the-fly" tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.

If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and what your goals really are. If your goal is to build the most competitive robot that you possibly can, there is almost always lower hanging fruit that is just as inspirational to your students.
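
As a rough sketch of the pose-tagging trick described in the quote above (record the gyro heading when each frame is captured, so the delayed vision result can be corrected for how far the robot has turned since), something like the following could work. The function names and the source of the gyro readings are assumptions, not an existing WPI API.

[code]
# Sketch only: keep a short history of gyro headings so a vision result that
# arrives 30-100 ms later can be interpreted against the heading at capture time.
import time
from collections import deque

heading_history = deque(maxlen=200)   # (timestamp, heading_degrees) samples

def record_heading(heading_degrees):
    """Call periodically (e.g. every 20 ms) with the current gyro reading."""
    heading_history.append((time.time(), heading_degrees))

def heading_at(capture_time):
    """Return the recorded heading closest to the frame's capture time."""
    if not heading_history:
        return None
    return min(heading_history, key=lambda s: abs(s[0] - capture_time))[1]

def corrected_target_angle(angle_in_image, capture_time, current_heading):
    """Adjust a vision-derived target angle for rotation during the round trip:
    whatever the robot has turned since capture is subtracted back out."""
    heading_when_captured = heading_at(capture_time)
    if heading_when_captured is None:
        return angle_in_image
    return angle_in_image - (current_heading - heading_when_captured)
[/code]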
Wow, thanks! And yes, to begin with I intend to develop only recognition of the retroreflective strips. Well, I'll talk with the other guys on the programming team and we'll see about this. The major goal (at least for now) of my planned vision system is to assist the driver in scoring: it will slightly correct the robot's position and therefore make scoring more precise.
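
For the retroreflective-strip recognition itself, a common starting point is an HSV threshold plus contour detection in OpenCV. The sketch below is untested; the color bounds, minimum area, and camera assumptions (green LED ring, BGR frames) are placeholders that would need tuning on real images.

[code]
# Sketch only: threshold the LED-ring color in HSV, find contours, and report
# the largest target's horizontal offset so driver-assist code can nudge the robot.
import cv2
import numpy as np

LOWER_GREEN = np.array([50, 100, 100])   # placeholder HSV lower bound
UPPER_GREEN = np.array([90, 255, 255])   # placeholder HSV upper bound
MIN_AREA = 100                           # ignore small reflections and noise

def find_target_offset(frame):
    """Return the target's horizontal offset from image center in pixels,
    or None if no plausible strip is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)

    # findContours returns 2 or 3 values depending on the OpenCV version.
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]

    candidates = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
    if not candidates:
        return None

    x, y, w, h = cv2.boundingRect(max(candidates, key=cv2.contourArea))
    return (x + w / 2.0) - frame.shape[1] / 2.0
[/code]

The sign of the returned offset would then tell the driver-assist code which way to nudge the robot.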

Last edited by matan129 : 15-10-2014 at 11:32.
 

