#16 | 15-10-2014, 11:30
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129 View Post
So... to summarize, I will always be able to choose NOT to send data over WiFi?
Is it 'safe' to develop a high-res/high-fps vision system whose parts are all physically on the robot (i.e. the camera and the RPi)? By this I mean: will I suddenly discover at the field that all the communication actually goes through the field WiFi, making the vision system unusable (because I have limited WiFi bandwidth, which I never intended to use in the first place)?
Even if the D-Link were to repeat packets that you intended only for its wired ports, UDP traffic has no effect on the onboard interconnection. The worst that happens is that some UDP reaches the field side, and frankly, if the field staff are monitoring remotely as they should, they can filter that if it really comes down to it.

Also, you do not need to send much over the D-Link switch unless you are sending video to the driver's station. In fact, you can avoid Ethernet entirely if you use I2C, digital I/O, or something similar.

So you should be good. Just be aware that if you do use Ethernet for this, you are using some of the bandwidth to the cRIO or RoboRio, and if you send too much you can cause issues. You do not have complete control over what the cRIO/RoboRio does on Ethernet, especially on a regulation FIRST field.

I believe there is a relevant example of what I mean in the Einstein report from years past.
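
To make that concrete, here is a rough sketch of what the coprocessor side might look like if it only pushes small UDP result packets over the wired switch. The IP address, port, and message format are placeholders, not anything official; adjust them for your own setup.

Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Rough sketch: send a small vision result (angle + distance) from a
// coprocessor to the robot controller over the wired D-Link switch.
// The address 10.0.11.2, port 5800, and the text format are placeholders.
public class VisionSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress robot = InetAddress.getByName("10.0.11.2");
        double angleDegrees = 3.2;      // values produced by your vision code
        double distanceInches = 145.0;
        byte[] payload = (angleDegrees + "," + distanceInches).getBytes("US-ASCII");
        // A packet this small, sent a few times per second, is negligible
        // compared to streaming video to the driver's station.
        socket.send(new DatagramPacket(payload, payload.length, robot, 5800));
        socket.close();
    }
}

A few dozen bytes per update is a very different load than a video stream, which is the point.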

Quote:
Originally Posted by Joe Ross View Post
Do you have any evidence of this? Bridge mode does not mean that the D-Link acts as a hub and blasts data anywhere. It still has an internal switch and will only send the data to the appropriate port. If both connections are Ethernet, it won't send the data through WiFi. The only exception is broadcast packets.
Yes, a switch with a properly working MAC forwarding table should behave that way.

D-Link has had issues with this in the past; hence they deprecated the bridge feature on the DIR-655.

There are hints of this floating around, for example:
http://forums.dlink.com/index.php?topic=4542.0

Also this (odd, is it not, that the described broadcast does not pass...)
http://forums.dlink.com/index.php?to..._next=next#new

Last edited by techhelpbb : 15-10-2014 at 19:35.
#17 | 15-10-2014, 11:35
matan129 | Programmer | FRC #4757 (Talos)
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
...In fact, you can avoid Ethernet entirely if you use I2C, digital I/O, or something similar...
Can you elaborate what I2C is? (again- I'm new to FRC. Sorry if this is a noob question!)
#18 | 15-10-2014, 11:37
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129 View Post
Can you elaborate what I2C is? (again- I'm new to FRC. Sorry if this is a noob question!)
I2C is the telephone-style jack on the bottom left in this picture of the Digital Sidecar.

http://www.andymark.com/product-p/am-0866.htm

I do not want to hijack your topic on this extensively.
So I will simply point you here:

http://en.wikipedia.org/wiki/I%C2%B2C

It is basically a simple serial bus for digital communication between chips.
To use it from a laptop you would probably need a USB-to-I2C interface, and COTS adapters like that do exist.
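
If it helps, here is a minimal sketch of what the robot-side code could look like, assuming WPILib's Java I2C class on the RoboRio; the device address 0x42 and the message format are made up for illustration, so treat this as a starting point rather than a recipe.

Code:
import edu.wpi.first.wpilibj.I2C;

// Minimal sketch: read a few bytes from a coprocessor acting as an I2C device.
// The address 0x42 and the message layout are invented for this example;
// agree on your own protocol with whatever is on the other end of the bus.
public class CoprocessorLink {
    private final I2C link = new I2C(I2C.Port.kOnboard, 0x42);

    public double getTargetAngle() {
        byte[] buffer = new byte[4];
        // readOnly() returns true if the transfer was aborted (failed).
        if (link.readOnly(buffer, buffer.length)) {
            return 0.0; // no data; fall back to a safe default
        }
        // Assume the coprocessor sends the angle in hundredths of a degree,
        // big-endian, in the first two bytes.
        int raw = ((buffer[0] & 0xFF) << 8) | (buffer[1] & 0xFF);
        return (short) raw / 100.0;
    }
}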

Last edited by techhelpbb : 15-10-2014 at 11:41.
#19 | 15-10-2014, 12:52
Chadfrom308 (Chad Krause) | Driver | FRC #0308 (The Monsters)
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129 View Post
Can you elaborate what I2C is? (again- I'm new to FRC. Sorry if this is a noob question!)
I²C (or I2C) is a really simple way to communicate. Most of the time it's between two microcontrollers, because it is easy and fast to do. I hear it is pretty hard to do on the cRIO (hopefully it is easier on the roboRIO).

What I would do is get a DAC (digital-to-analog converter) and put it on the RPi or BeagleBone. That way you can hook it up straight to an analog input on the roboRIO and use a function to turn the analog signal back into a value. I feel like it would be an easy thing to do, especially if you are doing an auto center/aim system: you could hook a PID loop right up to the analog signal (okay, maybe a PI loop, but that can still give you good auto-aiming).
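
A rough sketch of the roboRIO side of that idea, assuming WPILib's AnalogInput class; the 0-5 V to +/-20 degree mapping and the proportional gain are made-up numbers, purely for illustration:

Code:
import edu.wpi.first.wpilibj.AnalogInput;

// Sketch: the coprocessor's DAC drives analog input 0 with a voltage
// proportional to how far off-center the target is. The 2.5 V = centered
// convention, the +/-20 degree range, and the 0.02 gain are invented numbers.
public class AnalogAim {
    private final AnalogInput targetOffset = new AnalogInput(0);

    /** Convert the DAC voltage back into a target offset in degrees. */
    public double getOffsetDegrees() {
        double volts = targetOffset.getVoltage();   // roughly 0.0 to 5.0
        return (volts - 2.5) / 2.5 * 20.0;          // 2.5 V means centered
    }

    /** Simple proportional step: returns a turn command clamped to [-1, 1]. */
    public double getTurnCommand() {
        double kP = 0.02;
        double command = kP * getOffsetDegrees();
        return Math.max(-1.0, Math.min(1.0, command));
    }
}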

I also completely forgot about the laptop driver station option. Although it is not the fastest method, vision tracking on the driver station is probably the easiest method.

Also, the roboRIO has two cores, so maybe you can dedicate one core to the vision tracking; that way there is practically zero latency (at least for communications).
#20 | 15-10-2014, 13:39
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by Jared Russell View Post
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRio
I would hope the Java vision examples for the RoboRio have improved.
On Team 11 we had very little luck getting the cRIO to handle everything we asked of it with Java doing vision on top of everything else, and in some cases we could not even get the examples to work.
The RoboRio is faster, so that will help. It is also less picky, which will help too.

I believe I have asked around on ChiefDelphi in the past for Java vision examples for the cRIO that actually work.
I would love to see working Java vision examples on the RoboRio. Perhaps video of them working.

I have not been involved in the beta work MORT is doing on the RoboRio and Java in this regard, so such examples may already exist.

Quote:
Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).
I did a survey last year at several events regarding what people were using for vision processing. It was not an official CSA question, but since I was asking other questions anyway, I asked or looked. I think you would be surprised at how many teams made a coprocessor work without major headaches.

I also spent part of each competition, at the request of the FTA, chasing down teams that were sending video back to the driver's station in ways that messed with the field. Very competent people were having issues with this, so I do not think it is quite so cut and dried. If anyone wanted, I could pull that data together for the events at which I volunteered in MAR.

Additionally:

I would like to add that there is a hidden cost to embedded and single-board computers.
It is the same hidden cost the cRIO had, especially 3 or more years ago when the 8-slot cRIO was the FIRST-approved system.

How many of these do you have knocking around to develop on?

Think about it: general-purpose laptops are plentiful, so anyone with a laptop (increasingly every student in a school) could snag a USB camera for <$30 and start writing vision code. If you use old phones, you can probably get the whole package for $100 or less, and your students are already glued to their phones every day anyway.

On the other hand, if you buy a development board for nearly $200, how many people can actively develop and test on it?
If a student takes that board home, how many other students can keep working and be confident that what they are doing will port to that system?
Vision is a big project and, more importantly, you can often play a FIRST game without it.

Is it better to use laptops you probably already have, or to commit to more proprietary hardware that you might have to buy several of, and then, if that product goes out of production, do it all over again (if you even really use it)? Is the cost justifiable?

Last edited by techhelpbb : 15-10-2014 at 16:35.
#21 | 15-10-2014, 17:10
jman4747 (Josh) | CAD | FRC #4080 (Team Reboot)
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
On the other hand, if you buy a development board for nearly $200, how many people can actively develop and test on it? ... Is the cost justifiable?
Not necessarily. Most people use the OpenCV libraries with C++ on the Jetson TK1 and other SBCs. The code is written the same way for it, Linux PCs, laptops, Windows PCs, etc. Even the GPU libraries are functionally the same and look almost identical. Furthermore, the standard OpenCV GPU libraries work with Nvidia GPUs in general, not just the Jetson. If you mean Linux vs. Windows: before the Jetson I had never used Linux, but the GUI desktop is very easy to get into and I was able to install everything necessary having never used Linux. Thus anyone with a computer or other SBC running Windows/Linux who can develop in C++ can write code that can be used on the co-processor.
#22 | 15-10-2014, 17:55
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by jman4747 View Post
...anyone with a computer or other SBC running Windows/Linux who can develop in C++ can write code that can be used on the co-processor.
Firstly, I will not deny that the Jetson TK1 is a great bit of kit.

Secondly, CUDA development is sometimes used in the financial industry, where I often work. Some problems are not well suited to it; luckily OpenCV does leverage it, but I can certainly see ways it could be poorly utilized regardless of support. As you say, OpenCV supports it, but what if you don't want to use OpenCV?

Thirdly, the Jetson is actually more expensive than you might think. It lacks an enclosure and, again, the battery that you would get with a laptop or a phone. Once you add those items, the laptop or phone is cheaper. If you don't care about the battery, the Jetson wins on price over the laptop, because then you avoid the power-supply issue the laptop would create without its battery; but a used phone would still likely beat the Jetson on price, with or without the battery.

Fourthly, the phone is smaller than 5" x 5" and very likely lighter even with the battery. You might even already have phone development experience, because teams like MORT write scouting apps that are in the Android store.

Fifthly, the Jetson does not have a camera, and an old phone probably does; maybe even two cameras facing different directions, or in a few cases two cameras facing the same direction. What the Jetson does have is a single USB 2.0 port and a single USB 3.0 port, while a laptop might have four or more USB ports (yes, laptops often hang some of those ports off an integrated hub, but you would have to add a USB hub to the Jetson and you would run out of non-hubbed ports fast). That might matter a lot if you intend not to use Ethernet (an I2C USB adapter, or USB digital I/O such as an FTDI chip or an Atmel/PIC MCU). To put this in a fair light I will refer here:
http://elinux.org/Jetson/Cameras
If you need expensive FireWire or Ethernet cameras, each one has already consumed the cost of possibly three USB cameras.
Worse, you might be back on the D-Link switch or dealing with TCP/IP-based video, which for this application is, in my opinion, not a good idea.

Finally, I will acknowledge that the Tegra TK1 is basically a general-purpose computer with a GPU, so you can leverage tools as you say. Still, all the testing needs to end up on it. You could develop up to that point, but then you would need to buy one to test; maybe more than one if you have a practice robot, and maybe even more if you have multiple developers. Students usually do not work like professional programmers, as the sheer number of cRIO reloads I have seen can demonstrate.

On the plus side you could build up to the point you have it working. Then load it on the Jetson and if it doesn't work take your laptop apart. So there's that.

For a different dimension to this analysis, which skill is probably worth more: the ability to write Android and Apple apps that you can bring to market while still a high school student, or the ability to write CUDA apps? Both could analyze video, but which one would you mentor if you wanted to give your students the most immediately marketable skill they can use without your guidance? My bet is that the Android and Apple app skills would more immediately help a student earn a quick buck and be empowered. Mining bitcoins on CUDA is not as profitable as you think.

Last edited by techhelpbb : 15-10-2014 at 18:21.
#23 | 15-10-2014, 18:38
Brian Selle | Engineer | FRC #3310 (Black Hawk Robotics)
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
I would hope the Java vision examples for the RoboRio have improved.
On Team 11 we had very little luck getting the cRIO to handle all we asked from it with Java doing vision and everything else and in some example cases even getting the examples to work.
The RoboRio is faster so that will help. It is less picky so that will also help.

I believe I asked around on ChiefDelphi in the past for Java vision examples for the cRIO that actually work.
I would love to see working Java vision examples on the RoboRio. Perhaps video of them working.
In 2012, we used Java for vision processing on the cRIO using the NIVision libraries. It calculated the target goal distance/angle from a single frame, adjusted the turret/shooter speed and unloaded the ball supply. It worked quite well.

What helped the most in getting it to work was that we first wrote a standalone app in .NET C#. I seem to recall that the NI install included an NIVision DLL, or we downloaded it for free. Using the examples as a guide, we were able to learn the libraries much faster than by dealing with the cRIO. An added bonus was that we could quickly diagnose issues at competitions without tying up the robot/cRIO.

We thought about using it in 2013 and 2014 but, as others have said, it was a low priority and the extra effort/failure points made it even less important. Cheesy Vision sealed the decision. If we do it in the future it will most likely be on the roboRIO or Driver Station.
#24 | 15-10-2014, 21:26
Greg McKaskle | FRC #2468 (Team NI & Appreciate)
Re: Optimal board for vision processing

I'd encourage you to use the examples and the white paper to compare your processing options. The MIPS rating of the processors is a pretty good estimate of their raw horsepower. I don't have a good Tegra, so I can't measure where the CUDA cores are a huge win and where they are not.

Finally, it isn't the board you pick, but how you use it. I suggest you pick the one that lets you iterate and experiment quickly and confidently.

Greg McKaskle
#25 | 15-10-2014, 22:41
marshall | Mentor | FRC #0900 (The Zebracorns)
Re: Optimal board for vision processing

Quote:
Originally Posted by Greg McKaskle View Post
...it isn't the board you pick, but how you use it. I suggest you pick the one that lets you iterate and experiment quickly and confidently.
Agreed. It is all about how you use the tools you have. We're doing development with the TK1 board now and it's a lot of fun, but there are some drawbacks; most of them have been outlined above. (Be mindful of using X11 on it; it's not stable.)

The main thing is that you pick something and then stick with it until you've developed a solution. If you are new to FRC or to any of this then your best bet is using the code and examples that NI/WPI have made available to teams.

Don't get me wrong, if you want to try new things then do it and ask lots of questions too! Just be prepared for it not to always work out.
#26 | 15-10-2014, 23:00
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Another facet of this issue:

If you are in doubt about the stability of your vision code/system, make sure it is properly modular.

I've seen some really tragic losses over the years because vision wasn't working right, yet removing it was as bad as leaving it in, especially when the code is woven awkwardly into the rest of the cRIO code.

Putting the system external to the cRIO can make it more contained.
It is often possible to make the whole thing mechanically removable when it is external (a few fewer active inputs here or there).

Remember things you never saw in testing can happen on a competition field.
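
One simple way to keep it contained is to run the vision code on its own thread and guard it, so a vision failure cannot take the rest of the robot code down with it. This is only a sketch; VisionPipeline is a placeholder for whatever your actual camera/processing code is.

Code:
// Placeholder interface for whatever your camera/processing code is.
interface VisionPipeline {
    double process() throws Exception;  // returns, say, a target angle
}

// Sketch: keep vision on its own daemon thread so that if it throws,
// the rest of the robot code keeps running without it.
public class VisionRunner {
    private volatile double latestAngle = 0.0;
    private volatile boolean hasTarget = false;

    public void start(final VisionPipeline pipeline) {
        Thread thread = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        latestAngle = pipeline.process();   // may throw
                        hasTarget = true;
                    } catch (Exception e) {
                        // Vision broke: note it and keep the robot drivable.
                        hasTarget = false;
                        System.err.println("Vision error: " + e);
                    }
                }
            }
        });
        thread.setDaemon(true);  // never let vision keep the program alive
        thread.start();
    }

    public boolean hasTarget()     { return hasTarget; }
    public double getLatestAngle() { return latestAngle; }
}

The rest of the robot code only ever reads hasTarget() and getLatestAngle(), so pulling the whole thing off the robot is one call site, not code woven through everything.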
#27 | 15-10-2014, 23:37
Jared Russell | Engineer | FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
I also spent part of each competition, at the request of the FTA, chasing down teams that were sending video back to the driver's station in ways that messed with the field. Very competent people were having issues with this, so I do not think it is quite so cut and dried. If anyone wanted, I could pull that data together for the events at which I volunteered in MAR.
In 2012-2013, 341 competed at 9 official events and numerous other offseason competitions. We never had any issues with streaming video. There were a handful of matches where things didn't work, but they all traced back to user error or an early match start (before our SmartDashboard had connected).

I believe that when people have trouble with this setup, it can usually be traced back to choosing camera settings poorly. Crank the exposure time down along with brightness, raise contrast, and you will find that images are almost entirely black except for your vision target (if necessary, provide more photons from additional LED rings to improve your signal to noise ratio). A mostly-black image with only your target illuminated is advantageous for a bunch of reasons:

1) JPEG compression can be REALLY effective, and each image will be ~20-40KB, even at 640x480. Large patches of uniform color are what JPEG loves best.

2) Your detection algorithm has far fewer false alarms since most of the background is simply black.

3) You can get away with conservative HSL/HSV/RGB thresholds, so you are more robust to changes in field lighting conditions. We won 6 of those 9 on-season competitions (and more than half of the offseasons) using 100% camera driven auto-aim, and never once touched our vision system parameters other than extrinsic calibration (ex. if the camera got bumped or our shooter was repaired).

In my experience, I find that the vast majority of teams don't provide enough photons and/or don't crank down their exposure time aggressively enough. Also, I strongly suspect (but do not know for sure) that the Bayer pattern on the Axis camera effectively makes it twice as sensitive to green light, so you might find that green LEDs work much better than other colors. We used green LEDs both years.

It is also possible that if your vision processing code falls behind, your laptop will get sluggish and bad things will happen. Tune your code (+ camera settings, including resolution) until you can guarantee that you will process faster than you are acquiring.
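
To illustrate the thresholding idea (this is not 341's actual code), here is a minimal OpenCV-in-Java pass over one of those mostly-black frames. The HSV bounds and the minimum blob area are placeholder numbers for a green LED ring; tune them for your own camera and lighting.

Code:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// Illustration only: threshold a low-exposure frame for a green LED ring and
// pull out candidate target contours. Requires the OpenCV native library to
// be loaded (System.loadLibrary(Core.NATIVE_LIBRARY_NAME)) before use.
public class TargetFinder {
    public static List<MatOfPoint> findTargets(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // With exposure and brightness cranked down, almost everything falls
        // outside this range, so the mask is nearly all black.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Drop tiny blobs (stray reflections, arena lights).
        List<MatOfPoint> targets = new ArrayList<MatOfPoint>();
        for (MatOfPoint contour : contours) {
            if (Imgproc.contourArea(contour) > 100.0) {
                targets.add(contour);
            }
        }
        return targets;
    }
}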

Last edited by Jared Russell : 15-10-2014 at 23:39.
#28 | 16-10-2014, 04:58
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by Jared Russell View Post
...
Basically, this advice reduces color depth and image content: the settings effectively remove large portions of the image before the video is ever sent, which reduces the bandwidth required to send it over the field network. Using retroreflective tape with a light source makes this easier, because the target is so bright at the camera sensor that it survives the process of stripping down the camera output. I am not sure this advice will work out as well if the goal is to track things that are not retroreflective.

It is a similar concept to reducing the frame rate and/or resolution and increasing the compression, all of which make the video less and less useful to human eyes.

All of these options ultimately reduce the bandwidth required to send the video, so whether the video is sent with TCP or UDP matters less. Even if TCP sends the video poorly, there is simply less of it to send.

So I would wonder, given that compromise, whether the drivers trying to watch the video on the driver's station (for example, if the vision software were in doubt) would find it nearly as useful as a targeting laser/light that illuminates the target for the drivers directly, with no video at all.

In the years before FIRST started using QoS and prioritizing traffic (before the Einstein event that caused the uproar), just sending video could put you in a situation where someone on the network was robbed of FMS packets. As teams, we can only assume that the bandwidth controls we have now will actually allow 2-4 Mbit/s of video without disruption. Since I know for certain that timing out FMS packets will stop the robot until FMS can deliver a packet, this is a real balancing act.

One of the most concerning things to me personally is when you face a situation like I did last year, where someone who worked on the Einstein issue was having trouble sending video despite the expectations they should reasonably have had based on that work; they were finding, basically, that they had less bandwidth than they expected. So in cases like these I point out that FIRST fields are very dynamic and things might change without notice. What caused no issue on the field network in 2012 might cause issues in 2015; it really depends on settings you can neither control nor see in this environment until you have access to that field. I believe there is generally enough bandwidth to send some video from the robot to the driver's station, even using TCP, but you will have to make compromises, and they might not be compromises you would like to make. Hence, at least to me personally, if you can avoid putting that burden on the field by sending video over the WiFi, you just should. It will be one less variable to change on you between your test environment and the competition field.
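
As a back-of-the-envelope sanity check (the frame size and frame rate here are assumptions, not measurements), you can estimate what a stream costs before you ever get near a field:

Code:
// Back-of-the-envelope bandwidth estimate for an MJPEG stream.
// 30 KB/frame (a mostly-black 640x480 JPEG) and 15 fps are assumed numbers.
public class BandwidthEstimate {
    public static void main(String[] args) {
        double kilobytesPerFrame = 30.0;
        double framesPerSecond = 15.0;
        double megabitsPerSecond = kilobytesPerFrame * framesPerSecond * 8.0 / 1000.0;
        System.out.printf("~%.1f Mbit/s%n", megabitsPerSecond);  // ~3.6 Mbit/s
    }
}

That is already at the top of the 2-4 Mbit/s range mentioned above; doubling the frame rate or letting the frames get bigger eats the rest.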

Last edited by techhelpbb : 16-10-2014 at 05:53.
#29 | 16-10-2014, 08:01
marshall | Mentor | FRC #0900 (The Zebracorns)
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb View Post
As teams, we can only assume that the bandwidth controls we have now will actually allow 2-4 Mbit/s of video without disruption. Since I know for certain that timing out FMS packets will stop the robot until FMS can deliver a packet, this is a real balancing act.
I would not make that assumption. We ran into a lot of issues trying to get compressed 640x480 images back to the robot. We ended up ditching streaming entirely and instead just grabbing a single frame for our targeting system this past year.

Quote:
Originally Posted by techhelpbb View Post
Hence, at least to me personally, if you can avoid putting that burden on the field by sending video over the WiFi, you just should. It will be one less variable to change on you between your test environment and the competition field.
I agree 100% with this. That's why we have started down the road with the Tegra boards. Despite what FIRST says, FMS Lite != FMS. There are some oddities with FMS that only occur when you are on a field with full FMS and not in a lab.
#30 | 16-10-2014, 09:50
techhelpbb | Mentor | FRC #0011 (MORT - Team 11)
Re: Optimal board for vision processing

Quote:
Originally Posted by marshall View Post
I agree 100% with this. That's why we have started down the road with the Tegra boards. Despite what FIRST says, FMS Lite != FMS. There are some oddities with FMS that only occur when you are on a field with full FMS and not in a lab.
I am not clear on the TCP/IP stack performance of the RoboRio, but on the cRIO, if you used the robot's Ethernet to interface your coprocessor (for vision in this case), you could overwhelm the cRIO even if the goal was only to send information to the cRIO locally. There is a fine write-up of the details in the Einstein report. So just be careful: the Tegra boards have respectable power, and if you send your cRIO/RoboRio lots of data you could run into this issue.

Not sending real-time video over the WiFi does not remove the possibility of sending so much data to the cRIO/RoboRio over its Ethernet port that you still prevent it from getting FMS packets.

If, for example, you interfaced to the cRIO/RoboRio over the digital I/O instead, the coprocessor could send all the data it wants; the cRIO/RoboRio might not get all of it, but it will continue to get FMS packets, so your robot does not suddenly stop. That effectively gives the coprocessor a lower priority than your FMS packets (which is likely the situation you really want).
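
For instance (purely a sketch, assuming WPILib's Java DigitalInput class, with made-up channel numbers and encoding), the RoboRio could read a coarse target position as a few bits driven by the coprocessor:

Code:
import edu.wpi.first.wpilibj.DigitalInput;

// Sketch only: the coprocessor drives three DIO lines to report a coarse
// target position (no target / left / right / centered). The channel numbers
// and the encoding are invented for this example.
public class DioVisionLink {
    private final DigitalInput seesTarget  = new DigitalInput(0);
    private final DigitalInput targetLeft  = new DigitalInput(1);
    private final DigitalInput targetRight = new DigitalInput(2);

    /** Returns -1 for left, +1 for right, 0 for centered or no target. */
    public int getTargetDirection() {
        if (!seesTarget.get()) {
            return 0;           // nothing to aim at
        }
        if (targetLeft.get()) {
            return -1;
        }
        if (targetRight.get()) {
            return 1;
        }
        return 0;               // centered
    }
}

However much the coprocessor chatters, the worst case here is a stale reading; it cannot crowd out FMS traffic.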

If the RoboRio stops using the Ethernet port for the field radio, this may be less of an issue, because the FMS packets would not be competing on the Ethernet port (they would be on a separate network path). I know some alpha testing for the RoboRio was done around the Asus USB-N53 dual-band Wireless N600. At that point the issue is purely one of the RoboRio software keeping up with the combined traffic from the Ethernet port and the USB networking device (only real testing would show how well that works out, and for that you need robots on a competition field, test equipment, and things to throw data at the RoboRio: laptops, Jetson boards, etc.).

Last edited by techhelpbb : 16-10-2014 at 09:56.