Chief Delphi > Technical > Programming > NI LabVIEW
#1
Unread 15-07-2012, 15:47
Suitster is offline
Registered User
AKA: Ethan Pellittiere
FRC #3951 (SUITS)
Team Role: Alumni
 
Join Date: Jan 2012
Rookie Year: 2012
Location: Honeoye
Posts: 79
Vision Targeting on Laptop

Our team recently purchased a new laptop to replace our broken Classmate. The new laptop is a quad core (thanks Woot!), so we have a bunch of processing power that can be used.

I remember a thread a while ago where one team mentioned that they did their vision processing on their driver station laptop (maybe 1114).

This seems like a good approach to pursue, since the "take a single image, act on it" method may not be viable for next year's game (for example, if it were a clone of Lunacy).

One idea I have for accomplishing this is to run the vision processing in the dashboard, since it already has a connection to the cRIO, and just transmit the target info (distance, angle, etc.) to the robot.

Am I going in the right direction, or is there a better way to go about it?
__________________
2012 FLR Regional Champs, with 1507 and 191
#2
Unread 15-07-2012, 15:50
Todd is offline
Software Engineer
FRC #1071 (Team Max)
Team Role: Mentor
 
Join Date: Feb 2005
Rookie Year: 2004
Location: Connecticut, Wolcott
Posts: 51
Re: Vision Targeting on Laptop

Quote:
Originally Posted by Suitster View Post
One idea I have for accomplishing this is to run the vision processing in the dashboard, since it already has a connection to the cRIO, and just transmit the target info (distance, angle, etc.) to the robot.
That is indeed a perfectly viable way to go about it. The cRIO can be configured not to connect to the camera at all, saving CPU cycles there, while the dashboard connects to the camera directly, with the camera plugged into the wireless router on your robot. The dashboard can then communicate the data it derives back to the cRIO.
#3
Unread 15-07-2012, 16:50
Andrew Schreiber is offline
Joining the 900 Meme Team
FRC #0079
 
Join Date: Jan 2005
Rookie Year: 2000
Location: Misplaced Michigander
Posts: 4,068
Re: Vision Targeting on Laptop

Dunno about 1114, but 341 did. They also published a paper on it: http://www.chiefdelphi.com/media/papers/2676
#4
Unread 16-07-2012, 22:48
androb4 is offline
..is trying to take this year off.
AKA: Andrew A.
no team
Team Role: Alumni
 
Join Date: Feb 2010
Rookie Year: 2003
Location: Houston, TX
Posts: 220
Re: Vision Targeting on Laptop

That is exactly what I was working on these past few days, and I got it working just fine! I tested it today and found that it is WAY faster than processing on the cRIO.

What I basically did was take all the camera VIs from 'Begin.vi' in an FRC project and put them inside 'Vision Processing.vi' from the 'Rectangular Target Processing' template. I removed 'Set Registry.vi' and 'Get Registry.vi' and just wired the two sections together. Then I sent the 'Target Info' cluster/array through UDP port 1130 to 10.te.am.2 (the cRIO), and made a UDP receive VI in a separate loop.
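The UDP leg of this scheme can be sketched in textual form. This hedged Python equivalent assumes a made-up flat packet layout (two big-endian doubles for distance and angle); the real flattened LabVIEW 'Target Info' cluster is formatted differently:

```python
import socket
import struct

# Hypothetical flat layout: distance (meters) and angle (degrees) as
# big-endian doubles. This is NOT the LabVIEW cluster flattening; it just
# illustrates the send/receive pattern.
TARGET_FMT = ">dd"

def send_target_info(sock, addr, distance, angle):
    """Dashboard side: pack the derived target info and send it to the robot."""
    sock.sendto(struct.pack(TARGET_FMT, distance, angle), addr)

def recv_target_info(sock):
    """Robot side: block until one packet arrives, then unpack it."""
    data, _ = sock.recvfrom(64)
    return struct.unpack(TARGET_FMT, data)

# Loopback demo; on the field the receiver would bind on the cRIO
# (10.te.am.2) and the dashboard would send to UDP port 1130.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target_info(tx, rx.getsockname(), 3.25, -4.5)
distance, angle = recv_target_info(rx)
tx.close()
rx.close()
```

UDP is a reasonable choice here because a stale target packet is worthless anyway; dropping one and waiting for the next frame is usually better than TCP retransmission delays.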

I HIGHLY recommend this since you have a good processor on your driver station.

NOTE: This is VERY similar to on-board processing.

If you want, I can send you my code or some screenshots. I can't right now because I'm on an iPad.
__________________
FRC 441 Mentor 2012-2015
FRC 441 Alumni 2009-2012
FTC 4673 Alumni 2011-2012
FRC 1484 Alumni 2006-2008

#5
Unread 16-07-2012, 23:16
plnyyanks is offline
Data wins arguments.
AKA: Phil Lopreiato
FRC #1124 (The ÜberBots), FRC #2900 (The Mighty Penguins)
Team Role: College Student
 
Join Date: Apr 2010
Rookie Year: 2010
Location: NYC/Washington, DC
Posts: 1,114
Re: Vision Targeting on Laptop

Quote:
Originally Posted by androb4 View Post
What I basically did was take all the camera VIs from 'Begin.vi' in an FRC project and put them inside 'Vision Processing.vi' from the 'Rectangular Target Processing' template. I removed 'Set Registry.vi' and 'Get Registry.vi' and just wired the two sections together. Then I sent the 'Target Info' cluster/array through UDP port 1130 to 10.te.am.2 (the cRIO), and made a UDP receive VI in a separate loop.
Actually, there's an easier way to get the camera image on the dashboard. If you look in the default LV dashboard, the camera stream is already fetched from the robot. The code already has a Camera Read MJPEG VI in it, which you can also use to do processing. Sending the same image a second time over a different connection is unnecessary and bandwidth-heavy. I would assume that similar functionality exists in other dashboard software, although I don't have direct experience with them.
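To illustrate the point that the dashboard already has the stream in hand: an MJPEG stream is just a sequence of JPEG images, so a processing loop can tap it directly. Below is a stdlib-Python sketch (not the LabVIEW Camera Read MJPEG VI) that splits a raw MJPEG byte buffer into individual JPEG frames by scanning for the JPEG start-of-image/end-of-image markers; a production parser would instead honor the multipart boundary and Content-Length headers the Axis camera sends:

```python
def extract_jpeg_frames(buf: bytes):
    """Pull complete JPEG frames out of an MJPEG byte buffer by scanning
    for the JPEG SOI (FFD8) and EOI (FFD9) markers. Returns a tuple of
    (frames, leftover_bytes), where leftover is a trailing partial frame
    to prepend to the next network read. Simplified sketch: marker bytes
    can also occur inside entropy-coded data, which a real parser avoids
    by using the stream's multipart headers instead."""
    frames = []
    while True:
        start = buf.find(b"\xff\xd8")
        if start < 0:
            return frames, b""                 # no frame start; drop junk
        end = buf.find(b"\xff\xd9", start + 2)
        if end < 0:
            return frames, buf[start:]         # incomplete frame, keep it
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]
```

Each returned frame is a standalone JPEG that can be handed to any decoder or vision routine, so one fetch of the stream serves both display and processing.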
__________________
Phil Lopreiato - "It's a hardware problem"
Team 1124 (2010 - 2013), Team 1418 (2014), Team 2900 (2016)
FRC Notebook The Blue Alliance for Android
#6
Unread 16-07-2012, 23:43
Alpha Beta is online now
Strategy, Scouting, and LabVIEW
AKA: Mr. Aaron Bailey
FRC #1986 (Team Titanium)
Team Role: Coach
 
Join Date: Mar 2008
Rookie Year: 2007
Location: Lee's Summit, Missouri
Posts: 763
Re: Vision Targeting on Laptop

Quote:
Originally Posted by androb4 View Post
If you want, I can send you my code or some screenshots. I can't right now because I'm on an iPad.
Sounds like a bit of code that I'd like to take a look at. PM sent.
__________________
Regional Wins: 2016(KC), 2015(St. Louis, Queen City), 2014(Central Illinois, KC), 2013(Hub City, KC, Oklahoma City), 2012(KC, St. Louis), 2011(Colorado), 2010(North Star)
Regional Chairman's Award: 2014(Central Illinois), 2009(10,000 Lakes)
Engineering Inspiration: 2016(Smoky Mountain), 2012(Kansas City), 2011(Denver)
Dean's List Finalist 2016(Jacob S), 2014(Cameron L), 2013(Jay U), 2012(Laura S), 2011(Dominic A), 2010(Collin R)
Woodie Flowers Finalist 2013 (Aaron Bailey)
Championships: Sub-Division Champion (2016), Finalist (2013, 2010), Semifinalist (2014), Quarterfinalist (2015, 2012, 2011)
Other Official Awards: Gracious Professionalism (2013) Entrepreneurship (2013), Quality (2015, 2015, 2013), Engineering Excellence (Champs 2013, 2012), Website (2011), Industrial Design (Archimedes/Tesla 2016, 2016, 2015, Newton 2014, 2013, 2011), Innovation in Control (2014, Champs 2010, 2010, 2008, 2008), Imagery (2009), Regional Finalist (2016, 2015, 2008)
#7
Unread 17-07-2012, 00:00
Greg McKaskle is offline
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,752
Re: Vision Targeting on Laptop

Hmm. There may have been a good reason why the example vision code demonstrated both PC and cRIO processing: both are valid approaches. The cRIO is roughly an 800-MIPS computer with potentially lots of other stuff to do. A laptop with an Atom is around 3300 MIPS and may have very little else to do during a match. Be mindful of latency, though. Make sure to measure the entire processing loop, from acquisition to response.

Greg McKaskle
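The advice to measure the full acquisition-to-response loop can be sketched generically. In this hedged Python sketch, `acquire`, `process`, and `act` are placeholder callables standing in for the camera grab, the vision code, and the robot command; none of these are real FRC APIs:

```python
import time

def measure_loop_latency(acquire, process, act):
    """Time one complete vision cycle: grab a frame, process it, and
    issue the resulting command. Returns end-to-end latency in ms."""
    t0 = time.monotonic()
    frame = acquire()          # camera grab (placeholder)
    targets = process(frame)   # vision processing (placeholder)
    act(targets)               # send command to the robot (placeholder)
    return (time.monotonic() - t0) * 1000.0

# Example with stub stages; real numbers come from instrumenting the
# actual dashboard and robot code.
latency_ms = measure_loop_latency(lambda: b"frame",
                                  lambda f: {"distance": 3.0},
                                  lambda t: None)
```

The key design point is timestamping around the whole pipeline, not just the image-processing step: network hops and camera exposure time often dominate the algorithm itself.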
#8
Unread 18-07-2012, 07:53
Gdeaver is offline
Registered User
FRC #1640
Team Role: Mentor
 
Join Date: Mar 2004
Rookie Year: 2001
Location: West Chester, Pa.
Posts: 1,367
Re: Vision Targeting on Laptop

If I remember correctly, NI Vision is locked to a single processor; it doesn't take advantage of multiple cores or the GPU. OpenCV can be configured to use all the available cores and can compile in CUDA extensions to use an NVIDIA GPU. This is only doable on an Intel processor at this time, and the extensions are not all free. The image routines that were used this year are not really that intensive. However, with vision becoming more critical in the FRC game, teams may find the need to maximize CPU and GPU usage on their driver station or on-robot processor. Also remember that, given the issues at champs, the methods and volume of wireless communications may be very different next year. Exploring vision in the off-season would be time well spent.
#9
Unread 18-07-2012, 11:39
Greg McKaskle is offline
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,752
Re: Vision Targeting on Laptop

If your memory is based on old versions, that may be accurate, but for about four years, core algorithms have been multicore aware. I believe the default is to spread over all cores. There is a VI called "IMAQ Multi-Core Options" that can be used to view or modify the number of cores you want the algorithms to use. Additionally, the algorithms have been SIMD, MMX and SSE capable for a dozen years -- as those capabilities were available in the processors.

As you point out, OpenCV is very customizable and if you need to tune for a platform, it is a very nice tool. But IMAQ is no slouch on performance, and should not be characterized as single-core. This is even more the case if it is used within LV, where multitasking is relatively easy and safe to carry out.

Greg McKaskle
#10
Unread 18-07-2012, 21:13
Gdeaver is offline
Registered User
FRC #1640
Team Role: Mentor
 
Join Date: Mar 2004
Rookie Year: 2001
Location: West Chester, Pa.
Posts: 1,367
Re: Vision Targeting on Laptop

Yes, it seems I'm out of date. I'll look into this with our student programmer this summer.

Maybe you can help with this: is there any way to work with a USB webcam in LabVIEW? Here is one of our thoughts. This year our drivers had trouble seeing balls that were blocked by the bridge and robot traffic. Our camera was aimed and focused on the upper basket, so it was of no use for a field view of the balls. We tried two LAN webcams and could not overcome lag and latency issues, so we are looking at using inexpensive USB webcams to give a field view. Our student programmer received a free VIA micro-ITX automotive box to work with for vision.

The reason for our interest in OpenCV is that it is easy to have multiple USB cams attached, switch between them, and send one stream to the dashboard. Could this be accomplished in LabVIEW? Our programmer put a lot of time into LabVIEW vision, and it would be easier to continue on with LabVIEW development.
#11
Unread 18-07-2012, 22:41
Greg McKaskle is offline
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,752
Re: Vision Targeting on Laptop

Quote:
Originally Posted by Gdeaver View Post
...Is there any way to work with a USB webcam in LabVIEW? ... We tried two LAN webcams and could not overcome lag and latency issues. We are looking at using inexpensive USB webcams to give a field view. .. Could this be accomplished in LabVIEW? ...
On the FIRST cRIO, there is no USB port, so that is a bit of a problem. IMAQ vision is actually broken into a few components: there is the vision analysis library, and there are several acquisition drivers. The IMAQdx driver on Windows is able to use all of the USB webcams that fit the Windows camera spec, which I can't remember the name of. I don't know whether the dx driver was part of the kit last year, but you can always ask support to see if it is available.

The next thing to consider is that USB camera streams are not compressed, meaning that if you do get the camera into an external laptop or board, you will need to spend CPU to compress the stream before transmitting it to the dashboard. By comparison, the Axis IP camera's stream is already compressed by hardware on the camera and can be sent to the dashboard with no compression overhead.
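The cost of an uncompressed stream is easy to put numbers on. A quick back-of-the-envelope in Python (assuming raw 24-bit RGB at 30 fps; many USB webcams actually deliver YUV formats at somewhat lower rates):

```python
def raw_stream_bytes_per_sec(width, height, bytes_per_pixel=3, fps=30):
    """Bandwidth of an uncompressed video stream: every pixel, every frame."""
    return width * height * bytes_per_pixel * fps

# A 640x480 raw RGB stream at 30 fps:
raw = raw_stream_bytes_per_sec(640, 480)   # 27,648,000 bytes/s, ~26 MB/s
```

Even a modest MJPEG compression ratio cuts that by an order of magnitude or more, which is why an on-camera hardware encoder (as on the Axis) matters so much for anything sent over the robot radio.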

As for the lag and latency, can you give more details on how it was set up? A few years ago, I hooked three cameras up to the D-Link switch and requested all of them for the dashboard. Two large ones worked fine; the limiting factor was the old laptop's CPU, which couldn't decompress and draw three different video streams. Anyway, it should be no problem doing two.

Greg McKaskle
#12
Unread 19-07-2012, 19:42
Todd is offline
Software Engineer
FRC #1071 (Team Max)
Team Role: Mentor
 
Join Date: Feb 2005
Rookie Year: 2004
Location: Connecticut, Wolcott
Posts: 51
Re: Vision Targeting on Laptop

Quote:
Originally Posted by Greg McKaskle View Post
Anyway, it should be no problem doing two.

Greg McKaskle
Indeed. I've helped with multiple robots this year that had two KOP Axis cameras being read directly from the dashboard, without any issue.