16-07-2012, 23:16
plnyyanks
Data wins arguments.
AKA: Phil Lopreiato
FRC #1124 (The ÜberBots), FRC #2900 (The Mighty Penguins)
Team Role: College Student
 
Join Date: Apr 2010
Rookie Year: 2010
Location: NYC/Washington, DC
Posts: 1,113
Re: Vision Targeting on Laptop

Quote:
Originally Posted by androb4 View Post
What I basically did was take all the camera VIs from the 'Begin.vi' in an FRC Project and put them inside the 'Vision Processing.vi' from the 'Rectangular Target Processing' template. I removed the 'Set Registry.vi' and 'Get Registry.vi' and just wired the two sections together. Then I sent the 'Target Info' cluster/array through UDP port 1130 to 10.te.am.2 (the cRIO), and made a UDP receive VI in a separate loop.
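The quoted approach boils down to one UDP datagram per processed frame, sent from the laptop to the cRIO. A rough text-language equivalent of that send/receive pair, sketched in Python rather than LabVIEW (the JSON payload format, function names, and the example team IP are illustrative assumptions, not what the original VIs actually serialize):

```python
import json
import socket

# 10.TE.AM.2 is the cRIO's address; 10.11.24.2 shown for team 1124.
CRIO_ADDR = ("10.11.24.2", 1130)

def send_target_info(targets, addr=CRIO_ADDR):
    """Laptop side: serialize the target info and send it as one UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = json.dumps(targets).encode("utf-8")
    sock.sendto(payload, addr)

def receive_target_info(port=1130, timeout=1.0):
    """Robot side: block for one datagram and decode it (one loop iteration)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout)
    data, _addr = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))
```

UDP fits here because a stale target fix is useless; dropping a frame's result is better than waiting for a retransmit.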
Actually, there's an easier way to get the camera image on the dashboard. If you look at the default LabVIEW dashboard, the camera stream is already fetched from the robot: the code already contains a Camera Read MJPEG VI, and you can run your processing on that image. Sending the same image a second time over a separate connection is unnecessary and bandwidth-heavy. I would assume similar functionality exists in other dashboard software as well, although I don't have direct experience with them.
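For anyone curious what the Camera Read MJPEG VI is doing under the hood: an MJPEG-over-HTTP stream is just a sequence of JPEG images, each delimited by the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers. A minimal frame-splitting sketch in Python (not the LabVIEW implementation; the function name and buffer handling are my own illustration):

```python
def extract_frame(buf):
    """Pull one complete JPEG out of an MJPEG byte buffer.

    Returns (jpeg_bytes, remainder) if a full frame is present,
    or (None, buf) if more data is still needed.
    """
    start = buf.find(b"\xff\xd8")          # JPEG start-of-image marker
    if start == -1:
        return None, buf
    end = buf.find(b"\xff\xd9", start + 2)  # end-of-image marker after the start
    if end == -1:
        return None, buf                    # frame not fully received yet
    return buf[start:end + 2], buf[end + 2:]
```

In practice you would append each chunk read from the camera's HTTP response to `buf`, call `extract_frame` until it returns `None`, and hand each returned JPEG to your vision processing. Doing this once on the dashboard avoids the duplicate stream the quoted setup creates.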
__________________
Phil Lopreiato - "It's a hardware problem"
Team 1124 (2010 - 2013), Team 1418 (2014), Team 2900 (2016)
FRC Notebook | The Blue Alliance for Android