11-03-2016, 18:04
ThomasClark
Registered User
FRC #0237
 
Join Date: Dec 2012
Location: Watertown, CT
Posts: 146
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by Andrew Schreiber View Post
That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).
That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue and it doesn't look too hard, I can try implementing it.

Quote:
Originally Posted by Arhowk View Post
I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a realtime processing agent in the pits and then just copying over the HSV thresholds, erosion kernels, etc. to the CV code.
Cool. One of GRIP's often-overlooked use cases is as a prototyping tool. For people who'd rather write their own OpenCV code for efficiency, portability, or educational purposes, GRIP is still useful for laying out an algorithm and experimenting with parameters.
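For teams going that route, the translation is mostly mechanical: copy the slider values out of the GRIP GUI into your own code. Here's a rough sketch of the idea (plain NumPy rather than OpenCV, so it's self-contained; the HSV ranges are hypothetical numbers, not ones from any real pipeline — substitute whatever you tuned in GRIP):

```python
import numpy as np

# Hypothetical HSV ranges copied from GRIP's threshold sliders
# (e.g. for a green retroreflective target). Use your own values.
HSV_MIN = np.array([50, 100, 100])
HSV_MAX = np.array([90, 255, 255])

def hsv_threshold(hsv, lo=HSV_MIN, hi=HSV_MAX):
    """Binary mask: True where H, S, and V all fall inside the tuned ranges.
    Equivalent in spirit to GRIP's HSV Threshold step (cv2.inRange)."""
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

def erode(mask):
    """One pass of 3x3 erosion, like GRIP's Erode step with a 3x3 kernel.
    A pixel survives only if all 9 pixels in its neighborhood are set."""
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + h, dx:dx + w]
    return out
```

In real robot code you'd call `cv2.inRange` and `cv2.erode` instead, but the point is the same: the parameters GRIP helped you find drop straight into the hand-written pipeline.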

Quote:
Originally Posted by Arhowk View Post
  1. GRIP, if run on the dashboard, requires sending camera data over a second time in addition to the DS stream, which clogs up bandwidth and laptop CPU
  2. GRIP, if run on the RIO, never even worked for us. It gave us an error and the program never wrote to NetworkTables.
  3. GRIP on the RIO also requires installing and running the Java VM, which is quite a lot of overhead if you aren't a Java team
  4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver or robot code if used
  5. You learn more if you do it by hand! It's not hard. (Getting an MJPEG is a pain, though.)
1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension. Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, like some teams have managed to do, Raspberry Pi) helps a lot.
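Wherever GRIP ends up running, once it publishes its contour report to NetworkTables the robot-side math is small. A minimal sketch of turning the published centerX into a signed aiming error, assuming a hypothetical 320px-wide image and 60° horizontal field of view (both made-up numbers — plug in your camera's actual specs):

```python
# Hypothetical camera parameters -- substitute your own.
IMAGE_WIDTH = 320   # pixels
FOV_DEG = 60.0      # horizontal field of view, degrees

def aim_error_deg(center_x, image_width=IMAGE_WIDTH, fov_deg=FOV_DEG):
    """Approximate horizontal aiming error in degrees from a contour's
    centerX. Zero when the target is centered; negative means the target
    is left of center. Linear approximation, fine for small angles."""
    half = image_width / 2.0
    return (center_x - half) / half * (fov_deg / 2.0)
```

The robot code would read centerX from NetworkTables each loop and feed `aim_error_deg` into a turn controller — the vision processing itself stays on the coprocessor.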
__________________
GRIP (Graphically Represented Image Processing) - rapidly develop computer vision algorithms for FRC