#1
11-01-2017, 02:11
LaLaLand
Registered User
FRC #5015
Join Date: Jan 2017
Location: Calgary
Posts: 4
Lost and Confused about Grip, Java, roboRIO, and Network Tables

Hi all,

Here is my understanding, from my research so far, of GRIP, Java, the roboRIO, and NetworkTables. Can someone help me with a high-level diagram and confirm whether I understand the architecture correctly?



However, according to this page - http://wpilib.screenstepslive.com/s/...8-new-for-2017
We no longer recommend using the GRIP deploy tool for roboRIO or Raspberry Pi processors due to issues seen by many teams running out of resources.

Does this mean that I can still use GRIP with the roboRIO, but I must use the generated Java code instead of the deploy tool?

Can someone confirm my step-by-step "CV for dummies" below? My intention is to produce a document with step-by-step instructions that anyone can follow without any prior knowledge of CV coding, architecture, etc. Maybe there is already such a document out there that you can share with us.

1) Use GRIP to generate Java code, see below



The Publish ContoursReport step will publish the selected values to NetworkTables. A good way of determining exactly what's published is to run the Outline Viewer program, one of the tools in the <username>/wpilib/tools directory after you have installed the Eclipse plugins. (A rough sketch of reading these published values on the robot side follows this list.)

2) Launch the Outline Viewer program from the Eclipse plugins, see picture below. What do I need to put in the parameter box?



3) Correct for the perspective distortion using OpenCV.

4) Perform calibration

5) Add some baking powder and a target box will magically appear on the driver panel.
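
Here is my rough, untested sketch of how I think the robot code would read what the Publish ContoursReport step puts on NetworkTables, using the 2017 WPILib API. I'm assuming the default table name GRIP/myContoursReport and the default centerX/area keys; please correct me, and confirm the actual names with Outline Viewer.

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripContoursReader {
    // GRIP publishes contour reports under the "GRIP" table; the sub-table name
    // is whatever you typed into the Publish ContoursReport step (assumed here).
    private final NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

    /** Returns the center x (in pixels) of the largest published contour, or -1 if none. */
    public double largestContourCenterX() {
        double[] centerX = table.getNumberArray("centerX", new double[0]);
        double[] area = table.getNumberArray("area", new double[0]);
        int best = -1;
        double bestArea = 0;
        for (int i = 0; i < Math.min(centerX.length, area.length); i++) {
            if (area[i] > bestArea) {
                bestArea = area[i];
                best = i;
            }
        }
        return best >= 0 ? centerX[best] : -1;
    }
}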

Can someone help us by providing a process flow diagram and a checklist, since this is the first time our team is planning to implement computer vision? I am certain there are many teams in a similar situation to ours.

Your comments and guidance are greatly appreciated.
#2
11-01-2017, 02:59
wsh32 (Wesley Soo-Hoo)
The Nerdiest of the Nerd Herd
FRC #0687 (The Nerd Herd)
Team Role: Leadership
Join Date: Sep 2014
Rookie Year: 2014
Location: SoCal
Posts: 16
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Check out team 254's vision seminar at champs last year: https://www.youtube.com/watch?v=rLwOkAJqImo

To answer your question: if you want to see the data coming out of GRIP for debugging, use localhost or 127.0.0.1.

A couple of key points about processing the video stream:
1. Underexpose your camera. 254 goes into more detail about why in the video, but it does wonders.
2. You can also eliminate noise by applying a Gaussian blur and running CV erode followed by CV dilate (with the same number of iterations); a sketch is below. I'd also use the Filter Contours block and filter by area.
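
Roughly, in OpenCV terms (the Java bindings that ship with WPILib 2017), that cleanup step looks something like this. The kernel size and iteration counts are placeholders to tune for your own camera, not values we used.

Code:
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class NoiseReduction {
    /** Blur, then erode and dilate with the same iteration count to remove speckle. */
    public static Mat clean(Mat mask) {
        Mat out = new Mat();
        Imgproc.GaussianBlur(mask, out, new Size(5, 5), 0);
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        // Erode removes small noise blobs; dilating by the same amount restores
        // the size of the blobs you kept.
        Imgproc.erode(out, out, kernel, new Point(-1, -1), 2);
        Imgproc.dilate(out, out, kernel, new Point(-1, -1), 2);
        return out;
    }
}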

Also, make sure your camera is mounted on the centerline of your robot; it helps a lot when it comes to alignment. Less important, but keeping your camera behind your center of rotation makes your perceived alignment angle smaller, which helps reduce oscillation.

On 687 last year, a few problems kept our vision system from working. First, the camera had a lot of latency, so make sure you take camera latency into account when you write your vision system. Also, make sure your drive PID loop is tuned well; a badly tuned loop will make your drivebase spin around randomly. (A rough sketch of a simple turn controller is below.)
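
To be concrete, here is a minimal sketch of the kind of turn-to-angle loop I mean: a plain proportional controller driving the robot toward a gyro heading setpoint that your vision code supplies. The gain, deadband, gyro model, and drive class are all assumptions to adapt to your own robot, not our code.

Code:
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.RobotDrive;

public class TurnToAngle {
    private static final double kP = 0.03;          // placeholder gain, tune on your robot
    private static final double DEADBAND_DEG = 1.0; // stop turning when this close

    private final ADXRS450_Gyro gyro;
    private final RobotDrive drive;

    public TurnToAngle(ADXRS450_Gyro gyro, RobotDrive drive) {
        this.gyro = gyro;
        this.drive = drive;
    }

    /** Call every loop; returns true once the heading error is inside the deadband. */
    public boolean turnToward(double targetHeadingDeg) {
        double error = targetHeadingDeg - gyro.getAngle();
        if (Math.abs(error) < DEADBAND_DEG) {
            drive.arcadeDrive(0.0, 0.0);
            return true;
        }
        // You may need to flip the sign of the rotate value for your drivetrain.
        drive.arcadeDrive(0.0, kP * error);
        return false;
    }
}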

I hope this helps, good luck this season!
__________________


FRC 687 - Lead Programmer (2015-2016), Team Captain (2016-present)
#3
11-01-2017, 10:45
AriMindell
Registered User
FRC #1389 (The Body Electric)
Team Role: Programmer
Join Date: May 2016
Rookie Year: 2015
Location: Maryland
Posts: 28
So you are correct that running CV on the roboRIO is an easy option thanks to the changes in WPILib 2017, but it isn't the only way. There are many options for where your camera stream can come from and where it can be processed.
1) MJPEG feed from the roboRIO, GRIP on the driver station, contours report returned to the roboRIO via NetworkTables. I would say this was the easiest way to use GRIP before the 2017 update. Pros: lots of processing power from the laptop, easy to debug by watching the GRIP pipeline. Cons: relatively high latency because the camera stream has to go over the radio.
2) MJPEG feed from the roboRIO, OpenCV running on a coprocessor. Deploying GRIP was the way to do this before, but now you can write a Java/C++ program that runs the exported GRIP code. Pros: lower latency than the driver station without sacrificing processing power on the roboRIO. Cons: debugging is harder because you can only see the end result of the GRIP code, not the pipeline. You can use a laptop to tune your GRIP code before you put it on the coprocessor.
3) All vision processing done on the roboRIO, as you mentioned. Pros: zero network latency, very easy to integrate the results of the vision processing with motion control. Cons: takes a lot of processing power away from robot control (we haven't tested this with the new GRIP code generation yet, so I don't know how bad it is).
When I talk about an MJPEG stream, that can come from one of two places: if you have an Axis camera, it creates an MJPEG stream that's available anywhere on the network it's connected to. If you have a USB camera, you can use the new WPILib CameraServer to convert it to an MJPEG stream (see the sketch below).
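
For the USB-camera case, a minimal sketch with the 2017 WPILib API looks something like this; the resolution and FPS values are just placeholder numbers to keep radio bandwidth reasonable.

Code:
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;

public class Robot extends IterativeRobot {
    @Override
    public void robotInit() {
        // Starts capturing from USB camera 0 and serves it as an MJPEG stream
        // that GRIP, a coprocessor, or the dashboard can consume.
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(320, 240);
        camera.setFPS(15);
    }
}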
Good luck! I think people will really benefit from a beginners guide like this.


#4
11-01-2017, 12:08
onenerdyguy
Registered User
FRC #5929
Join Date: Jan 2016
Location: Lake Park, MN
Posts: 56
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Is it possible to use something like the Kangaroo PC, put it on the robot and run GRIP on it as a coprocessor, with the cameras plugged directly into it via USB, publishing to NetworkTables?

That way, you get minimal latency by keeping everything on the wired LAN instead of going over the radio, but you also get the processing power of a full Windows PC, like the driver station. Or is there a rule I'm missing that says the camera has to be plugged into the roboRIO?
#5
11-01-2017, 12:12
AustinShalit (אוסטין)
Registered User
no team (WPILib Suite Developer)
Join Date: Dec 2013
Rookie Year: 2008
Location: Los Angeles/Worcester/Israel
Posts: 152
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Quote:
Originally Posted by onenerdyguy
Is it possible to use something like the Kangaroo PC, put it on the robot and run GRIP on it as a coprocessor, with the cameras plugged directly into it via USB, publishing to NetworkTables?

That way, you get minimal latency by keeping everything on the wired LAN instead of going over the radio, but you also get the processing power of a full Windows PC, like the driver station. Or is there a rule I'm missing that says the camera has to be plugged into the roboRIO?
That is actually one of the recommended ways of running GRIP.

http://wpilib.screenstepslive.com/s/...garoo-computer
#6
11-01-2017, 12:26
onenerdyguy
Registered User
FRC #5929
Join Date: Jan 2016
Location: Lake Park, MN
Posts: 56
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

That's what I figured, just wanted clarification. Thanks!
#7
11-01-2017, 13:32
AriMindell
Registered User
FRC #1389 (The Body Electric)
Team Role: Programmer
Join Date: May 2016
Rookie Year: 2015
Location: Maryland
Posts: 28
Quote:
Originally Posted by onenerdyguy
Is it possible to use something like the Kangaroo PC, put it on the robot and run GRIP on it as a coprocessor, with the cameras plugged directly into it via USB, publishing to NetworkTables?

That way, you get minimal latency by keeping everything on the wired LAN instead of going over the radio, but you also get the processing power of a full Windows PC, like the driver station. Or is there a rule I'm missing that says the camera has to be plugged into the roboRIO?

This is basically what I meant by option 2, with the caveat that you are suggesting connecting the camera directly to the coprocessor rather than to the roboRIO. This is effectively the same in terms of latency, since the coprocessor and the roboRIO are connected via Ethernet, but your version may be slightly easier to program.
Both ways absolutely work though.


#8
11-01-2017, 15:14
LaLaLand
Registered User
FRC #5015
Join Date: Jan 2017
Location: Calgary
Posts: 4
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Quote:
Originally Posted by wsh32
Check out team 254's vision seminar at champs last year: https://www.youtube.com/watch?v=rLwOkAJqImo

To answer your question: if you want to see the data coming out of GRIP for debugging, use localhost or 127.0.0.1.

A couple of key points about processing the video stream:
1. Underexpose your camera. 254 goes into more detail about why in the video, but it does wonders.
2. You can also eliminate noise by applying a Gaussian blur and running CV erode followed by CV dilate (with the same number of iterations). I'd also use the Filter Contours block and filter by area.

Also, make sure your camera is mounted on the centerline of your robot; it helps a lot when it comes to alignment. Less important, but keeping your camera behind your center of rotation makes your perceived alignment angle smaller, which helps reduce oscillation.

I hope this helps, good luck this season!
wsh32,

Thanks for the link. I made some screen captures as a quick summary. Is there a way I can attach the document here to share with everyone?

These are the takeaway points:

1) Make sure the camera is mounted in the center of the robot.
2) Different cameras and hardware have different advantages and disadvantages; the implementation depends on team resources. We are attempting the easiest solution (camera on the roboRIO), accepting its limited processing power and the resulting latency.
3) Use a library such as OpenCV, NIVision, or GRIP.
4) Use HSV (hue, saturation, value) instead of RGB.
5) Turn down the exposure time, since we want a dark image (don't overexpose).
6) Tune hue first, then saturation, then value. Look for high saturation (color intensity) and then tune V/L (brightness) to the LED ring setup. If done properly, we will not need to calibrate on the field.
7) Convert pixel coordinates into real-world coordinates (angular displacement). Use the angle as the setpoint for a controller with a faster sensor; in our case, a gyro. There will be a latency issue, but we will have to accept it given our limited resources and knowledge.
8) Use a linear approximation from pixels to degrees. The pinhole model is more exact and will be implemented in future years. (See the sketch after this list.)
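
To make points 7 and 8 concrete, here is my rough sketch of both conversions; the 320-pixel image width and 60-degree horizontal field of view are placeholder numbers I picked for illustration, so measure your own camera's values.

Code:
public class PixelToAngle {
    // Placeholder camera numbers; measure your own resolution and horizontal FOV.
    static final double IMAGE_WIDTH_PX = 320.0;
    static final double HORIZONTAL_FOV_DEG = 60.0;
    // Pinhole focal length in pixels: (width / 2) / tan(FOV / 2).
    static final double FOCAL_LENGTH_PX =
            (IMAGE_WIDTH_PX / 2.0) / Math.tan(Math.toRadians(HORIZONTAL_FOV_DEG / 2.0));

    /** Point 8's linear approximation: degrees-per-pixel times the offset from center. */
    static double linearYawDegrees(double targetCenterX) {
        return (targetCenterX - IMAGE_WIDTH_PX / 2.0) * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }

    /** The pinhole-model version we plan to try in future years. */
    static double pinholeYawDegrees(double targetCenterX) {
        return Math.toDegrees(Math.atan((targetCenterX - IMAGE_WIDTH_PX / 2.0) / FOCAL_LENGTH_PX));
    }
}

Either result would then become the gyro setpoint from point 7 (current heading plus the measured yaw angle).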

See the best-bets summary below.

#9
11-01-2017, 15:25
LaLaLand
Registered User
FRC #5015
Join Date: Jan 2017
Location: Calgary
Posts: 4
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Quote:
Originally Posted by AriMindell
So you are correct that running CV on the roboRIO is an easy option thanks to the changes in WPILib 2017, but it isn't the only way. There are many options for where your camera stream can come from and where it can be processed.
1) MJPEG feed from the roboRIO, GRIP on the driver station, contours report returned to the roboRIO via NetworkTables. I would say this was the easiest way to use GRIP before the 2017 update. Pros: lots of processing power from the laptop, easy to debug by watching the GRIP pipeline. Cons: relatively high latency because the camera stream has to go over the radio.
2) MJPEG feed from the roboRIO, OpenCV running on a coprocessor. Deploying GRIP was the way to do this before, but now you can write a Java/C++ program that runs the exported GRIP code. Pros: lower latency than the driver station without sacrificing processing power on the roboRIO. Cons: debugging is harder because you can only see the end result of the GRIP code, not the pipeline. You can use a laptop to tune your GRIP code before you put it on the coprocessor.
3) All vision processing done on the roboRIO, as you mentioned. Pros: zero network latency, very easy to integrate the results of the vision processing with motion control. Cons: takes a lot of processing power away from robot control (we haven't tested this with the new GRIP code generation yet, so I don't know how bad it is).
When I talk about an MJPEG stream, that can come from one of two places: if you have an Axis camera, it creates an MJPEG stream that's available anywhere on the network it's connected to. If you have a USB camera, you can use the new WPILib CameraServer to convert it to an MJPEG stream.
Good luck! I think people will really benefit from a beginners guide like this.

AriMindell,
Thanks for your contribution. What are your thoughts on activating vision processing only when a button is pushed? That way we are not taking processing power away from robot control: when the robot is stopped and we are ready to line up with the target, we activate vision processing. (A rough sketch of what I mean is below.)
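
For illustration, this is roughly what I have in mind; it assumes a GRIP-generated pipeline class (here called GripPipeline) and uses the 2017 WPILib CameraServer/CvSink API. The button and port numbers are placeholders, and in a real robot you would probably move the processing into a separate thread so it doesn't slow the main loop.

Code:
import org.opencv.core.Mat;

import edu.wpi.cscore.CvSink;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.Joystick;

public class Robot extends IterativeRobot {
    private Joystick stick;
    private CvSink frameGrabber;
    private GripPipeline pipeline;        // class exported by GRIP's code generation
    private final Mat frame = new Mat();

    @Override
    public void robotInit() {
        stick = new Joystick(0);
        CameraServer.getInstance().startAutomaticCapture();
        frameGrabber = CameraServer.getInstance().getVideo();
        pipeline = new GripPipeline();
    }

    @Override
    public void teleopPeriodic() {
        // Only burn CPU on vision while the driver holds button 1.
        if (stick.getRawButton(1) && frameGrabber.grabFrame(frame) != 0) {
            pipeline.process(frame);
            // ...use pipeline.filterContoursOutput() to compute the heading setpoint.
        }
    }
}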
#10
11-01-2017, 15:32
LaLaLand
Registered User
FRC #5015
Join Date: Jan 2017
Location: Calgary
Posts: 4
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Quote:
Originally Posted by AustinShalit
That is actually one of the recommended ways of running GRIP.

http://wpilib.screenstepslive.com/s/...garoo-computer
AustinShalit,

Thanks for your comments. For this thread, we are keeping it simple by implementing everything on the roboRIO. In future documentation, I will attempt to document the other methods and hardware configurations step by step.
#11
11-01-2017, 15:51
Jared Russell
Taking a year (mostly) off
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Team Role: Engineer
Join Date: Nov 2002
Rookie Year: 2001
Location: San Francisco, CA
Posts: 3,082
Re: Lost and Confused about Grip, Java, roboRIO, and Network Tables

Quote:
Originally Posted by wsh32
2. You can also eliminate noise by applying a Gaussian blur and running CV erode followed by CV dilate (with the same number of iterations).
There's another way to do this that is embarrassingly low tech.

If your camera has manually adjustable focus, you can deliberately defocus your lens just a bit to get a little optical blur. You lose a little crispness near the edges of the target, but if you are just trying to find the center of a blob with very coarse shape filtering (e.g. size or aspect ratio), it doesn't have a significant effect on accuracy.
#12
11-01-2017, 18:31
AriMindell
Registered User
FRC #1389 (The Body Electric)
Team Role: Programmer
Join Date: May 2016
Rookie Year: 2015
Location: Maryland
Posts: 28
Quote:
Originally Posted by LaLaLand
AriMindell,

Thanks for your contribution. What are your thoughts on activating vision processing only when a button is pushed? That way we are not taking processing power away from robot control: when the robot is stopped and we are ready to line up with the target, we activate vision processing.


That sounds perfectly doable to me, but you should still expect to deal with a small amount of latency. One easy solution is latency correction, as discussed in the 254 lecture someone already linked here. Some latency-correction support is available for the navX using the SF2 framework. (A rough sketch of the basic idea is below.)
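
The basic idea is just to remember recent gyro headings and match a vision result to the heading the robot had when the frame was captured. Here is a minimal, untested sketch of that; how you obtain the frame's capture timestamp and the 100-sample buffer size are assumptions, and SF2 packages a more complete version of this.

Code:
import java.util.ArrayDeque;
import java.util.Deque;

import edu.wpi.first.wpilibj.Timer;

public class HeadingHistory {
    private static class Sample {
        final double time;
        final double headingDeg;
        Sample(double time, double headingDeg) {
            this.time = time;
            this.headingDeg = headingDeg;
        }
    }

    private final Deque<Sample> history = new ArrayDeque<>();

    /** Call every robot loop with the current gyro heading. */
    public void record(double headingDeg) {
        history.addLast(new Sample(Timer.getFPGATimestamp(), headingDeg));
        while (history.size() > 100) {
            history.removeFirst();
        }
    }

    /** Heading closest in time to when the camera frame was captured. */
    public double headingAt(double captureTime) {
        if (history.isEmpty()) {
            return 0.0;
        }
        Sample best = history.peekLast();
        for (Sample s : history) {
            if (Math.abs(s.time - captureTime) < Math.abs(best.time - captureTime)) {
                best = s;
            }
        }
        return best.headingDeg;
    }
}

Your turn setpoint then becomes headingAt(captureTime) plus the yaw angle measured in that frame, rather than the current heading plus a stale yaw.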

