23-01-2017, 12:25
wlogeais
Registered User
FRC #2177 (The Robettes)
Team Role: Mentor
 
Join Date: Feb 2016
Rookie Year: 2011
Location: Minnesota
Posts: 18
Re: Will this vision processing code work? / Confusion about NetworkTables

Quote:
Originally Posted by Lesafian View Post
Hi, everyone. I am currently the only programmer for my team, and we have no programming mentors, so bare with me. ... I decided to tinker around and just start programming.
Sorry you have no programming mentor; your attitude so far is a great asset!
I’m going to break my response into general tips and examples, and then a mini code review of the details.

For general vision tips: last year it was a challenge to run GRIP on the roboRIO unless you needed only one camera, and only for a brief time (in autonomous). That is why the Pi/Kangaroo options were very popular.

This year, look for the “GRIP – Generate Code” instructions in the screensteps-2017 documentation. Read (and use) the whole example. Then look specifically for the VisionThread() “->” segment of code; copy it and modify that lambda function. This lets you build and test your pipeline on your programming computer and then take that pipeline code into your team's Java project. Among other benefits, this allows for lower refresh rates, or for two cameras (or other reasons for two pipelines, via VisionRunner runOnce() rather than VisionThread start()).
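The key part of that generated-code pattern is a listener lambda that runs on the vision thread and hands its result to the robot loop under a lock. Here is a minimal self-contained sketch of that handoff; FakePipeline and the field names are stand-ins for illustration, not the generated code (in a real robot project you would use WPILib's VisionThread and the GripPipeline class that “GRIP – Generate Code” exports):

```java
import java.util.function.Consumer;

// Stand-in for the GRIP-exported pipeline class (assumption, for illustration).
class FakePipeline {
    double targetCenterX;                     // pretend result of one pipeline pass
    void process(double measuredX) { targetCenterX = measuredX; }
}

public class VisionHandoff {
    private final Object imgLock = new Object();
    private double centerX = -1;              // shared between vision thread and robot loop

    // The listener lambda you would pass to VisionThread: it runs on the
    // vision thread, so it must publish results under the lock.
    final Consumer<FakePipeline> listener = pipeline -> {
        synchronized (imgLock) {
            centerX = pipeline.targetCenterX;
        }
    };

    // Called from the robot loop (e.g. teleopPeriodic) to read the latest value.
    double latestCenterX() {
        synchronized (imgLock) {
            return centerX;
        }
    }
}
```

The synchronized block matters: the vision thread and the robot loop run concurrently, so shared values need a lock (or an AtomicReference) on both sides.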

That said, if you already have electrical/IP plans for a Kangaroo, that still works; that approach still requires the NetworkTables code you have.

Lastly, also do a Google search for “frc vision hot or cold”. Among the results, after the Chief Delphi links, you should see a screensteps-2017 page that contains a great section on processing the (GRIP-like) contours to evaluate which contour is the best target, plus other code covering what to do with good/bad target values.

~~~~ Part 2 ~~~~ regarding your test code.
A good start.

Your imageCenter won't be useful as calculated. Logically you need to be dealing with either area, x ratios/values, or y ratios/values (or, more advanced, the x/y ratio used in target evaluation).

Based on that, x-center (versus imageXcenter) works well for robot-drive rotate, while x-width or y-height can be used for robot-drive closer-or-further. Prioritize one, then the other; you might not need both.
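To make that concrete, here is a small sketch of the two error terms. The image width, desired target width, and normalization are made-up values for illustration, not numbers from your code; you would substitute your camera's resolution and a tuned setpoint:

```java
public class TargetMath {
    static final double IMAGE_WIDTH = 320.0;    // assumed camera resolution (px)
    static final double DESIRED_WIDTH = 80.0;   // assumed "close enough" target width (px)

    // Rotate error: positive means the target is right of image center,
    // so rotate right. Normalized to roughly [-1, 1].
    static double rotateError(double targetCenterX) {
        return (targetCenterX - IMAGE_WIDTH / 2.0) / (IMAGE_WIDTH / 2.0);
    }

    // Distance error from target width: positive means the target looks
    // too small (too far), so drive forward; negative means too close.
    static double distanceError(double targetWidth) {
        return (DESIRED_WIDTH - targetWidth) / DESIRED_WIDTH;
    }
}
```

Either error can feed a simple proportional drive command (error times a gain, clamped), which is usually enough to line up on a 2017-style target.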

So, given the ideals of 2014 (is-goal-hot-or-not...), I think you want to change your updateDashboard() code to focus on // find best target...,
then have the lineup() code focus on // need to drive rotate..., else // close enough..., else // ...ready for shot...

Lastly, you may find this tip helpful for troubleshooting.
This...
} else {
SmartDashboard.putString(shotReady, "true");
And
SmartDashboard.putString(shotReady, "Error: No sight");
... is something I recommend against.
Rather, I recommend using string literals as the key and a variable as the value:
String direction = "NONE";
if (...) { ... ; direction = "forward"; }
else if (...) { ... ; direction = "spin right"; }
SmartDashboard.putString("direction", direction);
And use a separate key for different methods/purposes, such as your updateDashboard():
SmartDashboard.putNumber("contour count", centerX.length );
// this way you get a hint of when you are dealing with center[0] but center[1] might be the better target...
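Put together, the pattern looks like this. SmartDashboard is stubbed with a plain map here so the snippet stands alone off the robot; on the roboRIO you would call the real SmartDashboard.putString/putNumber instead:

```java
import java.util.HashMap;
import java.util.Map;

public class DashboardDemo {
    // Stand-in for SmartDashboard so this compiles without WPILib.
    static final Map<String, Object> dashboard = new HashMap<>();

    static void putString(String key, String value) { dashboard.put(key, value); }
    static void putNumber(String key, double value) { dashboard.put(key, value); }

    // Literal keys, variable values: every loop writes the same keys,
    // so stale values can't linger under one-off keys on the dashboard.
    static void updateDashboard(String direction, double[] centerX) {
        putString("direction", direction);
        putNumber("contour count", centerX.length);
    }
}
```

Because the keys never change, the dashboard shows one "direction" entry and one "contour count" entry that update in place, instead of a pile of keys that were each written once.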