#1   31-01-2012, 20:08
Jacob tMannetje
Registered User
FRC #1075
Join Date: Jan 2012
Rookie Year: 2011
Location: Whitby
Posts: 1
Convex Hull

How are people handling the processing of images in Wind River?
#2   01-02-2012, 14:24
DjScribbles
Programming Mentor
AKA: Joe S
FRC #2474 (Team Excel)
Team Role: Mentor
Join Date: Oct 2011
Rookie Year: 2012
Location: Niles MI
Posts: 284
Re: Convex Hull

First, have a look at the vision whitepaper; it goes through the theory.

NI Vision Assistant is a useful tool for getting started (and for tuning your values). Use a web browser to connect to your camera and save a few sample images (stop the stream, then right-click -> Save Image As) to load into Vision Assistant. You can also generate code from Vision Assistant (make sure you change the .c file to a .cpp; it will have some errors that need fixing).
That should help you get good value ranges for filtering.

In code:
You use AxisCamera::GetInstance("10.te.am.11"); to get the camera,
then create a new color image in the appropriate color space (we used HSL, after having some issues with HSV) and copy the camera frame into it.
The color image has a number of threshold/filter methods (we again used HSL) that return a binary image to you.
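
Putting those first steps together, here is a minimal sketch using the WPILib C++ vision wrappers (AxisCamera, ColorImage/HSLImage, Threshold, BinaryImage). The camera address follows the 10.te.am.11 convention and the HSL ranges are placeholder values, not numbers from any particular robot:

Code:
#include "WPILib.h"

// Grab one frame from the Axis camera and threshold it into a binary
// image. The HSL ranges are placeholders -- substitute the values you
// tuned in NI Vision Assistant.
void GrabAndThreshold()
{
    AxisCamera &camera = AxisCamera::GetInstance("10.te.am.11");

    // GetImage() hands back a freshly allocated HSLImage (a ColorImage),
    // so it must be deleted when you are done with it.
    ColorImage *image = camera.GetImage();

    // Keep only the pixels inside the tuned HSL ranges; everything else
    // becomes 0 in the resulting binary image.
    Threshold hslThreshold(100, 140, 90, 255, 100, 255); // placeholder values
    BinaryImage *binaryImage = image->ThresholdHSL(hslThreshold);

    // ...hand binaryImage to the convex hull / particle filter step...

    delete binaryImage;
    delete image;
}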

Then you take the binary image and typically do a convex hull and a particle filter (these are the parts we took from the C code generated by Vision Assistant). Once all that is done, you can get the particle analysis data for each blob your algorithm found and pull useful values from it, such as its center of mass, to decide how you want to aim.
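
Here is a sketch of that second half, again using the WPILib wrappers (BinaryImage::ConvexHull, ParticleFilter, and the particle analysis reports) rather than the raw generated NIVision calls; the particle filter criteria are placeholder limits you would tune to your target size:

Code:
#include <cstdio>
#include <vector>
#include "WPILib.h"

// Given a thresholded binary image (like the one produced in the
// previous sketch), fill the blobs, filter out noise, and report
// where each remaining blob is.
void AnalyzeTargets(BinaryImage *binaryImage)
{
    // Fill in the outlines of the hollow reflective-tape rectangles so
    // each target becomes one solid blob.
    BinaryImage *hullImage = binaryImage->ConvexHull(false);

    // Throw away blobs whose bounding box is the wrong size to be the
    // target (placeholder limits, in pixels).
    ParticleFilterCriteria2 criteria[] = {
        {IMAQ_MT_BOUNDING_RECT_WIDTH,  30, 400, false, false},
        {IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false, false}
    };
    BinaryImage *filteredImage = hullImage->ParticleFilter(criteria, 2);

    // One report per remaining blob, ordered largest first.
    std::vector<ParticleAnalysisReport> *reports =
        filteredImage->GetOrderedParticleAnalysisReports();

    for (unsigned i = 0; i < reports->size(); i++) {
        ParticleAnalysisReport &r = reports->at(i);
        // center_mass_x_normalized runs from -1 (left edge) to +1
        // (right edge), which is handy for feeding an aiming loop.
        printf("particle %u: center of mass (%d, %d), normalized x %f\n",
               i, r.center_mass_x, r.center_mass_y,
               r.center_mass_x_normalized);
    }

    // Everything above was heap-allocated; clean up.
    delete reports;
    delete filteredImage;
    delete hullImage;
}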


If you need any more help, just let me know. I assumed you have some moderate programming experience, so if you want more detail on any of the steps, just ask.