  #16
06-01-2013, 16:32
DetectiveWind
Registered User
AKA: Aravind
FRC #2198 (Paradigm Shift)
Team Role: Leadership
Join Date: Dec 2011
Rookie Year: 2010
Location: Toronto
Posts: 10
Re: Example of Vision Processing Available Upon Request

We tried to do it last year, without success... Java, please?
  #17
06-01-2013, 16:53
Fifthparallel
Registered User
AKA: Sam Chen
FRC #1410
Join Date: Dec 2012
Rookie Year: 2011
Location: Denver, CO
Posts: 65
Re: Example of Vision Processing Available Upon Request

As a note, there is a white paper at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.
__________________
sudo chmod u+x helloworld.sh
gotta start somewhere.
  #18
06-01-2013, 18:53
ohrly?
Griffin Alum
AKA: Colin Poler
FRC #1884 (The Griffins)
Team Role: Alumni
Join Date: Jan 2013
Rookie Year: 2011
Location: London
Posts: 58
Re: Example of Vision Processing Available Upon Request

I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries that are only available in Python? (Or at least point out how to replace them in Java/C++.)
  #19
06-01-2013, 19:05
PaulDavis1968
Embedded Software/Systems Engineer
AKA: Master of Complexity
FRC #2053 (TigerTronics)
Team Role: Mentor
Join Date: Jan 2012
Rookie Year: 2012
Location: Endicott, NY
Posts: 91
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by DjMaddius
Do the vision processing in Python, please! I'd love to see a tutorial on it.
I have done it with SimpleCV, which is basically a Python wrapper for OpenCV; I did that over the summer. Last season I did it in OpenCV C++.

SimpleCV code is extremely easy to use.
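
For a sense of how little code it takes, here is a rough SimpleCV sketch of that kind of pipeline (the blob-size cutoff and the printed fields are placeholders for illustration, not anyone's season code):

Code:
from SimpleCV import Camera

# Rough sketch only: grab a frame, threshold it, and keep the large bright blobs.
cam = Camera()                 # first USB camera on the system
img = cam.getImage()           # one frame
binary = img.binarize()        # threshold to a black-and-white image
blobs = binary.findBlobs()     # connected regions in the binary image (None if nothing found)

if blobs:
    targets = [b for b in blobs if b.area() > 1000]   # drop small noise blobs
    for b in targets:
        print b.coordinates(), b.area()               # centroid (x, y) and pixel area

findBlobs() returns None when it finds nothing, hence the check before filtering.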
  #20
06-01-2013, 20:10
jacob9706
Registered User
AKA: Jacob Ebey
FRC #3574 (High Tekerz)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2010
Location: Seattle
Posts: 101
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by ohrly?
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries that are only available in Python? (Or at least point out how to replace them in Java/C++.)
I am starting on the tutorial right now.

I have decided I will not be doing the robot or network code at this time. I will be doing a tutorial on just the vision. If demand is high enough I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language.

Look out for a post entitled "OpenCV Tutorial"; I will post here as well once the tutorial thread is up.
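
For anyone who wants a taste of the networking piece anyway, the idea is just to push a few numbers per frame to the robot. Here is a toy UDP sketch; the message format and port are made up purely for illustration:

Code:
import socket

# Send the target's center x and pixel area as a comma-separated string.
# 10.TE.AM.2 is the usual robot address convention (team 3574 -> 10.35.74.2);
# the port and message format here are placeholders, not from any tutorial.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"320,1500", ("10.35.74.2", 5800))

The robot-side code just listens on the same port and parses the string.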
__________________
/*
* Team 3574 Alumni
*
* 2011 - Highest Seeded Rookie
* 2011 - Rookie All-Star
* 2012 - Engineering Inspiration
* 2012 - Olympic Deans List Winner
* 2013 - Engineering Inspiration
* 2013 - Judges Award (For unique circular robot and the way the team works together.)
*/
  #21
06-01-2013, 21:57
jacob9706
Registered User
AKA: Jacob Ebey
FRC #3574 (High Tekerz)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2010
Location: Seattle
Posts: 101
Re: Example of Vision Processing Available Upon Request

OK everyone! Here is a quick OpenCV tutorial for tracking the rectangles!
OpenCV FRC Tutorial
__________________
/*
* Team 3574 Alumni
*
* 2011 - Highest Seeded Rookie
* 2011 - Rookie All-Star
* 2012 - Engineering Inspiration
* 2012 - Olympic Deans List Winner
* 2013 - Engineering Inspiration
* 2013 - Judges Award (For unique circular robot and the way the team works together.)
*/
  #22
07-01-2013, 23:50
virtuald
RobotPy Guy
AKA: Dustin Spicuzza
FRC #1418, FRC #1973, FRC #4796, FRC #6367
Team Role: Mentor
Join Date: Dec 2008
Rookie Year: 2003
Location: Boston, MA
Posts: 1,102
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by ohrly?
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.
You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not officially supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy
__________________
Maintainer of RobotPy - Python for FRC
Creator of pyfrc (Robot Simulator + utilities for Python) and pynetworktables/pynetworktables2js (NetworkTables for Python & Javascript)

2017 Season: Teams #1973, #4796, #6369
Team #1418 (remote mentor): Newton Quarterfinalists, 2016 Chesapeake District Champion, 2x Innovation in Control award, 2x district event winner
Team #1418: 2015 DC Regional Innovation In Control Award, #2 seed; 2014 VA Industrial Design Award; 2014 Finalists in DC & VA
Team #2423: 2012 & 2013 Boston Regional Innovation in Control Award


Resources: FIRSTWiki (relaunched!) | My Software Stuff
  #23
07-01-2013, 23:53
jacob9706
Registered User
AKA: Jacob Ebey
FRC #3574 (High Tekerz)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2010
Location: Seattle
Posts: 101
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by virtuald
You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not officially supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy
Python is not officially supported, but it has a pretty big backing. And yes, I was talking about running it on an external processor.
__________________
/*
* Team 3574 Alumni
*
* 2011 - Highest Seeded Rookie
* 2011 - Rookie All-Star
* 2012 - Engineering Inspiration
* 2012 - Olympic Deans List Winner
* 2013 - Engineering Inspiration
* 2013 - Judges Award (For unique circular robot and the way the team works together.)
*/
  #24
20-01-2013, 03:07
Azrathud
Computer Nerd
AKA: Bryce Guinta
FRC #2945 (BANG)
Team Role: Programmer
Join Date: Jan 2010
Rookie Year: 2010
Location: Colorado
Posts: 24
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by jacob9706
...
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial:
1. Why did you use Gaussian blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

Last edited by Azrathud : 20-01-2013 at 05:48.
  #25
20-01-2013, 04:47
catacon
Registered User
FRC #1444 (Lightning Lancers)
Team Role: Mentor
Join Date: Jan 2009
Rookie Year: 2006
Location: St. Louis
Posts: 154
Re: Example of Vision Processing Available Upon Request

You certainly don't need a Core i5 to handle this processing. The trick is to write your code correctly. OpenCV's implementations of certain algorithms are extremely efficient; done right, this can all run on an ARM processor at about 20 FPS.

Contours are simply the outlines of objects found in an image (generally binary). The size of these contours can be used to filter out noise, reflection, and other unwanted objects. They can also be approximated with a polygon which makes filtering targets easy. The "contourArea" of a target will have to be determined experimentally and you will want to find a range of acceptable values (i.e. the target area will be a function of distance).

OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.
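
For example, a bare-bones OpenCV (Python) version of that size filter might look like the sketch below; the input image and the area range are placeholders you would tune for your own camera and distances:

Code:
import cv2

# Assume 'binary' is an already-thresholded black/white frame (0 = load as grayscale).
binary = cv2.imread("frame_binary.png", 0)

# OpenCV 2.4-era signature: two return values.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# The acceptable range is found experimentally; the target's area shrinks with distance.
MIN_AREA, MAX_AREA = 500, 20000
candidates = [c for c in contours if MIN_AREA < cv2.contourArea(c) < MAX_AREA]
print("%d candidate target(s) after size filtering" % len(candidates))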
  #26
20-01-2013, 05:48
Azrathud
Computer Nerd
AKA: Bryce Guinta
FRC #2945 (BANG)
Team Role: Programmer
Join Date: Jan 2010
Rookie Year: 2010
Location: Colorado
Posts: 24
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by catacon
OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.
Fair enough. Thank you.
  #27
28-04-2013, 03:46
jacob9706
Registered User
AKA: Jacob Ebey
FRC #3574 (High Tekerz)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2010
Location: Seattle
Posts: 101
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by Azrathud
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial:
1. Why did you use Gaussian blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.
Sorry for the REALLY late reply.
1.) Gaussian blur just reduces the grain in the image by averaging nearby pixels.

2.) I believe you are confused about RETR_TREE and CHAIN_APPROX_SIMPLE; these belong to the findContours method, not arcLength. RETR_TREE returns the contours as a tree (if you don't know what a tree is, look it up; a family tree is an example). CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points; for example, an upright rectangular contour is encoded with 4 points (straight out of the documentation). http://docs.opencv.org/ has an awesome search. Use it.

3.) I have not actually looked at the algorithm behind the scenes, but in theory it scales the approximation tolerance to the size of the contour, so bigger contours are allowed proportionally more slack in the polygon fit.

4.) 1000 was just something we found worked best to filter out a lot of false positives. On our production system we ended up raising it to 2000, if I remember right.

Any other questions, feel free to ask, but please do some research first. I wish you luck!
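
To tie those four answers together, here is a stripped-down sketch of the same steps in order (illustrative only; the HSV threshold values are placeholders, not the numbers from the tutorial):

Code:
import cv2
import numpy as np

img = cv2.imread("camera_frame.jpg")                 # one frame from the camera

# 1.) Gaussian blur averages nearby pixels to knock down grain/noise.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Threshold for the retro-reflective tape (placeholder HSV range; tune for your lighting).
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
binary = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([90, 255, 255]))

# 2.) RETR_TREE returns the contours as a hierarchy (tree); CHAIN_APPROX_SIMPLE keeps only
#     segment end points, so an upright rectangle comes back as 4 points.
#     (OpenCV 2.4-era signature: two return values.)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

rectangles = []
for c in contours:
    # 4.) The area check throws out small false positives (1000 here; raise it if needed).
    if cv2.contourArea(c) < 1000:
        continue
    # 3.) 0.02 * perimeter sets the approximation tolerance relative to the contour's size.
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        rectangles.append(approx)

print("found %d rectangle(s)" % len(rectangles))

The same filtering works in a live capture loop; only the image source changes.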
__________________
/*
* Team 3574 Alumni
*
* 2011 - Highest Seeded Rookie
* 2011 - Rookie All-Star
* 2012 - Engineering Inspiration
* 2012 - Olympic Deans List Winner
* 2013 - Engineering Inspiration
* 2013 - Judges Award (For unique circular robot and the way the team works together.)
*/
  #28
28-04-2013, 19:24
mdrouillard
Registered User
FRC #0772
Join Date: May 2011
Location: Canada
Posts: 29
Re: Example of Vision Processing Available Upon Request

Hello everyone. I am a mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard under the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?

We interpreted a separate battery pack for the xM as not permitted, because it would not be physically internal to the board, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great because it is so much lighter than a full laptop. If we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need in order to connect the required power converter (remember there are other rules about the converter that powers the radio, etc.). Ideally we also did not want to power it from the main battery, because we did not want the Linux OS to get trashed by the ungraceful power-ups and power-downs that happen on the field. So how did you accomplish this part of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md
  #29
28-04-2013, 19:32
Gregor
#StickToTheStratisQuo
AKA: Gregor Browning
no team
Team Role: College Student
Join Date: Jan 2012
Rookie Year: 2012
Location: Kingston, Ontario, Canada
Posts: 2,447
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by mdrouillard
...if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?
987 did a lot of work in the 2012 season on powering their Kinect.

Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper.
__________________
What are nationals? Sounds like a fun American party, can we Canadians come?
“For me, insanity is super sanity. The normal is psychotic. Normal means lack of imagination, lack of creativity.” -Jean Dubuffet
"Insanity is doing the same thing over and over again and expecting different results." -Albert Einstein
FLL 2011-2015 Glen Ames Robotics-Student, Mentor
FRC 2012-2013 Team 907-Scouting Lead, Strategy Lead, Human Player, Driver
FRC 2014-2015 Team 1310-Mechanical, Electrical, Drive Captain
FRC 2011-xxxx Volunteer
How I came to be a FIRSTer
<Since 2011
  #30
28-04-2013, 21:16
jacob9706
Registered User
AKA: Jacob Ebey
FRC #3574 (High Tekerz)
Team Role: Mentor
Join Date: Feb 2011
Rookie Year: 2010
Location: Seattle
Posts: 101
Re: Example of Vision Processing Available Upon Request

Quote:
Originally Posted by mdrouillard
...if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?
The second processor (an ODROID-U2) runs on 12 volts, so it is just plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the ODROID-U2 does.
__________________
/*
* Team 3574 Alumni
*
* 2011 - Highest Seeded Rookie
* 2011 - Rookie All-Star
* 2012 - Engineering Inspiration
* 2012 - Olympic Deans List Winner
* 2013 - Engineering Inspiration
* 2013 - Judges Award (For unique circular robot and the way the team works together.)
*/