#4   25-01-2012, 11:48
MaxMax161
Allegedly Useful
AKA: Max Llewellyn
FRC #2791 (Shaker Robotics), FRC #1676 (Pascack π-oneers)
Team Role: Mentor
 
Join Date: Nov 2009
Rookie Year: 2008
Location: Montvale, NJ / Troy, NY
Posts: 174
Re: Camera Targeting

I may be mistaken here (I messed with vision but have had to back-burner it to solve other problems), but I believe the bulk of the image processing is in two functions: the convex hull operation and the edge detection.

To my understanding, the edge detection is what finds the numbers that the distance, X/Y coordinate, etc. functions use, and those functions are little more than some quick math. So you can't pull out edge detection.
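
As a rough illustration of what I mean by "quick math" (this is my own sketch with made-up constants, not the code from the actual vision sample), the distance and aiming numbers usually fall straight out of the camera geometry once edge detection hands you a bounding box:

[code]
// My own sketch of the "quick math" step, not the actual vision sample code.
// All of the constants below are assumptions you would replace with your
// camera's and target's real numbers.
public class TargetMath {
    static final double TARGET_WIDTH_FT = 2.0;      // assumed real width of the target
    static final double IMAGE_WIDTH_PX = 320.0;     // assumed camera image width in pixels
    static final double HORIZONTAL_FOV_DEG = 47.0;  // assumed horizontal field of view

    /** Distance to the target from the width of its bounding box in pixels. */
    static double distanceFeet(double targetWidthPx) {
        // The full image spans 2 * d * tan(FOV/2) feet at distance d, so:
        // targetWidthPx / IMAGE_WIDTH_PX = TARGET_WIDTH_FT / (2 * d * tan(FOV/2))
        double halfFovRad = Math.toRadians(HORIZONTAL_FOV_DEG / 2.0);
        return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX)
                / (2.0 * targetWidthPx * Math.tan(halfFovRad));
    }

    /** Horizontal angle to the target center (small-angle approximation). */
    static double azimuthDegrees(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }
}
[/code]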

The other thing to look at pulling out would be the convex hull operation. I believe edge detection works better with it, but it might work without it, so that's the one you might be able to remove.

I think all the other functions are just quick math; you could pull them out, but I don't think it would do much to solve your problem.
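
To make the pipeline order concrete, here's a minimal OpenCV-style sketch of the two heavy steps (my own code, not the NI Vision calls the default sample uses), just to show that the convex hull is a separate pass you could try skipping:

[code]
// Rough OpenCV-style sketch of the two heavy steps (not the NI Vision calls in
// the default sample). Assumes the OpenCV Java bindings are on the classpath
// and the native library has been loaded with System.loadLibrary(...).
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class VisionPipelineSketch {

    /** grayImage must be an 8-bit single-channel image. */
    static List<MatOfPoint> findTargets(Mat grayImage, boolean useConvexHull) {
        // Heavy step 1: edge detection (thresholds 50/150 are arbitrary here).
        Mat edges = new Mat();
        Imgproc.Canny(grayImage, edges, 50, 150);

        // Trace the edges into candidate target outlines.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        if (!useConvexHull) {
            return contours; // skipping the hull saves one pass per contour
        }

        // Heavy step 2 (optional): the convex hull fills in broken outlines so
        // the later "quick math" (bounding box, center, distance) is more reliable.
        List<MatOfPoint> hulls = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            MatOfInt hullIndices = new MatOfInt();
            Imgproc.convexHull(contour, hullIndices);
            hulls.add(hullToContour(contour, hullIndices));
        }
        return hulls;
    }

    /** Convert convexHull's index output back into a contour of points. */
    private static MatOfPoint hullToContour(MatOfPoint contour, MatOfInt hullIndices) {
        Point[] all = contour.toArray();
        int[] idx = hullIndices.toArray();
        Point[] hull = new Point[idx.length];
        for (int i = 0; i < idx.length; i++) {
            hull[i] = all[idx[i]];
        }
        return new MatOfPoint(hull);
    }
}
[/code]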

The other possibility is to do the vision processing on the driver station laptop and send the results to the robot over either TCP or UDP.
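
If you go that route, the networking side is pretty small. Here's a minimal sketch of the laptop side sending two numbers to the robot over UDP; the port, packet format, and IP are just examples, and on the cRIO you'd read them back with the Java ME socket API rather than java.net:

[code]
// Minimal sketch of sending vision results from the driver station laptop to
// the robot over UDP. The port, the 10.TE.AM.2 address, and the packet format
// are my own example choices, not anything defined by the FRC framework.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class VisionSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress robot = InetAddress.getByName("10.27.91.2"); // example cRIO address (team 2791)

        // Pretend these came from the vision code running on the laptop.
        double distanceFeet = 12.5;
        double azimuthDegrees = -3.2;

        // Pack the two doubles into a 16-byte packet and send it.
        ByteBuffer buffer = ByteBuffer.allocate(16);
        buffer.putDouble(distanceFeet);
        buffer.putDouble(azimuthDegrees);
        byte[] data = buffer.array();

        socket.send(new DatagramPacket(data, data.length, robot, 1180)); // example port
        socket.close();
    }
}
[/code]

UDP is usually the easier choice here since each packet is self-contained and a dropped frame of vision data doesn't matter; just make sure the port you pick is one the field network actually lets through.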


I hope this helps, good luck getting everything working!
__________________
2791 Shaker Robotics (2013-present)
--Control Systems Mentor 2013-present
--Drive coach 2015-present

1676 The Pascack π-oneers (2010-2013)
--Drive coach 2011-2013
--Lead Programmer 2011-2013