#4
18-02-2016, 16:49
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,753
Greg McKaskle has a reputation beyond repute
Re: Vision Processing

If you run your code on the Pi, it will be using a NetworkTables implementation that is compatible with the LV implementation.

You can also look at the vision example on the getting started page. It shows some basics of camera and vision processing and maps the target info into a pretty useful coordinate space for steering the robot.
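The exact convention varies by example, but the mapping into a steering-friendly coordinate space is typically a normalization of pixel coordinates to the range -1..1, with (0, 0) at the image center. A minimal sketch of that idea (the function name and the specific axis convention here are my own assumptions, not taken from the example):

```python
def to_normalized(px, py, width, height):
    """Map pixel coordinates to a [-1, 1] steering space:
    (0, 0) is the image center, +x is right, +y is up."""
    nx = (2.0 * px / width) - 1.0
    ny = 1.0 - (2.0 * py / height)
    return nx, ny

# A target at the center of a 640x480 image maps to the origin,
# so its x value can feed a steering loop directly (sign = turn direction).
print(to_normalized(320, 240, 640, 480))  # (0.0, 0.0)
```

The nice property for steering is that the normalized x value is resolution-independent: the same gain works whether the camera runs at 320x240 or 640x480.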

If you are looking to steer the robot using the camera, you may also want to consider taking an image, processing it for the angular offset to the target, and then using a gyro to turn to the target. Cameras are pretty slow sensors, and using them to measure how much a robot has turned is not easy. Anyway, if you take this approach, you don't necessarily need a coprocessor.
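The "angular offset" step above can be computed from the target's pixel column with a simple pinhole-camera model. A sketch, assuming you know your camera's horizontal field of view (the ~60 degree value below is just an illustrative number, not a spec for any particular camera):

```python
import math

def angle_to_target(target_px_x, image_width, horizontal_fov_deg):
    """Angular offset in degrees from the camera axis to the target.
    Positive means the target is to the right of center."""
    # Focal length in pixels, derived from the horizontal field of view.
    focal_px = (image_width / 2.0) / math.tan(math.radians(horizontal_fov_deg) / 2.0)
    # Pinhole model: angle = atan(pixel offset / focal length).
    return math.degrees(math.atan((target_px_x - image_width / 2.0) / focal_px))

# A target at the right edge of a 60-degree-FOV image sits at half the FOV:
print(round(angle_to_target(640, 640, 60.0), 3))  # 30.0
```

Once you have this angle, the gyro takes over: record the current heading, add the offset to get a setpoint, and run a turn-to-angle loop on the gyro alone. The camera only needs to supply one good measurement, which is why its low frame rate stops mattering.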

Greg McKaskle