#1
Learning Vision Control
Our team has been looking into vision control/processing for the coming season. Our setup will be a USB webcam with a green LED ring, plugged directly into the roboRIO. I already know how to stream video from the camera to the SmartDashboard, but that doesn't process it. We need it to locate a target and report the position of its center. I've done extensive research on how to go about this, but nothing has worked. My preferred approach would be to process on the roboRIO itself, but I'm open to other methods as long as the setup isn't too painful. I've gone through every link in the ScreenSteps Live tutorials, but none of it has worked. I also tried GRIP, but it gets flagged by every antivirus we've run it past, and I'm not ready to trust something like that. I hope someone will be able to help me.
#2
Re: Learning Vision Control
We do our vision processing on a Raspberry Pi using OpenCV, and just send the "targeting information" across to the roboRIO. The Pi-side code is in our GitHub repository as Vision2016.py. Our (command-based Java) RIO-side code is also there under src/, as the CatapultPositioner subsystem and various AutoAim commands. I can forward any questions to our programming team.
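If it helps, the gist of the Pi-to-RIO link is just a handful of NetworkTables calls. Here is a minimal sketch using pynetworktables; the real pipeline is in Vision2016.py, and the table and key names below are placeholders (swap your own team number into the mDNS address):
[code]
from networktables import NetworkTables

# Connect to the roboRIO by mDNS hostname (team 3946 here) rather than
# a hardcoded IP, so the code keeps working when the network changes.
NetworkTables.initialize(server='roborio-3946-frc.local')
table = NetworkTables.getTable('vision')  # placeholder table name

def publish_target(center_x, center_y, visible):
    """Push one frame's result to the RIO (key names invented for this sketch)."""
    table.putNumber('centerX', center_x)
    table.putNumber('centerY', center_y)
    table.putBoolean('targetVisible', visible)
[/code]
The RIO side just reads back the same table and keys.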
Edit: we used SEPARATE driver/streaming cameras, because the exposure/speed/resolution settings for the two functions are usually quite different.
Last edited by GeeTwo: 22-12-2016 at 09:18. Reason: fixed link
#3
Re: Learning Vision Control
How did you have the Pi communicating with the RIO? We do have a Pi, but we've never really used it before. Our robot code uses the iterative template. Also, your GitHub link gives me a 404 error.
#4
Re: Learning Vision Control
I think this is the link he was referring to: https://github.com/frc3946/Stronghold
If you have more questions, ask away. GRIP, which came out last year, has made vision processing a lot easier. Our programmers started out with GRIP, but since it was so new, the installation instructions for the Pi were still being worked out, so they decided to use Python and OpenCV on the Pi instead. They did a presentation on our implementation: https://www.youtube.com/watch?v=ZNIlhVzC-4g

Brian
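Edit: for the curious, the heart of what the Pi does is only a few lines of OpenCV. This is a rough sketch of the idea, not our exact pipeline; the HSV thresholds are placeholders you would tune against your own ring light with the camera exposure turned way down:
[code]
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # USB webcam on the Pi

# Placeholder HSV bounds for a green LED ring reflection -- tune these!
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # [-2] keeps this working across OpenCV 2/3 return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)
        m = cv2.moments(target)
        if m['m00'] > 0:
            cx = m['m10'] / m['m00']  # target center, in pixels
            cy = m['m01'] / m['m00']
            # publish (cx, cy) to the roboRIO from here
[/code]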
#5
Re: Learning Vision Control
Sorry, fixed the link. We connected the Pi to the robot network, and I believe that this past year the targeting camera was on the Pi's USB port, while the streaming cameras were connected to the roboRIO's USB ports. I did not work closely with the programming side last year, but I understand they had difficulty connecting to the Pi at competition because the IP addresses were hardcoded. It looks like the code on GitHub is from the week before Bayou, so it may still need a fix for that. I sent a reminder to the programmers earlier this week to make sure GitHub is up to date before kickoff.
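Since the iterative template came up: on the RIO you just read back whatever the Pi publishes. Our RIO code is command-based Java, but the NetworkTables calls have the same shape in every language; here is a rough iterative-style sketch in Python (RobotPy), using the same placeholder table and key names as the sketch in post #2:
[code]
import wpilib
from networktables import NetworkTables

class MyRobot(wpilib.IterativeRobot):
    def robotInit(self):
        # Same placeholder table/keys the Pi-side sketch publishes to
        self.vision = NetworkTables.getTable('vision')

    def teleopPeriodic(self):
        if self.vision.getBoolean('targetVisible', False):
            center_x = self.vision.getNumber('centerX', 0.0)
            # steer toward the target based on center_x here

if __name__ == '__main__':
    wpilib.run(MyRobot)
[/code]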