#1
Re: Learning Vision Control
How did you have the Pi communicating with the roboRIO? We do have a Pi, but we've never really used it before. Our robot is controlled with the iterative template. Your GitHub link also gives me a 404 error.
#2
Re: Learning Vision Control
I think this is the link he was referring to: https://github.com/frc3946/Stronghold
If you have more questions, ask away. With GRIP coming out last year, vision processing has gotten a lot easier. Our programmers started out in GRIP, but since it was so new, installation instructions for the Pi were still being worked out, so they decided to use Python and OpenCV on the Pi instead. They did a presentation on our implementation: https://www.youtube.com/watch?v=ZNIlhVzC-4g

Brian
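
For anyone trying to follow along, here is a minimal sketch of the kind of Python/OpenCV loop described above, running on the Pi with a targeting camera on its USB port. The HSV thresholds and camera index are placeholders picked for illustration, not 3946's actual values; GRIP is a handy way to find real thresholds even if the final pipeline runs in plain OpenCV.

```python
# Rough sketch only -- thresholds and camera index are placeholders, not 3946's values.
import cv2
import numpy as np

LOWER_HSV = np.array([60, 100, 100])   # placeholder lower bound for a green LED ring
UPPER_HSV = np.array([90, 255, 255])   # placeholder upper bound

cap = cv2.VideoCapture(0)              # targeting camera on the Pi's USB port

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # findContours returns different tuples across OpenCV versions; [-2] is the contour list in all of them
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(target)
        center_x = x + w / 2.0
        # center_x is what would get sent to the roboRIO (see the NetworkTables sketch further down)
        print(center_x)
```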
#3
Re: Learning Vision Control
Sorry, fixed the link. We connected the Pi to the network, and I believe that this past year the targeting camera was on the Pi's USB port, while the streaming cameras were connected to the roboRIO's USB ports. I didn't work with the programming side very closely last year, but I understand they had difficulty connecting to the Pi at competition because they had hardwired IP addresses in the code. The code on GitHub looks like it's from the week before Bayou, so it may still need a fix for that. I sent a reminder to the programmers earlier this week to make sure GitHub is up to date before kickoff.
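
On the hardcoded-IP problem: one common way to avoid it (not necessarily the fix 3946 ended up making) is to point the Pi at the roboRIO's standard mDNS hostname instead of a numeric address, so the connection keeps working whatever 10.TE.AM.x address the radio or field hands out. A minimal sketch with pynetworktables, assuming its newer NetworkTables.initialize() API and team 3946's hostname:

```python
# Sketch only: assumes pynetworktables is installed on the Pi and the roboRIO
# is reachable by its standard mDNS name (roborio-TEAM-frc.local).
from networktables import NetworkTables

NetworkTables.initialize(server='roborio-3946-frc.local')   # no hardcoded 10.39.46.2
table = NetworkTables.getTable('vision')

# The OpenCV loop would publish its result here for the roboRIO code to read:
table.putNumber('center_x', 160.0)   # placeholder value
```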