#1
Re: Vision Processing with Raspberry Pi
Quite a few robots have been fielded like that. Just do it properly so you don't shoot yourself in the foot. Remember that if you miss enough FMS packets at the cRIO/RoboRIO, your robot will be disabled until it receives an FMS packet again. Per the linked topic, there is a lot of good information in the Einstein report that will help you if you want to try it. NotInControl's post in the linked topic also seems to be fine advice.

Last edited by techhelpbb : 17-10-2014 at 17:40.
#2
Re: Vision Processing with Raspberry Pi
Given that there are no rules against using TCP/IP, I would personally tend toward it over the RS-232 route: it is much faster, there are more modern examples, and it is easier to test off-robot.
If you do decide to use TCP/IP and have questions about C/C++ or LabVIEW network programming, feel free to ask; I have been hacking that stuff out for longer than I care to admit.
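To make the TCP/IP suggestion concrete, here is a minimal sketch of a Pi-side vision server pushing a target report to a robot-side client. Everything here is an assumption for illustration: the host/port, the newline-delimited JSON message format, and the field names are not from any team's actual code.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5800  # hypothetical address and port

def vision_server(ready):
    """Accept one connection and send a single JSON target report."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    # Newline-delimited JSON keeps message framing simple over a TCP stream.
    report = {"found": True, "angle_deg": -4.2, "distance_in": 96.0}
    conn.sendall((json.dumps(report) + "\n").encode())
    conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=vision_server, args=(ready,), daemon=True).start()
ready.wait()

# Robot-side client: connect, read one line, parse it.
cli = socket.create_connection((HOST, PORT))
line = cli.makefile().readline()
cli.close()
target = json.loads(line)
print(target)
```

Because both ends are plain sockets, the same client code can be tested on a laptop against the Pi long before the robot is involved, which is much of the appeal over RS-232.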
#3
Re: Vision Processing with Raspberry Pi
Team Spectrum 3847 has put up a fantastic resource on vision processing with the RPi. It is the basis for our object-recognition software.
It is written in Python, but it makes the entire process easy to understand. They set up a TCP socket request receiver to communicate with the cRIO/RoboRIO. We modified this socket receiver to make TCP communication with the cRIO easy and stable, and we were even able to get CheesyVision working with the cRIO and LabVIEW using a modified version of the socket request receiver.
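A "socket request receiver" in the sense described above is a server that replies to each request from the robot with the latest vision result, rather than pushing data unsolicited. This is a hedged sketch of that request/response pattern, not Spectrum's actual code; the port, the "GET" request keyword, and the comma-separated reply format are invented for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5801  # hypothetical address and port

def request_receiver(ready):
    """Answer each newline-terminated 'GET' request with the latest result."""
    latest = "found,-4.2,96.0"  # in real use, updated by the vision loop
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    f = conn.makefile("rw")
    for request in f:          # one line per request
        if request.strip() == "GET":
            f.write(latest + "\n")
            f.flush()
        else:
            break
    conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=request_receiver, args=(ready,), daemon=True).start()
ready.wait()

# Simulated cRIO side: send one request, read one reply.
cli = socket.create_connection((HOST, PORT))
f = cli.makefile("rw")
f.write("GET\n")
f.flush()
reply = f.readline().strip()
cli.close()
print(reply)
```

The request/response shape is what makes this stable on a robot: the cRIO/RoboRIO only blocks on a read when it has explicitly asked for data, instead of parsing an unsolicited stream.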