Vision Processing with Raspberry Pi

So, after reading through your suggestions here and consulting with my programming team, we have decided to try developing a vision system for our robot with a Raspberry Pi, mainly because we already have one (and if we wanted to port the code elsewhere afterwards, it would be pretty easy).

We are struggling with how to start development - I have suggested using OpenCV and C++ (.NET implementations are not really an option because they are a pain to compile for ARM, even with Mono).

I just wanted to see if there are any other possibilities/recommendations for developing on the RPi.
Also, how does the communication between the board and the RoboRio/cRio work?

The R-PI UART pins should plug directly into the RoboRio MXP port's UART pins. I don't know if level shifting is required between those UARTs. On the cRio you need to plug the R-PI UART into an RS-232 to UART converter. Alternatively, you could look at what you can get away with using just GPIO. Just remember you need some sort of level shifting between the R-PI and cRio/RoboRio GPIO.

Alright, thanks. And programming-wise, how can I output commands from the RPi to the RoboRio (and make the RoboRio "understand" them)?

Although I have some programming experience, I have never done anything like this before, and the same goes for the other programmers in my team. So we are starting from scratch.

The roboRIO serial port on the main board uses RS-232 levels. The serial port on the MXP uses LVTTL levels (5 V compatible).

Unfortunately I have only done this in LabVIEW, and I believe you mentioned using C++. I never did find an example of how to use the RS-232 port on the cRio; I had to ask around here (CD) and experiment, which I imagine you will have to do as well. I could post some LabVIEW examples so you can at least get an idea of the algorithm. Also, I didn't do much of the work with the R-PI, so I can't advise as much there, though there are far more tutorials on using the UART on the Pi.

Basically, UART and RS-232 let you send ASCII characters over the bus. This makes it very easy to come up with your own command set for your system. You can basically tell the Pi to write "SHOOT" or "STOP" to the UART, and the RIO will see the characters "SHOOT" or "STOP" on the other end, and vice versa. You can, for instance, use an equal-to comparator to check whether what is in the input buffer is "SHOOT", and if it is, output a TRUE to be used by some other part of the code.
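I mostly work in LabVIEW, but in C++ on the Pi it would look roughly like this (I'm assuming the Pi's UART shows up as /dev/ttyAMA0 and 9600 baud - adjust for your setup, and "SHOOT" is just an example command; treat this as a sketch, not tested code):

```cpp
// Minimal sketch of the Pi side: open the UART and write an ASCII command.
// Assumes the Pi's UART appears as /dev/ttyAMA0 at 9600 baud.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tty{};
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);              // raw mode: no echo, no CR/LF translation
    cfsetispeed(&tty, B9600);
    cfsetospeed(&tty, B9600);
    tcsetattr(fd, TCSANOW, &tty);

    // Send a command terminated by a newline so the other end knows where it stops.
    const char* cmd = "SHOOT\n";
    write(fd, cmd, strlen(cmd));

    close(fd);
    return 0;
}
```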

Alright, I'm starting to get the idea of how this works. So, generally speaking, I should check for UART input (somehow) on the robot side, and if I get a predefined command from the RPi (like "Shoot" or "move X degrees left") I should interrupt the currently running command / add a command to the scheduler, and so on.
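To check my understanding, would something like the rough sketch below be the right shape on the robot side? (I'm guessing at WPILib's SerialPort class here - the exact class and method names might be off, it's just the idea.)

```cpp
// Rough sketch of the robot-side idea (WPILib C++; exact names may differ
// between WPILib versions - treat this as a guess at the shape of the code).
#include "WPILib.h"
#include <algorithm>
#include <string>

class VisionListener {
public:
    VisionListener() : serial(9600, SerialPort::kMXP) {}

    // Call this periodically (e.g. from TeleopPeriodic).
    void Poll() {
        char buf[64];
        int n = serial.GetBytesReceived();
        if (n <= 0) return;
        n = serial.Read(buf, std::min(n, 63));
        buffer.append(buf, n);

        // Commands are newline-terminated ASCII strings from the Pi.
        size_t end;
        while ((end = buffer.find('\n')) != std::string::npos) {
            std::string cmd = buffer.substr(0, end);
            buffer.erase(0, end + 1);
            if (cmd == "SHOOT") {
                // e.g. Scheduler::GetInstance()->AddCommand(new ShootCommand());
            }
        }
    }

private:
    SerialPort serial;
    std::string buffer;
};
```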
In addition, you have mentioned differences between the cRio (which I have now and can test on before kickoff) and the new RoboRio - are these changes going to dramatically affect the code? (I mean more than just changing the line that reads the UART input, for example.)

Most likely not. According to the guys who wrote the libraries, they are trying to keep them as close to before as possible. In fact, if you use the RS-232 port on the RoboRio with the R-PI and a UART to RS-232 converter, it may be nearly identical.

A note on the differences between TTL UART and RS-232: RS-232 will likely be more reliable than TTL UART due to the differences in how they signal, so I would suggest using a UART to RS-232 converter.

This is a question, not an answer :slight_smile:

Is it possible to use networking (TCP/UDP) on the robot when using a co-processor? I know how to do all the networking stuff, but am unsure as to the rules regarding IP addressing, field management, etc.

Are you asking if it is OK to send a UDP message from, say, a Pi to the roboRIO?

Here is a crude diagram of how we do it:
http://ratchetrockers1706.org/team/technical-division/robot-control/

If you are asking whether it is legal, then the answer is yes, it has been legal.
Quite a few robots have been fielded like that.

Just do it properly so you do not shoot yourself in the foot.
Remember that if you miss enough FMS packets at the cRIO/RoboRio, your robot will be disabled until you receive an FMS packet again.
Per the linked topic there’s lots of good information in the Einstein report that will help you if you want to try it.
NotInControl’s post in the linked topic also seems to be fine advice.

One word of caution on the Pi side: the Pi can only accept 3.3 V signals, so you are most definitely going to need level shifting, for both GPIO and UART.
How about an FTDI cable between the Pi's USB port and the expansion connector, using the TTL end? (Everyone seems to be assuming the roboRIO is TTL at the expansion connector - I would want to check this before proceeding.) Make sure TX is connected to RX, and RX to TX.
Communication flow will have to be handled with XON/XOFF (or, simply put, well-thought-out timing delays in the code), and you will probably want to flush the buffer at the start of serial initialization. Both processors are capable of multitasking, so any lag in image processing may cause some difficulty for the Pi.
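For what it's worth, the Pi-side setup might look roughly like this (assuming the FTDI cable shows up as /dev/ttyUSB0 - check with dmesg; this is just a sketch of the idea):

```cpp
// Sketch of serial setup on the Pi with XON/XOFF software flow control and a
// buffer flush at startup. Assumes the FTDI cable appears as /dev/ttyUSB0.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int open_link(const char* dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    termios tty{};
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);
    cfsetispeed(&tty, B9600);
    cfsetospeed(&tty, B9600);
    tty.c_iflag |= (IXON | IXOFF);   // XON/XOFF software flow control
    tcsetattr(fd, TCSANOW, &tty);

    tcflush(fd, TCIOFLUSH);          // flush stale bytes at the start of the session
    return fd;
}

int main() {
    int fd = open_link("/dev/ttyUSB0");
    if (fd < 0) return 1;
    // ... read/write the command strings here ...
    close(fd);
    return 0;
}
```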

I wanted to add that Penn robotics implemented a Pi vision system (or was working on one). The student's name I remember was Brian; contact Jim at Team 135.

Given that there are no rules against using TCP/IP, I would personally tend toward it over the RS-232 route. It will be much faster, there are more modern examples, and it is easier to test off the robot.
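As a flavor of how little code it takes, here is a minimal sketch of the Pi sending its vision result as a UDP datagram (the address, port, and message format here are placeholders, not anything official - use whatever addressing scheme you settle on for your robot network):

```cpp
// Minimal sketch: send the vision result from the Pi to the robot as a UDP
// datagram. Address, port, and message format are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5800);                       // placeholder port
    inet_pton(AF_INET, "10.12.34.2", &dest.sin_addr);  // placeholder robot IP

    // One small ASCII message per frame, e.g. "angle,distance".
    const char* msg = "12.5,3.2";
    sendto(sock, msg, strlen(msg), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

    close(sock);
    return 0;
}
```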

If you do decide to use TCP/IP and have questions regarding C/C++ or LabVIEW and network programming, feel free to ask; I have been hacking that stuff out for longer than I care to admit.

Team Spectrum 3847 has put up a fantastic resource on vision processing using the RPi. It is the basis for our object recognition software.

It is written in Python, but it makes the entire process easy to understand.
They have set up a TCP socket request receiver to communicate with the cRio/RoboRio.

We have modified this socket receiver to make TCP communication with the cRio easy and stable. We were even able to modify CheesyVision to work with the cRio and LabVIEW using a modified version of the Socket Request Receiver.
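To illustrate the request/response idea in C++ (this is not Spectrum's actual protocol - the address, port, and message strings below are made up), the robot side of such a socket receiver boils down to something like:

```cpp
// Rough sketch of the request/response idea from the robot side: connect to
// the Pi's socket server, send a request, read the reply. All names and
// message strings here are placeholders for illustration only.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in pi{};
    pi.sin_family = AF_INET;
    pi.sin_port = htons(5801);                        // placeholder port
    inet_pton(AF_INET, "10.12.34.12", &pi.sin_addr);  // placeholder Pi IP

    if (connect(sock, reinterpret_cast<sockaddr*>(&pi), sizeof(pi)) < 0) {
        perror("connect");
        return 1;
    }

    const char* request = "GET_TARGET\n";              // placeholder request
    send(sock, request, strlen(request), 0);

    char reply[128] = {0};
    ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);
    if (n > 0) printf("vision says: %s\n", reply);

    close(sock);
    return 0;
}
```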