H.264 Encoding with Vision Processing [RASPBERRY PI]

Hey, I’m currently using a Raspberry Pi with the WPILib image for all of our vision-related work. We use a Python script for vision processing, and we have been viewing the stream on the Pi image’s web dashboard during competition. I recently heard that you can get a higher-quality stream with less bandwidth by using H.264 encoding, but I’m not sure how to convert my current stream to that encoding/output. We are using a Microsoft LifeCam HD-3000. Do I need a different camera, or what code/settings do I need to implement or change? Any answers are greatly appreciated.

Thanks,
Team7433

CSCore and the dashboards provided by FIRST/WPILib don’t support H.264, so you would have to implement something yourself.

I know a number of teams use GStreamer to stream their cameras, but I haven’t heard of teams streaming a vision pipeline in H.264.

You will not need another camera, but you will need other software. Here’s one that was made for FRC. I’m working on my own solution but it will take time.


Streaming in H.264 is really easy in GStreamer (though OpenMAX can be picky about pixel formats). You basically just do `mycamerasrc ! video/x-raw,format=YUY2 ! videoconvert ! video/x-raw,format=I420 ! omxh264enc` to get H.264 frames to send down the pipeline (e.g. to `h264parse ! rtph264pay` or similar).
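To make that concrete, here’s a rough sketch of a full sender pipeline for a LifeCam-style USB camera on the Pi, plus a matching receiver for the driver station laptop. The device path, resolution, `10.74.33.5` address, and port 5800 are assumptions you’d swap for your own setup (FRC only opens ports 5800–5810 for this kind of traffic), not values required by GStreamer.

```
# On the Pi: capture YUY2 from the USB camera, convert to I420 for the
# OpenMAX hardware encoder, then packetize as RTP/H.264 over UDP.
# /dev/video0, the resolution, 10.74.33.5, and port 5800 are assumptions.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! \
  videoconvert ! video/x-raw,format=I420 ! \
  omxh264enc ! h264parse ! \
  rtph264pay config-interval=1 pt=96 ! \
  udpsink host=10.74.33.5 port=5800

# On the driver station: receive, depayload, decode, and display.
gst-launch-1.0 udpsrc port=5800 \
  caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
  rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false
```

`config-interval=1` on `rtph264pay` makes the payloader periodically re-send the SPS/PPS headers so a viewer can join the stream mid-match.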

You can limit resolution and FPS with the width, height, and framerate caps on the first video/x-raw filter. Raspberry Pi camera modules provide H.264 video directly, so you don’t even have to go through OpenMAX; you can just packetize the frames directly.
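As a sketch of that Pi-camera path (this assumes the gst-rpicamsrc plugin is installed; the bitrate, resolution, host, and port are again placeholders for your own setup):

```
# Pi camera module: the GPU hands back H.264 directly, so there is no
# videoconvert/omxh264enc stage -- just set caps and packetize.
gst-launch-1.0 rpicamsrc bitrate=2000000 ! \
  video/x-h264,width=640,height=480,framerate=30/1 ! \
  h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=10.74.33.5 port=5800
```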

This is exactly how potential-engine (thanks for the plug @8BITProgramming) works under the hood.
