What is the simplest way to access files from the cRIO during Auto/Teleop mode?

More specifically, I need to be able to send camera images in .jpg format from the cRIO to the driver station to do vision processing.

I’m pretty sure that the solution to this is extremely simple, but I’ve never done anything like this before. How would I send files from the cRIO to the driver station using C++?

Is there a requirement that the camera images go through the cRIO on the way to the operator console? Vision processing on the driver station typically communicates directly with the camera.

How do I get the driver station to communicate directly with the camera in the code? I would like to be able to write .jpg files to the driver station. Any code samples would be greatly appreciated!

The default Dashboard program already connects to the camera and writes .jpg files to the computer. It’s written in LabVIEW. Create a new Dashboard project and use it as your code sample.

Is there any way to do it completely in C++, though? My team has zero experience with LabVIEW.

I’ve been browsing other threads, and some of them mention using FTP to get files off of the cRIO (using software like FileZilla). Is this a viable way to do it, or is there a much simpler way to write the file to the driver station? Haha sorry, I’m still sort of an amateur at this stuff.

If any examples of relevant C++ code could be posted, I would very much appreciate it. Thanks!

The default Dashboard already connects to the camera and writes .jpg files to the computer. Look in the dashboard directory to find them.

I’ve attached two screenshots. Is this the directory you are referring to? I don’t see any images automatically saved in there :confused:

If it’s possible, could you show me some screenshots of your own of the dashboard directory?

Sorry about the trouble! This has been befuddling me for a while.

To answer your direct question, I’d say ftp. To answer your bigger question, I’d say … don’t do it that way.

To get images directly from the camera, you use an HTTP session where you request an mjpeg stream and decode it. This is how the cRIO does it, and it is how the dashboard does it. The camera can handle about five sessions at once.
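To make the decode step concrete: each frame in the MJPEG stream is a complete JPEG, delimited by the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers. Here's a minimal sketch of extracting frames from a buffer of bytes you've already read from the HTTP session. The function name is my own, not something from WPILib, and the scan is naive (it doesn't handle the multipart headers or embedded thumbnails), so treat it as a starting point only.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Extract complete JPEG frames from a raw MJPEG byte buffer by scanning
// for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers.
// Hypothetical helper; the HTTP request/read loop is not shown.
std::vector<std::vector<uint8_t>> extractJpegFrames(const std::vector<uint8_t>& buf) {
    std::vector<std::vector<uint8_t>> frames;
    size_t i = 0;
    size_t start = SIZE_MAX;  // index of the most recent SOI marker, if any
    while (i + 1 < buf.size()) {
        if (buf[i] == 0xFF && buf[i + 1] == 0xD8) {        // SOI marker
            start = i;
            i += 2;
        } else if (buf[i] == 0xFF && buf[i + 1] == 0xD9) { // EOI marker
            if (start != SIZE_MAX) {
                frames.emplace_back(buf.begin() + start, buf.begin() + i + 2);
                start = SIZE_MAX;
            }
            i += 2;
        } else {
            ++i;
        }
    }
    return frames;
}
```

Once you have a frame, you can hand its bytes to whatever JPEG decoder or vision library you're using, or write them straight to a .jpg file.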

If you want to process images on the cRIO using C++, that seems pretty straightforward. If you want to process images on the dashboard using LV, that is pretty straightforward. If you install LV and follow Tutorial 8 on the Getting Started window, you will have the basic goal detection working.

If you want to use C++ on the dashboard computer, you have much more work that you will need to do yourself. WPILib may be helpful, but I suspect there will be lots of issues you will need to fix.

Don’t forget that you will also need to get the results of the image processing back to the robot.
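For that last step, a UDP datagram is one common approach. Here's a rough sketch of sending a target offset from the driver station to the robot using POSIX sockets; the port number (1140) and the text message format are arbitrary choices I made for illustration, not anything defined by FRC, so adjust both to whatever your robot code listens for.

```cpp
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Hypothetical sketch: send vision results (a target offset) from the
// driver station to the robot as a small UDP datagram. The port and the
// "x y" text format are arbitrary choices, not an FRC standard.
bool sendVisionResult(const char* robotIp, double xOffset, double yOffset) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(1140);
    if (inet_pton(AF_INET, robotIp, &dest.sin_addr) != 1) {
        close(sock);
        return false;
    }

    char msg[64];
    int len = snprintf(msg, sizeof(msg), "%.3f %.3f", xOffset, yOffset);

    ssize_t sent = sendto(sock, msg, len, 0,
                          reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
    return sent == len;
}
```

On the robot side you'd open a matching UDP listener and parse the two numbers each loop iteration, keeping in mind that UDP packets can be dropped, so the robot code should tolerate missing updates.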

I’d be happy to help, but you will need to broaden your approach.

Greg McKaskle

Or you could just leave all of that aside and do the vision processing with direct access to your AXIS cam, using the following URL (substitute your team number into the IP address): “http://10.TE.AM.11/mjpg/video.mjpg”. You’d just have to send the results from the vision analysis back via Network Tables or UDP/TCP. I’ve got my vision code with Python and Network Tables ready, so if you need it, just send me a P.M.
Good luck!
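If you go the direct-access route in C++ without an HTTP library, the request you'd send to the camera over a TCP connection on port 80 is just plain text. A small sketch of building it (the helper name and the keep-alive header are my own choices; the /mjpg/video.mjpg path is the one from the URL above):

```cpp
#include <string>

// Hypothetical helper: build the HTTP request that asks an AXIS camera
// for its MJPEG stream. Pass the camera's IP, e.g. "10.2.54.11" for
// team 254. You'd write this string to a TCP socket connected to port 80
// and then read the multipart response.
std::string buildMjpgRequest(const std::string& cameraIp) {
    return "GET /mjpg/video.mjpg HTTP/1.1\r\n"
           "Host: " + cameraIp + "\r\n"
           "Connection: keep-alive\r\n"
           "\r\n";
}
```

The response is a multipart stream where each part contains one JPEG frame, which is what the dashboard decodes.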