Reviews or comments on VMX-pi?

I’m investigating the possibility of using Kauai Labs’ VMX-pi robot controller, now sold by Studica as the Studica Robot Controller, but I’m having a difficult time figuring out what it actually does.

Our team has used the NavX sensor, and I would endorse it in a heartbeat, so I have a good view of the vendor. However, I can’t figure out exactly what I’m getting by ordering the controller. It’s obviously a package of software and hardware that includes the NavX capabilities and…something.

What I can’t figure out is whether it is actually a collection of tools for doing things, all tightly integrated and ready to use, or just a Raspberry Pi bound up with a NavX, plus some pre-installed open source libraries that I could start programming with. For example, it mentions that it’s a vision processor, but does that mean more than OpenCV is installed on it? Likewise it says I can do path planning. Is that anything more than a pre-installed path planner that is exactly the same as a package I could, if I were so inclined, load onto a Pi myself?

What I would really want from the package is ease of integration. We’ve used Raspberry Pis on the robot before. It was a pain in the neck to get voltage to them, and to get OpenCV programs running on the Pi to send data over a serial port, or NetworkTables, to the RoboRIO. Would something in this product make that process painless? It says it has CAN; could I, without too much effort, set up the angle and distance to target, computed by OpenCV, as parameters in a CAN message that I can read using a library function on the RoboRIO? Or any other option that is fairly simple, i.e. I install a library and then call some API that gets data from my OpenCV code onto the RIO for use by my autonomous driving program?

And I’m asking this question at the end of week 2 of build season, so obviously time is of the essence. If it takes four weeks of study and preparation to use the capability, it’s not too valuable. I know how to program OpenCV. I know what CAN bus and I2C are, but I need something that, armed with that knowledge, I can turn into a robot subsystem without a lot of soldering, making custom cabling, or rigging power supplies.

If it has the electrical and software connections that make that integration easy, it would be worth it. I just can’t tell if that’s what is really in the box. In general, can anyone say what sort of experience they’ve had with this product? How did you use it, and would you recommend it?

Thanks.

If you want to use a Pi for image processing and want to write your own image processing software, check out FRCVision, which is a Pi image all set up with WPILib (including CameraServer and NetworkTables) and has template programs for C++, Java, and Python.
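Roughly, the Python template boils down to something like the sketch below (simplified; the real template also parses /boot/frc.json for the camera list, and the team number here is a placeholder):

```python
import time

from cscore import CameraServer
from networktables import NetworkTables

# Connect to the RoboRIO's NetworkTables server (placeholder team number).
NetworkTables.startClientTeam(1234)

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()   # streams the first USB camera on port 1181
camera.setResolution(160, 120)

while True:
    time.sleep(10)  # keep the process alive; streaming runs in background threads
```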

There’s also Chameleon Vision, which requires a bit more setup but provides more out-of-the-box image processing features.

The VMX-pi is more of a RoboRIO replacement for non-FRC usage, e.g. it adds interfaces to the Pi that the Pi doesn’t have built in.

Ok. Thanks. I was thinking that might be the case. I think FRCVision will probably be our final answer, but we’ll be checking out Chameleon for development support.

FRCVision seems like what will meet our needs, but I can’t seem to get any pictures out of it.

I have the Pi connected to the radio via Ethernet, and my PC connected wirelessly. I open frcvision.local in either Chrome or Edge, and I can see camera parameters. I can load them from the camera. If I pull the camera out of the Pi’s USB port I can see that it becomes disconnected, but when I hit “Open Stream”, I get a page of camera parameters but no picture.

(I’ve also tried just connecting an Ethernet patch cable between the PC and the Pi, also with no success.)

So I’m sure that the camera works, because I have tried it on the PC (with GRIP and Chameleon Vision).

I know it is being detected as a camera by Linux on the Pi, because the dashboard can tell whether or not it is plugged into the USB port. However, I can’t see an image.

Does anyone know if there is anything installed on the Pi itself where I can see the attached cameras and the video stream from one or more of them? What I mean is, I want to eliminate the PC or the radio as a possible cause of the problem and verify that the Linux device, i.e. the Raspberry Pi with the FRCVision image, can accept and display that video.

One thing that I did notice is that if I turn on the console in the browser window, I see a whole bunch of messages saying it can’t process an improperly formatted JPEG. (I’ll see if I can get the exact error message.) It seems probable that there’s a format mismatch somewhere, but the directions in the WPILib docs don’t say anything about setting up the format, so I don’t know if that’s likely to be the problem.

It does give me a few alternate device entries, so I’ve tried clicking in the dropdown and selecting those, but nothing happened as far as I could tell.

Anyone have any ideas?

Edit: Here’s the error message

CS: WARNING: rPi Camera 0: invalid JPEG image received from camera

I’m pretty sure that what is happening is that my camera is streaming YUYV (did I get that right?) but the camera server within FRCVision is looking for an MJPEG stream.

So now all I have to do is figure out how to tell the server what to look for. I tried the obvious choice of putting a supported mode into the boxes, but I apparently haven’t found the magic box to click to send it to the server so it can decipher the picture.

What camera make/model is it? A page of camera parameters means that the default camera server program is seeing the camera and is connected to it. It’s just not getting valid images from the camera. The alternate device entries are just aliases to the same device, primarily useful when you have multiple cameras hooked up and want to make sure they’re consistently mapped.

What are the modes listed on the camera parameters page? The camera server tries to default to the lowest resolution mode the camera offers, but something you can try is selecting a different mode (pick a supported one) via the Vision Settings page. Make it writable, click on the camera name to expand the grouping, edit the pixel format / width / height / fps, and click Save. Then wait a few seconds and click the Open Stream button.
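If you’d rather force the mode from code than through the web page, the equivalent with robotpy-cscore looks roughly like this (a sketch; the device path, YUYV format, and 160x120 mode are just examples):

```python
import cscore

# Open the camera by device path and force a specific mode instead of
# letting the server pick the default (lowest-resolution) one.
camera = cscore.UsbCamera("rPi Camera 0", "/dev/video0")
camera.setVideoMode(cscore.VideoMode.PixelFormat.kYUYV, 160, 120, 15)
```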

I would also suggest you try a different camera in case the one you have is either malfunctioning or incompatible for whatever reason.


Thanks. As I suspected, the issue was mismatched camera streams.

Armed with this theory, I just started filling in YUYV and 160x120 resolution (the lowest available on the “Open Stream” page, and what was being returned when I read the parameters from the camera) into every box I could. I also clicked “writable” and “save”. Is YUYV a compression type? Well, I don’t know, but I typed it into the box for the default stream, and the video stream showed up.

Tomorrow I try again with two JeVois cameras, but for now I need to go to bed.

Thanks again.

YUYV is an uncompressed image. CameraServer will take that and do MJPEG compression in software to stream it.
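In code terms that pipeline is roughly the following (a robotpy-cscore sketch; the default FRCVision server program wires this up for you):

```python
import cscore

# Grab raw, uncompressed YUYV frames from the camera...
camera = cscore.UsbCamera("rPi Camera 0", "/dev/video0")
camera.setVideoMode(cscore.VideoMode.PixelFormat.kYUYV, 160, 120, 15)

# ...and let the MJPEG server compress them in software and serve them
# over HTTP (this is the stream you view at frcvision.local:1181).
server = cscore.MjpegServer("serve_cam", 1181)
server.setSource(camera)
```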

Well, I finally got back to this. I’m not having much luck. The weird thing now is that when I try to connect to the stream to view the camera, or at least the available settings, I get the browser message that it can’t find

http://frcvision.local:1182/

It finds frcvision.local just fine, but I’m getting the usual browser message that it can’t reach that URL. (i.e. in Chrome it says:

This site can’t be reached

frcvision.local refused to connect.)

I tried a few variations: different computers, different routers (including the FRC radio and others), Chrome and Edge, and it just won’t show the stream anymore. I suppose my next step will be to reflash my Raspberry Pi image and see if that helps. That’s the only thing that’s consistent in all of the scenarios that don’t work…but if anyone has seen this before and has an explanation, I’d be grateful.

What’s displaying on the console output (on the vision status tab)?

(I really need to start enabling the console output)

On the console output

config error in ‘/boot/frc.json’: could not read camera name: [json.exception.out_of_range.403] key ‘name’ not found

Waiting 5 seconds…

And I had noticed that the status was “up” and the seconds were counting, but after I read that message I paid closer attention: it counts from 1 to 5, over and over.

It sounds like the /boot/frc.json file (which is written by the vision settings tab of the webpage) may be corrupt. Try going to that tab, making it writable, tweaking a camera name, and saving.
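If you want to check the file directly before resorting to a reflash, you can parse it by hand on the Pi; a quick sanity-check sketch (assuming the standard /boot/frc.json location):

```python
import json

# Parse the FRCVision config and look for the missing 'name' key the
# camera server is complaining about.
with open("/boot/frc.json") as f:
    config = json.load(f)   # raises an error here if the JSON itself is corrupt

for cam in config.get("cameras", []):
    if "name" not in cam:
        print("camera entry missing 'name':", cam)
```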

I made a completely new image.

I have a couple of different USB cameras. I can get the “normal” ones working one at a time, but when I try two, I get all sorts of errors. Either the stream isn’t displayed, or I get the “can’t reach frcvision.local:1181 (or 1182)” error. And the JeVois cameras just never work at all.

I’m going to go back to square one and step through it one step at a time.

After a few hours of reading the documentation slowly and paying attention, and then starting over from step one, I managed to get both cameras streaming, one of which was a JeVois.

Our current plan is to have one camera streaming a view to the driver station, while another one processes vision and feeds data to the RoboRIO. So, next, I somehow have to get an OpenCV program that shares data with the RoboRIO. I’m thinking a program on the Raspberry Pi that uses both NetworkTables and OpenCV, roughly as sketched below; I’ll see if I can make that work.
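A rough sketch of what I mean (robotpy-cscore plus pynetworktables; the team number, table, and key names are made up):

```python
import numpy as np
from cscore import CameraServer
from networktables import NetworkTables

NetworkTables.startClientTeam(1234)        # placeholder team number
table = NetworkTables.getTable("vision")   # made-up table name

cs = CameraServer.getInstance()
cs.startAutomaticCapture(dev=0)            # camera 0: raw stream for the drivers

vision_cam = cs.startAutomaticCapture(dev=1)  # camera 1: vision processing
sink = cs.getVideo(camera=vision_cam)
img = np.zeros((120, 160, 3), dtype=np.uint8)

while True:
    frametime, img = sink.grabFrame(img)
    if frametime == 0:                     # error grabbing a frame; skip it
        continue
    # ...OpenCV target detection on img goes here...
    table.putNumber("angle", 0.0)          # placeholder results for the RoboRIO
    table.putNumber("distance", 0.0)
```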

I honestly have no clue what I did differently this time around to make it work, other than be very methodical and make sure I was doing the right steps in the right order. It sure seemed like I had tried all of those steps before.

That’s exactly what the example programs are a starting point for. They implement the same multi-camera server functionality, but you can customize them further to perform image processing.

I think there’s some detail missing in that example.

Like, it doesn’t actually use OpenCV, it never writes any value to NetworkTables, and it never touches an individual frame at all. However, it is a good example of the basic structure. It just requires a bit more knowledge than I have to perform that “customization”; what I’m picturing is something like the snippet below inside the frame loop. Still, that’s what search engines are for. Off to Google…or maybe GitHub. I don’t think I use GitHub searching enough. I’ll bet I can find code that does something comparable to what I want.
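To be concrete, here is the kind of per-frame customization I mean (hypothetical HSV thresholds and pixel-to-offset math, just for illustration):

```python
import cv2

def process(img, table):
    # Threshold for a retroreflective-green target (made-up HSV range).
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    # findContours returns (contours, hierarchy) or (image, contours, hierarchy)
    # depending on the OpenCV version; [-2] picks the contours either way.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        # Publish the target's horizontal offset from the center of a 160px frame.
        table.putNumber("targetOffsetPixels", (x + w / 2) - 80)
```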


Wanted to answer the initial questions on VMX-pi:

VMX-pi is very flexible and has several software options, depending upon the usage:

  • FRC Motion/Vision Co-processor: VMX-RTK is available. Similar to FRCVision, it’s a pre-built SD card with all the libraries (OpenCV, WPI Library NTCore/CSCore, and libraries for accessing the onboard navX-Sensor).
  • Offseason FRC training platform: Full WPI Library support is available, which allows development of FRC-compatible software directly on the board, using the same tools and libraries as on the official FRC RoboRIO platform.
  • Post-season cost-savings platform: Using the WPI Library, last year’s robot can be kept running on the low-cost VMX-pi platform, while the more expensive FRC RoboRIO platform is removed for use in next year’s robot.

Keep in mind that VMX-pi provides power to the Raspberry Pi; thanks to its onboard power supply, you can connect it directly to the PDP.

As to the CAN usage, the typical use case is to use WPI Library NetworkTables to send the vision processing result and any motion processing data to the RoboRIO. Transmitting detection and motion processing data to the RoboRIO via CAN is also possible (using the VMX-pi Hardware Abstraction Library, also included in the VMX-RTK), but the NetworkTables approach is the simplest thing that could possibly work.
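On the RoboRIO side, reading those values is one call per value; for example, sketched with RobotPy/pynetworktables (the table and key names must match whatever the Pi publishes):

```python
from networktables import NetworkTables

# Inside robot code the RoboRIO runs the NetworkTables server, so values
# published by the coprocessor simply show up in the table.
table = NetworkTables.getTable("vision")   # must match the Pi-side table name
angle = table.getNumber("angle", 0.0)      # defaults returned until data arrives
distance = table.getNumber("distance", 0.0)
```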

Are there any accessible code examples that show a program doing all of that, i.e. using OpenCV, integrating the NavX, and sending results of processing to NetworkTables?

Code examples (in C++, Java, and Python) for vision coprocessing on VMX-pi are contained in the open source VMX-RTK Examples GitHub repo, and are described on the VMX-RTK Examples page. The examples include: navX-sensor access, OpenCV access, WPI Library NetworkTables access for sending data to the RoboRIO, camera streaming and save-to-disk via the CSCore libraries, and also how to integrate a GRIP-created vision processing pipeline into them.

There are also additional examples on camera calibration (from OpenCV) and on accessing the VMX-pi HAL directly (for CAN bus, as well as analog IO and digital IO, quadrature encoder decoding, counters, etc., if using VMX-pi as a robot controller).
