How to run vision processing code on the classmate during matches?

My team is going to be using vision processing for our hybrid mode. From what I understand, rather than running the processing code on the cRIO, it is run on the classmate or another laptop my team uses for driving. How do I get this separate code to run, and control the robot, while the robot is running?

You would have to send the video frames to the computer, run your image processing program, and have it send the relevant commands to the cRIO, all over the wifi link. IMO it would be better to have an onboard laptop doing the same thing; then it can be connected through Ethernet, which is much faster.

The forums have various other threads discussing this scheme. Basically, connect your Axis camera to your router and grab the stream off of it. But note it is possible to run NI-Vision on the cRIO (although I think someone mentioned they were attempting to offload it due to it being so CPU-intensive).

Are we allowed to have a laptop right on our robot to do the image processing? I was definitely planning to offload the images from the camera rather than using the cRIO, because there is just so much more processing power on a laptop. What I am really asking is how to execute the code that finds the rectangle and figures out where the robot needs to go on the laptop, as opposed to the other parts of the process, such as how to process the images.

Yes and no:
Kinect sensor -> data: yes, but that's already built
Axis camera processing: no (though you can, as you can see from the rest of this thread)
Kinect data -> drive commands: no, that runs on the robot

If you are doing insane calculations on images, you should offload them to the classmate. If you are doing simpler image processing (rectangle tracking, etc…), the robot will suffice, as offloading will take more work than it is worth.

If you plug the camera directly into the D-Link switch on the robot, then any program that knows the IP address and is on the subnet can request images. This is how the dashboard works. If you wish to change the dashboard, it is written in LabVIEW and the source is provided, or you can choose one of the other dashboard tools.

So, getting an image from the camera is as simple as getting the laptop program to ask for it.
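
To make that concrete, here is a minimal Java sketch of asking an Axis camera for one frame. The `/axis-cgi/jpg/image.cgi` path is the standard Axis still-image endpoint, and `10.0.0.11` is only a placeholder; substitute your camera's actual address (conventionally `10.TE.AM.11` in FRC).

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class AxisSnapshot {
    // Build the still-image URL for an Axis camera at the given host.
    static String snapshotUrl(String host) {
        return "http://" + host + "/axis-cgi/jpg/image.cgi";
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {              // no camera given: just show the URL
            System.out.println(snapshotUrl("10.0.0.11"));
            return;
        }
        // Fetch one complete JPEG frame and save it to disk.
        try (InputStream in = new URL(snapshotUrl(args[0])).openStream()) {
            byte[] jpeg = in.readAllBytes();
            Files.write(Paths.get("frame.jpg"), jpeg);
            System.out.println("Got " + jpeg.length + " bytes");
        }
    }
}
```

Run it with the camera's IP as the only argument to actually fetch a frame; with no arguments it just prints the URL it would use.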

As for processing it, you can process it on the cRIO. The target isn’t moving, and as long as the robot isn’t moving very much, you don’t need 30 fps. Calculate the rate you need. For processing the image, you have a number of choices. NI-Vision works on the cRIO and on the desktop and has entry points for C and LabVIEW. OpenCV is another pretty accessible choice.

Greg McKaskle

Well, that's part of the problem we are having: my team is programming in Java, and none of the other programmers on the team are willing to switch to C, which is what I am most familiar with.

I’m not sure what to do about that. OpenCV is definitely an option, and if someone is willing to write wrappers, NI-Vision is too.

Greg McKaskle

What would have to be done to create the wrappers? None of the programmers on the team are very experienced with writing robotics code in Java.
Also, what I am really trying to ask in this thread is: once the code is written, how do I execute it?

As an aside, the plan my team has is to use the rectangle as a way of positioning the robot, as we plan to have an extremely short-range shooter. Would you still suggest processing at less than 30 fps, and on the cRIO?

I don’t want to hijack this thread, but I have heard usually reliable sources say that sending pictures back to the Classmate (or whatever Win7 Machine you use on your driver station) is a bad idea.

The gist of their argument is that up close the wifi is up to the task, but from across the field, with other robots all talking too, the frame rate drops to the low single digits (3-4 fps).

This is the kind of thing that scares me to death because I can’t know that it is a problem until we get to an actual competition.

So… should I worry about the 10,000 other problems I have to worry about, or should I continue to worry about this one (and perhaps decide that I don’t really want to send the camera data to the remote Win7 machine after all)?

Do tell.

Joe J

Last year we did such a scheme (no data sent back to the robot, however) and were running at about 21 fps.

However, we did have some small issues with the robot hiccuping. We believe the problem was high latency in the Classmate's driver station commands (we graphed some data obtained with Wireshark). We have not ruled out the camera directly attached to the wifi with 100% certainty…

I have heard similar things. However, the counter-argument I have heard is that running image processing on the cRIO, if run simultaneously with enough other code (no idea how much, though), causes noticeable lag in the processing speed. One suggested solution is to mount a second laptop on the robot and wire it directly to the camera and the cRIO; however, I am not sure whether the game design rules allow this.

You can’t communicate between the robot and the laptop in hybrid/autonomous, can you? Or are things like the keyboard and joysticks the only things disabled?

If you are allowed to put another machine on the robot for image processing, you may as well use the Kinect on it instead of the Axis! I’ve heard it's very easy to get good data, much more than the Axis could ever give you, and with a USB-equipped computer on the robot it is very possible and legal to do it. (The Kinect, anyway; I know nothing about the computer on the robot.)

I can vouch that grabbing the image stream from the Axis camera works fine. Our team got ~29 fps using the smaller resolution. The 3-4 fps, I would imagine, comes from someone trying to use the cRIO to relay the images.

As for communication during hybrid: you can; the only thing disallowed is human input (well, excepting the Kinect).

Not knowing much about this situation, I can’t say for sure what their framerate was on and off the field, but if they used the default dashboard and the M1011, then the framerate would have been single digits to start with.

The default dashboard, along with the cRIO camera communications, moved from software-timed requests of individual JPEGs to a hardware-timed MJPG stream this year, once it was discovered that the M1011 had poor performance with the JPEG route. Both options are still in the LabVIEW palette, but the default is now MJPG. The initial decision was somewhat arbitrary, and C, and therefore Java, were already using MJPGs. Depending on the setup, other factors could have played a part. It is easy to chalk it up to the “field”, but I have never witnessed this and it doesn’t feel like the actual culprit.
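
For reference, an MJPG stream is just a long-lived multipart HTTP response in which each part carries one JPEG plus a Content-Length header. A minimal Java sketch of pulling frames out of such a stream follows; the `/axis-cgi/mjpg/video.cgi` path is the standard Axis endpoint, but check your camera's documentation, and note the sketch assumes the camera sends Content-Length (Axis cameras do).

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class MjpgReader {
    // Pull Content-Length out of one part's header block, or -1 if absent.
    static int contentLength(String headers) {
        for (String line : headers.split("\r\n")) {
            String[] kv = line.split(":", 2);
            if (kv.length == 2 && kv[0].trim().equalsIgnoreCase("Content-Length")) {
                return Integer.parseInt(kv[1].trim());
            }
        }
        return -1;
    }

    // Read bytes up to and including the blank line that ends a part's headers.
    static String readHeaders(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            sb.append((char) c);
            if (sb.length() >= 4 && sb.substring(sb.length() - 4).equals("\r\n\r\n")) break;
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("usage: MjpgReader <camera-ip>");
            return;
        }
        URL url = new URL("http://" + args[0] + "/axis-cgi/mjpg/video.cgi");
        try (DataInputStream in = new DataInputStream(url.openStream())) {
            for (int frame = 0; frame < 100; frame++) {
                int len = contentLength(readHeaders(in)); // boundary + part headers
                byte[] jpeg = new byte[len];
                in.readFully(jpeg);                        // one complete JPEG frame
                System.out.println("frame " + frame + ": " + len + " bytes");
            }
        }
    }
}
```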

As mentioned, it is somewhat difficult to simulate a match in your shop, but on the network you have, you can look at utilization and latency. You can look at how/if it changes when a second robot is added. You can also do some back-of-envelope calculations to see how much of the wireless-N network's capacity you are using.
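
One such back-of-envelope calculation, with illustrative rather than measured numbers, looks like this:

```java
public class BandwidthEstimate {
    // Rough MJPEG bandwidth: bytes per frame * frames per second, in Mbit/s.
    static double megabitsPerSecond(double kbPerFrame, double fps) {
        return kbPerFrame * 1024.0 * 8.0 * fps / 1000000.0;
    }

    public static void main(String[] args) {
        // A 320x240 JPEG is often on the order of 10-15 KB; say 12 KB at 15 fps.
        // These are illustrative figures -- measure your own camera's output.
        double mbps = megabitsPerSecond(12.0, 15.0);
        System.out.printf("~%.2f Mbit/s of the shared wireless link%n", mbps);
    }
}
```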

My final advice is to look at the elements being measured and use those requirements to determine the rates and resolutions needed, as well as the appropriate sensor to use.

Due to slow speeds (30 Hz max), somewhat high latency (>60 ms, often 150 ms), and variable jitter, cameras are not necessarily a good sensor to close a loop with. It is far better to calculate where the target is and use an encoder or pot to turn the turret. If the robot is to be turned, use a gyro. More CPU does little to improve these numbers. Higher-speed cameras exist, but they are not in the kit, their cost is pretty high, and they may be difficult to integrate.

I think the camera is a very valuable sensor, but it all depends on how it is used.

To the original topic, the laptop allows you to bring more CPU to the table, to process images more thoroughly, at a higher resolution, and perhaps at a faster rate. Once you have an algorithm that demands more CPU, this seems like a good step. Until then, …

Greg McKaskle

All right, so I have gotten a ton of really helpful information; now what I would like is some help figuring out how to put it all together.
Here is what I am hoping to do, in its entirety:

Use the camera on the robot to find the vision target,

Export images from the camera to either our driver station computer or a laptop mounted on the robot,

Use the NI-Vision software to find where the rectangles are,

Take the rectangles found by the NI-Vision software and calculate the distance from and angle off of perpendicular with the backboards,

Using these distances and angles, which will be updated throughout the course of the robot's movement, calculate how our joystick would need to be moved in order to move our robot, with its short-range shooter, up to the backboards,

And finally, send these theoretical joystick controls back to the cRIO and use them to move the robot to where it needs to go.
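
As a sketch of the distance-and-angle step above, here is a simple pinhole-camera model in Java. The field-of-view, image-width, and target-width constants are assumptions (the ~47° figure is roughly right for an Axis M1011 at 320 px wide, and the 2012 vision target is about 2 ft wide); replace them with values you measure for your own camera and target.

```java
public class TargetMath {
    // Assumed constants -- measure these for your own setup.
    static final double TARGET_WIDTH_FT = 2.0;  // vision target outer width
    static final double IMAGE_WIDTH_PX = 320.0;
    static final double HORIZ_FOV_DEG = 47.0;   // approx. camera horizontal FOV

    // Focal length in pixels from the horizontal field of view.
    static double focalPx() {
        return (IMAGE_WIDTH_PX / 2.0) / Math.tan(Math.toRadians(HORIZ_FOV_DEG / 2.0));
    }

    // Pinhole-camera distance estimate from the target's apparent width in pixels.
    static double distanceFt(double targetWidthPx) {
        return TARGET_WIDTH_FT * focalPx() / targetWidthPx;
    }

    // Horizontal angle from image center to the target center, in degrees.
    static double bearingDeg(double targetCenterPx) {
        return Math.toDegrees(Math.atan((targetCenterPx - IMAGE_WIDTH_PX / 2.0) / focalPx()));
    }

    public static void main(String[] args) {
        // Example: a rectangle 80 px wide, centered at pixel column 200.
        System.out.printf("distance=%.1f ft, bearing=%.1f deg%n",
                distanceFt(80.0), bearingDeg(200.0));
    }
}
```

The angle-off-perpendicular with the backboard takes more than this (e.g. comparing the apparent heights of the rectangle's left and right edges), but distance and bearing alone are enough to drive toward the target.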

I am starting to understand how to get the images from the camera, and I can theoretically do the calculations on the angles and movement required. What I still need to figure out is how to create the virtual joystick, how to send the virtual joystick controls to the robot, and how to make the program I write run on whatever computer I decide it needs to run on.
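
For the "virtual joystick", one simple approach is to pack your computed axis values into small UDP datagrams and have the robot-side program read them each loop, treating the values exactly as it would real joystick input. Everything in this sketch (the port number, the packet layout, and the robot address) is a made-up convention; the only requirement is that both ends agree on it.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class VirtualJoystick {
    // Pack two axis values (-1.0 .. 1.0) into an 8-byte payload:
    // two big-endian floats. The layout is arbitrary; the robot-side
    // code must unpack it the same way.
    static byte[] pack(double x, double y) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putFloat((float) x);
        buf.putFloat((float) y);
        return buf.array();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = pack(0.5, -0.25);
        if (args.length == 0) {              // no target given: just show the bytes
            for (byte b : payload) System.out.printf("%02x ", b);
            System.out.println();
            return;
        }
        try (DatagramSocket sock = new DatagramSocket()) {
            // args[0] is the robot's address, e.g. the cRIO at 10.TE.AM.2.
            // 1130 is a made-up port; pick one your rules and robot code allow.
            sock.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName(args[0]), 1130));
        }
    }
}
```

On the robot side, the program would receive the datagram each loop and feed the two floats into its drive code in place of joystick axes.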

So can you do processing on the robot with a laptop as long as you don’t control the robot, or can’t you?

Currently the rules make it look like you can do the processing on a laptop mounted on the robot as long as it doesn’t directly control the robot; however, we are not fully sure of this as of right now. A Q&A question, perhaps?

Over the course of the rest of today, I got a fair amount of work done figuring out how to deal with the positioning calculations, but I am still having a bunch of trouble figuring out both how to actually run the code that gets written and how to send the processed data back to the robot.