I was thinking this would be an excellent year to do the vision tracking on the Dashboard as opposed to the cRIO. There are a couple of big advantages I see in doing this. First, it would save the cRIO's limited processing power. Second, and more importantly, it would allow adjustments to the vision tracking calibration on the field. Has anyone tried this, or does anyone know of reasons why it might be a bad idea?
We did camera tracking on the driver station last year, and I'm helping another team do it this season. It was all done in LabVIEW.
Camera calibration is key. Being able to change the settings on the field is a plus, but with proper calibration you shouldn't need to. Still, it's good to have just in case.
Be sure to read the white paper on vision processing, especially the parts on exposure and white balance, found here:
http://wpilib.screenstepslive.com/s/3120/m/8731
You really can't run it on the Classmate; there's just not enough processing power there. You need a faster machine with more RAM than that. Just remember that you need enough processing cycles for the driver station to remain in regular contact with the field as well as to perform whatever else you need.
It should run better on the Classmate. If you look at the specs of the cRIO, it only has 128 MB of RAM, and I am sure the Classmate has more than that. https://decibel.ni.com/content/docs/DOC-19103
Vision will run on all the above, but the performance will differ based roughly on MIPS performance and available vector instructions.
Greg McKaskle
My first attempt at offloading the vision processing was onto the Classmate. It didn't work out well. I had issues with the CPU being overtaxed and with lost/delayed packets between the field, robot, and driver station. I got a bigger, faster laptop and all of those problems magically disappeared.
As I stated, it just doesn't have enough horsepower to be the driver station AND perform image processing. So unless you have a really scaled-down image processing function (i.e., not directly copying and pasting the sample code provided), you won't be happy with the end results.
That's disappointing to hear. I shall give it a try once the camera comes in, because I have already programmed it. I guess I will most likely have to do it the same way as last year. I was really hoping to be able to calibrate from the Dashboard.
Is anyone willing to let me download the Dashboard vision tracking code they're using for this year? I'm stuck.
There is a sample program in the FRC LabVIEW examples section. From the home screen, click the Support tab and click Find FRC Examples, then go to Vision and open Rectangular Targets - 2013.lvproj. That is the code I am using.
The code that didn't work on the Classmate … Can you describe what camera resolution and frame rate you were using with the default code?
As mentioned, the Classmate is faster than the cRIO. The cRIO is pretty capable for an industrial controller, but it is a 400 MHz RISC architecture from the '90s. It has no vector instructions and can achieve 760 MIPS. By comparison, an ARM Cortex is more like 90, an Atom is a few thousand, and an i7 laptop is more like 80K.
The requirements scale linearly with frame rate and linearly with pixel count. Note that pixels scale as X*Y, meaning that small, medium, and large images scale as X, 4X, and 16X. This is an estimate, because the JPEG compression doesn't necessarily follow this prediction, and most, but not all, of the image processing is integer based.
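To make that scaling concrete, here is a small back-of-the-envelope sketch (Python, not LabVIEW) that just multiplies pixel count by frame rate. The baseline and the exact figures are illustrative assumptions, not measurements.

```python
# Rough relative-load estimate: cost ~ width * height * fps, per the linear
# scaling described above. Numbers are illustrative only; JPEG handling and
# integer vs. float work will shift the real figures.
BASELINE = (160, 120, 10)  # reference load: 160x120 at 10 fps

def relative_cost(width, height, fps):
    bw, bh, bfps = BASELINE
    return (width * height * fps) / (bw * bh * bfps)

for w, h, fps in [(160, 120, 10), (320, 240, 10), (320, 240, 30), (640, 480, 30)]:
    print(f"{w}x{h} @ {fps} fps -> {relative_cost(w, h, fps):.0f}x the baseline load")
```

For example, 320x240 at 30 fps works out to roughly 12x the 160x120/10 fps baseline, and 640x480 at 30 fps to roughly 48x.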
Greg McKaskle
We ran vision processing on the driver station PC both last year and this year, and we've had no problems with the results. We're not using the Classmate, though, so you may want to consider a faster laptop if you want your vision processing to run faster.
Greg,
It was 320 x 240, 30 fps, 30% compression. We basically copied the vision processing sample into the driver station and sent only the targeting array back to the cRIO via UDP. That's a pretty common setup, so I hear.
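For anyone wondering what "send only the targeting array back via UDP" looks like, here is a rough sketch in Python rather than the LabVIEW we actually used. The robot address, port, and payload layout are placeholders, not values from our code.

```python
# Hedged sketch of the dashboard-side send: process the image locally,
# then ship only the small result packet to the robot over UDP.
import socket
import struct

ROBOT_IP = "10.0.0.2"   # placeholder; use your team's 10.TE.AM.2 address
ROBOT_PORT = 1130       # placeholder port number

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(distance_ft, azimuth_deg, target_found):
    # Pack a tiny fixed-size payload; the robot side unpacks the same layout.
    payload = struct.pack("<ddB", distance_ft, azimuth_deg, 1 if target_found else 0)
    sock.sendto(payload, (ROBOT_IP, ROBOT_PORT))

send_target(12.5, -3.2, True)
```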
Lowering the frame rate and/or image size eases the problem but doesn't eliminate it. What we eventually resorted to was turning the tracking on and off as needed. But when it was on, it did increase the CPU load, and the robot was sluggish.
I can't recall whether any watchdogs were triggered or not, but I do remember looking at the log, and when the CPU use was high, packet loss was high as well.
Instead of using UDP, you should probably use the SD Read VIs that are included in the Dashboard section of the WPI Robotics Library. That should hopefully help eliminate your packet loss, but it wouldn't necessarily help with your CPU lagging.
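For reference, the same "publish only the results" idea can also be expressed with NetworkTables, which I believe is what the SmartDashboard data rides on. The sketch below uses the pynetworktables Python client rather than the LabVIEW SD VIs, and the server address and key names are assumptions.

```python
# Hedged NetworkTables sketch; the LabVIEW SD Read/Write VIs mentioned above
# serve the same role on the LabVIEW side.
from networktables import NetworkTables

NetworkTables.initialize(server="10.0.0.2")  # placeholder 10.TE.AM.2 robot address
sd = NetworkTables.getTable("SmartDashboard")

def publish_target(distance_ft, azimuth_deg, target_found):
    # Publish just the computed results; the robot code reads the same keys.
    sd.putNumberArray("target_info", [distance_ft, azimuth_deg])
    sd.putBoolean("target_found", target_found)

publish_target(12.5, -3.2, True)
```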
Using Network Tables or UDP wouldn’t have changed the problem.
We used UDP because it was faster than Network Tables and we knew how to do UDP. As for packet loss, it wasn't UDP packet loss I was talking about; it was FMS communication packet loss. That is what the log viewer shows.
Our UDP code couldn't care less whether there was any packet loss or not; UDP doesn't factor into the equation. Our UDP packets were tiny anyway (52 bytes, IIRC) and only transmitted every 100 ms.
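To put that in perspective, the arithmetic is trivial: the results traffic is a few kilobits per second, nothing next to the camera stream itself.

```python
# The reported UDP traffic: 52-byte packets every 100 ms.
packet_bytes, period_s = 52, 0.100
bits_per_sec = packet_bytes * 8 / period_s
print(f"{bits_per_sec / 1000:.1f} kbit/s")  # ~4.2 kbit/s -- a rounding error on the link
```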
The bottom line for us was that the Classmate couldn't cut it with what we asked of it. It's all about the Classmate's processing power, or lack thereof, nothing else.
Last year, when we did some vision processing on the Classmate, the field admin told us we had over 300 ms ping throughout autonomous, while normal is more like 10-40 ms. It didn't stop us from scoring, though.
Thanks for the info on the Classmate. I don't have one at home, but I'll see if I can replicate it. My guess would be that 30 fps is not needed and would be pushing it. I would think that the Classmate should be able to process 20 fps, though.
As for UDP versus SD, they should both work fine. UDP is less traffic and a bit less overhead, but it may not get through. SD is TCP-based.
Greg McKaskle
Another thought came to me. Keep in mind that any time you have panels open, especially ones with a number of controls, indicators, or probes, LabVIEW is shipping that data back from the controller, which adds quite a bit of overhead. Running on the Classmate, there is less overhead for panels and displays, but it is still present.
Greg McKaskle