Vision Tracking from the Dashboard
I was thinking this year would be an excellent year to do the vision tracking on the Dashboard as opposed to the cRIO. I see a couple of big advantages in doing this. First, it would save the cRIO's limited processing power. Second, and more importantly, it would allow adjustments to the vision tracking calibration on the field. Has anyone tried this, or does anyone know any reasons why this might be a bad idea?
Re: Vision Tracking from the Dashboard
We did camera tracking on the driver station last year, and I'm helping another team do it this season. It was all done in LabVIEW.
Camera calibration is key. Being able to change the settings on the field is a plus, but with proper calibration you shouldn't need to. Still, it's good to have just in case. Be sure to read the white paper on vision processing, especially the parts on exposure and white balance, found here: http://wpilib.screenstepslive.com/s/3120/m/8731

You really can't run it on the Classmate; there's just not enough processing power there, so you need a faster machine with more RAM. Just remember you need enough processing cycles for the driver station to remain in regular contact with the field, as well as to perform whatever else you need.
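If you're not on the LabVIEW dashboard, the same driver-station-side approach can be sketched with OpenCV in Python. This is just an illustration of the idea, not the poster's code; the camera URL and HSV thresholds are placeholders you'd calibrate yourself after locking down exposure and white balance as the white paper describes.

Code:
# Driver-station-side vision sketch with OpenCV (Python), not the LabVIEW dashboard code.
# CAMERA_URL and the HSV bounds are hypothetical; calibrate them for your own
# exposure/white-balance settings.
import cv2
import numpy as np

CAMERA_URL = "http://10.0.0.11/mjpg/video.mjpg"   # placeholder Axis camera stream
LOWER_HSV = np.array([60, 100, 100])              # example lower bound for a green ring light
UPPER_HSV = np.array([90, 255, 255])              # example upper bound

cap = cv2.VideoCapture(CAMERA_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # ...score the contours and send the chosen target back to the robot...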
Re: Vision Tracking from the Dashboard
Vision will run on all the above, but the performance will differ based roughly on MIPS performance and available vector instructions.
Greg McKaskle
Re: Vision Tracking from the Dashboard
As I stated, it just doesn't have enough horsepower to be the driver station AND perform image processing. So unless you have a really scaled-down image processing function (i.e., not directly copying and pasting the sample code provided), you won't be happy with the end results.
Re: Vision Tracking from the Dashboard
Is anyone willing to let me see their dashboard vision tracking code for this year, something I can download? I'm stuck.
Re: Vision Tracking from the Dashboard
The code that didn't work on the Classmate ... Can you describe what camera resolution and frame rate you were using with the default code?
As mentioned, the Classmate is faster than the cRIO. The cRIO is pretty capable for an industrial controller, but it is a 400 MHz RISC architecture from the '90s. It has no vector instructions and can achieve about 760 MIPS. By comparison, an ARM Cortex is more like 90, an Atom is a few thousand, and an i7 laptop is more like 80K. The requirements scale linearly with frame rate and linearly with pixel count. Note that pixels scale by X*Y, meaning that small, medium, and large images scale as X, 4X, and 16X. This is an estimate, because JPEG compression doesn't necessarily follow this prediction, and most, but not all, image processing is integer based.

Greg McKaskle
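To make that scaling rule concrete, here's a quick back-of-the-envelope calculation. The baseline resolution and frame rate are arbitrary; only the ratios matter.

Code:
# Relative vision-processing load, using the rule of thumb above:
# cost scales with frame rate and with pixel count (X*Y).
def relative_load(width, height, fps, baseline=(160, 120, 10)):
    bw, bh, bfps = baseline
    return (width * height * fps) / (bw * bh * bfps)

print(relative_load(160, 120, 10))   # 1.0   (baseline)
print(relative_load(320, 240, 30))   # 12.0  (4x the pixels, 3x the frames)
print(relative_load(640, 480, 30))   # 48.0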
Re: Vision Tracking from the Dashboard
We ran vision processing on the driver station PC both last year and this year, and we've had no problems with the result. We're not using the Classmate, though, so consider using a faster laptop if you want your vision processing to run faster.
Re: Vision Tracking from the Dashboard
It was 320x240, 30 fps, 30% compression. We basically copied the Vision Processing sample into the Driver Station and sent only the targeting array back to the cRIO via UDP. That's a pretty common setup, so I hear. Lowering the frame rate and/or image size eases the problem, but doesn't eliminate it. What we eventually resorted to was turning the tracking on/off as needed. But when it was on, it did increase the CPU load, and the robot was sluggish. I can't recall whether any watchdogs were triggered, but I do remember looking at the log, and when the CPU use was high, packet loss was high as well.
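For anyone trying to replicate this pattern outside of LabVIEW, here's roughly what "process on the laptop, send only the results over UDP" looks like in Python. The robot IP, port, and packet fields are made-up placeholders, not the poster's actual values.

Code:
# Sketch of sending only the targeting result to the robot over UDP.
import socket
import struct
import time

ROBOT_IP = "10.0.0.2"   # placeholder cRIO address
PORT = 1130             # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # pretend these came from the vision loop: found flag, azimuth, distance
    found, azimuth_deg, distance_ft = 1, -3.5, 12.2
    packet = struct.pack("<idd", found, azimuth_deg, distance_ft)   # 20 bytes
    sock.sendto(packet, (ROBOT_IP, PORT))
    time.sleep(0.1)     # roughly the 100 ms update rate mentioned later in the thread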
Re: Vision Tracking from the Dashboard
Instead of using UDP, you should probably use the SD Read VIs that are included in the Dashboard section of the WPI Robotics Library. That should hopefully help eliminate your packet loss, but it wouldn't necessarily help with your CPU lagging.
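For non-LabVIEW teams, the closest equivalent I know of is publishing the result over NetworkTables/SmartDashboard. The sketch below uses the pynetworktables library and a made-up key name; it's my own illustration of the same idea, not the SD VIs themselves.

Code:
# Publishing the targeting result over NetworkTables/SmartDashboard instead of raw UDP.
# The server address and key name are placeholders.
from networktables import NetworkTables

NetworkTables.initialize(server="10.0.0.2")        # placeholder robot address
sd = NetworkTables.getTable("SmartDashboard")

def publish_target(found, azimuth_deg, distance_ft):
    # NetworkTables runs over TCP, so delivery/retransmission is handled for you.
    sd.putNumberArray("target", [float(found), azimuth_deg, distance_ft])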
Re: Vision Tracking from the Dashboard
Using Network Tables or UDP wouldn't have changed the problem.
We used UDP because it was faster than Network Tables and we knew how to do UDP. As for packet loss, it wasn't UDP packet loss I was talking about; it was FMS communication packet loss. That is what the log view shows. Our UDP couldn't care less whether there was any packet loss; UDP doesn't factor into the equation. Our UDP packets were tiny anyway (52 bytes, IIRC) and only transmitted every 100 ms. The bottom line for us was that the Classmate couldn't cut it with what we asked of it. It's all about the processing power, or lack thereof, of the Classmate, nothing else.
Re: Vision Tracking from the Dashboard
Last year, when we did some vision processing on the Classmate, the field admin told us we had a ping above 300 all through autonomous, while normal is more like 10-40. Didn't stop us from scoring, though, lmao.
Re: Vision Tracking from the Dashboard
Thanks for the info on the Classmate. I don't have one at home, but I'll see if I can replicate it. My guess would be that 30 fps is not needed and would be pushing it. I would think that the Classmate should be able to process 20 fps, though.
As for UDP versus SD, they should both work fine. UDP is less traffic and a bit less overhead, but may not get through. SD is TCP-based.

Greg McKaskle