Vision Targeting on Laptop
Our team recently purchased a new laptop to replace our broken Classmate. The new laptop is a quad-core (thanks, Woot!), so we have a lot of processing power that can be put to use.
I remember a thread a while ago where one team mentioned that they did their vision processing on their driver station laptop (maybe 1114). This seems like a good approach to pursue, since the "take a single image, act on it" method may not be viable for next year's game, for example if it were a clone of Lunacy. My theory on how to accomplish this is to run the vision processing in the dashboard, since it already has a connection to the cRIO, and just transmit some of the target info (distance, angle, etc.) to the robot. Am I going in the right direction, or is there a better way to go about it?
Re: Vision Targeting on Laptop
Dunno about 1114, but 341 did. They also published a document on it: http://www.chiefdelphi.com/media/papers/2676
Re: Vision Targeting on Laptop
That is exactly what I was working on the past few days, and I got it working just fine! I tested it today and found out it is WAY faster than processing on the cRIO.
What I basically did was take all the camera VIs from the 'Begin.vi' in an FRC project and put them inside the 'Vision Processing.vi' from the 'Rectangular Target Processing' template. I removed the 'Set Registry.vi' and 'Get Registry.vi' and just wired the two sections together. Then I sent the 'Target Info' cluster/array through UDP port 1130 to 10.te.am.2 (the cRIO), and made a UDP receive VI in a separate loop. I HIGHLY recommend this since you have a good processor on your driver station. NOTE: This is VERY similar to on-board processing. If you want, I can send you my code or some screenshots; I can't right now because I'm on an iPad.
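For anyone following along in a text language: the post above describes LabVIEW VIs, but the same send/receive split can be sketched in Python. Port 1130 comes from the post; the comma-separated payload format and the example team number are my assumptions, not the poster's actual cluster encoding.

```python
# Hypothetical Python sketch of the laptop -> cRIO UDP link described above.
import socket

CRIO_ADDR = ("10.12.34.2", 1130)  # 10.TE.AM.2 in FRC addressing (team 1234 shown)

def send_target_info(sock, distance, angle):
    """Laptop side: push the latest target info toward the robot."""
    payload = "{:.2f},{:.2f}".format(distance, angle).encode()
    sock.sendto(payload, CRIO_ADDR)

def receive_loop():
    """Robot side: a separate loop that unpacks whatever arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 1130))
    while True:
        data, _ = sock.recvfrom(64)
        distance, angle = (float(v) for v in data.decode().split(","))
        # ...feed distance/angle into the aiming code here...

if __name__ == "__main__":
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_target_info(sender, 120.0, -3.5)
```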
Re: Vision Targeting on Laptop
Hmm. There may have been a good reason why the example vision code demonstrated both PC and cRIO processing: both are valid approaches. The cRIO is roughly an 800 MIPS computer with potentially lots of other stuff to do. The laptop with an Atom is around 3300 MIPS and may have very little else to do during a match. Be mindful of latency, though. Make sure to measure the entire processing loop, from acquisition to response.
Greg McKaskle
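One way to act on that advice is to wrap the whole loop in a timer, not just the algorithm. A minimal sketch follows; grab_frame, find_target, and send_to_robot are stand-ins for your actual acquisition, processing, and transmit steps, not a real FRC API.

```python
import time

def grab_frame():
    time.sleep(0.03)          # stand-in for camera acquisition (~30 ms)
    return object()

def find_target(frame):
    time.sleep(0.01)          # stand-in for the vision algorithm
    return (120.0, -3.5)      # (distance, angle) placeholder

def send_to_robot(target):
    pass                      # stand-in for the UDP send back to the cRIO

def measure_loop_latency(iterations=50):
    """Time the entire acquisition-to-response loop, as Greg suggests."""
    samples = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        send_to_robot(find_target(grab_frame()))
        samples.append(time.perf_counter() - t0)
    samples.sort()
    print("median loop latency: %.1f ms" % (1000 * samples[len(samples) // 2]))

measure_loop_latency()
```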
Re: Vision Targeting on Laptop
If I remember correctly, NI Vision is locked to a single processor: it doesn't take advantage of multiple cores or the GPU. OpenCV can be configured to use all the available cores and can be compiled with CUDA extensions to use an NVIDIA GPU. This is only doable on an Intel processor at this time, and the extensions are not all free. The image routines that were used this year are not really that intensive. However, with vision becoming more critical in the FRC game, teams may find the need to maximize the CPU and GPU usage in their driver station or on-robot processor. Also remember that, with the issues at champs, the methods and volume of wireless communications may be very different next year. Exploring vision in the off-season would be time well spent.
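For reference, both knobs mentioned here are easy to inspect from Python with a modern OpenCV build (the 2012-era API differed); this snippet only queries and sets them, and doesn't prove anything about a particular build.

```python
import cv2

print("threads available to OpenCV:", cv2.getNumThreads())
cv2.setNumThreads(4)   # e.g. pin to all four cores of a quad-core laptop

# cv2.cuda only reports devices if OpenCV was compiled with the CUDA modules
try:
    print("CUDA devices visible:", cv2.cuda.getCudaEnabledDeviceCount())
except AttributeError:
    print("this OpenCV build was compiled without CUDA support")
```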
Re: Vision Targeting on Laptop
If your memory is based on old versions, that may be accurate, but for about four years, core algorithms have been multicore aware. I believe the default is to spread over all cores. There is a VI called "IMAQ Multi-Core Options" that can be used to view or modify the number of cores you want the algorithms to use. Additionally, the algorithms have been SIMD, MMX and SSE capable for a dozen years -- as those capabilities were available in the processors.
As you point out, OpenCV is very customizable, and if you need to tune for a platform, it is a very nice tool. But IMAQ is no slouch on performance and should not be characterized as single-core. This is even more the case if it is used within LabVIEW, where multitasking is relatively easy and safe to carry out.
Greg McKaskle
Re: Vision Targeting on Laptop
Yes, it seems I'm out of date. I'll look into this with our student programmer this summer. Maybe you can help with this: is there any way to work with a USB webcam in LabVIEW?
Here is one of our thoughts. This year our drivers had issues seeing balls that were blocked by the bridge and robot traffic. Our camera was aimed and focused on the upper basket, so it was of no use for a field view of the balls. We tried two LAN webcams and could not overcome the lag and latency issues. We are looking at using inexpensive USB webcams to give a field view. Our student programmer received a free VIA micro-ITX automotive box to work with for vision. The reason for our interest in OpenCV is that it is easy to have multiple USB cams attached, switch between them, and send one stream to the dashboard. Could this function be accomplished in LabVIEW? Our programmer put a lot of time into LabVIEW vision, so it would be easier to continue on with LabVIEW development.
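Since OpenCV came up, here is a minimal sketch of the multiple-USB-camera switching described above. Device indices 0 and 1 and the 's' hotkey are assumptions; enumeration order depends on the OS.

```python
import cv2

cams = [cv2.VideoCapture(i) for i in (0, 1)]
active = 0  # which camera currently feeds the displayed stream

while True:
    ok, frame = cams[active].read()
    if ok:
        cv2.imshow("dashboard feed", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):           # 's' toggles between the two cameras
        active = (active + 1) % len(cams)
    elif key == ord("q"):
        break

for cam in cams:
    cam.release()
cv2.destroyAllWindows()
```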
Re: Vision Targeting on Laptop
The next thing to consider is that USB camera streams are not compressed, meaning that if you do get the camera into an external laptop or board, you will need to spend CPU to compress the stream before transmitting it to the dashboard. By comparison, the Axis IP camera's stream is already compressed by hardware on the camera and can be sent to the dashboard with no compression overhead.
As for the lag and latency, can you give more details on how it was set up? A few years ago, I hooked three cameras up to the D-Link switch and requested all of them for the dashboard. Two large ones worked fine, and the limiting factor was the old laptop's CPU, which couldn't decompress and draw three different video streams. Anyway, it should be no problem doing two.
Greg McKaskle
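To make that compression cost concrete, here is a sketch of what the laptop would have to do per frame with a USB camera; the dashboard address, port, and JPEG quality are hypothetical.

```python
import socket
import cv2

DASHBOARD = ("10.12.34.5", 1180)   # hypothetical dashboard endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_usb_frame(frame):
    # The imencode call is the CPU cost Greg describes; the Axis camera
    # does the equivalent MJPEG compression in hardware.
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 40])
    if ok and jpeg.nbytes < 65000:  # stay under the UDP datagram limit
        sock.sendto(jpeg.tobytes(), DASHBOARD)

cam = cv2.VideoCapture(0)           # raw, uncompressed USB frames
ok, frame = cam.read()
if ok:
    send_usb_frame(frame)
cam.release()
```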