Walkthrough on how to do offboard image processing!
Just typed this out in a PM for someone and thought it might be useful for everyone here.
You'll have the easiest time doing your vision processing in either C++ or LabVIEW. Our team is a Java team, so neither of those would have been our first choice, but while it's possible to do it in Java, C++ and LabVIEW let you use the NI Vision Assistant, which is a VERY powerful tool. I'll get back to that later. Make sure you have an LED ring light for your webcam, as mentioned in the second link below.

I modified the dashboard to do all the vision processing there, so the robot didn't have to waste cycles on it. The driver station is going to be a lot more powerful than the robot anyway, unless you're using the Classmate, in which case I'd recommend you find a regular laptop to run it on, since the Classmate can barely even stream the video output without doing any processing on it. You can open up LabVIEW 2012 and start a new Dashboard Project, which will basically be an editable version of the default dashboard. If you don't know LabVIEW, that's OK, neither did I when I started. It's a major pain to work with, but if you keep at it you'll get it, and there's plenty of support online for it.

Your modified dashboard is going to have a few discrete parts:

1) The vision processing itself. Open up the NI Vision Assistant (in All Programs > National Instruments) and play with it a bit. This link will help you with that: it's a good guide on how to use NI Vision Assistant, and on how to compile your generated scripts into VIs that you'll use in your modified dashboard. As for what algorithm you'll want to put together in Vision Assistant, this whitepaper is absolutely amazing and will take you through it all in good detail.

2) Some sort of socket communication to relay data back to the cRIO. You can do this with NetworkTables if you're a special blend of brave and masochistic, but I never ventured down that particular route. In my opinion, even the reworked NetworkTables implementation is just too obfuscated, confusing, and poorly documented to be worth using. I wrote in TCP communication to relay back to the robot what I get from...

3) A way to format the results from the Vision Assistant script. I took the x,y coordinates of the centers of all the detected rectangles and put them in a string formatted like {Distance, x1, y1, x2, y2, x3, y3, x4, y4}. Since this was from last year's game, I was seeing the 4 vision targets. My distance value came from a rangefinding script I wrote that roughly estimated distance based on the size of the rectangles it was seeing: make a proportion of the apparent size to the actual size, do some trig, and you come up with your distance from the target (there's a sketch of that idea right after this list). You'll want a good way of formatting the string so it's easy to pick apart once it's on the cRIO. You can make a subVI for this and put it within the dashboard to keep things a little cleaner.

4) This is optional, but I'd highly recommend it. I added 6 boxes to the dashboard for the min/max values of Hue, Saturation, and Luminance used by the Vision Assistant script. This lets you tweak the threshold values for what it's detecting on the fly, so when you're on the field you don't have to be recompiling stuff all the time. I had it store those values in a .cfg file in the Program Files/FRC Dashboard directory.
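To make the proportion-and-trig idea in part 3 a little more concrete, here's a rough sketch of the math in Java. This is not the actual rangefinding script from my dashboard (that part is a LabVIEW VI); the class name, resolution, field of view, and target size below are made-up example numbers that you'd replace with your own measurements.

Code:
/**
 * Rough distance estimate from the apparent width of a vision target.
 * Assumes a simple pinhole-camera model: the target's width in pixels is
 * roughly proportional to (real width / distance).
 *
 * All the constants are EXAMPLE values only -- measure your own camera
 * and your own target.
 */
public class RangeEstimator {

    private static final double IMAGE_WIDTH_PX = 320.0;     // camera resolution
    private static final double HORIZONTAL_FOV_DEG = 47.0;  // your camera's horizontal field of view
    private static final double TARGET_WIDTH_FT = 2.0;      // real-world width of the vision target

    /**
     * @param targetWidthPx width of the detected rectangle in pixels
     * @return estimated distance to the target, in feet
     */
    public static double estimateDistance(double targetWidthPx) {
        // Angle subtended by the target, as a fraction of the full field of view.
        double targetAngleDeg = (targetWidthPx / IMAGE_WIDTH_PX) * HORIZONTAL_FOV_DEG;
        double halfAngleRad = Math.toRadians(targetAngleDeg / 2.0);

        // tan(halfAngle) = (half the real width) / distance
        return (TARGET_WIDTH_FT / 2.0) / Math.tan(halfAngleRad);
    }
}

The same math works fine in LabVIEW or C++; the only input you need from the Vision Assistant script is the pixel width (or height) of each detected rectangle.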
So, let's recap. Your new dashboard, since it's just a modified version of the old one, will leave in the part of the VI that puts the image on the dashboard for you to watch. It'll also establish a TCP connection to the cRIO (more on how to do this in your code on the cRIO later). However, it's also going to pass that image to the VI you generated with the Vision Assistant program. That'll spit out the x,y coordinates of the centers of each detected target, which get formatted into a clean string within the dashboard. Then your dashboard sends that string over the TCP link to the robot.

Now, the robot side! Isn't this fun? The hard part is over, don't worry. This is MetaTCPVariables, a Java library file we wrote and added to our copy of WPILibJ. MetaTCPVariables adds the ability for the cRIO to listen for an incoming TCP connection and establish it on one of the designated ports you're allowed to use. Check page 6; it's an out-of-date whitepaper, but that part has stayed the same.

Now you've got all your x,y coordinates, so just come up with a way to sort them! I wrote this, which let us sort the targets based on their positions relative to each other. It would always classify the highest-up target as top, using a sort of rolling assignment: if there was 1 target it would always be called top, if there were 2 the higher one would be called top, and if there were 4 the highest one would be top (there's a small sketch of the parsing and sorting idea at the end of this post).

If you've got any more questions, please ask me. This ended up being a lot longer than I initially anticipated I'd write, but it's really not nearly as bad as reading a wall of text makes it seem. My code is in Java, but you should be able to read the basic ideas and translate them to C++ or whatever you're using. If you want more information on how I actually did the LabVIEW side of things, please ask. I'd never done anything with LabVIEW or TCP socket communication before this, and by the time we were at competition I had an autotargeting robot that scored more than a couple perfect autonomous rounds. If you set your mind to it you'll get it; don't be intimidated by having to learn a bunch of new stuff. It's fun once you get into it, and hey, if we didn't want to sink ridiculous amounts of time, effort, and frustration into software, we wouldn't be writing code for a FIRST competition!

I hope this helps! Again, if anything's unclear or if you want more help or specific code examples, please just ask.
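The actual MetaTCPVariables library and sorting code aren't reproduced in this post, but the parsing and "rolling assignment" idea is simple enough to sketch in plain Java. This isn't the real code, just an illustration: it assumes the {Distance, x1, y1, ...} string format from part 3, assumes image y coordinates grow downward (so the smallest y is the highest target), and is written as ordinary desktop Java rather than code for the cRIO's more limited Java ME VM.

Code:
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of parsing the "{Distance, x1, y1, x2, y2, ...}" string sent by the
 * dashboard and picking out the "top" target.  Illustration only, not the
 * original MetaTCPVariables code.
 */
public class TargetParser {

    public static class Target {
        public final double x;
        public final double y;
        public Target(double x, double y) { this.x = x; this.y = y; }
    }

    /** Parses e.g. "{12.3, 160, 90, 200, 150}" into a list of target centers. */
    public static List<Target> parseTargets(String message) {
        String trimmed = message.replace("{", "").replace("}", "");
        String[] parts = trimmed.split(",");

        List<Target> targets = new ArrayList<Target>();
        // parts[0] is the distance estimate; x/y pairs follow it.
        for (int i = 1; i + 1 < parts.length; i += 2) {
            double x = Double.parseDouble(parts[i].trim());
            double y = Double.parseDouble(parts[i + 1].trim());
            targets.add(new Target(x, y));
        }
        return targets;
    }

    /**
     * "Rolling assignment": whatever is highest in the image is called top,
     * no matter how many targets were seen.  Image y grows downward, so the
     * smallest y value is the highest target.
     */
    public static Target findTop(List<Target> targets) {
        if (targets.isEmpty()) {
            return null;
        }
        Target top = targets.get(0);
        for (Target t : targets) {
            if (t.y < top.y) {
                top = t;
            }
        }
        return top;
    }
}

On the robot you'd call parseTargets() on each string that comes in over the TCP link, then findTop() to decide what to aim at.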
Re: Walkthrough on how to do offboard image processing!
Good information. I'll only make two comments:
UDP is usually preferable to TCP for the kind of data you're sending from the offboard computer to the robot. You're really only interested in the latest information, not in guaranteeing delivery of every single packet in order. TCP's retransmission of unacknowledged packets can introduce significant delays if there is network congestion (e.g. from video streams).

Your custom MetaTCPVariables package is no longer necessary. NetworkTables provides exactly the same thing and is included with the robot software development environments this year.
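For illustration, a bare-bones "latest value wins" UDP send from the offboard side looks roughly like this in Java. The address, port, and message string here are placeholders for the example, not something prescribed by the rules or by WPILib.

Code:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/** Minimal fire-and-forget UDP sender for one vision result. */
public class VisionUdpSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        // cRIO address convention is 10.TE.AM.2, e.g. 10.12.34.2 for team 1234.
        InetAddress robot = InetAddress.getByName("10.0.1.2");

        String result = "{12.3, 160, 90, 200, 150}"; // same format as the TCP version
        byte[] data = result.getBytes("US-ASCII");

        // Placeholder port -- check the game manual for which ports teams may use.
        socket.send(new DatagramPacket(data, data.length, robot, 1130));
        socket.close();
    }
}

Each vision result goes out as one small datagram; if a packet is dropped, the next frame's result supersedes it anyway, so nothing ever stalls waiting on a retransmission.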
Re: Walkthrough on how to do offboard image processing!
Is offloading vision processing to the driver station still a viable option now that FIRST has changed the priorities of the packets (e.g. video has the lowest priority)?
Does using a Raspberry Pi (or similar device) make a better alternative for offloading the vision processing?
Re: Walkthrough on how to do offboard image processing!
That is a valid question, but unless the available bandwidth is in short supply, the prioritization should have little effect.
Greg McKaskle |
Re: Walkthrough on how to do offboard image processing!
Quote:
As far as NetworkTables goes, I'm sure it works, but I'm personally not a big fan of it. It's big, confusing, poorly documented, and there are few if any good examples of how to use it. That said, I have been considering trying it this year just to give it a shot; I just need to block out some time to sit down with it and figure it out. It'd definitely be nice to have something everyone can use more easily without having to write their own TCP code, so I'll keep this updated as we deal with that.

Warehouse: Funny you'd mention that, we're actually thinking of using a Raspberry Pi on the robot this year. The packet priority shouldn't matter, though, as the bandwidth isn't nearly saturated enough for that to be an issue. Our plan with the Raspberry Pi is to run pretty much all the same LabVIEW VIs, but separated out from the Dashboard (which would actually be even easier).
Re: Walkthrough on how to do offboard image processing!
Can anyone help?
http://www.chiefdelphi.com/forums/sh...26#post1221426
Re: Walkthrough on how to do offboard image processing!
"Invalid template range" is an error I'm receiving on the driver station, I believe I did the steps correctly. The only thing I didn't do is setup the tcp/udp yet. Is that required to track the image?
EDIT: error is IMAQ:invalid color template image |
Re: Walkthrough on how to do offboard image processing!
Quote:
That is not required for the tracking, but you won't get any data on the cRIO. The Dashboard should still be able to show the detection, though. What's giving you that error? Where is it showing up?

Edit: I know what it is. The image processing script uses a template image to match detected blobs against. I forgot to include that, but I'll get it up first thing as soon as I get in the lab tomorrow. Sorry!
Re: Walkthrough on how to do offboard image processing!
Thanks, much appreciated!
Re: Walkthrough on how to do offboard image processing!
Hey,
Attached is an image of my Vision Assistant script. I made this a while ago and had forgotten that I decided to forgo using a template image at all; instead, I just use thresholding and shape recognition. If I remember right, I did it that way because it makes things a little more flexible.
Re: Walkthrough on how to do offboard image processing!
Where did you attach it?
Re: Walkthrough on how to do offboard image processing!
What do you recommend for trying to find the disk? A circular edge tool?
Re: Walkthrough on how to do offboard image processing!
I'd think you'd replace the shape detection block of the script with one for whatever shape you're trying to find.
Re: Walkthrough on how to do offboard image processing!
This probably has a simple answer, but how do I place the VI script for image processing on the dashboard? I created the script using NI Vision Assistant, but I don't know how to place it in the modified Dashboard project.
Thank you