Tracking Rectangles
For this year's game, it seems to be necessary to locate the backboards programmatically in both the "hybrid" and tele-op periods. I found a paper from Team 1511 that helps determine a robot's position relative to a vision target (see: http://www.chiefdelphi.com/media/papers/2324).
I feel like most of the math and thought behind that paper would translate well to this scenario too. The only problem is finding the rectangles accurately in an image using the camera or Kinect. I was wondering if any teams had tips on how to do this with the camera, since we haven't really tried camera tracking since we played with CircleTrackerDemo two years ago. Unfortunately, most of the CircleTrackerDemo code seems specific to ellipses. Any ideas on how to do rectangles with a camera? Perhaps some code we can use? If that isn't possible, an alternative would be using the Kinect, although I'm fairly clueless when it comes to shape tracking (outside of human shapes) with the Kinect. Thank you for your help; I appreciate any input you may have.
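For concreteness, the distance half of this problem mostly reduces to pinhole-camera math: once a target of known physical width is found in the image, its pixel width plus the camera's horizontal field of view gives a range estimate. Here is a minimal Java sketch; the 2-ft width matches the 2012 target's outer dimension, but the 640-px image width and 47° FOV are assumptions about the camera, not values from the paper:

```java
public class TargetDistance {
    // Estimate distance (ft) to a target of known width with a pinhole model.
    // realWidthFt: physical target width; targetPx: measured width in pixels;
    // imageWidthPx: camera image width; fovDeg: horizontal field of view.
    public static double distanceFt(double realWidthFt, double targetPx,
                                    double imageWidthPx, double fovDeg) {
        double halfFovRad = Math.toRadians(fovDeg / 2.0);
        return realWidthFt * imageWidthPx / (2.0 * targetPx * Math.tan(halfFovRad));
    }

    public static void main(String[] args) {
        // 2012 target is 24 in (2 ft) wide; 640-px image and 47 deg FOV assumed.
        double d = distanceFt(2.0, 80, 640, 47.0);
        System.out.printf("estimated distance: %.1f ft%n", d);
    }
}
```

A target twice as wide in pixels comes out at half the distance, which is a quick sanity check on any implementation of this.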
Re: Tracking Rectangles
Looks like a lot of complicated math to work through, but that's to be expected. I am interested in how your first link does perspective rectangle detection, but it doesn't seem to include any mathematical descriptions. There are some vague mentions of finding the vanishing points and the unit vector field pointing in the direction of vanishing lines, but nothing specific.
There's currently a rectangle processing VI or something like that for LabVIEW programmers. I'm hoping that could be ported to Java (and/or C++) soon, since we moved on from LabVIEW a long while back. Is there any word on this? I was also wondering how other teams accomplished tracking of retro-reflective tape (in circular and rectangular shapes) for Logomotion in Java. From an electronics point of view, a cluster of LEDs around the camera seems necessary. But programmatically, was it necessary to do Hough transforms? If so, is there a more concise description of these transforms we can access? Perhaps how to take the transformed image and use it to determine the edges of a rectangle? Thanks as always.
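For anyone else puzzling over the transform itself, the core of a Hough line transform is small: every edge pixel votes for all (theta, rho) line parameterizations passing through it, and peaks in the accumulator are lines; a rectangle shows up as four strong peaks in two roughly perpendicular theta bands. A bare-bones Java sketch, assuming the boolean edge map comes from an earlier threshold/edge-detection step that is not shown:

```java
public class HoughLines {
    // Vote each edge pixel into a (theta, rho) accumulator; peaks are lines.
    // edges[y][x] is true where an edge detector fired.
    public static int[][] accumulate(boolean[][] edges, int thetaSteps) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[thetaSteps][2 * maxRho + 1];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue;
                for (int t = 0; t < thetaSteps; t++) {
                    double theta = Math.PI * t / thetaSteps;
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    acc[t][rho + maxRho]++; // offset so negative rho fits
                }
            }
        }
        return acc;
    }
}
```

A vertical line of edge pixels at x = 5, for example, piles all of its votes into the theta = 0, rho = 5 bin. Real libraries add non-maximum suppression and sub-pixel refinement on top of this, but the accumulator is the whole idea.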
Re: Tracking Rectangles
I can't find the white paper on the NI site yet, but one should be posted soon that covers several approaches. One approach uses simple particle analysis to identify the ones most like hollow rectangles. Another approach is to use the line or rectangle geometric fit routines -- which are Hough implementations under the hood.
The paper actually uses NI Vision Assistant for most of the exploration, but does refer to the LV example when it comes to scoring and position/distance calculation. The LV example will also run directly on your computer, so your cRIO can run whatever, and the laptop can pull images directly from the camera on the switch. Greg McKaskle
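To illustrate the particle-analysis idea in code: one simple score compares a particle's filled area against the area a hollow rectangular tape outline with the same bounding box would have. This is only a sketch of the general approach, not the scoring the NI example actually uses; the 2-in tape width on a 24x18-in target is taken from the 2012 target dimensions:

```java
public class ParticleScore {
    // Score how much a particle looks like a hollow rectangle by comparing
    // its filled area with the area expected for a rectangular tape outline
    // of the same bounding box. 100 = perfect match.
    // tapeFrac is the tape width as a fraction of the bounding box width
    // (e.g. 2 in of tape on a 24-in-wide target -> 1/12).
    public static double hollowRectScore(double particleArea,
                                         double boxW, double boxH,
                                         double tapeFrac) {
        double t = tapeFrac * boxW;
        double inner = Math.max(0, boxW - 2 * t) * Math.max(0, boxH - 2 * t);
        double expected = boxW * boxH - inner;
        return 100.0 * Math.min(particleArea, expected)
                     / Math.max(particleArea, expected);
    }
}
```

A solid blob filling its whole bounding box scores far below 100 here, which is exactly what lets this kind of measure reject lights and reflections that aren't hollow rectangles.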
Re: Tracking Rectangles
I posted a copy of Greg's whitepaper here:
http://firstforge.wpi.edu/sf/docman/...ib/docman.root This has a lot of good information about finding and tracking the 2012 vision targets. Brad
Re: Tracking Rectangles
Thank you Brad, this is perfect. Just three questions/comments for anyone:
1) In the PDF under the "Measurements" section concerning distance (page 9), it says that the blue rectangle width is 11.4 ft but half the blue rectangle width is 6.7 ft. I don't know who wrote this, but that seems like a typo.
2) Does the particle processing method only accurately find rectangles when it encounters them head-on? Is the edge detection method necessary to find rectangles distorted by perspective?
3) Are there any pointers you can give on how to process camera images on the laptop instead of the cRIO? We've never tried this before, but it seems worth doing. Thank you again for your help.
Re: Tracking Rectangles
Personally, these types of papers are actually fun to read. Even if you only understand a quarter of the math, the more you try, the bigger the picture you start to see. I think it is more beneficial to butt your head against it and try working through it this way first before using code already provided. It really builds character, IMHO.
Re: Tracking Rectangles
I love challenges too and it was interesting to read and actually (somewhat) understand the problem of image processing better. Of course, I appreciate that it is quite a big feat, and know through experience that you can't tell the cRIO to just look for a rectangle. However, given that the NI Vision Assistant does a lot of what we need it to do in terms of image processing, it makes more sense to use that instead of writing code that does Hough transforms on images and looks for peaks. Thank you for the read though.
Going back to the issue, I was wondering if any teams could answer the third question from my post above: how is it possible to have the image processing happen on the laptop instead of the cRIO? We're replacing our Classmate this year, so the performance gain could be significant. I read in the PDF that making this switch requires no change in code (or something along those lines). What does it require, then?
Re: Tracking Rectangles
If you're using your Classmate as the DS and want to do the processing there, I assume you'd use the dashboard data (there should be examples in LabVIEW; there are in C++). I'm still wondering how to do it between the cRIO and a laptop on the robot.
Re: Tracking Rectangles
I was once good at head-math, but I guess things change. The formula is correct: you take half of the blue rectangle width. The example values are wrong; half of 11.4 is 5.7, not 6.7.
As for running on the laptop: the LV example project does both. A LV project can have code for multiple devices or target devices; for simplicity, the FRC projects tend to have only one. The rectangular target processing project has roughly the same code, with slight differences in how it is invoked, under both the My Computer section of the project and the RT cRIO section. The tutorial goes into detail about how to Save As the RT section to run on the cRIO, but if you prefer, you can pretty easily integrate the My Computer VI into your dashboard, do the processing, and arrange for the values to be sent back to the robot via UDP or TCP.

If you prefer to use OpenCV, it should theoretically run in both locations, but I'm not aware of any port of it for the PPC architecture. Both OpenCV and NI Vision run on the laptop. If I glossed over too many details, feel free to ask more detailed questions. Greg McKaskle
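To make the "send the values back over UDP" step concrete, here is a minimal Java sketch of the laptop side: pack the vision results into a fixed-size datagram and send it to the robot. The port number, address, and two-double payload layout are arbitrary choices for illustration, not an FRC convention:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class ResultSender {
    // Pack distance (ft) and angle (deg) into a 16-byte datagram payload.
    // The two-double layout is an assumed format, not a standard one.
    public static byte[] pack(double distanceFt, double angleDeg) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putDouble(distanceFt);
        buf.putDouble(angleDeg);
        return buf.array();
    }

    // Fire one result datagram at the robot. robotIp and port are
    // placeholders; use whatever addressing your network actually has.
    public static void send(String robotIp, int port,
                            double distanceFt, double angleDeg) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] data = pack(distanceFt, angleDeg);
            sock.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(robotIp), port));
        }
    }
}
```

UDP fits this job because a stale vision frame is worthless; if a packet is dropped, the next frame's result replaces it anyway.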
Re: Tracking Rectangles
Pretty new to FRC programming as a whole. Sorry if this is a "dumb" question. Thanks, Jay
Re: Tracking Rectangles
The framework examples do a bit of this already, but for a limited protocol.
If you drill into the dashboard code, you will find that the camera loop does TCP port 80 communications to the camera. The Kinect loop does UDP from a localhost Kinect server, and even the other loop gets its data from a UDP port from the robot. For the robot side, there are C++ classes for building up a LabVIEW binary type and submitting it as low- or high-priority user data. I'm not that familiar with other portions of the framework, which may directly use UDP or TCP. Greg McKaskle
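Following the same pattern as those framework loops, the receiving side can be a simple blocking UDP read that decodes a fixed payload. This pairs with any sender using the same layout; the two-double (distance, angle) format is again just an assumed example:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

public class ResultReceiver {
    // Block until one datagram arrives, then decode two doubles
    // (distance, angle). The 16-byte layout is an assumption that must
    // match whatever the sender packs.
    public static double[] receiveOnce(DatagramSocket sock) throws Exception {
        byte[] buf = new byte[16];
        DatagramPacket pkt = new DatagramPacket(buf, buf.length);
        sock.receive(pkt);
        ByteBuffer bb = ByteBuffer.wrap(pkt.getData(), 0, pkt.getLength());
        return new double[] { bb.getDouble(), bb.getDouble() };
    }
}
```

On a robot you would run this in its own loop or thread, keeping only the latest value, so a slow vision frame never blocks the control loop.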
Re: Tracking Rectangles
The whitepaper is extremely useful, but the part I needed help with is actually what's glossed over the most. My understanding is that it's fully possible to determine both angle and distance from the target by the skew and size of the rectangle. Here is a quote from the whitepaper:
"Shown to the right, the contours are fit with lines, and with some work, it is possible to identify the shared points and reconstruct the quadrilateral and therefore the perspective rectangle." Except it stops there. Do you have any other reading or direction you can send us to take this the rest of the way? I'd really like our bot to be able to find its location on the floor with the vision targets, and unless we are straight-on, this is going to require handling the angle. Thanks! -Mike