#1
Re: Tracking Rectangles
If you're using your Classmate as the DS and want to do the processing on there, I assume you'd use the dashboard data (there should be examples in LabVIEW; there are in C++). I'm still wondering how to do it between the cRIO and a laptop on the robot.
Last edited by basicxman : 08-01-2012 at 15:47.
#2
Re: Tracking Rectangles
I was once good at head-math, but I guess things change. The formula is correct: you take half of the blue rectangle. The example values are wrong, though: half of 11.4 is 5.7, not 6.7.
As for running on the laptop: the LV example project does both. A LV project can have code for multiple devices or target devices. For simplicity, the FRC projects tend to have only one. The rectangular target processing project has roughly the same code, with slight differences in how it is invoked, under both the My Computer section of the project and the RT cRIO section.

The tutorial goes into detail about how to Save As the RT section to run on the cRIO, but if you prefer, you can pretty easily integrate the My Computer VI into your dashboard, do the processing, and arrange for the values to be sent back to the robot via UDP or TCP.

If you prefer to use OpenCV, it should theoretically run in both locations, but I'm not aware of any port of it for the PPC architecture. Both OpenCV and NI-Vision run on the laptop.

If I glossed over too many details, feel free to ask more detailed questions.

Greg McKaskle
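To make the "send the values back via UDP" idea concrete, here is a minimal laptop-side sketch in Python. The 10.te.am.2 address is the usual cRIO convention, but the port number and the comma-separated text payload are assumptions for illustration, not part of any FRC protocol; the robot side would need a matching listener.

Code:
    import socket

    # Hypothetical example: push processed vision results from the laptop
    # (dashboard side) back to the robot as small UDP datagrams.
    ROBOT_IP = "10.te.am.2"   # cRIO address convention; fill in your team number
    ROBOT_PORT = 1130         # arbitrary port chosen for this sketch

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_target(distance_in, angle_deg):
        """Send one vision result as a comma-separated text datagram."""
        payload = "%.1f,%.2f" % (distance_in, angle_deg)
        sock.sendto(payload.encode("ascii"), (ROBOT_IP, ROBOT_PORT))

    send_target(49.7, 12.5)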
#3
Re: Tracking Rectangles
Pretty new to FRC programming as a whole, so sorry if this is a "dumb" question.

Thanks,
Jay
#4
Re: Tracking Rectangles
The framework examples do a bit of this already, but for a limited protocol.
If you drill into the dashboard code, you will find that the camera loop does TCP port 80 communications to the camera. The Kinect loop does UDP from a localhost Kinect Server, and even the other loop gets its data from a UDP port from the robot.

For the robot side, there are C++ classes for building up a LabVIEW binary type and submitting it for low or high priority user data. I'm not that familiar with other portions of the framework which may directly use UDP or TCP.

Greg McKaskle
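For a sense of what that camera loop's port-80 traffic amounts to, here is a rough Python sketch that grabs a single JPEG from the Axis camera over plain HTTP. The snapshot path follows the usual Axis VAPIX convention; verify it (and any authentication your camera requires) against your camera's documentation.

Code:
    import urllib.request

    # Pull one frame from the Axis camera the same way the dashboard's camera
    # loop does: plain HTTP over TCP port 80. The path is the Axis VAPIX
    # snapshot convention; confirm it for your model/firmware.
    CAMERA_URL = "http://10.te.am.11/axis-cgi/jpg/image.cgi"

    with urllib.request.urlopen(CAMERA_URL, timeout=2) as resp:
        jpeg_bytes = resp.read()

    with open("frame.jpg", "wb") as f:
        f.write(jpeg_bytes)
    print("saved %d bytes" % len(jpeg_bytes))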
#5
Re: Tracking Rectangles
The whitepaper is extremely useful, but the part I needed help with is actually what's glossed over the most. My understanding is that it's fully possible to determine both angle and distance from the target from the skew and size of the rectangle. Here is a quote from the whitepaper:
"Shown to the right, the contours are fit with lines, and with some work, it is possible to identify the shared points and reconstruct the quadrilateral and therefore the perspective rectangle" Except it stops there. Have any other reading or direction you can send us to take this the rest of the way? I'd really like our bot to be able to find it's location on the floor with the vision targets and unless we are straight-on, this is going to require handling the angle. Thanks! -Mike |
#6
Re: Tracking Rectangles
But the question is still: how do you get the robot to track the rectangle like it would with a circle?
#7
Re: Tracking Rectangles
In theory, the bounding rectangle should be enough, if you put your camera as high as possible, and are willing to tolerate a little error. The height would tell you how far away you are, and the width, after accounting for the height, would tell you how far "off center" you are, giving you your position in polar form relative to the target. The error would be greater the further off center you are (since the perspective transformation makes the rectangle taller than it should be), but I would need to test to see if it is a significant amount. |
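A back-of-the-envelope sketch of that idea in Python. The 320x240 image size, the 24" x 18" target, the 47-degree FOV figure for the M1011 (used again in the worked example below), and the cos(angle) width-shrink approximation are all simplifying assumptions:

Code:
    import math

    # Rough polar position (range, off-axis angle) of the camera relative to
    # the target, from nothing but the bounding rectangle.
    IMG_H = 240                            # 320x240 image
    TARGET_W_IN, TARGET_H_IN = 24.0, 18.0  # outer tape rectangle
    THETA_DEG = 47.0                       # FOV figure used for the M1011

    def polar_from_bbox(bbox_w_px, bbox_h_px):
        # Height -> range: the image spans (IMG_H / bbox_h) target heights,
        # so the visible field height at the target is that many times 18".
        fov_height_in = IMG_H / bbox_h_px * TARGET_H_IN
        range_in = (fov_height_in / 2.0) / math.tan(math.radians(THETA_DEG / 2.0))

        # Width, after accounting for height: viewing the target off-axis
        # shrinks its apparent width by roughly cos(angle), while the height
        # is nearly unchanged (hence the error noted above).
        expected_w_px = bbox_h_px * (TARGET_W_IN / TARGET_H_IN)
        ratio = min(1.0, float(bbox_w_px) / expected_w_px)
        off_axis_deg = math.degrees(math.acos(ratio))
        return range_in, off_axis_deg

    print(polar_from_bbox(150, 120))   # e.g. ~41.4 in at ~20.4 deg off axis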
#8
Re: Tracking Rectangles
1. Open up the image shown in the paper (the one with the perspective distortion) in Vision Assistant.

2. Use the third tool, the Measure tool, to determine the lengths of the left and right vertical edges of the reflective strip. I measure 100 pixels and 134 pixels. The first image shows the measurements in red and green.

Since the edges are different pixel sizes, they are clearly at different distances from the camera, but in the real world, both are 18" long. The image is 320x240 pixels in size. The FOV height where the red and green lines are drawn is found using:

240 / 100 x 18" -> 43.2" for green, and 240 / 134 x 18" -> 32.2" for red

These may seem odd at first, but it is stating that if a tape measure were in the photo where the green line is drawn, taped to the backboard, you would see 43.2 inches of it from top to bottom in the camera photo on the left/green side, and since the red is closer, only 32.2 inches would be visible.

Next, find the distance to the lines using a theta of 47 degrees for the M1011:

(43.2 / 2) / tan(theta / 2) -> 49.7" and (32.2 / 2) / tan(theta / 2) -> 37.0"

This says that if you were to stretch a tape measure from the camera lens to the green line, it would read 49.7 inches, and to the red line it would read 37 inches.

These measurements form two edges of a triangle, from the camera to the red line and from the camera to the green line; the third edge is the width of the retro-reflective rectangle, or 24". Note that this is not typically a right triangle.

I think the next step would depend on how you intend to shoot. One team may want to solve for the center of the hoop; another may want to solve for the center of the rectangle. If you would like to measure the angles of the triangle described above, you may want to look up the law of cosines. It will allow you to solve for any of the unknown angles.

I'd encourage you to place yardsticks or tape measures on your backboard, walk to different locations on the field, and capture photos through your camera. You can then do similar calculations by hand or with your program, calculate many of the different unknown values, and determine which are useful for determining a shooting solution.

As with the white paper, this is not intended to be a final solution, but a starting point. Feel free to ask followup questions or pose other approaches.

Greg McKaskle
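The same arithmetic as a small Python sketch, including the law-of-cosines step. The pixel measurements and the 47-degree figure come straight from the post above; solving for the angle at the camera is just one of several choices:

Code:
    import math

    IMG_H_PX = 240        # image is 320x240
    EDGE_REAL_IN = 18.0   # each vertical edge of the tape is 18" in reality
    TARGET_W_IN = 24.0    # width of the retro-reflective rectangle
    THETA_DEG = 47.0      # FOV figure for the M1011

    def distance_to_edge(edge_px):
        """Distance from the camera lens to one vertical edge of the target."""
        fov_height_in = IMG_H_PX / edge_px * EDGE_REAL_IN   # 43.2" / 32.2"
        return (fov_height_in / 2.0) / math.tan(math.radians(THETA_DEG / 2.0))

    d_green = distance_to_edge(100)   # -> about 49.7"
    d_red = distance_to_edge(134)     # -> about 37.0"

    # The two distances plus the 24" target width form a triangle. The law of
    # cosines, c^2 = a^2 + b^2 - 2*a*b*cos(C), solved for C, gives the angle
    # at the camera between the two sightlines:
    cos_c = (d_green**2 + d_red**2 - TARGET_W_IN**2) / (2 * d_green * d_red)
    print("green: %.1f in, red: %.1f in, angle at camera: %.1f deg"
          % (d_green, d_red, math.degrees(math.acos(cos_c))))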
#9
Re: Tracking Rectangles
This is incredibly helpful - I have no idea why using the camera as part of a triangle didn't come to mind, but it was the key piece I was missing. Thanks much!
-Mike
#10
Re: Tracking Rectangles
Would you be able to provide some raw images of the hoops through the Axis camera?
#11
Re: Tracking Rectangles
Perhaps, but it is actually pretty easy to get your own.
If you have the camera plugged into the switch and set the camera IP to 10.te.am.11, the dashboard will save an image every second. Connect the ring light and walk around the target. The images will be saved into the user/documents/LabVIEW Data directory as a series of jpgs.

You can also do this using the web browser or Vision Assistant, but you'll need to press a button for each image and save them later.

Greg McKaskle
#13
Re: Tracking Rectangles
Is using OpenCV (JavaCV) more feasible than using NI Vision if you're not using LabVIEW, then?

Our team is also considering putting a netbook on the robot to do the image processing (gotta figure out 12 -> 18V)... Is that really worth the trouble? I don't know how to get a netbook to communicate with the cRIO yet, other than with the driver station... Any ideas/suggestions?

Thanks
#14
Re: Tracking Rectangles
You can talk to the cRIO over a USB device such as an Arduino, or over serial.
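If you go the serial route, a laptop-side sketch might look like the following Python (needs the third-party pyserial package; the port name, baud rate, and message format here are placeholders, and the cRIO end would need matching code, e.g. WPILib's SerialPort class):

Code:
    import serial  # third-party pyserial package

    # Hypothetical: stream vision results to the cRIO over a serial link
    # (e.g. a USB-to-RS-232 adapter into the cRIO's serial port).
    link = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

    def send_target(distance_in, angle_deg):
        # Newline-terminated text so the robot side can read line by line.
        link.write(("%.1f,%.2f\n" % (distance_in, angle_deg)).encode("ascii"))

    send_target(49.7, 12.5)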
#15
Re: Tracking Rectangles
Is anyone else gonna be using OpenCV? (I, hopefully, will be able to use Python)
Also, what about the rule?
Last edited by shuhao : 13-01-2012 at 15:22.