#1
Re: Tracking Rectangles
Quote:
In theory, the bounding rectangle should be enough, if you put your camera as high as possible and are willing to tolerate a little error. The height would tell you how far away you are, and the width, after accounting for the height, would tell you how far "off center" you are, giving you your position in polar form relative to the target. The error would be greater the further off center you are (since the perspective transformation makes the rectangle taller than it should be), but I would need to test to see if it is a significant amount.
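A minimal sketch of that idea in Python, assuming a 320x240 image and the M1011's 47-degree field of view. Note it uses the rectangle's center position rather than its width for the off-center angle (a simpler variant of what the quote describes), and all the constants are assumptions to measure for a real camera:

Code:
import math

# Sketch only: estimate (distance, bearing) from the target's bounding
# rectangle.  Camera constants are assumptions for an M1011-style setup.
IMAGE_W, IMAGE_H = 320, 240      # pixels
FOV_H_DEG = 47.0                 # horizontal field of view, degrees
TARGET_H_IN = 18.0               # real height of the reflective strip

def target_polar(box_x, box_w, box_h):
    # The taller the rectangle appears, the closer it is: scale the
    # known 18" height by the fraction of the image it fills.
    fov_v_deg = FOV_H_DEG * IMAGE_H / IMAGE_W   # assumes square pixels
    visible_in = IMAGE_H / box_h * TARGET_H_IN  # inches the image spans
    distance = (visible_in / 2) / math.tan(math.radians(fov_v_deg / 2))
    # Off-center angle from how far the box center sits from mid-image
    # (a linear pixel-to-angle approximation, fine for small angles).
    center_x = box_x + box_w / 2
    bearing = (center_x - IMAGE_W / 2) / (IMAGE_W / 2) * (FOV_H_DEG / 2)
    return distance, bearing     # inches, degrees (negative = left)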
#2
Re: Tracking Rectangles
Quote:
1. I open up the image shown in the paper (the one with the perspective distortion) in Vision Assistant.

2. I use the third tool, the Measure tool, to determine the lengths of the left and right vertical edges of the reflective strip. I measure 100 pixels and 134 pixels. The first image shows the measurements in red and green. Since the edges are different pixel sizes, they are clearly different distances from the camera, but in the real world both are 18" long.

The image is 320x240 pixels in size. The FOV height where the red and green lines are drawn is found using 240 / 100 x 18" -> 43.2" for green, and 240 / 134 x 18" -> 32.2" for red. These may seem odd at first, but they are saying that if a tape measure were in the photo where the green line is drawn, taped to the backboard, then from top to bottom of the camera photo, 43.2 inches would be visible on the left/green side, and since the red is closer, only 32.2 inches would be visible.

Next, find the distance to the lines using a theta of 47 degrees for the M1011: (43.2 / 2) / tan(theta / 2) -> 49.7" and (32.2 / 2) / tan(theta / 2) -> 37.0". This says that if you were to stretch a tape measure from the camera lens to the green line, it would read 49.7 inches, and to the red line it would read 37 inches.

These measurements form two edges of a triangle, from the camera to the red line and from the camera to the green line, and the third edge is the width of the retro-reflective rectangle, or 24". Note that this is not typically a right triangle.

I think the next step would depend on how you intend to shoot. One team may want to solve for the center of the hoop; another may want to solve for the center of the rectangle. If you would like to measure the angles of the triangle described above, you may want to look up the law of cosines. It will allow you to solve for any of the unknown angles.

I'd encourage you to place yardsticks or tape measures on your backboard, walk to different locations on the field, and capture photos through your camera. You can then do similar calculations by hand or with your program, calculate many of the different unknown values, and determine which are useful for a shooting solution. As with the white paper, this is not intended to be a final solution, but a starting point. Feel free to ask follow-up questions or pose other approaches.

Greg McKaskle
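The arithmetic above is easy to check in a few lines of Python. A sketch of the same steps, assuming the same 47-degree theta, 18" strip height, and 24" strip width from the post:

Code:
import math

# Sketch of the arithmetic above: theta = 47 degrees for the M1011,
# an 18" tall by 24" wide strip, and a 240-pixel-tall image.
IMAGE_H_PX = 240
STRIP_H_IN = 18.0
STRIP_W_IN = 24.0
THETA_DEG = 47.0

def edge_distance(edge_px):
    # Inches of backboard visible top-to-bottom where this edge sits...
    fov_height_in = IMAGE_H_PX / edge_px * STRIP_H_IN
    # ...then simple trig on half the field of view gives the range.
    return (fov_height_in / 2) / math.tan(math.radians(THETA_DEG / 2))

green = edge_distance(100)   # about 49.7"
red = edge_distance(134)     # about 37.0"

# The two ranges and the 24" strip width form a triangle (not usually
# a right triangle), so the law of cosines recovers any of its angles,
# e.g. the angle subtended at the camera:
cos_c = (green**2 + red**2 - STRIP_W_IN**2) / (2 * green * red)
print(math.degrees(math.acos(cos_c)))   # roughly 27.5 degrees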
#3
I have wanted to design a vision system that works like that and calculates the distance to the target without range sensors or any other sensors. I also wanted to skip the Kinect because of how hard it is to interface with the robot, and its slow speed. This is exactly the routine I wanted to do, and now I know how to implement it. Thank you!
Also, if I am not wrong, does this follow the laws of perspective, which explain how an object looks smaller the farther it is from your eyes (in this case, the camera)? Here's an O: O. Look at it up close. Doesn't it look large? Now look at it from five feet away. It should look much smaller now. If I am not wrong, I think that is how this is supposed to work!
#4
To isolate the rectangle, could I use a camera with a very fast exposure (short shutter time), to reduce blur and to cut out the extraneous light, and have a very powerful light highlight the goals? Thresholding should get rid of the stray bits, then binary conversion, then erode and dilate, then the other steps to find one box?
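Something like that pipeline is straightforward in OpenCV. A minimal sketch, assuming a bright (e.g. green-lit) retro-reflective target on a dark, short-exposure image; the HSV thresholds are placeholder guesses to tune on real frames, and the findContours call uses the OpenCV 4 signature:

Code:
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")          # stand-in for a camera grab

# Threshold to a binary image: keep only bright, saturated green pixels.
# These HSV bounds are guesses; tune them on your own lit-target images.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
binary = cv2.inRange(hsv, (40, 80, 150), (90, 255, 255))

# Erode then dilate to knock out the stray specks of light.
kernel = np.ones((3, 3), np.uint8)
binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

# "The other steps": take the largest remaining blob's bounding box.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    print("target box:", x, y, w, h)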
#5
Re: Tracking Rectangles
Quote:
Quote:
#6
I've seen posts about how NI Vision plus their own tracking code lags other robot functions. Plus, OpenCV has way more resources, and I also get to use things like standard Python or other languages.
Maybe a Raspberry Pi? Hmmm.
#7
Re: Tracking Rectangles
It is working great for me, but YMMV. Proper threading should fix those problems. Using OpenCV on the cRIO would be very hard, as you would need to compile it for the cRIO to get that super-fast C code. You should try both out and report back to us with some metrics, since I have nothing but my NI Vision code to speak for. Personally, I see no advantage to having the laptop on the robot, since the lag between the robot and the DS is negligible. Perhaps threshold on the cRIO and send the (much smaller) binary image to the laptop?
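Back-of-the-envelope on the "much smaller": a thresholded 320x240 image packs down to one bit per pixel. A quick NumPy sketch (the array here is just a stand-in for real threshold output):

Code:
import numpy as np

# A thresholded 320x240 frame is 1 bit/pixel once packed: 9,600 bytes
# versus ~230,400 bytes for 24-bit color -- cheap to ship over the
# network.  The laptop just unpacks and reshapes.
binary = np.zeros((240, 320), dtype=np.uint8)       # stand-in threshold output
packed = np.packbits(binary > 0)                    # 9600 bytes
restored = np.unpackbits(packed).reshape(240, 320)  # 0/1 image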
To address your earlier point about the legality of a laptop controller: all output to robot parts (motors, relays, etc.) must come from the cRIO. You can send any signal you want to the cRIO, just not to anything else. Back in 2008 my team used current-based speed controllers, custom-built circuit boards placed between the speed controller and the motor, and it was fun convincing the inspectors that they were legal.
#8
Well, I need to send data back to the cRIO if I want to do image processing elsewhere. I'm not sending driving instructions to the parts from the laptop; the cRIO handles those. I'm just processing the image and sending a couple of things back, like heading and location.
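For what it's worth, that kind of result packet can be tiny. A minimal sketch, assuming the cRIO listens for UDP; the address, port, and JSON payload here are all made up for illustration:

Code:
import json
import socket

# Hypothetical address/port and payload format, purely for illustration.
CRIO_ADDR = ("10.0.0.2", 1130)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_result(heading_deg, distance_in):
    # No driving commands -- just what the vision code saw.
    msg = json.dumps({"heading": heading_deg, "distance": distance_in})
    sock.sendto(msg.encode(), CRIO_ADDR)

send_result(-3.2, 49.7)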
#9
Re: Tracking Rectangles
Quote:
#10
Re: Tracking Rectangles
Quote:
Quote:
Quote:
#11
Re: Tracking Rectangles
My team is currently considering a single-board computer on the robot. You can get an excellent multi-core Intel Atom-based computer from http://www.logicsupply.com/ for a few hundred dollars. We've already checked with one of our regional inspectors and this would be completely allowed. The design would be:
Axis M1011 --> D-Link --> Atom (MJPEG stream)
Axis M1011 --> D-Link --> Wireless --> Driver Station (MJPEG stream)
Atom --> D-Link --> cRIO
cRIO <--> D-Link <--> Wireless <--> Driver Station
cRIO --> Robot electro/mechanical bits

The Atom would run a program (LabVIEW, custom, whatever) that processes the image feed in real time and uses the network to talk to the cRIO. The cRIO would use this information internally to determine shooting solutions and send the needed data down to the driver station so the drivers know what's going on and what it's thinking.

The idea behind this is that it removes both the wireless network and the cRIO from the image-processing loop, at the expense of another piece of electronics in the system. The added horsepower comes at added complexity. The assumption, correct or otherwise, is that an industrial-ish single-board PC is reliable, and the code on the cRIO and driver station can still work great even if image processing fails. The specific configuration listed above also keeps us with a video feed unless the camera itself fails. Only time will tell if it's a good idea or not.

-Mike
#12
Re: Tracking Rectangles
Quote:
The board may be COTS, but the battery is no longer "integral to and part of", and thus not an allowable battery. |
#13
Re: Tracking Rectangles
Quote:
#14
Re: Tracking Rectangles
Actually, many single-board computers have a power supply designed for automotive use, where they can take anywhere from 6 V to 24 V. The power supply we are using does this, for instance, making it well suited to the robot.
-Mike |