Tracking Rectangles

Perhaps, but it is actually pretty easy to get your own.

If you have the camera plugged into the switch and set the camera IP to 10.te.am.11, the dashboard will save an image every second. Connect the ring light and walk around the target. The images will be saved into the user/documents/LabVIEW Data directory as a series of JPGs. You can also do this using the web browser or Vision Assistant, but you'll need to press a button for each image and save them afterward.

Greg McKaskle

Aye, I was just wondering if you had some already, as I don't have a camera and cRIO until tomorrow :slight_smile:

Is using OpenCV (JavaCV) more feasible than using NI Vision if you're not using LabVIEW, then?

Our team is also considering putting a netbook on the robot to do the image processing (gotta figure out 12 -> 18V)… Is that really worth the trouble? I don’t know how to get a netbook to communicate with the cRIO yet other than with the driver station…

Any ideas/suggestions?

Thanks

You can have the netbook on the robot running off its own battery; you don't need a 12 V to 18 V converter.

And you can talk to the cRIO over a USB device such as an Arduino, or over a serial connection.
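
For instance, a rough sketch of the netbook side of such a serial link could look like the snippet below (using the pyserial package; the port name, baud rate, and message format are placeholders, not anything FRC-standard):

```python
# Hypothetical netbook-side sketch: push vision results over a USB serial
# link (e.g. through an Arduino or a USB-serial adapter) toward the cRIO.
# Port name, baud rate, and message format are assumptions for illustration.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send_result(heading_deg, distance_ft):
    # Simple newline-terminated ASCII message; parse it on the receiving end.
    msg = "H:%.1f D:%.1f\n" % (heading_deg, distance_ft)
    port.write(msg.encode("ascii"))

send_result(12.5, 8.0)
```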

Is anyone else gonna be using OpenCV? (I, hopefully, will be able to use Python)

Also, what about the rule

Robots must be controlled via one programmable National Instruments cRIO (part # cRIO-FRC or cRIO-FRCII), with image version FRC_2012_v43. Other controllers shall not be used.

Does a netbook count as another “controller”?

Personally, I don't think so, but that is for your team to judge. I have no facts to back this up, but I would bet money that NI Vision is optimized pretty well for the cRIO and will give you better performance. Just the time required to convert the image to OpenCV format will be huge, since it is stored as a pointer to a block of memory containing a JPEG image, which you have to decompress to get individual pixel values, so I'm not sure it would be feasible at all.
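
To give a concrete feel for that decompression step, here is a minimal Python/OpenCV sketch (not NI Vision, and not anyone's competition code) that turns a saved JPEG into a pixel array you can actually work on; the file name is a placeholder:

```python
# Minimal sketch: decompress a saved JPEG (e.g. one of the dashboard
# snapshots) into an OpenCV image, i.e. a NumPy array of BGR pixel values.
import numpy as np
import cv2

with open("target_snapshot.jpg", "rb") as f:   # placeholder file name
    buf = np.frombuffer(f.read(), dtype=np.uint8)

img = cv2.imdecode(buf, 1)   # 1 = decode as a color image; this is the slow part
print(img.shape)             # (height, width, 3) once decompressed
```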

As far as the laptop goes, I would run the code on the driver station, then send the results back, rather than attach a laptop to the robot. It might be slightly slower, but there's a much smaller chance of it getting destroyed :stuck_out_tongue:

I've seen posts about how NI Vision plus their own tracking code lags other robot functions. Plus, OpenCV has way more resources, and I also get to use things like standard Python, or other languages.

Maybe raspberry pi? Hmmm

It is working great for me, but YMMV. Proper threading should fix those problems. Using OpenCV on the cRIO would be very hard, as you would need to compile it for the cRIO to get that super-fast C code. You should try both out and report back to us with some metrics, since I have nothing but my NI Vision code to speak for. Personally, I see no advantage to having the laptop on the robot, since the lag between the robot and the DS is negligible. Perhaps threshold on the cRIO, and send the (much smaller) binary image to the laptop?
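
As an illustration of the threading point (a generic sketch, not my dashboard code): grab frames on a background thread so the processing loop always works on the latest frame instead of blocking on the network. The MJPEG URL below is a guess at the Axis default.

```python
# Generic sketch: capture camera frames on a background thread so the
# vision loop never stalls waiting on the network.
import threading
import cv2

class FrameGrabber(threading.Thread):
    def __init__(self, source):
        threading.Thread.__init__(self)
        self.daemon = True
        self.capture = cv2.VideoCapture(source)
        self.lock = threading.Lock()
        self.frame = None

    def run(self):
        while True:
            ok, frame = self.capture.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        # Return a copy of the newest frame, or None if nothing has arrived yet.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

# grabber = FrameGrabber("http://10.te.am.11/mjpg/video.mjpg")  # URL is a guess
# grabber.start()
# frame = grabber.latest()  # process this without slowing down acquisition
```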

To address your earlier point about the legality of a laptop controller: all output to robot parts (motors, relays, etc.) must come from the cRIO. You can send any signal you want to the cRIO, just not to anything else. Back in 2008 my team used current-based speed controllers that were custom-built circuit boards placed between the speed controller and the motor, and it was fun convincing the inspectors that they were legal :stuck_out_tongue:

Well, I need to send data back to the cRIO if I want to do the image processing elsewhere. I'm not sending driving instructions to the parts from the laptop; the cRIO handles those. I'm just processing the image and sending a couple of things back, like heading and location.
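
Something along these lines would do it on the laptop side (a sketch; the cRIO address follows the usual 10.te.am.2 convention, but the port number and message format are placeholders, and the robot code has to listen for them):

```python
# Hypothetical laptop-side sketch: send only the small vision results
# (heading, distance) back to the cRIO over UDP.
import json
import socket

CRIO_ADDR = ("10.te.am.2", 1180)   # port is a placeholder; check which ports the rules allow
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_result(heading_deg, distance_ft):
    packet = json.dumps({"heading": heading_deg, "distance": distance_ft})
    sock.sendto(packet.encode("ascii"), CRIO_ADDR)

send_result(-3.2, 12.5)
```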

Right, so it would be legal. But you should read the rules carefully; there is a maximum on the money spent on one part, rules on the motors (fan, hard drive, etc.), and on the power source (no batteries but the kit one, and all power goes through the distribution board). There is a reason very few teams go that route, and many teams are successful at image processing on the CRIO.

Luckily they’ve made this rather easier the last few years than when I tried it in 2008.

My team is currently considering a single-board computer on the robot. You can get an excellent multi-core Intel Atom-based computer from http://www.logicsupply.com/ for a few hundred dollars. We’ve already checked with one of our regional inspectors and this would be completely allowed. The design would be:

Axis M1011 --> D-Link --> Atom (MJPEG stream)
Axis M1011 --> D-Link --> Wireless --> Driver Station (MJPEG stream)
Atom --> D-Link --> CRIO
CRIO <--> D-Link <--> Wireless <--> Driver Station
CRIO --> Robot electro/mechanical bits

The Atom would run a program (LabVIEW, custom, whatever) that processes the image feed in real time and uses the network to talk to the CRIO. The CRIO would use this information internally to determine shooting solutions and send needed data down to the driver station so drivers know what's going on and what it's thinking.
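
For example, the capture side on the Atom could be as simple as the sketch below, assuming OpenCV can open the M1011's MJPEG stream directly; the URL depends on how the camera is configured, so treat it as a placeholder.

```python
# Sketch of the Atom-side capture loop: read the Axis camera's MJPEG stream
# and hand each frame to the target-finding code. URL is a placeholder.
import cv2

stream = cv2.VideoCapture("http://10.te.am.11/mjpg/video.mjpg")
while True:
    ok, frame = stream.read()
    if not ok:
        break
    # ... find the target in `frame`, then send the result to the CRIO ...
```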

The idea behind this is that it removes both the wireless network and the CRIO from the image processing loop at the expense of another piece of electronics in the system. The added horsepower comes at added complexity. The assumption though, correct or otherwise, is that an industrial-ish single-board PC is reliable and the code on the CRIO and driver station can still work great even if image processing fails. The specific configuration I listed above also keeps us with video feed unless the camera itself fails.

Only time will tell if it’s a good idea or not :slight_smile:

-Mike

I thought about that. The only downside is that it is no longer “batteries integral to and part of a COTS computing device …”. Thus, you have to run off the main battery.

The board may be COTS, but the battery is no longer “integral to and part of”, and thus not an allowable battery.

I don’t necessarily agree with that. I think a ruling on that would be needed.

Actually, many single board computers have a power supply designed for car use where they can take from 6v to 24v. The power supply we are using does this for instance, making it well suited to the robot.

-Mike

I have wanted to design a vision system to work like that and calculate the distance from the target without range sensors or any other sensors. I also wanted to skip the Kinect because of how hard it is to interface with the robot, and its slow speed. This is exactly the routine that I wanted to do. Now I know how to implement it. Thank you!
Also, if I am not wrong, does it follow the laws of perspective, which explain how an object looks smaller the farther it is from your eyes, or in this case the camera?

Here's an O: O
Look at it up close. Doesn't it look large?
Now look at it five feet away. It should look much smaller now.
If I am not wrong, I think that is how this is supposed to work!
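
That is the idea. In pinhole-camera terms (the standard textbook model, nothing specific to this thread), apparent size shrinks in proportion to distance, so a target of known real height gives you range. A back-of-the-envelope sketch, where the resolution, FOV, and target height are placeholders to tune for your setup:

```python
# Range estimate from the pinhole-camera model: apparent height is
# inversely proportional to distance.
import math

IMAGE_HEIGHT_PX = 480
VERTICAL_FOV_DEG = 37.0     # rough guess for an Axis-style camera; measure yours
TARGET_HEIGHT_FT = 1.5      # real-world height of the reflective rectangle

# Focal length expressed in pixels, derived from the vertical field of view.
focal_px = (IMAGE_HEIGHT_PX / 2.0) / math.tan(math.radians(VERTICAL_FOV_DEG / 2.0))

def distance_ft(target_height_px):
    return TARGET_HEIGHT_FT * focal_px / target_height_px

print(distance_ft(60.0))    # e.g. a target that appears 60 pixels tall
```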
:cool: ::safety:: :stuck_out_tongue: :smiley: ::ouch::

Is it still too late to contribute? I've been teaching the interested members of my team computer vision during this preseason. I posted a white paper on here describing my methods of using camera pose estimation for Rebound Rumble (note: this is very complicated mathematics; I don't recommend it unless you are up for a big challenge). Pose could have been used for this year too, but that pesky pyramid got in the way, so basic trig sufficed. I'm in the process of writing a scholarly (I guess you'd call it) paper describing my program from Ultimate Ascent, for the purpose of submitting it to the Missouri symposium and having it compete at Intel ISEF and ISWEEEP (the kid who won ISEF was on the Daily Show last night). You can view our vision system setup on our website if you like. We used a single-board computer, an ODROID-X2. It is essentially our toy now. So much fun to play with.

Anyway, is anyone doing anything with vision this preseason? I've been working with some professors at Wash U, Missouri S&T, and Harvey Mudd to make a camera pose estimation with a known rotation matrix. It turned out to be a lot more math-intensive than the three of us first thought. The program will be done before build season, and I'll have it up online somewhere so whoever wants to can look at it. I'll make it as educational as I can with comments, but comments can only do so much (which turns out to be a lot). If it is another game where there is reflective tape that can be used to assist in scoring, which there has been every year I've been in FIRST (starting with Logomotion), then I'll put up a working vision code that returns distance on the x-z plane and x-rotation to the center of the target at a reasonable point during build season.

I wrote my team’s vision system last year, but this preseason (and hopefully build season), it’s in the hands of one of the newer team members – I’m graduating this year. Your pose estimation project sounds quite interesting to me, though. Thinking through it a little, it seems that for a rectangular target you could use the ratios in length of opposite edges to find the Euler angles of its plane. To figure out the formula to calculate that, though, I’d need some paper and a lot of pacing. I’d definitely like to see your work once it’s complete!

If you sit down and think about it for a second, you realize that the whole rotation matrix can be solved without doing pose. The object points are all on a single plane, which in itself makes things simpler, but that plane also IS the YZ plane and doesn't extend into the XY plane at all, which is nice. That means that roll is constant. You can calculate pitch and yaw by proportions with the FOV versus the image resolution. Great, now the rotation and camera matrices are known; the only thing left to solve for is the translation matrix. Hurray! This can be done in two ways: "plugging in" the rotation matrix to the standard pose equations, or using geometry. I did the geometry approach already; it works, coded and everything. I'm currently doing the linear algebra approach, because I want to know which one is quicker FPS-wise. Then, since my team loves gyros but hates gyro drift, I'm going to use my "pose" as a check on the gyro: when I have a pose solution, it will fix the gyro readings. I trust my linear algebra more than a gyro that has drifted over 10 rps in the pits.
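
To make the "proportions with the FOV vs image resolution" step concrete, here is a rough sketch (my own illustration, not the poster's code); the resolution and FOV numbers are placeholders, and the linear scaling is only an approximation of the exact atan relationship:

```python
# Sketch: convert a target's pixel position into yaw and pitch angles
# relative to the camera axis by scaling against the field of view.
IMAGE_W, IMAGE_H = 640, 480
H_FOV_DEG, V_FOV_DEG = 47.0, 37.0    # placeholders; use your camera's real FOV

def camera_angles(target_x_px, target_y_px):
    # Offsets from the image center, normalized to the range [-1, 1].
    nx = (target_x_px - IMAGE_W / 2.0) / (IMAGE_W / 2.0)
    ny = (IMAGE_H / 2.0 - target_y_px) / (IMAGE_H / 2.0)
    yaw_deg = nx * (H_FOV_DEG / 2.0)      # left/right of the camera axis
    pitch_deg = ny * (V_FOV_DEG / 2.0)    # above/below the camera axis
    return yaw_deg, pitch_deg

print(camera_angles(400.0, 150.0))
```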

To isolate the rectangle, could I use a very short exposure (a fast shutter) to reduce blur and cut down extraneous light, and have a very powerful light highlight the goals? Thresholding should get rid of the stray pieces, then binary conversion, then erode and dilate, then the other steps to find one box?
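
In OpenCV terms, those steps could look something like the sketch below (assuming a green ring light; the threshold bounds, kernel size, and file name are all placeholders to tune for your setup):

```python
# Sketch of the isolate-the-rectangle pipeline: threshold, erode/dilate,
# then keep the largest remaining contour and box it.
import cv2
import numpy as np

img = cv2.imread("target_snapshot.jpg")            # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold for a bright green ring-light reflection; tune for your lighting.
mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([90, 255, 255]))

kernel = np.ones((3, 3), np.uint8)
mask = cv2.erode(mask, kernel, iterations=1)       # remove small speckle
mask = cv2.dilate(mask, kernel, iterations=2)      # reconnect the tape outline

found = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = found[-2]   # second-to-last item works across OpenCV versions
if contours:
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    print("target box:", x, y, w, h)
```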