In team 3494’s discussions so far about designing an offensive robot, we’ve determined that the most reliable way to score, other than the “slam dunk” method, is to use a camera to track the retro-reflective tape targets on the goals. I have no experience programming this, so how feasible is it for a fairly new programmer (about one year of experience) to write a tracking method in LabVIEW? I haven’t been able to look at the available prebuilt software yet, but is there anything out there that would make this a dependable design option? Also, is it easier to track, program, and test with the Kinect as opposed to the Axis camera? Thanks in advance for any help!
The example code called Rectangular Target Processing may be interesting to look at. Additionally, a white paper that describes the code should be on the NI site before long. The example runs on the laptop using just the camera and the ring light.
While the Kinect has a good RGB camera on it, it is much simpler to connect an Axis than a Kinect.
Greg McKaskle
Can I pull up any similar example code for Java?
I’m not sure about that, but I believe the functions are probably wrapped so that they can be called from C++ and Java. If not, it is not much work to have them wrapped and added to WPILib.
Greg McKaskle
The short answer is no. A friend and programming mentor for a neighboring team beta-tested the 2012 software and pointed out the slow processing on the netbooks. The Kinect is built to work with a fairly capable processor that can run the C# tools needed to preprocess the Kinect signal. That’s not a big deal for a 2012 notebook, but it is difficult to run independently on a cRIO (some sort of coprocessor board would likely need to sit in between; correct me if someone else has accomplished a direct connection). The Axis camera, on the other hand, has large amounts of example code and documentation (appropriate for someone with one year of experience), and there are plenty of ways you can adjust the code to run faster while learning more about image processing.
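To give a feel for the kind of processing involved (this is not the NI or WPILib example code, just a minimal sketch with made-up class and method names): the core of retro-reflective tape detection with a ring light is thresholding the bright pixels and boxing them.

```java
// Minimal sketch: threshold a grayscale frame and find the bounding box of
// the bright region, which is where a ring-lit retro-reflective target shows up.
public class TargetFinder {
    // Returns {minX, minY, maxX, maxY} of pixels >= threshold, or null if none.
    public static int[] boundingBox(int[][] gray, int threshold) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] >= threshold) {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
            }
        }
        return (maxX < 0) ? null : new int[] {minX, minY, maxX, maxY};
    }

    public static void main(String[] args) {
        // Tiny synthetic frame with a bright 2x2 patch in the middle.
        int[][] frame = {
            {10, 10, 10, 10},
            {10, 250, 240, 10},
            {10, 245, 255, 10},
            {10, 10, 10, 10},
        };
        int[] box = boundingBox(frame, 200);
        System.out.println(box[0] + "," + box[1] + "," + box[2] + "," + box[3]); // 1,1,2,2
    }
}
```

The real examples do this on a color image (thresholding in HSL or HSV) and then run particle analysis rather than a single bounding box, but the loop above is the basic shape of it.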
http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
http://www.inf.ufrgs.br/~crjung/research_arquivos/paper_1125.pdf
Go knock yourselves out.
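For anyone who wants to see what those links are describing in code form, here is a minimal, unoptimized Java sketch of the Hough line transform (the discretization and the method name are my own, not from either paper): each edge pixel votes for every (theta, rho) line that could pass through it, and the accumulator peak is the dominant line.

```java
public class HoughLines {
    // Accumulate votes in (theta, rho) space for each edge pixel; the peak
    // identifies the strongest line. thetaSteps discretizes 0..180 degrees.
    // Returns {thetaIndex, rho, votes}.
    public static int[] strongestLine(boolean[][] edges, int thetaSteps) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[thetaSteps][2 * maxRho + 1];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (edges[y][x])
                    for (int t = 0; t < thetaSteps; t++) {
                        double theta = Math.PI * t / thetaSteps;
                        int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                        acc[t][rho + maxRho]++; // offset so negative rho fits
                    }
        int bestT = 0, bestR = 0, bestVotes = -1;
        for (int t = 0; t < thetaSteps; t++)
            for (int r = 0; r < acc[t].length; r++)
                if (acc[t][r] > bestVotes) {
                    bestVotes = acc[t][r];
                    bestT = t;
                    bestR = r - maxRho;
                }
        return new int[] {bestT, bestR, bestVotes};
    }

    public static void main(String[] args) {
        // A vertical line at x = 2 in a 5x5 edge image.
        boolean[][] edges = new boolean[5][5];
        for (int y = 0; y < 5; y++) edges[y][2] = true;
        int[] line = strongestLine(edges, 180);
        System.out.println("thetaIdx=" + line[0] + " rho=" + line[1] + " votes=" + line[2]);
    }
}
```

This brute-force version is far too slow to run per-frame on a cRIO at full resolution; the papers above cover the optimizations (gradient-directed voting, coarse-to-fine accumulators) that make it practical.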
As you brainstorm, it’s also worth noting that robots don’t have to do things the way that humans do things. To aim at the target, you don’t have to be able to see the target, you just have to know where it is in relation to the robot (angle, distance, height, the last of which is constant for any given basket).
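To make the angle/distance/height point concrete, here is a sketch of the standard fixed-camera range calculation, in Java. All of the numbers and names are hypothetical (they are not actual field dimensions): given the target’s known height, the camera’s mounting height and tilt, and where the target appears vertically in the image, simple trigonometry gives the distance without any fancier vision work.

```java
public class TargetRange {
    // Estimate horizontal distance to a target of known height from the
    // camera's mounting height, its tilt, and the target's vertical pixel
    // offset from the image center. Plain trigonometry; nothing FRC-specific.
    public static double distanceMeters(double targetHeightM, double cameraHeightM,
                                        double cameraTiltDeg, double pixelOffsetY,
                                        double imageHeightPx, double verticalFovDeg) {
        // Angle subtended by the pixel offset, assuming a linear pixel-to-angle
        // mapping (a reasonable approximation for narrow fields of view).
        double degPerPixel = verticalFovDeg / imageHeightPx;
        double angleDeg = cameraTiltDeg + pixelOffsetY * degPerPixel;
        return (targetHeightM - cameraHeightM) / Math.tan(Math.toRadians(angleDeg));
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 2.5 m target, 0.5 m camera, 20 deg tilt,
        // target at the image center (offset 0), 480 px tall image, 40 deg FOV.
        double d = distanceMeters(2.5, 0.5, 20.0, 0.0, 480.0, 40.0);
        System.out.println("distance ~ " + d + " m");
    }
}
```

The same idea works with any sensor that gives you an angle to a known landmark, which is exactly the "you just have to know where it is" point above.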
Astute students could devise a way, for example, to use the geometry of floor lines (detected with an array of color sensors) to line up shots from particular points on the field, wherein the robot will be in perfect position for nuthin’-but-net.
(Please note that I’m not necessarily advocating that as the ideal method to line up shots – I’m just trying to give kids a prod in the alternative-functionality direction.)
Bingo. Look for all of the known field entities that can be used to localize the robot on the field, especially those that may get you to a position where your robot has a high chance of scoring, and then look for the simplest way to identify those and incorporate them into your targeting system.
As an example: my HS basketball coach told players to focus on either the front or rear of the rim, depending on how their shooting motion tended to loft the ball. Then he admitted that he still focused on the net, and that as a youngster he stared at a solar eclipse long enough to scar his retina. Since he couldn’t actually make out the rim reliably, he looked at the net. I’m sure other players look at some portion of the backboard rectangle.
The field has nice lines on the floor to follow to a known location. It has a fender to align against or locate with a sensor. And it is worth mentioning that the walls of the field are nice for alignment.
The retro-reflective tape was definitely put on the backboard as a possible vision target, but it is one of many. Don’t let it limit your possibilities.
Greg McKaskle