My team is using an Axis camera to calculate the distance to a target. The distances we are currently calculating are up to a foot off, depending on location, lighting, etc., and the estimates get worse at an angle to the target. What accuracy range are other teams getting? Do you think the camera is the best way to get the distance? We have also done tests with the ultrasonic sensor, which is sometimes very accurate and other times gives strange results.
I second this question. If you can’t get better estimates from the Axis camera, you might try using the camera as a sanity check for an ultrasonic sensor.
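In case it helps, the sanity-check idea can be as simple as something like this. The function name and the 1 ft tolerance are made up purely for illustration:

    #include <cmath>

    // Use the ultrasonic unless it disagrees with the camera by more than
    // a tolerance, then fall back to the camera. Names and the 1 ft
    // tolerance are placeholders.
    double FusedDistanceFt(double ultrasonicFt, double cameraFt)
    {
        const double kToleranceFt = 1.0;
        if (std::fabs(ultrasonicFt - cameraFt) <= kToleranceFt)
            return ultrasonicFt; // readings agree: ultrasonic is the finer sensor
        return cameraFt;         // readings disagree: suspect a bad echo
    }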
A smaller area could also mean you are just farther away from the line perpendicular to the target, even if you are the same distance from the target itself.
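To put a number on that: viewed from an angle θ off the target’s centerline, the apparent width shrinks roughly as cos θ, so at 30° off-axis the 2 ft target projects as if it were only about 2 × cos 30° ≈ 1.73 ft wide. A size-to-distance lookup that ignores this will read the shrinkage as extra distance.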
Citrus, what algorithm are you using to compute distance?
Would it be more accurate to use the ratio of the size of the rectangle to the overall picture (i.e., the smaller the rectangle in pixels, the farther away you are) and then use the lengths of the sides to calculate the angle? (A rough sketch of this idea is below.)
And I’m one of those NEMs (Non-engineering Mentors) so the math escapes me completely here.
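For what it’s worth, the side-length part of that idea might look like the sketch below: skew compresses the apparent width but, to first order, not the height, so the ratio of the two hints at the viewing angle. The 24 in × 18 in target dimensions are my reading of the field drawings, and this ignores perspective and camera tilt:

    #include <cmath>

    // First-order skew estimate from the target's apparent aspect ratio.
    // Skew shrinks apparent width by roughly cos(angle) while leaving the
    // height alone, so compare the measured aspect to the true aspect.
    const double kTrueAspect = 24.0 / 18.0; // target is 24 in x 18 in

    double EstimateSkewRadians(double widthPx, double heightPx)
    {
        double ratio = (widthPx / heightPx) / kTrueAspect;
        if (ratio > 1.0) ratio = 1.0; // guard against measurement noise
        return std::acos(ratio);      // 0 = head-on; bigger = more skew
    }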
With the range finder we received, you shouldn’t need to use the Axis camera. The real question is: has anyone found information on coding the rangefinder?
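I haven’t seen official sample code either, but if it is an analog-output ultrasonic (one that reports a voltage proportional to distance), reading it is just a voltage conversion. Here is a rough sketch; the channel number and scale factor are assumptions to replace from your wiring and the sensor’s datasheet (MaxBotix analog sensors, for example, run about 9.8 mV per inch at 5 V). If yours is a ping/echo type instead, WPILib’s Ultrasonic class already handles the timing:

    #include "WPILib.h"

    static const double kVoltsPerInch = 0.0098; // placeholder -- calibrate!

    // Assumes the rangefinder's analog output is wired to an AnalogChannel
    // you construct once in your robot class.
    double GetRangeInches(AnalogChannel &rangeChannel)
    {
        return rangeChannel.GetVoltage() / kVoltsPerInch;
    }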
If you haven’t already, you should read this white paper on vision and the Axis camera. It is from National Instruments and is specific to this year’s competition. The URL is https://decibel.ni.com/content/docs/DOC-20173
You may need to register to download it. It provides good detail on the algorithm used and possible sources of error.
We are using OpenCV to track the vision target this year. I have been fiddling around with color tracking over the summer, so I have a little experience with it. I have been able to track the tape from up to 16 ft away. Due to a lack of smoked polycarbonate, I don’t have the actual vision target built yet.
Does anyone know how dense the polycarbonate is? All the manual says is “1/2 in. smoked polycarbonate” with no further description.
As far as determining distance, I thought using the target’s size relative to the image size would be viable, because the target is 2 ft wide.
What problems are you having with OpenCV? I’ll try to help. Also, you should get hold of the book Learning OpenCV: Computer Vision with the OpenCV Library (ISBN: 978-0-596-51613-0). It’s great for reference and (obviously) for learning the basics.
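For anyone wondering what the OpenCV route looks like, here is a rough sketch of the threshold-and-contours pipeline I’m describing. The HSV bounds assume a green LED ring light and are placeholders to tune for your camera and lighting, and the capture source is whatever video stream you have:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::VideoCapture cap(0); // placeholder: substitute your video source
        cv::Mat frame, hsv, mask;

        while (cap.read(frame))
        {
            cv::cvtColor(frame, hsv, CV_BGR2HSV);
            // Keep only bright, saturated green-ish pixels (tune these!).
            cv::inRange(hsv, cv::Scalar(50, 100, 100),
                             cv::Scalar(90, 255, 255), mask);

            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(mask, contours, CV_RETR_EXTERNAL,
                             CV_CHAIN_APPROX_SIMPLE);

            for (size_t i = 0; i < contours.size(); ++i)
            {
                cv::Rect box = cv::boundingRect(contours[i]);
                if (box.area() < 200) continue; // drop specks of noise
                cv::rectangle(frame, box, cv::Scalar(0, 0, 255), 2);
            }

            cv::imshow("tracking", frame);
            if (cv::waitKey(10) == 27) break; // Esc quits
        }
        return 0;
    }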
I am not using OpenCV at all, as I am running all of my code on the cRIO.
If you are running the vision on the cRIO, NI Vision (CVI) is a much easier way to go: it is already compiled for VxWorks, all of the supporting code (getting an image from the Axis camera) is already written, there is an example that processes rectangles, and you can use NI Vision Assistant to automate a bunch of it.
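For a feel for that route, here is a stripped-down sketch along the lines of the rectangle example, using WPILib’s wrappers around NI Vision. The HSL threshold numbers are placeholders you would tune in Vision Assistant and copy over:

    #include "WPILib.h"
    #include <vector>
    #include <stdio.h>

    void ProcessOneFrame()
    {
        // Hue, saturation, and luminance ranges -- placeholders to tune.
        Threshold threshold(100, 140, 100, 255, 100, 255);

        AxisCamera &camera = AxisCamera::GetInstance();
        HSLImage *image = camera.GetImage();

        BinaryImage *binary = image->ThresholdHSL(threshold);
        BinaryImage *cleaned = binary->RemoveSmallObjects(false, 2);

        std::vector<ParticleAnalysisReport> *reports =
            cleaned->GetOrderedParticleAnalysisReports();

        for (unsigned i = 0; i < reports->size(); ++i)
        {
            ParticleAnalysisReport &r = (*reports)[i];
            printf("particle %u: %d x %d px\n", i,
                   r.boundingRect.width, r.boundingRect.height);
        }

        // The images are heap-allocated, so clean up each frame.
        delete reports;
        delete cleaned;
        delete binary;
        delete image;
    }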
I am using the older version because I had big problems with the new 2.3.1. I have started writing the tracking algorithm and have detected the “parallelograms”. Now here comes the math.
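For the distance part of the math, the usual starting point is the pinhole model: a target of true width W that spans w pixels, in an image I pixels wide from a camera with horizontal field of view F, sits at roughly d = (W × I) / (2 × w × tan(F/2)). Note this predicts distance varies as 1/w, which is why a size-to-distance curve from test data bends the way it does. A sketch, where the field of view is an assumption to check against your camera’s datasheet (the Axis 206 is usually quoted around 54°):

    #include <cmath>

    // Pinhole-model distance estimate from the target's pixel width.
    double DistanceFeet(double targetWidthPx, double imageWidthPx)
    {
        const double kFovDegrees = 54.0;   // horizontal FOV -- assumed
        const double kTargetWidthFt = 2.0; // the target is 2 ft wide
        double halfFov = kFovDegrees * M_PI / 360.0;
        return (kTargetWidthFt * imageWidthPx) /
               (2.0 * targetWidthPx * std::tan(halfFov));
    }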
Correct me if I’m wrong, but can’t you use an exponential regression for the correlation between the height of the rectangle and the distance away? (At least it seems like an exponential regression is the right way to go.)
Does OpenCV have a program that will spit out code for you, similar to what NI Vision Assistant does? If it does, it seems like OpenCV would be easier for my team to implement our rectangle tracking with, as we are having some trouble understanding the code that Vision Assistant spits out.
My team uses the Axis camera to identify blobs (yes, that’s the technical term for the… blobs that the camera picks up) and then finds the distance based on the pixel distance between the top target and the bottom target. The angle uses trig and the distances from the side targets; LabVIEW can calculate the rest.
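For anyone who wants to try that two-blob spacing method outside LabVIEW, the core of it is one similar-triangles step. In this sketch the center-to-center separation is a placeholder to take from the field drawings, and the vertical field of view is an assumption to check against your camera:

    #include <cmath>

    // Distance from the vertical pixel gap between the centers of the
    // top and bottom targets. Both constants are placeholders.
    double DistanceFromTargetPair(double pixelSeparation, double imageHeightPx)
    {
        const double kSeparationFt = 5.0;    // center-to-center -- from field drawings
        const double kVerticalFovDeg = 40.0; // assumed -- check your camera
        double focalPx = imageHeightPx /
                         (2.0 * std::tan(kVerticalFovDeg * M_PI / 360.0));
        return kSeparationFt * focalPx / pixelSeparation;
    }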