It's been a long time since I've posted. Nothing got accomplished on my fully autonomous robot idea, just a whole bunch of notes and drawings in a notebook. My ideas involve heavy use of trig functions (I took trig over the summer, and it inspired me). Trig functions do take a lot of processing power, considering that I might be using at least a dozen of them every cycle, on top of processing images (mass blob detection), handling locomotion, and all that good stuff. Since there is no direct way to monitor memory and processor usage on the cRIO (or am I just unaware of one?), how can I find out how much it is using? Am I good to go with the Math functions, or do I have to make a "trig chart" or add an onboard processor to help the cRIO?
If you want to know what I am doing:
I am thinking of using two 60 fps cameras as a stereo vision system, and it's not very complex on paper either. I think it is the perfect system to use.
Finding the distance of an object purely from cameras is something you'll get very easily if you think about it. I'd need diagrams to explain it properly, and it might take a long time, but it comes down to one phrase: an isosceles triangle with a known base.
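The isosceles-triangle idea can be sketched in a few lines. This is a minimal illustration (not the poster's actual code), assuming the object sits on the perpendicular bisector of the baseline between the two cameras, so both cameras turn inward by the same angle:

```python
import math

def stereo_distance(baseline_ft, camera_angle_deg):
    """Distance from the baseline to an object centered between two cameras.

    Assumes the object lies on the perpendicular bisector of the baseline,
    so the two sight lines form an isosceles triangle with the known base.
    camera_angle_deg is each camera's angle measured from the baseline.
    """
    theta = math.radians(camera_angle_deg)
    # Half the base and the angle give the height of the triangle.
    return (baseline_ft / 2.0) * math.tan(theta)

# Example: 1 ft baseline, each camera rotated 80 degrees toward the target
# gives a distance of about 2.84 ft.
```

Note how fast the distance grows as the cameras approach parallel (90 degrees); that steepness is exactly where the error analysis later in the thread comes in.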
I actually think that computing the trig functions would be faster than using a lookup table. How long does it take to graph a sine wave on an iBook G4?
Now, what will take a while is the processing of images, ESPECIALLY with disparity maps. You’ll probably require an FPGA to get the performance you’re looking for. Too bad sbRIOs are in OEM quantities only.
The Taylor approximation stated above is both extremely accurate and extremely fast; if it is not accurate enough, just add another term. I'm pretty sure this is how it is done behind the scenes anyway, so it doesn't really make much of a difference. This year, my team did crab drive with a gyro, and I used a ton of trig functions every cycle. The image processing is much, much more processing-intensive.
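For reference, here is roughly what a truncated Taylor series for sine looks like. This is a simplified sketch: real math libraries use more careful range reduction and tuned polynomials, but the cost is the same idea, a handful of multiply-adds per call:

```python
import math

def taylor_sin(x, terms=4):
    """Truncated Taylor series for sin(x) about 0.

    Illustrative only; a production libm does fancier range reduction
    and polynomial fitting, but it is still just a few multiply-adds.
    """
    # Fold x into [-pi, pi] so the series converges with few terms.
    x = math.remainder(x, 2.0 * math.pi)
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Each Taylor term is the previous one times -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total
```

With four terms the error near 1 radian is already down around 1e-6; adding terms shrinks it further, which is the "just add another term" point above.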
Before you invest too much in this project, assume a reasonable error in the angles you are able to measure after you try to get both cameras to lock onto a single small point on the object. Additionally assume the object is not perfectly “broadside” to your two cameras. Then compute the range of distance errors that will arise as a consequence of those angular errors.
If you are thinking of using an FRC robot as the base of your triangle, and you are trying to get the distances to objects scattered around an FRC field, you might be disappointed by the probable size of the distance errors.
Or you might be perfectly happy. Just don’t overlook the extreme precision required for accurate distance measurements across a reasonably wide field of view.
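To make the suggested error analysis concrete, here is a quick sketch using the isosceles-triangle model with made-up numbers. It simply evaluates the distance at the measured angle plus and minus the assumed angular error:

```python
import math

def range_spread(baseline_ft, angle_deg, angle_err_deg):
    """Spread of distance estimates caused by an angular measurement error.

    Uses the isosceles model d = (b/2) * tan(theta) and evaluates it at
    theta +/- the error. All numbers here are illustrative assumptions,
    not measurements from any real camera.
    """
    half_base = baseline_ft / 2.0
    lo = half_base * math.tan(math.radians(angle_deg - angle_err_deg))
    hi = half_base * math.tan(math.radians(angle_deg + angle_err_deg))
    return hi - lo

# A 1-degree error at 85 degrees on a 1 ft baseline: the estimate spans
# roughly 4.8 to 7.2 ft, well over two feet of uncertainty.
```

Because tan(theta) steepens sharply near 90 degrees, a short baseline (like a robot's width) measuring distant objects amplifies small angle errors enormously, which is the point being made above.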
I am perfectly fine with a "that object is close, that object is farther" kind of result. I am not making the next Mars rover or anything :eek: I don't need to be that exact; being off by a foot or two is fine.
Team 1350 used binocular vision in 2006 somewhat successfully. I wasn't there to get the specifics, but it used two CMUcams and trig, and that was with the old IFI RC. Assuming there is a target that can be easily picked up at a low resolution, and the servos controlling the cameras are accurate enough (the closer the object, the less accurate they need to be), it should be *relatively* simple. Streamlining the rest of the code to free up memory should also help.
Hint: research "boids"; it could help with programming how to react to other robots.
It's a secant function, where x is your angle and y is your distance.
You can see that after about 60 degrees it gets very inaccurate.
The IR sensor, on the other hand, follows an exponential decay, which makes for a slightly friendlier curve. (Your error at 75% of maximum distance is only ±25% of the IR sensor's total range.)
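One plausible reading of the secant model above is a camera at a known height measuring the angle x down from vertical, so that the slant distance is y = h·sec(x). A minimal sketch under that assumption, with made-up numbers, also shows why the curve "gets very inaccurate" past about 60 degrees:

```python
import math

def slant_range(height_ft, angle_deg):
    """Slant distance from a camera at a known height to a floor point.

    Assumes the geometry y = h * sec(x), with x measured from straight
    down. The height and angles below are illustrative assumptions.
    """
    return height_ft / math.cos(math.radians(angle_deg))

# Why it blows up past ~60 degrees: compare the range change caused
# by the same 1-degree step at two different angles (3 ft height).
step_at_60 = slant_range(3.0, 61.0) - slant_range(3.0, 60.0)
step_at_80 = slant_range(3.0, 81.0) - slant_range(3.0, 80.0)
# The step near 80 degrees is roughly ten times the step near 60 degrees,
# so the same angular noise produces far larger distance errors.
```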
If this is the only extent you are using this for, I think the simplest method is using an approximating equation based on just one camera.
We've tried this in the past, and our greatest error was about 2 inches (at a distance of 52 feet). Basically, all you have to do is take a bunch of values of the width or length of the blob versus the actual measured distance from the camera to the object. Since sight is logarithmic, take a calculator, plug in the data points, generate a logarithmic regression, and you're good to go.
Edit: Of course, I am speaking in terms of the last two games, for which 2 inches would have been a small enough tolerance.
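The calculator step described above (fit distance = a + b·ln(width) to calibration points) can be done in code just as easily. A sketch with made-up calibration data; the widths, distances, and function names are all hypothetical:

```python
import math

def log_regression(widths_px, distances_ft):
    """Least-squares fit of distance = a + b * ln(width).

    Equivalent to a linear regression on (ln(width), distance), which is
    what a calculator's "logarithmic regression" mode computes.
    """
    xs = [math.log(w) for w in widths_px]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(distances_ft) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, distances_ft)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Made-up calibration points: blob width in pixels vs. measured distance.
widths = [200.0, 100.0, 50.0, 25.0]
dists = [5.0, 10.0, 15.0, 20.0]
a, b = log_regression(widths, dists)

def estimate_distance(width_px):
    return a + b * math.log(width_px)
```

After calibrating once against a tape measure, `estimate_distance` turns any new blob width into a range estimate, which only requires one camera.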
The cRIO is powerful enough so that a few trig functions make absolutely no difference. We used a couple dozen of them for our drive routine with no noticeable slow-down at all. Using them in the vision routine should be no different. The main thing that will be slow is image processing, which is mostly affected by the camera resolution.
In addition, I imagine that the existing trig functions are already highly optimized for real-time applications, as that’s what the cRIO is for. The cRIO has a floating point processor, so that should work pretty well.
Also, in my experience, one camera is good enough for range finding, provided you know the physical dimensions of the target, which has been the case in the last few games.
This conversation is focusing way too much on the cost of the trig functions, compared to the cost of the vision functions. Trig is a lot more expensive than addition and subtraction, but trivial compared to vision.