It's been a long time since I've posted. Nothing has been accomplished toward my fully autonomous robot idea, just a whole bunch of notes and drawings in a notebook. My ideas involve heavy use of trig functions (I took trig over the summer, and it inspired me). Now, trig functions do take a lot of power, considering I might be using at least a dozen of them every cycle while processing images (mass blob detection), handling locomotion, and all that good stuff. Since there is no direct way to monitor memory and processor usage on the cRIO (or am I just unaware?), how can I find out how much it is using? Am I good to go with the Math functions, or do I have to make a "trig chart" or have an onboard processor helping the cRIO?

If you want to know what I am doing:

I am thinking of using two 60 fps cameras as a stereo vision system, and it's not very complex on paper either. I think it is the perfect system to use.

Finding the distance of an object purely from cameras is something you will get very easily if you think about it. I would need diagrams to explain it, and it might take a long time to actually explain, but in one phrase: an isosceles triangle with a known base.

If you are using too much trig processing, then you could always use a lookup table.
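For instance, a minimal sine lookup table with linear interpolation might look like this (the table size and method names here are just my own illustration, not a specific library):

```java
public class SinTable {
    static final int SIZE = 256;                      // table resolution (arbitrary choice)
    static final double STEP = (Math.PI / 2) / (SIZE - 1);
    static final double[] TABLE = new double[SIZE];

    static {
        // Precompute sin(x) for x in [0, pi/2] once at startup.
        for (int i = 0; i < SIZE; i++) {
            TABLE[i] = Math.sin(i * STEP);
        }
    }

    // Look up sin(x) for x in [0, pi/2], linearly interpolating
    // between the two nearest table entries.
    static double sinLookup(double x) {
        double pos = x / STEP;
        int i = (int) pos;
        if (i >= SIZE - 1) return TABLE[SIZE - 1];
        double frac = pos - i;
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]);
    }
}
```

With 256 entries over the quarter wave, the interpolation error is far below what any drive or vision loop would notice.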

Or you could use a polynomial approximation.

You can get a very good approximation to the sine curve from zero to pi/2 (which is all you need to construct the entire curve) with only three multiplies and three adds.
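As a sketch of that idea: a cubic in Horner form costs exactly three multiplies and three adds. The coefficients below are my own, fit by interpolating sin(x) at 0, pi/6, pi/3, and pi/2 (a proper minimax fit would do a bit better); they are good to about 0.002 over the interval:

```java
public class FastSin {
    // My own coefficients, from interpolating sin(x) at
    // x = 0, pi/6, pi/3, pi/2 (not from the original post).
    static final double A = -0.113871;
    static final double B = -0.065472;
    static final double C = 1.020395;
    static final double D = 0.0;

    // Approximate sin(x) for x in [0, pi/2].
    // Horner form: three multiplies, three adds.
    static double fastSin(double x) {
        return ((A * x + B) * x + C) * x + D;
    }
}
```

Symmetry (sin(pi − x) = sin(x), sin(−x) = −sin(x)) extends this to the full circle.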

I actually think that doing the trig functions would be faster than doing a lookup table. How long does it take to graph a sine wave on an iBook G4?
Now, what will take a while is the processing of images, ESPECIALLY with disparity maps. You'll probably require an FPGA to get the performance you're looking for. Too bad sbRIOs are available in OEM quantities only.

The Taylor approximation stated above is both extremely accurate and extremely fast. If it is not accurate enough, just add another term. However, I am pretty sure this is how it is done behind the scenes, so it doesn't really make that much of a difference. This year, my team did crab drive with a gyro, and I used a ton of trig functions every cycle. The image processing is much, much more processing-intensive.

Before you invest too much in this project, assume a reasonable error in the angles you are able to measure after you try to get both cameras to lock onto a single small point on the object. Additionally, assume the object is not perfectly "broadside" to your two cameras. Then compute the range of distance errors that will arise as a consequence of those angular errors.

If you are thinking of using an FRC bot for the base of your triangle, and you are trying to get the distances to objects that are scattered around an FRC field, you might be disappointed by the probable size of the distance errors.

Or you might be perfectly happy. Just don't overlook the extreme precision required for accurate distance measurements across a reasonably wide field of view.
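To put a rough number on that, here is a sketch under assumed values (a 24-inch camera baseline and a ±0.5° angle error are my own picks). For a target straight ahead, each camera sees it at angle a from the baseline, so d = (b/2)·tan(a), and a small angle error shifts the estimate by roughly (b/2)·sec²(a)·err, which blows up as a approaches 90°:

```java
public class StereoError {
    // Distance error (inches) for a target straight ahead at true distance d,
    // given camera baseline b (inches) and angular error err (radians).
    static double distanceError(double b, double d, double err) {
        double a = Math.atan2(d, b / 2);          // true angle at each camera
        return (b / 2) * Math.tan(a + err) - d;   // estimate minus truth
    }

    public static void main(String[] args) {
        double baseline = 24.0;                   // assumed 24 in camera spacing
        double err = Math.toRadians(0.5);         // assumed +/-0.5 deg angle error
        System.out.println("distance(in)  error(in)");
        for (double d = 60; d <= 600; d += 60) {  // 5 ft out to 50 ft
            System.out.printf("%8.0f  %9.1f%n", d, distanceError(baseline, d, err));
        }
    }
}
```

With those assumptions the error is a couple of inches at 5 feet but grows to hundreds of inches by 50 feet, which is the "extreme precision" point above.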

I am perfectly fine with a "that object is close" / "that object is farther" type of thing; I am not making the next Mars rover or anything :eek: I don't need to be that exact; being off by a foot or 2 is fine.

I have no idea what that means, but… it's like the IR range finders: the voltage output is a curve, gradually evening out and becoming very inaccurate. Is this an example of your explanation?

Team 1350 used binocular vision in 2006 somewhat successfully. I wasn't there to get the specifics, but it used 2 CMUcams and trig, and that was with the old IFI RC. Assuming there is a target that can be easily picked up at a low resolution, and the servos controlling the cameras are accurate enough (the closer the object, the less accurate they need to be), it should be *relatively* simple. Streamlining the rest of the code to free up memory should also help.

Hint: research "boids"; it could help with programming how to react to other robots.

It's a secant function, where x is your angle and y is your distance.
You can see that after about 60 degrees it gets very inaccurate.
The IR sensor, on the other hand, is exponential decay, which makes a slightly friendlier curve. (Your error at 75% of maximum distance is only ±25% of the total range of the IR sensor.)

If this is the extent of what you are using it for, I think the simplest method is an approximating equation based on just one camera.

We've tried this in the past and our greatest error was about 2 inches (at a distance of 52 feet). Basically, all you have to do is take a bunch of values of the width or length of the blob versus the actual measured distance from the camera to the object. Since sight is logarithmic, take a calculator, plug in the data points, generate a logarithmic regression, and you're good to go.

Edit: Of course, I am speaking in terms of the last 2 games, for which 2 inches would have been a small enough tolerance.
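A sketch of that calibration step, with made-up data points (real numbers would come from measuring your own camera; note that a simple pinhole model actually predicts distance roughly proportional to 1/width, but a log fit can also work over a limited range). This is an ordinary least-squares fit of d = a·ln(w) + b:

```java
public class RangeCalibration {
    // Least-squares fit of distance = a*ln(width) + b. Returns {a, b}.
    static double[] fitLog(double[] width, double[] dist) {
        int n = width.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double x = Math.log(width[i]);        // regress against ln(width)
            sx += x; sy += dist[i]; sxx += x * x; sxy += x * dist[i];
        }
        double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double b = (sy - a * sx) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Made-up calibration data: blob width in pixels vs measured distance in feet.
        double[] width = { 160, 80, 40, 20, 10 };
        double[] dist  = {   5, 10, 20, 40, 80 };
        double[] fit = fitLog(width, dist);
        System.out.printf("distance ~= %.2f * ln(width) + %.2f%n", fit[0], fit[1]);
    }
}
```

At run time you just plug the measured blob width into the fitted equation to get a distance estimate.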

If memory serves, you guys also used a Gumstix as a co-processor to handle the stereo vision. I don't think the old controller could handle two CMUcams very well.

The cRIO is powerful enough so that a few trig functions make absolutely no difference. We used a couple dozen of them for our drive routine with no noticeable slow-down at all. Using them in the vision routine should be no different. The main thing that will be slow is image processing, which is mostly affected by the camera resolution.

In addition, I imagine that the existing trig functions are already highly optimized for real-time applications, as that's what the cRIO is for. The cRIO has a floating-point processor, so that should work pretty well.

Also, in my experience, one camera is good enough for range finding, provided you know the physical dimensions of the target, which has been the case in the last few games.
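A sketch of that single-camera approach, under the usual pinhole model (the focal length and target width below are placeholders; you would measure both for your own camera and target): the target's apparent width in pixels shrinks in proportion to its distance, so distance = realWidth × focalLengthInPixels / pixelWidth.

```java
public class SingleCamRange {
    // Pinhole-model range estimate.
    // realWidth:  known physical width of the target (e.g., inches)
    // focalPx:    camera focal length expressed in pixels (from calibration)
    // pixelWidth: measured width of the target blob in the image
    static double range(double realWidth, double focalPx, double pixelWidth) {
        return realWidth * focalPx / pixelWidth;
    }

    public static void main(String[] args) {
        // Placeholder numbers: a 24 in wide target, 400 px focal length,
        // blob measured at 50 px wide -> 192 in (16 ft).
        System.out.printf("range = %.1f in%n", range(24, 400, 50));
    }
}
```

The focal length in pixels comes from a one-time calibration: put the target at a known distance, measure its pixel width, and solve the same equation for focalPx.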

This conversation is focusing way too much on the cost of the trig functions, compared to the cost of the vision functions. Trig is a lot more expensive than addition and subtraction, but trivial compared to vision.

In summary:
Yes, the cRIO is powerful enough for trig functions.
No, the cRIO is not powerful enough for stereo image processing at 60 Hz without reprogramming the FPGA.