Ok all -

Just out of curiosity - is it mathematically feasible to create a shooter that can calculate the distance/angle/etc. to the target and have it accurately make shots? Can it be done, or is it simply a pipe dream?

Thanks.

It’s possible. 100% possible.

I was thinking something along these lines, but the camera would only need to calculate distance. Then adjust a shooter to hit the selected basket at that distance. I have no idea how to do it - that's the programmers' problem.

It’s entirely feasible. I spent a good part of today discussing with another programmer the best way to implement it - it’s just some simple trigonometry in solving the triangle made from your shooter, the alliance wall, and the hoop.

Does anybody know if the Kinect camera can be used to locate the reflective tape on the baskets?

Mathematically, optimistically, and practically feasible. As long as the sensor system and the filtering done on the sensors are reliable (a slight challenge, as an image-analyzer subroutine might take a while to fine-tune), and the shooter mechanism has a low standard deviation in its behavior (read: reliable shot-taking), it's a piece of cake to code a system that does a bit of math and calculates the shot trajectory.

A white paper explaining some approaches to processing images is, or soon should be, posted on the NI site.

There are a number of techniques for calculating distance, but the straightforward one is to use the known width of the target and the information about the camera lens optics. You can find the optics view angle on the Axis web site.

Basically, if a target width is known, camera resolution is known, and lens info is known, you can calculate distance pretty accurately to a target located at any of the heights.

Greg McKaskle
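To make the known-width approach concrete, here is a minimal sketch of the pinhole-camera math. The field of view, resolution, and target width below are assumed numbers for illustration - check your actual camera's documentation and the game manual for the real values.

```python
import math

# Assumed constants for illustration -- verify against your camera's specs.
HORIZONTAL_FOV_DEG = 47.0     # assumed horizontal field of view of the lens
IMAGE_WIDTH_PX = 640          # horizontal resolution of the captured image
TARGET_WIDTH_FT = 2.0         # assumed real-world width of the vision target

def focal_length_px(image_width_px, fov_deg):
    """Effective focal length in pixels for a pinhole-camera model."""
    return (image_width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def distance_to_target(target_width_px):
    """Estimate range from how wide the target appears in the image.

    Distance is inversely proportional to apparent width: a target that
    looks half as wide is twice as far away.
    """
    f = focal_length_px(IMAGE_WIDTH_PX, HORIZONTAL_FOV_DEG)
    return TARGET_WIDTH_FT * f / target_width_px
```

With these assumed numbers, a target that spans 100 pixels works out to roughly 15 feet away; the exact constant depends entirely on the lens and resolution you plug in.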

Check the other forums about using the Kinect camera on the bot. Not wise.

Axis works fantastic. Perhaps FIRST will be nice enough to include a sonar sensor in the kit of parts you could use to…

There should be a sonar sensor in the kit. I remember writing a new example for it.

I currently have a SubVI in LabVIEW that calculates the distance to an object based on its width and the angles at which you are viewing its edges.

However, I don’t currently understand the stuff about the camera optics.

To get the angle that the camera is viewing the object at, is it an ArcCos function?

(pretty sure ArcCos is the simple name for the inverse function of cosine)

The calculations, once derived (that’s the hard part), aren’t too difficult for a computer. I can’t speak about sensing and programming, but having a reliable shooter is just as important, especially as you move away from the hoops.

Of course, the combination of correct kinematics, reliable sensors, and consistent shooter makes for a fairly difficult challenge. Lacking any part of that trio means you’ll miss. A lot.

If you want to post more information about your approach, I or another mentor should be able to help. Yes, ArcCosine is the Inverse Cosine, but that isn’t enough information to say it is the correct function.

Greg McKaskle

My approach to the triangulation is like you said. I declare the camera to be at (0,0), and I input the object’s width, and the angles that the camera is viewing the right and left edges of the object at.

I can then calculate the coordinates of the object, and from that find the distance and angle required to hit it.

However, I don’t know how to find the angle that the camera is viewing the object at.

I would imagine that the simplest way would be to take the object's position on the screen (relative to the center), multiply it by some constant, and then put that number through an ArcSin function.

I think this would give the object's angle relative to the center of the field of view.

Basically:

ArcSin((Position on Screen - Center of Screen) / Screen Width) = Angle Object is viewed at.

If the camera was a typical FRC Camera found in the kit of parts, would this be correct?
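One caveat worth noting: for a standard (pinhole-model) lens, horizontal position on the sensor is proportional to the *tangent* of the angle, not the sine, so ArcTan is the function you want. A sketch of the full triangulation under that assumption - the field of view and resolution below are made-up example values, not specs for any particular FRC camera:

```python
import math

IMAGE_WIDTH_PX = 640          # assumed image resolution
HORIZONTAL_FOV_DEG = 47.0     # assumed lens field of view
FOCAL_PX = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(HORIZONTAL_FOV_DEG) / 2.0)

def pixel_to_angle(x_px):
    """Angle (radians) off the camera's optical axis for a pixel column.

    Note the tangent, not sine: the image plane is flat, so horizontal
    position on the sensor is proportional to tan(angle).
    """
    return math.atan((x_px - IMAGE_WIDTH_PX / 2.0) / FOCAL_PX)

def locate_target(left_edge_px, right_edge_px, target_width_ft):
    """Place the target in camera coordinates from the angles to its edges."""
    theta_l = pixel_to_angle(left_edge_px)
    theta_r = pixel_to_angle(right_edge_px)
    angular_width = theta_r - theta_l
    # Solve for the distance at which the target's real width subtends
    # that angle (exact for a target centered on-axis, a good
    # approximation otherwise).
    distance = (target_width_ft / 2.0) / math.tan(angular_width / 2.0)
    bearing = (theta_l + theta_r) / 2.0   # angle to the target's center
    x = distance * math.sin(bearing)      # sideways offset from camera axis
    y = distance * math.cos(bearing)      # range along camera axis
    return x, y, distance, math.degrees(bearing)
```

For a target centered in the image, the bearing comes out zero and the distance matches the known-width estimate described earlier in the thread.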

I’m not following the approach you’ve laid out. Are you trying to determine the distance to the target, the height of the target, or the location of the robot on the field?

To review, sin(theta) is equal to (side opposite theta)/hypotenuse. This is of course only true for right triangles.

Can you draw a sketch showing the triangle and labeling what the points of the triangle correspond to?

Greg McKaskle
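To put the right-triangle review in concrete terms - the 8 ft rise and 15 ft range below are made-up numbers, not field dimensions:

```python
import math

# Hypothetical right triangle: hoop 8 ft above the camera, robot 15 ft back.
opposite = 8.0     # ft, vertical rise from camera to hoop
adjacent = 15.0    # ft, horizontal range along the floor
hypotenuse = math.hypot(opposite, adjacent)   # 17 ft for this 8-15-17 triangle

# sin(theta) = opposite / hypotenuse, so theta = asin(opposite / hypotenuse)
theta = math.asin(opposite / hypotenuse)

# The same elevation angle falls out of atan2(opposite, adjacent),
# which avoids computing the hypotenuse at all.
assert abs(theta - math.atan2(opposite, adjacent)) < 1e-12
```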

I’m still new to the interface on chief delphi, so let me know if this didn’t work.

I put some pictures up on my profile that describe what I am thinking.

Basically, I need to find Theta 1 and Theta 2 on the triangulation diagram somehow.

I would like to use some sort of vision.

However, I’m not sure what math is required to convert an image from a camera into the angle that the camera is viewing something at.

If you could explain that to me, or direct me to a site that does, it would be greatly appreciated.

You may find it useful to look through the white paper on the NI site. It is also posted in the media papers section. There is also some discussion of this on the Java forum.

Greg McKaskle

Ok, thanks for the recommendations.

I think LabVIEW actually has pre-made functions for this, so my team is most likely going to use those.