So I think that is pretty self-explanatory: you know the range and the angle of the IR beam, and you can find the other stuff. It will be used for drawing the image on screen using OpenGL. I did it during English class. :rolleyes: sin^-1 is inverse sine
edit: TYPO… angle two = sin^-1 (Side One / Known Distance)
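Just to make the arithmetic concrete, here is a quick sketch of that formula in C++; sideOne and knownDistance are made-up example values (both in the same units), not numbers from the diagram.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        double sideOne = 24.0;        // side opposite angle two (example value)
        double knownDistance = 48.0;  // the known distance / hypotenuse (example value)

        // angle two = sin^-1 (Side One / Known Distance), converted to degrees
        double angleTwo = std::asin(sideOne / knownDistance) * 180.0 / PI;

        std::printf("angle two = %.1f degrees\n", angleTwo);  // prints 30.0 for these values
        return 0;
    }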
If you know angle two, why not base all of your math off of that?
Also, I would suggest trying out an IR or ultrasonic pinging range finder and seeing what the feedback looks like. I think you will be surprised at how noisy they are when pointed at different surface finishes at different attack angles.
Could you explain the drawing better, i.e., where is the robot, what is its orientation, and what are you trying to find?
My original assumption is that the robot is essentially the blue box in the corner (where the range finder is) and that it is pointed at the wall. I am guessing this is wrong. I think there may be a better way to solve this problem based on how you use your sensors, but I need to know more about what I am looking at.
I was thinking about it in school today, and yeah, my diagram needs more info. My goal in the short term is to be able to render a 3D image on screen with the info I get from the sensor. It sounds complicated, but it really isn't: I can get the x, y, and z coordinates of the point just using trig.
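For what it's worth, here is roughly what that trig looks like in C++, assuming the range finder sits on a pan/tilt mount and reports a distance. The axis convention (x right, y forward, z up) and all of the names are my own assumptions, not the actual setup.

    #include <cmath>
    #include <cstdio>

    struct Point3D { double x, y, z; };

    // Convert one range reading plus the pan/tilt angles into a 3D point.
    Point3D rangeToPoint(double range, double panDeg, double tiltDeg) {
        const double DEG2RAD = 3.14159265358979 / 180.0;
        double pan  = panDeg  * DEG2RAD;
        double tilt = tiltDeg * DEG2RAD;
        Point3D p;
        p.x = range * std::cos(tilt) * std::sin(pan);  // left/right
        p.y = range * std::cos(tilt) * std::cos(pan);  // forward
        p.z = range * std::sin(tilt);                  // up/down
        return p;
    }

    int main() {
        Point3D p = rangeToPoint(60.0, 15.0, 5.0);  // made-up example reading
        std::printf("x=%.1f y=%.1f z=%.1f\n", p.x, p.y, p.z);
        return 0;
    }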
It is pretty complicated; it may seem fairly simple, but in practice it isn't. It is even more difficult since in FIRST we can't use laser range finders, and IR and sonar range finders introduce a lot of noise.
If you are serious about looking into this in more depth, research SLAM (Simultaneous Localization and Mapping). Nearly every university with a robotics lab is doing some level of research on SLAM algorithms, which alone should be enough evidence that this is not a trivial problem.
For noise: running a rolling average (16x or higher oversample) tends to level out most noise issues, although it does slow down response.
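Something like this is one way to do it (a minimal sketch, assuming readings come in one at a time; the 16-sample window matches the 16x oversample above, and the example readings are made up):

    #include <cstdio>

    const int WINDOW = 16;  // 16x oversample

    // Keep the last WINDOW raw readings in a circular buffer and return their mean.
    double filtered(double sample) {
        static double buffer[WINDOW] = {0};
        static int index = 0, count = 0;
        static double sum = 0.0;

        if (count == WINDOW) sum -= buffer[index];  // drop the oldest sample
        else ++count;
        buffer[index] = sample;
        sum += sample;
        index = (index + 1) % WINDOW;
        return sum / count;  // average of the samples seen so far (up to 16)
    }

    int main() {
        double noisy[] = {60.2, 58.9, 61.5, 59.7, 60.1};  // made-up readings
        for (int i = 0; i < 5; ++i)
            std::printf("filtered: %.2f\n", filtered(noisy[i]));
        return 0;
    }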
SLAM is much easier in a FIRST environment, where the initial map is known, the objects' sizes (other than other robots' sizes) are known (and possibly fixed in position), the initial position and orientation (as well as size) of the robot are known, and the driving characteristics (drive train characteristics, turning radius, etc.) of the robot are known. While it is not trivial … it is doable in a FIRST environment.