Mecanum physics

I’ve been toying with the idea of placing six rangefinders on an off-season robot project (using mecanum wheels) for obstacle avoidance: two facing left, two facing right, and two facing backwards, one on each face of each corner except the front. The general idea is that you shouldn’t be able to run into a wall unless you’re deliberately driving head-on into one, which would make the robot friendlier to drive. If this method proves successful, it could significantly change the way we write autonomous code, making it possible to execute more dynamic movements in situations where avoiding robots would otherwise be problematic. At the moment, I’ve written code so that the robot “glances off” walls at a reflective angle when it gets too close, even if the driver is merely coasting into the wall. It isn’t quite working yet, mostly because I don’t have a robot to practice with. I was wondering if you guys could weigh in on this from a programming perspective (one better than my own). Thanks!
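
For reference, the “glance off” math I’ve been sketching looks something like this on a holonomic drive: reflect the commanded drive vector about the wall normal whenever a rangefinder trips. This is just a Java sketch, not tested code - the trigger distance is a guess, and it assumes that knowing which sensor fired tells you which wall normal to use.

```java
// A sketch of "glancing off" a wall with a holonomic (mecanum) drive.
// (nx, ny) is assumed to be the wall's unit normal pointing back toward
// the robot; TRIGGER_FT is an untuned guess.
public class GlanceOff {
    static final double TRIGGER_FT = 2.0; // start deflecting inside this range (assumed)

    /** Reflect v about the unit normal n: v' = v - 2(v.n)n */
    static double[] reflect(double vx, double vy, double nx, double ny) {
        double dot = vx * nx + vy * ny;
        return new double[] { vx - 2 * dot * nx, vy - 2 * dot * ny };
    }

    /** Deflect the driver's (vx, vy) only when near a wall and closing on it. */
    static double[] glance(double vx, double vy, double rangeFt,
                           double nx, double ny) {
        boolean closing = (vx * nx + vy * ny) < 0; // moving against the normal
        if (rangeFt < TRIGGER_FT && closing) {
            return reflect(vx, vy, nx, ny);
        }
        return new double[] { vx, vy };
    }
}
```

The nice part about mecanum is that (vx, vy) can be deflected independently of heading, so the driver keeps full rotation authority the whole time.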

Our mecanum robot has a system like this using 4 Sharp IR rangefinders. I haven’t tried it yet, but I believe it is programmed to slow down when it detects an object, stop once it comes close, and then wait for either the object to move or the robot to be pushed away from the obstacle.
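
Something like this, I’d guess - a minimal sketch of the slow-then-stop rule, assuming the IR sensor reports range in inches (both thresholds are made up):

```java
// Minimal sketch: scale the commanded speed down as the IR range closes,
// and stop entirely inside STOP_IN. Thresholds are assumptions to tune.
public class ProximityGovernor {
    static final double SLOW_IN = 36.0; // start slowing here (assumed)
    static final double STOP_IN = 12.0; // full stop here (assumed)

    /** Scale speed 1.0 -> 0.0 as range closes from SLOW_IN to STOP_IN. */
    static double limit(double commandedSpeed, double rangeInches) {
        if (rangeInches <= STOP_IN) return 0.0;
        if (rangeInches >= SLOW_IN) return commandedSpeed;
        double scale = (rangeInches - STOP_IN) / (SLOW_IN - STOP_IN);
        return commandedSpeed * scale;
    }
}
```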

Doing obstacle detection and avoidance during teleop is difficult, mostly because you now need to understand the intent of the driver. What if I’m actually trying to ram someone else? (not that it would necessarily be legal).

One way to approach the problem gradually would be to start with obstacle detection. I would place an intelligent layer between the driver and the drivetrain (in the code) that figures out whether there is an obstacle, how far away it is, and whether the robot will hit it (if nothing changes state in the meantime). The first two parts aren’t too bad - you simply query the sensors at maybe 20 Hz. (Let’s say the robot moves at 15 ft/s, so that’s just under 1 ft of resolution at a 20 Hz query rate. Might be a bit coarse, but you can play with the read rate later on.)

Once you know how far away the obstacle is, you need to figure out what to do. The military has a system known as the Phalanx CIWS (Close-In Weapon System): it looks at incoming targets and decides whether or not to engage them based on whether they’re going to hit the ship the CIWS is mounted on. Your control layer needs to do something similar - can you stop the robot (or turn) so that you don’t hit the obstacle? I imagine you have encoders on your drivetrain, so you should be able to generate a velocity. If you also know the dynamics of the robot (i.e. how much distance it takes to stop safely, how long it takes to turn around, etc.), you can code a state table that looks at the range of the obstacle, the closing velocity, and your current velocity, and decides whether to turn tail, stop, or keep going (not sure why you’d want to do that last one, but maybe you do!). A sketch of such a decision layer is below.
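
To make it concrete, here’s roughly what that state table could look like in Java. The deceleration figure and safety margin are pure guesses you’d tune on a real robot; stopping distance comes from v²/(2a):

```java
// Sketch of a CIWS-style decision layer: given the obstacle range and the
// closing velocity, decide whether to keep going, turn away, or brake.
// MAX_DECEL_FPS2 and MARGIN_FT are assumed values, not measured ones.
public class AvoidanceLayer {
    enum Action { CONTINUE, TURN, STOP }

    static final double MAX_DECEL_FPS2 = 10.0; // assumed braking capability
    static final double MARGIN_FT = 1.0;       // safety buffer (assumed)

    /** Distance traveled while braking from closingFps to zero: v^2 / (2a). */
    static double stoppingDistanceFt(double closingFps) {
        return (closingFps * closingFps) / (2.0 * MAX_DECEL_FPS2);
    }

    /** Triage: plenty of room, room to deflect, or brake now. */
    static Action decide(double rangeFt, double closingFps) {
        if (closingFps <= 0) return Action.CONTINUE;        // opening or static
        double needed = stoppingDistanceFt(closingFps) + MARGIN_FT;
        if (rangeFt > 2.0 * needed) return Action.CONTINUE; // plenty of room
        if (rangeFt > needed) return Action.TURN;           // room to turn, not to coast
        return Action.STOP;                                 // brake now
    }
}
```

You’d call decide() from the same 20 Hz loop that reads the sensors, feeding it the latest range and the encoder-derived closing velocity.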

Maybe a simpler method: if you sense something less than, oh, say 3 ft away, freeze the drivetrain, or just prevent movement in that direction. The problem with this approach is that the result is nondeterministic (you don’t know exactly where you’ll end up relative to the obstacle), and it behaves the same whether the obstacle is a wall or another robot.
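
For what it’s worth, that simpler rule is also the easiest to sketch - something like this, assuming one sensor per guarded face and the axis conventions noted in the comments:

```java
// Sketch of "prevent movement in that direction": zero out any velocity
// component pointed at a face whose sensor reads under the threshold.
// Axis conventions (+x forward, +y left) are assumptions.
public class DirectionBlocker {
    static final double FREEZE_FT = 3.0; // per the 3 ft suggestion above

    static double[] filter(double vx, double vy,
                           boolean leftClose, boolean rightClose, boolean rearClose) {
        if (leftClose && vy > 0) vy = 0;   // block strafing left into the obstacle
        if (rightClose && vy < 0) vy = 0;  // block strafing right
        if (rearClose && vx < 0) vx = 0;   // block backing up
        return new double[] { vx, vy };
    }
}
```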

This is starting to sound like an actual intelligent system!

I’ll get working on that. Thanks for your reply! One of the ideas I had related to this project was doing vision processing on-board and having the computer persistently “suggest” that the robot drive in a particular direction, while the actual robot “bounces” off obstacles and other robots (without ever actually touching them) while continuing on its path. For us, it would be a leap towards the “fully autonomous teleop” we’ve always dreamed of. Ideas are still welcome, and thanks!
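
One way to sketch that “suggest a direction, bounce off obstacles” behavior is a simple potential field: add a repulsive vector for each nearby obstacle to the suggested drive vector, then cap the speed. Everything below (gains, ranges, the obstacle format) is an illustrative assumption:

```java
// Potential-field sketch: blend the vision computer's suggested drive
// vector with repulsion from each sensed obstacle. All constants are
// untuned guesses.
public class SuggestAndRepel {
    static final double REPEL_GAIN = 4.0;     // tuning guess
    static final double REPEL_RANGE_FT = 4.0; // ignore obstacles beyond this

    // (sx, sy): suggested drive vector from the vision computer.
    // obstacles: rows of {rangeFt, nx, ny}, where (nx, ny) is a unit vector
    // pointing from the obstacle toward the robot.
    static double[] blend(double sx, double sy, double[][] obstacles) {
        double vx = sx, vy = sy;
        for (double[] ob : obstacles) {
            double range = ob[0];
            if (range > 0 && range < REPEL_RANGE_FT) {
                // Classic falloff: strong push up close, zero at the edge.
                double push = REPEL_GAIN * (1.0 / range - 1.0 / REPEL_RANGE_FT);
                vx += push * ob[1];
                vy += push * ob[2];
            }
        }
        // Cap the result so repulsion steers the robot rather than speeding it up.
        double mag = Math.hypot(vx, vy);
        double want = Math.hypot(sx, sy);
        if (mag > want && mag > 1e-6) {
            vx *= want / mag;
            vy *= want / mag;
        }
        return new double[] { vx, vy };
    }
}
```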

We are using the depth camera on the Kinect to detect objects, and from the depth map data we’re able to drive between two cones.
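
For anyone curious, finding the gap between two cones in a single depth-map row could look something like the sketch below. The threshold is an assumption, and a real implementation would average several rows and filter noise:

```java
// Sketch: mark pixels closer than a threshold as blocked (the cones),
// then steer at the center of the widest clear run in the row.
// Depth values are assumed to be in millimeters.
public class GapFinder {
    static final int BLOCK_MM = 1500; // anything nearer counts as an obstacle (a guess)

    /** Center column of the widest clear run, or -1 if none.
     *  A reading of 0 (no data) is conservatively treated as blocked. */
    static int widestGapCenter(int[] depthRow) {
        int bestStart = -1, bestLen = 0, runStart = -1;
        for (int i = 0; i <= depthRow.length; i++) {
            boolean clear = i < depthRow.length && depthRow[i] > BLOCK_MM;
            if (clear && runStart < 0) runStart = i;   // a clear run begins
            if (!clear && runStart >= 0) {             // a clear run ends
                int len = i - runStart;
                if (len > bestLen) { bestLen = len; bestStart = runStart; }
                runStart = -1;
            }
        }
        return bestLen > 0 ? bestStart + bestLen / 2 : -1;
    }
}
```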

2013 was our 4th year using mecanum wheels. We used them in Logo Motion, and the 2 other years I was not a part of the team. This year, we had a PID loop to lock onto the 3-point target based off of vision solutions, and it continuously tracked the target while strafing in any direction. To do obstacle avoidance, it’d basically be the direct opposite. After we find a target, simple (3D, with depth calculated) vector calculus (well, not so simple) can be applied to figure out the velocity (and acceleration) of the object, be it a wall or a robot, relative to our robot (which is of course considered stationary at all times - relativity FTW).
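
The vector math itself reduces to finite differences once you have the object’s 3D position in the robot frame from the depth data - something like this sketch, where the units and the position source are assumptions:

```java
// Sketch: estimate an object's velocity in the robot frame by differencing
// two successive 3D positions (from the depth map) over the sample period.
public class RelativeMotion {
    /** Finite-difference velocity of the object in the robot frame. */
    static double[] velocity(double[] prevPos, double[] currPos, double dtSec) {
        return new double[] {
            (currPos[0] - prevPos[0]) / dtSec,
            (currPos[1] - prevPos[1]) / dtSec,
            (currPos[2] - prevPos[2]) / dtSec
        };
    }

    /** Closing speed: positive when the object is approaching the robot. */
    static double closingSpeed(double[] pos, double[] vel) {
        double range = Math.sqrt(pos[0]*pos[0] + pos[1]*pos[1] + pos[2]*pos[2]);
        // Radial velocity = (pos . vel) / |pos|; closing speed is its negation.
        double radial = (pos[0]*vel[0] + pos[1]*vel[1] + pos[2]*vel[2]) / range;
        return -radial;
    }
}
```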

Using this information, one would be able to avoid objects. The question arises as to how fast such computations could be made, and also how effective they would be in a dynamic environment. For instance, say your robot couldn’t fit under the pyramid, and this program tells the driver to drive under it - what then?

Our team has thought about having preprogrammed jukes for our higher-speed drivetrains. I couldn’t count how many head-on collisions there were between cycler bots this year, be it cycler-to-cycler or cycler-to-defensive-bot. So, say the left trigger is a “spin move” left, and the right trigger vice versa. I believe that would prove to be much more effective. Simplicity is best, as nathan_hui hinted at.
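
A preprogrammed juke could be as simple as a timed, open-loop macro - something like the sketch below, where all the timings and speeds are made up and the actual drive output is left to your own drive code:

```java
// Sketch of a trigger-activated "spin move" left: a short, timed, open-loop
// strafe-and-rotate macro. All timings, speeds, and sign conventions
// (+x forward, +y left) are assumptions to tune on a real robot.
public class JukeMove {
    private long startMs = -1;

    void start() { startMs = System.currentTimeMillis(); }

    /** Returns {vx, vy, omega} for this instant, or null when the juke is done. */
    double[] update() {
        if (startMs < 0) return null;
        long t = System.currentTimeMillis() - startMs;
        if (t < 300) return new double[] { 0.5,  0.8,  0.0 }; // dart left
        if (t < 600) return new double[] { 0.8,  0.0,  1.0 }; // spin while driving through
        if (t < 800) return new double[] { 0.8, -0.4, -1.0 }; // unwind back on course
        startMs = -1;
        return null; // hand control back to the driver
    }
}
```

The right-trigger version would just mirror the vy and omega signs.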

I love seeing teams at least thinking about things like this. I feel that I have learned so much more by developing these complex systems, even if they aren’t used. It’s just icing on top of everything FIRST provides.

yours truly,
someone who overthinks everything.