Scoring Corner Goals During Autonomous

My team is not building a robot that can shoot into the center goal. Instead, we will simply gather balls and dump them into the corner goals. During autonomous mode, we want the robot to drive up to a corner goal and dump all 10 of its starting balls into it.

The problem is that there are no vision lights on the corner goals: there is certainly no green light above them, and unless I am very much mistaken there is no infrared beacon near them either. So, can anyone think of a way to make an autonomous mode that drives the robot up to a corner goal? I have heard somewhere that the infrared sensors can be used to detect obstacles, and thus to detect a hole in a wall. Is this true? Is it possible? If so, how difficult would it be to implement?

Also, it has been suggested that I could use the CMUcam to look for shapes, so that I could find the red or blue rectangle surrounding a corner goal. I am completely new to using sensors: in previous years, our robot had no autonomous mode. I don’t believe the CMUcam idea will work, because there are lots of red and blue rectangles on the field, such as the starting positions. However, I would very much appreciate someone else’s opinion on this, and any ideas on how to implement the autonomous mode I described.

How about, at the start of a match, placing your robot so it is aimed straight at the corner goal nearest your starting position? You could program it to follow the wall, either mechanically (a horizontal wheel running along the wall, with the robot drifting slightly into it) or electronically (using sensors). Dead reckoning has been used a lot in the past and may work really well for reaching the corner goals. It would also be quicker and easier to program than the CMUcam.

Good luck!

A good dead-reckoning program driven by time could do what you want, if you just want to drive straight to the goal in line with your robot’s starting position. Alternatively, using wheel encoders and a gyro, you could get your robot to wherever you want it to go.

Range sensors do exist, so you could measure the distance from the wall and have the robot stop at the same distance each time. If you have a RadioShack nearby, they sell VEX Ultrasonic Range Sensors, which do exactly this. There are also IR range finders.

Where did you hear this? The CMUCam is not able to distinguish shapes, only specific colors.

Wow, thanks for the amazingly quick replies. It’s nice to know that for once you’re in a forum that’s actually helpful. I am a newbie to programming with sensors. Could someone explain to me what dead reckoning is and what would be involved in writing such a program? And about the CMUcam - what exactly does it do? Does it return a matrix of pixels, or what? Yes, I am planning to position the robot to drive straight to a corner goal at the beginning of the match. It has also been suggested that I might know a corner goal is there by detecting the robot going up a ramp - the corner goal is up a slight ramp. Could someone please give me an opinion on this idea?

Dead reckoning usually just means telling your robot to do something for a specific time, without taking any information from the outside world. It would involve setting up a timer, or using the 26 ms loop and having your robot do something for a set number of loops: drive for 2 seconds, then turn for 1 second, something like that. It is not the most reliable way to do autonomous, but it will work.
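As a rough sketch of that idea (assuming the standard ~26 ms slow loop; the helper name here is made up, not part of the default code), converting a desired duration into loop counts looks like this:

```c
/* Sketch: timing autonomous actions by counting the RC's ~26 ms slow
   loops instead of using a hardware timer. loops_for_ms() is a
   hypothetical helper, not part of the default code. */

#define MS_PER_LOOP 26

/* Number of slow loops that cover a desired duration, rounded up. */
static int loops_for_ms(int ms)
{
    return (ms + MS_PER_LOOP - 1) / MS_PER_LOOP;
}
```

Driving for 2 seconds then becomes “do the drive action while your loop counter is below loops_for_ms(2000)”.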

One option is to simply tell your robot to drive forward for X seconds (time your robot doing the run to see what to set X to), then release all your balls. That way, if you start it facing the near corner goal, it will drive straight up to it and drop all the balls, just like you want it to do. It will take a little experimentation to get this working just right, but it is an easy way to get an autonomous mode working, and if you have trouble, there are plenty of smart people on these forums who can help!

You program in a specific color, and it finds the biggest area(s) of that color in the image. It then finds the middle of that object and sends its coordinates to the RC.

Let’s handle these one at a time:

Dead reckoning is the simplest form of autonomous movement: it doesn’t require feedback; you just do things based on timing loops. All you do is tell your bot things like “turn left for 0.1 seconds, then drive straight forward for 4 seconds, then activate your ball dumper”. It’s very simple to do (I can give you some sample code, as our team’s bot usually has some sort of dead-reckoning code written in case we have problems with the feedback sensors). The obvious problem is that many things can go wrong (wheel slip and other robots getting in the way being the obvious ones), and precise alignment isn’t easy.
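A minimal sketch of that “turn, drive, dump” sequence, assuming the code runs once per ~26 ms loop (all the names and loop counts here are made up for illustration and would need tuning):

```c
/* Dead-reckoning schedule: which action to take on each ~26 ms loop.
   4 loops ~ 0.1 s of turning, 154 loops ~ 4 s of driving,
   40 loops ~ 1 s of running the dumper. */

typedef enum { TURN_LEFT, DRIVE_FORWARD, RUN_DUMPER, DONE } action_t;

static action_t action_for_loop(int loop)
{
    if (loop < 4)            return TURN_LEFT;      /* ~0.1 s */
    if (loop < 4 + 154)      return DRIVE_FORWARD;  /* ~4 s   */
    if (loop < 4 + 154 + 40) return RUN_DUMPER;     /* ~1 s   */
    return DONE;
}
```

Each pass through the loop, you call this with your loop counter and set the drive and dumper outputs based on the action it returns.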

And about the CMUcam - what exactly does it do? Does it return a matrix of pixels, or what?

There are some very complete documents on what the CMUcam2 does, but basically it checks whether it’s looking at the target, and if it is, it returns the size of the target and its centroid.

Yes, I am planning to position the robot to drive straight to a corner goal at the beginning of the match. It has also been suggested that I might know a corner goal is there by detecting the robot going up a ramp - the corner goal is up a slight ramp. Could someone please give me an opinion on this idea?

As long as you start your bot in the position nearest the ramp, this should be doable, either with dead reckoning or with simple feedback (i.e. “drive toward the ramp until you detect it - with a tilt meter or contact switch, for example - then dump all the balls”).

The best way to develop stuff like this is incrementally. Do something like you propose above, and then come up with improvements and alternatives. It’s nice to have a “library” of autonomous functions your bot can do, so you can select one before the match (including the most impressive “sit there and look pretty” autonomous mode, 'cause sometimes you’ll be in a match where that’s the best strategy, since you might be teamed with other bots whose autonomous modes and capabilities conflict with yours).

And remember, it’s often better to do one thing really well, than do a bunch of things poorly.

Good luck, and ask lots of questions. The folks here are very helpful.

Range sensors do exist, so you could measure the distance from the wall and have the robot stop at the same distance each time. If you have a RadioShack nearby, they sell VEX Ultrasonic Range Sensors, which do exactly this. There are also IR range finders.

Could you please tell me more about this? What exactly will VEX Ultrasonic Range Sensors do? How will they tell me where a hole in the wall is located? That is, what values do these sensors typically return? And does FIRST definitely allow these and other off-kit electronics? Also, what are infrared range finders? What kind of values will these return? I really know nothing about sensors. Lastly, what price ranges are all of these components in? Are we looking at something extremely pricey?

Ultrasonic sensors emit high-frequency sound waves and bounce them off of things - think sonar. If you get an echo back, the return time tells you how far away something is. In this case, you could use one to end up at the ramp: drive forward until the value matches whatever the sensor returns at the bottom of the ramp.
FIRST most definitely allows non-kit, Commercial Off-The-Shelf (COTS) sensors. You have a limit of $200 USD per electronic item.
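To make the echo-time idea concrete, here is a sketch of the conversion (the function name is made up; real sensors like the VEX unit typically report the range for you):

```c
/* Sketch: turning an ultrasonic echo time into a range. Sound travels
   roughly 343 mm per millisecond at room temperature, and the pulse
   travels out and back, so the one-way distance is half the total. */

static float range_mm(float echo_time_ms)
{
    const float speed_of_sound_mm_per_ms = 343.0f;
    return echo_time_ms * speed_of_sound_mm_per_ms / 2.0f;
}
```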

Since you say you are completely new to sensor systems, let’s start off with the ultimate in simplicity and go from there. An extremely simple (and it turns out, very effective and incredibly inexpensive) solution works like this:

step 1: Attach two simple bump sensors to the front corners of your robot. These “bump sensors” can be simple microswitches that are normally open (i.e. when connected to the digital I/O ports on the RC, they provide a “0” indication) and close when they make contact with some object (thereby changing the state of the digital input to “1”).

step 2: Position the robot in the starting square closest to and facing the corner goal (i.e. as you stand at the center of the long edge of the field, facing the field, put your robot in the square to your right, and have it face the corner goal to your right). Then have the robot execute the following instructions:

step 3: Turn to the right until the bump sensor on the right front corner of the robot hits the edge of the field. This aligns the robot drive direction with the edge of the field, heading toward the corner goal.

step 4: Drive forward until BOTH sensors detect contact. This will indicate you have hit the player station wall. Stop driving.

step 5: Spew out the balls into the corner goal as fast as you can.

step 6: All done.

Once you get this simple sensor implementation nailed down and working, then you can start to get a little fancier. Ultrasonic sensors will allow you to do fundamentally the same thing that the bump sensors do. The only difference is that instead of just telling you when you have actually made contact with something by bumping into it, the ultrasonic sensors can tell you the approximate range to an object as you begin to get close to it. So you can slow down and smoothly stop your robot just before it reaches the end wall, instead of running into it at full speed and only then figuring out “ohh! there is a wall there!” And yes, using the ultrasonic sensors from a VEX kit is permissible under the current rules.
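Steps 3 through 5 above boil down to a tiny state machine. As a sketch (assuming the switches read 1 when pressed, as described in step 1; all names here are made up):

```c
/* Bump-sensor autonomous as a state machine. Each loop, feed in the
   current switch readings (1 = pressed) and get the next state. */

typedef enum { ALIGN_RIGHT, DRIVE_TO_WALL, DUMP_BALLS } state_t;

static state_t next_state(state_t s, int right_bump, int left_bump)
{
    switch (s) {
    case ALIGN_RIGHT:    /* step 3: turn right until the right switch closes */
        return right_bump ? DRIVE_TO_WALL : ALIGN_RIGHT;
    case DRIVE_TO_WALL:  /* step 4: drive until BOTH switches close */
        return (right_bump && left_bump) ? DUMP_BALLS : DRIVE_TO_WALL;
    default:             /* step 5: keep dumping */
        return DUMP_BALLS;
    }
}
```

The main loop would then set the drive and dumper outputs based on the current state.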

Thanks for all the ideas, people. There’s definitely enough here to get me started. Though about that bump-sensor idea - it’s logical, but if I have bumpers on my robot and drive at full power until I run into a wall, don’t you think the robot will bounce off the wall and back down the ramp? If it accelerates halfway across the field, the impact will be very strong, almost dangerous. Could someone give me an opinion on this, please?

Since you know the approximate distance to the wall, you can use a sort of dead reckoning and tell the robot to slow to half or quarter speed at a certain time during its autonomous journey.
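A sketch of that two-stage speed profile (loop counts assume ~26 ms per loop; the IFI PWM convention of 127 = neutral and 254 = full forward is assumed, and all the numbers are placeholders to tune on your robot):

```c
/* Timed speed profile: full power, then roughly half power, then stop.
   77 loops ~ 2 s and 115 loops ~ 3 s at ~26 ms per loop. */

static unsigned char drive_pwm_for_loop(int loop)
{
    if (loop < 77)  return 254;  /* full forward for ~2 s     */
    if (loop < 115) return 190;  /* about half power for ~1 s */
    return 127;                  /* neutral: coast to a stop  */
}
```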

Not to pop the simple bubble, but I have another idea. AFTER you get something simple working, try the camera. The light may not be over a corner goal, but it’s in the same spot all the time, and your camera will be in a fixed spot on your robot. It’s actually fairly simple to lock onto the light and use it to figure out where you are based on the servo values. PM me if you’d like more help; I’ve already written a simple autonomous mode that goes to field waypoints based on the light and camera. You could easily use it to drive up to the corner goal, know for sure you’re there, and dump. A properly mounted camera deep inside the robot will probably also be less likely to be damaged. And it’s honestly not much more complex: Kevin’s camera code does a fantastic job of making the camera track; you just need to take the information from that and use it to drive. It can be very simple; a proportional control will probably do.

#define Kgain 1     // Adjust by trial and error for best results; varies by robot
#define TARGET 160  // Pan servo position when you are facing the goal; also trial and error

signed int pan_error;
signed int steer;

pan_error = PAN_SERVO - TARGET;  // Negative = goal is too far left, positive = too far right

steer = 127 + pan_error * Kgain; // 127 is neutral on the IFI PWM outputs
if (steer < 0)   steer = 0;      // Clamp to the legal 0-254 PWM range
if (steer > 254) steer = 254;
pwm_XX = (unsigned char)steer;

That’s heavy pseudocode, but it illustrates the procedure. Kgain is adjustable for responsiveness, and the target position is where the camera points when you’re facing the goal from, say, the start position. Then you could drive forward until the pan matches a second target for being right in front of the goal - I’m not sure the camera can pan that far, but you can at least use it for initial positioning before just driving up and unloading. Safer and more accurate than bump sensors, and more complex, but not terribly so. I’ll help you as best I can if you’d like; just drop me a PM.

Our team’s autonomous also involves just driving toward the center goal, but we used a slightly more complex method of driving to it:

We align the robot straight at the goal, then tell the camera to track about where the green light should be. Once it locks on, the robot drives forward until the light goes out of range. Once the light is out of range, I will test how far the robot still has to drive to reach the goal and tell it to cover that distance using dead reckoning.

Or you could just put encoders on the wheels and do it that way.

Oh yes - and if I use dead reckoning and there are lots of balls in the robot’s way, will that be significant enough to mess up the robot?

I honestly don’t believe that will work. You’ve got 10 seconds. Do you really think you can align the robot straight at the goal, drive away, come back, and then shoot and score some points, all within 10 seconds? Why not use the camera and some basic trigonometry to drive toward the light right away and stop at some distance X from the goal? The camera can tell you what angle it’s looking at the light from (derived from the servo values). Then, using math, you can find the distance. For example, if the vertical distance between where your camera is mounted and the light is Y, and the angle you’re observing it from (derived from the servo values) is A, then:
tan A = Y/X, so X = Y/tan A (X is the distance you need).