View Full Version : Shooter Aiming Methods
I would like to know if anyone is thinking about making a turret or having a variable angle shooter, like last year.
What would be your method of aiming at the goals?
Also, will just lowering the speed of the shooter wheel, decrease the range or trajectory of the frisbee? Has anyone done any testing with this?
nathan_hui
09-01-2013, 04:35
Decreasing the speed of the shooter will decrease the range of the frisbee. As to what it will do to accuracy, that will require testing of a different sort. We initially used a table and a pool noodle to test shooting; different amounts of acceleration produce different ranges.
Aiming at the goals will most likely be done via manual aim (the targets are large enough). Auto aim is doable, but may not be of any advantage (you're driving towards the darned thing, might as well take the time to slew the turret). Not sure about range, but then we were thinking of raising the shooter, so there's that too.
ttldomination
09-01-2013, 08:06
I would like to know if anyone is thinking about making a turret or having a variable angle shooter, like last year.
What would be your method of aiming at the goals?
Also, will just lowering the speed of the shooter wheel, decrease the range or trajectory of the frisbee? Has anyone done any testing with this?
(1) A turret is feasible, but I don't think it's necessary. We're still grappling with the adjustable angle. We're trying to find the sweet angle.
(2) We've decided that if we can do a full-ish auto aim that the driver is comfortable with, then we'll go there. In the meantime, we'll just be looking for that sweet spot for manual aim.
(3) We have done some preliminary testing, and what you said seems to hold; we'll do more testing in round two to confirm it.
- Sunny G.
gabrielau23
09-01-2013, 18:10
Two words:
Photon Cannon
After dedicating a week of design time to a turret last year and hardly using it, we immediately crossed that idea out. We're sticking to the principles of K.I.S.S. (Keep It Simple, Stupid).
mdrouillard
09-01-2013, 19:08
In auto, we plan to aim the whole robot.
F22Rapture
09-01-2013, 22:38
Auto aim is doable, but may not be of any advantage (you're driving towards the darned thing, might as well take the time to slew the turret).
Another way to think about it is, "you're writing the autoaim for autonomous anyway, might as well use it for teleop as well"
The people that are auto aiming:
Do you guys offload vision processing onto something else?
z_beeblebrox
09-01-2013, 22:55
I'm thinking of putting vision processing into the operator's brain and combining the best of automatic and manual aiming. Instead of having the computer struggle to identify a target 50' away with different lighting, I want to have the operator look at the camera feed and click the center of the goal. Then, the computer uses that to figure out how much the robot needs to turn and how high to aim the shooter to hit the goal. The output from this will be fed to PID controllers for the robot and shooter angles. When the robot has slewed to position and the shooter has spun up to speed, the operator fires a disc and makes corrections if it misses. Then, the operator rapidly fires their remaining 3 discs.
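A minimal sketch of the click-to-angles step, in Python. All camera numbers here are made-up placeholders; real values would come from calibrating your camera, and the resulting offsets would be setpoints for the PID loops described above.

```python
# Hypothetical camera parameters; real values come from calibration.
IMAGE_WIDTH = 640      # pixels
IMAGE_HEIGHT = 480     # pixels
HORIZ_FOV_DEG = 47.0   # assumed horizontal field of view
VERT_FOV_DEG = 36.0    # assumed vertical field of view

def click_to_angles(click_x, click_y):
    """Convert a clicked pixel into (yaw, pitch) offsets in degrees.

    Positive yaw means the robot must turn right; positive pitch
    means the shooter must aim higher. This uses a simple linear
    pixel-to-angle approximation, which is adequate for narrow FOVs.
    """
    # Normalize pixel coordinates to [-1, 1] with (0, 0) at image center.
    nx = (click_x - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2)
    ny = (IMAGE_HEIGHT / 2 - click_y) / (IMAGE_HEIGHT / 2)  # +y is up
    yaw = nx * HORIZ_FOV_DEG / 2
    pitch = ny * VERT_FOV_DEG / 2
    return yaw, pitch
```

A click dead center returns (0, 0), i.e. no correction needed; a click at the right edge returns half the horizontal FOV.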
PhantomPhyxer
10-01-2013, 07:24
Decreasing the speed of the shooter will decrease the range of the frisbee. As to what it will do to accuracy, that will require testing of a different sort. We initially used a table and a pool noodle to test shooting; different amounts of acceleration produce different ranges.
Aiming at the goals will most likely be done via manual aim (the targets are large enough). Auto aim is doable, but may not be of any advantage (you're driving towards the darned thing, might as well take the time to slew the turret). Not sure about range, but then we were thinking of raising the shooter, so there's that too.
I did not know there were other former Tankers on this site. The term "slew" in relation to turrets makes me think there are. I worked on the Bradley Fighting Vehicle for several years. We used an aimable turret last year.
Anupam Goli
10-01-2013, 08:22
Another way to think about it is, "you're writing the autoaim for autonomous anyway, might as well use it for teleop as well"
But I don't think you HAVE to auto aim for auton. As long as you're contacting the pyramid, you start out in auto, right? You could just start out aimed straight at the goal and fire. Granted, it's not the best, but hey, whatever works. Also, auto-aim is sometimes just too much to implement and test. I'd be perfectly happy if we just had a sweet spot and a little camera crosshair that would guarantee our shots go in. My theory on auto aim is that you don't absolutely need it unless you're encountering moving targets.
Also, for what it's worth, a rotating shooter is probably much more complicated to build. If you've played Catalyst, or done some math, the allowable angle of error for these shots is actually pretty high compared to previous years' games.
DjScribbles
10-01-2013, 11:19
My opinion is that controlling the angle vertically will be more important than rotational angle control. We have wide targets, but they aren't tall.
Anupam Goli
10-01-2013, 12:09
My opinion is that controlling the angle vertically will be more important than rotational angle control. We have wide targets, but they aren't tall.
Using the shooter speed together with an empirically measured table of speed vs. distance can also compensate for the short goal height.
Check out this post. It's not directly related to shooter aiming, but it does have some useful qualitative observations about shooter wheel speed and slipping:
http://www.chiefdelphi.com/forums/showpost.php?p=1212312&postcount=55
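The speed-vs-distance table idea could be sketched like this in Python. The data points are invented placeholders; a real table would come from your own test shots.

```python
# Hypothetical measured data points: (distance in feet, wheel speed in RPM).
# A real table would be filled in from shooting tests with your robot.
SPEED_TABLE = [(6.0, 2200.0), (10.0, 2600.0), (15.0, 3100.0), (20.0, 3700.0)]

def speed_for_distance(distance):
    """Linearly interpolate a wheel speed setpoint for a given distance.

    Distances outside the tested range are clamped to the nearest
    measured point rather than extrapolated.
    """
    if distance <= SPEED_TABLE[0][0]:
        return SPEED_TABLE[0][1]
    if distance >= SPEED_TABLE[-1][0]:
        return SPEED_TABLE[-1][1]
    for (d0, s0), (d1, s1) in zip(SPEED_TABLE, SPEED_TABLE[1:]):
        if d0 <= distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return s0 + t * (s1 - s0)
```

With the placeholder table, a shot from 12.5 ft interpolates halfway between the 10 ft and 15 ft entries.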
stingray27
10-01-2013, 13:56
For vision processing, I believe I am going to set up a system that uses vision processing, but at a minimal level. Last year, we wrote some code to attempt to follow the targets at all times. That didn't seem to fit the game, so this year we are going to revise that method. The LabVIEW code for vision targeting from last year is a really good reference for those of you trying to figure it out. The only change you have to make is when determining the aspect ratio subscore: you have to compare it to this year's target's aspect ratio instead of last year's (18 by 24). I am just going to divide the width by the height and then later use that number to determine which target the camera is currently looking at. I can then throw out the aspect ratio subscore when determining whether the camera is looking at a target, and use the other three subscores as the determining factors (the convex hull operation score, the rectangle coverage %, and the vertical and horizontal line scores).
As for actually using the vision information, I believe we may go with just a single button that activates a vertical alignment of the shooter. Since the target is so wide but limited in height, the shooter would line up vertically and then hand control back to the operator. Horizontal alignment would be left to the driver. This allows a quick rough alignment of the robot, after which the operator only has to perform slight fine tuning.
Any thoughts?
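The aspect-ratio subscore comparison described above could look something like this in Python. The scoring formula here is a guess at the general idea (100 for a perfect ratio match, falling off as the observed rectangle deviates), not the exact LabVIEW example's math.

```python
def aspect_ratio_score(width_px, height_px, target_ratio):
    """Score how closely a detected rectangle matches the expected
    width/height ratio: 100 is a perfect match, lower is worse.

    This is an illustrative stand-in for the aspect ratio subscore;
    the real example code's formula may differ.
    """
    observed = width_px / height_px
    # Normalize so the score is symmetric whichever ratio is larger.
    ratio = min(observed, target_ratio) / max(observed, target_ratio)
    return 100.0 * ratio

# Example: last year's target was 24" wide by 18" tall, so target_ratio
# would be 24 / 18; this year's targets each get their own ratio.
```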
jwakeman
10-01-2013, 14:27
The people that are auto aiming:
Do you guys offload vision processing onto something else?
We had the camera streaming directly to the driver's station/Classmate last year. We would do the vision processing there and send the relevant coordinate info back to the robot to make position adjustments. Lots of teams did this last year, and there was an example provided last year that was set up for this.
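The driver-station-side handoff described above amounts to "process the frame, then send the target's coordinates to the robot." A minimal Python sketch of the sending half, with a made-up port number and message format purely for illustration:

```python
import json
import socket

def send_target_offset(sock, robot_addr, x_offset, y_offset):
    """Send the target's pixel offset from image center to the robot.

    The JSON message shape and the address are illustrative; real
    teams used various formats (raw UDP, TCP, network tables, etc.).
    """
    msg = json.dumps({"x": x_offset, "y": y_offset}).encode("ascii")
    sock.sendto(msg, robot_addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# After processing a frame on the laptop, fire off the result, e.g.:
# send_target_offset(sock, ("10.0.1.2", 5800), 12, -3)
```

The robot-side code would simply listen for these packets and feed the offsets into its turn/aim loop.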
Lil' Lavery
10-01-2013, 14:33
I'm thinking of putting vision processing into the operator's brain and combining the best of automatic and manual aiming. Instead of having the computer struggle to identify a target 50' away with different lighting, I want to have the operator look at the camera feed and click the center of the goal. Then, the computer uses that to figure out how much the robot needs to turn and how high to aim the shooter to hit the goal. The output from this will be fed to PID controllers for the robot and shooter angles. When the robot has slewed to position and the shooter has spun up to speed, the operator fires a disc and makes corrections if it misses. Then, the operator rapidly fires their remaining 3 discs.
Certainly an interesting aiming concept. How will your computer know the range of an object based solely on the click of the operator? Are you assuming you're always firing from approximately the same distance from the goal? Are you going to have the operator click&drag a box that can be used to size the goal (and thus determine range)?
Additionally, you're then forcing one of your operators to either move his hands between two input devices (his typical input device and the computer), or be entirely dedicated to the computer (thus leaving both the driving and firing to the other driver). Forcing your operators to have to look at the controls rather than the robot* and move their hands between multiple devices are some of the cardinal sins of OI design for FRC.
*the obvious exception to this is when focusing on a camera feed on the OI.
ctccromer
10-01-2013, 14:41
Here are my PLANS for this year (final results may vary):
1) Auto-aiming system with a shooting system that does NOT move on a turret or anything. It can aim up/down slightly, but that's the only axis it moves on and only to an extent
2) I'm switching to a controller this year, and my very first idea for coding the robot was to not only make the joysticks turn and move the robot, but also to code it so that while I have the left trigger held down, the joysticks turn and move the robot at 0.25 of normal speed. This way you don't have to JUUUUUST BARELY NUDGE the turning joystick a bunch of times to line up the shot -- you can manually aim the whole ROBOT (not the shooter on a turret) at the goal, then use an algorithm to set the shooter's motor speed and vertical height, NOT its horizontal angle.
Last year, we had a slow mode button on our controller as well.
falconmaster
10-01-2013, 18:00
We had a great deal of success last year by locating landmarks on the field, lining up with them, and then launching the balls. We think we can do the same this year. To assist us, though, we are going to use a "photon cannon", aka a flashlight: http://www.amazon.com/8066-T6-Rechargeable-Zoomable-Flashlight-Charger/dp/B00A34P5J4/ref=sr_1_8?ie=UTF8&qid=1357856111&sr=8-8&keywords=led+flashlights+1000+lumens
Like the three-day robot build guys. From our experience this is much faster and more reliable than computer vision processing. There are too many variables that a human can adapt to but that a computer, without an extensive vision processing program, cannot handle as well or as fast. Just my two cents...
Anupam Goli
10-01-2013, 18:28
... There are too many variables that a human can adapt to but that a computer, without an extensive vision processing program, cannot handle as well or as fast.
This. Playing 20 minutes of Catalyst made me realize that within one hour of practice, a driver could find a sweet spot, sweet angle, and fire consistently. Granted, a very extensive vision processing system could do the same, but if your driver is confident enough, it'll most likely be faster for a human to line up to that sweet spot that is ingrained into the driver's mind.
z_beeblebrox
10-01-2013, 21:51
Certainly an interesting aiming concept. How will your computer know the range of an object based solely on the click of the operator? Are you assuming you're always firing from approximately the same distance from the goal? Are you going to have the operator click&drag a box that can be used to size the goal (and thus determine range)?
Additionally, you're then forcing one of your operators to either move his hands between two input devices (his typical input device and the computer), or be entirely dedicated to the computer (thus leaving both the driving and firing to the other driver). Forcing your operators to have to look at the controls rather than the robot* and move their hands between multiple devices are some of the cardinal sins of OI design for FRC.
*the obvious exception to this is when focusing on a camera feed on the OI.
Since you know the height of the goal and the angle from your shooter to its center, you can use simple trig to find the distance. I would give the operator a USB mouse for their right hand and a joystick (maybe a custom controls box) for their left.
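The trig works out like this in Python. All the heights and angles below are invented placeholders; the point is just that known goal height plus measured elevation angle gives floor distance.

```python
import math

# Hypothetical geometry; real numbers depend on your robot and the field.
CAMERA_HEIGHT_FT = 2.0    # camera lens height above the carpet
GOAL_CENTER_FT = 8.5      # assumed height of the goal center
CAMERA_PITCH_DEG = 15.0   # fixed upward tilt of the camera

def distance_to_goal(pixel_pitch_deg):
    """Estimate floor distance to the goal from the vertical angle at
    which the goal center appears, relative to the camera's axis.

    distance = (goal height - camera height) / tan(total elevation angle)
    """
    total_angle = math.radians(CAMERA_PITCH_DEG + pixel_pitch_deg)
    return (GOAL_CENTER_FT - CAMERA_HEIGHT_FT) / math.tan(total_angle)
```

For example, if the goal center appears 30 degrees above the camera axis, the total elevation is 45 degrees and the distance equals the 6.5 ft height difference.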
Dan Richardson
10-01-2013, 22:16
This. Playing 20 minutes of Catalyst made me realize that within one hour of practice, a driver could find a sweet spot, sweet angle, and fire consistently. Granted, a very extensive vision processing system could do the same, but if your driver is confident enough, it'll most likely be faster for a human to line up to that sweet spot that is ingrained into the driver's mind.
I think this is a great observation that is often overlooked. Having a good drive team is the keystone of a competitive robot. Great drivers need practice, and even good drivers get better with time at the sticks. Most teams seem not to make practice time a priority. Put it at the top of the list, rethink your resource strategies, and your bot will instantly be more competitive.
Back on topic, I believe the photon cannon to be one of the most elegant targeting methods to date. I only wish we'd thought of it first.
KrazyCarl92
10-01-2013, 23:50
Using the pyramid as an alignment device and as protection from interference by the opposing alliance seems like a great aiming method. Just back into the 30" horizontal bar, have the robot square up, and you are in a known, consistent position relative to the target. And with the targets being so wide, the one degree of freedom this alignment method leaves open (translation short-ways across the field) also happens to lie along the target's largest dimension. If a robot can score reliably from this position as well as one other, defense will be rather difficult, and aiming is REALLY easy from one of those positions.
Lil' Lavery
11-01-2013, 14:28
Since you know the height of the goal and the angle from your shooter to its center, you can use simple trig to find the distance. I would give the operator a USB mouse for their right hand and a joystick (maybe a custom controls box) for their left.
So you're assuming that as your distance from the target changes, so will its height in your camera's field of vision at a predictable rate? I'd recheck that assumption if I were you. If you're only moving in the axis orthogonal to the goal, this would be true. But does it hold true once you introduce the second (or third) axes of motion?
This is true, and we did it last year with EdgeWalker.
If the camera is kept at a fixed height, and a fixed angle, and you are directly in front of the goal, you've essentially limited your movement to the axis orthogonal to the goal. If you move left and right (perpendicular axis) the shape of the goal becomes trapezoidal, but if you use the centre of the bounding box of the trapezoid as your reference, you can compensate for the additional axis of movement.
The trick is finding a camera height and angle where there is enough change in the goal's height to give you meaningful information - AND where you can keep the goal in the field of view at every spot on the field you want to shoot from.
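The bounding-box trick mentioned above is simple to express in code. A Python sketch (the corner coordinates in the test are arbitrary): given the four detected corner points of the possibly trapezoidal goal outline, the center of the axis-aligned bounding box stays a usable reference even when you move off-axis.

```python
def goal_center_from_corners(corners):
    """Return the center of the axis-aligned bounding box of the four
    detected corner points of the goal outline.

    When viewed off-axis the outline becomes a trapezoid, but the
    bounding-box center remains a stable aiming reference.
    """
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
```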
Last year our team tried vision processing with a non-Axis camera, and we never used it because it was not as accurate as we wanted and was very slow. If you do try vision processing, I suggest using a Raspberry Pi or something similar.
stingray27
27-01-2013, 22:06
For anyone wanting an explanation of what the example code for vision processing is doing, I put up a YouTube video here: https://www.youtube.com/watch?v=m2Pwdq30eSI last year, where I explained to my mentor what it was doing. Please bear with it, as it is toned down, slow, probably not all correct (and long). But I have gotten some good feedback that my explanation made a lot of sense even to non-programmers. Check it out if you're interested.
jesusrambo
28-01-2013, 04:05
Our plan for this year is to have extensive auto-targeting to align the turret on the fly as we move, but with only movable elevation. Azimuth will be handled by actual driving, though we're planning on having that automatically align too. The image processing will be offloaded to the driver station, though we're looking into using an onboard computer.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.