*Do you think that using Statistical Analysis based on the bell curve is an innovative and/or efficient way to implement targeting control?

Please explain.*

I think you’re going to have to do a bit better at explaining exactly what you’re talking about before anyone can give you a reasonable answer.

Seconded.

A bit vague here. Specifics please?

Why not go for a 1st/2nd-order Taylor series approximation (i.e., a basic kinematics approximation) of your target’s motion over the next few seconds?

But anyhow, perhaps by “bell curve” you mean randomly guessing firing solutions in a test environment, then constructing a bell curve to help select (and interpolate?) a relevant firing solution from a table of known firing solutions at game time?

But a few more details would be nice. :o

-q

Are you actually talking about a statistically based algorithm or a probabilistically based algorithm? Statistically based algorithms have been used by FIRST teams for a while, in the form of look-up tables (I know several teams last year used similar things for ramping speed in autonomous, and even for position control when knocking off trackballs).
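For what it’s worth, a look-up table of the kind mentioned above might look something like this in Python (purely a sketch; the calibration points are invented, and a real team would fill them in from empirical testing):

```python
import bisect

def lut(table, x):
    """table: sorted list of (input, output) calibration points.
    Linearly interpolate between the two surrounding points;
    clamp to the endpoints outside the measured range."""
    xs = [p[0] for p in table]
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical ramping table: elapsed seconds -> motor power.
speed_table = [(0.0, 0.0), (1.0, 0.3), (2.0, 0.55), (3.0, 0.9)]
print(lut(speed_table, 1.5))  # halfway between 0.3 and 0.55, i.e. 0.425
```

The appeal of this approach is that the table encodes measured behavior of the actual mechanism, so no analytic model is needed.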

The ones that really interest me, and that I want to encourage my teams to start using now that we have the power of the new control system, are probabilistic robotics techniques: doing real-time analysis to calculate the probability of the robot being in any of several states, instead of just using heuristic-based, “yes you are or no you’re not” types of algorithms.

If this is the type of thing you’re doing, I’d really like to hear more about it. Realistically, though, it’s not that hard to do some of this with vision. I.e., you first do thresholding and then particle analysis, which gives you back several blobs, and you have to decide which one to track; in our experimenting, 1708 decided to use the size of the blob, as that is generally proportional to proximity to the robot. Note that this could be called probabilistic robotics, because essentially we’re assigning each blob a probability of being the closest based on how big it is: we don’t know for sure, but larger blobs are more probably closer. Then we pick the largest blob - the blob with the highest probability - as the one that we should “probably” track. You essentially have to do this with any algorithm that analyzes data from a sensor.
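The blob-selection step described above can be sketched in a few lines of Python (the blob areas are made up for illustration; a real implementation would take them from the particle-analysis report):

```python
def pick_closest_blob(blobs):
    """blobs: list of (id, area) pairs; a larger area is taken as
    evidence the blob is probably closer to the robot."""
    total = sum(area for _, area in blobs)
    # Normalize areas into pseudo-probabilities, then take the max.
    scored = [(bid, area / total) for bid, area in blobs]
    return max(scored, key=lambda s: s[1])

blobs = [("A", 120), ("B", 480), ("C", 200)]
print(pick_closest_blob(blobs))  # ('B', 0.6)
```

The normalization step is what makes it “probabilistic” in spirit: each blob gets a score between 0 and 1, and the tracker commits to the most likely one.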

The next step comes when you start fusing data from multiple sensors, weighted by their relative accuracy, in order to estimate a higher-level state (for example: is my DARPA Urban Challenge robot still on the road?). According to Thrun, Burgard, and Fox, this is the future of robotics, and I’m inclined to agree.

–Ryan

In the realm of FIRST, “statistical analysis” done in real-time in any form is novel. In the realm of academic robotics, you’d have to be more specific as to what you mean, e.g. it sounds like you could be talking about a Kalman Filter (where you trade off between how much you trust your sensors vs. how much you trust your “model” of the world based on statistical properties of your sensor readings).
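To make the sensor-vs-model trade-off concrete, here is a minimal 1-D Kalman filter sketch (not anyone’s actual implementation; the noise variances q and r are assumed tuning values):

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """x, p: current state estimate and its variance; z: new sensor reading.
    q: process noise (how much you distrust your model each step);
    r: measurement noise (how much you distrust the sensor)."""
    # Predict: the model says "no change", but uncertainty grows by q.
    p = p + q
    # Update: the Kalman gain k weights sensor vs. model by relative variance.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Start with a poor estimate (high variance) and feed in noisy readings
# scattered around a true value of 1.0.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 3))
```

After a few readings the estimate converges toward the true value and the variance shrinks, which is exactly the “trust the sensors more as they prove consistent” behavior described above.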

Yes, I apologize for being rather vague, so here is a basic explanation of the idea from beginning to end.

The camera produces an image.

LabVIEW filters out undesired colors, leaving the desired colors (green or pink).

Plot the filtered image data onto a graph.

Use the standard bell curve to generate the numbers for the required math (standard deviation, kurtosis, etc.), and likewise for the other color.

Determine if the colors line up enough in the Y values to be one target.

Determine whether the target is indeed the right target (correct alliance).

Calculate the distance (based on preset values).

Possibly use AI to aim ahead of the target based on its movement.

Calibration of ball release mechanism for proper distances.
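A rough sketch of the statistics step in the pipeline above, under some assumptions I’m making: each color blob is reduced to the Y-coordinates of its pixels, and two blobs “line up enough” to be one target if their vertical centers differ by less than some multiple of the larger spread (the threshold and pixel data here are invented):

```python
import statistics

def is_one_target(green_ys, pink_ys, k=1.0):
    """green_ys, pink_ys: Y-coordinates of the pixels in each color blob."""
    g_mean, g_sd = statistics.mean(green_ys), statistics.stdev(green_ys)
    p_mean, p_sd = statistics.mean(pink_ys), statistics.stdev(pink_ys)
    # Colors "line up" if their vertical centers differ by less than
    # k times the larger standard deviation.
    return abs(g_mean - p_mean) < k * max(g_sd, p_sd)

green = [100, 102, 104, 101, 103]
pink = [110, 112, 109, 111, 113]
print(is_one_target(green, pink))  # False: centers 9 px apart, spread ~1.6 px
```

Kurtosis could be added as a sanity check (a clean target blob should have a compact, roughly unimodal pixel distribution), but the mean/standard-deviation test is the core of the alignment decision.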

YES!

With the power of LabVIEW and the control system, plus the vision sensor and any other custom, efficient sensors, programmers can definitely go wild and take on this challenge. It’s such a big achievement when it is proven to work. I wanted to try something close to that (not really with probabilities, but close enough), but because of lack of time, resources, and other things (and the fact that our turret does not turn on its axis [or will it…?]), I had to stop all the work I’d done on it.

It is really fun to do all the thinking and probability work, and I encourage anyone to stand up to the challenge next year; unless someone has already done it this year, in which case I want to see every single detail of it.

Sounds like the basic idea of tracking to me. Generating algorithm parameters based on empirical testing of your mechanism is the usual approach to programming mechanical systems. A normal curve may in fact be a good model for your shooter, but at some point you’ll still have to set arbitrary limits on how far out of alignment you deem to be “close enough” to risk taking a shot; whether these are raw sensor readings, z-scores based on sensor readings, or probabilities from some complex model that you apply to sensor readings, it all comes down to the fact that you have an imperfect system, and you’ll have to make some judgment about what level of risk you want to assume.

On a different note, the idea of tracking ahead of the target has always interested me. I’d be more inclined toward polynomial approximations, as Qbranch suggested, rather than some sort of AI algorithm. In general, I don’t think you could get enough data to accurately train such an algorithm, as you’d need a different data set for each robot/driver, and humans are unpredictable enough that I don’t think you’d have much luck. Even other humans aren’t that good at predicting what we’ll do. A second-order approximation always seemed best to me. Reasoning: the control point for our systems is essentially the amount of power we’re applying to the motors, which is essentially proportional to the acceleration the robot will be experiencing, which is a second-order parameter of the system. If your acceleration is changing, there’s not much sense in guessing at it, as you’re not sure what the program/driver is doing. This is definitely something I’d like to start playing around with in the offseason.
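The second-order lead prediction described above can be sketched like this (my own illustration, not a tested implementation: velocity and acceleration are estimated by finite differences of the last three tracked positions, assumed evenly spaced dt apart):

```python
def lead_position(p0, p1, p2, dt, t_ahead):
    """p0, p1, p2: three most recent target positions, oldest first.
    Extrapolate t_ahead seconds forward with constant acceleration."""
    v = (p2 - p1) / dt                  # latest velocity estimate
    a = (p2 - 2 * p1 + p0) / (dt * dt)  # second difference ~ acceleration
    # Second-order Taylor expansion: p(t + t_ahead)
    return p2 + v * t_ahead + 0.5 * a * t_ahead ** 2

# Target seen at 1.0, 1.5, 2.1 m over 0.1 s steps; lead it by 0.3 s.
print(lead_position(1.0, 1.5, 2.1, 0.1, 0.3))  # ~4.35
```

In practice you would want to low-pass filter the difference estimates, since the second difference amplifies sensor noise badly, but the structure of the prediction is just this.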

–Ryan