Is hybrid valuable?

One of our mentors went to the Gull Lake regional in MI this weekend, and came back thinking that we should abandon our effort to implement the Kinect in favor of adding some behavior to autonomous to tip the coop bridge. Other reports I’ve seen indicate that very few teams are using hybrid mode. Is this because it doesn’t work well, or simply apprehension about the new tech?

Our autonomous is set up right now to shoot our two balls, and my strategy was to have this behavior supported as a subset of hybrid mode, so a single gesture would handle running our standard autonomous mode, and if we are the hybrid team on the alliance we can simply use the Kinect to drive back and drop the bridge. I’m rather apprehensive about trying to drive backward and hit the bridge, simply because it will be very difficult to program robustly, whereas hybrid control would allow a much greater degree of flexibility for our bot.

My feeling is that having a hybrid autonomous is going to make our team more valuable because so few teams are taking advantage of it, but I’m concerned about pushing the issue only to find out that all the good teams discovered two weeks ago that it doesn’t work reliably.

For most, I think that perceived lack of use is due to lack of utility.

We felt that the Kinect does not offer enough by way of precise control over the robot. Therefore, we didn’t see any advantage it offered over using traditional control techniques during autonomous mode.

Writing a simple routine that drives backwards for an interval of time is easy to do and often effective. Bonus points if you have encoders installed in your drivetrain, so that you can make the routine distance-based instead of time-based. If your drivetrain has a strong bias to veer in one direction when it is supposed to drive “straight,” you can also use the encoders to correct for that by measuring the difference between how far each side of your drivetrain has traveled. Alternatively, you can use a gyro to measure by what angle your drivetrain has changed course and feed that error into a control feedback loop like PID.
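The encoder-correction idea boils down to slowing whichever side has traveled farther. Here is a minimal sketch of that logic; every name and gain is hypothetical, and on a real robot the tick counts would come from your drivetrain encoders and the outputs would go to your motor controllers:

```python
# Sketch of encoder-based steering correction for a "drive straight"
# routine. left_ticks/right_ticks stand in for real encoder readings.

def straight_drive_outputs(left_ticks, right_ticks, base_power=0.5, kp=0.002):
    """Return (left_power, right_power): drive at base_power while
    correcting for veer using the difference in encoder counts."""
    error = left_ticks - right_ticks  # > 0 means the left side is ahead
    correction = kp * error
    clamp = lambda v: max(-1.0, min(1.0, v))
    # Slow down the side that is ahead, speed up the side that is behind.
    return clamp(base_power - correction), clamp(base_power + correction)
```

A gyro-based version would be the same shape, with heading error in place of the tick difference.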

But chances are you might not have enough time to implement these fancier control techniques. I would try driving your robot backwards for a set time interval and see whether that accomplishes what you need for getting to the bridge in autonomous. While you are right that this is not the most “robust” solution, have you tried it?

What I would do is create a list of the information that you are able to provide via the Kinect that you can’t get otherwise. If you find something critical on this list, implement Hybrid.

Yeah, that’s a big part of the problem. We have six hours of out-of-the-bag time to use, and I have a feeling I’ll get one or two of those. This is my first year on the team as a programming mentor, and the first year the team has used any sensors (as far as I know) or had any strong direction for programming. I feel we’re going to have a pretty good robot, with only a little tuning left to do on our shooting-automation PIDs. However, we don’t have any encoders on our wheels, and we haven’t really done any steering correction before (I was planning to save it for an off-season project as I try to get some of the kids more involved in programming).

To implement something that will do more than drive backwards for x seconds seems out of reach, but we’re about an hour away from Kinect completion, which could fill that gap.

My concern is that the motion control could be unreliable (we’re going to use the default gestures with the Kinect stick) and we’ll end up floundering in hybrid autonomous when a standard autonomous would have worked fine, simply because the Kinect doesn’t work as well as I’m counting on…

Perhaps the following video can give you some idea of what kind of control to expect. It was made using the beta software, so I do not know if it has improved since.

Ultimately, I agree with EricVanWyk that you should only use the Kinect IF you think it will give you information or control that you cannot attain otherwise. Simply using the Kinect does not confer an advantage just because no one else is using it, but if you can find a good application of the tool, then it is worth it.

You can see my other post here but in short we found the use of the Kinect very valuable. We did not have the time or resources to implement a closed-loop driving method, so we chose to use the Kinect to drive the robot during the Autonomous period along with other modes that used simple open-loop control.

The main benefit we found at the KC Regional was that with the Kinect, the Kinect driver was able to react much more dynamically to certain situations: lining up for the Coopertition Bridge to knock the balls down, picking up a stray ball that didn’t make it into the hoops, and, primarily, steering around other robots that were executing their autonomous driving modes. Initially we had programmed it to let the Kinect driver shoot balls that were missed during the initial autonomous shooting, but with such limited time that was ineffective. I wasn’t able to watch every one of our matches, but as far as I know we got to the Coopertition Bridge and tipped it to our side before the opposing alliance in every match.

If all you need to do is drive straight and operate a bridge mechanism (and your robot drives fairly straight on its own over the appropriate distance) the Kinect may not be a bad choice.

You can use the Dashboard to verify that you have the FRC Kinect Server configured and running properly without using up any of your unbag time.

You could set up the scoring to run autonomously and ignore the Kinect, then after it completes use one arm of the Kinect stick to control the power of an Arcade Drive function to drive to the bridge, and use the other arm or a leg gesture for the bridge mechanism. I saw a number of teams this weekend accidentally ram the bridge with their “full auto” modes and tip the balls the other way. This would also let you “abort” if someone else beats you to the bridge, instead of potentially tipping over if they are stronger at pushing it down.
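That handoff — run the scoring routine on its own, then give one Kinect arm an arcade-style drive — can be sketched like this. The mixing math is standard arcade drive; the axis names and the hand-over flag are hypothetical stand-ins for whatever your Kinect stick actually reports:

```python
def arcade_mix(throttle, turn):
    """Standard arcade-drive mixing: one axis for forward/backward power,
    one for turning. Returns (left, right) outputs clamped to [-1, 1]."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(throttle + turn), clamp(throttle - turn)

def hybrid_drive(auto_complete, kinect_arm_y, kinect_arm_x):
    """Ignore the Kinect until the scoring autonomous finishes, then
    hand drive control to one arm of the Kinect stick."""
    if not auto_complete:
        return (0.0, 0.0)  # the scoring routine owns the drivetrain here
    return arcade_mix(kinect_arm_y, kinect_arm_x)
```

On a real 2012 robot the arm values would come from the Kinect stick axes and the mixing would typically be done by the drive class’s arcade-drive call rather than by hand.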

You may be able to prepare both the “full auto” version and the Kinect code, and quickly test the repeatability of the “full auto” version toward the beginning of the unbag time (at this point you don’t care if the distance is correct, only whether it’s repeatable). If it seems good enough, spend the time dialing in the distance/time; if not, spend the time practicing with the Kinect.

At the Granite State Regional I think only one team tried to use the Kinect, and it didn’t seem to benefit them.

The main task in hybrid time is to get two balls into the top basket. If one or two bots can do that consistently, the alliance is on the way to a win. Hybrid turned out to be VERY important.

If you can reliably find a way to pull the bridge down, with the Kinect or without, that is certainly a plus, but not critical in my opinion.

Unfortunately this year it seems like using the Kinect for Hybrid was an afterthought. Most teams can get much greater accuracy by just lining up their robots manually and allowing an automode to run without using any sort of controlled input.

In 2008, hybrid could be very useful: you did not know where the trackball would be starting, so your commands could signal the robot where the trackball was. This year, there really isn’t any need for input like that.

That said… I can think of two potential uses…

  1. A delay timer, so that you can wait for other robots to shoot their balls. If you want to be the last team to shoot and there is a potential jam in the top basket, you might use it to signal your robot when to start shooting (waiting for the others to finish) and whether to shoot for the top or the middle.

  2. To choose between tipping the bridge first or shooting first. As automodes get more and more advanced, many teams are going to try to tip the bridge in autonomous. This means you may want to be first to the bridge, but that requires you to move from your starting position before shooting (risky for alignment). If you don’t have a clear picture of your opponents’ auto strategy, you might let your Kinect driver signal whether to go for the bridge or shoot first.
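Both ideas amount to sampling one gesture axis at the start of the period and branching on it. A minimal sketch, where the gesture convention, names, and threshold are all invented for illustration:

```python
def select_hybrid_strategy(gesture_axis, threshold=0.5):
    """Map one Kinect gesture axis, sampled at the start of hybrid,
    to a strategy. Convention here is made up: arm up = bridge first,
    arm down = shoot immediately, neutral = delay, then shoot."""
    if gesture_axis > threshold:
        return "bridge_first"
    if gesture_axis < -threshold:
        return "shoot_first"
    return "delay_then_shoot"
```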

Neither of those options is really necessary; good strategy, timing, and prematch setup can account for all of that. However, if you want the challenge of using the Kinect (my guess is that at least some of the Controls Awards will go to the few teams that use it), those are potential uses for it.

I would say having a consistent autonomous is much, much more important than using the Kinect (in fact, as an alliance captain I don’t think I would care whether you used it or not; I would just need to know that I didn’t have two other teams that “require” it). And if you are weighing tipping the bridge against using the Kinect, the option of tipping the bridge adds more value to the alliance.

Hybrid is very, very important. We just came back from the BAE Systems Granite State Regional, and you could win the game with hybrid mode alone. The Kinect, however, not too many teams used, and I couldn’t really see an advantage to it, since you can just set the robot up in autonomous to do everything you want it to do: shoot, tip the bridge, both, and so on.


I think I saw only one or two teams use it at GSR this past weekend; none were successful.

It will be useful to prevent the loser of the “Battle of the Bridge” from tipping over completely. It may be useful for selecting among autonomous modes (stand in place, go to fender, delay, go to colored bridge, go to white bridge). It’s probably useful as a direct range-finding camera on board the robot (subject to the extra resources required).

Had we received the Kinect pre-season, I think we’d have seen more teams using it. But for this year, we felt it was a distraction outside of the above scenarios.

So, I’ve gone ahead and implemented Kinect control on our robot, and right away I noticed a big problem that I want to make people aware of:

The Intel Atom processor (or at least the one in our netbook) was struggling quite badly to keep up with the Kinect processing. Luckily we have a semi-modern laptop (and a second computer that I can switch to for programming) to use as our driver station.

Using our DS, which is equivalent to this (1 GB of RAM and an N570 at 1.66 GHz), I was seeing approximately a 0.5-1.0 second delay in things like throttle control. Switching to the other computer (an Intel Core 2 Duo; I don’t know the exact specs), it works significantly better.

I’m pretty excited to see how this works this weekend (we’ll be in Traverse City, MI, team 2474), as we’ve only had about 5 minutes of test time so far :)