The Control Award was for our vision system. In addition to goal tracking, we have been experimenting with neural networks for ball tracking. We don't run ball tracking on the robot yet, but it's in progress.
This is a good question, and there are various reasons. How the robot handles when it hits a defense comes down to a lot of variables, including the type of drivetrain, wheel size, and the exact angle at which we hit it. Our drivetrain is certainly not perfect at going over defenses, and we have to hit the rock wall just right to get enough traction to get over.
If you look at some of our matches on Saturday, you will notice that we stop before the alliance wall in our first few matches. The question, then, is what changed between then and the end of today. This requires a bit of explanation. Our autonomous reads values from the Dashboard via NetworkTables: an autonomous mode (do nothing, drive, drive and aim, etc.) and a distance to travel. At some point the robot stopped reading these values at the start of auto while on the field (expect another post about this tomorrow). We don't know why; it worked just fine on the practice field, and we tested it multiple times. Since we no longer had the option of reconfiguring the distance our robot drives based on which defense we were in front of, we were faced with the choice of having no autonomous or having an auto that drives x inches, where changing that distance meant recompiling and redeploying.
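For anyone curious, the idea looks roughly like the sketch below. This is not our actual code; the key names and defaults are placeholders.

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class AutoConfig {
    // Placeholder keys -- our real Dashboard keys are named differently.
    private static final String MODE_KEY = "autoMode";         // "do nothing", "drive", "drive and aim", ...
    private static final String DISTANCE_KEY = "autoDistance"; // inches to drive

    /** Read the selected auto mode, falling back to doing nothing if the value never arrives. */
    public static String getMode() {
        return SmartDashboard.getString(MODE_KEY, "do nothing");
    }

    /** Read the configured drive distance in inches, with a conservative default. */
    public static double getDistanceInches() {
        return SmartDashboard.getNumber(DISTANCE_KEY, 0.0);
    }
}
```

When the Dashboard values stop coming through, as apparently happened to us on the field, all you ever get are the defaults, which is why losing that link forced us to hard-code a distance.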
We played a match; auto went fine (crossed the defense, didn't hit the wall). We played another match, and we got stuck on the rock wall. We now had the choice between not always being able to get over the rock wall and definitely getting over it by driving for all of autonomous (just in case we got stuck). We chose the more strategically viable option, in order to get the most points.
Lazy? Perhaps. I would certainly use that word to describe some programmers on my team at times, myself included. But in this case, particularly since we had to make this change at competition due to an undiagnosable issue, we feel it was the right decision. Furthermore, we're not entirely sure how else we would detect our total distance traveled. We've talked about adding an ultrasonic sensor, and I believe we've experimented with using our NavX MXP to detect distance traveled, but we've had trouble with both of these methods in the past. Encoder values have been the most reliable way of detecting distance traveled. That doesn't work when we lose traction with the ground.
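For reference, encoder-based distance is roughly the sketch below. The DIO channels, wheel diameter, and pulses per revolution are made-up numbers for illustration, not ours.

```java
import edu.wpi.first.wpilibj.Encoder;

public class DriveDistance {
    // Made-up geometry for illustration.
    private static final double WHEEL_DIAMETER_INCHES = 6.0;
    private static final double PULSES_PER_REVOLUTION = 360.0;

    // Made-up DIO channels for the encoder's A and B signals.
    private final Encoder leftEncoder = new Encoder(0, 1);

    public DriveDistance() {
        // Convert raw encoder pulses into inches of wheel travel.
        leftEncoder.setDistancePerPulse(
                (Math.PI * WHEEL_DIAMETER_INCHES) / PULSES_PER_REVOLUTION);
    }

    /** Distance traveled in inches -- only accurate while the wheels keep traction. */
    public double getDistanceInches() {
        // Wheel slip (e.g. spinning on the rock wall) makes this read farther than we actually went.
        return leftEncoder.getDistance();
    }
}
```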
I expect that most teams that build custom driver stations have velcro on the bottom, or put velcro on the bottom of their laptops (we did after we noticed wall-banging at Palmetto). Furthermore, most refs have been allowing teams to cross the line to catch falling driver stations/controllers with no penalty.
I apologize for typos or anything else messed up or confusing in this post; I've been at a robotics competition for a while and am very tired.