Has anyone seen any team with an effective use of autonomous mode? Has anyone stacked a tetra from the loading station or a vision tetra in 15 seconds?
Yes, in a way. Many teams at BAE could stack a tetra they started with on a side goal. Team 133 (BERT) could stack on the center goal if they started with it. I was in the pits most of the time, but I don't think any team actually picked up a vision tetra in auto mode.
I'm pretty sure I saw ONE team at the VCU Regional on Friday. It was the first, and one of only a few, that actually used the camera. I've noticed that most teams start out with no autonomous, or dead reckoning for a few feet/seconds. Then, as the matches progressed, more of them did. On Day 1 I didn't see too many 'bots with that feature, but by the qualifying matches there were a couple of 'em that picked tetras up, but didn't score.
Our bot didn't cap, and the team didn't like the autonomous I wrote, so by the time we were in the seeding matches we were idle for the 15 seconds.
No vision tetras in auto mode at BAE!
At Peachtree, one team started with a tetra, used it to knock down the hanging tetra, and then capped the tetra they started with. Very effective and they did it consistently at least 3 times in a row.
We did it the same way. We had it done 6 times in a row before a team figured out how to defend against us, and they did. But from there on out we managed it easily as well. We also got numerous compliments from other teams on the auto mode, at least 4 of them. It was fun to watch the auto this year.
121 only missed once starting with the alliance tetra, scoring it on top of the hanging one and knocking it down (on the one miss, both fell into the goal and were contained).
157 could track the vision tetra, and picked it up a few times, but never successfully capped… there were only 4 teams total, I believe, that picked up the vision tetra in auto…
No cameras in use at Sacramento, but a good amount of taking down the hanging tetra, and 5-6 bots could cap in autonomous.
Not a single robot was even able to pick up a vision tetra at Sacramento.
One team ran into one, by pure dumb luck when it happened to be set right in front of them. That was the closest to any kind of interaction robots had with them.
One team came EXTREMELY CLOSE at Peachtree… they picked it up, drove to the goal, and came down, but it was still swinging, so they just missed putting it on right…
The sad thing is, we had an auto mode at VCU that would allow us to cap the center goal with the alliance tetra, but most of the time our partners wanted us to run without it (I think their routines may have been a bit more reliable).
I still wish we’d gotten the chance to run it once!
Oh well, maybe at ROSi…
That was 1388. Just before the match, I was asked to unplug the camera, because they didn’t want to use the autonomous in the first match. The robot still ended up driving into a vision tetra and quite nearly pushing it under a side goal. I ran two other matches with the camera, just to see how well it could follow tetras. Once, it drove through a side goal and into the opposing alliance’s vision tetra. The other time, it pretty much missed. Vision is not even close to the attainable task I thought it to be six weeks ago.
Which is why the camera box was removed.
I'm sure one or two teams will do it in Atlanta. At Sac, I was surprised how few teams used the camera (well… even had it mounted).
Kudos to anyone who can pick it up using the camera.
At the Rochester Finger Lakes Regional, team 237 could not only track and lift the vision tetra but also, most of the time, position it in or above a goal. I would be on the lookout at the NJ Regional for them to cap a goal with the vision tetra in auton mode.
Amazing how quickly things can change. Anyone remember this thread?
The autonomous mode I wrote missed picking up a tetra from the autoloader by a few inches in our best match. In another match the robot knocked the tetra outside of the field. My team was at the VCU Regional.
Given the new math libraries, and the support Kevin Watson gave for the encoders and gyros, I was really surprised that not many teams had good autonomous modes. The best one I saw at VCU was the Virginia Tech team, who capped the side goal with an alliance tetra every time. With plenty of time to test, I'm sure that many more bots will have great autonomous modes, something to look for at some of the summer competitions.
My team this year decided not to have an arm, so our autonomous consisted primarily of getting to our opponents' side ASAP so we could get in their way. However, we want to put an arm on it for the summer, and I fully expect to have an autonomous that caps several goals.
No one at Rochester was able to do it, but as dez250 said, 237 came so close a number of times. So it seems the consensus from the first week of competition is that no one has been able to track, pick up, and cap the vision tetra.
I’m interested to see if any team does it this year.
Oh man! One of the reasons our (Aztechs 157) autonomous mode didn't work was that we didn't have enough time to debug our code for the lighting at BAE.
We wrote the code at Assabet (our school — it worked very well!!!), but our calibration values were much too different from the ones at BAE. We also mounted our camera way too low, so our yellow values were very inaccurate.
I found out too late!
I guarantee, it will work at UTC!!! See you guys there!
Team 772 used our camera during most of the qualifying matches, since it was consistent at home, but every time autonomous started in Rochester it was a few inches off. On top of that, the fragile servo kept breaking. Overnight I wrote a different autonomous that went to the auto loaders, which worked out much better (but the turning needs to be sped up a ton — if you saw our bot, you'll know what I mean!!).