Most of the features are listed in the “More Info” section of the video. The centerpiece of the video is our Target-Field Oriented drive mode. Similar to what’s seen in 1058’s thread about field oriented drive, we have field-oriented mecanum drive with the added ability to track the goal target and hold our heading, so we are always facing the goal while we move around relative to the field.
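The thread doesn’t show the team’s actual implementation (they’re in LabVIEW), but the underlying math is standard: rotate the field-relative stick vector into the robot frame using the gyro heading, and add a proportional rotation term that holds the heading on the target bearing. Here’s a minimal Python sketch of that mixing; the function name, gain, and wheel ordering are my own choices, not theirs.

```python
import math

def field_oriented_mecanum(vx_field, vy_field, heading_deg, target_bearing_deg, kp=0.02):
    """Convert field-relative translation commands into mecanum wheel powers
    while holding the robot's heading on a target bearing.

    vx_field, vy_field: field-relative translation commands in [-1, 1].
    heading_deg: current gyro heading; target_bearing_deg: bearing to hold.
    """
    # Rotate the field-relative translation vector into the robot frame.
    theta = math.radians(heading_deg)
    vx = vx_field * math.cos(theta) + vy_field * math.sin(theta)
    vy = -vx_field * math.sin(theta) + vy_field * math.cos(theta)

    # Proportional heading hold: steer toward the target bearing,
    # wrapping the error into [-180, 180) degrees.
    error = (target_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    rot = kp * error

    # Standard mecanum mixing: front-left, front-right, rear-left, rear-right.
    powers = [vy + vx + rot, vy - vx - rot, vy - vx + rot, vy + vx - rot]
    scale = max(1.0, max(abs(p) for p in powers))  # keep outputs in [-1, 1]
    return [p / scale for p in powers]
```

Because translation and the heading-hold rotation are summed independently before normalization, the robot can strafe anywhere on the field while the rotation term continuously points it at the goal.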
I like the concept here. I’m assuming most of the time you aren’t facing the target…I just imagine it being hard to acquire balls while tracking like this. We have this as well, but on a more “on-demand” level: most of our driving is done perpendicular to us (aka shooter facing us), then we hit a button, and now we’re facing the target. It gives us better ball control.
That all depends on your strategy and how you see the game played. In autonomous, I can see it being beneficial not to use the camera, since it will un-align you when you try to continue on to further balls (mid and far positions only). You could align, track the heading delta, and then turn back, but that all takes much more time. In teleop, I can see many times where the camera doesn’t have a target while you are not facing it (e.g. getting balls), and you then suddenly turn and want to fire. The camera takes a little while to reacquire the target after turning quickly, so this would cause problems. This all applies to the far and mid fields; the close field has many more applications for camera tracking, and probably actually needs it less since the shot is so easy to line up.
That said, with Mecanum and Swerve drives, the ability to translate in an arbitrary direction without changing orientation makes the camera much more useful, especially where you don’t plan on changing orientation enough to knock the camera off target.
You might find the camera useful, maybe not. It all depends on your strategy.
Does the target-facing rely on the update rate of the camera, or are you using the distance-from-target in conjunction with the heading from the gyro to figure out how the robot should move between camera frames? Or is the heading adjustment recalculated from camera data alone?
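One common way to get around the camera’s update rate, for anyone reading along: when a frame arrives, add the camera’s bearing error to the gyro heading at that instant, giving a field-relative bearing to the target that stays valid between frames; the heading loop can then run at gyro rate. The thread doesn’t confirm this is what they do, so treat this sketch (and its class/method names) as illustrative only.

```python
class TargetHeadingEstimator:
    """Sketch of inter-frame target tracking: each camera frame gives the
    target's bearing relative to the robot; adding the gyro heading at that
    instant converts it to a field-relative bearing that remains usable
    until the next frame arrives."""

    def __init__(self):
        self.target_bearing_field = None  # field-relative bearing to target

    def on_camera_frame(self, bearing_error_deg, gyro_heading_deg):
        # The camera reports the target this many degrees off robot center;
        # latch it in field coordinates using the heading at capture time.
        self.target_bearing_field = gyro_heading_deg + bearing_error_deg

    def heading_error(self, gyro_heading_deg):
        # Between frames, compare the latched field bearing to the live
        # gyro heading, wrapping the error into [-180, 180) degrees.
        if self.target_bearing_field is None:
            return 0.0
        err = self.target_bearing_field - gyro_heading_deg
        return (err + 180.0) % 360.0 - 180.0
```

With this scheme, a 5-10 FPS camera only needs to correct slow gyro drift; the fast heading correction comes from the gyro itself.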
I’m a big fan. :] It would be cool to see a video of your robot switching between field and target oriented modes.
After playing with the LabVIEW vision code extensively, they may be using the 160x120 resolution. If so, after adjusting, and in bright light like they have, I was able to obtain 70-100 millisecond processing times (roughly 12 FPS). At 320x240 I was only able to get it down to around 180 milliseconds (about 5 FPS).
I am curious as well as to how they are obtaining such seemingly excellent frame rates. To be fair, mine have been running on the development computer, and I’m unsure whether theirs was deployed permanently to the robot and simply viewed through the driver station. At 160x120 I begin losing the target at more than 18 feet.
I’m not sure what the default is, but I bet giving it a value of 100 might increase your FPS.
I’m not really sure the frame rate needs to be much higher, though. Our robot turns very quickly (probably can rotate more than 360 degrees/s). Even while turning at full speed, our camera was good enough to track the target using a hacked up version of the vision sample code. For us, a full rotation a second is not reasonable. If our team were to improve our vision/tracking code, we would not look to improve frame rate, but the quality of our image.
Thanks! We enter targeting mode by pressing and holding a button on our joystick. You can actually see the transition in the video, in the right-side view of the robot, as it turns around from looking at us: we turn around so we generally face the target, hold the button, and go! I would have liked to include better video of the transition, but we had limited space since it was raining outside all day!
We are using LabVIEW!
I believe in this video we are using 320x240, though we have tested and driven using 640x480. With 640x480 we get around 5-7 FPS; using 320x240, the FPS counter jumps around between 7 and 14 FPS, though generally sits around 8 FPS.
As to how we did this, I’m not sure what technical info my team would prefer me to divulge. But I can say that we looked at the image processing capabilities of the cRIO, and then we read the rules:
Custom circuits may be used to indirectly affect the robot outputs by providing enhanced sensor feedback to the cRIO-FRC to allow it to more effectively control the ROBOT.
After a few tweaks to optimize the code, we are tracking the target at 640x480 at 30 (!) frames per second. I don’t think it can get any better.
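Reading between the lines of the rule quoted above, the implication seems to be that the heavy image processing runs somewhere other than the cRIO, with only the results fed back as “enhanced sensor feedback.” Purely as a hypothetical illustration of that pattern (the port number and message format below are invented, not the team’s), a vision result can be packed into a tiny datagram that a coprocessor or dashboard sends to the robot controller each frame:

```python
import json

# Hypothetical port for the vision-result stream; chosen arbitrarily here.
VISION_PORT = 5800

def encode_result(bearing_deg, distance_ft, fps):
    """Pack one vision result into a UTF-8 JSON datagram payload."""
    return json.dumps({"bearing": bearing_deg,
                       "distance": distance_ft,
                       "fps": fps}).encode("utf-8")

def decode_result(datagram):
    """Unpack a vision datagram payload back into (bearing, distance, fps)."""
    msg = json.loads(datagram.decode("utf-8"))
    return msg["bearing"], msg["distance"], msg["fps"]
```

Because the payload is a few dozen bytes per frame, even 30 FPS of results is negligible network traffic compared to streaming 640x480 video to the cRIO for onboard processing.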
Of course, unless we get a camera with 60 FPS capability :rolleyes: