Is anyone using the Camera?

I was at the New Jersey Regional this past weekend, and I think I saw only one, maybe two, robots that used the camera. I was interested in whether anyone had used it, and why or why not.

We use the camera, but not to track trailers. We use it for ball recognition along with our light curtain. If we have already sucked up an empty cell, the camera will hopefully have seen that and will not let us suck up another one. Also, our intake system is completely autonomous using the aforementioned light curtain.

We are using our camera, and all the code is written for it. We just have to mount it and recalibrate it at competitions. The turret itself is under manual control; if the camera locks onto the colors, a joystick movement will switch it to auto-lock. Otherwise the turret stays manual.

Yes, Team 904 has implemented code to use the camera in autonomous AND in tele-op mode.

We’ll find out next week whether or not we use it…it’s mounted, programmed, and was kind of working when the robot shipped.

Some recalibration required

Based on what we saw in week 1 matches, we changed our camera focus :wink: to live streaming. It appears that live video at 160x120 @ 25 fps, scaled up to 320x240 in our Python dashboard, is going to be more valuable to the driver as an aid in picking up moon rocks than for tracking targets, although we hope to get near-instant mode switching working by the FL regional.

We have a lock mode that we could not shake in the shop; we will have to see how that turns out on the field. I am concerned about the lighting issues mentioned in another thread. We found effective lock code was nearly impossible without the use of separate tasks in C++ (I don't know about LabVIEW, nor do I care).

Note that semaphores are very important in this, and I recommend that you do not use the WPI “restricted region” (which is not much of an abstraction anyway) and instead access the VxWorks semaphores directly; there are different types, and each is meant for a very different situation.
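A minimal sketch of that pattern (not actual competition code): a dedicated camera task updates a shared target struct under a VxWorks mutex semaphore, and the control loop just copies the latest snapshot instead of waiting on vision. TrackTarget() is a placeholder for whatever vision call you use, and the task priority, stack size, and struct fields are made up.

```cpp
// Sketch of sharing camera results between a vision task and the control loop
// using a vxWorks mutex semaphore and taskSpawn.
#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>
#include <sysLib.h>

struct TargetInfo
{
    bool   valid;
    double xOffset;   // normalized horizontal error, -1..1
    double size;      // particle area, used as a rough range estimate
};

static SEM_ID     targetSem;                       // mutex protecting sharedTarget
static TargetInfo sharedTarget = {false, 0.0, 0.0};

// Placeholder: run one iteration of your color tracking and fill in 'out'.
static bool TrackTarget(TargetInfo *out)
{
    out->valid = false;
    out->xOffset = 0.0;
    out->size = 0.0;
    return false;
}

// Vision task: loops forever, holding the mutex only while copying results.
static int CameraTask(int, int, int, int, int, int, int, int, int, int)
{
    for (;;)
    {
        TargetInfo latest;
        bool found = TrackTarget(&latest);         // slow part runs outside the lock

        semTake(targetSem, WAIT_FOREVER);
        sharedTarget = latest;
        sharedTarget.valid = found;
        semGive(targetSem);

        taskDelay(sysClkRateGet() / 20);           // roughly 50 ms between frames
    }
    return 0;
}

void StartCameraTask()
{
    // Priority-queued, inversion-safe mutex; options per the vxWorks semLib docs.
    targetSem = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE | SEM_DELETE_SAFE);
    taskSpawn((char *)"tCamera", 100, VX_FP_TASK, 32768, (FUNCPTR)CameraTask,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}

// Called from the teleop loop: copies the latest result without blocking on vision.
TargetInfo GetLatestTarget()
{
    semTake(targetSem, WAIT_FOREVER);
    TargetInfo copy = sharedTarget;
    semGive(targetSem);
    return copy;
}
```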

Do you remember that our bandwidth is being limited to ~950 bytes every 0.02 seconds (50 Hz)? I don’t think that will work.

We have some preliminary code done and tested; the problem is that our bot falls into oscillation because of the low traction. We ended up not using it at our first regional because of that. We are working on that now and hope to have it up and going for Purdue.

You almost won MWR with a not-fully-functioning robot, and you’ll have it fixed at BMR? ::scary::

The user data is 984 bytes × 50 Hz = 49,200 bytes per second (393.6 kbps).

An appropriately compressed 160x120 image is 1300 - 1800 bytes. We fit each image over two packets, along with a pretty decent amount of telemetry.

160x120 is a bit small, so we stretch it to 320x240 on our Dashboard. Because the eye integrates video very well, it’s much more than usable for the driver.
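To put numbers on it, here is a quick back-of-the-envelope check. The packet size, rate, and image size are the figures quoted above; the telemetry share per packet is an assumption.

```cpp
// Back-of-the-envelope check of the image-over-user-data scheme described above.
#include <cstdio>

int main()
{
    const int packetBytes    = 984;    // user data bytes per Driver Station packet
    const int packetsPerSec  = 50;     // packets sent at 50 Hz
    const int telemetryBytes = 80;     // assumed telemetry share of each packet
    const int imageBytes     = 1800;   // worst-case compressed 160x120 frame

    int imagePayload    = packetBytes - telemetryBytes;                    // 904 bytes
    int packetsPerImage = (imageBytes + imagePayload - 1) / imagePayload;  // ceil -> 2
    double maxFps       = (double)packetsPerSec / packetsPerImage;         // 25 fps

    printf("total user data:   %d bytes/s\n", packetBytes * packetsPerSec); // 49200
    printf("packets per frame: %d\n", packetsPerImage);
    printf("max frame rate:    %.1f fps\n", maxFps);
    return 0;
}
```

With those numbers, a frame fits in two packets and the scheme tops out around 25 fps, which lines up with the streaming rate mentioned earlier in the thread.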

We’re certainly using the camera to track the targets and automatically aim our turret, unless the programmers decide to stream video from it. Whichever one is easier is the one we’re going for.

The camera we have programmed on our robot tracks opposing trailers, chases after the trailer, and then, when it gets about a foot away, starts the belt holding the moon rocks and dumps them in. The program actually worked really well at the DC Regional, except for 2 or 3 times when the camera lost patience or lost the target for whatever reason.
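For anyone curious what that kind of chase-then-dump logic looks like, here is a rough sketch (not the poster’s actual code). GetTrailerTarget, ArcadeDrive, and RunBelt are hypothetical stand-ins for a team’s own camera, drive, and belt routines, and the gains and thresholds are made up.

```cpp
// Rough sketch of chasing a tracked trailer and dumping when within about a foot.
struct Target
{
    bool   found;
    double xError;     // horizontal offset of the trailer in the image, -1..1
    double rangeFeet;  // rough distance estimate from target size
};

static Target GetTrailerTarget()          { return Target(); } // stub: camera code goes here
static void   ArcadeDrive(double, double) {}                   // stub: drive code goes here
static void   RunBelt(bool)               {}                   // stub: belt code goes here

// Called each loop while the chase behavior is active.
static void ChaseAndScore()
{
    Target t = GetTrailerTarget();

    if (!t.found)
    {
        ArcadeDrive(0.0, 0.0);          // lost the trailer: stop and wait for it to reappear
        RunBelt(false);
        return;
    }

    double turn = 0.6 * t.xError;       // simple proportional steering on the camera error

    if (t.rangeFeet > 1.0)
    {
        ArcadeDrive(0.5, turn);         // still far away: keep chasing
        RunBelt(false);
    }
    else
    {
        ArcadeDrive(0.0, turn);         // about a foot away: stop and run the belt to dump
        RunBelt(true);
    }
}
```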

We had the camera sending video back to our laptop at the control station, and it was pretty much seamless. I was expecting a really bad refresh rate, but we had no trouble with it at Traverse City.

We had a couple of software issues at the start of the DC Regional, but as the tournament went on, I was assured it was working. Since we did not have a chance to fully test it out on the practice field, we chose not to use it and relied on manual control of the turret and shooter. We expect to have it up and running at Chesapeake, however. We will still have the turret under manual control until just before firing, when we will allow the camera to lock onto the target and then fire.

We used our camera at Trenton along with a few other sensors that we’ll try to perfect at Philly. At the moment we’ve only risked trying to score 2 balls in autonomous while placing the other 5 in our conveyor ready to load the hopper in teleop. We were extremely proud of our programmers for making it track very well.

we’re using it,

we have it working perfectly

All Zach has to do is aim the turret towards a trailer, and then the camera takes over and follows it.

We’re having some difficulties with the camera.

It’s working, but response time is ridiculously slow (> 0.5 seconds). I’m not quite sure what’s causing it. I’m just using the FindColor() function provided in the API, and I even tried to speed that up by allowing multiple calls to use the same image (so that you don’t have to get a new image for each call). Still no luck.

The only thing that I can think of as the cause is that I’m using a multi-tasking system (the drive train, collector system, and shooter system all have their own tasks, using a buffering system of my design). I don’t see how they would be slowing things down to that point, however, as all other tasks have very fast response times (< 0.05 seconds).

Any ideas what may be the problem?

are you using any sensors?

we have a gyro and accelerometer and 7 motors all running and there’s no delay…

is it on a rotating turret?

we had this problem, but solved it by decreasing the speed of the rotation, and now it locks on in under 0.5 seconds from 6 feet…

although our camera is set to only track pink…

We are using sensors. We have 4 encoders hooked up to the drive train (we’re using an omni system). It’s not the lock-on that’s the problem; I could tune that fairly easily. The problem is the return rate of the function I’m using to get target data. I’ve put in some prints for debugging, and the function only returns after > 0.5 seconds. The tracking itself works perfectly; the problem is that it’s too slow.
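A rough sketch of that kind of timing instrumentation, using VxWorks tick counts, might help narrow down whether the half second is spent acquiring the image or analyzing it. GetFreshImage() and AnalyzeImage() are placeholders for the actual acquire and FindColor/tracking calls.

```cpp
// Bracket the two halves of the vision call with tick counts to see where the time goes.
#include <vxWorks.h>
#include <tickLib.h>
#include <sysLib.h>
#include <stdio.h>

static void GetFreshImage() { /* placeholder: grab a frame from the camera */ }
static void AnalyzeImage()  { /* placeholder: FindColor / particle analysis */ }

void TimeVisionOnce()
{
    ULONG t0 = tickGet();
    GetFreshImage();
    ULONG t1 = tickGet();
    AnalyzeImage();
    ULONG t2 = tickGet();

    int ticksPerSec = sysClkRateGet();  // system clock rate; resolution depends on config
    printf("acquire: %.3f s, analyze: %.3f s\n",
           (double)(t1 - t0) / ticksPerSec,
           (double)(t2 - t1) / ticksPerSec);
}
```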

I’m planning on trying an approach closer to what is used in the two color demo. We’ll see if that gives us any luck.