Those who have actually played in a game and have a turret that is aimed by a camera, what has been your experience?
I am hearing in the forums that the lighting is bad and cameras are not being effective. Is this true?
Well, the code provided by FIRST detects the target fairly well in the pits at Buckeye. I’m not at all sure how it was working on the actual field, with incandescent bulbs instead of fluorescent/LED lights, and I didn’t calibrate it at all. My drive team tells me it worked fairly well at a distance but is not very good close up. (This is due to MY programming, since the turret won’t turn fast enough to keep the target in the field of vision up close.)
From what I hear, the stadium lighting is totally messing up cameras, though I’ve only heard that from a couple of sources. Just be warned that you will very likely have to re-configure your camera settings for the on-field lighting conditions.
Are they allowing teams to go on the field during breaks (primarily lunch) to calibrate their cameras under field conditions? I remember doing that at a number of regionals in 2006 and 2007; it was very helpful.
At the Midwest Regional, we were allowed onto the field during lunch on Thursday, after the practice matches, to check the camera software.
What we found out was that the lighting really messed with the area percent on the targets, so we eventually gave up on using it.
We found that turning down the brightness helps a bit.
Perhaps this isn’t a good idea, but instead of using the area function provided, I suggest doing your own area calculation from the rectangle it finds (see the sketch below). Oh, and how would one go about calibrating the camera? Do you just run the white balance VI that came with the example and make sure the set-white-balance function is nowhere in the normal main (so as not to overwrite the calibration)?
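Here’s roughly what I mean, as plain Java with made-up names (just to show the idea, not the actual tracking API):

// Rough sketch: compute area from the bounding rectangle instead of
// trusting the particle-area value. Names are made up for illustration.
class TargetRect {
    int left, top, width, height; // bounding box in pixels
}

class AreaFromRect {
    // Area of the bounding rectangle itself.
    static int rectArea(TargetRect r) {
        return r.width * r.height;
    }

    // Fraction of the image the rectangle covers, as a percentage;
    // a stand-in for the lighting-sensitive particle area percent.
    static double rectAreaPercent(TargetRect r, int imgW, int imgH) {
        return 100.0 * rectArea(r) / (imgW * imgH);
    }
}

The hope is that the rectangle edges hold up under washed-out lighting better than the filled-pixel count does.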
What were you using for brightness (and white balance)?
In the example (2-color tracking) there was a brightness setting, which defaulted to 40; we turned it down to 0.
I don’t think we had time to check out white balance (we programmers were getting really hungry by that time).
How exactly does the rectangle work?
When we were using area, we noticed that it pretty much cut off a side of the target.
We took advantage of the calibration time during lunch on Thursday, and found that the images we were getting were basically just glowing orbs of (barely) green and pink. We couldn’t find any brightness, WB, or exposure settings that made it work. Setting the brightness low (like 10… sounds like 0 would have worked too?) helped a bit, but we could never find green. The camera worked GREAT in the pits and on the practice field.
We did find a target once or twice in autonomous with the ‘fluorescent 1’ and ‘auto’ settings for WB/exposure, but it was way too unreliable to use during teleop.
I’d really appreciate hearing from anyone who got the camera to work under those super-bright lights on the field. I know FIRST tested the camera “under FRC competition conditions” but I wonder if they tested with the white FRP floor? I’m thinking it may be reflecting a whole lot more than the classic carpet we’re used to… and they probably selected the camera long before Lunacy was designed…!
Maybe we’ll bring sunglasses to put on Toasty and see if it works?
(Toasty the Camera…)
It seemed like Simbotics had some sort of sunglasses like feature on their camera, maybe someone could check on that.
The Simbotics are using a fisheye lens.
We tried something like that by Friday afternoon… =) Polarized and all. Can’t say it helped. I would suggest everyone play with some really strong, direct white lighting on their targets before heading to their regionals!
We didn’t use a camera this year, so I have no first-hand experience with its success or failure, but I do know that someone published some camera threshold values in the pits on Friday morning; they were displayed next to the inspection station. Also, at about 7 PM Thursday, once all of the practice matches were over, teams were let onto the field for about 10 minutes each to test out camera settings. This is just what they did at Jersey; I don’t know what’ll happen at your regional(s).
My suggestion to any teams who need to tune cameras is to ask the FTA or someone else in charge of the field whether there’s any time when you could go out to gather values. I’ve found that if you ask politely they’ll usually try their best to help you out.
We didn’t do the camera calibration on Thursday, and regretted it. Our radar screen would show ghosted images of targets, but never a target with a full, continuous solution. A ghosted image meant that a pink/green combination had appeared at that spot sometime in the last second, but the data wasn’t continuous. In other words, the targets would pop in and out while on the field, making them difficult to track. It still proved useful when we were close, though.
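In case the ghosting idea helps anyone, it’s simple enough to sketch. This is just the shape of it (Java, made-up names and timings, not our actual code):

// Rough sketch of the ghosting idea: keep the last detection at each spot
// for up to a second so intermittent hits still paint the radar.
class GhostedTarget {
    static final long GHOST_LIFETIME_MS = 1000; // drop after 1 s unseen
    static final long FRESH_MS = 100;           // "live" if seen this recently

    double range, bearing;  // last reported position
    long lastSeenMillis;    // when the camera last confirmed it

    void update(double newRange, double newBearing) {
        range = newRange;
        bearing = newBearing;
        lastSeenMillis = System.currentTimeMillis();
    }

    // Still worth painting at all?
    boolean isVisible() {
        return System.currentTimeMillis() - lastSeenMillis < GHOST_LIFETIME_MS;
    }

    // Painted dimmer ("ghosted") once the detection goes stale.
    boolean isGhost() {
        long age = System.currentTimeMillis() - lastSeenMillis;
        return age > FRESH_MS && age < GHOST_LIFETIME_MS;
    }
}

Anything that passes isVisible() gets painted; once isGhost() goes true it gets drawn dimmer until it ages out.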
We’ll have to get it calibrated and practice with it in Florida.
What is close? And did you successfully target and shoot at that range?
Did anyone find a setting that worked? Thanks, everyone.
At the DC Regional, there was an opportunity during lunch break on Thursday to calibrate cameras on the field. Greg McKaskle from NI (many of you might recognize his name from Chief Delphi) was there to help, but I only saw two teams (including us) actually out on the field. The first thing Greg said to do was to log in to the camera (192.168.0.90) and lower the brightness level to 0. After that, we tweaked our HSL values a little bit, showed them to Greg, and he approved.
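If it helps anyone, the tweak itself is just a pair of HSL ranges, one per target color. A simplified sketch of what the pixel test looks like (Java, and the numbers here are placeholders, not the values Greg approved):

// Simplified sketch of an HSL threshold test, one range per target color.
// All numbers are placeholders, NOT the values we actually used.
class HslRange {
    final int hMin, hMax, sMin, sMax, lMin, lMax;

    HslRange(int hMin, int hMax, int sMin, int sMax, int lMin, int lMax) {
        this.hMin = hMin; this.hMax = hMax;
        this.sMin = sMin; this.sMax = sMax;
        this.lMin = lMin; this.lMax = lMax;
    }

    // True if a pixel's hue/saturation/luminance falls inside the range.
    boolean matches(int h, int s, int l) {
        return h >= hMin && h <= hMax
            && s >= sMin && s <= sMax
            && l >= lMin && l <= lMax;
    }

    public static void main(String[] args) {
        // After dropping camera brightness to 0, the luminance floor
        // usually needs loosening; these ranges are made up.
        HslRange green = new HslRange(100, 140, 100, 255, 40, 255);
        HslRange pink  = new HslRange(220, 255, 100, 255, 40, 255);
        System.out.println(green.matches(120, 200, 90)); // true
        System.out.println(pink.matches(120, 200, 90));  // false
    }
}

The main interaction to watch is that changing the camera brightness shifts where the luminance floor needs to sit.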
During the matches, our camera seemed to work fine. The only problem was that we had written our code to rely almost completely on the camera, and we only tested our code on stationary targets. When our robot and the target robots were moving, our shooter would miss the target. The camera by itself, though, worked better for us than most people here seem to describe.
Thanks for the info, we’ll try to get onto the field to do that as soon as we can at our first regional.
We did some playing around with moving targets before ship, so we know it’s a challenge…and this is with the target trailer moving pretty slowly.
The values that were given to us by the folks from NI were fairly accurate. We created our own lighting settings in WindRiver specifically for the arena, and our camera was very effective at tracking.
We were given calibration time on Thursday night, and took advantage of it. I was glad we did.
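If you go that route, it helps to keep each venue’s settings as a named preset instead of editing constants in place. Ours were in WindRiver C++, but the shape of it looks something like this (Java here, with made-up names and values):

import java.util.HashMap;
import java.util.Map;

// Rough sketch of per-venue camera presets. Field names and values are
// made up; the point is switching whole groups of settings at once.
class CameraPreset {
    final int brightness;
    final String whiteBalance;
    final String exposure;

    CameraPreset(int brightness, String whiteBalance, String exposure) {
        this.brightness = brightness;
        this.whiteBalance = whiteBalance;
        this.exposure = exposure;
    }
}

class PresetBook {
    static final Map<String, CameraPreset> PRESETS =
        new HashMap<String, CameraPreset>();
    static {
        // "pits" roughly matches the defaults; "field" is what the arena
        // lights tend to demand (values illustrative only).
        PRESETS.put("pits",  new CameraPreset(40, "fluorescent1", "auto"));
        PRESETS.put("field", new CameraPreset(0,  "fixed",        "hold"));
    }

    public static void main(String[] args) {
        CameraPreset p = PRESETS.get("field");
        System.out.println("field brightness = " + p.brightness);
    }
}

That way your pit and practice-field settings survive while you tune the arena ones.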
I’ll say that the software was successful enough: the ghosted targets were at the proper locations when we were within about 3 feet, which was all we were hoping for given the state of our shooter anyway. 50% miss rate, though I don’t necessarily blame that on the software… I’m sure the targets were in the right places (range & bearing) on the radar screen.
The Java radar screen on our laptop allows the second driver to follow targets around with the turret, using a cursor that represents the turret bearing and shooter speed. There isn’t much automation in it; creating automated tracking algorithms was more than our capacity and timetable allowed. So it is up to the second driver to lead the shots on moving targets. He was getting decent practice with it Saturday, but we didn’t have enough time to tweak it.
Unfortunately, the judges don’t believe the screen is good enough for an award because of its lack of automation. 90% of their questions were geared toward automation and the accuracy of the target-tracking system. Forget the fact that we did some pretty technical processing and re-wrapping of data between the camera, cRIO, DS, and laptop, or the fact that we use a closed-loop turret tracking algorithm, or the fact that we used a standardized DoD system-of-systems approach to the whole thing. We’ll have to make an animation and presentation for all of that stuff one of these days…
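For what it’s worth, the closed-loop turret part is nothing exotic. A stripped-down sketch of the idea (Java, made-up names and gains, proportional control only, not our actual code) looks like this:

// Minimal sketch of closed-loop turret bearing control: proportional
// control on bearing error. Gains and limits are made up; tune for real.
class TurretLoop {
    static final double KP = 0.8;             // proportional gain
    static final double MAX_TURN_RATE = 1.0;  // normalized motor output

    // Both bearings in radians, measured in the same reference frame.
    static double turnCommand(double targetBearing, double turretBearing) {
        double error = targetBearing - turretBearing;
        // Wrap the error into [-pi, pi] so the turret takes the short way.
        while (error > Math.PI)  error -= 2 * Math.PI;
        while (error < -Math.PI) error += 2 * Math.PI;
        double cmd = KP * error;
        return Math.max(-MAX_TURN_RATE, Math.min(MAX_TURN_RATE, cmd));
    }
}

Leading moving targets would mean adding the target’s angular velocity times the ball’s flight time to targetBearing before computing the error; that’s the part we left to the second driver.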