blucoat
12-01-2016, 11:00
This year my team is considering using vision to aim boulders at the high goals. If we do, it will be our first year using vision for aiming, although I've played around with vision stuff in OpenCV before.
I've downloaded the sample images of the field and built a pipeline in GRIP that picks out the goals reasonably well. However, I'm having trouble with some shots where the reflection on the tape is so bright it appears almost white. Unfortunately, these are the shots where the robot is facing the tape head-on, which seems like the most likely case when you're actually trying to line up a shot.
I have a few ideas for things that can mitigate the problem, and I was wondering if anyone with more experience could weigh in:
Doing two thresholds, one for green and one for white, and taking the bitwise OR of the two. I've tried this and it works OK, but it then picks up many white lights in the background as well. I was thinking it might be possible to filter those out later in the pipeline using some other criterion: since the goals are U-shaped, their contours should have a low area relative to their convex hulls (see the first sketch below).
Turning down the sensitivity of the camera. Does this fix it easily, or were the sample pictures already taken at the lowest sensitivity? What about using a different camera? Are the Microsoft or Axis cameras better at this? (I've also sketched below how I'd try setting exposure from OpenCV.)
Dimming the lights on the robot. Can the LEDs be dimmed with PWM, by feeding them a lower voltage, or just by covering half of them up?
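To make the first idea concrete, here's a rough sketch of what I have in mind in OpenCV/Python. The HSV ranges, the 0.4 solidity cutoff, and the image path are all placeholders I'd have to tune against the actual sample images:

```python
import cv2
import numpy as np

img = cv2.imread("field_sample.jpg")  # placeholder path for one of the sample images
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold 1: the green fringe of the retroreflection
green = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
# Threshold 2: the blown-out white core (any hue, low saturation, high value)
white = cv2.inRange(hsv, np.array([0, 0, 240]), np.array([180, 40, 255]))

mask = cv2.bitwise_or(green, white)

# Keep only contours with low solidity (area / convex hull area):
# the U-shaped goal is mostly hollow, while a round white light
# nearly fills its own hull.
found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = found[-2]  # index works on both OpenCV 3 (3 return values) and 4 (2)
goals = []
for c in contours:
    area = cv2.contourArea(c)
    hull_area = cv2.contourArea(cv2.convexHull(c))
    if hull_area > 0 and area / hull_area < 0.4:
        goals.append(c)
```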
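And for the camera sensitivity idea, this is roughly how I'd try forcing a low exposure from OpenCV. From what I understand, whether these properties actually take effect depends on the camera and driver, so the values here are guesses that would need testing:

```python
import cv2

cap = cv2.VideoCapture(0)
# 0.25 selects manual-exposure mode on many V4L2 drivers; 0.75 is auto.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
# The valid range and units for exposure are driver-specific; tune empirically.
cap.set(cv2.CAP_PROP_EXPOSURE, -10)
cap.set(cv2.CAP_PROP_BRIGHTNESS, 30)

ok, frame = cap.read()
```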
It seems like this is a challenge that comes back every year, so I'm curious to hear how teams who frequently use vision have handled it.