Our programming team is insisting that using the camera to determine the distance from our robot to the backboard will be either unreliable or too slow. I have great faith in them and believe they could program anything, but I was wondering whether this is true. I feel like there has to be a way to determine the distance using the camera, since a proximity sensor might not work depending on the angle. Any thoughts, programmers?
They are probably saying that because it would be a hard project to take on, but they are partially correct: you cannot expect the camera to have the same reaction time as the human eye.
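For what it's worth, a single camera can give a rough distance estimate from the apparent size of a target of known width, using the pinhole camera model. This is only a sketch of the idea; the focal length and backboard width below are made-up numbers, and you would need to calibrate the real values for your own camera and target.

```python
# Hedged sketch: pinhole-model distance estimation.
# distance = (real target width * focal length in pixels) / apparent width in pixels
# The focal length (500 px) and target width (0.61 m) are assumptions for illustration.

def distance_from_pixel_width(target_width_m, pixel_width, focal_length_px):
    """Estimate distance to a target of known real-world width."""
    return target_width_m * focal_length_px / pixel_width

# A 0.61 m wide target appearing 100 px wide through a 500 px focal-length lens:
print(distance_from_pixel_width(0.61, 100, 500.0))  # 3.05 (meters)
```

You calibrate the focal length once by placing the target at a measured distance and noting its pixel width; after that, the same formula runs in a few microseconds per frame, so the slow part is the image processing that finds the target, not the math.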
You may want to read through the vision white paper and perhaps open and run the example code.
But there are many field elements that can be used for alignment. The camera is certainly not the only way to prepare for shooting.
You could do some digging yourself; this is what I pulled up on up-to-date camera tracking using stereoscopic vision. It's easily over a year's worth of FIRST team programming, but neat to know about.
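The core idea behind stereoscopic depth is simple even if a full implementation is not: two cameras a known baseline apart see the same feature at slightly different pixel columns, and that shift (the disparity) gives depth. A minimal sketch of the formula, with invented baseline, focal length, and disparity values:

```python
# Hedged sketch: stereo depth from disparity.
# depth = baseline * focal length (px) / disparity (px)
# All numbers here are assumptions for illustration, not calibrated values.

def stereo_depth(baseline_m, focal_length_px, disparity_px):
    """Depth of a feature seen by two parallel cameras a baseline apart."""
    return baseline_m * focal_length_px / disparity_px

# Cameras 0.20 m apart, 500 px focal length, feature shifted 25 px between views:
print(stereo_depth(0.20, 500.0, 25.0))  # 4.0 (meters)
```

The hard part, and where the year of work goes, is reliably finding the *same* feature in both images every frame; the depth math itself is one line.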
There is also a way to track the brightest spot in an image using cameras and pixel differences, though I'm not sure exactly how to do it. I googled it and found a nice link (it isn't hard). It does pose its own problems, though: the crowd is visible behind the polycarbonate backboard, and other robots may be trying to track the same spot (which is usually a red laser-pointer dot).
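The brightest-spot idea really is simple at its core: scan every pixel of a grayscale frame and keep the coordinates of the largest intensity. A toy sketch, where the frame is just a 2D list of 0-255 values invented for illustration (a real frame would come from your camera library):

```python
# Hedged sketch: find the brightest pixel in a grayscale frame.
# The frame below is a made-up 3x3 grid standing in for a real camera image.

def brightest_spot(frame):
    """Return the (x, y) coordinates of the highest-intensity pixel."""
    best_val, best_xy = -1, (0, 0)
    for y, row in enumerate(frame):
        for x, val in enumerate(row):
            if val > best_val:
                best_val, best_xy = val, (x, y)
    return best_xy

frame = [
    [10, 20, 30],
    [15, 250, 40],   # 250 plays the role of the laser-dot hot spot
    [12, 18, 25],
]
print(brightest_spot(frame))  # (1, 1)
```

This also shows why the crowd behind the backboard is a problem: a camera flash or white shirt can easily out-bright your laser dot, which is why real implementations threshold on color (red channel) as well as brightness.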
There are a lot of other ways to find your position on the field relative to the target! Think of independent systems when playing the field. Even though FIRST promotes teamwork, if you want to do well in autonomous this year I would suggest a form of triangulation, since you already have the basketballs. There is also a way to avoid tracking the backboard at all, but I'll let you guys figure it out as we did ;). It's a lot of fun to brainstorm! I'll give you a hint though: algorithms, trigonometry, and ultrasonic sensors.
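To make the hint a bit more concrete without giving the whole game away: with two ultrasonic rangefinders mounted a known distance apart on the front of the robot, both facing a flat wall, a little trigonometry recovers both your skew angle and your perpendicular distance, no camera required. The sensor spacing and readings below are invented for illustration:

```python
import math

# Hedged sketch: pose relative to a flat wall from two forward-facing
# ultrasonic rangefinders mounted sensor_spacing apart. All readings
# here are made-up numbers for illustration.

def wall_pose(d_left, d_right, sensor_spacing):
    """Return (skew_angle_rad, perpendicular_distance) to a flat wall."""
    # The difference in readings over the sensor spacing gives the wall's tilt.
    angle = math.atan2(d_right - d_left, sensor_spacing)
    # Average beam-direction distance, projected perpendicular to the wall.
    perp = ((d_left + d_right) / 2.0) * math.cos(angle)
    return angle, perp

angle, dist = wall_pose(1.00, 1.20, 0.50)
print(math.degrees(angle))  # about 21.8 degrees of skew
print(dist)                 # roughly 1.02 m to the wall
```

Ultrasonics update fast and don't care about crowd lighting, which is exactly why they pair well with (or replace) camera tracking for lining up a shot.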