I set up an Axis camera at the front of the bot and sent the video feed into RoboRealm to find where the highly saturated points sit in the field of view. The saturation range is fairly open so we don't risk missing the LEDs (just a precaution), so I blob the matches into objects, throw out all but the biggest one, and put a crosshair value at the centre of that object. This way, despite lights above the field, white banners lying around, etc., we can find where the largest bright object on the field is. The angle of our camera also lets us see both sides of the field--even if they aren't 100% in view, the target fills enough of the image to outdo any other possible source of light.
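RoboRealm does all of that filtering with its built-in modules, but as a rough Java sketch of the same idea (the Blob class and the sample boxes here are made up purely for illustration), the "keep the largest blob and crosshair its centre" step boils down to:

```java
import java.util.Arrays;
import java.util.List;

public class LargestBlobDemo {
    // Hypothetical blob record: bounding box of one bright region in the frame.
    static class Blob {
        final int x, y, width, height;
        Blob(int x, int y, int width, int height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
        int area() { return width * height; }
    }

    public static void main(String[] args) {
        // Pretend these came out of the saturation threshold: an overhead
        // light, some banner glare, and the actual hot-goal LED strip.
        List<Blob> blobs = Arrays.asList(
            new Blob(40, 10, 15, 15),    // ceiling light
            new Blob(200, 120, 30, 8),   // banner glare
            new Blob(90, 60, 120, 40));  // hot-goal target (largest)

        // Keep only the biggest blob; everything smaller is treated as noise.
        Blob biggest = blobs.get(0);
        for (Blob b : blobs) {
            if (b.area() > biggest.area()) biggest = b;
        }

        // Crosshair = centre of the surviving blob.
        int crossX = biggest.x + biggest.width / 2;
        int crossY = biggest.y + biggest.height / 2;
        System.out.println("Crosshair at (" + crossX + ", " + crossY + ")");
    }
}
```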
The X and Y values of the crosshair are then sent over NetworkTables to the cRIO, and a global variable in my command-based code tells me which goal starts out hot: left or right. If it's hot on the side I'm on, I shoot and move forward while retracting our arm (maybe a video will come soon!). If it isn't, a timer I made waits 5.5 seconds until the goal inevitably flips to hot, then fires. If I'm shooting at the centre, I use that global variable (isRight) to drive forward, spin a bit with a PID loop, and fire.
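Stripped of the WPILib plumbing, the cRIO-side decision logic looks roughly like the sketch below. The driveForward/spinToward/shoot methods are hypothetical stand-ins for our actual commands, the 320px image width is an assumption, and Thread.sleep stands in for the timer:

```java
public class HotGoalAuto {
    // Assumed 320px-wide feed: a crosshair X past the midpoint means the
    // largest blob (the hot goal) is on the right half of the field.
    static final double IMAGE_MID_X = 160.0;
    static final double HOT_SWITCH_WAIT_S = 5.5; // wait for the goals to swap

    static boolean isRight; // global flag: which side started out hot

    public static void runAuto(double crosshairX, boolean ourSideIsRight,
                               boolean aimingAtCenter) throws InterruptedException {
        isRight = crosshairX > IMAGE_MID_X;

        if (aimingAtCenter) {
            // Centre shot: drive up, pivot toward whichever side is hot, fire.
            driveForward();
            spinToward(isRight);
            shoot();
        } else if (isRight == ourSideIsRight) {
            // Our side is already hot: fire immediately and drive out.
            shoot();
            driveForward();
        } else {
            // Wrong side is hot: wait 5.5 s for the swap, then fire.
            Thread.sleep((long) (HOT_SWITCH_WAIT_S * 1000));
            shoot();
            driveForward();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Example: blob seen on the right, we're set up on the left.
        runAuto(240.0, false, false);
    }

    // Hypothetical stand-ins for the real drivetrain/shooter commands.
    static void driveForward() { System.out.println("driving forward"); }
    static void spinToward(boolean right) {
        System.out.println("PID spin " + (right ? "right" : "left"));
    }
    static void shoot() { System.out.println("firing"); }
}
```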
The 2-ball auto does some other hoomawazits, and our 3-ball only checks whether the goal is hot after everything else is done. It would be nifty to have a 3-ball hot auto, but one can only dream.
I think I wandered out of vision processing and into full autonomous. Oh well, it's late, and six weeks of programming really changes how you function.
