We want to make more use of vision on our robot this year. Last year we only auto-aligned with the HP station, since that was all we needed, but this year our drivers are requesting a bit more.
We currently have a Limelight 3, a Limelight 2, and a Raspberry Pi.
Our plan is to use the Limelight 3 to detect AprilTags, align with a tag while shooting, and measure the distance between our robot and the speaker. We also want the Limelight 2 to aid the Limelight 3, and to double as a camera so the drive team can see better. Right now we are unsure whether to use the stock Limelight software or PhotonVision, but we are leaning towards PhotonVision.
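For the align-and-range part, this is roughly what we have in mind, based on the trig approach from the Limelight docs (reading `tv`/`tx`/`ty` from NetworkTables). The mount height, tag height, camera pitch, and kP below are placeholders we would measure and tune on our actual robot:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SpeakerAlign {
  // Placeholder geometry -- measure these on the real mount.
  private static final double CAMERA_HEIGHT_M = 0.50;   // lens height off the floor
  private static final double TAG_HEIGHT_M = 1.45;      // speaker AprilTag center height
  private static final double CAMERA_PITCH_DEG = 25.0;  // upward tilt of the Limelight
  private static final double TURN_KP = 0.03;           // tune on the robot

  private final NetworkTable limelight =
      NetworkTableInstance.getDefault().getTable("limelight");

  /** Proportional turn command toward the tag; 0 if no target is seen. */
  public double turnOutput() {
    boolean hasTarget = limelight.getEntry("tv").getDouble(0.0) >= 1.0;
    double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset, degrees
    return hasTarget ? -TURN_KP * tx : 0.0;
  }

  /** Horizontal distance to the tag, from the vertical angle offset. */
  public double distanceToTagMeters() {
    double ty = limelight.getEntry("ty").getDouble(0.0); // vertical offset, degrees
    double angle = Math.toRadians(CAMERA_PITCH_DEG + ty);
    return (TAG_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(angle);
  }
}
```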
My team also wants to use the Raspberry Pi to automate aligning with and intaking Notes on the floor, using the pre-built WPILibPi image. We would use OpenCV to find a Note that we are right next to, turn toward it, and then drive forward. We are hoping a Pi 3 can process this fast enough.
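Here is a minimal sketch of the OpenCV pipeline we are picturing on the Pi (simplified compared to the WPILibPi example project, which also handles a camera config file). The `noteVision` table and key names, the HSV bounds, and the team number are all placeholders we made up:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public final class NoteVision {
  public static void main(String... args) {
    NetworkTableInstance nt = NetworkTableInstance.getDefault();
    nt.startClient4("NoteVision");
    nt.setServerTeam(9999); // placeholder team number

    NetworkTableEntry noteOffset = nt.getTable("noteVision").getEntry("noteOffset");
    NetworkTableEntry noteSeen = nt.getTable("noteVision").getEntry("noteSeen");

    UsbCamera camera = CameraServer.startAutomaticCapture();
    camera.setResolution(320, 240); // keep it small so a Pi 3 can keep up
    CvSink sink = CameraServer.getVideo();

    Mat frame = new Mat();
    Mat hsv = new Mat();
    Mat mask = new Mat();

    while (!Thread.interrupted()) {
      if (sink.grabFrame(frame) == 0) {
        continue; // frame grab failed; details are in sink.getError()
      }
      // Threshold for the Note's orange; these HSV bounds are rough guesses
      // that would need tuning under real field lighting.
      Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
      Core.inRange(hsv, new Scalar(5, 120, 120), new Scalar(25, 255, 255), mask);

      List<MatOfPoint> contours = new ArrayList<>();
      Imgproc.findContours(mask, contours, new Mat(),
          Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

      // Pick the biggest orange blob and report how far off-center it is,
      // normalized to [-1, 1] so the robot code can feed it into a turn loop.
      MatOfPoint biggest = null;
      double biggestArea = 200; // ignore tiny specks
      for (MatOfPoint c : contours) {
        double area = Imgproc.contourArea(c);
        if (area > biggestArea) {
          biggestArea = area;
          biggest = c;
        }
      }
      if (biggest != null) {
        Rect box = Imgproc.boundingRect(biggest);
        double centerX = box.x + box.width / 2.0;
        noteOffset.setDouble((centerX - frame.width() / 2.0) / (frame.width() / 2.0));
        noteSeen.setBoolean(true);
      } else {
        noteSeen.setBoolean(false);
      }
    }
  }
}
```

The robot code would then turn until `noteOffset` is near zero and drive forward while `noteSeen` stays true.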
Before we make substantial progress on this, is there anything we should be concerned about? Are there other options to consider? Is our idea feasible? I am also aware that due to bandwidth limits we can only stream one of our cameras to the driver station; will switching between the cameras work?
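For the switching question, the pattern we were planning to try is WPILib's `addSwitchedCamera`, where a single MJPEG server feeds the dashboard and we just swap which source backs it, so only one camera's bandwidth is used at a time. The Limelight stream URLs below are assumptions based on the default hostname and port 5800; we would double-check those on our setup:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.HttpCamera;
import edu.wpi.first.cscore.HttpCamera.HttpCameraKind;
import edu.wpi.first.cscore.MjpegServer;
import edu.wpi.first.wpilibj.XboxController;

public class CameraSwitcher {
  // Stream URLs assume default Limelight hostnames; with two Limelights,
  // one of them has to be renamed (here "limelight-two" is our guess).
  private final HttpCamera limelight3 = new HttpCamera(
      "LL3", "http://limelight.local:5800/stream.mjpg", HttpCameraKind.kMJPGStreamer);
  private final HttpCamera limelight2 = new HttpCamera(
      "LL2", "http://limelight-two.local:5800/stream.mjpg", HttpCameraKind.kMJPGStreamer);

  // One server feeds the dashboard; we only change which camera backs it.
  private final MjpegServer server = CameraServer.addSwitchedCamera("DriverCam");
  private final XboxController controller = new XboxController(0);

  /** Call from robotPeriodic(): flip the stream on button presses. */
  public void periodic() {
    if (controller.getAButtonPressed()) {
      server.setSource(limelight3);
    } else if (controller.getBButtonPressed()) {
      server.setSource(limelight2);
    }
  }
}
```

Does that approach hold up, or is there a better way to stay under the bandwidth cap?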