Would it be feasible to use object detection to automatically score coral on unfilled branches?
Even if it's solvable, the arbitrary orientations and quantities on L1 seem like they would be extremely challenging to automate in any way.
Looking forward to five official scorers at each event this year!
Even if so, this is the type of thing that’s better off not being automated.
Do you think it would be worth it to make an alignment algorithm? Someone else I discussed this with also mentioned buttons that correspond to each branch, but I was concerned about the precision of robot alignment needed to pull off such a system. What do you think?
How do you score coral without vision-based alignment? Relying on driver lineup is kind of crazy imo. Also, since the reef branches are symmetric for each of the 6 AprilTag locations on the hexagonal reef (2 branches with a stack of levels per tag), you really only need to implement 6 scoring actions, right? The rest is aligning to the tag.
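For what it's worth, here's a rough sketch of what I mean by a small fixed set of scoring actions, as a plain Java enum. The names and the height numbers are placeholders for illustration, not measured values; swap in whatever your superstructure actually uses.

```java
// Hypothetical enumeration of the per-face scoring targets: each reef face
// (one AprilTag) has a left and a right branch, each with several levels.
// Heights below are placeholder elevator setpoints, not field measurements.
public enum ReefTarget {
    LEFT_L2(true, 0.80), LEFT_L3(true, 1.20), LEFT_L4(true, 1.80),
    RIGHT_L2(false, 0.80), RIGHT_L3(false, 1.20), RIGHT_L4(false, 1.80);

    public final boolean leftBranch;          // which branch on the face we aim at
    public final double elevatorHeightMeters; // placeholder setpoint for that level

    ReefTarget(boolean leftBranch, double elevatorHeightMeters) {
        this.leftBranch = leftBranch;
        this.elevatorHeightMeters = elevatorHeightMeters;
    }
}
```

Once alignment to the tag is handled, each of these is just "go to setpoint, place, retract."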
Oh, I certainly misunderstood the question.
You’re asking if you can code your robot to use vision to score on the REEF.
I thought you were asking if FIRST can use vision to score the match.
Big difference; good luck.
My experience in this field says "no." The base is square; line yourself up that way. I would use that and design your robot around it.
If you have 10 people dedicated to vision programming and tuning though, go for it.
In terms of a button scheme, having a single button cycle through all the branch heights is probably your best bet.
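Something like this is all the cycling button really needs to do. This is a minimal command-based Java sketch; the `LevelSelector` class, level names, and the binding shown in the comment are just illustrative, not a specific team's implementation.

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/** Hypothetical selector that one button cycles through the branch levels. */
public class LevelSelector {
    // Placeholder level names; rename to match your superstructure's setpoints.
    private static final String[] LEVELS = {"L1", "L2", "L3", "L4"};
    private int index = 0;

    /** Advance to the next level and publish it so the operator can see it. */
    public void cycle() {
        index = (index + 1) % LEVELS.length;
        SmartDashboard.putString("Selected level", LEVELS[index]);
    }

    public String selected() {
        return LEVELS[index];
    }
}

// Example binding in RobotContainer (assuming an operator controller on port 1):
// CommandXboxController operator = new CommandXboxController(1);
// LevelSelector selector = new LevelSelector();
// operator.y().onTrue(Commands.runOnce(selector::cycle));
```

Publishing the selection to the dashboard also doubles as the "which level am I on" display mentioned later in the thread.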
Yeah, of course vision will be used to align the robot to the base or whatnot with the AprilTag, but I'm wondering if it would be possible to align a robot mechanism to a specific branch, similar to how Amazon's Sparrow or a shelf-stocking robot works, to make depositing coral more accurate and efficient. Do you think it would be feasible in any capacity?
Also, I definitely see what you mean about just making 6 scoring actions that you could connect to the previously mentioned button system or other controller. I’m just trying to see if it’s possible to automate it further.
That would definitely be much more realistic for week 1 competitions and whatnot. That was probably going to be our original idea, but I’ll try looking into further experimentation, as we do have a few people that can do vision testing.
Thank you for the cycling button option, that honestly seems way less convoluted than a panel with multiple buttons! I’ll see if I can attach a display that visually shows which branch the scoring action is cycled to.
It’s probably possible, but compared to what teams did last year for notes, where the highest extent of object detection was spotting something that looked like a Cheerio, the reef branches are rather abstract, and even building something that could accurately differentiate between them seems outlandish (at least to me). It’s not really my area of expertise. It might be quite easy if you know how to train these object-recognition models, but it’s worlds apart from what our team has done for object detection, is what I’m saying.
Makes sense. It’s definitely a more complex idea for sure. If only they were differently colored or something lol. I’ll probably focus on a system similar to the one you mentioned and just experiment around with it. Do you think machine learning or something of the sort would be practical in this application at this level?
Thanks for the clarification! I was a bit confused at your first message, but it would be really neat if there was a system like that in place. Probably would help to make matches cycle faster.
Likely not very practical. You could, however, have the driver specify which branch the robot should score on, and then the robot could carry out that action.
You don’t need object detection. You need to read AprilTags to align your drivebase properly relative to the branches. Using object detection for this purpose would be difficult imo.
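To make that concrete, "align to the branch" is basically one pose transform. Here's a sketch using WPILib geometry classes; the standoff and lateral offsets are placeholder numbers you'd have to tune, and getting the tag's field pose (from an AprilTagFieldLayout or a vision estimate) is left out.

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;

public class ReefAlignment {
    // Placeholder offsets: how far in front of the tag the bumpers should stop,
    // and how far sideways the left/right branch sits from the tag center.
    private static final double STANDOFF_METERS = 0.45;
    private static final double BRANCH_OFFSET_METERS = 0.16;

    /**
     * Compute a drivebase goal pose for one branch, given the field pose of the
     * reef face's AprilTag. Feed the result to whatever drive-to-pose command
     * your drivetrain already has.
     */
    public static Pose2d branchGoal(Pose2d tagFieldPose, boolean leftBranch) {
        double lateral = leftBranch ? BRANCH_OFFSET_METERS : -BRANCH_OFFSET_METERS;
        // Step out from the tag along its normal, shift toward the chosen branch,
        // and turn to face back toward the reef.
        return tagFieldPose.transformBy(
            new Transform2d(STANDOFF_METERS, lateral, Rotation2d.fromDegrees(180)));
    }
}
```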
I’ve considered this for a while and I think it’s actually quite practical. A simple way to achieve it: first use AprilTags to drive your robot to a specific position with your object-detection camera facing the reef. Then divide the camera image into 6 parts; if you detect a coral in a part, that branch has already been used. Finally, your program chooses a branch with no coral on it and scores.
Even though object detection can sometimes be unstable, given that the pose of the reef in the camera will be very similar each time, you can probably achieve this with a proper model and enough fine-tuning.
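A bare-bones version of that picking logic could look like the sketch below. It's pure Java with no camera library; it assumes your detection pipeline hands you the horizontal pixel centers of the coral it sees, which is an assumption about how you'd feed it, and the "6 parts" split is done as equal vertical slices.

```java
import java.util.List;

public class BranchPicker {
    /**
     * Split the image into equal vertical slices (one per visible branch), mark a
     * slice occupied if any detected coral's center-x falls inside it, and return
     * the index of the first empty slice, or -1 if every branch appears filled.
     *
     * @param coralCenterXs pixel x-coordinates of detected coral centers
     * @param imageWidth    camera image width in pixels
     * @param slices        number of branches visible from this camera pose (e.g. 6)
     */
    public static int firstOpenBranch(List<Double> coralCenterXs, int imageWidth, int slices) {
        boolean[] occupied = new boolean[slices];
        double sliceWidth = (double) imageWidth / slices;
        for (double x : coralCenterXs) {
            int slice = (int) (x / sliceWidth);
            if (slice >= 0 && slice < slices) {
                occupied[slice] = true;
            }
        }
        for (int i = 0; i < slices; i++) {
            if (!occupied[i]) {
                return i;
            }
        }
        return -1; // every visible branch already has coral
    }
}
```

If your camera sees levels as well as branches, the same idea works with a grid of regions instead of vertical slices.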
But the question, still, is not how but why. Why not just have 2 drivers, one controlling the robot and the other selecting the branch? Automating the process, though cool, doesn’t bring an extra advantage in the game.
We don’t trust other teams; having a single driver lets our human players cover both the processor and the station.
Do you expect to have your team man both positions in every match?
Yes, if we can figure out how to automate the scoring process.
Your alliance partners might not support this idea.
We did it last year, and I think 1690 and 2056 did the same.