We are considering using Photon for the upcoming Romi Challenge, and we were wondering whether it can detect multiple types of objects at once. If so, could someone point us to resources on how to set that up?
First off, I suspect the problem may be that we are ascribing an ML-style multi-class output to a traditional vision system, but we would like to know whether this is possible.
Assuming everything is colored such that the tuning can differentiate between objects, can one instance of Photon simultaneously identify a golf ball and the goal (retroreflective tape vs. a specific color)?
If so, I would assume each would live in its own pipeline, correct? Is it possible to identify which pipeline has a result, or does Photon only look at one pipeline at a time (whichever you tell it to use)?
The Pixycam can do this (and may be more up to the task if we go that route, though it means more hardware and more power draw), as can the WPILib ML tools (which we are not set up to use at the moment).
You can have multiple pipelines, and multiple cameras. To use them together, I think you basically need some sort of focus-switching logic in your robot code.
There are a couple of ways to think about it.
In the first case, you go to collect golf balls (focus/pipeline 1) and then take them to the goal (focus/pipeline 2), so you only have to switch once.
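To make the first case concrete, here is a minimal sketch of what that one-time switch could look like. Note the `PhotonCamera` class below is a self-contained stand-in, not the real PhotonLib class; its `setPipelineIndex` method mirrors the one PhotonLib provides, and the pipeline index numbers are assumed to match whatever you configure in the Photon web UI:

```java
public class FocusSwitchDemo {
    // Assumed pipeline indices, as configured in the Photon web UI.
    static final int BALL_PIPELINE = 0;
    static final int GOAL_PIPELINE = 1;

    /** Minimal stand-in for PhotonLib's PhotonCamera class. */
    static class PhotonCamera {
        private int pipelineIndex = 0;

        void setPipelineIndex(int index) { pipelineIndex = index; }

        int getPipelineIndex() { return pipelineIndex; }

        /** Pretend result: which object class the active pipeline reports. */
        String getLatestResultLabel() {
            return pipelineIndex == BALL_PIPELINE ? "golf ball" : "goal";
        }
    }

    public static void main(String[] args) {
        PhotonCamera camera = new PhotonCamera();

        // Phase 1: collecting golf balls.
        camera.setPipelineIndex(BALL_PIPELINE);
        System.out.println("Tracking: " + camera.getLatestResultLabel());

        // Phase 2: carrying balls to the goal -- the single switch.
        camera.setPipelineIndex(GOAL_PIPELINE);
        System.out.println("Tracking: " + camera.getLatestResultLabel());
    }
}
```

The point is just that "focus switching" is one call per phase change, so a two-phase match only ever needs one switch.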
In the second case, you use the location of one object to decide what to do with another, for example driving to the golf ball that is closest to the target. You would have to find the target, then find the golf balls, then collect the ball and find the target again.
In that second case, you might want to switch back and forth multiple times to check whether the golf ball that was closest in the earlier frame is still the closest, which you could do.
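A sketch of the decision step in that second case: after one frame on the goal pipeline and one on the ball pipeline, pick the ball whose bearing is nearest the target's bearing. Comparing yaw angles is a crude proxy for "closest to the target" (it ignores distance), and the yaw values here are made-up stand-ins for what a real pipeline result would report:

```java
import java.util.List;

public class AlternatingFocus {
    /** Pick the ball whose bearing (yaw) is nearest the target's bearing. */
    static double closestBallYaw(double targetYaw, List<Double> ballYaws) {
        double best = ballYaws.get(0);
        for (double yaw : ballYaws) {
            if (Math.abs(yaw - targetYaw) < Math.abs(best - targetYaw)) {
                best = yaw;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Frame N on the goal pipeline: target seen at -5 degrees.
        double targetYaw = -5.0;
        // Frame N+1 on the ball pipeline: three balls in view.
        List<Double> ballYaws = List.of(20.0, -3.0, 12.0);

        // The ball at -3 degrees is nearest the target's bearing.
        System.out.println(closestBallYaw(targetYaw, ballYaws)); // prints -3.0
    }
}
```

Re-running this after each pair of frames is the "switch multiple times" update: if a different ball becomes closest, the choice changes.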
That is at least what I’m thinking. I don’t think there is a way to identify both objects in the same frame, but I don’t know that for certain.