Vuforia in FRC: Is AR on an FRC robot possible?

I saw this year that PTC is again giving some awesome tools to FIRST students in the virtual KOP for 2020. I know that in FTC, Vuforia is offered as a way to do AR-based field navigation and recognition, which I understand is because FTC doesn’t use retro-reflective tape the way an FRC field does. As I see it, PTC is marketing all these tools to both FRC and FTC students. Is it possible to run Vuforia AR on an FRC robot?


Lordy, I hope there isn’t tape.


I have very little experience working with that particular software suite. However, given that it’s “just software”, I’d feel safe saying the answer to the question as written is “Yes”.

The question I have - “Why”?

Limelight seems to be the gold standard in vision processing for FRC at the moment. Is there a significant advantage a hunk of hardware running Vuforia has over a Limelight?


Vuforia is great, and I have wondered why it isn’t used in FRC. Yes, it can do AR, but it also has really powerful object recognition tools built in. The FTC teams in our club have used Vuforia very successfully to read the beacon colors in Velocity Vortex, recognize the jewel colors and find the block colors in Relic Recovery, and find the order of the minerals in Rover Ruckus to drive auto routines. It does not require retro-reflective tape and seems to do a really good job of recognizing objects. It runs really well on the FTC phones, and I assume it could also run on a RasPi or similar FRC co-processor.
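For reference, bringing Vuforia up in the FTC SDK is only a few lines. Here’s a minimal sketch (the license key is a placeholder, and the class names are from recent FTC SDK releases as I remember them):

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;

@TeleOp(name = "VuforiaDemo")
public class VuforiaDemo extends LinearOpMode {
    @Override
    public void runOpMode() {
        // Vuforia needs a (free) developer key from PTC; this is a placeholder.
        VuforiaLocalizer.Parameters params = new VuforiaLocalizer.Parameters();
        params.vuforiaLicenseKey = "YOUR_VUFORIA_KEY";
        params.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;

        // The localizer owns the phone camera and runs Vuforia's trackers.
        VuforiaLocalizer vuforia = ClassFactory.getInstance().createVuforia(params);

        waitForStart();
        // ...load trackables and read poses here...
    }
}
```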

It’s been out there for a while. I am still wondering why no one seems to be using it in FRC.


We (sister team 7152) tried using a Magic Leap this year, but the head ref pointed out it’s illegal because it prevents refs from seeing our eyes, which was a safety concern. (I’m assuming when you say AR you are talking about a driver station HUD.)

Hmmmm…

This is definitely good, but it isn’t exactly the historical FRC vision problem. Rather, FRC’s problem has been “identify a vector to (or position relative to) a retro-reflective target with high framerate, low latency, and high robustness”.

Usually the simple solution of LED ring + dark exposure + pixel threshold (maybe with dilation/erosion) is easy/fast/robust enough to identify the target.
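To illustrate just how simple that pipeline is, the whole thing is a handful of OpenCV calls. A rough sketch in Java (the HSV bounds are made-up placeholder values for a green LED ring, not tuned numbers):

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class RetroTargetPipeline {
    // Returns contours of bright-green blobs in a camera frame.
    public static List<MatOfPoint> findTargets(Mat frame) {
        // Convert to HSV so the threshold is mostly exposure-independent.
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

        // Threshold: with a dark exposure, mostly the retro-reflected ring light survives.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

        // Erode then dilate to drop speckle noise while keeping the target blob.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.erode(mask, mask, kernel);
        Imgproc.dilate(mask, mask, kernel);

        // Each remaining contour is a candidate target.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}
```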

Additionally, Android phone integration, though possible, is non-trivial.

When you say “object-based detection”, how would this apply to the FRC application? Would you still be looking for brightly-colored clumps of pixels? Or would you try to identify a more complex shape (e.g., the rocket from 2019)?

Welcome to Chief Delphi!!

FYI - I am assuming something different. Vuforia is a suite of software for driving AR applications, not AR viewing hardware.

The functionality (I assume) OP refers to is the ability to identify objects in a stream of camera images, and determine camera orientation relative to that object (which is a building block of being able to do AR well).

By the way, good job at the South Florida Regional this year too!

Thank you!!! Great job at Curie!!


FTC teams using phones as their co-processor is really just a matter of convenience: they already have the phones on the robot as part of the control system, and the phones have good cameras built in, so the video signal is already on that device. I believe Vuforia is able to run on a RasPi with USB camera feeds as well.

FTC absolutely uses this to identify scoring elements and then sends commands to the robot to navigate toward that scoring element. The difference is in how the scoring element is identified by Vuforia versus the typical FRC vision systems.

As I understand it, you can train Vuforia to recognize certain objects. The images on the walls of the FTC field are trained into the software so that it recognizes them. You could also train the software to recognize a rocket or cargo ship and then identify the scoring location on that object (the shape of the cutout around the hatch location, for example). So, instead of targeting the two green blobs, you would target the actual hatch opening or cargo opening.
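To make that concrete, reading the pose of one of those trained wall images looks roughly like this in the FTC SDK. This is a sketch, not competition code: it assumes a `vuforia` localizer has already been initialized as in the earlier snippet, and the “RoverRuckus” asset name is from that season’s SDK samples.

```java
import org.firstinspires.ftc.robotcore.external.matrices.OpenGLMatrix;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackable;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackableDefaultListener;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackables;

// Inside runOpMode(), after creating `vuforia` as in the earlier sketch:
VuforiaTrackables targets = vuforia.loadTrackablesFromAsset("RoverRuckus");
targets.activate();

for (VuforiaTrackable target : targets) {
    VuforiaTrackableDefaultListener listener =
            (VuforiaTrackableDefaultListener) target.getListener();
    if (listener.isVisible()) {
        // Robot pose relative to the trained image, updated each frame.
        OpenGLMatrix location = listener.getUpdatedRobotLocation();
        if (location != null) {
            // ...feed the pose into navigation/auto routines...
        }
    }
}
```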

Vuforia could also recognize dropped hatch panels or cargo rolling around on the floor, and you could automate the acquisition of those objects. It might also be able to recognize whether a hatch or cargo was already in a scoring location. Overall, based on what little I know about Vuforia, it would allow a much wider range of vision tasks than simply navigating toward green blobs. But I am not an expert in the software (I have just watched it used very successfully in the last several FTC games, with no retro-reflective tape).


Gotcha, thanks!

Guessing a bit here, and interested to have someone with more machine vision experience chime in too:

I see a common thread between the FRC and FTC applications: a target with high contrast against its background. In FRC, it’s retro-reflective tape on a non-retro-reflective surface. In FTC, it’s a known, texture-dense image on a flat white background. In both cases, there’s high contrast for a machine vision routine to identify the target and extract features. Even under weird lighting conditions, dark targets still look quite different from a light background.

However, I think training on the rocket itself would be a harder problem to solve, given the lack of consistent background contrast. Exactly what’s around or behind the rocket will vary match to match and event to event. I think there’s a tradeoff between an algorithm that can identify lots of different-looking rockets (but is less sure about exactly where “the” rocket is) and an algorithm that precisely identifies the rocket’s location (but produces many false negatives when trying to find the rocket in the first place).

Surely there must be a middle ground. Personally, though, I’d be concerned that the sub-inch precision desirable in both FRC and FTC applications would be difficult to achieve anywhere along that middle ground.

My personal concern is grounded in two observations:

  1. From the bit I’ve tinkered with it myself: it’s hard.
  2. I’ve not seen a robot on Einstein advertise it as a feature that was critical to their execution.

Not to say someone shouldn’t be trying though.

That being said, if FRC moves to FTC-like vision targets this year rather than retro-reflective tape, Vuforia or something similar suddenly becomes a much better option than rolling your own.

Echoing Marshall’s idea from a few threads ago - large QR codes seem like a good idea to me.
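For what it’s worth, if the targets were QR codes, even rolling your own stays pretty small. A sketch assuming OpenCV 4.x, which ships a built-in QR detector:

```java
import org.opencv.core.Mat;
import org.opencv.objdetect.QRCodeDetector;

public class QrTargetReader {
    private final QRCodeDetector detector = new QRCodeDetector();

    // Returns the decoded payload, or an empty string if no code is found.
    // `corners` is filled with the four corner points of the code in the
    // image; for alignment you'd use those corners, not the payload.
    public String read(Mat frame, Mat corners) {
        return detector.detectAndDecode(frame, corners);
    }
}
```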

Well, in Velocity Vortex, Vuforia was able to detect the color of the beacon with no background (or I should say a random off-field background). We had some trouble at our state tournament that year because the venue used blue tarps to cover the floor and it was hard to see blue against the blue background, but our programmer was able to adjust the code for that. At Super-Regionals, we had trouble with stage lighting that was causing a lot of glare off of background objects; we had to add a card covering a portion of the field of view to block that out of the camera image.

In Rover Ruckus, the silver and gold minerals were just sitting on the grey field tiles and the off-field background was in the field of view. Vuforia was able to handle that quite easily.

The field-wall targets, while bounded by a white outline, are placed with the off-field background in view. The field walls in FTC are clear, just like in FRC, so you can see the background beyond the image in the field of view.

In general, I would say that Vuforia seems able to recognize trained objects against a very busy background and does not need the kind of color-specific filtering we use with retro-reflective tape in FRC.

Frankly, I dislike the vision targets in FTC, and I hope they never make their way to FRC.

We joke that they are the most expensive game element on the field.

Vuforia is cool software, but I hate how FTC vision has largely become “use Vuforia” and “use the TensorFlow model Google made”. At least in FRC, teams have a fighting chance at making their own vision solution.


Gotcha, thanks. Actually, I probably should have been more specific - I don’t think background noise in general is the concern, only whether the thing you are looking for has a consistent background and reasonably consistent lighting.

Your three examples align roughly with my expectations, but they skew my thoughts toward “this is probably more feasible than I’m guessing”.

Just to clarify for others too - I’m assuming that “accurately aligning to a target” and “identifying that a target is present” are very different levels of difficulty.