If a team wants to be super precise with their vision, they could use both, though you would probably need multiple coprocessors for that; I doubt a Pi/Arduino/roboRIO could handle both pipelines at once.
For basic positioning, you can get essentially the same information out of either approach, with slightly different precision. The cone nodes sit at fixed positions relative to each AprilTag, and the cube nodes sit at fixed positions relative to the retroreflective tape, so it's mostly a question of which set of offsets you'd rather (not) calculate.
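To illustrate the offset idea, here's a minimal sketch: once your vision pipeline gives you a target's position on the field, a nearby node is just that position plus a fixed, known offset. The function name and all numbers below are made up for illustration, not real 2023 field measurements or any team's actual code.

```python
def node_position(target_pos, offset):
    """Return a node's field position (meters) given a detected target's
    field position and the node's fixed offset relative to that target.
    Works the same whether the target is an AprilTag or a tape strip;
    only the offset values change."""
    return tuple(t + o for t, o in zip(target_pos, offset))

# Hypothetical example: a tag at (1.00, 2.50) with a cone node
# offset 0.56 m along the +y axis.
cone_node = node_position((1.00, 2.50), (0.00, 0.56))
```

The same helper works for either approach; the tradeoff the post describes is just which target type leaves you with fewer (or simpler) offsets to maintain.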
Our team will likely go with AprilTags just because processing them is more reliable and less of a pain than the tape.