Vision Processing will give you more reproducible results. That said, there's nothing stopping you from using both.
IMUs are tricky. Assuming it's a good IMU, you may have some luck using the gyroscope to line you up properly, although distance from the tower is something your driver will have to manage. It should be noted that most IMUs suffer some amount of sensor drift (the ADIS16448 is notorious for this, with the NavX family being noticeably better), so watch out for this and be sure to calibrate your sensors at the start of each match.
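The usual start-of-match calibration amounts to measuring the gyro's bias while the robot is known to be stationary and subtracting it afterwards. A minimal sketch of that idea (the function names and sample values are made up for illustration, not any particular IMU's API):

```python
# Estimate gyro bias while the robot sits still, then subtract it
# from later readings. Rates are in deg/s; values are illustrative.

def estimate_bias(rest_samples):
    """Average rate read while the robot is stationary (the bias)."""
    return sum(rest_samples) / len(rest_samples)

def corrected_rate(raw_rate, bias):
    """Remove the stationary bias from a live reading."""
    return raw_rate - bias

# Readings taken at rest during a pre-match calibration window:
rest_samples = [0.11, 0.09, 0.10, 0.12, 0.08]
bias = estimate_bias(rest_samples)          # ~0.10 deg/s
print(corrected_rate(5.10, bias))           # a real turn now reads ~5.00 deg/s
```

This only removes the constant part of the drift; slow bias wander over a match is why you re-check against vision rather than trusting the gyro forever.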
Vision Processing (assuming a rigid camera mount) does not drift (unless your camera is literally garbage). Vision Processing is good at telling you where the target is, and with some math it can tell you the angle you're at and how far you are from it. For this to be useful, though, you have to tune your vision tracking, which in all honesty isn't a huge hassle. Cameras, however, aren't a good source for a continuous feedback loop, since their update rate is quite slow (30-60 Hz in the best case).
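For the curious, the "some math" is mostly two formulas: the target's horizontal pixel offset maps to an angle through the camera's field of view, and the vertical angle plus the known height difference gives you range. A sketch under assumed camera parameters (the resolution, FOV, and heights here are placeholders you'd replace with your own):

```python
import math

# Assumed camera parameters -- substitute your own.
IMAGE_WIDTH_PX = 640
HORIZONTAL_FOV_DEG = 60.0

def angle_offset_deg(target_x_px):
    """Horizontal angle from the camera axis to the target, in degrees."""
    focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))
    return math.degrees(math.atan((target_x_px - IMAGE_WIDTH_PX / 2) / focal_px))

def distance_m(target_height_m, camera_height_m, camera_pitch_deg, target_pitch_deg):
    """Range to the target from the height difference and vertical angle."""
    return (target_height_m - camera_height_m) / math.tan(
        math.radians(camera_pitch_deg + target_pitch_deg))

# A target dead-center in the image has zero angular offset:
print(angle_offset_deg(IMAGE_WIDTH_PX / 2))  # 0.0
```

For example, a target 2 m above the camera seen at a combined vertical angle of 45 degrees is 2 m away.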
Since IMU sensor drift is quite gradual, we can afford to use an IMU for alignment in the short term; over a span of a few seconds it is quite accurate, and it updates fast. I would suggest something like the following:
Grab Camera Image -> Process to find angle offset from target -> Use gyro to align to said offset -> Grab Camera Image to confirm alignment.
This (mostly) overcomes both sensor drift and the slow update rate of cameras.
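The camera-to-gyro handoff above can be sketched as a short loop: vision supplies the setpoint, the gyro closes the fast inner loop, and vision confirms the result. The three callables here are hypothetical placeholders for whatever camera and IMU interfaces your robot actually has:

```python
TOLERANCE_DEG = 1.0  # assumed "close enough" threshold

def align(get_vision_offset, get_heading, turn_to_heading, max_passes=3):
    """Vision sets the target heading; the gyro executes the turn; vision confirms.

    get_vision_offset() -> angle to target in degrees (slow, ~30-60 Hz, drift-free)
    get_heading()       -> current gyro heading in degrees (fast, drifts slowly)
    turn_to_heading(h)  -> gyro-based turn to absolute heading h
    """
    for _ in range(max_passes):
        offset = get_vision_offset()
        if abs(offset) <= TOLERANCE_DEG:
            return True                          # camera confirms alignment
        turn_to_heading(get_heading() + offset)  # fast gyro-based correction
    return False                                 # give up after a few passes
```

The re-confirmation pass is what catches both gyro drift and any error in the first vision measurement, at the cost of one extra (slow) camera frame.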