Check out team 254's vision seminar at champs last year:
https://www.youtube.com/watch?v=rLwOkAJqImo
To answer your question: if you want to see the data coming out of GRIP while debugging on your own machine, point it at localhost (127.0.0.1).
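If it helps, here's a minimal sketch of a desktop test client that reads GRIP's published contour report over NetworkTables. It assumes the default publish table (GRIP/myContoursReport), the default key names, and the 2017-era NetworkTables API, so treat the names as placeholders for whatever your pipeline actually publishes:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripDebugClient {
    public static void main(String[] args) throws InterruptedException {
        // Run as a client against the local NetworkTables server
        // while GRIP is running on the same machine.
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("localhost"); // or "127.0.0.1"
        NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

        while (true) {
            // GRIP publishes parallel arrays, one element per contour.
            double[] centerX = table.getNumberArray("centerX", new double[0]);
            double[] area = table.getNumberArray("area", new double[0]);
            for (int i = 0; i < centerX.length; i++) {
                System.out.println("contour " + i + ": centerX=" + centerX[i]
                        + " area=" + area[i]);
            }
            Thread.sleep(100);
        }
    }
}
```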
A couple of key points about processing the video stream:
1. Underexpose your camera. 254 goes into more detail about why in the video, but it does wonders.
2. You can also eliminate noise by applying a Gaussian blur and running cv::erode followed by cv::dilate (with the same number of iterations). I'd also use the Filter Contours block and filter by area; see the sketch after this list.
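For point 2, here's a rough OpenCV sketch of that blur → threshold → erode → dilate → filter-by-area chain (the same thing the GRIP blocks do). The HSV bounds and minimum area are made-up placeholders; tune them for your target and lighting:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class TargetPipeline {
    // Placeholder values; tune these for your target and lighting.
    static final Scalar HSV_MIN = new Scalar(60, 100, 100);
    static final Scalar HSV_MAX = new Scalar(90, 255, 255);
    static final double MIN_AREA = 100.0;

    public static List<MatOfPoint> process(Mat frame) {
        // Blur first so single-pixel noise doesn't survive the threshold.
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(frame, blurred, new Size(5, 5), 0);

        // HSV threshold to a binary mask of "target-colored" pixels.
        Mat hsv = new Mat();
        Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, HSV_MIN, HSV_MAX, mask);

        // Erode then dilate with the SAME iteration count: erosion kills
        // small specks, dilation grows the surviving blobs back to size.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.erode(mask, mask, kernel, new Point(-1, -1), 2);
        Imgproc.dilate(mask, mask, kernel, new Point(-1, -1), 2);

        // Find contours, then keep only those above a minimum area,
        // like GRIP's Filter Contours block.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        contours.removeIf(c -> Imgproc.contourArea(c) < MIN_AREA);
        return contours;
    }
}
```

For point 1, if you're running WPILib's camera server, VideoCamera.setExposureManual() is one way to force a low exposure in code; otherwise set it in your camera's driver settings.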
Also, make sure your camera is mounted on the centerline of your robot; it helps a lot when it comes to alignment. Less important, but mounting the camera behind your center of rotation makes the perceived alignment angle smaller than the true heading error, which damps oscillation.
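To see why, here's a rough sketch of the geometry (all symbols made up here): if the target is offset sideways by x at a distance d ahead of your center of rotation, and the camera sits a distance b behind that center, then

θ_camera = atan(x / (d + b)) < atan(x / d) = θ_true

so the camera always reports a slightly smaller angle than the true heading error, which acts like built-in damping on your turn controller.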
On 687 last year, a few of these problems kept our vision system from working. First, the camera had a lot of latency; make sure you account for that latency when you write your vision code. Second, make sure your drive PID loop is tuned well: a badly tuned loop will make your drivebase spin around seemingly at random.
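On the latency point, one common trick is to pair each vision frame with the gyro heading the robot had when the frame was captured, then drive to an absolute heading setpoint instead of chasing a stale angle. Here's a rough sketch of that idea; the latency constant is a made-up placeholder you'd have to measure for your own camera:

```java
import java.util.Map;
import java.util.TreeMap;

public class LatencyCompensator {
    // Placeholder: measure your actual camera + processing delay in seconds.
    static final double CAMERA_LATENCY_SEC = 0.15;

    // Timestamped log of gyro headings, filled every robot loop iteration.
    private final TreeMap<Double, Double> headingLog = new TreeMap<>();

    // Call at ~50 Hz with the current time and gyro heading.
    public void logHeading(double timeSec, double headingDeg) {
        headingLog.put(timeSec, headingDeg);
        headingLog.headMap(timeSec - 1.0).clear(); // keep only the last second
    }

    // When a vision angle arrives, anchor it to the heading the robot had
    // when the frame was captured, not the heading it has now.
    public double targetHeading(double visionAngleDeg, double nowSec) {
        Map.Entry<Double, Double> entry =
                headingLog.floorEntry(nowSec - CAMERA_LATENCY_SEC);
        if (entry == null) {
            entry = headingLog.firstEntry(); // log too short; best we can do
        }
        double headingAtCapture = (entry != null) ? entry.getValue() : 0.0;
        return headingAtCapture + visionAngleDeg;
    }
}
```

Feed targetHeading() into your (well-tuned!) drive PID loop as the setpoint.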
I hope this helps, good luck this season!