Are You Using A Camera To Align Your Shooter?

After watching the week 0.5 and week 1 events, I was curious how many teams are actually using a camera to make sure they can shoot accurately. Thanks!

We use a camera through the driver station, without vision tracking, and it works pretty well. The top ‘prong’ of our shooter sits right in the middle of the picture, so it somewhat resembles a scope and you know where you are aiming. It’s quite accurate IMO once you get some practice with it.

Using RoboRealm, we have a camera on our shooter that overlays a crosshair on our feed showing where the arc of the ball should go when shot from a set distance (we used 8 ft at West Valley; this may change). This gave us almost 100% accuracy on our shots at West Valley. We hope to have an auto-aimer by Central (it will take a couple of pictures, average them, figure out a good spot for a better shot, and auto-align the robot to take it), but we won’t know until after Saturday (6 hours with the bot) whether it will be ready by then.
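
Not what the RoboRealm pipeline above actually does, but the “take a couple of pictures, average, then align” idea sketched in Python/OpenCV looks roughly like this; the HSV thresholds, image width, and sample count are made-up placeholders:

```python
import cv2
import numpy as np

# Made-up HSV bounds for a retroreflective target lit by a green LED ring.
LOWER_HSV = np.array([60, 100, 100])
UPPER_HSV = np.array([90, 255, 255])

def target_center_x(frame):
    """Return the x coordinate of the largest thresholded blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # [-2] keeps this working across OpenCV versions that return 2 or 3 values.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    x, _, w, _ = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x + w / 2.0

def averaged_aim_error(cap, samples=5, image_width=320):
    """Grab a few frames, average the target x, return the pixel error from center."""
    xs = []
    for _ in range(samples):
        ok, frame = cap.read()
        if ok:
            x = target_center_x(frame)
            if x is not None:
                xs.append(x)
    if not xs:
        return None
    # Positive error means the target sits right of center.
    return sum(xs) / len(xs) - image_width / 2.0
```

The returned pixel error would then feed whatever turning routine the drive code uses to line up before the shot.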

Any teams on here have any success with GRIP?

Yes

Did you run into any issues on the actual field?

Answer unclear. Try again later.

(Dunno, we were running custom OpenCV code at Merrimack and switched to GRIP to iterate faster.)

Interesting thought, perhaps next year all design should be done based off a list of possible ideas and a magic 8 ball…

We use a camera through the DS as well, where our driver lines up the robot with a permanent vertical line, though we do use vision tracking for auto mode.
Our driver never actually sees the high goal opening on the DS when shooting.
We all look up afterward to see if it actually went in.

Hey, in your original question you didn’t specify “on actual field” :stuck_out_tongue:

If it’s any consolation, if it doesn’t work on a real field I’m gonna have a REALLY long day tomorrow trying to debug it from 2,500 miles away.

That being said, I’ve found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask “what if” is incredible. It’s missing some features I’d like to see (most notably, a better ability to do feature refinement).

For reference, we had two different groups of students working in parallel. One was using GRIP and the other was building a custom solution running on a BeagleBone Black using Python and OpenCV. The core issue we had with the BBB solution was communicating with the roboRIO. GRIP handling that out of the box has been the real difference maker, in that it allows the robot code to treat the camera as essentially a simple sensor we can access from the SmartDashboard.
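
To make that “camera as a simple sensor” point concrete: GRIP publishes its contour reports over NetworkTables, so reading the result can be as small as the sketch below. It assumes a Publish ContoursReport step named myContoursReport and uses pynetworktables; the table and key names depend on how the pipeline is configured, and the Java/C++ NetworkTables APIs look much the same.

```python
from networktables import NetworkTables

# Connect to the robot's NetworkTables server (address is a placeholder).
NetworkTables.initialize(server="roborio-XXXX-frc.local")
grip = NetworkTables.getTable("GRIP/myContoursReport")

def largest_target_center_x(default=None):
    """Return the centerX of the biggest contour GRIP published, or default."""
    xs = grip.getNumberArray("centerX", [])
    areas = grip.getNumberArray("area", [])
    if not xs or not areas:
        return default
    biggest = max(range(len(areas)), key=lambda i: areas[i])
    return xs[biggest]
```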

In short, I hope it works on the field because I’m a big fan of it. But at the same time, I hope it doesn’t because it means I’m gonna have to work that much harder to find a competitive edge in the future :stuck_out_tongue: (But no, I really hope it works)

I’ll make sure to drop you a line if GRIP suddenly starts to stand for, “Generating Radioactive Interdimensional Portals”

I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a real-time processing agent in the pits and then just copying over the HSV values, erosion kernels, etc. to the CV code.

  1. GRIP, if run on the dashboard, requires sending the camera data over a second time in addition to the DS feed, which clogs up bandwidth and laptop CPU.
  2. GRIP, if run on the roboRIO, never even worked for us. It gave us some error and the program never wrote to NetworkTables.
  3. GRIP on the roboRIO also requires installing and running the Java VM, which is quite a lot of overhead if you aren’t a Java team.
  4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver or for robot code that uses the result.
  5. You learn more if you do it by hand! :smiley: It’s not hard (a rough sketch follows below), though getting an MJPEG stream is a pain.
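
For what the “by hand” route might look like: a minimal Python sketch that pulls an MJPEG stream with OpenCV and applies the HSV threshold and erosion values tuned in GRIP. The camera URL, bounds, and kernel are placeholders, and OpenCV needs FFmpeg support for HTTP MJPEG capture to work, which is part of the pain.

```python
import cv2
import numpy as np

# Placeholder Axis-style MJPEG URL; substitute your camera's address.
cap = cv2.VideoCapture("http://10.TE.AM.11/mjpg/video.mjpg")
lower = np.array([60, 100, 100])    # HSV lower bound copied from GRIP
upper = np.array([90, 255, 255])    # HSV upper bound copied from GRIP
kernel = np.ones((3, 3), np.uint8)  # erosion kernel copied from GRIP

while True:
    ok, frame = cap.read()
    if not ok:
        continue  # MJPEG streams drop frames; just try the next one
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.erode(cv2.inRange(hsv, lower, upper), kernel)
    # [-2] keeps this working across OpenCV versions that return 2 or 3 values.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        target = (x + w / 2.0, y + h / 2.0)
        # Ship `target` to the robot however you like (NetworkTables, UDP, ...).
```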

We use a Windows tablet with LabVIEW and do local USB camera processing, but we also forward coordinate data to the driver station so the driver can see the alignment and verify it before taking a shot. We have a working auto mode that crosses one of five defenses and takes a high goal shot. We have over 90% accuracy with auto aim; the driver just enables auto aim but still has manual control.
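
Their implementation is in LabVIEW, but the “driver enables auto aim yet keeps manual control” idea boils down to something like this Python sketch; the gain and deadband values are made up:

```python
KP = 0.015       # proportional gain on pixel error (placeholder value)
DEADBAND_PX = 5  # "close enough to shoot" window in pixels (placeholder)

def rotation_command(auto_aim_held, target_error_px, stick_x):
    """Vision-corrected rotation while the auto-aim button is held and a
    target is in view; otherwise pass the driver's stick straight through."""
    if auto_aim_held and target_error_px is not None:
        if abs(target_error_px) < DEADBAND_PX:
            return 0.0  # lined up; the driver can take the shot
        return KP * target_error_px  # flip the sign if the robot turns the wrong way
    return stick_x      # manual control the rest of the time
```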

I wrote the vision code for my team in week 2… we finally just got autonomous working a day before our first competition. We will see how it does, but I am fairly confident, assuming we get everything set.

We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

From the limited testing we’ve done, the camera should be safe and secure in this rough game.

Would you be willing to share? I know my team has always had trouble getting any kind of vision to work while using LabVIEW.

We’re just up the road from you folks and happy to help anytime. Just let us know what you’re seeing (vision pun) and we will do what we can to help.

We had our shooter using GRIP and had great success at the practice fields we went to. We got to our first competition and GRIP interfered with the field software and never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling… “I bet they will release an update to GRIP after this…” :eek:

Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the roboRIO, or a coprocessor?

We’re intending to use GRIP on an onboard Raspberry Pi 2, but also using the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of.