Are You Using A Camera To Align Your Shooter?


alexpell00
09-03-2016, 16:55
After watching week 0.5 & 1 I was curious how many teams are actually using a camera to make sure they can shoot accurately. Thanks!

Anthony Galea
09-03-2016, 17:02
We use a camera through the driver station, not with vision tracking, and it works pretty well. Our top 'prong' of our shooter goes right in the middle of the picture so it somewhat resembles a scope so you know where you are aiming. It's quite accurate IMO if you get practice with it.

MikLast
09-03-2016, 17:53
Using RoboRealm, we have a camera on our shooter that overlays a crosshair on our feed showing where the arc of the ball should go when shot from a set distance (we used 8 ft at West Valley; this may change), and this gave us almost 100% accuracy on our shots at West Valley. We plan on hopefully having an auto-aimer (take a couple of pictures, average them, figure out a better shooting spot, and auto-align the robot to take that shot) by Central, but we won't know until after Saturday (6 hours with the bot) whether it will be ready by then.
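(If you'd rather not use RoboRealm, the same fixed-crosshair idea is easy to sketch in OpenCV. The snippet below is just an illustration: the camera index and the aim point are placeholder values you'd calibrate against where your shots actually land.)

# Rough sketch of a fixed aiming crosshair drawn onto a camera feed with OpenCV.
# AIM_X/AIM_Y are hypothetical calibration values, not anyone's real numbers.
import cv2

AIM_X, AIM_Y = 320, 180          # placeholder aim point for a 640x360 frame

cap = cv2.VideoCapture(0)        # USB camera index may differ on your setup
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # One vertical and one horizontal line through the calibrated aim point.
    cv2.line(frame, (AIM_X, 0), (AIM_X, h), (0, 255, 0), 1)
    cv2.line(frame, (0, AIM_Y), (w, AIM_Y), (0, 255, 0), 1)
    cv2.imshow("aim", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()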

IronicDeadBird
09-03-2016, 17:54
Any teams on here have any success with GRIP?

Andrew Schreiber
09-03-2016, 17:55
Any teams on here have any success with GRIP?

Yes

IronicDeadBird
09-03-2016, 17:56
Yes


Did you run into any issues on the actual field?

Andrew Schreiber
09-03-2016, 18:01
Did you run into any issues on the actual field?

Answer unclear. Try again later.


(Dunno, we were running custom opencv code at Merrimack, switched to GRIP to iterate faster)

IronicDeadBird
09-03-2016, 18:04
Answer unclear. Try again later.


(Dunno, we were running custom opencv code at Merrimack, switched to GRIP to iterate faster)

Interesting thought, perhaps next year all design should be done based off a list of possible ideas and a magic 8 ball...

waialua359
09-03-2016, 18:04
We use a camera through the driver station, not with vision tracking, and it works pretty well. Our top 'prong' of our shooter goes right in the middle of the picture so it somewhat resembles a scope so you know where you are aiming. It's quite accurate IMO if you get practice with it.
We use a camera through the DS as well, where our driver lines up the robot with a permanent vertical line, though we do use vision tracking for Auto mode.
Our driver never actually sees the high goal opening on the DS when shooting.
We all look up afterward to see if it actually goes in.

Andrew Schreiber
09-03-2016, 18:19
Interesting thought, perhaps next year all design should be done based off a list of possible ideas and a magic 8 ball...

Hey, in your original question you didn't specify "on actual field" :P

If it's any consolation, if it doesn't work on a real field I'm gonna have a REALLY long day tomorrow trying to debug it from 2500 miles away.

That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).

For reference, we had two different groups of students working in parallel. One was using GRIP and the other was building a custom solution running on a BeagleBone Black using Python and OpenCV. The core issue we had with the BBB solution was communicating with the roboRIO. GRIP handling that out of the box has been the real difference maker, in that it allows the robot code to treat the camera as essentially a simple sensor we can access from SmartDashboard.
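(For anyone wondering what that looks like on the code side, here's a rough pynetworktables sketch. The report name and roboRIO address are placeholders; the key names follow GRIP's Publish ContoursReport step, so check yours against what actually shows up in NetworkTables.)

# Rough sketch of reading a GRIP ContoursReport off NetworkTables, treating the
# camera as a simple sensor. "myContoursReport" and the server address are
# placeholders for whatever you configured; verify the keys with OutlineViewer.
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")   # placeholder address
grip = NetworkTables.getTable("GRIP/myContoursReport")

def largest_target_center_x(default=None):
    """Return the centre x (pixels) of the biggest contour GRIP found, if any."""
    xs = grip.getNumberArray("centerX", [])
    areas = grip.getNumberArray("area", [])
    if not xs or len(xs) != len(areas):
        return default
    biggest = max(range(len(areas)), key=lambda i: areas[i])
    return xs[biggest]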

In short, I hope it works on the field because I'm a big fan of it. But at the same time, I hope it doesn't because it means I'm gonna have to work that much harder to find a competitive edge in the future :P (But no, I really hope it works)

IronicDeadBird
09-03-2016, 18:21
Hey, in your original question you didn't specify "on actual field" :P

If it's any consolation, if it doesn't work on a real field I'm gonna have a REALLY long day tomorrow trying to debug it from 2500 miles away.

That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).

For reference, we had two different groups of students working in parallel. One was using GRIP and the other was building a custom solution running on a BeagleBone Black using Python and OpenCV. The core issue we had with the BBB solution was communicating with the roboRIO. GRIP handling that out of the box has been the real difference maker, in that it allows the robot code to treat the camera as essentially a simple sensor we can access from SmartDashboard.

In short, I hope it works on the field because I'm a big fan of it. But at the same time, I hope it doesn't because it means I'm gonna have to work that much harder to find a competitive edge in the future :P (But no, I really hope it works)

I'll make sure to drop you a line if GRIP suddenly starts to stand for, "Generating Radioactive Interdimensional Portals"

Arhowk
09-03-2016, 19:31
Any teams on here have any success with GRIP?

I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a real-time processing agent in the pits and then just copying the HSV thresholds, erosion kernels, etc. over to the CV code.


1. GRIP, if run on the dashboard, requires sending the camera feed over a second time in addition to the DS stream, which clogs up bandwidth and laptop CPU.
2. GRIP, if run on the RIO, never even worked for us. It gave us some error and never wrote to NetworkTables.
3. GRIP on the RIO also requires installing and running the Java VM, which is quite a lot of overhead if you aren't a Java team.
4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver (or the robot code, if it uses the result).
5. You learn more if you do it by hand! :D It's not hard. (Getting an MJPEG stream is a pain, though.)

sanelss
09-03-2016, 19:41
We use a Windows tablet with LabVIEW and do local USB camera processing, but we also forward coordinate data to the driver station so the driver can see the alignment and verify it before taking a shot. We have a working auto that crosses one of five defenses and takes a high goal shot. We have over 90% accuracy with auto-aim; the driver just enables auto-aim but still has manual control.

Fauge7
09-03-2016, 20:59
I wrote the vision code for my team in week 2... we finally got autonomous working a day before our first competition. We will see how it does, but I am fairly confident, assuming we get everything set.

s5511
10-03-2016, 08:38
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

IronicDeadBird
10-03-2016, 08:40
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

From the limited testing we've done: keep the camera safe and secure on the robot. It's a rough game.

Jonathan Ryan
10-03-2016, 08:54
We use a Windows tablet with LabVIEW and do local USB camera processing, but we also forward coordinate data to the driver station so the driver can see the alignment and verify it before taking a shot. We have a working auto that crosses one of five defenses and takes a high goal shot. We have over 90% accuracy with auto-aim; the driver just enables auto-aim but still has manual control.
Would you be willing to share? I know my team has always had trouble getting any kind of vision to work while using LabVIEW.

marshall
10-03-2016, 08:58
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

We're just up the road from you folks and happy to help anytime. Just let us know what you're seeing (vision pun) and we will do what we can to help.

Stappy
10-03-2016, 10:45
We had our shooter using GRIP and had great success at the practice fields we went to. We got to our first competition, and GRIP interfered with the field software and never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling, "I bet they will release an update to GRIP after this..." :eek:

vScourge
11-03-2016, 10:00
Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the roboRIO, or a coprocessor?

We're intending to use GRIP on an onboard Raspberry Pi 2, but also using the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of.

cjl2625
11-03-2016, 10:11
We are having great luck with RoboRealm; I find it more powerful than GRIP.
However, I don't think RoboRealm has updated NetworkTables compatibility for this year's control system. As a result, I devised a workaround using HTTP: RoboRealm sends data in an HTTP request to a local Python HTTP server, which then uses pynetworktables to share the data with the robot.
Perhaps it's not the most efficient method, but it works fine for me.
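(In case anyone wants to try the same trick, the receiving end looks roughly like the sketch below. The port, the parameter names, and the roboRIO address are placeholders; what RoboRealm actually sends depends entirely on how you configure its HTTP call.)

# Rough sketch of the bridge: a tiny local HTTP server that pulls numbers out of
# the request's query string and republishes them over NetworkTables. Parameter
# names, port, and server address are placeholders, not RoboRealm defaults.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")   # placeholder address
vision = NetworkTables.getTable("vision")

class VisionBridge(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        for key, values in params.items():
            try:
                # Forward each numeric query parameter (e.g. ?x=123&y=45) to NT.
                vision.putNumber(key, float(values[0]))
            except ValueError:
                pass
        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 8080), VisionBridge).serve_forever()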

euhlmann
11-03-2016, 12:54
We are using NI Vision to automatically line up and calibrate our shooter. We line up using a constant-rate turn, and then adjust the shooter based on empirical data we collected that relates the size and position of the target on screen to how we need to calibrate the shooter to make that shot.

However, I wouldn't recommend NI Vision. It's very poorly documented. Next year we will probably switch to OpenCV.

ThomasClark
11-03-2016, 18:04
That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).


That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue (https://github.com/WPIRoboticsProjects/GRIP/issues/new) and it doesn't look too hard, I can try implementing it.

I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a real-time processing agent in the pits and then just copying the HSV thresholds, erosion kernels, etc. over to the CV code.

Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.


1. GRIP, if run on the dashboard, requires sending the camera feed over a second time in addition to the DS stream, which clogs up bandwidth and laptop CPU.
2. GRIP, if run on the RIO, never even worked for us. It gave us some error and never wrote to NetworkTables.
3. GRIP on the RIO also requires installing and running the Java VM, which is quite a lot of overhead if you aren't a Java team.
4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver (or the robot code, if it uses the result).
5. You learn more if you do it by hand! :D It's not hard. (Getting an MJPEG stream is a pain, though.)


1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension (https://github.com/WPIRoboticsProjects/GRIP-SmartDashboard/releases). Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, like some teams have managed to do, Raspberry Pi) helps a lot.

Andrew Schreiber
11-03-2016, 18:24
That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue (https://github.com/WPIRoboticsProjects/GRIP/issues/new) and it doesn't look too hard, I can try implementing it.



Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.



1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension (https://github.com/WPIRoboticsProjects/GRIP-SmartDashboard/releases). Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, like some teams have managed to do, Raspberry Pi) helps a lot.



I was actually playing with adding some stuff myself.


Update from AZ - GRIP seems to be running fine on our machine. Post event I'll see if I can get our vision kids to post a bit more detail.

kylelanman
12-03-2016, 01:11
Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.

^This. We pulled down the GRIP source and did a Python port of the algorithm we had in GRIP. Because GRIP makes it so easy to try things, we ended up with a simple three-block algorithm; without the rapid prototyping it likely would have had a few extra unneeded steps. We made the Python program that runs on a BeagleBone Black publish values to NetworkTables identically to how GRIP does. This allows us to switch between GRIP on the DS and our Python program on the BBB without any code changes required. The robot is none the wiser as to which one is currently being used.
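(For anyone curious, a stripped-down sketch of that kind of port is below. The HSV bounds, report name, and roboRIO address are placeholders rather than our real tuned values, and the key names just mirror what GRIP's ContoursReport publishes.)

# Rough sketch of a GRIP-style pipeline ported to Python/OpenCV:
# HSV threshold -> find contours -> publish arrays under a GRIP-like table name
# so the robot code can't tell it apart from GRIP itself.
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")   # placeholder address
report = NetworkTables.getTable("GRIP/myContoursReport")    # placeholder name

HSV_LOW = np.array([60, 100, 100])    # placeholder bounds for a green target
HSV_HIGH = np.array([90, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    # [-2] keeps this working across OpenCV versions (3.x returns three values,
    # 2.x and 4.x return two).
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    centers_x, centers_y, areas = [], [], []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        centers_x.append(x + w / 2.0)
        centers_y.append(y + h / 2.0)
        areas.append(cv2.contourArea(c))
    # Same key names GRIP uses for its ContoursReport.
    report.putNumberArray("centerX", centers_x)
    report.putNumberArray("centerY", centers_y)
    report.putNumberArray("area", areas)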

sanelss
12-03-2016, 22:33
Would you be willing to share? I know my team has always had trouble getting any kind of vision to work while using LabVIEW.

I'll make a video and put together some documentation sooner or later, since it performed so well this year.

We put on a great game, but we just never really have any luck, so we aren't advancing to Worlds even though I think we have a great vision system and the robot performed beautifully :( So eventually, when I stop being so sour over our loss, I'll get around to doing it. You'll have to hold tight until then.

Jaci
13-03-2016, 00:05
We're using a Kinect this year for our vision processing, connected to a coprocessor running Freenect and OpenCV.

The Kinect uses an IR stream to find depth; however, you can also view the raw IR stream, which is extremely useful, as it means we don't need to have a big green LED on our robot's camera.

Our coprocessor (originally the Pine64, but changed to the raspberry pi because of driver support in libusb) finds the contours and bounding boxes of the high goal target. These values are sent to the RoboRIO via regular sockets. A single frame of data takes up only 32 bytes per target, which means we never run out of bandwidth. All this code is in C/C++.

Instead of doing some (unreliable) math to find the angle and distance to the target, we're just using a PID controller to align, with the error set to the deviation between the centre of the bounding box and the centre of the frame. For distance, we're just using a lookup table keyed on the target's position from the bottom of the frame in pixels. Calculating distance and angle explicitly is an unnecessary step and just complicates things.
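(In rough Python pseudocode, the idea looks like the sketch below. The gain and lookup-table numbers are placeholders rather than our real tuning, and our actual code is C/C++ on the coprocessor with Java/Kotlin on the RIO.)

# Rough sketch: a proportional term on the pixel error between the target's
# bounding-box centre and the frame centre for alignment, plus a lookup table
# keyed on where the target sits vertically in the frame for distance.
# All numbers below are placeholders, not real tuning.
import numpy as np

FRAME_WIDTH_PX = 320.0
KP = 0.004    # placeholder gain: steering output per pixel of error

# Placeholder table: target bottom-edge y position (px) -> distance (m)
TARGET_Y_PX = [40.0, 80.0, 120.0, 160.0, 200.0]
DISTANCE_M  = [5.0,  4.0,  3.2,   2.6,   2.1]

def align_output(bbox_center_x):
    """Steering command from the horizontal pixel error (sign = turn direction)."""
    error = bbox_center_x - FRAME_WIDTH_PX / 2
    return KP * error

def distance_from_frame(bbox_bottom_y):
    """Interpolate shot distance from the target's position in the frame."""
    return float(np.interp(bbox_bottom_y, TARGET_Y_PX, DISTANCE_M))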

While a target is in view, our flywheels will passively spin up to the appropriate speed to avoid taking time to spin up when we're ready to take a shot. This means the shot is taken almost instantly when I hit the 'shoot' button on the joystick.

Our vision code is written in C/C++ and our RoboRIO code is written in Java/Kotlin.

Jaci
13-03-2016, 00:11
As a further note, I'll be attempting to add Kinect support to GRIP after the season's conclusion. If you're planning to use a Kinect next year and want support for this in GRIP, keep an eye on #163 (https://github.com/WPIRoboticsProjects/GRIP/issues/163)

GuyM142
13-03-2016, 00:39
We use vision to align to the goal. We found that the rate at which we get new measurements from the vision processing was too low to work properly with PID, so we decided to use one image to calculate how many degrees to turn and then use the gyro to reach that angle. After settling, we take another image just to make sure the robot is on target.
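(The pixel-to-angle step is roughly the sketch below; the field of view and image width are placeholders for whatever camera you actually run, and the linear mapping is only an approximation of the true arctangent relationship.)

# Rough sketch of the "one image, then gyro" approach: convert the target's
# horizontal pixel offset into a heading change once, then let a gyro-based
# turn-to-angle routine close the loop. FOV and width are placeholder values.
HORIZONTAL_FOV_DEG = 60.0     # placeholder camera field of view
IMAGE_WIDTH_PX = 320.0        # placeholder image width

def pixels_to_degrees(target_center_x):
    """Approximate angle from the image centre to the target, in degrees."""
    offset_px = target_center_x - IMAGE_WIDTH_PX / 2
    return offset_px * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX)

def heading_setpoint(current_gyro_deg, target_center_x):
    """Heading to hand to the gyro turn controller after one vision sample."""
    return current_gyro_deg + pixels_to_degrees(target_center_x)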

Tottanka
13-03-2016, 06:14
We used GRIP to create a Python algorithm which we use with OpenCV.
The frame rate was too slow for us as well, so we take one shot of the target and use encoders to turn the robot through the calculated angle with PID.
Afterwards we double-check that it is indeed aligned, and that's it.
It takes us less than a second to align properly.

Greg McKaskle
13-03-2016, 09:58
However, I wouldn't recommend NI Vision. It's very poorly documented. Next year we will probably switch to OpenCV.

The documentation for NIVision is located in program files/National Instruments/Vision/Documentation. In the Vision folder, there should also be shortcuts to examples and to additional pdf documentation. If using a text language, the CVI documentation would be what you are interested in. I also highly recommend the vision concepts document.

I'm curious if your feedback is based on the contents of these documents, or on the ability to find them?

Greg McKaskle

cbf
13-03-2016, 23:46
The documentation for NIVision is located in program files/National Instruments/Vision/Documentation. In the Vision folder, there should also be shortcuts to examples and to additional pdf documentation. If using a text language, the CVI documentation would be what you are interested in. I also highly recommend the vision concepts document.

I'm curious if your feedback is based on the contents of these documents, or on the ability to find them?

Greg --

I'm Erik's programming mentor on 2877. The documentation problems weren't so serious that we couldn't get our vision working. In fact, we just came back from the WPI District event, and we had the only 20-point autonomous there. We didn't miss a single vision-assisted goal the entire weekend, when our drive train was actually working.

The lack of documentation is for things like annotating an image (the imaqOverlay calls just didn't work for us), or what the "float" pixel value means in the imaqDraw family of calls. See my (essentially) unanswered questions at: https://decibel.ni.com/content/thread/43729?tstart=0.

Also, although we almost certainly had the best vision at WPI, doing it on the roboRIO is slow, so we'll probably go for an on-board coprocessor next year. And it's doubtful that the NI libraries would be available for any of the coprocessors we'd consider.

IronicDeadBird
14-03-2016, 12:27
The big thing I want to emphasize if you are using vision: the closer you are to the target, the less disruptive ambient light can be and the more accurate your vision can be.

AlexD744
14-03-2016, 21:56
Any teams on here have any success with GRIP?

We're using GRIP and competed this week at Orlando. No problems reported from the driver. In fact, the robot made every autonomous shot it took, so it was working pretty well. Let me know if you have any questions.

FRC2501
04-04-2016, 10:45
After watching week 0.5 & 1 I was curious how many teams are actually using a camera to make sure they can shoot accurately. Thanks!

Our team has been able to get our robot shooting with "vision assist".

We use two USB webcams, one plugged into the RIO and one into the Kangaroo. The Kangaroo runs GRIP and is set to broadcast the contours to NetworkTables. From there, our robot program (C++) on the RIO takes the numbers and adjusts our robot's drive motors to center the robot side to side. We then use a potentiometer on our shooter arm to adjust it to the right angle, and then it fires. We have been making about 24/25 shots at the batter and 20/25 from the defenses.

Our second camera, on the RIO, is so the drivers have some human vision on the field, because we haven't figured out how to publish video from the Kangaroo to the driver station.

We plan to get the robot shooting in auto this week and hope to do well at 10,000 Lakes in a few days!

euhlmann
04-04-2016, 11:22
The big thing I want to emphasize if you are using vision: the closer you are to the target, the less disruptive ambient light can be and the more accurate your vision can be.

We didn't notice a difference in vision performance between close and far shots at either of our two regionals. With correct camera exposure settings and tuned vision filtering, you can completely eliminate overhead lighting and tower LEDs from your image.
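(For what it's worth, if you're doing the capture yourself with OpenCV, forcing a low fixed exposure looks something like the sketch below. The property values are placeholders and very driver-dependent; cameras interpret the auto-exposure and exposure scales differently, so expect to experiment.)

# Rough sketch of turning off auto-exposure and dropping the exposure so the
# retroreflective target (lit by your ring light) stays bright while overhead
# lights go dark. The values here are driver-dependent placeholders.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # placeholder: manual mode on some V4L2 drivers
cap.set(cv2.CAP_PROP_EXPOSURE, -8)          # placeholder low exposure value

ok, frame = cap.read()   # frame should now be mostly dark apart from the target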

Mister_E
04-04-2016, 11:50
We are having great luck with RoboRealm; I find it more powerful than GRIP.
However, I don't think RoboRealm has updated NetworkTables compatibility for this year's control system. As a result, I devised a workaround using HTTP: RoboRealm sends data in an HTTP request to a local Python HTTP server, which then uses pynetworktables to share the data with the robot.
Perhaps it's not the most efficient method, but it works fine for me.

Using a Kangaroo & RoboRealm. No problems with NetworkTables; we even ran an instance of RoboRealm on the Driver Station computer to port NT values to an Arduino Nano (in a hat with LEDs) and an Uno that lit up to relay vision tracking and shooter speed progress. Highest OPR at the SBPLI Regional and went 8/9 shooting in auto in quals. I blame the carpet for our misses, heh.

ThomasClark
04-04-2016, 13:30
Our second camera, on the RIO, is so the drivers have some human vision on the field, because we haven't figured out how to publish video from the Kangaroo to the driver station.

Use the "Publish Video" operation in GRIP, and the GRIP SmartDashboard extension with the IP set to the Kangaroo's IP address.