Are You Using A Camera To Align Their Shooter?
After watching the week 0.5 and week 1 events, I was curious how many teams are actually using a camera to make sure they can shoot accurately. Thanks!
|
Re: Are You Using A Camera To Align Their Shooter?
We use a camera through the driver station, not with vision tracking, and it works pretty well. The top 'prong' of our shooter sits right in the middle of the picture, so it somewhat resembles a scope and you know where you are aiming. It's quite accurate IMO once you get some practice with it.
|
Re: Are You Using A Camera To Align Their Shooter?
Using RoboRealm, we have a camera on our shooter that overlays a crosshair on our feed showing where the arc of the ball should go when shot from a set distance (we used 8 ft at West Valley; this may change). This gave us almost 100% accuracy on our shots at West Valley. We hope to have an auto-aimer by Central (take a couple of pictures, average them, figure out the good location for a better shot, and auto-align the robot to take it), but we won't know until after Saturday (6 hours with the bot) whether it will be ready.
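For anyone curious, the averaging step is simple to sketch. A minimal version in Python; filtering on the median and the tolerance value are my own placeholders, not our actual pipeline:
Code:
# Minimal sketch of the "take a couple pictures and average" idea: given
# per-frame target center-x values (pixels) from whatever vision pipeline
# you already run, drop obvious outliers and average the rest before aligning.
def averaged_center(samples, tolerance=25):
    if not samples:
        return None
    median = sorted(samples)[len(samples) // 2]
    good = [s for s in samples if abs(s - median) <= tolerance]
    return sum(good) / len(good)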
|
Re: Are You Using A Camera To Align Their Shooter?
Have any teams on here had any success with GRIP?
|
Re: Are You Using A Camera To Align Their Shooter?
Quote:
Did you run into any issues on the actual field? |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
(Dunno; we were running custom OpenCV code at Merrimack and switched to GRIP to iterate faster.) |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
Our driver never actually sees the high goal opening on the DS when shooting. We all look up afterward to see if it actually goes in. |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
If it's any consolation, if it doesn't work on a real field I'm gonna have a REALLY long day tomorrow trying to debug it from 2,500 miles away. That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (most notably, a better ability to do feature refinement).
For reference, we had two groups of students working in parallel. One was using GRIP and the other was building a custom solution running on a BeagleBone Black using Python and OpenCV. The core issue we had with the BBB solution was communicating with the roboRIO. GRIP handling that out of the box has been the real difference maker: it allows the robot code to treat the camera as essentially a simple sensor we can access from SmartDashboard.
In short, I hope it works on the field, because I'm a big fan of it. But at the same time, I hope it doesn't, because then I'm gonna have to work that much harder to find a competitive edge in the future :P (But no, I really hope it works.)
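For anyone wondering what "camera as a simple sensor" looks like in practice, here's a rough Python sketch of reading the contour report GRIP publishes over NetworkTables. The publish name (GRIP/myContoursReport) is GRIP's default; the server address is a placeholder for your team number:
Code:
# Rough sketch: read the contour report GRIP publishes to NetworkTables.
# "GRIP/myContoursReport" is GRIP's default publish name; the server
# address is a placeholder (fill in your team number).
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")
grip = NetworkTables.getTable("GRIP/myContoursReport")

def largest_target_center_x():
    # GRIP publishes parallel arrays, one entry per contour it found.
    areas = grip.getNumberArray("area", [])
    xs = grip.getNumberArray("centerX", [])
    if not areas or len(xs) != len(areas):
        return None
    biggest = max(range(len(areas)), key=lambda i: areas[i])
    return xs[biggest]
|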
Re: Are You Using A Camera To Align Their Shooter?
We use a Windows tablet running LabVIEW to do local USB camera processing, and we also forward coordinate data to the driver station so the driver can see the alignment and verify it before taking a shot. We have working auto-crossing of one of the five defenses followed by a high goal shot. We have over 90% accuracy with auto-aim; the driver just enables auto-aim, but he still has manual control.
|
Re: Are You Using A Camera To Align Their Shooter?
I wrote the vision code for my team in week 2, and finally got autonomous working a day before our first competition. We will see how it does, but I am fairly confident, assuming we get everything set.
|
Re: Are You Using A Camera To Align Their Shooter?
We are another team trying to get vision working. We run LabVIEW code on the robot and are planning to use a Jetson TK1 with OpenCV. Do you guys have any suggestions or comments?
|
Re: Are You Using A Camera To Align Their Shooter?
We had our shooter using GRIP and had great success at the practice fields we went to. Then we got to our first competition, and GRIP interfered with the field software and never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling, "I bet they will release an update to GRIP after this..." :eek: |
Re: Are You Using A Camera To Align Their Shooter?
Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the roboRIO, or a coprocessor? We're intending to run GRIP on an onboard Raspberry Pi 2, but also to use the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of. |
Re: Are You Using A Camera To Align Their Shooter?
We are having great luck with RoboRealm; I find it more powerful than GRIP.
However, I don't think RoboRealm has updated its NetworkTables compatibility for this year's control system. As a result, I devised a workaround using HTTP: RoboRealm sends data in an HTTP request to a local Python HTTP server, which then uses pyNetworkTables to share the data with the robot. Perhaps it's not the most efficient method, but it works fine for me.
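In case it helps anyone, the bridge is only a few lines. This is a trimmed-down sketch, not my exact code; the URL parameter names and the "vision" table are assumptions you'd match to your own RoboRealm setup:
Code:
# Trimmed-down sketch of the HTTP-to-NetworkTables bridge described above.
# RoboRealm is configured to hit e.g. http://localhost:8080/?x=123&y=45;
# the parameter names and table name here are illustrative, not canonical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # placeholder roboRIO address
table = NetworkTables.getTable("vision")

class Bridge(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        for key in ("x", "y"):
            if key in params:
                table.putNumber(key, float(params[key][0]))
        self.send_response(200)
        self.end_headers()

HTTPServer(("localhost", 8080), Bridge).serve_forever()
|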
Re: Are You Using A Camera To Align Their Shooter?
We are using NI Vision to automatically line up and calibrate our shooter. We line up using a constant-rate turn and then adjust the shooter based on empirical data we collected relating the size and position of the target on screen to how we need to calibrate the shooter to make that shot.
However, I wouldn't recommend NI Vision: it's very poorly documented. Next year we will probably switch to OpenCV.
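The empirical-calibration idea itself is language-agnostic. Here's a sketch in Python with made-up sample points (on-screen target height in pixels vs. shooter angle), linearly interpolating between measurements:
Code:
# Sketch of calibrating from empirical data: linearly interpolate a shooter
# angle from the target's on-screen height. The sample points are made up;
# a real table comes from measured shots at known positions.
import bisect

CAL = [(20, 55.0), (35, 48.0), (60, 41.0), (90, 36.0)]  # (pixels, degrees)
HEIGHTS = [h for h, _ in CAL]

def shooter_angle(target_px):
    i = bisect.bisect_left(HEIGHTS, target_px)
    if i == 0:
        return CAL[0][1]
    if i == len(CAL):
        return CAL[-1][1]
    (h0, a0), (h1, a1) = CAL[i - 1], CAL[i]
    return a0 + (a1 - a0) * (target_px - h0) / (h1 - h0)
|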
Re: Are You Using A Camera To Align Their Shooter?
Quote:
Quote:
Quote:
Points 2-4 are valid, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, as some teams have managed to do, a Raspberry Pi) helps a lot. |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
I was actually playing with adding some stuff myself. Update from AZ: GRIP seems to be running fine on our machine. Post-event, I'll see if I can get our vision kids to post a bit more detail. |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
|
Re: Are You Using A Camera To Align Their Shooter?
Quote:
We put on a great game, but we just never really have any luck, so we aren't advancing to Worlds, even though I think we have a great vision system and the robot performed beautifully :( So eventually, when I stop being so sour over our loss, I'll get around to doing it; you'll have to hold tight until then. |
Re: Are You Using A Camera To Align Their Shooter?
We're using a Kinect this year for our vision processing, connected to a coprocessor running Freenect and OpenCV.
The Kinect uses an IR stream to find depth, but you can also view the raw IR stream, which is extremely useful: it means we don't need a big green LED on our robot's camera. Our coprocessor (originally a Pine64, but changed to a Raspberry Pi because of driver support in libusb) finds the contours and bounding boxes of the high goal target. These values are sent to the roboRIO via regular sockets; a single frame of data takes up only 32 bytes per target, which means we never run out of bandwidth.
Instead of doing some (unreliable) math to find the angle and distance to the target, we're just using a PID controller with the error set to the deviation between the centre of the bounding box and the centre of the frame to align. For distance, we're just using a lookup table keyed on the distance of the target from the bottom of the frame in pixels. Calculating distance and angle is an unnecessary step and just complicates things.
While a target is in view, our flywheels passively spin up to the appropriate speed, so we don't spend time spinning up when we're ready to take a shot. This means the shot is taken almost instantly when I hit the 'shoot' button on the joystick. Our vision code is written in C/C++ and our roboRIO code is written in Java/Kotlin.
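The alignment loop really is as simple as it sounds. A stripped-down illustration (in Python rather than our C/C++, and with placeholder gain and frame width rather than our tuned values):
Code:
# Stripped-down illustration of aligning on pixel error: the controller's
# error is the offset between the bounding-box centre and the frame centre.
# Gain and frame width are placeholders, not values tuned on a real robot.
FRAME_WIDTH = 640
KP = 0.004  # proportional gain: motor output per pixel of error

def turn_output(box_center_x):
    error = box_center_x - FRAME_WIDTH / 2
    return max(-1.0, min(1.0, KP * error))
|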
Re: Are You Using A Camera To Align Their Shooter?
As a further note, I'll be attempting to add Kinect support to GRIP after the season's conclusion. If you're planning to use a Kinect next year and want support for this in GRIP, keep an eye on #163.
|
Re: Are You Using A Camera To Align Their Shooter?
We use vision to align to the goal. We found that the rate at which we get new measurements from the vision processing was too low to work properly with PID, so we decided to use one image to calculate how many degrees to turn, and then used the gyro to reach that angle. After settling, we take another image just to make sure the robot is on target.
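For reference, the one-image angle calculation is just trigonometry on the pixel offset. A minimal sketch; the field of view and frame width are assumptions for a generic USB camera, not our actual values:
Code:
# Minimal sketch: convert the target's pixel offset into a turn angle,
# then feed that setpoint to a gyro-based turn. FOV and frame width are
# assumed values for a typical USB camera.
import math

FRAME_WIDTH = 640
HORIZONTAL_FOV_DEG = 60.0

def degrees_to_turn(target_center_x):
    offset_px = target_center_x - FRAME_WIDTH / 2
    # Tangent model: more accurate near the frame edges than a linear
    # pixels-per-degree ratio.
    focal_px = (FRAME_WIDTH / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))
    return math.degrees(math.atan2(offset_px, focal_px))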
|
Re: Are You Using A Camera To Align Their Shooter?
We used GRIP to prototype a Python algorithm which we use with OpenCV.
The frame rate was too slow for us as well, so we take one shot of the target and use encoders with PID to turn the robot the calculated angle to the target. Afterwards we double-check that it is indeed aligned, and that's it. It takes us less than a second to align properly.
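If anyone wants a starting point, the pipeline boils down to an HSV threshold plus contour extraction. A rough Python/OpenCV sketch; the threshold bounds are placeholders you'd tune in GRIP first:
Code:
# Rough sketch of a GRIP-style pipeline in OpenCV: HSV threshold for the
# retroreflective tape, then take the largest contour's bounding box.
# Threshold bounds are placeholders, not tuned values.
import cv2
import numpy as np

HSV_LO = np.array([60, 100, 100])
HSV_HI = np.array([90, 255, 255])

def find_target(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = result[-2]  # works across OpenCV 2/3/4 return signatures
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
|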
Re: Are You Using A Camera To Align Their Shooter?
Quote:
I'm curious if your feedback is based on the contents of these documents, or on the ability to find them?
Greg McKaskle |
Re: Are You Using A Camera To Align Their Shooter?
Quote:
I'm Erik's programming mentor on 2877. The documentation problems weren't so serious that we couldn't get our vision working. In fact, we just came back from the WPI district event, where we had the only 20-point autonomous, and we didn't miss a single vision-assisted goal the entire weekend (when our drivetrain was actually working).
The lack of documentation is for things like annotating an image (the imaqOverlay calls just didn't work for us), or what the "float" pixel value means in the imaqDraw family of calls. See my (essentially) unanswered questions at https://decibel.ni.com/content/thread/43729?tstart=0.
Also, although we almost certainly had the best vision at WPI, doing it on the roboRIO is slow, so we'll probably go for an on-board coprocessor next year. And it's doubtful the NI libraries would be available for any of the coprocessors we'd consider. |
Re: Are You Using A Camera To Align Their Shooter?
The big thing I want to emphasize if you are using vision: the closer you are to the target, the less ambient light gets in the way and the more accurate vision can be.
|
Re: Are You Using A Camera To Align Their Shooter?
Quote:
We use two USB webcams, one on the roboRIO and one on the Kangaroo. The Kangaroo runs GRIP and is set to broadcast the contours to NetworkTables. From there, our robot program (C++) on the RIO takes the numbers and adjusts the drive motors to center the robot side to side. We then use a potentiometer on our shooter arm to adjust it to the right angle, and then it fires.
We have been making about 24/25 shots at the batter, and 20/25 from the defenses. The second camera on the RIO is so the drivers have some human vision on the field, because we haven't figured out how to publish video from the Kangaroo to the driver station. We plan to get the robot shooting in auto this week and hope to do well at 10,000 Lakes in a few days! |