View Poll Results: Are You Tracking The Goal To Help Shoot?

Yes - And It Works!                133   49.63%
Kinda - We Are Working On It!      104   38.81%
Nope - We Are Not Using A Camera    31   11.57%

Voters: 268. You may not vote on this poll.
#1

Re: Are You Using A Camera To Align Your Shooter?
We're just up the road from you folks and happy to help anytime. Just let us know what you're seeing (vision pun) and we will do what we can to help.
#2

Re: Are You Using A Camera To Align Your Shooter?
We had our shooter using GRIP and had great success at the practice fields we visited. When we got to our first competition, GRIP interfered with the field software and never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling, "I bet they will release an update to GRIP after this..."
#3

Re: Are You Using A Camera To Align Your Shooter?
Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the roboRIO, or a coprocessor? We're intending to use GRIP on an onboard Raspberry Pi 2, but also using the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of.

Last edited by vScourge : 11-03-2016 at 10:06.
#4

Re: Are You Using A Camera To Align Your Shooter?
We are using NI Vision to automatically line up and calibrate our shooter. We line up using a constant-rate turn, then adjust the shooter based on empirical data we collected that relates the size and position of the target on screen to how we need to calibrate the shooter to make that shot.
However, I wouldn't recommend NI Vision; it's very poorly documented. Next year we will probably switch to OpenCV.
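The empirical-calibration idea described above can be sketched as a lookup table with linear interpolation: collect (target size in pixels, shooter setting) pairs on the practice field, then interpolate between them at match time. A minimal Python sketch; the calibration numbers, and the choice of target height in pixels as the key, are invented for illustration and are not the team's actual data.

```python
# Empirically collected (target_height_px, shooter_rpm) pairs,
# sorted by pixel height. These values are made up for illustration.
CAL = [(40, 2400.0), (60, 2800.0), (80, 3300.0), (100, 3900.0)]

def shooter_setting(height_px):
    """Linearly interpolate the calibration table, clamping at both ends."""
    if height_px <= CAL[0][0]:
        return CAL[0][1]
    if height_px >= CAL[-1][0]:
        return CAL[-1][1]
    for (x0, y0), (x1, y1) in zip(CAL, CAL[1:]):
        if x0 <= height_px <= x1:
            t = (height_px - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

The more calibration points you collect, the less the interpolation error matters, which is why teams that do this tend to keep adding rows to the table rather than fitting a physics model.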
#5

Re: Are You Using A Camera To Align Your Shooter?
Quote:
I'm curious if your feedback is based on the contents of these documents, or on the ability to find them?

Greg McKaskle
#6

Re: Are You Using A Camera To Align Your Shooter?
Quote:
I'm Erik's programming mentor on 2877. The documentation problems weren't so serious that we couldn't get our vision working. In fact, we just came back from the WPI District event, where we had the only 20-point autonomous. We didn't miss a single vision-assisted goal the entire weekend, when our drive train was actually working.

The lack of documentation is for things like annotating an image (the imaqOverlay calls just didn't work for us), or what the "float" pixel value means in the imaqDraw family of calls. See my (essentially) unanswered questions at: https://decibel.ni.com/content/thread/43729?tstart=0.

Also, although we almost certainly had the best vision at WPI, doing it on the roboRIO is slow, so we'll probably go for an onboard co-processor next year. And it's doubtful whether the NI libraries would be available for any of the co-processors we'd consider.
#7

Re: Are You Using A Camera To Align Your Shooter?
Quote:
2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, as some teams have managed to do, a Raspberry Pi) helps a lot.
#8

Re: Are You Using A Camera To Align Your Shooter?
Quote:
I was actually playing with adding some stuff myself. Update from AZ: GRIP seems to be running fine on our machine. Post-event, I'll see if I can get our vision kids to post a bit more detail.
#9

Re: Are You Using A Camera To Align Your Shooter?
^This. We pulled down the GRIP source and did a Python port of the algorithm we had in GRIP. Because GRIP makes it so easy to try things, we ended up with a simple three-block algorithm; without the rapid prototyping, it likely would have had a few extra unneeded steps. We made the Python program that runs on a BeagleBone Black publish values to NetworkTables identically to how GRIP does. This allows us to switch between GRIP on the DS and our Python program on the BBB without any code changes required. The robot is none the wiser as to which one is currently being used.
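The drop-in-replacement trick above hinges on publishing the same NetworkTables layout GRIP does: parallel number arrays for each contour property. A minimal sketch, assuming GRIP's default "GRIP/myContoursReport" table and pynetworktables on the coprocessor; check the table name and server address against what your own pipeline actually publishes.

```python
def contours_to_report(boxes):
    """Flatten (x, y, w, h) bounding boxes into the parallel arrays
    GRIP publishes, so robot code can't tell which source produced them."""
    return {
        "centerX": [x + w / 2.0 for x, y, w, h in boxes],
        "centerY": [y + h / 2.0 for x, y, w, h in boxes],
        "width":   [float(w) for _, _, w, _ in boxes],
        "height":  [float(h) for _, _, _, h in boxes],
        "area":    [float(w * h) for _, _, w, h in boxes],
    }

def publish(report, server="roborio-XXXX-frc.local"):
    # Requires pynetworktables; imported here so the pure conversion
    # above works without it. The server name is a placeholder.
    from networktables import NetworkTables
    NetworkTables.initialize(server=server)
    table = NetworkTables.getTable("GRIP/myContoursReport")
    for key, values in report.items():
        table.putNumberArray(key, values)
```

Keeping the conversion separate from the publishing also makes it easy to unit-test the array layout off-robot.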
#10

Re: Are You Using A Camera To Align Your Shooter?
Quote:
#11

Re: Are You Using A Camera To Align Your Shooter?
Quote:
We put on a great game, but we just never really have any luck, so we aren't advancing to Worlds, even though I think we have a great vision system and the robot performed beautifully. Eventually, when I stop being so sour over our loss, I'll get around to doing it. You'll have to hold tight until then.
#12

Re: Are You Using A Camera To Align Your Shooter?
We're using a Kinect this year for our vision processing, connected to a coprocessor running Freenect and OpenCV.

The Kinect uses an IR stream to find depth; however, you can also view the IR stream raw, which is extremely useful, as it means we don't need to have a big green LED on our robot's camera. Our coprocessor (originally a Pine64, but changed to a Raspberry Pi because of driver support in libusb) finds the contours and bounding boxes of the high goal target. These values are sent to the roboRIO via regular sockets. A single frame of data takes up only 32 bytes per target, which means we never run out of bandwidth.

Instead of doing some (unreliable) math to find the angle and distance to the target, we're just using a PID controller to align, with the error set to the deviation between the centre of the bounding box and the centre of the frame. For distance, we're just using a lookup table keyed on the distance of the target from the bottom of the frame in pixels. Calculating distance and angle is an unnecessary step and just complicates things.

While a target is in view, our flywheels passively spin up to the appropriate speed, to avoid taking time to spin up when we're ready to take a shot. This means the shot is taken almost instantly when I hit the 'shoot' button on the joystick. Our vision code is written in C/C++ and our roboRIO code is written in Java/Kotlin.
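The PID-on-pixel-error alignment and the 32-bytes-per-target framing can be sketched roughly as follows (in Python rather than the team's C/C++). The frame width, gains, sign convention, and field order are all placeholders, not the team's actual values.

```python
import struct

FRAME_WIDTH = 320  # px; assumed camera resolution

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def turn_command(box_center_x, pid, dt=0.02):
    """Error is the pixel offset between bounding-box centre and frame
    centre; positive output means turn right (convention assumed)."""
    err = box_center_x - FRAME_WIDTH / 2.0
    return pid.update(err, dt)

def pack_target(cx, cy, w, h):
    """Four little-endian doubles = 32 bytes per target, matching the
    bandwidth figure in the post (field order is an assumption)."""
    return struct.pack("<4d", cx, cy, w, h)
```

Driving the turn rate straight from pixel error sidesteps camera calibration entirely, which is why the angle/distance math becomes unnecessary.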
#13

Re: Are You Using A Camera To Align Your Shooter?
As a further note, I'll be attempting to add Kinect support to GRIP after the season's conclusion. If you're planning to use a Kinect next year and want support for this in GRIP, keep an eye on #163
#14

Re: Are You Using A Camera To Align Your Shooter?
We're using GRIP, and competed this week at Orlando. No problems reported from the driver. In fact, the robot made every autonomous shot it took, so it was working pretty well. Let me know if you have any questions.
#15

Re: Are You Using A Camera To Align Your Shooter?
Quote:
Our driver never actually sees the high goal opening on the DS when shooting. We all look up afterward to see if it actually went in.