Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Are You Using A Camera To Align Their Shooter? (http://www.chiefdelphi.com/forums/showthread.php?t=145491)

IronicDeadBird 10-03-2016 08:40

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by s5511 (Post 1554942)
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

From our limited testing, make sure the camera is mounted safely and securely. It's a rough game.

Jonathan Ryan 10-03-2016 08:54

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by sanelss (Post 1554656)
We use a Windows tablet with LabVIEW and do local USB camera processing, but we also forward coordinate data to the driver station so the driver can see the alignment and verify it before taking a shot. We have working auto-crossing of one of the five defenses followed by a high goal shot. We have over 90% accuracy with auto-aim; the driver just enables auto-aim but still has manual control.

Would you be willing to share? I know my team has always had trouble getting any kind of vision to work while using LabVIEW.

marshall 10-03-2016 08:58

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by s5511 (Post 1554942)
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?

We're just up the road from you folks and happy to help anytime. Just let us know what you're seeing (vision pun) and we will do what we can to help.

Stappy 10-03-2016 10:45

Re: Are You Using A Camera To Align Their Shooter?
 
We had our shooter using GRIP and had great success at the practice fields we went to. Then we got to our first competition, and GRIP interfered with the field software and never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling, "I bet they will release an update to GRIP after this..." :eek:

vScourge 11-03-2016 10:00

Re: Are You Using A Camera To Align Their Shooter?
 
Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the roboRIO, or a coprocessor?

We're intending to use GRIP on an onboard Raspberry Pi 2, but also using the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of.

cjl2625 11-03-2016 10:11

Re: Are You Using A Camera To Align Their Shooter?
 
We are having great luck with RoboRealm; I find it more powerful than GRIP.
However, I don't think RoboRealm has updated NetworkTables compatibility for this year's control system. As a result, I devised a workaround using HTTP: RoboRealm sends data in an HTTP request to a local Python HTTP server, which then uses pynetworktables to share the data with the robot.
Perhaps it's not the most efficient method, but it works fine for me.
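
Roughly, the bridge looks like this (a simplified sketch rather than my exact code; the parameter names and addresses are placeholders for whatever you configure in RoboRealm):

Code:

# Sketch of the RoboRealm -> HTTP -> pynetworktables bridge described
# above. Assumes RoboRealm is configured to request a URL like
# http://localhost:8000/?cx=123&cy=45 with its tracking variables;
# the roboRIO address below is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")  # your team here
table = NetworkTables.getTable("vision")

class Bridge(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse the query parameters sent by RoboRealm, e.g. ?cx=123&cy=45
        params = parse_qs(urlparse(self.path).query)
        for key, values in params.items():
            table.putNumber(key, float(values[0]))  # forward to the robot
        self.send_response(200)
        self.end_headers()

HTTPServer(("localhost", 8000), Bridge).serve_forever()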

euhlmann 11-03-2016 12:54

Re: Are You Using A Camera To Align Their Shooter?
 
We are using NI Vision to automatically line up and calibrate our shooter. We line up using a constant-rate turn, and then adjust the shooter based on empirical data we collected relating the size and position of the target on screen to the shooter settings needed to make that shot.
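
The empirical adjustment is essentially a nearest-neighbor lookup over the samples we collected. A sketch of the idea (in Python for illustration; the numbers are made up, not our real calibration data):

Code:

# Illustrative sketch: map (target size, target position) on screen to a
# shooter setting using the nearest empirical calibration sample.
# All values here are placeholders, not real data.
import math

# (target width px, target center y px) -> hood angle (degrees)
CALIBRATION = [
    ((60, 110), 52.0),
    ((80, 140), 48.5),
    ((100, 170), 45.0),
]

def shooter_angle(width, center_y):
    """Return the hood angle of the closest calibration sample."""
    return min(
        CALIBRATION,
        key=lambda s: math.hypot(s[0][0] - width, s[0][1] - center_y),
    )[1]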

However, I wouldn't recommend NI Vision. It's very poorly documented. Next year we will probably switch to OpenCV.

ThomasClark 11-03-2016 18:04

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by Andrew Schreiber (Post 1554589)
That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).

That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue and it doesn't look too hard, I can try implementing it.

Quote:

Originally Posted by Arhowk (Post 1554647)
I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a realtime processing agent in the pits and then just copying over the HSV values, erosion kernels, etc. to the CV code.

Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.

Quote:

Originally Posted by Arhowk (Post 1554647)
  1. GRIP, if run on the dashboard, requires sending the camera data over a second time in addition to the DS stream, which clogs up bandwidth and laptop CPU.
  2. GRIP, if run on the RIO, never even worked for us. It gave us some error and the program never wrote to NetworkTables.
  3. GRIP on the RIO also requires the installation and execution of the Java VM, which is quite a lot of overhead if you aren't a Java team.
  4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver, or for robot code if used.
  5. You learn more if you do it by hand! :D It's not hard. (Getting an MJPEG is a pain though.)

1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension. Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, as some teams have managed to do, a Raspberry Pi) helps a lot.
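
On the "getting an MJPEG is a pain" aside in point 5: for what it's worth, OpenCV can open an MJPEG HTTP stream directly. A rough sketch (the camera URL is a placeholder for whatever stream your camera exposes):

Code:

# Read frames from an MJPEG HTTP stream with OpenCV (Python).
import cv2

cap = cv2.VideoCapture("http://10.TE.AM.11/mjpg/video.mjpg")  # placeholder URL
while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; reconnect logic would go here
    # ... run your vision pipeline on `frame` ...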

Andrew Schreiber 11-03-2016 18:24

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by ThomasClark (Post 1555632)
That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue and it doesn't look too hard, I can try implementing it.

Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.

1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension. Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, as some teams have managed to do, a Raspberry Pi) helps a lot.

I was actually playing with adding some stuff myself.


Update from AZ: GRIP seems to be running fine on our machine. Post-event, I'll see if I can get our vision kids to post a bit more detail.

kylelanman 12-03-2016 01:11

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by ThomasClark (Post 1555632)
Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.

^This. We pulled down the GRIP source and did a Python port of the algorithm we had in GRIP. Because GRIP makes it so easy to try things, we ended up with a simple three-block algorithm. Without the rapid prototyping, it likely would have had a few extra unneeded steps. We made the Python program that runs on a BeagleBone Black publish values to NT identically to how GRIP does. This lets us switch between GRIP on the DS and our Python program on the BBB without any code changes. The robot is none the wiser as to which one is currently being used.
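
The publishing side is the easy part. A simplified sketch of what we do (not our exact code; it assumes GRIP's default "myContoursReport" layout, and the roboRIO address is a placeholder):

Code:

# Publish contour data the same way GRIP's NetworkTables publish step
# does, so the robot can't tell which producer is running.
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")  # your team here
report = NetworkTables.getTable("GRIP/myContoursReport")

def publish(contours):
    """contours: list of (center_x, center_y, area, width, height)."""
    report.putNumberArray("centerX", [c[0] for c in contours])
    report.putNumberArray("centerY", [c[1] for c in contours])
    report.putNumberArray("area", [c[2] for c in contours])
    report.putNumberArray("width", [c[3] for c in contours])
    report.putNumberArray("height", [c[4] for c in contours])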

sanelss 12-03-2016 22:33

Re: Are You Using A Camera To Align Their Shooter?
 
Quote:

Originally Posted by Jonathan Ryan (Post 1554946)
Would you be willing to share? I know my team has always had trouble getting any kind of vision to work while using LabVIEW.

I'll sooner or later make a video of it and provide some documentation, since it performed so well this year.

We put on a great game, but we just never really have any luck, so we aren't advancing to Worlds even though I think we have a great vision system and the robot performed beautifully. :( Eventually, when I stop being so sour over our loss, I'll get around to doing it; you'll have to hold tight until then.

Jaci 13-03-2016 00:05

Re: Are You Using A Camera To Align Their Shooter?
 
We're using a Kinect this year for our vision processing, connected to a coprocessor running Freenect and OpenCV.

The Kinect uses an IR stream to find depth; however, you can also view the raw IR stream, which is extremely useful, as it means we don't need to have a big green LED on our robot's camera.

Our coprocessor (originally the Pine64, but changed to the Raspberry Pi because of driver support in libusb) finds the contours and bounding boxes of the high goal target. These values are sent to the RoboRIO via regular sockets. A single frame of data takes up only 32 bytes per target, which means we never run out of bandwidth. All of this code is in C/C++.
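
To give an idea of the packet layout (sketched in Python for brevity, though our real code is C/C++; the exact fields shown are just one way to fit a target into 32 bytes):

Code:

# One target = four 64-bit doubles = 32 bytes. The field choice here is
# illustrative; the real layout may differ.
import socket
import struct

TARGET_FMT = "!dddd"  # x, y, width, height -> 4 * 8 = 32 bytes

def send_targets(sock, targets):
    """targets: list of (x, y, w, h) bounding boxes."""
    payload = b"".join(struct.pack(TARGET_FMT, *t) for t in targets)
    sock.sendall(payload)

# The receiving side (on the RoboRIO) unpacks in 32-byte chunks:
#   x, y, w, h = struct.unpack(TARGET_FMT, chunk)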

Instead of doing some (unreliable) math to find the angle and distance to the target, we're just using a PID controller, with the error set to the deviation between the centre of the bounding box and the centre of the frame, to align. For distance, we're just using a lookup table keyed on the distance of the target from the bottom of the frame in pixels. Calculating distance and angle is an unnecessary step and just complicates things.
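
In rough pseudocode (Python for illustration; the gains, table values, and frame size are placeholders, not our real numbers), the whole thing boils down to:

Code:

# Align on horizontal pixel error; pick flywheel speed from a lookup
# table interpolated at the target's height in the frame.
import bisect

FRAME_WIDTH = 320  # placeholder
KP = 0.005         # proportional gain on pixel error; tune empirically

# (pixels from bottom of frame, flywheel RPM) -- placeholder data
SPEED_TABLE = [(40, 2800), (80, 3100), (120, 3400), (160, 3700)]

def turn_output(box_center_x):
    """Turn command straight from pixel error; no trig required."""
    return KP * (box_center_x - FRAME_WIDTH / 2)

def flywheel_speed(pixels_from_bottom):
    """Linearly interpolate the speed table at the target's height."""
    keys = [k for k, _ in SPEED_TABLE]
    i = bisect.bisect_left(keys, pixels_from_bottom)
    i = max(1, min(i, len(SPEED_TABLE) - 1))
    (x0, y0), (x1, y1) = SPEED_TABLE[i - 1], SPEED_TABLE[i]
    return y0 + (pixels_from_bottom - x0) * (y1 - y0) / (x1 - x0)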

While a target is in view, our flywheels will passively spin up to the appropriate speed, to avoid taking time to spin up when we're ready to take a shot. This means the shot is taken almost instantly when I hit the 'shoot' button on the joystick.

Our vision code is written in C/C++ and our RoboRIO code is written in Java/Kotlin.

Jaci 13-03-2016 00:11

Re: Are You Using A Camera To Align Their Shooter?
 
As a further note, I'll be attempting to add Kinect support to GRIP after the season's conclusion. If you're planning to use a Kinect next year and want support for this in GRIP, keep an eye on #163

GuyM142 13-03-2016 00:39

We use vision to align to the goal. We found that the rate at which we get new measurements from the vision processing was too low to work properly with PID, so we decided to use one image to calculate how many degrees to turn, and then use the gyro to reach that angle. After settling, we take another image just to make sure the robot is on target.
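
Roughly, the logic looks like this (a Python sketch; get_target_offset_degrees, read_gyro, and set_turn are hypothetical stand-ins for our actual vision, gyro, and drive code):

Code:

# One vision measurement sets the heading target; the gyro closes the
# loop fast, then a second image verifies alignment.
import time

TOLERANCE_DEG = 1.0
KP = 0.02  # turn gain; tune on the robot

def align_once():
    offset = get_target_offset_degrees()  # single vision measurement
    setpoint = read_gyro() + offset       # absolute heading to reach
    while abs(setpoint - read_gyro()) > TOLERANCE_DEG:
        set_turn(KP * (setpoint - read_gyro()))
        time.sleep(0.02)
    set_turn(0.0)
    # after settling, take another image to verify we're on target
    return abs(get_target_offset_degrees()) <= TOLERANCE_DEG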

Tottanka 13-03-2016 06:14

Re: Are You Using A Camera To Align Their Shooter?
 
We used GRIP to create a Python algorithm which we use with OpenCV.
The frame rate was too slow for us as well, so we take one shot of the target and use encoders to turn the robot through the calculated angle with PID.
Later we double-check that it is indeed aligned, and that's it.
It takes us less than a second to align properly.

