Lift Tracker - 1.0

LIFT TRACKER - SWAT 1806

S.W.A.T. 1806 is proud to announce the first lift-tracking software for the 2017 game, FIRST Steamworks. No longer will mentors and coaches have to yell at the programmers to get vision tracking working: it’s here. The software calculates both the distance and the angle to the target. It also runs on coprocessors like the Raspberry Pi and the Kangaroo with relative ease. And you can easily modify it: open the included GRIP file, tune it, and generate the code you like. No more messing with pesky HSV values by hand!

How to install:

  1. Install OpenCV 3.x on whatever computer it’s running on, from here

  2. Download NetworkTables 3.0 (included in the repo) and make sure it’s on the build path

  3. Download the repo

  4. Run GRIP with the included project file, tune your values to your liking, then export the code and overwrite everything in LiftTracker.java. BE SURE NOT TO DELETE LINES 303-310; KEEP THEM SOMEWHERE IN THAT CODE

  5. Export the project as a runnable jar

  6. Run it using the command line or a batch file (or a .sh file if you are on Linux)

Before you run it, though, you need to calculate the distance constant. This is a pretty easy task and should take under 10 minutes. Choose five distances for the robot to sit at (12, 24, 48, 60, 72 in). Move your robot to each of these distances and record the variable lengthBetweenContours. Multiply each distance by its lengthBetweenContours reading and write down the product. After you have done that for all of the values, average the products; that average is your distance constant. There is a variable in the code named DISTANCE_CONSTANT so you can easily set it. A sketch of the arithmetic is below.
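Here’s a minimal sketch of that calibration arithmetic in Java; the lengthBetweenContours readings below are made-up example numbers, not real calibration data:

public class DistanceCalibration {
    public static void main(String[] args) {
        // Calibration distances in inches and the lengthBetweenContours
        // value recorded at each one (the pixel readings are invented).
        double[] distancesIn = {12, 24, 48, 60, 72};
        double[] lengthsPx   = {430, 215, 107, 86, 71};

        double sum = 0;
        for (int i = 0; i < distancesIn.length; i++) {
            // distance * lengthBetweenContours should be roughly constant
            sum += distancesIn[i] * lengthsPx[i];
        }
        double distanceConstant = sum / distancesIn.length;
        System.out.println("DISTANCE_CONSTANT = " + distanceConstant);

        // At runtime the relationship is inverted:
        // distance = DISTANCE_CONSTANT / lengthBetweenContours
    }
}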

This is the GitHub link

If you have any questions, feel free to post a comment or open an issue on the GitHub page and I’ll be happy to look at it. This is currently in beta, so please contact me with corrections.

Thanks to:

  • Fauge7, you a G
  • TowerTracker 1.0 for giving us some inspiration on what to do
  • S/O to the people on the FRC Discord for overcoming the FRC Discord

**FAQ:**

Q: What camera did you guys use?

A: Microsoft LifeCam HD-3000 @ 640 x 480

Attempt #3 at trying to post my thread

Looks awesome!

Pretty cool!

But consider using .gitignore…

Thanks for the suggestion, I fixed it!

Any chance I could get a little more guidance/advice for a complete noob?
I’m a very good programmer but have not yet worked in the environments used for robot programming.
I’m just learning the LabVIEW stuff for robot control.

For instance…

  1. The instructions say “install” opencv but when I go and download it from the site I just get a folder of libraries. There’s nothing to install. So, are you really just saying “go download the libraries”?

  2. Why do I have to download Network Tables 3.0 if it is already in the repo?
    I did a git clone of the repo so am I good with Network Tables?
    (Side note - I understand the concept of Network Tables but haven’t used them yet - I’ll be Googling that to try and figure it out.)
    It says make sure it is in the “build path”. Build Path for what toolchain?

  3. I downloaded and installed GRIP for Windows. I tried to open the .grip file and it crashed GRIP and wouldn’t open. Is that because maybe I don’t have the opencv files in the “correct” place or the Network Tables file in the right spot?

  4. Export the project as a runnable JAR. Using what tool?

If you respond, it doesn’t have to be in too much detail. Broad strokes are good. I can Google and follow up on any unresolved issues after I work on it some more.

Great job!

  1. Installing OpenCV just means getting the library; it should come with a file named opencv_3.XX.jar or something like that. You put that file on your build path.

  2. I included NetworkTables 3.0 in the repo so you can easily add it to the Eclipse build path. This allows you to use the classes and methods from inside that jar in your project. You are good with the git clone once you get it onto the build path.

  3. I’m not really sure what is going on with that; I just installed GRIP on a fresh machine, opened up the GRIP file, and it worked fine. Send me over a log file and I can check it out for you.

  4. To export a jar, go to File -> Export -> Runnable JAR File in Eclipse and it’ll be good to run.

I hope this helps! PM me if you need any more help

I just updated the code a bunch; follow the new instructions in the GitHub repo.

GitHub

Hello, we’re new to vision this year, and we came across your GitHub repository. We have used GRIP before to generate our code, and we have attempted to get the contours from the camera on the robot, but with no success.

We were wondering how we would incorporate your Process.java into our Robot.java class. How would we be able to run it without exporting it as a runnable jar file?

Thanks :)

You should just be able to copy and paste the Process.java methods into your robot code and edit it to grab the GRIP values whichever way you want (probably NetworkTables).
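For example, a minimal robot-side reader, assuming the coprocessor publishes to a “LiftTracker” table with keys “angle” and “distance” (those key names are assumptions; check what your code actually puts):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionReader {
    private final NetworkTable table = NetworkTable.getTable("LiftTracker");

    // Returns the last published angle, or 0.0 if nothing has arrived yet
    public double getAngle() {
        return table.getNumber("angle", 0.0);
    }

    public double getDistance() {
        return table.getNumber("distance", 0.0);
    }
}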

Awesome code. I’m looking to run this on a coprocessor (Jetson TX1). Would both liftTracker and processing.java run on the coprocessor, or liftTracker on the Jetson and processing.java on the roboRIO?

In processing.java you set the IP for NetworkTables. Is this where you’re getting the NetworkTables values from, and would that IP need to change for the Jetson?

Also, you’re pulling a table called LiftTracker, but in GRIP the network table published is still called myContours.

Also, setting up the camera stream (I am using a USB camera): I’m assuming I would have to change the IP to the Jetson as well (probably by creating an MJPEG server?) and then also try to send it to the Driver Station.

Sorry for so many questions
Thanks!

Awesome code. I’m looking to run this on a coprocessor (Jetson TX1). Would both liftTracker and processing.java run on the coprocessor, or liftTracker on the Jetson and processing.java on the roboRIO?

Yeah, so the LiftTracker.java file is literally just the code generated by GRIP. If you want it by itself, you can have a method somewhere in a *.java file and call:

tracker.process(Mat)
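For example, a bare-bones coprocessor main loop might look like this (a sketch, assuming OpenCV 3.x is on the classpath and the GRIP-generated class is named LiftTracker, as in the repo):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class CoprocessorMain {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // load native OpenCV
        LiftTracker tracker = new LiftTracker();
        VideoCapture capture = new VideoCapture(0);   // first USB camera
        Mat frame = new Mat();
        while (capture.read(frame)) {
            tracker.process(frame); // run the generated pipeline on the frame
            // then read tracker.filterContoursOutput() and publish results
        }
    }
}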

In processing.java you set the IP for NetworkTables. Is this where you’re getting the NetworkTables values from, and would that IP need to change for the Jetson?

When I run:

NetworkTable.setIPAddress("roborio-1806-frc.local");

That is the address where I’m putting all of the output values (angle, distance, etc.), so you would want to change 1806 to your team number.

Also, you’re pulling a table called LiftTracker, but in GRIP the network table published is still called myContours.

LiftTracker is the table where I am *putting* all of my values, not getting them. You aren’t going to get any values from myContours through GRIP; you aren’t actually relying on the GRIP GUI at all, just the code generated from it, which lives in LiftTracker.java.
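So the coprocessor side ends up looking roughly like this (a sketch; the key names are just what I picked, nothing GRIP requires):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class TablePublisher {
    public static void main(String[] args) {
        // Configure NetworkTables as a client before grabbing any table
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-1806-frc.local"); // your team number here
        NetworkTable table = NetworkTable.getTable("LiftTracker");

        // After each processed frame, publish the computed values
        table.putNumber("angle", 4.2);     // placeholder numbers
        table.putNumber("distance", 36.0);
    }
}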

Also, setting up the camera stream (I am using a USB camera): I’m assuming I would have to change the IP to the Jetson as well (probably by creating an MJPEG server?) and then also try to send it to the Driver Station.

What I would do is replace the .open call that takes the camera URL with:

videoCapture.open(0);

This makes it open the webcam plugged into the USB port. Then you might want to start an MJPEG server. I’m not sure how well these work hand in hand, so you may have to experiment.
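If you want the stream on the Driver Station, one option I haven’t tried with this code is WPILib’s cscore classes; a rough sketch (cscore’s native library must also be available):

import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.MjpegServer;
import edu.wpi.cscore.VideoMode;

public class StreamSketch {
    public static void main(String[] args) {
        // Serve frames on port 1181; point the dashboard at <jetson-ip>:1181
        CvSource source = new CvSource("LiftTrackerCam",
                VideoMode.PixelFormat.kMJPEG, 640, 480, 30);
        MjpegServer server = new MjpegServer("LiftTrackerStream", 1181);
        server.setSource(source);
        // Inside your capture loop, after processing each frame:
        // source.putFrame(frame);
    }
}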

And so the GP train continues. Awesome that you guys did this!

Have you benchmarked it to see resource usage on the roboRIO? I know that running two JVMs can eat up most, if not all, of the resources.

Nah, we haven’t tested it on the RIO; we’ve only tested it on our driver station computer, and soon the Pi / Jetson.

So I’m having a bit of trouble: probably because I did something wrong, but still trouble.

I’ve created a new project to test this that essentially prints the angle and distance to the target to the Driver Station console so I know it works. This is where the problems start: getAngle() throws a NullPointerException at line 143, and I’m not sure why.

Any help would be very much appreciated! I have copied this down exactly, changed things to my team number, and implemented it into the main robot class without any obvious errors.

Thanks!

I’d like to see a stack trace of this. Line 143 checks whether there are two contours in the picture, but it should be fixed as of my latest commit.

PM me a pic or pastebin of the stack trace and I’d love to take a look.

Looking through the commit history, it looks like this bug was fixed. Re-pull and then try it again.

The bug occurred when tracker.filterContoursOutput was empty, meaning that the processing routines didn’t find any contours, so there was no angle to compute. An appropriate if statement was added to ensure that a contour was found.
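In code terms, the guard looks something like this (a sketch; the accessor name follows GRIP’s generated-code convention, and LiftTracker is the repo’s generated class):

import java.util.ArrayList;
import org.opencv.core.MatOfPoint;

public class ContourGuard {
    public static void handleFrame(LiftTracker tracker) {
        ArrayList<MatOfPoint> contours = tracker.filterContoursOutput();
        if (contours.size() >= 2) {
            // Both pieces of retroreflective tape were found: safe to
            // compute lengthBetweenContours, angle, and distance here
        }
        // Otherwise skip the frame instead of throwing a NullPointerException
    }
}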

After getting the values from the lift tracker program, what are you guys doing to drive the robot to the gear peg?

Did you ever figure this out?

What we are doing is running a modified drive-straight class, with the vision output taking the place of the gyro. Hope this helps!
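Roughly like this, as a sketch (not our actual class; kP and the “angle” key are placeholders you’d tune and rename to match your setup):

import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionDriveStraight {
    private static final double kP = 0.03; // proportional gain, needs tuning
    private final RobotDrive drive;
    private final NetworkTable table = NetworkTable.getTable("LiftTracker");

    public VisionDriveStraight(RobotDrive drive) {
        this.drive = drive;
    }

    // Call periodically: steers so the vision angle goes to zero,
    // just like a gyro-based drive-straight with a vision heading.
    public void driveTowardPeg(double forwardSpeed) {
        double angle = table.getNumber("angle", 0.0); // signed degrees off-center
        drive.arcadeDrive(forwardSpeed, -kP * angle);
    }
}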