
Tower Tracker 1.0


Fauge7
19-01-2016, 23:51
Team 3019 graciously presents their vision tracking program for everybody to use and borrow! Now instead of dreading the perils of computer vision, YOU can bring your team the joy of having a robot that is capable of tracking the target with ease! No more GRIP crashes or weird deploys! It can calculate fun things such as distance to target and angle to target, to be used to auto-align and auto-aim!

If you are going to modify the code, all I ask is that you give me and my team credit in a comment at the top of the code, and comment with your suggestions and/or your praise!

To install:

download OpenCV 3.1 from here (http://opencv.org/)
download the NetworkTables 3.0 jar
make a new project in Eclipse
add OpenCV and NetworkTables as user libraries on the build path of your new project
copy opencv_ffmpeg310_64.dll from C:\Path\to\opencv\build\bin to C:\Windows\System32
add the code in a class named TowerTracker.java
when you're ready to export, export the .jar file as a runnable jar
move the .jar to a folder similar to this (http://imgur.com/gallery/RT587rf/new)
run the .jar with "java -jar c:\Path\to\TowerTracker.jar" in a command prompt window


The code is just an example of what it can do; I can add the NetworkTables stuff soon, but I thought I would publish it first!
github link (https://github.com/fauge7/TowerTracker/blob/master/src/com/fauge/robotics/towertracker/TowerTracker.java)

Want to see an example of what it can output?
Here you go! (http://imgur.com/a/qOOyu)

How it works: using an Axis camera or an MJPEG streamer, you can take a stream from a webcam and process the images with an OpenCV program that runs on the driver station computer. The program can be modified to run on a coprocessor and feed directly into the roboRIO for even better results, because network tables only update at 10 Hz vs. the 30 Hz camera stream. This program can easily be ported over to C++ or Python, and it would probably run better in those, as C++ and Python are much better supported than Java with OpenCV.
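
To give a feel for the shape of the program before you open the GitHub link, here is a minimal sketch of the driver-station side of that loop. This is not the TowerTracker source itself; the class name is made up and the camera URL is a placeholder, assuming the OpenCV 3.1 Java bindings:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class StreamGrabberSketch {
    public static void main(String[] args) {
        // opencv_java310.dll (or .so) must be next to the jar or on java.library.path
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture capture = new VideoCapture();
        // placeholder URL: an axis camera or mjpeg streamer feed on the robot network
        capture.open("http://10.##.##.11/mjpg/video.mjpg");

        Mat frame = new Mat();
        while (capture.read(frame)) {
            // each frame would go through the HSV threshold / contour pipeline here
        }
        capture.release();
    }
}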

SenorZ
20-01-2016, 11:56
Thank you!

We're looking at non-vision tracking options this year since the retro-tape is so high up. But we'll take a look at this code, too.

akablack
20-01-2016, 13:40
WOW! Awesome work! One question: can this be deployed to run on the roboRIO?

mklinker
20-01-2016, 17:26
Where do I install the NetworkTables 3.0 jar?

Fauge7
20-01-2016, 23:17
WOW! Awesome work! One question: can this be deployed to run on the roboRIO?

Yes! The only problem with running it on the RIO is that the vision tracking will take up too much of the RIO's resources and might lag out the robot. If you are going to do this, I would suggest using a Linux board such as a Raspberry Pi 2 or an ODROID-C1+; they both run Linux, so they have similar interfaces and more support. This will also allow you to do much more advanced tracking, such as real-time object tracking and detection.

Where do I install the NetworkTables 3.0 jar?

NetworkTables goes in as a user library (tutorial) (http://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Freference%2Fpreferences%2Fjava%2Fbuildpath%2Fref-preferences-user-libraries.htm) in your Eclipse project; you will not need to do anything when you export the .jar, unlike for OpenCV.
When you export the .jar file, put it in a folder that looks similar to this (http://imgur.com/gallery/RT587rf/new), where opencv_java310.dll would be opencv_java310.so on Linux.

Munchskull
20-01-2016, 23:27
Could this run on a C.H.I.P. computer?

Fauge7
20-01-2016, 23:31
Could this run on a C.H.I.P. computer?

As long as it can run Java, it can run... so theoretically, yes. But this begs the question of whether you would WANT to run it on a C.H.I.P. computer... why wouldn't you upgrade to at least a Raspberry Pi or an ODROID? At $35 they provide at least 4x the processing power, which makes every ounce of difference if you want real-time video processing... you could get away with only one or two frames...

jreneew2
21-01-2016, 07:28
Yes! The only problem with running it on the RIO is that the vision tracking will take up too much of the RIO's resources and might lag out the robot.

I'm not an expert on multithreading, but would you still experience the lag if you ran the code on a separate thread on the roboRIO?

Thank you for this code; I have ported it to C++. I am running it on a separate thread, but just with a static image for now.

Fauge7
21-01-2016, 08:43
would you still experience the lag if you ran the code on a separate thread on the roboRIO

I am not an expert either, which is why it is not multithreaded, but my understanding of the RIO is that it already uses both cores to run the robot code. So I think it still might. Of course, why don't you just test this and get back to me? It's an open source project.

curtis0gj
21-01-2016, 10:20
Thank you so much for making your Java vision tracking solution public. For our team, vision tracking seemed extremely daunting, but this made it a realistic task.

I am trying to run the jar but I get a couple of errors. I may have added the NetworkTables user library incorrectly.
The steps I took are as follows: I made a new Java project and added the user library for OpenCV following this tutorial: http://docs.opencv.org/2.4/doc/tutorials/introduction/java_eclipse/java_eclipse.html.
Then I added the NetworkTables user library from this directory: C:\Users\Curtis Johnston\wpilib\java\current\lib.
When I try to run the executable jar from the command prompt, I get the following errors:


platform: /Windows/amd64/
Exception in thread "main" java.lang.UnsatisfiedLinkError: no ntcore in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at edu.wpi.first.wpilibj.networktables.NetworkTablesJNI.<clinit>(NetworkTablesJNI.java:57)
    at edu.wpi.first.wpilibj.networktables.NetworkTable.initialize(NetworkTable.java:42)
    at edu.wpi.first.wpilibj.networktables.NetworkTable.getTable(NetworkTable.java:176)
    at testing.TowerTracker.main(TowerTracker.java:80)

jreneew2
21-01-2016, 10:23
I am not an expert either which is why it is not multithreaded but my understanding of the Rio is that it already uses both cores to run the robot code. So I think it still might, of course why wouldn't you just test this and get back to me, it's an open source project

Yeah, I will tonight. I'm using static images now, but tonight I can set up some tape and test.

Again, thank you for this code, it is very helpful.

kmckay
21-01-2016, 10:35
Thank you for this code; I have ported it to C++. I am running it on a separate thread, but just with a static image for now.

Would you be open to sharing the C++ version?

Fauge7
21-01-2016, 10:36
Thank you so much for making your Java vision tracking solution public. For our team, vision tracking seemed extremely daunting, but this made it a realistic task.

I am trying to run the jar but I get a couple of errors. I may have added the NetworkTables user library incorrectly.
C:\Users\Curtis Johnston\wpilib\java\current\lib.
When I try to run the executable jar from the command prompt, I get the following errors:



For some reason FIRST still distributes NetworkTables 2.0; you are looking for NetworkTables 3.0 (http://wpilib.screenstepslive.com/s/4485/m/wpilib_source/l/480976-maven-artifacts).
The specific file you want is edu.wpi.first.wpilib.networktables.cpp:NetworkTables:3.0.0-SNAPSHOT.

That page has the instructions for downloading the newest .jar file for the network tables... I made the same mistake when making this.

Greg McKaskle
21-01-2016, 10:39
The roboRIO has two cores and a modern Linux scheduler. All processes and threads will be assigned to a processor core based on priority and history of execution. The default robot code doesn't use the entire roboRIO to execute, and in fact can be made much lighter and more efficient if that is what the team chooses to do.

It is quite easy to consume an entire core on any computer by writing one loop without a wait or delay of some sort. At that point, you can add more cores or fix the problem.

There are intentionally many ways to approach the vision processing challenge, and the tradeoffs are as team-based as technical. I fully expect to see awesome processing based on the DB laptop, based on coprocessors, and based on just the roboRIO. None of these are, in my opinion, a bad approach. And of course there are teams who will solve the challenge with no camera at all.

By the way, the DS shows you the CPU trace of the roboRIO in realtime. Just click on the second or third tab on the right side. This info is also logged and can be reviewed using the Log File Viewer after a practice or a match. If the robot feels sluggish, you can try to identify if it was because you maxed the CPU or something else.

Greg McKaskle

jreneew2
21-01-2016, 12:52
Would you be open to sharing the C++ version?

https://github.com/team2053tigertronics/2016Code/tree/master/Robot2016/src/vision

It's a bit messy, and the algorithm is a bit different, but you can change it easily. Also, I'm getting a resource initialization error after a while. It might be an array error.

FleventyFive
21-01-2016, 15:49
Also interested in running this on the roboRIO and in C++. Has anyone had any luck compiling a C++ WPILib robot program with some OpenCV in it? If so, I'd love to hear how you did it.

akablack
21-01-2016, 15:49
Thanks for sharing that! How did you get the OpenCV libraries onto the roboRIO? I've been having some trouble with that.

Ronso007
21-01-2016, 15:56
Wow! Amazing!
I just wish that we had something like that in LabVIEW :/

kmckay
21-01-2016, 17:29
https://github.com/team2053tigertronics/2016Code/tree/master/Robot2016/src/vision

It's a bit messy, and the algorithm is a bit different, but you can change it easily. Also, I'm getting a resource initialization error after a while. It might be an array error.

Thanks! I will see what we can do and get back in touch with you about any improvements we can make.
Ideally, I'd love to see this processed on board with a Raspberry Pi or Arduino board, but that's version 2.0 stuff.

Fauge7
21-01-2016, 17:58
Wow! Amazing!
I just wish that we had something like that in LabVIEW :/

It works with LabVIEW! All you have to do is output to a network table, and from there you can get the values in LabVIEW.

Quantum Byte
21-01-2016, 20:03
Nicely done.

So how would you use this to get the values of the contours?

Let's say I want my robot to "auto" shoot when the contours are a specific dimension. Is there any way I can do that with this?

Thanks.

Fauge7
21-01-2016, 22:02
Nicely done.

Let's say I want my robot to "auto" shoot when the contours are a specific dimension. Is there any way I can do that with this?

Thanks.

I will add in the network table features today, since I finally got it to work (took longer than expected).

Essentially you need to find the amount that your shooter can be off (tolerance) while still "scoring"; for instance, it can make it anywhere from 6-8 feet from the goal. Then you need to make sure your robot keeps driving until it is somewhere within that tolerance, and then it can start its fire sequence; a sketch of that check follows below. I would recommend looking into a PID drive system for that; it's a closed-loop drive that would work nicely with this.
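
Here is what that range check might look like on the receiving side (the table key, tolerance numbers, address, and class name are all made up for illustration, on the 2016 NetworkTables 3.0 Java API):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class ShotRangeCheck {
    // hypothetical tolerance: the shooter scores from 6 to 8 feet out
    static final double MIN_SHOT_FEET = 6.0;
    static final double MAX_SHOT_FEET = 8.0;

    public static void main(String[] args) {
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-3019-frc.local"); // placeholder address
        NetworkTable table = NetworkTable.getTable("SmartDashboard");

        double distance = table.getNumber("distance", -1.0); // published by the tracker
        if (distance >= MIN_SHOT_FEET && distance <= MAX_SHOT_FEET) {
            System.out.println("in range: start the fire sequence");
        } else {
            System.out.println("keep driving (ideally under PID) toward the goal");
        }
    }
}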

jreneew2
22-01-2016, 06:43
Also interested in running this on the roboRIO and in C++. Has anyone had any luck compiling a C++ WPILib robot program with some OpenCV in it? If so, I'd love to hear how you did it.

Thanks for sharing that! How did you get the OpenCV libraries onto the roboRIO? I've been having some trouble with that.

We followed 2168's vision example (https://github.com/Team2168/2168_Vision_Example) and their precompiled version worked fine for us. Any questions, just ask.

Greg McKaskle
22-01-2016, 07:55
I just wish that we had something like that in LabVIEW :/

You might want to look at the Getting Started Window, Tutorials tab, say #8.

Greg McKaskle

TheGuyWhoCodes
23-01-2016, 14:03
When trying to run the program on the Jetson TK1, I get the error

java.lang.UnsatisfiedLinkError: no libopencv_java2410 in java.library.path


I did the install tutorial from OpenCV's documentation. We have also verified that it works on a Windows machine. I think the reason might be that we don't have the correct *.so file inside of the working directory for OpenCV. Is there anybody that can upload the .so file so we can verify that that's the problem? The .so file the program is trying to find is called libopencv_java248.so, and it's inside the directory /usr/lib/.
Thank you!
-Chris

FleventyFive
24-01-2016, 02:46
In case anyone is still daring to try to do vision on the RIO and wants to put it right in their normal C++ project: I was able to (finally) get OpenCV to build for ARM and integrate into a WPILib project. I've been unable to test on a RIO so far, but at least it builds. Here's an example project (https://github.com/ironmig/2016robot/tree/add-opencv) with the special OpenCV build and a file (BuildOpenCV.txt) explaining how to set it up. It was a big PIA for me, so I thought I'd leave notes on the steps I went through. You can just throw your own source files into the project, but make sure to delete/comment out the line #define REAL in WPILib.h, as it creates some conflict with OpenCV. Okay, it's 3 a.m., I should go to sleep now.

FleventyFive
24-01-2016, 02:52
Thanks for sharing that! How did you get the OpenCV libraries onto the roboRIO? I've been having some trouble with that.

If you want more detailed build steps than the 2168 thing from last year, here's a 2016 project (https://github.com/ironmig/2016robot/tree/add-opencv) with a working (building, at least) OpenCV in C++, as well as instructions in the BuildOpenCV.txt file on how to set it up if you want to build it yourself (easiest on Ubuntu or similar).

FleventyFive
24-01-2016, 20:52
Nevermind, I have no idea what I'm doing.

Fauge7
25-01-2016, 00:26
Nevermind, I have no idea what I'm doing.

Why are you trying to run it on a Jetson or something else...

virtuald
25-01-2016, 00:34
FYI, anyone looking for a precompiled version of opencv 3.1 for the roboRIO, the robotpy project has had one available since before build season. Works with C++, Java, and Python 2/3 -- very easy to install the shared libraries on the roboRIO through our opkg repo.

https://github.com/robotpy/roborio-opencv

pnitin
25-01-2016, 15:09
You can do this on a $35 Raspberry Pi and save critical roboRIO resources.

See this:

Look here for tracking the Stronghold goalpost:
http://www.mindsensors.com/blog/how-...our-frc-robot-

Fauge7
25-01-2016, 15:17
You can do this on a $35 Raspberry Pi and save critical roboRIO resources.

See this:

Look here for tracking the Stronghold goalpost:
http://www.mindsensors.com/blog/how-...our-frc-robot-

If you're trying to advertise, at least make sure the link works...

FleventyFive
25-01-2016, 16:47
Why are you trying to run it on a Jetson or something else...

I'm still gonna try to see what I can do with the RIO. I just have no idea what I'm doing when it comes to cross-compiling huge C++ projects and makefiles and shared libraries and all of that stuff.

FleventyFive
25-01-2016, 16:58
FYI, anyone looking for a precompiled version of opencv 3.1 for the roboRIO, the robotpy project has had one available since before build season. Works with C++, Java, and Python 2/3 -- very easy to install the shared libraries on the roboRIO through our opkg repo.

https://github.com/robotpy/roborio-opencv

You, sir, just made my day. I've been trying to figure out how to build a more up-to-date version of OpenCV for two days now, and realized I have no idea what I'm doing. You RobotPy people really rock.

Now, would you be willing to help me set up my build environment for cross-compiling an OpenCV program in C++ that will use the shared libs on the roboRIO (preferably in Eclipse)?


EDIT: Information on setting up the build environment is available in the latest release on GitHub. Wow, that was easy!

1024Programming
25-01-2016, 17:30
Can this work with USB cameras?

Turing'sEgo
25-01-2016, 18:36
*My Java OpenCV is a little rusty*

Here is how you use a USB camera with the OpenCV libraries and store the image in the data type Mat:

// open the first USB camera attached to the machine
VideoCapture camera = new VideoCapture(0);

Mat frame = new Mat();
camera.read(frame);

instead of

videoCapture = new VideoCapture();
// replace the ##.## with your team number
videoCapture.open("http://10.##.##.11/mjpg/video.mjpg");

Fauge7
25-01-2016, 19:18
Yes. If you are going to use a USB webcam, I would recommend MJPEG streamer and running it off of the driver station.

pnitin
26-01-2016, 18:54
If you're trying to advertise, at least make sure the link works...

http://www.mindsensors.com/blog/how-to/how-to-track-stronghold-high-goalpost-using-vision-system-on-your-frc-robot-

MekhiThomas
02-02-2016, 19:04
My team is trying to use your code to test our vision, but every time we run the executable JAR file we get two errors:
"Error opening file </build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:578>", and when we try to use the network table, we get the error "NT: Error: could not resolve roborio-3929.local address <TCPConnector.cpp:93>". We've installed OpenCV 3.1 and we are able to stream camera output to the dashboard. We are using the Microsoft LifeCam 3000. Any help would be appreciated. Thanks!

KaiFukuyama
03-02-2016, 18:29
My robotics team is also getting the same error that everyone else is. When we try and run it in the command prompt, it gives us an error saying it is having a hard time finding OpenCV in the library path. It can't find it, but it is in there. Any ideas on how to fix it? We followed the directions as written and nothing works. We also added the DLL file in the OpenCV folder to System32. I included a screenshot of the error we were getting. Please help.

Fauge7
03-02-2016, 18:36
My team is trying to use your code to test our vision, but every time we run the executable JAR file we get two errors:
"Error opening file </build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:578>", and when we try to use the network table, we get the error "NT: Error: could not resolve roborio-3929.local address <TCPConnector.cpp:93>". We've installed OpenCV 3.1 and we are able to stream camera output to the dashboard. We are using the Microsoft LifeCam 3000. Any help would be appreciated. Thanks!

If it's having problems with the ffmpeg, you need to download OpenCV and follow my instructions at the top of this thread; I describe what you have to do after installing it. As for the Microsoft USB camera, you have to use what's called MJPEG streamer. It takes a USB webcam and outputs the feed to a web page, which you can then read from the program.

When we try and run it in the command prompt, it gives us an error saying it is having a hard time finding OpenCV in the library path. It can't find it, but it is in there. Any ideas on how to fix it? We also added the DLL file in the OpenCV folder to System32.

Did you export the program as a runnable jar and then put it in a folder with the files, similar to the screenshot I included?

KaiFukuyama
04-02-2016, 07:41
If it's having problems with the ffmpeg, you need to download OpenCV and follow my instructions at the top of this thread; I describe what you have to do after installing it. As for the Microsoft USB camera, you have to use what's called MJPEG streamer. It takes a USB webcam and outputs the feed to a web page, which you can then read from the program.



Did you export the program as a runnable jar and then put it in a folder with the files, similar to the screenshot I included?

We figured out what the problem was with why it wouldn't run: we needed to put the OpenCV jar file in with System32.

Fauge7
05-02-2016, 01:01
We figured out what the problem was with why it wouldn't run: we needed to put the OpenCV jar file in with System32.

That, or you put it in the same folder like I have... Cheers on getting it to work!

SlittyEyes
05-02-2016, 16:18
I modified the code to output distance values to the network table. When I add code to put the distance value on the dashboard, no values show up. Also, when I use the OutlineViewer to check any NetworkTables values (with the host set to "roboRIO-3929-FRC.local"), nothing shows up except the Root folder. Did you get the network table to work?

Fauge7
08-02-2016, 21:33
I modified the code to output distance values to the network table. When I add code to put the distance value on the dashboard, no values show up. Also, when I use the OutlineViewer to check any NetworkTables values (with the host set to "roboRIO-3929-FRC.local"), nothing shows up except the Root folder. Did you get the network table to work?

Before checking, you need to make sure your program connects to the network tables on the robot. Here is what you can do to ensure that it connects before it does the vision processing:

while (!NetworkTable.isConnected()) {}
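For reference, here is a minimal sketch of the full client-side setup around that wait, on the 2016 NetworkTables 3.0 Java API (the mDNS address, table name, and class name are placeholders for your own):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class NtConnectSketch {
    public static void main(String[] args) throws InterruptedException {
        // configure client mode BEFORE the first getTable() call initializes NT
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-3019-frc.local"); // placeholder team number
        NetworkTable table = NetworkTable.getTable("SmartDashboard");

        // block until the roboRIO's NT server answers, then start publishing
        while (!NetworkTable.isConnected()) {
            Thread.sleep(100);
        }
        table.putNumber("distance", 0.0); // replace with the real vision output
    }
}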

riftware
11-02-2016, 00:09
Team 3019 graciously presents their vision tracking program for everybody to use and borrow! [...]


My students are using GRIP to get the contours out of the streamer, but were still working on distance-to-target last I checked. We identify valid targets by taking the ratio of length to width, which I think is working pretty well. I think they were looking at taking a couple of known distances, identifying the target's size at those distances, and then extrapolating for distances in between. Your methodology looks very interesting, but I did have a couple of questions on the "angle".

(Note: it's been 28 or so years since I had to do more complicated math, so go easy on me.) It looked like in the code you had a known angle of the camera (I'm assuming vertical angle) and you are plugging that in. I get how this works, more or less, for figuring out distance/vertical angle. Where we are struggling a bit is in figuring out when we are "off center": given what you can get out of a contour, I'm not sure we would know that we need to move a bit to the right or left in order for a shot to work. Did you wind up solving that? If the contour had given us the boundaries of the rectangle's length or coordinates, then I think we could evaluate relative sizes to know to move left or right, but I'm not sure even about that. Any feedback is appreciated!

legts
15-02-2016, 10:34
Would this code work with a Microsoft LifeCam?

lethc
15-02-2016, 12:07
Would this code work with a Microsoft LifeCam?

You will have to modify a few lines of code but other than that... Yes.

kinganu123
15-02-2016, 16:34
You will have to modify a few lines of code but other than that... Yes.

Wait, is there a way to offload the USB camera data onto a program on the computer?! I thought that wasn't possible yet...

lethc
16-02-2016, 00:56
Wait, is there a way to offload the USB camera data onto a program on the computer?! I thought that wasn't possible yet...

We are only using it to view the USB camera feed in the SmartDashboard.

axton900
17-02-2016, 17:21
Has anyone gotten a Python version of this code to work?
In Python, many of the methods that this code requires do not exist, and this is a problem for many teams that plan on using Python for their vision processing.
We have been trying to port this code for use as sample code for days, but as you can see, many of the functions do not exist, which causes problems.
Here is our code so far:
http://pastebin.com/ecdDFDQp

The Rectangle class is the main problem we are faced with; there seems to be no equivalent for those using Python. If anyone has found a solution to this issue, then please let us know!
Thanks!

virtuald
18-02-2016, 14:06
The Rectangle class is the main problem we are faced with; there seems to be no equivalent for those using Python. If anyone has found a solution to this issue, then please let us know!
Thanks!

I believe that in the OpenCV python bindings rectangles are represented as tuples.

Fauge7
19-02-2016, 00:04
Has anyone gotten a Python version of this code to work?
In Python, many of the methods that this code requires do not exist, and this is a problem for many teams that plan on using Python for their vision processing.
We have been trying to port this code for use as sample code for days, but as you can see, many of the functions do not exist, which causes problems.
Here is our code so far:
http://pastebin.com/ecdDFDQp

The Rectangle class is the main problem we are faced with; there seems to be no equivalent for those using Python. If anyone has found a solution to this issue, then please let us know!
Thanks!

This code runs on the driver station laptop; there is zero need to switch programming languages unless you want it to run on the RIO. I would avoid running it on the RIO, as that can cause issues during the match. All you need to do is set up the program like I instructed and then simply run it; maybe even export it to have a runnable .jar file and a simple batch file to make executing easier.

Breadbocks
19-02-2016, 00:51
We got it all compiled into a jar and running, but it never seems to open the connection to the camera. It's at the right IP: if we put the address from the code in our browser, the mjpg stream comes up, but it just sits at "opening" forever in the cmd prompt.

Fauge7
20-02-2016, 10:47
We got it all compiled into a jar and running, but it never seems to open the connection to the camera. It's at the right IP: if we put the address from the code in our browser, the mjpg stream comes up, but it just sits at "opening" forever in the cmd prompt.

We had this problem too; we fixed it by installing the ffmpeg codec into System32, like I mentioned on the front page.

JohnM
20-02-2016, 14:09
We had this problem too; we fixed it by installing the ffmpeg codec into System32, like I mentioned on the front page.

I did this, but it is still not pulling the images off the camera. I'm not exactly using this code, but the code is the same for getting the images from our network camera.
Any ideas?

joeojazz
20-02-2016, 14:25
Does anyone know if you can find the distance with this?

Fauge7
21-02-2016, 12:17
Does anyone know if you can find the distance with this?

Yes. Update the constants with the constants for YOUR robot, and you will find the distance to be accurate to within ±6-8 inches.

joeojazz
22-02-2016, 14:58
Team 3019 graciously presents their vision tracking program for everybody to use and borrow! [...]

Can't find the .jar. Where is it located?

alexpell00
09-03-2016, 16:50
Thanks for the code, it really helped! If anyone is having trouble with the thresholds, you can take the HSV image to http://html-color-codes.info/colors-from-image/ to pull the hex colors of the target. Once you have 3-5 hex colors, simply convert them to BGR and find the range from high to low (add/subtract 10% on either end to make it work better).
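For anyone who would rather script that conversion than do it by hand, here is a rough sketch (the helper name and sample color are made up; the only real assumption is OpenCV's B,G,R channel order):

import org.opencv.core.Scalar;

public class HexToBgrBounds {
    // convert a hex color to OpenCV's B,G,R channel order with a +/- margin
    static Scalar[] bounds(String hex, double margin) {
        int rgb = Integer.parseInt(hex.replace("#", ""), 16);
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        Scalar lower = new Scalar(b * (1 - margin), g * (1 - margin), r * (1 - margin));
        Scalar upper = new Scalar(Math.min(255, b * (1 + margin)),
                Math.min(255, g * (1 + margin)),
                Math.min(255, r * (1 + margin)));
        return new Scalar[] { lower, upper };
    }

    public static void main(String[] args) {
        Scalar[] range = bounds("#00FF66", 0.10); // made-up target color, 10% margin
        System.out.println("lower=" + range[0] + " upper=" + range[1]);
    }
}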

Fauge7
10-03-2016, 00:19
Thanks for the code, it really helped! If anyone is having trouble with the thresholds, you can take the HSV image to http://html-color-codes.info/colors-from-image/ to pull the hex colors of the target. Once you have 3-5 hex colors, simply convert them to BGR and find the range from high to low (add/subtract 10% on either end to make it work better).

Yay! Glad it helped! Any link to your code, to see how it's implemented?

Honestly, my team just used GRIP to find the values; it's easy to use with the sliders.

Zaque
13-03-2016, 13:44
First, thanks for posting this. We have been struggling with vision code for some time now. However, I am having difficulty figuring out how to install the NetworkTables; I can't figure out how to find the correct files and download them by following the instructions on this page (http://wpilib.screenstepslive.com/s/4485/m/wpilib_source/l/480976-maven-artifacts).

Thanks again for all these resources!

lethc
13-03-2016, 23:52
Just wanted to post and say we ran a modified version of TowerTracker on a Jetson TK1 at the Greater Kansas City regional this weekend with great success. Thank you Fauge and team 3019 for sharing your work. :)

Fauge7
14-03-2016, 23:06
Just wanted to post and say we ran a modified version of TowerTracker on a Jetson TK1 at the Greater Kansas City regional this weekend with great success. Thank you Fauge and team 3019 for sharing your work. :)

No problem. Even if my team got last, I'm proud to say that a team got to the finals with it :) Congrats on being able to use it successfully!

Fauge7
28-03-2016, 04:32
Tower Tracker is now award-winning! Shout out to team 1806 SWAT! Congrats on going 15-0 at this week's regional! Proof the program can change robots. If anybody needs help with implementing it, I am more than happy to help. PM me with questions.

alexpell00
31-03-2016, 14:15
Hello. We are having trouble running it on a Raspberry Pi. When I try to run the exported jar, I get a "no opencv_java310 in java.library.path" error. We would love to be able to use it for this regional; we're just having some trouble getting it to work. OpenCV 3.1.0 is installed and compiled on the Pi.

axton900
31-03-2016, 15:55
Hey!
When you are running the program, make sure to include the OpenCV jar file in your class path. An easy way to do this is to use the -cp option when launching. Something like this will work:
java -cp /home/pi/opencv/build/bin/opencv-310.jar:TowerTracker.jar TowerTracker
(where TowerTracker is the fully qualified name of your main class). This configures the class path to include the jar for OpenCV, which is what the error you are facing is about; /home/pi/opencv/build/bin is the default location for the jar.
If you need any more help with this, feel free to PM me!
I went through this same process last week :)

axton900
06-04-2016, 18:38
Hey guys!
How do you calibrate the camera, exactly?
I opened up GRIP and got the RGB values of a setup in which the goal is clear, then took those values and edited the lower and upper bounds respectively, but I have been getting 0 contours.

Thanks!

jreneew2
06-04-2016, 18:41
Hey guys!
How do you calibrate the camera, exactly?
I opened up GRIP and got the RGB values of a setup in which the goal is clear, then took those values and edited the lower and upper bounds respectively, but I have been getting 0 contours.

Thanks!

Do you have a link to your code? Also, you might want to check that the lower and upper bounds are in BGR order instead of RGB. It took me a while to figure that out.

axton900
06-04-2016, 19:11
I am using an untouched version of TowerTracker. I found RGB values in GRIP and put them in as the upper and lower bounds in BGR order, and I am still detecting nothing. Any suggestions? Thanks!

jreneew2
06-04-2016, 19:14
Can you use imshow to display a window of what your original, resized, and thresholded Mats look like? That might help.

axton900
06-04-2016, 19:20
I am not sure how to do so; I am running the Java code provided. I remember your team discussing this on another thread. How exactly do you guys calibrate? That seems to be the problem.
Thanks!

jreneew2
06-04-2016, 19:29
To calibrate the camera, what we did was save the first image it grabbed and then download it onto our PC. Then we brought it into GRIP, found the right RGB values, and just plugged those into the lower and upper bounds arguments. It worked for us. We did have an issue where we had a really bright image right when the camera started; my best guess is that the camera lens was still adjusting its white balance or something like that. But it was fine when we just ran the code again after the camera had been on for a second. This wasn't a problem in competition, because your robot is on for a while before the match starts.
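A sketch of that save-one-frame trick (the filename, URL, and class name are placeholders):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

public class CalibrationFrameSaver {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture capture = new VideoCapture();
        capture.open("http://10.##.##.11/mjpg/video.mjpg"); // the same stream the tracker reads

        Mat frame = new Mat();
        if (capture.read(frame)) {
            // write the raw frame to disk, then open it in GRIP to pick the bounds
            Imgcodecs.imwrite("calibration.png", frame);
        }
        capture.release();
    }
}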

axton900
06-04-2016, 19:32
Elaborate on finding the perfect RGB values. Would you find these to be good?

Woolly
06-04-2016, 19:33
I would actually recommend doing your calibration in the HSB/HSV color space, as it separates color (H) from brightness (B/V), which means you can get a more robust calibration that will work in many different lighting environments (as the color your LEDs output shouldn't change), provided you turn your camera's exposure down.
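In code, the switch is just a color conversion before the threshold; roughly like this (the bound values here are made up and must come from your own calibration):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HsvThresholdSketch {
    static Mat threshold(Mat bgrFrame) {
        // convert from OpenCV's default BGR into HSV
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // made-up bounds: a hue band for green LED light, wide saturation/value
        Mat binary = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 50), new Scalar(90, 255, 255), binary);
        return binary;
    }
}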

axton900
06-04-2016, 19:36
And how do you modify the code to use HSV instead of RGB?

jreneew2
06-04-2016, 19:39
I would actually recommend doing your calibration in the HSB/HSV color space, as it separates color (H) from brightness (B/V), which means you can get a more robust calibration that will work in many different lighting environments (as the color your LEDs output shouldn't change), provided you turn your camera's exposure down.

That is true, but we didn't have any issues with RGB.

Also, those values seem fine to me. I think I see the issue: TowerTracker checks that the goal's bounding rectangle is wider than it is tall, so viewing from sharp angles like that is not going to give you a selected contour. Do you have a test image where the goal is straight on? If not, you can just get rid of this segment of code:


// reject contours that are taller than they are wide
float aspect = (float)rec.width / (float)rec.height;
if (aspect < 1.0)
    iterator.remove();

axton900
06-04-2016, 20:23
Oh! Thanks so much! We were hoping to have this running by this Saturday for our competition! :)

jreneew2
06-04-2016, 20:26
So it works? If so, good luck at competition. Any questions, PM me or post here.

jreneew2
06-04-2016, 20:29
Sorry for the double post, but just making sure you see this...

I would also check the solidity of the target. This makes sure you don't pick up stray objects even if they pass all the other checks. Here is C++ code, very similar to the Java.


// hull is the convex hull of the contour, computed beforehand with:
//   std::vector<cv::Point> hull;
//   cv::convexHull(contours[i], hull);
float area = contourArea(contours[i]);
float hull_area = contourArea(hull);
float solidity = (float)area / hull_area;

if (aspect > 1 && rect.area() > 100 && (solidity >= .04 && solidity <= .4)) {
    selected.push_back(contours[i]);
}

Fauge7
08-04-2016, 02:20
The aspect ratio part of the code helps filter out things that the HSV filter cannot. We know the target is always wider than it is tall, so its aspect ratio will always be greater than 1; if it's not, don't detect it as a possible target. There are other things you can do to help, but my team has found this good enough. You also know it's always going to be at least a certain pixel size, because you can only shoot from a certain part of the field.

jreneew2
08-04-2016, 07:02
The aspect ratio part of the code helps filter out things that the HSV filter cannot. We know the target is always wider than it is tall, so its aspect ratio will always be greater than 1; if it's not, don't detect it as a possible target. There are other things you can do to help, but my team has found this good enough. You also know it's always going to be at least a certain pixel size, because you can only shoot from a certain part of the field.

Yes, that is a good idea; however, I knew axton would have trouble with that section of code, so he should add it back in later once he gets better pictures of the goals.

I also added checks for solidity in my C++ version here (https://github.com/jreneew2/OpenCVTesting/blob/master/CannyStill/CannyStill.cpp) if you want to check it out. Just like in GRIP.

axton900
08-04-2016, 08:56
Yes! It turned out that the bounds are in HSV and not RGB, and that solved some problems. I am hoping to get some good pics during field calibration today for our event and calibrate correctly! Thanks guys for all the help!

Fauge7
09-04-2016, 02:53
Yes! It turned out that the bounds are in HSV and not RGB, and that solved some problems. I am hoping to get some good pics during field calibration today for our event and calibrate correctly! Thanks guys for all the help!

If you need to, you can save images every x frames and write them to the driver station laptop. My team did that during the match to see if we had the values correct (which we did). Sometimes the FTA is rude about measurement; sometimes they are nice.
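A sketch of that every-x-frames dump (the interval, filenames, and URL are arbitrary placeholders):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

public class MatchFrameDumper {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture capture = new VideoCapture();
        capture.open("http://10.##.##.11/mjpg/video.mjpg"); // placeholder stream URL

        Mat frame = new Mat();
        int frameCount = 0;
        final int saveEvery = 30; // roughly once a second at 30 fps
        while (capture.read(frame)) {
            // ...normal vision processing would happen here...
            if (frameCount++ % saveEvery == 0) {
                Imgcodecs.imwrite("frame_" + frameCount + ".png", frame);
            }
        }
        capture.release();
    }
}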

Mr. Rick
14-04-2016, 10:46
Team 3019 graciously presents their vision tracking program for everybody to use and borrow! [...]

Fauge7, thanks so much for taking the time to write this out! We are trying to use your solution, but there are a few errors in your GitHub code. I fixed a couple of them (for example, line 87 and a few merge artifacts left over near the bottom), but I'm still getting the following error when running the jar file on the command line:

Exception in thread "main" java.lang.NullPointerException at org.usfirst.frc.team5407.robot.TowerTracker.main(TowerTracker.java:107)


We are totally new to Java, or any language for that matter. Does anyone have working Java code they would be willing to share? Thanks!

Fauge7
21-04-2016, 18:48
Yes, I messed up the GitHub code, but luckily one of my team members has the code on his GitHub: https://github.com/Aventek/TowerTracker3019Modified

Hopefully this helps; PM me if you have any questions!

E, Palmer
11-07-2016, 01:17
This may be a dumb question, but could someone please explain the "fun" math to me? I am not really sure how it's doing what it is doing...

euhlmann
11-07-2016, 12:12
This may be a dumb question, but could someone please explain the "fun" math to me? I am not really sure how it's doing what it is doing...

I can try :)


y = rec.br().y + rec.height / 2;
y = -((2 * (y / matOriginal.height())) - 1);

These two lines are a bit confusing (lesson to be learned here: don't write spaghetti code in a public code demo :rolleyes:). Let's rearrange the operations:

double half_image_height = matOriginal.height() / 2;
double pixel_y = half_image_height - (rec.br().y + rec.height / 2);
y = pixel_y / half_image_height;

Unless I'm missing something (which is likely), this doesn't seem quite right. Notice the rec.br().y + rec.height/2. I would change that to rec.br().y - rec.height/2, so it finds the middle y-coordinate of the bounding rectangle.

With that change, this operation makes a bit more sense. The point is to calculate the offset, in normalized coordinates, from the middle of the target to the horizon. The horizon is assumed to be at (matOriginal.height()/2). First, it subtracts the y-coordinate of the middle of the target from half of the image height in pixels; this gives the pixel offset from the horizon line of the image to the target's y-coordinate. Then the whole thing is normalized by dividing it by half the pixel image height.

http://image.prntscr.com/image/e150796702bb489fb658b4370ed1858f.png


distance = (TOP_TARGET_HEIGHT - TOP_CAMERA_HEIGHT) / Math.tan((y * VERTICAL_FOV / 2.0 + CAMERA_ANGLE) * Math.PI / 180);


This section is more clear right off the bat.

There are two parts. First, the angle from the horizon to the target is approximated using the small angle approximation.

(y * VERTICAL_FOV / 2.0 + CAMERA_ANGLE) * Math.PI / 180

The small-angle approximation states that sin(x) ≈ x for small values of x (in radians; in practice anything up to about 30 degrees), so they used linear scaling to calculate the angle rather than the more precise

angle = arcsin( pixel_y * sin(VERTICAL_FOV/2.0) / half_image_height )

(Remember that y is actually pixel_y/half_image_height.)
Now, the camera may be tilted relative to the field, meaning that its local "horizon" isn't the same, so the offset CAMERA_ANGLE, the angle that the camera is tilted relative to the actual horizon, is added.

Now the final part is to calculate the distance using the known height of the target and this calculated angle to the horizon.
http://image.prntscr.com/image/cca85137ccdb4d3dbaad801108642533.png
h is the distance from the camera's height to the target's height (TOP_TARGET_HEIGHT - TOP_CAMERA_HEIGHT), so tan(alpha) = h/d => d = h/tan(alpha)

I hope this explains it.
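
Pulled together, the whole derivation collapses into one small function (same constant names as TowerTracker, wrapped in a hypothetical helper for illustration):

// normY is the normalized y from the two lines above: +1 at the top of the
// image, -1 at the bottom; angles in degrees, heights in any consistent unit
static double distanceToTarget(double normY, double cameraAngleDeg,
        double verticalFovDeg, double targetHeight, double cameraHeight) {
    double angleRad = Math.toRadians(normY * verticalFovDeg / 2.0 + cameraAngleDeg);
    return (targetHeight - cameraHeight) / Math.tan(angleRad);
}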

E, Palmer
17-07-2016, 01:37
I am very likely working from a faulty understanding of FOV, but wouldn't this be much simpler?

double AngleToHalfScreen = Vertical_FOV / 2;

double OffsetFromMiddle = Math.abs(targetY - pixelHeight / 2);

double FractionofOffset = OffsetFromMiddle / (pixelHeight / 2);

double FinalAngle = AngleToHalfScreen + (AngleToHalfScreen * FractionofOffset);


This is way too simple to be right.

Thank you so much for taking the time to explain this.