What Did you use for Vision Tracking?
This year my team didn't have time to get into vision tracking. I've been trying to dive into it, but before I get started I was wondering what the best option is. I've heard a lot of speculation about what is good and what is bad, so I was wondering what people actually used at competition. I would love to hear feedback on what worked and what didn't.
BrianAtlanta
01-05-2016, 13:14
1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate.
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
jreneew2
01-05-2016, 14:08
We used a very similar setup, except written in c++. We actually had virtually no lag on the raspberry pi side. The only portion where there was lag was actually the roborio processing the data from the network tables.
The setup was pretty simple to get up and running. We compiled OpenCV and NetworkTables 3 on the Pi, then wrote a simple C++ program to find the target and send the data needed to align with it back to the roboRIO. I actually followed a video tutorial here (https://www.youtube.com/watch?v=6j-Wy9j0TCs) to install OpenCV on the Raspberry Pi. For NetworkTables, I downloaded the code off of GitHub, compiled it like a normal program, and added it to my library path, if I remember correctly.
granjef3
01-05-2016, 14:14
2383 used a Jetson TX1, with a kinect used as an IR camera. Vision code was written in OpenCV using C++ and communicated with the roboRIO over network tables.
During the offseason we will be exploring the android phone method that 254 used for reliability reasons; the Jetson+Kinect combo was expensive and finicky, compared to an android phone with an integrated battery.
This year Shaker Robotics used the roboRIO with NI Vision (Java) to track the targets; we analyzed frames only when we needed them to avoid using too much of the RIO's resources.
nighterfighter
01-05-2016, 14:56
1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the network table data was not being sent back, and we couldn't figure it out.
We switched to using a PixyCam, and had much better results.
JohnFogarty
01-05-2016, 15:02
4901 used GRIP on an RPi v2 + a Pi Camera.
For more info on our implementation visit here https://github.com/GarnetSquardon4901/rpi-vision-processing
Ben Wolsieffer
01-05-2016, 15:06
We used Java and OpenCV on an NVIDIA Jetson TK1, processing images from a Microsoft Lifecam HD3000.
David Lame
01-05-2016, 15:26
Like most engineering decisions, there isn't a "good" and "bad", but there is often a tradeoff.
We used GRIP on a laptop with an Axis camera. Why? Because our code for vision was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.
There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues. We couldn't go beyond basic capture of the target, and we had to be cautious about the way we drove when under camera control.
Coprocessors, such as a Raspberry PI, TK1, or TX1, (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock), will allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, and the networking, and how to power up a coprocessor, and do it all at the same time, it gets difficult.
If you are trying to prepare for next year, or dare I say it for a career that involves computer vision, I would recommend grip on the laptop as a starting point, because you can experiment with it and see what happens without even hooking to the robot. After you have that down, port it to a PI or an NVIDIA product. The PI probably has the most documentation and example work, so that's probably a good choice, not to mention that the whole setup, including camera, is less than 100 bucks.
Once you get that going, the sky's the limit.
That is very true. I got GRIP working on a laptop, so I'm trying to figure out where the next best place to go is. I don't know much about vision processing algorithms, Python, or OpenCV.
There is good documentation for the Raspberry Pi, which I have been working through when I can.
Thanks for the reply
BrianAtlanta
01-05-2016, 16:34
The code wasn't that hard for our developers, for OpenCV or pyNetworkTables. They looked at tutorials at pyImageSearch; it's a great resource. I think we were getting 20-30 fps, but I would have to review our driver station videos. We would process on the Pi and calculate a few things such as the target (x, y), area, height, and the (x, y) of the center of the image. This information would be sent via pyNetworkTables to the robot code. The robot code would use this information as input to the elevation PID and drivetrain PID.
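For anyone wanting to see the shape of that setup, here's a minimal sketch of the Pi-side sender, assuming a recent pynetworktables; the table and key names are made up for illustration, and 10.12.61.2 is just the standard 10.TE.AM.2 static-address pattern used as an example.

# Minimal sketch of a coprocessor-side pyNetworkTables sender (hypothetical key names).
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.61.2")  # static roboRIO address (10.TE.AM.2 pattern)
table = NetworkTables.getTable("vision")       # hypothetical table name

def publish(center_x, center_y, area, height):
    # values the robot-side elevation/drivetrain PID loops can consume
    table.putNumber("centerX", center_x)
    table.putNumber("centerY", center_y)
    table.putNumber("area", area)
    table.putNumber("height", height)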
The images were also sent via mjpg-streamer to the driverstation for drivers to see, but it's not really needed. We just liked seeing the shooter camera.
We'd be happy to help, just PM me.
Brian
We used a modified version of TowerTracker (http://www.chiefdelphi.com/forums/showthread.php?t=142173&highlight=towertracker) for autonomous alignment and a flashlight for teleop alignment.
Team 987 used an onboard Jetson TK1 for our vision tracking. We programmed it with C++ code using openCV. The Jetson sends target information to the roborio with tcp packets. From there a compressed low frame rate stream was sent to the driver station for diagnostics. The low frame rate stream used very little bandwidth to ensure we stay well under the maximum (it used around 500 KB/s).
We ended up using the RoboRIO with a USB camera, but we attempted to use OpenCV on a Jetson, a BeagleBone Black, and a Raspberry Pi 3. We scrapped the Jetson after realizing how hard we were landing after hitting a defense (but we had OpenCV working), then we scrapped the BeagleBone after the Raspberry Pi 3 was released. We got the Pi to work after 16 hours of compiling OpenCV, but we ran out of time.
rich2202
01-05-2016, 17:49
IP Camera. Custom Code on the Driver Station.
Vision program written in C++. It took the picture off the Smart Dashboard and processed it. Pretty ingenious code. He looked for "corners", rating each pixel for the likelihood it was a corner (top corner, bottom left corner, bottom right corner). The largest grouping was declared a corner.
BrianAtlanta
01-05-2016, 17:57
Our programmers want to clean things up and then we'll be open sourcing our code. With pyNetworkTables, you have to use a static IP; otherwise it won't work on the FMS.
FYI, for Python/OpenCV, the installation of OpenCV takes 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps on how to install OpenCV and Python on Raspbian: 2 hrs to install packages, and the final step, a 2 hr compile.
We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster, we're going to test to see what the difference is.
Brian
Wow that long?
I am extremely new to OpenCV and Python. Do you have any good places to start?
The installation takes a long time because you need to build it on the pi, which does take several hours. Which language are you looking to get started on?
axton900
01-05-2016, 18:04
We used an Axis IP Camera and a Raspberry Pi running a modified version of Team 3019's TowerTracker OpenCV java program. I believe someone posted about it earlier in this thread.
marshall
01-05-2016, 18:14
We worked with Stereolabs in the pre-season to get their Zed camera down in price and legal for FRC teams. They even dropped the price lower once build season started.
We used the Zed in combination with an Nvidia TX1 to capture the location of the tower and rotate/align a turret and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues but when it worked, the software was an accurate combination.
We also did a massive amount of research into neural networks and we've got ball tracking working. It never ended up on a robot but thanks to 254's work that they shared in St Louis (latency compensation and pose estimation/extraction), I think we'll be able to get that working on the robot in the off-season. The goal is to automate ball pickup.
We'll have some white papers out before too long and we're working closely with Nvidia to create resources to make a lot of what we've done easier on teams in the future. Our code is out on Github.
apache8080
01-05-2016, 18:51
We used a USB camera connected to a Raspberry Pi. On the Raspberry Pi we used Python and OpenCV to track the goal from the retro-reflective tape. The tape was tracked by using basic color thresholding to track the color green. Once we found the green color, we found contours in the binary image and used OpenCV moments to calculate the centroid of the goal. After finding the centroid, the program calculated the angle the robot had to turn to center on the goal by using the camera's given field of view angle. Using pynetworktables we sent the calculated angle to the RoboRIO, and then a PID controller turned the robot to that angle. Here is the link to our vision code. (https://github.com/Team3256/FRC_VisionTracking_2016)
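For readers who want to see what that pipeline looks like, here is a condensed sketch of the same idea (the team's real code is at the link above); the HSV bounds and field-of-view value are placeholders, and the angle math is a simple linear degrees-per-pixel approximation.

import cv2
import numpy as np

HFOV_DEG = 60.0                        # placeholder horizontal field of view
LOWER = np.array([60, 100, 50])        # placeholder HSV bounds for the green tape
UPPER = np.array([90, 255, 255])

def angle_to_goal(frame_bgr):
    """Return the horizontal angle (degrees) from image center to the goal, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                       # color threshold
    # [-2:] keeps this working across OpenCV 2/3/4 return signatures
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if not contours:
        return None
    goal = max(contours, key=cv2.contourArea)                   # assume biggest blob is the goal
    m = cv2.moments(goal)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]                                    # centroid x
    width = frame_bgr.shape[1]
    return (cx - width / 2.0) * (HFOV_DEG / width)              # degrees per pixel * offset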
billbo911
01-05-2016, 19:04
We started out with OpenCV on a PCDuino. I say "started out" because we ultimately found we could do really well without it, and because we realized our implementation was actually causing us issues once in a while.
We have identified the root cause of the issues and will be implementing a new process going forward.
We are moving to OpenCV on RPi-3. It is WAY FASTER than what we had with the PCDuino, and is actually a bit less expensive. In addition, there is tons of support in the RPi community.
I'll add that we use ZeroMQ to communicate between the TX1 and Labview Code on the RoboRIO. That turned out to be one of the least painful parts of the development process - it took like 10 lines of C++ code overall and just worked from there on.
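Their setup was C++ on the TX1 talking to LabVIEW, but the pattern is small enough to sketch in Python with pyzmq; the port and message fields below are placeholders, not their actual protocol.

import zmq  # pyzmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)       # coprocessor publishes, robot side subscribes
sock.bind("tcp://*:5555")        # placeholder port

def send_target(angle_deg, distance_in):
    # one small message per processed frame
    sock.send_json({"angle": angle_deg, "distance": distance_in})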
We ran OpenCV for Java on an onboard coprocessor. At first it was run on a kangaroo, but when that burnt out we switched to an onboard laptop.
virtuald
02-05-2016, 11:21
We used OpenCV+Python on the roborio, as an mjpg-streamer (https://github.com/robotpy/roborio-packages/tree/2016/ipkg/mjpg-streamer) plugin so that we could optionally stream the images to the DS. pynetworktables to send data to the robot code.
Only about 40% CPU usage, and it worked really well; the problems we had were in the code that used the results from the camera.
Code can be found here. (https://github.com/frc2423/2016/tree/master/OpenCV)
andrewthomas
02-05-2016, 11:35
Team 1619 used an Nvidia Jetson TK1 for our vision processing with a Logitech USB webcam. We wrote our vision processing code in Python using OpenCV and communicated with the roboRIO and driver station using a custom written socket server similar to NetworkTables. We also streamed the camera feed from the Jetson to the driver station using a UDP stream.
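As a rough illustration of the custom-socket approach (not 1619's actual protocol), a coprocessor can push results over UDP with just a few lines; the address and JSON payload here are made up.

import json
import socket

ROBORIO = ("10.16.19.2", 5800)   # example static address and a port in the team-use range
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_result(angle_deg, distance_in):
    payload = json.dumps({"angle": angle_deg, "distance": distance_in}).encode()
    sock.sendto(payload, ROBORIO)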
MamaSpoldi
02-05-2016, 14:06
Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images. You can quickly train it to search for a specific color and report when it sees it. We selected the simplest interface option provided by the Pixy, which involves a single digital output (indicating "I see a target") and a single analog output (which provides feedback for where within the frame the target is located). This allowed us to provide a driver interface (and also program the code in autonomous) to use the digital to tell us when the target is in view and then allow the analog value to drive the robot rotation to center the goal.
The only issue we came up against after the initial integration was at Champs when the much more polished surface of the driver's station wall reflected back the LEDs to the camera simulating a goal target and we shot at ourselves in autonomous. :yikes: It was quickly fixed by adding the requirement that we had to rotate at least 45 degrees before we started looking for a target.
The PixyCam is an excellent way to provide auto-targeting without significant impact to the code on the roboRIO or requiring sophisticated integration of additional software.
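Team 230 ran this in their own robot code, but the digital-plus-analog scheme is simple enough to sketch; below is a RobotPy-flavored approximation with placeholder channels, gain, and center voltage.

import wpilib

class PixySteer:
    """Read the Pixy's 'target in view' digital line and its analog position output."""
    def __init__(self, dio_channel=0, analog_channel=0, kP=0.4):
        self.seen = wpilib.DigitalInput(dio_channel)   # high when the trained signature is seen
        self.pos = wpilib.AnalogInput(analog_channel)  # voltage maps to x-position in the frame
        self.kP = kP

    def turn_command(self):
        if not self.seen.get():
            return 0.0                                 # no target: don't rotate
        center = 1.65                                  # placeholder voltage at frame center
        return self.kP * (self.pos.getVoltage() - center)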
OpenCV C/C++ mixed source running on a Pine64 coprocessor with a Kinect as a camera. 30fps tracking using the Infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network. Each vision target takes 32 bytes of data and is used for auto alignment and sent to the Driver Station WebUI for driver feedback. Code will be available in a few days, I'm boarding the plane home soon.
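The exact layout isn't given in the post, but as an illustration of how a bounding box can come out to exactly 32 bytes, four 64-bit doubles would do it:

import struct

def pack_target(x, y, w, h):
    return struct.pack("<4d", x, y, w, h)   # 4 little-endian doubles = 32 bytes

def unpack_target(buf):
    return struct.unpack("<4d", buf)        # -> (x, y, w, h)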
Alpha Beta
02-05-2016, 15:15
Nothing too fancy.
LabVIEW FRC Color Processing Example (Thanks NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)
Running in a custom dashboard on the driver's station (i5 laptop several years old.)
Hue, Sat Val parameters stored in a CSV file with the ability to save new values during a match.
Target coordinates sent back to robot through Network Tables.
Axis Camera m1013 with exposure setting turned to 0 in LabVIEW.
Green LED ring with a significant amount of black electrical tape blocking out some of the lights.
PS. For Teleop we had a piece of tape on the computer screen so drivers could confirm the auto aim worked. If center of tape = center of goal, then fire.
PSS. Pop-up USB camera was not running vision tracking.
JamesBrown
02-05-2016, 15:18
This is great, our students have vision on their list of skills to add this offseason. Our programming students have fairly limited experience, so I was looking for a way to incorporate vision that would be easy enough for them to grasp quickly, as we have a lot to work on.
PixyCam looks like a great option.
DinerKid
02-05-2016, 15:31
1768 began the season using OpenCV and a Jetson TK1; we later switched to a Nexus 5X, which became desirable due to its all-in-one packaging (camera and processor, which made taking it off the robot to do testing between events easy) and because our programmers felt it would be simpler to communicate between the roboRIO and the Nexus.
The Nexus was used to measure distance and angle to the target, and this information was then sent to the roboRIO. Nested PID loops then used the NavX MXP gyro data to align the robot to the target. Images taken during the auto-aligning process were used to adjust the turn set point. After two consecutive images returned an angle to the target of less than 0.5 degrees, new images were no longer used to adjust the set point, allowing the PID to maintain a position rather than bounce between slightly varying set points.
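A rough sketch of that set-point logic (names and structure are mine, not 1768's code):

ANGLE_TOLERANCE_DEG = 0.5   # from the post: freeze after two images under 0.5 degrees

class AimSetpoint:
    """Adjust the gyro turn set point from vision until two consecutive
    measurements are within tolerance, then hold it."""
    def __init__(self):
        self.setpoint = 0.0
        self.small_count = 0
        self.locked = False

    def update(self, current_heading, angle_to_target):
        if self.locked:
            return self.setpoint
        if abs(angle_to_target) < ANGLE_TOLERANCE_DEG:
            self.small_count += 1
            if self.small_count >= 2:
                self.locked = True      # two good frames in a row: stop chasing noise
                return self.setpoint
        else:
            self.small_count = 0
        self.setpoint = current_heading + angle_to_target
        return self.setpoint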
~DK
It seems like OpenCV is the way to go. Does anyone have a good tutorial location for OpenCV and vision tracking? I am hoping to put it on a Raspberry Pi.
nighterfighter
02-05-2016, 20:00
We already had our tracking code written for the Axis camera, so our P loop only had to have a tiny adjustment (instead of a range of 320 pixels, it was 5 volts), so the PixyCam swap was almost zero code change.
We got our PixyCam hooked up and running in a few hours. We only used the analog output; we didn't have time to get the digital output on it working. So if it never saw a target (output value of around 0.43 volts, I believe), the robot would "track" to the right constantly. But that is easy enough to fix in code... (if the "center" position doesn't update, you aren't actually tracking).
If we had more time we probably would have used I2C or SPI to interface with the camera, in order to get more data.
I know of at least 2 other teams from Georgia who used the PixyCam as well, being added in after/during the DCMP.
BrianAtlanta
02-05-2016, 20:28
Check out my post on the first page for details, but TLDR - pyImageSearch is a great place if you want to use python. As per the 254 vision processing session at worlds, language doesn't really change performance of openCV, since it's c++ under the covers. So, pick a language you're comfortable with, pyImageSearch has a good amount of tutorials, so we went python.
Brian
What OS do you use? I am struggling with Windows trying to find a pain-free way of installing OpenCV, and I am wondering if I should just VirtualBox a Linux-based OS.
Do you know if it is possible to take the image that the PixyCam sees and stream it back to the driver station? Perhaps using MJPG Streamer or another method.
nighterfighter
02-05-2016, 21:03
Maybe. It streams the image over USB.
So if you could figure out a way to get the roboRIO to recognize it, you might be able to stream it back.
You might be better off letting the PixyCam do processing, and using the axis camera/USB webcam for driver vision.
Edit: You could probably send back the output of the Pixycam though...or reconstruct it. You can get the size and position of each object it senses. Send those back to the driver station, and have a program draw it on screen for you. Anything it doesn't see is just black. So you would have a 320x240 (or whatever resolution) black box, with green/red/etc boxes based on what the Pixy is processing. However, that would be a few frames behind what it currently is detecting.
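If anyone wants to try that reconstruction idea, a sketch of it is only a few lines with OpenCV; this assumes you already have the Pixy block data (e.g. over SPI/I2C) as (x, y, w, h) tuples.

import cv2
import numpy as np

def render_pixy_view(blocks, width=320, height=240):
    """Rebuild a driver-station view from Pixy block data; everything else stays black."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for (x, y, w, h) in blocks:
        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green box per block
    return canvas  # encode/stream this however you like, e.g. cv2.imencode('.jpg', canvas)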
BrianAtlanta
02-05-2016, 23:09
We're running Raspbian, wheezy I think. I've attached the link to the install instructions we used. On this page is a link for the Wheezy variant instructions. Be aware the steps below are a 4 hr process, with the OpenCV compiling taking 2 of those 4 hours.
OpenCV Pi Installation Instructions (http://www.pyimagesearch.com/2015/10/26/how-to-install-opencv-3-on-raspbian-jessie/)
BrianAtlanta
02-05-2016, 23:17
We used mjpg-streamer to stream the processed image with targeting back to the driver station. We did run into a race condition with our streamer. When the streamer was set up, the -r option was used. This deletes the image after it's streamed. The problem came when the streamer tried to pick up the next image before OpenCV wrote it. The streamer would then crash, usually within the 2-5 minute range. We removed the -r option and it didn't crash even after an hour of running.
Another note with the streamer. Consider not thrashing the SD when using the Pi. Constantly writing to the SD can reduce the time before corruption. We switched to writing the image to a RAM Disk, so nothing to the SD card, only memory.
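Another way to dodge that race entirely (beyond dropping -r) is to write the JPEG to a temp file on the RAM disk and atomically rename it into place; a sketch in Python 3, assuming a tmpfs is mounted at /mnt/ramdisk:

import os
import cv2

RAMDISK_DIR = "/mnt/ramdisk"                     # assumes a tmpfs is mounted here
FINAL_PATH = os.path.join(RAMDISK_DIR, "stream.jpg")
TMP_PATH = os.path.join(RAMDISK_DIR, ".stream.jpg.tmp")

def write_frame(frame):
    """Write the processed frame for mjpg-streamer without exposing a half-written file."""
    cv2.imwrite(TMP_PATH, frame)
    os.replace(TMP_PATH, FINAL_PATH)             # atomic rename on the same filesystem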
Brian
We use a Raspberry Pi and RPi Camera, with the exposure turned way, way down and a truly ridiculous amount of green LEDs. Then we do some image processing stuff with OpenCV (blurring, HSV filtering, etc.), then draw contours and filter them based on criteria. Lastly it communicates that over to the RoboRIO through Network Tables. It's all written in Python (the bestest language).
We spent a lot of time trying to get OpenCV in Java to work, and putting it on the RoboRIO. In the end we went with the Raspberry Pi, and didn't feel like GRIP was reliable enough that we would want to use it on our robot during a competition.
Kauaibots (team 2465) used the JetsonTK1 w/a Logitech C930 webcam (90 degree FOV). Software (C++) was in OpenCV, and it detected the angle/distance to the tower light stack, and also the angle to the lights on the edges of the defenses, as well as distance/angle to the retro-reflective targets in the high goal.
Video processing algorithm ran at 30fps on 640x480 images, and wrote a compressed copy (.MJPG file) to SD card for later review, and also wrote a JPEG image to a directory that was monitored by MJPG-Streamer. The VideoProc algorithm was designed to switch between 2 cameras, though we ended up only using one camera. The operator could choose to optionally overlay the detected object information on top of the raw video, so the drivers could see what the algorithm was doing.
Communication w/the RoboRIO was via Network tables, including a "ping" process to ensure the video processor was running, commands to the video processor to select the current algorithm and camera source, and to communicate detection events back to the RoboRIO.
***
The latency correction discussed in the presentation at worlds is a great idea. We have a plan for that.... :)
Moving ahead, the plan is to use the navX-MXP's 100Hz update rate, its dual simultaneous outputs (SPI to RoboRIO, USB to Jetson), and its high-accuracy timestamp to timestamp the video in the video processor, send that to the RoboRIO, and in the RoboRIO use the timestamp to locate the matching entry in a time-history buffer of unit quaternions (quaternions are the value that is used to derive yaw, pitch and roll). This approach, very similar to what was described in the presentation at worlds, corrects for latency by accounting for any change in orientation (pitch, roll and yaw) after the video has been acquired but before the roboRIO gets the result from the video processor.
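As a simplified sketch of that idea (yaw only rather than full quaternions, with made-up names): keep a time history of IMU headings, look up the heading at the frame's timestamp, and subtract off whatever rotation has happened since.

from collections import deque

class HeadingHistory:
    """Short time history of (timestamp, yaw) samples from the IMU."""
    def __init__(self, maxlen=200):              # ~2 s of history at 100 Hz
        self.samples = deque(maxlen=maxlen)

    def add(self, timestamp, yaw_deg):
        self.samples.append((timestamp, yaw_deg))

    def yaw_at(self, frame_timestamp):
        # closest sample to the frame's timestamp; a linear scan is fine at this size
        return min(self.samples, key=lambda s: abs(s[0] - frame_timestamp))[1]

def corrected_target_angle(history, frame_timestamp, angle_from_vision, yaw_now):
    """Remove the rotation that happened while the frame was being processed."""
    yaw_then = history.yaw_at(frame_timestamp)
    return angle_from_vision - (yaw_now - yaw_then)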
We're collaborating with another team who's been working on neural networked detection algorithms, and the plan is to post a whitepaper on the results of this promising concept - if you have any questions please feel free to private message me for details on this effort.
marshall
05-05-2016, 10:53
I wonder who that could be? :confused: :cool:
rod@3711
05-05-2016, 12:42
We used a Logitech Pro 9000 type USB camera connected to the RoboRio. Wrote custom C++ code to track the tower.
A short video of our driver station in autonomous is on youtube. https://youtu.be/PRhgljJ9zus
The yellow box is our region of interest. The light blue highlights show detection of bright vertical lines and yellow highlights show detection of bright horizontal lines. The black circle is our guess at the center-bottom of the tower window.
But alas, we only got it working at the last couple of matches. A lot of fun, but did not help us get to St Louis.
Our tracking code follows:
void Robot::trackTower(){
// Tower Tracking.
// copy the camera image (frame) into a 2D XY array. The array is
// 3D with the 3rd dimension being the 3 color 8 bit characters.
// Restrict search to the upper center of image.
// In both the horizontal X and vertical Y directions, find occurences
// of bright pixels with dark pixels on both sides. Tally every
// occurence in an X and Y histogram.
// this all assumes a 320x240 image and a 4 characters for red/green/blue/extra.
char *arrayP; // point to 2D array
arrayP = (char*)imaqImageToArray(frame, IMAQ_NO_RECT, NULL, NULL);
// not certain how to access array,so copy into local array.
memcpy (array,arrayP, sizeof(array));
memset (histoX,0,sizeof(histoX)); // histograms for dark-bright-dark occurances in X
memset (histoY,0,sizeof(histoY)); // histograms for dark-bright-dark occurances in Y
const int left = 50; // upper center search window
const int right = 210;
const int top = 0;
const int bottom = 60;
const int spread=8; // dark-bright-dark must occur in 6 pixels
const int threshold = 25; // bright must be 30 bigger than dark
// look for the bottom horizontal gaffer tape.
// only look at green color character [1].
// mark each pixel meeting the dark-bright-dark criteria blue
// tally each occurance in X histgram.
for (short col = left; col <= right; col++) {
for (short row = top+spread; row < bottom; row++) {
int center = array[row - spread / 2][col][1];
if (((center - array[row - spread][col][1]) > threshold) &&
((center - array[row][col][1]) > threshold)) {
array[row - spread / 2][col][0] = 0; // blue
// array[row - spread / 2][col][1] = 0;
array[row - spread / 2][col][2] = 255; // red
array[row - spread / 2][col][3] = 0; // flag
histoY[row - spread / 2]++;
}
}
}
// now find horizontal line by finding most occurances.
int max = 0;
int maxY =0; // row number of bottom tape
for (short row = top+1; row < bottom-1; row++) {
// use 3 histogram slots
int sumH = histoY[row-1] + histoY[row] + histoY[row+1];
if (sumH > max){
max = sumH; // found new peak
maxY = row;
}
}
// now look for vertical tapes. Only search down to bottom tape maxY
for (short row = top; row <= maxY; row++) {
for (short col = left+spread; col < right; col++) {
int center = array[row][col - spread / 2][1];
if (((center - array[row][col - spread][1]) > threshold) &&
((center - array[row][col][1]) > threshold)){
array[row][col - spread / 2][0] = 255; // blue
// array[row][col - spread / 2][1] = 255; // green
array[row][col - spread / 2][2] = 0;
array[row][col - spread / 2][3] = 0; // flag
histoX[col - spread / 2]++;
}
}
}
// look for the left and right vertical tapes
int max1 = 0; // first peak
int max2 = 0; // second peak
int maxX1 = 0;
int maxX2 = 0;
for (int col=left+1; col<=right-1; col++) {
// find the biggest peak, use 3 slots
int sumH = histoX[col-1] + histoX[col] + histoX[col+1];
if (sumH > max1){
max1 = sumH;
maxX1 = col;
}
}
for (int col=left+1; col<=right-1; col++) {
// find the 2nd peak
if (abs(maxX1 - col) < spread)
continue; // do not look if close to other peak
int sumH = histoX[col-1] + histoX[col] + histoX[col+1];
if (sumH > max2){
max2 = sumH;
maxX2 = col;
}
}
int maxX = (maxX1 + maxX2) / 2; // center of 2 peaks
if (max2 < 5) // did not find a good second peak
maxX = 0; // put it in middle
int startIndex = 0;
int maxLength = 0;
int maxStart = 0;
int endIndex = 0;
for (int col=left; col<=right; col++) {
int count = 0;
if (array[maxY][col][3] == 0){
count++;
}
if (array[maxY-1][col][3] == 0){
count++;
}
if (array[maxY+1][col][3] == 0){
count++;
}
if (startIndex > 0){
if (count < 1) {
endIndex = col;
if (maxLength < (endIndex - startIndex)){
maxLength = (endIndex - startIndex);
maxStart = startIndex;
}
startIndex = 0;
}
}else{
if(count > 1) {
startIndex = col;
}
}
}
//SmartDashboard::PutNumber("maxLength", maxLength);
maxX = maxStart + (maxLength /2);
// mark region of interest in yellow
for (short row = top; row <= bottom; row++) {
array[row][left][0] = 0; // blue
array[row][left][1] = 255; // green R+G = yellow
array[row][left][2] = 255; // red R+G = yellow
array[row][right][0] = 0; // blue
array[row][right][1] = 255; // green
array[row][right][2] = 255; // red
}
for (short col = left; col < right; col++) {
array[top][col][0] = 0; // blue
array[top][col][1] = 255; // green
array[top][col][2] = 255; // red R+G = yellow
array[bottom][col][0] = 0; // blue
array[bottom][col][1] = 255; // green
array[bottom][col][2] = 255; // red
}
/* look at one color
for (short col = left; col <= right; col++) {
for (short row = top; row < bottom; row++) {
array[row][col][0] = 0; // blue
array[row][col][1] = 0; // green
// array[row][col][2] = 0; // red
}
}
*/
// copy 2D array back into image
memcpy(arrayP, array, sizeof(array));
imaqArrayToImage(frame, array, 320, 240);
//SmartDashboard::PutNumber("a0",array[20][20][0]); // blue
//SmartDashboard::PutNumber("a1",array[20][20][1]); // green
//SmartDashboard::PutNumber("a2",array[20][20][2]); // red
//SmartDashboard::PutNumber("a3",array[20][20][3]); // not used
imaqDispose(arrayP);
// imaqDrawTextOnImage(frame,frame, {10,10},"hi there",NULL,NULL);
imaqDrawShapeOnImage(frame, frame, { maxY-5, maxX-5, 10, 10 }, DrawMode::IMAQ_DRAW_VALUE, ShapeMode::IMAQ_SHAPE_OVAL, 0);
Robot::chassis->trackingX = maxX; // let the world know
Robot::chassis->trackingY = maxY; // let the world know
}
I wonder who that could be? :confused: :cool:
I won't name any names, but our team's Purple Aloha shirts are nowhere as near as loud as the clothing this team likes to wear.... :)
Team 107 was using the NI Vision software and an IP or USB camera. The vision software was located on the driver station. This has bandwidth limits from the field, but by running smaller pictures and some compression we were able to push these limits.
They also had an onboard Kangaroo computer running NI Vision and using the network tables to communicate. Doing this will allow for faster video processing in the future if we write our own camera drivers or see what is out there. It also allows for bigger pictures, so we get more resolution and accuracy. Doing this would also allow for a second camera to be used going through the driver station so the drivers could see where they were going without taking up targeting bandwidth.
Both solutions were capable of running 30 fps, so for weight reasons they went with the desktop version of the software. It is fun to play with the Kangaroo and a target.
Joe Ross
07-05-2016, 22:24
We used the LabVIEW vision example, integrated into our dashboard. I took a quick look at the teams calibrating their vision on the Einstein Mass field, and over half of them used LabVIEW or the NI Vision assistant.
Did using the Labview Vision example running on the roboRIO result in a significant performance decrease?
Did robot controls become more sluggish or have more lag because of the extra CPU usage of the Labview Vision Processing?
We began the season with GRIP running on the roboRIO for an IP camera. For some reason, it would never work on the official field FMSes (it worked fine in our shop and on the practice fields).
We decided to throw all of that out the window and we eventually rolled a Python script utilizing OpenCV on a Raspberry Pi 3 with an accompanying camera module. The Pi communicated with the roboRIO through a USB to Ethernet cable (Ethernet went into the Pi, USB went into the roboRIO's USB Host port), thus creating a separate network between the RIO and the Pi. Finally, we used NetworkTables to actually transport data on the connection. We hoped to avoid any sort of meddling that the FMS might cause by creating our own direct connection between the RIO and Pi (bypassing the radio altogether), and this seemed to do the trick for us.
That's an interesting idea. How did you get the values over the serial connection?
TomLockwood
20-05-2016, 10:53
You reference "The latency correction discussed in the presentation at worlds" - is that presentation available?
Thanks,
Assuming you mean this one (http://www.chiefdelphi.com/forums/showthread.php?t=147568), there's a mostly-complete recording of it here (http://www.chiefdelphi.com/forums/showthread.php?p=1581630#post1581630). There may be a better recording posted in the future, but I haven't seen it yet if it has been.
thatprogrammer
18-07-2016, 08:09
Here's an interesting question... did any teams run live tracking like 254? In other words, were any other teams able to get their turrets to track the goal as they drove towards it, dynamically following it?
cprofitt
11-08-2016, 14:53
We ended up using openCV on a RasPi 2 then used Network Tables to transmit the data to the RoboRio.