GRIP Axis Camera feed too slow


mlyhoops
31-01-2016, 17:23
Does anyone know how we can get values from NetworkTables to update faster? Our code reads the centerX value that GRIP publishes to NetworkTables, but the values update too slowly (it takes about 2 seconds), so we can't use them to track the target. Right now we are also using a gyro: it requires a separate button press to get the target value first, which we then add to the gyro reading to know how far to turn to the target. We want to make it as quick as possible and use only one button for finding and tracking the target. If you have any answers or questions about our setup, just email me at mlyhoops@gmail.com

ThomasClark
31-01-2016, 17:48
Is it 2 seconds of latency, or is it only getting one frame every two seconds?

Also, are you running on the RIO, or another processor?

Right now we are also using a gyro to get values and know how far we need to turn to the target but it requires us to use another button to get the target value first and then add that to the gyro to be able to find the target

I think this is actually a good way to track the targets. Using the target positions directly as the input to a feedback loop will result in a lot of overshooting, since reading gyro values has a lot less lag than real-time computer vision. Maybe you can just make the process more automated, so the driver can hit a single button and set the whole thing off?

mlyhoops
31-01-2016, 18:36
Is it a seconds second latency, or is it only getting one frame every two seconds?

Also, are you running on the RIO, or another processor?

I think this is actually a good way to track the targets. Using the target positions directly as the input to a feedback loop will result in a lot of overshooting, since reading gyro values has a lot less lag than real-time computer vision. Maybe you can just make the process more automated, so the driver can hit a single button and set the whole thing off?

What is a seconds second latency? It gets anywhere from 13-29 fps.

We are running it on the roboRIO.

How would we adjust for overshooting? We had it working pretty well but it would always end up past the point where we want it and it wouldn't be very accurate.

ThomasClark
31-01-2016, 18:47
What is a seconds second latency?

A typo. If you're getting a decent framerate but some latency, that's normal. It's just that by the time you get images from the camera and process them, they're out of date, although 2 seconds does seem particularly high. Maybe try lowering the resolution.
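
For example (hedged: the parameter names below are from Axis's VAPIX API as I recall them, and "axis-camera.local" is the usual FRC hostname, so check your camera's documentation), you can request a smaller, rate-limited stream directly in the URL that GRIP's IP camera source uses:

http://axis-camera.local/mjpg/video.mjpg?resolution=320x240&fps=15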


How would we adjust for overshooting? We had it working pretty well but it would always end up past the point where we want it and it wouldn't be very accurate.


1. Stop the robot for two seconds
2. Read the current position of the target and calculate the change in rotation you need
3. Use the calculated rotation as the setpoint for a PID loop (http://wpilib.screenstepslive.com/s/4485/m/13810/l/241879-operating-the-robot-with-feedback-from-sensors-pid-control) with the gyro as the input
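
A minimal sketch of that sequence in Java (hedged: the gyro class, drive setup, camera field of view, and PID gains are placeholder assumptions, not anything from this thread):

import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.PIDController;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Assumed hardware: SPI gyro and a two-motor drive on PWM 0/1
ADXRS450_Gyro gyro = new ADXRS450_Gyro();
RobotDrive drive = new RobotDrive(0, 1);
// PID loop that turns in place; gains are made-up starting points
PIDController turn = new PIDController(0.03, 0.0, 0.05, gyro,
        output -> drive.arcadeDrive(0.0, output));

// Assumed camera geometry: 320px-wide image, ~53 degree horizontal FOV
final double DEGREES_PER_PIXEL = 53.0 / 320.0;

double[] centers = NetworkTable.getTable("grip")
        .getNumberArray("targets/centerX", new double[0]);
if (centers.length > 0) {
    // Offset of the first target from the image center, in degrees
    double error = (centers[0] - 160.0) * DEGREES_PER_PIXEL;
    turn.setSetpoint(gyro.getAngle() + error);  // vision picks the goal once...
    turn.enable();                              // ...the gyro closes the loop
}

The important part is the structure: the slow vision result only sets the setpoint, while the fast gyro closes the actual feedback loop, which is what avoids the overshoot.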

Greg McKaskle
31-01-2016, 21:16
Vision processing is not easy. On an FRC field, it is even more not-easy. Realtime vision processing is really not-easy. And real time on an FRC field is downright hard.

GRIP and other tools make it easy to get some initial success and make progress, but be sure to try alternate processing techniques that are better or simpler. Be sure to measure how long your various processing steps take, and determine what they add.

There are also techniques for speeding up network tables. The LV version, and I'm assuming the others, let you set the update rate and flush when time-sensitive updates are made.
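
In Java, a hedged sketch of those two knobs (method names from the 2016-era NetworkTables API as I recall them; verify against your WPILib release):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Push changes every 20 ms instead of the default (roughly 100 ms)
NetworkTable.setUpdateRate(0.02);
// ... write a time-sensitive value, then force an immediate transmit ...
NetworkTable.flush();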

So, think about how you can measure and control what is going on, and keep improving it.

Greg McKaskle

mlyhoops
31-01-2016, 23:10
Thanks!

Fauge7
31-01-2016, 23:13
Also, on the field, image packets get low priority, meaning there's a fair amount of lag between what happens on the field and what you see.

Justin Buist
31-01-2016, 23:43
Personally, I doubt anybody is going to find GRIP useful on the actual robot. Running GRIP on one of our mentors' Core i7 laptops got it hot to the touch, and we were just doing an HSV filter and a contour finder. On my own machine at home it was pegging 30% CPU usage, which is an awful lot for the given task.

When we actually deployed that pipeline to the RoboRIO, we saw something like 75% CPU usage just from GRIP, and the actual robot code wanted the other 75%. Load average was 3.3ish, which on a dual-core machine that isn't I/O bound means you're hitting the CPU too hard and starving the robot process. We quickly ditched the idea of GRIP on the robot and went with hitting OpenCV directly, borrowing from a lot of work done by team 2168. That's working out quite well.
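
For anyone curious, a minimal sketch of what hitting OpenCV directly can look like in Java (hedged: the HSV thresholds, the "frame" variable, and the structure are illustrative assumptions, not team 2168's actual code):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// "frame" is assumed to be a BGR image already grabbed from the camera
Mat hsv = new Mat(), mask = new Mat();
Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
// Keep only green-ish pixels (made-up thresholds; tune for your lighting)
Core.inRange(hsv, new Scalar(60, 100, 100), new Scalar(90, 255, 255), mask);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(mask, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    Rect box = Imgproc.boundingRect(contour);
    double centerX = box.x + box.width / 2.0;  // same value GRIP would publish
}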

That said, there's still a purpose for GRIP: it's really easy to experiment with the provided transformations, even on a student computer. Familiarity with that makes it easier to muck with OpenCV directly, which also runs splendidly on a student computer and doesn't require a RoboRIO.

ThomasClark
31-01-2016, 23:46
That said, there's still a purpose for GRIP: it's really easy to experiment with the provided transformations, even on a student computer. Familiarity with that makes it easier to muck with OpenCV directly, which also runs splendidly on a student computer and doesn't require a RoboRIO.

That's true. Another good use of GRIP is on a dedicated vision coprocessor.

Turing'sEgo
31-01-2016, 23:49
That said, there's still a purpose for GRIP: it's really easy to experiment with the provided transformations, even on a student computer. Familiarity with that makes it easier to muck with OpenCV directly, which also runs splendidly on a student computer and doesn't require a RoboRIO.

Exactly. Perhaps someone, or rather a group of people, could work on a code generator of sorts for GRIP. I'd help contribute to that, but sadly I don't know where to start with code generation; it's not my area of expertise.

ThomasClark
01-02-2016, 00:13
Exactly. Perhaps someone, or rather a group of people, could work on a code generator of sorts for GRIP. I'd help contribute to that, but sadly I don't know where to start with code generation; it's not my area of expertise.

Sounds like a good idea. Code generation is something we considered a while ago, but we decided to focus on other things because NetworkTables was an easy solution, and code generation would require extra effort for each new language we support. If people are willing to put in that effort, that would be pretty cool.

Maybe check out RobotBuilder and see how it does it. Also, if you need help figuring out GRIP's internal structures, feel free to ask questions in the Gitter (https://gitter.im/WPIRoboticsProjects/GRIP#)

1024Programming
01-02-2016, 08:32
Does anyone know how we can get values from NetworkTables to update faster? Our code reads the centerX value that GRIP publishes to NetworkTables, but the values update too slowly (it takes about 2 seconds), so we can't use them to track the target. Right now we are also using a gyro: it requires a separate button press to get the target value first, which we then add to the gyro reading to know how far to turn to the target. We want to make it as quick as possible and use only one button for finding and tracking the target. If you have any answers or questions about our setup, just email me at mlyhoops@gmail.com

Can you please post your code that actually gets the values? Our team has been trying to do this and it doesn't matter if it lags.

ThomasClark
01-02-2016, 12:26
Can you please post your code that actually gets the values? Our team has been trying to do this and it doesn't matter if it lags.

https://github.com/WPIRoboticsProjects/GRIP/wiki/Tutorial:-Run-GRIP-from-a-CPP-or-Java-FRC-program#java

tl;dr

import edu.wpi.first.wpilibj.networktables.NetworkTable;
// "grip" is the table GRIP publishes to; centerX has one entry per contour found
NetworkTable grip = NetworkTable.getTable("grip");
double[] centers = grip.getNumberArray("targets/centerX", new double[0]);
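
(One assumption in that snippet: "targets" is whatever name you gave the publish step in your GRIP pipeline, so adjust the key if yours differs.)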

marshall
01-02-2016, 15:29
Vision processing is not easy. On an FRC field, it is even more not-easy. Realtime vision processing is really not-easy. And real time on an FRC field is downright hard.

QFT.