Network Tables
lucas.alvarez96
29-03-2014, 18:13
Yeah, I know everybody has asked for this before, but I really need a Network Tables library for C++. I'd used pynetworktables and had fully functioning code in Python, but the latency while using an Axis IP cam (even while connected to my laptop's ethernet port) is too dang high! When I tried C++, it worked like a charm, but there's no Network Tables implementation I can use for sending simple values such as x and y coordinates. Network sockets are sort of intimidating, and I haven't the faintest idea how to implement them on our DS (with C++ and the OpenCV library) and the robot (using Java). I would be truly grateful if someone could send me part of their sockets code, just to get an idea of how to write it.
Thanks in advance! :D
virtuald
30-03-2014, 10:22
How high is the latency? We haven't had any significant problems with it, and all of our code is implemented in python (robot + image processing + custom dashboard).
lucas.alvarez96
30-03-2014, 15:56
I'd reckon about 3 seconds of latency... my FPS is great, it's just that there must be some problem with the buffer :(
import cv2
import urllib
import numpy as np

stream = urllib.urlopen('http://10.25.76.11/mjpg/video.mjpg')
bytes = ''

while True:
    bytes += stream.read(16384)
    a = bytes.find('\xff\xd8')
    b = bytes.find('\xff\xd9')
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
        cv2.imshow('i', i)
        if cv2.waitKey(1) == 27:
            exit(0)
I got the code from StackOverflow. I had tried using a simple VideoCapture(ip) and got it to work in C++, but Python just throws up some errors:
Traceback (most recent call last):
File "C:\Users\Lucas\Documents\opencv\opencv\webcam.py", line 7, in <module>
cv2.imshow("window", img)
error: C:\slave\WinInstallerMegaPack\src\opencv\modules\core\src\array.cpp:2482: error: (-206) Unrecognized or unsupported array type
And when I print out the value of "ret" (the first value returned by VideoCapture::read()), I get False, which indicates that there is no image being captured (duh).
Any ideas?
virtuald
30-03-2014, 16:06
I use this:
vc = cv2.VideoCapture()
vc.set(cv2.cv.CV_CAP_PROP_FPS, 1)

if not vc.open('http://%s/mjpg/video.mjpg' % self.camera_ip):
    return

while True:
    retval, img = vc.read(buffer)
    ...
There's one OpenCV bug that hasn't been fixed yet; it's on their bug tracker here: http://code.opencv.org/issues/2877 . If you compile your own OpenCV you can patch it. I've been meaning to fix it in a better way but haven't done so yet, as I don't have an Axis camera easily available for testing.
lucas.alvarez96
30-03-2014, 16:31
Ok so I tried this:
import cv2
import numpy as np
import time

camera_ip = "10.25.76.11"

vc = cv2.VideoCapture()
vc.set(cv2.cv.CV_CAP_PROP_FPS, 1)

if not vc.open('http://%s/mjpg/video.mjpg' % camera_ip):
    time.sleep(0)

while True:
    retval, img = vc.read(buffer)
    cv2.imshow("img", img)
    if cv2.waitKey(20) == 27:
        break

cv2.destroyAllWindows()
exit(0)
And got this:
Traceback (most recent call last):
File "C:/Users/Lucas/Desktop/cdch/render_stream3.py", line 14, in <module>
retval, img = vc.read(buffer)
TypeError: <unknown> is not a numpy array
So truth be told, I'm not quite sure what's going on... :confused:
And yeah, I'd read somewhere that ffmpeg could sometimes be the source of the problem, but I'm on Windows and haven't the faintest idea how to compile it from source...
I'm very sorry Dustin if the problem is too obvious, but I've been struggling with this for weeks and my team REALLY needs it for the championships...
virtuald
30-03-2014, 16:35
Sorry, I copy/pasted that incorrectly. You should remove the buffer argument from the vc.read() call for initial testing. buffer happens to be a Python built-in, and I was using it as a variable name, so I wasn't allocating a new image buffer each time. So it was actually something like...
h = vc.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)
w = vc.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
capture_buffer = np.empty(shape=(int(h), int(w), 3), dtype=np.uint8)

while True:
    retval, img = vc.read(capture_buffer)
lucas.alvarez96
30-03-2014, 17:03
Yeaaaaah.....same error....
Traceback (most recent call last):
File "C:\Users\Lucas\Desktop\cdch\render_stream3.py", line 19, in <module>
cv2.imshow("img", img)
error: C:\slave\WinInstallerMegaPack\src\opencv\modules\core\src\array.cpp:2482: error: (-206) Unrecognized or unsupported array type
Using this code:
import cv2
import numpy as np
import time

camera_ip = "10.25.76.11"

vc = cv2.VideoCapture()
vc.set(cv2.cv.CV_CAP_PROP_FPS, 1)

if not vc.open('http://%s/mjpg/video.mjpg' % camera_ip):
    time.sleep(0)

h = vc.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)
w = vc.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
capture_buffer = np.empty(shape=(int(h), int(w), 3), dtype=np.uint8)

while True:
    retval, img = vc.read(capture_buffer)
    cv2.imshow("img", img)
    if cv2.waitKey(20) == 27:
        break

cv2.destroyAllWindows()
exit(0)
virtuald
30-03-2014, 17:06
Interesting. Don't pass it a capture buffer then, and see what happens.
virtuald
30-03-2014, 17:07
Wait, the error is on the imshow. Odd. What type/shape is the image?
print img
print img.shape
print img.dtype
lucas.alvarez96
30-03-2014, 17:18
With or without the buffer, there's a problem with the capture. retval comes back False, and img comes back as None, so I can't print img.shape or img.dtype.
AttributeError: 'NoneType' object has no attribute 'shape'
virtuald
31-03-2014, 19:14
With or without the buffer, there's a problem with the capture. retval comes back False, and img comes back as None, so I can't print img.shape or img.dtype.
AttributeError: 'NoneType' object has no attribute 'shape'
That's very strange. It must be something to do with how your ffmpeg was compiled.
lucas.alvarez96
31-03-2014, 19:34
Well, if Windows 8.1 or OpenCV doesn't include ffmpeg by default, then I should go and install it....
virtuald
31-03-2014, 20:13
Oh that's right! You have to copy the ffmpeg DLL to C:\Python27 for it to work correctly. It should be included with the opencv binary distribution. It's rather odd to me that they're separate. It'll be called something like 'opencv_ffmpeg2xx.dll'
JamesTerm
31-03-2014, 20:14
Yeah, I know everybody has asked for this before, but I really need a Network Tables library for C++. I'd used pynetworktables and had fully functioning code in Python, but the latency while using an Axis IP cam (even while connected to my laptop's ethernet port) is too dang high! When I tried C++, it worked like a charm, but there's no Network Tables implementation I can use for sending simple values such as x and y coordinates. Network sockets are sort of intimidating, and I haven't the faintest idea how to implement them on our DS (with C++ and the OpenCV library) and the robot (using Java). I would be truly grateful if someone could send me part of their sockets code, just to get an idea of how to write it.
Thanks in advance! :D
Did you try SmartCppDashboard? It's on FIRSTForge with full source for Network Tables using WinSock2. It also uses ffmpeg to support h.264, with a special build to minimize latency... I can get the link later once I get to a PC.
lucas.alvarez96
31-03-2014, 22:50
Oh that's right! You have to copy the ffmpeg DLL to C:\Python27 for it to work correctly. It should be included with the opencv binary distribution. It's rather odd to me that they're separate. It'll be called something like 'opencv_ffmpeg2xx.dll'
OK Dustin, so I copied the .dll and discovered that I had an old .dll in there (2.4.6; I'm using 2.4.8). It's still not working, but I did discover that the source folder has an ffmpeg folder under 3rdparty which includes a makefile and DLL files. So I'm gonna give that a try.
Did you try SmartCppDashboard? It's on FIRSTForge with full source for Network Tables using WinSock2. It also uses ffmpeg to support h.264, with a special build to minimize latency... I can get the link later once I get to a PC.
James, that would be awesome! Please post the link as soon as you can. Thanks man!
JamesTerm
31-03-2014, 23:17
James, that would be awesome! Please post the link as soon as you can. Thanks man!
Here
https://www.youtube.com/watch?v=nLmviNrMers
It shows a demo of everything... and in there is a link to the FIRSTForge project... I'll include it here for convenience:
http://firstforge.wpi.edu/sf/projects/smartcppdashboard
Now... this code is due for an update, but we are just about ready to head out to the Lonestar regional... so after that I may get the code all updated, probably in the next two weeks. In its current state it should get you going pretty well.
Here is a sneak peek at what the newer stuff can do (not yet checked in there): https://www.youtube.com/watch?v=fccxxlvMqY0
virtuald
31-03-2014, 23:49
I think he's talking about this: http://firstforge.wpi.edu/sf/sfmain/do/viewProject/projects.smartcppdashboard
The opencv files should be structured something like so:
C:\Python27\opencv_ffmpeg2xx.dll
C:\Python27\lib\site-packages\cv2.pyd
I custom compiled my version of ffmpeg, but at this point I don't recall if I had to do something special to get MJPG support. I'll have to go find the source tree if I still have it...
sparkytwd
01-04-2014, 00:17
We had to install ffdshow as well to get loading from a file working on windows.
yash101
So it really seems as though you were in the same boat as me three weeks ago. There is one very easy fix to the problem you are facing, but it requires some out-of-the-box thinking ;)
Through my VC++ OpenCV adventure, I learned that many processor-intensive routines can run so slowly that the processing rate drops below the capture rate. I believe I understand your setup correctly: you are using an MJPEG stream with VideoCapture. I have two solutions for you that should be relatively easy to use.
If you must stay with the network MJPEG stream approach:
--Decrease the capture rate of the camera to the MAX rate that your vision software will run at. This keeps the lag down, and if for some reason the software runs a bit faster, it will just wait for the next frame.
--THREAD YOUR GRABBER
The second option is what saved me. I was getting 30 seconds of lag even though I was running at 5 fps. What this does is run the grabber in parallel. I believe you are pretty decent at C++, so you should be able to figure it out. Try <thread> and <mutex>.
What you want to do is wait for the next frame to be available from the camera and download it immediately. Do not do this on the main thread, because it will make the entire program wait and brick up your effort. After the frame grab, send the data to the shared processing Mat. In your processing loop, copy that shared Mat into a local Mat so the grabber can reuse the resource as quickly as possible! Then perform the operations you want to do. This way, the old frames are constantly discarded as new frames become available (a rough sketch of this pattern follows below).
Try to use both of the options above together. Using just the first option will make no difference; using both is how you get efficiency and get rid of the lag.
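A rough, illustrative sketch of that grabber pattern in C++ with OpenCV follows. This is not anyone's actual robot code: the camera URL is taken from earlier in the thread, and a std::mutex-guarded cv::Mat stands in for the "shared processing Mat" described above.
// Sketch only: a background grabber keeps just the newest frame so the
// (slower) processing loop never falls behind the camera.
#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <mutex>
#include <string>
#include <thread>

static cv::Mat g_latest;                   // most recent frame from the camera
static std::mutex g_mutex;                 // guards g_latest
static std::atomic<bool> g_running(true);

void grabLoop(const std::string& url)
{
    cv::VideoCapture cap(url);
    cv::Mat frame;
    while (g_running && cap.isOpened())
    {
        if (!cap.read(frame))              // blocks until the next MJPEG frame arrives
            continue;
        std::lock_guard<std::mutex> lock(g_mutex);
        frame.copyTo(g_latest);            // overwrite the old frame; stale frames are dropped
    }
}

int main()
{
    const std::string cameraUrl = "http://10.25.76.11/mjpg/video.mjpg";
    std::thread grabber(grabLoop, cameraUrl);

    cv::Mat local;
    while (true)
    {
        bool haveFrame = false;
        {   // copy the shared frame quickly, then release the lock before processing
            std::lock_guard<std::mutex> lock(g_mutex);
            if (!g_latest.empty())
            {
                g_latest.copyTo(local);
                haveFrame = true;
            }
        }
        if (!haveFrame)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
            continue;
        }
        // ... run the actual (slow) vision processing on 'local' here ...
        cv::imshow("latest", local);
        if (cv::waitKey(1) == 27)          // Esc quits
            break;
    }
    g_running = false;
    grabber.join();
    return 0;
}
The important part is that cap.read() blocks inside the grabber thread, not in the processing loop, so stale frames never pile up in the stream buffer.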
I have been paying attention to your posts lately and I believe you have a PandaBoard. Snag an inexpensive USB webcam and that should eliminate most of the lag you are experiencing!
I just gave you my two cents, so good luck!
Also, you might find it easier to implement a UDP bidirectional socket instead of NetworkTables. There are libraries for this in C++, Java and even LabView!
JamesTerm
01-04-2014, 11:31
Also, you might find it easier to implement a UDP bidirectional socket instead of NetworkTables. There are libraries for this in C++, Java and even LabView!
I am not sure of yash101's setup, but if UDP is used from robot to driver station, beware: it costs more bandwidth by nature... which shouldn't matter if it is a direct connection, but if the robot is receiving packets it will need a dedicated thread listening for packets from startup, due to the VxWorks issue. This (http://www.chiefdelphi.com/forums/showthread.php?t=126102) thread elaborates on that and also talks about a bug fix for Network Tables.
I am not sure of yash101's setup, but if UDP is used from robot to driver station, beware: it costs more bandwidth by nature... which shouldn't matter if it is a direct connection, but if the robot is receiving packets it will need a dedicated thread listening for packets from startup, due to the VxWorks issue. This (http://www.chiefdelphi.com/forums/showthread.php?t=126102) thread elaborates on that and also talks about a bug fix for Network Tables.
I don't see how this can be true.
Assuming the data to send is the same size (which depends on implementation), UDP would in the real world send less data, as it never resends packets that failed. You are also sending images on the same ethernet link, so a few extra bytes or packets here or there really makes no difference.
In either case, you need a listener somewhere to read all of the packets from the buffer. Network Tables already created a thread to do this, with UDP you are doing it on your own.
In many ways, I prefer UDP sockets as they are very simple to implement (there are thousands of tutorials on the internet describing basic sockets), you can use a simple struct to organize the data, and a checksum to discard packets that are garbled in transit. No library required. In LV, you can do it super easily by flattening a cluster to a string and then unflattening it on the other side.
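To make the "simple struct plus checksum" idea concrete, here is a minimal, hypothetical sketch of the sending side in C++ with WinSock2 (since the dashboard side runs on Windows). It is not the poster's code; the port, robot IP, and packet layout are invented for illustration, and the matching receiver would recompute the byte sum and silently drop any packet where it doesn't match.
// Sketch: send x/y target coordinates plus a simple checksum over UDP.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstddef>
#include <cstdint>
#pragma comment(lib, "ws2_32.lib")

#pragma pack(push, 1)
struct TargetPacket {
    int32_t  x;         // target x coordinate (pixels)
    int32_t  y;         // target y coordinate (pixels)
    uint32_t checksum;  // simple byte sum of the fields above
};
#pragma pack(pop)

static uint32_t byteSum(const void* data, size_t len)
{
    const uint8_t* p = static_cast<const uint8_t*>(data);
    uint32_t sum = 0;
    for (size_t i = 0; i < len; ++i) sum += p[i];
    return sum;
}

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in robot = {};
    robot.sin_family = AF_INET;
    robot.sin_port = htons(5800);                        // placeholder port
    inet_pton(AF_INET, "10.25.76.2", &robot.sin_addr);   // placeholder robot IP

    TargetPacket pkt = {};
    pkt.x = 120;
    pkt.y = -45;
    pkt.checksum = byteSum(&pkt, sizeof(pkt) - sizeof(pkt.checksum));

    sendto(sock, reinterpret_cast<const char*>(&pkt), sizeof(pkt), 0,
           reinterpret_cast<sockaddr*>(&robot), sizeof(robot));

    closesocket(sock);
    WSACleanup();
    return 0;
}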
JamesTerm
01-04-2014, 13:05
I don't see how this can be true.
Assuming the data to send is the same size (which depends on implementation), UDP would in the real world send less data, as it never resends packets that failed. You are also sending images on the same ethernet link, so a few extra bytes or packets here or there really makes no difference.
I should post some benchmarks... I think I got the high bandwidth from UDP because I used the DO_NOT_WAIT flag when creating the socket. So perhaps it would be less, but I have yet to see confirmation of that.
I don't see how this can be true.
In either case, you need a listener somewhere to read all of the packets from the buffer. Network Tables already created a thread to do this, with UDP you are doing it on your own.
Needing a listener somewhere is not the same as needing a listener on its own dedicated thread, listening even at startup. When I first dinked around with UDP I used WinSock2 for everything, and could set the options such that I didn't need to make a new thread... this was great and kept the code simple. I then transferred this same code to VxWorks and saw the fireworks... the driver station would repeatedly lose connection and get it back. The UDP buffer overflowed and corrupted the TCP/IP packets.
So ... you don't need to have a dedicated thread if you are using WinSock2, but for VxWorks, and the original WinSock... you do.
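For illustration, here is roughly what a dedicated listener looks like in C++ with BSD-style sockets and std::thread; on the cRIO it would be a VxWorks task rather than std::thread, and the port is a placeholder. The point is simply that something is pulling packets out of the receive buffer from startup onward.
// Sketch: one thread blocks on recvfrom() and drains packets for the life of the
// program, so the OS receive buffer can never back up.
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <thread>

void listenerThread()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return;

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5800);                     // placeholder port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
    {
        close(sock);
        return;
    }

    char buf[64];
    while (true)
    {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);  // blocks
        if (n <= 0)
            continue;
        // ... copy the payload into a shared, mutex-protected "latest packet" here ...
    }
}

int main()
{
    std::thread(listenerThread).detach();            // listening from startup, before auton/teleop
    while (true)
    {
        // ... normal robot code (disabled/auton/teleop handling) runs here ...
        usleep(10000);                               // stand-in for the main loop period
    }
}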
Alan Anderson
01-04-2014, 13:48
I then transferred this same code to VxWorks and saw the fireworks... the driver station would repeatedly lose connection and get it back. The UDP buffer overflowed and corrupted the TCP/IP packets.
So ... you don't need to have a dedicated thread if you are using WinSock2, but for VxWorks, and the original WinSock... you do.
I believe you are ascribing the wrong cause to what you saw. VxWorks running on the cRIO has a single shared buffer for all incoming network communication, and that's what made your flood of UDP traffic get in the way of the TCP packets from the Driver Station. Under Windows, your UDP buffer was possibly overflowing and (correctly) discarding packets, but under Windows that wouldn't mess up network communication on any other ports.
You do need to read the incoming network data at least as fast as it is arriving, but there is no requirement for you to do it in its own thread.
JamesTerm
01-04-2014, 14:23
I believe you are ascribing the wrong cause to what you saw. VxWorks running on the cRIO has a single shared buffer for all incoming network communication, and that's what made your flood of UDP traffic get in the way of the TCP packets from the Driver Station. Under Windows, your UDP buffer was possibly overflowing and (correctly) discarding packets, but under Windows that wouldn't mess up network communication on any other ports.
You do need to read the incoming network data at least as fast as it is arriving, but there is no requirement for you to do it in its own thread.
What you say here is more or less what I was trying to say... WinSock2 handles the stress of neglecting to process the packets. It was indeed the neglect of processing the incoming packets that caused this symptom, which I confirmed with Greg McKaskle, and Brian from team 118 also ran into this issue back in 2012. It may indeed be possible to make it work without its own thread... you could implement some kind of handshake solution, or defer sending packets until the Autonomous or Teleop functions are being called. But those alternatives are messy implementations IMHO. NetworkTables, on the other hand, does not have these issues at all, and that's probably one of the main reasons I switched... that and the ease of use, where adding more variables manually in UDP is a real pain. I've included UDP_Listener.h (https://www.dropbox.com/s/njo8ic6675cwyx1/UDP_Listener.h) and UDP_Listener.cpp (https://www.dropbox.com/s/897kmeiepgvlrdy/UDP_Listener.cpp) for reference; they let me macro-switch from UDP to Network Tables... it was so much easier working with Network Tables that I dropped this whole interfacing design.
mhaeberli
01-04-2014, 23:09
So, maybe I didn't follow your analysis and notes well enough, but FYI I had similar problems using the direct Python OpenCV VideoCapture bindings, until I had the Windows PATH variable set correctly to point at the OpenCV install. I'm seeing latencies of about a second, but nothing like 30.
Why wait until Auton or Teleop functions are called?
Why don't you just run your code all the time, and then additionally run auton or teleop code based on the driver station data?
JamesTerm
02-04-2014, 15:40
Why wait until Auton or Teleop functions are called?
Why don't you just run your code all the time, and then additionally run auton or teleop code based on the driver station data?
I take it you are talking about the idea of waiting for auton and teleop to start listening. Network Tables does what you describe ("run your code all the time"), so that any C++ programmer can use it without having to know anything about writing a new thread (i.e. task). I agree that this is the best way, because it can still work while the robot is disabled and it handles two-way communication.
But let's go back for a moment... when I first did UDP, I waited for Auton and Teleop before I knew about this bug. Why did I do it that way? Because writing threads is a messy business, and it should be avoided if at all possible. I mean, look at the trouble that happened with the lockup bug in Network Tables... Using WinSock2 I was able to listen for packets on the same thread, and it was nice and clean code to work out. So given that... all of our code runs on a single thread... we don't use the PID that comes with WPI... it does not need to be on a separate thread. Instead we introduce a time-slice delta in seconds into the computations... this way the PID can work in 10ms (ish) iterations on the same thread rather than the default 50ms on a separate thread... thus being free from any mess of critical sections!
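To make the time-slice idea concrete, here is a minimal illustrative sketch (not JamesTerm's actual code) of a PID update that takes the measured delta time in seconds, so it can be ticked from the one main loop at roughly 10ms instead of running in its own task; the gains and the simulated loop are placeholders.
// Sketch: a PID update that takes the elapsed time slice (seconds) as a
// parameter, so it can run inside the single main loop with no extra thread.
#include <chrono>
#include <thread>

struct PIDController
{
    double kP = 0.0, kI = 0.0, kD = 0.0;
    double integral = 0.0;
    double prevError = 0.0;

    double calculate(double error, double dt)   // dt = seconds since last call
    {
        integral += error * dt;
        const double derivative = (dt > 0.0) ? (error - prevError) / dt : 0.0;
        prevError = error;
        return kP * error + kI * integral + kD * derivative;
    }

    void reset() { integral = 0.0; prevError = 0.0; }   // e.g. on re-enable
};

int main()
{
    PIDController pid;
    pid.kP = 1.0;
    pid.kI = 0.0;
    pid.kD = 0.05;                                      // illustrative gains only

    auto last = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; ++i)                       // stand-in for the ~10ms robot loop
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        const auto now = std::chrono::steady_clock::now();
        const double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        const double error = 5.0;                       // placeholder for setpoint - measurement
        const double output = pid.calculate(error, dt);
        (void)output;                                   // would be fed to a motor controller here
    }
    return 0;
}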
lucas.alvarez96
02-04-2014, 18:53
Thanks for all the help :D
So does anybody have the Network Tables library for C++?
I haven't got Wind River and nobody on my team knows where the CD is...
We run everything in a single task (including a UDP listener and RS232 listener). We just don't stop the task when auton or teleop is disabled. We clear the integrators and reset to a safe state when the disabled signal goes from low to high, and run everything in a single 10ms high priority task. The UDP listener reads in a While loop with a 0ms timeout and breaks when the read returns a timeout. The RS232 listener does similar.
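The description above is LabVIEW, but for the C++ readers of this thread, here is an illustrative sketch of the same "drain until the read would block" pattern with POSIX-style sockets (VxWorks' socket API is close but not identical); the port, buffer size, and loop bound are placeholders.
// Sketch: a non-blocking recvfrom() returns immediately when the buffer is empty,
// which plays the role of the 0ms-timeout read that ends the drain loop.
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5800);                                   // placeholder port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
        return 1;

    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);    // never block the loop

    char buf[64];
    for (int tick = 0; tick < 1000; ++tick)                        // stand-in for the 10ms task
    {
        // Drain everything that arrived since the last iteration.
        while (true)
        {
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
            if (n < 0)
                break;   // empty buffer (EWOULDBLOCK) or a real error: stop draining
            // ... validate the packet and keep only the newest one here ...
        }
        // ... rest of the 10ms iteration (drive code, state machines, PID) ...
        usleep(10000);                                             // stand-in for the loop period
    }
    close(sock);
    return 0;
}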
I don't have the UDP setup down, but I would like to get the code down sometime soon.
I was wondering if anyone knows how to write a socket server with javax.microedition.io.*;
I am currently taking a shot in the dark and have no idea where to start.
JamesTerm
03-04-2014, 10:24
We run everything in a single task (including a UDP listener and RS232 listener). We just don't stop the task when auton or teleop is disabled. We clear the integrators and reset to a safe state when the disabled signal goes from low to high, and run everything in a single 10ms high priority task. The UDP listener reads in a While loop with a 0ms timeout and breaks when the read returns a timeout. The RS232 listener does similar.
Wow! You can do that in LabVIEW? Or did you make the switch to C++?
This Main.cpp (https://www.dropbox.com/s/gcgk9f6fid393ay/Main.cpp) is how we set up. There is only one member variable that stays instantiated between autonomous and teleop. Back when I started in 2011, I instantiated everything in auton and redid it all in teleop... that wreaked havoc on teleop, where we lost controls (our first match ever... went down like that). I found that for best results you should keep everything ready to go between modes. Looking at this code, you can see how the loops happen within the Autonomous() and OperatorControl() callbacks. I can see now from your response that I could have one loop... and these callbacks would simply signal what just happened... that is a cool idea. :)
Wow! You can do that in LabVIEW? Or did you make the switch to C++?
This Main.cpp (https://www.dropbox.com/s/gcgk9f6fid393ay/Main.cpp) is how we set up. There is only one member variable that stays instantiated between autonomous and teleop. Back when I started in 2011, I instantiated everything in auton and redid it all in teleop... that wreaked havoc on teleop, where we lost controls (our first match ever... went down like that). I found that for best results you should keep everything ready to go between modes. Looking at this code, you can see how the loops happen within the Autonomous() and OperatorControl() callbacks. I can see now from your response that I could have one loop... and these callbacks would simply signal what just happened... that is a cool idea. :)
I have an RT timed loop in Robot Main
Within the Robot Status VI (the one that returns an enum of teleop enabled/auton enabled/teleop disabled...) is a read of a bit register which has an auton bit, an enabled bit, and a test bit. I read this directly and do a bit of Boolean logic to output an 'auton enabled' flag and a 'reinit' flag. Whenever we are not auton enabled we run the driver control code, and whenever the enabled bit goes from low to high we reinit. Reinit resets the state machines to their initial (safe) states and clears the integrators of all of the control loops.
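The post above describes LabVIEW, but the flag derivation is easy to show in a few lines; here is an illustrative C++ sketch of the same Boolean logic with invented names.
// Sketch: derive 'auton enabled' and 'reinit' from raw status bits; reinit fires
// on the rising edge of the enabled bit.
#include <cstdio>

struct RobotStatusBits
{
    bool autonBit;
    bool enabledBit;
    bool testBit;
};

struct ModeFlags
{
    bool autonEnabled;   // run autonomous code this iteration
    bool reinit;         // reset state machines / clear integrators
};

ModeFlags computeFlags(const RobotStatusBits& now, bool& prevEnabled)
{
    ModeFlags flags;
    flags.autonEnabled = now.autonBit && now.enabledBit && !now.testBit;
    flags.reinit = now.enabledBit && !prevEnabled;   // low -> high transition of 'enabled'
    prevEnabled = now.enabledBit;                    // remember for the next iteration
    return flags;
}

int main()
{
    bool prevEnabled = false;
    RobotStatusBits status = {false, true, false};   // teleop, just enabled
    ModeFlags f = computeFlags(status, prevEnabled);
    std::printf("autonEnabled=%d reinit=%d\n", f.autonEnabled, f.reinit);  // prints 0 1
    return 0;
}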
We no longer use any of the FRC provided framework, or the majority of the libraries, and the code has gotten a lot simpler and cleaner since we did that.