Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Vision on separate board (http://www.chiefdelphi.com/forums/showthread.php?t=119947)

Noam787 02-10-2013 12:02

Vision on separate board
 
Hi everyone,

So, we were a rookie team last year, and we didn't expect to have such high ping when running vision on the driver station computer. We are considering using a single-board computer such as a Raspberry Pi, and we want to know if anyone has had experience doing vision on a single-board computer, which boards you would recommend, and what to consider when picking one.

Thanks, GreenBlitz #4590

Bald & Bearded 02-10-2013 12:14

Re: Vision on separate board
 
Last year we used a PandaBoard single-board computer, based on a white paper from another team. The two key issues you need to address, no matter what you choose, are:
1. Power - You will want to build voltage regulators to power the board off of the main power distribution board.
2. Communication - Spend a great deal of time defining how your vision program will communicate with the cRIO and the rest of the robot code. The KISS principle applies as always. We also had the board streaming images back to the DS; if you want to do that, make sure you can control the bandwidth usage (resolution and frames per second; see the sketch below).
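
A minimal sketch of capping resolution and frame rate with OpenCV's Python bindings (the property names are from current OpenCV; the device index and numbers are example values, not what we ran):

Code:

    import cv2

    cap = cv2.VideoCapture(0)                # first USB camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # low resolution keeps the stream small
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    cap.set(cv2.CAP_PROP_FPS, 15)            # cap the frame rate too

    ok, frame = cap.read()
    if ok:
        # recompress aggressively before streaming anything back to the DS
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 30])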

There are lots of good, cheap single-board computers out there; I would just make sure you pick one that has a solid Linux build with all drivers available.

ekapalka 02-10-2013 23:56

Re: Vision on separate board
 
If your weight can afford it, I would recommend getting a used laptop computer (super cheap; maybe add an SSD). The power supply shouldn't be too difficult to figure out, and you'll probably be able to get more processing power. Of course, many teams have had success with Raspberry Pi's and other Linux compatible systems such as ODROID X2s or UDOOs (both have quad cores; worth looking at for vision processing). I can't believe you guys did vision processing your rookie year. You guys must have been pretty dedicated and organized :P

faust1706 03-10-2013 09:29

Re: Vision on separate board
 
I'd recommend using an ODROID product, that is, the X2, X, U2, or U. They are quad-core, ARM-based, 1.7 GHz, and much more powerful than the Pi that many teams seem to use for some reason. As for talking to the cRIO, we have been using UDP messages.
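
A minimal sketch of that UDP approach (the cRIO address, port, and message format are placeholders, not 1706's actual protocol):

Code:

    import socket

    CRIO_ADDR = ("10.17.6.2", 1140)  # hypothetical cRIO IP and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0

    def send_target(x, y):
        """Fire one datagram per processed frame; losing one is harmless."""
        global seq
        seq += 1
        # a sequence number lets the receiver ignore out-of-order packets
        sock.sendto("{},{},{}".format(seq, x, y).encode(), CRIO_ADDR)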

Noam787 03-10-2013 10:45

Re: Vision on separate board
 
What specs should I look for when picking a board?

protoserge 03-10-2013 13:25

Re: Vision on separate board
 
Quote:

Originally Posted by Noam787 (Post 1294351)
What specs should I look for when picking a board?

I'd look for at least a dual-core ARM Cortex-A9 processor, plus RAM, Ethernet, USB host, and I2C, running Linux and OpenCV. This has been well documented, and there are a lot of recent threads on this forum that you will find useful.

As mentioned before, the ODROID (www.hardkernel.com) is a good unit, and I think it has one of the best specifications (quad core, etc.). I think we will be getting one to experiment with. In the past, we have used one of the single-core MK801 Android PC units. We upgraded to a dual-core MK808, but never used it in competition.

I would not use an onboard laptop unless you absolutely have to; the size and weight penalties far outweigh the small cost of an ARM-based single-board computer.

ekapalka 03-10-2013 13:40

Re: Vision on separate board
 
Quote:

Originally Posted by Noam787 (Post 1294351)
What specs should I look for when picking a board?

A fast multi-core CPU. ARM-based boards and other small computers won't be able to exploit the GPU for vision processing, so the CPU is pretty much all that matters. After that, I would look for something with USB 3.0 ports to accommodate higher-end cameras (which you don't necessarily need; it's just a perk).

Joe Ross 03-10-2013 14:06

Re: Vision on separate board
 
There are two major issues that may cause latency when doing vision on the driver station: the bandwidth limit and driver station CPU throughput. Both can have an impact even when doing vision on a separate computer, so it would be helpful to determine which one caused your issue, so you can avoid it with your new architecture (or even get it working with the old one).

The bandwidth limit on the field is 7 Mbps. As the bandwidth used approaches that limit, latency increases. There is very good data about this in the FMS white paper. This only affects data sent over the radio, so if your vision processing is confined to your onboard computer, it won't be an issue. However, you will most likely want vision feedback on the driver station, so you will still need to worry about bandwidth. One thing the white paper doesn't cover is that dark pictures compress much more easily and are also easier to process. See http://www.chiefdelphi.com/forums/sh...2&postcount=44

The Classmate PC has only an Atom processor, which can be overloaded just by displaying high-resolution images. If the CPU is overloaded, it will also affect network latency. You can look at the driver station CPU usage on the Charts tab of the driver station. Whether you do driver-station or onboard vision processing, you will need to manage CPU usage carefully, as vision processing very quickly consumes all available CPU time. Limiting the rate at which vision processing occurs is smart no matter what platform you use. Again, if you send the processed images back to the driver station, you will need to worry about driver station CPU usage regardless of where the processing occurs, if you are using the Classmate.
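
A minimal sketch of one way to limit the processing rate, as suggested above (the 5 Hz figure and the two stub functions are illustrative placeholders):

Code:

    import time

    def grab_frame():
        """Placeholder for your camera capture (e.g. cv2.VideoCapture.read)."""
        return None

    def process_frame(frame):
        """Placeholder for your vision pipeline."""
        pass

    PERIOD = 0.2  # process at most 5 frames per second
    last = 0.0
    while True:
        wait = PERIOD - (time.time() - last)
        if wait > 0:
            time.sleep(wait)  # sleep instead of burning CPU on extra frames
        last = time.time()
        process_frame(grab_frame())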

Noam787 03-10-2013 15:40

Re: Vision on separate board
 
Quote:

Originally Posted by Joe Ross (Post 1294375)
There are two major issues that may cause latency when doing vision on the driver station: the bandwidth limit and driver station CPU throughput. ...

We actually used our own laptop; it had an i7 CPU and 8 GB of RAM, more than enough to run our image processing program. The problem was that we couldn't compress the image, because we used the camera almost exclusively from the other side of the field, and finding the goals required a higher-resolution image. Still, I don't know whether bandwidth was the problem, because we took only one image to calculate the angle to turn. That's why we want to try using a board to process images.
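
For reference, that single-image angle calculation usually reduces to mapping the target's pixel offset through the camera's field of view; a minimal sketch (the resolution and FOV are made-up example numbers, not our camera's):

Code:

    import math

    IMAGE_WIDTH = 640      # example capture width, pixels
    HORIZONTAL_FOV = 47.0  # example horizontal field of view, degrees

    def angle_to_target(target_x):
        """Degrees to turn, from the target's x pixel coordinate."""
        offset = target_x - IMAGE_WIDTH / 2.0  # pixels from image center
        # pinhole model: half the image width subtends tan(FOV/2)
        focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HORIZONTAL_FOV / 2.0))
        return math.degrees(math.atan2(offset, focal_px))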

billbo911 04-10-2013 15:05

Re: Vision on separate board
 
Quote:

Originally Posted by Noam787 (Post 1294194)
Hi everyone,

So, we were a rookie team last year, and we didn't expect to have such high ping when running vision on the driver station computer. ...

Today and tomorrow, 2073 will be using a PCDuino and an MS USB webcam onboard our robot at CalGames. It is running Ubuntu and OpenCV to do our target tracking. It passes the corners and center position of the top target to the cRIO as a 24-character string over the local network on the robot. No WiFi bandwidth limitations at all.

This is the first time we have fully implemented this in competition, so when the team returns, we will let you know how it performs.

yash101 07-10-2013 23:00

Re: Vision on separate board
 
I am also working on vision processing for our team. After finding out about the bandwidth restrictions, I decided it would be better to do all the work onboard the robot. I was loving the Pi until today, when I found out about the ODROID. It just made me nuts; the processor should be faster than my new i3 laptop! OpenCV might like running on the ODROID. I find the Pi good for continuous applications requiring less power, but I think the ODROID is more of a "performance" board, which is what a robot needs. Also, my Pi is overclocked to 1.1 GHz. Use a USB camera to stay off the network. For some of these boards, you can also use I2C instead of the network, reducing network load and even making the system more robust.

billbo911 07-10-2013 23:43

Re: Vision on separate board
 
Quote:

Originally Posted by billbo911 (Post 1294588)
Today and tomorrow, 2073 will be using a PCDuino and an MS USB webcam onboard our robot at CalGames. ...

From all the reports I got back from the team, the camera tracking worked exactly as intended. For the first half of the day we had great success with it.
But... later in the day, something in the robot went south and we lost all ability to drive. Luckily, we have zero indication it was related to the camera system in any way. Most likely we lost a DSC or possibly the PDB.

Either way, we are quite happy with the off-board vision processing.

yash101 08-10-2013 16:44

Re: Vision on separate board
 
What FPS did you get?

alxg833 08-10-2013 17:20

Re: Vision on separate board
 
The biggest issue I've been having so far is just getting communications up. What's your preferred method of interfacing the board with the cRIO? I've tried basic TCP, but I was getting a ton of lag for some reason.

billbo911 08-10-2013 18:25

Re: Vision on separate board
 
Quote:

Originally Posted by alxg833 (Post 1295348)
The biggest issue I've been having so far is just getting communications up. What's your preferred method of interfacing the board with the cRIO? I've tried basic TCP, but I was getting a ton of lag for some reason.

Our approach is to have the offboard processor do all the heavy lifting. It processes the images to determine the "target" location, then places that information, in the form of a string, into a memory location. Every image that generates a valid target overwrites the previous data.

We use a socket request handler on the board to respond to socket requests from the cRIO. The response to a request is to send the latest string to the cRIO and then close the socket. This way, only the latest target information is passed to the cRIO, as a 24-character string. The cRIO then uses that information to perform whatever task we have coded it to do.

We are not sending images from the board to the cRIO; doing so would really defeat the purpose of using the offboard processor.
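
A minimal Python sketch of that pattern (the port and placeholder strings are illustrative, not our actual code, which runs alongside the OpenCV loop on the PCDuino):

Code:

    import socket
    import threading

    latest = "no_target"  # overwritten by the vision loop on every valid target
    lock = threading.Lock()

    def set_target(s):
        """Called by the vision loop whenever a frame yields a valid target."""
        global latest
        with lock:
            latest = s

    def serve(port=1180):  # hypothetical port
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()  # the cRIO connects to ask for data
            with lock:
                conn.sendall(latest.encode())
            conn.close()  # one answer per request, then hang up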

yash101 09-10-2013 00:29

Re: Vision on separate board
 
I bet the team was proud of the vision tracking!?:D :cool:

alxg833 09-10-2013 15:29

Re: Vision on separate board
 
Quote:

Originally Posted by billbo911 (Post 1295357)
We use a socket request handler on the board to respond to socket requests from the cRIO. ...

Pardon me for being a bit dense, but how are the socket requests sent? Most of my difficulties have been just getting the Pi and the cRIO to talk to each other, and my efforts with NetworkTables have been met with considerable frustration. Do you guys have any advice?

I also started learning NetworkTables very recently (about 1-2 weeks ago), so my knowledge might not be as thorough as it could be yet; what I described above is pretty much the extent of my working knowledge of it.

billbo911 09-10-2013 16:25

Re: Vision on separate board
 
1 Attachment(s)
Quote:

Originally Posted by alxg833 (Post 1295563)
Pardon me for being a bit dense, but how are the socket requests sent? ...

Ewww, ick, NetworkTables! I have had nothing but headaches learning about them.
This method does not use them.

In the attached picture, you can see what our "Socket Request Receiver" looks like.

We only run it 10 times a second because that is sufficient. Everything other than the TCP Open Connection, TCP Close Connection, TCP Read, and Scan From String blocks is there to display the data or compute the target center.

The IP address and port numbers are set to point to the PCDuino, or the rPi in your case. The IP and port combined form a "socket"; the PCDuino listens for requests on that socket.
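
The receiver in the attachment is LabVIEW, but the same poll is easy to sketch in a text language; a minimal Python equivalent (the host and port are hypothetical examples):

Code:

    import socket

    def poll_target(host="10.20.73.12", port=1180, timeout=0.1):
        """One request: connect to the board's socket, read the latest
        target string, close. Returns None if the board doesn't answer."""
        try:
            conn = socket.create_connection((host, port), timeout=timeout)
            data = conn.recv(64).decode()  # the 24-character target string
            conn.close()
            return data
        except socket.error:
            return None  # skip this cycle and ask again next time

    # call this at 10 Hz, matching the loop rate described above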

yash101 13-10-2013 21:07

Re: Vision on separate board
 
I contacted RoboRealm, and they are coming up with an ARM version for Linux. Sadly, it won't be out until next year. Anyway, what do you guys think about setting up with RoboRealm and then using the same operations in OpenCV onboard the robot?

Chadfrom308 14-10-2013 08:52

Re: Vision on separate board
 
Quote:

Originally Posted by alxg833 (Post 1295348)
The biggest issue I've been having so far is just getting communications up. What's your preferred method of interfacing the board with the cRIO? I've tried basic TCP, but I was getting a ton of lag for some reason.

Try UDP, it is faster than TCP!

http://www.diffen.com/difference/TCP_vs_UDP

Take a look at this; it lays out all the differences between them.

yash101 14-10-2013 09:10

Re: Vision on separate board
 
That's a pretty good comparison.

ekapalka 14-10-2013 13:58

Re: Vision on separate board
 
Quote:

Originally Posted by Chadfrom308 (Post 1296305)
Try UDP, it is faster than TCP!

Is it possible to communicate with the cRIO via UDP (target information) and the DriverStation via TCP (sending camera images) at the same time?

Greg McKaskle 15-10-2013 09:00

Re: Vision on separate board
 
The driver station talks to the robots via UDP and the dashboard uses TCP.

I'd hesitate to simply call UDP faster than TCP. They are like nails and screws: you should use the right one for the task. If you attempt to make UDP as robust as TCP, you will almost certainly add more overhead than TCP does; and if you don't, you will lose data, and the rest of your application will need to deal with holes in the data.

UDP is great for repetitive protocols where you can tolerate lost packets, but it limits the amount of data you can send in a single packet. TCP can send large payloads, checks for and retransmits portions that are lost, and delivers exactly the data you sent, but it will sometimes take a bit longer to do so.

Greg McKaskle

Chadfrom308 15-10-2013 12:23

Re: Vision on separate board
 
Well, how big is the tracking information? It should just be a string of coordinates and maybe some other information.

How much of that can you send in a packet? I'm not good with networking, but I don't think it would take much bandwidth (a rough sketch of the arithmetic follows below)...

And do you need 20 fps for vision? I can process that no problem; the only question is, do I need to? Maybe just send vision data when you are aiming. I don't see the point of trying to find targets when you are picking up Frisbees.
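
For scale, a minimal sketch of a tracking packet (the field layout is a made-up example): even a generous message is tens of bytes, far below the ~1,472-byte payload that fits in one non-fragmented UDP datagram over Ethernet:

Code:

    import struct

    # hypothetical layout: sequence number, target x, y, distance, angle
    packet = struct.pack("!Ihhff", 42, 160, 120, 3.5, -12.5)
    print(len(packet))  # 16 bytes: trivial next to the 7 Mbps field limit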

Joe Ross 15-10-2013 12:39

Re: Vision on separate board
 
Quote:

Originally Posted by ekapalka (Post 1296387)
Is it possible to communicate with the cRIO via UDP (target information) and the DriverStation via TCP (sending camera images) at the same time?

Yes.

Hjelstrom 15-10-2013 12:41

Re: Vision on separate board
 
Quote:

Originally Posted by Greg McKaskle (Post 1296540)
The driver station talks to the robots via UDP and the dashboard uses TCP. ...

I will second what Greg says above. Even in the video games industry, we almost exclusively use TCP nowadays (though that is not my area of expertise). I'd only use UDP for things where you don't care if you lose a packet, and I'd use TCP for everything else. Using UDP to transmit joystick controls or something like that (with a simple mechanism for ignoring "older" packets) makes sense, for example.

Our Kinect co-processor in 2012 was a TCP listen server. We used the information on this site to learn enough to write the necessary networking code: http://beej.us/guide/bgnet/ The cRIO connected as a client to our PandaBoard "server", and you could also connect to the PandaBoard from the driver station. We used simple strings for our commands, so you could interact with it through a telnet prompt from the driver station. To do this, all you need is the code to open a connection and then send strings back and forth.
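
A minimal sketch of that kind of line-oriented listen server, in Python for brevity (our real code followed the C sockets in the Beej guide; the command and port here are invented for illustration):

Code:

    import socket

    def handle(line):
        """Map a text command to a reply; the command set is hypothetical."""
        if line == "target":
            return "x=160 y=120 dist=3.5\n"
        return "unknown command\n"

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 1180))  # hypothetical port
    srv.listen(2)         # room for the cRIO plus a telnet session
    while True:
        conn, _ = srv.accept()
        f = conn.makefile("rw")  # line-at-a-time, telnet-friendly
        for line in f:
            f.write(handle(line.strip()))
            f.flush()
        conn.close()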

techhelpbb 15-10-2013 13:18

Re: Vision on separate board
 
I use UDP all the time to move large specific flows of data within massive server environments (10,000+ systems) with very strict timing requirements.

As others have said, TCP works well in the general case: sending data of some unknown type at some unknown interval over a generally reliable link.

Where TCP does not do so well is in situations it was never really designed for: streaming video in real time, or sending timing-sensitive data, where the TCP machinery that ensures transmissions reliably reach the other end can get in the way on unreliable links.

There are often situations where you can engineer one system or the other to tolerate TCP's shortcomings, and there are other situations where it just makes more sense to use UDP.

How much data do I send through UDP every day? 50-75 Gigabytes.

Please be aware that if you are using UDP on Linux, you really should tune the communication stack parameters to make it more tolerant. The default UDP parameters will often contribute to packet loss on commonplace links. Of course, keep in mind that FIRST exercises some control over this system, so there are some parameters you can't control (on the field, in the robot communications hardware, or within some peripherals).
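
For example, the usual first step is enlarging the socket receive buffer so bursts of datagrams aren't dropped; a minimal sketch (the 1 MB figure is an arbitrary example):

Code:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # request a 1 MB receive buffer; on Linux the kernel silently caps this
    # at net.core.rmem_max, which may itself need raising via sysctl
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))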

Chadfrom308 15-10-2013 13:20

Re: Vision on separate board
 
Quote:

Originally Posted by Hjelstrom (Post 1296577)
Our Kinect co-processor in 2012 was a TCP listen server. ...

Your vision system was amazing. What else did you do to implement it, sensor-wise?

Hjelstrom 15-10-2013 14:36

Re: Vision on separate board
 
Thanks! In case you haven't seen this paper, it has a lot of details:

http://www.chiefdelphi.com/media/papers/2698?

The other sensors involved were a gear-tooth sensor for measuring the shooter wheel speed and an FRC gyro mounted on the turret, which was used to guide the turret to the angle requested by the aiming system.

yash101 15-10-2013 16:58

Re: Vision on separate board
 
That article was actually the first one I read after I started getting set up for vision processing. By the way, Greg, how's it going? Are you going to the Las Vegas Regional?

Also, was your vision processing last season onboard or on the driver station?

Hjelstrom 16-10-2013 16:58

Re: Vision on separate board
 
Quote:

Originally Posted by yash101 (Post 1296644)
That article was actually the first one I read after I started getting set up for vision processing. By the way, Greg, how's it going? Are you going to the Las Vegas Regional?

Also, was your vision processing last season onboard or on the driver station?

Hi! Yes, I'll be at the Las Vegas Regional.

In 2013 we used something similar to 341's vision system, which they presented in 2012. It ran on the driver station, which was extremely useful because it lets you "see" what the system is doing all of the time; having a co-processor on your robot turns out to be a big hassle. In 2012, the depth data provided by the Kinect allowed us to make shots from a wide variety of locations, but in 2013 aiming really did not require depth, so a more traditional vision system worked great.

Here is the vision paper by 341:
http://www.chiefdelphi.com/media/papers/2676?

magnets 16-10-2013 17:14

Re: Vision on separate board
 
Unless you're using a Kinect, you don't need a board on the robot. The driver station laptop works perfectly well for vision. Our robot could line up with the target in less than 1 second every time with this method.

The lag from the driver station is minimal: it's very easy for a dark, compressed 320x240, 20 fps stream to use less than 3 Mbit/s (a rough sketch of the arithmetic follows below), and round-trip times for packets are usually under 50 ms. 341 had a great vision setup in 2012, and they didn't need a super lightning-fast image-processing response from an onboard processor.
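
The arithmetic behind that figure, as a quick sketch (the per-frame size is an illustrative assumption for a dark, heavily compressed 320x240 JPEG):

Code:

    BYTES_PER_FRAME = 8000  # assumed size of one dark, compressed 320x240 JPEG
    FPS = 20

    bits_per_second = BYTES_PER_FRAME * 8 * FPS
    print(bits_per_second / 1e6)  # 1.28 Mbit/s, well under the 3 Mbit/s cited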

Unless somebody can show me a specific example of the driver station link not being fast enough, I maintain that using a separate board for vision is not a good use of your time.

yash101 19-10-2013 01:28

Re: Vision on separate board
 
Quote:

Originally Posted by Hjelstrom (Post 1296847)
Hi! Yes, I'll be at the Las Vegas Regional. ...

Here is the vision paper by 341:
http://www.chiefdelphi.com/media/papers/2676?

This was exactly what I was looking for. Our team wants to do the processing on the laptop. Anyway, have you tried RoboRealm, or just OpenCV?

Hjelstrom 19-10-2013 16:24

Re: Vision on separate board
 
Quote:

Originally Posted by yash101 (Post 1297307)
This was exactly what I was looking for. Our team wants to do the processing on the laptop. Anyway, have you tried RoboRealm, or just OpenCV?

I have not tried RoboRealm.

alxg833 21-10-2013 15:29

Re: Vision on separate board
 
Just wanted to say that I owe a lot of thanks to you guys. Your advice has been incredibly helpful and easy to understand, especially to someone who's trying to learn new material. Thank you all so much!

