#1 | 15-10-2014, 09:47
matan129 (Registered User)
FRC #4757 (Talos) | Team Role: Programmer
Join Date: Oct 2014 | Rookie Year: 2015 | Location: Israel | Posts: 19
Optimal board for vision processing

Hello, I'm a new (first-year) member of my team (this will be the team's 3rd year). As an enthusiast developer, I will be part of the programming sub-team. We program the RoboRIO in C++ and the SmartDashboard in Java or C# (using IKVM to port the Java binaries to .NET).

In this period before the competition starts, I'm learning as much material as I can. A friend of mine and I have been thinking about developing a vision processing system for the robot, and we pretty much figured that running it on the RoboRIO (or the cRIO we have from last year) is no good because, well, it's just too weak for the job. We thought about streaming the video live to the driver station (Classmate/another laptop), where it would be processed and the results sent back to the RoboRIO. The problem is the 7 Mbit/s networking bandwidth limit and, of course, the latency.
So we thought about adding a separate board, connected to the RoboRIO, and doing the image processing there. We considered an Arduino or a Raspberry Pi, but we are not sure they are powerful enough for the task either.

So, to sum up: what is the best board to use in an FRC vision system?

Also, if we connect, for example, a Raspberry Pi to the robot's router and the router to the IP camera, the 7 Mbit/s bandwidth limit does not apply, right? (Because the camera and the Pi are connected over the local LAN.)

P.S. I am aware that this question has been asked on this forum before, but that was a year ago, so there may be better or other options today.

Last edited by matan129 : 15-10-2014 at 09:53.
#2 | 15-10-2014, 10:14
jman4747 (Just building robots) | AKA: Josh
FRC #4080 (Team Reboot) | Team Role: CAD
Join Date: Apr 2013 | Rookie Year: 2011 | Location: Atlanta, GA | Posts: 422
Re: Optimal board for vision processing

The most powerful board in terms of raw compute is the Jetson TK1. It has an Nvidia GPU, which is orders of magnitude faster than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge beats most if not all other SBCs on the market. It is, however, $192 and much larger than an R-Pi.

http://elinux.org/Jetson_TK1

PS Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson

http://www.chiefdelphi.com/forums/sh...ghlight=Jetson
__________________
---------------------
Alumni, CAD Designer, machinist, and Mentor: FRC Team #4080

Mentor: Rookie FTC Team "EVE" #10458, FRC Team "Drewbotics" #5812

#banthebag
#RIBMEATS
#1620

Last edited by jman4747 : 15-10-2014 at 10:19.
#3 | 15-10-2014, 10:24
techhelpbb (Registered User)
FRC #0011 (MORT - Team 11) | Team Role: Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Optimal board for vision processing

So far, the vision processing MORT11 has done has been with a stripped-down dual-core AMD mini-laptop (bigger than a netbook) on the robot, worth less than $200 on the open market. It has the display and keyboard removed. It has proven legal in the past, but we have rarely relied on vision processing, so it is often removed from the robot mid-season. It was also driven 200+ times over the bumps in the field with an SSD inside, and it still works fine. For cameras we used USB cameras like the PS3 Eye, which has a professional vision library on Windows and can handle 60 frames per second on Linux (though you hardly need that).

That laptop is heavier than the single-board computers, in part because of the battery. However, I would suggest the battery is worth the weight. Since the laptop is COTS, the extra battery is legal. This means the laptop can be running while the robot is totally off.

The tricky part is not finding a single-board or embedded system that can do vision processing. The tricky part is powering it reliably, and the battery fixes that issue while providing enormous computing power in comparison.

Very likely, none of the embedded and single-board systems that will invariably be listed in this topic will be able to compete on cost/performance with a general-purpose laptop. Market forces in the general computing industry drive prices differently.

The cRIO gets around this issue because it takes boosted 19 V from the PDU and then bucks it down to the internal low voltages it needs. As the battery sags under motor loads, dropping from 19 V is no big deal if you only need 3.3 V. Since switching regulators are generally closed-loop, they adapt to these changing conditions.

So just be careful: the 5 V regulated outputs on the robot PDU may not behave the way you want, or may not provide the wattage you need, and then you have to think about how you intend to power this accessory.

People have worked around this in various ways: largish capacitors, COTS power supplies, or just using the PDU. I figure that since electronics engineering is not really a requirement for FIRST, using a COTS computing device with a reliable, production-grade power system asks less of a team.

Keep in mind that I see no reason an Apple/Android device like a tablet or cell phone would not have been legal on the robot in past competitions, as long as the various radio parts are properly turned off. Someone could build a vision processing system on an old phone using the phone's camera, and connect it to the rest of the system through the phone's audio jack (think Square credit card reader), display (put a photo-transistor against the display and toggle pixels), or charging/docking port (USB/debugging; with Apple, be warned they use a licensed chip you might need to work around). I've been playing around with ways to do this since I helped create a counter-proposal against the NI RoboRIO, and it can and does work. In fact, I can run the whole robot off an Android device itself (no cRIO or RoboRIO).

Last edited by techhelpbb : 15-10-2014 at 10:36.
#4 | 15-10-2014, 10:26
matan129 (Registered User)
FRC #4757 (Talos) | Team Role: Programmer
Join Date: Oct 2014 | Rookie Year: 2015 | Location: Israel | Posts: 19
Re: Optimal board for vision processing

Quote:
Originally Posted by jman4747
The most powerful board in terms of raw compute is the Jetson TK1. It has an Nvidia GPU, which is orders of magnitude faster than a CPU for virtually any vision processing task. And if you just want to use its CPU, it still has a quad-core 2.32 GHz ARM, which to my knowledge beats most if not all other SBCs on the market. It is, however, $192 and much larger than an R-Pi.

http://elinux.org/Jetson_TK1

PS Here are some CD threads with more info:
http://www.chiefdelphi.com/forums/sh...ghlight=Jetson

http://www.chiefdelphi.com/forums/sh...ghlight=Jetson
Thanks for the suggestion! But it's kind of pricey compared to the Pi. Is it worth it?
Also, is developing for CUDA any different from 'normal' development?

Quote:
Originally Posted by techhelpbb
So far, the vision processing MORT11 has done has been with a stripped-down dual-core AMD mini-laptop (bigger than a netbook) on the robot, worth less than $200 on the open market. It has the display and keyboard removed. It has proven legal in the past, but we have rarely relied on vision processing, so it is often removed from the robot mid-season.

That laptop is heavier than the single-board computers, in part because of the battery. However, I would suggest the battery is worth the weight. Since the laptop is COTS, the extra battery is legal. This means the laptop can be running while the robot is totally off.

The tricky part is not finding a single-board or embedded system that can do vision processing. The tricky part is powering it reliably, and the battery fixes that issue while providing enormous computing power in comparison.

Very likely, none of the embedded and single-board systems that will invariably be listed in this topic will be able to compete on cost/performance with a general-purpose laptop. Market forces in the general computing industry drive prices differently.

The cRIO gets around this issue because it takes boosted 19 V from the PDU and then bucks it down to the internal low voltages it needs. As the battery sags under motor loads, dropping from 19 V is no big deal if you only need 3.3 V. Since switching regulators are generally closed-loop, they adapt to these changing conditions.

So just be careful: the 5 V regulated outputs on the robot PDU may not behave the way you want, or may not provide the wattage you need, and then you have to think about how you intend to power this accessory.

People have worked around this in various ways: largish capacitors, COTS power supplies, or just using the PDU. I figure that since electronics engineering is not really a requirement for FIRST, using a COTS computing device with a reliable, production-grade power system asks less of a team.

Keep in mind that I see no reason an Apple/Android device like a tablet or cell phone would not have been legal on the robot in past competitions, as long as the various radio parts are properly turned off. Someone could build a vision processing system on an old phone using the phone's camera, and connect it to the rest of the system through the phone's audio jack, display, or charging/docking port.
Thanks for the detailed info! But in that case, I guess I can just use a stripped-down Classmate (we have 2 of those) or any other mini-laptop (I assume the Atom processor is more than powerful enough in terms of computing power). Also, what platform did you use to develop the image processing code?

Last edited by matan129 : 15-10-2014 at 10:32.
#5 | 15-10-2014, 10:40
Chadfrom308 (Slave to the bot) | AKA: Chad Krause
FRC #0308 (The Monsters) | Team Role: Driver
Join Date: Jan 2013 | Rookie Year: 2011 | Location: Novi | Posts: 272
Re: Optimal board for vision processing

I would say the most bang for your buck is the BeagleBone Black. 987 used it back in 2012 with the Kinect sensor. Very powerful, and if I remember correctly it got about 20 fps. Maybe somebody can give a more accurate number, but it is plenty powerful. It's the same type of computer as an RPi (a single-board microcomputer) and has Ethernet for UDP communication.

Odroid and pcDuino are both good options too.

RPis are okay. I hear most teams get anywhere from 2 fps to 10 fps (again, all depending on what you are doing). I would say for simple target tracking you would get about 5 fps.

I also want to start doing some vision tracking this year on a separate board. I would end up using the regular dashboard (or maybe a slightly modified one) with LabVIEW, and I would use a BeagleBone or maybe an RPi just to start off. I don't know how to use Linux, which is my biggest problem. Does anyone have information on how to auto-start vision tracking on Linux? I need something simple to follow.
#6 | 15-10-2014, 10:43
techhelpbb (Registered User)
FRC #0011 (MORT - Team 11) | Team Role: Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129
Thanks for the detailed info! But in that case, I guess I can just use a stripped-down Classmate (we have 2 of those) or any other mini-laptop (I assume the Atom processor is more than powerful enough in terms of computing power). Also, what platform did you use to develop the image processing code?
We started testing this idea when the COTS rules first allowed a computing device, several years ago (more than 3 years ago).

Our first tests were conducted on Dell Mini 9s running Ubuntu Linux LTS version 8, which I had loaded on mine while doing development work on another, unrelated project. The Dell Mini 9 has a single-core Atom processor.

Using Video4Linux and OpenJDK (Java), the programming captain crafted his own recognition code. I believe it helped get him into college. It was very interesting.

We then tried a dual-core Atom Classmate, and it worked better once his code was designed to use the extra core.

Between seasons I slammed together a vision system using 2 cameras on a Lego Mindstorms PTZ mount, with OpenCV and Python. With that you could locate yourself on the field using geometry, not parallax.

Other students have since worked on other Java based and Python based solutions using custom and OpenCV code.

I have stripped parts out of OpenCV and loaded them onto ARM processors to create a camera with vision processing built in. It was mentioned in the proposal I helped submit to FIRST. I think using an old phone is probably more cost-effective (manufacturers make lots of a single model of phone, and when they are old they plummet in price).

OpenCV wraps Video4Linux, so from the 'use a USB camera' perspective the real upside of OpenCV is that it takes care of things like detecting the camera being attached and setting the modes. Still, Video4Linux is pretty well documented, and the only grey area you will find is if you pick a random camera: every company that puts a USB interface on a CMOS or CCD camera does its own little thing with the configuration values. So I suggest finding a camera you can understand (Logitech or PS3 Eye) and not worrying about the other choices. A random cheapo camera off Amazon or eBay might be a huge pain when you can buy a used PS3 Eye at GameStop.
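As a rough illustration (not our actual code), a minimal OpenCV capture loop in Python looks like this; the camera index 0 and the 640x480 mode are assumptions that depend on your camera:

Code:
# Minimal USB camera capture loop using OpenCV's Python bindings.
# OpenCV's V4L backend handles device detection and mode setting.
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)   # property constants as
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)  # named in OpenCV 2.x

while True:
    ok, frame = cap.read()  # one BGR frame as a numpy array
    if not ok:
        break               # camera unplugged or read error
    # ... run your recognition code on `frame` here ...

cap.release()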

Last edited by techhelpbb : 15-10-2014 at 10:53.
#7 | 15-10-2014, 10:50
matan129 (Registered User)
FRC #4757 (Talos) | Team Role: Programmer
Join Date: Oct 2014 | Rookie Year: 2015 | Location: Israel | Posts: 19
Re: Optimal board for vision processing

Quote:
Originally Posted by Chadfrom308
I would say the most bang for your buck is the BeagleBone Black. 987 used it back in 2012 with the Kinect sensor. Very powerful, and if I remember correctly it got about 20 fps. Maybe somebody can give a more accurate number, but it is plenty powerful. It's the same type of computer as an RPi (a single-board microcomputer) and has Ethernet for UDP communication.

Odroid and pcDuino are both good options too.

RPis are okay. I hear most teams get anywhere from 2 fps to 10 fps (again, all depending on what you are doing). I would say for simple target tracking you would get about 5 fps.

I also want to start doing some vision tracking this year on a separate board. I would end up using the regular dashboard (or maybe a slightly modified one) with LabVIEW, and I would use a BeagleBone or maybe an RPi just to start off. I don't know how to use Linux, which is my biggest problem. Does anyone have information on how to auto-start vision tracking on Linux? I need something simple to follow.
Thanks for the info! Actually, I have a friend who has an RPi (Model B) lying around; I guess he will let me test with it. If it won't do, I'll check out the BeagleBone.
Also, can someone answer my question about the bandwidth limit?

And I might be able to assist you with Linux.
If I remember correctly, open the terminal and run
Code:
sudo crontab -e
Then you will be able to edit the crontab, which is basically a file that schedules automated tasks on Linux systems. Add the following line to it:
Code:
@reboot AND_THEN_A_COMMAND
The command you typed should then be executed at every startup.
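For example, to launch a vision program at boot (the script path here is made up; substitute your own):

Code:
# in root's crontab: run the vision launcher once at every boot
@reboot /home/pi/vision/start.sh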
#8 | 15-10-2014, 10:55
techhelpbb (Registered User)
FRC #0011 (MORT - Team 11) | Team Role: Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129
Also, can someone answer my question about the bandwidth limit?
I have been a CSA at several competitions over the years.
If you can avoid sending live video that you depend on over the WiFi, please do (I speak for myself, not FIRST or 11/193, when I write this).

I can assure you that you probably do not have the bandwidth you think you have.
I can back that up with various experiences and evidence I have collected over the years.

If you must send something to the driver station, send pictures one at a time over UDP if you can.
If you miss one, do not send it again.
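A rough sketch of that pattern in Python (the address, port, and JPEG quality below are placeholders, not FIRST-mandated values):

Code:
import socket
import cv2

DS_ADDR = ("10.47.57.5", 5800)  # driver station IP/port: placeholders
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_picture(frame):
    # Compress hard so one picture fits in a single datagram.
    ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 30])
    if ok and len(jpg) < 60000:  # stay under the ~64 KB UDP limit
        sock.sendto(jpg.tostring(), DS_ADDR)
    # No retransmission: if this picture is lost, the next one replaces it.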

I have no interest in hijacking this topic with any dispute over this (so if anyone disagrees, feel free to take it up with me in private).

Last edited by techhelpbb : 15-10-2014 at 11:00.
#9 | 15-10-2014, 10:57
jman4747 (Just building robots) | AKA: Josh
FRC #4080 (Team Reboot) | Team Role: CAD
Join Date: Apr 2013 | Rookie Year: 2011 | Location: Atlanta, GA | Posts: 422
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129
Thanks for the suggestion! But it's kind of pricey compared to the Pi. Is it worth it?
Also, is developing for CUDA any different from 'normal' development?
It is worth it IF you want to process video and track an object continuously. As for power, the new voltage regulator's 12 V / 2 A port will be more than enough. The Jetson needs 12 V, and people have tested it running heavy-duty vision applications on the GPU/CPU without cracking 1 amp.
It is so easy to use the GPU. After the setup (installing libraries, updating, etc.), we were tracking red 2014 game pieces on the GPU within 30 minutes. We used the code from here: http://pleasingsoftware.blogspot.com...-computer.html Read through this and the related GitHub repository linked in the article:
https://github.com/aiverson/BitwiseAVCBalloons

OpenCV has GPU libraries that basically work automatically with the Jetson:
http://docs.opencv.org/modules/gpu/doc/gpu.html
In the GitHub repository of the above example you can also see the different compile command for activating GPU usage:
https://github.com/aiverson/BitwiseA...aster/build.sh

If you ever use that code on the Jetson, note: the program opens a display window for each step of the process, and closing those displays speeds the program up from 4 fps (all windows open) to 16 fps (only the final output open). I presume that with the final output closed and no GUI at all (i.e., how it would run on a robot) it would be much faster. Also, we used this camera, set to 1080p, for the test: http://www.logitech.com/en-us/produc...ro-webcam-c920
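The tracking pipeline itself is nothing exotic. A CPU-only sketch of the same steps in Python (the HSV bounds for "red" are illustrative and would need tuning; the repository's GPU version follows the same structure):

Code:
import cv2
import numpy as np

def find_red_object(frame):
    # BGR -> HSV; these "red" bounds are illustrative only.
    # (Red wraps around hue 180, so a real detector ORs in a second range.)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]),
                            np.array([10, 255, 255]))
    # OpenCV 2.x returns (contours, hierarchy) here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)  # assume largest blob
    x, y, w, h = cv2.boundingRect(biggest)
    return (x + w // 2, y + h // 2)               # target center, pixels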
__________________
---------------------
Alumni, CAD Designer, machinist, and Mentor: FRC Team #4080

Mentor: Rookie FTC Team "EVE" #10458, FRC Team "Drewbotics" #5812

#banthebag
#RIBMEATS
#1620

Last edited by jman4747 : 15-10-2014 at 10:59.
#10 | 15-10-2014, 11:06
matan129 (Registered User)
FRC #4757 (Talos) | Team Role: Programmer
Join Date: Oct 2014 | Rookie Year: 2015 | Location: Israel | Posts: 19
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb
I have been a CSA at several competitions over the years.
If you can avoid sending live video that you depend on over the WiFi, please do (I speak for myself, not FIRST or 11/193, when I write this).

I can assure you that you probably do not have the bandwidth you think you have.
I can back that up with various experiences and evidence I have collected over the years.

If you must send something to the driver station, send pictures one at a time over UDP if you can.
If you miss one, do not send it again.

I have no interest in hijacking this topic with any dispute over this (so if anyone disagrees, feel free to take it up with me in private).
Yeah, people have told me basically what you said here, but I asked whether local LAN traffic (the camera connected by Ethernet cable to the router on the robot, which also connects by Ethernet to the RPi/Nvidia board/any other board) counts against the network bandwidth limit. (It does not seem likely, but I want to be sure.)

Quote:
Originally Posted by jman4747
It is worth it IF you want to process video and track an object continuously. As for power, the new voltage regulator's 12 V / 2 A port will be more than enough. The Jetson needs 12 V, and people have tested it running heavy-duty vision applications on the GPU/CPU without cracking 1 amp.
It is so easy to use the GPU. After the setup (installing libraries, updating, etc.), we were tracking red 2014 game pieces on the GPU within 30 minutes. We used the code from here: http://pleasingsoftware.blogspot.com...-computer.html Read through this and the related GitHub repository linked in the article:
https://github.com/aiverson/BitwiseAVCBalloons

OpenCV has GPU libraries that basically work automatically with the Jetson:
http://docs.opencv.org/modules/gpu/doc/gpu.html
In the GitHub repository of the above example you can also see the different compile command for activating GPU usage:
https://github.com/aiverson/BitwiseA...aster/build.sh

If you ever use that code on the Jetson, note: the program opens a display window for each step of the process, and closing those displays speeds the program up from 4 fps (all windows open) to 16 fps (only the final output open). I presume that with the final output closed and no GUI at all (i.e., how it would run on a robot) it would be much faster. Also, we used this camera, set to 1080p, for the test: http://www.logitech.com/en-us/produc...ro-webcam-c920
Cool, thanks! I'll look into this. If I'm processing at a resolution of, say, 1024x768 (more seems like too much), roughly how many FPS will I get?
#11 | 15-10-2014, 11:12
techhelpbb (Registered User)
FRC #0011 (MORT - Team 11) | Team Role: Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129
Yeah, people have told me basically what you said here, but I asked whether local LAN traffic (the camera connected by Ethernet cable to the router on the robot, which also connects by Ethernet to the RPi/Nvidia board/any other board) counts against the network bandwidth limit. (It does not seem likely, but I want to be sure.)
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, so traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.

Quote:
Originally Posted by matan129
Cool, thanks! I'll look into this. If I'm processing at a resolution of, say, 1024x768 (more seems like too much), roughly how many FPS will I get?
That greatly depends on how you achieve it.
If you do it in compiled code you can easily achieve 5 fps or more (with reduced color depth).
If your CPU is slow or your code is bad, things might not work out so well.

Anything you send over TCP/IP, TCP will try to deliver, and once it starts it is hard to stop (hence "reliable transport"). With UDP you control the protocol, so you can choose to give up. This means that with UDP you need to do more work. Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements. I would rather see a good cooperative solution that people can leverage and discuss/document than a lot of people rediscovering how to do this in a vacuum over and over.
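For instance, the receiver's "give up" policy can be as simple as draining the socket and keeping only the newest picture; a sketch in Python (the port is a placeholder):

Code:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5800))  # listen port: a placeholder
sock.setblocking(False)       # never wait on the network

def latest_picture():
    # Drain the queue, keeping only the newest datagram; anything
    # older is a stale frame we deliberately give up on.
    newest = None
    while True:
        try:
            newest = sock.recv(65535)
        except socket.error:
            return newest  # queue empty (None if nothing has arrived)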

I will put in an honorable mention here for VideoLAN (VLC) as far as unique and interesting ways to send video over a network go.
Anyone interested might want to look it over.

Last edited by techhelpbb : 15-10-2014 at 11:22.
#12 | 15-10-2014, 11:21
jman4747 (Just building robots) | AKA: Josh
FRC #4080 (Team Reboot) | Team Role: CAD
Join Date: Apr 2013 | Rookie Year: 2011 | Location: Atlanta, GA | Posts: 422
Re: Optimal board for vision processing

Quote:
Originally Posted by matan129
Cool, thanks! I'll look into this. If I'm processing at a resolution of, say, 1024x768 (more seems like too much), roughly how many FPS will I get?

When we tried going down to 480p, our frame rate did not improve per se; however, the capture time went down, which is very important for tracking. That said, our test wasn't extensive, so there are other factors at play. It may or may not improve overall performance.
__________________
---------------------
Alumni, CAD Designer, machinist, and Mentor: FRC Team #4080

Mentor: Rookie FTC Team "EVE" #10458, FRC Team "Drewbotics" #5812

#banthebag
#RIBMEATS
#1620

Last edited by jman4747 : 15-10-2014 at 11:24.
#13 | 15-10-2014, 11:21
Jared Russell (Taking a year (mostly) off)
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Team Role: Engineer
Join Date: Nov 2002 | Rookie Year: 2001 | Location: San Francisco, CA | Posts: 3,082
Re: Optimal board for vision processing

The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRIO or (b) your driver station laptop. No new hardware costs, no worrying about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the cRIO already exist and are fairly straightforward to use.

In all seriousness, in the retroreflective tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).

I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure-time settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send the relevant bits back to your robot. Round-trip latency including processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you use tricks like sending gyro data along with your image, so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback and testing, and even do on-the-fly tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.

If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and about what your goals really are. If your goal is to build the most competitive robot you possibly can, there is almost always lower-hanging fruit that is just as inspirational to your students.
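The gyro trick above amounts to keeping a short, time-stamped history of headings so a late-arriving vision result can be matched to the robot's pose at capture time. A sketch in Python (the class and the one-second horizon are illustrative, not 341's actual code):

Code:
import bisect
import time

class PoseHistory:
    """Time-stamped gyro headings, so a vision result arriving
    30-100 ms late can be matched to the pose at image capture."""

    def __init__(self, horizon=1.0):
        self.horizon = horizon  # seconds of history to retain
        self.times, self.angles = [], []

    def record(self, angle_deg):
        now = time.time()
        self.times.append(now)
        self.angles.append(angle_deg)
        while now - self.times[0] > self.horizon:
            self.times.pop(0)   # discard headings older than the horizon
            self.angles.pop(0)

    def angle_at(self, t):
        # Heading recorded closest before capture timestamp t.
        if not self.angles:
            return None
        i = min(bisect.bisect_left(self.times, t), len(self.angles) - 1)
        return self.angles[i]

Tag each frame with time.time() when it is captured; when the processed result comes back, aim relative to angle_at(capture_time) instead of the current heading.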
#14 | 15-10-2014, 11:23
matan129 (Registered User)
FRC #4757 (Talos) | Team Role: Programmer
Join Date: Oct 2014 | Rookie Year: 2015 | Location: Israel | Posts: 19
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, so traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.

That greatly depends on how you achieve it.
If you do it in compiled code you can easily achieve 5 fps or more (with reduced color depth).
If your CPU is slow or your code is bad, things might not work out so well.

Anything you send over TCP/IP, TCP will try to deliver, and once it starts it is hard to stop (hence "reliable transport"). With UDP you control the protocol, so you can choose to give up. This means that with UDP you need to do more work. Really, someone should do this and just release a library; then it can be tuned for FIRST-specific requirements.
So, to summarize: I will always be able to choose NOT to send data over WiFi?
Is it 'safe' to develop a high-res/high-fps vision system all of whose parts are physically on the robot (i.e., the camera and the RPi)? By this I mean: could I suddenly discover on the field that all the communication actually goes through the field WiFi, making the vision system unusable (because of the limited WiFi bandwidth, which I never intended to use in the first place)?

Quote:
Originally Posted by Jared Russell
The optimal 'board' for vision processing in 2015 is very, very likely to be (a) the RoboRIO or (b) your driver station laptop. No new hardware costs, no worrying about powering an extra board, no extra cabling or new points of failure. FIRST and WPI will provide working example code as a starting point, and libraries to facilitate communication between the driver station laptop and the cRIO already exist and are fairly straightforward to use.

In all seriousness, in the retroreflective tape-and-LED-ring era, FIRST has never given us a vision task that couldn't be solved using either the cRIO or your driver station laptop for processing. Changing that now would result in <1% of teams actually succeeding at the vision challenge (which was about par for the course prior to the current "Vision Renaissance" era).

I am still partial to the driver station method. With sensible compression and brightness/contrast/exposure-time settings, you can easily stream 30 fps worth of data to your laptop over the field's WiFi system, process it in a few tens of milliseconds, and send the relevant bits back to your robot. Round-trip latency including processing will be on the order of 30-100 ms, which is more than sufficient for most tasks that track a stationary vision target (particularly if you use tricks like sending gyro data along with your image, so you can estimate your robot's pose at the precise moment the image was captured). Moreover, you can display exactly what your algorithm is seeing as it runs, easily build in logging for playback and testing, and even do on-the-fly tuning between or during matches. For example, on 341 in 2013 we found we frequently needed to adjust where in the image we should aim, so by clicking on our live feed where the discs were actually going, we recalibrated our auto-aim control loop on the fly.

If you are talking about using vision for something besides tracking a retroreflective vision target, then an offboard solution might make sense. That said, think long and hard about the opportunity cost of pursuing such a solution, and about what your goals really are. If your goal is to build the most competitive robot you possibly can, there is almost always lower-hanging fruit that is just as inspirational to your students.
Wow, thanks! And yes, to begin with I intend to develop only recognition of the retroreflective strips. Well, I'll talk with the other guys on the programming team and we'll see. The major goal (at least for now) of my planned vision system is to assist the driver in scoring: it will slightly correct the robot's position and therefore make scoring more precise.

Last edited by matan129 : 15-10-2014 at 11:32.
#15 | 15-10-2014, 11:30
Joe Ross (Registered User) | Unsung FIRST Hero
FRC #0330 (Beachbots) | Team Role: Engineer
Join Date: Jun 2001 | Rookie Year: 1997 | Location: Los Angeles, CA | Posts: 8,600
Re: Optimal board for vision processing

Quote:
Originally Posted by techhelpbb
It depends on the mode of the D-Link in previous years. In bridge mode, anything on the wired Ethernet will likely go out over the WiFi. In routed mode, the D-Link is a gateway, so traffic not directed to it or to the broadcast address should go through the D-Link's internal switch but not necessarily over the WiFi.
Do you have any evidence of this? Bridge mode does not mean that the D-Link acts as a hub and blasts data everywhere. It still has an internal switch and will only send data to the appropriate port. If both connections are Ethernet, it won't send the data over WiFi. The only exception is broadcast packets.