#1 - 29-08-2012, 10:44
JamesTerm (James Killian), FRC #3481 (Bronc Botz), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by Greg McKaskle
118 used C++ and a Beagle Bone. I don't really see an issue with doing it on the dashboard. What are your concerns?

Greg McKaskle
When I was in beta last year... I was really excited about the idea of doing video processing through the driver station, and I really wanted to do that. Since this was a newly introduced feature, I couldn't find much step-by-step information on what to do. Brad had some instructions, but only for the Java platform. Jarod (341) has some good info and source... but same scenario: I could not find the info I needed and eventually gave up. I still want to figure this out and hopefully help others in the task 5 exhibition who are stuck like I am now. So at this point... I just want to gather links, docs, info, etc. on how to do this using the C++ Wind River environment, and eventually have a step-by-step document on what to do... as well as example code on how to send commands from the driver station back to the robot... for things like target coordinates, etc.

I've never heard of a BeagleBone... thanks for the info on team 118. I should hook up with them and see if they can help.

Hopefully enough teams have tested this new path... from what I've heard the results have been positive, with minimal latency.

P.S. If I can figure all of this out... it would be cool to overlay crosshairs on the video image that comes to the driver station... I'll save that research for the next iteration.
#2 - 29-08-2012, 18:51
Greg McKaskle, FRC #2468 (Team NI & Appreciate)
Re: 2012 Beta Testers

You may be able to find step-by-step instructions, but perhaps you don't need them.

To get an image from the camera to the dashboard, you simply do an HTTP GET to start the mjpg stream. You can alternately request individual JPEGs. You can do this in LV with the default dashboard, or in Java with the SmartDashboard. You can also write a C/C++ dashboard, but there are no tools in the kit for doing this -- you can use Microsoft or gcc for PC development.
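A minimal C++ sketch of what that GET looks like at the socket level, assuming plain POSIX sockets (a Windows driver station would use Winsock, with the usual WSAStartup boilerplate). The 10.34.81.11 camera address and the /mjpg/video.mjpg path are only Axis-style examples, not values confirmed in this thread:

Code:
// Open the camera's mjpg stream with a raw HTTP GET over TCP.
// The camera IP and URL path below are examples; substitute your own.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in cam{};
    cam.sin_family = AF_INET;
    cam.sin_port = htons(80);
    inet_pton(AF_INET, "10.34.81.11", &cam.sin_addr);   // example camera IP
    if (connect(sock, (sockaddr*)&cam, sizeof(cam)) < 0) {
        perror("connect");
        return 1;
    }

    std::string request =
        "GET /mjpg/video.mjpg HTTP/1.1\r\n"
        "Host: 10.34.81.11\r\n"
        "Connection: keep-alive\r\n\r\n";
    send(sock, request.c_str(), request.size(), 0);

    // The camera now streams multipart JPEG data until the socket is closed.
    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
        // hand buf[0..n) to a frame splitter / JPEG decoder here
    }
    close(sock);
    return 0;
}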

Once you have the image, you can use any laptop image processing library you like -- some use OpenCV, some use the NI IMAQ libraries.

To send info back to the robot, you can use UDP, TCP, or the smart dashboard. The smart dashboard is the simplest approach, and with some sample code, the UDP and TCP aren't too bad either.
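The UDP side that sends data (say, target coordinates) back to the robot is even smaller. Here the 10.34.81.2 robot address follows the usual 10.TE.AM.2 convention, but the port number and packet layout are made-up examples, and the matching receive code on the cRIO is not shown:

Code:
// Send one UDP packet of target data to the robot controller.
// Port 1130 and the TargetPacket layout are arbitrary illustration choices.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in robot{};
    robot.sin_family = AF_INET;
    robot.sin_port = htons(1130);                      // pick your own port
    inet_pton(AF_INET, "10.34.81.2", &robot.sin_addr); // example robot IP

    // Keep the payload small and fixed-size so the cRIO side can parse it
    // with a single recvfrom(). Note the cRIO is big-endian PowerPC, so real
    // code should send text or byte-swap each field rather than a raw struct.
    struct TargetPacket { float x; float y; int valid; } pkt = {0.12f, -0.05f, 1};

    sendto(sock, &pkt, sizeof(pkt), 0, (sockaddr*)&robot, sizeof(robot));
    close(sock);
    return 0;
}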

As for placing marks on top of the image, the NI IMAQ vision control does this pretty easily, and I'm assuming you can modify the image or similarly overlay it in smart dashboard.
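Purely as an illustration of the overlay idea (using OpenCV drawing calls rather than the IMAQ overlay described above), crosshairs on a decoded frame are only a couple of lines:

Code:
// Draw simple crosshairs at a target point on a decoded frame (OpenCV example).
#include <opencv2/opencv.hpp>

void drawCrosshair(cv::Mat& frame, cv::Point target) {
    const cv::Scalar green(0, 255, 0);
    cv::line(frame, cv::Point(target.x - 15, target.y), cv::Point(target.x + 15, target.y), green, 2);
    cv::line(frame, cv::Point(target.x, target.y - 15), cv::Point(target.x, target.y + 15), green, 2);
    cv::circle(frame, target, 10, green, 2);
}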

Once you start the project, you can ask additional questions.

Greg McKaskle
#3 - 29-08-2012, 23:00
JamesTerm (James Killian), FRC #3481 (Bronc Botz), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by Greg McKaskle
You may be able to find step-by-step instructions, but perhaps you don't need them.

To get an image from the camera to the dashboard, you simply do an HTTP GET to start the mjpg stream. You can alternately request individual JPEGs. You can do this in LV with the default dashboard, or in Java with the SmartDashboard. You can also write a C/C++ dashboard, but there are no tools in the kit for doing this -- you can use Microsoft or gcc for PC development.

Greg McKaskle

Thanks for this preliminary information... I think for CD I'll just ask some high-level questions for the good of the group... and figure out the best direction to go.

What I'd like to do is have a solution that many C++ teams can use without needing to write any Java code. I remember the default dashboard having callback tags or something like that, but it sounds like there is no easy way to bridge those callbacks into a C++ environment. Let me know if that is not true... I know for C# it's possible to write bridge code, as we do this for our products at work. If it is possible, then perhaps we could offer the bridge code for others to use (since everyone gets the default dashboard).

Ok, so let's look at the C++ dashboard option... I presume this would be something to the effect of a replacement exe (replacing the default one), and at that point we could easily interface with it. If we went down this route, would it be something we could offer to other teams? I'd be happy to port the default dashboard from Java to C++ for this exe if that sounds like the best direction.

If I can get past this first hurdle... it will help clarify the next step for us. Thanks in advance for helping us out.
#4 - 04-09-2012, 17:52
JamesTerm (James Killian), FRC #3481 (Bronc Botz), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by JamesTerm
Thanks for this preliminary information... I think for CD I'll just ask some high-level questions for the good of the group... and figure out the best direction to go.
I've spoken with 118, and they went the "HTTP GET" path... using UDP packets. (Thanks, 118, for this info.) We'll pursue this direction as well, but instead we'll use Windows and Visual Studio. I figure it will be a more feasible alternative for teams considering vision processing on the driver station, but it will be interesting to see how the wireless latency compares against high-CPU-usage video processing on the cRIO. There's probably some info on this comparison elsewhere on CD (I'm going to re-read 341's posts on their latency).
#5 - 04-09-2012, 19:26
Greg McKaskle, FRC #2468 (Team NI & Appreciate)
Re: 2012 Beta Testers

Reading between the lines, I suspect that 118 is doing an HTTP GET to init the mjpeg stream. This is inherently done over TCP. I suspect they sent data back to the cRIO using UDP. Of course the computer doing this work was mounted on their robot, not on the DS.

If you want to do this with the smart dashboard, it would involve writing some Windows-specific code using gcc or Visual Studio that would read the UDP packet from the DS, parse the info, and display it. It would do the HTTP GET over TCP to get the mjpeg stream opened, plus some code to decode and display it. You could also recompile or implement the network table protocol for C++ and use that to do reads/writes to the cRIO for sharing data. No need to interoperate with Java unless you decide to.
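On the decode/display step: the mjpeg stream is one long multipart HTTP response, so a workable (if crude) approach is to buffer the incoming bytes and slice out each JPEG between its FF D8 start-of-image and FF D9 end-of-image markers, then hand that buffer to whatever decoder you prefer (IMAQ, OpenCV's imdecode, libjpeg, ...). A sketch of just that splitting step, with the socket reads and the decoding left out:

Code:
// Slice individual JPEGs out of a buffered mjpg stream by scanning for the
// JPEG start-of-image (FF D8) and end-of-image (FF D9) markers.
// appendBytes() is fed from the recv() loop; onJpeg() gets each complete frame.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

class MjpgFrameSplitter {
public:
    explicit MjpgFrameSplitter(std::function<void(const std::vector<uint8_t>&)> onJpeg)
        : onJpeg_(std::move(onJpeg)) {}

    void appendBytes(const uint8_t* data, size_t len) {
        buffer_.insert(buffer_.end(), data, data + len);
        size_t start = 0, end = 0;
        while (findMarker(0xD8, 0, start) && findMarker(0xD9, start + 2, end)) {
            onJpeg_(std::vector<uint8_t>(buffer_.begin() + start, buffer_.begin() + end + 2));
            buffer_.erase(buffer_.begin(), buffer_.begin() + end + 2);
        }
        // Anything left over (headers, boundary text, a partial frame) stays
        // buffered and is re-scanned on the next call.
    }

private:
    bool findMarker(uint8_t second, size_t from, size_t& pos) const {
        for (size_t i = from; i + 1 < buffer_.size(); ++i)
            if (buffer_[i] == 0xFF && buffer_[i + 1] == second) { pos = i; return true; }
        return false;
    }

    std::vector<uint8_t> buffer_;
    std::function<void(const std::vector<uint8_t>&)> onJpeg_;
};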

Similarly, the LV dashboard does the UDP and TCP work. No network table implementation was made publicly available last year.

Greg McKaskle
#6 - 06-09-2012, 11:17
JamesTerm (James Killian), FRC #3481 (Bronc Botz), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by Greg McKaskle
If you want to do this with the smart dashboard, it would involve writing some Windows-specific code using gcc or Visual Studio that would read the UDP packet from the DS, parse the info, and display it.
Thanks for sharing this workflow; we may try to pursue that... one thing I'd like to know is whether the smart dashboard can be developed offline. I know I've seen a testing harness available in the past, but that one is outdated. It would be great to be able to interface with the smart dashboard without needing a cRIO present.

Also, while I'm here... I'm looking forward to seeing whether the joystick protocol is going to change to use the full HID info. I was thinking of creating an article/feature request about this, in terms of being able to use the POV controls on a Logitech gamepad. I wanted to use that control for this year's game but could not.
#7 - 06-09-2012, 14:15
Tom Line, FRC #1718 (The Fighting Pi), Mentor
Re: 2012 Beta Testers

Quote:
Originally Posted by Greg McKaskle
Reading between the lines, I suspect that 118 is doing an HTTP GET to init the mjpeg stream. This is inherently done over TCP. I suspect they sent data back to the cRIO using UDP. Of course the computer doing this work was mounted on their robot, not on the DS.

If you want to do this with the smart dashboard, it would involve writing some Windows-specific code using gcc or Visual Studio that would read the UDP packet from the DS, parse the info, and display it. It would do the HTTP GET over TCP to get the mjpeg stream opened, plus some code to decode and display it. You could also recompile or implement the network table protocol for C++ and use that to do reads/writes to the cRIO for sharing data. No need to interoperate with Java unless you decide to.

Similarly, the LV dashboard does the UDP and TCP work. No network table implementation was made publicly available last year.

Greg McKaskle
Greg, when I thought about this I assumed that the LabVIEW dashboard live stream is already an mjpeg stream, and that LabVIEW users can pull frames from that and use the LabVIEW example vision code to process it. Is that the case? Is there a reason not to do it that way?
#8 - 06-09-2012, 20:46
Greg McKaskle, FRC #2468 (Team NI & Appreciate)
Re: 2012 Beta Testers

Quote:
Greg, when I thought about this I assumed that the LabVIEW dashboard live stream is already an mjpeg stream, and that LabVIEW users can pull frames from that and use the LabVIEW example vision code to process it. Is that the case? Is there a reason not to do it that way?
Last year, the default dashboard changed from reading individual JPEGs to using an mjpeg stream. As you mention, it has always been possible to branch the image wire and connect it to the vision VIs for processing. Getting the processing results back to the robot would involve UDP or TCP done on the open ports, or possibly using the beta-quality Network Tables.

The example code for vision and the tutorial that went with it supported both the laptop and the cRIO. It didn't integrate into the dashboard, but you pretty much just needed to copy and paste the loop and connect it to the mjpeg wire.

So yeah, no reason not to. If the processing is done only when needed, or on low-resolution images, the cRIO should have plenty of juice to process the images. But the added power of the laptop makes it far easier to get a working solution with less optimization. For reference, the cRIO is about 800 MIPS. Image processing is almost entirely integer, so that is a pretty good metric to use. The Atom in the Classmates is around 3000 MIPS.
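As a back-of-envelope illustration only (the ops-per-pixel figure is an assumed round number, not a measurement):

Code:
// 320 x 240 frame                      =  76,800 pixels
// 76,800 pixels * ~300 int ops/pixel  ~=  23 million ops per frame
// 23 M ops / 800 MIPS (cRIO)          ~=  29 ms per frame
// 23 M ops / 3000 MIPS (Atom)         ~=   8 ms per frame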

Greg McKaskle
#9 - 07-09-2012, 08:48
JamesTerm (James Killian), FRC #3481 (Bronc Botz), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by Greg McKaskle
So yeah, no reason not to. If the processing is done only when needed, or on low-resolution images, the cRIO should have plenty of juice to process the images. But the added power of the laptop makes it far easier to get a working solution with less optimization. For reference, the cRIO is about 800 MIPS. Image processing is almost entirely integer, so that is a pretty good metric to use. The Atom in the Classmates is around 3000 MIPS.
Greg McKaskle
Thanks for the benchmarks... I presume the 800 MIPS figure refers to the cRIO II, right? I tested the example Rebound Rumble vision processing code on the regular cRIO and found it took 80-110 ms per frame to process. I did not test this on the cRIO II, though. If I understand correctly, image processing *on the cRIO* is almost entirely integer, while processing using OpenCV on a Classmate would not need this kind of optimization.
#10 - 09-09-2012, 22:07
Joe Ross, FRC #0330 (Beachbots), Engineer
Re: 2012 Beta Testers

Quote:
Originally Posted by Tom Line
Greg, when I thought about this I assumed that the LabVIEW dashboard live stream is already an mjpeg stream, and that LabVIEW users can pull frames from that and use the LabVIEW example vision code to process it. Is that the case? Is there a reason not to do it that way?
That's basically what we did this year. I posted a simplified example here: http://forums.usfirst.org/showthread...9750#post59750
#11 - 10-09-2012, 01:51
Tom Line, FRC #1718 (The Fighting Pi), Mentor
Re: 2012 Beta Testers

Quote:
Originally Posted by Joe Ross
That's basically what we did this year. I posted a simplified example here: http://forums.usfirst.org/showthread...9750#post59750
Thanks for that link Joe. Figuring out UDP communication was on my to-do list before next season for exactly this reason. It will be a big help in understanding how it works.
#12 - 10-09-2012, 07:08
Greg McKaskle, FRC #2468 (Team NI & Appreciate)
Re: 2012 Beta Testers
Quote:
I presume the 800 MIPS ...
The processors are very similar. The data sheet for the MPC5125, the chip in the cRIO-II, stated the 800 MIPS as being accomplished for less than one watt. The cRIO uses the MPC5100 and lists the MIPS as 760.

I quoted integer benchmarks because many image operations are integer based. The sequence of operations for FRC in 2012 was to decode the JPG, color threshold, convex hull, make particle measurements, and compare the particle scores to target scores. I believe all of those operations are integer based, many of them being applied to every pixel, so lots of integer ops. I'm sure there are some floats used too, but way more integers.
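For anyone doing the laptop-side version with OpenCV instead of IMAQ, the same sequence looks roughly like the sketch below. The HSV threshold values are placeholders rather than the actual 2012 target numbers, and the final scoring comparison is left as a comment:

Code:
// Rough OpenCV equivalent of the 2012 sequence: decode the JPEG, color
// threshold, convex hull, particle (contour) measurements, then scoring.
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <vector>

void processFrame(const std::vector<uint8_t>& jpegBytes) {
    cv::Mat frame = cv::imdecode(jpegBytes, cv::IMREAD_COLOR);   // decode JPEG
    if (frame.empty()) return;

    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(90, 255, 255), mask);  // color threshold (placeholder limits)

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const std::vector<cv::Point>& c : contours) {
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);                       // convex hull
        double area = cv::contourArea(hull);           // particle measurements
        cv::Rect box = cv::boundingRect(hull);
        double aspect = box.height ? double(box.width) / box.height : 0.0;
        // ...compare area / aspect / position against your target scores here
        (void)area; (void)aspect;
    }
}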

A few years ago the target was the ellipse/circle, and a Hough transform was used for the shape matching. At least the current geometric shape library is entirely Hough-based, so I assume it was then too. That will have a bigger mix of float operations. Since coordinates in the image are integers and there are so many of them, there will at least be lots of int loads and stores. And in general, image processing libs tend to be optimized. Since ints are still somewhat faster than floats, even with SSE, they will use the fastest approach that gets the right answer.

Greg McKaskle