Chief Delphi > Technical > Programming

 
View Poll Results: What did you use for Vision Processing?
  Driver Station ............. 28 (60.87%)
  Laptop/Netbook on Robot ....  3 (6.52%)
  Other ......................  2 (4.35%)
  Did not use / cRIO ......... 13 (28.26%)
Voters: 46.

#16 | 24-08-2012, 19:37
apalrd ("More Torque!") | AKA: Andrew Palardy (Most people call me Palardy) | VRC #3333 | College Student
Join Date: Mar 2009 | Rookie Year: 2009 | Location: Auburn Hills, MI | Posts: 1,347
Re: Who used Driver Station for Vision?

Quote (Originally Posted by cbale2000):
For me the bigger question is: why doesn't FIRST just ditch the cRIO and go with laptop-controlled robots?
Given how much timing jitter we have now, I can only imagine how bad it would be if we ran a non-RTOS.

Or how bad the boot times would be.

<rant>I would be very, very happy with an embedded system. Something like a PowerPC or ARM that has a good RTOS without any extra junk, just raw hardware access, and an Ethernet stack.</rant>

The key to getting good execution timing and RT performance is to reduce overhead. Going to a non-RTOS without a real-time coprocessor would just be terrible for timing and performance.
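
For anyone who wants to see the jitter problem for themselves, here is a quick sketch (mine, not from this post; the 20 ms period is just an example value) that times a nominal fixed-rate loop on a desktop OS and reports the worst-case deviation:

Code:
import time

TARGET = 0.020          # nominal 20 ms loop period (example value)
deltas = []
last = time.perf_counter()
for _ in range(500):
    time.sleep(TARGET)                  # a non-RTOS only promises "at least this long"
    now = time.perf_counter()
    deltas.append(now - last)
    last = now

worst = max(abs(d - TARGET) for d in deltas)
print("worst-case jitter: %.3f ms" % (worst * 1e3))

Run that while the machine is busy and the worst case balloons; bounding that overhead is exactly what an RTOS is for.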
__________________
Kettering University - Computer Engineering
Kettering Motorsports
Williams International - Commercial Engines - Controls and Accessories
FRC 33 - The Killer Bees - 2009-2012 Student, 2013-2014 Advisor
VEX IQ 3333 - The Bumble Bees - 2014+ Mentor

"Sometimes, the elegant implementation is a function. Not a method. Not a class. Not a framework. Just a function." ~ John Carmack
#17 | 24-08-2012, 19:44
RyanCahoon ("Disassembling my prior presumptions") | FRC #0766 (M-A Bears) | Engineer
Join Date: Dec 2007 | Rookie Year: 2007 | Location: Mountain View | Posts: 689
Re: Who used Driver Station for Vision?

Quote (Originally Posted by cbale2000):
For me the bigger question is: why doesn't FIRST just ditch the cRIO and go with laptop-controlled robots?
[EDIT: this first set of arguments is further supported by [apalrd]'s arguments for RTOSes] Start from FIRST's requirement that the control system have a kill switch that can't be blocked by user code (it's the reason the IFI controllers have a "master" coprocessor, and a big reason why we can't touch the FPGA on the cRIO) and step into their paranoid mindset for a second. Either you accept that the laptop won't be the primary control processor, in which case your I/O board expands into a microcontroller board and you're basically back where we are now*; or the laptop probably won't be able to run just any commodity OS, and the user code will probably have to run in some sort of jail/VM/isolate. Those last two points together mean you have no guarantee that it can be programmed in "virtually any language." By that standard, I would argue the cRIO can be programmed in "virtually any language" as well: assuming the language's toolchain is open source, you should be able to cross-compile for the PowerPC target. This is how RobotPy works.

Also, you have to find somebody to maintain and support this new platform. FIRST employs only a handful of control-system engineers; much of the work on the software for the current system is done by NI or the WPILib project. If you move to a laptop-based system, you lose at least half of that team. Who handles all the calls and emails when teams start having problems with the system? The laptops also have to come from somewhere, so you have to find a sponsor willing to donate, or sell at a greatly reduced price, 2300+ laptops. You may argue that this already happens with the driver stations, and I'm not saying it's impossible, just that it would have to happen.

If we ignore all the above and suppose they did allow teams to use their own laptops, there's also the question of maintainability at the competitions. The ability of the FTAs and CSAs to help troubleshoot problems is greatly reduced when you open up such a critical part of the control system. FIRST is having a difficult enough time keeping the current system running, as evidenced by the communication problems, etc., even when there aren't malicious parties involved. I'm not trying to insult FIRST at all, just saying the job is a difficult one already. It would become increasingly unclear whether the problem was in the field system or whether the team had messed something up themselves.

* Perhaps the argument here comes down to the fact that the cRIOs are bulkier than they need to be. And I would agree with you. I doubt FIRST needs controllers that are certified for 50G shock loads, etc. See above points on logistics, though. It might have been interesting (political issues notwithstanding) if we had kept the old IFI controllers but made it easier to interface them with a laptop.

Despite my arguments to the contrary, I think it would be a great opportunity if FIRST did move to a laptop-based system. I guess the last point is that I am encouraged by FIRST opening up the driver station in the last couple of years. Perhaps this is a sign of things to come (I hope).
__________________
FRC 2046, 2007-2008, Student member
FRC 1708, 2009-2012, College mentor; 2013-2014, Mentor
FRC 766, 2015-, Mentor
#18 | 24-08-2012, 20:53
Jared Russell ("Taking a year (mostly) off") | FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Engineer
Join Date: Nov 2002 | Rookie Year: 2001 | Location: San Francisco, CA | Posts: 3,082
Re: Who used Driver Station for Vision?

Keep in mind that JPEG compression is sensitive to the image contents. We found that by using a very intense lighting rig (two Superbright LED rings) combined with a very short exposure time for the camera, most of every image we took was very, very dark except for the target (and other lighting sources). Dark areas of the image have low SNR and compress very well.

The 640x480 images getting transmitted to our laptop were almost always under 20KB because they were so underexposed. Even at 30 fps, that is less than a megabyte per second.
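
For reference, the arithmetic behind that claim (values taken straight from this post):

Code:
frame_kb = 20        # underexposed 640x480 JPEG, per the post
fps = 30
kbytes_per_s = frame_kb * fps          # 600 KB/s, comfortably under 1 MB/s
mbits_per_s = kbytes_per_s * 8 / 1000  # about 4.8 Mbit/s on the wire
print(kbytes_per_s, "KB/s, about", mbits_per_s, "Mbit/s")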

See the attached file for an example of the images we were sending back for vision processing.
Attached: image (22).jpg (16.9 KB)

Last edited by Jared Russell : 24-08-2012 at 20:56. Reason: KB, not Kb
#19 | 26-08-2012, 22:34
MAldridge (Lead Programmer) | AKA: Rube #1 | FRC #0418 (LASA Robotics) | Programmer
Join Date: Jan 2011 | Rookie Year: 2010 | Location: Austin | Posts: 117
Re: Who used Driver Station for Vision?

We wound up moving our vision code to the laptop, but only because an unbounded loop on the cRIO prevented it from running correctly; if we had found that loop, I think we would have had it working onboard. That said, I think this year we will go with a single-board system and a USB camera processed by a C app under Debian (mainly because the Axis hardware gave us more trouble than it was worth when trying to set things like exposure time and compression level). We didn't see much network usage, maybe 5 MB/s, and I think we were working with 8-bit color (not sure on that, but it wasn't much).
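
For what it's worth, here is a sketch of the exposure-locking step using OpenCV's V4L backend (illustrative only, not our team's code; property support and the exposure scale vary by camera and driver):

Code:
import cv2

cap = cv2.VideoCapture(0)                      # first USB camera
# On many V4L2 drivers 0.25 selects manual exposure and 0.75 auto;
# the exposure value itself is on a driver-specific scale.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
cap.set(cv2.CAP_PROP_EXPOSURE, -8)             # tune empirically per camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
if ok:
    print("frame:", frame.shape)
cap.release()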
__________________
'Why are you a programmer?' --Team Captain
'Because the robot isn't complicated enough!' --Me
#20 | 27-08-2012, 00:26
techhelpbb | FRC #0011 (MORT - Team 11) | Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Who used Driver Station for Vision?

Quote (Originally Posted by Caboose):
I would think a better plan would be to do the vision processing on the robot: use cheaper and better USB cameras with low capture lag plus a light and powerful laptop, and send only the relevant data to the Driver Station/robot. But alas, there is a dreaded $400 limit on ALL parts; if only laptops that go on the robot could be a little more expensive, the FMS would not need to worry about large images clogging the field network...
Team 11 found and used an AMD dual-core netbook (with a bigger screen than most would consider a netbook) on our robot for Rebound Rumble. It came with an SSD; we removed the screen and kept the original battery (we had also considered ITX boards, PC/104 boards, and BeagleBones, but didn't want to fight power-supply issues). It passed inspection at the 3 competitions it was used in. Later in the season it was removed (it worked fine; it was removed to adjust for driving styles). It was *just* under the $400 limit.

They had 2 USB cameras connected to it: one high-resolution (1080p) but low-speed (measured 5+ frames per second), and one high-speed (measured 30+ frames per second, which was fun to watch and could swamp a single core) at low resolution (640x480). It was running Linux and using custom Java software written by the students to process video and send control signals to the cRIO over the robot Ethernet. We tried quite a few USB cameras (I've got a 1-cubic-foot box full of them now). Some had terrible white balance. Some didn't work well in Video4Linux but were a little better in Windows (well, it was a Microsoft camera, LOL). Some had terrible, or unexpectedly variable, frame rates. Oddly, we found that several of the very cheap webcams on Amazon worked great ($5 webcam versus $125 webcam, and the $5 one worked better for this; go figure). (I haven't named the exact cameras because I don't want to take all the challenge out of this.)

One of the original concerns that prompted this design, which has now spanned 2 years of competition (we actually considered it the year before but had no weight to spare for it, though our soon-to-be programming captain ran some very impressive tests), was the bandwidth used by sending video to the driver's station. We had a great deal of trouble locating clear, working samples of Java code for the cRIO that could process video, so this seemed like an idea worth testing (mind you, I know the cRIO can do this; we just couldn't get the samples to work, or to function in a way we preferred).

Though we didn't use it, OpenCV is an extremely functional, professional vision library you can call from many languages. Our students actually talked to Video4Linux (V4L) directly, which OpenCV uses as well (though it can use other backends for its video sources).
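
To give a flavor of what that looks like, here is a minimal OpenCV-in-Python sketch of the usual bright-target pipeline (our students' actual code was Java against V4L; the file name and HSV bounds here are hypothetical and must be tuned to your LED ring):

Code:
import cv2

frame = cv2.imread("frame.jpg")                  # one captured camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (40, 100, 100), (80, 255, 255))   # green-ish LED glow
# findContours returns (contours, hierarchy) in OpenCV 4.x
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 200:                              # skip tiny specks of noise
        print("candidate target:", x, y, w, h)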

Our team uses a lot of Linux. The programmers who worked on this part were quite comfortable with it, and to my knowledge no mentor provided technical support, because none was needed. The netbook came with Windows 7 on it, which we removed. From my own professional work I'm quite sure you could use Windows, Linux, BSD, or Mac OS X and get workable results even with a single-core Atom CPU (we originally tested with a Dell Mini 9, which is precisely that; at the time it was running Ubuntu 9). My advice (take it or leave it): don't assume you need to process every frame, or every pixel of every frame.

Though we used Java (more precisely, OpenJDK), I personally tested PyGame and it worked just fine standalone.
If anyone else is interested in trying it, this tutorial shows most everything you need to know:
http://www.pygame.org/docs/tut/camera/CameraIntro.html

I had that interfaced with an NXT controller for an experiment, and that was also controlled with Python code.
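
Condensed from the tutorial linked above, the whole capture path is only a few lines (device name and resolution are the tutorial's defaults):

Code:
import pygame
import pygame.camera

pygame.camera.init()
cams = pygame.camera.list_cameras()          # e.g. ['/dev/video0'] on Linux
cam = pygame.camera.Camera(cams[0], (640, 480))
cam.start()
img = cam.get_image()                        # one frame as a pygame Surface
print("captured", img.get_size())
cam.stop()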

Quote (Originally Posted by RyanCahoon):
If we ignore all the above and suppose they did allow teams to use their own laptops, there's also the question of maintainability at the competitions. The ability of the FTAs and CSAs to help troubleshoot problems is greatly reduced when you open up such a critical part of the control system. FIRST is having a difficult enough time keeping the current system running, as evidenced by the communication problems, etc., even when there aren't malicious parties involved. I'm not trying to insult FIRST at all, just saying the job is a difficult one already. It would become increasingly unclear whether the problem was in the field system or whether the team had messed something up themselves.

* Perhaps the argument here comes down to the fact that the cRIOs are bulkier than they need to be. And I would agree with you. I doubt FIRST needs controllers that are certified for 50G shock loads, etc. See above points on logistics, though. It might have been interesting (political issues notwithstanding) if we had kept the old IFI controllers but made it easier to interface them with a laptop.

Despite my arguments to the contrary, I think it would be a great opportunity if FIRST did move to a laptop-based system. I guess the last point is that I am encouraged by FIRST opening up the driver station in the last couple of years. Perhaps this is a sign of things to come (I hope).
I'm confused by this (not to appear too argumentative).

A few people warned us this year about the netbook we used, and with proper mounting there are plenty of examples of our robot smashing over the bump in the center of the field at full throttle. We did that in practice on our own field, and on the real field, literally well over 150 times with no issues. Of course, we did have an SSD in it.

Also, doesn't FIRST allow you to use other laptops for the driver's station, and doesn't that create, to some extent, the same support issue? I grant you the DS is basically Windows software, so that does somewhat reduce the variability. However, there's nothing at all stopping FIRST from producing a Linux distro all their own; this would give them control over the boot times, the drivers, the interfaces, and the protocol stacks. It's really much the same problem FIRST faces if they put DD-WRT or OpenWRT on the robot APs. I assure everyone that a laptop for processing video on the robot, used entirely in lieu of the cRIO (with a replacement for the Digital Sidecar), can be done, and I have no problem proving it.

Last edited by techhelpbb : 27-08-2012 at 10:29.
#21 | 27-08-2012, 00:56
RyanCahoon ("Disassembling my prior presumptions") | FRC #0766 (M-A Bears) | Engineer
Join Date: Dec 2007 | Rookie Year: 2007 | Location: Mountain View | Posts: 689
Re: Who used Driver Station for Vision?

Quote (Originally Posted by techhelpbb):
A few people warned us this year about the netbook we used, and with proper mounting there are plenty of examples of our robot smashing over the bump in the center of the field at full throttle. We did that in practice on our own field, and on the real field, literally well over 150 times with no issues. Of course, we did have an SSD in it.
I think you misunderstood me. I was (in a way agreeing with you and) saying that the cRIO is rated for much more extreme conditions than we would see in FRC, and I agree that (with an SSD) laptop hardware would likely be fine.

Personally, I would love to see laptop-based control systems; my post was to point out reasons why FIRST is unlikely to adopt them.

Quote:
Originally Posted by techhelpbb View Post
Also doesn't FIRST allow you to use other laptops for the driver's station and doesn't that create to some extent the same support issue?
The rules have allowed custom driver stations in the last couple of years, and it is encouraging that they might open up more parts of the control system in the future. However, the difference between the robot controller and the driver station is that if the driver station has an error, the e-stop can still be pressed to shut off the robot. If the robot controller has an error, any method of remote deactivation (at least any that currently exists) fails along with it.
__________________
FRC 2046, 2007-2008, Student member
FRC 1708, 2009-2012, College mentor; 2013-2014, Mentor
FRC 766, 2015-, Mentor
#22 | 27-08-2012, 01:12
techhelpbb | FRC #0011 (MORT - Team 11) | Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Who used Driver Station for Vision?

Quote (Originally Posted by RyanCahoon):
However, the difference between the robot controller and the driver station is that if the driver station has an error, the e-stop can still be pressed to shut off the robot. If the robot controller has an error, any method of remote deactivation (at least any that currently exists) fails along with it.
You're right. However, the laptop doesn't have the ports the Digital Sidecar does; whatever you use to replace it would have to provide those ports. The current e-stop is merely in the hardware of the Digital Sidecar.

Brian

Last edited by techhelpbb : 27-08-2012 at 10:29.
#23 | 27-08-2012, 12:25
FrankJ (Robot Mentor) | FRC #2974 (WALT) | Mentor
Join Date: Feb 2011 | Rookie Year: 2009 | Location: Marietta, GA | Posts: 1,946
Re: Who used Driver Station for Vision?

The Digital Sidecar gets all of its data from the digital I/O card on the cRIO. The current "e-stop" is, I believe, implemented in the cRIO's FPGA; I expect this is one of the reasons teams are not allowed to change the FPGA's programming.

Typically, e-stops sit outside the control computer and kill power to the driven elements. When they are implemented in the control computer, the software needs to be locked down so that the machine's programmers cannot change it. FIRST could have a separate device that serves the e-stop function.

If you did not have an industrial controller like the cRIO as a base, competitions would be a whole lot less interesting, because you would have a lot of bricks on the field, in spite of the few teams that would have really amazing stuff. Thanks to NI for being there; they provide more support than most people realize.

Now, back to the original topic: we used the laptop for our vision processing. While it was pretty amazing, we had issues making it run on what we had. We are still learning how to do it.

Last edited by FrankJ : 27-08-2012 at 12:27.
#24 | 28-08-2012, 20:17
protoserge ("CAD, machining, circuits, fun!") | AKA: Some call me... Tim? | FRC #0365 (MOE), former FRC #0836 mentor | Mentor
Join Date: Jan 2012 | Rookie Year: 2002 | Location: Wilmington, DE | Posts: 754
Re: Who used Driver Station for Vision?

The RoboBees (FRC 836) used a laptop as well for stereo vision processing.

I'll see if we have our white paper completed for the process. There are some optimizations we performed and some general algorithm guidance for using range and angle to target.

On the topic of bandwidth, during the past year we were lucky to get a constant 10-12 Mbps. At the DC Regional, we had to change the camera resolution and refresh cycle a few times to get the bandwidth usage down even further. I will caveat that I am not on the command and control (programming and sensors) subteam, so I don't know all the intricacies of the process.

I anticipate FIRST instituting QoS this next competition season, but don't expect much more available bandwidth than realized this past season.
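
If QoS does land, the planning arithmetic is simple enough to sketch (the cap and headroom below are hypothetical; the frame size is from the sample image posted earlier in the thread):

Code:
cap_mbps = 7.0      # hypothetical per-robot camera budget after QoS
frame_kb = 16.9     # size of the underexposed JPEG posted above
headroom = 0.8      # leave 20% for control traffic and bursts
max_fps = (cap_mbps * 1000 / 8) * headroom / frame_kb
print("roughly %.0f fps fits in the budget" % max_fps)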

This is a good time to start thinking about embedded development platforms such as a PandaBoard.
#25 | 28-08-2012, 22:31
techhelpbb | FRC #0011 (MORT - Team 11) | Mentor
Join Date: Nov 2010 | Rookie Year: 1997 | Location: New Jersey | Posts: 1,624
Re: Who used Driver Station for Vision?

Quote (Originally Posted by stinglikeabee):
The RoboBees (FRC 836) used a laptop as well for stereo vision processing.

...

On the topic of bandwidth, during the past year we were lucky to get a constant 10-12 Mbps. At the DC Regional, we had to change the camera resolution and refresh cycle a few times to get the bandwidth usage down even further. I will caveat that I am not on the command and control (programming and sensors) subteam, so I don't know all the intricacies of the process.
So you were sending video from the laptop to the driver's station as well?

Team 11 never sent full-speed video; they had indicators, and all that was sent was the information needed to operate those indicators at the driver's station. For the most part, the entire robot wasn't using much more network bandwidth than something you could easily build from the KOP and the sample code. The indicators consume trivial network usage.
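
To make "trivial" concrete: state for a handful of indicators fits in one tiny datagram. A hedged sketch (Team 11's actual protocol isn't shown in this thread; the address, port, and field names are made up):

Code:
import socket
import struct

DS_ADDR = ("10.0.11.5", 1130)        # hypothetical driver-station IP and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

target_seen, aligned, in_range = True, False, True
payload = struct.pack("???", target_seen, aligned, in_range)   # 3 bytes total
sock.sendto(payload, DS_ADDR)        # a few bytes per update: negligible bandwidth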

Last edited by techhelpbb : 28-08-2012 at 22:34.
#26 | 28-08-2012, 23:02
protoserge ("CAD, machining, circuits, fun!") | AKA: Some call me... Tim? | FRC #0365 (MOE), former FRC #0836 mentor | Mentor
Join Date: Jan 2012 | Rookie Year: 2002 | Location: Wilmington, DE | Posts: 754
Re: Who used Driver Station for Vision?

Quote (Originally Posted by techhelpbb):
So you were sending video from the laptop to the driver's station as well?

Team 11 never sent full-speed video; they had indicators, and all that was sent was the information needed to operate those indicators at the driver's station. For the most part, the entire robot wasn't using much more network bandwidth than something you could easily build from the KOP and the sample code. The indicators consume trivial network usage.
"Video" is a bit of a misnomer: it was .jpg images sent to the driver station interface every few seconds at low resolution.