NI Week Athena Announcement and Q&A Panel
#1 | 16-08-2013, 08:05
Gdeaver (Mentor, FRC #1640)
Re: NI Week Athena Announcement and Q&A Panel

I followed some of the links in this thread about RT Linux and LabVIEW. What changes to our programming in LabVIEW will we have to be concerned with? I see the words mutex, blocking/non-blocking, two schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues, or will LabVIEW take care of it? With the current single-core cRIO, our programming mentor teaches the new programmers LabVIEW basics and the importance of real time, then follows up with some lessons on state machines. After this the kids are let loose to get hands-on experience. If we have to deal with the complexities of multiple cores, this is going to require a lot more formal instruction on our mentors' part. A serious load for a first-time young programmer.
#2 | 16-08-2013, 08:54
jhersh (Joe Hershberger, National Instruments; Mentor, FRC #2468)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by Gdeaver
I followed some of the links in this thread about RT Linux and LabVIEW. What changes to our programming in LabVIEW will we have to be concerned with? I see the words mutex, blocking/non-blocking, two schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues, or will LabVIEW take care of it?
The LabVIEW programming experience is the same. You will not need to do anything special to deal with multiple cores. With LabVIEW's inherent parallelism, the multiple cores are utilized naturally any time you have parallel loops executing. As always, take care to avoid race conditions, but if you limit the use of global variables, that's usually pretty easy to avoid in LabVIEW.
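
For teams coming from text-based languages, here is a rough C++ analogue of what LabVIEW's dataflow gives you for free: two loops running in parallel, with the shared state protected so no race can occur. Everything here (names, rates) is illustrative, not WPILib API:

Code:
#include <atomic>
#include <chrono>
#include <thread>

// Shared state between the two "loops". std::atomic prevents torn
// reads/writes, the race an unprotected global would invite.
std::atomic<double> sharedSetpoint{0.0};

void controlLoop() {                        // analogue of a 20 ms control loop
    for (int i = 0; i < 50; ++i) {
        double sp = sharedSetpoint.load();  // consistent snapshot
        (void)sp;                           // ... drive outputs toward sp ...
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}

void operatorLoop() {                       // analogue of a slower UI loop
    for (int i = 0; i < 20; ++i) {
        sharedSetpoint.store(1.0);          // update from operator input
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread a(controlLoop), b(operatorLoop);  // the two parallel loops
    a.join();
    b.join();
}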
#3 | 16-08-2013, 09:15
Ether (systems engineer, retired; no team)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by jhersh
The LabVIEW programming experience is the same. You will not need to do anything special to deal with multiple cores.
Excerpt from "Under the Hood of NI Linux Real-Time":
It’s also important to note that performance degradation can occur in both time critical and system tasks on multicore systems running NI Linux Real-Time if serially dependent tasks are allowed to run in parallel across processor cores. This is because of the inefficiency introduced in communicating information between the serially dependent tasks running simultaneously on different processor cores. To avoid any such performance degradation, follow the LabVIEW Real-Time programming best practice of segregating time-critical code and system tasks to different processor cores. You can accomplish this by setting a processor core to only handle time-critical functions, and specify the processor core to be used by any Timed Loop or Timed Sequence structure as illustrated in Figure 4. You can learn more about the best practices in LabVIEW Real-Time for optimizing on multicore systems at Configuring Settings of a Timed Structure.
Is the above something that teams need to be aware of and take into account in their programming efforts?
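
For what it's worth, the core segregation the excerpt describes maps to a standard Linux affinity call in C/C++. A minimal sketch, with the core number and the task purely illustrative:

Code:
// Build with g++ -pthread on Linux (glibc exposes the *_np call there).
#include <pthread.h>
#include <sched.h>
#include <cstdio>

void* timeCriticalTask(void*) {
    // ... time-critical control code would run here, alone on its core ...
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, timeCriticalTask, nullptr);

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                     // reserve core 1 for this task
    int err = pthread_setaffinity_np(t, sizeof(cpu_set_t), &set);
    if (err != 0)
        std::fprintf(stderr, "setaffinity failed: %d\n", err);

    pthread_join(t, nullptr);
}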


#4 | 16-08-2013, 09:31
jhersh (Joe Hershberger, National Instruments; Mentor, FRC #2468)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by Ether
Excerpt from "Under the Hood of NI Linux Real-Time":

[snip]

Is the above something that teams need to be aware of and take into account in their programming efforts?
It may be something that an advanced team would want to pay attention to if they are trying to push the controller to its limits. However, a single core of the roboRIO is approximately 5x faster than the cRIO at basic tasks, so a little inefficiency will likely go unnoticed by most teams.
#5 | 16-08-2013, 09:51
techhelpbb (Mentor, FRC #0011, MORT)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by jhersh
It may be something that an advanced team would want to pay attention to if they are trying to push the controller to its limits. However, a single core of the roboRIO is approximately 5x faster than the cRIO at basic tasks, so a little inefficiency will likely go unnoticed by most teams.
I am a bit confused about this in relation to what was implemented in the roboRIO.

I work in highly parallel environments daily (8,000+ Linux servers globally running extremely high-speed tasks; I'll avoid the term "real time" here since it's often misunderstood as a metric).

I can see how the abstraction of LabVIEW could make the dual cores less apparent to the end user. Unless there's a way for the students to bind their process to a particular processor, I don't see any way they can easily deadlock themselves.

However, not all teams work in LabVIEW. If a team is using C or Java, can they create code that targets a specific processor? If so, they can create deadlocks, because they could spawn a 'serial process' split between the two cores and get stuck between them.

In any kind of multiple-processor environment where the user can direct resources, this sort of complication can arise. Automated resource allocation can either fix this or itself magnify the issue.

The control system we proposed to FIRST, for example, had Parallax Propellers (along with Atmel and ARM), and those chips have 8 'cogs' (cores). In that environment a student might create a set of interrelated tasks that must operate in a serial fashion, but because of the architecture they would be aware from the beginning that they've divided up the process. So for example: if process B stalls in cog 2, perhaps they should debug process A in cog 1. The design goal with the multiple processors in the proposed environment was not to centrally distribute the resources at execution time; it was to provide finite, deterministic resources as part of the initial design process so that the result had direct predictability. Anything that could not be performed within that timing constraint could then have added processing, or be delegated to external circuitry (an FPGA, currently a Xilinx Spartan 3, and conditioned I/O). Added processing was a low cost barrier: because of the way the entire system was architected, hundreds of processors from any vendor could operate cooperatively until the robot power supply limits became an issue. (Yes, each processor does inherit additional processing cost as a result of this expansion capability, but it is a small cost considering the value of the capability.)

For those who don't understand the techno-speak about what we proposed:
You could decide to use 4 cogs in a single controller such that each cog controls a single tire of the drive system.
You would know instantly which cog was tasked with what job and what to expect from it.
You could issue orders between these cogs something like this:
Cog_RightFront - Move forward at 10 RPM
Cog_LeftRear - Turn swerve 15 degrees
(I am not kidding about this sort of programming at all; I actually made it work. Robot Control Language (RCL) from General Robotics and Lego Logo are examples from previous decades of the language structure. The change here is the way it maps directly to physical processing. The cogs are each 20 MIPS; they have plenty of time to parse relatively natural language if that is what is really needed, and that can be further tokenized for a performance boost.)

Obviously in Linux you can spawn processes or threads; this is something I exploit daily. What level of granularity is being exposed to students? Further, what tools are being offered to the students to debug these circumstances, if they are in fact able to control the distribution of resources? On Linux systems I have software I wrote that is able to monitor every process and its children, and either report over secure channels to a central management infrastructure or perform forced local management of anything that goes bonkers. In this case the 'central management' is really the field and DS.
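
As a toy illustration of that kind of local supervision, a parent process can fork a worker and restart it when it dies abnormally. This is a generic Linux sketch, not the tooling described above:

Code:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

void workerTask() {
    // ... the supervised task would run here ...
    std::exit(0);                          // clean exit for the sketch
}

int main() {
    for (int attempt = 0; attempt < 3; ++attempt) {
        pid_t pid = fork();
        if (pid < 0) return 1;             // fork failed
        if (pid == 0) workerTask();        // child: run the task
        int status = 0;
        waitpid(pid, &status, 0);          // parent: wait for it to finish
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;                      // exited cleanly: done
        std::fprintf(stderr, "worker died, restarting\n");
    }
    return 1;                              // gave up after three tries
}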

#6 | 16-08-2013, 10:13
jhersh (Joe Hershberger, National Instruments; Mentor, FRC #2468)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by techhelpbb
What level of granularity is being exposed to students?
Students programming in LabVIEW don't have to be exposed to it at all. If they choose to, they can use timed loops and specify core affinity for each loop.

In C++, students are exposed directly to the Linux process / thread controls you would expect from a typical system, with the addition of real-time scheduler priorities.

As for Java, I'm not sure how it's exposed.
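
To make the C++ side concrete, setting a real-time priority on a thread looks roughly like this. The priority value and the task are illustrative, this is plain pthreads rather than any FRC-specific wrapper, and running it normally requires root or an rtprio limit:

Code:
// Build with g++ -pthread on Linux.
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <cstdio>
#include <thread>

void* controlTask(void*) {
    // ... a periodic control loop would run here, preempting normal threads ...
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, controlTask, nullptr);

    sched_param sp{};
    sp.sched_priority = 80;               // mid-range RT priority, illustrative
    int err = pthread_setschedparam(t, SCHED_FIFO, &sp);
    if (err != 0)
        std::fprintf(stderr, "setschedparam failed: %d\n", err);

    pthread_join(t, nullptr);
}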
#7 | 16-08-2013, 10:22
techhelpbb (Mentor, FRC #0011, MORT)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by jhersh
Students programming in LabVIEW don't have to be exposed to it at all. If they choose to, they can use timed loops and specify core affinity for each loop.

In C++, students are exposed directly to the Linux process / thread controls you would expect from a typical system, with the addition of real-time scheduler priorities.

As for Java, I'm not sure how it's exposed.
I am actually glad about this. There are more environments with these real expectations in this world every day. This gives the students an excellent opportunity, in an operating system whose reach extends far beyond these controllers.

It does mean a challenge for the students but I have great confidence they'll grasp it.

What tools are being planned to help them debug the sorts of issues that may arise?

#8 | 16-08-2013, 10:29
jhersh (Joe Hershberger, National Instruments; Mentor, FRC #2468)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by techhelpbb
What tools are being planned to help them debug the sorts of issues that may arise?
For LabVIEW users, the Execution Trace Toolkit allows you to visualize when different parts of the system are executing.

Standard open-source tools apply to C++ and Java.
#9 | 16-08-2013, 10:44
apalrd (Andrew Palardy; College Student, VRC #3333)
Re: NI Week Athena Announcement and Q&A Panel

I don't think I've ever done anything in FRC that I couldn't do on a Vex Cortex, aside from streaming video back to the driver station laptop. In fact, I've run Cortex code at 100 Hz without complaints about CPU loading, and under RobotC (a terrible bytecode language with all the inefficiencies of Java and none of the benefits, which doesn't really support the C language at all) I was able to run all of my code in 2 ms (measuring the time it took the task to execute).

I did come up a bit short on I/O (granted, I used 5 LEDs with individual GPIO pins), but I managed to live with the 12 DIO, 8 analog, and 10 PWM. I think an extra two of each would be nice for FRC, but it's perfect for Vex robots. It also has I2C and two UART ports.

I would agree that the peak performance of the roboRIO vs. the Cortex provides more cost efficiency. But for 99% of teams, the Cortex would be just fine (in fact, way better, because it's so easy to set up and doesn't require default code), so the doubled cost doesn't provide doubled benefit, or even any benefit to them. And then there are the 5% who insist vision processing is important (it has not been important to me yet), and the 1% who might utilize the full benefits of the roboRIO and implement a good onboard vision system without compromising their RT controls.

We're still not doing anything controls-wise that we couldn't have done on the 2008 IFI controller. We now use floating-point math, LabVIEW, and other useful code features to do it, but we haven't found a challenge which we simply couldn't do previously. Our development times have stayed about the same; our calibration efficiency is up a bit during online calibration-heavy activity, but way down for short between-match changes. We've also spent a lot more money on system components (cRIOs, more cRIOs, tons of digital sidecars, analog modules, solenoid modules, radios, more radios, more radios, ...) than with that system.

In fact, I would argue that our code has gotten more complicated because of all the work we've had to do to get development and calibration times down. We wrote Beescript because it took too long to write and deploy new autonomous code, especially in competition. We would never have done so (and might have had more flexible autonomous modes, using real programming features like math and if statements) if the compile and download times were short, or if we could modify calibrations without rebuilding.

We've thought a lot about implementing a calibration system that reads cal files and such, but we can't get the design to a point where we can retain the current online debugging, cal storage, and efficiency. And the more code we write, the longer the compile times get. I know I can't get a system with the flexibility I want while retaining all of the performance I expect. It's incredibly frustrating to see systems in the real world operate on far fewer resources while running far more application code (way faster), with good development and calibration tools that streamline and optimize the process with very little overhead, while we're throwing more CPU speed and libraries at the problem and still nowhere near that performance on all fronts: RT performance, boot times, development times and overhead, calibration efficiency, etc.
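
For what it's worth, the simplest form of the cal-file idea is a key=value text file parsed at boot, so constants can change without a rebuild. A minimal sketch; the file name and key names are made up:

Code:
#include <fstream>
#include <iostream>
#include <map>
#include <string>

// Read "name=value" lines so constants can change without recompiling.
std::map<std::string, double> loadCals(const std::string& path) {
    std::map<std::string, double> cals;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        auto eq = line.find('=');
        if (eq == std::string::npos) continue;       // skip malformed lines
        try {
            cals[line.substr(0, eq)] = std::stod(line.substr(eq + 1));
        } catch (...) { /* skip unparsable values */ }
    }
    return cals;
}

int main() {
    auto cals = loadCals("robot.cal");               // hypothetical file
    double rpm = cals.count("shooter_rpm") ? cals["shooter_rpm"] : 3000.0;
    std::cout << "shooter_rpm = " << rpm << "\n";    // default if absent
}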
#10 | 16-08-2013, 14:49
Racer26 (Alumni, no team)
Re: NI Week Athena Announcement and Q&A Panel

Also: so far, the GDC has given us one game EVER that actually needed machine vision for an optimal auto mode: 2007.

And even then, lots of teams had successful dead-reckoned keeper autos.

Any time the target doesn't move after you've placed your bot, AND you can start your robot where you want, dead reckoning will work (see the sketch after the year-by-year list below). If no interaction between red and blue robots is allowed, dead reckoning can't be defended against.

2003 was the start of auto. You needed to be first to the top of that ramp.
2004: the target didn't move, but auto could be defended by cross-field ramming.
2005: I didn't compete, and my memory is fuzzy, but it was the first year we had the CMUcams. It was awful, as the targets were passive and the arena lighting varied wildly.
2006: they switched to the green cold-cathode boxes, which were much more reliable to detect, but the target didn't move, so there was no need to use them.
2007: the rack moved after robots were placed, but typically didn't move a whole lot.
2008: the IR remote could be used to tell your robot where the balls were. Most teams just dead reckoned.
2009: trying to dump in auto usually meant you got your own trailer beat up on by an HP.
2010-2013: no game pieces, robots, or targets are moved before auto, AND red/blue interaction during auto is against the rules.
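
As noted above, dead reckoning in this sense is nothing more than timed open-loop moves from a known starting spot. A skeletal C++ sketch, where setDrive() and all the timings are hypothetical:

Code:
#include <chrono>
#include <thread>

// Hypothetical stand-in for a real drivetrain call.
void setDrive(double left, double right) { (void)left; (void)right; }

// Run an open-loop output for a fixed time, then stop.
void driveFor(double left, double right, int ms) {
    setDrive(left, right);
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    setDrive(0.0, 0.0);
}

int main() {
    driveFor(0.5, 0.5, 1500);   // forward a known distance from a known start
    driveFor(0.3, -0.3, 400);   // timed turn toward the fixed target
    driveFor(0.5, 0.5, 1000);   // forward to the scoring position
    // ... trigger the scoring mechanism here ...
}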
#11 | 16-08-2013, 15:22
magnets (no team)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by Racer26
Also: so far, the GDC has given us one game EVER that actually needed machine vision for an optimal auto mode: 2007.

[snip]
This is a little inaccurate. You weren't always allowed to position your robot exactly where you wanted it, so you couldn't be sure that your robot started in the same spot each time. In 2012, we needed vision in auto. Our strategy was to get to the center bridge and grab the balls first, so we would be traveling very quickly when we hit the bridge, causing our robot to get misaligned. When we drove forward to the key again, we would usually be 2 to 3 feet away from where we started, and we needed the camera to line up with the target.

Also, many other teams have used vision as part of their main strategy. In 2006, WildStang had a nifty turret that was always pointed at the goal whenever it was in range, so that they could get the balls in at any time. And 118 used a camera very well in 2012 with their shooter, because it let them shoot from anywhere near the key without having to line up.

The point is, for some games and some teams, vision is a huge part of the game.

I know teams used vision to line up for a full court shot this year, and teams also used vision to line up with the legs of the pyramid.
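
The common pattern behind all of those is a simple closed loop on the target's horizontal offset from the camera. A toy C++ sketch, where getTargetOffset(), setDrive(), and the gain are hypothetical stand-ins for a real vision pipeline and drivetrain:

Code:
#include <cstdio>

// Simulated vision result so the sketch runs standalone: horizontal
// offset of the target from center, in the range -1.0 .. 1.0.
static double fakeOffset = 0.5;
double getTargetOffset() { return fakeOffset; }

// Toy "plant": turning the robot moves the target toward center.
void setDrive(double left, double right) { fakeOffset -= 0.1 * (left - right); }

void alignToTarget() {
    const double kP = 0.4;                // proportional gain, illustrative
    const double deadband = 0.03;         // close enough: stop correcting
    while (true) {
        double err = getTargetOffset();
        if (err > -deadband && err < deadband) break;
        setDrive(kP * err, -kP * err);    // rotate in place toward the target
    }
    setDrive(0.0, 0.0);
}

int main() {
    alignToTarget();
    std::puts("aligned");
}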
#12 | 16-08-2013, 15:25
Pault (College Student, FRC #0246, Overclocked)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by Racer26
Also: so far, the GDC has given us one game EVER that actually needed machine vision for an optimal auto mode: 2007.

[snip]
Yes, but you're looking at the past, and a lot of things can (and will) change by 2019. I dream of the day FRC advances to the point where the GDC can make a game that nearly requires vision tracking for some high-scoring opportunity. In six years, I highly doubt we will get to the point where vision tracking is necessary to be competitive, but I would not be surprised if it becomes a near requirement for powerhouse teams. Heck, even in 2012 most of the top-tier key shooters used vision tracking (341 and 1114 come to mind). And if FRC is going to lock into a control system for that long, they had better be sure it can handle our growth and not hold us back.

My 2 cents.
#13 | 16-08-2013, 15:36
techhelpbb (Mentor, FRC #0011, MORT)
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by Racer26
Also: so far, the GDC has given us one game EVER that actually needed machine vision for an optimal auto mode: 2007.
I have to say I agree that trying to build machine vision into the control system of an FRC robot is asking quite a bit when so few people will fully dig into it. There is a difference between merely using it and really grabbing hold of it. It is not really the most effective reason to demand an upgrade to an FRC system every few years, when the older robots and that investment then become that much harder to maintain.

Personally, I think the better way to handle video recognition in the current FRC environment is on the robot, not at the driver's station, and for this purpose I feel an auxiliary processing device is the more sensible choice. It hardly makes sense to try to find something faster than a general-purpose COTS PC for the price. The market for a general-purpose PC is huge compared to FIRST, so of course it will offer more performance for the price, and without question each year that price will buy even more performance as long as it is allowed. Plus, if you break an old laptop, I doubt you'll spend more for the older model. The other way is to integrate the camera with the video recognition system in the same package. I look at the Raspberry Pi and other COTS systems (besides a general-purpose PC) as something a little more like an attempt to integrate the camera and the video recognition system, rough as I admit it is. (Nothing against the Raspberry Pi or anything like that, as has been demonstrated elsewhere on the forum.)

In any case, I think video recognition is one of those fantastic things that inspires people to think the robot can adapt to its environment based on sight. Most people start thinking of the way they see and imprint that on the robot, but in so many ways the way humans use sight and the way a machine does are very different things. It is an ever-evolving piece of technology. On the plus side, that evolution drives jobs and innovation, which I'm sure students would love to have. On the other hand, video recognition is no PWM. There is a point at which you can implement PWM and there's no sense trying any harder. Video recognition has so many compromises that there is always something to try, and always a good opportunity to look at the robot as the vehicle and the camera / video recognition as a subsystem with ample room for tinkering.

I am not sure it makes sense to sell the Apple product of FRC robot control systems. That model works great when people can afford to upgrade. Making those upgrades the entire control system seems a touch more expensive than necessary.

#14 | 16-08-2013, 09:48
Racer26 (Alumni, no team)
Re: NI Week Athena Announcement and Q&A Panel

Absolutely, I agree that the performance per unit cost of a cRIO or roboRIO is orders of magnitude higher than for the Vex Cortex. I'm ALSO not suggesting that FRC use a Vex Cortex. It was simply an example of some of the other options.

In terms of performance, they're not even in the same ballpark, so yes, being 'just' double the cost is a good value, if you're going to use that performance. Otherwise, it's like buying a Bugatti Veyron and never taking it to a racetrack or Germany's autobahns. Buying performance you won't be using is frivolous.

Please don't misunderstand me. I'm not trying to take pot shots at NI. I'm a certified LabVIEW developer, and I work with NI equipment every day.

roboRIO is a huge improvement over cRIO as an FRC control system. No contest. I'm very excited to get my hands on it and see it in action. I'm just disappointed that it seems like a couple of the spots where quantum leaps could have been made fell a bit short.

BUT I'm also aware that much of the slow-boot problem with the cRIO-based control system we've had since '09 is NOT the boot time of the cRIO, but rather the radios. They're still working out what we're going to be using for radios, so maybe I'll be pleasantly surprised.

@Don:

I don't know what 'alternative' I'm proposing. The FIRST community is collectively VERY smart, though. I've seen some neat 900 MHz Ethernet bridges capable of throughputs in the 2 Mbps range. I do know that 802.11 lacks the reliability factor I believe an FRC control system should have. Even my home 802.11 network, in a rural area with minimal interference on the 2.4 GHz band, frequently drops, hiccups, or does otherwise rude things to the communications. There has to be a better solution.

As to 1676 not being able to run their robot on a Cortex? That's cool; I wouldn't have guessed it. 1676, though, is definitely one of those top-tier teams that's good at making optimal use of the resources they're given. If 1676 had to choose between whatever functions it's doing that couldn't be achieved with a Cortex, and booting to a driveable state in under 10 s, as Chris suggests, would you still want that extra performance?

I can say with confidence that 4343's 2013 robot is the first robot I've been involved with that couldn't have been done with a 2008 IFI system, and that's only because it streamed video back to the DS.

I DO like this suggestion of multiple comms channels, so that mission-critical comms (FMS control, joystick values, etc.) could be transmitted on a low-bandwidth, extremely reliable, fast-linking channel, while extras like streaming video ride on typical 802.11.