View Full Version : NI Week Athena Announcement and Q&A Panel
Links here: https://decibel.ni.com/content/community/academic/student_competitions/frc
Announcement keynote at 9:30a ET/8:30a CT/6:30a PT, Panel at Noon ET/11:00a CT/9:00a PT.
Post all discussion/reactions here.
Live stream just went up with 15 mins to go. Hall looks to be filling up fast.
Update: 5 minutes!
Update 2: Away we go!
We have to WUB HARDER to reach these kids!
That jab at Java :rolleyes:
NI said that Java is an arcane language and that part of the problem with current education is that students learn it, yet they are about to announce a controller that will probably support Java.:rolleyes:
There's a feature to zoom in the NXT programming language, but STILL NO ZOOM IN LABVIEW
Your robot always fails when it matters the most! (Pesky radio!)
MrRoboSteve
08-08-2013, 10:01
Name is "roboRIO"
ARM Cortex A9 with 500% faster CPU.
50% smaller, 75% lighter.
ESD and shock safe. Overvoltage protection throughout.
Same APIs and programming language support.
Will donate one to each team
More details in breakout.
http://i.imgur.com/qOElrLP.png
Ooooh
Best part: Every team gets one free! :D
Can't edit the thread title. :mad:
10 dedicated pwm, 10 shared
10 dedicated dio, 16 shared
4 total relay (that's not enough!!)
8 analog in
2 analog out
2 USB host
1 USB device
Ethernet, CAN, integrated accelerometer
Signal Light
Runs linux
256 MB Storage
256 MB RAM
667 MHz dual-core ARM Cortex-A9
No more SideCar? I love it ^^
I wish I was two years younger, I bet using this will be fantastic.
EDIT: woo post 300.
Nate Laverdure
08-08-2013, 10:14
Specifications document (https://decibel.ni.com/content/docs/DOC-30419)
5.7 in. x 5.6 in.
Under 12 oz.
Wow.
protoserge
08-08-2013, 10:25
I bet this thing is <$250 as well.
Andrew Schreiber
08-08-2013, 10:29
Most important questions for me:
Cost? For those of us that run 2 robots how much is this going to set us back?
Availability of both hardware and software? I don't like getting a new system on kickoff, and shipping it to me early December doesn't help much either.
Yes, I know it's over a year and a half out but it's never too early to make a plan of attack for utilizing new technology.
NI said that Java is an arcane language and that part of the problem with current education is that students learn it, yet they are about to announce a controller that will probably support Java.:rolleyes:
There's a feature to zoom in the NXT programming language, but STILL NO ZOOM IN LABVIEW
-I agree with those thoughts on Java. Nothing against Java as a language, but IMO every introduction to programming I've seen for students that works in Java primarily teaches the concepts of object-oriented programming and Java syntax before general programming concepts. I don't think OO is the only way to program, and a class that starts with, and only touches, Java will promote a much narrower way of thinking about programming.
-Every day I work in Simulink, I'm like 'man this is a big subsystem, I should zoom in' and then I zoom in. And then later I zoom out. AND THEN I GO BACK TO LABVIEW AND CAN'T ZOOM.
@everyone, Thoughts on the new controller:
-I think the expansion port is great for packaging, and allows plenty of additional IO, but I think it's going to be JUST that - a place where you can plug in a break out board to use the additional PWM/GPIO/ADC signals when you need them. I don't think more than a few teams will actually make something else for this port. Maybe a company or two, but basically no teams.
--I'm not worried about enough IO, but a few more ADC's would be nice for extra datalogging. Buzz18 uses 5 of 7 and Buzz 17 used 6 of 7, and it's nice to look at other analog signals every now and then. I also wonder if the 5v/3.3v supply is monitored by the ADC separately because we like to look at that for diagnostics, and previously used a jumper in adc7 to do this, which consumes an analog channel.
-The spec lists integrated 3-axis accelerometer. I wonder why they didn't include a gyro on this, since a gyro is definitely a way more useful sensor for FRC. I have never found a use for a chassis-mounted accelerometer in FRC.
-No spec on boot times as far as I can tell. Disappointing. The radio issue demonstrated by 2468 almost made a joke out of the current boot times.
-OS is listed as RT Linux, which will definitely make some people very happy, but I don't really care that much. I still think we're brute-forcing the CPU loading issues rather than considering efficient design in quite a few places (outside of user code).
-Hopefully the download times improved. They were just purely awful this year. I know that was because of a LV RT bug, but it's still totally unacceptable in every way that it made it past testing like that.
-Maybe if I can get one early enough it'll push me to design something better. Who knows. But releasing software on kickoff is just crazy, I mean we really do have to install LV and new Driver Station and new Utilities on a whole bunch of computers, and there isn't even anything game related in a new version of software. I too would like to see the new controller and software in my hand at least 6mos before 2015 kickoff.
otherguy
08-08-2013, 10:35
4 total relay (that's not enough!!)
My thoughts exactly. For the past three years we've been using more than 8 channels for pneumatic components.
The only hope is for the additional "Pneumatic" channels alluded to on page 3 of this document (https://decibel.ni.com/content/servlet/JiveServlet/download/30419-6-67520/roboRIO%20Specification%20Flyer.pdf).
Will CAN save us all?
jman4747
08-08-2013, 10:42
Most important questions for me:
Cost? For those of us that run 2 robots how much is this going to set us back?
If they're giving one to each team, any team that already has a cRIO (anyone who was a 2014 or earlier rookie) will have both a roboRIO and a cRIO.
protoserge
08-08-2013, 10:42
Most important questions for me:
Cost? For those of us that run 2 robots how much is this going to set us back?
Availability of both hardware and software? I don't like getting a new system on kickoff, and shipping it to me early December doesn't help much either.
Yes, I know it's over a year and a half out but it's never too early to make a plan of attack for utilizing new technology.
Here are my thoughts on your concerns:
This thing is going to be cheaper than the cRIO. A current dev board (here (http://www.newark.com/freescale-semiconductor/mcimx6q-sl/i-mx6-hdmi-lvds-rj45-sabre-lite/dp/05W6138)) with a quad core ARM is $180. The plastic case and added pins don't add much to the cost. I would expect this to be priced under $250, but $300 would not be too far-fetched if I missed some components. I would be highly disappointed if this was anywhere close to the current $525 for the present cRIO.
LabVIEW won't change much between now and then. The target device may have some different setup requirements in the code (header files, target device settings). I would be surprised if the workflow was any different. The target device shouldn't matter much when it comes to the code - the compiler will do the work for you. The current cRIO is running a Real Time Operating System (RTOS) and I expect the roboRIO to also run a RTOS variant of Linux.
I am curious if NI will support the present cRIOs to allow us to continue to use them in the future for various other projects/classroom activities since we have four of them.
Clinton Bolinger
08-08-2013, 10:45
But releasing software on kickoff is just crazy, I mean we really do have to install LV and new Driver Station and new Utilities on a whole bunch of computers, and there isn't even anything game related in a new version of software. I too would like to see the new controller and software in my hand at least 6mos before 2015 kickoff.
Sounds like a great question for Friday with Frank or the Q&A later today.
Cross the Road Electronics is also going to have a CAN pneumatic device that will take care of the Compressor, Pressure Switch and 8 solenoids (12v OR 24v not both).
http://i.imgur.com/276hmyB.jpg
Pneumatic Controller from IRI 2013 (https://lh4.googleusercontent.com/-5saMk08ZmDc/Uem1o9fp-9I/AAAAAAAAI7Y/URp5MPkuwfg/w427-h569-no/20130719_175418.jpg)
-Clinton-
-The spec lists integrated 3-axis accelerometer. I wonder why they didn't include a gyro on this, since a gyro is definitely a way more useful sensor for FRC. I have never found a use for a chassis-mounted accelerometer in FRC.
A gyro would have trouble if a team decided to mount their control system vertically, or if they didn't put the controller in the center of the robot. Every year we get these accelerometers in the KoP, and I still have no idea what to do with them. We have like 10.
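To put a rough number on the accelerometer problem: any constant bias in the reading, once double-integrated, becomes position error that grows quadratically with time. A quick sketch with made-up numbers (0.05 m/s^2 bias, 50 Hz loop; not from any spec):

```java
// Why a chassis accelerometer is hard to use for position tracking: a tiny
// constant bias, double-integrated, grows quadratically. The bias and loop
// rate below are illustrative values only.
public class AccelDrift {
    public static double positionErrorAfter(double biasMps2, double dt, int steps) {
        double v = 0.0, x = 0.0;
        for (int i = 0; i < steps; i++) {
            v += biasMps2 * dt;   // integrate acceleration -> velocity
            x += v * dt;          // integrate velocity -> position
        }
        return x;
    }
    public static void main(String[] args) {
        // After 15 s at 50 Hz, a 0.05 m/s^2 bias alone gives ~5.6 m of error.
        System.out.println(positionErrorAfter(0.05, 0.02, 750));
    }
}
```

A gyro with the same bias only drifts linearly in heading, which is part of why it tends to be the more useful chassis sensor.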
Also, I feel like the 500% increase in processing power was a bit unnecessary, but it's going to be cool to work with a dual core ARM Cortex processor.
Looking at the spec sheet, we can now use both Ethernet- and USB-connected cameras, but there is no use listed for the USB device port.
Some questions:
-Do we have a jumper for the 6V servo power?
-The spec sheet lists specs for both a 5V and a 3.3V supply. Where are they, and can we use them? The old DSC had two extra pins for the 5V supply next to the DIO, but I don't see a way to get 5V from here without using one of the +5V pins on a DIO.
-How do we know which of the pins in the "custom electronics" connector do which things?
-The picture of the pcb shows a 5V/3.3V selection jumper, but it can't be seen from the new pictures. What does this select?
-Do we have a jumper for the 6V servo power?
There aren't any jumper ports... I hope we won't need a separate board for them.
protoserge
08-08-2013, 10:54
A gyro would have trouble if a team decided to mount their control system vertically, or if they didn't put the controller in the center of the robot. Every year we get these accelerometers in the KoP, and I still have no idea what to do with them. We have like 10.
Also, I feel like the 500% increase in processing power was a bit unnecessary, but it's going to be cool to work with a dual core ARM Cortex processor.
USB Host + ARM processing power... Definitely not unnecessary. Now you can hook your Kinect directly to the robot controller and do image processing.
Sounds like a great question for Friday with Frank or the Q&A later today.
Cross the Road Electronics is also going to have a CAN pneumatic device that will take care of the Compressor, Pressure Switch and 8 solenoids (12v OR 24v not both).
http://i.imgur.com/276hmyB.jpg
Pneumatic Controller from IRI 2013 (https://lh4.googleusercontent.com/-5saMk08ZmDc/Uem1o9fp-9I/AAAAAAAAI7Y/URp5MPkuwfg/w427-h569-no/20130719_175418.jpg)
-Clinton-
This is not good. The eight solenoid control really only allows for 4 solenoids, as the double solenoids require two channels to operate. With the relays it will allow for a total of 8 double solenoids. I know that 118 has used more than this in the past, and 236's bump thing (http://www.youtube.com/watch?v=IgrlsfXICsA) used 8 just for their wheels, plus 2 or 3 more on the rest of the bot.
USB Host + ARM processing power... Definitely not unnecessary. Now you can hook your Kinect directly to the robot controller and do image processing.
If you can get the library to process kinect data on the controller, and if you can find a USB driver that works with linux on whatever USB controller is being used.
Jimmy Nichols
08-08-2013, 11:03
http://new.livestream.com/accounts/4829514/athena/images/26730317
Just posted a new pic of the controller.
Jon Stratis
08-08-2013, 11:14
This is not good. The eight solenoid control really only allows for 4 solenoids, as the double solenoids require two channels to operate. With the relays it will allow for a total of 8 double solenoids. I know that 118 has used more than this in the past, and 236's bump thing (http://www.youtube.com/watch?v=IgrlsfXICsA) used 8 just for their wheels, plus 2 or 3 more on the rest of the bot.
As the device runs on CAN, I would assume you can utilize more than one of them on the robot at a time. From what I've seen inspecting, a majority of teams don't need more than 4 double solenoids. Since you have to draw the line somewhere (4, 8, 20, whatever), why not keep the whole thing as small as possible while still covering what most teams need? The teams that need more can get more by adding more boards.
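For anyone counting along, the channel math being debated is just this (the solenoid counts below are examples, not any particular robot):

```java
// Channel budgeting for a solenoid module: each double solenoid consumes
// two output channels, each single-acting solenoid consumes one.
public class SolenoidBudget {
    public static int channelsNeeded(int doubles, int singles) {
        return doubles * 2 + singles;
    }
    public static void main(String[] args) {
        // An 8-channel module maxes out at 4 double solenoids...
        System.out.println(channelsNeeded(4, 0));  // 8
        // ...but swapping two doubles for singles frees two channels.
        System.out.println(channelsNeeded(2, 2));  // 6
    }
}
```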
Akash Rastogi
08-08-2013, 11:22
Best part: Every team gets one free! :D
Can you elaborate? Every rookie?
Can you elaborate? Every rookie?
Every team that'll sign up to the 2015 season, according to earlier announcements.
Jon Stratis
08-08-2013, 11:28
Can you elaborate? Every rookie?
I expect that the cRio will not be considered part of the legal control system starting in 2015. That doesn't stop you from using it on a practice bot, just like teams used the old IFI control system for non-competition robots after the cRio came out.
Clinton Bolinger
08-08-2013, 11:31
As the device runs on CAN, I would assume you can utilize more than one of them on the robot at a time. From what I've seen inspecting, a majority of teams don't need more than 4 double solenoids. Since you have to draw the line somewhere (4, 8, 20, whatever), why not keep the whole thing as small as possible while still covering what most teams need? The teams that need more can get more by adding more boards.
What Jon said +1.
Also, teams can use single-acting solenoids and reduce the number of outputs needed, as long as they're comfortable with the cylinder having a default state that it will return to at the end of the match or on loss of comms.
The CTRE device is replacing the current solenoid module that has 8 outputs, but allows for easier (and less weight) expansion of more outputs.
-Clinton-
efoote868
08-08-2013, 11:48
A gyro would have trouble if a team decided to mount their control system vertically, or if they didn't put the controller in the center of the robot. Every year we get these accelerometers in the KoP, and I still have no idea what to do with them. We have like 10.
There are 3-axis gyros as well. I used a digital I2C one in my senior design project. The chip (L3GD20) had a footprint of about 4mm x 4mm; I wonder if they can fit one in a future revision?
Joe Ross
08-08-2013, 11:55
--I'm not worried about enough IO, but a few more ADC's would be nice for extra datalogging. Buzz18 uses 5 of 7 and Buzz 17 used 6 of 7, and it's nice to look at other analog signals every now and then. I also wonder if the 5v/3.3v supply is monitored by the ADC separately because we like to look at that for diagnostics, and previously used a jumper in adc7 to do this, which consumes an analog channel.
The specs list 8 ADC and 2 DAC. Presumably, the extra ones are on the Custom Electronics Port. Since the ADCs are 12 bit 0-5v, we get 2 free bits compared to the current cRIO, as long as you don't care about -10 to 10v.
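The "two free bits" arithmetic works out like this, assuming (as the post implies) the current cRIO analog really is 12-bit spread over the full -10 to +10 V span:

```java
// Resolution comparison: 12 bits over 0-5 V vs the same 12 bits over
// -10..+10 V. Same bit depth, a quarter of the span, so each count is
// 4x smaller -- i.e. two extra effective bits within the 0-5 V range.
public class AdcResolution {
    public static double voltsPerCount(double vMin, double vMax, int bits) {
        return (vMax - vMin) / (1 << bits);
    }
    public static void main(String[] args) {
        double newRio  = voltsPerCount(0.0, 5.0, 12);    // ~1.22 mV per count
        double oldCrio = voltsPerCount(-10.0, 10.0, 12); // ~4.88 mV per count
        System.out.println(oldCrio / newRio);            // 4.0 -> log2(4) = 2 bits
    }
}
```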
AdamHeard
08-08-2013, 11:56
This is not good. The eight solenoid control really only allows for 4 solenoids, as the double solenoids require two channels to operate. With the relays it will allow for a total of 8 double solenoids. I know that 118 has used more than this in the past, and 236's bump thing (http://www.youtube.com/watch?v=IgrlsfXICsA) used 8 just for their wheels, plus 2 or 3 more on the rest of the bot.
The relay outputs onboard are intended to drive a spike I imagine, if they are keeping with the nomenclature of the sidecar (and the fact that they are three pins confirms this).
Driving pneumatics off a spike is doable, but really doesn't make any sense in this era.
The CAN-based pneumatics bumper would presumably allow effectively unlimited solenoids for FRC purposes.
Also, no way 236 used more than 8 solenoids for that motion. I just don't see enough going on for that.
wilsonmw04
08-08-2013, 11:56
how does one use livestream. i made an account and went to the link provided but there isn't a video feed, just a few posts with pics of the new controller. it says there are 300+ watching but i can't find it!! please help an old guy out :-)
Steven Donow
08-08-2013, 11:58
how does one use livestream. i made an account and went to the link provided but there isn't a video feed, just a few posts with pics of the new controller. it says there are 300+ watching but i can't find it!! please help an old guy out :-)
Nothing is streaming yet (well, not until <2 minutes from now)
EDIT: As I typed that, the stream started, albeit with a lost signal message (that went away as I edited this post...)
wilsonmw04
08-08-2013, 11:58
how does one use livestream. i made an account and went to the link provided but there isn't a video feed, just a few posts with pics of the new controller. it says there are 300+ watching but i can't find it!! please help an old guy out :-)
NM! it starts on its own...
AdamHeard
08-08-2013, 11:59
Anyone know when the video from this morning will be posted?
Joe Ross
08-08-2013, 11:59
how does one use livestream. i made an account and went to the link provided but there isn't a video feed, just a few posts with pics of the new controller. it says there are 300+ watching but i can't find it!! please help an old guy out :-)
Refresh, they just posted the stream.
Steven Donow
08-08-2013, 12:13
From the Q&A, there will be an expansion port for your own electronics/more analog inputs.
Deploying over USB will be available.
LabVIEW Pro Edition, Java SE, and C++ 11 will be used.
Both Ethernet and USB radios are compatible, they are still testing to find which radio will be used.
Ethernet cameras (i.e. Axis cameras), USB camera (less expensive), and even commercial cameras (high quality).
Targeting low $400s in price, and expecting the price to come down in the future. Expect a firmer answer at the 2014 Championship.
The Kinect will be available to interface directly to the RoboRIO, but NI doesn't plan on including native support.
Reverse polarity protection +- 12V
Library level (not processor level) software simulator.
Additional diagnostic tools will be available in the driver station for teams, and in the FMS for event staff.
RoboRIO will have conformal coating.
1 RoboRIO will be available per team at a reduced rate once per year, additional units will be at the academic price.
Kate won't say whether the roboRIO is even allowed for 2015. :yikes:
Java SE! Java SE! Java SE!
We will use Java SE, not ME, and we'll be programming in eclipse!
Steven Donow
08-08-2013, 12:22
Will be available for purchase in fall of 2014, beta testing next summer, info will be revealed in Frank's FRC Blog
Can be mounted safely via zipties
You can continue to use Ethernet cameras, use simple USB cameras, or load your own Linux drivers to use virtually any USB camera.
jman4747
08-08-2013, 12:23
Can buy them fall 2014!
You can attach it with cable ties!!!!!!!!!!
Peter Johnson
08-08-2013, 12:25
Glad I wasn't wrong on RT Linux! RobotPy will essentially be obsoleted by this, as there will no longer need to be a custom Python interpreter (just need to wrap the new WPILib), plus we get all the rest of the goodies that come with Linux (shell access!).
It should be more than powerful enough to talk to the Kinect, at least at low resolution. Linux support is actually pretty good, at least if you're just pulling the image+depth field.
Team cost: "Low $400's" to start. Always working to reduce cost.
Don't forget every team gets one for free. This is just for teams that want an additional controller.
Anupam Goli
08-08-2013, 12:28
Glad I wasn't wrong on RT Linux! RobotPy will essentially be obsoleted by this, as there will no longer need to be a custom Python interpreter (just need to wrap the new WPILib), plus we get all the rest of the goodies that come with Linux (shell access!).
It should be more than powerful enough to talk to the Kinect, at least at low resolution. Linux support is actually pretty good, at least if you're just pulling the image+depth field.
I'm going to love playing around with the RT Linux on this thing. Custom drivers can be loaded to have any device working with the controller!
Steven Donow
08-08-2013, 12:29
When the roboRIO is put into place, "legacy hardware" will no longer be legal
This isn't the "FRC And Purchase Orders" panel, people :'(
When the roboRIO is put into place, "legacy hardware" will no longer be legal
Not surprising, since every team gets a roboRIO for free.
jman4747
08-08-2013, 12:36
New PD board.
Faster encoder and SPI sampling
protoserge
08-08-2013, 12:39
Team cost: "Low $400's" to start. Always working to reduce cost.
That's not awful. Definitely higher than I was hoping. What are they "adding on"?
Andrew Schreiber
08-08-2013, 12:45
That's not awful. Definitely higher than I was hoping. What are they "adding on"?
Recouping dev costs most likely. The cRIO was more or less an off-the-shelf part with a new screen print. This seems to be a custom-built solution for some very specific needs with a small market.
Not waterproof! :rolleyes: (But conformal coated)
protoserge
08-08-2013, 13:13
Recouping dev costs most likely. The cRIO was more or less an off-the-shelf part with a new screen print. This seems to be a custom-built solution for some very specific needs with a small market.
I agree with the development costs. They are producing a minimum of 6000 units if I remember the RFP correctly.
Have you seen the new cRIO they just released this week (cRIO-9068)? It's also an ARM Cortex A9 with a RT Linux. It should be pretty interesting to see how that is applied.
protoserge
08-08-2013, 13:34
Zynq-7020 information...
http://www.xilinx.com/products/silicon-devices/soc/zynq-7000/index.htm
http://www.xilinx.com/applications/broadcast/cameras.html :cool:
All cool stuff, but am I the only one disappointed that the code name wasn't carried over to the final product?
Athena is just cool.
ablatner
08-08-2013, 14:34
At work and can't view the stream, but the comments here make it sound incredible. Electronics just got so much easier. Maybe with this and CAN Talons we'll give CAN another shot.
protoserge
08-08-2013, 14:36
Amen to that. CAN should be much better documented and supported come the 2015 season.
Oh, and I'm sure the name "Athena" will stick around for a while.
Meshbeard
08-08-2013, 14:36
Ohhhhhhhhhhhhhh man. I'm so jealous. Everything about this controller is better than the cRIO sidecar duo. I too liked "Athena" better than roboRIO, so I'll probably refer to it as the Athena for a little while longer.
Is there any more information available on the custom electronics port? pinout? This custom electronics port is the coolest feature of the Athena IMO.
I'm hyped and I'm not even a student anymore.
ebmonon36
08-08-2013, 15:28
Is there any more information available on the custom electronics port? pinout? This custom electronics port is the coolest feature of the Athena IMO.
During the Q&A, they referred to the custom electronics port as the MXP (myRIO Expansion Port). NI's other product with an MXP has its pinout shown here:
http://zone.ni.com/reference/en-XX/help/373925A-01/myriohelp/myrio_connector_pinouts/
The pin counts match the features of the custom electronics port; however, I have no confirmation that this is the correct pinout diagram for Athena.
Your robot always fails when it matters the most! (Pesky radio!)
Haha, as one of the students on stage operating the robot, I can tell you that it wasn't actually the radio's fault. Our computer opened up some bizarre program that I didn't know how to close so space bar was my first reaction, which, as you know, is the E-Stop. This meant that I had to power-cycle the robot and wait for the cRIO reboot. Whoops. :yikes:
Joe Ross
08-08-2013, 16:00
FPGA?
It uses a Xilinx Zynq-7020, which is a dual-core ARM Cortex-A9 processor and an Artix-7 FPGA in one package.
Joe Hershberger mentioned that encoder decoding was increased to 1 MHz.
Tom Line
08-08-2013, 16:04
This is very exciting. The NiWeek expo was incredible and the roborio is impressive. We talked with Greg and Joe, and they presented a neat option. With the USB available you can even use a small USB dongle as your robot radio.
It does look very cool. I'm excited about the use of RT Linux, which opens up lots of possibilities. I like that there are basic direct connections but also expansion capabilities. Having SPI and I2C interfaces gives lots and lots of sensor options.
And just think, add a small solid state drive and vBulletin, it can be your team's webserver in the off season :rolleyes:
Joe Ross
08-08-2013, 16:44
Anyone know when the video from this morning will be posted?
Both the keynote and the panel have been posted.
AdamHeard
08-08-2013, 16:47
Both the keynote and the panel have been posted.
I guess I'm dumb and not seeing the keynote, mind linking it?
Thanks.
Joe Ross
08-08-2013, 16:51
http://www.ni.com/niweek/keynote-videos/
protoserge
08-08-2013, 19:33
FPGA?
The Zynq Chipset is a combined ARM Cortex A9 dual core and FPGA. Pretty cool if you ask me! I posted the links in one of my last posts, but here is the Zynq page: http://www.xilinx.com/products/silicon-devices/soc/zynq-7000/index.htm
pigpenguin
08-08-2013, 20:39
Confused on servo power, due to the lack of jumpers and everything. Is it software defined somehow? Or will we need a separate board to run servos?
Peter Johnson
08-08-2013, 21:03
Confused on servo power, due to the lack of jumpers and everything. Is it software defined somehow? Or will we need a separate board to run servos?
It's definitely unclear. The spec lists 6V servo power, so I guess it's software controlled.
Peter Johnson
08-08-2013, 21:14
Can anyone identify the connector being used for CAN bus? It almost looks like it only has two pins.. where are the +5V and ground pins?
RufflesRidge
08-08-2013, 21:42
Confused on servo power, due to the lack of jumpers and everything. Is it software defined somehow? Or will we need a separate board to run servos?
FRC speed controllers don't connect the power pin to anything, odds are that they just routed 6v to the power pin on all of the PWM pins.
Can anyone identify the connector being used for CAN bus? It almost looks like it only has two pins.. where are the +5V and ground pins?
The CAN connector looks a lot like the power connector on the 2CAN. Probably just the two wires. As a differential bus CAN shouldn't need the 5V and GND being passed around.
FRC speed controllers don't connect the power pin to anything, odds are that they just routed 6v to the power pin on all of the PWM pins.
Correct. PWM power pins are 6V, and you get a couple of amps for driving servos.
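For context on what the signal pin itself carries (the 6V rail only powers the servo's center pin), here's the standard hobby-servo timing convention. The 1-2 ms pulse over a 20 ms period is the usual convention, not something taken from the roboRIO spec:

```java
// Typical hobby-servo timing (assumed: 1.0-2.0 ms pulse maps 0-180 deg,
// 50 Hz / 20 ms period). The 6V supply feeds the servo motor; the signal
// pin carries this pulse train.
public class ServoPwm {
    public static double pulseMs(double angleDeg) {
        return 1.0 + (angleDeg / 180.0) * 1.0;  // 0 deg -> 1.0 ms, 180 deg -> 2.0 ms
    }
    public static double dutyCycle(double angleDeg) {
        return pulseMs(angleDeg) / 20.0;        // fraction of the 20 ms period
    }
    public static void main(String[] args) {
        System.out.println(pulseMs(90.0));   // 1.5 ms (center)
        System.out.println(dutyCycle(90.0)); // 0.075
    }
}
```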
The CAN connector looks a lot like the power connector on the 2CAN. Probably just the two wires. As a differential bus CAN shouldn't need the 5V and GND being passed around.
Correct as well - the CAN topology in this system does not distribute power. Power would be supplied to each device separately.
Meshbeard
08-08-2013, 22:09
Can anyone identify the connector being used for CAN bus? It almost looks like it only has two pins.. where are the +5V and ground pins?
I'd assume that those two connectors are for the CAN-high and CAN-low wires. Power is probably separate since all the power systems on an FRC robot are separate from the controls.
What will likely happen is all of CTRE's new CAN stuff will have those two CAN ports separate from the regular old 12v power inputs on all the other components.
donkehote
08-08-2013, 22:26
I wish they had gotten a better toaster to record the webcast. I sincerely hope that the roboRIO can process better video quality than the webcast.
Joking aside, it looks great! I really like the smaller footprint, the lighter weight, and the CAN bus support. Do I see an optical port at the top next to the USB? Would that be for faster camera response, or is it some other connector for something else?
Joe Ross
08-08-2013, 22:49
Do I see an optical port at the top next to the USB? Would that be for faster camera response, or is it some other connector for something else?
Are you talking about the USB B connector? (To the left of the 2 USB A connectors)
donkehote
08-08-2013, 22:56
Are you talking about the USB B connector? (To the left of the 2 USB A connectors)
Oh, it's a USB B, that's what it is. I was thinking it looked like an optical audio port and I was wondering what it would be for.
Mark McLeod
08-08-2013, 23:33
I believe Greg mentioned that the USB B port could be used to drop programs into the roboRIO.
SoftwareBug2.0
08-08-2013, 23:52
C++ 11 will be used.
I'm pretty happy about that detail.
Peter Johnson
09-08-2013, 01:02
The CAN connector looks a lot like the power connector on the 2CAN. Probably just the two wires. As a differential bus CAN shouldn't need the 5V and GND being passed around.
Correct as well - the CAN topology in this system does not distribute power. Power would be supplied to each device separately.
I'd assume that those two connectors are for the CAN-high and CAN-low wires. Power is probably separate since all the power systems on an FRC robot are separate from the controls.
What will likely happen is all of CTRE's new CAN stuff will have those two CAN ports separate from the regular old 12v power inputs on all the other components.
For robustness some systems pass around the control power. In these systems, the CAN bus control logic on each component is powered from the CAN bus power lines instead of the mains supply, ensuring the component is always reachable even if the mains supply to that component is lost (with optoisolation between the CAN control logic and the rest of the circuitry). This could be useful even in the FRC world: for example, tripping a breaker would not cause that CAN bus component (e.g. motor controller) to drop off the bus (so you could still pull status, and it could report it doesn't have mains power etc).
There is a serious problem with the current CAN layer where timeouts to Jaguars could swamp other parts of the system due to error spamming. As timeouts can be caused for multiple reasons (bad termination, disconnected cable, etc), regardless of the power distribution approach, my hope is that the software layer will be updated to avoid this issue.
For robustness some systems pass around the control power. In these systems, the CAN bus control logic on each component is powered from the CAN bus power lines instead of the mains supply, ensuring the component is always reachable even if the mains supply to that component is lost (with optoisolation between the CAN control logic and the rest of the circuitry). This could be useful even in the FRC world: for example, tripping a breaker would not cause that CAN bus component (e.g. motor controller) to drop off the bus (so you could still pull status, and it could report it doesn't have mains power etc).
Note the new PDB is CAN enabled so it can report a tripped breaker as well as the current flowing through each breaker. So you will be able to tell if something on the BUS isn't responding due to a tripped/cycling breaker.
DonRotolo
09-08-2013, 08:48
EDIT: There is a hardware connection within the Jags to prevent this (see posts following)
The current Jaguar CAN has us daisy-chaining network nodes, so if one drops the network, everything beyond it becomes unreachable.
In the automotive world, (most) CAN Buses have a star architecture, everyone hears everything, always.
otherguy
09-08-2013, 09:21
The current Jaguar CAN has us daisy-chaining network nodes, so if one drops the network, everything beyond it becomes unreachable.
I'm pretty sure this isn't true.
I can't find the schematics or pictures of the PCB to support this, but if I remember correctly, the CAN traces are bridged ON THE PCB of each jaguar. That means a jag without power will passively relay CAN communications through itself. This is true for CAN communications on all the jaguars. It doesn't apply to a black jag which is being used as a serial bridge. If that one loses power/communications... then you lose your conduit through which to communicate to the CAN bus. There's no way around that besides using something like the 2CAN.
I believe the software side of the house (for Java at least) doesn't handle a controller going offline gracefully. There's a timeout period of something like 3 seconds which the CAN communication code will block on. So this will effectively take down all CAN communications for that period unless YOU set your program up to detect the missing CAN devices and stop trying to communicate with them. 3 seconds of no communications to motor controllers doesn't make for a very happy robot. This is what happened to us in 2012 @ NYC during eliminations (with your team BTW). We had a snap-action breaker which would randomly fail open for no apparent reason. This would take one of our drivetrain motor controllers offline, and cause the remaining CAN controllers to not get any communications for long periods of time (we had all our drivetrain motor comms in a single try/catch block). We were trying to fix this between semifinal matches, but by the time I realized what was causing our problem there was just not enough time to make the changes and deploy the code. Of course we didn't have that issue UNTIL semifinals.
dyanoshak
09-08-2013, 09:28
The current Jaguar CAN has us daisy-chaining network nodes, so if one drops the network, everything beyond it becomes unreachable.
This statement is misleading.
The CAN signals are hard wired from connector to connector on the Jaguar PCB. Even if the Jaguar loses power, other Jags on the network will be unaffected.
However, if the CAN cabling is physically disconnected, then yes, every Jag after that will be unreachable.
Edit:
James beat me to it :)
He's also correct about the Black Jag when it is the serial bridge:
It doesn't apply to a black jag which is being used as a serial bridge. If that one loses power/communications... then you lose your conduit through which to communicate to the CAN bus. There's no way around that besides using something like the 2CAN.
flameout
09-08-2013, 11:55
I've seen references to RTLinux, and to Linux with "realtime extensions." Does anyone know what variant of realtime Linux will be on the RoboRIO? Have they revived RTLinux?
Peter Johnson
09-08-2013, 12:04
I've seen references to RTLinux, and to Linux with "realtime extensions." Does anyone know what variant of realtime Linux will be on the RoboRIO? Have they revived RTLinux?
It's a custom NI version. Per https://decibel.ni.com/content/message/56764 and http://www.ni.com/white-paper/14626/, it sounds like it's 2.6 with the PREEMPT_RT patchset (rt.wiki.kernel.org).
flameout
09-08-2013, 12:11
Thank you -- I saw the whitepaper, but that didn't mention anything specific about the kernel itself. The NI community post is much more informative.
I'm glad to hear it's PREEMPT_RT -- in my experience with RTAI, Xenomai, and PREEMPT_RT, it has (by far) the best driver selection, being the only one able to use standard Linux drivers in realtime.
Thad House
09-08-2013, 12:43
I'm really happy about this. Having full Linux onboard would be great. Access to the shell means we can write other programs and have the main program call those programs, which should let us get more modular. Also, adding new languages should be possible just by wrapping the libraries, so C# and Python will both be easily doable because both are available on ARM Linux.
Joe Ross
09-08-2013, 12:53
It's a custom NI version. Per https://decibel.ni.com/content/message/56764 and http://www.ni.com/white-paper/14626/, it sounds like it's 2.6 with the PREEMPT_RT patchset (rt.wiki.kernel.org).
I'm guessing 3.2. http://article.gmane.org/gmane.linux.rt.user/10332
DonRotolo
09-08-2013, 13:28
Thanks otherguy and dyanoshak for the correction. I was just plain wrong. Learn something new every day and all...
Yes, of course if you physically disconnect it, or if the 'Serial-to-CAN converter' (that first black Jag) goes down, all bets are off.
Meshbeard
09-08-2013, 17:47
If I recall correctly, the issue that some teams were having with CANbus was when one jag had some sort of error and flooded the CAN network with error messages so commands from the controller couldn't get through. I think it was just an issue with the jags.
RufflesRidge
09-08-2013, 21:39
If I recall correctly, the issue that some teams were having with CANbus was when one jag had some sort of error and flooded the CAN network with error messages so commands from the controller couldn't get through. I think it was just an issue with the jags.
It's an issue with how WPILib deals, or rather doesn't deal, with a jag erroring out. If you keep trying to talk to the jag with messages that require an ACK, you will have to keep waiting for the full timeout to not get one, which will cause your code to slow down and throw motor safety errors, which slow the code down further, and... well, you can see where this is going.
You can use the No-Ack versions of messages to help and/or add some intelligence in a class that wraps CAN Jag to prevent spamming messages to a jag that's not responding and to detect and re-initialize a jag that browns out if you are using anything other than the default mode.
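The wrapper idea above could be sketched roughly like this. `CanController` and `GuardedController` are hypothetical names for illustration, not the WPILib `CANJaguar` API; the real class would also need to re-send mode configuration when a browned-out jag comes back.

```java
// Hypothetical stand-in for a CAN motor controller whose set() returns
// false when the node times out (the real CANJaguar API differs).
interface CanController {
    boolean set(double speed);
}

// Wrapper that stops spamming an unresponsive controller: after a failed
// command it skips the (blocking) bus access and only probes the node
// every few cycles, so a dead jag can't stall the whole control loop,
// while a browned-out one is picked back up when it starts ACKing again.
class GuardedController {
    private final CanController inner;
    private final int probeInterval; // probe once per this many skipped calls
    private boolean healthy = true;
    private int skipped = 0;

    GuardedController(CanController inner, int probeInterval) {
        this.inner = inner;
        this.probeInterval = probeInterval;
    }

    /** Returns true only if the command actually reached the controller. */
    boolean set(double speed) {
        if (!healthy) {
            if (++skipped < probeInterval) {
                return false;          // don't block on a node we know is down
            }
            skipped = 0;               // time to probe the node again
        }
        boolean ok = inner.set(speed); // may block once, not every cycle
        healthy = ok;
        return ok;
    }
}
```

With a probe interval of 5, a dead controller eats one blocking timeout per five loop iterations instead of one per iteration.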
It's an issue with how WPILib deals, or rather doesn't deal, with a jag erroring out. If you keep trying to talk to the jag with messages that require an ACK, you will have to keep waiting for the full timeout to not get one, which will cause your code to slow down and throw motor safety errors, which slow the code down further, and... well, you can see where this is going.
You can use the No-Ack versions of messages to help and/or add some intelligence in a class that wraps CAN Jag to prevent spamming messages to a jag that's not responding and to detect and re-initialize a jag that browns out if you are using anything other than the default mode.
The delays and motor safety you see could be one of 2 things...
1) The errors causing slowdown in LabVIEW WPILib due to the auto-error-handler feature, which has to synchronize with the main thread. This is triggered when the error wire coming out of a Jaguar access VI is left unconnected. Just wire it up to the side of a structure such as a loop, case structure, or sequence structure, and the auto error handler will not be invoked.
2) You have no threading in your robot program that allows other motors to be accessed while the one controller fails to respond. This of course causes the rest of them to not get the control update in time either. You will then see the motor safety messages. This applies to any language.
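One way to address point 2 is to give each CAN device its own worker thread, so an ACK timeout only delays later commands to that one device. A rough sketch; `CanController` and `MotorChannel` are illustrative names, not WPILib classes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a CAN motor controller whose set() may block
// for a full ACK timeout when the node is offline.
interface CanController {
    void set(double speed);
}

// Each device gets its own single-thread executor, so a set() call that
// blocks on a timed-out node never stalls commands to the other motors
// (or the main robot loop).
class MotorChannel {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final CanController controller;

    MotorChannel(CanController controller) {
        this.controller = controller;
    }

    // Queues the command and returns immediately; a blocked controller
    // only delays later commands to this same channel.
    void set(double speed) {
        worker.submit(() -> controller.set(speed));
    }

    void shutdown() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

In a real robot program you would also want to collapse queued commands so a recovering device only receives the latest setpoint, not a backlog.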
I'm guessing 3.2. http://article.gmane.org/gmane.linux.rt.user/10332
Very nice sleuthing.
nuggetsyl
11-08-2013, 17:58
I would like to make a request for GEN 2 of our new 2014 controller.
Mounting Holes.
AllenGregoryIV
11-08-2013, 18:34
I would like to make a request for GEN 2 of our new 2014 controller.
Mounting Holes.
I didn't really notice that. It does say mounting features (https://decibel.ni.com/content/docs/DOC-30419). Anyone want to explain how they are designed to work? We won't have to resort to velcro and zip ties like we currently do for the radio.
Also where are the other shared PWM pins? The pinout on the site only lists 3 of the 10 pins that should have PWM.
Steven Donow
11-08-2013, 18:54
I didn't really notice that. It does say mounting features (https://decibel.ni.com/content/docs/DOC-30419). Anyone want to explain how they are designed to work? We won't have to resort to velcro and zip ties like we currently do for the radio.
Also where are the other shared PWM pins? The pinout on the site only lists 3 of the 10 pins that should have PWM.
IIRC from the stream they said that there are holes in the corners of it intended for zip ties (i.e., it's more reliable than just zip-tying a radio now). But I think they also said there are other holes that you could use to mount it, though I think you have to mount it to something else, not just the frame of your robot (they mentioned this as a way of encouraging the approaching 3D printing boom).
Greg McKaskle
11-08-2013, 20:14
My controller is at work or I could attach pictures. Let's see if I can do this in words. The controller PCB is mounted in a plastic case that has an upper and lower shell screwed together in the corners from the bottom.
At the moment, the controller has eight integrated mounting channels in the lower shell. Four molded channels are radiused around each corner screw. If the controller is placed flat on the table, the cable tie will enter an edge parallel to the table, turn ninety degrees and exit the other edge near that corner, still parallel to the table. This is useful for mounting to any sort of vertical element perpendicular to the table or a frame element.
The remaining four holes run diagonally through the bottom edge of the controller's long sides. The cable tie will enter the long side parallel to the table, turn ninety degrees, and exit straight down into the table.
The holes are designed to work with 1/2" grid perforated stock.
At the moment, there are also four bronze threaded inserts into the bottom shell near the edge. They would allow a machine screw mount, most likely to a mounting plate designed by the team.
Velcro could also be used on either the bottom or on the edges.
The housings shown during NIWeek are still in development and are either machined or 3D printed. Molds have not been finalized and mounting is still subject to change -- especially if someone has a brilliant idea.
Greg McKaskle
Jon Stratis
11-08-2013, 23:09
I like the idea of the threaded inserts on the bottom for mounting. Do you know what size these are? While 1/4-20 would be serious overkill, you know teams have plenty of them sitting around from using the kitbot for years, which would make them a decent choice... but in the end, it's just something to add to the McMaster order, so it's not that big of a deal if it's not part of a team's "standard" sizes.
Also, what is the spacing on the inserts? Since they aren't through-holes (I assume), getting mounting holes for them lined up correctly could be a real pain.
I like having so many options for mounting! I have no idea what my team will decide to do...
MrRoboSteve
11-08-2013, 23:46
Nice work, Greg & NI team.
A couple suggestions / questions about the mechanical layout:
1. Can you try to have the status LEDs sit slightly above the surface of the enclosure, so that it's possible to see them from positions other than directly above the board? I know the Ethernet LEDs are going to be integrated with that part and are difficult to move; it's the ones in the upper RH corner that I'm concerned about.
2. Do the integrated LEDs relay all of the status that is currently available from the cRIO + DSC + analog sidecar combination?
3. It would be useful to have a PDF paper drilling template for teams that don't use perf stock for mounting.
4. Ethernet port should have a strong mechanical connection to the board.
5. Serial number / version sticker should be on the "top" of the board so that it's easily visible.
6. Any chance of having mechanical capture for the .10 cables, a la the DSC or Jaguar?
That's it for now.
AllenGregoryIV
12-08-2013, 00:53
Thanks Greg, I like the idea of threaded inserts on the bottom, but I'm worried that they'll easily be stripped out by eager freshmen. The zip tie holders seem like a reasonable idea.
Also what is the idea behind the threaded inserts on the top near the USB ports?
Why are the reset and user buttons on the side of the device? Seems like they could be hard to reach once it is mounted in the robot.
6. Any chance of having mechanical capture for the .10 cables, a la the DSC or Jaguar?
That's it for now.
I was just wondering about this myself.
timytamy
12-08-2013, 01:16
At the moment, there are also four bronze threaded inserts into the bottom shell near the edge. They would allow a machine screw mount, most likely to a mounting plate designed by the team.
Greg McKaskle
Can they be metric? Or, if not, could you supply some of the screws?
Ask for imperial screws in Australia and half the time you'll get pointed towards wood screws...
Greg McKaskle
12-08-2013, 07:23
Size of insert? Maybe a 6-32 or 8-32? I don't have it in front of me.
----------------
1. Can the LEDs be proud so they are visible from extreme angles?
The view angle is great. Mine is 3D printed, so it's hard to tell about details, but they seem flush and view well from extreme angles.
2. Do the integrated LEDs relay all of the status that is currently available from the cRIO + DSC + analog sidecar combination?
They convey quite a bit more status info -- exception being relay state.
3. It would be useful to have a PDF paper drilling template for teams that don't use perf stock for mounting.
Yep.
4. Ethernet port should have a strong mechanical connection to the board.
And good ESD protection.
5. Serial number / version sticker should be on the "top" of the board so that it's easily visible.
Hmm. This is available through the web connection. Aesthetics count for something. We will keep the problem in mind.
6. Any chance of having mechanical capture for the .10 cables, a la the DSC or Jaguar?
The capture at the connector is being addressed. The cable retention "hook" has been moved to the mounting plate to offer design flexibility.
--------
The USB inserts are for this ...
http://sine.ni.com/nips/cds/view/p/lang/en/nid/210962
It isn't required, but the controller is compatible with the cable spec.
The other two on the front are for custom circuit attachment.
Button location may be a carry-over from myRIO. The buttons aren't expected to be used very often.
Chris Rake can better answer these questions, so trust his updates more than my memory.
Greg McKaskle
I've got another question.
On the picture of the PCB here (http://www.usfirst.org/roboticsprograms/frc/blog-new-2015-controller-from-national-instruments), there is something (maybe a jumper) to select between 3.3v and 5v. Will we be able to use 3.3v sensors?
--Snip--
At the moment, there are also four bronze threaded inserts into the bottom shell near the edge. They would allow a machine screw mount, most likely to a mounting plate designed by the team.
--Snip--
Greg McKaskle
Sounds like this only allows bolt mounting from one side, meaning that I will have to thread into the NI controller from the backside of the mounting plate. This may create access issues. Would it be possible to put thru-holes in the case so that we can thread into the mounting plate/nut? This mounting would be similar to how all the other components are typically mounted, i.e. Victor/Jag/Talon.
I've got another question.
On the picture of the PCB here (http://www.usfirst.org/roboticsprograms/frc/blog-new-2015-controller-from-national-instruments), there is something (maybe a jumper) to select between 3.3v and 5v. Will we be able to use 3.3v sensors?
Yes. The jumper will be inside the case. You'll be able to clean out the metal shavings while you're in there. It selects which voltage is routed to the center pin of the DIO channels. Both voltages are available on the Custom Electronics Port and the SPI port.
Also where are the other shared PWM pins? The pinout on the site only lists 3 of the 10 pins that should have PWM.
The pinout for the myRIO shared functionality is not the same as roboRIO. It will be as compatible as reasonable, however. A pinout for the roboRIO's shared functionality will be available a little later (when it's nailed down).
Sounds like this only allows bolt mounting from one side, meaning that I will have to thread into the NI controller from the backside of the mounting plate. This may create access issues. Would it be possible to put thru-holes in the case so that we can thread into the mounting plate/nut? This mounting would be similar to how all the other components are typically mounted, i.e. Victor/Jag/Talon.
The idea here is that if you want that kind of mounting (which would make the case larger) you can attach the roboRIO to a mounting kit that allows this kind of through-hole mounting. If you choose to use the built-in cable tie mounting or an adhesive fastener, you are not forced to have a larger controller.
Joe Ross
12-08-2013, 13:38
One thing that was nice on the DSC was there was enough space around the RSL pins to plug in a 3 pin PWM cable. Since teams have many PWM cables, it was easier than making a 2 pin cable. It's hard to tell from the pictures if there is space, but it would be nice if there was room for a 3 pin cable for the RSL.
AdamHeard
12-08-2013, 13:51
The idea here is that if you want that kind of mounting (which would make the case larger) you can attach the roboRIO to a mounting kit that allows this kind of through-hole mounting. If you choose to use the built-in cable tie mounting or an adhesive fastener, you are not forced to have a larger controller.
I suppose this kit could be a simple plastic plate and some fasteners.
Sounds like a good item for Andymark to sell for <$10.
AdamHeard
12-08-2013, 13:52
One thing that was nice on the DSC was there was enough space around the RSL pins to plug in a 3 pin PWM cable. Since teams have many PWM cables, it was easier than making a 2 pin cable. It's hard to tell from the pictures if there is space, but it would be nice if there was room for a 3 pin cable for the RSL.
It's fairly easy to cut a 3 pin housing down to 2 with dykes, then clean up the jagged plastic with sandpaper or a file.
I agree with your point, but if no change is made, the above is an easy solution.
One thing that was nice on the DSC was there was enough space around the RSL pins to plug in a 3 pin PWM cable. Since teams have many PWM cables, it was easier than making a 2 pin cable. It's hard to tell from the pictures if there is space, but it would be nice if there was room for a 3 pin cable for the RSL.
There is currently not room for a 3-pin cable. I'll pass your comment on to mechanical.
AllenGregoryIV
12-08-2013, 15:46
The pinout for the myRIO shared functionality is not the same as roboRIO. It will be as compatible as reasonable, however. A pinout for the roboRIO's shared functionality will be available a little later (when it's nailed down).
That should probably be mentioned on this page. It's currently labeled as the roboRIO pinout.
https://decibel.ni.com/content/docs/DOC-30419
There is currently not room for a 3-pin cable. I'll pass your comment on to mechanical.
That should probably be mentioned on this page. It's currently labeled as the roboRIO pinout.
https://decibel.ni.com/content/docs/DOC-30419 (https://decibel.ni.com/content/docs/DOC-30419https://decibel.ni.com/content/docs/DOC-30419)
That link doesn't work for me.
AllenGregoryIV
12-08-2013, 16:24
That link doesn't work for me.
Sorry, I managed to paste it twice in the link box. I fixed the original too.
https://decibel.ni.com/content/docs/DOC-30419
This controller is really exciting. I can't wait to play around with Linux on it!
The expansion port is seriously awesome. It's probably not too hard to make an Arduino shield that plugs into it and communicates over serial. This pretty much means that you can get as much I/O as you need.
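A serial link to an expansion board like that usually wants some minimal framing so line noise can't masquerade as a command. A sketch of one possible scheme; the frame layout, header byte, and class names here are illustrative assumptions, not any real roboRIO or WPILib API:

```java
// Minimal framed serial protocol for talking to an expansion board
// (e.g. an Arduino on the controller's serial port). Frame layout:
// [0x55 header][channel][value][checksum], where the checksum is the
// XOR of the first three bytes, letting the receiver drop corrupted
// or misaligned frames.
class SimpleFrame {
    static final byte HEADER = 0x55;

    // Build a 4-byte frame meaning "set this channel to this value".
    static byte[] encode(int channel, int value) {
        byte c = (byte) channel;
        byte v = (byte) value;
        byte checksum = (byte) (HEADER ^ c ^ v);
        return new byte[] { HEADER, c, v, checksum };
    }

    // True if the frame is well-formed: header and checksum both match.
    static boolean verify(byte[] frame) {
        return frame.length == 4
                && frame[0] == HEADER
                && frame[3] == (byte) (frame[0] ^ frame[1] ^ frame[2]);
    }
}
```

The Arduino side would mirror the same four-byte layout in its sketch, scanning for the header byte to resynchronize after a dropped byte.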
I'm loving the footprint.
If I'm not mistaken, it's considerably smaller than even the old IFI systems.
I've long thought that the cRIO was serious quantities of overkill for our application, and I can see that roboRIO is no different in that regard. The truth of the matter is that we simply don't need that much processing horsepower.
5 years ago, we were running robots, with vision, gyros, accelerometers, and other sensors, on simple 8 bit microcontrollers running at clock speeds of a few 10s of MHz, worth less than $10. 10 years ago, we were working with just 26 bytes of variable space.
I'm disappointed by the price point. To me, a FIRST controller should be priced such that each team can receive a free one each year, as we did with the IFI system. I know I'm not the only one who feels that one free donated unit, followed by needing to buy one each subsequent year, is NOT a good solution. This is partly an artifact of using so much excess processing power.
I have extensive experience with RT variants of Linux through my workplace. We use RTAI currently, after switching away from RTLinux several years ago when it stopped being updated. If that experience has taught me anything, it's that the roboRIO's boot times are unlikely to be dramatically different from the cRIO's. I certainly hope my prediction is wrong on this front.
I'm really interested to see what they do in terms of radios. The existing solution using standard 5GHz wifi equipment (2.4GHz in Israel, due to 5GHz being a restricted military frequency) is a bit lacking (the biggest bottleneck in robot boot times is the radios).
All in all, I think roboRIO will be a dramatic improvement over cRIO as an FRC platform. I just think it falls a bit short in places it could have excelled.
We'll see though. Maybe I'll be surprised.
Jon Stratis
14-08-2013, 11:31
I was thinking about expansion boards last night. Could we get a couple of mounting points by the expansion slot (one on each side maybe?) to facilitate direct connection? I'm thinking an expansion board would be fairly small, and it would be great to be able to plug it into the expansion slot and bolt it down so it sits just above the controller. If not bolted down, I would worry about such a design possibly coming loose during competition.
Speaking of boot times, any official input on boot times? Download times? Compile times? Any times? Any time requirements/specs, if the development isn't finished yet?
My work projects (all on 180-240 MHz MPC5600 PowerPCs) have a requirement to boot and be ready to synchronize in under 300 ms from chip power-on. The requirement was the same for the 56 MHz MPC500s and older processors, and they also met it.
I don't know if there is any time requirement at all on the current system, and it seems like there just plain isn't (crazy!). VxWorks currently boots in a few seconds (which is very reasonable); the rest is all NetComm and user code. There is a lot of inefficiency in the user code init libraries too (at least in LV; for example, the encoder open block reads/writes the same config register and converts units four times). But a lot of it is still in NetComm, and I've been told it's because it has to load an FPGA image to read the ADC cals, then unload that image and load the real FPGA image. I would gladly sacrifice a bit of FPGA functionality (for example two counter channels, or the DIO PWM, or DMA) to fit the ADC cal reading in the main FPGA image.
The Vex guys can boot a Cortex and user code and establish radio link in ~15s. The PIC-based IFI would boot and establish radio link in 5s. Why are we sitting around a minute? And getting worse every year?
As for size, it's not too much smaller than an IFI RC. Very similar in size.
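Redundant init writes of the kind described above (the same config register written repeatedly during device open) are cheap to eliminate with a small write-through cache in front of the register interface. A hedged sketch; `RegisterBus` is a hypothetical stand-in, not the actual FPGA/NetComm API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical register interface for illustration.
interface RegisterBus {
    void write(int address, int value);
}

// Write-through cache: skips a bus access when the register already
// holds the requested value, so repeated device opens that re-write the
// same config register cost only one real write.
class CachedRegisterBus implements RegisterBus {
    private final RegisterBus raw;
    private final Map<Integer, Integer> lastWritten = new HashMap<>();

    CachedRegisterBus(RegisterBus raw) {
        this.raw = raw;
    }

    @Override
    public void write(int address, int value) {
        Integer prev = lastWritten.get(address);
        if (prev != null && prev == value) {
            return;                    // register already holds this value
        }
        raw.write(address, value);
        lastWritten.put(address, value);
    }
}
```

This only works for registers the software is the sole writer of; anything the hardware can change behind your back would need to bypass the cache.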
As for the compile times, I think they'd be about the same. For Java, I know they are using the standard Java SE compiler, and the compiled code will be uploaded somehow (not FTP anymore).
For the boot times, I'm hoping they'll be faster.
Linux can boot really really fast. I've got a little device that runs an FTP server at my house that boots in < 10 seconds.
I know that it will take a little longer as it needs to load the FPGA image/network stuff/user code initialization, but there isn't a reason it needs to be slow.
Check this out http://www.youtube.com/watch?v=-l_DSZe8_F8
wilsonmw04
14-08-2013, 15:13
The Vex guys can boot a Cortex and user code and establish radio link in ~15s. The PIC-based IFI would boot and establish radio link in 5s. Why are we sitting around a minute? And getting worse every year?
This is not my experience at all. We typically have boot times of ~20 seconds.
I know it's capable of booting fast, but the current cRIO platform shows they did very little to optimize the process. The dual-FPGA-image approach makes it seem like that, at least.
LV compile/download is very different:
-LV does the compiling. There was a bug in LV 2012 (used for FRC 2013) which caused the compile times to be horribly slow. I'm very aware that this is not an FRC-specific bug, but the fact that it made it through FRC beta and LabVIEW general testing shows they have no metrics or tests for times, and don't regression test for time increases, further showing that they don't have timing requirements.
-LV also does the downloading, but requires that software on the target be fully initialized or the target will crash and reboot. I am not entirely sure why this is, but it's really annoying.
-There are some cases where LV requires the 'no-app' switch. This has gone down recently, but it was really bad for me in 2012 and 2011. I also managed to get the cRIO in an indeterminate state while downloading code at a competition in 2012, and had to re-image the controller in the pit because of it (it should never be possible to lock it up so that you can't just download again and fix it).
My understanding is that both Ethernet and USB are supported on roboRio for downloading code. So it's reasonable that FTP could be used still.
Edit: My time estimates are for LV, include all user code init (fully ready to enable), and radio link.
Thad House
14-08-2013, 15:29
I know it's capable of booting fast, but the current cRIO platform shows they did very little to optimize the process. The dual-FPGA-image approach makes it seem like that, at least.
LV compile/download is very different:
-LV does the compiling. There was a bug in LV 2012 (used for FRC 2013) which caused the compile times to be horribly slow. I'm very aware that this is not an FRC-specific bug, but the fact that it made it through FRC beta and LabVIEW general testing shows they have no metrics or tests for times, and don't regression test for time increases, further showing that they don't have timing requirements.
-LV also does the downloading, but requires that software on the target be fully initialized or the target will crash and reboot. I am not entirely sure why this is, but it's really annoying.
-There are some cases where LV requires the 'no-app' switch. This has gone down recently, but it was really bad for me in 2012 and 2011. I also managed to get the cRIO in an indeterminate state while downloading code at a competition in 2012, and had to re-image the controller in the pit because of it (it should never be possible to lock it up so that you can't just download again and fix it).
My understanding is that both Ethernet and USB are supported on roboRio for downloading code. So it's reasonable that FTP could be used still.
Actually, if you look at the white paper on the RT Linux, it says that by default it uses WebDAV instead of FTP, which should be faster.
Also, if NI cannot get LabVIEW deployment sped up, they should teach the teams how to manually deploy the executable. We did this in both 2012 and 2013 and never had a problem, and it always took less than 5 seconds to download new code to the robot. I do not know why LabVIEW does not do this by default like the other two languages, but it should. Compiling was another story, but that should be fixed.
Actually, since we switched from LV 8.6 to LV2011/LV2012, the actual downloading is quite fast. However, we have to wait for the robot to fully boot up (and netcomm/user code to fully initialize), and it takes a while to close everything on the target before downloading, which often fails (requiring us to reboot and try again). In addition, we see a lot of 'Access Denied' errors when LV does funny things (like LV will disconnect but the cRIO will still see it connected), and rebooting to get rid of them and download is also really annoying. So if they allowed it to download whenever VxWorks was up, and we didn't have to fail and reboot so often, it would be fine (for the actual download, not including the compile).
DonRotolo
14-08-2013, 20:26
The truth of the matter is that we simply don't need that much processing horsepower.
Wasn't it Bill Gates who said something like 'we simply don't need more than 640k RAM'? How many CPU cycles is too many?
I'm disappointed by the price point. To me, a FIRST controller should be priced such that each team can receive a free one each year, as we did with the IFI system. I know I'm not the only one that feels 1 free donated one, and then needing to buy one each subsequent year is NOT a good solution. This is partly an artifact of using so much excess processing power.
The larger artifact was the agreement between IFI and FIRST, which was expensive and unsustainable.
I would like to make a request for GEN 2 of our new 2014 controller.
Our 2015 controller? 2014 will look suspiciously like a cRIO... :p
Ian Curtis
14-08-2013, 21:06
Wasn't it Bill Gates who said something like 'we simply don't need more than 640k RAM'? How many CPU cycles is too many?
He did not. (http://www.computerworld.com/s/article/9101699/The_640K_quote_won_t_go_away_but_did_Gates_really_say_it_) Or at least, he says he didn't and there is no evidence to suggest otherwise.
Greg McKaskle
14-08-2013, 22:29
I suppose I can respond to a few of the questions and comments.
The RFP description of boot time is in section 6, point 8. Basically, booted and connected in less than 40 seconds, but requests that it be minimized.
Yes, the performance is measured. Yes, people care about it. But it is hard to have official times when major features are missing. It isn't like you can simply buy one of these and resell it.
As for FTP, it can be enabled, but by default more secure protocols are used instead.
I've already discussed the issues with deployment. A number of performance issues were corrected, tests improved, and now that 2013 SW from NI is officially released, we can verify numbers all over again.
I'd try to give you numbers for LV 2013, but the cRIO that I have in my possession resulted from a team swap and it apparently needs to be cleaned. Right now it looks like an Xmas tree ornament when I plug it in. Swarftastic.
Greg McKaskle
wilsonmw04
15-08-2013, 08:20
I suppose I can respond to a few of the questions and comments.
The RFP description of boot time is in section 6, point 8. Basically, booted and connected in less than 40 seconds, but requests that it be minimized.
Yes, the performance is measured. Yes, people care about it. But it is hard to have official times when major features are missing. It isn't like you can simply buy one of these and resell it.
As for FTP, it can be enabled, but by default more secure protocols are used instead.
I've already discussed the issues with deployment. A number of performance issues were corrected, tests improved, and now that 2013 SW from NI is officially released, we can verify numbers all over again.
I'd try to give you numbers for LV 2013, but the cRIO that I have in my possession resulted from a team swap and it apparently needs to be cleaned. Right now it looks like a XMas tree ornament when I plug it in. Swarftastic.
Greg McKaskle
Thanks Greg for keeping up with this thread. It's nice to get info from the horse's mouth so to speak. Keep up the good work!
However, we have to wait for the robot to fully boot up (and netcomm/user code to fully initialize), and it takes a while to close everything on the target before downloading, and often fails there (requiring us to reboot and try again). In addition, we see a lot of 'Access Denied' errors when LV does funny things (like LV will disconnect but the cRio will still see it connected), and rebooting to get rid of them and download is also really annoying. So if they allowed it to download when VxWorks was up, and we didn't have to fail and reboot so often, it would be fine (for the actual download, not including the compile).
You probably already know this, but we've found it really helpful to disconnect and connect to the cRIO in the project explorer before we download code. It seems to fix the silly errors that require a reboot.
I suppose I can respond to a few of the questions and comments.
The RFP description of boot time is in section 6, point 8. Basically, booted and connected in less than 40 seconds, but requests that it be minimized.
Yes the performance is measured. Yes people care about it. But it is hard to have official times when major features are missing. It isn't like you can simply buy one of these and resell it.
As for ftp, it can be enabled, but by default more secure protocols are used instead.
I've already discussed the issues with deployment. A number of performance issues were corrected, tests improved, and now that 2013 SW from NI is officially released, we can verify numbers all over again.
I'd try to give you numbers for LV 2013, but the cRIO that I have in my possession resulted from a team swap and it apparently needs to be cleaned. Right now it looks like a XMas tree ornament when I plug it in. Swarftastic.
Greg McKaskle
I just don't understand why FIRST is willing to put up with such slow boot times. 40s is an eternity. Most Windows computers boot up in about that much time, and they've got a lot more to do.
My real world experience with the old IFI PIC18F-based RC was that from power application to radio-link established and ready to perform useful work was approximately 3s on average.
Heck, some Windows 8 machines have gotten down into the 10s territory.
I'm sure I'm not the only one that remembers being able to reach down, flip the breaker on my robot, and drive it by the time I could get back to the controls. I don't understand why this isn't being made a priority requirement on a custom-built system.
Greg McKaskle
15-08-2013, 11:35
I think it is worthwhile distinguishing between a requirement for a minimally acceptable system and a goal or a metric by which proposals will be judged.
I think everyone wants it to boot faster. But most wifi systems I'm familiar with take 20 or 30 seconds to boot -- smartphones, routers, computers, etc. The more powerful or flexible the system, the longer it takes. Meanwhile, my first Nokia phone booted in a few seconds. I understand why the boot times of the phones differ, and the same technical reasons apply to the FRC controllers.
Greg McKaskle
I think it is worthwhile distinguishing between a requirement for a minimally acceptable system and a goal or a metric by which proposals will be judged.
I think everyone wants it to boot faster. But most wifi systems I'm familiar with take 20 or 30 seconds to boot -- smartphones, routers, computers, etc. The more powerful or flexible the system, the longer it takes. Meanwhile, my first Nokia phone booted in a few seconds. I understand why the boot times of the phones differ, and the same technical reasons apply to the FRC controllers.
Greg McKaskle
See my earlier comments about the power being overkill for the application.
Also, if WiFi can't do better than 20 or 30s boot times, which I agree seems to be pretty normal among WiFi devices, then maybe WiFi is the wrong technology.
In my opinion, there are two ways the controller can work. We could have a simple, cheap controller based off a microcontroller (under 50MHz), just like IFI. This could only be programmed in C (or maybe LV), but it probably couldn't run Java. It wouldn't have an operating system, or an FPGA, and encoders/high speed counters would work off of interrupts. We would use a radio like IFI's and we wouldn't be sending the camera image to the driver station. Image processing would be done with a CMUcam through a serial connection, and the controller would cost <$200.
OR, we could go with what NI has given us, a system that's really advanced, really cool, and used in the real world. This system will run an RT OS, like VxWorks, or NI's real-time Linux thing (that's really awesome). This controller has an FPGA, more I/O (like USB, Ethernet, and CAN), but costs more. The dual-core ARM Cortex-A9 SoC with FPGA and 256 MB RAM is overkill for most teams, but I expect to see some really cool vision/Kinect applications done on the robot. The problem is that this solution is significantly more difficult to implement. NI has only so much money and so many people to make this happen, so while certain distros of embedded Linux can boot in <10 seconds, it's not going to happen for us.
Many people say that this trade off is not worth it, but would you really like to go back to the time when only really good teams could use PID loops, or when you had to use look up tables for trig functions, or you needed Kevin Watson's awesome code to make a great robot program? (remember things like this (http://www.chiefdelphi.com/media/papers/1575)?)
In 05, I could not name a single team that could cap the vision tetra more than 10% of the time. If we had the same challenge again, teams could do it.
As for the compile times, most of the actual compiling/downloading isn't really that bad (except for sometimes in LV, when the no-app thing happens); it's the restarting of the controller. If you want to speed up development, use something that reads constants out of a text file stored on the robot (see the 2013 Cheesy Poofs' code for inspiration).
Also, it's pretty spectacular how easy to use NI's current controller is, and the new one should be the same way. I don't know of any other platform with a dual-core processor and an FPGA that's easy for an inexperienced programmer to use. FPGAs and embedded systems that run VxWorks are usually way beyond what a kid in high school can program. We also get support from other teams and people like Greg McKaskle to help us work out our problems.
protoserge
15-08-2013, 12:50
Is it really that big of a deal that it takes 40 seconds (maximum) to boot? NI has until 2015 to make it faster. How is this system "overpowered"? By leveraging a platform they [NI] are implementing in the MyRIO and the Zynq-powered cRIO, the cost has been decreased from the previous FIRST cRIO and we end up with more capability for the 2015-2020 seasons.
This is 2013. We are doing more with these robots than simply converting joystick inputs into PWM outputs for motor controllers. Teams are integrating computer vision, obstacle recognition, and performing on-the-fly adjustments to their robot control system. If NI allows us even more capability and opens up the FPGA for programming, we could see immensely capable systems integrated into this new footprint without the need for coprocessors.
Only time will tell.
Except roboRIO isn't used in the real world. It's a custom-built solution just for FRC. That argument (kind of) worked with cRIO.
One of the biggest things about the real world is engineering within constraints. That used to be a part of the control system. Back in 2003, we only had 26 bytes of variable space. You had to be creative. I feel like cRIO and roboRIO as FRC control systems give too much power. I agree we needed more than the old IFI system was capable of, I just think cRIO was too big a leap, and roboRIO is that kind of leap again.
C is still the dominant language used in embedded applications, so I don't see that as a limit.
I honestly don't believe that teams being better today has much at all to do with the control system.
WPILib has had a profound impact on making it easier to program an FRC robot, and the sheer quantity of team growth means there are more smart people involved.
I would estimate though, that fewer than 1% of teams competing in 2013 did anything (except stream video to the DS) with the control system that wasn't possible in 2005. And those 1%? They're the ones with the resources to make maximal use of whatever system they're given.
A new control system should target rookie teams with simplicity, while being powerful enough that veterans can do some really cool stuff. It's a delicate balance to strike. I just feel that cRIO and to a greater extent roboRIO err too much on the side of raw power.
That said? I think roboRIO is a dramatic improvement to nearly all of cRIO's shortcomings as an FRC control system. (Footprint, weight, design, etc. It's just better).
AdamHeard
15-08-2013, 13:07
In my opinion, there are two ways the controller can work. We could have a simple, cheap controller based off a microcontroller (under 50MHz), just like IFI. This could only be programmed in C (or maybe LV), but it probably couldn't run Java. It wouldn't have an operating system, or an FPGA, and encoders/high speed counters would work off of interrupts. We would use a radio like IFI's and we wouldn't be sending the camera image to the driver station. Image processing would be done with a CMUcam through a serial connection, and the controller would cost <$200.
OR, we could go with what NI has given us, a system that's really advanced, really cool, and used in the real world. This system will run an RT OS, like VxWorks, or NI's real-time Linux thing (that's really awesome). This controller has an FPGA, more I/O (like USB, Ethernet, and CAN), but costs more. The dual-core ARM Cortex-A9 SoC with FPGA and 256 MB RAM is overkill for most teams, but I expect to see some really cool vision/Kinect applications done on the robot. The problem is that this solution is significantly more difficult to implement. NI has only so much money and so many people to make this happen, so while certain distros of embedded Linux can boot in <10 seconds, it's not going to happen for us.
Many people say that this trade off is not worth it, but would you really like to go back to the time when only really good teams could use PID loops, or when you had to use look up tables for trig functions, or you needed Kevin Watson's awesome code to make a great robot program? (remember things like this (http://www.chiefdelphi.com/media/papers/1575)?)
In 05, I could not name a single team that could cap the vision tetra more than 10% of the time. If we had the same challenge again, teams could do it.
As for the compile times, most of the actual compiling/downloading isn't really that bad (except for sometimes in LV, when the no-app thing happens); it's the restarting of the controller. If you want to speed up development, use something that reads constants out of a text file stored on the robot (see the 2013 Cheesy Poofs' code for inspiration).
Also, it's pretty spectacular how easy to use NI's current controller is, and the new one should be the same way. I don't know of any other platform with a dual-core processor and an FPGA that's easy for an inexperienced programmer to use. FPGAs and embedded systems that run VxWorks are usually way beyond what a kid in high school can program. We also get support from other teams and people like Greg McKaskle to help us work out our problems.
Straw man all the way!
Except roboRIO isn't used in the real world. It's a custom-built solution just for FRC. That argument (kind of) worked with cRIO.
This is not a fair argument. While the packaging for roboRIO is customized for student robotics (not just FIRST) the platform is real-world. The software stack for roboRIO is not designed from the ground up with FIRST in mind, so you can't expect the trade-offs to select for things like 3 second boot times instead of fail-safe network connectivity (for example). While the 2015 control system team will work to optimize things further, there simply aren't enough of us to start over from scratch, optimizing to the utmost for FRC care-abouts. We have to start from the platform we're leveraging.
The only reason this controller is possible is because of how highly leveraged it is. NI invested 60 man-years of effort into building the NI Linux Real-Time platform. As important as FIRST is to us, there's no way I can see that anyone could justify such an investment or anything close solely for a donated / deep-discount product.
Except roboRIO isn't used in the real world.
If you want to be technical, roboRIO is not used in the real world. However, LabVIEW, C++, and Java all are used in the real world, as are real-time Linux, ARM processors, FPGAs, and NI's hardware.
See my earlier comments about the power being overkill for the application.
Also, if WiFi can't do better than 20 or 30s boot times, which I agree seems to be pretty normal among WiFi devices, then maybe WiFi is the wrong technology.
Could you suggest another wireless technology that could support the bandwidth, field management tasks, documented standards and security requirements while remaining off the shelf and accessible to teams? I honestly can't think of one.
I remember the 900mhz radios from the IFI days. I'll take wifi over that any day of the week.
Well, Vexnet can connect fast, just saying. And the Vexnet dongle is actually just a USB WiFi dongle repackaged; if you plug it into a computer, it may show up as a generic-brand WiFi adapter. I don't have hard numbers, but the user manual suggests 'It usually takes 5 to 10 seconds to successfully establish a link'. I can measure one later if necessary, but this seems about right.
Also, we do a lot of initial development on tether (when we're doing hardcore logic work, not the later stages which are almost all calibration), and booting fast without the radio should be considered too.
I don't believe at all that you can't boot a controller capable of all of FRC's needs roughly as fast as the Vex system, with comparable times for tether and radio operation. You might have to better define what FRC's needs actually are.
I remember the 900mhz radios from the IFI days. I'll take wifi over that any day of the week.
An interesting view to be sure:
I'm unsure why you would have it though...
Those 900MHz radios (Rebranded EWave Inc. Screamer422's (http://www.electrowave.com/products/screamer422.shtml)) were ready to go nearly instantaneously, and capable of 9.6kbps. a little over 1KB/s.
4 years of FRC experience with 2.4/5GHz wifi a/g/n has taught me that it's an unreliable standard. Delays to matches are common. Sometimes robots refuse to connect, and they often drop connection mid-match. PLUS, being such a widely accepted standard, with a large range of compatible devices, it opens the door to attacks such as what happened at Einstein 2012. They also experience issues because we're using consumer-grade electronics that were never designed for the sort of dynamic loading environment an FRC bot creates. We're using routers that were intended to sit under people's desks at home and never move.
6 years of FRC experience with the 900MHz serial radio modems taught me that they are essentially bulletproof. I don't think I ever saw one fail in any fashion due to being roughhoused aboard an FRC bot, and I don't remember ever having a radio-related match delay. Additionally, the 900MHz band is several orders of magnitude quieter in terms of noise from other consumer electronics, AND has longer range. It's also much tougher to attempt various kinds of attacks on the 900MHz band, as the radios are far less widespread.
I certainly won't attempt to say you could shove streaming video over 9.6kbps. I know you can't. There are definitely other technologies that would be better suited than wifi, though.
I would estimate the bandwidth needed for a typical FRC bot carrying streaming video to be somewhere in the range of 2Mbps, allowing for a compressed 320x240 video stream, with some overhead for control comms.
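For scale, here is a back-of-envelope calculation of what a 320x240 stream actually needs. The frame rate and compression ratio are assumptions for illustration, not measured values:

```python
# Rough bandwidth estimate for a 320x240 robot camera stream.
width, height, bits_per_pixel, fps = 320, 240, 24, 15

raw_bps = width * height * bits_per_pixel * fps   # uncompressed stream
print(raw_bps / 1e6)                              # ~27.6 Mbps

# To fit in a ~2 Mbps budget alongside control traffic, the stream
# must be compressed; roughly 15:1 (e.g., MJPEG) gets it under 2 Mbps.
compressed_bps = raw_bps / 15
print(compressed_bps / 1e6)                       # ~1.84 Mbps
```

A raw 320x240 stream runs well over 20 Mbps, so any 2 Mbps figure implicitly assumes compression.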
evanperryg
15-08-2013, 16:00
The Vex guys can boot a Cortex and user code and establish radio link in ~15s. The PIC-based IFI would boot and establish radio link in 5s. Why are we sitting around a minute? And getting worse every year?
Very true. The specs on this thing are pretty impressive, yet (at this point) it is still pathetically slow.
The truth of the matter is that we simply don't need that much processing horsepower.
I know I'm not the only one that feels 1 free donated one, and then needing to buy one each subsequent year is NOT a good solution.
You seem to forget that this thing isn't specifically designed for FRC. It was designed for student robotics programs. Some of those programs may require more processing performance than we need.
Why would you need to buy a new cRio every year? That's silly and a waste of money. My team owns 2 cRios, enough for the competition bot and a practice bot. Everything else can probably just use a vex signal splitter or an arduino.
I don't believe at all that you can't boot a controller capable of all of FRC's needs roughly as fast as the Vex system, with comparable times for tether and radio operation. You might have to better define what FRC's needs actually are.
My team knows, dead certain, that you can run an FRC bot as effectively off Vex as off of the cRio. In fact, we have our entire 2012 bot wired into a cortex and it works perfectly.
All in all, I am really excited for the new controller. I love how small it is, in particular. The cRio is really bulky for what we use it for, and this opens up more space for other electrical components. However, if the bootup times are still 40s at release I will be very disappointed. Seeing how great the specs are, I suspect that all they need to do is refine the code. Also, I assume it is too far into development at this point, but 4 Relay outputs doesn't seem like enough.
My team knows, dead certain, that you can run an FRC bot as effectively off Vex as off of the cRio. In fact, we have our entire 2012 bot wired into a cortex and it works perfectly.
While I do agree that the vex cortex is probably closer to an optimal controller than the cRIO was, it is not a replacement for the cRIO/roboRIO.
The vex controller can't do image processing, it can't be programmed in LV or java, and it doesn't have an ethernet port or a CAN port. It also doesn't have an FPGA. While it may be a substitute for 99% of FRC robot controllers, there are definitely teams who do need the extra stuff.
My team knows, dead certain, that you can run an FRC bot as effectively off Vex as off of the cRio. In fact, we have our entire 2012 bot wired into a cortex and it works perfectly.
Which, really, is the crux of my point. The cRIO costs roughly double what a Vex Cortex costs. For most teams, that's a crappy value, since the vex cortex could do the job adequately for 99% of teams.
With only 2 cRIOs, dedicated to a current competition bot and a practice bot, you have nothing to run past robots on for demos. Most teams like to keep at least one, and ideally all of their past robots in operational condition.
cadandcookies
15-08-2013, 16:46
Most teams like to keep at least one, and ideally all of their past robots in operational condition.
"Would like to" doesn't mean can. My team has been running for 8 years with plenty of resources, but we still don't keep our robots year after year, mostly due to space concerns. I know that there are plenty of teams in our area that have even less space that we do, for whom keeping their old robots isn't even considered.
To be honest, I don't really see the point in complaining about the cost of the controller-- it is what it is, and they've already said they're aiming to reduce the cost as much as possible long-term. I suppose one may feel free to be discontented with the future control system (for a variety of reasons), but it's a rather clear step up from the cRIO in just about every way possible.
In terms of boot times, yes it would be very nice to bring them down in the 10-20s range, but I have a feeling there's more to it than just "FIRST is willing to put up with it." Keeping the price low (even when people are claiming it's already too high) probably factors in, as well as parts that we probably aren't considering.
Personally, I think a faster boot time would be an excellent improvement in terms of how fast match cycles go, and as a relatively well-off team, we'd probably be okay with an increase in the price for that, but there's a far larger picture than just my team or even all the teams on CD, which is what NI and FIRST have to consider.
My overall opinion is that every control system has its quirks that we'll be dealing with for quite a while, and I'm happy that those are getting out now so that we can consider them well before we actually have to use it.
Chris is me
15-08-2013, 17:16
While I do agree that the vex cortex is probably closer to an optimal controller than the cRIO was, it is not a replacement for the cRIO/roboRIO.
The vex controller can't do image processing, it can't be programmed in LV or java, and it doesn't have an ethernet port or a CAN port. It also doesn't have an FPGA. While it may be a substitute for 99% of FRC robot controllers, there are definitely teams who do need the extra stuff.
If the choice were, hypothetically, between "have image processing offboard" and "take a really long time to boot and connect", I think nearly everyone would be in favor of the former option. I'm not saying the Cortex as is would be the perfect FRC controller, but I think making fast processing an optional feature you add on via an offboard PC (something teams already do) would allow the "base" control system to be simpler, faster, and more robust.
The frustrating thing to me about adapting existing technology to FRC is that FRC has some specific requirements that are unusually important and that aren't present in a lot of commercial applications. In the (admittedly very few) conversations with NI people I have had in the past, it seems they consistently underestimate the importance of quick boot time. It just doesn't seem to be a high priority in the controller design or implementation. Perhaps FIRST should have emphasized the importance of speedy boot time and quick field connections when doing their RFP for the control system.
All of the cool software things you can do with a more powerful controller are automatically far less important in my mind than making sure that the robots connect to the field in a reasonable time frame and that they never disconnect. Just my relatively uninformed two cents.
(None of this post is intended to discredit the hard work NI and its employees have put into this program. It's just some thoughts - I apologize if I step on some toes)
Which, really, is the crux of my point. The cRIO costs roughly double what a Vex Cortex costs. For most teams, that's a crappy value, since the vex cortex could do the job adequately for 99% of teams
The fact that it's only half the price blows me away. I would expect it to be way less given the capabilities. Talk about a crappy value. Par for the course I guess.
Greg McKaskle
15-08-2013, 20:15
I'm not positive who the NI folks were, but my suspicion is that they were volunteering and taking part in the same awesome event as you were. If they could snap their fingers and shave seconds off of boot time, they likely would. But they are probably trained engineers acting in a volunteer role.
On the boot time topic, I just timed it without user code, and it is under 30 seconds -- on a cRIO, over ethernet. If you think about the topology, the cRIO has nothing to do with how the bridge/radio connects to the field. To the cRIO, it is a cable. NI didn't make the radio or write the firmware, and we have little influence over the selection criteria except that it needs to bridge ethernet. I'm not trying to pass the buck here, just pointing out that the cRIO is just one ingredient in the soup.
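Timing this yourself is straightforward. A rough sketch of one way to measure power-on-to-reachable time is to poll a TCP port on the controller until a connection succeeds; the host and port are placeholders, and this measures network reachability only, not full netcomm/user-code readiness:

```python
# Sketch: measure how long until a controller accepts a TCP connection.
# Host/port are placeholders -- point them at your own target.
import socket
import time

def time_until_reachable(host, port, timeout=120.0, interval=0.25):
    """Return seconds elapsed until host:port accepts a TCP connection."""
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a connection; a short per-attempt timeout keeps
            # the polling loop responsive.
            with socket.create_connection((host, port), timeout=interval):
                return time.monotonic() - start
        except OSError:
            time.sleep(interval)
    raise TimeoutError(f"{host}:{port} not reachable within {timeout}s")
```

Start the script, apply robot power, and read off the elapsed seconds.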
The RoboRIO has additional options for radio connectivity, and may I point out that the myRIO even includes an integrated radio option. As mentioned in the Q/A, radio selection is still in progress and that is because it has a big impact -- on boot times, throughput, security, price, etc. The control system team cares deeply about team experience and one should not assume that this opportunity to improve things will be wasted. One of the things I look forward to after the system is available is to publish a development blog that details the other possibilities that just weren't meant to happen due to budget, time, space, weight, etc.
On the price and capabilities topic, NI responded to the RFP with what we feel is a very exciting product to use on a robot. I will not knock an IFI controller, or the Sasquatch. Each has its strengths. In the non-Hollywood world, it is not possible for all 2,600 teams to agree on the ideal controller characteristics. The laws of physics and economics apply here and we don't all want the same experience. And we don't have to agree either.
Alpha testing starts a few billion milliseconds from now and that means Joe and I need to get back to work.
Greg McKaskle
MrRoboSteve
15-08-2013, 20:23
4 years of FRC experience with 2.4/5GHz wifi a/g/n has taught me that it's an unreliable standard. Delays to matches are common. Sometimes robots refuse to connect, and they often drop connection mid-match. PLUS, being such a widely accepted standard, with a large range of compatible devices, it opens the door to attacks such as what happened at Einstein 2012. They also experience issues because we're using consumer-grade electronics that were never designed for the sort of dynamic loading environment an FRC bot creates. We're using routers that were intended to sit under people's desks at home and never move.
What you're saying was true for previous years, but the work that went into improving things for 2013 really seemed to pay off.
At the two events I CSAed at, there were no issues in qualification or elimination matches with radio root causes. We routinely ran ahead of schedule. And the third regional (week 2) I mentored at also ran ahead of schedule.
DampRobot
15-08-2013, 21:42
As a mechanical guy, I hope my comments aren't misplaced. I guess that I'm more of a "user" than most people on this thread, which for a normal product would be considered "developers."
First, I really have to thank the NI team. They've made my job a ton easier. The cRIO/DSC setup was honestly quite cumbersome and took up a lot of space. Not just electronics board space, but also vertical space. Their CAD models were very complex, and added a ton of time to our CAD model rebuilds. So, replacing the two larger components with a smaller, flatter, simpler controller with logical mounting holes is awesome all around. I always wondered why the DSC and cRIO needed to be separate, and this is a great answer to that question.
If the radio and its power adapter could be integrated into the roboRIO, that would be a huge plus too.
Boot times aren't a huge deal for me. A few times when we're racing to get to queuing, and we have to re-deploy code, that extra time makes me sweat. But for most of the time, I hardly notice it. Maybe it's a bigger deal for the programmers, who have to develop with it, but I always assumed that they could just use the time when code was compiling or the robot was booting to check CD or something.
I don't love the cost, but see the roboRIO as the same as expensive mechanical components. You can easily drop as much as a roboRIO costs just on decent drive gearboxes every season, and hardly anyone complains about those costs. It shouldn't be a huge deal to buy a new roboRIO every season, if you want to keep old robots running that is.
The roboRIO is overpowered (in my humble opinion). However, teams will always end up using all of an available resource when they think it could possibly benefit them (CPU speed, memory, robot weight, height, motor number, etc.). Lots of teams will continue to see 100% CPU usage with the roboRIO just as they did with the cRIO. Also, a ton of teams seem to believe that complex image processing is necessary on every robot, when in reality, 95% of teams don't or shouldn't focus on vision processing.
Good job, NI. Lower costs and faster boot times are always welcome, but in my mind, given how good of a product this is for me, I don't really care.
Peter Johnson
15-08-2013, 22:07
Boot times aren't a huge deal for me. A few times when we're racing to get to queuing, and we have to re-deploy code, that extra time makes me sweat. But for most of the time, I hardly notice it. Maybe it's a bigger deal for the programmers, who have to develop with it, but I always assumed that they could just use the time when code was comping or the robot was booting to check CD or something.
I'm not sure of how LabView handles code reloads/deploys on the new OS, but at least for C++ and Java the Linux platform should make these kind of "soft" boots significantly faster (to the point of making boot times nearly moot). On VxWorks it was necessary (in almost all cases) to completely reboot the cRIO to reload user code even for C++ and Java because your user program was running as a kernel module. Assuming that robot programs run as normal user-mode (but root) programs on Linux (and I sure hope that's the case!) a "soft" reboot for C++ and Java should just consist of killing and restarting the user process (milliseconds) rather than a full OS reboot (20+ seconds).
In other words the upload/test cycle should be extremely fast (at least for C++ and Java) on Athena, assuming you don't power cycle the robot.
For the Athena Python port, I plan to take advantage of this fact to instantly reload the user program as soon as a new Python file is saved/uploaded (note: due to Python implementation memory leaks this requires restarting the entire interpreter, preventing this from working on the cRIO port, but will not be a problem on Linux).
DonRotolo
15-08-2013, 22:09
An interesting.. <snip> ...with some overhead for control comms.
OK, so which technology are you suggesting be used instead of "unreliable" 802.11?
Which, really, is the crux of my point. The cRIO costs roughly double what a Vex Cortex costs. For most teams, that's a crappy value, since the vex cortex could do the job adequately for 99% of teams.
Team 1676 is very proud to be in the 1%. A Cortex just would not let us run our robot the way we do. Unless you think maybe we should dumb it all down for the lowest common denominator?
The fact that it's only half the price blows me away. I would expect it to be way less given the capabilities. Talk about a crappy value. Par for the course I guess.
Requoted for truth. The IFI 'real price' wasn't cheap, either.
Clinton Bolinger
16-08-2013, 00:03
I don't see why the communication has to be one (WiFi) OR the other (900 MHz serial). With the technology available today FIRST should be able to implement multiple communications on the same system.
High-priority tasks like the joystick outputs to the robot, drive commands, auton/teleop modes, e-stops, etc. could communicate over a very fast-booting wireless protocol similar to the old control systems. The amount of data is very minimal and wouldn't need a lot of bandwidth (Synapse or XBee). I have personally tested a Synapse device, with a simple 2-axis joystick and two PWM outputs, to drive a robot with a boot-up and time-to-drive of less than a second.
Low-priority tasks and tasks that need a lot of bandwidth could still communicate over WiFi (if teams want this functionality). This information and data would not need to be encrypted because it doesn't necessarily control the robot. If a team's wireless radio fails in a match, teams can still move and drive their robot in an open-loop state.
The system could also be setup as follows:
{Robot}
| ^ |
v | v
Wifi Synapse
| ^
| |
| Commands
| ^
v |
Laptop -> Process Image
For all the computer folks/programmers: how many bits or bytes of data are really needed to send the important data to and from the robot:
-PWM - 20 Channels (160 bits)
-DIO - 26 Channels (26 bits)
-Relays - 4 dual-input channels (8 bits)
-Analog Input - 8 channels (96 bits)
-Analog Output - 2 channels (24 bits)
-Miscellaneous bits for Auto/Teleop/Enable/Status, etc. (22 bits)
Total = ~336 Bits = 42 Bytes
Seems like WiFi might be a bit overkill for the amount of data that needs to be sent from the drivers to the robot.
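The arithmetic in the tally above checks out, and is quick to verify:

```python
# Re-computing the per-channel bit budget listed above.
channels_bits = {
    "pwm":        20 * 8,    # 20 channels x 8 bits
    "dio":        26 * 1,    # 26 channels x 1 bit
    "relay":       4 * 2,    # 4 dual-input channels x 2 bits
    "analog_in":   8 * 12,   # 8 channels x 12 bits
    "analog_out":  2 * 12,   # 2 channels x 12 bits
    "misc":       22,        # mode/enable/status flags
}
total_bits = sum(channels_bits.values())
total_bytes = total_bits / 8
print(total_bits, total_bytes)  # 336 bits = 42.0 bytes
```

Even at a 50 Hz update rate, a 42-byte payload is under 20 kbps before framing overhead, which almost any low-rate radio link can carry.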
Personally, I believe that the long boot-up times come from the Wireless bridge/router that we use on the robots.
-Clinton-
AdamHeard
16-08-2013, 00:29
I really agree with Clinton on this one.
To give FIRST the benefit of the doubt, we have no idea they aren't currently exploring that option.
It makes a huge amount of sense to have a more robust method for pure control, and use the wifi for camera, etc...
AllenGregoryIV
16-08-2013, 00:39
I'm actually really surprised I haven't seen anyone push for a dual radio idea before. That just makes sense.
Dashboard traffic and anything else noncritical can all go over WiFi. It also allows us to continue wirelessly programming the robot, which is a big advantage over having to tether a serial cable like the old controllers.
techhelpbb
16-08-2013, 01:02
I'm actually really surprised I haven't seen anyone push for a dual radio idea before. That just makes sense.
Dashboard traffic and anything else noncritical can all go over WiFi. It also allows us to continue wirelessly programming the robot, which is a big advantage over having to tether a serial cable like the old controllers.
A dual radio option was offered as part of the RFQ proposal we sent FIRST.
It was proposed basically like what Clinton outlined above.
In fact I even built the radio module (Turtle....slow, steady and hardened).
I even made it bridge in a way that should have worked with the field.
It used COTS radio components offered by a variety of vendors to provide a wide selection of frequency and performance.
We had no choice because of the short time between the announcement of the RFQ and the deadlines.
The RFQ was not accepted; however, it was reviewed.
I will not knock an IFI controller, or the Sasquatch.
I notice that the proposed solution we offered seems to be overlooked.
Not to worry I took what I built and dropped the FIRST features.
If anyone is really interested I can post the proposal we made.
I have to say, however, that NI has the contract; come what may, until FIRST sends out another RFQ the ball is in their court.
AdamHeard
16-08-2013, 01:24
I think a few people in this thread need to be a bit nicer to the NI people, or they might stop communicating on chief (which is awesome that they do).
I'm not saying I agree or disagree with them in any way, but let's not totally ruin the fact that they are willing to come on chief and interact directly with the community.
techhelpbb
16-08-2013, 01:31
I think a few people in this thread need to be a bit nicer to the NI people, or they might stop communicating on chief (which is awesome that they do).
I'm not saying I agree or disagree with them in any way, but let's not totally ruin the fact that they are willing to come on chief and interact directly with the community.
I agree, whatever the basic concerns here are about what this is versus what it could have been.
There is no longer a choice in the matter.
Choices were offered and choices were made.
On the one hand I do not think it is realistic to expect NI to comply with some of the changes.
On the other hand I would be disappointed in NI if merely stating some concerns was enough to make them go silent in this forum.
However I do not think that is NI's style.
I admit to being often quite rough on them, and they are still here.
Whoever won that RFQ needed to be prepared to face the public.
I even put that in the proposal.
What the community thinks does matter.
If what the community thinks does not matter then there's something wrong.
There are times when I think what the community thinks is ignored.
Let us all strive to deal with that responsibly.
I have been avoiding this topic specifically because, with the proposal I worked on not having been accepted, this may sound like sour grapes.
The only reason I am posting in this topic now is because someone brought up the dual radios.
Give credit where it is due...be that to NI...or whoever else goes the mile to do the hard things.
Further know that there are other things that no other control system offered that I helped propose.
So there are ideas coming into focus now that were just in grasp during the process.
If someone wants to see the proposal from U.S. Cybernetical I will post it (I have approval from all parties).
If not oh well. I did what I felt was right with what I had. That is all anyone can do.
I even promised I would help Sasquatch out if their Kickstarter did not fund.
I've ordered the unit from them and patiently await its arrival.
This was the arrangement they preferred, and I am happy to accommodate it.
In the end what I got out of that proposal will end up being far more valuable than what it appears.
Akash Rastogi
16-08-2013, 02:06
snip
Brian I think Adam meant more of the harsh posts, possibly Racer's post for example, not yours exactly.
:]
techhelpbb
16-08-2013, 02:17
Brian I think Adam meant more of the harsh posts, possibly Racer's post for example, not yours exactly.
:]
I understand. My point does not change.
Community involvement should go with the territory for this RFQ.
Sometimes community involvement means you get the criticism too.
However I am not being critical of Adam.
Just making my intent transparent.
Greg McKaskle
16-08-2013, 07:24
Brian, by not not-knocking your proposal, I wasn't implicitly knocking it either.
I know of it, but I haven't read it and don't know what to call it. I was attempting to convey that any robot controller that makes it to market is likely to inspire kids. There's this Danish company that I'm pretty involved with that wasn't not-knocked either.
As for NI's presence on CD, I'm here in large part because I value skepticism and alternative ideas. I also believe that FUD fills an information void, and I'd prefer to be direct and open when possible.
CD also lets me observe and participate with kids working their way through issues. The CD village needs all sorts, and as long as the focus doesn't veer too far away from building future generations of leaders/engineers/artists, I'm willing to put up with quite a bit.
Greg McKaskle
I followed some of the links in this thread for RT Linux and LabVIEW. What changes to our programming in LabVIEW will we have to be concerned with? I see the words mutex, blocking/non-blocking, two schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues, or will LabVIEW take care of it? With the current single-core cRIO, our programming mentor teaches the new programmers LabVIEW basics and the importance of real time, and follows up with some lessons on state machines. After this the kids are let loose to get hands-on experience. If we have to deal with the complexities of multiple cores, this is going to require a lot more formal instruction on our mentors' part. A serious load for a first-time young programmer.
I followed some of the links in this thread for RT Linux and LabVIEW. What changes to our programming in LabVIEW will we have to be concerned with? I see the words mutex, blocking/non-blocking, two schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues, or will LabVIEW take care of it?
The LabVIEW programming experience is the same. You will not need to do anything special to deal with multiple cores. With LabVIEW's inherent parallelism, the multiple cores are utilized naturally any time you have parallel loops executing. As always, take care to avoid race conditions, but if you limit the use of global variables, that's usually pretty easy to avoid in LabVIEW.
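The same race-condition caution applies in any parallel environment, LabVIEW or otherwise. As a text-language illustration (Python here, purely for demonstration; nothing roboRIO-specific), a shared global updated from parallel loops needs its read-modify-write serialized:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # serialize the read-modify-write on the shared global
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, updates can be lost
```

LabVIEW's dataflow model gives you this serialization for free on wires; it's only shared state (globals, references) that reintroduces the hazard.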
The LabVIEW programming experience is the same. You will not need to do anything special to deal with multiple cores.
Excerpt from "Under the Hood of NI Linux Real-Time" (http://www.ni.com/white-paper/14626/#toc5):
It’s also important to note that performance degradation can occur in both time critical and system tasks on multicore systems running NI Linux Real-Time if serially dependent tasks are allowed to run in parallel across processor cores. This is because of the inefficiency introduced in communicating information between the serially dependent tasks running simultaneously on different processor cores. To avoid any such performance degradation, follow the LabVIEW Real-Time programming best practice of segregating time-critical code and system tasks to different processor cores. You can accomplish this by setting a processor core to only handle time-critical functions, and specify the processor core to be used by any Timed Loop or Timed Sequence structure as illustrated in Figure 4. You can learn more about the best practices in LabVIEW Real-Time for optimizing on multicore systems at Configuring Settings of a Timed Structure.
Is the above something that teams need to be aware of and take into account in their programming efforts?
Dave Flowerday
16-08-2013, 09:28
I think a few people in this thread need to be a bit nicer to the NI people
The NI representatives are hardly above the fray:
The fact that it's only half the price blows me away. I would expect it to be way less given the capabilities. Talk about a crappy value. Par for the course I guess.
I find it shockingly poor form for someone representing National Instruments (a major FIRST sponsor) to publicly speak this way about another FIRST sponsor and STEM supporter. If I made a statement like this in a public forum while representing my employer about some other company or competitor, I have no doubt I would have to face consequences from PR or even HR.
Excerpt from "Under the Hood of NI Linux Real-Time" (http://www.ni.com/white-paper/14626/#toc5):
[snip]
Is the above something that teams need to be aware of and take into account in their programming efforts?
It may be something that an advanced team would want to pay attention to if they are trying to push the controller to its limits. However, given that a single core of the roboRIO is approximately 5x faster than the cRIO at basic tasks, a little inefficiency will likely go unnoticed by most teams.
Absolutely, I agree that the performance per unit cost value of a cRIO or roboRIO is orders of magnitude higher than for the Vex Cortex. I'm ALSO not suggesting that FRC use a Vex Cortex. It was simply an example of some of the other options.
In terms of performance, they're not even in the same ballpark, so yes, being 'just' double the cost is a good value, if you're going to use that performance. Otherwise, it's like buying a Bugatti Veyron and never taking it to a racetrack or Germany's autobahns. Buying performance you won't be using is frivolous.
Please don't misunderstand me. I'm not trying to take pot shots at NI. I'm a certified LabVIEW developer, and I work with NI equipment every day.
roboRIO is a huge improvement over cRIO as an FRC control system. No contest. I'm very excited to get my hands on it and see it in action. I'm just disappointed that it seems like a couple of the spots where quantum leaps could have been made fell a bit short.
BUT I'm also aware that much of the slow-boot problem with the cRIO-based control system we've had since '09 is NOT the boot time of the cRIO but rather the radios. They're still working out what we're going to be using for radios, so maybe I'll be pleasantly surprised.
@Don:
I don't know what 'alternative' I'm proposing. The FIRST community is collectively VERY smart, though. I've seen some neat 900 MHz Ethernet bridges capable of throughputs in the 2 Mbps range. I do know that 802.11 lacks the reliability I believe an FRC control system should have. Even my home 802.11 network, in a rural area with minimal interference on the 2.4 GHz band, frequently drops, hiccups, or does otherwise rude things to the communications. There has to be a better solution.
As to 1676 not being able to run their robot on a Cortex: that's cool, I wouldn't have guessed it. 1676, though, is definitely one of those top-tier teams that's good at making optimal use of the resources they're given. If 1676 had to choose between whatever functions it's doing that couldn't be achieved with a Cortex and booting to a driveable state in under 10 s, as Chris suggests, would you still want that extra performance?
I can say with confidence that 4343's 2013 robot is the first robot I've been involved with that couldn't have been done with a 2008 IFI system, and that's only because it streamed video back to the DS.
I DO like this suggestion of multiple comms channels, so that mission-critical comms (FMS control, joystick values, etc) could be transmitted on a low-bandwidth, extremely reliable, fast link-up channel, while the extras like streaming video ride on typical 802.11.
techhelpbb
16-08-2013, 09:51
It may be something that an advanced team would want to pay attention to if they are trying to push the controller to its limits. However, given that a single core of the roboRIO is approximately 5x faster than the cRIO at basic tasks, a little inefficiency will likely go unnoticed by most teams.
I am a bit confused about this in relation to what was implemented in the RoboRio.
I work in highly parallel environments daily (8,000+ Linux servers globally running extremely high-speed tasks; I will avoid the term 'real time' here, as it is often misunderstood as a metric).
I can see how the abstraction of LabVIEW could make the dual cores less apparent to the end user. Unless there's a way for the students to bind their process to a particular processor, I don't see any way they can easily deadlock themselves.
However, not all teams work in LabVIEW. If a team is using C or Java, can they create code that targets a specific processor? If so, they can create deadlocks, because they could spawn a 'serial process' split between the two cores and get stuck between the two processors.
In any kind of multiple processor environment with user ability to direct resources this sort of complication can arise. Automated resource allocation can either fix this or itself might magnify the issue.
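The classic guard against that kind of cross-core deadlock is a fixed global lock ordering. Sketched in Python (illustrative only, not any roboRIO API):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def task(name):
    # Every task acquires lock_a before lock_b. With a single global ordering,
    # no two tasks can each hold one lock while waiting on the other.
    with lock_a:
        with lock_b:
            finished.append(name)

workers = [threading.Thread(target=task, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
# all four tasks complete; a lock-ordering deadlock is impossible here
```

The rule is cheap to follow and independent of how many cores the tasks land on, which is why it's a common convention in multicore RTOS code.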
The control system we proposed to FIRST, for example, had Parallax Propellers (along with Atmel and ARM), and those chips have 8 'cogs' (cores). In that environment a student might create a set of interrelated tasks that must operate in a serial fashion, but because of the architecture they would be aware from the beginning that they've divided up the process. So, for example: if process B stalls in cog 2, perhaps they should debug process A in cog 1. The design goal with the multiple processors in the proposed environment was not to centrally distribute the resources at execution time. It was to provide finite deterministic resources as part of the initial design process so that the result had direct predictability. Anything that could not be performed within that timing constraint could then have added processing or be delegated to external circuitry (FPGA, currently a Xilinx Spartan 3, and conditioned I/O). Added processing was a low cost barrier because of the way the entire system was architected: hundreds of processors from any vendor could operate cooperatively until the robot power supply limits became an issue (yes, each processor does inherit additional processing cost as a result of this expansion capability, but it is a small cost considering the value of the capability).
For those that don't understand the techno-speak about what we proposed:
You could decide to use 4 cogs in a single controller such that each cog controls a single tire of the drive system.
You would know instantly which cog was tasked with what job and what to expect from it.
You could issue orders between these cogs something like this:
Cog_RightFront - Move forward at 10 RPM
Cog_LeftRear - Turn swerve 15 degrees
(I am not kidding about this sort of programming at all; I actually made it work. Robot Control Language (RCL) from General Robotics and Lego Logo are examples from previous decades of the language structure. The change here is the way it maps to physical processing directly. The cogs are each 20 MIPS; they have plenty of time to parse relatively natural language if that is what is really needed, and that can be further tokenized for a performance boost.)
Obviously in Linux you can spawn processes or threads. This is something I exploit daily. What level of granularity is being exposed to students? Further, what tools are being offered to the students to debug these circumstances if they are in fact able to control the distribution of resources? On Linux systems I have software I wrote that is able to monitor every process and its children and either report over secure channels to a central management infrastructure or perform forced local management of anything that goes bonkers. In this case the 'central management' is really the field and DS.
What level of granularity is being exposed to students?
Students programming in LabVIEW don't have to be exposed at all. If they choose to, they can use timed loops and specify core affinity for each loop.
In C++, students are exposed directly to the Linux process / thread controls you would expect from a typical system, with the addition of real-time scheduler priorities.
As for Java, I'm not sure how it's exposed.
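On a stock Linux system (which NI Linux Real-Time is based on), process-to-core binding is exposed through the standard scheduler interfaces. A minimal Python sketch of the idea (Linux-only stdlib calls; the actual FRC toolchains may expose this differently, so treat this as an illustration of the OS facility, not FRC API):

```python
import os

# Pick one core from those this process may run on and bind the process to it,
# mimicking the "reserve a core for time-critical work" pattern from the
# NI Linux Real-Time white paper. PID 0 means "the calling process".
available = os.sched_getaffinity(0)
core = min(available)
os.sched_setaffinity(0, {core})
```

C++ users get the same capability via `pthread_setaffinity_np` / `sched_setaffinity`, per thread rather than per process.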
techhelpbb
16-08-2013, 10:22
Students programming in LabVIEW don't have to be exposed at all. If they choose to, they can use timed loops and specify core affinity for each loop.
In C++, students are exposed directly to the Linux process / thread controls you would expect from a typical system, with the addition of real-time scheduler priorities.
As for Java, I'm not sure how it's exposed.
I am actually glad about this. There are more environments with these real expectations in this world every day. This provides the students an excellent opportunity, and in an operating system with reach far beyond merely these controllers.
It does mean a challenge for the students but I have great confidence they'll grasp it.
What tools are being planned to help them debug the sorts of issues that may arise?
What tools are being planned to help them debug the sorts of issues that may arise?
For LabVIEW users, the Execution Trace Toolkit allows you to visualize when different parts of the system are executing.
Standard open-source tools apply to C++ and Java.
I don't think I've ever done anything in FRC that I couldn't do on a Vex Cortex, aside from streaming video back to the driver station laptop. In fact, I've run Cortex code as fast as 100 Hz without complaints about CPU loading, and under RobotC (a terrible bytecode language with all the inefficiencies of Java and none of the benefits, which doesn't really support the C language at all) I was able to run all of my code in 2 ms (measuring the time it took the task to execute).
I did come up a bit short on I/O (granted, I used 5 LEDs with individual GPIO pins), but I managed to live with the 12 DIO, 8 analog, and 10 PWM. I think an extra two of each would be nice for FRC, but it's perfect for Vex robots. It also has I2C and two UART ports.
I would agree that the peak performance of the roboRIO vs. the Cortex provides more cost efficiency. But for 99% of teams, the Cortex would be just fine (in fact, way better, because it's so easy to set up and doesn't require default code), so the doubled cost doesn't provide doubled benefit, or even any benefit, to them. And then there are the 5% who insist vision processing is important (it has not been important to me yet), and the 1% who might utilize the full benefits of the roboRIO and implement a good onboard vision system without compromising their RT controls.
We're still not doing anything controls-wise that we couldn't have done on the 2008 IFI controller. We now use floating-point math, LabVIEW, and other useful code features to do it, but we haven't found a challenge which we simply couldn't do previously. Our development times have stayed about the same; our calibration efficiency is up a bit during online calibration-heavy activity but way, way down for short between-match changes. We've also spent a lot more money on system components (cRIOs, more cRIOs, tons of digital sidecars, analog modules, solenoid modules, radios, more radios, more radios, ...) than with that system.
In fact, I would argue that our code has gotten more complicated because of all the stuff we've had to do to get development and calibration times down. We wrote Beescript because it took too long to write new autonomous code and deploy it, especially in competition, and would never have done so (and possibly had more flexible autonomous modes using real programming features like math and if statements) if the compile and download times were short, or we could modify calibrations without rebuilding.
We've thought a lot about implementing a calibration system that reads cal files and such, but we can't get the design to a point where we can retain the current online debugging, cal storage, and efficiency. And the more code we write, the longer the compile times get. I know I can't get a system with the flexibility I want and expect while retaining all of the performance I expect. It's incredibly frustrating to see systems in the real world operate on far lower resources, running far more application code (way faster), with good development and calibration tools that streamline and optimize the process with such little overhead, while we're throwing more CPU speed and libraries at the problem and still aren't anywhere near that performance (on all fronts: RT performance, boot times, development times and overhead, calibration efficiency, etc.).
Also, so far? the GDC has given us one game EVER that actually needed machine vision for an optimal Auto mode: 2007.
And even then, lots of teams had successful deadreckoned keeper autos.
Any time the target doesn't move after you've placed your bot, AND you can start your robot where you want, dead reckoning will work. If no interaction between red/blue robots is allowed, dead reckoning can't be defended.
2003 was the start of auto. You needed to be first to the top of that ramp.
2004 the target didn't move, but auto could be defended by cross field ramming.
2005 I didn't compete, and my memory is fuzzy, but it was the first year we had the CMUcams. It was awful, as the targets were passive and the arena lighting varied wildly.
2006, they switched to the green cold-cathode boxes, which were much more reliable to detect, but the target didn't move, so there was no need to use them.
2007, the rack moved after robots were placed, but typically didn't move a whole lot.
2008, the IR remote could be used to tell your robot where the balls were. Most teams just dead reckoned.
2009, trying to dump in auto usually meant you got your own trailer beat up on by an HP
2010-2013 no game pieces, robots, or targets are moved before auto, AND red/blue interaction during auto is against the rules.
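Dead reckoning in this sense is just integrating wheel travel into a pose estimate. A minimal differential-drive odometry sketch (Python, illustrative; a real robot would read encoders each loop and usually fuse in a gyro):

```python
import math

def dead_reckon(pose, left_dist, right_dist, track_width):
    """Differential-drive odometry update from wheel encoder deltas (meters, radians)."""
    x, y, heading = pose
    d = (left_dist + right_dist) / 2.0                # distance traveled by robot center
    dtheta = (right_dist - left_dist) / track_width   # heading change
    x += d * math.cos(heading + dtheta / 2.0)         # midpoint heading approximation
    y += d * math.sin(heading + dtheta / 2.0)
    return (x, y, heading + dtheta)

pose = dead_reckon((0.0, 0.0, 0.0), 1.0, 1.0, 0.6)  # drive straight 1 m
# pose == (1.0, 0.0, 0.0)
```

It works exactly as long as the field doesn't move under you and the wheels don't slip, which is the point being made above: when targets are static and start positions are known, this is all an auto mode needs.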
Also, so far? the GDC has given us one game EVER that actually needed machine vision for an optimal Auto mode: 2007.
And even then, lots of teams had successful deadreckoned keeper autos.
Any time the target doesn't move after you've placed your bot, AND you can start your robot where you want, dead reckoning will work. If no interaction between red/blue robots is allowed, dead reckoning can't be defended.
2003 was the start of auto. You needed to be first to the top of that ramp.
2004 the target didn't move, but auto could be defended by cross field ramming.
2005 I didn't compete, and my memory is fuzzy, but it was the first year we had the CMUcams. It was awful, as the targets were passive and the arena lighting varied wildly.
2006, they switched to the green cold-cathode boxes, which were much more reliable to detect, but the target didn't move, so there was no need to use them.
2007, the rack moved after robots were placed, but typically didn't move a whole lot.
2008, the IR remote could be used to tell your robot where the balls were. Most teams just dead reckoned.
2009, trying to dump in auto usually meant you got your own trailer beat up on by an HP
2010-2013 no game pieces, robots, or targets are moved before auto, AND red/blue interaction during auto is against the rules.
This is a little inaccurate. You weren't always allowed to position your robot exactly where you wanted it so you couldn't be sure that your robot started in the same spot each time. In 2012, we needed vision in auto. Our strategy was to get to the center bridge and get the balls first, so we would be traveling very quickly when we hit the bridge, causing our robot to get misaligned. When we drove forward to the key again, we usually would be 2 to 3 feet away from where we started, and we needed the camera to line up with the target.
Also, many other teams have used vision as part of their main strategy. In 2006, WildStang had a nifty turret that was always pointed at the goal whenever it was in range, so that they could score balls at any time. And 118 used a camera very well in 2012 with their shooter, because it let them shoot from anywhere near the key without having to line up.
The point is, for some games and some teams, vision is a huge part of the game.
I know teams used vision to line up for a full court shot this year, and teams also used vision to line up with the legs of the pyramid.
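The lining-up use described here is typically simple closed-loop aiming on the target's offset in the camera frame. A minimal proportional sketch (Python, illustrative; the gain and image width are made-up values):

```python
def aim_correction(target_x_px, image_width_px=640, kp=0.005):
    """Turn command in [-1, 1] from the target's horizontal offset in the image."""
    error = target_x_px - image_width_px / 2.0   # signed pixels off-center
    return max(-1.0, min(1.0, kp * error))       # clamp to motor output range

aim_correction(320)   # target centered -> 0.0, drive straight
aim_correction(420)   # target 100 px right of center -> positive, turn toward it
```

Feed that correction into the drivetrain (or a turret) each camera frame and the robot converges on the target regardless of where it ended up after a bumpy drive, which is exactly the misalignment problem described above.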
Also, so far? the GDC has given us one game EVER that actually needed machine vision for an optimal Auto mode: 2007.
And even then, lots of teams had successful deadreckoned keeper autos.
Any time the target doesn't move after you've placed your bot, AND you can start your robot where you want, dead reckoning will work. If no interaction between red/blue robots is allowed, dead reckoning can't be defended.
2003 was the start of auto. You needed to be first to the top of that ramp.
2004 the target didn't move, but auto could be defended by cross field ramming.
2005 I didn't compete, and my memory is fuzzy, but it was the first year we had the CMUcams. It was awful, as the targets were passive and the arena lighting varied wildly.
2006, they switched to the green cold-cathode boxes, which were much more reliable to detect, but the target didn't move, so there was no need to use them.
2007, the rack moved after robots were placed, but typically didn't move a whole lot.
2008, the IR remote could be used to tell your robot where the balls were. Most teams just dead reckoned.
2009, trying to dump in auto usually meant you got your own trailer beat up on by an HP
2010-2013 no game pieces, robots, or targets are moved before auto, AND red/blue interaction during auto is against the rules.
Yes, but you're looking at the past, and a lot of things can (and will) change by the year 2019. I dream of the day that FRC advances to the point where the GDC can make a game which nearly requires vision tracking for some high-scoring opportunity. In 6 years, I highly doubt that we will get to the point where vision tracking is necessary to be competitive, but I would not be surprised if it becomes a near requirement for powerhouse teams. Heck, even in 2012 most of the top-tier key shooters used vision tracking (341 and 1114 come to mind). And if FRC is going to lock into a control system for that long, they had better be sure that it is going to be able to handle our growth and not hold us back.
My 2 cents.
techhelpbb
16-08-2013, 15:36
Also, so far? the GDC has given us one game EVER that actually needed machine vision for an optimal Auto mode: 2007.
I have to say I agree that building machine vision into the control system of an FRC robot is asking quite a bit when so few people will fully dig into it. There is a difference between merely using it and really grabbing hold of it. It is not really the most effective reason to demand an upgrade to an FRC system every few years, when the older robots and that investment then become that much harder to maintain.
Personally, I think a better way to handle video recognition in the current FRC environment is on the robot, not at the driver's station, and for this purpose I feel an auxiliary processing device is more sensible. It hardly makes sense to try to find something faster than a general-purpose COTS PC for the price. The market for general-purpose PCs is huge compared to FIRST, so of course they offer the greatest performance for the price, and each year that price will buy even more performance as long as it is allowed. Plus, if you break an old laptop, I doubt you'll spend more for the older model. The other way is to integrate the camera with the video recognition system in the same package. I look at the Raspberry Pi and other COTS systems (besides a general-purpose PC) as something a little more like an attempt to integrate the camera and the video recognition system (rough, I admit). (I'm not against the Raspberry Pi or anything like that, as has been demonstrated elsewhere on the forum.)
In any case, I think video recognition is one of those fantastic things that inspires people to think the robot can adapt to its environment based on sight. Most people start thinking of the way they see and imprint that on the robot, but in so many ways the way humans use sight and the way a machine does are very different things. It is an ever-evolving piece of technology. On the plus side, that evolution drives jobs and innovation, which I'm sure students would love to have. On the other hand, video recognition is no PWM. There is a point at which you can implement PWM and there's no sense trying any harder. Video recognition has so many compromises that there is always something to try, and always a good opportunity to look at the robot as the vehicle and the camera/video recognition as a subsystem with ample room for tinkering.
I am not sure it makes sense to sell the Apple product of FRC robot control systems. That model works great when people can afford to upgrade. Making those upgrades the entire control system seems a touch more expensive than necessary.
I'm actually really surprised I haven't seen anyone push for a dual radio idea before. That just makes sense.
Dashboard traffic and anything else noncritical can all go over WiFi. It also allows us to continue wirelessly programming the robot, which is a big advantage over having to tether a serial cable like the old controllers.
All the control data for the robot could be sent over the IFI style radio, and they even make Axis Cameras with built in wi-fi.
On a separate note, the compile/download times are where I'm hoping to see some serious improvement. This year, we used our 2012 robot code (in LV) on the 2012 robot using 2013 LV libraries for testing. The time to deploy in debug mode was about 1 minute, and the time to compile and download was about 3 minutes.
The problem was when the cRIO got into its "unhappy" mode, where it would have trouble downloading code. One day when our programmer wasn't there, the cRIO got into "unhappy" mode, and the people there weren't familiar with the imaging tool. It took them over 2 hours to download one program to the robot. They tried everything: turning it on and off, copying/pasting into a new project, using a different laptop. But whenever they tried downloading, it would download very slowly. Eventually, they decided to just let it download for as long as it took, and it took 27 minutes to download our code!
Compare this to Java, where FTP is used to transfer the compiled code. The compiling takes 10 seconds, the actual sending takes about 5 seconds, and the rest is just a cRIO reboot, most of which is the network stack loading and the FPGA being set up. There's no reason why we can't see this performance with LV on the new system, maybe even better, since the whole OS shouldn't need to be restarted when a new program is loaded. Linux is good at that sort of thing: you can do almost any OS update or software/driver install without ever needing to restart the computer!
Although this bug has gone without a fix since 2009, I'm really hoping that they can fix this issue for the roboRIO.
Greg McKaskle
16-08-2013, 17:17
The long deploy times experienced last season have not been present since 2009. They were introduced when the newer compiler and caching system were put into place. I've elaborated on the bugs in other posts and given known workarounds. There is no need to wait for a roboRIO for bugs in the compile and deploy system to be fixed.
Greg McKaskle
The long deploy times experienced last season have not been present since 2009. They were introduced when the newer compiler and caching system were put into place. I've elaborated on the bugs in other posts and given known workarounds. There is no need to wait for a roboRIO for bugs in the compile and deploy system to be fixed.
Greg McKaskle
I'm sorry if my earlier post came off as being rude, I didn't mean it that way. See my opinion about roboRIO here (http://www.chiefdelphi.com/forums/showpost.php?p=1287393&postcount=141).
I agree, there have always been ways to fix the deploying issues in LV, but they aren't always accessible to teams who aren't experienced programmers, or who aren't on CD. My only real negative experience with the current control system was in 09, when we couldn't get LV to recognize our cRIO to download the code. We called over an FTA who was as puzzled as we were, and who told us to reimage the controller. He was right, it fixed the problem, but it took too long, resulting in us missing (and losing) our elim match. (Looking back, we probably should have borrowed another cRIO!) Luckily, we won the next two and made it to the finals.
Since then, I've seen strange deployment issues every year on other teams' robots, so I figured that when something like this happened to us, it was the same thing.
Greg McKaskle
16-08-2013, 20:46
There is no need to apologize. I didn't take your post as rude, but I felt that it was useful to separate roboRIO discussion from bugs that were unfortunately present in the LV development environment last year. I think it is easy for people to become confused when reading threads as scattered as this one has now become.
Greg McKaskle
Tom Line
19-08-2013, 16:10
Greg, is there any chance that NI is going to release the Thursday morning keynote video intro in high resolution?
That video was really neat, especially since the entire thing personified FIRST.
I have a recording from my phone that is shaky, and a Really Big Guy (tm) sitting in front of me blocked a portion of the screen.
http://www.youtube.com/watch?v=AOlHDrCNkuM
Joe Ross
19-08-2013, 16:54
Higher resolution than this? http://www.youtube.com/watch?v=tTdPZ4rKwLA
Tom Line
19-08-2013, 23:24
Your link isn't working.
Before the keynote was a video sequence with music - that's what I'm looking for. Clicking on the link in my post works for me. That's strange. Here's a link to another video of the intro sequence:
http://www.youtube.com/watch?v=X9JmTvBtIew
You can also search YouTube for "niweek intro video" and it will be the first result.
Chris_Ely
20-08-2013, 12:31
This might be what you are looking for.
http://www.youtube.com/watch?v=v74Hm_Y4cBc
One thing that was nice on the DSC was that there was enough space around the RSL pins to plug in a 3-pin PWM cable. Since teams have many PWM cables, that was easier than making a 2-pin cable. It's hard to tell from the pictures if there is space, but it would be nice if there were room for a 3-pin cable for the RSL.
Mechanical has taken this feedback and is incorporating it into Rev B.
RyanCahoon
20-08-2013, 13:44
Here's the one with the education theme: http://www.youtube.com/watch?v=d7g90QwbF3o. I suspect the graphics were rendered for their unique display screen form factor; they'd probably have to be redone for a standard rectangular video, if that's what you're looking for.
Tom Line
20-08-2013, 15:33
Thanks - that was it.
Mechanical has taken this feedback and is incorporating it into Rev B.
This is just so cool. Kudos to NI for letting the guys that make the calls interact with the users of the product. And for letting them have the power to make the call to immediately roll feedback into the product.
Joe Ross
20-08-2013, 18:51
Mechanical has taken this feedback and is incorporating it into Rev B.
Thanks
I suspect the graphics were rendered for their unique display screen form factor; they'd probably have to be redone for a standard rectangular video, if that's what you're looking for.
I had some time to kill backstage, so I chatted with the AV crew about their setup. It really is impressive, with multiple overlapping projectors and 5 distinct screens. The animation was rendered in 5 segments and then played back all together. Given the different paths the videos take to the projectors, it is pretty impressive that they got them all to synchronize so tightly that you can't tell.
The rendering was extremely intensive too - something like multiple days per keynote video on a large rendering farm.
Joe Ross
26-08-2013, 18:15
I found the Using Microsoft Kinect with myRIO (https://decibel.ni.com/content/docs/DOC-31239) whitepaper in the NI myRIO community. I assume the process of getting the Kinect running with the roboRIO will be similar.
Other interesting papers:
Obstacle Avoidance with myRIO and Kinect (https://decibel.ni.com/content/docs/DOC-31241)
Color Following with myRIO and Kinect (https://decibel.ni.com/content/docs/DOC-31242)
billbo911
05-05-2014, 15:10
OK, I know I'm a bit lazy and did not read all 14 pages, but at least I am willing to admit to my failure.
Here are my questions:
We have 3 cRIOs. Will LabVIEW still work with these older controllers as we move forward with the roboRIO?
Will we still have access to tools to re-format the older controllers once this wonderful step forward takes place?
I believe these older controllers still have a huge value to them in terms of teaching potential.
Tem1514 Mentor
05-05-2014, 15:37
Plus a huge $ value as well. It would be nice if we could use the cRIO as a BACKUP at an event in case the roboRIO gave up.
Of course, another option is that NI could offer a trade-in of cRIO hardware for roboRIO hardware. Now that would be a real win-win.
Tom Line
05-05-2014, 15:48
This was a large point of concern during the final Alpha Q&A in New Hampshire. The teams all made it a point that even if the cRIOs are not supported forever, some allowance needed to be made to grant teams longer than a one-year license to the old versions of the FRC software. A concrete plan hasn't been laid out yet, but they (FIRST and NI) are aware of the concern.
We are meeting again June 8/9, so I'll make sure I ask for clarification to see if they've made a decision (if none of the FIRST / NI guys speak up here).
We have 5 cRIOs. There's always the option of installing the old software and using it in evaluation mode, but it's not one we like either!
Using the old cRIO for 2015 will be a rules (FIRST) decision. I don't see it interchanging with the roboRIO, though; you would have to use the old supporting hardware.
Trading the FIRST cRIO for a roboRIO would not be win-win for NI, since they cannot resell the FIRST cRIO to anybody. They were made specifically for FIRST and are not part of NI's commercial offering. The supporting hardware is not made by NI.
The Java plugins for the cRIO will be archived. Since they are open source, they will continue to work.
A concrete plan hasn't been laid out yet but they (FIRST and NI) are aware of the concern.
cRIOs won't turn into bricks at the end of this season. There are a few things going on to continue support:
1) cRIO-FRC II (the 4-slot) is planned to be supported in the 2015 software release for teams who want to continue using cRIOs for demonstration, experimentation, etc.
2) cRIO-FRC (the 8-slot) will not be officially supported, but will be included in the software and "should work".
3) Future support (2016 software and beyond) is TBD.
4) Check the NI license manager for the 2014 software :)
Thad House
05-05-2014, 23:02
4) Check the NI license manager for the 2014 software :)
That is AWESOME! :D Thanks a lot for doing that. I was worrying a little about programming our old robots. That helps.
One question: will it be possible to activate again after kickoff, or are we going to have to make sure we keep an installed copy?
I'm not aware of any restrictions in activation after kickoff and you shouldn't have to keep a copy installed to keep the license "active". You can always give us a shout if you get into trouble.
Also if anyone does something cool with their cRIO controllers (something like sending it up to space (http://www.extremetech.com/extreme/181740-you-can-finally-watch-a-live-video-feed-of-earth-from-space-and-its-awesome) perhaps) please let me know :)
DampRobot
08-05-2014, 01:47
I haven't read all 13 pages, but does anyone know the max speed (in terms of counts/sec) the roboRIO can read from a quadrature encoder? It would be great if you also knew the specs for the cRIO.
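Neither controller's spec sheet quoted in this thread lists that number, but the count rate a given encoder setup actually generates is easy to estimate, so you can check it against whatever limit gets published. A minimal sketch (the encoder resolution and shaft speed below are made-up example values):

```python
def quadrature_count_rate(cycles_per_rev: int, rpm: float, decoding: int = 4) -> float:
    """Counts per second a quadrature encoder produces at a given shaft speed.

    With 4x decoding, every edge on both channels is counted, so one
    encoder cycle yields 4 counts; 1x decoding counts one edge per cycle.
    """
    return cycles_per_rev * decoding * rpm / 60.0

# Example: a 360-cycle encoder on a shaft spinning at 3000 rpm, 4x decoding
rate = quadrature_count_rate(360, 3000)
print(rate)  # 72000.0 counts/sec -- compare against the controller's limit
```

Whatever the roboRIO's FPGA sampling limit turns out to be, this gives the rate your mechanism demands of it.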
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.