The increasing amount of pre-canned code

I would have posted this in programming, but its not really a programming question.

Do you think the increasing amount of pre-canned code (in the form of WPILib and all its flavours) is helping or hurting the effectiveness of young teams?

I’ve seen quite a few teams this year asking for help with LabVIEW, and specifically with the drive code. I think what has started to happen is that these teams see both the veterans and the code that makes such drive systems seem easy, and jump on the bandwagon before really understanding how these systems work.

A similar effect has happened with the cameras.

If you hand teams code on a silver platter, they often don’t understand WHY it works. That causes problems when something breaks and it stops working, because they can’t fix what they don’t understand.

Obviously some form of default code is required, but should we really be handing out PID algorithms, holonomic drive code, and so forth? I don’t know, but I think it’s causing some younger teams to get in over their heads with advanced systems such as holonomic drive platforms. I can see this causing problems come competition, when something breaks and no one knows why. It also leads to more questions directed at veteran teams about how to code this type of thing when it doesn’t work right away, because the newer teams don’t fully understand what they’re doing. That much, I think, is a good thing, as it results in the younger team learning some more advanced concepts from teams who’ve done it successfully before.
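For perspective on what’s being handed out: the core PID algorithm itself is only a few lines. Here is a minimal sketch in Python (illustrative only, not the WPILib implementation, and ignoring real-world concerns like integral windup and loop timing):

```python
class SimplePID:
    """Bare-bones PID controller sketch (not WPILib's implementation)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        # Classic PID: output = Kp*e + Ki*integral(e) + Kd*de/dt
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The point stands either way: the code is short, but knowing why a given Kp oscillates, or why the integral term winds up, is the part a team can’t copy.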

(Note: I’m not picking specifically on holonomic drives, they are just an easy example of the concept I’m referring to.)

Only 2 of our programmers actually understand the camera scripts…
our mentor had them make their own from scratch this summer lol
and they just so happen to be twins…I suspect they cheat using telekinesis :expressionless:

I agree with you: if all you have to do is attach and relate file A to file B, then the point of programming gets a little fuzzy.
It would be nice if people actually had to understand the concepts and syntax before being allowed to take the fate of your robot into their hands.

I dislike WPILib quite a bit; I’ve found that it overcomplicates things and makes the inner workings vague. As programming leader, I try to encourage the other programmers to look into how things work or to build them themselves. Unfortunately, our programming mentor is a big fan of WPILib.

The cRIO actually makes me miss IFI, even though I only worked with IFI for a year. The IFI system just seemed to always work, while our team has constant issues with our cRIO. I personally dislike everything I’ve seen from National Instruments, especially their software.</rant>


You raise an interesting point. I think a large amount of the disdain aimed at NI is due to perceived unreliability of the cRIO. I think this is less a function of ACTUAL unreliability, and more a function of the cRIO being several orders of magnitude more powerful than we need it to be.

We went from an 8-bit MCU running at 20 MHz with a few KB of RAM to a 32-bit processor running at 400 MHz with a boatload of RAM, for the SAME application. Yes, some teams had started to push the limitations of the IFI system, but the cRIO seems a bit like driving a finishing nail with a sledgehammer. Sure, it works, but is it REALLY the right tool for the job?

I’m reminded of three quotes:

John of Salisbury in 1159 wrote: “We are like dwarfs sitting on the shoulders of giants. We see more, and things that are more distant, than they did, not because our sight is superior or because we are taller than they, but because they raise us up, and by their great stature add to ours.”

Newton wrote: “If I have seen a little further it is by standing on the shoulders of Giants.”

Dan Ingalls said, “We in the computer software business insist on stepping on the toes of those who came before us instead of climbing on their shoulders.”

Computing has made huge strides in the last 50 years; why not let people use the fruits of others’ labors?

I don’t need to know how to program to use a computer. I don’t need to know how to program to use CAD to design a part. I don’t need to know how to program to use a 3D mill to make my part. And so on.

I too liked the IFI controller, but I also stood on the shoulders of a giant (Kevin Watson) and used his code to do some of the grunt work. I use PIC chips today, but I use other people’s code to make my code better. More time for my ideas, since I’m not wasting time reinventing the wheel.

We give people credit here for posting their ideas, and all of us learn from them. How many times have you taken a CD idea and extended and expanded it to make it much more?

If a team can use the “out of the box” drive code, camera code, or the hyperspace teleportation code, then good for them. Learning the tools (CAD, computer machining, the entire LabVIEW / WPILib code base) is why robotics isn’t a 6-week thing; it’s a year-round thing.

Stand on the shoulders of giants! And be prepared to have others stand on yours.

One thing I’ve learned in school is that black boxes are better than clear ones.

I know that a two-input NAND gate is made up of four transistors, a latch is made up of a couple of NAND gates, a flip-flop is made up of more NAND gates, a shift register is made up of a few flip-flops… and on and on and on.
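That layering can be sketched directly: each level is built only from the one below it, and nothing above needs to look inside. A toy sketch in Python:

```python
def nand(a, b):
    """The primitive: in hardware, four transistors."""
    return 0 if (a and b) else 1

# Each layer uses only the layer directly below it.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    """XOR built from four NANDs; callers never see the transistors."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```

A latch, a flip-flop, and a shift register continue the same pattern upward; by the time you reach “drive forward,” the transistors are a dozen abstraction layers away.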

When you want to make your robot drive forward, it’s pretty much irrelevant to know those facts, even though they are the basis for all things programming.

A good example is a gyro: I have vague ideas about how it works, but that’s neither here nor there. I know it works, and I can use it without hesitation.
My robot spinning exactly 45 degrees shouldn’t depend on my knowing the inner workings of that sensor.
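To make the 45-degree example concrete: a turn-to-angle routine only needs the gyro’s contract (it reports degrees), not its internals. A sketch in Python, where `read_angle` and `set_rate` are hypothetical stand-ins for whatever gyro and drive API a team actually has:

```python
def turn_to_angle(read_angle, set_rate, target_deg,
                  tol=0.5, kp=0.05, max_steps=1000):
    """Proportional turn loop: the gyro is treated as a pure black box.

    read_angle() -> current heading in degrees
    set_rate(r)  -> commands a turn rate in [-1.0, 1.0]
    """
    for _ in range(max_steps):
        error = target_deg - read_angle()
        if abs(error) < tol:
            set_rate(0.0)   # close enough: stop turning
            return True
        # Turn hard when far from the target, gently when close.
        set_rate(max(-1.0, min(1.0, kp * error)))
    return False
```

Nothing here depends on whether the sensor is a spinning mass, a MEMS chip, or a simulation; only the degrees-in, rate-out contract matters.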

Abstract it further. Pretty much every year they give teams the KOP, and in it they include transmissions and wheels. Why should you have to fabricate a wheel if a perfectly good one is available to you? Why should you be required to spend hours designing a transmission if a perfectly good one is available to you?

No one says you have to use it. No one says “you must use the camera and the code that comes with it.” Go ahead, redo it yourself. You’ll gain knowledge, and frustration, from it, and you might come out with a better product and an advantage over all the other teams.

To summarize: “pre-canned” code is helping all teams.

For every programmer who wants to write their own interrupt service routines, work with ADCs and digital I/O at the register level, or write their own USART/RS-232/SPI/I2C communication code, there are probably a dozen more who struggle just to make their robot drive.

Assets like WPILib, Kevin Watson’s code, LabVIEW, EasyC, etc. all help those dozen teams have a robot they can drive around and actually do things with. But if you’re that other one, open up the C++ or Java libraries and go code something cool.

If you want to teach your programmers low level stuff, go buy a PIC, MSP430, AVR, etc. and teach them how to set it up. Maybe even put it on a Vex/FTC robot and have them drive it around using the alternative controller.

That’s interesting, but why stop there? Back in the good ol’ days, the KOP was exactly what it sounds like: parts. This was before kit-bot frames, motor adapters, and all of the pre-engineered parts available now (anyone remember the Small Parts catalog?). Do I miss it? Yes, because back then you really had to “make” everything work: figure out how to mount a motor to a shaft, and so on.

But the real question is, if I had a choice, would I make that a requirement for all teams? My answer is no. Having worked with a lot of the younger teams, this program really wouldn’t be possible if they had to build from scratch; a lot of the newer teams just don’t have the infrastructure in place yet to handle that type of work (especially teams with minimal access to machine tools, shops, etc.). I think teams should strive to design everything themselves, but I don’t think that’s a reasonable expectation right off the bat with FIRST reaching as many students as it does now. I think similar logic applies to the code.

I can see the validity in “standing on the shoulders of giants.” I can see the validity in using a gyro without fully understanding how it works, by means of some black-box code. The problems I see come when the canned code encourages teams to do things they don’t have the experience to use even in pre-canned form: when they decide to use the pre-canned code because it’s cool or new, without understanding the reasons you might want something like a holonomic drive, or having a basic understanding of how one works. At the very least, you need to understand what the inputs and outputs of a particular black box are, and how changing the inputs affects the outputs. If you don’t understand that, but want to use that black box just because team X uses it and does well, then you’re doing something wrong.

To summarize: I’m not suggesting that teams need to understand how the software crunches numbers. I’m suggesting that teams need to understand the mechanical aspects of how a holonomic drive works before they try to implement a black-box routine to do the code part for them, and I’m not sure that’s happening.
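As a concrete example of “know the inputs and outputs”: the mixing that a typical mecanum/holonomic black box implements is just four sums. A sketch in Python (sign conventions vary with roller orientation; this is illustrative, not WPILib’s code):

```python
def mecanum_wheel_speeds(vx, vy, omega):
    """Map desired chassis motion to wheel powers for a mecanum drive
    with 45-degree rollers.

    vx: forward (+), vy: strafe right (+), omega: clockwise rotation (+),
    each in [-1.0, 1.0].
    Returns (front_left, front_right, rear_left, rear_right).
    """
    fl = vx + vy + omega
    fr = vx - vy - omega
    rl = vx - vy + omega
    rr = vx + vy - omega
    # Scale all wheels down uniformly if any would exceed full power.
    m = max(1.0, abs(fl), abs(fr), abs(rl), abs(rr))
    return (fl / m, fr / m, rl / m, rr / m)
```

Understanding even this much, e.g. that under this convention strafing right drives the front-left and rear-right wheels forward and the other pair backward, is what lets a team debug a robot that crabs diagonally instead of strafing.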

I don’t have a problem with re-using someone else’s code, so long as you understand the basics of what it’s doing to make mechanism X work.

WPILib definitely helps teams. Even after the six weeks some teams are struggling to make their robot drive. Imagine what it would be like if they had to write drive code from scratch?!?! As programmers, we must remember that not everyone is a programmer. In fact, many FRC mentors have never had to write a line of code in their lives.

If a programmer wants to learn the inner workings of WPILib then I suggest they, and their team, sit down and walk through it. Maybe they could even make some changes.

In summary, there is always this tension between helping teams out and taking the challenge away from them. The GDC faces this every year when they sit down and create a new game.

They might get in over their head, but that’s part of the learning experience.
If something does break down, if they do fail, they will learn.
That’s the point of FIRST.

You know why veteran programmers are so good at debugging code? It’s because they’ve made a hundred times as many mistakes as the rookies. We should give the rookies every opportunity to learn, even if it means putting them at risk of “failure.”

OK, but which gives them more learning when “failing”: failing because you don’t understand even a little of what the black box is doing, or failing because you understood the concept, tried to build it, and failed?

I’m not sure which side of this fence I really sit on. On one hand, I think WPILib is spoon-feeding a lot of the more advanced stuff (advanced mechanically, in programming, and in general understanding of how it works). That is frustrating to rookie teams that don’t understand it, and it makes the teams who spent years perfecting a system of their own to do the same thing lose whatever advantage they’d gained through those years of refinement (swerve drives come to mind here). On the other hand, I see it getting a lot of teams who don’t have the time, money, mentors, and other resources to develop these advanced systems up to speed.

I’m not sure if it helps or hurts, and I’m not sure if the competition is better or worse for it. It makes things different than they would be without it. I think it’s probably a net positive force, encouraging teams to try an advanced drivetrain and forcing them to learn how it works along the way; however, I can see downsides to it too.

I think there comes a line where it starts to make things too easy. I’m just not sure where that line is.

The “line” is the point at which we’ve raised the floor so high that the advanced teams are hitting the ceiling. The ceiling is pretty darned high (and the GDC keeps raising it), so I don’t think we’re in danger of that any time soon.

I believe students get much more educational benefit from learning the system level point of view - “What can that do, and what can I do with it?” rather than “How does it do it?”. I sent my students home with a homework assignment - “Pick your favorite sensor and think of 3 ways we can use it”. The raised floor will allow them to spend more time on that thought and less time on the implementation.

If this is about blank canvas versus paint-by-numbers, that is a deep discussion. If a good mentor is there to challenge, encourage, goad, and help explain why something just happened, then the blank canvas can certainly offer the richest and most fulfilling experience. The blank-canvas approach seems to work less well without the awesome mentor and support network. Arguably, the systems approach also works far better with a mentor to fill those roles. Also, if a team chooses to go blank canvas and uses kit materials and WPILib only for prototyping, that seems to expose them to the full benefits of both approaches. Some teams simply don’t have the time or resources to undergo the second phase.

If this discussion is about taking the contest back to year XX, when things were just right for your taste, then why not really make it challenging and design your own limited challenge within your team or region for the offseason? You know, the ones where you make a bridge from pasta, a boat from cardboard, or, I don’t know, how about autonomous robots without ICs of any kind – tubes for all. No need to worry about spoiling them with SW libraries that way.

If you have specific feedback on WPILib, I’m sure Brad and the other contributors will be happy to receive it, or your assistance.

Greg McKaskle

I like the IFI processor; it worked well in its time.
The biggest advantage to me of using the cRIO is that I no longer have to write lookup tables for things, as I have the power to calculate them in real time.
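The lookup-table point is worth making concrete. On an 8-bit controller with no floating-point unit, trig typically came from a precomputed table; on the cRIO you just call the function. A sketch of both approaches, in Python for illustration:

```python
import math

# Old-style: spend memory to save cycles.
# One entry per degree, first quadrant only (0..90).
SIN_TABLE = [math.sin(math.radians(d)) for d in range(91)]

def sin_lookup(deg):
    """Nearest-degree table lookup, valid only for 0 <= deg <= 90."""
    return SIN_TABLE[int(round(deg))]

def sin_direct(deg):
    """What real-time floating-point power lets you do instead."""
    return math.sin(math.radians(deg))
```

The table costs accuracy and range (handling the other quadrants means extra bookkeeping), which is exactly the grunt work that computing in real time makes unnecessary.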

Working in LabVIEW: some of WPILib is nice, like integrating gyros and counting encoder ticks. Some of it is not, such as the really, really annoying fact that you must set both the forward and reverse coils of a relay in the same Set call (I actually wrote a VI to set them separately, based on code copied from that Set). I would totally agree that PID and holonomic are probably going too far. Jim (Zondag) actually didn’t tell us programmers that there was a PID library, and since we didn’t find it until after writing the crab-drive code, we didn’t use it.

The cRIO: being a first-year programmer in 2009, I probably would never have been able to code the 4-wheel independent steering without the trig power of the cRIO. It was nice to have all the power I needed. That said, after spending 8+ hours debugging a firmware issue in the cRIO this year (which turned out to be a problem with the 24 VDC supply on the PD board), I would say the cRIO is definitely not as robust as the IFI system. When NI tech support says “Ummm… that’s bad,” you know they didn’t find that problem in their testing and aren’t prepared to solve it. When a problem arises with this new system, there are so many more possible points of failure that it’s quite difficult to debug sometimes.

cRIO boot times: they bug me. Waiting for the robot to boot is the most annoying thing there is. However, after I talked with NI tech support for around 4 hours, they claimed that the cRIO itself boots in several seconds and then the FIRST code waits for FMS comms to time out (25 s) before loading the team code. Is this the case? If so, they could make the FMS timeout a little faster.

PID and holonomic: they are sitting unused in my WPILib palette. They will never be touched. As I teach the newbies (freshmen), they too will learn to leave them alone. At least they didn’t give us crab-drive code; debugging that is too much fun for them to just hand to us.

Sorry it took so long to debug.

Since the cRIO and many of the other control system components are off the shelf, not specially designed for this competition, their interactions, especially when one of them is failing, are indeed something not many people have seen. Also, the AEs are mostly new to this system too; they are familiar with NI products in normal usage.

As for the boot time: the cRIO FPGA boots very quickly. The PPC typically boots in about ten seconds. The bridge and other elements, and the FRC-specific tasks, all take a bit longer. But the boot time isn’t spent waiting for any particular element, such as FMS, to time out – no magic bullet.

Greg McKaskle

You can get a really good idea of what’s taking so long to boot by connecting a serial cable to the cRIO and watching it boot. The debugging I’ve done over the serial port confirms Greg’s claim that there is no magic bullet.

This is a pretty interesting topic. My post here may wander a bit, but I hope I’ll get to the point before too long.

From the standpoint of simulating what a real-world controls/embedded-software engineer experiences, having the WPI Library makes the FIRST experience much more real-world-like.

While at some point each company has to start from scratch, that work was usually done and incorporated into the company’s software library many years ago. It’s the job of each engineer to use the tried-and-true library and not reinvent the wheel.

While at first glance using a library seems too easy, libraries present a lot of challenges of their own. The post that started this topic brought up a lot of good points. I just started a new job recently (I’m a controls engineer working with embedded software), and getting up to speed on all of the library code is quite a challenge. Like 1075guy pointed out, you really should know what a library is doing before using it, or else you may not know how to fix some problems.

As someone else pointed out, it can often be difficult to figure out what some libraries are actually doing. As software becomes more complex, deciphering what the software engineer was trying to do can be difficult at times. I’ll admit there have been times when I said to myself, “I can re-write this to do the same thing in less time than it would take me to figure out what’s going on in this code.” However, actually re-writing it is usually a pretty bad idea, since there are usually a lot of lessons learned embedded in the code you’re looking through, and you probably don’t know all of them. That being said, it’s often very helpful just to give re-writing it a go. After that exercise, you can usually get through the library code pretty easily, since you went through the thought process yourself. (Just be sure not to use your own code - you’ll probably get fired. :slight_smile: )

I said I’d get to the point eventually, so here goes. In the real world, you’re more likely to be faced with a problem like the one we have now in FIRST: you have a new challenge, and you have some good code on the bookshelf. You have to determine how to make the best use of the library code and then fill in the gaps. But keep in mind that if you use code you don’t understand, you’re playing with fire - and don’t expect the insurance company to bail you out if you burn down your house playing with fire.

This is exactly the point I was trying to get to, and wasn’t quite sure how to reach.

I remember in 2009, my team went to the Waterloo Regional and had to reflash our cRIO, update Workbench, and so forth. Between the version of WPILib we’d been using in testing and the version that got flashed there, the library had changed: the timer code went from returning seconds to returning microseconds, or vice versa, I can’t remember exactly. This broke our steering code, which used a timer, because the numbers were now off by a factor of a million. I tracked it down fairly quickly, but changes like that to pre-canned code could cause problems that less experienced programmers might not track down as quickly.

Admittedly, we should have updated before ship, but I hadn’t thought of it, and I certainly wasn’t expecting a change to the outward functionality of the black box that is WPILib.
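One cheap defense against that kind of silent unit change is to route every read of the library timer through a single conversion point, so a factor-of-a-million surprise becomes a one-line fix. A sketch in Python (the names here are hypothetical, not a real WPILib API):

```python
def make_seconds_timer(raw_timer, units_per_second):
    """Wrap a library timer whose units might change between releases,
    so the rest of the code always sees seconds.

    raw_timer:        callable returning the library's native tick count
    units_per_second: e.g. 1 for seconds, 1_000_000 for microseconds
    """
    def get_seconds():
        return raw_timer() / units_per_second
    return get_seconds
```

If the library flips from seconds to microseconds in an update, only `units_per_second` changes; the steering code never knows. A startup sanity check comparing the wrapped timer against a known delay would flag the mismatch immediately.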

I’m not really griping about the new system. I actually rather like it; the object-oriented approach to coding the robot is so much more logical. I just feel that, particularly for C++ last year, WPILib was poorly documented and somewhat unfinished at the time of release. Perhaps it could have been handled better, or changes like the one above could have been documented in some sort of changelog. Maybe they were and I just didn’t see it.

I would like to add a vote for encouraging systems-level design instead of low-level programming. I believe this does more to inspire interest in science and technology than learning how to interface with a sensor does. With the LabVIEW and WPILib tools, teams can do cool things that might encourage them to pursue technology as a career. (And maybe then learn the details of how semiconductor gyros work.)

We might also be giving the libraries too much credit. Our team is using mecanum wheels this year for the first time. I don’t think anyone said, “Hey, there’s a library, let’s go holonomic!” The team looked at the game requirements and decided that omnidirectional movement was important, and besides, they’d always wanted to try mecanum wheels. No one mentioned the software; after all, it’s just a “simple matter of programming” to run the wheels.

We have a pretty lean team, so our programmers are builders too. Having high-level libraries lets us try new sensors and equipment without learning the low-level coding. Without those resources we would have to prioritize the work, which would probably mean just getting a drive and kicker to work. Knowing that, we probably wouldn’t even bother with the other sensors, and would thus miss learning how such devices might be employed in a robotic system.

I believe the cRIO and LabVIEW have allowed at least our team to get to the next level of design and understanding.