Log in

View Full Version : On the quality and complexity of software within FRC


faust1706
12-06-2015, 16:45
McCall's quality factors are lacking, or in some cases non-existent, in FRC, and it is about time to address it.

Year after year, the vast majority of teams are writing poor, inefficient, and disorganized code. Their code base fails to generalize across multiple years due to poor design, but no one seems to care or talk about this.

Someone posts a CAD drawing of a robot that they did for practice. People ask questions about the foundations of the design ("How thick is the G10?"), but no one asks questions when code is released. No one asks, "What is the big-O complexity of this function?" No one cares enough to truly scrutinize someone else's code. No one cares that an on-board vision program isn't threaded. No one cares about how inefficient a routine is so long as it works in a match. A lot of teams forbid the altering of code once it works, which is a disgusting practice. Teams get 10-15 fps on a vision program and say "it's good enough" when they haven't even done their research on optimization techniques. The vast majority of code would get a 'C' at best in an intro to programming class in a high school. So why is no one talking about this?
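To make the threading point concrete, here is a minimal sketch (all names and numbers invented, frames simulated rather than read from a camera) of the structure I mean: a capture thread feeding a bounded queue, so a slow processing pipeline never blocks frame grabs.

```python
import threading
import queue

def capture_loop(frames, out_q):
    # Producer: grab frames as fast as the source allows.
    for frame in frames:
        out_q.put(frame)  # blocks briefly if processing falls behind
    out_q.put(None)  # sentinel: no more frames

def process_loop(in_q, results):
    # Consumer: run the (slow) vision pipeline off the capture thread.
    while True:
        frame = in_q.get()
        if frame is None:
            break
        results.append(frame * 2)  # stand-in for real processing

frames = list(range(10))  # simulated camera frames
q = queue.Queue(maxsize=4)  # bounded so capture can't run unboundedly ahead
results = []
t_cap = threading.Thread(target=capture_loop, args=(frames, q))
t_proc = threading.Thread(target=process_loop, args=(q, results))
t_cap.start()
t_proc.start()
t_cap.join()
t_proc.join()
print(len(results))  # every simulated frame processed, in order
```

A real program would replace the simulated frame list with camera reads and the `frame * 2` with the actual pipeline; the point is the decoupling, not the math.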

gblake
12-06-2015, 17:08
... So why is no one talking about this?

I think you exaggerate a bit when you write "no one"; and I think it would be (should be) unusual to generalize about the condition of "the vast majority of teams'" software without first agreeing with your audience on what requirements that software is supposed to satisfy.

There is more than one way to skin most cats, including software cats.

Blake

Anupam Goli
12-06-2015, 17:27
I'll bite.

Year after year, the vast majority of teams are writing poor, inefficient, and disorganized code. Their code base fails to generalize across multiple years due to poor design, but no one seems to care or talk about this.

Year after year the vast majority of teams fail to build a robot that can play the game effectively. Just like there are teams that have great designs but not so great software, there are teams with horrible designs but great software (I was on one of these teams in high school, the only thing I could brag about doing in high school was writing good software for a terrible robot).

People ask questions about the foundations of the design ("How thick is the G10?"), but no one asks questions when code is released. No one asks, "What is the big-O complexity of this function?"

FIRST gives us restrictions on weight and height. The only software restriction we have is the ports we can use and the hardware limitations. I don't think many of us are concerned about the efficiency of our code with the hardware in the roboRIO. A 120 lb limit is a much harder limitation to work with.

Also, as a firmware engineer (intern), my first priority is proving the concept before actually applying it to the hardware I work with. I don't care about the efficiency until I have to ship it on very limited hardware, I want to see the concept work before I start trimming my variables, deallocating memory, etc.

A lot of teams forbid the altering of code once it works, which is a disgusting practice.

I'm not so sure it's disgusting, but it's not a practice I'm fond of. Just like our mechanical design, our software design should be constantly evolving to include more automation and autonomous capability. However, software changes are harder to see physically than mechanical changes. Of course, if you write code and deploy it and it doesn't work, someone's going to yell at you to change it back. However, with a mechanical design, you can see the points of failure easily. I agree, this is a bad perception to have. There are merits to leaving working software alone (incompatible libraries in future updates, etc.) but those are few and far between, I suppose.

The vast majority of code would get a 'C' at best in an intro to programming class in a high school. So why is no one talking about this?

You vastly overestimate the rigor of a high school programming class. If you saw the MATLAB code I wrote for my Computing for Engineers introductory college course, I'm sure you'd have a heart attack. When you're not graded on efficiency and don't have the time to make it efficient, why make it so?


Just like the top teams have amazing designs, they also have amazing software. I marvel over 254's software releases and always learn something about their software design. I suppose the majority of CD's talk is mechanical design-oriented, so it's tough to see all of the software discussion going on (and there's a lot).

Pault
12-06-2015, 17:28
I can agree that compared to the CD ME community, the CD programming community really does not do nearly as good of a job at encouraging and helping teams improve their code beyond just making sure they understand the fundamentals and their code works. There is very little talk about making code flexible, or features that could be added, etc. All of the threads are something like either "What programming language should I use?", "Where do I go to learn how to code in X language?", "Why does this code not work?", "What is a PID?", etc. I think it could really benefit the teams on here if there was more in depth conversation.

You're being very harsh on the average team, though. Just like with mechanical and electrical, the goal of many programming teams is just to "make it work well enough for us to get picked for eliminations." Many of them don't even have a coding mentor, and rely pretty much exclusively on the resources online to figure out how to write basic code. I don't think we need to be criticizing them because their code is not as robust as we would expect from our own teams; they are struggling to get it to work at all.

kylestach1678
12-06-2015, 17:55
I think it could really benefit the teams on here if there was more in depth conversation.
This is very true. Only a very small portion of CD contains anything related to programming, and of that, as you said, very little is anything more than stack overflow-style "why does this not work" problems. It would be fantastic if there was more discussion on CD about the fantastic code that some teams have created such as 971's control code, 1706's and 900's vision programs, and 254's motion planning to name a few.

Ari423
12-06-2015, 18:05
I'm afraid I have to agree with OP. I am the head (and only) programmer on my team, and have been so for the past 3 years, so I have a lot of experience with programming for FRC.

I will be the first to admit programming is hard to do well, but I think some teams just aren't putting in enough effort to make their code efficient. Just today on CD I saw a piece of code (which I will leave unnamed) that almost brought tears to my eyes in its inefficiency. Even without a programming mentor and with a 6 person team, I have always put an emphasis on code efficiency and design as well as effectiveness. Sometimes I feel as if the FRC community has taken all of the focus from programming and moved it to mechanical, and left programming as an afterthought: something that you just throw together in a few hours to make it work and then never look at again. I understand that there are multiple ways to do a single task, but some ways ARE better than others.

A good example of this is dashboards. I cannot speak to programming a Java or C++ dashboard, but I know that making a dashboard in LabVIEW is relatively easy. Every year, after I finish the robot code and control panel, while I am waiting for the robot to be ready to test on, I program a dashboard that shows the important information coming from the robot and adds inputs beyond those on the control panel (trims, resets, etc.). It usually looks something like this:
[dashboard screenshot attachment]
It doesn't take long to make, makes debugging much easier, and generally looks nice. We don't always use all of the displays, but there is at least one display for every aspect of the robot code. However, I often see much larger and more advanced teams who use LabVIEW but either don't have a dashboard, use the default dash, or make a custom dash with only a few numerical outputs. I don't understand; maybe someone can explain this to me. With the customizability given to you, why not make a dashboard suited to your needs that is easy to read and looks nice?

tl;dr Why is programming becoming an afterthought in the FRC community?

connor.worley
12-06-2015, 18:12
IMO, once your code works, improving it yields more sharply diminishing returns than improving your mechanisms does. Efficiency doesn't really matter because there's no requirement to scale.

Requiring teams to submit their code to the judges for review might fix this, but I don't know if the volunteer capacity exists.

MikLast
12-06-2015, 18:36
I don't understand; maybe someone can explain this to me. With the customizability given to you, why not make a dashboard suited to your needs that is easy to read and looks nice?


Maybe the only thing you really need is three auto buttons.
(That's all we needed; the driver station console gave us everything else we needed.)

cjl2625
12-06-2015, 19:04
I'm afraid I have to agree with OP. I am the head (and only) programmer on my team, and have been so for the past 3 years, so I have a lot of experience with programming for FRC.


Yup, exact same for me.

Most of our code for this year is pretty gross, which is probably a result of waiting for the mechanical team to make something, and then me scrapping together something that makes it move. There's plenty of automation, but it's perhaps not the most efficient.

In general, it's just rushed through to make something that works. I usually don't improve it, perhaps because I'm afraid of breaking the code, and then getting yelled at by those darn wrench-swinging monkeys of the mechanical subteam (:P). But mainly it's a result of time. I'm usually so busy that it's not really worth my time to make these improvements that won't really have a worthwhile effect. Perhaps if the programming team was more than one person, it would be more manageable to produce beautiful, efficient code.
I'm determined to expand the programming team for next year, so we'll see what happens with the code then.

Though over the summer, when I actually have time, I do like to iterate on some of the code that may be used in the future. I've remade our swerve drive code a few times, and now it's significantly more efficient and organized than the original version. And I know there's still room for improvement, which I'll try to tackle this summer.

faust1706
12-06-2015, 20:11
Sorry for the lack of quotes, on mobile. The blame is also on us. My mentor had me calculate the big-O of every algorithm I designed, and he questioned every algorithm design I had. If I didn't have a well thought out answer, I wasn't allowed to use it. We've been using that code base ever since, haven't changed a line of it, and it has been able to adapt to the 2013 game, a 3-camera system in 2014, and 3D imaging this year.
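As a concrete example of the kind of question he asked (this is an invented illustration, not our actual code): even something as small as a duplicate check over detected targets can be written in O(n²) or O(n), and knowing which one you wrote is the habit that matters.

```python
def has_duplicates_quadratic(xs):
    # O(n^2): compare every pair of elements.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_linear(xs):
    # O(n): one pass, remembering what has been seen in a set.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_duplicates_quadratic([3, 1, 4, 1]))  # True
print(has_duplicates_linear([3, 1, 4]))        # False
```

For FRC-sized inputs both versions are plenty fast; the point is being able to answer "what's the big-O of this?" before using it.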

Part of our partnership with NVIDIA will be promoting good code in the community and getting people excited about programming. We thought about hosting a class at worlds, but we feel that even if we held one, not many people would show up.

I do not have a solution to this deeply rooted problem, as it ultimately comes down to each individual team. Addressing it as a problem is the first step. Another idea we had with NVIDIA is an award for quality of source code; teams that want to be contenders would submit their code online before competition begins.

teslalab2
12-06-2015, 20:12
I can say this: I was the programming team and half of the mechanical team, and though I spent a lot of time on the code, it worked and that was about it. When I finally wanted to get around to fixing the spaghetti in my code, the robot mechanically suicided and I didn't have time to fix the code; this went on constantly until build was over.

Necroterra
12-06-2015, 20:16
Throwing in my 2 cents about some of the reasons teams might have poor software.


Not enough manpower: I think that many teams have only a single programmer. If so, it is not trivial for one person without any support to learn and apply everything that goes into building a good project.
Not enough support: I also wouldn't be surprised if many teams had no dedicated software mentors. Additionally, even here on CD we don't have much of a focus on software, and the WPILIB documentation, while decent, doesn't teach anything about how to design good software (Git, how to organize code, patterns, etc.)
Not enough visibility: Almost every other aspect of a team is immediately present in their robot and team - you can see the mechanical design, the team branding, the manufacturing, etc. but aside from the occasional code release on CD, I would bet that many teams' programmers haven't really even looked at anyone else's code.
Not enough incentive for good software: aside from innovation in controls, there is very little incentive for medium or low performing teams to put a lot of resources into their code. Most teams don't have a robot that performs well enough for things like extendable/flexible code or vision to actually matter. The roboRIO is more than fast enough for pretty much everybody, and with the CANTalons, the vast majority of functionality is achieved by very simple code.
Teams' attitudes towards software: I think some teams have a very negative or accusatory attitude towards the software subteam. I've definitely seen some shirts / signatures / quotes / jokes that seem to indicate some (probably not conscious) lack of respect towards programmers. I think our team had some bad experiences with these sorts of things before I joined, but I can't myself comment on it.
Limited robot access: It is inherently difficult to develop software for a machine that isn't built yet. Design/fabrication teams will always want to improve the robot for all six weeks, meaning the programmers have to fight for time with the robot.
Time limitation: six weeks is a really short timeframe for software. Implementing a good workflow, doing code reviews / refactors, etc. just doesn't really make sense on such a short timescale. I think many teams don't do these sorts of things during the offseason either, either because they just don't work much on robots outside of the main season or for any of the reasons above.


I just think overall, the structure and culture of FRC isn't honestly that great for learning software unless the students are really motivated. I've definitely gotten a lot out of it, but I also have been doing a lot of research and outside work. In my one year of experience, I ran into many of the above issues.

Also, forbidding the modification of release code (that is, working code at competition) is a great idea. Aside from hotfixes / emergencies / changing something trivial like autonomous values, or controller inputs, messing with code at competition is bad practice.

EricH
12-06-2015, 20:18
All of the threads are something like either "What programming language should I use?", "Where do I go to learn how to code in X language?", "Why does this code not work?", "What is a PID?", etc. I think it could really benefit the teams on here if there was more in depth conversation.

Caution: "Not a Programmer" Talking in Programming Thread.

I'm going to take the above as kind of a "trigger point". Mainly because, it's what got my attention.

Who is asking those questions?

No, seriously, who is asking those questions? And, just for grins, why?

Think about it.


Now that you've thought about it, from what I've seen an awful lot of those threads are your first-time programmers. Not 2nd, 3rd, or 4th year programmers, but first-timers. The exception is the "why is this not working", which can come from anybody who is trying something new, or just wants a code review.

The other factor is that many teams don't give their programmers a robot until Week 5.75 if the programmers are lucky. Then they expect the robot to work in the first match on the field. See: hotfix the code and pray it works. Then fix some other bug. Repeat.


1197 does have a good programming team--but I know enough not to ask questions. (I leave that to the programming mentors.)

connor.worley
12-06-2015, 20:25
Another idea we had with nvidia is to have an award for quality of source code, teams that want to be a contender would submit their code online before competition began.

If you do this, allow submissions after competition. Lots of cool features get added between competitions.

Pault
12-06-2015, 20:29
Caution: "Not a Programmer" Talking in Programming Thread.

I'm going to take the above as kind of a "trigger point". Mainly because, it's what got my attention.

Who is asking those questions?

No, seriously, who is asking those questions? And, just for grins, why?

Think about it.


Now that you've thought about it, from what I've seen an awful lot of those threads are your first-time programmers. Not 2nd, 3rd, or 4th year programmers, but first-timers. The exception is the "why is this not working", which can come from anybody who is trying something new, or just wants a code review.

The other factor is that many teams don't give their programmers a robot until Week 5.75 if the programmers are lucky. Then they expect the robot to work in the first match on the field. See: hotfix the code and pray it works. Then fix some other bug. Repeat.


1197 does have a good programming team--but I know enough not to ask questions. (I leave that to the programming mentors.)

Yes, I completely realize that. I never said it was a bad thing that those threads exist. I was just pointing out how few threads actually provoke deeper conversations about coding than that. Imagine if nearly every thread about mechanical topics were "What CAD software should we use?", "Where can I learn to use X CAD software?", "Will this tank drive design function without breaking?", "How do gear ratios work?", etc. The technical side of CD would get boring really fast.

EricH
12-06-2015, 20:45
Yes, I completely realize that. I never said it was a bad thing that those threads exist. I was just pointing out how few threads actually provoke deeper conversations about coding than that. Imagine if nearly every thread about mechanical topics were "What CAD software should we use?", "Where can I learn to use X CAD software?", "Will this tank drive design function without breaking?", "How do gear ratios work?", etc. The technical side of CD would get boring really fast.

I did touch on some of that--later in the post.

Really simply, most teams' programmers get the robot really late in the season--and most teams' programmers are then trying to get the robot running under pressure from mechanical who wants drive time. Ugly code that works is of far, far greater value to the team than really nice, standards-compliant, reusable year-after-year code. That is the perception, at least. And it is very difficult to break out of that without a determined effort by one or more programmers to force the issue.

And then there's another problem. 4 years. Every 4 years, there is a complete turnover. (I intentionally exclude mentors from this.) Given that many teams have limited programmers in the first place (and, given some of those threads, it's tough to get a programmer to step up to replace a lone programmer who is moving on), there isn't really a good progression... so even if you do get a programmer or two who are starting to force the team towards high quality, or do more complex things, right about the time they hit that point they're gone, and someone else is starting from near-zero. You can argue that if they were doing it right that wouldn't be an issue, but getting to the point of doing it right can take YEARS.



As far as complexity...Let's just say that some mechanical mentors aren't willing to trust the programmers with more than basic sensors, and have to be poked, prodded, and otherwise convinced to (allow the programmers to) use the more advanced items. (I'm not one of them--but I prefer the simpler ways of doing things over the more complex ways.) OTOH, when the complex stuff works right, they suddenly want a lot more... Just got to get them there--again, that usually takes a programmer who insists on it until... oops, graduated another one, time to start over. ;)

Pault
12-06-2015, 20:48
Also, forbidding the modification of release code (that is, working code at competition) is a great idea. Aside from hotfixes / emergencies / changing something trivial like autonomous values, or controller inputs, messing with code at competition is bad practice.

I would like to introduce you to git (https://git-scm.com/). Version control and branches make it possible to do the stuff that you are talking about easily with almost no risk of shooting yourself in the foot. Just remember to commit often even in the rush of competition and never to send your robot into a match with untested code changes.

Jared
12-06-2015, 21:02
No one cares about how inefficient a routine is so long as it works in a match.

As somebody who honestly doesn't care how inefficient my (or anybody else's) software is so long as it works in a match, I hope I can explain my mentality.

The simple version is that FIRST is an engineering competition, not a science fair or research project. It's the difference between an engineer at a company that designs and manufactures really cheap CD players and a college professor. The engineer designing the CD player realizes that their solution isn't perfect, and shouldn't be, due to its limited parts cost and design time. The college professor has years and years to conduct research and strives to achieve perfection.

It's not always about doing things in a very academic, well documented, extremely optimized way, but instead it's about getting things done that work as well as they need to, but no better. Why shoot for 60 fps when 2 fps works just as well? Why learn about optimization techniques for things that are efficient enough? Why change code that already works when there is other work to be done?
To many, FIRST is an engineering competition. Teams must engineer a solution to a problem in a limited amount of time, just like real engineers do in the "real world". There is a finite amount of time to optimize an infinite number of things, so engineers must make tradeoffs. Should they give the programmers 10 hours with the robot to write vision software, or should they iterate on their hook so that they can lift totes easily? I didn't see a team this year that seriously benefited from having computer vision on their robot. Should a programmer spend time optimizing and threading their code when they could spend time practicing driving the robot?

Most people (myself included) judge robots by their ability to score points, prevent opponents from scoring, win matches (if applicable), seed high, and progress in the tournament structure of the competition.

A really cool mechanical thing that isn't very effective in a match is unimpressive - for instance, some 3D printed parts, many carbon fiber parts, many crazy magnesium alloy parts...

A really simple mechanical thing that is very effective in a match is very impressive. Look at the robots from 118, 254, 1114, and other really great teams. They use traditional methods of manufacturing with traditional materials because it works.

A really cool software thing that isn't very effective (vision processing this year) is unimpressive to the crowd.

A piece of software that just works is impressive, regardless of its efficiency, documentation, or organization.

I never saw a robot and thought "Gee, that team would be so much better off if their software was more efficient", or "that team could beat the Poofs if their vision system had a tiny bit less lag". Instead, I thought things like "if only their elevator was a little faster".



A lot of teams forbid the altering of code once it works, which is a disgusting practice. Teams get 10-15 fps on a vision program and say "it's good enough" when they haven't even done their research on optimization techniques.

I have done both of these things. Once my code works, I don't change it. There are an infinite number of better things I can do with my time during build season than tweaking code to be slightly more efficient. If it's efficient enough to do what I need it to do, I don't want to waste either my time or my team's time with changing things.

What if 10-15 fps is enough? What if 1 frame per second is good enough?
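One way to sanity-check "is 15 fps enough?" is simple arithmetic (the speeds here are illustrative, not measured): how far does the robot travel between consecutive frames?

```python
def inches_moved_per_frame(fps, speed_ft_per_s):
    # Distance the robot covers between two consecutive camera frames.
    frame_period_s = 1.0 / fps
    return speed_ft_per_s * 12.0 * frame_period_s  # feet -> inches

# A robot moving 10 ft/s:
print(inches_moved_per_frame(15, 10))  # 8.0 inches between frames at 15 fps
print(inches_moved_per_frame(60, 10))  # 2.0 inches between frames at 60 fps
```

Whether 8 inches of staleness matters depends entirely on what the vision data is used for, which is exactly the "good enough" judgment being made.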


I guess what I'm saying is that most teams' software is 'good enough' and is rarely what causes a team to be unsuccessful. There are many teams with very well written software and ineffective robots, but there aren't many teams with effective robots limited by bad software. Rarely are the best robots filled with extremely complex algorithms and parts, but rather, well engineered simple solutions. The code posted in Chief Delphi whitepapers often deals with concepts more advanced than anything found on any world champion robot.

marshall
12-06-2015, 21:26
As somebody who honestly doesn't care how inefficient my (or anybody else's) software is so long as it works in a match, I hope I can explain my mentality.

The simple version is that FIRST is an engineering competition, not a science fair or research project. It's the difference between an engineer at a company that designs and manufactures really cheap CD players and a college professor. The engineer designing the CD player realizes that their solution isn't perfect, and shouldn't be, due to its limited parts cost and design time. The college professor has years and years to conduct research and strives to achieve perfection.

It's not always about doing things in a very academic, well documented, extremely optimized way, but instead it's about getting things done that work as well as they need to, but no better. Why shoot for 60 fps when 2 fps works just as well? Why learn about optimization techniques for things that are efficient enough? Why change code that already works when there is other work to be done?
To many, FIRST is an engineering competition. Teams must engineer a solution to a problem in a limited amount of time, just like real engineers do in the "real world". There is a finite amount of time to optimize an infinite number of things, so engineers must make tradeoffs. Should they give the programmers 10 hours with the robot to write vision software, or should they iterate on their hook so that they can lift totes easily? I didn't see a team this year that seriously benefited from having computer vision on their robot. Should a programmer spend time optimizing and threading their code when they could spend time practicing driving the robot?

Most people (myself included) judge robots by their ability to score points, prevent opponents from scoring, win matches (if applicable), seed high, and progress in the tournament structure of the competition.

A really cool mechanical thing that isn't very effective in a match is unimpressive - for instance, some 3D printed parts, many carbon fiber parts, many crazy magnesium alloy parts...

A really simple mechanical thing that is very effective in a match is very impressive. Look at the robots from 118, 254, 1114, and other really great teams. They use traditional methods of manufacturing with traditional materials because it works.

A really cool software thing that isn't very effective (vision processing this year) is unimpressive to the crowd.

A piece of software that just works is impressive, regardless of its efficiency, documentation, or organization.

I never saw a robot and thought "Gee, that team would be so much better off if their software was more efficient", or "that team could beat the Poofs if their vision system had a tiny bit less lag". Instead, I thought things like "if only their elevator was a little faster".





I have done both of these things. Once my code works, I don't change it. There are an infinite number of better things I can do with my time during build season than tweaking code to be slightly more efficient. If it's efficient enough to do what I need it to do, I don't want to waste either my time or my team's time with changing things.

What if 10-15 fps is enough? What if 1 frame per second is good enough?


I guess what I'm saying is that most teams' software is 'good enough' and is rarely what causes a team to be unsuccessful. There are many teams with very well written software and ineffective robots, but there aren't many teams with effective robots limited by bad software. Rarely are the best robots filled with extremely complex algorithms and parts, but rather, well engineered simple solutions. The code posted in Chief Delphi whitepapers often deals with concepts more advanced than anything found on any world champion robot.

+1.

I couldn't have said it better.

Necroterra
12-06-2015, 21:45
I would like to introduce you to git (https://git-scm.com/). Version control and branches make it possible to do the stuff that you are talking about easily with almost no risk of shooting yourself in the foot. Just remember to commit often even in the rush of competition and never to send your robot into a match with untested code changes.

That, actually, is what I was trying to get at. I might have worded it badly, sorry.

The Gitflow workflow is, in my opinion, perfect for FRC Teams. Work on features and develop throughout build season, then once drive practice gets close, start a release branch, and put out a master update for the drivers to practice on (or of course just have them practice on develop... it depends on how much driver practice you would want to do). Then, a few days before each competition, start another release and clean code, disable any extraneous or nonworking features, decide on autonomous routines, etc. Changes at competition should be hotfix branch only.

AlexanderTheOK
12-06-2015, 22:30
I didn't see a team this year that seriously benefited from having computer vision on their robot.

This part is really important here. I was a rookie in 2012, a year when vision processing was actually a useful tool for aiming due to the small size of the baskets. Every offseason after, I developed vision code (going from thresholds to SURF, ORB, and cascade classifiers) hoping that in the 3 YEARS I HAD LEFT, there would be ONE GAME in which vision processing was useful.

So far, the closest thing to useful vision processing in those 3 more years I spent programming has been cheesy vision. While it is an ingenious solution to an unexpected but consistent field failure, it was not robot vision processing; it was a simpler, quicker-to-develop, and more effective alternative to it.

This year, at my last regional as a student ever, Ventura, there were two teams that were able to consistently stack the 3 yellow totes, 1717 and 696.

Both robots accomplished this task without cameras, because cameras were not needed.

As others have said, it's an engineering competition, where we are making a robot that scores points. When code efficiency scores teams points, you'll see a lot of teams start making efficient code.

John Retkowski
12-06-2015, 22:36
I don't know why, but this thread reminds me of a joke one of the mechanical mentors and I started making this year about forgetting the code and just winding up the robot before every match. It's a mechanical team's dream...

Basel A
12-06-2015, 22:50
I agree with many of the above posters about code efficiency, but not about the relative gains of code improvements. While "getting it to work" is a bare minimum that most teams accept, automating small tasks can help a driver tremendously (place stack, shoot once in position), as can sophisticated control like maintaining lift position rather than open loop lift control by the driver. Like anything else there are diminishing returns, but I do think most teams would do better if they put a higher priority on their code/controls.
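To illustrate what "maintaining lift position" buys over open loop, here is a bare-bones proportional hold on a simulated lift (the gain, droop, and units are all invented for the sketch):

```python
def p_hold(setpoint, position, kp=0.5):
    # Proportional controller: output scales with position error.
    return kp * (setpoint - position)

# Simulate a lift that sags under gravity a fixed amount each cycle.
position = 30.0   # inches
setpoint = 30.0
for _ in range(50):
    output = p_hold(setpoint, position)
    position += output - 0.2  # motor effect minus gravity droop
print(round(position, 2))  # settles near 29.6: steady-state droop of P-only
```

With no controller the lift would sag 0.2 inches every cycle until it bottomed out; the P term holds it within a fraction of an inch. The small remaining offset is the classic steady-state error of a proportional-only controller, which is what an integral term exists to remove.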

In my mind, there are two ways an individual can work to change the status quo of "no one cares about code." The first is obvious: work with your team to do what you believe is important. Different teams differently value scouting, CAD, marketing. There are teams that are known for their excellence in their preferred areas. Be the team that excels in code and spreads the word.

The second way is also obvious. If you think people should critique code on Chief Delphi the same way they do CAD, then you do that. It only takes two people to have a conversation. Perhaps you'll start a larger trend, maybe you'll just give some good advice. Teams quite often post their codebases on CD, and usually receive no response. Start the conversation you want to see.

I do think FRC would better prepare students for 21st century engineering if FRC games placed a higher value on code, but I also recognise the difficulty of effectively doing that while maintaining watchability and other priorities. Not to mention that it's not FIRST's chief goal to prepare students but rather to inspire them.

Ari423
12-06-2015, 23:25
Limited robot access: It is inherently difficult to develop software for a machine that isn't built yet...

Time limitation: six weeks is a really short timeframe for software...

What I find helps is writing the code before the robot is built. Sure, you end up modifying or deleting most of it, but taking the time to make things look nice while you are waiting for the robot to be ready to be tested gives you the standard for the modifications you make to your code. Also, thinking about all of the different possibilities for a general idea opens your mind to new ideas the mechanical team hadn't thought of.

Not enough manpower: I think that many teams have only a single programmer. If so, it is not trivial for one person without any support to learn and apply everything that goes into building a good project.

Not enough support: I also wouldn't be surprised if many teams had no dedicated software mentors.

I understand about what people are saying about only having one programmer and no programming mentors, but I am in that situation as well and I still am able to make efficient and well-formatted code. I think what is important is having someone else's clean code to look up to. The programmer before me left me his old code. While some of it was dirty, most of the competition code was neat and easy to understand, even without comments. Perhaps it's because he was an artist as well, but I really appreciated the time he took to make the code look nice (which helped debugging and such because you knew instantly what you were looking at).

Not enough incentive for good software: aside from innovation in controls, there is very little incentive for medium or low performing teams to put a lot of resources into their code....The roboRIO is more than fast enough for pretty much everybody, and with the CAN Talons, the vast majority of functionality is achieved by very simple code.

While I agree that there isn't much incentive in FIRST, I feel that programmers should strive for more than functionality. When you leave FIRST and go to get a job, your employers will want other programmers to be able to understand your code, and bad practices and dirty code don't help with this. When your code is clean it makes updating it and advancing it much easier.

Most teams' programmers get the robot really late in the season--and most teams' programmers are then trying to get the robot running under pressure from mechanical who wants drive time. Ugly code that works is of far, far greater value to the team than really nice, standards-compliant, reusable year-after-year code.

This, again, is where making code before you get the robot helps. You will need to rewrite stuff, but it's less work than writing the entire code from scratch.

To many, FIRST is an engineering competition. Teams must engineer a solution to a problem in a limited amount of time, just like real engineers do in the "real world". There is a finite amount of time to optimize an infinite number of things, so engineers must make tradeoffs.

I agree with what you are saying; there is a point where code can become over-developed. But there are some practices that take no extra time, but make the code more efficient and easier to read. And if it doesn't work the first time, it makes it easier for the programmer or someone helping to debug. Learning good coding practices in the off-season can greatly help during the build season.

It's an engineering competition, where we are making a robot that scores points. When code efficiency scores teams points, you'll see a lot of teams start making efficient code.

Even though optimized code doesn't score you points, it does make getting to the point where you can score points, and debugging when you can't, easier. It takes less time to copy to the robot, and a well-formatted dashboard can help you debug the robot mid-match and save yourself half a match (which we needed to do and were able to thanks to our dashboard).

Brian Maher
12-06-2015, 23:47
Requiring teams to submit their code to the judges for review might fix this, but I don't know if the volunteer capacity exists.

I think this has a lot of potential as an optional submission, but require it to be considered for the Innovation in Control Award, like how a Business Plan submission is required for the Entrepreneurship Award.

This year, I had our team's programmers assemble a document including the code, with full comments, and some flowcharts and rationales. It helped them better understand their decision making process, realize its advantages and weaknesses, and identify ways to improve. It's also nice to have another deliverable for judges.

P.S. getting programmers to document their work can be difficult.

sforsyth
13-06-2015, 00:01
You most likely have not worked in the real world... ahhhh to have a fresh college mind where everything is perfect and there is plenty of time to make everything perfect and just the way *I* want it :p

There are others that have said it in this thread I'm sure... the perfect code is whatever you think it is, your perfect code is not necessarily mine. If it works, then it is probably perfect to the person that wrote it.

Most of these kids are brand new to programming and it is better served just to learn the language, leave the perfection to College courses if that is what they decide to go into.

That being said, I'm sure there are many ways to help them learn to write clean code, and that really should be what you are asking... how can we help the kids?

jman4747
13-06-2015, 00:04
A lot of teams forbid the altering of code once it works, which is a disgusting practice.

I would say that depends on the timing. If you're 10 minutes from queuing for another match or in the finals and everything is working, then I'd usually say leave it. If it's mid-August then yeah, have at it. With what we need the code to do and for how long, it's not always smart to change a stable iteration. If it's not critical that the robot be working for a while, then optimization should obviously be encouraged. In the middle of competition, however, I wouldn't be making major changes to algorithms, structure, data flow, etc.

AdamHeard
13-06-2015, 00:18
Aside from vision (and possibly elaborate real time spline generation) what even deals with enough data in FRC to worry about O complexity?

SoftwareBug2.0
13-06-2015, 00:45
Year after year, the vast majority of teams are writing poorly written, inefficient, and disorganized code. Their code base fails to generalize for multiple years due to poor design, but no one seems to care or talk about this.

Someone posts a cad drawing of a robot that they did for practice. People ask questions asking about the foundations of the design ("How thick is the G10?"), but no one asks questions when code is released. No one asks, "what is the *big-O notation of this function?" No one cares enough to truly scrutinize someone else's code. No one cares that an on board vision program isn't threaded. No one cares about how inefficient a routine is so long as it works in a match. A lot of teams forbid the altering of code once it works, which is a disgusting practice. Teams that get 10-15 fps on a vision program and say "it's good enough" when they haven't even done their research on optimization techniques. The vast majority of code would get a 'C' at best in an intro to programming class in a high school. So why is no one talking about this?

I agree there's a lot of poor quality code in FRC and little higher-level discussion. I think there are many reasons for this, some of which have already been mentioned.

-A lot of high schoolers do poorly with programming: Even though the AP computer science test is ridiculously easy, over 30% of students who took it last year received a 1 (for those not familiar with AP tests, 1 is the lowest possible grade). And out of all the AP tests given, that was the highest percentage of ones awarded.

-Reading code is not very fun. I know you've released a couple of interesting projects and it's cool to know what they do but I'm pretty much not going to read their source code unless I want to reuse them.

-The code libraries that you're given in FRC don't set you up to do fancy things well. They're non-modular in a way that makes unit testing hard. It's hard enough that I don't think I would have figured out a way to do it when I was in high school. Sure there are teams that have worked around this and built things that you could build on, but they're not well known. For example, my team publishes our code, which includes a very different way of using C++, but I don't know of any other team that's even looked closely at it. Similarly, 1510 has published an alternate framework for Java which they've been using, but I only know of one other team that has bothered to try it.
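
A common workaround for the testability problem described here is to put a thin abstraction between the robot logic and the hardware, so the logic can be unit tested without a roboRIO. A generic sketch (the class names are invented, not from WPILib or any team's published framework):

```python
# Sketch of hardware abstraction for off-robot unit testing. On the
# robot, Drivetrain would be constructed with real motor controller
# objects; in tests, we pass fakes that just record their output.

class FakeMotor:
    def __init__(self):
        self.output = 0.0

    def set(self, value):
        # Clamp to [-1, 1] like a real speed controller would.
        self.output = max(-1.0, min(1.0, value))

class Drivetrain:
    """Pure logic: knows nothing about real hardware, only the
    motor interface (anything with a set() method)."""
    def __init__(self, left_motor, right_motor):
        self.left = left_motor
        self.right = right_motor

    def arcade_drive(self, throttle, turn):
        self.left.set(throttle + turn)
        self.right.set(throttle - turn)

# Unit test: no robot required.
left, right = FakeMotor(), FakeMotor()
drive = Drivetrain(left, right)
drive.arcade_drive(0.5, 0.25)
print(left.output, right.output)  # -> 0.75 0.25
```

The point is the seam, not the math: once the drivetrain logic only depends on an interface, the mixing and clamping behavior can be verified on a laptop in milliseconds instead of on a contested practice robot.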

-Optimization and efficiency are ignored because they can be. To do what the average team does the processor is about two orders of magnitude faster than it needs to be and you have 3 or 4 orders of magnitude more RAM than needed. I always think that I'm going to get to teaching the students on my team about efficiency when we get to the point where there would be some benefit to doing so, but things always end up 'good enough'.

-FRC games allow success with limited software functionality. Every year there are robots that have no autonomous program at all and win tournaments. This is silly. The game should start with each robot in a box taped on the floor and you shouldn't get driver control until it leaves the box.

One thing that I must disagree with you about is that restricting changes to code can be rational. Even if your code is correct, it does not follow that your system will behave as expected. Minor code changes can and do trigger bugs in other parts of the system. You might now be triggering a different (and buggy) code path in a library you're using. We've all seen compiler bugs before. And there's the famous "download silently failed and left the system in an unusable state". When one of these other problems occurs it would be nice to find out why, but right before a match is neither the time nor the place.

Also, there is some interesting coding going on even if it's not going on a robot. For example, here: http://www.chiefdelphi.com/forums/showthread.php?threadid=137451.

gblake
13-06-2015, 01:05
I am surprised by how few people have challenged the OP's assertion that I'll paraphrase as "most FRC software stinks"; and by how many respondents seem to embrace that assessment.

It is completely impossible to make a judgment like that without knowing what the software is required/expected to do.

You can easily make a case for McCall's criteria or other rather abstract notions of what is/isn't high-quality software being completely irrelevant if the software is satisfying all requirements placed on it.

McCall, et al published their important work to take a shot at being able to evaluate software written for use in a certain general context. However, outside of that context, other criteria often apply.

First agree on what you want a given collection of software to do and be. Then have a proper conversation about quality. Don't put the cart before the horse. Don't let the tail wag the dog.

Blake

faust1706
13-06-2015, 01:20
You most likely have not worked in the real world

This is my second summer interning as a software developer. The code base is cleaner than anything I have ever written and it is highly organized: extremely navigable, highly commented, and the code you submit has to get approval from your ENTIRE team before it gets pushed. The company deals with massive amounts of data for analysis. Optimization is the name of the game. More functions than not have their corresponding big O documented, and many have big theta as well. Anyways, irrelevant.

I agree there's a lot of poor quality code in FRC and little higher-level discussion.

A lot of high schoolers do poorly with programming

Studies suggest that roughly 40 percent of people aren't even capable of learning to program. Just look at the fizz buzz test. Really good read. (http://blog.codinghorror.com/why-cant-programmers-program/)
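
For anyone who hasn't seen it, the fizz buzz screener from that article asks candidates for roughly the following, and a surprising fraction of applicants can't produce it:

```python
# The classic FizzBuzz screening problem: for 1..n, output "Fizz" for
# multiples of 3, "Buzz" for multiples of 5, "FizzBuzz" for multiples
# of both, and the number itself otherwise.

def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```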

I am surprised by how few people have challenged the OP's assertion that I'll paraphrase as "most FRC software stinks"; and by how many respondents seem to embrace that assessment.

First agree on what you want a given collection of software to do and be. Then have a proper conversation about quality.


Because it is the general consensus it seems. I did expect some backlash. While robots vary greatly year after year, the software generally does not. I forget where I read this, but it basically said that software is not considered good if it isn't able to adapt to an unforeseen use of it.

Imagine if Google crashed when you tried to find a route from London to Tokyo.

Making adaptive code is something that needs to be a goal from the start, not an afterthought.

How about we all just read Zen and the Art of Motorcycle Maintenance and call it a day in regards to a conversation of quality?


Reading code is not very fun.

I love reading code, especially elegant and concise code. I think I'll make an off season "competition" of a code standard challenge from the team's code this year. I'll begin working on the details tomorrow with a friend and anyone else who is interested.

artK
13-06-2015, 02:17
I've also given the quality and development of FRC controls and software a fair bit of thought. I have also thought for a while about the amount of software discussion versus mechanical, strategic and even business discussions on Chief Delphi.

I think part of it has to do with the lack of robotics programmers, both on CD and in the FRC community as a whole. A lot of teams don't have dedicated software mentors who can teach the students the principles of software engineering (especially design patterns, which allow for easily expandable code) and control theory (at the very least some feedback control, but the more the merrier). Most teams would be lucky to get a computer science teacher, who generally teaches the basics (which are important, but don't necessarily lead to good software design). This leaves the upperclassmen, who may not have had years of programming experience when joining the team, to teach the lowerclassmen, with leadership changing every year or two.

A reason for the lack of mentors with experience in robotics software may be the lack of exposure of computer science and software engineering students to the field, usually until grad school or dedicated robotics programs like those at WPI or Waterloo, which don't produce a huge number of people. And robotics software differs enough from what CS and SE students typically work with that plunging in head first is intimidating, so many never give robotics much thought.

Additionally, I bet a lot of students use other websites not only to help debug their code, but to copy and paste code from a website, which doesn't lead to as much learning. I would like to see somebody try to copy/paste a CAD model.

gblake
13-06-2015, 02:33
Because it is the general consensus it seems. I did expect some backlash. While robots vary greatly year after year, the software generally does not. I forget where I read this, but it basically said that software is not considered good if it isn't able to adapt to an unforeseen use of it.

Imagine if Google crashed when you tried to find a route from London to Tokyo.

Making adaptive code is something that needs to be a goal from the start, not an afterthought.

How about we all just read Zen and the Art of Motorcycle Maintenance and call it a day in regards to a conversation of quality?
Whether or not a general impression about low quality exists, what is probably more important to think about is whether or not it is accurate.

The notion of adapting to unforeseen use is interesting to ponder. What does it have to do with software that isn't required to adapt to unforeseen circumstances? In that circumstance creating adaptable software is a waste of resources.

I read that book long ago. No need to repeat that.

Instead why don't we discuss what our software is supposed to do and be *before* we decide that it is a failure. The software development industry is no stranger to the importance of putting a good specification in place early.

I understand your thesis; but I don't think you have proven it. Until you can put together a good argument that the majority of FRC Software doesn't satisfy teams' requirements, put me in the camp that both says there is always room for improvement, and says that a general indictment is unwarranted.

gblake
13-06-2015, 03:04
Help me out folks, if a team's software isn't made public, isn't it supposed to be rewritten from scratch every year?

And even if all of a team's software is made public from year to year, isn't the general notion of the annual FRC learning experience that software, like the physical robot, will be largely developed anew, during the season, each year?

I realize that there is a broad spectrum of ways teams actually do their software development; but isn't the clean sheet of paper, plus some true off the shelf code, sort-of the big picture thrust of the rules?

Necroterra
13-06-2015, 04:28
Help me out folks, if a team's software isn't made public, isn't it supposed to be rewritten from scratch every year?

And even if all of a team's software is made public from year to year, isn't the general notion of the annual FRC learning experience that software, like the physical robot, will be largely developed anew, during the season, each year?

I realize that there is a broad spectrum of ways teams actually do their software development; but isn't the clean sheet of paper, plus some true off the shelf code, sort-of the big picture thrust of the rules?

I'm pretty sure that you are right and all reused code is supposed to be public. I think a lot of teams throw it on their website or a GitHub repo, but it isn't under an easy-to-find name. However, it is also really hard to enforce this rule and it usually isn't that big of a deal, so it gets ignored as far as I know.

I don't see why you would ever start completely from scratch - things such as path planning, vision code, state machines, interfaces, dashboard tools, etc. can be developed and refined over years, and generally can't be done well in 6 weeks. Re-deriving your algorithms every year might be a learning experience, but it won't teach you as much as refining and expanding your codebase will.

I think that there are some situations where following this rule and being an effective FRC programmer don't necessarily align. Here are a few theoretical examples where the rule is technically broken.


- I wrote a spline generator in the offseason / last year, and copy pasted it into my project this year without putting it on GitHub. This is technically against the rules, but at least the student is still learning.
- I had my friend / parent / mentor give me a spline generator they wrote for college, and copy pasted it without making it public. This is bad, since you neither shared it nor wrote the code yourself.
- I opened my code from last year, and used it as a reference for how I did certain things such as getting joystick inputs. This to me is fine, since while you might retype certain lines verbatim, a single line or method call barely counts as code - in this case, you are really just not having to look things up in the documentation as much.

I love reading code, especially elegant and concise code. I think I'll make an off season "competition" of a code standard challenge from the team's code this year. I'll begin working on the details tomorrow with a friend and anyone else who is interested.

I would love this. So would it be like you provide a description of the robot and its sensors / components, and people submit code that should make it run?

efoote868
13-06-2015, 08:47
On a one-off project, if I have a specific set of tasks and I'm writing serial code that must execute once every 20 milliseconds, why should I care if it takes 2 microseconds or 10 milliseconds to execute?

If I have 512 MB of memory available, why should I care if I use 10 kb or 300 MB, so long as it holds everything I need and works the way I want it to?


Particularly in one-off projects where I'm not designing to value, more important than smallest, absolute fastest, most efficient is code readability and "it just works." If I have 10 hours to write and test code to fit a spec, you'd better believe I'm spending about 8-9 hours testing my code to make sure it works. If I spend any time optimizing, that is time not spent testing.

I understand that if I'm manufacturing 100 million widgets and I can save a fraction of a cent on manufacturing PCB or hardware specs I'm doing it, but if I'm doing just one prototype, cost is not my motivating factor.

Michael Hill
13-06-2015, 09:13
Sorry for lack of quotes, on mobile. The blame is also on us. My mentor had me calculate the big O of every algorithm I designed, and he questioned every algorithm design I had. If I didn't have a well thought out answer, I wasn't allowed to use it. We've been using that code base ever since, haven't changed a line of it, and it has been able to adapt to the 2013 game, a 3 camera system in 2014, and 3d imaging this year.

Part of our partnership with nvidia will be promoting good code in the community and getting people excited about programming. We thought of the idea of hosting a class at worlds, but we feel even if we had it, not a lot of people would show up.

I do not have a solution to this deeply rooted problem as it ultimately comes down to each individual team. Addressing it as a problem is the first step. Another idea we had with nvidia is to have an award for quality of source code, teams that want to be a contender would submit their code online before competition began.

Here's one of the interesting things about FRC. Your mentor has taught you some really good skills that are applicable at a much higher level than what's normally done in FRC. A bit academic, but from your other posts I've seen on here, it looks like it's made you a better developer. While code may have been developed faster for the robot, it doesn't really matter, because you learned something. While you may not calculate the Big O for every algorithm you do, you will probably keep it in mind in any future algorithms you write.

marshall
13-06-2015, 09:20
I understand that if I'm manufacturing 100 million widgets and I can save a fraction of a cent on manufacturing PCB or hardware specs I'm doing it, but if I'm doing just one prototype, cost is not my motivating factor.

Even with large scale manufacturing, cost is no longer the limiting factor thanks in large part to companies like these: http://www.arm.com/ and http://www.intel.com/

Processing power is cheap and there are so many layers of abstraction these days that optimizing for Big O isn't necessarily where you'll see the most performance gains. From my experience in the manufacturing and software development industries, there is a lot more to be gained from network optimizations than there is from Big O these days. You can optimize an algorithm all day long but if you send it via a crappy protocol then the information might not get there or, more likely, it will arrive late.

My point being:

There is more than one way to skin most cats, including software cats

Meow.

artK
13-06-2015, 09:25
Help me out folks, if a team's software isn't made public, isn't it supposed to be rewritten from scratch every year?


Yes. The rules usually state this fairly early on in the manual. I think Frank even made a post telling teams this rule early so people would publish it.


And even if all of a team's software is made public from year to year, isn't the general notion of the annual FRC learning experience that software, like the physical robot, will be largely developed anew, during the season, each year? I realize that there is a broad spectrum of ways teams actually do their software development; but isn't the clean sheet of paper, plus some true off the shelf code, sort-of the big picture thrust of the rules?


Yes, new software is supposed to be written each year. The real question is what new software needs to be written. A number of teams, from my general impression, will start essentially from scratch each year. I believe this may contribute to the problem. Since most teams don't/can't reuse the code from the previous year, they have to start from what they know, and usually without help. It's like asking a designer who doesn't necessarily know what goes into a drivebase to CAD one up. Yeah, it might work, but it's probably really heavy, slow, and doesn't turn well.

By comparison, other teams will reuse the framework from the previous year, and adapt it to the new robot. It's like starting the season with enough COTS parts to build a drivebase, and the parts increase in quality and quantity each year. Is it ready for competition? No. Does it save development time, allowing programmers to work on other things? Yes. Is there an easily available framework for teams to use? Not really, at least not now. You could borrow code from teams like 254 or 236 who post their code every year, or try to use the scripting languages I've seen 1902, 987, and 4334 post (I think those teams have done that in the past).

Greg McKaskle
13-06-2015, 10:43
I have two pennies.

I do not think that FRC code is bad across the board. I think the range of code quality is similar to that of wiring and mechanical construction. In other words it ranges from impressively good to -- "are you actively trying to sabotage your team" -- bad.

To me, much more important than the code on the robot is the knowledge gained by the students on the team. If students wrote code that has issues, but learned how to identify the issues and improve their performance the next time, then FRC has provided a successful challenge. If they have learned how to communicate and write SW as part of a team, how to break down a problem, complete their solution, meet a deadline, evaluate their results, ask questions, and accept mentoring, then they are well prepared for college, training, and career.

To paraphrase, it isn't really about the robot code. That is only the campfire ...

And by the way, communication with nonprogrammers is super important. Most programmers don't write code for themselves or for people like them. They write code as part of a product team that includes EEs, MEs, business and marketing, writers, perhaps even doctors or physicists. When you learn to explain what your code does and doesn't do to a nonprogrammer, when you learn to understand someone else's requirements, expectations, resources, tolerances, and deadlines in a common domain language, you are better prepared to become a successful programmer. This goes for all engineering disciplines, not just programmers.

Finally, I see FRC as a multidisciplinary engineering challenge. It incorporates strategic decision making, logistical efficiency, interaction with real-world materials and tolerances, and interaction with five other robots (most years). And it isn't just a controlled demonstration of capabilities. It is a timed sport-like competition that incorporates human drivers and players, and it also places value on presentation, aesthetics, and outreach.

What I'm getting at is that FRC is BIG and complex and really challenging. Code is but one part of the robot. It is necessary for teams to make decisions as to what they will spend time on. There are many many ways to successfully navigate the big FRC ocean. I'd highly encourage the programmers to enter other contests and take on additional challenges. I'd also encourage everyone on the team to learn more about programming. While CS/CE specialists may always be needed, you may also find it very useful as an ME/EE/whatever. I think computers are here to stay.

Greg McKaskle

Ari423
13-06-2015, 10:47
Yes, new software is supposed to be written each year. The real question is what new software needs to be written. A number of teams, from my general impression, will start essentially from scratch each year. I believe this may contribute to the problem. Since most teams don't/can't reuse the code from the previous year, they have to start from what they know, and usually without help. It's like asking a designer to CAD up a drivebase who doesn't necessarily know what goes into a drivebase. Yeah, it might work, but it's probably really heavy, slow, and doesn't turn well.

THANK YOU!!! On my team, we generally re-write the code each year, but we model it off of previous year's code. This allows us to use parts of the code that worked and find better ways to do things that didn't. Once they learn how to program, all of our programmers have access to view the robot code so they can learn good practices from it and become a better programmer before they take on the role of head programmer.

I think I'll make an off season "competition" of a code standard challenge from the team's code this year. I'll begin working on the details tomorrow with a friend and anyone else who is interested.

I would definitely be interested in doing this. Either have contestants interpret what already-made code does (and maybe optimize it) or give them a task and have them make code to make it work. The only problem will be different programming languages. I have some code that is "interesting" to read through, which might be a fun challenge without comments for this competition.

4 years. Every 4 years, there is a complete turnover. (I intentionally exclude mentors from this.) Given that many teams have limited programmers in the first place (and, given some of those threads, it's tough to get a programmer to step up to replace a lone programmer who is moving on), there isn't really a good progression... so even if you do get a programmer or two who are starting to force the team towards high quality, or do more complex things, right about the time they hit that point they're gone, and someone else is starting from near-zero.

As long as you have at least one year of cross-over to teach the new programmer, we have always been fine. When I joined the team, the head (and only) programmer was a senior, and he spent most of his time working on the robot, while I spent most of my time learning how to program. He only actually taught me how to program the robot in two hours after the end of the season. Sure, I wasn't as good my first year as head programmer as I am now, but I was still able to make neat, efficient code that works. So either every programmer on my team who has been in this situation is a genius, or other people can do it too.

Again, I cannot stress enough how much it helps to have a version of code finished before the robot is built. Sure you might not be able to test most of it, but you can test that state machine that would take you hours to make and test while you have better things to be doing. Having code done beforehand allows you to make more advanced code.

faust1706
13-06-2015, 13:17
side note that I don't want to create another thread about: I need help deciding on a task of sorts that would interest programmers enough to do it. PM me(or shoot me an email: hunter.park11235@gmail.com) if you're interested in helping me design a task.

AlexanderTheOK
13-06-2015, 17:08
I have personally been following and teaching this guide (https://www.thc.org/root/phun/unmaintain.html).

wireties
13-06-2015, 21:31
Not all teams have poor software practices. But it is rarely "professional" looking. While helping other teams at competitions I find software all over the place. 1296 splits the application up into tasks/threads for each subsystem and uses messages to synchronize everything. This way the students are all working on smaller self-contained behaviors. We also emphasize good coding standards (including testability, reduced complexity, Hungarian notation, etc.), draw it then code it, test and retest, etc.

Somewhere in this thread it was asserted FRC robots are not all that complex because they "do not process much data". That is kind of an IT way of looking at it rather than an engineer's approach. The complexity in FRC robot software centers around temporal constraints, control algorithms, and quality assurance. Handling the data to and from the driver station and dashboard is the simple part.

Several persons in this thread have remarked about the value of tested code. Code that works is crazy valuable but can be even more so if good practices are followed from day zero. Software has a "life cycle". That life cycle ends if the code is unreadable or not maintainable even if it works. So the holy grail is to follow good practices to begin with, test the heck out of it and always keep a copy of the code that worked! It can be done, even in a 6-week time frame.

HTH

GeeTwo
14-06-2015, 02:11
I'll admit that I've just skimmed the responses, so I'm probably going to repeat a few points:


Selection Bias: Most of the posts of code on CD are either code that is not working, and the writer doesn't know why, or software that the coder believes is well written (often justifiably so), and is being posted in a GP spirit. I'll wager that neither is typical FRC code!
Excepting mentor coders, an average FRC coder's experience is probably better measured in months than years. Many high schools (including Slidell High) do not offer a single programming class. We don't despair at how clumsily the bear dances, but marvel that it dances at all.
Most robot designs don't settle down in time for the coders to have a chance to really test the code. This is true at the rookie end, where the robot gets bagged before it's working, and at the powerhouse end, where the robot is redesigned at least twice between stop build and CMP, and at most points in between.
How many posts have we seen lamenting that six weeks and three days is not enough time to build a robot? Guess which department gets the shortest part of that too-short stick? Anyone?
Except in mandatory high-reliability environments (e.g. medical, space, perhaps automotive), code that works 95% of the time delivered by the deadline is "better" than code that works 99.999% of the time but comes along a few months too late. FRC is NOT a high-reliability environment that grants enough time to "get it right".
Git is great when you have access to the internet, when the head programmer has the training and discipline to manage forks and when every programmer has the discipline to sync up after every couple of hours of coding. High school students at competitions typically meet none of these requirements.
Test code on the practice field? Don't make me laugh! We can't get enough time to test mechanical systems on a field that doesn't properly simulate the competition field.
While we don't have a general rule of "don't fix code unless it's broken", a basic risk/reward analysis frequently results in code lock down during competition.

SoftwareBug2.0
14-06-2015, 15:12
Git is great when you have access to the internet, when the head programmer has the training and discipline to manage forks and when every programmer has the discipline to sync up after every couple of hours of coding. High school students at competitions typically meet none of these requirements.


FYI, you don't actually need an internet connection to run git. You don't have to constantly connect to some server for every little thing and that's one of its greatest strengths. It's why it's called a distributed version control system.

Also, maybe this is just me but it seems like paying attention to what happens with different forks of code should be the easy part of a software mentor's job.

Gregor
14-06-2015, 20:29
Also, maybe this is just me but it seems like paying attention to what happens with different forks of code should be the easy part of a software mentor's job.

The majority of teams don't have a software mentor.

GeeTwo
15-06-2015, 00:26
FYI, you don't actually need an internet connection to run git. You don't have to constantly connect to some server for every little thing and that's one of its greatest strengths. It's why it's called a distributed version control system.

Do you only have one programmer, or do they all share one programming workstation?

artK
15-06-2015, 01:41
Do you only have one programmer, or do they all share one programming workstation?

Neither is necessary. The way git works is that updating the code has two phases: commit and push. A commit records the differences in the source files since the previous commit (additions and deletions of lines of code). When you commit, you save the changes locally. When you push, all the commits since the previous push are sent to the remote repository at once. Since commits are local and pushes are remote, you can commit a number of times while disconnected from the internet, then push all of those commits when you reconnect to the remote server. I attached an image from the Git Wikipedia page that visualizes what I said.

https://en.wikipedia.org/wiki/Git_(software)#/media/File:Git_operations.svg
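A minimal command-line sketch of that commit-then-push workflow (the directory, file names, and commit messages here are made up for illustration; it assumes git is installed):

```shell
# A purely local git workflow: commits need no network at all.
rm -rf /tmp/frc-git-demo
mkdir /tmp/frc-git-demo && cd /tmp/frc-git-demo
git init -q
git config user.email "student@example.com"   # placeholder identity
git config user.name "Student"

echo 'public class Robot {}' > Robot.java
git add Robot.java
git commit -q -m "Initial robot skeleton"

echo '// tuned drive PID gains' >> Robot.java
git commit -q -am "Tune drive PID"

# Both commits now exist locally, with no internet connection:
git log --oneline

# Later, once back online, publish them all at once:
# git push origin master
```

Nothing in this sequence touches the network until the (commented-out) final push.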

gblake
15-06-2015, 01:53
Neither is necessary. The way git works is that updating the code has two phases: commit and push. A commit records the differences in the source files since the previous commit (additions and deletions of lines of code). When you commit, you save the changes locally. When you push, all the commits since the previous push are sent to the remote repository at once. Since commits are local and pushes are remote, you can commit a number of times while disconnected from the internet, then push all of those commits when you reconnect to the remote server. I attached an image from the Git Wikipedia page that visualizes what I said.

https://en.wikipedia.org/wiki/Git_(software)#/media/File:Git_operations.svg

OK

So, if student A uses computer A to work on fixing bug A; and student B uses computer B to work on fixing bug B; and computer C is used to transfer code into the robot; how do the students' results get into computer C (and get shared across computers A & B) during the scarce time available at a tournament, if computers A, B, and C aren't linked by some sort of LAN? Do the students pass around a memory stick?

I think this is the scenario GeeTwo is envisioning.

SoftwareBug2.0
15-06-2015, 03:26
The majority of teams don't have a software mentor.

That would be a little surprising to me. Why do you think that? I would have expected that most teams would have somebody who is a de facto software mentor even if they're not experts.

If most software teams have no adult guidance at all that might be the definitive answer to "what's up w/ FRC code quality?"

GeeTwo
15-06-2015, 08:11
Do you only have one programmer, or do they all share one programming workstation?

(emphasis mine)
Neither is necessary. The way git works is that updating the code has two phases: commit and push. A commit records the differences in the source files since the previous commit (additions and deletions of lines of code). When you commit, you save the changes locally. When you push, all the commits since the previous push are sent to the remote repository at once. Since commits are local and pushes are remote, you can commit a number of times while disconnected from the internet, then push all of those commits when you reconnect to the remote server. I attached an image from the Git Wikipedia page that visualizes what I said.


OK

So, if student A uses computer A to work on fixing bug A; and student B uses computer B to work on fixing bug B; and computer C is used to transfer code into the robot; how do the students' results get into computer C (and get shared across computers A & B) during the scarce time available at a tournament, if computers A, B, and C aren't linked by some sort of LAN? Do the students pass around a memory stick?

I think this is the scenario GeeTwo is envisioning.

Yes. When you're at competition without the internet, either one workstation has "all the current code", or none of them do. Things were actually worse for us for much of our existence - we couldn't access github over the school internet connection. I wasn't mentoring programming at the time, but I understand that many pushes (from homes) either didn't happen or had to be "straightened out" later by the programming leads. I understand that individual files had to be transferred around on a memory stick and then forced into git as replacements, which sometimes caused additional headaches. Without a connection, git is still useful as a "time machine", but it doesn't really help distributed development, or at least not within our understanding of how to do it.

OBTW, I don't see the image.

AlexanderTheOK
15-06-2015, 09:01
OK

So, if student A uses computer A to work on fixing bug A; and student B uses computer B to work on fixing bug B; and computer C is used to transfer code into the robot; how do the students' results get into computer C (and get shared across computers A & B) during the scarce time available at a tournament, if computers A, B, and C aren't linked by some sort of LAN? Do the students pass around a memory stick?


Thanks for the nightmare fuel.

The thought of having multiple students on multiple computers fixing robot code DURING THE COMPETITION... *shudders*



All joking aside, I really don't see why that many people need to be fixing robot code during the competition. There really should be one or two people who know everything well enough to fix it. Heck. One or two people can develop most robot code on their own. It's a good idea to have two programmers on one laptop in a coding pair to fix bugs quicker, but more than that you end up having too many laptops and people in the pits.

Kevin Leonard
15-06-2015, 09:31
This year, 20 had its best programming team yet. And they were a team, for once, whereas in previous years we often had one person who did most of the programming on the robot.
Most of our teleop functions were automated, with advanced controls on our elevator, forks, and drive, and it was exceptional to see them take a less-than-ideal mechanical design and optimize it the way they did.

However, I wish we chose a simpler mechanical design, because there was only so much they could do to fix the robot before returns started to diminish.

I'm generally a fan of making the mechanical design support the programming. Making the programming the easy part. Making the programming be "When the driver presses the button, shoot", and not "When the driver presses the button, activate the aiming system, then shoot".

For example, 20's 2013 robot was built to be as simple a floor loader as possible. We shot from one position on the field, and that was right in front of the pyramid. We had some limit switches hooked up so that when the robot was in position, they would cause some lights to light up, and our drivers knew they were locked in and shot.

However, if your team has more programming resources, they could pursue a design with a higher ceiling due to programming, like 195's 2013 robot.

They were a full-court shooter, and they went to Connecticut with a very good full court shooter and won the event with Team 20. At champs, they were faster, more accurate, and could shoot from more places. At IRI, they had perfected their bag of tricks to include shots through the opponent's pyramid and from anywhere on the field.

There is definitely a place in FRC for more complex, elegant programming, but it's not necessary to succeed. Complex programming is a resource, in the same way that sponsorships, machinery, and CAD knowledge are resources. If you're able to use that resource, that's fantastic for you and your team; you have more options en route to success. If you don't have that resource, you have a limited set of options for success.
I know that in 2016, 20 is likely to lean more heavily on our programming team than ever before, because we know they're extremely capable, but in 2017 we might not.

Hugh Meyer
15-06-2015, 09:40
We use SVN for version control and take our server with us to each competition. This allows us to have as many or as few programmers as we want working on the code when needed. While we don’t make a huge number of changes, it does vary from year to year. Mostly we are changing the robot configuration file to change autonomous mode.

Another reason for taking the SVN server to competition is to keep track and store the robot configuration and log files from each match. Our drive team does a commit to the SVN server after each match, and then anyone interested can do an update and immediately have access to the last match configuration and log files.

The students are very capable and able to keep track of using version control. I try very hard to teach them the right way, and once they get used to this tool it is one of those things you wonder how you lived without. Even if you only have one programmer on the team you should be using version control. I tell people it is like having a giant undo button for edits: one can revert the files to any previous commit. This is a wonderful feature.

We even have our CAD, electronics, & scouting teams using SVN. Why anyone would not use version control is beyond me. It is not that hard and is a tool every student should have in their knowledge base.

-Hugh

jtrv
15-06-2015, 10:08
I just finished my last year of high school, and have been a programmer for my team during my sophomore, junior, and senior years. I took 2 years of Java courses at my school, including the AP CS exam. We gained our first programmer mentor in 8 years at the start of my junior year, and he was a college freshman at the time.

We've won engineering awards at nearly every single competition (maybe not in one or two of the competitions) we've been to since 2012, which is 8-10. No national champs.

So with some decent experience, I'll ask you a question that may give you some insight on the 'state' of FRC programming:

What is Big O? I've never heard of it.

_______________________________


The fact is, many, many teams do not have any insight into advanced programming techniques at all. Many teams don't even have access to the basic fundamentals. For God's sake, this year we started using abstract classes for the first time. We knew how it all worked, but we never bothered implementing it.

We are one of the few high schools lucky enough to have programming classes at all. A close friend from another nearby team said his school doesn't have any programming classes at all, and they didn't have a programming mentor for multiple years. I'm not sure if they even have one now, but I know they didn't several years ago. How can you expect kids to figure out these 'standards' when they're teaching themselves basic syntax and struggling to understand the concept of the networking system being used in FRC?

_______________________________


I think there is a very, very large disconnect between mentors on CD and the students on their teams. On some teams - like many in this thread - there are very, very large expectations to meet software and programming 'standards', whereas other teams do not even have the slightest clue on the 'norms' of this stuff.

Sure, you can say 'Get your programming teachers at your school to mentor!' Well, yes. But it's not even that simple. The teachers who teach programming courses here are math teachers and know Java and only Java. Additionally, expecting them to dedicate incredible amounts of time to mentoring is a ridiculous expectation. Mentoring is a LOT of work. We're struggling to find someone who will be able to handle the paperwork we go through each year. We aren't exactly in an area with a booming tech industry, and I'm sure many other teams are in the same situation.

Additionally, many teams struggle with building the robot in the first place. Sometimes teams are lucky to test their bot before week 6 begins. It's an immense amount of pressure to get a robot near fully functional in less than a week.

GeeTwo
15-06-2015, 11:44
So with some decent experience, I'll ask you a question that may give you some insight on the 'state' of FRC programming:

What is Big O? I've never heard of it.

I hadn't either until a few months ago, and I minored in computer science (1984). Back then, we just called it "order of magnitude". It tells you how the resources required by a process (usually time) scale as the amount of data being handled increases. For example, a bubble sort is of order n², where n is the number of items being sorted, so "n²" is the "big O" for the bubble sort, starting from an unsorted data set.
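To make that concrete, here is a rough sketch (the function and counts are mine, not from any team's code) that counts bubble-sort comparisons. Doubling n roughly quadruples the work, which is exactly what O(n²) predicts:

```python
import random

def bubble_sort(items):
    """Bubble sort a copy of the list; also return how many comparisons it made."""
    comparisons = 0
    data = list(items)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            comparisons += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data, comparisons

random.seed(0)
_, c_small = bubble_sort([random.random() for _ in range(100)])
_, c_large = bubble_sort([random.random() for _ in range(200)])

# This version always makes exactly n(n-1)/2 comparisons,
# so doubling n quadruples the work (minus lower-order terms).
print(c_small, c_large)  # prints: 4950 19900
```

The comparison count here does not depend on the input data, only on n, which makes the quadratic growth easy to see.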

Greg McKaskle
15-06-2015, 14:17
Teachers are one possible source of mentoring, but so are engineers and technicians.

FRC is quite different from how I think CS classes are normally taught in high schools. FRC is part realtime, part feedback control, with the remainder composed of scientific/mathematical/engineering tasks such as cleaning up noisy sensors, building state machines, and mathematical transforms to match mechanical or electrical constructions.
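As a rough illustration of the state-machine flavor of that work, here is a toy shooter sequence. The states, inputs, and transitions are invented for the example, not taken from any team's code:

```python
# A minimal sketch of the kind of state machine FRC robot code is full of.
IDLE, SPIN_UP, SHOOT = "IDLE", "SPIN_UP", "SHOOT"

def step(state, trigger_pressed, wheel_at_speed):
    """Advance the shooter state machine by one control-loop iteration."""
    if state == IDLE and trigger_pressed:
        return SPIN_UP          # driver asked to shoot: start the wheel
    if state == SPIN_UP and wheel_at_speed:
        return SHOOT            # wheel is up to speed: feed the ball
    if state == SHOOT and not trigger_pressed:
        return IDLE             # driver released the trigger: wind down
    return state                # otherwise, stay put

state = IDLE
state = step(state, trigger_pressed=True, wheel_at_speed=False)   # -> SPIN_UP
state = step(state, trigger_pressed=True, wheel_at_speed=True)    # -> SHOOT
state = step(state, trigger_pressed=False, wheel_at_speed=True)   # -> IDLE
print(state)  # prints: IDLE
```

Keeping each transition explicit like this is what makes such logic testable off the robot, before the mechanical team hands over the hardware.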

The robots I see within FRC don't have a database, don't draw circles and squares using a turtle, and don't have grids of rocks, flowers, and whatnot that morph over time. Also, the issues that happen on the robot are rarely isolated. "Our robot code has a problem", invariably involves a wiring glitch, mechanical bind, toasted motor, missing jumper, scratched encoder disc, and sometimes a logic or race condition mixed in. This is incredibly similar to the type of SW written by NI customers, the majority of whom are not CS.

So don't limit yourself to computer scientists or computer science teachers.

Greg McKaskle

Jared Russell
15-06-2015, 16:00
You know Maslow's hierarchy of needs?


Maslow's Hierarchy of Needs

HIGHER LEVEL NEEDS
Self-Actualization
Esteem
Love/belonging
Safety
Physiological
FUNDAMENTAL NEEDS


Maslow posited that the most basic levels of needs must be met before an individual can really focus on higher level fulfillment. This idea has been met with a healthy dose of criticism, but I think there is an appropriate FRC metaphor to be made:


Jared's Hierarchy of FRC Needs for Repeatable, On-Field Success

HIGHER LEVEL NEEDS
Software
Mechanical design and construction
Construction fundamentals (batteries, wiring, pneumatics, fasteners, etc.)
Sponsorship, equipment, and space
Mentorship and team organization
FUNDAMENTAL NEEDS


Basically, you are only as good as the foundation beneath you. Even if you excel at a higher level in the hierarchy, a deficiency in a lower level will ultimately compromise the effectiveness of the robot and team in competition. Teams that are proficient at all of these things can do pretty well; teams that excel at all of them can do excellently. Teams who write excellent software but can't build a mechanism to save their life are usually not going to fare well on the field in FRC.

TL;DR: Software in robotics is hard because not only is it hard to create good software - doing so is also heavily influenced by every other aspect of a team and robot.

There are very few instances of a robot or team where I've thought, "The only thing holding them back is software quality", so I have to agree with Blake that for the majority of teams, the software meets the requirements (though everything can always be better).

artK
15-06-2015, 17:11
Yes. When you're at competition without the internet, either one workstation has "all the current code", or none of them do.

Having it all in one place isn't usually that bad if your programmers can program as a group (One guy types, everyone talks about it). It allows for a lot of ideas to be expressed and quickly refined as a group, or rejected when the group thinks up a better solution.

Things were actually worse for us for much of our existence - we couldn't access github over the school internet connection. I wasn't mentoring programming at the time, but I understand that many pushes (from homes) either didn't happen or had to be "straightened out" later by the programming leads. I understand that individual files had to be transferred around on a memory stick and then forced into git as replacements, which sometimes caused additional headaches. Without a connection, git is still useful as a "time machine", but it doesn't really help distributed development, or at least not within our understanding of how to do it.

A connection of some sort is always going to be necessary for any distributed work. Git does allow for merging, which can take two branches and unite them again, letting the programmer resolve any merge conflicts. If you could only push at the school, I would commit freely at home, then push at school.

If this is a serious issue, there may be other version control programs that suit your needs better. I am simply most familiar with Git and its features.


OBTW, I don't see the image.
Does this work?
https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Git_operations.svg/760px-Git_operations.svg.png

gblake
15-06-2015, 17:56
I hadn't either until a few months ago, and I minored in computer science (1984). Back then, we just called it "order of magnitude". It tells you how the resources required by a process (usually time) scale as the amount of data being handled increases. For example, a bubble sort is of order n², where n is the number of items being sorted, so "n²" is the "big O" for the bubble sort, starting from an unsorted data set.

Yeah...

For those who haven't noticed or cared (and... it's legitimate to be proud of not needing to notice or care ;)), I have noticed that the buzzworthiness of knowing and discussing the big O characteristics of various data structures and algorithms for various operations (insert, sort, etc.) has been on the rise for a while because of the money to be made in big data (business analytics, online shopping, social networking, searching, databases in clouds, intelligence/spying, etc. etc.), and it's now quite trendy.

It's become so trendy that I worry that folks are losing sight of the differences between efficiently operating on small datasets and efficiently operating on large datasets. For small datasets, low-overhead brute force will often beat the pants off of manipulating some fancy data structure that is appropriate for larger datasets. There is more to writing efficient code than learning the big O characteristics of various data structures and algorithms.

FRC has opportunities for programmers to experience both sides of this fence.

AlexanderTheOK
15-06-2015, 19:57
The one thing people harping about code efficiency and big O functions need to hear is this:

The most important resource isn't CPU time, or even the money to run a team. It's time. There are only so many man-hours a person can spend doing something (obviously varying from person to person). Money can be acquired through time (people can spend time fundraising). CPU time can be bought with money (just buy another onboard computer for the second camera).

If it's going to take 50 hours of work to get your vision tracking program optimally threaded for all 4 cores, for the sake of saving 200 dollars on an onboard computer, you are literally working for less than minimum wage.

Now if you're writing code for mass produced industrial equipment, you're no longer looking at 200 dollars, likely 200,000 dollars, or 2,000,000 dollars. Then it would be worth it.

You might also want to learn how to write threaded code well. In that case, it becomes worth it for the educational value.


FRC robots are not mass produced, and most FRC programmers are still worried about getting EVERYTHING working in the several hours they have with the robot.

Out of all the ways FRC programmers can spend their time, code efficiency is the least efficient way to spend it.

faust1706
15-06-2015, 20:08
It took me less than 30 minutes to properly thread my vision code in 2013 on a quad core sbc....

But that's not the point. It's that frc ultimately fails at demonstrating what computer science really is.

MrRoboSteve
15-06-2015, 20:36
Shipping is a feature. (http://www.joelonsoftware.com/items/2009/09/23.html)

EricH
15-06-2015, 20:53
But that's not the point. It's that frc ultimately fails at demonstrating what computer science really is.

That's also not its goal. Plain and simple. FRC is aimed at inspiration. Not at learning the reality. If demonstrating the reality were the goal, the mechanical students (remember, they're all in high school) would all need to be able to do at least basic stress analysis, geartrain stresses, fatigue-rating...

...and know the math behind all that, which can be worse than the math used to compute the various items I just mentioned.

Oh, and the electricals would need to be able to understand the guts of the electrical devices, which can get into Diff. Eq. As I recall, that's barely touched on in H.S. calculus.



Plain and simple: Part of any (and I do mean ANY) engineering project is to deliver a product that works, on time and on budget (and on weight). Comp Sci is very much an engineering field. If you have the time to go above and beyond, nobody is stopping you from going above and beyond, provided that the product (your code) works (read: runs the robot) properly. I think Randall puts it very nicely in 664 (http://xkcd.com/664/)...

gblake
15-06-2015, 21:22
OK, I'll bite.
It took me less than 30 minutes to properly thread my vision code in 2013 on a quad core sbc....
How did you know it was properly threaded?
But that's not the point. It's that frc ultimately fails at demonstrating what computer science really is.And,

What is computer science, really?

faust1706
15-06-2015, 22:13
OK, I'll bite.

How did you know it was properly threaded?
And,

What is computer science, really?

Let me rephrase that: it was threaded well enough that I couldn't make the program run any faster, due to the limit on how many frames per second I could grab from the camera.
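For readers wondering what that split looks like, a producer/consumer arrangement lets processing overlap with frame grabbing until the camera itself becomes the bottleneck. This sketch simulates the camera, and every name and timing in it is invented, so it illustrates the pattern rather than anyone's actual vision code:

```python
import queue
import threading
import time

frames = queue.Queue(maxsize=4)   # small buffer between grabber and processor
results = []

def grab_frames(n):
    """Simulated camera thread: the frame rate caps overall throughput."""
    for i in range(n):
        time.sleep(0.01)          # pretend the camera delivers ~100 fps
        frames.put(i)
    frames.put(None)              # sentinel: no more frames

def process_frames():
    """Vision thread: runs concurrently with the grabber."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame * frame)   # stand-in for real image processing

grabber = threading.Thread(target=grab_frames, args=(10,))
worker = threading.Thread(target=process_frames)
grabber.start(); worker.start()
grabber.join(); worker.join()
print(results)  # prints: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

With a single producer and single consumer feeding a FIFO queue, frame order is preserved, and once processing is faster than the grab rate, adding more threads buys nothing, which is the "limited by the camera" situation described above.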

I do not have a firm definition of computer science, as I do not have one of mechanical engineering either. But I'll give it a shot by combining various definitions I've heard from my professors and read over the years.

Computer Science is concerned with information in much the same sense that physics is concerned with energy; it is devoted to the representation, storage, manipulation and presentation of information. Computer scientists deal mostly with software and software systems, which includes theory, design, development, and application.

Let's dissect this in terms of FRC:
Representation: An interesting survey would be to see what percentage of FRC programmers know how various data types are stored in the computer (and their rough ranges)
Storage: Beyond the occasional class, data structures such as queues and trees are not used in FRC, to my understanding
Manipulation: Manipulating ints is as trivial as tightening a bolt; manipulating complex data structures is another thing
Presentation: Besides graphs and printing to console, how are teams presenting the data they have when they want to?
Theory: Do teams take the time to calculate the complexity of their problem before they solve it?
Design: (Serious question): Do teams take to the whiteboard and draw out their algorithm, like this (http://static1.1.sqspcdn.com/static/f/469577/25881122/1421885790913/friendshipalg.jpg?token=54aAA4v6ksMSBkY6QtOf6VwLSoM%3D) famous one?
Development: There are many theories about software development, so I won't touch this as to not anger people.
Application: Does it solve what it is supposed to? I feel in general this is the one area FRC meets.

Side note: For those who ask why bother with efficiency if what is being done isn't complex to begin with: it's about learning and developing good habits. Programming habits are some of the hardest to break, in my opinion, and it takes self-discipline to do what is right, not what is easy (insert JFK speech). When you are constantly thinking about optimization, you begin thinking like a computer scientist.

Ian Curtis
15-06-2015, 22:48
It's become so trendy that I worry that folks are losing sight of the differences between efficiently operating on small datasets and efficiently operating on large datasets. For small datasets, low-overhead brute force will often beat the pants off of manipulating some fancy data structure that is appropriate for larger datasets. There is more to writing efficient code than learning the big O characteristics of various data structures and algorithms.

"The purpose of computation is insight, not numbers." -- Richard Hamming (https://en.wikipedia.org/wiki/Richard_Hamming)

Why this matters:

Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."[5]


But that's not the point. It's that frc ultimately fails at demonstrating what [CHOOSE YOUR OWN FIELD] really is.

FRC "fails" to demonstrate what every field truly is -- but its true value is you can get exposure to the essence of almost ANY field.

GeeTwo
15-06-2015, 23:21
It's a good idea to have two programmers on one laptop in a coding pair to fix bugs quicker, but more than that you end up having too many laptops and people in the pits.
We understand the coding exception ("D. After Kickoff, there are no restrictions on when software may be developed.") as meaning that we can develop code in the stands or the lobby or even off site. However, as I noted above, the lack of connectivity pretty much limits us to a single laptop.


Jared's Hierarchy of FRC Needs for Repeatable, On-Field Success

HIGHER LEVEL NEEDS
Software
Mechanical design and construction
Construction fundamentals (batteries, wiring, pneumatics, fasteners, etc.)
Sponsorship, equipment, and space
Mentorship and team organization
FUNDAMENTAL NEEDS

Excellent point. I would add "strategy" between Mechanical design and construction and Construction fundamentals, but the bottom line is that even great software can't redeem a robot that is built poorly or is built to do the wrong things.

A connection of some sort is always going to be necessary in any distributed work. :)

My point exactly.

Does this work? [image]

Yes!

Yeah... the buzzworthiness of knowing and discussing the big O characteristics of various data structures and algorithms for various operations (insert, sort, etc.) has been on the rise for a while because of the money to be made in big data.


The most important resource isn't CPU time, or even the money to run a team. It's [software development] time. ... Out of all the ways FRC programmers can spend their time, code efficiency is the least efficient way to spend it.

Yes and Yes!

Using a red/black tree to sort a list of ten numbers is less efficient than bubble sort, and coding it is only worthwhile if you're regularly going to sort hundreds or thousands of numbers.
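To make that concrete, here is a minimal bubble sort in Java (an invented illustration, not any team's actual code). For a list of ten numbers, this handful of lines with no allocation overhead is hard to beat:

```java
// Hypothetical illustration of the point above: for a handful of values,
// a simple bubble sort is perfectly adequate (names here are invented).
import java.util.Arrays;

public class SmallSort {
    // Classic bubble sort on a copy of the input: O(n^2) comparisons,
    // but essentially zero overhead for tiny arrays.
    public static int[] bubbleSort(int[] a) {
        int[] v = Arrays.copyOf(a, a.length);
        for (int i = 0; i < v.length - 1; i++) {
            for (int j = 0; j < v.length - 1 - i; j++) {
                if (v[j] > v[j + 1]) {
                    int tmp = v[j];
                    v[j] = v[j + 1];
                    v[j + 1] = tmp;
                }
            }
        }
        return v;
    }

    public static void main(String[] args) {
        int[] scores = {41, 7, 23, 18, 2, 99, 54, 11, 68, 30};
        System.out.println(Arrays.toString(bubbleSort(scores)));
    }
}
```

A balanced tree pays for itself only when the dataset is large enough that the O(n log n) behavior outweighs its per-element bookkeeping.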

When we started the team, Gixxy (our first program lead, now a CSCI major, and fascinated with computers since before he was two) was planning to write this big code base to carry from year to year. However, most of the FRC reusable code is already in WPIlib or LabView. Unless you reuse hardware (or at least hardware design), the software is also going to be a fresh start. In four years, I believe that we have reused vision processing code (running on a Raspberry Pi), the threaded connection to the Pi, and an Xbox controller front end. We have borrowed old drive system and manipulator code to build new, but we followed up with such massive modifications that I wonder whether we saved more time by reusing than we spent in fixing "old code" problems. In short, the main reason most professional programmers practice good coding (someone will have to maintain it ten or more years from now, maybe me) is not a valid concern for deck-plate FRC programming.

AlexanderTheOK
16-06-2015, 00:51
The most important resource isn't CPU time, or even the money to run a team. It's [software development] time...

Well, it is applicable to software, and software being the topic of this discussion, maybe we ought not to digress, but it really applies to everything. There was a weekend during which the programmers here didn't touch the laptops at all, because there just weren't enough man-hours being put into modifying the mechanical aspects of the robot, and there wasn't much good we could do editing code. We weren't very useful or skilled at mechanical work, but what we accomplished there was orders of magnitude better than what would have come out of another dozen hours of typing.

floogulinc
16-06-2015, 05:12
The vast majority of code would get a 'C' at best in an intro to programming class in a high school. So why is no one talking about this?


You vastly overestimate the rigor of a high school programming class. If you saw the MATLAB code I wrote for my Computing for Engineers introductory college course, I'm sure you'd have a heart attack. When you're not graded on efficiency and don't have the time to make it efficient, why make it so?

Oh man, this. Here (https://github.com/floogulinc/intro-to-programming) is an entire semester of a high school Java intro class (with many things done in a much better/more advanced way than the rest of the class, including the teacher, did them). Even the most basic programming done in FRC would be A material in my school's programming class.

asid61
16-06-2015, 05:39
Oh man, this. Here (https://github.com/floogulinc/intro-to-programming) is an entire semester of a high school Java intro class (with many things done in a much better/more advanced way than the rest of the class, including the teacher, did them). Even the most basic programming done in FRC would be A material in my school's programming class.

Can attest to this. High school CS was a joke compared to what I see the Electrical division coding.

Ether
16-06-2015, 08:38
"The purpose of computation is insight, not numbers." -- Richard Hamming (https://en.wikipedia.org/wiki/Richard_Hamming)

http://www.chiefdelphi.com/forums/showpost.php?p=1207276&postcount=10

notmattlythgoe
16-06-2015, 08:46
On top of a lot (probably most) teams not having a programming mentor of some sort, I would venture to say that most of the programming mentors out there have no formal training in controls.

marshall
16-06-2015, 08:56
On top of a lot (probably most) teams not having a programming mentor of some sort, I would venture to say that most of the programming mentors out there have no formal training in controls.

That's actually a good point. I suppose there are a couple of ways to look at FRC robots. One is as a control system problem. I'm not sure many teams take the approach to look at their robots in that light. I think that has a lot to do with why LabView is often shunned by some. LabView is an absolutely amazing tool for programming control systems...

All this talk of optimization is great, but I haven't seen anyone in this thread talk about profiling. It's been a while since the compiler optimization class I took in college, but as I recall there were some choice readings from Donald Knuth pointing out that premature optimization is the root of all evil, or something similar. Profiling helps determine where to optimize, and with FRC robots containing a physical component, profiling would entail looking at the code as well as the robot's output and the speed/efficiency of motors and gearboxes... which, as others have pointed out, is more likely to provide substantial efficiency gains.
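As a sketch of "measure first": timing a section of code with System.nanoTime() is the crudest form of profiling, but it answers the key question of where the time actually goes. The harness and workload below are invented for illustration (and this is not a rigorous benchmark; a JIT can distort micro-measurements):

```java
// Minimal "profile before you optimize" sketch using System.nanoTime().
// Names and the example workload are invented for illustration only.
public class TimeIt {
    // Runs the task n times and returns the best elapsed time in
    // nanoseconds (best-of-N reduces scheduling noise somewhat).
    public static long bestOfN(Runnable task, int n) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            long start = System.nanoTime();
            task.run();
            long elapsed = System.nanoTime() - start;
            if (elapsed < best) best = elapsed;
        }
        return best;
    }

    public static void main(String[] args) {
        long ns = bestOfN(() -> {
            double sum = 0;
            for (int i = 1; i <= 100_000; i++) sum += Math.sqrt(i);
        }, 5);
        System.out.println("best of 5: " + ns + " ns");
    }
}
```

Only after numbers like these identify a hot spot is it worth spending build-season time optimizing it.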

notmattlythgoe
16-06-2015, 09:10
That's actually a good point. I suppose there are a couple of ways to look at FRC robots. One is as a control system problem. I'm not sure many teams take the approach to look at their robots in that light. I think that has a lot to do with why LabView is often shunned by some. LabView is an absolutely amazing tool for programming control systems...

All this talk of optimization is great, but I haven't seen anyone in this thread talk about profiling. It's been a while since the compiler optimization class I took in college, but as I recall there were some choice readings from Donald Knuth pointing out that premature optimization is the root of all evil, or something similar. Profiling helps determine where to optimize, and with FRC robots containing a physical component, profiling would entail looking at the code as well as the robot's output and the speed/efficiency of motors and gearboxes... which, as others have pointed out, is more likely to provide substantial efficiency gains.

The software industry is really starting to move toward the Agile process of development. Over-engineering is one of the wastes of time that the Agile process tries to avoid: why engineer for a future problem that might never happen? On the topic of this thread, why over-engineer your optimization unless it is actually going to be a problem? Outside of vision, optimization isn't going to be an issue.

Alan Anderson
16-06-2015, 09:55
The software industry is really starting to move toward the Agile process of development. Over-engineering is one of the wastes of time that the Agile process tries to avoid. Why engineer for a future problem that might never happen?

I try to stay away from the trap of unnecessary generalization and superfluous creation of flexible frameworks. My two favorite Patterns are 1) Do the simplest thing that could possibly work, and 2) You aren't going to need it.

notmattlythgoe
16-06-2015, 10:00
I try to stay away from the trap of unnecessary generalization and superfluous creation of flexible frameworks. My two favorite Patterns are 1) Do the simplest thing that could possibly work, and 2) You aren't going to need it.

Exactly.

Ether
16-06-2015, 10:28
Why engineer for a future problem that might never happen?

This is a great motto for FRC.

For passenger airplanes, spacecraft, nuclear plants and warheads, and medical equipment... not so much.

notmattlythgoe
16-06-2015, 10:35
This is a great motto for FRC.

For passenger airplanes, spacecraft, nuclear plants and warheads, and medical equipment... not so much.




Well you obviously have to engineer to the project requirements, and in those cases the requirements include redundancies. My point is don't engineer past the scope of the project.

GeeTwo
16-06-2015, 12:40
Well you obviously have to engineer to the project requirements, and in those cases the requirements include redundancies. My point is don't engineer past the scope of the project.

Or to paraphrase from the legal community, engineer to the requirements, the whole requirements, and nothing but the requirements.

efoote868
16-06-2015, 14:04
Or to paraphrase from the legal community, engineer to the requirements, the whole requirements, and nothing but the requirements.

I've had a few instances professionally where a more senior person told me not to worry about designing for something in the future, only to find later that a little bit of effort up front would have saved a whole lot of trouble.

"That'll never happen" has become an inside joke.

Ether
16-06-2015, 14:36
engineer to ... nothing but the requirements.

don't engineer past the scope of the project.

And my point was that I agreed with you for FRC.

But I sure hope you guys aren't suggesting that should be the "hear no evil, see no evil" attitude of the lead project engineer of a project where human life hangs in the balance.

It's certainly not the way I ran my projects.

notmattlythgoe
16-06-2015, 14:47
And my point was that I agreed with you for FRC.

But I sure hope you guys aren't suggesting that should be the "hear no evil, see no evil" attitude of the lead project engineer of a project where human life hangs in the balance.

It's certainly not the way I ran my projects.





I'm absolutely not suggesting that Ether. If you run your projects that way even when human lives don't hang in the balance you'll end up with unhappy customers.

I'm saying that you shouldn't be spending a bunch of extra time optimizing loops and building adaptable code on the assumption that you'll need that optimization or need to adapt it in the future. In most (not all) cases it will be a waste of time and money.

GeeTwo
16-06-2015, 15:06
And my point was that I agreed with you for FRC.

But I sure hope you guys aren't suggesting that should be the "hear no evil, see no evil" attitude of the lead project engineer of a project where human life hangs in the balance.

It's certainly not the way I ran my projects.

Of course I'm suggesting that a vehicle designed and built on a $4000 budget in six weeks to last about an hour of actual operation should be operated on the open highway ...

... never.

Ari423
16-06-2015, 15:16
Of course I'm suggesting that a vehicle designed and built on a $4000 budget in six weeks to last about an hour of actual operation should be operated on the open highway ...

... never.

Been there done that! Our wheelchair/go-cart test bot breaks the speed limit.

evanperryg
16-06-2015, 16:59
FIRST gives us restrictions on weight and height. The only software restrictions we have are the ports we can use and the hardware limitations. I don't think many of us are concerned about the efficiency of our code given the hardware in the roboRIO. A 120 lb limit is a much harder limitation to work with.

As you said, a big reason programming expectations may be so low is that school programming courses are very low-caliber. The most advanced programming taught in my school is very, very simple VBA, or a little RobotC. The "programming class" is literally just a semester of Alice, the biggest joke in the programming world. However, most other technical aspects have some sort of advanced class. Physics, autos, woods: any technical class will teach you some amount of mechanical engineering. Schools with PLTW have a rigorous electronics course that goes as far as explaining the workings of FPGAs and other highly integrated digital systems, and most schools have an introductory electrical class that covers basic digital logic. Programming? Not so much. The "problem," if you can even consider there to be a problem here, is that most students already have some kind of base for the other technical aspects of the robot, but none for programming.

Studies suggest that roughly 40 percent of people aren't even capable of learning to program. Just look at the FizzBuzz test. Really good read. (http://blog.codinghorror.com/why-cant-programmers-program/)
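For reference, the FizzBuzz screening test asks a candidate to print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. One minimal Java version (of many possible):

```java
// One straightforward FizzBuzz implementation, for reference.
import java.util.ArrayList;
import java.util.List;

public class FizzBuzz {
    // Returns the FizzBuzz sequence for 1..n as a list of strings.
    public static List<String> run(int n) {
        List<String> out = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            if (i % 15 == 0)     out.add("FizzBuzz"); // multiple of 3 and 5
            else if (i % 3 == 0) out.add("Fizz");
            else if (i % 5 == 0) out.add("Buzz");
            else                 out.add(Integer.toString(i));
        }
        return out;
    }

    public static void main(String[] args) {
        run(100).forEach(System.out::println);
    }
}
```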

Seems like another reason why FRC programming is generally very rough. If it's inherent that only a limited number of people can program at all, then the fact that FRC is getting lots of people to make code that even works is pretty impressive.


It's become so trendy that I worry that folks are losing sight of the differences between efficiently operating on small datasets and efficiently operating on large datasets. For small datasets, low-overhead brute force will often beat the pants off of manipulating some fancy data structure that is appropriate for larger datasets. There is more to writing efficient code than learning the big O characteristics of various data structures and algorithms.

FRC scouting in a nutshell.
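The small-dataset point quoted above can be sketched concretely (names invented for illustration): for a handful of elements, a plain linear scan over a primitive array avoids the allocation and boxing cost of building a fancier structure.

```java
// Invented illustration: brute force vs. a "fancy" structure on tiny data.
import java.util.HashSet;
import java.util.Set;

public class SmallLookup {
    // Brute force: linear scan over a tiny primitive array. No allocation,
    // no boxing, excellent cache behavior. O(n), but n is tiny.
    public static boolean containsScan(int[] data, int key) {
        for (int x : data) {
            if (x == key) return true;
        }
        return false;
    }

    // The textbook alternative: build a HashSet for O(1) lookups. The build
    // cost and Integer boxing dominate when the dataset is a few elements.
    public static boolean containsSet(int[] data, int key) {
        Set<Integer> set = new HashSet<>();
        for (int x : data) set.add(x);
        return set.contains(key);
    }
}
```

For ten elements the scan typically wins; the HashSet only pays off once lookups are repeated many times over a much larger collection.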

That's also not its goal, plain and simple. FRC is aimed at inspiration through science and technology. OP's aim seems to be making the science and technology in FRC more advanced, thereby advancing the inspiration. Advancing programming in students is certainly in line with FIRST's goals, but it has diminishing returns and is much more difficult to implement than other modes of FRC-related inspiration.

IMO, once your code works, you see far more diminishing returns improving it than you do improving your mechanisms. Efficiency doesn't really matter because there's no requirement to scale.
We've begun to encounter the issue that WPIlib for Java is poorly coded itself. The libraries relating to vision code are particularly messy. This may be why so many teams opt to build their own libraries entirely. Sometimes, optimization is good. But, usually, it isn't all that necessary for FRC because we have a magical beast of processing power called the roboRIO. Since there's no real limit on how inefficient your code can be (unless you try vision code, then good luck) there's usually no reason to optimize your code. Outside of FRC? Yeah, optimization is important. In FRC? Not really, because 2 minutes out of your 2:15 match, your robot is an RC stacking machine, not an autonomous robot.

Thad House
16-06-2015, 17:30
We've begun to encounter the issue that WPIlib for Java is poorly coded itself. The libraries relating to vision code are particularly messy. This may be why so many teams opt to build their own libraries entirely. Sometimes, optimization is good. But, usually, it isn't all that necessary for FRC because we have a magical beast of processing power called the roboRIO. Since there's no real limit on how inefficient your code can be (unless you try vision code, then good luck) there's usually no reason to optimize your code. Outside of FRC? Yeah, optimization is important. In FRC? Not really, because 2 minutes out of your 2:15 match, your robot is an RC stacking machine, not an autonomous robot.

This is very true. You can tell the Java libraries for this year were rushed to be finished in time. Also, some of the hacks needed to interface with the native C++ code for the FPGA look like they could cause more issues. This is the same thing that makes the vision libraries slow: a lot of time is wasted marshalling the structs between Java and C++, and that looks to be the bottleneck. We've been trying to alleviate a lot of these issues in the DotNet port, and it's easier there since working with native code is easier, but it's still a challenge to clean up the code to make it faster while keeping it running the same way.

connor.worley
16-06-2015, 17:36
This is very true. You can tell the Java libraries for this year were rushed to be finished in time. Also, some of the hacks needed to interface with the native C++ code for the FPGA look like they could cause more issues. This is the same thing that makes the vision libraries slow: a lot of time is wasted marshalling the structs between Java and C++, and that looks to be the bottleneck. We've been trying to alleviate a lot of these issues in the DotNet port, and it's easier there since working with native code is easier, but it's still a challenge to clean up the code to make it faster while keeping it running the same way.

I wish there was a middle ground, something that could run natively like C++ but be like Java in the sense that it lacks a lot of the pitfalls C++ learners fall into.

Thad House
16-06-2015, 17:52
I wish there was a middle ground, something that could run natively like C++ but be like Java in the sense that it lacks a lot of the pitfalls C++ learners fall into.

Too bad having a garbage collector and running natively don't work together super nicely without a lot of work, because that's basically what that would be. I think later this summer I want to do a runtime shoot-out between LV, C++, C#, Java and Python (all the currently working languages) and see how much performance you gain or lose with each one.

Ether
16-06-2015, 18:15
I'm saying that you shouldn't be spending a bunch of extra time optimizing loops and building adaptable code on the assumption that you'll need that optimization or need to adapt it in the future. In most (not all) cases it will be a waste of time and money.

Was that last sentence intended to extend the scope?

"Waste of money" sounds like you are talking about something other than just an FRC robot.

Ether
16-06-2015, 18:17
...

Hi Evan. If you've got a couple of minutes, there's some unfinished business over on your "Blown CIM" thread.

evanperryg
16-06-2015, 18:35
Hi Evan. If you've got a couple of minutes, there's some unfinished business over on your "Blown CIM" thread.




I'm sorry... what are you talking about?

kylestach1678
16-06-2015, 18:44
I'm sorry... what are you talking about?

I believe that he may have got the wrong Evan.

evanperryg
16-06-2015, 18:56
I believe that he may have got the wrong Evan.

yep, found the other evan... (http://i.imgur.com/M0Tqhqv.png)

gblake
16-06-2015, 20:38
OP wanted to raise the bar for the FRC software quality.

What are some ways to do that without being too preachy (and without getting dragged into the weeds by topics like what-is-the-one-true-code-formatting-style, or the-one-true-way-to-use-a-Hungarian-naming-convention)?

Publish some reference designs (several... the number of good ways to do things will be legion) that guide students, and lead them to ask good questions about details, but that don't hand them answers on a silver platter? Students are given physical kit-bot parts. Maybe the kit-bot BOM should include some software parts they can put together to form a basic FRC software system (does this already exist?)?

Perhaps put a few good examples of software requirements specifications in the Kit of Parts?

Create simulators (that expose the appropriate APIs) that students can use when their own team's real equipment is unavailable, or during off-season practice sessions, thereby giving them more development time during build season, and more practice time before build season?

Something else?

Blake

notmattlythgoe
16-06-2015, 21:04
Was that last sentence intended to extend the scope ?

"Waste of money" sounds like you are talking about something other than just an FRC robot.




I'm talking in a more general world than FRC.

Jared Russell
16-06-2015, 23:49
Create simulators (that expose the appropriate APIs) that students can use when their own team's real equipment is unavailable, or during off-season practice sessions, thereby giving them more development time during build season, and more practice time before build season?

I've thought about this one a lot through the years, because I think that this is the biggest obstacle to highly functional (let alone high quality) code. Most teams simply don't have enough time with a functional robot to do effective iterative software development with hardware in the loop.

Simulation encompasses a wide spectrum of approaches, from mocking speed controller class interfaces all the way to doing a full dynamics simulation. The former is useful for debugging logic errors; the latter is required (to some level of fidelity) to actually do closed-loop testing of the program. This year 254 did a little of both for developing and debugging our control algorithms and designing our can grabbers (however, our approach was strongly tied to our use of Java...we built a "fake" WPIlib JAR and swapped it out to do simulated tests).

The problem with simulation beyond just mocking low level interfaces is that teams now need a way to specify their robot configuration to the simulation. This is tedious and error prone in most cases, and very difficult to do accurately (e.g. estimating friction, damping, bending, or inertial properties of robot mechanisms is hard). Even professionally, I've watched many PhDs lose hours of work having to debug configuration issues in their URDF files (a common format for expressing robot topologies). The best solution for FRC would be to provide examples for common FRC mechanisms and COTS drivetrains and let teams go from there...but I worry that the complexity gets large so quickly that if a team can navigate that, well, they are probably not the ones who REALLY need programming help.
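The "mocking speed controller class interfaces" end of that spectrum might look roughly like the sketch below. Note that MotorOutput, FakeMotor, and arcadeDrive here are invented names for illustration, not actual WPILib types or the approach 254 actually shipped:

```java
// Sketch of mocking a speed-controller interface for simulated testing.
// MotorOutput and FakeMotor are invented names, not WPILib classes.
public class SimSketch {
    // Robot code depends only on this narrow interface...
    public interface MotorOutput {
        void set(double speed); // commanded output, -1.0 .. 1.0
        double get();
    }

    // ...so in simulation we swap in a fake that just records the command,
    // letting logic be tested without a roboRIO or real motor controllers.
    public static class FakeMotor implements MotorOutput {
        private double speed;
        public void set(double s) { speed = Math.max(-1.0, Math.min(1.0, s)); }
        public double get() { return speed; }
    }

    // Example robot logic written against the interface, not the hardware.
    public static void arcadeDrive(MotorOutput left, MotorOutput right,
                                   double throttle, double turn) {
        left.set(throttle + turn);
        right.set(throttle - turn);
    }
}
```

This only catches logic errors; as the post says, closed-loop behavior still needs a dynamics model of some fidelity behind the fakes.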

gblake
16-06-2015, 23:58
This is very true. You can tell the Java libraries for this year were rushed to be finished in time. Also, some of the hacks needed to interface with the native C++ code for the FPGA look like they could cause more issues. This is the same thing that makes the vision libraries slow: a lot of time is wasted marshalling the structs between Java and C++, and that looks to be the bottleneck. We've been trying to alleviate a lot of these issues in the DotNet port, and it's easier there since working with native code is easier, but it's still a challenge to clean up the code to make it faster while keeping it running the same way.

Some brave and enterprising young soul might want to try creating libraries that use JNI to implement these interfaces ... if JNI isn't being used already.

Thad House
17-06-2015, 00:48
Some brave and enterprising young soul might want to try creating libraries that use JNI to implement these interfaces ... if JNI isn't being used already.

They are all JNI. The issues come from not being able to pass by reference, so passing structs, such as the ones used for vision, is still troublesome even with JNI.

Some of the things that I know are fine, but still make me cringe, involve the use of generics and enums. Since Java on the cRIO did not support either of these, most of the Java code doesn't have them. However, you can tell that some of the new classes do use generics and enums. I know this is fine, but you can tell there is a disconnect between the old and the new code, and it would be nice for somebody to take a month and thoroughly clean it up. I bet with some cleanup to include new features, and maybe some algorithm refactoring, we could get the code much nicer and easier to work with. Maybe they could fix all the spelling and punctuation errors in the comments too :D
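A small invented example of the old-versus-new disconnect being described (not actual WPILib code): cRIO-era Java had to use int constants and raw collections, while post-cRIO code can use enums and generics.

```java
// Invented illustration of pre-generics vs. modern Java style.
import java.util.ArrayList;
import java.util.List;

public class StyleGap {
    // Old style: bare int constants, with no type checking at call sites.
    public static final int kForward = 0;
    public static final int kReverse = 1;

    // New style: an enum is self-documenting and checked by the compiler...
    public enum Direction { FORWARD, REVERSE }

    // ...and generics remove the casts raw collections used to require.
    public static List<Direction> parse(String csv) {
        List<Direction> out = new ArrayList<>();
        for (String token : csv.split(",")) {
            out.add(Direction.valueOf(token.trim()));
        }
        return out;
    }
}
```

Mixing the two styles in one library is what produces the old/new disconnect the post describes.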

SoftwareBug2.0
17-06-2015, 01:37
We've begun to encounter the issue that WPIlib for Java is poorly coded itself. The libraries relating to vision code are particularly messy. This may be why so many teams opt to build their own libraries entirely. ...

Interesting. The C++ version always gave me the impression that it was written by someone who really wanted to be writing Java.

notmattlythgoe
17-06-2015, 06:18
Interesting. The C++ version always gave me the impression that it was written by someone who really wanted to be writing Java.

That's funny, because I always get the impression that the Java libraries were written by a C++ developer.

wireties
17-06-2015, 07:11
That's funny, because I always get the impression that the Java libraries were written by a C++ developer.

Pretty common in embedded environments.

evanperryg
17-06-2015, 09:48
They are all JNI. The issues come from not being able to pass by reference, so passing structs, such as the ones used for vision, still has trouble even with the JNI.

Some of the things that I know are fine, but still make me cringe, involve the use of generics and enums. Since Java on the CRIO did not support either of these, most of the Java code doesn't have them. However, you can tell that some of the new classes do use generics and enums. I know this is fine, but you can tell there is a disconnect between the old and the new code, and something that would be nice would be for somebody to take a month and thoroughly clean it up. I bet with some cleanup to include new features, and maybe some algorithm refactoring, we could get the code much nicer and easier to work with. Maybe they could fix all the spelling and punctuation errors in the comments too :D

We're working on it. Among other things, we've also removed three nested while(true) loops with no delays, and the notorious error message that swears at you.
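For readers wondering why a while(true) with no delay matters: it busy-waits, pinning a CPU core while accomplishing nothing between checks. A sketch of the usual fix (invented names, not the actual WPILib code) is to sleep briefly between polls:

```java
// Invented sketch: replace a busy-wait with a polling loop that sleeps.
import java.util.function.BooleanSupplier;

public class PollLoop {
    // The removed pattern was roughly: while (true) { /* check, no sleep */ }
    // which spins a core at 100%. Sleeping between checks yields the CPU.
    public static void waitUntil(BooleanSupplier ready, long pollMillis) {
        while (!ready.getAsBoolean()) {
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return;
            }
        }
    }
}
```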

Thad House
17-06-2015, 13:04
We're working on it. Among other things, we've also removed three nested while(true)s with no delays, and the notorious error message that swears at you.

I would love to see this when you guys get done, or close to done. Maybe some of the fixes would be useful in the other ports as well.

Also, which nested while loops? I haven't noticed any that have caused issues so far, but maybe I just don't remember. I've read too much code lately.

SoftwareBug2.0
18-06-2015, 00:13
That's funny, because I always get the impression that the Java libraries were written by a C++ developer.

It could be both :). For best results, assign someone good at Java to do the Java version and a C++ expert to do the C++ version. For "interesting" results, assign the Java guy to do the C++ and a C++ guy to do the Java.

Anyway, here are a few of the reasons that the C++ looks like somebody wanted to write Java:
-Pointers to stuff passed around without specific notes about ownership
-Abstract base classes used like Java's interfaces in places where templates might be more appropriate
-Virtual functions overused
-Types that can't be used like normal C++ variables (because they lack copy or assignment operators) being the rule rather than the exception

What C++isms do you see in the Java version?

notmattlythgoe
18-06-2015, 07:50
It could be both :). For best results, assign someone good at Java to do the Java version and a C++ expert to do the C++ version. For "interesting" results, assign the Java guy to do the C++ and a C++ guy to do the Java.

Anyway, here are a few of the reasons that the C++ looks like somebody wanted to write Java:
-Pointers to stuff passed around without specific notes about ownership
-Abstract base classes used like Java's interfaces in places where templates might be more appropriate
-Virtual functions overused
-Types that can't be used like normal C++ variables (because they lack copy or assignment operators) being the rule rather than the exception

What C++isms do you see in the Java version?

Underscores, underscores everywhere.

Thad House
19-06-2015, 17:59
What's also great about WPILib is that whenever you initialize a digital port, delete it, and create a new one, the HAL leaks 6 bytes. Now, since many teams don't do this, it's not a big deal, but still, it's a little odd that an official library has a memory leak, even such a small, rare one.

faust1706
19-06-2015, 21:05
What's also great about WPILib is that whenever you initialize a digital port, delete it, and create a new one, the HAL leaks 6 bytes. Now, since many teams don't do this, it's not a big deal, but still, it's a little odd that an official library has a memory leak, even such a small, rare one.

Has anyone run valgrind, or something similar, on as much of the WPILib as they could?

SoftwareBug2.0
20-06-2015, 01:36
Has anyone run valgrind, or something similar, on as much of the WPILib as they could?

I don't know how much people have tried it, but the version from here (https://usfirst.collab.net/sf/projects/wpilib/) has a static assert that's basically sizeof(void*)==sizeof(int32_t).

Peter Johnson
20-06-2015, 02:08
What's also great about WPILib is that whenever you initialize a digital port, delete it, and create a new one, the HAL leaks 6 bytes. Now, since many teams don't do this, it's not a big deal, but still, it's a little odd that an official library has a memory leak, even such a small, rare one.

Note that at the HAL level, freeDIO() is the opposite of allocateDIO(), and not the opposite of initializeDigitalPort(). The initializeDigitalPort function allocates the memory you're talking about, and was only intended to be called once (ever) per port, with the caller being responsible for saving the resulting pointer across multiple uses. The lack of an uninit function to free the memory in question is admittedly poor API design, but it's worth noting that the higher level WPILib classes use the HAL consistent with the above (call init once, then just use allocate/free), and thus have no memory leaks in this case even if you create/destroy multiple times per port.

I've found that the WPI folks are very welcoming of patches... I'm sure a patch to add appropriate uninit functions to the HAL would be accepted in short order.
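The init-once-then-reuse pattern described above can be sketched generically. PortCache and PortHandle below are invented names for illustration, not the actual HAL API:

```java
// Invented sketch of "initialize once per port, then reuse the handle."
// Repeated requests return the cached handle instead of allocating again,
// which is what prevents the leak described in this thread.
import java.util.HashMap;
import java.util.Map;

public class PortCache {
    // Stand-in for the per-port structure the HAL allocates (invented).
    public static class PortHandle {
        final int channel;
        PortHandle(int channel) { this.channel = channel; }
    }

    private static final Map<Integer, PortHandle> cache = new HashMap<>();
    private static int allocations = 0;

    // Initialize a channel at most once; later calls return the cached handle.
    public static synchronized PortHandle initialize(int channel) {
        return cache.computeIfAbsent(channel, ch -> {
            allocations++;
            return new PortHandle(ch);
        });
    }

    public static int allocationCount() { return allocations; }
}
```

With this shape, create/free/create cycles at the higher level never re-run the allocation, so there is nothing to leak.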

Thad House
20-06-2015, 02:21
Note that at the HAL level, freeDIO() is the opposite of allocateDIO(), and not the opposite of initializeDigitalPort(). The initializeDigitalPort function allocates the memory you're talking about, and was only intended to be called once (ever) per port, with the caller being responsible for saving the resulting pointer across multiple uses. The lack of an uninit function to free the memory in question is admittedly poor API design, but it's worth noting that the higher level WPILib classes use the HAL consistent with the above (call init once, then just use allocate/free), and thus have no memory leaks in this case even if you create/destroy multiple times per port.

I've found that the WPI folks are very welcoming of patches... I'm sure a patch to add appropriate uninit functions to the HAL would be accepted in short order.

However, if you don't keep the same digital input and instead create a new one, it does initialize a new digital port. So calling

Talon t = new Talon(0);
t.free();
t = new Talon(0);

leaks the memory, because initializeDigitalPort always returns a new digital port instead of reusing an old one (at least on the Java side), and t.free() does not actually release the digital port structure.

I have a few bugs I plan on submitting to WPI that I have found.


I do want to say thank you for doing the python port. I have been able to use that for some help as well, and am implementing the DotNet simulator to use a dictionary similar to the python one, and it should be directly compatible with the websim api.

gblake
24-06-2015, 20:10
Before it slips off everyone's radar for good, I thought I would give this thread one more poke.

OP wanted to raise the bar for FRC software quality.

What are some ways to do that? ...

connor.worley
24-06-2015, 20:18
After thinking about this more, I'm not sure if it's even a problem. I've never expected FIRST to take in a bunch of kids and spit out seasoned engineers. The main goal is just to get them to check a STEM box when they're choosing a major for college. So let the enthusiasts develop as much as they please, but I think the average experience is already pretty good as far as accomplishing FIRST's goal goes. Just getting a joystick to control a motor is pretty exciting for most people.

marshall
24-06-2015, 22:11
After thinking about this more, I'm not sure if it's even a problem. I've never expected FIRST to take in a bunch of kids and spit out seasoned engineers. The main goal is just to get them to check a STEM box when they're choosing a major for college. So let the enthusiasts develop as much as they please, but I think the average experience is already pretty good as far as accomplishing FIRST's goal goes.

Agreed.

Just getting a joystick to control a motor is pretty exciting for most people.

Hello worl...AHHHH!!! It's out of control! Jane, stop this crazy thing!

Kevin Leonard
25-06-2015, 08:25
I think it's great if my students develop advanced programming and controls. It's a great thing to learn, and it can be incredibly inspiring to see the robot perform amazing functions on the field that would be impossible or very difficult otherwise.
However, if I have a few students who go from no programming experience to some programming experience, and this makes them want to pursue it further, that's just as good to me, if not more in line with FIRST's goals.

I do, however, wish I knew how to keep a large programming team engaged (and perhaps that's the topic for another thread), as it's difficult to let every programming student work on robot code when you have a large team.

GeeTwo
25-06-2015, 10:18
However, if I have a few students who go from no programming experience to some programming experience, and this makes them want to pursue it further, that's just as good to me, if not more in line with FIRST's goals.

I think that falls under inspiration. Last time I checked, it was one of FIRST's goals.


I do, however, wish I knew how to keep a large programming team engaged (and perhaps that's the topic for another thread), as it's difficult to let every programming student work on robot code when you have a large team.
Perhaps this recent thread (http://www.chiefdelphi.com/forums/showthread.php?t=137499)?

Andrew Schreiber
25-06-2015, 10:29
Before it slips off everyone's radar for good, I thought I would give this thread one more poke.

Honestly, games where autonomous has tiered rewards that are actually worth something and that are not penalized for attempting them.

2015 - Pretty terrible, the only task you could accomplish on your own was REALLY hard. The other tasks all required your partners to also do something. (I don't count can burglaring as an auton task)

2014 - Almost good, the penalty for attempting to score a ball was pretty harsh though.

2013 - Great. 0 penalty for attempting to score in any of the goals. Even drive forward and dump 2 in the low goal was viable and provided a reasonable reward. And the reward -> difficulty scaled appropriately to even the upper tier.

2012 - Scoring was MUCH harder than 2013 so meh.

2011 - Most teams struggled to score, let alone scoring uber tubes autonomously.

2010 - Literally 0 points.

2009 - There was a game?

2008 - Great. Even just driving forward was worth points, bonus points if you could turn at the end of it.

2007 - See 2011 only strike the word uber

2006 - See 2013

2005 - meh, not a whole lot of teams attempted it. Vision was REALLY hard.

2004 - Very few teams attempted to knock off the balls. But a lot of folks prepped for teleop, kinda decent but not really.

2003 - Robot Demolition Derby isn't really a good auton, sorry.


If teams have a reason to write good code they probably will write some. But if they are penalized for attempting auton teams will just pass because the risk is not worth the reward.

Kevin Leonard
25-06-2015, 10:43
Honestly, games where autonomous has tiered rewards that are actually worth something and that are not penalized for attempting them.

2015 - Pretty terrible, the only task you could accomplish on your own was REALLY hard. The other tasks all required your partners to also do something. (I don't count can burglaring as an auton task)

2014 - Almost good, the penalty for attempting to score a ball was pretty harsh though.

2013 - Great. 0 penalty for attempting to score in any of the goals. Even drive forward and dump 2 in the low goal was viable and provided a reasonable reward. And the reward -> difficulty scaled appropriately to even the upper tier.

2012 - Scoring was MUCH harder than 2013 so meh.

2011 - Most teams struggled to score, let alone scoring uber tubes autonomously.

2010 - Literally 0 points.

2009 - There was a game?

2008 - Great. Even just driving forward was worth points, bonus points if you could turn at the end of it.

2007 - See 2011 only strike the word uber

2006 - See 2013

2005 - meh, not a whole lot of teams attempted it. Vision was REALLY hard.

2004 - Very few teams attempted to knock off the balls. But a lot of folks prepped for teleop, kinda decent but not really.

2003 - Robot Demolition Derby isn't really a good auton, sorry.


If teams have a reason to write good code they probably will write some. But if they are penalized for attempting auton teams will just pass because the risk is not worth the reward.

I consider canburgling an auto task - THAT WAS STILL REALLY HARD, especially if you wanted to do it at more than a regional level.

2012 autonomous was just as good as 2013 IMO, because scoring low baskets was easy and worth 4 pts/score (vs 2013's 2 pts/score), and feeding balls to a partner was another great autonomous task that was easy.

2014 would have been perfect as well, were it not so punishing to miss autonomous.

Really, the GDC has gotten autonomous right three times: 2008, 2012, and 2013.

I think 2012 was the best year for programmers. Improved controls turned into improved results for most teams. Improved autonomous was valuable, and there were effective tasks to do for teams at every level, programming-wise.

plnyyanks
25-06-2015, 11:08
I think 2012 was the best year for programmers. Improved controls turned into improved results for most teams. Improved autonomous was valuable, and there were effective tasks to do for teams at every level, programming-wise.

Definitely agree. That year was the holy grail: good game design, a nice correlation between automation difficulty and point return, and the tools to make it happen. The robot code (https://github.com/frc1124/2012) I wrote that year is one of the few things from that long ago that I'm still proud of.

Monochron
25-06-2015, 11:09
I'll throw out my 2 cents for what I think is the main thing that holds back the evolution of programming on a team:

Getting a mechanical system to the base state of "it works" (regardless of how well) takes a lot more effort and time than programming does. By that I mean that code changes can be made quickly and efficiently by a few people, while mechanical changes often involve a team of people machining, bolting, cutting, lifting, etc. This may sound like programming could evolve quickly, but what usually happens is that mechanical issues take precedence in the design process. When engineers are making a big modification to a mechanical part, they often like to keep all other variables static, which means programming changes don't go through while the mechanism still needs to be tested and modified.

Once the code "works" it can be hard to justify changing it when you know that you are already sinking time into changing mechanical or electrical systems.


A good way to avoid these situations is to ensure that your programming team has an adequate testing environment, so that code can evolve in isolation from ever-changing mechanical parts. Set up a branching model so that you can give the mechanical folks a working build and then continue to develop in parallel. This is one of the things we strove for this past year, and I think it made a big difference in the quality of our code.
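One cheap way to get that kind of isolation is a thin seam between robot logic and hardware, so the logic can be unit-tested off-robot. A minimal Python sketch (the class and function names here are made up for illustration; they are not WPILib classes):

```python
# Illustrative hardware seam: drive logic is exercised against a fake motor,
# so it can keep evolving while the real robot is being rebuilt.

class FakeMotor:
    """Stands in for a real speed controller during off-robot tests."""
    def __init__(self):
        self.value = 0.0

    def set(self, value):
        # Real controllers clamp commands to [-1, 1]; mimic that here.
        self.value = max(-1.0, min(1.0, value))

def arcade_drive(left_motor, right_motor, throttle, turn):
    """Classic arcade mix: throttle +/- turn, sent to each side."""
    left_motor.set(throttle + turn)
    right_motor.set(throttle - turn)

# Off-robot check, no hardware required:
left, right = FakeMotor(), FakeMotor()
arcade_drive(left, right, throttle=0.5, turn=0.25)
```

On the real robot the same arcade_drive function would be handed the actual motor controller objects; the tests only ever see the fakes.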

MrRoboSteve
25-06-2015, 11:41
What effects would this change to the rules have on software quality?


R15 Teams must stay “hands-off” their bagged ROBOT elements during the following time periods:

A. between Stop Build Day and their first event,

B. during the period(s) between their events, and

C. outside of Pit hours while attending events.

Additional time is allowed as follows:

D. After Kickoff, there are no restrictions on when software may be developed.

E. On days a team is not attending an event, they may continue development of any items permitted per R17, including items
listed as exempt from R17, but must do so without interfacing with the ROBOT.

F. Teams attending 2-day events may access their ROBOTS per the rules defined in the 2015 Administrative Manual Section 5.6: ROBOT Access Period - for Teams Attending District Events.

G. ROBOTS may be exhibited per 2015 Administrative Manual Section 5.5.3: Robot Displays.

H. Teams may operate their ROBOT for the purpose of testing software updates.


I purposely left out fixing the robot in the new language.

GeeTwo
25-06-2015, 11:47
I'm only going back to 2012, as befits my team's experience:

2015 - Pretty terrible, the only task you could accomplish on your own was REALLY hard. The other tasks all required your partners to also do something. (I don't count can burglaring as an auton task)

2014 - Almost good, the penalty for attempting to score a ball was pretty harsh though.

2013 - Great. 0 penalty for attempting to score in any of the goals. Even drive forward and dump 2 in the low goal was viable and provided a reasonable reward. And the reward -> difficulty scaled appropriately to even the upper tier.

2012 - Scoring was MUCH harder than 2013 so meh.

...

If teams have a reason to write good code they probably will write some. But if they are penalized for attempting auton teams will just pass because the risk is not worth the reward.

Apart from the mobility bonus "gimme" in 2014 (really? drive across a line?) I thought that 2012 and 2014 were nearly identical. You got bonus points for scoring in hybrid/auto. If you miss, you have to pick up the ball again to score it for no bonus. And while the two years of experience certainly helped, scoring unopposed in AA seemed a lot easier than in RR, both high and low.

2015 was the only one that failed to reward incrementally, and the number of things that could go wrong caused a number of teams (including mine) to decide that none of our routines was worth the risk. I am surprised at how many teams did NOT have a "drive into the auto zone" auto. Granted, it was only three points, but it was essentially the same as the mobility bonus in 2014, which it seemed like the great majority of teams did.

I consider canburgling an auto task- THAT WAS STILL REALLY HARD, especially if you wanted to do it at more than a regional level.

I do not consider canburgling an auto task, but I can see the point - it was a scarce worm that went to the early bird. The two reasons not to were that it was not rewarded directly for being autonomous, and (more importantly) most of the canburglar programming was a single actuator with no sensor feedback. That is, it was best solved as a mechanical problem, not an automation problem.

I think 2012 was the best year for programmers. Improved controls turned into improved results for most teams. Improved autonomous was valuable, and there were effective tasks to do for teams at every level, programming-wise.
I don't recall Rebound Rumble this way at all, but I wasn't mentoring yet. As I recall, if you didn't do the kinect (and I saw few teams that did), you had either very easy (score preloaded balls; tip one bridge) or rather hard tasks (both; tip multiple bridges; pick up balls and score them) in auto/hybrid. Please expand on this.

GeeTwo
25-06-2015, 11:51
What effects would this change to the rules have on software quality?

H. Teams may operate their ROBOT for the purpose of testing software updates.

I purposely left out fixing the robot in the new language.

If teams can't fix the robot (or even effectively troubleshoot wiring, which usually amounts to the same thing), they won't be able to operate it to test software once something breaks. If the rule is actually followed, it would have about the same average effect as moving bag and tag back to 12:15 am on Wednesday.

Monochron
25-06-2015, 12:09
Apart from the mobility bonus "gimme" in 2014 (really? drive across a line?) I thought that 2012 and 2014 were nearly identical. You got bonus points for scoring in hybrid/auto. If you miss, you have to pick up the ball again to score it for no bonus.
The big difference for 2014 was that unscored auto balls could NOT have ASSIST points applied to them, which was the primary means of scoring that year. So if you missed an auto shot (or 3), you then needed to clear those balls before you could start scoring points the way your opponents could. You lost valuable cycle time playing cleanup for a relatively meager number of points.

Kevin Leonard
25-06-2015, 12:22
I'm only going back to 2012, as befits my team's experience:



Apart from the mobility bonus "gimme" in 2014 (really? drive across a line?) I thought that 2012 and 2014 were nearly identical. You got bonus points for scoring in hybrid/auto. If you miss, you have to pick up the ball again to score it for no bonus. And while the two years of experience certainly helped, scoring unopposed in AA seemed a lot easier than in RR, both high and low.

2015 was the only one that failed to reward incrementally, and the number of things that could go wrong caused a number of teams (including mine) to decide that none of our routines was worth the risk. I am surprised at how many teams did NOT have a "drive into the auto zone" auto. Granted, it was only three points, but it was essentially the same as the mobility bonus in 2014, and it seemed like the great majority of teams did it.



I do not consider canburgling an auto task, but I can see the point - it was a scarce worm that went to the early bird. The two reasons not to were that it was not rewarded directly for being autonomous, and (more importantly) most of the canburglar programming was a single actuator with no sensor feedback. That is, it was best solved as a mechanical problem, not an automation problem.


I don't recall Rebound Rumble this way at all, but I wasn't mentoring yet. As I recall, if you didn't do the kinect (and I saw few teams that did), you had either very easy (score preloaded balls; tip one bridge) or rather hard tasks (both; tip multiple bridges; pick up balls and score them) in auto/hybrid. Please expand on this.

In Rebound Rumble during auto, you could:
Feed a partner balls
Score low goals
Score middle goals
Score high goals
Grab the two balls off the side bridge and shoot those as well
Grab the two balls off the middle bridge and shoot those as well

In Aerial Assist, if you missed balls, you couldn't score during teleop until those balls were scored. Also, in 2012 if you missed, you could score those balls from the key during teleop, where opponents couldn't defend you, where in 2014 you got rammed if you tried to pick up your misses.

GeeTwo
25-06-2015, 14:16
The big difference for 2014 was that unscored auto balls could NOT have ASSIST points applied to them, which was the primary means of scoring that year. So if you missed an auto shot (or 3), you then needed to clear those balls before you could start scoring points the way your opponents could. You lost valuable cycle time playing cleanup for a relatively meager number of points.

Before the season, that's what I expected. The actuality was that most of our matches seemed to devolve into one robot on offense and two on defense on both sides; no assist points, and few truss shots. It was really disappointing seeing how poor many teams' ball pickups were.

I only recall one time where we had to chase a wayward auto ball for more than a few seconds. Our pickup roller was somehow still in the track of our launcher; it started upright to be within the frame perimeter. The ball went backwards (not quite a reverse truss shot). If it had happened in teleop, the effect would have been the same.

I guess if you had a low-percentage auto, it was not worth loading balls at all in 2014, or you should have just stuffed them in the low goal. If you were much over 60%, the risk was more than justified. And even a box on wheels should be able to score a low goal auto pretty consistently.
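(An aside: a threshold around 60% is consistent with a simple break-even calculation. The point values below are illustrative assumptions, not exact 2014 scoring.)

```python
def break_even_probability(bonus_points, miss_cost_points):
    """Minimum make-probability p at which attempting an auto shot pays off.

    Expected gain of attempting: p * bonus - (1 - p) * miss_cost.
    Setting that to zero gives p = miss_cost / (bonus + miss_cost).
    """
    return miss_cost_points / (bonus_points + miss_cost_points)

# Assumed numbers: a 10-point auto bonus weighed against roughly 15 points
# of teleop scoring lost while cleaning up a miss puts break-even at 60%.
p = break_even_probability(10, 15)
```

The exact threshold obviously shifts with how costly a miss really is; the point is just that the risk/reward line sits well above 50% when cleanup is expensive.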

In Rebound Rumble during auto, you could: ..
Feed a partner balls

OK, I missed that. I don't remember passing balls between robots being illegal, but I don't recall it ever happening, either. Or is this something else?

Also, in 2012 if you missed, you could score those balls from the key during teleop, where opponents couldn't defend you, where in 2014 you got rammed if you tried to pick up your misses.

This was a difference in teleop, not auto. The key was protected whether or not the balls you were carrying had been preloaded in the robot (or if you didn't have any balls). In both games a loose ball was a loose ball, whether or not it was initially loaded in the robot; in RR it was available but not a liability, and in AA you couldn't get another ball until you scored it. The "opportunity cost" of missing a shot was usually the same whether the shot was taken in auto or teleop in both games.

Kevin Leonard
25-06-2015, 14:27
Before the season, that's what I expected. The actuality was that most of our matches seemed to devolve into one robot on offense and two on defense on both sides; no assist points, and few truss shots. It was really disappointing seeing how poor many teams' ball pickups were.

I only recall one time where we had to chase a wayward auto ball for more than a few seconds. Our pickup roller was somehow still in the track of our launcher; it started upright to be within the frame perimeter. The ball went backwards (not quite a reverse truss shot). If it had happened in teleop, the effect would have been the same.

I guess if you had a low-percentage auto, it was not worth loading balls at all in 2014, or you should have just stuffed them in the low goal. If you were much over 60%, the risk was more than justified. And even a box on wheels should be able to score a low goal auto pretty consistently.


OK, I missed that. I don't remember passing balls between robots being illegal, but I don't recall it ever happening, either. Or is this something else?

This was a difference in teleop, not auto. The key was protected whether the balls you were carrying had been preloaded in the robot (or if you didn't have any balls). In both games a loose ball was a loose ball, whether it was initially loaded in the robot; in RR it was available but not a liability, and in AA you couldn't get another ball until you scored it. The "opportunity cost" of missing a shot was usually the same whether the shot was taken in auto or teleop in both games.

Did you watch 2014 Einstein Finals-2?
Missed autonomous shots caused the 254-469-2848-74 alliance to lose a match because of how long it took them to re-score those missed balls. Whereas missed autonomous balls on Einstein in 2012 didn't mean you automatically lost the match.
What this meant was that if you messed up auto at most events during eliminations, you lost the match.

As for feeding balls to partners in 2012, 4334 did it throughout Archimedes Eliminations, as well as on Einstein. 20 did it as the third robot of the Connecticut Regional Championship alliance.

Basically the problem with 2014 autonomous was that letting your partner run their autonomous routine was a major liability if they failed, whereas in 2012 and 2013, missing auto shots just lost you that autonomous score.

GeeTwo
25-06-2015, 15:04
Did you watch 2014 Einstein Finals-2?

No, I admit that after Bayou I lost interest.

As for feeding balls to partners in 2012, 4334 did it throughout Archimedes Eliminations, as well as on Einstein. 20 did it as the third robot of the Connecticut Regional Championship alliance.

Was this only in eliminations? Was the ball passed on the floor, or was there some sort of handshake? Honestly, I was surprised more teams couldn't do a bumper-to-bumper or short-pass ball transfer in Aerial Assist, where you could effectively score 10 or 20 points each time you did it; it's nice to know that some teams did this in a game that didn't directly call for it.

Knufire
25-06-2015, 18:52
The two ways I saw ball passing occur in 2012 were either on the floor (through the shooting robot's intake) or a light toss into the shooting robot's hopper if they had one. I know 3322 did it very early in the season.

EricH
25-06-2015, 20:57
What effects would this change to the rules have on software quality?

H. Teams may operate their ROBOT for the purpose of testing software updates.

I purposely left out fixing the robot in the new language.

Minimal, at best, and it might actually make it worse.

As pointed out, no fixing (or other electro-mechanical work, presumably) would be allowed, thus as soon as something broke, no further testing of software could be done.

But... most upgrades of software tend to work with (and follow after) upgrades in hardware. No upgrades in hardware mean software doesn't need upgrading.

And there is one other item that I can see happening. This is why I think it could become WORSE code, not better.

A team could, theoretically, upload a base code right before the bag that has driving disabled, make one upgrade (enabling the drive code), and spend the rest of the allowed time "testing the upgrade" - which to everybody else is "driver practice". As a side note, a good, practiced driver can often do at least as well with lousy code as with good code - just a touch of compensating needed, maybe.

evanperryg
25-06-2015, 23:40
The actuality was that most of our matches seemed to devolve into one robot on offense and two on defense on both sides; no assist points, and few truss shots.
...
I only recall one time where we had to chase a wayward auto ball for more than a few seconds.
...
And even a box on wheels should be able to a score a low goal auto pretty consistently.


Interesting that your experience with AA was entirely the opposite of mine. What I saw was teams all contributing in some way offensively, then contributing in some way defensively when they completed their offensive job for that cycle. What I also saw was teams doing their best at every level of play to keep opposing alliance members away from their auto balls. (see: Crossroads, Einstein finals, IRI) I also saw lots of boxes getting caught on walls, or just not moving at all in auto.

I do not consider canburgling an auto task, but I can see the point - it was a scarce worm that went to the early bird. The two reasons not to were that it was not rewarded directly for being autonomous, and (more importantly) most of the canburglar programming was a single actuator with no sensor feedback. That is, it was best solved as a mechanical problem, not an automation problem.

Having worked with 548's brutal can pullers, I can attest to the fact that the fastest (or at least some of the fastest) did use sensors. A potentiometer was used to detect when the pullers were all the way down, so they would move forward as fast as possible.

As for the quality and complexity of FRC code, I think we can all agree it'd be great to teach great programming. It'd also be great if more teams actually did great programming. Yet, ultimately, we have the kind of hardware that allows us to be sloppy, libraries that are poorly made themselves, and generally diminishing returns for making code more efficient. Great code can be made in FRC and it can have good returns, but unless you have everything else about your robot down to laser-precise perfection, there's probably something else that could be made better more easily, whose returns would scale better with the effort you put into it. For the sake of inspiration, have at it because programming is vital to our future generation of engineers. However, for the sake of making a consistently successful team, there is often something that will give greater returns for less effort.

orangelight
30-06-2015, 11:24
I do not consider canburgling as an auto task, but I can see the point - it was a scarce worm that went to the early bird. The two reasons not were that it was not rewarded directly because it was autonomous, and (more importantly) most of the canburglar programming was a single actuator with no sensor feedback. That is, it was best solved as a mechanical problem, not an automation problem.
Programming spent hours tuning our canburglars to be the fastest they could possibly be and still be accurate. At FiM Livonia, we probably won the can battle against 503 because they had a programming issue that caused theirs to be deployed a quarter second slower than they should have been.

nate12345678
30-06-2015, 18:59
As a novice on this website, nothing had really drawn my interest until I saw this thread. This has been one of the most interesting threads I've seen anywhere.

Now, as I have been an FRC programmer for two years, I think I might be able to offer some observations on this subject. On the mechanical/electrical side of the argument, they are absolutely correct: small mechanical changes can absolutely make a huge difference, from flipping cans on your mentor to stacking 3 capped 6-stacks in a match. However, that change won't be any good if the code is inefficient for the driver. Sometimes, automatically doing multi-step tasks can reduce your time at the feeder by 10 seconds a stack. I think part of the reason programmers get so restless and irritated with mechanical is the fact that it takes about 2-3 weeks to finish the first iteration of the code, but 12 weeks to make a robot mechanically ready for champs. Programmers maybe get one hour a week to test on a robot, but what about the other 20+ hours? This seems like a major issue for our programming team. More people need to spend time working on extra-robot programming. I spent the entire season working on our scouting application, and I must say that it was one of the best programming experiences I've ever had.

tl;dr: every part of making a robot is necessary; programmers need things to do when not testing.
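On the point above about automating multi-step tasks: one lightweight way to structure such a routine is a step sequence that advances as each action reports completion. A minimal Python sketch (the steps are illustrative, not any real team's routine):

```python
# Minimal sketch of a multi-step automated task as a step sequence.

class StepSequence:
    """Runs a list of (name, action) steps; each action returns True when done."""
    def __init__(self, steps):
        self.steps = list(steps)
        self.index = 0

    def run_once(self):
        """Call periodically (e.g. every control loop); advances past finished steps."""
        if self.index >= len(self.steps):
            return True  # whole sequence complete
        name, action = self.steps[self.index]
        if action():
            self.index += 1
        return self.index >= len(self.steps)

# Toy usage: three instantly-finishing steps, recorded in order.
log = []
seq = StepSequence([
    ("open_claw",  lambda: log.append("open") or True),
    ("lower_lift", lambda: log.append("lower") or True),
    ("close_claw", lambda: log.append("close") or True),
])
while not seq.run_once():
    pass
```

On a real robot, each action would command a mechanism and return True only when a sensor confirms it finished; the driver triggers the whole sequence with one button instead of babysitting each step.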