#1
Re: On the quality and complexity of software within FRC
If you do this, allow submissions after competition. Lots of cool features get added between competitions.
#2
Re: On the quality and complexity of software within FRC
Quote:
#3
Re: On the quality and complexity of software within FRC
I can say this: I was the programming team and half of the mechanical team, and though I spent a lot of time on the code, it worked and that was about it. Every time I finally got around to fixing the spaghetti in my code, the robot mechanically suicided and I no longer had time for the code. That went on constantly until build was over.
#4
Re: On the quality and complexity of software within FRC
Throwing in my 2 cents about some of the reasons teams might have poor software.
I just think that, overall, the structure and culture of FRC honestly isn't that great for learning software unless the students are really motivated. I've definitely gotten a lot out of it, but I've also been doing a lot of research and outside work. In my one year of experience, I ran into many of the above issues.
Also, forbidding the modification of release code (that is, the working code at competition) is a great idea. Aside from hotfixes, emergencies, or changing something trivial like autonomous values or controller inputs, messing with code at competition is bad practice.
#5
Re: On the quality and complexity of software within FRC
Quote:
#6
Re: On the quality and complexity of software within FRC
Quote:
The Gitflow workflow is, in my opinion, perfect for FRC teams. Work on feature branches off develop throughout build season; then, once drive practice gets close, start a release branch and put out a master update for the drivers to practice on (or, of course, just have them practice on develop; it depends on how much driver practice you want to do). Then, a few days before each competition, start another release: clean up the code, disable any extraneous or nonworking features, decide on autonomous routines, etc. Changes at competition should be hotfix branches only.
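For anyone new to Gitflow, here is a minimal sketch of what that season timeline can look like as plain git commands. The branch names, tag name, and week numbers are illustrative assumptions, not a prescribed layout for any team.
Code:
# Build season: feature branches come off develop and merge back when done.
git checkout develop
git checkout -b feature/auto-tote-stack
# ...commit work, test on the practice chassis...
git checkout develop
git merge --no-ff feature/auto-tote-stack

# Near drive practice / before an event: cut a release branch, stabilize it,
# then merge it to master and tag the code the drivers will actually run.
git checkout -b release/week1 develop
# ...disable unfinished features, pick autonomous routines...
git checkout master
git merge --no-ff release/week1
git tag -a week1-comp -m "Code deployed at week 1 competition"
git checkout develop
git merge --no-ff release/week1   # carry release fixes back into develop

# At competition: emergency changes go on a hotfix branch off master only.
git checkout -b hotfix/drive-deadband master
# ...fix, test in the pits...
git checkout master
git merge --no-ff hotfix/drive-deadband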
#7
Re: On the quality and complexity of software within FRC
Quote:
#8
Re: On the quality and complexity of software within FRC
Quote:
The simple version is that FIRST is an engineering competition, not a science fair or research project. It's the difference between an engineer at a company that designs and manufactures really cheap CD players and a college professor. The engineer designing the CD player realizes that their solution isn't perfect, and shouldn't be, due to its limited parts cost and design time. The college professor has years and years to conduct research and strives to achieve perfection. It's not always about doing things in a very academic, well-documented, extremely optimized way; instead it's about getting things done that work as well as they need to, but no better. Why shoot for 60 fps when 2 fps works just as well? Why learn optimization techniques for things that are already efficient enough? Why change code that already works when there is other work to be done?
To many, FIRST is an engineering competition. Teams must engineer a solution to a problem in a limited amount of time, just like real engineers do in the "real world". There is a finite amount of time to optimize an infinite number of things, so engineers must make tradeoffs. Should they give the programmers 10 hours with the robot to write vision software, or should they iterate on the hook so the robot can lift totes easily? I didn't see a team this year that seriously benefited from having computer vision on their robot. Should a programmer spend time optimizing and threading their code when they could spend time practicing driving the robot?
Most people (myself included) judge robots by their ability to score points, prevent opponents from scoring, win matches (if applicable), seed high, and progress in the tournament structure of the competition. A really cool mechanical thing that isn't very effective in a match is unimpressive - for instance, some 3D printed parts, many carbon fiber parts, many crazy magnesium alloy parts. A really simple mechanical thing that is very effective in a match is very impressive. Look at the robots from 118, 254, 1114, and other really great teams. They use traditional methods of manufacturing with traditional materials because it works. A really cool software thing that isn't very effective (vision processing this year) is unimpressive to the crowd. A piece of software that just works is impressive, regardless of its efficiency, documentation, or organization. I never saw a robot and thought "Gee, that team would be so much better off if their software was more efficient", or "that team could beat the Poofs if their vision system had a tiny bit less lag". Instead, I thought things like "if only their elevator was a little faster".
Quote:
What if 10-15 fps is enough? What if 1 frame per second is good enough? I guess what I'm saying is that most teams' software is 'good enough' and is rarely what causes a team to be unsuccessful. There are many teams with very well written software and ineffective robots, but there aren't many teams with effective robots limited by bad software. Rarely are the best robots filled with extremely complex algorithms and parts; more often they are well-engineered, simple solutions. The code posted in Chief Delphi whitepapers often deals with concepts more advanced than anything found on any world champion robot.
#9
Re: On the quality and complexity of software within FRC
Quote:
I couldn't have said it better.
#10
Re: On the quality and complexity of software within FRC
Quote:
So far, the closest thing to useful vision processing in those three more years I spent programming has been Cheesy Vision. While it was an ingenious solution to an unexpected but consistent field failure, it is not robot vision processing; it was a simpler, quicker-to-develop, and more effective alternative to it. This year, at Ventura, my last regional ever as a student, two teams were able to consistently stack the 3 yellow totes: 1717 and 696. Both robots accomplished this without cameras, because cameras were not needed. As others have said, it's an engineering competition where we are making a robot that scores points. When code efficiency scores teams points, you'll see a lot of teams start writing efficient code.
#11
Re: On the quality and complexity of software within FRC
I don't know why, but this thread reminds me of a joke that one of the mechanical mentors and I started making this year: forget the code and just wind up the robot before every match. It's a mechanical team's dream...
#12
Re: On the quality and complexity of software within FRC
I agree with many of the above posters about code efficiency, but not about the relative gains of code improvements. While "getting it to work" is a bare minimum that most teams accept, automating small tasks can help a driver tremendously (place a stack, shoot once in position), as can sophisticated control like maintaining lift position rather than open-loop lift control by the driver (a rough sketch follows below). Like anything else there are diminishing returns, but I do think most teams would do better if they put a higher priority on their code and controls.
In my mind, there are two ways an individual can work to change the status quo of "no one cares about code." The first is obvious: work with your team to do what you believe is important. Different teams place different value on scouting, CAD, and marketing, and there are teams that are known for their excellence in their preferred areas. Be the team that excels in code and spreads the word. The second way is also obvious. If you think people should critique code on Chief Delphi the same way they do CAD, then do that yourself. It only takes two people to have a conversation. Perhaps you'll start a larger trend; maybe you'll just give some good advice. Teams quite often post their codebases on CD and usually receive no response. Start the conversation you want to see. I do think FRC would better prepare students for 21st-century engineering if FRC games placed a higher value on code, but I also recognise the difficulty of doing that effectively while maintaining watchability and other priorities. Not to mention that it's not FIRST's chief goal to prepare students, but rather to inspire them.
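To make the closed-loop lift point above concrete, here is a minimal sketch of a "hold lift position" loop in WPILib-style Java. The Motor and Encoder interfaces, the gain values, and the method names are illustrative placeholders rather than any particular team's code; a real robot would tune the gain and add soft limits.
Code:
// Holds the lift at a captured height with a simple proportional loop,
// instead of leaving it open-loop for the driver to babysit.
public class LiftHold {
    // Hypothetical hardware wrappers standing in for a speed controller and an encoder.
    public interface Motor   { void set(double output); }   // output in -1.0 .. 1.0
    public interface Encoder { double getDistance(); }      // lift height, e.g. inches

    private final Motor liftMotor;
    private final Encoder liftEncoder;
    private final double kP = 0.1;          // proportional gain (tune on the real lift)
    private final double kMaxOutput = 0.5;  // clamp so a bad setpoint can't slam the lift
    private double setpoint;

    public LiftHold(Motor motor, Encoder encoder) {
        this.liftMotor = motor;
        this.liftEncoder = encoder;
        this.setpoint = encoder.getDistance();   // hold wherever we start
    }

    // Call when the driver releases the lift stick: remember the current height.
    public void captureSetpoint() {
        setpoint = liftEncoder.getDistance();
    }

    // Call every control loop iteration (e.g. from teleopPeriodic) while holding.
    public void holdPeriodic() {
        double error = setpoint - liftEncoder.getDistance();
        double output = Math.max(-kMaxOutput, Math.min(kMaxOutput, kP * error));
        liftMotor.set(output);
    }
}
Even a plain proportional hold like this frees the driver from constantly feathering the stick; swapping in WPILib's PID classes or adding a feedforward term is a natural next step once the basic loop is trusted.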
#13
Re: On the quality and complexity of software within FRC
Quote:
I'm going to take the above as kind of a "trigger point", mainly because it's what got my attention. Who is asking those questions? No, seriously, who is asking those questions? And, just for grins, why? Think about it.
Now that you've thought about it: from what I've seen, an awful lot of those threads come from first-time programmers. Not 2nd-, 3rd-, or 4th-year programmers, but first-timers. The exception is the "why is this not working" thread, which can come from anybody who is trying something new or just wants a code review.
The other factor is that many teams don't give their programmers a robot until Week 5.75, if the programmers are lucky. Then they expect the robot to work in the first match on the field. See: hotfix the code and pray it works. Then fix some other bug. Repeat.
1197 does have a good programming team--but I know enough not to ask questions. (I leave that to the programming mentors.)
#14
Re: On the quality and complexity of software within FRC
Quote:
#15
Re: On the quality and complexity of software within FRC
Quote:
Really simply, most teams' programmers get the robot really late in the season--and most teams' programmers are then trying to get the robot running under pressure from the mechanical side, which wants drive time. Ugly code that works is of far, far greater value to the team than really nice, standards-compliant, reusable year-after-year code. That is the perception, at least, and it is very difficult to break out of it without a determined effort by one or more programmers to force the issue.
And then there's another problem: 4 years. Every 4 years, there is a complete turnover. (I intentionally exclude mentors from this.) Given that many teams have limited programmers in the first place (and, given some of those threads, it's tough to get a programmer to step up to replace a lone programmer who is moving on), there isn't really a good progression. So even if you do get a programmer or two who start pushing the team towards high quality, or towards more complex things, right about the time they hit that point they're gone, and someone else is starting from near zero. You can argue that if they were doing it right that wouldn't be an issue, but getting to the point of doing it right can take YEARS.
As far as complexity... let's just say that some mechanical mentors aren't willing to trust the programmers with more than basic sensors, and have to be poked, prodded, and otherwise convinced to (allow the programmers to) use the more advanced items. (I'm not one of them--but I prefer the simpler ways of doing things over the more complex ways.) OTOH, when the complex stuff works right, they suddenly want a lot more... You've just got to get them there--and again, that usually takes a programmer who insists on it until... oops, graduated another one, time to start over.