On the quality and complexity of software within FRC
#1
Re: On the quality and complexity of software within FRC
The OP wanted to raise the bar for FRC software quality. What are some ways to do that without being too preachy, and without getting dragged into the weeds by topics like what-is-the-one-true-code-formatting-style or the-one-true-way-to-use-a-Hungarian-naming-convention?

- Publish some reference designs (several... the number of good ways to do things will be legion) that guide students and lead them to ask good questions about details, but that don't hand them answers on a silver platter?
- Students are given physical kit-bot parts. Maybe the kit-bot BOM should include some software parts they can put together to form a basic FRC software system (does this already exist?). A rough sketch of one such part appears at the end of this post.
- Perhaps put a few good examples of software requirements specifications in the Kit of Parts?
- Create simulators (that expose the appropriate APIs) that students can use when their own team's real equipment is unavailable, or during off-season practice sessions, giving them more development time during build season and more practice time before build season?
- Something else?

Blake
#2
|
|||||
|
|||||
|
Re: On the quality and complexity of software within FRC
Simulation encompasses a wide spectrum of approaches, from mocking speed controller class interfaces all the way to doing a full dynamics simulation. The former is useful for debugging logic errors; the latter is required (to some level of fidelity) to actually do closed-loop testing of the program. This year 254 did a little of both for developing and debugging our control algorithms and designing our can grabbers (however, our approach was strongly tied to our use of Java... we built a "fake" WPILib JAR and swapped it out to do simulated tests).

The problem with simulation beyond just mocking low-level interfaces is that teams then need a way to specify their robot configuration to the simulation. This is tedious and error-prone in most cases, and very difficult to do accurately (e.g. estimating friction, damping, bending, or inertial properties of robot mechanisms is hard). Even professionally, I've watched many PhDs lose hours of work debugging configuration issues in their URDF files (a common format for expressing robot topologies).

The best solution for FRC would be to provide examples for common FRC mechanisms and COTS drivetrains and let teams go from there... but I worry that the complexity grows so quickly that if a team can navigate it, well, they are probably not the ones who REALLY need programming help.
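For readers who haven't tried this: below is a minimal sketch of the mocking end of the spectrum. This is not 254's actual code, and the SpeedController interface here is a made-up stand-in (the real WPILib interface has more methods); it just shows the shape of testing robot logic against a fake controller, no roboRIO required.

```java
// Minimal sketch: swap the real speed controller for a fake one and
// unit-test the robot logic. All names here are illustrative.
interface SpeedController {
    void set(double output);
    double get();
}

// Fake implementation: records the last commanded output so tests can
// assert on it without any hardware present.
class FakeController implements SpeedController {
    private double output;
    public void set(double output) { this.output = output; }
    public double get() { return output; }
}

// Toy logic under test: drive the elevator up until the (simulated)
// height reaches the setpoint, then stop.
class BangBangElevator {
    private final SpeedController motor;
    BangBangElevator(SpeedController motor) { this.motor = motor; }
    void update(double height, double setpoint) {
        motor.set(height < setpoint ? 1.0 : 0.0);
    }
}

public class ElevatorLogicTest {
    public static void main(String[] args) {
        FakeController motor = new FakeController();
        BangBangElevator elevator = new BangBangElevator(motor);

        elevator.update(0.0, 1.0); // below setpoint -> should drive up
        if (motor.get() != 1.0) throw new AssertionError("should drive up");

        elevator.update(1.5, 1.0); // past setpoint -> should stop
        if (motor.get() != 0.0) throw new AssertionError("should stop");

        System.out.println("elevator logic ok");
    }
}
```

Catching a reversed comparison or a sign error this way takes seconds; catching it on a real elevator can cost a practice session (or a mechanism).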
#3
|
||||
|
||||
|
Re: On the quality and complexity of software within FRC
Before it slips off everyone's radar for good, I thought I would give this thread one more poke.
#4
|
|||||
|
|||||
|
Re: On the quality and complexity of software within FRC
After thinking about this more, I'm not sure if it's even a problem. I've never expected FIRST to take in a bunch of kids and spit out seasoned engineers. The main goal is just to get them to check a STEM box when they're choosing a major for college. So let the enthusiasts develop as much as they please, but I think the average experience is already pretty good as far as accomplishing FIRST's goal goes. Just getting a joystick to control a motor is pretty exciting for most people.
#5
|
||||
|
||||
|
Re: On the quality and complexity of software within FRC
Hello worl...AHHHH!!! It's out of control! Jane, stop this crazy thing!
#6
|
||||
|
||||
|
Re: On the quality and complexity of software within FRC
I think it's great if my students develop advanced programming and controls. It's a great thing to learn, and it can be incredibly inspiring to see the robot perform amazing functions on the field that would be impossible or very difficult otherwise.

However, if I have a few students who go from no programming experience to some programming experience, and this makes them want to pursue it further, that's just as good to me, if not more in line with FIRST's goals. I do, however, wish I knew how to keep a large programming team engaged (perhaps that's a topic for another thread), as it's difficult to let every programming student work on robot code when you have a large team.
#8
|
|||
|
|||
|
Re: On the quality and complexity of software within FRC
Year by year, here's how well the game rewarded writing autonomous code:

2015 - Pretty terrible; the only task you could accomplish on your own was REALLY hard, and the other tasks all required your partners to also do something. (I don't count can burgling as an auton task.)
2014 - Almost good; the penalty for attempting to score a ball was pretty harsh, though.
2013 - Great. Zero penalty for attempting to score in any of the goals. Even "drive forward and dump 2 in the low goal" was viable and provided a reasonable reward, and the reward scaled appropriately with difficulty even at the upper tier.
2012 - Scoring was MUCH harder than 2013, so meh.
2011 - Most teams struggled to score at all, let alone score uber tubes autonomously.
2010 - Literally 0 points.
2009 - There was a game?
2008 - Great. Even just driving forward was worth points, with bonus points if you could turn at the end of it.
2007 - See 2011, only strike the word "uber".
2006 - See 2013.
2005 - Meh; not a whole lot of teams attempted it. Vision was REALLY hard.
2004 - Very few teams attempted to knock off the balls, but a lot of folks prepped for teleop. Kinda decent, but not really.
2003 - Robot Demolition Derby isn't really a good auton, sorry.

If teams have a reason to write good code, they probably will write some. But if they are penalized for attempting auton, teams will just pass, because the risk is not worth the reward.
#9
|
||||
|
||||
|
Re: On the quality and complexity of software within FRC
2012 autonomous was just as good as 2013, IMO, because scoring low baskets was easy and worth 4 pts/score (vs. 2013's 2 pts/score), and feeding balls to a partner was another great autonomous task that was easy. 2014 would have been perfect as well, were it not so punishing to miss in autonomous.

Really, the GDC has gotten autonomous right three times: 2008, 2012, and 2013. I think 2012 was the best year for programmers. Improved controls turned into improved results for most teams, improved autonomous was valuable, and there were effective tasks, programming-wise, for teams at every level.
#11
|
||||
|
||||
|
Re: On the quality and complexity of software within FRC
I'll throw out my 2 cents on what I think is the main thing that holds back the evolution of programming on a team:

Getting a mechanical system to the base state of "it works" (regardless of how well) takes a lot more effort and time than programming does. Code changes can be made quickly, with only a few people's effort, while mechanical changes often involve a team of people machining, bolting, cutting, lifting, etc. That may sound like programming should be able to evolve quickly, but what usually happens is that mechanical issues take precedence in the design process. When engineers are making a big modification to a mechanical part, they like to keep all other variables static, which means programming changes don't go through while the mechanism still needs to be tested and modified. And once the code "works", it can be hard to justify changing it when you are already sinking time into changing mechanical or electrical systems.

A good way to avoid these situations is to ensure that your programming team has an adequate testing environment, so that code can evolve in isolation from ever-changing mechanical parts. Set up a branching model so that you can give the mechanical folks a working build and then continue to develop in parallel; a sketch follows below. This is one of the things we strove for this past year, and I think it made a big difference in the quality of our code.
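To sketch the branching idea in concrete git terms (the branch and tag names are just examples, not a prescription):

```sh
# Keep a branch the drive team can always deploy; develop everywhere else.
git checkout -b stable               # known-good build for the mechanical folks
git checkout -b dev                  # day-to-day development happens here
git checkout -b auton-three-tote dev # feature work branches off dev

# ...write and test code against the practice setup...

git checkout dev
git merge auton-three-tote           # fold the feature back in once it passes
git checkout stable
git merge dev                        # promote to stable before handing it over
git tag week5-practice               # label the build the drive team is using
```

The point is simply that the mechanical crew always has a build that works, while the programmers keep moving in parallel.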
#12
|
|||
|
|||
|
Re: On the quality and complexity of software within FRC
What effects would this change to the rules have on software quality?
#13
|
|||||
|
|||||
|
Re: On the quality and complexity of software within FRC
If teams can't fix the robot (or even effectively troubleshoot wiring, which usually amounts to the same thing), they won't be able to operate it to test software once something breaks. If the rule is actually followed, it would have about the same average effect as moving bag and tag back to 12:15 am on Wednesday.
#14
|
|||||
|
|||||
|
Re: On the quality and complexity of software within FRC
As pointed out, no fixing (or other electro-mechanical work, presumably) would be allowed, so as soon as something broke, no further testing of software could be done.

But most upgrades of software tend to work with (and follow after) upgrades in hardware. No upgrades in hardware means software doesn't need upgrading.

And there is one other item I can see happening, which is why I think it could produce WORSE code, not better: a team could, theoretically, upload base code right before the bag with driving disabled, make one "upgrade" (enabling the drive code), and spend the rest of the allowed time "testing the upgrade"--which to everybody else is driver practice.

As a side note, a good, practiced driver can often do at least as well with lousy code as with good code--just a touch of compensating needed, maybe.
#15
|
|||||
|
|||||
|
Re: On the quality and complexity of software within FRC
I'm only going back to 2012, as befits my team's experience:
2015 was the only one that failed to reward incrementally, and the number of things that could go wrong caused a number of teams (including mine) to decide that none of our routines was worth the risk. I am surprised at how many teams did NOT have a "drive into the auto zone" auto. Granted, it was only three points, but it was essentially the same as the mobility bonus in 2014, and it seemed like the great majority of teams did it.
I don't recall Rebound Rumble this way at all, but I wasn't mentoring yet. As I recall, if you didn't do the Kinect (and I saw few teams that did), you had either very easy tasks (score preloaded balls; tip one bridge) or rather hard ones (do both; tip multiple bridges; pick up balls and score them) in auto/hybrid. Please expand on this.