(teaser is a parody of Google’s “Introducing Pixel”)
Build an Auton is a graphical interface made by Team 834 for creating autonomous code for FRC robots in an easily accessible fashion, similar to LEGO's drag-and-drop NXT programming. The program is already fully functional and was used at our latest pre-season competition.
Eventually, we’d like to make it open source and release it for all teams to use, perhaps even before the end of Build Season 2017, but there’s still a lot of work to be done. Setting the program up needs to be streamlined, and there’s still potential to make the program even more intuitive. The end goal is to make the program so consistent and easy to use that a team could program an autonomous mode for a given situation while waiting in queue.
We’re currently accepting applications for the closed beta, and are looking for teams with extensive Java experience as well as newer teams who have previously struggled with autonomous code. You can apply here: https://docs.google.com/forms/d/e/1FAIpQLSf_tHejQ1O9c9VVScxCrB6FnW_uiyFBxbVj1GZHTJ54iNL-4A/viewform
I hope some of you are as excited for this as we are, and we’ll be looking forward to your applications!
App sent, fantastic idea! What sensor products were used in the machine in the video?
The only sensors in the video were a gyroscope for turn angles and encoders for distance. I don’t know the specific ones off the top of my head, but I could find them out for you soon. Thanks for the app and the interest!
I love this, but I’ve always had a bit of an issue with graphical programming solutions.
I kinda find it’s “cheating” in a way. I like having to deal with the issues and frustrations that come with traditional text-based programming, like syntax, variable scope, etc. I also find it a better learning experience, and I feel more satisfaction when it works.
For example, with vision, I know I could use GRIP to make some working vision quickly, but I wouldn’t have learned as much as I have about vision if I hadn’t used OpenCV with C++.
But don’t get me wrong, programs like these are great.
Anyone else with me on this?
Hey, I’m the developer of BuildAnAuton.
Right now, the user has to configure the encoder through setDistancePerPulse(), and the program simply uses the getDistance() method. Thanks to your suggestion, though, I’ll probably implement something that asks for gearbox ratio and wheel size to automatically calibrate the encoder. However, from experience I’ve found that that calculation is usually off by a little bit and requires some fine tuning, so the user will still need to do some of that work. As for torque curve and encoder/gyro sensitivity, I don’t think those factor into how the program currently works.
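For anyone curious, the gearbox-ratio/wheel-size calibration described above boils down to a single formula. Here’s a minimal sketch; the class and parameter names are my own, not Build an Auton’s, and in WPILib you’d feed the result to Encoder.setDistancePerPulse():

```java
// Sketch of deriving an encoder's distance-per-pulse from wheel size and
// gearing. Names are illustrative, not from Build an Auton itself.
public class EncoderCalibration {

    /**
     * Distance the robot travels per encoder pulse.
     *
     * @param wheelDiameterMeters drive wheel diameter in meters
     * @param pulsesPerRevolution encoder pulses per shaft revolution
     * @param gearRatio           encoder-shaft rotations per wheel rotation
     *                            (1.0 if the encoder sits on the wheel axle)
     */
    public static double distancePerPulse(double wheelDiameterMeters,
                                          double pulsesPerRevolution,
                                          double gearRatio) {
        double wheelCircumference = Math.PI * wheelDiameterMeters;
        return wheelCircumference / (pulsesPerRevolution * gearRatio);
    }

    public static void main(String[] args) {
        // Example: 6 in (0.1524 m) wheels, 360 PPR encoder on the wheel axle.
        double dpp = distancePerPulse(0.1524, 360, 1.0);
        System.out.println("distance per pulse = " + dpp + " m");
        // WPILib usage would then be: encoder.setDistancePerPulse(dpp);
        // followed by encoder.getDistance() -- plus the fine tuning noted above.
    }
}
```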
Moment of inertia and turning scrub shouldn’t be a factor. In fact, I’ve already tested the program on other robots, and it has worked fine.
The programs are exported as a separate file type that is uploaded to the robot via FTP. And yes, it can be integrated into other programs; it gives up control for teleop, and the program is run by a few lines of code in your robot code that you can be creative with.
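To picture what that "few lines of code" integration could look like: here’s a rough sketch of the load-a-file, step-it-every-loop pattern. To be clear, AutonRunner, AutonStep, and step() are hypothetical stand-ins I made up, not Build an Auton’s actual API:

```java
// Hypothetical sketch of running an exported auton program from robot code.
// None of these names come from Build an Auton; they only illustrate the
// "load steps, execute one per loop, then yield to teleop" pattern.
import java.util.Iterator;
import java.util.List;

public class AutonRunner {

    /** One step of a loaded auton program (stand-in type). */
    interface AutonStep {
        void execute();
    }

    private final Iterator<AutonStep> steps;

    public AutonRunner(List<AutonStep> loadedSteps) {
        this.steps = loadedSteps.iterator();
    }

    /** Called once per autonomous loop; runs the next step if one remains. */
    public boolean step() {
        if (!steps.hasNext()) {
            return false; // program finished; control returns to teleop code
        }
        steps.next().execute();
        return true;
    }

    public static void main(String[] args) {
        AutonRunner runner = new AutonRunner(List.of(
                () -> System.out.println("drive forward"),
                () -> System.out.println("turn 90 degrees")));
        while (runner.step()) {
            // on a real robot this would be driven by autonomousPeriodic()
        }
    }
}
```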
And to viggy96, I totally get what you mean by “cheating”. I personally prefer to code by hand and learned to work with our robot this way. However, I’d like to defend my program a little. When I started robotics I already had a solid foundation in programming, so I only had to learn the robot-specific things. However, for teams without experienced programmers I hope that my tool can be useful. And for experienced programmers, I think it can still be useful for making autonomous modes (which I find frustrating and time-consuming to do by hand) quickly and consistently, allowing time for more interesting parts of the robot, such as vision and working around the tight schedule of competitions.
We program in LabView so… No. That said, we’ve been working towards a similar setup with our team library.
A couple comments for the developers.
- A big difficulty for teams is tuning PIDs. An auto-tuning algorithm would be useful (going to be working on this for ourselves soon).
- Please ensure there is an input from camera code or other arbitrary sensor as well.
- I’d like to see your self correction code if possible to compare up against what we’ve been working on.
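On the auto-tuning point: one common starting place is the classic Ziegler-Nichols rules, which turn a measured ultimate gain (Ku) and oscillation period (Tu) into PID gains. A minimal sketch under that assumption; the class name is mine, and the genuinely hard part, measuring Ku and Tu on a real drivetrain (e.g. via a relay test), isn’t shown:

```java
// Classic Ziegler-Nichols PID tuning from ultimate gain (Ku) and ultimate
// oscillation period (Tu, seconds). A sketch of one well-known rule set,
// not a full auto-tuner: finding Ku and Tu experimentally is omitted.
public class ZieglerNichols {

    /** Returns {kP, kI, kD} using the classic ZN PID table. */
    public static double[] classicPid(double ku, double tu) {
        double kp = 0.6 * ku;
        double ki = 1.2 * ku / tu;   // kp / (tu / 2)
        double kd = 0.075 * ku * tu; // kp * (tu / 8)
        return new double[] {kp, ki, kd};
    }

    public static void main(String[] args) {
        // Example: sustained oscillation observed at Ku = 2.0, Tu = 0.5 s.
        double[] gains = classicPid(2.0, 0.5);
        System.out.printf("kP=%.3f kI=%.3f kD=%.3f%n",
                gains[0], gains[1], gains[2]);
    }
}
```

ZN gains are usually just a starting point; most drivetrains still want some hand tuning on top.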
This looks really awesome, and I submitted an application for my team, but this probably falls under R14 from last year (usual warning about possible rule changes implied).
Software and mechanical/electrical designs created before Kickoff are only permitted if the source files (complete information sufficient to produce the design) are available publicly prior to Kickoff.
Thanks for the heads up! Given that we have no clue what next year’s game will look like until the season starts, the current version of Build an Auton will basically be a jumping-off point for whatever we end up doing during the season. Who knows, maybe 3.0 will look completely different. Either way, we’re going to try to make our program as public as possible as soon as possible so we won’t have to worry about that.
It certainly looks really good, and I will definitely talk to our new programmers about this. (I graduated last year after doing the programming solo for three years.) I think it could be a really helpful tool for them as they try to figure out the FRC code landscape. It would also be really helpful for teams who want to write several autonomous modes they can choose between, or that the robot chooses between based on sensor input, without the tedious work of building each one from scratch. I like it!
Love the idea! I applied for our team (1721), can’t wait to hear back!
We have an off-season this weekend, and I’d love to give it a test on a real field there to see how it reacts!