As many of you have probably heard, there have recently been huge advancements in AI text generation. I recently decided to ask an AI to code a drivetrain for an FRC robot, and to my shock it generated code that was very good, albeit with a few errors here and there. I was wondering what people think about the idea of using AI to help program robots in FRC, and whether anyone has reservations about it.
This is the code the AI provided:
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class Drivetrain {
    // CTRE Phoenix TalonSRX controllers on CAN IDs 1 and 2
    private TalonSRX leftMotor;
    private TalonSRX rightMotor;

    public Drivetrain() {
        leftMotor = new TalonSRX(1);
        rightMotor = new TalonSRX(2);
    }
}
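As posted, the class only constructs the controllers and never commands them. A minimal sketch of how it might be finished, assuming CTRE's Phoenix ControlMode API (the drive method below is my illustration, not something the AI produced):

// Requires: import com.ctre.phoenix.motorcontrol.ControlMode;
// Hypothetical addition inside the Drivetrain class above.
public void drive(double leftSpeed, double rightSpeed) {
    // Open-loop percent-output control; inputs are in the range [-1.0, 1.0].
    leftMotor.set(ControlMode.PercentOutput, leftSpeed);
    rightMotor.set(ControlMode.PercentOutput, rightSpeed);
}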
There was a similar thread last year focused on GitHub Copilot which touches on some of the issues it had back then:
In terms of ethics, eh? It's likely pulling from public repos, which isn't much different from what I'd suggest someone do if they were stuck (go look at how other teams do it).
Not to pick on the OP, but “Can I use this technology to compete in a technology competition” amuses me. The future keeps arriving - take advantage!
If you get really specific, it can do some interesting things, like tell it to use SparkMax NEOs and an omni-directional drivetrain, add an elevator, and maybe a piston-operated claw. If it doesn’t crash on you because it’s getting too many requests, anyway. The code I’ve gotten out of it is at least good enough to be halfway decent example code or a basic outline, but it’s hardly “SEND IT” worthy. It’s only capable of single-file coding in Robot.java, at best. Could it be used by a rookie team to code their robot overnight? Maybe. Would you be better off using the WPILib examples or even the code generator in WPILib? Very possibly.
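For a sense of what that single-file output looks like, here is a rough sketch along the same lines, written as an illustration rather than copied from ChatGPT. It covers only the drive base, assumes REV's CANSparkMax API and WPILib's DifferentialDrive, and the CAN IDs and controller bindings are placeholders:

import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMaxLowLevel.MotorType;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class Robot extends TimedRobot {
    // Placeholder CAN IDs; NEOs are brushless, hence MotorType.kBrushless.
    private final CANSparkMax leftMotor = new CANSparkMax(1, MotorType.kBrushless);
    private final CANSparkMax rightMotor = new CANSparkMax(2, MotorType.kBrushless);
    private final DifferentialDrive drive = new DifferentialDrive(leftMotor, rightMotor);
    private final XboxController controller = new XboxController(0);

    @Override
    public void robotInit() {
        // One side is typically inverted so both sides drive forward together.
        rightMotor.setInverted(true);
    }

    @Override
    public void teleopPeriodic() {
        // Arcade drive: left stick Y for throttle, right stick X for turn.
        drive.arcadeDrive(-controller.getLeftY(), controller.getRightX());
    }
}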
The majority of the programming is already done for the majority of teams. WPILib started off dramatically simplifying the core needs and now has tons of advanced functions as well. Everyone wants SDS or WCP or REV to give them swerve code. Limelight nearly obsoleted vision code, and the addition of PhotonVision may polish it off. There are hundreds, maybe thousands, of GitHub repos of complete code. The chance that the AI regurgitates the best of that code seems slim. I’d be shocked if anyone was concerned about using AI.
Sure, send it. The training material for ChatGPT is from 2021* though, and there have been significant changes to WPILib since it last perused GitHub.
I will say, it is quite good at generating non-application-specific code, and even specific stuff in other well studied domains.
My concern with code that it generates is licensing, which is still a volatile and undecided issue.
*there are indications that this may not be the whole truth, but either way the quality of FRC specific code I’ve had it generate has been…less than stellar.
I don’t think it is a question of ethics; I think it is more a question of using the tools you are provided. While ChatGPT is undoubtedly an impressive tool, it is just regurgitating what it has seen online in a somewhat intelligent way; the AI is not free from error, and it would not replace a skilled programmer on a team. Why would this be unethical? It would be like questioning the ethics of using IntelliSense or an advanced IDE.
Now, if you want to get into an ethics discussion, consider the use of the chat AI to generate a book report or get you started on writing your team’s Impact Award essay. OTOH, maybe using an AI for an Impact Award essay would actually demonstrate advancing robotics and engineering in society in a positive way? Crediting your use of it, of course.
The only place I see it getting dicey is with commercial use where the training data was not licensed for that use - but that doesn’t really apply here.
I think it’s ethical. IDEs have been auto-generating setters and getters for your objects for a while now; this is a more advanced version of that. The fine details that separate good from great come into play when debugging and when tuning, which the AI does not do (to my knowledge).
I see LLMs like ChatGPT as more than just a neat trick or even a useful tool – I see them as the future of programming as an applicable skillset[1]. Why write a program by hand when you can just specify what you want in natural language? As I’ve been exploring this tool for a few weeks now, I’m learning new ways to interact with it. It’s gotten to the point where it has largely replaced Google for my queries (I work in software dev), and I’m actively pushing my team’s programmers to use it as a first resort over Google when they run into issues. The key benefit here is that they can describe their specific situation exactly, and ChatGPT not only gives example code but can explain what different parts of it do. When you drill down into something incredibly niche it often gets something wrong, but:
The sheer breadth of knowledge it has is invaluable to a beginner who can’t tell a boolean from an array
This is software, not politics. Literally everything the machine spits out is testable.
It can expand on certain topics and give the user more things to look into that are relevant to their idea
It can give the user several different ways to approach a problem
In my experience, there is a breaking point with Google queries where getting more specific just makes the results worse. With ChatGPT, the results only get better with more context.
[1] Manual programming (artisanal programming?) will still probably be a thing, but it will be more of a niche thing like baking bread or crocheting
Just another layer of abstraction.
kinda like how everyone poo-pooed Java’s auto garbage collection in the JVM…
“you’re not learning good memory hygiene”!!!
Text models have very little semantic understanding of what they’re doing; all they do is hallucinate common boilerplate that is strongly contextually-linked to the prompt.
This may indeed become a common tool in software development, but it will not be for the betterment of the industry as a whole. It’ll be because it’s cheap, not because it results in better software.
Please do not teach your programmers to uncritically use GPT output in their work. Stuff which appears to work at first blush may have jagged edge-cases that the computer cannot know about, because it lacks any sort of underlying model for how the code actually works. “We can test anything it generates” is a very dangerous philosophy - how rigorous is your test coverage? How much are you willing to invest in test architecture to catch the sharp edges that inevitably occur in text-engine-generated code?
Pretty much, yeah. Looking forward to the day when we can specify a purpose to the computer and it will program directly in binary, making it as efficient as possible. Decades ago, software was so limited by memory capacity that it had to be fast and memory-efficient. Now, despite having faster computers, software often feels the same or slower. This seems like a good way to beat back that inefficiency and really take advantage of the hardware we have available today.