How is programming going so far?

This is my first year in FRC, and I’m doing my team’s robot and vision (Jetson TX1) programming. I’d like to know how far along most of the other teams are in their programming so I can see how far behind I actually am :stuck_out_tongue:.

We’ve got all of the driver-control code written and tested. Right now we’re testing and tuning the autonomous routines we’ve already written, and the dashboard is also in progress. We switched everything over to closed-loop speed control on the Talon SRXs so we drive consistently straight, and we’re using PID and ramping for autonomous driving and turning.
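For anyone who hasn’t set it up before, the Talon SRX closed-loop velocity configuration is pretty short. Here’s a minimal sketch using the CTRE Phoenix API (the CAN ID and gains are placeholders, not our values; velocity is commanded in encoder counts per 100 ms):

```java
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.FeedbackDevice;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class VelocityDrive {

    private static final int TIMEOUT_MS = 10;       // timeout for config calls
    private final TalonSRX talon = new TalonSRX(1); // placeholder CAN ID

    public VelocityDrive() {
        // Use the quadrature encoder as the feedback sensor for PID slot 0.
        talon.configSelectedFeedbackSensor(FeedbackDevice.QuadEncoder, 0, TIMEOUT_MS);
        // Placeholder gains: tune kF first from measured max speed, then kP/kD.
        talon.config_kF(0, 0.20, TIMEOUT_MS);
        talon.config_kP(0, 0.10, TIMEOUT_MS);
        talon.config_kI(0, 0.0, TIMEOUT_MS);
        talon.config_kD(0, 1.0, TIMEOUT_MS);
    }

    /** Command a closed-loop velocity, in encoder counts per 100 ms. */
    public void setVelocity(double countsPer100ms) {
        talon.set(ControlMode.Velocity, countsPer100ms);
    }
}
```

Commanding the same velocity to both sides of the drive (instead of the same percent output) is what makes the straight-line driving consistent.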

Everything seems to be going OK for us so far: all subsystem code is complete, we’re still playing with tuning, we need to finish some dashboard work, and we may tweak our vision code a bit, but that’s about it.

Our code is always public if you want to look around, but fair warning: it’s Kotlin, and a lot of it is written in a modular domain-specific language that we developed in the offseason, so it won’t look like normal Java. :wink:

This has really been a big year for us in standardizing our tools and procedures. Here’s what we’re using as of now:

  • Waffle.io – Programming task and issue tracking; it has been useful for keeping us on task and on track
  • Travis CI – Currently we’re just using Travis to automatically verify that pull requests build; we plan to extend it to run tests in the future
  • Maven – We publish releases of a lot of our code, including our modular DSL, vision code, and LED library
  • GradleRIO – A multi-project setup lets us keep vision and robot code together and deploy to the roboRIO and the Jetson at the same time
  • IntelliJ – We tweaked our gitignore so that IntelliJ run configurations are pushed to git, which has helped keep run configurations synced between different computers and different programmers
  • Mandatory code reviews – This is our first year enforcing code reviews on pull requests; we definitely need to work on review quality, but even imperfect reviews have helped

I’m fairly confident in our teleop code, though the elevator speeds may still need some tweaking.

Autonomous may as well be blank for how well it’s working out right about now.

Programming has been going great for us. This is the first year we’ve really been able to fully benefit from code reuse, and it has freed us up to focus on much more ambitious control schemes and continual improvements in functionality.

We’re extremely fond of our YAML-based configuration system; through extensive use of constructor injection and Jackson, we’re able to define almost all robot-specific data in markup rather than code, which lets us focus on maximizing the reusability and flexibility of the Java we write. We still need to get around to commenting our map files, though.
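For anyone curious, the pattern looks roughly like this (a simplified sketch with hypothetical class and field names, not our actual map format): Jackson reads each YAML field straight into a matching constructor parameter, so robot-specific constants never appear in code.

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;
import java.io.File;
import java.io.IOException;

public class DriveSettings {

    private final int leftMotorPort;
    private final int rightMotorPort;
    private final double maxSpeed;

    // Jackson injects each YAML field into the matching constructor parameter.
    @JsonCreator
    public DriveSettings(@JsonProperty(value = "leftMotorPort", required = true) int leftMotorPort,
                         @JsonProperty(value = "rightMotorPort", required = true) int rightMotorPort,
                         @JsonProperty("maxSpeed") double maxSpeed) {
        this.leftMotorPort = leftMotorPort;
        this.rightMotorPort = rightMotorPort;
        this.maxSpeed = maxSpeed;
    }

    public static DriveSettings fromYaml(File mapFile) throws IOException {
        // YAMLFactory swaps Jackson's default JSON parser for a YAML one.
        return new ObjectMapper(new YAMLFactory()).readValue(mapFile, DriveSettings.class);
    }
}
```

The matching map file is then just plain YAML (`leftMotorPort: 1`, and so on), so changing the robot means swapping files rather than recompiling.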

Teleop code is all written and mostly tested. Basic auto building blocks exist, but have mostly not yet been tested or put together into configurations. I blame mechanical.

I got hold-position PIDF working on the elevator last night, as well as elevator motion profiling working for the most part (funnily enough, only once I added feed-forward; I guess if you do the math instead of just throwing numbers at a system, it works much better).
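If anyone wants the gist, here’s a minimal sketch of that kind of follower (the gains are made-up placeholders, not my numbers): the feed-forward computed from the profile does most of the work, and a small proportional term just cleans up position error.

```java
/**
 * Minimal profiled elevator follower: feed-forward from the motion profile
 * plus a proportional correction on position error. Gains are placeholders.
 */
public class ElevatorFollower {

    private static final double K_GRAVITY = 0.10;  // constant output to hold against gravity
    private static final double K_VELOCITY = 0.05; // output per (unit/sec) of profile velocity
    private static final double K_P = 0.80;        // proportional gain on position error

    /**
     * @param profilePosition position the profile says we should be at (units)
     * @param profileVelocity velocity the profile says we should have (units/sec)
     * @param measuredPosition current encoder reading (units)
     * @return motor output clamped to [-1, 1]
     */
    public double calculate(double profilePosition, double profileVelocity, double measuredPosition) {
        double feedForward = K_GRAVITY + K_VELOCITY * profileVelocity;
        double feedback = K_P * (profilePosition - measuredPosition);
        return Math.max(-1.0, Math.min(1.0, feedForward + feedback));
    }
}
```

Holding position falls out of the same math: a zero-velocity setpoint leaves just the gravity term plus the feedback correction.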

I wasn’t able to get the pose estimator working in the short time I spent on it, so I’m rewriting that today in the hopes that I can get Pure Pursuit working during week 0.
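For context, the pose estimator here is just dead reckoning from the drive encoders and gyro; a bare-bones sketch of the update (not my actual code; units assumed to be meters and radians) looks like this:

```java
/** Simple differential-drive pose estimator: dead reckoning from encoders and a gyro. */
public class PoseEstimator {

    private double x;        // meters
    private double y;        // meters
    private double heading;  // radians, taken from the gyro

    private double lastLeft;  // previous left encoder distance (meters)
    private double lastRight; // previous right encoder distance (meters)

    /** Call once per loop with cumulative encoder distances and the current gyro heading. */
    public void update(double leftDistance, double rightDistance, double gyroHeading) {
        // The average of the two wheel deltas approximates how far the robot center moved.
        double delta = ((leftDistance - lastLeft) + (rightDistance - lastRight)) / 2.0;
        lastLeft = leftDistance;
        lastRight = rightDistance;

        // Integrate along the average of old and new headings (ignoring angle
        // wrap-around for simplicity) to reduce drift on curved paths.
        double midHeading = (heading + gyroHeading) / 2.0;
        x += delta * Math.cos(midHeading);
        y += delta * Math.sin(midHeading);
        heading = gyroHeading;
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeading() { return heading; }
}
```

Pure Pursuit then just consumes this pose every loop and chases a lookahead point on the path.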

Been going well this year. We’re implementing a lot of new things, such as motion profiling, path planning, and state-space control. My personal favorite is machine learning for cube detection.

Is that working well? I’m thinking about using ML for our vision processing next year, since we finally have the computing power (Jetson TX1) to run it.

So it’s actually going well.

My first test was just a proof of concept, so I trained it on 20 images for around 25 minutes. https://drive.google.com/drive/folders/0BxiolqzZnQUhTnBWQmhlNjFWeWs?usp=sharing

Here are some output images.

Right now I’ve been training my network for about a day on about 1,600 images. I’ll check up on it later tonight. My plan is to run it on a phone, but I have had success with just running it on the Rio.

It’s going a lot better than past years :stuck_out_tongue: This year we’ve implemented vision, used an IMU for turning and encoders for driving distance, and tried out PID control, all for the first time. Teleop mode should be done, but because we tried CAD for the first time this year, mechanical still hasn’t passed the elevator along (projected to get it Saturday…). As for autonomous, we’re trying a form of A* path planning combined with ultrasonic sensors to maneuver around other robots as best we can.
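In rough terms, the planner looks something like the sketch below (a simplified stand-in, not our actual code): the field is a grid, cells flagged by the ultrasonic sensors get marked as blocked, and A* finds the cheapest 4-connected path around them.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

/** Minimal 4-connected grid A*; blocked[r][c] marks cells the ultrasonics flag as occupied. */
public class GridAStar {

    /** Returns a path of cell indices (row * cols + col) from start to goal, or empty if unreachable. */
    public static List<Integer> plan(boolean[][] blocked, int startRow, int startCol, int goalRow, int goalCol) {
        int rows = blocked.length, cols = blocked[0].length;
        int start = startRow * cols + startCol;
        int goal = goalRow * cols + goalCol;

        Map<Integer, Integer> cameFrom = new HashMap<>();
        Map<Integer, Integer> gScore = new HashMap<>();
        gScore.put(start, 0);

        // Priority queue ordered by f = g + heuristic; entries are {f, cell}.
        PriorityQueue<int[]> open = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        open.add(new int[] {manhattan(start, goal, cols), start});

        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            int current = open.poll()[1];
            if (current == goal) {
                return reconstruct(cameFrom, goal);
            }
            int r = current / cols, c = current % cols;
            for (int[] move : moves) {
                int nr = r + move[0], nc = c + move[1];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || blocked[nr][nc]) {
                    continue;
                }
                int neighbor = nr * cols + nc;
                int tentative = gScore.get(current) + 1;
                if (tentative < gScore.getOrDefault(neighbor, Integer.MAX_VALUE)) {
                    cameFrom.put(neighbor, current);
                    gScore.put(neighbor, tentative);
                    open.add(new int[] {tentative + manhattan(neighbor, goal, cols), neighbor});
                }
            }
        }
        return Collections.emptyList(); // no path around the obstacles
    }

    private static int manhattan(int a, int b, int cols) {
        return Math.abs(a / cols - b / cols) + Math.abs(a % cols - b % cols);
    }

    private static List<Integer> reconstruct(Map<Integer, Integer> cameFrom, int goal) {
        List<Integer> path = new ArrayList<>();
        for (Integer cell = goal; cell != null; cell = cameFrom.get(cell)) {
            path.add(cell);
        }
        Collections.reverse(path);
        return path;
    }
}
```

Replanning whenever the ultrasonic readings change is what lets the robot route around other robots mid-run.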

We have finished our subsystem development (with PID and all that) and one-cube autonomous testing. We are moving into two-cube autonomous testing, and three-cube testing later on.


I hope that AWS instance was the kick in the butt you guys needed to attempt this :slight_smile:

If you, or anyone, needs any guidance in terms of deep learning, do not hesitate to shoot me a message; I research it for a living.

This has been the most fun year as a programmer so far. Right now most of the teleop has been tested, with the exception of a few bells and whistles, and our auto creation is now underway. We’re finally getting around to messing with PIDF control and motion profiling, which seems to be going pretty okay. One thing we could probably do a bit more work on is vision.
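For anyone else just starting on motion profiling, the trapezoidal case is surprisingly small (a sketch with placeholder limits, not our code): accelerate, cruise, decelerate, and sample a velocity setpoint each loop to feed your PIDF velocity controller.

```java
/** Minimal trapezoidal motion profile: sample a velocity setpoint at a given time. */
public class TrapezoidalProfile {

    private final double maxVelocity;     // units/sec, possibly reduced for short moves
    private final double maxAcceleration; // units/sec^2
    private final double accelTime;       // time spent accelerating (sec)
    private final double cruiseTime;      // time spent at max velocity (sec)

    public TrapezoidalProfile(double distance, double maxVelocity, double maxAcceleration) {
        // If the move is too short to reach max velocity, the profile becomes a triangle.
        double peak = Math.min(maxVelocity, Math.sqrt(distance * maxAcceleration));
        this.maxVelocity = peak;
        this.maxAcceleration = maxAcceleration;
        this.accelTime = peak / maxAcceleration;
        this.cruiseTime = peak > 0 ? (distance - peak * accelTime) / peak : 0;
    }

    /** Velocity setpoint at time t; feed this into a velocity loop each cycle. */
    public double velocityAt(double t) {
        if (t < 0) return 0;
        if (t < accelTime) return maxAcceleration * t;      // ramp up
        if (t < accelTime + cruiseTime) return maxVelocity; // cruise
        double tDecel = t - accelTime - cruiseTime;
        return Math.max(0, maxVelocity - maxAcceleration * tDecel); // ramp down
    }

    public double totalTime() {
        return 2 * accelTime + cruiseTime;
    }
}
```

Feeding `velocityAt(t)` into the PIDF velocity loop every cycle is the whole trick; the F term tracks the profile and the PID handles the leftovers.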

What matters is not that. What matters is how far ahead you’ll be compared to now, for an off-season competition next autumn and for the 2019 FRC season.