Smart path storing on the robot (automatically regenerates new or updated paths, keeps old ones)
Cube tracking with vision
Easy version tracking by displaying latest Git commit/branch/changes on Shuffleboard
UDP communication between Jetson (vision co-processor) and roboRIO
In addition, we continue to make updates and innovations throughout the off-season, so stay tuned for more (maybe even some cool new tech incoming).
I’m also super happy with the number of amazing new and talented programmers who have worked on the code this year! We’ve had a great season and I can’t wait to see what next year brings!
What’s the design philosophy behind the Sertain framework? Is there a particular architecture you’re trying to drive toward?
Also, what was your experience having the Git build SHA1 info available at runtime? How did you end up using it, and was it valuable enough to replicate next year?
We’re trying to keep the overall philosophy and structure similar to WPILib – subsystems, commands, etc. We mainly made Sertain to take advantage of Kotlin APIs that we wouldn’t have been able to use otherwise. It also gives us finer-grained control over the lifecycle of commands, so we know what’s running and when. We’ll be adding more over the course of this off-season, especially with regard to testing and simulation. I’d really like us to do more test-driven development next year, but even having a few tests is a step in the right direction.
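To give a sense of the direction, here’s a minimal sketch of the kind of lifecycle-aware command base class we mean. This is illustrative only, not Sertain’s actual API; the class and hook names are hypothetical, and it assumes WPILib’s 2018 command framework:

```kotlin
import edu.wpi.first.wpilibj.command.Command

// Hypothetical Kotlin-friendly command base with explicit lifecycle hooks,
// so it's always clear what's running and when.
abstract class KotlinCommand : Command() {
    open fun onCreate() {}
    open fun onExecute() {}
    open fun onDestroy() {}

    final override fun initialize() {
        println("${javaClass.simpleName} started")
        onCreate()
    }

    final override fun execute() = onExecute()

    final override fun end() {
        println("${javaClass.simpleName} ended")
        onDestroy()
    }

    override fun isFinished() = false
}
```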
It was actually super easy with Gradle, fewer than 50 lines. You can see the code here. That was all built after the competition season, so it’s never been used in competition, but it has helped us tremendously at off-season demonstrations and even in everyday debugging. We just display it on Shuffleboard for anyone who wants to check which version of the code is running, but I’m looking to do more now that we have access to all that information.
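For anyone curious, the general approach looks roughly like this in a Gradle Kotlin DSL script. This is a sketch with hypothetical task and file names, not our actual build script; it assumes the java plugin is applied:

```kotlin
// build.gradle.kts
import java.io.ByteArrayOutputStream

// Run a git command and capture its output
fun git(vararg args: String): String {
    val out = ByteArrayOutputStream()
    exec {
        commandLine("git", *args)
        standardOutput = out
    }
    return out.toString().trim()
}

// Write the current commit, branch, and dirty state into a resource file
val generateVersionInfo by tasks.registering {
    val versionFile = file("$buildDir/generated/resources/version.properties")
    outputs.file(versionFile)
    doLast {
        versionFile.parentFile.mkdirs()
        versionFile.writeText(
            """
            sha=${git("rev-parse", "--short", "HEAD")}
            branch=${git("rev-parse", "--abbrev-ref", "HEAD")}
            dirty=${git("status", "--porcelain").isNotEmpty()}
            """.trimIndent()
        )
    }
}

// Bundle the generated file into the jar so robot code can read it at runtime
sourceSets["main"].resources.srcDir("$buildDir/generated/resources")
tasks.named("processResources") { dependsOn(generateVersionInfo) }
```

At runtime the robot code can then read version.properties off the classpath and push the values to Shuffleboard.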
Nothing fancy. Basically just color thresholding: find the biggest blob of the right dimensions, then send its location back to the roboRIO, which uses it to grab a cube. We did dabble with actual object tracking using OpenCV’s CSRT, KCF, and MOSSE tracker classes, but we never got anything more usable than the simple approach, and the FPS drops weren’t worth it. Nothing Zebracorns-level by any means.
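As a rough illustration of that pipeline (not our actual code; the HSV thresholds, aspect-ratio bounds, IP, and port are all hypothetical), written against OpenCV’s Java bindings from Kotlin:

```kotlin
import org.opencv.core.*
import org.opencv.imgproc.Imgproc
import java.net.DatagramPacket
import java.net.DatagramSocket
import java.net.InetAddress

// Threshold for yellow, then pick the biggest roughly cube-shaped blob
fun findCube(frame: Mat): Rect? {
    val hsv = Mat()
    Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV)

    val mask = Mat()
    Core.inRange(hsv, Scalar(20.0, 100.0, 100.0), Scalar(35.0, 255.0, 255.0), mask)

    val contours = mutableListOf<MatOfPoint>()
    Imgproc.findContours(mask, contours, Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)

    return contours
        .map { Imgproc.boundingRect(it) }
        .filter { it.width.toDouble() / it.height in 0.5..2.0 } // right dimensions
        .maxByOrNull { it.width * it.height }                   // biggest blob
}

// Send the blob's center x back to the roboRIO over UDP
fun sendToRio(socket: DatagramSocket, rect: Rect) {
    val payload = "${rect.x + rect.width / 2}".toByteArray()
    socket.send(DatagramPacket(payload, payload.size, InetAddress.getByName("10.TE.AM.2"), 5800))
}
```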
We did just get a LiDAR, though, and have gotten some basic SLAM running, but that’s a whole different story… Hopefully we can strap it onto the robot and upload the code within the next few weeks to see how well it does (or doesn’t) work.
Thank you for your code release! I had been toying with converting our team’s code to Kotlin, so it’s interesting to see how you did things. I’m also curious whether you plan on using IDEA next year, given that VS Code is going to be the new standard; from my experience it doesn’t quite pick up Gradle dependencies for its IntelliSense yet.
One suggestion I have: you link to Sertain’s JavaDocs, but there are no descriptions or documentation for any of it in those docs. Those would be really helpful for pointing people in the right direction on where to get started. From a quick glance, you came up with a notation similar to ours for declaratively creating CommandGroups, which is encouraging to me, though I’m not quite sure what your “mirror” and “bridge” commands are, or how they’re meant to interact with core WPILib.
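For reference, the declarative notation I mean looks roughly like the sketch below. This is a hypothetical minimal builder on top of WPILib’s 2018 CommandGroup, not Sertain’s actual API, and the commands in the usage comment are made up:

```kotlin
import edu.wpi.first.wpilibj.command.Command
import edu.wpi.first.wpilibj.command.CommandGroup

// Tiny builder for declaring command groups without subclassing CommandGroup.
// `+command` adds a sequential step; `parallel(command)` runs one alongside it.
class GroupBuilder {
    internal val group = CommandGroup()
    operator fun Command.unaryPlus() = group.addSequential(this)
    fun parallel(command: Command) = group.addParallel(command)
}

fun sequential(block: GroupBuilder.() -> Unit): CommandGroup =
    GroupBuilder().apply(block).group

// Usage (hypothetical commands):
// val auto = sequential {
//     +DriveForward(1.5)
//     parallel(RaiseElevator())
//     +EjectCube()
// }
```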
We love IDEA/IntelliJ and definitely plan to continue using it. Every IDE has its uses, and IntelliJ excels at Java/Kotlin/Gradle usability. That said, most of us do use VS Code for everything non-Java/Kotlin, such as our vision and other coprocessor code.
Thanks for the suggestion! Yeah… we’re in the process of cleaning it up a little. CommandBridge really shouldn’t be accessible, and most of that is internal hacking to get around some weird WPILib behavior. We’ll add some examples and step up the documentation over the next few weeks!