What have you developed or are in process of developing?
Hello,
With the recent rapid growth of our programming and control systems, I'd like to take on a big challenge this year. Nothing that would require multiple build seasons, since this is my senior year and I'd like to finish it while I'm still around, but hey, anything works. I've looked into splines from 254, and that seems like the highest bar we can set for ourselves. We have a PID system working.
So I'd like to know what you guys are in the process of developing, to maybe spark some ideas for myself.
Caleb Sykes
03-10-2014, 12:20
We are in the process of developing a 2-ball autonomous for our off-season events.
Additionally, we have been playing around with field-centric tank drive code. Our robot was symmetric last year, so a field centric drive that gives no preference to the "front" or the "back" of the robot might actually be very beneficial.
Additionally, we have been playing around with field-centric tank drive code.
ehm... what...
I've just been toying around with a square bot (omnis with equal horizontal and vertical force) as a way to learn some mechanical skills (I already knew more electrical than the experiment could teach me).
Whenever I get a few minutes of free time, I also work on FSS (First Scripting Syntax), which was designed to be a combination of C++, Java, and Wurst (http://peq.github.io/WurstScript/manual.html) (itself a combination of Python and the native scripting syntax). But I've spent so much time developing a code editor window for Java that I have no idea what I'm doing anymore, and any time my free time exceeds an hour I just end up watching anime. (It's worth mentioning that I'm our team's only programmer.)
Fun little ideas for your team that I jot down in a notebook whenever I think of them:
Automatic error recovery
Code logic probe
Custom dashboard for modifying constants (SmartDashboard just never appealed to me in that aspect)
Text-based auton system (I did this in under an hour; it's not that hard. In another two hours I had a full GUI and automatic compiler + deployment, which is ironic because I've been working on redoing that GUI for the 2015 system for over 5 months now)
Voltage modulation (send a motor 8 volts)
Vision processing (<insert cliche method to process images here>)
Automatic controller detection (PS3, Xbox 360, Logitech, flightstick, yada yada)
Auton mode swapping (in 2014 we did it with Start and Select on the driver controller)
Automatic ping reports (just a thought: if you thought your robot had shaky comms during the match, you could just upload the ping reports)
SPI-bus absolute encoders (those won us an Innovation in Controls Award and a Judges' Award last year)
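The text-based auton idea above really is quick to get working. A minimal sketch in Java (the command names `drive`, `turn`, and `shoot` here are just placeholder examples, not any team's actual command set):

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of a text-based autonomous script parser.
 *  Each non-comment line is "command [arg]", e.g. "drive 2.5" or "turn 90". */
public class AutonScript {
    /** One parsed step: a command name plus an optional numeric argument. */
    public static final class Step {
        public final String command;
        public final double arg;
        Step(String command, double arg) { this.command = command; this.arg = arg; }
    }

    /** Parse a script into steps, skipping blank lines and '#' comments. */
    public static List<Step> parse(String script) {
        List<Step> steps = new ArrayList<>();
        for (String line : script.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("\\s+");
            double arg = parts.length > 1 ? Double.parseDouble(parts[1]) : 0.0;
            steps.add(new Step(parts[0], arg));
        }
        return steps;
    }
}
```

From there, a dispatcher that maps each command name onto a robot action gives you the whole auton system, and scripts can be edited and redeployed without recompiling.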
Caleb Sykes
03-10-2014, 23:19
ehm... what...
We are roughly using the approach laid out in this (http://www.chiefdelphi.com/media/papers/2438) whitepaper. We had a working gyro and some basic autonomous functionality with it last season, and are looking to become even more familiar with this sensor. In addition to the basic functionality described in the paper, we would also like to:
Add a button the driver can press that sets speed_command to zero, allowing the robot to turn in place to the angle specified by the joystick.
Make the maximum angle at which the robot turns forward instead of backward (given in the text as the number 100) a function of the robot's velocity rather than just a constant.
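The two extensions above can be sketched roughly as follows. This is not the whitepaper's actual code; the proportional gain and the 100-degree reversal threshold are illustrative values:

```java
/** Sketch of gyro heading-hold logic for a field-centric tank drive:
 *  the joystick angle sets a target heading, and the robot turns toward it. */
public class HeadingDrive {
    /** Wrap an angle error into (-180, 180] degrees. */
    public static double wrapDegrees(double error) {
        while (error > 180) error -= 360;
        while (error <= -180) error += 360;
        return error;
    }

    /** Returns {speed, turn}. When turnInPlace is held, speed_command is
     *  forced to zero so the robot only rotates toward the joystick angle. */
    public static double[] drive(double targetDeg, double headingDeg,
                                 double speedCommand, boolean turnInPlace) {
        double error = wrapDegrees(targetDeg - headingDeg);
        double speed = turnInPlace ? 0.0 : speedCommand;
        // If the target is more than ~100 degrees behind us, drive backward
        // toward the opposite heading instead of swinging all the way around.
        // (Making this threshold a function of velocity is the second idea above.)
        if (!turnInPlace && Math.abs(error) > 100) {
            error = wrapDegrees(error + 180);
            speed = -speed;
        }
        double turn = 0.02 * error;  // simple proportional gain on heading error
        return new double[] { speed, turn };
    }
}
```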
This summer, I wrote a scripting language for auto mode, using LabVIEW. It can be updated instantly through the dashboard, supports running multiple commands simultaneously (needed for our two-ball auto), and I'm considering adding conditionals (if statements).
I'm also planning on experimenting with vision and potentially on-board kinect.
SoftwareBug2.0
04-10-2014, 01:38
Manual override mode: For every PWM/solenoid/relay/digital output on the cRIO you could press a sequence of buttons on a gamepad to set it to a chosen value. Useful for initial bring-up of components, debugging, or if something goes unexpectedly wrong during a match. This was especially useful for us because our normal operation modes involved doing a lot of actions at once.
To use it, you'd basically hit a button to specify the output type, then a couple to specify which output number, and then a couple more to choose the value to set it to. So for example, the values available for the PWMs were full forward, full reverse, stop, or "no override".
This feature was easy to implement because of the overall design of our code. At a high level, it looked like this:
             +------------------------------------------+
             |                                          |
             v                                          |
sensor ----> [state estimator] --> [control logic] --> overrides --> outputs
input                                    ^
user input --> goals --------------------+
All the outputs are collected together into a data structure. So after the "normal" control logic ran, it was easy to change parts of the output before it was written, and these changes would naturally be reflected in the state estimation code. This meant that we didn't have to snake any changes through the normal control logic or state estimation code.
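The pattern described above can be sketched in a few lines of Java. The field names and channel counts here are illustrative, not the team's actual code; the key idea is that overrides patch one shared outputs structure after control logic runs:

```java
/** Sketch of the manual-override pattern: control logic fills an Outputs
 *  structure, then any active overrides replace individual values before
 *  the structure is written to the hardware. */
public class OverrideDemo {
    /** All actuator commands collected in one data structure. */
    public static final class Outputs {
        public double[] pwm = new double[10];       // motor commands, -1..1
        public boolean[] solenoid = new boolean[8]; // solenoid states
    }

    /** Apply one PWM override; "no override" is encoded here as NaN,
     *  which leaves the control logic's value untouched. */
    public static void applyPwmOverride(Outputs out, int channel, double value) {
        if (!Double.isNaN(value)) out.pwm[channel] = value;
    }
}
```

Because the override is applied to the collected structure rather than inside the control logic, the state estimator sees the overridden values on the next loop, exactly as the feedback arrow in the diagram suggests.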
pastelpony
04-10-2014, 08:03
I'm in the process of developing a scouting app right now, though when I get access to a robot, I want to test the use of an Intel Galileo and a fork of CheesyVision for my team's use (if it's allowed next year).
Kingland093
04-10-2014, 10:17
My team has been working on vision tracking for the past 2 years… still without success. I've also been looking into encoders and ultrasonic rangefinders.
pastelpony
04-10-2014, 10:17
My team has been working on vision tracking for the past 2 years… still without success. I've also been looking into encoders and ultrasonic rangefinders.
Are you using LabVIEW?
faust1706
04-10-2014, 10:34
This project has been drawn out for a few years, and it is sort of my child now. I'm getting paid (via grants) to work on it in the lab at the university I go to.
Take data from the Kinect's depth map and find objects in front of it, then feed those into my path-finding algorithm; use our vision system to find where we are on the field and which way we are facing (camera pose estimation); generate a path, based on where we are and the obstacles the Kinect detected, to a point on the field from which we can score; and follow that path (much like 254's code this year).
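The path-finding stage in that pipeline boils down to search over an occupancy grid built from the depth map. A much-simplified sketch (plain breadth-first search on a 4-connected grid, standing in for whatever planner the project actually uses):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

/** Sketch of grid path finding over detected obstacles: the field is
 *  reduced to an occupancy grid where true cells are blocked, and BFS
 *  finds the shortest 4-connected path from start to goal. */
public class GridPath {
    /** Returns the number of steps in the shortest path from (sr,sc)
     *  to (gr,gc), or -1 if the goal is unreachable. */
    public static int shortestPath(boolean[][] blocked,
                                   int sr, int sc, int gr, int gc) {
        int rows = blocked.length, cols = blocked[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1);
        Deque<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[] { sr, sc });
        int[][] moves = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == gr && cur[1] == gc) return dist[gr][gc];
            for (int[] m : moves) {
                int r = cur[0] + m[0], c = cur[1] + m[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && !blocked[r][c] && dist[r][c] == -1) {
                    dist[r][c] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[] { r, c });
                }
            }
        }
        return -1;
    }
}
```

A real planner would likely use A* with a distance heuristic and then smooth the grid path into something drivable, but the obstacle-avoidance logic is the same shape.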
Kingland093
04-10-2014, 10:35
Are you using LabVIEW?
Java
DonRotolo
04-10-2014, 10:39
I'm developing an online training program for FRC robot inspectors. And hoping FIRST will approve it...
(Not off-topic: It does involve programming, and robots, but perhaps not in the usual sense...)
Pratik Kunapuli
04-10-2014, 11:00
Java
WPI has a decent tutorial for vision processing here (https://wpilib.screenstepslive.com/s/3120/m/8731/l/91395-c-java-code). It goes through example code for both Java and C++ and does a pretty good job of explaining the process. My other suggestion would be to look through this white paper (http://www.chiefdelphi.com/media/papers/2676). It is Team 341's vision processing code from 2012, written in Java. The thread associated with the paper also answers a lot of questions.
thatprogrammer
04-10-2014, 12:59
Looks like it's time to post what I want to do over the course of now until the season, and during the season.
1. Develop cheesy drive with a steering wheel being used for turning
2. Work on a WCD and reversible alternate material bumpers + code for shifter gearboxes.
3. Work on splines similar to what 254 has done.
4. Learn how to program co-processors (most likely an Arduino).
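Item 1 above, cheesy drive with a steering wheel, can be sketched roughly like this. This is not 254's actual implementation; the quick-turn handling and clamping here are simplified assumptions:

```java
/** Rough sketch of curvature ("cheesy") drive: the wheel commands path
 *  curvature rather than raw turn rate, so turning scales with throttle.
 *  A quick-turn button restores in-place turning at low speed. */
public class CheesyDriveSketch {
    /** Returns {left, right} motor commands, normalized into [-1, 1]. */
    public static double[] drive(double throttle, double wheel, boolean quickTurn) {
        // In quick-turn mode the wheel drives rotation directly; otherwise
        // the turn command is scaled by throttle to hold constant curvature.
        double turn = quickTurn ? wheel : Math.abs(throttle) * wheel;
        double left = throttle + turn;
        double right = throttle - turn;
        // Normalize so neither side exceeds full power, preserving the ratio.
        double max = Math.max(Math.abs(left), Math.abs(right));
        if (max > 1.0) { left /= max; right /= max; }
        return new double[] { left, right };
    }
}
```

Mapping a steering wheel onto `wheel` and a throttle pedal or stick onto `throttle` is then just joystick-axis plumbing.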
Cel Skeggs
04-10-2014, 13:26
Last night, I got a prototype working of a set of buildscripts as part of my team's code framework that lets us download our code to either the roboRIO or the cRIO, both with Java 5 features (or 8 on the roboRIO), from the exact same project.
Essentially I ripped out all of the internals from the WPILibJ 2014 SDK, added in Retrotranslator, built a new version of the preverifier executable, modified the build process for faster building, and then put everything into our shared buildscript. The current complete build process for this goes:
When our framework changes: Build CCRE Core Jar file, Build pre-Igneous library packages, Compile Igneous code (Java 1.5), Unpack pre-Igneous libraries, Retrotranslate libraries (except for stub StringBuilder class), Preverify libraries (with upgraded preverifier), Build post-Igneous library packages.
When our code changes: Compile Robot Code (Java 1.5), Retrotranslate Robot Code, Unpack post-Igneous libraries without StringBuilder stub, Preverify libraries (with upgraded preverifier), Package application Jar, Romize Robot Code Suite, Download and Erase Robot Logfiles, Possibly upgrade Squawk release, Deploy Robot Code Suite, Reboot Robot via OTA, open cRIO console.
I'm still working on making it work even when you don't have the roboRIO plugins installed, and once we have that, our entire software team can switch over to Eclipse for code development. The only thing they'll need to do to target the roboRIO once the build season starts is install the roboRIO SDK plugins and press a different download button.
I'm also working on upgrading the Poultry Inspector application (named as such because our team is "The Flaming Chickens"), which is a dashboard replacement that uses our team's pub/sub networking protocol (named Cluck, for similar reasons as before).
I've been upgrading it to have a cleaner interface (SuperCanvas) designed to work well with a touchscreen, and I'm working on extending it to have more options for interaction.
Other goals for the near future: write Eclipse plugins for our code framework, add built-in Mecanum support, add a virtual patch panel system to reconfigure I/O at runtime, add a more robust configuration toolkit, build a unified automatic deployment system, and other miscellaneous features.
Fletch1373
04-10-2014, 20:59
I'm developing an online training program for FRC robot inspectors. And hoping FIRST will approve it...
(Not off-topic: It does involve programming, and robots, but perhaps not in the usual sense...)
I'd be interested in helping if you're open-sourcing it or otherwise looking for help!
nandeeka
04-10-2014, 22:35
We are learning how to use NetworkTables and are using them to give ourselves better data for tuning PID loops.
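The core trick with NetworkTables-based tuning is to re-read the gains every loop iteration, so edits made on the dashboard take effect immediately. A sketch of that idea, with a plain map standing in for the table (this deliberately does not reproduce WPILib's actual NetworkTables API, and the "drive/" key prefix is a made-up example):

```java
import java.util.Map;

/** Sketch of live PID tuning: the controller re-reads its gains from a
 *  key/value table (a stand-in for NetworkTables) on every update, so
 *  values edited on the dashboard apply on the very next loop. */
public class TunablePid {
    private final Map<String, Double> table;
    private final String prefix;
    private double integral, lastError;

    public TunablePid(Map<String, Double> table, String prefix) {
        this.table = table;
        this.prefix = prefix;
    }

    private double gain(String name, double fallback) {
        return table.getOrDefault(prefix + name, fallback);
    }

    /** One controller update; dt is the loop period in seconds. */
    public double calculate(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return gain("kP", 0.0) * error
             + gain("kI", 0.0) * integral
             + gain("kD", 0.0) * derivative;
    }
}
```

With the real NetworkTables, the map lookups would become table reads, and the same table entries show up live on the dashboard for logging and plotting.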
nathanwalters
07-10-2014, 12:48
This year we're looking at using a coprocessor to handle a lot of the work that would normally be done by the robot itself or the driver station. Because we don't have to worry about using CPU cycles on the RIO, we'll be able to do quite a bit.
We're looking at developing a field-positioning system that integrates input from a variety of sources (encoders, a 9-axis IMU, the camera, etc.) and will be able to precisely determine our position and orientation on the field.
We're also looking at a more complex logging framework that will essentially continuously log the state of every part of the robot. That will let us virtually replay the match after the fact to help with debugging. With the new CAN bus APIs, we should be able to grab info about current draw from our actuators, which will provide greater insight into what's happening on the robot at any given time.
One student is designing a temperature-monitoring system that will use an Arduino with a bunch of temperature sensors attached to it to monitor the temperature of various parts of the robot. And, of course, we'll continue to refine our LED code :)
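A logging framework with replay like the one described is mostly a matter of recording timestamped snapshots of every signal and then querying them by time. A small sketch of that idea (signal names and types are illustrative; a real version would log to disk rather than memory):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a match-replay logger: every loop iteration records named
 *  robot values with a timestamp, so the match can be stepped through
 *  afterward signal by signal. */
public class StateLog {
    public static final class Entry {
        public final double timestamp;  // seconds since match start
        public final String name;       // signal name, e.g. "heading"
        public final double value;
        Entry(double timestamp, String name, double value) {
            this.timestamp = timestamp; this.name = name; this.value = value;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void record(double timestamp, String name, double value) {
        entries.add(new Entry(timestamp, name, value));
    }

    /** Replay query: the most recent value of a signal at or before the
     *  given time, or null if it had not been logged yet. */
    public Double valueAt(String name, double time) {
        Double result = null;
        for (Entry e : entries) {
            if (e.name.equals(name) && e.timestamp <= time) result = e.value;
        }
        return result;
    }
}
```

CAN current-draw readings, IMU headings, and actuator commands all fit the same record/query shape, which is what makes whole-robot replay practical.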
So yeah, that's what we're working on right now. I'm excited to see if we'll get all this working in time for build season!
weaversam8
13-10-2014, 17:53
Try designing HTML5 apps in Phonegap to help with scouting, marketing, PR, etc.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.