New Sensors?

There is a lot of talk about new motors being introduced next season and what we can look forward to in terms of drive control. I’d like to pose another question, to my favorite kind of people.

What kinds of sensors would you like to see FIRST introduce in the KOP or FIRST Choice to enhance teams’ ability to create more intricate autonomous programs, or even hybrid-driven robots that make more accurate decisions based on better data?

I would love to see the field provide some technology (there are many ways to do this) to enable location tracking. The ability to do a GPS-style determination of the robot’s location could really enable some cool stuff!
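For illustration, here’s a minimal sketch of how GPS-style localization could work if the field reported each robot’s distance to a few beacons at known positions (the beacon layout and interface here are entirely hypothetical): classic 2-D trilateration reduces to a small linear system.

```python
import math

def trilaterate(beacons, dists):
    """2-D position from distances to three beacons at known (x, y) coords.

    Linearizes the three circle equations by subtracting the first from
    the other two, then solves the resulting 2x2 system with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # 2*(xi - x1)*x + 2*(yi - y1)*y = (xi^2 - x1^2) + (yi^2 - y1^2) - (di^2 - d1^2)
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (d2**2 - d1**2)
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (d3**2 - d1**2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; position is ambiguous")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Hypothetical example: robot at (3, 4) on a field with beacons at three corners.
beacons = [(0.0, 0.0), (16.0, 0.0), (0.0, 8.0)]
dists = [math.hypot(3 - bx, 4 - by) for bx, by in beacons]
print(trilaterate(beacons, dists))  # → approximately (3.0, 4.0)
```

With noisy real-world ranges you’d want more than three beacons and a least-squares fit, but the idea is the same.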

I’d love to see a Class 1 laser distance switch (25-foot range), but they’re pricey.

This probably isn’t very feasible, but what I would love to see is a grid of weight sensors under the field that could tell your robot where all of the other robots were. Then I would have a 10-15 second autonomous period, followed by a minute of optional autonomous; if your robot stayed autonomous during this period, you would get a bonus on each item scored. The game would have to be noncooperative, since the weight sensors would not tell you the alliance of each robot.

I think it would be practical to have a fully autonomous robot in this minute-long period without too huge a score bonus, and while it might be hard to explain to spectators, it would definitely draw middle-level teams’ attention to the control system. I’ve noticed that the most amazing teams always have a basically perfect autonomous program (1114 in 2008, 67 in 2012, 254 this year), and I think most teams don’t think about this as much as they should early in the design stage (I know my team hasn’t). I doubt any of this will happen, but it would be pretty cool in my opinion.
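As a thought experiment, turning such a weight-sensor grid into robot positions could be as simple as flood-filling the triggered cells and reporting blob centroids. This sketch assumes the field exposes a plain 2-D boolean grid of readings, which is of course hypothetical:

```python
def robot_positions(grid):
    """Find robot centroids in a grid of weight-sensor readings.

    `grid` is a 2-D list of booleans (True = weight detected). Adjacent
    triggered cells are grouped by flood fill; each group's centroid is
    treated as one robot. Cell indices stand in for field coordinates.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    cr, cc = stack.pop()
                    cells.append((cr, cc))
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                cy = sum(p[0] for p in cells) / len(cells)
                cx = sum(p[1] for p in cells) / len(cells)
                centroids.append((cx, cy))
    return centroids
```

Two robots touching would merge into one blob, which fits the “noncooperative, no alliance info” flavor of the proposal.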

To get really good auto programs, provide an overhead camera view of the field like in RoboCup. However, I don’t think this would change the game that much, and it would be very hard to standardize from event to event. Perhaps field-mounted cameras on each end that robots have access to over the network.

I’d love to see them add a LIDAR of some sort. They are basically a given in most autonomous navigation competitions.

I’ve been looking for FRC friendly LIDAR options. I’m looking at obtaining one of these for experimentation this fall:


For sale here:

They come in just under the budget restrictions and use a laser that is out of the visible spectrum. It’ll probably require some hardening to help it survive field conditions.

You might be able to use something like this as a laser switch; it is also out of the visible spectrum, so it should be safe.


LIDARs are really nice. I wish they were something we could use as well. I was part of a NASA Centennial Challenge team and we used a low-end LIDAR… which still cost over $1K. So I’m not sure we’ll see one in FRC for quite some time.


I’d love to see a drive gearbox that has the motors, speed controller, and encoder built in. And talks CAN. I think this would allow teams to do some neat stuff out of the box with just software and little chance of messing things up mechanically/electrically.

I agree this is a great idea. 250 lbf-in stall torque and 1000 rev/min free speed would be just about right. 9 A free current should be feasible using good gears with a 12 V supply.

This online community could field several development teams with the engineering chops to make such a thing happen. What would it take to get a design competition off the ground? An RFP from Dean? :cool:

Looks pretty sweet… two notes to make sure you don’t have issues with inspection:

  • make sure you bring a copy of the datasheet with you so your inspector can see it’s class 1
  • take a careful look at the included “motor system”; odds are you’ll need to swap out the included motor for an FRC-legal motor!

I really wouldn’t be surprised if Vex/AM tries to get something out like this to teams by Kickoff.

LightWare makes very affordable Arduino-compatible laser rangefinders :slight_smile: My team isn’t convinced we need one, though.

What about a camera suspended above the field, looking down at it? It would just be a media server. Teams could use a cheap camera supplied in the kit of parts during the build season to practice with the system, and make the switch to an identical secure system during competition.

I like the idea, though if there were a modular version it would be cheaper to fix when it breaks. Still, a gearbox that a team could just bolt the motor(s) and an all-in-one drive electronics unit onto would put the team pretty close to ready to go.

Especially if there was also a CAN Backbone + Power Distribution combo unit that could combine power and CAN connectivity into a single cable. Or some sort of 12V variant of USB would work nicely too.

For most shooting games, it would be useful (Aerial Assist being the exception). That said, $300 is expensive.

I’m not sure there’s a need for new KOP sensors. As is, we don’t use the limit switches provided. We’ve done pretty well buying sensors we find online. We used prox switches ($8) and prox sensors ($12) this year to great effect.

Take a look at these. I’m not sure what you mean in the sense of GPS style tracking, but maybe one of these could help.

That LightWare lidar is Class 1M… the rules (so far) state it has to be Class 1. Lidar-Lite just got approved as Class 1 for its laser version, and they also have an LED version that does not require any classification.

New sensors for new sensors’ sake will just be a gimmick that very few teams end up using successfully (like the camera has been in many years). There would need to also be a game mechanic that requires the new sensor.

I would personally love to see an end-of-match autonomous mode, which necessitates very good localization. Up until now, teams could get by on using odometry from a known starting position and be (reasonably) repeatable over 10-15 seconds. Remove the ability to precisely control the initial conditions of the robot and it is a whole different animal.
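To see why initial conditions matter so much, here is a bare-bones dead-reckoning sketch: every step of the integration is relative to the starting pose, so any error in that pose (or any wheel slip along the way) is carried to the end of the run. The unicycle-style model and sample format are illustrative, not any team’s actual code.

```python
import math

def integrate_odometry(x, y, theta, steps):
    """Dead-reckon a pose from (distance, heading-change) samples.

    Starting from a known pose (x, y, theta), each step advances the
    robot by `dist` (e.g. an averaged encoder delta) along heading
    `theta`, which is first updated by the gyro's heading change
    `dtheta`. Any error in the starting pose propagates through every
    subsequent step, which is why a fixed, known starting position
    makes short autonomous routines so repeatable.
    """
    for dist, dtheta in steps:
        theta += dtheta
        x += dist * math.cos(theta)
        y += dist * math.sin(theta)
    return x, y, theta

# Drive 2 m forward, turn 90 degrees, drive 1 m:
steps = [(2.0, 0.0), (0.0, math.pi / 2), (1.0, 0.0)]
print(integrate_odometry(0.0, 0.0, 0.0, steps))  # → (2.0, 1.0, pi/2) up to rounding
```

Shift the assumed starting pose by a few degrees and the final position error grows with every meter driven, which is exactly the problem an end-of-match autonomous would force teams to solve with real localization.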

RoboCup uses a top-down, whole-field camera, as previously mentioned. This might be infeasible, but attaching a high-resolution camera (with high-intensity LED rings) rigidly to a goal or field element would be really, really cool. The field would then send the images over wired Ethernet to the player station, letting teams detect their own robot in the video stream on the driver’s station. You could then do localization (or even visual servoing) based on the feed and send commands to the robot.

Advantages of this proposal:
  0. Robots that can work from semi-arbitrary starting conditions would be HELLA COOL.

  1. Since 2005, the vision challenge has always been to detect a given feature of the field or a game element. Instead, turn the challenge on its head and have teams devise their own fiducials (using retroreflective tape or otherwise). Designing good features to detect, localize, and track is a tough engineering challenge!

  2. Field WiFi network doesn’t need to carry image data, so no bandwidth hiccups and latency problems like we’ve seen for the past few seasons.

  3. Teams don’t need to worry about mounting a camera to their robot (expensive, complex, occasionally flaky, fragile, etc.).

  4. Teams only need a sensor and a laptop to do the programming. Gives the programmers a meaningful task on day 1!
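As a rough illustration of the fiducial-detection side (not any official field API), a retroreflective marker lit by an LED ring tends to saturate the image, so even a naive threshold-and-centroid pass can give a bearing to the marker:

```python
def fiducial_bearing(image, threshold, fov_deg):
    """Locate one bright retroreflective fiducial in a grayscale frame.

    `image` is a 2-D list of pixel intensities (0-255). Pixels at or
    above `threshold` are assumed to belong to the marker lit by the
    LED ring; the centroid's horizontal offset from image center is
    mapped linearly to a bearing using the camera's horizontal field
    of view. Returns None when no bright pixels are found.
    """
    bright = [(x, y) for y, row in enumerate(image)
              for x, v in enumerate(row) if v >= threshold]
    if not bright:
        return None
    cx = sum(p[0] for p in bright) / len(bright)
    width = len(image[0])
    # Small-angle linear pixel-to-angle mapping; a calibrated camera
    # model would be more accurate near the frame edges.
    return (cx - (width - 1) / 2) / width * fov_deg
```

A real system would also reject stray reflections (by blob size or shape) and combine bearings from two markers to get a position fix, but the core idea is this simple.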

A variation on this proposal would be to have a protected area of the field (or beside the field, with Ethernet cables and power supplied) where teams can position their own custom camera (or other) sensor prior to the match. This eliminates the need for every team to acquire one of the official field sensors and could allow for even more creativity.

It’s clear the Google car and its brethren will be real very soon, and in fact it’s the social, not the technical, challenges which will dominate. I for one don’t want to wait for FIRST to design games that require this technology. I believe we need to develop it now, and I believe once developed it will be both very usable in current games and future-ready when drivers have to focus on problems of a higher order than vehicular navigation.

Two lynchpin sensors:

A) A $79 LED-based optical ranging system (Lidar-Lite) is being released this month. We’re planning on building a 2-D 360-degree scanning lidar; the goal is to range the entire field (at robot height) at 1-2 degree resolution within about 3 seconds.

Lidar Lite (LIDAR-Lite by PulsedLight - Dragon Innovation):

  • Uses LEDs, not Lasers, w/an optic. Repeat: NO LASERS REQUIRED.
  • Measures distance to 20 meters with a 10 ms integration period using time-of-flight calculations
  • Notably, provides SNR measurements for each calculation. Key point: We should get much higher SNR for ranging retroreflective tape than other surfaces.
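For the curious, the time-of-flight relation and the conversion of a 360-degree scan into field-frame points are both near one-liners. This is just the underlying math, not PulsedLight’s firmware or any shipping driver:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Time-of-flight range: the pulse travels out and back, so halve it."""
    return C * round_trip_s / 2.0

def scan_to_points(ranges_by_deg, robot_x, robot_y, robot_heading_deg):
    """Convert a 360-degree scan {angle_deg: range_m} into field-frame
    (x, y) points, given the robot's pose (e.g. heading from an AHRS)."""
    pts = []
    for ang, r in sorted(ranges_by_deg.items()):
        world = math.radians(robot_heading_deg + ang)
        pts.append((robot_x + r * math.cos(world),
                    robot_y + r * math.sin(world)))
    return pts

# A 20 m round trip (10 m target) takes about 67 ns:
print(tof_distance(20 / C))  # → 10.0 metres
```

Matching those field-frame points against the known wall and goal geometry is what turns a raw scan into a position estimate.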

B) An auto-calibrating Attitude Heading Reference System (AHRS). The nav6 IMU we developed for FIRST (Google Code Archive - Long-term storage for Google Code Project Hosting.) provides this solution; this was used by several teams at nationals last year and includes C++, Java and LabView Libraries for easy integration onto the robot.

Given knowledge of the field metrics, these two intelligent sensors together provide:

  • Robot Current Position relative to Field (derived from field metrics and 360 degree lidar scan) throughout the match.
  • Robot Starting Orientation (measured w/magnetometers before game (and motors) start).
  • Instantaneous Orientation (100Hz Motion Fusion of Gyro/Accelerometer).
  • Gravity-corrected Linear Acceleration measurements
  • Angle to Retro-reflective Tape (based on SNR thresholds of LIDAR data).

And unlike camera-based approaches, we believe this approach should be insensitive to variable lighting conditions.
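The 100 Hz gyro/accelerometer motion fusion mentioned above can be illustrated with a complementary filter: the gyro integral tracks fast motion but drifts, while the accelerometer’s gravity vector gives a noisy but drift-free tilt reference. This single-axis sketch is a big simplification of what a real AHRS like the nav6 does internally:

```python
import math

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyro and accelerometer samples into a pitch estimate (radians).

    Each step blends the gyro-integrated pitch (weight `alpha`) with the
    tilt implied by the gravity direction (weight 1 - alpha). `accels`
    holds (ax, az) pairs; `gyro_rates` holds pitch rates in rad/s,
    sampled every `dt` seconds. `alpha` here is an illustrative value.
    """
    pitch = 0.0
    for rate, (ax, az) in zip(gyro_rates, accels):
        accel_pitch = math.atan2(ax, az)  # tilt from the gravity vector
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * accel_pitch
    return pitch
```

With a constant gyro bias and a level accelerometer, the estimate settles near zero instead of drifting without bound, which is the whole point of the fusion.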

The nav6 sells for $70. And I’m estimating we can build the Lidar-Lite scanner for about $150 in parts; I’m not sure what the sale price to the FIRST community might be yet.

Now of course on top of that we need collision avoidance and waypoint navigation algorithms. But the amount of published research done in this area recently (not to mention some cool Cheesy Poofs navigation code :P) is rapidly making this an engineering, not a research, task.
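To give a flavor of the waypoint-navigation side (a generic textbook-style controller, not the Cheesy Poofs code), one step of a proportional go-to-point controller looks like:

```python
import math

def drive_to_waypoint(pose, waypoint, k_turn=2.0, cruise=1.0):
    """One control step of a simple go-to-point controller.

    Given the robot pose (x, y, heading_rad) from localization and a
    target (x, y), return (forward_speed, turn_rate) commands: turn
    toward the waypoint proportionally to the heading error, and scale
    back forward motion as that error grows. The gains are illustrative
    placeholders, not tuned values.
    """
    x, y, heading = pose
    tx, ty = waypoint
    desired = math.atan2(ty - y, tx - x)
    # Wrap the heading error into [-pi, pi]:
    err = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    forward = cruise * max(0.0, math.cos(err))  # don't drive forward while facing away
    return forward, k_turn * err
```

Chaining such steps through a list of waypoints, plus vetoing any command that would enter an occupied cell from the lidar map, is the skeleton of the collision-avoidance layer.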

It’s a good time to be a robotics engineer!

The Lidar-Lite looks to be an inexpensive and capable sensor. But it is new and untested by the masses in the wild. I’ve seen other efforts like this fall far short of their claims when the real product shipped. For that price it’s worth giving it a shot. An infrared LED may just be good enough.

As to the nav6: InvenSense is now releasing their code for 9-axis sensor fusion. Up to now they only released this to their high-volume partners. Fusing in the magnetometer with these libraries may make the nav6 way better with just a code update.

PNI Corp released a new 9-axis sensor fusion ASIC this spring that has shown superior magnetic corrections with cheap magnetometers.

Inertial navigation is doable at most venues. It does need CPU horsepower.

Yes. The new InvenSense Motion Driver 6.0 beta release occurred yesterday. We are now developing the nav6’s bigger brother, the nav9, using the MPU-9250 and featuring an STM32 ARM MCU to host the InvenSense MPL library, which only ships in binary form (forcing the move to a new processor).

This gives full 9-axis motion fusion, including:

  • magnetic disturbance detection
  • improved magnetometer calibration, including temperature-shift correction
  • better support for pre-match and factory calibration
  • faster dynamic calibration

Also adding USB, I2C, and SPI interface options.

Thinking of adding a CAN interface too, but not sure the extra circuitry keeps it within the budget of most FIRST teams.