Introducing Frosted Glass

When the Limelight was introduced last season, I was a bit…disappointed by the price tag. I didn’t dwell on it too much, because I don’t know how much the device costs to manufacture or how much time was spent developing it, but regardless, I felt it was too expensive to be accessible to many teams. Of course, many teams don’t have the mentorship to guide eager programmers through vision processing, or the time to dedicate to vision, and the Limelight is an amazing solution that benefitted teams that chose to go that way.

But I knew there must be another solution. Specifically, a cheaper one. A programmer at heart, I decided to create yet another vision package. I wanted to target a device that many FRC students might already have, like a laptop, but one small enough to fit on a robot, like a Raspberry Pi. It needed a good camera and fast internals so as not to bottleneck performance. So I reached into my pocket, pulled out my iPhone, and created Frosted Glass.

**What is Frosted Glass?**
Frosted Glass is an out-of-the-box vision processing app for iOS. It functions similarly to a Raspberry Pi set up for FRC, but without the hassle of setup. It’s currently in the App Store at version 0.2 and will successfully detect retroreflective targets. It runs at ~80fps at 640x480 resolution and will use NetworkTables to communicate with the roboRIO.
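To give a sense of the kind of numbers a tracker like this would publish over NetworkTables, here’s a minimal sketch of the standard pinhole-camera math for turning a target’s pixel position into a yaw angle. The 60° horizontal field of view is an assumed placeholder, not a measured iPhone spec, and `pixel_to_yaw` is a hypothetical helper, not part of the Frosted Glass API.

```python
import math

IMAGE_WIDTH = 640   # Frosted Glass runs at 640x480
HFOV_DEG = 60.0     # assumed horizontal field of view, not a real iPhone spec

def pixel_to_yaw(target_x_px: float) -> float:
    """Convert a target's x pixel coordinate to a yaw angle in degrees."""
    # Focal length in pixels, from the pinhole camera model.
    focal_px = (IMAGE_WIDTH / 2) / math.tan(math.radians(HFOV_DEG / 2))
    offset_px = target_x_px - IMAGE_WIDTH / 2
    return math.degrees(math.atan2(offset_px, focal_px))

# A target centered in the frame has zero yaw:
print(round(pixel_to_yaw(320), 3))  # 0.0
```

A robot-side loop would read an angle like this each frame and feed it into a turn-to-target controller.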

**Why would I put my iPhone on a robot?**
To be honest…you probably won’t. At least not in competition. Not many people have spare iPhones lying around waiting for use. But my ideal use of Frosted Glass isn’t to completely replace a hand-built vision system; that would take away too much of the learning experience for FRC. Instead, I envision teams using it in the first couple of days or weeks of build season, when testing chassis, shooters, or autonomous routines, when they don’t want to dedicate time to vision *just yet*. Then, once they have the kinks worked out, they can build their own improved processing pipeline as a more permanent solution and learn what it takes to set up an offboard (or onboard) system. An iPhone is a perfect tool for testing because you can pick it up and move it around, simulating robot movement without having a built robot, but no one wants to put theirs on a robot during an intense match.

It’s also great for off-season events when you aren’t in the grind of robotics yet, but maybe want to experiment with target tracking before January.

Whatever the reason may be, I hope teams will find a use for it.

**What about Android?**
I’ve been asked this a bit since I started discussing Frosted Glass within the FRC community. The short answer is: I have an iPhone, so that’s what I developed for. The long answer is: the Cheesy Poofs have their CheezDroid vision app from 2016 that teams can use, and if someone would like to help turn this into an Android app as well, I’ve heard lots of good arguments for Android over iPhone ($$$) and would be willing to venture down that path.

**How do I get Frosted Glass?**
Like I said before, it’s in the App Store, but it’s also on my GitHub. You’ll need a Mac with the latest version of Xcode to build it yourself, so if you’d like to customize it for your needs, go ahead!

**What can I do to help?**
Right now, Frosted Glass is still early in development. It has no working NetworkTables interaction and virtually no customization. I’m looking for iOS developers (specifically Swift developers) who know what they’re doing and would like to contribute to an open-source FRC project to help me continue developing. I expect to have a finalized version before 2019, and a working pipeline within 24 hours of kickoff, so teams can start practicing almost immediately.

I mean, I just recently sold my iPhone 7 for $220…and that was the version with extra storage. I see several 6/6S/7 models in that sub-$200 range on Facebook Marketplace. Which (and this is absolutely zero knock on the Limelight crew) is a lot cheaper than a Limelight, at the expense of having to argue the corners of the R11 blue box and R13. (The iPhone was not retrieved from a previous robot, so the undepreciated, and illegal, price doesn’t apply. R13 says “fair market value”, and the fair market value of a working iPhone 6S/7 is well under $300 these days, with plenty of eBay sold listings to back that up.)

I haven’t had a chance to download the app, but if it’s that simple to use I’d sure as heck be a player on that.

(in the 2018 rules anyway, future ones may vary, blah blah blah.)

Edit to add: iPhone 7, Apple Certified Refurbished, $469. A hefty CAWst, but something you can point to definitively.

Or, if an A8 chip and the older camera are sufficient, a brand new iPod Touch is $199.

What’s the methodology for getting the phone to communicate with the roboRIO? Wifi? Or is there a recommended Ethernet adapter?

During build season you can use Wi-Fi, but I admit I haven’t tested Ethernet capabilities. I found this set of adapters that looks like it’ll let you connect to Ethernet over Lightning, as well as charge your phone so it doesn’t die.

I’ve thought about having an iPhone on the robot, secured in every possible way, FaceTiming an iPad. But wouldn’t that be illegal?

I’m pretty sure FaceTime requires an internet connection to work, which you wouldn’t have on the field, even if you were to wire the iPad into the robot’s access point to make it legal.

That said, there’s nothing precluding using some P2P method via an iPhone with an Ethernet dongle to send live video through the robot’s radio back to the driver station.

Sadly, I haven’t figured this part out yet. I haven’t tried too hard, but I couldn’t get cscore to compile. I doubt it’ll work with iOS anyway, and I don’t think streaming is really a necessity for Frosted Glass. Since streaming is not very CPU intensive, you could just set up a CameraServer on the RIO with a LifeCam and be good.
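For reference, spinning up that kind of RIO-side stream is only a few lines. This is a minimal sketch using the robotpy-cscore bindings as an illustration (Java teams would do the equivalent with WPILib’s CameraServer class); it assumes a robot environment with a USB camera attached, so it won’t run on a desktop.

```python
# Minimal sketch: publish a USB camera (e.g. a Microsoft LifeCam) from the
# roboRIO as an MJPEG stream the dashboard can view. Assumes robotpy-cscore
# is installed on the robot; this is a config fragment, not runnable off-robot.
from cscore import CameraServer

def start_camera_stream():
    cs = CameraServer.getInstance()
    camera = cs.startAutomaticCapture()  # grabs the first USB camera found
    camera.setResolution(320, 240)
    camera.setFPS(15)
```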

EDIT: But I think it would be helpful to do a Limelight sort of thing, where you can edit the controls directly from a webpage rather than having to mess with a phone. Though maybe they did that because there isn’t a screen on their product.

While it’s true streaming isn’t very CPU intensive on the RIO, I have found the default compression it uses to be pretty inefficient. We used a Limelight this year only for streaming video feeds since it was FAR easier on bandwidth and allowed higher framerates than going through the RIO at the same resolution, especially for multiple cameras (and also because our programmers couldn’t figure out vision targeting).

What fps and res did you end up using?

Peertalk might also merit examination, since it allows communication over USB and has been deployed in the App Store successfully.

Using the Limelight, we ended up with two cameras, one at 320p 90fps and the other also at 320p but only 30fps (the second camera’s framerate is limited by the Limelight itself). The framerates we actually got were pretty consistent on both cameras during actual use, too; there were very few lost frames or framerate drops. It actually used so little bandwidth that we were able to open two instances of the viewer to display on two screens at once without any issues.

When we tried to run a single USB camera through the RIO using the default settings we had to set the framerate down to like 15fps at 320p to avoid control-input lag.
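The back-of-the-envelope math shows why the compression ratio dominates here. The sketch below estimates MJPEG bandwidth as raw frame size divided by an assumed JPEG compression ratio; the 10:1 and 30:1 ratios are illustrative ballparks (not measured Limelight or RIO figures), and ~7 Mbps is the FRC field bandwidth cap from that era.

```python
def mjpeg_mbps(width: int, height: int, fps: float, compression_ratio: float) -> float:
    """Rough MJPEG bandwidth: raw 24-bit frames divided by a JPEG compression ratio."""
    raw_bytes_per_frame = width * height * 3        # 24-bit color
    bytes_per_second = raw_bytes_per_frame * fps / compression_ratio
    return bytes_per_second * 8 / 1_000_000         # megabits per second

# 320x240 at 90 fps with aggressive ~30:1 compression stays under the ~7 Mbps cap:
print(round(mjpeg_mbps(320, 240, 90, 30), 2))   # 5.53
# The same stream with mild ~10:1 compression blows well past the cap:
print(round(mjpeg_mbps(320, 240, 90, 10), 2))   # 16.59
```

That factor-of-three swing in ratio is the difference between two smooth 90fps feeds and one laggy 15fps feed at the same resolution.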

Honestly, my only gripe with the Limelight is that it doesn’t allow more cameras to be connected to it. The compression is efficient enough that it could probably support 3-4 cameras within the available bandwidth (though I’m not sure if the limitation is related to the hardware processing). It would also be nice if it could alternate between video sources for the purpose of targeting. We used the Limelight camera when the arm on our robot was down to intake cubes this year, but due to our arm’s movement we had to use a second camera, connected via USB to the Limelight, to see our arm when it was in the “up” position. Had we utilized vision processing, we would have been able to target the cubes using the Limelight, but not the goals using the USB camera connected to it.

“You wouldn’t have internet connection on the field”
Not even your own data service from your phone plan would work? I’m not sure I’m following.
Another thing I’d like to know: in the 2017 game, could the human players on the airship have earpieces in to communicate with the drive team(s) about how many more gears were needed and such? And at the portals where the robots get gears across the field: “move a bit left/right, the gear is in! go, go, go!” etc.?
For 2018: could human players at the portals down on the opponent alliance’s side tell the drive team(s) whether they were planning on getting a cube from them (so as not to accidentally push it out), how many cubes they had left, etc.?

If using cell phones with/without data was illegal, how did most teams communicate with their human players?

Hand signals.

Wireless communication of any kind (except via the FRC Radio on the robot) to or from the robot or components on the robot, or even to other members of the drive team, is expressly prohibited by the rules.

Consequently, devices with cellular data service would not be permitted on the robot (I’ve even known FTAs to suggest that drive teams avoid bringing their personal cell phones to the field, to avoid potential interference).

The only permissible method of wireless communication is via the Robot Radio (wireless AP) through the Field Management System. This is done both for safety (since the FMS has the ability to disable a robot, and other communication methods might allow a team to bypass this) and for security (to prevent people from interfering with robot connections).

This team is using a phone for their shooter.

In that particular case, I believe they are using the phone as a camera system, most likely over a wired connection (probably USB) to the roboRIO or a coprocessor. If they were using a wireless connection, it would be illegal. 2017 R68 (which applies to the robot in question) specifically bans any wireless communication that is not required by another rule. That’s also true of 2018 R69.

I highly doubt that they are using non-approved methods to send the images back to the driver’s station.

This has been fixed for 2019 in cscore, although recompression will of course take more CPU than taking the MJPEG stream directly from the camera. Previously if the camera was MJPEG, cscore would use the image directly from the camera (to minimize CPU) rather than decompress/recompress, and if the camera was not MJPEG, cscore would always use 80% compression. Functions have been added to the MjpegServer class to allow the user to force recompression in the first case or choose a different compression ratio in the second case. See
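As an illustration of the new knobs, here’s a hedged sketch using the robotpy-cscore bindings. The method names `setCompression` and `setDefaultCompression` are from the 2019 cscore API described above, but treat the exact signatures as assumptions and check the cscore docs; this is a config fragment that needs a camera attached, so it isn’t runnable standalone.

```python
# Sketch: configuring MJPEG recompression in 2019 cscore (robotpy-cscore bindings).
# Assumes a USB camera on device 0; config fragment, not runnable off-robot.
from cscore import MjpegServer, UsbCamera

camera = UsbCamera("usbcam", 0)
server = MjpegServer("serve_usbcam", 1181)
server.setSource(camera)

# Force recompression even when the camera already outputs MJPEG
# (costs CPU, but lets you pick the quality):
server.setCompression(30)

# Quality used when the source is *not* MJPEG (previously fixed at 80):
server.setDefaultCompression(30)
```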