Has anyone successfully used RoboRealm?

Hi,

I was poking around and saw that teams receive a free license for RoboRealm. I am really interested in camera tracking, so this excited me until I searched this forum and found people saying that it was way too complicated for what it was worth. I was just wondering if any team was able to get it running and make the software useful? If so, how did you do it?

Thanks a bunch in advance!

2481 used it in 2014 for hot goal detection. Our experience was anything but complicated. RoboRealm was very easy to learn and use. Once the students learned how to use it, the hot goal detection algorithm took them about 15 minutes to create.

We found everything we needed on the RoboRealm website. Specifically, this page.

RoboRealm provided us with robust hot goal detection that contributed to playing on Einstein. From the time we started using it at our second regional to the end of champs, I believe we had only one match where hot goal detection did not work properly.

We’re going to experiment with it this year. I played around with it a little bit, and it wasn’t too complicated at all. Honestly, the hardest part was getting NetworkTables to work properly :)

We’re planning on using an Intel NUC with Windows (probably Windows 10) onboard the robot, with RoboRealm running on it. They’re light (about 1 pound) and small (4"x4").

We tried to use it in 2014 for hot goal detection; it worked great in the lab.

We had horrible trouble with it at our first competition; there seemed to be issues with getting the video stream from the robot to RoboRealm on the driver station. We tweaked everything we could find to try to make it work (including the usual tweaks to frame rate and image size), but it was just not reliable for us.

We walked away with what we learned and implemented the same image processing in Java on the roboRIO.
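For anyone curious, here is a rough sketch of what that kind of on-roboRIO pipeline can look like with the OpenCV bindings that ship with current WPILib. Our 2014 code used different libraries, and the camera setup and HSV thresholds below are illustrative placeholders, not our values:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class SimpleVision {
    // Assumes a USB camera was started elsewhere with
    // CameraServer.startAutomaticCapture().
    private final CvSink sink = CameraServer.getVideo();
    private final Mat frame = new Mat();
    private final Mat hsv = new Mat();
    private final Mat mask = new Mat();

    /** Returns how many bright-green blobs are visible in the latest frame. */
    public int countBlobs() {
        if (sink.grabFrame(frame) == 0) {
            return 0; // grab timed out; no new frame
        }
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        // Placeholder HSV range for retroreflective tape lit by green LEDs
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours.size();
    }
}
```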

So I am still a little confused about how this works. My understanding is that a camera sends video to your driver station or an onboard robot computer. Then RoboRealm finds an object? And then the computer sends back what? The coordinates of the object?

You decide what the computer sends back through the NetworkTables protocol. Coordinates, whether an object was found, the color of the object, etc. are some examples.
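On the robot side, reading those values is just a few lines of Java. A minimal sketch, assuming the pipeline publishes into a table named "RoboRealm" — the table and key names are whatever you configure in RoboRealm's NetworkTables module, so treat these as placeholders:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionData {
    // Table/key names must match what RoboRealm is configured to publish.
    private final NetworkTable vision =
            NetworkTableInstance.getDefault().getTable("RoboRealm");

    public boolean targetFound() {
        return vision.getEntry("FOUND").getBoolean(false);
    }

    public double targetX() {
        // e.g. the blob's center-of-gravity X pixel coordinate
        return vision.getEntry("COG_X").getDouble(0.0);
    }
}
```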

I used it several seasons ago. My experience was that in order for it to work well, you needed a high-quality feed from the robot camera to the DS, which then led to my team hitting the driver station bandwidth cap at competitions.
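If you do go the DS-streaming route, the usual mitigation is to shrink the stream itself. A quick sketch with current WPILib; the numbers are just illustrative starting points:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;

public class CameraSetup {
    public static void init() {
        // Lower resolution and frame rate keep the stream under the
        // field bandwidth cap, at the cost of image quality.
        UsbCamera camera = CameraServer.startAutomaticCapture();
        camera.setResolution(320, 240);
        camera.setFPS(15);
    }
}
```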

The most efficient/effective way to use RoboRealm would be to have an on-robot computer do the image processing.

If you look at the example that I posted, you will see that we added a Python block to implement some simple logic in RoboRealm. We positioned our robot so that it could only see one goal. After applying some basic filtering to isolate each LED around the goal as a blob, we simply count the blobs. If the blob count is greater than 50, we send a single boolean over NetworkTables to the robot indicating that the goal is hot. If we see fewer than 50 blobs, we send false.
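On the robot end, consuming that boolean is simple. A sketch of the kind of autonomous branch this enables — the table/key names and helper methods here are hypothetical, not our actual code:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class HotGoalAuto {
    /** Runs once at the start of autonomous. */
    public void autonomousInit() {
        // Key must match what the RoboRealm Python block publishes.
        boolean goalIsHot = NetworkTableInstance.getDefault()
                .getTable("RoboRealm")
                .getEntry("HOT_GOAL")
                .getBoolean(false);

        if (goalIsHot) {
            shootImmediately();      // hypothetical helper: fire right away
        } else {
            delayShotUntilHot(5.0);  // hypothetical helper: hot goals swapped after 5 s in 2014
        }
    }

    private void shootImmediately() { /* command the catapult */ }

    private void delayShotUntilHot(double seconds) { /* wait out the swap, then fire */ }
}
```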

Can you find the distance to objects if you were to use a Kinect?

I used the Xbox 360 Kinect and was able to get the distance to objects more reliably and accurately than I could with regular cameras.

Did you use an onboard computer for the image processing?

In 2014, I used RoboRealm with the front camera of the driver station laptop and essentially made a copy of CheesyVision.

No, my initial uses of RoboRealm that I referenced in the post above were with the Kinect during the 2012 season. I put one on the robot instead of using one for Hybrid mode. The method I used was similar to what everyone else has described: I sent a video stream back to the DS over WiFi, did the image processing there, and returned the needed variables to the robot via NetworkTables. Would I do it again? No. There are much better options for standalone depth cameras today, and it was more of a “gee, this looks fun to play with” project anyway.

610 used RoboRealm with a Kinect during the 2013 season. In summary:

  1. RoboRealm ran on a netbook; the cRIO/roboRIO talked to RoboRealm using HTTP GET (see the sketch after this list). RoboRealm scripts handled target detection (aiming and distance).

  2. Depth sensing worked well indoors, and the laser worked well with reflective tape for auto-aiming.

  3. But we had trouble with infrared overexposure when the robot was running in environments with sunlight.
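For what it’s worth, the HTTP GET side is plain Java. A minimal sketch of the idea — the host, port, and query string below are illustrative, since the actual request format depends on how RoboRealm’s API server is configured (check RoboRealm’s API docs for the real format):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RoboRealmClient {
    /** Fetches one named variable from the RoboRealm machine as a string. */
    public static String getVariable(String name) throws Exception {
        // Illustrative address for a netbook on a 10.6.10.x robot network.
        URL url = new URL("http://10.6.10.12:8080/?var=" + name);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(200); // short timeouts so control loops never block long
        conn.setReadTimeout(200);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine(); // e.g. a single numeric value per request
        } finally {
            conn.disconnect();
        }
    }
}
```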

Eventually, we removed the Kinect+RoboRealm implementation.

I think I’ll probably give it a go after this season. Sounds rewarding if you get it to work. BTW, I really like your FRC music video :D

In 2012 we used it for detecting the frisbee goals in both autonomous and teleop. We ran RoboRealm on a single-board computer with an SSD running Win7 on the robot, then sent coordinates to the cRIO code over NetworkTables to point the turret/shooter. It worked pretty well. They provided some recognition schemes at kickoff that got us most of the way to reliably recognizing the goals.
To tune the software and see the logs and feedback files, we would just remote desktop into the SBC.
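Here is a sketch of what the "point the turret" step can look like when the coordinate arrives over NetworkTables, written against today’s API. The table/key names, image width, and gain are all illustrative, not our actual values:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class TurretAim {
    private static final double IMAGE_CENTER_X = 160.0; // for a 320px-wide image
    private static final double KP = 0.005;             // proportional gain; tune on the robot

    /** Returns a motor output in [-1, 1] that turns the turret toward the target. */
    public double output() {
        double targetX = NetworkTableInstance.getDefault()
                .getTable("RoboRealm")
                .getEntry("COG_X")            // X pixel of the goal's center
                .getDouble(IMAGE_CENTER_X);   // default = centered, no correction
        double errorPixels = IMAGE_CENTER_X - targetX;
        return Math.max(-1.0, Math.min(1.0, KP * errorPixels));
    }
}
```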