The Romi is a great platform for introducing students to WPILib. The code that runs on the Romi is essentially identical to what you would run on a full-size robot, with a few architectural differences. The major difference is that the robot code runs in simulation mode on your driver station computer instead of on a roboRIO, which has some subtle implications when you are working with it.
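For example, here is a minimal drivetrain subsystem along the lines of the WPILib Romi template (PWM 0/1 for the motors and DIO 4-7 for the encoders are the Romi's standard channel assignments). The same class would build and run unchanged for a roboRIO with motor controllers and encoders wired to those channels:

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;
import edu.wpi.first.wpilibj.motorcontrol.Spark;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class RomiDrivetrain extends SubsystemBase {
  // The Romi's onboard motor drivers appear as ordinary PWM channels 0 and 1
  private final Spark leftMotor = new Spark(0);
  private final Spark rightMotor = new Spark(1);

  // Wheel encoders are wired to DIO 4/5 (left) and 6/7 (right)
  private final Encoder leftEncoder = new Encoder(4, 5);
  private final Encoder rightEncoder = new Encoder(6, 7);

  private final DifferentialDrive drive = new DifferentialDrive(leftMotor, rightMotor);

  public RomiDrivetrain() {
    // Invert the right side so positive inputs drive both wheels forward
    rightMotor.setInverted(true);
  }

  public void arcadeDrive(double xSpeed, double zRotation) {
    drive.arcadeDrive(xSpeed, zRotation);
  }
}
```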
Yes, the Romi has a decent IMU and good wheel encoders, so it can be used for basic navigation programming. The IMU can be a bit finicky and you may have to account for some drift, but we have used it to learn about trajectory following with good results.
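As a rough sketch of how the gyro and encoders feed into odometry for that kind of trajectory work, something along these lines is what we use. The 1440 counts/rev and 70 mm wheel figures are the Romi's nominal specs, and depending on your setup you may need to flip the sign of the gyro angle to match WPILib's conventions:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.DifferentialDriveOdometry;
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.romi.RomiGyro;

public class RomiOdometry {
  // Nominal Romi numbers: 1440 encoder counts per wheel revolution, 70 mm wheels
  private static final double COUNTS_PER_REV = 1440.0;
  private static final double WHEEL_DIAMETER_METERS = 0.07;

  private final RomiGyro gyro = new RomiGyro();
  private final Encoder leftEncoder = new Encoder(4, 5);
  private final Encoder rightEncoder = new Encoder(6, 7);
  private final DifferentialDriveOdometry odometry;

  public RomiOdometry() {
    double distancePerPulse = Math.PI * WHEEL_DIAMETER_METERS / COUNTS_PER_REV;
    leftEncoder.setDistancePerPulse(distancePerPulse);
    rightEncoder.setDistancePerPulse(distancePerPulse);
    odometry = new DifferentialDriveOdometry(new Rotation2d(), 0.0, 0.0);
  }

  // Call from a subsystem's periodic(); returns the current pose estimate
  public Pose2d update() {
    return odometry.update(
        Rotation2d.fromDegrees(gyro.getAngleZ()),
        leftEncoder.getDistance(),
        rightEncoder.getDistance());
  }
}
```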
The arm is a cool add-on. Just be sure to use plenty of Loctite! The Romi is pretty cool without it, though, and you may have an easier experience starting with the base kit before you take on the added complexity of powering the servos, which will want their own voltage regulator.
Actually, the Raspberry Pi image for the Romi (called WPILibPi) started out as a project called FRCVision, which has capabilities similar to PhotonVision's. Since the bulk of the vision processing computation is performed on the Raspberry Pi in either scenario, and only a small amount of output data is usually sent over NetworkTables to your robot code, this is no less performant than it would be on your full-size robot.
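On the robot-code side, reading that output is just a NetworkTables lookup. The table and key names below are placeholders; use whatever your pipeline on the Pi actually publishes:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionReader {
  // "Vision", "hasTarget", and "targetYaw" are placeholder names chosen for
  // this example; match them to the keys your pipeline publishes
  private final NetworkTable table =
      NetworkTableInstance.getDefault().getTable("Vision");

  public boolean hasTarget() {
    return table.getEntry("hasTarget").getBoolean(false);
  }

  public double targetYawDegrees() {
    return table.getEntry("targetYaw").getDouble(0.0);
  }
}
```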
We have coded OpenCV image processing routines on the Romi/Pi. These run just as efficiently as if you were using the Pi as an image co-processor on a full-size robot. In fact, the Romi is a good platform for introducing students to using WPILibPi as a vision co-processor.
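The shape of such a pipeline is the same as on a full-size robot's co-processor. Here is a stripped-down sketch; the actual WPILibPi Java template reads its camera configuration from /boot/frc.json, so treat this as the outline of the processing loop rather than a drop-in program:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.CvSource;
import edu.wpi.first.cscore.UsbCamera;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public final class SimplePipeline {
  public static void main(String[] args) {
    UsbCamera camera = CameraServer.startAutomaticCapture();
    camera.setResolution(320, 240);

    CvSink sink = CameraServer.getVideo();                       // frames in
    CvSource output = CameraServer.putVideo("Gray", 320, 240);   // processed stream out

    Mat frame = new Mat();
    Mat gray = new Mat();
    while (!Thread.currentThread().isInterrupted()) {
      if (sink.grabFrame(frame) == 0) {
        continue;  // timed out or errored; skip this frame
      }
      // Any OpenCV processing goes here; grayscale conversion as a stand-in
      Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
      output.putFrame(gray);
    }
  }
}
```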
We have had great experiences putting a Raspberry Pi camera on the Romi and getting it to drive toward vision targets. Although I haven't put this together end to end yet, the Romi can also be used for tracking AprilTag vision targets.
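If you want to experiment with that, WPILib ships an AprilTagDetector class you can run over camera frames. This is only a sketch; the tag family you add depends on the tags you print (recent seasons use tag36h11):

```java
import edu.wpi.first.apriltag.AprilTagDetection;
import edu.wpi.first.apriltag.AprilTagDetector;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class TagFinder {
  private final AprilTagDetector detector = new AprilTagDetector();
  private final Mat gray = new Mat();

  public TagFinder() {
    // Use whichever family matches your printed tags
    detector.addFamily("tag36h11");
  }

  // Returns the horizontal pixel offset of the first detected tag from the
  // image center, or 0.0 if no tag is visible; enough to steer toward a tag
  public double tagOffsetPixels(Mat colorFrame) {
    Imgproc.cvtColor(colorFrame, gray, Imgproc.COLOR_BGR2GRAY);
    AprilTagDetection[] detections = detector.detect(gray);
    if (detections.length == 0) {
      return 0.0;
    }
    return detections[0].getCenterX() - colorFrame.width() / 2.0;
  }
}
```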
The Romi has five auxiliary I/O ports that can do basic analog, digital, or PWM I/O, and it can talk to I2C sensors. It doesn't have a CAN bus, although you could probably add an adapter if you really wanted to. The main limitation here would probably be the small size of the robot, which restricts how much hardware you can mount. Power delivery can also be a challenge if you add a lot of hardware, since the Romi runs on six AA batteries.
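Using the auxiliary ports from robot code looks just like regular roboRIO I/O. Which WPILib channel each EXT pin shows up on depends on how you configure it in the Romi web dashboard, so the channel numbers below are only an assumption for illustration:

```java
import edu.wpi.first.wpilibj.AnalogInput;
import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.Servo;

public class RomiExternalIO {
  // Channel numbers depend entirely on how the EXT pins are configured in the
  // Romi web dashboard; these particular numbers are just an example
  private final DigitalInput bumpSwitch = new DigitalInput(8);
  private final AnalogInput rangeSensor = new AnalogInput(0);
  private final Servo gripper = new Servo(2);

  public boolean bumped() {
    return !bumpSwitch.get();  // assuming an active-low switch
  }

  public double rangeVolts() {
    return rangeSensor.getVoltage();
  }

  public void openGripper() {
    gripper.set(1.0);  // full travel; tune for your mechanism
  }
}
```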
You could use a Limelight with the Romi; just wire its Ethernet into the Pi and provide power. You will need a separate battery for the Limelight, but there is nothing that would prevent you from experimenting with Limelight, PhotonVision, or other vision systems. I have even plugged a Coral TPU accelerator into the Pi and run TensorFlow machine vision models on the Romi. Other teams I know of have installed PhotonVision on top of their WPILibPi image.
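If you go the PhotonVision route, reading a target from robot code is only a few lines with PhotonLib. This sketch assumes you have added the PhotonLib vendor dependency and named the camera "romicam" in the PhotonVision UI (substitute whatever name you actually configured):

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class PhotonExample {
  // "romicam" is whatever camera name you set in the PhotonVision UI
  private final PhotonCamera camera = new PhotonCamera("romicam");

  // Returns the yaw to the best target in degrees, or 0.0 if nothing is in view
  public double targetYaw() {
    PhotonPipelineResult result = camera.getLatestResult();
    if (!result.hasTargets()) {
      return 0.0;
    }
    PhotonTrackedTarget target = result.getBestTarget();
    return target.getYaw();
  }
}
```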
The Romi kit is about $100 - $150, not including the Raspberry Pi. It’s a great value for what it provides, and we have gotten a lot of mileage out of sending these home with our programmers. They can get a lot more coding experience with their own personal robot than they could ever get in our robotics lab. The Romi was one of the things that sustained our team through the Covid years.