There are many sensors that can inform you about the current position of your robot: encoders, gyros/IMUs, vision systems, ultrasonic sensors, LIDARs, contact sensors, etc.
Moreover, there are many, many approaches for taking various combinations of these and producing an overall estimate of robot pose.
One approach that tends to work well in FRC is to combine encoders with a gyro/IMU and use a simple model like this for estimating robot velocity and heading:
Code:
robot_velocity = (left_wheel_velocity + right_wheel_velocity) / 2; // average of the two drivetrain sides
robot_heading = getHeadingFromGyroOrImu(); // in radians (convert if your gyro reports degrees)
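For completeness, here is one way you might turn raw encoder readings into the left/right wheel velocities used above. This is only a sketch: the tick count, wheel diameter, and the getLeftEncoderTicks() accessor are placeholders for whatever your hardware actually provides, and dt is the time elapsed since the previous sample (more on dt below).
Code:
// Sketch: convert raw encoder ticks into wheel linear velocity (meters/sec).
// TICKS_PER_REV, WHEEL_DIAMETER, and getLeftEncoderTicks() are placeholders;
// substitute your own hardware constants and accessors.
static final double TICKS_PER_REV = 360.0;            // encoder resolution (assumption)
static final double WHEEL_DIAMETER = 0.1524;          // 6 in wheel, in meters (assumption)
static final double METERS_PER_TICK = Math.PI * WHEEL_DIAMETER / TICKS_PER_REV;

double prev_left_ticks = 0.0;

double getLeftWheelVelocity(double dt) {
    double ticks = getLeftEncoderTicks();              // hypothetical accessor
    double velocity = (ticks - prev_left_ticks) * METERS_PER_TICK / dt;
    prev_left_ticks = ticks;
    return velocity;                                   // meters per second
}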
Once you have the velocity and heading, you can integrate the velocity over time to obtain position (taking into account that robot velocity is in the direction of the robot heading)...
Code:
robot_position_x += cos(robot_heading) * robot_velocity * dt;
robot_position_y += sin(robot_heading) * robot_velocity * dt;
This assumes your robot moves at a constant velocity in a constant direction over the period of time (dt), which we know is not strictly true, but if dt is small enough this is a good approximation (it is essentially a Riemann sum).
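To make the role of dt concrete, here is a rough sketch of how this might look when run from a fast, regular loop (e.g. every ~20 ms). The clock and sensor accessors (getCurrentTimeSeconds(), getLeftWheelVelocity(), getRightWheelVelocity(), getHeadingFromGyroOrImu()) are placeholders rather than any particular library's API:
Code:
// Sketch of a periodic odometry update; call it as regularly as possible
// so that dt stays small. All accessors are hypothetical placeholders.
double robot_position_x = 0.0;
double robot_position_y = 0.0;
double last_time = getCurrentTimeSeconds();

void updateOdometry() {
    double now = getCurrentTimeSeconds();
    double dt = now - last_time;                       // elapsed time since the last update
    last_time = now;

    // Average the wheel speeds (e.g. using the helper sketched earlier).
    double robot_velocity = (getLeftWheelVelocity(dt) + getRightWheelVelocity(dt)) / 2.0;
    double robot_heading = getHeadingFromGyroOrImu();  // radians

    robot_position_x += Math.cos(robot_heading) * robot_velocity * dt;
    robot_position_y += Math.sin(robot_heading) * robot_velocity * dt;
}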
If you are interested, there are many more sophisticated ways to do this. They exploit the fact that there is overlap in what the sensors can sense (for example, an IMU can tell you something about your linear velocity, and encoders can tell you something about your angular velocity) and "fuse" these measurements intelligently by considering the uncertainty of each source. Look up Kalman Filters, Extended Kalman Filters, and Particle Filters. But in general, the formulation above is good enough in FRC.
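To give a flavor of what "fusing by uncertainty" means, here is a toy sketch that blends the gyro's angular rate with the angular rate implied by the encoders, weighting each by an assumed variance (inverse-variance weighting). The variance numbers and the trackWidth parameter are made up for illustration; a real Kalman filter does this, and much more, in a principled way.
Code:
// Toy illustration of inverse-variance weighting of two angular-rate estimates.
// The variances below are made-up constants; a real filter would estimate them.
static final double GYRO_RATE_VARIANCE = 0.01;     // (rad/s)^2, assumption
static final double ENCODER_RATE_VARIANCE = 0.05;  // (rad/s)^2, assumption

double fuseAngularRate(double gyroRate, double leftVel, double rightVel, double trackWidth) {
    // A differential drive's wheel speed difference implies an angular rate.
    double encoderRate = (rightVel - leftVel) / trackWidth;

    // Weight each estimate by the inverse of its variance (more certain = more weight).
    double wGyro = 1.0 / GYRO_RATE_VARIANCE;
    double wEncoder = 1.0 / ENCODER_RATE_VARIANCE;
    return (wGyro * gyroRate + wEncoder * encoderRate) / (wGyro + wEncoder);
}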