Re: Camera Pose Estimation help
Figuring out the angles between the camera coordinate frame and a given pixel in the image is something that is very useful and definitely worth learning. It does require some knowledge of matrix algebra and is typically a university-level topic, but if you have the will to learn, there are some good resources online. You will want to read up on the following terms:

- pinhole camera model
- camera calibration
- homogeneous coordinates
- camera resectioning
- intrinsic matrix
- extrinsic matrix

Some good references are the OpenCV camera calibration page and the Wikipedia page for Camera Resectioning.

Basically, you need to find a function that converts an (x, y) pixel in the camera frame into a ray oriented in the direction of (X, Y, Z) in the world frame. A common way to figure out this transform is to perform calibration against a series of images of a pattern of known dimensions, like a checkerboard with precisely measured squares. You can then use these measurements to compute the "optimal" (least-squares) intrinsic matrix (modeling things like the field of view of the camera) and extrinsic matrix (modeling things like the pose of the camera relative to the world). There is a calibration sketch below, and a second sketch showing the pixel-to-ray step.

Distance is trickier. Each pixel in a standard 2D camera frame actually represents a ray between the camera's optical center and objects in the world. There is no explicit way to measure distance, but there are tricks you can use (stereo cameras, assuming things about the size or position of the target, etc.) depending on what the actual problem is that you are trying to solve; the last sketch below shows the known-size trick. One reason the Microsoft Kinect and other depth cameras are so cool is that you get a measured distance value for each pixel!
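Here is a minimal calibration sketch using OpenCV's standard checkerboard workflow (the same one the OpenCV calibration page documents). The file glob `calib_*.png`, the 9x6 inner-corner grid, and the 25 mm square size are placeholders; substitute your own measurements.

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)    # inner corners per row, per column (assumed)
square_size_mm = 25.0    # measured side length of one square (assumed)

# World coordinates of the board corners; the board is flat, so Z = 0.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size_mm

obj_points, img_points = [], []
image_size = None
for fname in glob.glob("calib_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Least-squares fit of the intrinsic matrix K, the lens distortion
# coefficients, and one extrinsic (rvec, tvec) pair per calibration image.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)
```

A dozen or so images of the board at different angles and distances usually gives a stable fit; the RMS reprojection error tells you how well the model explains your measurements.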
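Once you have K and the distortion coefficients, back-projecting a pixel into a ray is a small computation. This sketch assumes `K` and `dist` come from the calibration above; `cv2.undistortPoints` removes lens distortion and applies the inverse of K, yielding normalized coordinates on the z = 1 plane.

```python
import numpy as np
import cv2

def pixel_to_ray(u, v, K, dist):
    """Return a unit vector in the camera frame pointing through pixel (u, v)."""
    pts = np.array([[[float(u), float(v)]]], dtype=np.float32)
    x, y = cv2.undistortPoints(pts, K, dist)[0, 0]
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)

# The angles you asked about fall straight out of the normalized coordinates:
# yaw = atan(x), pitch = atan(y) relative to the optical axis.
#
# To express the ray in the world frame, rotate it by the extrinsic rotation:
# for R, t mapping world points into the camera frame (X_cam = R @ X_world + t),
# the world-frame direction is ray_world = R.T @ ray_camera.
```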
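And one sketch of the known-size distance trick mentioned above: if you can assume the target's real width, the pinhole model gives distance by similar triangles. The target width, its measured pixel width, and the focal length `fx` (the (0, 0) entry of K, in pixels) are all values you would supply from your own setup.

```python
def distance_from_width(target_width_m, pixel_width, fx):
    """Distance along the optical axis, assuming the target faces the camera
    squarely: pixel_width / fx = target_width / distance."""
    return target_width_m * fx / pixel_width

# Example: a 0.50 m wide target spanning 120 px with fx = 600 px is ~2.5 m away.
print(distance_from_width(0.50, 120.0, 600.0))  # -> 2.5
```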