
Wednesday, April 14, 2010

Illumination-Aware Imaging

Conventional imaging systems incorporate a light source for illuminating an object and a separate sensing device for recording the light rays scattered by the object. Using lenses and software, the recorded information can be turned into a proper image. Human vision is a familiar example: two eyes (and a powerful brain that processes visual information) give human observers a sense of depth perception. But how does a video camera attached to a robot "see" in three dimensions?

Carnegie Mellon scientist Srinivasa Narasimhan believes that efficiently producing 3-D images for computer vision can best be addressed by thinking of a light source and sensor device as being equivalent. That is, they are dual parts of a single vision process.

For example, when a light illuminates a complicated subject, such as a fully branching tree, many views of the object must be captured. This requires moving the camera, which makes it hard to find corresponding locations across the different views.

In Narasimhan's approach, the camera and light constitute a single system. Because the light source can be moved without changing the corresponding points in the images, complex reconstruction problems can be solved easily for the first time. Another approach is to use a pixelated mask placed in front of the light source or the camera to selectively remove certain light rays from the imaging process. With proper software, the resulting series of images can more efficiently render detailed 3-D vision information, especially when the object itself is moving.
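The article gives no technical details, but the light/camera duality it describes is commonly expressed with a light-transport matrix, as in dual photography: a camera image is the transport matrix applied to the illumination pattern, and, by reciprocity, the same matrix transposed gives the scene as it would appear from the light source's point of view. The short Python sketch below only illustrates that general idea under made-up dimensions; it is an assumption for illustration, not Narasimhan's actual method.

import numpy as np

# Hypothetical sizes: 4 controllable light-source elements, 6 camera pixels.
n_light, n_cam = 4, 6
rng = np.random.default_rng(0)

# T[i, j] = fraction of light leaving source element j that reaches camera pixel i.
# In practice T would be measured; here it is random for illustration only.
T = rng.random((n_cam, n_light))

# Forward ("primal") imaging: the camera image is the transport matrix
# applied to the illumination pattern.
pattern = np.array([1.0, 0.0, 0.5, 0.0])
camera_image = T @ pattern

# Dual imaging: the transposed matrix maps a virtual pattern at the camera
# to the scene as seen from the light source's viewpoint.
virtual_pattern = np.ones(n_cam)
dual_image = T.T @ virtual_pattern

print(camera_image)
print(dual_image)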

Narasimhan calls this process either illumination-aware imaging or imaging-aware illumination. He predicts it will be valuable for producing better robotic vision and for rendering 3-D shapes in computer graphics.
