Photometric Invariants for Segmentation and Recognition
In this project, we are interested in deriving quantities that can be computed from one or more images and are invariant to important scene parameters such as lighting, geometry, and material. We have proposed a photometric invariant, called the reflectance ratio, that is simple to compute from a single image and can be used for image segmentation and object recognition. Neighboring points on a smoothly curved surface have similar surface orientations and illumination conditions. Hence, their brightness values can be used to compute the ratio of their reflectance coefficients, a quantity that is invariant to local surface geometry and illumination. Reflectance ratios at all image points can be computed in a single raster scan of a black-and-white or color image, and the reflectance ratios of regions in the scene can be computed with a second scan. The region reflectance ratio represents a physical property of a region that is invariant to illumination conditions. This ratio invariant is used to recognize objects from a single brightness image of a scene.
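As a minimal sketch of this two-pass computation (assuming a grayscale image, vertically adjacent pixel pairs, and the normalized-difference form of the ratio; the function names, the epsilon guard, and the precomputed segmentation labels are illustrative assumptions, not details taken from the project description):

```python
import numpy as np

def reflectance_ratio_map(image, eps=1e-6):
    """Per-pixel reflectance ratio between each pixel and its lower neighbor.

    If adjacent points share (approximately) the same shading factor s, so
    I1 = rho1 * s and I2 = rho2 * s, then
        (I1 - I2) / (I1 + I2) = (rho1 - rho2) / (rho1 + rho2),
    which depends only on the reflectance coefficients, not on local surface
    geometry or illumination.
    """
    I = image.astype(np.float64)
    I1, I2 = I[:-1, :], I[1:, :]        # vertically adjacent pixel pairs
    return (I1 - I2) / (I1 + I2 + eps)  # eps guards against near-zero sums


def region_reflectance_ratios(ratio_map, labels):
    """Second pass: average the per-pixel ratios inside each labeled region.

    `labels` is a hypothetical integer segmentation map with the same height
    and width as the original image.
    """
    labels = labels[:-1, :]  # align with the (H-1, W) ratio map
    return {int(k): float(ratio_map[labels == k].mean())
            for k in np.unique(labels)}
```

The second function corresponds to the second scan: it aggregates the per-pixel ratios over a precomputed segmentation to obtain one ratio value per region.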

We have also derived a new class of photometric invariants that can be used for a variety of vision tasks, including lighting-invariant material segmentation, change detection, and tracking, as well as material-invariant shape recognition. The key observation is that, for a large class of real-world materials, the BRDF can be decomposed into material-related terms and shape- and lighting-related terms. Based on this observation, a set of invariants is derived as simple rational functions of the appearance parameters (say, material, or shape and lighting). The invariants in this class differ from one another in the number and type of image measurements they require; most need changes in illumination or object position between image acquisitions. We have demonstrated the power of these invariants through experiments on scenes with complex shapes, materials, textures, shadows, and specularities.
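The invariants in this class handle multi-term BRDF decompositions and generally require several measurements; the toy sketch below illustrates only the simplest, single-term case, assuming that image brightness factors into the product of a material term rho and a shape-and-lighting term G. Under that assumption, a per-pixel ratio across two images taken under different lighting cancels the material term, while a ratio of adjacent pixels within one image cancels the shape-and-lighting term. Both function names are hypothetical and this is not the full derivation.

```python
import numpy as np

def material_invariant(img_a, img_b, eps=1e-6):
    """Toy single-term case, I = rho * G: the per-pixel ratio of two images of
    the same static scene under different lighting equals G_a / G_b, which is
    independent of the material term rho (useful for material-invariant
    comparison and change detection)."""
    return img_a.astype(np.float64) / (img_b.astype(np.float64) + eps)


def lighting_invariant(img, eps=1e-6):
    """Toy single-term case: the ratio of horizontally adjacent pixels cancels
    the shared shape/lighting term G, leaving a material-only quantity that
    can drive lighting-invariant material segmentation (the same idea as the
    reflectance ratio above)."""
    I = img.astype(np.float64)
    return I[:, :-1] / (I[:, 1:] + eps)
```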

Videos

  Varying Illumination:
This video shows the illumination of a scene being varied over a wide range.
  Reflectance Ratio Invariant:
This video shows region reflectance ratios computed from the above video. The ratio invariant value for each region is shown in color (shades from blue to red). Despite the significant illumination variation, the region reflectance ratios (the colors of the regions) remain constant (invariant).
  Cluttered Dynamic Scene:
This video shows a highly cluttered scene with an object in motion. The illumination and the geometry of the scene are complex, making it difficult to apply standard feature- and geometry-based recognition techniques to find the objects and their poses.
  Region Reflectance Ratios:
This video shows the regions, their reflectance ratios (color coded) and their centroids (white dots) computed from the above video clip.
  Reflectance Based Recognition:
During recognition, the ratios of triplets of regions are used as indices to generate hypotheses. These hypotheses are verified using the positions and ratios of neighboring regions in the image. This video highlights a region triplet (drawn as a triangle) on the moving object that was successfully verified by the recognition algorithm. The object is thus recognized, and its pose is computed from the positions of the regions that form the triangle. A sketch of the triplet-indexing step follows this list.
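A hedged sketch of how such ratio-triplet indexing could be organized is given below; the quantization of the ratios, the hash-table layout, and the omission of the verification step are all assumptions made for illustration, not details from the project description.

```python
from collections import defaultdict
from itertools import combinations

def build_ratio_index(model_regions, quantize=lambda r: round(r, 2)):
    """Hash every triplet of model region ratios so the ratios themselves act
    as recognition keys.  `model_regions` is a list of (region_id, ratio)."""
    table = defaultdict(list)
    for triplet in combinations(model_regions, 3):
        key = tuple(sorted(quantize(r) for _, r in triplet))
        table[key].append(tuple(rid for rid, _ in triplet))
    return table


def generate_hypotheses(image_regions, table, quantize=lambda r: round(r, 2)):
    """Look up observed region-ratio triplets to propose model/image
    correspondences; each hypothesis would then be verified using the
    positions and ratios of neighboring regions (verification omitted here)."""
    hypotheses = []
    for triplet in combinations(image_regions, 3):
        key = tuple(sorted(quantize(r) for _, r in triplet))
        for model_ids in table.get(key, []):
            image_ids = tuple(rid for rid, _ in triplet)
            hypotheses.append((model_ids, image_ids))
    return hypotheses
```

Quantizing the ratios before hashing trades some discriminative power for robustness to noise in the measured region ratios.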

Databases

Reflectance and Texture Database (CUReT)

Related Projects

Appearance Matching

Shape from Brightness

Oren-Nayar Reflectance Model