Realistic image synthesis requires accurate models for object
geometry, illumination and material properties. Today, these models are
often the limiting factor in realism, so we frequently acquire
them from the real world, using methods commonly referred to as
inverse or image-based modeling and rendering. The first challenge is
developing efficient and robust acquisition methods. However, even once
we have measured geometry, illumination and materials, they remain
difficult to work with, since they often take the form of
unstructured high-dimensional data; a second challenge is therefore
finding compact structured representations. Finally, we seek to develop
efficient Monte Carlo rendering algorithms that can incorporate
measured illumination or reflectance for rendering. A new project
considers the challenge of differing resolutions and develops multiscale
representations of appearance.
Acquisition 
Our goal has been to acquire reflectance under much less structured conditions than previously possible. We have shown how to acquire material properties under complex lighting using our signal-processing framework. More recently, we have developed image-based rendering techniques for spatially-varying reflectance that require many fewer images than previous methods. Most recently, we have used dilution in water to easily acquire the scattering properties of many materials in the single-scattering regime, and a new theory of compressive structured light to efficiently acquire inhomogeneous participating media, including dynamic scenes.
A Signal-Processing Framework for Inverse Rendering: Siggraph 01, pages 117-128
This paper is the most mathematical of the series; it derives the theory for
the general 3D case with arbitrary isotropic BRDFs, and applies the
results to the practical problem of inverse rendering under
complex illumination.

Reflectance Sharing: Image-Based Rendering from a Sparse Set of Images
PAMI Aug 06, pages 1287-1302;
EGSR 05, pages 253-264. We develop the theoretical framework and practical results for image-based rendering of spatially-varying reflectance from a very small number of images. In doing so, we trade off some spatial variation of the reflectance for an increased number of angular samples. Paper: EGSR 05 (PDF) Video (83M) PAMI 06 (PDF) 
Acquiring Scattering Properties of Participating Media by Dilution
SIGGRAPH 06, pages 1003-1012. We present a simple device and technique for robustly estimating the properties of a broad class of participating media that can be either (a) diluted in water, such as juices or beverages, (b) dissolved in water, such as powders and sugar/salt crystals, or (c) suspended in water, such as impurities. Paper: PDF 
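The physical fact the method leans on is that, to first order, a medium's scattering and absorption coefficients scale linearly with the concentration of the dissolved or suspended material. A sketch in our own notation (neglecting water's own small absorption):

```latex
% Coefficients at dilution fraction alpha of the original concentration:
\sigma_s(\alpha) \;=\; \alpha\, \sigma_s, \qquad
\sigma_a(\alpha) \;=\; \alpha\, \sigma_a
% Dilute until single scattering dominates, estimate the parameters in
% that regime, then scale linearly back to the undiluted concentration.
```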

Compressive Structured Light for Recovering Inhomogeneous Participating
Media ECCV 2008. Recovering dynamic inhomogeneous participating media is a significant challenge in vision and graphics. We introduce a new framework of compressive structured light, where patterns are emitted to obtain a line integral of the volume density at each camera pixel. The framework of compressive sensing is then used to recover the density from a sparse set of patterns. Paper: PDF Video (25M) 
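The recovery step is an instance of generic sparse recovery: each camera pixel measures inner products of projected light patterns with an unknown, mostly-empty density vector. The following is a minimal orthogonal matching pursuit (OMP) sketch of that idea, with a random stand-in for the light patterns; it is not the paper's actual solver or pattern design.

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover a sparse x with y ~= A @ x via orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy setup: m pattern measurements of an n-voxel density row that is
# sparse (only a few voxels along the ray are occupied).
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for light patterns
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, 0.5, 2.0]
y = A @ x_true                                  # line-integral measurements
x_hat = omp(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Here 32 measurements recover a 64-entry density row because only three entries are nonzero; the paper's contribution is making this practical with physically realizable patterns and real volumes.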
Structured Representations and Editing 
Once we have acquired reflectance, we need structured representations that are compact, highly accurate, and easily editable. In 2006, we made significant progress on this problem, with four papers at SIGGRAPH. Our first approach, inverse shade trees, uses a new linear hierarchical matrix factorization algorithm to create intuitive, editable decompositions of spatially-varying reflectance (SVBRDFs). A related project explores editing of measured and analytic BRDFs under complex illumination with cast shadows (more recent work extends this to editing a full global illumination solution). We have also made the first comprehensive study of time-varying surface appearance (TSVBRDFs), with a novel nonlinear space-time appearance factorization. Finally, we have developed an efficient compact representation for heterogeneous subsurface scattering (BSSRDFs).
Inverse Shade Trees for Non-Parametric Material Representation and
Editing
SIGGRAPH 06, pages 735-745. We develop an inverse shade tree framework of hierarchical matrix factorizations to provide intuitive, editable representations of high-dimensional measured reflectance datasets of spatially-varying appearance. We introduce a new alternating constrained least squares framework for these decompositions that preserves the key properties of linearity, positivity, sparsity and domain-specific constraints. The SVBRDF is decomposed into 1D curves and 2D maps that are easily edited. Paper: PDF Video (24M) 
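The alternating-solve idea behind such factorizations can be sketched in a few lines. This is a crude projected alternating least squares for a nonnegative low-rank factorization, not the paper's constrained solver, and the data is a synthetic stand-in for an SVBRDF matrix:

```python
import numpy as np

def nonneg_als(V, rank, iters=200, seed=0):
    """Factor V ~= W @ H with W, H >= 0 by alternating least squares,
    clipping after each solve (a stand-in for constrained solves)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Solve for H with W fixed, then project onto the positivity constraint.
        H = np.clip(np.linalg.lstsq(W, V, rcond=None)[0], 0, None)
        # Solve for W with H fixed (via transposes), then project.
        W = np.clip(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, 0, None)
    return W, H

# Toy data: an exactly nonnegative rank-2 matrix, so a rank-2 factorization
# should reconstruct it closely.
rng = np.random.default_rng(1)
V = rng.random((40, 2)) @ rng.random((2, 30))
W, H = nonneg_als(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper, each factor is further decomposed hierarchically and constrained so that the resulting 1D curves and 2D maps are individually meaningful and editable.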

Real-Time BRDF Editing in Complex Lighting
SIGGRAPH 06, pages 945-954. Inverse shade trees develop structured non-parametric curve-based BRDF representations, which allow data-driven editing, but only under point lighting. In this project, we develop the theory and algorithms that, for the first time, allow users to edit these BRDFs in real time, designing materials in their final placement in a scene with complex natural illumination and cast shadows. Paper: PDF (20M) Video (59M) 

Time-Varying Surface Appearance: Acquisition, Modeling and Rendering
SIGGRAPH 06, pages 762-771. We conduct the first comprehensive study of time-varying surface appearance, including acquisition of the first database of time-varying processes such as burning, drying and decay. We then develop a nonlinear space-time appearance factorization that allows easy editing or manipulation, such as control, transfer and texture synthesis. We demonstrate a variety of novel time-varying rendering applications. Paper: PDF Video QT (64M) Video AVI (46M) 

A Compact Factored Representation of Heterogeneous Subsurface Scattering
SIGGRAPH 06, pages 746-753. Heterogeneous subsurface scattering in translucent materials is one of the most beautiful but most complex effects. We acquire spatial BSSRDF datasets using a projector, and develop a novel nonlinear factorization that separates a homogeneous kernel from heterogeneous discontinuities. Paper: PDF (11M) 


Time-Varying BRDFs
IEEE Transactions on Visualization and Computer Graphics
13(3), pages 595-609, 2007. The properties of virtually all real-world materials change with time, causing their BRDFs to be time-varying. In this work, we address the acquisition, analysis, modeling and rendering of a wide range of time-varying BRDFs, including the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dust (household dust and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements, and the variation of the model parameters with time is analyzed. Each category exhibits interesting and sometimes non-intuitive parameter trends, which are then used to develop analytic time-varying BRDF (TVBRDF) models. Paper: PDF Video (49MB) 

A Precomputed Polynomial Representation for Interactive BRDF
Editing with Global Illumination
ACM Transactions on Graphics 27(2), Article 13, pages
1-13. Presented at SIGGRAPH 2008. We develop a mathematical framework and practical algorithms to edit BRDFs with global illumination in a complex scene. A key challenge is that light transport for multiple bounces is nonlinear in the scene BRDFs. We address this by developing a new bilinear representation of the reflection operator, deriving a precomputed polynomial multi-bounce tensor framework, and reducing the complexity of further bounces. Paper: PDF Video (7M) 
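The source of the nonlinearity, and the reason a polynomial representation suffices, can be seen from the standard Neumann-series expansion of light transport (a sketch in our own notation, not the paper's exact derivation):

```latex
% K_rho = one-bounce reflection operator (linear in the BRDF rho),
% E = emitted/direct lighting, L = final radiance:
L \;=\; \sum_{n \ge 0} K_\rho^{\,n}\, E
  \;=\; E + K_\rho E + K_\rho^{2} E + \cdots
% Writing rho = sum_j c_j b_j in an edit basis, the n-bounce term is a
% degree-n polynomial in the coefficients c_j; precomputing the
% multi-bounce tensors makes evaluating these polynomials interactive.
```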
Rendering: Monte Carlo Sampling 
Having acquired illumination (environment maps) and measured BRDFs, we still need to be able to use them for image synthesis. A related project deals with interactive rendering. Here, we focus on Monte Carlo sampling for more traditional global illumination. The challenge is that we need to stratify and importance sample high-dimensional measured datasets. While Monte Carlo sampling is a mature area in both statistics and computer graphics, this particular problem had not been addressed before. We have developed effective techniques for importance sampling both illumination and materials.
Structured Importance Sampling of Environment Maps
Siggraph 03, pages 605-612. We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map. PDF Video 

Efficient Shadows from Sampled Environment Maps
Journal of Graphics Tools 11(1), pages 13-36, 2006. We evaluate various possibilities and show how coherence can be used to speed up shadow testing with environment maps by an order of magnitude or more. Paper: PDF 

Efficient BRDF Importance Sampling Using a Factored Representation
Siggraph 04, pages 494-503. We introduce a Monte Carlo importance sampling technique for general analytic and measured BRDFs, based on a new BRDF factorization. PDF (8M) 
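The building block of sampling a factored representation is drawing from a tabulated 1D distribution by inverting its CDF. A minimal sketch (the factored-BRDF machinery itself, and the particular 1D curve, are not shown here):

```python
import bisect
import random

def build_cdf(weights):
    """Cumulative distribution over tabulated nonnegative weights."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against round-off at the top
    return cdf

def sample(cdf, u):
    """Invert the CDF: map u in [0, 1) to a bin index."""
    return bisect.bisect_right(cdf, u)

# A peaky tabulated factor, e.g. one 1D curve of a factored BRDF:
weights = [1, 1, 8, 1, 1]
cdf = build_cdf(weights)
random.seed(0)
counts = [0] * len(weights)
for _ in range(10000):
    counts[sample(cdf, random.random())] += 1
# Bin 2 holds 8/12 of the mass, so it should receive roughly 2/3 of
# the samples -- exactly the proportionality importance sampling needs.
```

Sampling each 1D factor this way, and multiplying the factors' densities, yields samples distributed approximately according to the full BRDF.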


Adaptive Numerical Cumulative Distribution Functions for Efficient
Importance Sampling
EGSR 05, pages 11-20. Importance sampling high-dimensional functions like lighting and BRDFs is increasingly important, but a direct tabular representation has storage cost exponential in the number of dimensions. By placing samples nonuniformly, we show that we can build compact CDFs that enable new applications such as sampling from oriented environment maps and multiple importance sampling. Paper: PDF 
MultiScale Appearance 
Scale is one of the most important, but often neglected, aspects of appearance throughout the processing pipeline. If we zoom in on the earth from space, our perception varies widely depending on whether we are looking at continent scale, at city scale, at street level, or inside a room. Even simple zooms in and out of computer graphics models can lead to widely varying appearance, giving rise to the need for proper filtering, representation and synthesis algorithms. At SIGGRAPH 2007 and 2008, we presented some of the first methods for multiscale (properly filtered) normal maps and multiscale texture synthesis.

Frequency Domain Normal Map Filtering
SIGGRAPH 07, article 28, pages 1-11. While mipmapping textures is commonplace, accurate normal map filtering remains a challenging problem because of nonlinearities in shading: we cannot simply average nearby surface normals. In this paper, we show analytically that normal map filtering can be formalized as a spherical convolution of the normal distribution function (NDF) and the BRDF. This leads to accurate multiscale normal map representations that preserve properly filtered appearance across a range of scales. The introduction of the von Mises-Fisher distribution and spherical EM into graphics also enables the use of high-frequency materials. Paper: PDF Video (103M) Very Cool Trailer (MOV 54M) 
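The core identity can be stated compactly: the effective reflectance of a pixel footprint is the BRDF averaged against the footprint's distribution of normals, with that distribution represented as a small mixture of von Mises-Fisher lobes. A sketch in our notation (the paper's formulation differs in detail):

```latex
% Effective reflectance: convolve the BRDF rho with the normal
% distribution function (NDF) gamma over normals n in the footprint:
\rho_{\text{eff}}(\omega_i, \omega_o) \;=\;
  \int_{S^2} \gamma(\mathbf{n})\, \rho(\omega_i, \omega_o; \mathbf{n})\, d\mathbf{n}
% NDF as a mixture of von Mises-Fisher lobes with mean directions mu_j
% and concentrations kappa_j, fit by spherical EM:
\gamma(\mathbf{n}) \;\approx\; \sum_{j} \alpha_j\, c(\kappa_j)\,
  e^{\kappa_j\, \boldsymbol{\mu}_j \cdot \mathbf{n}}
```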
Multiscale Texture Synthesis
SIGGRAPH 08. The appearance of many textures changes dramatically with scale. By using an exemplar graph with a few small single-scale exemplars and modifying a standard parallel synthesis method, we develop the first multiscale texture synthesis algorithm. The method is simple, can be implemented in real time on the GPU, and (with a simple recursive graph) even enables infinite zooms into an image. Paper: PDF Video (175M) 
Geometry 
While most of our focus has been on reflectance, we have also conducted research on new geometry acquisition frameworks, including techniques that bridge the gap between active and passive acquisition (spacetime stereo). These methods also allow acquisition of geometry under uncontrolled, unstructured illumination. We have developed ways to robustly create compact structured generative models from unstructured range data. More recently, we have shown how to efficiently combine shape and normals for precise 3D geometry, and developed a new viewpoint-coding scheme that uses multiple viewpoints instead of spatial and temporal codes for structured light.
Creating Generative Models from Range Images
Siggraph 99, pages 195-204. We have explored the creation of high-level parametric models from low-level range data. Our model-based approach is fairly robust and relatively insensitive to noise and missing data. Full Paper: PS (2.5M) PDF (1.5M) 


Spacetime Stereo: A Unifying Framework for Depth from Triangulation
CVPR 03, pages II-359-II-366; PAMI Feb 05, pages 296-302. We propose a common framework, spacetime stereo, which unifies many previous depth-from-triangulation methods such as stereo, laser scanning, and coded structured light. As a practical example, we discuss a new temporal stereo technique for improved shape estimation in static scenes under variable illumination. Paper: CVPR 03, PAMI 05 
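The unification can be expressed as a single matching cost whose support window extends over both space and time (notation here is ours, not necessarily the paper's):

```latex
% Spacetime matching cost for pixel x at disparity d, summed over a
% spatial window W(x) and a set of frames T:
C(x, d) \;=\; \sum_{t \in T}\; \sum_{x' \in W(x)}
  \bigl( I_{l}(x', t) - I_{r}(x' - d,\, t) \bigr)^{2}
% T = {t_0} recovers ordinary window-based stereo; a large T with a
% small (even single-pixel) spatial window recovers temporal matching
% as used in laser scanning and coded structured light.
```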

Efficiently Combining Positions and Normals for Precise 3D Geometry
Siggraph 05, pages 536-543. We show how depth and normal information, such as from a depth scanner and from photometric stereo, can be efficiently combined to remove the distortions and noise in both, producing very high quality meshes for computer graphics. Paper: PDF 
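The combination can be posed as a weighted linear least squares problem: depth measurements constrain absolute position (reliable at low frequencies), while normal measurements constrain slopes (reliable at high frequencies). A 1D toy sketch with made-up weights, not the paper's exact formulation:

```python
import numpy as np

# 1D toy: true heights on a line, with noisy depth and slope measurements.
n = 50
x = np.linspace(0.0, 1.0, n)
z_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
z_meas = z_true + 0.05 * rng.standard_normal(n)        # "scanner" depths
slope_meas = (np.gradient(z_true, x)
              + 0.05 * rng.standard_normal(n))          # "photometric" slopes

# Stack two linear constraint sets: z_i ~= z_meas_i, and the forward
# difference (z_{i+1} - z_i)/h ~= slope_i; weight the slope constraints
# more, since normals carry the high-frequency detail.
h = x[1] - x[0]
I = np.eye(n)
D = (np.eye(n, k=1)[:-1] - np.eye(n)[:-1]) / h          # forward differences
w_pos, w_grad = 1.0, 5.0
A = np.vstack([w_pos * I, w_grad * D])
b = np.concatenate([w_pos * z_meas, w_grad * slope_meas[:-1]])
z_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The fused estimate `z_hat` is typically closer to `z_true` than either measurement alone; the paper develops the analogous 2D formulation and solves it efficiently for full meshes.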

Viewpoint-Coded Structured Light
CVPR 2007. We introduce a theoretical framework and practical algorithms for replacing time-coded structured light patterns with viewpoint codes, in the form of additional camera locations. Current structured light methods typically use log(N) light patterns, encoded over time, to unambiguously reconstruct N unique depths. We demonstrate that each additional camera location may replace one frame in a temporal binary code. Paper: PDF 
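The log(N) counting argument is easy to make concrete: each binary pattern reveals one bit of a pixel's stripe index, so N labels need ceil(log2 N) patterns. A small illustration of temporal binary coding (the viewpoint-coding substitution itself requires the multi-camera geometry and is not shown):

```python
import math

def num_patterns(n_labels):
    """Binary time-coded structured light needs ceil(log2 N) patterns
    to disambiguate N stripe (depth) labels."""
    return math.ceil(math.log2(n_labels))

def patterns(n_labels):
    """Pattern p assigns stripe s the p-th bit of s; one pattern is
    projected per frame over time."""
    return [[(s >> p) & 1 for s in range(n_labels)]
            for p in range(num_patterns(n_labels))]

def decode(bits):
    """A camera pixel that saw one bit per frame recovers its stripe index."""
    return sum(b << p for p, b in enumerate(bits))

n_labels = 1024
pats = patterns(n_labels)           # 10 temporal patterns suffice for 1024 labels
stripe = 300
observed = [pats[p][stripe] for p in range(len(pats))]
recovered = decode(observed)        # -> 300
```

In the paper's scheme, each additional camera viewpoint stands in for one of these temporal bits, shortening the pattern sequence and making the approach usable for faster-moving scenes.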