COMS 6998 Computational Photography
You have a good bit of freedom in choosing your final project. You are expected to implement
one (or more) related research papers, or think of some interesting
novel ideas and implement them using the techniques discussed in
class. Below, we list some possible projects. However, you are welcome to
come up with your own ideas - in fact we encourage this.
This project will require programming as well as the manipulation of images and/or video.
Most likely, you will need to write a program with a user
interface (i.e., a demo) so that you can show the results. It is
also strongly recommended that you evaluate your system with varying input parameters.
A comparison to other work is welcome, but not required. You can use
your favorite programming languages (e.g., Matlab, C, C++, Python, etc.).
You are allowed to work on a project in groups of two.
In your submission, please clearly identify the contribution of
both group members. Please note that members in the same group will not necessarily
get the same grade.
What to Submit?
The project is expected to be carried out through the remaining portion of the
semester. There will be three checkpoints: a project
proposal, an intermediate milestone report, and a final project
report.
Create a webpage for the final project. This page
should include the three reports, the interesting implementation
details, some results (e.g., images, videos, etc.), and so on.
Your website should also include downloadable source and
executable code. Do not modify the website after the due date of
the assignment. Also, send an email to firstname.lastname@example.org
and email@example.com with the URL of your webpage BEFORE the deadline.
- Project Proposal (Due: April 4) (5%)
This will be a short report (usually one or two pages will be
enough). You will explain what problem you are trying to solve,
why you want to solve it, and what the possible steps toward
the solution are. Please include a timetable.
- Intermediate Milestone Report (Due: April 15) (5%)
In this report, you will need to give a brief summary of
current progress, including your current results, the
difficulties that arise during the implementation, and how your
proposal may have changed in light of current progress. Please keep in mind that the goal
is to work around obstacles, not to stop working when you reach them.
- Final Project Presentations (April
27 and May 4) (15%)
This will be a short presentation in class about your project. It will
be 5-7 minutes per person, so if you are in a group of two people, you will get
10-14 minutes for your presentation.
- Final Project Report (Due: May
Assemble all the materials you have finished for your project
in the final report, including the motivation, the approach,
your implementation, the results, and a discussion of problems encountered.
If you have comparison results or
some interesting findings, put them on the webpage and discuss
these as well.
Possible Project Ideas
1. From Your Previous Assignments ...
You have tried the face detector and the SVM classifier in your
previous assignments. What could you do next using these tools?
You can find some resources on face detection with a web camera
used in the biometrics class last semester. You can use
that code if you want to build a complete face detection system.
You can also add a face
recognition module in your system. Or, do something completely different with these same components.
2. Radiometric Calibration of a Camera and
High-Dynamic Range Imaging (Probably Not as Fun as Other Projects, but Cool)
This project is to re-implement the paper by Debevec & Malik 97. The
input will be a set of photographs of the same 3D scene taken
under different exposures. You will need to write the code to
estimate the camera response curve of the camera, as well as to
create high-dynamic range images from the set of photographs. Here you can find some sets of photographs.
There is a file called "testfile" for each set, in which the second
column shows the shutter speed for each photograph.
If you have access to a camera with controls for the shutter
speed, you are highly encouraged to take your own set of
photographs and work on that data.
Once you get a high-dynamic range image, you might also want to
apply a tone mapping algorithm to display the high-dynamic range image
on a conventional display.
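As a concrete starting point, the merge step can be sketched as follows, assuming the response curve has already been recovered. The function and variable names are illustrative, and a linear response is used as a placeholder for the recovered curve:

```python
# Sketch of the HDR merge step in the spirit of Debevec & Malik 97.
# Assumes g[z] gives the log exposure for pixel value z; here a
# linear-response placeholder stands in for the recovered curve.
import numpy as np

def merge_hdr(images, exposure_times, g=None):
    """images: list of uint8 grayscale arrays, one per exposure.
    exposure_times: shutter speeds in seconds (same order).
    g: 256-vector with g[z] = log exposure for pixel value z."""
    if g is None:
        g = np.log(np.arange(256) / 255.0 + 1e-6)  # placeholder response
    z = np.stack([im.astype(int) for im in images])        # (P, H, W)
    log_t = np.log(np.asarray(exposure_times))[:, None, None]
    # Hat weighting: trust mid-range pixels, distrust clipped ones.
    w = np.minimum(z, 255 - z).astype(float) + 1e-6
    # Weighted average of the per-exposure log-radiance estimates.
    log_E = np.sum(w * (g[z] - log_t), axis=0) / np.sum(w, axis=0)
    return np.exp(log_E)
```

The hat-shaped weighting downweights under- and over-exposed pixels so each radiance estimate comes mostly from well-exposed photographs.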
3. Light Field Rendering
You will implement a light field viewer based on the techniques
described by Levoy & Hanrahan 96.
Given the input as a light field, the viewer you need to implement
should allow the user to interactively move the camera and generate
an image corresponding to that particular view. In other words, it
should perform the following operations:
The basic operation performed by the light field viewer is looking
up the color corresponding to a given ray in space. Since the
light field is represented by "slabs" (pairs of planes in space),
you will need to first write code for intersecting a ray in space
with a rectangular region of a plane (specified with the region's
four corners). If you have not taken a graphics course, perhaps you should skip this.
- For each pixel, generate a ray starting at the camera
position and in the pixel's direction
- Intersect the ray with the uv and st planes,
finding the closest samples in each
- If both intersections are valid, look up the color for
that pixel, else color the pixel black.
- Once all pixels have been assigned a color, use
glDrawPixels() to render to the screen
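The per-pixel lookup above can be sketched roughly as follows, assuming a two-plane parameterization with the uv plane at z = 0 and the st plane at z = 1, each spanning [0, 1] x [0, 1]. The nearest-sample strategy and all names are illustrative:

```python
# Sketch of a single ray lookup in a two-plane light slab.
# Assumes uv plane at z = 0 and st plane at z = 1, both unit squares.
import numpy as np

def intersect_plane_z(origin, direction, z_plane):
    """Intersect a ray with the plane z = z_plane; return (x, y) or None."""
    if abs(direction[2]) < 1e-12:
        return None  # ray parallel to the plane
    t = (z_plane - origin[2]) / direction[2]
    if t <= 0:
        return None  # plane is behind the camera
    p = origin + t * direction
    return p[0], p[1]

def lookup_ray(light_field, origin, direction):
    """light_field: array of shape (U, V, S, T, 3) of RGB samples.
    Returns the nearest stored sample for the ray, or black if the
    ray misses either slab plane."""
    U, V, S, T, _ = light_field.shape
    black = np.zeros(3)
    uv = intersect_plane_z(origin, direction, 0.0)
    st = intersect_plane_z(origin, direction, 1.0)
    if uv is None or st is None:
        return black
    def to_index(x, n):
        i = int(round(x * (n - 1)))  # nearest sample on the plane
        return i if 0 <= i < n else None
    idx = (to_index(uv[0], U), to_index(uv[1], V),
           to_index(st[0], S), to_index(st[1], T))
    if any(i is None for i in idx):
        return black  # intersection outside the rectangular region
    return light_field[idx]
```

A full viewer would run this lookup for every pixel of the virtual camera and then hand the resulting color buffer to glDrawPixels(); bilinear interpolation over the four nearest samples is the natural next refinement.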
Here are some single-slab light field
data. Here is the format of the light
field data. Here is some multi-slab light field data.
4. Seam Carving
This project is to re-implement the Seam Carving paper by Avidan & Shamir 07. We
will discuss the paper in class. There is a
follow-up paper that extends the idea for videos.
- Write code to compute the edge-based measure for an input image.
- Implement the dynamic programming code to find the optimal
seam to remove along one direction.
- Implement the algorithm to deal with resizing along both
x and y directions (i.e., to find the optimal seams-order)
- Implement other applications discussed in the paper.
- Discussions: is there a better measure? How does this algorithm
compare with other simple resizing algorithms, e.g., to
remove an optimal column instead of an optimal seam?
What are the limitations of this algorithm?
- (Not easy) You might want to extend this to videos, as shown in their
follow-up paper.
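The dynamic-programming seam search can be sketched as follows, assuming a simple gradient-magnitude energy; the paper's other energy measures plug into the same recurrence. All names are illustrative:

```python
# Sketch of vertical seam removal via dynamic programming.
import numpy as np

def energy(gray):
    """Edge-based energy: sum of absolute x and y gradients."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(e):
    """Return column indices (one per row) of the minimum-energy seam."""
    h, w = e.shape
    cost = e.astype(float)
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        # Each pixel continues the cheapest of the three seams above it.
        cost[y] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):  # backtrack upward
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Drop one pixel per row, shrinking the image by one column."""
    h, w = img.shape[:2]
    mask = np.ones((h, w), bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, *img.shape[2:])
```

Horizontal seams follow by transposing, and the optimal seam-order step for two-direction resizing wraps this routine in a second dynamic program.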
5. View Morphing
This project is to re-implement the View Morphing paper by Seitz & Dyer. Here are some suggested steps for this project.
As an important related work, you might want to check the
feature-based image morphing paper by Beier &
Neely. It is strongly recommended to compare these two
morphing methods in your final project.
- Write the code to estimate the fundamental matrix from the
corresponding feature points of the two input images. You can
manually select the corresponding feature points, or use
the SIFT feature detector and the RANSAC
algorithm to find these corresponding feature points automatically.
- Write the code for morphing in parallel views
- Write the code for morphing in non-parallel views
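The fundamental-matrix step above can be sketched with the normalized eight-point algorithm; a RANSAC loop over SIFT matches would simply call this on random 8-point subsets. All names here are illustrative:

```python
# Sketch of the normalized eight-point algorithm for estimating F.
import numpy as np

def normalize(pts):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / max(d, 1e-12)
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    homog = np.c_[pts, np.ones(len(pts))]
    return (T @ homog.T).T, T

def fundamental_matrix(pts1, pts2):
    """pts1, pts2: (N, 2) corresponding points, N >= 8."""
    x1, T1 = normalize(np.asarray(pts1, float))
    x2, T2 = normalize(np.asarray(pts2, float))
    # Each correspondence gives one row of A f = 0 from x2^T F x1 = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

The normalization step matters: without it, the linear system is badly conditioned and the estimated F is unreliable on real image coordinates.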
6. Image Analogies (Not Easy)
Implement the Image
Analogies paper by Aaron Hertzmann et al.
7. Single View Metrology (Thanks to Prof. Steve
Implement the Single View Metrology paper.
Here are some suggested steps:
- Take some photographs or find some images where you can
identify at least three sets of parallel lines, such as
photographs of buildings, rooms, and so on. You can manually
label the line segments of the parallel lines in the input
images. You can implement a zoom feature and label with
greater precision.
- Compute the vanishing points for each set of the parallel lines.
- Choose reference points. You will need to choose a
reference plane and a point off that plane from the image. If
you shoot the photograph yourself, measure the 3D
positions for 4 corner points in the reference plane and the 3D
position of the point off that plane. This will give you the
homography matrix H that maps u-v points to X-Y
positions on the plane. The fifth point determines the scale
factor off of the plane. Alternatively, you can specify
H and the scale factor by identifying a regular
structure such as a cube. This latter approach is necessary
for paintings and other scenes in which physical measurements
are not feasible.
- Compute 3D positions for a set of feature points in the
image. The paper provides methods to measure the height of a
point off the reference plane. Using the homography matrix
H estimated above, you can also estimate the
coordinates of the point on the plane through this point that
is parallel to the reference plane.
- Create a 3D mesh model from those feature points (you can
either triangulate the points or manually create the mesh).
Also, crop the corresponding regions from the original image
as textures. You will need to warp the quadrilateral image
regions into a rectangular texture image for texture mapping.
- Render the reconstructed 3D mesh (with texture mapped)
from different views.
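The homography estimation in the reference-point step above can be sketched via the direct linear transform (DLT), assuming four labeled correspondences between image coordinates (u, v) and measured plane coordinates (X, Y). The function names are illustrative:

```python
# Sketch of estimating the reference-plane homography H via DLT,
# from four image points and their measured plane coordinates.
import numpy as np

def homography_from_points(uv, xy):
    """uv, xy: (4, 2) arrays of image points and plane coordinates.
    Returns 3x3 H with [X, Y, 1]^T ~ H [u, v, 1]^T."""
    A = []
    for (u, v), (X, Y) in zip(uv, xy):
        # Two rows per correspondence, from cross-multiplying
        # X = (h1.p)/(h3.p) and Y = (h2.p)/(h3.p).
        A.append([u, v, 1, 0, 0, 0, -X * u, -X * v, -X])
        A.append([0, 0, 0, u, v, 1, -Y * u, -Y * v, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so H[2,2] = 1

def apply_homography(H, u, v):
    """Map an image point (u, v) to plane coordinates (X, Y)."""
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w
```

With exactly four correspondences the null space of A is one-dimensional and the estimate is exact; with more labeled points the same SVD gives a least-squares fit.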