My research area is natural language processing and the automatic depiction of textual descriptions as 3D scenes. In particular, I'm interested in how textual meaning is resolved by context inferred from the functional and physical properties of a scene's actions, setting, and constituent objects.
I did my PhD research in the Language and Speech Processing Group in the Department of Computer Science at Columbia University. My thesis advisor was Prof. Julia Hirschberg. My PhD thesis (completed January 2017) is titled "Painting Pictures with Words -- From Theory to System."
I am a co-founder of WordsEye. I previously worked on computer graphics and user interface design at Symbolics, Nichimen Graphics, and AT&T Research. At AT&T, Richard Sproat and I created the first version of WordsEye, a system for automatically depicting 3D scenes from textual input. We published a paper on WordsEye at SIGGRAPH 2001.
Some of our more recent papers are available here.