Bob Coyne
Visualizing the meaning of language

My research area is natural language processing and the automatic depiction of textual descriptions as 3D scenes. In particular, I'm interested in the ways that textual meaning is resolved by context inferred from the functional and physical properties of the background actions, setting, and constituent objects.

I did my PhD research in the Language and Speech Processing Group in the Department of Computer Science at Columbia University, where my thesis advisor was Prof. Julia Hirschberg. My PhD thesis (completed January 2017) is "Painting Pictures with Words -- From Theory to System".

I am currently CTO at WordsEye. I have previously worked on computer graphics and user interface design at Symbolics, Nichimen Graphics, and AT&T Research. At AT&T, Richard Sproat and I created the first version of WordsEye, a system for automatically depicting 3D scenes from textual input. We published a paper on WordsEye at SIGGRAPH 2001.

VigNet is a lexical semantic resource developed for and used by WordsEye.
To request a copy, please send email to coyne at cs dot columbia dot edu.

Captives of Time
Input text: A silver head of time is on the grassy ground. The blossom is next to the head. It is in the ground. The green light is three feet above the blossom. The yellow light is 3 feet above the head. The large wasp is behind the blossom. The wasp is facing the head.
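As a rough illustration of the kind of information such input text carries, sentences like those above can be pattern-matched into (figure, relation, ground) triples. This is only a minimal sketch under simplifying assumptions; WordsEye's actual pipeline performs full linguistic and semantic analysis, and the relation list and function names here are hypothetical:

```python
import re

# Hypothetical, simplified relation inventory; longer phrases are listed
# first so they are tried before their substrings (e.g. "above").
RELATIONS = ["next to", "three feet above", "3 feet above",
             "behind", "facing", "above", "on", "in"]

def extract_relations(text):
    """Extract (figure, relation, ground) triples from simple
    "The X is REL the Y." scene-description sentences."""
    triples = []
    for sentence in re.split(r"\.\s*", text):
        s = sentence.strip().lower()
        if not s:
            continue
        for rel in RELATIONS:
            # Match the template "the <figure> is <rel> the <ground>".
            m = re.match(rf"the ([\w ]+?) is {re.escape(rel)} the ([\w ]+)", s)
            if m:
                triples.append((m.group(1), rel, m.group(2)))
                break
    return triples

print(extract_relations("The blossom is next to the head. "
                        "The large wasp is behind the blossom."))
```

A depiction system would then ground each triple against the 3D models' functional and physical properties (sizes, surfaces, spatial tags) to place the objects in the scene; that resolution step is exactly where the contextual inference described above comes in.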