Bob Coyne
Visualizing the meaning of language

My papers

2019 Morgan Ulinski, Bob Coyne, and Julia Hirschberg. SpatialNet: A Declarative Resource for Spatial Relations, in the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) at NAACL, June 2019. Minneapolis, Minnesota.

2018 Morgan Ulinski, Bob Coyne, and Julia Hirschberg. Evaluating the WordsEye Text-to-Scene System: Imaginative and Realistic Sentences, in LREC 2018, May. Miyazaki, Japan.

2014 Morgan Ulinski, Anusha Balakrishnan, Bob Coyne, Julia Hirschberg, and Owen Rambow. WELT: Using Graphics Generation in Linguistic Fieldwork, in ACL: System Demonstrations 2014, June. Baltimore, MD.

2014 Morgan Ulinski, Anusha Balakrishnan, Daniel Bauer, Bob Coyne, Julia Hirschberg, and Owen Rambow. Documenting Endangered Languages with the WordsEye Linguistics Tool in Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 6-14, 2014, June. Baltimore, MD.

2012 B. Coyne, A. Klapheke, M. Rouhizadeh, R. Sproat, and D. Bauer. Annotation Tools and Knowledge Representation for a Text-to-Scene System, in COLING 2012.

2012 D. Bauer, B. Coyne, and O. Rambow. Frame-Based Representation of Lexical, Graphical, and Factual Knowledge for Text-to-Scene Generation, in Concept Types and Frames in Language, Cognition, and Science (CTF 2012), Heinrich Heine University, Düsseldorf.

2011 Masoud Rouhizadeh, Daniel Bauer, Bob Coyne, Owen Rambow, and Richard Sproat. Collecting Spatial Information for Locations in a Text-to-Scene Conversion System, in Computational Models of Spatial Language Interpretation and Generation Workshop (CoSLI-2) at CogSci 2011. Boston, Massachusetts.

2011 Bob Coyne, Cecilia Schudel, Michael Bitz, and Julia Hirschberg. Evaluating a Text-to-Scene Generation System as an Aid to Literacy, in the Workshop on Speech and Language Technology in Education (SLaTE) at Interspeech 2011.

2011 Masoud Rouhizadeh, Bob Coyne, and Richard Sproat. Collecting Semantic Information for Locations in the Scenario-Based Lexical Knowledge Resource of a Text-to-Scene Conversion System, in KES (Knowledge Engineering Systems) 2011.

2011 Bob Coyne, Daniel Bauer, and Owen Rambow. VigNet: Grounding Language in Graphics using Frame Semantics, in the Workshop on Relational Models of Semantics (RELMS) at ACL 2011.

2011 Masoud Rouhizadeh, Margit Bowler, Richard Sproat, and Bob Coyne. Collecting Semantic Data by Mechanical Turk for the Lexical Knowledge Resource of a Text-to-Picture Generating System, in the 9th International Conference on Computational Semantics.

2010 Masoud Rouhizadeh, Margit Bowler, Richard Sproat, and Bob Coyne. Data Collection and Normalization for Building the Scenario-Based Lexical Knowledge Resource of a Text-to-Scene Conversion System, in the 5th International Workshop on Semantic Media Adaptation and Personalization.

2010 Bob Coyne, Richard Sproat, and Julia Hirschberg. Spatial Relations in Text-to-Scene Conversion, in the Computational Models of Spatial Language Interpretation Workshop at Spatial Cognition 2010. Mt. Hood, Oregon.

2010 Bob Coyne, Owen Rambow, Julia Hirschberg, and Richard Sproat. Frame Semantics in Text-to-Scene Generation, in KES 2010, Workshop on 3D Visualisation of Natural Language. Cardiff, Wales.

2009 Bob Coyne and Owen Rambow. LexPar: A Freely Available English Paraphrase Lexicon Automatically Extracted from FrameNet, in IEEE-ICSC (International Conference on Semantic Computing) 2009. Berkeley, CA.

2009 Bob Coyne and Owen Rambow. Meaning-Text-Theory and Lexical Frames, in the Fourth International Conference on Meaning-Text Theory 2009. Montreal, Canada.

2001 Bob Coyne and Richard Sproat. WordsEye: An Automatic Text-to-Scene Conversion System, in SIGGRAPH Proceedings 2001. Los Angeles, CA.

Related Work


Text-to-Scene Conversion

1. G. Adorni, M. Di Manzo, and F. Giunchiglia. Natural Language Driven Image Generation. COLING 84, pages 495-500, 1984.

2. Boberg, Richard. Generating Line Drawings from Abstract Scene Descriptions. MIT MS Thesis, Dept. of Elec. Eng., 1972.

3. S. R. Clay and J. Wilhelms. Put: Language-Based Interactive Manipulation of Objects. IEEE Computer Graphics and Applications, pages 31-39, March 1996.

4. Kahn, Ken. Creation of Computer Animation from Story Descriptions, Ph.D. Thesis, AI Tech. Report 540, AI Lab, MIT, Cambridge, MA, August 1979.

5. Simmons, Robert. The CLOWNS Microworld. Proceedings of TINLAP 75, 17-19.

6. Simmons, Robert and Novak, Gordon. Semantically Analyzing an English Subset for the CLOWNS Microworld. American Journal of Computational Linguistics, Microfiche 18. 1975.

7. Yamada, Atsushi. Studies on Spatial Description Understanding Based on Geometric Constraints Satisfaction. Ph.D. dissertation, University of Kyoto, 1993.

8. Bob Coyne and Richard Sproat. WordsEye: An Automatic Text-to-Scene Conversion System. SIGGRAPH Proceedings 2001.

9. Funda Durupinar, Umut Kahramankaptan, and Ilyas Cicekli. Intelligent Indexing, Querying and Reconstruction of Crime Scene Photographs. Dept. of Computer Engineering, Bilkent University, Ankara, Turkey.

10. Richard Sproat, Inferring the Environment in a Text-to-Scene Conversion System, First International Conference on Knowledge Capture (K-CAP '01), Victoria, BC, Canada, 2001.

Language and Graphics

1. Papers from the University of Pennsylvania's Center for Human Modeling and Simulation. This is only a sample: many more papers are listed at their website.

- N. Badler, R. Bindiganavale, J. Allbeck, W. Schuler, L. Zhao, and M. Palmer. Parameterized Action Representation for Virtual Human Agents. In J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, editors, Embodied Conversational Agents, pages 256-284. MIT Press, Cambridge, MA, 2000.

- R. Bindiganavale, W. Schuler, J. Allbeck, N. Badler, A. Joshi, and M. Palmer. Dynamically Altering Agent Behaviors Using Natural Language Instructions. Autonomous Agents, pages 293-300, 2000.

- Badler, Norm; Zhao, Liwei; Costa, Monica; Vogler, Christian; and Schuler, William. "Modifying Movement Manner Using Adverbs," Fourth International Workshop on Autonomous Agents: Communicative Agents in Intelligent Virtual Environments, Barcelona, Spain, June 3-7, 2000.

- Xu, Y. and Badler, N. "Algorithms for generating motion trajectories described by prepositions." Proc. Computer Animation 2000 Conference, IEEE Computer Society, Philadelphia, May 3-5, 2000, pp. 33-39.

2. The Ulysse and CarSim systems:

The Ulysse System:

- Godéreaux, Christophe; El-Guedj, Pierre-Olivier; Revolta, Fréderic; and Nugues, Pierre. Ulysse: An Interactive Spoken Dialogue Interface to Navigate in Virtual Worlds. In John Vince and Rae Earnshaw (Eds), Virtual Worlds on the Internet. Chapter 4, pp. 53-70, IEEE Computer Society Press, Los Alamitos, 1999.

- Bersot, Olivier; El-Guedj, Pierre-Olivier; Godéreaux, Christophe; and Nugues, Pierre. A Conversational Agent to Help Navigation and Collaboration in Virtual Worlds. Virtual Reality, 3(1):71-82, 1998.

- Godéreaux, Christophe; Diebel, Korinna; El-Guedj, Pierre-Olivier; Revolta, Frederic; and Nugues, Pierre. An Interactive, Spoken Dialogue Interface to Virtual Worlds. In John H. Connolly and Lyn Pemberton (Eds), Linguistic Concepts and Methods in CSCW. Chapter 13, pp 177-200, Springer, London, 1996.

The CarSim system:

- Dupuy, Sylvain; Egges, Arjan; Legendre, Vincent; and Nugues, Pierre. Generating a 3D Simulation of a Car Accident from a Written Description in Natural Language: The CarSim System. Proceedings of the ACL Workshop on Temporal and Spatial Information Processing, pp. 1-8, 2001. Toulouse.

3. Papers from the Kairai Project at the Tokyo Institute of Technology.

- Yusuke Shinyama, Takenobu Tokunaga, and Hozumi Tanaka. "Kairai - Software Robots Understanding Natural Language." Third International Workshop on Human-Computer Conversation, Jul. 2000.

- Yusuke Shinyama. "A Dialogue System Controlling the Acts of Software Robots" (in Japanese). M.S. Thesis, Dept. of Computer Science, Tokyo Inst. of Technology, Feb. 2000.

- Yusuke Shinyama, Takenobu Tokunaga, and Hozumi Tanaka. "Processing of 3-D Spatial Relations for Virtual Agents Acting on Natural Language Instructions." Second Workshop on Intelligent Virtual Agents, Sep. 1999.

4. The Virtual Director

- Mukerjee, Amitabha; Kshitij Gupta; Siddharth Nautiyal; Mukesh P. Singh; and Neelkanth Mishra. Conceptual Description of Visual Scenes from Linguistic Models. Journal of Image and Vision Computing, 2000.


Other papers

1. Drewery, Karin and Tsotsos, John. Goal Directed Animation using English Motion Commands. Graphics Interface, 131-135, 1986.

2. Tijerino, Yuri; Abe, Shinji; Miyasato, Tsutomu; and Kishino, Fumio. What you Say is what you See - Interactive Generation, Manipulation and Modification of 3-D Shapes based on Verbal Descriptions. Artificial Intelligence Review. 8, 215-234. 1994.

3. Tijerino, Yuri; Mochizuki, Kenji; and Kishino, Fumio. Interactive 3-D Graphics Driven through Verbal Instructions: Previous and Current Activities at ATR. Computers and Graphics. 18(5), 621-631. 1994.

4. Dhiraj Joshi, James Z. Wang, and Jia Li. The Story Picturing Engine: Finding Elite Images to Illustrate a Story Using Mutual Reinforcement. MIR 2004.


Links to related or seemingly related systems:

A. The Sonas system.
B. A system for automated generation of 3D animation from a high level script.