RESEARCH PROJECTS
Power Analysis in Written Dialog: We believe that the possession of, and attempts to possess, power and influence in the social context of a dialog are reflected in the participants' use of language. We aim to identify and learn from these linguistic cues in order to infer the power relations between the participants of a dialog. We plan to study these manifestations across different genres such as email and discussion forums.
Power Analysis in Political Debates: In Summer 2012, I interned at Avaya Labs under Dr. Ajita John and Dr. Doree Seligmann, studying how the power differentials between candidates in the 2012 Republican presidential primary elections manifest in the language and structure of the series of primary debates. I continue to work on this project as part of my thesis.
Local Question Answering: In Summer 2014, I am interning at the Google NYC office, working with Dr. Hila Becker and the Google local search quality team on improving answer quality for local search queries about business establishments. More details to come.
Negation Detection: In Summer 2013, I interned at the IBM Thomas J. Watson Research Center on the DeepQA project, where I worked on adapting the Jeopardy!-winning Watson system to the biomedical domain. I worked on relation detection in the biomedical domain with Dr. Chang Wang and on negation detection with Dr. Branimir K. Boguraev. We devised a novel method that projects text-span negation annotations into the syntactic space in order to train negation scope detection systems. We also applied our system to identify negated clinical factors, a critical subtask of clinical decision-making systems.
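A minimal sketch of this idea, assuming spaCy with its en_core_web_sm model and a hypothetical annotation format in which the cue and scope are given as character spans; it illustrates how text-span scope annotations can be projected onto a dependency parse to yield per-token training examples with syntactic features (an illustration of the general technique, not the actual IBM system):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

    def token_training_examples(sentence, cue_span, scope_span):
        # cue_span and scope_span are hypothetical (start_char, end_char) annotations
        doc = nlp(sentence)
        cue_tokens = [t for t in doc if cue_span[0] <= t.idx < cue_span[1]]
        examples = []
        for tok in doc:
            in_scope = scope_span[0] <= tok.idx < scope_span[1]
            features = {
                "lemma": tok.lemma_,
                "dep": tok.dep_,                    # dependency relation label
                "head_pos": tok.head.pos_,
                "dist_to_cue": min((abs(tok.i - c.i) for c in cue_tokens),
                                   default=len(doc)),
                "shares_head_with_cue": any(tok.head == c.head for c in cue_tokens),
            }
            examples.append((features, in_scope))
        return examples

    # "The scan did not reveal a fracture." with cue "not" (chars 13-16) and
    # scope "reveal a fracture" (chars 17-34) yields one (features, label) pair
    # per token; these pairs can train any standard scope classifier.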
Biomedical Relation Extraction: In Summer 2011, I interned at Siemens Corporate Research under Dr. Swapna Somasundaran, working on extracting relations between generic biomedical entities such as diseases, drugs, and treatments in order to build a medical ontology. We used a semi-supervised approach to build the system, relying on distant supervision.
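The distant supervision step can be illustrated with a short sketch (the seed facts, sentences, and function names are hypothetical, not the Siemens system): entity pairs already known to be related in a seed knowledge base are used to automatically, if noisily, label sentences that mention both entities as positive training examples for that relation.

    # Hypothetical seed facts standing in for an existing knowledge base.
    KNOWN_RELATIONS = {
        ("metformin", "type 2 diabetes"): "treats",
        ("aspirin", "headache"): "treats",
    }

    def distant_label(sentences):
        """Yield (sentence, entity1, entity2, relation) training examples."""
        for sent in sentences:
            text = sent.lower()
            for (e1, e2), rel in KNOWN_RELATIONS.items():
                if e1 in text and e2 in text:
                    yield sent, e1, e2, rel

    corpus = [
        "Metformin is widely prescribed to manage type 2 diabetes.",
        "The patient reported a headache after starting aspirin.",
    ]
    for example in distant_label(corpus):
        print(example)  # noisy positive examples; a relation classifier is then
                        # trained on features extracted from these sentences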
Modality Analysis: This project aims to recognize various modalities expressed in unstructured text: Ability, Effort, Intention, Permission, Requirement, Success, and Wanting. We used a supervised learning approach with lexical and syntactic features, obtaining training data through a combination of rule-based and crowdsourcing methods.
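A minimal sketch of such a supervised setup, using scikit-learn and toy, hand-written feature dictionaries (the feature names and examples are illustrative assumptions, not the project's actual feature set): each candidate trigger word is represented by lexical and syntactic features and classified into a modality type.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training examples: in practice the feature dicts would come from a
    # dependency parse, and labels from rule-based and crowdsourced annotation.
    X = [
        {"lemma": "able", "pos": "ADJ", "dep": "acomp", "head_lemma": "be"},
        {"lemma": "try", "pos": "VERB", "dep": "ROOT", "has_xcomp": True},
        {"lemma": "must", "pos": "AUX", "dep": "aux", "head_pos": "VERB"},
        {"lemma": "want", "pos": "VERB", "dep": "ROOT", "has_xcomp": True},
    ]
    y = ["Ability", "Effort", "Requirement", "Wanting"]

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    print(model.predict([{"lemma": "able", "pos": "ADJ", "dep": "acomp",
                          "head_lemma": "be"}]))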
Committed Belief Learning: We analyzed text beyond propositional meaning extraction, trying to determine which propositions in a text the author believes. We built a supervised learning system using lexical and deep syntactic features.