TechTalks from event: NAACL 2015
1A: Semantics
-
Unsupervised Induction of Semantic Roles within a Reconstruction-Error Minimization Framework
We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with the most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages.
-
Predicate Argument Alignment using a Global Coherence Model
We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over the state of the art.
-
Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering
Most recent unsupervised methods in vector space semantics for assessing thematic fit (e.g. Erk, 2007; Baroni and Lenci, 2010; Sayeed and Demberg, 2014) create prototypical role-fillers without performing word sense disambiguation. This leads to a kind of sparsity problem: candidate role-fillers for different senses of the verb end up being measured by the same yardstick, the single prototypical role-filler.
-
A Compositional and Interpretable Semantic Space
Vector Space Models (VSMs) of Semantics are useful tools for exploring the semantics of single words, and the composition of words to make phrasal meaning. While many methods can estimate the meaning (i.e. vector) of a phrase, few do so in an interpretable way. We introduce a new method (CNNSE) that allows word and phrase vectors to adapt to the notion of composition. Our method learns a VSM that is both tailored to support a chosen semantic composition operation, and whose resulting features have an intuitive interpretation. Interpretability allows for the exploration of phrasal semantics, which we leverage to analyze performance on a behavioral task.