TechTalks from event: NAACL 2015

1A: Semantics

  • Unsupervised Induction of Semantic Roles within a Reconstruction-Error Minimization Framework. Authors: Ivan Titov and Ehsan Khoddam
    We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with the most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages.
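The encode-then-reconstruct objective described above can be illustrated with a toy sketch. Everything here is invented for illustration: the parameters are random rather than learned, and the reconstruction component is simplified to one prototype filler vector per role (the paper uses a factorized tensor model instead).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 3 roles, 5-dim syntactic/lexical features, 4-dim filler embeddings.
N_ROLES, F_DIM, E_DIM = 3, 5, 4

# Hypothetical parameters (jointly estimated in the paper, random here).
W_enc = rng.normal(size=(N_ROLES, F_DIM))   # encoder: features -> role scores
protos = rng.normal(size=(N_ROLES, E_DIM))  # stand-in reconstruction model:
                                            # one prototype filler per role

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reconstruction_error(features, fillers):
    """Soft role assignment from features, then squared error of
    reconstructing each argument filler from its expected role."""
    total = 0.0
    for f, x in zip(features, fillers):
        p = softmax(W_enc @ f)             # encoder: distribution over roles
        x_hat = p @ protos                 # expected reconstruction of the filler
        total += np.sum((x - x_hat) ** 2)  # reconstruction error for this argument
    return total

# Two toy arguments of one predicate.
feats = rng.normal(size=(2, F_DIM))
fills = rng.normal(size=(2, E_DIM))
err = reconstruction_error(feats, fills)
print(err)
```

Training would adjust both `W_enc` and the reconstruction parameters to minimize this error, which is what pushes the encoder toward linguistically meaningful role assignments.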
  • Predicate Argument Alignment using a Global Coherence Model. Authors: Travis Wolfe, Mark Dredze, Benjamin Van Durme
    We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over the state of the art.
  • Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering. Authors: Clayton Greenberg, Asad Sayeed, Vera Demberg
    Most recent unsupervised methods in vector space semantics for assessing thematic fit (e.g. Erk, 2007; Baroni and Lenci, 2010; Sayeed and Demberg, 2014) create prototypical role-fillers without performing word sense disambiguation. This leads to a kind of sparsity problem: candidate role-fillers for different senses of the verb end up being measured by the same yardstick, the single prototypical role-filler.
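The sparsity problem above can be made concrete with a tiny sketch: a single prototype averaged over all senses undersells a good candidate, while per-cluster prototypes let the matching sense speak for itself. All vectors and cluster assignments are made up for illustration; real systems use distributional vectors and induce the clusters.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy filler vectors for the patient role of "cut", split by (hypothetical) sense.
hair_like = np.array([[1.0, 0.1], [0.9, 0.2]])  # e.g. "hair", "grass"
cost_like = np.array([[0.1, 1.0], [0.2, 0.9]])  # e.g. "costs", "budget"
fillers = np.vstack([hair_like, cost_like])

candidate = np.array([0.9, 0.25])               # e.g. "lawn"

# Single prototype: one centroid over all fillers (the single yardstick).
single_proto = fillers.mean(axis=0)

# Clustered prototypes: one centroid per sense cluster.
cluster_protos = [hair_like.mean(axis=0), cost_like.mean(axis=0)]

fit_single = cos(candidate, single_proto)
fit_clustered = max(cos(candidate, p) for p in cluster_protos)
print(fit_clustered > fit_single)  # → True
```

The clustered score is higher because the candidate is compared against the prototype of the sense it actually fits, rather than an average blurred across senses.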
  • A Compositional and Interpretable Semantic Space. Authors: Alona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, Tom M. Mitchell
    Vector Space Models (VSMs) of Semantics are useful tools for exploring the semantics of single words, and the composition of words to make phrasal meaning. While many methods can estimate the meaning (i.e. vector) of a phrase, few do so in an interpretable way. We introduce a new method (CNNSE) that allows word and phrase vectors to adapt to the notion of composition. Our method learns a VSM that is both tailored to support a chosen semantic composition operation, and whose resulting features have an intuitive interpretation. Interpretability allows for the exploration of phrasal semantics, which we leverage to analyze performance on a behavioral task.
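The idea of a composition-friendly, interpretable space can be sketched as follows. The dimension labels, word vectors, and the choice of addition as the composition operator are all invented here for illustration; interpretable VSMs typically label dimensions by their top-scoring words, and CNNSE learns the space so that a chosen composition operator behaves well.

```python
import numpy as np

# Hypothetical interpretable dimensions: each axis names a property.
dims = ["animal", "small", "sound", "building"]

word_vecs = {
    "tiny": np.array([0.0, 0.9, 0.0, 0.1]),
    "bird": np.array([0.8, 0.3, 0.6, 0.0]),
}

# Additive composition (one common choice of operator).
phrase = word_vecs["tiny"] + word_vecs["bird"]

# Interpretability: read off the phrase's strongest dimensions by name.
top = sorted(zip(dims, phrase), key=lambda t: -t[1])[:2]
print([name for name, _ in top])  # → ['small', 'animal']
```

Because each dimension has a nameable meaning, the composed phrase vector can be inspected directly, which is what enables the kind of error analysis on behavioral tasks that the abstract mentions.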