TechTalks from event: NAACL 2015
7A: Semantics
-
High-Order Low-Rank Tensors for Semantic Role Labeling

This paper introduces a tensor-based approach to semantic role labeling (SRL). The motivation behind the approach is to automatically induce a compact feature representation for words and their relations, tailoring it to the task. In this sense, our dimensionality reduction method provides a clear alternative to the traditional feature engineering approach used in SRL. To capture meaningful interactions between the argument, the predicate, their syntactic path, and the corresponding role label, we first compress each feature representation to a lower-dimensional space before assessing their interactions. This corresponds to using an overall cross-product feature representation and maintaining the associated parameters as a four-way low-rank tensor. The tensor parameters are optimized for SRL performance using standard online algorithms. Our tensor-based approach rivals the best performing system on the CoNLL-2009 shared task. In addition, we demonstrate that adding the representation tensor to a competitive tensor-free model yields a 2% absolute increase in F-score.
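As a rough illustration of the four-way low-rank factorization the abstract describes, the NumPy sketch below scores a single (argument, predicate, path, role) tuple. The feature dimensions, rank, and factor names (U, V, W, Z) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Rank-r decomposition of a four-way tensor T over the
# (argument, predicate, syntactic path, role) feature spaces:
#   T[a, p, s, l] = sum_k (U[k].a) * (V[k].p) * (W[k].s) * (Z[k].l)
# Dimensions and rank are illustrative, not the paper's settings.
rng = np.random.default_rng(0)
d_arg, d_pred, d_path, d_role, rank = 50, 50, 30, 20, 8

U = rng.normal(scale=0.1, size=(rank, d_arg))    # argument projections
V = rng.normal(scale=0.1, size=(rank, d_pred))   # predicate projections
W = rng.normal(scale=0.1, size=(rank, d_path))   # syntactic-path projections
Z = rng.normal(scale=0.1, size=(rank, d_role))   # role-label projections

def score(a, p, s, l):
    """Score one (argument, predicate, path, role) tuple.

    Each feature vector is first compressed to `rank` dimensions, then
    the compressed views are multiplied elementwise and summed --
    equivalent to contracting the full cross-product feature vector
    with the low-rank tensor, without ever materializing it.
    """
    return float(np.sum((U @ a) * (V @ p) * (W @ s) * (Z @ l)))

# Toy usage: one-hot feature vectors for a single candidate tuple.
a = np.eye(d_arg)[3]; p = np.eye(d_pred)[7]
s = np.eye(d_path)[2]; l = np.eye(d_role)[5]
print(score(a, p, s, l))
```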
-
Large-scale Semantic Parsing without Question-Answer Pairs

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.
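The toy sketch below conveys the general idea of grounding an ungrounded semantic graph against a knowledge base, with denotations used as a filter. The miniature KB, lexicon, and graph are invented for illustration and stand in very loosely for the paper's CCG-based pipeline and Freebase; none of the names come from the paper.

```python
from itertools import product

# Toy KB fragment standing in for Freebase: relation -> (subject, object)
# pairs. The triples, lexicon, and graph below are invented.
KB = {
    "people.person.place_of_birth": {("barack_obama", "honolulu")},
    "location.location.containedby": {("honolulu", "hawaii")},
}

# Ungrounded semantic graph for "Where was Obama born?": edges carry
# natural-language predicates, and ?x marks the answer variable.
UNGROUNDED = [("barack_obama", "born_in", "?x")]

# Candidate KB relations per natural-language predicate.
LEXICON = {
    "born_in": ["people.person.place_of_birth",
                "location.location.containedby"],
}

def denotation(edges):
    """Values of ?x satisfying every grounded edge (each edge mentions ?x)."""
    answers = None
    for subj, rel, obj in edges:
        pairs = KB.get(rel, set())
        matches = ({o for s, o in pairs if s == subj} if obj == "?x"
                   else {s for s, o in pairs if o == obj})
        answers = matches if answers is None else answers & matches
    return answers or set()

def ground(edges):
    """Enumerate relation assignments for the graph's edges and keep those
    with a non-empty denotation. Denotations act as weak supervision: a
    grounding that answers nothing against the KB is discarded."""
    options = [LEXICON[pred] for _, pred, _ in edges]
    for choice in product(*options):
        grounded = [(s, rel, o) for (s, _, o), rel in zip(edges, choice)]
        answers = denotation(grounded)
        if answers:
            yield grounded, answers

for grounded, answers in ground(UNGROUNDED):
    print(grounded, "->", answers)
# [('barack_obama', 'people.person.place_of_birth', '?x')] -> {'honolulu'}
```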
-
A Large Scale Evaluation of Distributional Semantic Models: Parameters, Interactions and Model Selection

This paper presents the results of a large-scale evaluation study of window-based Distributional Semantic Models on a wide variety of tasks. Our study combines a broad coverage of model parameters with a model selection methodology that is robust to overfitting and able to capture parameter interactions. We show that our strategy allows us to identify parameter configurations that achieve good performance across different datasets and tasks.
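As a loose illustration of selecting one DSM configuration that holds up across tasks, here is a minimal grid-search sketch that averages dev scores over several tasks before picking a winner. The parameter names, task names, and scoring stub are assumptions for the example, not the paper's actual search space or methodology.

```python
from itertools import product
from statistics import mean

# Illustrative search space for a window-based DSM; these parameter
# names and values are assumptions, not the paper's exact grid.
GRID = {
    "window":    [2, 5, 10],
    "weighting": ["raw", "ppmi"],
    "dims":      [100, 300, 500],
}

TASKS = ["similarity", "analogy", "categorization"]  # illustrative

def evaluate(config, task):
    # Stand-in scorer: a real study would train a DSM with `config` and
    # score it on `task`'s dev split. This dummy prefers mid-sized
    # windows and PPMI weighting so the sketch runs end to end.
    score = -abs(config["window"] - 5) - abs(config["dims"] - 300) / 100
    return score + (1.0 if config["weighting"] == "ppmi" else 0.0)

def select(grid, tasks):
    """Pick the configuration with the best mean dev score across tasks.

    Averaging across several tasks, instead of tuning per dataset, is
    one simple guard against overfitting model selection to a single
    benchmark.
    """
    configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
    return max(configs, key=lambda c: mean(evaluate(c, t) for t in tasks))

print(select(GRID, TASKS))
# -> {'window': 5, 'weighting': 'ppmi', 'dims': 300}
```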