Dr. JOAKIM NIVRE


Current Trends in Data-Driven Dependency Parsing.

Dependency-based syntactic parsing has become a standard technique in natural language processing, and a number of different models have been proposed in recent years, in particular data-driven models that can be trained on syntactically annotated corpora, or treebanks. Most of these models can be characterized as either graph-based or transition-based. Graph-based models learn to score entire dependency trees and use exact search to find the best tree for a given input sentence. Transition-based models learn to score local parsing actions and use greedy search to find the best sequence of actions for a given input sentence. Both types of models achieve state-of-the-art accuracy, but a comparative error analysis reveals that they have different error distributions and that this difference can be tied to theoretical properties of the models. Recent work on data-driven parsing has therefore largely been characterized by attempts to combine the strengths of the two approaches, either through the development of hybrid models or through system combination. In this talk, I review the classic graph-based and transition-based models, characterize their typical strengths and weaknesses, and report on recent work aiming to improve the basic models.
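
To make the contrast concrete, the following is a minimal sketch, in Python, of the greedy transition-based parsing loop described above, assuming an arc-standard transition system: at each step the parser scores only the locally legal actions and commits to the highest-scoring one, never revising earlier decisions. The function names (legal_actions, score_actions, parse), the fixed placeholder scores standing in for a trained classifier, and the example sentence are illustrative assumptions, not part of the talk.

    # Minimal greedy arc-standard transition-based parser (illustrative sketch only).
    # score_actions is a placeholder for a classifier trained on a treebank.

    SHIFT, LEFT_ARC, RIGHT_ARC = "SHIFT", "LEFT-ARC", "RIGHT-ARC"

    def legal_actions(stack, buffer):
        # Which transitions are permitted in the current configuration?
        actions = []
        if buffer:
            actions.append(SHIFT)
        if len(stack) >= 2:
            actions.append(RIGHT_ARC)       # top of stack becomes dependent of the word below it
            if stack[-2] != 0:              # the artificial root (index 0) never takes a head
                actions.append(LEFT_ARC)
        return actions

    def score_actions(actions):
        # Placeholder scores; a real data-driven parser scores each action with a
        # learned model over features of the configuration (stack/buffer words, arcs, ...).
        fixed = {SHIFT: 1.0, RIGHT_ARC: 0.5, LEFT_ARC: 0.4}
        return {a: fixed[a] for a in actions}

    def parse(words):
        # Greedy search: one pass over the sentence, one committed action per step.
        stack, buffer = [0], list(range(1, len(words) + 1))
        heads = {}                          # dependent index -> head index
        while buffer or len(stack) > 1:
            actions = legal_actions(stack, buffer)
            scores = score_actions(actions)
            best = max(actions, key=scores.get)
            if best == SHIFT:
                stack.append(buffer.pop(0))
            elif best == LEFT_ARC:
                heads[stack.pop(-2)] = stack[-1]
            else:                           # RIGHT_ARC
                heads[stack.pop()] = stack[-1]
        return heads

    print(parse(["Economic", "news", "had", "little", "effect"]))
    # -> {5: 4, 4: 3, 3: 2, 2: 1, 1: 0} with the placeholder scores

A graph-based parser would instead assign scores to candidate trees (typically factored into scores over individual head-dependent arcs) and search the full space exactly, for example with a maximum spanning tree algorithm, rather than committing to one local action at a time.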