arXiv:1808.09180 [cs.CL]

What do character-level models learn about morphology? The case of dependency parsing

Clara Vania, Andreas Grivas, Adam Lopez

Published 2018-08-28 (Version 1)

When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology. We test these claims by comparing character-level models to an oracle with access to explicit morphological analysis on twelve languages with varying morphological typologies. Our results highlight many strengths of character-level models, but also show that they are poor at disambiguating some words, particularly in the face of case syncretism. We then demonstrate that explicitly modeling morphological case improves our best model, showing that character-level models can benefit from targeted forms of explicit morphological modeling.
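The comparison described above can be sketched as two input views of the same word: a character-level model sees only the character sequence and must infer morphology itself, while the oracle also receives an explicit morphological analysis. A minimal illustrative sketch in Python (the German example word and its feature set are hypothetical choices for illustration, not taken from the paper's data):

```python
# Two input views of a word, as contrasted in the abstract:
# (1) character-level: the word as a bare sequence of characters;
# (2) oracle: the word plus an explicit morphological analysis.

def char_view(word):
    """Character-level input: morphology must be inferred from characters."""
    return list(word)

def oracle_view(word, analysis):
    """Oracle input: explicit lemma and morphological features are supplied."""
    return {"form": word, **analysis}

# "Frauen" is case-syncretic: the same surface form serves all four German
# plural cases, so characters alone cannot disambiguate case -- the weakness
# the abstract highlights. The gold analysis below is a hypothetical example.
word = "Frauen"
analysis = {"lemma": "Frau", "number": "Plur", "case": "Acc"}

print(char_view(word))           # ['F', 'r', 'a', 'u', 'e', 'n']
print(oracle_view(word, analysis))
```

The paper's finding that explicitly modeling case helps the best model corresponds, in this sketch, to adding only the `case` feature to the character-level view rather than the full analysis.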

Related articles:
arXiv:2005.01330 [cs.CL] (Published 2020-05-04)
From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of Parsing Morphologically-Rich Languages (MRLs)?
arXiv:1606.01280 [cs.CL] (Published 2016-06-03)
Dependency Parsing as Head Selection
arXiv:1608.02076 [cs.CL] (Published 2016-08-06)
Bi-directional Attention with Agreement for Dependency Parsing