Van Miltenburg et al. (2021) suggest NLP research should adopt preregist...
Language models are defined over a finite set of inputs, which creates a...
In recent years, the natural language processing (NLP) community has giv...
Various efforts in the Natural Language Processing (NLP) community have ...
In this work, we investigate the knowledge learned in the embeddings of ...
Large multilingual pretrained language models such as mBERT and XLM-RoBE...
With an increase of dataset availability, the potential for learning fro...
Creole languages such as Nigerian Pidgin English and Haitian Creole are ...
This work introduces Itihasa, a large-scale translation dataset containi...
Building robust natural language understanding systems will require a cl...
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Unive...
Transition-based and graph-based dependency parsers have previously been...
This article is a linguistic investigation of a neural parser. We look a...
The need for tree structure modelling on top of sequence modelling is an...
We present the Uppsala system for the CoNLL 2018 Shared Task on universa...
Punctuation is a strong indicator of syntactic structure, and parsers tr...
We provide a comprehensive analysis of the interactions between pre-trai...
Previous work has suggested that parameter sharing between transition-ba...
How to make the most of multiple heterogeneous treebanks when training a...