Introduction

ELMo is a deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). These word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. They can be easily added to existing models and significantly improve the state of the art across a broad range of challenging NLP problems, including question answering, textual entailment and sentiment analysis.
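Concretely, the downstream task learns a softmax-normalized weight for each biLM layer plus a scalar scale factor, and a token's ELMo vector is the weighted sum of its layer activations. A minimal NumPy sketch of this layer combination (the function name and toy shapes are illustrative, not the released API):

```python
import numpy as np

def elmo_combination(layer_activations, task_weights, gamma):
    """Collapse the biLM layers into one ELMo vector per token.

    layer_activations: (num_layers, seq_len, dim) hidden states from the
        pre-trained biLM (character-CNN token layer plus both LSTM layers).
    task_weights: unnormalized per-layer scalars learned by the downstream task.
    gamma: task-specific scale factor, also learned by the downstream task.
    """
    # Softmax-normalize the layer weights.
    s = np.exp(task_weights - np.max(task_weights))
    s /= s.sum()
    # Weighted sum over the layer axis, then scale.
    return gamma * np.tensordot(s, layer_activations, axes=([0], [0]))

# Toy example: a 3-layer biLM over a 5-token sentence with 1024-dim states.
layers = np.random.randn(3, 5, 1024)
print(elmo_combination(layers, np.zeros(3), 1.0).shape)  # (5, 1024)
```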

Salient features

ELMo representations are:

Contextual: The representation for each word depends on the entire context in which it is used.
Deep: The word representations combine all layers of a deep pre-trained neural network.
Character based: ELMo representations are purely character based, allowing the network to use morphological clues to form robust representations for out-of-vocabulary tokens unseen in training.

Key result

Adding ELMo to existing NLP systems significantly improves the state of the art on every task considered. In most cases, ELMo representations can simply be swapped in for pre-trained GloVe or other word vectors; a sketch of this integration follows the table below.

| Task | Previous SOTA | SOTA score | Our baseline | ELMo + baseline | Increase (absolute / relative) |
| --- | --- | --- | --- | --- | --- |
| SQuAD | SAN | 84.4 | 81.1 | 85.8 | 4.7 / 24.9% |
| SNLI | Chen et al. (2017) | 88.6 | 88.0 | 88.7 +/- 0.17 | 0.7 / 5.8% |
| SRL | He et al. (2017) | 81.7 | 81.4 | 84.6 | 3.2 / 17.2% |
| Coref | Lee et al. (2017) | 67.2 | 67.2 | 70.4 | 3.2 / 9.8% |
| NER | Peters et al. (2017) | 91.93 +/- 0.19 | 90.15 | 92.22 +/- 0.10 | 2.06 / 21% |
| Sentiment (5-class) | McCann et al. (2017) | 53.7 | 51.4 | 54.7 +/- 0.5 | 3.3 / 6.8% |
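The paper's simplest integration does exactly this swap: the frozen biLM's ELMo vector for each token is concatenated with the token's existing context-independent embedding before the task model's first layer. A hedged NumPy sketch with illustrative dimensions:

```python
import numpy as np

# Illustrative inputs for one 5-token sentence (dimensions are examples only):
glove_embeddings = np.random.randn(5, 300)   # existing context-independent vectors
elmo_vectors = np.random.randn(5, 1024)      # frozen biLM states collapsed by the learned layer weights

# The enhanced input to the task model is the concatenation [x_k ; ELMo_k] per token.
enhanced = np.concatenate([glove_embeddings, elmo_vectors], axis=-1)
print(enhanced.shape)  # (5, 1324)
```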

Pre-trained models

There are reference implementations of the pre-trained bidirectional language model available in both PyTorch and TensorFlow. The PyTorch version is fully integrated into AllenNLP, with a detailed tutorial available. The TensorFlow version is available in the bilm-tf repository.
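For the PyTorch version, a minimal sketch of the AllenNLP interface (the options and weight file paths are placeholders for the published model files; check the AllenNLP tutorial for the current download locations and exact signature):

```python
from allennlp.modules.elmo import Elmo, batch_to_ids

# Placeholder paths: download the published options/weights files from the AllenNLP site.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# num_output_representations=2 gives two independently weighted mixes of the biLM layers,
# e.g. one for a model's input layer and one for its output layer.
elmo = Elmo(options_file, weight_file, num_output_representations=2, dropout=0)

sentences = [["First", "sentence", "."], ["Another", "one"]]
character_ids = batch_to_ids(sentences)   # (batch, max_tokens, 50) character ids

embeddings = elmo(character_ids)
# embeddings["elmo_representations"] is a list of two tensors,
# each of shape (batch, max_tokens, 1024) for the original model.
print(embeddings["elmo_representations"][0].shape)
```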

More information

See our paper Deep contextualized word representations for more information about the algorithm and a detailed analysis.