Named entity recognition | NLP-progress

Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.


Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.

Example:

| Mark  | Watney | visited | Mars  |
| ----- | ------ | ------- | ----- |
| B-PER | I-PER  | O       | B-LOC |
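Decoding a BIO sequence back into typed spans is a common preprocessing step for evaluation. A minimal sketch (the function name is ours, not from any particular toolkit):

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (type, start, end) spans, end exclusive.

    A new span starts at every B- tag, and also at an I- tag whose type
    differs from the open span (a common convention for repairing
    ill-formed sequences).
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag == "O":
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if etype is not None:
        spans.append((etype, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# [('PER', 0, 2), ('LOC', 3, 4)]
```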

CoNLL 2003 (English)

The CoNLL 2003 NER task consists of newswire text from the Reuters RCV1 corpus tagged with four different entity types (PER, LOC, ORG, MISC). Models are evaluated based on span-based F1 on the test set. ♩ used both the train and development splits for training.
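Span-based F1 credits a prediction only when both the entity boundaries and the type match a gold span exactly; partial overlaps count as errors. A minimal micro-averaged sketch (not the official conlleval scorer; spans are represented as (type, start, end) tuples and the function name is ours):

```python
def span_f1(gold_spans, pred_spans):
    """Micro-averaged span-level precision, recall, and F1 over a corpus.

    Each argument is a list (one entry per sentence) of sets of
    (type, start, end) tuples; a true positive is an exact match.
    """
    tp = sum(len(g & p) for g, p in zip(gold_spans, pred_spans))
    n_gold = sum(len(g) for g in gold_spans)
    n_pred = sum(len(p) for p in pred_spans)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [{("PER", 0, 2), ("LOC", 3, 4)}]
pred = [{("PER", 0, 2), ("ORG", 3, 4)}]  # right span, wrong type -> error
print(span_f1(gold, pred))  # (0.5, 0.5, 0.5)
```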

| Model | F1 | Paper / Source | Code |
| ----- | -- | -------------- | ---- |
| ACE + document-context (Wang et al., 2021) | 94.6 | Automated Concatenation of Embeddings for Structured Prediction | Official |
| LUKE (Yamada et al., 2020) | 94.3 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | Official |
| CL-KL (Wang et al., 2021) | 93.85 | Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning | Official |
| InferNER (Moemmur et al., 2021) | 93.76 | InferNER: an attentive model leveraging the sentence-level information for Named Entity Recognition in Microblogs | |
| ACE (Wang et al., 2021) | 93.6 | Automated Concatenation of Embeddings for Structured Prediction | Official |
| CNN Large + fine-tune (Baevski et al., 2019) | 93.5 | Cloze-driven Pretraining of Self-attention Networks | |
| RNN-CRF+Flair | 93.47 | Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition | |
| CrossWeigh + Flair (Wang et al., 2019) ♩ | 93.43 | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Official |
| LSTM-CRF+ELMo+BERT+Flair | 93.38 | Neural Architectures for Nested NER through Linearization | Official |
| Flair embeddings (Akbik et al., 2018) ♩ | 93.09 | Contextual String Embeddings for Sequence Labeling | Flair framework |
| BERT Large (Devlin et al., 2018) | 92.8 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
| CVT + Multi-Task (Clark et al., 2018) | 92.61 | Semi-Supervised Sequence Modeling with Cross-View Training | Official |
| BERT Base (Devlin et al., 2018) | 92.4 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
| BiLSTM-CRF+ELMo (Peters et al., 2018) | 92.22 | Deep contextualized word representations | AllenNLP Project, AllenNLP GitHub |
| Peters et al. (2017) ♩ | 91.93 | Semi-supervised sequence tagging with bidirectional language models | |
| CRF + AutoEncoder (Wu et al., 2018) | 91.87 | Evaluating the Utility of Hand-crafted Features in Sequence Labelling | Official |
| Bi-LSTM-CRF + Lexical Features (Ghaddar and Langlais 2018) | 91.73 | Robust Lexical Features for Improved Neural Network Named-Entity Recognition | Official |
| BiLSTM-CRF + IntNet (Xin et al., 2018) | 91.64 | Learning Better Internal Structure of Words for Sequence Labeling | |
| Chiu and Nichols (2016) ♩ | 91.62 | Named entity recognition with bidirectional LSTM-CNNs | |
| HSCRF (Ye and Ling, 2018) | 91.38 | Hybrid semi-Markov CRF for Neural Sequence Labeling | HSCRF |
| IXA pipes (Agerri and Rigau 2016) | 91.36 | Robust multilingual Named Entity Recognition with shallow semi-supervised features | Official |
| NCRF++ (Yang and Zhang, 2018) | 91.35 | NCRF++: An Open-source Neural Sequence Labeling Toolkit | NCRF++ |
| Yang et al. (2017) ♩ | 91.26 | Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks | |
| LM-LSTM-CRF (Liu et al., 2018) | 91.24 | Empowering Character-aware Sequence Labeling with Task-Aware Neural Language Model | LM-LSTM-CRF |
| Ma and Hovy (2016) | 91.21 | End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF | |
| LSTM-CRF (Lample et al., 2016) | 90.94 | Neural Architectures for Named Entity Recognition | |

CoNLL++

This is a cleaner version of the CoNLL 2003 NER task, where about 5% of instances in the test set are corrected due to mislabelling. The training set is left untouched. Models are evaluated based on span-based F1 on the test set. ♩ used both the train and development splits for training.

Links: CoNLL++ (including direct download links for data)

Long-tail emerging entities

The WNUT 2017 Emerging Entities task operates over a wide range of English text and focuses on generalisation beyond memorisation in high-variance environments. Scores are given both over entity chunk instances, and unique entity surface forms, to normalise the biasing impact of entities that occur frequently.
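Scoring over unique surface forms can be illustrated by collapsing each (surface form, type) pair to a single instance before computing F1, so that a frequently repeated entity counts only once. A hedged sketch (not the official WNUT scoring script; the function name is ours):

```python
def surface_form_f1(gold, pred):
    """F1 over unique (surface form, type) pairs.

    gold and pred are lists of (surface, type) tuples; deduplicating
    them normalises the biasing impact of entities that occur often.
    """
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# "london" predicted correctly three times still counts only once:
gold = [("london", "location"), ("london", "location"), ("zayn", "person")]
pred = [("london", "location"), ("london", "location"), ("london", "location")]
print(surface_form_f1(gold, pred))  # 0.6666666666666666
```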

| Feature   | Train  | Dev    | Test   |
| --------- | ------ | ------ | ------ |
| Posts     | 3,395  | 1,009  | 1,287  |
| Tokens    | 62,729 | 15,733 | 23,394 |
| NE tokens | 3,160  | 1,250  | 1,589  |

The data is annotated for six classes: person, location, group, creative work, product, and corporation.

Links: WNUT 2017 Emerging Entity task page (including direct download links for data and scoring script)

Ontonotes v5 (English)

The Ontonotes corpus v5 is a richly annotated corpus with several layers of annotation, including named entities, coreference, part of speech, word sense, propositions, and syntactic parse trees. These annotations are over a large number of tokens, a broad cross-section of domains, and 3 languages (English, Arabic, and Chinese). The NER dataset (of interest here) includes 18 tags, consisting of 11 types (PERSON, ORGANIZATION, etc.) and 7 values (DATE, PERCENT, etc.), and contains 2 million tokens. The common data split used in NER is defined in Pradhan et al. (2013) and can be found here.

Few-NERD

Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities and 4,601,223 tokens. Three benchmark tasks are built:

  • Few-NERD (SUP) is a standard NER task;
  • Few-NERD (INTRA) is a few-shot NER task across different coarse-grained types;
  • Few-NERD (INTER) is a few-shot NER task within coarse-grained types.
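The INTRA/INTER distinction amounts to how fine-grained types are partitioned between the few-shot train and test splits. A toy sketch with hypothetical type names written in Few-NERD's coarse-fine style (the real dataset has 66 fine-grained types under 8 coarse ones; see the linked repository for the actual splits):

```python
# Hypothetical fine-grained types in Few-NERD's "coarse-fine" naming style.
types = ["person-actor", "person-athlete", "location-city", "location-island"]

def coarse(t):
    """Extract the coarse-grained type from a coarse-fine type name."""
    return t.split("-")[0]

# INTRA: train and test share no coarse-grained types at all.
intra_train = [t for t in types if coarse(t) == "person"]
intra_test = [t for t in types if coarse(t) == "location"]

# INTER: coarse types are shared; only the fine-grained types are disjoint.
inter_train = ["person-actor", "location-city"]
inter_test = ["person-athlete", "location-island"]

assert not {coarse(t) for t in intra_train} & {coarse(t) for t in intra_test}
assert not set(inter_train) & set(inter_test)
assert {coarse(t) for t in inter_train} == {coarse(t) for t in inter_test}
```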

Website: Few-NERD page;

Download & code: https://github.com/thunlp/Few-NERD

Results on Few-NERD (SUP)

| Model | F1 | Paper / Source | Code |
| ----- | -- | -------------- | ---- |
| BERT-Tagger (Ding et al., 2021) | 68.88 | Few-NERD: A Few-shot Named Entity Recognition Dataset | Official |

Go back to the README