Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE)
- the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral (MacCartney and Manning 2008)
Natural Language Inference - Papers with Code
Benchmarking Datasets
Hugging Face 🤗 Datasets > Natural Language Inference
- The Stanford Natural Language Inference (SNLI) Corpus
- 570k human-written English sentence pairs
- manually labeled for balanced classification
- labels entailment, contradiction, and neutral
 - a benchmark for evaluating representational systems for text (especially those induced by representation-learning methods) and a resource for developing NLP models of any kind
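As a concrete sketch of the three-way labeling scheme SNLI uses: the sentence pairs below are invented for illustration (not corpus examples), and the 0/1/2 integer coding is an assumption following common practice rather than the corpus's raw file format.

```python
# Invented SNLI-style sentence pairs illustrating the three labels.
# The 0/1/2 coding is an assumed convention, not the corpus's raw format.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

examples = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A man is performing music.",
     "label": 0},   # must be true given the premise
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The man is a famous rock star.",
     "label": 1},   # could be true, but the premise does not say
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The man is asleep in bed.",
     "label": 2},   # cannot be true if the premise is true
]

for ex in examples:
    print(f'{LABELS[ex["label"]]:13} | {ex["hypothesis"]}')
```

One fixed premise paired with three hypotheses, one per label, mirrors how SNLI's balanced classification was collected.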
- MultiNLI
- Multi-Genre Natural Language Inference (MultiNLI)
- crowd-sourced
- 433k sentence pairs
- annotated with textual entailment information
- modeled on the SNLI corpus
 - …differs in that it covers a range of genres of spoken and written text
 - …and supports a distinctive cross-genre generalization evaluation
 - served as the basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen
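The cross-genre evaluation above can be sketched as follows. The training-genre names follow MultiNLI's published splits, but the records are invented and the partitioning is a hypothetical illustration, not the corpus's actual evaluation code.

```python
# Hypothetical sketch of MultiNLI's cross-genre evaluation: models train on
# five genres and are also evaluated on genres never seen in training.
# Training ("matched") genres, per the MultiNLI corpus description:
TRAIN_GENRES = {"fiction", "government", "slate", "telephone", "travel"}

# Invented records; only the genre labels mirror the corpus.
records = [
    {"genre": "fiction",  "premise": "He shut the door quietly.",
     "hypothesis": "A door was closed.", "label": "entailment"},
    {"genre": "travel",   "premise": "The museum opens at nine.",
     "hypothesis": "The museum never opens.", "label": "contradiction"},
    {"genre": "letters",  "premise": "Thank you for your generous donation.",
     "hypothesis": "A donation was made.", "label": "entailment"},
    {"genre": "verbatim", "premise": "The committee adjourned at noon.",
     "hypothesis": "The meeting ran past midnight.", "label": "contradiction"},
]

# "Matched" evaluation pairs come from training genres;
# "mismatched" pairs come only from held-out genres.
matched    = [r for r in records if r["genre"] in TRAIN_GENRES]
mismatched = [r for r in records if r["genre"] not in TRAIN_GENRES]

print(len(matched), "matched /", len(mismatched), "mismatched")
```

The gap between a model's matched and mismatched accuracy is what the cross-genre generalization evaluation measures.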