Title: Impact of Tokenization on Language Models: An Analysis for Turkish
Authors: Cagri Toraman, Eyup Halit Yilmaz, Furkan Şahinuç, Oguzhan Ozcelik
Published: 19 April 2022
Link: http://arxiv.org/abs/2204.08832v1

Abstract

Tokenization is an important text preprocessing step that prepares input tokens for deep language models. WordPiece and BPE are the de facto methods employed by prominent models, such as BERT and GPT. However, the impact of tokenization can differ for morphologically rich languages, such as Turkic languages, where many words can be generated by adding prefixes and suffixes. We compare five tokenizers at different granularity levels, i.e. their outputs vary from the smallest pieces of characters to the surface form of words, including a Morphological-level tokenizer. We train these tokenizers and pretrain medium-sized language models using the RoBERTa pretraining procedure on the Turkish split of the OSCAR corpus. We then fine-tune our models on six downstream tasks. Our experiments, supported by statistical tests, reveal that the Morphological-level tokenizer performs competitively with the de facto tokenizers. Furthermore, we find that increasing the vocabulary size improves the performance of the Morphological- and Word-level tokenizers more than that of the de facto tokenizers. The ratio of the number of vocabulary parameters to the total number of model parameters can be empirically chosen as 20% for de facto tokenizers and 40% for the other tokenizers to obtain a reasonable trade-off between model size and performance.
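The vocabulary-parameter ratio mentioned above is straightforward to compute: the token-embedding matrix contributes vocab_size × hidden_size parameters. A minimal sketch, with assumed illustrative numbers (hidden size 768 and ~52M non-embedding parameters for a medium RoBERTa-style model; these figures are not taken from the paper):

```python
# Illustrative (assumed) numbers: a medium RoBERTa-style model with
# hidden size 768 and roughly 52M non-embedding parameters.
HIDDEN_SIZE = 768
NON_EMBEDDING_PARAMS = 52_000_000  # assumed, not from the paper

def vocab_param_ratio(vocab_size: int) -> float:
    """Fraction of total parameters taken by the token-embedding matrix."""
    vocab_params = vocab_size * HIDDEN_SIZE
    total = vocab_params + NON_EMBEDDING_PARAMS
    return vocab_params / total

# Larger vocabularies push the ratio up; under these assumed numbers,
# 16k tokens lands near 20% and 64k tokens near 50%.
for v in (16_000, 32_000, 64_000):
    print(f"{v:6d} tokens -> ratio {vocab_param_ratio(v):.2f}")
```

This makes concrete why the choice of vocabulary size is a size/performance trade-off: doubling the vocabulary roughly doubles the embedding parameters while leaving the rest of the model unchanged.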


The impact of tokenization algorithms can differ for low-resource languages, such as the agglutinative Turkic and Uralic languages, where words can take prefixes and suffixes.

For instance, in Turkish, parsing the word “veremedim” (“I could not give”) yields “ver-e-me-di-m”: the root “ver” followed by four suffixes in a single word. A Morphological-level tokenizer can output five tokens in this case, giving the model a better understanding of word semantics.

An example benefit is that the language model would relate the suffix “-me” to negation, similar to the word “not” in English.
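The running example can be sketched as follows. The segmentation here is hard-coded purely for illustration; a real Morphological-level tokenizer would rely on a morphological analyzer rather than a lookup table:

```python
# Toy sketch of Morphological-level tokenization for the running example.
# The segmentation is hard-coded for illustration only; a real system
# would call a Turkish morphological analyzer, not this dictionary.
MORPH_SEGMENTS = {
    "veremedim": ["ver", "e", "me", "di", "m"],  # root + four suffixes
}

def morph_tokenize(word: str) -> list[str]:
    # Fall back to the surface form when no analysis is available.
    return MORPH_SEGMENTS.get(word, [word])

print(morph_tokenize("veremedim"))  # → ['ver', 'e', 'me', 'di', 'm']
```

Because the negation suffix “me” surfaces as its own token, it can receive its own embedding and be related to negation across many different words.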

To answer our research questions, we compare the performance of different tokenization methods for Turkish. We select five tokenizers at different granularity levels, i.e. their outputs vary from the smallest pieces (characters) to the surface form (words): Character-level, BPE, WordPiece, Morphological-level, and Word-level tokenization, respectively.
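The five granularity levels can be contrasted on a single word. In the sketch below, the Character- and Word-level outputs are exact, while the BPE and WordPiece splits are invented for illustration (real subword merges depend entirely on the trained vocabulary):

```python
# The five granularity levels on one word. BPE/WordPiece splits are
# assumed examples; actual splits depend on the trained vocabulary.
word = "veremedim"

tokenizations = {
    "Character-level":          list(word),
    "BPE (illustrative)":       ["vere", "medim"],      # assumed merges
    "WordPiece (illustrative)": ["vere", "##medim"],    # assumed split
    "Morphological-level":      ["ver", "e", "me", "di", "m"],
    "Word-level":               [word],
}

for name, tokens in tokenizations.items():
    print(f"{name:26s} {tokens}")
```

The spectrum runs from nine character tokens down to a single word token, with the subword and morphological tokenizers in between; only the morphological split aligns token boundaries with suffix boundaries.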