Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a “Fast” implementation based on the Rust library 🤗 Tokenizers. The “Fast” implementations allow:
- a significant speed-up, in particular when doing batched tokenization, and
- additional methods to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving Python and “Fast” tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository). They both rely on PreTrainedTokenizerBase, which contains the common methods, and on SpecialTokensMixin.
PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:
- Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization.
BatchEncoding holds the output of the PreTrainedTokenizerBase’s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (`input_ids`, `attention_mask`, …). When the tokenizer is a “Fast” tokenizer (i.e., backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
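For a quick feel of the two flavors, here is a minimal sketch (the exact ids shown are illustrative and depend on the checkpoint):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")  # loads a "Fast" tokenizer by default
encoding = tokenizer("Hello world!")  # returns a BatchEncoding

print(encoding["input_ids"])       # token ids, e.g. [101, 7592, 2088, 999, 102]
print(encoding["attention_mask"])  # [1, 1, 1, 1, 1]

# Alignment helpers are only available on "Fast" tokenizers:
print(encoding.token_to_chars(1))  # CharSpan for the token covering "Hello"
```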
PreTrainedTokenizer
class transformers.PreTrainedTokenizer
( **kwargs )
Base class for all slow tokenizers.
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, SentencePiece…).
Class attributes (overridden by derived classes)
- vocab_files_names (`Dict[str, str]`) — A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
- pretrained_vocab_files_map (`Dict[str, Dict[str, str]]`) — A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file.
- model_input_names (`List[str]`) — A list of inputs expected in the forward pass of the model.
- padding_side (`str`) — The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`.
- truncation_side (`str`) — The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`.
__call__
( text: Union = None, text_pair: Union = None, text_target: Union = None, text_pair_target: Union = None, add_special_tokens: bool = True, padding: Union = False, truncation: Union = None, max_length: Optional = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional = None, return_tensors: Union = None, return_token_type_ids: Optional = None, return_attention_mask: Optional = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs ) → BatchEncoding
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
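A short sketch of typical calls (the checkpoint is an example; `return_tensors="pt"` requires PyTorch):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

single = tokenizer("Hello world!")                                      # one sequence
pair = tokenizer("What is a tokenizer?", "It prepares model inputs.")   # a sequence pair

batch = tokenizer(
    ["Short text.", "A somewhat longer piece of text."],
    padding=True,         # pad to the longest sequence in the batch
    truncation=True,      # truncate to the model maximum length
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape)  # (2, sequence_length)
```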
add_tokens
( new_tokens: Union, special_tokens: bool = False ) → int
Parameters
- new_tokens (`str`, `tokenizers.AddedToken` or a list of `str` or `tokenizers.AddedToken`) — Tokens are only added if they are not already in the vocabulary. `tokenizers.AddedToken` wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.
- special_tokens (`bool`, optional, defaults to `False`) — Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased, for instance). See details for `tokenizers.AddedToken` in the HuggingFace tokenizers library.
Number of tokens added to the vocabulary.
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.
Note that when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that it matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Examples:
```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Resize the model's embedding matrix to account for the new tokens
model.resize_token_embeddings(len(tokenizer))
```
add_special_tokens
( special_tokens_dict: Dict, replace_additional_special_tokens = True ) → int
Parameters
- special_tokens_dict (dictionary str to str or `tokenizers.AddedToken`) — Keys should be in the list of predefined special attributes: [`bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`, `additional_special_tokens`]. Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the `unk_token` to them).
- replace_additional_special_tokens (`bool`, optional, defaults to `True`) — If `True`, the existing list of additional special tokens will be replaced by the list provided in `special_tokens_dict`. Otherwise, `self._additional_special_tokens` is just extended. In the former case, the tokens will NOT be removed from the tokenizer’s full vocabulary - they are only being flagged as non-special tokens. Remember, this only affects which tokens are skipped during decoding, not the `added_tokens_encoder` and `added_tokens_decoder`. This means that the previous `additional_special_tokens` are still added tokens, and will not be split by the model.
Number of tokens added to the vocabulary.
Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Using `add_special_tokens` will ensure your special tokens can be used in several ways:
- Special tokens can be skipped when decoding using `skip_special_tokens = True`.
- Special tokens are carefully handled by the tokenizer (they are never split), similar to `AddedTokens`.
- You can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer’s `cls_token` is already registered to be `'[CLS]'` and XLM’s one is also registered to be `'</s>'`).
Examples:
```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Resize the model's embedding matrix to account for the new token
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"
```
apply_chat_template
( conversation: Union, chat_template: Optional = None, add_generation_prompt: bool = False, tokenize: bool = True, padding: bool = False, truncation: bool = False, max_length: Optional = None, return_tensors: Union = None, return_dict: bool = False, tokenizer_kwargs: Optional = None, **kwargs ) → Union[List[int], Dict]
Parameters
- conversation (`Union[List[Dict[str, str]], List[List[Dict[str, str]]], Conversation]`) — A list of dicts with “role” and “content” keys, representing the chat history so far.
- chat_template (`str`, optional) — A Jinja template to use for this conversion. If this is not passed, the model’s default chat template will be used instead.
- add_generation_prompt (`bool`, optional) — Whether to end the prompt with the token(s) that indicate the start of an assistant message. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect.
- tokenize (`bool`, defaults to `True`) — Whether to tokenize the output. If `False`, the output will be a string.
- padding (`bool`, defaults to `False`) — Whether to pad sequences to the maximum length. Has no effect if tokenize is `False`.
- truncation (`bool`, defaults to `False`) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is `False`.
- max_length (`int`, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is `False`. If not specified, the tokenizer’s `max_length` attribute will be used as a default.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is `False`. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.Tensor` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return NumPy `np.ndarray` objects.
  - `'jax'`: Return JAX `jnp.ndarray` objects.
- return_dict (`bool`, defaults to `False`) — Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
- tokenizer_kwargs (`Dict[str, Any]`, optional) — Additional kwargs to pass to the tokenizer.
- **kwargs — Additional kwargs to pass to the template renderer. Will be accessible by the chat template.
Returns
Union[List[int], Dict]
A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like `generate()`. If `return_dict` is set, will return a dict of tokenizer outputs instead.
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. When chat_template is None, it will fall back to the default_chat_template specified at the class level.
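A minimal sketch of rendering a chat prompt (the checkpoint is just an example; any model that ships a chat template works):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "Tell me a joke."},
]

# tokenize=False returns the rendered prompt string instead of token ids;
# add_generation_prompt=True appends the tokens that start an assistant turn.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```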
batch_decode
( sequences: Union, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs ) → List[str]
Parameters
- sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
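For instance (output is illustrative; this checkpoint lower-cases input):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
batch = tokenizer(["Hello world!", "How are you?"])

texts = tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True)
print(texts)  # e.g. ['hello world!', 'how are you?']
```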
decode
( token_ids: Union, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs ) → str
Parameters
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
encode
( text: Union, text_pair: Union = None, add_special_tokens: bool = True, padding: Union = False, truncation: Union = None, max_length: Optional = None, stride: int = 0, return_tensors: Union = None, **kwargs ) → `List[int]`, `torch.Tensor`, `tf.Tensor` or `np.ndarray`
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
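A round trip through encode and decode might look like this (ids are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

ids = tokenizer.encode("Hello world!")  # adds [CLS]/[SEP] by default
print(ids)                              # e.g. [101, 7592, 2088, 999, 102]

print(tokenizer.decode(ids))                            # '[CLS] hello world! [SEP]'
print(tokenizer.decode(ids, skip_special_tokens=True))  # 'hello world!'
```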
push_to_hub
( repo_id: str, use_temp_dir: Optional = None, commit_message: Optional = None, private: Optional = None, token: Union = None, max_shard_size: Union = '5GB', create_pr: bool = False, safe_serialization: bool = True, revision: str = None, commit_description: str = None, tags: Optional = None, **deprecated_kwargs )
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Push the tokenizer to your namespace with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
```
convert_ids_to_tokens
( ids: Union, skip_special_tokens: bool = False ) → `str` or `List[str]`
Parameters
- ids (`int` or `List[int]`) — The token id (or token ids) to convert to tokens.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
The decoded token(s).
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
( tokens: Union ) → `int` or `List[int]`
Parameters
- tokens (`str` or `List[str]`) — One or several token(s) to convert to token id(s).
The token id or list of token ids.
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
get_added_vocab
( ) → Dict[str, int]
Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from the fast call because for now we always add the tokens even if they are already in the vocabulary. This is something we should change.
num_special_tokens_to_add
( pair: bool = False ) → int
Parameters
- pair (`bool`, optional, defaults to `False`) — Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.
Number of special tokens added to sequences.
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
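For example, on a BERT-style tokenizer (values shown assume the usual [CLS]/[SEP] scheme):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

print(tokenizer.num_special_tokens_to_add())           # 2: [CLS] ... [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3: [CLS] A [SEP] B [SEP]
```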
prepare_for_tokenization
( text: str, is_split_into_words: bool = False, **kwargs ) → Tuple[str, Dict[str, Any]]
Parameters
- text (`str`) — The text to prepare.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- kwargs (`Dict[str, Any]`, optional) — Keyword arguments to use for the tokenization.
Returns
Tuple[str, Dict[str, Any]]
The prepared text and the unused kwargs.
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the `kwargs` at the end of the encoding process to be sure all the arguments have been used.
tokenize
( text: str, **kwargs ) → List[str]
Parameters
- text (`str`) — The sequence to be encoded.
- **kwargs (additional keyword arguments) — Passed along to the model-specific `prepare_for_tokenization` preprocessing method.
The list of tokens.
Converts a string into a sequence of tokens, using the tokenizer.
Splits into words for word-based vocabularies, or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.
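For instance, with a WordPiece vocabulary (the split shown is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer.tokenize("Tokenizers are fun!"))
# e.g. ['token', '##izer', '##s', 'are', 'fun', '!']
```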
PreTrainedTokenizerFast
The PreTrainedTokenizerFast depends on the tokenizers library. Tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
class transformers.PreTrainedTokenizerFast
( *args**kwargs )
Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers, as well as adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes)
- vocab_files_names (`Dict[str, str]`) — A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
- pretrained_vocab_files_map (`Dict[str, Dict[str, str]]`) — A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file.
- model_input_names (`List[str]`) — A list of inputs expected in the forward pass of the model.
- padding_side (`str`) — The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`.
- truncation_side (`str`) — The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`.
__call__
( text: Union = None, text_pair: Union = None, text_target: Union = None, text_pair_target: Union = None, add_special_tokens: bool = True, padding: Union = False, truncation: Union = None, max_length: Optional = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional = None, return_tensors: Union = None, return_token_type_ids: Optional = None, return_attention_mask: Optional = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs ) → BatchEncoding
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
add_tokens
( new_tokens: Union, special_tokens: bool = False ) → int
Parameters
- new_tokens (`str`, `tokenizers.AddedToken` or a list of `str` or `tokenizers.AddedToken`) — Tokens are only added if they are not already in the vocabulary. `tokenizers.AddedToken` wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.
- special_tokens (`bool`, optional, defaults to `False`) — Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased, for instance). See details for `tokenizers.AddedToken` in the HuggingFace tokenizers library.
Number of tokens added to the vocabulary.
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.
Note that when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that it matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Examples:
```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Resize the model's embedding matrix to account for the new tokens
model.resize_token_embeddings(len(tokenizer))
```
add_special_tokens
( special_tokens_dict: Dict, replace_additional_special_tokens = True ) → int
Parameters
- special_tokens_dict (dictionary str to str or `tokenizers.AddedToken`) — Keys should be in the list of predefined special attributes: [`bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`, `additional_special_tokens`]. Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the `unk_token` to them).
- replace_additional_special_tokens (`bool`, optional, defaults to `True`) — If `True`, the existing list of additional special tokens will be replaced by the list provided in `special_tokens_dict`. Otherwise, `self._additional_special_tokens` is just extended. In the former case, the tokens will NOT be removed from the tokenizer’s full vocabulary - they are only being flagged as non-special tokens. Remember, this only affects which tokens are skipped during decoding, not the `added_tokens_encoder` and `added_tokens_decoder`. This means that the previous `additional_special_tokens` are still added tokens, and will not be split by the model.
Number of tokens added to the vocabulary.
Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Using `add_special_tokens` will ensure your special tokens can be used in several ways:
- Special tokens can be skipped when decoding using `skip_special_tokens = True`.
- Special tokens are carefully handled by the tokenizer (they are never split), similar to `AddedTokens`.
- You can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer’s `cls_token` is already registered to be `'[CLS]'` and XLM’s one is also registered to be `'</s>'`).
Examples:
```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Resize the model's embedding matrix to account for the new token
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"
```
apply_chat_template
( conversation: Union, chat_template: Optional = None, add_generation_prompt: bool = False, tokenize: bool = True, padding: bool = False, truncation: bool = False, max_length: Optional = None, return_tensors: Union = None, return_dict: bool = False, tokenizer_kwargs: Optional = None, **kwargs ) → Union[List[int], Dict]
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. When chat_template is None, it will fall back to the default_chat_template specified at the class level.
batch_decode
( sequences: Union, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs ) → List[str]
Parameters
- sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
( token_ids: Union, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs ) → str
Parameters
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
encode
( text: Union, text_pair: Union = None, add_special_tokens: bool = True, padding: Union = False, truncation: Union = None, max_length: Optional = None, stride: int = 0, return_tensors: Union = None, **kwargs ) → `List[int]`, `torch.Tensor`, `tf.Tensor` or `np.ndarray`
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
push_to_hub
( repo_id: str, use_temp_dir: Optional = None, commit_message: Optional = None, private: Optional = None, token: Union = None, max_shard_size: Union = '5GB', create_pr: bool = False, safe_serialization: bool = True, revision: str = None, commit_description: str = None, tags: Optional = None, **deprecated_kwargs )
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Push the tokenizer to your namespace with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
```
convert_ids_to_tokens
( ids: Union, skip_special_tokens: bool = False ) → `str` or `List[str]`
Parameters
- ids (`int` or `List[int]`) — The token id (or token ids) to convert to tokens.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
The decoded token(s).
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
( tokens: Union ) → `int` or `List[int]`
Parameters
- tokens (`str` or `List[str]`) — One or several token(s) to convert to token id(s).
The token id or list of token ids.
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
get_added_vocab
( ) → Dict[str, int]
Returns the added tokens in the vocabulary as a dictionary of token to index.
num_special_tokens_to_add
( pair: bool = False ) → int
Parameters
- pair (`bool`, optional, defaults to `False`) — Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.
Number of special tokens added to sequences.
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
set_truncation_and_padding
( padding_strategy: PaddingStrategy, truncation_strategy: TruncationStrategy, max_length: int, stride: int, pad_to_multiple_of: Optional )
Parameters
- padding_strategy (PaddingStrategy) — The kind of padding that will be applied to the input.
- truncation_strategy (TruncationStrategy) — The kind of truncation that will be applied to the input.
- max_length (`int`) — The maximum size of a sequence.
- stride (`int`) — The stride to use when handling overflow.
- pad_to_multiple_of (`int`, optional) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.
The provided tokenizer has no padding/truncation strategy applied before the managed section. If your tokenizer had a padding/truncation strategy set before, it will be reset to no padding/truncation when exiting the managed section.
train_new_from_iterator
( text_iterator, vocab_size, length = None, new_special_tokens = None, special_tokens_map = None, **kwargs ) → PreTrainedTokenizerFast
Parameters
- text_iterator (generator of `List[str]`) — The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory.
- vocab_size (`int`) — The size of the vocabulary you want for your tokenizer.
- length (`int`, optional) — The total number of sequences in the iterator. This is used to provide meaningful progress tracking.
- new_special_tokens (list of `str` or `AddedToken`, optional) — A list of new special tokens to add to the tokenizer you are training.
- special_tokens_map (`Dict[str, str]`, optional) — If you want to rename some of the special tokens this tokenizer uses, pass along a mapping from old special token name to new special token name in this argument.
- kwargs (`Dict[str, Any]`, optional) — Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
A new tokenizer of the same type as the original one, trained on `text_iterator`.
Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.
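A minimal sketch of retraining a tokenizer on a new corpus (the tiny in-memory corpus, batch size, vocabulary size, and output directory here are placeholders):

```python
from transformers import AutoTokenizer

# Any iterable of strings works as a corpus; here a tiny in-memory list.
corpus = ["First training sentence.", "Second training sentence."]

def batch_iterator(batch_size=1000):
    # Yield the corpus in batches, as expected by train_new_from_iterator
    for i in range(0, len(corpus), batch_size):
        yield corpus[i : i + batch_size]

old_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
new_tokenizer = old_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=52000)
new_tokenizer.save_pretrained("my-new-tokenizer")  # hypothetical output directory
```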
BatchEncoding
class transformers.BatchEncoding
( data: Optional = None, encoding: Union = None, tensor_type: Union = None, prepend_batch_axis: bool = False, n_sequences: Optional = None )
Parameters
- data (`dict`, optional) — Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods (‘input_ids’, ‘attention_mask’, etc.).
- encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, optional) — If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character space to token space, the `tokenizers.Encoding` instance or list of instances (for batches) holds this information.
- tensor_type (`Union[None, str, TensorType]`, optional) — You can give a tensor_type here to convert the lists of integers to PyTorch/TensorFlow/NumPy tensors at initialization.
- prepend_batch_axis (`bool`, optional, defaults to `False`) — Whether or not to add a batch axis when converting to tensors (see `tensor_type` above).
- n_sequences (`Optional[int]`, optional) — The number of sequences encoded in this BatchEncoding.
Holds the output of the `__call__`(), encode_plus() and batch_encode_plus() methods (tokens, attention_masks, etc.).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.
char_to_token
( batch_or_char_index: int, char_index: Optional = None, sequence_index: int = 0 ) → int
Parameters
- batch_or_char_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the sequence.
- char_index (`int`, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Index of the token.
Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.
Can be called as:
- `self.char_to_token(char_index)` if batch size is 1
- `self.char_to_token(batch_index, char_index)` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
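For instance (indices are illustrative and count the leading [CLS] token):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoding = tokenizer("Hello world!")

# Which token covers character 6, the 'w' of "world"?
print(encoding.char_to_token(6))  # e.g. 2 ([CLS]=0, 'hello'=1, 'world'=2)
```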
char_to_word
( batch_or_char_index: int, char_index: Optional = None, sequence_index: int = 0 ) → `int` or `List[int]`
Parameters
- batch_or_char_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.
- char_index (`int`, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Index or indices of the associated word(s) in the original string.
Get the word in the original string corresponding to a character in the original string of a sequence of the batch.
Can be called as:
- `self.char_to_word(char_index)` if batch size is 1
- `self.char_to_word(batch_index, char_index)` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
convert_to_tensors
( tensor_type: Union = None, prepend_batch_axis: bool = False )
Parameters
- tensor_type (`str` or TensorType, optional) — The type of tensors to use. If `str`, should be one of the values of the enum TensorType. If `None`, no modification is done.
- prepend_batch_axis (`bool`, optional, defaults to `False`) — Whether or not to add the batch dimension during the conversion.
Convert the inner content to tensors.
sequence_ids
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding sequence.
Return a list mapping the tokens to the id of their original sentences:
- `None` for special tokens added around or between sequences,
- `0` for tokens corresponding to words in the first sequence,
- `1` for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.
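For a jointly encoded pair, the mapping might look like this (exact lengths depend on the tokenization):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoding = tokenizer("What is a tokenizer?", "It prepares model inputs.")

print(encoding.sequence_ids())
# e.g. [None, 0, 0, 0, 0, 0, None, 1, 1, 1, 1, 1, None]
```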
to
( device: Union ) → BatchEncoding
Parameters
- device (`str` or `torch.device`) — The device to put the tensors on.
The same instance after modification.
Send all values to device by calling `v.to(device)` (PyTorch only).
token_to_chars
( batch_or_token_index: int, token_index: Optional = None ) → CharSpan
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token or tokens in the sequence.
Span of characters in the original string, or None if the token (e.g., a special token such as `<s>` or `</s>`) doesn’t correspond to any characters in the original string.
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a CharSpan with:
- start — Index of the first character in the original string associated to the token.
- end — Index of the character following the last character in the original string associated to the token.
Can be called as:
- `self.token_to_chars(token_index)` if batch size is 1
- `self.token_to_chars(batch_index, token_index)` if batch size is greater or equal to 1
token_to_sequence
( batch_or_token_index: int, token_index: Optional = None ) → int
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Index of the sequence in the input sequence(s).
Get the index of the sequence represented by the given token. In the general use case, this method returns `0` for a single sequence or the first sequence of a pair, and `1` for the second sequence of a pair.
Can be called as:
- `self.token_to_sequence(token_index)` if batch size is 1
- `self.token_to_sequence(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
token_to_word
( batch_or_token_index: int, token_index: Optional = None ) → int
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Index of the word in the input sequence.
Get the index of the word corresponding to (i.e. comprising) an encoded token in a sequence of the batch.
Can be called as:
- `self.token_to_word(token_index)` if batch size is 1
- `self.token_to_word(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
tokens
( batch_index: int = 0 ) → List[str]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
The list of tokens at that index.
Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).
word_ids
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
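For instance (the split shown is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoding = tokenizer("Tokenizers are fun!")

print(encoding.tokens())    # e.g. ['[CLS]', 'token', '##izer', '##s', 'are', 'fun', '!', '[SEP]']
print(encoding.word_ids())  # e.g. [None, 0, 0, 0, 1, 2, 3, None]
```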
word_to_chars
( batch_or_word_index: int, word_index: Optional = None, sequence_index: int = 0 ) → `CharSpan` or `List[CharSpan]`
Parameters
- batch_or_word_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
- word_index (`int`, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
`CharSpan` or `List[CharSpan]`
Span(s) of the associated character or characters in the string. CharSpan is a NamedTuple with:
- start: index of the first character associated to the token in the original string
- end: index of the character following the last character associated to the token in the original string
Get the character span in the original string corresponding to a given word in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
Can be called as:
- `self.word_to_chars(word_index)` if batch size is 1
- `self.word_to_chars(batch_index, word_index)` if batch size is greater or equal to 1
word_to_tokens
( batch_or_word_index: int, word_index: Optional = None, sequence_index: int = 0 ) → (TokenSpan, optional)
Parameters
- batch_or_word_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
- word_index (`int`, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
(TokenSpan, optional)
Span of tokens in the encoded sequence. Returns `None` if no tokens correspond to the word. This can happen especially when the token is a special token that has been used to format the tokenization. For example when we add a class token at the very beginning of the tokenization.
Get the encoded token span corresponding to a word in a sequence of the batch.
Token spans are returned as a TokenSpan with:
- start — Index of the first token.
- end — Index of the token following the last token.
Can be called as:
- `self.word_to_tokens(word_index, sequence_index: int = 0)` if batch size is 1
- `self.word_to_tokens(batch_index, word_index, sequence_index: int = 0)` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
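For example (the span shown is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoding = tokenizer("Tokenizers are fun!")

# Token span covering word 0 ("Tokenizers"), which splits into several sub-words
print(encoding.word_to_tokens(0))  # e.g. TokenSpan(start=1, end=4)
```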
words
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.