Simpler, Clearer, and More Modular





Transformers v5 redesigns how tokenizers work. The big tokenizers refactor separates tokenizer design from trained vocabulary (much like how PyTorch separates neural network architecture from learned weights). The result is tokenizers you can inspect, customize, and train from scratch with far less friction.

TL;DR: This blog post explains how tokenization works in Transformers and why v5 is a major redesign, with clearer internals, a clean class hierarchy, and a single fast backend. It's a practical guide for anyone who wants to understand, customize, or train model-specific tokenizers instead of treating them as black boxes.




For experts: if you're already familiar with the concepts and just want to know what changes in v5, jump to v5 Separates Tokenizer Architecture from Trained Vocab.

Before diving into the changes, let's quickly cover what tokenization does and how the pieces fit together.



What’s tokenization?

Language models don't read raw text. They consume sequences of integers known as token IDs or input IDs. Tokenization is the process of converting raw text into these token IDs. (Try the tokenization playground here to visualize tokenization.)

Tokenization is a broad concept used across natural language processing and text processing in general. This post focuses specifically on tokenization for Large Language Models (LLMs) using the transformers and tokenizers libraries.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

text = "Hello world"
tokens = tokenizer(text)

print(tokens["input_ids"])
# a list of integer token IDs (the exact values depend on the vocabulary)

print(tokenizer.convert_ids_to_tokens(tokens["input_ids"]))
# the corresponding string pieces, e.g. ['Hello', 'Ġworld']

Ġworld (above) is a single token that represents the character sequence " world" (with the leading space).

A token is the smallest string unit the model sees. It could be a character, a word, or a subword chunk like "play" or "##ing" ("##" is a prefix convention; don't worry if you don't completely understand it now 🤗). The vocabulary maps each unique token to a token ID.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")
print(tokenizer.vocab)
# a large dict mapping each token string to its integer ID


A good tokenizer compresses text into as few tokens as possible. Fewer tokens mean more usable context without increasing model size. Training a tokenizer boils down to finding the best compression rules for your datasets. For instance, if you work on a Chinese corpus, you can sometimes find very nice surprises 😉.
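As a rough illustration (not a rigorous benchmark), you can count how many tokens different tokenizers need for the same text; the two checkpoints below are just examples:

from transformers import AutoTokenizer

text = "Tokenization quality is often measured in tokens per word or per byte."

# Compare token counts for the same sentence across two example tokenizers.
for name in ["HuggingFaceTB/SmolLM3-3B", "bert-base-uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tokenizer(text, add_special_tokens=False)["input_ids"])
    print(f"{name}: {n_tokens} tokens for {len(text.split())} words")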



The tokenization pipeline

Tokenization happens in stages. Each stage transforms the text before passing it to the next:

  • Normalizer: standardizes text (lowercasing, Unicode normalization, whitespace cleanup). Example: "HELLO World" → "hello world"
  • Pre-tokenizer: splits text into preliminary chunks. Example: "hello world" → ["hello", " world"]
  • Model: applies the tokenization algorithm (BPE, Unigram, etc.). Example: ["hello", " world"] → [9906, 1917]
  • Post-processor: adds special tokens (BOS, EOS, padding). Example: [9906, 1917] → [1, 9906, 1917, 2]
  • Decoder: converts token IDs back to text. Example: [9906, 1917] → "hello world"

Each component is independent. You can swap normalizers or change the algorithm without rewriting everything else; a small example of swapping a normalizer follows the next code block.

You can access the Rust-based tokenizer through the _tokenizer attribute. We go into more depth about it in this section.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m-it")

# Each pipeline component is exposed as a property of the underlying Rust tokenizer
print(f"{tokenizer._tokenizer.normalizer=}")
print(f"{tokenizer._tokenizer.pre_tokenizer=}")
print(f"{tokenizer._tokenizer.model=}")
print(f"{tokenizer._tokenizer.post_processor=}")
print(f"{tokenizer._tokenizer.decoder=}")
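For example, here is a minimal sketch that swaps the normalizer on the underlying Rust tokenizer using the normalizers module from the tokenizers library (purely illustrative; changing a trained tokenizer's pipeline will change its outputs):

from tokenizers import normalizers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m-it")

# Replace the normalizer with a lowercasing one, leaving the rest of the pipeline untouched.
tokenizer._tokenizer.normalizer = normalizers.Lowercase()

print(tokenizer.tokenize("HELLO World"))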



Tokenization algorithms

The following algorithms dominate modern language model tokenizers:

  1. Byte Pair Encoding (BPE) iteratively merges the most frequent character pairs. The algorithm is deterministic and widely used. (Read more about BPE)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
print(tokenizer._tokenizer.model)


  2. Unigram takes a probabilistic approach, selecting the most likely segmentation from a large initial vocabulary. This is more flexible than BPE's strict merges. (Read more about Unigram)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
print(tokenizer._tokenizer.model)


  3. WordPiece resembles BPE but uses a different merge criterion based on likelihood. (Read more about WordPiece)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer._tokenizer.model)
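To make this concrete, here is a small sketch that trains a tiny BPE tokenizer from scratch with the tokenizers library on an in-memory corpus (the corpus, vocabulary size, and special tokens are arbitrary):

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Build a blank BPE tokenizer and train it on a toy corpus to see the merges it learns.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

corpus = ["low lower lowest", "new newer newest", "wide wider widest"]
trainer = trainers.BpeTrainer(vocab_size=60, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

print(tokenizer.encode("lowest newest").tokens)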




Accessing tokenizers through transformers

The tokenizers library is a Rust-based tokenization engine. It's fast, efficient, and completely model-agnostic. The library handles the mechanics of converting text into token IDs and back. It is a general-purpose tool that implements the tokenization algorithms, but it doesn't implement the conventions that connect those algorithms to specific language models.

Consider what happens when you use tokenizers directly with the SmolLM3-3B model:

from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")
text = "Hello world"
encodings = tokenizer.encode(text)

print(encodings.ids)     # raw token IDs
print(encodings.tokens)  # the corresponding string pieces

The output is raw tokenization. You get token IDs and the string pieces they correspond to. Nothing more.

Now consider what's missing. SmolLM3-3B is a conversational model. When you interact with it, you typically structure your input as a conversation with roles like "user" and "assistant". The model expects special formatting tokens to indicate these roles. The raw tokenizers library has no concept of any of this.



How do you bridge the gap between raw tokenization and model requirements?

The transformers library bridges this gap. It is primarily known as a model definition library, but it also provides a tokenizer abstraction layer that wraps the raw tokenizers backend and adds model-aware functionality.

Here's the same tokenization with the transformers wrapper:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

prompt = "Give me a brief explanation of gravity in simple terms."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

print(text)
# The prompt is wrapped in the model's chat format, roughly:
# <|im_start|>user
# Give me a brief explanation of gravity in simple terms.<|im_end|>
# <|im_start|>assistant
# (the exact template depends on the model's tokenizer configuration)

model_inputs = tokenizer([text], return_tensors="pt")

Notice how special tokens like <|im_start|> and <|im_end|> are applied to the prompt before tokenizing. This helps the model recognize where a new sequence starts and ends.

The transformers tokenizer adds everything the raw library lacks:

  • Chat template application. The apply_chat_template method formats conversations according to the model's expected format, inserting the correct special tokens and delimiters.
  • Automatic special token insertion. Beginning-of-sequence and end-of-sequence tokens are added where the model expects them.
  • Truncation to context length. You can specify truncation=True and the tokenizer will respect the model's maximum sequence length.
  • Batch encoding with padding. Multiple inputs can be padded to the same length with the correct padding token and direction.
  • Return format options. You can request PyTorch tensors (return_tensors="pt"), NumPy arrays, and more.

transformers implements the tokenization API used across the entire ML community (encode, decode, convert_tokens_to_ids, etc.).
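Here is a small sketch showing several of these options together (the checkpoint and sentences are just examples):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

batch = ["Gravity pulls objects toward each other.", "Hello world"]
encoded = tokenizer(
    batch,
    padding=True,          # pad the shorter sequence to match the longer one
    truncation=True,       # respect the model's maximum sequence length
    return_tensors="pt",   # return PyTorch tensors
)

print(encoded["input_ids"].shape)  # (batch_size, sequence_length)
print(encoded["attention_mask"])   # zeros mark padding positions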



The tokenizer class hierarchy in transformers

The transformers library organizes tokenizers into a class hierarchy. At the top sits a base class that defines the common interface. Below it, backend classes handle the actual tokenization using different engines. At the bottom, model-specific classes configure the backends for particular models.

[Figure: the class hierarchy for tokenizers inside transformers]



PreTrainedTokenizerBase defines the common interface for all tokenizers

PreTrainedTokenizerBase is the abstract base class for all tokenizers in transformers. It defines the interface that each tokenizer must implement.

The base class handles functionality that does not depend on the tokenization backend:

  • Special token properties. Properties like bos_token, eos_token, pad_token, and unk_token are defined here. These properties provide access to the special tokens that models use to mark sequence boundaries and handle unknown inputs.
  • Encoding interface. The __call__, encode, and encode_plus methods are defined here. These methods accept text input and return token IDs along with attention masks and other metadata.
  • Decoding interface. The decode and batch_decode methods convert token IDs back to text.
  • Serialization. The save_pretrained and from_pretrained methods handle downloading the correct files, reading configuration, and saving tokenizers to disk.
  • Chat template support. The apply_chat_template method lives here, formatting conversations according to Jinja templates stored in the tokenizer configuration.

Every tokenizer in transformers ultimately inherits from PreTrainedTokenizerBase. The base class ensures consistent behavior across all tokenizers, regardless of which backend they use for the actual tokenization.
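A quick sketch of that shared interface in action (the checkpoint is just an example; any tokenizer exposes the same methods):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

ids = tokenizer.encode("Hello world")                    # encoding interface
text = tokenizer.decode(ids, skip_special_tokens=True)   # decoding interface

print(ids)
print(text)
print(tokenizer.eos_token, tokenizer.eos_token_id)       # special token properties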



TokenizersBackend wraps the tokenizers library

TokenizersBackend is the primary backend class for new tokenizers. It inherits from PreTrainedTokenizerBase and wraps the Rust-based tokenizers library.

The class stores the Rust tokenizer object internally:

class TokenizersBackend(PreTrainedTokenizerBase):
    def __init__(self, tokenizer_object, ...):
        self._tokenizer = tokenizer_object  # the Rust tokenizers.Tokenizer instance
        ...

When you call encoding methods on a TokenizersBackend tokenizer, the class delegates the actual tokenization to the Rust backend:

def _batch_encode_plus(self, batch_text_or_text_pairs, ...):
    encodings = self._tokenizer.encode_batch(batch_text_or_text_pairs, ...)
    ...

The Rust backend performs the computationally intensive work, while the Python wrapper adds model-aware features on top.
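You can see this delegation by calling the underlying Rust object directly; a minimal sketch (the checkpoint is just an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

# The wrapper ultimately hands batches to the Rust engine's encode_batch:
encodings = tokenizer._tokenizer.encode_batch(["Hello world", "Tokenizers are fast"])
print([enc.ids for enc in encodings])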

Many model-specific tokenizers inherit from TokenizersBackend; examples include:

  • LlamaTokenizer
  • GemmaTokenizer

These model-specific classes configure the backend with the correct vocabulary, merge rules, special tokens, and normalization settings for their respective models.



PythonBackend provides a pure-Python mixin

PythonBackend inherits from PreTrainedTokenizerBase and implements tokenization in pure Python. The class is aliased as PreTrainedTokenizer.

The pure-Python backend exists for several reasons:

  • Custom tokenization logic. Some models require tokenization behavior that doesn't fit the standard tokenizers pipeline.
  • Legacy compatibility. Older model implementations may depend on Python-specific behavior.

The Python backend is slower than the Rust backend. For most use cases, the Rust-backed TokenizersBackend is preferred.

Model-specific tokenizers that inherit from PythonBackend (or its alias PreTrainedTokenizer) include some older or specialized models, such as:

  • CTRLTokenizer
  • CanineTokenizer



SentencePieceBackend handles SentencePiece models

SentencePieceBackend inherits from PythonBackend and provides integration with Google’s SentencePiece library. SentencePiece is a standalone tokenization library that many models use, particularly those trained by Google.

The backend wraps a SentencePiece processor:

class SentencePieceBackend(PythonBackend):
    def __init__(self, vocab_file, ...):
        self.sp_model = spm.SentencePieceProcessor()
        self.sp_model.Load(vocab_file)
        ...

Models that use SentencePiece tokenization inherit from this backend. Examples include:

  • SiglipTokenizer
  • BartphoTokenizer

The SentencePiece backend inherits from PythonBackend rather than directly from PreTrainedTokenizerBase because it shares much of the same interface and padding/truncation logic.
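For intuition about what this backend wraps, here is a minimal sketch that loads T5's SentencePiece model file with the sentencepiece library directly (spiece.model is the file T5 ships; other models may name it differently):

import sentencepiece as spm
from huggingface_hub import hf_hub_download

# Download the raw SentencePiece model file and load it without any transformers wrapper.
spm_path = hf_hub_download("google-t5/t5-base", "spiece.model")
sp = spm.SentencePieceProcessor()
sp.Load(spm_path)

print(sp.encode("Hello world", out_type=str))  # string pieces, e.g. ['▁Hello', '▁world']
print(sp.encode("Hello world"))                # the corresponding token IDs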



AutoTokenizer automatically selects the correct tokenizer class

AutoTokenizer is the recommended entry point for loading tokenizers. It automatically determines which tokenizer class to use for a given model and returns an instance of that class.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

Behind the scenes, AutoTokenizer performs these steps:

  1. Download the tokenizer configuration. The from_pretrained method fetches tokenizer_config.json from the Hub (or from a local directory).
  2. Identify the model type. The configuration contains metadata that identifies the model type (e.g., "gpt2", "llama", "bert").
  3. Look up the tokenizer class. AutoTokenizer maintains a mapping called TOKENIZER_MAPPING_NAMES that maps model types to tokenizer class names:
TOKENIZER_MAPPING_NAMES = {
    "gpt2": "GPT2Tokenizer",
    "llama": "LlamaTokenizer",
    "bert": "BertTokenizer",
    ...
}
  4. Instantiate the correct class. AutoTokenizer imports the appropriate tokenizer class and calls its from_pretrained method.
  5. Return the configured tokenizer. You receive a fully configured, model-specific tokenizer ready for use.

The advantage of AutoTokenizer is that you don't need to know which tokenizer class a model uses. Whether a model uses LlamaTokenizer, GPT2Tokenizer, or BertTokenizer, the same AutoTokenizer.from_pretrained("model-name") call works.
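A small sketch to illustrate (the exact class names you see depend on your transformers version):

from transformers import AutoTokenizer

# The same call works for any model; only the returned class differs.
for name in ["gpt2", "bert-base-uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(name, "->", type(tokenizer).__name__)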

The tokenizer system in transformers forms a layered architecture:

  • Entry point: AutoTokenizer automatically selects and instantiates the correct tokenizer class.
  • Model-specific: LlamaTokenizer, GPT2Tokenizer, etc. configure the backend with the model-specific normalizer, pre-tokenizer, special tokens, and settings.
  • Backend: TokenizersBackend, PythonBackend, and SentencePieceBackend implement the actual tokenization using a particular engine.
  • Base: PreTrainedTokenizerBase defines the common interface and shared functionality.
  • Engine: tokenizers (Rust), SentencePiece, or pure Python performs the raw tokenization.



v5 Separates Tokenizer Architecture from Trained Vocab

The most significant change in Transformers v5 is a philosophical shift in how tokenizers are defined. Tokenizers now work like PyTorch's nn.Module: you define the architecture first, then fill it with learned parameters.



The problem with v4: tokenizers were opaque and tightly coupled

In v4, tokenizers were black boxes tied to pretrained checkpoint files. If you loaded LlamaTokenizerFast, you could not easily answer basic questions about it:

  • Is it BPE or Unigram?
  • How does it normalize text?
  • What pre-tokenization strategy does it use?
  • What are the special tokens and their positions?

The __init__ method gave no clues. You had to dig through serialized files or external documentation to understand what the tokenizer actually did.

[Figure: LlamaTokenizerFast as seen in v4 transformers]

v4 also maintained two parallel implementations for each model:

  1. a “slow” Python tokenizer (LlamaTokenizer inheriting from PreTrainedTokenizer) and
  2. a “fast” Rust-backed tokenizer (LlamaTokenizerFast inheriting from PreTrainedTokenizerFast).

This meant:

  • Two files per model (e.g., tokenization_llama.py and tokenization_llama_fast.py)
  • Code duplication across hundreds of models
  • Behavioral discrepancies between slow and fast versions, resulting in subtle bugs
  • A growing test suite dedicated to verifying that slow and fast tokenizers produced equivalent outputs
  • User confusion about which tokenizer to use and when

Worst of all, you could not create an empty tokenizer architecture. If you wanted to train a LLaMA-style tokenizer on your own data, there was no clean way to instantiate a "blank" LLaMA tokenizer and fill it with your vocabulary and merges. Tokenizers existed only as loaded checkpoints, not as configurable templates.



The v5 solution: architecture and parameters are now separate

v5 treats tokenizer architecture (normalizer, pre-tokenizer, model type, post-processor, decoder) as distinct from trained parameters (vocabulary, merges). This mirrors how PyTorch separates model architecture from learned weights.

With nn.Module, you define layers first:

from torch import nn

# Define the architecture first; the weights are filled in later (by initialization or training).
vocab_size, embed_dim, hidden_dim = 32000, 256, 512

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, hidden_dim),
)

v5 tokenizers follow the same pattern:

from transformers import LlamaTokenizer

# Instantiate the architecture with no trained vocabulary...
tokenizer = LlamaTokenizer()

# ...then train it on your own data
tokenizer.train(files=["my_corpus.txt"])

The tokenizer class now explicitly declares its structure. Looking at LlamaTokenizer in v5, you can immediately see:

  • It uses BPE as its tokenization model
  • It might add a prefix space before text
  • Its special tokens (unk, bos, eos) sit at specific vocabulary positions
  • It does not normalize input text
  • Its decoder replaces the metaspace character with spaces
[Figure: LlamaTokenizer as seen in v5 transformers]

This transparency was impossible in v4, where the same information was buried in serialized files.
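A minimal sketch of that inspection, assuming a transformers v5 install and the property names from the comparison table later in this post; the checkpoint name is a placeholder:

from transformers import LlamaTokenizer

# Hypothetical repo id: substitute any repo that ships a LLaMA-style tokenizer.
tokenizer = LlamaTokenizer.from_pretrained("your-org/your-llama-checkpoint")

print(tokenizer.normalizer)       # None for LLaMA-style tokenizers (no normalization)
print(tokenizer.pre_tokenizer)
print(tokenizer.post_processor)
print(tokenizer.decoder)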



One file, one backend, one recommended path

v5 consolidates the two-file system into a single file per model. LlamaTokenizer now inherits from TokenizersBackend, which wraps the Rust-based tokenizer that was previously exposed as the "fast" implementation and is now the default.

The previous "slow" Python implementation lives explicitly behind PythonBackend, and SentencePieceBackend remains for models that require it, but Rust-backed tokenization is the preferred default.

This change eliminates:

  • Duplicate code across slow/fast implementations
  • The confusing Tokenizer vs TokenizerFast naming convention
  • Test suites dedicated to checking slow-fast parity

Users now have one clear entry point. Advanced users who need to customize can still access the lower-level components, but the library no longer forces everyone to navigate two parallel implementations.



You can now train model-specific tokenizers from scratch

Suppose you want a tokenizer that behaves exactly like LLaMA's (same normalization, same pre-tokenization, same BPE model type) but trained on a domain-specific corpus: medical text, legal documents, a new language. In v4, this required manually reconstructing the tokenizer pipeline from low-level tokenizers library primitives. In v5, you can instantiate the architecture directly and train it:

from transformers import LlamaTokenizer
from datasets import load_dataset

# Instantiate the blank LLaMA tokenizer architecture
tokenizer = LlamaTokenizer()

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def get_training_corpus():
    batch = 1000
    for i in range(0, len(dataset), batch):
        yield dataset[i : i + batch]["text"]

# Train a new vocabulary on the corpus while keeping the LLaMA architecture
trained_tokenizer = tokenizer.train_new_from_iterator(
    text_iterator=get_training_corpus(),
    vocab_size=32000,
    length=len(dataset),
    show_progress=True,
)

# Share it on the Hub and load it back like any other tokenizer
trained_tokenizer.push_to_hub("my_custom_tokenizer")

tokenizer = LlamaTokenizer.from_pretrained("my_custom_tokenizer")

The resulting tokenizer will have your custom vocabulary and merge rules, but will process text exactly as a standard LLaMA tokenizer would: same whitespace handling, same special token conventions, same decoding behavior.

In summary, comparing v4 and v5:

  • Files per model: two in v4 (tokenization_X.py and tokenization_X_fast.py); one in v5 (tokenization_X.py).
  • Default backend: split between Python and Rust in v4; Rust (TokenizersBackend) preferred in v5.
  • Architecture visibility: hidden in serialized files in v4; explicit in the class definition in v5.
  • Training from scratch: manual pipeline construction in v4; tokenizer.train(files=[...]) in v5.
  • Component inspection: difficult and undocumented in v4; direct properties (tokenizer.normalizer, etc.) in v5.
  • Parent classes: PreTrainedTokenizer and PreTrainedTokenizerFast in v4; TokenizersBackend (or SentencePieceBackend, PythonBackend) in v5.

The shift from "tokenizers as loaded checkpoints" to "tokenizers as configurable architectures" makes the library more modular, more transparent, and more aligned with how practitioners think about building ML systems.



Summary

Transformers v5 brings three improvements to tokenization:

  1. One file per model instead of separate slow/fast implementations
  2. Visible architecture so you can inspect normalizers, pre-tokenizers, and decoders
  3. Trainable templates that let you create custom tokenizers matching any model's design

The wrapper layer between tokenizers and transformers remains essential. It adds the model awareness (context lengths, chat templates, special tokens) that raw tokenization doesn't provide. v5 just makes that layer clearer and more customizable.

If you want to learn more about tokenization, the links above (the tokenization playground and the BPE, Unigram, and WordPiece write-ups) are good places to start.


