On the Adversarial Robustness of Discrete Image Tokenizers

Rishika Bhagwatkar1, Irina Rish1, Nicolas Flammarion2, Francesco Croce2
1Mila - Quebec AI Institute, 2EPFL

This is the first work to systematically study the adversarial robustness of discrete image tokenizers, showing that simple, task-agnostic adversarial training can make multimodal systems far safer.

Abstract

Discrete image tokenizers encode visual inputs as sequences of tokens from a finite vocabulary and are gaining popularity in multimodal systems, including encoder-only, encoder-decoder, and decoder-only models. However, unlike CLIP encoders, their vulnerability to adversarial attacks has not been explored. As the first work to study this topic, we first formulate attacks that aim to perturb the features extracted by discrete tokenizers, and thus change the extracted tokens. These attacks are computationally efficient, application-agnostic, and effective across classification, multimodal retrieval, and captioning tasks. Second, to defend against this vulnerability, inspired by recent work on robust CLIP encoders, we fine-tune popular tokenizers with unsupervised adversarial training, keeping all other components frozen. While unsupervised and task-agnostic, our approach significantly improves robustness to both unsupervised and end-to-end supervised attacks and generalizes well to unseen tasks and data. Unlike supervised adversarial training, our approach can leverage unlabeled images, making it more versatile. Overall, our work highlights the critical role of tokenizer robustness in downstream tasks and presents an important step in the development of safe multimodal foundation models.

Proposed attack

Idea. Discrete tokenizers map an image to a sequence of visual tokens. If we can induce large changes in the tokenizer’s internal features (and therefore tokens), downstream systems built on top of those tokens—classification, retrieval, captioning, and VQA—can fail.

  • Unsupervised: no labels from downstream tasks are needed.
  • Inexpensive: attacking the tokenizer is cheaper than attacking the full downstream model.
  • Task-agnostic: the attack does not depend on the downstream architecture.

Overview of the tokenizer-based unsupervised, inexpensive, task-agnostic attack.

Attack overview

We craft an ℓ∞-bounded perturbation to maximize the mismatch between tokenizer features extracted from the clean image and the perturbed image, causing the produced token sequence to drift.
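As a concrete illustration, the attack described above can be sketched as a PGD-style routine over the tokenizer encoder's features. This is a minimal sketch, not the paper's implementation: `feature_attack` is an illustrative name, the loss and step sizes are assumptions, and any feature-extracting module can stand in for the tokenizer encoder.

```python
import torch
import torch.nn.functional as F

def feature_attack(encoder, x, eps=4/255, steps=10, alpha=1/255):
    """PGD-style l_inf attack: push the tokenizer encoder's features of the
    perturbed image away from those of the clean image, so that the produced
    token sequence drifts. Hypothetical sketch, not the paper's code."""
    with torch.no_grad():
        clean_feats = encoder(x)
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        adv_feats = encoder((x + delta).clamp(0, 1))
        # maximize the mismatch between clean and perturbed features
        loss = F.mse_loss(adv_feats, clean_feats)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project back into the l_inf ball
            delta.grad = None
    return (x + delta).clamp(0, 1).detach()
```

Because only the tokenizer encoder appears in the loop, each attack step is much cheaper than backpropagating through a full downstream model, which is what makes the attack inexpensive and task-agnostic.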

Defense

Robust tokenizers via unsupervised adversarial training. Inspired by robust CLIP-style training, we fine-tune only the tokenizer’s encoder. The objective encourages adversarial embeddings to stay close to clean embeddings, so small input perturbations do not significantly change the tokenizer’s internal representations (and thus the tokens).

Defense properties

  • Fully unsupervised
  • Task-agnostic

Robust tokenizer properties

  • Plug-and-play across downstream tasks
  • Robustness without fine-tuning the full model
  • Minimally degrades clean performance

Unsupervised adversarial training objective for robust tokenizers and key properties.

Defense overview

We adversarially fine-tune the tokenizer encoder so that feature representations are stable under bounded perturbations, improving robustness across different downstream systems.
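One training step of this objective can be sketched as follows, in the style of unsupervised adversarial fine-tuning for CLIP encoders: craft a bounded perturbation against the current encoder, then minimize the distance between its adversarial embedding and the clean embedding of the frozen original encoder. The function name, loss, and hyperparameters below are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def robust_finetune_step(encoder, frozen_encoder, x, optimizer,
                         eps=8/255, attack_steps=5, alpha=2/255):
    """One unsupervised adversarial fine-tuning step (hypothetical sketch):
    inner loop attacks the current encoder, outer step pulls adversarial
    features back toward the frozen original encoder's clean features."""
    with torch.no_grad():
        target = frozen_encoder(x)  # clean features of the original encoder
    # Inner maximization: find an l_inf perturbation that moves features away.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(attack_steps):
        loss = F.mse_loss(encoder((x + delta).clamp(0, 1)), target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad = None
    # Outer minimization: make adversarial embeddings match the clean target.
    optimizer.zero_grad()
    loss = F.mse_loss(encoder((x + delta.detach()).clamp(0, 1)), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that no labels appear anywhere in the step, which is why the defense can leverage unlabeled images, and the training radius `eps` is the knob that trades robustness against clean accuracy.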

Main results

Robust tokenizers → Robust Embedding Models. We evaluate robustness after swapping the original tokenizer encoder with our unsupervised adversarially fine-tuned version, while keeping the downstream model frozen. For FuseLIP (TiTok-based), robust tokenizers substantially increase adversarial robustness on both classification (Imagenette, Caltech101) and multimodal retrieval (OI-Crop, OI-Pos), and the training radius provides explicit control over the robustness–accuracy trade-off. For UniTok, the same tokenizer-only fine-tuning also improves robustness against end-to-end attacks across Imagenette, Caltech101, and ImageNet.

Table 1. Evaluation of FuseLIP on image classification and multimodal retrieval. Each cell reports clean accuracy and robust accuracy under ℓ∞ attacks with ε = 2/255 and 4/255; "AT ε" denotes the tokenizer adversarially fine-tuned at radius ε.

Tokenizer   Imagenette         Caltech101         OI-Crop            OI-Pos             Average
            clean 2/255 4/255  clean 2/255 4/255  clean 2/255 4/255  clean 2/255 4/255  clean 2/255 4/255
clean       93.6  2.6   0.0    74.4  0.6   0.0    71.8  7.4   0.8    69.2  5.4   1.4    77.3  4.0   0.6
AT 4/255    91.8  63.6  36.6   73.0  48.2  20.8   66.2  50.6  26.0   67.2  46.0  24.6   74.6  52.1  27.0
AT 8/255    89.6  69.0  48.8   72.4  51.6  32.8   62.0  48.8  35.8   64.8  51.2  35.6   72.2  55.2  38.3
AT 12/255   87.0  71.4  51.0   67.6  51.2  36.8   56.2  49.0  36.8   61.6  49.6  35.2   68.1  55.3  40.0
AT 16/255   83.4  66.6  50.0   61.2  47.6  37.4   50.0  47.2  35.8   59.4  48.8  39.2   63.5  52.6  40.6

Table 2. Evaluation of UniTok on image classification. Each cell reports clean accuracy and robust accuracy under ℓ∞ attacks with ε = 2/255 and 4/255; "AT ε" denotes the tokenizer adversarially fine-tuned at radius ε.

Tokenizer   Imagenette         Caltech101         ImageNet           Average
            clean 2/255 4/255  clean 2/255 4/255  clean 2/255 4/255  clean 2/255 4/255
clean       99.2  0.0   0.0    85.7  0.0   0.0    67.3  0.0   0.0    84.1  0.0   0.0
AT 4/255    99.2  92.1  75.0   81.2  56.9  22.4   66.9  31.9  10.5   82.4  60.3  36.0
AT 8/255    97.8  91.5  82.7   77.4  63.5  43.9   58.3  40.3  23.6   77.8  65.1  50.1
AT 12/255   95.6  88.7  81.4   72.4  60.1  47.6   50.4  36.5  25.6   72.8  61.8  51.5
AT 16/255   92.7  86.3  79.6   65.3  57.5  44.6   42.3  32.1  23.6   66.7  58.7  49.3

Robust tokenizers → Robust Multimodal LLMs. We next study UniTok-MLLM by replacing only the image tokenizer with our robust UniTok variant. On VQA (VQAv2, OK-VQA, GQA), robust tokenizers yield large gains in adversarial accuracy under ℓ∞ attacks. On captioning, both unsupervised targeted (tokenizer-only) and supervised targeted (end-to-end) attacks can steer the original model toward the target caption, while the model with the robust tokenizer stays close to the correct description.

Targeted attacks on captioning

The examples below demonstrate targeted attacks: (i) an unsupervised targeted attack that matches the perturbed image’s tokenizer embeddings to a target image, and (ii) a supervised targeted end-to-end attack that optimizes toward a specific target caption. In both cases, the robust tokenizer prevents the model from switching to the target caption.

Unsupervised targeted attack on captioning (tokenizer-only).

Unsupervised targeted attack

The attack minimizes the embedding distance between a perturbed input and a target image using only the tokenizer. The original UniTok-MLLM shifts toward the target caption, while the robust tokenizer preserves a correct, safe caption.
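The targeted variant is a small modification of the untargeted feature attack: instead of pushing features away from the clean image, it descends toward the tokenizer features of a chosen target image. The sketch below uses hypothetical names and hyperparameters and any feature-extracting module as a stand-in for the tokenizer encoder.

```python
import torch
import torch.nn.functional as F

def targeted_feature_attack(encoder, x, x_target, eps=8/255, steps=10, alpha=2/255):
    """Targeted l_inf attack (hypothetical sketch): make the perturbed image's
    tokenizer features match those of a chosen target image, using only the
    tokenizer -- no labels, captions, or downstream model needed."""
    with torch.no_grad():
        target_feats = encoder(x_target)
    delta = torch.zeros_like(x).requires_grad_(True)
    for _ in range(steps):
        loss = F.mse_loss(encoder((x + delta).clamp(0, 1)), target_feats)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend toward the target features
            delta.clamp_(-eps, eps)             # stay within the l_inf budget
            delta.grad = None
    return (x + delta).clamp(0, 1).detach()
```

If the token sequence of the perturbed image matches that of the target, any downstream model consuming the tokens, such as a captioner, will describe the target rather than the input; a robust tokenizer keeps the features, and hence the tokens, anchored to the true image.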

Supervised targeted attack on captioning (end-to-end).

Supervised targeted attack

The end-to-end attack directly optimizes the image perturbation toward a chosen target caption. With the original UniTok tokenizer, the model can be forced to output the target caption; swapping in the robust tokenizer prevents this behavior and keeps captions aligned with the input image.

BibTeX


@misc{robust_tokenizers_2026,
  title  = {On the Adversarial Robustness of Discrete Image Tokenizers},
  author = {Bhagwatkar, Rishika and Rish, Irina and Flammarion, Nicolas and Croce, Francesco},
  year   = {2026},
  note   = {Preprint},
}

Acknowledgement

This website template is adapted from popular academic project pages (Bulma-based).