ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation

NeurIPS 2025 (#997 of 5858 papers) · 2 citations · 8 authors

Abstract

While humans effortlessly draw visual objects and shapes by adaptively allocating attention based on their complexity, existing multimodal large language models (MLLMs) remain constrained by rigid token representations. To bridge this gap, we propose ALTo, an adaptive-length tokenizer for autoregressive mask generation. At its core is a novel token length predictor, paired with a length regularization term and a differentiable token chunking strategy. We further build ALToLLM, which seamlessly integrates ALTo into an MLLM. Preferences over the trade-off between mask quality and efficiency are learned through group relative policy optimization (GRPO). Experiments demonstrate that ALToLLM achieves state-of-the-art performance with adaptive token cost on popular segmentation benchmarks. Code and models are released at https://github.com/yayafengzi/ALToLLM.
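The abstract names three ingredients (a token length predictor, a length regularizer, and differentiable token chunking) without detailing them. Below is a minimal sketch of how these pieces could fit together; the module name `AdaptiveLengthGate`, the mean-pooled linear length head, and the sigmoid-ramp soft mask are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveLengthGate(nn.Module):
    """Sketch: predict a token budget and gate tokens differentiably.

    A scalar length is predicted from pooled token features; a smooth
    step function then keeps roughly the first `length` tokens, so the
    effective sequence length remains differentiable end to end.
    """

    def __init__(self, dim: int, max_tokens: int, temperature: float = 1.0):
        super().__init__()
        self.max_tokens = max_tokens
        self.temperature = temperature
        self.length_head = nn.Linear(dim, 1)  # hypothetical length predictor

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, max_tokens, dim)
        pooled = tokens.mean(dim=1)  # (batch, dim)
        # Predicted length in [0, max_tokens].
        length = self.max_tokens * torch.sigmoid(self.length_head(pooled))
        positions = torch.arange(
            self.max_tokens, device=tokens.device
        ).float()
        # Soft keep-mask: ~1 below the predicted length, ~0 above it.
        mask = torch.sigmoid((length - positions) / self.temperature)
        gated = tokens * mask.unsqueeze(-1)
        # Length regularizer: expected fraction of tokens kept.
        length_loss = mask.sum(dim=1).mean() / self.max_tokens
        return gated, mask, length_loss
```

In such a setup the total training objective would combine the mask reconstruction loss with a weighted `length_loss`, so the model is pushed to spend fewer tokens on simple shapes; the GRPO stage described in the abstract would then tune the quality-versus-efficiency preference on top.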
