Scaling Diffusion Language Models via Adaptation from Autoregressive Models
Authors: Shansan Gong†, Shivam Agarwal‡, Yizhe Zhang, Jiacheng Ye†, Lin Zheng†, Mukai Li†, Chenxin An†, Peilin Zhao§, Wei Bi§, Jiawei Han, Hao Peng‡, Lingpeng Kong†
Diffusion Language Models (DLMs) have emerged as a promising new paradigm for generative text modeling, potentially addressing limitations of autoregressive (AR) models. However, current DLMs have been studied at a smaller scale than their AR counterparts and lack fair comparisons on language modeling benchmarks. Additionally, training diffusion models from scratch at scale remains challenging. Given the prevalence of open-source AR language models, we propose adapting these models to build text diffusion models. We demonstrate connections between AR and diffusion modeling objectives and introduce a simple continual pre-training approach for training diffusion models. Through systematic evaluation on language modeling, reasoning, and commonsense benchmarks, we show that we can convert AR models ranging from 127M to 7B parameters (GPT-2 and LLaMA) into the diffusion models DiffuGPT and DiffuLLaMA, using fewer than 200B tokens for training. Our experimental results reveal that these models outperform earlier DLMs and are competitive with their AR counterparts. We release a suite of DLMs (127M, 355M, and 7B parameters) capable of generating fluent text, performing in-context learning, filling in the middle without prompt re-ordering, and following instructions.
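To make the continual pre-training idea concrete, the sketch below shows one training step of an absorbing-state (masked) diffusion objective applied on top of a pretrained AR transformer. This is an illustrative assumption, not the paper's exact recipe: the function name masked_diffusion_loss, the uniform sampling of the corruption level t, and the 1/t loss weighting are standard choices for masked diffusion LMs, while adaptation details such as relaxing the causal attention mask are assumed to be handled inside `model`.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model, input_ids, mask_token_id):
    """One continual pre-training step with an absorbing-state diffusion objective.

    `model` is assumed to be a pretrained AR transformer adapted to attend
    bidirectionally (an adaptation detail not shown here); calling it on a
    batch of token ids should return logits of shape (batch, seq_len, vocab).
    """
    bsz, seq_len = input_ids.shape
    device = input_ids.device

    # Sample a corruption level t ~ U(0, 1] per sequence.
    t = torch.rand(bsz, 1, device=device).clamp(min=1e-3)

    # Corrupt the sequence: each token becomes [MASK] with probability t.
    is_masked = torch.rand(bsz, seq_len, device=device) < t
    noisy_ids = torch.where(is_masked, torch.full_like(input_ids, mask_token_id), input_ids)

    # Predict the original tokens from the corrupted sequence.
    logits = model(noisy_ids)  # (bsz, seq_len, vocab)
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        input_ids.reshape(-1),
        reduction="none",
    ).view(bsz, seq_len)

    # Average the cross-entropy over masked positions, weighted by 1/t,
    # which yields a variational bound on the data log-likelihood
    # for absorbing-state diffusion.
    loss = (ce * is_masked / t).sum() / is_masked.sum().clamp(min=1)
    return loss
```

In this view, AR pre-training and masked diffusion training share the same token-level cross-entropy, differing mainly in which positions are predicted and what context is visible, which is what makes initializing the diffusion model from an AR checkpoint natural.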
† The University of Hong Kong
‡ University of Illinois at Urbana-Champaign
§ Tencent AI Lab
April 16, 2025 · research area: Computer Vision · conference: ICLR