Not All Tokens Need 40 Steps: Heterogeneous Step Allocation in Diffusion Transformers for Efficient Video Generation

TL;DR Human vision ignores vast amounts of redundant motion. Why, then, do video generators treat every spatiotemporal token with equal priority? We propose Heterogeneous Step Allocation (HSA), which assigns a different denoising-step budget to each spatiotemporal token.

Tip: Download the entire website upfront to skip video loading (requires Node.js):

# Fetch the pre-built site from the gh-pages branch
git clone https://github.com/ernestchu/hsa
cd hsa
git checkout gh-pages
# Serve the parent directory so the site is reachable at /hsa
cd ..
npx -y serve

Then go to http://localhost:3000/hsa

Diffusion Transformers (DiTs) have achieved state-of-the-art video generation quality, but they incur immense computational cost because standard inference applies the same number of denoising steps uniformly to every token in the sequence. It is well known that human vision ignores vast amounts of redundant motion. Why, then, do our densest models treat every spatiotemporal token with equal priority? In this paper, we introduce Heterogeneous Step Allocation (HSA), a training-free inference algorithm that assigns varying step budgets to different spatiotemporal tokens based on their velocity dynamics. To resolve the resulting sequence-length mismatch without sacrificing global context, HSA introduces a KV-cache synchronization mechanism that allows active tokens to attend to the full sequence while entirely bypassing inactive tokens. Furthermore, we derive a cached Euler update that advances the latent states of skipped tokens in a single operation without additional model evaluations. We evaluate HSA on the Wan-2 and LTX-2 models for both text-to-video (T2V) and image-to-video (I2V) generation. Our results demonstrate that HSA significantly outperforms previous state-of-the-art caching methods and the vanilla Flow Matching baseline, especially in aggressive acceleration regimes (e.g., 50% and 25% of the full runtime). Crucially, HSA achieves a superior quality-runtime Pareto frontier without the need for expensive offline profiling, robustly preserving structural integrity and generation quality even under tight computational budgets.
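
To make the cached Euler update concrete, here is a minimal PyTorch sketch of one heterogeneous step under a standard flow-matching Euler integrator, x <- x + (sigma_next - sigma_t) * v. Everything here is an illustrative assumption rather than our implementation: velocity_fn is a placeholder for the DiT velocity prediction, pick_active is a hypothetical top-k allocation rule, and the sketch elides the KV-cache synchronization that lets the active subset still attend to the full sequence.

import torch

def hsa_euler_step(x, v_cache, active, sigma_t, sigma_next, velocity_fn):
    # x:        (N, D) latent tokens
    # v_cache:  (N, D) most recent velocity prediction for every token
    # active:   (N,) bool mask of tokens evaluated at this step
    # velocity_fn: stand-in for the DiT forward pass (hypothetical)
    v = v_cache.clone()
    # Only active tokens pay for a model evaluation this step.
    v[active] = velocity_fn(x[active], sigma_t)
    # Skipped tokens reuse their cached velocity, so the entire sequence
    # advances in a single vectorized Euler update, with no extra model
    # evaluations for the inactive tokens.
    x_next = x + (sigma_next - sigma_t) * v
    return x_next, v

def pick_active(v_cache, budget=0.5):
    # Hypothetical allocation rule: tokens whose latents move fastest
    # (largest cached velocity magnitude) get evaluated this step;
    # `budget` is the fraction of tokens evaluated, a rough proxy for
    # the runtime budget.
    speed = v_cache.norm(dim=-1)
    k = max(1, int(budget * speed.numel()))
    active = torch.zeros_like(speed, dtype=torch.bool)
    active[speed.topk(k).indices] = True
    return active

Under this sketch, a budget of 0.5 evaluates roughly half the tokens per step; the actual method derives per-token step budgets from velocity dynamics rather than a fixed quantile.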

System Overview

Video comparison

Use our video comparison tool below to compare videos generated by HSA against the 40-step reference. For T2V, we randomly sample two prompts from each dimension of VBench. For I2V, we randomly sample 20 image-text pairs from VBench++. LTX-2 prompts are sourced from the official LTX-2 website, and LTX-2 I2V uses the same prompts but conditions on the first frame of the corresponding website videos.

Heterogeneous Step Allocation

[Interactive video comparison: Reference Flow Matching (T=40, 100% runtime) vs. the HSA-accelerated result at the selected runtime. Click on any video to play its audio.]

Full Prompt

[The full text prompt for the currently selected video is displayed here.]