
ASPLOS 2026

Meta Info

Homepage: https://www.asplos-conference.org/asplos2026/

Acceptance Rate

  • Spring: 9.6% (= 20 / 208)

Papers

Diffusion Models

  • Video DiT Training

  • DSV: Exploiting Dynamic Sparsity to Accelerate Large-Scale Video DiT Training [arXiv]

      • CUHK & StepFun

      • Leverages the dynamic sparsity of attention during video DiT training.

      • Adopts a hybrid sparsity-aware context parallelism that re-balances the workload, which sparsity heterogeneity otherwise skews across attention heads and blocks.
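The re-balancing idea in the second bullet can be illustrated with a greedy longest-processing-time assignment: each attention head, weighted by its estimated sparse-attention cost, goes to the currently least-loaded rank. This is a minimal sketch under assumed names and a toy cost model, not DSV's actual scheduler:

```python
import heapq

def balance_heads(head_costs, num_ranks):
    """Assign attention heads to ranks so that heterogeneous sparsity
    (some heads retain far more attention work than others) does not
    skew per-rank load. Greedy LPT: biggest cost to least-loaded rank."""
    # Min-heap of (current_load, rank_id).
    loads = [(0.0, r) for r in range(num_ranks)]
    heapq.heapify(loads)
    assignment = {}
    # Placing the largest costs first tightens the greedy bound.
    for head, cost in sorted(head_costs.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(loads)
        assignment[head] = rank
        heapq.heappush(loads, (load + cost, rank))
    return assignment

# Hypothetical example: 6 heads with very uneven retained-attention costs.
costs = {0: 9.0, 1: 1.0, 2: 8.0, 3: 2.0, 4: 7.0, 5: 3.0}
plan = balance_heads(costs, num_ranks=2)
```

A naive round-robin split of these heads would leave one rank with roughly 50% more work than the other; the greedy assignment above evens out the per-rank totals.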

Acronyms

  • DiT: Diffusion Transformer
