
NSDI 2026

Meta Info

Homepage: https://www.usenix.org/conference/nsdi26

Acceptance Rate

  • Spring: 24.2% (= 50 / 207)

Papers

LLM

  • LLM Training

    • Attack of the Bubbles: Straggler-Resilient Pipeline Parallelism for Large Model Training [arXiv]

      • HKUST & Alibaba

  • LLM Inference

    • Fast Distributed Inference Serving for Large Language Models [arXiv]

      • PKU

    • HydraServe: Minimizing Cold Start Latency for Serverless LLM Serving in Public Clouds [arXiv]

      • PKU & Alibaba Cloud

  • LLM Storage

Acronyms

  • LLM: Large Language Model
