# SC 2025

## Meta Info

Homepage: <https://sc25.supercomputing.org>

Paper list: <https://sc25.conference-program.com>

## Papers

### LLMs

* LLM Inference
  * Hetis: Serving LLMs in Heterogeneous GPU Clusters with Fine-grained and Dynamic Parallelism \[[Paper](https://doi.org/10.1145/3712285.3759784)] \[[arXiv](https://arxiv.org/abs/2509.08309)]
    * University of Macau & Sun Yat-sen University (SYSU)

## Acronyms

* LLM: Large Language Model
