NSDI 2024
Homepage:
Paper list:
Resiliency at Scale: Managing Google’s TPUv4 Machine Learning Supercomputer []
Experience in designing and operating the software infrastructure that allows TPUv4 supercomputers to operate at scale.
Autothrottle: A Practical Bi-Level Approach to Resource Management for SLO-Targeted Microservices [] [] []
USTC & ETH & MSR
Minimize the CPU allocation of microservice applications while meeting SLOs.
Bi-level split: service-level control (low overhead & fast reaction) vs. application-level control (global visibility).
Captains (service-level): control based on throttle ratio target; collect data every 100ms, adjust allocation every 1s.
Tower (application-level): determine the best throttle targets for Captains to achieve; online learning (contextual bandit algorithm); one step per minute, each step runs in ~100ms.
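A minimal sketch of what a Captain-style control loop could look like: sample CPU throttling every 100ms and, once per second, nudge the CPU limit so the observed throttle ratio tracks the target handed down by Tower. The function names, probe, and step sizes below are made up for illustration; this is not Autothrottle's actual controller.
```python
# Illustrative sketch (not the paper's controller): a Captain-style loop that
# samples CPU throttling every 100 ms and adjusts the CPU limit once per second
# so the observed throttle ratio tracks the target set by Tower.
import random  # stands in for real cgroup throttling statistics in this toy

def read_throttle_ratio() -> float:
    """Hypothetical probe: fraction of recent cgroup periods that were throttled."""
    return random.uniform(0.0, 0.3)

def captain_step(cpu_limit: float, target_ratio: float, samples: int = 10,
                 step: float = 0.1, min_limit: float = 0.2) -> float:
    """One 1-second control step built from ten 100 ms samples."""
    observed = sum(read_throttle_ratio() for _ in range(samples)) / samples
    if observed > target_ratio:              # throttled more than Tower wants: add CPU
        return cpu_limit + step
    return max(min_limit, cpu_limit - step)  # otherwise reclaim CPU gradually

limit = 2.0  # cores
for _ in range(5):
    limit = captain_step(limit, target_ratio=0.05)
    print(f"new CPU limit: {limit:.2f} cores")
```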
CASSINI: Network-Aware Job Scheduling in Machine Learning Clusters []
MIT & UT-Austin
Consider the communication pattern of different jobs while placing them on network links.
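A toy illustration of why communication patterns matter for placement: two jobs whose communication bursts collide on a shared link can instead be interleaved by shifting their iterations in time. The on/off slot patterns and brute-force search below are assumptions for illustration only, not CASSINI's geometric abstraction or scheduler.
```python
# Toy model: two jobs sharing a link have periodic communication bursts;
# picking a relative time shift that interleaves the bursts lowers peak load.
from itertools import product

def peak_overlap(patterns, shifts, period=12):
    """Max number of jobs communicating in the same time slot, given shifts."""
    load = [0] * period
    for pat, shift in zip(patterns, shifts):
        for t in pat:
            load[(t + shift) % period] += 1
    return max(load)

# Each job communicates during these slots of a 12-slot iteration (assumed).
job_a = {0, 1, 2, 3}
job_b = {0, 1, 2, 3}

best = min(product(range(12), repeat=2),
           key=lambda s: peak_overlap([job_a, job_b], s))
print("best shifts:", best, "peak link sharing:", peak_overlap([job_a, job_b], best))
```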
LLM characterization
Characterization of Large Language Model Development in the Datacenter [] [] []
NTU & PKU & CUHK & Shanghai AI Lab
LLM training
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs [] [] []
ByteDance & PKU
Can't Be Late: Optimizing Spot Instance Savings under Deadlines [] []
UC Berkeley
Outstanding Paper
Characterization (e.g., availability, pricing, duration) of three-month-long spot availability traces on AWS.
Uniform Progress: a policy that makes uniform progress towards the deadline by spreading the job's computation uniformly across time.
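A simplified sketch of the decision rule such a policy implies: take spot capacity whenever it is available, and switch to on-demand only when completed work falls behind the straight line from start to deadline. The function below is illustrative and ignores overheads such as restarts and instance changeovers that a real policy must handle.
```python
# Simplified sketch of a uniform-progress policy: use spot whenever available;
# fall back to on-demand only when work completed is behind the straight line
# from start to deadline. Overheads (restarts, changeovers) are ignored here.
def choose_instance(done: float, total: float, t: float, deadline: float,
                    spot_available: bool) -> str:
    required = total * (t / deadline)   # uniform-progress target at time t
    if spot_available:
        return "spot"
    return "on-demand" if done < required else "idle"

# Example: 100 units of work, 10-hour deadline, spot currently unavailable.
print(choose_instance(done=30, total=100, t=4, deadline=10, spot_available=False))
# -> "on-demand" (30 < 40, i.e., behind the uniform-progress line)
```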
Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances [] [] []
CUHK & ByteDance & CMU & UCLA & Microsoft
Proactively adjust the parallelization strategy of a DNN training job for future preemptions to maximize preemption-aware throughput (i.e., liveput).
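One schematic way to read "liveput" is as an expectation of post-preemption throughput over preemption scenarios; the sketch below makes that concrete with made-up scenario probabilities and throughput numbers (it is not the paper's estimator or optimizer).
```python
# Schematic reading of "liveput": weight the throughput a parallelization plan
# would retain under each preemption scenario by that scenario's probability.
# Scenario probabilities and throughputs below are invented for illustration.
def liveput(scenarios):
    """scenarios: iterable of (probability, throughput_after_adapting)."""
    return sum(p * tput for p, tput in scenarios)

plan_a = [(0.7, 100.0), (0.2, 60.0), (0.1, 0.0)]   # aggressive plan
plan_b = [(0.7, 90.0), (0.2, 80.0), (0.1, 40.0)]   # more preemption-tolerant plan
print(liveput(plan_a), liveput(plan_b))            # prefer the plan with higher liveput
```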
DISTMM: Accelerating Distributed Multimodal Model Training []
Ohio State University & AWS
Partition and parallelize the submodules of a multimodal model based on their modalities and redistribute the training data.
Approximate Caching for Efficiently Serving Text-to-Image Diffusion Models [] []
Adobe Research & UIUC
Approximate caching: skip a number of denoising steps by reusing intermediate noise states created during prior image generations.
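A hedged sketch of the reuse idea: if an earlier request with a similar prompt cached its intermediate latent at step k, resume denoising from there instead of from pure noise, skipping k steps. The cache key, similarity test, and dummy denoiser below are placeholders, not the paper's retrieval or serving system.
```python
# Illustrative sketch of approximate caching for diffusion serving: resume
# denoising from a cached intermediate latent of a similar earlier prompt.
TOTAL_STEPS = 50
cache = {}  # coarse prompt key -> (step_reached, latent)

def prompt_key(prompt: str) -> str:
    return " ".join(sorted(prompt.lower().split()))  # crude similarity stand-in

def denoise(latent: float, steps: int) -> float:
    return latent * (0.9 ** steps)                   # dummy denoiser

def generate(prompt: str, k: int = 20) -> float:
    key = prompt_key(prompt)
    if key in cache:
        step, latent = cache[key]
        return denoise(latent, TOTAL_STEPS - step)   # reuse: skip `step` steps
    latent = 1.0                                     # start from pure noise
    intermediate = denoise(latent, k)
    cache[key] = (k, intermediate)                   # save state for future hits
    return denoise(intermediate, TOTAL_STEPS - k)

print(generate("a red bicycle"))     # cold: runs all 50 steps
print(generate("bicycle a red"))     # warm: reuses the step-20 latent
```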
Accelerating Neural Recommendation Training with Embedding Scheduling [] [] []
HKUST
Herald: an adaptive location-aware input allocator that determines where embeddings should be trained, plus an optimal communication plan generator that determines which embeddings should be synchronized.
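A minimal sketch of the location-aware allocation idea: route each training sample to the worker that already holds most of the embedding IDs it touches, so fewer embedding updates need to be synchronized across workers. The worker caches and greedy rule below are assumptions for illustration, not Herald's actual allocator or communication planner.
```python
# Minimal sketch: send each sample to the worker caching the most of the
# embedding IDs it touches, reducing cross-worker embedding synchronization.
from collections import defaultdict

worker_cache = {0: {1, 2, 3}, 1: {4, 5, 6}}          # embedding IDs held per worker

def allocate(samples):
    assignment = defaultdict(list)
    for sample in samples:                            # sample = set of embedding IDs
        best = max(worker_cache, key=lambda w: len(sample & worker_cache[w]))
        assignment[best].append(sample)
    return assignment

batch = [{1, 2, 9}, {4, 5}, {3, 6}]
print(dict(allocate(batch)))   # each sample lands on its highest-overlap worker
```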
Solving Max-Min Fair Resource Allocations Quickly on Large Graphs [] [] []
Microsoft & USC & Rice
Soroush: Single-Shot Max-Min Fair Allocator.
Deployed on Microsoft WAN.
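For background, the max-min fairness objective can be illustrated on a single shared resource with classic water-filling, as below; Soroush's contribution is computing (approximate) max-min fair allocations quickly over an entire graph of links, which this toy does not attempt.
```python
# Background sketch: classic water-filling computes a max-min fair allocation
# of one shared capacity among demands. It only illustrates the objective;
# Soroush solves the multi-link, graph-wide version with fast allocators.
def max_min_fair(capacity: float, demands: list[float]) -> list[float]:
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)          # equal split of what is left
        i = remaining[0]
        if demands[i] <= share:               # smallest demand is fully satisfied
            alloc[i] = demands[i]
            cap -= demands[i]
            remaining.pop(0)
        else:                                 # nobody else can be fully satisfied
            for j in remaining:
                alloc[j] = share
            break
    return alloc

print(max_min_fair(10.0, [2.0, 3.0, 8.0]))    # -> [2.0, 3.0, 5.0]
```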
Crescent: Emulating Heterogeneous Production Network at Scale [] []
ByteDance & Cornell
Crescent: ByteDance’s network emulation platform for preventing change-induced network incidents.
Harmonic: Hardware-assisted RDMA Performance Isolation for Public Clouds []
UIUC & Duke & Microsoft
Harmonic: microarchitecture-resource-aware RDMA performance isolation; includes a programmable intelligent PCIe switch (prototyped on an FPGA) and an RDMA-friendly rate limiter.
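As background on the rate-limiter half, the sketch below is a generic token-bucket limiter; Harmonic's limiter is hardware-assisted and tailored to RDMA, so this is only an illustration of what rate limiting does in general.
```python
# Generic token-bucket rate limiter, shown only to illustrate rate limiting;
# Harmonic's limiter is hardware-assisted and RDMA-specific, unlike this toy.
import time

class TokenBucket:
    def __init__(self, rate_per_s: float, burst: float):
        self.rate, self.capacity = rate_per_s, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost        # admit the request (e.g., one RDMA verb)
            return True
        return False                   # over the allotted rate: defer or drop

tb = TokenBucket(rate_per_s=1000, burst=100)
print(sum(tb.allow() for _ in range(500)), "of 500 requests admitted in a burst")
```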
Understanding Routable PCIe Performance for Composable Infrastructures []
UW-Madison & ZJU
rPCIeBench: a software-hardware co-designed benchmarking framework to systematically characterize the routable PCIe fabric.