HotNets 2024

Meta Info

Homepage: https://conferences.sigcomm.org/hotnets/2024/index.html

Paper list: https://conferences.sigcomm.org/hotnets/2024/program.html

Papers

Large Language Models (LLMs)

  • Networking for LLM training

    • I’ve Got 99 Problems But FLOPS Ain’t One [Paper]

      • University Politehnica of Bucharest

      • The future of large-scale AI infrastructure requires

        • (1) novel wide-area transports for inter-DC communication;

        • (2) a multipath transport and novel datacenter topologies for intra-datacenter communication;

        • (3) high-speed scale-up networks and transport.

  • LLM for networking

    • Designing Network Algorithms via Large Language Models [Paper]

      • MSR

      • NADA: Network Algorithm Design Automation via LLMs
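
As a rough illustration of an LLM-in-the-loop design search in the spirit of NADA (not the paper's actual implementation), the sketch below repeatedly asks a model to propose a candidate algorithm, scores it in a simulator, and feeds the result back. `query_llm` and `evaluate` are hypothetical callables supplied by the caller.

```python
# Illustrative sketch of an LLM-in-the-loop design search (not NADA's real code).
# query_llm and evaluate are hypothetical stand-ins for an LLM API call and a
# network simulator/benchmark, passed in by the caller.

from typing import Callable, Tuple

def design_algorithm(task: str,
                     query_llm: Callable[[str], str],
                     evaluate: Callable[[str], Tuple[float, str]],
                     rounds: int = 5) -> str | None:
    best_code, best_score = None, float("-inf")
    feedback = ""
    for _ in range(rounds):
        prompt = (f"Design a network algorithm for this task:\n{task}\n"
                  f"Feedback on the previous attempt:\n{feedback}\n"
                  "Return only the source code.")
        candidate = query_llm(prompt)           # ask the LLM for a candidate design
        score, feedback = evaluate(candidate)   # run it in simulation, get a score
        if score > best_score:
            best_code, best_score = candidate, score
    return best_code
```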

Congestion Control

  • MLTCP: A Distributed Technique to Approximate Centralized Flow Scheduling For Machine Learning [Paper]

    • MIT

    • Scale the congestion window size (or sending rate) based on the number of bytes sent in each training iteration, so that flows of competing jobs interleave their communication phases and approximate a centralized flow schedule.
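
A minimal sketch of the idea, assuming each flow can observe how many bytes its job has sent in the current training iteration; the scaling function and constants below are illustrative, not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): scale a flow's congestion
# window by how far its job has progressed through the current training iteration.
# The exact scaling function and constants are assumptions.

def adjusted_cwnd(base_cwnd: int, bytes_sent_this_iter: int,
                  bytes_per_iter: int, boost: float = 1.0) -> int:
    """Window grows with per-iteration progress, nudging flows of different jobs
    to finish their communication phases at staggered, interleaved times."""
    progress = min(bytes_sent_this_iter / max(bytes_per_iter, 1), 1.0)
    return max(1, int(base_cwnd * (1.0 + boost * progress)))
```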

Trading Systems

  • Network Design Considerations for Trading Systems [Paper]

    • Jane Street

Caching

  • Revisiting Cache Freshness for Emerging Real-Time Applications [Paper]

    • UC Berkeley

    • At real-time timescales, making freshness decisions in response to incoming writes is more efficient than TTL-based policies.
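
To make the contrast concrete, here is a toy comparison between a cache whose entries stay fresh until the backing store reports a write and a baseline TTL cache; this is an illustrative sketch, not the paper's design.

```python
# Toy contrast between write-triggered freshness and TTL expiry (illustrative only).
import time

class WriteTriggeredCache:
    """Entries stay usable until the backing store reports a write for that key."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)          # fresh until explicitly invalidated

    def on_backend_write(self, key, new_value=None):
        if new_value is None:
            self._data.pop(key, None)       # invalidate on write
        else:
            self._data[key] = new_value     # or push the new value immediately


class TTLCache:
    """Baseline: entries expire after a fixed ttl, regardless of writes."""
    def __init__(self, ttl: float):
        self._ttl, self._data = ttl, {}     # key -> (value, expiry time)

    def put(self, key, value):
        self._data[key] = (value, time.time() + self._ttl)

    def get(self, key):
        item = self._data.get(key)
        return item[0] if item and item[1] > time.time() else None
```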

  • Rethinking Web Caching: An Optimization for the Latency-Constrained Internet [Paper]

    • Shahid Beheshti University & Università della Svizzera italiana & Institute for Research in Fundamental Sciences & Sharif University of Technology

    • Web servers proactively provide clients with the latest validation tokens for resources during the initial step of page loading → Allow browsers to use unchanged cached content without unnecessary round trips.
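
A rough sketch of the mechanism, assuming the server can piggyback a resource-to-validator map (e.g., ETags) on the initial response; the data shapes and helper below are assumptions, not the paper's protocol.

```python
# Illustrative sketch: the server announces fresh validators (e.g., ETags) for a
# page's subresources up front; the browser reuses any cached copy whose validator
# still matches, skipping conditional-request round trips.

def plan_fetches(pushed_validators: dict[str, str],
                 cache: dict[str, tuple[str, bytes]]) -> dict[str, bytes | None]:
    """pushed_validators: url -> current ETag announced by the server.
    cache: url -> (cached ETag, cached body).
    Returns url -> body for reusable cache hits, or None when a fetch is needed."""
    plan = {}
    for url, current_etag in pushed_validators.items():
        cached = cache.get(url)
        if cached and cached[0] == current_etag:
            plan[url] = cached[1]   # reuse without any revalidation round trip
        else:
            plan[url] = None        # stale or missing: fetch from the server
    return plan
```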

Performance Analysis

  • End-to-End Performance Analysis of Learning-enabled Systems [Paper]

    • USC & Hebrew University of Jerusalem & Rice University & Microsoft

    • A gray-box approach (leveraging partial information about the system) → Use gradients to analyze the end-to-end performance of DNN-based systems.
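
One way to read "use gradients to analyze performance": estimate how sensitive an end-to-end metric is to each input or configuration knob. The generic sketch below uses finite differences and a hypothetical `end_to_end_metric` stand-in; it is not the paper's method.

```python
# Generic gradient/sensitivity sketch for a learning-enabled pipeline.
# end_to_end_metric is a hypothetical stand-in for running the system
# (DNN plus surrounding components) and measuring e.g. latency or quality.

from typing import Callable, Sequence

def sensitivity(end_to_end_metric: Callable[[Sequence[float]], float],
                knobs: Sequence[float], eps: float = 1e-3) -> list[float]:
    base = end_to_end_metric(knobs)
    grads = []
    for i in range(len(knobs)):
        perturbed = list(knobs)
        perturbed[i] += eps
        grads.append((end_to_end_metric(perturbed) - base) / eps)
    return grads  # large |gradient| flags the knobs that dominate performance
```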

  • Buffy: A Formal Language-Based Framework for Network Performance Analysis [Paper]

    • UWaterloo & Princeton

    • Language abstractions that let users model network functionality and analysis tasks as imperative, solver-agnostic programs.

    • A framework that transforms these programs into representations that the appropriate solver can analyze.
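
To illustrate what an imperative, solver-agnostic model of network functionality plus an analysis task might look like (purely hypothetical; this is not Buffy's actual syntax), one could write the behavior and the property in plain code and leave the solver encoding to the framework.

```python
# Purely illustrative: an imperative description of a bounded FIFO queue and a
# property about it. A framework in the spirit of Buffy would translate such a
# program into a solver-specific encoding; none of this is Buffy's real syntax.

def bounded_queue_never_overflows(arrivals: list[int], service_rate: int,
                                  capacity: int) -> bool:
    backlog = 0
    for arriving in arrivals:          # one time step per loop iteration
        backlog = max(0, backlog + arriving - service_rate)
        if backlog > capacity:         # property under analysis: no overflow
            return False
    return True
```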

Reliability

  • Automatic Configuration Repair [Paper]

    • XJTU & ByteDance

    • Draw some insights from the field of Automatic Software Repair (ASR).

    • Propose localize-fix-validate as a possible approach to realize Automatic Configuration Repair (ACR).
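
A high-level sketch of a localize-fix-validate loop; the callables are hypothetical placeholders for a fault localizer, a patch generator (rule-based or LLM-based), and a configuration verifier/emulator.

```python
# Illustrative localize-fix-validate loop for automatic configuration repair.
# localize, propose_fix, and validate are hypothetical stand-ins supplied by
# the caller: fault localizer, patch generator, and config verifier/emulator.

from typing import Callable

def repair_config(config: str,
                  localize: Callable[[str], list[str]],
                  propose_fix: Callable[[str, list[str]], str],
                  validate: Callable[[str], bool],
                  max_attempts: int = 10) -> str | None:
    for _ in range(max_attempts):
        if validate(config):
            return config                       # repaired (or already correct)
        suspects = localize(config)             # stanzas/lines likely at fault
        config = propose_fix(config, suspects)  # patch only the suspect parts
    return None                                 # give up after max_attempts
```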

Acronyms

  • TTL: Time-To-Live

  • DNN: Deep Neural Network

  • DC: Datacenter
