PhD Seminar • Systems and Networking • Cache Your Prompt When It’s Green: Carbon-Aware Caching for Large Language Model Serving

Friday, February 13, 2026 1:00 pm - 2:00 pm EST (GMT -05:00)

Please note: This PhD seminar will take place in DC 1302.

Henry Tian, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Sihang Liu

As Large Language Models (LLMs) become widely used, their environmental impact, particularly carbon emissions, has attracted increasing attention. While caching reduces operational carbon by avoiding redundant computation, it also introduces significant embodied carbon due to the high-performance storage required, creating a non-trivial tradeoff between operational savings and embodied costs.
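To make that tradeoff concrete, here is a toy back-of-the-envelope model (all constants and the hit-rate curve are illustrative assumptions, not numbers from the talk): amortized embodied carbon grows linearly with cache size, while operational carbon from recomputation shrinks with diminishing returns, so total emissions are minimized at an intermediate cache size rather than at "cache everything" or "cache nothing".

```python
# Toy model of the embodied-vs-operational carbon tradeoff in LLM caching.
# All constants below are hypothetical, for illustration only.
EMBODIED_G_PER_GB = 5.0   # amortized embodied carbon of cache storage (gCO2e per GB)
OP_G_PER_MISS = 0.8       # operational carbon of recomputing one missed prompt (gCO2e)
REQUESTS = 10_000         # requests served over the amortization period

def hit_rate(cache_gb: float) -> float:
    """Assumed diminishing-returns hit curve: bigger caches help, sublinearly."""
    return cache_gb / (cache_gb + 20.0)

def total_carbon(cache_gb: float) -> float:
    """Embodied cost of provisioned storage plus operational cost of misses."""
    embodied = EMBODIED_G_PER_GB * cache_gb
    operational = OP_G_PER_MISS * REQUESTS * (1.0 - hit_rate(cache_gb))
    return embodied + operational

for gb in (0, 10, 40, 160, 640):
    print(f"{gb:>4} GB -> {total_carbon(gb):8.0f} gCO2e")

# Sweep cache sizes: the total-carbon minimum falls strictly between the extremes.
best = min(range(0, 1000), key=total_carbon)
print("lowest total carbon near", best, "GB")
```

Under these made-up constants the optimum lands at a mid-sized cache, which is exactly why the allocation decision is non-trivial.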

To address this, I will present GreenCache, a carbon-aware cache management framework that dynamically derives resource allocation plans for LLM serving. GreenCache analyzes the correlation between carbon emissions and SLO satisfaction, reconfiguring resources over time to balance SLO compliance against carbon emissions under dynamic workloads. Our evaluation on real-world request traces demonstrates that GreenCache significantly reduces total carbon emissions while consistently meeting latency constraints.
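As a rough illustration of the kind of per-entry decision a carbon-aware cache must make (the rule, constants, and signals below are my own assumptions for exposition, not GreenCache's actual algorithm): an entry is worth caching only when the operational carbon its future hits avoid exceeds the embodied carbon of holding it, and that balance shifts with the grid's carbon intensity.

```python
# Illustrative carbon-aware cache admission rule. The constants and the
# decision logic are assumptions for this sketch, not GreenCache's design.

EMBODIED_G_PER_ENTRY = 0.15   # assumed amortized embodied carbon of holding one entry (gCO2e)

def should_cache(expected_hits: float,
                 grid_intensity_g_per_kwh: float,
                 kwh_per_recompute: float = 0.002) -> bool:
    """Admit an entry only if expected operational savings beat embodied cost.

    Each future hit avoids one recomputation; the carbon avoided depends on
    how dirty the grid is (here approximated by its current intensity).
    """
    saved = expected_hits * kwh_per_recompute * grid_intensity_g_per_kwh
    return saved > EMBODIED_G_PER_ENTRY

# On a dirty grid (400 gCO2e/kWh), a single expected reuse justifies caching;
# on a green grid (50 gCO2e/kWh), recomputing is the lower-carbon choice.
print(should_cache(1.0, 400.0), should_cache(1.0, 50.0))
```

The point of the sketch is only that the admission threshold moves with grid carbon intensity; a real system would also fold in latency SLOs and workload dynamics, as the abstract describes.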