PhD Defence • Systems and Networking — Building Efficient Software to Support Content Delivery Services

Tuesday, August 13, 2019 9:00 AM EDT

Benjamin Cassell, PhD candidate
David R. Cheriton School of Computer Science

Many content delivery services use key components such as web servers, databases, and key-value stores to serve content over the Internet. This content includes web pages, streaming video and audio, pictures, games, personal data, social networking content, and software. Today’s content delivery services face challenges unlike those of the past. The first challenge is scale: content is consumed at an unprecedented and growing rate, and much of this content is increasing in size. For example, it is becoming common for videos and photos to be delivered in Ultra High Definition (UHD) or with High Dynamic Range (HDR), resulting in large amounts of data being transferred to an increasing number of consumers. This scale drives the need for efficient content delivery, as the physical and virtual machines required to serve content are expensive.

Another challenge faced by modern content delivery systems is an increase in resource demand and contention. Services that run in cloud environments, for example, must share physical resources with co-located applications. Systems must also cope with the resource consumption that accompanies growing scale and content sizes. Furthermore, other modern features consume additional resources: content encryption, for example, is increasingly ubiquitous even for large content such as streaming video, and consumes large amounts of CPU resources.

Existing systems have difficulty adapting to these challenges while still performing efficiently. For instance, while many systems are designed to work with small data, they often struggle to service many concurrent requests for large data (as is the case for streaming video web servers). Our main goal is to demonstrate how software can be augmented or replaced to help improve the performance and hardware efficiency of targeted components of modern content delivery services.

We first introduce Libception, a system designed to help improve disk throughput for applications that process numerous concurrent disk requests for large content. By using serialization and aggressive prefetching, Libception improves the throughput of the Apache and nginx web servers by a factor of 2 on FreeBSD and 2.5 on Linux when serving HTTP streaming video content. Notably, this improvement is achieved without changing the source code of either web server. We additionally show that Libception’s benefits translate into performance gains for other workloads, reducing the runtime of a microbenchmark using the utility diff by 50% (again without modifying the application’s source code).
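The abstract does not describe Libception’s internals, but the two ideas it names, request serialization and aggressive prefetching, can be illustrated with standard POSIX calls. The sketch below is a hypothetical illustration rather than Libception’s actual mechanism: a mutex serializes large read requests so the disk sees one sequential stream at a time, and posix_fadvise(POSIX_FADV_WILLNEED) asks the kernel to read the next window of the file before the application requests it. The file path and window size are placeholders.

```c
/* Hypothetical sketch, not Libception's code: serialized reads plus
 * kernel-assisted prefetching using portable POSIX calls. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define PREFETCH_WINDOW (8 * 1024 * 1024)   /* illustrative 8 MiB read-ahead */

static pthread_mutex_t disk_lock = PTHREAD_MUTEX_INITIALIZER;

/* Issue one large read at a time, then hint the kernel to prefetch the
 * next window so the following request is served from the page cache. */
static ssize_t serialized_read(int fd, void *buf, size_t len, off_t off)
{
    pthread_mutex_lock(&disk_lock);
    ssize_t n = pread(fd, buf, len, off);
    if (n > 0)
        posix_fadvise(fd, off + n, PREFETCH_WINDOW, POSIX_FADV_WILLNEED);
    pthread_mutex_unlock(&disk_lock);
    return n;
}

int main(void)
{
    char buf[64 * 1024];
    int fd = open("video.mp4", O_RDONLY);    /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = serialized_read(fd, buf, sizeof buf, 0);
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}
```

Libception itself achieves these effects without modifying the web server’s source code; the sketch only shows why serializing requests and prefetching ahead of the reader help when many clients stream large files concurrently.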

We next implement Nessie, a distributed, RDMA-based, in-memory key-value store whose unique protocol allows inter-server operations to complete without consuming any CPU resources other than those of the initiating server. Nessie’s design is intended to improve performance for systems in environments where CPU resources are shared (such as cloud environments), systems that perform in-memory distribution of large data, and systems that experience frequent periods of non-peak load during which energy could be conserved. We find that Nessie improves throughput by 70% versus other approaches when storing large values in write-oriented workloads. Nessie also doubles throughput versus other approaches when CPU contention is introduced. Finally, Nessie provides 41% power savings (relative to idle power consumption) versus other approaches when system load is at 20% of peak throughput.

Finally, we build and evaluate RocketStreams, a framework which facilitates the creation of applications that disseminate and deliver live streaming video. Our framework exposes an easy-to-use API which provides applications with access to high-performance live streaming video dissemination, eliminating the need to implement complicated data management and networking code. RocketStreams’ TCP-based dissemination compares favourably to industry-grade alternatives, reducing CPU utilization on delivery nodes by 54% and increasing viewer throughput by 27% versus the Redis data store. Additionally, when RDMA-enabled hardware is available, RocketStreams provides RDMA-based dissemination which further increases overall performance, decreasing CPU utilization by 95% and increasing concurrent viewer throughput by 55% versus Redis.
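RocketStreams’ actual API is not given in the abstract; the fragment below is a purely invented illustration of what publishing a live stream through such a framework could look like. Every name here (rs_session, rs_connect, rs_publish_segment) is hypothetical, and the stub bodies exist only so the example compiles; the point is that a dissemination framework moves each segment over its TCP or RDMA path on the application’s behalf, so the application writes no data management or networking code.

```c
/* Purely hypothetical API sketch -- these are NOT RocketStreams' real
 * interfaces; stubs stand in for the framework's dissemination path. */
#include <stdio.h>
#include <stddef.h>

typedef struct { int id; } rs_session;                  /* invented handle */

static rs_session rs_connect(const char *cluster)
{
    /* stub: a real framework would join the dissemination cluster here */
    printf("connected to %s\n", cluster);
    return (rs_session){ .id = 1 };
}

static int rs_publish_segment(rs_session s, const char *stream,
                              const void *data, size_t len)
{
    /* stub: the framework, not the application, would handle buffering
       and TCP- or RDMA-based delivery of this segment to viewers */
    (void)data;
    printf("session %d: published %zu bytes to %s\n", s.id, len, stream);
    return 0;
}

int main(void)
{
    rs_session s = rs_connect("ingest.example.net");    /* placeholder address */
    const char segment[] = "encoded video segment bytes";
    rs_publish_segment(s, "live/channel1", segment, sizeof segment - 1);
    return 0;
}
```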

Location
DC - William G. Davis Computer Research Centre, Room 2310
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
