Delay versus Stickiness Violation Trade-offs for Load Balancing in Large-Scale Data Centers

  • Borst S.
  • Liang Q.

Load balancing is a key mechanism for achieving efficient resource allocation in data centers, ensuring high server utilization and robust application performance. The load-balancing techniques implemented in current data centers typically map packets to a server IP address through a hash value computed from the five-tuple in the packet header. The hash computation allows extremely fast packet forwarding (at line speed) and provides flow 'stickiness', meaning that all packets belonging to the same flow are dispatched to the same server. Unfortunately, a static hashing operation may not always yield an optimal degree of load balancing, e.g., due to variations in server processing speeds or in the traffic characteristics of flows. Dynamic weighted hashing, e.g., implemented via 'bins' that add a level of indirection, provides a natural way to mitigate load imbalances. Specifically, reassigning bins to adjust the hashing weights and redirect flows can improve the degree of load balancing and hence the delay performance, but at the expense of flow stickiness violation and possible disruption of active flows. In the present paper we examine the fundamental trade-off between flow stickiness violation and 'delay' performance, where 'delay' refers to either bin reassignment delay or packet-level latency. We establish that relaxing the stickiness requirement by a minuscule amount yields a notable reduction in bin reassignment delay, translating into a significant speed-up of the bin reassignment process. We further demonstrate that flow stickiness violation can improve packet latency even once the bin reassignment process has reached an equilibrium in which the structural mismatches between traffic loads and server capacities have been resolved. In particular, a minor tolerance for stickiness violation is highly effective in clipping the tail of the latency distribution.
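The bin-based indirection described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's scheme: the names (`NUM_BINS`, `BinDispatcher`, `dispatch`, `reassign`) and the choice of hash function are assumptions made for the example. Each flow five-tuple hashes to a bin, each bin points to a server, and moving a bin changes the effective hashing weights at the cost of breaking stickiness for that bin's active flows.

```python
import hashlib

# Illustrative sketch of hash-based dispatching with a bin indirection
# layer; all identifiers here are hypothetical, not from the paper.

NUM_BINS = 8  # more bins than servers gives finer-grained weights


def bin_of(five_tuple):
    """Map a flow five-tuple to a bin via a stable hash."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BINS


class BinDispatcher:
    def __init__(self, servers, num_bins=NUM_BINS):
        # Initially spread bins evenly over servers (uniform weights).
        self.bin_to_server = [servers[i % len(servers)]
                              for i in range(num_bins)]

    def dispatch(self, five_tuple):
        """All packets of a flow hash to the same bin, hence the same
        server (flow stickiness), as long as the bin is not moved."""
        return self.bin_to_server[bin_of(five_tuple)]

    def reassign(self, b, server):
        """Move bin b to another server: this adjusts the effective
        hashing weights, but violates stickiness for any active flows
        currently hashed to bin b."""
        self.bin_to_server[b] = server


servers = ["s1", "s2"]
lb = BinDispatcher(servers)
flow = ("10.0.0.1", "10.0.0.2", 12345, 80, "TCP")
first = lb.dispatch(flow)
assert lb.dispatch(flow) == first  # stickiness: same flow, same server
```

Reassigning the flow's bin to a different server redirects all of that bin's flows at once, which is exactly the granularity at which the load-balancing/stickiness trade-off in the abstract plays out.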
