Delay versus Stickiness Violation Trade-offs for Load Balancing in Large-Scale Data Centers

  • Borst S.
  • Liang Q.

Load balancing provides a key mechanism for achieving efficient resource allocation in data centers, ensuring high levels of server utilization and robust application performance. The load balancing techniques implemented in current data centers tend to rely on mapping packets to a server IP address through a hash value calculated from the flow five-tuple in the packet header. The hash calculation allows extremely fast packet forwarding (at line speed) and provides flow 'stickiness', meaning that all packets belonging to the same flow get dispatched to the same server. Unfortunately, a nominal static hashing operation may not always yield an optimal degree of load balancing, e.g., due to variations in server processing speeds or in traffic characteristics of flows. Dynamic weighted hashing, e.g., implemented via 'bins' to add a level of indirection, provides a natural way to mitigate load imbalances. Specifically, bin reassignment to adjust the hashing weights and redirect flows can improve the degree of load balancing and hence the delay performance, but at the expense of flow stickiness violation and possible disruption of active flows. In the present paper we examine the fundamental trade-off between flow stickiness violation and 'delay' performance, where 'delay' refers to either bin reassignment delay or packet-level latency. We establish that relaxing the stickiness requirement by a minuscule amount yields a notable reduction in the bin reassignment delay, translating into a significant speed-up of the bin reassignment process. We further demonstrate that flow stickiness violation can help improve packet latency performance even once the bin reassignment process has reached an equilibrium in which the structural mismatches between traffic loads and server capacities have been resolved. In particular, a minor level of stickiness violation tolerance is highly effective in clipping the tail of the latency distribution.
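The bin indirection scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, bin count, and hash choice are assumptions made purely for exposition. Flows hash deterministically to a fixed set of bins, each bin maps to a server, and reassigning a bin redirects its flows (a stickiness violation for flows that are still active).

```python
import hashlib

class BinLoadBalancer:
    """Sketch of dynamic weighted hashing with a 'bin' indirection layer.

    Flows hash to a fixed set of bins; each bin maps to a server.
    Moving a bin between servers adjusts the effective hashing weights,
    but violates stickiness for active flows in that bin.
    """

    def __init__(self, servers, num_bins=8):
        self.servers = servers
        # Initially spread the bins round-robin across the servers.
        self.bin_to_server = [servers[i % len(servers)]
                              for i in range(num_bins)]

    def _bin_index(self, five_tuple):
        # Deterministic hash of the flow five-tuple
        # (src IP, dst IP, src port, dst port, protocol).
        key = "|".join(map(str, five_tuple)).encode()
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest[:4], "big") % len(self.bin_to_server)

    def dispatch(self, five_tuple):
        # All packets of the same flow hit the same bin, hence the same
        # server -- until that bin is reassigned.
        return self.bin_to_server[self._bin_index(five_tuple)]

    def reassign_bin(self, bin_idx, new_server):
        # Rebalancing step: flows currently hashing to this bin are
        # redirected, i.e. they experience a stickiness violation.
        self.bin_to_server[bin_idx] = new_server
```

The indirection layer is what makes rebalancing cheap: the dispatcher only consults a small bin-to-server table, so shifting load requires rewriting one table entry rather than recomputing the hash function.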
