## Fast Layer-4 Packet Classification Using Space Decomposition Techniques

### Abstract

Packet classification is the problem of matching each incoming packet at a router against a database of filters, which specify forwarding rules for the packets. Filters are a powerful and uniform way to implement new network services such as firewalls, Network Address Translation (NAT), Virtual Private Networks (VPNs), and per-flow or class-based Quality of Service (QoS) guarantees \cite{Kumar98}. While several schemes have been proposed recently that can perform packet classification at high speeds, none of them achieves fast worst-case time for adding or deleting filters from the database \cite{RouterPlugins,Cheenu98,Lakshman98}. In this paper, we present a new scheme, based on space decomposition, whose search time is comparable to the best existing schemes, but which also offers fast worst-case filter update time. The three key ideas in this algorithm are as follows: (1) an innovative data structure based on quadtrees for a hierarchical representation of the recursively decomposed search space, (2) fractional cascading and pre-computation to improve packet classification time, and (3) prefix partitioning to improve update time. Depending on the requirements of the system in which this algorithm is deployed, a single parameter $\alpha$ can be used to trade off search time against update time. The algorithm is also amenable to fast software and hardware implementation.
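To make the quadtree idea concrete, here is a minimal sketch (our own illustration, not the paper's actual data structure) of classifying packets against two-dimensional filters given as (source prefix, destination prefix) pairs. Each quadtree level consumes one bit of each dimension, i.e., one quadrant split of the search space; the class and method names are hypothetical, and the sketch omits the paper's fractional cascading and prefix partitioning.

```python
# Hedged sketch: quadtree over the 2D (src, dst) address space.
# Prefixes and addresses are bit strings; lower priority value wins.

class QuadNode:
    def __init__(self):
        self.children = {}   # (src_bit, dst_bit) -> QuadNode (one quadrant each)
        self.filters = []    # (src_prefix, dst_prefix, priority, action)

class QuadTree:
    def __init__(self):
        self.root = QuadNode()

    def insert(self, src_prefix, dst_prefix, priority, action):
        # Descend one quadrant per bit while both prefixes specify a bit;
        # the filter is stored at the deepest node fully inside its region.
        node = self.root
        for i in range(min(len(src_prefix), len(dst_prefix))):
            key = (src_prefix[i], dst_prefix[i])
            node = node.children.setdefault(key, QuadNode())
        node.filters.append((src_prefix, dst_prefix, priority, action))

    def classify(self, src_bits, dst_bits):
        # Follow the packet's point down through the quadrants, testing the
        # filters stored along the path and keeping the best-priority match.
        best, node, depth = None, self.root, 0
        while node is not None:
            for s, d, prio, action in node.filters:
                if src_bits.startswith(s) and dst_bits.startswith(d):
                    if best is None or prio < best[0]:
                        best = (prio, action)
            if depth >= len(src_bits):
                break
            node = node.children.get((src_bits[depth], dst_bits[depth]))
            depth += 1
        return best[1] if best else "deny"
```

Search cost is bounded by the path length (address width), while an update touches only the nodes on one path, which is the intuition behind the fast worst-case update time claimed above.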

### Comparative Evaluation of Software Implementations of Layer-4 Packet Classification Schemes

In Proceedings of IEEE ICNP 2001, Riverside, CA, November 11-14, 2001.

Vivek Sahasranaman, Inktomi

Milind M. Buddhikot, Dept. of Network Software Research, Lucent Bell Labs.

### Abstract

In recent years, the advent of new network processors \cite{IntelIXP-IBM} and GHz-speed general-purpose CPUs has prompted network equipment designers to increasingly consider implementing per-packet processing in software. Layer-4 packet classification is one of the basic per-packet processing tasks central to several new differentiated services, such as Quality-of-Service (QoS) guarantees, Virtual Private Networks (VPNs), Network Address Translation (NAT), and tariff-based access to resources. To this end, a priori knowledge of the performance of software implementations of well-known Layer-4 packet classification schemes is very useful. In this paper, we compare the performance of three state-of-the-art packet classification schemes, namely Grid-of-Tries (GOT) \cite{Cheenu98}, Packet Classification Algorithms using Recursive Space-decomposition (PACARS) \cite{Milind99}, and Tuple-Space-Search (TSS) \cite{Cheenu99}, implemented in the FreeBSD 3.3 UNIX kernel. We developed two new OS extensions, namely the *Virtual Filter Database (VFD)* framework and a new `routing socket` API, to implement these algorithms. We used real-life as well as synthetic rule databases to evaluate their performance. Our key conclusions are: (1) Compression of the trie data structures central to many classification algorithms has limited benefit on general-purpose CPUs. (2) Static algorithms such as GOT that do not support dynamic updates deliver very fast search performance, on the order of a few microseconds per search, and may be adequate for static firewalls. (3) With medium-sized databases, the PACARS and TSS schemes provide update times on the order of hundreds of microseconds and search times on the order of tens of microseconds. These algorithms are adequate for dynamic firewalls, traffic directors, and network monitoring applications in enterprise networks.
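For readers unfamiliar with the third scheme compared above, the following is a minimal sketch of the Tuple-Space-Search idea (our own illustration under simplifying assumptions, not the paper's kernel implementation): filters are grouped by their prefix-length tuple, each group is an exact-match hash table on the masked header bits, and a lookup makes one probe per occupied tuple. All names here are hypothetical.

```python
# Hedged sketch: Tuple Space Search over (src_len, dst_len) tuples.
# Prefixes and addresses are bit strings; lower priority value wins.

class TupleSpace:
    def __init__(self):
        # (src_len, dst_len) -> { (src_key, dst_key): (priority, action) }
        self.tuples = {}

    def insert(self, src_prefix, dst_prefix, priority, action):
        # O(1) expected update: one hash insert in the filter's own tuple.
        tup = (len(src_prefix), len(dst_prefix))
        self.tuples.setdefault(tup, {})[(src_prefix, dst_prefix)] = (priority, action)

    def delete(self, src_prefix, dst_prefix):
        tup = (len(src_prefix), len(dst_prefix))
        self.tuples.get(tup, {}).pop((src_prefix, dst_prefix), None)

    def classify(self, src_bits, dst_bits):
        # Probe every occupied tuple with the packet's bits truncated to
        # that tuple's prefix lengths; keep the best-priority hit.
        best = None
        for (sl, dl), table in self.tuples.items():
            hit = table.get((src_bits[:sl], dst_bits[:dl]))
            if hit is not None and (best is None or hit[0] < best[0]):
                best = hit
        return best[1] if best else "deny"
```

The contrast visible even in this sketch mirrors the conclusions above: TSS pays one hash probe per tuple at search time but supports constant-expected-time inserts and deletes, whereas a precomputed static structure like GOT trades away update support for faster search.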