Method for performing tree based ACL lookups

Information

  • Patent Number
    7,536,476
  • Date Filed
    Monday, December 22, 2003
  • Date Issued
    Tuesday, May 19, 2009
Abstract
A method for performing a lookup of a packet against an access control list. In one example, the method includes receiving an access control list, partitioning said list into two or more complementary sets, and for each set, forming a tree having one or more end nodes including filtering rules, and internal nodes representing decision points, thereby forming at least two trees. In one example, when a packet arrives, the two or more trees are traversed using the packet header information, wherein the decision points in the internal nodes are used to guide the packet selection down the trees to an end node.
Description
FIELD OF THE INVENTION

This invention relates, in general, to routers, and more particularly, to access control lists used within routers.


BACKGROUND

An Access Control List (ACL) includes a plurality of Filtering Rules (FRs) for packet classification. The FRs are used in firewalls implemented within a router to determine an action to be performed with regard to a received packet based on its classification. Information from the packet's header is compared against the FRs in order to determine whether the packet falls within the scope of one or more of the FRs.


Each filtering rule or filter may include one or more of the fields listed in Table 1, and the rules specify how each field should be mapped or compared with the packet header. Typically, the source/destination IP address is specified by a prefix/mask, and the source/destination port numbers are specified by a range. A given packet can match several of the filters in the database, so each filter is given a cost, and the action dictated by the least-cost matching filter is applied, in one example.


For instance, upon receipt of a packet, the packet may be compared to the ACL and an action such as permit, deny, count, redirect or log may be performed on the packet. The packet fields typically used include the source IP address, the destination IP address, the layer four protocol, the source and destination ports, and possibly some other miscellaneous packet properties. Examples of these fields and their sizes are shown in Table 1.
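As a rough illustration only, the fields of Table 1 might be carried in a structure like the following Python sketch; the class name, field names and types are hypothetical and are not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FilteringRule:
    """Hypothetical container for the ACL fields of Table 1."""
    protocol: Optional[int]        # type/protocol, e.g. TCP/UDP (4-8 bits)
    src_prefix: Tuple[int, int]    # (address, mask length) for the 32-bit source IP
    dst_prefix: Tuple[int, int]    # (address, mask length) for the 32-bit destination IP
    src_ports: Tuple[int, int]     # inclusive 16-bit source port range
    dst_ports: Tuple[int, int]     # inclusive 16-bit destination port range
    misc: int                      # e.g. "established", "echo" flags (4-8 bits)
    cost: int                      # filter cost; the least-cost matching filter wins
    action: str                    # permit, deny, count, redirect or log
```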









TABLE 1
Fields and sizes often used in the filtering rules in an ACL for classification

Field                      Size (bits)    Comment
Type/protocol              4-8            Such as IP, UDP, etc.
Source IP address          32
Destination IP address     32
Source Port Number         16
Destination Port Number    16
Misc.                      4-8            Such as established, echo, etc.
Total                      104-108

Usually, rules contained within an ACL are presented in a list, a sample of which is shown below:

    • permit tcp any any established
    • permit udp 10.35.0.0/16 10.50.6.128/26 ranges snmp snmptrap
    • deny tcp 10.21.133.0/24 31.50.138.141 eq 3306


Based on sample conventional ACL lists provided by industry, ACL lists have ranged from a few tens of rules to a few thousand filtering rules, as shown in Table 2.









TABLE 2
Example of sizes of ACLs

Source    Size (entries)    Comment
AOL       2814              Particularly long example
UCR       361               Problematic for memory in methods like RFC and HiCuts
MFN140    153               Typical
MFN112    114               Typical
EBORN8    33                Short list

The most widely used format for an ACL is that given by Cisco Systems, in which the rules are given in a linear list and the first matching rule is applied to a received packet. In this example, the ordering number in the list is used as the implicit filter cost.


The simplest hardware implementations use ternary CAMs (content addressable memories). Rules are stored in the CAM array in order of decreasing priority. While simple and flexible, ternary CAMs are fairly expensive and power hungry; however, 64K×128 ternary CAMs may be available.


Recursive Flow Classification (RFC), as described in P. Gupta and N. McKeown, “Packet Classification on Multiple Fields,” Proceedings of Sigcomm, Computer Communication Review, Vol. 29, No. 4, pages 147-160, September 1999, is another hardware-mappable method, believed to be used in existing implementations, including by Cisco Systems and Lucent Technologies. See also T. V. Lakshman and D. Stiliadis, “High Speed Policy-based Packet Forwarding Using Efficient Multidimensional Range Matching,” Proceedings of ACM Sigcomm 98, October 1998. It is also known as equivalenced cross-producting. The method can be pipelined and lends itself to an efficient hardware implementation using considerable memory, typically implemented in DRAM. However, without pushing the technology envelope, classifiers with thousands of rules can be implemented and achieve OC192 rates for 40-byte packets. In this method, all fields of the packet header are split up into multiple chunks, which are used to index into multiple memories. The contents of each memory are precomputed so as to compress regions that fall into similar rule sets. Recursive recombination of the results and subsequent lookups finally yields the best matching rule.


A number of hash-based methods exist, such as Tuple Space Search, described in V. Srinivasan, S. Suri and G. Varghese, “Packet Classification using Tuple Space Search,” Proceedings of ACM Sigcomm, pages 135-146, September 1999. In this example, rules are initially partitioned into bins that uniquely identify the prefix lengths specified in each dimension of the rule, and each of these bins is subsequently hashed. An analysis of the available ACL lists indicated that the number of bins is relatively large, and since each bin needs to be searched and the results combined, there would not be considerable improvement over a linear search.


Another technique uses a grid of tries, as described in V. Srinivasan, S. Suri, G. Varghese and M. Waldvogel, “Fast Scalable Level Four Switching,” Proceedings of SIGCOMM 98, pages 203-214, September 1998. This method generalizes the standard one-dimensional trie search solution to two dimensions. Further work, described in L. Qiu, S. Suri, G. Varghese, “Fast Firewall Implementations for Software and Hardware-based Routers,” Technical Report MSR-TR-2001-61, www.research.microsoft.com, showed how to extend a trie search to multiple dimensions, and how to apply pruning, compression and selective duplication to balance the memory and throughput constraints. Unfortunately, backtracking may be difficult to implement in a forwarding engine.


Another method, called Hierarchical Intelligent Cuttings (HiCuts), is described in Pankaj Gupta and Nick McKeown, “Packet Classification using Hierarchical Intelligent Cuttings,” IEEE Micro, pages 34-41, Vol. 21, No. 1, January/February 2000. HiCuts is a tree-based approach that partitions the multi-dimensional search space guided by heuristics. Each end node of the tree stores a small number of rules that are then sequentially searched to find the best match. Associated with each internal node is a cut, defined as the number of equal-size intervals into which a particular dimension is partitioned, and this is performed recursively on the child nodes at each level.


It is also possible to recast classification as a problem in computational geometry, such as the “point location,” “ray tracing” or “rectangle enclosure” problem. An example is described in F. Preparata and M. I. Shamos, “Computational Geometry: an Introduction,” Springer-Verlag, 1985. For non-overlapping regions in multiple dimensions, there are results that allow query time to be traded off against storage. Unfortunately, none of these methods perform well for both storage requirements and query time, nor can these results be easily generalized to the case of overlapping regions, which is the case for ACL classification.


Accordingly, as recognized by the present inventor, what is needed is a method for performing a lookup of a packet against an access control list that is memory efficient. It is against this background that various embodiments of the present invention were developed.


SUMMARY

In light of the above and according to one broad aspect of one embodiment of the present invention, disclosed herein is a method for performing a lookup of a packet against an access control list. In one example, the method includes receiving an access control list, partitioning said list into two or more complementary sets, and for each set, forming a tree having one or more end nodes including filtering rules, and internal nodes representing decision points, thereby forming at least two trees. In one example, when a packet arrives, the two or more trees are traversed using the packet header information, wherein the decision points in the internal nodes are used to guide the packet selection down the trees to an end node.


Other embodiments of the invention are disclosed herein. The foregoing and other features, utilities and advantages of various embodiments of the invention will be apparent from the following more particular description of the various embodiments of the invention as illustrated in the accompanying drawings and claims.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates an example of logical operations for performing tree based lookups, in accordance with one embodiment of the present invention.



FIG. 2 illustrates an example of a tree.



FIG. 3 illustrates an example of a tree.



FIG. 4 illustrates an example of a tree.



FIG. 5 illustrates an example of a tree.



FIG. 6 illustrates an example of two complementary trees, in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

Disclosed herein is a method to implement a tree based ACL lookup process that can be used with a forwarding engine or network processing unit of a router, such as disclosed in Ser. No. 10/177,496 entitled “PACKET ROUTING AND SWITCHING DEVICE,” the disclosure of which is hereby incorporated by reference in its entirety. Embodiments of the present invention have the advantage that they may avoid the data dependent memory explosion seen in previous methods such as RFC and HiCuts, and they use simple, concisely represented decisions at each decision node. Testing of embodiments of the present invention and benchmarking against a number of ACL lists indicates that lists with thousands of filtering rules (FRs) can be implemented with efficient memory utilization and computation time similar to that required for a lookup.


In one embodiment, the methods disclosed herein partition data into two or more sets, each of which has an efficient tree based representation, whereas the original set may not have an efficient representation. Further, decision nodes are chosen in the tree in a manner which is efficient to represent and which yields good partitioning.


In one embodiment, the method includes the operations shown in FIG. 1. At operation 50, an ACL is received, and at operation 52, the ACL list is partitioned into two or more complementary sets (or, alternatively, skewed sets) based on the source IP and destination IP prefixes/masks (or, alternatively, based on other fields of the ACL entries). This partitioning produces two lists whose trees are much more memory efficient than a single tree constructed from the original set. At operation 54, a tree is constructed for each of the sets (i.e., two or more trees are formed), where end nodes consist of a short list of degenerate filtering rules, and internal nodes represent decision points. During the tree construction, these decision points are used to partition the filtering rules into smaller and smaller sets. The comparison criterion is specially selected according to a metric that attempts both to balance the tree and to prevent filter rule duplication or redundancy (it is this duplication that could cause memory explosion). This process is iterated until the list of filter rules (FRs) at a node becomes degenerate (i.e., no decision criterion can further partition the filter rules such that both child nodes have a strict subset).
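The following is a minimal Python sketch of operations 52-54 for one-dimensional, prefix-style rules (e.g. "01*", as in Table 3 below). The rule representation, the node classes and the choose_pivot callback are assumptions made here for illustration; a candidate choose_pivot based on the scoring metric of the detailed description is sketched later.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Union

@dataclass
class EndNode:
    rules: List[str]                 # short, degenerate FR list searched linearly at lookup

@dataclass
class InternalNode:
    bit: int                         # pivot bit used as the decision point
    left: Union["InternalNode", EndNode]   # rules whose pivot bit is 0
    right: Union["InternalNode", EndNode]  # rules whose pivot bit is 1

def split(rules: List[str], bit: int):
    """Partition prefix-style rules on one bit; a rule that wildcards the bit
    is replicated to both children."""
    left, right = [], []
    for r in rules:
        prefix = r.rstrip("*")
        if bit >= len(prefix):       # wildcard at this position
            left.append(r)
            right.append(r)
        elif prefix[bit] == "0":
            left.append(r)
        else:
            right.append(r)
    return left, right

def build_tree(rules: List[str], nbits: int,
               choose_pivot: Callable[[List[str], int], Optional[int]]):
    """Recursively partition the FR list; stop when no pivot gives both
    children a strict subset (the list is degenerate)."""
    bit = choose_pivot(rules, nbits)
    if bit is None:
        return EndNode(rules)
    left, right = split(rules, bit)
    return InternalNode(bit,
                        build_tree(left, nbits, choose_pivot),
                        build_tree(right, nbits, choose_pivot))
```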


At operation 56, when a packet arrives, the two or more trees are traversed. The decision points in the internal nodes are now used to guide the packet down each tree to an end node. The least cost match is selected from the filtering rules residing at the end nodes (typically only a few filters remain, which are then searched linearly).
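Continuing the sketch above, operation 56 might be expressed as follows; the matches and cost callbacks stand in for full FR evaluation and for the filter cost (weight) described earlier, and are assumptions of this sketch.

```python
def classify(packet_bits: str, trees, matches, cost):
    """Walk each complementary tree using the packet header bits, then
    linearly search the FRs at the reached end node and keep the
    least-cost match found across all trees."""
    best = None
    for root in trees:
        node = root
        while isinstance(node, InternalNode):        # decision points guide the descent
            node = node.left if packet_bits[node.bit] == "0" else node.right
        for fr in node.rules:                        # short linear search at the end node
            if matches(fr, packet_bits) and (best is None or cost(fr) < cost(best)):
                best = fr
    return best                                      # None means no FR matched

def prefix_matches(rule: str, packet_bits: str) -> bool:
    """Match helper for the one-dimensional prefix rules used in this sketch."""
    return packet_bits.startswith(rule.rstrip("*"))
```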


The operations of FIG. 1 are described in more detail below. These operations and the process described herein may be used by a forwarding engine or other portion of a router to form a set of trees, which may be used to compare information from received packets in order to process the packets received.


The entries of the ACL list may contain IP address information with respect to the FRs, and this IP address information may be represented using prefixes or masks. A prefix is a shorthand expression for an IP address or a range of IP addresses. An IP address comprises four eight-bit values separated by periods. In one example, a prefix may include an address portion and a mask portion. The address portion specifies a 32-bit value, while the mask portion specifies the number of bits that are relevant in interpreting the address portion of the prefix. In one example, the mask portion of a prefix specifies the number of left-most bits of the address portion which are relevant, with the remaining bits being irrelevant or “don't cares.” For instance, the prefix 1.1.0.0/16 represents IP addresses in the range of 1.1.0.0/32 through 1.1.255.255/32.
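For example, Python's standard ipaddress module (used here only for illustration) expands the prefix from the text into the same range:

```python
import ipaddress

net = ipaddress.ip_network("1.1.0.0/16")
print(net.network_address, "-", net.broadcast_address)   # 1.1.0.0 - 1.1.255.255
print(net.num_addresses)                                 # 65536 addresses covered by the prefix
```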


Building an Efficient Tree


In one example, a tree is built by recursively partitioning the filtering rules of an ACL at each internal decision node. This method is similar to HiCuts in that it is a tree-based approach that partitions the multi-dimensional search space guided by heuristics, and each end node of the tree stores a small number of rules that are then sequentially searched to find the best match. However, the decision node heuristic is very different in that bit pivot points are used in the Source and Destination IP fields (specified as prefix/mask) as opposed to equal ranges: bit pivots can be represented more efficiently in memory, which may be important depending on the particular implementation of the forwarding engine.


Consider the sample ACL below, with only one dimension shown for simplicity. Without loss of generality, a shorthand prefix/mask is used to represent the filter match rule.









TABLE 3
Example ACL list with rules in one dimension

NAME    Match Rule
FR1     01*
FR2     0010*
FR3     111*
FR4     000*
FR5     0*

To build a tree, a choice exists to partition using Bit 0, 1, 2 or 3 of the rule. If the bits are followed sequentially, as is done in a trie, this results in the tree shown in FIG. 2. By first considering Bit 0, the filtering rules are partitioned into two sets, one containing only 1 rule and the other 4. By next considering Bit 1, the set of 4 filtering rules may be partitioned into two sets, one containing 3 and the other 2. At this point, FR5 has to be replicated. The resulting tree is 4 levels deep and contains 3 internal tree nodes (IN 1, IN 2 and IN 3) and 4 end nodes (EN 1 . . . EN 4), as shown in FIG. 2.


However, by considering Bit 1 first, the filtering rules may be partitioned into sets containing 3 and 3 rules respectively (see the first level of the tree in FIG. 3). Note that again FR5 must be replicated to fall in both sets, as Bit 1 is wildcarded in its rule field.


Picking decision points with which to partition can be summarized by two goals:


a) Pick decision points that partition the ACL list at that point into groups that are as even as possible, with the aim of keeping the tree as flat as possible.


b) Pick decision points that minimize the number of replicated FR entries, with the aim of minimizing the memory utilization.


Often these goals will be at odds with each other, and a trade-off must be made. This trade-off was made using the following evaluation function: all candidate decision points for partitioning the ACL at that node are considered. For each candidate, the FRs are counted according to whether they would end up in the “LEFT” branch only, the “RIGHT” branch only, or need to be replicated to “BOTH” branches. A score of ALL*ALL−LEFT*RIGHT+BOTH*BOTH is tabulated for each candidate, and the candidate with the minimum score is selected.
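A small Python sketch of this evaluation, applied to the one-dimensional rules of Table 3, is shown here; it reuses the prefix-string representation of the earlier sketch and supplies the choose_pivot callback used there. Which branch is labelled LEFT versus RIGHT is just a convention in this sketch (the score is symmetric in the two counts).

```python
RULES = ["01*", "0010*", "111*", "000*", "0*"]     # FR1..FR5 from Table 3

def score_bit(rules, bit):
    """Return (LEFT, RIGHT, BOTH, score) for a candidate pivot bit, where
    score = ALL*ALL - LEFT*RIGHT + BOTH*BOTH and lower is better."""
    left = right = both = 0
    for r in rules:
        prefix = r.rstrip("*")
        if bit >= len(prefix):
            both += 1                  # wildcard here: the FR would be replicated
        elif prefix[bit] == "0":
            left += 1
        else:
            right += 1
    all_ = len(rules)
    return left, right, both, all_ * all_ - left * right + both * both

def choose_pivot(rules, nbits):
    """Pick the bit with the minimum score; return None when no bit gives
    both children a strict subset (the FR list is degenerate)."""
    best_bit, best_score = None, None
    for bit in range(nbits):
        left, right, both, score = score_bit(rules, bit)
        if left + both == len(rules) or right + both == len(rules):
            continue                   # one child would receive every rule
        if best_score is None or score < best_score:
            best_bit, best_score = bit, score
    return best_bit

for bit in range(4):
    print(bit, score_bit(RULES, bit))  # scores 21, 22, 27, 41, as in Table 4
```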


Many other trade-off functions were considered, but the above function yielded the most consistently good results. This function, evaluated for the ACL at the root of the tree, is shown below:









TABLE 4
Example Scoring at root of ACL tree

Candidate bit    LEFT Count    RIGHT Count    BOTH Count    Score
0                4             1              0             21
1                2             2              1             22
2                2             1              2             27
3                1             0              4             41
all others       0             0              5             50

Finally, consider the classification of four example packet headers (PH1 . . . PH4), using the constructed tree in FIG. 4.


When the packet header arrives at a given end node, it is compared with the FRs stored at that end node in order to determine a match; the tree traversal only serves to narrow down the number of FRs against which the packet header must be compared, but does not perform the actual packet classification. This is evident in the case of packet header PH3 shown in FIG. 4, which, at end node EN4, does not match the only FR associated with that node.


For fields that are specified as a range as opposed to the prefix/mask format, the decision node specifies “<X” where X is some integer in the field range. FRs whose specified range lies entirely below X are mapped to the left, those whose range lies entirely at or above X are mapped to the right, and those whose range spans X have to be replicated. An alternative implementation would be the use of bit comparisons instead of range comparisons for port numbers. Such an implementation may be more involved, but has benefits for the actual mapping of decision nodes to memory.
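A minimal sketch of this mapping, assuming an FR's port field is given as an inclusive range [lo, hi]:

```python
def place_range(lo: int, hi: int, x: int) -> str:
    """Place an FR with port range [lo, hi] relative to a '<X' decision node."""
    if hi < x:
        return "LEFT"        # entire range below X
    if lo >= x:
        return "RIGHT"       # entire range at or above X
    return "BOTH"            # range spans X, so the FR must be replicated

# e.g. a 'Destination Port < 1024' rule tested against a pivot X = 512
print(place_range(0, 1023, 512))   # BOTH
```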


Complementary Trees


Very often the fields of Table 1 are specified with different degrees of precision. Consider the 6 FRs in Table 5, in which the source and destination addresses are limited to 3 bits for simplicity. If any bit of either the source (or destination) address is used in a decision node, half of the FRs have a corresponding wildcard in that position, resulting in those FRs being replicated.









TABLE 5
Worst case memory example

NAME    Source IP    Destination IP
FR1     001          *
FR2     010          *
FR3     101          *
FR4     *            110
FR5     *            010
FR6     *            000

Following the tree construction method described above for building an efficient tree, but where only one tree is built, this would result in the tree of FIG. 5 with 9 end nodes, 2 FR entries per end node (16 total stored), and 8 internal nodes.


If instead the entries of this example were proportionately partitioned into two sets and two trees were built in accordance with embodiments of the present invention, the result is 6 end nodes with a single FR entry per end node (6 total stored), and 4 internal nodes, as shown in FIG. 6.


In the general case, up to N complementary trees could be required if there are N dimensions in the original data set. However, for all of the data sets examined, two trees were sufficient to give good results.


The elements of each tree may be chosen as follows. In order to choose the tree to which an FR belongs, a comparison is made between the source and destination address prefix lengths. Those with (source prefix>destination prefix) are placed on one tree, and those with (destination prefix>source prefix) on the second. If the prefixes are of the same length, then two approaches are evaluated. The first, called Balanced Complementary Trees, aims to keep the same number of FRs on both trees, while the second, called Skewed Complementary Trees, aims to make one of the trees as small as possible. Another variation that may yield better results would be to make the decision based on other FR fields, such as the port or protocol.
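A Python sketch of this partitioning is shown below; each rule is represented as a (source prefix length, destination prefix length, FR) tuple, and the exact handling of equal-length ties is one plausible reading rather than the patented procedure.

```python
def assign_trees(rules, skewed: bool):
    """Split FRs between two complementary trees by comparing source and
    destination prefix lengths; rule = (src_prefix_len, dst_prefix_len, fr).
    Ties are resolved by the Balanced policy (keep counts even) or the
    Skewed policy (keep one tree as small as possible)."""
    tree1, tree2 = [], []
    for src_len, dst_len, fr in rules:
        if src_len > dst_len:
            tree1.append(fr)
        elif dst_len > src_len:
            tree2.append(fr)
        elif skewed:
            tree1.append(fr)                     # pile all ties onto one tree
        else:
            (tree1 if len(tree1) <= len(tree2) else tree2).append(fr)
    return tree1, tree2
```

Applied to the six FRs of Table 5 (treating "*" as prefix length 0), either policy places FR1-FR3 on one tree and FR4-FR6 on the other, which is the partition whose trees are shown in FIG. 6.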


The complementary tree approach has one disadvantage. Consider a packet with packet header 010 010. With a single tree, one ends up at end node 5 and finds that it matches FR 2 first, so that rule applies (if the FR order is preserved in the leaves). However, in the two complementary trees, it matches FR 2 in end node 2 and FR 5 in end node 5. Additional information, namely the weights of FR 2 and FR 5, may be used in order to determine which rule to follow.


Results


Results are presented for a single tree and for the two variations of the complementary tree method. Although in general the single tree method results in a tree that is smaller in depth than the complementary tree methods, its memory utilization is in general much higher. Overall, the best performing method is the Skewed Complementary Trees method.









TABLE 6
Results for various methods
(Depth is given as Max/Min/Ave; the complementary-tree methods show one depth value per tree.)

                  Single Tree                       Two Balanced Complementary Trees      Two Skewed Complementary Trees
Source (entries)  Depth      Nodes   FR Entries     Depth                 Nodes  FR Entr.  Depth                Nodes  FR Entr.
AOL (2814)        20/8/14.7  11251   13964          15/7/11.5, 16/6/11.7  7284   6234      16/9/12, 15/5/10.5   7386   4785
UCR (361)         23/5/17.2  25617   49639          15/6/10.8, 9/5/6.2    1322   1636      15/5/10.4, 10/4/7.2  1054   1318
MFN140 (153)      10/7/8.1     257     427          7/3/5.4, 8/5/6.5       254    270      6/3/4.5, 8/6/6.6      256    232
MFN112 (114)      11/5/8.5     277     193          8/5/6.4, 7/4/5.2       242    134      9/5/6.7, 6/3/4.6      240    133
EBORN8 (33)       7/4/5.9       61      40          1/1/1, 6/3/4.8          60     40      0/0/0, 7/4/5.9         62     40

The total node count for each tree includes both the internal nodes and the leaf nodes. A total of 36 accesses to forwarding table RAM (FTSRAM) would have been adequate in the above data sets to complete a search. A single recirculation in the forwarding engine referenced above provides 36 accesses to FTSRAM, and so would be adequate for any of the ACL lists above.


Known Worst Case Pathological Cases


There are pathological cases that either cause the tree to be very deep or result in many degenerate FRs at an end node. These examples are shown here to help provide an understanding of the limitations of the method. None of the data sets evaluated fell into either of these categories.


Very Deep Tree


ACLs that contain FRs with patterns in the source/destination IP address such as the one shown here can be constructed with 64 FRs that require a tree 64 levels deep:

    • 1*
    • 01*
    • 001*
    • 0001*


ACLs with a pattern in the source/destination IP address such as the one shown here can be constructed with 64 FRs that require an end node with 64 entries (many degenerate FRs in the end node, requiring serial evaluation):

    • 111111 accept
    • 11111 deny
    • 1111 accept
    • 111 deny


Mapping to Forwarding Engine Memory


The following is one embodiment of how the ACL decision trees could be mapped to a memory of a forwarding engine such as the forwarding engine referenced above.


Internal Tree Nodes


Internal tree decision nodes can be mapped to 32 bits fairly easily for pointer lengths up to 12 bits. For pointer values of more than 16 bits, the higher order bits for nodes representing port number comparisons are taken from parent nodes, or bit comparisons are used instead of range comparisons for port numbers. In the following analysis, assume that each internal node consumes 4 bytes of memory.
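As one purely hypothetical packing consistent with the 4-byte budget and 12-bit pointers mentioned above (none of the field widths below are taken from the patent):

```python
def pack_internal_node(decision: int, left_ptr: int, right_ptr: int,
                       children_are_leaves: int = 0) -> int:
    """Hypothetical 32-bit internal node layout:
       bit 31       flag, e.g. children are end nodes
       bits 30..24  decision (pivot bit index or a small port-comparison code)
       bits 23..12  left child pointer (12 bits)
       bits 11..0   right child pointer (12 bits)"""
    assert decision < (1 << 7) and left_ptr < (1 << 12) and right_ptr < (1 << 12)
    return (children_are_leaves << 31) | (decision << 24) | (left_ptr << 12) | right_ptr
```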


End Nodes


The end nodes can be mapped to FRs in much the same manner as FRs are currently implemented using a list based search. Currently, four 32-bit words are utilized per FR, and there is only a single obvious spare bit.


However, some number of extra bits will have to be found to indicate the FR weight, so that the result from the appropriate tree can be selected. Three bits can be saved per IP address/mask specification with a slightly different encoding, as shown in Table 7, using the IP destination address as an example:









TABLE 7
Representing the IP address/mask in 35 bits

Field                   Length (bits)    Interpretation
IpDstShift              3                If 0-6, gives the number of bits by which the IP Destination
                                         Address is right shifted before being stored in
                                         IpDestinationAddress. If 7, the first 6 bits of
                                         IpDestinationAddress give the shift amount.
IpDestinationAddress    32               When IpDstShift is in the range [0 . . . 6], this holds the
                                         IP Destination Address right shifted by that number of bits.
IpDestinationAddress    32               When IpDstShift = 7, the first 6 bits are used as the shift
                                         index, and the remaining 26 bits hold the IP Destination
                                         Address right shifted by this index.
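The following sketch implements the Table 7 encoding, interpreting "the first 6 bits" as the most significant bits of the 32-bit word; the helper names are hypothetical.

```python
def encode_ip_mask(addr: int, mask_len: int):
    """Pack an IP destination address/mask into the 35-bit form of Table 7:
    a 3-bit IpDstShift field plus a 32-bit IpDestinationAddress word."""
    shift = 32 - mask_len                    # number of don't-care bits
    if shift <= 6:
        return shift, addr >> shift          # shift amount carried in IpDstShift
    # IpDstShift = 7: the top 6 bits of the word carry the shift amount and
    # the remaining 26 bits carry the right-shifted address
    return 7, (shift << 26) | (addr >> shift)

def decode_ip_mask(ip_dst_shift: int, word: int):
    if ip_dst_shift < 7:
        shift = ip_dst_shift
        addr = (word << shift) & 0xFFFFFFFF
    else:
        shift = word >> 26
        addr = ((word & ((1 << 26) - 1)) << shift) & 0xFFFFFFFF
    return addr, 32 - shift

# 129.45.100.0/24 (a destination prefix from Table 8): shift = 8, so IpDstShift = 7
addr = (129 << 24) | (45 << 16) | (100 << 8)
assert decode_ip_mask(*encode_ip_mask(addr, 24)) == (addr, 24)
```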









More precisely, in the forwarding engine referenced above in the co-pending application Ser. No. 10/177,496 entitled “PACKET ROUTING AND SWITCHING DEVICE,” in one example, a 1024-word block per LxU stage major block may be dedicated to ACLs in software, which translates to 12,288 words total (12 stages), or 49,152 bytes. It takes 4 words per FR (ACL list item), for a total of 3072 items. Because of memory allocation and organization constraints (e.g., lists not divisible by 3 waste some memory), effective usage will not be 100%.


However, these 6 bits combined with the one obvious spare bit allow only 7 bits to specify the FR weight. If this method is used, and if the encoding is restricted to four 32-bit words, some additional thought needs to be given to extracting additional bits for the weight field.


In the case where there are multiple FRs in an end node, these FRs are degenerate and as such are closely related. Typical entries from end nodes are shown in the following Tables 8 and 9:









TABLE 8
An example of entries from a worst case end node from the UCR ACL list

Weight    Action    Type    Source IP/mask    Destination IP/mask    Source Port    Destination Port
62        deny      any     0.0.0.0/0         129.45.100.225/32      any            any
313       permit    any     0.0.0.0/0         129.45.100.0/24        any            933
348       deny      any     0.0.0.0/0         129.45.100.0/24        any            <1024
352       permit    any     0.0.0.0/0         129.45.100.0/24        any            any





TABLE 9
Another example of entries from a worst case end node from the UCR ACL list

Weight    Action    Type    Source IP/mask     Destination IP/mask    Source Port    Destination Port
91        permit    any     209.249.96.0/24    172.17.230.0/25        any            any
94        deny      any     0.0.0.0/0          172.17.230.0/32        any            any
95        permit    any     0.0.0.0/0          0.0.0.0/0              any            any

In particular, few fields need to be re-evaluated as the list is traversed. In Table 8, if the FR with weight 62 is evaluated and fails, only the destination IP/mask and the Destination Port need to be re-evaluated. If the FR with weight 313 fails, then only the Destination Port needs to be re-evaluated, and so on. It should be possible to encode these with an average of two 32-bit words per FR. Lastly, common wildcards could probably be encoded in a more compressed representation. In the following analysis, assume that each FR consumes 16 bytes of memory if it is the first entry in the end node, and 8 bytes for each subsequent FR entry in the end node.


Memory Considerations


Using the estimates above for memory utilization in the end nodes and internal nodes, the approximate memory requirement for each ACL list, in bytes, is computed in Table 10. This results in 4*(Internal Nodes+2*(End Nodes+FR Entries)) bytes for a given tree.









TABLE 10
Example ACLs memory utilization

Source    Size    Internal Nodes    End Nodes    FR Entries    Memory (KBytes)    Bytes per FR
AOL       2814    3692              3694         4785          80.67              29.4
UCR       361     526               528          1318          16.48              46.7
MFN140    153     127               129          232           3.32               22.1
MFN112    114     119               121          133           2.45               22
EBORN8    33      30                32           40            0.68               21.1
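The formula and the table can be cross-checked with a short sketch (the helper name is hypothetical); 4 bytes per internal node, 16 bytes for the first FR in an end node and 8 bytes per additional FR reduce to the same expression.

```python
def tree_memory_bytes(internal_nodes: int, end_nodes: int, fr_entries: int) -> int:
    # 4*I + 16*E + 8*(F - E) simplifies to 4*(I + 2*(E + F))
    return 4 * internal_nodes + 16 * end_nodes + 8 * (fr_entries - end_nodes)

# AOL row of Table 10: 3692 internal nodes, 3694 end nodes, 4785 FR entries
mem = tree_memory_bytes(3692, 3694, 4785)
print(mem, round(mem / 2814, 1))   # 82600 bytes (about 80.7 KB) and roughly 29.4 bytes per FR
```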









From the data, a budget of 32 bytes per FR in an ACL list may be made.


To make trees shorter, more than one bit at a time may be considered in the source and destination IP fields, in a similar way to what is done in the lookup. This may speed up the search by up to 40%, but will increase memory by about 25%.


Accordingly, as discussed above, tree based ACL computation as disclosed herein is possible on a forwarding engine of a router, and would allow lists of many thousands of entries to be processed with, for instance, 3 strokes through the LxU of the forwarding engine referenced above (i.e., it can be done with one recirculation). This means that a tree based ACL lookup can be done with I-mix at line rate.


In one example, the methods described herein may be coded in a similar manner as used for packet lookup operations (i.e., IPv6 lookup).


In a tree based implementation, each FR of an ACL would consume on average 32 bytes of storage. Current list based implementations require 16 bytes of storage per FR. Hence, the total number of ACL entries supported for a given memory size will be about half for a tree based implementation relative to the current list based implementations.


In one example, using ¼ of the memory (i.e., FTSRAM) for this application will result in support for up to 16K FRs per input line card. This will not allow 3K entries for every one of 40 Ethernet ports on a given line card today (that would use about 4 MBytes). However, given a certain memory limit, the total number of FRs that a line card can support can be specified, and a customer can decide how to allocate them between input ports, if desired.


In one example, doing output ACLs at the input of a forwarding engine as described in the co-pending application Ser. No. 10/177,496 entitled “PACKET ROUTING AND SWITCHING DEVICE” with only one recirculation may not be feasible due to memory constraints. By doing two recirculations, the first for the input ACL lookup and the second for the output ACL lookup, a certain level of output ACLs can be supported at the expense of memory. In addition to the input ACL lists, an output ACL list will have to be present for every output port that performs ACLs. This could result in 40 input ACL lists and 480 output ACL lists per Line Card. The total number of FRs decided upon based on memory could be shared among these lists. Simulation results indicate that even with 2 recirculations, I-mix could be supported at line rate with a forwarding engine such as the one referenced above.


In order to support ACLs at line rate for 40-byte packets, other forwarding engines may use an additional 36 accesses to FTSRAM in order to run the tree based ACL method without recirculation. This can be accomplished by adding more pipeline stages, by increasing the clock frequency to allow more “strokes” through the pipeline, or by a combination of both (i.e., doubling the number of accesses).


While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.


While the invention has been particularly shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims
  • 1. A method for performing a lookup of a packet against an access control list, comprising: receiving an access control list including a set of filtering rules;identifying a decision point to partition the access control list into two or more complementary sets, wherein the decision point is identified such that the access control list is partitioned into nearly even groups, and wherein the decision point is identified to reduce the number of replicated filtering rules and minimizing memory utilization;forming a tree for each complementary set, wherein the tree has one or more end nodes including a subset of filtering rules, and an internal decision node representing the decision point; andtraversing the two or more trees when a packet arrives and comparing header information from the packet against each of the two or more trees and determining a match, wherein the decision point in the internal decision nodes is used to guide the packet down the trees to an end node that includes at least one filtering rule that is included in the set of filtering rules.
  • 2. The method of claim 1, comprising: sequentially searching the subset of filtering rules at the end nodes to find the best match.
  • 3. The method of claim 1, comprising: recursively partitioning the set of filtering rules at each of the internal decision nodes.
  • 4. The method of claim 1, comprising: determining the tree to which a particular filtering rule belongs based on comparing the source and destination address prefix lengths.
  • 5. The method of claim 4, further comprising: assigning the filtering rules having source prefix lengths greater than destination prefix lengths in a first tree.
  • 6. The method of claim 5, further comprising: assigning the filtering rules having destination prefix lengths greater than source prefix lengths in a second tree.
  • 7. The method of claim 1, further comprising: determining the two or more trees to which a particular filtering rule belongs based on a particular port to which the packet is being forwarded.
  • 8. The method of claim 1, further comprising: determining the tree to which a particular filtering rule belongs based on the protocol associated with the packet.
  • 9. A network device comprising: at least one processor;a memory in communication with the at least one processor, the memory including logic encoded in one or more tangible media for execution and when executed operable to:receive an access control list including a set of filtering rules;identify a decision point to partition the access control list into two or more complementary sets, wherein the decision point is identified such that the access control list is partitioned into nearly even groups, and wherein the decision point is identified to reduce the number of replicated filtering rules and minimizing memory utilization;form a tree for each complementary set, wherein the tree has one or more end nodes including a subset of filtering rules, and an internal decision node representing the decision point; andtraverse the two or more trees when a packet arrives and comparing header information from the packet against each of the two or more trees and determining a match, wherein the decision point in the internal decision nodes is used to guide the packet down the trees to an end node that includes at least one filtering rule that is included in the set of filtering rules.
  • 10. The network device of claim 9, further comprising logic encoded in one or more tangible media for execution and when executed operable to: sequentially search the subset of filtering rules at the end nodes to find the best match.
  • 11. The network device of claim 9, further comprising logic encoded in one or more tangible media for execution and when executed operable to: recursively partition the set of filtering rules at each of the internal decision nodes.
  • 12. The network device of claim 9, further comprising logic encoded in one or more tangible media for execution and when executed operable to: determine the two or more trees to which a particular filtering rule belongs based on comparing the source and destination address prefix lengths.
  • 13. The network device of claim 12, further comprising logic encoded in one or more tangible media for execution and when executed operable to: assign the filtering rules having source prefix lengths greater than destination prefix lengths in a first tree.
  • 14. The network device of claim 13, further comprising logic encoded in one or more tangible media for execution and when executed operable to: assign the filtering rules having destination prefix lengths greater than source prefix lengths in a second tree.
  • 15. The network device of claim 9, further comprising logic encoded in one or more tangible media for execution and when executed operable to: determine the tree to which a particular filtering rule belongs based on a particular port to which the packet is being forwarded.
  • 16. Logic encoded in one or more tangible media for execution and when executed operable to: receive an access control list including a set of filtering rules;identify a decision point to partition the access control list into two or more complementary sets, wherein the decision point is identified such that the access control list is partitioned into nearly even groups, and wherein the decision point is identified to reduce the number of replicated filtering rules and minimizing memory utilization;form a tree for each complementary set, wherein the tree has one or more end nodes including a subset of filtering rules, and an internal decision node representing the decision point; andtraverse the two or more trees when a packet arrives and comparing header information from the packet against each of the two or more trees and determining a match, wherein the decision point in the internal decision nodes are used to guide the packet down the trees to an end node that includes at least one filtering rule that is included in the set of filtering rules.
CROSS REFERENCE TO RELATED APPLICATION

This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 60/435,397, filed Dec. 20, 2002, entitled “METHOD FOR PERFORMING TREE BASED ACL LOOKUPS” the disclosure of which is hereby incorporated by reference herein in its entirety.

Provisional Applications (1)
Number Date Country
60435397 Dec 2002 US