The present invention relates to policy-based network equipment and, in particular, to policy-based network equipment that employs a favorable division of hardware and software to provide both performance and flexibility.
Some typical policy-based computer network applications are Virtual Private Networks (VPN), Firewall, Traffic Management, Network Address Translation, Network Monitoring, and TOS Marking. In general, a policy-based application has access to the network media through an operating system driver interface. In a typical network architecture, the policy-based application examines every packet coming in from the network along the data path, compares it against flow classification criteria, and performs the necessary actions based upon the policies defined in a policy database.
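The following is a minimal sketch, in Python with purely illustrative names, of the conventional software-only data path just described: every packet is compared against the flow classification criteria of each policy in the database, and the matching policy's action is executed. It is not the patented implementation; it only illustrates the per-packet overhead discussed below.

```python
# Minimal sketch (not the patented hardware) of the conventional software
# data path: classify every packet against every policy, then act on it.
# All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Policy:
    matches: Callable[[dict], bool]   # flow classification criteria
    action: Callable[[dict], None]    # action to perform on a matching packet

def conventional_data_path(packets: List[dict], policy_db: List[Policy]) -> None:
    """Every packet incurs a full, software-based classification pass."""
    for pkt in packets:
        for policy in policy_db:
            if policy.matches(pkt):
                policy.action(pkt)
                break  # first matching policy wins in this sketch

# Example: a trivial "firewall" policy that discards traffic to TCP port 23.
drop_telnet = Policy(
    matches=lambda p: p.get("protocol") == "tcp" and p.get("dst_port") == 23,
    action=lambda p: print("discard", p),
)
conventional_data_path([{"protocol": "tcp", "dst_port": 23}], [drop_telnet])
```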
Today's policy-based applications are challenged with several key issues. These issues can be major inhibitors for the future growth of the emerging industry:
1) Flow classification overhead—Flow classification specifications can be complicated and lengthy for each network service. As can be seen from
As is also shown in
The flow classification is a rule-based operation that can be flexibly tuned to application needs. For example, a rule may identify packets by a pattern of any arbitrary byte within a packet, and/or across many packets. The flow classifiers may also differ per action processor for performance optimization. As a result, the matching criteria used by a flow classifier to classify a flow may include a specific value, a range, or a wildcard on interface port numbers, protocols, IP addresses, TCP ports, applications, application data, or any user-specifiable criteria. These differences among implementations make it difficult to cache a flow together with its classification decision.
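As a hedged illustration of such rule-based matching, the sketch below shows one possible way (all field names are assumptions, not part of the invention) to express a flow classification specification whose criteria may be a specific value, a range, or a wildcard.

```python
# Illustrative only: a flow classification specification whose criteria can be
# an exact value, a range, or a wildcard. Field names are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

WILDCARD = None  # a criterion of None matches anything

@dataclass
class FlowClassificationSpec:
    protocol: Optional[str] = WILDCARD                     # exact value or wildcard
    src_ip_prefix: Optional[str] = WILDCARD                # crude address prefix match
    dst_port_range: Optional[Tuple[int, int]] = WILDCARD   # inclusive range

    def matches(self, pkt: dict) -> bool:
        if self.protocol is not None and pkt.get("protocol") != self.protocol:
            return False
        if self.src_ip_prefix is not None and \
                not str(pkt.get("src_ip", "")).startswith(self.src_ip_prefix):
            return False
        if self.dst_port_range is not None:
            low, high = self.dst_port_range
            if not (low <= pkt.get("dst_port", -1) <= high):
                return False
        return True

# Example: classify all TCP traffic from 10.1.x.x destined to ports 80-443.
spec = FlowClassificationSpec(protocol="tcp", src_ip_prefix="10.1.", dst_port_range=(80, 443))
print(spec.matches({"protocol": "tcp", "src_ip": "10.1.2.3", "dst_port": 443}))  # True
```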
2) Flow classification technique is evolving—Flow classification and analysis techniques involve more than just examining a packet's address, port number, protocol type, and other header information. They often involve state tracking for newer applications. These techniques are being continuously modified and are therefore not practically appropriate for a hardware-based implementation. Furthermore, flow classification techniques are often viewed as key differentiators between vendors.
3) Action execution speed—Once the classification process is complete, the proper actions need to be executed. Some actions are simple, like a discard or forwarding decision for a firewall, while others are extremely time consuming, like triple-DES encryption, the SHA hashing algorithm, or a QoS scheduling algorithm. Software-based implementations cannot keep up with the bandwidth expansion as newer and faster media technologies are employed.
4) Integrated services—As more and more policy-based applications become available, it is desirable to provide integrated services on a single platform because this ostensibly reduces policy management complexity, avoids potential policy conflicts, and lowers the TCO (Total Cost of Ownership). On the other hand, integrated services impose a very large computing power requirement that cannot be practically met with off-the-shelf general-purpose machines. A disadvantage of the conventional architecture is that, because it is primarily software-based, it incurs relatively high overhead. However, precisely because it is software-based, it is quite flexible.
What is desired is a policy architecture that has the flexibility of present flow classification systems, but that also has lower overhead.
As shown broadly in
The architecture 100 includes three major components—a Policy-Based Application 102, a Policy Engine API 104 (“API” stands for “Application Program Interface”) and a Policy Engine 106. As can be seen from
The policy engine API 104 serves as an interface between the policy application 102 and the policy engine 106 (via a system bus 105). The policy engine 106 is purpose-built hardware (preferably running at wire speed) that operates on input network traffic and network policies and that outputs regulated traffic flows based upon the network policies.
In a typical embodiment, the policy engine API 104 provides the policy-based application 102 access to all the media I/O through a generic operating system driver interface. In addition, the API 104 allows the application 102 to invoke acceleration functions (shown in
Before proceeding, several terms are defined in the context of
Service
A service in a policy-based network defines a network application 102 that is controlled and managed based on a set of policies. Typical services are firewall, VPN, traffic management, network address translation, network monitoring, etc.
Policy
Policies (normally defined by network managers) are collectively stored in a policy database 202 accessible to the policy-based applications 102 (even conventionally) and describe network traffic behaviors based upon business needs. A policy specifies both what traffic is to be subject to control and how the traffic is to be controlled. Thus, a policy typically has two components—a flow classification specification 203a and an action specification 203b.
Flow Classification Specification 203a
A flow classification specification 203a provides the screening criteria for the flow classifier logic 204 to sort network traffic into flows. A flow classification specification 203a can be very elaborate, as detailed as defining a specific pair of hosts running a specific application. Alternatively, a flow classification specification 203a can consist of a simple wildcard expression.
Action Specification 203b
An action specification 203b describes what to do with packets that match an associated flow classification specification 203a. The action specification 203b can be as simple as, for example, a discard or forward decision in the firewall case. It can also be as complicated as IPSec encryption rules based on an SA (Security Association) specification.
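The two-part structure of a policy can be pictured with the following illustrative sketch; the class and field names are hypothetical and only mirror the flow classification specification 203a and action specification 203b described above.

```python
# Hypothetical sketch of the two-part policy: a flow classification
# specification (203a) paired with an action specification (203b).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionSpec:
    name: str                           # e.g. "discard", "forward", "ipsec-encrypt"
    execute: Callable[[dict], object]   # behavior an action processor would carry out

@dataclass
class PolicyEntry:
    flow_spec: Callable[[dict], bool]   # flow classification specification 203a
    action_spec: ActionSpec             # action specification 203b

# Example: a firewall-style policy that discards packets destined to TCP port 23.
telnet_block = PolicyEntry(
    flow_spec=lambda pkt: pkt.get("protocol") == "tcp" and pkt.get("dst_port") == 23,
    action_spec=ActionSpec(name="discard", execute=lambda pkt: "dropped"),
)
pkt = {"protocol": "tcp", "dst_port": 23}
if telnet_block.flow_spec(pkt):
    print(telnet_block.action_spec.execute(pkt))  # "dropped"
```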
Flow
All packets that match the same flow classification specification 203a form a flow.
Flow Classifier
Referring again to
Policy Binding
Policy binding is the process of the flow classifier 204 binding a stream with its associated action specification and loading the appropriate entries (stream specification 208 and action specifications 210) into the policy cache 209.
Stream
A stream is an “instantiation” of a flow—packets that have the same source and destination address, source and destination port, and protocol type. (Optionally, the application can add the input and output media interface to the stream classification criteria in addition to the packet header if desired.) Packets may be sorted into streams, and a flow may include one or more streams. All packets belonging to the same stream are to be regulated by the same policy.
Policy Cache 209
At the completion of the policy binding process, an entry for a given stream is created on the policy engine; the entry contains all the policy information required to subsequently process the stream's data.
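A hedged, software-only sketch of the stream and policy-cache relationship follows. It assumes a conventional five-tuple stream key and a simple dictionary standing in for the hardware policy cache 209; the names are illustrative, not the actual policy engine design.

```python
# Software stand-in (illustrative names, not the hardware design) for policy
# binding and the policy cache: the first packet of a stream is classified and
# bound; later packets are resolved by a simple keyed lookup.
from typing import Dict, List, Tuple

StreamKey = Tuple[str, str, str, int, int]   # (protocol, src_ip, dst_ip, src_port, dst_port)

def stream_key(pkt: dict) -> StreamKey:
    """Extract the fields that uniquely identify a stream (its stream specification)."""
    return (pkt["protocol"], pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])

class PolicyCache:
    """Maps a stream specification to its bound action specifications."""
    def __init__(self) -> None:
        self._entries: Dict[StreamKey, List[str]] = {}

    def bind(self, pkt: dict, actions: List[str]) -> None:
        """Policy binding: install the actions chosen by the flow classifier."""
        self._entries[stream_key(pkt)] = actions

    def lookup(self, pkt: dict) -> List[str]:
        """Return the cached actions, or an empty list if the stream is unbound."""
        return self._entries.get(stream_key(pkt), [])

# Example: bind a stream to two actions, then resolve a later packet of that stream.
cache = PolicyCache()
first_pkt = {"protocol": "tcp", "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
             "src_port": 1234, "dst_port": 443}
cache.bind(first_pkt, ["ipsec-encrypt", "forward"])
print(cache.lookup(dict(first_pkt)))  # ['ipsec-encrypt', 'forward']
```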
Integrated Services
When multiple network services are to apply to the same flow, this is called “Integrated Services.” Integrated Services simplify the management of various service policies, minimize potential policy conflicts and reduce TCO (Total Cost of Ownership).
Stream Specification
A stream specification 208, shown in
Action Processor 206
Each action processor 206 executes an action based upon an action specification 210 in the policy cache 209.
Packet Tagging
Certain applications (e.g. Network Monitoring) would like to receive flows based on the flow classification specification and would prefer that flow classification be performed for them. Packet tagging is a way of tagging all incoming packets with an application specified “tag.”
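The following small sketch, with a purely hypothetical tag value and matching rule, illustrates the packet-tagging idea: packets that match an application's flow classification specification are delivered to that application together with its specified tag.

```python
# Illustrative packet tagging: deliver a (tag, packet) pair for every packet
# matching the application's flow classification specification. The tag value
# and the matching rule here are hypothetical.
from typing import Callable, Iterable, Iterator, Tuple

def tag_packets(packets: Iterable[dict],
                flow_spec: Callable[[dict], bool],
                tag: int) -> Iterator[Tuple[int, dict]]:
    """Yield tagged packets for a monitoring-style application."""
    for pkt in packets:
        if flow_spec(pkt):
            yield tag, pkt

# Example: a network-monitoring application requests all UDP packets, tag 0x42.
udp_only = lambda pkt: pkt.get("protocol") == "udp"
for tagged in tag_packets([{"protocol": "udp"}, {"protocol": "tcp"}], udp_only, 0x42):
    print(tagged)   # (66, {'protocol': 'udp'})
```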
Policy-Based Application
A policy-based application provides a service to the network users. This service is managed by a set of policies. Firewall, VPN and Traffic Management are the most typical policy-based applications. As the industry evolves, policy-based applications are likely to consolidate onto a single platform called Integrated Services. Integrated Services has the benefits of centralized policy management and lower cost of ownership.
Referring still to
Subsequent packets of the stream are then provided directly to the stream classifier 207 of the policy engine 106 via the logical data path 403. Using the policy cache 209, the stream classifier 207 determines which action processors 206 are to be activated for the packets of the stream. Specifically, the stream classifier 207 matches the packets to a particular stream specification 208 and then, using the corresponding action specifications 210, activates the proper action processors 206. Significantly, these "subsequent packets" can be acted upon without any interaction with the "host" policy-based application 102. The application need not "see" any packets belonging to that stream after the binding (unless the stream is actually destined for the host). The action processors are specialized in executing specific action specifications, preferably at wire speed.
Thus, in summary, upon the completion of the policy binding "learning" process, the policy engine 106 may immediately take control of the bound stream and execute the appropriate actions in accordance with the action specifications 210 in the policy cache 209, without any intervention from the "host" (policy-based) application. This method also relieves the policy engine 106 hardware from doing complicated pattern matching, because it can simply compute a hash value (or use some other identification function) from the well-known fields of the packet (which uniquely identify a stream) to find its corresponding policy decisions (action specifications 210). The classification need not be done more than once for each packet even though there may be multiple applications. As a result, massive computing power is not required to do the classification on an ongoing basis. A benefit is low hardware cost for very high-performance policy-based applications.
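To make the fast-path idea concrete, the sketch below models, in software and with assumed names, how a bound stream's packets might be dispatched: a hash over the packet's well-known fields indexes the policy cache, the cached action specifications are executed without involving the host application, and a cache miss falls back to full classification and binding.

```python
# Hedged software model of the fast path: hash the packet's well-known fields,
# look up the bound action specifications in the policy cache, and run them
# without host involvement; fall back to full classification on a miss.
# All names and structures are assumptions.
from typing import Callable, Dict, List

Action = Callable[[dict], None]

def stream_hash(pkt: dict) -> int:
    """Identification function over the fields that uniquely identify a stream."""
    return hash((pkt["protocol"], pkt["src_ip"], pkt["dst_ip"],
                 pkt["src_port"], pkt["dst_port"]))

def fast_path(pkt: dict,
              policy_cache: Dict[int, List[Action]],
              classify_and_bind: Callable[[dict], List[Action]]) -> None:
    key = stream_hash(pkt)
    actions = policy_cache.get(key)
    if actions is None:
        # First packet of the stream: full flow classification, then policy binding.
        actions = classify_and_bind(pkt)
        policy_cache[key] = actions
    for action_processor in actions:   # bound actions run without host intervention
        action_processor(pkt)

# Example usage with a trivial "forward" action processor.
cache: Dict[int, List[Action]] = {}
pkt = {"protocol": "tcp", "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 1234, "dst_port": 80}
forward = lambda p: print("forward", p)
fast_path(pkt, cache, classify_and_bind=lambda p: [forward])
fast_path(pkt, cache, classify_and_bind=lambda p: [])  # second packet: cache hit
```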
It can be seen that, in accordance with the present invention, use of the policy engine and policy cache not only addresses many if not all of the performance considerations discussed above in the Background, but also preserves a great amount of flexibility in setting network policies. In addition, the following considerations are taken into account.
1) Time-to-market for application developers—Since time-to-market is a major concern for application vendors, the PAPI design minimizes the development effort required of application developers for their existing applications to take advantage of the policy engine's enhanced performance.
2) Maintain flexibility for developers' value-added—PAPI may allow application developers to enhance or maintain their value-add so that vendors' differentiation is not compromised.
3) Platform for integrated services—PAPI is designed with the model of an integrated services platform in mind. Application developers can, over time, migrate their services onto an integrated platform without worrying about the extensibility of the API or a performance penalty.
This application is a continuation of U.S. patent application Ser. No. 11/346,899, filed Feb. 3, 2006, now U.S. Pat. No. 7,420,976, which is a continuation of U.S. patent application Ser. No. 10/360,671, filed Feb. 7, 2003, now U.S. Pat. No. 7,006,502, which is a continuation of U.S. patent application Ser. No. 09/465,123, “Method for Synchronization of Policy Cache with Various Policy-Based Applications,” filed Dec. 16, 1999, now U.S. Pat. No. 6,542,508, and claims the benefit of priority to U.S. Provisional Patent Application No. 60/112,976, filed Dec. 17, 1998. This application claims the benefit of priority to all of the above patent applications and patent.
Number | Name | Date | Kind |
---|---|---|---|
5371852 | Attanasio et al. | Dec 1994 | A |
5473599 | Li et al. | Dec 1995 | A |
5574720 | Lee et al. | Nov 1996 | A |
5781549 | Dai | Jul 1998 | A |
5884312 | Dustan et al. | Mar 1999 | A |
6006259 | Adelman et al. | Dec 1999 | A |
6101543 | Alden et al. | Aug 2000 | A |
6157955 | Narad et al. | Dec 2000 | A |
6167445 | Gai et al. | Dec 2000 | A |
6173399 | Gilbrech | Jan 2001 | B1 |
6208640 | Spell et al. | Mar 2001 | B1 |
6208655 | Hodgins et al. | Mar 2001 | B1 |
6226748 | Bots et al. | May 2001 | B1 |
6226751 | Arrow et al. | May 2001 | B1 |
6286052 | McCloghrie et al. | Sep 2001 | B1 |
6502131 | Vaid et al. | Dec 2002 | B1 |
6542508 | Lin | Apr 2003 | B1 |
6578077 | Rakoshitz et al. | Jun 2003 | B1 |
6608816 | Nichols | Aug 2003 | B1 |
6701437 | Hoke et al. | Mar 2004 | B1 |
Prior Publication Data

Number | Date | Country
---|---|---
20080285446 A1 | Nov 2008 | US
Provisional Application

Number | Date | Country
---|---|---
60112976 | Dec 1998 | US
Related U.S. Application Data

Relation | Number | Date | Country
---|---|---|---
Parent | 11346899 | Feb 2006 | US
Child | 12145751 | | US
Parent | 10360671 | Feb 2003 | US
Child | 11346899 | | US
Parent | 09465123 | Dec 1999 | US
Child | 10360671 | | US