Network management systems use network resource management in telecommunications networks to maintain an understanding of the status of link resources and allocations, among other reasons. Resource management is used to track and manage network capacity, such as bandwidth, as well as other network resources. Resource management can occur at many hierarchical levels within a network, such as at traffic control nodes, gateways, routers, or switches. Within such nodes are often control circuits, such as central processing units (CPUs), which communicate with other nodes at a control plane level via a node-to-node control channel (i.e., inter-node control channel). The CPUs control states of traffic modules, such as network processing units (NPUs), operating at a data plane level via a CPU-to-NPU control channel (i.e., intra-node control channel). In a typical NPU programming paradigm, a host CPU accesses and programs the NPU resources using a control channel. Communications between the CPU and NPU may be bidirectional to enable the CPU to monitor a state of the NPU, or other data plane processors or modules within the node. Such bidirectional communications between the control and data planes enable service providers to provision network nodes based on network congestion or other states, such as faults within the network, and to maintain sufficient resources for traffic to traverse network communications paths without interruption.
An example embodiment of the present invention is a network functional element, e.g., a line card in a gateway for assigning resources in a network node. Components integrated with or used by the functional element determine provisioning information in a data plane based on subscriber information that is available at the data plane. The components are configured to look-up data plane resources in order to determine subscriber services, such that the data plane resources can be assigned to the subscriber services in the network node.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
Sophisticated resource management employing a network processing unit (NPU) can be challenging due to the fixed and limited instruction set(s) of the NPU. In a typical NPU programming paradigm, a host central processing unit (CPU) has access to NPU resources; the CPU programs the NPU resources using a control channel within a network node. This model is suitable in cases in which contexts are relatively static and resources are available at all times. However, in the case of a large mobile network with interconnected networks, contexts are dynamic and each subscriber in the network can consume multiple hardware resources, such as statistics pointers, policers, forwarding entries, and the like. In many designs, any time new information is learned about the resources in real time, the CPU must be involved in order to program these resources. However, as mobile services become a more active part of network services overall, CPU involvement becomes impractical due to high session rates; for example, 5-tuple flow information is learned at a rate of over 100K 5-tuples per second.
One example embodiment of the present invention resolves resource allocation in mobile networks by avoiding programming of individual resources attached to each subscriber. Another example embodiment creates a pool or group of resources, divided into categories, for example, which can be shared dynamically when a flow or subscriber is active in the network.
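The pooled, category-based approach above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the class and category names (`ResourcePool`, `"policer"`, `"stats_pointer"`) are assumptions introduced only to show the idea of sharing resources while a flow is active.

```python
class ResourcePool:
    """Resources grouped by category (e.g., policers, statistics pointers),
    checked out only while a flow or subscriber is active in the network."""

    def __init__(self, categories):
        # categories: dict mapping category name -> list of free resource ids
        self.free = {cat: list(ids) for cat, ids in categories.items()}
        self.in_use = {}  # (flow_id, category) -> resource id

    def allocate(self, flow_id, category):
        """Assign a free resource of this category to an active flow;
        return None if the shared pool for the category is exhausted."""
        if not self.free[category]:
            return None
        rid = self.free[category].pop()
        self.in_use[(flow_id, category)] = rid
        return rid

    def release(self, flow_id, category):
        """Return the resource to the shared pool when the flow goes idle."""
        rid = self.in_use.pop((flow_id, category), None)
        if rid is not None:
            self.free[category].append(rid)
        return rid


pool = ResourcePool({"policer": [0, 1], "stats_pointer": [10, 11, 12]})
p = pool.allocate("flow-a", "policer")   # held only while the flow is active
pool.release("flow-a", "policer")        # freed for other flows/subscribers
```

The point of the design is that no resource is statically bound to a subscriber; a small shared pool serves many subscribers as long as only some flows are active at once.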
Embodiments of the present invention include methods, network elements, and computer readable media products for assigning resources in a network node by dynamically allocating NPU resources in a fast path (i.e., data plane, as opposed to a control plane) without a host CPU and without a static hold on the NPU resources. An example embodiment of the present invention includes an apparatus, for example, a functional element, physical or logical, in a network node that includes a determination module to determine provisioning information in a data plane based on subscriber information that is available in the data plane, a performance module that looks up data plane resources based on the subscriber information in order to determine a subscriber service, and an assignment module that assigns the data plane resources in the data plane to the subscriber services in that (or a different) network node.
Embodiments of the present invention provide various technical advantages over conventional methods and apparatuses for allocating resources in a network node, such as allocating network processing unit resources dynamically in the fast path, without host central processing unit involvement and without statically holding up resources. Some of these technical advantages are shown and described in the following description of the present invention with respect to the accompanying figures. Certain embodiments of the present invention may enjoy some, all, or none of these advantages. Other technical advantages may be readily apparent to those skilled in the art from the accompanying figures or claims.
The access network 101 can connect basic network elements such as a mobility management entity (MME) (not shown), home location register (HLR) (not shown), home agent 125, gateways 120a-b, or other known network elements. The access network 101 connects to at least one base transceiver station (base station) 140a-f, either directly or through additional networks, such as an edge network (not shown), which connects mobile devices 150a-g via a telecommunications interface or wireless medium, e.g., an air interface. The home agent 125 further connects the wireless network 135 portion of the network 100 to external networks, e.g., the Internet 116 or a mobile switching center 130 containing service portals 115a-d. The service portals 115a-d can provide support for multiple service types through use of, for example, an authentication, authorization, and accounting (AAA) server 115a, dynamic host configuration protocol (DHCP) server 115b, billing server 115c, home policy function (PF) server 115d, or other type of portal that may be used at the mobile switching center 130. The AAA server 115a may provide authentication services to validate a subscriber, authorization to determine the subscriber's rights, and accounting to determine the subscriber's usage. The DHCP server 115b may provide for address allocation services in a manual, automatic, or dynamic manner, or as otherwise provided by a network administrator. The home PF server 115d may provide general policy rules or application dependent policy rules. The home PF server 115d may also evaluate network requests against the policies and may be associated with a home policy database, which may be associated with a network service provider.
Continuing to refer to
An example embodiment of the present invention can include a subscriber-aware switch, such as switch 119 in
In the example network 100, the gateway 120b contains at least one functional element, such as a line card 160a, which supports traffic packets, or other traffic signals, at traffic rates; multiple line cards in a chassis 160b-f can also be present.
The functional element 160a (described in more detail below in reference to
Example embodiments of the present invention provide for a network processing unit (NPU) 163 to request information regarding a subscriber in the network 100 from a network service processor (NSP) 162. The NSP 162, located in a data plane of the anchor line card 160a, provides a "fast path" (i.e., data plane, as opposed to a "slow path," i.e., control plane) look-up of subscriber information in the NSP 162 subscriber database (not shown). The NSP 162 may also provide the NPU 163 with the subscriber information in a resource map (not shown) via a traffic rate bus. The traffic rate bus from the NSP 162 to the NPU 163 allows for high traffic rates without using a central processing unit (CPU) 164, which is located in a control plane of the anchor line card 160a and is connected to the NPU 163 via a PCI bus. The PCI bus and the CPU 164 are slow mechanisms of transfer and, accordingly, cause allocation of resources to be slow as compared to rates of the data bus.
To begin processing, a traffic packet 202 is sent by a base station 240d-f, via a wireless interface 299, and received by a traffic management entity 219, via any of a multitude of ingress-interface ports 271. The ingress-interface ports 271 are determined based on protocols in the traffic packet 202 or, alternatively, by a network management entity. The traffic packet 202 enters the NPU 263 via an NPU interface 276; after examining the traffic packet 202, the NPU 263 may perform a look-up of provisioning information in a subscriber table 244 based on subscriber information available in the data plane 280.
If the NPU 263 cannot locate subscriber information, it can transmit the first traffic packet 202 to the NSP 262, which can look up the subscriber information in an NSP subscriber database 244. After locating the subscriber information, the NSP 262 can create or amend a resource map 232 at a mapping unit 242, including the located subscriber information in the resource map 232, and assign data plane resources in an assignment unit 243. Data plane resources can include policers, forwarding entries, QoS parameters 233, subscriber information or profiles, or other data plane resources. The NSP 262 returns the first packet 202 to the NPU 263 with the resource map 232 in a fast-packet processing path 272, such as a traffic rate bus or control channel; the fast-packet processing path 272 can operate at traffic rates or multiples thereof. Following receipt of the first packet 202 and resource map 232 at the NPU 263, the NPU 263 can store the resource map in a memory 235, which can be a ternary content addressable memory (TCAM) or other finite memory. The NPU 263 can dynamically create a hash table entry 203 in the memory 235, such as a 5-tuple entry, or a dynamically generated entry that includes a subscriber Internet protocol (IP) address or other fields, which points to the resources allocated by the NSP 262 to be used by the NPU 263. The 5-tuple entry can include information regarding the traffic packet 202 that was returned from the NSP 262 with the resource map 232, such as a source, destination, first port, second port, and protocol to be used.
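The NPU-side table described above can be sketched in miniature: a first packet's 5-tuple is keyed to a handle that points at the resources the NSP allocated, so later packets of the same flow hit the fast path. All names here (`FiveTuple`, `on_first_packet`, the resource map contents) are illustrative assumptions, and a Python dict stands in for the TCAM.

```python
from collections import namedtuple

# Source, destination, first port, second port, and protocol, as in the text.
FiveTuple = namedtuple("FiveTuple", "src dst sport dport proto")

flow_table = {}      # stands in for the TCAM/finite memory holding entries
resource_maps = {}   # handle -> resources allocated by the NSP


def on_first_packet(pkt_tuple, resource_map):
    """Called when the NSP returns the first packet with its resource map:
    store the map and dynamically create a hash entry pointing at it."""
    handle = len(resource_maps)          # simple monotonic handle
    resource_maps[handle] = resource_map
    flow_table[pkt_tuple] = handle       # dynamic entry, no CPU involved


def lookup(pkt_tuple):
    """Fast-path lookup for subsequent packets of the same flow."""
    handle = flow_table.get(pkt_tuple)
    return resource_maps.get(handle)


ft = FiveTuple("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
on_first_packet(ft, {"policer": 3, "qos": "gold"})
assert lookup(ft) == {"policer": 3, "qos": "gold"}
```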
The NPU may not contain any subscriber information until it receives a return packet from the NSP.
The NPU 263 can process any subsequent packets belonging to the first packet flow based on the resource map 232. In an embodiment where subsequent packets belonging to the first packet flow of packet 202 continue to arrive, the hash table entry 203 does not age out of the memory 235; the hash table entry 203 can auto-refresh. Further, the NPU 263 may determine hardware resources based on packets received from the NSP 262 in real time, as well as scale network resources using multicast messaging and the hash table entry 203.
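The auto-refresh behavior can be sketched as refresh-on-hit aging: each matching packet resets the entry's idle timer, so only flows that stop sending age out. The class name, timing model, and thresholds below are illustrative assumptions, not taken from the patent.

```python
class AgingFlowTable:
    """Entries age out after max_idle time units of inactivity; any packet
    belonging to the flow refreshes ("touches") its entry."""

    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.entries = {}  # flow key -> (resource handle, last-seen time)

    def touch(self, key, handle, now):
        """Insert or refresh an entry; called for each matching packet."""
        self.entries[key] = (handle, now)

    def expire_idle(self, now):
        """Age out entries idle longer than max_idle; return the aged keys."""
        aged = [k for k, (_, seen) in self.entries.items()
                if now - seen > self.max_idle]
        for k in aged:
            del self.entries[k]
        return aged


tbl = AgingFlowTable(max_idle=5)
tbl.touch("flow-a", handle=1, now=0)
tbl.touch("flow-a", handle=1, now=4)     # refreshed by a subsequent packet
assert tbl.expire_idle(now=7) == []      # still within the idle window
assert tbl.expire_idle(now=10) == ["flow-a"]  # idle too long: aged out
```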
During periods of idle activity at the NPU 263, such as no packets entering the functional element 260a, the NPU 263 can notify the NSP 262 with subscriber information and the resource map 232, so that the NSP 262 may age out flow information from a cache (not shown), allowing the resource map 232 to be marked as free and open for another request. Following processing completion of packets mapped to the same resources, the NPU 263 can forward the packets 202 to an additional functional element (not shown) using the fabric 265, or the NPU 263 can transmit the processed packets to an element external to the functional element 260a via any output-egress port 279. The output-egress port 279 can be determined based on the routing protocol of the traffic packet 202, for example, the protocol stored in the 5-tuple entry.
In alternative example embodiments, the aging process can be explicitly provided for via signaling protocols or other control methods. For example, in the case of session initiation protocol (SIP), SIP generates a "bye" message that signals a module, such as the NSP, to tear down resources. In further examples, the NPU has an awareness of the session that is being torn down and uses such information to signal another module, such as the NSP, to tear down its resources. In alternative situations, the NPU may not recognize the idle period and can continue to send the control channel information to the NSP; the NSP can then determine that the session is completed and tear down resources.
In alternative example embodiments, additional methods of table learning can be used, such as tree, array, radix tree, hash table, 5-tuple, or other table entries commonly employed or hereafter developed.
Alternative embodiments of the present invention may include a module or set of modules in the NSP 262 that collect subscriber information that can include subscriber identifiers, subscriber QoS parameters, deep packet inspection (DPI) parameters, or additional subscriber information, any of which may be passed between or among the NPU 263 and NSP 262 as a specialized packet (not shown). In further alternative embodiments, it is possible to collect information and assign resources because the NPU 263 and NSP 262 are operably interconnected. The NPU 263 does not have to pre-program contexts (e.g., policers, forwarding entries, QoS parameters, classifiers, etc.) such that the hardware resources are statically reserved. Such embodiments enable dynamic resource allocation without involvement of a central processing unit (CPU) 264.
In some example embodiments, QoS can allow for resource reservation and control or can provide different priorities to different elements of the network. QoS may include, for example, providing different services based on applications, subscribers, performance level, data flows, or other commonly known or here-after developed elements requiring QoS specifications. QoS parameters can include, for example, delay, jitter, bit rate, guarantees, bandwidth, or other commonly employed or hereafter-developed parameters pertaining to QoS in a network.
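As one concrete illustration of a QoS-enforcing data plane resource, a per-flow policer can be modeled as a token bucket that checks packets against a configured bit rate and burst size. The patent names policers but does not mandate this algorithm; the class below is a common, minimal sketch with assumed parameter names.

```python
class TokenBucketPolicer:
    """Token-bucket rate policer: tokens (bytes) refill at the configured
    rate up to a burst ceiling; a packet conforms if enough tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0

    def conform(self, pkt_bytes, now):
        """Return True if the packet conforms to the rate/burst profile."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False                 # out of profile: drop or remark


p = TokenBucketPolicer(rate_bps=8000, burst_bytes=1500)  # 1 KB/s, 1500 B burst
assert p.conform(1500, now=0.0)      # initial burst allowed
assert not p.conform(1500, now=0.1)  # bucket nearly empty: out of profile
assert p.conform(1000, now=1.1)      # ~1000 bytes refilled after one second
```

Because the state is just two numbers per flow, such policers are cheap enough to be pooled and reassigned dynamically, which is what makes the shared-pool scheme of the earlier embodiments practical.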
In alternative example embodiments of the present invention, network resources or hardware resources can include, for example, NPU, CPU, or other hardware resources such as search capabilities, TCAM, control functions, statistics, memory channels, fabric buffering memory, fabric backplane, or other commonly known or hereafter developed network resources. Details of network and hardware resources are described further in Applicants' pending U.S. patent application (Serial Number not yet assigned) being filed concurrently herewith, entitled “Method and Apparatus to Report Resource Values in a Mobile Network” by Santosh Chandrachood, which claims priority to Applicants' U.S. Provisional Patent Application No. 61/278,520, filed Oct. 7, 2009, entitled “A Method and Apparatus to Read Large Hardware Counters in a Scalable Way” by Chandrachood et al., the entire teachings of both applications being incorporated herein by reference in their entirety.
Further example embodiments of the present invention may include the traffic packet 202 sent from a second functional element (not shown) to the functional element 260a via the fabric 265 or the traffic packet 202 may enter the NPU 263 directly without entering a traffic management entity 219. Alternative embodiments of the present invention can connect hardware components, for example, the CPU 264, memory 235, NPU 263, NSP 262, or additional components used in a line card, via component subsystems, such as PCI bus 273, or other known or future developed methods for operably interconnecting hardware. Alternatively, example embodiments of the present invention can include any of the NPU, CPU, or NSP operating in the control plane of the functional element.
In the example flow chart 300, a determination is made as to the provisioning information available or existing in a data plane based on subscriber information available in the data plane (380). Next, a look-up of data plane resources is performed to determine subscriber services based on the subscriber information available in the data plane (381). Finally, the data plane resources are assigned to the subscriber services in the network node (382).
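The three steps of flow chart 300 can be sketched end to end as a single function. The table contents, service names, and function name below are hypothetical stand-ins for the patent's modules, used only to show the determine/look-up/assign sequence in the data plane.

```python
# Subscriber information available in the data plane (step 380 input).
subscriber_table = {"10.0.0.1": {"plan": "gold"}}
# Data plane resources keyed by subscriber service (step 381 look-up target).
service_resources = {"gold": {"policer": 3, "stats": 7}}


def assign_resources(subscriber_ip):
    # (380) determine provisioning information based on data-plane
    # subscriber information
    info = subscriber_table.get(subscriber_ip)
    if info is None:
        return None  # unknown subscriber: nothing to provision
    # (381) look up data plane resources to determine the subscriber service
    resources = service_resources[info["plan"]]
    # (382) assign the data plane resources to the subscriber service
    return {"subscriber": subscriber_ip, "assigned": resources}


assert assign_resources("10.0.0.1") == {
    "subscriber": "10.0.0.1", "assigned": {"policer": 3, "stats": 7}}
```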
After beginning, the assignment procedure of
Once the packet is transmitted to the NSP, the NSP performs a look-up of the subscriber information associated with or corresponding to the received traffic packet in an NSP database (488). The NSP provides the subscriber information and assigns associated data plane resources, optimally in the form of a resource map, in a fast-packet-processing path (i.e., in the data plane), to the NPU (489). The NSP then transmits the traffic packet and associated resource map, including at least the data plane resources, to the NPU (490). Once the traffic packet and resource map are received by the NPU, and upon receiving corresponding subsequent packets at the NPU (491), the NPU can process the subsequent packets by employing the resource map (492).
Following completion of processing the traffic flow, or during intervals as may be determined by a network management entity, the NPU can determine if traffic activity is idle (493). If traffic is not idle, the NPU can continue to receive corresponding subsequent packets (491). However, if it is determined that traffic is idle, the NPU can notify the NSP of the idle status and include the currently known subscriber information and corresponding resource maps (495). Alternatively, the NPU can notify the NSP of idle status without additional information. The NSP ages out flow information from a cache (496) and marks resources, such as the resource map, free for a next request for a look-up from the NPU (497). A determination is made as to whether a packet not affiliated with subscriber information currently known by the NPU is received (498); if such a packet is identified, the procedure of the flow diagram 400 begins again (480).
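The NSP side of this idle cycle, steps (495) through (497), can be sketched as a small cache with a free list: an idle notification ages out the flow's entry and marks its resource map free, and the next look-up request can reuse it. Class and method names here are illustrative assumptions.

```python
class NspCache:
    """Caches per-flow resource maps; freed maps are reused for new flows."""

    def __init__(self):
        self.flows = {}      # flow key -> resource map in use
        self.free_maps = []  # maps marked free for the next request

    def on_idle_notify(self, flow_key):
        """(495)-(497): age the flow out of the cache and mark its
        resource map free for a next look-up request."""
        rmap = self.flows.pop(flow_key, None)
        if rmap is not None:
            self.free_maps.append(rmap)
        return rmap

    def on_lookup_request(self, flow_key):
        """Serve a look-up: reuse a freed resource map if one is
        available, otherwise create a new one."""
        rmap = self.free_maps.pop() if self.free_maps else {}
        self.flows[flow_key] = rmap
        return rmap


cache = NspCache()
m = cache.on_lookup_request("flow-a")
assert cache.on_idle_notify("flow-a") is m     # aged out, marked free
assert cache.on_lookup_request("flow-b") is m  # reused for the next request
```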
Further example embodiments of the present invention may include a non-transitory computer readable medium containing instructions that may be executed by a processor, and, when executed by the processor, cause the processor to monitor the information, such as status, of at least a first and second network element. It should be understood that elements of the block and flow diagrams described herein may be implemented in software, hardware, firmware, or other manifestation available in the future. In addition, the elements of the block and flow diagrams described herein may be combined or divided in any manner in software, hardware, or firmware. If implemented in software, the software may be written in any language that can support the example embodiments disclosed herein. The software may be stored in any form of computer readable medium, such as random access memory (RAM), read only memory (ROM), compact disk read only memory (CD-ROM), and so forth. In operation, a general purpose or application-specific processor loads and executes the software in a manner well understood in the art. It should be understood further that the block and flow diagrams may include more or fewer elements, be arranged or oriented differently, or be represented differently. It should be understood that implementation may dictate the block, flow, and/or network diagrams and the number of block and flow diagrams illustrating the execution of embodiments of the invention.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/278,486, filed on Oct. 7, 2009, the entire teachings of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
6707826 | Gorday et al. | Mar 2004 | B1 |
7039026 | Francoeur | May 2006 | B2 |
7065083 | Oren et al. | Jun 2006 | B1 |
7065085 | Shin | Jun 2006 | B2 |
7076551 | Gary | Jul 2006 | B2 |
7277948 | Igarashi et al. | Oct 2007 | B2 |
7657706 | Iyer et al. | Feb 2010 | B2 |
7706291 | Luft et al. | Apr 2010 | B2 |
7719995 | Luft | May 2010 | B2 |
7733891 | Reynolds et al. | Jun 2010 | B2 |
7773510 | Back et al. | Aug 2010 | B2 |
7855982 | Ramankutty et al. | Dec 2010 | B2 |
8018955 | Agarwal et al. | Sep 2011 | B2 |
8036230 | Gray et al. | Oct 2011 | B2 |
8111705 | Bartlett et al. | Feb 2012 | B1 |
8111707 | Riddle et al. | Feb 2012 | B2 |
8284786 | Mirandette et al. | Oct 2012 | B2 |
8381264 | Corddry et al. | Feb 2013 | B1 |
8447803 | Boucher | May 2013 | B2 |
8531945 | Chandrachood et al. | Sep 2013 | B2 |
8533360 | Chandrachood et al. | Sep 2013 | B2 |
8745179 | Raghavan et al. | Jun 2014 | B2 |
9106563 | Chandrachood et al. | Aug 2015 | B2 |
20010049753 | Gary | Dec 2001 | A1 |
20050050136 | Golla | Mar 2005 | A1 |
20060026682 | Zakas | Feb 2006 | A1 |
20080013470 | Kopplin | Jan 2008 | A1 |
20080137646 | Agarwal et al. | Jun 2008 | A1 |
20080155101 | Welsh et al. | Jun 2008 | A1 |
20080274729 | Kim et al. | Nov 2008 | A1 |
20090083367 | Li et al. | Mar 2009 | A1 |
20090086651 | Luft et al. | Apr 2009 | A1 |
20090116513 | Gray et al. | May 2009 | A1 |
20090129271 | Ramankutty et al. | May 2009 | A1 |
20090285225 | Dahod | Nov 2009 | A1 |
20100191839 | Gandhewar et al. | Jul 2010 | A1 |
20100192207 | Raleigh | Jul 2010 | A1 |
20100229192 | Marilly et al. | Sep 2010 | A1 |
20100325275 | Van Elburg et al. | Dec 2010 | A1 |
20110020236 | Bohmer et al. | Jan 2011 | A1 |
20110021236 | Dinan et al. | Jan 2011 | A1 |
20110080886 | Chandrachood | Apr 2011 | A1 |
20110085439 | Chandrachood et al. | Apr 2011 | A1 |
20110085571 | Chandrachood | Apr 2011 | A1 |
20110087786 | Chandrachood | Apr 2011 | A1 |
20110087798 | Chandrachood | Apr 2011 | A1 |
20110238855 | Korsunsky et al. | Sep 2011 | A1 |
20120239626 | Aysan | Sep 2012 | A1 |
Number | Date | Country |
---|---|---|
WO 2011044396 | Apr 2011 | WO |
Entry |
---|
Notification of Transmittal of the International Search Report and Written Opinion of the International Searching Authority; International Application No. PCT/US2010/051874, Date of Mailing: Jun. 30, 2011. |
International Preliminary Report on Patentability; International Application No. PCT/US2010/051874 Date of Mailing: Apr. 19, 2012. |
Number | Date | Country | |
---|---|---|---|
20110085571 A1 | Apr 2011 | US |
Number | Date | Country | |
---|---|---|---|
61278486 | Oct 2009 | US |