Hardware-accelerated packet multicasting in a virtual routing system

Information

  • Patent Grant
  • 8644311
  • Patent Number
    8,644,311
  • Date Filed
    Sunday, April 24, 2011
  • Date Issued
    Tuesday, February 4, 2014
Abstract
Methods and systems are provided for hardware-accelerated packet multicasting in a virtual routing system. According to one embodiment, a virtual routing engine (VRE) including virtual routing processors and corresponding memory systems is provided. The VRE implements virtual routers (VRs) operable on the virtual routing processors and associated routing contexts utilizing potentially overlapping multicast address spaces resident in the memory systems. Multicasting of multicast flows originated by subscribers of a service provider is simultaneously performed on behalf of the subscribers. A VR is selected to handle multicast packets associated with a multicast flow. A routing context of the VRE is switched to one associated with the VR. A packet of the multicast flow is forwarded to multiple destinations by reading a portion of the packet from a common buffer for each instance of multicasting and applying transform control instructions to the packet for each instance of multicasting.
Description
COPYRIGHT NOTICE

Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. Copyright © 2002-2011, Fortinet, Inc.


BACKGROUND

1. Field


Embodiments of the present invention generally relate to data communications, and in particular to network routing and routing systems, and more particularly to packet multicasting.


2. Description of the Related Art


Conventional routing systems generally perform packet multicasting in a single routing context using a single multicast address space. With this approach, supporting various multicast features for different customers may require the use of a separate router for each customer. This approach may also prevent users from taking advantage of packet multicasting resources available from multiple routing contexts with private and potentially overlapping address spaces.


SUMMARY

Methods and systems are described for hardware-accelerated packet multicasting in a virtual routing system. According to one embodiment, a virtual routing engine (VRE) including multiple virtual routing processors and corresponding memory systems is provided. The VRE implements multiple virtual routers (VRs) operable on one or more of the virtual routing processors and associated routing contexts utilizing potentially overlapping multicast address spaces resident in the corresponding memory systems. For each of multiple multicast flows originated by multiple subscribers of a service provider, multicasting is simultaneously performed on behalf of the subscribers. A VR is selected to handle multicast packets associated with the multicast flow. A routing context of the VRE is switched to the associated routing context of the selected VR. A multicast packet of the multicast flow is forwarded to multiple multicast destinations by reading at least a portion of the multicast packet from a common buffer for each instance of multicasting and applying destination specific transform control instructions to the multicast packet for each instance of multicasting.


Other features of embodiments of the present invention will be apparent from the accompanying drawings and from the detailed description that follows.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:



FIG. 1 is a simplified functional block diagram of a virtual routing system in accordance with an embodiment of the present invention;



FIG. 2 is a functional block diagram of a packet multicasting system in accordance with an embodiment of the present invention;



FIG. 3 illustrates the identification of flow classification indices for multicast packets in accordance with an embodiment of the present invention;



FIG. 4 is a flow chart of an ingress system packet flow procedure in accordance with an embodiment of the present invention;



FIG. 5 is a flow chart of an egress system packet flow procedure in accordance with an embodiment of the present invention; and



FIG. 6 is a functional block diagram of a packet-forwarding engine in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

Methods and systems are described for hardware-accelerated packet multicasting in a virtual routing system. In various embodiments of the present invention, virtual routing systems and methods take advantage of multiple routing contexts, thereby allowing a service provider to support multicast features for many different access clients with a single piece of hardware.


Reference is made herein to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.


In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.


Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software, firmware and/or by human operators.


Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing electronic instructions. Moreover, embodiments of the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).


TERMINOLOGY

Brief definitions of terms used throughout this application are given below.


The terms “connected” or “coupled” and related terms are used in an operational sense and are not necessarily limited to a direct connection or coupling.


The phrases “in one embodiment,” “according to one embodiment,” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present invention, and may be included in more than one embodiment of the present invention. Importantly, such phrases do not necessarily refer to the same embodiment.


If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.


The term “responsive” includes completely or partially responsive.



FIG. 1 is a simplified functional block diagram of a virtual routing system in accordance with an embodiment of the present invention. Virtual routing system 100, among other things, may provide hardware-based network processor capabilities and high-end computing techniques, such as parallel processing and pipelining. In embodiments of the present invention, virtual routing system 100 may implement one or more virtual private networks (VPNs) and one or more associated virtual routers (VRs), and in some embodiments, system 100 may implement hundreds and even thousands of VPNs and VRs. Virtual routing system 100 may include one or more line interfaces 102, one or more virtual routing engines (VREs) 104, one or more virtual service engines (VSEs) 106, and one or more advanced security engines (ASEs) 108 coupled by switching fabric 110. Virtual routing system 100 may also include interface 112, which may interface with other routing systems. Virtual routing system 100 may also include one or more control blades 114 to create VPNs and/or VRs to operate on VREs 104.


In one embodiment, several VPNs and/or VRs may, for example, run on one of processing engines (PEs) 116 of VRE 104. A VPN or VR may be a software context comprised of a set of objects that are resident in the processing engine's memory system. The software context may include the state and processes found in a conventional router; however, hundreds or more of these virtual router contexts may be overlaid onto a single processing engine and associated memory system. Accordingly, one of processing engines 116 may provide the context of many VRs to be shared, allowing one piece of hardware, such as virtual routing system 100, to function as up to a hundred or even a thousand or more routers.
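
For illustration only, the following C sketch shows one way such per-VR software contexts might be overlaid on a single processing engine's memory system; all structure names, fields, and sizes here are hypothetical and are not taken from the patent.

```c
#include <stdint.h>

#define MAX_VRS_PER_PE 1024          /* "up to a thousand or more routers" */

/* Hypothetical per-VR software context: the state a conventional router
 * would keep, held as objects resident in the PE's memory system. */
struct vr_context {
    uint32_t vr_id;                  /* identifies the VR / VPN              */
    void    *routing_table;          /* per-VR routes                        */
    void    *mcast_address_space;    /* per-VR (possibly overlapping) space  */
    void    *flow_state;             /* per-VR flow classification state     */
};

/* Many VR contexts overlaid on one processing engine. */
struct processing_engine {
    struct vr_context vrs[MAX_VRS_PER_PE];
    struct vr_context *current;      /* active routing context               */
};

/* In this model, switching routing context is just a pointer update. */
void switch_context(struct processing_engine *pe, uint32_t vr_id)
{
    pe->current = &pe->vrs[vr_id % MAX_VRS_PER_PE];
}
```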


Line interface 102 may receive packets of different packet flows from an external network over a communication channel. VREs 104 may perform packet classification, deep packet inspection, and service customization. In one embodiment, VRE 104 may support up to one million or more access control list (ACL) level packet flows. VREs 104 may include a virtual routing processor (not illustrated) to provide hardware-assisted IP packet forwarding, multi-protocol label switching (MPLS), network address translation (NAT), differentiated services (DiffServ), statistics gathering, metering and marking. VREs 104 and VSEs 106 may include a virtual service controller (not illustrated) to support parallel processing and pipelining for deep packet inspection and third-party application computing. VSEs 106 may perform parallel processing and/or pipelining, and other high-end computing techniques, which may be used for third-party applications such as firewall services and anti-virus services. ASEs 108 may provide for hardware and hardware-assisted acceleration of security processing, including encryption/decryption acceleration for IP security protocol type (IPSec) packet flows and virtual private networks (VPNs). Switching fabric 110 may be a high-capability non-blocking switching fabric supporting rates of up to 51.2 Gbps and greater.


Line interface 102 may include a flow manager (not illustrated) to load-balance service requests to VSEs 106 and VREs 104, and may support robust priority and/or weighted round-robin queuing. In one embodiment, the flow manager may provide for service load balancing and may dynamically determine which of VREs 104 may best handle a certain packet flow. Accordingly, all packets of a particular flow may be sent to the same VRE 104. Line interface 102 may identify one of the VREs to process packets of a packet flow based on a physical interface and virtual channel from which the packets of the packet flow were received. The identified VRE may perform ingress metering, header transformation and egress metering for packets of the packet flow. In one embodiment, hardware-based metering and marking using a dual token bucket scheme assists in the rate-control capabilities of system 100. This may allow for granular application-level support and the ability to provide strong performance-based service level agreements (SLAs).


Different packets may take different paths through virtual routing system 100 and may not necessarily require the resources of all the various functional elements of virtual routing system 100. In one embodiment, a packet, such as a virtual local area network (VLAN) Ethernet packet, may arrive at an input port of line interface 102. The input port may be a gigabit Ethernet input port, which may be one of several input ports. The flow manager may program a steering table look-up to determine which VLAN is associated with a particular one of VREs 104. The flow manager may tag the packet with an internal control header and may transfer the packet from line interface 102 across switching fabric 110 to the selected VRE 104. A service controller of VRE 104 may perform deep packet classification and extract various fields from the packet header. A flow cache may be looked up to determine whether the packet should be processed in hardware or software. If the packet is to be processed in hardware, an index to the packet processing action cache may be obtained.


The packet may be deposited via high-speed direct memory access (DMA) into the VRE's main memory. A routing processor may retrieve the packet, identify the packet processing actions and may perform actions, such as time-to-live decrementing, IP header and checksum updating, and IP forwarding path matching. Egress statistics counters may also be updated. The packet may be forwarded to one of ASEs 108 for security operations. The packet may also be forwarded to another one of VREs 104.


Although system 100 is illustrated as having several separate functional elements, one or more of the functional elements may be combined and may be implemented by combinations of software configured elements, such as processors including digital signal processors (DSPs), and/or other hardware elements.


In accordance with embodiments of the present invention, virtual routing system 100 supports a plurality of virtual routers (VRs) instantiated by one of virtual routing engines (VREs) 104 and which may operate on PEs 116. In this embodiment, the instantiation of each VR includes an associated routing context. The virtual routing system may perform a method of multicasting packets that comprises determining one of the plurality of VRs for a packet received from a service provider for multicasting, and switching a routing context of the VRE to a routing context associated with the VR determined for the received packet. At least a portion of the packet is read from one of a plurality of multicast address spaces associated with the selected VR to multicast the packet. The packet may be a first packet received from a service provider for multicasting to a first multicast destination, and when a second packet is received from the service provider for multicasting, the method may also include determining another one of the VRs for the second packet, and switching the routing context of the VRE to a routing context associated with the VR determined for the second packet. At least a portion of the second packet is read from another of the plurality of multicast address spaces associated with the VR determined for the second packet to multicast the second packet. The second packet may be forwarded to second multicast destinations.
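
A minimal sketch of the per-packet method just described, reusing the hypothetical processing_engine and switch_context() from the earlier sketch; vr_for_packet(), read_from_mcast_space(), and forward_to() are assumed helpers for illustration, not interfaces defined by the patent.

```c
#include <stddef.h>
#include <stdint.h>

struct packet;
struct packet_dest;
struct dest_list { struct packet_dest *dests; size_t count; };

/* Assumed helpers (declarations only; implementations are out of scope). */
uint32_t    vr_for_packet(const struct packet *pkt);
const void *read_from_mcast_space(struct vr_context *ctx, const struct packet *pkt);
void        forward_to(struct packet_dest *dest, const void *payload);

void multicast_packet(struct processing_engine *pe,
                      const struct packet *pkt,
                      const struct dest_list *dl)
{
    /* 1. Determine which VR handles this multicast flow. */
    uint32_t vr_id = vr_for_packet(pkt);

    /* 2. Switch the VRE's routing context to that VR's routing context. */
    switch_context(pe, vr_id);

    /* 3. For each instance of multicasting, re-read at least a portion of
     *    the packet from the VR's multicast address space and forward it. */
    for (size_t i = 0; i < dl->count; i++) {
        const void *payload = read_from_mcast_space(pe->current, pkt);
        forward_to(&dl->dests[i], payload);
    }
}
```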


Accordingly, multiple VRs may utilize multiple multicast address spaces, which may allow a service provider, such as an Internet Service Provider (ISP), to utilize system 100 simultaneously for multicasting for many different access clients (i.e., subscribers). Conventional routing systems may require a separate router for each customer or service provider.



FIG. 2 is a functional block diagram of a packet multicasting system in accordance with an embodiment of the present invention. Packet multicasting system 200 may be implemented by a virtual routing engine, such as one of VREs 104 (FIG. 1). System 200 may include packet-classifying system 202, which receives packets from a network and may classify a packet for multicasting in a certain routing context using flow classification block 204. Packet classifying system 202 may also buffer the received packets in input buffer 206. System 200 may also include packet-transforming system 208 which may receive the multicast packet and a first of a plurality of flow classification indices from packet classifying system 202 and may buffer the multicast packet in output buffer 212, which may be associated with the packet transformer. Packet transforming system 208 may identify first transform control instructions from the first flow classification index, and may transform the multicast packet in accordance with the first transform control instructions.


For next instances of multicasting the packet, packet classifying system 202 may send a next of the flow classification indices to packet transforming system 208 without the multicast packet, and packet transforming system 208 may identify next transform control instructions from the next of the flow classification indices. Packet transforming system 208 may also read the multicast packet from buffer 212, and transform the multicast packet in accordance with the next transform control instructions.


In one embodiment, the flow classification index may identify the packet as a multicast packet and accordingly, the packet can be re-read from buffer 212 rather than re-sent from packet classifying system 202 for each instance of multicasting. This is described in more detail below. Although system 200 is illustrated as having several separate functional elements, one or more of the functional elements may be combined and may be implemented by combinations of software configured elements, such as processors including digital signal processors (DSPs), and/or other hardware elements. In embodiments of the present invention, at least a payload portion of a packet (e.g., a packet without all or portions of the header) may be buffered in input buffer 206, may be transferred to packet transforming system 208 and may be buffered in output buffer 212. In these embodiments, packet classifying system 202 may remove all or portions of the header during packet classification, and packet transforming system 208 may add all or portions of a new header during packet transformation.



FIG. 3 illustrates the identification of flow classification indices for multicast packets in accordance with an embodiment of the present invention. When a packet is received at a routing system, such as system 200 (FIG. 2), hash 302 may be performed on a header portion of the packet to generate flow classification index 304, which may be used to locate a particular flow index of flow classification block (FCB) 306. FCB 306 may correspond with flow classification block 204 (FIG. 2). In the case of a multicast packet flow, the particular flow index of FCB 306 may point to array 308 of flow indices. Each flow index of array 308 may correspond with an instance of multicasting. In accordance with an embodiment of the present invention, one of the flow indices of array 308 may be provided to a packet transformer, such as packet transforming system 208, for use in transforming a buffered packet for multicasting. This is described in more detail below.
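
The following hypothetical C sketch illustrates the lookup path of FIG. 3: a hash over the packet header produces a flow classification index, which selects an FCB entry; for a multicast flow that entry points to an array of flow indices, one per instance of multicasting. The hash function, table size, and field names are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

#define FCB_SIZE 4096                /* illustrative table size */

struct fcb_entry {
    int       is_multicast;
    uint32_t  flow_index;            /* used directly for unicast flows        */
    uint32_t *flow_index_array;      /* one index per instance of multicasting */
    size_t    array_len;
};

static struct fcb_entry fcb[FCB_SIZE];

/* FNV-1a, standing in for whatever hash the hardware actually uses. */
static uint32_t header_hash(const uint8_t *hdr, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) { h ^= hdr[i]; h *= 16777619u; }
    return h;
}

/* Hash the header portion to a flow classification index and return the
 * corresponding FCB entry (array 308 hangs off multicast entries). */
struct fcb_entry *classify(const uint8_t *hdr, size_t hdr_len)
{
    uint32_t flow_classification_index = header_hash(hdr, hdr_len) % FCB_SIZE;
    return &fcb[flow_classification_index];
}
```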



FIG. 4 is a flow chart of an ingress system packet flow procedure in accordance with an embodiment of the present invention. Procedure 400 may be implemented by an ingress system, such as packet classifying system 202 (FIG. 2) although other systems may also be suitable. In operation 402, a packet is received and in operation 404, the packet flow may be classified. Operation 404 may classify the packet flow by performing a hash on header portions of the packet as illustrated in FIG. 3. In operation 406, a flow index is retrieved based on the packet flow classification of operation 404. In the case of a non-multicast packet flow (e.g., a unicast packet flow), one flow index may be identified and retrieved. In the case of a multicast packet flow, a plurality of flow indices may be identified, such as array 308 (FIG. 3). In operation 408, the received packet may be buffered in an input memory, such as input buffer 206 (FIG. 2). In operation 410, the packet along with the flow index may be sent to an egress system, such as packet transforming system 208 (FIG. 2). In the case of a multicast packet, operation 410 may send the packet along with a first flow index of the plurality of flow indices. A descriptor may be included to identify the flow as a multicast flow and instruct the egress system to re-read the same packet for subsequently received flow indices.


Operation 412 determines if the classified packet flow is a multicast packet flow or a unicast packet flow. When the packet flow is a unicast packet flow, operation 414 may repeat the performance of operations 402 through 412 for a subsequent packet. When the packet flow is a multicast packet flow, operation 416 is performed. In operation 416, the next flow index of the plurality of indices is retrieved and in operation 418, it is sent to the egress system. In one embodiment, a descriptor included with the next flow index indicates that the flow is a multicast flow instructing the egress system to use a previous packet. Operation 420 determines when there are more flow indices and operations 416 and 418 may be performed for each of the remaining indices. Operation 422 may set a memory release bit to allow the egress system to release the memory location where it has stored the multicast packet after receipt of the last flow index. In one embodiment, the memory release bit may be part of a descriptor, and in another embodiment, it may be a particular bit of the flow index sent in operation 418.
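
A hypothetical sketch of procedure 400, building directly on the classification sketch above (reusing struct fcb_entry and classify()); send_to_egress(), buffer_in_input_memory(), and the descriptor flag values are assumptions for illustration only.

```c
#include <stdint.h>
#include <stddef.h>

#define DESC_MULTICAST   0x1   /* flow is multicast; egress re-reads packet */
#define DESC_MEM_RELEASE 0x2   /* last index; egress may release its buffer */

struct packet { uint8_t *hdr; size_t hdr_len; /* payload, etc. */ };

/* Assumed helpers. */
void buffer_in_input_memory(struct packet *pkt);
void send_to_egress(struct packet *pkt, uint32_t flow_index, uint32_t flags);

void ingress_flow(struct packet *pkt)
{
    struct fcb_entry *e = classify(pkt->hdr, pkt->hdr_len);   /* 402-406 */
    buffer_in_input_memory(pkt);                              /* 408     */

    if (!e->is_multicast) {                                   /* 412     */
        send_to_egress(pkt, e->flow_index, 0);                /* 410     */
        return;
    }

    /* First instance: the packet travels with the first flow index (410). */
    send_to_egress(pkt, e->flow_index_array[0], DESC_MULTICAST);

    /* Subsequent instances: indices only; egress re-reads the same packet. */
    for (size_t i = 1; i < e->array_len; i++) {               /* 416-420 */
        uint32_t flags = DESC_MULTICAST;
        if (i == e->array_len - 1)
            flags |= DESC_MEM_RELEASE;                        /* 422     */
        send_to_egress(NULL, e->flow_index_array[i], flags);
    }
}
```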


When there are no more flow indices of the plurality to be sent, each instance of packet multicasting has been provided to the egress system, and operation 424 may be performed for a next packet flow re-performing procedure 400. Although the individual operations of procedure 400 are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently and nothing requires that the operations be performed in the order illustrated.



FIG. 5 is a flow chart of an egress system packet flow procedure in accordance with an embodiment of the present invention. Procedure 500 may be performed by an egress system such as packet transforming system 208 (FIG. 2), although other systems may also be suitable for performing procedure 500. In operation 502, a flow index may be received from an ingress system. The flow index may be received with a packet (e.g., at least the payload) or may be received without a packet. Flow indices received with a packet may be for packets of a unicast packet flow or for a packet of a first instance of a multicast packet flow. Flow indices received in operation 502 without a packet may be for subsequent instances of a multicast packet flow. In one embodiment, a descriptor may be received in operation 502 to indicate whether the flow is a multicast flow.


Operation 504 determines whether the flow index is for a multicast packet flow. When the flow index is for a multicast packet flow, operation 506 is performed. Operation 506 determines whether the flow index is for a first instance of a multicast flow. When operation 506 determines that the flow index is for a first instance of a multicast flow, or when operation 504 determines that the flow index is not for a multicast flow, operation 508 is performed. In operation 508, the received packet is buffered in memory, such as buffer 212. In operation 510, a transform index may be identified for the packet from the received flow index. In operation 512, the buffered packet may be read from the buffer, and in operation 514, the transform index may be attached to the packet. In operation 516, the transform index and packet are sent to a packet transform processor, such as an egress processor. In operation 518, the transform processor may perform a packet transform on the packet by using the transform index. In one embodiment, the transform index may identify a transform control block (TCB), such as TCB 210 (FIG. 2), which may be identified by the transform processor for performing packet transformation in operation 518. In operation 520, the transformed packet may be sent out for routing to a network.


In the case of a multicast packet flow wherein the packet is not received in operation 502, operations 522-526 are performed. Similar to operation 510, operation 522 identifies a transform index from the received flow index. In operation 524, similar to operation 512, the buffered packet is read from the buffer. In operation 526, the memory location where the multicast packet is stored may be released in the case of the last instance of the multicast flow. In one embodiment, a descriptor may be used to identify when to release the memory location. The descriptor may be part of the flow index received in operation 502.


Accordingly, for a multicast flow, a packet may be received only once (i.e., the first time) and stored only once (e.g., operation 508) and for subsequent instances of multicasting, the packet is re-read from a buffer. Although the individual operations of procedure 500 are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently and nothing requires that the operations be performed in the order illustrated.
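
A corresponding hypothetical sketch of the egress side (procedure 500), reusing the descriptor flags and struct packet from the ingress sketch above; the buffer, TCB lookup, and forwarding calls are stand-ins for the hardware described in the text.

```c
#include <stdint.h>

/* Assumed helpers standing in for buffer 212, the TCB table, and the
 * transform processor. */
struct packet *buffer_in_output_memory(struct packet *pkt);
void           release_output_buffer(struct packet *pkt);
uint32_t       transform_index_for(uint32_t flow_index);
struct packet *apply_transform(struct packet *pkt, uint32_t transform_index);
void           send_to_network(struct packet *pkt);

static struct packet *egress_buffer;    /* common buffer (output buffer 212) */

void egress_flow(struct packet *pkt, uint32_t flow_index, uint32_t flags)
{
    if (pkt != NULL) {
        /* Unicast, or first instance of a multicast flow: buffer it (508). */
        egress_buffer = buffer_in_output_memory(pkt);
        pkt = egress_buffer;
    } else {
        /* Subsequent multicast instance: re-read the buffered packet (524). */
        pkt = egress_buffer;
    }

    uint32_t tindex = transform_index_for(flow_index);        /* 510 / 522 */
    send_to_network(apply_transform(pkt, tindex));            /* 516-520   */

    if (flags & DESC_MEM_RELEASE)                              /* 526       */
        release_output_buffer(egress_buffer);
}
```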



FIG. 6 is a functional block diagram of a packet-forwarding engine in accordance with an embodiment of the present invention. Packet-forwarding engine (PFE) 600 may be suitable for use as system 200, although other systems may also be suitable. PFE 600 may provide hardware-assisted packet forwarding, and in one embodiment, PFE 600 may implement VR/VI-based forwarding of L3/L4 packet types including MPLS, IP, TCP/IP, UDP/IP and IPSec packet types. In some embodiments, PFE 600 may also implement flow cache and IP/MPLS route look-up forwarding modes, header insertion/replacement, and MPLS header processing, including label push/pop and TTL decrement. In some embodiments, PFE 600 may also implement IP header processing including header validation, TTL decrement, DiffServ code-point marking, and header checksum adjustment. In some embodiments, PFE 600 may also implement TCP/IP Network Address Translation (NAT), ingress and egress rate limiting and ingress and egress statistics.


PFE 600 may operate in one of PEs 116 (FIG. 1) and may be logically situated between a switch fabric interface and a DMA engine of one of PEs 116 (FIG. 1). PFE 600 may be partitioned into ingress system 602 and egress system 604 as illustrated. Ingress system 602 may be suitable for use as packet classifier 202 (FIG. 2) and egress system 604 may be suitable for use as packet transformer 208 (FIG. 2). Ingress system 602 may process incoming packets received from the switch fabric ingress interface 606 and may transfer them to the DMA engine ingress 608. Egress system 604 may process outgoing packets from the DMA engine egress 610 and may transfer them to switch fabric egress interface 612. Both the ingress and egress systems may have direct access to a processing engine's memory system.


In one embodiment, the micro-architecture of both PFE 600 ingress and egress units may include an array of packet processors 616 that may share an on-chip write-back cache 614. Each packet processor may operate on a different packet, and hardware interlocks may maintain packet order. The ingress packet processors may share common micro-code for ingress processing and the egress packet processors may share common micro-code for egress processing. PFE 600 may memory-map the ingress and egress instruction stores and may support micro-code updates through write transactions.


Ingress system 602 may pass forwarding state to the DMA engine, which may incorporate this state into the packet receive descriptor. This forwarding state indicates whether the CPU should software-forward the packet or whether the packet may bypass the CPU and be hardware-forwarded by PFE 600. The forwarding state also may include an index into a forwarding transform cache that describes PFE processing per packet micro-flow. For software-forwarded packets, the receive descriptor may be pushed onto the DMA ingress descriptor queue. For hardware-forwarded packets, including multicast packets, the descriptor may bypass the DMA ingress queue and be pushed directly onto the DMA egress descriptor queue as a transmit descriptor.


In an embodiment of the present invention, ingress system 602 may provide at least two basic forms of packet classification. One is flow-based, using various fields of the LQ header along with fields in the L3/L4 headers to identify a particular micro-flow in the context of a particular VR. The other form uses the upper bits of the IP address or MPLS label to index a table of flow indices. The host software controls which classification form PFE 600 uses by programming different micro-code into the ingress instruction store. In both forms, the classification result may be a forwarding index that the hardware uses to select the correct packet transformations.


In an embodiment of the present invention, each flow ID cache entry stores the LQ ID, LQ protocol, L3, and L4 fields that identify a particular VR micro-flow along with state indicating whether to hardware or software forward packets belonging to the micro-flow. Ingress system 602 generates an index (e.g., flow classification index 304 (FIG. 3)) into the flow ID cache (e.g., FCB 306 (FIG. 3)) by hashing the incoming packet's LQ ID, LQ protocol, L3, and L4 header fields. It then looks up the indexed cache entry and compares the packet micro-flow ID fields to the cached micro-flow ID fields. On a cache hit, the FwdAction field of the cache entry indicates whether to software or hardware forward the packet. On a cache miss, the ingress controller allocates a cache entry and forwards the packet to software for flow learning.
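
A hypothetical sketch of this flow ID cache lookup; field widths, the hash, and the allocation-on-miss handling are illustrative rather than hardware-accurate, and flow_hash() is an assumed helper.

```c
#include <stdint.h>
#include <string.h>

#define FLOW_CACHE_SIZE 4096         /* illustrative */

struct micro_flow_id {
    uint16_t lq_id;
    uint8_t  lq_protocol;
    uint32_t l3_src, l3_dst;         /* L3 header fields */
    uint16_t l4_src, l4_dst;         /* L4 header fields */
};

enum fwd_action { FWD_SOFTWARE, FWD_HARDWARE };

struct flow_cache_entry {
    int                  valid;
    struct micro_flow_id id;
    enum fwd_action      fwd_action; /* FwdAction: hardware or software path */
};

static struct flow_cache_entry flow_cache[FLOW_CACHE_SIZE];

uint32_t flow_hash(const struct micro_flow_id *id);   /* assumed hash helper */

enum fwd_action lookup_flow(const struct micro_flow_id *id)
{
    struct flow_cache_entry *e = &flow_cache[flow_hash(id) % FLOW_CACHE_SIZE];

    /* Cache hit: the micro-flow ID fields match, so FwdAction decides. */
    if (e->valid && memcmp(&e->id, id, sizeof(*id)) == 0)
        return e->fwd_action;

    /* Cache miss: allocate the entry and send the packet to software for
     * flow learning. */
    e->valid = 1;
    e->id = *id;
    e->fwd_action = FWD_SOFTWARE;
    return FWD_SOFTWARE;
}
```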


In an embodiment of the present invention, when programmed for table lookup mode, PFE 600 classifies an IP packet by performing an IP destination address route look-up from the IP Prefix Table. In one embodiment, the IP Prefix Table may include a 16M-entry first-level IP prefix table indexed by the upper 24 bits of the IP destination address and some number of 256-entry IP prefix sub-tables indexed by the lower 8 bits of the IP destination address. A prefix table entry may include either a transform cache index or a pointer to a prefix sub-table. The state of the table entry's NextTable field determines the format of the table entry. When the NextTable bit is set to ‘1’, the bottom 31 bits of the entry indicate the address of the next-level table. When the NextTable bit is set to ‘0’, the bottom bits of the entry indicate the forwarding index and whether or not to send packets to software. The host software can steer packets with particular IP prefixes to the CPU by setting the Software Only field in the table leaf entries.
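
A hypothetical sketch of this two-level prefix lookup: the first-level table is indexed by the upper 24 bits of the destination address, and an entry's NextTable bit selects between a 256-entry sub-table (indexed by the low 8 bits) and a leaf forwarding index. The entry encoding, bit positions, and static table allocation here are illustrative assumptions.

```c
#include <stdint.h>

#define NEXT_TABLE_BIT 0x80000000u   /* NextTable: low 31 bits = sub-table   */
#define ENTRY_MASK     0x7fffffffu   /* otherwise: forwarding index / flags  */
#define NUM_SUB_TABLES 256           /* "some number" of sub-tables          */

/* Statically allocated here only for brevity of the sketch. */
static uint32_t ip_prefix_table[1u << 24];            /* upper 24 bits of DA */
static uint32_t sub_tables[NUM_SUB_TABLES][256];      /* lower 8 bits of DA  */

uint32_t route_lookup(uint32_t ip_dst)
{
    uint32_t entry = ip_prefix_table[ip_dst >> 8];

    if (entry & NEXT_TABLE_BIT) {
        /* Entry points at a second-level 256-entry prefix sub-table. */
        uint32_t sub = (entry & ENTRY_MASK) % NUM_SUB_TABLES;
        entry = sub_tables[sub][ip_dst & 0xffu];
    }
    /* Leaf entry: forwarding index (the real entry format also carries a
     * Software Only indication to steer the packet to the CPU). */
    return entry & ENTRY_MASK;
}
```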


In an embodiment of the present invention, when programmed for table lookup mode and the protocol field of the ingress switch fabric header has the MPLS bit set, PFE 600 classifies a packet by performing a table lookup based on the packet's 20-bit MPLS label. In this embodiment, there may be two tables: one for when the MPLS BOS bit is not set and one for when the MPLS BOS bit is set. Each table's 1M entries contain a 20-bit forwarding index and a bit to direct packets to the CPU.
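
A corresponding hypothetical sketch of the MPLS label lookup, with one table for BOS clear and one for BOS set; the entry layout (a 20-bit forwarding index plus a to-CPU bit) follows the description above, but the exact bit positions are assumptions.

```c
#include <stdint.h>

#define MPLS_TO_CPU_BIT  (1u << 20)     /* assumed position of the CPU bit */
#define MPLS_INDEX_MASK  0x000fffffu    /* 20-bit forwarding index         */

static uint32_t mpls_table[2][1u << 20];  /* [BOS][20-bit MPLS label] */

uint32_t mpls_lookup(uint32_t label, int bos, int *to_cpu)
{
    uint32_t entry = mpls_table[bos ? 1 : 0][label & MPLS_INDEX_MASK];
    *to_cpu = (entry & MPLS_TO_CPU_BIT) != 0;
    return entry & MPLS_INDEX_MASK;       /* forwarding index */
}
```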


In an embodiment of the present invention, PFE 600 maintains a table of transform control blocks (TCBs), which direct how the egress controller may process outgoing packets. The egress controller uses a forwarding index, carried by the DMA descriptor, to select a transform control block from the table before processing packets. To update a TCB, host software may send a control packet containing a message with an address parameter that points to the new TCB. Software may issue the TCB update control packet before issuing the packet being forwarded. This may ensure that the forwarded packet is processed according to the updated TCB.


In an embodiment of the present invention, some fields may be used to maintain packet order and associate the TCB with a specific flow. In flow mode, where several new packets for a flow could be sent to the CPU, there is a danger that once the CPU updates the TCB and FCB, a packet could be hardware forwarded while the CPU still has packets for that flow. Packet order may be enforced by the TCB. When the TCB is written, the DropCpuPkt bit should be zero; this may allow the CPU to send the NEW packets it has for that flow. However, when the first FWD_HW packet is seen with this bit clear, the forward engine may update the TCB and set this bit. Subsequent packets from the CPU (recognized because they are marked FWD_HW_COH) may be dropped. There may also be a consistency check performed between the FCB and the TCB. On ingress, the SF header SrcChan is replaced with the PendingTag field of the FCB; on egress, the SrcChan is compared against the FCBTag field of the TCB. If the tags mismatch, the packet is dropped. For prefix mode, the SrcChan is replaced with zero, and the FCBTag field may be initialized to zero.


In an embodiment of the present invention, packet header transformation involves the replacement of some number of header bytes of an ingress packet with some number of bytes of replacement header data. Under the control of a TCB, egress system 604 may selectively replace and recompute specific fields in a small set of protocol headers. Egress system 604 begins the header transform by stripping the incoming packet's SF header along with the number of bytes indicated by the SF header offset field. At that point, the controller may begin copying bytes from the buffer pointed to by the TCB's HDRPTR field into the egress packet buffer. PFE 600 may copy the number of new header bytes defined by the TCB's HDRLEN field. After performing this header replacement, PFE 600 then goes through the TCB enable bits to determine what other header transformations need to be made.
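
A hypothetical sketch of the header replacement step just described: strip the SF header plus the number of bytes indicated by its offset field, copy HDRLEN bytes of replacement header from the buffer at HDRPTR, and then (not shown) walk the TCB enable bits for further transforms. The TCB layout and function signature are illustrative stand-ins.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct tcb_hdr {
    const uint8_t *hdr_ptr;      /* HDRPTR: replacement header bytes        */
    size_t         hdr_len;      /* HDRLEN: number of header bytes to copy  */
    uint32_t       enable_bits;  /* which further transforms to apply       */
};

/* Returns the transformed length, or 0 if the packet is too short or the
 * output buffer is too small. */
size_t replace_header(const uint8_t *in, size_t in_len,
                      size_t sf_hdr_len, size_t sf_offset,
                      const struct tcb_hdr *tcb,
                      uint8_t *out, size_t out_cap)
{
    size_t strip = sf_hdr_len + sf_offset;          /* bytes to discard     */
    if (strip > in_len)
        return 0;

    size_t out_len = tcb->hdr_len + (in_len - strip);
    if (out_len > out_cap)
        return 0;

    memcpy(out, tcb->hdr_ptr, tcb->hdr_len);                 /* new header  */
    memcpy(out + tcb->hdr_len, in + strip, in_len - strip);  /* rest of pkt */

    /* ...then consult tcb->enable_bits for NAT and other field updates.   */
    return out_len;
}
```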


Egress system 604 may perform a network address translation (NAT) for IP addresses and for TCP/UDP port addresses. When software enables IP or TCP/UDP NAT, it may also provide the associated replacement addresses and checksum adjustments in the corresponding TCB fields. When the hardware detects that one of the NAT enable bits is set to ‘1’, it may replace both the source and destination addresses. If software intends to translate only the source address, it may still supply the correct destination address in the TCB replacement field. Similarly, the software may also supply the correct source address in the TCB replacement field when it is just replacing the destination address. A checksum adjustment may also be computed.
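
A hypothetical sketch of the NAT step. The hardware described above uses checksum adjustments supplied by software in the TCB; this sketch instead computes an incremental adjustment directly (in the spirit of RFC 1624) to show why replacing both addresses keeps the IP header checksum consistent. All names are illustrative.

```c
#include <stdint.h>

/* Incrementally adjust a 16-bit one's-complement checksum when a 32-bit
 * field changes from old_word to new_word (RFC 1624 style). */
static uint16_t csum_adjust32(uint16_t csum, uint32_t old_word, uint32_t new_word)
{
    uint32_t sum = (uint16_t)~csum;
    sum += (uint16_t)~(old_word >> 16) + (uint16_t)~(old_word & 0xffffu);
    sum += (new_word >> 16) + (new_word & 0xffffu);
    while (sum >> 16)                       /* fold carries back in */
        sum = (sum & 0xffffu) + (sum >> 16);
    return (uint16_t)~sum;
}

struct ip_nat_fields { uint32_t src, dst; uint16_t checksum; };

/* Replace both the source and destination addresses (as the text notes,
 * the TCB always carries both, even if only one is being translated). */
void apply_ip_nat(struct ip_nat_fields *ip, uint32_t new_src, uint32_t new_dst)
{
    ip->checksum = csum_adjust32(ip->checksum, ip->src, new_src);
    ip->checksum = csum_adjust32(ip->checksum, ip->dst, new_dst);
    ip->src = new_src;
    ip->dst = new_dst;
}
```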


On the ingress side, layer two packets may be distinguished by bit five of the SF header protocol field being set. Micro-code checks this bit and jumps to separate L2 header loading logic when it is set. Separate code-points for each L2/L3 protocol are defined in the SF specification; jumping to the proper parsing logic is done by using the entire SF protocol field (including the L2 bit) as an index into a jump table and jumping to the instruction that causes a jump to the proper code segment. One of the functions of the L2 parsing logic is to determine the size of the variable-length L2 headers and increment the SF offset field by that amount (in some cases, such as 2nd-pass de-tunneling) so that egress system 604 may strip off that part of the header. In addition, the SF protocol field may be changed (also for 2nd-pass de-tunneling) to another protocol type depending on what the underlying packet type is; this may also be determined by the parsing logic and causes the proper egress code path to be taken.


The foregoing description of specific embodiments reveals the general nature of the invention sufficiently that others can, by applying current knowledge, readily modify and/or adapt it for various applications without departing from the generic concept. Therefore such adaptations and modifications are within the meaning and range of equivalents of the disclosed embodiments. The phraseology or terminology employed herein is for the purpose of description and not of limitation. Accordingly, the invention embraces all such alternatives, modifications, equivalents and variations as fall within the spirit and scope of the appended claims.

Claims
  • 1. A method comprising: providing a virtual routing engine (VRE) including a plurality of virtual routing processors and corresponding memory systems, the VRE implementing a plurality of virtual routers (VRs) operable on one or more of the plurality of virtual routing processors and associated routing contexts utilizing a plurality of multicast address spaces resident in the corresponding memory systems; andsimultaneously performing multicasting on behalf of a plurality of subscribers of a service provider by, for each of a plurality of multicast flows originated by the plurality of subscribers: selecting a VR of the plurality of VRs to handle multicast packets associated with the multicast flow;switching a routing context of the VRE to the associated routing context of the selected VR; andforwarding a multicast packet of the multicast flow to a plurality of multicast destinations by reading at least a portion of the multicast packet from a common buffer for each instance of multicasting and applying destination specific transform control instructions to the multicast packet for each instance of multicasting.
  • 2. The method of claim 1, further comprising: identifying a plurality of flow classification indices for the multicast packet flow;sending the multicast packet and a first of the flow classification indices to a packet transformer;buffering the multicast packet in a memory associated with the packet transformer; andidentifying first transform control instructions from the first flow classification index.
  • 3. The method of claim 2, further comprising: sending a next of the flow classification indices without the multicast packet to the packet transformer;identifying next transform control instructions from the next of the flow classification indices;reading the multicast packet from the memory; andtransforming the multicast packet in accordance with the next transform control instructions.
  • 4. A virtual routing system comprising: a plurality of virtual routing engines (VREs) each including a plurality of virtual routing processors and corresponding memory systems, each of the plurality of VREs implement a plurality of virtual routers (VRs) operable on one or more of the plurality of virtual routing processors and associated routing contexts utilizing a plurality of multicast address spaces resident in the corresponding memory systems;a flow manager operable to cause appropriate VRs of the plurality of VRs to handle multicast packets received from a service provider; andwherein the virtual routing system simultaneously handles multicasting for a plurality of subscribers of the service provider by performing a method comprising for each of a plurality of multicast flows originated by the plurality of subscribers: dynamically identifying a VRE of the plurality of VREs for the multicast flow by selecting a VR of the plurality of VRs implemented by the identified VRE to handle multicast packets associated with the multicast flow;switching a routing context of the identified VRE to the associated routing context of the selected VR; andforwarding a multicast packet of the multicast flow to a plurality of multicast destinations by reading at least a portion of the multicast packet from a common buffer for each instance of multicasting and applying destination specific transform control instructions to the multicast packet for each instance of multicasting.
  • 5. The virtual routing system of claim 4, wherein the service provider comprises an Internet Service Provider (ISP) and wherein the plurality of subscribers comprise a plurality of access clients.
  • 6. The virtual routing system of claim 4, wherein the portion of the multicast packet excludes a header of the multicast packet or one or more portions of the header of the multicast packet.
  • 7. The virtual routing system of claim 6, further comprising a packet classification system and wherein the header or the one or more portions of the header are removed by the packet classification system.
  • 8. The virtual routing system of claim 4, further comprising a packet transforming system and wherein a new header or one or more portions of the new header are added to the multicast packet by the packet transforming system.
  • 9. A virtual routing system comprising: a plurality of virtual routing engines (VREs) each including a plurality of virtual routing processors and corresponding memory systems, each of the plurality of VREs implement a plurality of virtual routers (VRs) operable on one or more of the plurality of virtual routing processors and associated routing contexts utilizing a plurality of multicast address spaces resident in the corresponding memory systems;a flow manager operable to cause appropriate VRs of the plurality of VRs to handle multicast packets received from a service provider; andwherein the virtual routing system simultaneously handles multicasting for a plurality of subscribers of the service provider by performing a method comprising: receiving a first multicast flow from the service provider that is originated by a first subscriber of the plurality of subscribers;dynamically identifying a first VRE of the plurality of VREs for the first multicast flow by selecting a first VR of the plurality of VRs implemented by the first VRE to handle multicast packets associated with the first multicast flow;switching a routing context of the first VRE to the associated routing context of the first VR;forwarding a first multicast packet of the first multicast flow to a multicast destination associated with the first multicast flow by reading at least a portion of the first multicast packet from a first multicast address space of the plurality of multicast address spaces;receiving a second multicast flow from the service provider that is originated by a second subscriber of the plurality of subscribers;dynamically identifying the first VRE for the second multicast packet flow by selecting a second VR of the plurality of VRs implemented by the first VRE to handle multicast packets associated with the second multicast flow;switching a routing context of the first VRE to the associated routing context of the second VR; andforwarding a second multicast packet of the second multicast flow to a multicast destination associated with the second multicast flow by reading at least a portion of the second multicast packet from a second multicast address space of the plurality of multicast address spaces.
  • 10. The virtual routing system of claim 9, wherein the method further comprises: transforming a header of the first multicast packet in accordance with transform control instructions of the associated routing context of the first VR; andtransforming a header of the second multicast packet in accordance with transform control instructions of the associated routing context of the second VR.
  • 11. The virtual routing system of claim 9, wherein the method further comprises: reading the first multicast packet from a same buffer for each instance of multicasting.
  • 12. The virtual routing system of claim 9, wherein the method further comprises: reading the second multicast packet from a same buffer for each instance of multicasting.
  • 13. The virtual routing system of claim 9, wherein the service provider comprises an Internet Service Provider (ISP) and wherein the plurality of subscribers comprise a plurality of access clients.
  • 14. The virtual routing system of claim 9, further comprising a packet transformer and wherein the method further comprises: identifying a first plurality of flow classification indices for the first multicast packet flow;sending the first multicast packet of the first multicast flow and a first flow classification index of the first plurality of flow classification indices to the packet transformer;buffering the first multicast packet in a memory associated with the packet transformer; andidentifying first transform control instructions from the first flow classification index.
  • 15. The virtual routing system of claim 14, wherein the method further comprises: sending a next flow classification index of the first plurality of flow classification indices without the first multicast packet to the packet transformer;identifying next transform control instructions from a next classification index of the first plurality of flow classification indices;reading the first multicast packet from the memory; andtransforming the first multicast packet in accordance with the next transform control instructions.
  • 16. The virtual routing system of claim 9, wherein the portion of the first multicast packet excludes a header of the first multicast packet or one or more portions of the header of the multicast packet.
  • 17. The virtual routing system of claim 16, further comprising a packet classification system and wherein the header or the one or more portions of the header are removed by the packet classification system.
  • 18. The virtual routing system of claim 9, further comprising a packet transforming system and wherein a new header or one or more portions of the new header are added to the first multicast packet by the packet transforming system.
  • 19. The virtual routing system of claim 9, wherein the first multicast address space and the second multicast address space overlap.
  • 20. The method of claim 1, wherein the plurality of multicast address spaces include overlapping multicast address spaces.
  • 21. The virtual routing system of claim 4, wherein the plurality of multicast address spaces include overlapping multicast address spaces.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 11/849,352, filed on Sep. 3, 2007, now U.S. Pat. No. 7,933,269, which is a continuation of U.S. application Ser. No. 10/298,815 filed on Nov. 18, 2002, now U.S. Pat. No. 7,266,120, which is hereby incorporated by reference in its entirety for all purposes. This application is also related to U.S. Pat. No. 7,177,311, which is hereby incorporated by reference in its entirety for all purposes.

US Referenced Citations (231)
Number Name Date Kind
4590468 Stieglitz May 1986 A
4667287 Allen et al. May 1987 A
4667323 Engdahl et al. May 1987 A
4726018 Bux et al. Feb 1988 A
5371852 Attanasio et al. Dec 1994 A
5473599 Li et al. Dec 1995 A
5483525 Song et al. Jan 1996 A
5490252 Macera et al. Feb 1996 A
5491691 Shtayer et al. Feb 1996 A
5581705 Passint et al. Dec 1996 A
5598414 Walser et al. Jan 1997 A
5633866 Callon May 1997 A
5745778 Alfieri Apr 1998 A
5825091 Adams et al. Oct 1998 A
5825772 Dobbins et al. Oct 1998 A
5825891 Levesque et al. Oct 1998 A
5841973 Kessler et al. Nov 1998 A
5875290 Bartfai et al. Feb 1999 A
5892924 Lyon et al. Apr 1999 A
5920705 Lyon et al. Jul 1999 A
5963555 Takase et al. Oct 1999 A
5987521 Arrowood et al. Nov 1999 A
6014382 Takihiro et al. Jan 2000 A
6032193 Sullivan Feb 2000 A
6047330 Stracke Apr 2000 A
6069895 Ayandeh May 2000 A
6085238 Yuasa et al. Jul 2000 A
6094674 Hattori et al. Jul 2000 A
6098110 Witkowski et al. Aug 2000 A
6118791 Fichou et al. Sep 2000 A
6137777 Vaid et al. Oct 2000 A
6169739 Isoyama Jan 2001 B1
6169793 Godwin et al. Jan 2001 B1
6173399 Gilbrech Jan 2001 B1
6175867 Taghadoss Jan 2001 B1
6192051 Lipman et al. Feb 2001 B1
6212556 Arunachalam Apr 2001 B1
6220768 Barroux Apr 2001 B1
6226788 Schoening et al. May 2001 B1
6243580 Garner Jun 2001 B1
6246682 Roy et al. Jun 2001 B1
6249519 Rangachar Jun 2001 B1
6256295 Callon Jul 2001 B1
6260072 Rodriguez Jul 2001 B1
6260073 Walker et al. Jul 2001 B1
6266695 Huang et al. Jul 2001 B1
6272500 Sugita Aug 2001 B1
6278708 Von Hammerstein et al. Aug 2001 B1
6286038 Reichmeyer et al. Sep 2001 B1
6295297 Lee Sep 2001 B1
6298130 Galvin Oct 2001 B1
6324583 Stevens Nov 2001 B1
6330602 Law et al. Dec 2001 B1
6338092 Chao et al. Jan 2002 B1
6339782 Gerard et al. Jan 2002 B1
6343083 Mendelson et al. Jan 2002 B1
6397253 Quinlan et al. May 2002 B1
6405262 Vogel et al. Jun 2002 B1
6414595 Scrandis et al. Jul 2002 B1
6434619 Lim et al. Aug 2002 B1
6449650 Westfall et al. Sep 2002 B1
6463061 Rekhter et al. Oct 2002 B1
6466976 Alles et al. Oct 2002 B1
6487666 Shanklin Nov 2002 B1
6493349 Casey Dec 2002 B1
6526056 Rekhter et al. Feb 2003 B1
6532088 Dantu Mar 2003 B1
6542466 Pashtan et al. Apr 2003 B1
6542502 Herring et al. Apr 2003 B1
6556544 Lee Apr 2003 B1
6608816 Nichols Aug 2003 B1
6611522 Zheng et al. Aug 2003 B1
6614781 Elliott et al. Sep 2003 B1
6625650 Stelliga Sep 2003 B2
6629128 Glass Sep 2003 B1
6633571 Sakamoto et al. Oct 2003 B1
6636516 Yamano Oct 2003 B1
6639897 Shiomoto et al. Oct 2003 B1
6640248 Jorgensen Oct 2003 B1
6654787 Aronson et al. Nov 2003 B1
6658013 de Boer et al. Dec 2003 B1
6680922 Jorgensen Jan 2004 B1
6694437 Pao et al. Feb 2004 B1
6697359 George Feb 2004 B1
6697360 Gai et al. Feb 2004 B1
6701449 Davis et al. Mar 2004 B1
6732314 Borella et al. May 2004 B1
6738371 Ayres May 2004 B1
6763236 Siren Jul 2004 B2
6775267 Kung Aug 2004 B1
6775284 Calvignac et al. Aug 2004 B1
6785224 Uematsu et al. Aug 2004 B2
6785691 Hewett et al. Aug 2004 B1
6802068 Guruprasad Oct 2004 B1
6807181 Weschler Oct 2004 B1
6816462 Booth et al. Nov 2004 B1
6820210 Daruwalla et al. Nov 2004 B1
6822958 Branth et al. Nov 2004 B1
6839348 Tang et al. Jan 2005 B2
6850531 Rao et al. Feb 2005 B1
6868082 Allen et al. Mar 2005 B1
6883170 Garcia Apr 2005 B1
6914907 Bhardwaj et al. Jul 2005 B1
6920146 Johnson et al. Jul 2005 B1
6920580 Cramer et al. Jul 2005 B1
6938097 Vincent Aug 2005 B1
6944128 Nichols Sep 2005 B2
6944168 Paatela et al. Sep 2005 B2
6954429 Horton et al. Oct 2005 B2
6980526 Jang et al. Dec 2005 B2
6985438 Tschudin Jan 2006 B1
6985956 Luke et al. Jan 2006 B2
7020143 Zdan Mar 2006 B2
7042848 Santiago et al. May 2006 B2
7046665 Walrand et al. May 2006 B1
7054311 Norman et al. May 2006 B2
7089293 Grosner et al. Aug 2006 B2
7145898 Elliott Dec 2006 B1
7149216 Cheriton Dec 2006 B1
7161904 Hussain et al. Jan 2007 B2
7174372 Sarkar Feb 2007 B1
7177311 Hussain et al. Feb 2007 B1
7187676 DiMambro Mar 2007 B2
7221945 Milford et al. May 2007 B2
7243371 Kasper et al. Jul 2007 B1
7266120 Cheng et al. Sep 2007 B2
7272643 Sarkar Sep 2007 B1
7278055 Talaugon et al. Oct 2007 B2
7293355 Lauffer et al. Nov 2007 B2
7313614 Considine et al. Dec 2007 B2
7316029 Parker et al. Jan 2008 B1
7324489 Iyer Jan 2008 B1
7340535 Alam Mar 2008 B1
7376125 Hussain et al. May 2008 B1
7499398 Damon et al. Mar 2009 B2
7587633 Talaugon et al. Sep 2009 B2
7639632 Sarkar Dec 2009 B2
7668087 Hussain et al. Feb 2010 B2
7720053 Hussain May 2010 B2
7761743 Talaugon Jul 2010 B2
7830787 Wijnands et al. Nov 2010 B1
7843813 Balay Nov 2010 B2
7869361 Balay Jan 2011 B2
7876683 Balay Jan 2011 B2
7881244 Balay Feb 2011 B2
7885207 Sarkar Feb 2011 B2
7912936 Rajagopalan Mar 2011 B2
7925920 Talaugon Apr 2011 B2
7933269 Cheng et al. Apr 2011 B2
7957407 Desai Jun 2011 B2
7961615 Balay Jun 2011 B2
8068503 Desai et al. Nov 2011 B2
8085776 Balay et al. Dec 2011 B2
8107376 Balay et al. Jan 2012 B2
8208409 Millet Jun 2012 B2
8213347 Balay et al. Jul 2012 B2
8306040 Desai et al. Nov 2012 B2
8320279 Sarkar et al. Nov 2012 B2
8369258 Balay et al. Feb 2013 B2
8374088 Balay et al. Feb 2013 B2
8503463 Desai et al. Aug 2013 B2
20010024425 Tsunoda et al. Sep 2001 A1
20010033580 Dorsey et al. Oct 2001 A1
20010043571 Jang et al. Nov 2001 A1
20010048661 Clear et al. Dec 2001 A1
20010052013 Munguia et al. Dec 2001 A1
20020023171 Garrett et al. Feb 2002 A1
20020049902 Rhodes Apr 2002 A1
20020062344 Ylonen et al. May 2002 A1
20020066034 Schlossberg et al. May 2002 A1
20020075901 Perlmutter et al. Jun 2002 A1
20020097730 Langille et al. Jul 2002 A1
20020097872 Maliszewski Jul 2002 A1
20020099849 Alfieri et al. Jul 2002 A1
20020150093 Ott et al. Oct 2002 A1
20020152373 Sun et al. Oct 2002 A1
20020186661 Santiago et al. Dec 2002 A1
20020191604 Mitchell et al. Dec 2002 A1
20030026262 Jarl Feb 2003 A1
20030033401 Poisson et al. Feb 2003 A1
20030108041 Aysan Jun 2003 A1
20030112799 Chandra et al. Jun 2003 A1
20030115308 Best et al. Jun 2003 A1
20030117954 De Neve et al. Jun 2003 A1
20030131228 Twomey Jul 2003 A1
20030169747 Wang Sep 2003 A1
20030200295 Roberts et al. Oct 2003 A1
20030212735 Hicok et al. Nov 2003 A1
20030223406 Balay Dec 2003 A1
20040037279 Zelig et al. Feb 2004 A1
20040042416 Ngo et al. Mar 2004 A1
20040095934 Cheng et al. May 2004 A1
20040141521 George Jul 2004 A1
20040199569 Kalkunte et al. Oct 2004 A1
20050002417 Kelly et al. Jan 2005 A1
20050055306 Miller et al. Mar 2005 A1
20050108340 Gleeson et al. May 2005 A1
20050113114 Asthana May 2005 A1
20050147095 Guerrero et al. Jul 2005 A1
20050163115 Dontu et al. Jul 2005 A1
20060087969 Santiago et al. Apr 2006 A1
20070109968 Hussain et al. May 2007 A1
20070237172 Zelig et al. Oct 2007 A1
20070291755 Cheng et al. Dec 2007 A1
20090131020 van de Groenendaal May 2009 A1
20090225759 Hussain et al. Sep 2009 A1
20090279567 Ta et al. Nov 2009 A1
20100142527 Balay et al. Jun 2010 A1
20100146098 Ishizaki et al. Jun 2010 A1
20100146627 Lin Jun 2010 A1
20100189016 Millet Jul 2010 A1
20100220732 Hussain et al. Sep 2010 A1
20100220741 Desai et al. Sep 2010 A1
20110122872 Balay May 2011 A1
20110128891 Sarkar Jun 2011 A1
20110200044 Cheng et al. Aug 2011 A1
20110235548 Balay Sep 2011 A1
20110235649 Desai Sep 2011 A1
20110249812 Barnhouse et al. Oct 2011 A1
20120057460 Hussain Mar 2012 A1
20120069850 Desai Mar 2012 A1
20120072568 Matthews Mar 2012 A1
20120099596 Balay Apr 2012 A1
20120131215 Balay et al. May 2012 A1
20120170578 Anumala et al. Jul 2012 A1
20120324216 Sun Dec 2012 A1
20120324532 Matthews Dec 2012 A1
20130022049 Millet Jan 2013 A1
20130083697 Sarkar et al. Apr 2013 A1
20130156033 Balay Jun 2013 A1
20130170346 Balay Jul 2013 A1
Foreign Referenced Citations (5)
Number Date Country
0051290 Aug 2000 WO
0076152 Dec 2000 WO
0163809 Aug 2001 WO
0223855 Mar 2002 WO
03010323 Dec 2003 WO
Non-Patent Literature Citations (165)
Entry
Non-Final Rejection for U.S. Appl. No. 12/906,999 mailed Nov. 21, 2012.
Non-Final Rejection for U.S. Appl. No. 13/050,387 mailed Nov. 13, 2012.
Notice of Allowance dated Dec. 1, 2004 for U.S. Appl. No. 09/661,636.
Amendment and Response filed on Sep. 2, 2004 for U.S. Appl. No. 09/661,636.
Office Action dated May 28, 2004 for U.S. Appl. No. 09/661,636.
Amendment and Response filed on Mar. 22, 2004 for U.S. Appl. No. 09/661,636.
Office Action dated Nov. 18, 2003 U.S. Appl. No. 09/661,636.
Amendment and Response filed on Apr. 29, 2007 for U.S. Appl. No. 09/661,130.
Office Action dated Dec. 28, 2006 for U.S. Appl. No. 09/661,130.
Amendment and Response filed on Mar. 6, 2006 for U.S. Appl. No. 09/661,130.
Office Action dated Oct. 18, 2004 for U.S. Appl. No. 09/661,130.
Amendment and Response filed on Apr. 9, 2004 for U.S. Appl. No. 09/661,130.
Office Action dated Nov. 5, 2003 for U.S. Appl. No. 09/661,130.
Notice of Allowance dated Jun. 14, 2007 for U.S. Appl. No. 10/067,106.
Amendment and Response filed on Mar. 10, 2007 for U.S. Appl. No. 10/067,106.
Office Action dated Nov. 16, 2006 for U.S. Appl. No. 10/067,106.
Amendment and Response filed on Aug. 28, 2006 for U.S. Appl. No. 10/067,106.
Office Action dated Mar. 27, 2006 for U.S. Appl. No. 10/067,106.
Amendment and Response filed on Nov. 6, 2006 for U.S. Appl. No. 09/663,483.
Office Action dated Jul. 6, 2006 for U.S. Appl. No. 09/663,483.
Amendment and Response filed on Mar. 13, 2006 for U.S. Appl. No. 09/663,483.
Advisory Action dated Nov. 12, 2004 for U.S. Appl. No. 09/663,483.
Amendment and Response filed on Oct. 8, 2004 for U.S. Appl. No. 09/663,483.
Office Action dated Jun. 3, 2004 for U.S. Appl. No. 09/663,483.
Amendment and Response filed on Feb. 26, 2004 for U.S. Appl. No. 09/663,483.
Office Action dated Aug. 21, 2003 for U.S. Appl. No. 09/663,483.
Amendment and Response filed on Mar. 13, 2006 for U.S. Appl. No. 09/952,520.
Office Action dated Mar. 14, 2005 for U.S. Appl. No. 09/952,520.
Notice of Allowance dated Jul. 30, 2007 for U.S. Appl. No. 09/663,485.
Amendment and Response filed on Jun. 11, 2007 for U.S. Appl. No. 09/663,485.
Office Action dated Jan. 11, 2007 for U.S. Appl. No. 09/663,485.
Amendment and Response filed on Aug. 28, 2006 for U.S. Appl. No. 09/663,485.
Office Action dated Jul. 26, 2007 for U.S. Appl. No. 09/663,485.
Amendment and Response filed on Feb. 2, 2006 for U.S. Appl. No. 09/663,485.
Office Action dated Dec. 21, 2004 for U.S. Appl. No. 09/663,485.
Amendment and Response filed on Nov. 16, 2004 for U.S. Appl. No. 09/663,485.
Office Action dated May 14, 2004 for U.S. Appl. No. 09/663,485.
Amendment and Response filed on Mar. 15, 2004 for U.S. Appl. No. 09/663,485.
Office Action dated Sep. 8, 2003 for U.S. Appl. No. 09/663,485.
Office Action dated Aug. 8, 2007 for U.S. Appl. No. 09/663,457.
Amendment and Response filed on Jul. 11, 2007 for U.S. Appl. No. 09/663,457.
Office Action dated May 17, 2007 for U.S. Appl. No. 09/663,457.
Amendment and Response filed on Oct. 2, 2006 for U.S. Appl. No. 09/663,457.
Office Action dated Apr. 22, 2005 for U.S. Appl. No. 09/663,457.
Office Action dated Aug. 27, 2004 for U.S. Appl. No. 09/663,457.
Amendment and Response filed on Jun. 21, 2004 for U.S. Appl. No. 09/663,457.
Office Action dated Dec. 11, 2003 for U.S. Appl. No. 09/663,457.
Notice of Allowance dated Nov. 21, 2006 for U.S. Appl. No. 09/663,484.
Amendment and Response filed on Aug. 24, 2006 for U.S. Appl. No. 09/663,484.
Office Action dated Feb. 24, 2006 for U.S. Appl. No. 09/663,484.
Amendment and Response filed on Feb. 7, 2006 for U.S. Appl. No. 09/663,484.
Office Action dated Apr. 6, 2005 for U.S. Appl. No. 09/663,484.
Non-Final Rejection for U.S. Appl. No. 13/295,077 mailed May 6, 2013.
Final Rejection for U.S. Appl. No. 12/906,999 mailed May 9, 2013.
Non-Final Rejection for U.S. Appl. No. 13/600,179 mailed Jul. 19, 2013.
Notice of Allowance for U.S. Appl. No. 13/295,077 mailed Jul. 15, 2013.
Notice of Allowance for U.S. Appl. No. 13/015,880 mailed Dec. 5, 2012.
Non-Final Rejection for U.S. Appl. No. 12/328,858, mailed Dec. 6, 2011.
Chan, Mun C. et al., “An architecture for broadband virtual networks under customer control.” IEEE Network Operations and Management Symposium. Apr. 1996. pp. 135-144.
Chan, Mun C. et al., “Customer Management and Control of Broadband VPN Services.” Proc. Fifth IFIP/IEEE International Symposium on Integrated Network Management. May 1997. pp. 301-314.
Gasparro, D.M., “Next-Gen VPNs: The Design Challenge.” Data Communications. Sep. 1999. pp. 83-95.
Hanaki, M. et al., “LAN/WAN management integration using ATM CNM interface.” IEEE Network Operations Management Symposium, vol. 1. Apr. 1996. pp. 12-21.
Kapustka, S., “CoSine Communications Move VPNs ‘Into the Cloud’ with the Leading Managed IP Service Delivery Platform.” http://www.cosinecom.com/news/pr_5_24.html. Press Release, CoSine Communications. 1995. p. 5.
Keshav, S., “An Engineering Approach to Computer Networking: ATM networks, the internet, and the telephone network.” Reading Mass: Addison-Wesley, Addison-Wesley Professional Computing Series. 1992. pp. 318-324.
Kim, E.C. et al., “The Multi-Layer VPN Management Architecture.” Proc. Sixth IFIP/IEEE International Symposium on Integrated Network Management. May 1999. pp. 187-200.
Rao, J.R., “Intranets and VPNs: Strategic Approach.” 1998 Annual Review of Communications. 1998. pp. 669-674.
Tanenbaum, A.S., “Computer Networks.” Upper Saddle River, N.J.: Prentice Hall PTR, 3rd Edition. 1996. pp. 348-364.
European Search Report for PCT/US03/37009 (Jul. 4, 2004) 2 pgs.
International Search Report for PCT/US03/17674. 6 pgs.
Notice of Allowance for U.S. Appl. No. 13/359,960 mailed Jan. 9, 2013.
Non-Final Rejection for U.S. Appl. No. 12/762,362 mailed Feb. 2, 2012.
Tsiang et al., “RFC 2892, The Cisco SRP MAC Layer Protocol.” Aug. 2000. pp. 1-52.
Zhang et al. “Token Ring Arbitration Circuits for Dynamic Priority Algorithms.” IEEE 1995.
Non-Final Rejection for U.S. Appl. No. 13/154,330 mailed Mar. 27, 2013.
Amendment and Response filed on Nov. 12, 2004 for U.S. Appl. No. 09/663,484.
Office Action dated May 6, 2004 for U.S. Appl. No. 09/663,484.
Amendment and Response filed on Feb. 18, 2004 for U.S. Appl. No. 09/663,484.
Office Action dated Aug. 12, 2003 for U.S. Appl. No. 09/663,484.
Notice of Allowance dated Jan. 4, 2007 for U.S. Appl. No. 09/894,471.
Amendment and Response filed on Nov. 2, 2006 for U.S. Appl. No. 09/894,471.
Office Action dated Oct. 26, 2006 for U.S. Appl. No. 09/894,471.
Amendment and Response filed on Mar. 10, 2006 for U.S. Appl. No. 09/894,471.
Office Action dated Dec. 14, 2004 for U.S. Appl. No. 09/894,471.
Notice of Allowance dated Nov. 7, 2006 for U.S. Appl. No. 09/771,346.
Amendment and Response filed on Oct. 18, 2006 for U.S. Appl. No. 09/771,346.
Office Action dated Jul. 18, 2006 for U.S. Appl. No. 09/771,346.
Amendment and Response filed on Mar. 13, 2006 for U.S. Appl. No. 09/771,346.
Office Action dated Jan. 25, 2005 for U.S. Appl. No. 09/771,346.
Amendment and Response filed on Oct. 14, 2004 for U.S. Appl. No. 09/771,346.
Office Action dated Mar. 26, 2004 for U.S. Appl. No. 09/771,346.
Notice of Allowance dated Nov. 19, 2006 for U.S. Appl. No. 10/163,162.
Amendment and Response filed on Aug. 5, 2006 for U.S. Appl. No. 10/163,162.
Office Action dated May 5, 2006 for U.S. Appl. No. 10/163,162.
Notice of Allowance dated Jan. 4, 2007 for U.S. Appl. No. 10/163,261.
Amendment and Response filed on Nov. 9, 2006 for U.S. Appl. No. 10/163,261.
Office Action dated Nov. 3, 2006 for U.S. Appl. No. 10/163,261.
Amendment and Response filed on Aug. 22, 2006 for U.S. Appl. No. 10/163,261.
Office Action dated May 22, 2006 for U.S. Appl. No. 10/163,261.
Notice of Allowance dated Jul. 27, 2006 for U.S. Appl. No. 10/163,073.
Office Action dated May 30, 2007 for U.S. Appl. No. 10/273,669.
Amendment and Response filed on Mar. 9, 2007 for U.S. Appl. No. 10/273,669.
Office Action dated Sep. 21, 2006 for U.S. Appl. No. 10/273,669.
Amendment and Response filed on Jun. 21, 2006 for U.S. Appl. No. 10/273,669.
Office Action dated Feb. 21, 2006 for U.S. Appl. No. 10/273,669.
Notice of Allowance dated Aug. 14, 2007 for U.S. Appl. No. 10/163,071.
Amendment and Response filed on Jul. 17, 2007 for U.S. Appl. No. 10/163,071.
Office Action dated Jul. 3, 2007 for U.S. Appl. No. 10/163,071.
Amendment and Response filed on May 6, 2007 for U.S. Appl. No. 10/163,071.
Office Action dated Nov. 7, 2006 for U.S. Appl. No. 10/163,071.
Amendment and Response filed on Sep. 1, 2006 for U.S. Appl. No. 10/163,071.
Office Action dated Jun. 1, 2006 for U.S. Appl. No. 10/163,071.
Amendment and Response filed on Mar. 6, 2006 for U.S. Appl. No. 10/163,071.
Office Action dated Dec. 2, 2005 for U.S. Appl. No. 10/163,071.
Notice of Allowance dated Nov. 29, 2006 for U.S. Appl. No. 10/163,079.
Amendment and Response filed on Nov. 1, 2006 for U.S. Appl. No. 10/163,079.
Office Action dated Oct. 27, 2006 for U.S. Appl. No. 10/163,079.
Amendment and Response filed on Aug. 17, 2006 for U.S. Appl. No. 10/163,079.
Office Action dated May 17, 2006 for U.S. Appl. No. 10/163,079.
Notice of Allowance dated Jul. 17, 2007 for U.S. Appl. No. 10/298,815.
Amendment and Response filed on Mar. 9, 2007 for U.S. Appl. No. 10/298,815.
Office Action dated Feb. 23, 2007 for U.S. Appl. No. 10/298,815.
Notice of Allowance dated Jun. 27, 2005 for U.S. Appl. No. 10/232,979.
Notice of Allowance dated Jul. 5, 2007 for U.S. Appl. No. 11/466,098.
Amendment and Response filed on Aug. 10, 2007 for U.S. Appl. No. 10/163,260.
Office Action dated Aug. 1, 2007 for U.S. Appl. No. 10/163,260.
Amendment and Response filed on May 23, 2007 for U.S. Appl. No. 10/163,260.
Office Action dated Apr. 13, 2007 for U.S. Appl. No. 10/163,260.
Amendment and Response filed on Mar. 13, 2007 for U.S. Appl. No. 10/163,260.
Office Action dated Dec. 21, 2006 for U.S. Appl. No. 10/163,260.
Amendment and Response filed on Sep. 18, 2006 for U.S. Appl. No. 10/163,260.
Office Action dated May 18, 2006 for U.S. Appl. No. 10/163,260.
Office Action dated Aug. 22, 2007 for U.S. Appl. No. 10/650,298.
Response to Restriction Requirement Apr. 26, 2004 for U.S. Appl. No. 09/663,483.
Restriction Requirement dated Mar. 22, 2004 for U.S. Appl. No. 09/663,483.
Office Action dated Sep. 11, 2007 for U.S. Appl. No. 09/661,637.
Amendment and Response filed on Jun. 20, 2007 for U.S. Appl. No. 09/661,637.
Office Action dated Feb. 8, 2007 for U.S. Appl. No. 09/661,637.
Amendment and Response filed on Mar. 6, 2006 for U.S. Appl. No. 09/661,637.
Office Action dated Dec. 23, 2004 for U.S. Appl. No. 09/661,637.
Amendment and Response filed on Aug. 5, 2004 for U.S. Appl. No. 09/661,637.
Office Action dated May 5, 2004 for U.S. Appl. No. 09/661,637.
Supplemental Amendment and Response filed on Sep. 17, 2007 for U.S. Appl. No. 09/663,457.
Non-Final Rejection for U.S. Appl. No. 13/359,960, mailed Apr. 26, 2012.
Non-Final Rejection for U.S. Appl. No. 12/477,124 mailed May 23, 2011.
Notice of Allowance for U.S. Appl. No. 12/328,858 mailed May 25, 2012.
Notice of Allowance for U.S. Appl. No. 12/762,362 mailed May 22, 2012.
Non-Final Rejection for U.S. Appl. No. 13/338,213 mailed Jun. 28, 2013.
Notice of Allowance for U.S. Appl. No. 13/154,330 mailed Jun. 26, 2013.
Non-Final Rejection for U.S. Appl. No. 13/585,727 mailed Jun. 17, 2013.
Non-Final Rejection for U.S. Appl. No. 12/637,140, mailed Sep. 17, 2010.
Non-Final Rejection for U.S. Appl. No. 12/537,898, mailed Sep. 9, 2010.
Final Rejection for U.S. Appl. No. 12/202,223, mailed Sep. 16, 2010.
Non-Final Rejection for U.S. Appl. No. 12/202,233 mailed Jun. 21, 2010.
Non-Final Rejection for U.S. Appl. No. 11/460,977, mailed Jul. 2, 2010.
Non-Final Rejection for U.S. Appl. No. 11/537,609 mailed Jul. 11, 2011.
Notice of Allowance for U.S. Appl. No. 13/305,743 mailed Jul. 25, 2012.
Notice of Allowance for U.S. Appl. No. 11/530,901 mailed Jul. 20, 2012.
Notice of Allowance for U.S. Appl. No. 09/952,520 mailed Jul. 6, 2012.
Notice of Allowance for U.S. Appl. No. 12/477,124 mailed Sep. 19, 2012.
Final Rejection for U.S. Appl. No. 12/477,124, mailed Nov. 4, 2011.
Non-Final Rejection for U.S. Appl. No. 11/530,901, mailed Nov. 9, 2011.
Notice of Allowance for U.S. Appl. No. 13/022,696 mailed Oct. 12, 2012.
Notice of Allowance for U.S. Appl. No. 11/849,352 mailed Jun. 16, 2010.
Non-Final Rejection for U.S. Appl. No. 11/849,352 mailed Jul. 17, 2009.
Notice of Allowance for U.S. Appl. No. 13/600,179 mailed Sep. 24, 2013.
Related Publications (1)
Number Date Country
20110200044 A1 Aug 2011 US
Continuations (2)
Number Date Country
Parent 11849352 Sep 2007 US
Child 13092962 US
Parent 10298815 Nov 2002 US
Child 11849352 US