The present invention generally relates to data communication networks. The invention relates more specifically to a method and apparatus for providing multicast messages across a data communication network.
The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In computer networks such as the Internet, packets of data are sent from a source to a destination via a network of elements including links (communication paths such as telephone or optical lines) and nodes (usually routers directing the packet along one or more of a plurality of links connected to it) according to one of various protocols, including the Internet Protocol (IP).
Each node on the network advertises, throughout the network, links to neighboring nodes and provides a cost associated with each link, which can be based on any appropriate metric such as link bandwidth or delay and is typically expressed as an integer value. A link may have an asymmetric cost, that is, the cost in the direction AB along a link may be different from the cost in the direction BA. Based on the advertised information, each node constructs a link state database (LSDB), which is a map of the entire network topology, and from that generally constructs a single optimum route to each available node based on an appropriate algorithm such as, for example, a shortest path first (SPF) algorithm. As a result a “spanning tree” is constructed, rooted at the node and showing an optimum path, including intermediate nodes, to each available destination node. Because each node has a common LSDB (other than when advertised changes are propagating around the network), any node is able to compute the spanning tree rooted at any other node. The results of the SPF are stored in a routing information base (RIB), and based on these results the forwarding information base (FIB) or forwarding table is updated to control forwarding of packets appropriately. When there is a network change, information representing the change is flooded through the network, each node sending it to each adjacent node.
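By way of illustration only, the shortest path first computation over such an LSDB may be sketched as follows (a minimal model in Python; the node names and the dictionary layout are assumptions, not taken from any particular implementation):

```python
import heapq

def shortest_path_tree(lsdb, root):
    """Compute the spanning tree rooted at `root` from a link state database.

    `lsdb` maps each node to {neighbour: link_cost}; costs may be asymmetric,
    so lsdb["A"]["B"] need not equal lsdb["B"]["A"]. Returns a map of
    node -> (total cost, predecessor on the optimum path).
    """
    tree = {}
    frontier = [(0, root, None)]               # (cost so far, node, predecessor)
    while frontier:
        cost, node, pred = heapq.heappop(frontier)
        if node in tree:                       # already settled via a cheaper path
            continue
        tree[node] = (cost, pred)
        for neighbour, link_cost in lsdb.get(node, {}).items():
            if neighbour not in tree:
                heapq.heappush(frontier, (cost + link_cost, neighbour, node))
    return tree

# Every node holds the same LSDB, so any node can compute the spanning tree
# rooted at any other node simply by varying `root`.
lsdb = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
assert shortest_path_tree(lsdb, "A")["C"] == (2, "B")   # A->B->C beats A->C
```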
IP Multicast is a bandwidth-conserving technology that reduces traffic by simultaneously delivering a single stream of information from a source to a plurality of receiving devices, for instance to thousands of corporate recipients and homes. Examples of applications that take advantage of multicast technologies include video conferencing, corporate communications, distance learning, and distribution of software, stock quotes and news. IP multicast delivers source traffic to multiple receivers without burdening the source or the receivers while using a minimum of network bandwidth. Multicast packets are replicated in the network at the points where paths diverge by routers enabled with Protocol Independent Multicast (PIM) and other supporting multicast protocols, resulting in efficient delivery of data to multiple receivers. The routers use PIM to dynamically create a multicast distribution tree.
This can be understood by referring to
Each VPN is associated with one or more VPN routing/forwarding instances (VRFs). A VRF defines the VPN membership of a customer site attached to a PE router. A VRF consists of an IP routing table, a derived forwarding table, a set of interfaces that use the forwarding table, and a set of rules and routing protocol parameters that control the information that is included in the routing table.
A service provider edge (PE) router 16 can learn an IP prefix from a customer edge (CE) router 14 by static configuration, through a BGP session with the CE router, or through a Routing Information Protocol (RIP) exchange with the CE router 14.
A Route Distinguisher (RD) is an 8-byte value that is concatenated with an IPv4 prefix to create a unique VPN-IPv4 prefix. The IP prefix is a member of the IPv4 address family. After it learns the IP prefix, the PE router converts it into a VPN-IPv4 prefix by combining it with the 8-byte RD; the generated prefix is a member of the VPN-IPv4 address family. It serves to uniquely identify the customer address, even if the customer site is using globally non-unique (unregistered private) IP addresses. The RD used to generate the VPN-IPv4 prefix is specified by a configuration command associated with the VRF on the PE router.
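By way of illustration, the following sketch shows how the 8-byte RD and an IPv4 address combine into a 12-byte VPN-IPv4 address; the helper name and the type-0 (AS number:assigned number) RD layout are assumptions for the example:

```python
import ipaddress
import struct

def vpn_ipv4(rd: bytes, prefix: str) -> bytes:
    """Concatenate an 8-byte route distinguisher with an IPv4 address to
    form the 12-byte VPN-IPv4 address described above."""
    assert len(rd) == 8, "a route distinguisher is exactly 8 bytes"
    return rd + ipaddress.ip_network(prefix).network_address.packed

# Type-0 RD: 2-byte type, 2-byte AS number, 4-byte assigned number.
rd_site_a = struct.pack("!HHI", 0, 65000, 1)
rd_site_b = struct.pack("!HHI", 0, 65000, 2)

# Two sites using the same unregistered private prefix remain distinguishable:
assert vpn_ipv4(rd_site_a, "10.1.1.0/24") != vpn_ipv4(rd_site_b, "10.1.1.0/24")
```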
Border Gateway Protocol (BGP) distributes reachability information for prefixes for each VPN. BGP communication takes place at two levels: within IP domains, known as autonomous systems (interior BGP or IBGP) and between autonomous systems (external BGP or EBGP). PE-PE or PE-RR (route reflector) sessions are IBGP sessions, and PE-CE sessions are EBGP sessions.
BGP propagates reachability information for VPN-IPv4 prefixes among PE routers 16 by means of BGP multiprotocol extensions (for example see RFC 2283, Multiprotocol Extensions for BGP-4) which define support for address families other than IPv4. It does this in a way that ensures the routes for a given VPN are learned only by other members of that VPN, enabling members of the VPN to communicate with each other.
Based on routing information stored in the VRF IP routing table and forwarding table, packets are forwarded to their destination using multi-protocol label switching (MPLS). A PE router binds a label to each customer prefix learned from the CE router 14 and includes the label in the network reachability information for the prefix that it advertises to other PE routers. When a PE router 16 forwards a packet received from a CE router 14 across the provider network 13, it labels the packet with the label learned from the destination PE router. When the destination PE router 16 receives the labeled packet, it pops the label and uses it to direct the packet to the correct CE router. Label forwarding across the provider backbone is based on either dynamic label switching or traffic engineered paths. A customer packet carries two levels of labels when traversing the backbone: a top label, which directs the packet to the correct PE router, and a second label, which indicates how that PE router should forward the packet to the CE router.
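A rough sketch of the two-level label operation follows; the packet structure and function names are hypothetical and merely model the behaviour described above:

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    labels: list[int] = field(default_factory=list)   # labels[-1] is the top of stack

def ingress_pe(packet: Packet, vpn_label: int, backbone_label: int) -> None:
    """Push the two-level stack: an inner label identifying the customer
    prefix at the egress PE, then a top label directing the packet there."""
    packet.labels.append(vpn_label)
    packet.labels.append(backbone_label)

def egress_pe(packet: Packet, label_to_ce: dict[int, str]) -> str:
    """Pop the remaining (inner) label and use it to pick the CE router;
    the top label is assumed already removed by the backbone."""
    return label_to_ce[packet.labels.pop()]

pkt = Packet(payload=b"customer data")
ingress_pe(pkt, vpn_label=30, backbone_label=17)
pkt.labels.pop()                       # backbone pops the top label before the egress PE
assert egress_pe(pkt, {30: "CE 14"}) == "CE 14"
```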
Multicast Virtual Private Networks (MVPNs) have been devised to provide a user with the ability to send multicast packets over VPNs. To achieve this, MVPN uses a multicast Generic Routing Encapsulation (GRE) tunnel to forward packets across a provider network. Customers can use the MVPN service from a provider to connect office locations as if they were virtually one network. The GRE tunnel, also known as a Multicast Distribution Tunnel (MDT), is built across the provider network and spans a single BGP Autonomous System (AS).
However, it would be beneficial for the MDT to span multiple ASs, since many customers have an internal network that is split into multiple ASs or have VPN sites that are connected to multiple service providers. This means that service providers, who may be competitors, would need to provide their internal IP addresses to each other to make the MDT reachable. The MDT is built between two Provider Edge (PE) routers, and other routers in between the PE routers need a way to select the reverse path forwarding (RPF) interface towards the PE of the other AS or VPN. However, service providers are unwilling to make their PE routers reachable via unicast for security reasons and therefore do not want to redistribute the PE information into other (competitor) domains.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
A method and apparatus for providing multicast messages across a data communication network is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Embodiments are described herein according to the following outline:
The needs identified in the foregoing Background, and other needs and objects that will become apparent from the following description, are achieved in the present invention, which comprises, in one aspect, a method for providing multicast messages across a data communication network, the method comprising: receiving a multicast message; adding to the multicast message a vector stack including at least one address of a router to which the multicast message is to be sent; and forwarding the multicast message and the vector stack. There is also provided a method of providing multicast messages across a data communication network, the method comprising: receiving at a receiving node of the network a multicast message having a vector stack including at least one address of a router to which the multicast message is to be sent. The first address of the vector stack is read and, when it corresponds to the address of the receiving node, the first address is removed from the vector stack; then, when the vector stack includes a further vector, the next address to which the multicast message is to be sent is read and the multicast message is forwarded in accordance with that address. This is repeated as necessary until the multicast message is received by the node having the final address in the vector stack, at which point the multicast message is forwarded to the address indicated in the original multicast message.
In other aspects, the invention encompasses a computer apparatus and a computer-readable medium configured to carry out the foregoing steps.
2.0 Structural and Functional Overview
Each Autonomous System 13 comprises a Provider Edge (PE) router 16 that interfaces to a Customer Edge router 14. The PE router is in turn attached to one or more Provider (P) routers 18.
Each Autonomous System 13 also comprises an Autonomous System Boundary Router (ASBR) 22. An ASBR is located on the border of an Autonomous System and connects that Autonomous System to a backbone network. Such routers are considered members of both the backbone and the attached Autonomous System, and they therefore maintain routing tables describing both the backbone topology and the topology of the associated Autonomous System. PIM uses the ASBR to discover and announce RP-set information for each group prefix to all the routers in a PIM domain.
Thus, in the arrangement shown in
To enable multicast, end nodes (for instance CE devices 14) inform the adjacent PE router 16 of the network layer multicast addresses they wish to receive. This may be done using the Internet Group Management Protocol (IGMP). Routers then use a technique such as Protocol Independent Multicast (PIM) to build a tree for the route. The PE routers 16 typically use a reverse path forwarding (RPF) technique, which is an optimized form of flooding. In reverse path forwarding, a node accepts a packet from source S via interface N only if N is the interface the node would itself use to forward traffic towards S. This reduces the overhead of flooding considerably: because a router accepts the packet on only one interface, it floods it only once. Thus, in the example shown in
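A minimal sketch of the reverse path forwarding check follows (the table layout and interface names are assumptions for the example):

```python
def rpf_check(routing_table: dict[str, str], source: str, arrival_interface: str) -> bool:
    """Accept a packet from `source` arriving on `arrival_interface` only if
    that is the interface this node would itself use to forward towards
    `source` (`routing_table` maps destination -> outgoing interface)."""
    return routing_table.get(source) == arrival_interface

# A packet from S is accepted (and flooded onward) on exactly one interface,
# so each router forwards any given packet only once:
table = {"S": "if0"}
assert rpf_check(table, "S", "if0")       # arrived on the path back to S: accept
assert not rpf_check(table, "S", "if1")   # arrived elsewhere: drop the duplicate
```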
One way of implementing a Multicast system is to use a tree building protocol, for instance Protocol Independent Multicast (PIM).
To allow MVPNs to span multiple ASs, the customer VPNv4 routes are advertised to each of the PE routers that has information about the VPN. Such routes are customer routes and do not belong to the provider; they are referred to herein as VPNv4 routes.
The routes may be advertised using BGP and follow the complete path from one PE to the other. The BGP VPNv4 routes may be advertised with a Next-Hop (NH) attribute; this NH indicates the router via which the route is reachable. These NHs are global routes belonging to the provider.
When a user wishes to join a multicast group, a device associated with the user obtains the source and group address. This may be achieved in many ways. One way is for a node to direct the user to an intranet page which includes the source and group address of the multicast group of interest. This information is then input to the user device. When a host joins a multicast group, the directly connected PE router sends a PIM join message toward the rendezvous point (RP). The RP keeps track of multicast groups. Hosts that send multicast packets are registered with the RP by the first hop router of that host. The RP then sends join messages toward the source. At this point, packets are forwarded on a shared distribution tree. If the multicast traffic from a specific source is sufficient, the first hop router of the host may send join messages toward the source to build a source-based distribution tree.
Thus when a host (attached to a CE device 14) wishes to join a multicast group, it sends a message which includes the multicast group and a source address (for instance obtained as described). This source is used by the receiving PE router to create a PIM join which is then sent to an upstream RP router. For a single autonomous system as shown in
In this case, if PE router 16A is the Rendezvous Point for the multicast group, the RP is in a different AS from the sending router 14D. As addresses are not typically passed across AS boundaries, the PE device in one AS is unaware of the addresses of devices in another AS. The NHs of the VPNv4 routes are rewritten at the exit of the network (the ASBR routers) and internal addresses are not advertised into the other AS. As a result, the VPNv4 route becomes unreachable.
To overcome this issue, the receiving PE router adds a vector to the join message received from a CE device. This vector indicates the addresses of the ASBRs that the message needs to traverse to reach the intended source, here the RP. The vector is referred to herein as a Multicast Vector Stack (MVS); it contains information that intermediate routers can use to determine where to perform the RPF check so that a tree may be established. In the example given above, PE 16C will add an MVS to its PIM join, containing the address of ASBR 22B, then that of ASBR 22A. This information may be obtained by various means, for example via static configuration or via a dynamic protocol such as BGP: the PE routers obtain the addresses of the ASBRs in other ASs via BGP updates that carry additional information telling the PE routers which source to join. The PE router 16C looks up in its VRF the routing information for the source and adds this to the PIM join as a stack of vectors, each of which is read in turn. The P routers 18 then use this vector to route the message through the network.
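The construction of the MVS at the receiving PE router may be sketched as follows; the PimJoin structure and the lookup table standing in for the VRF are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PimJoin:
    source: str                                             # (S, G) source of the join
    group: str
    vector_stack: list[str] = field(default_factory=list)   # read front-to-back

def add_vector_stack(join: PimJoin, vrf_vectors: dict[str, list[str]]) -> PimJoin:
    """Attach the Multicast Vector Stack found in the VRF for the join's source.

    `vrf_vectors` stands in for the VRF lookup; its entries may have been
    configured statically or learned dynamically, for instance via BGP.
    """
    join.vector_stack = list(vrf_vectors.get(join.source, []))
    return join

# For the example above, PE 16C attaches the ASBRs to traverse, in order:
join = add_vector_stack(PimJoin("PE16A", "G1"), {"PE16A": ["ASBR22B", "ASBR22A"]})
assert join.vector_stack == ["ASBR22B", "ASBR22A"]
```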
When the PIM join with the MVS arrives at P router 18E, the P router 18E examines the first address in the MVS (that of ASBR 22B), performs an RPF check on that address and so forwards the PIM join towards that ASBR. When ASBR 22B receives the PIM join, it strips off this first vector (as it relates to the receiving router itself) and then examines the next address in the MVS. The ASBR then forwards the PIM join to this address (ASBR 22A). When the PIM join arrives at ASBR 22A, an RPF check is done on the real address in the PIM join, as there are no vectors left in the stack. ASBR 22A has all the information it needs to reach PE router 16A, so no additional vector is necessary and the tree has been established.
The technique of vector stacking may also be used within an AS. For instance, it may be used to traffic engineer multicast trees within an AS. Since the route taken through a network by a PIM join for a host determines the route taken by traffic for that host from the multicast source, the vector stack of the PIM join determines the route to be taken by subsequent traffic for the host. For instance, considering the network of routers shown in
Thus this solution allows an MDT to be built between PEs in different ASs without the need to make the PE routers globally addressable via unicast.
The MVS is either defined statically or learnt dynamically, for instance via BGP. The MVS may be added to PIM and included in the join message which is sent to build the multicast tree. The PIM join is targeted at the first vector in the stack until the router that owns this address is reached. That router is responsible for removing the vector from the list. If there is another vector in the list, the router targets that next vector; if the removed vector was the last vector in the list, the router uses the source information in the original multicast join message, as would happen in normal operation without an MVS.
With the MVS, a traffic-engineered path is built from the receiver to the source. It thus becomes possible to build multicast trees in the absence of unicast routing for a particular source (the MVPN inter-AS scenario) or to build multicast trees that diverge from the existing unicast routing (traffic engineering).
If only one vector is available, for example the BGP Next-Hop, or if not all vectors are known, this may not be enough to build an MDT tunnel across more than one AS in the absence of unicast routing. If that is the case, a Route Distinguisher (RD) of the tunnel source is included in the PIM join. This RD allows intermediate routers that have BGP tables (ASBR 22A and ASBR 22B in
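The RD fallback may be sketched as follows (a minimal model; the function and table names are assumptions): when the bare source address cannot be resolved, a router holding BGP tables resolves (RD, source) among its VPNv4 routes instead.

```python
def rpf_lookup(source: str, rd: bytes | None,
               unicast_routes: dict[str, str],
               bgp_vpnv4_routes: dict[tuple[bytes, str], str]) -> str | None:
    """Resolve the upstream (RPF) neighbour for a PIM join.

    If the bare source address is not reachable via unicast routing, a
    router holding BGP tables can still resolve the join when it carries
    the RD of the tunnel source, by looking up (RD, source) among its
    VPNv4 routes.
    """
    if source in unicast_routes:
        return unicast_routes[source]              # normal unicast resolution
    if rd is not None:
        return bgp_vpnv4_routes.get((rd, source))  # RD:source fallback
    return None

rd = bytes(8)                                      # placeholder 8-byte RD
vpnv4 = {(rd, "PE16A"): "uplink-to-ASBR22A"}
assert rpf_lookup("PE16A", rd, {}, vpnv4) == "uplink-to-ASBR22A"
```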
When the PIM join with the MVS arrives at router 18F, the router 18F examines the first address in the MVS (that of ASBR 22C), performs an RPF check on that address and so forwards the PIM join towards that ASBR. When ASBR 22C receives the PIM join, it removes the first vector from the stack (as that vector indicates ASBR 22C itself) and then examines the next address in the MVS; ASBR 22C then forwards the PIM join to the next address in the stack (ASBR 22B). When ASBR 22B receives the PIM join, it likewise removes the first vector from the stack and forwards the PIM join to the next address in the stack (ASBR 22A). When the PIM join arrives at ASBR 22A, ASBR 22A removes the first vector from the stack, as that vector indicates ASBR 22A. As there are now no vectors left in the stack, ASBR 22A carries out an RPF check on the real address (S, G) in the PIM join. ASBR 22A has all the information it needs to reach the source PE router 16A of the multicast group, so no additional vector stacking is necessary and the tree has been established.
3.0 Method of Providing Multicast Messages Across a Data Communication Network
Methods of providing multicast messages across a data communication network will now be described with reference to a network as shown in
The PE router generates the adapted multicast join message by looking up in its forwarding table any routing information relating to the group address of the multicast group. From this routing information the PE router can determine the route to be taken through the network, and it adds vectors to the MVS that indicate PEs or ASBRs in the network. The vectors in the MVS allow intervening P routers to route multicast join messages for other ASs using the vectors of the stack; otherwise the intervening P routers would be unable to route a multicast join message of the form (source-address, group-address), as routing information for the source address will not be known to the P routers if the source address is in a different AS.
If the address at the top of the Multicast Vector Stack is the same as the address of the receiving node (step 804) (for instance, say the receiving node is ASBR router 22B), then the receiving node removes the vector from the top of the MVS (step 808) and determines whether there is another address in the MVS (step 810). If there is, the receiving node forwards the multicast join message with the MVS on through the network (step 806) towards the address now given at the top of the vector stack, according to routing information at the node. These steps may be repeated many times as the multicast join message is routed through the network.
If, at step 810, the MVS does not include any other addresses, the receiving node looks at the address contained in the multicast join message and forwards the multicast join message on through the network (step 812) towards the address, according to routing information at the node.
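The decision logic of steps 804 through 812 may be sketched as follows (a minimal model; the Join structure and the forwarding callback are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Join:
    source: str                                             # address in the original join
    vector_stack: list[str] = field(default_factory=list)

def process_join(node: str, join: Join, forward) -> None:
    """One receiving node's handling of a multicast join carrying an MVS.

    `forward(target, join)` stands in for forwarding the join towards
    `target` according to routing information at this node.
    """
    if join.vector_stack and join.vector_stack[0] == node:
        join.vector_stack.pop(0)                 # step 808: remove own vector
    if join.vector_stack:                        # step 810: further address in MVS?
        forward(join.vector_stack[0], join)      # step 806: target next vector
    else:
        forward(join.source, join)               # step 812: use original address

# Trace of the three-ASBR example above: each ASBR strips its own vector and
# targets the next; ASBR 22A, finding the stack empty, targets the source.
hops = []
join = Join("PE16A", ["ASBR22C", "ASBR22B", "ASBR22A"])
for node in ["P18F", "ASBR22C", "ASBR22B", "ASBR22A"]:
    process_join(node, join, lambda target, _j: hops.append(target))
assert hops == ["ASBR22C", "ASBR22B", "ASBR22A", "PE16A"]
```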
Thus a multicast join message having a source address S in a first AS is routed through a network comprising a plurality of ASs. The source address may not be known to all routers in such a network, and attaching a vector stack to the multicast join message allows the message to be routed through ASs that do not know how to reach source S.
The embodiment illustrated in
As described above with reference to
If, at step 810, the MVS does not include any other addresses, the multicast join message is now of the form RD:source, group. The receiving node therefore (as described with reference to
Thus the network will consider the MVS first (to route through the P routers of the ASs that do not include the source of the multicast group) and then consider the RD.
4.0 Implementation Mechanisms—Hardware Overview
Computer system 1100 includes a bus 1102 or other communication mechanism for communicating information, and a processor 1104 coupled with bus 1102 for processing information. Computer system 1100 also includes a main memory 1106, such as a random access memory (RAM), flash memory, or other dynamic storage device, coupled to bus 1102 for storing information and instructions to be executed by processor 1104. Main memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104. Computer system 1100 further includes a read only memory (ROM) 1108 or other static storage device coupled to bus 1102 for storing static information and instructions for processor 1104. A storage device 1110, such as a magnetic disk, flash memory or optical disk, is provided and coupled to bus 1102 for storing information and instructions.
A communication interface 1118 may be coupled to bus 1102 for communicating information and command selections to processor 1104. Interface 1118 is a conventional serial interface such as an RS-232 or RS-422 interface. An external terminal 1112 or other computer system connects to the computer system 1100 and provides commands to it using the interface 1118. Firmware or software running in the computer system 1100 provides a terminal interface or character-based command interface so that external commands can be given to the computer system.
A switching system 1116 is coupled to bus 1102 and has an input interface and a respective output interface (commonly designated 1119) to external network elements. The external network elements may include a plurality of additional routers 1120 or a local network coupled to one or more hosts or routers, or a global network such as the Internet having one or more servers. The switching system 1116 switches information traffic arriving on the input interface to output interface 1119 according to pre-determined protocols and conventions that are well known. For example, switching system 1116, in cooperation with processor 1104, can determine a destination of a packet of data arriving on the input interface and send it to the correct destination using the output interface. The destinations may include a host, server, other end stations, or other routing and switching devices in a local network or Internet.
The computer system 1100 implements, as a router acting as a node, the above-described method of providing multicast messages. The implementation is provided by computer system 1100 in response to processor 1104 executing one or more sequences of one or more instructions contained in main memory 1106. Such instructions may be read into main memory 1106 from another computer-readable medium, such as storage device 1110. Execution of the sequences of instructions contained in main memory 1106 causes processor 1104 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 1106. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the method. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 1104 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1110. Volatile media includes dynamic memory, such as main memory 1106. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1102. Transmission media can also take the form of wireless links such as acoustic or electromagnetic waves, such as those generated during radio wave and infrared data communications.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 1104 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 1100 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to bus 1102 can receive the data carried in the infrared signal and place the data on bus 1102. Bus 1102 carries the data to main memory 1106, from which processor 1104 retrieves and executes the instructions. The instructions received by main memory 1106 may optionally be stored on storage device 1110 either before or after execution by processor 1104.
Interface 1119 also provides a two-way data communication coupling to a network link that is connected to a local network. For example, the interface 1119 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the interface 1119 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the interface 1119 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The network link typically provides data communication through one or more networks to other data devices. For example, the network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”. The local network and the Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through the interface 1119, which carry the digital data to and from computer system 1100, are exemplary forms of carrier waves transporting the information.
Computer system 1100 can send messages and receive data, including program code, through the network(s), network link and interface 1119. In the Internet example, a server might transmit a requested code for an application program through the Internet, ISP, local network and communication interface 1118. One such downloaded application provides for the method as described herein.
The received code may be executed by processor 1104 as it is received, and/or stored in storage device 1110, or other non-volatile storage for later execution. In this manner, computer system 1100 may obtain application code in the form of a carrier wave.
5.0 Extensions and Alternatives
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.