Data centers have become ever more common and complex. With this complexity comes a corresponding increase in the complexity of the networks that enable communication among the computers of a data center. In particular, there is a need to simplify and enable the configuration of network routing capacity for a large number of computers.
In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
It will be readily understood that the components of the invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. In selected embodiments, a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, and may also use descriptive or markup languages such as HTML, XML, JSON, and the like. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The systems and methods disclosed herein relate to logical routers for computer data routing systems. Specifically, the systems and methods described herein relate to a logical router “chassis” that is formed from a set of disaggregated network elements that are not necessarily in the same chassis or coupled to the same backplane of a chassis. The logical router may include a single logical point of management and control, with a distributed data plane. The logical router also includes a control plane offloaded to an external computing system in order to reduce network topology size. This also allows the control plane to be migrated to a different computer system to take advantage of newer generations of central processing units (CPUs). The disaggregated network elements comprising the logical router may be implemented using dedicated network components incorporated into the systems and methods disclosed herein. In the embodiments disclosed below, the network elements include silicon devices such as the JERICHO 2 and the RAMON developed by BROADCOM. These are exemplary only and other network elements providing the basic network routing function of these devices may also be used in a like manner.
In the logical router 100, each spine element 102 functions as a fabric element of a self-routing fabric. This self-routing fabric implements all associated routing protocols in silicon, including handling link failures without requiring any software assistance. Each fabric element in the logical router is interfaced with one or more leaf elements 104 via fabric interfaces, as shown in
The method 200 may include queuing 202, by the logical router 100, a data packet on an ingress associated with the logical router 100, such as on one of the leaf elements 104 on whose port the packet was received. Next, the ingress sends 204 a queue request to the logical router 100, such as to a second leaf element 104 corresponding to the destination address of the data packet. An egress (e.g., the second leaf element 104) associated with the logical router 100 responds with a credit grant. Finally, the ingress sends the packet to the egress, such as over the fabric implemented by the spine elements 102.
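The queue-request/credit-grant exchange can be pictured with a short sketch. The following Python fragment is purely illustrative: the class names, buffer sizes, and the one-credit-per-packet assumption are not taken from the JERICHO 2/RAMON devices or the figures; it only shows the ordering of steps 202 and 204 and the grant-gated send described above.

```python
# Minimal sketch (not device firmware) of the request/grant handshake.
# Buffer sizes and credit granularity are illustrative assumptions.
from collections import deque

class EgressLeaf:
    """Egress leaf that grants credits against its free buffer space."""
    def __init__(self, buffer_cells=4):
        self.free_cells = buffer_cells
        self.received = []

    def request_credit(self, cells_needed):
        # Grant only if the egress can absorb the packet right now.
        if self.free_cells >= cells_needed:
            self.free_cells -= cells_needed
            return True
        return False

    def accept(self, packet):
        self.received.append(packet)

class IngressLeaf:
    """Ingress leaf that queues packets until the egress grants credit."""
    def __init__(self):
        self.voq = deque()          # virtual output queue toward one egress

    def enqueue(self, packet):
        self.voq.append(packet)     # step 202: queue on ingress

    def service(self, egress, fabric):
        while self.voq:
            packet = self.voq[0]
            if not egress.request_credit(cells_needed=1):  # step 204: queue request
                break               # wait for a later credit grant
            self.voq.popleft()
            fabric(packet, egress)  # send over the spine fabric

# Usage: the "fabric" is just a callable standing in for the spine elements.
ingress, egress = IngressLeaf(), EgressLeaf(buffer_cells=2)
for i in range(3):
    ingress.enqueue(f"pkt{i}")
ingress.service(egress, fabric=lambda p, e: e.accept(p))
print(egress.received)   # ['pkt0', 'pkt1'] -- the third packet waits for credit
```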
Referring to
In the embodiment of
The system of
In the illustrated embodiment, there are 13 spine elements 102. The spine elements 102 in the logical router architecture of
The logical router 100 of
In some implementations, the logical router 100 may be managed by one or more control plane elements 300 that are implemented using computing systems (see, e.g., the example computing system of
Referring to
Referring to
The interface between each leaf element 104a, 104b and the control plane element 300 may be associated with an in-band network 500 and a host packet path. On the other hand, each interface with the management LAN switch 400 may be associated with an out-of-band (OOB) network 502. The management LAN switch 400 may communicate over the OOB network 502 with the elements 104a, 104b, 300 to perform functions such as bootstrap/image download, system state distribution, and gathering system statistics and similar data.
Referring to
In some implementations, the route processor software 600 implements the following functions or data structures:
In some realizations, the router state database 602 includes the following functions or data structures:
In some realizations, the linecard software 604 implements the following functions or data structures:
The logical router 100 together with the control elements 300 and management LAN switch 400 as described above with respect to
The element state database 800, which may be part of or equivalent to the router state database 602, may be coupled to each spine element 102 and leaf element 104 forming part of the logical router 100. The element state database 800 may store data associated with each spine element 102 and leaf element 104, such as its configuration (ports, connections of ports to other elements 102, 104, 300, addresses of elements 102, 104, 300, etc.). This information may be discovered by the control plane element 300 using any of the fabric discovery techniques disclosed herein (e.g., LSoE, LLDP). The element state database 800 provides this data to the route processor. For each interface on each spine element 102 and leaf element 104, the route processor 600 creates a unique interface (swp1/1 . . . swp1/40, swp2/1 . . . swp2/40 . . . swp48/1 . . . swp48/40 in
Referring to
Once the interfaces have been created inside a LINUX (or other operating system) instance on the control element 300 executing the route processor 600, the actual interfaces on the front panels of the individual leaf elements 104 may then be ‘stitched’ to the created interfaces corresponding to them. One way to do this is to allocate a unique VLAN (virtual LAN) tag to each front panel interface on each of the leaf elements 104, each VLAN tag being further mapped to one of the interfaces created on the control element 300.
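By way of illustration only, the mapping from front panel ports to VLAN tags and control-element interfaces can be represented as a pair of tables. The sketch below assumes a sequential tag-numbering scheme and a starting tag value, neither of which is specified above; only the swp<leaf>/<port> naming convention is taken from the description.

```python
# A hedged sketch of the VLAN-tag "stitching" table described above. The
# swp<leaf>/<port> naming follows the text; the sequential tag numbering
# and starting value are assumptions for illustration only.

def build_vlan_map(num_leaves=48, ports_per_leaf=40, first_tag=100):
    """Allocate one unique VLAN tag per front-panel port and map that tag
    back to the interface created for the port on the control element."""
    vlan_of_port, interface_of_vlan = {}, {}
    tag = first_tag
    for leaf in range(1, num_leaves + 1):
        for port in range(1, ports_per_leaf + 1):
            vlan_of_port[(leaf, port)] = tag
            interface_of_vlan[tag] = f"swp{leaf}/{port}"
            tag += 1
    return vlan_of_port, interface_of_vlan

vlan_of_port, interface_of_vlan = build_vlan_map()
print(vlan_of_port[(3, 2)])                      # unique tag for leaf 3, port 2
print(interface_of_vlan[vlan_of_port[(3, 2)]])   # 'swp3/2'
```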
The egress leaf 104b forwards the packet 1002 to the control plane element 300 upon receipt. The LINUX instance executing on the control plane element 300 then identifies the interface 1004 referenced by the VLAN tag of the packet 1002, strips out the VLAN tag, and injects the stripped packet 1006 into the corresponding interface 1004. From there on, the packet 1006 flows through the LINUX data path as usual, and applications such as the border gateway protocol (BGP) module 1008 see the packet as coming in on the interface 1004.
In particular, the ingress leaf 104b (connected to the control plane element 300) receives the packet 1100 from the application 1008 and looks up the VLAN tag for the appropriate egress leaf 104a, i.e., the egress leaf to which the packet should be routed according to the programming of the routing database 602 as described above. The ingress leaf 104b tags the packet 1100 with the VLAN tag and forwards the tagged packet 1102 to the egress leaf 104a through the elements 102, 104 of the logical router 100 (see packet 1104). The egress leaf 104a strips off the VLAN tag and forwards the stripped packet 1106 out of the correct front panel port, i.e., the front panel port associated with the VLAN tag and corresponding to the destination of the packet and the programming of the routing database 602.
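Continuing that illustration, the host-bound (punt) direction described in the preceding paragraph and the controller-originated (egress) direction described here reduce to tag, forward, and strip steps keyed by the same mapping. The function names, the frame representation, and the example tag value below are assumptions rather than the actual data plane programming.

```python
# Illustrative continuation: both directions of the host path, using a tiny
# example mapping (the full mapping would be built as sketched above).
# Function names, frame representation, and the tag value are assumptions.

vlan_of_port = {(3, 2): 181}            # (leaf, front-panel port) -> VLAN tag
interface_of_vlan = {181: "swp3/2"}     # VLAN tag -> control-element interface

def punt_toward_controller(leaf, port, frame):
    """Ingress leaf: tag the host-bound frame with its port's VLAN tag."""
    return {"vlan": vlan_of_port[(leaf, port)], "payload": frame}

def deliver_on_controller(tagged):
    """Control element: strip the tag, inject on the matching interface."""
    return interface_of_vlan[tagged["vlan"]], tagged["payload"]

def send_from_controller(iface, frame):
    """Leaf attached to the controller: tag with the egress port's VLAN tag."""
    leaf, port = (int(x) for x in iface[len("swp"):].split("/"))
    return {"vlan": vlan_of_port[(leaf, port)], "payload": frame}

def egress_on_leaf(tagged):
    """Egress leaf: strip the tag, emit on the corresponding front-panel port."""
    return interface_of_vlan[tagged["vlan"]], tagged["payload"]

# Host-bound: a packet arriving on leaf 3, port 2 appears on swp3/2.
print(deliver_on_controller(punt_toward_controller(3, 2, b"BGP OPEN")))
# Controller-originated: a reply sent on swp3/2 leaves leaf 3, port 2.
print(egress_on_leaf(send_from_controller("swp3/2", b"BGP KEEPALIVE")))
```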
Referring to
The examples of
Each control plane element 300a, 300b, 300c may include an individual router state database 602a, 602b, 602c, respectively. Each of the route processors 600a, 600b runs health check diagnostics on the other route processor 600b, 600a (600b checks 600a, 600a checks 600b). The primary route processor 600a may be interfaced with each router state database 602a, 602b, 602c in each of the control plane elements 300a, 300b, 300c as shown in
The router state database 602a in the control plane element 300a shares health check replication data with the router state database in the control plane element 300b. The router state database 602b shares health check replication data with the router state database 602c in the control plane element 300c. In this way, data associated with the health of the primary and secondary route processors 600a, 600b is redundantly stored over multiple databases 602a, 602b, 602c.
In some implementations, the primary route processor 600a checkpoints a required state in the router state databases 602a, 602b, 602c. The router state databases 602a, 602b, 602c may be spawned on all cluster nodes, as illustrated in
Referring to
In the case of failure of the primary control plane element 300a, the control plane element 300b hosting the secondary route processor 600b may assume the role of the master control plane element in response to detecting failure during one of its health checks on the primary route processor 600a. The route processor 600b will then assume the role of the primary route processor and establish connections with the healthy router state databases 602b, 602c as shown in
The embodiment described above with respect to
Note that LSoE and BGP-SPF are standardized protocols leveraged in this design to build a routed backplane for a disaggregated chassis based logical router 100. Design for such a routed backplane is discussed in more detail below.
The embodiment of
The backplane fabric implemented by the interconnections between the fabric ports of the spine units 1502 and the line units 1504 provides data traffic packet transport across all line-units 1504 and controllers 1500. An MPLS routed fabric may be used as a transport underlay across all line unit 1504 and controller fabric ports. The fabric may have some or all of the following properties:
Most external facing control planes for the logical router 100 that include external BGP peerings, IGP (interior gateway protocol) routing protocols, ARP, and ND (neighbor discovery) may be hosted on the controller node 1500. In other words, besides the backplane fabric control plane that is distributed across all nodes 1500, 1502, 1504, most logical router control plane functions may be centralized on the controller node 1500. The illustrated architecture will, however, allow specific functions (such as BFD (bidirectional forwarding detection), LLDP (link layer discovery protocol), VRRP (virtual router redundancy protocol), and LSoE) to be distributed across line units 1504 as needed. Data paths of the units 1502, 1504 may be accordingly programmed to send locally bound packets either to the local CPU (for distributed functions) or to the controller node 1500 (to implement the centralized control plane).
The centralized logical router control plane running on the controller node 1500 drives programming of a data plane that is distributed across the line units 1504. A one-stage forwarding model is defined as one in which (a) all layer 3 route lookups are done on the ingress line-unit 1504 and (b) the resulting rewrites and egress port are resolved on the ingress line-unit 1504. All resulting encapsulation rewrites are put on the packet, and the packet is sent to the egress line-unit 1504 over the backplane transport fabric with the resulting egress port information. All packet editing happens on the ingress line-unit 1504. The egress line-unit 1504 simply forwards the packet on the egress port. A one-stage forwarding model, as defined above, is simulated across standalone line-units 1504 in this logical router 100 to accomplish layer-3 forwarding across line-units:
In some embodiments, all line unit 1504 front panel ports (except for ports designated as fabric-ports) are designated as external switch-ports as noted above. Each of these switch-ports would be represented as an interface in the logical router 100. All logical router interfaces would be represented in a data plane, a control plane, and a management plane on the controller 1500, as well as in a data plane on all line-units 1504. For example, an interface “swp3/2” representing port 2 on line-unit 3 would be programmed in the data plane on all the line-units 1504. It would also be visible in the management plane hosted on the controller node 1500 and in the routing control plane hosted on the controller 1500.
In some embodiments, all router interfaces, including ones on remote line units 1504, are programmed in the data plane on each line unit 1504 in order to accomplish one-stage forwarding across line units 1504 as defined above. A local interface on a line unit 1504 simply resolves to a local port. However, a remote interface on a line unit 1504 is programmed in the data plane such that a packet egressing this remote interface is sent to the remote line unit 1504 to be egressed out of the corresponding router port on the remote line unit 1504. An underlay fabric transport tunnel is set up to stitch the data path to the egress line unit 1504 for this purpose, and an overlay encapsulation may be used to identify the router port on the egress line unit 1504.
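A minimal sketch of the one-stage forwarding model follows, assuming simple dictionary-based tables: the ingress line unit resolves the route, the rewrite, and the egress port, and the egress line unit forwards using only the carried result. The prefixes, rewrites, and port names are placeholders rather than values from the figures.

```python
# A minimal sketch of one-stage forwarding, assuming dictionary tables.
# Prefixes, rewrites, and port names are placeholders.
import ipaddress

# Ingress FIB: prefix -> (next-hop MAC rewrite, egress line unit, egress port)
FIB = {
    ipaddress.ip_network("10.1.2.0/24"): ("MAC-NH1", "LU-7", "swp7/4"),
    ipaddress.ip_network("0.0.0.0/0"):   ("MAC-NH0", "LU-1", "swp1/1"),
}

def ingress_forward(dst_ip, payload):
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match and next-hop resolution, entirely on the ingress unit.
    prefix = max((p for p in FIB if dst in p), key=lambda p: p.prefixlen)
    rewrite, egress_lu, egress_port = FIB[prefix]
    return {              # all packet editing happens here, at ingress
        "rewrite": rewrite,
        "egress_lu": egress_lu,
        "egress_port": egress_port,
        "payload": payload,
    }

def egress_forward(frame):
    # The egress line unit does no lookup; it forwards on the carried port.
    return frame["egress_port"], frame["payload"]

print(egress_forward(ingress_forward("10.1.2.9", b"data")))  # ('swp7/4', b'data')
```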
There are a couple of choices with respect to transport tunnel and overlay encapsulation that may be used for this purpose:
An MPLS transport and overlay may be used in this architecture. However, the overall architecture does not preclude using an IP transport with a VXLAN tunnel to accomplish the same.
In order to improve or optimize the number of internal label encapsulations put on the packet, both the transport label and the interface label may be collapsed into a single label that both identifies a physical port and provides a transport LSP to or from the line unit 1504 hosting the physical interface. This overlay label identifies the egress interface for egress traffic switched towards the egress line unit 1504 (e.g., egress line card) and interface, as well as the ingress interface for ingress traffic on the interface that needs to be punted to the controller 1500, which hosts the routing protocols running on that interface. Two internal label allocations may be defined for this purpose:
Each of the above label contexts may be globally scoped across all nodes 1500, 1502, 1504 within the logical router 100 and identify both the physical port as well as a directed LSP. The above label allocation scheme essentially results in two global labels being allocated for each router-port within the logical router 100. MPLS labels may be statically reserved and assigned for this purpose on switch-port interface discovery, and these reserved labels would not be available for external use in some embodiments.
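As a hedged illustration of the two-labels-per-port scheme, the sketch below derives an egress-direction and an ingress-direction label for each router port from a fixed base value. The base values and the arithmetic are assumptions made only to show that the labels are globally scoped and statically reserved at switch-port discovery.

```python
# A sketch of the label allocation scheme described above: two globally
# scoped MPLS labels per router port, one per direction. The label bases
# and the index arithmetic are illustrative assumptions.

EGRESS_BASE = 100000    # labels that steer traffic toward the egress port
INGRESS_BASE = 200000   # labels that punt ingress traffic to the controller

def port_labels(line_unit, port, ports_per_unit=40):
    """Return (egress_label, ingress_label) for swp<line_unit>/<port>."""
    index = (line_unit - 1) * ports_per_unit + (port - 1)
    return EGRESS_BASE + index, INGRESS_BASE + index

# Statically reserve the labels on switch-port discovery so they are never
# handed out for external use.
reserved = {}
for lu in range(1, 4):                      # assume 3 line units for brevity
    for port in range(1, 41):
        egress_lbl, ingress_lbl = port_labels(lu, port)
        reserved[f"swp{lu}/{port}"] = {"egress": egress_lbl,
                                       "ingress": ingress_lbl}

print(reserved["swp3/2"])   # {'egress': 100081, 'ingress': 200081}
```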
A globally scoped label (across all logical router nodes 1500, 1502, 1504) that is allocated for each local router port of each line unit 1504 identifies both the egress router-port as well as a transport LSP from the ingress line-unit to the egress line-unit that hosts the physical port. This label is programmed on logical router nodes 1500, 1502, 1504 as follows:
This process is illustrated in
A packet may be received by an ingress line unit 1504 (LU−(N+M)). Upon exiting the ingress line unit LU−(N+M), the packet is labeled according to the illustrated label table 1600, which includes the egress interface (“[12.1.1.2,swp(N+2)/1]->MAC-A”) as well as the transport LSP, i.e. the tunnel path, to the egress interface (“MAC-A->L(e,x,y)+MAC-1, port: fp(N+M)/1->L(e,x,y)+MAC-N, port: fp(N+M)/N”). The packet is sent to a spine unit 1502 (SU-N). The spine unit SU-N rewrites the packet according to the label table 1602 that includes the fabric-next-hop rewrite (“L(e,x,y)->MAC-N+2, port:fpN/2”) and the egress label. The spine unit SU-N forwards the rewritten packet to the egress line unit 1504 (LU−(N+2)), which transforms the label of the packet according to the table 1604 that simply points to the egress interface (L(e,x,y)->swp(N+2)/1).
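The per-node tables 1600, 1602, and 1604 can be mimicked with three small dictionaries, as sketched below. The numeric label and the MAC strings are placeholders standing in for L(e,x,y) and the fabric rewrites; the point is only that the ingress unit pushes the label and the first fabric rewrite, the spine rewrites toward the egress unit, and the egress unit pops the label to recover the egress port.

```python
# A sketch of the label tables 1600/1602/1604 as dictionaries.
# The numeric label stands in for L(e,x,y); MAC and port strings are placeholders.

L_EXY = 101234                        # egress-interface label "L(e,x,y)"

ingress_table = {                     # table 1600 on LU-(N+M)
    "12.1.1.2": {"push_label": L_EXY, "inner_rewrite": "MAC-A",
                 "fabric_next_hop": "MAC-1", "fabric_port": "fp(N+M)/1"},
}
spine_table = {                       # table 1602 on SU-N
    L_EXY: {"fabric_next_hop": "MAC-N+2", "fabric_port": "fpN/2"},
}
egress_table = {                      # table 1604 on LU-(N+2)
    L_EXY: {"egress_port": "swp(N+2)/1"},
}

def walk(dst_ip):
    entry = ingress_table[dst_ip]                    # ingress L3 lookup
    pkt = {"label": entry["push_label"],             # push label + fabric rewrite
           "dmac": entry["fabric_next_hop"]}
    hop = spine_table[pkt["label"]]                  # spine in-label lookup
    pkt["dmac"] = hop["fabric_next_hop"]             # rewrite toward egress unit
    out = egress_table[pkt.pop("label")]             # egress pops the label
    return out["egress_port"]

print(walk("12.1.1.2"))               # 'swp(N+2)/1'
```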
Referring to
Punted packets need to be injected into the LINUX kernel, making it appear as if they arrived on the LINUX interface corresponding to the front panel port the packet arrived on. On a standalone system, the host path runs in the LINUX kernel running on the local CPU of the switch, i.e. the line unit 1504, which would be the line unit LU−(N+M) in the example of
In the illustrated architecture, the host data path runs in multiple places. On the line unit 1504, packets may need to be punted to the BGP LSVR (link state vector routing) instance running on that line unit 1504. If the packet is destined to a control plane protocol instance running on the controller 1500, then the line unit 1504 needs to be able to deliver the packet to the controller. Since there is no system header in this path, the ingress interface needs to be identified and encapsulated within the packet itself.
As mentioned in the earlier sections, this is achieved using a unique label that identifies the ingress interface. An ACL rule can be used to match on the ingress interface and supply the corresponding label and the subsequent forwarding chain. However, this result needs to be used only when the packet really needs to be sent to the controller 1500. In other cases, the forwarding lookup should drive the encapsulations.
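A sketch of that decision is shown below, under the assumption that the punt rule is a simple per-interface table consulted only for host-bound traffic; the rule fields, label value, and fib_lookup callback are illustrative and not drawn from any particular ACL syntax.

```python
# A minimal sketch of the punt decision described above: an ACL-style rule
# keyed on the ingress interface supplies the ingress-interface label and the
# controller-bound forwarding chain, but only when the packet is host-bound.
# Field names and the label value are assumptions.

punt_acl = {
    # ingress interface -> (ingress-interface label, fabric next hop to controller)
    "swp3/2": {"label": 200081, "fabric_next_hop": "MAC-CTRL"},
}

def forward(ingress_iface, packet, host_bound, fib_lookup):
    if host_bound:                       # e.g. a protocol packet destined to the router
        rule = punt_acl[ingress_iface]
        return {"push_label": rule["label"],
                "rewrite": rule["fabric_next_hop"],
                "payload": packet}
    # Transit traffic: the normal forwarding lookup drives the encapsulation.
    return fib_lookup(packet)

out = forward("swp3/2", b"BGP KEEPALIVE", host_bound=True, fib_lookup=lambda p: None)
print(out["push_label"])                 # the label identifies the ingress port
```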
As shown in
Auto-bring-up of layer-3 backplane fabric may be orchestrated according to the explanation below in which R0 refers to the controller 1500.
Auto-configure R0 with a startup config:
Assume R0 has been imaged and its management Ethernet (ma1) is up and addressed. R0 reads a start-up configuration file (packaged with the image) that has the following:
R0 brings its southbound fabric interfaces up (spine units 1502 and line units 1504 in the topology of
R0 runs dhcpd (a dynamic host configuration protocol daemon) so that the management Ethernet interfaces (ma1) of the line units 1504 and spine units 1502 can get addresses from a pool given in the startup configuration file. The line card numbers for the units 1502, 1504 are assumed to be the R0 port to which they are wired. R0 runs a ZTP service to the units 1502, 1504.
Push Startup Configuration to Line-Units:
R0 pushes startup configuration to the line units 1504 and spine units 1502. This configuration identifies a card role for each unit 1502, 1504; identifies each local port as a “fabric-port” or “router-port”; specifies northbound fabric interface addressing; and provides MPLS labels for router-port overlay tunnels (two labels per port).
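A hypothetical example of the configuration pushed to one line unit is sketched below; the field names, addresses, and label values are invented for illustration, but the items mirror the list above (card role, per-port designation, fabric addressing, and the two overlay-tunnel labels per router port).

```python
# Hypothetical startup configuration for one line unit (all values invented).
line_unit_3_startup = {
    "card_role": "line-unit",                       # vs. "spine-unit"
    "ports": {
        "fp3/1":  {"type": "fabric-port", "address": "10.0.3.1/31"},
        "fp3/2":  {"type": "fabric-port", "address": "10.0.3.3/31"},
        "swp3/1": {"type": "router-port",
                   "labels": {"egress": 100080, "ingress": 200080}},
        "swp3/2": {"type": "router-port",
                   "labels": {"egress": 100081, "ingress": 200081}},
    },
}
print(line_unit_3_startup["ports"]["swp3/2"]["labels"])
```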
The units 1502, 1504 then run LSoE on fabric ports to make sure they are wired as expected from the startup configuration. LSoE discovers layer-3 fabric neighbors and corresponding encapsulations. The database of information learned by LSoE is exported into BGP-SPF, as per standard LSoE function.
BGP-SPF peering is established on each line unit-to-spine unit fabric link. Fabric topology is learned on each unit 1502, 1504, and fabric-VRF IP reachability is established to each routed fabric-port via BGP-SPF computation. BGP-SPF programs each local line-unit/spine-unit RIB (routing information base) with fabric routes within the fabric-VRF. At this point, there is IP reachability across all fabric port IP addresses.
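For illustration, the fabric-VRF RIB on one line unit could be modeled as a small table of fabric-port prefixes learned via LSoE and BGP-SPF; the addresses, next hops, and port names below are assumptions.

```python
# A sketch of the fabric-VRF RIB each unit ends up with after BGP-SPF
# computation: one route per remote routed fabric-port, reachable via a
# directly connected fabric neighbor. Addresses and next hops are assumed.

fabric_vrf_rib = {}          # prefix -> (next_hop, out_port), fabric-VRF only

def bgp_spf_install(prefix, next_hop, out_port):
    """Install a fabric route learned via LSoE + BGP-SPF into the local RIB."""
    fabric_vrf_rib[prefix] = (next_hop, out_port)

# On line unit 3: reachability to routed fabric ports of other units.
bgp_spf_install("10.0.4.0/31", next_hop="10.0.3.0", out_port="fp3/1")
bgp_spf_install("10.0.5.2/31", next_hop="10.0.3.2", out_port="fp3/2")

print(fabric_vrf_rib["10.0.4.0/31"])    # ('10.0.3.0', 'fp3/1')
```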
Switch-Port Discovery and Tunnel Bring-Up:
Local router ports may be discovered on each line unit 1504. Discovered router ports along with assigned MPLS labels are pushed into local BGP-LSVR instances on each line unit 1504. BGP-SPF may be enhanced further to be able to carry ports+labels independent of IP addressing. Accordingly, BGP-SPF may be configured to compute a shortest path first (SPF) to each “switch-port” in the logical router. BGP-SPF may also incorporate these external switch-ports into its fabric-VRF topology independent of the user VRF in which they are configured. BGP on each unit 1504 instantiates ingress/egress overlay MPLS tunnels for each interface that resolve via fabric-VRF next-hops. Tunnel reachability may be resolved via fabric-VRF next-hops, and tunnels may be programmed as described earlier with the assigned MPLS label on each unit 1504.
User configuration on R0 follows the bringing up of the backplane fabric and may be handled on the controller 1500. Switch state computed as a result of this user configuration and control plane may be further distributed for programming across some or all of the line units 1504.
Example Packet Paths
This section describes how some common packet paths work in the system using the data path programming of the controller node 1500 and the units 1502, 1504 described in earlier sections.
ARP Resolution
Glean processing on a unit 1502, 1504 is performed by an ingress L3 route lookup on the destination IP address that resolves to an incomplete next-hop or subnet (glean) route that is programmed pointing to the PUNT path. The PUNT path is pre-programmed pointing to the ingress-interface-tunnel to the controller 1500. An ingress layer-2 packet is encapsulated with the ingress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units 1502. The spine unit 1502 terminates the outer layer-2. An MPLS in-label lookup on the spine unit 1502 points to ingress-interface-label+rewrite to fabric-controller-next-hop. This information is used to route the packet to the controller 1500. The controller 1500 terminates the outer layer-2. The controller 1500 is programmed to perform an MPLS in-label lookup whose action is POP (label pop) and identifies the ingress interface context. The controller 1500 performs an L3 route lookup on the destination IP of the packet and resolves to an incomplete next-hop or subnet (glean) route. The controller 1500 then delivers the packet using the next-hop or subnet route for ARP resolution with the ingress interface.
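The glean/punt chain above can be condensed into a short sketch. The tables, label value, and the "glean" marker below are placeholders; the sketch only traces the ingress-interface tunnel from line unit to spine to controller, ending with the ingress interface context handed to ARP.

```python
# A condensed sketch of the glean/punt chain described above. Labels and
# table contents are placeholders; "glean" marks a subnet route whose
# next hop has not yet been resolved by ARP.
import ipaddress

line_unit_fib = {ipaddress.ip_network("12.1.1.0/24"): "glean"}
punt_path = {"push_label": 200081, "fabric_next_hop": "spine"}     # ingress-interface tunnel
spine_labels = {200081: {"fabric_next_hop": "controller"}}
controller_labels = {200081: {"action": "POP", "ingress_iface": "swp3/2"}}

def glean_punt(dst_ip, frame):
    dst = ipaddress.ip_address(dst_ip)
    route = next(v for p, v in line_unit_fib.items() if dst in p)
    if route != "glean":
        return None                                   # normal forwarding path
    label = punt_path["push_label"]                   # encapsulate on ingress line unit
    hop = spine_labels[label]["fabric_next_hop"]      # spine forwards to controller
    ctx = controller_labels[label]                    # controller pops, finds context
    return hop, ctx["ingress_iface"], frame           # handed to ARP with the interface

print(glean_punt("12.1.1.2", b"ipv4 payload"))   # ('controller', 'swp3/2', b'ipv4 payload')
```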
ARP Request
The controller 1500 generates a broadcast ARP request on the ingress L3-interface. The controller L3-interface resolves to an egress-interface-tunnel port. The ARP packet of the broadcast ARP request is encapsulated with egress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units 1502. The spine unit 1502 terminates the outer layer-2. An MPLS in-label lookup on the spine unit 1502 points to egress-interface-label+rewrite to fabric-line-unit-next-hop. The encapsulated packet is transmitted on the fabric port to the egress line unit 1504 according to the MPLS in-label lookup. The egress line unit 1504 terminates the outer layer-2. The egress line unit 1504 performs an MPLS in-label lookup, resulting in a POP, and forwards the packet on an egress interface of the egress line unit identified from the MPLS in-label lookup.
ARP Reply
ARP reply packets may be programmed with a PUNT path to the controller 1500. The PUNT path is pre-programmed and points to an ingress-interface-tunnel to the controller 1500. An ingress L2 ARP packet from a line unit 1504 may be encapsulated with ingress-interface-label+rewrite to fabric-spine-next-hop according to the PUNT path. The encapsulated packet is transmitted on the fabric port to one of the spine units 1502. The spine unit 1502 terminates the outer layer-2. An MPLS in-label lookup on the spine unit 1502 points to ingress-interface-label+rewrite to fabric-controller-next-hop. This information is used to forward the ARP packet to the controller 1500.
The controller 1500 terminates the outer layer-2. The controller 1500 performs an MPLS in-label lookup whose action is programmed as POP. The controller 1500 identifies the ingress interface context according to the lookup action. The inner packet encapsulated in the packet from the line unit 1504 is identified as an ARP packet and delivered to the ARP module executing on the controller 1500, which processes the ARP reply according to the address resolution protocol (ARP).
Ingress LC->Egress LC Routed Packet Walk
The ingress line unit 1504 performs an ingress L3 route lookup on the destination IP of a packet and resolves to a next-hop rewrite, L3-egress-interface, and L2-egress-interface-tunnel-port. The packet is rewritten with the next-hop rewrite result from the route lookup and VLAN editing derived from the egress L3-interface and L2-port. The resulting layer-2 packet is encapsulated with egress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units 1502 according to the fabric-spine-next-hop. The spine unit 1502 receives the encapsulated packet, terminates the outer layer-2, and performs an MPLS in-label lookup that points to egress-interface-label+rewrite to fabric-egress-line-unit-next-hop. The spine unit 1502 transmits the encapsulated packet to the egress line unit 1504 referenced by the fabric-egress-line-unit-next-hop. The egress line unit 1504 terminates the outer layer-2, performs an MPLS in-label lookup to obtain a POP result, and forwards the packet on the egress interface of the egress line unit 1504 identified by the label.
Computing device 1900 may be used to perform various procedures, such as those discussed herein. Computing device 1900 can function as a server, a client, or any other computing entity. Computing device can perform various monitoring functions as discussed herein, and can execute one or more application programs, such as the application programs described herein. Computing device 1900 can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, tablet computer and the like.
Computing device 1900 includes one or more processor(s) 1902, one or more memory device(s) 1904, one or more interface(s) 1906, one or more mass storage device(s) 1908, one or more Input/Output (I/O) device(s) 1910, and a display device 1930 all of which are coupled to a bus 1912. Processor(s) 1902 include one or more processors or controllers that execute instructions stored in memory device(s) 1904 and/or mass storage device(s) 1908. Processor(s) 1902 may also include various types of computer-readable media, such as cache memory.
Memory device(s) 1904 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 1914) and/or nonvolatile memory (e.g., read-only memory (ROM) 1916). Memory device(s) 1904 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 1908 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in
I/O device(s) 1910 include various devices that allow data and/or other information to be input to or retrieved from computing device 1900. Example I/O device(s) 1910 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device 1930 includes any type of device capable of displaying information to one or more users of computing device 1900. Examples of display device 1930 include a monitor, display terminal, video projection device, and the like.
Interface(s) 1906 include various interfaces that allow computing device 1900 to interact with other systems, devices, or computing environments. Example interface(s) 1906 include any number of different network interfaces 1920, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include a user interface 1918 and a peripheral device interface 1922. The interface(s) 1906 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pads, etc.), keyboards, and the like.
Bus 1912 allows processor(s) 1902, memory device(s) 1904, interface(s) 1906, mass storage device(s) 1908, and I/O device(s) 1910 to communicate with one another, as well as other devices or components coupled to bus 1912. Bus 1912 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1900, and are executed by processor(s) 1902. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/771,407, filed Nov. 26, 2018 and entitled LOGICAL ROUTER COMPRISING DISAGGREGATED NETWORK ELEMENTS, which is hereby incorporated by reference in its entirety.