More and more enterprises have moved, or are in the process of moving, large portions of their computing workloads into various public clouds (e.g., Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure). If an enterprise uses only one public cloud for its workloads and relies on that cloud's native offerings, then many of these public clouds offer connectivity between the workloads in the different datacenters of the same public cloud provider. For instance, AWS virtual private clouds (VPCs) can be connected via a transit gateway.
However, enterprises might want to use workloads in multiple public clouds as well as in their on-premises and branch datacenters, and these workloads are not as easily connected to one another without traffic traveling through the public Internet. In addition, some enterprises might want to retain management of their workloads in the cloud rather than running workloads that are natively managed by the cloud provider. Solutions that enable these objectives while navigating the limitations imposed by specific cloud providers are therefore desirable.
Some embodiments provide a method for configuring a gateway router that provides connectivity between a virtual datacenter in a public cloud and a set of external datacenters (e.g., on-premises datacenters, other virtual datacenters in the same or different public clouds). Specifically, when the gateway router is implemented according to constraints imposed by the public cloud in which the virtual datacenter resides, some embodiments automatically aggregate network addresses of logical network endpoints in the virtual datacenter into a single subnet address that is used for an aggregated route in the routing table of the gateway router.
The virtual datacenter, in some embodiments, is a datacenter that includes network and/or compute management components operating in the public cloud as well as network endpoints connected by a logical network within the virtual datacenter. The network endpoints as well as the logical network are managed by the management components that operate within the virtual datacenter. The logical network of some embodiments connects to other datacenters via the gateway router that is implemented as part of the public cloud underlay. As such, the routing table for this gateway router is constrained according to rules imposed by the public cloud, such as limiting the number of routes in the routing table.
However, the gateway routing table needs to include routes for each of the network endpoints of the virtual datacenter (i.e., for the network addresses of these endpoints or for the logical switch subnets to which these network endpoints belong) so that the gateway router will route data messages directed to these network endpoints to the virtual datacenter logical network (rather than to other datacenters or to the public Internet). To limit the number of routes needed in the gateway routing table, a network management component of the virtual datacenter receives the network addresses of these network endpoints and automatically aggregates at least a subset of these network addresses into a single subnet address that encompasses the entire subset of the network addresses. Rather than providing individual routes for each network endpoint or logical switch subnet in the subset, the network management component provides a single aggregated route for the subnet address to the routing table of the gateway router.
In some embodiments, the network management component monitors the virtual datacenter (e.g., a compute management component) to determine when new network endpoints are created in the virtual datacenter in order to perform route aggregation. The route aggregation process, in some embodiments, identifies a set of network addresses (for which all of the routes have the same next hop) that share a set of most significant bits in common (e.g., for IPv4 addresses, the first 22 out of 32 bits). The aggregate subnet address uses these shared most significant bits and then the value zero for each of the remaining least significant bits (e.g., in the prior example, the 10 remaining bits). In some embodiments, the process also ensures that no network addresses for which the gateway router routes data messages to other next hops are encompassed by this subnet (e.g., addresses for network endpoints located in an on-premises datacenter). If needed, the network management component generates routes for multiple aggregate subnet addresses to (i) encompass all of the virtual datacenter network endpoints while (ii) ensuring that no other network addresses are encompassed by the aggregate subnet addresses.
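To make the bit arithmetic concrete, the following is a minimal sketch (in Python, using the standard ipaddress module, with hypothetical addresses rather than addresses taken from any figure) showing two /24 subnets that share their first 22 bits being covered by a single /22 aggregate whose remaining 10 bits are zero:

```python
import ipaddress

# Two hypothetical logical switch subnets that share their first 22 bits.
a = ipaddress.ip_network("10.2.4.0/24")
b = ipaddress.ip_network("10.2.6.0/24")

# The aggregate keeps the 22 shared most significant bits and sets the
# remaining 10 bits to zero, giving 10.2.4.0/22.
aggregate = a.supernet(new_prefix=22)
print(aggregate)                 # 10.2.4.0/22
print(a.subnet_of(aggregate))    # True
print(b.subnet_of(aggregate))    # True -- one route now covers both subnets
```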
The aggregate subnet addresses are not only used by the gateway router in some embodiments but are also provided to connector gateways that are used to connect the virtual datacenter to on-premises datacenters, other virtual datacenters, and/or native virtual private clouds (VPCs) of the public cloud datacenter. The subnet addresses may be provided to these other connector gateways using a routing protocol or through network management connections in different embodiments. In addition, the gateway router also receives network addresses for endpoints located in these other datacenters or VPCs in order to route data messages sent from the virtual datacenter to network endpoints in these other datacenters or VPCs to the correct connector gateway. In some embodiments, the gateway router receives aggregate routes (e.g., for routing to other virtual datacenters) or the network management components of the virtual datacenter perform a similar route aggregation process to aggregate the network addresses for the outbound routes.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide a method for configuring a gateway router that provides connectivity between a virtual datacenter in a public cloud and a set of external datacenters (e.g., on-premises datacenters, other virtual datacenters in the same or different public clouds). Specifically, when the gateway router is implemented according to constraints imposed by the public cloud in which the virtual datacenter resides, some embodiments automatically aggregate network addresses of logical network endpoints in the virtual datacenter into a single subnet address that is used for an aggregated route in the routing table of the gateway router.
The virtual datacenter, in some embodiments, is a datacenter that includes network and/or compute management components operating in the public cloud as well as network endpoints connected by a logical network within the virtual datacenter. The network endpoints as well as the logical network are managed by the management components that operate within the virtual datacenter. The logical network of some embodiments connects to other datacenters via the gateway router that is implemented as part of the public cloud underlay. As such, the routing table for this gateway router is constrained according to rules imposed by the public cloud, such as limiting the number of routes in the routing table.
The management and compute T1 logical routers 115 and 120 are sometimes referred to as the management gateway and compute gateway. In some embodiments, a typical virtual datacenter is defined with these two T1 logical routers connected to a T0 logical router, which segregates the management network segments from the compute network segments to which workload DCNs connect. In general, public network traffic from external client devices would not be allowed to connect to the management network but would (in certain cases) be allowed to connect to the compute network (e.g., if the compute network includes web servers for a public-facing application). Each of the T1 logical routers 115 and 120 may also apply services to traffic that it processes, whether that traffic is received from the T0 logical router 110 or received from one of the network segments underneath the T1 logical router.
In this example, the virtual datacenter logical network 100 includes three management logical switches 125-135 (also referred to as network segments) and two compute logical switches 140-145. In this example, one or more compute manager DCNs 150 connect to the first management logical switch 125 and one or more network manager and controller DCNs 155 connect to the second management logical switch 130. The DCNs shown here may be implemented in the public cloud as virtual machines (VMs), containers, or other types of machines, in different embodiments. In some embodiments, multiple compute manager DCNs 150 form a compute manager cluster connected to the logical switch 125, while multiple network manager DCNs 155 form a management plane cluster and multiple network controller DCNs 155 form a control plane cluster (both of which are connected to the same logical switch 130). Via the management gateway 115, the compute manager DCNs 150 can communicate with the network manager and controller DCNs 155 (e.g., to notify the network manager DCNs when new logical network endpoint workloads have been added to the virtual datacenter).
The virtual datacenter 105 also includes logical network endpoint workload DCNs 160. These DCNs can host applications that are accessed by users (e.g., employees of the enterprise that owns and manages the virtual datacenter 105), external client devices (e.g., individuals accessing a web server through a public network), or other DCNs (e.g., in the same virtual datacenter or different datacenters). In some embodiments, at least a subset of the DCNs are related to an application mobility platform (e.g., VMware HCX) that helps deliver hybrid cloud solutions.
The workload DCNs 160 in this example connect to two logical switches 140 and 145 (e.g., because they implement different tiers of an application, or different applications altogether). In some embodiments, workload network endpoint DCNs connect to different logical switches than the application mobility platform DCNs. These DCNs 160 can communicate with each other, with workload DCNs in other datacenters, etc. via the interfaces connected to these compute logical switches 140 and 145. In addition, in some embodiments (and as shown in this example), the workload DCNs 160 include a separate interface (e.g., in a different subnet) that connects to a management logical switch 135. The workload DCNs 160 communicate with the compute and network management DCNs 150 and 155 via this logical switch 135, without requiring this control traffic to be sent through the T0 logical router 110.
The virtual datacenter 105 also includes a gateway router 165 and an internet gateway 170, with the T0 logical router having different interfaces for connecting to each of these routers. The internet gateway 170 handles traffic sent between the workload DCNs 160 and the public Internet (e.g., when external clients communicate with the workload DCNs 160). The gateway router 165 operates to connect the virtual datacenter logical network to (i) on-premises datacenter(s) belonging to the enterprise that manages the virtual datacenter, (ii) additional virtual datacenter(s) in the same public cloud or other public clouds, and/or (iii) native virtual private cloud(s) (VPCs) implemented in the same public cloud or other public clouds. These connections are described further below.
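As an informal aid (not an actual configuration schema of any network manager), the topology described above could be captured in a data structure along these lines; the key names and the placement of segment 135 under the management gateway are assumptions for illustration only:

```python
# Hypothetical, purely illustrative representation of the virtual datacenter
# topology described above; not an actual network manager schema.
virtual_datacenter_105 = {
    "t0_router_110": {
        "uplinks": ["gateway_router_165", "internet_gateway_170"],
        "t1_routers": {
            "management_gateway_115": {
                "segments": ["mgmt_ls_125", "mgmt_ls_130", "mgmt_ls_135"],
            },
            "compute_gateway_120": {
                "segments": ["compute_ls_140", "compute_ls_145"],
            },
        },
    },
    "management_dcns": {
        "compute_managers_150": "mgmt_ls_125",
        "network_managers_and_controllers_155": "mgmt_ls_130",
    },
    # Workload DCNs have data interfaces on the compute segments and a separate
    # management interface on segment 135, so their control traffic reaches the
    # management DCNs without traversing the T0 router.
    "workload_dcns_160": {
        "data_segments": ["compute_ls_140", "compute_ls_145"],
        "management_segment": "mgmt_ls_135",
    },
}
```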
In different embodiments, the entire virtual datacenter may be implemented on a single host computer of the public datacenter 200 (which may host many VMs, containers, or other DCNs) or multiple different host computers. As shown, in this example, at least two host computers 210 and 215 execute workload and/or management VMs. Another host computer 220 executes a gateway datapath 225. In some embodiments, this gateway datapath 225 implements a centralized component of the T0 logical router, such that all traffic between any external networks and the virtual datacenter is processed by the gateway datapath 225. Additional details regarding logical routers and their implementation can be found in U.S. Pat. No. 9,787,605, which is incorporated herein by reference.
The gateway router 305 includes two outgoing interfaces. One of these interfaces connects to an on-premises gateway connector router 315 and a second interface connects to a transit gateway connector router 320. In some embodiments, both of these connector routers 315 and 320 are implemented in the public cloud provider underlay (i.e., outside of the virtual datacenter 300 but within the public cloud datacenter in which the virtual datacenter is implemented). For instance, when the virtual datacenter 300 is implemented in an Amazon Web Services (AWS) public cloud, the on-premises gateway connector router is an AWS Direct Connect router while the transit gateway connector router is an AWS Transit Gateway router.
As shown, the on-premises gateway connector router 315 enables the virtual datacenter 300 to connect to an on-premises datacenter 325. In some embodiments, the connection between the on-premises gateway connector router 315 and the on-premises datacenter 325 uses a virtual private network (VPN) connection such that traffic between the virtual datacenter 300 and the on-premises datacenter 325 does not pass through the public Internet at any time. In some embodiments, the on-premises gateway connector router 315 enables connection to multiple on-premises and/or branch network private datacenters. In other embodiments, a separate gateway connector router 315 is used for each private datacenter to which the virtual datacenter connects. The on-premises gateway connector router 315 connects to an edge router 330 at the on-premises datacenter 325, and numerous workload DCNs (e.g., VMs, bare metal computing devices, mobile devices, etc.) can communicate with the virtual datacenter through this connection.
The transit gateway connector router 320 enables the virtual datacenter 300 to connect to both (i) another virtual datacenter 335 and (ii) a native VPC 340 of the public cloud. The second virtual datacenter 335 is arranged similarly to the first virtual datacenter 300 (e.g., with its own logical network, management components, and gateway router), while the native VPC 340 hosts network endpoints that are managed through the public cloud provider's native constructs rather than by the enterprise's own management components.
In this example, the second virtual datacenter 335 and the native VPC 340 are both implemented in the same public cloud datacenter as the first virtual datacenter 300. In other examples, however, the first virtual datacenter 300 may connect to virtual datacenters and/or native VPCs in other public cloud datacenters. For additional discussion regarding these connections, see U.S. patent application Ser. No. 17/212,662, filed Mar. 25, 2021 and titled “Connectivity Between Virtual Datacenters”, which is incorporated herein by reference.
As noted, the gateway router 305 needs to be configured to route data messages sent between the logical network endpoints of the first virtual datacenter 300 (as well as, in some cases, the management DCNs) and the other datacenters 325, 335, and 340. The routing table(s) for the gateway router 305 thus include (i) routes for the endpoints located at the first virtual datacenter 300 (with a next hop of the T0 logical router 310), (ii) routes for the endpoints located at the on-premises datacenter 325 (with a next hop interface of the on-premises gateway connector router 315), (iii) routes for the endpoints located at the second virtual datacenter 335 (with a next hop interface of the transit gateway connector router 320), and (iv) routes for the endpoints located at the native VPC 340 (also with a next hop interface of the transit gateway connector router 320).
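Purely for illustration (the prefixes below are made up and not taken from any figure), such a routing table might be represented as a simple list of (destination, next hop) pairs:

```python
# Hypothetical routing table for gateway router 305 before aggregation.
gateway_routes = [
    # (i) virtual datacenter 300 endpoints -> T0 logical router 310
    ("10.2.4.0/24", "t0_router_310"),
    ("10.2.5.0/24", "t0_router_310"),
    ("10.2.6.0/24", "t0_router_310"),
    # (ii) on-premises datacenter 325 endpoints -> connector router 315
    ("192.168.10.0/24", "onprem_connector_315"),
    # (iii) second virtual datacenter 335 endpoints -> connector router 320
    ("10.8.0.0/16", "transit_gw_connector_320"),
    # (iv) native VPC 340 endpoints -> connector router 320
    ("172.16.0.0/16", "transit_gw_connector_320"),
]
```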
As noted, however, the routing table of the gateway router 305 is constrained according to rules imposed by the public cloud. For instance, in some embodiments the public cloud limits the number of routes that can be programmed into the routing table (e.g., 20 routes, 100 routes, etc.). However, the number of workloads at any of the datacenters will often easily exceed these numbers, and thus the routing table cannot accommodate an individual route for each network endpoint. In many cases, the network endpoints are organized by logical switches that each have their own subnet and thus a route is provided for each logical switch. However, the number of logical switches can still far exceed the maximum number of allowed routes for the gateway router in many deployments.
To limit the number of routes needed in the gateway routing table, some embodiments automatically aggregate the network addresses of the network endpoints into one or more aggregate subnet addresses that each encompasses multiple network endpoints (and, typically, multiple logical switch subnets) at the same location. Rather than configuring individual routes in the gateway routing table for each network endpoint or subnet in a given subset of network addresses, some embodiments configure a single aggregated route for the aggregate subnet.
As shown, the process 400 begins by receiving (at 405) network addresses for a set of network endpoints. In some embodiments, a scheduler component of the network manager monitors the virtual datacenter (e.g., by communicating with a compute management component) to determine when new network endpoints are created in the virtual datacenter in order to perform route aggregation. The process 400 may be performed when the virtual datacenter is initially set up as well as anytime additional network endpoints are created in the virtual datacenter for which routes need to be added to the routing table. The received network addresses, in some embodiments, can be IPv4 addresses, IPv6 addresses, or other types of network addresses (the described examples will use IPv4 addresses for simplicity, but the principles described can be extended to IPv6 addresses or other similarly-structured types of network addresses). For instance, in IPv4, the received network addresses can be addresses of individual DCNs (e.g., /32 addresses) or subnets associated with logical switches that can include multiple DCNs (e.g., /24 addresses).
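One way this monitoring might be structured is sketched below; the list_endpoint_addresses() method and the aggregation callback are hypothetical stand-ins for whatever interfaces the compute manager and network manager actually expose in a given embodiment:

```python
import ipaddress
import time

def poll_for_new_endpoints(compute_manager, on_new_addresses, interval_seconds=30):
    """Periodically ask the compute manager for endpoint addresses and invoke
    the route aggregation callback when new ones appear. Both the
    list_endpoint_addresses() method and the callback are assumed interfaces."""
    known = set()
    while True:
        current = {ipaddress.ip_network(a, strict=False)            # /32 hosts or /24 subnets
                   for a in compute_manager.list_endpoint_addresses()}
        new_addresses = current - known
        if new_addresses:
            known |= new_addresses
            # Re-run aggregation (operations 410-440) over all known addresses
            # so the gateway routing table stays within the cloud's route limit.
            on_new_addresses(sorted(known))
        time.sleep(interval_seconds)
```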
Returning to the process 400, the process next determines (at 410) whether any aggregate routes already exist in the gateway routing table (e.g., from previous iterations of the route aggregation process).
The process 400 then removes (at 415) any of the received network addresses that are encompassed by the existing aggregate routes. Specifically, larger subnets can encompass smaller subnets or individual DCN addresses when the smaller subnet or individual DCN address matches the larger subnet. For instance, an existing route for a /16 IPv4 subnet encompasses many different /24 subnets (e.g., 10.10.0.0/16 encompasses 10.10.5.0/24 as well as the individual address 10.10.6.1, but does not encompass 10.11.0.0/24). It should also be noted that a route for a larger subnet only encompasses a network address if the next hop for the larger subnet route is the same as the next hop would be for a new route for the encompassed network address. For instance, a 10.10.0.0/16 route with a next hop interface pointing to a transit gateway router would not encompass a 10.10.10.0/24 route pointing to the local T0 router (but would create a conflict that would need to be managed). Any network addresses encompassed by the existing aggregate routes can be removed from analysis because they do not need to be further aggregated, as they will not add to the size of the routing table.
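As a sketch of this encompassment check (assuming routes are represented as the simple (prefix, next hop) pairs used in the earlier illustration, which is a simplification):

```python
import ipaddress

def encompassed(address, existing_aggregates, next_hop):
    """Return True if 'address' is already covered by an existing aggregate
    route that points at the same next hop (sketch; route format assumed)."""
    net = ipaddress.ip_network(address, strict=False)
    for agg_prefix, agg_next_hop in existing_aggregates:
        if agg_next_hop == next_hop and net.subnet_of(ipaddress.ip_network(agg_prefix)):
            return True
    return False

existing = [("10.2.0.0/16", "t0_router"), ("10.8.0.0/16", "transit_gw")]
print(encompassed("10.2.5.0/24", existing, "t0_router"))   # True  -> no new route needed
print(encompassed("10.8.1.0/24", existing, "t0_router"))   # False -> next hops differ (conflict case)
print(encompassed("10.11.0.0/24", existing, "t0_router"))  # False -> still needs aggregation
```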
Though not shown in the figure, if all of the received network addresses are encompassed by existing aggregate routes, then process 400 ends as there is no need to perform additional aggregation analysis. However, if there are not any existing aggregate routes or some of the received addresses are not encompassed by the existing routes, the process 400 performs route aggregation.
To aggregate the routes, the process identifies (at 420) the most significant bits shared by a group of network addresses. In some embodiments, the process starts by grouping all of the addresses and identifying a set of most significant bits shared by all of these addresses. Other embodiments impose an initial requirement that a minimum number of bits must be shared in order to group a set of addresses. At the extreme end, the default route 0.0.0.0/0 would encompass all of the network addresses, but if any egress routes for network addresses in other datacenters exist in the routing table, then this default route would conflict with these routes (though this could be handled by longest prefix match). In addition, a default route pointing outward might already exist for the router. Thus, for instance, some embodiments start with 16 most significant bits (for IPv4 addresses) and only group addresses that share these first 16 bits in common (e.g., all 192.168.x.x subnets). These grouped addresses might share additional bits in common, in which case further most significant bits can be used.
The process 400 then defines (at 425) an aggregate subnet formed by the identified most significant bits. This aggregate subnet, in some embodiments, includes (i) the shared values for the identified most significant bits and (ii) the value zero for each of the remaining least significant bits of the address. The netmask for this aggregated subnet is defined by the number of most significant bits that are shared between the grouped addresses.
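The following is a rough Python rendering of these two operations; the 16-bit grouping threshold mirrors the example mentioned above, and the function names and example addresses are assumptions rather than any particular embodiment's implementation:

```python
import ipaddress

MIN_SHARED_BITS = 16  # assumed grouping threshold, per the example above

def shared_prefix_length(networks):
    """Number of most significant bits common to all network addresses."""
    ints = [int(n.network_address) for n in networks]
    length = 0
    for bit in range(31, -1, -1):                  # IPv4: walk from bit 31 down
        if len({i & (1 << bit) for i in ints}) > 1:
            break
        length += 1
    return length

def define_aggregate(networks):
    """Operations 420-425 (sketch): keep the shared most significant bits, zero
    the remaining bits, and use the shared-bit count as the netmask."""
    plen = min(shared_prefix_length(networks),
               min(n.prefixlen for n in networks))
    if plen < MIN_SHARED_BITS:
        return None  # not enough bits in common to group these addresses
    base = int(networks[0].network_address) >> (32 - plen) << (32 - plen)
    return ipaddress.ip_network((base, plen))

group = [ipaddress.ip_network(a) for a in
         ("192.168.4.0/24", "192.168.5.0/24", "192.168.16.0/24")]
print(define_aggregate(group))  # 192.168.0.0/19
```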
Next, the process 400 determines (at 430) whether the newly-defined aggregate subnet conflicts with any other existing routes. That is, the aggregate subnet should not encompass network addresses for which the gateway router routes data messages to a different next hop (e.g., addresses of network endpoints located at the on-premises datacenter, which are routed to the on-premises gateway connector router). If such a conflict exists, the process narrows the aggregate subnet by using additional most significant bits. A hedged sketch of this conflict check appears after this paragraph.
It should be noted that the operations 420-430 may be performed multiple times (e.g., for different groups of received network addresses, or after narrowing a conflicting aggregate subnet), so that multiple aggregate subnets are defined when a single aggregate subnet cannot encompass all of the virtual datacenter network addresses without encompassing any conflicting addresses.
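The sketch below assumes the same simple (prefix, next hop) route representation used earlier and narrows a conflicting aggregate by splitting it into halves (one additional significant bit) until no conflict remains; it is illustrative only:

```python
import ipaddress

def conflicts(aggregate, other_routes):
    """Operation 430 (sketch): an aggregate conflicts if it would encompass a
    prefix that the gateway routes elsewhere (e.g., to the on-prem connector)."""
    return any(ipaddress.ip_network(prefix).subnet_of(aggregate)
               for prefix, _next_hop in other_routes)

def split_until_clean(aggregate, covered, other_routes):
    """If the aggregate conflicts, split it into its two halves (one more
    significant bit) and keep only halves that still cover local addresses."""
    if not conflicts(aggregate, other_routes):
        return [aggregate]
    result = []
    for half in aggregate.subnets(prefixlen_diff=1):
        if any(c.subnet_of(half) for c in covered):
            result.extend(split_until_clean(half, covered, other_routes))
    return result

covered = [ipaddress.ip_network(a) for a in ("10.2.4.0/24", "10.2.9.0/24")]
other = [("10.2.128.0/24", "onprem_connector")]       # routed to another next hop
aggregate = ipaddress.ip_network("10.2.0.0/16")       # would encompass 10.2.128.0/24
print(split_until_clean(aggregate, covered, other))   # [IPv4Network('10.2.0.0/17')]
```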
Once any aggregate subnets have been defined, the process 400 generates (at 435) routes for the aggregate subnets as well as any non-aggregated addresses (either individual addresses or subnets). These routes have their next hop in the virtual datacenter logical network (e.g., the outward-facing interface of the T0 router that connects to the gateway router). In some embodiments, the routes generated for the gateway router are static routes and thus cannot be aggregated by routing protocol (e.g., BGP) mechanisms. The process 400 also configures (at 440) the routing table of this gateway router with these routes (e.g., via the network management and control system implemented in the virtual datacenter or via interaction with the cloud provider management system). The process 400 then ends.
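In code, operations 435 and 440 might look roughly like the following sketch; the program_gateway_routing_table() function is a placeholder for whatever interface the network management and control system (or the cloud provider management system) would actually expose, and the route dictionary format is an assumption:

```python
def program_gateway_routing_table(router_id, routes):
    """Placeholder for operation 440: in a real deployment this would call the
    network management and control system or a cloud provider API."""
    for r in routes:
        print(f"{router_id}: {r['destination']} -> {r['next_hop']}")

def build_gateway_routes(aggregates, leftovers, t0_next_hop="t0_router_310"):
    """Operation 435 (sketch): one static route per aggregate subnet plus one
    per non-aggregated address, all with a next hop in the virtual datacenter."""
    return [{"destination": str(net), "next_hop": t0_next_hop, "static": True}
            for net in list(aggregates) + list(leftovers)]

program_gateway_routing_table(
    "gateway_router_305",
    build_gateway_routes(aggregates=["10.2.0.0/17"], leftovers=["10.11.3.0/24"]),
)
```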
The aggregate subnet addresses are not only used by the gateway router 305 in some embodiments. In some embodiments, the aggregate subnet addresses are also provided to the connector gateways (e.g., the on-premises gateway connector router 315 and the transit gateway connector router 320) that connect the virtual datacenter 300 to the on-premises datacenter 325, the second virtual datacenter 335, and the native VPC 340, so that these routers forward data messages directed to the virtual datacenter network endpoints to the gateway router 305. The aggregate subnet addresses may be provided to these connector gateways using a routing protocol or through network management connections in different embodiments.
The gateway router 305 also receives routes for endpoints located in other datacenters or VPCs (e.g., the on-premises datacenter 325, virtual datacenter 335, and native VPC 340) in order to route outgoing data messages sent from the local network endpoint DCNs to these other datacenters or VPCs. In some embodiments, the network management components of the virtual datacenter perform similar route aggregation processes (e.g., the operations described above) to aggregate the network addresses for these outbound routes, while in other embodiments the gateway router 305 receives routes that have already been aggregated (e.g., aggregate routes for the network endpoints of the second virtual datacenter 335).
The bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 600. For instance, the bus 605 communicatively connects the processing unit(s) 610 with the read-only memory 630, the system memory 625, and the permanent storage device 635.
From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 630 stores static data and instructions that are needed by the processing unit(s) 610 and other modules of the electronic system. The permanent storage device 635, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 635.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 635, the system memory 625 is a read-and-write memory device. However, unlike the storage device 635, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 625, the permanent storage device 635, and/or the read-only memory 630. From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 605 also connects to the input and output devices 640 and 645. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 640 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 645 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, the bus 605 also couples the electronic system 600 to a network (not shown) through a network adapter. In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an intranet), or a network of networks (such as the Internet). Any or all components of the electronic system 600 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described, may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, a process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
8830835 | Casado et al. | Sep 2014 | B2 |
8964767 | Koponen et al. | Feb 2015 | B2 |
9137052 | Koponen et al. | Sep 2015 | B2 |
9209998 | Casado et al. | Dec 2015 | B2 |
9288081 | Casado et al. | Mar 2016 | B2 |
9444651 | Koponen et al. | Sep 2016 | B2 |
9755960 | Moisand et al. | Sep 2017 | B2 |
9876672 | Casado et al. | Jan 2018 | B2 |
9935880 | Hammam et al. | Apr 2018 | B2 |
10091028 | Koponen et al. | Oct 2018 | B2 |
10193708 | Koponen et al. | Jan 2019 | B2 |
10735263 | McAlary et al. | Aug 2020 | B1 |
10754696 | Chinnam et al. | Aug 2020 | B1 |
10931481 | Casado et al. | Feb 2021 | B2 |
11005710 | Garg et al. | May 2021 | B2 |
11005963 | Maskalik et al. | May 2021 | B2 |
11171878 | Devireddy et al. | Nov 2021 | B1 |
11212238 | Cidon et al. | Dec 2021 | B2 |
11240203 | Eyada | Feb 2022 | B1 |
11362992 | Devireddy et al. | Jun 2022 | B2 |
11582147 | Raman et al. | Feb 2023 | B2 |
11606290 | Patel et al. | Mar 2023 | B2 |
11729094 | Arumugam et al. | Aug 2023 | B2 |
11729095 | Sadasivan et al. | Aug 2023 | B2 |
20040223491 | Levy-Abegnoli | Nov 2004 | A1 |
20060133265 | Lee | Jun 2006 | A1 |
20070058604 | Lee et al. | Mar 2007 | A1 |
20080159150 | Ansari | Jul 2008 | A1 |
20090003235 | Jiang | Jan 2009 | A1 |
20090296713 | Kompella | Dec 2009 | A1 |
20090307713 | Anderson et al. | Dec 2009 | A1 |
20110126197 | Larsen et al. | May 2011 | A1 |
20110131338 | Hu | Jun 2011 | A1 |
20120054624 | Owens, Jr. et al. | Mar 2012 | A1 |
20120110651 | Biljon et al. | May 2012 | A1 |
20130044641 | Koponen et al. | Feb 2013 | A1 |
20130044751 | Casado et al. | Feb 2013 | A1 |
20130044752 | Koponen et al. | Feb 2013 | A1 |
20130044761 | Koponen et al. | Feb 2013 | A1 |
20130044762 | Casado et al. | Feb 2013 | A1 |
20130044763 | Koponen et al. | Feb 2013 | A1 |
20130044764 | Casado et al. | Feb 2013 | A1 |
20130142203 | Koponen et al. | Jun 2013 | A1 |
20130185413 | Beaty et al. | Jul 2013 | A1 |
20130283364 | Chang et al. | Oct 2013 | A1 |
20140282525 | Sapuram et al. | Sep 2014 | A1 |
20140334495 | Stubberfield et al. | Nov 2014 | A1 |
20140376367 | Jain et al. | Dec 2014 | A1 |
20150113146 | Fu | Apr 2015 | A1 |
20150193246 | Luft | Jul 2015 | A1 |
20160105392 | Thakkar et al. | Apr 2016 | A1 |
20160127202 | Dalvi et al. | May 2016 | A1 |
20160170809 | Schmidt et al. | Jun 2016 | A1 |
20160182336 | Doctor et al. | Jun 2016 | A1 |
20160234161 | Banerjee | Aug 2016 | A1 |
20170033924 | Jain et al. | Feb 2017 | A1 |
20170063673 | Maskalik et al. | Mar 2017 | A1 |
20170195517 | Seetharaman et al. | Jul 2017 | A1 |
20170353351 | Cheng et al. | Dec 2017 | A1 |
20180062917 | Chandrashekhar | Mar 2018 | A1 |
20180270308 | Shea et al. | Sep 2018 | A1 |
20180287902 | Chitalia et al. | Oct 2018 | A1 |
20180295036 | Krishnamurthy et al. | Oct 2018 | A1 |
20180332001 | Ferrero et al. | Nov 2018 | A1 |
20190068500 | Hira | Feb 2019 | A1 |
20190104051 | Cidon et al. | Apr 2019 | A1 |
20190104413 | Cidon et al. | Apr 2019 | A1 |
20190149360 | Casado et al. | May 2019 | A1 |
20190149463 | Bajaj et al. | May 2019 | A1 |
20190327112 | Nandoori et al. | Oct 2019 | A1 |
20190342179 | Barnard et al. | Nov 2019 | A1 |
20200235990 | Janakiraman | Jul 2020 | A1 |
20210067375 | Cidon et al. | Mar 2021 | A1 |
20210067439 | Kommula et al. | Mar 2021 | A1 |
20210067468 | Cidon et al. | Mar 2021 | A1 |
20210075727 | Chen et al. | Mar 2021 | A1 |
20210112034 | Sundararajan et al. | Apr 2021 | A1 |
20210126860 | Ramaswamy et al. | Apr 2021 | A1 |
20210136140 | Tidemann et al. | May 2021 | A1 |
20210184898 | Koponen et al. | Jun 2021 | A1 |
20210314388 | Zhou et al. | Oct 2021 | A1 |
20210336886 | Vijayasankar et al. | Oct 2021 | A1 |
20210359948 | Durrani et al. | Nov 2021 | A1 |
20210409303 | Pande | Dec 2021 | A1 |
20220094666 | Devireddy et al. | Mar 2022 | A1 |
20220311707 | Patel et al. | Sep 2022 | A1 |
20220311714 | Devireddy et al. | Sep 2022 | A1 |
20220377009 | Raman et al. | Nov 2022 | A1 |
20220377020 | Sadasivan et al. | Nov 2022 | A1 |
20220377021 | Sadasivan et al. | Nov 2022 | A1 |
20230006920 | Arumugam et al. | Jan 2023 | A1 |
20230006941 | Natarajan et al. | Jan 2023 | A1 |
20230239238 | Patel et al. | Jul 2023 | A1 |
Number | Date | Country |
---|---|---|
101977156 | Feb 2011 | CN |
111478850 | Jul 2020 | CN |
114531389 | May 2022 | CN |
2013026050 | Feb 2013 | WO |
2022060464 | Mar 2022 | WO |
2022250735 | Dec 2022 | WO |
Entry |
---|
Non-Published Commonly Owned U.S. Appl. No. 18/235,869, filed Aug. 20, 2023, 50 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 18/235,874, filed Aug. 20, 2023, 69 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/212,662, filed Mar. 25, 2021, 37 pages, VMware, Inc. |
Non-published commonly owned U.S. Appl. No. 17/344,956, filed Jun. 11, 2021, 61 pages, VMware, Inc. |
Non-published commonly owned U.S. Appl. No. 17/344,958, filed Jun. 11, 2021, 62 pages, VMware, Inc. |
Non-published commonly owned U.S. Appl. No. 17/344,959, filed Jun. 11, 2021, 61 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/365,960, filed Jul. 1, 2021, 32 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/366,676, filed Jul. 2, 2021, 32 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/839,336, filed Jun. 13, 2022, 50 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/845,716, filed Jun. 21, 2022, 32 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 18/119,208, filed Mar. 8, 2023, 46 pages, VMware, Inc. |
Number | Date | Country | |
---|---|---|---|
20240007386 A1 | Jan 2024 | US |