The invention relates generally to network switches. More particularly, the invention relates to network switches for use in a virtualized server data center environment.
Server virtualization in data centers is becoming widespread. In general, server virtualization describes a software abstraction that separates a physical resource and its use from the underlying physical machine. Most physical resources can be abstracted and provisioned as virtualized entities. Some examples of virtualized entities include the central processing unit (CPU), network input/output (I/O), and storage I/O.
Virtual machines (VM), which are a virtualization of a physical machine and its hardware components, play a central role in virtualization. A virtual machine typically includes a virtual processor, virtual system memory, virtual storage, and various virtual devices. A single physical machine can host a plurality of virtual machines. Guest operating systems execute on the virtual machines, and function as though executing on the actual hardware of the physical machine.
A layer of software provides an interface between the virtual machines resident on a physical machine and the underlying physical hardware. Commonly referred to as a hypervisor or virtual machine monitor (VMM), this interface multiplexes access to the hardware among the virtual machines, guaranteeing to the various virtual machines use of the physical resources of the machine, such as the CPU, memory, storage, and I/O bandwidth.
Typical server virtualization implementations have the virtual machines share the network adapter or network interface card (NIC) of the physical machine for performing external network I/O operations. The hypervisor typically provides a virtual switched network (called a vswitch) that provides interconnectivity among the virtual machines. The vswitch interfaces between the NIC of the physical machine and the virtual NICs (vNICs) of the virtual machines, each virtual machine having one associated vNIC. In general, each vNIC operates like a physical NIC, being assigned a media access control (MAC) address that is typically different from that of the physical NIC. The vswitch performs the routing of packets to and from the various virtual machines and the physical NIC.
Advances in network I/O hardware technology have produced multi-queue NICs that support network virtualization by reducing the burden on the vswitch and improving network I/O performance. Generally, multi-queue NICs assign transmit and receive queues to each virtual machine. The NIC places outgoing packets from a given virtual machine into the transmit queue of that virtual machine and incoming packets addressed to the given virtual machine into its receive queue. The direct assignment of such queues to each virtual machine thus simplifies the handling of outgoing and incoming traffic. As used herein, a virtualized server or host is a physical server or host in which virtual machines, multi-queue NICs, or both have been deployed; a non-virtualized server or host is a physical server lacking both such virtualization technologies.
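By way of illustration only, the per-virtual-machine queue assignment of a multi-queue NIC might be modeled as in the following sketch; the virtual machine names and queue indices are hypothetical and are not taken from any particular NIC.

```python
# Illustrative model of a multi-queue NIC's per-VM queue assignment: each
# virtual machine is given a dedicated transmit and receive queue, so the NIC
# can steer packets without involving the vswitch.  Names are hypothetical.

nic_queue_assignment = {
    "vm-a": {"tx_queue": 0, "rx_queue": 0},
    "vm-b": {"tx_queue": 1, "rx_queue": 1},
    "vm-c": {"tx_queue": 2, "rx_queue": 2},
}

def transmit_queue_for(vm_name):
    """Return the transmit queue dedicated to the sending virtual machine."""
    return nic_queue_assignment[vm_name]["tx_queue"]
```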
In a non-virtualized server environment, the network interface of each physical server (i.e., a single or multi-homed host) is directly connected to one port of a network switch. Therefore, in a non-virtualized environment, a port-based switch configuration on the network switch implicitly and directly corresponds to a physical host-based switch configuration. Thus, network policies that are to apply to a certain physical host are assigned to a particular port on the network switch.
This model succeeds in a non-virtualized host environment, but breaks down in a virtualized host environment because physical host machines, and thus network switch ports, no longer have a one-to-one mapping to servers or services. The virtualization of a physical host machine that can simultaneously run multiple virtual machines changes the traditional networking model in the following ways:
(1) Each virtual machine can run a full featured operating system and requires configuration and management, and because one physical host machine can support many virtual machines, the network configuration and administration effort per physical host machine increases significantly;
(2) Each multi-queued NIC can be provisioned into multiple virtual NICs and can be configured as multiple NICs within an operating system running in a non-virtualized host environment or within a virtual machine; and
(3) To provide network management of the various virtual machines hosted by a single hypervisor running on a single physical host machine, the hypervisor provides a virtual switch that provides connectivity between the various virtual machines running on the same physical host machine.
Consequent to these characteristics of virtualization, a physical port of the network switch no longer suffices to uniquely identify the servers or services of a physical host machine because now multiple virtual machines or multiple queues of a multi-queue NIC are connected to that single physical port.
In one aspect, the invention features a data center comprising a first physical host machine operating one or more virtualized entities and a second physical host machine operating one or more virtualized entities. A network switch has a first physical port connected to the first physical host machine, a second physical port connected to the second physical host machine, and a management module that acquires information about each virtualized entity operating on the physical host machines. The management module uses the information to associate each virtualized entity with the physical port to which the physical host machine operating that virtualized entity is connected. The management module also assigns each virtualized entity to a group and associates each group with a traffic-handling policy. A switching fabric processes packet traffic received from each of the virtualized entities based on the traffic-handling policy associated with the group assigned to that virtualized entity.
In another aspect, the invention features a data center comprising a physical host machine operating a virtualized entity and a network switch having a physical port connected to the physical host machine. The network switch has a management module that acquires information about the virtualized entity operating on the physical host machine and uses the information to associate the virtualized entity with the physical port and to detect when packet traffic arriving at the network switch is coming from the virtualized entity.
In yet another aspect, the invention features a network switch comprising a physical port connected to a physical host machine that is operating a virtualized entity and a management module in communication with the physical host machine through the physical port. The management module acquires information about the virtualized entity operating on the physical host machine and uses the information to associate the virtualized entity with the physical port and to detect when ingress packet traffic is coming from the virtualized entity.
In still another aspect, the invention features a method of configuring a network switch to process packet traffic from a virtualized entity operating on a physical host machine connected to a physical port of the network switch. The network switch acquires information about the virtualized entity operating on the physical host machine, associates the acquired information about the virtualized entity with the physical port, assigns the virtualized entity to a group associated with a traffic-handling policy, and processes packet traffic from the virtualized entity in accordance with the traffic-handling policy.
The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in various figures. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Data centers described herein extend virtualization beyond the server-network boundary, from the physical host machines (or servers) into the network switches. Such network switches are “virtualization-aware”. As used herein, a network element that is virtualization-aware generally means that the network element “sees” the virtualized host environment of a physical host machine, by learning of the existence and identities of one or more virtualized entities (VEs) on the physical host machine, and can detect, monitor, and control packet traffic to and from those virtualized entities. Examples of virtualized entities described herein include virtual machines (VMs) and multi-queued network I/O adapters (also called multi-queue NICs).
Through the network switch, an administrator can place these virtualized entities into groups (referred to herein as VE groups), irrespective of the physical host machine upon which the virtualized entities operate. To maximize management granularity and flexibility, membership in a VE group can be as small as a single physical host machine, a single virtual machine, or a single queue of a multi-queue NIC. Data centers can also have a mixed variety of VE groups; for example, the network switches can simultaneously manage VE groups established at the VE granularity and at the physical host machine granularity.
The network switch also associates each group with a traffic-handling policy. For example, the network element can assign access control lists (ACLs), quality of service (QoS), and VLAN membership at the VE group level. This grouping of virtualized entities also facilitates the control of network resource allocation; each VE group can have dedicated network resources. For example, the network switch assigns each group to a particular physical uplink port of the network switch. To network elements upstream of the network switch, this uplink connectivity causes the network switch to appear as a multi-homed NIC.
The network switch processes the packet traffic of each virtualized entity in accordance with the traffic-handling policy associated with the group to which that virtualized entity is assigned. Thus, the grouping, associated traffic-handling policy, and allocated network resources are a function of the virtualized entities, and not a function of the physical downlink ports of the network switch.
In addition, the grouping of virtualized entities can serve to isolate virtualized entities in one group from virtualized entities in another group, thereby maintaining service-oriented security for network traffic across VE groups. When a virtual machine moves from one physical host machine to another physical host machine, the traffic-handling policy associated with that virtual machine (e.g., the ACL, QoS, and VLAN assignments) moves with it. The particular physical location in the data center to which the virtual machine moves is of no consequence; the virtual machine remains a member of its assigned group and continues to undergo the traffic-handling policy and receive the allocated network resources associated with that group.
The ability to monitor and manage packet traffic at a VE granularity also facilitates service level agreement (SLA) configuration; an administrator can provision virtualized entities on a physical host machine to accommodate distinct and disjoint SLAs, and the grouping of such virtualized entities can be established so that the distinct SLAs can be individually serviced.
A virtualization-aware network switch can also implement redundancy and failover operations based on VE-granular groups. Service-level and application-aware health checks to support failover and redundancy can likewise occur at the VE-granular level, not just at the physical hardware level.
The physical host machine 12 is an embodiment of a physical server, such as a server blade. The physical host 12 includes hardware (not shown) such as one or more processors, memory, input/output (I/O) ports, a network input/output adapter (i.e., a network interface card or NIC) and, in some embodiments, one or more host bus adaptors (HBAs). The physical host machine 12 can reside alone or be stacked within a chassis with other physical host machines, for example, as in a rack server or in a blade server. In general, the physical host machine 12 provides a virtualized host environment that includes a virtualized entity (VE) 18.
The oversimplified embodiment of the network switch 16 shown in
The network switch 16 includes a management module 24, through which the network switch 16 is configured to be “virtualization-aware”. An Ethernet switch is an example of one implementation of the network switch 16. In one embodiment, the virtualization-aware network switch is implemented using a Rackswitch™ G8124, a 10 Gb Ethernet switch manufactured by Blade Network Technologies, Inc. of Santa Clara, Calif.
Three different examples of embodiments of virtualized host environments that can be provided by a physical host machine appear in
An example of virtualization software for implementing virtual machines on a physical host machine is VMware ESX Server™, produced by VMware® of Palo Alto, Calif. Other examples of virtualization software that can be used in conjunction with virtualization-aware network switches include XenSource™ produced by Citrix of Ft. Lauderdale, Fla., Hyper-V™ produced by Microsoft of Redmond, Wash., Virtuozzo™ produced by SWsoft of Herndon, Va., and Virtual Iron produced by Virtual Iron Software of Lowell, Mass. Advantageously, the virtualization-aware network switches described herein can detect, group, and manage virtualized entities irrespective of the particular brand of virtualization software running on any given physical host machine.
Each virtual machine 32 includes at least one application (e.g., a database application) executing within its own guest operating system. Generally, any type of application can execute on a virtual machine. In addition, each virtual machine 32 has an associated virtual NIC (vNIC) 36, with each vNIC 36 having its own unique virtual MAC address (vMAC).
In
The embodiment of virtualized host environment provided by a physical host machine 12′″ of
The management module 24 (
The management module 24 includes a management processor 50 that communicates with a switch configuration module 54. In one embodiment, the switch configuration module 54 is a software program executed by the management processor 50 to give the network switch its awareness of server virtualization, as described herein. Alternatively, the switch configuration module 54 may be implemented in firmware.
In brief overview, the switch configuration module 54 configures the network switch 16 to be aware of the existence and identity of virtualized entities operating on those physical host machines 12 to which the downlink ports 20 are connected. In addition, the switch configuration module 54 enables an administrator to define groups, associate such groups with traffic-handling policies, and to place virtualized entities into such groups.
More specifically, the switch configuration module 54 enables: (1) the grouping of virtualized entities of similar function (e.g., database servers in one VE group, finance servers in another VE group, web servers in yet another VE group); (2) the application of network policies on a VE-group basis (such as best effort QoS to web server virtual machines and guaranteed QoS to database server virtual machines); (3) distributed (across multiple network switches) and redundant uplink connectivity per group of virtualized entities across multiple physical host machines such that a network switch appears as an end-host (server) multi-homed NIC to upstream network elements; (4) failover and redundancy per VE group, so that on a failover the applicable traffic-handling policy moves to a backup VE group, making a VE failover transparent to upstream network elements; (5) service-oriented security for network traffic across different VE groups (e.g., traffic to web server virtual machines is segregated from traffic to finance server virtual machines); and (6) service-level and application-aware health checks to provide failover and redundancy at the VE-granular level, and not just at the physical hardware level.
The switch configuration module 54 employs various data structures (e.g., tables) for maintaining associations among virtualized entities, groups, and ports. A first table 58 maintains associations between downlink ports 20 and virtualized entities, a second table 60 maintains associations between virtualized entities and groups, and a third table 62 maintains associations between groups and uplink ports 22. Although shown as separate tables, the tables 58, 60, 62 can be embodied in one table or in different types of data structures.
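By way of a non-limiting sketch, the three associations could be modeled as simple lookup structures; the class name, field names, and method names below are illustrative assumptions rather than elements of the specification.

```python
# Illustrative model of the three associations kept by the switch
# configuration module: port-to-VE (table 58), VE-to-group (table 60), and
# group-to-uplink-port (table 62).  All names are hypothetical.

class VEGroupTables:
    def __init__(self):
        self.port_to_ve = {}       # downlink port id -> set of VE addresses (vMAC/MAC)
        self.ve_to_group = {}      # VE address -> group name
        self.group_to_uplink = {}  # group name -> uplink port id

    def associate_ve_with_port(self, ve_addr, downlink_port):
        """Record that a virtualized entity is reachable through a downlink port."""
        self.port_to_ve.setdefault(downlink_port, set()).add(ve_addr)

    def assign_ve_to_group(self, ve_addr, group):
        """Place a virtualized entity into a VE group."""
        self.ve_to_group[ve_addr] = group

    def assign_group_to_uplink(self, group, uplink_port):
        """Dedicate an uplink port to a VE group."""
        self.group_to_uplink[group] = uplink_port
```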
At step 84, the network switch 16 acquires the identity of a virtualized entity and associates (step 86) the virtualized entity with a downlink port 20. The port-to-VE table 58 maintains this association. An administrator assigns (step 88) the virtualized entity to one of the defined groups. The VE-to-group table 60 can hold this assignment.
After being configured to be aware of a particular virtualized entity, the network switch 16 can detect when ingress packet traffic is coming from or addressed to the virtualized entity. Upon receiving packet traffic on a downlink port 20 related to the virtualized entity, the switching fabric 52 processes (step 90) the traffic in accordance with the network policy associated with the group in which the virtualized entity is a member. If in processing the packet traffic the switching fabric 52 determines to forward the packet traffic to an upstream network element, the switching fabric 52 selects the particular uplink port 22 dedicated to the group in which the virtualized entity is a member.
The network switch 16 can learn of a virtualized entity in one of three manners: (1) the network switch can learn the identity of a virtualized entity from packet traffic received on a downlink port; (2) the network switch can directly query the virtualized entity for identifying information; or (3) an administrator can directly enter the information identifying the virtualized entity into the management module.
Packets arriving at a downlink port 20 have various fields for carrying information from which the network element can detect and identify a virtualized entity from which the packet has come. One such field holds the Organizationally Unique Identifier (OUI). Another such field is the source address. In brief, the network switch extracts the OUI from a received packet and determines whether that OUI is associated with a vendor of virtualization software. For example, hexadecimal values 00-0C-29 and 00-50-56 are associated with VMware, hexadecimal value 00-16-3E is associated with XenSource, hexadecimal value 00-03-FF is associated with Microsoft, hexadecimal value 00-0F-4B is associated with Virtual Iron, and hexadecimal value 00-18-51 is associated with SWsoft.
If, based on the OUI value, the network switch determines that the packet is from a virtualization software vendor, the network switch extracts the address from the source address field of the packet. This address serves to identify the virtualized entity. For a virtual machine, this address is a unique virtual MAC address of the vNIC of that virtual machine. For a multi-queue NIC, this address is a unique MAC address associated with one of the queues of that multi-queue NIC. In virtualized host environments having both virtual machines and multi-queue NICs, the network switch can use either the vMAC address of the vNIC or the MAC address of a queue to identify the virtualized entity. The network switch places the virtual MAC (or MAC) address into the port-VE table 58, associating that address with the downlink port on which the packet arrived.
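The following sketch illustrates this OUI-based learning, assuming colon- or hyphen-separated MAC notation and representing the port-to-VE association as a plain dictionary; the function name and vendor map are illustrative only.

```python
# Sketch of OUI-based learning: the first three octets of the source MAC
# address are compared against OUIs of virtualization-software vendors; on a
# match, the address is recorded as a VE against the arrival downlink port.

VIRTUALIZATION_OUIS = {
    "00:0C:29": "VMware",
    "00:50:56": "VMware",
    "00:16:3E": "XenSource",
    "00:03:FF": "Microsoft",
    "00:0F:4B": "Virtual Iron",
    "00:18:51": "SWsoft",
}

def learn_ve_from_packet(port_to_ve, src_mac, downlink_port):
    """Record src_mac as a virtualized entity if its OUI matches a known vendor."""
    oui = src_mac.upper().replace("-", ":")[:8]   # first three octets
    if oui in VIRTUALIZATION_OUIS:
        port_to_ve.setdefault(downlink_port, set()).add(src_mac.upper())
        return True
    return False
```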
Instead of eavesdropping on incoming packet traffic to detect and identify a virtualized entity, the network element can directly query the virtualized entities operating on a physical host machine to acquire attribute information. The network element can use one of a variety of attribute-gathering mechanisms to send an information request to a driver of a virtual machine, hypervisor, or multi-queue NIC. Examples of such attribute-gathering mechanisms include, but are not limited to, proprietary and non-proprietary protocols, such as CIM (Common Information Model), and application program interfaces (APIs), such as VI API for VMware virtualized environments. Examples of attributes that may be gathered include, but are not limited to, the name of the virtualized entity (e.g., VM name, hypervisor name), the MAC or vMAC address, and the IP (Internet Protocol) address of the VM or hypervisor. The network switch places the virtual MAC (or MAC) address into the port-VE table 58, associating that address with the downlink port to which the physical host machine operating the virtualized entity is connected.
Alternatively, the administrator can directly configure the management module 24 of the network element with information that identifies the virtualized entity. Typically, an administrator comes to know the vMAC addresses of the vNICs (or MAC addresses of the queues of a multi-queue NIC) when configuring a virtualized host environment on a physical host machine. This address information can be entered into the network switch before the virtualized entity begins to transmit traffic.
Typically, administrators of a data center tend to place servers that perform a similar function (application or service) into a group and apply certain policies to this group (and thus to each server in the group). Such policies include, but are not limited to, security policies, storage policies, and network policies. Reference herein to a “traffic-handling policy” contemplates generally any type of policy that can be applied to traffic related to an application or service. In contrast, reference herein to a “network policy” specifically contemplates a network layer 2 or layer 3 switching configuration on the network switch, including, but not limited to, a VLAN configuration, a multicast configuration, QoS and bandwidth management policies, ACLs and filters, security and authentication policies, a load balancing and traffic steering configuration, and a redundancy and failover configuration. Although described herein primarily with reference to network policies, the principles described herein generally apply to traffic-handling policies, examples of which include security and storage policies.
Administrators apply network policies to virtualized entities on a group basis, regardless of the physical location of the virtualized entity or the particular downlink port 20 by which the virtualized entity accesses the network switch 16. For example, an administrator may place those servers or virtual machines performing database functions into a first VE group, while placing those servers or virtual machines performing web server functions into a second VE group. To the first VE group the administrator can assign high-priority QoS (quality of service), port security, access control lists (ACL), and strict session-persistent load balancing, whereas to the second VE group the administrator can assign less stringent policies, such as best-effort network policies. Furthermore, the administrator can use VE groups to isolate traffic associated with different functions from each other, thereby securing data within a given group of servers or virtual machines. Moreover, the network switch 16 can ensure that virtualized entities belonging to one VE group cannot communicate with virtualized entities belonging to another VE group.
An administrator further associates groups with specific network resources including, for example, bandwidth. In addition, each group can optionally be assigned a given uplink port 22 of the network switch 16, through which the switching fabric 52 forwards traffic from the virtualized entities belonging to that group toward their destinations. More than one group may be assigned the same uplink port.
Any number of different VE groups may be defined. A given VE group can be comprised of a single physical host machine, a single virtual machine, or a single queue in a multi-queue NIC. Such group assignments enable the network switch to operate at a virtual machine granularity, a queue granularity, at a physical machine granularity, or at a combination thereof.
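As a purely illustrative sketch of such mixed-granularity grouping, the group names, member identifiers, and policy fields below are hypothetical examples of the kinds of ACL, QoS, VLAN, and uplink settings an administrator might attach to VE groups.

```python
# Hypothetical VE groups with per-group traffic-handling policies, uplink
# assignments, and mixed-granularity membership (a VM, a NIC queue, or an
# entire physical host machine).  All names and values are illustrative.

ve_group_policies = {
    "database": {"qos": "guaranteed", "vlan": 10,
                 "acl": ["permit tcp any any eq 5432"], "uplink_port": "uplink-1"},
    "web":      {"qos": "best-effort", "vlan": 30,
                 "acl": ["permit tcp any any eq 80", "permit tcp any any eq 443"],
                 "uplink_port": "uplink-3"},
}

ve_group_members = {
    "database": {"vm:db-1", "vm:db-2"},     # single virtual machines
    "web":      {"vm:web-1", "vm:web-2"},
    "storage":  {"queue:nic1-q2"},          # a single queue of a multi-queue NIC
    "legacy":   {"host:server-3"},          # an entire physical host machine
}
```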
As an example illustration of grouping,
In this illustrated embodiment, the hypervisor 30 of physical host machine 12-1 generates individual virtual machines 32-1, 32-2, and 32-3; physical host machine 12-2 is running virtual machine 32-4; and physical host machine 12-3 is running virtual machines 32-5 and 32-6. Consider, for illustration purposes, that the application programs running on virtual machines 32-1, 32-4, and 32-5 are database application programs, those running on virtual machines 32-3 and 32-6 are web server application programs, and the application running on virtual machine 32-2 is an engineering application program. Each virtual machine 32 has a virtual NIC (vNIC) 36, each having an associated virtual MAC address (vMAC).
The uplink ports 22 connect the network switch 16 to a plurality of networks 14-1, 14-2, 14-3 (generally, 14), each uplink port 22 being used to connect to a different one of the networks. Specifically, the network 14-1 is connected to uplink port 22-1; network 14-2, to uplink port 22-2; and network 14-3, to uplink 22-3. Examples of networks 14 include, but are not limited to, finance Ethernet network, engineering Ethernet network, and operations Ethernet network. Although shown as separate networks 14-1, 14-2, 14-3, these networks can be part of a larger network. Also for illustration purposes, consider that the network 14-1 is the target of communications from the database applications running on virtual machines 32-1, 32-4, and 32-5, that the network 14-2 is the target of communications from the engineering application running on the virtual machine 32-2, and that the network 14-3 is the target of communications from the web server applications running on virtual machines 32-3 and 32-6. In
During the operation of the data center 10′, the management module 24 of the network switch 16 becomes aware of the identities of the virtual machines 32 (through one of the means previously described) running on the various physical host machines 12. Each virtual machine 32 is associated with the downlink port 20 to which the physical host machine 12 is directly connected.
The administrator configures the management module 24 to place the virtual machines 32-1, 32-4, and 32-5 into a first group because of their common functionality (database access), the virtual machine 32-2 into a second group, and the virtual machines 32-3 and 32-6 into a third group because of their common functionality (web server).
In addition, the administrator configures the management module 24 to assign each defined group to one of the uplink ports 22.
After the configuration of the network switch 16, as described above, packets are switched at the granularity of a single virtual machine (in contrast to being switched at a coarser granularity of a single physical host machine or of a single downlink port). For instance, whereas packets from both virtual machines 32-1 and 32-3 running on the same physical host machine 12-1 arrive at the same downlink port 20-1, because of the above-described configuration, the network switch 16 can separate the packets at a virtual machine granularity, forwarding those packets from virtual machine 32-1 to uplink port 22-1 and those packets from virtual machine 32-3 to uplink port 22-3.
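By way of illustration, the hypothetical tables of the earlier sketch could be populated for this example as follows; the vMAC strings are placeholders.

```python
# Hypothetical population of the tables for this example: virtual machines
# 32-1..32-6 on downlink ports 20-1..20-3, three VE groups, and uplink ports
# 22-1..22-3.  The vMAC strings are placeholders.

port_to_ve = {
    "20-1": {"vMAC-32-1", "vMAC-32-2", "vMAC-32-3"},  # physical host machine 12-1
    "20-2": {"vMAC-32-4"},                            # physical host machine 12-2
    "20-3": {"vMAC-32-5", "vMAC-32-6"},               # physical host machine 12-3
}
ve_to_group = {
    "vMAC-32-1": "database", "vMAC-32-4": "database", "vMAC-32-5": "database",
    "vMAC-32-2": "engineering",
    "vMAC-32-3": "web", "vMAC-32-6": "web",
}
group_to_uplink = {"database": "22-1", "engineering": "22-2", "web": "22-3"}

# Packets from 32-1 and 32-3 arrive on the same downlink port 20-1 but leave
# on different uplink ports (22-1 versus 22-3), because the lookup proceeds
# per virtual machine rather than per physical port.
```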
Presuming that the address of the virtualized entity is currently in the port-VE table 58 and currently recorded as associated with the downlink port at which the packet arrived, the network switch identifies (step 106) the virtualized entity. Using the identified virtualized entity, the network switch searches the VE-group table 60 to identify (step 108) the group to which the virtualized entity is assigned. After identifying the group, the network switch allocates (step 110) any network resources associated with the group, acquires (step 112) the identity of the uplink port assigned to the group from the group-port table 62, and applies (step 114) the traffic-handling policy associated with the group to the packet when forwarding the packet to the acquired uplink port.
If the address of the virtualized entity is currently in the port-VE table 58, but it appears associated with a different downlink port, then the virtualized entity has moved to a different physical host machine. The management module updates the port-VE table 58 to reflect the present association between the virtualized entity and the present physical downlink port being used to access the network switch. The virtualized entity remains a member of its previously assigned group and continues to receive the same network resources and undergo the same traffic-handling policy that it was previously assigned.
If the address of the virtualized entity is not currently in the port-VE table 58, the management module 24 may have discovered a new virtualized entity. The management module 24 can then add the vMAC or MAC address of the virtualized entity to the port-VE table 58 and prompt the administrator to assign the virtualized entity to a group. After the virtualized entity becomes a member of a group, the network element can process traffic from the virtualized entity in accordance with the traffic-handling policy associated with that group.
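The following sketch, reusing the hypothetical VEGroupTables structure from the earlier sketch, illustrates this per-packet handling, including the migration and new-VE cases; the two helper functions are placeholders, not elements of the described switch.

```python
# Sketch of per-packet handling: identify the VE by source address, update the
# port association on migration, prompt for a group assignment for a newly
# discovered VE, and otherwise forward on the uplink dedicated to the group.

def request_group_assignment(ve):
    # Placeholder: in a real switch this would notify the administrator.
    print(f"new virtualized entity {ve}: awaiting group assignment")

def apply_policy_and_forward(packet, policy, uplink):
    # Placeholder for the switching fabric applying ACL/QoS/VLAN and forwarding.
    print(f"forwarding on uplink {uplink} under policy {policy}")

def process_packet(tables, policies, src_mac, ingress_port, packet):
    ve = src_mac.upper()
    known_port = next((p for p, ves in tables.port_to_ve.items() if ve in ves), None)

    if known_port is None:
        # Possibly a newly discovered VE: record it, then await group assignment.
        tables.associate_ve_with_port(ve, ingress_port)
        request_group_assignment(ve)
        return

    if known_port != ingress_port:
        # The VE has moved to another physical host machine: update the port
        # association; its group, policy, and network resources follow it.
        tables.port_to_ve[known_port].discard(ve)
        tables.associate_ve_with_port(ve, ingress_port)

    group = tables.ve_to_group.get(ve)
    if group is None:
        return  # still awaiting a group assignment

    apply_policy_and_forward(packet, policies[group], tables.group_to_uplink[group])
```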
One approach for implementing grouping is to use VLANs (virtual LANs) to group the virtualized entities of similar function. If the network switch is VLAN-aware, the VLAN tag (IEEE 802.1Q) can serve to identify the group.
For a VLAN-agnostic (i.e., VLAN-transparent) network switch, a Q-in-Q VLAN tag (IEEE 802.1Q-in-Q) can be used to identify the group, while the inner VLAN tag represents a user's virtual LAN and remains transparent to the network switch.
To translate between VLANs and virtualized entities, the network switch can use a translation table (e.g., the VE-group table 60) to associate VLAN tag values (whether an inner VLAN tag or outer VLAN tag) with MAC addresses of the virtualized entities. Alternatively, intelligent filters or ACLs can be used to translate between VLAN tag values (inner or outer VLAN tags) and the MAC addresses of the virtualized entities. As another alternative, the attribute-gathering mechanisms described above, namely, the CIM or proprietary APIs and protocols for acquiring attribute information about a virtualized entity, can be used to translate between virtualized entities and VM-granular network policies.
To accommodate the use of VLANs for identifying groups of virtualized entities, the network switch has a VLAN-based configuration engine for all network policies so that the network switch can provide group-based (VE-granular) configuration and network policies.
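By way of illustration, a sketch of deriving the VE group from the outermost VLAN tag of an Ethernet frame follows; the tag-to-group map stands in for the translation table described above, and the VLAN IDs and group names are hypothetical.

```python
# Sketch of mapping the outermost VLAN tag of an Ethernet frame to a VE group.
# The tag-to-group dictionary stands in for the translation table; the VLAN
# IDs and group names are illustrative.

import struct

VLAN_TAG_TO_GROUP = {10: "database", 20: "engineering", 30: "web"}

def group_from_frame(frame: bytes):
    """Return the VE group implied by the frame's outermost VLAN tag, if any."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype not in (0x8100, 0x88A8):  # 802.1Q tag or 802.1ad (Q-in-Q) outer tag
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    vlan_id = tci & 0x0FFF                 # VLAN ID is the low 12 bits of the TCI
    return VLAN_TAG_TO_GROUP.get(vlan_id)
```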
As described previously, a given group can be comprised of a single physical host machine, a single virtual machine, or a single queue in a multi-queue NIC. As shown in
During the operation of the data center 10″, the management module 24 of the network switch 16 becomes aware of the identities of the virtual machines 32-1, 32-2, 32-3, 32-4, and 32-5 and of each queue 44 of the multi-queue NIC 42. Each virtualized entity (i.e., virtual machine and queue) is associated with the downlink port 20 to which the physical host machine 12 is directly connected.
The administrator configures the management module 24 to place the virtual machine 32-1 into a first VE group, the virtual machine 32-2 into a second VE group, the virtual machine 32-3 into a third VE group, a queue 44 of the multi-queue NIC 42 into a fourth VE group, and the entire physical host machine 12-3 into a fifth VE group. Alternatively, the administrator can place the virtual machines 32-4 and 32-5 in the first group with the virtual machine 32-1 because these virtual machines perform a similar function (as denoted by their shading). In addition, the administrator configures the management module 24 to assign each defined group to one of the uplink ports 22. An uplink port 22 can be shared by multiple groups or be exclusively dedicated to one group in particular. After the configuration of the network switch 16, as described above, packets are switched at the granularity of a single virtual machine (as is done for virtual machines 32-1, 32-2, and 32-3), at the granularity of a single queue, and at the granularity of a single physical host machine.
The practice of grouping virtualized entities and applying network policies on a group basis can scale beyond the network switch 16. Groups can span multiple tiers of a network topology tree and, hence, enable the deployment of group-based network policies and fine-grained network resource control throughout the data center. As an illustrative example of such scalability,
Each network switch 16-1, 16-2 is virtualization-aware, places VEs into groups, and applies network policies to VE traffic based on the groups. In
Each network switch 16-1, 16-2 is connected to an aggregator switch 150. The aggregator switch 150 can be in the same chassis as one of the network switches or in a chassis separate from the network switches. In one embodiment, the aggregator switch 150 is in communication with a gateway switch 160.
To support network policy management across the entire data center at a VE granularity, the aggregator switch 150 and, optionally, the gateway switch 160 also become VE group-based. One approach to extending VE groups to upstream network elements in the data center (i.e., to aggregator and gateway switches) is for the aggregator switch 150 to run a control protocol that communicates with the network switches to acquire the group attributes and the group-to-uplink port assignments made at those network switches and to pass such information to the gateway switch 160. Examples of attributes acquired for a given group include the VE group identifier, the members of the VE group, the uplink bandwidth for the VE group, and the ACLs associated with the VE group. Alternatively, the data packets passing from the network switches to the aggregator switch can carry the group attributes (e.g., within the 802.1Q tag or 802.1Q-in-Q tag). In addition, the aggregator switch 150 assigns groups to its uplink ports, and consequently appears as a multi-homed NIC to its upstream network elements (e.g., the gateway switch 160).
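As a purely illustrative sketch, a per-group attribute record of the kind an aggregator switch might collect from downstream network switches could look like the following; the field names and record layout are assumptions, not a defined protocol.

```python
# Hypothetical per-group attribute record exchanged between a network switch
# and an aggregator switch, covering the attributes mentioned above: group
# identifier, members, uplink bandwidth, and ACLs.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VEGroupAttributes:
    group_id: str
    members: List[str]                     # vMAC/MAC addresses of member VEs
    uplink_bandwidth_mbps: int
    acls: List[str] = field(default_factory=list)
    uplink_port: str = ""

def merge_group_attributes(switch_reports: List[List[VEGroupAttributes]]) -> Dict[str, List[VEGroupAttributes]]:
    """Merge group attribute reports collected from downstream network switches."""
    merged: Dict[str, List[VEGroupAttributes]] = {}
    for report in switch_reports:          # one report per network switch
        for attrs in report:
            merged.setdefault(attrs.group_id, []).append(attrs)
    return merged
```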
Embodiments of the described invention may be implemented in hardware (digital or analog), software (program code), or combinations thereof. Program code implementations of the present invention may be embodied as computer-executable instructions in or on one or more articles of manufacture or computer-readable media. A computer, computing system, or computer system, as used herein, is any programmable machine or device that inputs, processes, and outputs instructions, commands, or data. In general, any standard or proprietary programming or interpretive language can be used to produce the computer-executable instructions. Examples of such languages include C, C++, Pascal, JAVA, BASIC, Visual Basic, and C#.
Examples of articles of manufacture and computer-readable media in which the computer-executable instructions may be embodied include, but are not limited to, a floppy disk, a hard-disk drive, a CD-ROM, a DVD-ROM, a flash memory card, a USB flash drive, a non-volatile RAM (NVRAM or NOVRAM), a FLASH PROM, an EEPROM, an EPROM, a PROM, a RAM, a ROM, a magnetic tape, or any combination thereof. The computer-executable instructions may be stored as, e.g., source code, object code, interpretive code, executable code, or combinations thereof.
While the invention has been shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 61/044,950, filed on Apr. 15, 2008, the entirety of which application is incorporated by reference herein.
| Filing Document | Filing Date | Country | Kind | 371c Date |
|---|---|---|---|---|
| PCT/US09/40416 | 4/14/2009 | WO | 00 | 10/8/2010 |
| Number | Date | Country |
|---|---|---|
| 61044950 | Apr 2008 | US |