Hardware assisted flow acceleration and L2 SMAC management in a heterogeneous distributed multi-tenant virtualized clustered system

Information

  • Patent Grant
  • Patent Number
    9,519,501
  • Date Filed
    Monday, September 30, 2013
  • Date Issued
    Tuesday, December 13, 2016
Abstract
A method performed by a hypervisor in a virtual network traffic management cluster, the method comprising: assigning a set of continuous available source media access control (SMAC) addresses to one or more virtual network traffic management devices in a network traffic management cluster, the one or more virtual network traffic management devices configured to handle connections for virtual guest instances; assigning a region of predetermined size in a SMAC-index mapping table to a corresponding virtual network traffic management device; wherein the assigned SMAC addresses and assigned region in the SMAC-index mapping table are accessible by the virtual guest instances; and maintaining SMAC-index pool allocation to virtual guest instances handled by corresponding virtual network traffic management devices.
Description
FIELD

The present disclosure relates to a hardware assisted flow acceleration and L2 SMAC management in a heterogeneous distributed multi-tenant virtualized clustered system.


BACKGROUND

Some existing network traffic management devices include a network interface comprising a software-based control segment (CS) and a hardware-based data flow segment (DFS), whereby the network interface performs network address translations or transformations to facilitate packet transmission to clients and servers. Performing the transformations in the hardware DFS component is much faster than in the software CS component. Whenever a new flow is handled by the network traffic management device, the CS enters a new flow entry and translation information into a flow table accessible by the network interface.


More than one network traffic management device may be incorporated into a virtualized clustered system, in which the network traffic management devices in the cluster can operate as virtual network devices that share the same flow table. Each network traffic management device in the cluster is referred to as a ‘guest’. For each guest, the network interface enters a flow entry, which may include source and destination L2 MAC and virtual MAC addresses, source IP, destination IP, source TCP port, destination TCP port, sequence number(s), VLAN, and/or a timestamp, for example. Accordingly, the amount of data that needs to be entered into the flow table is often more than 64 bytes, which is the per-flow storage size of the flow table.
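The size pressure described above can be sketched numerically. The field names and byte sizes below are assumptions chosen for illustration (the patent lists the field types but not their exact sizes); even a conservative layout overruns a 64-byte per-flow budget once two full MAC address pairs are included.

```python
# Hypothetical field sizes (in bytes) for a full flow-table entry holding the
# fields enumerated above: L2 MACs, virtual MACs, IPs, TCP ports, sequence
# numbers, VLAN, and a timestamp. All names and sizes are illustrative only.
FLOW_ENTRY_FIELDS = {
    "src_mac": 6, "dst_mac": 6,         # L2 MAC addresses
    "src_vmac": 6, "dst_vmac": 6,       # virtual MAC addresses
    "src_ip": 4, "dst_ip": 4,           # IPv4 for simplicity
    "src_port": 2, "dst_port": 2,       # TCP ports
    "seq_number": 4, "ack_number": 4,   # sequence number(s)
    "vlan": 2, "timestamp": 8,
    "transform_flags": 4, "state": 4,   # assumed bookkeeping fields
    "next_hop": 6, "padding": 2,
}

PER_FLOW_BUDGET = 64  # bytes available per flow in the hardware flow table

total = sum(FLOW_ENTRY_FIELDS.values())
print(f"full entry: {total} bytes, budget: {PER_FLOW_BUDGET} bytes")
```

Note that the four MAC addresses alone consume 24 of the 64 available bytes, which is what motivates replacing them with a small index in the sections that follow.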


SUMMARY

In an aspect, a method performed by a hypervisor in a virtual network traffic management cluster is disclosed. The method comprises assigning a set of continuous available source media access control (SMAC) addresses to one or more virtual network traffic management devices in a network traffic management cluster, the one or more virtual network traffic management devices configured to handle connections for virtual guest instances. The method comprises assigning a region of predetermined size in a MAC table to a corresponding virtual network traffic management device; wherein the assigned SMAC addresses and assigned region in the MAC table are accessible by the virtual guest instances. The method comprises maintaining SMAC allocation to virtual guest instances handled by corresponding virtual network traffic management devices.


In an aspect, a processor readable medium having stored thereon instructions for performing a method is disclosed. The medium comprises processor executable code which, when executed by at least one processor, causes the processor to assign a set of continuous available source media access control (SMAC) addresses to one or more virtual network traffic management devices in a network traffic management cluster, the one or more virtual network traffic management devices configured to handle connections for virtual guest instances. The processor is further configured to assign a region of predetermined size in a MAC table to a corresponding virtual network traffic management device; wherein the assigned SMAC addresses and assigned region in the MAC table are accessible by the virtual guest instances. The processor is further configured to maintain SMAC allocation to virtual guest instances handled by corresponding virtual network traffic management devices.


In an aspect, a hypervisor of a network traffic management device comprises a network interface configured to communicate with one or more virtual network traffic management devices in a virtual network management cluster; and a memory containing a non-transitory machine readable medium comprising machine executable code having stored thereon instructions to be executed to perform a method. The hypervisor of the network traffic management device includes a processor coupled to the network interface and the memory. The processor or network interface is configured to execute the code to assign a set of continuous available source media access control (SMAC) addresses to one or more virtual network traffic management devices in a network traffic management cluster, the one or more virtual network traffic management devices configured to handle connections for virtual guest instances. The processor or network interface also assigns a region of predetermined size in a MAC table to a corresponding virtual network traffic management device; wherein the assigned SMAC addresses and assigned region in the MAC table are accessible by the virtual guest instances. The processor or network interface further maintains SMAC allocation to virtual guest instances handled by corresponding virtual network traffic management devices.


In one or more of the above aspects, an L2 MAC address and a virtual MAC address for each virtual guest instance are stored in the one or more virtual network traffic management devices.


In one or more of the above aspects, at least a portion of the MAC table is assigned to a corresponding virtual cluster having the one or more virtual guest instances.


In one or more of the above aspects, at least a portion of the MAC table has a base boundary and a limit boundary, wherein storing of data associated with the virtual guest instance is performed from the limit boundary in a converging manner to the base boundary.


In one or more of the above aspects, the processor is further configured to receive a packet at the one or more virtual network traffic management devices; identify a flow signature from the received packet; perform a look up in a flow table using the identified flow signature; retrieve index information from the flow table; access the MAC table and retrieve a MAC address and transform information for the packet to establish a connection.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of an example system environment that includes a network traffic management device in accordance with an aspect of the present disclosure;



FIG. 2A is a block diagram of the network traffic management device in accordance with an aspect of the present disclosure;



FIG. 2B is a block diagram of the network interface of the network traffic management device in accordance with an aspect of the present disclosure;



FIG. 3 illustrates a flow chart describing the process performed in accordance with an aspect of the present disclosure; and



FIG. 4 illustrates a flow chart describing the process performed in accordance with an aspect of the present disclosure.





While these examples are susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail preferred examples with the understanding that the present disclosure is to be considered as an exemplification and is not intended to limit the broad aspect to the embodiments illustrated.


DETAILED DESCRIPTION

In general, the system and method of the present disclosure reduce the size of the transformation data in the flow table by placing in the flow table an index which is representative of the L2 and virtual MAC addresses required for each flow. The L2 and virtual MAC addresses for each corresponding flow are stored in a separate MAC table, in which the index for the corresponding flow points to the corresponding L2 and virtual MAC addresses for that flow in the MAC table.
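A minimal sketch of this indexing scheme, with assumed data layouts: the flow table stores a small index instead of the L2 and virtual MAC addresses themselves, and a separate MAC table maps each index back to the address pair. All names here are illustrative, not the patent's actual implementation.

```python
mac_table = []    # index -> (l2_mac, virtual_mac)
flow_table = {}   # flow signature -> flow entry holding the index

def register_macs(l2_mac: bytes, virtual_mac: bytes) -> int:
    """Store a MAC address pair in the MAC table and return its index."""
    mac_table.append((l2_mac, virtual_mac))
    return len(mac_table) - 1

idx = register_macs(b"\x02\x00\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\xfe")
sig = ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP")
flow_table[sig] = {"mac_index": idx}   # a small index instead of 12 MAC bytes

# Resolving the index recovers both addresses for the flow.
l2, vmac = mac_table[flow_table[sig]["mac_index"]]
```

The design choice here is a classic space-for-indirection trade: many flows share the same few guest MAC address pairs, so moving the pairs into a shared table deduplicates them and shrinks every flow entry.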



FIG. 1 is a diagram of an example system environment that includes a network traffic management device in accordance with an aspect of the present disclosure. The example system environment 100 includes one or more Web and/or non Web application servers 102 (referred to generally as “servers”), one or more client devices 106 and one or more network traffic management devices 110, although the environment 100 can include other numbers and types of devices in other arrangements. The network traffic management device 110 is coupled to the servers 102 via local area network (LAN) 104 and client devices 106 via a wide area network 108. Generally, client device requests sent over the network 108 to the servers 102 are received or intercepted by the network traffic management device 110.


Client devices 106 comprise network computing devices capable of connecting to other network computing devices, such as network traffic management device 110 and/or servers 102. Such connections are performed over wired and/or wireless networks, such as network 108, to send and receive data, such as for Web-based requests, receiving server responses to requests and/or performing other tasks. Non-limiting and non-exhaustive examples of such client devices 106 include personal computers (e.g., desktops, laptops), tablets, smart televisions, video game devices, mobile and/or smart phones and the like. In an example, client devices 106 can run one or more Web browsers that provide an interface for operators, such as human users, to interact with for making requests for resources to different web server-based applications and/or Web pages via the network 108, although other server resources may be requested by client devices.


The servers 102 comprise one or more server network devices or machines capable of operating one or more Web-based and/or non Web-based applications that may be accessed by other network devices (e.g. client devices, network traffic management devices) in the environment 100. The servers 102 can provide web objects and other data representing requested resources, such as particular Web page(s), image(s) of physical objects, JavaScript and any other objects, that are responsive to the client devices' requests. It should be noted that the servers 102 may perform other tasks and provide other types of resources. It should be noted that while only two servers 102 are shown in the environment 100 depicted in FIG. 1, other numbers and types of servers may be utilized in the environment 100. It is contemplated that one or more of the servers 102 may comprise a cluster of servers managed by one or more network traffic management devices 110. In one or more aspects, the servers 102 may be configured to execute any version of Microsoft® IIS server, RADIUS server, DIAMETER server and/or Apache® server, although other types of servers may be used.


Network 108 comprises a publicly accessible network, such as the Internet, which is connected to the servers 102, client devices 106, and network traffic management devices 110. However, it is contemplated that the network 108 may comprise other types of private and public networks that include other devices. Communications, such as requests from clients 106 and responses from servers 102, take place over the network 108 according to standard network protocols, such as the HTTP, UDP and/or TCP/IP protocols, as well as other protocols. As per TCP/IP protocols, requests from the requesting client devices 106 may be sent as one or more streams of data packets over network 108 to the network traffic management device 110 and/or the servers 102. Such protocols can be utilized by the client devices 106, network traffic management device 110 and the servers 102 to establish connections, send and receive data for existing connections, and the like.


Further, it should be appreciated that network 108 may include local area networks (LANs), wide area networks (WANs), direct connections and any combination thereof, as well as other types and numbers of networks. On an interconnected set of LANs or other networks, including those based on differing architectures and protocols, network devices such as client devices 106, servers 102, network traffic management devices 110, routers, switches, hubs, gateways, bridges, cell towers and other intermediate network devices may act within and between LANs and other networks to enable messages and other data to be sent between network devices. Also, communication links within and between LANs and other networks typically include twisted wire pair (e.g., Ethernet), coaxial cable, analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links and other communications links known to those skilled in the relevant arts. Thus, the network 108 is configured to handle any communication method by which data may travel between network devices.


LAN 104 comprises a private local area network that allows communications between the one or more network traffic management devices 110 and one or more servers 102 in the secured network. It is contemplated, however, that the LAN 104 may comprise other types of private and public networks with other devices. Networks, including local area networks, are understood by those skilled in the relevant arts and have already been generally described above in connection with network 108, and thus will not be described further.


As shown in the example environment 100 depicted in FIG. 1, the one or more network traffic management devices 110 is interposed between the client devices 106, with which it communicates via network 108, and the servers 102, with which it communicates via LAN 104. Generally, the network traffic management device 110 manages network communications, which may include one or more client requests and server responses, via the network 108 between the client devices 106 and one or more of the servers 102. In any case, the network traffic management device 110 may manage the network communications by performing several network traffic related functions involving the communications. Some functions include, but are not limited to, load balancing, access control, and validating HTTP requests using JavaScript code that is sent back to requesting client devices 106.



FIG. 2A is a block diagram of the network traffic management device shown in FIG. 1 in accordance with an aspect of the present disclosure. As shown in FIG. 2A, the example network traffic management device 110 includes one or more device processors 200, one or more device I/O interfaces 202, one or more network interfaces 204, and one or more device memories 206, which are coupled together by one or more buses 208. It should be noted that the network traffic management device 110 can be configured to include other types and/or numbers of components and is thus not limited to the configuration shown in FIG. 2A.


Device processor 200 of the network traffic management device 110 comprises one or more microprocessors configured to execute computer/machine readable and executable instructions stored in the device memory 206. Such instructions, when executed by one or more processors 200, implement general and specific functions of the network traffic management device 110, including the inventive process described in more detail below. It is understood that the processor 200 may comprise other types and/or combinations of processors, such as digital signal processors, micro-controllers, application specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”), field programmable logic devices (“FPLDs”), field programmable gate arrays (“FPGAs”), and the like. The processor 200 is programmed or configured according to the teachings as described and illustrated herein.


Device I/O interfaces 202 comprise one or more user input and output device interface mechanisms. The interface may include a computer keyboard, mouse, display device, and the corresponding physical ports and underlying supporting hardware and software to enable the network traffic management device 110 to communicate with other network devices in the environment 100. Such communications may include accepting user data input and providing user output, although other types and numbers of user input and output devices may be used. Additionally or alternatively, as will be described in connection with network interface 204 below, the network traffic management device 110 may communicate with the outside environment for certain types of operations (e.g. smart load balancing) via one or more network management ports.


Bus 208 may comprise one or more internal device component communication buses, links, bridges and supporting components, such as bus controllers and/or arbiters. The bus 208 enables the various components of the network traffic management device 110, such as the processor 200, device I/O interfaces 202, network interface 204, and device memory 206, to communicate with one another. However, it is contemplated that the bus 208 may enable one or more components of the network traffic management device 110 to communicate with one or more components in other network devices as well. Example buses include HyperTransport, PCI, PCI Express, InfiniBand, USB, Firewire, Serial ATA (SATA), SCSI, IDE and AGP buses. However, it is contemplated that other types and numbers of buses may be used, whereby the particular types and arrangement of buses will depend on the particular configuration of the network traffic management device 110.


Device memory 206 comprises computer readable media, namely computer readable or processor readable storage media, which are examples of machine-readable storage media. Computer readable storage/machine-readable storage media may include volatile, nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information. Examples of computer readable storage media include RAM, BIOS, ROM, EEPROM, flash/firmware memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information, which can be accessed by a computing or specially programmed network device, such as the network traffic management device 110.


Such storage media includes computer readable/processor-executable instructions, data structures, program modules, or other data, which may be obtained and/or executed by one or more processors, such as device processor 200. Such instructions, when executed, allow or cause the processor 200 to perform actions, including performing the inventive processes described below. The memory 206 may contain other instructions relating to the implementation and operation of an operating system for controlling the general operation and other tasks performed by the network traffic management device 110.


The network interface 204 performs the operations of routing, translating/transforming, and switching packets and comprises one or more mechanisms that enable the network traffic management device 110 to engage in network communications over the LAN 104 and the network 108 using one or more of a number of protocols, such as TCP/IP, HTTP, UDP, RADIUS and DNS. However, it is contemplated that the network interface 204 may be constructed for use with other communication protocols and types of networks. Network interface 204 is sometimes referred to as a transceiver, server array controller, transceiving device, or network interface card (NIC), which transmits and receives network data packets over one or more networks, such as the LAN 104 and the network 108. In an example, where the network traffic management device 110 includes more than one device processor 200 (or a processor 200 has more than one core), each processor 200 (and/or core) may use the same single network interface 204 or a plurality of network interfaces 204. Further, the network interface 204 may include one or more physical ports, such as Ethernet ports, to couple the network traffic management device 110 with other network devices, such as servers 102. Moreover, the interface 204 may include certain physical ports dedicated to receiving and/or transmitting certain types of network data, such as device management related data for configuring the network traffic management device 110 or client request/server response related data.


The network interface 204 also maintains flow entry and flow state information for flow of packets as well as dynamically selects operations on “flows” based on the content of the packets in the flow that the network traffic management device 110 handles. A flow is a sequence of packets that have the same flow signature. The flow signature is a tuple which includes information about the source and destination network devices which are to handle the packets in the flow. The flow exists for a finite period, wherein subsequent flows may have the same flow signature as a previously handled flow. The network interface 204 is configured to leverage the flow signatures of previously handled flows to more efficiently handle new flows having the same flow signatures, as will be discussed in more detail below.
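The flow signature described above can be modeled as a simple tuple of endpoint information. The exact field choice below is an assumption for illustration; the disclosure says only that the signature is a tuple identifying the source and destination network devices.

```python
from typing import NamedTuple

# Illustrative flow signature: a tuple of endpoint information. Two packets
# with the same signature belong to the same flow, regardless of payload.
class FlowSignature(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

def signature_of(packet: dict) -> FlowSignature:
    """Extract the flow signature from a (dict-modeled) packet."""
    return FlowSignature(packet["src_ip"], packet["dst_ip"],
                         packet["src_port"], packet["dst_port"],
                         packet["proto"])

p1 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
      "src_port": 40000, "dst_port": 443, "proto": "TCP", "payload": b"a"}
p2 = dict(p1, payload=b"b")

# Different payloads, same endpoints: same flow.
assert signature_of(p1) == signature_of(p2)
```

Because the signature is hashable, it can serve directly as the key into a flow table, which is how the reuse of previously handled signatures becomes cheap.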



FIG. 2B is a block diagram of the network interface in accordance with an aspect of the present disclosure. As shown in FIG. 2B, the network interface 204 includes a Data Flow Segment (DFS) 212 and at least one Control Segment (CS) 214. Although the network interface 204 is shown as two partitions, it is understood and appreciated that the segmented blocks may be incorporated into one or more separate blocks including, but not limited to, two segments in the same chassis.


The DFS 212 includes the hardware-optimized portion whereas the CS 214 includes the software-optimized portion of the network interface 204. The DFS 212 performs most of the repetitive tasks including statistics gathering and per-packet policy enforcement (e.g. packet switching). The DFS 212 may also perform tasks such as that of a router, a switch, or a routing switch. The CS 214 determines the translation to be performed on each new flow and performs high-level control functions and per-flow policy enforcement.


As mentioned above, the network interface 204 (and the combined operation of the DFS 212 and CS 214) performs network address translation (NAT) functions on flows between client devices in external networks and servers in internal secure or non-secure networks. Translation or transformation information may include a set of rules that provide instructions on how parts of a packet are to be rewritten, and the values that those parts will be rewritten to. Packets can be received by the DFS 212 from both internal and external networks. After the packets are received, the DFS 212 categorizes the packets into flows, analyzes the flow signature, and looks up the transformation data for that flow signature in a table (or another suitable data construct). If the table does not have an entry for the particular flow signature, the DFS 212 sends a query to the CS 214 over the message bus for instructions. The CS 214 accesses a flow table to see whether there is an already existing flow signature that matches the queried flow signature. If no match is found, the CS 214 makes a new entry in the table for the new flow signature and replies to the DFS 212 with translation instructions on handling the new flow. The DFS 212 makes a new entry in its table for the new flow signature and then routes, switches or otherwise directs the flow based on the translation information for that particular flow signature.
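The DFS/CS interaction just described can be sketched with two cooperating classes. The class and field names are assumptions for illustration: the DFS consults a local cache by flow signature and, on a miss, queries the CS, which creates the flow entry and replies with translation instructions.

```python
class ControlSegment:
    """Software CS: decides the translation to be performed on each new flow."""
    def __init__(self):
        self.flow_table = {}

    def query(self, sig):
        if sig not in self.flow_table:
            # No matching flow signature: create a new entry with
            # translation instructions (illustrative rewrite rule).
            self.flow_table[sig] = {"rewrite_dst": ("192.168.1.10", 8080)}
        return self.flow_table[sig]

class DataFlowSegment:
    """Hardware DFS: fast per-packet path with its own transform table."""
    def __init__(self, cs):
        self.cs = cs
        self.cache = {}

    def handle(self, sig):
        if sig not in self.cache:
            # Miss: query the CS over the message bus for instructions,
            # then enter the reply into the local table.
            self.cache[sig] = self.cs.query(sig)
        # Hit: transform in hardware using the cached instructions.
        return self.cache[sig]

cs = ControlSegment()
dfs = DataFlowSegment(cs)
sig = ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP")
first = dfs.handle(sig)    # triggers a CS query and populates both tables
second = dfs.handle(sig)   # served entirely from the DFS table
```

The payoff of this split is that only the first packet of a flow pays the software round-trip; every subsequent packet with the same signature stays on the hardware fast path.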


As mentioned above, a plurality of network traffic management devices 110 can be combined as a network traffic management cluster, wherein the multiple network traffic management devices 110 share resources to handle network traffic. Additionally, the network traffic management devices in a particular cluster may be configured to act as virtual devices, thereby allowing multiple instances of software to run on one physical network traffic management device.


In the virtualized network traffic management cluster (also termed a “virtual cluster”), the DFS will be virtualized, in which one or more virtual network interfaces will handle the functions of the network interface 204 described above for virtual services. Additionally, because the virtual network interfaces are virtual in nature, they do not communicate directly with one another. The virtual network interfaces (referred to as ‘guests’) will require a virtual MAC address from the DFS 212.


When a guest first communicates with the network interface 204 (such as during start up), each of the guest's virtual network devices is provided an L2 MAC address. Additionally, each virtual guest can allocate one or more virtual MAC addresses not assigned by the host (referred to as virtual masquerader MAC addresses) that the virtual guest can configure and associate with a traffic service group.


The DFS component 212 of the network interface 204 maintains a MAC table for a cluster, in which the MAC table is separate from the flow table described above. DFS component 212 is configured to store the L2 and virtual MAC addresses for each guest in the cluster. In an aspect, the MAC table is able to store the L2 and virtual MAC addresses for up to 32 guest devices, although a greater or lesser number is contemplated. In particular, each guest is given a dedicated portion of the MAC table.


At provisioning or start up, each virtual cluster is assigned a set of continuous SMAC(s) starting with a base. A region of a MAC table is also assigned to the corresponding virtual cluster. Multiple sets of SMAC(s) can also be used with different bases. The DFS 212 maintains SMAC/index allocation info for each guest cluster. The assigned region of the MAC table is managed by the corresponding guests, although the DFS 212 performs access boundary checks and lookup verification/transformation. Additionally, requests from virtual network traffic management devices for additional SMAC addresses can be serviced by allocating additional space in the MAC table and assigning the additional SMAC addresses to the corresponding virtual network traffic management device.
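A sketch of this per-cluster allocation, under assumed sizes and names: each virtual cluster receives a contiguous run of SMAC addresses starting at a base, plus a region of the MAC table, and a DFS-style boundary check confines each cluster to its own region. The table size, region size, and SMAC base below are all illustrative.

```python
MAC_TABLE_SIZE = 1024  # assumed total MAC table capacity

class SmacAllocator:
    def __init__(self):
        self.next_region = 0
        self.regions = {}   # cluster_id -> (base_index, limit_index)

    def assign(self, cluster_id: int, region_size: int, smac_base: int):
        """Assign a MAC-table region and a continuous SMAC set to a cluster."""
        base = self.next_region
        limit = base + region_size
        if limit > MAC_TABLE_SIZE:
            raise MemoryError("MAC table exhausted")
        self.next_region = limit
        self.regions[cluster_id] = (base, limit)
        # Continuous SMAC addresses starting at the base, one per slot.
        return [smac_base + i for i in range(region_size)]

    def check_access(self, cluster_id: int, index: int) -> bool:
        """DFS-style boundary check: a guest may only touch its own region."""
        base, limit = self.regions[cluster_id]
        return base <= index < limit

alloc = SmacAllocator()
smacs = alloc.assign(cluster_id=1, region_size=32, smac_base=0x020000000000)
```

A second call to `assign` for another cluster would carve out the next region; servicing a request for additional SMACs (as described above) would amount to another `assign` call for the same cluster with a different base.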


Each guest's portion of the MAC table is designated to have storage boundaries of a base boundary and a limit boundary. In an aspect, L2 MAC address entries are stored beginning at the base boundary and take up storage space toward the limit boundary. Similarly, virtual MAC address entries are stored beginning at the limit boundary and take up storage space toward the base boundary, such that the portion is effectively split between L2 MAC addresses and virtual MAC addresses. As a result, the storage area of the guest device's portion gets filled in a converging manner.
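The converging fill can be sketched as follows, with an illustrative region size and assumed method names: L2 MAC entries grow upward from the base boundary while virtual MAC entries grow downward from the limit boundary, and the region is full when the two cursors meet.

```python
class GuestRegion:
    """A guest's portion of the MAC table, filled from both boundaries."""
    def __init__(self, size: int):
        self.slots = [None] * size
        self.l2_next = 0              # next free slot from the base boundary
        self.vmac_next = size - 1     # next free slot from the limit boundary

    def add_l2(self, mac: str):
        if self.l2_next > self.vmac_next:
            raise MemoryError("region full: boundaries converged")
        self.slots[self.l2_next] = ("l2", mac)
        self.l2_next += 1

    def add_vmac(self, mac: str):
        if self.vmac_next < self.l2_next:
            raise MemoryError("region full: boundaries converged")
        self.slots[self.vmac_next] = ("vmac", mac)
        self.vmac_next -= 1

region = GuestRegion(size=8)
region.add_l2("02:00:00:00:00:01")    # lands at index 0 (base side)
region.add_vmac("02:00:00:00:00:fe")  # lands at index 7 (limit side)
```

The benefit of this layout is that the split point between L2 and virtual MAC entries is not fixed in advance: a guest with many virtual masquerader MACs and few L2 MACs (or vice versa) still uses the whole region.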



FIG. 3 illustrates a flow chart describing the process performed by the application module for a new flow in accordance with an aspect of the present disclosure. As shown in FIG. 3, the network traffic management device 110 receives a request from a guest network traffic management device to establish a connection for a flow (Block 302). This request is initially handled by the DFS 212 of the network traffic management device 110. In the example process shown in FIG. 3, the DFS 212 determines that the request is for a new flow connection. Accordingly, the DFS 212 forwards the request to the CS 214 for handling (Block 304). The CS 214 thereafter establishes the connection and generates a mapping of the MAC address to an index in the MAC table (Block 306). The CS 214 also creates a flow table entry in the flow table, wherein the flow table entry contains the index (Block 308). The CS then inserts the flow table entry into the flow table of the DFS 212 and the DFS validates the entry (Block 310).



FIG. 4 illustrates a flow chart describing the process performed by the application module for an existing flow in accordance with an aspect of the present disclosure. As shown in FIG. 4, the packet is received by the network traffic management device 110, whereby the DFS 212 analyzes the packet to identify the flow signature for the connection to allow the DFS 212 to look up the flow signature in the flow table (Block 402). Once the DFS 212 locates the flow signature in the flow table, the DFS 212 retrieves the corresponding transform information as well as the MAC index information from the flow table and validates the retrieved information (Block 404). The DFS 212 then accesses the MAC table and uses the MAC index to look up and retrieve a MAC address (Block 406), whereby the DFS utilizes the MAC address as the MAC source address in the packet transform operation (Block 408).
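The FIG. 4 fast path for an existing flow can be sketched end to end under assumed data structures: the DFS keys its flow table by flow signature, pulls the MAC index from the matching entry, resolves the source MAC through the MAC table, and transforms the packet. Every name and table layout here is illustrative.

```python
# Pre-populated tables for an established connection (illustrative values).
flow_table = {
    ("10.0.0.1", "10.0.0.2", 40000, 443, "TCP"): {
        "transform": {"dst_ip": "192.168.1.10", "dst_port": 8080},
        "mac_index": 3,
    },
}
mac_table = {3: "02:00:00:00:00:aa"}

def fast_path(packet: dict) -> dict:
    sig = (packet["src_ip"], packet["dst_ip"],
           packet["src_port"], packet["dst_port"], packet["proto"])
    entry = flow_table[sig]                     # Block 402: signature lookup
    smac = mac_table[entry["mac_index"]]        # Blocks 404-406: index -> MAC
    transformed = dict(packet, src_mac=smac)    # Block 408: SMAC rewrite
    transformed.update(entry["transform"])      # apply transform information
    return transformed

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 40000, "dst_port": 443, "proto": "TCP"}
out = fast_path(pkt)
```

Note that the MAC address never appears in the flow entry itself; only the two-step lookup (flow table, then MAC table) recovers it, which is exactly the space saving motivating the index.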


In this example, the traffic management device 110 further enforces use of a respective one of the SMAC addresses assigned to the virtual network traffic management devices. Enforcement can include dropping the packet, preventing packets returning on the flow associated with the corresponding MAC address from returning to the virtual traffic management device, or logging information about the problem in the hypervisor, for example, although other methods of enforcing use of only assigned addresses by each of the virtual traffic management devices can also be used.
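The enforcement step above can be sketched as a simple membership check, under assumed names: a packet whose source MAC is not in the set assigned to its virtual device is dropped and recorded (one of the enforcement options named above; logging to the hypervisor is another).

```python
# Illustrative per-guest SMAC assignments.
assigned_smacs = {
    "guest-1": {"02:00:00:00:00:01", "02:00:00:00:00:02"},
}
dropped = []  # record of rejected packets (could instead log to hypervisor)

def enforce(guest_id: str, packet: dict) -> bool:
    """Return True if the packet uses an assigned SMAC; drop it otherwise."""
    if packet["src_mac"] in assigned_smacs[guest_id]:
        return True
    dropped.append(packet)
    return False

ok = enforce("guest-1", {"src_mac": "02:00:00:00:00:01"})
bad = enforce("guest-1", {"src_mac": "02:00:00:00:00:99"})
```

A set membership test keeps this check O(1) per packet, which matters on a path that every packet traverses.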


Accordingly, when a packet is received by a network traffic management device in which the packet is associated with a new connection, the DFS component of the network traffic management device forwards the new connection request to the CS component of the network traffic management device. The CS component then establishes the connection and creates a mapping of the MAC addresses for that connection to an index in a table (which may already be populated) in the DFS component. The CS creates a corresponding flow entry in the flow table, in which the flow entry includes the requisite data for the connection and the index for the new connection. The CS component then inserts the flow entry into the flow table in the DFS 212 component. When a packet is received by the network traffic management device in which the packet is associated with an established connection, the DFS 212 component will look up the flow entry in the flow table (based on a key generated from characteristics of the packet) to retrieve transform and MAC index information, look up and retrieve the MAC address in the MAC table based on the retrieved MAC index, and transform the packet using the retrieved MAC address as the MAC source address.


Having thus described the basic concepts, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the examples. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed system and/or processes to any order except as may be specified in the claims. Accordingly, the system and method is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for facilitating network address translation in a virtualized network traffic management cluster, executable by one or more network traffic management devices with at least one processor executing the method, the method comprising steps to: assign, by a first processor on the one or more network traffic management devices, a set of continuous available source media access control (SMAC) addresses stored in a region of a MAC table to a network traffic management cluster comprising one or more virtual network traffic management devices; establish, by a second processor on the one or more network traffic management devices, a connection in response to a request from one of the virtual network traffic management devices that received a packet associated with a new flow; insert, by a third processor on the one or more network traffic management devices, a flow table entry comprising an index to the MAC table into a flow table in a hardware-based data flow segment (DFS), wherein the flow table entry can be identified based on a key generated from a flow signature of the packet and the index corresponds to a MAC table entry in the MAC table storing one of the SMAC addresses corresponding to the one of the virtual network traffic management devices; and transform, by a fourth processor on the one or more network traffic management devices, the packet associated with the connection using the one of the SMAC addresses as a source address of the received packet and send the received packet to a destination network device.
  • 2. The method of claim 1, further comprising employing at least one of the first, second, third, or fourth processors on the one or more network traffic management devices to: generate the key based on the flow signature of another received packet; retrieve the flow table entry using the key and the MAC table entry using the index retrieved from the flow table entry; and transform the another received packet using the MAC address.
  • 3. The method of claim 1, wherein at least a portion of the MAC table has a base boundary and a limit boundary, wherein storing of data associated with the virtual network traffic management devices is performed from the limit boundary in a converging manner toward the base boundary.
  • 4. The method of claim 1, further comprising employing at least one of the first, second, third, or fourth processors on the one or more network traffic management devices to enforce use of the set of SMAC addresses assigned to the virtual network traffic management devices.
  • 5. The method of claim 1, further comprising employing at least one of the first, second, third, or fourth processors on the one or more network traffic management devices to: receive a request from another one of the virtual network traffic management devices for one or more additional SMAC addresses; and allocate additional space in the MAC table for, and assign to the another one of the virtual network traffic management devices, the one or more additional SMAC addresses.
  • 6. The method as set forth in claim 1, wherein the first processor, the second processor, the third processor, and the fourth processor are the same processor.
  • 7. The method as set forth in claim 1, wherein two or more of the first processor, the second processor, the third processor, or the fourth processor are on a same one of the network traffic management devices.
  • 8. A non-transitory computer readable medium having stored thereon instructions for facilitating network address translation in a virtualized network traffic management cluster, comprising executable code which when executed by one or more processors causes the processors to perform steps comprising: assigning a set of continuous available source media access control (SMAC) addresses stored in a region of a MAC table to a network traffic management cluster comprising one or more virtual network traffic management devices; establishing a connection in response to a request from one of the virtual network traffic management devices that received a packet associated with a new flow; inserting a flow table entry comprising an index to the MAC table into a flow table in a hardware-based data flow segment (DFS), wherein the flow table entry can be identified based on a key generated from a flow signature of the packet and the index corresponds to a MAC table entry in the MAC table storing one of the SMAC addresses corresponding to the one of the virtual network traffic management devices; and transforming the packet associated with the connection using the one of the SMAC addresses as a source address of the received packet and sending the received packet to a destination network device.
  • 9. The non-transitory computer readable medium of claim 8, further having stored thereon executable code which when executed by the processors further causes the processors to perform one or more additional steps comprising: generating the key based on the flow signature of another received packet; retrieving the flow table entry using the key and the MAC table entry using the index retrieved from the flow table entry; and transforming the another received packet using the MAC address.
  • 10. The non-transitory computer readable medium of claim 8, wherein at least a portion of the MAC table has a base boundary and a limit boundary, wherein storing of data associated with the virtual network traffic management devices is performed from the limit boundary in a converging manner toward the base boundary.
  • 11. The non-transitory computer readable medium of claim 8, further having stored thereon executable code which when executed by the processors further causes the processors to perform one or more additional steps comprising enforcing use of the set of SMAC addresses assigned to the virtual network traffic management devices.
  • 12. The non-transitory computer readable medium of claim 8, further having stored thereon executable code which when executed by the processors further causes the processors to perform one or more additional steps comprising: receiving a request from another one of the virtual network traffic management devices for one or more additional SMAC addresses; and allocating additional space in the MAC table for, and assigning to the another one of the virtual network traffic management devices, the one or more additional SMAC addresses.
  • 13. One or more network traffic management devices comprising: memory comprising programmed instructions stored in the memory; and one or more processors configured to be capable of executing the programmed instructions stored in the memory to: assign a set of continuous available source media access control (SMAC) addresses stored in a region of a MAC table to a network traffic management cluster comprising one or more virtual network traffic management devices; establish a connection in response to a request from one of the virtual network traffic management devices that received a packet associated with a new flow; insert a flow table entry comprising an index to the MAC table into a flow table in a hardware-based data flow segment (DFS), wherein the flow table entry can be identified based on a key generated from a flow signature of the packet and the index corresponds to a MAC table entry in the MAC table storing one of the SMAC addresses corresponding to the one of the virtual network traffic management devices; and transform the packet associated with the connection using the one of the SMAC addresses as a source address of the received packet and send the received packet to a destination network device.
  • 14. The one or more network traffic management devices of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to: generate the key based on the flow signature of another received packet; retrieve the flow table entry using the key and the MAC table entry using the index retrieved from the flow table entry; and transform the another received packet using the MAC address.
  • 15. The one or more network traffic management devices of claim 13, wherein at least a portion of the MAC table has a base boundary and a limit boundary, wherein storing of data associated with the virtual network traffic management devices is performed from the limit boundary in a converging manner toward the base boundary.
  • 16. The one or more network traffic management devices of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to enforce use of the set of SMAC addresses assigned to the virtual network traffic management devices.
  • 17. The one or more network traffic management devices of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to: receive a request from another one of the virtual network traffic management devices for one or more additional SMAC addresses; and allocate additional space in the MAC table for, and assign to the another one of the virtual network traffic management devices, the one or more additional SMAC addresses.
  • 18. A method for facilitating network address translation in a virtualized network traffic management cluster, the method comprising: assigning, by a network traffic management device, a set of continuous available source media access control (SMAC) addresses stored in a region of a MAC table to a network traffic management cluster comprising one or more virtual network traffic management devices; establishing, by the network traffic management device, a connection in response to a request from one of the virtual network traffic management devices that received a packet associated with a new flow; inserting, by the network traffic management device, a flow table entry comprising an index to the MAC table into a flow table in a hardware-based data flow segment (DFS), wherein the flow table entry can be identified based on a key generated from a flow signature of the packet and the index corresponds to a MAC table entry in the MAC table storing one of the SMAC addresses corresponding to the one of the virtual network traffic management devices; and transforming, by the network traffic management device, the packet associated with the connection using the one of the SMAC addresses as a source address of the received packet and sending the received packet to a destination network device.
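The base-boundary/limit-boundary storage recited in claims 3, 10, and 15 can be illustrated with a short sketch. This is illustrative only, not the claimed hardware MAC table: the class name, the dictionary-backed storage, and the exhaustion behavior are assumptions made for clarity.

```python
class MacTableRegion:
    """Model of a MAC table region bounded by a base index and a limit index.

    Entries for the virtual network traffic management devices are stored
    starting at the limit boundary and converge downward toward the base
    boundary; allocation fails once the two boundaries meet.
    """
    def __init__(self, base, limit):
        self.base = base           # lowest usable index in the region
        self.limit = limit         # highest usable index in the region
        self.next_slot = limit     # next index to hand out, moving toward base
        self.entries = {}          # index -> SMAC address

    def assign(self, smac):
        # Converging allocation: stop when the base boundary is crossed.
        if self.next_slot < self.base:
            raise MemoryError("MAC table region exhausted")
        index = self.next_slot
        self.entries[index] = smac
        self.next_slot -= 1
        return index
```

For example, a region with base 0 and limit 3 hands out indices 3, 2, 1, 0 in that order, then refuses further assignments, which mirrors storage proceeding "from the limit boundary in a converging manner" to the base boundary.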
STATEMENT OF RELATED APPLICATION

The present application claims the benefit of priority based on U.S. Provisional Patent Application Ser. No. 61/707,960, filed on Sep. 30, 2012, in the name of inventors Hao Cai, Tim Michels and Paul Szabo, entitled “Hardware Assisted Flow Acceleration and L2 SMAC Management in Heterogeneous Distributed Multi-Tenant Virtualized Clustered System”, which is hereby incorporated by reference.

US Referenced Citations (460)
Number Name Date Kind
4993030 Krakauer et al. Feb 1991 A
5218695 Noveck et al. Jun 1993 A
5303368 Kotaki Apr 1994 A
5410667 Belsan et al. Apr 1995 A
5473362 Fitzgerald et al. Dec 1995 A
5511177 Kagimasa et al. Apr 1996 A
5537585 Blickenstaff et al. Jul 1996 A
5548724 Akizawa et al. Aug 1996 A
5550965 Gabbe et al. Aug 1996 A
5583995 Gardner et al. Dec 1996 A
5586260 Hu Dec 1996 A
5590320 Maxey Dec 1996 A
5623490 Richter et al. Apr 1997 A
5644698 Cannon Jul 1997 A
5649194 Miller et al. Jul 1997 A
5649200 Leblang et al. Jul 1997 A
5668943 Attanasio et al. Sep 1997 A
5692180 Lee Nov 1997 A
5721779 Funk Feb 1998 A
5724512 Winterbottom Mar 1998 A
5806061 Chaudhuri et al. Sep 1998 A
5832496 Anand et al. Nov 1998 A
5832522 Blickenstaff et al. Nov 1998 A
5838970 Thomas Nov 1998 A
5862325 Reed et al. Jan 1999 A
5884303 Brown Mar 1999 A
5893086 Schmuck et al. Apr 1999 A
5897638 Lasser et al. Apr 1999 A
5905990 Inglett May 1999 A
5917998 Cabrera et al. Jun 1999 A
5920873 Van Huben et al. Jul 1999 A
5926816 Bauer et al. Jul 1999 A
5937406 Balabine et al. Aug 1999 A
5991302 Beri et al. Nov 1999 A
5995491 Richter et al. Nov 1999 A
5999664 Mahoney et al. Dec 1999 A
6012083 Savitzky et al. Jan 2000 A
6029168 Frey Feb 2000 A
6044367 Wolff Mar 2000 A
6047129 Frye Apr 2000 A
6072942 Stockwell et al. Jun 2000 A
6078929 Rao Jun 2000 A
6085234 Pitts et al. Jul 2000 A
6088694 Burns et al. Jul 2000 A
6088759 Hasbun et al. Jul 2000 A
6104706 Richter et al. Aug 2000 A
6128627 Mattis et al. Oct 2000 A
6128717 Harrison et al. Oct 2000 A
6161145 Bainbridge et al. Dec 2000 A
6161185 Guthrie et al. Dec 2000 A
6181336 Chiu et al. Jan 2001 B1
6182188 Hasbun et al. Jan 2001 B1
6202071 Keene Mar 2001 B1
6202156 Kalajan Mar 2001 B1
6223206 Dan et al. Apr 2001 B1
6226759 Miller et al. May 2001 B1
6233648 Tomita May 2001 B1
6237008 Beal et al. May 2001 B1
6256031 Meijer et al. Jul 2001 B1
6282610 Bergsten Aug 2001 B1
6289345 Yasue Sep 2001 B1
6308162 Ouimet et al. Oct 2001 B1
6311290 Hasbun et al. Oct 2001 B1
6324581 Xu et al. Nov 2001 B1
6329985 Tamer et al. Dec 2001 B1
6339785 Feigenbaum Jan 2002 B1
6349343 Foody et al. Feb 2002 B1
6370543 Hoffert et al. Apr 2002 B2
6374263 Bunger et al. Apr 2002 B1
6374336 Peters et al. Apr 2002 B1
6389433 Bolosky et al. May 2002 B1
6393581 Friedman et al. May 2002 B1
6397246 Wolfe May 2002 B1
6412004 Chen et al. Jun 2002 B1
6438595 Blumenau et al. Aug 2002 B1
6466580 Leung Oct 2002 B1
6469983 Narayana et al. Oct 2002 B2
6477544 Bolosky et al. Nov 2002 B1
6487561 Ofek et al. Nov 2002 B1
6493804 Soltis et al. Dec 2002 B1
6516350 Lumelsky et al. Feb 2003 B1
6516351 Borr Feb 2003 B2
6542909 Tamer et al. Apr 2003 B1
6549916 Sedlar Apr 2003 B1
6553352 Delurgio et al. Apr 2003 B2
6556997 Levy Apr 2003 B1
6556998 Mukherjee et al. Apr 2003 B1
6560230 Li et al. May 2003 B1
6601101 Lee et al. Jul 2003 B1
6606663 Liao et al. Aug 2003 B1
6612490 Herrendoerfer et al. Sep 2003 B1
6654346 Mahalingaiah et al. Nov 2003 B1
6697871 Hansen Feb 2004 B1
6704755 Midgley et al. Mar 2004 B2
6721794 Taylor et al. Apr 2004 B2
6728265 Yavatkar et al. Apr 2004 B1
6738357 Richter et al. May 2004 B1
6738790 Klein et al. May 2004 B1
6742035 Zayas et al. May 2004 B1
6744776 Kalkunte et al. Jun 2004 B1
6748420 Quatrano et al. Jun 2004 B1
6754215 Arikawa et al. Jun 2004 B1
6757706 Dong et al. Jun 2004 B1
6775672 Mahalingam et al. Aug 2004 B2
6775673 Mahalingam et al. Aug 2004 B2
6775679 Gupta Aug 2004 B2
6782450 Arnott et al. Aug 2004 B2
6801960 Ericson et al. Oct 2004 B1
6826613 Wang et al. Nov 2004 B1
6839761 Kadyk et al. Jan 2005 B2
6847959 Arrouye et al. Jan 2005 B1
6847970 Keller et al. Jan 2005 B2
6850997 Rooney et al. Feb 2005 B1
6868439 Basu et al. Mar 2005 B2
6871245 Bradley Mar 2005 B2
6880017 Marce et al. Apr 2005 B1
6889249 Miloushev et al. May 2005 B2
6914881 Mansfield et al. Jul 2005 B1
6922688 Frey, Jr. Jul 2005 B1
6934706 Mancuso et al. Aug 2005 B1
6938039 Bober et al. Aug 2005 B1
6938059 Tamer et al. Aug 2005 B2
6959373 Testardi Oct 2005 B2
6961815 Kistler et al. Nov 2005 B2
6973455 Vahalia et al. Dec 2005 B1
6973549 Testardi Dec 2005 B1
6975592 Seddigh et al. Dec 2005 B1
6985936 Agarwalla et al. Jan 2006 B2
6985956 Luke et al. Jan 2006 B2
6986015 Testardi Jan 2006 B2
6990114 Erimli et al. Jan 2006 B1
6990547 Ulrich et al. Jan 2006 B2
6990667 Ulrich et al. Jan 2006 B2
6996841 Kadyk et al. Feb 2006 B2
6999912 Loisey et al. Feb 2006 B2
7003533 Noguchi et al. Feb 2006 B2
7006981 Rose et al. Feb 2006 B2
7010553 Chen et al. Mar 2006 B2
7013379 Testardi Mar 2006 B1
7020644 Jameson Mar 2006 B2
7020669 McCann et al. Mar 2006 B2
7024427 Bobbitt et al. Apr 2006 B2
7039061 Connor et al. May 2006 B2
7051112 Dawson May 2006 B2
7054998 Arnott et al. May 2006 B2
7055010 Lin et al. May 2006 B2
7072917 Wong et al. Jul 2006 B2
7075924 Richter et al. Jul 2006 B2
7089286 Malik Aug 2006 B1
7111115 Peters et al. Sep 2006 B2
7113962 Kee et al. Sep 2006 B1
7120728 Krakirian et al. Oct 2006 B2
7120746 Campbell et al. Oct 2006 B2
7127556 Blumenau et al. Oct 2006 B2
7133967 Fujie et al. Nov 2006 B2
7143146 Nakatani et al. Nov 2006 B2
7146524 Patel et al. Dec 2006 B2
7152184 Maeda et al. Dec 2006 B2
7155466 Rodriguez et al. Dec 2006 B2
7165095 Sim Jan 2007 B2
7167821 Hardwick et al. Jan 2007 B2
7171469 Ackaouy et al. Jan 2007 B2
7173929 Testardi Feb 2007 B1
7181523 Sim Feb 2007 B2
7194579 Robinson et al. Mar 2007 B2
7197615 Arakawa et al. Mar 2007 B2
7206863 Oliveira et al. Apr 2007 B1
7216264 Glade et al. May 2007 B1
7234074 Cohn et al. Jun 2007 B2
7236491 Tsao et al. Jun 2007 B2
7237076 Nakano et al. Jun 2007 B2
7243089 Becker-Szendy et al. Jul 2007 B2
7243094 Tabellion et al. Jul 2007 B2
7263610 Parker et al. Aug 2007 B2
7269168 Roy et al. Sep 2007 B2
7269582 Winter et al. Sep 2007 B2
7272613 Sim et al. Sep 2007 B2
7272654 Brendel Sep 2007 B1
7280536 Testardi Oct 2007 B2
7284150 Ma et al. Oct 2007 B2
7293097 Borr Nov 2007 B2
7293099 Kalajan Nov 2007 B1
7293133 Colgrove et al. Nov 2007 B1
7299250 Douceur et al. Nov 2007 B2
7308475 Pruitt et al. Dec 2007 B1
7330486 Ko et al. Feb 2008 B2
7343398 Lownsbrough Mar 2008 B1
7346664 Wong et al. Mar 2008 B2
7383288 Miloushev Jun 2008 B2
7401220 Bolosky et al. Jul 2008 B2
7406484 Srinivasan et al. Jul 2008 B1
7415488 Muth et al. Aug 2008 B1
7415608 Bolosky et al. Aug 2008 B2
7418439 Wong Aug 2008 B2
7437358 Arrouye et al. Oct 2008 B2
7440982 Lu et al. Oct 2008 B2
7457982 Rajan Nov 2008 B2
7467158 Marinescu Dec 2008 B2
7475241 Patel et al. Jan 2009 B2
7477796 Sasaki et al. Jan 2009 B2
7496367 Ozturk et al. Feb 2009 B1
7509322 Miloushev et al. Mar 2009 B2
7512673 Miloushev et al. Mar 2009 B2
7519813 Cox et al. Apr 2009 B1
7562110 Miloushev et al. Jul 2009 B2
7571168 Bahar et al. Aug 2009 B2
7574433 Engel Aug 2009 B2
7587471 Yasuda et al. Sep 2009 B2
7590747 Coates et al. Sep 2009 B2
7599941 Bahar et al. Oct 2009 B2
7610307 Havewala et al. Oct 2009 B2
7610390 Yared et al. Oct 2009 B2
7620775 Waxman Nov 2009 B1
7624109 Testardi Nov 2009 B2
7639883 Gill Dec 2009 B2
7644109 Manley et al. Jan 2010 B2
7653699 Colgrove et al. Jan 2010 B1
7656788 Ma et al. Feb 2010 B2
7680836 Anderson et al. Mar 2010 B2
7685126 Patel et al. Mar 2010 B2
7685177 Hagerstrom et al. Mar 2010 B1
7689596 Tsunoda Mar 2010 B2
7694082 Golding et al. Apr 2010 B2
7711771 Kirnos et al. May 2010 B2
7734603 McManis Jun 2010 B1
7739540 Akutsu et al. Jun 2010 B2
7743031 Cameron et al. Jun 2010 B1
7743035 Chen et al. Jun 2010 B2
7752294 Meyer et al. Jul 2010 B2
7788335 Miloushev et al. Aug 2010 B2
7809691 Karmarkar et al. Oct 2010 B1
7818299 Federwisch et al. Oct 2010 B1
7822939 Veprinsky et al. Oct 2010 B1
7831639 Panchbudhe et al. Nov 2010 B1
7849112 Mane et al. Dec 2010 B2
7853958 Mathew et al. Dec 2010 B2
7870154 Shitomi et al. Jan 2011 B2
7877511 Berger et al. Jan 2011 B1
7885970 Lacapra Feb 2011 B2
7886218 Watson Feb 2011 B2
7889734 Hendel et al. Feb 2011 B1
7900002 Lyon Mar 2011 B2
7903554 Manur et al. Mar 2011 B1
7904466 Valencia et al. Mar 2011 B1
7913053 Newland Mar 2011 B1
7937421 Mikesell et al. May 2011 B2
7953085 Chang et al. May 2011 B2
7953701 Okitsu et al. May 2011 B2
7958347 Ferguson Jun 2011 B1
7984108 Landis et al. Jul 2011 B2
8005953 Miloushev et al. Aug 2011 B2
8010756 Linde Aug 2011 B1
8015157 Kamei et al. Sep 2011 B2
8046547 Chatterjee et al. Oct 2011 B1
8055724 Amegadzie et al. Nov 2011 B2
8099758 Schaefer et al. Jan 2012 B2
8103622 Karinta Jan 2012 B1
8112392 Bunnell et al. Feb 2012 B1
8117244 Marinov et al. Feb 2012 B2
8171124 Kondamuru May 2012 B2
8180747 Marinkovic et al. May 2012 B2
8195760 Lacapra et al. Jun 2012 B2
8204860 Ferguson et al. Jun 2012 B1
8209403 Szabo et al. Jun 2012 B2
8239354 Lacapra et al. Aug 2012 B2
8271751 Hinrichs, Jr. Sep 2012 B2
8302100 Deng et al. Oct 2012 B2
8306948 Chou et al. Nov 2012 B2
8326798 Driscoll et al. Dec 2012 B1
8351600 Resch Jan 2013 B2
8352785 Nicklin et al. Jan 2013 B1
8392372 Ferguson et al. Mar 2013 B2
8396895 Miloushev et al. Mar 2013 B2
8397059 Ferguson Mar 2013 B1
8400919 Amdahl et al. Mar 2013 B1
8417681 Miloushev et al. Apr 2013 B1
8417746 Gillett, Jr. et al. Apr 2013 B1
8433735 Lacapra Apr 2013 B2
8463850 McCann Jun 2013 B1
8468542 Jacobson et al. Jun 2013 B2
8548953 Wong et al. Oct 2013 B2
8549582 Andrews et al. Oct 2013 B1
8682916 Wong et al. Mar 2014 B2
8725692 Natanzon et al. May 2014 B1
8745266 Agarwal et al. Jun 2014 B2
8954492 Lowell, Jr. Feb 2015 B1
9020912 Majee et al. Apr 2015 B1
20010007560 Masuda et al. Jul 2001 A1
20010047293 Waller et al. Nov 2001 A1
20020035537 Waller et al. Mar 2002 A1
20020059263 Shima et al. May 2002 A1
20020087887 Busam et al. Jul 2002 A1
20020106263 Winker Aug 2002 A1
20020120763 Miloushev et al. Aug 2002 A1
20020143909 Botz et al. Oct 2002 A1
20020150253 Brezak et al. Oct 2002 A1
20020156905 Weissman Oct 2002 A1
20020161911 Pinckney, III et al. Oct 2002 A1
20020194342 Lu et al. Dec 2002 A1
20030012382 Ferchichi et al. Jan 2003 A1
20030028514 Lord et al. Feb 2003 A1
20030033308 Patel et al. Feb 2003 A1
20030033535 Fisher et al. Feb 2003 A1
20030065956 Belapurkar et al. Apr 2003 A1
20030072318 Lam et al. Apr 2003 A1
20030088671 Klinker et al. May 2003 A1
20030128708 Inoue et al. Jul 2003 A1
20030156586 Lee et al. Aug 2003 A1
20030159072 Belinger et al. Aug 2003 A1
20030171978 Jenkins et al. Sep 2003 A1
20030177364 Walsh et al. Sep 2003 A1
20030177388 Botz et al. Sep 2003 A1
20030179755 Fraser Sep 2003 A1
20030200207 Dickinson Oct 2003 A1
20030204635 Ko et al. Oct 2003 A1
20040003266 Moshir et al. Jan 2004 A1
20040006575 Visharam et al. Jan 2004 A1
20040010654 Yasuda et al. Jan 2004 A1
20040017825 Stanwood et al. Jan 2004 A1
20040028043 Maveli et al. Feb 2004 A1
20040030857 Krakirian et al. Feb 2004 A1
20040044705 Stager et al. Mar 2004 A1
20040054748 Ackaouy et al. Mar 2004 A1
20040093474 Lin et al. May 2004 A1
20040098595 Aupperle et al. May 2004 A1
20040133577 Miloushev et al. Jul 2004 A1
20040133606 Miloushev et al. Jul 2004 A1
20040139355 Axel et al. Jul 2004 A1
20040148380 Meyer et al. Jul 2004 A1
20040153479 Mikesell et al. Aug 2004 A1
20040199547 Winter et al. Oct 2004 A1
20040210731 Chatterjee et al. Oct 2004 A1
20040213156 Smallwood et al. Oct 2004 A1
20040236798 Srinivasan et al. Nov 2004 A1
20050027862 Nguyen et al. Feb 2005 A1
20050050107 Mane et al. Mar 2005 A1
20050091214 Probert et al. Apr 2005 A1
20050108575 Yung May 2005 A1
20050114701 Atkins et al. May 2005 A1
20050117589 Douady et al. Jun 2005 A1
20050160161 Barrett et al. Jul 2005 A1
20050160243 Lubbers et al. Jul 2005 A1
20050175013 Le Pennec et al. Aug 2005 A1
20050187866 Lee Aug 2005 A1
20050198501 Andreev et al. Sep 2005 A1
20050213570 Stacy et al. Sep 2005 A1
20050213587 Cho et al. Sep 2005 A1
20050240756 Mayer Oct 2005 A1
20050246393 Coates et al. Nov 2005 A1
20050289111 Tribble et al. Dec 2005 A1
20060010502 Mimatsu et al. Jan 2006 A1
20060031374 Lu et al. Feb 2006 A1
20060045096 Farmer et al. Mar 2006 A1
20060074922 Nishimura Apr 2006 A1
20060075475 Boulos et al. Apr 2006 A1
20060080353 Miloushev et al. Apr 2006 A1
20060106882 Douceur et al. May 2006 A1
20060117048 Thind et al. Jun 2006 A1
20060123062 Bobbitt et al. Jun 2006 A1
20060140193 Kakani et al. Jun 2006 A1
20060153201 Hepper et al. Jul 2006 A1
20060167838 Lacapra Jul 2006 A1
20060179261 Rajan Aug 2006 A1
20060184589 Lees et al. Aug 2006 A1
20060200470 Lacapra et al. Sep 2006 A1
20060206547 Kulkarni et al. Sep 2006 A1
20060218135 Bisson et al. Sep 2006 A1
20060224636 Kathuria et al. Oct 2006 A1
20060224687 Popkin et al. Oct 2006 A1
20060230265 Krishna Oct 2006 A1
20060242179 Chen et al. Oct 2006 A1
20060259949 Schaefer et al. Nov 2006 A1
20060268692 Wright et al. Nov 2006 A1
20060268932 Singh et al. Nov 2006 A1
20060270341 Kim et al. Nov 2006 A1
20060271598 Wong et al. Nov 2006 A1
20060277225 Mark et al. Dec 2006 A1
20060282461 Marinescu Dec 2006 A1
20060282471 Mark et al. Dec 2006 A1
20070024919 Wong et al. Feb 2007 A1
20070027929 Whelan Feb 2007 A1
20070027935 Haselton et al. Feb 2007 A1
20070028068 Golding et al. Feb 2007 A1
20070061441 Landis et al. Mar 2007 A1
20070088702 Fridella et al. Apr 2007 A1
20070128899 Mayer Jun 2007 A1
20070136308 Tsirigotis et al. Jun 2007 A1
20070139227 Speirs et al. Jun 2007 A1
20070180314 Kawashima et al. Aug 2007 A1
20070208748 Li Sep 2007 A1
20070209075 Coffman Sep 2007 A1
20070260830 Faibish et al. Nov 2007 A1
20080046432 Anderson et al. Feb 2008 A1
20080070575 Claussen et al. Mar 2008 A1
20080114718 Anderson et al. May 2008 A1
20080177994 Mayer Jul 2008 A1
20080189468 Schmidt et al. Aug 2008 A1
20080208933 Lyon Aug 2008 A1
20080209073 Tang Aug 2008 A1
20080215836 Sutoh et al. Sep 2008 A1
20080222223 Srinivasan et al. Sep 2008 A1
20080243769 Arbour et al. Oct 2008 A1
20080263401 Stenzel Oct 2008 A1
20080282047 Arakawa et al. Nov 2008 A1
20080294446 Guo et al. Nov 2008 A1
20090007162 Sheehan Jan 2009 A1
20090013138 Sudhakar Jan 2009 A1
20090019535 Mishra et al. Jan 2009 A1
20090037500 Kirshenbaum Feb 2009 A1
20090037975 Ishikawa et al. Feb 2009 A1
20090041230 Williams Feb 2009 A1
20090049260 Upadhyayula et al. Feb 2009 A1
20090055507 Oeda Feb 2009 A1
20090055607 Schack et al. Feb 2009 A1
20090077097 Lacapra Mar 2009 A1
20090077312 Miura Mar 2009 A1
20090089344 Brown et al. Apr 2009 A1
20090094252 Wong et al. Apr 2009 A1
20090106255 Lacapra et al. Apr 2009 A1
20090106263 Khalid et al. Apr 2009 A1
20090132616 Winter et al. May 2009 A1
20090161542 Ho Jun 2009 A1
20090187915 Chew et al. Jul 2009 A1
20090204649 Wong et al. Aug 2009 A1
20090204650 Wong et al. Aug 2009 A1
20090204705 Marinov et al. Aug 2009 A1
20090210431 Marinkovic et al. Aug 2009 A1
20090210875 Bolles et al. Aug 2009 A1
20090240705 Miloushev et al. Sep 2009 A1
20090240899 Akagawa et al. Sep 2009 A1
20090265396 Ram et al. Oct 2009 A1
20090313503 Atluri et al. Dec 2009 A1
20100017643 Baba et al. Jan 2010 A1
20100030777 Panwar et al. Feb 2010 A1
20100061232 Zhou et al. Mar 2010 A1
20100082542 Feng et al. Apr 2010 A1
20100122248 Robinson et al. May 2010 A1
20100199042 Bates et al. Aug 2010 A1
20100205206 Rabines et al. Aug 2010 A1
20100211547 Kamei et al. Aug 2010 A1
20100325257 Goel et al. Dec 2010 A1
20100325634 Ichikawa et al. Dec 2010 A1
20110083185 Sheleheda et al. Apr 2011 A1
20110087696 Lacapra Apr 2011 A1
20110093471 Brockway et al. Apr 2011 A1
20110107112 Resch May 2011 A1
20110119234 Schack et al. May 2011 A1
20110255537 Ramasamy et al. Oct 2011 A1
20110296411 Tang et al. Dec 2011 A1
20110320882 Beaty et al. Dec 2011 A1
20120007239 Kolics et al. Jan 2012 A1
20120042115 Young Feb 2012 A1
20120078856 Linde Mar 2012 A1
20120144229 Nadolski Jun 2012 A1
20120150699 Trapp et al. Jun 2012 A1
20120246637 Kreeger et al. Sep 2012 A1
20130058225 Casado et al. Mar 2013 A1
20130058252 Casado et al. Mar 2013 A1
20140226666 Narasimhan et al. Aug 2014 A1
20140372599 Gutt et al. Dec 2014 A1
Foreign Referenced Citations (21)
Number Date Country
2080530 Apr 1994 CA
2512312 Jul 2004 CA
0605088 Jun 1994 EP
0605088 Feb 1996 EP
0738970 Oct 1996 EP
63010250 Jan 1988 JP
6205006 Jul 1994 JP
060332782 Dec 1994 JP
8021924 Mar 1996 JP
08328760 Dec 1996 JP
080339355 Dec 1996 JP
9016510 Jan 1997 JP
11282741 Oct 1999 JP
2000183935 May 2000 JP
566291 Dec 2008 NZ
0239696 May 2002 WO
02056181 Jul 2002 WO
2004061605 Jul 2004 WO
2006091040 Aug 2006 WO
2008130983 Oct 2008 WO
2008147973 Dec 2008 WO
Non-Patent Literature Citations (78)
Entry
Ott D., et al., “A Mechanism for TCP-Friendly Transport-level Protocol Coordination”, USENIX Annual Technical Conference, 2002, University of North Carolina at Chapel Hill, pp. 1-12.
Padmanabhan V., et al., “Using Predictive Prefetching to Improve World Wide Web Latency”, SIGCOM, 1996, pp. 1-15.
Pashalidis et al., “A Taxonomy of Single Sign-On Systems,” 2003, pp. 1-16, Royal Holloway, University of London, Egham Surray, TW20, 0EX, United Kingdom.
Pashalidis et al., “Impostor: A Single Sign-On System for Use from Untrusted Devices,” Global Telecommunications Conference, 2004, GLOBECOM '04, IEEE, Issue Date: Nov. 29-Dec. 3, 2004.Royal Holloway, University of London.
Patterson et al., “A case for redundant arrays of inexpensive disks (RAID)”, Chicago, Illinois, Jun. 1-3, 1998, in Proceedings of ACM SIGMOD conference on the Management of Data, pp. 109-116, Association for Computing Machinery, Inc., www.acm.org, last accessed on Dec. 20, 2002.
Pearson, P.K., “Fast Hashing of Variable-Length Text Strings,” Comm. of the ACM, Jun. 1990, pp. 1-4, vol. 33, No. 6.
Peterson, M., “Introducing Storage Area Networks,” Feb. 1998, InfoStor, www.infostor.com, last accessed on Dec. 20, 2002.
Preslan et al., “Scalability and Failure Recovery in a Linux Cluster File System,” in Proceedings of the 4th Annual Linux Showcase & Conference, Atlanta, Georgia, Oct. 10-14, 2000, pp. 169-180 of the Proceedings, www.usenix.org, last accessed on Dec. 20, 2002.
Response filed Jul. 6, 2007 to Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784.
Response filed Mar. 20, 2008 to Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784.
Rodriguez et al., “Parallel-access for mirror sites in the Internet,” InfoCom 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE Tel Aviv, Israel Mar. 26-30, 2000, Piscataway, NJ, USA, IEEE, US, Mar. 26, 2000, pp. 864-873, XP010376176 ISBN: 0-7803-5880-5 p. 867, col. 2, last paragraph-p. 868, col. 1, paragraph 1.
Rosen E., et al., “MPLS Label Stack Encoding”, (RFC:3032) Network Working Group, Jan. 2001, pp. 1-22, (http://www.ietf.org/rfc/rfc3032.txt).
RSYNC, “Welcome to the RSYNC Web Pages,” Retrieved from the Internet URL: http://samba.anu.edu.ut.rsync/. (Retrieved on Dec. 18, 2009).
Savage, et al., “AFRAID—A Frequently Redundant Array of Independent Disks,” Jan. 22-26, 1996, pp. 1-13, USENIX Technical Conference, San Diego, California.
“Scaling Next Generation Web Infrastructure with Content-Intelligent Switching: White Paper,” Apr. 2000, p. 1-9 Alteon Web Systems, Inc.
Soltis et al., “The Design and Performance of a Shared Disk File System for IRIX,” Mar. 23-26, 1998, pp. 1-17, Sixth NASA Goddard Space Flight Center Conference on Mass Storage and Technologies in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems, University of Minnesota.
Soltis et al., “The Global File System,” Sep. 17-19, 1996, in Proceedings of the Fifth NASA Goddard Space Flight Center Conference on Mass Storage Systems and Technologies, College Park, Maryland.
Sorenson, K.M., “Installation and Administration: Kimberlite Cluster Version 1.1.0, Rev. Dec. 2000,” Mission Critical Linux, http://oss.missioncriticallinux.corn/kimberlite/kimberlite.pdf.
Stakutis, C., “Benefits of SAN-based file system sharing,” Jul. 2000, pp. 1-4, InfoStor, www.infostor.com, last accessed on Dec. 30, 2002.
Thekkath et al., “Frangipani: A Scalable Distributed File System,” in Proceedings of the 16th ACM Symposium on Operating Systems Principles, Oct. 1997, pp. 1-14, Association for Computing Machinery, Inc.
Tulloch, Mitch, “Microsoft Encyclopedia of Security,” 2003, pp. 218, 300-301, Microsoft Press, Redmond, Washington.
Uesugi, H., English translation of office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
“VERITAS SANPoint Foundation Suite(tm) and SANPoint Foundation Suite(tm) HA: New VERITAS Volume Management and File System Technology for Cluster Environments,” Sep. 2001, VERITAS Software Corp.
Wang B., “Priority and Realtime Data Transfer Over the Best-Effort Internet”, Dissertation Abstract, Sep. 2005, ScholarWorks@UMASS.
Woo T.Y.C., “A Modular Approach to Packet Classification: Algorithms and Results”, Nineteenth Annual Conference of the IEEE Computer and Communications Societies 3(3):1213-22, Mar. 26-30, 2000, abstract only, (http://ieeexplore.ieee.org/xpl/freeabs—all.jsp?arnumber=832499).
Wilkes, J., et al., “The HP AutoRAID Hierarchical Storage System,” Feb. 1996, vol. 14, No. 1, ACM Transactions on Computer Systems.
“Windows Clustering Technologies—An Overview,” Nov. 2001, Microsoft Corp., www.microsoft.com, last accessed on Dec. 30, 2012.
Zayas, E., “AFS-3 Programmer's Reference: Architectural Overview,” Transarc Corp., version 1.0 of Sep. 2, 1991, doc. No. FS-00-D160.
“The AFS File System in Distributed Computing Environment,” www.transarc.ibm.com/Library/whitepapers/AFS/afsoverview.html, last accessed on Dec. 20, 2002.
Aguilera, Marcos K. et al., “Improving recoverability in multi-tier storage systems,” International Conference on Dependable Systems and Networks (DSN-2007), Jun. 2007, 10 pages, Edinburgh, Scotland.
Anderson, Darrell C. et al., “Interposed Request Routing for Scalable Network Storage,” ACM Transactions on Computer Systems 20(1): (Feb. 2002), pp. 1-24.
Anonymous, "How DFS Works: Remote File Systems," Distributed File System (DFS) Technical Reference, retrieved from the Internet on Feb. 13, 2009: URL <http://technet.microsoft.com/en-us/library/cc782417(WS.10,printer).aspx> (Mar. 2003).
Apple, Inc., “Mac OS X Tiger Keynote Intro. Part 2,” Jun. 2004, www.youtube.com <http://www.youtube.com/watch?v=zSBJwEmRJbY>, p. 1.
Apple, Inc., “Tiger Developer Overview Series: Working with Spotlight,” Nov. 23, 2004, www.apple.com using www.archive.org<http://web.archive.org/web/20041123005335/developer.apple.com/macosx/tiger/spotlight.html>, pp. 1-6.
“A Storage Architecture Guide,” Second Edition, 2001, Auspex Systems, Inc., www.auspex.com, last accessed on Dec. 30, 2002.
Basney et al., “Credential Wallets: A Classification of Credential Repositories Highlighting MyProxy,” TPRC 2003, Sep. 19-21, 2003, pp. 1-20.
Botzum, Keys, “Single Sign On—A Contrarian View,” Open Group Website, <http://www.opengroup.org/security/topics.htm>, Aug. 6, 2001, pp. 1-8.
Novotny et al., “An Online Credential Repository for the Grid: MyProxy,” 2001, pp. 1-8.
Cabrera et al., "Swift: A Storage Architecture for Large Objects," In Proceedings of the Eleventh IEEE Symposium on Mass Storage Systems, Oct. 1991, pp. 123-128.
Cabrera et al., “Swift: Using Distributed Disk Striping to Provide High I/O Data Rates,” Fall 1991, pp. 405-436, vol. 4, No. 4, Computing Systems.
Cabrera et al., “Using Data Striping in a Local Area Network,” 1992, technical report No. UCSC-CRL-92-09 of the Computer & Information Sciences Department of University of California at Santa Cruz.
Callaghan et al., "NFS Version 3 Protocol Specification" (RFC 1813), Jun. 1995, The Internet Engineering Task Force (IETF), www.ietf.org, last accessed on Dec. 30, 2002.
Carns et al., “PVFS: A Parallel File System for Linux Clusters,” in Proceedings of the Extreme Linux Track: 4th Annual Linux Showcase and Conference, Oct. 2000, pp. 317-327, Atlanta, Georgia, USENIX Association.
Cavale, M. R., “Introducing Microsoft Cluster Service (MSCS) in the Windows Server 2003”, Microsoft Corporation, Nov. 2002.
"CSA Persistent File System Technology," A White Paper, Jan. 1, 1999, p. 1-3, http://www.cosoa.com/white_papers/pfs.php, Colorado Software Architecture, Inc.
“Distributed File System: A Logical View of Physical Storage: White Paper,” 1999, Microsoft Corp., www.microsoft.com, <http://www.eu.microsoft.com/TechNet/prodtechnol/windows2000serv/maintain/DFS nt95>, pp. 1-26, last accessed on Dec. 20, 2002.
English Translation of Notification of Reason(s) for Refusal for JP 2002-556371 (Dispatch Date: Jan. 22, 2007).
Fan et al., "Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol", Computer Communications Review, Association for Computing Machinery, New York, USA, Oct. 1998, vol. 28, No. 4, pp. 254-265.
Gibson et al., “File Server Scaling with Network-Attached Secure Disks,” in Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics '97), Association for Computing Machinery, Inc., Jun. 15-18, 1997.
Gibson et al., “NASD Scalable Storage Systems,” Jun. 1999, USENIX99, Extreme Linux Workshop, Monterey, California.
Harrison, C., May 19, 2008 response to Communication pursuant to Article 96(2) EPC dated Nov. 9, 2007 in corresponding European patent application No. 02718824.2.
Hartman, J., “The Zebra Striped Network File System,” 1994, Ph.D. dissertation submitted in the Graduate Division of the University of California at Berkeley.
Haskin et al., “The Tiger Shark File System,” 1996, in proceedings of IEEE, Spring COMPCON, Santa Clara, CA, www.research.ibm.com, last accessed on Dec. 30, 2002.
Hu, J., Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784.
Hu, J., Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784.
Hwang et al., "Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space," IEEE Concurrency, Jan.-Mar. 1999, pp. 60-69.
International Search Report for International Patent Application No. PCT/US2008/083117 (Jun. 23, 2009).
International Search Report for International Patent Application No. PCT/US2008/060449 (Apr. 9, 2008).
International Search Report for International Patent Application No. PCT/US2008/064677 (Sep. 6, 2009).
International Search Report for International Patent Application No. PCT/US02/00720, Jul. 8, 2004.
International Search Report from International Application No. PCT/US03/41202, mailed Sep. 15, 2005.
Karamanolis, C. et al., “An Architecture for Scalable and Manageable File Services,” HPL-2001-173, Jul. 26, 2001. p. 1-114.
Katsurashima, W. et al., “NAS Switch: A Novel CIFS Server Virtualization, Proceedings,” 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies, 2003 (MSST 2003), Apr. 2003.
Kimball, C.E. et al., “Automated Client-Side Integration of Distributed Application Servers,” 13th LISA Conf., 1999, pp. 275-282 of the Proceedings.
Klayman, J., Nov. 13, 2008 e-mail to Japanese associate including instructions for response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
Klayman, J., Jul. 18, 2007 e-mail to Japanese associate including instructions for response to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371.
Kohl et al., “The Kerberos Network Authentication Service (V5),” RFC 1510, Sep. 1993. (http://www.ietf.org/rfc/rfc1510.txt?number=1510).
Korkuzas, V., Communication pursuant to Article 96(2) EPC dated Sep. 11, 2007 in corresponding European patent application No. 02718824.2-2201.
Lelil, S., "Storage Technology News: AutoVirt adds tool to help data migration projects," Feb. 25, 2011, last accessed Mar. 17, 2011, <http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1527986,00.html>.
Long et al., “Swift/RAID: A distributed RAID System”, Computing Systems, Summer 1994, vol. 7, pp. 333-359.
Modiano E., “Scheduling Algorithms for Message Transmission Over a Satellite Broadcast System,” MIT Lincoln Laboratory Advanced Network Group, Nov. 1997, pp. 1-7.
"NERSC Tutorials: I/O on the Cray T3E, 'Chapter 8, Disk Striping'," National Energy Research Scientific Computing Center (NERSC), http://hpcf.nersc.gov, last accessed on Dec. 27, 2002.
Noghani et al., "A Novel Approach to Reduce Latency on the Internet: 'Component-Based Download'," Proceedings of the Intl Conf. on Internet Computing, Las Vegas, NV, Jun. 2000, pp. 1-6.
Norton et al., “CIFS Protocol Version CIFS-Spec 0.9,” 2001, Storage Networking Industry Association (SNIA), www.snia.org, last accessed on Mar. 26, 2001.
Debnath, Biplob et al., “ChunkStash: Speeding up inline Storage Deduplication using Flash Memory,” USENIX Annual Technical Conference, 2010, pp. 1-16, usenix.org.
Oracle Secure Backup Reference Release 10.1, B14236-01, Mar. 2006, pp. 1-456.
Uesugi, H, Jul. 15, 2008 letter from Japanese associate reporting office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
"Book Review—Building Storage Networks", Aug. 6, 2002, pp. 1-2, 2nd Edition, Retrieved from: http://www.enterprisestorageforum.com/sans/features/print/010556_1441201,00.html.
Provisional Applications (1)
Number Date Country
61707960 Sep 2012 US