Various examples relate to mitigating latency in a file virtualization environment and to non-disruptively providing services to requesting client devices from a secondary data recovery data center site in the event that the primary data center site goes off-line.
In a file system virtualization environment, a configuration of the entire virtualized file system is stored in a file virtualization device. This configuration may represent one or more services corresponding to one or more virtual file systems. However, in the event of a disaster rendering the entire virtualized file system in a malfunctioning or a completely inoperable state, it is difficult to immediately switch over to a secondary site to continue providing services to clients in a non-disruptive manner. Further, in the event of a partial failure rendering a portion of the virtual file system inoperable, it is difficult to immediately switch over those affected portions to a secondary site in a non-disruptive manner. In such conventional file systems, configuration information from the file virtualization device at a primary data center site has to be manually imported and then enabled at the data recovery or secondary data center site before failed services can be provided again. Further, in such conventional systems, when the failure is resolved at the primary data center site, similar manual techniques have to be applied again to switch all or a portion of the failed services back to the primary data center site, thereby resulting in disruption of service to the client devices. Such manual techniques of failing a portion or all of the services over to the secondary site are not only time consuming, but are also highly error prone. In another scenario, if a customer deploying the file virtualization device elects to purchase a newer, faster file virtualization device, existing snapshots are difficult to transfer to the new file virtualization device. Alternatively, if the customer wishes to split a virtual volume on a file virtualization device into two or more volumes, there is no technique or system that allows the new volumes to be automatically reflected in a new virtual snapshot providing information about the splitting of the original volume into two or more volumes.
In yet another scenario, if a customer is using file server based replication for data and file virtualization device clusters are front-ending both the primary and disaster recovery (or backup) sites, conventional file virtualization systems fail to efficiently maintain the replicated configuration between the primary data center site and the secondary data recovery data center site in real-time.
Furthermore, using current file virtualization devices, maintaining configuration updates while at the same time performing operations such as reconfiguring a file switch, upgrading, renaming, mounting and/or unmounting a new volume, coalescing multiple volumes into fewer volumes, splitting one volume into a plurality of volumes, and other events that alter the configuration is complex, time consuming, and error prone. Unfortunately, current file virtualization systems fail to address the above-noted and other problems associated with resolving latency issues and failing over to a secondary site smoothly.
In an aspect, a back-up file virtualization device at a second data center site comprises a network interface component configured to communicate with an active file virtualization device at a first data center site via a communication channel on a scheduled basis; a memory configured to store machine executable code for reducing latency when re-routing at least partial client communications from the first data center site to the second data center site due to a virtualization service disruption; and one or more processors coupled to the memory and configured to execute the code in the memory to: import configuration data from the active file virtualization device, wherein the imported configuration data is stored in the memory, the configuration data representing object relationships and mapping information between components in the first data center site and the second data center site; receive an instruction for the back-up file virtualization device to begin handling at least one virtualization service that is disrupted between the active file virtualization device and one or more storage devices at the first data center site; load, from the memory, a most recent import of at least a portion of the configuration data for the one or more disrupted virtualization services; and enable the at least a portion of the loaded imported configuration data such that the back-up file virtualization device performs the disrupted virtualization service with one or more storage devices in the second data center site using the at least a portion of the imported configuration data.
In an aspect, a file virtualization system comprises a first data center site including one or more active first file virtualization devices and one or more first storage devices, wherein the first file virtualization device is configured to handle one or more virtualization services between one or more client devices and the one or more first storage devices; and a second data center site including one or more second file virtualization devices and one or more second storage devices, at least one of the second file virtualization devices further comprising: a network interface component configured to communicate with the first file virtualization device via a communication channel on a scheduled basis; a memory configured to store machine executable code for reducing latency when re-routing at least partial client communications from the first data center site to the second data center site due to a virtualization service disruption; and one or more processors coupled to the memory and configured to execute the code in the memory to: import configuration data from the first file virtualization device, wherein the imported configuration data is stored in the memory, the configuration data representing object relationships and mapping information between components in the first data center site and the second data center site; receive an instruction for the second file virtualization device to begin handling at least one virtualization service that is disrupted between the first file virtualization device and one or more storage devices at the first data center site; load, from the memory, a most recent import of at least a portion of the configuration data for the one or more disrupted virtualization services; and enable the at least a portion of the loaded imported configuration data such that the second file virtualization device performs the disrupted virtualization service with one or more storage devices in the second data center site using the at least a portion of the imported configuration data.
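By way of a non-limiting illustration only, the following Python sketch outlines the import, load, and enable sequence described in the above aspects. The class and method names (BackupFileVirtualizationDevice, import_configuration, handle_disruption) are hypothetical and are not part of any actual device implementation; configuration for services that are not disrupted remains imported but disabled, consistent with the aspects above.

    # Illustrative sketch only (hypothetical names): a back-up file virtualization
    # device stores each scheduled configuration import in a disabled state and,
    # upon instruction, enables only the portion needed for the disrupted services.
    class BackupFileVirtualizationDevice:
        def __init__(self):
            self.imported_config = {}    # most recent import, keyed by service name
            self.enabled_services = {}   # services this device is actively handling

        def import_configuration(self, config_snapshot):
            """Store a scheduled configuration import without enabling it."""
            for service, params in config_snapshot.items():
                self.imported_config[service] = dict(params, state="disabled")

        def handle_disruption(self, disrupted_services):
            """Load the most recent import and enable only the disrupted services."""
            for service in disrupted_services:
                params = self.imported_config.get(service)
                if params is None:
                    continue  # nothing was imported for this service
                params["state"] = "enabled"
                self.enabled_services[service] = params
            return self.enabled_services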
In one or more of the above aspects, the virtualization service disruption is caused by the active file virtualization device at the first data center failing, wherein the back-up file virtualization device enables configuration data to handle all virtualization services previously handled by the failed file virtualization device of the first data center.
In one or more of the above aspects, the virtualization service disruption is caused by one or more storage devices at the first data center failing, wherein the back-up file virtualization device enables a portion of the configuration data to begin handling the disrupted virtualization service with the one or more storage devices at the second data center.
In one or more of the above aspects, the network communications relating to the disrupted virtualization service at the first data center are received at the back-up file virtualization device at the second data center site. In one or more of the above aspects, all virtualization services at the first data center become disrupted, and all corresponding back-up virtualization devices at the second data center enable the configuration data to handle all the virtualization services previously handled at the first data center site.
In one or more of the above aspects, one or more virtualization services at the first data center are not disrupted between file virtualization devices and storage devices at the first data center. The corresponding back-up virtualization devices at the second data center do not enable portions of the configuration data associated with the one or more non-disrupted virtualization services.
In one or more of the above aspects, the imported configuration data received at the back-up file virtualization device includes objects in a disabled state, wherein the disabled objects are enabled upon the enabling of the at least a portion of the configuration data by the back-up virtualization device.
In one or more of the above aspects, conflicts in the back-up file virtualization device are avoided between enabled objects from the configuration data and objects already executing and being handled by the back-up virtualization device.
In one or more of the above aspects, the back-up virtualization device is configured to change a state of one or more components in the back-up file virtualization system from a read-only state to a read/write state when the back-up virtualization device operates in the active mode.
In one or more of the above aspects, at least a portion of the configuration data is exported from the back-up virtualization device to its corresponding virtualization device at the first data center site on a scheduled basis via the communication channel after at least a portion of the first data center site is back on-line, wherein the at least a portion of the imported configuration data is stored in a memory of the receiving virtualization device.
In one or more of the above aspects, the receiving virtualization device at the first data center is instructed to begin handling the previously disrupted virtualization service, wherein the receiving virtualization device loads the most recently received import of the at least a portion of the configuration data from the memory and enables a portion of the configuration data associated with the previously disrupted virtualization service. The virtualization service at the back-up virtualization device is then disabled.
For purposes of discussion, the first data center site 100 is described in terms of a virtualization site that utilizes one or more file virtualization devices 110(1)-110(n) which, when in an active state, host active services and operate to handle and execute various virtualization services between client devices and hardware devices, such as virtual file server storage devices 102(1)-102(n). Additionally, the second data center site 100′ is described in terms of a virtualization site that utilizes one or more file virtualization devices 110(1)′-110(n)′ which, when in an active state, handle and execute various virtualization services between client devices and the hardware devices, such as virtual file server storage devices 102(1)′-102(n)′. It should be noted that although only a first data center site 100 and a second data center site 100′ are illustrated and described, additional data center sites may be employed in the environment.
In this example, the network 112 comprises a publicly accessible network, for example, the Internet, which includes client devices 104(1)-104(n), although the network 112 may comprise other types of private and public networks that include other devices. Communications, such as read and write requests between client devices 104(1)-104(n) and storage devices 102(1)-102(n), take place over the network 112 according to standard network protocols, such as the HTTP, TCP/IP, request for comments (RFC) protocols, Common Internet File System (CIFS) protocols, Network File System (NFS) protocols, and the like. However, it should be noted that such protocols are exemplary and are not limiting, as other application protocols may be used.
Further, the network 112 can include local area networks (LANs), wide area networks (WANs), direct connections, other types and numbers of networks, and any combination thereof. On an interconnected set of LANs or other networks, including those based on different architectures and protocols, routers, switches, hubs, gateways, bridges, and other intermediate network devices may act as links within and between LANs and other networks to enable messages and other data to be sent between network devices. Also, communication links within and between LANs and other networks typically include twisted wire pair (e.g., Ethernet), coaxial cable, analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, and other communications links known to those skilled in the relevant arts. In essence, the network 112 can include any communication medium and method by which data may travel between client devices 104(1)-104(n), storage devices 102(1)-102(n), and file virtualization devices 110.
LANs 114 and 114′ can include a private local area network that allows communications between file virtualization devices 110 and 110′ and one or more storage devices 102(1)-102(n), although the LANs 114 and 114′ may comprise other types of private and public networks with other devices.
Storage devices 102(1)-102(n) and 102(1)′-102(n)′ comprise one or more network devices capable of performing operations such as, for example, storing files and data in a virtualized file system. In an aspect, storage devices 102(1)-102(n) and 102(1)′-102(n)′ are accessed by client devices 104(1)-104(n) via the file virtualization device 110, whereby the file virtualization device 110 selectively stores to and retrieves files from storage devices 102(1)-102(n) through the virtualization layer.
In an aspect, storage devices 102(1)-102(n) can comprise heterogeneous file server storage devices or systems provided by independent vendors. Further, according to various examples, storage devices 102(1)-102(n) can be used to form a tiered storage arrangement where high priority data and/or frequently accessed data is stored in fast, more expensive storage devices, whereas low priority and/or relatively less accessed data can be stored in slower, less expensive storage devices. Such storage tiering can be, for example, based upon a time stamp based policy engine, although other types of policies (e.g., data size based policies and the like) may be used. A series of applications run on the storage devices 102(1)-102(n) that allow the transmission of data, cookies, descriptor files, namespace data, and other file system data. The storage devices 102(1)-102(n) can provide data or receive data in response to requests from the client devices 104(1)-104(n). In an aspect, storage device 102(1)-102(n) and 102(1)′-102(n)′ may store and/or provide other data representative of requested resources, such as particular Web page(s), image(s) of physical objects, and any other objects.
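As a purely illustrative sketch of such a time stamp based policy (the function name and threshold below are hypothetical and do not reflect any particular policy engine's interface), files whose last access time falls outside a configured window may be selected for migration to a slower, less expensive tier:

    from datetime import datetime, timedelta

    def select_files_for_lower_tier(files, days_idle=90):
        # files: iterable of (path, last_access_time) pairs; returns the paths
        # that have not been accessed within the idle window and are therefore
        # candidates for migration to slower, less expensive storage.
        cutoff = datetime.now() - timedelta(days=days_idle)
        return [path for path, last_access in files if last_access < cutoff]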
Each of the storage devices 102(1)-102(n), file virtualization devices 110, and client devices 104(1)-104(n) can include a central processing unit (CPU), controller or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of each of the components and other configurations and locations for the components can be used.
Generally, the file virtualization devices 110, 110′ are enterprise-class intelligent file virtualization systems that simplify storage management and lower total storage management costs. In an aspect, the file virtualization devices 110, 110′ automate data management tasks and eliminate the disruption associated with storage management operations. The file virtualization devices 110, 110′ provide a virtual layer of intelligence between the network 112 and the respective storage devices via their corresponding LANs 114, 114′. The file virtualization devices 110, 110′ thus eliminate the inflexible mapping which typically ties client devices to physical file storage devices. The file virtualization device 110 decouples the logical access to files from their physical location, so files are free to move among different storage devices, which are now free to change without disrupting users, applications, or administrators. The file virtualization devices 110, 110′ implement intelligent file virtualization that simplifies data management further by providing automated, policy-based management across heterogeneous storage environments.
An example file virtualization device can be the ARX® Series devices provided by F5 Networks, Inc. of Seattle, Wash. The file virtualization device can be configured to plug directly into an existing IP/Ethernet network 112 and/or LAN 114, in substantial real-time. The file virtualization devices 110, 110′ are configured to virtualize heterogeneous file storage devices 102(1)-102(n), 102(1)′-102(n)′ that present file systems via NFS and/or CIFS, for example.
In an example, the file virtualization devices 110, 110′ do not connect directly to a storage area network (SAN) but instead manage SAN data presented through a gateway or storage device, without changing the existing infrastructure of the system 100. The file virtualization device(s) appear as a single data storage device to client devices 104(1)-104(n), and as a single CIFS or NFS client to their respective storage devices. In an aspect, the file virtualization devices can be configured to carry out data management operations, although the file virtualization devices can additionally or alternatively carry out storage management operations.
For example, the file virtualization devices 110, 110′ may be configured to automate common storage management tasks (e.g., data migration, storage tiering, and/or load balancing), which take place without affecting access to the file data or requiring re-configuration of file system(s) on client devices 104(1)-104(n). The file virtualization device manages metadata that tracks the location of files and directories that are distributed across storage devices, and this metadata is stored in the configuration data. The file virtualization device uses the configuration data to maintain namespace data, which is an aggregation of the underlying file systems, and to mask changes to the underlying storage systems from users and applications of client devices 104(1)-104(n). The file virtualization devices manage the various object relationships in the configuration data associated with individual volumes and shares by storing them in a configuration database, as will be described below.
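The decoupling of logical paths from physical locations can be illustrated, in a simplified and hypothetical form only, by a namespace table that maps each client-visible path to the storage device and share currently holding the file; migrating a file then amounts to updating the table entry without changing the path the client uses:

    # Hypothetical namespace table; paths, device labels, and shares are illustrative.
    namespace = {
        "/vol1/reports/q1.doc": {"storage_device": "102(1)", "share": "shareA"},
        "/vol1/reports/q2.doc": {"storage_device": "102(2)", "share": "shareB"},
    }

    def resolve(logical_path):
        # Map a client-visible path to its current physical location.
        return namespace[logical_path]

    def migrate(logical_path, new_device, new_share):
        # The logical path is unchanged, so client devices are unaffected.
        namespace[logical_path] = {"storage_device": new_device, "share": new_share}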
In an aspect, file server storage devices 102(1)-102(n) of the active data center site 100 continually replicate their housed content and other data to the storage devices 102(1)′-102(n)′ of the non-active data center site 100′, as shown by arrows 107(1)-107(n).
The file virtualization devices 110, 110′ at the respective first data center site 100 and the second data center site 100′ communicate with each other over a secure or insecure communication link or channel 103. In an aspect, the communication link 103 could be a dedicated Secure Sockets Layer (SSL) tunnel or channel 103 which is independent of the communication channels used by storage devices 102(1)-102(n) and 102(1)′-102(n)′ to replicate their corresponding stored content data.
The file virtualization device(s) 110(1)-110(n) of the first data center site 100 provide configuration data to the file virtualization devices 110(1)′-110(n)′ of the second data center site 100′ via the channel 103. In particular, each file virtualization device at a data center site has a corresponding file virtualization device at the other data center site, whereby the configuration data is periodically exported from the active file virtualization device(s) 110(1)-110(n) to the non-active file virtualization device(s) 110(1)′-110(n)′ in accordance with a predetermined schedule. The non-active file virtualization device(s) 110(1)′-110(n)′, upon receiving the exported configuration data, will store the configuration data in the configuration database(s) 150. It should be noted that the configuration data stored in the non-active file virtualization devices 110(1)′-110(n)′ is not enabled, as will be discussed in more detail below.
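A minimal sketch of this scheduled export, assuming hypothetical export_configuration and import_configuration accessors on the devices (the actual mechanism may differ and is described further below), is the following loop executed over the channel 103:

    import time

    def export_loop(active_device, backup_device, interval_seconds=300):
        # Periodically push the active device's configuration snapshot to its
        # corresponding back-up device, where it is stored but not enabled.
        while True:
            snapshot = active_device.export_configuration()   # assumed accessor
            backup_device.import_configuration(snapshot)      # stored in database 150, disabled
            time.sleep(interval_seconds)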
In an aspect, the configuration data is transmitted from the active file virtualization device(s) 110(1)-110(n) to the corresponding non-active file virtualization device(s) 110(1)′-110(n)′ in accordance with a seamless import process described in more detail in co-pending U.S. patent application Ser. No. 13/024,147, which is hereby incorporated by reference. It is contemplated that other import/export techniques may be used to replicate the configuration data among the file virtualization devices without being limiting in any way.
In general, the configuration data contains information representative of object relationships and mapping information among hardware and software components in the first and second data center sites 100, 100′. In an aspect, the configuration data may include, but is not limited to, IP addresses of network devices (e.g. servers, storage devices and the like) at the primary and secondary data center sites 100, 100′; IP addresses of services hosted on the file virtualization devices at both data center sites; session IDs of existing connections; information describing the equivalent file systems participating in a file virtualization layer for each site implemented by respective file virtualization devices; information describing the locations and capabilities of databases and processing nodes in the data center sites. The configuration data may present this data as a mapping scheme/table stored in mapping registers or other hardware, one or more cookie files and/or hash tables, although other numbers and types of systems can be used and other numbers and types of functions can be performed.
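One possible, purely illustrative shape for such configuration data (the field names and values below are hypothetical and are not the device's actual schema) is:

    configuration_data = {
        "site_common": {
            "service_vips": {"serviceA": "10.0.0.10", "serviceB": "10.0.0.11"},
            "sessions": {"client-1": "session-abc123"},
        },
        "site_specific": {
            "primary": {"storage_device_ips": ["192.168.1.10", "192.168.1.11"]},
            "secondary": {"storage_device_ips": ["192.168.2.10", "192.168.2.11"]},
        },
        "file_system_map": {
            "serviceA": {
                "volumes": ["/vol1"],
                "equivalent_backends": ["102(1)", "102(1)'"],
            },
        },
    }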
As discussed, each file virtualization device in a data center site has a corresponding mirrored file virtualization device in another data center site that can serve as a backup when there is a disruption in a virtual service.
A virtualization service becomes disrupted if one or more file virtualization devices fail and/or if one or more file storage devices 102(1)-102(n) fail. The failure can occur as a result of a catastrophic disaster, equipment breakdown, or an equipment/software upgrade.
In the event that the disruption in service is caused by one or more file virtualization devices failing, the non-active file virtualization devices 110(1)′-110(n)′ at the second data center site 100′, which correspond to the one or more failed file virtualization devices 110(1)-110(n), are activated and begin to handle virtual services between one or more client devices 104(1)-104(n) and the one or more storage devices 102(1)′-102(n)′ of the second data center site 100′ with minimal disruption and latency. The one or more file virtualization devices 110(1)-110(n) at the first data center site 100, upon becoming non-active, can then serve as a back-up and/or again become active once placed back on-line.
In an example scenario, the first data center site may include three file virtualization devices, whereby only one file virtualization device becomes inactive while the remaining two file virtualization devices remain active. In this example scenario, the file virtualization device at the second data center which corresponds to the inactive file virtualization device in the first data center site becomes active and begins handling network services between one or more client devices 104(1)-104(n) and the one or more storage devices 102(1)′-102(n)′ of the second data center site. However, considering that the remaining file virtualization devices at the first data center site are active, their corresponding file virtualization devices at the second data center site do not need to be activated.
In an example scenario, all of the file virtualization devices 110(1)-110(n) in the first data center site 100 may become inactive and go off-line. In this example scenario, all of the corresponding file virtualization devices 110(1)′-110(n)′ in the second data center site 100′ become active and begin handling network services between one or more client devices 104(1)-104(n) and the one or more storage devices 102(1)′-102(n)′ of the second data center site 100′ with minimal disruption and latency. This example scenario is referred to as "passive-active," considering that all of the file virtualization devices at one data center site are inactive.
The file virtualization devices 110, 110′ are used to implement a virtualization layer that is transparent to the client devices 104(1)-104(n), whereby the file virtualization devices 110, 110′ are able to communicate with the selected file server storage devices 102(1)-102(n) over the virtualization layer. Each file virtualization device is configured to store configuration data which describes a state of the complete virtual file system for the data center site at a point in time. The configuration data is able to be sent from a file virtualization device in an active data center to one or more other file virtualization devices in a non-active data center when a fail-over occurs. In particular, the configuration data is loaded and enabled by the non-active file virtualization device to reproduce the complete virtual file system of the data center site that will be going off-line, wherein reproduction of the complete virtual file system occurs quickly to allow the newly active data center to take over without disrupting services provided to the users of client devices 104(1)-104(n).
Each active file virtualization device handles a plurality of virtualization services between a plurality of client devices 104 and a plurality of storage devices 102. In particular, one type of virtualization service performed by a file virtualization device can involve the file virtualization device storing and/or retrieving portions of data among one or more storage devices for one file virtualization service. In the event that one or more file storage devices 102 fail or stop functioning properly, the one or more file virtualization devices tasked with handling file virtualization services between client devices and the failed file storage device(s) will consider the storage device 102 to be inactive, and will thus initiate the fail-over process to the non-active file virtualization device. In particular to this example event, the corresponding file virtualization device at the second data center site, upon being activated, will only handle the virtualization services which involve the one or more storage devices in the second data center which correspond with the one or more failed storage devices in the first data center.
For example, a first data center site may contain three file virtualization devices (file virtualization devices A, B and C) and four storage devices (storage devices A, B, C, and D). Similarly, a second data center site may contain three file virtualization devices (file virtualization devices A′, B′ and C′) and four storage devices (storage devices A′, B′, C′, and D′), whereby the file virtualization devices and storage devices correspond to their respective paired devices in the first data center. In the example, file virtualization device A may handle a virtualization service A that has virtual IP addresses which require file virtualization device A to access storage devices A and B. Additionally, in the example, file virtualization device B may handle virtualization service B that has virtual IP addresses which require file virtualization device B to access storage devices B and C. Moreover, in the example, file virtualization device C may handle virtualization services C1 and C2 that have virtual IP addresses which require file virtualization device C to access storage devices A and D for virtualization service C1 and storage devices C and D for virtualization service C2. In the example, if storage device A fails, file virtualization devices A and C are affected as their virtualization services have virtual IP addresses which require access to storage device A (and potentially other storage devices). Accordingly, virtualization services A and C1 must be handled by the corresponding file virtualization devices A′ and C′ to ensure that virtualization services A and C1 continue to be provided to the client device with minimal disruption and latency. In particular, file virtualization devices A′ and C′ activate and enable configuration data for virtualization services A and C1, such that file virtualization devices A′ and C′ are able to provide these services between the one or more client devices and the storage device A′. In the present example, file virtualization device C also accesses storage devices C and D when performing virtual service C2. Considering that storage devices C and D are functioning properly in this example, file virtualization device C continues to perform virtual service C2 and thus does not fail over virtual service C2 to file virtualization device C′. This is an "active-active" scenario, wherein one or more file virtualization devices in both data center sites are in active operation.
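The example above can be restated, for illustration only, as a simple dependency lookup in which each virtualization service lists the storage devices its virtual IP addresses require; the maps below mirror the scenario in the text and are otherwise hypothetical:

    service_dependencies = {
        "A":  {"device": "A", "storage": {"A", "B"}},
        "B":  {"device": "B", "storage": {"B", "C"}},
        "C1": {"device": "C", "storage": {"A", "D"}},
        "C2": {"device": "C", "storage": {"C", "D"}},
    }

    def services_to_fail_over(failed_storage):
        # Return the services whose required storage set includes the failed device.
        return [name for name, dep in service_dependencies.items()
                if failed_storage in dep["storage"]]

    print(services_to_fail_over("A"))  # ['A', 'C1'] -> taken over by devices A' and C'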
The input-output interface 124 is configured to allow the file virtualization device 110 to communicate with other network devices, such as another file virtualization device 110′, via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. of Fort Lauderdale, Fla. Input-output device 142 may in some examples connect to multiple input-output devices external to file virtualization device 110. Some examples of the input-output device 142 may be configured to provide storage or an installation medium, while others may provide a universal serial bus (USB) interface for receiving USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. Still other examples of the input-output device 142 may be a bridge between the data plane bus 130, control plane bus 140, and an external communication bus, such as: a USB bus; an Apple Desktop Bus; an RS-232 serial connection; a SCSI bus; a FireWire bus; a FireWire 800 bus; an Ethernet bus; an AppleTalk bus; a Gigabit Ethernet bus; an Asynchronous Transfer Mode bus; a HIPPI bus; a Super HIPPI bus; a SerialPlus bus; a SCI/LAMP bus; a FibreChannel bus; or a Serial Attached small computer system interface bus. Further, file virtualization device 110A can be single powered or dual-powered depending upon specific user needs.
In an aspect, the data plane 122 of the file virtualization device 110 functions to provide a data path that handles non-metadata operations at wire speed. The control plane 132 of the file virtualization device 110 functions to provide handling of operations that affect metadata and migration of file data to and from storage devices 102(1)-102(n). In some other examples, control plane memory 138 can store an operating system used for file virtualization device 110, and log files generated during operation of file virtualization device 110. Each path provided by data plane 122 and control plane 132, respectively, has dedicated processing and memory resources and each can scale independently based upon varying network and storage conditions. In an aspect, the control plane 132 is configured to perform certain functions such as logging, reporting, port mirroring, and hosting Simple Network Management Protocol (SNMP) and other protocols.
Data plane CPU 126 and control plane CPU 136 can comprise one or more computer readable media and logic circuits that respond to and process instructions fetched from the data plane memory 128; one or more microprocessor units, one or more microprocessors, one or more microcontrollers, or central processing units with a single processing core or a plurality of processing cores.
The data plane memory 128 and the control plane memory 138 can comprise: Static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Dynamic random access memory (DRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), synchronous DRAM (SDRAM), JEDEC SRAM, PC100 SDRAM, Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Direct Rambus DRAM (DRDRAM), Ferroelectric RAM (FRAM), disk type memory, tape memory, spinning storage media, or any other type of memory device capable of supporting the systems and methods described herein.
The data plane CPU 126 and the control plane CPU 136 execute one or more programs of stored instructions of one or more aspects which perform some or all of the processes described below in accordance with mitigating latency and minimizing interruption by activating the non-active file virtualization device after the first data center site 100 goes off-line. In particular, the data plane CPU 126 and the control plane CPU 136 communicate with the file virtualization device 110′ at the non-active data center site 100′ and instruct it to activate so that communications from the client devices 104(1)-104(n) are able to be redirected or rerouted to that file virtualization device 110′ after the second data center site 100′ has become active and can handle the client communications.
File virtualization device 110 can be configured such that data plane CPU 126 and control plane CPU 136 may also include a computer readable medium having instructions stored thereon for automatically synchronizing configuration information to a non-active file virtualization device 110′ in the event that the active data center site 100 goes off-line.
By way of example only, data plane 122 and control plane 132 in file virtualization device 110A are configured to translate client requests received from client devices 104(1)-104(n) over network 112 at the input-output interface 124 of data plane 122 into requests from the file virtualization device 110 to one or more storage devices 102(1)-102(n) over LAN 114. Upon receipt of the request, data plane 122 communicates with control plane 132 to search for virtual snapshot data related to the request in a configuration database 150. Control plane 132 returns data related to the request to data plane 122, which then forwards it to file data and metadata stores in storage devices 102(1)-102(n). Alternatively, file virtualization device 110 may be configured to receive responses from file data and metadata stores in storage devices 102(1)-102(n). In such a scenario, file virtualization device 110 can store the outcome of various file operations into a virtual snapshot.
In an aspect, the configuration database 150 can be a relational database including various fields, records, and files, used in conjunction with a database management system, although other types of databases may also be used.
As stated above, files and stored objects are continuously replicated between the storage devices 102(1)-102(n) in the first data center 100 and the storage devices 102(1)′-102(n)′ in the second data center 100′, as represented by arrows 107(1)-107(n).
As indicated in Block 206, the process repeats back to Block 204 until the one or more non-active file virtualization devices 110(1)′-110(n)′ receive an instruction from a network administrator indicating that there have been one or more virtualization service disruptions at the first data center site 100. In an aspect, the virtualization service disruption may be due to failure of one or more file virtualization devices 110(1)-110(n) and/or one or more storage devices 102(1)-102(n) at the first data center site 100. In an aspect, the instruction provides information as to which of the file virtualization devices 110(1)′-110(n)′ at the second data center site 100′ will become active and which virtual services will need to be handled.
In an aspect, based on the information in the instruction, the one or more file virtualization devices 110(1)′-110(n)′ load, from corresponding configuration database(s) 150, the configuration data most recently imported (Block 208). In an aspect, the configuration data will contain all of the parameters (e.g., site common parameters, site specific parameters, and information regarding the virtual services which need to be taken over) which relate to the virtualization services that were being handled by the active file virtualization devices 110(1)-110(n).
In particular, the one or more file virtualization devices 110(1)′-110(n)′ will enable only the parameters associated with the one or more virtualization services that the back-up file virtualization devices will need to take over. Once these parameters are enabled at the back-up virtualization device(s) 110(1)′-110(n)′, they will be able to handle the identified virtual services between the client devices 104(1)-104(n) and the storage devices 102(1)′-102(n)′ in the second data center 100′ (Block 210). In particular, once the configuration data is enabled by the file virtualization devices 110(1)′-110(n)′, the now-active file virtualization devices 110(1)′-110(n)′ are able to use the IP addresses for each virtualized service to effectively access contents from the storage devices 102(1)′-102(n)′ in the second data center 100′.
As discussed above, the present system and method can be applied in "active-active" failover scenarios or "passive-active" failover scenarios. For the "passive-active" failover scenario, all of the active file virtualization devices 110(1)-110(n) become inactive, whereby all of the file virtualization devices 110(1)′-110(n)′ become enabled to thereafter handle, at the second data center site 100′, all network communications previously handled at the active first data center site 100. For the "active-active" failover scenario, at least one set of corresponding file virtualization devices 110(1)-110(n), 110(1)′-110(n)′ remains active, as the service disruption is caused by one or more failed storage devices 102(1)-102(n).
Upon enabling the parameters from the configuration data, the file virtualization devices 110(1)′-110(n)′ will resolve any conflicts that may arise between parameters that have been newly enabled and parameters that are already being executed at the second data center site 100′ (Block 212). In an aspect, the file virtualization devices 110(1)′-110(n)′ will allow already running parameters to continue to run while the newly enabled conflicting parameters will not be executed.
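A minimal sketch of this conflict rule, assuming hypothetical parameter dictionaries (already-running parameters win and conflicting newly enabled parameters are skipped), is:

    def merge_parameters(running, newly_enabled):
        # running: parameters already executing at the second data center site.
        # newly_enabled: parameters just enabled from the imported configuration.
        merged = dict(running)
        skipped = []
        for key, value in newly_enabled.items():
            if key in running and running[key] != value:
                skipped.append(key)     # keep the already-running parameter
            else:
                merged[key] = value     # no conflict, so the new parameter runs
        return merged, skipped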
Once the file virtualization devices 110(1)′-110(n)′ and the other components in the second data center site 100′ are active and able to handle network traffic, client traffic is rerouted or redirected to the active second data center site 100′ via the file virtualization devices 110(1)′-110(n)′ (Block 214).
Thereafter, the roles between the file virtualization devices and storage devices in the first and second data centers are reversed for the fail-over virtualization service(s). In particular, content data of the storage devices 102(1)′-102(n)′ in the second data center 100′ are replicated in the storage devices 102(1)-102(n) in the first data center 100.
This process repeats back to Block 214 until the file virtualization device(s) 110(1)′-110(n)′ receive instructions that the virtualization service(s) are to be passed back to the file virtualization device(s) 110(1)-110(n) at the first data center site 100 (Block 218). Once the file virtualization device(s) 110(1)′-110(n)′ receive confirmation that the file virtualization devices 110(1)-110(n) are back on-line and active, the file virtualization device(s) 110(1)′-110(n)′ terminate handling the virtualization service(s) and go back into stand-by mode for those virtualization services (Block 220). The process repeats back to Block 204, wherein the file virtualization device(s) 110(1)′-110(n)′ import configuration data from the file virtualization device(s) 110(1)-110(n).
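For illustration only, this fail-back sequence (Blocks 218 and 220) can be sketched with the same hypothetical accessors used in the earlier sketches; the primary device loads and enables its most recent import for the returned services while the back-up device disables them and returns to stand-by:

    def fail_back(backup_device, primary_device, services):
        # Export the back-up device's configuration to the primary device,
        # enable the returned services there, and disable them on the back-up.
        snapshot = backup_device.export_configuration()   # assumed accessor
        primary_device.import_configuration(snapshot)     # stored in memory/database
        primary_device.handle_disruption(services)        # load most recent import and enable
        for service in services:
            backup_device.enabled_services.pop(service, None)  # back-up returns to stand-by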
Having thus described the basic concepts, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. For example, different non-TCP networks using different types of file virtualization devices may be selected by a system administrator. The order in which the measures are implemented may also be altered. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the examples. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the processes to any order.
| 20090204650 | Wong et al. | Aug 2009 | A1 |
| 20090204705 | Marinov et al. | Aug 2009 | A1 |
| 20090210431 | Marinkovic et al. | Aug 2009 | A1 |
| 20090217163 | Jaroker | Aug 2009 | A1 |
| 20090254592 | Marinov et al. | Oct 2009 | A1 |
| 20090265396 | Ram et al. | Oct 2009 | A1 |
| 20090300161 | Pruitt et al. | Dec 2009 | A1 |
| 20100064001 | Daily | Mar 2010 | A1 |
| 20100070476 | O'Keefe et al. | Mar 2010 | A1 |
| 20100179984 | Sebastian | Jul 2010 | A1 |
| 20100228819 | Wei | Sep 2010 | A1 |
| 20100250497 | Redlich et al. | Sep 2010 | A1 |
| 20110087696 | Lacapra | Apr 2011 | A1 |
| 20120117028 | Gold et al. | May 2012 | A1 |
| 20120150805 | Pafumi et al. | Jun 2012 | A1 |
| Number | Date | Country |
|---|---|---|
| 2003300350 | Jul 2004 | AU |
| 2080530 | Apr 1994 | CA |
| 2512312 | Jul 2004 | CA |
| 0605088 | Jul 1994 | EP |
| 0738970 | Oct 1996 | EP |
| 1081918 | Mar 2001 | EP |
| 63010250 | Jan 1988 | JP |
| 06-205006 | Jul 1994 | JP |
| 06-332782 | Dec 1994 | JP |
| 8021924 | Mar 1996 | JP |
| 08-328760 | Dec 1996 | JP |
| 08-339355 | Dec 1996 | JP |
| 9016510 | Jan 1997 | JP |
| 11282741 | Oct 1999 | JP |
| 2000183935 | Jun 2000 | JP |
| 566291 | Dec 2008 | NZ |
| 0058870 | Oct 2000 | WO |
| 0239696 | May 2002 | WO |
| 02056181 | Jul 2002 | WO |
| 2004061605 | Jul 2004 | WO |
| 2006091040 | Aug 2006 | WO |
| 2008130983 | Oct 2008 | WO |
| 2008147973 | Dec 2008 | WO |
| Entry |
|---|
| Baer, T., et al., “The elements of Web services” ADTmag.com, Dec. 1, 2002, pp. 1-6, (http://www.adtmag.com). |
| Blue Coat, “Technology Primer: CIFS Protocol Optimization,” Blue Coat Systems Inc., 2007, pp. 1-3, (http://www.bluecoat.com). |
| “Diameter MBLB Support Phase 2: Generic Message Based Load Balancing (GMBLB)”, last accessed Mar. 29, 2010, pp. 1-10, (http://peterpan.f5net.com/twiki/bin/view/TMOS/TMOSDiameterMBLB). |
| F5 Networks Inc., “Big-IP® Reference Guide, version 4.5”, F5 Networks Inc., Sep. 2002, pp. 11-1-11-32, Seattle, Washington. |
| F5 Networks Inc., “3-DNS® Reference Guide, version 4.5”, F5 Networks Inc., Sep. 2002, pp. 2-1-2-28, 3-1-3-12, 5-1-5-24, Seattle, Washington. |
| F5 Networks Inc., “Using F5's 3-DNS Controller to Provide High Availability Between Two or More Data Centers”, F5 Networks Inc., Aug. 2001, pp. 1-4, Seattle, Washington, (http://www.f5.com/f5products/3dns/relatedMaterials/3DNSRouting.html). |
| F5 Networks Inc., “Deploying the Big-IP LTM for Diameter Traffic Management” F5® Deployment Guide, Publication date Sep. 2010, Version 1.2, pp. 1-19. |
| F5 Networks Inc., “F5 Diameter RM”, Powerpoint document, Jul. 16, 2009, pp. 1-7. |
| F5 Networks Inc., “Routing Global Internet Users to the Appropriate Data Center and Applications Using F5's 3-DNS Controller”, F5 Networks Inc., Aug. 2001, pp. 1-4, Seattle, Washington, (http://www.f5.com/f5products/3dns/relatedMaterials/UsingF5.html). |
| F5 Networks Inc., “Case Information Log for ‘Issues with BoNY upgrade to 4.3’”, as early as Feb. 2008. |
| F5 Networks Inc., “F5 WANJet CIFS Acceleration”, White Paper, F5 Networks Inc., Mar. 2006, pp. 1-5, Seattle, Washington. |
| Fajardo V., “Open Diameter Software Architecture,” Jun. 25, 2004, pp. 1-6, Version 1.0.7. |
| Gupta et al., “Algorithms for Packet Classification”, Computer Systems Laboratory, Stanford University, CA, Mar./Apr. 2001, pp. 1-29. |
| Heinz G., “Priorities in Stream Transmission Control Protocol (SCTP) Multistreaming”, Thesis submitted to the Faculty of the University of Delaware, Spring 2003, pp. 1-35. |
| Ilvesmäki M., et al., “On the capabilities of application level traffic measurements to differentiate and classify Internet traffic”, Presented in SPIE's International Symposium ITcom, Aug. 19-21, 2001, pp. 1-11, Denver, Colorado. |
| Internet Protocol, “DARPA Internet Program Protocol Specification”, (RFC:791), Information Sciences Institute, University of Southern California, Sep. 1981, pp. 1-49. |
| Kawamoto, D., “Amazon files for Web services patent”, CNET News.com, Jul. 28, 2005, pp. 1-2, last accessed May 4, 2006, (http://news.com). |
| LaMonica M., “Infravio spiffs up Web services registry idea”, CNET News.com, May 11, 2004, pp. 1-2, last accessed Sep. 20, 2004, (http://www.news.com). |
| Mac Vittie, L., “Message-Based Load Balancing: Using F5 solutions to address the challenges of scaling Diameter, Radius, and message-oriented protocols”, F5 Technical Brief, 2005, pp. 1-9, F5 Networks Inc., Seattle, Washington. |
| “Market Research & Releases, CMPP PoC documentation”, last accessed Mar. 29, 2010, (http://mainstreet/sites/PD/Teams/ProdMgmt/MarketResearch/Universal). |
| “Market Research & Releases, Solstice Diameter Requirements”, last accessed Mar. 29, 2010, (http://mainstreet/sites/PD/Teams/ProdMgmt/MarketResearch/Universal). |
| Modiano E., “Scheduling Algorithms for Message Transmission Over a Satellite Broadcast System”, MIT Lincoln Laboratory Advanced Network Group, Nov. 1997, pp. 1-7. |
| Nichols K., et al., “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”, (RFC:2474) Network Working Group, Dec. 1998, pp. 1-19, last accessed Oct. 8, 2012, (http://www.ietf.org/rfc/rfc2474.txt). |
| Ott D., et al., “A Mechanism for TCP-Friendly Transport-level Protocol Coordination”, USENIX Annual Technical Conference, 2002, University of North Carolina at Chapel Hill, pp. 1-12. |
| Padmanabhan V., et al., “Using Predictive Prefetching to Improve World Wide Web Latency”, SIGCOM, 1996, pp. 1-15. |
| “Respond to server depending on TCP::client_port”, DevCentral Forums iRules, pp. 1-6, last accessed Mar. 26, 2010, (http://devcentral.f5.com/Default.aspx?tabid=53&forumid=5&tpage=1&v). |
| Rosen E., et al., “MPLS Label Stack Encoding”, (RFC:3032) Network Working Group, Jan. 2001, pp. 1-22, last accessed Oct. 8, 2012, (http://www.ietf.org/rfc/rfc3032.txt). |
| Schilit B., “Bootstrapping Location-Enhanced Web Services”, University of Washington, Dec. 4, 2003, (http://www.cs.washington.edu/news/colloq.info.html). |
| Seeley R., “Can Infravio technology revive UDDI?”, ADTmag.com, Oct. 22, 2003, last accessed Sep. 30, 2004, (http://www.adtmag.com). |
| Shohoud, Y., “Building XML Web Services with VB .NET and VB 6”, Addison Wesley, 2002, pp. 1-14. |
| Sommers F., “What's New in UDDI 3.0, Part 1”, Web Services Papers, Jan. 27, 2003, pp. 1-4, last accessed Mar. 31, 2004, (http://www.webservices.org/index.php/article/articleprint/871/-1/24/). |
| Sommers F., “What's New in UDDI 3.0, Part 2”, Web Services Papers, Mar. 2, 2003, pp. 1-8, last accessed Nov. 1, 2007, (http://www.web.archive.org/web/20040620131006/). |
| Sommers F., “What's New in UDDI 3.0, Part 3”, Web Services Papers, Sep. 2, 2003, pp. 1-4, last accessed Mar. 31, 2007, (http://www.webservices.org/index.php/article/articleprint/894/-1/24/). |
| Sleeper B., “The Evolution of UDDI” UDDI.org White Paper, The Stencil Group, Inc., Jul. 19, 2002, pp. 1-15, San Francisco, California. |
| Sleeper B., “Why UDDI Will Succeed, Quietly: Two Factors Push Web Services Forward”, The Stencil Group, Inc., Apr. 2001, pp. 1-7, San Francisco, California. |
| “UDDI Overview”, Sep. 6, 2000, pp. 1-21, uddi.org, (http://www.uddi.org/). |
| “UDDI Version 3.0.1” UDDI Spec Technical Committee Specification, Oct. 14, 2003, pp. 1-383, uddi.org, (http://www.uddi.org/). |
| “UDDI Technical White Paper,” Sep. 6, 2000, pp. 1-12, uddi.org, (http://www.uddi.org/). |
| Wang B., “Priority and realtime data transfer over the best-effort Internet”, Dissertation Abstract, 2005, ScholarWorks@UMASS. |
| Wikipedia, “Diameter (protocol)”, pp. 1-11, last accessed Oct. 27, 2010, (http://en.wikipedia.org/wiki/Diameter_(protocol)). |
| Woo T.Y.C., “A Modular Approach to Packet Classification: Algorithms and Results”, Bell Laboratories, Lucent Technologies, Mar. 2000, pp. 1-10. |
| “The AFS File System in Distributed Computing Environment,” www.transarc.ibm.com/Library/whitepapers/AFS/afsoverview.html, last accessed on Dec. 20, 2002. |
| Aguilera, Marcos K. et al., “Improving recoverability in multi-tier storage systems,” International Conference on Dependable Systems and Networks (DSN-2007), Jun. 2007, 10 pages, Edinburgh, Scotland. |
| Anderson, Darrell C. et al., “Interposed Request Routing for Scalable Network Storage,” ACM Transactions on Computer Systems 20(1): (Feb. 2002), pp. 1-24. |
| Anderson et al., “Serverless Network File System,” in the 15th Symposium on Operating Systems Principles, Dec. 1995, Association for Computing Machinery, Inc. |
| Anonymous, “How DFS Works: Remote File Systems,” Distributed File System (DFS) Technical Reference, retrieved from the Internet on Feb. 13, 2009: URL: <http://technet.microsoft.com/en-us/library/cc782417(WS.10,printer).aspx> (Mar. 2003). |
| Apple, Inc., “Mac OS X Tiger Keynote Intro. Part 2,” Jun. 2004, www.youtube.com <http://www.youtube.com/watch?v=zSBJwEmRJbY>, p. 1. |
| Apple, Inc., “Tiger Developer Overview Series: Working with Spotlight,” Nov. 23, 2004, www.apple.com using www.archive.org <http://web.archive.org/web/20041123005335/developer.apple.com/macosx/tiger/spotlight.html>, pp. 1-6. |
| “A Storage Architecture Guide,” Second Edition, 2001, Auspex Systems, Inc., www.auspex.com, last accessed on Dec. 30, 2002. |
| Basney et al., “Credential Wallets: A Classification of Credential Repositories Highlighting MyProxy,” TPRC 2003, Sep. 19-21, 2003, pp. 1-20. |
| Botzum, Keys, “Single Sign on—A Contrarian View,” Open Group Website, <http://www.opengroup.org/security/topics.htm>, Aug. 6, 2001, pp. 1-8. |
| Cabrera et al., “Swift: A Storage Architecture for Large Objects,” In Proceedings of the Eleventh IEEE Symposium on Mass Storage Systems, Oct. 1991, pp. 123-128. |
| Cabrera et al., “Swift: Using Distributed Disk Striping to Provide High I/O Data Rates,” Fall 1991, pp. 405-436, vol. 4, No. 4, Computing Systems. |
| Cabrera et al., “Using Data Striping in a Local Area Network,” 1992, technical report No. UCSC-CRL-92-09 of the Computer & Information Sciences Department of University of California at Santa Cruz. |
| Callaghan et al., “NFS Version 3 Protocol Specification” (RFC 1813), Jun. 1995, The Internet Engineering Task Force (IETF), www.ietf.org, last accessed on Dec. 30, 2002. |
| Carns et al., “PVFS: A Parallel File System for Linux Clusters,” in Proceedings of the Extreme Linux Track: 4th Annual Linux Showcase and Conference, Oct. 2000, pp. 317-327, Atlanta, Georgia, USENIX Association. |
| Cavale, M. R., “Introducing Microsoft Cluster Service (MSCS) in the Windows Server 2003”, Microsoft Corporation, Nov. 2002. |
| “CSA Persistent File System Technology,” A White Paper, Jan. 1, 1999, pp. 1-3, http://www.cosoa.com/white_papers/pfs.php, Colorado Software Architecture, Inc. |
| “Distributed File System: A Logical View of Physical Storage: White Paper,” 1999, Microsoft Corp., www.microsoft.com, <http://www.eu.microsoft.com/TechNet/prodtechnol/windows2000serv/maintain/DFSnt95>, pp. 1-26, last accessed on Dec. 20, 2002. |
| English Translation of Notification of Reason(s) for Refusal for JP 2002-556371 (Dispatch Date: Jan. 22, 2007). |
| Fan et al., “Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol”, Computer Communications Review, Association for Computing Machinery, New York, USA, Oct. 1998, vol. 28, No. 4, pp. 254-265. |
| Farley, M., “Building Storage Networks,” Jan. 2000, McGraw Hill, ISBN 0072120509. |
| Gibson et al., “File Server Scaling with Network-Attached Secure Disks,” in Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics '97), Association for Computing Machinery, Inc., Jun. 15-18, 1997. |
| Gibson et al., “NASD Scalable Storage Systems,” Jun. 1999, USENIX99, Extreme Linux Workshop, Monterey, California. |
| Harrison, C., May 19, 2008 response to Communication pursuant to Article 96(2) EPC dated Nov. 9, 2007 in corresponding European patent application No. 02718824.2. |
| Hartman, J., “The Zebra Striped Network File System,” 1994, Ph.D. dissertation submitted in the Graduate Division of the University of California at Berkeley. |
| Haskin et al., “The Tiger Shark File System,” 1996, in proceedings of IEEE, Spring COMPCON, Santa Clara, CA, www.research.ibm.com, last accessed on Dec. 30, 2002. |
| Hu, J., Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784. |
| Hu, J., Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784. |
| Hwang et al., “Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space,” IEEE Concurrency, Jan.-Mar. 1999, pp. 60-69. |
| International Search Report for International Patent Application No. PCT/US2008/083117 (Jun. 23, 2009). |
| International Search Report for International Patent Application No. PCT/US2008/060449 (Apr. 9, 2008). |
| International Search Report for International Patent Application No. PCT/US2008/064677 (Sep. 6, 2009). |
| International Search Report for International Patent Application No. PCT/US02/00720, Jul. 8, 2004. |
| International Search Report from International Application No. PCT/US03/41202, mailed Sep. 15, 2005. |
| Karamanolis, C. et al., “An Architecture for Scalable and Manageable File Services,” HPL-2001-173, Jul. 26, 2001, pp. 1-114. |
| Katsurashima, W. et al., “NAS Switch: A Novel CIFS Server Virtualization,” Proceedings, 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies (MSST 2003), Apr. 2003. |
| Kimball, C.E. et al., “Automated Client-Side Integration of Distributed Application Servers,” 13th LISA Conf, 1999, pp. 275-282 of the Proceedings. |
| Klayman, J., Nov. 13, 2008 e-mail to Japanese associate including instructions for response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. |
| Klayman, J., response filed by Japanese associate to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371. |
| Klayman, J., Jul. 18, 2007 e-mail to Japanese associate including instructions for response to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371. |
| Kohl et al., “The Kerberos Network Authentication Service (V5),” RFC 1510, Sep. 1993, (http://www.ietf.org/rfc/rfc1510.txt?number=1510). |
| Korkuzas, V., Communication pursuant to Article 96(2) EPC dated Sep. 11, 2007 in corresponding European patent application No. 02718824.2-2201. |
| Lelil, S., “Storage Technology News: AutoVirt adds tool to help data migration projects,” Feb. 25, 2011, last accessed Mar. 17, 2011, <http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1527986,00.html>. |
| Long et al., “Swift/RAID: A distributed RAID System”, Computing Systems, Summer 1994, vol. 7, pp. 333-359. |
| “NERSC Tutorials: I/O on the Cray T3E, ‘Chapter 8, Disk Striping’,” National Energy Research Scientific Computing Center (NERSC), http://hpcf.nersc.gov, last accessed on Dec. 27, 2002. |
| Noghani et al., “A Novel Approach to Reduce Latency on the Internet: ‘Component-Based Download’,” Proceedings of the Intl Conf. on Internet Computing, Las Vegas, NV, Jun. 2000, pp. 1-6. |
| Norton et al., “CIFS Protocol Version CIFS-Spec 0.9,” 2001, Storage Networking Industry Association (SNIA), www.snia.org, last accessed on Mar. 26, 2001. |
| Novotny et al., “An Online Credential Repository for the Grid: MyProxy,” 2001, pp. 1-8. |
| Pashalidis et al., “A Taxonomy of Single Sign-On Systems,” 2003, pp. 1-16, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom. |
| Pashalidis et al., “Impostor: A Single Sign-On System for Use from Untrusted Devices,” Global Telecommunications Conference, 2004, GLOBECOM '04, IEEE, Issue Date: Nov. 29-Dec. 3, 2004, Royal Holloway, University of London. |
| Patterson et al., “A case for redundant arrays of inexpensive disks (RAID)”, Chicago, Illinois, Jun. 1-3, 1988, in Proceedings of ACM SIGMOD conference on the Management of Data, pp. 109-116, Association for Computing Machinery, Inc., www.acm.org, last accessed on Dec. 20, 2002. |
| Pearson, P.K., “Fast Hashing of Variable-Length Text Strings,” Comm. of the ACM, Jun. 1990, pp. 1-4, vol. 33, No. 6. |
| Peterson, M., “Introducing Storage Area Networks,” Feb. 1998, InfoStor, www.infostor.com, last accessed on Dec. 20, 2002. |
| Preslan et al., “Scalability and Failure Recovery in a Linux Cluster File System,” in Proceedings of the 4th Annual Linux Showcase & Conference, Atlanta, Georgia, Oct. 10-14, 2000, pp. 169-180 of the Proceedings, www.usenix.org, last accessed on Dec. 20, 2002. |
| Response filed Jul. 6, 2007 to Office action dated Feb. 6, 2007 for related patent U.S. Appl. No. 10/336,784. |
| Response filed Mar. 20, 2008 to Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784. |
| Rodriguez et al., “Parallel-access for mirror sites in the Internet,” InfoCom 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE Tel Aviv, Israel, Mar. 26-30, 2000, Piscataway, NJ, USA, IEEE, Mar. 26, 2000, pp. 864-873, XP010376176, ISBN: 0-7803-5880-5, p. 867, col. 2, last paragraph - p. 868, col. 1, paragraph 1. |
| RSYNC, “Welcome to the RSYNC Web Pages,” Retrieved from the Internet URL: http://samba.anu.edu.au/rsync/ (Retrieved on Dec. 18, 2009). |
| Savage, et al., “AFRAID—A Frequently Redundant Array of Independent Disks,” Jan. 22-26, 1996, pp. 1-13, USENIX Technical Conference, San Diego, California. |
| “Scaling Next Generation Web Infrastructure with Content-Intelligent Switching: White Paper,” Apr. 2000, pp. 1-9, Alteon Web Systems, Inc. |
| Soltis et al., “The Design and Performance of a Shared Disk File System for IRIX,” Mar. 23-26, 1998, pp. 1-17, Sixth NASA Goddard Space Flight Center Conference on Mass Storage and Technologies in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems, University of Minnesota. |
| Soltis et al., “The Global File System,” Sep. 17-19, 1996, in Proceedings of the Fifth NASA Goddard Space Flight Center Conference on Mass Storage Systems and Technologies, College Park, Maryland. |
| Sorenson, K.M., “Installation and Administration: Kimberlite Cluster Version 1.1.0, Rev. Dec. 2000,” Mission Critical Linux, http://oss.missioncriticallinux.com/kimberlite/kimberlite.pdf. |
| Stakutis, C., “Benefits of SAN-based file system sharing,” Jul. 2000, pp. 1-4, InfoStor, www.infostor.com, last accessed on Dec. 30, 2002. |
| Thekkath et al., “Frangipani: A Scalable Distributed File System,” in Proceedings of the 16th ACM Symposium on Operating Systems Principles, Oct. 1997, pp. 1-14, Association for Computing Machinery, Inc. |
| Tulloch, Mitch, “Microsoft Encyclopedia of Security,” 2003, pp. 218, 300-301, Microsoft Press, Redmond, Washington. |
| Uesugi, H., Nov. 26, 2008 amendment filed by Japanese associate in response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. |
| Uesugi, H., English translation of office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. |
| Uesugi, H., Jul. 15, 2008 letter from Japanese associate reporting office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. |
| “VERITAS SANPoint Foundation Suite(tm) and SANPoint Foundation Suite(tm) HA: New VERITAS Volume Management and File System Technology for Cluster Environments,” Sep. 2001, VERITAS Software Corp. |
| Wilkes, J., et al., “The HP AutoRAID Hierarchical Storage System,” Feb. 1996, vol. 14, No. 1, ACM Transactions on Computer Systems. |
| “Windows Clustering Technologies—An Overview,” Nov. 2001, Microsoft Corp., www.microsoft.com, last accessed on Dec. 30, 2002. |
| Zayas, E., “AFS-3 Programmer's Reference: Architectural Overview,” Transarc Corp., version 1.0 of Sep. 2, 1991, doc. No. FS-00-D160. |